Stuttgart Mühlhausen, Germany

Kipp A.,High Performance Computing Center Stuttgart | Jiang T.,High Performance Computing Center Stuttgart | Fugini M.,Polytechnic of Milan | Salomie I.,Technical University of Cluj Napoca
Future Generation Computer Systems | Year: 2012

This paper presents a novel approach to characterising energy consumption in IT service centres that encompasses the whole system view, from the IT infrastructure to the applications and their design and execution environments. The approach is based on a set of energy-related metrics called Green Performance Indicators (GPIs), which enable monitoring the energy consumption of the IT service centre in terms of computing resource usage, impact on the environment, the costs/effort required to develop and redesign applications, and the reconfiguration of the IT infrastructure. Our approach, which is framed within the EU project GAMES [15] on green IT, defines four layers of GPIs that allow for a comprehensive description of energy consumption at the various system levels, in relation to a given Quality of Service (QoS), to the impact on the environment (consumables, power, emissions, cooling systems, and so on) and to high-level business objectives. These GPI metrics are defined to measure the "greenness" of a computing environment and, more precisely, to detect where the IT service centre wastes energy. Monitoring and evaluating the GPIs allows the introduction of energy-efficiency measures through interventions at various levels (e.g., improving IT resource usage or changing the QoS), enabling adaptation and optimisation of the corresponding execution modes. © 2011 Elsevier B.V. All rights reserved.
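The four GPI layers themselves are defined in the paper; purely as an illustration of how such indicators could be computed from monitoring data, the following Python sketch derives two hypothetical resource-usage metrics (the class, field and function names are ours, not the paper's):

from dataclasses import dataclass

@dataclass
class Sample:
    cpu_utilisation: float   # fraction of CPU actually used, 0..1
    power_watts: float       # instantaneous power draw of the node
    served_requests: int     # requests completed in the interval
    interval_s: float        # length of the monitoring interval in seconds

def resource_usage_gpi(samples):
    # Average CPU utilisation: low values hint at idle, energy-wasting nodes.
    return sum(s.cpu_utilisation for s in samples) / len(samples)

def energy_per_request_gpi(samples):
    # Joules spent per served request: an application-level efficiency indicator.
    energy_j = sum(s.power_watts * s.interval_s for s in samples)
    requests = sum(s.served_requests for s in samples)
    return energy_j / max(requests, 1)

samples = [Sample(0.35, 180.0, 1200, 60.0), Sample(0.80, 240.0, 4100, 60.0)]
print(resource_usage_gpi(samples), energy_per_request_gpi(samples))

Indicators of this kind would correspond to the resource-usage side of the GPI hierarchy described above, while the other layers capture environmental impact, development/redesign effort and business-level objectives.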


Zhu Q.,State University of New York at Stony Brook | Oganov A.R.,State University of New York at Stony Brook | Oganov A.R.,Moscow State University | Glass C.W.,High Performance Computing Center Stuttgart | Stokes H.T.,Brigham Young University
Acta Crystallographica Section B: Structural Science | Year: 2012

Evolutionary crystal structure prediction has proved to be a powerful approach for studying a wide range of materials. Here we present a specifically designed algorithm for predicting the structure of complex crystals consisting of well-defined molecular units. The main feature of this new approach is that each unit is treated as a whole body, which drastically reduces the search space and improves the efficiency, but necessitates the introduction of the new variation operators described here. To increase the diversity of the population of structures, the initial population and part (20%) of the new generations are produced using space-group symmetry combined with random cell parameters and random positions and orientations of the molecular units. We illustrate the efficiency and reliability of this approach by a number of tests (ice, ammonia, carbon dioxide, methane, benzene, glycine and butane-1,4-diammonium dibromide). This approach easily predicts the crystal structure of methane A, which contains 21 methane molecules (105 atoms) per unit cell. We demonstrate that the new approach also has high potential for the study of complex inorganic crystals, as shown for the complex hydrogen storage material Mg(BH4)2 and elemental boron. © 2012 International Union of Crystallography, Printed in Singapore. All rights reserved.
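As a rough illustration of the rigid-body treatment described above (our own simplified sketch, not the authors' code), the following Python snippet represents a molecular unit by its atomic coordinates, rotates it as a whole with a random orientation and places it at a random position inside a unit cell; numpy and scipy are assumed to be available, and the water-like geometry is invented for the example:

import numpy as np
from scipy.spatial.transform import Rotation

def place_molecule(atom_coords, cell, rng):
    # Treat the unit as a rigid body: rotate all atoms together, then translate.
    centred = atom_coords - atom_coords.mean(axis=0)
    rotated = centred @ Rotation.random().as_matrix().T
    origin = rng.random(3) @ cell      # random fractional position -> Cartesian
    return rotated + origin

rng = np.random.default_rng(42)
water = np.array([[0.00, 0.00, 0.00],      # O
                  [0.96, 0.00, 0.00],      # H
                  [-0.24, 0.93, 0.00]])    # H  (illustrative coordinates)
cell = np.diag([10.0, 10.0, 10.0])         # simple cubic cell for the example
print(place_molecule(water, cell, rng))

Because a position and an orientation replace the individual atomic coordinates as search variables, the dimensionality of the search space drops sharply, which is exactly the efficiency gain the paper exploits; the space-group-constrained generation of structures adds symmetry on top of this representation.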


Cheptsov A.,High Performance Computing Center Stuttgart
International Journal of Computer Science and Applications | Year: 2011

The latest advances in the Semantic Web community have yielded a variety of reasoning methods used to process and exploit semantically annotated data. However, most of those methods have only been proven for small, closed, trustworthy, consistent and static domains. In recent years, the Semantic Web has been facing significant challenges arising from the emergence of Internet-scale, data-intensive application use cases. There remains a deep mismatch between the requirements of reasoning on a Web scale and the existing reasoning algorithms, which are efficient only over restricted subsets. This paper discusses the Large Knowledge Collider (LarKC), a platform focused on supporting large-scale reasoning over billions of structured data items in heterogeneous data sets. The architecture of LarKC allows for an effective combination of techniques from different Semantic Web domains by following a service-oriented approach, backed by sustainable infrastructure solutions. © Technomathematics Research Foundation.


Cheptsov A.,High Performance Computing Center Stuttgart
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2015

Reasoning is one of the essential application areas of the modern Semantic Web. At present, semantic reasoning algorithms face significant challenges when dealing with the emergence of Internet-scale knowledge bases comprising extremely large amounts of data. Traditional reasoning approaches have only been proven for small, closed, trustworthy, consistent, coherent and static data domains. As such, they are not well suited to data-intensive applications aiming at the Internet scale. We introduce the Large Knowledge Collider as a platform solution that leverages the service-oriented approach to implement a new reasoning technique capable of dealing with the exploding volumes of the rapidly growing data universe, and of taking advantage of large-scale, on-demand elastic infrastructures such as high performance computing and cloud technology. © Springer-Verlag Berlin Heidelberg 2015.


Cheptsov A.,High Performance Computing Center Stuttgart
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2015

The Large Knowledge Collider (LarKC) is a prominent development platform for Semantic Web reasoning applications. Guided by the original goal of facilitating incomplete reasoning, LarKC has evolved into a unique platform that can be used for the development of robust, flexible and efficient Semantic Web applications, also leveraging modern grid and cloud resources. In response to the numerous requests coming from the rapidly growing LarKC user community, we have set up a demonstration package for LarKC that presents its main subsystems, development tools and graphical user interfaces. The demo targets both early adopters and experienced users and serves the purpose of promoting Semantic Web reasoning and LarKC technologies to potential new user communities. © Springer-Verlag Berlin Heidelberg 2015.


Cheptsov A.,High Performance Computing Center Stuttgart
ACM International Conference Proceeding Series | Year: 2014

Current IT technologies have a strong need to scale high-performance analysis up to large-scale datasets. The volume and complexity of data gathered in both public (such as on the web) and enterprise (e.g., digitalised internal document bases) domains have increased tremendously over the last few years, posing new challenges to providers of high performance computing (HPC) infrastructures; in the community this is recognised as the Big Data problem. In contrast to typical HPC applications, Big Data applications are not oriented towards reaching the peak performance of the infrastructure and thus offer more opportunities for the "capacity" infrastructure model rather than the "capability" one, making the use of Cloud infrastructures preferable over HPC. However, considering the increasingly vanishing difference between these two infrastructure types, i.e. Cloud and HPC, it makes sense to investigate the ability of traditional HPC infrastructures to execute Big Data applications as well, despite their relatively poor efficiency compared with traditional, highly optimised HPC applications. This paper discusses the main state-of-the-art parallelisation techniques utilised in both the Cloud and HPC domains and evaluates them on an exemplary text processing application on a testbed HPC cluster. © ACM 2014.
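The paper evaluates such parallelisation techniques on a text processing application; as a minimal sketch of what an MPI-based text analysis can look like (our own mpi4py example with a hypothetical input file, not the paper's code), consider a distributed word count:

from collections import Counter
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Rank 0 reads the corpus and scatters roughly equal shares of lines.
if rank == 0:
    with open("corpus.txt", encoding="utf-8") as f:   # hypothetical input file
        lines = f.readlines()
    chunks = [lines[i::size] for i in range(size)]
else:
    chunks = None
local_lines = comm.scatter(chunks, root=0)

# Each rank counts words in its own share of the text.
local_counts = Counter(word for line in local_lines for word in line.split())

# Gather the partial counters and combine them on rank 0.
partials = comm.gather(local_counts, root=0)
if rank == 0:
    total = sum(partials, Counter())
    print(total.most_common(10))

Launched with, for example, mpirun -np 8 python wordcount.py, this data-parallel pattern illustrates how a "capacity"-style Big Data workload can be mapped onto an HPC cluster even though it never approaches the machine's peak floating-point performance.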


Schneider R.,High Performance Computing Center Stuttgart
High Performance Computing on Vector Systems 2010 | Year: 2010

In this work, the effort necessary to derive a linear elastic, inhomogeneous, anisotropic material model for cancellous bone from micro-computer-tomographic data via direct mechanical simulations is analysed. First, a short introduction to the background of biomechanical simulations of bone-implant systems is given, along with some theoretical background on the direct mechanics approach. The implementations of the individual parts of the simulation process chain are then presented and analysed with respect to their resource requirements in terms of CPU time and the amount of I/O data per core per second. Finally, the data from the analysis of the individual implementations are collected and extrapolated from the test cases they were derived from to an application of the technique to a complete human femur. © Springer-Verlag Berlin Heidelberg 2010.


Cheptsov A.,High Performance Computing Center Stuttgart
CEUR Workshop Proceedings | Year: 2014

With billions of triples in the Linked Open Data cloud, which continues to grow exponentially, challenging tasks are emerging related to the exploitation of, and reasoning over, Web data. A considerable amount of work has been done on using Information Retrieval (IR) methods to address these problems. However, although the applied models work at Web scale, they downgrade the semantics contained in an RDF graph by treating each physical resource as a 'bag of words (URIs/literals)'. Distributional statistics methods can address this problem by capturing the structure of the graph more effectively, but these methods are computationally expensive. In this paper, we describe the parallelisation algorithm of one such method (Random Indexing), based on the Message-Passing Interface technology. Our evaluation results show superlinear improvement.
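The parallelisation itself is described in the paper; the sketch below is only our simplified, mpi4py-based rendition of the general idea, in which each rank processes a slice of the corpus and the per-term context vectors are then summed across ranks (dimensionality, corpus and function names are invented for the example):

import zlib
import numpy as np
from mpi4py import MPI

DIM, NNZ = 512, 8            # dimensionality and non-zero count of index vectors

def index_vector(term, dim=DIM, nnz=NNZ):
    # Deterministic sparse ternary index vector derived from the term itself,
    # so every rank generates the same vector without communication.
    rng = np.random.default_rng(zlib.crc32(term.encode()))
    vec = np.zeros(dim)
    pos = rng.choice(dim, size=nnz, replace=False)
    vec[pos] = rng.choice([-1.0, 1.0], size=nnz)
    return vec

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

corpus = ["semantic web reasoning", "large scale rdf data", "random indexing"]
vocab = sorted({w for doc in corpus for w in doc.split()})
term_id = {w: i for i, w in enumerate(vocab)}

# Each rank accumulates context vectors only for its own slice of documents.
context = np.zeros((len(vocab), DIM))
for doc in corpus[rank::size]:
    words = doc.split()
    for w in words:
        for other in words:
            if other != w:
                context[term_id[w]] += index_vector(other)

# Sum the partial context matrices across all ranks.
total = np.zeros_like(context)
comm.Allreduce(context, total, op=MPI.SUM)
if rank == 0:
    print(total.shape)

Because the per-document work is embarrassingly parallel and only one reduction of the context matrix is needed at the end, communication cost stays small relative to computation, which helps explain why this method scales well when parallelised.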


Kopecki A.,High Performance Computing Center Stuttgart
WMSCI 2011 - The 15th World Multi-Conference on Systemics, Cybernetics and Informatics, Proceedings | Year: 2011

When working collaboratively with others, it is often difficult to bring existing applications into the collaboration process. This paper presents an approach for enabling different applications to work collaboratively. It allows a user to do three things. First, the user can work collaboratively with the application of their choice, selecting the applications that best fit the needs of the scenario and that they are comfortable using. Second, the user can work in the environment of their choice, such as a Virtual Reality Environment or a mobile device, even if the application was not specifically designed for that environment. Third, the technology presented makes it possible to mesh applications to gain new functionality not found in the original applications, by connecting those applications and making them interoperable. The use and fitness of this approach are demonstrated with a Virtual Reality Environment and a standard office application. It should be noted that the work underlying this paper is not specifically about multimodal usage of Virtual Environments, although it is used that way here; rather, it presents a concept for meshing application capabilities to implement "Meta-Applications" that offer functionality beyond their original design.


Cheptsov A.,High Performance Computing Center Stuttgart
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2013

High performance is a key requirement for a number of Semantic Web applications. Reasoning over large and extra-large data is the most challenging class of Semantic Web applications in terms of performance demands. For such problems, parallelisation is a key technology for achieving high performance and scalability. However, developing parallel applications is still a challenge for the majority of Semantic Web algorithms due to their implementation in Java, a programming language whose design does not allow porting to High Performance Computing infrastructures to be trivially achieved. The Message-Passing Interface (MPI) is a well-known programming paradigm, applied beneficially to scientific applications written in the "traditional" programming languages for High Performance Computing, such as C, C++ and Fortran. We describe an MPI-based approach for implementing parallel Semantic Web applications and evaluate the performance of a pilot semantic statistics application: random indexing over large text volumes. © 2013 Springer-Verlag.
