High Performance Computing Center Stuttgart

Stuttgart Mühlhausen, Germany


News Article | May 15, 2017
Site: www.eurekalert.org

The Gauss Centre for Supercomputing (GCS) approved 30 large-scale projects during the 17th call for large-scale proposals, set to run from May 1, 2017 to April 30, 2018. Combined, these projects received 2.1 billion core-hours, marking the highest total ever delivered by the three GCS centres -- the High Performance Computing Center Stuttgart (HLRS), Jülich Supercomputing Centre (JSC), and Leibniz Supercomputing Centre of the Bavarian Academy of Sciences and Humanities (LRZ). In addition to delivering record-breaking allocation time, GCS also broke records in proposals received and number of allocations awarded.

GCS awards large-scale allocations to researchers studying earth and climate sciences, chemistry, particle physics, materials science, astrophysics, and scientific engineering, among other research areas of great importance to society. Of the 30 projects, four were granted allocations exceeding 100 million core-hours -- another first for GCS -- speaking to users' increasingly strong command of making the best possible use of the various GCS centres' flagship supercomputers.

"As we continue to provide world-class computing resources and user support at our three GCS facilities, our user base continues to expand based on the wide variety of excellent proposals we receive during each successive large-scale call," said Dr. Dietmar Kröner, University of Freiburg professor and Chairman of the GCS Scientific Steering Committee. "We have tough decisions to make, as we only have so many core-hours per year, and the proposals continue to get better each year."

Several of the largest allocations are benefiting from the variety of architectures offered through the three GCS centres. A team led by Dr. Matthias Meinke of RWTH Aachen University received a total of 335 million core-hours -- 250 million at HLRS and 85 million at JSC -- for a project dedicated to understanding turbulence, one of the last major unsolved fluid dynamics problems. The team studies turbulence as it relates to jet engine dynamics, and its research is focused on creating quieter, safer, more fuel-efficient jet engines.

A team of astrophysicists led by Dr. Hans-Thomas Janka of the Max Planck Institute for Astrophysics was granted 120 million core-hours on LRZ's SuperMUC system to simulate supernovas -- the death and explosion of stars, and one of the main ways that heavy elements travel across the universe. In previous allocations, the team was able to create one of the first first-principles simulations of a 3D supernova, and it plans to expand its research to more accurately understand the volatile, dynamic processes that govern the formation of neutrinos and gravitational waves occurring after a supernova.

Supercomputing has become an indispensable tool in studying the smallest, most fundamental building blocks of matter known to man -- the quarks and gluons that make up protons and neutrons, and, in turn, our world. A research group based at the Department of Theoretical Physics at the University of Wuppertal is benefiting from two separate allocations -- one of which uses both HLRS and JSC resources, while the other is based solely at JSC -- to more deeply understand the mysterious subatomic world. Dr. Szabolcs Borsányi leads a project aiming to make the first-ever estimate of the shear viscosity of the quark-gluon plasma -- a novel state of matter that exists only at extremely high temperatures, making it very hard to study experimentally. The project was granted 35 million core-hours on JSC's JUQUEEN. Prof. Dr. Zoltán Fodor was granted 130 million core-hours on HLRS's Hazel Hen and 78 million core-hours on JUQUEEN to support large-scale international experimental work being done at the Large Hadron Collider in Switzerland and the Relativistic Heavy Ion Collider in the United States. The team uses HPC to more fully understand phase transitions within quantum chromodynamics -- the behaviour of subatomic particles under extreme pressure or temperature conditions.

For a complete list of projects, please visit: http://www.

About GCS Large-Scale Projects: In accordance with the GCS mission, all researchers in Germany are eligible to apply for computing time on the petascale HPC systems of Germany's leading supercomputing institution. Projects are classified as "large-scale" if they are allocated more than 35 million core-hours in a given year at a GCS member centre's high-end system. Computing time is allocated by the GCS Scientific Steering Committee to ground-breaking projects that seek solutions to long-standing, complex science and engineering problems that cannot be solved without access to world-leading computing systems. The projects are evaluated through a strict peer-review process on the basis of their scientific and technical excellence. More information on the application process for a large-scale project can be found at: http://www.

About GCS: The Gauss Centre for Supercomputing (GCS) combines the three German national supercomputing centres HLRS (High Performance Computing Center Stuttgart), JSC (Jülich Supercomputing Centre), and LRZ (Leibniz Supercomputing Centre, Garching near Munich) into Germany's integrated Tier-0 supercomputing institution. Together, the three centres provide the largest, most powerful supercomputing infrastructure in all of Europe to serve a wide range of academic and industrial research activities in various disciplines. They also provide top-tier training and education for the national as well as the European High Performance Computing (HPC) community. GCS is the German member of PRACE (Partnership for Advanced Computing in Europe), an international non-profit association consisting of 24 member countries, whose representative organizations create a pan-European supercomputing infrastructure, providing access to computing and data management resources and services for large-scale scientific and engineering applications at the highest performance level. GCS is jointly funded by the German Federal Ministry of Education and Research and the federal states of Baden-Württemberg, Bavaria, and North Rhine-Westphalia. It is headquartered in Berlin, Germany.


Kipp A.,High Performance Computing Center Stuttgart | Jiang T.,High Performance Computing Center Stuttgart | Fugini M.,Polytechnic of Milan | Salomie I.,Technical University of Cluj Napoca
Future Generation Computer Systems | Year: 2012

This paper presents a novel approach to characterising energy consumption in IT service centres that takes a whole-system view, from the IT infrastructure to the applications and their design and execution environments. The approach is based on a set of energy-related metrics called Green Performance Indicators (GPIs), which enable monitoring the energy consumption level of the IT service centre in terms of computing resource usage, impact on the environment, and the costs/effort required to develop and redesign applications and to reconfigure the IT infrastructure. Our approach, which is framed within the EU green-IT project GAMES [15], defines four layers of GPIs that allow for a comprehensive description of energy consumption at various system levels, in relation to a given Quality of Service (QoS), to the impact on the environment (consumables, power, emissions, cooling systems, and so on), and to high-level business objectives. These GPI metrics are defined to measure the "greenness" of a computing environment and, more precisely, to detect where the IT service centre wastes energy. The monitoring and evaluation of GPIs allows the introduction of energy efficiency measures through interventions at various levels (e.g., improving IT resource usage or changing the QoS), allowing the corresponding execution modes to be adapted and optimised. © 2011 Elsevier B.V. All rights reserved.
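The abstract is prose-only, but the layered-GPI idea can be sketched in a few lines of Python. The indicator names, the sampling fields, and the aggregation weights below are illustrative assumptions, not metrics defined by the GAMES project.

```python
from dataclasses import dataclass

@dataclass
class GPISample:
    """One monitoring sample for a service (field names and units are illustrative)."""
    cpu_utilisation: float   # fraction of allocated CPU actually used, 0..1
    power_draw_w: float      # measured node power draw in watts
    transactions: int        # useful work completed in the sampling window
    window_s: float          # length of the sampling window in seconds

def it_resource_gpi(s: GPISample) -> float:
    """IT-resource-usage layer: how much of the reserved capacity does useful work."""
    return s.cpu_utilisation

def energy_impact_gpi(s: GPISample) -> float:
    """Environmental layer: energy (joules) spent per completed transaction."""
    joules = s.power_draw_w * s.window_s
    return joules / max(s.transactions, 1)

def greenness_score(s: GPISample, max_j_per_tx: float = 50.0) -> float:
    """Toy aggregation of two GPI layers into a single 0..1 'greenness' score.

    A low score flags where the service centre wastes energy, which is the
    trigger for adaptation actions (e.g. consolidating workloads or relaxing QoS).
    """
    efficiency = 1.0 - min(energy_impact_gpi(s) / max_j_per_tx, 1.0)
    return 0.5 * it_resource_gpi(s) + 0.5 * efficiency

if __name__ == "__main__":
    sample = GPISample(cpu_utilisation=0.35, power_draw_w=220.0,
                       transactions=1200, window_s=60.0)
    print(f"energy per transaction: {energy_impact_gpi(sample):.1f} J")
    print(f"greenness score: {greenness_score(sample):.2f}")
```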


Zhu Q.,State University of New York at Stony Brook | Oganov A.R.,State University of New York at Stony Brook | Oganov A.R.,Moscow State University | Glass C.W.,High Performance Computing Center Stuttgart | Stokes H.T.,Brigham Young University
Acta Crystallographica Section B: Structural Science | Year: 2012

Evolutionary crystal structure prediction has proved to be a powerful approach for studying a wide range of materials. Here we present a specifically designed algorithm for predicting the structure of complex crystals consisting of well-defined molecular units. The main feature of this new approach is that each unit is treated as a whole body, which drastically reduces the search space and improves the efficiency, but necessitates the introduction of new variation operators described here. To increase the diversity of the population of structures, the initial population and part (20%) of the new generations are produced using space-group symmetry combined with random cell parameters, and random positions and orientations of molecular units. We illustrate the efficiency and reliability of this approach by a number of tests (ice, ammonia, carbon dioxide, methane, benzene, glycine and butane-1,4-diammonium dibromide). This approach easily predicts the crystal structure of methane A containing 21 methane molecules (105 atoms) per unit cell. We demonstrate that this new approach also has a high potential for the study of complex inorganic crystals, as shown on the examples of the complex hydrogen storage material Mg(BH4)2 and elemental boron. © 2012 International Union of Crystallography. Printed in Singapore - all rights reserved.
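As a rough illustration of the rigid-unit idea and the 20% random-structure injection described above, here is a minimal Python sketch. The class and function names are hypothetical, and the space-group symmetrisation and real variation operators are reduced to placeholders; this is not the authors' code.

```python
import random
from dataclasses import dataclass, field

@dataclass
class MolecularUnit:
    """A whole molecule described only by its placement, not per-atom coordinates.

    Treating the unit as a rigid body means three centre-of-mass coordinates plus
    an orientation instead of 3N atomic coordinates, which shrinks the search space.
    """
    centre: tuple          # fractional coordinates (x, y, z) in the cell
    orientation: tuple     # e.g. Euler angles in radians

@dataclass
class Structure:
    cell: tuple            # lattice parameters a, b, c, alpha, beta, gamma
    units: list = field(default_factory=list)
    energy: float = None   # would be filled in by an external relaxation code

def random_symmetric_structure(n_units: int) -> Structure:
    """Random cell plus random positions/orientations of whole molecular units.

    In the paper these candidates are additionally constrained by a randomly
    chosen space group; that symmetrisation step is omitted here.
    """
    cell = tuple(random.uniform(3.0, 15.0) for _ in range(3)) + (90.0, 90.0, 90.0)
    units = [MolecularUnit(centre=tuple(random.random() for _ in range(3)),
                           orientation=tuple(random.uniform(0, 6.283) for _ in range(3)))
             for _ in range(n_units)]
    return Structure(cell=cell, units=units)

def next_generation(parents: list, size: int, random_fraction: float = 0.20) -> list:
    """Build a new generation: ~80% from variation operators applied to parents,
    ~20% fresh random symmetric structures to keep the population diverse."""
    n_random = int(size * random_fraction)
    children = [random_symmetric_structure(len(parents[0].units)) for _ in range(n_random)]
    while len(children) < size:
        # Placeholder 'heredity' operator: the real operators recombine lattices
        # and whole molecular units taken from two parent structures.
        children.append(random.choice(parents))
    return children
```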


Cheptsov A.,High Performance Computing Center Stuttgart
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2015

Reasoning is one of the essential application areas of the modern Semantic Web. At present, semantic reasoning algorithms face significant challenges when dealing with the emergence of Internet-scale knowledge bases comprising extremely large amounts of data. Traditional reasoning approaches have only been proven for small, closed, trustworthy, consistent, coherent and static data domains. As such, they are not well suited to data-intensive applications aiming at the Internet scale. We introduce the Large Knowledge Collider as a platform solution that leverages the service-oriented approach to implement a new reasoning technique, capable of dealing with the exploding volumes of the rapidly growing data universe while taking advantage of large-scale, on-demand elastic infrastructures such as high performance computing or cloud technology. © Springer-Verlag Berlin Heidelberg 2015.
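To make the service-oriented reasoning idea concrete, the sketch below composes a pipeline of independent stages over RDF-like triples, where each stage could in principle be deployed as a separate service on HPC or cloud resources. The stage names, signatures, and the toy inference rule are purely illustrative abstractions and do not reflect the actual LarKC plugin API.

```python
from typing import Callable, Iterable

# A triple is (subject, predicate, object); a stage maps triples to triples.
Triple = tuple
Stage = Callable[[Iterable], Iterable]

def identify(data: Iterable) -> Iterable:
    """Locate candidate statements relevant to the query (stand-in logic only)."""
    return (t for t in data if "ex:" in t[1])

def select(data: Iterable) -> Iterable:
    """Sample a tractable subset instead of reasoning over everything, which is
    the 'incomplete reasoning' idea behind the platform."""
    return list(data)[:1000]

def reason(data: Iterable) -> Iterable:
    """Apply a single toy inference rule to the selected subset."""
    return [(s, "ex:inferred", o) for s, _, o in data]

def run_pipeline(data: Iterable, stages: list) -> Iterable:
    for stage in stages:        # each stage could be a remote service call
        data = stage(data)
    return data

result = run_pipeline([("ex:a", "ex:p", "ex:b")], [identify, select, reason])
print(list(result))
```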


Cheptsov A.,High Performance Computing Center Stuttgart
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2015

The Large Knowledge Collider (LarKC) is a prominent development platform for Semantic Web reasoning applications. Guided by the initial goal of facilitating incomplete reasoning, LarKC has evolved into a unique platform that can be used to develop robust, flexible, and efficient Semantic Web applications, also leveraging modern grid and cloud resources. In response to numerous requests from LarKC's rapidly growing user community, we have set up a demonstration package that presents the platform's main subsystems, development tools and graphical user interfaces. The demo is aimed at both early adopters and experienced users and serves to promote Semantic Web reasoning and LarKC technologies to potential new user communities. © Springer-Verlag Berlin Heidelberg 2015.


Cheptsov A.,High Performance Computing Center Stuttgart
ACM International Conference Proceeding Series | Year: 2014

Current IT technologies have a strong need to scale high-performance analysis up to large-scale datasets. The volume and complexity of data gathered in both public (such as on the web) and enterprise (e.g. digitised internal document bases) domains have increased tremendously over the last few years, posing new challenges to providers of high performance computing (HPC) infrastructures, a development recognised in the community as the Big Data problem. In contrast to typical HPC applications, Big Data applications are not oriented towards reaching the peak performance of the infrastructure and thus lend themselves to the "capacity" infrastructure model rather than the "capability" one, making Cloud infrastructures preferable to HPC. However, given the steadily vanishing difference between these two infrastructure types, it makes sense to investigate the ability of traditional HPC infrastructures to execute Big Data applications as well, despite their relatively poor efficiency compared with traditional, highly optimised HPC applications. This paper discusses the main state-of-the-art parallelisation techniques utilised in both the Cloud and HPC domains and evaluates them on an exemplary text processing application on a testbed HPC cluster. © ACM 2014.
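As an illustration of the message-passing style of text processing evaluated in the paper, the following is a minimal word-count sketch using mpi4py. The corpus file name and the chunking scheme are assumptions; this is not the paper's benchmark code.

```python
# Minimal MPI word-count sketch: scatter corpus chunks, count locally, reduce.
# Run with, for example: mpiexec -n 8 python wordcount.py
from collections import Counter
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

if rank == 0:
    with open("corpus.txt", encoding="utf-8") as f:   # assumed input file
        lines = f.readlines()
    # Split the corpus into one roughly equal chunk per MPI process.
    chunks = [lines[i::size] for i in range(size)]
else:
    chunks = None

# Each process receives its chunk and counts words locally.
local_lines = comm.scatter(chunks, root=0)
local_counts = Counter(word for line in local_lines for word in line.split())

# Reduce the partial counters to a global histogram on rank 0.
global_counts = comm.reduce(local_counts, op=MPI.SUM, root=0)

if rank == 0:
    print(global_counts.most_common(10))
```

The same scatter/compute/reduce shape carries over from this toy example to richer text analytics; only the per-chunk computation changes.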


Schneider R.,High Performance Computing Center Stuttgart
High Performance Computing on Vector Systems 2010 | Year: 2010

In this work, the effort necessary to derive a linear elastic, inhomogeneous, anisotropic material model for cancellous bone from micro-computer-tomographic data via direct mechanical simulations is analyzed. First, a short introduction to the background of biomechanical simulations of bone-implant systems is given, along with some theoretical background on the direct mechanics approach. The implementations of the individual parts of the simulation process chain are presented and analyzed with respect to their resource requirements in terms of CPU time and the amount of I/O data per core and second. As the result of this work, the data from the analysis of the individual implementations are collected and extrapolated from the test cases they were derived from to an application of the technique to a complete human femur. © Springer-Verlag Berlin Heidelberg 2010.
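The extrapolation step described at the end of the abstract is, at its core, simple scaling arithmetic: measure per-subvolume cost on a small test case, then scale to a full femur. The sketch below shows the shape of such an estimate; every number in it is an invented placeholder rather than a value measured in the paper.

```python
# Toy extrapolation of per-subvolume measurements to a whole-femur run.
# All inputs are invented placeholders, not values from the paper.

test_subvolumes = 64            # micro-CT subvolumes in the small test case (assumed)
cpu_hours_per_subvolume = 1.5   # measured CPU time per subvolume (assumed)
io_mb_per_core_second = 0.05    # measured I/O rate per core (assumed)

femur_subvolumes = 250_000      # subvolumes needed to cover a whole femur (assumed)
cores = 4096                    # target core count on the HPC system (assumed)

total_cpu_hours = femur_subvolumes * cpu_hours_per_subvolume
wall_hours = total_cpu_hours / cores
total_io_gb = io_mb_per_core_second * cores * wall_hours * 3600 / 1024

print(f"estimated CPU time : {total_cpu_hours:,.0f} core-hours")
print(f"estimated wall time: {wall_hours:,.1f} hours on {cores} cores")
print(f"estimated I/O      : {total_io_gb:,.0f} GB moved over the run")
```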


Cheptsov A.,High Performance Computing Center Stuttgart
CEUR Workshop Proceedings | Year: 2014

With billions of triples in the Linked Open Data cloud, which continues to grow exponentially, challenging tasks are emerging related to the exploitation of and reasoning over Web data. A considerable amount of work has been done on using Information Retrieval (IR) methods to address these problems. However, although the applied models work at Web scale, they downgrade the semantics contained in an RDF graph by treating each physical resource as a 'bag of words (URIs/literals)'. Distributional statistics methods can address this problem by capturing the structure of the graph more efficiently; however, these methods are computationally expensive. In this paper, we describe a parallelization algorithm for one such method (Random Indexing) based on the Message-Passing Interface technology. Our evaluation results show super-linear improvement.
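A hedged sketch of the parallelisation idea: each MPI rank builds context vectors for its share of the corpus by adding sparse random index vectors of co-occurring terms, and the partial context vectors are then summed across ranks. The dimensionality, window size, seeding scheme, and toy corpus below are illustrative choices, not the configuration used in the paper.

```python
# MPI-parallel Random Indexing sketch (mpi4py + numpy).
# Run with, for example: mpiexec -n 4 python random_indexing.py
import zlib
import numpy as np
from mpi4py import MPI

DIM, NONZERO = 512, 8   # index-vector dimensionality and number of +/-1 entries (assumed)

def index_vector(term: str) -> np.ndarray:
    """Deterministic sparse ternary vector for a term (identical on every rank)."""
    rng = np.random.default_rng(zlib.crc32(term.encode("utf-8")))
    v = np.zeros(DIM)
    pos = rng.choice(DIM, size=NONZERO, replace=False)
    v[pos] = rng.choice([-1.0, 1.0], size=NONZERO)
    return v

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

corpus = ["the cat sat on the mat", "the dog sat on the rug",
          "a cat and a dog played"]          # toy stand-in for a large text corpus
my_docs = corpus[rank::size]                 # cyclic split of documents over ranks

vocab = sorted({w for doc in corpus for w in doc.split()})
local = {w: np.zeros(DIM) for w in vocab}
for doc in my_docs:
    words = doc.split()
    for i, w in enumerate(words):
        # Add index vectors of terms in a +/-2 word context window.
        for ctx in words[max(0, i - 2):i] + words[i + 1:i + 3]:
            local[w] += index_vector(ctx)

# Sum the partial context vectors element-wise across all ranks.
stacked = np.stack([local[w] for w in vocab])
global_vectors = comm.allreduce(stacked, op=MPI.SUM)

if rank == 0:
    cat, dog = global_vectors[vocab.index("cat")], global_vectors[vocab.index("dog")]
    sim = float(cat @ dog / (np.linalg.norm(cat) * np.linalg.norm(dog) + 1e-9))
    print(f"cosine(cat, dog) = {sim:.2f}")
```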


Kopecki A.,High Performance Computing Center Stuttgart
WMSCI 2011 - The 15th World Multi-Conference on Systemics, Cybernetics and Informatics, Proceedings | Year: 2011

When working collaboratively with others, it is often difficult to bring existing applications into the collaboration process. In this paper, an approach is shown that enables different applications to work together collaboratively. It enables a user to do three things. First, to work collaboratively with the application of choice, selecting the applications that best fit the needs of the scenario and that the user is comfortable with. Second, to work in the environment of choice, such as Virtual Reality Environments or mobile devices, even if the application is not specifically designed for that environment. Third, the technology presented makes it possible to mesh applications, gaining new functionality not found in the original applications by connecting them and making them interoperable. The use and fitness of this approach is demonstrated with a Virtual Reality Environment and a standard office application. It should be noted that the work underlying this paper is not specifically about multimodal usage of Virtual Environments, although it is used that way here, but rather about a concept for meshing application capabilities to implement "Meta-Applications" that offer functionality beyond their original design.


Cheptsov A.,High Performance Computing Center Stuttgart
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2013

High performance is a key requirement for a number of Semantic Web applications. Reasoning over large and extra-large data is the most challenging class of Semantic Web applications in terms of performance demands. For such problems, parallelization is a key technology for achieving high performance and scalability. However, the development of parallel applications is still a challenge for the majority of Semantic Web algorithms due to their implementation in Java, a programming language whose design does not allow porting to High Performance Computing infrastructures to be achieved trivially. The Message-Passing Interface (MPI) is a well-known programming paradigm, applied beneficially to scientific applications written in the "traditional" programming languages for High Performance Computing, such as C, C++ and Fortran. We describe an MPI-based approach for implementing parallel Semantic Web applications and evaluate the performance of a pilot semantic statistics application, random indexing over large text volumes. © 2013 Springer-Verlag.
