Zhu Q., State University of New York at Stony Brook | Oganov A.R., State University of New York at Stony Brook | Oganov A.R., Moscow State University | Glass C.W., High Performance Computing Center Stuttgart | Stokes H.T., Brigham Young University
Acta Crystallographica Section B: Structural Science | Year: 2012

Evolutionary crystal structure prediction has proved to be a powerful approach for studying a wide range of materials. Here we present a specifically designed algorithm for the prediction of the structure of complex crystals consisting of well-defined molecular units. The main feature of this new approach is that each unit is treated as a whole body, which drastically reduces the search space and improves the efficiency, but necessitates the introduction of the new variation operators described here. To increase the diversity of the population of structures, the initial population and part (20%) of the new generations are produced using space-group symmetry combined with random cell parameters, and random positions and orientations of the molecular units. We illustrate the efficiency and reliability of this approach by a number of tests (ice, ammonia, carbon dioxide, methane, benzene, glycine and butane-1,4-diammonium dibromide). This approach easily predicts the crystal structure of phase A of methane, containing 21 methane molecules (105 atoms) per unit cell. We demonstrate that this new approach also has high potential for the study of complex inorganic crystals, as shown on the examples of the complex hydrogen-storage material Mg(BH4)2 and elemental boron. © 2012 International Union of Crystallography. Printed in Singapore; all rights reserved.
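
To make the rigid-body idea concrete, the short Python sketch below (a toy illustration, not the authors' USPEX implementation; NumPy is assumed, and RigidMolecule and rotational_mutation are hypothetical names) stores a molecule as a centre of mass plus an orientation, so a variation operator re-orients the whole unit instead of displacing individual atoms:

import numpy as np

def random_rotation_matrix(rng):
    # Haar-uniform rotation via QR decomposition of a Gaussian matrix
    q, r = np.linalg.qr(rng.normal(size=(3, 3)))
    q = q * np.sign(np.diag(r))      # make the distribution uniform on O(3)
    if np.linalg.det(q) < 0:         # restrict to proper rotations, SO(3)
        q[:, 0] = -q[:, 0]
    return q

class RigidMolecule:
    """A molecular unit handled as a whole body: the intramolecular geometry
    is frozen; only the centre of mass and the orientation are variables."""
    def __init__(self, local_coords, centre, rotation):
        self.local = np.asarray(local_coords)  # atom positions, molecular frame
        self.centre = np.asarray(centre)       # fractional centre of mass
        self.rot = rotation                    # 3x3 orientation matrix

    def atoms(self, lattice):
        # absolute Cartesian positions: rotate the unit, shift to its centre
        return (self.rot @ self.local.T).T + self.centre @ lattice

def rotational_mutation(mol, rng):
    # variation operator acting on the unit as a whole: re-orient the molecule
    return RigidMolecule(mol.local, mol.centre,
                         random_rotation_matrix(rng) @ mol.rot)

rng = np.random.default_rng(0)
water = RigidMolecule([[0.0, 0.0, 0.0], [0.96, 0.0, 0.0], [-0.24, 0.93, 0.0]],
                      centre=[0.5, 0.5, 0.5], rotation=np.eye(3))
mutant = rotational_mutation(water, rng)
print(mutant.atoms(np.eye(3) * 10.0))  # positions in a 10 A cubic cell

Because the mutation touches three orientation parameters rather than nine atomic coordinates, the search space shrinks in exactly the way the abstract describes.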


Schneider R., High Performance Computing Center Stuttgart
High Performance Computing on Vector Systems 2010 | Year: 2010

In this work, the effort necessary to derive a linear elastic, inhomogeneous, anisotropic material model for cancellous bone from micro-computer-tomographic data via direct mechanical simulations is analyzed. First, a short introduction to the background of biomechanical simulations of bone-implant systems is given, along with some theoretical background on the direct mechanics approach. The implementations of the individual parts of the simulation process chain are presented and analyzed with respect to their resource requirements in terms of CPU time and the amount of I/O data per core per second. As the result of this work, the data from the analysis of the individual implementations are collected and extrapolated from the test cases they were derived from to an application of the technique to a complete human femur. © Springer-Verlag Berlin Heidelberg 2010.
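
As a rough illustration of that extrapolation step, the Python sketch below scales the CPU time measured on a small test case linearly with the number of sub-domains, while treating the per-core I/O rate as size-independent; every number is an invented placeholder, not a measurement from the paper:

# All figures below are assumed placeholders for illustration only.
test_subdomains = 1_000        # sub-domains resolved in the test case (assumed)
test_cpu_hours  = 50.0         # CPU hours consumed by the test case (assumed)
io_per_core_s   = 12.0         # MB of I/O per core per second (assumed rate)

femur_subdomains = 250_000     # sub-domains for a complete human femur (assumed)

# simplest possible model: CPU cost grows linearly with the domain count,
# while the per-core I/O rate stays constant
scale = femur_subdomains / test_subdomains
print(f"estimated CPU time : {test_cpu_hours * scale:,.0f} CPU hours")
print(f"sustained I/O rate : {io_per_core_s:.1f} MB per core per second")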


Cheptsov A., High Performance Computing Center Stuttgart
ACM International Conference Proceeding Series | Year: 2014

Current IT technologies have a strong need for scaling up high-performance analysis to large-scale datasets. The volume and complexity of data gathered in both public (such as on the web) and enterprise (e.g. digitalized internal document bases) domains have increased tremendously over the last few years, posing new challenges to providers of high performance computing (HPC) infrastructures; this is recognised in the community as the Big Data problem. In contrast to typical HPC applications, Big Data applications are not oriented towards reaching the peak performance of the infrastructure and thus offer more opportunities for the "capacity" infrastructure model than for the "capability" one, making the use of Cloud infrastructures preferable over HPC. However, considering the increasingly vanishing difference between these two infrastructure types, i.e. Cloud and HPC, it makes sense to investigate the ability of traditional HPC infrastructures to execute Big Data applications as well, despite their relatively poor efficiency compared with traditional, highly optimised HPC applications. This paper discusses the main state-of-the-art parallelisation techniques utilised in both the Cloud and HPC domains and evaluates them on an exemplary text processing application on a testbed HPC cluster. © ACM 2014.
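
For flavour, a minimal MPI-style text processing kernel of the kind such an evaluation might use is sketched below with mpi4py; the word-count task, the file name corpus.txt and the data split are assumptions, not details taken from the paper:

from collections import Counter
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

chunks = None
if rank == 0:
    with open("corpus.txt") as f:          # hypothetical input corpus
        lines = f.readlines()
    chunks = [lines[i::size] for i in range(size)]   # one chunk per rank

chunk = comm.scatter(chunks, root=0)       # distribute the chunks
local = Counter(word for line in chunk for word in line.split())

# Counter supports '+', so MPI.SUM merges the partial counts pairwise
total = comm.reduce(local, op=MPI.SUM, root=0)
if rank == 0:
    print(total.most_common(10))

Launched with, e.g., mpirun -np 4 python wordcount.py, the same script also runs unchanged on a Cloud-provisioned cluster, which is what makes such workloads a useful probe of both infrastructure types.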


Cheptsov A., High Performance Computing Center Stuttgart
CEUR Workshop Proceedings | Year: 2014

With billions of triples in the Linked Open Data cloud, which continues to grow exponentially, challenging tasks start to emerge related to the exploitation of and reasoning over Web data. A considerable amount of work has been done in the area of using Information Retrieval (IR) methods to address these problems. However, although the applied models work on the Web scale, they downgrade the semantics contained in an RDF graph by treating each physical resource as a 'bag of words (URIs/literals)'. Distributional statistics methods can address this problem by capturing the structure of the graph more efficiently. However, these methods are computationally expensive. In this paper, we describe the parallelization algorithm of one such method (Random Indexing) based on the Message-Passing Interface technology. Our evaluation results show super-linear improvement.
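
A minimal sketch, assuming NumPy and mpi4py, of how Random Indexing can be parallelised with MPI in the spirit described here: each rank processes a slice of the resources and derives every term's sparse ternary index vector deterministically from a stable hash, so the ranks agree on the vectors without communicating (the dimensions, names and merge step are illustrative assumptions, not the paper's algorithm verbatim):

import hashlib
import numpy as np
from mpi4py import MPI

DIM, NONZERO = 512, 8   # index-vector dimensionality and sparsity (assumed)

def index_vector(term):
    # seed the RNG from a stable hash of the term, so every rank derives
    # the same sparse ternary index vector without any communication
    seed = int.from_bytes(hashlib.md5(term.encode()).digest()[:4], "little")
    rng = np.random.default_rng(seed)
    vec = np.zeros(DIM)
    slots = rng.choice(DIM, size=NONZERO, replace=False)
    vec[slots] = rng.choice([-1.0, 1.0], size=NONZERO)
    return vec

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# each RDF resource is reduced to a 'bag of words (URIs/literals)';
# a toy corpus stands in here, split round-robin across the ranks
resources = [["ex:Berlin", "ex:capitalOf", "ex:Germany"],
             ["ex:Paris", "ex:capitalOf", "ex:France"]][rank::size]

context = {}
for bag in resources:
    for term in bag:
        acc = context.setdefault(term, np.zeros(DIM))
        for other in bag:                 # accumulate co-occurring terms
            if other != term:
                acc += index_vector(other)

# gather the per-rank partial context vectors on rank 0 and merge them
parts = comm.gather(context, root=0)
if rank == 0:
    merged = {}
    for part in parts:
        for term, vec in part.items():
            merged[term] = merged.get(term, np.zeros(DIM)) + vec
    print(f"context vectors built for {len(merged)} terms")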


Cheptsov A., High Performance Computing Center Stuttgart
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2015

The Large Knowledge Collider (LarKC) is a prominent development platform for Semantic Web reasoning applications. Originally guided by the goal of facilitating incomplete reasoning, LarKC has evolved into a unique platform that can be used for the development of robust, flexible, and efficient Semantic Web applications, also leveraging modern grid and cloud resources. In response to the numerous requests coming from the tremendously growing user community of LarKC, we have set up a demonstration package for LarKC that presents its main subsystems, development tools and graphical user interfaces. The demo aims at both early adopters and experienced users and serves the purpose of promoting Semantic Web reasoning and LarKC technologies to potentially new user communities. © Springer-Verlag Berlin Heidelberg 2015.