Mordvinova O., University of Heidelberg | Runz D., University of Heidelberg | Kunkel J.M., German Climate Computing Center | Ludwig T., University of Hamburg
Procedia Computer Science | Year: 2010

Choosing an appropriate cluster file system for a specific high performance computing application is challenging and depends mainly on the application's specific I/O needs. I/O requirements vary widely: some applications read and write large datasets, others need out-of-core data access, and still others have database access requirements. Application access patterns reflect this different I/O behavior and can be used for performance testing. This paper presents the programmable I/O benchmarking tool Parabench. It takes access patterns as input, which can be adapted to mimic the behavior of a rich set of applications. Using this benchmarking tool, composed patterns can be tested automatically and compared easily on different local and cluster file systems. Here we introduce the design of the proposed benchmark, focusing on the Parabench programming language, which was developed for flexible pattern creation. We also demonstrate an exemplary usage of Parabench and its capabilities to handle the POSIX and MPI-IO interfaces.
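As an illustration of the kind of access pattern such a benchmark replays, here is a generic sketch using plain POSIX-style I/O from Python. It is not Parabench's own pattern language (whose syntax is described in the paper); the block size, stride, and file path are arbitrary illustrative choices.

    import os, time

    # A generic strided-write pattern of the kind an I/O benchmark might replay:
    # 'count' blocks of 'block' bytes, each written at an offset 'stride' bytes apart.
    def strided_write(path, block=1 << 20, stride=4 << 20, count=64):
        buf = b"x" * block
        start = time.perf_counter()
        fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
        try:
            for i in range(count):
                os.pwrite(fd, buf, i * stride)   # write at a strided offset
            os.fsync(fd)                         # include flush time in the measurement
        finally:
            os.close(fd)
        elapsed = time.perf_counter() - start
        return count * block / elapsed / 1e6     # throughput in MB/s

    print(f"{strided_write('/tmp/pattern.dat'):.1f} MB/s")

According to the abstract, Parabench expresses such patterns in its own programming language and can drive them through either the POSIX or the MPI-IO interface; the sketch above only mimics the POSIX case.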


News Article
Site: http://www.scientificcomputing.com/rss-feeds/all/rss.xml/all

The supercomputer at the Deutsches Klimarechenzentrum (DKRZ), the German Climate Computing Center, is ranked among the largest systems employed for scientific computing. On October 5, 2015, Germany enhanced its leadership in climate research with the inauguration of Mistral, a state-of-the-art HPC system and one of the world’s most efficient supercomputers. The Mistral HPC system is 20 times faster than the previous supercomputer and features a large storage system to house the climate simulation data archive managed by DKRZ. Using the Mistral system and high performance computing (HPC) tools will keep DKRZ at the forefront in supporting scientific and climate modeling research.

Scientists conduct leading-edge climate research and are able to simulate anthropogenic influences on the climate system, which includes cloud research. Clouds and precipitation strongly influence atmospheric radiation and are critical for life on earth. The scale of clouds spans from a micrometer, the size of a single cloud particle, to hundreds of kilometers, the dimension of a frontal system. Researchers have to resolve this entire range, which makes exact modeling of clouds and precipitation physically difficult and extremely resource-consuming in terms of computer time and storage space. Climate modeling research therefore requires supercomputers that combine the power of thousands of computers with HPC tools to simulate complex climate models and research problems.

Mistral is used in the High Definition Clouds and Precipitation for Climate Prediction (HD(CP)2) project, which integrates cloud formation and precipitation processes into atmospheric simulations to better understand and research clouds and cloud-related processes. The project uses cloud-resolving modeling to determine cloud formations in central Europe. According to Professor Thomas Ludwig, Director of the German Climate Computing Center, “The unique characteristic of HD(CP)2 is to develop a cloud resolving LES version (Large Eddy Simulation) of the ICON model (Icosahedral non-hydrostatic general circulation model, a joint development of the German Weather Service DWD and the Max Planck Institute for Meteorology, see e.g. www.mpimet.mpg.de/en/science/models/icon.html) in order to explicitly simulate cloud and precipitation processes. The model region is centered on Germany using a grid with a resolution of 10,000 x 10,000 x 400 grid elements and a grid spacing of 100 m (www.hdcp2.eu). Such simulations are computationally very intensive and the necessary computing power can be found only on massively parallel computing platforms. In order to achieve this, DKRZ performed a major refactoring of the ICON model.”

Figure 2 shows a visualization of the simulated cloud water content for one time step, with about 3.5 billion cells per time step (22.5 million cells per slice on 150 levels; a rough check of these numbers follows the article). The data is resampled on the fly from the unstructured ICON grid onto a regular Cartesian grid with a downsampling factor of 1/10. The ICON simulation was performed using over 400 nodes of Mistral, while the visualization was done with the Vapor software on a single GPU node of the system, consuming over 200 GB of main memory.

“The ability to conduct this level of cloud and atmospheric research requires the use of a state-of-the-art HPC system. Using Mistral and HPC tools allows DKRZ to run new processes and ensemble members as well as see clouds or local climate at a higher resolution.
Per core, we see a performance improvement of our models by a factor of 1.8 to 2.6 on one Intel Xeon processor core, as compared to one 4.7 GHz IBM Power6 core. In times where scientists expect performance gains only through scaling, this is a welcome advancement,” Ludwig states.

Mistral, the new High Performance Computer System for Earth System Research (HLRE-3), comes from the French company Bull, which was purchased by Atos in 2014. Mistral replaces the IBM Power6 system named Blizzard, which had been in operation at DKRZ since 2009. The Mistral supercomputer is being installed in two stages: Phase 1 began in June 2015, with the second stage of the Mistral expansion scheduled for summer 2016. In parallel with the Phase 1 installation, DKRZ users had access to a small test system with 432 Intel Xeon processor cores and a 300 TByte Lustre file system from Xyratex/Seagate for the purpose of preparing climate models for the new architecture. During the testing, DKRZ provided training classes on how to use the new system in the areas of debugging, machine usage and visualization tools.

The Mistral system consists of computer components by Bull, a disk storage system by Xyratex/Seagate and high performance network switches by Mellanox. These components are distributed over 41 racks, each weighing a metric ton or more, connected by bundles of optical fiber. The Phase 1 Mistral supercomputer has about 1,500 compute nodes based on the bullx 700 DLC system, each with two 12-core Intel Xeon E5-2680 v3 processors (for a total of about 36,000 cores); the system and racks use direct hot-liquid cooling. The Intel processor-based Mistral system allows an inlet cooling liquid temperature of 40 degrees centigrade. The hot liquid heats up to 50 degrees centigrade and is piped to the roof, where it is cooled by fans only. “This means that all the racks that have the hot liquid cooling do not require additional expensive chillers, as the temperature on the roof in Hamburg almost never exceeds 40 degrees,” states Ludwig.

Mistral provides 24 high-end visualization nodes equipped with powerful graphics processors and 100 further nodes for pre- and post-processing and analysis of data. All components are connected with each other via optical cables and can directly access the shared file system. This means that modeling results calculated on the supercomputer can be analyzed directly on the data visualization nodes.

DKRZ does not conduct climate research itself but supports climate modeling and related scientific research. Ludwig indicates, “We participate in various infrastructure and research projects with the aim to support the climate scientists in all aspects of their work in our HPC environment. DKRZ departments support scientists in model parallelization and optimization of the code, data management, storage, data compression, analysis and visualization, help with libraries, improving I/O as well as quality assurance and archiving of data.” DKRZ uses the Allinea DDT debugging tool, Vampir and Intel VTune as performance tuning tools, and Vapor as the visualization tool. The DKRZ staff creates customized in-house tools to help with issues such as data compression, scalability and visualization in the parallel climate simulations and research models. There is close cooperation with Ludwig’s research group from the University of Hamburg; in fact, his chair for Scientific Computing has its offices in the DKRZ building.
A group of 10 researchers focuses on file system and storage issues and on energy efficiency for HPC. In the Mistral Phase 1 system, DKRZ uses a 20 PByte Lustre file system based on a Xyratex/Seagate CS9000 system with a bandwidth in excess of 150 GB/s; its metadata performance outperforms competing systems. The Lustre file system will be expanded to 50 PBytes and 430 GB/s in 2016 as the Mistral system expands. According to Ludwig, “In addition to supporting our users to efficiently utilize the supercomputer, we engage in joint projects to enable new science on the current and future systems. Since our users run a large diversity of different models on our system, DKRZ also develops universally usable libraries to facilitate scalable parallel models (YAXT) and make better use of the available storage capacity through data compression (libAEC).”

In addition to its other services, DKRZ manages the world’s largest climate simulation data archive. The archive is used by researchers worldwide and contains massive amounts of data: currently more than 40 PBytes, projected to grow by 75 PBytes annually over the next five years. “There is a growing gap between the ability of HPC systems to generate large amounts of data and the cost of the storage needed to hold this data. DKRZ estimates we are currently spending 25 percent of our investment budget, as well as of the electricity expenses, on storage, and we expect this gap to increase for the climate modeling data created in the future. The widening gap between compute capabilities and storage means we need to shift some focus to how to maximize storage if we want to keep the ability to store all the data being generated.”

“Lustre as a file system gets constantly increasing support from major vendors and from the computer science community,” Ludwig said. “We are confident that emerging requirements will be picked up quickly and solutions can be provided promptly.”

DKRZ supports CMIPs and the Intergovernmental Panel on Climate Change (IPCC) Research

DKRZ performs simulations for the research community, such as the Coupled Model Intercomparison Projects (CMIPs), which build the foundation for the findings presented in the IPCC reports. Climate modelers in Germany worked on the IPCC project, performing calculations on the DKRZ computer with an Earth system model from the Max Planck Institute for Meteorology that also simulated the carbon cycle. Ludwig indicates, “We stored approximately 2 PBytes of CMIP5 data on the Mistral machine from DKRZ and international centers. Planning is under way for how much data DKRZ will receive in the next CMIP6 project; it is expected there will be 20 to 50 times more data. DKRZ expects to begin computations for the next IPCC report in 2016. The German climate model data contribution for publications that will be included in the next IPCC assessment report will start to be released in 2016 and will be computed exclusively on the expanded Mistral machine. DKRZ has extensive experience in computations and data dissemination for the IPCC report, and with the new Mistral system, we have a powerful computer and storage system to host at least all the computations that will be conducted on the German side, and probably more.”

Installation of the expanded Mistral system is expected to start in February 2016 and will also use the Bull direct hot-liquid cooling employed in the Mistral Phase 1 system.
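A rough back-of-envelope reading of the storage figures quoted above helps put the "growing gap" in perspective. This is a sketch based only on the numbers stated in the article (40 PBytes today, 75 PBytes/year growth, 2 PBytes of CMIP5 data, a 20-50x CMIP6 factor, a 50 PByte expanded file system); it is not an official DKRZ projection.

    # Back-of-envelope projection from the figures quoted in the article.
    archive_now_pb = 40            # archive size today (PBytes)
    growth_pb_per_year = 75        # projected annual growth (PBytes/year)
    years = 5
    archive_in_5y_pb = archive_now_pb + growth_pb_per_year * years   # 415 PBytes

    cmip5_pb = 2                   # CMIP5 data held at DKRZ (PBytes)
    cmip6_low_pb = 20 * cmip5_pb   # ~40 PBytes expected
    cmip6_high_pb = 50 * cmip5_pb  # ~100 PBytes expected

    lustre_2016_pb = 50            # expanded Lustre capacity planned for 2016
    print(f"Archive after {years} years: ~{archive_in_5y_pb} PBytes")
    print(f"Expected CMIP6 volume: {cmip6_low_pb}-{cmip6_high_pb} PBytes "
          f"vs. a {lustre_2016_pb} PByte parallel file system")

Even under these simple assumptions, the expected CMIP6 data volume alone approaches or exceeds the capacity of the expanded online file system, which illustrates the storage pressure Ludwig describes.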
According to Helena Liebelt, Intel Business Development Manager, “The second phase of the Mistral HLRE-3 system is planned to be available in summer 2016. The expanded Mistral system in 2016 will have more than 3,000 computing nodes and more than 68,000 cores. This extension will roughly double computing and disk storage capacity. With a peak performance of 3 PFlops and a 50 PByte parallel file system, scientists can improve the regional resolution, account for more processes in the Earth system models or reduce uncertainties in climate projections.”

Ludwig estimates the expanded Mistral system will be in the Top 100 of the June 2016 TOP500 list and among the top five in file system capabilities, making Mistral one of the top HPC systems worldwide for storage. While DKRZ uses Seagate Lustre in the production environment of Mistral, Ludwig’s research group became a member of Intel’s Parallel Processing Center for Lustre (IPCC-L) and will conduct research on data compression mechanisms. The group is also using Intel Xeon Phi coprocessors and graphics processing units (GPUs) in its test environment.

How HPC will aid climate modeling in the future

Professor Ludwig indicates that computer scientists face a number of challenges in climate modeling, including “the growing number of cores and the fact that parallelization is becoming more complicated due to multiple runs of climate simulations which are mathematically non-linear. Memory bandwidth is always a problem, because climate modeling applications are memory intensive. The ability to modify code to take advantage of HPC parallelization and optimization is a problem because of legacy code and not enough software engineers to adapt codes. Growing energy requirements may become a limitation in providing more computational power to future climate models. In addition, I/O bandwidth and storage capacity growth may be even harder to maintain. Science is looking to computer scientists to develop software that can handle the huge number of computing elements.”

As supercomputers such as Mistral and HPC tools advance, it will be possible to create finer grids with more grid cells, which will provide climate information at higher resolution. The German government has funded a project called PalMod that takes the opposite approach and uses a coarse grid for a very long simulated time period. It seeks to apply today’s climate models to 135,000 years of data, going back to the ice age. The hope is that this will allow researchers to recompute climate data to see how well current climate models reproduce past climate changes and, in turn, how reliably they can predict future ones. DKRZ will be involved in supporting PalMod. However, today’s many- and multi-core processor architectures will probably not be sufficient to achieve the desired performance: a challenge to be addressed jointly by DKRZ and industry.

“DKRZ is the link between hardware vendors, solution providers and the climate research community. Its vision is to make the potential of accelerating technical progress reliably accessible to climate research. We closely follow technological trends and are in permanent contact with companies such as processor producers. At the same time, we participate in climate research projects to learn about the future resources that will be necessary for new insights. We translate between these communities and communicate scientific requirement specifications and technical product characteristics. An efficient usage of HPC adds optimal value to the science of climate researchers,” states Ludwig.

Linda Barney is the founder and owner of Barney and Associates, a technical/marketing writing, training and web design firm in Beaverton, OR.
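As referenced earlier, here is a rough check of the visualization figures quoted in the article. Only the cell counts come from the article; the 4 bytes per single-precision value is an added assumption used for illustration.

    # Rough check of the visualization figures quoted in the article.
    cells_per_slice = 22.5e6
    levels = 150
    cells_per_step = cells_per_slice * levels                # ~3.4e9, i.e. "about 3.5 billion"

    bytes_per_value = 4                                      # assumed single precision
    gb_per_field = cells_per_step * bytes_per_value / 1e9    # ~13.5 GB for one 3-D field

    print(f"Cells per time step: {cells_per_step:.2e}")
    print(f"One single-precision field: ~{gb_per_field:.0f} GB")
    # Holding several fields plus working copies for resampling and rendering
    # makes the quoted >200 GB of main memory on the GPU node plausible.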


Sahito W.A., University for Information Science and Technology | Chunpir H.I., German Climate Computing Center | Chunpir H.I., University of Hamburg | Hussain Z., University for Information Science and Technology | And 2 more authors.
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2015

This paper presents key findings about optimal on-screen line length for tablet personal computers (PCs). It examines the effects of four different line lengths on reading speed and reading efficiency. Seventy participants between the ages of 20 and 40 took part in the study. They read four different texts with an average length of 2000 characters. The texts contained substitution words, which the subjects had to detect so that reading accuracy could be measured. Moreover, the subjects were asked to rate their reading experience subjectively with respect to measures such as reading speed and accuracy. The results of the study revealed that most participants preferred 90 characters per line (CPL). However, some participants between the ages of 35 and 40 preferred 60 CPL. The findings presented in this paper are relevant because tablet PCs are extensively used for e-reading. In essence, this study suggests an optimal line length for on-screen reading on tablet PCs, benefiting readers from every walk of life. © Springer International Publishing Switzerland.


Von Storch J.-S., Max Planck Institute for Meteorology | Eden C., University of Hamburg | Fast I., German Climate Computing Center | Haak H., Max Planck Institute for Meteorology | And 5 more authors.
Journal of Physical Oceanography | Year: 2012

This paper presents an estimate of the oceanic Lorenz energy cycle derived from a 1/10° simulation forced by 6-hourly fluxes obtained from NCEP-NCAR reanalysis-1. The total rate of energy generation amounts to 6.6 TW, of which 1.9 TW is generated by the time-mean winds and 2.2 TW by the time-varying winds. The dissipation of kinetic energy amounts to 4.4 TW, of which 3 TW originate from the dissipation of eddy kinetic energy. The energy exchange between reservoirs is dominated by the baroclinic pathway and the pathway that distributes the energy generated by the time-mean winds. The former converts 0.7 to 0.8 TW mean available potential energy to eddy available potential energy and finally to eddy kinetic energy, whereas the latter converts 0.5 TW mean kinetic energy to mean available potential energy. This energy cycle differs from the atmospheric one in two aspects. First, the generation of the mean kinetic and mean available potential energy is each, to a first approximation, balanced by the dissipation. The interaction of the oceanic general circulation with mesoscale eddies is hence less crucial than the corresponding interaction in the atmosphere. Second, the baroclinic pathway in the ocean is facilitated not only by the surface buoyancy flux but also by the winds through a conversion of 0.5 TW mean kinetic energy to mean available potential energy. In the atmosphere, the respective conversion is almost absent and the baroclinic energy pathway is driven solely by the differential heating. © 2012 American Meteorological Society.
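To make the quoted budget easier to follow, the sketch below collects the numbers from the abstract and computes the residuals they imply. The split into wind-attributed and remaining generation, and into mean and eddy dissipation, is simple arithmetic on the abstract's figures, not an additional result from the paper.

    # Energy budget figures quoted in the abstract (all in TW).
    generation_total = 6.6
    generation_mean_winds = 1.9
    generation_timevarying_winds = 2.2
    # Remainder is not broken down in the abstract.
    generation_other = generation_total - generation_mean_winds - generation_timevarying_winds  # 2.5

    dissipation_total = 4.4
    dissipation_eddy_ke = 3.0
    dissipation_mean_ke = dissipation_total - dissipation_eddy_ke  # 1.4

    # Conversions quoted in the abstract.
    conversion_baroclinic_tw = (0.7, 0.8)   # mean APE -> eddy APE -> eddy KE
    conversion_mke_to_mape_tw = 0.5         # mean KE -> mean APE (wind-driven pathway)

    print(f"Generation not attributed to winds: {generation_other:.1f} TW")
    print(f"Dissipation of mean kinetic energy: {dissipation_mean_ke:.1f} TW")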


Hertwig E., Max Planck Institute for Meteorology | von Storch J.-S., Max Planck Institute for Meteorology | Handorf D., Alfred Wegener Institute for Polar and Marine Research | Dethloff K., Alfred Wegener Institute for Polar and Marine Research | And 2 more authors.
Climate Dynamics | Year: 2015

This study analyzes the effect of increasing horizontal resolution in the atmospheric model ECHAM6 on the simulated mean climate state and climate variability. For that purpose, three AMIP-style simulations at the resolutions T63L95, T127L95, and T255L95 are compared to reanalysis data and observations. Biases in atmospheric fields are analyzed, both overall and separately for the troposphere and stratosphere. Besides mean errors of the climate state and the variance, several atmospheric phenomena with different time scales are studied at the three horizontal resolutions: the transient eddy kinetic energy, storm tracks, atmospheric teleconnections, the Madden–Julian Oscillation (MJO), and the Quasi-Biennial Oscillation (QBO). The main result is that, overall, the bias of the simulated climate is reduced with increasing resolution when considering the mean state and the variance. A greater improvement takes place in the extra-tropical than in the tropical troposphere. The errors in the stratosphere are generally larger, but the relative benefit of increasing resolution is greater than in the troposphere, and stratospheric phenomena, like the QBO, are found to be sensitive to horizontal resolution. Globally, the bias of the mean state improves by 19 %, while the bias of the variability improves by 15 % (from T63 to T255). Major remaining challenges are the simulation of precipitation and of climate features like the MJO, which might require a coupled atmosphere–ocean model. © 2014, Springer-Verlag Berlin Heidelberg.
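For readers unfamiliar with spectral truncations, the sketch below converts the three resolutions to an approximate grid spacing. It assumes the quadratic Gaussian grid commonly used with ECHAM, i.e. roughly 3N+1 longitudes for a TN truncation; the exact grids used in the paper may differ slightly.

    # Approximate grid spacing for the three ECHAM6 truncations compared in the study.
    # Assumption: quadratic Gaussian grid with roughly 3*N + 1 longitudes for a TN
    # truncation (actual model grids are rounded to FFT-friendly sizes).
    KM_PER_DEGREE_AT_EQUATOR = 111.0

    for truncation in (63, 127, 255):
        n_lon = 3 * truncation + 1
        spacing_deg = 360.0 / n_lon
        spacing_km = spacing_deg * KM_PER_DEGREE_AT_EQUATOR
        print(f"T{truncation}: ~{spacing_deg:.2f} deg (~{spacing_km:.0f} km at the equator)")

This gives roughly 1.9 degrees (about 200 km) for T63, 0.94 degrees (about 100 km) for T127, and 0.47 degrees (about 50 km) for T255, which is the span of resolutions over which the reported 19 % and 15 % bias reductions were obtained.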
