Garching bei München, Germany

Lummel N.,Ludwig Maximilians University of Munich | Bernau C.,Leibniz Rechenzentrum | Thon N.,Ludwig Maximilians University of Munich | Bochmann K.,Ludwig Maximilians University of Munich | Linn J.,TU Dresden
Neuroradiology | Year: 2015

Introduction: Superficial siderosis is presumably a consequence of recurrent bleeding into the subarachnoid space. The objective of this study was to assess the long-term prevalence of superficial siderosis after a singular, aneurysmal subarachnoid hemorrhage (SAH). Methods: We retrospectively identified all patients who presented with a singular, acute, aneurysmal SAH at our institution between 2010 and 2013 and in whom magnetic resonance imaging (MRI) including T2*-weighted imaging was available at least 4 months after the acute bleeding event. MRI scans were assessed for the presence and distribution of superficial siderosis. The influence of clinical data, Fisher grade, localization, and cause of SAH, as well as the impact of neurosurgical interventions, on the occurrence of superficial siderosis was tested. Results: Seventy-two patients with a total of 117 MRIs were included. Mean delay between SAH and the last available MRI was 47.4 months (range 4–129). SAH was Fisher grade 1 in 2 cases, 2 in 4 cases, 3 in 10 cases, and 4 in 56 cases. Superficial siderosis was detected in 39 patients (54.2 %). In all patients with more than one MRI scan, the localization and distribution of superficial siderosis did not change over time. Older age (p = 0.02) and higher grade of SAH (p = 0.03) were significantly associated with the development of superficial siderosis. Conclusion: Superficial siderosis develops in approximately half of patients after a singular, aneurysmal SAH and might be more common in older patients and in those with a greater amount of blood. However, additional factors must play a role in whether a patient is prone to develop superficial siderosis. © 2014, Springer-Verlag Berlin Heidelberg.


Hacker H.,TU Munich | Trinitis C.,TU Munich | Weidendorfer J.,TU Munich | Brehm M.,Leibniz Rechenzentrum
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2010

In contrast to just a few years ago, the answer to the question "What system should we buy next to best assist our users" has become a lot more complicated for the operators of an HPC center today. In addition to multicore architectures, powerful accelerator systems have emerged, and the future looks heterogeneous. In this paper, we apply this question to a specific accelerator and programming environment that have become increasingly popular: systems using graphics processors from NVidia, programmed with CUDA. Using three benchmarks covering the main computational needs of scientific codes, we compare performance results with those obtained on systems with modern x86 multicore processors. Drawing on the experience of optimizing and running these codes, we discuss whether the presented performance numbers really apply to computing center users running codes in their everyday tasks. © 2010 Springer-Verlag.
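For readers unfamiliar with the programming environment the paper evaluates, the following is a minimal CUDA sketch, not one of the paper's three benchmarks: a bandwidth-bound SAXPY kernel of the kind such GPU-versus-x86 comparisons commonly include. Kernel name, array size, and block size are illustrative assumptions.

// Minimal CUDA sketch (illustrative, not from the paper): one thread per element.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];      // y = a*x + y, elementwise
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    // Unified (managed) memory keeps the sketch short; a tuned benchmark would
    // use explicit cudaMalloc/cudaMemcpy to separate transfer time from compute.
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    const int block = 256;
    const int grid  = (n + block - 1) / block;
    saxpy<<<grid, block>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();                 // wait before reading results on the host

    printf("y[0] = %f (expected 4.0)\n", y[0]);
    cudaFree(x);
    cudaFree(y);
    return 0;
}

A comparable CPU version of the same loop, compiled for a modern x86 multicore processor, is what such studies time against; the interesting question the paper raises is whether the speedups seen on kernels like this survive in users' full, everyday codes.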


Groen D.,University College London | Borgdorff J.,University of Amsterdam | Bona-Casas C.,University of Amsterdam | Hetherington J.,University College London | And 8 more authors.
Interface Focus | Year: 2013

Multiscale simulations are essential in the biomedical domain to accurately model human physiology. We present a modular approach for designing, constructing and executing multiscale simulations on a wide range of resources, from laptops to petascale supercomputers, including combinations of these. Our work features two multiscale applications, in-stent restenosis and cerebrovascular bloodflow, which combine multiple existing single-scale applications to create a multiscale simulation. These applications can be efficiently coupled, deployed and executed on computers up to the largest (peta) scale, incurring a coupling overhead of 1-10% of the total execution time. © 2013 The Author(s) Published by the Royal Society. All rights reserved.


Beronov K.,Leibniz Rechenzentrum | Ozyilmaz N.,Friedrich - Alexander - University, Erlangen - Nuremberg
High Performance Computing in Science and Engineering 2009 - Transactions of the High Performance Computing Center Stuttgart, HLRS 2009 | Year: 2010

The largest known direct numerical simulations of grid-generated turbulence were performed, resolving the grid placed in an incompressible fluid flow on the one hand, and on the other reliably following the evolution of the flow structure both in the direction of the mean flow and across it, well into a region of self-similar behaviour. The excellent scaling of the employed lattice Boltzmann algorithm made it possible to perform a series of DNS runs and quantify several parametric effects. Several major qualitative results with immediate impact on turbulence modeling were obtained as well: a new algebraic law (of Kolmogorov type) was discovered for the decay of turbulent kinetic energy in low-intensity grid-generated turbulence. Furthermore, the assumption of a single decay rate for the anisotropy of weak "homogeneous" turbulence was shown to be qualitatively inappropriate, while the assumption of an "inertial range" for the spatial development of such turbulence - on which the former assumption should be based - appears to be valid. © Springer-Verlag Berlin Heidelberg 2010.
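The abstract does not state the exponent of the new decay law. For orientation only, the classical algebraic form for the decay of turbulent kinetic energy behind a grid, against which such results are usually discussed, can be written as (the symbols below are the standard ones, not taken from the paper):

k(x) \;=\; k_0 \left( \frac{x - x_0}{M} \right)^{-n}

Here x is the downstream distance from the grid, x_0 a virtual origin, M the grid mesh size, and n the decay exponent; Kolmogorov's classical argument gives n = 10/7, while grid experiments typically report values of roughly 1.1-1.4. The exponent found in the paper for low-intensity turbulence is not given in the abstract.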


Hammer N.,Leibniz Rechenzentrum | Jamitzky F.,Leibniz Rechenzentrum | Satzger H.,Leibniz Rechenzentrum | Allale M.,Leibniz Rechenzentrum | And 35 more authors.
Advances in Parallel Computing | Year: 2016

We report lessons learned during the friendly user block operation period of the new system at the Leibniz Supercomputing Centre (SuperMUC Phase 2). © 2016 The authors and IOS Press. All rights reserved.
