Exzellenzcluster Universe

Garching bei München, Germany


Wong K.C., National Astronomical Observatory of Japan | Wong K.C., Academia Sinica, Taiwan | Suyu S.H., Academia Sinica, Taiwan | Suyu S.H., Max Planck Institute for Astrophysics | And 16 more authors.
Monthly Notices of the Royal Astronomical Society | Year: 2017

Strong gravitational lenses with measured time delays between the multiple images allow a direct measurement of the time-delay distance to the lens, and thus a measure of cosmological parameters, particularly the Hubble constant, H0. We present a blind lens model analysis of the quadruply imaged quasar lens HE 0435-1223 using deep Hubble Space Telescope imaging, updated time-delay measurements from the COSmological MOnitoring of GRAvItational Lenses (COSMOGRAIL), a measurement of the velocity dispersion of the lens galaxy based on Keck data, and a characterization of the mass distribution along the line of sight. HE 0435-1223 is the third lens analysed as a part of the H0 Lenses in COSMOGRAIL's Wellspring (H0LiCOW) project. We account for various sources of systematic uncertainty, including the detailed treatment of nearby perturbers, the parametrization of the galaxy light and mass profile, and the regions used for lens modelling. We constrain the effective time-delay distance to be DΔt = 2612 (+208/−191) Mpc, a precision of 7.6 per cent. From HE 0435-1223 alone, we infer a Hubble constant of H0 = 73.1 (+5.7/−6.0) km s^−1 Mpc^−1 assuming a flat ΛCDM cosmology. The cosmographic inference based on the three lenses analysed by H0LiCOW to date is presented in a companion paper (H0LiCOW Paper V). © 2016 The Authors.
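
A quick way to see why the measured time-delay distance constrains H0 is that DΔt ≡ (1 + z_d) D_d D_s / D_ds is a combination of angular diameter distances and therefore scales approximately as 1/H0. The sketch below (an illustration using astropy, not the H0LiCOW modelling code; the lens and source redshifts are assumed placeholder values, since the abstract does not quote them) makes this scaling explicit:

    # Illustrative sketch only: shows how the time-delay distance
    # D_dt = (1 + z_d) * D_d * D_s / D_ds falls as H0 rises in flat LCDM.
    # The redshifts below are assumed placeholders, not values from the paper.
    from astropy.cosmology import FlatLambdaCDM

    Z_LENS, Z_SOURCE = 0.45, 1.69   # assumed example redshifts

    def time_delay_distance(H0, Om0=0.3):
        cosmo = FlatLambdaCDM(H0=H0, Om0=Om0)
        D_d = cosmo.angular_diameter_distance(Z_LENS)
        D_s = cosmo.angular_diameter_distance(Z_SOURCE)
        D_ds = cosmo.angular_diameter_distance_z1z2(Z_LENS, Z_SOURCE)
        return (1.0 + Z_LENS) * D_d * D_s / D_ds   # astropy Quantity in Mpc

    for H0 in (65.0, 70.0, 73.1):
        print(H0, time_delay_distance(H0))

Comparing such model values of DΔt with the measured one is, schematically, how a lens-by-lens H0 inference proceeds; the actual analysis marginalizes over the lens mass model and the line-of-sight structure.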


Steininger T., Max Planck Institute for Astrophysics | Steininger T., Ludwig Maximilians University of Munich | Greiner M., Max Planck Institute for Astrophysics | Greiner M., Ludwig Maximilians University of Munich | And 4 more authors.
Journal of Big Data | Year: 2016

We introduce d2o, a Python module for cluster-distributed multi-dimensional numerical arrays. It acts as a layer of abstraction between the algorithm code and the data-distribution logic. The main goal is to achieve usability without losing numerical performance and scalability. d2o’s global interface is similar to the one of a numpy.ndarray, whereas the cluster node’s local data is directly accessible for use in customized high-performance modules. d2o is written in pure Python which makes it portable and easy to use and modify. Expensive operations are carried out by dedicated external libraries like numpy and mpi4py. The performance of d2o is on a par with numpy for serial applications and scales well when moving to an MPI cluster. d2o is open-source software available under the GNU General Public License v3 (GPL-3) at https://gitlab.mpcdf.mpg.de/ift/D2O. © 2016, The Author(s).
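
Since the d2o interface is only characterized here as numpy.ndarray-like, the sketch below does not use d2o itself; it shows, with plain numpy and mpi4py (the libraries the abstract says d2o delegates to), the kind of slab distribution and global reduction that such an abstraction layer hides from the algorithm code:

    # Illustrative sketch of the distribution logic a layer like d2o abstracts
    # away. This is NOT the d2o API: plain numpy + mpi4py, splitting a global
    # array into node-local slabs and performing a global reduction.
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    global_shape = (1024, 1024)
    # Contiguous slabs along the first axis, one per MPI task.
    counts = [global_shape[0] // size + (1 if r < global_shape[0] % size else 0)
              for r in range(size)]
    local_rows = counts[rank]

    # Each task holds only its slab; the algorithm code sees an ordinary ndarray.
    local_data = np.random.default_rng(rank).normal(size=(local_rows, global_shape[1]))

    # A "global" operation (here the mean) = local reduction + MPI allreduce.
    local_sum = local_data.sum()
    global_mean = comm.allreduce(local_sum, op=MPI.SUM) / (global_shape[0] * global_shape[1])
    if rank == 0:
        print("global mean:", global_mean)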


Hilbert S., Exzellenzcluster Universe | Hilbert S., Ludwig Maximilians University of Munich | Xu D., Heidelberg Institute for Theoretical Studies | Schneider P., Argelander Institute for Astronomy | And 4 more authors.
Monthly Notices of the Royal Astronomical Society | Year: 2017

We study intrinsic alignments (IA) of galaxy image shapes within the Illustris cosmic structure formation simulations. We investigate how IA correlations depend on observable galaxy properties such as stellar mass, apparent magnitude, redshift and photometric type, and on the employed shape measurement method. The correlations considered include the matter density-intrinsic ellipticity (mI), galaxy density-intrinsic ellipticity (dI), gravitational shear-intrinsic ellipticity (GI) and intrinsic ellipticity-intrinsic ellipticity (II) correlations. We find stronger correlations for more massive and more luminous galaxies, as well as for earlier photometric types, in agreement with observations. Moreover, the correlations significantly depend on the choice of shape estimator, even if calibrated to serve as unbiased shear estimators. In particular, shape estimators that down-weight the outer parts of galaxy images produce much weaker IA signals on intermediate and large scales than methods employing flat radial weights. The expected contribution of IA to the observed ellipticity correlation in tomographic cosmic shear surveys may range from below one per cent to several per cent of the full signal, depending on the details of the shape measurement method. A comparison of our results to a tidal alignment model indicates that such a model is able to reproduce the IA correlations well on intermediate and large scales, provided the effect of varying galaxy density is correctly taken into account. We also find that the GI contributions to the observed ellipticity correlations could be inferred directly from measurements of galaxy density-intrinsic ellipticity correlations, except on small scales, where systematic differences between mI and dI correlations are large. © 2017 The Authors. Published by Oxford University Press on behalf of the Royal Astronomical Society.
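
The tidal alignment model used as the comparison baseline is not written out in the abstract; in its commonly used linear-alignment form (a hedged summary of the standard convention, which may differ in detail from the paper's parametrization) the intrinsic shear and the resulting density-intrinsic and intrinsic-intrinsic power spectra read, in LaTeX notation:

    % Linear (tidal) alignment model, standard convention (hedged summary):
    \gamma^{\mathrm{I}} = -\frac{C_1}{4\pi G}
        \left(\partial_x^2 - \partial_y^2,\; 2\,\partial_x \partial_y\right) \Phi_p ,
    \qquad
    P_{\delta \mathrm{I}}(k,z) = -\,\frac{C_1 \rho_{\mathrm{crit}} \Omega_{\mathrm{m}}}{D(z)}\, P_\delta(k,z) ,
    \qquad
    P_{\mathrm{II}}(k,z) = \left(\frac{C_1 \rho_{\mathrm{crit}} \Omega_{\mathrm{m}}}{D(z)}\right)^{2} P_\delta(k,z) ,

where Φ_p is the gravitational potential at the epoch of galaxy formation, D(z) the linear growth factor and C_1 a free amplitude; the mI, dI, GI and II correlations measured in the paper are configuration-space counterparts of such spectra.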


Saghiha H., University of Bonn | Simon P., University of Bonn | Schneider P., University of Bonn | Hilbert S., Exzellenzcluster Universe | Hilbert S., Ludwig Maximilians University of Munich
Astronomy and Astrophysics | Year: 2017

Testing predictions of semi-analytic models of galaxy evolution against observations helps to understand the complex processes that shape galaxies. We compare predictions from the Garching and Durham models implemented on the Millennium Simulation (MS) with observations of galaxy-galaxy lensing (GGL) and galaxy-galaxy-galaxy lensing (G3L) for various galaxy samples with stellar masses in the range 0.5 ≤ M∗/10^10 M⊙ < 32 and photometric redshifts in the range 0.2 ≤ z < 0.6 in the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS). We find that the predicted GGL and G3L signals are in qualitative agreement with CFHTLenS data. Quantitatively, the models succeed in reproducing the observed signals in the highest stellar mass bin, 16 ≤ M∗/10^10 M⊙ < 32, but show different degrees of tension for the other stellar mass samples. The Durham models are strongly excluded by the observations at the 95% confidence level because they largely over-predict the amplitudes of the GGL and G3L signals, probably because they predict too many satellite galaxies in massive halos. © 2017 ESO.
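
The GGL signal being compared is, in essence, the mean tangential ellipticity of background sources stacked in bins of separation around foreground lens galaxies. A minimal flat-sky sketch of such a stacking estimator (illustrative only; the actual CFHTLenS measurement includes shear weights, calibration corrections and curved-sky geometry) looks like this:

    # Minimal flat-sky sketch of a galaxy-galaxy lensing (GGL) estimator:
    # mean tangential ellipticity of sources in separation bins around lenses.
    # Illustrative only -- no shear weights, calibration or covariance.
    import numpy as np

    def mean_tangential_shear(lens_xy, src_xy, src_e1, src_e2, theta_bins):
        gt_sum = np.zeros(len(theta_bins) - 1)
        npair = np.zeros(len(theta_bins) - 1)
        for lx, ly in lens_xy:
            dx, dy = src_xy[:, 0] - lx, src_xy[:, 1] - ly
            theta = np.hypot(dx, dy)
            phi = np.arctan2(dy, dx)
            # Tangential ellipticity component relative to this lens.
            e_t = -(src_e1 * np.cos(2 * phi) + src_e2 * np.sin(2 * phi))
            idx = np.digitize(theta, theta_bins) - 1
            for b in range(len(theta_bins) - 1):
                sel = idx == b
                gt_sum[b] += e_t[sel].sum()
                npair[b] += sel.sum()
        return gt_sum / np.maximum(npair, 1)

The G3L statistics generalize this pair count to triplets (two lenses and one source, or one lens and two sources), which is what makes them sensitive to how satellite galaxies populate common halos.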


Koepferl C.M., Ludwig Maximilians University of Munich | Ercolano B., Ludwig Maximilians University of Munich | Ercolano B., Exzellenzcluster Universe | Dale J., Ludwig Maximilians University of Munich | And 5 more authors.
Monthly Notices of the Royal Astronomical Society | Year: 2013

The time-scale over which and the modality by which young stellar objects (YSOs) disperse their circumstellar discs dramatically influence the eventual formation and evolution of planetary systems. By means of extensive radiative transfer modelling, we have developed a new set of diagnostic diagrams in the infrared colour-colour plane (K - [24] versus K - [8]), to aid with the classification of the evolutionary stage of YSOs from photometric observations. Our diagrams allow the differentiation of sources with unevolved (primordial) discs from those evolving according to different clearing scenarios (e.g. homologous depletion versus inside-out dispersal), as well as from sources that have already lost their disc. Classification of over 1500 sources in 15 nearby star-forming regions reveals that approximately 39 per cent of the sources lie in the primordial disc region, whereas between 31 and 32 per cent disperse from the inside-out and up to 22 per cent of the sources have already lost their disc. Less than 2 per cent of the objects in our sample lie in the homogeneous draining regime. Time-scales for the transition phase are estimated to be typically a few 10^5 yr independent of stellar mass. Therefore, regardless of spectral type, we conclude that currently available infrared photometric surveys point to fast (of the order of 10 per cent of the global disc lifetime) inside-out clearing as the preferred mode of disc dispersal. © 2012 The Authors. Published by Oxford University Press on behalf of the Royal Astronomical Society.
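
Concretely, the classification assigns each source to a region of the (K − [8], K − [24]) plane. The region boundaries come from the authors' radiative transfer model grids and are not quoted in the abstract, so the cuts in the sketch below are hypothetical placeholders meant only to show the structure of such a classifier:

    # Hypothetical sketch of a colour-colour classifier in the (K-[8], K-[24])
    # plane. All boundary values are placeholders for illustration; the real
    # region boundaries are defined by the radiative transfer model grids.
    def classify_yso(k_minus_8, k_minus_24,
                     disc_8=1.5, disc_24=3.0, clear_8=0.5):   # placeholder cuts
        if k_minus_8 >= disc_8 and k_minus_24 >= disc_24:
            return "primordial disc"
        if k_minus_8 < clear_8 and k_minus_24 >= disc_24:
            return "inside-out clearing"   # inner disc cleared, outer disc still bright
        if k_minus_8 < clear_8 and k_minus_24 < disc_24:
            return "disc-less photosphere"
        return "homologous depletion / transition"

    print(classify_yso(2.0, 4.5))   # -> "primordial disc" with these placeholder cuts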


Hänggi P., University of Augsburg | Hilbert S., Exzellenzcluster Universe | Dunkel J., Massachusetts Institute of Technology
Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences | Year: 2016

Depending on the exact experimental conditions, the thermodynamic properties of physical systems can be related to one or more thermostatistical ensembles. Here, we survey the notion of thermodynamic temperature in different statistical ensembles, focusing in particular on subtleties that arise when ensembles become non-equivalent. The 'mother' of all ensembles, the microcanonical ensemble, uses entropy and internal energy (the most fundamental, dynamically conserved quantity) to derive temperature as a secondary thermodynamic variable. Over the past century, some confusion has been caused by the fact that several competing microcanonical entropy definitions are used in the literature, most commonly the volume and surface entropies introduced by Gibbs. It can be proved, however, that only the volume entropy satisfies exactly the traditional form of the laws of thermodynamics for a broad class of physical systems, including all standard classical Hamiltonian systems, regardless of their size. This mathematically rigorous fact implies that negative 'absolute' temperatures and Carnot efficiencies greater than 1 are not achievable within a standard thermodynamical framework. As an important offspring of microcanonical thermostatistics, we shall briefly consider the canonical ensemble and comment on the validity of the Boltzmann weight factor. We conclude by addressing open mathematical problems that arise for systems with discrete energy spectra. © 2016 The Author(s).
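
The two entropy definitions at issue are the Gibbs volume entropy and the surface (often called Boltzmann) entropy. The abstract does not write them out; in the usual convention (a hedged summary, with k_B the Boltzmann constant and ε an arbitrary small energy width) they are, in LaTeX notation:

    % Gibbs volume entropy vs. surface entropy (standard convention, hedged):
    \Omega(E) = \mathrm{Tr}\,\Theta(E - H), \qquad \omega(E) = \frac{\partial \Omega}{\partial E},
    \\
    S_{\mathrm{G}}(E) = k_{\mathrm{B}} \ln \Omega(E), \qquad
    S_{\mathrm{B}}(E) = k_{\mathrm{B}} \ln\bigl[\epsilon\,\omega(E)\bigr],
    \\
    T_{\mathrm{G}}(E) = \left(\frac{\partial S_{\mathrm{G}}}{\partial E}\right)^{-1}
                      = \frac{\Omega(E)}{k_{\mathrm{B}}\,\omega(E)} \ \ge\ 0 .

Because Ω(E) is non-decreasing in E, the Gibbs temperature T_G is never negative, which is the formal content of the statement that negative absolute temperatures do not arise within this framework.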


Hilbert S., Exzellenzcluster Universe | Hänggi P., University of Augsburg | Dunkel J., Massachusetts Institute of Technology
Physical Review E - Statistical, Nonlinear, and Soft Matter Physics | Year: 2014

The recent experimental realization of exotic matter states in isolated quantum systems and the ensuing controversy about the existence of negative absolute temperatures demand a careful analysis of the conceptual foundations underlying microcanonical thermostatistics. Here we provide a detailed comparison of the most commonly considered microcanonical entropy definitions, focusing specifically on whether they satisfy or violate the zeroth, first, and second laws of thermodynamics. Our analysis shows that, for a broad class of systems that includes all standard classical Hamiltonian systems, only the Gibbs volume entropy fulfills all three laws simultaneously. To avoid ambiguities, the discussion is restricted to exact results and analytically tractable examples. © 2014 American Physical Society.
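
The technical core of this result is an exact identity (going back to Gibbs and Khinchin) that the volume entropy obeys for standard Hamiltonian systems H(ξ; A) with an external control parameter A; written in the usual notation (a hedged summary, the paper's conventions may differ in detail):

    % Exact consistency relation for the Gibbs volume entropy (hedged summary):
    T_{\mathrm{G}} \left(\frac{\partial S_{\mathrm{G}}}{\partial A}\right)_{E}
        = -\left\langle \frac{\partial H}{\partial A} \right\rangle_{E},
    \qquad
    T_{\mathrm{G}} = \left(\frac{\partial S_{\mathrm{G}}}{\partial E}\right)^{-1},

so that dE = T_G dS_G + ⟨∂H/∂A⟩ dA reproduces the first law with the conventional microcanonical averages as generalized forces (for A = V this is dE = T dS − p dV); the surface entropy obeys this relation only approximately for finite systems, which is the sense in which it fails the thermodynamic laws.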


Moll A., Max Planck Institute for Physics | Moll A., Exzellenzcluster Universe
Journal of Physics: Conference Series | Year: 2011

The next generation of CP-violation experiments is set to begin in 2014 with the start-up of the SuperKEKB collider in Tsukuba, Japan. As part of this process the BELLE experiment will undergo an upgrade, giving rise to the BELLE II experiment. The BELLE II detector will include improvements and redesigns of various subdetectors, as well as the addition of an entirely new subdetector for precise vertexing. Reflecting these changes in the existing BELLE software framework would have required major modifications of nearly all parts of the software. As a result, the decision was made to completely rewrite the software framework. In this article the main concepts of the new framework and the applied technologies are presented.


De Gasperin F., Max Planck Institute for Astrophysics | De Gasperin F., Exzellenzcluster Universe | Merloni A., Exzellenzcluster Universe | Merloni A., Max Planck Institute for Extraterrestrial Physics | And 4 more authors.
Monthly Notices of the Royal Astronomical Society | Year: 2011

We present the results of the analysis of a sample of 17 low-luminosity (L_X ≲ 10^42 erg s^−1), radio-loud active galactic nuclei in massive galaxies. The sample is extracted from the Sloan Digital Sky Survey data base and it spans uniformly a wide range in optical [O III] emission line and radio luminosity, but within a narrow redshift range (0.05 < z < 0.11) and a narrow supermassive black hole mass range (∼10^8 M⊙). For these sources we measured core X-ray emission with the Chandra X-ray Telescope and radio emission with the Very Large Array. Our main goal is to establish which emission component, if any, can be regarded as the most reliable accretion/jet-power estimator at these regimes. In order to do so, we studied the correlation between emission-line properties, radio luminosity, radio spectral slopes and X-ray luminosity, as well as more complex multivariate relations involving black hole mass, such as the Fundamental Plane of black hole activity. We find that 15 out of 17 sources of our sample can be classified as low-excitation galaxies (LEGs), and their observed properties suggest X-ray and radio emission to originate from the jet base. We also find that X-ray emission does not appear to be affected by nuclear obscuration and can be used as a reliable jet-power estimator. More generally, X-ray, radio and optical emission appear to be related, although no tight correlation is found. In accordance with a number of recent studies of this class of objects, these findings may be explained by a lack of cold (molecular) gaseous structures in the innermost region of these massive galaxies. © 2011 The Authors. Monthly Notices of the Royal Astronomical Society © 2011 RAS.
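
The Fundamental Plane of black hole activity invoked here is the empirical plane relating radio core luminosity, X-ray luminosity and black hole mass. In its commonly quoted form (indicative coefficients from the literature, e.g. Merloni, Heinz & Di Matteo 2003, not the fit performed in this paper) it reads, in LaTeX notation:

    % Fundamental plane of black hole activity, indicative literature form:
    \log L_{\mathrm{R}} \simeq 0.60\,\log L_{\mathrm{X}} + 0.78\,\log M_{\mathrm{BH}} + \mathrm{const.}

Comparing the sample against such a plane is one of the multivariate relations the paper uses to assess whether the radio and X-ray cores trace the jet rather than the accretion flow.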
