
ETH Zürich is an engineering, science, technology, mathematics and management university in the city of Zürich, Switzerland. Like its sister institution, the Swiss Federal Institute of Technology in Lausanne, it is an integral part of the Swiss Federal Institutes of Technology Domain, which is directly subordinate to Switzerland's Federal Department of Economic Affairs, Education and Research. ETH Zürich is consistently rated among the top universities in the world; it is currently ranked 4th in Europe overall and 3rd in the world in engineering, science and technology. Twenty-one Nobel Prizes have been awarded to students or professors of the institute, the most famous laureate being Albert Einstein (1921) and the most recent Richard F. Heck (2010). ETH Zürich is a founding member of the IDEA League and the International Alliance of Research Universities, and a member of the CESAER network. The school was founded by the Swiss Federal Government in 1854 with the stated mission to educate engineers and scientists, serve as a national center of excellence in science and technology, and provide a hub for interaction between the scientific community and industry. (Wikipedia)

Bates D.,University of Wisconsin - Madison | Machler M.,ETH Zurich | Bolker B.M.,McMaster University | Walker S.C.,McMaster University
Journal of Statistical Software | Year: 2015

Maximum likelihood or restricted maximum likelihood (REML) estimates of the parameters in linear mixed-effects models can be determined using the lmer function in the lme4 package for R. As for most model-fitting functions in R, the model is described in an lmer call by a formula, in this case including both fixed- and random-effects terms. The formula and data together determine a numerical representation of the model from which the profiled deviance or the profiled REML criterion can be evaluated as a function of some of the model parameters. The appropriate criterion is optimized, using one of the constrained optimization functions in R, to provide the parameter estimates. We describe the structure of the model, the steps in evaluating the profiled deviance or REML criterion, and the structure of the classes or types that represent such a model. Sufficient detail is included to allow specialization of these structures by users who wish to write functions to fit specialized linear mixed models, such as models incorporating pedigrees or smoothing splines, that are not easily expressible in the formula language used by lmer. © 2015, American Statistical Association. All rights reserved.
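The estimation problem lmer solves is easiest to see in its simplest case, a balanced one-way random-intercept model (y ~ 1 + (1 | group) in the formula language). The sketch below is a minimal pure-Python illustration using the classical ANOVA decomposition, whose variance-component estimates coincide with REML for balanced data when MSB >= MSE; it is not lme4's actual penalized-least-squares algorithm, and the measurements are hypothetical.

```python
# Variance components for the balanced one-way random-intercept model
# y_ij = mu + b_i + e_ij, b_i ~ N(0, sigma2_b), e_ij ~ N(0, sigma2_e).

groups = [  # 3 groups x 4 replicates (toy data, assumed)
    [10.1, 9.8, 10.3, 10.0],
    [12.2, 11.9, 12.4, 12.1],
    [9.0, 9.3, 8.8, 9.1],
]
k = len(groups)          # number of groups
n = len(groups[0])       # replicates per group (balanced)

grand = sum(sum(g) for g in groups) / (k * n)
means = [sum(g) / n for g in groups]

# Between- and within-group mean squares
msb = n * sum((m - grand) ** 2 for m in means) / (k - 1)
mse = sum((y - m) ** 2 for g, m in zip(groups, means) for y in g) / (k * (n - 1))

sigma2_e = mse                      # residual variance estimate
sigma2_b = max((msb - mse) / n, 0)  # random-intercept variance estimate

print(sigma2_e, sigma2_b)
```

For unbalanced or multi-factor designs no such closed form exists, which is why lmer optimizes the profiled deviance or REML criterion numerically.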

Foster S.,ETH Zurich | Manser T.,University of Fribourg
Academic Medicine | Year: 2012

Purpose: To summarize the available evidence about patient handoff characteristics and their impact on subsequent patient care in hospitals. Method: In January and February 2011, the authors searched the Cochrane Central Register of Controlled Trials, PubMed, Embase, CINAHL, PsycINFO, ERIC, ISI Web of Science, and the reference lists of relevant articles to carry out their systematic review. They selected articles that (1) had patient handoffs in hospitals as their explicit research focus and (2) reported at least one statistical test of an association between a handoff characteristic and an outcome. They assessed study quality using 11 quality indicators. Results: The authors identified 18 articles reporting 37 statistical associations between a handoff characteristic and an outcome. The only handoff characteristic investigated in more than one study was the use of a standardized handoff sheet, examined in 12 studies; seven of those 12 reported significant improvements after introduction of the sheet. Four of the 18 studies used a randomized controlled trial design. Conclusions: Published research is highly diverse and idiosyncratic regarding the handoff characteristics and outcomes assessed and the methodologies used, so comparing studies and drawing general conclusions about the field are difficult endeavors. The quality of research on the topic is rather preliminary, and there is not yet enough research to inform evidence-based handoff strategies. Future research, then, should focus on research methods, which outcomes should be assessed, handoff characteristics beyond information transfer, mechanisms that link handoff characteristics and outcomes, and the conditions that moderate the characteristics' effects.

Stenflo J.O.,ETH Zurich
Astronomy and Astrophysics | Year: 2012

The average unsigned magnetic flux density in magnetograms of the quiet Sun is generally dominated by instrumental noise. Due to the entirely different scaling behavior of the noise and the solar magnetic pattern it has been possible to determine the standard deviation of the Gaussian noise distribution and remove the noise contribution from the average unsigned flux density for the whole 15-yr SOHO/MDI data set and for a selection of SDO/HMI magnetograms. There is a very close correlation between the MDI disk-averaged unsigned vertical flux density and the sunspot number, and regression analysis gives a residual level of 2.7 G when the sunspot number is zero. The selected set of HMI magnetograms, which spans the most quiet phase of solar activity, has a lower limit of 3.0 G to the noise-corrected average flux density. These apparently cycle-independent levels may be identified as a basal flux density, which represents an upper limit to the possible flux contribution from a local dynamo, but not evidence for its existence. The 3.0 G HMI level, when scaled to the Hinode spatial resolution, translates to 3.5 G, which means that the much higher average flux densities always found by Hinode in quiet regions do not originate from a local dynamo. The contributions to the average unsigned flux density come almost exclusively from the extended wings of the probability density function, also in the case of HMI magnetograms with only basal-level magnetic flux. These wings represent intermittent magnetic flux. As the global dynamo continually feeds flux into the small scales at a fast rate through turbulent shredding, a hypothetical local dynamo may only be relevant to the Sun if its rate of flux build-up can be competitive. 
While the global dynamo appears to dominate the magnetic energy spectrum at all the resolved spatial scales, there are indications from the observed Hanle depolarization in atomic lines that the local dynamo may dominate the spectrum at scales of order 1-10 km and below. © 2012 ESO.
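The noise subtraction above rests on a basic property of Gaussian noise: for pixels containing pure noise of standard deviation sigma, the expected unsigned value is sigma*sqrt(2/pi). The snippet below verifies this on simulated noise-only "magnetogram" pixels; it is only the elementary fact underlying the correction, not the paper's actual MDI/HMI scaling analysis, and the noise level is illustrative.

```python
import math
import random

# For x ~ N(0, sigma), E|x| = sigma * sqrt(2/pi): the floor that pure
# instrumental noise contributes to a disk-averaged unsigned flux density.

random.seed(42)
sigma = 2.7  # noise level in gauss (illustrative, echoing the quoted basal value)
pixels = [random.gauss(0.0, sigma) for _ in range(200_000)]

mean_unsigned = sum(abs(b) for b in pixels) / len(pixels)
expected = sigma * math.sqrt(2.0 / math.pi)

print(mean_unsigned, expected)
```

Once sigma is known from the scaling behavior of the noise, this predictable contribution can be removed from the measured average unsigned flux density.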

Barrabes N.,Vienna University of Technology | Sa J.,ETH Zurich
Applied Catalysis B: Environmental | Year: 2011

Catalytic hydrogenation of nitrates has been studied since the early 1990s. Despite the encouraging initial results, the problem of ammonium by-product formation is yet to be solved. This manuscript aims to give an overview of past and present research in the field and to propose some key areas that should be addressed to improve current and newly developed systems. © 2011 Elsevier B.V.

This paper presents an empirical analysis of the residential demand for electricity by time of day. The analysis uses aggregate data at the city level for 22 Swiss cities for the period 2000-2006. For this purpose, we estimated two log-log demand equations for peak and off-peak electricity consumption using static and dynamic partial adjustment approaches. These demand functions were estimated using several econometric approaches for panel data, for example LSDV and RE for static models, and LSDV and corrected LSDV estimators for dynamic models. This empirical analysis aims to highlight some of the characteristics of Swiss residential electricity demand. The estimated short-run own-price elasticities are lower than 1, whereas the long-run values are higher than 1. The estimated short-run and long-run cross-price elasticities are positive, showing that peak and off-peak electricity are substitutes. In this context, time-differentiated prices should provide an economic incentive for customers to modify their consumption patterns by reducing peak demand and shifting electricity consumption from peak to off-peak periods. © 2011 Elsevier Ltd.
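In a log-log demand equation ln(q) = a + e*ln(p), the slope e is directly the (constant) price elasticity, which is why the log-log form is convenient here. A minimal two-variable OLS sketch on hypothetical price/consumption pairs illustrates this; the paper's panel estimators (LSDV, RE, corrected LSDV) add fixed effects and dynamics on top of this basic regression.

```python
import math

# Price elasticity as the OLS slope of ln(quantity) on ln(price).
prices = [10.0, 12.0, 15.0, 18.0, 22.0]         # assumed peak-period prices
quantity = [500.0, 430.0, 360.0, 310.0, 260.0]  # assumed peak consumption

x = [math.log(p) for p in prices]
y = [math.log(q) for q in quantity]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n

elasticity = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) \
    / sum((xi - xbar) ** 2 for xi in x)

print(elasticity)
```

For this toy data the elasticity comes out negative with magnitude below 1, i.e. inelastic demand, matching the qualitative short-run finding quoted in the abstract.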

The hypothesis that large earthquakes may be preceded by a period of accelerating seismicity, or Accelerating Seismic Release (ASR), was proposed about twenty years ago. A compilation of almost one hundred peer-reviewed publications on this topic, from the late 1980s to the present day, shows that the rate of ASR studies increased gradually until 2004 but decreased afterwards. This negative trend is amplified by a recent increase in the number of negative results and criticisms of the ASR hypothesis. The author suggests that much of the recent negativity regarding this topic is due to the formulation of the hypothesis as a power-law fit to cumulative seismicity series. This approach is intrinsically linked to the consensus for criticality, evident from an overview of the ASR literature, to explain the emergence of power laws in earthquake populations. The holistic view of the earth's crust as a complex system restricts seismicity pattern analyses to the study of main features such as power laws, while a reductionist view would allow for more refined ones. Such a paradigm shift, or 'sea change', might be under way in the ASR literature, where in 2007 a new approach was proposed to explain the ASR power law from combined concepts of elastic rebound and geometry. Reductionism versus holism is a fundamental problem that applies not only to the study of ASR but also to the broader field of earthquake physics and earthquake predictability science. © 2011 Elsevier B.V.
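The power-law formulation criticized here is usually the time-to-failure relation for cumulative Benioff strain, eps(t) = A + B*(tf - t)^m with m < 1, often quantified by a curvature parameter C = (power-law fit rms)/(linear fit rms), where C < 1 signals acceleration (a Bowman-style analysis). The sketch below evaluates this on noise-free synthetic data with tf and m held fixed; real analyses must search over tf and m, which is part of the method's fragility.

```python
tf, m = 10.0, 0.3
times = [i * 0.5 for i in range(19)]              # t = 0 .. 9 (before tf)
eps = [5.0 - 2.0 * (tf - t) ** m for t in times]  # synthetic accelerating series

def ols(xs, ys):
    """Closed-form simple regression: returns (intercept, slope)."""
    n = len(xs)
    xb, yb = sum(xs) / n, sum(ys) / n
    b = sum((x - xb) * (y - yb) for x, y in zip(xs, ys)) \
        / sum((x - xb) ** 2 for x in xs)
    return yb - b * xb, b

def rms(xs, ys):
    a, b = ols(xs, ys)
    return (sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys)) / len(xs)) ** 0.5

rms_linear = rms(times, eps)                          # straight-line fit in t
rms_power = rms([(tf - t) ** m for t in times], eps)  # linear in (tf - t)^m
C = rms_power / rms_linear

print(C)  # well below 1 for an accelerating series
```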

Seebach D.,ETH Zurich
Angewandte Chemie - International Edition | Year: 2011

Two at a time: The replacement of the carbonyl oxygen atom in amides, lactams, aldehydes, ketones, and acid chlorides by two new substituents (C=O→CR1R2) can be carried out with organometallic reagents (Li, Mg, Ti, Zr, Ce). Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

Isakov S.V.,ETH Zurich | Hastings M.B.,Duke University | Hastings M.B.,University of California at Santa Barbara | Melko R.G.,University of Waterloo
Nature Physics | Year: 2011

The Landau paradigm of classifying phases by broken symmetries was shown to be incomplete when it was realized that different quantum-Hall states can only be distinguished by more subtle, topological properties. The role of topology as an underlying description of order has since branched out to include topological band insulators and certain featureless gapped Mott insulators with a topological degeneracy in the ground-state wavefunction. Despite intense work, very few candidates for such topologically ordered 'spin liquids' exist. The main difficulty in finding systems that harbour spin-liquid states is the very fact that they violate the Landau paradigm, making conventional order parameters non-existent. Here, we describe a spin-liquid phase in a Bose-Hubbard model on the kagome lattice, and determine its topological order directly by means of a measure known as the topological entanglement entropy. We thus identify a non-trivial spin liquid through its entanglement entropy as a gapped ground state with emergent Z2 gauge symmetry. © 2011 Macmillan Publishers Limited. All rights reserved.
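The topological entanglement entropy gamma is commonly extracted with the Kitaev-Preskill combination of entanglement entropies of three adjoining regions, -gamma = S_A + S_B + S_C - S_AB - S_BC - S_AC + S_ABC, chosen so the boundary-law pieces cancel. For Z2 (toric-code-like) topological order, gamma = ln 2. The sketch below constructs entropies from an assumed boundary law S = alpha*L - gamma purely to show the cancellation; the paper instead measures such entropies by quantum Monte Carlo.

```python
import math

# Kitaev-Preskill extraction of the topological entanglement entropy gamma
# from entropies obeying an (assumed) boundary law S = alpha * L - gamma.
alpha, gamma = 0.8, math.log(2.0)  # gamma = ln 2 for Z2 topological order
# Boundary lengths chosen so the alpha terms cancel in the combination
# (the real Kitaev-Preskill geometry guarantees this by construction).
L = {"A": 10, "B": 10, "C": 10, "AB": 16, "BC": 16, "AC": 16, "ABC": 18}
S = {region: alpha * length - gamma for region, length in L.items()}

def kitaev_preskill(S):
    return -(S["A"] + S["B"] + S["C"]
             - S["AB"] - S["BC"] - S["AC"] + S["ABC"])

print(kitaev_preskill(S))  # recovers gamma = ln 2
```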

Franck C.M.,ETH Zurich
IEEE Transactions on Power Delivery | Year: 2011

The continuously increasing demand for electric power and the economic access to remote renewable energy sources such as off-shore wind power or solar thermal generation in deserts have revived the interest in high-voltage direct current (HVDC) multiterminal systems (networks). Much work was done in this area, especially in the 1980s, but only two three-terminal systems were realized. Since then, HVDC technology has advanced considerably and, despite numerous technical challenges, the realization of large-scale HVDC networks is now seriously discussed and considered. For the acceptance and reliability of these networks, the availability of HVDC circuit breakers (CBs) will be critical, making them one of the key enabling technologies. Numerous ideas for HVDC breaker schemes have been published and patented, but no acceptable solution has been found to interrupt HVDC short-circuit currents. This paper aims to summarize the literature, especially that of the last two decades, on technology areas that are relevant to HVDC breakers. By comparing the mostly 20+-year-old state-of-the-art HVDC CBs with the new HVDC technology, existing discrepancies become evident. Areas where additional research and development are needed are identified and proposed. © 2011 IEEE.

Habert G.,ETH Zurich
International Journal of Life Cycle Assessment | Year: 2013

Purpose: The most efficient way to reduce the environmental impact of cement production is to replace Portland cement with alternative cementitious materials. These are most often industrial wastes such as blast-furnace slag (GBFS) and coal combustion fly ash (FA). However, a recent European directive no longer considers these products as waste but as by-products. Therefore, the impact of their production has to be considered. Within this new framework, this study develops a method for evaluating their environmental impacts. Method: This paper presents pre-existing methods and underlines their limits. Through our evaluation of these methods, it has become clear that an allocation procedure is necessary; however, results depend highly on the chosen allocation procedure. This study presents a new allocation method, based on the fact that both cement and the alternative materials, GBFS and FA, are produced by energy-intensive industries (cement, iron and coal) which are all subject to the European Union Greenhouse Gas Emission Trading System. In this carbon trading system, it is economically beneficial for industries to reduce their environmental impact, for example when by-products from one industry are used as alternative 'green' materials by another industry. Our allocation coefficient is calculated so that the economic gains and losses are the same for all of the industries involved in these exchanges, and it reflects the overall environmental benefit of the exchanges. Results and discussion: The discussion shows that whilst this method has much in common with other allocation methods, it is more accurate, as it allocates the environmental costs fairly over the industries involved, and more robust, because of its constant value. One of its limits is that it cannot be used for life cycle inventories; however, we test the possibility of choosing a coefficient from one impact category and applying it to all the others.
Conclusion: Lastly, the technical term of the equation this paper presents could be employed in consequential life cycle assessment to calculate the most environmentally beneficial uses to which by-products could be put. © 2012 Springer-Verlag.
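The sensitivity to the allocation choice is easy to demonstrate with two standard baselines (not the paper's EU-ETS-based coefficient): allocating a blast furnace's CO2 burden to its slag by-product by mass share versus by economic share gives very different answers, because slag is heavy but cheap. All numbers below are hypothetical.

```python
# Mass-based vs. economic allocation of a shared process burden.
co2_total = 1000.0                      # kg CO2 per batch (assumed)
mass = {"iron": 1000.0, "slag": 300.0}  # kg per batch (assumed)
price = {"iron": 0.40, "slag": 0.02}    # currency per kg (assumed)

def allocate(key_values, burden):
    """Split a burden proportionally to the given per-product key values."""
    total = sum(key_values.values())
    return {k: burden * v / total for k, v in key_values.items()}

by_mass = allocate(mass, co2_total)
by_value = allocate({k: mass[k] * price[k] for k in mass}, co2_total)

print(by_mass["slag"], by_value["slag"])  # very different slag burdens
```

Both schemes conserve the total burden, yet the slag's allocated impact differs by more than an order of magnitude, which is exactly the instability the paper's constant-coefficient method is designed to avoid.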

Quack M.,ETH Zurich
Faraday Discussions | Year: 2011

We review the frontiers of spectroscopy from a historical perspective, starting with the development of atomic spectroscopy about 150 years ago, followed by some comments on selected previous Faraday Discussions. As the spectrum of frontiers at the Faraday Discussion 150 is very broad, we give only a brief survey providing a map of the various frontiers approached today. This is followed by an exemplary discussion of one particular frontier towards the spectroscopic detection of symmetry violations in fundamental physics. In particular the understanding of parity violation in chiral molecules has recently made great progress. We briefly describe the advances made in recent decades as well as the current status of theory and experiments in this exciting field of research. We conclude with an outlook on open questions and frontiers of the future in spectroscopy. © The Royal Society of Chemistry 2011.

Gerya T.,ETH Zurich
Journal of Geodynamics | Year: 2011

During the last four decades, subduction remained one of the most challenging and captivating geodynamic processes investigated with numerical techniques. While significant progress has been made toward deciphering the diverse array of subduction zone observations within the context of modeled physical processes, numerous questions remain regarding multiple aspects of subduction zone dynamics. A review of recent numerical studies highlights a number of open topics that subduction modeling can provide significant insight into in the future:
1. Resolving the controversy of subduction initiation.
2. Constraining robust high-resolution models of terrestrial plate tectonics.
3. Understanding deep slab processes in the mantle.
4. Constraining crustal growth and differentiation in magmatic arcs.
5. Modeling of fluid and melt transport in subduction zones.
6. Deciphering evolution of high- and ultrahigh-pressure rock complexes.
7. Developing geochemical-thermo-mechanical models of subduction.
8. Coupling of subduction models with volcanic and seismic risk assessment.
9. Understanding the onset of plate tectonics on Earth.
Progress in subduction modeling will require strong input from other disciplines (rheology, phase petrology, seismic tomography, geochemistry, numerical analysis, etc.). Indeed, due to the intrinsic complexity of terrestrial subduction processes, the role of geodynamic modeling will inevitably grow and provide an integrative basis for conducting quantitative cross-disciplinary subduction studies combining natural observations, laboratory experiments and modeling. © 2011 Elsevier Ltd.

Geering B.,ETH Zurich
International Journal of Biochemistry and Cell Biology | Year: 2015

Death-associated protein kinase 2 (DAPK2/DRP-1) belongs to a family of five related serine/threonine kinases that mediate a range of cellular processes, including membrane blebbing, apoptosis, and autophagy, and possess tumour suppressive functions. The three most conserved family members DAPK1/DAPK, DAPK2 and DAPK3/ZIPK share a high degree of homology in their catalytic domain, but differ significantly in their extra-catalytic structures and tissue-expression profiles. Hence, each orthologue binds to various unique interaction partners, localizes to different subcellular regions and controls some dissimilar cellular functions. In recent years, mechanistic studies have broadened our knowledge of the molecular mechanisms that activate DAPK2 and that execute DAPK2-mediated apoptosis, autophagy and inflammation. In this "molecules in focus" review on DAPK2, the structure, modes of regulation and various cellular functions of DAPK2 will be summarized and discussed. © 2015 Elsevier Ltd. All rights reserved.

Ottinger H.C.,ETH Zurich
Physical Review Letters | Year: 2010

A Reply to the Comment by Henning Struchtrup and Manuel Torrilhon. © 2010 The American Physical Society.

Park H.G.,ETH Zurich | Jung Y.,Korea Advanced Institute of Science and Technology
Chemical Society Reviews | Year: 2014

Fast transport of water inside nanoscale graphitic surfaces, namely carbon nanotubes and graphene, forms the basis of aqueous carbon nanofluidic phenomena for which there are numerous applications in energy and environmental fields. In this tutorial review, we provide the basic principles of nanofluidics using carbon materials. We also address thermodynamic and structural aspects of favourable water confinement between hydrophobic carbon surfaces. We outline the experimental and theoretical history and consider the current status of ultrafast water transport inside carbon nanotubes, together with recent applications of aqueous carbon nanofluidic phenomena for energy generation and energy storage. We briefly discuss the challenges and opportunities for utilization of fast water transport in the graphitic nanoenvironment as a means to offer a clean and sustainable source of energy technology. © The Royal Society of Chemistry.

For decades researchers have studied the On/Off counting problem, where a measured rate consists of two parts: one due to a signal process and the other due to a background process, the magnitudes of both being unknown. While most frequentist methods are adequate for large number counts, they cannot be applied to sparse data. Here, I present a new objective Bayesian solution that depends on only three parameters: the number of events in the signal region, the number of events in the background region, and the ratio of the exposures of the two regions. First, the probability that the counts are due to background alone is derived analytically. Second, the marginalized posterior for the signal parameter is also derived analytically. With this two-step approach it is easy to calculate the signal's significance, strength, uncertainty, or upper limit in a unified way. This approach is valid without restrictions for any number count, including zero, and may be widely applied in particle physics, cosmic-ray physics, and high-energy astrophysics. In order to demonstrate the performance of this approach, I apply the method to gamma-ray burst data. © 2014. The American Astronomical Society. All rights reserved.
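The marginalization the paper performs analytically can be sketched numerically: with a flat prior, the off-region counts give a Gamma(Noff + 1, 1) posterior for the background rate b, and the posterior for the signal strength s follows by integrating b out of the on-region Poisson likelihood. The grid integration below is a brute-force stand-in for the paper's closed-form result; the counts and exposure ratio are made up.

```python
import math

Non, Noff, alpha = 10, 20, 0.2  # on counts, off counts, exposure ratio (assumed)

def poisson(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def background_pdf(b):
    # Gamma(Noff + 1, 1) posterior for the off-region background rate
    return b ** Noff * math.exp(-b) / math.factorial(Noff)

def marginal_likelihood(s, db=0.05, bmax=80.0):
    """P(Non | s), with the background rate integrated out numerically."""
    b, total = db, 0.0
    while b < bmax:
        total += poisson(Non, s + alpha * b) * background_pdf(b) * db
        b += db
    return total

# Unnormalized posterior for s on a grid (flat prior on s >= 0)
ds = 0.25
grid = [i * ds for i in range(121)]  # s = 0 .. 30
post = [marginal_likelihood(s) for s in grid]
norm = sum(post) * ds
mean_s = sum(s * p for s, p in zip(grid, post)) * ds / norm

print(mean_s)
```

Here the naive background estimate in the on region is alpha*Noff = 4 events, so with Non = 10 the posterior mean signal lands in the vicinity of 6-7 events.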

In this article, I develop a methodology for inferring the transmission rate and reproductive value of an epidemic on the basis of genotype data from a sample of infected hosts. The epidemic is modeled by a birth-death process describing the transmission dynamics in combination with an infinite-allele model describing the evolution of alleles. I provide a recursive formulation for the probability of the allele frequencies in a sample of hosts and a Bayesian framework for estimating transmission rates and reproductive values on the basis of observed allele frequencies. Using the Bayesian method, I reanalyze tuberculosis data from the United States. I estimate a net transmission rate of 0.19/year [0.13, 0.24] and a reproductive value of 1.02 [1.01, 1.04]. I demonstrate that the allele frequency probability under the birth-death model does not follow the well-known Ewens' sampling formula that holds under Kingman's coalescent. © 2011 by the Genetics Society of America.
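The generative model behind this inference can be simulated directly: each infected host transmits at rate lam and recovers at rate mu, and at transmission the pathogen mutates to a brand-new allele with probability p (the infinite-alleles assumption). The Gillespie-style sketch below produces one realization of the allele frequencies the paper's recursion assigns probabilities to; the rates are illustrative (net transmission rate lam - mu = 0.2/year, echoing the estimate quoted above).

```python
import random

random.seed(7)
lam, mu, p = 1.2, 1.0, 0.1  # transmission rate, recovery rate, mutation prob
pop = [0] * 10               # allele label carried by each infected host
next_allele, t = 1, 0.0

while 0 < len(pop) < 500:    # run until extinction or an outbreak cap
    rate = len(pop) * (lam + mu)
    t += random.expovariate(rate)          # time to next event
    host = random.randrange(len(pop))
    if random.random() < lam / (lam + mu):  # transmission event
        if random.random() < p:             # mutation: a novel allele
            pop.append(next_allele)
            next_allele += 1
        else:                               # transmit the parent allele
            pop.append(pop[host])
    else:                                   # recovery/death event
        pop.pop(host)

freqs = {}
for a in pop:
    freqs[a] = freqs.get(a, 0) + 1
print(len(pop), sorted(freqs.values(), reverse=True))
```

Note that the epidemic may also go extinct, which is itself a property of the birth-death model that Kingman's coalescent (and hence Ewens' sampling formula) does not capture.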

We study the statistical properties of turbulence driven by structure formation in a massive merging galaxy cluster at redshift z = 0. The development of turbulence is ensured, as the largest eddy turnover time is much shorter than the Hubble time, independent of mass and redshift. We achieve a large dynamic range of spatial scales through a novel Eulerian refinement strategy in which the cluster volume is refined with progressively finer uniform nested grids during gravitational collapse. This provides an unprecedented resolution of 7.3 h^-1 kpc across the virial volume. The probability density functions of various velocity-derived quantities exhibit the features characteristic of fully developed compressible turbulence observed in dedicated periodic-box simulations. Shocks generate only 60% of the total vorticity within the R_vir/3 region and 40% beyond that. We compute second- and third-order longitudinal and transverse structure functions for both solenoidal and compressional components in the cluster core, virial region, and beyond. The structure functions exhibit a well-defined inertial range of turbulent cascade. The injection scale is comparable to the virial radius but increases toward the outskirts. Within R_vir/3, the spectral slope of the solenoidal component is close to Kolmogorov's, while that of the compressional component is substantially steeper and close to Burgers's; the flow is mostly solenoidal and, statistically, consistent with fully developed homogeneous and isotropic turbulence. Small-scale anisotropy appears, due to numerical artifacts. Toward the virial region, the flow becomes increasingly compressional, the structure functions become flatter, and modest genuine anisotropy appears, particularly close to the injection scale.
In comparison, mesh adaptivity based on Lagrangian refinement with the same finest resolution leads to a lack of turbulent power on small scales, an excess thereof on large scales, and unreliable density-weighted structure functions. © 2014. The American Astronomical Society. All rights reserved.
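The structure-function measurement itself is simple: S2(r) = <(u(x+r) - u(x))^2>, averaged over positions x. The toy below synthesizes a 1D periodic field with a Kolmogorov-like spectrum E(k) ~ k^(-5/3), for which S2(r) ~ r^(2/3) in the inertial range, and recovers a scaling exponent near 2/3; it is a 1D stand-in for the paper's 3D solenoidal/compressional analysis, with all parameters invented.

```python
import math
import random

random.seed(1)
N, K = 2048, 256  # grid points, Fourier modes
phases = [random.uniform(0, 2 * math.pi) for _ in range(K)]
amps = [(k + 1) ** (-5.0 / 6.0) for k in range(K)]  # |a_k|^2 ~ k^(-5/3)

# Synthetic velocity field on a periodic grid
u = [sum(a * math.cos((k + 1) * 2 * math.pi * j / N + ph)
         for k, (a, ph) in enumerate(zip(amps, phases)))
     for j in range(N)]

def s2(lag):
    """Second-order structure function at a lag in grid units."""
    return sum((u[(j + lag) % N] - u[j]) ** 2 for j in range(N)) / N

r1, r2 = 16, 64  # two inertial-range lags, a factor 4 apart
slope = math.log(s2(r2) / s2(r1)) / math.log(r2 / r1)

print(slope)  # near the Kolmogorov value 2/3 (finite cutoffs bias it slightly)
```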

Ries J.,ETH Zurich | Schwille P.,TU Dresden
BioEssays | Year: 2012

Fluorescence correlation spectroscopy (FCS) is a powerful technique to measure concentrations, mobilities, and interactions of fluorescent biomolecules. It can be applied to various biological systems such as simple homogeneous solutions, cells, artificial or cellular membranes, and whole organisms. Here, we introduce the basic principle of FCS, discuss its application to biological questions as well as its limitations and challenges, present an overview of novel technical developments to overcome those challenges, and conclude with speculations about the future applications of fluorescence fluctuation spectroscopy. © 2012 WILEY Periodicals, Inc.
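The quantity at the heart of FCS is the normalized intensity autocorrelation G(tau) = <dI(t) dI(t+tau)> / <I>^2, whose amplitude G(0) scales as 1/<N> for <N> independent emitters in the focal volume and whose decay time reports on the dynamics. The toy below computes G(tau) from a simulated trace of randomly blinking emitters (a stand-in for molecules diffusing through the focus); all parameters are invented.

```python
import random

random.seed(3)
N, T, q = 10, 100_000, 0.02  # emitters, time steps, per-step flip probability
state = [True] * N            # on/off state of each emitter
trace = []
for _ in range(T):
    state = [(not s) if random.random() < q else s for s in state]
    trace.append(sum(state))  # detected intensity = number of emitters "on"

mean_i = sum(trace) / T

def G(tau):
    """Normalized intensity autocorrelation at lag tau."""
    pairs = T - tau
    cov = sum((trace[k] - mean_i) * (trace[k + tau] - mean_i)
              for k in range(pairs)) / pairs
    return cov / mean_i ** 2

print(G(1), G(200))  # finite amplitude at short lags, decayed at long lags
```

For this two-state model G(tau) decays exponentially; for diffusion through a 3D Gaussian focus the decay instead follows the familiar (1 + tau/tau_D)^-1 form fitted in real FCS experiments.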

Miniati F.,ETH Zurich | Martin D.F.,Lawrence Berkeley National Laboratory
Astrophysical Journal, Supplement Series | Year: 2011

We present the implementation of a three-dimensional, second-order accurate Godunov-type algorithm for magnetohydrodynamics (MHD) in the adaptive-mesh-refinement (AMR) cosmological code CHARM. The algorithm is based on the full 12-solve spatially unsplit corner-transport-upwind (CTU) scheme. The fluid quantities are cell-centered and are updated using the piecewise-parabolic method (PPM), while the magnetic field variables are face-centered and are evolved through application of Stokes' theorem on cell edges via a constrained-transport (CT) method. The so-called multidimensional MHD source terms required in the predictor step for high-order accuracy are applied in a simplified form which reduces their complexity in three dimensions without loss of accuracy or robustness. The algorithm is implemented on an AMR framework which requires specific synchronization steps across refinement levels. These include face-centered restriction and prolongation operations and a reflux-curl operation, which maintains a solenoidal magnetic field across refinement boundaries. The code is tested against a large suite of test problems, including convergence tests in smooth flows, shock-tube tests, classical two- and three-dimensional MHD tests, a three-dimensional shock-cloud interaction problem, and the formation of a cluster of galaxies in a fully cosmological context. The magnetic field divergence is shown to remain negligible throughout. © 2011. The American Astronomical Society. All rights reserved.
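Why constrained transport keeps the field solenoidal is worth seeing once in miniature: face-centered B is updated from edge-centered EMFs via a discrete Stokes theorem, and the corner contributions cancel exactly in the discrete divergence. The 2D periodic sketch below applies one CT update with an arbitrary (random) EMF and checks that the cell-wise divergence stays at machine zero; grid spacing and dt are set to 1 and 0.1 for simplicity, and this is of course a cartoon of the scheme, not CHARM's implementation.

```python
import random

random.seed(0)
n, dt = 8, 0.1
bx = [[1.0] * n for _ in range(n)]  # Bx on x-faces (i+1/2, j)
by = [[0.5] * n for _ in range(n)]  # By on y-faces (i, j+1/2)
ez = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]  # corner EMFs

def div_b(i, j):
    # Discrete divergence of the face-centered field in cell (i, j),
    # with dx = dy = 1 and periodic wrap via Python's negative indexing.
    return (bx[i][j] - bx[i - 1][j]) + (by[i][j] - by[i][j - 1])

max_div0 = max(abs(div_b(i, j)) for i in range(n) for j in range(n))

# CT update: dB/dt = -curl(E), taken as the circulation of Ez around each face
for i in range(n):
    for j in range(n):
        bx[i][j] -= dt * (ez[i][j] - ez[i][(j - 1) % n])
        by[i][j] += dt * (ez[i][j] - ez[(i - 1) % n][j])

max_div1 = max(abs(div_b(i, j)) for i in range(n) for j in range(n))
print(max_div0, max_div1)  # both at machine zero
```

The same cancellation is what the paper's reflux-curl operation restores across AMR refinement boundaries, where coarse and fine faces would otherwise disagree.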

Ellis R.K.,Fermi National Accelerator Laboratory | Kunszt Z.,ETH Zurich | Melnikov K.,University of Baltimore | Zanderighi G.,University of Oxford
Physics Reports | Year: 2012

The success of the experimental program at the Tevatron reinforced the idea that precision physics at hadron colliders is desirable and, indeed, possible. The Tevatron data strongly suggest that one-loop computations in QCD describe hard scattering well. Extrapolating this observation to the LHC, we conclude that knowledge of many short-distance processes at next-to-leading order may be required to describe the physics of hard scattering. While the field of one-loop computations is quite mature, parton multiplicities in hard LHC events are so high that traditional computational techniques become inefficient. Recently, new approaches based on unitarity have been developed for calculating one-loop scattering amplitudes in quantum field theory. These methods are especially suitable for the description of multi-particle processes in QCD and are amenable to numerical implementations. We present a systematic pedagogical description of both conceptual and technical aspects of the new methods. © 2012 Elsevier B.V.

D'Souza R.M.,University of California at Davis | Nagler J.,ETH Zurich
Nature Physics | Year: 2015

The emergence of large-scale connectivity on an underlying network or lattice, the so-called percolation transition, has a profound impact on the system's macroscopic behaviours. There is thus great interest in controlling the location of the percolation transition to either enhance or delay its onset and, more generally, in understanding the consequences of such control interventions. Here we review explosive percolation, the sudden emergence of large-scale connectivity that results from repeated, small interventions designed to delay the percolation transition. These transitions exhibit drastic, unanticipated and exciting consequences that make explosive percolation an emerging paradigm for modelling real-world systems ranging from social networks to nanotubes. © 2015 Macmillan Publishers Limited. All rights reserved.
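The canonical "small intervention" is the product rule, an Achlioptas process: at each step two candidate edges are drawn and only the one minimizing the product of its endpoint component sizes is added, which suppresses cluster growth and delays the transition relative to ordinary random (Erdos-Renyi-style) edge addition. A union-find sketch:

```python
import random

def largest_component(n, edges_to_add, product_rule, rng):
    parent = list(range(n))
    size = [1] * n

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            if size[ra] < size[rb]:
                ra, rb = rb, ra
            parent[rb] = ra          # union by size
            size[ra] += size[rb]

    for _ in range(edges_to_add):
        e1 = (rng.randrange(n), rng.randrange(n))
        if product_rule:
            e2 = (rng.randrange(n), rng.randrange(n))
            # Keep the edge whose endpoint components have the smaller product
            if (size[find(e1[0])] * size[find(e1[1])]
                    > size[find(e2[0])] * size[find(e2[1])]):
                e1 = e2
        union(*e1)

    return max(size[find(v)] for v in range(n))

n, m = 2000, 1600  # 0.8 edges per node: supercritical for ER, not for the PR
big_er = largest_component(n, m, False, random.Random(11))
big_pr = largest_component(n, m, True, random.Random(11))
print(big_er, big_pr)
```

At the same edge density the random process has already grown a giant component while the product-rule process has not, which is the delayed, and ultimately abrupt, onset the review discusses.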

Keller D.,ETH Zurich
Structural and Multidisciplinary Optimization | Year: 2011

A method aimed at the optimization of locally varying laminates is investigated. The structure is partitioned into geometrical sections, which are covered by global plies. A variable-length representation scheme for an evolutionary algorithm is developed. This scheme encodes the number of global plies, as well as their thickness, material, and orientation. A set of genetic variation operators tailored to this particular representation is introduced. Sensitivity information assists the genetic search in the placement of reinforcements and the optimization of ply angles. The method is investigated on two benchmark applications, on both of which it finds significant improvements. A case study of an airplane's side rudder illustrates the applicability of the method to typical engineering problems. © Springer-Verlag 2010.
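The essence of a variable-length representation is that mutation can change the genome's size, not just its values. A minimal sketch in that spirit: a laminate genome is a list of plies (angle, thickness), and mutation may perturb a ply, add one, or delete one. The toy fitness (match a target total thickness while preferring 0/45/90-degree plies) and all parameters are assumptions for illustration, not the paper's objective or operators.

```python
import random

random.seed(5)
TARGET = 12.0  # target laminate thickness (assumed units)

def fitness(genome):
    thickness = sum(t for _, t in genome)
    misalignment = sum(min(abs(a - c) for c in (0, 45, 90)) for a, _ in genome)
    return -abs(thickness - TARGET) - 0.01 * misalignment

def mutate(genome):
    g = [list(ply) for ply in genome]
    op = random.random()
    if op < 0.2 and len(g) > 1:
        g.pop(random.randrange(len(g)))                  # delete a ply
    elif op < 0.4:
        g.append([random.uniform(0, 90), random.uniform(0.5, 2.0)])  # add a ply
    else:
        ply = random.choice(g)                           # perturb a ply
        ply[0] = min(90.0, max(0.0, ply[0] + random.gauss(0, 5)))
        ply[1] = max(0.1, ply[1] + random.gauss(0, 0.1))
    return [tuple(ply) for ply in g]

# (mu + lambda)-style loop starting from single-ply genomes
pop = [[(random.uniform(0, 90), random.uniform(0.5, 2.0))] for _ in range(20)]
start = max(fitness(g) for g in pop)
for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(10)]
best = max(fitness(g) for g in pop)
print(start, best)
```

The add/delete operators let the search discover how many plies are needed, which fixed-length encodings cannot do.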

Dittmar M.,ETH Zurich
Science of the Total Environment | Year: 2013

Historic data from many countries demonstrate that on average no more than 50-70% of the uranium in a deposit could be mined. An analysis of more recent data from Canada and Australia leads to a mining model with an average deposit extraction lifetime of 10 ± 2 years. This simple model provides an accurate description of the extractable amount of uranium for the recent mining operations. Using this model for all larger existing and planned uranium mines up to 2030, a global uranium mining peak of at most 58 ± 4 ktons around the year 2015 is obtained. Thereafter we predict that uranium mine production will decline to at most 54 ± 5 ktons by 2025 and, with the decline steepening, to at most 41 ± 5 ktons around 2030. This amount will not be sufficient to fuel the existing and planned nuclear power plants during the next 10-20 years. In fact, we find that it will be difficult to avoid supply shortages even under a slow 1%/year worldwide nuclear energy phase-out scenario up to 2025. We thus suggest that a worldwide nuclear energy phase-out is in order. If such a slow global phase-out is not voluntarily effected, the end of the present cheap uranium supply situation will be unavoidable. The result will be that some countries will simply be unable to afford sufficient uranium fuel at that point, which implies involuntary and perhaps chaotic nuclear phase-outs in those countries, involving brownouts, blackouts, and worse. © 2013 Elsevier B.V.

Luisier M.,ETH Zurich
Chemical Society Reviews | Year: 2014

Computational chemistry deals with the first-principles calculation of electronic and crystal structures, phase diagrams, charge distributions, vibrational frequencies, or ion diffusivity in complex molecules and solids. Typically, none of these numerical experiments allows for the calculation of electrical currents under the influence of externally applied voltages. To address this issue, there is an imperative need for an advanced simulation approach capable of treating all kinds of transport phenomena (electron, energy, momentum) at a quantum mechanical level. The goal of this tutorial review is to give an overview of the "quantum transport" (QT) research activity, introduce specific techniques such as the Non-equilibrium Green's Function (NEGF) formalism, describe their basic features, and underline their strengths and weaknesses. Three examples from the nanoelectronics field have been selected to illustrate the insight provided by quantum transport simulations. Details are also given about the numerical algorithms to solve the NEGF equations and about strategies to parallelize the workload on supercomputers. © 2014 The Royal Society of Chemistry.
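A minimal NEGF calculation fits in a few lines for a single site coupled to two semi-infinite 1D tight-binding leads with hopping t: the retarded lead self-energy has the textbook closed form Sigma(E) = (E - i*sqrt(4t^2 - E^2))/2 for |E| < 2t, and the transmission is T(E) = Gamma_L |G(E)|^2 Gamma_R with Gamma = -2 Im Sigma. For a perfect chain (zero on-site energy, identical couplings) T(E) = 1 across the whole band, a standard sanity check for any NEGF code; this toy obviously sidesteps the large-scale algorithmic and parallelization issues the review discusses.

```python
import math

t = 1.0     # lead hopping
eps0 = 0.0  # on-site energy of the single device site

def sigma(E):
    # Retarded self-energy of a semi-infinite 1D tight-binding lead, |E| < 2t
    return (E - 1j * math.sqrt(4 * t * t - E * E)) / 2.0

def transmission(E):
    s = sigma(E)
    G = 1.0 / (E - eps0 - 2 * s)  # retarded Green's function (1x1 "matrix")
    gamma = -2.0 * s.imag          # broadening from each lead
    return gamma * gamma * abs(G) ** 2  # T = Gamma_L |G|^2 Gamma_R

energies = [-1.5, -0.7, 0.0, 0.9, 1.8]
T = [transmission(E) for E in energies]
print(T)  # unity across the band for the perfect chain
```

Adding a barrier (eps0 != 0) immediately reduces T below 1, which is the simplest current-under-bias physics the first-principles methods listed above cannot access on their own.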

Kuvshinov A.V.,ETH Zurich
Surveys in Geophysics | Year: 2012

This review paper summarizes advances in deep electromagnetic studies of the Earth in the past decade. The paper reports progress in data interpretation, with special emphasis on three-dimensional and quasi-one-dimensional developments and results. The results obtained from data of different origins (geomagnetic observatories, long-period magnetotelluric experiments, submarine cables, and low-Earth orbiting geomagnetic satellite missions) are described. Both frequency-domain and time-domain approaches are addressed. Perspectives for the future are also discussed. © 2011 Springer Science+Business Media B.V.

Gerya T.,ETH Zurich
Tectonophysics | Year: 2012

Mid-ocean ridges sectioned by transform faults represent prominent surface expressions of plate tectonics. A fundamental problem of plate tectonics is how this pattern has formed and why it is maintained. The gross-scale geometry of mid-ocean ridges is often inherited from the respective rifted margins. Indeed, transform faults seem to nucleate after the beginning of oceanic spreading and can spontaneously form at a single straight ridge. Both analog and numerical models of transform faults have been investigated since the 1970s. Two main groups of analog models were developed: thermomechanical (freezing wax) models with accreting and cooling plates, and mechanical models with non-accreting lithosphere. Freezing wax models reproduced ridge-ridge transform faults, inactive fracture zones, rotating microplates, overlapping spreading centers and other features of oceanic ridges. However, these models often produced open spreading centers that are dissimilar to nature. Mechanical models, on the other hand, do not accrete lithosphere, so their results are only applicable for relatively small amounts of spreading. Three main types of numerical models were investigated: models of stress and displacement distribution around transforms, models of their thermal structure and crustal growth, and models of nucleation and evolution of ridge-transform fault patterns. It was shown that a limited number of spreading modes can form: transform faults, microplates, overlapping spreading centers, zigzag ridges and oblique connecting spreading centers. However, controversy remains over whether these patterns always result from pre-existing ridge offsets or can also form spontaneously at a single straight ridge during millions of years of accretion. Accordingly, two types of transform fault interpretation exist: plate fragmentation structures vs. plate accretion structures. Models of transform faults are as yet relatively scarce and partly controversial. Consequently, a number of first-order questions remain open, and significant cross-disciplinary efforts combining natural observations, analog experiments, and numerical modeling will be needed in the future. © 2011 Elsevier B.V.

The core-mantle boundary (CMB) - the interface between the silicate mantle and liquid iron alloy outer core - is the most important boundary inside our planet, with processes occurring in the deep mantle above it playing a major role in the evolution of both the core and the mantle. The last decade has seen an astonishing improvement in our knowledge of this region due to improvements in seismological data and techniques for mapping both large- and small-scale structures, mineral physics discoveries such as post-perovskite and the iron spin transition, and dynamical modelling. The deep mantle is increasingly revealed as a very complex region characterised by large variations in temperature and composition, phase changes, melting (possibly at present and certainly in the past), and anisotropic structures. Here, some fundamentals of the relevant processes and uncertainties are reviewed in the context of long-term Earth evolution and how it has led to the observed present-day structures. Melting has been a dominant process in Earth's evolution. Several processes involving melting, some of which operated soon after Earth's formation and some of which operated throughout its history, have produced dense, iron-rich material that has likely sunk to the deepest mantle to be incorporated into a heterogeneous basal mélange (BAM) that is now evident seismically as two large low-velocity regions under Africa and the Pacific, but was probably much larger in the past. This BAM modulates core heat flux, plume formation and the separation of different slab components, and may contain various trace-element cocktails required to explain geochemical observations. The geographical location of BAM material has, however, probably changed through Earth's history due to the inherent time-dependence of plate tectonics and continental cycles. © 2011 Elsevier B.V.

Mylonitic gneisses of the Bulgarian and Greek Rhodope were deformed under amphibolite-facies conditions of medium-pressure-type metamorphism. The kinematic information contained in the strain regime and histories of these gneisses shows that ductile shear deformation occurred during development of a nappe complex. The nappe complex is characterised by south- to southwestward (forelandward) piling-up and both coeval and subsequent extension. Different lithologies, deformation and metamorphic histories discriminate lower (footwall) and upper (hangingwall) continental terranes that define a crustal-scale duplex. Ultrahigh-pressure metamorphic rocks, eclogites, ophiolitic and magmatic arc protoliths are found in various units of the crustal-scale duplex structure. These rocks delineate a suture zone between the hanging wall and footwall continental units. Synmetamorphic suturing and thrusting imply crustal thickening during the Cretaceous, which implies that the Rhodope massif is a complex of synmetamorphic nappes stacked in a Tethyan active margin environment. The two blocks involved in the collision are the Moesian part of the European continent to the north, and the Lower-Rhodope Terrane to the south, which was a migrating block detached from Pangea during the breakup of this supercontinent. Regional inversions of synmetamorphic sense of shear indicate that exhumation tectonics began in Cretaceous times, possibly linked to upward-forward expulsion of low-density arc and continental rocks. A Late Eocene marine transgression separates the early, late-orogenic extension/exhumation phase from another extension event accompanied by a major thermal and magmatic event and followed by the Miocene Aegean extension responsible for late grabens over the Rhodope Massif.

Werner P.,ETH Zurich | Millis A.J.,Columbia University
Physical Review Letters | Year: 2010

We present an efficient method for incorporating a dynamically screened interaction into the single-site dynamical mean field approximation widely used to treat correlation physics. Model system calculations are presented which capture the essential features of the frequency-dependent interactions previously obtained for Gd, Ni, SrVO3 and other compounds. Dynamical screening is shown to shift the metal-insulator phase boundary, to change the spectral function near the Mott-Hubbard gap edge, and to renormalize the quasiparticle weight. A multipeak spectral function is found, with shakeoff (Hubbard) bands which are generically neither separated by the screened nor by the unscreened interaction energy, implying that the common practice of extracting the Hubbard U from the energies of features in photoemission and inverse photoemission spectra requires reexamination. © 2010 The American Physical Society.

Schindler K.,ETH Zurich
IEEE Transactions on Geoscience and Remote Sensing | Year: 2012

An elementary piece of our prior knowledge about images of the physical world is that they are spatially smooth, in the sense that neighboring pixels are more likely to belong to the same object (class) than to different ones. The smoothness assumption becomes more important as sensor resolutions keep increasing, both because the radiometric variability within classes increases and because remote sensing is employed in more heterogeneous areas (e.g., cities), where shadow and shading effects, a multitude of materials, etc., degrade the measurement data, and prior knowledge plays a greater role. This paper gives a systematic overview of image classification methods that impose a smoothness prior on the labels. Both local filtering-type approaches and global random field models developed in other fields of image processing are reviewed, and two new methods are proposed. Then follows a detailed experimental comparison and analysis of the presented methods, using two different aerial data sets from urban areas with known ground truth. A main message of the paper is that when classifying data of high spatial resolution, smoothness greatly improves the accuracy of the result: in our experiments, by up to 33%. A further finding is that global random field models outperform local filtering methods and should be more widely adopted for remote sensing. Finally, the evaluation confirms that all methods already oversmooth when most effective, pointing out that there is a need to include more, and more complex, prior information in the classification process. © 2012 IEEE.
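The local filtering-type approaches mentioned above can be illustrated with a minimal majority-vote filter, in which each pixel adopts the most frequent label in its 3x3 neighbourhood. This is a toy stand-in for the smoothness prior, not one of the paper's actual methods:

```python
import numpy as np

def majority_smooth(labels):
    """Local filtering-type label smoothing: each pixel adopts the most
    frequent label in its 3x3 neighbourhood (edge-padded)."""
    h, w = labels.shape
    padded = np.pad(labels, 1, mode='edge')
    out = labels.copy()
    for i in range(h):
        for j in range(w):
            window = padded[i:i + 3, j:j + 3].ravel()
            out[i, j] = np.bincount(window).argmax()
    return out

# An isolated "salt" pixel inside a uniform region is smoothed away.
noisy = np.zeros((5, 5), dtype=int)
noisy[2, 2] = 1
smoothed = majority_smooth(noisy)
```

Global random field models differ from this local scheme in that they optimize a single energy over the whole label image, which is why they can outperform filtering, at the cost of a harder inference problem.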

Steurer W.,ETH Zurich
Chemical Society Reviews | Year: 2012

In the past two decades significant progress has been made in the search for stable quasicrystals, the determination of their structures and the understanding of their physical properties. Now, quasiperiodic ordering states are not only known for intermetallic compounds, but also for mesoscopic systems such as ABC-star terpolymers, liquid crystals or different kinds of colloids. However, in spite of all these achievements fundamental questions concerning quasicrystal formation, growth and stability are still not fully answered. This tutorial review introduces the research in these fields and addresses some of the open questions concerning the origin of quasiperiodicity. © 2012 The Royal Society of Chemistry.

Nesper R.,ETH Zurich
Zeitschrift fur Anorganische und Allgemeine Chemie | Year: 2014

The Zintl-Klemm concept (ZKC) is a combination of a valence electron counting rule and a set of structure-chemical considerations which works well for a continuously growing class of compounds between classical salts and classical covalent compounds on the one hand, and intermetallic phases on the other. In this historical article the lines of development of the ZKC are reviewed. They originate from Eduard Zintl in Darmstadt and were carried on and extended early on by Wilhelm Klemm in Münster. After World War II, different schools developed their own points of view on the ZKC, which converged in the 1990s, backed by blossoming quantum mechanical schemes for treating complex solids. The limits of applicability of the ZKC are still the subject of ongoing investigation, and it seems that the concept has reached a state of quite general acceptance in the solid state sciences. Copyright © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

Schneider A.,ETH Zurich
Physical Review Letters | Year: 2010

A Comment on the Letter by N.N. Zinov'ev, A.S. Nikoghosyan, and J.M. Chamberlain, Phys. Rev. Lett. 98, 044801 (2007), DOI: 10.1103/PhysRevLett.98.044801. The authors of the Letter offer a Reply. © 2010 The American Physical Society.

Dama J.F.,James Franck Institute | Parrinello M.,ETH Zurich | Voth G.A.,James Franck Institute
Physical review letters | Year: 2014

Metadynamics is a versatile and capable enhanced sampling method for the computational study of soft matter materials and biomolecular systems. However, over a decade of application and several attempts to give this adaptive umbrella sampling method a firm theoretical grounding have shown that a rigorous convergence analysis is elusive. This Letter describes such an analysis, demonstrating that well-tempered metadynamics converges to the final state it was designed to reach and, therefore, that the simple formulas currently used to interpret the final converged state of tempered metadynamics are correct and exact. The results do not rely on any assumption that the collective variable dynamics are effectively Brownian or on any idealizations of the hill deposition function; instead, they suggest new, more permissive criteria for the method to be well behaved. The results apply to tempered metadynamics with or without adaptive Gaussians or boundary corrections, and whether the bias is stored approximately on a grid or exactly.
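The tempering mechanism at issue can be sketched in one dimension: each deposited Gaussian hill is scaled by exp(-V(s)/ΔT), so hill heights decay as the accumulated bias grows and the bias converges rather than increasing without bound. All parameter values below are illustrative, not taken from the Letter:

```python
import math

# 1-D well-tempered hill deposition sketch (illustrative parameters).
def deposit_hills(visits, w0=1.0, sigma=0.2, dT=2.0):
    grid = [i * 0.01 for i in range(-100, 101)]   # collective-variable grid
    V = [0.0] * len(grid)                          # accumulated bias V(s)
    heights = []
    for s0 in visits:
        # Tempered hill height: scaled down by the bias already present at s0.
        k = min(range(len(grid)), key=lambda i: abs(grid[i] - s0))
        h = w0 * math.exp(-V[k] / dT)
        heights.append(h)
        # Add the Gaussian hill to the stored bias.
        for i, s in enumerate(grid):
            V[i] += h * math.exp(-(s - s0) ** 2 / (2 * sigma ** 2))
    return V, heights

# Repeatedly visiting the same CV value: deposited heights shrink monotonically.
V, heights = deposit_hills([0.0] * 5)
```

The decay of the deposition rate is precisely what distinguishes well-tempered from ordinary metadynamics, and it is this vanishing perturbation that the convergence analysis exploits.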

Jaccard S.L.,ETH Zurich | Galbraith E.D.,McGill University
Geophysical Research Letters | Year: 2013

Despite its tremendous size, the deep North Pacific has received relatively little attention from paleoceanographers. It was recently suggested that the deep North Pacific was directly ventilated by dense waters formed in the subarctic Pacific during Heinrich Stadial 1 (HS1) of the early deglaciation. Here we present new redox-sensitive trace metal data from a sediment core at 2393 m in the subarctic Pacific, in comparison with previously published data from elsewhere in the region. The combined picture shows no sign of ventilation during the early deglaciation in any available core from water depths of 2393 m and deeper, while the deepest core to display clear signs of enhanced ventilation during HS1 was raised from 1366 m water depth. Thus, it appears likely that, although the North Pacific was well ventilated to intermediate depths during HS1, the deep ocean did not receive a significant input of dense waters from a local source, but remained isolated from the surface waters above. © 2013. American Geophysical Union. All Rights Reserved.

Using fits to numerical simulations, we show that the entire hierarchy of moments quickly ceases to provide a complete description of the convergence one-point probability density function once it leaves the linear regime. This suggests that the full N-point correlation function hierarchy of the convergence field quickly becomes generically incomplete and a very poor cosmological probe on nonlinear scales. At the scale of unit variance, only 5% of the Fisher information content of the one-point probability density function is still contained in its hierarchy of moments, making clear that information escaping the hierarchy is a far stronger effect than information propagating to higher order moments. It follows that the constraints on cosmological parameters achievable through extraction of the entire hierarchy become suboptimal by large amounts. A simple logarithmic mapping makes the moment hierarchy well suited again for parameter extraction. © 2012 American Physical Society.
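The effect of the logarithmic mapping can be demonstrated on synthetic data: a lognormal field is heavily skewed and tailed, while its logarithm is Gaussian again, so low-order moments of the mapped field recover their descriptive power. This is a toy illustration, not the paper's simulations:

```python
import numpy as np

# Synthetic lognormal field with Gaussian "initial conditions".
rng = np.random.default_rng(0)
gauss = rng.normal(size=200_000)
field = np.exp(gauss)          # lognormal, unit log-variance

def skewness(x):
    """Sample skewness: third central moment over variance^(3/2)."""
    d = x - x.mean()
    return float((d ** 3).mean() / (d ** 2).mean() ** 1.5)
```

For a unit-log-variance lognormal the population skewness is (e + 2)*sqrt(e - 1), roughly 6.2, whereas the log-mapped field has skewness near zero, so its first two moments already characterise it almost completely.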

Mei J.-W.,ETH Zurich
Physical Review Letters | Year: 2012

We study the lightly doped Kitaev spin liquid (LDKSL) and find it to be a Fermi liquid. The LDKSL satisfies the two key properties of the standard Landau Fermi liquid: the low-energy quasiparticles are well defined, and the Fermi sea has the quantized volume determined by Luttinger's theorem. These features can be observed in angle-resolved photoemission spectroscopy measurements. Meanwhile, the LDKSL has the topological Kitaev spin liquid surrounding the Fermi sea, so the LDKSL violates the Wiedemann-Franz law and has a large Wilson ratio. These results are potentially verifiable experimentally in doped iridates. © 2012 American Physical Society.

Wang L.,Massachusetts Institute of Technology | Renner R.,ETH Zurich
Physical Review Letters | Year: 2012

The one-shot classical capacity of a quantum channel quantifies the amount of classical information that can be transmitted through a single use of the channel such that the error probability is below a certain threshold. In this work, we show that this capacity is well approximated by a relative-entropy-type measure defined via hypothesis testing. Combined with a quantum version of Stein's lemma, our results give a conceptually simple proof of the well-known Holevo-Schumacher-Westmoreland theorem for the capacity of memoryless channels. More generally, we obtain tight capacity formulas for arbitrary (not necessarily memoryless) channels. © 2012 American Physical Society.
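In the notation commonly used in one-shot quantum information theory, the hypothesis-testing quantity in question can be written as follows (a sketch of the standard definition, not a verbatim transcription of the Letter):

```latex
D_H^{\varepsilon}(\rho\|\sigma)
  \;=\; -\log \min\bigl\{ \operatorname{Tr}(Q\sigma) \;:\;
        0 \le Q \le \mathbb{1},\ \operatorname{Tr}(Q\rho) \ge 1-\varepsilon \bigr\},
```

that is, the best achievable type-II error exponent of a test distinguishing ρ from σ with type-I error at most ε. The one-shot classical capacity with error threshold ε is then approximated, up to small additive terms, by optimizing this quantity over input distributions, with ρ_XB the joint input-output state of the channel and ρ_X ⊗ ρ_B its uncorrelated counterpart.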

Parker R.J.,ETH Zurich | Goodwin S.P.,University of Sheffield
Monthly Notices of the Royal Astronomical Society | Year: 2012

Observations of binaries in clusters tend to be of visual binaries with separations of tens to hundreds of au. Such binaries are 'intermediates' and their destruction or survival depends on the exact details of their individual dynamical history. We investigate the stochasticity of the destruction of such binaries and the differences between the initial and processed populations using N-body simulations. We concentrate on Orion nebula cluster-like clusters, where the observed binary separation distribution ranges from 62 to 620 au. We find that, starting from the same initial binary population in statistically identical clusters, the number of intermediate binaries that are destroyed after 1 Myr can vary by a factor of >2, and that the resulting separation distributions can be statistically completely different in initially substructured clusters. We also find that the mass ratio distributions are altered (destroying more low mass-ratio systems), but not as significantly as the binary fractions or separation distributions. We conclude that finding very different intermediate (visual) binary populations in different clusters does not provide conclusive evidence that the initial populations were different. © 2012 The Authors Monthly Notices of the Royal Astronomical Society © 2012 RAS.

We trace with analytical methods and in a model parameter-independent manner the independent bits of Fisher information of each of the moments of the lognormal distribution as a now standard prescription for the distribution of the cosmological matter density field as it departs from Gaussian initial conditions. We show that, when entering the regime of large fluctuations, only a tiny, dramatically decaying fraction of the total information content remains accessible through the extraction of the full series of moments of the field. This is due to the known peculiarity of highly tailed distributions that they cannot be uniquely recovered given the values of all of their moments. Under this lognormal assumption, cosmological probes such as the correlation function hierarchy or, equivalently, their Fourier transforms, are rendered fundamentally limited once the field becomes nonlinear, for any parameter of interest. We show that the fraction of the information accessible from two-point correlations decays to zero following the inverse squared variance of the field. We discuss what general properties of a random field's probability density function are making the correlation function hierarchy an efficient or inefficient, complete or incomplete, set of probes of any model parameter. © 2011. The American Astronomical Society. All rights reserved.

Spaldin N.A.,ETH Zurich
Journal of Solid State Chemistry | Year: 2012

The so-called Modern Theory of Polarization, which rigorously defines the spontaneous polarization of a periodic solid and provides a route for its computation in electronic structure codes through the Berry phase, is introduced in a simple qualitative discussion. © 2012 Elsevier Inc.

Pfeifer R.,University of Zurich | Iida F.,ETH Zurich | Lungarella M.,Dynamic Devices AG
Trends in Cognitive Sciences | Year: 2014

Traditionally, in cognitive science the emphasis is on studying cognition from a computational point of view. Studies in biologically inspired robotics and embodied intelligence, however, provide strong evidence that cognition cannot be analyzed and understood by looking at computational processes alone, but that physical system-environment interaction needs to be taken into account. In this opinion article, we review recent progress in cognitive developmental science and robotics, and expand the notion of embodiment to include soft materials and body morphology in the big picture. We argue that we need to build our understanding of cognition from the bottom up; that is, all the way from how our body is physically constructed. © 2014 Elsevier Ltd.

Metanis N.,Hebrew University of Jerusalem | Hilvert D.,ETH Zurich
Current opinion in chemical biology | Year: 2014

Once considered highly toxic, the element selenium is now recognized as a micronutrient essential for human health. It is inserted co-translationally into many proteins as the non-canonical amino acid selenocysteine, providing the resulting selenoprotein molecules with a range of valuable redox properties; selenocysteine is also increasingly exploited as a structural and mechanistic probe in synthetic peptides and proteins. Here we review topical investigations into the preparation and characterization of natural and artificial selenoproteins. Such molecules are uniquely suited as tools for protein chemistry and as a test bed for studying novel catalytic activities. Copyright © 2014 Elsevier Ltd. All rights reserved.

GCOS (Global Climate Observing System) is a long-term program for monitoring the climate, detecting changes, and assessing their impacts. Remote sensing techniques are increasingly used for climate-related measurements. Imagery of the SEVIRI instrument on board the European geostationary satellites Meteosat-8 and Meteosat-9 is often used for the estimation of essential climate variables. In a joint project between the Swiss GCOS Office and ETH Zurich, the geometric accuracy and temporal stability of the 1-km resolution HRV channel imagery of SEVIRI have been evaluated over Switzerland. A set of tools and algorithms has been developed for the investigations. Statistical analysis and blunder detection have been integrated into the process for robust evaluation. The relative accuracy is evaluated by tracking large numbers of feature points in consecutive HRV images taken at 15-minute intervals. For the absolute accuracy evaluation, lakes in Switzerland and its surroundings are used as reference. 20 lakes digitized from Landsat orthophotos are transformed into HRV images and matched via 2D translation terms at sub-pixel level. The algorithms are tested using HRV images taken on 24 days in 2008 (2 days per month). The results show that 2D shifts of up to 8 pixels are present, both in relative and absolute terms. © 2013 by the authors.
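Sub-pixel 2D translation matching of the kind described above can be sketched with a brute-force integer search followed by parabolic peak refinement. This is an illustrative stand-in for the project's lake-outline matching, not its actual code:

```python
import numpy as np

def estimate_shift(ref, img, max_shift=3):
    """Estimate the 2D translation of img relative to ref: minimise the sum of
    squared differences over integer shifts, then refine each axis with a
    parabolic fit around the minimum to reach sub-pixel precision."""
    costs = {}
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(img, -dy, axis=0), -dx, axis=1)
            costs[(dy, dx)] = float(np.sum((shifted - ref) ** 2))
    dy0, dx0 = min(costs, key=costs.get)
    c0 = costs[(dy0, dx0)]

    def parabolic(cm, cc, cp):
        # Vertex of the parabola through three neighbouring cost samples.
        denom = cm - 2.0 * cc + cp
        return 0.0 if denom == 0 else 0.5 * (cm - cp) / denom

    dy = dy0 + parabolic(costs.get((dy0 - 1, dx0), c0), c0,
                         costs.get((dy0 + 1, dx0), c0))
    dx = dx0 + parabolic(costs.get((dy0, dx0 - 1), c0), c0,
                         costs.get((dy0, dx0 + 1), c0))
    return dy, dx

# Synthetic test image: a smooth bump shifted by (1, 2) pixels.
y, x = np.mgrid[0:32, 0:32]
ref = np.exp(-((y - 16.0) ** 2 + (x - 16.0) ** 2) / 20.0)
img = np.roll(np.roll(ref, 1, axis=0), 2, axis=1)
dy, dx = estimate_shift(ref, img)
```

In practice, FFT-based phase correlation is the usual way to do this at scale; the brute-force version above only makes the sub-pixel refinement step explicit.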

Mancktelow N.S.,ETH Zurich
Journal of Structural Geology | Year: 2013

For 2D linear viscous flow, it is shown that the rates of rotation and stretch of an isolated elliptical inclusion with a coaxial elliptical rim are fully determined by two corresponding scalar values. For power-law viscosity, the effective viscosity ratios of the inclusion and rim to the matrix depend on orientation and the system is more complex but, in practice, the simplification with two scalar values still provides a good approximation. Finite-element modelling (FEM) is used to determine the two characteristic values across a wide parameter space for the linear viscous case, with a viscosity ratio (relative to the matrix) of the inclusion from 10^6 to 1, of the rim from 10^-6 to 1, axial ratios from 1.00025 to 20, and rim thicknesses relative to the inclusion axes of 5 to 20%. Results are presented in a multi-dimensional data table, allowing continuous interpolation over the investigated parameter range. Based on these instantaneous rates, the shape fabric of a population of inclusions is forward modelled using an initial value Ordinary Differential Equation (ODE) approach, with the simplifying but unrealistic assumption that the rim remains elliptical in shape and coaxial with respect to the inclusion. However, comparison with accurate large-strain numerical experiments demonstrates that this simplified model gives qualitatively robust predictions and, for a range of investigated examples, also remarkably good quantitative estimates for shear strains up to at least γ = 5. A statistical approach, allowing random variation in the initial orientation, axial ratio and rim viscosity, can reproduce the characteristic shape preferred orientation (SPO) of natural porphyroclast populations. However, vorticity analysis based on the SPO or the interpreted stable orientation of inclusions is not practical. Varying parameters such as inclusion and rim viscosity, rim thickness, and power-law exponents for non-linear viscosity can reproduce the range of naturally observed behaviour (e.g., back-rotation, effectively stable orientations at back-rotated angles, a cut-off axial ratio separating rotating from stable inclusions) even for constant simple shear, and these features are not uniquely characteristic of the vorticity of the background flow. © 2012 Elsevier Ltd.
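The initial-value ODE forward modelling of clast orientation can be sketched with the classical Jeffery-type rotation-rate equation for a rigid ellipse in simple shear. This is a generic stand-in, not the paper's two-scalar FEM parameterisation; R denotes the clast axial ratio and all values are illustrative:

```python
import math

# Forward model of rigid-clast rotation in simple shear (Jeffery-type sketch):
#   d(theta)/d(gamma) = (cos^2 theta + R^2 sin^2 theta) / (R^2 + 1)
# integrated with a simple explicit Euler scheme over finite shear strain.
def rotate(theta0, R, gamma_max, steps=20_000):
    theta, dg = theta0, gamma_max / steps
    for _ in range(steps):
        c, s = math.cos(theta), math.sin(theta)
        theta += dg * (c * c + R * R * s * s) / (R * R + 1.0)
    return theta
```

A circular clast (R = 1) rotates at half the shear rate, while a strongly elongated clast lingers near the shear plane, mimicking the effectively stable orientations discussed above; rim viscosity and thickness would modify the two characteristic rates in the full model.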

Trakhtenbrot B.,ETH Zurich
Astrophysical Journal Letters | Year: 2014

The radiative efficiencies (η) of 72 luminous unobscured active galactic nuclei at z ∼ 1.5-3.5, powered by some of the most massive black holes (BHs), are constrained. The analysis is based on accretion disk (AD) models, which link the continuum luminosity at rest-frame optical wavelengths and the BH mass (M_BH) to the accretion rate through the AD. The data are gathered from several literature samples with detailed measurements of the Hβ emission line complex, observed at near-infrared bands. When coupled with standard estimates of bolometric luminosities (L_bol), the analysis suggests high radiative efficiencies, with most of the sources showing η > 0.2, that is, higher than the commonly assumed value of 0.1, and the expected value for non-spinning BHs (η = 0.057). Even under more conservative assumptions regarding L_bol (i.e., L_bol = 3 × L_5100), most of the extremely massive BHs in the sample (i.e., M_BH ≳ 3 × 10^9 M⊙) show radiative efficiencies which correspond to very high BH spins (a*), with typical values well above a* ≃ 0.7. These results stand in contrast to the predictions of a "spin-down" scenario, in which a series of randomly oriented accretion episodes leads to a* ∼ 0. Instead, the analysis presented here strongly supports a "spin-up" scenario, which is driven by either prolonged accretion or a series of anisotropically oriented accretion episodes. Considering the fact that these extreme BHs require long-duration or continuous accretion to account for their high masses, it is argued that the most probable scenario for the super-massive black holes under study is that of an almost continuous sequence of randomly yet not isotropically oriented accretion episodes. © 2014. The American Astronomical Society. All rights reserved.
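For a standard thin disk, the spin dependence enters through the specific energy of matter at the innermost stable circular orbit (ISCO); the standard relation, quoted here as background with r_ISCO in units of GM/c², reproduces the benchmark value cited above:

```latex
\eta \;=\; 1 - E_{\rm ISCO}
     \;=\; 1 - \sqrt{1 - \frac{2}{3\,r_{\rm ISCO}}},
```

so that a non-spinning BH (a* = 0, r_ISCO = 6) gives η = 1 - sqrt(8/9) ≈ 0.057, while a rapidly spinning BH with a* ≈ 0.998 (r_ISCO ≈ 1.24) gives η ≈ 0.32, which is why efficiencies well above 0.2 point to high spins.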

Marino A.,ETH Zurich
IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing | Year: 2013

Ship detection with Synthetic Aperture Radar (SAR) is a major topic for the security and monitoring of maritime areas. One of the advantages of using SAR lies in its capability to acquire useful images in any weather conditions and at night. Specifically, this paper proposes a new methodology exploiting polarimetric acquisitions (dual- and quad-polarimetric). © 2013 IEEE.

Colbeck R.,Perimeter Institute for Theoretical Physics | Renner R.,ETH Zurich
Physical Review Letters | Year: 2012

Although quantum mechanics is one of our most successful physical theories, there has been a long-standing debate about the interpretation of the wave function-the central object of the theory. Two prominent views are that (i) it corresponds to an element of reality, i.e., an objective attribute that exists before measurement, and (ii) it is a subjective state of knowledge about some underlying reality. A recent result has placed the subjective interpretation into doubt, showing that it would contradict certain physically plausible assumptions, in particular, that multiple systems can be prepared such that their elements of reality are uncorrelated. Here we show, based only on the assumption that measurement settings can be chosen freely, that a system's wave function is in one-to-one correspondence with its elements of reality. This also eliminates the possibility that it can be interpreted subjectively. © 2012 American Physical Society.

Iber D.,ETH Zurich | Zeller R.,University of Basel
Current Opinion in Genetics and Development | Year: 2012

Limb bud development has been studied for decades and has contributed a wealth of knowledge to our understanding of the molecular and cellular mechanisms that govern organogenesis in vertebrate embryos. However, the general regulatory paradigms that underlie the functional and structural organization of complex systems such as developing limb buds have remained largely elusive. A significant number of mathematical theories have been proposed to explain these developmental processes, but they have rarely been validated by experimental analysis. In the age of systems biology, experimental and mathematical approaches have become interlinked and enable the experimental validation of computational models by molecular and genetic analysis. This in turn allows refinement of the mathematical simulations, such that simulating limb bud development becomes increasingly realistic. The resulting models not only detect inconsistencies in the interpretation of experimental data, but their predictive power facilitates identification of key regulatory interactions and definition of so-called core and accessory mechanisms. The ongoing integrative analysis of vertebrate limb organogenesis indicates that these network simulations may be suitable for in silico genetics, that is, the computational modeling of complex loss-of-function and gain-of-function states. Such in silico genetic approaches will permit the simulation of complex mutant phenotypes that are tedious or impossible to generate using mouse molecular genetics. © 2012 Elsevier Ltd.

Hager W.H.,ETH Zurich
Journal of Hydraulic Research | Year: 2012

Although the Bond number is generally known in fluid mechanics to describe the relative effects of surface tension and weight of a liquid drop or bubble, W.N. Bond as an individual is currently hardly known. Through a lucky coincidence, the author was able to obtain documents, medals and photographs from Bond's grandson. This work describes the physical background of Bond's works, mainly relating to the measurement of viscosity and surface tension, and the significance and relevance of the Bond number, as well as the professional and private background of a young and outstanding individual who tragically passed away from a serious illness. It is hoped that this work revitalizes basic knowledge of fluid flow, and that similar projects may be directed toward understanding the historical developments of the hydraulic community. © 2012 Copyright International Association for Hydro-Environment Engineering and Research.

Silver D.M.,University of Calgary | Kotting O.,ETH Zurich | Moorhead G.B.G.,University of Calgary
Trends in Plant Science | Year: 2014

Phosphoglucan phosphatases are novel enzymes that remove phosphates from complex carbohydrates. In plants, these proteins are vital components in the remobilization of leaf starch at night. Breakdown of starch is initiated through reversible glucan phosphorylation to disrupt the semi-crystalline starch structure at the granule surface. The phosphoglucan phosphatases starch excess 4 (SEX4) and like-SEX4 2 (LSF2) dephosphorylate glucans to provide access for amylases that release maltose and glucose from starch. Another phosphatase, LSF1, is a putative inactive scaffold protein that may act as regulator of starch degradative enzymes at the granule surface. Absence of these phosphatases disrupts starch breakdown, resulting in plants accumulating excess starch. Here, we describe recent advances in understanding the biochemical and structural properties of each of these starch phosphatases. © 2014.

Toutov A.A.,California Institute of Technology | Liu W.-B.,California Institute of Technology | Betz K.N.,California Institute of Technology | Fedorov A.,California Institute of Technology | And 3 more authors.
Nature | Year: 2015

Heteroaromatic compounds containing carbon-silicon (C-Si) bonds are of great interest in the fields of organic electronics and photonics, drug discovery, nuclear medicine and complex molecule synthesis, because these compounds have very useful physicochemical properties. Many of the methods now used to construct heteroaromatic C-Si bonds involve stoichiometric reactions between heteroaryl organometallic species and silicon electrophiles or direct, transition-metal-catalysed intermolecular carbon-hydrogen (C-H) silylation using rhodium or iridium complexes in the presence of excess hydrogen acceptors. Both approaches are useful, but their limitations include functional group incompatibility, narrow scope of application, high cost and low availability of the catalysts, and unproven scalability. For this reason, a new and general catalytic approach to heteroaromatic C-Si bond construction that avoids such limitations is highly desirable. Here we report an example of cross-dehydrogenative heteroaromatic C-H functionalization catalysed by an Earth-abundant alkali metal species. We found that readily available and inexpensive potassium tert-butoxide catalyses the direct silylation of aromatic heterocycles with hydrosilanes, furnishing heteroarylsilanes in a single step. The silylation proceeds under mild conditions, in the absence of hydrogen acceptors, ligands or additives, and is scalable to greater than 100 grams under optionally solvent-free conditions. Substrate classes that are difficult to activate with precious metal catalysts are silylated in good yield and with excellent regioselectivity. The derived heteroarylsilane products readily engage in versatile transformations enabling new synthetic strategies for heteroaromatic elaboration, and are useful in their own right in pharmaceutical and materials science applications. © 2015 Macmillan Publishers Limited. All rights reserved.

Dorfler F.,ETH Zurich | Bullo F.,University of California at Santa Barbara
Automatica | Year: 2014

The emergence of synchronization in a network of coupled oscillators is a fascinating subject of multidisciplinary research. This survey reviews the vast literature on the theory and the applications of complex oscillator networks. We focus on phase oscillator models that are widespread in real-world synchronization phenomena, that generalize the celebrated Kuramoto model, and that feature a rich phenomenology. We review the history and the countless applications of this model throughout science and engineering. We justify the importance of the widespread coupled oscillator model as a locally canonical model and describe some selected applications relevant to control scientists, including vehicle coordination, electric power networks, and clock synchronization. We introduce the reader to several synchronization notions and performance estimates. We propose analysis approaches to phase and frequency synchronization, phase balancing, pattern formation, and partial synchronization. We present the sharpest known results about synchronization in networks of homogeneous and heterogeneous oscillators, with complete or sparse interconnection topologies, and in finite-dimensional and infinite-dimensional settings. We conclude by summarizing the limitations of existing analysis methods and by highlighting some directions for future research. © 2014 Elsevier Ltd. All rights reserved.
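The celebrated Kuramoto model that this survey generalizes can be sketched in a few lines. Below is a minimal Euler integration in Python; all parameter values (coupling strength, frequency spread, step size) are arbitrary illustrations, not taken from the survey:

```python
import numpy as np

def kuramoto_step(theta, omega, K, dt):
    """One Euler step of the classic Kuramoto model,
    d(theta_i)/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)."""
    N = len(theta)
    coupling = (K / N) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    return theta + dt * (omega + coupling)

def order_parameter(theta):
    """Kuramoto order parameter r in [0, 1]; r near 1 means phase synchrony."""
    return float(abs(np.mean(np.exp(1j * theta))))

rng = np.random.default_rng(0)
N = 100
omega = rng.normal(0.0, 0.1, N)         # narrow spread of natural frequencies
theta = rng.uniform(0.0, 2 * np.pi, N)  # incoherent initial phases

r_start = order_parameter(theta)
for _ in range(2000):                   # integrate to t = 20 with dt = 0.01
    theta = kuramoto_step(theta, omega, K=2.0, dt=0.01)
r_end = order_parameter(theta)          # coupling well above critical -> r near 1
```

With the coupling K chosen well above the critical value for this frequency spread, the population locks and the order parameter rises from its incoherent value toward one, the transition the survey's synchronization notions make precise.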

Azzalin C.M.,ETH Zurich
Nucleus | Year: 2012

The human helicase and ATPase up frame shift suppressor 1 (UPF1), traditionally known as a major player in several RNA quality control mechanisms, is emerging as a crucial caretaker of the stability of the genome. Work from my laboratory has provided insight into the function of UPF1 during DNA metabolism and has revealed that this versatile enzyme sustains the proper replication of telomeres, the protective structures located at the ends of linear eukaryotic chromosomes. We have supplied direct evidence that telomere replication is not completed in cells with compromised UPF1 function, leading to the accumulation of DNA damage and telomere abnormalities. We also have isolated a number of factors that physically interact with UPF1 and might represent molecular links between UPF1 and telomeres. In this paper, I re-evaluate the functions of UPF1 in maintaining the stability of telomeres and of the genome at large and suggest a model that explains how UPF1 might be recruited and function during telomere replication. © 2012 Landes Bioscience.

Huenteler J.,ETH Zurich | Huenteler J.,Harvard University
Renewable and Sustainable Energy Reviews | Year: 2014

Government support in the form of so-called feed-in tariff policies (FITs), which combine long-term, fixed-price electricity purchase agreements and guaranteed grid access, has attracted large private-sector investments in sustainable electricity generation in the industrialized world. In an effort to replicate these experiences globally, a number of international organizations, NGOs, banks and donor countries are proposing mechanisms to cover part of the cost of FITs in developing countries. This paper reviews these proposals for supported FITs and then uses a case study of Thailand's Alternative Energy Development Plan 2013-2021 to investigate the opportunities and challenges of supporting FITs at a global scale. The review highlights that the proposed mechanisms foresee different roles for national governments and supporting entities, particularly in terms of who is responsible for balancing fixed FIT payments against uncertain revenues and savings from carbon markets, donors and avoided fuel consumption. The case study results then show that the uncertainty about the actual cost of supported FITs is so significant that the responsibility for balancing the FIT budget has to be considered carefully in the design of any mechanism that is to be employed at scale. To a considerable extent, the uncertainty is driven by the counterfactual analysis, i.e., by assumptions about future savings from avoided fossil fuel consumption: for example, depending on the fossil fuel price scenario, the FIT may result in a cost of USD 17bn or savings of USD 23bn. Unlike uncertainty about the necessary level of FIT payments, uncertainty about the avoided fossil fuel cost materializes only over the course of the policy's lifetime, making it politically challenging.
This suggests that an international support mechanism that differentiates the allocation of responsibility depending on the income-level of the recipient country is more suitable for global-scale support than a one-size-fits-all approach. © 2014 Elsevier Ltd.

Schonsleben P.,ETH Zurich
CIRP Annals - Manufacturing Technology | Year: 2012

For many companies, mass-customized products have become the de facto standard products. For these companies, "non-standard" or "customized" means products that even the most sophisticated of product or process configuration tools for parameterized product families cannot define in advance. Products like this need a design-to-order production environment (DTO). This paper starts by presenting industrial examples that need the quickest and most efficient engineering process before releasing the customized manufacturing order. It then goes on to examine the organizational requirements, methods and tools that support a fast and efficient DTO process that comes close to the needs of mass customization. © 2012 CIRP.

Heng K.,ETH Zurich
Astrophysical Journal Letters | Year: 2012

Supersonic flows are expected to exist in the atmospheres of irradiated exoplanets, but the question of whether shocks develop lingers. Specifically, it reduces to whether continuous flow in a closed loop may become supersonic and if some portions of the supersonic flow steepen into shocks. We first demonstrate that continuous, supersonic flow may exist in two flavors: isentropic and non-isentropic, with shocks being included in the latter class of solutions. Supersonic flow is a necessary but insufficient condition for shocks to develop. The development of a shock requires the characteristics of neighboring points in a flow to intersect. We demonstrate that the intersection of characteristics may be quantified via the knowledge of the Mach number. Finally, we examine three-dimensional simulations of hot Jovian atmospheres and demonstrate that shock formation is expected to occur mostly on the dayside hemisphere, upstream of the substellar point, because the enhanced temperatures near the substellar point provide a natural pressure barrier for the returning flow. Understanding the role of shocks in irradiated exoplanetary atmospheres is relevant to correctly modeling observables such as the peak offsets of infrared phase curves. © 2012. The American Astronomical Society. All rights reserved.

Nakagawa T.,Japan Agency for Marine - Earth Science and Technology | Tackley P.J.,ETH Zurich
Earth and Planetary Science Letters | Year: 2012

Two-dimensional thermo-chemical mantle convection simulations are used to investigate the influence of melting-induced differentiation on the thermal evolution of Earth's mantle, focussing in particular on matching the present-day surface heat flow and the 'Urey ratio'. The influence of internal heating rate, initial mantle temperature and partitioning of heat-producing elements into basaltic crust are studied. High initial mantle temperatures, which are expected following Earth's accretion, cause major differences in early mantle thermo-chemical structures, but by the present day the surface heat flux and internal structures are indistinguishable from cases with a low initial temperature. Assuming three different values of mantle heat production that vary by more than a factor of two results in small differences in present-day heat flow, as does assuming different partitioning ratios of heat-producing elements into crust. Indeed, all of the cases presented here, regardless of exact parameters, have approximately Earth's present-day heat flow, with substantial fractions coming from the core and from mantle cooling. Because the modelled present-day surface heat flow varies only slightly with parameters, the Urey ratio (the ratio of total heat production to total surface heat flow) is highly dependent on the amount of internal heat production, and due to the large uncertainty in this, the Urey ratio is considered a much poorer constraint on thermal evolution than the heat flow. The range of present-day Urey ratios observed in the simulations here is about 0.3 to 0.5, which is consistent with observational and geochemical constraints (Jaupart et al., 2007). Magmatic heat transport contributes an upper bound of 9% to Earth's present-day heat loss but a much higher fraction at earlier times (often more than convective heat loss), so neglecting it causes an overestimation of the Urey ratio. 
Magmatic heat transport also plays an important role in mantle cooling. Considering these points, it is important to include magmatic effects when attempting to understand the thermal evolution of the Earth. © 2012 Elsevier B.V.
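The Urey ratio defined above is simple arithmetic. As a quick illustration with round numbers of the order discussed by Jaupart et al. (2007) (the exact figures below are illustrative, not taken from this abstract):

```python
def urey_ratio(heat_production_TW, surface_heat_flow_TW):
    """Urey ratio = total internal heat production / total surface heat flow."""
    return heat_production_TW / surface_heat_flow_TW

# Illustrative order-of-magnitude values: total surface heat flow ~46 TW,
# radiogenic heat production ~20 TW.
ur = urey_ratio(20.0, 46.0)  # ~0.43, inside the 0.3-0.5 range of the simulations
```

The point of the abstract is that the numerator is poorly known, so the ratio inherits that uncertainty even when the denominator is well constrained.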

Finger R.,ETH Zurich
Food Policy | Year: 2010

We analyze trends in crop yields and yield variability of barley, maize, oats, rye, triticale and wheat in Switzerland from 1961 to 2006. Crop yields have increased linearly since the 1960s. However, yields of barley, oats, rye, triticale and wheat have leveled off in Switzerland since the early 1990s, in contrast to the linear trends in cereal yields usually assumed for Europe. We show a relationship between the observed leveling-off of crop yields and the introduction of agricultural policy measures towards environmentally friendly cereal production, which fostered widespread adoption of extensive farming practices. Thus, this paper emphasizes that agricultural policy can be an important reason for slowing crop yield growth, and agricultural policy measures will be one of the key driving forces of future crop yields. The potential leveling-off of crop yields indicated in this study should therefore be considered in analyses of future land use and food supply, as well as in the evaluation of agri-environmental measures and policy reforms. © 2009 Elsevier Ltd.

Huang L.,ETH Zurich
Systems and Control Letters | Year: 2013

This paper is concerned with the stochastic stabilization and destabilization of nonlinear differential equations and, more precisely, further develops the general theory of stochastic stabilization and destabilization studied by Mao (1994) [4] and Appleby et al. (2008) [1]. The proposed results substantially improve on those earlier criteria and reveal a more fundamental principle for the stochastic stabilization and destabilization of dynamical systems, which is verified with examples of application to mass-action systems. © 2012 Elsevier Ltd. All rights reserved.
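The canonical example behind this line of work, going back to Mao's results, is that multiplicative noise can stabilize an unstable linear drift. A hedged numerical sketch (the scalar linear case only, not the paper's nonlinear setting):

```python
import numpy as np

def euler_maruyama(a, sigma, x0, dt, n_steps, rng):
    """Simulate the scalar SDE dx = a*x dt + sigma*x dW by Euler-Maruyama.
    The almost-sure Lyapunov exponent is a - sigma**2/2, so multiplicative
    noise with sigma**2 > 2*a stabilizes an unstable drift a > 0 (the
    classical stabilization effect studied by Mao)."""
    x = x0
    sqdt = np.sqrt(dt)
    for _ in range(n_steps):
        x += a * x * dt + sigma * x * sqdt * rng.normal()
    return x

rng = np.random.default_rng(1)
# Integrate to t = 20 from x(0) = 1 with unstable drift a = 0.5 > 0.
x_no_noise = euler_maruyama(0.5, 0.0, 1.0, 1e-3, 20000, rng)  # grows like e^(0.5 t)
x_noisy = euler_maruyama(0.5, 2.0, 1.0, 1e-3, 20000, rng)     # exponent 0.5 - 2 < 0
```

With sigma = 2 the Lyapunov exponent is 0.5 - 2 = -1.5, so the noisy trajectory decays toward zero while the deterministic one blows up.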

In this study, we make a general comparison of the accuracy and robustness of five multivariate calibration models: partial least squares (PLS) regression or projection to latent structures, polynomial partial least squares (Poly-PLS) regression, artificial neural networks (ANNs), and two novel techniques based on support vector machines (SVMs) for multivariate data analysis: support vector regression (SVR) and least-squares support vector machines (LS-SVMs). The comparison is based on fourteen (14) different datasets: seven sets of gasoline data (density, benzene content, and fractional composition/boiling points), two sets of ethanol gasoline fuel data (density and ethanol content), one set of diesel fuel data (total sulfur content), three sets of petroleum (crude oil) macromolecules data (weight percentages of asphaltenes, resins, and paraffins), and one set of petroleum resins data (resins content). Vibrational (near-infrared, NIR) spectroscopic data are used to predict the properties and quality coefficients of gasoline, biofuel/biodiesel, diesel fuel, and other samples of interest. The four systems presented here range greatly in composition, properties, strength of intermolecular interactions (e.g., van der Waals forces, H-bonds), colloid structure, and phase behavior. Due to the high diversity of chemical systems studied, general conclusions about SVM regression methods can be made. We try to answer the following question: to what extent can SVM-based techniques replace ANN-based approaches in real-world (industrial/scientific) applications? The results show that both SVR and LS-SVM methods are comparable to ANNs in accuracy. Due to the much higher robustness of the former, the SVM-based approaches are recommended for practical (industrial) application. This has been shown to be especially true for complicated, highly nonlinear objects. © 2011 The Royal Society of Chemistry.
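The least-squares SVM mentioned above replaces the quadratic program of classical SVR with a single linear system, which is one source of its robustness. A self-contained sketch on toy one-dimensional data (not the NIR spectra used in the study; all parameter values are illustrative):

```python
import numpy as np

def rbf_kernel(A, B, gamma):
    """Gaussian RBF kernel matrix K[i, j] = exp(-gamma * ||a_i - b_j||^2)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def lssvm_fit(X, y, gamma, C):
    """LS-SVM regression: solve the bordered linear KKT system
        [ 0   1^T     ] [b]     [0]
        [ 1   K + I/C ] [alpha] [y]
    instead of the quadratic program of classical SVR."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf_kernel(X, X, gamma) + np.eye(n) / C
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]  # bias b, dual coefficients alpha

def lssvm_predict(X_train, alpha, b, X_new, gamma):
    return rbf_kernel(X_new, X_train, gamma) @ alpha + b

# Toy "calibration" problem: a nonlinear property of a 1-D feature.
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, (80, 1))
y = np.sin(2 * X[:, 0]) + 0.05 * rng.normal(size=80)

b, alpha = lssvm_fit(X, y, gamma=2.0, C=50.0)
y_hat = lssvm_predict(X, alpha, b, X, gamma=2.0)
rmse = float(np.sqrt(np.mean((y - y_hat) ** 2)))  # close to the noise level
```

In practice the kernel width gamma and regularization C would be tuned by cross-validation, exactly the kind of model selection the fourteen-dataset comparison in the study performs at scale.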

Sallusto F.,Institute for Research in Biomedicine | Lanzavecchia A.,Institute for Research in Biomedicine | Lanzavecchia A.,ETH Zurich | Araki K.,Emory University | Ahmed R.,Emory University
Immunity | Year: 2010

Vaccines work by eliciting an immune response and consequent immunological memory that mediates protection from infection or disease. Recently, new methods have been developed to dissect the immune response in experimental animals and humans, which have led to increased understanding of the molecular mechanisms that control differentiation and maintenance of memory T and B cells. In this review we will provide an overview of the cellular organization of immune memory and underline some of the outstanding questions on immunological memory and how they pertain to vaccination strategies. Finally we will discuss how we can learn about antigen design from the interrogation of our memory T and B cells: a journey from vaccines to memory and back. © 2010 Elsevier Inc.

Jeschke G.,ETH Zurich
Progress in Nuclear Magnetic Resonance Spectroscopy | Year: 2013

Long-range distance measurements based on paramagnetic relaxation enhancement (PRE) in NMR, quantification of surface water dynamics near biomacromolecules by Overhauser dynamic nuclear polarization (DNP) and sensitivity enhancement by solid-state DNP all depend on introducing paramagnetic species into an otherwise diamagnetic NMR sample. The species can be introduced by site-directed spin labeling, which offers precise control for positioning the label in the sequence of a biopolymer. However, internal flexibility of the spin label gives rise to dynamic processes that potentially influence PRE and DNP behavior and leads to a spatial distribution of the electron spin even in solid samples. Internal dynamics of spin labels and their static conformational distributions have been studied mainly by electron paramagnetic resonance spectroscopy and molecular dynamics simulations, with a large body of results for the most widely applied methanethiosulfonate spin label MTSL. These results are critically discussed in a unifying picture based on rotameric states of the group that carries the spin label. Deficiencies in our current understanding of dynamics and conformations of spin labeled groups and of their influence on NMR observables are highlighted and directions for further research suggested. Copyright © 2013. Published by Elsevier B.V.

Wennemers H.,ETH Zurich
Journal of Peptide Science | Year: 2012

The question of whether peptides can fulfill functions for which nature utilizes large macromolecules is an overarching theme of the research in the Wennemers laboratory. The Zervas Award Lecture summarized our research on the development of peptides as asymmetric catalysts and as templates for the controlled formation of silver nanoparticles. Tripeptides of the general type Pro-Pro-Xaa (Xaa = acidic amino acid) were presented that effectively catalyze aldol and conjugate addition reactions. These peptides are not only highly active, robust, and stereoselective catalysts but also have remarkable chemoselectivity. In the second part, short peptides that allow for the generation of silver nanoparticles in distinctly different sizes were presented. © 2012 European Peptide Society and John Wiley & Sons, Ltd.

Gruber N.,ETH Zurich
Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences | Year: 2011

In the coming decades and centuries, the ocean's biogeochemical cycles and ecosystems will become increasingly stressed by at least three independent factors. Rising temperatures, ocean acidification and ocean deoxygenation will cause substantial changes in the physical, chemical and biological environment, which will then affect the ocean's biogeochemical cycles and ecosystems in ways that we are only beginning to fathom. Ocean warming will not only affect organisms and biogeochemical cycles directly, but will also increase upper ocean stratification. The changes in the ocean's carbonate chemistry induced by the uptake of anthropogenic carbon dioxide (CO2) (i.e. ocean acidification) will probably affect many organisms and processes, although in ways that are currently not well understood. Ocean deoxygenation, i.e. the loss of dissolved oxygen (O2) from the ocean, is bound to occur in a warming and more stratified ocean, causing stress to macro-organisms that critically depend on sufficient levels of oxygen. These three stressors (warming, acidification and deoxygenation) will tend to operate globally, although with distinct regional differences. The impacts of ocean acidification tend to be strongest in the high latitudes, whereas the low-oxygen regions of the low latitudes are most vulnerable to ocean deoxygenation. Specific regions, such as the eastern boundary upwelling systems, will be strongly affected by all three stressors, making them potential hotspots for change. Of additional concern are synergistic effects, such as ocean acidification-induced changes in the type and magnitude of the organic matter exported to the ocean's interior, which then might cause substantial changes in the oxygen concentration there. Ocean warming, acidification and deoxygenation are essentially irreversible on centennial time scales, i.e. once these changes have occurred, it will take centuries for the ocean to recover. 
With the emission of CO2 being the primary driver behind all three stressors, the primary mitigation strategy is to reduce these emissions. © 2011 The Royal Society.

Halperin A.,Joseph Fourier University | Kroger M.,ETH Zurich
Biomaterials | Year: 2012

Poly(N-isopropylacrylamide) (PNIPAM) brushes and hydrogels serve as temperature-responsive cell culture substrates. The cells adhere at 37 °C and are detached by cooling to below the lower critical solution temperature T_LCST ≈ 32 °C, an effect hitherto attributed to change in PNIPAM hydration. The article proposes a mechanism coupling the change of hydration to integrin-mediated environmental sensing for cell culture on brushes and hydrogels in serum-containing medium. Hydration is associated with swelling and higher osmotic pressure, leading to two effects: (i) the lower osmotic pressure in the collapsed brush/hydrogel favors the adsorption of serum-borne extracellular matrix (ECM) proteins, enabling cell adhesion; (ii) brush/hydrogel swelling at T < T_LCST gives rise to a disjoining force f_cell due to confinement by the ventral membrane of a cell adhering via integrin-ECM bonds. f_cell places the integrin-ECM bonds under tension, thus accelerating their dissociation and promoting desorption of ECM proteins. Self-consistent field theory of PNIPAM brushes quantifies the effect of the polymerization degree N, the area per chain Σ, and the temperature T on ECM adsorption, f_cell and the dissociation rate of integrin-ECM bonds. It suggests guidelines for tuning Σ and N to optimize adhesion at 37 °C and detachment at T < T_LCST. The mechanism rationalizes existing experimental results on the influence of the dry thickness and the RGD fraction on adhesion and detachment. © 2012 Elsevier Ltd.

Through the interconnectedness of global business, the local consumption of products and services is intervening in the hydrological cycle throughout the world to an unprecedented extent. In order to address the unsustainable use of global freshwater resources, indicators are needed which make the impacts of production systems and consumption patterns transparent. In this paper, a revised water footprint calculation method, incorporating water stress characterisation factors, is presented and demonstrated for two case study products, Dolmio® pasta sauce and Peanut M&M's® using primary production data. The method offers a simple, yet meaningful way of making quantitative comparisons between products, production systems and services in terms of their potential to contribute to water scarcity. As such, capacity is created for change through public policy as well as corporate and individual action. This revised method represents an alternative to existing volumetric water footprint calculation methods which combine green and blue water consumption from water scarce and water abundant regions such that they give no clear indication about where the actual potential for harm exists. © 2009 Elsevier Ltd. All rights reserved.

Waldvogel J.,ETH Zurich
Celestial Mechanics and Dynamical Astronomy | Year: 2012

We consider the planar symmetric four-body problem with two equal masses m1 = m3 > 0 at positions (±x1(t), 0) and two equal masses m2 = m4 > 0 at positions (0, ±x2(t)) at all times t, referred to as the rhomboidal symmetric four-body problem. Owing to the simplicity of the equations of motion, this problem is well suited to study regularization of the binary collisions, periodic solutions, chaotic motion, as well as the four-body collision and escape manifolds. Furthermore, resonance phenomena between the two interacting rectilinear binaries play an important role. © 2012 Springer Science+Business Media B.V.
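The simplicity of the equations of motion can be made concrete. Below is a plain Newtonian sketch (G = 1) with an RK4 integrator and an energy check; Waldvogel's actual treatment uses regularized variables to handle binary collisions, so this is only an unregularized illustration:

```python
import numpy as np

def rhs(state, m1, m2):
    """Equations of motion (G = 1) for the rhomboidal four-body problem:
    masses m1 at (+x1, 0), (-x1, 0) and m2 at (0, +x2), (0, -x2)."""
    x1, x2, v1, v2 = state
    r3 = (x1 ** 2 + x2 ** 2) ** 1.5
    a1 = -m1 / (4 * x1 ** 2) - 2 * m2 * x1 / r3  # partner + two cross masses
    a2 = -m2 / (4 * x2 ** 2) - 2 * m1 * x2 / r3
    return np.array([v1, v2, a1, a2])

def rk4_step(state, dt, m1, m2):
    k1 = rhs(state, m1, m2)
    k2 = rhs(state + 0.5 * dt * k1, m1, m2)
    k3 = rhs(state + 0.5 * dt * k2, m1, m2)
    k4 = rhs(state + dt * k3, m1, m2)
    return state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def energy(state, m1, m2):
    """Kinetic energy of all four bodies plus all six pair potentials."""
    x1, x2, v1, v2 = state
    return (m1 * v1 ** 2 + m2 * v2 ** 2
            - m1 ** 2 / (2 * x1) - m2 ** 2 / (2 * x2)
            - 4 * m1 * m2 / np.hypot(x1, x2))

state = np.array([1.0, 1.2, 0.0, 0.0])  # released from rest
E0 = energy(state, 1.0, 1.0)
for _ in range(500):                    # integrate to t = 0.5
    state = rk4_step(state, 1e-3, 1.0, 1.0)
E1 = energy(state, 1.0, 1.0)            # conserved up to integration error
```

Released from rest, both binaries fall inward toward a close approach; the fixed-step integrator above would fail near a binary collision, which is precisely why the regularization studied in the paper matters.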

Korkmaz S.,ETH Zurich
Computers and Structures | Year: 2011

An active structure is an engineering structure containing sensors and actuators that, when active, modify the response of the structure to its environment. Research into active structural control is growing due to factors such as new challenges in extreme environments (space, undersea, polar), contaminated contexts (nuclear, chemical, biological) and increasing awareness of earthquake risks. Advances in theory and practice of active control technology have modified the general perception of structures. This paper provides an overview of advances enabling feasible applications of active structures. Computing challenges that are important to the creation of the next generation of active structures are identified. © 2011 Elsevier Ltd. All rights reserved.

Low-frequency, gas-phase vibrational (Raman) spectroscopy was used in conjunction with a jet-cooling technique and ab initio calculations to study the intrinsic thermodynamic properties of the free (gas-phase) amino acid glycine (Gly, H2NCHRCOOH). The first experimental evaluation of the enthalpy differences between the Gly conformations in the vapor phase is presented. The enthalpy values were determined to be 0.33 ± 0.05 and 1.15 ± 0.07 kcal mol⁻¹ for the ccc and gtt rotamers, respectively; the corresponding relative entropy values were -2.86 ± 0.12 and -0.12 ± 0.16 cal mol⁻¹ K⁻¹, respectively. It was shown that low-frequency Raman and infrared spectroscopy is capable of estimating intrinsic thermodynamic parameters of protein building blocks, such as intramolecular hydrogen bonds (ccc conformer) and rotation around one of the bonds (N-C, gtt conformer). The inaccuracy of the RRHO approximation for Gly conformers was experimentally confirmed. Benchmark data for quantum theory and molecular dynamics were provided.

In this study we explore the effects of different charging behaviors of PHEVs in the United States on electricity demand profiles and energy use, in terms of time of day and location (at home, the workplace, or public areas). Based on driving behavior statistics on vehicle distance traveled and daily trips (US DOT, 2003) in the US, we develop a simulation algorithm to estimate the PHEV charging profiles of electricity demand with plausible plug-in times and depth of discharge of the PHEVs. The model enables simulations of the impacts of various grid management strategies on the availability of vehicle charging in public places, the charge power levels and standards, scheduling charging in off-peak periods and policy measures to promote PHEV adoption. PHEV charging imposes a modest pressure on system load on the order of 560-910 Wp per vehicle. We find that enabling charging in places other than home increases the daily electric energy use of PHEVs from 24% to 29% (1.5-2 kWh/day). Major findings of the different scenarios are that PHEVs with a 20 mile range (PHEV-20) shift 45-65% of vehicle miles traveled in the United States to electricity, compared with 65-80% for PHEVs with a 40 mile range (PHEV-40). © 2011 Elsevier Ltd.
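A charging-profile simulation of this kind can be sketched as follows. The distributions and parameters below (trip lengths, plug-in times, charger power, pack size) are illustrative placeholders, not the US DOT (2003) survey statistics the study is based on:

```python
import numpy as np

def charging_load_profile(n_vehicles, rng, charge_power_kw=1.4,
                          battery_kwh=6.0, kwh_per_mile=0.3):
    """Hourly aggregate charging demand (kW) for a PHEV fleet that plugs
    in at home after the last trip of the day.  All distributions are
    hypothetical stand-ins for survey-based driving statistics."""
    load = np.zeros(24)
    for _ in range(n_vehicles):
        miles = min(rng.lognormal(3.0, 0.9), 60.0)         # daily distance
        need_kwh = min(miles * kwh_per_mile, battery_kwh)  # depth of discharge
        hour = int(rng.normal(18, 2)) % 24                 # plug in ~6 pm
        hours_left = need_kwh / charge_power_kw
        while hours_left > 0:
            load[hour % 24] += charge_power_kw * min(1.0, hours_left)
            hours_left -= 1.0
            hour += 1
    return load

rng = np.random.default_rng(0)
profile = charging_load_profile(1000, rng)
peak_w_per_vehicle = profile.max() * 1000 / 1000  # kW fleet total -> W/vehicle
```

Even this crude sketch reproduces the qualitative finding: uncontrolled home charging piles load onto the early-evening system peak, which is what off-peak scheduling strategies are meant to shift.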

Hertli T.,ETH Zurich
Proceedings - Annual IEEE Symposium on Foundations of Computer Science, FOCS | Year: 2011

The PPSZ algorithm by Paturi, Pudlák, Saks, and Zane [1998] is the fastest known algorithm for Unique k-SAT, where the input formula does not have more than one satisfying assignment. For k ≥ 5 the same bounds hold for general k-SAT. We show that this is also the case for k = 3, 4, using a slightly modified PPSZ algorithm. We do the analysis by defining a cost for satisfiable CNF formulas, which we prove to decrease in each PPSZ step by a certain amount. This improves our previous best bounds with Moser and Scheder [2011] for 3-SAT to O(1.308^n) and for 4-SAT to O(1.469^n). © 2011 IEEE.
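The core of a PPSZ-style algorithm is easy to sketch: process the variables in a random order, take any value forced by a unit clause, and guess the rest uniformly at random, repeating until a satisfying assignment is found. The real algorithm and its analysis use bounded-width resolution rather than only the unit-clause rule, so the following is a simplified illustration:

```python
import random

def unit_forced(clauses, assignment, var):
    """Return the value of `var` forced by some clause that is unit on it,
    or None.  Clauses are lists of nonzero ints; sign encodes polarity."""
    for clause in clauses:
        unassigned, satisfied = [], False
        for lit in clause:
            v = abs(lit)
            if v in assignment:
                satisfied |= (assignment[v] > 0) == (lit > 0)
            else:
                unassigned.append(lit)
        if not satisfied and len(unassigned) == 1 and abs(unassigned[0]) == var:
            return unassigned[0] > 0
    return None

def ppsz_round(clauses, n_vars, rng):
    """One randomized pass: random variable order, forced values where a
    unit clause dictates them, uniform guesses otherwise."""
    assignment = {}
    for var in rng.sample(range(1, n_vars + 1), n_vars):
        forced = unit_forced(clauses, assignment, var)
        value = forced if forced is not None else (rng.random() < 0.5)
        assignment[var] = 1 if value else -1
    return assignment

def satisfies(clauses, assignment):
    return all(any((assignment[abs(l)] > 0) == (l > 0) for l in c) for c in clauses)

def ppsz(clauses, n_vars, max_rounds=10000, seed=0):
    rng = random.Random(seed)
    for _ in range(max_rounds):
        a = ppsz_round(clauses, n_vars, rng)
        if satisfies(clauses, a):
            return a
    return None

# Tiny satisfiable 3-CNF example.
solution = ppsz([[1, 2, 3], [-1, 2], [-2, 3], [-3, -1]], 3)
```

The running-time bounds in the paper come from showing how often a variable is forced rather than guessed; strengthening the forcing rule (via bounded resolution) is what yields the O(1.308^n) bound for 3-SAT.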

Stadler T.,ETH Zurich
Journal of Evolutionary Biology | Year: 2013

Phylogenetic trees of only extant species contain information about the underlying speciation and extinction pattern. In this review, I provide an overview of the different methodologies that recover the speciation and extinction dynamics from phylogenetic trees. Broadly, the methods can be divided into two classes: (i) methods using phylogenetic tree shapes (i.e. trees without branch length information), which allow us to test for speciation rate variation, and (ii) methods using phylogenetic trees with branch length information, which allow us to quantify speciation and extinction rates. I end the article with an overview of the limitations, open questions and challenges of the reviewed methodology. © 2013 European Society For Evolutionary Biology.
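For class (ii), the simplest case is a pure-birth (Yule) process with no extinction, where the speciation rate has a closed-form maximum-likelihood estimate: the number of speciation events divided by the total lineage-time. A simulation sketch of that estimator (an illustration of the principle, not code from the review):

```python
import random

def simulate_pure_birth(lam, n_tips, rng):
    """Simulate a Yule (pure-birth) process until n_tips lineages exist.
    Returns total lineage-time L and number of speciation events B; the
    maximum-likelihood speciation rate is lambda_hat = B / L."""
    lineages, L, B = 1, 0.0, 0
    while lineages < n_tips:
        # Waiting time to the next speciation: Exp(lineages * lam).
        wait = rng.expovariate(lineages * lam)
        L += lineages * wait
        lineages += 1
        B += 1
    return L, B

rng = random.Random(42)
L_tot = B_tot = 0.0
for _ in range(500):            # pool replicates so the estimate is stable
    L, B = simulate_pure_birth(0.5, 50, rng)
    L_tot += L
    B_tot += B
lam_hat = B_tot / L_tot         # close to the true rate 0.5
```

With extinction added, the estimation problem becomes the birth-death inference the review surveys, where rates must be recovered from the reconstructed tree of survivors rather than from the complete event record.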

Ridoutt B.G.,CSIRO | Pfister S.,ETH Zurich
International Journal of Life Cycle Assessment | Year: 2013

Purpose: A complete assessment of water use in life cycle assessment (LCA) involves modelling both consumptive and degradative water use. Due to the range of environmental mechanisms involved, the results are typically reported as a profile of impact category indicator results. However, there is also demand for a single score stand-alone water footprint, analogous to the carbon footprint. To facilitate single score reporting, the critical dilution volume approach has been used to express a degradative emission in terms of a theoretical water volume, sometimes referred to as grey water. This approach has not received widespread acceptance, and a new approach is proposed which takes advantage of the complex fate and effects models normally employed in LCA. Methods: Results for both consumptive and degradative water use are expressed in the reference unit H2Oe, enabling summation and reporting as a single stand-alone value. Consumptive water use is assessed taking into consideration the local water stress relative to the global average water stress (0.602). Concerning degradative water use, each emission is modelled separately using the ReCiPe impact assessment methodology, with results subsequently normalised, weighted and converted to the reference unit (H2Oe) by comparison to the global average value for consumptive water use (1.86 × 10⁻³ ReCiPe points m⁻³). Results and discussion: The new method, illustrated in a simplified case study, incorporates best practice in terms of life cycle impact assessment modelling for eutrophication, human and eco-toxicity, and is able to assimilate new developments relating to these and any other impact assessment models relevant to water pollution. Conclusions: The new method enables a more comprehensive and robust assessment of degradative water use in a single score stand-alone water footprint than has been possible in the past. © 2012 Springer-Verlag.
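The conversion to the reference unit H2Oe described under Methods can be written out directly, using the two global averages quoted in the abstract. The product figures in the example are hypothetical:

```python
GLOBAL_AVG_STRESS = 0.602  # global average water stress index (from the abstract)
RECIPE_PER_M3 = 1.86e-3    # ReCiPe points per m3 of global-average consumptive use

def consumptive_h2oe(volume_m3, local_stress):
    """Stress-weighted consumptive water use, in m3 H2Oe."""
    return volume_m3 * local_stress / GLOBAL_AVG_STRESS

def degradative_h2oe(recipe_points):
    """Degradative use: normalised/weighted ReCiPe points converted to m3 H2Oe."""
    return recipe_points / RECIPE_PER_M3

# Hypothetical product: 2 m3 consumed where the stress index is 0.9,
# plus emissions scoring 0.001 ReCiPe points after normalisation/weighting.
footprint = consumptive_h2oe(2.0, 0.9) + degradative_h2oe(0.001)  # m3 H2Oe
```

Because both terms land in the same unit, they can be summed into the single stand-alone score the paper argues for, while water consumed in low-stress regions is correspondingly discounted.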

Busse C.,ETH Zurich
Journal of Supply Chain Management | Year: 2016

This paper investigates how changes in sustainability-related conditions (SRCs) at a supplier influence buyers' economic performance. Using a conceptual theory-building approach and drawing on instrumental stakeholder theory, this research identifies four mediating effects through which suppliers' SRCs affect buyers' economic performance. The four mediating factors are purchasing costs, supply chain sustainability risk costs, cooperation benefits, and benefits stemming from spillovers of a supplier's self-promotion. Purchasing costs exert a negative influence on buyers' performance, whereas the other three factors contribute positively to performance. The study also suggests that the anticipation of performance effects which are salient to the buyer triggers efforts to improve the supplier's SRCs. This research makes several contributions. First, the study identifies and analyzes the causal pathways through which a supplier's SRCs influence buyers' performance, thus contributing to the development of sustainable supply chain management theory. Second, the study helps buyers make better decisions about their suppliers. Finally, the study provides a conceptual platform for improved dialogue between stakeholders and buyers. © 2016 Institute for Supply Management, Inc.

Aebi M.,ETH Zurich
Biochimica et Biophysica Acta - Molecular Cell Research | Year: 2013

N-linked protein glycosylation in the endoplasmic reticulum (ER) is a conserved two-phase process in eukaryotic cells. It involves the assembly of an oligosaccharide on a lipid carrier, dolichylpyrophosphate, and the transfer of the oligosaccharide to selected asparagine residues of polypeptides that have entered the lumen of the ER. The assembly of the lipid-linked oligosaccharide (LLO) takes place at the ER membrane and requires the activity of several specific glycosyltransferases. The biosynthesis of the LLO initiates at the cytoplasmic side of the ER membrane and terminates in the lumen, where oligosaccharyltransferase (OST) selects N-X-S/T sequons of the polypeptide and generates the N-glycosidic linkage between the side-chain amide of asparagine and the oligosaccharide. The N-glycosylation pathway in the ER modifies a multitude of proteins at one or more asparagine residues with a unique carbohydrate structure that is used as a signalling molecule in their folding pathway. In a later stage of glycoprotein processing, the same modification is used in the Golgi compartment, but in this process, remodelling of the N-linked glycans in a protein-, cell-type- and species-specific manner generates the high structural diversity of N-linked glycans observed in eukaryotic organisms. This article summarizes the current knowledge of the N-glycosylation pathway in the ER that results in the covalent attachment of an oligosaccharide to asparagine residues of polypeptide chains, focusing on the model organism Saccharomyces cerevisiae. This article is part of a Special Issue entitled: Functional and structural diversity of endoplasmic reticulum. © 2013 Elsevier B.V.

Geisler S.,Case Western Reserve University | Geisler S.,ETH Zurich | Coller J.,Case Western Reserve University
Nature Reviews Molecular Cell Biology | Year: 2013

The increased application of transcriptome-wide profiling approaches has led to an explosion in the number of documented long non-coding RNAs (lncRNAs). While these new and enigmatic players in the complex transcriptional milieu are encoded by a significant proportion of the genome, their functions are mostly unknown. Early discoveries support a paradigm in which lncRNAs regulate transcription via chromatin modulation, but new functions are steadily emerging. Given the biochemical versatility of RNA, lncRNAs may be used for various tasks, including post-transcriptional regulation, organization of protein complexes, cell-cell signalling and allosteric regulation of proteins. © 2013 Macmillan Publishers Limited. All rights reserved.

Escobedo C.,Queens University | Escobedo C.,ETH Zurich
Lab on a Chip - Miniaturisation for Chemistry and Biology | Year: 2013

The integration of nanohole array based plasmonic sensors into microfluidic systems has enabled the emergence of platforms with unique capabilities and a diversified palette of applications. Recent advances in fabrication techniques together with novel implementation schemes have influenced the progress of these optofluidic platforms. Here, we review the advances that nanohole array based sensors have experienced since they were first merged with microfluidics. We examine established and new fabrication methodologies that have enabled both the fabrication of nanohole arrays with improved optical attributes and a reduction in manufacturing costs. The achievements of several platforms developed to date and the significant benefits obtained from operating the nanoholes as nanochannels are also reviewed herein. Finally, we discuss future opportunities for on-chip nanohole array sensors by outlining potential applications and the use of the abilities of the nanostructures beyond the optical context. © 2013 The Royal Society of Chemistry.

Patt A.G.,ETH Zurich | Weber E.U.,Columbia University
Wiley Interdisciplinary Reviews: Climate Change | Year: 2014

Public opinion polls reveal that the perception of climate change as an uncertain phenomenon is increasing, even as consensus within the scientific community about its reality and its attribution to human causes has grown. At the same time, the scientific community has sought to improve its communication practices in order to present a more accurate picture of the state of scientific knowledge about climate change to the public and policy makers. In this review article, we examine two sets of insights that could influence the success of such communication efforts. The first set questions which uncertainties matter for effective climate policy. While the literature has focused disproportionately on uncertainties with respect to the climate system, we draw attention here to uncertainties associated with the solution space. The second set examines which factors lead people to take slow and deliberated decisions versus quick and spontaneous ones, and looks at the effects of these two systems of thought on climate change action. From the review of these two sets of literature, we propose a new hypothesis: that the gap between public and scientific attitudes toward climate change will narrow not because of greater attention to and communication of climate system risks and uncertainties, but rather out of growing experience with the policies and technological systems needed to address the problem. © 2013 John Wiley & Sons, Ltd.

Vogeli B.,ETH Zurich
Progress in Nuclear Magnetic Resonance Spectroscopy | Year: 2014

The nuclear Overhauser enhancement or effect (NOE) is the most important measure in liquid-state NMR with macromolecules. Thus, the NOE is the subject of numerous reviews and books. Here, the NOE is revisited in light of our recently introduced measurements of exact nuclear Overhauser enhancements (eNOEs), which enabled the determination of multiple-state 3D protein structures. This review encompasses all relevant facets from the theoretical considerations to the use of eNOEs in multiple-state structure calculation. Important aspects include a detailed presentation of the relaxation theory relevant for the nuclear Overhauser effect, the estimation of the correction for spin diffusion, the experimental determination of the eNOEs, the conversion of eNOE rates into distances and validation of their quality, the distance-restraint classification and the protocols for calculation of structures and ensembles. © 2013 Elsevier B.V. All rights reserved.

Panse V.G.,ETH Zurich | Johnson A.W.,University of Texas at Austin
Trends in Biochemical Sciences | Year: 2010

In eukaryotic cells, ribosomes are pre-assembled in the nucleus and exported to the cytoplasm where they undergo final maturation. This involves the release of trans-acting shuttling factors, transport factors, incorporation of the remaining ribosomal proteins, and final rRNA processing steps. Recent work, particularly on the large (60S) ribosomal subunit, has confirmed that the 60S subunit is exported from the nucleus in a functionally inactive state. Its arrival in the cytoplasm triggers events that render it translationally competent. Here we focus on these cytoplasmic maturation events and speculate why eukaryotic cells have evolved such an elaborate maturation pathway. © 2010.

Lachkar Z.,ETH Zurich
Geophysical Research Letters | Year: 2014

Upwelling-favorable winds have increased in several coastal upwelling systems and may further increase in the future. The present study investigates the effects of upwelling intensification on ocean acidification in the California and Canary Current systems (CSs). Model simulations show that the volume of water undersaturated with respect to aragonite almost triples in the California CS under a doubling of wind stress. In contrast, the same wind perturbation results in the disappearance of undersaturation in the Canary CS. These contrasting responses arise from the differences in the relative contributions of circulation and biological processes to aragonite undersaturation in the two systems and the sensitivity of these processes to upwelling intensification. When combined with rising atmospheric CO2 and increased stratification, upwelling intensification accentuates acidification in the California CS and dampens it in the Canary CS. These findings highlight the challenge of predicting the future evolution of ocean acidification in regions subject to concurrent disturbances. Key Points: (1) the effect of increased coastal upwelling on ocean acidification is examined; (2) upwelling intensification has opposing effects on coastal ocean acidification; (3) the mechanisms and implications of these contrasting changes are investigated. ©2013. American Geophysical Union. All Rights Reserved.

Halperin A.,Joseph Fourier University | Kroger M.,ETH Zurich
Macromolecules | Year: 2011

Protein adsorption onto brush-displaying surfaces is strongly affected by collapse, an effect utilized in protein chromatography and in harvesting cell sheets for tissue engineering applications. For relatively small particles, the free energy penalty incurred upon insertion into the brush, Fins, is related to the work expended against the osmotic pressure of the unperturbed brush. Within the self-consistent field (SCF) theory of brushes, the scale of Fins decreases with the brush thickness 〈z〉 because the value of the osmotic pressure at the grafting surface Π(0) ∼ 〈z〉 irrespective of the interaction free energy. Brush collapse thus favors adsorption because it reduces Fins via two effects: (i) lowering of the osmotic pressure and (ii) a possible decrease of the inserted volume of the particle. These general results are supplemented by numerical solutions of the SCF equations for the collapse of thermoresponsive poly(N-isopropylacrylamide) (PNIPAM) brushes as described by the empirical free energy of Afroze et al. (J. Mol. Struct. 2000, 554, 55). These yield the monomer concentration profiles c(z) and the corresponding osmotic pressure profiles Π(z) as functions of the altitude z and the temperature T. c(z) and Π(z) are then used to characterize Fins for collapsed and swollen brushes, as well as the adsorption isotherms for three adsorption mechanisms of relevance to ex vivo biotechnology applications involving PNIPAM brushes: (a) primary adsorption at the wall, (b) adsorption onto a ligand embedded within the brush, and (c) ternary adsorption within the brush due to weak attraction between the polymer and the adsorbing particle. Our results rationalize existing experimental results concerning the interactions between proteins and PNIPAM brushes and predict the effects of tuning parameters such as grafting density, polymerization degree and protein dimensions. © 2011 American Chemical Society.
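The relation between the insertion penalty and the brush osmotic pressure can be written compactly. This is a schematic form in notation chosen here (dv(z) is the particle volume element at altitude z), not the paper's full SCF treatment:

```latex
F_{\mathrm{ins}} \;\approx\; \int \Pi(z)\,\mathrm{d}v(z)
```

where the integral runs over the volume occupied by the inserted particle and Π(z) is the osmotic pressure profile of the unperturbed brush; collapse lowers Π(z) and may shrink the inserted volume, and both effects reduce Fins.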

Glacier mass balance and secular changes in mountain glaciers and ice caps are evaluated from the annual net balance of 137 glaciers from 17 glacierized regions of the world. Further, the winter and summer balances for 35 glaciers in 11 glacierized regions are analyzed. The global means are calculated by weighting glacier and regional surface areas. The area-weighted global mean net balance for the period 1960-2000 is -270 ± 34 mm a⁻¹ w.e. (water equivalent, in mm per year), or -149 ± 19 km³ a⁻¹ w.e., with a winter balance of 890 ± 24 mm a⁻¹ w.e. (490 ± 13 km³ a⁻¹ w.e.) and a summer balance of -1,175 ± 24 mm a⁻¹ w.e. (-647 ± 13 km³ a⁻¹ w.e.). The linear-fitted global net balance is accelerating at a rate of -9 ± 2.1 mm a⁻². The main driving force behind this change is the summer balance, with an acceleration of -10 ± 2.0 mm a⁻². The decadal balance, however, shows significant fluctuations: summer melt reached its peak around 1945, followed by a decrease. The negative trend in the annual net balance is interrupted by a period of stagnation from the 1960s to the 1980s. Some regions, for example Europe, experienced a period of positive net balance during this time. The balance has become strongly negative since the early 1990s. These decadal fluctuations correspond to periods of global dimming (smaller melt) and global brightening (larger melt). The total radiation at the surface changed as a result of an imbalance between steadily increasing greenhouse gases and fluctuating aerosol emissions. The mass balance of the Greenland ice sheet and the surrounding small glaciers, averaged for the period 1950-2000, is negative at -74 ± 10 mm a⁻¹ w.e. (-128 ± 18 km³ a⁻¹ w.e.), with an accumulation of 297 ± 33 mm a⁻¹ w.e. (519 ± 58 km³ a⁻¹ w.e.), melt ablation of -169 ± 18 mm a⁻¹ w.e. (-296 ± 31 km³ a⁻¹ w.e.), calving ablation of -181 ± 19 mm a⁻¹ w.e. (-316 ± 33 km³ a⁻¹ w.e.) and bottom melt of -21 ± 2 mm a⁻¹ w.e. (-35 ± 4 km³ a⁻¹ w.e.).
Almost half (-60 ± 3 km³ a⁻¹) of the net mass loss comes from the mountain glaciers and ice caps around the ice sheet. At present, it is difficult to detect any statistically significant trends for these components. It is considered premature to evaluate the total mass balance of the Antarctic ice sheet. The estimated sea-level contributions in the twentieth century are 5.7 ± 0.5 cm from mountain glaciers and ice caps outside Antarctica, 1.9 ± 0.5 cm from the Greenland ice sheet, and 2 cm from ocean thermal expansion. The difference of 7 cm between these components and the value estimated with tide-gauge networks (17 cm) must result from other sources, such as the mass balance of glaciers in Antarctica, especially small glaciers separated from the ice sheet. © 2011 Springer Science+Business Media B.V.
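The area weighting described above amounts to simple arithmetic; a minimal sketch (function names and the regional numbers are illustrative, not from the paper):

```python
def area_weighted_mean(balances_mm_we, areas_km2):
    """Area-weighted mean specific balance (mm w.e. per year) over regions."""
    total_area = sum(areas_km2)
    return sum(b * a for b, a in zip(balances_mm_we, areas_km2)) / total_area

def mass_change_km3_we(mean_balance_mm_we, total_area_km2):
    """Convert a specific balance over an area to a volume change.
    1 mm w.e. over 1 km^2 equals 1e-6 km^3 w.e."""
    return mean_balance_mm_we * total_area_km2 * 1e-6

# Two illustrative regions: the weighted mean leans toward the larger one
print(area_weighted_mean([-300.0, -100.0], [3000.0, 1000.0]))  # -> -250.0
```

Note that the paper's pair of global figures is self-consistent under this conversion: -270 mm a⁻¹ w.e. yielding about -149 km³ a⁻¹ implies a total glacierized area of roughly 5.5 × 10⁵ km².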

Organic ligands are known to interfere with the polymerization of Fe(III), but the extent of interference has not been systematically studied as a function of structural ligand properties. This study examines how the number and position of phenol groups in hydroxybenzoic acids affect both ferrihydrite formation and its local (<5 Å) Fe coordination. To this end, acid Fe(III) nitrate solutions were neutralized up to pH 6.0 in the presence of 4-hydroxybenzoic acid (4HB), 2,4-dihydroxybenzoic acid (2,4DHB), and the hydroquinone 3,4-dihydroxybenzoic acid (3,4DHB). The initial molar ligand/Fe ratios ranged from 0 to 0.6. The precipitates were dialyzed, lyophilized, and subsequently studied by X-ray absorption spectroscopy and synchrotron X-ray diffraction. The solids contained up to 32 wt.% organic C (4HB ~ 2,4DHB < 3,4DHB). Only precipitates formed in 3,4DHB solutions comprised considerable amounts of Fe(II) (Fe(II)/Fe_tot ≤ 6 mol%), implying the abiotic mineralization of the catechol-group-bearing ligand during Fe(III) hydrolysis under oxic conditions. Hydroxybenzoic acids decreased ferrihydrite formation in the order 4HB ~ 2,4DHB ≪ 3,4DHB, which documents that phenol group position rather than the number of phenol groups controls the ligand's interaction with Fe(III). The coordination numbers of edge- and double-corner-sharing Fe in the precipitates decreased by up to 100%. Linear combination fitting (LCF) of Fe K-edge X-ray absorption near edge structure (XANES) and extended X-ray absorption fine structure (EXAFS) spectra revealed that this decrease was due to increasing amounts of organic Fe(III) complexes in the precipitates. Although EXAFS-derived coordination numbers of Fe in ferrihydrite remained constant within error, all organic ligands decreased the coherently scattering domain (CSD) size of ferrihydrite, as indicated by synchrotron X-ray diffraction analysis (4HB < 2,4DHB ≪ 3,4DHB).
With decreasing particle size of ferrihydrite, its Fe(O,OH)₆ octahedra became progressively distorted, as evidenced by an increasing loss of centrosymmetry of the Fe sites. Pre-edge peak analysis of the Fe K-edge XANES spectra in conjunction with LCF results implied that ferrihydrite contains on average 13 ± 3% tetrahedral Fe(III), which is in very good agreement with the revised single-phase structural model of ferrihydrite (Michel, F. M., Barron, V., Torrent, J., Morales, M. P. et al. (2010) Ordered ferrimagnetic form of ferrihydrite reveals links among structure, composition, and magnetism. Proc. Natl. Acad. Sci. USA 107, 2787-2792). The results suggest that hydroxybenzoic acid moieties of natural organic matter (NOM) effectively suppress ferrihydrite precipitation as they kinetically control the availability of inorganic Fe(III) species for nucleation and/or polymerization reactions. As a consequence, NOM can trigger the formation of small ferrihydrite nanoparticles with increased structural strain. These factors may eventually enhance the biogeochemical reactivity of ferrihydrite formed in NOM-rich environments. This study highlights the role of hydroquinone structures of NOM in Fe complexation, polymerization, and redox speciation. © 2011 Elsevier Ltd.

The lowland dipterocarp forests of Southeast Asia support a substantial proportion of the world's biodiversity. They are of considerable environmental and economic value at the local, regional and global scale, providing many goods and services to a growing population. The forests of this region are among the fastest disappearing in the world and restoration is urgently required. This paper provides a review of the ecological constraints to restoration of lowland dipterocarp forest in Southeast Asia. It focuses on the production of planting stock, the significance of site-species matching and post-planting site maintenance. It identifies gaps in our knowledge and highlights priority areas of research. Adopting a long-term view is essential for restoring as well as conserving the dipterocarp forests of Southeast Asia. An immediate strategy for the conservation and management of dwindling genetic resources of these important timber species is essential. This will provide the foundations for sourcing seed and production of planting material for longer term restoration. The importance of species-site matching, mycorrhizal fungi and post-planting maintenance for restoration are apparent. Financing is a major limiting factor to dipterocarp forest restoration. Trading in carbon, private finance and environmental markets afford considerable opportunities for restoring these forests providing their total value is recognised. Despite the wealth of ecological knowledge we already have for scientifically-informed forest restoration, without the backing of governments and corporate stakeholders, forest restoration will not gain the urgently required momentum. © Springer Science+Business Media B.V. 2009.

Obrist D.,ETH Zurich
Physics of Fluids | Year: 2011

Localized acoustic sources can often be modeled by wave packets. It has been recognized for a long time that the particular structure of these wave packet sources has a strong influence on the character of the acoustic emission to the far field. In the present work, we study the acoustic emission patterns with respect to the phase velocity, group velocity, size, and aspect ratio of the wave packet sources. To this end, the acoustic problem is formulated on the basis of Lighthill's acoustic analogy and then recast to the geometrical problem of conic sections. This leads to the notion of elliptic (subsonic), parabolic (sonic), and hyperbolic (supersonic) acoustic emission patterns. The resulting geometric theory for acoustic emissions from wave packets includes phenomena such as Mach waves, bi- and superdirectivity, Doppler shift, and silent directions. © 2011 American Institute of Physics.

Hansen A.K.,Yale University | Vorburger C.,ETH Zurich | Vorburger C.,Eawag - Swiss Federal Institute of Aquatic Science and Technology | Moran N.A.,Yale University
Genome Research | Year: 2012

Bacterial endosymbionts exert a variety of beneficial effects on insect hosts. In pea aphids (Acyrthosiphon pisum), several inherited endosymbiont species protect their hosts against parasitoid wasps, which are major natural enemies. However, strains of these symbiont species vary in their ability to confer protection against parasitoids, with some conferring almost complete protection and others conferring almost none. In this study, two strains of the endosymbiont Regiella insecticola (R. insecticola 5.15 and R. insecticola LSR1) were found to differ in ability to protect pea aphids attacked by the parasitoid Aphidius ervi. Parasitism trials reveal that R. insecticola 5.15, but not R. insecticola LSR1, significantly reduced parasitoid success and increased aphid survivorship. To address the potential genetic basis of protection conferred by R. insecticola 5.15 we sequenced the genome of this symbiont strain, and then compared its gene repertoire with that of the already sequenced nonprotective strain R. insecticola LSR1. We identified striking differences in gene sets related to eukaryote pathogenicity. The protective strain R. insecticola 5.15 encoded five categories of pathogenicity factors that were missing or inactivated in R. insecticola LSR1. These included genes encoding the O-antigen biosynthetic pathway, an intact Type 1 Secretion System and its secreted RTX toxins, an intact SPI-1 Type 3 Secretion System and its effectors, hemin transport, and the two-component system PhoPQ. These five pathogenicity factors and translocation systems are hypothesized to collectively play key roles in the endosymbiont's virulence against parasitoids, resulting in aphid protection. Mechanisms through which these factors may target parasitoids are discussed. © 2012 by Cold Spring Harbor Laboratory Press.

Frehner M.,ETH Zurich
Terra Nova | Year: 2014

Geological folds are inherently 3D structures; therefore, they also grow in three dimensions. Here, fold growth in all three dimensions is quantified by numerically simulating upright single-layer folds in 3D Newtonian media. Horizontal uniaxial shortening leads to a buckling instability, which grows from a point-like initial perturbation in all three dimensions by fold amplification (vertical), fold elongation (parallel to fold axis) and sequential fold growth (parallel to shortening direction) of secondary (and further) folds adjacent to the initial isolated fold. The two lateral directions exhibit similar averaged growth rates, leading to bulk fold structures with aspect ratios in map view close to 1. However, fold elongation is continuous with increasing bulk shortening, while sequential fold growth exhibits jumps whenever a new sequential fold appears and the bulk fold structure therefore suddenly occupies more space. Compared with the two lateral growth directions, fold amplification exhibits a slightly higher growth rate. © 2014 John Wiley & Sons Ltd.

Goertz F.,ETH Zurich
Physical Review Letters | Year: 2014

Measuring the Yukawa couplings of the up and down quarks, Y_u,d, seems to be far beyond the capabilities of current and (near) future experiments in particle physics. By performing a general analysis of the potential misalignment between quark masses and Yukawa couplings, we derive predictions for the magnitude of induced flavor-changing neutral currents (FCNCs), depending on the shift in the physical Yukawa coupling of first-generation quarks. We find that a change of more than 50% in Y_d would generically result in d→s transitions in conflict with kaon physics. This could already be seen as evidence for a nonvanishing direct coupling of the down quark to the newly discovered Higgs boson. The nonobservation of certain, already well-constrained, processes is thus turned into a powerful indirect measure of otherwise essentially inaccessible physical parameters of the effective standard model. Similarly, improvements in limits on FCNCs in the up-type quark sector can lead to valuable information on Y_u. © 2014 American Physical Society.

Jagoutz O.,Massachusetts Institute of Technology | Schmidt M.W.,ETH Zurich
Earth and Planetary Science Letters | Year: 2013

Most primitive arc melts are basaltic in composition, yet the bulk continental crust, thought to be generated in arcs, is andesitic. In order to produce an andesitic crust from primitive arc basalts, rocks complementary to the andesitic crust have to be fractionated and subsequently removed, most likely through density sorting in the lower arc crust. The Kohistan Arc in northern Pakistan offers a unique opportunity to constrain the composition and volume of material fluxes involved in this process. In a >10 km thick lower crustal section, cumulates (dunites, wehrlites, websterites, clinopyroxene-bearing garnetites and hornblendites, and garnet-gabbros) are exposed that are 0.1-0.3 g/cm³ denser than the underlying mantle. The cumulates combine with the andesitic bulk Kohistan Arc crust to reproduce the major and trace element composition of primitive basaltic arc melts. Our petrochemical analysis suggests that fractionation and subsequent foundering of wehrlites + ultramafic hornblende-garnet-clinopyroxene cumulates + garnet-gabbros is a viable mechanism for producing andesitic crust from a calc-alkaline/tholeiitic primitive high-Mg basalt. The mass of the foundered material is approximately twice that of the arc crust generated. For an overall andesitic arc composition, we estimate a magma flux into the arc (11-15 km³/yr) about three times the rate of arc crust production itself. Foundering fluxes of cumulates (6.4-8.1 km³/yr) are one third to one half those of the globally subducted oceanic crust (~19 km³/yr). Hence, the delaminate forms a volumetrically significant, albeit refractory and depleted, geochemical reservoir in the mantle. Owing to its low U/Pb and high Lu/Hf, the foundered material evolves with time to a reservoir characterized by unradiogenic Pb and highly radiogenic Hf isotopes, unlike any of the common mantle endmembers defined by OIB chemistry. The unradiogenic Pb of the foundered arc cumulates could counterbalance the radiogenic Pb composition of the depleted mantle.
The predicted highly radiogenic Hf (at rather unradiogenic Nd) of the foundered material can explain the εHf-εNd systematics observed in some abyssal peridotites and mantle xenoliths. © 2013.

Kornmann B.,ETH Zurich
Current Opinion in Cell Biology | Year: 2013

A long-observed but often neglected property of cellular organelles is their ability to associate into junctions. Aspects of cell physiology appear more and more to depend upon these contact sites, as their central molecular components are being identified. Contact sites between the endoplasmic reticulum (ER) and the mitochondria are emerging as a prime example of such contacts. The physiological role of these contact sites, first thought to be limited to the facilitation of lipid and calcium exchange between the two organelles, is found to extend to unexpected aspects of mitochondria and ER functions. © 2013 Elsevier Ltd.

Smith R.S.,ETH Zurich
IEEE Transactions on Automatic Control | Year: 2014

Subspace identification techniques have gained widespread acceptance as a method of obtaining a low-order model from data. They are based on using the singular-value decomposition as a means of estimating the underlying system order and extracting a basis for the extended observability space. In the presence of noise, rank determination becomes difficult and the low-rank estimates lose the structure required for exact realizability. Furthermore, the noise corrupts the singular values in a manner that is inconsistent with physical noise processes. These problems are addressed by an optimization-based approach using a nuclear norm minimization objective. By using Hankel matrices as the underlying data structure, exact realizability of the low-rank system models is maintained. Noise in the data enters the formulation linearly, allowing for the inclusion of more realistic noise weightings. A cumulative spectral weight is presented and shown to be useful in estimating models from data corrupted by noise. A numerical example illustrates the characteristics of the problem. © 2014 IEEE.
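The classical SVD step that this work builds on can be sketched as follows. The sketch shows only the Hankel-matrix construction and rank (order) estimation from singular values, not the paper's nuclear-norm formulation, which requires a convex solver; the function names and the relative tolerance are choices made here.

```python
import numpy as np

def block_hankel(y, rows):
    """Hankel matrix H[i, j] = y[i + j] built from a scalar output sequence."""
    y = np.asarray(y, dtype=float)
    cols = len(y) - rows + 1
    return np.column_stack([y[j:j + rows] for j in range(cols)])

def estimate_order(y, rows, rel_tol=1e-8):
    """System order estimate: count of singular values above a relative threshold."""
    s = np.linalg.svd(block_hankel(y, rows), compute_uv=False)
    return int(np.sum(s > rel_tol * s[0]))

# Noise-free impulse response with two real modes: the Hankel matrix has rank 2
y = [0.9 ** k + (-0.5) ** k for k in range(30)]
print(estimate_order(y, rows=8))  # -> 2
```

With noisy data the singular-value spectrum has no clean gap, which is exactly the rank-determination difficulty the nuclear-norm objective (the convex surrogate for rank) is designed to handle.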

In view of the importance of optically pure chiral products there is ample reason to develop methods that facilitate their efficient production. Compared to the mostly applied homogeneous catalysts based on transition metals coordinated to suitable chiral ligands, heterogeneous chiral catalysts could offer several features that are beneficial for practical application such as stability, ease of catalyst separation and regeneration as well as straightforward access to continuous process operation. Various strategies have been developed for imparting chirality to catalytic active surfaces, among which the chiral modification of active metal surfaces by adsorption of suitable chiral organic compounds has so far been among the most successful. In this tutorial review lessons learned from research on asymmetric hydrogenation on chirally modified noble metals will be presented. Key aspects for the design of such catalysts will be elucidated using chirally modified platinum catalysts for the asymmetric hydrogenation of α-activated ketones as an example. © The Royal Society of Chemistry.

Kuhn T.,ETH Zurich
Computational Linguistics | Year: 2014

What is here called controlled natural language (CNL) has traditionally been given many different names. Especially during the last four decades, a wide variety of such languages have been designed. They are applied to improve communication among humans, to improve translation, or to provide natural and intuitive representations for formal notations. Despite the apparent differences, it seems sensible to put all these languages under the same umbrella. To bring order to the variety of languages, a general classification scheme is presented here. A comprehensive survey of existing English-based CNLs is given, listing and describing 100 languages from 1930 until today. Classification of these languages reveals that they form a single scattered cloud filling the conceptual space between natural languages such as English on the one end and formal languages such as propositional logic on the other. The goal of this article is to provide a common terminology and a common model for CNL, to contribute to the understanding of their general nature, to provide a starting point for researchers interested in the area, and to help developers to make design decisions. © 2014 Association for Computational Linguistics.

Schilling C.,ETH Zurich
Physical Review A - Atomic, Molecular, and Optical Physics | Year: 2013

For a quantum system of N identical, harmonically interacting particles in a one-dimensional harmonic trap, we calculate analytically the one-particle reduced density operator ρ̂₁ for the bosonic and fermionic ground states. In the case of bosons, ρ̂₁ is a Gibbs state for an effective harmonic oscillator. Hence the natural orbitals are Hermite functions and their occupation numbers obey a Boltzmann distribution. Intriguingly, for fermions with not too large couplings the natural orbitals coincide, up to just a small error, with the bosonic ones. In the case of strong coupling this still holds qualitatively. Moreover, the decay of the decreasingly ordered fermionic natural occupation numbers is given by the bosonic one, but modified by an algebraic prefactor. Significant differences from bosons occur only for the largest occupation numbers. Finally, the "gap" at the "Fermi level" decreases with increasing coupling strength but remains well pronounced even for strong interaction. © 2013 American Physical Society.
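The statement that the bosonic ρ̂₁ is a Gibbs state can be made concrete. In notation chosen here (not necessarily the paper's), with an effective oscillator Hamiltonian ĥ_eff and an effective inverse temperature β_eff fixed by N and the coupling strength:

```latex
\hat{\rho}_1 \propto e^{-\beta_{\mathrm{eff}} \hat{h}_{\mathrm{eff}}}
\quad\Longrightarrow\quad
\lambda_k = (1 - q)\,q^{k}, \qquad
q = e^{-\beta_{\mathrm{eff}} \hbar \omega_{\mathrm{eff}}}, \quad k = 0, 1, 2, \dots
```

so the natural orbitals are the Hermite-function eigenstates of ĥ_eff and the occupation numbers λ_k fall off geometrically, which is exactly the Boltzmann distribution referred to in the abstract.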

Blanco M.G.,Cancer Research UK Research Institute | Matos J.,Cancer Research UK Research Institute | Matos J.,ETH Zurich | West S.C.,Cancer Research UK Research Institute
Molecular Cell | Year: 2014

The careful orchestration of cellular events such as DNA replication, repair, and segregation is essential for equal distribution of the duplicated genome into two daughter cells. To ensure that persistent recombination intermediates are resolved prior to cell division, the Yen1 Holliday junction resolvase is activated at anaphase. Here, we show that the master cell-cycle regulators, cyclin-dependent kinase (Cdk) and Cdc14 phosphatase, control the actions of Yen1. During S phase, Cdk-mediated phosphorylation of Yen1 promotes its nuclear exclusion and inhibits catalytic activity by reducing the efficiency of DNA binding. Later in the cell cycle, at anaphase, Cdc14 drives Yen1 dephosphorylation, leading to its nuclear relocalization and enzymatic activation. Using a constitutively activated form of Yen1, we show that uncontrolled Yen1 activity is detrimental to the cell: spatial and temporal restriction of Yen1 protects against genotoxic stress and, by avoiding competition with the noncrossover-promoting repair pathways, prevents loss of heterozygosity. © 2014 The Authors.

Ju T.,Emory University | Otto V.I.,ETH Zurich | Cummings R.D.,Emory University
Angewandte Chemie - International Edition | Year: 2011

Glycoproteins in animal cells contain a variety of glycan structures that are added co- and/or posttranslationally to proteins. Of over 20 different types of sugar-amino acid linkages known, the two major types are N-glycans (Asn-linked) and O-glycans (Ser/Thr-linked). An abnormal mucin-type O-glycan whose expression is associated with cancer and several human disorders is the Tn antigen. It has a relatively simple structure composed of N-acetyl-D-galactosamine with a glycosidic α-linkage to serine/threonine residues in glycoproteins (GalNAcα1-O-Ser/Thr), and was one of the first glycoconjugates to be chemically synthesized. The Tn antigen is normally modified by a specific galactosyltransferase (T-synthase) in the Golgi apparatus of cells. Expression of active T-synthase is uniquely dependent on the molecular chaperone Cosmc, which is encoded by a gene on the X chromosome. Expression of the Tn antigen can arise as a consequence of mutations in the genes for T-synthase or Cosmc, or genes affecting other steps of O-glycosylation pathways. Because of the association of the Tn antigen with disease, there is much interest in the development of Tn-based vaccines and other therapeutic approaches based on Tn expression. An abnormal sugar: The expression of the abnormal O-glycan called Tn antigen (see structure) in animal glycoproteins typically represents a disease condition. This Review discusses a broad range of chemical and biological studies on the Tn antigen that could lead to new diagnostics and therapeutics. © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

Stark W.J.,ETH Zurich
Angewandte Chemie - International Edition | Year: 2011

Understanding the behavior of nanoparticles in biological systems opens up new directions for medical treatments and is essential for the development of safe nanotechnology. This Review discusses molecules and nanoparticles in contact with cells or whole organisms, with a focus on inorganic materials. The interaction of particles with biology reveals a series of mechanisms not found for molecules: altered biodistribution, chemically reactive interfaces, and the combination of solid-state properties and mobility. Externally guided movement of medicaments by using functional nanomagnets brings mechanics into drug design. In subsequent sections, the role of inertness and bioaccumulation is discussed with regard to the long-term safety of nanoparticles. Particles in motion: Nanoparticles combine the properties of a traditional solid (magnetic, optical, mechanical, radiative) with the mobility of molecules, including the ability to diffuse inside an organism. This combination opens up fascinating opportunities for medical treatments, industrial processes, and improved consumer products, but the environmental aspects must also be considered. © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

Pizorn I.,ETH Zurich
Physical Review A - Atomic, Molecular, and Optical Physics | Year: 2013

We study the nonequilibrium steady state of the one-dimensional Bose-Hubbard model coupled to Lindblad reservoirs, using the density-matrix renormalization group in operator space. We observe a transition from a flat particle density profile in the noninteracting limit to a linearly increasing profile at the onset of the interparticle interaction. Analyzing the effect of coherent pumping on the nonequilibrium steady state, we find a subspace which remains unaffected by the pumping in the noninteracting limit, with this protection gradually diminishing as the interparticle interaction is switched on. In the equilibrium situation with one or more symmetric reservoirs, we show analytically that the steady state of the system is a product state for any interaction strength. We also provide analytical results in the noninteracting limit, using the method of third quantization in operator space. © 2013 American Physical Society.
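For context, coupling to Lindblad reservoirs means the system density matrix evolves under a master equation of the standard Lindblad form (generic notation; the paper's specific jump operators and rates are not given in the abstract):

```latex
\frac{d\hat{\rho}}{dt} = -\frac{i}{\hbar}\left[\hat{H},\hat{\rho}\right]
  + \sum_k \gamma_k \left( \hat{L}_k \hat{\rho} \hat{L}_k^{\dagger}
  - \tfrac{1}{2}\left\{ \hat{L}_k^{\dagger} \hat{L}_k,\, \hat{\rho} \right\} \right)
```

The nonequilibrium steady state studied here is the fixed point dρ̂/dt = 0 of this evolution.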

Gonnet G.H.,ETH Zurich
BMC Bioinformatics | Year: 2012

Background: We analyze phylogenetic tree building methods from molecular sequences (PTMS). These are methods which base their construction solely on sequences, coding DNA or amino acids. Results: Our first result is a statistically significant evaluation of 176 PTMSs done by comparing trees derived from 193138 orthologous groups of proteins using a new measure of quality between trees. This new measure, called the Intra measure, is very consistent between different groups of species and strong in the sense that it separates the methods with high confidence. The second result is the comparison of the trees against trees derived from accepted taxonomies, the Taxon measure. We consider the NCBI taxonomic classification and its derived topologies as the most accepted biological consensus on phylogenies, which are also available in electronic form. The correlation between the two measures is remarkably high, which supports both measures simultaneously. Conclusions: The big surprise of the evaluation is that the maximum likelihood methods do not score well; minimal-evolution distance methods over MSA-induced alignments score consistently better. This comparison also allows us to rank different components of the tree building methods, such as MSAs, substitution matrices, ML tree builders, and distance methods. It is also clear that there is a difference between Metazoa and the rest, which points to evolution leaving different molecular traces. We also think that these measures of quality of trees will motivate the design of new PTMSs, as it is now easier to evaluate them with certainty. © 2012 Gonnet; licensee BioMed Central Ltd.
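The Intra and Taxon measures themselves are not defined in this abstract. As a generic illustration of quantifying topological disagreement between two trees (a stand-in, not the paper's measure), the sketch below computes a Robinson-Foulds-style clade distance for rooted trees encoded as nested tuples of leaf names:

```python
def clade_sets(tree):
    """Collect the leaf set of every internal node (the clades) of a
    rooted tree given as nested tuples of leaf-name strings."""
    clades = set()

    def rec(node):
        if isinstance(node, str):          # leaf node
            return frozenset([node])
        s = frozenset().union(*(rec(child) for child in node))
        clades.add(s)
        return s

    full = rec(tree)
    clades.discard(full)                   # the root clade is uninformative
    return clades

def rf_distance(t1, t2):
    """Symmetric difference of clade sets: 0 for identical topologies."""
    return len(clade_sets(t1) ^ clade_sets(t2))

t1 = ((("A", "B"), "C"), ("D", "E"))
t2 = ((("A", "C"), "B"), ("D", "E"))
# t1 and t2 disagree on exactly one clade each: {A,B} vs. {A,C}
```

Measures such as the paper's Intra measure aggregate comparisons of this kind over many orthologous groups rather than scoring a single tree pair.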

Stenflo J.O.,ETH Zurich
Astrophysical Journal | Year: 2015

The symmetric peak observed in linear polarization in the core of the solar sodium D1 line at 5896 Å has remained enigmatic since its discovery nearly two decades ago. One reason is that the theory of polarized scattering has not been experimentally tested for multi-level atomic systems in the relevant parameter domains, although the theory is continually being used for the interpretation of astrophysical observations. A laboratory experiment that was set up a decade ago to find out whether the D1 enigma is a problem of solar physics or quantum physics revealed that the D1 system has a rich polarization structure in situations where standard scattering theory predicts zero polarization, even when optical pumping of the m state populations of the hyperfine-split ground state is accounted for. Here we show that the laboratory results can be modeled in great quantitative detail if the theory is extended to include the coherences in both the initial and final states of the scattering process. Radiative couplings between the allowed dipole transitions generate coherences in the initial state. Corresponding coherences in the final state are then demanded by a phase closure selection rule. The experimental results for the well understood D2 line are used to constrain the two free parameters of the experiment, collision rate and optical depth, to suppress the need for free parameters when fitting the D1 results. © 2015. The American Astronomical Society. All rights reserved.

Saget T.,Ecole Polytechnique Federale de Lausanne | Saget T.,ETH Zurich | Cramer N.,Ecole Polytechnique Federale de Lausanne
Angewandte Chemie - International Edition | Year: 2012

Activated: The title reaction proceeds efficiently with 1 mol % of palladium and gives tetrahydroquinolines in excellent enantioselectivities (see scheme). The enantiodiscriminating concerted metalation-deprotonation step occurs via a rare seven-membered palladacycle. The cyclopropyl-substituted tetrahydroquinolines can be regioselectively and enantiospecifically reduced to chiral tetrahydrobenzoazepines. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

Thilgen C.,ETH Zurich
Angewandte Chemie - International Edition | Year: 2012

Bending over backwards to build buckybowls and -balls? For the last two decades, flash vacuum pyrolysis of chloro- and bromoarenes has been the method of choice for intramolecular aryl-aryl couplings that afford geodesic polynuclear arenes ("buckybowls"), including the fullerene C60. Recently, even fluoroarenes were subjected, under relatively mild conditions, to intramolecular CAr-CAr couplings by combining the cleavage of CAr-F bonds with the formation of very strong Si-F or Al-F bonds (see picture; green C, white H). Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

Saget T.,Ecole Polytechnique Federale de Lausanne | Saget T.,ETH Zurich | Lemouzy S.J.,Ecole Polytechnique Federale de Lausanne | Cramer N.,Ecole Polytechnique Federale de Lausanne
Angewandte Chemie - International Edition | Year: 2012

Teaming up: The important indoline scaffold is obtained with enantiomeric ratios of up to 98:2 in palladium(0)-catalyzed C(sp3)-H activations of aryl triflates. The key is the combination of the electron-rich monodentate Sagephos and the bulky 9H-xanthene-9-carboxylic acid. Both participate in a highly cooperative manner in the enantiodetermining concerted metalation-deprotonation step (see scheme, Tf=triflate). Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

Stenflo J.O.,ETH Zurich
Astronomy and Astrophysics | Year: 2010

The statistical properties of the highly structured magnetic field of the quiet Sun are best described in terms of distribution functions, in particular the probability density functions (PDF) for the flux densities and the angular distribution for the orientations of the field vector. They are needed to test the validity of various MHD simulations, but past determinations have led to contradictory results. A main reason for these difficulties lies in the circumstance that the magnetic structuring continues on scales that are much smaller than the telescope resolution, and that this structuring strongly affects the quantities averaged over each pixel due to the non-linear relation between polarization and magnetic field. Here we use a Hinode SOT/SP data set for the disk center of the quiet Sun to explore the complex behavior of the polarized 6301-6302 Å line system and identify the observables that allow the most robust determinations of inclination angles and flux densities. These observables are then used to derive the empirical distribution functions. Our Stokes V line ratio analysis leads us to an unexpected discovery: a magnetic dichotomy with two distinct populations, representing strong (kG) and weak fields. This can be understood in terms of the convective collapse mechanism, which makes the Sun's magnetic flux end up in two states: collapsed and uncollapsed. With the linear-to-circular polarization ratio as a robust observable for the inclination angles, we find that the angular distribution is extremely peaked around the vertical direction for the largest flux densities, but gradually broadens as we go to smaller flux densities, to become asymptotically isotropic at zero flux density. The PDF for the vertical flux density, after accounting for the smearing effect of measurement noise, is found to have an extremely narrow core peak centered at zero flux density, which can be analytically represented by a stretched exponential.
The PDF wings are extended and decline quadratically. The PDFs for the horizontal and total flux densities behave similarly. In particular, we demonstrate that the earlier claim that the PDF for the total flux density rises from small values at zero flux density to a maximum significantly shifted from zero is an artefact of measurement noise. © 2010 ESO.
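For reference, a stretched exponential, the functional form invoked for the core peak, can be written in its general form as follows, where B is the vertical flux density, B0 a width parameter, and β the stretching exponent (the fitted parameter values are not given in the abstract):

```latex
p(B) \;\propto\; \exp\!\left[ -\left( \frac{|B|}{B_{0}} \right)^{\beta} \right],
\qquad 0 < \beta < 1
```

For β = 1 this reduces to an ordinary exponential; β < 1 produces the narrower core peak described in the abstract.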

Zimmermann M.B.,ETH Zurich
Paediatric and Perinatal Epidemiology | Year: 2012

Iodine requirements are increased ≥50% during pregnancy. Iodine deficiency during pregnancy can cause maternal and fetal hypothyroidism and impair neurological development of the fetus. The consequences depend upon the timing and severity of the hypothyroidism; the most severe manifestation is cretinism. In moderate-to-severely iodine-deficient areas, controlled studies have demonstrated that iodine supplementation before or during early pregnancy eliminates new cases of cretinism, increases birthweight, reduces rates of perinatal and infant mortality and generally increases developmental scores in young children by 10-20%. Mild maternal iodine deficiency can cause thyroid dysfunction but whether it impairs cognitive and/or neurologic function in the offspring remains uncertain. Two meta-analyses have estimated that iodine-deficient populations experience a mean reduction in IQ of 12-13.5 points. In nearly all regions affected by iodine deficiency, salt iodisation is the most cost-effective way of delivering iodine and improving maternal and infant health. © 2012 Blackwell Publishing Ltd.

Sanders I.R.,University of Lausanne | Croll D.,ETH Zurich
Annual Review of Genetics | Year: 2010

Arbuscular mycorrhizal symbioses occur between fungi and the majority of plant species. They are important for plant nutrition, plant growth, protection from pathogens, plant diversity, nutrient cycling, and ecosystem processes. A key goal in research is to understand the molecular basis of the establishment, regulation, and functioning of the symbiosis. However, lack of knowledge on the genetics of the fungal side of this association has hindered progress. Here, we show how several key, recently discovered processes concerning the genetics of arbuscular mycorrhizal fungi could be essential for ultimately understanding the molecular genetics of this important symbiosis with plants. © 2010 by Annual Reviews. All rights reserved.

Kohler C.,Swedish University of Agricultural Sciences | Wolff P.,Swedish University of Agricultural Sciences | Wolff P.,ETH Zurich | Spillane C.,National University of Ireland
Annual Review of Plant Biology | Year: 2012

Genomic imprinting, the differential expression of an autosomal gene that is dependent on its parent of origin, has independently evolved in flowering plants and mammals. In both of these organism classes, imprinting occurs in embryo-nourishing tissues-the placenta and the endosperm, respectively. It has been proposed that some imprinted genes control nutrient flow from the mother to the offspring. Genome-wide analyses of imprinted genes in plants have revealed that many imprinted genes are located in the vicinity of transposon or repeat sequences, implying that transposon insertions are associated with the evolution of imprinted loci. Imprinted expression of a number of genes is conserved between monocots and dicots, suggesting that long-term selection can maintain imprinted expression at some loci. In terms of epigenetic mechanisms, imprinted expression is largely controlled by an antagonistic action of DNA methylation and Polycomb group-mediated histone methylation in the vicinity of imprinted genes, whereby the position of such epigenetic modifications can determine whether a gene will be expressed mainly from either the maternally or paternally inherited alleles. © 2012 by Annual Reviews. All rights reserved.

Stitt M.,Max Planck Institute of Molecular Plant Physiology | Zeeman S.C.,ETH Zurich
Current Opinion in Plant Biology | Year: 2012

Many plants store part of their photosynthate as starch during the day and remobilise it to support metabolism and growth at night. Mutants unable to synthesize or degrade starch show strongly impaired growth except under long-day conditions. In rapidly growing plants, starch turnover is regulated such that starch is almost, but not completely, exhausted at dawn. There is increasing evidence that premature or incomplete exhaustion of starch results in lower rates of plant growth. This review provides an update on the pathways of starch synthesis and degradation. We discuss recent advances in understanding how starch turnover and the use of carbon for growth are regulated during diurnal cycles, with special emphasis on the role of the biological clock. Much of the molecular and genetic research on starch turnover has been performed in the reference system Arabidopsis. This review considers to what extent information gained in this weed species may be applicable to annual crops and perennial species. © 2012.

Six J.,ETH Zurich | Paustian K.,Colorado State University
Soil Biology and Biochemistry | Year: 2014

Our 2000 paper "Soil macroaggregate turnover and microaggregate formation: A mechanism for C sequestration under no-tillage agriculture" had its genesis in attempts to identify and isolate soil organic matter (SOM) fractions that reflect the impacts of climate, soil physiochemical properties and physical disturbance on the soil organic carbon balance. A key prerequisite for the investigation was the development of a simple device to isolate the microaggregates (53-250 μm) contained within stable (i.e., resistant to slaking) macroaggregates (>250 μm) obtained by conventional wet-sieving. By comparing the abundance and C content of micro-within-macroaggregates, the size distribution of intra-aggregate particulate organic matter (iPOM) and isotopically based estimates of the age of the organic matter in the different fractions, we were able to corroborate our hypothesis that the absence of tillage (i.e., in no-till and native soils) promotes greater longevity of newly formed macroaggregates, resulting in greater SOM stabilization in microaggregates formed within stable macroaggregates. Follow-up research has indicated that the microaggregate-within-macroaggregate fraction is 1) potentially a robust indicator for management-induced SOC changes over decadal time scales, 2) of biological origin and therefore useful in interpreting impacts of soil biota on soil C and N dynamics, but not in-situ CO2 and N2O fluxes, 3) useful in complementary chemical and spectroscopic approaches to relate SOM dynamics to soil structure and attributes of the soil pore space, and 4) a good candidate for being incorporated into models as a measurable fraction. © 2013 Elsevier Ltd.

Hurrell R.F.,ETH Zurich
Annals of Nutrition and Metabolism | Year: 2011

Where malaria surveillance and health care is inadequate, iron supplements given without food can increase the severity of malarial infections. The likely explanation is that the rate of iron influx into the plasma from high-dose oral supplements exceeds the rate of iron binding to transferrin and a quantity of non-transferrin-bound iron (NTBI) is formed. It is proposed that NTBI increases the intensity of malarial infections by increasing the sequestration of malaria-infected red cells in the capillaries of the brain and intestine, causing more cerebral malaria and further increasing the permeability of the intestinal barrier to the passage of pathogens. Bacteremia is frequently reported in children with severe malaria. At the same time, high iron doses stimulate the growth of pathogenic bacteria in the stool, further increasing the potential for bacteremia. The normal immune response to malaria, as well as other infections and inflammatory disorders, is to prevent further microbial growth by stimulating hepcidin synthesis and preventing the passage of iron into the plasma. Iron absorption is decreased and the efficacy of the iron interventions would be expected to be lower in the presence of infections. Copyright © 2011 S. Karger AG, Basel.

Balabin R.M.,ETH Zurich
Physical Chemistry Chemical Physics | Year: 2010

The jet-cooled spontaneous Raman spectrum of an amino acid - alanine (Ala, 2-aminopropanoic acid; H2NCH(CH3)COOH) - is reported. The low-frequency vibrational spectrum (below 500 cm-1) was recorded and assigned using quantum chemical data: ab initio (MP2) and DFT (BLYP, B3LYP, and PBE0). Band polarization measurements were used to confirm the vibrational assignments. The acquired medium-resolution spectra (HWHM of ∼4 cm-1) allow the different alanine conformations to be distinguished. Four alanine conformers were observed and identified: two previously reported by microwave spectroscopy studies and two that were previously unreported. Reasons why these conformers eluded previous studies are discussed. Selective collisional relaxation processes in the jet (associated with low interconversion barriers between different alanine conformations) that depopulate the high-energy conformers were experimentally demonstrated. © 2010 the Owner Societies.

Cheng Z.,Kraft Foods Inc. | Ristow M.,ETH Zurich
Antioxidants and Redox Signaling | Year: 2013

Mitochondrial function is fundamental to metabolic homeostasis. In addition to converting the nutrient flux into the energy molecule ATP, mitochondria generate intermediates for biosynthesis and reactive oxygen species (ROS) that serve as secondary messengers to mediate signal transduction and metabolism. Alterations of mitochondrial function, dynamics, and biogenesis have been observed in various metabolic disorders, including aging, cancer, diabetes, and obesity. However, the mechanisms responsible for mitochondrial changes and the pathways leading to metabolic disorders remain to be defined. In the last few years, tremendous efforts have been devoted to addressing these complex questions and have led to significant progress. In a timely manner, the Forum on Mitochondria and Metabolic Homeostasis intends to document the latest findings in both original research and review articles, with a focus on three major complex issues: (1) mitochondria and mitochondrial oxidants in aging, where the oxidant theory (including mitochondrial ROS) is revisited by a hyperfunction hypothesis and a novel role of SMRT in the mitochondrion-mediated aging process is discussed; (2) impaired mitochondrial capacity (e.g., fatty acid oxidation and oxidative phosphorylation [OXPHOS] for ATP synthesis) and plasticity (e.g., the response to endocrine and metabolic challenges, and to calorie restriction) in diabetes and obesity; (3) mitochondrial energy adaptation in cancer progression, where a new view is provided of H+-ATP synthase in regulating cell cycle and proliferation by mediating mitochondrial OXPHOS, oxidant production, and cell death signaling. It is anticipated that this timely Forum will advance our understanding of mitochondrial dysfunction in metabolic disorders. Antioxid. Redox Signal. 19, 240-242. © 2013, Mary Ann Liebert, Inc.

Bordignon E.,ETH Zurich
Topics in Current Chemistry | Year: 2012

EPR spectroscopy of site-directed spin labeled membrane proteins is at present a common and valuable biophysical tool to study structural details and conformational transitions under conditions relevant to function. EPR is considered a complementary approach to X-ray crystallography and NMR because it provides detailed information on (1) side chain dynamics with an exquisite sensitivity for flexible regions, (2) polarity and water accessibility profiles across the membrane bilayer, and (3) distances between two spin labeled side chains during protein functioning. Despite the drawback of requiring site-directed mutagenesis for each new piece of information to be collected, EPR can be applied to any complex membrane protein system, independently of its size. This chapter describes the state of the art in the application of site-directed spin labeling (SDSL) EPR to membrane proteins, with specific focus on the different types of information which can be obtained with continuous wave and pulsed techniques. © 2011 Springer-Verlag Berlin Heidelberg.

Bode J.W.,ETH Zurich
Topics in Organometallic Chemistry | Year: 2013

The chemical synthesis of peptides and proteins has long relied on innovative inventions of amide-forming reactions. Our group has discovered and developed an amide-forming ligation reaction by the coupling of α-ketoacids and hydroxylamine (KAHA ligation). This reaction does not require reagents or catalysts, proceeds in the presence of unprotected functional groups, and generates no nonvolatile by-products. This chapter recounts our discovery of this reaction, our development of novel methods for the preparation of α-ketoacids and hydroxylamines, and the application of these new methods to the synthesis of peptides and proteins. © Springer-Verlag Berlin Heidelberg 2012.

Governments in international negotiations, such as the United Nations Framework Convention on Climate Change (UNFCCC) negotiations, use different negotiation strategies to increase their impact; these strategies can be hard or soft, that is, they add more or less conflict to a negotiation. However, little is known about the structural (e.g. economic size) and domestic (e.g. pressure from stakeholders) factors that determine the choice of strategies. Using an original data set created from interviews with 58 delegations who attended the UNFCCC negotiations in Copenhagen, it is shown that economic power, democratic status, and pressure from domestic stakeholders had an effect on whether these delegations used hard negotiation strategies. Economically powerful states were likely to use such strategies; in contrast, democracies tend to use hard strategies less often once their economic status is accounted for. However, democracies can be pressured to use hard strategies when they are lobbied by their domestic interest groups (e.g. GHG emitter interests, environmental groups). © 2012 Taylor and Francis Group, LLC.

The accuracy of quantum chemical treatment of biopolymers by means of density functional theory is brought into question in terms of intramolecular basis set superposition error (BSSE). Secondary structure forms - β-strands (C5; fully extended conformation), repeated γ-turns (C7), 310-helices (C10), and α-helices (C13) - of homopolypeptides (polyglycine and polyalanine) are used as representative examples. The studied molecules include Ace(Gly)5NH2, Ace(Gly)10NH2, Ace(Ala)5NH2, and Ace(Ala)10NH2. The counterpoise correction procedure was found to produce reliable estimations for the BSSE values (other methods of BSSE correction are discussed). The calculations reported here used the B3LYP, PBE0 (PBE1PBE), and BMK density functionals with different basis sets [from 6-31G(d) to 6-311+G(3df,3pd)] to estimate the influence of basis set size on intramolecular BSSE. Calculation of BSSE was used to determine the deviation of the current results from the complete basis set limit. Intramolecular BSSE was found to be nonadditive with respect to biopolymer size, in contrast to claims in recent literature. The error produced by basis set superposition was found to exceed 4 kcal mol-1 when a medium-sized basis set was used. This indicates that this error has the same order of magnitude as the relative energy differences of secondary structure elements of biopolymers. This result makes all recent reports on the gas-phase stability of homopolypeptides and their analogs questionable. © 2010 American Institute of Physics.
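For reference, the counterpoise correction mentioned here follows the standard Boys-Bernardi scheme: for two fragments A and B, each fragment energy is recomputed in the full basis χ_AB of the combined system (the intramolecular variant partitions the molecule into such fragments; the specific partitioning used in the paper is not given in the abstract):

```latex
E_{\mathrm{int}}^{\mathrm{CP}} \;=\; E_{AB}(\chi_{AB}) \;-\; E_{A}(\chi_{AB}) \;-\; E_{B}(\chi_{AB})
```

The BSSE estimate is then the difference between this corrected interaction energy and the uncorrected one, in which each fragment is computed in its own basis only.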

Zamboni N.,ETH Zurich
Current Opinion in Biotechnology | Year: 2011

Experimental determination of in vivo metabolic rates by methods of 13C metabolic flux analysis is a pivotal approach to unravel the structure and regulation of metabolic networks, in particular with microorganisms grown in minimal media. However, the study of real-life and eukaryotic systems calls for the quantification of fluxes also in cellular compartments, rich media, cell-wide metabolic networks, dynamic systems, or single cells. These scenarios drastically increase the complexity of the task, which is only partly addressed by existing approaches that rely on rigorous simulations of label propagation through metabolic networks and require multiple labeling experiments or a priori information on pathway inactivity to simplify the problem. Albeit qualitative and largely driven by human interpretation, statistical analysis of measured 13C-patterns remains the only alternative for comprehensively handling such complex systems. In the future, this practice will be complemented by novel modeling frameworks that assay particular fluxes within a network by stable isotopic tracers, for targeted validation of well-defined hypotheses. © 2010 Elsevier Ltd.

Ernst R.R.,ETH Zurich
Journal of Raman Spectroscopy | Year: 2010

Procedures and versatile Raman instruments are described for the non-destructive in situ analysis of pigments in large paintings. A commercial Raman microscope is mounted on a gantry for scanning paintings with dimensions exceeding 1 m2. Design principles and the physical implementation of the set-up are outlined. The advantages, disadvantages, and performance of the gantry-based instrument are compared with those of a mobile Raman probe attached to the same Raman microscope. The two set-ups are applied to Central Asian thangka paintings. The utility of the gantry-mounted Raman microscope is demonstrated on a 19th-century Buddhist painting from Buriatia, South Siberia. Surprisingly, three arsenic-based pigments, i.e. orpiment, realgar, and pararealgar, are all found in the same painting. Pararealgar is used for painting the numerous yellow areas. Realgar is admixed with red lead to adjust its orange tint. Finally, orpiment is blended with Prussian blue to produce green. Traditional malachite is used in addition as a non-adulterated green pigment. The mobile Raman probe was employed to examine an 18th-century Tibetan painting from Derge monastery in the Kham area of Sichuan. This unique painting could be dated well and its origin accurately located; in fact, the painter's workshop where the thangka was executed is shown in great detail on the painting itself. The painter's palette of this thangka matches the canonical set of pigments used in Tibet for more than 10 centuries. © 2009 John Wiley & Sons, Ltd.

Luisier M.,ETH Zurich
Physical Review B - Condensed Matter and Materials Physics | Year: 2012

Phonon transport is simulated in ultrascaled nanowires in the presence of anharmonic phonon-phonon scattering. A modified valence-force-field model containing four types of bond deformation is employed to describe the phonon band structure. The inclusion of five additional bond deformation potentials allows us to account for anharmonic effects. Phonon-phonon interactions are introduced through inelastic scattering self-energies solved in the self-consistent Born approximation in the nonequilibrium Green's function formalism. After calibrating the model with experimental data, the thermal current, resistance, and conductivity of 〈100〉-, 〈110〉-, and 〈111〉-oriented Si nanowires with different lengths and temperatures are investigated in the presence of anharmonic phonon-phonon scattering and compared to their ballistic limit. It is found that all the simulated thermal currents exhibit a peak at temperatures around 200 K if phonon scattering is turned on while they monotonically increase when this effect is neglected. Finally, phonon transport through Si-Ge-Si nanowires is considered. © 2012 American Physical Society.

Diederich F.,ETH Zurich
Angewandte Chemie - International Edition | Year: 2013

This Review investigates the development of Angewandte Chemie since the founding of the journal in 1887 and analyzes how its content reflects the changes in chemical research over these 125 years. Although Angewandte Chemie was originally founded as a journal for applied ("angewandte") - technical and analytical - chemistry, numerous review articles and abstracts published even in its first 50 years enable the milestones in chemical research in a much broader sense to be traced nicely. With the introduction of the International Edition in 1962, the author base, which had until then been primarily limited to German-speaking countries, became increasingly international, and the journal experienced impressive growth. Today, with its attractive layout, successful mix of articles, and high impact factor, Angewandte Chemie covers chemical research around the world in its full breadth, with its many achievements and future challenges. The development of Angewandte Chemie and the changes in chemical research over the first 125 years of its existence are the topic of this Review. Even from the content of its first 50 years, the journal offers numerous milestones in chemical research, and today, with its attractive layout and successful mix of articles, it covers chemical research worldwide in its full breadth. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

Romero M.,IMDEA Madrid Institute for Advanced Studies | Steinfeld A.,ETH Zurich | Steinfeld A.,Paul Scherrer Institute
Energy and Environmental Science | Year: 2012

Concentrated solar energy provides a virtually unlimited source of clean, non-polluting, high-temperature heat. This article reviews the underlying principles of concentrating solar radiation and describes the latest technological advances and future prospects of solar thermal power and thermochemical fuel production. © 2012 The Royal Society of Chemistry.

Hacker B.R.,University of California at Santa Barbara | Gerya T.V.,ETH Zurich
Tectonophysics | Year: 2013

Regional ultrahigh-pressure (UHP) metamorphic terranes exhibit a spectrum of lithological, structural and petrological characteristics that result from the geodynamic processes that formed and exhumed them. At least six geodynamic processes can be envisioned to have carried continental rocks to mantle depths: i) continental margin subduction, ii) microcontinent subduction, iii) sediment subduction, iv) intracontinental subduction, v) subduction erosion, and vi) foundering of a crustal root. Most of these processes have been investigated through numerical or analog models and most have been invoked for one or more specific occurrences of UHP rocks. At least six geodynamic processes can be envisioned to have exhumed UHP continental rocks: i) eduction, ii) microplate rotation, iii) crustal stacking, iv) slab rollback, v) channel flow, and vi) trans-mantle diapirs. Most of these processes have also been investigated through numerical or analog models and all have been invoked to explain the exhumation of at least one UHP terrane. More-detailed and systematic field investigations are warranted to assess the predictions of numerical models, and more-sophisticated and realistic numerical models are required to replicate and explain the petrological, structural, and chronological data obtained from UHP terranes. © 2013 Elsevier B.V.

Van Gunsteren W.F.,ETH Zurich
Angewandte Chemie - International Edition | Year: 2013

"Seven deadly sins" in modern academic research and publishing can be condensed into a list ranging from poorly described experimental or computational setups to falsification of data. This Essay describes these sins and their ramifications, and serves as a code of best practice for researchers in their quest for scientific truth. Picture: Detail from "The Seven Deadly Sins" by Hieronymus Bosch. © 2013 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

Mignan A.,ETH Zurich
Journal of Geophysical Research: Solid Earth | Year: 2012

We investigated the functional shape of the earthquake frequency-magnitude distribution (FMD) to identify its dependence on the completeness magnitude Mc. The FMD takes the form N(m) ∝ exp(-βm)q(m), where N(m) is the event number, m the magnitude, exp(-βm) the Gutenberg-Richter law and q(m) a detection function. q(m) is commonly defined as the cumulative Normal distribution to describe the gradual curvature of bulk FMDs. Recent results however suggest that this gradual curvature is due to Mc heterogeneities, meaning that the functional shape of the elemental FMD has yet to be described. We propose a detection function of the form q(m) = exp(κ(m - Mc)) for m < Mc and q(m) = 1 for m ≥ Mc, which leads to an FMD of angular shape. The two FMD models are compared in earthquake catalogs from Southern California and Nevada and in synthetic catalogs. We show that the angular FMD model better describes the elemental FMD and that the sum of elemental angular FMDs leads to the gradually curved bulk FMD. We propose an FMD shape ontology consisting of 5 categories depending on the Mc spatial distribution, from Mc constant to Mc highly heterogeneous: (I) Angular FMD, (II) Intermediary FMD, (III) Intermediary FMD with multiple maxima, (IV) Gradually curved FMD and (V) Gradually curved FMD with multiple maxima. We also demonstrate that the gradually curved FMD model overestimates Mc. This study provides new insights into earthquake detectability properties by using seismicity as a proxy and the means to accurately estimate Mc in any given volume. © 2012. American Geophysical Union. All Rights Reserved.
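The angular FMD model can be sketched numerically: a Gutenberg-Richter term exp(-βm) is multiplied by a detection function q(m) that decays exponentially below the completeness magnitude Mc and equals 1 above it. The parameter values below (β = ln 10, κ = 3, Mc = 2) are illustrative choices, not values from the study.

```python
import numpy as np

def detection_function(m, mc, kappa):
    """Angular detection function: exp(kappa*(m - mc)) below Mc, 1 at and above Mc."""
    m = np.asarray(m, dtype=float)
    return np.where(m < mc, np.exp(kappa * (m - mc)), 1.0)

def fmd(m, mc, beta=np.log(10.0), kappa=3.0, n0=1.0):
    """Non-cumulative FMD: N(m) proportional to exp(-beta*m) * q(m)."""
    m = np.asarray(m, dtype=float)
    return n0 * np.exp(-beta * m) * detection_function(m, mc, kappa)

m_grid = np.arange(0.0, 6.01, 0.1)
n_events = fmd(m_grid, mc=2.0)
peak_magnitude = m_grid[np.argmax(n_events)]  # the angular peak sits at Mc
```

On this grid the non-cumulative distribution peaks exactly at Mc, which is the angular shape described above.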

Woerter M.,ETH Zurich
Journal of Technology Transfer | Year: 2012

This paper investigates the technological orientation of firms and universities and their propensity to engage in knowledge and technology transfer (KTT) activities. This study looks at the technological potential for KTT and how it is used, emphasizing differences between smaller and larger firms. To this end we collected information about the technology activities of firms (patent statistics) and the technology activities of universities. Furthermore, we used survey data on technology transfer activities. We combined the three datasets and found, especially for smaller firms, that great technology proximity fosters transfer activities with different universities (case 1). The same is true if proximity is low and expertise is considerable at universities in the respective technology field (case 2). In both cases additional transfer potential exists. In the second case firms engage in transfer activities in order to update and modify their knowledge base and, as a consequence, improve "competitiveness" in certain technology fields. Furthermore, firms show a tendency to diversify their contacts with universities in order to avoid knowledge lock-in. © 2011 Springer Science+Business Media, LLC.

Quintal B.,ETH Zurich
Journal of Applied Geophysics | Year: 2012

At seismic frequencies, wave-induced fluid flow is a major cause of P-wave attenuation in partially saturated porous rocks. Attenuation is of great importance for the oil industry in the interpretation of seismic field data. Here, the effects on P-wave attenuation resulting from changes in oil saturation are studied for media with coexisting water, oil, and gas. To this end, creep experiments are numerically simulated by solving Biot's equations for consolidation of poroelastic media with the finite-element method. The experiments yield time-dependent stress-strain relations that are used to calculate the complex P-wave modulus, from which frequency-dependent P-wave attenuation is determined. The models are layered media with periodically alternating triplets of layers. Models consisting of triplets of layers having randomly varying layer thicknesses are also considered. The layers in each triplet are fully saturated with water, oil, and gas. The layer saturated with water has lower porosity and permeability than the layers saturated with oil and gas. These models represent hydrocarbon reservoirs in which water is the wetting fluid preferentially saturating regions of lower porosity. The results from the numerical experiments showed that increasing oil saturation, connected to a decrease in gas saturation, resulted in a significant increase of attenuation at low frequencies (lower than 2 Hz). Furthermore, replacing the oil with water resulted in a distinguishable behavior of the frequency-dependent attenuation. These results imply that, according to the physical mechanism of wave-induced fluid flow, frequency-dependent attenuation in media saturated with water, oil, and gas is a potential indicator of oil saturation. © 2012 Elsevier B.V.
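The final step of such numerical creep experiments, extracting attenuation from the complex P-wave modulus, reduces to the standard definition of the inverse quality factor, 1/Q = Im(M)/Re(M). A minimal sketch with illustrative modulus values (not values from the paper):

```python
import numpy as np

def inverse_quality_factor(modulus):
    """Attenuation 1/Q from a complex, frequency-dependent modulus M(f)."""
    modulus = np.asarray(modulus, dtype=complex)
    return modulus.imag / modulus.real

# Illustrative complex P-wave moduli at three frequencies (Pa).
# A purely real modulus means a lossless (unattenuated) response.
m_of_f = np.array([9.0e9 + 0.0j, 9.1e9 + 2.0e8j, 9.3e9 + 5.0e8j])
q_inv = inverse_quality_factor(m_of_f)
```

Growing imaginary parts of the modulus translate directly into growing attenuation, which is how the frequency-dependent attenuation curves in the study are obtained.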

Gaberdiel M.R.,ETH Zurich | Gopakumar R.,Harish Chandra Research Institute
Journal of Physics A: Mathematical and Theoretical | Year: 2013

We review the duality relating two-dimensional W_N minimal model conformal field theories, in a large-N 't Hooft-like limit, to higher spin gravitational theories on AdS3. © 2013 IOP Publishing Ltd.

Ackermann M.,ETH Zurich | Ackermann M.,Eawag - Swiss Federal Institute of Aquatic Science and Technology
Nature Reviews Microbiology | Year: 2015

Most microbial communities consist of a genetically diverse assembly of different organisms, and the level of genetic diversity plays an important part in community properties and functions. However, biological diversity also arises at a lower level of biological organization, between genetically identical cells that reside in the same microenvironment. In this Review, I outline the molecular mechanisms responsible for phenotypic heterogeneity and discuss how phenotypic heterogeneity allows genotypes to persist in fluctuating environments. I also describe how it promotes interactions between phenotypic subpopulations in clonal groups, providing microbial groups with new functionality. © 2015 Macmillan Publishers Limited. All rights reserved.

Mercer J.,ETH Zurich | Greber U.F.,University of Zurich
Trends in Microbiology | Year: 2013

Macrophages and dendritic cells (DCs) are at the front line of defence against fungi, bacteria, and viruses. Together with physical barriers, such as mucus and a range of antimicrobial compounds, they constitute a major part of the intrinsic and innate immune systems. They have elaborate features, including pattern recognition receptors (PRRs) and specialized endocytic mechanisms, cytokines and chemokines, and the ability to call on reserves. As masters of manipulation and counter-attack, viruses shunt intrinsic and innate recognition, enter immune cells, and spread from these cells throughout an organism. Here, we review mechanisms by which viruses subvert endocytic and pathogen-sensing functions of macrophages and DCs, while highlighting possible strategic advantages of infecting cells normally tuned into pathogen destruction. © 2013 Elsevier Ltd.

Straumann U.,ETH Zurich
Journal of Physics D: Applied Physics | Year: 2011

In the case of ac high voltage overhead transmission lines, corona discharges may result in a broadband crackling and a humming noise at twice the mains frequency (2f). The computation of the emitted 2f-levels presupposes knowledge of the apparent power and force passed by the ions to the gas in the surroundings of the conductors. In this paper a simulation determining these quantities, used in a previous publication, is presented. For that purpose the ideas of earlier publications by other authors to compute corona losses are followed. Their central idea is to define an onset field strength on the surface of the conductor which is not exceeded, by injecting the right amount of charge in the vicinity of the conductor. This injection together with recombination determines the amount of drifting ions. The novelty of the method applied in this work is the consideration of different onset field strengths over the conductor surface as a function of the polar angle. The 2f-force and 2f-power acting on the gas in the vicinity of the conductor are then determined from the resulting ion drift. The corona currents thus computed coincide fairly well with measured results. © 2011 IOP Publishing Ltd.

Straumann U.,ETH Zurich
Journal of Physics D: Applied Physics | Year: 2011

Foul weather conditions can lead to corona discharges on high voltage overhead transmission lines which are perceivable as an audible broadband crackling and hissing noise. In such situations this noise is often accompanied by a humming noise at twice the mains frequency (2f). While the crackling and hissing noise component was widely investigated in the past, the 2f-component has received little attention. Only scant literature suggests mechanisms explaining the occurrence of this emission, and these suggestions are either vague or controversial. In this paper a mechanism analogous to that of ion loudspeakers is presented, which describes the sound levels of audible 2f-emissions from high voltage model lines quantitatively in a satisfactory manner. While the broadband component of corona noise emissions has its immediate origin in the discharges themselves, the 2f-emissions arise from the drift of ions left behind by the discharges. Through collisions with gas molecules, these ions collectively heat and exert a force on the neutral gas, thus producing the 2f-emissions. Depending on the situation, both can give rise to contributions of similar magnitude. © 2011 IOP Publishing Ltd.

Wild M.,ETH Zurich
Bulletin of the American Meteorological Society | Year: 2012

A fundamental determinant of climate and life on our planet is the solar radiation (sunlight) incident at the Earth's surface. Any change in this precious energy source affects our habitats profoundly. Until recently, for simplicity and lack of better knowledge, the amount of solar radiation received at the Earth's surface was assumed to be stable over the years. However, there is increasing observational evidence that this quantity undergoes significant multidecadal variations, which need to be accounted for in discussions of climate change and mitigation strategies. Coherent periods and regions with prevailing declines ("dimming") and inclines ("brightening") in surface solar radiation have been detected in the worldwide observational networks, often in accord with anthropogenic air pollution patterns. The present synthesis provides in a nutshell the main characteristics of this phenomenon, a conceptual framework for its causes, and an overview of potential environmental implications. The latest developments and remaining gaps of knowledge in this rapidly growing field of research are further highlighted. © 2012 American Meteorological Society.

Casini F.,ETH Zurich
Canadian Geotechnical Journal | Year: 2012

This paper presents a simple model for predicting the deformation induced by wetting. The objective is to quantify the deformation induced by saturation of an unsaturated layer of homogeneous soil, causing variation of the initial void ratio and gravimetric water content. The soil is a low-plasticity silty sand. A simple expression for the normal compression line (NCL), depending on the parameter χ and one additional parameter, is proposed. The model can capture the progressive degradation induced by loading and wetting by linking the dependence of the NCL on the parameter χ with the porosity dependence of the water retention curve.

Rodriguez-Villalon A.,ETH Zurich
New Phytologist | Year: 2016

In plants, phloem conduits form a specialized vascular network mediating the exchange of nutrients and signaling molecules between distantly separated organs. To become effective transport elements, protophloem cells undergo a rather unique differentiation program that involves nucleus degradation, organelle rearrangement and cell wall thickening. Yet, protophloem sieve elements remain alive because their essential metabolic functions are supported by their neighboring companion cells. In spite of the importance of the phloem, the molecular mechanisms orchestrating protophloem specification and differentiation remain poorly understood. In this review, I provide a summary of recent discoveries regarding morphogenetic events that determine phloem formation, as well as a discussion of the systemic effects on root architecture derived from impaired protophloem differentiation programs. © 2016 New Phytologist Trust.

Gross D.,Leibniz University of Hanover | Gross D.,ETH Zurich
IEEE Transactions on Information Theory | Year: 2011

We present novel techniques for analyzing the problem of low-rank matrix recovery. The methods are both considerably simpler and more general than previous approaches. It is shown that an unknown n × n matrix of rank r can be efficiently reconstructed from only O(nrν ln² n) randomly sampled expansion coefficients with respect to any given matrix basis. The number ν quantifies the "degree of incoherence" between the unknown matrix and the basis. Existing work concentrated mostly on the problem of "matrix completion" where one aims to recover a low-rank matrix from randomly selected matrix elements. Our result covers this situation as a special case. The proof consists of a series of relatively elementary steps, which stands in contrast to the highly involved methods previously employed to obtain comparable results. In cases where bounds had been known before, our estimates are slightly tighter. We discuss operator bases which are incoherent to all low-rank matrices simultaneously. For these bases, we show that O(nrν ln n) randomly sampled expansion coefficients suffice to recover any low-rank matrix with high probability. The latter bound is tight up to multiplicative constants. © 2011 IEEE.
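As a toy illustration of the matrix completion special case mentioned above, a rank-1 matrix can be recovered from a random subset of its entries by alternating between re-enforcing the observed entries and projecting onto rank-r matrices via the SVD. This simple alternating projection is only an illustration of the problem setting, not the reconstruction algorithm or proof technique analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 8, 1
truth = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # hidden rank-1 matrix
mask = rng.random((n, n)) < 0.8                                    # ~80% of entries observed

x = np.zeros((n, n))
x[mask] = truth[mask]
for _ in range(500):
    u, s, vt = np.linalg.svd(x)
    x = (u[:, :r] * s[:r]) @ vt[:r]  # best rank-r approximation (Eckart-Young)
    x[mask] = truth[mask]            # re-enforce the observed entries

initial_error = np.linalg.norm(truth)       # error of the trivial all-zeros guess
final_error = np.linalg.norm(x - truth)     # error after alternating projection
```

With enough observed entries per row and column, the iterate fills in the missing entries consistently with the rank-1 structure; this is exactly the setting whose sample complexity the paper bounds.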

Stadler T.,ETH Zurich
Systematic Biology | Year: 2011

In this paper, I develop efficient tools to simulate trees with a fixed number of extant species. The tools are provided in my open source R-package TreeSim available on CRAN. The new model presented here is a constant rate birth-death process with mass extinction and/or rate shift events at arbitrarily fixed times 1) before the present or 2) after the origin. The simulation approach for case (2) can also be used to simulate under more general models with fixed events after the origin. I use the developed simulation tools for showing that a mass extinction event cannot be distinguished from a model with constant speciation and extinction rates interrupted by a phase of stasis based on trees consisting of only extant species. However, once we distinguish between mass extinction and period of stasis based on paleontological data, fast simulations of trees with a fixed number of species allow inference of speciation and extinction rates using approximate Bayesian computation and allow for robustness analysis once maximum likelihood parameter estimations are available. © 2011 The Author(s).
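TreeSim itself is an R package; as a language-neutral sketch, the pure-birth (Yule) special case of the constant-rate birth-death process can be simulated with exponential waiting times, stopping at a fixed number of extant species. The function name and stopping convention here are illustrative, not TreeSim's API.

```python
import random

def yule_speciation_times(n_extant, birth_rate, seed=42):
    """Event times of a pure-birth (Yule) process grown from 1 to n_extant lineages.

    With k lineages alive, the waiting time to the next speciation is
    exponential with rate k * birth_rate; adding a death rate would give
    the full constant-rate birth-death case the paper simulates.
    """
    rng = random.Random(seed)
    times, t = [], 0.0
    for k in range(1, n_extant):
        t += rng.expovariate(k * birth_rate)
        times.append(t)  # time of the event taking k -> k+1 lineages
    return times

times = yule_speciation_times(n_extant=10, birth_rate=1.0)
```

Conditioning on a fixed number of extant species, as TreeSim does, is what makes such simulations usable for approximate Bayesian computation over speciation and extinction rates.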

The spatial and temporal patterns of fog and low clouds along the South-Western African coast are characterized based on an evaluation of Meteosat SEVIRI satellite data. A technique for the detection of fog/low clouds in the region is introduced and validated using one year of CALIOP cloud lidar products, showing reliable performance. The frequency of fog and low clouds in the study area is analyzed by systematic application of the technique to all available Meteosat SEVIRI scenes from 2004 to 2009, for 7:00 UTC and 14:00 UTC. The highest frequencies are encountered in the area around Walvis Bay, with a peak in the summer months. Fog and low clouds clear by 14:00 UTC almost everywhere over land. Supplementary data are available at doi:10.1594/PANGAEA.755427. © 2011 Elsevier B.V.

Lee C.T.A.,Rice University | Bachmann O.,ETH Zurich
Earth and Planetary Science Letters | Year: 2014

Most magmatism on Earth forms by direct melting of the mantle, generating basalts at the low silica end of the terrestrial compositional spectrum. However, most subduction zone magmas erupted or sampled at the surface are basalt-andesitic to andesitic and hence have higher Si contents. Endmember hypotheses for the origin of andesites are: (1) direct melting of the mantle at water-saturated conditions, (2) partial re-melting of altered basaltic crust, (3) crystal fractionation of arc basalts in crustal magma chambers, and (4) mixing of mafic magmas with high Si crust or magmas, e.g., dacite-rhyolite. Here, we explore the possibility of using Zr and P systematics to evaluate the importance of some of these processes. Direct melting of the mantle generates magmas with low Zr (<50 ppm) and P2O5 (<0.2 wt.%). Crystal-liquid segregation should drive an increase in P and Zr in the residual magma because the magma is initially undersaturated in zircon and apatite. With further cooling and crystallization, apatite followed by zircon will saturate, causing P and Zr to decrease so that most rhyolites and granites will have low P and Zr (high temperature rhyolites may never saturate in zircon and will maintain high Zr contents). Mixing of basalts with rhyolites having low P and Zr should generate coupled decreases in Zr and P with increasing SiO2. Here, we show that Zr (>100 ppm) and P2O5 (>0.2 wt.%) in island- and continental-arc magmas initially increase to levels higher than what can be achieved if andesites form by direct mantle melting. As Si increases, both Zr and P decrease with Zr decreasing at higher Si, and hence lagging the decrease in P. These systematics, particularly the decoupled decrease in Zr and P, cannot be explained by mixing, and instead, are more easily explained if andesites are dominantly formed by crystal-liquid segregation from moderately hydrous basalt, wherein P and Zr are controlled, respectively, by early and later saturation in apatite and zircon. 
Although there is clear isotopic and outcrop (enclaves) evidence for mixing in magmatic systems, crystal-liquid segregation appears to be the dominant process in generating intermediate magmas, with mixing playing a secondary role. Finally, recent studies have suggested that the abundance of certain magma compositions in a given volcanic setting may be dictated by the optimal crystallinity window for efficient crystal-liquid separation (50-70 vol%). We show that the SiO2 content of the residual liquid in this crystallinity window increases with increasing water content. We thus speculate that high water contents (>2 wt.% H2O) may favor extraction of andesitic and dacitic liquids while lower water contents favor extraction of more basaltic magmas. If continental arc magmas tend to be more andesitic, as often believed, it follows that they may begin more water-rich than island arc magmas, which are basaltic. In any case, if intermediate arc magmas are formed dominantly by crystal-liquid fractionation, large volumes of complementary mafic cumulates must be generated during the formation of andesitic magmas, as is seen in well-exposed crustal sections. © 2014 Elsevier B.V.

Loock C.-M.,ETH Zurich | Staake T.,University of Bamberg | Thiesse F.,University of Wurzburg
MIS Quarterly: Management Information Systems | Year: 2013

This study investigates the role of information systems in stimulating energy-efficient behavior in private households. We present the example of Velix, a web portal designed to motivate customers of a utility company to reduce their electricity consumption. In particular, we consider the effectiveness of goal setting functionality and defaults in influencing energy conservation behavior. For this purpose, we use the web portal as a test of the theoretical propositions underlying its design. Based on data collected from a field experiment with 1,791 electricity consumers, we test hypotheses regarding the structural relations between defaults and goals, the impact of defaults and goals on consumption behavior, and the moderating role of feedback on goal choice. Our results confirm the positive impact of goal setting on energy conservation. We show that default goals lead to statistically significant savings by affecting goal choice. However, if the default goals are set too low or too high with respect to a self-set goal, the defaults will detrimentally affect behavior. We also show that feedback on goal attainment moderates the effect of default goals on goal choice. The results extend the knowledge on goal setting and defaults and have implications for the design of effective energy feedback systems. The study's approach, which combines hypothesis-driven work and design-oriented IS research, could serve as a blueprint for further research endeavors of this kind, particularly with regard to feedback systems based on future smart metering infrastructures.

Ehret A.E.,ETH Zurich
Journal of the Mechanics and Physics of Solids | Year: 2015

In this paper, a link is established between the statistical theory of long chain molecules and Ogden's phenomenological model of rubber elasticity. It has been shown by several authors in the past that many invariant-based phenomenological models for rubber-like materials are related to the classical statistical theories. The essential means to reach this reconciliation were methods to account for a non-affine deformation of polymer chains in the network, appropriate techniques to calculate their averaged response, and an approximation of the inverse Langevin function appearing in the non-Gaussian statistical theory. It is shown in this paper that the very same approach, if appropriately implemented, allows one to express the strain-energy function of Ogden's material in terms of physical constants characterising the polymer chain and network, together with a few additional parameters that account for the non-affine deformation of the polymer chains. Particularly, it is shown that Ogden's model can be represented as a non-affine non-Gaussian 3-chain model with topological constraints. © 2015 Elsevier Ltd. All rights reserved.
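For reference, Ogden's strain-energy function discussed here has the form W = Σ_p (μ_p/α_p)(λ1^α_p + λ2^α_p + λ3^α_p − 3) in the principal stretches. A minimal evaluation sketch; the material parameters are placeholders, not values from the paper:

```python
def ogden_energy(stretches, mu, alpha):
    """Ogden strain energy W = sum_p (mu_p/alpha_p) * (l1^a_p + l2^a_p + l3^a_p - 3)
    for principal stretches (l1, l2, l3) and parameter pairs (mu_p, alpha_p)."""
    l1, l2, l3 = stretches
    return sum(
        (mu_p / a_p) * (l1 ** a_p + l2 ** a_p + l3 ** a_p - 3.0)
        for mu_p, a_p in zip(mu, alpha)
    )

# Undeformed state gives zero energy; an incompressible uniaxial stretch
# lambda -> (l, 1/sqrt(l), 1/sqrt(l)) stores positive energy.
w_ref = ogden_energy((1.0, 1.0, 1.0), mu=[0.63e6], alpha=[1.3])
w_def = ogden_energy((2.0, 2.0 ** -0.5, 2.0 ** -0.5), mu=[0.63e6], alpha=[1.3])
```

The paper's contribution is to re-express the μ_p and α_p in terms of physical chain and network constants rather than treating them as purely phenomenological fitting parameters.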

Pluess A.R.,ETH Zurich
Biodiversity and Conservation | Year: 2013

Grassland species might be under differential selection pressure due to management regimes imposed by man or by unmanaged grazers. To investigate microevolutionary changes in plants, I used a meta-analysis and a comparative approach. This analysis incorporates 28 studies on 19 species in 3 paired management regimes, with a total of 152 reported trait values resulting in 40 pooled Hedges' d effect sizes on genetic, vegetative and reproductive traits as well as 83 Hedges' d values for 14 specific traits. Of the pooled and specific traits, 60 % and 72 %, respectively, indicated divergent selection pressures within a management regime. The pooled Hedges' d values did not differ among the management regimes. Within mown versus grazed sites, trait groups were affected differently, with increased reproductive traits in plants originating from grazed sites. In the other management regimes, the trait groups were affected similarly, except for some specific traits. Longevity, palatability, clonality and biome origin did not explain differences in pooled effect sizes, but tended to explain differences in some specific traits. Overall, general selection patterns were rare, probably due to high heterogeneity of among- and within-species responses, which might level each other out. Moreover, the number of data points per group of interest is often low; thus, more studies are needed before final conclusions can be drawn. Nonetheless, the divergent plant reactions indicate that selection pressures within paired management regimes might be large enough to induce microevolutionary changes in grasslands. Subsequently, the increased variation within species under different management techniques might buffer species persistence in the long term. © 2013 Springer Science+Business Media Dordrecht.
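The effect sizes pooled in this meta-analysis are Hedges' d values, i.e. standardized mean differences with a small-sample bias correction. The standard computation can be sketched as follows; the data values are illustrative:

```python
import math

def hedges_d(group1, group2):
    """Hedges' d: standardized mean difference with small-sample bias correction J."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    s_pooled = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    j = 1.0 - 3.0 / (4.0 * (n1 + n2 - 2) - 1.0)  # correction factor for small samples
    return j * (m1 - m2) / s_pooled

# Illustrative trait values from two hypothetical management regimes.
d = hedges_d([5.1, 4.8, 5.4, 5.0], [4.2, 4.5, 4.1, 4.4])
```

A positive d here indicates a larger trait mean under the first regime; pooling such values across studies is what yields the 40 pooled effect sizes reported above.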

Pantazis P.,ETH Zurich | Supatto W.,French Institute of Health and Medical Research
Nature Reviews Molecular Cell Biology | Year: 2014

With the advent of imaging probes and live microscopy, developmental biologists have markedly extended our understanding of the molecular and cellular details of embryonic development. To fully comprehend the complex mechanistic framework that forms the developing organism, quantitative studies with high fidelity in space and time are now required. We discuss how integrating established, newly introduced and future imaging tools with quantitative analysis will ensure that imaging can fulfil its promise to elucidate how new life begins. © 2014 Macmillan Publishers Limited.

Karrasch D.,ETH Zurich
Chaos | Year: 2015

It is a widespread convention to identify repelling Lagrangian Coherent Structures (LCSs) with ridges of the forward finite-time Lyapunov exponent (FTLE) field and to identify attracting LCSs with ridges of the backward FTLE. However, we show that, in two-dimensional incompressible flows, attracting LCSs also appear as ridges of the forward FTLE field. This raises the issue of characterizing attracting LCSs using a forward finite-time Lyapunov analysis. To this end, we extend recent results regarding the relationship between forward and backward maximal and minimal FTLEs to both the whole finite-time Lyapunov spectrum and to stretch directions. This is accomplished by considering the singular value decomposition (SVD) of the linearized flow map. By virtue of geometrical insights from the SVD, we provide characterizations of attracting LCSs in forward time for two geometric approaches to hyperbolic LCSs. We apply these results to the attracting FTLE ridge of the incompressible saddle flow. © 2015 AIP Publishing LLC.
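The link between the flow map's SVD and the finite-time Lyapunov spectrum used here is direct: the FTLEs are the logarithms of the singular values of the flow map gradient, divided by the elapsed time. A sketch for a linear saddle-type flow map (the example values are mine; for a linear map the computation is exact):

```python
import numpy as np

def ftle_spectrum(flow_map_gradient, t_span):
    """Finite-time Lyapunov exponents from the SVD of the flow map gradient:
    lambda_i = ln(sigma_i) / |T|, with sigma_i the singular values."""
    sigma = np.linalg.svd(np.asarray(flow_map_gradient, dtype=float), compute_uv=False)
    return np.log(sigma) / abs(t_span)

# Linearized saddle flow over T = 1: stretching by 2 along x, compression by 1/2 along y.
grad_f = np.array([[2.0, 0.0], [0.0, 0.5]])
ftle = ftle_spectrum(grad_f, t_span=1.0)
```

In this incompressible example the maximal and minimal exponents sum to zero, the symmetry between forward stretching and compression that the paper exploits to characterize attracting LCSs from forward-time data alone.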

Dijkstra M.,Max Planck Institute for Astrophysics | Kramer R.,ETH Zurich
Monthly Notices of the Royal Astronomical Society | Year: 2012

We present constrained radiative transfer calculations of Lyα photons propagating through clumpy, dusty, large-scale outflows, and explore whether we can quantitatively explain the Lyα haloes that have been observed around Lyman break galaxies. We construct phenomenological models of large-scale outflows which consist of cold clumps that are in pressure equilibrium with a constant-velocity hot wind. First, we consider models in which the cold clumps are distributed symmetrically around the galaxy and in which the clumps undergo a continuous acceleration in its 'circumgalactic' medium (CGM). We constrain the properties of the cold clumps (radius, velocity, H I column density and number density) by matching the observed Lyα absorption strength of the CGM in the spectra of background galaxies. We then insert a Lyα source in the centre of this clumpy outflow, which consists of 10^5-10^6 clumps, and compute observable properties of the scattered Lyα photons. In these models, the scattered radiation forms haloes that are significantly more concentrated than observed. In order to simultaneously reproduce the observed Lyα absorption line strengths and the Lyα haloes, we require - preferably bipolar - outflows in which the clumps decelerate after their initial acceleration. This deceleration is predicted naturally in 'momentum-driven' wind models of clumpy outflows. In models that simultaneously fit the absorption and emission-line data, the predicted linear polarization is ~30-40 per cent at a surface brightness contour of S = 10^-18 erg s^-1 cm^-2 arcsec^-2. Our work illustrates clearly that Lyα emission-line haloes around star-forming galaxies provide valuable constraints on the cold gas distribution and kinematics in their CGM, and that these constraints complement those obtained from absorption-line studies alone. © 2012 The Authors. Monthly Notices of the Royal Astronomical Society © 2012 RAS.

Cremers C.,ETH Zurich
Proceedings of the 6th International Symposium on Information, Computer and Communications Security, ASIACCS 2011 | Year: 2011

Many recent key exchange (KE) protocols have been proven secure in the CK, CK-HMQV, or eCK security models. The exact relation between these security models, and hence the relation between the security guarantees provided by the protocols, is unclear. We show first that the CK, CK-HMQV, and eCK security models are formally incomparable. Second, we show that these models are also practically incomparable, by providing for each model attacks on protocols from the literature that are not considered by the other models. Third, our analysis enables us to find previously unreported flaws in protocol security proofs from the literature. We identify the causes of these flaws and show how they can be avoided. Copyright 2011 ACM.

Different analyses of identical Hinode SOT/SP data of quiet-Sun magnetic fields have in the past led to contradictory answers to the question of whether the angular distribution of field vectors is preferentially horizontal or vertical. These answers have been obtained by combining the measured circular and linear polarizations in different ways to derive the field inclinations. A problem with these combinations is that the circular and linear polarizations scale with field strength in profoundly different ways. Here, we avoid these problems by using an entirely different approach that is based exclusively on the fundamental symmetry properties of the transverse Zeeman effect for observations away from the disk center without any dependence on the circular polarization. Systematic errors are suppressed by the application of a doubly differential technique with the 5247-5250 Å line pair for observations with the ZIMPOL-2 imaging polarimeter on the French THEMIS telescope on Tenerife. For the weakest, intranetwork-type magnetic fields, the angular distribution changes sign with the center-to-limb distance, being preferentially horizontal limbwards of μ (cosine of the heliocentric angle) = 0.2, while favoring the vertical direction inside this disk position. Since decreasing μ corresponds to increasing height of line formation, this finding implies that the intranetwork fields are more peaked around the vertical direction in the low to middle photosphere, while they are more horizontal in the upper photosphere. The angular distribution is however also found to become more vertical with increasing flux density. Thus, all facular points that we have observed have a strong preference for the vertical direction for all disk positions, including those all the way to the extreme limb. In terms of spatial averages weighted by the intrinsic magnetic energy density, these results are independent of telescope resolution. © 2013 ESO.

Haller G.,ETH Zurich
Journal of the Mechanics and Physics of Solids | Year: 2016

The local rigid-body component of continuum deformation is typically characterized by the rotation tensor, obtained from the polar decomposition of the deformation gradient. Beyond its well-known merits, the polar rotation tensor also has a lesser-known dynamical inconsistency: it does not satisfy the fundamental superposition principle of rigid-body rotations over adjacent time intervals. As a consequence, the polar rotation deviates from the observed mean material rotation of fibers in fluids, and introduces a purely kinematic memory effect into computed material rotation. Here we derive a generalized polar decomposition for linear processes that yields a unique, dynamically consistent rotation component, the dynamic rotation tensor, for the deformation gradient. The left dynamic stretch tensor is objective, and shares the principal strain values and axes with its classic polar counterpart. Unlike its classic polar counterpart, however, the dynamic stretch tensor evolves in time without spin. The dynamic rotation tensor further decomposes into a spatially constant mean rotation tensor and a dynamically consistent relative rotation tensor that is objective for planar deformations. We also obtain simple expressions for dynamic analogues of Cauchy's mean rotation angle that characterize a deforming body objectively. © 2015 Elsevier Ltd. All rights reserved.
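The superposition failure described above is easy to demonstrate in two dimensions, where the rotation factor of the polar decomposition F = RU has a closed form. The following plain-Python sketch (2x2 matrices, det F > 0 assumed; the two example deformations are hypothetical, and this is not the paper's dynamic rotation construction) shows that the polar rotation of a composed deformation differs from the composition of the individual polar rotations:

```python
import math

def polar_rotation_2d(F):
    """Closed-form rotation factor R of the polar decomposition F = R U
    for a 2x2 deformation gradient F with det F > 0."""
    (a, b), (c, d) = F
    s = math.hypot(a + d, b - c)
    return [[(a + d) / s, (b - c) / s],
            [(c - b) / s, (a + d) / s]]

def matmul2(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Two deformations applied in sequence: a simple shear, then a stretch.
F1 = [[1.0, 1.0], [0.0, 1.0]]
F2 = [[2.0, 0.0], [0.0, 1.0]]

# Polar rotation of the composed deformation ...
R_composed = polar_rotation_2d(matmul2(F2, F1))
# ... versus the composition of the individual polar rotations.
R_product = matmul2(polar_rotation_2d(F2), polar_rotation_2d(F1))

# The two differ: polar rotations do not obey the superposition
# principle of rigid-body rotations over adjacent time intervals.
print(R_composed[0][0], R_product[0][0])  # ≈ 0.832 vs ≈ 0.894
```

Here R_composed and R_product disagree, which is exactly the dynamical inconsistency that the dynamic rotation tensor is constructed to remove.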

The paper studies the conditions under which Newtonian impacts lead to an energetically consistent post-impact state, and identifies the mechanisms responsible for a potential violation. Unilateral and bilateral geometric and kinematic constraints, as well as various Coulomb-type friction elements, are equipped with a kinematic restitution coefficient and are taken into account in the impact problem as normal cone inclusions. Based on the condition number of the Delassus operator, bounds on the impact coefficients are derived that ensure energetic consistency. As counter-examples, a slide-push mechanism is presented, and Kane's example of a frictional impact on a double pendulum is analyzed. © 2012 Springer Science+Business Media B.V.

Keller U.,ETH Zurich
Applied Physics B: Lasers and Optics | Year: 2010

Ultrashort lasers provide an important tool to probe the dynamics of physical systems at very short time-scales, allowing for improved understanding of the performance of many devices and phenomena used in science, technology, and medicine. In addition, ultrashort pulses provide a high peak intensity and a broad optical spectrum, which opens up even more applications such as material processing, nonlinear optics, attosecond science, and metrology. There has been a long-standing, ongoing effort in the field to reduce the pulse duration and increase the power of these lasers to continue to empower existing and new applications. After 1990, new techniques such as semiconductor saturable absorber mirrors (SESAMs) and Kerr-lens mode locking (KLM) allowed for the generation of stable pulse trains from diode-pumped solid-state lasers for the first time, and enabled the performance of such lasers to improve by several orders of magnitude with regard to pulse duration, pulse energy and pulse repetition rate. This invited review article gives a broad overview and includes some personal accounts of the key events during the last 20 years, which made ultrafast solid-state lasers a success story. Ultrafast Ti:sapphire, diode-pumped solid-state, and novel semiconductor laser oscillators will be reviewed. The perspective for the near future indicates continued significant progress in the field. © 2010 Springer-Verlag.

Listeria monocytogenes is a foodborne opportunistic pathogen capable of switching from an environmental saprophyte to a potentially fatal human pathogen. The fact that the pathogen maintains the genes suitable for an elaborate infectious process indicates that these genes are required to survive in the environment. However, no environmental host reservoir for L. monocytogenes has been identified so far. The similarity of free-living, bacteria-scavenging amoebae to macrophages led to the hypothesis that protozoa may represent the missing link in the ecology and pathology of L. monocytogenes. Consequently, numerous studies have been published reporting on the potential of Acanthamoeba spp. to serve as a host for a variety of pathogenic bacteria. However, the data on the interaction of L. monocytogenes with Acanthamoeba spp. are inconsistent, and relatively little information on the impact of this interaction on growth and distribution of the foodborne pathogen is currently available. Hence, this review focuses on the interaction of L. monocytogenes and Acanthamoeba spp. affecting survival and growth of the foodborne pathogen in natural and man-made environments, in order to highlight the potential impact of this interplay on food safety and human health. © 2014 Springer-Verlag Berlin Heidelberg.

Willett S.D.,ETH Zurich
Tectonophysics | Year: 2010

An erosion model is proposed to calculate erosion rates for plane-strain models in which the Earth's surface is represented by a line. The fundamentals of river erosion networks are captured by two principles: Hack's law, which describes the drainage-area structure of a river network, and a stream-power erosion law, which describes the rate of incision of a river. For a simple morphology of parallel transverse rivers with rectangular drainage basins, this allows the Earth's surface to be parameterized by two heights: the trunk stream channel height and the interfluvial ridge height. The resulting expressions are solved for the simple cases of constant uplift rate and of a constant mean slope, as occurs in critical wedge problems. In the latter case, the uplift rate is variable and changes in space so that the trunk channel elevation and the interfluvial ridge elevation average to maintain a constant mean slope. A general numerical solution is presented for application to any numerical model with arbitrary surface velocity, variable rock erodibility and precipitation. This algorithm is coupled to a plane-strain, plastic-deformation model to demonstrate the utility of the model. © 2009 Elsevier B.V. All rights reserved.
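The two building blocks named above can be sketched directly. With Hack's law giving drainage area as a power of downstream distance, A = k_a x^h, and the stream-power law E = K A^m S^n for the incision rate, a channel erosion rate follows from the local slope. All parameter values below are illustrative placeholders, not calibrated constants from the paper:

```python
def hack_area(x, ka=6.69, h=1.67):
    """Hack's law: drainage area A at downstream distance x along the trunk channel."""
    return ka * x ** h

def stream_power_erosion(A, S, K=1e-5, m=0.5, n=1.0):
    """Stream-power incision law: erosion rate E = K * A^m * S^n."""
    return K * A ** m * S ** n

# Channel erosion rate 5 km downstream with a 2% local slope
# (units and coefficients here are purely illustrative).
x, slope = 5000.0, 0.02
E = stream_power_erosion(hack_area(x), slope)
```

The same two functions, evaluated along the trunk channel and the interfluvial ridge, are all that is needed to parameterize the surface by the two heights the abstract describes.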

Muller D.J.,ETH Zurich | Dufrene Y.F.,Catholic University of Louvain
Current Biology | Year: 2011

The emerging field of 'live cell nanoscopy' has revolutionized the way biologists explore the living cell at molecular resolution. Whereas far-field fluorescence nanoscopy enables the study of the nanoscale localization and dynamics of biomolecules in cells, recent developments in atomic force microscopy (AFM) techniques offer unprecedented opportunities for imaging the structural heterogeneity of cell surfaces, and for probing the functional properties of their molecular machineries (Figure 1). In the past few years, AFM-based force nanoscopy has enabled key breakthroughs in cell biology, including: deciphering the nanoscale architecture of cell surfaces and their remodelling upon changes to the cell's functional state; understanding cellular mechanics and their functional implications; quantifying the cell adhesion forces that contribute to processes like tissue development, cell division, and bacterial infection; unravelling the molecular elasticity of cellular proteins; and elucidating how cells reassemble membrane receptors into nanodomains and modulate their functional state. In this Primer, we explain the basic principles of AFM and provide a snapshot of some of the exciting work being done in cell biology using this multifunctional tool. © 2011 Elsevier Ltd All rights reserved.

The Pello Hill scoria cone displays widely different characteristics depending on which side of the cone is studied. This is the direct result of a strong depositional asymmetry (i.e., the crater rim varies between 14 and 111 m in height), with material preferentially deposited on the northwestern side of the construct. This is interpreted to reflect sedimentation from a sustained eruption plume, with the prevailing winds at the time of the eruption coinciding with the direction of maximum deposition. The scoria deposits on this side of the cone form relatively fine-grained and well-sorted deposits which are laterally continuous over distances of tens of meters. On all other sides of the vent, deposits are characterized by coarse-grained lenticular deposits rich in mantle xenoliths, occasionally showing inverse grading (consistent with an origin as ballistic ejecta and slight reworking down-slope as grain flows). The pyroclastic textures are dominated by a moderate vesicularity (~40 vol.%) in combination with smooth, fluidal outer surfaces. Two other features stand out in comparison with "normal" scoria cones: the absence of (i) layers dipping inward into the crater area, and (ii) agglutination/welding features in any of the exposed outcrops. Calculated magma ascent rates yield values between 8.5 and 36.0 m s-1, similar to those previously reported for kimberlitic magmas. Building on these ascent rates and the volume of the pyroclastic construct (~5 × 10^6 m^3 DRE), the eruption duration is estimated at less than 6 hours (for any vent area larger than 28 m^2). Therefore, no more than about 9 hours may have elapsed between the time the magma started to ascend from upper-mantle depth (90 km) and the point at which the eruption halted. Overall, the olivine melilititic Pello Hill scoria cone displays many characteristics that can also be found in the vent-facies deposits of kimberlite eruptions.
Therefore, further detailed studies of well-preserved, CO2-rich, olivine melilitite eruptions (such as those provided by the landforms within the LNE-MVF and Pello Hill) could provide a tool to understand the emplacement dynamics of the vent facies of kimberlitic eruptions. © 2012 Elsevier B.V.
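The duration estimate above amounts to a simple flux balance: erupted volume divided by the volume flux through the vent (ascent rate times vent area). A quick check with the slowest quoted ascent rate and the minimum vent area reproduces the stated bound:

```python
V_DRE = 5e6        # erupted volume, m^3 (dense-rock equivalent)
ascent_rate = 8.5  # slowest calculated magma ascent rate, m/s
vent_area = 28.0   # minimum vent area, m^2

# duration = volume / (volume flux through the vent)
duration_h = V_DRE / (ascent_rate * vent_area) / 3600.0
print(round(duration_h, 1))  # ~5.8 hours, i.e. under the 6-hour bound
```

Any faster ascent rate or larger vent area only shortens this estimate, which is why 6 hours is quoted as an upper bound.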

Mignan A.,ETH Zurich
Geophysical Research Letters | Year: 2012

Various seismicity patterns before large earthquakes have been reported in the literature. They include foreshocks (medium-term acceleration and short-term activation), quiescence, doughnut patterns and event migration. The existence of these precursory patterns is however debated. Here, we develop an approach based on the concept of stress accumulation to unify and categorize all claimed seismic precursors in the same physical framework. We first extend the Non-Critical Precursory Accelerating Seismicity Theory (N-C PAST), which already explains most precursors, to additionally include short-term activation. Theoretical results are then compared to the time series observed prior to the 2009 Mw = 6.3 L'Aquila, Italy, earthquake. We finally show that different precursory paths are possible before large earthquakes, with possible coupling of different patterns or non-occurrence of any. This is described by a logic tree defined from the combined probabilities of occurrence of the mainshock at a given stress state and of precursory silent slip on the fault. In the case of the L'Aquila earthquake, the observed precursory path is a coupling of quiescence and accelerating seismic release, followed by activation. These results provide guidelines for future research on earthquake predictability. © 2012. American Geophysical Union. All Rights Reserved.

Studart A.R.,ETH Zurich
Chemical Society Reviews | Year: 2016

Additive manufacturing (AM) technologies offer an attractive pathway towards the fabrication of functional materials featuring complex heterogeneous architectures inspired by biological systems. In this paper, recent research on the use of AM approaches to program the local chemical composition, structure and properties of biologically-inspired materials is reviewed. A variety of structural motifs found in biological composites have been successfully emulated in synthetic systems using inkjet-based, direct-writing, stereolithography and slip casting technologies. The replication in synthetic systems of design principles underlying such structural motifs has enabled the fabrication of lightweight cellular materials, strong and tough composites, soft robots and autonomously shaping structures with unprecedented properties and functionalities. Pushing the current limits of AM technologies in future research should bring us closer to the manufacturing capabilities of living organisms, opening the way for the digital fabrication of advanced materials with superior performance, lower environmental impact and new functionalities. © The Royal Society of Chemistry.

Piegsa F.M.,ETH Zurich
Physical Review C - Nuclear Physics | Year: 2013

A concept to search for a neutron electric dipole moment (nEDM) is presented, which employs a pulsed neutron beam instead of the established use of storable ultracold neutrons (UCN). The technique takes advantage of the high peak flux and the time structure of a next-generation pulsed spallation source like the planned European Spallation Source. It is demonstrated that the sensitivity for a nEDM can be improved by several orders of magnitude compared to the best beam experiments performed in the 1970s and can compete with the sensitivity of UCN experiments. © 2013 American Physical Society.

Stadler T.,ETH Zurich | Yang Z.,University College London
Systematic Biology | Year: 2013

We develop a Bayesian Markov chain Monte Carlo (MCMC) algorithm for estimating divergence times using sequentially sampled molecular sequences. This type of data is commonly collected during viral epidemics and is sometimes available from different species in ancient DNA studies. We derive the distribution of ages of nodes in the tree under a birth-death-sequential-sampling (BDSS) model and use it as the prior for divergence times in the dating analysis. We implement the prior in the MCMCtree program in the PAML package for divergence dating. The BDSS prior is very flexible and, with different parameters, can generate trees of very different shapes, suitable for examining the sensitivity of posterior time estimates. We apply the method to a data set of SIV/HIV-2 genes in comparison with a likelihood-based dating method, and to a data set of influenza H1 genes from different hosts in comparison with the Bayesian program BEAST. We examined the impact of tree topology on time estimates and suggest that multifurcating consensus trees should be avoided in dating analysis. We found posterior time estimates for old nodes to be sensitive to the priors on times and rates and suggest that previous Bayesian dating studies may have produced overconfident estimates. [Bayesian inference; MCMC; molecular clock dating; sampled tips; viral evolution.] © 2013 The Author(s). All rights reserved.

Christl I.,ETH Zurich
Environmental Chemistry | Year: 2012

Calcium binding by two terrestrial humic acids was investigated at 25 °C as a function of pH, ionic strength and Ca2+ activity using calcium titration experiments. A Ca2+-selective electrode was used for the Ca2+ measurements to cover a wide range of Ca2+ activities (10^-8.5 to 10^-2.5). Experimental data were quantitatively described with the NICA-Donnan model, accounting for electrostatic and specific calcium binding. The results showed that calcium binding as a function of Ca2+ activity was strongly affected by variations in pH and ionic strength, indicating that electrostatic binding is an important mechanism for calcium binding by humic acids. Data modelling, which provided a good description of the experimental data for both humic acids, suggested that electrostatic binding was the dominant calcium binding mechanism at the high Ca2+ activities often observed in terrestrial environments. Specific calcium binding, quantitatively predominant only at low Ca2+ activities, was exclusively attributed to binding sites exhibiting a weak affinity for protons, considered to represent mainly carboxylic groups. Since the negative charge of the humic acids, a prerequisite for electrostatic calcium binding, was found to be mainly due to deprotonation of carboxylic groups except under alkaline conditions, carboxylic groups were identified as the primary control on calcium binding by humic acids. © 2012 CSIRO.

Jaccard S.L.,ETH Zurich | Galbraith E.D.,McGill University
Nature Geoscience | Year: 2012

During the last glacial termination, the solubility of gases in the ocean decreased as ocean temperatures rose. However, marine sediments have not unanimously recorded ocean deoxygenation throughout this time. Some records show increasing oxygenation since the Last Glacial Maximum, particularly in the deep sea, while many document abrupt oxygenation changes, often associated with apparent changes in the formation rate of North Atlantic Deep Water. Here we present a global compilation of marine sediment proxy records that reveals remarkable coherency between regional oxygenation changes throughout deglaciation. The upper ocean generally became less oxygenated, but this general trend included pauses and even reversals, reflecting changes in nutrient supply, respiration rates and ventilation. The most pronounced deoxygenation episode in the upper ocean occurred midway through the deglaciation, associated with a reinvigoration of North Atlantic Deep Water formation. At this time, the upper Indo-Pacific Ocean was less oxygenated than today. Meanwhile, the bulk of the deep ocean became more oxygenated over the deglaciation, reflecting a transfer of respired carbon to the atmosphere. The observed divergence from a simple solubility control emphasizes the degree to which oxygen consumption patterns can be altered by changes in ocean circulation and marine ecosystems. © 2012 Macmillan Publishers Limited. All rights reserved.

Scaramuzza D.,ETH Zurich
International Journal of Computer Vision | Year: 2011

This paper presents a new method to estimate the relative motion of a vehicle from images of a single camera. The computational cost of the algorithm is limited only by the feature extraction and matching process, as the outlier removal and motion estimation steps take a fraction of a millisecond on a normal laptop computer. The biggest problem in visual motion estimation is data association; matched points contain many outliers that must be detected and removed for the motion to be accurately estimated. In the last few years, the best-established method for removing outliers has been the "5-point RANSAC" algorithm, which needs a minimum of 5 point correspondences to estimate the model hypotheses. Because of this, however, it can require up to several hundred iterations to find a set of points free of outliers. In this paper, we show that by exploiting the nonholonomic constraints of wheeled vehicles it is possible to use a restrictive motion model which allows us to parameterize the motion with only 1 point correspondence. Using a single feature correspondence for motion estimation is the lowest model parameterization possible and results in the two most efficient algorithms for removing outliers: 1-point RANSAC and histogram voting. To support our method we run many experiments on both synthetic and real data and compare the performance with a state-of-the-art approach. Finally, we show an application of our method to visual odometry by recovering a 3 km trajectory in a cluttered urban environment in real-time. © 2011 Springer Science+Business Media, LLC.

Stenflo J.O.,ETH Zurich
Astronomy and Astrophysics Review | Year: 2013

Observational astrophysics started when spectroscopy could be applied to astronomy. Similarly, observational work on stellar magnetic fields became possible with the application of spectro-polarimetry. In recent decades there have been dramatic advances in the observational tools for spectro-polarimetry. The four Stokes parameters that provide a complete representation of partially polarized light can now be simultaneously imaged with megapixel array detectors with high polarimetric precision (10^-5 in the degree of polarization). This has led to new insights about the nature and properties of the magnetic field, and has helped pave the way for the use of the Hanle effect as a diagnostic tool beside the Zeeman effect. The magnetic structuring continues on scales orders of magnitude smaller than the resolved ones, but various types of spectro-polarimetric signatures can be identified, which let us determine the field strengths and angular distributions of the field vectors in the spatially unresolved domain. Here we review the observational properties of the magnetic field, from the global patterns to the smallest scales at the magnetic diffusion limit, and relate them to the global and local dynamos. © 2013 Springer-Verlag Berlin Heidelberg.

Sigrist F.,ETH Zurich
Journal of Statistical Software | Year: 2015

The R package spate implements methodology for modeling large space-time data sets. A spatio-temporal Gaussian process is defined through a stochastic partial differential equation (SPDE) which is solved using spectral methods. In contrast to the traditional geostatistical way of relying on the covariance function, the spectral SPDE approach is computationally tractable and provides a realistic space-time parametrization. This package aims at providing tools for simulating and modeling spatio-temporal processes using an SPDE-based approach. The package contains functions for obtaining parametrizations, such as propagator or innovation covariance matrices, of the spatio-temporal model. This allows for building customized hierarchical Bayesian models using the SPDE-based model at the process stage. The functions of the package then provide computationally efficient algorithms needed for doing inference with the hierarchical model. Furthermore, an adaptive Markov chain Monte Carlo (MCMC) algorithm implemented in the package can be used for doing inference without any additional modeling. This function is flexible and allows for application-specific customizing. The MCMC algorithm supports data that follow a Gaussian or a censored distribution with point mass at zero. Spatio-temporal covariates can be included in the model through a regression term. © 2015, American Statistical Association. All rights reserved.

Schneider G.,ETH Zurich
Drug Discovery Today: Technologies | Year: 2013

Current trends in computational de novo design provide a fresh approach to 'scaffold-hopping' in drug discovery. The methodological repertoire is no longer limited to receptor-based methods, but specifically ligand-based techniques that consider multiple properties in parallel, including the synthetic feasibility of the computer-generated molecules and their polypharmacology, provide innovative ideas for the discovery of new chemical entities. The concept of fragment-based and virtual reaction-driven design enables rapid compound optimization from scratch with a manageable complexity of the search. Starting from known drugs as a reference, such algorithms suggest drug-like molecules with motivated scaffold variations, and advanced mathematical models of structure-activity landscapes and multi-objective design techniques have created new opportunities for hit and lead finding. © 2012 Elsevier Ltd.

Stability analysis of discrete-time switched systems under minimum dwell-time is studied using a new type of LMI conditions. These conditions are convex in the matrices of the system and shown to be equivalent to the nonconvex conditions proposed in Geromel and Colaneri (2006b). The convexification of the conditions is performed by a lifting process which introduces a moderate number of additional decision variables. The convexity of the conditions can be exploited to extend the results to uncertain systems, control design and ℓ2-gain computation without introducing additional conservatism. Several examples are presented to show the effectiveness of the approach. © 2014 Elsevier Ltd. All rights reserved.

The construction industry lacks solutions for accurately, comprehensively and efficiently tracking the three-dimensional (3D) status of buildings under construction. Such information is, however, critical to the successful management of construction projects: it supports fundamental activities such as progress tracking and construction dimensional quality control. In this paper, a new approach for automated recognition of project 3D Computer-Aided Design (CAD) model objects in large laser scans is presented, with significant improvements compared to the one originally proposed in Bosché et al. (in press) [11]. A more robust point matching method is used, and registration quality is improved with the implementation of an Iterative Closest Point (ICP)-based fine registration step. Once the optimal registration of the project's CAD model with a site scan is obtained, a similar ICP-based registration algorithm is proposed to calculate the as-built poses of the CAD model objects. These as-built poses are then used for automatically controlling the compliance of the project with respect to corresponding dimensional tolerances. Experimental results are presented with data obtained from the erection of an industrial building's steel structure. They demonstrate the performance in real field conditions of the model registration and object recognition algorithms, and show the potential of the proposed approach for as-built dimension calculation and control. © 2009 Elsevier Ltd. All rights reserved.
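The ICP idea underlying both registration steps can be reduced to a minimal loop: match each scan point to its nearest model point, estimate the transform from the matched pairs, apply it, and repeat until convergence. The sketch below is deliberately simplified (2D points, brute-force matching, translation only), whereas the paper's algorithm estimates a full rigid pose on real laser scans:

```python
def nearest(p, pts):
    """Brute-force nearest neighbour of p among pts."""
    return min(pts, key=lambda q: (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)

def icp_translation(scan, model, iters=10):
    """Minimal translation-only ICP: returns the translation that
    moves `scan` onto `model`."""
    tx = ty = 0.0
    pts = list(scan)
    for _ in range(iters):
        pairs = [(p, nearest(p, model)) for p in pts]
        # Mean residual between matched pairs gives the translation update.
        dx = sum(q[0] - p[0] for p, q in pairs) / len(pairs)
        dy = sum(q[1] - p[1] for p, q in pairs) / len(pairs)
        tx, ty = tx + dx, ty + dy
        pts = [(p[0] + dx, p[1] + dy) for p in pts]
    return tx, ty

model = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (2.0, 2.0)]
scan = [(x + 0.05, y - 0.03) for x, y in model]  # model shifted by a small offset
print(icp_translation(scan, model))  # recovers roughly (-0.05, 0.03)
```

A production registration additionally estimates rotation (e.g. via SVD of the cross-covariance of the matched pairs), uses spatial indexing instead of brute-force matching, and rejects outlier matches.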

Buryak A.,ETH Zurich
Communications in Mathematical Physics | Year: 2015

In this paper we present a new construction of a Hamiltonian hierarchy associated to a cohomological field theory. We conjecture that in the semisimple case our hierarchy is related to the Dubrovin–Zhang hierarchy by a Miura transformation, and we check it in several examples. © 2014, Springer-Verlag Berlin Heidelberg.

In a recent paper, Pandharipande, Solomon and Tessler initiated a study of the intersection theory on the moduli space of Riemann surfaces with boundary. The authors conjectured KdV and Virasoro type equations that completely determine all intersection numbers. In this paper, we study these equations in detail. In particular, we prove that the KdV and the Virasoro type equations for the intersection numbers on the moduli space of Riemann surfaces with boundary are equivalent. © 2015, Springer Science+Business Media Dordrecht.

Heinrich C.A.,ETH Zurich | Heinrich C.A.,University of Zurich
Nature Geoscience | Year: 2015

The Witwatersrand Basin in South Africa is one of the best-preserved records of fluvial sedimentation on an Archaean continent. The basin hosts the world's biggest gold resource in thin pebble beds, but the process for gold enrichment is debated. Mechanical accumulation of gold particles from flowing river water is the prevailing hypothesis, yet there is evidence for hydrothermal mobilization of gold by fluids invading the metasedimentary rocks after their burial. Earth's atmosphere three billion years ago was oxygen-free, but already sustained some of the oldest microbial life on land. Here I use thermodynamic modelling and mass-balance calculations to show that these conditions could have led to the chemical transport and precipitation of gold in anoxic surface waters, reconciling the evidence for fluvial deposition with evidence for hydrothermal-like chemical reactions. I suggest that the release of sulphurous gases from large volcanic eruptions created acid rain that enabled the dissolution and transport of gold in surface waters as sulphur complexes. Precipitation of the richest gold deposits could have been triggered by chemical reduction of the dissolved gold onto organic material in shallow lakes and pools. I conclude that the Witwatersrand gold could have formed only during the Archaean, after the emergence of continental life but before the rise of oxygen in the Earth's atmosphere. © 2015 Macmillan Publishers Limited.

Sornette D.,ETH Zurich | Sornette D.,Swiss Finance Institute
Reports on Progress in Physics | Year: 2014

This short review presents a selected history of the mutual fertilization between physics and economics, from Isaac Newton and Adam Smith to the present. The fundamentally different perspectives embraced in theories developed in financial economics compared with physics are dissected with the examples of the volatility smile and of the excess volatility puzzle. The role of the Ising model of phase transitions in modeling social and financial systems is reviewed, with the concepts of random utilities and the logit model as the analog of the Boltzmann factor in statistical physics. Recent extensions in terms of quantum decision theory are also covered. A wealth of models that build on the Ising model and generalize it to account for the many stylized facts of financial markets are discussed briefly. A summary of the relevance of the Ising model and its extensions is provided to account for financial bubbles and crashes. The review would be incomplete if it did not cover the dynamical field of agent-based models (ABMs), also known as computational economic models, of which the Ising-type models are just special ABM implementations. We formulate the 'Emerging Intelligence Market Hypothesis' to reconcile the pervasive presence of 'noise traders' with the near efficiency of financial markets. Finally, we note that evolutionary biology, more than physics, is now playing a growing role in inspiring models of financial markets. © 2014 IOP Publishing Ltd.
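The analogy drawn above between the logit model and the Boltzmann factor is direct: choice probabilities are a softmax over utilities, with the noise level T playing the role of temperature. A minimal illustration, using hypothetical utility values:

```python
import math

def logit_choice(utilities, T=1.0):
    """Logit (softmax) choice probabilities: P_i proportional to exp(u_i / T),
    the direct analogue of the Boltzmann factor exp(-E_i / kT)."""
    weights = [math.exp(u / T) for u in utilities]
    Z = sum(weights)  # normalization, the analogue of the partition function
    return [w / Z for w in weights]

u = [1.0, 0.5, 0.0]            # utilities of three alternatives (hypothetical)
hot = logit_choice(u, T=5.0)   # high T ("noisy" agents): nearly uniform choices
cold = logit_choice(u, T=0.1)  # low T: the best alternative dominates
```

Varying T interpolates between random and perfectly rational choice, which is what lets Ising-type models capture both noise traders and herding.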

As a rule, a technology transfer gap exists between research and development and the commercialisation of the results. This article investigates the role of new ventures for technology transfer from universities and research institutions as well as between or within companies to close this gap. Based on case studies in Germany and Switzerland, different examples of this technology transfer approach have been analysed. Academic spin-offs can help to transfer technology from universities and research institutions to industry especially if there is the need for additional funding to further develop the technology. Corporate spin-outs can be used for technology transfer between companies as an alternative to closing operations should these no longer fit into the parent organisation. Internal start-ups were identified as a new approach for company internal technology transfer from research departments to business units focused on commercial operations to overcome innovation barriers within companies. © 2012 Springer Science+Business Media, LLC.

Jones O.G.,Purdue University | Mezzenga R.,ETH Zurich
Soft Matter | Year: 2012

Protein fibrils are relevant not only in medicine and amyloid-related neurodegenerative diseases, but also as functional structures in material science or biology. The assembly of protein into fibrils can be promoted or inhibited based on the chosen environmental conditions and interaction with suitable components. We review here the key strategies for promotion and inhibition of protein fibrillation in both physiological and non-physiological conditions in order to create functional designs. The major variables discussed are solvent conditions, metals/ions, biopolymers, aromatic compounds, and surface active components. Due to bias in research directions, deeper investigation has traditionally been carried out for inhibition of fibrillation, but focus has recently shifted. Thus, while various strategies are presented on the breakdown of mature protein fibrils, emphasis is given to the approaches leading to increased rigidity and length of resultant fibrils. We highlight important areas in this field that require further development and promising lines of future experiments. © 2012 The Royal Society of Chemistry.

Wall D.H.,Colorado State University | Nielsen U.N.,University of Western Sydney | Six J.,ETH Zurich
Nature | Year: 2015

Soil biodiversity is increasingly recognized as providing benefits to human health because it can suppress disease-causing soil organisms and provide clean air, water and food. Poor land-management practices and environmental change are, however, affecting belowground communities globally, and the resulting declines in soil biodiversity reduce and impair these benefits. Importantly, current research indicates that soil biodiversity can be maintained and partially restored if managed sustainably. Promoting the ecological complexity and robustness of soil biodiversity through improved management practices represents an underutilized resource with the ability to improve human health. © 2015 Macmillan Publishers Limited. All rights reserved.

Gonnet P.,ETH Zurich
ACM Computing Surveys | Year: 2012

The most critical component of any adaptive numerical quadrature routine is the estimation of the integration error. Since the publication of the first algorithms in the 1960s, many error estimation schemes have been presented, evaluated, and discussed. This article presents a review of existing error estimation techniques and discusses their differences and their common features. Some common shortcomings of these algorithms are discussed, and a new general error estimation technique is presented. © 2012 ACM.
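As a minimal illustration of the kind of local error estimator the article reviews (not Gonnet's own scheme), the sketch below compares a coarse Simpson panel against its two-panel refinement and recurses where the estimated error is too large; the division by 15 and the extrapolation step are standard textbook choices.

```python
import math

def simpson(f, a, b):
    # 3-point Simpson rule on [a, b].
    m = 0.5 * (a + b)
    return (b - a) / 6.0 * (f(a) + 4.0 * f(m) + f(b))

def adaptive_quad(f, a, b, tol=1e-8):
    """Recursive adaptive Simpson quadrature.

    The local error of the coarse panel is estimated from its
    difference to the two-panel refinement (Richardson-style),
    the simplest member of the estimator families under review.
    """
    m = 0.5 * (a + b)
    coarse = simpson(f, a, b)
    fine = simpson(f, a, m) + simpson(f, m, b)
    err = abs(fine - coarse) / 15.0
    if err < tol:
        return fine + (fine - coarse) / 15.0  # one extrapolation step
    # Split the tolerance between the two half-intervals and recurse.
    return (adaptive_quad(f, a, m, tol / 2) +
            adaptive_quad(f, m, b, tol / 2))

# Example: the integral of sin over [0, pi] is exactly 2.
approx = adaptive_quad(math.sin, 0.0, math.pi)
```

Such difference-based estimators are cheap but can be fooled by integrands whose higher derivatives vanish at the sample points, one of the shortcomings the article discusses.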

Sosa B.A.,Massachusetts Institute of Technology | Kutay U.,ETH Zurich | Schwartz T.U.,Massachusetts Institute of Technology
Current Opinion in Structural Biology | Year: 2013

Communication between nucleus and cytoplasm extends past molecular exchange and critically includes mechanical wiring. Cytoskeleton and nucleoskeleton are connected via molecular tethers that span the nuclear envelope. Sad1, UNC84 (SUN)-domain proteins spanning the inner nuclear membrane and Klarsicht, ANC-1 and SYNE/Nesprin-1 and -2 Homology (KASH)-peptide bearing proteins residing in the outer nuclear membrane directly bind and constitute the core of the LInkers of Nucleoskeleton and Cytoskeleton (LINC) complex. These connections appear critical for a growing number of biological processes and aberrations are implicated in a host of diverse diseases, including muscular dystrophies, cardiomyopathies, and premature aging. We discuss recent developments in this vibrant research area, particularly in context of first structural insights into LINC complexes reported in the past year. © 2013 Elsevier Ltd.

Hofert M.,ETH Zurich
Computational Statistics and Data Analysis | Year: 2011

Efficient sampling algorithms for both Archimedean and nested Archimedean copulas are presented. First, efficient sampling algorithms for the nested Archimedean families of Ali-Mikhail-Haq, Frank, and Joe are introduced. Second, a general strategy for building a nested Archimedean copula from a given Archimedean generator is presented. Sampling this copula involves sampling an exponentially tilted stable distribution. A fast rejection algorithm is developed for the more general class of tilted Archimedean generators. It is proven that this algorithm reduces the complexity of the standard rejection algorithm to logarithmic complexity. As an application, it is shown that the fast rejection algorithm outperforms existing algorithms for sampling the exponentially tilted stable distributions involved, e.g., in nested Clayton copulas. Third, with the additional help of randomization of generator parameters, explicit sampling algorithms for several nested Archimedean copulas based on different Archimedean families are found. Additional results include approximations and some dependence properties, such as Kendall's tau and tail dependence parameters. The presented ideas may also apply in the more general context of sampling distributions given by their Laplace-Stieltjes transforms. © 2010 Elsevier B.V. All rights reserved.
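A baseline for the algorithms discussed above is the standard Marshall-Olkin construction, which samples an Archimedean copula from its generator via a frailty variable. The sketch below applies it to the (non-nested) Clayton family, whose frailty is Gamma(1/θ, 1); the parameter values and sample size are illustrative only.

```python
import random

def sample_clayton(theta, d, n, seed=42):
    """Sample n points from a d-dimensional Clayton copula.

    Marshall-Olkin algorithm: draw the frailty V from the
    distribution whose Laplace-Stieltjes transform equals the
    Archimedean generator psi(t) = (1 + t)^(-1/theta), here a
    Gamma(1/theta, 1) variable, then set U_i = psi(E_i / V)
    with independent standard exponentials E_i.
    """
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        v = rng.gammavariate(1.0 / theta, 1.0)  # frailty V
        row = [(1.0 + rng.expovariate(1.0) / v) ** (-1.0 / theta)
               for _ in range(d)]
        samples.append(row)
    return samples

u = sample_clayton(theta=2.0, d=2, n=5000)
```

Each marginal is uniform on (0, 1) by construction; the dependence strength grows with theta (Kendall's tau is theta/(theta + 2) for Clayton).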

Kassios I.T.,ETH Zurich
Formal Aspects of Computing | Year: 2011

The theory of Dynamic Frames has been invented to deal with the frame problem in the presence of encapsulation and pointers. It has proved more flexible and conceptually simpler than previous approaches that tackled the problem. It is now being actively used both for theoretical and for practical purposes related to the formal verification of program correctness. This paper presents the full theory of Dynamic Frames, together with its reasoning laws and exemplifies the use of these laws in proving correct several common design patterns. It also discusses the ongoing research on the topic. © 2010 British Computer Society.

Jenny P.,ETH Zurich | Roekaerts D.,Technical University of Delft | Beishuizen N.,Robert Bosch GmbH
Progress in Energy and Combustion Science | Year: 2012

In a real turbulent spray flame, dispersion, continuous phase turbulence modification, dispersed phase inter-particle collisions, evaporation, mixing and combustion occur simultaneously. Dealing with all these complexities and their interactions poses a tremendous modeling task. Therefore, in order to advance current modeling capabilities, it seems reasonable to aim for progress in individual sub-areas like breakup, dispersion, mixing and combustion, which however cannot be viewed in complete isolation. Further, one has to consider advantages and disadvantages of the general modeling approaches, which are direct numerical simulation (DNS), large eddy simulation (LES), simulations based on Reynolds averaged equations and probability density function (PDF) methods. Not least one also has to distinguish between Eulerian and Lagrangian dispersed phase descriptions. The goal of this paper is to provide a review of computational model developments relevant for turbulent dilute spray combustion, i.e. the dense regime, including collisions as well as primary and secondary atomization, is not covered. Also not considered is breakup in dilute sprays, which can occur in the presence of sufficiently high local turbulence. It is intended to guide readers interested in theory, in the development and validation of predictive models, and in planning new experiments. In terms of physical phenomena, the current understanding regarding turbulence modification due to droplets, preferential droplet concentration, impact on evaporation and micro-mixing, and different spray combustion regimes is summarized. In terms of modeling, different sets of equations are discussed, i.e. the governing conservation laws without and with point droplet approximation as employed by DNS, the filtered equations considered in LES, the Reynolds averaged equations, and Lagrangian evolution equations. Further, small scale models required in the context of point droplet approximations are covered. 
In terms of computational studies and method developments, progress is categorized by the employed approaches, i.e. DNS, LES, simulations based on Reynolds averaged equations, and PDF methods. In terms of experiments, various canonical spray flame configurations are discussed. Moreover, some of the most important experiments in this field are presented in a structured way with the intention to provide a database for model validation and a guideline for future investigations. © 2012 Elsevier Ltd. All rights reserved.

The feasibility of a next generation neutrino observatory in Europe is being considered within the LAGUNA design study. To accommodate giant neutrino detectors and shield them from cosmic rays, a new very large underground infrastructure is required. Seven potential candidate sites in different parts of Europe and at several distances from CERN are being studied: Boulby (UK), Canfranc (Spain), Fréjus (France/Italy), Pyhäsalmi (Finland), Polkowice-Sieroszowice (Poland), Slanic (Romania) and Umbria (Italy). The design study aims at a comprehensive and coordinated technical assessment of each site, at a coherent cost estimation, and at a prioritization of the sites by summer 2010.

Cossart P.,Institute Pasteur Paris | Cossart P.,French National Institute for Agricultural Research | Helenius A.,ETH Zurich
Cold Spring Harbor Perspectives in Biology | Year: 2014

Of the many pathogens that infect humans and animals, a large number use cells of the host organism as protected sites for replication. To reach the relevant intracellular compartments, they take advantage of the endocytosis machinery and exploit the network of endocytic organelles for penetration into the cytosol or as sites of replication. In this review, we discuss the endocytic entry processes used by viruses and bacteria and compare the strategies used by these dissimilar classes of pathogens. © 2014 Cold Spring Harbor Laboratory Press; all rights reserved.

Maass R.,University of Gottingen | Loffler J.F.,ETH Zurich
Advanced Functional Materials | Year: 2015

The future of metallic glasses as an advanced structural and functional material will to a great extent depend on the understanding and control of their mesoscopic flow defects called shear bands. These defects are sweet-and-sour; sweet because they mediate macroscopic plasticity at room temperature, and sour because they quickly promote failure. In the past decade, fundamental research generated great progress in characterizing the role that shear bands play during plastic deformation of disordered systems, including metallic glasses. Similar to those in many other materials, shear bands in metallic glasses are only active for a very short time, which directed research focus towards topological, structural, chemical, and thermal properties of formed, but inactive shear bands. In this paper, recent progress in directly characterizing the shear-band dynamics in situ during straining experiments is presented. Various shear-banding stages are outlined, including formation, propagation, and arrest, as well as shear-band creep and aging. The results are discussed in a more general context of disordered materials, concluding with a summarizing overview of time-scales involved in shear banding, and describing future research directions that may lead to controlled shear-band plasticity in metallic glasses. Dynamic properties of shear bands are a key element for the design of plastically stable bulk metallic glasses. In this Feature Article, recent progress on in situ characterization of shear-band dynamics is summarized. The aim is to provide a comprehensive understanding of shear-band initiation, propagation, arrest, creep, and aging, and how they determine the plastic flow behavior of bulk metallic glasses. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

Studart A.R.,ETH Zurich
Advanced Materials | Year: 2012

Biological composites have evolved elaborate hierarchical structures to achieve outstanding mechanical properties using weak but readily available building blocks. Combining the underlying design principles of such biological materials with the rich chemistry accessible in synthetic systems may enable the creation of artificial composites with unprecedented properties and functionalities. This bioinspired approach requires identification, understanding, and quantification of natural design principles and their replication in synthetic materials, taking into account the intrinsic properties of the stronger artificial building blocks and the boundary conditions of engineering applications. In this progress report, the scientific and technological questions that have to be addressed to achieve this goal are highlighted, and examples of recent research efforts to tackle them are presented. These include the local characterization of the heterogeneous architecture of biological materials, the investigation of structure-function relationships to help unveil natural design principles, and the development of synthetic processing routes that can potentially be used to implement some of these principles in synthetic materials. The importance of replicating the design principles of biological materials rather than their structure per se is highlighted, and possible directions for further progress in this fascinating, interdisciplinary field are discussed. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

Torrilhon M.,ETH Zurich
Communications in Computational Physics | Year: 2010

In this paper we develop a new closure theory for moment approximations in kinetic gas theory and derive hyperbolic moment equations for 13 fluid variables including stress and heat flux. Classical equations have either restricted hyperbolicity regions like Grad's moment equations or fail to include higher moments in a practical way like the entropy maximization approach. The new closure is based on Pearson-Type-IV distributions which reduce to Maxwellians in equilibrium, but allow anisotropies and skewness in non-equilibrium. The closure relations are essentially explicit and easy to evaluate. Hyperbolicity is shown numerically for a large range of values. Numerical solutions of Riemann problems demonstrate the capability of the new equations to handle strong non-equilibrium. © 2010 Global-Science Press.

Mojsilovic N.,ETH Zurich
Construction and Building Materials | Year: 2012

Load tests on 10 series of pre-compressed masonry triplets with a damp-proof course (DPC) membrane placed either in the middle of the mortar bed joint or between the masonry unit and bed joint mortar have been completed. The shear behaviour of the specimens was highly influenced by the applied pre-compression level and almost uninfluenced by the position of the DPC. For the prediction of the failure shear force a Mohr-Coulomb failure criterion can be applied. A considerable energy dissipation and large deformation capacity could be expected in masonry structures with DPC incorporated in the bed joints. Such behaviour would be desirable for enhanced seismic performance. © 2012 Elsevier Ltd. All rights reserved.
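The Mohr-Coulomb criterion named above predicts the failure shear force as V_u = (c + μ·σ_n)·A, i.e. cohesion plus friction under the applied pre-compression, acting over the bed-joint area. A minimal sketch, with cohesion, friction coefficient, and pre-compression values chosen purely for illustration (not taken from the reported test series):

```python
def mohr_coulomb_shear(cohesion, friction_coeff, precompression, area):
    """Failure shear force V_u = (c + mu * sigma_n) * A.

    cohesion        c        [MPa]  shear strength at zero normal stress
    friction_coeff  mu       [-]    Coulomb friction coefficient
    precompression  sigma_n  [MPa]  normal pre-compression stress
    area            A        [mm^2] bed-joint area
    Returns the predicted failure shear force in N (MPa * mm^2 = N).
    """
    tau_u = cohesion + friction_coeff * precompression  # shear strength [MPa]
    return tau_u * area

# Illustrative values only:
v_u = mohr_coulomb_shear(cohesion=0.2, friction_coeff=0.6,
                         precompression=1.0, area=100_000.0)
```

The linear dependence on sigma_n is exactly the pre-compression sensitivity the triplet tests exhibit.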

Stadler T.,ETH Zurich
Journal of Theoretical Biology | Year: 2010

I consider the constant rate birth-death process with incomplete sampling. I calculate the density of a given tree with sampled extant and extinct individuals. This density is essential for analyzing datasets which are sampled through time. Such datasets are common in virus epidemiology, as viruses in infected individuals are sampled through time. Further, such datasets appear in phylogenetics when extant and extinct species data are available. I show how the derived tree density can be used (i) as a tree prior in a Bayesian method to reconstruct the evolutionary past of the sequence data on a calendar timescale, (ii) to infer the birth- and death-rates for a reconstructed evolutionary tree, and (iii) for simulating trees with a given number of sampled extant and extinct individuals, which is essential for testing evolutionary hypotheses for the considered datasets. © 2010 Elsevier Ltd.
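Point (iii) above, simulating under a constant-rate birth-death process with incomplete sampling, can be sketched as a simple Gillespie simulation; the version below tracks only lineage counts rather than full trees, and the rates, time horizon, and sampling probability are illustrative assumptions.

```python
import random

def simulate_birth_death(birth, death, t_max, rho, rng):
    """Gillespie simulation of a constant-rate birth-death process.

    Tracks the number of extant lineages starting from one; at the
    present time t_max each surviving individual is sampled
    independently with probability rho (incomplete sampling).
    Returns the number of sampled extant individuals (0 if the
    process died out before t_max).
    """
    n, t = 1, 0.0
    while n > 0:
        rate = n * (birth + death)          # total event rate
        t += rng.expovariate(rate)          # waiting time to next event
        if t >= t_max:
            break
        if rng.random() < birth / (birth + death):
            n += 1                          # speciation / infection
        else:
            n -= 1                          # extinction / recovery
    # Binomial sampling of the extant individuals at the present.
    return sum(1 for _ in range(n) if rng.random() < rho)

counts = [simulate_birth_death(1.0, 0.5, 3.0, 0.7, random.Random(s))
          for s in range(200)]
```

Conditioning such simulations on a fixed number of sampled tips, as the paper requires, would additionally involve rejection or the direct use of the derived tree density.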

Holzner M.,Max Planck Institute for Dynamics and Self-Organization | Luthi B.,ETH Zurich
Physical Review Letters | Year: 2011

In this Letter we present results from particle tracking velocimetry and direct numerical simulation that are congruent with the existence of a laminar superlayer, as proposed in the pioneering work of Corrsin and Kistler (NACA Technical Report No. 1244, 1955). We find that the local superlayer velocity is dominated by a viscous component and its magnitude is comparable to the characteristic velocity of the smallest scales of motion. This slow viscous process involves a large surface area so that the global rate of turbulence spreading is set by the largest scales of motion. These findings are important for a better understanding of mixing of mass and momentum in a variety of flows where thin layers of shear exist. Examples are boundary layers, clouds, planetary atmospheres, and oceans. © 2011 American Physical Society.

Khan A.,ETH Zurich | Shankland T.J.,Los Alamos National Laboratory
Earth and Planetary Science Letters | Year: 2012

This paper applies electromagnetic sounding methods for Earth's mantle to constrain its thermal state, chemical composition, and "water" content. We consider long-period inductive response functions in the form of C-responses from four stations distributed across the Earth (Europe, North America, Asia and Australia), covering a period range from 3.9 to 95.2 days and sensitive to ~1200 km depth. We invert C-responses directly for thermo-chemical state using a self-consistent thermodynamic method that computes phase equilibria as functions of pressure, temperature, and composition (in the Na2O-CaO-FeO-MgO-Al2O3-SiO2 model system). Computed mineral modes are combined with recent laboratory-based electrical conductivity models from independent experimental research groups (Yoshino (2010) and Karato (2011)) to compute the bulk conductivity structure beneath each of the four stations, from which C-responses are estimated. To reliably allocate water between the various mineral phases we include laboratory-measured water partition coefficients for major upper mantle and transition zone minerals. This scheme is interfaced with a sampling-based algorithm to solve the resulting non-linear inverse problem. This approach has two advantages: (1) it anchors temperatures, compositions, electrical conductivities, and discontinuities in laboratory-based forward models, and (2) at the same time it permits the use of geophysical inverse methods to optimize conductivity profiles to match geophysical data. The results show lateral variations in upper mantle temperatures beneath the four stations that appear to persist throughout the upper mantle and parts of the transition zone. Calculated mantle temperatures at 410 and 660 km depth lie in the range 1250-1650°C and 1500-1750°C, respectively, and generally agree with the experimentally-determined temperatures at which the measured phase reactions olivine→β-spinel and γ-spinel→ferropericlase+perovskite occur.
The retrieved conductivity structures beneath the various stations tend to follow the trends observed for temperature, with the strongest lateral variations in the uppermost mantle; for depths >300 km, conductivities appear to depend less on the particular conductivity database. Conductivities at 410 km and at 660 km depth are found to agree overall with purely geophysically-derived global and semi-global one-dimensional conductivity models. Both electrical conductivity databases point to <0.01 wt.% H2O in the upper mantle. For transition zone minerals, results from the laboratory database of Yoshino (2010) suggest that a much higher water content (up to 2 wt.% H2O) is required than in the other database (Karato, 2011), which favors a relatively "dry" transition zone (<0.01 wt.% H2O). Incorporating laboratory measurements of hydrous silicate melting relations and available conductivity data allows us to consider the possibility of hydration melting and a high-conductivity melt layer above the 410-km discontinuity. The latter appears to be (1) regionally localized and (2) principally a feature of the Yoshino (2010) database. Further, there is evidence of lateral heterogeneity: the mantle beneath southwestern North America and central China appears "wetter" than that beneath central Europe or Australia. © 2011 Elsevier B.V.

Willett S.D.,ETH Zurich
Annual Review of Earth and Planetary Sciences | Year: 2010

As with many mountain belts globally, the Alps have seen a large increase in sediment yield in the late Neogene. The hypothesis that this increase results from climate-change impacts on erosion rate over the past ∼5 Ma is testable, given the extensive work completed in the Alps. Sediment budgets, thermochronology-based cooling rates, and estimates of modern rock uplift and erosion in the Western Alps are reviewed to search for correlation in space and time that might fingerprint climatic events. Major increases in sediment yield are apparent starting in the late Messinian (5.5 Mya) and accelerating toward the present day; this suggests the occurrence of a series of climatic changes that increased erosion rates, even as tectonic processes appear to have slowed or stopped. At present, tight correlations in time are difficult to establish and there remain open questions about the role of tectonics in the coupled system. Copyright © 2010 by Annual Reviews. All rights reserved.

Locher K.P.,ETH Zurich
Nature Structural and Molecular Biology | Year: 2016

ABC transporters catalyze transport reactions, such as the high-affinity uptake of micronutrients into bacteria and the export of cytotoxic compounds from mammalian cells. Crystal structures of ABC domains and full transporters have provided a framework for formulating reaction mechanisms of ATP-driven substrate transport, but recent studies have suggested remarkable mechanistic diversity within this protein family. This review evaluates the differing mechanistic proposals and outlines future directions for the exploration of ABC-transporter-catalyzed reactions. © 2016 Nature America, Inc. All rights reserved.

Bordiga S.,University of Turin | Groppo E.,University of Turin | Agostini G.,University of Turin | Van Bokhoven J.A.,ETH Zurich | And 2 more authors.
Chemical Reviews | Year: 2013

Starting in the late seventies, the progressively increased availability of synchrotron light sources allowed the execution of experiments requiring a high X-ray flux in a continuous interval. In particular, XAFS techniques are extremely powerful for shedding light on the structure and reactivity of surface species in heterogeneous catalysts, during many steps of the catalyst life and also during the catalytic act. A quantitative EXAFS and XANES analysis, possible only under these conditions, was fundamental to reveal the local structure of carbonyl species. IR spectroscopy gave independent information on the fraction of chromium sites active in ethylene polymerization on the Phillips catalysts as a function of polymerization conditions, in good agreement with the values obtained by XANES. A beamline typically comprises three sections: the first is the optical hutch, which conditions the beam; the second is the experimental hutch, where the experiment is performed; and the third is the control room, where scientists monitor the experiment and perform a first data analysis.

Tran D.N.,Ecole Polytechnique Federale de Lausanne | Tran D.N.,ETH Zurich | Cramer N.,Ecole Polytechnique Federale de Lausanne
Angewandte Chemie - International Edition | Year: 2011

Triple selectivity: Highly substituted indenylamines can be obtained with high enantioselectivity by formal [3+2] additions of aryl ketimines with internal alkynes. These rhodium(I)-catalyzed processes proceed by selective C-H activation of one of the two arene substituents, regioselective carbometalation of the alkyne, and enantioselective addition across the imine. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

Steurer W.,ETH Zurich
Angewandte Chemie - International Edition | Year: 2011

Since the moment Dan Shechtman discovered a material with an ordered but aperiodic ("quasiperiodic") structure, which was believed to be impossible at the time, he began a long battle to convince his colleagues that his data were real. His efforts were finally rewarded with a Nobel Prize. The picture shows the projected electron density distribution function of decagonal Al-Co-Ni with Penrose tiling as an example of a quasicrystal. © 2011 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

Jiricny J.,ETH Zurich
Cold Spring Harbor Perspectives in Biology | Year: 2013

The mismatch repair (MMR) system detects non-Watson-Crick base pairs and strand misalignments arising during DNA replication and mediates their removal by catalyzing excision of the mispair-containing tract of nascent DNA and its error-free resynthesis. In this way, MMR improves the fidelity of replication by several orders of magnitude. It also addresses mispairs and strand misalignments arising during recombination and prevents synapses between nonidentical DNA sequences. Unsurprisingly, MMR malfunction brings about genomic instability that leads to cancer in mammals. But MMR proteins have recently been implicated also in other processes of DNA metabolism, such as DNA damage signaling, antibody diversification, and repair of interstrand cross-links and oxidative DNA damage, in which their functions remain to be elucidated. This article reviews the progress in our understanding of the mechanism of replication error repair made during the past decade. © 2013 Cold Spring Harbor Laboratory Press; all rights reserved.

Keeling R.F.,University of California at San Diego | Kortzinger A.,Leibniz Institute of Marine Science | Gruber N.,ETH Zurich
Annual Review of Marine Science | Year: 2010

Ocean warming and increased stratification of the upper ocean caused by global climate change will likely lead to declines in dissolved O2 in the ocean interior (ocean deoxygenation), with implications for ocean productivity, nutrient cycling, carbon cycling, and marine habitat. Ocean models predict declines of 1 to 7% in the global ocean O2 inventory over the next century, with declines continuing for a thousand years or more into the future. An important consequence may be an expansion in the area and volume of so-called oxygen minimum zones, where O2 levels are too low to support many macrofauna and profound changes in biogeochemical cycling occur. Significant deoxygenation has occurred over the past 50 years in the North Pacific and tropical oceans, suggesting larger changes are looming. The potential for larger O2 declines in the future suggests the need for an improved observing system for tracking ocean O2 changes. © 2010 by Annual Reviews.

Hilvert D.,ETH Zurich
Annual Review of Biochemistry | Year: 2013

Diverse engineering strategies have been developed to create enzymes with novel catalytic activities. Among these, computational approaches hold particular promise. Enzymes have been computationally designed to promote several nonbiological reactions, including a Diels-Alder cycloaddition, proton transfer, multistep retroaldol transformations, and metal-dependent hydrolysis of phosphotriesters. Although their efficiencies (kcat/KM = 0.1-100 M^-1 s^-1) are typically low compared with those of the best natural enzymes (10^6-10^8 M^-1 s^-1), these catalysts are excellent starting points for laboratory evolution. This review surveys recent progress in combining computational and evolutionary approaches to enzyme design, together with insights into enzyme function gained from studies of the engineered catalysts. © 2013 by Annual Reviews. All rights reserved.
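To put the quoted efficiencies in perspective, a short Michaelis-Menten sketch with hypothetical kinetic parameters (kcat/KM = 10 M^-1 s^-1 for a designed enzyme versus 10^7 M^-1 s^-1 for a natural one) shows the orders-of-magnitude rate gap that laboratory evolution must close; none of the numbers below come from the review.

```python
def mm_rate(kcat, km, e_conc, s_conc):
    """Michaelis-Menten rate v = kcat * [E] * [S] / (KM + [S]).

    All concentrations in M, kcat in s^-1; returns v in M s^-1.
    """
    return kcat * e_conc * s_conc / (km + s_conc)

# Hypothetical parameter sets (illustrative only):
# designed enzyme: kcat/KM = 0.01 / 1e-3 = 10 M^-1 s^-1
# natural enzyme:  kcat/KM = 1e3 / 1e-4 = 1e7 M^-1 s^-1
designed = mm_rate(kcat=0.01, km=1e-3, e_conc=1e-6, s_conc=1e-4)
natural = mm_rate(kcat=1e3, km=1e-4, e_conc=1e-6, s_conc=1e-4)
ratio = natural / designed
```

Under these assumed conditions the natural enzyme turns substrate over roughly half a million times faster at the same enzyme and substrate concentrations.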

Maeda S.,Paul Scherrer Institute | Schertler G.F.X.,Paul Scherrer Institute | Schertler G.F.X.,ETH Zurich
Current Opinion in Structural Biology | Year: 2013

Since the first high-resolution structure of the beta 2 adrenergic receptor (b2AR) in 2007, a growing number of G-protein-coupled receptor (GPCR) structures have been added to the repertoire, yielding significant progress in our understanding of the structural basis of their function. This has been achieved through interdisciplinary collaboration between scientists with various expertise and through the development of new methodologies, as well as by combining and optimizing existing techniques. © 2013 Elsevier Ltd.

One of the reasons for deadlock in global climate policy is countries' disagreement on how to share the mitigation burden. Normative theory suggests various fairness criteria for structuring burden sharing, most prominently, historical responsibility for emissions, economic capacity, and vulnerability to climate change. Governments have taken up these criteria in their rhetoric at UNFCCC negotiations. I examine whether normative criteria influence individual burden sharing preferences. This bottom-up perspective is important for two reasons. First, it is unknown if governments' fairness rhetoric matches citizens' actual preferences. Second, international climate agreements directly affect individuals through domestic policy measures (e.g. energy taxes), and therefore require domestic public support for successful implementation. I conducted two laboratory experiments where participants have to agree on how to share climate change mitigation costs in an ultimatum game. Treatment conditions include differences between proposer and responder in capacity, vulnerability (experiment 1), and historical emissions (experiment 2). Historical emissions are endogenously determined in a prior game. Capacity inequality strongly affects burden sharing, with richer players ending up paying more, and poorer players less. Vulnerability differences reduce the influence of fairness, leading to suggested cost distributions more unfavorable to vulnerable players. However, vulnerable responders still reject many "unfair" offers. Differences in historical responsibility result in cost distributions strongly correlated with players' relative contributions to climate change. The results suggest that more nuanced consideration of fairness criteria in burden sharing could make ambitious climate agreements more acceptable for reluctant countries and their citizens. © 2014 Springer Science+Business Media Dordrecht.
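The cost-sharing ultimatum game described above can be reduced to a minimal sketch: a proposer offers to carry some share of the mitigation cost, and the responder accepts only if that share meets a fairness threshold; otherwise both sides bear a (larger) damage cost. The shares, threshold, and damage multiplier below are hypothetical and do not reproduce the paper's experimental design.

```python
def ultimatum_outcome(total_cost, proposer_share, responder_min_share):
    """One round of a cost-sharing ultimatum game.

    The proposer offers to pay `proposer_share` of the total
    mitigation cost; the responder accepts only if the proposer
    carries at least `responder_min_share`. On rejection, nobody
    mitigates and both players bear an assumed damage cost.
    Returns (proposer_cost, responder_cost).
    """
    if proposer_share >= responder_min_share:
        return proposer_share * total_cost, (1 - proposer_share) * total_cost
    damage = 2.0 * total_cost  # assumed climate damage on failure
    return damage, damage

# A richer proposer expected to shoulder more of the burden:
accepted = ultimatum_outcome(100.0, proposer_share=0.7, responder_min_share=0.6)
rejected = ultimatum_outcome(100.0, proposer_share=0.3, responder_min_share=0.6)
```

The rejection branch captures why "unfair" offers are costly for both sides, which is what gives fairness thresholds bargaining power in the experiments.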

Angelikopoulos P.,ETH Zurich | Bock H.,Heriot - Watt University
Physical Chemistry Chemical Physics | Year: 2012

Self-assembled structures adsorbed on carbon nanotubes and other nanofibres offer a plethora of opportunities to endow them with new functions and to integrate them into devices and materials. At the same time, they are key to solving the greatest problem in carbon nanotube utilisation - debundling and individualisation. Success will inevitably require an understanding of the underlying structure-function relationship of the adsorbed surfactant layer. Computer simulations are ideally suited to developing this understanding, as they enable us to study the structure-function relationship in great detail. Combining the results from mesoscale and atomistic simulations, we begin to develop this understanding and derive a number of recommendations for optimal dispersion design. This journal is © the Owner Societies.

Marguerat S.,University College London | Schmidt A.,University of Basel | Codlin S.,University College London | Chen W.,Berlin Institute for Medical Systems Biology | And 3 more authors.
Cell | Year: 2012

Data on absolute molecule numbers will empower the modeling, understanding, and comparison of cellular functions and biological systems. We quantified transcriptomes and proteomes in fission yeast during cellular proliferation and quiescence. This rich resource provides the first comprehensive reference for all RNA and most protein concentrations in a eukaryote under two key physiological conditions. The integrated data set supports quantitative biology and affords unique insights into cell regulation. Although mRNAs are typically expressed in a narrow range above 1 copy/cell, most long, noncoding RNAs, except for a distinct subset, are tightly repressed below 1 copy/cell. Cell-cycle-regulated transcription tunes mRNA numbers to phase-specific requirements but can also bring about more switch-like expression. Proteins greatly exceed mRNAs in abundance and dynamic range, and concentrations are regulated to functional demands. Upon transition to quiescence, the proteome changes substantially, but, in stark contrast to mRNAs, proteins do not uniformly decrease but scale with cell volume. © 2012 Elsevier Inc.

Marcolli C.,ETH Zurich
Atmospheric Chemistry and Physics | Year: 2014

Heterogeneous ice nucleation is an important mechanism for the glaciation of mixed phase clouds and may also be relevant for cloud formation and dehydration at the cirrus cloud level. It is thought to proceed through different mechanisms, namely contact, condensation, immersion and deposition nucleation. Conceptually, deposition nucleation is the only pathway that does not involve liquid water, but occurs by direct water vapor deposition onto a surface. This study challenges this classical view by putting forward the hypothesis that what is called deposition nucleation is in fact pore condensation and freezing (PCF) occurring in voids and cavities that may form between aggregated primary particles and host water at relative humidity RHw < 100% because of the inverse Kelvin effect. Homogeneous ice nucleation is expected to occur below 235 K when at least one pore is filled with water. Ice nucleation in pores may also happen in immersion mode but with a lower probability because it requires at least one active site in a water-filled pore. Therefore a significant enhancement in ice nucleation efficiency is expected when temperature falls below 235 K. For a deposition nucleation process from water vapor, no discontinuous change in ice nucleation efficiency should occur at T = 235 K because no liquid water is involved in this process. Studies on freezing in confinement carried out on mesoporous silica materials such as SBA-15, SBA-16, MCM-41, zeolites and KIT have shown that homogeneous ice nucleation occurs abruptly at T = 230-235 K in pores with diameters (Dp) of 3.5-4 nm or larger but only gradually at T = 210-230 K in pores with Dp = 2.5-3.5 nm. Pore analysis of clay minerals shows that kaolinites exhibit pore structures with pore diameters Dp = 20-50 nm. The mesoporosity of illites and montmorillonites is characterized by pores with Dp = 2-5 nm. The number and size of pores is distinctly increased in acid-treated montmorillonites like K10. Water adsorption isotherms of MCM-41 show that pores with Dp = 3.5-4 nm fill with water at RHw = 56-60%, in accordance with an inverse Kelvin effect. Water in such pores should freeze homogeneously for T < 235 K even before relative humidity with respect to ice (RHi) reaches ice saturation. Ice crystal growth by water vapor deposition from the gas phase is therefore expected to set in as soon as RHi > 100%. Pores with Dp = 7.5 nm fill with water at RHi > 100% for T < 235 K and are likely to freeze homogeneously as soon as they are filled with water. Given the pore structure of clay minerals, PCF should be highly efficient for T < 235 K and may occur at T > 235 K in particles that exhibit active sites for immersion freezing within pores. Most ice nucleation studies on clay minerals and mineral dusts indeed show a strong increase in ice nucleation efficiency when temperature is decreased below 235 K, in accordance with PCF, and are not explicable by the classical view of deposition nucleation. PCF is probably also the prevailing ice nucleation mechanism below water saturation for glassy, soot, and volcanic ash aerosols. No case could be identified that gives clear evidence of ice nucleation by water vapor deposition onto a solid surface. © 2014 Author(s).
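The pore-filling humidities quoted for MCM-41 follow directly from the inverse Kelvin effect. A minimal Python sketch, assuming a cylindrical fully wetted pore and bulk-water surface tension and molar volume (the parameter values are illustrative defaults, not taken from the study):

```python
import math

R_GAS = 8.314  # gas constant, J mol^-1 K^-1

def pore_filling_rh(diameter_m, temp_k, gamma=0.072, v_molar=18e-6):
    """Kelvin equation for capillary condensation in a cylindrical pore:
    returns the relative humidity (0..1) at which the pore fills with water.
    gamma: surface tension (J m^-2); v_molar: molar volume (m^3 mol^-1).
    Both are assumed bulk-water values, for illustration only."""
    return math.exp(-4.0 * gamma * v_molar / (diameter_m * R_GAS * temp_k))

# A 3.8 nm pore near room temperature fills at roughly 58% RH,
# consistent with the 56-60% range quoted above for MCM-41.
print(f"{pore_filling_rh(3.8e-9, 298.0):.0%}")
```

With these assumed bulk-water constants, the computed filling humidity for a 3.5-4 nm pore lands inside the measured 56-60% window, which is the sense in which the adsorption isotherms "accord with" the inverse Kelvin effect.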

Novae and coronae are large volcanotectonic features on Venus, with a contentious and possibly non-unique origin and an enigmatic relationship. Their formation is most commonly explained by flexure and fracturing of the strong rigid Venus lithosphere atop mantle upwelling and/or downwelling. Here we present a new 3D high-resolution thermomechanical model of mantle plume/diapir impingement into a warm and thin lithosphere with a Venus-like surface temperature. Numerical results suggest that nova-like and corona-like structures can result from magma-assisted convection of weak ductile crust, induced by decompression melting of the hot rising mantle plume. During the initial stage, a nova forms by stellate fracturing of a topographic rise forming atop the growing crustal convection cell. A few million years later, the nova can convert into a corona by inward-dipping concentric fracturing of the nova rise margins and subsequent outward thrusting of partially molten crustal rocks over the surface. An outer annulus of concentric normal faults forms in the outer rise region of the downbending brittle upper crust, whereas an inner annulus of concentric thrust faults forms in front of the outward-thrusting crustal wedge. A trench-like depression forms between these two annuli. Resembling retreating subduction, the rudimentary concentric upper-crustal slab warms up rapidly and recycles into the convection cell. The convection cell remains active for up to 15 million years, fueled by heat and magma from the plume. Predicted surface topography and fracturing patterns agree with some small- to moderate-size novae and coronae on Venus. © 2014 Elsevier B.V.

Leone G.,ETH Zurich
Journal of Volcanology and Geothermal Research | Year: 2014

The role of lava tube networks and lava channels is reassessed as the primordial stage of the volcano-erosional processes that formed the Labyrinthus Noctis-Valles Marineris system, as opposed to a tectonic origin. The combined use of CTX, CRISM, and HiRISE imagery and MOLA profiles has provided valuable insight into the evolution of pit chains, first into fossae and later into chasmata, due to mass wasting processes caused by the erosional effect of the lava flows that draped Valles Marineris and other outflow channels. Although a quantitative evaluation of eruption rates is difficult even with digital terrain models (DTMs) because of the mixing between new flows and paleoflows, a comparison with Elysium and other Tharsis outflow channels suggests that the availability of lava supply is correlated with their widths. The images of ubiquitous lava flows rather than sporadic light-toned deposits strengthen the role of lava over that of water in the erosional processes that formed Labyrinthus Noctis and carved Valles Marineris, like many other outflow channels on Mars. The erosional evolution of the outflow channels shows an increasing trend of age and a decreasing trend of depth from the sources on Tharsis to the mouths at Chryse Planitia. This finding, coupled with the observation of lava flows mantling Chryse Planitia, may have profound implications for the water inventories thought to have filled the lowlands with an ocean. © 2014 Elsevier B.V.

Kress G.,ETH Zurich
Journal of Composite Materials | Year: 2012

The second world-wide failure exercise is an international activity, organised by Kaddour and Hinton, aimed at better understanding the limitations of existing failure criteria in predicting failure under triaxial stressing. The present work responds to an invitation to take part and applies Hashin's failure criteria to the 12 test cases defined by the organizers of the second world-wide failure exercise. The nonlinear-analysis methods used to perform the required simulations are explained and the predictions are presented. Observed problems are discussed. © 2012 The Author(s).

Alternative conditions for establishing dwell-time stability properties of linear switched systems are considered. Unlike the hybrid conditions derived in Geromel and Colaneri (2006), the conditions considered here are affine in the system matrices, thereby allowing for uncertain switched systems with time-varying uncertainties. Moreover, the small number of decision variables permits easy derivation of convex stabilization conditions using a specific class of state-feedback control laws. The resulting conditions are enforced using sum of squares programming, which is shown to be numerically less complex than approaches based on piecewise linear functions or looped-functionals previously considered in the literature. The sum of squares conditions are also proven to (1) approximate arbitrarily well the conditions of Geromel and Colaneri (2006); and (2) be invariant with respect to time-scaling, emphasizing that the complexity of the approach does not depend on the size of the dwell-time. Several comparative examples illustrate the efficiency of the approach. © 2015 Elsevier B.V. All rights reserved.
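The dwell-time notion is easy to probe numerically in the periodic case: switching between modes A1 and A2 every T seconds is stable exactly when the spectral radius of the monodromy matrix exp(A2 T) exp(A1 T) is below one. The sketch below uses a classic Jordan-block pair of individually stable matrices (an illustrative textbook-style example, not taken from this paper) whose matrix exponentials have closed forms, so no numerical integrator is needed:

```python
import math

def spectral_radius_2x2(m):
    """Largest eigenvalue magnitude of a 2x2 matrix [[a, b], [c, d]]."""
    (a, b), (c, d) = m
    tr, det = a + d, a * d - b * c
    disc = tr * tr / 4.0 - det
    if disc >= 0.0:  # real eigenvalues
        root = math.sqrt(disc)
        return max(abs(tr / 2.0 + root), abs(tr / 2.0 - root))
    return math.sqrt(det)  # complex pair: |lambda| = sqrt(det)

def monodromy_rho(T, k=10.0):
    """Spectral radius of exp(A2*T) exp(A1*T) for the pair
    A1 = [[-1, k], [0, -1]], A2 = [[-1, 0], [k, -1]].
    For these Jordan blocks, exp(A1*T) = e^-T [[1, kT], [0, 1]] and
    exp(A2*T) = e^-T [[1, 0], [kT, 1]], so the product is exact."""
    s = math.exp(-2.0 * T)
    kt = k * T
    m = [[s, s * kt], [s * kt, s * (1.0 + kt * kt)]]
    return spectral_radius_2x2(m)

for T in (0.5, 2.0, 4.0):
    print(T, monodromy_rho(T))
```

For this pair, periodic switching is unstable at short dwell times (rho > 1 at T = 0.5) even though each mode is stable on its own, and becomes stable once T exceeds roughly 3.5, which is the minimum-dwell-time phenomenon the conditions above certify without gridding over T.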

Escobedo C.,ETH Zurich | Brolo A.G.,University of Victoria | Gordon R.,University of Victoria | Sinton D.,Kings College
Nano Letters | Year: 2012

The integration of fluidics and optics, as in flow-through nanohole arrays, has enabled increased transport of analytes to sensing surfaces. Limits of detection, however, are fundamentally limited by local analyte concentration. We employ the nanohole array geometry and the conducting nature of the film to actively concentrate analyte within the sensor. We achieve 180-fold enrichment of a dye, and 100-fold enrichment and simultaneous sensing of a protein in less than 1 min. The method presents opportunities for an order of magnitude increase in sensing speed and 2 orders of magnitude improvement in limit of detection. © 2012 American Chemical Society.

Eichmair M.,ETH Zurich
Communications in Mathematical Physics | Year: 2013

We extend the Jang equation proof of the positive energy theorem due to Schoen and Yau (Commun Math Phys 79(2):231-260, 1981) from dimension n = 3 to dimensions 3 ≤ n < 8. This requires us to address several technical difficulties that are not present when n = 3. The regularity and decay assumptions for the initial data sets to which our argument applies are weaker than those in Schoen and Yau (Commun Math Phys 79(2):231-260, 1981). © 2013 Springer-Verlag Berlin Heidelberg.

Gebauer H.,ETH Zurich
Theoretical Computer Science | Year: 2011

We show that each 4-regular n-vertex graph contains at most O(18^(n/5)) ≤ O(1.783^n) Hamilton cycles, which improves a previous bound by Sharir and Welzl. From the other side we exhibit a family of graphs with 48^(n/8) ≥ 1.622^n Hamilton cycles per graph. Moreover, we present an algorithm which finds the minimum weight Hamilton cycle of a given 4-regular graph in time √3^n · poly(n) = O(1.733^n), improving on Eppstein's previous bound. This algorithm can be modified to compute the number of Hamilton cycles in the same time bound and to enumerate all Hamilton cycles in time (√3^n + hc(G)) · poly(n), with hc(G) denoting the number of Hamilton cycles of the given graph G. So our upper bound of O(1.783^n) for the number of Hamilton cycles serves also as a time bound for enumeration. © 2011 Elsevier B.V. All rights reserved.
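On small instances, the quantity hc(G) in these bounds can be checked by brute force. The enumerator below is exponentially slower than the algorithms discussed and is meant only to pin down the counting convention: it fixes a start vertex and halves the count to account for the two traversal directions of each undirected cycle.

```python
from itertools import permutations

def count_hamilton_cycles(adj):
    """Count undirected Hamilton cycles of a graph given as a dict
    mapping each vertex to the set of its neighbours."""
    n = len(adj)
    count = 0
    for perm in permutations(range(1, n)):  # fix vertex 0 as the start
        cycle = (0,) + perm
        if (all(cycle[i + 1] in adj[cycle[i]] for i in range(n - 1))
                and 0 in adj[cycle[-1]]):
            count += 1
    return count // 2  # each cycle was counted once per direction

# K5 is 4-regular; a complete graph on n vertices has (n-1)!/2 Hamilton
# cycles, so K5 has 4!/2 = 12.
k5 = {v: {u for u in range(5) if u != v} for v in range(5)}
print(count_hamilton_cycles(k5))  # 12
```

Note 12 is comfortably below the 18^(n/5) bound at n = 5 (which gives 18), as expected since the bound holds for every 4-regular graph.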

Benenson Y.,ETH Zurich
Nature Reviews Genetics | Year: 2012

The task of information processing, or computation, can be performed by natural and man-made 'devices'. Man-made computers are made from silicon chips, whereas natural 'computers', such as the brain, use cells and molecules. Computation also occurs on a much smaller scale in regulatory and signalling pathways in individual cells and even within single biomolecules. Indeed, much of what we recognize as life results from the remarkable capacity of biological building blocks to compute in highly sophisticated ways. Rational design and engineering of biological computing systems can greatly enhance our ability to study and to control biological systems. Potential applications include tissue engineering and regeneration and medical treatments. This Review introduces key concepts and discusses recent progress that has been made in biomolecular computing. © 2012 Macmillan Publishers Limited. All rights reserved.

The quantum chemistry of conformation equilibrium is a field where great accuracy (better than 100 cal mol^-1) is needed because the energy difference between molecular conformers rarely exceeds 1000-3000 cal mol^-1. The conformation equilibrium of straight-chain (normal) alkanes is of particular interest and importance for modern chemistry. In this paper, an extra error source for high-quality ab initio (first principles) and DFT calculations of the conformation equilibrium of normal alkanes, namely the intramolecular basis set superposition error (BSSE), is discussed. In contrast to out-of-plane vibrations in benzene molecules, diffuse functions on carbon and hydrogen atoms were found to greatly reduce the relative BSSE of n-alkanes. The corrections due to the intramolecular BSSE were found to be almost identical for the MP2, MP4, and CCSD(T) levels of theory. Their cancelation is expected when CCSD(T)/CBS (CBS, complete basis set) energies are evaluated by addition schemes. For larger normal alkanes (N > 12), the magnitude of the BSSE correction was found to be up to three times larger than the relative stability of the conformer; in this case, the basis set superposition error led to a two orders of magnitude difference in conformer abundance. No error cancelation due to the basis set superposition was found. A comparison with amino acid, peptide, and protein data was provided. © 2011 Taylor & Francis.
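The counterpoise scheme behind such BSSE estimates is simple to state: each fragment is recomputed in the full basis (its partner's basis functions kept as "ghost" functions on empty centers), and the resulting energy lowering is the superposition error. A sketch with made-up fragment energies (the hartree values are illustrative placeholders, not data from this paper):

```python
HARTREE_TO_KCAL = 627.5095  # kcal/mol per hartree

def counterpoise_bsse(e_own_basis, e_ghost_basis):
    """Boys-Bernardi counterpoise estimate: BSSE = sum over fragments of
    E(fragment, own basis) - E(fragment, full basis with ghost functions).
    The ghost-basis energy is variationally lower, so the result is
    positive. Energies in hartree."""
    return sum(o - g for o, g in zip(e_own_basis, e_ghost_basis))

# Hypothetical two-fragment example: each fragment drops slightly in
# energy when it can borrow the other fragment's basis functions.
bsse_h = counterpoise_bsse([-40.5000, -40.5000], [-40.5012, -40.5008])
print(bsse_h * HARTREE_TO_KCAL)  # ~1.25 kcal/mol
```

A correction of this size (~1.25 kcal/mol, i.e. ~1250 cal/mol) is comparable to typical conformer energy gaps, which is exactly why the abstract flags intramolecular BSSE as decisive for alkane conformer abundances.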

Gerya T.V.,ETH Zurich
Physics of the Earth and Planetary Interiors | Year: 2013

This work employs high-resolution 3D thermomechanical numerical models of the incipient oceanic spreading to investigate nucleation and long-term evolution of ridge-transform spreading patterns. The Eulerian-Lagrangian visco-plastic model allows for large strains and accounts for plate cooling by both heat conduction and hydrothermal circulation as well as for partial melting of the asthenosphere and oceanic crust growth by magmatic accretion. According to the numerical experiments, the oceanic spreading pattern depends strongly on the initial offset of spreading centers and the magnitude of fracture healing rate. Three different characteristic long-term spreading modes are obtained: (1) ridge-transform patterns, (2) single ridges and (3) spreading centers with an intermediate plate. Ridge-transform oceanic spreading patterns form gradually from moderate initial offsets of 40-60 km and become fully established several million years after the plate breakup. Moreover, it is demonstrated on the basis of simple analyses that the ridge-transform system is a long-term plate growth pattern that is generally different from an initial plate rifting pattern. Geometry of the ridge-transform system is governed by geometrical requirements (180° rotational symmetry for open space occupation) for simultaneous accretion and displacement of new plate material within two offset spreading centers connected by a sustaining rheologically weak transform fault. According to these requirements, the characteristic spreading-parallel orientation of oceanic transform faults is the only thermomechanically consistent steady state orientation. Results of numerical experiments compare well with both incipient and mature ridge-transform systems observed in nature. © 2012 Elsevier B.V.

Stecher B.,Max Von Pettenkofer Institute | Hardt W.-D.,ETH Zurich
Current Opinion in Microbiology | Year: 2011

The intestinal microbiota can protect efficiently against colonization by many enteric pathogens ('colonization resistance', CR). This phenomenon has been known for decades, but the mechanistic basis of CR is incompletely defined. At least three mechanisms seem to contribute: direct inhibition of pathogen growth by microbiota-derived substances, nutrient depletion by microbiota growth, and microbiota-induced stimulation of innate and adaptive immune responses. In spite of CR, intestinal infections are well known to occur. In these cases, the multi-faceted interactions between the microbiota, the host and the pathogen are shifted in favor of the pathogen. We discuss recent progress in deciphering the underlying molecular mechanisms in health and disease. © 2010 Elsevier Ltd.

Conrad C.,European Molecular Biology Laboratory Heidelberg | Gerlich D.W.,ETH Zurich
Journal of Cell Biology | Year: 2010

Fluorescence microscopy is one of the most powerful tools to investigate complex cellular processes such as cell division, cell motility, or intracellular trafficking. The availability of RNA interference (RNAi) technology and automated microscopy has opened the possibility to perform cellular imaging in functional genomics and other large-scale applications. Although imaging often dramatically increases the content of a screening assay, it poses new challenges to achieve accurate quantitative annotation and therefore needs to be carefully adjusted to the specific needs of individual screening applications. In this review, we discuss principles of assay design, large-scale RNAi, microscope automation, and computational data analysis. We highlight strategies for imaging-based RNAi screening adapted to different library and assay designs. © 2010 Conrad and Gerlich.

Chen P.,ETH Zurich
Accounts of Chemical Research | Year: 2016

Conspectus: The development of a chemoselective catalyst for the sequence-selective copolymerization of two cycloolefins by ring-opening metathesis polymerization is described, starting with the mechanistic work that established the structure of the key metallacyclobutane intermediate. Experimental and computational investigations converged to a conclusion that the lowest energy metallacyclobutane intermediate in the ruthenium carbene-catalyzed metathesis reaction had the four-membered ring trans to the phosphine or NHC ligand. The trans-metallacyclobutane structure, for the case of a degenerate metathesis reaction catalyzed by a Grubbs first-generation complex, necessitated a rotation of the 3-fold symmetric tricyclohexylphosphine ligand, with respect to the 2-fold symmetric metallacyclobutane substructure. The degeneracy could be lifted by constraining the rotation. Lifting the degeneracy created the possibility of chemoselectivity. This mechanistic work led to a concept for the tick-tock catalyst for a chemoselective, alternating copolymerization of cyclooctene and norbornene from a mixture of the two monomers. The design concept could be post facto elaborated in terms of stereochemistry and topological theory, both viewpoints providing deeper insight into the design of selectivity into the catalytic reaction. The iterative interaction of theory and experiment provided the basis for the rational design and optimization of a new selectivity into an existing catalytic system with decidedly modest structural modifications of the original carbene complex. © 2016 American Chemical Society.

This paper makes both a theoretical and an empirical contribution to a better understanding of how specific forms of network governance play a crucial role in enhancing sustainable development in rural areas. Drawing on the literature on social capital and social networks, I argue that a region has to achieve a certain level of cohesion in the network structure among actors from different societal sectors and governmental levels to strengthen rural sustainable development. However, to sustain positive regional development in the longer term, network structures also need to guarantee fragmentation and flexibility by including actors with varying views and interests. Empirically, the paper looks at the new policy of regional nature parks in Switzerland. The policy provides a good test case for the theoretical argument, because it aims at taking a cooperative and network-oriented approach to enhance rural sustainable development. Two case studies demonstrate that regional park projects have in fact strengthened the vertical cohesion between government levels. Remaining fragmentation at the local level could hinder the further establishment of a park project in the region. In the longer term, however, it could guarantee the necessary flexibility to adapt to new ecological and socioeconomic developments that cannot be directly influenced by a region itself. © 2010 by the author(s).

Colbeck R.,Perimeter Institute for Theoretical Physics | Renner R.,ETH Zurich
Nature Communications | Year: 2011

According to quantum theory, measurements generate random outcomes, in stark contrast with classical mechanics. This raises the question of whether there could exist an extension of the theory that removes this indeterminism, as suspected by Einstein, Podolsky and Rosen. Although this has been shown to be impossible, existing results do not imply that the current theory is maximally informative. Here we ask the more general question of whether any improved predictions can be achieved by any extension of quantum theory. Under the assumption that measurements can be chosen freely, we answer this question in the negative: no extension of quantum theory can give more information about the outcomes of future measurements than quantum theory itself. Our result has significance for the foundations of quantum mechanics, as well as applications to tasks that exploit the inherent randomness in quantum theory, such as quantum cryptography. © 2011 Macmillan Publishers Limited. All rights reserved.

Duhr C.,ETH Zurich
Journal of High Energy Physics | Year: 2012

We show how the Hopf algebra structure of multiple polylogarithms can be used to simplify complicated expressions for multi-loop amplitudes in perturbative quantum field theory and we argue that, unlike the recently popularized symbol-based approach, the coproduct incorporates information about the ζ values. We illustrate our approach by rewriting the two-loop helicity amplitudes for a Higgs boson plus three gluons in a simplified and compact form involving only classical polylogarithms. © SISSA 2012.

Benenson Y.,ETH Zurich
Current Opinion in Chemical Biology | Year: 2012

One of the long-term goals in synthetic biology is the construction of large-scale gene networks to control and manipulate cells. Such networks often tweak natural regulatory mechanisms, or 'switches', in order to achieve the desired function. Regulatory mechanisms that involve RNA building blocks such as messenger RNA, microRNA and riboswitches have become increasingly prominent in this regard. Recent achievements include prototype mRNA sensors, logic circuits that respond to small molecule cues to affect cell fate, and cell-state classifier networks that identify physiological states using multiple microRNA inputs. This Review describes these and other results in RNA-based synthetic biology. © 2012 Elsevier Ltd.

Öttinger H.C.,ETH Zurich
Physical Review A - Atomic, Molecular, and Optical Physics | Year: 2012

We propose a piecewise deterministic Markovian jump process in Hilbert space such that the covariance matrix of this stochastic process solves the thermodynamic quantum master equation. The proposed stochastic process is particularly simple because the normalization of the vectors in Hilbert space is preserved only on average. As a consequence of the nonlinearity of the thermodynamic master equation, the construction of stochastic trajectories involves the density matrix as a running ensemble average. We identify a principle of detailed balance and a fluctuation-dissipation relation for our Markovian jump process. © 2012 American Physical Society.

Estimates of the carbon leakage resulting from sub-global climate policies tend to be lower when using economy-wide general equilibrium models than what technology-specific and bottom-up models suggest. In order to test whether this difference is due to excessive sectoral aggregation, I exploit disaggregated data and estimate unobserved values to create a dataset with rich industrial sector detail. The bias caused by sectoral aggregation is estimated by calibrating a computable general equilibrium model to this dataset and comparing results with those generated from more aggregated data. A stylized unilateral carbon pricing policy is simulated. Results show that aggregated calibrations overestimate industrial output loss and underestimate the increase in the CO2 embodied in imports. The efficiency of border carbon adjustments at reducing leakage is also underestimated. However, I find that general equilibrium estimates of carbon prices and economy-wide leakage rates are mostly unaffected by the degree of industrial aggregation. © 2012 Elsevier B.V.
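The leakage rate such models report has a standard definition: the emissions increase in unconstrained regions expressed as a share of the emissions reduction achieved in the constrained region. A trivial sketch of that arithmetic (the tonnage figures are illustrative, not output of the model in the abstract):

```python
def leakage_rate(delta_foreign_mt, delta_domestic_mt):
    """Carbon leakage rate: foreign emissions increase divided by the
    domestic emissions reduction. delta_domestic_mt is negative for a
    policy that cuts emissions at home; both values in Mt CO2."""
    if delta_domestic_mt >= 0:
        raise ValueError("domestic emissions must fall for leakage to be defined")
    return delta_foreign_mt / -delta_domestic_mt

# Illustrative: home emissions fall by 100 Mt while foreign emissions
# rise by 15 Mt, giving a 15% leakage rate.
print(f"{leakage_rate(15.0, -100.0):.0%}")
```

The abstract's finding can be restated in these terms: disaggregation changes the sectoral composition of the numerator (which sectors leak) much more than it changes the economy-wide ratio itself.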

Stability analysis and control of linear impulsive systems is addressed in a hybrid framework, through the use of continuous-time time-varying discontinuous Lyapunov functions. Necessary and sufficient conditions for stability of impulsive systems with periodic impulses are first provided in order to set up the main ideas. Extensions to the stability of aperiodic systems under minimum, maximum and ranged dwell-times are then derived. By exploiting further the particular structure of the stability conditions, the results are non-conservatively extended to quadratic stability analysis of linear uncertain impulsive systems. These stability criteria are, in turn, losslessly extended to stabilization using a particular, yet broad enough, class of state-feedback controllers, thus providing a convex solution to the open problem of robust dwell-time stabilization of impulsive systems using hybrid stability criteria. Relying finally on the representability of sampled-data systems as impulsive systems, the problems of robust stability analysis and robust stabilization of periodic and aperiodic uncertain sampled-data systems are straightforwardly solved using the same ideas. Several examples are discussed in order to show the effectiveness and reduced complexity of the proposed approach. © 2013 Elsevier Ltd. All rights reserved.

Alexe B.,ETH Zurich | Deselaers T.,Google | Ferrari V.,University of Edinburgh
IEEE Transactions on Pattern Analysis and Machine Intelligence | Year: 2012

We present a generic objectness measure, quantifying how likely it is for an image window to contain an object of any class. We explicitly train it to distinguish objects with a well-defined boundary in space, such as cows and telephones, from amorphous background elements, such as grass and road. The measure combines in a Bayesian framework several image cues measuring characteristics of objects, such as appearing different from their surroundings and having a closed boundary. These include an innovative cue to measure the closed boundary characteristic. In experiments on the challenging PASCAL VOC 07 dataset, we show this new cue to outperform a state-of-the-art saliency measure, and the combined objectness measure to perform better than any cue alone. We also compare to interest point operators, a HOG detector, and three recent works aiming at automatic object segmentation. Finally, we present two applications of objectness. In the first, we sample a small number of windows according to their objectness probability and give an algorithm to employ them as location priors for modern class-specific object detectors. As we show experimentally, this greatly reduces the number of windows evaluated by the expensive class-specific model. In the second application, we use objectness as a complementary score in addition to the class-specific model, which leads to fewer false positives. As shown in several recent papers, objectness can act as a valuable focus of attention mechanism in many other applications operating on image windows, including weakly supervised learning of object categories, unsupervised pixelwise segmentation, and object tracking in video. Computing objectness is very efficient and takes only about 4 sec. per image. © 2012 IEEE.
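The general shape of Bayesian cue combination is worth making concrete: under a naive independence assumption, per-cue likelihoods for "object" and "background" multiply, and the posterior is obtained by normalizing. The sketch below illustrates only this combination step, with hypothetical likelihood values; it is not the paper's trained model and uses none of its actual cue definitions.

```python
import math

def combine_cues(lik_obj, lik_bg, prior_obj=0.5):
    """Naive-Bayes combination of independent cue likelihoods into a
    posterior objectness score in (0, 1). lik_obj[i] and lik_bg[i] are
    cue i's likelihood under the object and background hypotheses."""
    log_o = math.log(prior_obj) + sum(map(math.log, lik_obj))
    log_b = math.log(1.0 - prior_obj) + sum(map(math.log, lik_bg))
    m = max(log_o, log_b)  # subtract the max for numerical stability
    po, pb = math.exp(log_o - m), math.exp(log_b - m)
    return po / (po + pb)

# Three hypothetical cues, each only weakly favoring "object": combined,
# they yield a much more confident score than any single cue alone.
print(combine_cues([0.8, 0.7, 0.9], [0.3, 0.4, 0.2]))  # ~0.95
```

This compounding of weak individual cues into a strong combined score is the effect behind the abstract's observation that the combined measure outperforms any cue alone.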

Zimmermann M.B.,ETH Zurich
Seminars in Cell and Developmental Biology | Year: 2011

Iodine is an essential component of the hormones produced by the thyroid gland. Thyroid hormones, and therefore iodine, are essential for mammalian life. Iodine deficiency is a major public health problem; globally, it is estimated that two billion individuals have an insufficient iodine intake. Although goiter is the most visible sequela of iodine deficiency, the major impact of hypothyroidism due to iodine deficiency is impaired neurodevelopment, particularly early in life. In the fetal brain, inadequate thyroid hormone impairs myelination, cell migration, differentiation and maturation. Moderate-to-severe iodine deficiency during pregnancy increases rates of spontaneous abortion, reduces birth weight, and increases infant mortality. Offspring of deficient mothers are at high risk for cognitive disability, with cretinism being the most severe manifestation. It remains unclear if development of the offspring is affected by mild maternal iodine deficiency. Moderate-to-severe iodine deficiency during childhood reduces somatic growth. Correction of mild-to-moderate iodine deficiency in primary school aged children improves cognitive and motor function. Iodine prophylaxis of deficient populations with periodic monitoring is an extremely cost effective approach to reduce the substantial adverse effects of iodine deficiency throughout the life cycle. © 2011 Elsevier Ltd.

Grote G.,ETH Zurich
Safety Science | Year: 2012

In view of safety management being introduced in more and more industries, the aim of this paper is to discuss what different high-risk industries can learn from each other and what limits for generalizing safety management methods within and across industries exist. After presenting core components of safety management, three attributes crucial to any organization's functioning are described, which also affect the way safety management systems should be designed, run, and assessed. These attributes are (1) the kinds of safety to be managed, (2) the general approach to managing uncertainty as a hallmark of organizations that manage safety, and (3) the regulatory regime within which safety is managed. By discussing safety management in the context of these three attributes, contingencies are outlined that can help decision-makers in companies to tailor safety management to their own situation and support regulators in drawing up and evaluating safety management requirements for different industries while also promoting learning between different high-risk domains. Standards and procedures, safety training, incident reporting and investigation, and safety culture are taken as examples to illustrate why and how different aspects of organizational functioning should be taken into account when designing and evaluating safety management systems or elements thereof. © 2011 Elsevier Ltd.

Olde Venterink H.,ETH Zurich
Plant and Soil | Year: 2011

It is known that the number of limiting nutrients may affect the species richness of plant communities, but it is unclear whether the type of nutrient limitation is also important. I place the results from a study in Patagonia (elsewhere in this issue) in the context of the number and types of nutrients that are limiting. I present four mechanisms through which N or P limitation may potentially influence species richness. These mechanisms are related to: (i) the number of forms in which P or N are present in soil and the plant traits needed to acquire them, (ii) the mechanisms and traits that control species competition and coexistence under N or P limitation, (iii) the regional species pools of plants capable of growing under N- and P-limited conditions, and (iv) the interaction between the type of nutrient limitation and community productivity. It appears likely that P limitation can favour a higher species richness than N limitation, at least in a variety of low-productivity plant communities, but evidence to support this conclusion is so far lacking. The four mechanisms proposed here offer a framework for exploring whether the type of nutrient limitation per se, or an interaction with productivity, is a potential driver for variation in species diversity. © 2011 Springer Science+Business Media B.V.

Muller D.J.,ETH Zurich | Dufrene Y.F.,Catholic University of Louvain
Trends in Cell Biology | Year: 2011

Atomic force microscopy (AFM) techniques provide a versatile platform for imaging and manipulating living cells to single-molecule resolution, thereby enabling us to address pertinent questions in key areas of cell biology, including cell adhesion and signalling, embryonic and tissue development, cell division and shape, and microbial pathogenesis. In this review, we describe the principles of AFM, and survey recent breakthroughs made in AFM-based cell nanoscopy, showing how the technology has increased our molecular understanding of the organization, mechanics, interactions and processes of the cell surface. We also discuss the advantages and limitations of AFM techniques, and the challenges remaining to be addressed in future research. © 2011 Elsevier Ltd.

Picky eating is a relatively common problem during childhood, and parents lack clear strategies with which to decrease picky eating. This study examined whether increasing eating enjoyment and cooking enjoyment might give opportunities to decrease picky eating. Parents (n = 305), mainly mothers with children between 6 and 12 years of age (53.8% boys; 46.2% girls), completed a questionnaire on pressure and restriction, eating enjoyment, cooking enjoyment, and picky eating. Path analyses were performed to examine the mediating role of eating enjoyment. The final model provided a good fit to the data and explained 33% of the variance in picky eating. A strong inverse association between eating enjoyment and picky eating was found (β = -.44). Significant direct effects were found between cooking enjoyment and picky eating (β = -.16) and restriction and picky eating (β = .18). Eating enjoyment partly mediated the association between cooking enjoyment and picky eating. Results showed pressure had only an indirect effect on picky eating through eating enjoyment. Eating enjoyment serves as an important and central factor in children's picky-eating behavior. Highly controlling practices might create a negative environment around food and healthy eating and so decrease eating enjoyment and increase picky eating. © 2012 Elsevier Ltd.
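The path-model arithmetic behind "partly mediated" can be made explicit: with standardized paths a (X to mediator), b (mediator to Y) and c' (direct X to Y), the indirect effect is a*b and the total effect is c' + a*b. In the sketch below, b and c' are the betas reported in the abstract, while the a path (cooking enjoyment to eating enjoyment) is a hypothetical placeholder, since the abstract does not report it:

```python
def mediation_effects(a, b, c_prime):
    """Standardized path model X -> M -> Y with a direct path X -> Y:
    returns (indirect effect a*b, total effect c' + a*b)."""
    indirect = a * b
    return indirect, c_prime + indirect

# b = -.44 (eating enjoyment -> picky eating) and c' = -.16 (cooking
# enjoyment -> picky eating, direct) are from the abstract; a = .30 is
# an assumed illustrative value for cooking -> eating enjoyment.
indirect, total = mediation_effects(0.30, -0.44, -0.16)
print(indirect, total)  # ~ -0.132 and ~ -0.292
```

Partial mediation corresponds to both the indirect effect and the direct path c' being nonzero, which is what the reported coefficients show.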

Wagner S.M.,ETH Zurich
IEEE Transactions on Engineering Management | Year: 2010

As firms concentrate on their core competencies, they become more dependent on suppliers who possess technological competencies and manufacturing capacities that the firms themselves do not have. As a consequence, firms must maintain a network of capable and reliable suppliers. Supplier development activities can often support suppliers' improvement efforts. This research shows that indirect supplier development improves suppliers' product and delivery performance as well as supplier capabilities, and that direct supplier development strongly improves supplier capabilities. However, supplier development efforts are less effective if indirect and direct supplier development are applied simultaneously. Therefore, at any given time, firms should engage in either indirect or direct supplier development, not in both. The study contends that supplier development, if conducted properly, can be a powerful supplier relationship management practice. © 2006 IEEE.

Brunner T.A.,ETH Zurich
Appetite | Year: 2010

In a standard modeling experiment, a naïve participant eats in the presence of an experimental confederate who has been instructed to eat a lot or a little. Results from such experiments show that people eat more when eating companions eat more and less when eating companions eat less. This modeling effect has been shown to be highly robust. In the current report, two studies demonstrate that the effect is moderated when participants are exposed to weight-related cues. In Study 1, a body-weight scale was present in the experimental room; in Study 2, a weight-related verbal statement was uttered. Results of both chocolate-tasting studies show that without the weight-related cues, participants consumed more chocolate when the confederate ate a large quantity than when she ate little. However, in the sessions including the weight-related cues, participants ate little chocolate regardless of how much the confederate consumed. Weight-related cues might function as primes that elicit cognitions about body weight, and these cognitions might, unbeknownst to the individual, render him or her more vigilant concerning food intake, thereby decreasing consumption. © 2010 Elsevier Ltd.

Brunner T.A.,ETH Zurich
Appetite | Year: 2012

Dyads composed of unacquainted females (n = 82) watched a cartoon while consuming salty aperitif snacks. The Affective Communication Test was used to measure nonverbal expressiveness. Intraclass correlation coefficients were computed to analyze the extent to which participants within dyads matched each other's food intake. Food intake matched highly for dyads with two expressive individuals and moderately for dyads with one expressive participant. For dyads with two unexpressive participants, there was no evidence of matching behavior. Highly expressive people seem to be able to synchronize with others and thereby allow for close matching. This is the first study to show an influence of personality on matching consumption behavior. © 2011 Elsevier Ltd.
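For exchangeable dyads such as these, a pairwise intraclass correlation can be computed by the double-entry method: each dyad contributes both orderings of its pair, and a Pearson correlation is taken over the doubled data. A minimal sketch with hypothetical intake values (not data from the study):

```python
from statistics import mean

def dyad_icc(dyads):
    """Pairwise intraclass correlation for exchangeable dyads, computed
    as the Pearson correlation of the double-entered data: each dyad
    (x, y) contributes both (x, y) and (y, x)."""
    xs, ys = [], []
    for x, y in dyads:
        xs += [x, y]
        ys += [y, x]
    m = mean(xs)  # xs and ys hold the same values, so means coincide
    cov = sum((a - m) * (b - m) for a, b in zip(xs, ys))
    var = sum((a - m) ** 2 for a in xs)
    return cov / var

# Hypothetical snack intake (grams) for four well-matched dyads:
matched = [(20, 22), (35, 33), (50, 52), (15, 14)]
print(round(dyad_icc(matched), 2))  # close to 1: strong matching
```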

Slack E.,ETH Zurich | Balmer M.L.,University of Basel | Macpherson A.J.,University of Bern
Immunological Reviews | Year: 2014

Mutualism with our intestinal microbiota is a prerequisite for healthy existence. This requires physical separation of the majority of the microbiota from the host (by secreted antimicrobials, mucus, and the intestinal epithelium) and active immune control of the low numbers of microbes that overcome these physical and chemical barriers, even in healthy individuals. In this review, we address how B-cell responses to members of the intestinal microbiota form a robust network with mucus, epithelial integrity, follicular helper T cells, innate immunity, and gut-associated lymphoid tissues to maintain host-microbiota mutualism. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

Banerjee I.,ETH Zurich
European Journal of Epidemiology | Year: 2010

Genetic variants of paraoxonase 1 (PON1) have been implicated in stroke susceptibility in several case-control association studies. However, the studies have reported apparently conflicting results, making precise assessment of the disease risk associated with the variants difficult. A meta-analysis was therefore conducted of the studies that examined the association between two common polymorphisms (L55M and Q192R) in the coding region of the PON1 gene and the risk of stroke. Altogether, 10 studies on the L55M polymorphism and 11 studies on the Q192R polymorphism were included in this meta-analysis. The results showed that, although there was no significant association of the 55L allele with stroke [random-effects OR = 1.09, 95% CI (0.93, 1.27), P = 0.29], the 192R allele conferred a significant risk of stroke in the overall study population [random-effects OR = 1.25, 95% CI (1.07, 1.46), P = 0.006]. The same pattern of results as in the allele contrasts was obtained for the homozygote contrasts and for the dominant, recessive, and additive models. Subgroup analyses by stroke type, patient age, and ethnicity revealed no association of the 55L allele with stroke, whereas the association of the 192R allele remained significant in the groups comprising ischemic stroke patients, stroke patients with mean age >60 years, and Caucasian subjects. However, no significant association of this allele with stroke susceptibility was detected in the East Asian population. The results of this meta-analysis therefore indicate that the Q192R polymorphism could be an important risk factor for stroke, especially in the Caucasian population. © 2010 Springer Science+Business Media B.V.
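Random-effects pooling of odds ratios, as used in this meta-analysis, is commonly done with the DerSimonian-Laird estimator: per-study log odds ratios are weighted by the inverse of their within-study variance plus an estimated between-study variance τ². A sketch with hypothetical study inputs (not the values from this meta-analysis):

```python
import math

def dl_random_effects(log_ors, variances):
    """DerSimonian-Laird random-effects pooling of per-study log odds
    ratios. Returns the pooled OR with its 95% confidence interval."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * y for wi, y in zip(w, log_ors)) / sum(w)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_ors))
    df = len(log_ors) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)            # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * y for wi, y in zip(w_star, log_ors)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return (math.exp(pooled),
            math.exp(pooled - 1.96 * se),
            math.exp(pooled + 1.96 * se))

# Hypothetical per-study log odds ratios and their variances:
or_, lo, hi = dl_random_effects([0.25, 0.10, 0.35], [0.02, 0.03, 0.05])
print(round(or_, 2), round(lo, 2), round(hi, 2))
```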

Bartok G.,ETH Zurich
Journal of Machine Learning Research | Year: 2013

Partial monitoring is an online learning model in which, at every time step, after a learner and an opponent choose their actions, the loss and the feedback for the learner are calculated from a loss function and a feedback function, both of which are known to the learner ahead of time. As in other online learning scenarios, the goal of the learner is to minimize the cumulative loss. In this paper we present and analyze a new algorithm for locally observable partial monitoring games. We prove that the expected regret of our algorithm is Õ(√(N′T)), where T is the time horizon and N′ is the size of the largest point-local game. The most important improvement of this bound over previous results is that it does not depend directly on the number of actions, but rather on the structure of the game. © 2013 G. Bartok.

Kraft N.J.B.,University of Maryland University College | Godoy O.,University of California at Santa Barbara | Godoy O.,CSIC - Institute of Natural Resources and Agriculture Biology of Seville | Levine J.M.,ETH Zurich
Proceedings of the National Academy of Sciences of the United States of America | Year: 2015

Understanding the processes maintaining species diversity is a central problem in ecology, with implications for the conservation and management of ecosystems. Although biologists often assume that trait differences between competitors promote diversity, empirical evidence connecting functional traits to the niche differences that stabilize species coexistence is rare. Obtaining such evidence is critical because traits also underlie the average fitness differences driving competitive exclusion, and this complicates efforts to infer community dynamics from phenotypic patterns. We coupled field-parameterized mathematical models of competition between 102 pairs of annual plants with detailed sampling of leaf, seed, root, and whole-plant functional traits to relate phenotypic differences to stabilizing niche and average fitness differences. Single functional traits were often well correlated with average fitness differences between species, indicating that competitive dominance was associated with late phenology, deep rooting, and several other traits. In contrast, single functional traits were poorly correlated with the stabilizing niche differences that promote coexistence. Niche differences could only be described by combinations of traits, corresponding to differentiation between species in multiple ecological dimensions. In addition, several traits were associated with both fitness differences and stabilizing niche differences. These complex relationships between phenotypic differences and the dynamics of competing species argue against the simple use of single functional traits to infer community assembly processes but lay the groundwork for a theoretically justified trait-based community ecology.
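The stabilizing niche differences and average fitness differences in this abstract follow Chesson's coexistence framework: with pairwise competition coefficients α, the niche overlap is ρ = √(α_ij·α_ji / (α_ii·α_jj)), and two species coexist stably when their fitness ratio lies between ρ and 1/ρ. A sketch of this criterion with hypothetical coefficients (the study's field-parameterized models are far richer):

```python
import math

def niche_overlap(a_ii, a_ij, a_ji, a_jj):
    """Niche overlap rho from pairwise competition coefficients:
    rho = sqrt((a_ij * a_ji) / (a_ii * a_jj))."""
    return math.sqrt((a_ij * a_ji) / (a_ii * a_jj))

def coexists(rho, fitness_ratio):
    """Stable coexistence requires rho < fitness_ratio < 1/rho."""
    return rho < fitness_ratio < 1.0 / rho

# Hypothetical coefficients: intraspecific competition (a_ii, a_jj)
# stronger than interspecific (a_ij, a_ji) gives a large niche difference.
rho = niche_overlap(a_ii=1.0, a_ij=0.4, a_ji=0.5, a_jj=1.0)
print(round(rho, 3), coexists(rho, fitness_ratio=1.3))  # 0.447 True
```

A larger fitness imbalance (say a ratio of 3) would fall outside the (ρ, 1/ρ) window and lead to competitive exclusion despite the same niche difference.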

Schneider G.,ETH Zurich
Journal of Computer-Aided Molecular Design | Year: 2012

Approximately 25 years ago the first computer applications were conceived for the purpose of automated 'de novo' drug design, prominent pioneering tools being ALADDIN, CAVEAT, GENOA, and DYLOMMS. Many of these early concepts were enabled by innovative techniques for ligand-receptor interaction modeling like GRID, MCSS, DOCK, and CoMFA, which still provide the theoretical framework for several more recently developed molecular design algorithms. After a first wave of software tools and groundbreaking applications in the 1990s-expressly GROW, GrowMol, LEGEND, and LUDI representing some of the key players-we are currently witnessing a renewed strong interest in this field. Innovative ideas for both receptor and ligand-based drug design have recently been published. We here provide a personal perspective on the evolution of de novo design, highlighting some of the historic achievements as well as possible future developments of this exciting field of research, which combines multiple scientific disciplines and is, like few other areas in chemistry, subject to continuous enthusiastic discussion and compassionate dispute. © 2011 Springer Science+Business Media B.V.

To assess the efficacy of HIV vaccine candidates or preventive treatments, many research groups have started to challenge monkeys repeatedly with low doses of the virus. Such challenge data provide a unique opportunity to assess the importance of exposure history for the acquisition of the infection. I developed stochastic models to analyze previously published challenge data. In the mathematical models, I allowed the animals' susceptibility to infection to vary across challenge repeats, or across animals. In none of the studies I analyzed did I find evidence for an immunizing effect of non-infecting challenges, and in most studies there is no evidence for variation in susceptibility to the challenges across animals. A notable exception was a challenge experiment by Letvin et al., Sci. Transl. Med. (2011), conducted with the strain SIVsmE660. The challenge data of this experiment showed significant susceptibility variation from animal to animal, which is consistent with previously established genetic differences between the animals involved. For the studies that did not show significant immunizing effects or susceptibility differences, I conducted a power analysis and could thereby exclude a very strong immunization effect for some of the studies. These findings validate the assumption that non-infecting challenges do not immunize an animal, an assumption that is central to the argument that repeated low-dose challenge experiments increase the statistical power of preclinical HIV vaccine trials. They are also relevant for our understanding of the role of exposure history in HIV acquisition and for forecasting the epidemiological spread of HIV. © 2012 Roland R.
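Under the central assumption validated here (no immunizing effect of non-infecting challenges and constant susceptibility), the number of challenges until infection is geometrically distributed with mean 1/p. A minimal simulation sketch with a hypothetical per-challenge infection probability, not the paper's fitted model:

```python
import random

def challenges_to_infection(p, rng):
    """Number of low-dose challenges until infection when each challenge
    independently infects with probability p (no immunizing effect of
    non-infecting challenges, constant susceptibility)."""
    n = 1
    while rng.random() >= p:
        n += 1
    return n

rng = random.Random(42)
p = 0.2  # hypothetical per-challenge infection probability

# With constant susceptibility the challenge count is geometric with
# mean 1/p; animal-to-animal susceptibility variation (e.g. a
# beta-distributed p) would fatten the tail of this distribution.
samples = [challenges_to_infection(p, rng) for _ in range(10000)]
print(round(sum(samples) / len(samples), 1))  # close to 1/p = 5
```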

Piegsa F.M.,ETH Zurich
Nuclear Instruments and Methods in Physics Research, Section A: Accelerators, Spectrometers, Detectors and Associated Equipment | Year: 2015

A neutron resonance spin flip device is presented which efficiently induces spin flips of neutrons from a cold beam in static fields down to approximately 100 μT. Detailed performance measurements show typical characteristics of the related pulsed nuclear magnetic resonance technique. The obtained results are in excellent quantitative agreement with model calculations which use a formalism based on time-evolution operators. © 2015 Elsevier B.V. All rights reserved.
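For an idealized two-level spin driven by an RF field, the time-evolution-operator formalism mentioned above reduces to the standard Rabi flip probability P = (ω₁/Ω_eff)² sin²(Ω_eff t/2) with Ω_eff = √(ω₁² + Δ²). A sketch of this textbook limit (the paper's full model calculations go beyond it):

```python
import math

def flip_probability(omega1, delta, t):
    """Rabi spin-flip probability for a two-level spin driven by an RF
    field: omega1 is the Rabi frequency, delta the detuning from the
    Larmor frequency, t the interaction time (angular frequency units)."""
    omega_eff = math.hypot(omega1, delta)
    return (omega1 / omega_eff) ** 2 * math.sin(omega_eff * t / 2.0) ** 2

# On resonance, a pi-pulse (omega1 * t = pi) flips the spin completely:
p = flip_probability(omega1=math.pi, delta=0.0, t=1.0)
print(round(p, 6))  # 1.0
```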

Gaberdiel M.R.,ETH Zurich | Gopakumar R.,Harish Chandra Research Institute
Journal of High Energy Physics | Year: 2012

The non-linear W∞[μ] symmetry algebra underlies the duality between the W_N minimal model CFTs and the hs[μ] higher spin theory on AdS3. It is shown how the structure of this symmetry algebra at the quantum level, i.e. for finite central charge, can be determined completely. The resulting algebra exhibits an exact equivalence (a 'triality') between three (generically) distinct values of the parameter μ. This explains, among other things, the agreement of symmetries between the W_N minimal models and the bulk higher spin theory. We also study the consequences of this triality for some of the simplest W∞[μ] representations, thereby clarifying the analytic continuation between the 'light states' of the minimal models and conical defect solutions in the bulk. These considerations also lead us to propose that one of the two scalar fields in the bulk actually has a non-perturbative origin. © SISSA 2012.

Szpilman A.M.,Technion - Israel Institute of Technology | Carreira E.M.,ETH Zurich
Angewandte Chemie - International Edition | Year: 2010

Diverted total synthesis (DTS) is a powerful concept for the systematic modification of natural products with the aim of studying mechanistic aspects of their biological activity. This concept offers far-reaching opportunities for discovery at the interface of biology and chemistry. It is underpinned by the power of chemical synthesis, which manifests itself in the ability to modify structure at will. Its implementation, when combined with innovative design, enables the preparation of unique mechanistic probes that can be decisive in differentiating and validating biological hypotheses at the molecular level. This Review assembles a collection of classic and current cases that illustrate and underscore the scientific possibilities for practitioners of chemical synthesis. Copyright © 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

Sperger T.,RWTH Aachen | Sanhueza I.A.,RWTH Aachen | Sanhueza I.A.,ETH Zurich | Kalvet I.,RWTH Aachen | Schoenebeck F.,RWTH Aachen
Chemical Reviews | Year: 2015

A review is presented of the computational studies, conducted between 2008 and 2014, of Pd-, Ni-, Rh-, and Ir-mediated transformations of significance for organic synthesis. The review is meant to provide the reader with a summary of recent computational studies of synthetically relevant Pd-, Ni-, Rh-, and Ir-catalyzed transformations, of the choice and performance of the computational methodology used therein, and of the numerous mechanistic insights gained. It also emphasizes that accurate energy calculations are an important step toward reliable studies of Pd, Ni, Rh, and Ir complexes and related systems.

Vomel C.,ETH Zurich
ACM Transactions on Mathematical Software | Year: 2010

The (sequential) algorithm of Multiple Relatively Robust Representations, MRRR, is a more efficient variant of inverse iteration that does not require reorthogonalization. It solves the eigenproblem of an unreduced symmetric tridiagonal matrix T ∈ R^(n×n) at O(n^2) cost. The computed normalized eigenvectors are numerically orthogonal in the sense that the dot product between different vectors is O(nε), where ε denotes the relative machine precision. This article describes the design of ScaLAPACK's parallel MRRR algorithm. One emphasis is on the critical role of the representation tree in achieving both adequate accuracy and parallel scalability. A second point concerns the favorable properties of this code: subset computation, the use of static memory, and scalability. Unlike ScaLAPACK's Divide & Conquer and QR algorithms, MRRR can compute subsets of eigenpairs at reduced cost. And in contrast to inverse iteration, which can fail, it is guaranteed to produce a satisfactory answer while maintaining memory scalability. ParEig, the parallel MRRR algorithm for PLAPACK, uses dynamic memory allocation; our code avoids this at marginal additional cost. We also use a different representation-tree criterion that allows for more accurate computation of the eigenvectors but can make parallelization more difficult. © 2010 ACM.
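The orthogonality property described above, dot products between distinct computed eigenvectors of size O(nε), can be illustrated on a small symmetric tridiagonal matrix. The sketch below uses a cyclic Jacobi iteration as a toy stand-in solver (it is not MRRR and does not achieve MRRR's O(n^2) cost), then checks the orthogonality of the accumulated eigenvectors:

```python
import math

def jacobi_eigh(A, sweeps=20):
    """Eigendecomposition of a small symmetric matrix by cyclic Jacobi
    rotations (a toy stand-in for MRRR, used here only to illustrate
    the numerical-orthogonality check). Returns (eigenvalues, V) with
    the eigenvectors in the columns of V."""
    n = len(A)
    A = [row[:] for row in A]
    V = [[float(i == j) for j in range(n)] for i in range(n)]
    for _ in range(sweeps):
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(A[p][q]) < 1e-15:
                    continue
                # Rotation angle that zeroes A[p][q]: tan(2t) = 2a_pq/(a_qq-a_pp)
                theta = 0.5 * math.atan2(2.0 * A[p][q], A[q][q] - A[p][p])
                c, s = math.cos(theta), math.sin(theta)
                for k in range(n):               # rows: A <- G^T A
                    apk, aqk = A[p][k], A[q][k]
                    A[p][k] = c * apk - s * aqk
                    A[q][k] = s * apk + c * aqk
                for k in range(n):               # columns: A <- A G
                    akp, akq = A[k][p], A[k][q]
                    A[k][p] = c * akp - s * akq
                    A[k][q] = s * akp + c * akq
                for k in range(n):               # accumulate V <- V G
                    vkp, vkq = V[k][p], V[k][q]
                    V[k][p] = c * vkp - s * vkq
                    V[k][q] = s * vkp + c * vkq
    return [A[i][i] for i in range(n)], V

# Symmetric tridiagonal test matrix (diagonal 2, off-diagonal -1):
T = [[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]]
vals, V = jacobi_eigh(T)
# Dot products between different computed eigenvectors are tiny, O(n*eps):
dot01 = sum(V[k][0] * V[k][1] for k in range(3))
print(abs(dot01) < 1e-12)  # True
```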