Munich, Germany

The Technische Universität München is a research university with campuses in Munich, Garching and Freising-Weihenstephan. It is a member of TU9, an incorporated society of the largest and most notable German institutes of technology.

Borgwardt S.,TU Munich
Mathematical Programming | Year: 2013

We study the combinatorial diameter of partition polytopes, a special class of transportation polytopes. They are associated with partitions of a set X = {x1, …, xn} of items into clusters C1, …, Ck of prescribed sizes κ1 ≥ … ≥ κk. We derive upper bounds on the diameter in the form of κ1 + κ2, n − κ1 and ⌊n/2⌋. This is a direct generalization of the diameter-2 result for the Birkhoff polytope. The bounds are established using a constructive, graph-theoretical approach where we show that special sets of vertices in graphs that decompose into cycles can be covered by a set of vertex-disjoint cycles. Further, we give exact diameters for partition polytopes with k = 2 or k = 3 and prove that, for all k ≥ 4 and all κ1, κ2, there are cluster sizes κ3, …, κk such that the diameter of the corresponding partition polytope is at least ⌈(4/3)κ2⌉. Finally, we provide an O(n(κ1 + κ2)(√k − 1)) algorithm for an edge-walk connecting two given vertices of a partition polytope that also adheres to our diameter bounds. © 2011 Springer and Mathematical Optimization Society. Source
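
The three upper bounds stated in the abstract combine into a single easily evaluated quantity. A minimal sketch (the function name and interface are illustrative, not from the paper):

```python
def diameter_upper_bound(cluster_sizes):
    """Upper bound on the combinatorial diameter of the partition
    polytope for clusters of the given sizes, taking the minimum of
    the three bounds from the abstract: kappa1 + kappa2, n - kappa1,
    and floor(n/2)."""
    k = sorted(cluster_sizes, reverse=True)   # kappa1 >= ... >= kappak
    n = sum(k)
    if len(k) < 2:
        return 0
    return min(k[0] + k[1], n - k[0], n // 2)

# The Birkhoff polytope corresponds to n clusters of size 1 each:
print(diameter_upper_bound([1] * 5))  # min(2, 4, 2) = 2
```

For the all-ones case the bound reproduces the classical diameter-2 result for the Birkhoff polytope mentioned in the abstract.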

Komossa K.,TU Munich
Cochrane database of systematic reviews (Online) | Year: 2010

Obsessive compulsive disorder (OCD) is a psychiatric disorder which has been shown to affect 2 to 3.5% of people during their lifetimes. Inadequate response occurs in 40% to 60% of people who are prescribed first-line pharmaceutical treatments (selective serotonin reuptake inhibitors (SSRIs)). To date, little is known about the efficacy and adverse effects of second-generation antipsychotic drugs (SGAs) in people suffering from OCD. To evaluate the effects of SGAs (monotherapy or add-on) compared with placebo or other forms of pharmaceutical treatment for people with OCD. The Cochrane Depression, Anxiety and Neurosis Group's controlled trial registers (CCDANCTR-Studies and CCDANCTR-References) were searched up to 21 July 2010. The author team ran complementary searches on ClinicalTrials.gov and contacted key authors and drug companies. We included double-blind randomised controlled trials (RCTs) comparing oral SGAs (monotherapy or add-on) with other forms of pharmaceutical treatment or placebo in adults with primary OCD. We extracted data independently. For dichotomous data we calculated the odds ratio (OR) and its 95% confidence interval (CI) on an intention-to-treat basis using a random-effects model. For continuous data, we calculated mean differences (MD), again based on a random-effects model. We included 11 RCTs with 396 participants on three SGAs. All trials investigated the effects of adding these SGAs to antidepressants (usually SSRIs). The duration of all trials was less than six months. Only 13% of the participants left the trials early.
Most trials were limited in terms of quality aspects. Two trials examined olanzapine and found no difference in the primary outcome (response to treatment) and most other efficacy-related outcomes, but it was associated with more weight gain than monotherapy with antidepressants. Quetiapine combined with antidepressants was also no more efficacious than placebo combined with antidepressants in terms of the primary outcome, but there was a significant superiority in the mean Yale-Brown Obsessive Compulsive Scale (Y-BOCS) score at endpoint (MD -2.28, 95% CI -4.05 to -0.52). There were also some beneficial effects of quetiapine on anxiety or depressive symptoms. Risperidone was more efficacious than placebo in terms of the primary outcome (number of participants without a significant response) (OR 0.17, 95% CI 0.04 to 0.66) and in the reduction of anxiety and depression (MD -7.60, 95% CI -12.37 to -2.83). The available data on the effects of olanzapine in OCD are too limited to draw any conclusions. There is some evidence that adding quetiapine or risperidone to antidepressants increases efficacy, but this must be weighed against worse tolerability and limited data. Source
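
The odds ratios with 95% confidence intervals quoted above follow the standard log-OR normal approximation for a single 2x2 table. A minimal sketch (the counts below are hypothetical, purely for illustration; the review itself pooled across trials with a random-effects model):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI from a 2x2 table:
    a = events in treatment, b = non-events in treatment,
    c = events in control,  d = non-events in control.
    Uses the standard log-OR normal approximation."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# hypothetical counts, for illustration only
or_, lo, hi = odds_ratio_ci(10, 40, 25, 25)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

An OR below 1 with a CI excluding 1, as for risperidone above (OR 0.17, 95% CI 0.04 to 0.66), indicates fewer non-responders in the treatment arm.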

Komossa K.,TU Munich
Cochrane database of systematic reviews (Online) | Year: 2010

Major depressive disorder (MDD) is a common condition with a lifetime prevalence of 15% to 18%, which leads to considerable suffering and disability. Some antipsychotics have been reported to induce remission in major depression, when added to an antidepressant. To evaluate the effects of second-generation antipsychotic (SGA) drugs (alone or augmentation) compared with placebo or antidepressants for people with MDD or dysthymia. The Cochrane Depression, Anxiety and Neurosis Group's controlled trial registers (CCDANCTR-Studies and CCDANCTR-References) were searched up to 21 July 2010. The author team ran complementary searches on clinicaltrials.gov and contacted key authors and drug companies. We included all randomised, double-blind trials comparing oral SGA treatment (alone or augmentation) with other forms of pharmaceutical treatment or placebo in people with MDD or dysthymia. We extracted data independently. For dichotomous data we calculated the odds ratio (OR) and 95% confidence interval (CI) on an intention-to-treat basis, and for continuous data the mean difference (MD), based on a random-effects model. We presented each comparison separately; we did not perform a pooled data analysis. We included 28 trials with 8487 participants on five SGAs: amisulpride, aripiprazole, olanzapine, quetiapine and risperidone. Three studies (1092 participants) provided data on aripiprazole augmentation in MDD. All efficacy data (response n = 1092, three RCTs, OR 0.48; 95% CI 0.37 to 0.63), (MADRS n = 1077, three RCTs, MD -3.04; 95% CI -4.09 to -2) indicated a benefit for aripiprazole, but more side effects (weight gain, EPS). Seven trials (1754 participants) reported data on olanzapine.
Compared to placebo, fewer people discontinued treatment due to inefficacy; compared to antidepressants there were no efficacy differences. Olanzapine augmentation showed symptom reduction (MADRS n = 808, five RCTs, MD -2.84; 95% CI -5.48 to -0.20), but also more weight or prolactin increase. Quetiapine data are based on seven trials (3414 participants). Compared to placebo, quetiapine monotherapy (response n = 1342, three RCTs, OR 0.52; 95% CI 0.41 to 0.66) and quetiapine augmentation (response n = 937, two RCTs, OR 0.68; 95% CI 0.52 to 0.90) showed symptom reduction, but quetiapine induced more sedation. Four trials (637 participants) presented data on risperidone augmentation; response data were better for risperidone (n = 371, two RCTs, OR 0.57; 95% CI 0.36 to 0.89), but augmentation showed more prolactin increase and weight gain. Five studies (1313 participants) presented data on amisulpride treatment for dysthymia. There were some beneficial effects compared to placebo or antidepressants but tolerability was worse. Quetiapine was more effective than placebo treatment. Aripiprazole and quetiapine and partly also olanzapine and risperidone augmentation showed beneficial effects compared to placebo. Some evidence indicated beneficial effects of low-dose amisulpride for dysthymic people. Most SGAs showed worse tolerability. Source
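
The per-comparison random-effects summaries of mean differences (e.g. on the MADRS) are conventionally pooled with the DerSimonian-Laird estimator. A minimal sketch with hypothetical per-study numbers (not taken from the review):

```python
import math

def pool_random_effects(estimates, variances, z=1.96):
    """DerSimonian-Laird random-effects pooling of per-study effect
    estimates (e.g. mean differences) with their variances.
    Returns (pooled estimate, CI lower, CI upper)."""
    w = [1 / v for v in variances]                       # fixed-effect weights
    sw = sum(w)
    fixed = sum(wi * yi for wi, yi in zip(w, estimates)) / sw
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, estimates))
    df = len(estimates) - 1
    # between-study variance, truncated at zero
    tau2 = max(0.0, (q - df) / (sw - sum(wi ** 2 for wi in w) / sw))
    ws = [1 / (v + tau2) for v in variances]             # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(ws, estimates)) / sum(ws)
    se = math.sqrt(1 / sum(ws))
    return pooled, pooled - z * se, pooled + z * se

# hypothetical per-study mean differences and variances
print(pool_random_effects([-2.8, -3.1, -2.4], [0.9, 1.2, 0.7]))
```

When the studies are homogeneous the between-study variance estimate collapses to zero and the result coincides with the fixed-effect average.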

Padovani P.,European Southern Observatory | Resconi E.,TU Munich
Monthly Notices of the Royal Astronomical Society | Year: 2014

IceCube has recently reported the discovery of high-energy neutrinos of astrophysical origin, opening up the PeV (1015 eV) sky. Because of their large positional uncertainties, these events have not yet been associated to any astrophysical source. We have found plausible astronomical counterparts in the GeV-TeV bands by looking for sources in the available large area high-energy γ-ray catalogues within the error circles of the IceCube events. We then built the spectral energy distribution of these sources and compared it with the energy and flux of the corresponding neutrino. Likely counterparts include mostly BL Lacs and two Galactic pulsar wind nebulae. On the one hand many objects, including the starburst galaxy NGC 253 and Centaurus A, despite being spatially coincident with neutrino events, are too weak to be reconciled with the neutrino flux. On the other hand, various GeV powerful objects cannot be assessed as possible counterparts due to their lack of TeV data. The definitive association between high-energy astrophysical neutrinos and our candidates will be significantly helped by new TeV observations, but will be confirmed or disproved only by further IceCube data. Either way, this will have momentous implications for blazar jets, high-energy astrophysics, and cosmic ray and neutrino astronomy. © 2014 The Authors Published by Oxford University Press on behalf of the Royal Astronomical Society. Source
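
The counterpart search reduces to testing whether a catalogued γ-ray source lies within each event's positional error circle, i.e. an angular-separation cut on the sphere. A minimal sketch (positions and radius below are hypothetical, not from the paper):

```python
import math

def angular_separation(ra1, dec1, ra2, dec2):
    """Great-circle separation in degrees between two sky positions
    (RA/Dec in degrees), via the Vincenty formula, which stays
    numerically stable at small angles."""
    l1, b1, l2, b2 = map(math.radians, (ra1, dec1, ra2, dec2))
    dl = l2 - l1
    num = math.hypot(
        math.cos(b2) * math.sin(dl),
        math.cos(b1) * math.sin(b2) - math.sin(b1) * math.cos(b2) * math.cos(dl),
    )
    den = math.sin(b1) * math.sin(b2) + math.cos(b1) * math.cos(b2) * math.cos(dl)
    return math.degrees(math.atan2(num, den))

def in_error_circle(source, event, radius_deg):
    """True if a catalogue source lies inside an event's error circle."""
    return angular_separation(*source, *event) <= radius_deg

# hypothetical positions: a source 5 degrees from an event centre
print(in_error_circle((10.0, 0.0), (15.0, 0.0), 13.0))  # True
```

Real IceCube error regions are often larger and not exactly circular; the circle test above is only the simplest spatial-coincidence criterion.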

Guenther T.,TU Munich
European journal of cardio-thoracic surgery : official journal of the European Association for Cardio-thoracic Surgery | Year: 2013

Tricuspid regurgitation (TR) secondary to left heart disease is the most common aetiology of tricuspid valve (TV) insufficiency. Valve annuloplasty is the primary treatment for TV insufficiency. Several studies have shown the superiority of annuloplasty with a prosthetic ring over other repair techniques. We reviewed our experience with different surgical techniques for the treatment of acquired TV disease focusing on long-term survival and incidence of reoperation. A retrospective analysis of 717 consecutive patients who underwent TV surgery between 1975 and 2009 with either a ring annuloplasty [Group R: N = 433 (60%)] or a De Vega suture annuloplasty [Group NR: no ring; N = 255 (36%)]. Twenty-nine (4%) patients underwent other types of TV repair. A ring annuloplasty was performed predominantly in the late study period of 2000-09. TV aetiology was functional in 67% (479/717) of the patients. Ninety-one percent of the patients (n = 649) underwent concomitant coronary artery bypass grafting and/or mitral/aortic valve surgery. Patients who received a ring annuloplasty were older (67 ± 13 vs 60 ± 13 years; P < 0.001). Overall 30-day mortality was 13.8% (n = 95) [Group R: n = 55 (12.7%) and Group NR: n = 40 (15.7%)]. Ten-year actuarial survival after TV repair with either the De Vega suture or ring annuloplasty was 39 ± 3 and 46 ± 7%, respectively (P = 0.01). Twenty-eight (4%) patients required a TV reoperation after 5.9 ± 5.1 years. Freedom from TV reoperation 10 years after repair with a De Vega annuloplasty was 87.9 ± 3% compared with 98.4 ± 1% after the ring annuloplasty (P = 0.034). Patients who require TV surgery either as an isolated or a combined procedure constitute a high-risk group. The long-term survival is poor. Tricuspid valve repair with a ring annuloplasty is associated with improved survival and a lower reoperation rate than that with a suture annuloplasty. Source
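
The "actuarial survival" figures above are product-limit (Kaplan-Meier) estimates. A minimal sketch of the estimator with hypothetical follow-up data (not the study's data):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate.
    times: follow-up times; events: 1 = death observed, 0 = censored.
    Returns (time, survival probability) pairs at each death time."""
    data = sorted(zip(times, events))
    at_risk = len(data)
    s = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        tied = [e for tt, e in data if tt == t]
        deaths = sum(tied)
        if deaths:
            s *= 1 - deaths / at_risk     # product-limit update
            curve.append((t, s))
        at_risk -= len(tied)              # remove deaths and censorings at t
        i += len(tied)
    return curve

# hypothetical follow-up data (years, event indicator)
print(kaplan_meier([1, 2, 2, 3, 5], [1, 1, 0, 1, 0]))
```

Subjects censored at a given time are still counted as at risk for deaths occurring at that same time, which is the usual convention.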

Thibault P.,TU Munich | Elser V.,Cornell University
Annual Review of Condensed Matter Physics | Year: 2010

X-ray diffraction phenomena have been used for decades to study matter at the nanometer and subnanometer scales. X-ray diffraction microscopy uses the far-field scattering of coherent X-rays to form the 2D or 3D image of a scattering object in a way that resembles crystallography. In this review, we describe the main principles, benefits, and limitations of diffraction microscopy. After sampling some of the milestones of this young technique and its close variants, we conclude with a short assessment of the current state of the field. Copyright © 2010 by Annual Reviews. All rights reserved. Source
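
The iterative reconstruction at the heart of diffraction microscopy alternates between the measured Fourier modulus and real-space constraints. The sketch below shows the classic Error Reduction scheme as an illustration; it is not the specific algorithm of the review (the authors are known for more sophisticated variants such as the difference map):

```python
import numpy as np

def error_reduction(magnitude, support, n_iter=200, seed=0):
    """Error Reduction phase retrieval: alternate between enforcing
    the measured far-field modulus and a known real-space support
    with non-negativity."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0, 2 * np.pi, magnitude.shape)
    f = magnitude * np.exp(1j * phase)        # random starting phases
    for _ in range(n_iter):
        x = np.fft.ifft2(f).real
        x = np.where(support & (x > 0), x, 0.0)   # real-space constraints
        f = np.fft.fft2(x)
        f = magnitude * np.exp(1j * np.angle(f))  # Fourier modulus constraint
    return x

# toy object: a small square, with a loose known support region
obj = np.zeros((32, 32)); obj[12:20, 14:22] = 1.0
mag = np.abs(np.fft.fft2(obj))
supp = np.zeros((32, 32), dtype=bool); supp[10:22, 12:24] = True
rec = error_reduction(mag, supp)
print(rec.shape)
```

Error Reduction is known to stagnate on hard problems, which is precisely why the hybrid input-output and difference-map variants discussed in this literature exist.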

Bahl H.,TU Munich | Baumgardt H.,University of Queensland
Monthly Notices of the Royal Astronomical Society | Year: 2014

Ibata et al. recently reported the existence of a vast thin plane of dwarf galaxies (VTPD) orbiting around Andromeda. We investigate whether such a configuration can be reproduced within the standard cosmological framework and search for similar planes of corotating satellite galaxies around Andromeda-like host haloes in the data from the Millennium II simulation combined with a semi-analytic galaxy formation model. We apply a baryonic mass cut of 2.8 × 10^4 M⊙ for the satellite haloes and restrict the data to a Pan-Andromeda Archaeological Survey-like field. If we include the so-called orphan galaxies in our analysis, we find that planes with an rms lower than the VTPD are common in Millennium II. This is partially due to the strongly radially concentrated distribution of orphan galaxies. Excluding part of the orphan galaxies brings the radial distributions of Millennium II satellites into better agreement with the satellite distribution of Andromeda while still producing a significant fraction of planes with a lower rms than the VTPD. We also find haloes in Millennium II with an equal or higher number of corotating satellites than the VTPD. This demonstrates that the VTPD is not in conflict with the standard cosmological framework, although a definitive answer to this question might require higher resolution cosmological simulations that do not have to consider orphan galaxies. Finally, our results show that satellite planes in Millennium II are not stable structures; hence, the VTPD might only be a statistical fluctuation of an underlying more spherical galaxy distribution. © 2014 The Authors Published by Oxford University Press on behalf of the Royal Astronomical Society. Source
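
The rms thickness of a satellite plane is commonly computed as the rms distance of satellite positions from their best-fitting plane, found via SVD. A minimal sketch (the positions below are random placeholders, and the paper's exact plane-fitting convention may differ in detail):

```python
import numpy as np

def plane_rms(points):
    """rms distance of positions (N x 3 array) from the best-fitting
    plane through their centroid. The plane normal is the right
    singular vector belonging to the smallest singular value."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered)
    normal = vt[-1]
    return np.sqrt(np.mean((centered @ normal) ** 2))

# hypothetical satellite positions scattered about the z = 0 plane
rng = np.random.default_rng(1)
pts = np.column_stack([rng.uniform(-200, 200, 30),
                       rng.uniform(-200, 200, 30),
                       rng.normal(0, 12.0, 30)])
print(plane_rms(pts))  # close to the 12-unit input scatter
```

Searching for planes among subsets of satellites, as done in the paper, then amounts to minimising this quantity over combinations of member galaxies.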

Wachinger C.,Massachusetts Institute of Technology | Navab N.,TU Munich
IEEE Transactions on Pattern Analysis and Machine Intelligence | Year: 2013

We address the alignment of a group of images with simultaneous registration. To this end, we provide further insights into a recently introduced framework for multivariate similarity measures, referred to as accumulated pair-wise estimates (APE), and derive efficient optimization methods for it. More specifically, we show a strict mathematical deduction of APE from a maximum-likelihood framework and establish a connection to the congealing framework. This is only possible after an extension of the congealing framework with neighborhood information. Moreover, we address the increased computational complexity of simultaneous registration by deriving efficient gradient-based optimization strategies for APE: Gauss-Newton and the efficient second-order minimization (ESM). In addition to SSD, we show how intrinsically nonsquared similarity measures can be used in this least-squares optimization framework. The fundamental assumption of ESM, the approximation of the perfectly aligned moving image through the fixed image, limits its application to monomodal registration. We therefore incorporate recently proposed structural representations of images which allow us to perform multimodal registration with ESM. Finally, we evaluate the performance of the optimization strategies with respect to the similarity measures, leading to very good results for ESM. The extension to multimodal registration is in this context very interesting because it offers further possibilities for evaluation, due to publicly available datasets with ground-truth alignment. © 1979-2012 IEEE. Source
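
Gauss-Newton on an SSD criterion, one of the optimization strategies named above, can be illustrated on the simplest possible registration problem: recovering a 1-D translation. This toy sketch is not the paper's multivariate APE formulation, just the underlying least-squares mechanics:

```python
import numpy as np

def register_translation(fixed, moving, x, n_iter=30):
    """Gauss-Newton minimisation of the SSD between a fixed signal
    and a translated moving signal. Residual r(t) = moving(x + t) -
    fixed(x); the Jacobian is the derivative of the warped signal."""
    t = 0.0
    for _ in range(n_iter):
        warped = np.interp(x + t, x, moving)
        r = warped - fixed
        j = np.gradient(warped, x)          # d(warped)/dt
        t -= (j @ r) / (j @ j)              # Gauss-Newton update
    return t

x = np.linspace(0.0, 10.0, 1001)
fixed = np.exp(-((x - 5.0) ** 2) / 0.5)
moving = np.exp(-((x - 5.5) ** 2) / 0.5)    # same bump, shifted by 0.5
print(round(register_translation(fixed, moving, x), 3))  # ~ 0.5
```

In the image case the scalar t becomes a transformation parameter vector and j a Jacobian matrix, but the normal-equation update has the same shape.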

Simmel F.C.,TU Munich
Frontiers in Life Science | Year: 2012

The DNA origami method is an extraordinarily powerful and robust method for the assembly of almost arbitrarily shaped nanoscale objects made from DNA. Technological advances such as DNA origami and the availability of a range of computational tools have transformed DNA self-assembly from an art into a genuine engineering discipline. Investigators with diverse backgrounds are now beginning to use the origami method in their field of research. Potential applications range from single-molecule bio- and nanophysics through structural biology to synthetic biology and nanomedicine. This article discusses the transition of DNA nanotechnology to molecular scale engineering and describes, citing several examples, the power and utility of DNA origami for scientific research. © 2012 Copyright Taylor and Francis Group, LLC. Source

Hackl C.M.,TU Munich
International Journal of Control | Year: 2011

High-gain adaptive position control is proposed for a stiff one-mass system (1MS) and an elastic two-mass system (2MS). The control objective is (load-side) position reference tracking and disturbance rejection (of load torques and friction). Position and speed are available for feedback. Two simple high-gain adaptive position control strategies are presented and applied to a laboratory setup: an adaptive λ-tracking controller and a funnel controller. Neither controller estimates or identifies the plant. The λ-tracking controller achieves tracking with prescribed asymptotic accuracy: for given λ > 0 (arbitrarily small), the error approaches the interval [-λ, λ] asymptotically. The funnel controller, in contrast, assures tracking with prescribed transient accuracy: the error and its derivative are bounded by prescribed positive (possibly non-increasing) functions of time. A simple proportional-integral (PI)-like extension for the 1MS, and this extension combined with a high-pass filter for the 2MS, allow for zero steady-state tracking error. Oscillations in the shaft of the 2MS can be suppressed. © 2011 Taylor & Francis. Source
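
The funnel-control principle can be shown on a first-order toy plant: the feedback gain grows as the error approaches a prescribed, shrinking boundary ψ(t), which keeps the error inside the funnel without any plant model. The plant, funnel shape and numbers below are my own illustrative choices, not the paper's 1MS/2MS setup:

```python
import numpy as np

def funnel_tracking(t_end=5.0, dt=1e-3):
    """Funnel control of the toy plant x_dot = u toward a constant
    reference. Gain k = 1/(psi - |e|) blows up near the funnel
    boundary, pushing the error back inside."""
    n = int(t_end / dt)
    x, ref = 0.0, 1.0
    errors, bounds = [], []
    for i in range(n):
        t = i * dt
        psi = 2.0 * np.exp(-t) + 0.2      # prescribed funnel boundary
        e = x - ref
        k = 1.0 / (psi - abs(e))          # high gain near the boundary
        u = -k * e
        x += u * dt                       # Euler step of the plant
        errors.append(abs(e)); bounds.append(psi)
    return np.array(errors), np.array(bounds)

err, psi = funnel_tracking()
print(bool(np.all(err < psi)), bool(err[-1] < 0.05))  # True True
```

Note that ψ(∞) = 0.2 here plays the role of the prescribed ultimate accuracy; the error stays strictly inside the funnel for all time and settles well below that bound.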

Bottcher T.,Harvard University | Sieber S.A.,TU Munich
MedChemComm | Year: 2012

Activity-based protein profiling (ABPP) employs small-molecule probes to profile their dedicated targets in complex proteomes. Unlike traditional proteomics, which reports only on protein abundance, probes that selectively target the active site of certain proteins directly read out protein activity and provide tools for functional analysis. ABPP probes have largely replaced isotope-labelled probes and have demonstrated broad-spectrum utility ranging from the identification and characterization of disease-associated enzymes to drug development. Privileged structures with balanced reactivity are prime candidates for the design of activity-based probes. β-Lactams and β-lactones display such privileged structures and have demonstrated unprecedented value as probes. This review provides an overview of β-lactam and β-lactone probes and recent advances in their applications in chemical biology. © 2012 The Royal Society of Chemistry. Source

Drewes M.,TU Munich | Kang J.U.,Kim Il Sung University
Nuclear Physics B | Year: 2013

We calculate the relaxation rate of a scalar field in a plasma of other scalars and fermions with gauge interactions using thermal quantum field theory. It yields the rate of cosmic reheating and thereby determines the temperature of the "hot big bang" in inflationary cosmology. The total rate originates from various processes, including decays and inverse decays as well as Landau damping by scatterings. It involves quantum statistical effects and off-shell transport. Its temperature dependence can be highly non-trivial, making it impossible to express the reheating temperature in terms of the model parameters in a simple way. We pay special attention to the temperature dependence of the phase space due to the modified dispersion relations in the plasma. We find that it can have a drastic effect on the efficiency of perturbative reheating, which depends on the way particles in the primordial plasma interact. For some interactions thermal masses can effectively close the phase space for the dominant dissipative processes and thereby impose an upper bound on the reheating temperature. In other cases they open up new channels of dissipation, hence increase the reheating temperature. At high temperatures we find that the universe can even be heated through couplings to fermions, which are often assumed to be negligible due to Pauli-blocking. These effects may also be relevant for baryogenesis, dark matter production, the fate of moduli and in scenarios of warm inflation. © 2013 Elsevier B.V. Source

Manukyan A.,TU Munich
Photochemistry and Photobiology | Year: 2013

Photosynthetically active radiation (PAR) and ultraviolet B (UV-B) radiation are among the main environmental factors acting on herbal yield and the biosynthesis of bioactive compounds in medicinal plants. The objective of this study was to evaluate the influence of biologically effective UV-B light (280-315 nm) and PAR (400-700 nm) on herbal yield, content and composition, as well as antioxidant capacity of essential oils and polyphenols of lemon catmint (Nepeta cataria L. f. citriodora), lemon balm (Melissa officinalis L.) and sage (Salvia officinalis L.) under controlled greenhouse cultivation. Intensive UV-B radiation (2.5 kJ m^-2 d^-1) positively influenced herbal yield. The essential oil content and composition of the studied herbs were mainly affected by PAR and UV-B radiation. In general, additional low-dose UV-B radiation (1 kJ m^-2 d^-1) was most effective for the biosynthesis of polyphenols in herbs. Analysis of major polyphenolic compounds revealed differences in the sensitivity of the main polyphenols to PAR and UV-B radiation. Essential oils and polyphenol-rich extracts of irradiated herbs showed substantial differences in antioxidant capacity in the ABTS assay. Information from this study can be useful for herbal biomass and secondary metabolite production with superior quality under controlled environment conditions. Effects of PAR and UV-B radiation on herbal yield, secondary metabolites (essential oils, polyphenols) and their antioxidant capacity were tested for lemon catmint, lemon balm and sage under soilless greenhouse conditions. Intensive UV-B radiation (2.5 kJ m^-2 d^-1) influenced the herbal productivity and essential oil composition of all three species, while only the essential oil content of lemon catmint and lemon balm was affected. Intensive UV-B radiation was favorable for strong antioxidant capacity of essential oils. Low-dose UV-B radiation (1 kJ m^-2 d^-1) resulted in high polyphenolic content with strong antioxidant capacity.
Polyphenolic composition of the herbs was also affected by PAR and UV-B. © 2012 The American Society of Photobiology. Source

Silbermayr L.,University of Vienna | Minner S.,TU Munich
International Journal of Production Economics | Year: 2014

Interruptions in supply can have a severe impact on company performance. Their mitigation and management is therefore an important task. Reasons for interruptions can be machine breakdowns, material shortages, natural disasters, and labour strikes. Sourcing from multiple suppliers is a strategy to deal with and reduce supply disruption risk. We study a supply chain with one buyer facing Poisson demand who can procure from a set of potential suppliers who are not perfectly reliable. Each supplier is fully available for a certain amount of time (ON periods) and then breaks down for a certain amount of time during which it can supply nothing at all (OFF periods). The problem is modeled by a semi-Markov decision process (SMDP) where demands, lead times and ON and OFF periods of the suppliers are stochastic. The objective is to minimize the buyer's long-run average cost, including purchasing, holding and penalty costs. In a numerical study, we investigate the trade-off between single and multiple sourcing, as well as between keeping inventory and having a back-up supplier. The results illustrate the benefit of dual sourcing compared to single sourcing and show the influence of the suppliers' characteristics (cost, speed and availability) on the optimal policy. Further, the value of full information about supplier status switching events is analyzed and the performance of the optimal policy is compared to an order-up-to-S policy. As the optimal policy is very complex, a simple heuristic providing good results compared to the optimal solution is developed. © 2013 Elsevier B.V. Source
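
The order-up-to-S benchmark policy mentioned above is easy to evaluate by simulation. The sketch below is a heavily simplified discrete-time single-supplier version with geometric ON/OFF durations and instant replenishment; all parameters are illustrative, and the paper's SMDP model is richer (lead times, multiple suppliers):

```python
import math
import random

def simulate_base_stock(S, periods=200_000, lam=1.0, p_fail=0.02,
                        p_repair=0.1, h=1.0, b=9.0, seed=7):
    """One unreliable supplier, order-up-to-S policy: each period the
    supplier's ON/OFF state evolves as a two-state Markov chain,
    Poisson demand is subtracted, and if the supplier is ON the
    inventory is instantly restored to S. Unmet demand is backordered.
    Returns the long-run average holding-plus-backorder cost."""
    random.seed(seed)
    inv, on, cost = S, True, 0.0
    thresh = math.exp(-lam)
    for _ in range(periods):
        on = (random.random() >= p_fail) if on else (random.random() < p_repair)
        d, prod = 0, random.random()      # Knuth's Poisson sampler
        while prod > thresh:
            d += 1
            prod *= random.random()
        inv -= d
        if on:
            inv = S                       # instant order-up-to-S replenishment
        cost += h * max(inv, 0) + b * max(-inv, 0)
    return cost / periods

# more safety stock trades holding cost against stockouts in OFF spells
print(simulate_base_stock(S=3))
```

With a perfectly reliable supplier (p_fail = 0) the inventory sits at S every period, so the average cost reduces to h·S, which is a convenient sanity check on the simulator.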

Burov S.,Bar-Ilan University | Metzler R.,TU Munich | Barkai E.,Bar-Ilan University
Proceedings of the National Academy of Sciences of the United States of America | Year: 2010

The Khinchin theorem provides the condition that a stationary process is ergodic, in terms of the behavior of the corresponding correlation function. Many physical systems are governed by nonstationary processes in which correlation functions exhibit aging. We classify the ergodic behavior of such systems and suggest a possible generalization of Khinchin's theorem. Our work also quantifies deviations from ergodicity in terms of aging correlation functions. Using the framework of the fractional Fokker-Planck equation, we obtain a simple analytical expression for the two-time correlation function of the particle displacement in a general binding potential, revealing universality in the sense that the binding potential only enters into the prefactor through the first two moments of the corresponding Boltzmann distribution. We discuss applications to experimental data from systems exhibiting anomalous dynamics. Source

Hahn D.,TU Munich
Journal of Strength and Conditioning Research | Year: 2011

The purpose of this study was to evaluate whether and how isometric multijoint leg extension strength can be used to assess athletes' muscular capability within the scope of strength diagnosis. External reaction forces (Fext) and kinematics were measured (n = 18) during maximal isometric contractions in a seated leg press at 8 distinct joint angle configurations ranging from 30 to 100° knee flexion. In addition, muscle activation of rectus femoris, vastus medialis, biceps femoris c.l., gastrocnemius medialis, and tibialis anterior was obtained using surface electromyography (EMG). Joint torques for hip, knee, and ankle joints were computed by inverse dynamics. The results showed that unilateral Fext decreased significantly from 3,369 ± 575 N at 30° knee flexion to 1,015 ± 152 N at 100° knee flexion. Despite maximum voluntary effort, excitation of all muscles as measured by EMG root mean square changed with knee flexion angle. Moreover, correlations showed that above-average Fext at low knee flexion is not necessarily associated with above-average Fext at great knee flexion and vice versa. Similarly, it is not possible to deduce high joint torques from high Fext, just as above-average joint torques in 1 joint do not signify above-average torques in another joint. From these findings, it is concluded that an evaluation of muscular capability by means of Fext as measured for multijoint leg extension is strongly limited. As a practical recommendation, we suggest analyzing multijoint leg extension strength at 3 distinct knee flexion angles or at discipline-specific joint angles. In addition, a careful evaluation of muscular capacity based on measured Fext can be done for knee flexion angles ≥80°. For further and detailed analysis of single muscle groups, the use of inverse dynamic modeling is recommended. © 2011 National Strength and Conditioning Association. Source

Garcia-Morales V.,TU Munich
Physics Letters, Section A: General, Atomic and Solid State Physics | Year: 2013

By means of B-calculus [V. García-Morales, Phys. Lett. A 376 (2012) 2645] a universal map for deterministic cellular automata (CAs) has been derived. The latter is shown here to be invariant under certain transformations (global complementation, reflection and shift). When constructing CA rules in terms of rules of lower range, a new symmetry, "invariance under construction", is uncovered. Modular arithmetic is also reformulated within B-calculus, and a new symmetry of certain totalistic CA rules, which calculate the Pascal simplices modulo an integer number p, is then also uncovered. © 2012 Elsevier B.V. All rights reserved. Source
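
Invariance under global complementation can be checked directly in the simplest setting, Wolfram's elementary (range-1, binary) CAs: complementing the configuration and conjugating the rule commute with one time step. This is a standard special case, not the B-calculus formulation of the paper:

```python
def step(rule, state):
    """One synchronous update of an elementary CA with periodic
    boundaries; `rule` is the Wolfram rule number, `state` a tuple
    of 0/1 cells."""
    n = len(state)
    return tuple(
        (rule >> (state[(i - 1) % n] * 4 + state[i] * 2 + state[(i + 1) % n])) & 1
        for i in range(n)
    )

def conjugate(rule):
    """Rule obtained under global complementation (0 <-> 1):
    bit b of the new rule is 1 - bit (7 - b) of the old rule."""
    return sum((1 - ((rule >> (7 - b)) & 1)) << b for b in range(8))

# complementing the configuration and conjugating the rule commute:
s = (0, 1, 1, 0, 1, 0, 0, 1)
flip = lambda cfg: tuple(1 - c for c in cfg)
lhs = step(conjugate(110), flip(s))
rhs = flip(step(110, s))
print(conjugate(110), lhs == rhs)  # 137 True
```

The identity holds for every rule and every configuration, since complementing all cells maps neighbourhood index n to 7 − n, which is exactly what the conjugated rule table compensates for.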

Petschauer S.,TU Munich
Nuclear Physics A | Year: 2013

We calculate hyperon-nucleon and hyperon-hyperon interactions at next-to-leading order in SU(3) baryon chiral perturbation theory extending earlier work by the Bonn-Juelich group. The constructed potentials in momentum space include all one- and two-meson exchange terms generated by the SU(3) chiral Lagrangian. Effects from intermediate decuplet baryons are considered as well. These chiral baryon-baryon potentials, together with appropriate contact terms, provide a new basis for systematic studies of hyperon-nucleon scattering and light hypernuclei. © 2013 Elsevier B.V. Source

Dybalski W.,TU Munich
Communications in Mathematical Physics | Year: 2010

This paper presents a general framework for a refined spectral analysis of a group of isometries acting on a Banach space, which extends the spectral theory of Arveson. The concept of a continuous Arveson spectrum is introduced and the corresponding spectral subspace is defined. The absolutely continuous and singular-continuous parts of this spectrum are specified. Conditions are given, in terms of the transposed action of the group of isometries, which guarantee that the pure-point and continuous subspaces span the entire Banach space. In the case of a unitarily implemented group of automorphisms, acting on a C*-algebra, relations between the continuous spectrum of the automorphisms and the spectrum of the implementing group of unitaries are found. The group of spacetime translation automorphisms in quantum field theory is analyzed in detail. In particular, it is shown that the structure of its continuous spectrum is relevant to the problem of existence of (infra-)particles in a given theory. © 2010 Springer-Verlag. Source

Onyike C.U.,Johns Hopkins University | Diehl-Schmid J.,TU Munich
International Review of Psychiatry | Year: 2013

Frontotemporal dementia, a heterogeneous neurodegenerative disorder, is a common cause of young onset dementia (i.e. dementia developing in midlife or earlier). The estimated point prevalence is 15-22/100,000, and incidence 2.7-4.1/100,000. Some 25% are late-life onset cases. Population studies show nearly equal distribution by gender, which contrasts with myriad clinical and neuropathology reports. FTD is frequently familial and hereditary; five genetic loci for causal mutations have been identified, all showing 100% penetrance. Non-genetic risk factors are yet to be identified. FTD shows poor life expectancy but with survival comparable to that of Alzheimer's disease. Recent progress includes the formulation of up-to-date diagnostic criteria for the behavioural and language variants, and the development of new and urgently needed instruments for monitoring and staging the illness. There is still need for descriptive population studies to fill gaps in our knowledge about minority groups and developing regions. More pressing, however, is the need for reliable physiological markers for disease. There is a present imperative to develop a translational science to form the conduit for transferring neurobiological discoveries and insights from bench to bedside. © 2013 Institute of Psychiatry. Source

Su Y.-H.,TU Munich
Cognition | Year: 2014

This study investigated audiovisual synchrony perception in a rhythmic context, where the sound was not consequent upon the observed movement. Participants judged synchrony between a bouncing point-light figure and an auditory rhythm in two experiments. Two questions were of interest: (1) whether the reference in the visual movement, with which the auditory beat should coincide, relies on a position or a velocity cue; (2) whether the figure form and motion profile affect synchrony perception. Experiment 1 required synchrony judgment with regard to the same (lowest) position of the movement in four visual conditions: two figure forms (human or non-human) combined with two motion profiles (human or ball trajectory). Whereas figure form did not affect synchrony perception, the point of subjective simultaneity differed between the two motions, suggesting that participants adopted the peak velocity in each downward trajectory as their visual reference. Experiment 2 further demonstrated that, when judgment was required with regard to the highest position, the maximal synchrony response was considerably low for ball motion, which lacked a peak velocity in the upward trajectory. The finding of peak velocity as a cue parallels results of visuomotor synchronization tasks employing biological stimuli, suggesting that synchrony judgment with rhythmic motions relies on the perceived visual beat. © 2014 Elsevier B.V. Source

Kourist R.,TU Munich | Bornscheuer U.T.,University of Greifswald
Applied Microbiology and Biotechnology | Year: 2011

The enzymatic preparation of optically pure tertiary alcohols under sustainable conditions has received much attention. The conventional chemical synthesis of these valuable building blocks is still hampered by the use of harmful reagents such as heavy metal catalysts. Successful examples in biocatalysis used esterases, lipases, epoxide hydrolases, halohydrin dehalogenases, thiamine diphosphate-dependent enzymes, terpene cyclases, acetylases, and dehydratases. This mini-review provides an overview of recent developments in the discovery of new enzymes, their functional improvement by protein engineering, the design of chemoenzymatic routes leading to tertiary alcohols, and the discovery of entirely new biotransformations. © 2011 Springer-Verlag. Source

Adams N.A.,TU Munich
Physics of Fluids | Year: 2011

The approximate deconvolution model (ADM) for large-eddy simulation exploits a range of represented but non-resolved scales as a buffer region for emulating the subgrid-scale energy transfer. ADM can be related to Langevin models for turbulence when filter operators are interpreted as stochastic kernel estimators. The main conceptual difference between ADM and Langevin models for turbulence is that the former is formulated with respect to an Eulerian reference frame whereas the latter are formulated with respect to a Lagrangian reference frame. This difference can be resolved by transforming the Langevin models to the Eulerian reference frame. However, the presence of a stochastic force prevents the classical convective transformation from being applicable. It is shown that for the transformation a stochastic number-density field can be introduced that essentially represents the Lagrangian particle distribution of the original model. Unlike previous derivations, the number-density field is derived by invoking the δ-function calculus, and the resulting stochastic-momentum-field transport equation implies the necessity of a repulsive force in order to maintain a unique mapping between the Lagrangian and Eulerian frames. Based on the number-density field and the stochastic-momentum field, a stochastic modification of ADM is possible by an approximate reconstruction of the small-scale field on the above-mentioned range of buffer scales. The objective of this paper is to introduce the concept of the Eulerian formulation of the Langevin model in a consistent form that allows for stable numerical integration, and to show how this model can be used for a modified way of subfilter-scale estimation. It should be noted that the overall concept can be applied more generally to any situation where a Lagrangian Langevin model is used.
For an initial verification of the concept, which is within the scope of this paper, we consider the examples of compressible isotropic turbulence and the three-dimensional Taylor-Green vortex. © 2011 American Institute of Physics. Source

Kaiser N.,TU Munich
Nuclear Physics A | Year: 2011

A system of fermions with a short-range interaction proportional to the scattering length a is studied at finite density. At any order a^n, we evaluate the complete contributions to the energy per particle E(k_f) arising from combined (multiple) particle-particle and hole-hole rescatterings in the medium. This novel result is achieved by simply decomposing the particle-hole propagator into the vacuum propagator plus a medium insertion and correcting for certain symmetry factors in the (n-1)-th power of the in-medium loop. Known results for the low-density expansion up to and including order a^4 are accurately reproduced. The emerging series in a k_f can be summed to all orders in the form of a double integral over an arctangent function. In that representation the unitary limit a → ∞ can be taken, and one obtains the value ξ = 0.5067 for the universal Bertsch parameter. We also discuss applications to the equation of state of neutron matter at low densities and mention further extensions of the resummation method. © 2011 Elsevier B.V. Source
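For orientation, the low-density expansion that the abstract says is reproduced through order a^4 begins with the classic Lenz and Lee-Yang terms. The form below is the standard textbook result (with the convention a > 0 for repulsion), not a formula quoted from the paper itself:

```latex
\frac{E(k_f)}{A} \;=\; \frac{3k_f^2}{10M}\left[\,1
  \;+\; \frac{10}{9\pi}\,a k_f
  \;+\; \frac{4\,(11-2\ln 2)}{21\pi^2}\,(a k_f)^2
  \;+\; \mathcal{O}\!\big((a k_f)^3\big)\right]
```

The resummation described in the abstract extends this series to all orders in a k_f, which is what makes the unitary limit a → ∞ accessible.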

Gerlach C.,Bavarian Academy of Sciences and Humanities | Rummel R.,TU Munich
Journal of Geodesy | Year: 2013

One of the main objectives of ESA's Gravity Field and Steady-State Ocean Circulation mission GOCE (Gravity field and steady-state ocean circulation mission, 1999) is to allow global unification of height systems by directly providing potential differences between benchmarks in different height datum zones. In other words, GOCE provides a globally consistent and unbiased geoid. If this information is combined with ellipsoidal heights (derived from geodetic space techniques) and physical heights (derived from leveling/gravimetry) at the same benchmarks, datum offsets between the datum zones can be determined and all zones unified. The expected accuracy of GOCE is around 2-3 cm up to spherical harmonic degree nmax ≈ 200. The omission error above this degree amounts to about 30 cm, which cannot be neglected. Therefore, terrestrial residual gravity anomalies are necessary to evaluate the medium and short wavelengths of the geoid, i.e., one has to solve the Geodetic Boundary Value Problem (GBVP). The theory of height unification by the GBVP approach is well developed; see e.g. Colombo (A World Vertical Network. Report 296, Department of Geodetic Science and Surveying, 1980) or Rummel and Teunissen (Bull Geod 62:477-498, 1988). Thereby, it must be considered that terrestrial gravity anomalies referring to different datum zones are biased due to the respective datum offsets. Consequently, the height reference surface of a specific datum zone deviates from the unbiased geoid not only due to its own datum offset (direct bias term) but is also indirectly affected by the integration of biased gravity anomalies. The latter effect is called the indirect bias term, and it considerably complicates the adjustment model for global height unification. If no satellite-based gravity model is employed, this error amounts to about the same size as the datum offsets, i.e., 1-2 m globally. We show that this value decreases if a satellite-only gravity model is used.
Specifically for GOCE with nmax ≈ 200, the error can be expected not to exceed the level of 1 cm, allowing the effect to be neglected in practical height unification. The results are supported by recent findings by Gatti et al. (J Geod, 2012). © 2012 Springer-Verlag. Source

Feldmann T.,TU Munich
Journal of High Energy Physics | Year: 2011

We build on a recent paper by Grinstein, Redi and Villadoro, where a seesaw-like mechanism for quark masses was derived in the context of spontaneously broken gauged flavour symmetries. The seesaw mechanism is induced by heavy Dirac fermions which are added to the Standard Model spectrum in order to render the flavour symmetries anomaly-free. In this letter we report on the embedding of these fermions into multiplets of an SU(5) grand unified theory and discuss a number of interesting consequences. © SISSA 2011. Source

Sackmann E.,TU Munich
New Journal of Physics | Year: 2011

Adhesion micro-domains (ADs) formed during encounters of lymphocytes with antigen-presenting cells (APCs) mediate the genetic expression of quanta of the cytokine interleukin-2 (IL-2). The IL-2-induced activation of IL-2 receptors promotes the stepwise progression of the T-cells through the cell cycle, hence their name, immunological synapses. The ADs form short-lived reaction centres controlling the recruitment of activators of the biochemical pathway (the kinases Lck and ZAP) while preventing the access of inhibitors (the phosphatase CD45) through steric repulsion forces. CD45 acts as the generator of adhesion domains and, through its role as a spacer protein, also as the promoter of the reaction. In a second phase of T-cell-APC encounters, long-lived global reaction spaces (called supramolecular activation complexes (SMACs)) form by talin-mediated binding of the T-cell integrin LFA-1 to the counter-receptor ICAM-1, resulting in the formation of ring-like tight adhesion zones (peripheral SMAC). The ADs move to the centre of the intercellular adhesion zone, forming the central SMAC, which serves in the recycling of the ADs. We propose that cell stimulation is triggered by integrating the effects evoked by the short-lived adhesion domains. Similar global reaction platforms are formed by killer cells to destroy APCs. We present a testable mechanical model showing that global reaction spaces (SMACs or dome-like contacts between cytotoxic cells and APCs) form by self-organization through delayed activation of the integrin-binding affinity and stabilization of the adhesion zones by F-actin recruitment. The mechanical stability and the polarization of the adhering T-cells are mediated by microtubule-actin cross-talk. © IOP Publishing Ltd and Deutsche Physikalische Gesellschaft. Source

Althoff M.,TU Munich
IEEE Transactions on Power Systems | Year: 2014

Power system stability analysis becomes more important in the presence of ever-increasing variations in operating conditions. Traditionally, the operation of power systems is verified for specific operating conditions. In this work, the stability analysis is performed for a set of operating conditions using reachability analysis, which makes it possible to compute bounds on all possible system trajectories. Thus, reachability analysis can be used to rigorously check specifications. Contrary to previous work, the presented approach does not require model simplifications when the system is described by semi-explicit, nonlinear, index-1 differential-algebraic equations. The main obstacle in reachability analysis is scalability towards larger systems, which is addressed by investigating compositional techniques. As a result, transient stability and variable energy production can be analyzed for the IEEE 14-bus and 30-bus benchmark systems, for which the computation times are orders of magnitude faster than the simulation of all cases starting in the corners of the set of possible initial states. © 2014 IEEE. Source

Kastoryano M.J.,Free University of Berlin | Kastoryano M.J.,Niels Bohr Institute | Wolf M.M.,TU Munich | Eisert J.,Free University of Berlin
Physical Review Letters | Year: 2013

Dissipative engineering constitutes a framework within which quantum information processing protocols are powered by system-environment interaction rather than by unitary dynamics alone. This framework embraces noise as a resource and, consequently, offers a number of advantages compared to one based on unitary dynamics alone, e.g., that the protocols are typically independent of the initial state of the system. However, the time-independent nature of this scheme makes it difficult to imagine precisely timed sequential operations, conditional measurements, or error correction. In this work, we provide a path around these challenges by introducing basic dissipative gadgets which allow us to precisely initiate, trigger, and time dissipative operations while keeping the system Liouvillian time-independent. These gadgets open up novel perspectives for thinking about timed dissipative quantum information processing. As an example, we sketch how measurement-based computation can be simulated in the dissipative setting. © 2013 American Physical Society. Source

Canetti L.,Ecole Polytechnique Federale de Lausanne | Drewes M.,TU Munich | Drewes M.,RWTH Aachen | Shaposhnikov M.,Ecole Polytechnique Federale de Lausanne
Physical Review Letters | Year: 2013

We demonstrate for the first time that three sterile neutrinos alone can simultaneously explain neutrino oscillations, the observed dark matter, and the baryon asymmetry of the Universe without new physics above the Fermi scale. The key new point of our analysis is leptogenesis after sphaleron freeze-out, which leads to resonant dark matter production, thus evading the constraints on sterile neutrino dark matter from structure formation and x-ray searches. We identify the range of sterile neutrino properties that is consistent with all known constraints. We find a domain of parameters where the new particles can be found with present-day experimental techniques, using upgrades to existing experimental facilities. © 2013 American Physical Society. Source

Baumann M.,TU Munich
Journal of Hypertension | Year: 2011

Background: Spontaneously hypertensive rats (SHRs) are characterized by capillary rarefaction, which may contribute to blood pressure elevation. We hypothesized that capillary rarefaction involves suppressed angiogenesis; that renin inhibition influences anti-angiogenic homeostasis by acting on angiopoietins; and that transient renin blockade reduces anti-angiogenesis, thereby ameliorating long-lasting blood pressure elevation and cardiac hypertrophy in SHRs. Methods: First, serum angiopoietin-1 and angiopoietin-2 were measured in 2-month-old normotensive Wistar-Kyoto rats (WKYs) and SHRs after renin inhibition (aliskiren: 1 and 10 mg/kg per day) or placebo. Second, 4-week-old SHRs were treated prehypertensively with aliskiren (1 and 10 mg/kg per day) or placebo for 4 weeks. After 4 weeks of 'drug holiday', 12-week-old SHRs were given L-nitro-arginine methyl ester (L-NAME) (25 mg/kg per day) for a 4-week interval to promote capillary rarefaction. Thereafter, mean arterial pressure (MAP), cardiac remodeling, capillary density, pAkt/Akt as a marker of cellular survival, pro-angiogenic genes and systemic angiopoietins were investigated. Results: Baseline angiopoietin levels were similar between WKYs and SHRs. Renin inhibition increased angiopoietin-1 in SHRs and reduced angiopoietin-2 in both WKYs and SHRs, independently of blood pressure. Prehypertensive renin inhibition reduced MAP and cardiac hypertrophy in adult SHRs. This was associated with higher cardiac capillary density, pAkt/Akt, pro-angiogenic expression patterns and serum angiopoietin-1, whereas angiopoietin-2 was lower than in vehicle-pretreated SHRs. These results were independent of prehypertensive blood pressure lowering by aliskiren. Conclusion: We conclude that renin inhibition modulates anti-angiogenic signaling independently of blood pressure by increasing the angiopoietin-1/angiopoietin-2 ratio. In SHRs, this promotes stabilization of endothelial cells, favors pro-angiogenic action and consequently results in higher capillary density. © 2011 Wolters Kluwer Health | Lippincott Williams & Wilkins. Source

Knolle P.A.,TU Munich
Current Opinion in Immunology | Year: 2016

The liver is known as an organ with unique immune competence. Besides its unique microenvironment, which is determined by gut-derived portal venous blood constituents and the presence of enzymes with immune-regulatory properties, liver antigen-presenting cell populations regulate antigen-specific immunity in a local fashion. In addition to bone marrow-derived dendritic cells and myeloid cells such as macrophages and monocytes, truly liver-resident cell populations such as liver sinusoidal endothelial cells and hepatocytes also function as antigen-presenting cells. The functional outcome of antigen presentation by these cell populations is diverse and ranges from the generation of regulatory CD4 cells to the induction of memory CD8 T cells or deletional tolerance, generating a complex network of antigen-presenting cells that determines hepatic immune regulation and local immune surveillance against viral infection. © 2016 Elsevier Ltd. Source

Schaefer C.,TU Munich
BMC genomics | Year: 2012

Non-synonymous single nucleotide polymorphisms (nsSNPs) alter the protein sequence and can cause disease. The impact has been described by reliable experiments for relatively few mutations. Here, we study predictions of the functional impact of disease-annotated mutations from OMIM, PMD and Swiss-Prot and of variants not linked to disease. Most disease-causing mutations were predicted to impact protein function. More surprisingly, the raw prediction scores for disease-causing mutations were higher than the scores for the function-altering data set originally used for developing the prediction method (here SNAP). We might expect that diseases are caused by change-of-function mutations; however, it is surprising how well prediction methods developed for different purposes identify this link. Conversely, our predictions suggest that the set of nsSNPs not currently linked to diseases contains very few strong disease associations to be discovered. Firstly, annotations of disease-causing nsSNPs are on average so reliable that they can be used as proxies for functional impact. Secondly, disease-causing nsSNPs can be identified very well by methods that predict the impact of mutations on protein function. This implies that the existing prediction methods provide a very good means of choosing a set of suspect SNPs relevant for disease. Source

Ammar S.,TU Munich
EuroIntervention: journal of EuroPCR in collaboration with the Working Group on Interventional Cardiology of the European Society of Cardiology | Year: 2013

Endovascular renal denervation techniques have been clinically adopted for the treatment of resistant arterial hypertension with great success. Despite the favourable early results achieved with this technology, a clear understanding of the pathophysiology underlying this novel treatment is lacking. In addition, non-responsiveness to renal denervation remains a nidus for treatment failure in certain patients. In search of meaningful surrogate parameters of treatment responsiveness, the current article reviews the existing knowledge on renal nerve anatomy, changes occurring after denervation, and procedural parameters collected during denervation. From preclinical experience, the most reliable morphological parameter reflecting successful renal denervation is the presence of axonal degeneration. Most procedural and clinical parameters need extended investigation before being adopted as potential surrogate parameters for successful renal denervation. As a consequence, there is an imperative need for dedicated research revealing the pathophysiology of renal denervation procedures. In this regard, close co-operation of engineers, researchers and clinicians is warranted to turn renal denervation into a milestone treatment of arterial hypertension. Source

Berninger M.T.,TU Munich
Journal of Visualized Experiments: JoVE | Year: 2013

The treatment of osteochondral articular defects has challenged physicians for many years. The better understanding of the interactions of articular cartilage and subchondral bone gained in recent years has led to increased attention to restoration of the entire osteochondral unit. In comparison to chondral lesions, the regeneration of osteochondral defects is much more complex and a far greater surgical and therapeutic challenge. The damaged tissue does not only include the superficial cartilage layer but also the subchondral bone. For deep osteochondral damage, as it occurs, for example, with osteochondrosis dissecans, the full thickness of the defect needs to be replaced to restore the joint surface (1). Eligible therapeutic procedures have to consider these two different tissues with their different intrinsic healing potential (2). In recent decades, several surgical treatment options have emerged and have already been clinically established (3-6). Autologous or allogeneic osteochondral transplants consist of articular cartilage and subchondral bone and allow the replacement of the entire osteochondral unit. The defects are filled with cylindrical osteochondral grafts that aim to provide a congruent hyaline-cartilage-covered surface (3,7,8). Disadvantages are the limited amount of available grafts, donor site morbidity (for autologous transplants) and the incongruence of the surface; thereby the application of this method is limited, especially for large defects. New approaches in the field of tissue engineering have opened up promising possibilities for regenerative osteochondral therapy. The implantation of autologous chondrocytes marked the first cell-based biological approach for the treatment of full-thickness cartilage lesions and is now established worldwide, with good clinical results even 10 to 20 years after implantation (9,10).
However, to date, this technique is not suitable for the treatment of all types of lesions, such as deep defects involving the subchondral bone (11). The sandwich technique combines bone grafting with current approaches in tissue engineering (5,6). This combination seems able to overcome the limitations seen in osteochondral grafts alone. After autologous bone grafting to the subchondral defect area, a membrane seeded with autologous chondrocytes is sutured above it and helps match the topology of the graft with the injured site. Of course, the previous bone reconstruction needs additional surgical time and often even an additional surgery. Moreover, to date, long-term data are missing (12). Tissue engineering without additional bone grafting aims to restore the complex structure and properties of native articular cartilage through the chondrogenic and osteogenic potential of the transplanted cells. However, again, it is usually only the cartilage tissue that is more or less regenerated. Additional osteochondral damage needs specific further treatment. In order to achieve regeneration of the multilayered structure of osteochondral defects, three-dimensional tissue-engineered products seeded with autologous/allogeneic cells might provide a good regeneration capacity (11). Besides autologous chondrocytes, mesenchymal stem cells (MSCs) seem to be an attractive alternative for the development of a full-thickness cartilage tissue. In numerous preclinical in vitro and in vivo studies, mesenchymal stem cells have displayed excellent tissue regeneration potential (13,14). The important advantage of mesenchymal stem cells, especially for the treatment of osteochondral defects, is that they have the capacity to differentiate into osteocytes as well as chondrocytes. Therefore, they potentially allow a multilayered regeneration of the defect.
In recent years, several scaffolds with osteochondral regenerative potential have therefore been developed and evaluated, with promising preliminary results (1,15-18). Furthermore, fibrin glue as a cell carrier has become one of the preferred techniques in experimental cartilage repair and has already been used successfully in several animal studies (19-21) and even first human trials (22). The following protocol will demonstrate an experimental technique for isolating mesenchymal stem cells from a rabbit's bone marrow, for subsequent proliferation in cell culture, and for preparing a standardized in vitro model of fibrin-cell clots. Finally, a technique for the implantation of pre-established fibrin-cell clots into artificial osteochondral defects of the rabbit's knee joint will be described. Source

Reisenauer R.,TU Munich | Smith K.,University of Edinburgh | Blythe R.A.,University of Edinburgh
Physical Review Letters | Year: 2013

We study the time taken by a language learner to correctly identify the meaning of all words in a lexicon under conditions where many plausible meanings can be inferred whenever a word is uttered. We show that the most basic form of cross-situational learning - whereby information from multiple episodes is combined to eliminate incorrect meanings - can perform badly when words are learned independently and meanings are drawn from a nonuniform distribution. If learners further assume that no two words share a common meaning, we find a phase transition between a maximally efficient learning regime, where the learning time is reduced to the shortest it can possibly be, and a partially efficient regime where incorrect candidate meanings for words persist at late times. We obtain exact results for the word-learning process through an equivalence to a statistical mechanical problem of enumerating loops in the space of word-meaning mappings. © 2013 American Physical Society. Source
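The basic form of cross-situational learning described above reduces to intersecting candidate-meaning sets across exposures. A minimal sketch (the words, meanings, and episode data are hypothetical illustrations, not from the study):

```python
# Basic cross-situational learning: each time a word is uttered, the
# learner observes a set of plausible meanings; intersecting these sets
# across episodes eliminates candidates inconsistent with any episode.

def cross_situational_learn(episodes):
    """episodes: iterable of (word, candidate_meanings) pairs."""
    candidates = {}
    for word, meanings in episodes:
        if word not in candidates:
            candidates[word] = set(meanings)
        else:
            # Keep only meanings consistent with every episode so far.
            candidates[word] &= set(meanings)
    return candidates

episodes = [
    ("dax", {"dog", "ball", "tree"}),
    ("dax", {"dog", "car"}),        # only "dog" survives both episodes
    ("wug", {"ball", "car"}),
    ("wug", {"ball", "tree"}),
]
learned = cross_situational_learn(episodes)
# a word counts as learned once a single candidate meaning remains
```

Note that this sketch treats words independently; the regime described in the abstract where learners additionally assume mutual exclusivity (no two words share a meaning) would prune candidates across words as well.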

Gebhardt R.,TU Munich
Journal of Applied Crystallography | Year: 2014

Pressure-driven membrane filtration is a widely used method to separate casein micelles (CM) from smaller components in milk. The structure of CM attached to the membrane has been investigated because, in such a deposited state, they reduce the performance of the filtration process. Scattering experiments with nanometre- and micrometre-sized X-ray beams and a filtration setup with silicon micro-sieves as membranes were used. Grazing-incidence small-angle X-ray scattering (GISAXS) experiments above porous regions of the micro-sieves show that spherical CM become stretched in the direction of the filtration flow. The one-dimensional scattering functions extracted from the two-dimensional GISAXS patterns were analyzed by a single ellipsoidal form factor fit. According to the model, CM assume a prolate ellipsoidal shape at a trans-membrane pressure of Δp = 400 mbar (1 mbar = 100 Pa). With increasing trans-membrane pressure, the shape of the CM undergoes a transition towards an oblate structure between 400 and 600 mbar. Small-angle X-ray scattering experiments with a 200 nm beam allow for transmission experiments on CM in a single pore of the micro-sieve. Typical characteristics of the internal structure could not be identified in the scattering functions of CM subjected to filtration forces. © 2014 International Union of Crystallography. Source
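For context, a single ellipsoidal form factor fit of the kind mentioned above conventionally uses the orientationally averaged form factor of an ellipsoid of revolution. The standard expression is sketched below (the notation is ours, not quoted from the paper):

```latex
P(q) \;=\; \int_0^{\pi/2}
  \left[\frac{3\left(\sin u - u\cos u\right)}{u^3}\right]^2 \sin\alpha\,\mathrm{d}\alpha,
\qquad
u \;=\; q\,R\sqrt{\sin^2\alpha + \varepsilon^2\cos^2\alpha},
```

for an ellipsoid with semi-axes R, R, εR, where ε > 1 corresponds to the prolate shape reported at 400 mbar and ε < 1 to the oblate shape found above 600 mbar.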

Neesse A.,University of Gottingen | Algul H.,TU Munich | Tuveson D.A.,Pancreatic Cancer Research Laboratory | Gress T.M.,University of Marburg
Gut | Year: 2015

Pancreatic ductal adenocarcinoma (PDA) exhibits one of the poorest prognoses of all solid tumours and poses an unsolved problem in cancer medicine. Despite the recent success of two combination chemotherapies for palliative patients, the modest survival benefits are often traded against significant side effects and a compromised quality of life. Although the molecular events underlying the initiation and progression of PDA have been intensively studied and are increasingly understood, the reasons for the poor therapeutic response remain largely unexplained. One leading hypothesis over the last few years has been that the pronounced tumour microenvironment in PDA not only promotes carcinogenesis and tumour progression but also mediates therapeutic resistance. To this end, targeting of various stromal components and pathways was considered a promising strategy to biochemically and biophysically enhance therapeutic response. However, none of these efforts has yet led to efficacious and approved therapies in patients. Additionally, recent data have shown that tumour-associated fibroblasts may restrain rather than promote tumour growth, reinforcing the need to critically revisit the complexity and complicity of the tumour-stroma interaction, with translational implications for future therapy and clinical trial design. Source

Fischer H.S.,TU Munich
Applied Vegetation Science | Year: 2015

Phytosociological relevés consist of lists of species occurring on certain plots and their cover values in different vegetation layers. When combining the cover values of two layers into a single value, the overlap between the layers must be taken into account. With two layers, this can easily be done by subtracting the product of the cover values from their sum. A general formula to estimate the combined cover for any number of layers, assuming independence of the layers, is presented. This is especially useful for large databases where rich data sets have to be compiled that were originally recorded with different numbers of vegetation layers. © 2014 International Association for Vegetation Science. Source
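The two-layer rule above (sum minus product) is the independence case of the complement-product formula, which extends naturally to any number of layers. A minimal sketch, assuming cover values are expressed as fractions in [0, 1] (the function name and sample values are illustrative, not from the paper):

```python
# Combined cover of independent vegetation layers: the probability that
# a point is covered by at least one layer is one minus the probability
# that it is covered by none.

from functools import reduce

def combined_cover(covers):
    """covers: iterable of per-layer cover fractions in [0, 1]."""
    return 1.0 - reduce(lambda acc, c: acc * (1.0 - c), covers, 1.0)

# Two layers: identical to sum minus product.
c1, c2 = 0.6, 0.3
two_layer = c1 + c2 - c1 * c2        # 0.72
general = combined_cover([c1, c2])   # 0.72, same result
```

For cover recorded in percent, divide by 100 before combining and multiply the result by 100 afterwards.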

Abdullah Z.,University of Bonn | Knolle P.A.,University of Bonn | Knolle P.A.,TU Munich
EMBO Journal | Year: 2014

Macrophages detect bacterial infection through pattern recognition receptors (PRRs) localized at the cell surface, in intracellular vesicles or in the cytosol. Discrimination of viable and virulent bacteria from non-virulent bacteria (dead or viable) is necessary to appropriately scale the anti-bacterial immune response. Such scaling of anti-bacterial immunity is necessary to control the infection, but also to avoid immunopathology or bacterial persistence. PRR-mediated detection of bacterial constituents in the cytosol rather than at the cell surface, along with cytosolic recognition of secreted bacterial nucleic acids, indicates viability and virulence of infecting bacteria. The effector responses triggered by activation of cytosolic PRRs, in particular the RIG-I-induced simultaneous rapid type I IFN induction and inflammasome activation, are crucial for timely control of bacterial infection by innate and adaptive immunity. Knowledge of the PRRs and effector responses relevant for control of infection with intracellular bacteria will help in developing strategies to overcome chronic infection. Percy Knolle and Zeinab Abdullah discuss how pattern recognition receptors and signaling pathways contribute to the scaling of immune responses against intracellular bacterial infection. © 2014 The Authors. Source

Hofmann H.,TU Munich
Hautarzt | Year: 2012

Lyme borreliosis can affect almost all human organs. Erythema migrans is the first and most frequent manifestation, occurring in 80-90% of patients in the early stage of localized skin infection. Besides the typical clinical appearance, many atypical variants can be observed. The solitary borrelial lymphocytoma is much less common and occurs mostly in children. Due to improvements in the early recognition of Lyme borreliosis, the diagnosis is made in the disseminated or late stage in only 10-20% of patients. Multiple erythemata migrantia, indicating the hematogenous dissemination of B. burgdorferi, frequently remain unrecognized. Late stages of infection feature chronic plasma-cell-rich cutaneous inflammation and acrodermatitis chronica atrophicans in its edematous to atrophic forms. Cultivation or DNA detection of B. burgdorferi in skin biopsies are options to prove unusual skin manifestations. Serological detection of Borrelia-specific IgG and IgM antibodies should be performed according to the two-step protocol with ELISA and immunoassay according to the criteria of the MIQ 12. Serological tests have limited utility for follow-up. Antibiotic therapy is very effective if performed according to evidence-based protocols, such as the AWMF guidelines. © Springer-Verlag 2012. Source

Baumann U.,TU Munich
Nature Medicine | Year: 2014

We searched for genetic alterations in human B cell lymphoma that affect the ubiquitin-proteasome system. This approach identified FBXO25 within a minimal common region of frequent deletion in mantle cell lymphoma (MCL). FBXO25 encodes an orphan F-box protein that determines the substrate specificity of the SCF-FBXO25 (SKP1-CUL1-F-box) ubiquitin ligase complex. An unbiased screen uncovered the prosurvival protein HCLS1-associated protein X-1 (HAX-1) as the bona fide substrate of FBXO25 that is targeted after apoptotic stresses. Protein kinase Cδ (PRKCD) initiates this process by phosphorylating FBXO25 and HAX-1, thereby spatially directing nuclear FBXO25 to mitochondrial HAX-1. Our analyses in primary human MCL identify monoallelic loss of FBXO25 and stabilizing HAX1 phosphodegron mutations. Accordingly, FBXO25 re-expression in FBXO25-deleted MCL cells promotes cell death, whereas expression of the HAX-1 phosphodegron mutant inhibits apoptosis. In addition, knockdown of FBXO25 significantly accelerated lymphoma development in Eμ-Myc mice and in a human MCL xenotransplant model. Together, we identify a PRKCD-dependent proapoptotic mechanism controlling HAX-1 stability, and we propose that FBXO25 functions as a haploinsufficient tumor suppressor and that HAX1 is a proto-oncogene in MCL. © 2014 Nature Publishing Group, a division of Macmillan Publishers Limited. All Rights Reserved. Source

Eitz Ferrer P.,TU Munich
PLoS pathogens | Year: 2011

Viral infection is a stimulus for apoptosis, and in order to sustain viral replication many viruses are known to carry genes encoding apoptosis inhibitors. F1L, encoded by the orthopoxvirus modified vaccinia virus Ankara (MVA), has a Bcl-2-like structure. An MVA mutant lacking F1L (MVAΔF1L) induces apoptosis, indicating that MVA infection activates the apoptotic pathway and that F1L functions to inhibit it. In this study we investigated the events leading to apoptosis upon infection by MVAΔF1L. Apoptosis largely proceeded through the pro-apoptotic Bcl-2 family protein Bak, with some contribution from Bax. Of the family of pro-apoptotic BH3-only proteins, only the loss of Noxa provided substantial protection, while the loss of Bim had a minor effect. In mice, MVA preferentially infected macrophages and DCs in vivo. In both cell types wild-type MVA induced apoptosis, albeit more weakly than MVAΔF1L. The loss of Noxa had a significant protective effect in macrophages, DCs and primary lymphocytes, and the combined loss of Bim and Noxa provided strong protection. Noxa protein was induced during infection, and the induction of Noxa protein and of apoptosis required the transcription factor IRF3 and type I interferon signalling. We further observed that the helicases RIG-I and MDA5 and their signalling adapter MAVS contribute to Noxa induction and apoptosis in response to MVA infection. RNA isolated from MVA-infected cells induced Noxa expression and apoptosis when transfected in the absence of viral infection. We thus describe a pathway leading from the detection of viral RNA during MVA infection by the cytosolic helicase pathway, via IRF3 and type I IFN signalling, to the up-regulation of Noxa and apoptosis. Source

Woertler K.,TU Munich
Seminars in Musculoskeletal Radiology | Year: 2015

The rotator interval is an anatomically complex region of the shoulder joint that is difficult to evaluate on clinical examination and by imaging. Abnormalities of its components may contribute to instability, shoulder stiffness, and pain and are challenging to diagnose and treat. This article gives an overview of the anatomy, MR anatomy, and normal variants of the rotator interval, together with basic technical aspects of MR imaging of this area. Pathologic conditions of the rotator interval capsule, the long head of biceps tendon, and the pulley system are reviewed and illustrated with several clinical examples. © 2015 by Thieme Medical Publishers, Inc. Source

Duling B.,TU Munich
Journal of High Energy Physics | Year: 2010

We contrast the impact of Higgs-mediated flavor-changing neutral currents on ε_K in the framework of a warped extra dimension, recently calculated by Azatov et al., with the older results for Kaluza-Klein (KK) gluon induced corrections to that observable. We find that, under reasonable additional assumptions, the most stringent constraint on the KK scale for a Higgs field localized on the infrared brane comes from KK gluon exchange. In the case of a bulk Higgs field we show that in certain scenarios the Higgs contribution can in fact exceed the KK gluon contribution. In the course of this analysis we also describe in detail the different renormalization procedures that have to be employed in the KK gluon and Higgs cases to relate the new physics at high energies to low-energy observables. © SISSA 2010. Source

Kaiser N.,TU Munich
Nuclear Physics A | Year: 2010

We calculate in closed analytical form the one-photon loop radiative corrections to muon Compton scattering, μ⁻γ → μ⁻γ. Ultraviolet and infrared divergences are both treated in dimensional regularization. Infrared finiteness of the (virtual) radiative corrections is achieved (in the standard way) by including soft photon radiation below an energy cut-off λ. We find that the anomalous magnetic moment α/(2π) provides only a very small portion of the full radiative corrections. Furthermore, we extend our calculation of radiative corrections to the muon-nucleus bremsstrahlung process (or virtual muon Compton scattering μ⁻γ* → μ⁻γ). These results are particularly relevant for analyzing the COMPASS experiment at CERN, in which muon-nucleus bremsstrahlung serves to calibrate the Primakoff scattering of high-energy pions off a heavy nucleus with the aim of measuring the pion electric and magnetic polarizabilities. We find agreement with an earlier calculation of these radiative corrections based on a different method. © 2010 Elsevier B.V. Source

Spohn H.,TU Munich
Physica D: Nonlinear Phenomena | Year: 2010

We study the bosonic Boltzmann-Nordheim kinetic equation, which describes the kinetic regime of weakly interacting bosons with s-wave scattering only. We consider a spatially homogeneous fluid with an isotropic momentum distribution. The issue of the dynamical formation of a Bose-Einstein condensate has been studied extensively. We supply here the completed equations of motion for the coupled system, the energy density distribution of the normal fluid and the density of the condensate. With this information the post-nucleation self-similar solution is investigated in more detail than before. © 2010 Elsevier B.V. All rights reserved. Source
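For orientation, the bosonic Boltzmann-Nordheim collision term for a spatially homogeneous, isotropic gas can be sketched as follows; the kernel W and the omitted normalization constants are schematic textbook notation, not the paper's exact formulation:

```latex
% Schematic bosonic Boltzmann-Nordheim collision term; f_i = f(p_i, t),
% and the (1+f) factors encode Bose stimulation of scattering.
\partial_t f_1 = \int \! dp_2\, dp_3\, dp_4\;
W(p_1,p_2;p_3,p_4)\,
\delta(p_1+p_2-p_3-p_4)\,\delta(E_1+E_2-E_3-E_4)
\\ \times
\bigl[\, f_3 f_4 (1+f_1)(1+f_2) \;-\; f_1 f_2 (1+f_3)(1+f_4) \,\bigr].
```

The gain-minus-loss structure with Bose enhancement factors is what drives the condensate nucleation discussed in the abstract.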

Weise W.,TU Munich
Nuclear Physics A | Year: 2010

This report summarizes our understanding of antikaon-nucleon (K̄N) interactions and reviews the present theoretical situation in the quest for quasibound antikaon-nuclear systems. © 2010 Elsevier B.V. All rights reserved. Source

Goldhaber S.Z.,Harvard University | Leizorovicz A.,University of Lyon | Kakkar A.K.,University College London | Haas S.K.,TU Munich | And 3 more authors.
New England Journal of Medicine | Year: 2011

BACKGROUND: The efficacy and safety of prolonging prophylaxis for venous thromboembolism in medically ill patients beyond hospital discharge remain uncertain. We hypothesized that extended prophylaxis with apixaban would be safe and more effective than short-term prophylaxis with enoxaparin. METHODS: In this double-blind, double-dummy, placebo-controlled trial, we randomly assigned acutely ill patients who had congestive heart failure or respiratory failure or other medical disorders and at least one additional risk factor for venous thromboembolism and who were hospitalized with an expected stay of at least 3 days to receive apixaban, administered orally at a dose of 2.5 mg twice daily for 30 days, or enoxaparin, administered subcutaneously at a dose of 40 mg once daily for 6 to 14 days. The primary efficacy outcome was the 30-day composite of death related to venous thromboembolism, pulmonary embolism, symptomatic deep-vein thrombosis, or asymptomatic proximal-leg deep-vein thrombosis, as detected with the use of systematic bilateral compression ultrasonography on day 30. The primary safety outcome was bleeding. All efficacy and safety outcomes were independently adjudicated. RESULTS: A total of 6528 subjects underwent randomization, 4495 of whom could be evaluated for the primary efficacy outcome - 2211 in the apixaban group and 2284 in the enoxaparin group. Among the patients who could be evaluated, 2.71% in the apixaban group (60 patients) and 3.06% in the enoxaparin group (70 patients) met the criteria for the primary efficacy outcome (relative risk with apixaban, 0.87; 95% confidence interval [CI], 0.62 to 1.23; P = 0.44). By day 30, major bleeding had occurred in 0.47% of the patients in the apixaban group (15 of 3184 patients) and in 0.19% of the patients in the enoxaparin group (6 of 3217 patients) (relative risk, 2.58; 95% CI, 1.02 to 7.24; P = 0.04). 
CONCLUSIONS: In medically ill patients, an extended course of thromboprophylaxis with apixaban was not superior to a shorter course with enoxaparin. Apixaban was associated with significantly more major bleeding events than was enoxaparin. (Funded by Bristol-Myers Squibb and Pfizer; ClinicalTrials.gov number, NCT00457002.) Copyright © 2011 Massachusetts Medical Society. Source
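The event proportions quoted above can be checked directly from the counts. Note that the crude (unadjusted) ratios computed below differ slightly from the trial's published relative risks (0.87 and 2.58), which come from its stratified, adjudicated analysis:

```python
# Crude risk ratios from the raw counts quoted in the abstract.
# These are unadjusted figures; the published RRs (0.87, 2.58) reflect
# the trial's stratified analysis and therefore differ slightly.

def risk_ratio(events_a, n_a, events_b, n_b):
    """Return (risk_a, risk_b, crude risk ratio a/b)."""
    risk_a = events_a / n_a
    risk_b = events_b / n_b
    return risk_a, risk_b, risk_a / risk_b

# Primary efficacy outcome: apixaban 60/2211 vs enoxaparin 70/2284
eff_a, eff_e, rr_eff = risk_ratio(60, 2211, 70, 2284)
print(f"efficacy: {eff_a:.2%} vs {eff_e:.2%}, crude RR {rr_eff:.2f}")

# Major bleeding: apixaban 15/3184 vs enoxaparin 6/3217
bl_a, bl_e, rr_bl = risk_ratio(15, 3184, 6, 3217)
print(f"bleeding: {bl_a:.2%} vs {bl_e:.2%}, crude RR {rr_bl:.2f}")
```

The percentages reproduce the abstract's 2.71% vs 3.06% (efficacy) and 0.47% vs 0.19% (bleeding).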

Rumpel C.,French National Center for Scientific Research | Kogel-Knabner I.,TU Munich
Plant and Soil | Year: 2011

Despite their low carbon (C) content, most subsoil horizons contribute more than half of the total soil C stocks and therefore need to be considered in the global C cycle. Until recently, the properties and dynamics of C in deep soils were largely ignored. The aim of this review is to synthesize the literature concerning the sources, composition, and mechanisms of stabilisation and destabilisation of soil organic matter (SOM) stored in subsoil horizons. Organic C input into subsoils occurs in dissolved form (DOC) following preferential flow pathways, as aboveground or root litter and exudates along root channels, and/or through bioturbation. The relative importance of these inputs for subsoil C distribution and dynamics still needs to be evaluated. Generally, C in deep soil horizons is characterized by high mean residence times of up to several thousand years. With few exceptions, the carbon-to-nitrogen (C/N) ratio decreases with soil depth, while the stable C and N isotope ratios of SOM increase, indicating that organic matter (OM) in deep soil horizons is highly processed. Several studies suggest that SOM in subsoils is enriched in microbial-derived C compounds and depleted in energy-rich plant material compared with topsoil SOM. However, the chemical composition of SOM in subsoils is soil-type specific and greatly influenced by pedological processes. Interaction with the mineral phase, in particular amorphous iron (Fe) and aluminum (Al) oxides, was reported to be the main stabilisation mechanism in acid and near-neutral soils. In addition, occlusion within soil aggregates has been shown to account for a great proportion of the SOM preserved in subsoils. Laboratory studies have shown that the decomposition of subsoil C with high residence times can be stimulated by the addition of labile C. Other mechanisms leading to destabilisation of SOM in subsoils include disruption of the physical structure and nutrient supply to soil microorganisms.
One of the most important factors leading to protection of SOM in subsoils may be the spatial separation of SOM, microorganisms and extracellular enzyme activity possibly related to the heterogeneity of C input. As a result of the different processes, stabilized SOM in subsoils is horizontally stratified. In order to better understand deep SOM dynamics and to include them into soil C models, quantitative information about C fluxes resulting from C input, stabilization and destabilization processes at the field scale are necessary. © 2010 Springer Science+Business Media B.V. Source

Sattelmayer T.,TU Munich
Journal of Engineering for Gas Turbines and Power | Year: 2011

Premixed combustion of hydrogen-rich mixtures involves the risk of flame flashback through wall boundary layers. For laminar flow conditions, the flashback mechanism is well understood and is usually correlated by a critical velocity gradient at the wall. Turbulent transport inside the boundary layer considerably increases the flashback propensity. Only tube burner setups were investigated in the past, and thus turbulent flashback limits were only derived for a fully developed Blasius wall friction profile. For turbulent flows, details of the flame propagation in proximity to the wall remain unclear. This paper presents results from a new experimental combustion rig, apt for detailed optical investigations of flame flashbacks in a turbulent wall boundary layer developing on a flat plate and being subject to an adjustable pressure gradient. Turbulent flashback limits are derived from the observed flame position inside the measurement section. The fuels investigated cover mixtures of methane, hydrogen, and air at various mixing ratios. The associated wall friction distributions are determined by Reynolds-averaged Navier-Stokes (RANS) computations of the flow inside the measurement section with fully resolved boundary layers. Consequently, the interaction between flame back pressure and incoming flow is not taken into account explicitly, in accordance with the evaluation procedure used for tube burner experiments. The results are compared with literature values, and the critical gradient concept is reviewed in light of the new data. Source
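The laminar critical-gradient concept referred to above goes back to Lewis and von Elbe; it can be stated compactly as follows. The symbols and the form of the critical value are the textbook ones, given here for orientation rather than taken from this paper:

```latex
% Laminar critical-gradient criterion (Lewis & von Elbe):
% u(y): near-wall velocity profile, S_L: laminar flame speed,
% d_p: penetration (quenching) distance.
g = \left.\frac{\partial u}{\partial y}\right|_{y=0},
\qquad \text{flashback occurs if} \quad g < g_c \approx \frac{S_L}{d_p}.
```

The paper's point is that turbulent transport in the boundary layer raises the effective flame propagation speed near the wall, so a correlation built on the laminar criterion with a Blasius friction profile underestimates the flashback propensity.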

Moroz S.,University of Heidelberg | Schmidt R.,TU Munich
Annals of Physics | Year: 2010

The old problem of a singular, inverse square potential in nonrelativistic quantum mechanics is treated employing a field-theoretic, functional renormalization method. An emergent contact coupling flows to a fixed point or develops a limit cycle depending on the discriminant of its quadratic beta function. We analyze the fixed points in both conformal and nonconformal phases and perform a natural extension of the renormalization group analysis to complex values of the contact coupling. Physical interpretation and motivation for this extension is the presence of an inelastic scattering channel in two-body collisions. We present a geometric description of the complex generalization by considering renormalization group flows on the Riemann sphere. Finally, using bosonization, we find an analytical solution of the extended renormalization group flow equations, constituting the main result of our work. © 2009 Elsevier Inc. All rights reserved. Source

Sasamoto T.,Chiba University | Spohn H.,TU Munich
Nuclear Physics B | Year: 2010

We consider the KPZ equation in one space dimension with narrow wedge initial condition, h(x,t=0) = -|x|/δ, δ ≪ 1, evolving into a parabolic profile with superimposed fluctuations. Based on previous results for the weakly asymmetric simple exclusion process with step initial conditions, we obtain a determinantal formula for the one-point distribution of the solution h(x,t), valid for any x and t > 0. The corresponding distribution function converges in the long-time limit, t → ∞, to the Tracy-Widom distribution. The first-order correction is a shift of order t^(-1/3). We provide numerical computations based on the exact formula. © 2010 Elsevier B.V. Source
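The long-time behaviour described above can be written compactly. The following is the standard narrow-wedge result in a commonly used normalization of the KPZ equation, given as an orientation sketch rather than a restatement of the paper's exact determinantal formula:

```latex
% KPZ equation in one space dimension, common normalization:
%   \partial_t h = \tfrac{1}{2}(\partial_x h)^2 + \tfrac{1}{2}\partial_x^2 h + \eta .
% With narrow wedge initial data, the one-point height behaves as
h(x,t) \simeq -\frac{x^2}{2t} - \frac{t}{24}
  + \left(\frac{t}{2}\right)^{1/3} \xi_t ,
\qquad \xi_t \xrightarrow[t\to\infty]{} \xi_{\mathrm{GUE}} ,
% where \xi_{GUE} is Tracy-Widom (GUE) distributed and the approach
% involves corrections of order t^{-1/3}.
```

The parabolic profile, the t^{1/3} fluctuation scale, and the t^(-1/3) correction quoted in the abstract are all visible in this form.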

Straub D.,TU Munich
Probabilistic Engineering Mechanics | Year: 2011

In many instances, information on engineering systems can be obtained through measurements, monitoring or direct observations of system performances and can be used to update the system reliability estimate. In structural reliability analysis, such information is expressed either by inequalities (e.g. for the observation that no defect is present) or by equalities (e.g. for quantitative measurements of system characteristics). When information Z is of the equality type, the a priori probability of Z is zero and most structural reliability methods (SRM) are not directly applicable to the computation of the updated reliability. Hitherto, the computation of the reliability of engineering systems conditional on equality information was performed through first- and second-order approximations. In this paper, it is shown how equality information can be transformed into inequality information, which enables reliability updating by solving a standard structural system reliability problem. This approach enables the use of any SRM, including those based on simulation, for reliability updating with equality information. It is demonstrated on three numerical examples, including an application to fatigue reliability. © 2010 Elsevier Ltd. All rights reserved. Source
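The idea of updating a reliability estimate with equality-type information can be illustrated with a minimal Monte Carlo sketch. The numbers below are entirely hypothetical, and the implementation uses plain likelihood weighting rather than the paper's transformation of equality into inequality information:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: resistance R ~ N(5, 1), fixed load s = 4, and
# failure event F = {R < s}. Equality-type information Z is a noisy
# quantitative measurement m of R with error sd 0.5. This is a
# likelihood-weighting sketch, not the inequality transformation
# developed in the paper.
n = 200_000
r = rng.normal(5.0, 1.0, n)
s = 4.0
pf_prior = np.mean(r < s)                       # prior failure probability

m, sigma_eps = 4.2, 0.5                         # hypothetical measurement
w = np.exp(-0.5 * ((m - r) / sigma_eps) ** 2)   # likelihood of Z given R
pf_post = np.sum(w * (r < s)) / np.sum(w)       # updated failure probability

print(pf_prior, pf_post)  # a low measurement raises the failure estimate
```

Because the measurement of 4.2 falls below the prior mean resistance, the updated failure probability exceeds the prior one; in a conjugate-normal check, the posterior is N(4.36, 0.2), giving an updated probability near 0.21 versus a prior near 0.16.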

Kaufmann T.,University of Bern | Strasser A.,Walter and Eliza Hall Institute of Medical Research | Strasser A.,University of Melbourne | Jost P.J.,TU Munich
Cell Death and Differentiation | Year: 2012

Fas (also called CD95 or APO-1), a member of a subgroup of the tumour necrosis factor receptor superfamily whose members contain an intracellular death domain, can initiate apoptosis signalling and has a critical role in the regulation of the immune system. Fas-induced apoptosis requires recruitment and activation of the initiator caspase, caspase-8 (in humans also caspase-10), within the death-inducing signalling complex. In so-called type 1 cells, proteolytic activation of effector caspases (-3 and -7) by caspase-8 suffices for efficient apoptosis induction. In so-called type 2 cells, however, killing requires amplification of the caspase cascade. This can be achieved through caspase-8-mediated proteolytic activation of the pro-apoptotic Bcl-2 homology domain (BH)3-only protein BH3-interacting domain death agonist (Bid), which then causes mitochondrial outer membrane permeabilisation. This in turn leads to mitochondrial release of apoptogenic proteins, such as cytochrome c and, pertinent for Fas death receptor (DR)-induced apoptosis, Smac/DIABLO (second mitochondria-derived activator of caspase/direct IAP-binding protein with low pI), an antagonist of X-linked inhibitor of apoptosis (XIAP), which imposes a brake on effector caspases. In this review, written in honour of Juerg Tschopp who contributed so much to research on cell death and immunology, we discuss the functions of Bid and XIAP in the control of Fas DR-induced apoptosis signalling, and we speculate on how this knowledge could be exploited to develop novel regimes for treatment of cancer. © 2012 Macmillan Publishers Limited All rights reserved. Source

Hedberg C.,Umea University | Hedberg C.,Max Planck Institute of Molecular Physiology | Itzen A.,TU Munich
ACS Chemical Biology | Year: 2015

In the cell, proteins are frequently modified covalently at specific amino acids with post-translational modifications, leading to a diversification of protein functions and activities. Since the introduction of high-resolution mass spectrometry, new post-translational modifications are constantly being discovered. One particular modification is the adenylylation of mammalian proteins. In adenylylation, adenosine triphosphate (ATP) is utilized to attach an adenosine monophosphate at protein threonine or tyrosine residues via a phosphodiester linkage. Adenylylation is particularly interesting in the context of infections by bacterial pathogens during which mammalian proteins are manipulated through AMP attachment via secreted bacterial factors. In this review, we summarize the role and regulation of enzymatic adenylylation and the mechanisms of catalysis. We also refer to recent methods for the detection of adenylylated proteins by modification-specific antibodies, ATP analogues equipped with chemical handles, and mass spectrometry approaches. Additionally, we review screening approaches for inhibiting adenylylation and briefly discuss related modifications such as phosphocholination and phosphorylation. © 2014 American Chemical Society. Source

Kruglyak V.V.,University of Exeter | Demokritov S.O.,University of Munster | Grundler D.,TU Munich
Journal of Physics D: Applied Physics | Year: 2010

Magnonics is a young field of research and technology emerging at the interfaces between the study of spin dynamics, on the one hand, and a number of other fields of nanoscale science and technology, on the other. We review the foundations and recent achievements in magnonics in view of guiding further progress from studying fundamental magnonic phenomena towards applications. We discuss the major challenges that have to be addressed in future research in order to make magnonics a pervasive technology. © 2010 IOP Publishing Ltd. Source

Levine S.Z.,Bar - Ilan University | Leucht S.,TU Munich
Biological Psychiatry | Year: 2010

Background: To extend the early treatment response literature, this article aims to quantify the extent of heterogeneity and describe the characteristics of treatment response trajectories in schizophrenia. Methods: Data were extracted from two double-blind, randomized clinical trials that compared amisulpride with risperidone in schizophrenia (n = 538). Available Brief Psychiatric Rating Scale (BPRS) administrations from baseline to Week 8 were used to assess treatment response. Trajectories were calculated with mixed-mode latent class regression modeling from which groups were derived. These groups were compared on clinical and background characteristics. Results: At Week 8, five treatment response trajectories were identified, undifferentiated by medication received, and characterized by varied amelioration levels. Three trajectory groups (n = 414, 76.9%) showed a treatment response trend of amelioration. Of these, two trajectory groups had similar dropout rates (22%, 25%), and two did not significantly differ on BPRS % reduction (approximately 55%, approximately 58%). Trajectory Group 2 (n = 44, 8.2%) was characterized by being oldest, a 21.3 BPRS % reduction, the highest BPRS severity scores, the highest dropout rate (61.4%), and 11.8% meeting Andreasen's remission criterion. Among Trajectory Group 4 (n = 80, 14.9%) symptom reduction was considerable during the first 2 weeks and then gradual. This trajectory group was characterized by being youngest, male, suffering from paranoid schizophrenia, the lowest dropout rate (6.3%), average BPRS baseline scores, an 88.9% BPRS reduction, and 96% meeting Andreasen's remission criterion. Conclusions: Generally, amelioration characterizes early treatment response, such that approximately 77% are moderate responders, approximately 15% are rapid treatment responders, and approximately 8% are poor responders. © 2010 Society of Biological Psychiatry. Source

Ketzer B.,TU Munich
Nuclear Instruments and Methods in Physics Research, Section A: Accelerators, Spectrometers, Detectors and Associated Equipment | Year: 2013

A Time Projection Chamber (TPC) is a powerful detector for three-dimensional tracking and particle identification in ultra-high multiplicity events. It is the central tracking device of many experiments, e.g. of the ALICE experiment at CERN. The necessity of a switching electrostatic gate, which prevents ions produced in the amplification region of the multiwire proportional chambers (MWPCs) from entering the drift volume, however, restricts its application to trigger rates of the order of 1 kHz. Charge amplification by Gas Electron Multiplier (GEM) foils instead of proportional wires offers an intrinsic suppression of the ion backflow, although not to the same level as a gating grid. Detailed Monte Carlo simulations have shown that the distortions due to residual space charge from back-drifting ions can be limited to a few cm, and thus can be corrected using standard calibration techniques. A prototype GEM-TPC has been built with the largest active volume to date for a detector of this type. It has been commissioned with cosmic rays and with particle beams at the FOPI experiment at GSI, and was employed for a physics measurement with pion beams. For the future operation of the ALICE TPC at the CERN LHC beyond 2019, where Pb-Pb collision rates of 50 kHz are expected, it is planned to replace the existing MWPCs by GEM detectors, operated in a continuous, triggerless readout mode, thus allowing an increase in event rate by a factor of 100. As a first step of the R&D program, a prototype of an Inner Readout Chamber was equipped with large-size GEM foils and exposed to beams of protons, pions and electrons from the CERN PS. In this paper, new results are shown concerning ion backflow, spatial and momentum resolution of the FOPI GEM-TPC, detector calibration, and dE/dx resolution with both detector prototypes. The perspectives of a GEM-TPC for ALICE with continuous readout will be discussed. © 2013 CERN. Source

Valle E.D.,TU Munich
New Journal of Physics | Year: 2013

A quantum dot can be used as a source of one- and two-photon states and of polarization entangled photon pairs. The emission of such states is investigated here from the point of view of frequency-resolved two-photon correlations. These follow from a spectral filtering of the dot emission, which can be achieved either by using a cavity or by placing a number of interference filters before the detectors. A combination of these various options is used to iteratively refine the emission in a 'distillation' process and arrive at highly correlated states with a high purity. The so-called 'leapfrog processes', where the system undergoes a direct transition from the biexciton state to the ground state by direct emission of two photons, are shown to be central to the quantum features of such sources. Optimum configurations are singled out in a global theoretical picture that unifies the various regimes of operation. © IOP Publishing and Deutsche Physikalische Gesellschaft. Source

Hilscher A.,TU Munich | Knicker H.,CSIC - Institute of Natural Resources and Agriculture Biology of Seville
Soil Biology and Biochemistry | Year: 2011

The present study focuses on the microbial recalcitrance of pyrogenic organic material (PyOM) on a molecular scale. We performed microcosm incubation experiments using 13C- and 15N-enriched grass-derived PyOM mixed with a subsoil material taken from a Haplic Cambisol. Solid-state 13C and 15N NMR studies were conducted to elucidate the humification processes at different stages of PyOM degradation. The chemical structure of the remaining PyOM after incubation was clearly different from the initial pyrogenic material. The proportion of O-containing functional groups was increased, whereas that of aryl C and of N-containing heterocyclic structures had decreased, probably due to mineralisation and conversion to other C and N groups. After 20 months of incubation the aryl C loss reached up to 40% of the initial amount, and up to 29% of the remaining PyOM C was assigned to carboxyl/carbonyl C and O-aryl C. These reactions alter the chemical and physical properties of the char residue and make it more available for further microbial attack but also for adsorption processes. Our study presents direct evidence for the degradation of N-heterocyclic domains in charred plant remains, adding new aspects to the understanding of N cycling in fire-affected ecosystems. © 2010 Elsevier Ltd. Source

Schulz C.M.,TU Munich
British journal of anaesthesia | Year: 2011

Workload assessment is an important tool for improving patient safety in anaesthesia. We tested the hypothesis that heart rate, pupil size, and duration of fixation increase, whereas saccade amplitude decreases with increased workload during simulated critical incidents. Fifteen trainee anaesthetists participated in this randomized cross-over trial. Each participant used a head-mounted eye-tracking device (EyeSeeCam) during induction of general anaesthesia in a full-scale simulation during three different sessions. No critical incident was simulated in the first session. In a randomized order, workload was increased by simulation of a critical incident in the second or third session. Pupil size, duration of fixations, saccadic amplitude, and heart rate of each participant and the simulator conditions were recorded continuously and synchronized. The data were analysed by paired sample t-tests and mixed-effects regression analysis. The findings of the second and third sessions of 11 participants were analysed. Pupil diameter and heart rate increased simultaneously as the severity of the simulated critical incident increased. Allowing for individual effects, the simulator conditions explained 92.6% of the variance in pupil diameter and 93.6% of the variance in heart rate (both P<0.001). The duration of fixation decreased with increased workload. The saccadic amplitude remained unaffected by workload changes. Pupil size and heart rate reflect workload increase within simulator sessions, but they do not permit overall workload comparisons between individuals or sessions. Contrary to our assumption, the duration of fixation decreased with increased workload. Saccade amplitude did not reflect workload fluctuations. Source
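The phrase "allowing for individual effects" amounts to fitting a common workload slope while giving each participant their own baseline. A minimal sketch on synthetic data is shown below; every number (subjects, slope, noise level) is made up for illustration, and the study itself used mixed-effects regression rather than the fixed-effects dummy coding used here:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic illustration: per-subject intercepts plus a common workload
# slope, fitted by ordinary least squares with subject dummy variables.
# All values are hypothetical.
n_subj, n_obs = 11, 60
subj = np.repeat(np.arange(n_subj), n_obs)
workload = rng.uniform(0, 1, n_subj * n_obs)
intercepts = rng.normal(4.0, 0.3, n_subj)   # individual baseline pupil size (mm)
pupil = intercepts[subj] + 0.8 * workload + rng.normal(0, 0.05, subj.size)

# Design matrix: one dummy column per subject, plus the workload column.
X = np.column_stack([np.eye(n_subj)[subj], workload])
beta, *_ = np.linalg.lstsq(X, pupil, rcond=None)

resid = pupil - X @ beta
r2 = 1 - resid.var() / pupil.var()
print(f"common workload slope {beta[-1]:.2f}, variance explained {r2:.3f}")
```

Once each subject's baseline is absorbed into its own intercept, the shared slope captures the within-session workload effect; this is why, as the abstract notes, such a model explains variance within sessions but does not support workload comparisons between individuals.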

Heister K.,TU Munich
Geoderma | Year: 2014

The specific surface area (SSA) of soils is a basic property and is closely related to other physical and chemical properties such as cation exchange capacity, clay content, organic matter content, porosity, and hydrodynamic and geotechnical characteristics. Therefore, the SSA of soils has been measured frequently for decades. However, no universal method to determine SSA exists. The existing methods can generally be grouped into two categories, the adsorption of gases and the adsorption of polar liquids or molecules from solution. Depending on the method applied, the SSA of a soil can vary, as these different methods probe different surfaces of the soil. The most frequently used representatives of these two groups for measuring SSA of soils are the physisorption of nitrogen gas at 77 K (BET-N2) for the gas adsorption methods, yielding the external surface area of the mineral particles, and the retention of ethylene glycol monoethyl ether (EGME) for the adsorption of polar liquids, probing the total surface area including interlayers of clay minerals and micropores of organic material. Studies dealing with the determination of SSA of soils are numerous, and it has also been shown that the resulting SSA values differ not only depending on the method but also on the sorbate used and the sample pretreatment. This review briefly presents the principles of these methods and emphasises their limitations and difficulties when applied to soil samples, such as sample pretreatment, (micro-)porosity and the attachment of organic material to mineral surfaces. In particular, the drying of the samples prior to measurement seems to be crucial for the results obtained. Recommendations are given in order to improve the quality of the data and to facilitate the comparability of SSA data across studies.
It is shown for clayey soil samples that the relationship between BET-N2 and EGME SSA depends predominantly on the type of clay mineral and not on the content of organic material. Thus, from the SSA measurements, an estimation of the dominant clay mineral seems possible. Consequently, a suitable combination of various SSA determination methods together with related techniques can result in a more detailed characterisation of the reactive interface of a soil to the liquid and gaseous phases. © 2013 Elsevier B.V. Source
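The EGME method converts a retained sorbate mass into a surface area via the commonly cited monolayer coverage of 2.86 x 10^-4 g EGME per m^2 (going back to Carter and co-workers). A small conversion sketch, with hypothetical sample masses:

```python
# EGME-retention surface area. The monolayer constant is the commonly
# cited literature value; the sample masses below are hypothetical.
EGME_MONOLAYER = 2.86e-4  # g EGME per m^2 of covered surface

def ssa_egme(m_egme_retained_g, m_sample_g):
    """Specific surface area in m^2/g from the retained EGME mass."""
    return m_egme_retained_g / (m_sample_g * EGME_MONOLAYER)

# e.g. 1.00 g of clayey soil retaining 0.030 g EGME at equilibrium
print(f"{ssa_egme(0.030, 1.00):.0f} m^2/g")
```

Since EGME also enters clay interlayers and organic micropores, this value typically exceeds the BET-N2 external surface area of the same sample, which is the contrast the review builds on.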

Ibarra A.,TU Munich
Nuclear and Particle Physics Proceedings | Year: 2015

We review the present theoretical status of particle dark matter, concentrating on two of the most attractive dark matter candidates: sterile neutrinos and weakly interacting massive particles. We review the basic theoretical aspects of each of these two scenarios as well as their observational signatures, with emphasis on the possible signals recently reported by various authors. © 2015 Elsevier B.V. Source

Rusch H.,Behavioural and Institutional Economics | Rusch H.,TU Munich
Proceedings of the Royal Society B: Biological Sciences | Year: 2014

Drawing on an idea proposed by Darwin, it has recently been hypothesized that violent intergroup conflict might have played a substantial role in the evolution of human cooperativeness and altruism. The central notion of this argument, dubbed 'parochial altruism', is that two genetic or cultural traits, aggressiveness towards out-groups and cooperativeness towards the in-group, including self-sacrificial altruistic behaviour, might have coevolved in humans. This review assesses the explanatory power of current theories of 'parochial altruism'. After a brief synopsis of the existing literature, two pitfalls in the interpretation of the most widely used models are discussed: potential direct benefits and the high relatedness between group members implicitly induced by assumptions about conflict structure and frequency. Then, a number of simplifying assumptions made in the construction of these models are pointed out which currently limit their explanatory power. Next, relevant empirical evidence from several disciplines which could guide future theoretical extensions is reviewed. Finally, selected alternative accounts of evolutionary links between intergroup conflict and intragroup cooperation are briefly discussed which could be integrated with parochial altruism in the future. © 2014 The Author(s) Published by the Royal Society. All rights reserved. Source

Pretzsch H.,TU Munich
Forest Ecology and Management | Year: 2014

Mixed-species forest stands are well explored in their favourable ecological, economic, and socio-economic functions and services compared with pure stands, but still poorly understood in their structure and functioning. Canopy structure and tree morphology affect the environmental conditions within the stand, the tree growth, and thereby most forest functions and services. Here, I review how canopy structure and crown morphology in mixed stands can differ from pure stands and how this depends on the selection of tree species and interactions between them. The focus is on the macrostructure of canopy and crowns derived from the trees' positions, their convex crown hulls, and their space filling with branches. In mixed canopies the sum of the crown projection area, but not the ground coverage by crowns, mostly exceeds pure stands due to multiple crown overlaps. The interspecific differences in crown shape and allometric scaling cause a 'selection effect' when complementary species are combined. Furthermore, in an interspecific environment, 'true mixing effects', such as intraspecific shifts in the size, shape, and inner space filling of crowns, may occur. The much denser and more plastic canopy space filling in mixed stands may increase light interception, stand density, productivity, and growth resilience to disturbances. I discuss the relevance of interspecific interactions for forest management, model building, and theory development and outline perspectives for further research into stand canopy and crown structure. © 2014 The Author.

Schmidbaur H.,TU Munich | Schmidbaur H.,King Abdulaziz University | Raubenheimer H.G.,Stellenbosch University | Dobrzanska L.,Catholic University of Leuven
Chemical Society Reviews | Year: 2014

In the first part of this review, the characteristics of Au-H bonds in gold hydrides are surveyed, including data on recently prepared stable organometallic complexes with gold(i) and gold(iii) centers. In the second part, reports are summarized in which authors have tried to provide evidence for hydrogen bonds to gold of the type Au⋯H-X. Such interactions have been proposed for gold atoms in the Au(-i), Au(0), Au(i), and Au(iii) oxidation states as hydrogen bonding acceptors and H-X units with X = O, N, C as donors, based on both experimental and quantum chemistry studies. To complement these findings, the literature was screened for examples with similar molecular geometries for which such bonding has not yet been considered. In the discussion of the results, the recently issued IUPAC definitions of hydrogen bonding and the currently accepted description of agostic interactions have been used as guidelines to rank the Au⋯H-X interactions in this broad range of weak chemical bonding. From the available data it appears that all the intra- and intermolecular Au⋯H-X contacts are associated with very low binding energies and non-specific directionality. To date, the energetics have not been estimated, because there are no thermochemical and only very limited IR/Raman and temperature-dependent NMR data that can be used as reliable references. Where conspicuous structural or spectroscopic effects have been observed, explanations other than Au⋯H-X hydrogen bonding can also be advanced in most cases. Although numerous examples of short Au⋯H-X contacts exist in the literature, it seems, at this stage, that these probably make only very minor contributions to the energy of a given system and have only a marginal influence on the molecular conformations which have so far most often attracted researchers to this topic. Further, more dedicated investigations will be necessary before well-founded conclusions can be drawn. © 2014 The Royal Society of Chemistry.

Sigterman K.E.,TU Munich
The Cochrane database of systematic reviews | Year: 2013

Approximately 25% of adults regularly experience heartburn, a symptom of gastro-oesophageal reflux disease (GORD). Most patients are treated empirically (without specific diagnostic evaluation, e.g. endoscopy). Among patients who have an upper endoscopy, findings range from a normal appearance or mild erythema to severe oesophagitis with stricture formation. Patients without visible damage to the oesophagus have endoscopy negative reflux disease (ENRD). The pathogenesis of ENRD and its response to treatment may differ from GORD with oesophagitis. The objective was to summarise, quantify and compare the efficacy of short-term use of proton pump inhibitors (PPIs), H2-receptor antagonists (H2RAs) and prokinetics in adults with GORD, treated empirically and in those with endoscopy negative reflux disease (ENRD). We searched MEDLINE (January 1966 to November 2011), EMBASE (January 1988 to November 2011), and EBMR in November 2011. We included randomised controlled trials reporting symptomatic outcome after short-term treatment for GORD using proton pump inhibitors, H2-receptor antagonists or prokinetic agents. Participants had to be either from an empirical treatment group (no endoscopy used in treatment allocation) or from an endoscopy negative reflux disease group (no signs of erosive oesophagitis). Two authors independently assessed trial quality and extracted data. Thirty-four trials (1314 participants) were included: fifteen in the empirical treatment group, fifteen in the ENRD group and four in both. In empirical treatment of GORD the risk ratio (RR) for heartburn remission (the primary efficacy variable) in placebo-controlled trials for PPIs was 0.37 (two trials, 95% confidence interval (CI) 0.32 to 0.44), for H2RAs 0.77 (two trials, 95% CI 0.60 to 0.99) and for prokinetics 0.86 (one trial, 95% CI 0.73 to 1.01).
In a direct comparison, PPIs were more effective than H2RAs (seven trials, RR 0.66, 95% CI 0.60 to 0.73) and prokinetics (two trials, RR 0.53, 95% CI 0.32 to 0.87). In the treatment of ENRD, the RR for heartburn remission for PPI versus placebo was 0.71 (ten trials, 95% CI 0.65 to 0.78) and for H2RA versus placebo was 0.84 (two trials, 95% CI 0.74 to 0.95). The RR for PPI versus H2RA was 0.78 (three trials, 95% CI 0.62 to 0.97) and for PPI versus prokinetic 0.72 (one trial, 95% CI 0.56 to 0.92). PPIs are more effective than H2RAs in relieving heartburn in patients with GORD who are treated empirically and in those with ENRD, although the magnitude of benefit is greater for those treated empirically.
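The risk-ratio arithmetic behind figures such as "RR 0.66, 95% CI 0.60 to 0.73" can be illustrated with a minimal sketch for a single trial's 2×2 table; the counts below are hypothetical, not data from the review:

```python
import math

def risk_ratio(events_t, n_t, events_c, n_c):
    """Risk ratio with a 95% CI from one 2x2 trial table.
    The standard error of log(RR) uses the usual delta-method formula."""
    rr = (events_t / n_t) / (events_c / n_c)
    se = math.sqrt(1 / events_t - 1 / n_t + 1 / events_c - 1 / n_c)
    lo, hi = (math.exp(math.log(rr) + z * se) for z in (-1.96, 1.96))
    return rr, lo, hi

# Hypothetical trial: heartburn persists in 30/100 on active drug vs 60/100 on placebo
rr, lo, hi = risk_ratio(30, 100, 60, 100)
print(f"RR = {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")  # RR = 0.50 (95% CI 0.36 to 0.70)
```

A pooled estimate across trials, as in the review, would additionally weight each trial's log-RR under a random-effects model.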

Krawczyk M.,Adam Mickiewicz University | Grundler D.,TU Munich | Grundler D.,Ecole Polytechnique Federale de Lausanne
Journal of Physics Condensed Matter | Year: 2014

Research efforts addressing spin waves (magnons) in micro- and nanostructured ferromagnetic materials have increased tremendously in recent years. Corresponding experimental and theoretical work in magnonics faces significant challenges in that spin-wave dispersion relations are highly anisotropic and different magnetic states might be realized via, for example, the magnetic field history. At the same time, these features offer novel opportunities for wave control in solids going beyond photonics and plasmonics. In this topical review we address materials with a periodic modulation of magnetic parameters that give rise to artificially tailored band structures and allow unprecedented control of spin waves. In particular, we discuss recent achievements and perspectives of reconfigurable magnonic devices for which band structures can be reprogrammed during operation. Such characteristics might be useful for multifunctional microwave and logic devices operating over a broad frequency regime on either the macro- or nanoscale. © 2014 IOP Publishing Ltd.

Dev P.S.B.,University of Manchester | Dev P.S.B.,TU Munich | Mohapatra R.N.,University of Maryland University College
Physical Review Letters | Year: 2015

We show that the excess events observed in a number of recent LHC resonance searches can be simultaneously explained within a nonsupersymmetric left-right inverse seesaw model for neutrino masses with WR mass around 1.9 TeV. The minimal particle content that leads to gauge coupling unification in this model predicts gR≃0.51 at the TeV scale, which is consistent with data. The extra color singlet, SU(2)-triplet fermions required for unification can be interpreted as the dark matter of the Universe. Future measurements of the ratio of same-sign to opposite-sign dilepton events can provide a way to distinguish this scenario from the canonical cases of type-I and inverse seesaw, i.e., provide a measure of the relative magnitudes of the Dirac and Majorana masses of the right-handed neutrinos in the SU(2)R doublet of the left-right symmetric model. © 2015 American Physical Society.

Blum M.,TU Munich
Explorations in Economic History | Year: 2011

The First World War hit Germany severely, particularly the agricultural sector, because the outbreak came unexpectedly and its duration exceeded all expectations. Many resources necessary for agricultural production were claimed by the war economy, which led to shortages and shrinking supplies. Many agricultural laborers were drafted, and the blockade imposed by the allies cut off a great deal of Germany's imports. As a consequence, the nutritional situation was devastating, particularly after 1916, and hit all groups of German society. The period under observation provides one of the most drastic natural experiments in the 20th century. This study uses anthropometric data from German soldiers who served in the Second World War to trace living standards between the 1900s and the 1920s. In contrast to other approaches, this paper is able to distinguish between social groups by occupation, religious denomination, regional origin, and city size. The results suggest that although all social strata were hit by famine conditions, the height of farmers, urban citizens, Catholics, and especially individuals born in the highly integrated food-import regions along the coast and the banks of the Rhine declined most. © 2011 Elsevier Inc.

Zahel T.,TU Munich
Journal of computer assisted tomography | Year: 2013

This study aimed to evaluate novel segmentation software for automated liver volumetry and segmentation with regard to segmentation speed and interobserver variability. Computed tomographic scans of 20 patients without underlying liver disease and 10 patients with liver metastasis from colorectal cancer were analyzed with the software. Liver segmentation was performed after manual placement of specific landmarks, into 9 segments according to the Couinaud model as well as into 4 segments, the latter being important for surgery planning. Time for segmentation was measured, and the segmental and total liver volumes obtained by the different readers were compared by calculating intraclass correlation coefficients (ICCs). Volumes of liver tumor burden were evaluated similarly. Liver segmentation could be performed rapidly, in 3 minutes or less. Comparison of total liver volumes revealed a perfect ICC of greater than 0.997. Segmental liver volumes within the 9-part segmentation showed fair to moderate correlation for the left lobe and good to excellent correlations for the right lobe. When applying the 4-part segmentation relevant to clinical practice, strong to perfect agreement was observed. Similarly, tumor volumes showed a perfect ICC (>0.998). Rapid determination of total and segmental liver volumes can be obtained using novel segmentation software suitable for daily clinical practice.
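The agreement statistic reported here, the intraclass correlation coefficient, can be sketched from first principles as a two-way random-effects ICC(2,1); the volume table below is made up for illustration, not the study's data:

```python
def icc_2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single rater,
    computed from an n-subjects x k-raters table via the ANOVA mean squares."""
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    subj = [sum(row) / k for row in data]
    rater = [sum(row[j] for row in data) / n for j in range(k)]
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ss_subj = k * sum((m - grand) ** 2 for m in subj)
    ss_rater = n * sum((m - grand) ** 2 for m in rater)
    msr = ss_subj / (n - 1)                              # between-subject mean square
    msc = ss_rater / (k - 1)                             # between-rater mean square
    mse = (ss_total - ss_subj - ss_rater) / ((n - 1) * (k - 1))  # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Two readers' total liver volumes (mL) for four patients (hypothetical numbers):
volumes = [[1500, 1510], [1200, 1195], [1800, 1805], [1000, 1002]]
print(round(icc_2_1(volumes), 3))  # near-perfect agreement, ICC > 0.99
```

With readers differing by only a few millilitres while patients differ by hundreds, the between-subject variance dominates and the ICC approaches 1, which is the pattern the abstract describes for total liver volumes.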

Murguia D.I.,Wuppertal Institute for Climate | Bohling K.,TU Munich
Journal of Cleaner Production | Year: 2013

Multinational mining companies operating in Latin America increasingly publish sustainability reports which outline their contributions to sustainable development. Companies argue that these reports help communities better understand the importance of the benefits created by mining. However, we argue that sustainability reporting can only play a role in improving a company's performance and reputation if the quality of the reported data is good enough to answer community-raised contentious issues and if such issues are tackled through a stakeholder engagement process which includes 'anti-mining' groups. The paper examines a mining conflict at Argentina's Bajo de la Alumbrera open pit mine. The assessment is based on a content analysis of Alumbrera's Sustainability Report (SR), primarily from 2009, complemented with insights from the 2010 and 2011 reports. The study reveals that environmental and economic indicators are the most contentious and least reported. The reports examined only briefly acknowledge these issues, and fail to detail the procedures followed to identify and engage stakeholders. © 2012 Elsevier Ltd. All rights reserved.

Sprenger T.,TU Munich
Current Pain and Headache Reports | Year: 2010

Tension type headache (TTH) is a primary headache disorder considered common in children and adolescents. It remains debatable whether TTH and migraine are separate biological entities. This review summarizes the most recent literature of TTH with regards to children and adolescents. Further studies of TTH are needed to develop a biologically based classification system that may be facilitated through understanding changes in the developing brain during childhood and adolescence. © 2010 The Author(s).

Krauss A.,TU Munich
Journal of Automated Reasoning | Year: 2010

Based on inductive definitions, we develop a tool that automates the definition of partial recursive functions in higher-order logic (HOL) and provides appropriate proof rules for reasoning about them. Termination is modeled by an inductive domain predicate which follows the structure of the recursion. Since a partial induction rule is available immediately, partial correctness properties can be proved before termination is established. It turns out that this modularity also facilitates termination arguments for total functions, in particular for nested recursions. Our tool is implemented as a definitional package extending Isabelle/HOL. Various extensions provide convenience to the user: pattern matching, default values, tail recursion, mutual recursion and currying. © 2009 Springer Science+Business Media B.V.

Broy M.,TU Munich
Computer Journal | Year: 2010

A theory for the systematic development of distributed interactive software systems constructed in terms of components requires a basic system model and description techniques supporting specific views and abstractions of systems. Typical system views are the interface, the distribution, or the state transition view. We show how to represent these views by mathematics and logics. The development of systems consists of working out these views, leading step by step to implementations in terms of sets of distributed, concurrent, interacting state machines. For large systems, the development is carried out by refinement through several levels of abstraction. We formalize the typical steps of the development process and express and justify them directly in logic. In particular, we treat three types of refinement steps: horizontal refinement, which stays within one level of abstraction; vertical refinement, which addresses the transition from one level of abstraction to another; and implementation by glass box refinement. We introduce refinement relations to capture these three dimensions of the development space. We derive verification rules for the refinement steps and show the modularity of the approach. © The Author 2009. Published by Oxford University Press on behalf of The British Computer Society. All rights reserved.

Neumann T.,TU Munich | Weikum G.,Maxplanckinstitut For Informatik
Proceedings of the VLDB Endowment | Year: 2010

The RDF data model is gaining importance for applications in computational biology, knowledge sharing, and social communities. Recent work on RDF engines has focused on scalable performance for querying, and has largely disregarded updates. In addition to incremental bulk loading, applications also require online updates with flexible control over multi-user isolation levels and data consistency. The challenge lies in meeting these requirements while retaining the capability for fast querying. This paper presents a comprehensive solution that is based on an extended deferred-indexing method with integrated versioning. The version store enables time-travel queries that are efficiently processed without adversely affecting queries on the current data. For flexible consistency, transactional concurrency control is provided with options for either snapshot isolation or full serializability. All methods are integrated in an extension of the RDF-3X system, and their very good performance for both queries and updates is demonstrated by measurements of multi-user workloads with real-life data as well as stress-test synthetic loads. © 2010 VLDB Endowment.
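The version-store idea, in which each triple carries a validity interval so that time-travel queries can run against any past snapshot without disturbing the current data, can be sketched with a toy model (this is an illustrative simplification, not RDF-3X's actual deferred-indexing implementation):

```python
class VersionedTripleStore:
    """Toy versioned triple store: each triple carries [added_ver, removed_ver)."""

    def __init__(self):
        self.version = 0
        self.rows = []  # tuples of (subject, predicate, object, added, removed)

    def commit(self, added=(), removed=()):
        """Apply a batch of additions/removals as one new version."""
        self.version += 1
        for t in removed:  # close the validity interval of removed triples
            self.rows = [(s, p, o, a, self.version)
                         if (s, p, o) == t and r is None else (s, p, o, a, r)
                         for (s, p, o, a, r) in self.rows]
        for s, p, o in added:
            self.rows.append((s, p, o, self.version, None))
        return self.version

    def query(self, pattern, as_of=None):
        """Match an (s, p, o) pattern (None = wildcard) at a given version."""
        v = self.version if as_of is None else as_of
        return [(s, p, o) for (s, p, o, a, r) in self.rows
                if a <= v and (r is None or r > v)
                and all(q is None or q == x for q, x in zip(pattern, (s, p, o)))]

store = VersionedTripleStore()
v1 = store.commit(added=[("gene:TP53", "locatedIn", "chr17")])
v2 = store.commit(removed=[("gene:TP53", "locatedIn", "chr17")])
print(store.query((None, "locatedIn", None), as_of=v1))  # triple visible at v1
print(store.query((None, "locatedIn", None)))            # empty at current version
```

The time-travel query only filters on the interval columns, so historical reads never block or rewrite current data; a real engine would additionally index the intervals and layer snapshot-isolation or serializable concurrency control on top.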

Spohn H.,TU Munich | Stoltz G.,University Paris Est Creteil
Journal of Statistical Physics | Year: 2015

We study the BS model, which is a one-dimensional lattice field theory taking real values. Its dynamics is governed by coupled differential equations plus random nearest neighbor exchanges. The BS model has two locally conserved fields. The peak structure of their steady state space–time correlations is determined through numerical simulations and compared with nonlinear fluctuating hydrodynamics, which predicts a traveling peak with KPZ scaling function and a standing peak with a scaling function given by the maximally asymmetric Lévy distribution with parameter α=5/3. As a by-product, we completely classify the universality classes for two coupled stochastic Burgers equations with arbitrary coupling coefficients. © 2015, Springer Science+Business Media New York.

Darsow U.,TU Munich
Current Opinion in Allergy and Clinical Immunology | Year: 2012

Purpose of review: Aeroallergens are relevant eliciting factors not only of respiratory allergy but also of atopic eczema in subgroups of patients. Owing to the low number of controlled studies, the use of allergen-specific immunotherapy (ASIT) as a potentially curative therapy, as in respiratory atopic diseases, is controversial in treating atopic eczema. This article summarizes theoretical aspects and recent results of clinical trials associated with ASIT in atopic eczema. Recent findings: The literature demonstrates variability in study design and results, but ASIT has the potential to improve the course of atopic eczema if type I sensitizations are present. Studies suggest the efficacy of ASIT on eczema not only in house dust mite allergy but also in patients with birch or grass pollen sensitization. In several studies, only defined subgroups of patients with atopic eczema showed positive results of ASIT. ASIT was generally well tolerated. Summary: Atopic eczema patients with relevant allergies might benefit from ASIT as an additional therapeutic option. Side effects have been overestimated in the past. Hypothetically, the atopy patch test helps to identify an atopic eczema patient subgroup for successful ASIT. © 2012 Wolters Kluwer Health | Lippincott Williams & Wilkins.

Geist J.,TU Munich
Ecological Indicators | Year: 2011

Freshwater ecosystems provide goods and services of critical importance to human societies, yet they are among the most heavily altered ecosystems, with a disproportionate loss of biodiversity. Major threats to freshwater biodiversity include overexploitation, water pollution, fragmentation, destruction or degradation of habitat, and invasions by non-native species. Alterations of natural flow regimes by man-made dams, land-use changes, river impoundments, and water abstraction often have profound impacts on lotic communities. An understanding of the functional interactions and processes in freshwater ecosystems presents a major challenge for scientists, but is crucial for effective and sustainable restoration. Most conservation approaches to date have considered single-species or single-level strategies. In contrast, the concept of 'Integrative Freshwater Ecology and Biodiversity Conservation' (IFEBC) proposed herein addresses the interactions between abiotic and biotic factors on different levels of organization qualitatively and quantitatively. It consequently results in a more holistic understanding of biodiversity functioning and management. Core questions include modeling of the processes in aquatic key habitats and their functionality based on the identification and quantification of factors which control the spatial and temporal distribution of biodiversity and productivity in aquatic ecosystems. The context and importance of research into IFEBC is illustrated using case studies from three major areas of research: (i) aquatic habitat quality and restoration ecology, (ii) the genetic and evolutionary potential of aquatic species, and (iii) the detection of stress and toxic effects in aquatic ecosystems using biomarkers. In conclusion, our understanding of the functioning of aquatic ecosystems and conservation management can greatly benefit from the methodological combination of molecular and ecological tools. © 2011 Elsevier Ltd.

Krishnan Y.,Tata Institute of Fundamental Research | Simmel F.C.,TU Munich
Angewandte Chemie - International Edition | Year: 2011

In biology, nucleic acids are carriers of molecular information: DNA's base sequence stores and imparts genetic instructions, while RNA's sequence plays the role of a messenger and a regulator of gene expression. As biopolymers, nucleic acids also have exciting physicochemical properties, which can be rationally influenced by the base sequence in myriad ways. Consequently, in recent years nucleic acids have also become important building blocks for bottom-up nanotechnology: as molecules for the self-assembly of molecular nanostructures and also as a material for building machinelike nanodevices. In this Review we will cover the most important developments in this growing field of nucleic acid nanodevices. We also provide an overview of the biochemical and biophysical background of this field and the major "historical" influences that shaped its development. Particular emphasis is laid on DNA molecular motors, molecular robotics, molecular information processing, and applications of nucleic acid nanodevices in biology. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

Sobolewski A.L.,Polish Academy of Sciences | Domcke W.,TU Munich
Physical Chemistry Chemical Physics | Year: 2012

The photochemistry of the hydrogen-bonded oxotitanium porphyrin-water complex (TiOP-H2O) has been explored with electronic-structure calculations. It is shown that intramolecular charge-transfer processes, which are initiated by the excitation of the Soret band of TiOP, accumulate electronic charge on the oxygen atom of TiOP, which in turn abstracts a hydrogen atom from water by an exoenergetic and essentially barrierless hydrogen-transfer reaction, resulting in the TiPOH-OH biradical. About 75% of the absorbed photon energy is thus stored as chemical energy in two ground-state radicals. Absorption of a second photon by TiPOH can result in the detachment of the H radical and recovery of the photocatalyst TiOP. Again, about 75% of the photon energy is stored in the dissociation energy of TiPOH. Overall, a water molecule is decomposed into H and OH radicals by the absorption of two visible photons. Exoenergetic radical recombination reactions can yield molecular hydrogen, molecular oxygen or hydrogen peroxide as closed-shell products. © 2012 the Owner Societies.

Muller T.-O.,TU Munich
Physical Review Letters | Year: 2013

For scattering by potentials with attractive inverse-cube (−C₃/r³) tails, the threshold law for elastic collisions is presented. The expansion of the scattering phase shift contains all terms up to and including O(k²) and only relies on the value of the threshold quantum number's remainder Δ ∈ [0, 1), which accounts for short-range deviations of the full potential from the pure −C₃/r³ form. In contrast to previous approaches, the threshold law presented provides a connection to the regular solution at zero energy as well as to the position of a weakly bound s-wave state. © 2013 American Physical Society.

Morral A.F.I.,Ecole Polytechnique Federale de Lausanne | Morral A.F.I.,TU Munich
IEEE Journal on Selected Topics in Quantum Electronics | Year: 2011

To date, the use of gold for the synthesis of nanowires has proven nearly impossible to circumvent, regardless of the potential negative effects on the nanowires' physical properties. In this paper, the synthesis of gallium arsenide nanowires without the use of gold as a catalyst is reviewed. The review focuses on gallium-assisted growth and selective area epitaxy, revealing the common and differing growth mechanisms and resulting properties. In particular, we show how the excellent material quality also results in excellent optical properties of gold-free GaAs nanowires and related heterostructures. Finally, the perspectives for future applications are discussed. © 2011 IEEE.

Mackereth C.D.,University of Bordeaux Segalen | Sattler M.,Helmholtz Center Munich | Sattler M.,TU Munich
Current Opinion in Structural Biology | Year: 2012

Protein-RNA interactions play essential roles in gene regulation and RNA metabolism. While high-resolution structures have revealed principles of RNA recognition by individual RNA binding domains (RBDs), the presence of multiple RBDs in many eukaryotic proteins suggests additional modes of RNA recognition by combination and cooperation of these interactions. Recent structures, together with biochemical and biophysical studies have revealed novel principles of RNA recognition by multi-domain proteins. These examples highlight an important role for dynamics in RNA recognition, with mechanisms including fly-casting and conformational selection, and advocate the use of solution techniques for their analysis. © 2012 Elsevier Ltd.

Sackmann E.,TU Munich
Advances in Colloid and Interface Science | Year: 2014

The endoplasmic reticulum (ER) comprises flattened vesicles (cisternae) with worm holes, studded with ribosomes, coexisting with a network of interconnected tubes which can extend to the cell periphery or even penetrate nerve axons. The coexisting topologies enclose a continuous luminal space. The complex ER topology is specifically controlled by a group of ER-shaping proteins often called reticulons (discovered by the group of Tom Rapoport [1]). They include atlastin, reticulon, REEP and the MT severing protein spastin. A generic ER-shape-controlling factor is the necessity to maximize the area-to-volume ratio of ER membranes in the highly crowded cytoplasmic space. I present a model of the ER-shaping function of the reticulons based on the Helfrich bending elasticity concept of soft shell shape changes. Common structural motifs of the reticulons are hydrophobic sequences forming wedge-shaped hairpins which penetrate the lipid bilayer of the cell membranes. The wedge-like hydrophobic anchors can both induce the high curvature of the tubular ER fraction and ensure the preferred distribution of the reticulons along the tubules. Tubular junctions may be stabilized by the reticulons forming two forceps twisted by 90°. The ER extensions to the cell periphery and the axons are achieved by coupling of the tubes to the microtubules, which is mediated by REEP and spastin. Finally, I present a model of the tension-driven homotype fusion of ER membranes by atlastin, based on analogies to the SNARE-complexin-SNARE driven heterotype fusion process. © 2014 Elsevier B.V.
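The Helfrich bending-elasticity concept invoked here assigns a membrane a curvature energy; in its standard form (with κ the bending modulus, H the local mean curvature, c₀ the spontaneous curvature, and A the membrane area) it reads:

```latex
E_{\mathrm{bend}} \;=\; \frac{\kappa}{2} \int_{A} \left( 2H - c_{0} \right)^{2} \, \mathrm{d}A
```

In this picture, the wedge-shaped reticulon anchors locally shift c₀, so reticulon-rich membrane lowers its bending energy by adopting the high curvature of tubules rather than remaining in flat cisternae.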

Horst K.,TU Munich
Molecular nutrition & food research | Year: 2010

The metabolism of 1,8-cineole after ingestion of sage tea was studied. After application of the tea, the metabolites 2-hydroxy-1,8-cineole, 3-hydroxy-1,8-cineole, 9-hydroxy-1,8-cineole and, for the first time in humans, 7-hydroxy-1,8-cineole were identified in plasma and urine of one volunteer. For quantitation of these metabolites and the parent compound, stable isotope dilution assays were developed after synthesis of [(2)H(3)]-1,8-cineole, [9/10-(2)H(3)]-2-hydroxy-1,8-cineole and [(13)C,(2)H(2)]-9-hydroxy-1,8-cineole as internal standards. Using these standards, we quantified 1,8-cineole by solid phase microextraction GC-MS and the hydroxy-1,8-cineoles by LC-MS/MS after deconjugation in blood and urine of the volunteer. After consumption of 1.02 mg 1,8-cineole (19 μg/kg bw), the hydroxycineoles along with their parent compound were detectable in the blood plasma of the volunteer under study after liberation from their glucuronides, with 2-hydroxycineole being the predominant metabolite at a maximum plasma concentration of 86 nmol/L, followed by the 9-hydroxy isomer at a maximum plasma concentration of 33 nmol/L. The parent compound 1,8-cineole showed a low maximum plasma concentration of 19 nmol/L. In urine, 2-hydroxycineole also showed the highest contents, followed by its 9-isomer. Summing up the urinary excretion over 10 h, 2-hydroxycineole, the 9-isomer, the 3-isomer and the 7-isomer accounted for 20.9, 17.2, 10.6 and 3.8% of the cineole dose, respectively.

Dean-Ben X.L.,Helmholtz Center Munich | Razansky D.,Helmholtz Center Munich | Razansky D.,TU Munich
Optics Express | Year: 2013

We report on a novel hand-held imaging probe for real-time optoacoustic visualization of deep tissues in three dimensions. The system incorporates an annular two-dimensional array of ultrasonic sensors densely distributed on a spherical surface. Simultaneous recording and processing of time-resolved data from all the channels enables acquisition of entire volumetric data sets for each illumination laser pulse. The proposed solution utilizes a transparent membrane in order to allow efficient coupling of optoacoustically generated waves to the ultrasonic detectors while avoiding direct contact of the imaged object with the coupling medium. The handheld approach further allows convenient handling of both pre-clinical experiments as well as clinical measurements in human subjects. Here we demonstrate an imaging speed of 10 volumetric frames per second with spatial resolution down to 200 micrometers in the imaged region while also achieving imaging depth of more than 1.5 cm in living tissues without signal averaging. © 2013 Optical Society of America.

Spanier B.,TU Munich
Journal of Physiology | Year: 2014

Dietary proteins are cleaved within the intestinal lumen to oligopeptides which are further processed to small peptides (di- and tripeptides) and free amino acids. Although the transport of amino acids is mediated by several specific amino acid transporters, the proton-coupled uptake of the more than 8000 different di- and tripeptides is performed by the high-capacity/low-affinity peptide transporter isoform PEPT1 (SLC15A1). Its wide substrate tolerance also allows the transport of a repertoire of structurally closely related compounds and drugs, which explains their high oral bioavailability and brings PEPT1 into focus for medical and pharmaceutical approaches. Although the first evidence for the interplay of nutrient supply and PEPT1 expression and function was described over 20 years ago, many aspects of the molecular processes controlling its transcription and translation and modifying its transporter properties are still awaiting discovery. The present review summarizes the recent knowledge on the factors modulating PEPT1 expression and function in Caenorhabditis elegans, Danio rerio, Mus musculus and Homo sapiens, with focus on dietary ingredients, transcription factors and functional modulators, such as the sodium-proton exchanger NHE3 and selected scaffold proteins. © 2013 The Physiological Society.

Oberhofer H.,TU Munich | Blumberger J.,University College London
Physical Chemistry Chemical Physics | Year: 2012

We assess the validity of incoherent hopping models that have previously been used to describe electron transport in crystalline C60 at room temperature. To this end we present new density functional theory based calculations of the electron transfer parameters defining these models. Specifically, we report electronic coupling matrix elements for several tens of thousands of configurations that are thermally accessible to the C60 molecules through rotational diffusion around their lattice sites. We find that the root-mean-square fluctuations of the electronic coupling matrix element (11 meV) are almost as large as the average value (14 meV) and that the distribution is well approximated by a Gaussian. Importantly, due to the small reorganisation energy of the C60 dimer (≈0.1 eV), the electron transfer (ET) is almost activationless for the majority of configurations. Yet, for a small but significant fraction of orientations the coupling is so strong compared to the reorganisation energy that no charge-localised state exists, a situation that is aggravated if zero-point motion of the nuclei is taken into account. The present calculations indicate that standard hopping models do not provide a sound description of electron transport in C60, which might be the case for many other organics as well, and that approaches are needed that solve the electron dynamics directly. This journal is © 2012 the Owner Societies. Source
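The incoherent hopping picture assessed above is usually built on the non-adiabatic Marcus rate. As a hedged, illustrative sketch (a generic textbook expression, not the authors' code), plugging in the abstract's average coupling of 14 meV and reorganisation energy of roughly 0.1 eV gives a hopping rate of a few per picosecond at 300 K:

```python
import math

def marcus_rate(coupling_eV, reorg_eV, temp_K=300.0, dG_eV=0.0):
    """Non-adiabatic Marcus electron-transfer rate in 1/s."""
    e = 1.602176634e-19            # J per eV
    hbar = 1.054571817e-34         # J*s
    kB_T = 1.380649e-23 * temp_K   # thermal energy in J
    H = coupling_eV * e            # electronic coupling
    lam = reorg_eV * e             # reorganisation energy
    dG = dG_eV * e                 # driving force
    prefactor = (2.0 * math.pi / hbar) * H**2
    density = 1.0 / math.sqrt(4.0 * math.pi * lam * kB_T)
    activation = math.exp(-(lam + dG)**2 / (4.0 * lam * kB_T))
    return prefactor * density * activation

# Average coupling (14 meV) and dimer reorganisation energy (~0.1 eV) from the abstract
k = marcus_rate(0.014, 0.1)
```

With these numbers the activation factor exp(-λ/(4k_BT)) is about 0.38, which is why the transfer is described as almost activationless.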

Schuller B.,TU Munich
IEEE Transactions on Affective Computing | Year: 2011

Most research efforts dealing with recognition of emotion-related states from the human speech signal concentrate on acoustic analysis. However, the last decade's research results show that the task cannot be solved to complete satisfaction, especially when it comes to real life speech data and in particular to the assessment of speakers' valence. This paper therefore investigates novel approaches to the additional exploitation of linguistic information. To ensure good applicability to the real world, spontaneous speech and nonacted nonprototypical emotions are examined in the recently popular dimensional model in 3D continuous space. As there is a lack of linguistic analysis approaches and experiments for this model, various methods are proposed. Best results are obtained with the described bag of n-gram and character n-gram approaches introduced for the first time for this task and allowing for advanced vector space representation of the spoken contents. Furthermore, string kernels are considered. By early fusion and combined space optimization of the proposed linguistic features with acoustic ones, the regression of continuous emotion primitives outperforms reported benchmark results on the VAM corpus of highly emotional face-to-face communication. © 2011 IEEE. Source
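A minimal sketch of the character n-gram features mentioned above (a generic bag-of-character-trigrams extractor; the paper's actual feature pipeline and vector space representation are more elaborate):

```python
from collections import Counter

def char_ngrams(text, n=3):
    """Bag of character n-grams with word-boundary padding."""
    padded = f" {text.strip().lower()} "
    return Counter(padded[i:i + n] for i in range(len(padded) - n + 1))

feats = char_ngrams("so happy", n=3)
```

Such sparse count vectors can then be fed to any regressor over the continuous emotion primitives, alone or fused with acoustic features.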

Boche H.,TU Munich | Schubert M.,Fraunhofer Institute for Telecommunications, Heinrich-Hertz-Institut
IEEE Transactions on Information Theory | Year: 2011

Many solutions and concepts in resource allocation and game theory rely on the assumption of a convex utility set. In this paper, we show that the less restrictive assumption of a logarithmic hidden convexity is sometimes sufficient. We consider the problems of Nash bargaining and proportional fairness, which are closely related. We extend the Nash bargaining framework to a broader family of log-convex sets. We then focus on the set of feasible signal-to-interference-plus-noise ratios (SINRs), for the cases of individual power constraints and a sum power constraint. Under the assumption of log-convex interference functions, we show how Pareto optimality of boundary points depends on the interference coupling between the users. Finally, we provide necessary and sufficient conditions for strict log-convexity of the feasible SINR region. © 2011 IEEE. Source
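For reference, the two closely related problems can be stated in their standard forms (the paper's contribution is extending them to log-convex utility sets):

```latex
% Nash bargaining over a utility set U with disagreement point d:
\max_{u \in U} \; \prod_{i=1}^{K} (u_i - d_i)
% Proportional fairness (Nash bargaining with d = 0):
\max_{u \in U} \; \sum_{i=1}^{K} \log u_i
```

In the SINR setting, the maximization runs over the feasible SINR region under the respective power constraints.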

Neumann T.,TU Munich
Proceedings of the VLDB Endowment | Year: 2011

As main memory grows, query performance is more and more determined by the raw CPU costs of query processing itself. The classical iterator-style query processing technique is very simple and flexible, but shows poor performance on modern CPUs due to lack of locality and frequent instruction mispredictions. Several techniques like batch-oriented processing or vectorized tuple processing have been proposed in the past to improve this situation, but even these techniques are frequently out-performed by hand-written execution plans. In this work we present a novel compilation strategy that translates a query into compact and efficient machine code using the LLVM compiler framework. By aiming at good code and data locality and predictable branch layout the resulting code frequently rivals the performance of hand-written C++ code. We integrated these techniques into the HyPer main memory database system and show that this results in excellent query performance while requiring only modest compilation time. © 2011 VLDB Endowment. Source
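To illustrate the contrast the abstract draws (a toy Python analogy, not HyPer's actual LLVM code generation): the iterator model produces one tuple per call through a chain of operators, whereas a compiling engine fuses the same plan into a single tight loop:

```python
# Volcano-style iterator pipeline: one next() call per tuple per operator.
def scan(rows):
    yield from rows

def select(child, pred):
    for row in child:
        if pred(row):
            yield row

def agg_sum(child, col):
    return sum(row[col] for row in child)

rows = [{"a": i, "b": i % 3} for i in range(100)]
iterator_result = agg_sum(select(scan(rows), lambda r: r["b"] == 0), "a")

# Data-centric style: the same plan fused into one tight loop --
# the shape of the machine code a compiling engine would emit.
fused_result = 0
for row in rows:
    if row["b"] == 0:
        fused_result += row["a"]
```

The fused form keeps the tuple in registers and has a predictable branch per row, which is the locality argument made above.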

Mendl C.B.,TU Munich
Computer Physics Communications | Year: 2011

This paper introduces the FermiFab toolbox for many-particle quantum systems. It is mainly concerned with the representation of (symbolic) fermionic wavefunctions and the calculation of corresponding reduced density matrices (RDMs). The toolbox transparently handles the inherent antisymmetrization of wavefunctions and incorporates the creation/annihilation formalism. Thus, it aims at providing a solid base for a broad audience to use fermionic wavefunctions with the same ease as matrices in Matlab, say. Leveraging symbolic computation, the toolbox can greatly simplify tedious pen-and-paper calculations for concrete quantum mechanical systems, and serves as a "sandbox" for theoretical hypothesis testing. FermiFab (including full source code) is freely available as a plugin for both Matlab and Mathematica. © 2011 Elsevier B.V. Source
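The antisymmetrization that FermiFab handles transparently can be illustrated for the simplest case, a two-fermion Slater determinant (a stand-alone hypothetical sketch, unrelated to the toolbox's API):

```python
import math

def slater2(phi1, phi2):
    """Antisymmetrized two-fermion wavefunction built from two orbitals."""
    def psi(x1, x2):
        return (phi1(x1) * phi2(x2) - phi2(x1) * phi1(x2)) / math.sqrt(2.0)
    return psi

# Two toy orbitals (illustrative functions only, not orthonormalized)
phi_a = lambda x: math.exp(-x * x)
phi_b = lambda x: x * math.exp(-x * x)

psi = slater2(phi_a, phi_b)
```

Exchanging the two particle coordinates flips the sign of the wavefunction, and the amplitude vanishes when both fermions occupy the same point, as the Pauli principle demands.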

Pohl V.,TU Munich
IEEE Transactions on Information Theory | Year: 2011

This paper considers the problem of estimating a stationary sequence y from the observation of a stationary correlated sequence x by means of a causal linear filter. It is assumed that the spectral density Φx of x vanishes on a subset of the unit circle of positive Lebesgue measure, such that the classical derivation of the estimation filter, based on the spectral factorization of Φx, cannot be applied. The paper derives the transfer function of such an estimation filter, discusses its stability behavior, and applies the result to the causal reconstruction of deterministic signals from their samples. © 2011 IEEE. Source
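For context, the classical causal estimation filter that cannot be applied here is the one obtained from the spectral factorization Φx = Φx+ Φx- of standard Wiener theory:

```latex
W(e^{i\theta}) \;=\; \frac{1}{\Phi_x^{+}(e^{i\theta})}
\left[\frac{\Phi_{yx}(e^{i\theta})}{\Phi_x^{-}(e^{i\theta})}\right]_{+}
```

Here [·]+ denotes the causal part. The factorization exists only when log Φx is integrable (the Szegő condition), which fails precisely when Φx vanishes on a set of positive Lebesgue measure.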

Raasch C.,TU Munich
Journal of Product Innovation Management | Year: 2014

Open innovation research and practice recognize the important role of external complementors in value creation. At the same time, firms need to retain exclusive control over some essential components to capture value from their product and/or service system. This paper contributes to the literature by analyzing some of the trade-offs between openness to external value creation and closedness for internal value capture. It focuses on selective openness as a key variable and investigates how it affects value creation by external complementors, specifically the members of user innovation communities. Openness, it is hypothesized, matters to community members: The more open a product design is, the higher their sense of involvement in the innovation project, and the larger the effort they devote to it. Unlike prior literature, different forms and loci of openness are distinguished, specifically the transparency, accessibility, and replicability of different components of the product being developed. Hypotheses are tested based on survey data (n = 309) from 20 online communities in the consumer electronics and information technology hardware industries. Multilevel regression analysis is used to account for clustering, and thus nonindependent data, at the community level. We find that openness indeed increases community members' involvement in the innovation project and their contributions to it. Interestingly, however, some forms and loci of openness strongly affect community perceptions and behavior, while others have limited or no impact. This finding suggests that, at least in relation to user communities, the trade-off that firms face between external value creation and internal value capture is softer than hitherto understood. Contingency factors that may be able to explain these patterns are advanced. For example, users are expected to value the form of openness that they have the capabilities and incentives to exploit. 
The findings in this paper extend the literature on selective openness in innovation. They emphasize the need to study the demand for different forms of openness at the subsystem level and align supply-side strategies to it. In managerial practice, a careful assessment of the demand for openness enables firms to successfully use selective openness and to effectively appropriate value from selectively open systems. © 2013 Product Development & Management Association. Source

Garcia-Morales V.,TU Munich
Physics Letters, Section A: General, Atomic and Solid State Physics | Year: 2012

A universal map is derived for all deterministic 1D cellular automata (CAs) containing no freely adjustable parameters and valid for any alphabet size and any neighborhood range (including non-symmetrical neighborhoods). The map can be extended to an arbitrary number of dimensions and topologies and to arbitrary order in time. Specific CA maps for the famous Conway's Game of Life and Wolfram's 256 elementary CAs are given. An induction method for CAs, based on the universal map, allows mathematical expressions for the orbits of a wide variety of elementary CAs to be systematically derived. © 2012 Elsevier B.V. Source
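As a concrete instance of the elementary CAs covered by the universal map (an illustrative simulator of Wolfram's rule numbering, not the map itself):

```python
def step(cells, rule):
    """One synchronous update of an elementary CA (periodic boundary).
    Bit p of `rule` gives the new state for neighborhood pattern p."""
    n = len(cells)
    table = [(rule >> p) & 1 for p in range(8)]
    return [table[(cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]]
            for i in range(n)]

row = [0, 0, 1, 0, 0]
row = step(row, 30)  # Wolfram rule 30
```

Iterating `step` from a single seed cell reproduces the familiar rule 30 triangle.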

Jirauschek C.,TU Munich
Optics Express | Year: 2010

Based on a coupled simulation of carrier transport and optical cavity field, the intrinsic linewidth in resonant phonon terahertz quantum cascade lasers is self-consistently analyzed. For high power structures, values on the order of Hz are obtained. Thermal photons are found to play a considerable role at elevated temperatures. A linewidth enhancement factor of 0.5 is calculated for the investigated designs. © 2010 Optical Society of America. Source

Braun P.,TU Munich | Braun P.,Helmholtz Center Munich
Proteomics | Year: 2012

Protein interactions mediate essentially all biological processes and analysis of protein-protein interactions using both large-scale and small-scale approaches has contributed fundamental insights to the understanding of biological systems. In recent years, interactome network maps have emerged as an important tool for analyzing and interpreting genetic data of complex phenotypes. Complementary experimental approaches to test for binary, direct interactions, and for membership in protein complexes are used to explore the interactome. The two approaches are not redundant but yield orthogonal perspectives onto the complex network of physical interactions by which proteins mediate biological processes. In recent years, several publications have demonstrated that interactions from high-throughput experiments can be as reliable as the high-quality subset of interactions identified in small-scale studies. Critical for this insight was the introduction of standardized experimental benchmarking of interaction and validation assays using reference sets. The data obtained in these benchmarking experiments have resulted in greater appreciation of the limitations and the complementary strengths of different assays. Moreover, benchmarking is a central element of a conceptual framework to estimate interactome sizes and thereby measure progress toward near-complete network maps. These estimates have revealed that current large-scale data sets, although often of high quality, cover only a small fraction of a given interactome. Here, I review the findings of assay benchmarking and discuss implications for quality control, and for strategies toward obtaining a near-complete map of the interactome of an organism. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim. Source

Steimer W.,TU Munich
Therapeutic Drug Monitoring | Year: 2010

Psychiatry is one of the most promising areas for bringing pharmacogenomics to the patient. Psychiatric disorders such as depression and schizophrenia contribute significantly to worldwide morbidity and mortality. Forecasts rank depression second only to ischemic heart disease by 2020. In depression and schizophrenia, 30% to 50% of all patients do not respond sufficiently to the initial treatment regimen. Genetic variability has been demonstrated to play an important role in the response to pharmacotherapy. Most data are available with regard to polymorphisms in the genes coding for drug-metabolizing enzymes, and recommendations for the choice of personalized dosages based on genotyping results are available. Clinical outcome, in particular adverse effects, has been shown to correlate with the results from genotyping. Incorporating pharmacogenomics into clinical practice has, however, been slow, and it is still not clear in which clinical situations genotyping should be performed and what the benefit of such procedures could be beyond therapeutic drug monitoring. Additionally, many studies in psychiatry focus on genetic variation in candidate genes of drug targets. However, despite promising reports, no clear recommendation can be given at present to perform such testing in clinical use. © 2010 by Lippincott Williams & Wilkins. Source

Beer A.J.,TU Munich
Methods in molecular biology (Clifton, N.J.) | Year: 2011

Imaging of αvβ3 expression in malignant diseases has been studied extensively in recent years, mainly because the level of integrin αvβ3 expression might be a surrogate parameter of angiogenic activity. Most studies have been performed using preclinical tumor models, but recently the first results of imaging αvβ3 expression in patients have been published. The first approach used was the radiotracer approach, with tracers for positron emission tomography (PET) like [(18)F]Galacto-RGD or tracers for single photon emission computed tomography (SPECT) like [(99m)Tc]NC100692. In this article we focus on the experimental design and methodology of PET imaging of αvβ3 expression with the tracer [(18)F]Galacto-RGD. Common difficulties and pitfalls in image acquisition and interpretation are discussed. Finally, the performance of PET is compared to other methods of imaging αvβ3 expression, such as magnetic resonance imaging, ultrasound, or optical imaging. Source

Kuzyk A.,TU Munich
Electrophoresis | Year: 2011

Dielectrophoresis has become a powerful tool for manipulation of various materials, such as metal and semiconducting particles, DNA molecules, nanowires and graphene. This short review is intended to provide the reader with an overview of the recent advances of application of dielectrophoresis at the nanoscale. © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim. Source
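The standard working equation behind dielectrophoretic manipulation is the dipole approximation for a spherical particle (a textbook relation, stated here only for orientation):

```latex
\langle \mathbf{F}_{\mathrm{DEP}} \rangle
= 2\pi \varepsilon_m r^{3}\, \mathrm{Re}\{K(\omega)\}\, \nabla \lvert \mathbf{E}_{\mathrm{rms}}\rvert^{2},
\qquad
K(\omega) = \frac{\varepsilon_p^{*}-\varepsilon_m^{*}}{\varepsilon_p^{*}+2\varepsilon_m^{*}},
\quad \varepsilon^{*} = \varepsilon - i\,\sigma/\omega
```

The sign of Re{K(ω)}, the Clausius-Mossotti factor, decides whether a particle is attracted to (positive DEP) or repelled from (negative DEP) field maxima, which is what makes frequency a manipulation handle at the nanoscale.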

Nipkow T.,TU Munich
Journal of Automated Reasoning | Year: 2010

This paper presents verified quantifier elimination procedures for dense linear orders (two of them novel), for real and for integer linear arithmetic. All procedures are defined and verified in the theorem prover Isabelle/HOL, are executable and can be applied to HOL formulae themselves (by reflection). The formalization of the different theories is highly modular. © Springer Science+Business Media B.V. 2010. Source
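To give a flavour of quantifier elimination for linear real arithmetic (an unverified Python sketch of one Fourier-Motzkin elimination step; the paper's procedures, by contrast, are formally verified in Isabelle/HOL and applied by reflection):

```python
from fractions import Fraction

def exists_x(constraints):
    """Decide whether some rational x satisfies all constraints a*x <= b,
    by eliminating x (one Fourier-Motzkin step; a = 0 rows are ground facts)."""
    lowers, uppers = [], []
    for a, b in constraints:
        a, b = Fraction(a), Fraction(b)
        if a > 0:
            uppers.append(b / a)      # x <= b/a
        elif a < 0:
            lowers.append(b / a)      # x >= b/a
        elif b < 0:                   # 0 <= b must hold on its own
            return False
    if not lowers or not uppers:
        return True                   # unbounded on one side
    return max(lowers) <= min(uppers)

# Exists x. x >= 1 and x <= 3   versus   Exists x. x >= 3 and x <= 1
sat = exists_x([(-1, -1), (1, 3)])
unsat = exists_x([(-1, -3), (1, 1)])
```

Repeating the step variable by variable eliminates a whole quantifier prefix, which is the core idea the verified procedures build on.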

Quastel J.,University of Toronto | Spohn H.,TU Munich
Journal of Statistical Physics | Year: 2015

Our understanding of the one-dimensional KPZ equation, alias noisy Burgers equation, has advanced substantially over the past 5 years. We provide a non-technical review, where we limit ourselves to the stochastic PDE and lattice type models approximating it. © 2015, Springer Science+Business Media New York. Source
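The stochastic PDE in question is the one-dimensional KPZ equation for a height field h(x, t) driven by space-time white noise ξ:

```latex
\partial_t h \;=\; \nu\,\partial_x^{2} h \;+\; \tfrac{\lambda}{2}\,(\partial_x h)^{2} \;+\; \sqrt{D}\,\xi(x,t)
```

Setting u = ∂x h turns this into the noisy Burgers equation ∂t u = ν ∂x² u + λ u ∂x u + √D ∂x ξ, which is the alias referred to above.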

Biberthaler P.,TU Munich
International orthopaedics | Year: 2013

Patients suffering from isolated subacromial impingement (SI) of the shoulder who are resistant to other therapies benefit substantially from arthroscopic subacromial decompression (ASD) if they are young (<60 years). Although physical demands rise notably in the older population, it still remains unclear whether surgery leads to better results in these patients. Therefore, the aim of this study was to focus on the impact of age on the functional outcome in elderly patients suffering from SI. In this retrospective analysis, 307 patients (age range: 42-63 years) with isolated SI were enrolled. Of these, 165 patients were allocated to physical therapy whereas 142 underwent ASD. The patient cohort was divided into two groups according to the median age (57 years). Functional outcome was recorded using the Munich Shoulder Questionnaire (MSQ), allowing for qualitative self-assessment of the Constant, SPADI and DASH scores. Median age was 57 (25%-75%: 48-63) years, follow-up was 55 (25%-75%: 25-87) months. In group I (age < 57 years, n = 165) no significant differences in outcome between physical therapy and ASD were detected. In contrast, in group II (age > 57 years; n = 142) the patients reported significantly better results after ASD in the overall MSQ. Despite their higher age, elderly patients with isolated SI actually benefit significantly from ASD in comparison to physical therapy. Source

Schwamborn K.,TU Munich
Journal of Proteomics | Year: 2012

Biomarker discovery and validation involve the consideration of many issues and challenges in order to be effectively used for translation from bench to bedside. Imaging mass spectrometry (IMS) is a new technology to assess spatial molecular arrangements in tissue sections, going far beyond microscopy in providing hundreds of different molecular images from a single scan without the need of target-specific reagents. The possibility to correlate distribution maps of multiple analytes with histological and clinical features makes it an ideal tool to discover diagnostic and prognostic markers of diseases. Some recently published studies that show the usefulness and advantages of this technology in the field of cancer research are highlighted. This article is part of a Special Issue entitled: Imaging Mass Spectrometry: A User's Guide to a New Technique for Biological and Biomedical Research. © 2012. Source

Enss T.,TU Munich | Sirker J.,University of Kaiserslautern
New Journal of Physics | Year: 2012

The Lieb-Robinson bound implies that the unitary time evolution of an operator can be restricted to an effective light cone for any Hamiltonian with short-range interactions. Here we present a very efficient renormalization group algorithm based on this light cone structure to study the time evolution of prepared initial states in the thermodynamic limit in one-dimensional quantum systems. The algorithm does not require translational invariance and allows for an easy implementation of local conservation laws. We use the algorithm to investigate the relaxation dynamics of double occupancies in fermionic Hubbard models as well as a possible thermalization. For the integrable Hubbard model, we find a pure power-law decay of the number of doubly occupied sites towards the value in the long-time limit, while the decay becomes exponential when adding a nearest-neighbor interaction. In accordance with the eigenstate thermalization hypothesis, the long-time limit is reasonably well described by a thermal average. We point out, however, that such a description naturally requires the use of negative temperatures. Finally, we study a doublon impurity in a Néel background and find that the excess charge and spin spread at different velocities, providing an example of spin-charge separation in a highly excited state. © IOP Publishing Ltd and Deutsche Physikalische Gesellschaft. Source
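The light cone structure the algorithm exploits comes from the Lieb-Robinson bound, which for short-range Hamiltonians takes the schematic form:

```latex
\bigl\lVert [A_X(t),\, B_Y] \bigr\rVert \;\le\; C\,\lVert A_X\rVert\,\lVert B_Y\rVert\; e^{-\mu\,\bigl(d(X,Y)-v\,\lvert t\rvert\bigr)}
```

Operators supported on regions X and Y effectively fail to commute only inside the cone d(X, Y) ≤ v|t|, so truncating the time evolution outside it introduces only exponentially small errors.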

Bletzinger K.-U.,TU Munich
Structural and Multidisciplinary Optimization | Year: 2014

The paper discusses the filtering of shape sensitivities as a mesh-independent regularization method for very large problems of shape optimal design. The vertices of the simulation discretization grids are directly used as design morphing handles, allowing for the largest possible design space. So far, however, there has been a lack of theory for consistently merging sensitivity filtering into standard optimization technology, which is an ongoing topic of discussion in the community. The present paper aims to overcome this burden. As a result, it is shown that there is a seamless transition between sensitivity filtering and the other shape parameterization techniques used for shape optimization, such as CAD-based techniques, subdivision surfaces or morphing-box technologies. It appears that sensitivity filtering belongs to the most general and powerful control technologies available for shape optimal design. The approach is demonstrated by various illustrative examples which span from basic aspects to sophisticated applications in structural and fluid mechanics. © 2014 Springer-Verlag Berlin Heidelberg. Source
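A minimal sketch of distance-weighted sensitivity filtering in one dimension (a generic hat-kernel filter with a fixed physical radius, which is what makes the regularization mesh independent; illustrative only, not the paper's formulation):

```python
def filter_sensitivities(coords, sens, radius):
    """Smooth nodal shape sensitivities with a linear hat kernel.
    The kernel support is a physical radius, independent of mesh spacing."""
    filtered = []
    for xi in coords:
        wsum, acc = 0.0, 0.0
        for xj, sj in zip(coords, sens):
            w = max(0.0, radius - abs(xi - xj))  # hat weight, zero outside radius
            wsum += w
            acc += w * sj
        filtered.append(acc / wsum)
    return filtered

coords = [0.1 * i for i in range(11)]
raw = [0.0] * 11
raw[5] = 1.0                      # a single spiky nodal sensitivity
smooth = filter_sensitivities(coords, raw, radius=0.25)
```

The spike is spread over its physical neighborhood, suppressing the mesh-dependent oscillations that arise when raw vertex sensitivities are used directly as design updates.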

Chen W.,TU Munich | Plewig G.,Ludwig Maximilians University of Munich
British Journal of Dermatology | Year: 2014

Human Demodex mites (Demodex folliculorum and Demodex brevis) hold a high rank in the evolutionary and phylogenetic hierarchy of the skin microbiome, although in most people their presence is of no consequence. While human demodicosis is a skin disease sui generis, it can mimic many other inflammatory dermatoses, such as folliculitis, rosacea and perioral dermatitis, leading to unspecific and confusing descriptions in the literature. Here, we propose to classify human demodicosis into a primary form and a secondary form, which is associated mainly with immunosuppression. The clinical manifestations of primary demodicosis may include (i) spinulate demodicosis, currently known as pityriasis folliculorum, involving sebaceous hair follicles without visible inflammation; (ii) papulopustular/nodulocystic or conglobate demodicosis with pronounced inflammation affecting most commonly the perioral and periorbital areas of the face; (iii) ocular demodicosis, inducing chronic blepharitis, chalazia or, less commonly, keratoconjunctivitis; and (iv) auricular demodicosis causing external otitis or myringitis. Secondary demodicosis is usually associated with systemic or local immunosuppression. Treatment is only weakly evidence based, and the most effective concentrations of acaricides remain to be determined. Optimization of an in vitro or ex vivo culture model is necessary for future studies. Endosymbiosis between certain bacteria and Demodex mites in the pathogenesis of demodicosis deserves more attention. Further clinical observations and experiments are needed to prove our hypothesis. What's already known about this topic? The pathogenicity of human Demodex mites in inflammatory skin diseases remains controversial. What does this study add? A new classification is proposed to divide human demodicosis into a primary form and a secondary form associated with other local or systemic diseases. 
The recognition of primary human demodicosis as a disease sui generis will enable clinicians to differentiate it from other mimicking inflammatory dermatoses and encourage the development of a specific effective treatment. © 2014 British Association of Dermatologists. Source

Nuyken O.,TU Munich | Pask S.D.,SteDaPa Consulting
Polymers | Year: 2013

This short, introductory review covers the still rapidly growing and industrially important field of ring-opening polymerization (ROP). The review is organized according to mechanism (radical ROP (RROP), cationic ROP (CROP), anionic ROP (AROP) and ring-opening metathesis polymerization (ROMP)) rather than monomer classes. Nevertheless, the different groups of cyclic monomers are considered (olefins, ethers, thioethers, amines, lactones, thiolactones, lactams, disulfides, anhydrides, carbonates, silicones, phosphazenes and phosphonites), together with the mechanisms by which they can undergo ring-opening polymerization. Literature up to 2012 has been considered, but the citations selected refer to detailed reviews and key papers describing not only the latest developments but also the evolution of the current state of the art. © 2013 by the authors. Source

Bausch A.R.,TU Munich | Schwarz U.S.,University of Heidelberg
Nature Materials | Year: 2013

Cells can sense their environment by applying and responding to mechanical forces, yet how these forces are transmitted through the cell's cytoskeleton is largely unknown. Now, a combination of experiments and computer simulations shows how forces applied to the cell cortex are synergistically shared by motor proteins and crosslinkers. The adaptive and regenerative properties of such tissues are strongly connected to the genetic programs of the cell types involved, and therefore progress in the design of cell-based biomimetic active materials relies on a better understanding of gene expression and differentiation in response to mechanical signals. The researchers also showed how different proteins synergistically work together in the cortex. By using various types of mechanical perturbation, they demonstrated how the force applied by a biaxial stretch to the membrane trickles down to the cortex through molecular bridges and then distributes over myosin II and crosslinking proteins. Source

Koetter R.,TU Munich | Effros M.,California Institute of Technology | Medard M.,Massachusetts Institute of Technology
IEEE Transactions on Information Theory | Year: 2011

A family of equivalence tools for bounding network capacities is introduced. Given a network N with node set V, the capacity of N is a set of non-negative vectors with elements corresponding to all possible multicast connections in N: a vector R is in the capacity region for N if and only if it is possible to simultaneously and reliably establish all multicast connections across N at the given rates. Any other demand type with independent messages is a special case of this multiple multicast problem, and is therefore included in the given rate region. In Part I, we show that the capacity of a network N is unchanged if any independent, memoryless, point-to-point channel in N is replaced by a noiseless bit pipe with throughput equal to the removed channel's capacity. It follows that the capacity of a network comprised entirely of such point-to-point channels equals the capacity of an error-free network that replaces each channel by a noiseless bit pipe of the corresponding capacity. A related separation result was known previously for a single multicast connection over an acyclic network of independent, memoryless, point-to-point channels; our result treats general connections (e.g., a collection of simultaneous unicasts) and allows cyclic or acyclic networks. © 2006 IEEE. Source

Rohrmoser A.,TU Munich
Nuclear Engineering and Design | Year: 2010

The new neutron source FRM II is based on a very compact single fuel element concept. This work describes the calculations that led to the core design and also gives a description of later 3D evaluations, now with inclusion of all 'as-built' user installations in the moderator tank. Finally, a comparison with data collected during operation is given. © 2010 Elsevier B.V. All rights reserved. Source

Althoff M.,TU Munich | Krogh B.H.,Carnegie Mellon University
IEEE Transactions on Automatic Control | Year: 2014

This paper presents a numerical procedure for the reachability analysis of systems with nonlinear, semi-explicit, index-1 differential-algebraic equations. The procedure computes reachable sets for uncertain initial states and inputs in an overapproximative way, i.e. it is guaranteed that all possible trajectories of the system are enclosed. Thus, the result can be used for formal verification of system properties that can be specified in the state space as unsafe or goal regions. Due to the representation of reachable sets by zonotopes and the use of highly scalable operations on them, the presented approach scales favorably with the number of state variables. This makes it possible to solve problems of industry-relevant size, as demonstrated by a transient stability analysis of the IEEE 14-bus benchmark problem for power systems. © 1963-2012 IEEE. Source
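The two zonotope operations at the heart of such reachability algorithms are exact and cheap, which is the source of the favorable scaling; a minimal sketch (a hypothetical class for illustration, not the authors' implementation):

```python
class Zonotope:
    """Z = {c + sum_i alpha_i * g_i : alpha_i in [-1, 1]},
    stored as a center vector c and a list of generator vectors G."""
    def __init__(self, center, generators):
        self.c = center
        self.G = generators

    def linear_map(self, A):
        """Image A*Z: map center and every generator (exact)."""
        mv = lambda v: [sum(A[i][j] * v[j] for j in range(len(v)))
                        for i in range(len(A))]
        return Zonotope(mv(self.c), [mv(g) for g in self.G])

    def minkowski_sum(self, other):
        """Z1 + Z2: add centers, concatenate generators (exact)."""
        c = [a + b for a, b in zip(self.c, other.c)]
        return Zonotope(c, self.G + other.G)

    def interval_hull(self):
        """Tight axis-aligned box enclosure of the zonotope."""
        r = [sum(abs(g[i]) for g in self.G) for i in range(len(self.c))]
        return [(ci - ri, ci + ri) for ci, ri in zip(self.c, r)]

Z = Zonotope([0.0, 0.0], [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
hull = Z.interval_hull()
step = Z.linear_map([[1.0, 0.1], [0.0, 1.0]])  # one discrete-time step x' = Ax
```

Linear maps and Minkowski sums cover the affine part of the dynamics; overapproximation enters only when the nonlinear and algebraic parts are enclosed, which keeps the guarantee that all trajectories are contained.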

Lang K.,Medical Research Council Laboratory of Molecular Biology | Chin J.W.,TU Munich
Chemical Reviews | Year: 2014

A range of chemoselective reactions have been used to label isolated biomolecules, cell surface biomolecules, and intracellular biomolecules at physiological temperatures and pressures. Many of these reactions proceed under aqueous conditions and produce nontoxic or no byproducts. The rates of these chemoselective reactions span 9 orders of magnitude and the recent development of rapid reactions promises applications of labeling to previously inaccessible biological problems. The development of reactions that are chemoselective and rapid under biologically relevant conditions is being rapidly translated into approaches for selective protein labeling in cells and animals via genetic code expansion. Although genetic code expansion approaches commonly direct unnatural amino acid incorporation in response to the amber codon, there appears to be minimal background labeling resulting from incorporation and labeling at endogenous amber codons in E. coli. Source

Erkan M.,TU Munich
Journal of Pathology | Year: 2013

Pancreatic ductal adenocarcinoma (PDAC) is a notoriously therapy-resistant desmoplastic tumour. Antifibrotic therapy is shown to increase drug delivery in the preclinical setting. However, this approach can be a double-edged sword: first, PDAC is not uniform and some types of (dormant) stroma may in fact be protective; second, conventional chemotherapeutics are not powerful enough to eradicate all cancer cells in the tumour, therefore breaking down the stromal wall non-selectively may also lead to the increased dissemination of cancer cells. Recently, Kadaba et al [19] have analysed the impact of the stromal cells in pancreatic, oesophageal and skin cancers, in bio-engineered, physiomimetic organotypic cultures. These authors show that the maximal effect on increasing cancer cell proliferation and invasion, as well as decreasing cancer cell apoptosis, occurs when pancreatic stellate cells constitute the majority of the cellular population in a three-dimensional (3D) model. This work may be instrumental for better understanding the types of stroma in PDAC before eliminating it non-selectively. Copyright © 2013 Pathological Society of Great Britain and Ireland. Published by John Wiley & Sons, Ltd. Source

Ndrepepa G.,TU Munich
Coronary Artery Disease | Year: 2015

ST-segment elevation myocardial infarction (STEMI) is a major cause of mortality and disability worldwide. Reperfusion therapy by thrombolysis or primary percutaneous coronary intervention (PPCI) improves survival and quality of life in patients with STEMI. Despite the proven efficacy of timely reperfusion, mortality from STEMI remains high, particularly among patients with suboptimal reperfusion. Reperfusion injury following opening of occluded coronary arteries mitigates the efficacy of PPCI by further accentuating ischemic damage and increasing infarct size (IS). On the basis of experimental studies, it is assumed that nearly 50% of the final IS is attributable to reperfusion injury. IS is a marker of ischemic damage and adequacy of reperfusion that is strongly related to mortality in reperfused patients with STEMI. Many therapeutic strategies including pharmacological and conditioning agents have been proven effective in reducing reperfusion injury and IS in preclinical research. Mechanistically, these agents act either by inhibiting reperfusion injury cascades or by activating cellular prosurvival pathways. Although most of these agents/strategies are at the experimental stage, some of them have been tested clinically in patients with STEMI. This review provides an update on key pharmacological agents and postconditioning used in the setting of PPCI to reduce reperfusion injury and IS. Despite intensive research, no strategy or intervention has been shown to prevent reperfusion injury or enhance myocardial salvage in a consistent manner in a clinical setting. A number of novel therapeutic strategies to reduce reperfusion injury in the setting of PPCI in patients with STEMI are currently under investigation. They will lead to a better understanding of reperfusion injury and to more efficient strategies for its prevention. © 2015 Wolters Kluwer Health, Inc. All rights reserved. Source

Kramer G.,TU Munich
IEEE Transactions on Information Theory | Year: 2014

A class of channels is introduced for which there is memory inside blocks of a specified length and no memory across the blocks. The multiuser model is called an information network with in-block memory (NiBM). It is shown that block-fading channels, channels with state known causally at the encoder, and relay networks with delays are NiBMs. A cut-set bound is developed for NiBMs that unifies, strengthens, and generalizes existing cut bounds for discrete memoryless networks. The bound gives new finite-letter capacity expressions for several classes of networks including point-to-point channels, and certain multiaccess, broadcast, and relay channels. Cardinality bounds on the random coding alphabets are developed that improve on existing bounds for channels with action-dependent state available causally at the encoder and for relays without delay. Finally, quantize-forward network coding is shown to achieve rates within an additive gap of the new cut-set bound for linear, additive, Gaussian noise channels, symmetric power constraints, and a multicast session. © 2014 IEEE. Source

Geyer P.,TU Munich
Advanced Engineering Informatics | Year: 2012

To support performance-oriented modelling for sustainable building design, one must both model geometric interdependencies in a parametric way and include non-geometric physical, environmental, and economic design reasoning. For this purpose, this paper examines the use of Systems Modelling Language (SysML) to model systems for sustainable building design and develops a method called Parametric Systems Modelling (PSM). Selected diagrams demonstrate the application of the method for performance-oriented building design and show generic models of typical requirements, design structures, internal processes, and item flows of energy and resources in a systems view. An exemplary implementation of a parametric system, which handles the trade-off between investments in both building envelope and heat generation technology, illustrates the use and benefit of systems modelling for decision-making. Further considerations address integrating systems modelling into the CAD/BIM-based design process. © 2012 Elsevier Ltd. All rights reserved. Source
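The envelope-versus-heat-generation trade-off the paper implements as a parametric system can be sketched as a tiny cost model. All numbers below (prices, conductivity, climate degree-hours, horizon) are invented assumptions for illustration, not values from the paper:

```python
# Illustrative sketch of the insulation-vs-heating trade-off; every
# parameter value here is an assumption, not taken from the paper.

def total_cost(thickness_m,
               insulation_price=150.0,  # EUR per m^3 installed (assumed)
               lam=0.035,               # conductivity, W/(m*K) (assumed)
               area=400.0,              # envelope area, m^2 (assumed)
               degree_hours=80_000.0,   # heating degree-hours/year (assumed)
               energy_price=0.10,       # EUR per kWh of heat (assumed)
               years=30):               # evaluation horizon (assumed)
    """Lifetime cost = one-off insulation investment + heating energy cost."""
    u_value = lam / thickness_m                        # W/(m^2*K)
    invest = insulation_price * area * thickness_m
    heat_kwh = u_value * area * degree_hours / 1000.0  # annual heat loss, kWh
    return invest + years * heat_kwh * energy_price

# Scan insulation thicknesses from 2 cm to 40 cm and pick the cost optimum.
candidates = [t / 100 for t in range(2, 41)]
best = min(candidates, key=total_cost)
```

In a SysML parametric diagram these relations would appear as constraint blocks binding the value properties; the scan at the end is the decision-support step the abstract describes.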

Meierhofer C.,TU Munich
European heart journal cardiovascular Imaging | Year: 2013

We compared flow and wall shear stress (WSS) patterns in the ascending aorta of individuals with either bicuspid aortic valve (BAV) or tricuspid aortic valve (TAV) using four-dimensional cardiovascular magnetic resonance (4D-CMR). BAVs are known to be associated with dilation and dissection of the ascending aorta. However, the cause of vessel disease in patients with BAVs is unknown. Inborn connective tissue disease and also dilation secondary to increased WSS because of altered blood flow patterns in the ascending aorta are discussed as causes for dilation of the aorta. WSS can be estimated non-invasively by 4D-CMR. Eighteen otherwise healthy individuals with functionally normal BAVs were compared prospectively with an age- and sex-matched control group of healthy individuals with TAV. Blood flow data were obtained by 4D-CMR visualization and WSS was calculated with specific software tools. Eighty-five per cent of the individuals with BAVs showed a high-grade helical flow pattern in the ascending aorta compared with 6% of the individuals with TAV. WSS in the ascending aorta was significantly altered in individuals with BAVs compared with TAV. WSS and flow patterns in the ascending aorta in patients with BAVs without concomitant valve or vessel disease are significantly different compared with TAV. The significantly higher shear forces may have an impact on the development of aortic dilation in patients with BAVs. Source

Fiener P.,University of Cologne | Auerswald K.,TU Munich | Van Oost K.,Catholic University of Louvain
Earth-Science Reviews | Year: 2011

Surface runoff and associated erosion processes adversely affect soil and surface water quality. There is increasing evidence that a sound understanding of the spatial-temporal dynamics of land use and management is crucial to understanding surface runoff processes and underpinning mitigation strategies. In this review, we synthesise the effects of (1) temporal patterns of land management of individual fields, and (2) spatio-temporal interaction of several fields within catchments by applying semivariance analysis, which allows the extent and range of the different patterns to be compared. Consistent effects of management on the temporal dynamics of surface runoff of individual fields can be identified, some of which have been incorporated into small-scale hydrological models. In contrast, the effects of patchiness, the spatial organisation of patches with different soil hydrological properties, and the effects of linear landscape structures are less well understood and are rarely incorporated in models. The main challenge for quantifying these effects arises from temporal changes within individual patches, where the largest contrasts usually occur in mid-summer and cause a seasonally varying effect of patchiness on the overall catchment response. Some studies indicate that increasing agricultural patchiness, due to decreasing field sizes, reduces the catchment-scale response to rainfall, especially in cases of Hortonian runoff. Linear structures associated with patchiness of fields (e.g. field borders, ditches, and ephemeral gullies) may either increase or decrease the hydraulic connectivity within a catchment. The largest gap in research relates to the effects and temporal variation of patch interaction, the influence of the spatial organisation of patches and the interaction with linear structures. 
In view of the substantial changes in the structure of agricultural landscapes occurring throughout the world, it is necessary to improve our knowledge of the influence of patchiness and connectivity, and to implement this knowledge in new modelling tools. © 2011 Elsevier B.V. Source
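The semivariance analysis used above to compare the extent and range of land-use patterns reduces, in its empirical form, to averaging squared differences over point pairs at a given lag. A minimal sketch with invented transect data (not from the review):

```python
# Empirical semivariogram: gamma(h) = mean of 0.5*(z_i - z_j)^2 over all
# point pairs separated by approximately the lag h. Toy data only.
import math

def semivariance(points, values, lag, tol=0.5):
    """Average semivariance over pairs whose distance is within tol of lag."""
    acc, n = 0.0, 0
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if abs(math.dist(points[i], points[j]) - lag) <= tol:
                acc += 0.5 * (values[i] - values[j]) ** 2
                n += 1
    return acc / n if n else float("nan")

# Toy 1-D transect: runoff coefficient differs between two adjacent fields.
pts = [(float(x), 0.0) for x in range(10)]
vals = [0.1] * 5 + [0.6] * 5
gamma_short = semivariance(pts, vals, lag=1.0)  # mostly within-patch pairs
gamma_long = semivariance(pts, vals, lag=5.0)   # only across-patch pairs
```

The semivariance rises with lag until it levels off at roughly the patch size (the "range"), which is how the extent of a pattern is read off a semivariogram.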

Verschoor A.,TU Munich | Langer H.F.,University of Tubingen
Thrombosis and Haemostasis | Year: 2013

Platelets have a central function in repairing vascular damage and stopping acute blood loss. They are equally central to thrombus formation in cardiovascular diseases such as myocardial infarction and ischaemic stroke. Beyond these classical prothrombotic diseases, immune mediated pathologies such as haemolytic uraemic syndrome (HUS) or paroxysmal nocturnal haemoglobinuria (PNH) also feature an increased tendency to form thrombi in various tissues. It has become increasingly clear that the complement system, part of the innate immune system, has an important role in the pathophysiology of these diseases. Not only does complement influence prothrombotic disease, it is equally involved in idiopathic thrombocytopenic purpura (ITP), an autoimmune disease characterised by thrombocytopenia. Thus, there are complex interrelationships between the haemostatic and immune systems, and platelets and complement in particular. Not only does complement influence platelet diseases such as ITP, HUS and PNH, it also mediates interaction between microbes and platelets during systemic infection, influencing the course of infection and development of protective immunity. This review aims to provide an integrative overview of the mechanisms underlying the interactions between complement and platelets in health and disease. © Schattauer 2013. Source

Kannan S.,Martin Luther University of Halle Wittenberg | Zacharias M.,TU Munich
Nucleic Acids Research | Year: 2011

Hairpin loops belong to the most important structural motifs in folded nucleic acids. The d(GNA) sequence in DNA can form very stable trinucleotide hairpin loops, depending, however, strongly on the closing base pair. Replica-exchange molecular dynamics (REMD) simulations were employed to study hairpin folding of two DNA sequences, d(gcGCAgc) and d(cgGCAcg), with the same central loop motif but different closing base pairs, starting from single-stranded structures. In both cases, conformations of the most populated conformational cluster at the lowest temperature showed close agreement with available experimental structures. For the loop sequence with the less stable G:C closing base pair, an alternative loop topology accumulated as the second most populated conformational state, indicating a possible loop structural heterogeneity. Comparative free-energy simulations on induced loop unfolding indicated a higher stability of the loop with a C:G closing base pair by ~3 kcal mol⁻¹ (compared to a G:C closing base pair), in very good agreement with experiment. The comparative energetic analysis of sampled unfolded, intermediate and folded conformational states identified electrostatic and packing interactions as the main contributions to the closing-base-pair dependence of the d(GCA) loop stability. © The Author(s) 2011. Published by Oxford University Press. Source
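The replica-exchange scheme used in this study swaps configurations between temperature replicas with a Metropolis criterion. A minimal sketch of that criterion; the energies and temperatures below are illustrative placeholders, in a real hairpin-folding run they would come from the MD force field:

```python
# REMD swap criterion: exchange replicas i and j (at temperatures T_i, T_j,
# with potential energies E_i, E_j) with Metropolis probability.
import math

K_B = 0.0019872041  # Boltzmann constant in kcal/(mol*K)

def swap_probability(e_i, e_j, t_i, t_j):
    """min(1, exp(-delta)) with delta = (1/kT_i - 1/kT_j) * (E_j - E_i)."""
    delta = (1.0 / (K_B * t_i) - 1.0 / (K_B * t_j)) * (e_j - e_i)
    return min(1.0, math.exp(-delta))

def attempt_swap(e_i, e_j, t_i, t_j, rng):
    """Accept the exchange if a uniform draw falls below the probability."""
    return rng() < swap_probability(e_i, e_j, t_i, t_j)

# When the hotter replica holds the lower energy, the swap is always accepted:
p = swap_probability(e_i=-50.0, e_j=-55.0, t_i=300.0, t_j=350.0)
```

The swaps let trapped hairpin conformations escape via high-temperature replicas while the low-temperature ensemble stays canonical.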

Hartmann J.,TU Munich
Cold Spring Harbor perspectives in biology | Year: 2011

Metabotropic glutamate receptors type 1 (mGluR1s) are required for a normal function of the mammalian brain. They are particularly important for synaptic signaling and plasticity in the cerebellum. Unlike ionotropic glutamate receptors that mediate rapid synaptic transmission, mGluR1s produce in cerebellar Purkinje cells a complex postsynaptic response consisting of two distinct signal components, namely a local dendritic calcium signal and a slow excitatory postsynaptic potential. The basic mechanisms underlying these synaptic responses were clarified in recent years. First, the work of several groups established that the dendritic calcium signal results from IP(3) receptor-mediated calcium release from internal stores. Second, it was recently found that mGluR1-mediated slow excitatory postsynaptic potentials are mediated by the transient receptor potential channel TRPC3. This surprising finding established TRPC3 as a novel postsynaptic channel for glutamatergic synaptic transmission. Source

Straub D.,TU Munich
Structural Safety | Year: 2014

When designing monitoring systems and planning inspections, engineers must assess the benefits of the additional information that can be obtained and weigh them against the cost of these measures. The value of information (VoI) concept of the Bayesian statistical decision analysis provides a formal framework to quantify these benefits. This paper presents the determination of the VoI when information is collected to increase the reliability of engineering systems. It is demonstrated how structural reliability methods can be used to effectively model the VoI and an efficient algorithm for its computation is proposed. The theory and the algorithm are demonstrated by an illustrative application to monitoring of a structural system subjected to fatigue deterioration. © 2013 Elsevier Ltd. Source
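The VoI logic described above, comparing the expected cost of optimal decisions with and without the extra data, can be shown in a toy pre-posterior calculation. All probabilities and costs below are invented for illustration; the paper computes the corresponding quantities with structural reliability methods:

```python
# Toy value-of-information (VoI) computation: two system states, one
# repair decision, one imperfect monitoring outcome. Numbers are assumed.

prior = {"intact": 0.9, "deteriorated": 0.1}
cost_failure = {"intact": 0.0, "deteriorated": 100.0}  # expected failure cost
cost_repair = 20.0  # repairing removes the failure risk entirely (assumed)

def expected_cost(p):
    """Cost of the optimal decision (repair vs. do nothing) under belief p."""
    do_nothing = sum(p[s] * cost_failure[s] for s in p)
    return min(do_nothing, cost_repair)

# Imperfect monitoring: P(outcome | state), assumed likelihoods.
like = {"alarm": {"intact": 0.1, "deteriorated": 0.8},
        "quiet": {"intact": 0.9, "deteriorated": 0.2}}

def posterior(outcome):
    """Bayes update of the state belief given the monitoring outcome."""
    unnorm = {s: like[outcome][s] * prior[s] for s in prior}
    z = sum(unnorm.values())
    return {s: v / z for s, v in unnorm.items()}

# Pre-posterior step: average the optimal post-data cost over outcomes.
p_outcome = {o: sum(like[o][s] * prior[s] for s in prior) for o in like}
cost_with_info = sum(p_outcome[o] * expected_cost(posterior(o)) for o in like)
voi = expected_cost(prior) - cost_with_info
```

Monitoring is worth installing only if `voi` exceeds its cost; in a structural application the two-state table would be replaced by a reliability analysis over the deterioration model.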

Kaiser N.,TU Munich
European Physical Journal A | Year: 2012

An exploratory study of chiral four-nucleon interactions in nuclear and neutron matter is performed. The leading-order terms arising from pion-exchange in combination with the chiral 4π-vertex and the chiral NN3π-vertex are found to be very small. Their attractive contribution to the energy per particle stays below 0.6 MeV in magnitude for densities up to ρ = 0.4 fm⁻³. We also consider the four-nucleon interaction induced by pion-exchange and twofold Δ-isobar excitation of nucleons. For most of the closed four-loop diagrams the occurring integrals over four Fermi spheres can either be solved analytically or reduced to easily manageable one- or two-parameter integrals. After summing the individually large contributions from 3-ring, 2-ring and 1-ring diagrams of alternating signs, one obtains at nuclear matter saturation density ρ0 = 0.16 fm⁻³ a moderate contribution of 2.35 MeV to the energy per particle. The curve Ē(ρ) rises rapidly with density, approximately with the third power of ρ. In pure neutron matter the analogous chiral four-body interactions lead, at the same density ρn, to a repulsive contribution that is about half as strong. The present calculation indicates that long-range multi-nucleon forces, in particular those provided by the strongly coupled πNΔ-system with its small mass-gap of 293 MeV, can still play an appreciable role for the equation of state of nuclear and neutron matter. © Società Italiana di Fisica / Springer-Verlag 2012. Source

Guadagnoli D.,TU Munich | Mohapatra R.N.,University of Maryland University College
Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics | Year: 2011

In minimal left-right symmetric models, the mass of the neutral Higgs field mediating tree-level flavor changing effects (FCNH) is directly related to the parity breaking scale. Specifically, the lower bound on the Higgs mass coming from Higgs-induced tree-level effects, and exceeding about 15 TeV, would tend to imply a WR mass bound much higher than that required by gauge exchange loop effects - the latter allowing WR masses as low as 2.5 TeV. Since a WR mass below 4 TeV is accessible at the LHC, it is important to investigate ways to decouple the FCNH effects from the WR mass. In this Letter, we present a model where this happens, providing new motivation for LHC searches for WR in the 1-4 TeV mass range. © 2010 Elsevier B.V. Source

Esposito I.,TU Munich
Der Pathologe | Year: 2012

The identification and characterization of precursor lesions is fundamental to developing screening programs for early diagnosis and treatment, with the aim of reducing cancer-related mortality. Pancreatic ductal adenocarcinoma (PDAC) is an aggressive disease that becomes clinically apparent only in advanced stages. In order to enable screening procedures for early detection of PDAC, an exact characterization of precursor lesions is of utmost importance. Pancreatic intraepithelial neoplasias (PanIN) are the most frequent and best characterized precursors of PDAC and are lesions with a ductal phenotype, thus indicating a ductal cell origin of PDAC. However, evidence from genetically engineered mouse models suggests that tubular complexes (TC) originating through a process of acinar-ductal metaplasia (ADM) form atypical flat lesions (AFL) that may represent an alternative pathway of pancreatic carcinogenesis. Based on a thorough morphological and genetic analysis of murine TC, AFL and PanIN and their human counterparts, a new dual model of pancreatic carcinogenesis is proposed, taking into account the role of AFL as possible new precursors of PDAC. Source

Haisch C.,TU Munich
Annual Review of Analytical Chemistry | Year: 2012

The number of applications using optical tomography has significantly increased over the past decade. A literature search using this term as a keyword yields 26 hits for 1990, 719 for 2000, and 9,202 for 2010. With an increasing number of applications, the number of different imaging modalities is also increasing. This review summarizes recent developments in tomographic methods for scattering and nonscattering samples. These two different cases of optical tomography are typically represented by biomedical imaging and atmospheric tomography, representing high- and low-scattering samples, respectively. An essential prerequisite for tomographic analyses is an understanding of light propagation in different media, which allows for the development of specific reconstruction algorithms for the different tomographic tasks. Copyright © 2012 by Annual Reviews. All rights reserved. Source

Hepperger P.,TU Munich
Journal of Computational Finance | Year: 2013

A numerical method for pricing Bermudan options depending on a large number of underlyings is presented. The asset prices are modeled with exponential time-inhomogeneous jump-diffusion processes. We improve the least-squares Monte Carlo method proposed by Longstaff and Schwartz, introducing an efficient variance-reduction scheme. A control variable is obtained from a low-dimensional approximation of the multivariate Bermudan option. To this end, we adapt a model reduction method called proper orthogonal decomposition (POD), which is closely related to principal component analysis, to the case of Bermudan options. Our goal is to make use of the correlation structure of the assets in an optimal way. We compute the expectation of the control variable either by solving a low-dimensional partial integro-differential equation or by applying Fourier methods. The POD approximation can also be used as a candidate for the minimizing martingale in the dual pricing approach suggested by Rogers. We evaluate both approaches in numerical experiments. © 2013, Incisive Media Ltd. All rights reserved. Source
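The POD/principal-component idea behind the control variable is to compress correlated asset moves onto a few dominant eigenvectors of their covariance. A pure-Python toy with an assumed 3-asset covariance matrix (a real implementation would apply an SVD to simulated paths):

```python
# Dominant POD mode of a correlated asset covariance via power iteration.
# The covariance matrix below is an assumption for illustration only.

def matvec(m, v):
    return [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(m))]

def power_iteration(cov, steps=200):
    """Dominant eigenvalue/eigenvector of a symmetric matrix."""
    v = [1.0] * len(cov)
    for _ in range(steps):
        w = matvec(cov, v)
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    lam = sum(matvec(cov, v)[i] * v[i] for i in range(len(v)))
    return lam, v

# Strongly correlated 3-asset covariance (assumed): one factor dominates.
cov = [[1.00, 0.95, 0.90],
       [0.95, 1.00, 0.95],
       [0.90, 0.95, 1.00]]
lam, v = power_iteration(cov)
explained = lam / sum(cov[i][i] for i in range(3))  # share of total variance
```

When one mode explains most of the variance, as here, a low-dimensional option on the projected factor tracks the full Bermudan payoff closely, which is what makes it an effective control variate.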

Reuning U.,TU Munich
Journal of Cellular Biochemistry | Year: 2011

We previously showed that integrin αvβ3 expression upon engagement by its major ligand vitronectin (VN) correlated with enhanced human ovarian cancer cell adhesion, motility, and proliferation, by triggering intracellular signaling events ultimately leading to altered gene expression. In the present study, we characterized cellular VN expression as a function of αvβ3 and noticed significant upregulation of VN protein, which was reflected by elevated VN gene transcription. In order to identify specific transcription factors involved in the αvβ3-regulatory effect on VN, we generated different VN promoter mutants. We noticed that disruption of the DNA consensus motif for Rel proteins not only prominently reduced VN promoter activity but also led to a loss of responsiveness to αvβ3, suggesting a crucial role of Rel proteins in αvβ3-provoked VN induction. In cell migration studies, we confirmed increased cell motility as a function of αvβ3/VN, which was further enhanced by raising cellular Rel transcription factor levels. Thus, the data of the present study elucidated a positive feedback regulatory loop on VN expression by αvβ3 implicating transcription factors of the Rel family. Hence, by altering the composition of the extracellular matrix upon additional VN synthesis and deposition, tumor cells might be enabled to modulate their surrounding reactive microenvironment towards enhanced αvβ3/VN-interactions and, consequently, intrinsic intracellular signaling events affecting cancer progression. © 2011 Wiley-Liss, Inc. Source

Zeltser L.M.,Columbia University | Seeley R.J.,University of Cincinnati | Tschop M.H.,TU Munich
Nature Neuroscience | Year: 2012

Maintaining energy balance is of paramount importance for metabolic health and survival. It is achieved through the coordinated regulation of neuronal circuits that control a wide range of physiological processes affecting energy intake and expenditure, such as feeding, metabolic rate, locomotor activity, arousal, growth and reproduction. Neuronal populations, distributed throughout the CNS but highly enriched in the mediobasal hypothalamus, sense hormonal, nutrient and neuronal signals of systemic energy status and relay this information to secondary neurons that integrate the information and regulate distinct physiological parameters in a manner that promotes energy homeostasis. To achieve this, it is critical that neuronal circuits provide information about short-term changes in nutrient availability in the larger context of long-term energy status. For example, the same signals lead to different cellular and physiological responses if delivered under fasted versus fed conditions. Thus, there is a clear need for mechanisms that rapidly and reversibly adjust the responsiveness of hypothalamic circuits to acute changes in nutrient availability. © 2012 Nature America, Inc. All rights reserved. Source

Granvogl M.,TU Munich
Journal of Agricultural and Food Chemistry | Year: 2014

Three stable isotope dilution assays (SIDAs) were developed for the quantitation of (E)-2-butenal (crotonaldehyde) in heat-processed edible fats and oils as well as in food, using synthesized [13C4]-crotonaldehyde as internal standard. First, a direct headspace GC-MS method was developed, followed by two indirect methods based on derivatization with either pentafluorophenylhydrazine (GC-MS) or 2,4-dinitrophenylhydrazine (LC-MS/MS). All methods are also suitable for the quantitation of acrolein using the standard [13C3]-acrolein. Applying these three methods to five different types of fats and oils varying in their fatty acid compositions revealed significantly varying crotonaldehyde concentrations for the different samples, but nearly identical quantitative data for all methods. The amounts of crotonaldehyde formed were dependent not only on the type of oil, e.g., 0.29-0.32 mg/kg of coconut oil or 33.9-34.4 mg/kg of linseed oil after heat-processing for 24 h at 180 °C, but also on the applied temperature and time. The results indicated that the concentration of formed crotonaldehyde seemed to be correlated with the amount of linolenic acid in the oils. Furthermore, the formation of crotonaldehyde was compared to that of its homologue acrolein, demonstrating that acrolein was always present in higher amounts in heat-processed oils, e.g., 12.3 mg of crotonaldehyde/kg of rapeseed oil in comparison to 23.4 mg of acrolein/kg after 24 h at 180 °C. Finally, crotonaldehyde was also quantitated in fried food, revealing concentrations from 12 to 25 μg/kg for potato chips and from 8 to 19 μg/kg for donuts, depending on the oil used. © 2014 American Chemical Society. Source
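The quantitation step of a SIDA reduces to simple ratio arithmetic, since the analyte and the isotopically labelled standard behave identically during work-up. A hypothetical sketch; the peak areas, spike amount, sample mass, and the idealised response factor of 1 are all invented for illustration:

```python
# Core SIDA arithmetic: analyte amount from the measured analyte/standard
# peak-area ratio and the known spike of labelled standard. Toy numbers.

def sida_concentration(area_analyte, area_standard, spike_ug,
                       sample_g, response_factor=1.0):
    """Analyte concentration in mg/kg (equivalently ug/g)."""
    analyte_ug = response_factor * (area_analyte / area_standard) * spike_ug
    return analyte_ug / sample_g

# Hypothetical run: 5 ug of labelled standard spiked into 2 g of oil,
# measured analyte/standard area ratio of 4.9.
c = sida_concentration(area_analyte=4.9, area_standard=1.0,
                       spike_ug=5.0, sample_g=2.0)
```

In practice the response factor is calibrated from mixtures of known analyte/standard ratios rather than assumed to be 1.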

Hintermann L.,TU Munich
Topics in Organometallic Chemistry | Year: 2010

Progress in the field of metal-catalyzed redox-neutral additions of oxygen nucleophiles (water, alcohols, carboxylic acids, and others) to alkenes, alkynes, and allenes between 2001 and 2009 is critically reviewed. Major advances in reaction chemistry include development of chiral Lewis acid catalyzed asymmetric oxa-Michael additions and Lewis-acid catalyzed hydro-alkoxylations of nonactivated olefins, as well as further development of Markovnikov-selective cationic gold complex-catalyzed additions of alcohols or water to alkynes and allenes. © 2010 Springer Berlin Heidelberg. Source

Garbrecht B.,RWTH Aachen | Garny M.,TU Munich
Annals of Physics | Year: 2012

We derive solutions to the Schwinger-Dyson equations on the Closed-Time-Path for a scalar field in the limit where backreaction is neglected. In Wigner space, the two-point Wightman functions have the curious property that the equilibrium component has a finite width, while the out-of-equilibrium component has zero width. This feature is confirmed in a numerical simulation for scalar field theory with quartic interactions. When substituting these solutions into the collision term, we observe that an expansion including terms of all orders in gradients leads to an effective finite width. Moreover, we observe no breakdown of perturbation theory of the kind sometimes associated with pinch singularities. The effective width is identical with the width of the equilibrium component. Therefore, reconciliation is achieved between the zero-width behaviour and the usual notion in kinetic theory that the out-of-equilibrium contributions have a finite width as well. This result may also be viewed as a generalisation of the fluctuation-dissipation relation to out-of-equilibrium systems with negligible backreaction. © 2011 Elsevier Inc. Source

Zasada I.,Leibniz Center for Agricultural Landscape Research | Zasada I.,TU Munich
Land Use Policy | Year: 2011

Peri-urban areas around urban agglomerations in Europe and elsewhere have been subject to agricultural and land use research for the past three decades. The manner in which farming responds to urban pressures, socio-economic changes and development opportunities has been the main focus of examination, with urban demand for rural goods and services representing a driving factor to adapt farming activities in a multifunctional way. Working within the peri-urban framework, this review pays particular attention to the relevance of multifunctional agriculture. Academic discourses and empirical insights related to farm structure and practices beyond conventional agriculture are analysed. Diversification, recreational and environmental farming, landscape management and specialisation, as well as direct marketing are all taken into consideration and discussed within the context of landscape functions. The provision of rural goods and services is contrasted with societal demands on peri-urban agriculture. This review finds that multifunctional agriculture has been commonly recognised in peri-urban areas - a phenomenon that includes a large variety of activities and diversification approaches within the context of environmental, social and economic functions of agriculture. In response to the post-productive, consumption-oriented requirements of the urban society, peri-urban farmers have intensified their uptake of multifunctional activities. Nevertheless, not all multifunctional opportunities are being fully developed when one considers the large and growing urban demand for goods and services provided by agriculture carried out near the city. This paper discusses policy and planning approaches to support multifunctional agriculture in peri-urban areas. © 2011 Elsevier Ltd. Source

Huckelhoven R.,TU Munich | Panstruga R.,RWTH Aachen | Panstruga R.,Max Planck Institute for Plant Breeding Research
Current Opinion in Plant Biology | Year: 2011

Powdery mildew fungi represent a paradigm for obligate biotrophic parasites, which only propagate in long-lasting intimate interactions with living host cells. These highly specialized phytopathogens induce re-organization of host cell architecture and physiology for their own demands. This probably includes the corruption of basal host cellular functions for successful fungal pathogenesis. Recent studies revealed secretory processes by both interaction partners as key incidents of the combat at the plant-fungus interface. The analysis of cellular events during plant-powdery mildew interactions may not only lead to a better understanding of plant pathological features, but may also foster novel discoveries in the area of plant cell biology. © 2011 Elsevier Ltd. Source

Huffman B.,TU Munich
ACM SIGPLAN Notices | Year: 2012

We present techniques for reasoning about constructor classes that (like the monad class) fix polymorphic operations and assert polymorphic axioms. We do not require a logic with first-class type constructors, first-class polymorphism, or type quantification; instead, we rely on a domain-theoretic model of the type system in a universal domain to provide these features. These ideas are implemented in the Tycon library for the Isabelle theorem prover, which builds on the HOLCF library of domain theory. The Tycon library provides various axiomatic type constructor classes, including functors and monads. It also provides automation for instantiating those classes, and for defining further subclasses. We use the Tycon library to formalize three Haskell monad transformers: the error transformer, the writer transformer, and the resumption transformer. The error and writer transformers do not universally preserve the monad laws; however, we establish datatype invariants for each, showing that they are valid monads when viewed as abstract datatypes. Copyright © 2012 ACM. Source
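The monad laws that the Tycon library verifies formally in Isabelle/HOLCF can be illustrated informally with a writer monad over the string monoid. This Python sketch is only an analogy to the Haskell/Isabelle formulation, not the library's code:

```python
# Writer monad over strings: a computation returns (value, log), and bind
# concatenates logs. We check the three monad laws on sample values.

def unit(x):
    """return/unit: wrap a value with an empty log."""
    return (x, "")

def bind(m, f):
    """Sequence a computation, concatenating the logs (the monoid)."""
    x, log1 = m
    y, log2 = f(x)
    return (y, log1 + log2)

# Example Kleisli arrows that both compute and log.
double = lambda x: (2 * x, "doubled;")
incr = lambda x: (x + 1, "incremented;")

left_id = bind(unit(3), double) == double(3)                 # left identity
right_id = bind(double(3), unit) == double(3)                # right identity
assoc = (bind(bind(unit(3), double), incr)
         == bind(unit(3), lambda x: bind(double(x), incr)))  # associativity
```

The laws hold here because string concatenation with `""` is a lawful monoid, which mirrors the paper's point that the writer transformer is a valid monad only under the right datatype invariants.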

Tellier A.,TU Munich | Lemaire C.,CNRS Research Institute on Horticulture and Seeds
Molecular Ecology | Year: 2014

Population genetics theory has laid the foundations for genomic analyses, including the recent burst in genome scans for selection and statistical inference of past demographic events in many prokaryote, animal and plant species. Identifying SNPs under natural selection and underpinning species adaptation relies on disentangling the respective contributions of random processes (mutation, drift, migration) and of selection to nucleotide variability. Most theory and statistical tests have been developed using the Kingman coalescent theory based on the Wright-Fisher population model. However, these theoretical models rely on biological and life history assumptions which may be violated in many prokaryote, fungal, animal or plant species. Recent theoretical developments of the so-called multiple merger coalescent models are reviewed here (Λ-coalescent, beta-coalescent, Bolthausen-Sznitman, Ξ-coalescent). We explain how these new models take into account various pervasive ecological and biological characteristics, life history traits or life cycles which were not accounted for in previous theories, such as (i) the skew in offspring production typical of marine species, (ii) fast-adapting microparasites (viruses, bacteria and fungi) exhibiting large variation in population sizes during epidemics, (iii) the peculiar life cycles of fungi and bacteria alternating sexual and asexual cycles and (iv) the high rates of extinction-recolonization in spatially structured populations. We finally discuss the relevance of multiple merger models for the detection of SNPs under selection in these species and for population genomics of very large sample size, and advocate re-examining the conclusions of previous population genetics studies. © 2014 John Wiley & Sons Ltd. Source
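The Kingman coalescent that the multiple-merger models generalise is easy to simulate: with k lineages, the waiting time to the next (strictly pairwise) merger is exponential with rate k(k-1)/2. A minimal sketch in coalescent time units:

```python
# Kingman coalescent: simulate the time to the most recent common ancestor
# (TMRCA) of a sample of n lineages. Baseline model; multiple-merger
# coalescents would replace the pairwise-merger rates below.
import random

def kingman_tmrca(n, rng):
    """One realisation of the TMRCA for a sample of size n."""
    t = 0.0
    for k in range(n, 1, -1):
        rate = k * (k - 1) / 2  # pairwise coalescence rate with k lineages
        t += rng.expovariate(rate)
    return t

rng = random.Random(42)
samples = [kingman_tmrca(10, rng) for _ in range(20000)]
mean_tmrca = sum(samples) / len(samples)
# Theory: E[TMRCA] = 2*(1 - 1/n) = 1.8 for n = 10.
```

Multiple-merger (Λ-) coalescents change exactly this step: more than two lineages may merge at once, which shortens internal branches and distorts the site-frequency spectrum used by selection scans.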

Bornemann F.,TU Munich
Foundations of Computational Mathematics | Year: 2011

High-order derivatives of analytic functions are expressible as Cauchy integrals over circular contours, which can very effectively be approximated, e.g., by trapezoidal sums. Whereas analytically every radius r up to the radius of convergence is equivalent, numerical stability strongly depends on r. We give a comprehensive study of this effect; in particular, we show that there is a unique radius that minimizes the loss of accuracy caused by round-off errors. For large classes of functions, though not for all, this radius actually gives about full accuracy; a remarkable fact that we explain by the theory of Hardy spaces, by the Wiman-Valiron and Levin-Pfluger theory of entire functions, and by the saddle-point method of asymptotic analysis. Many examples and nontrivial applications are discussed in detail. © 2010 SFoCM. Source
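The method and the radius effect can be demonstrated in a few lines: approximate the Cauchy integral by a trapezoidal sum on a circle and compare a reasonable radius with a tiny one. This is a toy check of the stability phenomenon, not the paper's optimal-radius analysis:

```python
# Trapezoidal-sum approximation of the Cauchy integral for derivatives:
#   f^(n)(0) = n!/(2*pi*i) * integral over |z| = r of f(z)/z^(n+1) dz,
# discretised with m equispaced points on the contour.
import cmath, math

def derivative_at_zero(f, n, r, m=256):
    """Approximate f^(n)(0) from m samples on a circle of radius r."""
    total = 0.0 + 0.0j
    for k in range(m):
        z = r * cmath.exp(2j * math.pi * k / m)
        total += f(z) / z ** n
    return (math.factorial(n) / m) * total

# f = exp: every derivative at 0 equals 1. A well-chosen radius is accurate;
# a tiny radius loses many digits to round-off, as the paper studies.
d10 = derivative_at_zero(cmath.exp, 10, r=1.0)
d10_tiny = derivative_at_zero(cmath.exp, 10, r=0.01)
```

At r = 0.01 the summands have magnitude about 10^20 and must cancel to a tiny result, so round-off destroys the answer, while r = 1 recovers nearly full accuracy; the paper characterises the radius where this trade-off is optimal.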

Asami S.,Helmholtz Center Munich | Reif B.,Helmholtz Center Munich | Reif B.,TU Munich
Accounts of Chemical Research | Year: 2013

When applied to biomolecules, solid-state NMR suffers from low sensitivity and resolution. The major obstacle to applying proton detection in the solid state is the proton dipolar network, and deuteration can help avoid this problem. In the past, researchers had primarily focused on the investigation of exchangeable protons in these systems. In this Account, we review NMR spectroscopic strategies that allow researchers to observe aliphatic non-exchangeable proton resonances in proteins with high sensitivity and resolution. Our labeling scheme is based on u-[²H,¹³C]-glucose and 5-25% H₂O (95-75% D₂O) in the M9 bacterial growth medium, known as RAP (reduced adjoining protonation). We highlight spectroscopic approaches for obtaining resonance assignments, a prerequisite for any study of structure and dynamics of a protein by NMR spectroscopy. Because of the dilution of the proton spin system in the solid state, solution-state NMR ¹HCC¹H-type strategies cannot easily be transferred to these experiments. Instead, we needed to pursue (¹H)CC¹H, CC¹H, ¹HCC or (²H)CC¹H type experiments. In protonated samples, we obtained distance restraints for structure calculations from samples grown in bacteria in media containing [1,3]-¹³C-glycerol, [2]-¹³C-glycerol, or selectively enriched glucose to dilute the ¹³C spin system. In RAP-labeled samples, we obtained a similar dilution effect by randomly introducing protons into an otherwise deuterated matrix. This isotopic labeling scheme allows us to measure the long-range contacts among aliphatic protons, which can then serve as restraints for the three-dimensional structure calculation of a protein. Due to the high gyromagnetic ratio of protons, longer-range contacts are more easily accessible for these nuclei than for carbon nuclei in homologous experiments. Finally, the RAP labeling scheme allows access to dynamic parameters, such as longitudinal relaxation times T₁ and order parameters S² for backbone and side-chain carbon resonances. We expect that these measurements will open up new opportunities to obtain a more detailed description of protein backbone and side chain dynamics. © 2013 American Chemical Society. Source

Winkler A.S.,TU Munich
Pathogens and Global Health | Year: 2012

Neurocysticercosis has been recognized as a major cause of secondary epilepsy worldwide. So far, most of the knowledge about the disease comes from Latin America and the Indian subcontinent. Unfortunately, in sub-Saharan Africa the condition was neglected for a long time, mainly owing to the lack of appropriate diagnostic tools. This review therefore focuses on the prevalence of neurocysticercosis in sub-Saharan Africa, the clinical picture with emphasis on epilepsy, as well as the diagnosis and treatment of neurocysticercosis and its related epilepsy/epileptic seizures in African resource-poor settings. © W. S. Maney & Son Ltd 2012. Source

Knopf A.,TU Munich
Rheumatology (Oxford, England) | Year: 2011

To describe the clinical manifestations of rheumatic disorders with isolated head and neck (H&N) affection and to introduce a novel diagnostic pathway. From 2004 to 2010, 90 patients who presented with isolated H&N symptoms of a rheumatic disorder were included in the study. Rheumatic disorders were classified according to the ACR criteria. In 2008, we introduced a novel diagnostic pathway to reduce under-diagnosis of primary rheumatic disorders in the H&N. Disease-related data were assessed retrospectively and set into clinical context. The majority of patients suffered from SS (n = 42), granulomatosis with polyangiitis (Wegener's) (n = 13) and sarcoidosis (n = 18), with a predominance of female patients (n = 65). Enlargement of the major salivary glands (n = 47), sicca symptoms (n = 41) and cervical lymphadenopathy (n = 25) represented the most frequent symptoms. Interestingly, 3% of all enlargements of salivary glands and 4% of all cervical lymphadenopathies could be attributed to rheumatic disorders. The mean time to diagnosis was 20.71 months for SS, 8.4 months for granulomatosis with polyangiitis and 57.5 months for sarcoidosis. After implementation of the newly developed diagnostic pathway in 2008, the number of annually diagnosed rheumatic disorders increased 5-fold. The majority of rheumatic diseases of the H&N can be related to SS, granulomatosis with polyangiitis and sarcoidosis. However, the lack of specific symptoms and the clinical variability of H&N manifestations may contribute to a prolonged time to diagnosis. Our retrospective study points out the variability of symptoms and suggests a diagnostic pathway to reduce the cases of undetected H&N affection in rheumatic disorders. Source

Aydın A.A.,Technical University of Istanbul | Aydın A.A.,TU Munich
Solar Energy Materials and Solar Cells | Year: 2012

A series of diesters of high-chain dicarboxylic acids with 1-tetradecanol (myristyl alcohol) was synthesized, for the first time, from decanedioic, dodecanedioic and tetradecanedioic acids under vacuum and in the absence of catalyst. These diesters were investigated in particular for their thermo-physical properties, with a view to their use as phase change materials (PCMs) in thermal energy storage. The purity of the syntheses was controlled via FT-IR, GC-MS and elemental analyses, and thermo-physical properties were determined with a differential scanning calorimeter (DSC) and a thermo-gravimetric analyzer (TGA). Thermal properties of the diesters were expressed in terms of phase change temperature, enthalpy, specific heat (Cp), thermal decomposition and reliability after 1000 thermal cycles, with the necessary statistical data. In addition, the GC-MS data are presented to specify the mass fragmentation fingerprints of the diesters. The yield of diester formation was found to be in the range of 95-97%. The DSC analyses indicated that the melting temperatures of the high-chain diesters with myristyl alcohol were between 50°C and 58°C, with phase change enthalpies above 200 kJ/kg. The results showed that these materials are well suited for low-temperature heat transfer applications, offering good thermal properties and reliability. © 2012 Elsevier B.V. All rights reserved. Source

Gobert C.,TU Munich
Journal of Turbulence | Year: 2010

For large eddy simulation of particle-laden flow, the effect of the unresolved scales on the particles needs to be modelled. The present work contains an analysis of three such models, namely the approximate deconvolution method (ADM) as proposed by Kuerten (Subgrid modeling in particle-laden channel flow, Phys. Fluids 18 (2006), p. 025108) and two stochastic models proposed by Shotorban and Mashayek (A stochastic model for particle motion in large-eddy simulations, J. Turbul. 7 (2006), p. N18) and Simonin, Deutsch and Minier (Eulerian prediction of the fluid/particle correlated motion in turbulent two-phase flow, Appl. Sci. Res. 51 (1993), pp. 275-283). The purpose of the analysis is twofold: on the one hand, the results serve for model selection depending on the application, and on the other hand, the analysis shows possibilities for model improvement. The present work contains, for each model, an analytical computation of averages (first moments) and root-mean-square values (second moments) of particle velocity, particle position and fluid velocity seen by the particles. Results indicate that for large Stokes numbers, ADM yields higher accuracy than the stochastic models analysed, whereas for small Stokes numbers the stochastic models give higher accuracy. An analytical estimate for the model error is given as a function of Stokes number and the energy spectrum of the flow. This information can be used for model selection and for reliability prediction. Furthermore, it is shown that both stochastic models contain a deficient convective term, which should be avoided in future model development. The analysis is backed by numerical results from the literature. © 2010 Taylor & Francis. Source

Baumann M.,TU Munich
Journal of the American Society of Hypertension | Year: 2012

Reactive derivatives of nonenzymatic glucose-protein condensation reactions, as well as lipids and nucleic acids exposed to reducing sugars, form a heterogeneous group of irreversible adducts called "advanced glycation endproducts" (AGEs). Numerous studies have investigated the role of AGEs in diabetic subjects; however, their role in hypertension and the cardiovascular system has been less intensively investigated in clinical studies. This review summarizes clinical data on AGEs and their action on the receptor of AGEs (RAGE) with respect to blood pressure and vascular disease, to update the clinician on this important pathway. In summary, clinical data on the AGE-RAGE axis at the moment do not provide evidence for a role in hypertension, but do for vascular disease, in the macrocirculation as well as the microcirculation. Potential causes, such as local deposition or signaling pathways, are discussed in the context of the literature. Finally, the small number of interventional studies is summarized, pointing to a need for more interventional trials with respect to AGEs and vascular disease. Animal data are explicitly excluded to strengthen the clinical focus and to increase the relevance for clinicians. © 2012 Published by Elsevier Inc on behalf of American Society of Hypertension. Source

Haisch C.,TU Munich
Measurement Science and Technology | Year: 2012

Many different techniques, such as UV/vis absorption, IR spectroscopy, fluorescence and Raman spectroscopy, are routinely applied in chemical (micro-)analysis and chemical imaging, and a large variety of instruments is commercially available. Up to now, opto- or photoacoustic (PA) and other optothermal (OT) methods have been less common, and only a limited number of instruments have reached a level of application beyond prototypes in research laboratories. The underlying principle of all these techniques is the detection of local heating due to the conversion of light into heat by optical absorption. Considering the versatility, robustness and instrumental simplicity of many PA techniques, it is surprising that the number of commercial instruments based on such approaches is so small. The impetus of this review is to summarize basic principles and possible applications described in the literature, in order to foster routine application of these techniques in industry, process analysis and environmental screening. While the terms OT and PA methods cover a very wide range of methods and physical phenomena, this review will concentrate on techniques with applications for analytical measurements. © 2012 IOP Publishing Ltd. Source

Ramakrishnan R.,TU Munich
Journal of Chemical Education | Year: 2013

A program is described and presented to readily plot the molecular orbitals from a Hückel calculation. The main features of the program and the scope of its applicability are discussed through some example organic molecules. © 2012 The American Chemical Society and Division of Chemical Education, Inc. Source
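The article's plotting program itself is not reproduced here, but the calculation behind it is compact enough to sketch. In the standard Hückel treatment, the π-MO energies are eigenvalues of H = αI + βA, with A the adjacency matrix of the conjugated framework; the sketch below (our illustration, not the published program) solves the classic 1,3-butadiene case.

```python
import numpy as np

# Hueckel treatment of 1,3-butadiene: H = alpha*I + beta*A, where A is the
# adjacency matrix of the four-carbon pi system. Energies are reported as the
# eigenvalues x of A, i.e. E = alpha + x*beta; since beta < 0, the largest x
# corresponds to the most strongly bonding MO.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

energies, orbitals = np.linalg.eigh(A)   # columns of `orbitals` are the MOs
print(np.round(energies, 3))             # eigenvalues: +/-1.618, +/-0.618
```

Plotting a molecular orbital then amounts to drawing circles at the atomic positions scaled by the coefficients in the corresponding column of `orbitals`, which is essentially what such a teaching program automates.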

Brockow K.,TU Munich
Hautarzt | Year: 2014

Drug hypersensitivity reactions affect more than seven percent of the population and are a concern for patients and doctors alike. In a substantial part of such reactions, IgE-mediated mechanisms have been documented. Clinical manifestations of immediate reactions, which occur directly after drug intake (mostly ≤1 h), are acute urticaria, angioedema, dyspnea and other symptoms of anaphylaxis in the skin, gastrointestinal tract, respiratory tract or cardiovascular system. Although they normally lead to milder reactions, drugs are also the most frequent elicitors of fatal anaphylaxis. The median time interval between systemic drug application and clinical death is 5 min. The most common elicitors of immediate reactions are analgesics, antibiotics, radiocontrast media and muscle relaxants. The aim of history- and experience-guided skin tests ± laboratory tests is to document a sensitization; this depends on the eliciting drug and is successful in less than half of the patients. Otherwise, a drug provocation test under controlled conditions is necessary to clarify the diagnosis and to confirm or exclude a drug hypersensitivity reaction. Therapy consists of drug avoidance or, where the indication is pressing, of tolerance induction by "drug desensitization". © 2014 Springer-Verlag. Source

Glebe T.W.,TU Munich
American Journal of Agricultural Economics | Year: 2013

Most bid evaluation systems in conservation auctions consider both the proposed payment and the environmental attributes. When auctioneers have a more comprehensive understanding of the conservation benefits than bidders do, information becomes a central element of the auction design. Concealing information about conservation benefits may be the optimal strategy when entry decisions are not relevant. However, disclosing information may motivate landholders whose lands are associated with high environmental benefits to participate in an auction. The present study demonstrates that revealing information about conservation benefits can be an optimal strategy if it enhances an auction's participation rate when bid acceptance rates are high. © The Author (2013). Source

Bornkessel-Schlesewsky I.,University of South Australia | Bornkessel-Schlesewsky I.,University of Marburg | Schlesewsky M.,Johannes Gutenberg University Mainz | Small S.L.,University of California at Irvine | And 2 more authors.
Trends in Cognitive Sciences | Year: 2015

Here, we present a new perspective on an old question: how does the neurobiology of human language relate to brain systems in nonhuman primates? We argue that higher-order language combinatorics, including sentence and discourse processing, can be situated in a unified, cross-species dorsal-ventral streams architecture for higher auditory processing, and that the functions of the dorsal and ventral streams in higher-order language processing can be grounded in their respective computational properties in primate audition. This view challenges an assumption, common in the cognitive sciences, that a nonhuman primate model forms an inherently inadequate basis for modeling higher-level language functions. © 2014 Elsevier Ltd. Source

Zeller P.,Ludwig Maximilians University of Munich | Gunther S.,TU Munich
New Journal of Physics | Year: 2014

We present a systematic investigation of two coinciding lattices and their spatial beating frequencies that lead to the formation of moiré patterns. A mathematical model was developed and applied to the case of a hexagonally arranged adsorbate on a hexagonal support lattice. In particular, it describes the moiré patterns observed for graphene grown on a hexagonally arranged transition metal surface, a system that serves as one of the promising synthesis routes for the formation of this much sought-after material. The presented model uses a geometric construction that derives analytic expressions for first and higher order beating frequencies occurring for arbitrarily oriented graphene on the underlying substrate lattice. By solving the corresponding equations, we predict the size and orientation of the resulting moiré pattern. Adding the constraints for commensurability delivers further solvable analytic equations that predict whether or not first or higher order commensurable phases occur. We explicitly treat the case of first, second and third order commensurable phases. The universality of our approach is tested by comparing our data with moiré patterns that are experimentally observed for graphene on Ir(111) and on Pt(111). Our analysis can be applied to graphene, hexagonal boron nitride (h-BN), or other sp2-networks grown on any hexagonally packed support surface, predicting the size, orientation and properties of the resulting moiré patterns. In particular, we can determine which commensurate phases are expected for these systems. The derived information can be used to critically discuss the moiré phases reported in the literature. © 2014 IOP Publishing Ltd and Deutsche Physikalische Gesellschaft. Source
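The paper's full geometric construction covers higher orders and commensurability, but the first-order beat alone already conveys the idea. The sketch below is a standard first-order estimate (our illustration, not the authors' derivation): the moiré reciprocal vector is the difference of the two lattice reciprocal vectors, giving a period λ = ab/√(a² + b² − 2ab cos θ) for lattice constants a, b rotated by θ.

```python
from math import sqrt, cos, radians

def moire_period(a, b, theta_deg=0.0):
    """First-order moire period from the beat of two lattices with constants
    a (overlayer) and b (substrate) rotated by theta: |k_m| = |k_a - k_b|,
    with |k| = 2*pi / lattice constant."""
    th = radians(theta_deg)
    return a * b / sqrt(a * a + b * b - 2.0 * a * b * cos(th))

# Aligned graphene (a ~ 2.46 A) on Ir(111) (b ~ 2.715 A): a period of a few
# nanometres, in the ballpark of the experimentally observed superstructure.
print(round(moire_period(2.46, 2.715), 1))   # in Angstrom
```

Rotating the overlayer shrinks the beat period rapidly, which is why small misorientations produce markedly smaller moiré cells; the higher-order and commensurate cases require the full analytic treatment of the paper.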

Steffens M.,TU Munich | Buddenbaum H.,University of Trier
Geoderma | Year: 2013

The physical and chemical heterogeneities of soils are the source of a multitude of spatial domains supporting a vast functional diversity of soil properties. But many studies do not consider the spatial variability of soil types, diagnostic horizons and properties. These lateral and vertical heterogeneities of soils or soil horizons are mostly neglected, owing to limitations in the available soil data and a lack of techniques to gather the information. We present a fast imaging technique that enables the spatially accurate classification of diagnostic horizons and the mapping of various elemental concentrations (e.g. carbon and iron) for large, undisturbed soil samples (100 × 300 mm) with a high spatial resolution of 63 × 63 μm per pixel. We sampled a stagnic Luvisol (siltic, greyic) under a Norway spruce monoculture in southern Germany (Freising, Bavaria) using a stainless steel box (100 × 100 × 300 mm³). A laboratory-based hyperspectral camera with a spectral range of 400 to 990 nm in 160 bands and a ground sampling distance of 63 × 63 μm per pixel was used. After recording the images, we took 66 samples from characteristic Luvisol horizons and representative spots and determined the concentrations of carbon, nitrogen, aluminium, iron and manganese with standard laboratory methods. These concentrations were correlated to the measured spectra of the sampled spots and used for regression analyses to extrapolate the elemental concentrations to the complete image.
The image data of the soil profile were analysed in three different ways with increasing complexity: 1) geostatistics were used to assess the spatial variability of the soil profile as a whole without previous knowledge; 2) supervised spectral angle mapper classification was used to classify diagnostic horizons following the WRB guidelines; and 3) three different chemometric regression algorithms (narrow band indices, partial least squares regression and support vector machine regression, each used on reflectance spectra and continuum-removed spectra) were tested to extrapolate the elemental concentrations of the sampling areas to the complete image and calculate high-resolution chemometric maps of the five elements. Laboratory imaging spectroscopy enables the fast evaluation of a soil's spatial variability without previous knowledge, the mapping of diagnostic horizons, different qualities of OM, soil mottling and various elemental concentrations for large undisturbed soil samples with a high spatial resolution. It has the potential to significantly improve soil classification, the assessment of elemental budgets and balances, and the understanding of soil forming processes and mechanisms. © 2012 Elsevier B.V. Source
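Of the three regression approaches mentioned, the narrow-band index is the simplest to illustrate. The sketch below uses synthetic spectra (the band positions, feature shape and concentration range are our assumptions, not the study's data) to show the calibration step: build an index from an absorption band and a reference band, then fit a linear model that can subsequently be applied pixel by pixel to map an element.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the 66 calibration spots with 160-band spectra
# (400-990 nm): an absorption feature that deepens with concentration.
bands = np.linspace(400, 990, 160)
conc = rng.uniform(0.5, 5.0, 66)                 # e.g. % organic carbon (assumed)
spectra = (1.0 - conc[:, None] * 0.05 *
           np.exp(-((bands - 660) / 60.0) ** 2) +
           rng.normal(0, 0.003, (66, 160)))

# Narrow-band index: normalised difference of the absorption band (~660 nm)
# against a reference band (~900 nm), then a linear calibration against the
# laboratory-measured concentrations.
i_abs, i_ref = np.abs(bands - 660).argmin(), np.abs(bands - 900).argmin()
ndi = (spectra[:, i_ref] - spectra[:, i_abs]) / (spectra[:, i_ref] + spectra[:, i_abs])
slope, intercept = np.polyfit(ndi, conc, 1)

pred = slope * ndi + intercept     # apply the same formula per pixel for a map
r = float(np.corrcoef(conc, pred)[0, 1])
print(r > 0.95)
```

PLS and support vector regression replace the two-band index with a multivariate model over all 160 bands, which is what makes them more robust to overlapping absorption features.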

Mendl C.B.,TU Munich | Lin L.,Lawrence Berkeley National Laboratory
Physical Review B - Condensed Matter and Materials Physics | Year: 2013

The many-body Coulomb repulsive energy of strictly correlated electrons provides direct information on the exact Hohenberg-Kohn exchange-correlation functional in the strong interaction limit. Until now the treatment of strictly correlated electrons has been based on the calculation of comotion functions with the help of semianalytic formulations. This procedure is system-specific and has been limited to spherically symmetric atoms and strictly one-dimensional systems. We develop a nested optimization method which solves the Kantorovich dual problem directly, and thus facilitates a general treatment of strictly correlated electrons for systems including atoms and small molecules. © 2013 American Physical Society. Source

Leucht S.,TU Munich | Davis J.M.,University of Illinois at Chicago
British Journal of Psychiatry | Year: 2011

The classification system of atypical and typical antipsychotics has created a lot of confusion and might be abandoned. Nevertheless, to say that all drugs are the same and that therefore it does not matter which drug is given is wrong. Both typical and atypical antipsychotics differ in side-effects, mechanisms of action, cost and efficacy. The available choice of antipsychotics should be adapted to individual patients in a shared decision-making process. Source

Leucht S.,TU Munich
Cochrane database of systematic reviews (Online) | Year: 2012

The symptoms and signs of schizophrenia have been firmly linked to high levels of dopamine in specific areas of the brain (limbic system). Antipsychotic drugs block the transmission of dopamine in the brain and reduce the acute symptoms of the disorder. This review examined whether antipsychotic drugs are also effective for relapse prevention. To review the effects of maintaining antipsychotic drugs for people with schizophrenia compared to withdrawing these agents. We searched the Cochrane Schizophrenia Group's Specialised Register (November 2008), with additional searches of MEDLINE, EMBASE and clinicaltrials.gov (June 2011). We included all randomised trials comparing maintenance treatment with antipsychotic drugs and placebo for people with schizophrenia or schizophrenia-like psychoses. We extracted data independently. For dichotomous data we calculated relative risks (RR) and their 95% confidence intervals (CI) on an intention-to-treat basis based on a random-effects model. For continuous data, we calculated mean differences (MD) or standardised mean differences (SMD), again based on a random-effects model. The review currently includes 65 randomised controlled trials (RCTs) and 6493 participants comparing antipsychotic medication with placebo. The trials were published from 1959 to 2011 and their size ranged between 14 and 420 participants. In many studies the methods of randomisation, allocation and blinding were poorly reported. Although this and other potential sources of bias limited the overall quality, the efficacy of antipsychotic drugs for maintenance treatment in schizophrenia was clear. Antipsychotic drugs were significantly more effective than placebo in preventing relapse at seven to 12 months (primary outcome; drug 27%, placebo 64%, 24 RCTs, n=2669, RR 0.40 CI 0.33 to 0.49, number needed to treat for an additional beneficial outcome (NNTB) 3 CI 2 to 3).
Hospitalisation was also reduced; however, the baseline risk was lower (drug 10%, placebo 26%, 16 RCTs, n=2090, RR 0.38 CI 0.27 to 0.55, NNT 5 CI 4 to 9). More participants in the placebo group than in the antipsychotic drug group left the studies early for any reason (at 7-12 months: drug 38%, placebo 66%, 18 RCTs, n=2420, RR 0.55 CI 0.46 to 0.66, NNTB 4 CI 3 to 5) and due to inefficacy of treatment (at 7-12 months: drug 20%, placebo 50%, 18 RCTs, n=2420, RR 0.36 CI 0.28 to 0.45, NNTB 3 CI 2 to 4). Quality of life was better in drug-treated participants (3 RCTs, n=527, SMD -0.62 CI -1.15 to -0.09). Conversely, antipsychotic drugs as a group, and irrespective of duration, were associated with more participants experiencing movement disorders (e.g. at least one movement disorder: drug 16%, placebo 9%, 22 RCTs, n=3411, RR 1.55 CI 1.25 to 1.93, NNTH 25 CI 13 to 100), sedation (drug 13%, placebo 9%, 10 RCTs, n=146, RR 1.50 CI 1.22 to 1.84, number needed to treat for an additional harmful outcome (NNTH) not significant) and weight gain (drug 10%, placebo 6%, 10 RCTs, n=321, RR 2.07 CI 1.31 to 3.25, NNTH 20 CI 14 to 33). The results for the primary outcome were robust in a number of subgroup, meta-regression and sensitivity analyses, the main exception being that the drug-placebo difference in longer trials was smaller than in shorter trials. The results clearly demonstrate the superiority of antipsychotic drugs compared to placebo in preventing relapse. This effect must be weighed against the side effects of antipsychotic drugs. Future studies should focus on outcomes of social participation and clarify the long-term morbidity and mortality associated with these drugs. Source
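The NNTB figures quoted above follow from simple arithmetic on the event rates (one over the absolute risk reduction); note that pooled meta-analytic RRs are study-weighted, so they can differ slightly from ratios of the crude rates. A minimal sketch of the calculation:

```python
from math import ceil

def risk_stats(drug_rate, placebo_rate):
    """Crude risk ratio, absolute risk reduction, and NNT from raw event
    rates. (Pooled meta-analytic estimates weight individual trials, so
    they differ slightly from these crude calculations.)"""
    rr = drug_rate / placebo_rate
    arr = placebo_rate - drug_rate
    nnt = ceil(1.0 / arr)        # round up: whole patients
    return rr, arr, nnt

# Relapse at 7-12 months: drug 27%, placebo 64%.
rr, arr, nnt = risk_stats(0.27, 0.64)
print(round(rr, 2), round(arr, 2), nnt)   # 0.42 0.37 3
```

The crude RR of 0.42 is close to the pooled RR 0.40, and 1/0.37 rounds up to the reported NNTB of 3: roughly one relapse is prevented for every three patients maintained on medication.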

Hagler P.,TU Munich
Physics Reports | Year: 2010

This is a review of hadron structure physics from lattice QCD. Throughout this report, we place emphasis on the contribution of lattice results to our understanding of a number of fundamental physics questions related to, for example, the origin and distribution of the charge, magnetization, momentum and spin of hadrons. Following an introduction to some of the most important hadron structure observables, we summarize the methods and techniques employed for their calculation in lattice QCD. We briefly discuss the status of relevant chiral perturbation theory calculations needed for controlled extrapolations of the lattice results to the physical point. In the main part of this report, we give an overview of lattice calculations on hadron form factors, moments of (generalized) parton distributions, moments of hadron distribution amplitudes, and other important hadron structure observables. Whenever applicable, we compare with results from experiment and phenomenology, taking into account systematic uncertainties in the lattice computations. Finally, we discuss promising results based on new approaches, ideas and techniques, and close with remarks on future perspectives of the field. © 2009 Elsevier B.V. Source

Wjst M.,Institute of Lung Biology and Disease ILBD | Wjst M.,TU Munich
Current Opinion in Allergy and Clinical Immunology | Year: 2012

Purpose of Review: A link between vitamin D supplementation and allergy was already suspected soon after it became possible to chemically synthesise vitamin D2 by means of ultraviolet radiation. During the past decade, the assumed allergenic effect was confirmed by clinical and epidemiological studies, although the most recent discussion has centred more on vitamin D insufficiency. The purpose of this review is to summarise studies published during the past year while attempting to reconcile some apparent inconsistencies. Recent Findings: Two new concepts are presented here - epigenetic programming of the fetal vitamin D system by low maternal vitamin D supply (Barker's paradox) and ubiquitous vitamin D exposure of the newborn (Rose's paradox). Taken together, a misdirected epigenetic programming offers an explanation of why vitamin D insufficiency in pregnancy may also be associated with increased allergy rates in the offspring. At least eight studies examined the association of early 25-hydroxy-vitamin D levels and atopic diseases in 2011, whereas no new study addressed the question of vitamin D supplementation in the newborn period. One study tested the whole range of 25-hydroxy-vitamin D levels in cord blood, describing a U-shaped association with a 2.4-fold odds ratio for low and a 4-fold odds ratio for high levels of developing allergen-specific immunoglobulin E. Summary: Randomised clinical trials with vitamin D supplements are therefore urgently needed. Several key points are presented for designing vitamin D trials. © 2012 Wolters Kluwer Health | Lippincott Williams & Wilkins. Source

Furukawa T.A.,Kyoto University | Leucht S.,TU Munich
PLoS ONE | Year: 2011

Background: In the literature we find many indices of the size of a treatment effect (effect size: ES). The preferred index of treatment effect in evidence-based medicine is the number needed to treat (NNT), while the most common one in the medical literature is Cohen's d when the outcome is continuous. There is confusion about how to convert Cohen's d into NNT. Methods: We conducted meta-analyses of individual patient data from 10 randomized controlled trials of second generation antipsychotics for schizophrenia (n = 4278) to produce Cohen's d and NNTs for various definitions of response, using cutoffs of 10% through 90% reduction on the symptom severity scale. These actual NNTs were compared with NNTs calculated from Cohen's d according to two methods proposed in the literature (Kraemer et al., Biological Psychiatry, 2006; Furukawa, Lancet, 1999). Results: NNTs from Kraemer's method overlapped with the actual NNTs in 56% of instances, while those based on Furukawa's method fell within the observed ranges of NNTs in 97% of the examined instances. For various definitions of response corresponding with 10% through 70% symptom reduction, where we observed a non-small number of responders, the degree of agreement for the former method was at chance level (ANOVA ICC of 0.12, p = 0.22), whereas that for the latter method was an ANOVA ICC of 0.86 (95% CI: 0.55 to 0.95, p < 0.01). Conclusions: Furukawa's method allows more accurate prediction of NNTs from Cohen's d. Kraemer's method gives the wrong impression that NNT is constant for a given d even when the event rate differs. © 2011 Furukawa, Leucht. Source
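The two conversions, in the forms commonly quoted in the methodological literature (our paraphrase; consult the original papers for the exact definitions), make the paper's point concrete: Kraemer's formula, NNT = 1/(2Φ(d/√2) − 1), ignores the control event rate (CER), whereas Furukawa's, NNT = 1/(Φ(Φ⁻¹(CER) + d) − CER), depends on it.

```python
from statistics import NormalDist

nd = NormalDist()   # standard normal: cdf = Phi, inv_cdf = Phi^-1

def nnt_furukawa(d, cer):
    """Furukawa-style conversion (commonly quoted form): requires the
    control event rate (CER)."""
    return 1.0 / (nd.cdf(nd.inv_cdf(cer) + d) - cer)

def nnt_kraemer(d):
    """Kraemer-style conversion (commonly quoted form): independent of
    the event rate."""
    return 1.0 / (2.0 * nd.cdf(d / 2 ** 0.5) - 1.0)

# For a fixed d = 0.5, the event-rate-aware NNT varies with the CER,
# while the rate-free conversion always returns the same value.
for cer in (0.2, 0.5, 0.8):
    print(round(nnt_furukawa(0.5, cer), 1), round(nnt_kraemer(0.5), 1))
```

This is exactly the behaviour the abstract criticises: a single d maps to one NNT under the Kraemer-style formula regardless of how common the outcome is, which cannot match observed NNTs across response definitions.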

Steffensen S.,RWTH Aachen | Ulbrich M.,TU Munich
SIAM Journal on Optimization | Year: 2010

We present a new relaxation scheme for mathematical programs with equilibrium constraints (MPEC), where the complementarity constraints are replaced by a reformulation that is exact for the complementarity conditions corresponding to sufficiently nondegenerate complementarity components and relaxes only the remaining complementarity conditions. A positive parameter determines to what extent the complementarity conditions are relaxed. The relaxation scheme is such that a strongly stationary solution of the MPEC is also a solution of the relaxed problem if the relaxation parameter is chosen sufficiently small. We discuss the properties of the resulting parameterized nonlinear programs and compare stationary points and solutions. We further prove that a limit point of a sequence of stationary points of a sequence of relaxed problems is Clarke-stationary if it satisfies a so-called MPEC-constant rank constraint qualification, and it is Mordukhovich-stationary if it satisfies the MPEC-linear independence constraint qualification and the stationary points satisfy a second order sufficient condition. From this relaxation scheme, a numerical approach is derived that is applied to a comprehensive test set. The numerical results show that the approach combines good efficiency with high robustness. © 2010 Society for Industrial and Applied Mathematics. Source

Dold M.,TU Munich
Cochrane database of systematic reviews (Online) | Year: 2012

Because of the high number of people with schizophrenia not responding adequately to monotherapy with antipsychotic agents, the evidence regarding the efficacy and safety of additional medication has been examined in a number of clinical trials. One approach to this research question was the use of benzodiazepines, as monotherapy as well as in combination with antipsychotics. To determine the efficacy, acceptability, and tolerability of benzodiazepines in people with schizophrenia and schizophrenia-like psychoses. In February 2011, we updated the literature search of the previous version of this systematic review (last search March 2005). We searched the trial register of the Cochrane Schizophrenia Group (containing methodical searches of BIOSIS, CINAHL, Dissertation Abstracts, EMBASE, LILACS, MEDLINE, PSYNDEX, PsycINFO, RUSSMED and Sociofile, supplemented with hand searching of relevant journals and numerous conference proceedings). Additionally, we inspected the references of all identified studies for further relevant studies and contacted authors of relevant publications in order to obtain missing data from existing trials. We applied no language restrictions. We included all randomised controlled trials comparing benzodiazepines (as monotherapy or as adjunctive agents) with antipsychotic drugs or placebo for the pharmacological management of schizophrenia and/or schizophrenia-like psychoses. The review authors (MD and CL) independently screened the new references from the update search against the inclusion criteria. MD and CL extracted all data from the included trials. For dichotomous outcomes we calculated risk ratios (RR) and their 95% confidence intervals (CI). We analysed continuous data by using mean differences (MD) and their 95% CI. We assessed each pre-selected outcome from the included trials with the risk of bias tool. The 2011 update search yielded three further randomised controlled trials. The review currently includes 34 studies with 2657 participants.
Most studies were characterised by a small sample size, short duration, and incomplete outcome data reporting. Benzodiazepine monotherapy was compared with placebo in eight trials. The proportion of participants with no clinically important response did not significantly differ between those given benzodiazepines or placebo (N = 382, 6 RCTs, RR 0.67 CI 0.44 to 1.02). The results from the various rating scales applied to assess global and mental state were inconsistent. Fourteen studies examined benzodiazepine monotherapy in comparison with antipsychotic monotherapy. Assessment of clinically important treatment response revealed no statistically significant difference between the study groups (30 minutes: N = 44, 1 RCT, RR 0.91 CI 0.58 to 1.43; 60 minutes: N = 44, 1 RCT, RR 0.61 CI 0.20 to 1.86; 12 hours: N = 66, 1 RCT, RR 0.75 CI 0.44 to 1.30; pooled short-term studies: N = 112, 2 RCTs, RR 1.48 CI 0.64 to 3.46). Desired sedation occurred significantly more often among participants in the benzodiazepine group than in the antipsychotic group at 20 and 40 minutes. No significant between-group differences could be identified for global and mental state or the occurrence of adverse effects. Twenty trials compared benzodiazepine augmentation of antipsychotics with antipsychotic monotherapy. Regarding clinically important response, statistically significant improvement could be demonstrated only for the first 30 minutes of augmentation treatment (30 minutes: N = 45, 1 RCT, RR 0.38 CI 0.18 to 0.80; 60 minutes: N = 45, 1 RCT, RR 0.07 CI 0.00 to 1.13; 12 hours: N = 67, 1 RCT, RR 0.85 CI 0.51 to 1.41; pooled short-term studies: N = 511, 6 RCTs, RR 0.87 CI 0.49 to 1.54). Analyses of the global and mental state yielded no between-group differences except for desired sedation at 30 as well as 60 minutes (30 minutes: N = 45, 1 RCT, RR 2.25 CI 1.18 to 4.30; 60 minutes: N = 45, 1 RCT, RR 1.39 CI 1.06 to 1.83).
There is currently no convincing evidence to confirm or refute the practice of administering benzodiazepines as monotherapy or in combination with antipsychotics for the pharmacological treatment of schizophrenia and schizophrenia-like psychosis. Low-quality evidence suggests that benzodiazepines are effective for very short-term sedation and could be considered for calming acutely agitated people with schizophrenia. Measured by the overall attrition rate, the acceptability of benzodiazepine treatment appears to be adequate. Adverse effects were generally poorly reported. High-quality future research projects with large sample sizes are required to clarify the evidence on benzodiazepine treatment in schizophrenia, especially regarding long-term augmentation strategies. Source

Leucht C.,TU Munich
Cochrane database of systematic reviews (Online) | Year: 2012

Amitriptyline is a tricyclic antidepressant that was synthesised in 1960 and introduced as early as 1961 in the USA, but is still regularly used. It has also been frequently used as an active comparator in trials on newer antidepressants and can therefore be called a 'benchmark' antidepressant. However, its efficacy and safety compared to placebo in the treatment of major depression have not been assessed in a systematic review and meta-analysis. To assess the effects of amitriptyline compared to placebo or no treatment for major depressive disorder in adults. We searched the Cochrane Depression, Anxiety and Neurosis Group's Specialised Register (CCDANCTR-Studies and CCDANCTR-References) to August 2012. This register contains relevant randomised controlled trials from: The Cochrane Library (all years), EMBASE (1974 to date), MEDLINE (1950 to date) and PsycINFO (1967 to date). The reference lists of reports of all included studies were screened and manufacturers of amitriptyline contacted for details of additional studies. We included all randomised controlled trials (RCTs) comparing amitriptyline with placebo or no treatment in patients with major depressive disorder as diagnosed by operationalised criteria. Two review authors independently extracted data. For dichotomous data, we calculated the odds ratio (OR) with 95% confidence intervals (CI). We analysed continuous data using standardised mean differences (with 95% CI). We used a random-effects model throughout. The review includes 39 trials with a total of 3509 participants. Study duration ranged between three and 12 weeks. Amitriptyline was significantly more effective than placebo in achieving acute response (18 RCTs, n = 1987, OR 2.67, 95% CI 2.21 to 3.23). 
Significantly fewer participants allocated to amitriptyline than to placebo withdrew from trials due to inefficacy of treatment (19 RCTs, n = 2017, OR 0.20, 95% CI 0.14 to 0.28), but more amitriptyline-treated participants withdrew due to side effects (19 RCTs, n = 2174, OR 4.15, 95% CI 2.71 to 6.35). Amitriptyline also caused more anticholinergic side effects, tachycardia, dizziness, nervousness, sedation, tremor, dyspepsia, sexual dysfunction and weight gain. In subgroup and meta-regression analyses, the results of the primary outcome were robust with respect to publication year (1971 to 1997), mean participant age at baseline, mean amitriptyline dose, study duration in weeks, pharmaceutical sponsor, inpatient versus outpatient setting and two-arm versus three-arm design. However, higher severity at baseline was associated with greater superiority of amitriptyline (P = 0.02), while higher responder rates in the placebo groups were associated with lower superiority of amitriptyline (P = 0.05). The results of the primary outcome were rather homogeneous, reflecting comparability of the trials. However, methods of randomisation, allocation concealment and blinding were usually poorly reported. Not all studies used intention-to-treat analyses, and in many of them standard deviations were not reported and often had to be imputed. Funnel plots suggested a possible publication bias, but the trim and fill method did not change the overall effect size much (seven adjusted studies, OR 2.64, 95% CI 2.24 to 3.10). Amitriptyline is an efficacious antidepressant drug. It is, however, also associated with a number of side effects. Degree of placebo response and severity of depression at baseline may moderate drug-placebo efficacy differences. Source
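This review reports odds ratios (OR) rather than risk ratios. Below is a minimal Python sketch of the corresponding calculation, using the Woolf (log-scale) method for the confidence interval; the 2x2 counts are hypothetical, picked only to land near the reported acute-response OR of 2.67, and are not taken from the meta-analysis.

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for the 2x2 table [[a, b], [c, d]] with an
    approximate 95% CI on the log scale (Woolf method)."""
    odds_ratio = (a * d) / (b * c)
    se = sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    return odds_ratio, exp(log(odds_ratio) - z * se), exp(log(odds_ratio) + z * se)

# Hypothetical counts for illustration only (not the review's data):
# amitriptyline responders/non-responders 600/400; placebo 360/640.
or_, lo, hi = odds_ratio_ci(600, 400, 360, 640)
```

Note that for common outcomes such as response rates in the 40-60% range, the odds ratio is noticeably larger than the corresponding risk ratio, which is one reason the two measures should not be compared directly across reviews.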

Aydin A.A.,Technical University of Istanbul | Aydin A.A.,TU Munich | Aydin A.,Marmara University
Solar Energy Materials and Solar Cells | Year: 2012

High-chain fatty acid esters of higher alcohols have recently been investigated as novel organic phase change materials (PCM) for thermal energy storage. A series of high-chain fatty acid esters of 1-hexadecanol (cetyl alcohol) were prepared through the esterification of 1-hexadecanol with C10-C20 fatty acids of even carbon number, in the absence of a catalyst and under vacuum. Fourier transform infrared (FT-IR) spectroscopy, differential scanning calorimetry (DSC) and thermogravimetric analysis (TGA) were used extensively for chemical and thermal analyses. Phase change temperature, enthalpy, specific heat (Cp), thermal decomposition and reliability after 1000 thermal cycles were obtained with the necessary statistical data to clarify the thermal properties of the materials. The DSC analyses indicated that the melting temperatures of the high-chain fatty acid esters of cetyl alcohol were between 29 °C and 60 °C, with phase change enthalpies above 185 kJ/kg. The results showed that these materials were favorable for low-temperature heat transfer applications with superior thermal properties and reliability. © 2011 Elsevier B.V. All rights reserved. Source

Pohlig F.,TU Munich
European journal of medical research | Year: 2012

Biopsy is a crucial step within the diagnostic cascade in patients with suspected bone or soft tissue sarcoma. Open biopsy is still considered the gold standard. However, recent literature suggests similar results for percutaneous biopsy techniques. Therefore, the aim of this retrospective analysis was to compare open biopsy and percutaneous core needle biopsy (CNB) regarding their accuracy in the diagnosis of malignant musculoskeletal lesions. From January 2007 to December 2009, all patients with a suspected malignant primary bone or soft tissue tumour who underwent percutaneous CNB or open biopsy and a subsequent tumour resection at our department were identified and enrolled. Sensitivities, specificities, positive predictive values (PPV), negative predictive values (NPV) and diagnostic accuracy were calculated for both biopsy techniques and compared using Fisher's exact test. A total of 77 patients were identified and enrolled in this study. Sensitivity, specificity, PPV, NPV and diagnostic accuracy were 100% for CNB in bone tumours. Sensitivity (95.5%), NPV (91.7%) and diagnostic accuracy (93.3%) for open biopsy in bone tumours showed slightly inferior results without statistical significance (p > 0.05). In soft tissue tumours, more favourable results were obtained for open biopsy than for CNB, with differences in sensitivity (100% vs. 81.8%, p = 0.5), NPV (100% vs. 50%, p = 0.09) and diagnostic accuracy (100% vs. 84.6%, p = 0.19) without statistical significance. The overall diagnostic accuracy was 92.9% for CNB and 98.0% for open biopsy (p = 0.55). A specific diagnosis could be obtained in 84.2% and 93.9% of cases, respectively (p = 0.34). In our study we found moderately inferior results for the percutaneous biopsy technique compared to open biopsy in soft tissue tumours, whereas almost equal results were obtained for both techniques in bone tumours. Thus, CNB is a safe, minimally invasive and cost-effective technique for diagnosing bony lesions. 
In soft tissue masses, the indication for percutaneous core needle biopsy needs to be made carefully by an experienced orthopaedic oncologist with respect to the suspected entity, size of necrosis and location of the lesion to avoid incorrect or deficient results. Source
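The accuracy measures reported in this abstract (sensitivity, specificity, PPV, NPV and diagnostic accuracy) all derive from a 2x2 table of biopsy result against final histological diagnosis. A short Python sketch with hypothetical counts (not the study's actual data) illustrates the definitions:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard accuracy measures from a 2x2 table of
    biopsy result vs. final histological diagnosis."""
    return {
        "sensitivity": tp / (tp + fn),   # malignant lesions detected
        "specificity": tn / (tn + fp),   # benign lesions correctly called
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical counts for illustration only (not the study's data):
m = diagnostic_metrics(tp=21, fp=0, fn=1, tn=11)
```

With a single missed malignancy (fn = 1), sensitivity drops to 21/22 ≈ 95.5% and the NPV to 11/12 ≈ 91.7%, which mirrors the pattern (though not the exact figures) reported above for open biopsy in bone tumours.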

Klahn M.,Institute of Chemical and Engineering Sciences, Singapore | Zacharias M.,TU Munich
Physical Chemistry Chemical Physics | Year: 2013

Structural and energetic transformations in the plasma membrane of a cancerous cell are investigated together with related consequences for the insertion of small cationic compounds. Molecular dynamics simulations are performed with an empirical force field on two membrane models that represent the membrane of a cancerous cell (M-Cancer) and of a healthy cell (M-Eukar), respectively. An eight-fold increase of negatively charged phosphatidylserine in the external membrane layer as well as a reduction of the cholesterol concentration by half are taken into account to describe the membrane transformation. Three additional reference membranes are prepared: pure phosphatidylcholine (M-PC), PC with 20% replaced by phosphatidylserine (M-PC0.8S0.2), and PC with 34% replaced by cholesterol (M-PC0.66Ch0.34). Moreover, the free energy released by inserting octadecylmethylimidazolium (OMIM+), a cation found in a class of common ionic liquids, into M-Eukar, M-Cancer and the three reference model membranes is derived by applying thermodynamic integration. We find that the presence of serine improves the solvation of the membrane through favorable electrostatic interactions with solvated sodium ions, a significant number of which are capable of penetrating the upper polar layer of the membrane. However, the insertion free energy of OMIM+ does not seem to be influenced by serine in the membrane. Furthermore, a significant serine-induced structural reorganization of the membrane is not observed. In contrast, a reduction of cholesterol in the membrane models leads to smaller lipid surface densities, thinner membranes and less ordered, less stretched lipids, as expected. We also observe that cholesterol reduction leads to a rougher membrane surface and an increased solvent accessibility of the hydrophobic membrane core. 
Membrane insertion of OMIM+ becomes significantly more favorable in the absence of cholesterol, with an increased insertion free energy release of 7.1 kJ/mol in M-Cancer compared to M-Eukar. Overall, the results suggest only a minor influence of serine on membrane organization but do not rule out an influence on cation insertion through a stronger cation adsorption to the membrane surface. In contrast, cholesterol seems to impede OMIM+ insertion by increasing the density of polar lipids on the membrane surface and by flattening the membrane surface. These observations shed some light on the previously observed selective disruption of cancerous cells induced by cationic compounds such as those found in ionic liquids. © the Owner Societies 2013. Source

Grebenshchikov S.Y.,TU Munich
Journal of Chemical Physics | Year: 2013

The global potential energy surfaces of the first six singlet electronic states of CO2, 1-3 1A′ and 1-3 1A″, are constructed using high-level ab initio calculations. At linear geometry, they correspond to X 1Σg+, 1 1Δu, 1 1Σu−, and 1 1Πg. The calculations accurately reproduce the known benchmarks for all states and establish missing benchmarks for future calculations. The calculated states interact strongly at avoided crossings and true intersections, both conical and glancing. Near degeneracies can be found for each pair of the six states, and many intersections involve more than two states. In particular, a fivefold intersection dominates the Franck-Condon zone for ultraviolet excitation from the ground electronic state. The seam of this intersection traces out a closed loop. All states are diabatized, and a diabatic 5 × 5 potential matrix is constructed, which can be used in quantum mechanical calculations of the absorption spectrum of the five excited singlet valence states. © 2013 AIP Publishing LLC. Source

Quasdorff M.,University of Cologne | Protzer U.,TU Munich
Journal of Viral Hepatitis | Year: 2010

Hepatitis B virus (HBV) is tightly controlled by a number of noncytotoxic mechanisms. This control occurs within the host hepatocyte at different steps of the HBV replication cycle. HBV persists by establishing a nuclear minichromosome, HBV cccDNA, which serves as the transcription template for the viral pregenome and viral mRNAs. Nucleoside/nucleotide analogues widely used for antiviral therapy, as well as most antiviral cytokines, act at steps after the transcription of HBV RNAs and thus can control virus replication but do not directly affect viral gene expression. Control of HBV at the level of transcription, in contrast, is able to restrict both HBV replication and gene expression. In this review, we focus on how HBV is controlled at the level of transcription. We discuss how the composition of transcription factors determines HBV gene expression and replication, and how this may be influenced by antivirally active substances, e.g. the cytokine IL-6 or helioxanthin analogues, or by the differentiation state of the hepatocyte. © 2010 Blackwell Publishing Ltd. Source

Kloppel G.,TU Munich
Endocrine-Related Cancer | Year: 2011

Gastroenteropancreatic neuroendocrine neoplasms (GEP-NENs) are composed of cells with a neuroendocrine phenotype. The old and the new WHO classifications distinguish between well-differentiated and poorly differentiated neoplasms. All well-differentiated neoplasms, regardless of whether they behave benignly or develop metastases, will be called neuroendocrine tumours (NETs), and graded G1 (Ki67 <2%) or G2 (Ki67 2-20%). All poorly differentiated neoplasms will be termed neuroendocrine carcinomas (NECs) and graded G3 (Ki67 >20%). To stratify the GEP-NETs and GEP-NECs regarding their prognosis, they are now further classified according to TNM-stage systems that were recently proposed by the European Neuroendocrine Tumour Society (ENETS) and the AJCC/UICC. In the light of these criteria the pathology and biology of the various NETs and NECs of the gastrointestinal tract (including the oesophagus) and the pancreas are reviewed. © 2011 Society for Endocrinology. Source

Fiorucci S.,University of Nice Sophia Antipolis | Zacharias M.,TU Munich
Proteins: Structure, Function and Bioinformatics | Year: 2010

The ATTRACT protein-protein docking program combined with a coarse-grained protein model has been used to predict protein-protein complex structures in CAPRI rounds 13-19. For six targets acceptable or better quality solutions have been submitted (high quality predictions for targets 32, 40, 41, and 42). The improved performance compared to previous rounds can be attributed in part to the inclusion of conformational flexibility during systematic searches and an optimized scoring function. In addition, a recently developed method for the prediction of putative protein binding sites based on the electrostatic penalty to place neutral low dielectric probes on the protein surface was applied to the most recent targets. The approach resulted in useful predictions of putative binding sites that can help to limit the systematic docking searches. Possible improvements of the docking approach in particular at the scoring and refinement steps are discussed. © 2010 Wiley-Liss, Inc. Source

Grange T.,TU Munich
Physical Review B - Condensed Matter and Materials Physics | Year: 2014

Electronic transport is theoretically investigated in laterally confined semiconductor superlattices using the formalism of nonequilibrium Green's functions. Velocity-field characteristics are calculated for nanowire superlattices of varying diameters, from the quantum dot superlattice regime to the quantum well superlattice regime. Scattering processes due to electron-phonon couplings, phonon anharmonicity, charged impurities, surface and interface roughness, and alloy disorder are included on a microscopic basis. Elastic scattering mechanisms are treated in a partially coherent way beyond the self-consistent Born approximation. The nature of transport along the superlattice is shown to depend dramatically on the lateral dimensionality. In the quantum wire regime, the electron velocity-field characteristics are predicted to deviate strongly from the standard Esaki-Tsu form. The standard peak of negative differential velocity is shifted to lower electric fields, while additional current peaks appear due to integer and fractional resonances with optical phonons. © 2014 American Physical Society. Source

Epple E.,TU Munich
International Journal of Modern Physics A | Year: 2011

The decay of Λ(1405) into Σ0π0 was studied in p+p collisions at EKin=3.5 GeV, measured by the High Acceptance Di-Electron Spectrometer (HADES). To extract the line shape of the Λ(1405), special techniques have been developed to describe the misidentification background and extract the contributions of different production channels. © 2011 World Scientific Publishing Company. Source

Neumann T.,TU Munich
Proceedings of the VLDB Endowment | Year: 2014

Developing a database engine is both challenging and rewarding. Database engines are very complex software artifacts that have to scale to large data sizes and large hardware configurations, and developing such systems usually means choosing between different trade-offs at various points of development. This paper gives a survey of two different database engines, the disk-based SPARQL-processing engine RDF-3X and the relational main-memory engine HyPer. It discusses the design choices that were made during development and highlights optimization techniques that are important for both systems. © 2014 VLDB Endowment. Source

Hebel M.,Fraunhofer Institute for Optronics, System Technologies and Image Exploitation | Stilla U.,TU Munich
IEEE Transactions on Geoscience and Remote Sensing | Year: 2012

Tasks such as city modeling or urban planning require the registration, alignment, and comparison of multiview and/or multitemporal remote sensing data. Airborne laser scanning (ALS) is one of the established techniques to deliver these data. Regrettably, direct georeferencing of ALS measurements usually leads to considerable displacements that limit the connectivity and/or comparability of overlapping point clouds. Most reasons for this effect can be found in the impreciseness of the positioning and orientation sensors and their misalignment to the laser scanner. Typically, these sensors comprise a global navigation satellite system receiver and an inertial measurement unit. This paper presents a method for the automatic self-calibration of such ALS systems and the alignment of the acquired laser point clouds. Although applicable to classical nadir configurations, a novelty of our approach is the consideration of multiple data sets that were recorded with an oblique forward-looking full-waveform laser scanner. A combination of a region-growing approach with a random-sample-consensus segmentation method is used to extract planar shapes. Matching objects in overlapping data sets are identified with regard to several geometric attributes. A new methodology is presented to transfer the planarity constraints into systems of linear equations to determine both the boresight parameters and the data alignment. In addition to system calibration and data registration, the presented workflow results in merged 3-D point clouds that contain information concerning rooftops and all building facades. This database represents a solid basis and reference for applications such as change detection. © 2012 IEEE. Source

Becker K.-F.,TU Munich
Proteomics - Clinical Applications | Year: 2015

From my 22 years of experience working in a pathology research laboratory and overseeing dozens of collaborations with research groups from basic science and industry, I have the impression that researchers are rarely aware of the special issues related to the acquisition and processing of frozen or formalin-fixed tissue samples for proteomic analysis. While challenges are expected for formalin-fixed tissues because of the cross-linking activity of formaldehyde, researchers believe that when using frozen tissue samples they are safe and always have excellent material to analyze, but this is not always the case. It is alarming that many researchers do not question the quality of the tissue samples they are analyzing and focus only on their analytical technique. Standardization of the entire workflow, from test ordering to the report of the proteomic assay, with special emphasis on the preanalytical phase, is crucial for the successful integration of proteomic studies in the clinic, as protein profiles may change due to sample processing before the proteomic analysis is performed. The aim of this review is to discuss the progress of proteomic studies with human tissues and to highlight the challenges that must be understood and addressed for successful translation of proteomic methods to clinical practice. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim. Source

Ghozlan H.,University of Southern California | Kramer G.,TU Munich
IEEE International Symposium on Information Theory - Proceedings | Year: 2013

Consider a waveform channel where the transmitted signal is corrupted by Wiener phase noise and additive white Gaussian noise (AWGN). A discrete-time channel model that takes into account the effect of filtering on the phase noise is developed. The model is based on a multi-sample receiver which, at high Signal-to-Noise Ratio (SNR), achieves a rate that grows logarithmically with the SNR if the number of samples per symbol grows with the square-root of the SNR. Moreover, the pre-log factor is at least 1/2 in this case. © 2013 IEEE. Source

Helft J.,The Francis Crick Institute | Bottcher J.,The Francis Crick Institute | Chakravarty P.,The Francis Crick Institute | Zelenay S.,The Francis Crick Institute | And 5 more authors.
Immunity | Year: 2015

Dendritic cells (DCs) are key players in the immune system. Much of their biology has been elucidated via culture systems in which hematopoietic precursors differentiate into DCs under the aegis of cytokines. A widely used protocol involves the culture of murine bone marrow (BM) cells with granulocyte-macrophage colony-stimulating factor (GM-CSF) to generate BM-derived DCs (BMDCs). BMDCs express CD11c and MHC class II (MHCII) molecules and share with DCs isolated from tissues the ability to present exogenous antigens to T cells and to respond to microbial stimuli by undergoing maturation. We demonstrate that CD11c+MHCII+ BMDCs are in fact a heterogeneous group of cells that comprises conventional DCs and monocyte-derived macrophages. DCs and macrophages in GM-CSF cultures both undergo maturation upon stimulation with lipopolysaccharide but respond differentially to the stimulus and remain separable entities. These results have important implications for the interpretation of a vast array of data obtained with DC culture systems. © 2015 Elsevier Inc. Source

Wanajo S.,TU Munich | Wanajo S.,Max Planck Institute for Astrophysics | Janka H.-T.,Max Planck Institute for Astrophysics
Astrophysical Journal | Year: 2012

We examine r-process nucleosynthesis in the neutrino-driven wind from the thick accretion disk (or "torus") around a black hole. Such systems are expected as remnants of binary neutron star or neutron star-black hole mergers. We consider a simplified, analytic, time-dependent evolution model of a 3 M⊙ central black hole surrounded by a neutrino-emitting accretion torus of 90 km radius, which serves as the basis for computing spherically symmetric neutrino-driven wind solutions. We find that ejecta with modest entropies (∼30 per nucleon in units of the Boltzmann constant) and moderate expansion timescales (∼100 ms) dominate the mass outflow. The mass-integrated nucleosynthetic abundances are in good agreement with the solar system r-process abundance distribution if a minimal value of the electron fraction at charged-particle freezeout, Ye,min ∼ 0.2, is achieved. In the case of Ye,min ∼ 0.3, the production of r-elements beyond A ∼ 130 does not reach the third peak but could still be important for an explanation of the abundance signatures in r-process deficient stars in the early Galaxy. The total mass of the ejected r-process nuclei is estimated to be ∼1 × 10−3 M⊙. If our model were representative, this would demand a Galactic event rate of ∼2 × 10−4 yr−1 for black-hole-torus winds from merger remnants to be the dominant source of the r-process elements. Our result thus suggests that black-hole-torus winds from compact binary mergers have the potential to be a major, but probably not the dominant, production site of r-process elements. © 2012. The American Astronomical Society. All rights reserved. Source

Grillo C.,TU Munich
Astrophysical Journal Letters | Year: 2012

We study a sample of 39 massive early-type lens galaxies at redshift z ≲ 0.3 to determine the slope of the average dark-matter density profile in the innermost regions. We keep the strong-lensing and stellar population synthesis modeling as simple as possible to measure the galaxy total and luminous masses. By rescaling the values of the Einstein radius and dark-matter projected mass with the values of the luminous effective radius and mass, we combine all the data of the galaxies in the sample. We find that between 0.3 and 0.9 times the value of the effective radius the average logarithmic slope of the dark-matter projected density profile is -1.0 ± 0.2 (i.e., approximately isothermal) or -0.7 ± 0.5 (i.e., shallower than isothermal), if, respectively, a constant Chabrier or heavier, Salpeter-like stellar initial mass function is adopted. These results provide positive evidence of the influence of the baryonic component on the contraction of the galaxy dark-matter halos, compared to the predictions of dark-matter-only cosmological simulations, and open a new way to test models of structure formation and evolution within the standard ΛCDM cosmological scenario. © 2012 The American Astronomical Society. All rights reserved. Source

Palazzo A.,TU Munich
Physical Review D - Particles, Fields, Gravitation and Cosmology | Year: 2011

Motivated by the recent low-threshold measurements of the solar 8B neutrino spectrum performed by Borexino, Super-Kamiokande and the Sudbury Neutrino Observatory, all of which now monitor the transition regime between low-energy (vacuum-like) and high-energy (matter-dominated) flavor conversions, we consider the role of subdominant dynamical terms induced by new flavor-changing interactions. We find that the presence of such perturbations with strength ∼10−1 GF is now favored, offering a better description of the anomalous behavior suggested by the new results, whose spectrum shows no sign of the typical low-energy upturn predicted by the standard Mikheyev-Smirnov-Wolfenstein (MSW) mechanism. Our findings, if interpreted in a 2-flavor scheme, provide a hint of such new interactions at the ∼2σ level, which is rather robust with respect to 3-flavor effects possibly induced by nonzero θ13. © 2011 American Physical Society. Source

Vaupel P.,TU Munich
Advances in Experimental Medicine and Biology | Year: 2013

Since 1970, the multifactorial pathogenesis of the deficient and heterogeneous oxygenation of transplanted murine tumors and of human cancers (including parameters determining oxygen delivery, e.g., blood flow, diffusion geometry, oxygen transport capacity of the blood) has been investigated in vivo. Hypoxia and/or anoxia was quantitatively assessed and characterized using microtechniques and special preclinical tumor models. Hypoxia subtypes were identified, and critical supply conditions were theoretically analyzed. In the 1980s, the first studies in humans were carried out in cancers of the rectum and of the oral cavity. In the 1990s, clinical investigations were carried out on cancers of the breast and of the uterine cervix, clearly showing that hypoxia is a hallmark of locally advanced human tumors. In multivariate analyses, hypoxia was found to be an independent, adverse prognostic factor for patient survival due to hypoxia-driven malignant progression and hypoxia-associated resistance to anticancer therapy. © 2013 Springer Science+Business Media New York. Source

Lenz A.J.,TU Munich
Physical Review D - Particles, Fields, Gravitation and Cosmology | Year: 2011

We reinvestigate a simple relation between the semileptonic CP asymmetry a_sl^s, the decay rate difference ΔΓs, the mass difference ΔMs and Sψφ extracted from the angular analysis of the decay Bs→J/ψφ, which is regularly used in the literature. We find that this relation is not suited to eliminate the theory prediction for Γ12; it can, however, be used to determine the size of the penguin contributions to the decay Bs→J/ψφ. Moreover, we comment on the current precision of the theory prediction for Γ12. © 2011 American Physical Society. Source

Zacharias M.,TU Munich
Proteins: Structure, Function and Bioinformatics | Year: 2013

A hybrid coarse-grained (CG) and atomistic (AT) model for protein simulations and for the rapid searching and refinement of peptide-protein complexes has been developed. In contrast to other hybrid models that typically represent spatially separate parts of a protein by either a CG or an AT force field model, the present approach simultaneously represents the protein by an AT (united atom) and a CG model. The interactions of the protein main chain are described based on the united atom force field, allowing a realistic representation of protein secondary structures. In addition, the AT description of all other bonded interactions keeps the protein compatible with a realistic bonded geometry. Nonbonded interactions between side chains, and between side chains and main chain, are calculated at the level of a CG model using a knowledge-based potential. Unrestrained molecular dynamics simulations on several test proteins resulted in trajectories in reasonable agreement with the corresponding experimental structures. Application to the refinement of docked peptide-protein complexes resulted in improved complex structures. Application to the rapid refinement of docked protein-protein complexes is also possible but requires further optimization of force field parameters. © 2012 Wiley Periodicals, Inc. Source

Gunes H.,Queen Mary, University of London | Schuller B.,TU Munich
Image and Vision Computing | Year: 2013

In the context of affective human behavior analysis, we use the term continuous input to refer to naturalistic settings where explicit or implicit input from the subject is continuously available, and where, in a human-human or human-computer interaction setting, the subject plays the role of producer or recipient of the communicative behavior. As a result, the analysis and the response provided by the automatic system are also envisioned to be continuous over the course of time, within the boundaries of digital machine output. The term continuous affect analysis covers analysis that is continuous in time as well as analysis that represents the affect phenomenon in a dimensional space. The former refers to acquiring and processing long unsegmented recordings for detection of an affective state or event (e.g., nod, laughter, pain), and the latter refers to prediction of an affect dimension (e.g., valence, arousal, power). In line with the Special Issue on Affect Analysis in Continuous Input, this survey paper aims to put the continuity aspect of affect under the spotlight by investigating the current trends and providing guidance towards possible future directions. © 2012 Elsevier B.V. Source

Kirchhoff C.,TU Munich
The American journal of sports medicine | Year: 2010

BACKGROUND: Tears of the rotator cuff are highly prevalent in patients older than 60 years, a population also often suffering from osteopenia or osteoporosis. Suture fixation in the bone depends on the holding strength of the anchoring technique, whether a bone tunnel or a suture anchor is selected. Because of osteopenic or osteoporotic bone changes, suture anchors in the older patient might pull out, resulting in failure of the repair. HYPOTHESIS: The aim of our study was to analyze the bone quality within the tuberosities of the osteoporotic humeral head using high-resolution peripheral quantitative computed tomography (HR-pQCT). STUDY DESIGN: Descriptive laboratory study. METHODS: Thirty-six human cadaveric shoulders were analyzed using HR-pQCT. The mean bone volume to total volume ratio (BV/TV) as well as the trabecular bone mineral densities (trabBMD) of the greater tuberosity (GT) and the lesser tuberosity (LT) were determined. Six volumes of interest (VOIs) were set within the GT, 2 VOIs within the LT, and 1 control volume within the subchondral area beneath the articular surface. RESULTS: Comparing BV/TV of the medial and the lateral row, significantly higher values were found medially (P < .001). The highest BV/TV, 0.030% ± 0.027%, was found in the posteromedial portion of the GT (P < .05). Regarding the analysis of the LT, no difference was found comparing the superior (BV/TV: 0.024% ± 0.022%) and the inferior (BV/TV: 0.019% ± 0.016%) portions. Analyzing trabBMD, equal proportions were found. An inverse correlation (correlation coefficient -0.68) was found between BV/TV of the posterior portion of the GT and age (P < .05). CONCLUSION: Significant regional differences in trabecular microarchitecture were found in our HR-pQCT study. The volume of highest bone quality was the posteromedial aspect of the GT. 
Moreover, a significant correlation of bone quality within the GT and age was found, while the bone quality within the LT seems to be independent from it. CLINICAL RELEVANCE: The shape of the rotator cuff tear largely determines the bony site of tendon reattachment, although the surgeon has distinct options to modify anchor positioning. According to our results, placement of suture anchors in a medialized way at the border to the articular surface might guarantee a better structural bone stock. Source

Seidl C.,TU Munich
Immunotherapy | Year: 2014

α-particle-emitting radionuclides are highly cytotoxic and are thus promising candidates for use in targeted radioimmunotherapy of cancer. Due to their high linear energy transfer (LET) combined with a short path length in tissue, α-particles cause severe DNA double-strand breaks that are repaired inaccurately and finally trigger cell death. For radioimmunotherapy, α-emitters such as 225Ac, 211At, 212Bi/212Pb, 213Bi and 227Th are coupled to antibodies via appropriate chelating agents. The α-emitter immunoconjugates preferentially target proteins that are overexpressed or exclusively expressed on cancer cells. Application of α-emitter immunoconjugates seems particularly promising in the treatment of disseminated cancer cells and small tumor cell clusters that are released during the resection of a primary tumor. α-emitter immunoconjugates have been successfully administered in numerous experimental studies for therapy of ovarian, colon, gastric, blood, breast and bladder cancer. Initial clinical trials evaluating α-emitter immunoconjugates in terms of toxicity and therapeutic efficacy have also shown positive results in patients with melanoma, ovarian cancer, acute myeloid leukemia and glioma. The present problems with the availability of therapeutically effective α-emitters will presumably be solved by the use of alternative production routes and the installation of additional production facilities in the near future. Therefore, the clinical establishment of targeted α-emitter radioimmunotherapy as one part of a multimodal concept for cancer therapy is a promising medium-term prospect. © 2014 Future Medicine Ltd. Source

Meissner K.,Ludwig Maximilians University of Munich | Meissner K.,TU Munich
Philosophical Transactions of the Royal Society B: Biological Sciences | Year: 2011

For many subjectively experienced outcomes, such as pain and depression, rather large placebo effects have been reported. However, there is increasing evidence that placebo interventions also affect end-organ functions regulated by the autonomic nervous system (ANS). After discussing three psychological models for autonomic placebo effects, this article provides an anatomical framework of the autonomic system and then critically reviews the relevant placebo studies in the field, thereby focusing on gastrointestinal, cardiovascular and pulmonary functions. The findings indicate that several autonomic organ functions can indeed be altered by verbal suggestions delivered during placebo and nocebo interventions. In addition, three experimental studies provide evidence for organ-specific effects, in agreement with the current knowledge on the central control of the ANS. It is suggested that the placebo effects on autonomic organ functions are best explained by the model of 'implicit affordance', which assumes that placebo effects are dependent on 'lived experience' rather than on the conscious representation of expected outcomes. Nevertheless, more studies will be needed to further elucidate psychological and neurobiological pathways involved in autonomic placebo effects. © 2011 The Royal Society. Source

Schraml B.U.,TU Munich | Reis e Sousa C.,Cancer Research UK Research Institute
Current Opinion in Immunology | Year: 2015

Dendritic cells (DCs) are versatile controllers of the immune system, best known for their potent ability to initiate adaptive immunity. Traditionally, DCs have been defined on the basis of cell morphology, expression of specific markers and select functional attributes such as the ability to migrate to T cell areas of secondary lymphoid organs and activate T lymphocytes. However, such properties are not qualitative and often change in conditions of inflammation or infection. Phenotype-based and function-based definitions can therefore lead to difficulties in cell identification. Here we review other approaches to resolving questions of DC lineage attribution, with an emphasis on recent insights arising from our increased understanding of DC ontogeny and differentiation. © 2014 The Authors. Source

Gubler P.,ECT | Weise W.,ECT | Weise W.,TU Munich
Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics | Year: 2015

Moments of the ϕ meson spectral function in vacuum and in nuclear matter are analyzed, combining a model based on chiral SU(3) effective field theory (with kaonic degrees of freedom) and finite-energy QCD sum rules. For the vacuum we show that the spectral density is strongly constrained by a recent accurate measurement of the e+e-→K+K- cross section. In nuclear matter the ϕ spectrum is modified by interactions of the decay kaons with the surrounding nuclear medium, leading to a significant broadening and an asymmetric deformation of the ϕ meson peak. We demonstrate that both in vacuum and nuclear matter, the first two moments of the spectral function are compatible with finite-energy QCD sum rules. A brief discussion of the next-higher spectral moment involving strange four-quark condensates is also presented. © 2015 The Authors. Source

Zacharias M.,TU Munich
Current Opinion in Structural Biology | Year: 2010

Three-dimensional structures of only a small fraction of known protein-protein complexes are currently known. Meanwhile, computational methods are of increasing importance for providing structural models of known protein-protein interactions. Current protein-protein docking methods are often successful if the experimentally determined partner proteins undergo little conformational change upon binding. However, the realistic and computationally efficient treatment of conformational changes, especially of the protein backbone, during docking remains a challenge. New promising approaches of flexible refinement, ensemble docking and explicit inclusion of flexibility during the entire docking process have been developed. A significant fraction of known protein-protein interactions can be modeled based on homology to known protein-protein complexes, which in many cases also requires efficient flexible refinement to provide accurate structural models. © 2010 Elsevier Ltd. Source

Abdi M.,TU Munich | Hartmann M.J.,Heriot - Watt University
New Journal of Physics | Year: 2015

We study entanglement of the motional degrees of freedom of two tethered and optically trapped microdisks inside a single cavity. By properly choosing the position of the trapped objects in the optical cavity and driving proper modes of the cavity, it is possible to equip the system with linear and quadratic optomechanical couplings. We show that a parametric coupling between the fundamental vibrational modes of two tethered microdisks can be generated via a time-modulated input laser. For a proper choice of the modulation frequency, this mechanism can drive the motion of the microdisks into an inseparable state in the long time limit via a two-mode squeezing process. We numerically confirm the performance of our scheme for current technology and briefly discuss an experimental setup that can be used for detecting this entanglement by employing the quadratic coupling. We also comment on the perspectives for generating such entanglement between the oscillations of optically levitated nanospheres. © 2015 IOP Publishing Ltd and Deutsche Physikalische Gesellschaft. Source

Kruhl J.H.,TU Munich
Journal of Structural Geology | Year: 2013

Fractal-geometry techniques are widely applied to the quantification of complex rock structures. Important properties of such structures are (i) different scaling behaviour on different scales, (ii) inhomogeneity, and (iii) anisotropy. The present paper offers a focused view on the quantification of these properties by classical and newly developed fractal-geometry methods, discusses the advantages and disadvantages of specific methods, and outlines the correlations, reported in the literature, between structure quantifications and rock properties and structure-forming processes. © 2012. Source
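A classical entry point to such quantification is box counting. As an illustrative sketch only (not code from the paper; the function names and box sizes are our own choices), the box-counting dimension of a binary structure map can be estimated as the slope of log N(s) against log(1/s), where N(s) is the number of boxes of side s that contain part of the structure:

```python
import numpy as np

def box_count(mask, box_sizes):
    """Count non-empty boxes covering the structure for each box size s."""
    counts = []
    for s in box_sizes:
        # Trim so the grid divides evenly, then pool the image into s-by-s blocks.
        h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
        blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(int(blocks.any(axis=(1, 3)).sum()))
    return counts

def box_dimension(mask, box_sizes=(2, 4, 8, 16, 32)):
    """Box-counting dimension: slope of log N(s) versus log(1/s)."""
    counts = box_count(mask, box_sizes)
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope
```

For a filled region the estimate approaches 2 and for a simple line it approaches 1; structures with different scaling behaviour on different scales, as discussed above, show up as a slope that changes across ranges of box sizes.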

Spinner S.,TU Munich
Leukemia | Year: 2016

T lymphocyte non-Hodgkin's lymphoma (T-NHL) represents an aggressive and largely therapy-resistant subtype of lymphoid malignancies. As deregulated apoptosis is a frequent hallmark of lymphomagenesis, we analyzed gene expression profiles and protein levels of primary human T-NHL samples for various apoptotic regulators. We identified the apoptotic regulator MCL-1 as the only pro-survival BCL-2 family member to be highly expressed throughout all human T-NHL subtypes. Functional validation of pro-survival protein members of the BCL-2 family in two independent T-NHL mouse models showed that the partial loss of Mcl-1 significantly delayed T-NHL development in vivo. Moreover, the inducible reduction of MCL-1 protein levels in lymphoma-burdened mice severely impaired the continued survival of T-NHL cells, increased their susceptibility to chemotherapeutics and delayed lymphoma progression. Lymphoma viability remained unaffected by the genetic deletion or pharmacological inhibition of all alternative BCL-2 family members. Consistent with a therapeutic window for MCL-1 treatment within the context of the whole organism, we observed only minimal toxicity after systemic heterozygous loss of Mcl-1 in vivo. We conclude that re-activation of mitochondrial apoptosis by blockade of MCL-1 represents a promising therapeutic strategy to treat T-cell lymphoma. Leukemia advance online publication, 8 April 2016; doi:10.1038/leu.2016.49. © 2016 Macmillan Publishers Limited Source

De Kerret P.,TU Munich | De Kerret P.,Eurecom | Gesbert D.,Eurecom
IEEE Wireless Communications | Year: 2013

Multiple-antenna based transmitter cooperation has been established as a promising tool toward avoiding, aligning, or shaping the interference resulting from aggressive spectral reuse. The price paid in the form of feedback and exchange of channel state information (CSI) between cooperating devices in most existing methods is often underestimated, though. In reality, feedback and information overhead threatens the practicality and scalability of TX cooperation approaches in dense networks. Here we address the question of 'Who needs to know what?' when it comes to CSI at cooperating transmitters. A comprehensive answer to this question remains beyond our reach and the scope of this article. Nevertheless, recent results in this area suggest that CSI overhead can be contained even for large networks, provided the allocation of feedback to TXs is made non-uniform and properly dependent on the network's topology. This article provides a few hints toward solving the problem. © 2013 IEEE. Source

Grivennikov S.I.,University of California at San Diego | Greten F.R.,TU Munich | Karin M.,University of California at San Diego
Cell | Year: 2010

Inflammatory responses play decisive roles at different stages of tumor development, including initiation, promotion, malignant conversion, invasion, and metastasis. Inflammation also affects immune surveillance and responses to therapy. Immune cells that infiltrate tumors engage in an extensive and dynamic crosstalk with cancer cells, and some of the molecular events that mediate this dialog have been revealed. This review outlines the principal mechanisms that govern the effects of inflammation and immunity on tumor development and discusses attractive new targets for cancer therapy and prevention. © 2010 Elsevier Inc. Source

Pugachev A.O.,TU Munich
Structural and Multidisciplinary Optimization | Year: 2013

This paper demonstrates the application of gradient-based optimization methods to the minimal weight design optimization of rotor systems. A nonlinear constrained optimization problem is considered. Design variables are inner radii and wall thicknesses of shaft sections. Constraints are imposed on torsional and equivalent stresses, natural frequencies, and unbalance response amplitudes. The sizing optimization problem is solved using a gradient projection method and a sequential quadratic programming technique. A typical turbine rotor system is considered. An in-house beam-based finite element method code is used for the prediction of static and dynamic characteristics of the rotor system. Analytical sensitivity analysis is performed for the static and harmonic equations using the adjoint method. Sensitivity coefficients for the natural frequencies are obtained directly from the quadratic eigenvalue problem. Results of several optimization runs with different constraint sets show a significant shaft weight reduction in comparison with the baseline configuration with all constraints being satisfied. The two optimization methods are compared and discussed in regard to their performance. © 2012 Springer-Verlag Berlin Heidelberg. Source
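The kind of gradient-based constrained sizing problem described above can be sketched, in a much reduced form, with an off-the-shelf SQP solver. The following toy example is our own construction, not the paper's finite-element-based model; the torque, stress limit, and starting radii are arbitrary illustrative values. It minimizes the cross-sectional area (a proxy for weight per unit length) of a single hollow shaft section subject to a torsional stress constraint and a minimum wall thickness:

```python
import numpy as np
from scipy.optimize import minimize

T, TAU_MAX = 2.0e3, 40.0e6   # applied torque [N*m], allowable shear stress [Pa] (illustrative)

def area(x):
    """Cross-sectional area of the hollow section, proportional to weight per unit length."""
    ro, ri = x
    return np.pi * (ro**2 - ri**2)

def stress_margin(x):
    """Non-negative when the maximum torsional shear stress is within the limit."""
    ro, ri = x
    J = np.pi * (ro**4 - ri**4) / 2.0   # polar moment of inertia
    return TAU_MAX - T * ro / J

res = minimize(area, x0=[0.06, 0.02], method="SLSQP",
               bounds=[(0.01, 0.2), (0.0, 0.2)],
               constraints=[{"type": "ineq", "fun": stress_margin},
                            {"type": "ineq", "fun": lambda x: x[0] - x[1] - 1e-3}])  # min wall 1 mm
```

In the paper's setting the objective and constraints would instead be evaluated by the beam-based finite element model, with sensitivities supplied analytically by the adjoint method rather than by the finite differences SLSQP falls back on here.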

Broy M.,TU Munich
Computer | Year: 2011

Software engineering requires an understanding of theory, what it can offer, and its limits. It also requires a comprehensive understanding of practice, its needs, and its open challenges. Successful software engineering requires insights into many aspects of software and its evolution, such as methodology, including requirements engineering and specification, architecture and design, code quality, integration and verification, deployment and migration, and the modification and improvement of software systems. Applying theory systematically, using context-free grammars and compiler generators, has turned compiler construction into one of the first mature areas of software construction. Applied topics such as operating systems, protocols, and databases should also be taught and explained in terms of adequate theories. A structured presentation of theory and its connection to the engineering of software systems is a must in software engineering education. Source

Shahzad M.,TU Munich | Zhu X.X.,German Aerospace Center
IEEE Transactions on Geoscience and Remote Sensing | Year: 2015

With data provided by modern meter-resolution synthetic aperture radar (SAR) sensors and advanced multipass interferometric techniques such as tomographic SAR inversion (TomoSAR), it is now possible to reconstruct the shape and monitor the motion of urban infrastructure on the scale of centimeters or even millimeters from space, in a very high level of detail. The retrieval of such rich information allows us to take a step further toward the generation of 4-D (or even higher dimensional) dynamic city models, i.e., city models that incorporate temporal (motion) behavior along with the 3-D information. Motivated by these opportunities, the authors previously proposed an approach that attempts to reconstruct facades from this class of data. The approach works well for small areas containing only a couple of buildings. However, toward automatic reconstruction of a whole city area, a more robust and fully automatic approach is needed. In this paper, we present a complete extended approach for automatic (parametric) reconstruction of building facades from 4-D TomoSAR point cloud data and put particular focus on robust reconstruction of large areas. The proposed approach is illustrated and validated by examples using TomoSAR point clouds generated from a stack of TerraSAR-X high-resolution spotlight images from an ascending orbit covering an approximately 2 km² high-rise area in the city of Las Vegas. © 2014 IEEE. Source

Brockow K.,TU Munich
Chemical Immunology and Allergy | Year: 2012

Hypersensitivity reactions to contrast media (CM) are frequent causes of anaphylaxis and drug exanthemas. Adverse events after CM exposure are classified into immediate (<1 h) and non-immediate (>1 h) reactions, with differing mechanisms. In the majority of patients with immediate reactions, IgE-mediated allergy cannot be demonstrated, and the underlying mechanism remains unknown. However, recent data have provided evidence for skin test positivity and/or specific IgE in some patients. T cell-mediated hypersensitivity is the mechanism responsible for the majority of non-immediate skin eruptions. These insights have consequences for diagnosis and prevention. Skin testing is evolving into a useful tool for the diagnosis of CM allergy, and skin tests have been employed to confirm this hypersensitivity. Previous reactors have an increased risk of developing new reactions upon repeated exposure; however, other risk factors are poorly defined. The use of skin tests for the selection of a 'safe' CM is under investigation with promising results. In vitro tests to search for CM-specific cell activation include flow cytometric approaches, lymphocyte cultures and the construction of cell lines and hybridomas. Premedication of previous reactors is common practice among radiologists; however, breakthrough reactions are a concern, and physicians should not rely on the efficacy of pharmacological premedication. Copyright © 2012 S. Karger AG, Basel. Source

Palazzo A.,TU Munich
Physical Review D - Particles, Fields, Gravitation and Cosmology | Year: 2011

Motivated by the accumulating hints of new sterile neutrino species at the eV scale, we explore the consequences of such a hypothesis for solar sector phenomenology. After introducing the theoretical formalism needed to describe the Mikheyev-Smirnov-Wolfenstein conversion of solar neutrinos in the presence of one (or more) sterile neutrino state(s) located "far" from the (ν1, ν2) "doublet", we perform a quantitative analysis of the available experimental results, focusing on the electron neutrino mixing. We find that the present data possess a sensitivity to the amplitude of the lepton mixing matrix element Ue4, encoding the admixture of the electron neutrino with a new mass eigenstate, which is comparable to that achieved for the standard matrix element Ue3. In addition, and more importantly, our analysis shows that, in a 4-flavor framework, the current preference for |Ue3| > 0 is indistinguishable from that for |Ue4| > 0, both having a similar statistical significance (∼1.3σ adopting the old reactor flux determinations, and ∼1.8σ using their new estimates). We also point out that, differently from the standard 3-flavor case, in a 3+1 scheme the Dirac CP-violating phases cannot be eliminated from the description of solar neutrino conversions. © 2011 American Physical Society. Source

Rischpler C.,TU Munich
Current cardiology reports | Year: 2013

Heart failure is a serious condition with poor prognosis, which imposes an ever increasing burden on healthcare systems due to its rising prevalence. Nonetheless, physiological processes underlying heart failure remain poorly understood. In recent years, functional imaging such as gated CT has become available for routine clinical cardiology investigations. However, a maturation of nuclear imaging techniques such as PET and SPECT is now yielding new insights into the pathophysiological changes underlying heart failure, based on non-invasive measurements of myocardial blood flow, myocardial viability, sympathetic innervation, neoangiogenesis and matrix metalloproteinases activity. Investigations of these biomarkers have the potential to reveal early aspects of left ventricle remodeling; diagnosis at an earlier stage of heart failure promises to facilitate improved intervention and therapy guidance. Furthermore, nuclear imaging techniques are being developed to monitor and predict outcome of novel cell-based approaches for restorative therapy of heart failure. Source

Schul D.B.,TU Munich
Neurosurgery | Year: 2012

Although the population is aging, published evidence on meningioma treatment in the elderly is scarce. In order to improve selection for surgery, we investigated our patient collective using 2 proposed risk assessment systems, the Clinical-Radiological Grading System (CRGS) and the SKALE score (sex, Karnofsky, American Society of Anesthesiology [ASA] score, location, edema). We retrospectively assessed morbidity and mortality in 164 patients aged ≥ 65 operated on for an intracranial meningioma. Medical and surgical records were reviewed and analyzed, and CRGS and SKALE scores were calculated. The ability of both CRGS and SKALE, and of all single factors, to predict death within 12 months was analyzed using multivariate logistic regression modeling. Eleven patients died (6.7%). Logistic regression for CRGS/SKALE showed a significant relationship with 12-month mortality. Age, Simpson resection grade, and sex were not significant predictors when investigated alone. In multivariate logistic regression including all proposed factors, only concomitant disease and edema (CRGS) as well as ASA score and preoperative Karnofsky Performance Scale (SKALE) showed a significant relationship to mortality. After stepwise reduction of the full multivariate regression model to its significant terms, only concomitant disease and ASA remained significant for CRGS (P < .001) and SKALE (P = .003), respectively. Meningioma resection in the elderly is feasible, albeit with some mortality. We were unable to reproduce the utility of the 2 proposed grading systems for mortality prediction when extending them to younger patients. In single-factor analysis, only concomitant disease and ASA score remained significant. The decision whether to operate should be taken individually. Patients with severe concomitant disease or a high ASA score should be advised not to undergo surgical therapy, independently of other factors. Source

Kaiser N.,TU Munich
Journal of Physics G: Nuclear and Particle Physics | Year: 2015

Based on the phenomenological Skyrme interaction, various density-dependent nuclear matter quantities are calculated up to second order in many-body perturbation theory. The spin-orbit term as well as two tensor terms contribute at second order to the energy per particle. The simultaneous calculation of the isotropic Fermi-liquid parameters provides a rigorous check through the validity of the Landau relations. It is found that published results for these second-order contributions are incorrect in most cases. In particular, interference terms between s-wave and p-wave components of the interaction can contribute only to (isospin or spin) asymmetry energies. Even with nine adjustable parameters, one does not obtain a good description of the empirical nuclear matter saturation curve in the low-density region. The reason for this feature is the overly strong density dependence of several second-order contributions. The inclusion of the density-dependent term is therefore indispensable for a realistic description of nuclear matter in the Skyrme framework. © 2015 IOP Publishing Ltd. Source

Van Hemmen J.L.,TU Munich
Biological Cybernetics | Year: 2013

The vector strength, a number between 0 and 1, is a classical notion in biology. It was first used in neurobiology by Goldberg and Brown (J Neurophys 31:639-656, 1969) but dates back at least to von Mises (Phys Z 19:490-500, 1918). It is widely used as a means to measure the periodicity or lack of periodicity of a neuronal response to an outside periodic signal. Here, we provide a self-contained and simple treatment of a closely related notion, the synchrony vector, a complex number with the vector strength as its absolute value and with a definite phase that one can directly relate to a biophysical delay. The present analysis is essentially geometrical and based on convexity. As such it does two things. First, it maps a sequence of points, events such as spike times on the time axis, onto the unit circle in the complex plane so that for a perfectly periodic repetition, a single point on the unit circle appears. Second, events hardly ever occur periodically, so that we need a criterion of how to extract periodicity out of a set of real numbers. It is here where convex geometry comes in, and a geometrically intuitive picture results. We also quantify how the events cluster around a period as the vector strength goes to 1. A typical example from the auditory system is used to illustrate the general considerations. Furthermore, von Mises' seminal contribution to the notion of vector strength is explained in detail. Finally, we generalize the synchrony vector to a function of angular frequency, not fixed on the input frequency at hand and indicate its potential as a "resonating" vector strength. © 2013 Springer-Verlag Berlin Heidelberg. Source
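The construction described above is compact enough to state directly in code. A minimal sketch (our own illustration; the function names are not from the paper) maps each event time onto the unit circle at the stimulus frequency and averages, yielding the synchrony vector whose absolute value is the vector strength and whose phase corresponds to a delay:

```python
import numpy as np

def synchrony_vector(event_times, freq_hz):
    """Average of exp(i * 2*pi*f * t_k) over all event times t_k.
    The absolute value is the classical vector strength; the argument is
    the mean phase, interpretable as a delay relative to the periodic signal."""
    phases = 2.0 * np.pi * freq_hz * np.asarray(event_times, dtype=float)
    return np.exp(1j * phases).mean()

def vector_strength(event_times, freq_hz):
    """A number between 0 and 1 measuring phase locking to the given frequency."""
    return abs(synchrony_vector(event_times, freq_hz))
```

For perfectly periodic events all points coincide on the unit circle and the vector strength is 1; for events spread evenly over the cycle the average cancels and it tends to 0, matching the limiting cases discussed in the text.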

Muller-Buschbaum P.,TU Munich
Polymer Journal | Year: 2013

The enhancement of surface sensitivity by grazing incidence geometry facilitates the investigation of nanostructures in thin films and at surfaces. The technique provides information about the surface roughness, lateral correlations, sizes and shapes of objects (such as nanoparticles and nanostructures) positioned on top of the surface or in a region near the surface. Grazing incidence small-angle neutron scattering (GISANS) overcomes the limitations of conventional small-angle neutron scattering for extremely small sample volumes in the thin-film geometry. Although real space analysis techniques, such as atomic force microscopy, provide easy access to surface structures, reciprocal space analysis techniques, such as GISANS, provide several advantages: (i) average statistical information over the large illuminated sample surface can be detected and (ii) buried lateral structures can be probed without damage, using the variable-probed depth as a function of the incident angle. To illustrate the potential applications and challenges of GISANS, several different examples of thin nanostructured polymer films are reviewed. Nanostructures in triblock copolymer thin films are studied in the bulk as well as at the polymer-air and the silicon-polymer interface. Confined nanostructures in a dewetted diblock copolymer film are also discussed in terms of contrast and experimental settings. © 2013 The Society of Polymer Science, Japan (SPSJ) All rights reserved. Source

Iocco F.,Institute Fisica Teorica UAM CSIC | Pato M.,Sao Paulo State University | Bertone G.,TU Munich | Bertone G.,The Oskar Klein Center | Bertone G.,University of Amsterdam
Nature Physics | Year: 2015

The ubiquitous presence of dark matter in the Universe is today a central tenet of modern cosmology and astrophysics. Throughout the Universe, the evidence for dark matter is compelling in dwarf galaxies, spiral galaxies and galaxy clusters, as well as at cosmological scales. However, it has been historically difficult to pin down the dark matter contribution to the total mass density in the Milky Way, particularly in the innermost regions of the Galaxy and in the solar neighbourhood. Here we present an up-to-date compilation of Milky Way rotation curve measurements and compare it with state-of-the-art baryonic mass distribution models. We show that current data strongly disfavour baryons as the sole contribution to the Galactic mass budget, even inside the solar circle. Our findings demonstrate the existence of dark matter in the inner Galaxy without making any assumptions about its distribution. We anticipate that this result will enable new model-independent constraints on the dark matter local density and profile, thus reducing uncertainties on direct and indirect dark matter searches, and will help reveal the structure and evolution of the Galaxy. Source

Holzmann B.,TU Munich
Amino Acids | Year: 2013

The peripheral nervous system is connected with lymphoid organs through sensory nerves that mediate pain reflexes and may influence immune responses through the release of neuropeptides such as calcitonin gene-related peptide (CGRP). Local and systemic levels of CGRP increase rapidly during inflammatory responses. CGRP inhibits effector functions of various immune cells and dampens inflammation by distinct pathways involving the amplification of IL-10 production and/or the induction of the transcriptional repressor inducible cAMP early repressor (ICER). Thus, available evidence suggests that, in neuro-immunological interactions, CGRP mediates a potent peptidergic anti-inflammatory pathway. © 2011 Springer-Verlag. Source

Wang Y.-M.,TU Munich | Shen Y.-L.,Ocean University of China
Nuclear Physics B | Year: 2015

We compute perturbative corrections to B→π form factors from QCD light-cone sum rules with B-meson distribution amplitudes. Applying the method of regions, we demonstrate factorization of the vacuum-to-B-meson correlation function, defined with an interpolating current for the pion, at one-loop level, explicitly in the heavy quark limit. The short-distance functions in the factorization formulae of the correlation function involve both hard and hard-collinear scales; these functions can be further factorized into hard coefficients, obtained by integrating out the hard fluctuations, and jet functions encoding the hard-collinear information. Resummation of large logarithms in the short-distance functions is then achieved via the standard renormalization-group approach. We further show that the structures of the factorization formulae for fBπ+(q2) and fBπ0(q2) at large hadronic recoil from QCD light-cone sum rules match those derived in QCD factorization. We also perform an exploratory phenomenological analysis of B→π form factors, paying attention to various sources of perturbative and systematic uncertainties, and extract |Vub|=(3.05-0.38+0.54|th.±0.09|exp.)×10-3 with the inverse moment of the B-meson distribution amplitude ϕB+(ω) determined by reproducing fBπ+(q2=0) obtained from the light-cone sum rules with π distribution amplitudes. Furthermore, we present the invariant-mass distributions of the lepton pair for B→πℓν (ℓ=μ, τ) in the whole kinematic region. Finally, we briefly discuss non-valence Fock state contributions to the B→π form factors fBπ+(q2) and fBπ0(q2). © 2015 The Authors. Source

Nitsche U.,TU Munich
Annals of surgery | Year: 2012

We performed individualized risk assessment in patients with UICC stage II colon cancer based on a panel of molecular genetic alterations. Risk assessment in patients with colon cancer and localized disease (UICC stage II) is not sufficiently reliable, and the development of metachronous metastasis is assumed to be governed largely by individual tumor genetics. Fresh frozen tissue from 232 patients (T3-4, N0, M0) with complete tumor resection and a median follow-up of 97 months was analyzed for microsatellite stability and for KRAS exon 2 and BRAF exon 15 mutations. Gene expression of the WNT-pathway surrogate marker osteopontin and of the metastasis-associated genes SASH1 and MACC1 was determined for 179 patients. The results were correlated with the risk of metachronous distant metastasis (n = 22 patients). Mutations of KRAS were detected in 30% of patients, mutations of BRAF in 15% of patients, and microsatellite instability in 26% of patients. Risk of recurrence was associated with KRAS mutation (P = 0.033), microsatellite-stable tumors (P = 0.015), decreased expression of SASH1 (P = 0.049), and increased expression of MACC1 (P < 0.001). MACC1 was the only independent parameter for recurrence prediction (hazard ratio: 6.2; 95% confidence interval: 2.4-16; P < 0.001). Integrative 2-step cluster analysis allocated patients into 4 groups according to their tumor genetics. KRAS mutation, BRAF wild type, microsatellite stability, and high MACC1 expression defined the group with the highest risk of recurrence (16%, 7 of 43), whereas BRAF wild type, microsatellite instability, and low MACC1 expression defined the group with the lowest risk (4%, 1 of 26). MACC1 expression predicts the development of metastases, outperforming microsatellite stability status as well as KRAS/BRAF mutation status. Source

Munzer A.M.,TU Munich | Michael Z.P.,University of Pittsburgh | Star A.,University of Pittsburgh
ACS Nano | Year: 2013

Carbon nanotubes (CNTs) have been of high interest because of their potential to complement or to replace current biomedical sensor and assay techniques. By taking advantage of their unique electrical and optical properties, CNTs can be integrated into highly sensitive sensors and probes. We highlight recent advances toward applying CNTs to the biomedical field, focusing on a report by Reuel et al. in this issue of ACS Nano, wherein the inherent near-infrared (NIR) fluorescence of functionalized arrays of single-walled carbon nanotubes (SWNTs) is utilized for detection of several important biological markers. © 2013 American Chemical Society. Source

Kaiser N.,TU Munich
Physical Review C - Nuclear Physics | Year: 2015

Based on a chiral approach to nuclear matter, the quartic term in the expansion of the equation of state of isospin-asymmetric nuclear matter is calculated. The contributions to the quartic isospin asymmetry energy A4(kf) arising from 1π exchange and chiral 2π exchange in nuclear matter are calculated analytically, together with three-body terms involving virtual Δ(1232) isobars. From these interaction terms one obtains at saturation density ρ0=0.16fm-3 the value A4(kf0)=1.5MeV, more than three times as large as the kinetic energy part. Moreover, iterated 1π exchange exhibits components for which the fourth derivative with respect to the isospin asymmetry parameter δ becomes singular at δ=0. The genuine presence of a nonanalytical term δ4ln|δ| in the expansion of the energy per particle of isospin-asymmetric nuclear matter is demonstrated by evaluating an s-wave contact interaction at second order. © 2015 American Physical Society. Source
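For orientation, the expansion referred to in the abstract can be written, using the conventional asymmetry parameter, as (a schematic form consistent with standard notation; the precise coefficient definitions follow the paper):

```latex
\frac{E}{A}(\rho,\delta) \;=\; \frac{E}{A}(\rho,0) \;+\; A_2(k_f)\,\delta^2 \;+\; A_4(k_f)\,\delta^4 \;+\; \dots ,
\qquad
\delta \;=\; \frac{\rho_n-\rho_p}{\rho_n+\rho_p},
```

where the nonanalytical contribution demonstrated in the paper enters as a term proportional to δ⁴ ln|δ|, which is not captured by a pure power series in δ².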

Schwab W.,TU Munich
Molecules | Year: 2013

4-Hydroxy-2,5-dimethyl-3(2H)-furanone (HDMF, furaneol®) and its methyl ether 2,5-dimethyl-4-methoxy-3(2H)-furanone (DMMF) are important aroma chemicals and are considered key flavor compounds in many fruits. Due to their attractive sensory properties, they are highly appreciated by the food industry. In fruits, 2,5-dimethyl-3(2H)-furanones are synthesized by a series of enzymatic steps, whereas HDMF is also a product of the Maillard reaction. Numerous methods for the synthetic preparation of these compounds have been published and are applied by industry, but the development of a biotechnological process requires knowledge and availability of the biosynthetic enzymes. In recent years, substantial progress has been made in the elucidation of the biological pathway leading to HDMF and DMMF. This review summarizes the latest advances in this field. © 2013 by the authors. Source

Bas M.,TU Munich
Expert Review of Clinical Immunology | Year: 2012

Bradykinin is the key mediator of symptoms of hereditary angioedema (HAE), a rare genetic disorder characterized by recurrent episodes of edema of the skin, mucosa and muscle. Icatibant, a bradykinin B2 receptor antagonist, is an effective and generally well-tolerated treatment option for acute attacks of type I and II HAE. A Phase III randomized, double-blind, placebo-controlled study, FAST-3 (NCT00912093), was designed to further evaluate the efficacy and safety of icatibant in patients presenting with moderate to very severe cutaneous and/or abdominal or mild-to-moderate laryngeal symptoms. Severe laryngeal attacks were treated with open-label icatibant. The controlled phase of FAST-3, completed in October 2010 with results published in December 2011, demonstrated that, compared with placebo, icatibant evoked clinically meaningful and statistically significant efficacy across multiple end points in the treatment of type I and II HAE attacks. In addition, icatibant was generally well tolerated and no drug-related serious adverse events were experienced. © 2012 Expert Reviews Ltd. Source

Witt H.,TU Munich
Nature Genetics | Year: 2011

A genome-wide association study has identified two new loci modifying pulmonary disease severity in cystic fibrosis. Although these data offer clues to the pathways influencing pulmonary function, the underlying genes and mechanisms remain to be elucidated. © 2011 Nature America, Inc. All rights reserved. Source

Levine S.Z.,Haifa University | Leucht S.,TU Munich
Schizophrenia Research | Year: 2013

Background: The treatment and measurement of negative symptoms are currently at issue in schizophrenia, but the clinical meaning of symptom severity and change is unclear. Aim: To offer a clinically meaningful interpretation of severity and change scores on the Scale for the Assessment of Negative Symptoms (SANS). Method: Patients were intention-to-treat participants (n=383) in two double-blind randomized placebo-controlled clinical trials that compared amisulpride with placebo for the treatment of predominant negative symptoms. Equipercentile linking was used to examine extrapolation from (a) CGI-S to SANS severity ratings, and (b) CGI-I to SANS percentage change. Linking was conducted at baseline, 8-14 days, 28-30 days, and 56-60 days of the trials. Results: Across visits, CGI-S ratings of 'not ill' linked to SANS scores of 0-13, and ranged to 'extreme' ratings that linked to SANS scores of 102-105. The relationship between the CGI-S and the SANS severity scores followed a linear trend (1=0-13, 2=15-56, 3=37-61, 4=49-66, 5=63-75, 6=79-89, 7=102-105). Similarly, the relationship between CGI-I ratings and SANS percentage change followed a linear trend. For instance, CGI-I ratings of 'very much improved' were linked to SANS percent changes of 90 to 67, 'much improved' to 50 to 42, and 'minimally improved' to 21 to 13. Conclusions: The current results uniquely contribute to the debate surrounding negative symptoms by providing clinical meaning to SANS severity and change scores, and so offer direction regarding clinically meaningful response cut-off scores to guide treatment targets of predominant negative symptoms. © 2013 Elsevier B.V. Source
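Equipercentile linking maps a score on one scale to the score at the same percentile rank on the other scale. A minimal sketch of the idea follows; the paired CGI-S/SANS ratings below are fabricated for illustration (not trial data), and real applications typically presmooth the score distributions, which this sketch omits.

```python
import numpy as np

def equipercentile_link(scores_a, scores_b, query):
    """Map a score on scale A to the scale-B score at the same percentile rank."""
    a = np.sort(np.asarray(scores_a, dtype=float))
    b = np.asarray(scores_b, dtype=float)
    # Percentile rank of the query value within the scale-A sample
    pct = np.searchsorted(a, query, side="right") / len(a) * 100.0
    # Scale-B score at that same percentile
    return float(np.percentile(b, pct))

# Illustrative paired ratings (hypothetical): one CGI-S and one SANS total per patient
cgi_s = [1, 2, 2, 3, 4, 4, 5, 6, 7]
sans  = [5, 20, 30, 45, 55, 60, 70, 85, 100]

print(equipercentile_link(cgi_s, sans, 4))  # SANS score linked to CGI-S = 4
```

Higher CGI-S ratings link to higher SANS scores by construction, which is the monotone relationship the abstract describes.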

Simmel F.C.,TU Munich
ACS Nano | Year: 2013

The development of complex self-organizing molecular systems for future nanotechnology requires not only robust formation of molecular structures by self-assembly but also precise control over their temporal dynamics. As an exquisite example of such control, in this issue of ACS Nano, Fujii and Rondelez demonstrate a particularly compact realization of a molecular "predator-prey" ecosystem consisting of only three DNA species and three enzymes. The system displays pronounced oscillatory dynamics, in good agreement with the predictions of a simple theoretical model. Moreover, its considerable modularity also allows for ecological studies of competition and cooperation within molecular networks. © 2013 American Chemical Society. Source

Kaiser N.,TU Munich
Physical Review C - Nuclear Physics | Year: 2015

The nucleon-nucleon interaction arising from the exchange of three pions and the excitation of Δ(1232) isobars in intermediate states is studied. Approximating the Δ propagator by the inverse ΔN mass-splitting, analytical expressions are derived for the spectral functions of the isoscalar and isovector central, spin-spin, and tensor NN potentials in momentum-space. A translation of the spectral functions into coordinate-space potentials reveals that the main effect of these specific exchange and excitation mechanisms is a repulsive isoscalar central NN potential. © 2015 American Physical Society. Source

Wagner S.,TU Munich
Information and Software Technology | Year: 2010

Context: Software quality is a complex concept. Therefore, assessing and predicting it is still challenging in practice as well as in research. Activity-based quality models break down this complex concept into concrete definitions, more precisely facts about the system, process, and environment as well as their impact on activities performed on and with the system. However, these models lack an operationalisation that would allow them to be used in assessment and prediction of quality. Bayesian networks have been shown to be a viable means for this task, incorporating variables with uncertainty. Objective: The qualitative knowledge contained in activity-based quality models is a rich basis for building Bayesian networks for quality assessment. This paper describes a four-step approach for systematically deriving a Bayesian network from an assessment goal and a quality model. Method: The four steps of the approach are explained in detail and with running examples. Furthermore, an initial evaluation is performed, in which data from NASA projects and an open source system are obtained. The approach is applied to these data and its applicability is analysed. Results: The approach is applicable to the data from the NASA projects and the open source system. However, the predictive results vary depending on the availability and quality of the data, especially the underlying general distributions. Conclusion: The approach is viable in a realistic context but needs further investigation in case studies in order to analyse its predictive validity. © 2010 Elsevier B.V. All rights reserved. Source
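The core idea, linking a fact about the system to an activity through conditional probabilities, can be sketched with a two-node discrete Bayesian network and inference by enumeration. The structure and all probabilities below are illustrative assumptions, not values from the paper.

```python
# Hypothetical two-node network: a system fact ("code is highly complex")
# influences an activity outcome ("maintenance is hard").
p_complex = 0.3                          # prior P(fact = high complexity)
p_hard_given = {True: 0.8, False: 0.2}   # P(activity hard | complexity)

# Marginal P(maintenance hard), summing over the fact node
p_hard = sum(p_hard_given[c] * (p_complex if c else 1.0 - p_complex)
             for c in (True, False))

# Posterior P(high complexity | maintenance observed to be hard), Bayes' rule
p_complex_given_hard = p_hard_given[True] * p_complex / p_hard

print(round(p_hard, 3), round(p_complex_given_hard, 3))  # → 0.38 0.632
```

A full derivation per the paper's four steps would add many more fact and activity nodes and learn the conditional probability tables from project data; this sketch only shows the inference mechanism such a network rests on.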

Veprek S.,TU Munich
Journal of Vacuum Science and Technology A: Vacuum, Surfaces and Films | Year: 2013

High elastic moduli do not guarantee high hardness because, upon finite shear, electronic instabilities often occur that result in transformations to softer phases. Therefore, the author concentrates on the extrinsically superhard nanostructured materials, which are the most promising. Decreasing crystallite size results in strengthening and hardening because the grain boundaries impede plasticity (e.g., Hall-Petch strengthening in the case of dislocation activity). However, this hardening is limited to crystallite sizes down to 10-15 nm, below which softening due to grain boundary shear dominates. This softening can be reduced by forming low-energy grain boundaries or a strong interfacial layer. In such a way, much higher hardness enhancement can be achieved. The emphasis is on understanding the mechanisms of the hardness enhancement. A special section deals with examples of present industrial applications of such coatings on tools for machining, in order to illustrate that these materials are already in large-scale use. In the last section, the author summarizes the open questions and limitations for the preparation of the super- and ultrahard nanocomposite coatings and possible ways to overcome them. © 2013 American Vacuum Society. Source

Leucht S.,TU Munich | Zhao J.,Merck
Journal of Psychopharmacology | Year: 2014

Objective: The purpose of this study was to assess whether early symptom improvement predicts later treatment outcome in patients with schizophrenia. Methods: Data were pooled from intent-to-treat (ITT) populations of three six-week randomized controlled studies with fixed doses of asenapine (ASE; n=470), olanzapine (OLA; n=95), risperidone (RIS; n=56), haloperidol (HAL; n=112), or placebo (PLA; n=275). Early improvement was defined as a 20% reduction of Positive and Negative Syndrome Scale (PANSS) total score at week 2, compared to baseline (primary criterion). Treatment outcome at week 6 was defined as response (PANSS: ≥50% score reduction) or remission (PANSS item score ≤3 on selected items at week 6). Odds ratios (ORs) and predictive performance statistics were calculated. Results: Statistically significant associations between early improvement (at week 2) and treatment outcome (at week 6) were observed for all treatment groups except OLA, as evidenced by increased ORs for response. Analysis of associations between early improvement and remission, as defined by Andreasen et al. (2005), revealed a statistically significant relationship for ASE- and PLA-treated patients only. Predictive performance statistics revealed higher negative predictive value (NPV) and sensitivity rates, and comparably lower positive predictive value (PPV) and specificity rates, across treatment groups for both response and remission. Conclusion: It is suggested that the absence of improvement within two weeks of treatment may predict the unlikely success of subsequent pharmacological intervention. Source
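The predictive performance statistics named in the abstract all derive from a 2×2 table crossing early improvement (week 2) with later response (week 6). A minimal sketch, using fabricated counts chosen only to illustrate the high-NPV/low-PPV pattern the abstract reports:

```python
def predictive_stats(tp, fp, fn, tn):
    """2x2 table: tp/fp = early improvers who do/don't respond at week 6;
    fn/tn = non-improvers who do/don't respond at week 6."""
    return {
        "odds_ratio": (tp * tn) / (fp * fn),
        "sensitivity": tp / (tp + fn),  # responders flagged by early improvement
        "specificity": tn / (tn + fp),  # non-responders not flagged
        "ppv": tp / (tp + fp),          # early improvers who go on to respond
        "npv": tn / (tn + fn),          # non-improvers who indeed don't respond
    }

# Illustrative counts only, not from the pooled trials
stats = predictive_stats(tp=60, fp=90, fn=20, tn=180)
print(stats)
```

With these counts the OR is 6.0 and the NPV (0.90) exceeds the PPV (0.40): lack of early improvement is a stronger signal of non-response than early improvement is of response, mirroring the abstract's conclusion.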

Becker V.,TU Munich
Journal of biophotonics | Year: 2011

Probe-based confocal laser scanning endomicroscopy (pCLE) enables in-vivo histopathology during ongoing endoscopy. The most frequently used fluorophore is fluorescein sodium administered intravenously. Despite the increased use of pCLE, there are hardly any data on the ideal fluorescein concentration. Therefore, rectal mucosa of pigs was examined after intravenous injection of fluorescein as a single bolus (0.1 ml/kg body weight) in different concentrations (0.5%, 1%, 2%, 5%, 10%). Video sequences were recorded after 1, 5 and 60 min. For objective evaluation, the signal-to-noise ratio (SNR) was computed. For subjective evaluation, video sequences were randomized and blindly evaluated by experienced endomicroscopists. In total, 19,037 images were analyzed. The mean SNR increased from the lowest concentration (0.5%; SNR 6.75, range 3.55) to the highest (10%; SNR 9.11, range 3.18). Subjective evaluation demonstrated best image quality with a fluorescein concentration of 5%. In conclusion, pCLE shows best results using a single intravenous injection of 5% fluorescein. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim. Source
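One common way to estimate SNR for an image frame is the ratio of mean pixel intensity to its standard deviation; the study's exact definition is not given in the abstract, so the sketch below is an assumption, run here on synthetic frames standing in for two fluorescein concentrations.

```python
import numpy as np

def frame_snr(frame):
    """SNR estimate as mean intensity over intensity standard deviation
    (an assumed definition; the paper's exact formula may differ)."""
    frame = np.asarray(frame, dtype=float)
    return frame.mean() / frame.std()

def mean_snr(frames):
    """Average SNR over a video sequence of frames."""
    return float(np.mean([frame_snr(f) for f in frames]))

rng = np.random.default_rng(0)
# Synthetic stand-ins: a brighter fluorophore signal with similar noise
# yields a higher SNR, as the study observed with higher concentrations.
dim    = [rng.normal(50, 10, size=(64, 64)) for _ in range(5)]
bright = [rng.normal(90, 10, size=(64, 64)) for _ in range(5)]

print(mean_snr(dim), mean_snr(bright))
```

This reproduces the qualitative trend only; the reported values (SNR 6.75 at 0.5% vs. 9.11 at 10%) come from the actual endomicroscopy recordings.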

Afanasjev A.V.,Mississippi State University | Agbemava S.E.,Mississippi State University | Ray D.,Mississippi State University | Ring P.,TU Munich
Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics | Year: 2013

The neutron and proton drip lines represent the limits of the nuclear landscape. While the proton drip line is measured experimentally up to rather high Z values, the location of the neutron drip line for the absolute majority of elements is based on theoretical predictions which involve extreme extrapolations. The first ever systematic investigation of the location of the proton and neutron drip lines in covariant density functional theory has been performed by employing a set of state-of-the-art parametrizations. Calculated theoretical uncertainties in the position of the two-neutron drip line are compared with those obtained in non-relativistic DFT calculations. Shell effects drastically affect the shape of the two-neutron drip line. In particular, model uncertainties in the definition of the two-neutron drip line at Z ~ 54, N = 126 and Z ~ 82, N = 184 are very small due to the impact of the spherical shell closures at N = 126 and N = 184. © 2013 Elsevier B.V. Source
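For reference, the two-neutron drip line discussed above is conventionally defined by the vanishing of the two-neutron separation energy:

```latex
S_{2n}(Z,N) \;=\; B(Z,N) - B(Z,N-2), \qquad
S_{2n} > 0 \ \text{(bound)}, \quad S_{2n} \le 0 \ \text{(beyond the drip line)},
```

where B(Z,N) is the binding energy of the nucleus with Z protons and N neutrons; the theoretical uncertainty in the drip-line position reflects how the predicted binding energies vary across parametrizations.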

Su Y.-H.,TU Munich
Acta Psychologica | Year: 2014

This study investigated whether explicit beat induction in the auditory, visual, and audiovisual (bimodal) modalities aided the perception of weakly metrical auditory rhythms, and whether it reinforced attentional entrainment to the beat of these rhythms. The visual beat-inducer was a periodically bouncing point-light figure, which aimed to examine whether an observed rhythmic human movement could induce a beat that would influence auditory rhythm perception. In two tasks, participants listened to three repetitions of an auditory rhythm that were preceded and accompanied by (1) an auditory beat, (2) a bouncing point-light figure, (3) a combination of (1) and (2) synchronously, or (4) a combination of (1) and (2), with the figure moving in anti-phase to the auditory beat. Participants reproduced the auditory rhythm subsequently (Experiment 1), or detected a possible temporal change in the third repetition (Experiment 2). While an explicit beat did not improve rhythm reproduction, possibly due to the syncopated rhythms when a beat was imposed, bimodal beat induction yielded greater sensitivity to a temporal deviant in on-beat than in off-beat positions. Moreover, the beat phase of the figure movement determined where on-beat accents were perceived during bimodal induction. Results are discussed with regard to constrained beat induction in complex auditory rhythms, visual modulation of auditory beat perception, and possible mechanisms underlying the preferred visual beat consisting of rhythmic human motions. © 2014 Elsevier B.V. Source

Mallick P.,University of Southern California | Mallick P.,University of California at Los Angeles | Kuster B.,TU Munich | Kuster B.,Center for Integrated Protein Science Munich
Nature Biotechnology | Year: 2010

The evolution of mass spectrometry-based proteomic technologies has advanced our understanding of the complex and dynamic nature of proteomes while concurrently revealing that no 'one-size-fits-all' proteomic strategy can be used to address all biological questions. Whereas some techniques, such as those for analyzing protein complexes, have matured and are broadly applied with great success, others, such as global quantitative protein expression profiling for biomarker discovery, are still confined to a few expert laboratories. In this Perspective, we attempt to distill the wide array of conceivable proteomic approaches into a compact canon of techniques suited to asking and answering specific types of biological questions. By discussing the relationship between the complexity of a biological sample and the difficulty of implementing the appropriate analysis approach, we contrast areas of proteomics broadly usable today with those that require significant technical and conceptual development. We hope to provide nonexperts with a guide for calibrating expectations of what can realistically be learned from a proteomics experiment and for gauging the planning and execution effort. We further provide a detailed supplement explaining the most common techniques in proteomics. © 2010 Nature America, Inc. All rights reserved. Source

Mendl C.B.,TU Munich
Journal of Computational Physics | Year: 2012

We present a fast algorithm to calculate Coulomb/exchange integrals of prolate spheroidal electronic orbitals, which are the exact solutions of the single-electron, two-center Schrödinger equation for diatomic molecules. Our approach employs Neumann's expansion of the Coulomb repulsion 1/|x - y|, solves the resulting integrals symbolically in closed form and subsequently performs a numeric Taylor expansion for efficiency. Thanks to the general form of the integrals, the obtained coefficients are independent of the particular wavefunctions and can thus be reused later. Key features of our algorithm include complete avoidance of numeric integration, drafting of the individual steps as fast matrix operations and high accuracy due to the exponential convergence of the expansions. Application to the diatomic molecules O2 and CO exemplifies the developed methods, which can be relevant for a quantitative understanding of chemical bonds in general. © 2012 Elsevier Inc. Source

Wikstrom M.,University of Helsinki | Sharma V.,Tampere University of Technology | Kaila V.R.I.,TU Munich | Hosler J.P.,University of Mississippi Medical Center | Hummer G.,Max Planck Institute of Biophysics
Chemical Reviews | Year: 2015

Complexes I, III (cytochrome bc1), and IV (cytochrome c oxidase) of the respiratory chain employ fundamentally different mechanisms for redox-coupled proton pumping. In the Q-cycle