Max Planck Institute for Mathematics in the Sciences

Leipzig, Germany


The Max Planck Institute for Mathematics in the Sciences in Leipzig was founded on March 1, 1996. At the institute, scientists work on projects that apply mathematics to various areas of natural science, in particular physics, biology, chemistry, and materials science. Its main research areas are scientific computing; pattern formation, energy landscapes, and scaling laws; Riemannian, Kählerian, and algebraic geometry; and neuronal networks. The institute runs an extensive visitors programme, which has made Leipzig a leading centre for research in applied mathematics. (Source: Wikipedia.)


Tuckwell H.C., Max Planck Institute for Mathematics in the Sciences
Progress in Neurobiology | Year: 2012

Ca2+ currents in neurons and muscle cells have been classified into five types: four of these (L, N, P/Q and R) are considered high-threshold, and one (T) low-threshold. This review focuses on quantitative aspects of L-type currents. L-type channels are now distinguished according to their structure as one of four main subtypes, Cav1.1-Cav1.4. L-type calcium currents play many fundamental roles in cellular dynamics, including the control of firing rate and pacemaking in neurons and cardiac cells and the activation of transcription factors involved in synaptic plasticity and in immune cells. The half-activation potentials of L-type currents (ICaL) have been ascribed values ranging from as low as -50 mV to near 0 mV. The inactivation of ICaL has been found to be both voltage-dependent (VDI) and calcium-dependent (CDI), and the latter component may involve calcium-induced calcium release. CDI is often an important aspect of dynamical models of cell electrophysiology. We describe the basic components of modeling ICaL, including activation and both voltage- and calcium-dependent inactivation, and the two main approaches to determining the current. By means of tables of values from over 65 representative studies, we review the dynamical properties associated with ICaL that have been found experimentally or employed over the last 25 years in deterministic modeling of various nervous-system and cardiac cells. Distributions and statistics of several parameters related to activation and inactivation are obtained. There are few reliable, complete experimental data on L-type calcium current kinetics for cells at physiological calcium ion concentrations. Neurons fall approximately into two groups, with experimental half-activation potentials that are high (-18.3 mV) or low (-36.4 mV); these correspond closely to the values for Cav1.2 and Cav1.3 channels in physiological solutions.
There are very few complete experimental data on activation time constants; those available suggest values around 0.5-2 ms, whereas a wide range of time constants has been employed in modeling. A major problem for quantitative studies, due to the lack of experimental data, has been the use of kinetic parameters from one cell type for others. Inactivation time constants for VDI have been found experimentally, with an average of 65 ms. Example calculations of ICaL are given for linear and constant-field methods, the effects of CDI are illustrated for single- and double-pulse protocols, and the results are compared with experiment. The review ends with a discussion and analysis of the experimental properties of the subtypes (Cav1.1-Cav1.4) and their roles in normal activity, including pacemaking, and in many pathological states. © 2011 Elsevier Ltd.
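The modeling ingredients named in this abstract (Boltzmann activation, calcium-dependent inactivation, a driving-force term) can be sketched in a few lines. This is a hypothetical illustration, not the review's model: the conductance g, slope factor k, CDI constant kd, and reversal potential E_Ca are assumed values; only the half-activation potential -18.3 mV is taken from the text above.

```python
import math

def m_inf(v, v_half=-18.3, k=8.0):
    """Steady-state activation (Boltzmann); v_half is the review's
    high-threshold neuron group value, the slope k is assumed."""
    return 1.0 / (1.0 + math.exp(-(v - v_half) / k))

def cdi(ca, kd=0.6e-3):
    """Calcium-dependent inactivation as a simple saturating factor;
    kd (molar) is an illustrative dissociation constant."""
    return kd / (kd + ca)

def i_cal(v, ca, g=0.1, e_ca=60.0):
    """Linear (ohmic) driving-force form of the L-type current:
    I = g * m_inf^2 * f_CDI * (V - E_Ca); units are nominal."""
    return g * m_inf(v) ** 2 * cdi(ca) * (v - e_ca)

# At the half-activation potential, m_inf is exactly 0.5.
print(m_inf(-18.3))  # 0.5
```

Below E_Ca the current is inward (negative), as expected for a calcium current.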

Banjai L., Max Planck Institute for Mathematics in the Sciences
SIAM Journal on Scientific Computing | Year: 2010

We describe how a time-discretized wave equation in a homogeneous medium can be solved by boundary integral methods. The time discretization can be a multistep, Runge-Kutta, or more general multistep-multistage method. The resulting convolutional system of boundary integral equations belongs to the family of convolution quadratures of Lubich. The aim of this work is twofold. First, it describes an efficient, robust, and easily parallelizable method for solving the semidiscretized system. The resulting algorithm combines the main advantages of time-stepping methods and of Fourier synthesis: at each time step, a system of linear equations with the same system matrix needs to be solved, yet computations can easily be done in parallel; the computational cost is almost linear in the number of time steps; and only the Laplace transform of the time-domain fundamental solution is needed. The new aspect of the algorithm is that all this is possible without ever explicitly constructing the weights of the convolution quadrature. This approach also readily allows the use of modern data-sparse techniques for efficient computation in space. We investigate theoretically and numerically to what extent hierarchical matrix (H-matrix) techniques can be used to speed up the spatial computation. Second, we perform a series of large-scale 3D experiments with a range of multistep and multistage time discretization methods: the backward difference formula of order 2 (BDF2), the trapezoidal rule, and the 3-stage Radau IIA method are investigated in detail. One conclusion of the experiments is that the Radau IIA method often performs overwhelmingly better than the linear multistep methods, especially for problems with many reflections; yet, in connection with hyperbolic problems, BDFs have so far been predominant in the literature on convolution quadrature. © 2010 Society for Industrial and Applied Mathematics.

Khoromskaia V., Max Planck Institute for Mathematics in the Sciences
Computational Methods in Applied Mathematics | Year: 2014

The Hartree-Fock eigenvalue problem governed by a 3D integro-differential operator is the basic model in ab initio electronic structure calculations. Several years ago the idea of solving the Hartree-Fock equation by a fully 3D grid-based numerical approach seemed a fantasy, and tensor-structured methods did not exist; in fact, these methods evolved during work on this challenging problem. In this paper, our recent results on the topic are outlined and a black-box Hartree-Fock solver based on tensor numerical methods is presented. The approach rests on the rank-structured calculation of the core Hamiltonian and of the two-electron integrals tensor using problem-adapted basis functions discretized on n×n×n 3D Cartesian grids. The arising 3D convolution transforms with the Newton kernel are replaced by a combination of 1D convolutions and 1D Hadamard and scalar products. The approach allows huge spatial grids, with n³ ∼ 10¹⁵, yielding high resolution at low cost. The two-electron integrals are computed via multiple factorizations. The Laplacian Galerkin matrix can be computed "on the fly" using the quantized tensor approximation of O(log n) complexity. The performance of the black-box solver in its Matlab implementation is comparable to that of benchmark packages based on the analytical (pre-)evaluation of the multidimensional convolution integrals. We present ab initio Hartree-Fock calculations of the ground-state energy for amino acid molecules, and of the "energy bands" for model examples of extended (quasi-periodic) systems. © 2014 Institute of Mathematics.
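The replacement of 3D convolutions by 1D operations mentioned above rests on the fact that convolution of separable (rank-1) tensors factorizes into independent 1D convolutions. The snippet below is only a numerical check of that identity on random rank-1 factors; the actual solver works with rank-structured representations of the Newton kernel.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
# Rank-1 factors of two 3D tensors A = a1 (x) a2 (x) a3, B = b1 (x) b2 (x) b3.
a = [rng.standard_normal(n) for _ in range(3)]
b = [rng.standard_normal(n) for _ in range(3)]

# Tensor-structured route: the 3D convolution of rank-1 tensors
# factorizes into three 1D convolutions, O(n log n) each.
c = [np.convolve(ak, bk) for ak, bk in zip(a, b)]
conv_rank1 = np.einsum('i,j,k->ijk', *c)

# Reference: full 3D convolution of the assembled n^3 tensors via FFT
# with zero padding to length 2n - 1 in each mode.
A = np.einsum('i,j,k->ijk', *a)
B = np.einsum('i,j,k->ijk', *b)
m = 2 * n - 1
conv_full = np.real(np.fft.ifftn(np.fft.fftn(A, (m, m, m)) *
                                 np.fft.fftn(B, (m, m, m))))
```

The two results agree to machine precision, while the factorized route never forms an n³ array for the convolution itself.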

Elze T., Max Planck Institute for Mathematics in the Sciences
Journal of Neuroscience Methods | Year: 2010

In experimental visual neuroscience, brief presentations of visual stimuli are often required. Accurate knowledge of the durations of visual stimuli and of their signal shapes is important both in psychophysical experiments with humans and in neuronal recordings with animals. In this study we measure and analyze the changes in luminance of visual stimuli on standard computer monitors. Signal properties of the two most frequently used monitor technologies, cathode ray tube (CRT) and liquid crystal display (LCD) monitors, are compared, and the effects of the signal shapes on the stated durations of visual stimuli are analyzed. The fundamental differences between CRT and LCD signals require different methods for specifying durations, especially for brief stimulus presentations. In addition, stimulus durations on LCD monitors vary across monitor models and are not even homogeneous across different luminance levels on a single monitor. The use of LCD technology for brief stimulus presentation therefore requires extensive display measurements prior to the experiment. © 2010 Elsevier B.V.
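Why the two technologies need different duration definitions can be seen with idealized signals: an LCD holds a luminance plateau for the stimulus interval, whereas a CRT emits a brief phosphor pulse once per frame, so the same threshold rule yields very different "durations". All waveform parameters below are assumed for illustration, not measured values from the study.

```python
import numpy as np

def duration_above_threshold(signal, fs, frac=0.5):
    """Estimate stimulus duration as the time the luminance signal spends
    above a fraction of its peak. Reasonable for plateau-like LCD signals;
    applied to a pulsed CRT signal it returns only the total pulse time."""
    thresh = frac * signal.max()
    return np.count_nonzero(signal > thresh) / fs

fs = 20000.0                      # assumed 20 kHz photodiode sampling
t = np.arange(int(0.05 * fs)) / fs

# Idealized LCD: a ~20 ms luminance plateau.
lcd = ((t > 0.010) & (t < 0.030)).astype(float)
# Idealized CRT: short pulses once per 60 Hz frame (5% duty cycle).
crt = (((t * 60) % 1.0) < 0.05).astype(float)
```

The LCD trace yields roughly the intended 20 ms, while the CRT trace yields only the summed pulse time of a few milliseconds over the same interval.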

Neukamm S., Max Planck Institute for Mathematics in the Sciences
Archive for Rational Mechanics and Analysis | Year: 2012

We present a rigorous derivation of a homogenized, bending-torsion theory for inextensible rods from three-dimensional nonlinear elasticity in the spirit of Γ-convergence. We start with the elastic energy functional associated with a nonlinear composite material, which in a stress-free reference configuration occupies a thin cylindrical domain with thickness h ≪ 1. We consider composite materials that feature a periodic microstructure with period ε ≪ 1. We study the behavior as ε and h simultaneously converge to zero and prove that the energy (scaled by h⁻⁴) Γ-converges towards a non-convex, singular energy functional. The energy is finite only for configurations that correspond to pure bending and twisting of the rod. In this case, the energy is quadratic in curvature and torsion. Our derivation leads to a new relaxation formula that uniquely determines the homogenized coefficients. It turns out that their precise structure additionally depends on the ratio h/ε and, in particular, different relaxation formulas arise for h ≪ ε, ε ~ h and ε ≪ h. Although the initial elastic energy functional and the limiting functional are non-convex, our analysis leads to a relaxation formula that is quadratic and involves only relaxation over a single cell. Moreover, we derive an explicit formula for isotropic materials in the cases h ≪ ε and h ≫ ε, and prove that the Γ-limits associated with homogenization and dimension reduction in general do not commute. © 2012 Springer-Verlag.
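Schematically, the limiting bending-torsion functional described above has the following shape (generic notation, not the paper's: Q_γ denotes the homogenized quadratic form from the single-cell relaxation, and γ = lim h/ε):

```latex
E_{\mathrm{hom}} \;=\; \frac{1}{2}\int_0^L Q_{\gamma}\bigl(k_1(x_1),\,k_2(x_1),\,\tau(x_1)\bigr)\,\mathrm{d}x_1 ,
```

quadratic in the two bending curvatures k_1, k_2 and the torsion τ along the rod axis, and finite only on pure bending-twisting configurations, with Q_γ taking different forms in the regimes h ≪ ε, ε ~ h and ε ≪ h.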

Oikonomou V.K., Max Planck Institute for Mathematics in the Sciences
Nuclear Physics B | Year: 2013

We study N=2 supersymmetric Chern-Simons Higgs models in (2+1) dimensions and the existence of extended underlying supersymmetric quantum mechanics algebras. Our findings indicate that the quantum system of fermionic zero modes, in conjunction with the system of zero modes corresponding to bosonic fluctuations, is related to an N=4 extended one-dimensional supersymmetric algebra with central charge, a result closely connected to the N=2 spacetime supersymmetry of the total system. We also add soft supersymmetry-breaking terms to the fermionic sector in order to examine how this affects the index of the corresponding Dirac operator, which characterizes the degeneracy of the solitonic solutions. In addition, we analyze the impact of the underlying supersymmetric quantum algebras on the zero-mode bosonic fluctuations. This is relevant to the quantum theory of self-dual vortices, particularly for the symmetries of the metric on the space of vortex solutions and for the non-zero-mode states of bosonic fluctuations. © 2013 Elsevier B.V.

Rauh J., Max Planck Institute for Mathematics in the Sciences
IEEE Transactions on Information Theory | Year: 2011

This paper investigates maximizers of the information divergence from an exponential family E. It is shown that the rI-projection of a maximizer P to E is a convex combination of P and a probability measure P− with disjoint support and the same value of the sufficient statistics A. This observation can be used to transform the original problem of maximizing D(·∥E) over the set of all probability measures into the maximization of a function D̄r over a convex subset of ker A. The global maximizers of the two problems correspond to each other, and finding all local maximizers of D̄r yields all local maximizers of D(·∥E). The paper also proposes two algorithms to find the maximizers of D̄r and applies them to two examples where the maximizers of D(·∥E) were not previously known. © 2011 IEEE.
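A toy instance makes the objects concrete. For two binary variables with E the independence model, the rI-projection of P onto E is the product of its marginals, and the maximal divergence, log 2, is attained at the two perfectly correlated distributions. The brute-force grid search below is illustrative only and is not one of the paper's two algorithms.

```python
import numpy as np
from itertools import product

def kl(p, q):
    """Kullback-Leibler divergence with the usual 0*log(0) = 0 convention."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def d_from_independence(p):
    """D(P || E) for E the independence model on {0,1}x{0,1}: the
    rI-projection of P onto E is the product of its marginals."""
    p = p.reshape(2, 2)
    q = np.outer(p.sum(axis=1), p.sum(axis=0))
    return kl(p.ravel(), q.ravel())

# Brute-force search over a grid of distributions on the 4 states.
best, best_p = -1.0, None
g = np.linspace(0, 1, 21)
for a, b, c in product(g, g, g):
    if a + b + c <= 1:
        p = np.array([a, b, c, 1 - a - b - c])
        d = d_from_independence(p)
        if d > best:
            best, best_p = d, p
```

The search recovers the known maximum log 2 at a perfectly correlated distribution such as (1/2, 0, 0, 1/2), whose support is disjoint from that of the complementary anti-correlated measure, matching the support structure described in the abstract.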

Bacak M., Max Planck Institute for Mathematics in the Sciences
SIAM Journal on Optimization | Year: 2014

The geometric median as well as the Fréchet mean of points in a Hadamard space are important in both theory and applications. Surprisingly, no algorithms for their computation have hitherto been known. To address this issue, we use a splitting version of the proximal point algorithm for minimizing a sum of convex functions and prove that this algorithm produces a sequence converging to a minimizer of the objective function, which extends a recent result of Bertsekas [Math. Program., 129 (2011), pp. 163-195] to Hadamard spaces. The method is quite robust: not only does it yield algorithms for the median and the mean, but it also applies to various other optimization problems. We moreover show that another algorithm for computing the Fréchet mean can be derived from the law of large numbers due to Sturm [Ann. Probab., 30 (2002), pp. 1195-1222]. In applications, computing medians and means is probably most needed in tree space, an instance of a Hadamard space invented by Billera, Holmes, and Vogtmann [Adv. in Appl. Math., 27 (2001), pp. 733-767] as a tool for averaging phylogenetic trees. Since there now exists a polynomial-time algorithm for computing geodesics in tree space, due to Owen and Provan [IEEE/ACM Trans. Comput. Biol. Bioinform., 8 (2011), pp. 2-13], we obtain efficient algorithms for computing medians and means of trees, which can be directly used in practice. © 2014 Society for Industrial and Applied Mathematics.
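Sturm's law-of-large-numbers construction referenced above is easy to state: step a fraction 1/(k+1) along the geodesic toward each new sample. The sketch below runs it in Euclidean space, the simplest Hadamard space, where geodesics are straight lines and the scheme reproduces the arithmetic mean; in tree space one would substitute the Owen-Provan geodesic. The function names are ours, not the paper's.

```python
import numpy as np

def geodesic(x, y, t):
    """Geodesic point between x and y at parameter t in Euclidean space
    (a Hadamard space); in tree space this would be the Owen-Provan
    geodesic instead."""
    return (1 - t) * x + t * y

def inductive_mean(points):
    """Sturm-style inductive mean: walk a fraction 1/(k+1) of the
    geodesic toward each successive sample point."""
    s = points[0]
    for k, x in enumerate(points[1:], start=1):
        s = geodesic(s, x, 1.0 / (k + 1))
    return s

rng = np.random.default_rng(42)
pts = rng.standard_normal((100, 3))
m = inductive_mean(pts)
```

In the Euclidean case the recursion is exactly the running average, so the result coincides with the arithmetic (Fréchet) mean of the points; in a general Hadamard space only the geodesic step changes.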

Elze T., Max Planck Institute for Mathematics in the Sciences
PLoS ONE | Year: 2010

Background: In visual psychophysics, precise display timing, particularly for brief stimulus presentations, is often required. The aim of this study was to systematically review the commonly applied methods for the computation of stimulus durations in psychophysical experiments and to contrast them with the true luminance signals of stimuli on computer displays. Methodology/Principal Findings: In a first step, we systematically scanned the citation index Web of Science for studies with experiments with stimulus presentations for brief durations. Articles which appeared between 2003 and 2009 in three different journals were taken into account if they contained experiments with stimuli presented for less than 50 milliseconds. The 79 articles that matched these criteria were reviewed for their method of calculating stimulus durations. For those 75 studies where the method was either given or could be inferred, stimulus durations were calculated by the sum of frames (SOF) method. In a second step, we describe the luminance signal properties of the two monitor technologies which were used in the reviewed studies, namely cathode ray tube (CRT) and liquid crystal display (LCD) monitors. We show that SOF is inappropriate for brief stimulus presentations on both of these technologies. In extreme cases, SOF specifications and true stimulus durations are even unrelated. Furthermore, the luminance signals of the two monitor technologies are so fundamentally different that the duration of briefly presented stimuli cannot be calculated by a single method for both technologies. Statistics over stimulus durations given in the reviewed studies are discussed with respect to different duration calculation methods. Conclusions/Significance: The SOF method for duration specification which was clearly dominating in the reviewed studies leads to serious misspecifications particularly for brief stimulus presentations. 
We strongly discourage its use for brief stimulus presentations on CRT and LCD monitors. © 2010 Tobias Elze.
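The sum-of-frames misspecification is easiest to see with numbers: SOF counts whole frame periods, while a CRT phosphor actually emits light only in a brief pulse each frame. The pulse width below is an assumed illustrative value, not a measurement from the study.

```python
# Sum-of-frames (SOF) vs. actual light emission on a CRT.
def sof_duration(n_frames, refresh_hz):
    """SOF specification: number of frames times the frame period."""
    return n_frames / refresh_hz

sof = sof_duration(2, 100.0)      # a '2-frame' stimulus on a 100 Hz CRT: 20 ms
pulse_width = 0.001               # assumed effective phosphor pulse per frame
crt_light_time = 2 * pulse_width  # light is actually present for only ~2 ms
```

Under these assumptions the SOF figure overstates the time light is actually on screen by an order of magnitude, which is the kind of discrepancy the review warns about.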

Khoromskij B.N., Max Planck Institute for Mathematics in the Sciences
Chemometrics and Intelligent Laboratory Systems | Year: 2012

In the present paper, we survey recent results and outline future prospects of tensor-structured numerical methods in applications to multidimensional problems in scientific computing. The guiding principle of tensor methods is the approximation of multivariate functions and operators through a separation of variables. Along with the traditional canonical and Tucker models, we focus on the recent quantics-TT tensor approximation method, which makes it possible to represent d-dimensional tensors of mode size N in log-volume complexity, O(d log N). We outline how these methods can be applied in the framework of tensor-truncated iteration to the solution of high-dimensional elliptic/parabolic equations and parametric PDEs. Numerical examples demonstrate that tensor-structured methods have proved their value in various computational problems arising in quantum chemistry and in multi-dimensional/parametric FEM/BEM modeling; the tools work in practice and hold promise for challenging high-dimensional applications. © 2011 Elsevier B.V.
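The quantics idea can be demonstrated in a few lines: reshape a vector of length N = 2^d into a d-dimensional tensor with mode size 2, then read off the TT ranks via sequential SVDs. For samples of an exponential function the quantics-TT ranks are exactly one, which is what makes the O(d log N) representation possible. The helper below is a bare TT-SVD rank sketch, not an optimized implementation.

```python
import numpy as np

def tt_ranks(t, tol=1e-10):
    """TT ranks of a d-dimensional tensor via sequential SVDs of the
    successive unfoldings (TT-SVD sketch)."""
    d, shape = t.ndim, t.shape
    ranks, mat = [1], t.reshape(1, -1)
    for k in range(d - 1):
        mat = mat.reshape(ranks[-1] * shape[k], -1)
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        r = int(np.sum(s > tol * s[0]))   # numerical rank at tolerance tol
        ranks.append(r)
        mat = s[:r, None] * vt[:r]        # pass the remainder downstream
    return ranks[1:]

d = 10
x = np.arange(2 ** d) / 2 ** d
v = np.exp(x)
q = v.reshape((2,) * d)   # quantics reshape: one binary mode per bit
ranks = tt_ranks(q)
```

Since exp(sum of bit contributions) factorizes over the bits, every unfolding has rank one and `ranks` is all ones; a generic vector of the same length would instead show rapidly growing TT ranks.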
