Grabow C.,Max Planck Institute for Dynamics and Self-Organization | Grosskinsky S.,University of Warwick | Timme M.,Max Planck Institute for Dynamics and Self-Organization | Timme M.,Bernstein Center for Computational Neuroscience Gottingen | Timme M.,University of Gottingen
European Physical Journal B | Year: 2011

Synchrony is one of the most common dynamical states emerging on networks. The speed of convergence towards synchrony provides a fundamental collective time scale for synchronizing systems. Here we study the asymptotic synchronization times for directed networks with topologies ranging from completely ordered (grid-like) to completely disordered (random), including intermediate, partially disordered topologies. We extend the approach of master stability functions to quantify synchronization times. We find that the synchronization times strongly and systematically depend on the network topology. In particular, at fixed in-degree, stronger topological randomness induces faster synchronization, whereas at fixed path length, synchronization is slowest for intermediate randomness in the small-world regime. Randomly rewiring real-world neural, social and transport networks confirms this picture. © 2011 EDP Sciences, SIF, Springer-Verlag Berlin Heidelberg.
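
A minimal sketch of the setting described above, assuming a directed ring whose k in-links per node are rewired to random sources with probability q, and estimating the asymptotic synchronization time from the second-largest eigenvalue modulus of the row-normalized coupling matrix. The construction, parameters, and time-scale formula are illustrative stand-ins for the paper's master-stability-function treatment, not its actual method.

```python
import numpy as np

def rewired_ring(n, k, q, rng):
    """Directed ring: each node receives from its k nearest predecessors;
    every in-link is rewired to a random source with probability q."""
    A = np.zeros((n, n))
    for i in range(n):
        for d in range(1, k + 1):
            src = (i - d) % n
            if rng.random() < q:               # rewire this in-link
                src = rng.integers(n)
                while src == i or A[i, src]:   # avoid self-loops and doubled links
                    src = rng.integers(n)
            A[i, src] = 1.0
    return A

def sync_time(A):
    """Asymptotic synchronization time estimated as -1/log of the second-largest
    eigenvalue modulus of the row-normalized (stochastic) coupling matrix."""
    W = A / A.sum(axis=1, keepdims=True)
    lam2 = np.sort(np.abs(np.linalg.eigvals(W)))[-2]
    return -1.0 / np.log(lam2)

rng = np.random.default_rng(0)
for q in (0.0, 0.1, 0.5, 1.0):            # fixed in-degree, increasing randomness
    times = [sync_time(rewired_ring(200, 10, q, rng)) for _ in range(5)]
    print(f"q = {q:3.1f}  mean estimated sync time ~ {np.mean(times):7.2f}")
```

Under these assumptions, the estimated time scale shrinks as q grows at fixed in-degree, matching the trend reported above.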


Grabow C.,Max Planck Institute for Dynamics and Self-Organization | Grosskinsky S.,University of Warwick | Timme M.,Max Planck Institute for Dynamics and Self-Organization | Timme M.,Bernstein Center for Computational Neuroscience Gottingen
Physical Review Letters | Year: 2012

Collective dynamics on small-world networks emerge in a broad range of systems, with their spectra characterizing fundamental asymptotic features. Here we derive analytic mean-field predictions for the spectra of small-world models that systematically interpolate between regular and random topologies by varying their randomness. These theoretical predictions agree well with the actual spectra (obtained by numerical diagonalization) for undirected and directed networks and from fully regular to strongly random topologies. These results may provide analytical insights into empirically found features of dynamics on small-world networks from various research fields, including biology, physics, engineering, and social science. © 2012 American Physical Society.
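
For the numerical-diagonalization side of this comparison, a hedged sketch: build an undirected Watts-Strogatz-style ring that interpolates between regular and random via a rewiring probability q, and diagonalize the row-normalized graph Laplacian. The mean-field predictions themselves are not reproduced here, and the graph sizes and parameters are illustrative.

```python
import numpy as np

def small_world(n, k, q, rng):
    """Undirected ring lattice with k neighbours on each side; each edge is
    rewired to a random endpoint with probability q (Watts-Strogatz style)."""
    A = np.zeros((n, n))
    for i in range(n):
        for d in range(1, k + 1):
            j = (i + d) % n
            if rng.random() < q:
                j = rng.integers(n)
                while j == i or A[i, j]:
                    j = rng.integers(n)
            A[i, j] = A[j, i] = 1.0
    return A

def laplacian_spectrum(A):
    """Eigenvalues of the row-normalized Laplacian L = I - D^{-1} A (real,
    since L is similar to a symmetric matrix for undirected graphs)."""
    L = np.eye(len(A)) - A / A.sum(axis=1, keepdims=True)
    return np.sort(np.linalg.eigvals(L).real)

rng = np.random.default_rng(1)
for q in (0.0, 0.05, 0.3, 1.0):
    lam = laplacian_spectrum(small_world(500, 5, q, rng))
    print(f"q = {q:4.2f}  spectral gap = {lam[1]:.4f}  largest eigenvalue = {lam[-1]:.4f}")
```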


Nagler J.,Max Planck Institute for Dynamics and Self-Organization | Nagler J.,University of Gottingen | Levina A.,Max Planck Institute for Dynamics and Self-Organization | Levina A.,Bernstein Center for Computational Neuroscience Gottingen | And 3 more authors.
Nature Physics | Year: 2011

How a complex network is connected crucially impacts its dynamics and function. Percolation, the transition to extensive connectedness on gradual addition of links, was long believed to be continuous, but recent numerical evidence of 'explosive percolation' suggests that it might also be discontinuous if links compete for addition. Here we analyse the microscopic mechanisms underlying discontinuous percolation processes and reveal a strong impact of single-link additions. We show that in generic competitive percolation processes, including those showing explosive percolation, single links do not induce a discontinuous gap in the largest cluster size in the thermodynamic limit. Nevertheless, our results highlight that for large finite systems single links may still induce substantial gaps, because gap sizes scale weakly algebraically with system size. Several essentially macroscopic clusters coexist immediately before the transition, announcing discontinuous percolation. These results explain how single links may drastically change macroscopic connectivity in networks where links add competitively. © 2011 Macmillan Publishers Limited. All rights reserved.
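
One concrete competitive rule in the class discussed here is the product rule often used in the explosive-percolation literature: of two random candidate links, add the one whose endpoints' cluster sizes have the smaller product. The sketch below implements it with union-find and records the largest single-link jump of the biggest cluster; the system sizes and the choice of rule are illustrative, and the slow decrease of the jump with n mirrors the weak algebraic scaling mentioned above.

```python
import numpy as np

def find(parent, x):
    """Union-find root lookup with path compression."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def product_rule_jump(n, rng):
    """Add n links by the product rule and return the largest single-link
    increase of the biggest cluster, as a fraction of n."""
    parent, size = list(range(n)), [1] * n
    largest, max_jump = 1, 0
    for _ in range(n):
        (a, b), (c, d) = rng.integers(n, size=(2, 2))
        ra, rb, rc, rd = (find(parent, x) for x in (a, b, c, d))
        # keep the candidate link whose endpoint-cluster product is smaller
        if size[ra] * size[rb] <= size[rc] * size[rd]:
            r1, r2 = ra, rb
        else:
            r1, r2 = rc, rd
        if r1 != r2:                       # merge the two clusters
            size[r1] += size[r2]
            parent[r2] = r1
            if size[r1] > largest:
                max_jump = max(max_jump, size[r1] - largest)
                largest = size[r1]
    return max_jump / n

rng = np.random.default_rng(2)
for n in (10_000, 40_000, 160_000):
    print(f"n = {n:7d}  largest single-link jump ~ {product_rule_jump(n, rng):.3f}")
```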


Goldschmidt D.,University of Gottingen | Hesse F.,University of Gottingen | Worgotter F.,University of Gottingen | Worgotter F.,Bernstein Center for Computational Neuroscience Gottingen | And 2 more authors.
IEEE International Conference on Intelligent Robots and Systems | Year: 2012

Insects, e.g., cockroaches and stick insects, have found fascinating solutions to the problem of locomotion, especially climbing over a large variety of obstacles. Research in behavioral neurobiology has identified key behavioral patterns of these animals (i.e., body flexion, center-of-mass elevation, and local leg reflexes) necessary for climbing. Inspired by these findings, we develop a neural control mechanism for hexapod robots which generates basic walking behavior and, in particular, enables them to perform reactive climbing behavior effectively. The mechanism is composed of three main neural circuits: locomotion control, reactive backbone joint control, and local leg reflex control. It was developed and tested using a physical simulation environment and was then successfully transferred to a physical six-legged walking machine, called AMOS II. Experimental results show that the controller allows the robot to overcome obstacles of various heights (e.g., ∼75% of its leg length, higher than those achieved so far by comparable legged robots). The generated climbing behavior is also comparable to that observed in cockroaches. © 2012 IEEE.
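
The controller combines several neural circuits; as a hedged illustration of one typical building block for such legged locomotion controllers, here is a minimal two-neuron SO(2)-type oscillator of the kind often used as a central pattern generator for hexapod gaits. It is not the AMOS II controller itself, and the gain and frequency parameters below are illustrative assumptions.

```python
import numpy as np

def cpg_step(state, phi=0.1, alpha=1.2):
    """One update of a two-neuron SO(2)-type oscillator: a scaled rotation
    matrix passed through tanh yields a stable quasi-sinusoidal rhythm."""
    W = alpha * np.array([[np.cos(phi),  np.sin(phi)],
                          [-np.sin(phi), np.cos(phi)]])
    return np.tanh(W @ state)

# Illustrative run: the two outputs oscillate roughly a quarter period apart
# and could drive, e.g., alternating swing and stance phases of the legs.
state = np.array([0.2, 0.0])
outputs = []
for _ in range(200):
    state = cpg_step(state)
    outputs.append(state.copy())
outputs = np.array(outputs)
print("output ranges:", outputs.min(axis=0), outputs.max(axis=0))
```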


Kriener B.,Norwegian University of Life Sciences | Anand L.,Max Planck Institute for Dynamics and Self-Organization | Anand L.,Bernstein Center for Computational Neuroscience Gottingen | Timme M.,Max Planck Institute for Dynamics and Self-Organization | And 2 more authors.
New Journal of Physics | Year: 2012

Synchrony prevalently emerges from the interactions of coupled dynamical units. For simple systems such as networks of phase oscillators, the asymptotic synchronization process is assumed to be equivalent to a Markov process that models standard diffusion or random walks on the same network topology. In this paper, we analytically derive the conditions for such equivalence for networks of pulse-coupled oscillators, which serve as models for neurons and pacemaker cells interacting by exchanging electric pulses, or for fireflies interacting via light flashes. We find that the pulse synchronization process is less straightforward, but that certain classes of network topologies, for example, ensure equivalence. In particular, local dynamical operators are required to be doubly stochastic. These results provide a natural link between stochastic processes and deterministic synchronization on networks. Tools for analyzing diffusion (or, more generally, Markov processes) may now be transferred to pin down features of synchronization in networks of pulse-coupled units such as neural circuits. © IOP Publishing Ltd and Deutsche Physikalische Gesellschaft.
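
A small sketch of the diffusion side of this correspondence: given a coupling matrix, form the row-normalized random-walk operator, check whether it is doubly stochastic (rows and columns both sum to one), and read off a relaxation time from its second-largest eigenvalue modulus. This illustrates the general Markov-chain quantities involved, not the paper's derivation for pulse-coupled units; the ring example is an assumption chosen for simplicity.

```python
import numpy as np

def random_walk_operator(A):
    """Row-normalize a coupling/adjacency matrix into a stochastic matrix."""
    return A / A.sum(axis=1, keepdims=True)

def is_doubly_stochastic(P, tol=1e-9):
    """True if the rows AND the columns of P each sum to one."""
    return (np.allclose(P.sum(axis=1), 1.0, atol=tol)
            and np.allclose(P.sum(axis=0), 1.0, atol=tol))

def relaxation_time(P):
    """Markov relaxation time ~ -1/log of the second-largest eigenvalue modulus."""
    lam = np.sort(np.abs(np.linalg.eigvals(P)))
    return -1.0 / np.log(lam[-2])

# Example: a ring in which every unit couples to both neighbours with equal
# weight (odd length avoids the bipartite case, where a second eigenvalue of
# modulus one would make the relaxation time ill-defined).
n = 21
A = np.zeros((n, n))
for i in range(n):
    A[i, (i - 1) % n] = A[i, (i + 1) % n] = 1.0
P = random_walk_operator(A)
print("doubly stochastic:", is_doubly_stochastic(P))
print("relaxation time  :", relaxation_time(P))
```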


Ihrke M.,Bernstein Center for Computational Neuroscience Gottingen | Schrobsdorff H.,Bernstein Center for Computational Neuroscience Gottingen | Herrmann J.M.,Bernstein Center for Computational Neuroscience Gottingen | Herrmann J.M.,University of Edinburgh
International Journal of Neural Systems | Year: 2011

We introduce an approach to compensate for temporal distortions of repeated measurements in event-related potential (ERP) research. The algorithm uses a combination of methods from nonlinear time-series analysis and is based on the construction of pairwise registration functions from cross-recurrence plots of the phase-space representations of ERP signals. The globally optimal multiple-alignment path is approximated by hierarchical cluster analysis, i.e., by iteratively combining pairs of trials according to similarity. By including context information in the form of externally acquired time markers (e.g., reaction time) in a regularization scheme, the extracted warping functions can be guided toward paths implied by the experimental procedure. All parameters occurring in the algorithm can be optimized based on the properties of the data, and there is a broad regime of parameter configurations where the algorithm produces good results. Simulations on artificial data and the analysis of ERPs from a psychophysical study demonstrate the robustness and applicability of the algorithm. © 2011 World Scientific Publishing Company.
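
The registration functions in this approach are built from cross-recurrence plots of phase-space embeddings; as a simpler, hedged stand-in for the core idea of warping one trial onto another, the sketch below uses plain dynamic time warping on two synthetic ERP-like trials with different latencies. The signals and parameters are invented for illustration, and the method differs from the paper's algorithm.

```python
import numpy as np

def dtw_path(x, y):
    """Classic dynamic time warping: return the path aligning x onto y
    under a local squared-difference cost."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (x[i - 1] - y[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    path, i, j = [], n, m                      # backtrack the optimal path
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        i, j = (i - 1, j - 1) if step == 0 else (i - 1, j) if step == 1 else (i, j - 1)
    return path[::-1]

# Two noisy "trials" of the same component with different peak latencies.
t = np.linspace(0.0, 1.0, 200)
rng = np.random.default_rng(3)
trial_a = np.exp(-((t - 0.40) / 0.05) ** 2) + 0.05 * rng.standard_normal(t.size)
trial_b = np.exp(-((t - 0.55) / 0.05) ** 2) + 0.05 * rng.standard_normal(t.size)
path = dtw_path(trial_a, trial_b)
print("warping path length:", len(path), " endpoints:", path[0], path[-1])
```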


Liu J.K.,University of Gottingen | Liu J.K.,Bernstein Center for Computational Neuroscience Gottingen | Gollisch T.,University of Gottingen | Gollisch T.,Bernstein Center for Computational Neuroscience Gottingen
PLoS Computational Biology | Year: 2015

When visual contrast changes, retinal ganglion cells adapt by adjusting their sensitivity as well as their temporal filtering characteristics. The latter has classically been described by contrast-induced gain changes that depend on temporal frequency. Here, we explored a new perspective on contrast-induced changes in temporal filtering by using spike-triggered covariance analysis to extract multiple parallel temporal filters for individual ganglion cells. Based on multielectrode-array recordings from ganglion cells in the isolated salamander retina, we found that contrast adaptation of temporal filtering can largely be captured by contrast-invariant sets of filters with contrast-dependent weights. Moreover, differences among the ganglion cells in the filter sets and their contrast-dependent contributions allowed us to phenomenologically distinguish three types of filter changes. The first type is characterized by newly emerging features at higher contrast, which can be reproduced by computational models that contain response-triggered gain-control mechanisms. The second type follows from stronger adaptation in the Off pathway as compared to the On pathway in On-Off-type ganglion cells. Finally, we found that, in a subset of neurons, contrast-induced filter changes are governed by particularly strong spike-timing dynamics, notably by pronounced stimulus-dependent latency shifts that can be observed in these cells. Together, our results show that the contrast dependence of temporal filtering in retinal ganglion cells has a multifaceted phenomenology and that a multi-filter analysis can provide a useful basis for capturing the underlying signal-processing dynamics. © 2015 Liu, Gollisch.
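
A compact sketch of the spike-triggered covariance analysis named above, run on simulated rather than recorded data: a model cell with one known temporal filter and a symmetric (quadratic) nonlinearity responds to white noise, and the eigenvectors of the spike-triggered covariance relative to the prior recover the filter direction. The model cell, filter shape, and rate constant are illustrative assumptions, not the salamander recordings.

```python
import numpy as np

rng = np.random.default_rng(4)
T, dim = 100_000, 20                     # stimulus length and filter window

# White-noise stimulus and a normalized biphasic "ground-truth" temporal filter.
stim = rng.standard_normal(T)
t = np.arange(dim)
true_filter = np.sin(2 * np.pi * t / dim) * np.exp(-t / 8.0)
true_filter /= np.linalg.norm(true_filter)

# Stimulus history windows and spiking with a symmetric quadratic nonlinearity.
windows = np.lib.stride_tricks.sliding_window_view(stim, dim)   # (T - dim + 1, dim)
drive = windows @ true_filter[::-1]                             # most recent sample last
spikes = rng.random(drive.size) < 0.05 * drive ** 2

# Spike-triggered average and covariance, compared against the prior covariance.
sta = windows[spikes].mean(axis=0)
stc = np.cov(windows[spikes], rowvar=False)
prior = np.cov(windows, rowvar=False)
eigval, eigvec = np.linalg.eigh(stc - prior)

# For a symmetric nonlinearity the STA is nearly flat, so the informative
# direction shows up as the eigenvector with the largest |eigenvalue| of the STC.
lead = eigvec[:, np.argmax(np.abs(eigval))]
print("|STA|                   :", np.linalg.norm(sta))
print("overlap with true filter:", abs(lead @ true_filter[::-1]))
```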


Timme M.,Max Planck Institute for Dynamics and Self-Organization | Timme M.,Bernstein Center for Computational Neuroscience Gottingen | Timme M.,University of Gottingen | Casadiego J.,Max Planck Institute for Dynamics and Self-Organization
Journal of Physics A: Mathematical and Theoretical | Year: 2014

What can we learn from the collective dynamics of a complex network about its interaction topology? Taking the perspective of nonlinear dynamics, we briefly review recent progress on how to infer structural connectivity (direct interactions) from accessing the dynamics of the units. Potential applications range from interaction networks in physics to chemical and metabolic reactions, protein and gene regulatory networks, and neural circuits in biology, as well as electric power grids and wireless sensor networks in engineering. Moreover, we briefly mention some standard ways of inferring effective or functional connectivity. © 2014 IOP Publishing Ltd.
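
As a hedged illustration of inferring direct interactions from observed dynamics, the sketch below handles only the simplest case the review touches on: noise-free, fully observed linear dynamics, where the coupling matrix follows from regressing finite-difference derivatives on the states. The system size, sparsity, and simulation scheme are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
n, T, dt = 10, 500, 0.01

# Hidden ground-truth coupling matrix: sparse off-diagonal, damped diagonal.
J = (rng.random((n, n)) < 0.2) * rng.standard_normal((n, n))
np.fill_diagonal(J, -2.0)

# Simulate dx/dt = J x with explicit Euler steps from a random initial state.
X = np.empty((T, n))
X[0] = rng.standard_normal(n)
for k in range(T - 1):
    X[k + 1] = X[k] + dt * (J @ X[k])

# Regress finite-difference derivatives on the states to recover the couplings.
dX = np.diff(X, axis=0) / dt
J_hat = np.linalg.lstsq(X[:-1], dX, rcond=None)[0].T
print("max reconstruction error:", np.abs(J_hat - J).max())
```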


Kriener B.,Norwegian University of Life Sciences | Kriener B.,Max Planck Institute for Dynamics and Self-Organization | Kriener B.,Bernstein Center for Computational Neuroscience Gottingen
Chaos | Year: 2012

Under which conditions can a network of pulse-coupled oscillators sustain stable collective activity states? Previously, it was shown that stability of the simplest pattern conceivable, i.e., global synchrony, in networks of symmetrically pulse-coupled oscillators can be decided in a rigorous mathematical fashion if interactions either all advance or all retard oscillation phases ("mono-interaction network"). Yet many real-world networks, for example neuronal circuits, are asymmetric and, moreover, crucially feature both types of interactions. Here, we study complex networks of excitatory (phase-advancing) and inhibitory (phase-retarding) leaky integrate-and-fire (LIF) oscillators. We show that for small coupling strength, previous results for mono-interaction networks also apply here: pulse-time perturbations eventually decay if they are smaller than a transmission delay and if all eigenvalues of the linear stability operator have absolute value smaller than or equal to one. In this case, the level of inhibition must typically be significantly stronger than that of excitation to ensure local stability of synchrony. For stronger coupling, however, network synchrony eventually becomes unstable to any finite perturbation, even if inhibition is strong and all eigenvalues of the stability operator are at most unity. This new type of instability occurs when any oscillator, in spite of receiving inhibitory input from the network on average, can by chance receive sufficient excitatory input to fire a pulse before all other pulses in the system are delivered, thus breaking the near-synchronous perturbation pattern. © 2012 American Institute of Physics.
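
The small-coupling criterion above is a spectral condition. As a toy illustration of checking it (not the paper's stability operator, which follows from the LIF dynamics, delays, and coupling strengths), the sketch below builds a made-up row-stochastic perturbation map, in which each unit's pulse-time perturbation becomes a convex combination of its own and its presynaptic units' perturbations, and verifies that all eigenvalues have modulus at most one.

```python
import numpy as np

def spectral_radius(M):
    """Largest eigenvalue modulus of a linear perturbation map."""
    return np.abs(np.linalg.eigvals(M)).max()

def satisfies_criterion(M, tol=1e-12):
    """Check the local-stability criterion |lambda| <= 1 for all eigenvalues."""
    return spectral_radius(M) <= 1.0 + tol

# Hypothetical perturbation map: convex combination of a unit's own perturbation
# and a weighted average of its inputs' perturbations.  Such non-negative,
# row-stochastic maps always satisfy the criterion; the eigenvalue exactly at
# one corresponds to shifting all pulse times uniformly.
rng = np.random.default_rng(6)
n, self_weight = 50, 0.5
A = rng.random((n, n))
np.fill_diagonal(A, 0.0)
M = self_weight * np.eye(n) + (1.0 - self_weight) * A / A.sum(axis=1, keepdims=True)
print("spectral radius        :", spectral_radius(M))
print("criterion |lambda| <= 1:", satisfies_criterion(M))
```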


Effenberger F.,Max Planck Institute for Mathematics in the Sciences | Jost J.,Max Planck Institute for Mathematics in the Sciences | Levina A.,Max Planck Institute for Mathematics in the Sciences | Levina A.,Bernstein Center for Computational Neuroscience Gottingen
PLoS Computational Biology | Year: 2015

Structural inhomogeneities in synaptic efficacies have a strong impact on the population response dynamics of cortical networks and are believed to play an important role in their functioning. However, little is known about how such inhomogeneities could evolve by means of synaptic plasticity. Here we present an adaptive model of a balanced neuronal network that combines two different types of plasticity, spike-timing-dependent plasticity (STDP) and synaptic scaling. The plasticity rules yield long-tailed distributions of both synaptic weights and firing rates. Simultaneously, a highly connected subnetwork of driver neurons with strong synapses emerges. Coincident spiking activity of several driver cells can evoke population bursts, and driver cells have dynamical properties similar to those of leader neurons found experimentally. Our model allows us to observe the delicate interplay between structural and dynamical properties of the emergent inhomogeneities. It is simple, robust to parameter changes, and able to explain a multitude of different experimental findings in one basic network. © 2015 Effenberger et al.
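
A hedged sketch of the two plasticity ingredients named above, each in isolation: a pair-based STDP update computed from pre- and postsynaptic spike times, and a multiplicative synaptic-scaling step that renormalizes each neuron's total incoming weight. The time constants, amplitudes, and scaling target below are illustrative assumptions, not the paper's parameters or its full network model.

```python
import numpy as np

def stdp_update(w, pre_times, post_times, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP: potentiate when pre precedes post, depress otherwise."""
    dw = 0.0
    for t_pre in pre_times:
        for t_post in post_times:
            dt = t_post - t_pre
            if dt > 0:
                dw += a_plus * np.exp(-dt / tau)
            elif dt < 0:
                dw -= a_minus * np.exp(dt / tau)
    return max(w + dw, 0.0)                  # keep the weight non-negative

def synaptic_scaling(W, target=1.0):
    """Multiplicatively rescale each row (a neuron's incoming weights) to a
    fixed total, counteracting runaway potentiation."""
    totals = W.sum(axis=1, keepdims=True)
    totals[totals == 0.0] = 1.0              # leave isolated neurons untouched
    return W * (target / totals)

# Toy usage: one synapse repeatedly sees pre-before-post pairings and grows,
# then scaling redistributes each row back to the target total.
rng = np.random.default_rng(7)
W = rng.random((5, 5)) * 0.1
np.fill_diagonal(W, 0.0)
W[0, 1] = stdp_update(W[0, 1], pre_times=[10.0, 60.0], post_times=[15.0, 66.0])
W = synaptic_scaling(W)
print("row sums after scaling:", W.sum(axis=1))
```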
