Fernandez-Recio J., Barcelona Supercomputing Center
Wiley Interdisciplinary Reviews: Computational Molecular Science | Year: 2011

Protein-protein interactions are involved in the majority of cell processes, and their detailed structural and functional characterization has become one of the most important challenges in current structural biology. The first ideal goal is to determine the structure of the specific complex formed upon interaction of two or more given proteins. However, since this is not always technically possible, the practical approach is often to locate and characterize the protein residues that are involved in the interaction. This can be achieved by experimental means at the expense of time and cost, so a growing number of computer tools are becoming available to complement experimental efforts. Reported methods for interface prediction are based on sequence information or on structural data, and make use of a variety of evolutionary, geometrical, and physicochemical parameters. As we show here, computer predictions can achieve a high degree of success, and they are of practical use to guide mutational experiments as well as to explain functional and mechanistic aspects of the interaction. Interestingly, it has been found that typically only a few of the interacting residues contribute significantly to the binding energy. The identification of such hot-spot residues is important for understanding basic aspects of protein association. In addition, these residues have received recent attention as possible targets for drug design, so several computer methods have been developed to predict them. We review here existing computer approaches for the prediction of protein binding sites and hot-spot residues, with a discussion of their applicability and limitations. © 2011 John Wiley & Sons, Ltd.
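The sequence-based flavour of interface prediction mentioned in the abstract can be sketched as a per-residue propensity score smoothed over a sliding window. The propensity table, window size, and threshold below are illustrative placeholders (not values from any published predictor); they only encode the general trend that aromatic and hydrophobic residues are enriched at interfaces.

```python
# Toy sketch of a sequence-based interface predictor: score each residue
# by an interface propensity and smooth over a sliding window.
# The propensity values are illustrative placeholders, not fitted data.
PROPENSITY = {
    "W": 1.0, "Y": 0.8, "F": 0.7, "M": 0.6, "I": 0.5, "L": 0.5,
    "R": 0.4, "H": 0.4, "C": 0.3, "V": 0.3, "N": 0.2, "Q": 0.2,
    "T": 0.1, "S": 0.1, "A": 0.0, "G": 0.0, "P": -0.1,
    "D": -0.2, "E": -0.3, "K": -0.3,
}

def predict_interface(seq, window=5, threshold=0.35):
    """Return indices of residues whose windowed propensity exceeds threshold."""
    half = window // 2
    scores = []
    for i in range(len(seq)):
        lo, hi = max(0, i - half), min(len(seq), i + half + 1)
        vals = [PROPENSITY.get(aa, 0.0) for aa in seq[lo:hi]]
        scores.append(sum(vals) / len(vals))
    return [i for i, s in enumerate(scores) if s > threshold]
```

Real predictors replace the hand-made table with evolutionary conservation profiles and, for structure-based methods, geometrical and physicochemical surface descriptors.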

Folch A., Barcelona Supercomputing Center
Journal of Volcanology and Geothermal Research | Year: 2012

Tephra transport models aim to predict the atmospheric dispersion and sedimentation of tephra depending on meteorology, particle properties, and eruption characteristics, defined by eruption column height, mass eruption rate, and the vertical distribution of mass. Models are used for different purposes, from operational forecasting of volcanic ash clouds to hazard assessment of tephra dispersion and fallout. The size of the erupted particles, a key parameter controlling the dynamics of particle sedimentation in the atmosphere, varies within a wide range. The largest, centimetric to millimetric, particles fall out at proximal to medial distances from the volcano and sediment by gravitational settling. At the other extreme, the smallest, micrometric to sub-micrometric, particles can be transported at continental or even global scales and are affected by other deposition and aggregation mechanisms. Different scientific communities have traditionally modeled the dispersion of these two end members. Volcanologists developed families of models suitable for lapilli and coarse ash, aimed at computing fallout deposits and at hazard assessment. In contrast, meteorologists and atmospheric scientists have traditionally used other atmospheric transport models, dealing with finer particles, for tracking the motion of volcanic ash clouds and, eventually, for computing airborne ash concentrations. During the last decade, the increasing demand for model accuracy and forecast reliability has pushed on two fronts. First, the original gap between these families of models has been filled by the emergence of multi-scale and multi-purpose models. Second, new modeling strategies, including, for example, ensemble and probabilistic forecasting and model data assimilation, are being investigated for future operational implementation.
This paper reviews the evolution of tephra transport and dispersal models during the last two decades, presents the status and limitations of current modeling strategies, and discusses some emerging perspectives expected to be implemented at the operational level during the next few years. Improvements in both real-time forecasting and long-term hazard assessment are necessary for loss prevention programs at the local, regional, national, and international levels. © 2012 Elsevier B.V.
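The ensemble/probabilistic forecasting strategy mentioned above can be illustrated with a minimal sketch: run the dispersal model several times under perturbed inputs, then report, per grid cell, the fraction of ensemble members in which ash concentration exceeds a threshold. The function name, grid, and threshold below are assumptions for illustration only.

```python
import numpy as np

def exceedance_probability(members, threshold):
    """
    Probabilistic ash forecast from an ensemble of deterministic runs.
    members: array of shape (n_members, ny, nx) of ash concentration.
    Returns, per grid cell, the fraction of members above the threshold.
    """
    members = np.asarray(members, dtype=float)
    return (members > threshold).mean(axis=0)

# Example: 4 ensemble members on a 2x2 grid, illustrative threshold of 2 mg/m3
ens = np.array([
    [[0.5, 3.0], [1.0, 4.0]],
    [[0.1, 2.5], [2.5, 4.5]],
    [[0.0, 1.5], [0.5, 5.0]],
    [[0.2, 3.5], [0.1, 6.0]],
])
prob = exceedance_probability(ens, 2.0)
```

In an operational setting the ensemble members would come from perturbed meteorological fields and eruption source parameters (column height, mass eruption rate), and the resulting probability maps feed directly into hazard assessment.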

Moal I.H., Barcelona Supercomputing Center | Bates P.A., Cancer Research UK Research Institute
PLoS Computational Biology | Year: 2012

The prediction of protein-protein kinetic rate constants provides a fundamental test of our understanding of molecular recognition, and will play an important role in the modeling of complex biological systems. In this paper, a feature selection and regression algorithm is applied to mine a large set of molecular descriptors and construct simple models for association and dissociation rate constants using empirical data. Using separate test data for validation, the predicted rate constants can be combined to calculate binding affinity with accuracy matching that of state-of-the-art empirical free energy functions. The models show that the rate of association is linearly related to the proportion of unbound proteins in the bound conformational ensemble relative to the unbound conformational ensemble, indicating that the binding partners must adopt a geometry close to that of the bound form prior to binding. Mirroring the conformational selection and population shift mechanism of protein binding, the models provide a strong separate line of evidence for the preponderance of this mechanism in protein-protein binding, complementing structural and theoretical studies. © 2012 Moal, Bates. Source
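Combining predicted rate constants into a binding affinity, as the abstract describes, follows the standard kinetics relations Kd = koff/kon and ΔG = RT ln(Kd/1 M). The helper below is an illustrative sketch of that arithmetic, not the authors' code.

```python
import math

R = 1.987e-3  # gas constant, kcal/(mol*K)

def binding_affinity(k_on, k_off, T=298.15):
    """
    Combine rate constants into a dissociation constant and binding
    free energy: Kd = k_off / k_on, dG = R*T*ln(Kd / 1 M).
    k_on in 1/(M*s), k_off in 1/s; returns Kd in M and dG in kcal/mol.
    """
    k_d = k_off / k_on
    dG = R * T * math.log(k_d)  # standard state of 1 M
    return k_d, dG

# Example: a tight complex with k_on = 1e6 1/(M*s) and k_off = 1e-3 1/s
kd, dg = binding_affinity(1e6, 1e-3)  # Kd = 1e-9 M, dG ~ -12.3 kcal/mol
```

This is why separate kon and koff models that are each only moderately accurate can still yield affinities competitive with empirical free energy functions: errors that are correlated between the two rate predictions cancel in the ratio.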

Perez A., Barcelona Institute for Research in Biomedicine | Orozco M., Barcelona Supercomputing Center
Angewandte Chemie - International Edition | Year: 2010

(Figure Presented) An unfolding story: simulations exceeding the microsecond timescale and the introduction of a co-solvent were used to characterize the pathways of the unfolding of a short DNA duplex. Two main pathways are detected, involving different substates. Once unfolded, the DNA adopts compact structures different from a random-coil distribution (see picture). © 2010 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

Madadkar-Sobhani A., Barcelona Supercomputing Center
Nucleic acids research | Year: 2013

PELE, Protein Energy Landscape Exploration, our novel technology based on protein structure prediction algorithms and Monte Carlo sampling, is capable of modelling all-atom protein-ligand dynamical interactions in an efficient and fast manner, at a computational cost two orders of magnitude lower than that of traditional molecular dynamics techniques. PELE's heuristic approach generates trial moves based on protein and ligand perturbations followed by side-chain sampling and global/local minimization. The collection of accepted steps forms a stochastic trajectory. Furthermore, several processors may be run in parallel towards a collective goal or to define several independent trajectories; the whole procedure has been parallelized using the Message Passing Interface. Here, we introduce the PELE web server, designed to make the whole process of running simulations easier and more practical by minimizing input file demands, providing a user-friendly interface, and producing abstract outputs (e.g. interactive graphs and tables). The web server has been implemented in C++ using Wt (http://www.webtoolkit.eu) and MySQL (http://www.mysql.com). The PELE web server, accessible at http://pele.bsc.es, is free and open to all users with no login requirement.
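The perturbation-minimization-acceptance loop the abstract describes is, at its core, a Metropolis Monte Carlo scheme. The sketch below is a generic illustration of that scheme, not PELE's implementation: a scalar coordinate stands in for the relaxed all-atom structure, and the `propose` callback stands in for the perturbation plus side-chain sampling plus minimization step.

```python
import math, random

def metropolis_explore(x0, energy, propose, n_steps=1000, kT=0.6, seed=0):
    """
    Generic Metropolis Monte Carlo loop of the kind PELE builds on:
    propose a perturbed-and-relaxed state, then accept or reject it
    with the Metropolis criterion. Returns the trajectory of accepted
    states (the "collection of accepted steps" in the abstract).
    """
    rng = random.Random(seed)
    x, e = x0, energy(x0)
    trajectory = [x]
    for _ in range(n_steps):
        x_new = propose(x, rng)          # perturbation + relaxation step
        e_new = energy(x_new)
        if e_new <= e or rng.random() < math.exp(-(e_new - e) / kT):
            x, e = x_new, e_new          # accepted move extends the trajectory
            trajectory.append(x)
    return trajectory

# Toy usage: a walker started away from the minimum of a harmonic well
traj = metropolis_explore(
    5.0,
    energy=lambda x: x * x,
    propose=lambda x, rng: x + rng.uniform(-0.5, 0.5),
)
```

Running several such loops with different seeds gives the independent trajectories mentioned in the abstract; PELE's MPI parallelization additionally lets processors share information when working towards a collective goal, such as ligand migration to a binding site.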
