News Article | May 9, 2017
Site: www.chromatographytechniques.com

Proteins are the tools by which organisms carry out instructions encoded in their genes. The thinking is that tracking the evolution of these molecules could show how bigger changes are wrought: changes from genes, and eventually species development and diversification. A new algorithm proposes a more complex understanding of the development of proteins, and of how their structure and function drive those larger changes of evolution, as described in the journal Nature Methods.

“We have developed a fast algorithm that gives us previously unavailable insights into protein evolution,” said Lars Jermiin, one of the authors, from the Australian National University.

ModelFinder takes a big-data look at the nearly limitless combinations of the 20 amino acid building blocks, but it uses a more complex and heterogeneous palette than previous computational models. Previous tools, for instance, modeled the evolution of single proteins with a statistical distribution whose shape is determined by a single variable. Those earlier models offer limited insight into what is actually happening biologically, said Minh Quang Bui, a co-author from the Center for Integrative Bioinformatics in Vienna. Much of what has been understood and published thus needs to be re-evaluated, he said.

“That assumption, however, does not reflect reality, and it might have led to a large proportion of biased phylogenetic results being published over the last two decades or so,” said Bui.

The algorithm will not serve just conceptual purposes for understanding evolution-scale changes, contend the authors, who include a team from the Max Perutz Laboratories, also in Vienna. It may even “help us come one step closer to unravelling the mysteries which are responsible for the great diversity on our planet.”

“The new tool is likely to have a huge impact on a wide variety of research areas, including on the evolution of pathogens and the dispersal of agricultural pests,” said Jermiin.


News Article | May 9, 2017
Site: phys.org

A 3-D structure of human haemoglobin, showing residues with high (red) or low (light yellow) evolutionary rates as determined by ModelFinder. Credit: Minh Quang Bui/University of Vienna

Understanding evolution is one of the cornerstones of biology—evolution is, in fact, the sole explanation for life's diversity on Earth. Based on the evolution of proteins, researchers may explain the emergence of new species and functions through genetic changes, how enzymes with novel functions might be engineered, or, for example, how humans are related to their closest relatives such as gorillas or bonobos.

One popular approach to the study of evolution is to compare genome data using computer-aided bioinformatics tools. Scientists using these approaches may compare specific proteins, which consist of combinations of 20 universal building blocks called amino acids. The bioinformatics tools used to study the evolution of single proteins have assumed that the speed at which different regions of proteins evolve can be modeled with a statistical distribution whose shape is determined by a single variable. "That assumption, however, does not reflect reality, and it might have led to a large proportion of biased phylogenetic results being published over the last two decades or so," explains Minh Quang Bui, from the Center for Integrative Bioinformatics (CIBIV) and co-author of the study.

A new algorithm allows insights into protein evolution

Arndt von Haeseler, group leader at the Max F. Perutz Laboratories (MFPL), and Lars Jermiin from the Australian National University have now found a revolutionary way of implementing different rates of evolution into bioinformatics models. It was well known among experts that the popular approach might not capture the complexities of protein evolution. However, the computational cost of using more realistic models was unacceptably high.
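The "single variable" referred to above is, in standard phylogenetic practice, the shape parameter of a Gamma distribution with mean 1 that describes how fast different sites evolve, whereas free-rate models give each rate category its own weight and rate. A minimal sketch of the difference (Python standard library only; the shape value and the free-rate categories are illustrative assumptions, not values from the study):

```python
import random

random.seed(0)

# Gamma-distributed site rates: one shape parameter alpha controls the
# whole distribution; scale = 1/alpha keeps the mean rate at 1.
alpha = 0.5
rates = [random.gammavariate(alpha, 1.0 / alpha) for _ in range(10_000)]
mean_rate = sum(rates) / len(rates)
print(f"mean gamma rate: {mean_rate:.3f}")  # close to 1 by construction

# A "free-rate" model drops the single-parameter assumption: each category
# gets its own weight and rate (values here are illustrative, not fitted).
free_rate = [(0.25, 0.05), (0.25, 0.40), (0.25, 1.05), (0.25, 2.50)]
weighted_mean = sum(w * r for w, r in free_rate)
print(f"free-rate weighted mean: {weighted_mean:.3f}")
```

The free-rate parameterization trades one parameter for several per category, which is exactly the extra computational cost the text describes.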
"We have now developed a fast algorithm that gives us previously unavailable insights into protein evolution—the new tool is likely to have a huge impact on a wide variety of research areas, including on the evolution of pathogens and the dispersal of agricultural pests," says Jermiin.

The new program, ModelFinder, will allow more accurate scientific estimates of evolutionary processes. This enhanced understanding of evolution will help us come one step closer to unraveling the mysteries that are responsible for the great diversity on our planet.

More information: Subha Kalyaanamoorthy et al, ModelFinder: fast model selection for accurate phylogenetic estimates, Nature Methods (2017). DOI: 10.1038/NMETH.4285
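Model selection of the kind the paper's title describes is typically scored with information criteria such as the Bayesian Information Criterion (BIC), which penalizes extra free parameters. A hedged sketch of that selection logic, using hypothetical log-likelihoods and parameter counts rather than values from the paper:

```python
import math

# BIC = -2 ln L + k ln n: extra free parameters k must be paid for
# by a sufficient gain in log-likelihood over n alignment sites.
def bic(log_lik, n_params, n_sites):
    return -2.0 * log_lik + n_params * math.log(n_sites)

n_sites = 300  # alignment length (hypothetical)

# Hypothetical fits: a richer model wins only if its likelihood gain
# outweighs the penalty for its additional parameters.
candidates = {
    "JTT":    (-5000.0, 1),   # single rate
    "JTT+G4": (-4900.0, 2),   # + one gamma shape parameter
    "JTT+R4": (-4890.0, 7),   # + free rates/weights for 4 categories
}

scores = {m: bic(lnl, k, n_sites) for m, (lnl, k) in candidates.items()}
best = min(scores, key=scores.get)
print(best, round(scores[best], 1))
```

With these made-up numbers the free-rate model improves the likelihood, but not enough to beat the gamma model's smaller penalty; on another dataset the ranking could flip, which is why selection is done per alignment.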


News Article | May 9, 2017
Site: www.eurekalert.org

Understanding evolution is one of the cornerstones of biology - evolution is, in fact, the sole explanation for life's diversity on our planet. Based on the evolution of proteins, researchers may explain the emergence of new species and functions through genetic changes, how enzymes with novel functions might be engineered, or, for example, how humans are related to their closest relatives such as gorillas or bonobos.

One popular approach to the study of evolution is to compare genome data using bioinformatics (computer-aided) tools. Scientists using these approaches may compare specific proteins, which consist of combinations of 20 universal building blocks, called amino acids. So far, the bioinformatics tools used to study the evolution of single proteins have assumed that the speed at which different regions of proteins evolve can be modeled with a statistical distribution whose shape is determined by a single variable. "That assumption, however, does not reflect reality, and it might have led to a large proportion of biased phylogenetic results being published over the last two decades or so," explains Minh Quang Bui, from the Center for Integrative Bioinformatics (CIBIV) and co-author of the study.

Arndt von Haeseler, group leader at the Max F. Perutz Laboratories (MFPL), and Lars Jermiin from the Australian National University have now found a revolutionary way of implementing different rates of evolution into bioinformatics models. It was well known among experts that the popular approach might not capture the complexities of protein evolution. However, the computational cost of using more realistic models was unacceptably high. "We have now developed a fast algorithm that gives us previously unavailable insights into protein evolution - the new tool is likely to have a huge impact on a wide variety of research areas, including on the evolution of pathogens and the dispersal of agricultural pests," adds Lars Jermiin.

The new program "ModelFinder" will allow more accurate scientific estimates of evolutionary processes. This enhanced understanding of evolution will help us come one step closer to unraveling the mysteries that are responsible for the great diversity on our planet.


Alders D.J.C.,VU University Amsterdam | Alders D.J.C.,Institute for Cardiovascular Research | Johan Groeneveld A.B.,VU University Amsterdam | Binsl T.W.,Center for Integrative Bioinformatics | And 2 more authors.
American Journal of Physiology - Heart and Circulatory Physiology | Year: 2011

Heterogeneity of regional coronary blood flow is caused in part by heterogeneity in O2 demand in the normal heart. We investigated whether myocardial O2 supply/demand mismatching is associated with the myocardial depression of sepsis. Regional blood flow (microspheres) and O2 uptake ([13C]acetate infusion and analysis of resultant NMR spectra) were measured in about nine contiguous tissue samples from the left ventricle (LV) in each heart. Endotoxemic pigs (n = 9) showed hypotension at unchanged cardiac output with a fall in LV stroke work and first derivative of LV pressure relative to controls (n = 4). Global coronary blood flow and O2 delivery were maintained. Lactate accumulated in arterial blood, but net lactate extraction across the coronary bed was unchanged during endotoxemia. When LV O2 uptake based on blood gas versus NMR data were compared, the correlation was 0.73 (P = 0.007). While stable over time in controls, regional blood flows were strongly redistributed during endotoxin shock, with overall flow heterogeneity unchanged. A stronger redistribution of blood flow with endotoxin was associated with a larger fall in LV function parameters. Moreover, the correlation of regional O2 delivery to uptake fell from r = 0.73 (P < 0.001) in control to r = 0.18 (P = 0.25, P = 0.009 vs. control) in endotoxemic hearts. The results suggest a redistribution of LV regional coronary blood flow during endotoxin shock in pigs, with regional O2 delivery mismatched to O2 demand. Mismatching may underlie, at least in part, the myocardial depression of sepsis. © 2011 the American Physiological Society.


Duc D.D.,Vietnam National University, Hanoi | Xuan H.H.,Vietnam National University, Hanoi | Dinh H.Q.,Center for Integrative Bioinformatics | Dinh H.Q.,Gregor Mendel Institute of Molecular Plant Biology
IFMBE Proceedings | Year: 2013

Gene regulatory activity prediction is an important step toward understanding which transcription factors (TFs) are important for the regulation of gene expression in cells. The development of recent high-throughput technologies and machine learning approaches allows us to achieve this task more efficiently. The Support Vector Machine (SVM) has been successfully applied to predicting gene regulatory activity in Drosophila embryonic development. Here, we introduce meta-heuristic approaches to select the best parameters for regulatory prediction from transcription factor binding profiles. Experimental results show that our approach outperforms existing methods and has potential for further analysis beyond prediction. © 2013 IFMBE.


Duc D.D.,Vietnam National University, Hanoi | Le T.-T.,Vietnam Maritime University | Vu T.-N.,University of Antwerp | Dinh H.Q.,Center for Integrative Bioinformatics | And 2 more authors.
2012 IEEE RIVF International Conference on Computing and Communication Technologies, Research, Innovation, and Vision for the Future, RIVF 2012 | Year: 2012

The gene regulatory activity prediction problem is one of the important steps toward understanding the significant factors in gene regulation. The advent of recent sequencing technologies allows us to tackle this task efficiently. Among machine learning methods, the Support Vector Machine (SVM) has been applied successfully, reaching more than 80% accuracy in predicting gene regulatory activity in Drosophila embryonic development. In this paper, we introduce a metaheuristic based on a genetic algorithm (GA) to select the best parameters for regulatory prediction from transcription factor binding profiles. Our approach improves accuracy by more than 10% compared to a traditional grid search, and the improvements are also significantly supported by biological experimental data. Thus, the proposed method boosts not only prediction performance but also the potential for biological insight. ©2012 IEEE.
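The paper's GA searches SVM hyperparameters against a cross-validated accuracy score; to stay self-contained, the sketch below runs the same select-crossover-mutate loop on a stand-in fitness surface over (log2 C, log2 gamma). The surface's peak location, the population size, and the mutation scale are all illustrative assumptions, not the paper's settings:

```python
import random

random.seed(42)

# Stand-in for cross-validated SVM accuracy over (log2 C, log2 gamma);
# peaks at C = 2^5, gamma = 2^-3 by construction (illustrative only).
def fitness(log_c, log_g):
    return 1.0 / (1.0 + (log_c - 5.0) ** 2 + (log_g + 3.0) ** 2)

def evolve(pop_size=30, generations=60, mut_sd=0.5):
    pop = [(random.uniform(-5, 15), random.uniform(-15, 3))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(*p), reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)  # crossover
            child = (child[0] + random.gauss(0, mut_sd),    # mutation
                     child[1] + random.gauss(0, mut_sd))
            children.append(child)
        pop = parents + children                # elitism: parents survive
    return max(pop, key=lambda p: fitness(*p))

best_c, best_g = evolve()
print(f"best log2(C)={best_c:.2f}, best log2(gamma)={best_g:.2f}")
```

Unlike a grid search, which spends its budget uniformly, the GA concentrates later evaluations near promising regions, which is the source of the efficiency gain the abstract reports.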


Vinga S.,Instituto de Engenharia de Sistemas e Computadores, Investigação e Desenvolvimento (INESC-ID) | Vinga S.,New University of Lisbon | Neves A.R.,ITQB UNL | Santos H.,ITQB UNL | And 2 more authors.
Philosophical Transactions of the Royal Society B: Biological Sciences | Year: 2010

The dynamic modelling of metabolic networks aims to describe the temporal evolution of metabolite concentrations in cells. This area has attracted increasing attention in recent years owing to the availability of high-throughput data and the general development of systems biology as a promising approach to study living organisms. Biochemical Systems Theory (BST) provides an accurate formalism to describe biological dynamic phenomena. However, knowledge about the molecular organization level, used in these models, is not enough to explain phenomena such as the driving forces of these metabolic networks. Dynamic Energy Budget (DEB) theory captures the quantitative aspects of the organization of metabolism at the organism level in a way that is non-species-specific. This imposes constraints on the sub-organismal organization that are not present in the bottom-up approach of systems biology. We use in vivo data of lactic acid bacteria under various conditions to compare some aspects of BST and DEB approaches. Due to the large number of parameters to be estimated in the BST model, we applied powerful parameter identification techniques. Both models fitted equally well, but the BST model employs more parameters. The DEB model uses similarities of processes under growth and no-growth conditions and under aerobic and anaerobic conditions, which reduce the number of parameters. This paper discusses some future directions for the integration of knowledge from these two rich and promising areas, working top-down and bottom-up simultaneously. This middle-out approach is expected to bring new ideas and insights to both areas in terms of describing how living organisms operate. © 2010 The Royal Society.
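In BST's S-system form, each metabolite pool changes as the difference of two power-law terms, dX/dt = α·∏Xⱼ^gⱼ − β·∏Xⱼ^hⱼ, and the many α, β, g, h values are the parameters the abstract says must be identified. A minimal single-pool sketch with explicit Euler integration (all parameter values are illustrative, not fitted to the lactic acid bacteria data):

```python
# S-system / power-law form from Biochemical Systems Theory (BST):
# dX/dt = alpha * X_in^g - beta * X^h  (one pool, constant influx here).
alpha, beta = 2.0, 1.0   # production and consumption rate constants
g, h = 0.0, 1.0          # kinetic orders (illustrative values)

def step(x, dt):
    influx = alpha * 1.0 ** g      # upstream concentration held at 1
    efflux = beta * x ** h
    return x + dt * (influx - efflux)

x, dt = 0.1, 0.01
for _ in range(5000):              # integrate to t = 50
    x = step(x, dt)

print(f"steady-state concentration: {x:.4f}")  # analytic: alpha/beta = 2
```

Each additional pool adds its own α, β and a kinetic order per interacting metabolite, which is why parameter counts grow quickly and the paper needed dedicated identification techniques.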


May A.,VU University Amsterdam | May A.,Center for Integrative Bioinformatics | Abeln S.,Center for Integrative Bioinformatics | Abeln S.,VU University Amsterdam | And 5 more authors.
Bioinformatics | Year: 2014

Motivation: 16S rDNA pyrosequencing is a powerful approach that requires extensive usage of computational methods for delineating microbial compositions. Previously, it was shown that outcomes of studies relying on this approach vastly depend on the choice of pre-processing and clustering algorithms used. However, obtaining insights into the effects and accuracy of these algorithms is challenging due to difficulties in generating samples of known composition with high enough diversity. Here, we use in silico microbial datasets to better understand how the experimental data are transformed into taxonomic clusters by computational methods. Results: We were able to qualitatively replicate the raw experimental pyrosequencing data after rigorously adjusting existing simulation software. This allowed us to simulate datasets of real-life complexity, which we used to assess the influence and performance of two widely used pre-processing methods along with 11 clustering algorithms. We show that the choice, order and mode of the pre-processing methods have a larger impact on the accuracy of the clustering pipeline than the clustering methods themselves. Without pre-processing, the difference between the performances of clustering methods is large. Depending on the clustering algorithm, the optimal analysis pipeline resulted in significant underestimations of the expected number of clusters (minimum: 3.4%; maximum: 13.6%), allowing us to make quantitative estimations of the bacterial complexity of real microbiome samples. © The Author 2014.
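Many clustering algorithms compared in such pipelines are variations on a greedy, threshold-based scheme: each read joins the first cluster whose seed is close enough, or founds a new one. A toy sketch of that idea using Hamming distance on equal-length reads (real tools use alignment-based identity; the threshold and sequences here are made up):

```python
# Greedy threshold clustering: assign each read to the first existing
# cluster whose seed is within `max_diff` mismatches, else start a new one.
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def greedy_cluster(reads, max_diff):
    clusters = []                      # list of (seed, members)
    for r in reads:
        for seed, members in clusters:
            if hamming(seed, r) <= max_diff:
                members.append(r)
                break
        else:
            clusters.append((r, [r]))  # r becomes a new cluster seed
    return clusters

# Toy reads: two near-identical pairs plus one distinct sequence.
reads = ["ACGTACGT", "ACGTACGA", "TTTTCCCC", "TTTTCCCG", "GGGGAAAA"]
clusters = greedy_cluster(reads, max_diff=1)
print(len(clusters))  # → 3
```

Because the result depends on read order and on what pre-processing did to the reads beforehand, this style of algorithm illustrates why the paper finds pre-processing choices to matter more than the clustering method itself.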
