News Article | April 20, 2017
Site: www.eurekalert.org

Findings could help guide development of treatments that target many mutated proteins at once

Scientists have identified thousands of previously ignored genetic mutations that, although rare, likely contribute to cancer growth. The findings, which could help pave the way to new treatments, are published in PLOS Computational Biology.

Cancer arises when genetic mutations in a cell cause abnormal growth that leads to a tumor. Some cancer drugs exploit this to attack tumor cells, targeting proteins that are altered by mutations in the genes that encode them. However, only a fraction of the mutations that contribute significantly to cancer have been identified.

Thomas Peterson, at the University of Maryland, and colleagues developed a new statistical approach that uses genetic data from cancer patients to find cancer-causing mutations. Unlike previous studies that focused on mutations in individual genes, the new approach addresses similar mutations shared by families of related proteins. Specifically, the method focuses on mutations in sub-components of proteins known as protein domains. Even though they are encoded by different genes, different proteins can share common protein domains. The new strategy draws on existing knowledge of protein domain structure and function to pinpoint locations within domains where mutations are more likely to be found in tumors.

Using this approach, the researchers identified thousands of rare tumor mutations that occur at the same domain location as mutations found in other proteins in other tumors, suggesting that they are likely to be involved in cancer. "Maybe only two patients have a mutation in a particular protein," says senior author Maricel Kann, "but when you realize it is in exactly the same position within the domain as mutations in other proteins in cancer patients, you realize it's important to investigate those two mutations."

The researchers have coined the term "oncodomain" for protein domains that are more likely to contain cancer-causing mutations. Further study of oncodomains could help inform drug development. "Because the domains are the same across so many proteins," Kann says, "it is possible that a single treatment could tackle cancers caused by a broad spectrum of mutated proteins."

In your coverage please use this URL to provide access to the freely available article in PLOS Computational Biology: http://journals.

Funding: This work was funded by NSF (award #1446406, PI: MGK) and NIH (award #1K22CA143148, PI: MGK; award #R01LM009722, Co-PI: MGK). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Competing Interests: The authors have declared that no competing interests exist.
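The pooling idea at the heart of the method can be sketched in a few lines. The code below is a hypothetical simplification, not the authors' pipeline: it assumes mutations have already been mapped from gene coordinates to positions within shared protein domains, pools them across genes by domain position, and flags positions hit more often than a uniform-background binomial model would predict. The domain IDs, lengths, and mutation records are invented for illustration.

```python
from collections import defaultdict
from scipy.stats import binomtest

# Hypothetical input: rare mutations already mapped from gene coordinates
# to positions within a shared protein domain (domain, position, gene).
mutations = [
    ("PF00069", 30, "BRAF"), ("PF00069", 30, "EGFR"), ("PF00069", 30, "ERBB2"),
    ("PF00069", 112, "KDR"),
    ("PF00046", 50, "HOXA9"), ("PF00046", 50, "HOXB13"),
]
domain_lengths = {"PF00069": 250, "PF00046": 57}  # aligned domain lengths

# Pool mutations across genes by their position within the domain.
by_position = defaultdict(list)
per_domain = defaultdict(int)
for domain, pos, gene in mutations:
    by_position[(domain, pos)].append(gene)
    per_domain[domain] += 1  # total mutations per domain = binomial trials

# A position is interesting when several genes are hit at the same spot
# more often than expected if mutations fell uniformly along the domain.
for (domain, pos), genes in sorted(by_position.items()):
    n, k = per_domain[domain], len(genes)
    p_uniform = 1.0 / domain_lengths[domain]
    test = binomtest(k, n, p_uniform, alternative="greater")
    print(f"{domain} pos {pos}: {k} hits across {sorted(set(genes))}, "
          f"p = {test.pvalue:.2e}")
```

In this toy run, position 30 of the kinase-like domain, hit in three different genes, would score as far more enriched than the singleton hit at position 112, which mirrors the paper's logic of treating recurrence across proteins as evidence of relevance.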


News Article | April 27, 2017
Site: www.eurekalert.org

Researchers have developed a personalized algorithm that predicts the impact of particular foods on an individual's blood sugar levels, according to a new study published in PLOS Computational Biology. The algorithm has been integrated into an app, Glucoracle, which will allow individuals with type 2 diabetes to keep a tighter rein on their glucose levels -- the key to preventing or controlling the major complications of a disease that affects 8 percent of Americans.

Medications are often prescribed to help patients with type 2 diabetes manage their blood sugar levels, but exercise and diet also play an important role. "While we know the general effect of different types of food on blood glucose, the detailed effects can vary widely from one person to another and for the same person over time," said lead author David Albers, PhD, associate research scientist in Biomedical Informatics at Columbia University Medical Center (CUMC). "Even with expert guidance, it's difficult for people to understand the true impact of their dietary choices, particularly on a meal-to-meal basis. Our algorithm, integrated into an easy-to-use app, predicts the consequences of eating a specific meal before the food is eaten, allowing individuals to make better nutritional choices during mealtime."

The algorithm uses a technique called data assimilation, in which a mathematical model of a person's response to glucose is regularly updated with observational data -- blood sugar measurements and nutritional information -- to improve the model's predictions, explained co-study leader George Hripcsak, MD, MS, the Vivian Beaumont Allen Professor and chair of Biomedical Informatics at CUMC. Data assimilation is used in a variety of applications, notably weather forecasting. "The data assimilator is continually updated with the user's food intake and blood glucose measurements, personalizing the model for that individual," said co-study leader Lena Mamykina, PhD, assistant professor of biomedical informatics at CUMC, whose team designed and developed the Glucoracle app.

Glucoracle allows the user to upload fingerstick blood measurements and a photo of a particular meal, along with a rough estimate of the meal's nutritional content. This estimate provides the user with an immediate prediction of post-meal blood sugar levels, and the estimate and forecast are then adjusted for accuracy. The app begins generating predictions after it has been used for a week, once the data assimilator has learned how the user responds to different foods.

The researchers initially tested the data assimilator on five individuals using the app, including three with type 2 diabetes and two without the disease. The app's predictions were compared with actual post-meal blood glucose measurements and with the predictions of certified diabetes educators. For the two non-diabetic individuals, the app's predictions were comparable to the actual glucose measurements. For the three subjects with diabetes, the app's forecasts were slightly less accurate, possibly due to fluctuations in the physiology of patients with diabetes or parameter error, but were still comparable to the predictions of the diabetes educators.

"There's certainly room for improvement," said Dr. Albers. "This evaluation was designed to prove that it's possible, using routine self-monitoring data, to generate real-time glucose forecasts that people could use to make better nutritional choices. We have been able to make an aspect of diabetes self-management that has been nearly impossible for people with type 2 diabetes more manageable. Now our task is to make the data assimilation tool powering the app even better."

Encouraged by these early results, the research team is preparing for a larger clinical trial. The researchers estimate that the app could be ready for widespread use within two years.

This release is based on text provided by the authors. In your coverage please use this URL to provide access to the freely available article in PLOS Computational Biology: http://journals.

Funding: GH, ML, and DJA are supported by a grant from the National Library of Medicine (LM006910). LM, DJA and ML are supported by a grant from the Robert Wood Johnson Foundation (RWJF 73070). LM is supported by a grant from the National Institute of Diabetes and Digestive and Kidney Diseases (R01DK090372). Competing Interests: The authors have declared that no competing interests exist.
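To make the data assimilation loop concrete: the model is run forward to forecast the next glucose reading, and each actual fingerstick is then used to correct the model's personal parameters. The sketch below is a deliberately minimal, hypothetical illustration of that loop, not the study's model: a one-equation glucose response with invented coefficients, corrected by a plain gradient step where production assimilators typically use Kalman- or ensemble-style updates.

```python
import numpy as np

# Toy glucose model: next glucose = current + carb effect - decay toward baseline.
# theta = (carb_sensitivity, decay_rate); both are personalized online.
def predict(glucose, carbs, theta, baseline=100.0):
    carb_sens, decay = theta
    return glucose + carb_sens * carbs - decay * (glucose - baseline)

# Minimal "assimilation" step: after each observed fingerstick, nudge the
# personal parameters in the direction that shrinks the prediction error.
def assimilate(theta, glucose, carbs, observed, lr=1e-4):
    pred = predict(glucose, carbs, theta)
    err = pred - observed
    grad = np.array([carbs, -(glucose - 100.0)])  # d(prediction)/d(theta)
    return theta - lr * err * grad

theta = np.array([0.5, 0.1])  # generic starting parameters
# Invented meals: (pre-meal glucose, carbs in grams, observed post-meal glucose).
meals = [(110, 60, 155), (140, 30, 150), (120, 75, 170)]
for pre, carbs, post in meals:
    print(f"predicted {predict(pre, carbs, theta):6.1f}, observed {post}")
    theta = assimilate(theta, pre, carbs, post)
print("personalized parameters:", theta.round(3))
```

Each pass through the loop plays the role of one meal plus one fingerstick: the forecast improves as the two coefficients drift toward values that fit this particular user, which is the personalization the article describes.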


News Article | April 17, 2017
Site: www.scientificamerican.com

The brain processes sights, sounds and other sensory information—and even makes decisions—based on a calculation of probabilities. At least, that’s what a number of leading theories of mental processing tell us: The body’s master controller builds an internal model from past experiences, and then predicts how best to behave. Although studies have shown humans and other animals make varied behavioral choices even when performing the same task in an identical environment, these hypotheses often attribute such fluctuations to “noise”—to an error in the system.

But not everyone agrees this provides the complete picture. After all, sometimes it really does pay off for randomness to enter the equation. A prey animal has a higher chance of escaping predators if its behavior cannot be anticipated easily, something made possible by introducing greater variability into its decision-making. Or in less stable conditions, when prior experience can no longer provide an accurate gauge for how to act, this kind of complex behavior allows the animal to explore more diverse options, improving its odds of finding the optimal solution. One 2014 study found rats resorted to random behavior when they realized nonrandom behavior was insufficient for outsmarting a computer algorithm. Perhaps, then, this variance cannot simply be chalked up to mere noise. Instead, it plays an essential role in how the brain functions.

Now, in a study published April 12 in PLoS Computational Biology, a group of researchers in the Algorithmic Nature Group at LABORES Scientific Research Lab for the Natural and Digital Sciences in Paris hope to illuminate how this complexity unfolds in humans. “When the rats tried to behave randomly [in 2014],” says Hector Zenil, a computer scientist who is one of the study’s authors, “researchers saw that they were computing how to behave randomly. This computation is what we wanted to capture in our study.”

Zenil’s team found that, on average, people’s ability to behave randomly peaks at age 25, then slowly declines until age 60, when it starts to decrease much more rapidly.

To test this, the researchers had more than 3,400 participants, aged four to 91, complete a series of tasks—“a sort of reversed Turing test,” Zenil says, determining how well a human can outcompete a computer when it comes to producing and recognizing random patterns. The subjects had to create sequences of coin tosses and die rolls they believed would look random to another person, guess which card would be drawn from a randomly shuffled deck, point to circles on a screen and color in a grid to form a seemingly random design.

The team then analyzed these responses to quantify their level of randomness by determining the probability that a computer algorithm could generate the same decisions, measuring algorithmic complexity as the length of the shortest possible computer program that could model the participants’ choices. In other words, the more random a person’s behavior, the more difficult it would be to describe his or her responses mathematically, and the longer the algorithm would be. If a sequence were truly random, it would not be possible for such a program to compress the data at all—it would be the same length as the original sequence.

After controlling for factors such as language, sex and education, the researchers concluded age was the only characteristic that affected how randomly someone behaved. “At age 25, people can outsmart computers at generating this kind of randomness,” Zenil says.

This developmental trajectory, he adds, reflects what scientists would expect measures of higher cognitive abilities to look like. In fact, a sense of complexity and randomness is based on cognitive functions including attention, inhibition and working memory (which were involved in the study’s five tasks)—although the exact mechanisms behind this relationship remain unknown. “It is around 25, then, that minds are the sharpest.”

This makes biological sense, according to Zenil: Natural selection would favor a greater capacity for generating randomness during key reproductive years. The study’s results may even have implications for understanding human creativity. After all, a large part of being creative is the ability to develop new approaches and test different outcomes. “That means accessing a larger repository of diversity,” Zenil says, “which is essentially randomness. So at 25, people have more resources to behave creatively.”

Zenil’s findings support previous research, which also showed a decline in random behavior with age. But this is the first study to employ an algorithmic approach to measuring complexity as well as the first to do so over a continuous age range. “Earlier studies considered groups of young and older adults, capturing specific statistical aspects such as repetition rate in very long response sequences,” says Gordana Dodig-Crnkovic, a computer scientist at Mälardalen University in Sweden, who was not involved in the research. “The present article goes a step further.”

Using algorithmic measures of randomness, rather than statistical ones, allowed Zenil’s team to examine true random behavior instead of statistical, or pseudorandom, behavior—which, although satisfying statistical tests for randomness, would not necessarily be “incompressible” the way truly random data is. The fact that algorithmic capability differed with age implies the brain is algorithmic in nature—that it does not assume the world is statistically random but takes a more generalized approach without the biases described in more traditional statistical models of the brain.

These results may open up a wider perspective on how the brain works: as an algorithmic probability estimator. The theory would update and eliminate some of the biases in statistical models of decision-making that lie at the heart of prevalent theories—prominent among them is the Bayesian brain hypothesis, which holds that the mind assigns a probability to a conjecture and revises it when new information is received from the senses. “The brain is highly algorithmic,” Zenil says. “It doesn’t behave stochastically, or as a sort of coin-tossing mechanism.”

Neglecting an algorithmic approach in favor of only statistical ones gives us an incomplete understanding of the brain, he adds. For instance, a statistical approach does not explain why we can remember sequences of digits such as a phone number—take “246-810-1214,” whose digits are simply even counting numbers: This is not a statistical property, but an algorithmic one. We can recognize the pattern and use it to memorize the number. Algorithmic probability, moreover, allows us to more easily find (and compress) patterns in information that appears random.

“This is a paradigm shift,” Zenil says, “because even though most researchers agree that there is this algorithmic component in the way the mind works, we had been unable to measure it because we did not have the right tools, which we have now developed and introduced in our study.”

Zenil and his team plan to continue exploring human algorithmic complexity, and hope to shed light on the cognitive mechanisms underlying the relationship between behavioral randomness and age. First, however, they plan to conduct their experiments with people who have been diagnosed with neurodegenerative diseases and mental disorders, including Alzheimer’s and schizophrenia. Zenil predicts, for example, that participants diagnosed with the latter will not generate or perceive randomness as well as their counterparts in the control group, because they often make more associations and observe more patterns than the average person does.

The researchers’ colleagues are standing by. Their work on complexity, says Dodig-Crnkovic, “presents a very promising approach.”
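The notion of randomness used in the study is algorithmic: a sequence is random to the extent that no short program reproduces it, which also means it resists compression. The authors estimated this with coding-theorem-style methods suited to the very short sequences their tasks produce; the sketch below is only a crude, hypothetical stand-in that uses zlib compression to convey the intuition, since off-the-shelf compressors behave poorly on sequences as short as a dozen coin flips.

```python
import zlib

def compression_score(seq: str) -> float:
    """Crude randomness proxy: compressed size / original size.
    Higher values mean the sequence resists compression; a truly random
    sequence would barely compress at all."""
    raw = seq.encode()
    return len(zlib.compress(raw, 9)) / len(raw)

structured = "HT" * 50  # perfectly alternating coin flips, 100 characters
irregular = "HHTHTTHTHHHTTHTHTHHTTHTHHTHTTH" * 3 + "HTHTHHTTHH"  # 100 chars
print(f"alternating: {compression_score(structured):.2f}")
print(f"irregular:   {compression_score(irregular):.2f}")
```

The alternating sequence compresses to a small fraction of its length while the irregular one does not, which is the gap the study's complexity estimates quantify far more sensitively on short responses.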


News Article | March 17, 2017
Site: www.techtimes.com

The latest advancement in diagnostic technology could help doctors identify the presence of autism biomarkers in patients' bloodstreams as early as childhood. This novel diagnostic tool is the first physiological test that can detect the disorder, its accuracy greatly surpassing currently used screening measures, which focus only on behavioral symptoms.

The blood test uses an algorithm to track levels of metabolites and is designed to predict the occurrence of autism spectrum disorder (ASD) in children, allowing the possibility of earlier diagnosis. Researchers at the Rensselaer Polytechnic Institute in New York, who created the algorithm, studied its efficiency through advanced data analysis and published the results in the journal PLOS Computational Biology.

According to Juergen Hahn, one of the study's lead authors, previous research typically focused on only one biomarker at a time: a metabolite or a gene. Although successful, past results were not statistically significant enough to be replicated in other diagnostic cases. Hahn's study, however, is based on multivariate statistical models that enabled his team to classify children with autism based on their neurological status.

Researchers analyzed biomedical data from 149 blood samples, belonging to 83 autistic children and 76 neurotypical participants (children not affected by ASD), all aged between 3 and 10. Instead of focusing on individual metabolites, Hahn's team investigated several metabolite patterns correlated with autism and discovered important differences in metabolite concentrations between the ASD test group and the neurotypical cohort. The comparative analysis revealed disparities in two metabolic processes: the methionine cycle (linked to several cellular functions) and the transsulfuration pathway (responsible for producing antioxidants to decrease cell oxidation). Past studies have shown both pathways to be altered in people at high risk of autism.

"By measuring 24 metabolites from a blood sample, this algorithm can tell whether or not an individual is on the autism spectrum, and even to some degree where on the spectrum they land," explained Hahn. This new diagnostic method was shown to be "highly accurate and specific," correctly identifying 97.6 percent of the children who had autism and 96.1 percent of those who were neurotypical. No other diagnostic approach currently available can produce an equally precise classification of autistic patients or predict on which end of the spectrum they are found. Hahn's team believes this performance is "a strong indicator that the metabolites under consideration are strongly correlated with an ASD diagnosis."

"The method presented in this work is the only one of its kind that can classify an individual as being on the autism spectrum or as being neurotypical. We are not aware of any other method, using any type of biomarker, that can do this, much less with the degree of accuracy that we see in our work," the study authors said in a statement.
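The article describes a multivariate classifier over 24 metabolite concentrations. The sketch below shows the general shape of such an analysis on synthetic, invented data: a linear discriminant scored by cross-validation. It is a generic stand-in, not the paper's actual model, dataset, or reported accuracy.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for the real dataset: 24 metabolite concentrations
# for 83 ASD and 76 neurotypical children (all values invented).
n_asd, n_nt, n_metabolites = 83, 76, 24
asd = rng.normal(loc=1.0, scale=1.0, size=(n_asd, n_metabolites))
asd[:, :5] += 0.8  # pretend a few pathway metabolites shift in ASD
nt = rng.normal(loc=1.0, scale=1.0, size=(n_nt, n_metabolites))

X = np.vstack([asd, nt])
y = np.array([1] * n_asd + [0] * n_nt)

# A linear discriminant over the full metabolite pattern, scored with
# cross-validation: the multivariate idea, if not the paper's exact model.
clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```

The point of the multivariate setup is visible here: no single synthetic metabolite separates the groups on its own, but the discriminant combining all 24 does much better than chance, just as the article contrasts single-biomarker studies with pattern-based classification.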


News Article | April 17, 2017
Site: www.eurekalert.org

VIDEO: This is an attempt to explain what we think are some of the most salient results of our research, packed into a 4-minute video.

People's ability to make random choices or mimic a random process, such as coming up with hypothetical results for a series of coin flips, peaks around age 25, according to a study published in PLOS Computational Biology. Scientists believe that the ability to behave in a way that appears random arises from some of the most highly developed cognitive processes in humans, and may be connected to abilities such as human creativity.

Previous studies have shown that aging diminishes a person's ability to behave randomly. However, it had been unclear how this ability evolves over a person's lifetime, nor had it been possible to assess the ways in which humans may behave randomly beyond simple statistical tests.

To better understand how age impacts random behavior, Nicolas Gauvrit and colleagues at the Algorithmic Nature Group, LABORES for the Natural and Digital Sciences, Paris, assessed more than 3,400 people aged 4 to 91 years old. Each participant performed a series of online tasks that assessed their ability to behave randomly. The five tasks included listing the hypothetical results of a series of 12 coin flips so that they would "look random to somebody else," guessing which card would appear when selected from a randomly shuffled deck, and listing the hypothetical results of 10 rolls of a die -- "the kind of sequence you'd get if you really rolled a die."

The scientists analyzed the participants' choices according to their algorithmic randomness, which is based on the idea that patterns that are more random are harder to summarize mathematically. After controlling for characteristics such as gender, language, and education, they found that age was the only factor that affected the ability to behave randomly. This ability peaked at age 25, on average, and declined from then on.

"This experiment is a kind of reverse Turing test for random behavior, a test of strength between algorithms and humans," says study co-author Hector Zenil. "25 is, on average, the golden age when humans best outsmart computers," adds Dr. Gauvrit. The study also demonstrated that a relatively short list of choices, say 10 hypothetical coin flips, can be used to reliably gauge the randomness of human behavior.

The authors are now using a similar approach to study potential connections between the ability to behave randomly and such things as cognitive decline and neurodegenerative diseases. The authors have produced a video summarizing the key results of their research, which can be found, with a caption and further details, here: https:/

In your coverage please use this URL to provide access to the freely available article in PLOS Computational Biology: http://journals.

Funding: HZ received partial funding from the Swedish Research Council (VR). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Competing Interests: The authors have declared that no competing interests exist.


News Article | May 4, 2017
Site: www.eurekalert.org

Findings suggest why some onset patterns are associated with better surgery outcomes

At the onset of an epileptic seizure, differing characteristics of the brain tissue surrounding the seizure's origin site may determine which of two main patterns of brain activity will be seen, according to a study in PLOS Computational Biology.

Electrical activity in the brain at the start of an epileptic seizure typically follows either a "low amplitude fast" pattern or a "high amplitude slow" pattern. Patients whose seizures follow the high amplitude slow pattern have a higher risk of continuing seizures after surgical treatment. However, the mechanisms underlying these different onset patterns are unclear.

To better understand the onset patterns, Yujiang Wang of Newcastle University, U.K., and colleagues used a previously developed computer model that can simulate brain activity at the start of a seizure. The model output suggested that the onset pattern of a seizure may be determined not by brain tissue at the site where the seizure originates, but by characteristics of the surrounding "healthy" brain tissue. The simulation showed that the high amplitude slow pattern occurs when surrounding brain tissue has higher excitability; that is, the brain cells have a stronger response to stimulation and can react immediately to the initiation of a seizure. Meanwhile, the low amplitude fast pattern is associated with tissue of lower excitability, which is only slowly penetrated by seizure activity.

These findings suggest why the different onset patterns are associated with different treatment outcomes. Surgical removal of seizure-triggering brain tissue may be enough to prevent seizure activity in nearby low-excitability tissue. However, high-excitability tissue may still be stimulated by alternative trigger sites after surgery, providing a possible explanation for the worse outcomes experienced by patients whose seizures follow the high amplitude slow pattern.

Next, the researchers plan to study seizure onset patterns in greater detail. "We hope to contribute towards the overall goal of associating patterns seen in seizures with an understanding of the underlying mechanism," Wang says. "This would not only help our understanding of seizures in general, but may be useful for patient stratification in terms of treatment options."

In your coverage please use this URL to provide access to the freely available article in PLOS Computational Biology: http://journals.

Funding: YW, AJT and MK were supported by the CANDO project funded through the Wellcome Trust (102037) and the Engineering and Physical Sciences Research Council (EPSRC) (NS/A000026/1). PNT was supported by the Wellcome Trust (105617/Z/14/Z). MK was supported by the Human Green Brain project funded through the EPSRC (EP/K026992/1), and the Portabolomics project funded through the EPSRC (EP/N031962/1). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Competing Interests: The authors have declared that no competing interests exist.
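The mechanism the model points to, surrounding-tissue excitability shaping how seizure activity is recruited, can be illustrated qualitatively. The sketch below is a toy, not the study's neural mass model: a chain of FitzHugh-Nagumo units with a driven "focus" at one end, where the excitability of the surrounding units is set by the parameter a (values invented; smaller a means the units sit closer to their firing threshold and so are more excitable).

```python
import numpy as np

def simulate(a_surround, n=30, dt=0.05, steps=8000, coupling=0.5):
    """Chain of FitzHugh-Nagumo units; node 0 gets a seizure-like drive."""
    v, w = -1.2 * np.ones(n), -0.6 * np.ones(n)
    recruit_time = np.full(n, np.nan)  # when each node first spikes
    drive = np.zeros(n)
    drive[0] = 0.8  # sustained drive at the focus
    for t in range(steps):
        lap = np.roll(v, 1) + np.roll(v, -1) - 2 * v
        lap[0], lap[-1] = v[1] - v[0], v[-2] - v[-1]  # open chain ends
        dv = v - v**3 / 3 - w + drive + coupling * lap
        dw = 0.08 * (v + a_surround - 0.8 * w)
        v, w = v + dt * dv, w + dt * dw
        newly = (v > 1.0) & np.isnan(recruit_time)
        recruit_time[newly] = t * dt
    return recruit_time

for a in (0.7, 1.1):  # more excitable vs less excitable surroundings
    rt = simulate(a)[1:]  # ignore the driven focus itself
    k = np.count_nonzero(~np.isnan(rt))
    detail = f"median onset {np.nanmedian(rt):.0f}" if k else "no recruitment"
    print(f"a = {a}: {k}/29 surrounding nodes recruited, {detail}")
```

The qualitative contrast this toy aims at mirrors the paper's finding: highly excitable surroundings are recruited quickly and en masse (the high amplitude slow picture), while less excitable surroundings are penetrated slowly or not at all (the low amplitude fast picture).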


Data4Cure, Inc. to present at the 2017 BioData West Conference in San Francisco

Data4Cure, Inc., a biomedical data-to-knowledge company, will take part in the 2017 BioData West conference held in San Francisco, April 26-27. Janusz Dutkowski, Ph.D., CEO, will participate in a panel on the future of AI in pharmaceutical development and healthcare and give a talk on the second day of the conference.

"As machine learning and AI are increasingly entering the biomedical domain, it is important to consider how these approaches can extend biomedical knowledge and not add to the complexity and noise that often hamper the progress in the field," said Dutkowski. "At Data4Cure we believe in building advanced computational tools that are guided by prior knowledge of how cells and biological systems work. This prior knowledge serves to narrow down the search space of possible solutions and guides AI and machine learning algorithms towards biologically-sound solutions that have a greater chance of success in clinical development. This guided approach is at the core of what the Biomedical Intelligence Cloud does," said Dutkowski.

The talk on day two of the conference will focus on Data4Cure’s model-based molecular and immune infiltrate stratification engine, which identifies cancer subtypes that may benefit from immuno-oncology and targeted therapies. Data4Cure’s solution uniquely combines molecular network and pathway information with somatic mutation and immune-infiltration profiles inferred from RNA sequencing data. This approach looks beyond mutation burden and PD-L1 expression to identify subtypes based on their distinct biology, including detailed characterization of the key cancer pathways driving each subtype as well as the state of the immune system infiltrating the tumor. "Across multiple cancer types from colorectal to lung and kidney cancer we are finding that our systems approach can provide our pharmaceutical partners with a lot more information and is also more predictive than currently used methods based on mutation burden and/or PD-L1 expression," said Dutkowski.

The stratification and immuno-oncology tools are integrated with CURIE™, a dynamic biomedical knowledge graph that powers the Biomedical Intelligence Cloud. "CURIE is continuously updated with information mined from a variety of data sources. It cross-references tens of thousands of datasets, bioinformatics analyses, literature, clinical trials and external databases, and provides immediate data-driven answers to over 100 million biomedical questions, helping to identify novel disease mechanisms, biomarkers and therapeutic targets," said Roy Ronen, Ph.D., VP, Computational Biology at Data4Cure, who will also be presenting Data4Cure’s platform at the conference.

Launched in 2016, the Biomedical Intelligence Cloud is used across some of the largest global pharmaceutical companies and leading research institutes. "We are very excited to work with some of the best research groups in the industry and academia," said Dutkowski. "By dramatically accelerating the access to data-driven knowledge and uncovering non-obvious connections hidden in millions of datasets, scientific papers and clinical trials we can really start to turn data into cures."


Scientists have identified two small molecules that could be pursued as potential treatments for chronic inflammatory diseases. According to a paper published in PLOS Computational Biology, the researchers singled out the molecules using a new drug screening approach they developed.

Both molecules, known as T23 and T8, inhibit the function of a protein called tumor necrosis factor (TNF), which is involved in inflammation in diseases such as rheumatoid arthritis, Crohn's disease, psoriasis, multiple sclerosis, and more. Drugs that inhibit TNF's function are considered the most effective way to combat such diseases. However, not all patients respond to them, and their effectiveness can wear off over time.

To aid discovery of better TNF inhibitor drugs, Georgia Melagraki and colleagues from Greece and Cyprus developed a new computer-based drug screening platform. The platform incorporates proprietary molecular properties shared between TNF and another protein called RANKL, which is also involved in chronic inflammatory diseases. Built on a combination of advanced computational tools, the platform was used to virtually screen nearly 15,000 small molecules with unknown activity and to predict their interactions with the TNF and RANKL proteins; specifically, how well the small molecules might disrupt the protein-protein interactions (PPIs) leading to the trimerization and activation of these crucial proteins.

"This virtual experiment identified nine promising molecules out of thousands of candidates," says study co-corresponding author Antreas Afantitis of NovaMechanics Ltd, Cyprus. To further evaluate their potential, the scientists studied how the nine small molecules interacted with TNF and RANKL in real-world laboratory experiments. Of the nine molecules, T23 and T8 surfaced as particularly strong TNF inhibitors. Both molecules bind to TNF and RANKL, preventing them from interacting properly with other proteins. Both also show low potential for causing toxic side effects in humans.

With further research, T23 and T8 could be "further optimized to develop improved treatments for a range of inflammatory, autoimmune, and bone loss diseases," says study co-corresponding author George Kollias of the Biomedical Sciences Research Center 'Alexander Fleming', Greece. Meanwhile, the new virtual drug screening approach could enable discovery of other promising TNF inhibitors, and could be modified to search for potential treatments for additional diseases.

In your coverage please use this URL to provide access to the freely available article in PLOS Computational Biology: http://journals.

Citation: Melagraki G, Ntougkos E, Rinotas V, Papaneophytou C, Leonis G, Mavromoustakos T, et al. (2017) Cheminformatics-aided discovery of small-molecule Protein-Protein Interaction (PPI) dual inhibitors of Tumor Necrosis Factor (TNF) and Receptor Activator of NF-κB Ligand (RANKL). PLoS Comput Biol 13(4): e1005372. https:/

Funding: This work was funded by the Greek "Cooperation" Action project TheRAlead (09SYN-21-784), co-financed by the European Regional Development Fund and NSRF 2007-2013; the Innovative Medicines Initiative (IMI) funded project BTCure (No 115142); and European Research Council (ERC) Advanced Grant MCs-inTEST (No 340217) to GKol. AA acknowledges funding from the Cyprus Research Promotion Foundation, DESMI 2008, ΕΠΙΧΕΙΡΗΣΕΙΣ/ΕΦΑΡΜ/0308/20 http://www.

The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Competing Interests: Georgia Melagraki, Georgios Leonis and Antreas Afantitis are employed by NovaMechanics Ltd, a drug design company. The other authors declare that there are no conflicts of interest.
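The screening platform itself is proprietary, but the first step of a typical ligand-based virtual screen, ranking a candidate library by chemical similarity to a known active, is easy to sketch. The example below is generic and hypothetical (the SMILES strings are placeholders, not molecules from the paper): it uses RDKit Morgan fingerprints and Tanimoto similarity, the kind of coarse filter that real pipelines combine with QSAR models, docking, and toxicity prediction.

```python
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

# Placeholder chemistry: a reference "active" and a tiny candidate library.
reference = Chem.MolFromSmiles("c1ccc2ccccc2c1")  # naphthalene
library = {
    "cand-1": "Cc1ccc2ccccc2c1",      # methylnaphthalene
    "cand-2": "CCO",                  # ethanol
    "cand-3": "c1ccc(-c2ccccc2)cc1",  # biphenyl
}

def fingerprint(mol):
    # Morgan (circular) fingerprint, radius 2, 2048 bits.
    return AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)

ref_fp = fingerprint(reference)
scores = {
    name: DataStructs.TanimotoSimilarity(
        ref_fp, fingerprint(Chem.MolFromSmiles(smi)))
    for name, smi in library.items()
}

# Highest-similarity candidates would go forward to docking and assays.
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: Tanimoto {score:.2f}")
```

Narrowing roughly 15,000 molecules to nine experimental candidates, as the study reports, is exactly the funnel such scoring stages are built for; the study's own models additionally encoded TNF- and RANKL-specific properties rather than similarity alone.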


News Article | March 2, 2017
Site: www.eurekalert.org

A team of researchers has invented a new method to identify the origin of the irregular electrical 'storm waves' in the heart. The research, published in PLOS Computational Biology, could have major implications for the future treatment of a killer cardiac disease.

Atrial fibrillation, one of the most common forms of abnormal heart rhythm, is caused by these waves and is a major cause of stroke, as it increases the risk of blood clots forming inside the heart. It occurs in about 1-2% of people, and studies have shown that it is on the rise in the developed world due to our ageing population. The potential societal and economic implications of this increase highlight the urgent need for new and more effective treatment options for the condition.

Identifying the origin of atrial fibrillation is vital for its diagnosis and treatment. Current methods involve the use of a catheter to isolate the storm waves; however, this is very invasive surgery, and it is extremely difficult to identify the origin of the waves in order to treat the condition.

The researchers in this study, from the University of Manchester and the University of Hull, used a virtual human heart-torso model and a 64-lead electrocardiogram (ECG) vest to study the correlation between the origin of the storm waves and the features of the ECG signals. Using the properties of the atrial activation and the signals, they were able to develop a method that can pin down the location of atrial fibrillation non-invasively, as well as identifying different types of the condition.

"This technique can identify the origin of atrial fibrillation extremely effectively, which may provide a powerful tool for treatment in the future," said Professor Henggui Zhang, who led the study. "This is very exciting research, which we think could lead to new developments to tackle heart problems more effectively and simply."

This release is based on text provided by the University of Manchester. In your coverage please use this URL to provide access to the freely available article in PLOS Computational Biology: http://journals.

Citation: Alday EAP, Colman MA, Langley P, Zhang H (2017) Novel non-invasive algorithm to identify the origins of re-entry and ectopic foci in the atria from 64-lead ECGs: A computational study. PLoS Comput Biol 13(3): e1005270. doi:10.1371/journal.pcbi.1005270

Funding: This work was supported by research project grants from the EPSRC (EP/J00958X/1; EP/I029826/1) and the BHF (FS/14/5/30533). It was also supported by Conacyt and the University of Manchester. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Competing Interests: The authors have declared that no competing interests exist.
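The study's approach rests on matching body-surface signal features to simulations whose source location is known. The sketch below is a hypothetical, stripped-down illustration of that idea, not the published algorithm: it builds a library of simulated 64-lead feature vectors labeled by origin site, then assigns a noisy "recording" to the site whose template it correlates with best. All numbers are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
n_leads, n_sites = 64, 12

# Hypothetical template library: one 64-lead feature vector per candidate
# origin site, as would come from forward simulations on a virtual torso.
templates = rng.normal(size=(n_sites, n_leads))

# A "recorded" ECG feature vector: site 7's template plus measurement noise.
recording = templates[7] + 0.3 * rng.normal(size=n_leads)

def best_matching_site(recording, templates):
    # Pearson correlation of the recording against every site's template.
    r = [np.corrcoef(recording, t)[0, 1] for t in templates]
    return int(np.argmax(r)), max(r)

site, r = best_matching_site(recording, templates)
print(f"estimated origin: site {site} (correlation {r:.2f})")  # expect site 7
```

The published method works on real activation properties rather than random templates, but the non-invasive localization step has this shape: simulate once per candidate site, then match each patient's 64-lead signals against the library.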


News Article | February 17, 2017
Site: www.medicalnewstoday.com

Seeing is not always believing: visual speech (mouth movements) mismatched with auditory speech (sounds) can result in the perception of an entirely different message. This mysterious illusion is known as the McGurk effect. In new research, published in PLOS Computational Biology, neuroscience researchers have created an algorithm that reveals key insight into why the brain can sometimes muddle up one of the most fundamental aspects of the human experience. The findings will be useful in understanding patients with speech perception deficits and in building computers able to understand auditory and visual speech.

"All humans grow up listening to tens of thousands of speech examples, with the result that our brains contain a comprehensive mapping of the likelihood that any given pair of mouth movements and speech sounds go together," said Dr. Michael Beauchamp, professor of neurosurgery at Baylor College of Medicine and senior author on the paper with John Magnotti, postdoctoral research fellow at Baylor. "In everyday situations we are frequently confronted with multiple talkers emitting auditory and visual speech cues, and the brain must decide whether or not to integrate a particular combination of voice and face."

"Even though our senses are constantly bombarded with information, our brain effortlessly selects the verbal and nonverbal speech of our conversation partners from this cacophony," Magnotti said.

The McGurk effect is an example of when this goes wrong. It happens when mouth movements that are seen override what is heard, causing a person to perceive a different sound than the one actually being spoken. Only when the eyes are closed, so that the sound is heard on its own, is the correct message perceived. For example, the visual "ga" combined with the auditory "ba" results in the perception of "da."

Magnotti and Beauchamp were able to create an algorithmic model of multisensory speech perception based on the principle of causal inference: given a particular pair of auditory and visual syllables, the brain calculates the likelihood that they come from a single talker versus multiple talkers, and uses this likelihood to determine the final speech perception. "We compared our model with an alternative model that is identical, except that it always integrates the available cues, meaning there is no causal inference in speech perception," said Beauchamp, who also is director of the Core for Advanced MRI at Baylor. "Using data from a large number of subjects, the model with causal inference better predicted how humans would or would not integrate audiovisual speech syllables."

"The results suggest a fundamental role for a causal inference type calculation going on in the brain during multisensory speech perception," Magnotti said.

Researchers already have an idea of how and where the brain separately encodes auditory speech and visual speech, but this algorithm shines light on the process of how they are integrated. It will serve as a guide, highlighting specific brain regions that will be essential for multisensory speech perception. "Understanding how the brain combines information from multiple senses will provide insight into ways to improve declines in speech perception due to typical aging and even to develop devices that could enhance hearing across the life span," Beauchamp said.
Article: A Causal Inference Model Explains Perception of the McGurk Effect and Other Incongruent Audiovisual Speech, Magnotti JF, Beauchamp MS, PLOS Computational Biology, doi:10.1371/journal.pcbi.1005229, published 16 February 2017.
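The causal inference computation can be made concrete with the standard Gaussian formulation of Bayesian causal inference (Körding et al., 2007), the framework this line of modeling builds on. The sketch below uses invented numbers on an abstract one-dimensional "syllable" axis, not the paper's fitted parameters: it computes the posterior probability that the auditory and visual cues share one cause, then averages the fused and audio-only estimates by that probability.

```python
import numpy as np

def likelihood_common(xa, xv, sa, sv, mp, sp):
    # p(xa, xv | one cause): both cues come from a single s ~ N(mp, sp^2).
    var = sa**2 * sv**2 + sa**2 * sp**2 + sv**2 * sp**2
    num = (xa - xv)**2 * sp**2 + (xa - mp)**2 * sv**2 + (xv - mp)**2 * sa**2
    return np.exp(-0.5 * num / var) / (2 * np.pi * np.sqrt(var))

def likelihood_separate(xa, xv, sa, sv, mp, sp):
    # p(xa, xv | two causes): each cue has its own independent cause.
    pa = np.exp(-0.5 * (xa - mp)**2 / (sa**2 + sp**2)) / np.sqrt(2 * np.pi * (sa**2 + sp**2))
    pv = np.exp(-0.5 * (xv - mp)**2 / (sv**2 + sp**2)) / np.sqrt(2 * np.pi * (sv**2 + sp**2))
    return pa * pv

# Invented coordinates: "ba" = 0, "da" = 2, "ga" = 4 on a 1-D syllable axis.
xa, xv = 0.0, 4.0                  # auditory "ba", visual "ga"
sa, sv = 2.0, 2.0                  # sensory noise for each cue
mp, sp, p_common = 2.0, 4.0, 0.7   # prior mean/width and P(one talker)

l1 = likelihood_common(xa, xv, sa, sv, mp, sp)
l2 = likelihood_separate(xa, xv, sa, sv, mp, sp)
post = l1 * p_common / (l1 * p_common + l2 * (1 - p_common))

# Reliability-weighted fusion if one cause; audio-only estimate otherwise;
# the final percept averages the two by the posterior over causes.
s_fused = (xa / sa**2 + xv / sv**2 + mp / sp**2) / (1 / sa**2 + 1 / sv**2 + 1 / sp**2)
s_audio = (xa / sa**2 + mp / sp**2) / (1 / sa**2 + 1 / sp**2)
percept = post * s_fused + (1 - post) * s_audio

print(f"P(one talker | cues) = {post:.2f}")
print(f"percept = {percept:.2f} (pulled from auditory 'ba' at 0 toward 'da' at 2)")
```

With these invented settings the model judges a single talker to be plausible and the percept lands between the two cues, nearer "da" than "ba": a schematic version of McGurk fusion. An "always integrate" model, by contrast, corresponds to forcing post = 1, which is the comparison the authors describe.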
