News Article | April 10, 2017
Site: www.scientificcomputing.com

An innovative supercomputing program could assist psychologists with diagnosing mental health conditions. Researchers are using the Stampede supercomputer at the Texas Advanced Computing Center to train a machine-learning algorithm that can sift through diverse data sets and potentially predict which patients are at risk of developing depression and anxiety. The team conducted a study in which 52 treatment-seeking participants with depression and 45 healthy control participants received diffusion tensor imaging (DTI) MRI scans. This technique tags water molecules to analyze the degree to which these particles diffuse microscopically in the brain over a given period of time. "We feed in whole brain data or a subset and predict disease classifications or any potential behavioral measure such as measures of negative information bias," said David Schnyer, a psychology professor and cognitive neuroscientist at the University of Texas at Austin, in a statement.

Measuring these diffusions in multiple spatial directions generates vectors for each voxel, according to the official announcement. Voxels are three-dimensional cubes that represent either structure or neural activity throughout the brain. These measurements are then converted into metrics that indicate the integrity of white matter pathways in the cerebral cortex. The algorithm sorted through this data and predicted whether a volunteer in the study had a form of depression with roughly 75 percent accuracy. "Not only are we learning that we can classify depressed versus non-depressed people using DTI data, we are also learning something about how depression is represented within the brain," said Christopher Beevers, a professor of psychology and director of the Institute for Mental Health Research at UT Austin who participated in this research. "Rather than trying to find the area that is disrupted in depression, we are learning that alterations across a number of networks contribute to the classification of depression." Both researchers were encouraged by these findings and plan to add data from several hundred more volunteers to strengthen the system's predictive capabilities.

Machine learning is a growing field in the healthcare sector. Researchers are designing such programs for tasks like extracting data from cancer pathology reports, improving cancer surveillance at the national, state, and local levels, and diagnosing voice disorders.
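The article does not include the team's code, but the overall approach -- training a classifier on voxel-wise white-matter metrics and estimating accuracy with cross-validation -- can be sketched with scikit-learn. Everything below (the synthetic feature matrix, the linear support vector machine, the five-fold cross-validation) is an illustrative assumption, not the study's actual pipeline.

```python
# Minimal sketch: classifying depressed vs. control participants from
# DTI-derived white-matter features (e.g., one metric per voxel).
# Synthetic data stands in for real scans; the model and validation
# scheme are illustrative assumptions, not the study's code.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
n_depressed, n_controls, n_features = 52, 45, 5000  # 97 participants, voxel-wise metrics

# Fake feature matrix: each row is one participant's flattened white-matter metrics.
X = rng.normal(size=(n_depressed + n_controls, n_features))
y = np.array([1] * n_depressed + [0] * n_controls)  # 1 = depressed, 0 = control
X[y == 1, :50] += 0.3  # inject a weak group difference so the demo is non-trivial

clf = make_pipeline(StandardScaler(), LinearSVC(C=0.01, dual=False))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
print(f"cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```

With real data, the cross-validated accuracy reported by a setup like this is the analogue of the roughly 75 percent figure quoted in the article.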


News Article | May 2, 2017
Site: www.biosciencetechnology.com

Surgery and radiation remove, kill, or damage cancer cells in a certain area. But chemotherapy -- which uses medicines or drugs to treat cancer -- can work throughout the whole body, killing cancer cells that have spread far from the original tumor. Finding new drugs that can more effectively kill cancer cells or disrupt the growth of tumors is one way to improve survival rates for ailing patients. Increasingly, researchers looking to uncover and test new drugs use powerful supercomputers like those developed and deployed by the Texas Advanced Computing Center (TACC). "Advanced computing is a cornerstone of drug design and the theoretical testing of drugs," said Matt Vaughn, TACC's Director of Life Science Computing. "The sheer number of potential combinations that can be screened in parallel before you ever go in the laboratory makes resources like those at TACC invaluable for cancer research." Three projects powered by TACC supercomputers, which use virtual screening, molecular modeling and evolutionary analyses, respectively, to explore chemotherapeutic compounds, exemplify the type of cancer research that advanced computing enables.

Shuxing Zhang, a researcher in the Department of Experimental Therapeutics at the University of Texas MD Anderson Cancer Center, leads a lab dedicated to computer-assisted rational drug design and the discovery of novel targeted therapeutic agents. The group develops new computational methods, using artificial intelligence and high-performance computing-based virtual screening strategies, that benefit the entire field of cancer drug discovery and development. Identifying a new drug by intuition or trial and error is expensive and time-consuming. Virtual screening, on the other hand, uses computer simulations to explore how a large number of small-molecule compounds "dock", or bind, to a target to determine whether they may be candidates for future drugs. "In silico virtual screening is an invaluable tool in the early stages of drug discovery," said Joe Allen, a research associate at TACC. "It paints a clear picture not only of what types of molecules may bind to a receptor, but also what types of molecules would not bind, saving a lot of time in the lab."

One specific biological target that Zhang's group investigates is TNIK (TRAF2- and NCK-interacting kinase), an enzyme that plays a key role in cell signaling related to colon cancer. Silencing TNIK, it is believed, may suppress the proliferation of colorectal cancer cells. Writing in Scientific Reports in September 2016, Zhang and his collaborators reported the results of a study that investigated known compounds with desirable properties that might act as TNIK inhibitors. Using the Lonestar supercomputer at TACC, they screened 1,448 Food and Drug Administration-approved small-molecule drugs to determine which had the molecular features needed to bind and inhibit TNIK. They discovered that one -- mebendazole, an approved drug that fights parasites -- could effectively bind to the target. After testing it experimentally, they further found that the drug could also selectively inhibit TNIK's enzymatic activity. As an FDA-approved drug that can be used at higher dosages without severe side effects, mebendazole is a strong candidate for further exploration and may even exhibit a 'synergic anti-tumor effect' when used with other anti-cancer drugs. "Such advantages render the possibility of quickly translating the discovery into a clinical setting for cancer treatment in the near future," Zhang and his collaborators wrote.
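A screen like this is embarrassingly parallel: each compound can be docked and scored against the target independently, and the results ranked afterward. The sketch below shows only that pattern; the article does not name the docking software Zhang's group used, so dock_score is a hypothetical placeholder, and the per-compound timing is an assumed figure for illustration.

```python
# Illustrative pattern for an embarrassingly parallel virtual screen.
# dock_score() is a hypothetical stand-in for a real docking engine;
# the article does not specify which docking software was used, and
# the timing figures below are assumed, illustrative values.
from multiprocessing import Pool
import random

def dock_score(compound_id: str) -> tuple[str, float]:
    """Pretend to dock one compound against a target such as TNIK and return
    a binding score (lower = better). A real implementation would invoke a
    docking program here and parse its output."""
    random.seed(compound_id)
    return compound_id, random.uniform(-12.0, -2.0)

def estimated_walltime_hours(n_compounds: int, cpu_minutes_per_dock: float, n_cores: int) -> float:
    """Back-of-envelope wall time for the screen, assuming perfect parallel scaling."""
    return n_compounds * cpu_minutes_per_dock / n_cores / 60.0

if __name__ == "__main__":
    # Stand-in identifiers for a library of approved drugs.
    library = [f"compound_{i:04d}" for i in range(1448)]

    with Pool(processes=8) as pool:
        scores = pool.map(dock_score, library)

    top_hits = sorted(scores, key=lambda item: item[1])[:25]
    print("best-scoring candidates:", top_hits[:3])

    # Assuming ~30 CPU-minutes per compound, the same screen takes about a
    # month on one core but only a few hours on a few hundred cores.
    print(estimated_walltime_hours(1448, 30.0, 1) / 24.0, "days on 1 core")
    print(estimated_walltime_hours(1448, 30.0, 256), "hours on 256 cores")
```

The same pattern extends to much larger drug-like libraries; only the library size and the core count change.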
"Such advantages render the possibility of quickly translating the discovery into a clinical setting for cancer treatment in the near future," Zhang and his collaborators wrote. In separate research published in Cell in 2013, Zhang's group used Lonestar to virtually screen an even greater number of novel inhibitors of Skp2, a critical oncogene that controls the cell cycle and is frequently observed as being overexpressed in human cancer. "Molecular docking is a computationally-expensive process and the screening of 3 million drug-like compounds needs more than 2,000 days on a single CPU [computer processing unit]," Zhang said. "By running the process on a high-performance computing cluster, we were able to screen millions of compounds within days instead of years." Their computational approaches identified a specific Skp2 inhibitor that can selectively impair Skp2 activity and functions, thereby exhibiting potent anti-tumor activity. "Our work at TACC has resulted in multiple potential drug candidates currently at the different stages of preclinical and clinical studies," said Zhang. "We hope to continue using the resources to identify more effective and less toxic therapeutics." Described as "the guardian of the genome", tumor protein 53 (p53) plays a crucial role in multicellular organisms, conserving the stability of DNA by preventing mutations and thereby acting as a tumor suppressor. However, in approximately 50 percent of all human cancers, p53 is mutated and rendered inactive. Therefore, reactivation of mutant p53 using small molecules has been a long-sought-after anticancer therapeutic strategy. Rommie Amaro, professor of Chemistry and Biochemistry at the University of California, San Diego has been studying this important molecule for years trying to understand how it works. In September 2016, writing in the journal Oncogene, she reported results from the largest atomic-level simulation of the tumor suppression protein to date -- comprising more than 1.5 million atoms. The simulations helped to identify new "pockets" -- binding sites on the surface of the protein -- where it may be possible to insert a small molecule that could reactivate p53. They revealed a level of complexity that is very difficult, if not impossible, to experimentally test. "We could see how when the full-length p53 was bound to a DNA sequence that was a recognition sequence, the tetramer clamps down and grips onto the DNA - which was unexpected," Amaro said. In contrast, with the negative control DNA, p53 stays more open. "It actually relaxes and loosens its grip on the DNA," she said. "It suggested a mechanism by which this molecule could actually change its dynamics depending on the exact sequence of DNA." According to Amaro, computing provides a better understanding of cancer mechanisms and ways to develop possible novel therapeutic avenues. "When most people think about cancer research they probably don't think about computers, but biophysical models are getting to the point where they have a great impact on the science," she said. Chemicals created by plants are the basis for the majority of the medicines used today. One such plant, the periwinkle (Catharanthus roseus), is used in chemotherapy protocols for leukemia and Hodgkin's lymphoma. 
A completely different approach to drug discovery involves studying the evolution of plants that are known to be effective chemotherapeutic agents and their genetic relatives, since plants that share an evolutionary history often share related collections of chemical compounds. University of Texas researchers -- working with researchers from King Abdulaziz University in Saudi Arabia, the University of Ottawa and Université de Montréal -- have been studying Rhazya stricta, an environmentally stressed, poisonous evergreen shrub found in Saudi Arabia that belongs to the same family as the periwinkle. To understand the genome and evolutionary history of Rhazya stricta, the researchers performed genome assemblies and analyses on TACC's Lonestar, Stampede and Wrangler systems. According to Robert Jansen, professor of Integrative Biology at UT and lead researcher on the project, the computational resources at TACC were essential for constructing and studying the plant's genome. The results were published in Scientific Reports in September 2016. "These analyses allowed the identification of genes involved in the monoterpene indole alkaloid pathway, and in some cases expansions of gene families were detected," he said. The monoterpene indole alkaloid pathway produces compounds that have known therapeutic properties against cancer. From the annotated Rhazya genome, the researchers developed a metabolic pathway database, RhaCyc, that can serve as a community resource and help identify new chemotherapeutic molecules. Jansen and his team hope that by better characterizing the genome and evolutionary history using advanced computational methods, and by making the metabolic pathway database available as a community resource, they can speed the development of new medicines. "There are a nearly infinite number of possible drug compounds," Vaughn said. "But knowing the principles of what a good drug might look like - how it might bind to a certain pocket or what it might need to resemble - helps narrow the scope immensely, accelerating discoveries, while reducing costs."


News Article | February 15, 2017
Site: www.eurekalert.org

Understanding how oil and gas molecules, water and rocks interact at the nanoscale will help make extraction of hydrocarbons through hydraulic fracturing more efficient, according to Rice University researchers. Rice engineers George Hirasaki and Walter Chapman are leading an effort to better characterize the contents of organic shale by combining standard nuclear magnetic resonance (NMR) -- the same technology used by hospitals to see inside human bodies -- with molecular dynamics simulations. The work, presented this month in the Journal of Magnetic Resonance, details their method to analyze shale samples and validate simulations that may help producers determine how much oil and/or gas a formation holds and how difficult it may be to extract.

Oil and gas drillers use NMR to characterize rock they believe contains hydrocarbons. NMR manipulates the hydrogen atoms' nuclear magnetic moments, which can be forced to align by an applied external magnetic field. After the moments are perturbed by radio-frequency electromagnetic pulses, they "relax" back to their original orientation, and NMR can detect that. Because relaxation times differ depending on the molecule and its environment, the information gathered by NMR can help identify whether a molecule is gas, oil or water, and the size of the pores that contain them. "This is their eyes and ears for knowing what's down there," said Hirasaki, noting that NMR instruments are among several tools in the string sent downhole to "log," or gather information about, a well. In conventional reservoirs, he said, the NMR log can distinguish gas, oil and water and quantify the amounts of each contained in the pores of the rock from their relaxation times -- known as T1 and T2 -- as well as from how readily the fluids diffuse. "If the rock is water-wet, then oil will relax at rates close to that of bulk oil, while water will have a surface-relaxation time that is a function of the pore size," Hirasaki said. "This is because water is relaxed by sites at the water/mineral interface and the ratio of the mineral surface area to water volume is larger in smaller pores. The diffusivity is inversely proportional to the viscosity of the fluid. Thus gas is easily distinguished from oil and water by measuring diffusivity simultaneously with the T2 relaxation time." "In unconventional reservoirs, both T1 and T2 relaxation times of water and oil are short and have considerable overlap," he said. "Also the T1/T2 ratio can become very large in the smallest pores. The diffusivity is restricted by the nanometer-to-micron size of the pores. Thus it is a challenge to determine if the signal is from gas, oil or water." Hirasaki said there is debate over whether the short relaxation times in shale are due to paramagnetic sites on mineral surfaces and asphaltene aggregates and/or due to the restricted motion of the molecules confined in small pores. "We don't have an answer yet, but this study is the first step," he said.
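The pore-size dependence Hirasaki describes follows from surface relaxation: in the fast-diffusion limit, the relaxation rate 1/T2 is proportional to a pore's surface-to-volume ratio, so smaller pores relax faster. The short sketch below illustrates that relationship with assumed values for the surface relaxivity and pore radii; it is a textbook-style estimate, not the Rice group's model of shale.

```python
# Surface relaxation in the fast-diffusion limit: 1/T2 ~ rho2 * (S/V).
# For a spherical pore of radius r, S/V = 3/r, so T2 shrinks with pore size.
# The surface relaxivity and pore radii below are illustrative assumptions.
import numpy as np

rho2 = 10e-6                                      # surface relaxivity, m/s (assumed)
radii = np.array([5e-9, 50e-9, 500e-9, 5e-6])     # 5 nm (shale) to 5 um (conventional rock)

T2_surface = radii / (3.0 * rho2)                 # seconds
for r, t2 in zip(radii, T2_surface):
    print(f"pore radius {r * 1e9:8.1f} nm  ->  T2 ~ {t2 * 1e3:8.3f} ms")

# A measured CPMG decay is a sum of exponentials weighted by the pore-size
# distribution; a two-pore toy example:
t = np.linspace(0.0, 0.5, 500)                    # seconds
signal = 0.7 * np.exp(-t / T2_surface[1]) + 0.3 * np.exp(-t / T2_surface[3])
```

With these assumed numbers, the nanometer-scale pores relax in a fraction of a millisecond while micron-scale pores take hundreds of milliseconds, which is why shale signals crowd into the short-T2 end of the spectrum and overlap.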
"The development of technology to drill horizontal wells and apply multiple hydraulic fractures (up to about 50) is what made oil and gas production commercially viable from unconventional resources," Hirasaki said. "These resources were previously known as the 'source rock,' from which oil and gas found in conventional reservoirs had originated and migrated. The source rock was too tight for commercial production using conventional technology." Fluids pumped downhole to fracture a horizontal well contain water, chemicals and sand that keeps the fracture "propped" open after the injection stops. The fluids are then pumped out to make room for the hydrocarbons to flow. But not all the water sent downhole comes back. Often the chemical composition of kerogen, the organic component of shale, gives it an affinity for water that allows water molecules to bind to and block the nanoscale pores that would otherwise let oil and gas molecules through. "Kerogen is the organic material that resisted biodegradation during deep burial," Hirasaki said. "When it gets to a certain temperature, the molecules start cracking and make hydrocarbon liquids. Higher temperature makes methane (natural gas). But the fluids are in pores that are so tight the technology developed for conventional reservoirs doesn't apply anymore."

The Rice project, managed by lead author Philip Singer, a research scientist in Hirasaki's lab, and co-author Dilip Asthagiri, a research scientist in Chapman's lab and a lecturer and director of Rice's Professional Master's in Chemical Engineering program, applies NMR to kerogen samples and compares the measurements with computer models that simulate how the substances interact, particularly in terms of the material's wettability, its affinity for binding to water, gas or oil molecules. "NMR is very sensitive to fluid-surface interactions," Singer said. "With shale, the complication we're dealing with is the nanoscale pores. The NMR signal changes dramatically compared with measuring conventional rocks, in which pores are larger than a micron. So to understand what the NMR is telling us in shale, we need to simulate the interactions down to the nanoscale." The simulations mimic the molecules' known relaxation properties and reveal how they move in such a restrictive environment. When matched with NMR signals, they help interpret conditions downhole. That knowledge could also lead to fracking fluids that are less likely to bind to the rock, improving the flow of hydrocarbons, Hirasaki said. "If we can verify with measurements in the laboratory how fluids in highly confined or viscous systems behave, then we'll be able to use the same types of models to describe what's happening in the reservoir itself," he said.

One goal is to incorporate the simulations into iSAFT -- inhomogeneous Statistical Associating Fluid Theory -- a pioneering method developed by Chapman and his group to simulate the free energy landscapes of complex materials and analyze their microstructures, surface forces, wettability and morphological transitions. "Our results challenge approximations in models that have been used for over 50 years to interpret NMR and MRI (magnetic resonance imaging) data," Chapman said. "Now that we have established the approach, we hope to explain results that have baffled scientists for years." Chapman is the William W. Akers Professor of Chemical and Biomolecular Engineering and associate dean for energy in the George R. Brown School of Engineering. Hirasaki is the A.J. Hartsook Professor Emeritus of Chemical and Biomolecular Engineering. The Rice University Consortium on Processes in Porous Media supported the research, with computing resources supplied by the National Energy Research Scientific Computing Center, which is supported by the Office of Science of the U.S. Department of Energy, and the Texas Advanced Computing Center at the University of Texas at Austin.


News Article | February 16, 2017
Site: www.eurekalert.org

One of the main tools doctors use to detect diseases and injuries, in cases ranging from multiple sclerosis to broken bones, is magnetic resonance imaging (MRI). However, the results of an MRI scan take hours or days to interpret and analyze. This means that if a more detailed investigation is needed, or there is a problem with the scan, the patient needs to return for a follow-up. A new, supercomputing-powered, real-time analysis system may change that. Researchers from the Texas Advanced Computing Center (TACC), The University of Texas Health Science Center (UTHSC) and Philips Healthcare have developed a new, automated platform capable of returning in-depth analyses of MRI scans in minutes, thereby minimizing patient callbacks, saving millions of dollars annually, and advancing precision medicine. The team presented a proof-of-concept demonstration of the platform at the International Conference on Biomedical and Health Informatics this week in Orlando, Florida.

The platform combines the imaging capabilities of the Philips MRI scanner with the processing power of the Stampede supercomputer -- one of the fastest in the world -- using the TACC-developed Agave API Platform infrastructure to facilitate communication, data transfer, and job control between the two. An API, or application programming interface, is a set of protocols and tools that specify how software components should interact. Agave manages the execution of the computing jobs and handles the flow of data from site to site. It has been used for a range of problems, from plant genomics to molecular simulations, and allows researchers to access cyberinfrastructure resources like Stampede via the web. "The Agave Platform brings the power of high-performance computing into the clinic," said William (Joe) Allen, a life science researcher for TACC and lead author on the paper. "This gives radiologists and other clinical staff the means to provide real-time quality control, precision medicine, and overall better care to the patient."

For their demonstration project, staff at UTHSC performed MRI scans on a patient with a cartilage disorder to assess the state of the disease. Data from the MRI were passed through a proxy server to Stampede, which ran the GRAPE (GRAphical Pipelines Environment) analysis tool. Created by researchers at UTHSC, GRAPE characterizes the scanned tissue and returns pertinent information that can be used to do adaptive scanning - essentially telling a clinician to look more closely at a region of interest, thus accelerating the discovery of pathologies. The researchers demonstrated the system's effectiveness using a T1 mapping process, which converts raw data to useful imagery. The transformation involves computationally intensive data analyses and is therefore a reasonable demonstration of a typical workflow for real-time, quantitative MRI. A full circuit, from MRI scan to supercomputer and back, took approximately five minutes to complete and was accomplished without any additional inputs or interventions. The system is designed to alert the scanner operator to redo a corrupted scan if the patient moves, or to initiate additional scans as needed, while adding only minimal time to the overall scanning process.
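The T1 mapping step itself is a compact, per-voxel computation: signal samples acquired at several inversion times are fit to a relaxation model to recover T1. The sketch below fits a simple inversion-recovery model to synthetic data with SciPy; the acquisition scheme and parameter values are assumptions for illustration, not the GRAPE pipeline used in the demonstration.

```python
# Toy T1 map: fit S(TI) = S0 * (1 - 2*exp(-TI/T1)) per voxel (inversion recovery).
# Synthetic data and acquisition parameters are illustrative assumptions;
# this is not the GRAPE pipeline described in the article.
import numpy as np
from scipy.optimize import curve_fit

def inversion_recovery(ti, s0, t1):
    return s0 * (1.0 - 2.0 * np.exp(-ti / t1))

rng = np.random.default_rng(1)
TI = np.array([50, 150, 400, 800, 1600, 3200], dtype=float)   # inversion times, ms

# Fake 4x4 "image" with a known T1 per voxel, plus noise.
true_t1 = rng.uniform(300, 1500, size=(4, 4))                 # ms
signals = inversion_recovery(TI[:, None, None], 1.0, true_t1) # shape (6, 4, 4)
signals += rng.normal(scale=0.02, size=signals.shape)

t1_map = np.zeros_like(true_t1)
for i in range(4):
    for j in range(4):
        popt, _ = curve_fit(inversion_recovery, TI, signals[:, i, j], p0=(1.0, 800.0))
        t1_map[i, j] = popt[1]

print("max T1 error (ms):", np.abs(t1_map - true_t1).max())
```

Repeated over every voxel of a full scan, fits like this are what make the analysis computationally intensive enough to justify shipping the data to a supercomputer.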
"By integrating the computational power of TACC, we plan to build a completely adaptive scan environment to study multiple sclerosis and other diseases." Ponnada Narayana, Gabr's co-principal investigator and the director of Magnetic Resonance Research at The University of Texas Medical School at Houston, elaborated. "Another potential of this technology is the extraction of quantitative, information-based texture analysis of MRI," he said. "There are a few thousand textures that can be quantified on MRI. These textures can be combined using appropriate mathematical models for radiomics. Combining radiomics with genetic profiles, referred to as radiogenomics, has the potential to predict outcomes in a number diseases, including cancer, and is a cornerstone of precision medicine." According to Allen, "science as a service" platforms like Agave will enable doctors to capture many kinds of biomedical data in real time and turn them into actionable insights. "Here, we demonstrated this is possible for MRI. But this same idea could be extended to virtually any medical device that gathers patient data," he said. "In a world of big health data and an almost limitless capacity to compute, there is little reason not to leverage high-performance computing resources in the clinic." The research is supported in part by National Science Foundation (NSF) award ACI-1450459, by the Clinical Translational Science Award (CTSA) Grant UL1-TR000371 from the National Institutes of Health (NIH) National Center for Advancing Translational Sciences, and by the Chair in Biomedical Engineering Endowment Fund. Stampede was generously funded by the NSF through award ACI-1134872.


News Article | March 23, 2016
Site: www.rdmag.com

When a hail storm moved through Fort Worth, Texas on May 5, 1995, it battered the highly populated area with hail up to 4 inches in diameter and struck a local outdoor festival known as the Fort Worth Mayfest. The Mayfest storm was one of the costliest hailstorms in U.S. history, causing more than $2 billion in damage and injuring at least 100 people. Scientists know that storms with a rotating updraft on their southwestern sides -- which are particularly common in the spring on the U.S. southern plains -- are associated with the biggest, most severe tornadoes and also produce a lot of large hail. However, clear ideas on how they form and how to predict these events in advance have proven elusive. A team based at the University of Oklahoma (OU), working on the Severe Hail Analysis, Representation and Prediction (SHARP) project with support from the National Science Foundation (NSF), is trying to solve that mystery. Performing experimental weather forecasts using the Stampede supercomputer at the Texas Advanced Computing Center, the researchers have gained a better understanding of the conditions that cause severe hail to form, and are producing predictions with far greater accuracy than those currently used operationally.

To predict hail storms, or weather in general, scientists have developed mathematically based physics models of the atmosphere and the complex processes within it, as well as computer codes that represent these physical processes on a grid consisting of millions of points. Numerical models in the form of computer codes are integrated forward in time, starting from the observed current conditions, to determine how a weather system will evolve and whether a serious storm will form. Because of the wide range of spatial and temporal scales that numerical weather predictions must cover, and the fast turnaround required, they are almost always run on powerful supercomputers. The finer the resolution of the grid used to simulate the phenomena, the more accurate the forecast; but the more accurate the forecast, the more computation is required. The National Weather Service's highest-resolution official forecasts have a grid spacing of one point for every three kilometers. The model the Oklahoma team is using in the SHARP project, on the other hand, uses one grid point for every 500 meters -- six times more resolved in the horizontal directions. "This lets us simulate the storms with a lot higher accuracy," said Nathan Snook, an OU research scientist. "But the trade-off is, to do that, we need a lot of computing power -- more than 100 times that of three-kilometer simulations. Which is why we need Stampede."
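Snook's "more than 100 times" figure follows from how grid refinement compounds: going from 3-kilometer to 500-meter spacing multiplies the number of horizontal grid points by six in each direction, and the shorter time step needed for numerical stability adds roughly another factor of six. A rough estimate, ignoring vertical refinement and other overheads:

```python
# Rough cost scaling for refining a forecast grid from 3 km to 500 m spacing.
# Assumes the same domain, horizontal refinement only, and a time step that
# shrinks in proportion to the grid spacing (CFL-type stability limit).
coarse_dx_km = 3.0
fine_dx_km = 0.5

horizontal_factor = (coarse_dx_km / fine_dx_km) ** 2   # 6x more points in each horizontal direction
timestep_factor = coarse_dx_km / fine_dx_km            # ~6x more time steps
total_factor = horizontal_factor * timestep_factor

print(f"{horizontal_factor:.0f}x more horizontal grid points, "
      f"{timestep_factor:.0f}x more time steps, "
      f"~{total_factor:.0f}x more computation")        # ~216x, consistent with "more than 100 times"
```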
Stampede is currently one of the most powerful supercomputers in the U.S. for open science research and serves as an important part of NSF's portfolio of advanced cyberinfrastructure resources, enabling cutting-edge computational and data-intensive science and engineering research nationwide. According to Snook, there is a major effort underway to move to a "warning on forecast" paradigm -- that is, to use computer-model-based, short-term forecasts to predict what will happen over the next several hours and use those predictions to warn the public, as opposed to warning only when storms form and are observed. "How do we get the models good enough that we can warn the public based on them?" Snook asks. "That's the ultimate goal of what we want to do -- get to the point where we can make hail forecasts two hours in advance. 'A storm is likely to move into downtown Dallas, now is a good time to act.'" With such a system in place, it might be possible to prevent injuries to vulnerable people, divert or move planes into hangars, and protect cars and other property.

Looking at past storms to predict future ones

To study the problem, the team first reviews the previous season's storms to identify the best cases to study. They then perform numerical experiments to see whether their models, using new, improved techniques, can predict these storms better than the original forecasts. The idea is to eventually transition the higher-resolution models they are testing into operational use. Now in the third year of their hail forecasting project, the researchers are getting promising results. Studying the storms that produced the May 20, 2013 Moore, Oklahoma, tornado -- which led to 24 deaths, destroyed 1,150 homes and resulted in an estimated $2 billion in damage -- they developed zero-to-90-minute hail forecasts that captured the storm's impact better than the National Weather Service forecasts produced at the time. "The storms in the model move faster than the actual storms," Snook said. "But the model accurately predicted which three storms would produce strong hail and the path they would take." The models required Stampede to solve multiple fluid dynamics equations at millions of grid points and also incorporate the physics of precipitation, turbulence, radiation from the sun and energy changes from the ground. Moreover, the researchers had to simulate the storm multiple times -- as an ensemble -- to estimate and reduce the uncertainty in the data and in the physics of the weather phenomena themselves. "Performing all of these calculations on millions of points, multiple times every second, requires a massive amount of computing resources," Snook said. The team used more than a million computing hours on Stampede for the experiments and additional time on the Darter system at the National Institute for Computational Sciences for more recent forecasts. The resources were provided through the NSF-supported Extreme Science and Engineering Discovery Environment (XSEDE) program, which acts as a single virtual system that scientists can use to interactively share computing resources, data and expertise.

Though the ultimate impact of the numerical experiments will take some time to realize, their potential motivates Snook and the severe hail prediction team. "This has the potential to change the way people look at severe weather predictions," Snook said. "Five or 10 years down the road, when we have a system that can tell you that there's a severe hail storm coming hours in advance, and to be able to trust that -- it will change how we see severe weather. Instead of running for shelter, you'll know there's a storm coming and can schedule your afternoon." Ming Xue, the leader of the project and director of the Center for Analysis and Prediction of Storms (CAPS) at OU, gave a similar assessment. "Given the promise shown by the research and the ever-increasing computing power, numerical prediction of hailstorms and warnings issued based on the model forecasts, with a couple of hours of lead time, may indeed be realized operationally in a not-too-distant future, and the forecasts will also be accompanied by information on how certain the forecasts are."
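Running the model multiple times as an ensemble is also what makes the forecast probabilistic: the fraction of members that produce severe hail at a grid point becomes the forecast probability, and the spread among members indicates how certain the forecast is. A minimal bookkeeping sketch, with synthetic member output standing in for real model fields:

```python
# Minimal ensemble post-processing sketch: the fraction of members exceeding
# a severe-hail threshold at each grid point becomes the forecast probability.
# The member fields below are synthetic stand-ins for real model output.
import numpy as np

rng = np.random.default_rng(2)
n_members, ny, nx = 40, 120, 120
severe_threshold_mm = 25.0                      # hail diameter treated as severe (illustrative)

# Fake forecast hail size (mm) from each ensemble member.
members = rng.gamma(shape=2.0, scale=6.0, size=(n_members, ny, nx))

prob_severe = (members >= severe_threshold_mm).mean(axis=0)   # probability per grid point
spread = members.std(axis=0)                                  # member-to-member uncertainty

print("max probability of severe hail:", prob_severe.max())
print("domain-average ensemble spread (mm):", spread.mean())
```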
The team published its results in the proceedings of the 20th Conference on Integrated Observing and Assimilation Systems for Atmosphere, Oceans and Land Surface (IOAS-AOLS); they will also be published in an upcoming issue of the American Meteorological Society journal Weather and Forecasting. "Severe hail events can have significant economic and safety impacts," said Nicholas F. Anderson, program officer in NSF's Division of Atmospheric and Geospace Sciences. "The work being done by SHARP project scientists is a step towards improving forecasts and providing better warnings for the public."


News Article | March 18, 2016
Site: www.scientificcomputing.com

Last year, President Obama announced the National Strategic Computing Initiative (NSCI), an executive order to increase research, development and deployment of high performance computing (HPC) in the United States, with the National Science Foundation, the Department of Energy and the Department of Defense as the lead agencies. One of NSCI's objectives is to accelerate research and development that can lead to future exascale computing systems — computers capable of performing one billion billion calculations per second (also known as an exaflop). Exascale computers will advance research, enhance national security and give the U.S. a competitive economic advantage. Experts believe simply improving existing technologies and architectures will not get us to exascale levels. Instead, researchers will need to rethink the entire computing paradigm — from power, to memory, to system software — to make exascale systems a reality.

The Argo Project is a three-year collaborative effort, funded by the Department of Energy, to develop a new approach for extreme-scale system software. The project involves 40 researchers from three national laboratories and four universities working to design and prototype an exascale operating system and the software to make it useful. To test their new ideas, the research team is using Chameleon, an experimental environment for large-scale cloud computing research supported by the National Science Foundation and hosted by the University of Chicago and the Texas Advanced Computing Center (TACC). Chameleon — funded by a $10 million award from the NSFFutureCloud program — is a reconfigurable testbed that lets the research community experiment with novel cloud computing architectures and pursue new, architecturally enabled applications of cloud computing. "Cloud computing has become a dominant method of providing computing infrastructure for Internet services," said Jack Brassil, a program officer in NSF's Division of Computer and Network Systems. "But to design new and innovative compute clouds and the applications they will run, academic researchers need much greater control, diversity and visibility into the hardware and software infrastructure than is available with commercial cloud systems today." The NSFFutureCloud testbed provides the types of capabilities Brassil described.

Using Chameleon, the team is testing four key aspects of the future system. Chameleon's unique, reconfigurable infrastructure lets researchers bypass some issues that would have come up if the team were running the project on a typical high-performance computing system. For instance, developing the Node Operating System requires researchers to change the operating system kernel — the computer program that controls all the hardware components of a system and allocates them to applications. "There are not a lot of places where we can do that," said Swann Perarnau, a postdoctoral researcher at Argonne National Laboratory and a collaborator on the Argo Project. "HPC machines in production are strictly controlled, and nobody will let us modify such a critical component." However, Chameleon lets scientists modify and control the system from top to bottom, supporting a wide variety of cloud research as well as methods and architectures not available elsewhere. "The Argo project didn't have the right hardware nor the manpower to maintain the infrastructure needed for proper integration and testing of the entire software stack," Perarnau added.
"While we had full access to a small cluster, I think we saved weeks of additional system setup time, and many hours of maintenance work, switching to Chameleon." One of the major challenges in reaching exascale is energy usage and cost. During last year's Supercomputing Conference, the researchers demonstrated the ability to dynamically control the power usage of 20 nodes during a live demonstration running on Chameleon. They released a paper describing their approach to power management for future exascale systems and will present the results at the Twelfth Workshop on High-Performance, Power-Aware Computing (HPPAC'16) in May 2016. The Argo team is working with industry partners, including Cray, Intel and IBM, to explore which techniques and features would be best suited for the Department of Energy’s next supercomputer. "Argo was founded to design and prototype exascale operating systems and runtime software," Perarnau said. "We believe some of the new techniques and tools we have developed can be tested on petascale systems and refined for exascale platforms."
