News Article | April 17, 2017
Site: www.eurekalert.org

Cocaine, nicotine and capsaicin are just three familiar examples of the hundreds of thousands of small molecules (also called specialized or secondary metabolites) that plants use as chemical ammunition to protect themselves from predation. Unfortunately, identifying the networks of genes that plants use to make these biologically active compounds, which are the source of many of the drugs that people use and abuse daily, has vexed scientists for years, hindering efforts to tap this vast pharmacopeia to produce new and improved therapeutics. Now, Vanderbilt University geneticists think they have come up with an effective and powerful new way of identifying these elusive gene networks, which typically consist of a handful to dozens of different genes, that may overcome this roadblock.

"Plants synthesize massive numbers of bioproducts that are of benefit to society. This team has revolutionized the potential to uncover these natural bioproducts and understand how they are synthesized," said Anne Sylvester, program director in the National Science Foundation's Biological Sciences Directorate, which funded the research.

The revolutionary new approach is based on the well-established observation that plants produce these compounds in response to specific environmental conditions. "We hypothesized that the genes within a network that work together to make a specific compound would all respond similarly to the same environmental conditions," explained Jennifer Wisecaver, the postdoctoral fellow who conducted the study.

To test this hypothesis, Wisecaver - working with Cornelius Vanderbilt Professor of Biological Sciences Antonis Rokas and undergraduate researcher Alexander Borowsky - turned to Vanderbilt's in-house supercomputer at the Advanced Computing Center for Research & Education to crunch data from more than 22,000 gene expression studies performed on eight different model plant species. "These studies use advanced genomic technologies that can detect all the genes that plants turn on or off under specific conditions, such as high salinity, drought or the presence of a specific predator or pathogen," said Wisecaver.

But identifying the networks of genes responsible for producing these small molecules from thousands of experiments measuring the activity of thousands of genes is no trivial matter. That's where the Vanderbilt scientists stepped in: they devised a powerful algorithm capable of identifying the networks of genes that show the same behavior (for example, all turning on) across these expression studies.

The result of all this number crunching - described in the paper "A Global Co-expression Network Approach for Connecting Genes to Specialized Metabolic Pathways in Plants," published online April 13 in The Plant Cell (DOI: 10.1105/tpc.17.00009) - was the identification of dozens, possibly even hundreds, of gene pathways that produce small metabolites, including several that previous experiments had identified. Vered Tzin from Ben-Gurion University's Jacob Blaustein Institutes for Desert Research in Israel and Georg Jander from Cornell University's Boyce Thompson Institute for Plant Research in Ithaca, NY, helped verify the predictions the analysis made in corn, and Daniel Kliebenstein from the Department of Plant Sciences at the University of California, Davis, helped verify the predictions in the model plant system Arabidopsis.

The results of their analysis go against the prevailing theory that the genes that make up these pathways are clustered together on the plant genome.
"This idea comes from the observation in fungi and bacteria that the genes that make up these specialized metabolite pathways are clustered together," said Rokas. "In plants, however, these genes appear to be mostly scattered across the genome. Consequently, the strategies for discovering plant gene pathways will need to be different from those developed in the other organisms." The researchers argue that the results of their study show that this approach "is a novel, rich and largely untapped means for high-throughput discovery of the genetic basis and architecture of plant natural products." If that proves to be true, then it could help open the tap on new plant-based therapeutics for treating a broad range of conditions and diseases. The research was funded by a National Science Foundation National Plant Genome Initiative (NPGI) Postdoctoral Research Fellowship to Wisecaver (IOS-1401682), as well as by National Science Foundation grants to Rokas (DEB-1442113) and Jander (IOS-1339237).


News Article | April 17, 2017
Site: www.futurity.org

A new method for identifying the gene networks plants use to create anti-predator chemicals could lead to more effective drugs, a new study suggests. Plants create hundreds of thousands of small molecules (also called specialized or secondary metabolites)—including chemicals like cocaine, nicotine, and capsaicin—to use as “chemical ammunition” to protect themselves from predation. Unfortunately, the difficulty of identifying the networks of genes that plants use to make these biologically active compounds, which are the source of many of the drugs that people use and abuse daily, has hindered efforts to tap this vast pharmacopeia to produce new and improved therapeutics. Now, geneticists think they have come up with an effective and powerful new way of identifying these elusive gene networks, which typically consist of a handful to dozens of different genes.

“Plants synthesize massive numbers of bioproducts that are of benefit to society. This team has revolutionized the potential to uncover these natural bioproducts and understand how they are synthesized,” says Anne Sylvester, program director in the National Science Foundation’s Biological Sciences Directorate, which funded the research.

The revolutionary new approach is based on the well-established observation that plants produce these compounds in response to specific environmental conditions. “We hypothesized that the genes within a network that work together to make a specific compound would all respond similarly to the same environmental conditions,” explains Jennifer Wisecaver, a postdoctoral fellow at Vanderbilt University who conducted the study.

To test this hypothesis, Wisecaver—working with professor of biological sciences Antonis Rokas and undergraduate researcher Alexander Borowsky—turned to Vanderbilt’s in-house supercomputer at the Advanced Computing Center for Research & Education to crunch data from more than 22,000 gene expression studies performed on eight different model plant species. “These studies use advanced genomic technologies that can detect all the genes that plants turn on or off under specific conditions, such as high salinity, drought, or the presence of a specific predator or pathogen,” says Wisecaver.

But identifying the networks of genes responsible for producing these small molecules from thousands of experiments measuring the activity of thousands of genes is no trivial matter. That’s where the scientists stepped in; they devised a powerful algorithm capable of identifying the networks of genes that show the same behavior (for example, all turning on) across these expression studies.

The result of all this number crunching—described in a paper in The Plant Cell—was the identification of dozens, possibly even hundreds, of gene pathways that produce small metabolites, including several that previous experiments had identified. Vered Tzin from Ben-Gurion University’s Jacob Blaustein Institutes for Desert Research in Israel and Georg Jander from Cornell University’s Boyce Thompson Institute for Plant Research in Ithaca, New York, helped verify the predictions the analysis made in corn, and Daniel Kliebenstein from the plant sciences department at the University of California, Davis, helped verify the predictions in the model plant system Arabidopsis.

The results of their analysis go against the prevailing theory that the genes that make up these pathways are clustered together on the plant genome.

“This idea comes from the observation in fungi and bacteria that the genes that make up these specialized metabolite pathways are clustered together,” says Rokas. “In plants, however, these genes appear to be mostly scattered across the genome. Consequently, the strategies for discovering plant gene pathways will need to be different from those developed in the other organisms.”

The researchers argue that the results of their study show that this approach “is a novel, rich, and largely untapped means for high-throughput discovery of the genetic basis and architecture of plant natural products.” If that proves to be true, then it could help open the tap on new plant-based therapeutics for treating a broad range of conditions and diseases.

Funding came from a National Science Foundation National Plant Genome Initiative Postdoctoral Research Fellowship to Wisecaver, as well as from National Science Foundation grants to Rokas and Jander.


News Article | May 8, 2017
Site: www.scientificcomputing.com

Product developers talk about time to market. Web service providers measure time to first byte. For James Lowey, the key metric is time to life. Lowey is CIO at the Translational Genomics Research Institute (TGen), a nonprofit focused on turning genomics insights into faster diagnostics and more effective treatments. TGen’s genetics research is being applied to rare childhood diseases, cancer, neurological disorders, diabetes and other conditions. “We’ve got patients waiting,” Lowey told the panel audience. “We need to diagnose and treat them. They need results now, not in weeks or months. We’re working to accelerate the movement of insights from the bench to the bedside.”

It’s no surprise that each new generation of processors helps organizations like TGen deliver genetic results—and clinical answers—more quickly. Lowey described TGen’s farm of Dell blade servers, built on Intel Xeon E5 v3 processors and the Intel Scalable System Framework (Intel SSF). Using the blade servers, TGen has reduced processing time for critical genomics tasks from two weeks to seven hours, making it fast enough to be clinically relevant.

Digital imaging is another area where HPC-enabled speedups are advancing clinical care. Panelist Simon K. Warfield described innovative imaging techniques his team is applying to increase understanding of the brain’s complex circuitry. Dr. Warfield is the Thorne Griscom Professor of Radiology at Harvard Medical School and the founder and director of the Computational Radiology Lab (CRL) at Boston Children's Hospital. CRL is an Intel Parallel Computing Center that is modernizing the algorithms and data structures of medical image computing on Intel Xeon and Intel Xeon Phi processors. The lab is improving cache, vectorization and multi-threading performance, as well as creating more sophisticated imaging and modeling strategies. CRL's work can contribute to improved diagnosis and treatment of brain injuries, multiple sclerosis, depression, Alzheimer’s and many other conditions.

Consider the novel technique CRL has developed to show more clearly water’s diffusion through the brain—and pinpoint hindrances and restrictions to its flow. In contrast to traditional image processing approaches, CRL’s diffusion-weighted imaging infers new parametric maps from data measurements. Its computational model takes tens or hundreds of 3D images—each up to 10 million pixels—as its inputs. “This type of analysis is very computationally intensive,” Warfield said. “With the accelerated algorithm and the Intel Xeon Phi processors, we reduced the time needed from 48 hours to 15 minutes of calculations.”

That speedup can translate to immediate benefits for critically ill patients facing brain surgery. That’s because, as Warfield put it, “When you’re talking about surgical planning, life is a matter of time.” Recently, one of the hospital’s neurosurgery teams realized on a Friday that their patient’s conventional magnetic resonance scan was not clear enough to allow them to proceed with a planned brain resection. With the surgery-planning meeting scheduled for Monday, they requested emergency use of CRL’s diffusion imaging algorithm. The patient had a new scan Saturday evening, the data was processed on Sunday, and the information was ready for the team’s decision on Monday.

The panel also highlighted precision medicine’s global reach—and its big data challenges.
Fang Lin, Director of the Bioinformatics Center at BGI, described BGI’s use of the Lustre file system to help maintain storage performance as its data volumes grow. BGI is a global research leader as well as a provider of genetic testing products. It also operates the China National Genebank, putting it at the forefront of China’s five-year plan. BGI generates 20 terabytes of sequencing data every day. The institute stores 13 petabytes of genomic data and uses a 10-petabyte file system built on Intel Enterprise Edition for Lustre software and open-source technologies.

Dr. David Torrents, a molecular biologist and research professor at the Barcelona Supercomputing Center, shone a spotlight on the importance of collaboration in advancing precision medicine. BSC provides resources to a variety of international centers and consortia. In addition, the institute conducts its own multidisciplinary research in computational biomedicine and related fields. BSC’s alliances also encompass a range of hospitals and medical centers, enabling it to validate and test its models and tools with data from clinical institutions. “We’re at an exciting moment,” Torrents said. “We are not just developing new solutions for personalized medicine, but now are beginning a pilot program in January 2017 to bring them together and apply them in clinical settings, beginning in Catalonia and then throughout Spain.”

The panelists say continued leaps forward in precision medicine will come from faster and more sophisticated analysis of larger volumes of more varied data types. “What we want is a more holistic picture, and for that, it’s becoming absolutely critical to combine many diverse data types together for analysis,” said Lowey. To achieve that holistic picture, researchers want to use deep learning and other forms of artificial intelligence. They also want to apply those AI methods to genomic data in combination with imaging data, lifelong clinical records, population studies, environmental studies, and much more.

Different aspects of the precision medicine workflow will have varying processing and storage requirements, so the push continues for faster performance with agile or heterogeneous platform architectures rather than a single “silver bullet” approach. Processors will continue as the primary workhorses, supplemented by embedded resources and FPGA accelerators for parts of the workflow. Distributed compute and storage resources will remain crucial, along with advances in applications and tools.

As to the clinical impact of these holistic approaches, look no further than Boston Children’s Hospital. Noninvasive prenatal genomic testing can indicate whether a fetus has the risk factors that predispose it to be born with a malformed heart. If genetic testing shows these factors are present, data-intensive digital imaging can reveal whether the heart is actually deformed. By combining genomic with other medical data in this way, clinicians can provide peace of mind for worried parents-to-be, or help them plan for their child’s future. “We’re starting to connect the genetics that predisposes an individual to heart disease, with the imaging to see if the defect is present, and use that information to influence current treatment,” said Warfield. “That information can also help us plan for the child’s longer-term future. We can predict how they’ll do as teenagers and begin to plan accordingly.”

Precision medicine is one of the most promising and meaningful applications of high-performance computing today.
“It’s still early days, but we’re moving toward an exciting new era of predictive biology and personalized medicine,” said McManus. “Our panelists gave us a great taste of what’s on the horizon. With continued advances in platform technologies, artificial intelligence, and other areas, we create significant opportunities to increase the science of medicine and ultimately improve human health. Intel is excited to empower scientists and clinicians with technology innovations, resources and expertise as we collaborate to make this new era a reality.”

Jan Rowell writes about technology trends in HPC, healthcare, life sciences, and other industries.
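To give a sense of what "inferring parametric maps" from stacks of 3D images involves, here is a rough, hypothetical Python sketch. It fits the simplest possible diffusion model, S = S0 * exp(-b * D), independently at every voxel of a synthetic stack of diffusion-weighted volumes; the array sizes, b-values and noise are made up, and CRL's actual models are far more sophisticated.

```python
# Hedged sketch: fit a simple diffusion model S = S0 * exp(-b * D) independently at every
# voxel of a synthetic stack of diffusion-weighted 3D volumes (sizes, b-values and noise
# are made up; the real clinical models are far richer). Every voxel is an independent
# least-squares problem, which is why this kind of workload parallelizes so well.
import numpy as np

rng = np.random.default_rng(1)

# Toy acquisition: 8 volumes at different b-values, each 32 x 32 x 16 voxels.
b_values = np.array([0, 200, 400, 600, 800, 1000, 1200, 1400], dtype=float)  # s/mm^2
shape = (32, 32, 16)

# Synthesize ground-truth parameter maps, then noisy signals.
true_S0 = rng.uniform(800, 1200, size=shape)
true_D = rng.uniform(0.5e-3, 3.0e-3, size=shape)  # mm^2/s, a typical tissue range
signals = true_S0[None] * np.exp(-b_values[:, None, None, None] * true_D[None])
signals += rng.normal(scale=5.0, size=signals.shape)

# Log-linear least squares at every voxel at once:
# log S = log S0 - b * D, solved with a single shared design matrix.
A = np.column_stack([np.ones_like(b_values), -b_values])              # (8, 2)
y = np.log(np.clip(signals, 1e-6, None)).reshape(len(b_values), -1)   # (8, n_voxels)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)                          # (2, n_voxels)

S0_map = np.exp(coef[0]).reshape(shape)   # fitted baseline signal per voxel
D_map = coef[1].reshape(shape)            # fitted diffusion coefficient per voxel

print("median fitted D:", float(np.median(D_map)), "vs true:", float(np.median(true_D)))
```

Because every voxel is an independent fit, workloads like this parallelize naturally across wide vector units and many cores, which is the kind of structure the Xeon Phi optimization work described above exploits.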


News Article | April 17, 2017
Site: www.prweb.com

When people think about spinal cord injuries, thoughts generally turn toward Christopher Reeve, who was thrown from his horse during trial events for an equestrian competition in 1995, and Steven McDonald, who was shot three times in 1986 after serving two years as an officer with the New York Police Department. Reeve’s and McDonald’s heroic and visible survival stories brought the severity of spinal cord injuries into the international dialogue.

Today at the College of Staten Island (CSI), Maria Knikou, PhD, is conducting clinical trials of her breakthrough research, designed to develop effective rehabilitation strategies to improve the walking ability of persons with spinal cord injuries that have affected the function of the central nervous system. During her ongoing trials, she has recently worked with eight people with spinal cord injuries, including a 20-year-old who fell out of a golf cart and broke his neck nine months ago, and a Midwestern woman who broke her neck. These people, who have been diagnosed with tetraplegia (a spinal cord injury above the first thoracic vertebra, within cervical segments C1-C8) and severe paralysis of the legs, came to CSI to participate in the research trials. After completing four to six weeks of therapy with Dr. Knikou, the patients saw motor function improve, with increased control and reduced spasticity.

According to spinalcord.com, “The spinal cord carries nerve fibers traveling both from the brain to the rest of the body and from the body back to the brain. Those coming from the brain are responsible for voluntary control of muscles. Those traveling toward the brain carry sensation.” Dr. Knikou’s non-invasive therapy focuses on assessing the signal transfer from the brain to the legs in order to strengthen and enhance that pathway and provide gains in motor function. Patients who undergo the phase one therapy may be eligible for the phase two Robotic Gait Training, designed to further stimulate brain, spinal, and muscular health on a pathway toward improved mobility.

People who participate in the trials are provided a stipend, and certain expenses may be covered. Persons who are interested in learning whether they are eligible candidates for this unique therapeutic approach should contact Dr. Knikou, Professor of Human Neurophysiology in the Physical Therapy Department of the School of Health Sciences, at 718.982.3316 or maria.knikou(at)csi.cuny.edu. All trials are conducted on the Willowbrook campus of the College of Staten Island in New York City.

"Dr. Knikou's forward-thinking and expertise in human neurophysiology have enabled her to be extremely successful, with ongoing grant support from New York State and other private foundations," commented Dean Maureen Becker, PhD. "She is one of the leading researchers in the School of Health Sciences at the College of Staten Island and her work, one day, will impact the lives of millions of individuals with spinal cord injury."

Dr. Knikou’s research project is funded by the New York State Department of Health, Spinal Cord Injury Research Board, under the Project to Accelerate Research Translation (PART) award. She mentors high school, undergraduate, and graduate students, as well as postdoctoral research fellows and junior faculty. Dr. Knikou serves on several editorial boards and has published her research work in high-ranking, peer-reviewed scientific journals. For more information about the College of Staten Island School of Health Sciences visit http://www.csi.cuny.edu/schoolofhealthsciences.
About the College of Staten Island
The College of Staten Island is a senior college of The City University of New York (CUNY) offering Doctoral programs, Advanced Certificate programs, and Master’s programs, as well as Bachelor’s and Associate’s degrees. CSI is ranked 3rd in New York State by MONEY magazine for Best Colleges and 6th in the nation on CollegeNet’s Social Mobility Index. CSI is also a “Top Master’s University,” as ranked by Washington Monthly; in the Top 15% for Alumni Salary Potential according to Payscale; and has been named a Military Friendly School for seven consecutive years by GI Jobs magazine. The CUNY Interdisciplinary High-Performance Computing Center, one of the most powerful supercomputers in the New York City region, handles big-data analysis for faculty researchers and their student research teams, as well as researchers nationwide. The 204-acre park-like campus of CSI, the largest in NYC, is fully accessible and contains an advanced, networked infrastructure to support technology-based teaching, learning, and research. Dolphin Cove Residence Halls, the college’s new apartment-style luxury suites, celebrate their third year at full occupancy, housing students from across NYC, the United States, and the world.


News Article | April 28, 2017
Site: www.scientificcomputing.com

DataDirect Networks (DDN) announced that the University of Edinburgh, one of the world’s top educational and research institutions, has deployed three DDN ES7K Lustre Parallel File System Appliances to accelerate data-intensive workflows across research and industry. At the Edinburgh Parallel Computing Center (EPCC), DDN’s advanced Lustre appliances manage a rising tide of digital data generated by traditional HPC and novel forms of computing.

According to Professor Mark Parsons, director of the Edinburgh Parallel Computing Center, DDN’s high-performance storage supports fast-growing genomics research while enabling multinational companies and smaller businesses to benefit from access to advanced technologies. “We’re entering a period of huge innovation both in HPC and storage,” he said. “Companies like DDN, which continue to innovate, best allow us to support scientists and researchers across all kinds of businesses to harness the full potential of leading-edge technology and to accelerate life-changing discoveries.”

The Edinburgh Parallel Computing Center participates in many large-scale European research infrastructures, including the Scottish Genomes Partnership (SGP), led by the University of Edinburgh and the University of Glasgow. With the goal of linking crucial genetic data from sequenced genomes with clinical information, SGP installed 10 Illumina HiSeq X Ten Sequencing Systems along with the first deployment of Illumina’s SeqLab end-to-end solution for streamlining scientific workflows. A team at Edinburgh Genomics took advantage of this “plug and play” approach to achieve maximum throughput in weeks rather than months; 5,500 genomes have been sequenced in a little more than a year.

Thanks to its powerful, centralized DDN storage, the Edinburgh computing center is making major strides in sequencing the genomes of more than 3,000 people in Scotland. This ongoing effort is empowering the Scottish National Health Service to discover new, personalized treatments for diseases and genetic disorders while opening the door to more effective and safer drug therapies.

The deployment of DDN’s trio of ES7K appliances, with nearly 3 PB of storage, follows a legacy deployment of 23 PB of DDN SFA12K storage used to support the UK Research Data Facility (RDF), which is funded by the Engineering and Physical Sciences Research Council (EPSRC). “The key to DDN winning this latest bid was the quality of its technology and solutions,” said Parsons. “DDN understands our requirements, and its ES7K offers the best balance of price, performance and capacity.”

In addition, the Edinburgh Parallel Computing Center takes advantage of DDN’s fast data access speeds and scalable capacity to meet the far-ranging requirements of its Fortissimo initiative, a European Commission-funded project that brings vital resources to companies that otherwise couldn’t afford them. This successful effort has sped the development of the first high-performance “megacar,” one of the world’s fastest cars. The effort also elevated the performance of SunCast, a leading-edge software solution from Integrated Environmental Solutions used to analyze shadows and the effects of solar gains on the thermal performance of buildings.

Looking ahead, the University of Edinburgh is evaluating DDN’s WOS object storage to support a massively scalable storage pool to help the university better manage an active archive of more than 2 PB of data.
The goal of this additional deployment would be to offer the university’s industry users, and the tier-two centers being established around the UK, a highly scalable alternative to file storage.


News Article | April 19, 2017
Site: phys.org

It sounds like an easy task. After all, any animal with basic vision can see a moving object, decide whether it is food or a threat, and react accordingly. But what comes easily to a scallop is a challenge for the world's biggest supercomputers.

UC Davis physicist James Crutchfield, along with physics graduate student Adam Rupe and postdoc Ryan James, is designing new machine learning systems to allow supercomputers to spot large-scale atmospheric structures, such as hurricanes and atmospheric rivers, in climate data. The UC Davis Complexity Sciences Center, which Crutchfield leads, was recently named an Intel Parallel Computing Center and is collaborating with Intel Research, the Department of Energy's National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory, Stanford University, and the University of Montreal. The entire Big Data Center project is led by Prabhat, leader of the Data And Analytics Services Group at the Berkeley lab. The team works on NERSC's Cori supercomputer, one of the top five fastest machines in the world, with over 600,000 CPU cores.

Modern science is full of "big data." For climate science, that includes both satellite- and ground-based measurements that span the planet, as well as "big" simulations. "We need a new kind of machine learning to interpret very large data and planet-wide simulations," Crutchfield said. Climate and weather systems evolve over time, so the machines need to be able to find patterns not only in space but over time. "Dynamics are key to this," Crutchfield said. Humans (and other visual animals) recognize dynamic changes very quickly, but it's much harder for machines.

Pattern Discovery is more than Pattern Recognition

With existing technology, computers recognize patterns based on an existing template. That's how voice recognition systems work, by comparing your voice to an existing catalog of sounds. These pattern recognition systems can be very useful, but they can't identify anything truly new, anything that isn't represented in their template. Crutchfield and his team are taking a different approach, based on pattern discovery. They are working on algorithms that allow computers to identify structures in data without knowing what they are in advance. "Learning novel patterns is what humans are uniquely good at, but machines can't do it," he said.

Using pattern discovery, a supercomputer would learn how to identify hurricanes or other features in climate and weather data. It might also identify new kinds of structures that are too complex for humans to perceive at all. While this application is in global climate modeling, Crutchfield hopes to make it a new paradigm for analyzing very large datasets. "Usually, you apply known models to interpret the data. To say that you will extract your model directly from the data is a radical claim," he said.

The collaboration is part of the Intel Parallel Computing Centers program, which provides funding to universities, institutions, and research labs to modernize key community codes used across a wide range of disciplines to run on industry-standard parallel architectures.
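The distinction between recognition and discovery can be illustrated with a small, hypothetical Python sketch (toy data and names of my own; this is not the UC Davis group's algorithm): a recognizer scores each field against a fixed catalog of templates, while a discoverer clusters the fields using only the structure it finds in them.

```python
# Hedged sketch of the recognition-vs-discovery distinction (toy data, not the group's
# actual method). Recognition scores new data against fixed templates; discovery groups
# the data by structure it finds on its own, with no templates supplied.
import numpy as np

rng = np.random.default_rng(2)

def make_field(kind):
    """Toy 16x16 'atmospheric field': noise plus an optional ridge or blob."""
    field = rng.normal(scale=0.2, size=(16, 16))
    if kind == "ridge":
        field[:, 7:9] += 2.0
    elif kind == "blob":
        yy, xx = np.mgrid[:16, :16]
        field += 2.0 * np.exp(-((yy - 8) ** 2 + (xx - 8) ** 2) / 10.0)
    return field

data = [make_field(k) for k in ("flat", "ridge", "blob", "ridge", "blob", "flat")]

# --- Pattern recognition: score each field against a fixed template catalog. ---
templates = {"ridge": make_field("ridge"), "blob": make_field("blob")}

def recognize(field):
    scores = {name: np.corrcoef(field.ravel(), t.ravel())[0, 1]
              for name, t in templates.items()}
    # A field with no known structure is still forced into the nearest template.
    return max(scores, key=scores.get)

# --- Pattern discovery: cluster the fields with no templates at all (k-means, k=3). ---
X = np.stack([f.ravel() for f in data])
centers = X[rng.choice(len(X), size=3, replace=False)]
for _ in range(20):
    labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(axis=2), axis=1)
    centers = np.stack([X[labels == c].mean(axis=0) if np.any(labels == c) else centers[c]
                        for c in range(3)])

print("recognized:", [recognize(f) for f in data])
print("discovered clusters:", labels.tolist())
```

Note that the recognizer is forced to assign even a featureless field to one of its known templates, which is exactly the limitation the article describes; the systems under development must additionally track how such structures evolve in time, not just in a single snapshot.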


News Article | August 15, 2016
Site: www.scientificcomputing.com

Quantum computing remains mysterious and elusive to many, but USC Viterbi School of Engineering researchers might have taken us one step closer to bringing the superpowered devices to practical reality. The Information Sciences Institute at USC Viterbi is home to the USC-Lockheed Martin Quantum Computing Center (QCC), a supercooled, magnetically shielded facility specially built to house the first commercially available quantum optimization processors – devices so advanced that there are currently only two in use outside the Canadian company D-Wave Systems, where they were built: the first one went to USC and Lockheed Martin, the second to NASA and Google.

Quantum computers encode data in quantum bits, or “qubits,” which can represent the two digits of one and zero at the same time – as opposed to traditional bits, which can encode distinctly either a one or a zero. This property, called superposition, along with the ability of quantum states to “interfere” (cancel or reinforce each other like waves in a pond) and “tunnel” through energy barriers, is what may one day allow quantum processors to perform optimization calculations much faster than is possible using traditional processors. Optimization problems can take many forms, and quantum processors have been theorized to be useful for a variety of machine learning and big data problems like stock portfolio optimization, image recognition and classification, and anomaly detection.

Yet, because of the exotic way in which quantum computers process information, they are highly sensitive to errors of different kinds. When such errors occur, they can erase any quantum computational advantage — so developing methods to overcome errors is of paramount importance in the quest to demonstrate “quantum supremacy.” USC researchers Walter Vinci, Tameem Albash and Daniel Lidar put forth a scheme to minimize errors. Their solution, explained in the article “Nested Quantum Annealing Correction” published in the journal npj Quantum Information, is focused on reducing and correcting errors associated with heating, a type of error that is common and particularly detrimental in quantum optimizers. Cooling the quantum processor further is not possible, since the specialized dilution refrigerator that keeps it cool already operates at its limit, at a temperature approximately 1,000 times colder than outer space.

Vinci, Albash and Lidar have developed a new method to suppress heating errors: by coupling several qubits together on a D-Wave Two quantum optimizer, without changing the hardware of the device, they make these qubits act effectively as one qubit that experiences a lower temperature. The more qubits are coupled, the lower the effective temperature, allowing researchers to minimize the effect of heating as a source of noise or error. This nesting scheme is implementable not only on platforms such as the D-Wave processor on which it was tested, but also on other future quantum optimization devices with different hardware architectures.

The researchers believe that this work is an important step in eliminating a bottleneck for scalable quantum optimization implementations. “Our work is part of a large scale effort by the research community aimed at realizing the potential of quantum information processing, which we all hope might one day surpass its classical counterparts,” said Lidar, a USC Viterbi professor and QCC scientific director.
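The intuition behind the nesting scheme can be sketched with a toy calculation (my own simplified illustration, not the authors' analysis): if C physical qubits are locked together ferromagnetically so they flip only as a group, exciting the logical qubit costs roughly C times the single-qubit energy gap, so the thermal Boltzmann factor exp(-dE/kT) behaves as if the temperature had dropped to T/C.

```python
# Toy illustration (not the published analysis) of why nesting suppresses thermal errors:
# locking C physical qubits into one logical qubit raises the energy cost of a logical
# flip to roughly C * dE, so the Boltzmann weight exp(-C * dE / kT) is what a single
# qubit would feel at an effective temperature of T / C.
import math

dE_over_kT = 0.5  # single-qubit excitation energy in units of kT (made-up value)

print(f"{'copies C':>9} {'logical flip weight':>20} {'effective temperature':>22}")
for C in (1, 2, 4, 8):
    flip_weight = math.exp(-C * dE_over_kT)  # Boltzmann factor for flipping the logical qubit
    effective_T = 1.0 / C                    # relative to the physical temperature
    print(f"{C:>9} {flip_weight:>20.4f} {effective_T:>21.2f}x")
```

The published scheme also has to scale the couplings between logical qubits and decode the physical readout correctly, which is where the real subtlety lies; this sketch only captures the headline effect that more nesting means a colder effective temperature.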
