Computing Center

News Article | April 17, 2017
Site: www.eurekalert.org

Cocaine, nicotine and capsaicin are just three familiar examples of the hundreds of thousands of small molecules (also called specialized or secondary metabolites) that plants use as chemical ammunition to protect themselves from predation. Unfortunately, identifying the networks of genes that plants use to make these biologically active compounds, which are the source of many of the drugs that people use and abuse daily, has vexed scientists for years, hindering efforts to tap this vast pharmacopeia to produce new and improved therapeutics. Now, Vanderbilt University geneticists think they have come up with an effective and powerful new way of identifying these elusive gene networks, which typically consist of a handful to dozens of different genes, that may overcome this roadblock.

"Plants synthesize massive numbers of bioproducts that are of benefit to society. This team has revolutionized the potential to uncover these natural bioproducts and understand how they are synthesized," said Anne Sylvester, program director in the National Science Foundation's Biological Sciences Directorate, which funded the research.

The revolutionary new approach is based on the well-established observation that plants produce these compounds in response to specific environmental conditions. "We hypothesized that the genes within a network that work together to make a specific compound would all respond similarly to the same environmental conditions," explained Jennifer Wisecaver, the postdoctoral fellow who conducted the study.

To test this hypothesis, Wisecaver - working with Cornelius Vanderbilt Professor of Biological Sciences Antonis Rokas and undergraduate researcher Alexander Borowsky - turned to Vanderbilt's in-house supercomputer at the Advanced Computing Center for Research & Education to crunch data from more than 22,000 gene expression studies performed on eight different model plant species. "These studies use advanced genomic technologies that can detect all the genes that plants turn on or off under specific conditions, such as high salinity, drought or the presence of a specific predator or pathogen," said Wisecaver.

But identifying the networks of genes responsible for producing these small molecules from thousands of experiments measuring the activity of thousands of genes is no trivial matter. That's where the Vanderbilt scientists stepped in: they devised a powerful algorithm capable of identifying the networks of genes that show the same behavior (for example, all turning on) across these expression studies.

The result of all this number crunching - described in the paper "A global co-expression network approach for connecting genes to specialized metabolic pathways in plants," published online April 13 by The Plant Cell (DOI: 10.1105/tpc.17.00009) - was the identification of dozens, possibly even hundreds, of gene pathways that produce small metabolites, including several that previous experiments had identified. Vered Tzin from Ben-Gurion University's Jacob Blaustein Institutes for Desert Research in Israel and Georg Jander from Cornell University's Boyce Thompson Institute for Plant Research in Ithaca, NY, helped verify the predictions the analysis made in corn, and Daniel Kliebenstein from the Department of Plant Sciences at the University of California, Davis, helped verify the predictions in the model plant system Arabidopsis.

The results of their analysis go against the prevailing theory that the genes that make up these pathways are clustered together on the plant genome. 
"This idea comes from the observation in fungi and bacteria that the genes that make up these specialized metabolite pathways are clustered together," said Rokas. "In plants, however, these genes appear to be mostly scattered across the genome. Consequently, the strategies for discovering plant gene pathways will need to be different from those developed in the other organisms." The researchers argue that the results of their study show that this approach "is a novel, rich and largely untapped means for high-throughput discovery of the genetic basis and architecture of plant natural products." If that proves to be true, then it could help open the tap on new plant-based therapeutics for treating a broad range of conditions and diseases. The research was funded by a National Science Foundation National Plant Genome Initiative (NPGI) Postdoctoral Research Fellowship to Wisecaver (IOS-1401682), as well as by National Science Foundation grants to Rokas (DEB-1442113) and Jander (IOS-1339237).


News Article | April 17, 2017
Site: www.futurity.org

A new method for identifying the gene networks plants use to create anti-predator chemicals could lead to more effective drugs, a new study suggests. Plants create hundreds of thousands of small molecules (also called specialized or secondary metabolites)—including chemicals like cocaine, nicotine, and capsaicin—to use as “chemical ammunition” to protect themselves from predation. Unfortunately, the difficulty of identifying the networks of genes that plants use to make these biologically active compounds, which are the source of many of the drugs that people use and abuse daily, has hindered efforts to tap this vast pharmacopeia to produce new and improved therapeutics. Now, geneticists think they have come up with an effective and powerful new way of identifying these elusive gene networks, which typically consist of a handful to dozens of different genes.

“Plants synthesize massive numbers of bioproducts that are of benefit to society. This team has revolutionized the potential to uncover these natural bioproducts and understand how they are synthesized,” says Anne Sylvester, program director in the National Science Foundation’s Biological Sciences Directorate, which funded the research.

The revolutionary new approach is based on the well-established observation that plants produce these compounds in response to specific environmental conditions. “We hypothesized that the genes within a network that work together to make a specific compound would all respond similarly to the same environmental conditions,” explains Jennifer Wisecaver, a postdoctoral fellow at Vanderbilt University who conducted the study.

To test this hypothesis, Wisecaver—working with professor of biological sciences Antonis Rokas and undergraduate researcher Alexander Borowsky—turned to Vanderbilt’s in-house supercomputer at the Advanced Computing Center for Research & Education to crunch data from more than 22,000 gene expression studies performed on eight different model plant species. “These studies use advanced genomic technologies that can detect all the genes that plants turn on or off under specific conditions, such as high salinity, drought, or the presence of a specific predator or pathogen,” says Wisecaver.

But identifying the networks of genes responsible for producing these small molecules from thousands of experiments measuring the activity of thousands of genes is no trivial matter. That’s where the scientists stepped in; they devised a powerful algorithm capable of identifying the networks of genes that show the same behavior (for example, all turning on) across these expression studies.

The result of all this number crunching—described in a paper in The Plant Cell—was the identification of dozens, possibly even hundreds, of gene pathways that produce small metabolites, including several that previous experiments had identified. Vered Tzin from Ben-Gurion University’s Jacob Blaustein Institutes for Desert Research in Israel and Georg Jander from Cornell University’s Boyce Thompson Institute for Plant Research in Ithaca, New York, helped verify the predictions the analysis made in corn, and Daniel Kliebenstein from the plant sciences department at the University of California, Davis, helped verify the predictions in the model plant system Arabidopsis. The results of their analysis go against the prevailing theory that the genes that make up these pathways are clustered together on the plant genome. 
“This idea comes from the observation in fungi and bacteria that the genes that make up these specialized metabolite pathways are clustered together,” says Rokas. “In plants, however, these genes appear to be mostly scattered across the genome. Consequently, the strategies for discovering plant gene pathways will need to be different from those developed in the other organisms.” The researchers argue that the results of their study show that this approach “is a novel, rich, and largely untapped means for high-throughput discovery of the genetic basis and architecture of plant natural products.” If that proves to be true, then it could help open the tap on new plant-based therapeutics for treating a broad range of conditions and diseases. Funding came from a National Science Foundation National Plant Genome Initiative Postdoctoral Research Fellowship to Wisecaver, as well as from National Science Foundation grants to Rokas and Jander.


News Article | April 28, 2017
Site: www.scientificcomputing.com

DataDirect Networks (DDN) announced that the University of Edinburgh, one of the world’s top educational and research institutions, has deployed three DDN ES7K Lustre Parallel File System Appliances to accelerate data-intensive workflows across research and industry. At the Edinburgh Parallel Computing Center (EPCC), DDN’s advanced Lustre appliances manage a rising tide of digital data generated by traditional HPC and other forms of novel computing.

According to Professor Mark Parsons, director of the Edinburgh Parallel Computing Center, DDN’s high-performance storage supports fast-growing genomics research while enabling multinational companies and smaller businesses to benefit from access to advanced technologies. “We’re entering a period of huge innovation both in HPC and storage,” he said. “Companies like DDN, which continue to innovate, best allow us to support scientists and researchers across all kinds of businesses to harness the full potential of leading-edge technology and to accelerate life-changing discoveries.”

The Edinburgh Parallel Computing Center participates in many large-scale European research infrastructures, including the Scottish Genomes Partnership (SGP), led by the University of Edinburgh and the University of Glasgow. With the goal of linking crucial genetic data from sequenced genomes with clinical information, SGP installed 10 Illumina HiSeq X Ten Sequencing Systems along with the first deployment of Illumina’s SeqLab end-to-end solution for streamlining scientific workflows. A team at Edinburgh Genomics took advantage of this “plug and play” approach to achieve maximum throughput in weeks rather than months; 5,500 genomes have been sequenced in a little more than a year.

Thanks to its powerful, centralized DDN storage, the Edinburgh computing center is making major strides in sequencing the genomes of more than 3,000 people in Scotland. This ongoing effort is empowering the Scottish National Health Service to discover new, personalized treatments for diseases and genetic disorders while opening the door to more effective and safer drug therapies.

The deployment of DDN’s trio of ES7K appliances with nearly 3 PB of robust storage follows a legacy deployment of 23 PB of DDN SFA12K storage used to support the UK Research Data Facility (RDF), which is funded by the Engineering and Physical Sciences Research Council (EPSRC). “The key to DDN winning this latest bid was the quality of its technology and solutions,” said Parsons. “DDN understands our requirements, and its ES7K offers the best balance of price, performance and capacity.”

In addition, the Edinburgh Parallel Computing Center takes advantage of DDN’s fast data access speeds and scalable capacity to meet the far-ranging requirements of its Fortissimo initiative, a European Commission-funded project that brings vital resources to companies that otherwise couldn’t afford them. This successful effort has sped the development of the first high-performance “megacar,” one of the world’s fastest cars. The effort also elevated the performance of SunCast software, a leading-edge solution from Integrated Environmental Solutions used to analyze shadows and the effects of solar gain on the thermal performance of buildings.

Looking ahead, the University of Edinburgh is evaluating DDN’s WOS object storage to support an infinitely scalable storage pool that would help the university better manage an active archive of more than 2 PB of data. The goal of this additional deployment would be to offer a highly scalable alternative to file storage for the university’s industry users and the tier-two centers being established around the UK.


News Article | April 17, 2017
Site: www.prweb.com

When people think about spinal cord injuries, thoughts generally turn toward Christopher Reeve, who was thrown from his horse during trial events for an equestrian competition in 1995, and Steven McDonald, who was shot three times in 1986 after serving two years as an officer with the New York Police Department. Reeve’s and McDonald’s heroic and visible survival stories brought the severity of spinal cord injuries into the international dialogue.

Today at the College of Staten Island (CSI), Maria Knikou, PhD, is holding clinical trials of her breakthrough research designed to develop effective rehabilitation strategies to improve the walking ability of persons with spinal cord injuries that have affected the function of the central nervous system. During her ongoing trials, she has recently worked with eight people with spinal cord injuries, including a 20-year-old who fell out of a golf cart and broke his neck nine months ago, and a Midwestern woman who broke her neck. These people, who have been diagnosed with tetraplegia (a spinal cord injury above the first thoracic vertebra, within cervical segments C1-C8) and severe paralysis of the legs, came to CSI to participate in the research trials. After completing four to six weeks of therapy with Dr. Knikou, the patients saw motor function improve, with increased control and reduced spasticity.

According to spinalcord.com, “The spinal cord carries nerve fibers traveling both from the brain to the rest of the body and from the body back to the brain. Those coming from the brain are responsible for voluntary control of muscles. Those traveling toward the brain carry sensation.” Dr. Knikou’s non-invasive therapy focuses on assessing the signal transfer from the brain to the legs in order to strengthen and enhance that pathway and provide gains in motor function. Patients who undergo the phase one therapy may be eligible for the phase two Robotic Gait Training, designed to further stimulate brain, spinal, and muscular health on a pathway toward improved mobility.

People who participate in the trials are provided a stipend, and certain expenses may be covered. Persons who are interested in learning if they are eligible candidates for this unique therapeutic approach should contact Dr. Knikou, Professor of Human Neurophysiology in the Physical Therapy Department of the School of Health Sciences, at 718.982.3316 or maria.knikou(at)csi.cuny.edu. All trials are conducted on the Willowbrook campus of the College of Staten Island in New York City.

"Dr. Knikou's forward-thinking and expertise in human neurophysiology have enabled her to be extremely successful, with ongoing grant support from New York State and other private foundations," commented Dean Maureen Becker, PhD. "She is one of the leading researchers in the School of Health Sciences at the College of Staten Island, and her work, one day, will impact the lives of millions of individuals with spinal cord injury."

Dr. Knikou’s research project is funded by the New York State Department of Health, Spinal Cord Injury Research Board, under the Project to Accelerate Research Translation (PART) award. She mentors high school, undergraduate, and graduate students, as well as postdoctoral research fellows and junior faculty. Dr. Knikou serves on several editorial boards and has published her research work in high-ranking, peer-reviewed scientific journals. For more information about the College of Staten Island School of Health Sciences visit http://www.csi.cuny.edu/schoolofhealthsciences. 

About the College of Staten Island
The College of Staten Island is a senior college of The City University of New York (CUNY) offering Doctoral programs, Advanced Certificate programs, and Master’s programs, as well as Bachelor’s and Associate’s degrees. CSI is ranked 3rd in New York State by MONEY magazine for Best Colleges and 6th in the nation on CollegeNet’s Social Mobility Index. CSI is also a “Top Master’s University,” as ranked by Washington Monthly; in the top 15% for Alumni Salary Potential according to PayScale; and has been named a Military Friendly School for seven consecutive years by GI Jobs magazine. The CUNY Interdisciplinary High-Performance Computing Center, one of the most powerful supercomputers in the New York City region, handles big-data analysis for faculty researchers and their student research teams, as well as researchers nationwide. The 204-acre park-like campus of CSI, the largest in NYC, is fully accessible and contains an advanced, networked infrastructure to support technology-based teaching, learning, and research. Dolphin Cove Residence Halls, the college’s new apartment-style luxury suites, celebrate their third year at full occupancy, housing students from across NYC, the United States, and the world.


News Article | April 19, 2017
Site: phys.org

Spotting patterns sounds like an easy task – after all, any animal with basic vision can see a moving object, decide whether it is food or a threat, and react accordingly – but what comes easily to a scallop is a challenge for the world's biggest supercomputers. UC Davis physicist James Crutchfield, along with physics graduate student Adam Rupe and postdoc Ryan James, is designing new machine learning systems to allow supercomputers to spot large-scale atmospheric structures, such as hurricanes and atmospheric rivers, in climate data.

The UC Davis Complexity Sciences Center, which Crutchfield leads, was recently named an Intel Parallel Computing Center and is collaborating with Intel Research, the Department of Energy's National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory, Stanford University, and the University of Montreal. The entire Big Data Center project is led by Prabhat, leader of the Data and Analytics Services Group at the Berkeley lab. The team works on NERSC's Cori supercomputer, one of the top five fastest machines in the world, with over 600,000 CPU cores.

Modern science is full of "big data." For climate science, that includes both satellite- and ground-based measurements that span the planet, as well as "big" simulations. "We need a new kind of machine learning to interpret very large data and planet-wide simulations," Crutchfield said. Climate and weather systems evolve over time, so the machines need to be able to find patterns not only in space but over time. "Dynamics are key to this," Crutchfield said. Humans (and other visual animals) recognize dynamic changes very quickly, but it's much harder for machines.

Pattern Discovery is more than Pattern Recognition

With existing technology, computers recognize patterns based on an existing template. That's how voice recognition systems work: by comparing your voice to an existing catalog of sounds. These pattern recognition systems can be very useful, but they can't identify anything truly new – anything that isn't represented in their template. Crutchfield and his team are taking a different approach, based on pattern discovery. They are working on algorithms that allow computers to identify structures in data without knowing what they are in advance. "Learning novel patterns is what humans are uniquely good at, but machines can't do it," he said.

Using pattern discovery, a supercomputer would learn how to identify hurricanes or other features in climate and weather data. It might also identify new kinds of structures that are too complex for humans to perceive at all. While the current application is in global climate modeling, Crutchfield hopes to make it a new paradigm for analyzing very large datasets. "Usually, you apply known models to interpret the data. To say that you will extract your model directly from the data is a radical claim," he said.

The collaboration is part of the Intel Parallel Computing Centers program, which provides funding to universities, institutions, and research labs to modernize key community codes used across a wide range of disciplines to run on industry-standard parallel architectures.
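As a rough illustration of the difference the article draws, the Python/NumPy sketch below performs a tiny pattern-discovery exercise: it generates a toy space-time field containing a moving structure, cuts the field into small space-time patches, and clusters the patches without using any template, so the recurring structure emerges from the data itself. This is not the UC Davis team's algorithm, just a minimal stand-in for the idea; all sizes and parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "climate" field: a 1-D spatial domain evolving over time, with a coherent
# moving bump embedded in noise -- a stand-in for a large-scale structure such
# as an atmospheric river in real simulation output.
n_t, n_x = 200, 100
field = 0.3 * rng.normal(size=(n_t, n_x))
for t in range(n_t):
    center = (5 + 0.4 * t) % n_x
    field[t] += np.exp(-0.5 * ((np.arange(n_x) - center) / 3.0) ** 2)

def patches(f, ht=4, hx=4):
    """Collect small overlapping space-time patches (dynamics, not snapshots)."""
    out = []
    for t in range(f.shape[0] - ht):
        for x in range(f.shape[1] - hx):
            out.append(f[t:t + ht, x:x + hx].ravel())
    return np.array(out)

def kmeans(data, k=3, iters=20):
    """Tiny k-means: group similar patches without any predefined template."""
    centers = data[rng.choice(len(data), size=k, replace=False)]
    for _ in range(iters):
        dist = ((data[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = dist.argmin(axis=1)
        centers = np.array([data[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels

labels = kmeans(patches(field), k=3)
# Patches assigned to the same cluster mark regions with the same local dynamics;
# one cluster should track the moving bump, the others the noisy background.
print(np.bincount(labels))
```

A pattern-recognition version of the same task would instead correlate the field against a stored picture of the structure it already expects to find, which is exactly the limitation the researchers are trying to move beyond.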


News Article | May 8, 2017
Site: www.scientificcomputing.com

Product developers talk about time to market. Web service providers measure time to first byte. For James Lowey, the key metric is time to life.

Lowey is CIO at the Translational Genomics Research Institute (TGen), a nonprofit focused on turning genomics insights into faster diagnostics and more effective treatments. TGen’s genetics research is being applied to rare childhood diseases, cancer, neurological disorders, diabetes and other conditions. “We’ve got patients waiting,” Lowey told the panel audience. “We need to diagnose and treat them. They need results now, not in weeks or months. We’re working to accelerate the movement of insights from the bench to the bedside.”

It’s no surprise that each new generation of processors helps organizations like TGen deliver genetic results—and clinical answers—more quickly. Lowey described TGen’s farm of Dell blade servers, built on Intel Xeon E5 v3 processors and the Intel Scalable System Framework (Intel SSF). Using the blade servers, TGen has reduced processing time for critical genomics tasks from two weeks to seven hours, making it fast enough to be clinically relevant.

Digital imaging is another area where HPC-enabled speedups are advancing clinical care. Panelist Simon K. Warfield described innovative imaging techniques his team is applying to increase understanding of the brain’s complex circuitry. Dr. Warfield is the Thorne Griscom Professor of Radiology at Harvard Medical School and the founder and director of the Computational Radiology Lab (CRL) at Boston Children's Hospital. CRL is an Intel Parallel Computing Center that is modernizing the algorithms and data structures of medical image computing on Intel Xeon and Intel Xeon Phi processors. The lab is improving cache, vectorization and multi-threading performance, as well as creating more sophisticated imaging and modeling strategies. CRL can contribute to improved diagnosis and treatment of brain injuries, multiple sclerosis, depression, Alzheimer’s and many other conditions.

Consider the novel technique CRL has developed to show more clearly how water diffuses through the brain—and to pinpoint hindrances and restrictions to its flow. In contrast to traditional image processing approaches, CRL’s diffusion-weighted imaging infers new parametric maps from data measurements. Its computational model takes tens or hundreds of 3D images—each up to 10 million pixels—as its inputs. “This type of analysis is very computationally intensive,” Warfield said. “With the accelerated algorithm and the Intel Xeon Phi processors, we reduced the time needed from 48 hours to 15 minutes of calculations.”

That speedup can translate to immediate benefits for critically ill patients facing brain surgery. That’s because, as Warfield put it, “When you’re talking about surgical planning, life is a matter of time.” Recently, one of the hospital’s neurosurgery teams realized on a Friday that their patient’s conventional magnetic resonance scan was not clear enough to allow them to proceed with a planned brain resection. With the surgery-planning meeting scheduled for Monday, they requested emergency use of CRL’s diffusion imaging algorithm. The patient had a new scan Saturday evening, the data was processed on Sunday, and the information was ready for the team’s decision on Monday.

The panel also highlighted precision medicine’s global reach—and its big data challenges. 
Fang Lin, Director of the Bioinformatics Center at BGI, described BGI’s use of the Lustre file system to help maintain storage performance as its data volumes grow. BGI is a global research leader as well as a provider of genetic testing products. It also operates the China National Genebank, putting it at the forefront of China’s five-year plan. BGI cranks out 20 terabytes of sequencing data every day. The institute stores 13 petabytes of genomic data and uses a 10-petabyte file system built on Intel Enterprise Edition for Lustre Software and open-source technologies.

Dr. David Torrents, a molecular biologist and research professor at the Barcelona Supercomputing Center (BSC), shone a spotlight on the importance of collaboration in advancing precision medicine. BSC provides resources to a variety of international centers and consortia. In addition, the institute conducts its own multidisciplinary research in computational biomedicine and related fields. BSC’s alliances also encompass a range of hospitals and medical centers, enabling it to validate and test its models and tools with data from clinical institutions. “We’re at an exciting moment,” Torrents said. “We are not just developing new solutions for personalized medicine, but now are beginning a pilot program in January 2017 to bring them together and apply them in clinical settings, beginning in Catalonia and then throughout Spain.”

The panelists say continued leaps forward in precision medicine will come from faster and more sophisticated analysis of larger volumes of more varied data types. “What we want is a more holistic picture, and for that, it’s becoming absolutely critical to combine many diverse data types together for analysis,” said Lowey. To achieve that holistic picture, researchers want to use deep learning and other forms of artificial intelligence. They also want to apply those AI methods to genomic data in combination with imaging data, lifelong clinical records, population studies, environmental studies, and much more.

Different aspects of the precision medicine workflow will have varying processing and storage requirements, so the push continues for faster performance on agile or heterogeneous platform architectures rather than a single “silver bullet” approach. Processors will continue as the primary workhorses, supplemented by embedded resources and FPGA accelerators for parts of the workflow. Distributed compute and storage resources will remain crucial, along with advances in applications and tools.

As to the clinical impact of these holistic approaches, look no further than Boston Children’s Hospital. Noninvasive prenatal genomic testing can indicate whether a fetus has the risk factors that predispose it to being born with a malformed heart. If genetic testing shows these factors are present, data-intensive digital imaging can reveal whether the heart is actually deformed. By combining genomic with other medical data in this way, clinicians can provide peace of mind for worried parents-to-be, or help them plan for their child’s future.

“We’re starting to connect the genetics that predisposes an individual to heart disease with the imaging to see if the defect is present, and use that information to influence current treatment,” said Warfield. “That information can also help us plan for the child’s longer-term future. We can predict how they’ll do as teenagers and begin to plan accordingly.” Precision medicine is one of the most promising and meaningful applications of high-performance computing today. 
“It’s still early days, but we’re moving toward an exciting new era of predictive biology and personalized medicine,” said McManus. “Our panelists gave us a great taste of what’s on the horizon. With continued advances in platform technologies, artificial intelligence, and other areas, we create significant opportunities to increase the science of medicine and ultimately improve human health. Intel is excited to empower scientists and clinicians with technology innovations, resources and expertise as we collaborate to make this new era a reality.” Jan Rowell writes about technology trends in HPC, healthcare, life sciences, and other industries.
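The per-voxel model fitting Warfield describes is a good example of why vectorization matters in this workflow. The Python/NumPy sketch below is purely illustrative: it fits a generic mono-exponential diffusion model (not CRL's published model) to made-up data, first one voxel at a time in a Python loop and then as a single vectorized solve, to show how restructuring the same computation changes the runtime.

```python
import time
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical diffusion-weighted data: one signal-decay curve per voxel,
# sampled at a handful of b-values. Fitting a small model independently in
# every voxel is embarrassingly parallel work.
b = np.array([0.0, 250.0, 500.0, 750.0, 1000.0])      # b-values, s/mm^2
n_vox = 20_000
true_adc = rng.uniform(0.3e-3, 3.0e-3, size=n_vox)    # mm^2/s
signal = np.exp(-np.outer(true_adc, b))
signal *= 1.0 + 0.01 * rng.normal(size=signal.shape)  # a little noise

A = np.vstack([-b, np.ones_like(b)]).T                # design matrix, log-linear fit

def fit_loop(sig):
    """Naive approach: one tiny least-squares fit per voxel in a Python loop."""
    adc = np.empty(len(sig))
    for i, s in enumerate(sig):
        coeff, *_ = np.linalg.lstsq(A, np.log(s), rcond=None)
        adc[i] = coeff[0]
    return adc

def fit_vectorized(sig):
    """Same log-linear fit for all voxels at once, as a single dense solve."""
    coeff, *_ = np.linalg.lstsq(A, np.log(sig).T, rcond=None)
    return coeff[0]

t0 = time.perf_counter()
adc_loop = fit_loop(signal)
t1 = time.perf_counter()
adc_vec = fit_vectorized(signal)
t2 = time.perf_counter()
print(f"loop: {t1 - t0:.2f} s, vectorized: {t2 - t1:.3f} s, "
      f"max |difference|: {np.abs(adc_loop - adc_vec).max():.2e}")
```

On real systems the same principle extends to SIMD units, multi-threading, and many-core processors such as Xeon Phi, which is the kind of hardware the article credits for the reduction from 48 hours to 15 minutes.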


News Article | May 24, 2017
Site: www.eurekalert.org

KIT's new computing center and its high-performance computer win an energy efficiency prize -- researchers from all over Germany can use the petaflop system

The new supercomputer of Karlsruhe Institute of Technology (KIT) is not only very fast, but also very economical. The high-performance computer ForHLR II, which started operation last year, has now taken first place in the German Computing Center Prize category "Newly built energy- and resource-efficient computing centers". It has more than 24,000 processor cores and is equipped with a highly energy-efficient cooling system. The costs of the computer amounted to EUR 26 million. The project was funded in equal shares by the state of Baden-Württemberg and the Federal Republic of Germany.

"ForHLR II plays an important role in our state's strategy for supercomputing. And, of course, I am very pleased that it is a green product. It is another example of performance being compatible with resource efficiency," says Baden-Württemberg Minister of Science Theresia Bauer.

"Whoever wants to be at the cutting edge of international research needs the highest computing and storage capacity," says the President of KIT, Professor Holger Hanselka. "Here, the highest computing capacity and the latest visualization technology for modern simulation methods are combined with very low energy consumption, in line with the strategy of KIT. This reflects our strength in research in areas of high relevance to society."

"Without the support of the Ministry for Science, Research, and the Arts and the Minister herself, it would have been impossible to make energy efficiency a focus of the project," says Professor Bernhard Neumair, Director of the Steinbuch Centre for Computing (SCC). Operation of the high-performance computer is integrated seamlessly into KIT's energy supply concept, which is based on co-generation.

Science today produces rapidly growing data volumes that have to be processed, stored, and visualized. Researchers from all over Germany can use ForHLR II, a petaflop system with more than 1,170 nodes, more than 24,000 processor cores, and 75 terabytes of main memory. One petaflop corresponds to one quadrillion computing operations per second. ForHLR II's capacity exceeds that of ForHLR I, which started operation in 2014, by a factor of 2.5.

The new computing center building is equipped with the latest technology for highly energy-efficient warm-water cooling at up to 45°C. During the cold season, the waste heat of the system is used to heat the office building, while reliable cooling of all hot system components is ensured all year round. The system does not need any additional energy-intensive cooling machines. For components that still require classical cold-air cooling, a district cooling network based on trigeneration is being established at KIT's Campus North. As a result, economic and environmental efficiency will increase even further.

According to Professor Rudolf Lohner of SCC, designing an environmentally efficient cooling system in one of the warmest regions of Germany was a particular challenge. Lohner accompanied the project from the start through to implementation and coordinated it in the final phase. So-called wet coolers, in which water evaporating on the surface of cooling elements provides the cooling, were not suitable because of the high maintenance they require during pollen season and their susceptibility to bacterial colonization. "Hence, we had to use dry coolers." Their lower cooling capacity was compensated for by their larger size. Lohner points out that many aspects of planning and building such a complex computing center could only be mastered in close cooperation with partners and other units of KIT. "This highest recognition of our successful project by a Germany-wide expert panel reflects how well KIT has grown together and used the resulting synergies," he says.

The German Computing Center Prize is awarded every year at future thinking, Germany's biggest computing center congress. This year, the congress took place in Darmstadt on April 25 and 26.

For further information, please contact: Dr. Felix Mescoli, Press Officer, Tel.: +49 721 608 48120, Fax: +49 721 608 43658, felix.mescoli@kit.edu

Karlsruhe Institute of Technology (KIT) pools its three core tasks of research, higher education, and innovation in one mission. With about 9,300 employees and 25,000 students, KIT is one of the largest institutions of research and higher education in the natural sciences and engineering in Europe. KIT - The Research University in the Helmholtz Association. Since 2010, KIT has been certified as a family-friendly university. This press release is available on the internet at http://www. .
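For a rough sense of scale, the figures quoted above can be turned into back-of-the-envelope averages. The short Python snippet below does that arithmetic; it assumes exactly one petaflop of peak performance, 24,000 cores, and 1,170 nodes, so the results are approximations rather than official specifications.

```python
# Back-of-the-envelope numbers implied by the ForHLR II figures in the article.
peak_flops = 1.0e15   # one petaflop = one quadrillion operations per second
cores = 24_000        # "more than 24,000 processor cores"
nodes = 1_170         # "more than 1,170 nodes"

print(f"~{peak_flops / cores / 1e9:.0f} gigaflops per core on average")
print(f"~{peak_flops / nodes / 1e12:.2f} teraflops per node on average")
print(f"~{cores / nodes:.0f} cores per node on average")
# The article says ForHLR II is 2.5 times more capable than ForHLR I (2014).
print(f"implied ForHLR I capacity: ~{1.0 / 2.5:.1f} petaflops")
```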
