South Africa

News Article | May 23, 2017
Site: www.gizmag.com

Mysterious radio signals from space have been coming in from all directions, and their source is still unknown. Extremely bright and short-lived, these fast radio bursts (FRB) have scientists puzzled, but the Australian Square Kilometer Array Pathfinder (ASKAP) radio telescope may help bring some answers to light. New Atlas spoke to Keith Bannister, an astronomer on the project, about what might be behind these strange signals and how the mystery might be unraveled. Located in an isolated part of Western Australia, far from the radio interference of the city, ASKAP has the study of FRB as one of its main goals, and it picked up its first one just four days after it began the search in January this year. Designated FRB170107, the signal originated in the direction of the constellation of Leo, and traveled some six billion light-years to get here. "Scientists aren't very original at naming things: they're fast, and they're radio, and they're bursts," Bannister explains to us. "So they're radio, you detect them with a radio telescope that looks like a big satellite dish. And they're fast – really fast. Click your fingers and they're gone, less than a millisecond usually in duration. And they're bursts. They happen once and that's the last time we see them." The first FRB was identified in 2007, when scientists trawling through archival data from 2001 came across a short, sharp burst that couldn't be explained. Over the next few years, more of these anomalies were discovered in existing data, but the phenomenon wasn't detected live until 2015. In total, fewer than 30 of the signals have been identified so far. Generally, astronomers aren't sure what could be giving off these signals, but they seem to have the most in common with pulsars and magnetars. These two types of neutron stars both have extremely large magnetic fields and give off similar radio bursts, but the difference is that they repeat themselves. Some are as regular as clockwork while others have more unpredictable patterns, but either way they usually happen more than once. FRB, on the other hand, are largely one-time events, and that leaves scientists scratching their heads over their origins. "There are more theories than there are bursts," says Bannister. "They range from everything from scaled-up versions of what we've seen in our own galaxy, like a pulsar or a magnetar, to things that are much more exotic: someone proposed it's the signal of what alien civilizations look like when they're starting to explore their own galaxy." While these outlandish ideas often pop up to help explain space mysteries, the general consensus tends to lean toward something a little more mundane. "The community at the moment hasn't really settled on anything," says Bannister. "But the more conservative theory people are saying it's probably got to be something to do with a neutron star. So you can think of a pulsar as being a neutron star with a huge magnetic field, and then a magnetar is a neutron star with an even bigger magnetic field. So maybe if you can make one that's got an even bigger magnetic field, then eventually it can make FRB. That I think is probably the leading option, but it's not settled by any stretch." The first step toward answering the question of what FRB are is simply to find more of them. The more data there is to work with, the clearer the picture gets, and being able to detect them faster means more telescopes can be trained on that patch of sky to see what possible points of origin are lurking there.
"One big leap forward will be when a fast radio burst happens, to really pinpoint its location in the sky," says Bannister. "Then you can go and look and see if there's a galaxy there, and that will tell you a lot. So if it turns out FRB come from certain types of galaxies, then we can use what we know about those galaxies to try and work out what FRB are. We'll go back and we'll look for repeats. We take optical images, we look for x-ray images or gamma ray bursts, or all sorts of stuff. So once you start finding them, you can go back and look in that particular part of the sky with lots of different types of telescopes, and see what you can see." By the same token, the more FRB that are studied, the more likely it becomes that a particularly enlightening one will be spotted. "When you look at enough bursts, you never know when one of them is so unusual it kind of unlocks the whole thing," says Bannister. "So in that sense, jut collecting more sometimes is helpful because every now and then you'll find one that's a real oddball, and that actually turns out to be the key to understanding the whole bunch." In fact, the oddball in question may have already been discovered. FRB121102 was first found by the Arecibo radio telescope in Puerto Rico, and so far it's the only fast radio burst to buck the trend of being a one-hit wonder, instead pulsing at least 16 times since its discovery in 2012. Whether it confounds the mystery or helps solve it remains to be seen. "That's the oddball of the family," says Bannister. "We call it the Repeater, because there's only one. That's even more puzzling than the phenomenon of fast radio bursts. We think they're probably connected, but we don't exactly know how." Besides the thrill of nailing down a potentially new type of celestial object, FRB can give us a new understanding of our little corner of the Universe. They travel an extremely long distance through space before they reach us, and when they get here they carry with them clues about their journey. "The key property of radio bursts in general, from pulsars and fast radio bursts, is that we can measure what's called the dispersion," says Bannister. "So the radio waves as they leave the pulsar or fast radio burst, they go at different speeds, and that's because of the matter that the radio waves are going through. When they get to our telescopes, the short wavelengths arrive first and the long wavelengths about a second afterwards, and that time delay tells us how much matter those waves have gone through." By that system, when a fast radio burst is found to have a large dispersion, that indicates that the radio waves have passed through a lot of electrons. Astronomers can then compare that data with what they know about that region of the sky, and if there aren't many galaxies in its path, it tells them that the signal must have come from a long way away. When a fast radio burst goes off, studying its dispersion can tell scientists just how much matter there is along that line of sight, and with enough data, a 3D map of the Universe can effectively be built up. ASKAP began searching for FRB in the first week of January this year, and within four days it had spotted its first signal. Since then two more FRB have been detected, and that was done using just eight of its eventual 36 dishes. Those dishes, each 12 m (40 ft) in diameter, will all point in slightly different directions, giving the array a wide viewing area, like a segmented fly's eye. 
"This thing will see 36 different patches of the sky at the same time," says Bannister. "That's a huge increase in the amount of sky you can see, and that makes a huge difference in how many FRB you can catch every day." The rest of these dishes are due to come online over the next 12 to 18 months, and when that happens, the number of FRB the system spots should increase dramatically. "I think the next 12 months will be a pretty exciting time, because there are a lot of telescopes, like ASKAP, that are just starting up, and once they're really running, there'll be a lot of new information coming in," says Bannister. A paper describing the ASKAP team's discovery was published in The Astrophysical Journal Letters.


News Article | November 19, 2015
Site: phys.org

High-performance computing (or HPC) enables discoveries in practically every field of science—not just those typically associated with supercomputers like chemistry and physics, but also in the social sciences, life sciences and humanities. By combining superfast and secure networks, cutting-edge parallel computing and analytics software, advanced scientific instruments and critical datasets across the U.S., NSF's cyber-ecosystem lets researchers investigate questions that can't otherwise be explored. NSF has supported advanced computing since its beginning and is constantly expanding access to these resources. This access helps tens of thousands of researchers each year—from high-school students to Nobel Prize winners—expand the frontiers of science and engineering, regardless of whether their institutions are large or small, or where they are located geographically. Below are 10 examples of research enabled by NSF-supported advanced computing resources from across all of science. Pineapples don't just taste good—they have a juicy evolutionary history. Recent analyses using computing resources that are part of the iPlant Collaborative revealed an important relationship between pineapples and crops like sorghum and rice, allowing scientists to home in on the genes and genetic pathways that allow plants to thrive in water-limited environments. Led by the University of Arizona, Texas Advanced Computing Center, Cold Spring Harbor Laboratory and University of North Carolina at Wilmington, iPlant was established in 2008 with NSF funding to develop cyberinfrastructure for life sciences research, provide powerful platforms for data storage and bioinformatics and democratize access to U.S. supercomputing capabilities. This week, iPlant announced it will host a new platform, Digital Imaging of Root Traits (DIRT), that lets scientists in the field measure up to 76 root traits merely by uploading a photograph of a plant's roots. Software that simulates the effect of an electric charge passing through a transistor—only a few atoms wide—is helping researchers to explore alternative materials that may replace silicon in future nanodevices. The software simulations designed by Purdue researcher Gerhard Klimeck and his group, available on the nanoHUB portal, provide new information about the limits of current semiconductor technologies and are helping design future generations of nanoelectronic devices. NanoHUB, supported by NSF, is the first broadly successful, scientific end-to-end cloud computing environment. It provides a library of 3,000 learning resources to 195,000 users worldwide. Its 232 simulation tools are used in the cloud by over 10,800 researchers and students annually. Earthquakes originate through complex interactions deep below the surface of the Earth, making them notoriously difficult to predict. The Southern California Earthquake Center (SCEC) and its lead scientist Thomas Jordan use massive computing power to simulate the dynamics of earthquakes. In doing so, SCEC helps to provide long-term earthquake forecasts and more accurate hazard assessments. In 2014, the SCEC team investigated the earthquake potential of the Los Angeles Basin, where the Pacific and North American Plates run into each other at the San Andreas Fault. Their simulations showed that the basin essentially acts like a big bowl of jelly that shakes during earthquakes, producing more high-shaking ground motions than the team expected. 
Using the NSF-funded Blue Waters supercomputer at the National Center for Supercomputing Applications and the Department of Energy-funded Titan supercomputer at the Oak Ridge Leadership Computing Facility, the researchers turned their simulations into seismic hazard models. These models describe the probability of an earthquake occurring in a given geographic area, within a given window of time and with ground motion intensity exceeding a given threshold. Nearly 33,000 people die in the U.S. each year due to motor vehicle crashes, according to the National Highway Traffic Safety Administration. Modern restraint systems save lives, but some deaths and injuries remain—and restraints themselves can cause injuries. Researchers from the Center for Injury Biomechanics at Wake Forest University used the Blacklight supercomputer at the Pittsburgh Supercomputing Center to simulate the impacts of car crashes with much greater fidelity than crash-test dummies can. By studying a variety of potential occupant positions, they're uncovering important factors that lead to more severe injuries, as well as ways to potentially mitigate these injuries, using advanced safety systems. Since Albert Einstein, scientists have believed that when major galactic events like black hole mergers occur, they leave a trace in the form of gravitational waves—ripples in the curvature of space-time that travel outward from the source. Advanced LIGO is a project designed to capture signs of these events. Since gravitational waves are expected to travel at the speed of light, detecting them requires two gravitational wave observatories, located 1,865 miles apart and working in unison, that can triangulate the gravitational wave signals and determine the source of the wave in the sky. In addition to being an astronomical challenge, Advanced LIGO is also a "big data" problem. The observatories take in huge volumes of data that must be analyzed to determine their meaning. Researchers estimate that Advanced LIGO will generate more than 1 petabyte of data a year, the equivalent of 13.3 years' worth of high-definition video. To achieve accurate and rapid gravitational wave detection, researchers use Extreme Science and Engineering Discovery Environment (XSEDE)—a powerful collection of advanced digital resources and services—to develop and test new methods for transmitting and analyzing these massive quantities of astronomical data. Advanced LIGO came online in September, and advanced computing will play an integral part in its future discoveries. What happens when a supercomputer reaches retirement age? In many cases, it continues to make an impact in the world. The NSF-funded Ranger supercomputer is one such example. In 2013, after five years as one of NSF's flagship computer systems, Ranger was disassembled by the Texas Advanced Computing Center (TACC) and shipped from Austin, Texas, to South Africa, Tanzania and Botswana to give root to a young and growing supercomputing community. With funding from NSF, TACC experts led training sessions in South Africa in December 2014. In November 2015, 19 delegates from Africa came to the U.S. to attend a two-day workshop at TACC as well as the Supercomputing 2015 International Conference for High Performance Computing. The effort is intended, in part, to help provide the technical expertise needed to successfully staff and operate the Square Kilometer Array, a new radio telescope being built in Australia and Africa, which will offer the highest resolution images in all of astronomy.
In September 2015, President Obama announced plans to improve maps and elevation models of the Arctic, including Alaska. To that end, NSF and the National Geospatial-Intelligence Agency (NGA) are supporting the development of high-resolution Digital Elevation Models in order to provide consistent coverage of the globally significant region. The models will allow researchers to see in detail how warming in the region affects the landscape in remote areas, and allow them to compare changes over time. The project relies, in part, on the computing and data analysis powers of Blue Waters, which will let researchers store, access and analyze large numbers of images and models. To solve some of society's most pressing long-term problems, the U.S. needs to educate and train the next generation of scientists and engineers to use advanced computing effectively. This pipeline of training begins as early as high school and continues throughout the careers of scientists. Last summer, TACC hosted 50 rising high school juniors and seniors to participate in an innovative new STEM program, CODE@TACC. The program introduced students to high-performance computing, life sciences and robotics. On the continuing education front, XSEDE offers hundreds of training classes each year to help researchers update their skills and learn new ones. High-performance computing has another use in education: to assess how students learn and ultimately to provide personalized educational paths. A recent report from the Computing Research Association, "Data-Intensive Research in Education: Current Work and Next Steps," highlights insights from two workshops on data-intensive education initiatives. The LearnSphere project at Carnegie Mellon University, an NSF Data Infrastructure Building Blocks project, is putting these ideas into practice. Researchers are also experimenting with cloud computing on new platforms: in 2014, NSF invested $20 million to create two cloud computing testbeds that let the academic research community develop and experiment with cloud architectures and pursue new, architecturally-enabled applications of cloud computing. CloudLab (with sites in Utah, Wisconsin and South Carolina) came online in May 2015 and provides researchers with the ability to create custom clouds and test adjustments at all levels of the infrastructure, from the bare metal on up. Chameleon, a large-scale, reconfigurable experimental environment for cloud research, co-located at the University of Chicago and The University of Texas at Austin, went into production in July 2015. Both serve hundreds of researchers at universities across the U.S. and let computer scientists experiment with unique cloud architectures in ways that weren't available before. The NSF-supported "Comet" system at the San Diego Supercomputer Center (SDSC) was dedicated in October and is already aiding scientists in a number of fields, including domains relatively new for supercomputer integration, such as neuroscience. SDSC recently received a major grant to expand the Neuroscience Gateway, which provides easy access to advanced cyberinfrastructure tools and resources through a web-based portal, and can significantly improve the productivity of researchers. The gateway will contribute to the national BRAIN Initiative and deepen our understanding of the human brain.
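
As a quick sanity check of the Advanced LIGO data-volume comparison quoted above (more than a petabyte per year, said to equal 13.3 years of high-definition video), the implied video bitrate comes out at a plausible HD figure; the byte count and unit conversions below are my assumptions, not numbers from the article.

    # Back-of-envelope check: if 1 PB/year equals 13.3 years of continuous HD video,
    # what bitrate does that imply for the video? (Assumption: 1 PB = 1e15 bytes.)
    SECONDS_PER_YEAR = 365.25 * 24 * 3600
    petabyte_bits = 1e15 * 8
    implied_bitrate_mbps = petabyte_bits / (13.3 * SECONDS_PER_YEAR) / 1e6
    print(f"{implied_bitrate_mbps:.0f} Mb/s")  # ~19 Mb/s, a typical HD video bitrate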


Parsons A.R., University of California at Berkeley | Liu A., University of California at Berkeley | Aguirre J.E., University of Pennsylvania | Ali Z.S., University of California at Berkeley | And 14 more authors.
Astrophysical Journal | Year: 2014

We present new constraints on the 21 cm Epoch of Reionization (EoR) power spectrum derived from three months of observing with a 32-antenna, dual-polarization deployment of the Donald C. Backer Precision Array for Probing the Epoch of Reionization in South Africa. In this paper, we demonstrate the efficacy of the delay-spectrum approach to avoiding foregrounds, achieving over eight orders of magnitude of foreground suppression (in mK²). Combining this approach with a procedure for removing off-diagonal covariances arising from instrumental systematics, we achieve a best 2σ upper limit of (41 mK)² for k = 0.27 h Mpc⁻¹ at z = 7.7. This limit falls within an order of magnitude of the brighter predictions of the expected 21 cm EoR signal level. Using the upper limits set by these measurements, we generate new constraints on the brightness temperature of 21 cm emission in neutral regions for various reionization models. We show that for several ionization scenarios, our measurements are inconsistent with cold reionization. That is, heating of the neutral intergalactic medium (IGM) is necessary to remain consistent with the constraints we report. Hence, we have suggestive evidence that by z = 7.7, the H I has been warmed from its cold primordial state, probably by X-rays from high-mass X-ray binaries or miniquasars. The strength of this evidence depends on the ionization state of the IGM, which we are not yet able to constrain. This result is consistent with standard predictions for how reionization might have proceeded. © 2014. The American Astronomical Society. All rights reserved.
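
For readers less familiar with the notation, here is a schematic of the delay-spectrum quantities the abstract refers to; sign and normalization conventions vary between papers, so this is a sketch rather than the exact estimator used in the analysis:

    \tilde{V}_b(\tau) = \int d\nu \, V_b(\nu)\, e^{2\pi i \nu \tau},
    \qquad
    \Delta^2(k) \le (41\ \mathrm{mK})^2 \quad \text{at } k = 0.27\, h\, \mathrm{Mpc}^{-1},\ z = 7.7\ (2\sigma),

where V_b(ν) is the visibility measured on baseline b as a function of frequency. Spectrally smooth foregrounds are confined to delays |τ| ≲ |b|/c, while the 21 cm signal extends to larger delays (higher line-of-sight k), which is what makes the level of foreground suppression quoted above possible.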


News Article | February 1, 2016
Site: phys.org

However, the discovery in the 1990s of a completely unknown force dubbed dark energy that makes up 70% of the cosmos – causing it to expand at an accelerated rate – has taught us to be humble. Since then, astronomers have begun investing billions of pounds in experiments which aim to find out what this mysterious phenomenon is. What they discover is guaranteed to change physics forever. From the mountaintops of Chile and Hawaii, several telescopes have so far been retro-fitted with new cameras that are beginning to make maps of large areas of the sky. The Kilo Degree Survey, the Dark Energy Survey, and the Hyper Suprime-Cam Survey are all mapping about 10% of the sky in visible light. The idea is to look at how galaxies and other structures are clustered across the sky. One way to do this is to measure "weak lensing", an effect where the light from distant galaxies is distorted by matter on its way to us, which unveils the scaffolding of matter in the universe. Dark energy has an impact on this because it counteracts the clumping of matter, pushing everything apart. This can also be measured by looking at spectra from galaxies, which separate starlight into its constituent wavelengths much like a prism does with sunlight. Another measurement technique is based on tracking supernovae, bright flashes of light that we observe when massive stars die. Because the physics of stars is relatively well understood, the amount of light that is emitted can be determined very accurately. Therefore, by measuring the brightness of supernovae, their distances can be determined, helping us track how they move as a result of the expansion of the universe. Each of these experiments plans to make its first results known soon. However, they are not purpose-built dark energy explorers. To get the most reliable results, astronomers have gone back to the drawing board and designed the experiments of the future: more powerful – and much, much more expensive – instruments with bigger cameras that can map larger parts of the sky. One of these is Euclid, a new satellite that is being built by the European Space Agency. Due to launch in 2020, over only five years it will fundamentally change our view of the sky by taking high-resolution images, much like the Hubble Space Telescope did for a small patch of sky. Euclid will use both the weak lensing and galaxy clustering approaches to map half the sky. Over in Chile, the eight-metre Large Synoptic Survey Telescope (LSST) is under construction. It too aims to map half the sky, but from the ground. Compared to Euclid, which will make a single very high-resolution image of the sky, LSST will make lower-resolution pictures (blurred by the atmosphere). However, its unique aspect is that it will create a movie of the sky by taking many hundreds of pictures over a decade, making it a powerful tool for detecting supernovae. Meanwhile, the Square Kilometer Array (SKA) will observe the sky in radio wavelengths through a network of radio dishes, which can be spread hundreds of kilometres apart. The SKA will network together radio dishes from the Sahara to South Africa, and across Australia, creating a single continent-sized telescope. In 2010, a decade-long US strategy for astronomy proposed to combine a dark energy experiment and an exoplanet telescope into one. The concept that was born was called WFIRST – a space telescope that will scan the sky at near-infrared wavelengths.
In 2012 the US National Reconnaissance Office offered to donate two unused "spy satellites", the same size as the Hubble Space Telescope but with a wider field of view. One of these satellites now forms the core of the experiment, expected to launch in the mid-2020s. While each of these telescopes will work in isolation, cosmologists are realising they'll achieve more by combining the information from all of them. Because they are very different, problematic effects such as those caused by instrument defects can be removed by comparing the results. In fact, we are creating a single super telescope for humanity. What could they find? While we don't yet know what dark energy is, there are three main contenders for it. Apart from the Higgs boson, all particles in nature are associated with "quantum fields" that can be pictured as something like wind maps. It has been theorised that, like the Higgs, dark energy could be another example of a scalar field. Of course, all of these explanations may be wrong; dark energy could be something even stranger. But thanks to the efforts being put in over the next decade, it seems we are close to finding out.
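
The supernova method mentioned above amounts to comparing a standard candle's known intrinsic brightness with how bright it appears. A minimal sketch of that calculation follows; the apparent magnitude is a made-up example and the absolute magnitude is the usual textbook value for a Type Ia supernova, not a number from any of these surveys.

    # Standard-candle distance from the distance modulus m - M = 5*log10(d_pc) - 5.
    # The apparent magnitude below is a hypothetical measurement.
    M_ABS = -19.3   # typical peak absolute magnitude of a Type Ia supernova
    m_app = 24.0    # hypothetical measured apparent magnitude

    def luminosity_distance_pc(m, M):
        return 10.0 ** ((m - M + 5.0) / 5.0)

    d_gpc = luminosity_distance_pc(m_app, M_ABS) / 1e9
    print(f"{d_gpc:.1f} Gpc")  # ~4.6 Gpc: many such distances together trace the expansion history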


News Article | August 29, 2016
Site: www.scientificcomputing.com

If you want to model weather systems, perform advanced computational mechanics, simulate the impact of climate change, study the interaction of lithium and manganese in batteries at the atomic level, or conduct the next experiment of your latest in vitro biomedical technique virtually — and you want to do it in Africa — then there is only one place to go: the Center for High Performance Computing (CHPC) in Cape Town. Built and operated within the South African Council for Scientific and Industrial Research (CSIR), the CHPC is home to South Africa’s newest (and only) supercomputer. Named “Lengau,” which means “Cheetah” in Setswana, the system became fully operational in May 2016 and was ranked 121 on the June TOP500 list of the world’s fastest supercomputers. Its mission: to make South Africa, and indeed Africa itself, a major player within the international community of HPC-driven scientific researchers while also boosting South Africa’s burgeoning development in scientific and technical education. Such world-class ambitions, however, require equally world-class technology. Based on Intel Xeon processors, the new system comprises 1,013 Dell PowerEdge servers totaling 19 racks of compute nodes and storage. It has a total storage capacity of five petabytes and uses Dell networking Ethernet switches and Mellanox FDR InfiniBand with a maximum interconnect speed of 56 Gb/s. With over 24,000 cores, the machine is the fastest computer on the African continent at roughly one petaflop (a thousand trillion floating point operations per second) — 15 times faster than CHPC’s previous system. The person leading the effort to make the new supercomputer a reality was CHPC Director Dr. Happy Sithole. For him, nothing less than world-class supercomputing power would suffice. “For us, it’s no different from the rest of the world in terms of looking for opportunities where we need to accelerate competitiveness. I think high performance computing is vital for competitiveness in developed countries. In South Africa we also have that ambition to accelerate areas where we are competitive in industry and science.” Those research domains are quite broad, Dr. Sithole says. “They cover chemistry, bioinformatics, astronomy, computational mechanics, engineering applications or systems, and the earth sciences including climate change. The South African Weather Service is a key collaborator as well as the Agricultural Research Council. It’s quite a broad spectrum of users.” But advancing scientific research is only one of the key benefits high performance computing offers South Africa, Dr. Sithole says. Helping industry is another. “The first key performance indicator for us is whether we are helping someone solve a problem faster. And the second is whether we demonstrate an impact to non-academic users — whether some of our industries can say we were able to do things much faster, we were able to generate more revenue, because of high performance computing.” Virtual prototyping is a prime example, he says. “The more you are able to do virtual prototypes the faster you can take your product to market. And here at CHPC we have an ongoing investment in virtual prototyping.” But if CHPC shares many of the same goals as other high performance computing centers, it also faces some unique challenges as well as opportunities. “If you look at most centers around the world,” Dr. Sithole says, “they have the option to focus on a specific area. But we don’t have that luxury.
We have some users who don’t have access to any other computing resources. That is our uniqueness — that we are the only center in the country and in the continent. We have all those users with varied needs of computing and also of application requirements. But our unique geographical position also brings us unique opportunities and some very good partnerships.” A good example is climate change research. Like other countries, South Africa is very concerned about the future impact greenhouse gases will have on public health, agriculture, the availability of fresh water, and other areas. But what makes climate research here different is its focus on the Southern Hemisphere. “Perhaps our biggest user,” Dr. Sithole says, “is a climate modeling team from the CSIR, which absolutely depends on the CHPC for what they call Variable Resolution Earth System Model or VRESM. This is an earth systems model for climate prediction that contributes to global research efforts. It specifically focuses on the Southern Hemisphere whereas similar modeling efforts elsewhere only focus on the Northern Hemisphere. VRESM relies on the CHPC because of the level of computing resources they are accessing — 9,000 to 10,000 cores at a time — which they cannot get anywhere else. And where before their models were limited to an eight-kilometer resolution, today they are at one-kilometer resolution. This is something they could not do before.” Another example is materials science, particularly in fields like battery research and minerals differentiation (extracting precious metals from ores). South Africa ranks either very near or at the top in deposits of metals and minerals like manganese, platinum, chromite, vanadium, and vermiculite. Here too the new system’s increased computational power is having a clear impact. According to Dr. Sithole, “Materials science models that once took 11 days to finish now only take three-quarters of a day. That’s a major improvement.” On the battery side, scientists use CHPC to model the interaction of atoms from different metals, like lithium and manganese, as a way to predict battery performance. “They’re looking at lithium manganese dioxide,” says Dr. Sithole. “In order to realistically show what happens in the actual battery system, researchers need to simulate a large number of lithium atoms traveling through the manganese. That means scaling the size of the battery system to millions of atoms. Where they could only model hundreds before, they have already surpassed 120,000 atoms and they now see they can push to millions.” CHPC will also play a key role in support of the world’s largest radio telescope — the Square Kilometer Array (SKA) — scheduled to be deployed in South Africa’s Karoo desert by the year 2020. It will be 50 times more sensitive and survey the sky 10,000 times faster than today’s most powerful radio telescopes — and also generate record-setting amounts of astronomical data. The precursor to SKA is the MeerKAT radio telescope, located in South Africa’s Northern Cape. To enable users to have close proximity to their data and also help balance MeerKAT’s — and soon SKA’s — huge compute load, CHPC will support portions of MeerKAT’s data analysis and hold its archives. CHPC will also participate in efforts to create Africa’s first data-intensive cloud infrastructure as part of the country’s new Inter-University Institute for Data Intensive Astronomy (IDIA). Supporting these types of use cases would be impossible, Dr. Sithole says, without the help of vendor partners.
“You would not be able to achieve this through working alone. We worked very closely with the Intel team especially when it came to working with the Lustre vendors but also in looking at the libraries and other Intel related dependencies. For example, some configurations under Intel Manager for Lustre software did not allow a number of files to be written at the same time. During this whole process their people were available all the time and were very helpful in resolving issues. Without companies like Intel we would not be able to achieve benefits like efficient parallelization or the introduction of new technologies. So partnerships with OEMs are very important when you are looking to build at scale.” That’s just one of many lessons Dr. Sithole and his team learned in building out CHPC’s new supercomputer. Another was the need “to identify low hanging fruit so you can start demonstrating impact early.” Still another was to “start building expertise within your user base early and get users involved early and incrementally during the build-out process.” Thanks to leadership like that, South Africa now has its own role to play in the global community of high performance computing — while at the same time enjoying the singular opportunities that come from leveraging this continent’s unique and abundant resources. Randall Cronk of greatwriting, LLC is a technology writer with over 30 years’ experience writing about complex systems, products, and services.
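
The headline one-petaflop figure can be roughly cross-checked against the quoted core count. The clock speed and per-core vector throughput below are assumptions typical of Xeon processors of that generation; the article itself gives only the core count and the approximate petaflop rating.

    # Rough peak-performance estimate: cores x clock x double-precision FLOPs per cycle.
    # Clock and FLOPs/cycle are assumed values (AVX2 FMA-class Xeon), not figures from the article.
    cores = 24_000
    clock_hz = 2.6e9
    flops_per_cycle = 16          # 2 x 256-bit FMA units, double precision
    peak_pflops = cores * clock_hz * flops_per_cycle / 1e15
    print(f"~{peak_pflops:.1f} Pflop/s")  # ~1.0, consistent with the quoted figure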


The first discovery of a fast radio burst, lasting only 5 milliseconds, was announced in 2007 by scientists mining data from Australia's Parkes radio telescope. Unfortunately, the burst did not repeat, so it couldn't be independently confirmed by others. Several years passed before new bursts were found at different locations in the sky using independent telescopes at Arecibo, Puerto Rico, and Green Bank, US. But astronomers failed to agree on what had caused the bursts. As they had been one-off blips, more like whistlers than repeating signals, it was suggested that they could come from catastrophic one-off events – such as a neutron star colliding with a black hole. Other explanations included huge flashes of brightness, similar to solar flares, from stars in our own galaxy, or simply contaminating signals from radio waves emitted on Earth. Some even speculated that the signals could be transmitted by distant alien civilisations. On March 2, it seemed the mystery had finally been solved when scientists announced the detection of what they interpreted to be an afterglow – lasting six days – from a fast radio burst. For the first time, they were able to suggest the galaxy that the burst could have come from, roughly 6 billion light years from Earth. The researchers suggested it likely originated when two compact objects such as a neutron star and a black hole collided. But astronomers' excitement was short-lived. Just days later, new observations from the Very Large Array suggested these findings could be flawed. The array had seen the signal get stronger rather than fade, which would have been expected for an afterglow. The researchers therefore concluded it was not an afterglow from a fast radio burst at all, but rather radiation from a supermassive black hole at the heart of the galaxy gradually devouring material from its surroundings. This is a common phenomenon; even the centres of nearby galaxies show variable radio brightness. To make matters even more confusing, astronomers using the Arecibo radio telescope have just announced the discovery of a collection of signals which they argue could be a repeating fast radio burst. This is a huge surprise as we had until now assumed that the bursts were one-off events. These signals are ten times weaker than traditional fast radio bursts and seem to have different properties. There also remains uncertainty about the exact location of each burst on the sky, so they may not be related to one catastrophic event. It is clear that many more fast radio bursts need to be discovered and studied before generalisations about their nature and origin can be made. Trying to discover a burst and, at exactly the same time, pinpoint accurately where on the sky it comes from is still a major challenge for radio observatories, as their telescopes have relatively small fields of view. This is also challenging for astronomers working at wavelengths other than the radio bands who are searching for other kinds of electromagnetic radiation (such as X-rays or the kind of optical light that we can see). Such radiation may have been emitted in the same event that caused the fast radio bursts. If a radio signal could be backed up by discoveries in these other parts of the spectrum, we could measure the distance and understand the physics processes driving these events. This problem is not new to astronomers.
Intense, short-lived flashes of high-energy gamma rays (radiation that is much more energetic than radio waves) – discovered by military satellites in the 1960s and named "gamma ray bursts" – remained a mystery until they could be pinpointed in the sky with sufficient accuracy to allow other telescopes (looking in different wavelengths) to help search. Scientists working with different telescopes could ultimately establish that they came from far beyond our own galaxy. A revolution in the understanding of the underlying physics of these bursts came with the launch of NASA's Swift satellite, which accurately locates new bursts and automatically notifies ground-based robotic telescopes in real time so they can get a closer look. At radio wavelengths, new breakthroughs may come from upgrades of existing observatories, but the development of a new global radio facility – the Square Kilometer Array (SKA) – in the coming decade is set to revolutionise this field. SKA, a huge network of radio antennas, will combine the ability to see large parts of the sky with fast detection technology to create accurate radio maps of the sky at any given moment. This will give radio astronomers a super all-in-one search and locate machine. It could also deliver news of discoveries in real time to astronomers searching for light at other parts of the spectrum with other facilities, which will see a similar revolution in the ability to scan the sky in real time. But in the radio bands in particular, it will never be possible to store all of the data collected. Instead, astronomers will have to develop sophisticated hardware and software to sift through the data in real time to capture and identify fleeting events like fast radio bursts. It may seem like a lot of hurdles to overcome, but it is all happening at a rapid pace. So perhaps sometime in the next decade or two we will know whether "fast radio bursts" are created by aliens or cataclysmic events … or just from microwaves in our kitchens.
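
The real-time sifting described above typically starts with a brute-force dedispersion search: undo the frequency-dependent delay for many trial dispersion measures and look for a spike in the frequency-summed signal. The sketch below illustrates the idea on random stand-in data; the array sizes, observing band, DM grid and detection threshold are arbitrary choices, not parameters of any particular instrument.

    import numpy as np

    # Brute-force incoherent dedispersion search (illustrative; runs on noise here,
    # so nothing should trigger — real pipelines run this on live filterbank data).
    N_CHAN, N_TIME, DT_S = 256, 4096, 0.001            # channels, time samples, 1 ms sampling
    freqs_ghz = np.linspace(1.2, 1.4, N_CHAN)          # assumed observing band
    block = np.random.normal(size=(N_CHAN, N_TIME))    # stand-in for a filterbank data block

    def dedisperse_and_sum(data, dm):
        """Shift each channel by the cold-plasma delay for this trial DM, then sum over frequency."""
        delays_s = 4.148808e-3 * dm * (freqs_ghz ** -2 - freqs_ghz.max() ** -2)
        shifts = np.round(delays_s / DT_S).astype(int)
        summed = np.zeros(data.shape[1])
        for chan, shift in enumerate(shifts):
            summed += np.roll(data[chan], -shift)      # np.roll wraps around; fine for a sketch
        return summed

    for trial_dm in np.arange(0.0, 2000.0, 50.0):      # coarse grid of trial DMs
        series = dedisperse_and_sum(block, trial_dm)
        snr = (series - series.mean()) / series.std()
        if snr.max() > 8.0:                            # simple single-pulse "trigger"
            print(f"candidate at DM = {trial_dm:.0f} pc cm^-3, S/N = {snr.max():.1f}")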


News Article | January 3, 2016
Site: motherboard.vice.com

There is more to the violence of a black hole than the cataclysmic inward suck of infinite gravity. There is a yang to the yin of a black hole's inward attraction (accretion), which takes the form of relativistic jets that blast outward from the black hole, sometimes stretching for millions of light-years. A recent discovery, described in a paper published Friday in Science, offers a new perspective on these jets thanks to a passing star unfortunate enough to wander too close to a quasar (either a supermassive black hole or a binary black hole system found at the center of a galaxy) spotted by the All-Sky Automated Survey for Supernovae (ASAS-SN) project. The event is known as ASASSN-14li. The star in question was promptly ripped to shreds, resulting in a tidal disruption flare (TDF) as its remains fell inward and were then blasted back out into space by the energy of the black hole's jet. A TDF is a very rare event—occurring about once every 100,000 years or so per galaxy—and one highly sought after by astronomers trying to generally explain the accretion/outflow balance of black holes. This is an ongoing challenge for the simple reason that the processes involved typically evolve at timescales much longer than a human lifespan. TDFs, however, offer a much shorter timeframe, an accelerated perspective on the otherwise masked accretion/outflow processes of a supermassive black hole. "TDFs are events associated with the gravitational disruption of a star making a close passage to a super massive black hole," explains astrophysicist Geoffrey C. Bower in the awesomely titled Science perspective, "The Screams of a Star Being Ripped Apart." "If a star passes within the tidal radius of a black hole then the differential gravity on the star will rip the star apart." "As the stellar remnant approaches the black hole," Bower writes, "its gravitational potential energy is converted into heat through viscous effects. The accretion flow will reach a temperature of 10⁵ K and glow brightly at optical, ultraviolet, and x-ray wavelengths for about 100 days." This is a funny thing about black holes—they have so much gravitational pull that all of the infalling material winds up smashing up against other infalling material, with the result being tremendous amounts of energy released via friction. Not everything just falls into the void. These are the jets we're talking about, and they're what blasted the shredded star's remains back out into space in the TDF event dubbed ASASSN-14li. This isn't the first TDF identified, but, as the researchers explain, ASASSN-14li is the first observed within 30 days of its peak. Most have been bagged only years later. The ASAS-SN group was able to rule out jets fueled by more typical everyday accretion because this one erupted and then "switched off" so abruptly. There is still a lot to learn about TDFs and black hole jets generally. The current ASAS-SN results, Bower concludes, "suggest that new discoveries may also come from large-area surveys at radio wavelengths. Powerful telescopes such as the Very Large Array, as well as new telescopes optimized for surveys such as the Australian Square Kilometer Array Pathfinder and MeerKAT in South Africa, will have a rich future in searching for TDFs."
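
An order-of-magnitude look at the tidal radius Bower mentions helps explain why such disruptions are visible at all; the black hole mass used below is a generic illustrative value, not a measured mass for ASASSN-14li.

    # Tidal radius R_t ~ R_star * (M_bh / M_star)^(1/3) versus the event-horizon scale.
    # The 1e6 solar-mass black hole is an illustrative choice.
    G, C = 6.674e-11, 2.998e8
    R_SUN_M, M_SUN_KG = 6.957e8, 1.989e30

    def tidal_radius_m(r_star, m_star, m_bh):
        return r_star * (m_bh / m_star) ** (1.0 / 3.0)

    def schwarzschild_radius_m(m_bh):
        return 2.0 * G * m_bh / C ** 2

    m_bh = 1e6 * M_SUN_KG
    print(f"tidal radius ~ {tidal_radius_m(R_SUN_M, M_SUN_KG, m_bh):.1e} m")   # ~7e10 m
    print(f"horizon scale ~ {schwarzschild_radius_m(m_bh):.1e} m")             # ~3e9 m
    # Because the tidal radius lies well outside the horizon for a black hole of this
    # mass, the star is shredded where its debris can still be seen flaring.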


News Article | March 4, 2016
Site: www.sciencenews.org

A recently claimed home for an elusive cosmic radio burst might not be the host galaxy after all. What appeared to be an afterglow from the eruption might instead have been a run-of-the-mill radio emission from an unrelated galaxy, researchers claim online February 28 at arXiv.org. Fast radio bursts, ephemeral blasts of radio waves that appear to originate in other galaxies, have been stumping astronomers since 2007 (SN: 8/9/14, p. 22). Identifying a host galaxy for an FRB could provide a clue to its cause. A recent FRB seemed to finally leave a return address (SN Online: 2/24/16). Two hours after the initial detection, astronomers caught a fading radio signal coming from the same direction. That signal led a team headed by Evan Keane, an astronomer with the Square Kilometer Array Organization in Macclesfield, England, to a galaxy about 6 billion light-years away. But the claimed afterglow might have nothing to do with the FRB, Harvard University astronomers Peter Williams and Edo Berger suggest. A supermassive black hole appears to live in that galaxy, and it is actively feeding off a swirling disk of interstellar detritus. Such cosmic snacks routinely belch out radio waves. Observations of the galaxy obtained on February 26 and 27 at the Very Large Array in New Mexico show that not only has the “afterglow” returned, but it is brighter than what researchers saw in the hours after the FRB detection. Williams and Berger argue that the galaxy is not the home of the FRB. But Duncan Lorimer, an astrophysicist at West Virginia University in Morgantown, says, “I would be cautious about dismissing the result.” Astronomers don’t know what causes FRBs and many mysteries remain. One burst detected in 2012, for example, recently became the first FRB known to repeat itself after erupting 10 more times last year (SN Online: 3/2/16). It’s possible that this one repeats as well, Lorimer says. “It tells us how little we still know.” Keane and colleagues are performing additional studies to better understand what’s going on. “When we've completed and fully considered those, we will certainly report our findings,” he says. “I know that FRBs are exciting, and appreciate that there is a lot of interest, but we really can't rush the scientific process.”


One of the world's largest fully steerable radio telescopes, the Effelsberg 100-m dish, surveyed the entire northern sky in the light of the neutral hydrogen (HI) 21-cm line. This effort, led by Jürgen Kerp (Argelander Institute for Astronomy) and Benjamin Winkel (Max Planck-Institut für Radioastronomie), began in 2008 and has culminated today in the initial data release of the Effelsberg-Bonn HI Survey (EBHIS). Funded by the German Research Foundation (Deutsche Forschungsgemeinschaft - DFG), the EBHIS data base is now freely accessible for all scientists around the world. In addition to the now released Milky Way data, the EBHIS project also includes unique information about HI in external galaxies out to a distance of about 750 million light years from Earth. Hydrogen is THE ELEMENT of the universe. Consisting of a single proton and an electron, it is the simplest and most abundant element in space. One could almost consider the universe as a pure hydrogen universe, albeit with some minor "pollution" by heavier elements, among them carbon, the fundamental component of all organisms on Earth. The 21-cm line is a very faint but characteristic emission line of neutral atomic hydrogen (or HI). It is not only feasible to detect the weakest signals from distant galaxies with the 100-m Effelsberg antenna, but also to determine their motion relative to Earth with high precision. A special receiver was required in order to enable the EBHIS project. With seven receiving elements observing the sky independently from each other, it was possible to reduce the necessary observing time from decades to only about five years. Field Programmable Gate Array (FPGA) spectrometers were developed within the course of the EBHIS project, allowing real-time processing and storage of about 100 million individual HI spectra with consistently good quality. The individual HI spectra were combined using high-performance computers into a unique map of the entire northern sky that provides unsurpassed richness of detail in the gas of the Milky Way Galaxy. Astronomy students at Bonn University had unique access to the pre-release EBHIS data. In 2013 the European Space Agency (ESA) signed a memorandum of understanding with the Bonn HI radio astronomers. ESA was granted exclusive access to EBHIS data for their Planck satellite mission and, in return, Bonn students were given unique access to Planck data for their thesis projects. Twelve Bachelor, nine Master, and five Doctoral thesis projects have been successfully completed since 2008. The Square Kilometer Array (SKA), the world's largest future radio astronomical facility, to be constructed in Australia and South Africa, will benefit directly from the EBHIS data. Because the SKA is built as a radio interferometer, it is inherently insensitive to the faint and extended HI emission of the Milky Way and nearby external galaxies. Since the HI gas is measured very well by EBHIS, only combining SKA and EBHIS data will allow one to derive a comprehensive view of the interstellar HI gas. The Effelsberg-Bonn HI Survey will be a rich resource for science in the near and far future. No independent attempts to survey the entire northern sky with a 100-m class telescope are scheduled. The EBHIS data will thus set the quality standard for the Milky Way Galaxy HI for the next decades. More information: B. Winkel et al., "The Effelsberg-Bonn HI Survey: Milky Way gas," Astronomy & Astrophysics (2015). DOI: 10.1051/0004-6361/201527007
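
To illustrate how an HI survey like EBHIS turns a measured 21-cm line frequency into a gas velocity, here is a minimal sketch using the radio convention for Doppler velocity; the observed frequency is a made-up example, not an EBHIS measurement.

    # Radial velocity from an observed 21-cm line frequency (radio convention).
    C_KM_S = 299792.458
    F_HI_MHZ = 1420.405751          # rest frequency of the neutral-hydrogen 21-cm line

    def radial_velocity_kms(f_obs_mhz):
        """Positive values mean the gas is receding from us."""
        return C_KM_S * (F_HI_MHZ - f_obs_mhz) / F_HI_MHZ

    print(f"{radial_velocity_kms(1419.0):.0f} km/s")   # ~ +297 km/s for this example line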
