South Africa

News Article | January 3, 2016
Site: http://motherboard.vice.com/

There is more to the violence of a black hole than the cataclysmic inward pull of its immense gravity. There is a yang to the yin of a black hole's inward attraction (accretion), which takes the form of relativistic jets that blast outward from the black hole, sometimes stretching for millions of light-years.

A recent discovery, described in a paper published Friday in Science, offers a new perspective on these jets thanks to a passing star unfortunate enough to wander too close to a quasar (a supermassive black hole, or binary black hole system, found at the center of a galaxy) spotted by the All-Sky Automated Survey for Supernovae (ASAS-SN) project. The event is known as ASASSN-14li. The star in question was promptly ripped to shreds, resulting in a tidal disruption flare (TDF) as its remains fell inward and were then blasted back out into space by the energy of the black hole's jet.

A TDF is a very rare event—occurring about once every 100,000 years or so per galaxy—and one highly sought after by astronomers trying to explain the accretion/outflow balance of black holes in general. This is an ongoing challenge for the simple reason that the processes involved typically evolve on timescales much longer than a human lifespan. TDFs, however, offer a much shorter timeframe: an accelerated view of the otherwise hidden accretion/outflow processes of a supermassive black hole.

"TDFs are events associated with the gravitational disruption of a star making a close passage to a supermassive black hole," explains astrophysicist Geoffrey C. Bower in the awesomely titled Science perspective "The Screams of a Star Being Ripped Apart." "If a star passes within the tidal radius of a black hole then the differential gravity on the star will rip the star apart."

"As the stellar remnant approaches the black hole," Bower writes, "its gravitational potential energy is converted into heat through viscous effects. The accretion flow will reach a temperature of 10⁵ K and glow brightly at optical, ultraviolet, and x-ray wavelengths for about 100 days."

This is a funny thing about black holes—they have so much gravitational pull that infalling material winds up smashing against other infalling material, releasing tremendous amounts of energy via friction. Not everything just falls into the void. These outflows are the jets in question, and in ASASSN-14li they blasted the remains of the shredded star back out into space.

This isn't the first TDF identified, but, as the researchers explain, ASASSN-14li is the first observed within 30 days of its peak. Most have been bagged only years later. The ASAS-SN group was able to rule out jets fueled by more typical, everyday accretion because this one erupted and then "switched off" so abruptly.

There is still a lot to learn about TDFs and black hole jets generally. The current ASAS-SN results, Bower concludes, "suggest that new discoveries may also come from large-area surveys at radio wavelengths. Powerful telescopes such as the Very Large Array, as well as new telescopes optimized for surveys such as the Australian Square Kilometer Array Pathfinder and MeerKAT in South Africa, will have a rich future in searching for TDFs."
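
Bower's tidal-radius criterion can be made concrete with a simple estimate: a star is disrupted roughly when it passes inside r_t ≈ R_star × (M_BH / M_star)^(1/3). The sketch below plugs in a Sun-like star and an illustrative black hole of a million solar masses; these numbers are assumptions for illustration, not measurements from the ASASSN-14li paper.

```python
# Rough tidal-disruption estimate: a star is torn apart roughly when it
# passes inside r_t ~ R_star * (M_bh / M_star)**(1/3).
# Illustrative numbers only (Sun-like star, 1e6 solar-mass black hole);
# not values taken from the ASASSN-14li measurements.

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8            # speed of light, m/s
M_sun = 1.989e30       # kg
R_sun = 6.957e8        # m

M_bh = 1e6 * M_sun     # assumed black-hole mass
M_star, R_star = M_sun, R_sun

r_tidal = R_star * (M_bh / M_star) ** (1.0 / 3.0)
r_schwarzschild = 2 * G * M_bh / c**2

print(f"tidal radius         ~ {r_tidal:.2e} m")
print(f"Schwarzschild radius ~ {r_schwarzschild:.2e} m")
# For a ~1e6 solar-mass black hole the tidal radius lies well outside the
# event horizon, so the disruption is visible rather than swallowed whole.
```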


The first discovery of a fast radio burst, lasting only 5 milliseconds, was announced in 2007 by scientists mining data from Australia's Parkes radio telescope. Unfortunately, the burst did not repeat, so it couldn't be independently confirmed by others. Several years passed before new bursts were found at different locations in the sky using independent telescopes at Arecibo, Puerto Rico, and Green Bank in the US.

But astronomers failed to agree on what had caused the bursts. As they had been one-off blips, more like whistlers than repeating signals, it was suggested that they could come from catastrophic one-off events – such as a neutron star colliding with a black hole. Other explanations included huge flashes of brightness, similar to solar flares, from stars in our own galaxy, or simply contaminating signals from radio waves emitted on Earth. Some even speculated that the signals could be transmitted by distant alien civilisations.

On March 2, it seemed the mystery had finally been solved when scientists announced the detection of what they interpreted to be an afterglow – lasting six days – from a fast radio burst. For the first time, they were able to suggest the galaxy the burst could have come from, roughly 6 billion light years from Earth. The researchers suggested it likely originated when two compact objects, such as a neutron star and a black hole, collided.

But astronomers' excitement was short-lived. Just days later, new observations from the Very Large Array suggested these findings could be flawed. The array had seen the signal get stronger rather than fade, as would have been expected for an afterglow. The researchers therefore concluded it was not an afterglow from a fast radio burst at all, but rather radiation from a supermassive black hole at the heart of the galaxy gradually devouring material from its surroundings. This is a common phenomenon; even the centres of nearby galaxies show variable radio brightness.

To make matters even more confusing, astronomers using the Arecibo radio telescope have just announced the discovery of a collection of signals which they argue could be a repeating fast radio burst. This is a huge surprise, as we had until now assumed that the bursts were one-off events. These signals are ten times weaker than traditional fast radio bursts and seem to have different properties. There also remains uncertainty about the exact location of each burst on the sky, so they may not be related to one catastrophic event.

It is clear that many more fast radio bursts need to be discovered and studied before generalisations about their nature and origin can be made. Trying to discover a burst and, at exactly the same time, pinpoint accurately where on the sky it comes from is still a major challenge for radio observatories, as their telescopes have relatively small fields of view. This is also challenging for astronomers working at wavelengths other than the radio bands who are searching for other kinds of electromagnetic radiation (such as X-rays or the kind of optical light that we can see). Such radiation may have been emitted in the same event that caused the fast radio bursts. If a radio signal could be backed up by discoveries in these other parts of the spectrum, we could measure the distance and understand the physical processes driving these events. This problem is not new to astronomers.
Intense, short-lived flashes of high-energy gamma rays (radiation that is much more energetic than radio waves) – discovered by military satellites in the 1960s and named "gamma ray bursts" – remained a mystery until they could be pinpointed in the sky with sufficient accuracy to allow other telescopes (looking in different wavelengths) to help search. Scientists working with different telescopes could ultimately establish that they came from far beyond our own galaxy. A revolution in the understanding of the underlying physics of these bursts came with the launch of NASA's Swift satellite, which accurately locates new bursts and automatically notifies ground-based robotic telescopes in real time so they can get a closer look.

At radio wavelengths, new breakthroughs may come from upgrades of existing observatories, but the development of a new global radio facility – the Square Kilometer Array (SKA) – in the coming decade is set to revolutionise this field. The SKA, a huge network of radio antennas, will combine the ability to see large parts of the sky with fast detection technology to create accurate radio maps of the sky at any given moment. This will give radio astronomers a super all-in-one search-and-locate machine. It could also deliver news of discoveries in real time to astronomers searching for light in other parts of the spectrum with other facilities, which will see a similar revolution in their ability to scan the sky in real time.

But in the radio bands in particular, it will never be possible to store all of the data collected. Instead, astronomers will have to develop sophisticated hardware and software to sift through the data in real time to capture and identify fleeting events like fast radio bursts. It may seem like a lot of hurdles to overcome, but it is all happening at a rapid pace. So perhaps sometime in the next decade or two we will know whether "fast radio bursts" are created by aliens, by cataclysmic events … or just by the microwaves in our kitchens.
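
The real-time sifting described above is, at its simplest, a matter of flagging samples that stand far above the noise floor. The sketch below is a minimal, purely illustrative threshold detector run on simulated data; real search pipelines also correct for dispersion and search many trial pulse widths, and nothing here reflects any particular observatory's software.

```python
# Minimal illustration of real-time burst flagging: mark any sample that
# exceeds the estimated noise level by a chosen factor. This is only a
# sketch on simulated data, not a real observatory pipeline.
import numpy as np

rng = np.random.default_rng(0)
samples = rng.normal(0.0, 1.0, 10_000)   # simulated noise (arbitrary units)
samples[4321] += 12.0                    # inject one bright, fast "burst"

threshold_sigma = 7.0
noise = np.median(np.abs(samples)) / 0.6745   # robust noise estimate (MAD)
candidates = np.flatnonzero(samples > threshold_sigma * noise)

print("candidate burst samples:", candidates)  # -> [4321]
```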


One of the world's largest fully steerable radio telescopes, the Effelsberg 100-m dish, has surveyed the entire northern sky in the light of the neutral hydrogen (HI) 21-cm line. This effort, led by Jürgen Kerp (Argelander Institute for Astronomy) and Benjamin Winkel (Max Planck-Institut für Radioastronomie), began in 2008 and has culminated today in the initial data release of the Effelsberg-Bonn HI Survey (EBHIS). Funded by the German Research Foundation (Deutsche Forschungsgemeinschaft, DFG), the EBHIS database is now freely accessible to all scientists around the world. In addition to the now-released Milky Way data, the EBHIS project also includes unique information about HI in external galaxies out to a distance of about 750 million light years from Earth.

Hydrogen is THE ELEMENT of the universe. Consisting of a single proton and an electron, it is the simplest and most abundant element in space. One could almost consider the universe a pure hydrogen universe, albeit with some minor "pollution" by heavier elements, among them carbon, the fundamental component of all organisms on Earth. The 21-cm line is a very faint but characteristic emission line of neutral atomic hydrogen (HI). With the 100-m Effelsberg antenna it is feasible not only to detect the weakest signals from distant galaxies, but also to determine their motion relative to Earth with high precision.

A special receiver was required to enable the EBHIS project. With seven receiving elements observing the sky independently of each other, it was possible to reduce the necessary observing time from decades to only about five years. Field Programmable Gate Array (FPGA) spectrometers were developed in the course of the EBHIS project, allowing real-time processing and storage of about 100 million individual HI spectra of consistently good quality. The individual HI spectra were combined on high-performance computers into a unique map of the entire northern sky that provides unsurpassed richness of detail in the Milky Way's gas.

Astronomy students at Bonn University had unique access to the pre-release EBHIS data. In 2013 the European Space Agency (ESA) signed a memorandum of understanding with the Bonn HI radio astronomers: ESA was granted exclusive access to EBHIS data for its Planck satellite mission and, in return, Bonn students were given unique access to Planck data for their thesis projects. Twelve Bachelor, nine Master, and five Doctoral thesis projects have been successfully completed since 2008.

The Square Kilometer Array (SKA), the world's largest future radio astronomical facility, to be constructed in Australia and South Africa, will benefit directly from the EBHIS data. Because the SKA is built as a radio interferometer, it is inherently insensitive to the faint and extended HI emission of the Milky Way and nearby external galaxies. Since this HI gas is measured very well by EBHIS, only the combination of SKA and EBHIS data will allow a comprehensive view of the interstellar HI gas.

The Effelsberg-Bonn HI Survey will be a rich resource for science in the near and far future. No independent attempt to survey the entire northern sky with a 100-m-class telescope is scheduled, so the EBHIS data will set the quality standard for Milky Way HI for decades to come.

More information: B. Winkel et al., "The Effelsberg-Bonn HI Survey: Milky Way gas," Astronomy & Astrophysics (2015). DOI: 10.1051/0004-6361/201527007
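
The velocity information EBHIS provides comes from the Doppler shift of the 21-cm line relative to its rest frequency near 1420.406 MHz. The snippet below shows the basic radio-convention conversion from an observed frequency to a radial velocity; the observed frequency used is an invented example, not an EBHIS measurement.

```python
# Radial velocity from the Doppler shift of the HI 21-cm line
# (radio convention: v = c * (f_rest - f_obs) / f_rest).
# The observed frequency below is a made-up example value.

C_KM_S = 299_792.458          # speed of light, km/s
F_REST_MHZ = 1420.405751768   # HI 21-cm rest frequency, MHz

def hi_radial_velocity(f_obs_mhz: float) -> float:
    """Return the radial velocity in km/s (positive = receding)."""
    return C_KM_S * (F_REST_MHZ - f_obs_mhz) / F_REST_MHZ

print(f"{hi_radial_velocity(1419.5):.1f} km/s")   # ~ +191 km/s, i.e. receding
```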


News Article | August 29, 2016
Site: http://www.scientificcomputing.com/rss-feeds/all/rss.xml/all

If you want to model weather systems, perform advanced computational mechanics, simulate the impact of climate change, study the interaction of lithium and manganese in batteries at the atomic level, or conduct the next experiment of your latest in vitro biomedical technique virtually — and you want to do it in Africa — then there is only one place to go: the Center for High Performance Computing (CHPC) in Cape Town.

Built and operated within the South African Council for Scientific and Industrial Research (CSIR), the CHPC is home to South Africa’s newest (and only) supercomputer. Named “Lengau,” which means “cheetah” in Setswana, the system became fully operational in May 2016 and was ranked 121 on the June TOP500 list of the world’s fastest supercomputers. Its mission: to make South Africa, and indeed Africa itself, a major player within the international community of HPC-driven scientific researchers while also boosting South Africa’s burgeoning development in scientific and technical education.

Such world-class ambitions, however, require equally world-class technology. Based on Intel Xeon processors, the new system comprises 1,013 Dell PowerEdge servers totaling 19 racks of compute nodes and storage. It has a total storage capacity of five petabytes and uses Dell Networking Ethernet switches and Mellanox FDR InfiniBand with a maximum interconnect speed of 56 Gb/s. With over 24,000 cores, the machine is the fastest computer on the African continent at roughly one petaflop (a thousand trillion floating point operations per second) — 15 times faster than CHPC’s previous system (a rough arithmetic check of this figure appears at the end of this piece).

The person leading the effort to make the new supercomputer a reality was CHPC Director Dr. Happy Sithole. For him, nothing less than world-class supercomputing power would suffice. “For us, it’s no different from the rest of the world in terms of looking for opportunities where we need to accelerate competitiveness. I think high performance computing is vital for competitiveness in developed countries. In South Africa we also have that ambition to accelerate areas where we are competitive in industry and science.”

Those research domains are quite broad, Dr. Sithole says. “They cover chemistry, bioinformatics, astronomy, computational mechanics, engineering applications or systems, and the earth sciences including climate change. The South African Weather Service is a key collaborator, as well as the Agricultural Research Council. It’s quite a broad spectrum of users.”

But advancing scientific research is only one of the key benefits high performance computing offers South Africa, Dr. Sithole says. Helping industry is another. “The first key performance indicator for us is whether we are helping someone solve a problem faster. And the second is whether we demonstrate an impact to non-academic users — whether some of our industries can say we were able to do things much faster, we were able to generate more revenue, because of high performance computing.” Virtual prototyping is a prime example, he says. “The more you are able to do virtual prototypes, the faster you can take your product to market. And here at CHPC we have an ongoing investment in virtual prototyping.”

But if CHPC shares many of the same goals as other high performance computing centers, it also faces some unique challenges as well as opportunities. “If you look at most centers around the world,” Dr. Sithole says, “they have the option to focus on a specific area. But we don’t have that luxury.
We have some users who don’t have access to any other computing resources. That is our uniqueness — that we are the only center in the country and on the continent. We have all those users with varied needs of computing and also of application requirements. But our unique geographical position also brings us unique opportunities and some very good partnerships.”

A good example is climate change research. Like other countries, South Africa is very concerned about the future impact greenhouse gases will have on public health, agriculture, the availability of fresh water, and other areas. But what makes climate research here different is its focus on the Southern Hemisphere.

“Perhaps our biggest user,” Dr. Sithole says, “is a climate modeling team from the CSIR, which absolutely depends on the CHPC for what they call the Variable Resolution Earth System Model, or VRESM. This is an earth systems model for climate prediction that contributes to global research efforts. It specifically focuses on the Southern Hemisphere, whereas similar modeling efforts elsewhere focus only on the Northern Hemisphere. VRESM relies on the CHPC because of the level of computing resources they are accessing — 9,000 to 10,000 cores at a time — which they cannot get anywhere else. And where before their models were limited to an eight-kilometer resolution, today they are at one-kilometer resolution. This is something they could not do before.”

Another example is materials science, particularly in fields like battery research and minerals differentiation (extracting precious metals from ores). South Africa ranks either very near or at the top in deposits of metals like manganese, platinum, chromite, vanadium, and vermiculite. Here too the new system’s increased computational power is having a clear impact. According to Dr. Sithole, “Materials science models that once took 11 days to finish now take only three-quarters of a day. That’s a major improvement.”

On the battery side, scientists use CHPC to model the interaction of atoms from different metals, like lithium and manganese, as a way to predict battery performance. “They’re looking at lithium manganese dioxide,” says Dr. Sithole. “In order to realistically show what happens in the actual battery system, researchers need to simulate a large number of lithium atoms traveling through the manganese. That means scaling the size of the battery system to millions of atoms. Where they could only model hundreds before, they have already surpassed 120,000 atoms and they now see they can push to millions.”

CHPC will also play a key role in support of the world’s largest radio telescope — the Square Kilometer Array (SKA) — scheduled to be deployed in South Africa’s Karoo desert by the year 2020. It will be 50 times more sensitive and survey the sky 10,000 times faster than today’s most powerful radio telescopes — and also generate record-setting amounts of astronomical data. The precursor to SKA is the MeerKAT radio telescope, located in South Africa’s Northern Cape. To give users close proximity to their data and help balance MeerKAT’s — and soon SKA’s — huge compute load, CHPC will support portions of MeerKAT’s data analysis and hold its archives. CHPC will also participate in efforts to create Africa’s first data-intensive cloud infrastructure as part of the country’s new Inter-University Institute for Data Intensive Astronomy (IDIA).

Supporting these types of use cases would be impossible, Dr. Sithole says, without the help of vendor partners.
“You would not be able to achieve this through working alone. We worked very closely with the Intel team, especially when it came to working with the Lustre vendors, but also in looking at the libraries and other Intel-related dependencies. For example, some configurations under Intel Manager for Lustre software did not allow a number of files to be written at the same time. During this whole process their people were available all the time and were very helpful in resolving issues. Without companies like Intel we would not be able to achieve benefits like efficient parallelization or the introduction of new technologies. So partnerships with OEMs are very important when you are looking to build at scale.”

That’s just one of many lessons Dr. Sithole and his team learned in building out CHPC’s new supercomputer. Another was the need “to identify low hanging fruit so you can start demonstrating impact early.” Still another was to “start building expertise within your user base early and get users involved early and incrementally during the build-out process.”

Thanks to leadership like that, South Africa now has its own role to play in the global community of high performance computing — while at the same time enjoying the singular opportunities that come from leveraging this continent’s unique and abundant resources.

Randall Cronk of greatwriting, LLC is a technology writer with over 30 years’ experience writing about complex systems, products, and services.
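
As a rough sanity check on the “roughly one petaflop” figure quoted above, theoretical peak performance can be estimated as cores × clock rate × floating-point operations per cycle. The clock rate and FLOPs-per-cycle values in the sketch below are assumed, representative numbers for Xeon processors of that generation; the article itself does not state them.

```python
# Back-of-the-envelope peak performance: cores * clock * FLOPs/cycle.
# Clock rate and FLOPs/cycle are assumed, representative values for
# Xeon processors of that era, not figures from the article.

cores = 24_000
clock_hz = 2.6e9              # assumed ~2.6 GHz
flops_per_cycle = 16          # assumed: AVX2 fused multiply-add, double precision

peak_flops = cores * clock_hz * flops_per_cycle
print(f"theoretical peak ~ {peak_flops / 1e15:.2f} PFLOPS")   # ~ 1.0 PFLOPS
```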


News Article
Site: http://phys.org/technology-news/

High-performance computing (or HPC) enables discoveries in practically every field of science—not just those typically associated with supercomputers, like chemistry and physics, but also the social sciences, life sciences and humanities. By combining superfast and secure networks, cutting-edge parallel computing and analytics software, advanced scientific instruments and critical datasets across the U.S., NSF's cyber-ecosystem lets researchers investigate questions that can't otherwise be explored.

NSF has supported advanced computing since its beginning and is constantly expanding access to these resources. This access helps tens of thousands of researchers each year—from high-school students to Nobel Prize winners—expand the frontiers of science and engineering, regardless of whether their institutions are large or small, or where they are located geographically. Below are 10 examples of research enabled by NSF-supported advanced computing resources from across all of science.

Pineapples don't just taste good—they have a juicy evolutionary history. Recent analyses using computing resources that are part of the iPlant Collaborative revealed an important relationship between pineapples and crops like sorghum and rice, allowing scientists to home in on the genes and genetic pathways that let plants thrive in water-limited environments. Led by the University of Arizona, the Texas Advanced Computing Center, Cold Spring Harbor Laboratory and the University of North Carolina at Wilmington, iPlant was established in 2008 with NSF funding to develop cyberinfrastructure for life sciences research, provide powerful platforms for data storage and bioinformatics, and democratize access to U.S. supercomputing capabilities. This week, iPlant announced it will host a new platform, Digital Imaging of Root Traits (DIRT), that lets scientists in the field measure up to 76 root traits merely by uploading a photograph of a plant's roots.

Software that simulates the effect of an electric charge passing through a transistor—only a few atoms wide—is helping researchers explore alternative materials that may replace silicon in future nanodevices. The software simulations designed by Purdue researcher Gerhard Klimeck and his group, available on the nanoHUB portal, provide new information about the limits of current semiconductor technologies and are helping design future generations of nanoelectronic devices. NanoHUB, supported by NSF, is the first broadly successful, scientific end-to-end cloud computing environment. It provides a library of 3,000 learning resources to 195,000 users worldwide. Its 232 simulation tools are used in the cloud by over 10,800 researchers and students annually.

Earthquakes originate through complex interactions deep below the surface of the Earth, making them notoriously difficult to predict. The Southern California Earthquake Center (SCEC) and its lead scientist Thomas Jordan use massive computing power to simulate the dynamics of earthquakes. In doing so, SCEC helps to provide long-term earthquake forecasts and more accurate hazard assessments. In 2014, the SCEC team investigated the earthquake potential of the Los Angeles Basin, where the Pacific and North American Plates run into each other at the San Andreas Fault. Their simulations showed that the basin essentially acts like a big bowl of jelly that shakes during earthquakes, producing more high-shaking ground motions than the team expected.
Using the NSF-funded Blue Waters supercomputer at the National Center for Supercomputing Applications and the Department of Energy-funded Titan supercomputer at the Oak Ridge Leadership Computing Facility, the researchers turned their simulations into seismic hazard models. These models describe the probability of an earthquake occurring in a given geographic area, within a given window of time, and with ground motion intensity exceeding a given threshold (a short worked example of this kind of exceedance probability appears at the end of this piece).

Nearly 33,000 people die in the U.S. each year due to motor vehicle crashes, according to the National Highway Traffic Safety Administration. Modern restraint systems save lives, but some deaths and injuries remain—and restraints themselves can cause injuries. Researchers from the Center for Injury Biomechanics at Wake Forest University used the Blacklight supercomputer at the Pittsburgh Supercomputing Center to simulate the impacts of car crashes with much greater fidelity than crash-test dummies can provide. By studying a variety of potential occupant positions, they're uncovering important factors that lead to more severe injuries, as well as ways to potentially mitigate these injuries using advanced safety systems.

Since Albert Einstein, scientists have believed that when major galactic events like black hole mergers occur, they leave a trace in the form of gravitational waves—ripples in the curvature of space-time that travel outward from the source. Advanced LIGO is a project designed to capture signs of these events. Since gravitational waves are expected to travel at the speed of light, detecting them requires two gravitational wave observatories, located 1,865 miles apart and working in unison, that can triangulate the gravitational wave signals and determine the source of the wave in the sky.

In addition to being an astronomical challenge, Advanced LIGO is also a "big data" problem. The observatories take in huge volumes of data that must be analyzed to determine their meaning. Researchers estimate that Advanced LIGO will generate more than 1 petabyte of data a year, the equivalent of 13.3 years' worth of high-definition video. To achieve accurate and rapid gravitational wave detection, researchers use the Extreme Science and Engineering Discovery Environment (XSEDE)—a powerful collection of advanced digital resources and services—to develop and test new methods for transmitting and analyzing these massive quantities of astronomical data. Advanced LIGO came online in September, and advanced computing will play an integral part in its future discoveries.

What happens when a supercomputer reaches retirement age? In many cases, it continues to make an impact in the world. The NSF-funded Ranger supercomputer is one such example. In 2013, after five years as one of NSF's flagship computer systems, the Texas Advanced Computing Center (TACC) disassembled Ranger and shipped it from Austin, Texas to South Africa, Tanzania and Botswana to help a young and growing supercomputing community take root. With funding from NSF, TACC experts led training sessions in South Africa in December 2014. In November 2015, 19 delegates from Africa came to the U.S. to attend a two-day workshop at TACC as well as the Supercomputing 2015 International Conference for High Performance Computing. The effort is intended, in part, to help provide the technical expertise needed to successfully staff and operate the Square Kilometer Array, a new radio telescope being built in Australia and Africa that will offer the highest resolution images in all of astronomy.
In September 2015, President Obama announced plans to improve maps and elevation models of the Arctic, including Alaska. To that end, NSF and the National Geospatial-Intelligence Agency (NGA) are supporting the development of high-resolution Digital Elevation Models in order to provide consistent coverage of the globally significant region. The models will allow researchers to see in detail how warming in the region affects the landscape in remote areas, and allow them to compare changes over time. The project relies, in part, on the computing and data analysis powers of Blue Waters, which will let researchers store, access and analyze large numbers of images and models.

To solve some of society's most pressing long-term problems, the U.S. needs to educate and train the next generation of scientists and engineers to use advanced computing effectively. This pipeline of training begins as early as high school and continues throughout the careers of scientists. Last summer, TACC hosted 50 rising high school juniors and seniors in an innovative new STEM program, CODE@TACC. The program introduced students to high-performance computing, life sciences and robotics. On the continuing education front, XSEDE offers hundreds of training classes each year to help researchers update their skills and learn new ones.

High-performance computing has another use in education: to assess how students learn and ultimately to provide personalized educational paths. A recent report from the Computing Research Association, "Data-Intensive Research in Education: Current Work and Next Steps," highlights insights from two workshops on data-intensive education initiatives. The LearnSphere project at Carnegie Mellon University, an NSF Data Infrastructure Building Blocks project, is putting these ideas into practice.

Experimenting with cloud computing on new platforms is also on the agenda. In 2014, NSF invested $20 million to create two cloud computing testbeds that let the academic research community develop and experiment with cloud architectures and pursue new, architecturally enabled applications of cloud computing. CloudLab (with sites in Utah, Wisconsin and South Carolina) came online in May 2015 and provides researchers with the ability to create custom clouds and test adjustments at all levels of the infrastructure, from the bare metal on up. Chameleon, a large-scale, reconfigurable experimental environment for cloud research, co-located at the University of Chicago and The University of Texas at Austin, went into production in July 2015. Both serve hundreds of researchers at universities across the U.S. and let computer scientists experiment with unique cloud architectures in ways that weren't available before.

The NSF-supported "Comet" system at the San Diego Supercomputer Center (SDSC) was dedicated in October and is already aiding scientists in a number of fields, including domains relatively new to supercomputer integration, such as neuroscience. SDSC recently received a major grant to expand the Neuroscience Gateway, which provides easy access to advanced cyberinfrastructure tools and resources through a web-based portal and can significantly improve the productivity of researchers. The gateway will contribute to the national BRAIN Initiative and deepen our understanding of the human brain.
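
As a brief worked example of the exceedance probabilities that seismic hazard models report: under the common simplifying assumption that qualifying earthquakes follow a Poisson process with a fixed annual rate, the probability of at least one exceedance in a time window is 1 − exp(−rate × time). The rate used below corresponds to the classic 475-year return period and is chosen purely for illustration; it is not a value from the SCEC models.

```python
import math

# Probability of at least one exceedance in a time window, assuming a
# Poisson process. The annual rate below is an illustrative example,
# not a value taken from the SCEC hazard models.

def exceedance_probability(annual_rate: float, years: float) -> float:
    """P(at least one event with ground motion above threshold in `years`)."""
    return 1.0 - math.exp(-annual_rate * years)

# A ground-motion level exceeded on average once every 475 years, evaluated
# over a 50-year window (the classic "10% in 50 years" hazard level):
print(f"{exceedance_probability(1 / 475, 50):.1%}")   # ~ 10.0%
```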
