News Article | April 28, 2017
BP announced a major breakthrough in seismic imaging that has identified more than 200 million barrels of additional resources at BP's Atlantis field in the deepwater Gulf of Mexico. Following this early success, BP is now deploying the technique at other fields in the Gulf of Mexico as well as in Azerbaijan, Angola, and Trinidad and Tobago.

The innovation has enabled BP to enhance the clarity of the images it collects during seismic surveys, particularly of areas below the earth's surface that complex salt structures previously obscured or distorted. The sharper seismic images mean that BP can drill new development wells in deepwater reservoirs with greater confidence and accuracy.

Proprietary algorithms developed by BP's Subsurface Technical Center were applied to seismic data processed at BP's Center for High Performance Computing, one of the largest supercomputers in the world dedicated to commercial research. The algorithms allowed data that would normally take a year to analyze to be processed in only a few weeks, accelerating BP's development decisions for the field.

The algorithms enhance a technique known as Full Waveform Inversion (FWI), which matches seismic simulations against recorded seismic data to produce high-quality subsurface images. FWI iteratively refines models of the subsurface by generating seismic wave simulations and adjusting the values of subsurface properties according to how well the simulated data match the recorded data. The resulting subsurface property models are used to create high-quality, high-resolution images of oil and gas reservoirs.
BP invented and was the first company to deploy wide-azimuth towed-streamer (WATS) technology to better illuminate and image below complex structures like salt. Established in 2016, BP’s Subsurface Technical Center specializes in advanced seismic imaging and enhanced oil recovery.
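The FWI loop described above (simulate, compare with the recorded data, update the subsurface model) can be illustrated with a deliberately tiny sketch. This is not BP's proprietary algorithm: here the entire "model" is a single seismic velocity, the forward model is one Gaussian arrival from a hypothetical reflector, and the update is plain finite-difference gradient descent on the data misfit. All numbers are invented for illustration.

```python
import math

def simulate_trace(velocity, times, depth=1000.0):
    # Toy forward model: one reflector at `depth` metres produces a
    # Gaussian pulse centred at two-way travel time 2*depth/velocity.
    return [math.exp(-((t - 2.0 * depth / velocity) ** 2) / 0.01)
            for t in times]

def misfit(velocity, times, observed):
    # Sum of squared differences between simulated and recorded traces --
    # the quantity FWI drives down on each iteration.
    sim = simulate_trace(velocity, times)
    return sum((s - o) ** 2 for s, o in zip(sim, observed))

def invert_velocity(times, observed, v0, steps=200, lr=50.0, eps=1.0):
    # FWI in miniature: iteratively refine the model (here, one velocity)
    # by descending the misfit gradient, estimated by finite differences.
    v = v0
    for _ in range(steps):
        grad = (misfit(v + eps, times, observed)
                - misfit(v - eps, times, observed)) / (2.0 * eps)
        v -= lr * grad
    return v

# "Recorded" data generated from a true velocity of 2500 m/s; the
# inversion starts from a wrong guess of 2200 m/s and recovers it.
times = [i * 0.002 for i in range(800)]   # 1.6 s of samples
observed = simulate_trace(2500.0, times)
recovered = invert_velocity(times, observed, v0=2200.0)
```

Real FWI does the same thing with millions of model cells and full wave-equation solves, which is why the supercomputing capacity described in the article matters.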
Lefeuvre P., University of Cape Town |
Lefeuvre P., University of Reunion Island |
Martin D.P., University of Cape Town |
Martin D.P., Center for High Performance Computing |
And 10 more authors.
PLoS Pathogens | Year: 2010
The ongoing global spread of Tomato yellow leaf curl virus (TYLCV; Genus Begomovirus, Family Geminiviridae) represents a serious looming threat to tomato production in all temperate parts of the world. Whereas determining where and when TYLCV movements have occurred could help curtail its spread and prevent future movements of related viruses, determining the consequences of past TYLCV movements could reveal the ecological and economic risks associated with similar viral invasions. Towards this end we applied Bayesian phylogeographic inference and recombination analyses to available TYLCV sequences (including those of 15 new Iranian full TYLCV genomes) and reconstructed a plausible history of TYLCV's diversification and movements throughout the world. In agreement with historical accounts, our results suggest that the first TYLCVs most probably arose somewhere in the Middle East between the 1930s and 1950s (with 95% highest probability density intervals 1905-1972) and that the global spread of TYLCV only began in the 1980s after the evolution of the TYLCV-Mld and -IL strains. Despite the global distribution of TYLCV we found no convincing evidence anywhere other than the Middle East and the Western Mediterranean of epidemiologically relevant TYLCV variants arising through recombination. Although the region around Iran is both the center of present day TYLCV diversity and the site of the most intensive ongoing TYLCV evolution, the evidence indicates that the region is epidemiologically isolated, which suggests that novel TYLCV variants found there are probably not direct global threats. We instead identify the Mediterranean basin as the main launch-pad of global TYLCV movements. © 2010 Lefeuvre et al.
Okouma P.M., University of Cape Town |
Okouma P.M., South African Astronomical Observatory |
Okouma P.M., African Institute for Mathematical Sciences |
Okouma P.M., Center for High Performance Computing |
And 6 more authors.
Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics | Year: 2013
Distance measurement provides no constraints on curvature independent of assumptions about the dark energy, raising the question: how flat is our Universe if we make no such assumptions? Allowing for general evolution of the dark energy equation of state with 20 free parameters that are allowed to cross the phantom divide, w(z)=-1, we show that while it is indeed possible to match the first peak in the Cosmic Microwave Background with non-flat models and arbitrary Hubble constant, H0, the full WMAP7 and supernova data alone imply -0.12<Ωk<0.01 (σ). If we add an H0 prior, this tightens significantly to Ωk=0.002±0.009. These constitute the most conservative and model-independent constraints on curvature available today, and illustrate that the curvature-dynamics degeneracy is broken by current data, with a key role played by the Integrated Sachs-Wolfe effect rather than the distance to the surface of last scattering. If one imposes a quintessence prior on the dark energy (-1≤w(z)≤1), then just the WMAP7 and supernova data alone force the Universe to near flatness: Ωk=0.013±0.012. Finally, allowing for curvature, we find that all datasets are consistent with a Harrison-Zel'dovich spectral index, ns=1, at 2σ, illustrating the interplay between early and late Universe constraints. © 2013 Elsevier B.V.
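As a rough illustration of why distance data alone cannot pin down curvature, here is a minimal numerical sketch. It is not the paper's analysis (which uses a 20-parameter w(z)); it computes the dimensionless comoving distance in an FLRW model with a constant dark-energy equation of state w, including the sin/sinh curvature correction. Different combinations of Ωk and w can produce very similar distances at a given redshift, which is the degeneracy the abstract says current data break only with additional CMB information.

```python
import math

def comoving_distance(z, omega_m, omega_k, w, n=2000):
    # Dimensionless comoving distance (in units of the Hubble radius c/H0)
    # for an FLRW model with constant dark-energy equation of state w.
    omega_de = 1.0 - omega_m - omega_k

    def E(zz):
        # Normalized expansion rate H(z)/H0.
        return math.sqrt(omega_m * (1 + zz) ** 3
                         + omega_k * (1 + zz) ** 2
                         + omega_de * (1 + zz) ** (3 * (1 + w)))

    # Trapezoidal integration of dz / E(z) from 0 to z.
    dz = z / n
    total = 0.5 * (1.0 / E(0.0) + 1.0 / E(z))
    for i in range(1, n):
        total += 1.0 / E(i * dz)
    chi = total * dz

    # Curvature correction: sinh for open (omega_k > 0), sin for closed.
    if omega_k > 1e-12:
        s = math.sqrt(omega_k)
        return math.sinh(s * chi) / s
    if omega_k < -1e-12:
        s = math.sqrt(-omega_k)
        return math.sin(s * chi) / s
    return chi

# Example: flat LCDM versus a hypothetical open model with w = -0.8 at z = 1.
d_flat = comoving_distance(1.0, 0.3, 0.0, -1.0)
d_open = comoving_distance(1.0, 0.3, 0.1, -0.8)
```

Because w enters only through E(z) inside the integral, changes in the dark-energy history can be traded off against Ωk, leaving the distance nearly unchanged; breaking that trade-off is exactly the role the abstract assigns to the Integrated Sachs-Wolfe effect.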
News Article | August 29, 2016
If you want to model weather systems, perform advanced computational mechanics, simulate the impact of climate change, study the interaction of lithium and manganese in batteries at the atomic level, or conduct the next experiment of your latest in vitro biomedical technique virtually, and you want to do it in Africa, then there is only one place to go: the Center for High Performance Computing (CHPC) in Cape Town.

Built and operated within the South African Council for Scientific and Industrial Research (CSIR), the CHPC is home to South Africa's newest (and only) supercomputer. Named "Lengau," which means "Cheetah" in Setswana, the system became fully operational in May 2016 and was ranked 121st on the June 2016 TOP500 list of the world's fastest supercomputers. Its mission: to make South Africa, and indeed Africa itself, a major player within the international community of HPC-driven scientific researchers while also boosting South Africa's burgeoning development in scientific and technical education.

Such world-class ambitions, however, require equally world-class technology. Based on Intel Xeon processors, the new system comprises 1,013 Dell PowerEdge servers totaling 19 racks of compute nodes and storage. It has a total storage capacity of five petabytes and uses Dell Networking Ethernet switches and Mellanox FDR InfiniBand with a maximum interconnect speed of 56 Gb/s. With over 24,000 cores, the machine is the fastest computer on the African continent at roughly one petaflop (a thousand trillion floating point operations per second), 15 times faster than CHPC's previous system.

The person leading the effort to make the new supercomputer a reality was CHPC Director Dr. Happy Sithole. For him, nothing less than world-class supercomputing power would suffice. "For us, it's no different from the rest of the world in terms of looking for opportunities where we need to accelerate competitiveness.
I think high performance computing is vital for competitiveness in developed countries. In South Africa we also have that ambition to accelerate areas where we are competitive in industry and science."

Those research domains are quite broad, Dr. Sithole says. "They cover chemistry, bioinformatics, astronomy, computational mechanics, engineering applications or systems, and the earth sciences including climate change. The South African Weather Service is a key collaborator, as is the Agricultural Research Council. It's quite a broad spectrum of users."

But advancing scientific research is only one of the key benefits high performance computing offers South Africa, Dr. Sithole says. Helping industry is another. "The first key performance indicator for us is whether we are helping someone solve a problem faster. And the second is whether we demonstrate an impact on non-academic users: whether some of our industries can say we were able to do things much faster, we were able to generate more revenue, because of high performance computing."

Virtual prototyping is a prime example, he says. "The more you are able to do virtual prototypes, the faster you can take your product to market. And here at CHPC we have an ongoing investment in virtual prototyping."

But if CHPC shares many of the same goals as other high performance computing centers, it also faces some unique challenges, as well as opportunities. "If you look at most centers around the world," Dr. Sithole says, "they have the option to focus on a specific area. But we don't have that luxury. We have some users who don't have access to any other computing resources. That is our uniqueness: we are the only center in the country and on the continent. We have all those users with varied computing needs and application requirements. But our unique geographical position also brings us unique opportunities and some very good partnerships." A good example is climate change research.
Like other countries, South Africa is very concerned about the future impact greenhouse gases will have on public health, agriculture, the availability of fresh water, and other areas. But what makes climate research here different is its focus on the Southern Hemisphere.

"Perhaps our biggest user," Dr. Sithole says, "is a climate modeling team from the CSIR, which absolutely depends on the CHPC for what they call the Variable Resolution Earth System Model, or VRESM. This is an earth systems model for climate prediction that contributes to global research efforts. It specifically focuses on the Southern Hemisphere, whereas similar modeling efforts elsewhere focus only on the Northern Hemisphere. VRESM relies on the CHPC because of the level of computing resources they are accessing, 9,000 to 10,000 cores at a time, which they cannot get anywhere else. And where before their models were limited to an eight-kilometer resolution, today they are at one-kilometer resolution. This is something they could not do before."

Another example is materials science, particularly in fields like battery research and minerals differentiation (extracting precious metals from ores). South Africa ranks either very near or at the top in deposits of metals like manganese, platinum, chromite, vanadium, and vermiculite. Here too the new system's increased computational power is having a clear impact. According to Dr. Sithole, "Materials science models that once took 11 days to finish now take only three-quarters of a day. That's a major improvement."

On the battery side, scientists use CHPC to model the interaction of atoms from different metals, like lithium and manganese, as a way to predict battery performance. "They're looking at lithium manganese dioxide," says Dr. Sithole. "In order to realistically show what happens in the actual battery system, researchers need to simulate a large number of lithium atoms traveling through the manganese.
That means scaling the size of the battery system to millions of atoms. Where they could only model hundreds before, they have already surpassed 120,000 atoms and they now see they can push to millions."

CHPC will also play a key role in support of the world's largest radio telescope, the Square Kilometre Array (SKA), scheduled to be deployed in South Africa's Karoo desert by the year 2020. It will be 50 times more sensitive and will survey the sky 10,000 times faster than today's most powerful radio telescopes, and it will also generate record-setting amounts of astronomical data.

The precursor to SKA is the MeerKAT radio telescope, located in South Africa's Northern Cape. To give users close proximity to their data and to help balance MeerKAT's (and soon SKA's) huge compute load, CHPC will support portions of MeerKAT's data analysis and hold its archives. CHPC will also participate in efforts to create Africa's first data-intensive cloud infrastructure as part of the country's new Inter-University Institute for Data Intensive Astronomy (IDIA).

Supporting these types of use cases would be impossible, Dr. Sithole says, without the help of vendor partners. "You would not be able to achieve this through working alone. We worked very closely with the Intel team, especially when it came to working with the Lustre vendors, but also in looking at the libraries and other Intel-related dependencies. For example, some configurations under Intel Manager for Lustre software did not allow a number of files to be written at the same time. During this whole process their people were available all the time and were very helpful in resolving issues. Without companies like Intel we would not be able to achieve benefits like efficient parallelization or the introduction of new technologies. So partnerships with OEMs are very important when you are looking to build at scale."

That's just one of many lessons Dr. Sithole and his team learned in building out CHPC's new supercomputer.
Another was the need "to identify low-hanging fruit so you can start demonstrating impact early." Still another was to "start building expertise within your user base early and get users involved early and incrementally during the build-out process."

Thanks to leadership like that, South Africa now has its own role to play in the global community of high performance computing, while at the same time enjoying the singular opportunities that come from leveraging this continent's unique and abundant resources.

Randall Cronk of greatwriting, LLC is a technology writer with over 30 years' experience writing about complex systems, products, and services.
Regis M., University of Cape Town |
Regis M., Center for High Performance Computing |
Regis M., University of Turin |
Regis M., National Institute of Nuclear Physics, Italy |
Clarkson C., University of Cape Town
General Relativity and Gravitation | Year: 2012
Explaining the well-established observation that the expansion rate of the universe is apparently accelerating is one of the defining scientific problems of our age. Within the standard model of cosmology, the repulsive 'dark energy' supposedly responsible has no explanation at a fundamental level, despite many varied attempts. A further important dilemma in the standard model is the lithium problem: the substantial mismatch between the theoretical prediction for 7Li from Big Bang Nucleosynthesis and the value that we observe today. This observation is one of the very few we have from along our past worldline, as opposed to our past lightcone. By relaxing the untested assumption that the universe is homogeneous on very large scales, both the apparent acceleration and the lithium problem can easily be accounted for as different aspects of cosmic inhomogeneity, without causing problems for other cosmological phenomena such as the cosmic microwave background. We illustrate this in the context of a void model. © 2012 Springer Science+Business Media, LLC.
Martin D.P., University of Cape Town |
Martin D.P., Center for High Performance Computing |
Lemey P., Rega Institute for Medical Research |
Lott M., University of Cape Town |
And 6 more authors.
Bioinformatics | Year: 2010
Summary: RDP3 is a new version of the RDP program for characterizing recombination events in DNA-sequence alignments. Among other novelties, this version includes four new recombination analysis methods (3SEQ, VISRD, PHYLPRO and LDHAT), new tests for recombination hot-spots, a range of matrix methods for visualizing overall patterns of recombination within datasets, and recombination-aware ancestral sequence reconstruction. Complementing a high degree of analysis flow automation, RDP3 also has a highly interactive and detailed graphical user interface that enables more focused hands-on cross-checking of results with a wide variety of newly implemented phylogenetic tree construction and matrix-based recombination signal visualization methods. The new RDP3 can accommodate large datasets and is capable of analyzing alignments ranging in size from 1000 × 10-kilobase sequences to 20 × 2-megabase sequences within 48 h on a desktop PC. © The Author(s) 2010. Published by Oxford University Press.
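The kind of signal that recombination-detection methods like those in RDP3 look for can be shown with a toy scan (this is not one of the actual RDP3 algorithms, and all sequences and parameters here are invented): slide a window along an alignment and record which of two candidate parent sequences the query is closer to in each window. A switch in nearest parent partway along the sequence is the classic signature of a crossover.

```python
def window_identity(a, b, start, size):
    # Fraction of matching sites between two aligned sequences in a window.
    matches = sum(1 for i in range(start, start + size) if a[i] == b[i])
    return matches / size

def scan_nearest_parent(query, p1, p2, size=20, step=10):
    # For each window, record 1 if the query is closer to parent 1,
    # or 2 if it is closer to parent 2.
    calls = []
    for start in range(0, len(query) - size + 1, step):
        id1 = window_identity(query, p1, start, size)
        id2 = window_identity(query, p2, start, size)
        calls.append(1 if id1 >= id2 else 2)
    return calls

# Hypothetical example: the query's first half derives from parent 1 and
# its second half from parent 2 (a single crossover), with a couple of
# independent mutations ('T') sprinkled in.
p1 = "A" * 100
p2 = "G" * 100
query = ("A" * 48 + "T" + "A") + ("G" * 49 + "T")
calls = scan_nearest_parent(query, p1, p2)
```

Real methods add the statistics this sketch lacks: significance testing against the clonal null hypothesis, handling of more than two parents, and phylogenetic rather than raw-identity distances.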
Clarkson C., University of Cape Town |
Regis M., University of Cape Town |
Regis M., Center for High Performance Computing
Journal of Cosmology and Astroparticle Physics | Year: 2011
The dimming of Type Ia supernovae could be the result of Hubble-scale inhomogeneity in the matter and spatial curvature, rather than signaling the presence of a dark energy component. A key challenge for such models is to fit the detailed spectrum of the cosmic microwave background (CMB). We present a detailed discussion of the small-scale CMB in an inhomogeneous universe, focusing on spherically symmetric 'void' models. We allow for the dynamical effects of radiation while analyzing the problem, in contrast to other work, which inadvertently fine-tunes its spatial profile. This is a surprisingly important effect and we reach substantially different conclusions. Models which are open at CMB distances fit the CMB power spectrum without fine-tuning; these models also fit the supernovae and local Hubble rate data, which favour a high expansion rate. Asymptotically flat models may fit the CMB, but require some extra assumptions. We argue that a full treatment of the radiation in these models is necessary if we are to understand the correct constraints from the CMB, as well as other observations which rely on it, such as spectral distortions of the black body spectrum, the kinematic Sunyaev-Zel'dovich effect or the Baryon Acoustic Oscillations. © 2011 IOP Publishing Ltd and SISSA.
Kadner K., University of Cape Town |
Dobner S., University of Cape Town |
Franz T., University of Cape Town |
Franz T., Center for High Performance Computing |
And 4 more authors.
Biomaterials | Year: 2012
Biomaterials are increasingly being investigated as a means of reducing stress within the ventricular wall of infarcted hearts and thus attenuating pathological remodelling and loss of function. In this context, we have examined the influence of timing of delivery on the efficacy of a polyethylene glycol hydrogel polymerised with an enzymatically degradable peptide sequence. Delivery of the hydrogel immediately after infarct induction resulted in no observable improvements, but delaying delivery by one week resulted in significant increases in scar thickness and fractional shortening, as well as a reduction in end-systolic diameter, relative to saline controls and immediately injected hydrogel at both 2 and 4 weeks post-infarction (p<0.05). Hydrogels injected at one week degraded significantly more slowly than those injected immediately, and this may have played a role in the differing outcomes. The hydrogel assumed markedly different morphologies at the two time points, having either a fibrillar or a bulky appearance after injection immediately or one week post-infarction, respectively. We argue that the different morphologies result from infarction-induced changes in cardiac structure and influence the degradability of the injectates. The results indicate that timing of delivery is important and that very early time points may not be beneficial. © 2011 Elsevier Ltd.
Inggs M., University of Cape Town |
Inggs M., Center for High Performance Computing
IEEE Aerospace and Electronic Systems Magazine | Year: 2010
Cognitive Radar describes a generic radar system that is capable of adapting its transmission waveforms and cooperating with other sensors in order to achieve superior detection, recognition, and tracking of targets. For example, the sensors of a cognitive radar system might use the illumination signals to carry broadcast data, allowing the sharing of target information. Herein, we postulate that it would be possible to implement a cognitive version of Passive Coherent Location (PCL), which has much in common with the broad cognitive radar concept but adapts only to the waveforms it senses in the environment, exploiting those that are most useful to it for target detection. In addition, it would model the terrain to improve coverage and provide countermeasures against direct signal saturation. As its name implies, PCL does not transmit, but relies on emissions from other radiating systems, such as broadcast services, other radars, cellular radio, WiFi, and so on. It is clear that such a system, consisting of multiple, cooperating receivers, can achieve excellent performance in the presence of deliberate jamming, difficult terrain, and attempts at target stealth. In the civilian radar domain, the technology offers opportunities for bandwidth conservation. © 2006 IEEE.
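The core PCL idea the abstract describes, detecting targets by correlating a reference copy of someone else's broadcast waveform against a surveillance channel, can be sketched in a few lines. This is a toy: all numbers are invented, the waveform is white noise rather than a real broadcast signal, and the direct-path signal (whose saturation the abstract mentions as a problem) is simply omitted rather than suppressed.

```python
import random

random.seed(0)
N = 4096
# Reference channel: a clean copy of the (hypothetical) broadcast waveform.
reference = [random.gauss(0, 1) for _ in range(N)]

# Surveillance channel: a weak target echo delayed by `true_delay`
# samples, plus receiver noise.
true_delay = 37
echo_gain = 0.1
surveillance = []
for i in range(N):
    echo = echo_gain * reference[i - true_delay] if i >= true_delay else 0.0
    surveillance.append(echo + random.gauss(0, 0.05))

def estimate_delay(ref, surv, max_lag):
    # Cross-correlate the surveillance channel against delayed copies of
    # the reference; the peak lag estimates the bistatic delay of the echo.
    best_lag, best_val = 0, float("-inf")
    for lag in range(max_lag):
        val = sum(surv[i] * ref[i - lag] for i in range(lag, len(surv)))
        if val > best_val:
            best_lag, best_val = lag, val
    return best_lag

estimated = estimate_delay(reference, surveillance, max_lag=100)
```

A practical PCL receiver extends this to a full delay-Doppler cross-ambiguity surface and, as the abstract suggests a cognitive version would, chooses among the available illuminators rather than assuming a single clean reference.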
Passmoor S.S., University of the Western Cape |
Cress C.M., University of the Western Cape |
Cress C.M., Center for High Performance Computing |
Faltenbacher A., University of the Western Cape
Monthly Notices of the Royal Astronomical Society: Letters | Year: 2011
We investigate the clustering of HI-selected galaxies in the Arecibo Legacy Fast ALFA Survey (ALFALFA) and compare results with those obtained for the HI Parkes All Sky Survey (HIPASS). Measurements of the angular correlation function and the inferred 3D clustering are compared with results from direct spatial-correlation measurements. We are able to measure clustering on smaller angular scales and for galaxies with lower HI masses than was previously possible. We calculate the expected clustering of dark matter using the redshift distributions of HIPASS and ALFALFA, and show that the ALFALFA sample is somewhat more antibiased with respect to dark matter than the HIPASS sample. © 2011 The Authors. Monthly Notices of the Royal Astronomical Society © 2011 RAS.
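Correlation functions like the one measured above are typically estimated by comparing pair counts in the data against pair counts in an unclustered random catalogue. This sketch applies the standard Landy-Szalay estimator, w = (DD - 2DR + RR)/RR, but to an artificial one-dimensional toy sample rather than real survey positions; the clustered sample, random catalogue, and bin edges are all invented for illustration.

```python
import random

random.seed(1)

def pair_fractions(xs, ys, edges, auto=False):
    # Fraction of pairs whose separation falls in each bin; `auto=True`
    # counts each unordered pair within one catalogue once.
    counts = [0] * (len(edges) - 1)
    total = 0
    for i in range(len(xs)):
        js = range(i + 1, len(xs)) if auto else range(len(ys))
        for j in js:
            sep = abs(xs[i] - (xs[j] if auto else ys[j]))
            total += 1
            for k in range(len(counts)):
                if edges[k] <= sep < edges[k + 1]:
                    counts[k] += 1
                    break
    return [c / total for c in counts]

# Hypothetical clustered "galaxy" sample: tight clumps on the unit interval.
data = []
for _ in range(10):
    center = random.random()
    data += [min(max(random.gauss(center, 0.005), 0.0), 1.0)
             for _ in range(20)]
randoms = [random.random() for _ in range(400)]   # unclustered comparison

edges = [0.0, 0.01, 0.05, 0.2, 0.5]
dd = pair_fractions(data, data, edges, auto=True)      # data-data
dr = pair_fractions(data, randoms, edges)              # data-random
rr = pair_fractions(randoms, randoms, edges, auto=True)  # random-random

# Landy-Szalay estimator per separation bin.
w = [(dd[k] - 2 * dr[k] + rr[k]) / rr[k] for k in range(len(dd))]
```

For the clustered toy sample, w comes out strongly positive in the smallest separation bin, the 1D analogue of the excess small-scale clustering the survey measurements quantify on the sky.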