
News Article
Site: http://www.scientificcomputing.com/rss-feeds/all/rss.xml/all

A Florida State University high performance computing researcher has predicted a physical effect that would help physicists and astronomers provide fresh evidence of the correctness of Einstein’s general theory of relativity. Bin Chen, who works at the university’s Research Computing Center, describes the yet-to-be-observed effect in the paper “Probing the Gravitational Faraday Rotation Using Quasar X-ray Microlensing,” published November 17, 2015, in the journal Scientific Reports.

“To be able to test general relativity is of crucial importance to physicists and astronomers,” Chen said. Such tests are especially important in regions close to a black hole, according to Chen, because the current evidence for Einstein’s general relativity — light bending by the sun, for example — comes mainly from regions where the gravitational field is very weak, that is, regions far away from a black hole.

Electromagnetism demonstrates that light is composed of oscillating electric and magnetic fields. Linearly polarized light is an electromagnetic wave whose electric and magnetic fields oscillate along fixed directions as the light travels through space. The gravitational Faraday effect, first predicted in the 1950s, theorizes that when linearly polarized light travels close to a spinning black hole, the orientation of its polarization rotates according to Einstein’s theory of general relativity. Currently, there is no practical way to detect gravitational Faraday rotation.

In the paper, Chen predicts a new effect that can be used to detect the gravitational Faraday effect. His proposed observation requires monitoring the X-ray emissions from gravitationally lensed quasars. “This means that light from a cosmologically distant quasar will be deflected, or gravitationally lensed, by the intervening galaxy along the line of sight before arriving at an observer on the Earth,” said Chen of the phenomenon of gravitational lensing, which was predicted by Einstein in 1936. More than 100 gravitational lenses have been discovered so far.

“Astronomers have recently found strong evidence showing that quasar X-ray emissions originate from regions very close to supermassive black holes, which are believed to reside at the center of many galaxies,” Chen said. “Gravitational Faraday rotation should leave its fingerprints on such compact regions close to a black hole.

“Specifically, the observed X-ray polarization of a gravitationally microlensed quasar should vary rapidly with time if the gravitational Faraday effect indeed exists,” he said. “Therefore, monitoring the X-ray polarization of a gravitationally lensed quasar over time could verify the time dependence and the existence of the gravitational Faraday effect.”

If detected, Chen’s effect — a derivative of the gravitational Faraday effect — would provide strong evidence of the correctness of Einstein’s general relativity theory in the “strong-field regime,” or an environment in close proximity to a black hole. Chen generated a simulation for the paper on the FSU Research Computing Center’s High-Performance Computing cluster — the second-largest computer cluster in Florida.
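As a schematic reminder of what polarization rotation means (a textbook illustration, not material from Chen's paper, with all general-relativistic detail omitted), a linearly polarized wave traveling along z keeps its electric field along one fixed transverse direction, and a Faraday-type rotation, whether magnetic or gravitational, simply shifts that direction by an angle accumulated along the light path:

```latex
% Linearly polarized plane wave: the electric field oscillates along a fixed
% direction at angle \theta in the plane transverse to the propagation axis z.
\[
\mathbf{E}(z,t) = E_0 \left( \cos\theta \,\hat{\mathbf{x}} + \sin\theta \,\hat{\mathbf{y}} \right)
                  \cos(kz - \omega t)
\]
% A Faraday-type rotation (magnetic or gravitational) leaves the wave linearly
% polarized but rotates the polarization angle,
\[
\theta \;\longrightarrow\; \theta + \Delta\theta ,
\]
% where \Delta\theta accumulates along the path; in the gravitational case it
% depends on the spin (angular momentum) of the black hole the light passes.
```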


News Article | April 12, 2016
Site: http://www.scientificcomputing.com/rss-feeds/all/rss.xml/all

This is the fifth installment in a series covering how scientists are updating popular molecular dynamics, quantum chemistry and quantum materials code to take advantage of hardware advances, such as the forthcoming Intel Xeon Phi processors.

Quantum-mechanical materials and molecular modeling is the science of modeling materials at the nanoscale. Quantum materials research describes the structure and interactions of matter mathematically at the level of its constituent particles. It has a wide range of applications, such as studying molecular systems for material assemblies, small chemical systems and biological molecules. High performance computing (HPC) systems are required for complex quantum materials research because of the amount of data involved and the computational power needed to evaluate the underlying mathematical models and generate images.

Researchers conduct quantum materials research with specialized software such as Quantum ESPRESSO together with a variety of HPC software. Quantum ESPRESSO is an integrated suite of computer codes for electronic-structure calculations and materials modeling, based on density-functional theory, plane waves and pseudopotentials. It is coordinated by the Quantum ESPRESSO Foundation and has a growing worldwide user community in academic and industrial research. Its intensive use of dense mathematical routines makes it an ideal candidate for many-core architectures, such as the Intel Xeon Phi coprocessor.

The Intel Parallel Computing Centers at Cineca and Lawrence Berkeley National Lab (LBNL), along with the National Energy Research Scientific Computing Center (NERSC), are at the forefront of using HPC software and modifying Quantum ESPRESSO (QE) code to take advantage of the Intel Xeon processors and Intel Xeon Phi coprocessors used in quantum materials research. In addition to Quantum ESPRESSO, the teams use tools such as Intel compilers, libraries, Intel VTune and OpenMP in their work. The goal is to incorporate the changes they make to Quantum ESPRESSO into the public version of the code, so that scientists can benefit from the improved optimization and parallelization without having to modify legacy code by hand.

One example of how Cineca used Quantum ESPRESSO to study a real device of promising scientific and technological interest is an investigation of the electrical conductivity of a PDI-FCN2 molecule. The study was conducted by Cineca in collaboration with the University of Bologna and the National Research Council of Italy - Institute for Nanoscience (CNR-NANO). Its object is a two-terminal device based on PDI-FCN2, a molecule derived from perylene. The system is important for the study of electron transport in single-molecule devices and for the further development of a new generation of organic field-effect transistors (OFETs). The simulated system is composed of two gold electrodes, each made of 162 gold atoms, with a PDI-FCN2 molecule between them. In total, the system contains 390 atoms and 3852 electrons. The metallic nature of the leads also requires a fine sampling of the Brillouin zone in the description of the electronic structure, which further increases the computational effort needed to simulate the system. Figures 1 and 2 show the molecular structure and the results of the study.
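To make concrete why a plane-wave code such as Quantum ESPRESSO is dominated by FFTs and dense linear algebra, the following minimal NumPy sketch (an illustration only, not QE source code) shows the kernel at the heart of every plane-wave DFT step: a band stored as plane-wave coefficients is moved between reciprocal space and the real-space grid with 3-D FFTs.

```python
# Illustrative sketch (not Quantum ESPRESSO source): the central kernel of a
# plane-wave DFT code is the 3-D FFT that maps a wavefunction between its
# reciprocal-space (plane-wave coefficient) and real-space representations.
import numpy as np

grid = (64, 64, 64)                 # FFT grid; real calculations use larger grids
rng = np.random.default_rng(0)

# Plane-wave coefficients c(G) for one Kohn-Sham band (random stand-ins here).
c_G = rng.standard_normal(grid) + 1j * rng.standard_normal(grid)

# G -> r: inverse 3-D FFT gives the wavefunction on the real-space grid,
# where local potentials and densities are applied point by point.
psi_r = np.fft.ifftn(c_G)
density_contrib = np.abs(psi_r) ** 2   # this band's contribution to the charge density

# r -> G: forward 3-D FFT returns to reciprocal space, where the kinetic
# energy operator is diagonal. A run repeats this pair of transforms for every
# band, k-point and SCF iteration, which is why FFT performance and its
# parallel distribution dominate the cost.
c_G_back = np.fft.fftn(psi_r)
assert np.allclose(c_G, c_G_back)
```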
The quantum mechanical solution of the electronic problem for such a large system is a major challenge: it requires a large HPC infrastructure, like the one available at Cineca, and all the scaling capabilities of Quantum ESPRESSO. Dr. Carlo Cavazzoni (Principal Investigator at the Cineca Intel Parallel Computing Center) states, “Based on the results obtained by this study, we will gain a deep understanding in the intimate conduction mechanisms of this type of organic devices, going a step forward in the direction of utilizing the new OFET technologies that soon will replace the traditional silicon devices. Quantum ESPRESSO and new supercomputing facilities will make possible our studies and the understanding of the physics of the devices that, in the future, will be the building blocks for new photovoltaics cells, next-generation displays and molecular computers.”

Cineca supercomputer

The Cineca team currently does its research on the Cineca FERMI Blue Gene/Q supercomputer and on Galileo, an IBM NeXtScale cluster based on Intel Xeon processors and Intel Xeon Phi coprocessors (768 Intel Xeon Phi 7120P coprocessors). Cineca’s next HPC system will be a Lenovo machine named Marconi, delivering 2 PFlops with Intel Xeon E5 v4 processors in a first stage and 11 PFlops with next-generation Intel Xeon Phi processors before fall 2016. A third-stage system will include 4.5 PFlops with future Intel Xeon processors and integrate an Intel Omni-Path interconnect.

Cineca is engaged in many R&D projects relating to HPC development. One of the most important is MaX, a center of excellence funded by the European Community whose ambitions are to establish an infrastructure for materials scientists and to support the development of codes toward the exascale. According to Cavazzoni, “We always focused our work on the part of numerical algorithms on the Fourier transform (FFT) and on the linear algebra modules. We started to rethink the overall organization of the memory hierarchy and parallelization structure. In particular, we modified the code in order to implement a sort of hierarchical tiling of data structures. In order to do so, we had to deeply modify the distribution of the data structure in Quantum ESPRESSO. The following figure shows the high-level QE hierarchy.”

Cineca is tiling data structures to use the computing power of each node efficiently. The team changed the fine-grain parallelism in the QE FFT module by refactoring the data distribution using task groups, as shown in Figure 4. The move to a many-core model required changing the QE code to fit the structure of a single node efficiently and splitting the code into intra-node and inter-node processes. In the new data layout, a single tile of processes inside a given task group contains all the G-vectors and a subset of the bands needed to compute full 3-D FFTs. The data tiling can be changed to best match the characteristics of the HPC system, up to the limit (if node memory permits) of having a whole 3-D FFT performed by a single task group locally on the node.

The following example shows the results of a Car-Parrinello simulation of a system of 32 water molecules. The plot compares the old implementation (blue) with the new one (red), highlighting the reduction in time-to-solution; different task-group distributions are shown. The simulations were run on an Intel Xeon E5-2630 v3 processor.
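The task-group tiling described above can be pictured with a small, hypothetical NumPy sketch; it does not reproduce QE's actual data structures, but it shows the layout: bands are tiled across task groups, and each group performs complete 3-D FFTs locally for its own subset of bands rather than having every process cooperate on every transform.

```python
# Hypothetical illustration of task-group tiling (not the actual QE code):
# bands are split across task groups, and each group performs whole 3-D FFTs
# locally for its own slice of bands instead of all processes sharing every FFT.
import numpy as np

n_bands = 16
n_taskgroups = 4
grid = (32, 32, 32)

rng = np.random.default_rng(1)
bands_G = (rng.standard_normal((n_bands,) + grid)
           + 1j * rng.standard_normal((n_bands,) + grid))

# Tile the bands: task group t owns a contiguous slice of band indices.
tiles = np.array_split(np.arange(n_bands), n_taskgroups)

bands_r = np.empty_like(bands_G)
for tg, owned in enumerate(tiles):
    # In the real code each task group is a subset of MPI ranks holding all
    # G-vectors for its bands; here a plain loop stands in for that layout.
    bands_r[owned] = np.fft.ifftn(bands_G[owned], axes=(1, 2, 3))

# Each full 3-D FFT stayed "local" to one task group, so inter-group
# communication is avoided; only the band index is distributed.
assert np.allclose(np.fft.fftn(bands_r, axes=(1, 2, 3)), bands_G)
```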
Cavazzoni indicates, “The Intel exascale road-map allows for a smooth innovation path in the code, and a constant improvement of the performance and scalability. The availability of a large number of cores per node has made it possible to tune the different layers of parallelization. A good tiling of the different data structures permits us to efficiently tile the memory and computing power of each node, reducing the amount of communication and, thus, enhancing the performances. We changed the fine grain parallelism of QE and, in particular, the FFT module. Adopting different kinds of data distribution (task groups), we achieved a good improvement in terms of performance (Figure 5). However, there is still room for improvement, in particular for the efficiency of the OpenMP multithreading, which is now limited to 4-8 threads. This is because workloads that are too small can induce load unbalancing and then a large spinning time. Adopting OpenMP tasking strategies, we are expecting a considerable improvement of the shared memory parallelism based on the new task-level parallelism which is implemented in OpenMP 4. We have already done some tests that make us think that we can remove the bottleneck displayed by synchronous thread-level parallelism.”

The main focus of Lawrence Berkeley National Laboratory (LBNL), working with the National Energy Research Scientific Computing Center (NERSC), is to advance open-source quantum chemistry and materials codes on multicore high-performance computing systems. The two are jointly optimizing a variety of codes, including NWChem and Quantum ESPRESSO. NERSC is a national supercomputing center that serves the supercomputing mission and data needs of the U.S. Department of Energy Office of Science; it is part of Lawrence Berkeley National Laboratory, adjacent to the University of California, Berkeley campus.

NERSC is also experimenting with modifying Quantum ESPRESSO code, since it is one of the most commonly used codes on NERSC systems. According to Taylor Barnes, LBNL Hopper Fellow, “In particular, we are interested in improving the performance of hybrid Density Functional Theory (DFT) calculations within Quantum ESPRESSO. Hybrid DFT is often more accurate than other types of DFT, and can be especially important for performing simulations of systems like batteries and photovoltaic cells. Unfortunately, hybrid DFT is also much more computationally demanding and, thus, many of the calculations that we would like to perform are difficult or impossible to run on current machines.”

One of the LBNL/NERSC strategies for improving the performance of hybrid calculations in Quantum ESPRESSO has been to refactor and modify the hybrid sections of the code. Barnes states, “In doing so, we have made significant changes to both the communication and parallelization strategies, leading to large improvements in the code’s strong scaling efficiency.” Another focus of the LBNL/NERSC effort is the investigation of better ways to parallelize the FFTs, which are an integral part of any calculation in Quantum ESPRESSO. “FFTs are notoriously difficult to parallelize efficiently across nodes; as a result, we are exploring strategies for distinguishing between intra-node parallelization of the FFTs using OpenMP and inter-node parallelization of other portions of the calculation using MPI. Our expectation is that these changes will be especially important on Intel Xeon Phi architectures,” indicates Barnes.
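For background on why the hybrid DFT calculations Barnes describes are so much heavier than ordinary DFT, a common hybrid functional can be written schematically as below (textbook material rather than anything specific to the NERSC work; spin and prefactor conventions are omitted). The exact-exchange term couples every pair of occupied orbitals, and in a plane-wave code each pair requires its own Poisson solve, i.e. additional FFTs, which is where the extra cost and the parallelization challenge come from.

```latex
% Hybrid exchange-correlation energy: a fraction \alpha of exact (Hartree-Fock)
% exchange is mixed with a semilocal functional; PBE0, for example, uses \alpha = 1/4.
\[
E_{xc}^{\mathrm{hyb}} = \alpha\, E_x^{\mathrm{exact}} + (1-\alpha)\, E_x^{\mathrm{GGA}} + E_c^{\mathrm{GGA}}
\]
% Schematic exact-exchange term (spin and prefactors omitted): it sums over all
% pairs (i, j) of occupied orbitals, so its cost grows with the number of orbital
% pairs, each pair needing its own convolution (extra FFTs in a plane-wave basis).
\[
E_x^{\mathrm{exact}} \;\propto\; -\sum_{i,j}^{\mathrm{occ}} \iint
\frac{\psi_i^*(\mathbf{r})\,\psi_j(\mathbf{r})\;\psi_j^*(\mathbf{r}')\,\psi_i(\mathbf{r}')}
     {\left| \mathbf{r}-\mathbf{r}' \right|}\, d\mathbf{r}\, d\mathbf{r}'
\]
```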
How HPC will aid quantum materials research in the future

Cineca, LBNL and NERSC all have a vision of how improved HPC code and Intel processors and coprocessors can improve the future of quantum materials research. The work these groups are doing to modify code to take advantage of HPC parallelization and optimization is especially important because there are not enough software engineers to adapt legacy codes. Their work is being reviewed, and the optimization and parallelization modifications made by Cineca have already been approved and incorporated into Release 5.3.0 of the Quantum ESPRESSO code.

Both the LBNL and NERSC teams are active in the Intel Xeon Phi Users Group (IXPUG) and in the exchange of information and ideas to enhance the usability and efficiency of scientific applications running on large Intel Xeon Phi coprocessor-based HPC systems. NERSC will receive a large next-generation Intel Xeon Phi processor-based supercomputer, known as Cori, late in 2016. NERSC has launched the NERSC Exascale Science Applications Program, which allows 20 projects to collaborate with NERSC, Cray and Intel, providing access to early hardware and to special training and preparation sessions. Project teams, guided by NERSC, Cray and Intel, will undertake intensive efforts to adapt software to take advantage of Cori's manycore architecture and to use the resulting codes to produce path-breaking science on an architecture that may represent an approach to exascale systems.

Cavazzoni states, “In the context of the MaX project, we are committed to work on different codes from the community of material science in order to get ready for the exascale challenges. One of our main targets is to contribute to the modularization of such codes in order to build domain-specific libraries to be usable in different codes and/or complex workflows as LEGO blocks. This high degree of modularization will also allow our team to increase the performances and the suitability for new incoming architectures. In QE, we are already performing this work, and we recently packed all the functionalities related to the FFT kernels in a specific library. We are doing similar work for the linear algebra (such as diagonalization and eigenvalue problems) kernels. Together with MaX, we are also exploring new parallel paradigms and their possible usage in QE. In particular, we are interested in the tasking strategies implemented in the OpenMP standard. The advent of the Intel Xeon Phi architecture platforms gave us a strong motivation to increase the level of exposed parallelism in QE. Working on this aspect brings us much closer to exascale scalability. The Intel Xeon Phi architecture clearly tells us that what will make the difference is the ability to make the best use of the shared memory paradigm and node resources. We need to allow the allocation of a single MPI task per socket, where the best ratio today for MPI tasks to threads is 1/2 or 1/4, quite unlikely 1/8, and nothing above. We should improve the shared memory efficiency to have the possibility to use an MPI-to-threads ratio in the order of 1 to 32 at least. And this will be valuable for any architecture, not only for the Intel Xeon Phi processor.
“All these enhancements will soon be tested on the upcoming Intel Xeon Phi processors that will be available this year in new supercomputers.”

Other articles in this series cover the modernization of other popular chemistry codes.

Linda Barney is the founder and owner of Barney and Associates, a technical/marketing writing, training and web design firm in Beaverton, OR.


Pascual J.M., University of León | Prieto R., University of San Carlos | Carrasco R., Ramón y Cajal University Hospital | Barrios L., Computing Center
Journal of Neurosurgery | Year: 2013

Object. Accurate diagnosis of the topographical relationships of craniopharyngiomas (CPs) involving the third ventricle and/or hypothalamus remains a challenging issue that critically influences the prediction of risks associated with their radical surgical removal. This study evaluates the diagnostic accuracy of MRI to define the precise topographical relationships between intraventricular CPs, the third ventricle, and the hypothalamus.

Methods. An extensive retrospective review of well-described CPs reported in the MRI era between 1990 and 2009 yielded 875 lesions largely or wholly involving the third ventricle. Craniopharyngiomas with midsagittal and coronal preoperative and postoperative MRI studies, in addition to detailed descriptions of clinical and surgical findings, were selected from this database (n = 130). The position of the CP and the morphological distortions caused by the tumor on the sella turcica, suprasellar cistern, optic chiasm, pituitary stalk, and third ventricle floor, including the infundibulum, tuber cinereum, and mammillary bodies (MBs), were analyzed on both preoperative and postoperative MRI studies. These changes were correlated with the definitive CP topography and type of third ventricle involvement by the lesion, as confirmed surgically.

Results. The mammillary body angle (MBA) is the angle formed by the intersection of a plane tangential to the base of the MBs and a plane parallel to the floor of the fourth ventricle in midsagittal MRI studies. Measurement of the MBA represented a reliable neuroradiological sign that could be used to discriminate the type of intraventricular involvement by the CP in 83% of cases in this series (n = 109). An acute MBA (< 60°) was indicative of a primary tuberal-intraventricular topography, whereas an obtuse MBA (> 90°) denoted a primary suprasellar CP position, causing either an invagination of the third ventricle (pseudointraventricular lesion) or its invasion (secondarily intraventricular lesion; p < 0.01). A multivariate model including a combination of 5 variables (the MBA, position of the hypothalamus, presence of hydrocephalus, psychiatric symptoms, and patient age) allowed an accurate definition of the CP topography preoperatively in 74%-90% of lesions, depending on the specific type of relationship between the tumor and third ventricle.

Conclusions. The type of mammillary body displacement caused by CPs represents a valuable clue for ascertaining the topographical relationships between these lesions and the third ventricle on preoperative MRI studies. The MBA provides a useful sign to preoperatively differentiate a primary intraventricular CP originating at the infundibulotuberal area from a primary suprasellar CP, which either invaginated or secondarily invaded the third ventricle. © AANS, 2013.


News Article
Site: http://www.scientificcomputing.com/rss-feeds/all/rss.xml/all

Today’s installment is the third in a series covering how researchers from national laboratories and scientific research centers are updating popular molecular dynamics, quantum chemistry and quantum materials code to take advantage of hardware advances, such as the next-generation Intel Xeon Phi processors.

Georgia Institute of Technology, known as Georgia Tech, is an Intel Parallel Computing Center (Intel PCC) that focuses on modernizing the performance and functionality of software on advanced HPC systems used in scientific discovery. Georgia Tech developed a new HPC software package, called GTFock, and the SIMINT library to make quantum chemistry and materials simulations run faster on servers and supercomputers using Intel Xeon processors and Intel Xeon Phi coprocessors. These tools, which continue to be improved, provide an increase in processing speed over the best state-of-the-art quantum chemistry codes in existence. “GTFock and SIMINT allow us to perform quantum chemistry simulations faster and with less expense, which can help in solving large-scale problems from fundamental chemistry and biochemistry to pharmaceutical and materials design,” states Edmond Chow, Associate Professor of Computational Science and Engineering and Director of the Georgia Institute of Technology Intel PCC.

The Intel PCC at Georgia Tech has been simulating the binding of the drug Indinavir with human immunodeficiency virus (HIV) II protease. Indinavir is a protease inhibitor that competitively binds to the active site of HIV II protease to disrupt its normal function as part of HIV treatment therapy. Such systems are too large to study quantum mechanically, so only the part of the protease closest to the drug is typically simulated. The aim of the work at Georgia Tech is to quantify the discrepancy in the binding energy when such truncated models of the protease are used. To do this, simulations with increasingly larger portions of the protease are performed. These are enabled by the GTFock code, developed at the Georgia Tech Intel PCC in collaboration with Intel, which has been designed to scale efficiently on large cluster computers, including Intel Many Integrated Core (MIC) architecture clusters.

Calculations were performed at the Hartree-Fock level of theory. The largest simulations included residues of the protease more than 18 Angstroms away from the drug molecule. These simulations involved almost 3000 atoms and were performed on more than 1.6 million compute cores of the Tianhe-2 supercomputer (an Intel Xeon processor and Intel Xeon Phi coprocessor-based system that is currently number one on the TOP500 list). The results of this work so far show variations in binding energy that persist throughout the range up to 18 Angstroms. This suggests that even at relatively large cutoff distances, leading to very large model complexes (much larger than are typically possible with conventional codes and computing resources), the binding energy is not converged to within chemical accuracy. Further work is planned to validate these results as well as to study additional protein-ligand systems.

New quantum chemistry code: GTFock

The GTFock code was developed by the Georgia Tech Intel PCC in conjunction with the Intel Parallel Computing Lab. GTFock addresses one of the main challenges of quantum chemistry: running more accurate simulations, and simulations of larger molecules, by exploiting distributed-memory processing.
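GTFock's central job, described in more detail in the next section, is building the Fock matrix inside Hartree-Fock self-consistent field (SCF) iterations. As a primer, here is a compact, self-contained NumPy/SciPy sketch of that loop; it is an illustration rather than GTFock code, and the integral arrays are random stand-ins (in practice they come from an ERI library), so the printed energies are meaningless. The point is the structure: build the Coulomb (J) and exchange (K) matrices from the current density, form the Fock matrix, diagonalize, update the density and repeat until self-consistent.

```python
# Illustrative restricted Hartree-Fock SCF loop (not GTFock code).
# H: core Hamiltonian, S: overlap, eri: two-electron integrals in chemists'
# notation (pq|rs), n_occ: number of doubly occupied orbitals.
import numpy as np
from scipy.linalg import eigh

def scf(H, S, eri, n_occ, iters=20):
    n = H.shape[0]
    D = np.zeros((n, n))                      # initial guess: zero density
    E_old = 0.0
    for it in range(iters):
        # Coulomb and exchange matrices from the current density; building
        # these (the Fock matrix) in parallel is GTFock's central task.
        J = np.einsum('pqrs,rs->pq', eri, D)
        K = np.einsum('prqs,rs->pq', eri, D)
        F = H + 2.0 * J - K                   # closed-shell Fock matrix
        # Generalized eigenproblem F C = S C e gives the molecular orbitals.
        eps, C = eigh(F, S)
        C_occ = C[:, :n_occ]
        D = C_occ @ C_occ.T                   # new density matrix
        E = np.einsum('pq,pq->', D, H + F)    # closed-shell electronic energy
        print(f"iter {it:2d}  E = {E:.8f}")
        if abs(E - E_old) < 1e-8:
            break
        E_old = E
    return E, C

# Random stand-in arrays just so the sketch runs end to end.
rng = np.random.default_rng(0)
n, n_occ = 6, 2
H = rng.standard_normal((n, n)); H = 0.5 * (H + H.T)
A = rng.standard_normal((n, n)); S = A @ A.T + n * np.eye(n)   # positive definite
eri = rng.standard_normal((n, n, n, n))
eri = 0.5 * (eri + eri.transpose(1, 0, 2, 3))                  # token symmetry only
scf(H, S, eri, n_occ)
```

In GTFock, the expensive part of each pass is exactly this J and K construction, which is what the package distributes across nodes and coprocessors.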
GTFock was designed as a new toolkit with optimized and scalable code for Hartree-Fock self-consistent field iterations and the distributed computation of the Fock matrix in quantum chemistry. The Hartree-Fock (HF) method is one of the most fundamental methods in quantum chemistry for approximately solving the electronic Schrödinger equation. The solution of the equation, called the wavefunction, can be used to determine properties of the molecule. Georgia Tech’s goals in the design of GTFock include scalability to large numbers of nodes and the capability to use CPUs and Intel Xeon Phi coprocessors simultaneously. GTFock also includes infrastructure for performing self-consistent field (SCF) iterations to solve for the Hartree-Fock approximation, and it uses a new distributed algorithm for load balancing and for reducing communication. GTFock can be integrated into existing quantum chemistry packages and can be used for experimentation and as a benchmark for high-performance computing. The code is capable of separately computing the Coulomb and exchange matrices and, thus, can be used as a core routine in many quantum chemistry methods.

As part of the Intel PCC collaboration, Georgia Tech graduate student Xing Liu and Intel researcher Sanchit Misra spent a month in China optimizing and running GTFock on Tianhe-2. During testing, the team encountered scalability problems when scaling the code up to 8100 nodes on Tianhe-2. They resolved these issues with a better static partitioning and a better work-stealing algorithm than those used in previous work. They utilized the Intel Xeon Phi coprocessors on Tianhe-2 by using a dedicated thread on each node to manage offload to the coprocessors and by using work stealing to dynamically balance the work between CPUs and coprocessors. The electron repulsion integral (ERI) calculations were also optimized for modern processors, including the Intel Xeon Phi coprocessor. The partitioning framework used in GTFock is useful for comparing existing and future partitioning techniques; the best partitioning scheme may depend on the size of the problem, the computing system used and the parallelism available.

In Fock matrix construction, each thread sums into its own copy of the Fock submatrices in order to avoid contention for a single copy of the Fock matrix on a node. However, accelerators such as the Intel Xeon Phi coprocessor have limited memory per core, making this strategy impossible when reducing across many threads, so novel solutions had to be designed. Figure 2 shows speedup results from running the GTFock code.

A deficiency that Georgia Tech saw had to be addressed in quantum chemistry codes is the bottleneck of computing quantities called electron repulsion integrals. This is a very computationally intensive step: there are many of these integrals to calculate, and the calculations do not run efficiently on modern processors, including the Intel Xeon processor. One of the reasons is that existing codes do not take advantage of the single instruction, multiple data (SIMD) processing available on these processors. It is difficult for the algorithms to exploit SIMD operations because of their structure: the existing algorithms are recursive in multiple dimensions and require substantial amounts of intermediate data. In general, it is difficult to vectorize these calculations. Many attempts in the past involved taking existing libraries and rearranging code elements to try to optimize and speed up the calculations.
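To see why these integral kernels resist vectorization in their scalar form, and what batching buys, the following NumPy illustration (not SIMINT source code) evaluates one ingredient that appears inside Gaussian electron repulsion integrals, the zeroth-order Boys function, either one argument at a time or for a whole batch of same-type values at once. The batched layout is the "horizontal" arrangement that the SIMINT library, described next, is built around.

```python
# Illustration of horizontal vectorization (not SIMINT source code): evaluate
# the zeroth-order Boys function F0(T) = 0.5 * sqrt(pi/T) * erf(sqrt(T)),
# which appears inside Gaussian electron-repulsion integrals, either one
# argument at a time or for a whole batch of same-type integrals at once.
import math
import numpy as np
from scipy.special import erf

def boys0_scalar(T):
    # One value at a time: the loop-carried pattern that leaves SIMD lanes idle.
    if T < 1e-12:
        return 1.0                      # limit of F0(T) as T -> 0
    return 0.5 * math.sqrt(math.pi / T) * math.erf(math.sqrt(T))

def boys0_batched(T):
    # Whole batch at once: contiguous array operations map onto wide SIMD units.
    T = np.asarray(T, dtype=float)
    out = np.ones_like(T)               # F0(0) = 1
    mask = T > 1e-12
    Tm = T[mask]
    out[mask] = 0.5 * np.sqrt(np.pi / Tm) * erf(np.sqrt(Tm))
    return out

# A batch of arguments standing in for many shell quartets of the same type.
rng = np.random.default_rng(2)
T_batch = rng.uniform(0.0, 30.0, size=4096)

scalar = np.array([boys0_scalar(t) for t in T_batch])
batched = boys0_batched(T_batch)
assert np.allclose(scalar, batched)
```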
The Georgia Tech team felt it was necessary to create a new library for electron integral calculations from scratch. The library they created is called SIMINT, which stands for Single Instruction Multiple Integral (named by SIMINT library developer Ben Pritchard). The library applies SIMD instructions to compute multiple integrals at the same time, which is the efficient mode of operation of Intel Xeon processors as well as of the Intel Many Integrated Core (MIC) architecture, which has wide SIMD units.

SIMINT is a library for calculating electron repulsion integrals. The Georgia Tech PCC team designed it to use the SIMD features of Intel Xeon processors; it is highly efficient and faster than other state-of-the-art ERI codes. The approach is to use horizontal vectorization, so batches of integrals of the same type must be computed together. The Georgia Tech team has posted information so that users can take a look.

The team uses Intel VTune Amplifier extensively in optimizing SIMINT, because it helps tune vectorization and cache performance. Developers know how fast the processor can go and the speed limits of the calculation from the instructions they need to perform; Intel VTune Amplifier provides a variety of statistics at the level of individual lines of code that help determine why the expected performance is not being reached. Figure 3 shows an approximately 2x speedup over libint with a test case that has many worst-case configurations. Figure 4 shows a 3x speedup for another basis set without worst-case configurations.

“SIMINT has been designed specifically to efficiently use SIMD features of Intel processors and coprocessors. As a result, we’re already seeing speedups of 2x to 3x over the best existing codes,” says Edmond Chow, Associate Professor of Computational Science and Engineering and Director of the Georgia Institute of Technology Intel PCC.

“GTFock has attracted the attention of other developers of quantum chemistry packages. We have already integrated GTFock into PSI4 to provide distributed memory parallel capabilities to that package. In addition, we have exchanged visits with the developers of the NWChem package to initiate integration of GTFock into NWChem (joint work with Edo Apra and Karol Kowalski, PNNL). Along with SIMINT, we hope to help quantum chemists get their simulations — and their science — done faster,” states Chow.

Other articles in this series cover the modernization of other popular chemistry codes.

Linda Barney is the founder and owner of Barney and Associates, a technical/marketing writing, training and web design firm in Beaverton, OR.


News Article | August 15, 2016
Site: http://www.scientificcomputing.com/rss-feeds/all/rss.xml/all

Quantum computing remains mysterious and elusive to many, but USC Viterbi School of Engineering researchers might have taken us one step closer to bringing the superpowered devices to practical reality. The Information Sciences Institute at USC Viterbi is home to the USC-Lockheed Martin Quantum Computing Center (QCC), a supercooled, magnetically shielded facility specially built to house the first commercially available quantum optimization processors – devices so advanced that there are currently only two in use outside the Canadian company D-Wave Systems, where they were built: the first went to USC and Lockheed Martin, the second to NASA and Google.

Quantum computers encode data in quantum bits, or “qubits,” which can represent the digits one and zero at the same time – as opposed to traditional bits, which encode distinctly either a one or a zero. This property, called superposition, along with the ability of quantum states to “interfere” (cancel or reinforce each other like waves in a pond) and to “tunnel” through energy barriers, is what may one day allow quantum processors to perform optimization calculations much faster than is possible using traditional processors. Optimization problems can take many forms, and quantum processors have been theorized to be useful for a variety of machine learning and big data problems such as stock portfolio optimization, image recognition and classification, and anomaly detection.

Yet, because of the exotic way in which quantum computers process information, they are highly sensitive to errors of different kinds. When such errors occur, they can erase any quantum computational advantage — so developing methods to overcome errors is of paramount importance in the quest to demonstrate “quantum supremacy.”

USC researchers Walter Vinci, Tameem Albash and Daniel Lidar have put forth a scheme to minimize errors. Their solution, explained in the article “Nested Quantum Annealing Correction” published in the journal npj Quantum Information, focuses on reducing and correcting errors associated with heating, a type of error that is common and particularly detrimental in quantum optimizers. Cooling the quantum processor further is not possible, since the specialized dilution refrigerator that keeps it cool already operates at its limit, at a temperature approximately 1,000 times colder than outer space.

Vinci, Albash and Lidar have developed a new method to suppress heating errors: by coupling several qubits together on a D-Wave Two quantum optimizer, without changing the hardware of the device, these qubits act effectively as one qubit that experiences a lower temperature. The more qubits are coupled, the lower the effective temperature, allowing the researchers to minimize heating as a source of noise or error. This nesting scheme is implementable not only on platforms such as the D-Wave processor on which it was tested, but also on future quantum optimization devices with different hardware architectures. The researchers believe this work is an important step toward eliminating a bottleneck for scalable quantum optimization implementations.

“Our work is part of a large-scale effort by the research community aimed at realizing the potential of quantum information processing, which we all hope might one day surpass its classical counterparts,” said Lidar, a USC Viterbi professor and QCC scientific director.
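One way to build intuition for the nesting idea, as a toy model of our own rather than the authors' actual construction, is a classical Monte Carlo sketch: replace one qubit by C classical spins that are ferromagnetically locked together and share the same weak bias field. At a fixed physical temperature, the chance that the block's net orientation points against the bias drops quickly as C grows, which is another way of saying the block behaves like a single spin at a lower effective temperature.

```python
# Toy illustration (not the authors' method): C ferromagnetically coupled
# classical spins in a weak bias field behave like one spin at a lower
# effective temperature; thermal "errors", meaning the block pointing against
# the bias, become rarer as C grows, mimicking the nesting scheme's effect.
import numpy as np

def error_probability(C, steps=200_000, J=2.0, h=0.2, T=1.0, seed=0):
    rng = np.random.default_rng(seed)
    s = np.ones(C)                                  # start aligned with the bias

    def energy(spins):
        # All-to-all ferromagnetic coupling inside the block plus a bias field.
        pair = np.sum(np.outer(spins, spins)) - C   # drop the self-terms
        return -J * pair / 2.0 - h * np.sum(spins)

    E = energy(s)
    errors = 0
    for _ in range(steps):
        i = rng.integers(C)                         # propose a single spin flip
        s[i] *= -1
        E_new = energy(s)
        if rng.random() < np.exp(min(0.0, -(E_new - E) / T)):
            E = E_new                               # accept the flip
        else:
            s[i] *= -1                              # reject, restore the spin
        if np.sum(s) < 0:                           # block majority against the bias
            errors += 1
    return errors / steps

for C in (1, 2, 4, 8):
    print(f"C = {C}: P(block against bias) ~ {error_probability(C):.4f}")
```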
