News Article
Site: http://www.cemag.us/rss-feeds/all/rss.xml/all

An international group of physicists led by the University of Arkansas has created an artificial material with a structure comparable to graphene. “We’ve basically created the first artificial graphene-like structure with transition metal atoms in place of carbon atoms,” says Jak Chakhalian, professor of physics and director of the Artificial Quantum Materials Laboratory. In 2014, Chakhalian was selected as a quantum materials investigator for the Gordon and Betty Moore Foundation. His selection came with a $1.8 million grant, a portion of which funded the study. Graphene, discovered in 2004, is a one-atom-thick sheet of graphite. Graphene transistors are predicted to be substantially faster and more heat tolerant than today’s silicon transistors and may result in more efficient computers and the next generation of flexible electronics. Its discoverers were awarded the Nobel Prize in physics in 2010. The University of Arkansas-led group published its findings this week in Physical Review Letters, the journal of the American Physical Society, in a paper titled “Mott Electrons in an Artificial Graphene-like Crystal of Rare Earth Nickelate.” “This discovery gives us the ability to create graphene-like structures for many other elements,” says Srimanta Middey, a postdoctoral research associate at the University of Arkansas who led the study. The research group also included postdoctoral research associates Michael Kareev and Yanwei Cao, doctoral student Xiaoran Liu, and recent doctoral graduate Derek Meyers, now at Brookhaven National Laboratory. Additional members of the group were David Doennig of the University of Munich; Rossitza Pentcheva of the University of Duisburg-Essen in Germany; Zhenzhong Yang, Jinan Shi, and Lin Gu of the Chinese Academy of Sciences; and John W. Freeland and Phillip Ryan of the Advanced Photon Source at Argonne National Laboratory near Chicago. The research was also partially funded by the Chinese Academy of Sciences.
The abstract of their report reads: Deterministic control over the periodic geometrical arrangement of the constituent atoms is the backbone of the material properties, which, along with the interactions, define the electronic and magnetic ground state. Following this notion, a bilayer of a prototypical rare-earth nickelate, NdNiO3, combined with a dielectric spacer, LaAlO3, has been layered along the pseudocubic [111] direction. The resulting artificial graphenelike Mott crystal with magnetic 3d electrons has antiferromagnetic correlations. In addition, a combination of resonant X-ray linear dichroism measurements and ab initio calculations reveal the presence of an ordered orbital pattern, which is unattainable in either bulk nickelates or nickelate-based heterostructures grown along the [001] direction. These findings highlight another promising venue towards designing new quantum many-body states by virtue of geometrical engineering.
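The graphene analogy is geometric: in a pseudocubic perovskite such as NdNiO3, the Ni atoms occupy a simple cubic lattice, and the Ni sites of two adjacent (111) planes, projected along the [111] stacking direction, form a buckled honeycomb net with the same threefold connectivity as carbon in graphene. The sketch below (pure Python, illustrative units, not code from the study) constructs the two planes and verifies the honeycomb coordination:

```python
import math

a = 1.0  # pseudocubic lattice constant (arbitrary units)

# Ni sites of a simple cubic lattice, restricted to two adjacent (111)
# planes: plane index n1 + n2 + n3 = 0 (sublattice A) and = 1 (sublattice B).
def plane_sites(plane, n=6):
    sites = []
    for n1 in range(-n, n + 1):
        for n2 in range(-n, n + 1):
            n3 = plane - n1 - n2
            if -n <= n3 <= n:
                sites.append((a * n1, a * n2, a * n3))
    return sites

# Orthonormal in-plane basis perpendicular to the [111] stacking direction.
e1 = (1 / math.sqrt(2), -1 / math.sqrt(2), 0)
e2 = (1 / math.sqrt(6), 1 / math.sqrt(6), -2 / math.sqrt(6))

def project(p):
    return (sum(c * u for c, u in zip(p, e1)),
            sum(c * u for c, u in zip(p, e2)))

A = [project(p) for p in plane_sites(0)]
B = [project(p) for p in plane_sites(1)]

# Take the sublattice-A site at the center and count its nearest
# neighbors in sublattice B: a honeycomb net has exactly three.
origin = min(A, key=lambda q: q[0] ** 2 + q[1] ** 2)
dists = sorted(math.dist(origin, q) for q in B)
nn = dists[0]
coordination = sum(1 for d in dists if abs(d - nn) < 1e-9)
print(coordination)
```

Each projected Ni site of one sublattice has exactly three nearest neighbors in the other sublattice, which is the defining connectivity of the honeycomb lattice.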


News Article
Site: http://www.nanotech-now.com/

Abstract: Scientists at the U.S. Department of Energy's Brookhaven National Laboratory, Cornell University, and collaborators have produced the first direct evidence of a state of electronic matter first predicted by theorists in 1964. The discovery, described in a paper published online April 13, 2016, in Nature, may provide key insights into the workings of high-temperature superconductors. The prediction was that "Cooper pairs" of electrons in a superconductor could exist in two possible states. They could form a "superfluid" where all the particles are in the same quantum state and all move as a single entity, carrying current with zero resistance -- what we usually call a superconductor. Or the Cooper pairs could periodically vary in density across space, a so-called "Cooper pair density wave." For decades, this novel state has been elusive, possibly because no instrument capable of observing it existed. Now a research team led by J.C. Séamus Davis, a physicist at Brookhaven Lab and the James Gilbert White Distinguished Professor in the Physical Sciences at Cornell, and Andrew P. Mackenzie, Director of the Max Planck Institute for Chemical Physics of Solids in Dresden, Germany, has developed a new way to use a scanning tunneling microscope (STM) to image Cooper pairs directly. The studies were carried out by research associate Mohammed Hamidian (now at Harvard) and graduate student Stephen Edkins (University of St Andrews in Scotland), working as members of Davis' research group at Cornell and with Kazuhiro Fujita, a physicist in Brookhaven Lab's Condensed Matter Physics and Materials Science Department. Superconductivity was first discovered in metals cooled almost to absolute zero (-273.15 degrees Celsius or -459.67 Fahrenheit). Recently developed materials called cuprates, copper oxides laced with other atoms, superconduct at temperatures as "high" as 148 degrees above absolute zero (-125 Celsius).
In superconductors, electrons join in pairs that are magnetically neutral so they do not interact with atoms and can move without resistance. Hamidian and Edkins studied a cuprate incorporating bismuth, strontium, and calcium (Bi2Sr2CaCu2O8) using an incredibly sensitive STM that scans a surface with sub-nanometer resolution, on a sample that is refrigerated to within a few thousandths of a degree above absolute zero. At these temperatures, Cooper pairs can hop across short distances from one superconductor to another, a phenomenon known as Josephson tunneling. To observe Cooper pairs, the researchers briefly lowered the tip of the probe to touch the surface and pick up a flake of the cuprate material. Cooper pairs could then tunnel between the superconductor surface and the superconducting tip. The instrument became, Davis said, "the world's first scanning Josephson tunneling microscope." The flow of current made of Cooper pairs between the sample and the tip reveals the density of Cooper pairs at any point, and it showed periodic variations across the sample, with a wavelength of four crystal unit cells. The team had found a Cooper pair density wave state in a high-temperature superconductor, confirming the 50-year-old prediction. A collateral finding was that Cooper pairs were not seen in the vicinity of a few zinc atoms that had been introduced as impurities, making the overall map of Cooper pairs into "Swiss cheese." The researchers noted that their technique could be used to search for Cooper-pair density waves in other cuprates as well as more recently discovered iron-based superconductors. This work was supported by a grant to Davis from the EPiQS Program of the Gordon and Betty Moore Foundation and by the U.S. Department of Energy's Office of Science. The collaboration also included scientists in Scotland, Germany, Japan and Korea.

About Brookhaven National Laboratory

Brookhaven National Laboratory is supported by the Office of Science of the U.S.
Department of Energy. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov. One of ten national laboratories overseen and primarily funded by the Office of Science of the U.S. Department of Energy (DOE), Brookhaven National Laboratory conducts research in the physical, biomedical, and environmental sciences, as well as in energy technologies and national security. Brookhaven Lab also builds and operates major scientific facilities available to university, industry and government researchers. Brookhaven is operated and managed for DOE's Office of Science by Brookhaven Science Associates, a limited-liability company founded by the Research Foundation for the State University of New York on behalf of Stony Brook University, the largest academic user of Laboratory facilities, and Battelle, a nonprofit applied science and technology organization.
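The measurement idea can be caricatured numerically. The Josephson critical current between two superconductors scales with the local Cooper-pair density, so recording that current at each tip position yields a pair-density map. The toy model below (pure Python; the grid size, modulation amplitude, and impurity positions are invented for illustration, and this is not the team's analysis code) builds a one-dimensional density profile with the reported four-unit-cell modulation, suppresses it near hypothetical zinc sites to mimic the "Swiss cheese" effect, and recovers the modulation wavelength with a discrete Fourier transform:

```python
import cmath
import math

n = 64                 # tip positions, one per crystal unit cell (toy grid)
period = 4             # modulation wavelength: four unit cells, as reported
impurities = [20, 45]  # hypothetical zinc-impurity sites ("Swiss cheese")

def pair_density(x):
    rho = 1.0 + 0.5 * math.cos(2 * math.pi * x / period)  # density wave
    for imp in impurities:                                # local suppression
        rho *= 1.0 - math.exp(-((x - imp) ** 2) / 2.0)
    return rho

signal = [pair_density(x) for x in range(n)]

# Discrete Fourier transform; the dominant nonzero harmonic gives the period.
def dft_mag(k):
    return abs(sum(s * cmath.exp(-2j * math.pi * k * x / n)
                   for x, s in enumerate(signal)))

k_peak = max(range(1, n // 2 + 1), key=dft_mag)
print(n / k_peak)  # recovered wavelength in unit cells: 4.0
```

Even with the impurity "holes" punched into the map, the period-four harmonic dominates the spectrum, which is why the density wave remains visible despite the zinc sites.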


Physicists prove energy input predicts molecular behavior: Theoretical proof could lead to more reliable nanomachines

Abstract: The world within a cell is a chaotic space, where the quantity and movement of molecules and proteins are in constant flux. Trying to predict how widely a protein or process may fluctuate is essential to knowing how well a cell is performing. But such predictions are hard to pin down in a cell’s open system, where everything can look hopelessly random. Now physicists at MIT have proved that at least one factor can set a limit, or bound, on a given protein or process’ fluctuations: energy. Given the amount of energy that a cell is spending, or dissipating, the fluctuations in a particular protein’s quantity, for example, must be within a specific range; fluctuations outside this range would be deemed impossible, according to the laws of thermodynamics. This idea also works in the opposite direction: Given a range of fluctuations in, say, the rate of a motor protein’s rotation, the researchers can determine the minimum amount of energy that the cell must be expending to drive that rotation. “This ends up being a very powerful, general statement about what is physically possible, or what is not physically possible, in a microscopic system,” says Jeremy England, the Thomas D. and Virginia W. Cabot Assistant Professor of Physics at MIT. “It’s also a generally applicable design constraint for the architecture of anything you want to make at the nanoscale.” For instance, knowing how energy and microscopic fluctuations relate will help scientists design more reliable nanomachines, for applications ranging from drug delivery to fuel cell technology. These tiny synthetic machines are designed to mimic a molecule’s motor-like behavior, but getting them to perform reliably at the nanoscale has proven extremely difficult.
“This is a general proof that shows that how much energy you feed the system is related in a quantitative way to how reliable you’ve made it,” England says. “Having this constraint immediately gives you intuition, and a sort of road-ready yardstick to hold up to whatever it is you’re trying to design, to see if it’s feasible, and to direct it toward things that are feasible.” England and his colleagues, including Physics of Living Systems Fellow Todd Gingrich, postdoc Jordan Horowitz, and graduate student Nikolay Perunov, have published their results this week in Physical Review Letters.

Making sense of microscopic motions

The researchers’ paper was inspired by another study published last summer by scientists in Germany, who speculated that a cell’s energy dissipation might shape the fluctuations in certain microscopic processes. That paper addressed only typical fluctuations. England and his colleagues wondered whether the same results could be extended to include rare, “freak” instances, such as a sudden, temporary spike in a cell’s protein quantity. The team started with a general master equation, a model that describes the motion of small systems, whether in the copy number or the rotation direction of a given protein. The researchers then employed large deviation theory, a mathematical technique used to determine the probability distributions of processes that occur over a long period of time, to evaluate how a microscopic system such as a rotating protein would behave. They then calculated, essentially, how the system fluctuated over a long period of time — for instance, how often a protein rotated clockwise versus counterclockwise — and then developed a probability distribution for those fluctuations. That distribution turns out to have a general form, which the team found could be bounded, or limited, by a simple mathematical expression.
When they translated this expression into thermodynamic terms, to apply to the fluctuations in cells and other microscopic systems, they found that the bound boiled down to energy dissipation. In other words, how a microscopic system fluctuates is constrained by the energy put into the system. “We have in mind trying to make some sense of molecular systems,” Gingrich says. “What this proof tells us is, even without observing every single feature, by measuring the amount of energy lost from the system to the environment, it teaches us and limits the set of possibilities of what could be going on with the microscopic motions.”

Pushing out of equilibrium

The team found that the minimum amount of energy required to produce a given distribution of fluctuations is related to a state that is “near-equilibrium.” Systems that are at equilibrium are essentially at rest, with no energy coming in or out of the system. Any movement within the system is entirely due to the effect of the surrounding temperature, and therefore, fluctuations in whether a protein turns clockwise or counterclockwise, for example, are completely random, with an equal chance of rotating in either direction. Near-equilibrium systems are close to this state of rest; directional motion is generated by a small input of energy, but many features of the motion still appear as they do in equilibrium. Most living systems, however, operate far from equilibrium, with so much energy constantly flowing into and out of a cell that the fluctuations of molecular proteins and processes do not resemble anything in equilibrium. Lacking a similarity to equilibrium, it has been hard for scientists to uncover many general features of nonequilibrium fluctuations. England and his colleagues have shown that a comparison can nevertheless be made: Fluctuations occurring far from equilibrium must be at least as large as those that occur near equilibrium.
The team says scientists can use the relationships established in its proof to understand the energy requirements in certain cellular systems, as well as to design reliable synthetic molecular machines. “One of the things that’s confusing about life is, it happens on a microscopic scale where there are a lot of processes that look pretty random,” Gingrich says. “We view this proof as a signpost: Here is one thing that at least must be true, even in those extreme, far-from-equilibrium situations where life is operating.” This research was supported in part by the Gordon and Betty Moore Foundation.
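A bound of this kind, now commonly called a thermodynamic uncertainty relation, says that for a current J accumulated over a time T in a nonequilibrium steady state, the relative fluctuations Var(J)/<J>^2 are at least 2/(sigma*T), where sigma is the entropy production rate in units of Boltzmann's constant. The sketch below (pure Python; the two-rate hopping model and all parameter values are invented for illustration, not taken from the paper) simulates a driven molecular motor as a biased continuous-time random walk and checks that the measured fluctuations respect the dissipation bound:

```python
import math
import random

random.seed(1)

k_fwd, k_bwd = 5.0, 1.0   # hypothetical forward/backward hopping rates
T = 20.0                  # observation time per trajectory
trials = 3000

def net_steps():
    """One continuous-time trajectory: net count of forward minus backward hops."""
    t, j = 0.0, 0
    while True:
        t += random.expovariate(k_fwd + k_bwd)  # waiting time to next hop
        if t > T:
            return j
        j += 1 if random.random() < k_fwd / (k_fwd + k_bwd) else -1

samples = [net_steps() for _ in range(trials)]
mean = sum(samples) / trials
var = sum((s - mean) ** 2 for s in samples) / (trials - 1)

# Entropy production over time T (in units of k_B): each net forward hop
# dissipates ln(k_fwd / k_bwd), at a mean rate of (k_fwd - k_bwd) hops/time.
sigma_T = (k_fwd - k_bwd) * math.log(k_fwd / k_bwd) * T

lhs = var / mean ** 2     # measured relative fluctuations of the current
rhs = 2.0 / sigma_T       # dissipation bound on those fluctuations
print(lhs >= rhs)
```

Moving the ratio k_fwd/k_bwd toward 1 brings the walker near equilibrium, where the bound tightens; strong driving loosens it, matching the article's point that far-from-equilibrium fluctuations must be at least as large as those near equilibrium.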




News Article | February 23, 2016
Site: http://www.techtimes.com/rss/sections/environment.xml

Researchers classified marine fishery species according to similar temperature and depth distributions and found that the groups respond similarly to climate change effects. Interactions between individual species, however, may be affected by different factors like food competition, predator-prey relationships and available habitat. For a study published in PLOS One, researchers from the National Oceanic and Atmospheric Administration (NOAA) evaluated the magnitude and pace of the effects of climate change for bottom-dwelling species in the U.S. Northeast Shelf. Almost 70 of these species were grouped into four assemblages, or species groups sharing a common environmental niche. "Regional differences in the effects of climate change on the movement and extent of species assemblages hold important implications for management, mitigation of climate change effects and adaptation," said Kristin Kleisner, the study's lead author, of the Ecosystem Assessment Program of NOAA's Northeast Fisheries Science Center (NEFSC). Earlier studies have looked at how climate change affects individual species, but research had not been taken to the ecosystem's community level, where variations in local climate, oceanographic conditions and topography play a crucial role. According to the study's hypothesis, species groups sharing the same depth and temperature distribution respond similarly to climate effects. To test this, the researchers compared shifts in species distribution using data from bottom trawl surveys carried out in the spring and fall by the NEFSC between 1968 and 2012. Based on their analysis, the researchers found that species assemblages follow consistent patterns in the rate and direction of distribution shifts. For instance, species groups associated with the shallower, warmer waters of Georges Bank and the Mid-Atlantic Bight tend to shift strongly northeast, following changes in temperature bands along the shelf.
Aside from implications for how predators and prey interact, the results also hint at the possible economic impact of shifting species distributions. For instance, local fishing communities may lose access to stocks or will have to deal with higher travel and fuel costs as they seek out species that have shifted farther away. The study was carried out in partnership with The Nature Conservancy and with financial support from the Gordon and Betty Moore Foundation. Malin Pinsky, Katherine Weaver, Laurel Smith, Vincent Saba, Jay Odell, Christopher McGuire, Sean Lucey, Jonathan Hare, Jennifer Greene, Paula Fratantoni, Analie Barnett, Sally McGee and Michael Fogarty also contributed to the research.
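The grouping step in this kind of analysis, assigning species to assemblages that share an environmental niche, can be sketched with a simple clustering pass. Everything below is hypothetical: the species labels and their (temperature, depth) niche summaries are invented, and a basic two-group k-means stands in for the study's actual statistical method, which is not reproduced here.

```python
import math
import random

# Hypothetical niche summaries: (mean bottom temperature degC, mean depth m).
species = {
    "species_A": (6.0, 150.0), "species_B": (6.5, 170.0),
    "species_C": (7.0, 140.0), "species_D": (5.5, 160.0),
    "species_E": (14.0, 40.0), "species_F": (15.0, 55.0),
    "species_G": (13.5, 35.0), "species_H": (14.5, 50.0),
}

def kmeans(points, k, iters=20):
    """Basic k-means: assign each point to the nearest center, recompute."""
    random.seed(0)
    centers = random.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centers[c]))
            groups[i].append(p)
        centers = [tuple(sum(c) / len(g) for c in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return {p: min(range(k), key=lambda c: math.dist(p, centers[c]))
            for p in points}

labels = kmeans(list(species.values()), k=2)
assemblage = {name: labels[niche] for name, niche in species.items()}
# Cold/deep species end up in one assemblage, warm/shallow in the other.
```

A real analysis would cluster full survey-derived distributions rather than two-number summaries, but the principle is the same: species whose temperature and depth profiles track each other land in the same assemblage.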
