Landau Institute for Theoretical Physics

Moscow Region, Russia

News Article | May 10, 2017
Site: www.scientificamerican.com

The origins of space and time are among the most mysterious and contentious topics in science. Our February 2017 article “Pop Goes the Universe” argues against the dominant idea that the early cosmos underwent an extremely rapid expansion called inflation. Its authors instead advocate for another scenario—that our universe began not with a bang but with a bounce from a previously contracting cosmos. In the letter below, a group of 33 physicists who study inflationary cosmology respond to that article. It is followed by a reply from the authors (an extended version of their reply can be found here).

In “Pop Goes the Universe,” by Anna Ijjas, Paul J. Steinhardt and Abraham Loeb, the authors (hereafter “IS&L”) make the case for a bouncing cosmology, as was proposed by Steinhardt and others in 2001. They close by making the extraordinary claim that inflationary cosmology “cannot be evaluated using the scientific method” and go on to assert that some scientists who accept inflation have proposed “discarding one of [science’s] defining properties: empirical testability,” thereby “promoting the idea of some kind of nonempirical science.” We have no idea what scientists they are referring to. We disagree with a number of statements in their article, but in this letter, we will focus on our categorical disagreement with these statements about the testability of inflation.

There is no disputing the fact that inflation has become the dominant paradigm in cosmology. Many scientists from around the world have been hard at work for years investigating models of cosmic inflation and comparing their predictions with empirical observations. According to the high-energy physics database INSPIRE, there are now more than 14,000 papers in the scientific literature, written by over 9,000 distinct scientists, that use the word “inflation” or “inflationary” in their titles or abstracts. By claiming that inflationary cosmology lies outside the scientific method, IS&L are dismissing the research of not only all the authors of this letter but also that of a substantial contingent of the scientific community. Moreover, as the work of several major, international collaborations has made clear, inflation is not only testable, but it has been subjected to a significant number of tests and so far has passed every one.

Inflation is not a unique theory but rather a class of models based on similar principles. Of course, nobody believes that all these models are correct, so the relevant question is whether there exists at least one model of inflation that seems well motivated, in terms of the underlying particle physics assumptions, and that correctly describes the measurable properties of our universe. This is very similar to the early steps in the development of the Standard Model of particle physics, when a variety of quantum field theory models were explored in search of one that fit all the experiments. Although there is in principle a wide space of inflationary models to examine, there is a very simple class of inflationary models (technically, “single-field slow-roll” models) that all give very similar predictions for most observable quantities—predictions that were clearly enunciated decades ago. These “standard” inflationary models form a well-defined class that has been studied extensively. (IS&L have expressed strong opinions about what they consider to be the simplest models within this class, but simplicity is subjective, and we see no reason to restrict attention to such a narrow subclass.)
Some of the standard inflationary models have now been ruled out by precise empirical data, and this is part of the desirable process of using observation to thin out the set of viable models. But many models in this class continue to be very successful empirically.

The standard inflationary models predict that the universe should have a critical mass density (that is, it should be geometrically flat), and they also predict the statistical properties of the faint ripples that we detect in the cosmic microwave background (CMB). First, the ripples should be nearly “scale-invariant,” meaning that they have nearly the same intensity at all angular scales. Second, the ripples should be “adiabatic,” meaning that the perturbations are the same in all components: the ordinary matter, radiation and dark matter all fluctuate together. Third, they should be “Gaussian,” which is a statement about the statistical patterns of relatively bright and dark regions. Fourth and finally, the models also make predictions for the patterns of polarization in the CMB, which can be divided into two classes, called E-modes and B-modes. The predictions for the E-modes are very similar for all standard inflationary models, whereas the levels of B-modes, which are a measure of gravitational radiation in the early universe, vary significantly within the class of standard models.

The remarkable fact is that, starting with the results of the Cosmic Background Explorer (COBE) satellite in 1992, numerous experiments have confirmed that these predictions (along with several others too technical to discuss here) accurately describe our universe. The average mass density of the universe has now been measured to an accuracy of about half of a percent, and it agrees perfectly with the prediction of inflation. (When inflation was first proposed, the average mass density was uncertain by at least a factor of three, so this is an impressive success.) The ripples of the CMB have been measured carefully by two more satellite experiments, the Wilkinson Microwave Anisotropy Probe (WMAP) and the Planck satellite, as well as many ground- and balloon-based experiments—all confirming that the primordial fluctuations are indeed nearly scale-invariant and very accurately adiabatic and Gaussian, precisely as predicted (ahead of time) by standard models of inflation. The B-modes of polarization have not yet been seen, which is consistent with many, though not all, of the standard models, and the E-modes are found to agree with the predictions. In 2016 the Planck satellite team (a collaboration of about 260 authors) summarized its conclusions by saying that “the Planck results offer powerful evidence in favour of simple inflationary models.” So if inflation is untestable, as IS&L would have us believe, why have there been so many tests of it and with such remarkable success?

While the successes of inflationary models are unmistakable, IS&L nonetheless make the claim that inflation is untestable. (We are bewildered by IS&L’s assertion that the dramatic observational successes of inflation should be discounted while they accuse the advocates of inflation of abandoning empirical science!) They contend, for example, that inflation is untestable because its predictions can be changed by varying the shape of the inflationary energy density curve or the initial conditions. But the testability of a theory in no way requires that all its predictions be independent of the choice of parameters.
If such parameter independence were required, then we would also have to question the status of the Standard Model, with its empirically determined particle content and 19 or more empirically determined parameters. An important point is that standard inflationary models could have failed any of the empirical tests described above, but they did not. IS&L write about how “a failing theory gets increasingly immunized against experiment by attempts to patch it,” insinuating that this has something to do with inflation. But despite IS&L’s rhetoric, it is standard practice in empirical science to modify a theory as new data come to light, as, for example, the Standard Model has been modified to account for newly discovered quarks and leptons. For inflationary cosmology, meanwhile, there has so far been no need to go beyond the class of standard inflationary models.

IS&L also assert that inflation is untestable because it leads to eternal inflation and a multiverse. Yet although the possibility of a multiverse is an active area of study, this possibility in no way interferes with the empirical testability of inflation. If the multiverse picture is valid, then the Standard Model would be properly understood as a description of the physics in our visible universe, and similarly the models of inflation that are being refined by current observations would describe the ways inflation can happen in our particular part of the universe. Both theories would remain squarely within the domain of empirical science. Scientists would still be able to compare newly obtained data—from astrophysical observations and particle physics experiments—with precise, quantitative predictions of specific inflationary and particle physics models. Note that this issue is separate from the loftier goal of developing a theoretical framework that can predict, without the use of observational data, the specific models of particle physics and inflation that should be expected to describe our visible universe.

Like any scientific theory, inflation need not address all conceivable questions. Inflationary models, like all scientific theories, rest on a set of assumptions, and to understand those assumptions we might need to appeal to some deeper theory. This, however, does not undermine the success of inflationary models. The situation is similar to the standard hot big bang cosmology: the fact that it left several questions unresolved, such as the near-critical mass density and the origin of structure (which are solved elegantly by inflation), does not undermine its many successful predictions, including its prediction of the relative abundances of light chemical elements. The fact that our knowledge of the universe is still incomplete is absolutely no reason to ignore the impressive empirical success of the standard inflationary models.

During the more than 35 years of its existence, inflationary theory has gradually become the main cosmological paradigm describing the early stages of the evolution of the universe and the formation of its large-scale structure. No one claims that inflation has become certain; scientific theories don’t get proved the way mathematical theorems do, but as time passes, the successful ones become better and better established by improved experimental tests and theoretical advances. This has happened with inflation. Progress continues, supported by the enthusiastic efforts of many scientists who have chosen to participate in this vibrant branch of cosmology. Empirical science is alive and well!
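To make the “nearly scale-invariant” prediction discussed above concrete, here is a minimal sketch of the standard power-law parametrization of the primordial spectrum used for single-field slow-roll models, assuming representative Planck-era values (approximate, for illustration only; not taken from the letter):

```python
import numpy as np

# Primordial curvature power spectrum assumed in standard single-field
# slow-roll inflation, usually parametrized as a power law:
#   P(k) = A_s * (k / k_star)**(n_s - 1)
# The values below are approximate Planck-era estimates, for illustration.
A_s = 2.1e-9        # scalar amplitude (approximate)
n_s = 0.965         # spectral index; n_s = 1 would be exactly scale-invariant
k_star = 0.05       # pivot scale in 1/Mpc

def primordial_power(k):
    """Nearly scale-invariant power-law spectrum P(k)."""
    return A_s * (k / k_star) ** (n_s - 1.0)

# Because n_s is close to 1, the spectrum varies only slowly with scale:
for k in (0.005, 0.05, 0.5):   # wavenumbers in 1/Mpc
    print(f"k = {k:5.3f} 1/Mpc  ->  P(k) = {primordial_power(k):.3e}")
```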
Alan H. Guth, Victor F. Weisskopf Professor of Physics, Massachusetts Institute of Technology (http://web.mit.edu/physics/people/faculty/guth_alan.html)
David I. Kaiser, Germeshausen Professor of the History of Science and Professor of Physics, Massachusetts Institute of Technology (http://web.mit.edu/physics/people/faculty/kaiser_david.html)
Andrei D. Linde, Harald Trap Friis Professor of Physics, Stanford University (https://physics.stanford.edu/people/faculty/andrei-linde)
Yasunori Nomura, Professor of Physics and Director, Berkeley Center for Theoretical Physics, University of California, Berkeley (http://physics.berkeley.edu/people/faculty/yasunori-nomura)
Charles L. Bennett, Bloomberg Distinguished Professor and Alumni Centennial Professor of Physics and Astronomy, Johns Hopkins University; Principal Investigator, Wilkinson Microwave Anisotropy Probe (WMAP) mission; Deputy Principal Investigator and Science Working Group member, Cosmic Background Explorer (COBE) mission (http://physics-astronomy.jhu.edu/directory/charles-l-bennett/)
J. Richard Bond, University Professor, University of Toronto and Director, Canadian Institute for Advanced Research Cosmology and Gravity Program, Canadian Institute for Theoretical Astrophysics; Member of the Planck collaboration (http://www.cita.utoronto.ca/~bond/)
François Bouchet, Director of Research, Institut d’Astrophysique de Paris, CNRS and Sorbonne Université-UPMC; Deputy Principal Investigator, Planck satellite HFI (High Frequency Instrument) Consortium and Member, Planck Science Team (http://savoirs.ens.fr/conferencier.php?id=145)
Sean Carroll, Research Professor of Physics, California Institute of Technology (http://www.astro.caltech.edu/people/faculty/Sean_Carroll.html)
George Efstathiou, Professor of Astrophysics, Kavli Institute for Cosmology, University of Cambridge; Member, Planck Science Team (http://www.ast.cam.ac.uk/~gpe/)
Stephen Hawking, Lucasian Professor of Mathematics (Emeritus) and Dennis Stanton Avery and Sally Tsui Wong-Avery Director of Research, Department of Applied Mathematics and Theoretical Physics, University of Cambridge (http://www.damtp.cam.ac.uk/people/s.w.hawking/)
Renata Kallosh, Professor of Physics, Stanford University (https://physics.stanford.edu/people/faculty/renata-kallosh)
Eiichiro Komatsu, Director of the Department of Physical Cosmology, Max-Planck-Institut für Astrophysik, Garching; Member, Wilkinson Microwave Anisotropy Probe (WMAP) collaboration (http://wwwmpa.mpa-garching.mpg.de/~komatsu/)
Lawrence Krauss, Foundation Professor in the School of Earth and Space Exploration and Department of Physics, and Director, The Origins Project at Arizona State University (http://krauss.faculty.asu.edu)
David H. Lyth, Professor of Physics (Emeritus), Lancaster University (http://www.lancaster.ac.uk/physics/about-us/people/david-lyth)
Juan Maldacena, Carl P. Feinberg Professor of Physics, Institute for Advanced Study (https://www.sns.ias.edu/malda)
John C. Mather, Senior Astrophysicist and Goddard Fellow, NASA Goddard Space Flight Center and recipient of the Nobel Prize in Physics (2006); Project Scientist, Cosmic Background Explorer (COBE) mission and Senior Project Scientist, James Webb Space Telescope (https://science.gsfc.nasa.gov/sed/bio/john.c.mather)
Hiranya Peiris, Professor of Astrophysics, University College London and Director, Oskar Klein Centre for Cosmoparticle Physics, Stockholm; Member, Wilkinson Microwave Anisotropy Probe (WMAP) collaboration and Planck collaboration (http://zuserver2.star.ucl.ac.uk/~hiranya/)
Malcolm Perry, Professor of Theoretical Physics, University of Cambridge (http://www.damtp.cam.ac.uk/people/m.j.perry/)
Lisa Randall, Frank B. Baird, Jr., Professor of Science, Department of Physics, Harvard University (https://www.physics.harvard.edu/people/facpages/randall)
Martin Rees, Astronomer Royal of Great Britain, former President of the Royal Society of London, and Professor (Emeritus) of Cosmology and Astrophysics, University of Cambridge (http://www.ast.cam.ac.uk/~mjr/)
Misao Sasaki, Professor, Yukawa Institute for Theoretical Physics, Kyoto University (http://www2.yukawa.kyoto-u.ac.jp/~misao.sasaki/)
Leonardo Senatore, Associate Professor of Physics, Stanford University (https://physics.stanford.edu/people/faculty/leonardo-senatore)
Eva Silverstein, Professor of Physics, Stanford University (https://physics.stanford.edu/people/faculty/eva-silverstein)
George F. Smoot III, Professor of Physics (Emeritus), Founding Director, Berkeley Center for Cosmological Physics, and recipient of the Nobel Prize in Physics (2006); Principal Investigator, Cosmic Background Explorer (COBE) mission (http://physics.berkeley.edu/people/faculty/george-smoot-iii)
Alexei Starobinsky, Principal Researcher, Landau Institute for Theoretical Physics, Moscow (http://www.itp.ac.ru/en/persons/starobinsky-aleksei-aleksandrovich/)
Leonard Susskind, Felix Bloch Professor of Physics and Wells Family Director, Stanford Institute for Theoretical Physics, Stanford University (https://physics.stanford.edu/people/faculty/leonard-susskind)
Michael S. Turner, Bruce V. Rauner Distinguished Service Professor, Department of Astronomy and Astrophysics and Department of Physics, University of Chicago (https://astro.uchicago.edu/people/michael-s-turner.php)
Alexander Vilenkin, L. and J. Bernstein Professor of Evolutionary Science and Director, Institute of Cosmology, Tufts University (http://cosmos2.phy.tufts.edu/vilenkin.html)
Steven Weinberg, Jack S. Josey-Welch Foundation Chair and Regental Professor and Director, Theory Research Group, Department of Physics, University of Texas at Austin, and recipient of the Nobel Prize in Physics (1979) (https://web2.ph.utexas.edu/~weintech/weinberg.html)
Rainer Weiss, Professor of Physics (Emeritus), Massachusetts Institute of Technology; Chair, Science Working Group, Cosmic Background Explorer (COBE) mission; Co-Founder, Laser Interferometer Gravitational-Wave Observatory (LIGO) (http://web.mit.edu/physics/people/faculty/weiss_rainer.html)
Frank Wilczek, Herman Feshbach Professor of Physics, Massachusetts Institute of Technology, and recipient of the Nobel Prize in Physics (2004) (http://web.mit.edu/physics/people/faculty/wilczek_frank.html)
Edward Witten, Charles Simonyi Professor of Physics, Institute for Advanced Study and recipient of the Fields Medal (1990) (https://www.sns.ias.edu/witten)
Matias Zaldarriaga, Professor of Astrophysics, Institute for Advanced Study (https://www.sns.ias.edu/matiasz)

THE AUTHORS REPLY: We have great respect for the scientists who signed the rebuttal to our article, but we are disappointed by their response, which misses our key point: the differences between the inflationary theory once thought to be possible and the theory as understood today. The claim that inflation has been confirmed refers to the outdated theory before we understood its fundamental problems. We firmly believe that in a healthy scientific community, respectful disagreement is possible and hence reject the suggestion that by pointing out problems, we are discarding the work of all of those who developed the theory of inflation and enabled precise measurements of the universe.

Historically, the thinking about inflation was based on a series of misunderstandings. It was not understood that the outcome of inflation is highly sensitive to initial conditions. And it was not understood that inflation generically leads to eternal inflation and, consequently, a multiverse—an infinite diversity of outcomes. Papers claiming that inflation predicts this or that ignore these problems. Our point is that we should be talking about the contemporary version of inflation, warts and all, not some defunct relic.

Logically, if the outcome of inflation is highly sensitive to initial conditions that are not yet understood, as the respondents concede, the outcome cannot be determined. And if inflation produces a multiverse in which, to quote a previous statement from one of the responding authors (Guth), “anything that can happen will happen”—it makes no sense whatsoever to talk about predictions. Unlike the Standard Model, even after fixing all the parameters, any inflationary model gives an infinite diversity of outcomes with none preferred over any other. This makes inflation immune from any observational test. For more details, see our 2014 paper “Inflationary Schism” (preprint available at https://arxiv.org/abs/1402.6980).

We are three independent thinkers representing different generations of scientists. Our article was not intended to revisit old debates but to discuss the implications of recent observations and to point out unresolved issues that present opportunities for a new generation of young cosmologists to make a lasting impact. We hope readers will go back and review our article’s concluding paragraphs.
We advocated against invoking authority and for open recognition of the shortcomings of current concepts, a reinvigorated effort to resolve these problems and an open-minded exploration of diverse ideas that avoid them altogether. We stand by these principles.


News Article | January 18, 2016
Site: www.materialstoday.com

A two-dimensional metamaterial composed of silver elements will have a negative refractive index for light in the most energetic region of the visible spectrum, 400 to 500 nanometers (violet, blue and light blue), according to computer simulations by the team of Sergey Belan, Vladimir Parfenyev, and Sergey Vergeles from the Landau Institute for Theoretical Physics of the Russian Academy of Sciences, Moscow; Belan is also affiliated with the Moscow Institute of Physics and Technology, in Dolgoprudny. Writing in the journal Optical Materials Express, the researchers hint at its potential for compact optical devices. [DOI: 10.1364/OME.5.002843]

The computations, carried out with the COMSOL Multiphysics package on a two-dimensional metamaterial lattice composed of pairs of closely spaced silver cylinders with radii of the order of 100 nanometers, suggest a refractive efficiency of 70 percent and a reflective efficiency of 80 percent for violet to blue light. Earlier metamaterials are usually three-dimensional and have much more complex geometries; moreover, they have mostly been demonstrated at microwave wavelengths, which limits their potential applications.

The silver-lattice metamaterial works through diffraction, splitting incident light into rays whose number and directions depend on the angle of incidence, the wavelength and the period of the lattice. The structure of the unit cell determines how the energy of the incident light is distributed between the rays. In a material with a negative refractive index, all but one of the diffracted rays are suppressed and the remainder emerges in the desired direction. That direction is counterintuitive given our experience with the appearance of a stick dipped into a swimming pool, where water has a natural, positive refractive index. At the mechanistic level, light's interaction with the pairs of metal cylinders is due to a plasmon resonance effect. The researchers have demonstrated that the extraordinary optical response of the proposed material arises from the excitation of plasmonic modes in the gaps between the cylinders, Belan told Materials Today. Tweaking the lattice allows a negative refractive index to be manifest for a wide range of angles of incidence. This suggests potential applications in controlling signals in ultra-compact devices for optical telecommunications and future computing.

The next step will be to manufacture, to very high tolerances, the requisite smooth metal cylinders with their less-than-10-nanometer separation for laboratory testing of the metamaterial's true potential. "We are now working on optimization of the proposed design in order to improve efficiency," Belan told us. "Specifically, we are testing in simulations the metal rods with a non-circular cross-section. In addition, we are looking for collaboration with experimental groups capable of realizing negative refraction in the laboratory."

David Bradley blogs at Sciencebase Science Blog and tweets @sciencebase; he is the author of the bestselling science book "Deceived Wisdom".
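The article's description of the metasurface as a diffraction lattice can be made concrete with the standard grating equation. The sketch below lists which diffraction orders propagate for a given angle of incidence; the lattice period and wavelength are assumed, illustrative values, not numbers taken from the paper:

```python
import numpy as np

# Standard grating (diffraction-lattice) equation for a periodic surface:
#   sin(theta_m) = sin(theta_i) + m * wavelength / period
# Only orders with |sin(theta_m)| <= 1 propagate; the rest are evanescent.
# The period and wavelength are illustrative assumptions.
wavelength = 450e-9          # blue light, metres
period = 600e-9              # lattice period, metres (assumed)
theta_i = np.radians(30.0)   # angle of incidence

for m in range(-3, 4):
    s = np.sin(theta_i) + m * wavelength / period
    if abs(s) <= 1.0:
        print(f"order m={m:+d}: propagates at {np.degrees(np.arcsin(s)):6.1f} deg")
    else:
        print(f"order m={m:+d}: evanescent")
```

A negative-index response corresponds to suppressing all propagating orders except one chosen backward-diffracted ray, which is what the unit-cell design is tuned to achieve.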


Falkovsky L.A.,Landau Institute for Theoretical Physics
Physical Review B - Condensed Matter and Materials Physics | Year: 2013

The effects of phonon anharmonicity, phonon-magnon, and electron-phonon interactions on the temperature dependence of Raman optical phonon modes are theoretically investigated. Besides the Klemens result for the phonon width due to anharmonicity, the corresponding line shift is derived. We argue that the phonon decay into two magnons has very low intensity in ferromagnets with low Curie temperatures. Therefore, the electron interband transitions accompanying the ferromagnetic ordering are included in the analysis to obtain good quantitative agreement with experiments. © 2013 American Physical Society.
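As a rough illustration of the anharmonic (Klemens) contribution mentioned in the abstract, the minimal sketch below evaluates the textbook symmetric two-phonon decay formula for the linewidth; the phonon frequency and zero-temperature width are assumed values chosen only to show the temperature dependence, not parameters from the paper:

```python
import numpy as np

# Klemens-type anharmonic broadening of an optical phonon decaying into two
# acoustic phonons of half its frequency:
#   Gamma(T) = Gamma0 * (1 + 2 * n_B(omega0 / 2, T)),
# where n_B is the Bose-Einstein occupation number.
K_CM = 0.695            # k_B in cm^-1 per kelvin
omega0 = 600.0          # phonon frequency, cm^-1 (assumed)
gamma0 = 2.0            # T = 0 anharmonic linewidth, cm^-1 (assumed)

def bose(omega_cm, T):
    """Bose-Einstein occupation for a mode of frequency omega_cm (cm^-1) at T (K)."""
    return 1.0 / np.expm1(omega_cm / (K_CM * T))

def klemens_width(T):
    return gamma0 * (1.0 + 2.0 * bose(omega0 / 2.0, T))

for T in (10.0, 100.0, 300.0, 600.0):
    print(f"T = {T:5.0f} K  ->  Gamma = {klemens_width(T):.2f} cm^-1")
```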


Menshutin A.,Landau Institute for Theoretical Physics
Physical Review Letters | Year: 2012

We present a self-consistent picture of diffusion limited aggregation (DLA) growth based on the assumption that the probability density P(r,N) for the next particle to be attached within the distance r to the center of the cluster is expressible in the scale-invariant form P[r/R_dep(N)]. It follows from this assumption that there is no multiscaling issue in DLA and there is only a single fractal dimension D for all length scales. We check our assumption self-consistently by calculating the particle-density distribution with a measured P(r/R_dep) function on an ensemble with 1000 clusters of 5×10^7 particles each. We also show that a nontrivial multiscaling function D(x) can be obtained only when small clusters (N < 10000) are used to calculate D(x). Hence, multiscaling is a finite-size effect and is not intrinsic to DLA. © 2012 American Physical Society.
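The claim of a single fractal dimension can be illustrated with the usual scaling relation between deposition radius and particle number, R_dep(N) ∝ N^(1/D). The sketch below fits D from synthetic data generated with D = 1.71 (the value commonly quoted for 2-D DLA), purely to show the fitting step; it is not a reproduction of the paper's measurement:

```python
import numpy as np

# For a DLA cluster with a single fractal dimension D, the deposition radius
# grows as R_dep(N) ~ N**(1/D).  Generate synthetic (N, R) data with D = 1.71
# plus small noise, then recover D from the slope of a log-log fit.
rng = np.random.default_rng(0)
D_true = 1.71
N = np.logspace(3, 7, 30)                                        # cluster sizes
R = N ** (1.0 / D_true) * np.exp(rng.normal(0.0, 0.01, N.size))  # noisy radii

slope, intercept = np.polyfit(np.log(N), np.log(R), 1)
print(f"fitted slope = {slope:.4f}  ->  D = {1.0 / slope:.3f} (true {D_true})")
```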


Falkovsky L.A.,Landau Institute for Theoretical Physics
Journal of Experimental and Theoretical Physics | Year: 2012

The optical conductivity of graphene, bilayer graphene, and graphite in quantizing magnetic fields is studied. Both the longitudinal and Hall dynamical conductivities are evaluated analytically. The conductivity peaks are explained in terms of electron transitions. Correspondences between the transition frequencies and the magneto-optical features are established using the theoretical results. We show that trigonal warping can be treated within perturbation theory for strong magnetic fields larger than 1 T. The semiclassical approach is applied for weak fields, when the Fermi energy is much larger than the cyclotron energy. The main optical transitions obey the selection rule Δn = 1 for the Landau number n, but Δn = 2 transitions due to the trigonal warping are also possible. The Faraday/Kerr rotation and light transmission/reflection in quantizing magnetic fields are calculated. Parameters of the Slonczewski-Weiss-McClure model are used in the fit, taking previous de Haas-van Alphen measurements into account and correcting some of them in the case of strong magnetic fields. © 2012 Pleiades Publishing, Ltd.
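For orientation, the sketch below evaluates the standard relativistic Landau-level spectrum of monolayer graphene and the Δn = 1 transition energies. It uses the textbook single-layer formula with a nominal Fermi velocity, as an assumption for illustration, rather than the full Slonczewski-Weiss-McClure parametrization employed in the paper:

```python
import numpy as np

# Landau levels of monolayer graphene (textbook formula):
#   E_n = sgn(n) * v_F * sqrt(2 * e * hbar * B * |n|)
# and the dipole-allowed interband transitions with Delta n = 1.
e = 1.602176634e-19      # C
hbar = 1.054571817e-34   # J s
v_F = 1.0e6              # m/s, nominal graphene Fermi velocity (assumed)
B = 5.0                  # magnetic field, tesla

def E_landau(n):
    """Landau-level energy in eV for integer index n (can be negative)."""
    return np.sign(n) * v_F * np.sqrt(2.0 * e * hbar * B * abs(n)) / e

for n in range(0, 4):
    dE = E_landau(n + 1) - E_landau(-n)   # interband transition -n -> n+1
    print(f"n={n}: E_n = {E_landau(n):.4f} eV, "
          f"transition -{n} -> {n + 1}: {dE * 1000:.1f} meV")
```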


Lebed A.G.,University of Arizona | Lebed A.G.,Landau Institute for Theoretical Physics
Physical Review Letters | Year: 2011

We solve a long-standing problem about a theoretical description of the upper critical magnetic field, parallel to conducting layers and perpendicular to conducting chains, in a (TMTSF)2ClO4 superconductor. In particular, we explain why the experimental upper critical field, H_c2^b' ≈ 6 T, is higher than both the quasiclassical upper critical field and the Clogston paramagnetic limit. We show that this property is due to the coexistence of the hidden reentrant and Larkin-Ovchinnikov-Fulde-Ferrell phases in a magnetic field in the form of three plane waves with nonzero momenta of the Cooper pairs. Our results are in good qualitative and quantitative agreement with the recent experimental measurements of H_c2^b' and support a singlet d-wave-like scenario of superconductivity in (TMTSF)2ClO4. © 2011 American Physical Society.
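As context for the statement that the measured field exceeds the Clogston paramagnetic limit, here is a minimal sketch of the usual weak-coupling estimate H_P [tesla] ≈ 1.84 T_c [kelvin]; the transition temperature used for (TMTSF)2ClO4 is an approximate literature value and is assumed here purely for illustration:

```python
# Clogston-Chandrasekhar (Pauli) paramagnetic limit, weak-coupling BCS
# estimate:  H_P [T] ~= 1.84 * T_c [K].
# T_c is an approximate literature value for (TMTSF)2ClO4 (assumed), used
# only to show why an upper critical field near 6 T is remarkable.
T_c = 1.4          # K, approximate
H_P = 1.84 * T_c   # tesla

print(f"Clogston limit for T_c = {T_c} K: H_P ~= {H_P:.1f} T")
print("The reported H_c2 along b' of about 6 T lies well above this estimate.")
```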


Lashkevich M.,Landau Institute for Theoretical Physics
Journal of Physics A: Mathematical and Theoretical | Year: 2012

The notion of operator resonances was previously introduced by Al Zamolodchikov within the framework of the conformal perturbation theory. The resonances are related to logarithmic divergences of integrals in the perturbation expansion, and manifest themselves in poles of the correlation functions and form factors of local operators considered as functions of conformal dimensions. The residues of the poles can be computed by means of some operator identities. Here, we study the resonances in the Liouville, sinh- and sine-Gordon models, considered as perturbations of a massless free boson. We show that the well-known higher equations of motion discovered by Al Zamolodchikov in the Liouville field theory are nothing but resonance identities for some descendant operators. The resonance expansion in the vicinity of a resonance point provides a regularized version of the corresponding operators. We try to construct the corresponding resonance identities and resonance expansions in the sinh- and sine-Gordon theories. In some cases it can be done explicitly, but in most cases we are only able to obtain a general form so far. We show nevertheless that the resonances are perturbatively exact, which means that each of them only appears in a single term of the perturbation theory. © 2012 IOP Publishing Ltd.
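The connection between resonances and logarithmic divergences described in the abstract can be illustrated schematically. The block below is a generic, assumed textbook-style sketch of conformal perturbation theory, not the specific identities derived in the paper: a first-order correction involves a radial integral of a power of the separation, and the pole it develops as a function of the conformal dimension turns into a logarithm at the resonance point.

```latex
% Schematic, generic illustration (assumed form; not the paper's identities).
% Perturbing a CFT by a field \Phi of dimension \Delta,
%   S = S_{\mathrm{CFT}} + \lambda \int d^2x\, \Phi(x),
% first-order corrections contain radial integrals of the form
\[
  \int_{\epsilon}^{R} dr\, r^{\,1-2\Delta}
  \;=\; \frac{R^{\,2-2\Delta}-\epsilon^{\,2-2\Delta}}{2-2\Delta},
\]
% which has a pole at \Delta = 1.  Setting \Delta = 1 + \delta and expanding,
\[
  \frac{R^{-2\delta}-\epsilon^{-2\delta}}{-2\delta}
  \;\xrightarrow{\;\delta\to 0\;}\; \log\frac{R}{\epsilon},
\]
% so a pole in the conformal dimension and a logarithmic divergence of the
% perturbative integral are two descriptions of the same resonance.
```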


The results of computer simulations carried out by the authors showed that the proposed metamaterial would be a high-performance material for light with a wavelength of 400 to 500 nm (violet, blue and light blue). Efficiency in this case is defined as the percentage of light scattered in a desired direction. The efficiency of the material is approximately 70 percent for refraction and 80 percent for reflection of the light. Theoretically, the efficiency could reach 100 percent, but in real metals there are losses due to ohmic resistance.

A metamaterial is a material whose properties are created by an artificial periodic structure. The prefix "meta" (from the Greek μετά, "beyond") indicates that the characteristics of the material are beyond what we see in nature. Most often, when we talk about metamaterials, we mean materials with a negative refractive index. When light is incident on the surface of such a material, the refracted light is on the same side of the normal to the surface as the incident light. The difference between the behaviour of light in a medium with a positive and a negative refractive index can be seen, for example, when a rod is immersed in liquid. The existence of substances with a negative refractive index was predicted as early as the middle of the 20th century. In 1967, Soviet physicist V.G. Veselago published an article that theoretically describes their properties, including unusual refraction of light. The term "metamaterials" for such substances was suggested by Rodger Walser in 1999. The first samples of metamaterials were made from arrays of thin wires and only worked with microwave radiation.

Importantly, the unusual optical effects do not necessarily require the use of volumetric (3-D) metamaterials. You can also manipulate light with the help of two-dimensional structures, so-called metasurfaces. In fact, a metasurface is a thin film composed of individual elements. The principle of operation of the metasurface is based on the phenomenon of diffraction. Any flat periodic array can be viewed as a diffraction lattice, which splits the incident light into a few rays. The number and directions of the rays depend on geometrical parameters: the angle of incidence, the wavelength and the period of the lattice. The structure of the unit cell, in turn, determines how the energy of the incident light is distributed between the rays. For a negative refractive index, it is necessary that all but one of the diffraction rays are suppressed; then all of the incident light is directed in the required direction.

This idea underlies the recent work by the group of scientists from the Moscow Institute of Physics and Technology and the Landau Institute for Theoretical Physics. The unit cell of the proposed lattice is composed of a pair of closely spaced silver cylinders with a radius of the order of 100 nanometres. Such a structure is simple and operates at optical wavelengths, while most analogues have more complex geometries and only work with microwaves. The effective interaction of the pairs of metal cylinders with light is due to the plasmon resonance effect: light is absorbed by the metal rods, forcing the electrons in the metal to oscillate and re-radiate. The researchers were able to adjust the parameters of the cell so that the resulting optical response of the lattice corresponds to anomalous (i.e. negative) refraction of the incident wave. Interestingly, by reversing the orientation of the cylinder pairs you can get an anomalous reflection effect.

It should be noted that the scheme works for a wide range of angles of incidence. These results can be applied to control optical signals in ultra-compact devices. In this case, we are talking primarily about optical transmission and information-processing technologies, which will help expedite computer processing in the future. The conventional electrical interconnects used in modern chips are operating at the limit of their carrying capacities and inhibit further growth in computing performance. To replace electrical interconnects with optical ones, we need to control optical signals effectively at the nanoscale. In order to solve this problem, the efforts of the scientific community are focused to a large extent on creating structures capable of redirecting light in the desired direction. It should be noted that an experimental demonstration of anomalous scattering using the lattice described above requires the manufacture of smooth metal cylinders separated by a very small distance (less than 10 nanometres). This is quite a difficult practical problem, the solution of which could be a breakthrough for modern photonics.
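The "wrong-side" refraction described above follows directly from Snell's law with a negative index. The short sketch below, with assumed and purely illustrative indices, shows that the sign of the refraction angle flips when the second medium's index is negative, so the refracted ray ends up on the same side of the normal as the incident ray:

```python
import numpy as np

# Snell's law, n1*sin(theta1) = n2*sin(theta2), with a positive and a negative
# refractive index for the second medium.  A negative theta2 means the
# refracted ray lies on the same side of the surface normal as the incident
# ray, i.e. anomalous ("wrong-way") refraction.  Indices are assumed values.
n1 = 1.0
theta1 = np.radians(30.0)

for n2 in (+1.5, -1.5):
    theta2 = np.degrees(np.arcsin(n1 * np.sin(theta1) / n2))
    side = "opposite side of the normal" if theta2 > 0 else "same side as the incident ray"
    print(f"n2 = {n2:+.1f}: refraction angle = {theta2:+.1f} deg ({side})")
```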


News Article | February 15, 2017
Site: phys.org

Igor Kolokolov and Vladimir Lebedev, scientific experts from HSE's Faculty of Physics and the Landau Institute for Theoretical Physics of the Russian Academy of Sciences, have developed an analytical theory binding the structure of coherent vortices formed due to inverse cascades in 2-D turbulence with the statistical properties of hydrodynamic fluctuations. Uncovering this link could be useful in identifying the causes of the particular characteristics of such atmospheric phenomena as cyclones and anticyclones. Their research is presented in an article published in the Journal of Fluid Mechanics.

According to Vladimir Lebedev, one of the article's authors, 'This concerns how order comes out of chaos,' noting, 'we were able to generate an analytical scheme explaining the results of numerical and laboratory experiments where coherent vortices (stable vortex formations) are observed by relating vortex characteristics to the statistical properties of chaotic fluctuations.'

The article, 'Velocity Statistics Inside Coherent Vortices Generated by the Inverse Cascade of 2-D Turbulence', published in the Journal of Fluid Mechanics, presents a consistent analytical theory describing both the intensive mean flow inside the vortices and the fluctuations therein. Namely, the research indicates that the vortices possess a universal structure: there is an interval of distances within which the azimuthal velocity does not depend on the distance from the vortex center. The statistical properties of the fluctuations in this universal interval are established. They can be used, for instance, to determine advection and mixing of pollutants in the turbulent flow. In short, the theory helps to explain the outcomes of laboratory experiments and numerical modelling of 2-D turbulence where coherent vortices had earlier been observed.

The scientists note that this research does not rely on semiempirical formulae, whereby general conclusions are drawn from computational and natural experiments, but rather on correlations derived from first principles. The results of the analysis are valuable owing to their predictive power, as well as the insight they provide into such phenomena. Furthermore, the article presents the latest results of the scientists' efforts to analyze coherent vortices in 2-D turbulence, which the authors have been investigating for more than a decade.

As both Kolokolov and Lebedev note, despite the fact that geophysics is much richer than fluid-film hydrodynamics, there are grounds to consider large-scale atmospheric phenomena such as cyclones, anticyclones and hurricanes as coherent structures that emerge out of 'chaos'. This, in turn, enriches our understanding of the laws controlling the appearance of such atmospheric phenomena and may, over the long term, even offer possibilities to manage them.

More information: I. V. Kolokolov et al, Velocity statistics inside coherent vortices generated by the inverse cascade of 2-D turbulence, Journal of Fluid Mechanics (2016). DOI: 10.1017/jfm.2016.699
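To illustrate what a "flat" azimuthal velocity profile implies, the minimal sketch below computes the mean vorticity for a velocity that is independent of radius inside the universal interval. The relation U ≈ sqrt(3ε/α) used for the velocity level is taken from closely related work on the two-dimensional vortex condensate; treat that prefactor and the numerical values of the energy flux and bottom friction as illustrative assumptions, not results from this article:

```python
import numpy as np

# Inside the universal interval the mean azimuthal velocity is reported to be
# independent of radius; related work on the 2-D vortex condensate gives the
# estimate U ~ sqrt(3 * epsilon / alpha), where epsilon is the inverse-cascade
# energy flux and alpha the bottom-friction coefficient.  The prefactor and
# the parameter values below are illustrative assumptions.
epsilon = 1.0e-4   # energy flux (model units, assumed)
alpha = 1.0e-2     # bottom-friction rate (assumed)

U = np.sqrt(3.0 * epsilon / alpha)
print(f"flat azimuthal velocity U ~ {U:.3f} (model units)")

# A flat profile v_phi(r) = U implies a mean vorticity decaying as 1/r:
#   omega(r) = (1/r) d(r * v_phi)/dr = U / r.
for r in (0.1, 0.2, 0.5, 1.0):
    print(f"r = {r:4.2f}: omega = {U / r:.3f}")
```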


