Patent
Russian Academy of Sciences, DuPont Company, Ovchinnikov, Bobkova, Medvinskaya, Samokhin and Nekrasov | Date: 2017-04-19

The present invention relates to the field of genetic engineering and medicine. Proposed is a method for treating neurodegenerative diseases and Alzheimer's disease that includes the intranasal administration to a subject of a therapeutically effective amount of the YB-1 protein and/or an active fragment and/or derivative thereof.


Abdulagatov I.M.,Russian Academy of Sciences | Azizov N.D.,Azerbaijan State Oil Academy
Journal of Chemical Thermodynamics | Year: 2014

Densities of (water + 1-propanol) mixtures have been measured over the temperature range from 298 K to 582 K and at pressures up to 40 MPa using a constant-volume piezometer immersed in a precision liquid thermostat. The measurements were made for six compositions of (0.869, 2.465, 2.531, 7.407, 14.377, and 56.348) mol·kg⁻¹ of 1-propanol. The expanded uncertainty of the density, pressure, temperature, and concentration measurements at the 95% confidence level with a coverage factor of k = 2 is estimated to be 0.06%, 0.05%, 15 mK, and 0.015%, respectively. The derived volumetric properties, namely the excess (V_m^E), apparent (V_Φ), and partial (V̄_2) molar volumes, were calculated using the measured values of density for the mixture and for the pure components (water and 1-propanol). The concentration dependences of the apparent molar volumes were extrapolated to zero concentration to yield the partial molar volumes of 1-propanol at infinite dilution (V̄_2^∞). The temperature, pressure, and concentration dependences of the density and derived properties of the mixture were studied. All experimental and derived properties (excess, apparent, and partial molar volumes) were compared with the data reported by other authors. Small and negative values of the excess molar volume were found for the mixtures at all experimental temperatures and pressures and over the entire concentration range. The excess molar volume minimum is found at a concentration of about 0.4 mole fraction of 1-propanol. A concentration minimum of the derived apparent molar volumes V_Φ was observed near 2.5 mol·kg⁻¹ (dilute mixture). © 2013 Elsevier Ltd. All rights reserved.
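The apparent-molar-volume derivation described above can be sketched in a few lines; this is a minimal illustration in SI units, and the density values used below are hypothetical placeholders, not the paper's measured data:

```python
# Apparent molar volume of a solute from solution density (SI units):
#   V_phi = M2/rho + (rho0 - rho) / (m * rho * rho0)
# where M2 is the solute molar mass, rho the mixture density,
# rho0 the pure-solvent density, and m the molality.
# The numbers below are illustrative, not the measured data of the paper.

M2 = 0.060095  # kg/mol, molar mass of 1-propanol

def v_phi(rho, rho0, m):
    """rho: mixture density (kg/m^3); rho0: pure-water density (kg/m^3);
    m: molality of 1-propanol (mol/kg). Returns V_phi in m^3/mol."""
    return M2 / rho + (rho0 - rho) / (m * rho * rho0)

# Hypothetical dilute mixture slightly less dense than pure water:
v = v_phi(rho=995.0, rho0=997.0, m=0.5)
print(v * 1e6, "cm^3/mol")  # ~64.4 cm^3/mol
```

The extrapolation to infinite dilution mentioned in the abstract then amounts to fitting V_Φ against molality (or its square root) over several dilute compositions and taking the zero-concentration intercept.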


Agranovich V.M.,University of Texas at Dallas | Agranovich V.M.,Russian Academy of Sciences | Gartstein Y.N.,University of Texas at Dallas | Litinskaya M.,University of British Columbia
Chemical Reviews | Year: 2011

The hybrid resonant organic-inorganic nanostructures for optoelectronic applications are examined. The probability of Förster resonant energy transfer from an excited molecule to another molecule is proportional to the overlap of the fluorescence spectrum of the donor and the absorption spectrum of the acceptor, these spectra being determined in the absence of the donor-acceptor interaction. The dependence of energy transfer on the distance between components of the nanostructure strongly depends on the geometry and is different for quantum wells, wires, and dots. The UV-light-emitting InGaN QW is spaced from the blue-light-emitting poly(9,9-dioctylfluorene-co-9,9-di(4-methoxy)phenyl-fluorene) film by GaN cap layers of variable thickness. Energetic alignment is needed to maximize the resonant coupling between the inorganic and organic excitations. The energy-transfer-based hybrids are expected to be more tolerant to interface quality and not to suffer from the low charge-carrier mobilities of the organic component.
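The spectral-overlap dependence described above can be illustrated numerically. The sketch below uses the standard Förster overlap integral J = ∫ F_D(λ) ε_A(λ) λ⁴ dλ with hypothetical Gaussian spectra; it is not the review's calculation, only a demonstration that matched spectra yield a far larger overlap than well-separated ones:

```python
import numpy as np

def trapezoid(y, x):
    # Simple trapezoidal rule (avoids NumPy version differences).
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def forster_overlap(wl, f_donor, eps_acceptor):
    """Foerster overlap J = integral F_D(l) * eps_A(l) * l^4 dl,
    with the donor emission F_D normalized to unit area."""
    f_norm = f_donor / trapezoid(f_donor, wl)
    return trapezoid(f_norm * eps_acceptor * wl ** 4, wl)

# Hypothetical Gaussian donor emission and acceptor absorption spectra (nm).
wl = np.linspace(350.0, 650.0, 1501)
gauss = lambda x, mu, sig: np.exp(-0.5 * ((x - mu) / sig) ** 2)

f_d = gauss(wl, 450.0, 20.0)                                   # donor fluorescence
j_matched = forster_overlap(wl, f_d, gauss(wl, 460.0, 25.0))   # strong spectral overlap
j_shifted = forster_overlap(wl, f_d, gauss(wl, 600.0, 25.0))   # spectra far apart
```

Since the transfer probability scales with J, the "energetic alignment" requirement in the abstract corresponds to maximizing j_matched-like overlaps between the inorganic and organic excitations.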


Rar V.,Russian Academy of Sciences | Golovljova I.,National Institute for Health Development
Infection, Genetics and Evolution | Year: 2011

The family Anaplasmataceae includes gram-negative obligate intracellular alphaproteobacteria, which multiply within membrane-bound vacuoles. The genera Anaplasma and Ehrlichia as well as the recently specified cluster "Candidatus Neoehrlichia" comprise all bacteria of the family that are transmitted by ixodid ticks to mammalian hosts and cause infections in humans and numerous domestic and wild animal species. In total, six Anaplasma, five Ehrlichia, and two "Candidatus Neoehrlichia" species, as well as a variety of Ehrlichia bacteria with an undefined taxonomic position, belong to the tick-borne bacteria. This review outlines the ecology, pathogenic properties, and molecular genetic diversity of the validated Anaplasmataceae species, "Candidatus" species, and undefined novel Ehrlichia genetic variants. © 2011 Elsevier B.V.


Spicer R.A.,Open University Milton Keynes | Herman A.B.,Russian Academy of Sciences
Palaeogeography, Palaeoclimatology, Palaeoecology | Year: 2010

Late Cretaceous megafossil floras from the palaeo-Arctic of northeastern Russia and northern Alaska are reviewed with respect to their age, composition, structure and floral dynamics. Palaeofloral correlations and comparisons are made between the two regions. Nine angiosperm-rich, predominantly Cenomanian to Coniacian, floras from the palaeo-Arctic are re-evaluated using the Climate Leaf Analysis Multivariate Program (CLAMP) calibrated with a global gridded (0.5°×0.5°) climate data set derived from that used in climate modelling. Additional floras from lower palaeolatitudes were used to derive latitudinal temperature gradients: seven from N. America, five from around 30°N palaeolatitude in Europe and one from Kazakhstan. The Arctic climatic determinations, similar to previous estimates, support the existence of a northern Pacific Ocean cold gyre and a warm Arctic Ocean. At palaeolatitudes greater than 80°N the floras are insufficiently diverse in woody dicot taxa to use CLAMP, but using CLAMP-derived latitudinal temperature gradients, Arctic Ocean coastal environments at 70 Ma and 82°N, which supported a diverse dinosaur megafauna, are predicted to have experienced a mean annual temperature of 6.3±2.2°C, a warm month mean of 14.5±3.1°C and a cold month mean no colder than -2.0±3.9°C. All uncertainties are 2σ. The new estimates are in good agreement with a wide range of non-palaeobotanical climate proxies and render as outliers the warmer temperature estimates for the Arctic Ocean derived from the TEX86 proxy. Modelling, however, shows that land-to-ocean temperature gradients could have been steep. The CLAMP estimates also suggest high values for humidity and precipitation, consistent with sedimentological indicators, and, coupled with warm temperatures, support the existence of a persistent polar cloud cap that helped maintain high terrestrial air temperatures throughout prolonged periods (up to 5 months) of winter darkness. © 2010 Elsevier B.V.


Lozovik Y.E.,Moscow Institute of Physics and Technology | Ogarkov S.L.,National Research Nuclear University | Sokolik A.A.,Russian Academy of Sciences
Physical Review B - Condensed Matter and Materials Physics | Year: 2012

Condensation of pairs formed by spatially separated electrons and holes in a system of two isolated graphene layers is studied beyond the mean-field approximation. Suppression of the screening of the pairing interaction at large distances, caused by the appearance of the gap, is considered self-consistently. A mutual positive feedback between the appearance of the gap and the enhancement of the interaction leads to a sharp transition to a correlated state with a greatly increased gap above some critical value of the coupling strength. At a coupling strength below the critical value, this correlation effect increases the gap approximately by a factor of 2. The maximal coupling strength achievable in experiments is close to the critical value. This indicates the importance of correlation effects in closely spaced graphene bilayers at weak substrate dielectric screening. Another effect considered beyond the mean-field approximation is the influence of vertex corrections on the pairing, which is shown to be very weak. © 2012 American Physical Society.


Banishev A.A.,University of California at Riverside | Klimchitskaya G.L.,Russian Academy of Sciences | Mostepanenko V.M.,Russian Academy of Sciences | Mohideen U.,University of California at Riverside
Physical Review Letters | Year: 2013

We demonstrate the Casimir interaction between two ferromagnetic boundary surfaces using the dynamic atomic force microscope. The experimental data are found to be in excellent agreement with the predictions of the Lifshitz theory for magnetic boundary surfaces combined with the plasma model approach. It is shown that for magnetic materials the role of hypothetical patch potentials is opposite to that required for reconciliation of the data with the Drude model. © 2013 American Physical Society.


Libman A.,Russian Academy of Sciences
Energy Economics | Year: 2013

The differentiation in the impact of resources on economic growth is often explained by the specifics of institutional factors. The aim of this paper is to investigate how sub-national political differences influence the effect of natural resources on economic growth. Using a dataset of Russian regions, this paper demonstrates that sub-national democratization influences the growth effects of resources and considers possible mechanisms for this influence. The paper finds that in Russia, natural resources are only capable of promoting growth in the regions with non-democratic political systems that, at the same time, have an efficient and non-corrupt bureaucracy. © 2013 Elsevier B.V.


Simakin A.G.,Russian Academy of Sciences | Bindeman I.N.,University of Oregon
Earth and Planetary Science Letters | Year: 2012

Large- and small-volume rhyolites are generated in calderas and rift zones, inheriting older and isotopically diverse crystal populations from their volcanic predecessors. Low-δ18O values in many of these rhyolites suggest that they were derived from the remelting of solid protoliths that were hydrothermally altered by meteoric water when close to the surface but became buried by caldera collapse or rifting. These rhyolites persist for millions of years in these environments with little evidence of coeval basalts. We present a series of numerical experiments on convective melting of roof-rocks underplated by near-liquidus to superheated silicic melts, generated at the base of the chamber by basaltic intrusions in shallow crustal conditions. We used a range of temperatures and compositions, an appropriate phase diagram with a defined extended eutectic zone appropriate for these environments, varied the sill thickness and the viscosity of the boundary layer, and considered hydrothermal and lower-boundary heat losses. The goal was to estimate melting rates and mechanisms, define the conditions required for efficient and rapid remelting in the upper crust, quantitatively describe novel details of the dynamics of convective melting, and compare it to earlier parametric and numerical treatments of roof melting by underplating. The resolution of the numerical experiments allowed us to track mixed thermal and two-phase plume-like convection in silicic magma with a bulk viscosity of 10^4.5 to 10^5.5 Pa·s. The following results were obtained: (1) remarkably fast melting/magma-generation rates of many meters per year; (2) intrinsic inhomogeneities in the roof accelerate convection and melting rates via rapid gravitational settling of refractory blocks and the exposure of detachment scars to the melting front; (3) due to rapid melting, hydrothermal heat loss through the roof and conductive heat dissipation through the bottom are less important on melting timescales; (4) convective melting is capable of digesting cold roof-rocks with high assimilation degrees, which are primarily controlled by sill thickness and roof-rock temperature: thin (10 m) sills are able to digest 40% of initially hot (T = 650°C) roof-rock, but >100 m sills achieve the same level of bulk digestion with T = 400°C roof-rocks. The proposed model can explain the origin of hot (above 800-850°C), crystal-poor, "recycled" rhyolites in calderas and rift zones. It can also explain the generation of large, supervolcanic rhyolite volumes through remelting of their erupted and subvolcanic predecessors on rapid timescales, dictated by their zoned and disequilibrium crystalline cargo. © 2012 Elsevier B.V.


Dorofeyev I.A.,RAS Institute for Physics of Microstructures | Vinogradov E.A.,Russian Academy of Sciences
Physics Reports | Year: 2011

Various theoretical methods for calculating the spectral and correlation properties of fluctuating electromagnetic fields generated by solids are reviewed, all of which essentially reduce to solving the Maxwell equations for a specified geometry and boundary conditions and then applying the fluctuation-dissipation theorem. The spectral and correlation properties of the thermally stimulated fields of a system in equilibrium and out of equilibrium with surrounding bodies are described. Special attention is paid to the difference between the spectral and correlation properties of the propagating and evanescent waves of the thermally stimulated fields of solids. The dispersion interaction between solid bodies in different thermodynamic states, the resonance dispersion interaction in a three-body system, the fluctuating fields as a means of body-to-body energy transfer, and the shift, broadening, and deexcitation of energy levels in a particle near a solid surface are discussed using the theory of thermally stimulated electromagnetic fields. Thermally stimulated emission of infrared radiation from semiconductor crystals and films (heated up to 150 °C) on metal substrates is measured in the frequency range of vibrational polaritons, both for propagating waves and for evanescent waves transformed into propagating waves by an ATR prism placed over the samples. The experimental results on thermally stimulated electromagnetic fields from solids are in qualitative and quantitative agreement with model calculations and theoretical expectations. © 2011 Elsevier B.V.


Grant
Agency: Cordis | Branch: FP7 | Program: CP | Phase: ICT-2011.6.2 | Award Amount: 4.79M | Year: 2011

The objective of ISES is to develop ICT building blocks to integrate, complement and empower existing tools for design and operation management into a Virtual Energy Lab (VEL). This will allow evaluating, simulating and optimizing the energy efficiency of products for built facilities and facility components in variations of real-life scenarios before their realization, acknowledging the stochastic life-cycle nature.

The focus is on buildings, factories and warehouses, because buildings account for about 40% of global energy use and 30% of CO2 emissions and solid waste. There is a huge market for more energy-efficient design of new buildings and for refurbishing the huge building stock with energy-efficient component products.

The goal of the project is to increase, by an order of magnitude, the quality of energy-efficiency in design through the development of an In-Silico Energy Simulator Laboratory, based on an interoperable ontology-supported platform.

The focus of the RTD work is on multi-model design and testing, stochastic lifecycle analysis/simulation in combination with new supporting ontology and interoperability tools and services, and the respective re-engineering of existing tools, making them more intelligent and smartly interoperable. Further goals are the combination of energy profile models with product-development STEP models and building and facility BIM models.

The VEL will be configured as an ontology-controlled SOA system with distributed services, distributed modelling and analysis/simulation tools and distributed data sources. This will allow concentrating the RTD work on ICT gaps, whereas existing, market-proven services, tools and data sources can be incorporated nearly development-free.

The extended focus is on: (1) the development of optimal support for inventing and using innovative façade elements, and (2) the elaboration of an optimal computable stochastic model in relation to the design phase, in order to make stochastic considerations applicable in daily practice.


Grant
Agency: Cordis | Branch: FP7 | Program: CSA-CA | Phase: INCO-2007-3.1.2 | Award Amount: 3.00M | Year: 2009

Major objectives of ERA.Net RUS are: RAISING KNOWLEDGE on bilateral and national S&T programmes with or towards Russia and on relevant activities of other programme owners; IDENTIFYING STRENGTHS AND CHALLENGES of present bilateral S&T programmes of MS/AC with Russia; LEARNING LESSONS from ongoing (thematic) ERA-NETs involving Russian programme owners and IDENTIFYING GOOD PRACTICE from other international ERA-NETs; IDENTIFYING COMMON GROUND across bilateral S&T programmes of MS/AC with Russia and PROVIDING A BASIS FOR A JOINT PROGRAMMATIC APPROACH; DEVELOPING AN APPROPRIATE INSTRUMENTAL SETTING for joint funding activities; TESTING A SCENARIO AND LEARNING LESSONS from a pilot joint call; and DEVELOPING AND PROMOTING A SUSTAINABLE JOINT PROGRAMME with Russia. In order to achieve these objectives, ERA.Net RUS will: implement a KNOWLEDGE-BASED DIALOGUE BETWEEN PROGRAMME OWNERS IN EU MS, AC AND RUSSIA; identify options for JOINT FUNDING ACTIVITIES OF PARTICULAR BENEFIT FOR PROGRAMME OWNERS IN EU MS/AC and develop respective IMPLEMENTATION SCENARIOS; implement a PILOT JOINT CALL of interested programme owners in EU MS/AC and Russia from inside and outside the project consortium and assess the effectiveness and efficiency of the joint funding scheme; and develop a SUSTAINABLE S&T (AND/OR INNOVATION) PROGRAMME to be agreed upon by interested programme owners from EU MS/AC and Russia. With respect to the work programme, ERA.Net RUS will: facilitate MUTUAL LEARNING and the drawing of (joint) conclusions by programme owners; create SYNERGIES, increase impact, reinforce cooperation among S&T programmes and reduce fragmentation; facilitate the development and implementation of JOINT PROGRAMMES; allow LARGER NETWORKING in a wider European Research Area; address NEW EU MEMBER STATES AND ACCESSION COUNTRIES; NOT OVERLAP but create synergies with other ERA-NET activities; and assure SUSTAINABILITY of activities and a LASTING IMPACT.


Grant
Agency: Cordis | Branch: FP7 | Program: CPCSA | Phase: INFRA-2008-1.2.2 | Award Amount: 3.39M | Year: 2009

Many research groups and institutes within the European Research Area (ERA) are playing a central role in the production of a vast range of atomic and molecular (AM) data, data that is of critical importance across a wide range of applications such as astrophysics, atmospheric physics, fusion, environmental sciences, combustion chemistry and in industrial applications from plasmas to lighting.

Through the auspices of this infrastructure, the Virtual Atomic and Molecular Data Centre (VAMDC) aims to build a secure, documented, flexible and interoperable e-science environment-based interface to the existing AM data. The VAMDC will be built upon the expertise of existing AM databases, data producers and service providers with the specific aim of creating an infrastructure that is easily tuned to the requirements of a wide variety of users in academic, governmental, industrial or public communities both within and outside the ERA. The project will cover the building of the core consortium, the development and deployment of the infrastructure and the development of interfaces to the existing AM databases, as well as providing a forum for training potential users and dissemination of expertise across the ERA. It is expected that VAMDC will become a European legal entity during the course of the project.


Grant
Agency: Cordis | Branch: FP7 | Program: CP-CSA-Infra | Phase: INFRA-2008-1.1.1 | Award Amount: 7.39M | Year: 2009

The Europlanet RI project will provide the European planetary science community with a unique research infrastructure, combining access to a suite of state of the art facilities while fostering their joint development and integration in terms of capacity and performance. This research infrastructure will include access to laboratory and field site facilities, advanced modelling, simulation and data analysis resources and to data produced by space missions and ground-based telescopes hence maximising the scientific impact of major European space missions and ground-based installations. Access will be provided in two forms. Three coordinated Trans National Access activities will open to many users the unique range of laboratory and field site facilities selected for this project. In parallel, the IDIS e-service will provide a user-friendly web-based access to the available planetary science data, information and software tools. Four Joint Research Activities will broaden the scope of the infrastructure, opening access to new field sites, offering new models and data analysis tools for users and widening the opportunity of remote data access by progressively upgrading IDIS into a Planetary Virtual Observatory. Four complementary networking activities will publicize the objectives and opportunities of Europlanet RI and disseminate project results amongst the scientific community, industries, SMEs, space agencies and the public. They will consolidate the establishment of a European Research Area for planetary science and exploration. Building on the synergies between its services, joint research activities and networking activities, Europlanet RI will provide the ideal scientific and technical environment to fully analyse data from past and present planetary missions and prepare the next generation of missions. In this way it will play a vital role in establishing the European Community as a leading player in planetary and space exploration.


Patent
Russian Academy of Sciences, Jozef Stefan Institute and Tomsk Polytechnic University | Date: 2015-11-19

The object of the present invention is low-dimensional, primarily 2D folded structures of organic and/or inorganic substances and/or their agglomerates, which have folds and faces of irregular shape and exhibit high local electric field strength generated by surface charges on the said folds, faces and edges, and use thereof: as sorbents of organic particles (molecules, bacteria, viruses, proteins, antigens, endotoxins) and inorganic particles (metal ions, colloids); as an agent with wound healing and antibacterial activity; as an agent for tumor cell growth inhibition.


Patent
Biosense Webster Israel Ltd., Bar Tal, Demri, Bar On, Zino, Russian Academy of Sciences, Zigelman and Berman | Date: 2014-04-23

A coordinate system registration module (62), including radiopaque elements (102) arranged in a fixed predetermined pattern and configured, in response to the radiopaque elements generating a fluoroscopic image, to define a position of the module in a fluoroscopic coordinate system of reference. The module further includes one or more connections configured to fixedly connect the module to a magnetic field transmission pad (50) at a predetermined location and orientation with respect to the pad, so as to characterize the position of the registration module in a magnetic coordinate system of reference defined by the magnetic field transmission pad.
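Registering two coordinate frames from corresponding fiducial points (such as a fixed pattern of markers located in both a fluoroscopic and a magnetic frame of reference) is commonly done with the SVD-based Kabsch procedure. The sketch below is a generic illustration of that technique under stated assumptions, not the patent's actual registration method, and the marker coordinates and pose are hypothetical:

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rigid transform (R, t) such that Q ~ R @ P + t,
    for corresponding 3-D point sets P, Q of shape (N, 3) (Kabsch/SVD)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# Hypothetical marker pattern in one frame and its pose in a second frame.
P = np.array([[0.0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10], [10, 10, 5]])
R0 = np.array([[0.0, -1, 0], [1, 0, 0], [0, 0, 1]])   # 90 degrees about z
t0 = np.array([5.0, -2.0, 3.0])
Q = P @ R0.T + t0                                      # same markers, second frame

R, t = rigid_transform(P, Q)  # recovers (R0, t0)
```

Composing such a recovered transform with the known fixed mounting of the module on the transmission pad would then relate the two coordinate systems, which is the general idea the registration module serves.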


Grant
Agency: Cordis | Branch: H2020 | Program: RIA | Phase: BG-10-2016 | Award Amount: 8.10M | Year: 2016

Blue-Action will provide fundamental and empirically grounded, executable science that quantifies and explains the role of a changing Arctic in increasing the predictive capability of weather and climate in the Northern Hemisphere. To achieve this, Blue-Action will take a transdisciplinary approach, bridging scientific understanding within Arctic climate, weather and risk-management research with key stakeholder knowledge of the impacts of climatic weather extremes and hazardous events, leading to the co-design of better services. This bridge will build on innovative statistical and dynamical approaches to predict weather and climate extremes. In dialogue with users, Blue-Action will take stock of existing knowledge about cross-sectoral impacts and vulnerabilities with respect to the occurrence of these events when associated with weather and climate predictions. Modeling and prediction capabilities will be enhanced by targeting, firstly, lower-latitude oceanic and atmospheric drivers of regional Arctic changes and, secondly, Arctic impacts on Northern Hemisphere climate and weather extremes. Coordinated multi-model experiments will be key: new higher-resolution model configurations, innovative methods to reduce forecast error, and advanced methods to improve the uptake of new Earth observation assets are planned. Blue-Action thereby demonstrates how such an uptake may assist in creating a better optimized observation system for various modelling applications. The improved, robust and reliable forecasting can help meteorological and climate services deliver better tailored predictions and advice, including on sub-seasonal to seasonal time scales, and will take Arctic climate prediction beyond seasons and to teleconnections over the Northern Hemisphere. Through its concerted efforts, Blue-Action will therefore contribute to the improvement of climate models to represent Arctic warming realistically and to address its impact on regional and global atmospheric and oceanic circulation.


The present invention relates to methods for producing oxide ferrimagnetics with spinel structure and iron oxide nanoparticles by soft mechanochemical synthesis using inorganic salt hydrates, oxide ferrimagnetics with spinel structure and iron oxide nanoparticles of ultra-small size and high specific surface area obtainable by the methods, biocompatible aqueous colloidal systems comprising oxide ferrimagnetics with spinel structure and iron oxide nanoparticles, carriers comprising oxide ferrimagnetics with spinel structure and iron oxide nanoparticles, and uses thereof in medicine.


Grant
Agency: Cordis | Branch: FP7 | Program: CP-FP | Phase: AAT.2010.1.1-3.;AAT.2010.4.1-4. | Award Amount: 6.34M | Year: 2010

Affordable transport for the citizen relies on innovative solutions and technologies that will result in lower costs and lead-time for the aircraft and its systems. In this area, the packaging of on-board computers is an important contributor. The objective of Packaging of futuRe Integrated ModulAr Electronics (PRIMAE) is to develop a new flexible, robust and open aeronautical packaging for the next generation of electronics, and particularly for Integrated Modular Avionics. After standardization, this new concept will be able to replace the 35-year-old ARINC 600 standard. The PRIMAE technical objectives are: - Reduce electronics packaging in terms of volume (50%) and weight (30%) and offer flexibility and growth capability - Reduce costs (20%) using market-standard components - Enhance reliability (50%) through thermal and vibratory breakthroughs - Mitigate EMC protection penalties in a composite-fuselage environment - Ensure fast production ramp-up and support rapid final assembly on the aircraft - Improve availability and reduce maintenance cost. In these domains, significant technological studies beyond the state of the art (cooling, lightweight composite materials, electromagnetic interference, power supply, connectivity) will be carried out with respect to airworthiness regulations. To achieve the PRIMAE objectives, 3 steps are required: - Definition phase of airframers' and suppliers' requirements - Research and evaluation of advanced packaging technologies - Specification and development of a representative mock-up to integrate the different technologies. The concept, once harmonized among the main European players participating in this project, will be proposed as a standard for the future generation of large and regional aircraft, and helicopters. The new packaging concept will strengthen the competitiveness of the market and will support the effort of industrial avionics suppliers to improve costs and environmental impacts.


Grant
Agency: Cordis | Branch: FP7 | Program: CSA-ERA-Plus | Phase: INCO.2013-3.1 | Award Amount: 31.15M | Year: 2013

The Partnership for Modernisation between the European Union and Russia states as one of its priority areas the enhancement of cooperation in innovation, research and development. In this context, the 24 consortium members of ERA.Net RUS Plus propose to implement a Single Joint Call for transnational and scientifically excellent research projects in the fields of Nanotechnologies/Materials/Production Technologies, Health, Environment/Climate Change, Social Sciences/Humanities and Innovation. The national financial contributions of the partners to the Single Joint Call will amount to approximately EUR 22.2 million. The action will engage programme owners from various EU Member States (MS), countries associated (AC) with the EU RTD Framework Programme, and Russia. ERA.Net RUS Plus is a follow-up to the current ERA.Net RUS action that was set up from 2009 to 2013 to link Russia to the European Research Area through a coordination of S&T programmes in EU MS/AC and Russia. ERA.Net RUS was the first joint call activity in which prominent Russian Funding Parties (RAS, RFBR, RFH and FASIE) jointly participated in a multilateral funding scheme. The major objective of ERA.Net RUS Plus is to deepen the transnational collaboration between EU MS/AC and Russia and to reduce the fragmentation of research programmes along national funding lines. This joint undertaking will create synergies and strengthen the cooperation among the consortium members. Other principal goals of ERA.Net RUS Plus are to further develop the instrumental setting for joint funding activities and thus to provide a solid basis for a joint programmatic approach to be pursued in the near future. This programmatic approach will assure the desired and necessary sustainability of the cooperation between EU MS/AC and Russia and a lasting impact. ERA.Net RUS Plus will therefore be the next important step in further linking Russia and its key research communities to the European Research Area.


Grant
Agency: Cordis | Branch: FP7 | Program: CSA-SA | Phase: INFRA-2007-2.1-01 | Award Amount: 336.45K | Year: 2008

The main objective of the project is to design the conceptual scheme of a research infrastructure supporting the networking of centres for high-quality research in Slavic lexicography, fostering their scientific capacity, integrating their digital resources and opening them up to the European academic community. The project will provide strategies for the coordination, unification and extension of existing digital lexical resources and the creation of new ones, in accordance with recent advances in the field and international standards. This will ensure their reusability and interoperability, so that researchers in the humanities and social sciences as well as business will have easy access to digital bi- and multilingual dictionaries of Slavic languages. Thus the project will contribute to preserving and supporting the multilingual and multicultural European heritage. In a more distant perspective, we envisage the creation of a general lexical database with the possibility of searching entries in any Slavic language, with information on each word's etymology and its correspondences in all Slavic languages and in English. The database will possess a rich system of links between the forms and meanings of words in synchrony and diachrony. An interactive Web portal will enable the supervised extension of the database by the end users (Wiki-style) and ensure fast growth and relevance to the users' needs. The partners in the project are research organisations from six European countries whose six national languages belong to the Slavic group: Bulgaria, Poland, Russia, Slovakia, Slovenia and Ukraine. All partners are national centres for high-quality research in lexicography and digital resources. Each partner is responsible for coordinating a part of the work matching their specialisation and experience. The six thematic working groups are designed as fora for discussing specific parts of the project, and the work programme provides a detailed breakdown of the tasks over time.


Grant
Agency: Cordis | Branch: FP7 | Program: CP-CSA-Infra-PP | Phase: INFRA-2010-2.2.3 | Award Amount: 6.68M | Year: 2010

Environmental change, and climate change in particular, is expected to be most pronounced in the polar regions. For this reason, a multi-disciplinary research infrastructure covering all important elements of the coupled Earth System in the Arctic is a very valuable tool for quantifying ongoing global change and verifying the capability of Earth System models to predict future changes. The proposed ESFRI project Svalbard Integrated Arctic Earth Observing System (SIOS) is intended to take this role. The main goal of the SIOS Preparatory Phase (SIOS-PP) project is to define, work out and decide on the formal framework needed to establish and operate the geographically distributed, thematically integrated, multi-national research infrastructure that SIOS will be, serving as a node in several respects. This covers, on the one hand, aspects common to all ESFRI initiatives, such as legal status, governance structure, financial strategy, a data management and utilization plan, and an (on- and offshore) logistics plan. In addition, SIOS-PP will address topics specific to this infrastructure: a dedicated remote sensing strategy; an internal scientific and observational plan; and an international integration and cooperation plan, which will link SIOS to regional European Arctic and pan-Arctic scientific infrastructure networks. The SIOS-PP project will be carried out by a consortium of 27 partners from 14 countries, including 4 non-EU, non-associated countries; three of the partners are national funding agencies. In addition, 19 associated partners with infrastructure or strong scientific interests on Svalbard will cooperate during the preparatory phase. The project has a duration of 3 years.


Grant
Agency: Cordis | Branch: FP7 | Program: CP-FP | Phase: ENERGY-2007-9.1-01 | Award Amount: 3.77M | Year: 2008

The SECURE project will build a comprehensive framework that considers all the issues related to security of supply, including geopolitics, price formation and the economic and technical design of energy markets inside and outside the EU. Tools, methods and models will be developed to measure and assess EU security of energy supply both outside and inside the EU. The objective is to evaluate the vulnerability of the EU to the different risks affecting energy supplies, in order to help optimize the Union's strategies for mitigating energy insecurity, including infrastructure investment, demand-side management and dialogue with producing countries. The project will develop energy security indicators for all the major energy sources in order to identify the risk factors and quantify the EU's exposure to volume and price risks in the short and long term, including the impacts of severe accidents and terrorist threats. The costs and benefits (both measurable and perceived) of energy security will be evaluated for different energy supply/demand scenarios to help policy makers identify the most appropriate institutional, political and industrial solutions. All major energy sources and technologies (oil, natural gas, coal, nuclear, renewables and electricity) will be addressed from upstream to downstream, with both global and sectoral analyses studying in depth such issues as technical, economic/regulatory and geopolitical risks. The analysis will also integrate demand as a key issue in energy security. The SECURE project has strong quantitative and qualitative components: it will not only provide a comprehensive methodological and quantitative framework to measure energy security of supply, but also propose policy recommendations on how to improve energy security, taking into account the costs, benefits and risks of various policy choices.


Grant
Agency: Cordis | Branch: FP7 | Program: CSA-CA | Phase: NMP-2008-4.0-11 | Award Amount: 1.29M | Year: 2009

The NCP TeAm project aims to improve NMP NCP services across Europe, thereby helping to simplify access to FP7 calls, lower the entry barriers for newcomers, and raise the average quality of submitted proposals through closer collaboration between the NMP NCPs and a levelling of the differences in knowledge and service quality across the NCP Network in Europe and beyond. To this effect, the project has four main objectives: to optimise the tools available to the network's clients, for example by adopting a partner-search system tailor-made to NMP needs and creating an NMP NCP Network website; to maximise the impact of, and add value to, organised national and European events by organising parallel brokerage (partnership-building) events, joint NMP NCP stands and joint awareness campaigns (including visits to researchers and industries); to standardise the network's skills and services through the creation of a charter of services, an operational manual, twinnings and targeted trainings; and to forge stronger links with all NMP-related networks, technology platforms and other EC initiatives and programmes.


Grant
Agency: Cordis | Branch: H2020 | Program: CSA | Phase: INFRASUPP-6-2014 | Award Amount: 1.70M | Year: 2015

This CREMLIN proposal is to foster scientific cooperation between the Russian Federation and the European Union in the development and scientific exploitation of large-scale research infrastructures. It has been triggered by the recent so-called megascience projects initiative launched by and in the Russian Federation, which is now very actively seeking European integration. The proposed megascience facilities have enormous potential for the international scientific communities and represent a unique opportunity for the EU to engage in a strong collaborative framework with the Russian Federation. The CREMLIN proposal is a first, pathfinding step to identify, build and enhance scientific cooperation and strong enduring networks between European research infrastructures and the corresponding megascience facilities so as to maximize scientific returns. The proposal follows the specific recommendations of an EC Expert Group by devising concrete coordination and support measures for each megascience facility and by developing common best practice and policies on internationalisation and opening. CREMLIN will thus effectively contribute to better connecting Russian research infrastructures to the European Research Area.


Grant
Agency: Cordis | Branch: H2020 | Program: RIA | Phase: SFS-07b-2015 | Award Amount: 7.99M | Year: 2016

The goal of GenTree is to provide the European forestry sector with better knowledge, methods and tools for optimising the management and sustainable use of forest genetic resources (FGR) in Europe in the context of climate change and continuously evolving demands for forest products and services. To reach this goal, GenTree will make scientific, technological and implementation breakthroughs in: (i) designing innovative strategies for the dynamic conservation of FGR in European forests, (ii) broadening the range of FGR used by European breeding programmes, and (iii) preparing new forest management scenarios and policy frameworks that fully integrate genetic conservation and breeding aspects, to adapt forests and forestry to changing environmental conditions and societal demands. GenTree focuses on economically and ecologically important tree species in Europe, growing in a wide range of habitats and covering different societal uses and values. The major outputs of GenTree will include: (i) much-needed new scientific knowledge on phenotypic and genotypic diversity across environmental gradients in Europe, (ii) improved genotyping and phenotyping monitoring tools for practitioners, (iii) updated and refined data for information systems of in-situ and ex-situ FGR collections, (iv) innovative strategies for conserving, breeding, exchanging and using diversified forest reproductive material, and (v) novel outreach and science-policy support tools to better integrate FGR concerns into forest management and better implement relevant international commitments in Europe. GenTree will improve the status and use of European in-situ and ex-situ FGR collections; support the acquisition, conservation, characterisation, evaluation and use of relevant FGR in breeding and forestry practice and policy; seek to harmonise, rationalise and improve the management of existing collections and databases; and strengthen the EU strategy for cooperation on FGR research and innovation.


Grant
Agency: Cordis | Branch: FP7 | Program: CP-IP | Phase: SSH.2013.3.2-1 | Award Amount: 6.25M | Year: 2014

SI-DRIVE extends knowledge about social innovation (SI) in three major directions: - Integrating theories and research methodologies to advance understanding of SI, leading to a comprehensive new paradigm of innovation. - Undertaking European and global mapping of SI, thereby addressing different social, economic, cultural, historical and religious contexts in eight major world regions. - Ensuring relevance for policy makers and practitioners through in-depth analyses and case studies in seven policy fields, with cross-European and world-region comparisons, foresight and policy round tables. SI-DRIVE involves 15 partners from 12 EU Member States and 10 from other parts of the world. The approach adopted carefully links the research process to both the complexity of the topic and the project workflow. First, there is cyclical iteration between theory development, methodological improvement, and policy recommendations. Second, there are two mapping exercises, at the European and the global level. Initial mapping will capture basic information about 1,000 actual successful and failed social innovations from a wide variety of sources worldwide, leading to a typology of SI (testing the SI perspectives proposed by the BEPA report) and using this to examine the global SI distribution. Subsequent mapping will use the typology to focus on well-documented SI, leading to the selection of 10 cases each for in-depth analysis in the seven SI-DRIVE policy fields. Third, these case studies will be further analysed, used in stakeholder dialogues in 7 policy-field platforms and in the analysis of cross-cutting dimensions (e.g. gender, diversity, ICT), carefully taking into account cross-sector relevance (private, public, civil sectors) and future impact. The outcomes of SI-DRIVE will address all objectives required by the Call, cover a broad range of research dimensions, have particular impact on societal change and empowerment, and contribute to the objectives of the Europe 2020 Strategy.


Grant
Agency: Cordis | Branch: FP7 | Program: CP-FP | Phase: AAT.2008.1.1.1.;AAT.2008.4.1.1. | Award Amount: 3.72M | Year: 2009

Broadband noise generated by the turbulent flow around the airframe is one of the most important components of aircraft-generated acoustic nuisance. Its prediction and subsequent reduction are essential for achieving the ACARE 2020 objectives for noise level reductions. However, the complexity and diversity of broadband turbulent noise sources make its prediction with present numerical tools extremely challenging and far from mature. VALIANT is an upstream, research-oriented project whose objective is to tackle this challenge by generating new experimental data and by validating and improving numerical tools for predicting airframe noise (AFN) generated by landing gears, slats, flaps and local separation regions. Owing to the extremely complex physical nature of the phenomenon and the high computational cost of computing full aircraft configurations on the one hand, and the lack of a reliable experimental database on the other, VALIANT focuses on key generic test cases representing the major broadband AFN mechanisms associated with multiple-body interactions: two struts (landing gear), wing-flap, slat-wing and gap-turbulence. For all these configurations, the components of the noise prediction chain (for the turbulent/source region and the near- and far-field propagation domains) and their mutual interactions are evaluated, and avenues of improvement developed. It is expected that validating and improving the predictive tools will yield deeper insight into the mechanisms behind AFN, an essential step towards new efficient AFN reduction concepts and their optimization to achieve the required breakthrough towards quieter aircraft. The consortium is formed by 12 European and Russian partners: 2 universities and 7 research establishments, including the most important technology providers to the airframe industry, and 3 companies (2 SMEs) providing dedicated engineering services and software tools in aeroacoustics to transport industries, with emphasis on aeronautics.


Grant
Agency: Cordis | Branch: FP7 | Program: CP-CSA-Infra | Phase: INFRA-2010-1.1.19 | Award Amount: 9.36M | Year: 2011

Environmental change, and particularly amplified global climate change, is accelerating in the Arctic. These changes already affect local residents, and feedbacks from the Arctic's land surface to the climate system will have global implications. However, climate change and its impacts vary throughout the wide environmental and land-use envelopes of the Arctic. Unfortunately, the Arctic is generally remote and sparsely populated, and research and monitoring activities are more restricted in time and space than elsewhere. This limitation comes at a time when the need for knowledge is expanding rapidly, along with the technological opportunities to make field data collection and accessibility more efficient. INTERACT is a network under the auspices of SCANNET, a circumarctic network of terrestrial field bases. INTERACT specifically seeks to build capacity for research and monitoring in the European Arctic and beyond. Partnerships will be established between station managers and researchers within Joint Research Activities that will develop more efficient networks of sensors to measure changing environmental conditions and make data storage and access more efficient through a single portal. New communities of researchers will be offered access to Arctic terrestrial infrastructures, while local stakeholders as well as major international organisations will be involved in interactions with the infrastructures. This will lead to increased public awareness of environmental change and of methods to adapt to it, increased access to information for education at all levels, and input to major international research and assessment programmes. The whole consortium will form a coherent and integrated unit working within a concept of wide environmental and land-use envelopes, in which local conditions determine the directions and magnitudes of environmental change, while the balance and synergies of processes integrated across the whole region have global impacts.


Grant
Agency: Cordis | Branch: FP7 | Program: CP-IP | Phase: SSH.2011.4.2-1 | Award Amount: 8.99M | Year: 2012

In order to gauge its significance, conceptual change in the study of borders must be seen in relation to the fundamental social, economic and geopolitical transformations of the past decades. In addition, major paradigmatic shifts in scientific debate, in the social sciences in particular, must also be considered. Recognising the close interrelationship between social change and paradigm shifts, the EUBORDERSCAPES project will analyse the evolving concept of borders in terms of the mutually linked emergence of post-national, post-colonial, post-modernist and post-Communist strands of inquiry. State borders are the frame of reference, rather than ethnographic/anthropological boundaries. However, this approach emphasises the social significance and subjectivities of state borders while critically interrogating objective categories of state territoriality and international relations. Furthermore, the research proposed here will not only be focused on the more general, at times highly abstract, level of conceptual change. The approach will also allow us to compare and contrast how different and often contested conceptualisations of state borders (in terms of their political, social, cultural and symbolic significance) resonate in concrete contexts at the level of everyday life.


Grant
Agency: Cordis | Branch: FP7 | Program: CP-IP | Phase: ENV.2011.1.1.2-1 | Award Amount: 10.93M | Year: 2011

CLAIRE investigates the ways in which climate change alters the threat that air pollution poses to European land ecosystems, including soils. Based on field observations, experimental data and models, it establishes new flux, concentration and dose-response relationships as a basis for informing future European policies. Starting with biosphere-atmosphere exchange measurements, CLAIRE quantifies how global warming and altered precipitation will affect emissions of key European primary pollutants (NOx, NH3, VOCs), including interactions with increasing aerosol and hemispheric O3 background concentrations, which modify atmospheric transport and deposition. An ensemble of chemistry transport models will be applied to assess uncertainty in response to harmonized scenarios for climate, emissions and land use, while high-resolution studies will investigate how climate change alters local patterns of pollutant exposure and threshold exceedance. A network of European experiments for contrasting ecosystems and climates, combined with meta-analysis of unpublished datasets, will quantify how climate change alters ecosystem vulnerability to tropospheric O3 and N deposition, including interaction with increased CO2. Combined with special topics on interactions with N form (wet/dry, NHx/NOy), aerosol-exacerbated drought stress and BVOC self-protection against O3 effects, novel threshold and dose-response approaches will be developed. These will be combined with regional atmospheric and biogeochemical models to estimate interactions and feedbacks on plant/soil carbon stocks, greenhouse gas balance and plant species change. The new risk assessment chain to be developed will be applied at the European scale, quantifying how projected climate change will alter damage estimates. Combined with the economic valuation of ecosystem services, improved integrated assessment modelling will allow a cost-benefit analysis to inform future mitigation and adaptation strategies on air pollution and climate change.


Grant
Agency: Cordis | Branch: FP7 | Program: CSA | Phase: INFRA-2007-3.0-03 | Award Amount: 4.06M | Year: 2008

PESI provides standardised and authoritative taxonomic information by integrating and securing Europe's taxonomically authoritative species name registers and nomenclators (name databases) that underpin the management of biodiversity in Europe.

PESI defines and coordinates strategies to enhance the quality and reliability of European biodiversity information by integrating the infrastructural components of four major community networks on taxonomic indexing into a joint work programme. This will result in functional knowledge networks of taxonomic experts and regional focal points, which will collaborate on the establishment of standardised and authoritative taxonomic (meta-)data. In addition, PESI will coordinate the integration and synchronisation of the European taxonomic information systems into a joint e-infrastructure, and the setup of a common user interface disseminating the pan-European checklists and associated user services.

The organisation of national and regional focal-point networks as projected not only assures efficient access to local expertise, but is also important for the synergistic promotion of taxonomic standards throughout Europe, for instance to liaise with national governmental bodies on the implementation of European biodiversity legislation. In addition, PESI will begin the geographic expansion of the European expertise networks to eventually cover the entire Palaearctic biogeographic region.

PESI supports international efforts on the development of a Global Names Architecture by building a common intelligent name-matching device in consultation with the principal initiatives (GBIF, TDWG, EoL, SpeciesBase). PESI contributes to the development of a unified cross-reference system and provides high-quality taxonomic standards. PESI will further involve the Europe-based nomenclatural services and link the planned joint European taxonomic e-infrastructure's middle layer to the global e-gateway.


Grant
Agency: Cordis | Branch: FP7 | Program: CP-CSA-Infra | Phase: INFRA-2012-2.2.5. | Award Amount: 4.12M | Year: 2012

MIRRI will be a pan-European distributed research infrastructure providing microbiological services, facilitating access to high-quality microorganisms, their derivatives and associated data for research, development and application. It will connect resource holders with researchers and policy makers to deliver resources and services more effectively and efficiently, meeting the needs of innovation in biotechnology. The RI builds upon 60 microbial domain biological resource centres (BRCs) in 26 European countries; collectively they provide access to more than 350,000 strains of microorganisms. Microorganisms provide essential raw material for biotechnology, but to date less than 1% of the estimated number of species have been described and made available to be harnessed by man. As new species are discovered, the expertise needed to ensure correct identification is difficult to locate. Public sequence databases provide modern tools for identification, but the information is often of poor quality and often not backed up by the biological material needed to validate the data. The currently fragmented resource distributed across Europe needs to be coordinated and operated to common standards, with a facilitating policy to help focus activities on the big challenges in healthcare, food security, poverty alleviation and climate change. The missing infrastructure, policy framework and governance structures will be defined in the preparatory phase, as will the links to researchers and policy makers. Specialist clusters will address priority issues and steer outputs. BRCs should be placed on national road maps, ensuring that the pan-European strategy can be implemented. Cross-discipline interactions with other RIs will lead to new approaches. Linking microbial strain information to other relevant data will allow its full exploitation. Appropriate data-mining solutions will enable focused acquisition and delivery strategies, thus bridging current gaps and improving user access.


News Article | December 20, 2016
Site: www.rdmag.com

Physicists from MIPT and the Joint Institute for High Temperatures of the Russian Academy of Sciences described the mobility of line defects, or dislocations, in uranium dioxide. This will enable future predictions of nuclear fuel behavior under operating conditions. The research findings were published in the International Journal of Plasticity. Nuclear fuel has immense potential, as it is one of the most energy-dense resources available: a single uranium dioxide fuel pellet weighing no more than a few grams releases the same amount of energy within the reactor core as is produced by burning several hundred kilograms of anthracite coal or oil. When a nuclear reactor is in operation, the fuel in the pellets undergoes extremely complex transformations caused by both temperature and radiation. Because the underlying mechanisms of these transformations are not yet fully understood, we are still unable to realize the complete potential of nuclear fuel and reduce the risk of accidents to a minimum. The mechanical properties of fuel pellets, which play an important part in nuclear engineering, are determined by the motion of and interaction between dislocations. Dislocation mobility in uranium dioxide at high temperatures and under stress had never been studied in detail until the recent research into dislocation dynamics carried out by Artem Lunev, Alexey Kuksin, and Sergey Starikov. In their paper, the scientists present simulation data on dislocation behavior in uranium dioxide, one of the most widespread compounds used as nuclear fuel at power plants across the globe. To be used as nuclear fuel, uranium dioxide is formed into ceramic pellets that are sintered at a high temperature. This material has a very high melting point, is resistant to radiation-induced growth, and does not undergo phase transitions within a broad temperature range.
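The pellet-versus-coal comparison above is easy to sanity-check with back-of-the-envelope arithmetic. The sketch below uses typical textbook figures; the pellet mass, burnup, and coal heating value are assumptions for illustration, not numbers from the article:

```python
# Rough sanity check of the fuel-pellet comparison (typical textbook values,
# not figures from the article): a UO2 pellet of ~7 g at a burnup of
# ~45 GWd per tonne of uranium, versus coal at ~29 MJ/kg.
PELLET_MASS_G = 7.0
U_FRACTION = 0.88                        # mass fraction of U in UO2 (~238/270)
BURNUP_J_PER_G_U = 45e9 * 86400 / 1e6    # 45 GWd/tU -> joules per gram of U
COAL_J_PER_KG = 29e6                     # heating value of anthracite coal

pellet_energy_j = PELLET_MASS_G * U_FRACTION * BURNUP_J_PER_G_U
coal_equivalent_kg = pellet_energy_j / COAL_J_PER_KG
# coal_equivalent_kg comes out in the high hundreds of kilograms,
# consistent with the article's "several hundred kilograms".
```

With these assumed figures the pellet is worth roughly 800 kg of coal, so the article's claim is the right order of magnitude.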
Theoretically, a solid body has a regular, ordered structure (a crystalline structure), and there is a designated position for each atom. In reality, perfect crystals do not exist, because some atoms or groups of atoms are always out of place, altering the ideal arrangement. In other words, there are defects (imperfections) in an actual crystal. They come in several types: point defects, line defects (dislocations), planar defects, and bulk defects. Defects can move within the crystal, and the nature of their motion depends on external factors. Dislocation dynamics are known to determine fuel properties relevant to nuclear engineering (plasticity, fission-fragment diffusion). In their study, the scientists from MIPT and the Joint Institute for High Temperatures used computational methods to develop a model of an isolated dislocation in a perfect uranium dioxide crystal. They calculated the dislocation velocity as a function of temperature and the external forces affecting the crystal. The researchers analyzed the simulation results within the framework of statistical physics and obtained a model that describes the behavior of dislocations over a broad temperature range under shear stresses of various magnitudes. This model enables the calculation of dislocation velocity from known temperature and stress parameters. The model proposed by the Russian scientists could soon be used to simulate more complex systems and study the macroscopic processes occurring in fuel pellets under operating conditions. "This is a major advance toward being able to describe processes as complex as nuclear fuel swelling and embrittlement during operation by means of computer simulations alone," says Sergey Starikov, a coauthor of the study, an associate professor at MIPT, and a senior researcher at the Joint Institute for High Temperatures.
Computer modeling enables scientists to trace individual fuel atoms and calculate their velocities and the forces affecting them, along with other parameters. This allows systems of various complex configurations to be simulated and studied. Computer modeling is widely used in situations where performing an experiment would be rather problematic, and research into nuclear fuel behavior is precisely one of those areas. Such large-scale calculations rely on modern supercomputers, as massive computing power is required to find the forces affecting individual atoms at each moment in time.
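The article does not give the functional form of the velocity model, but thermally activated dislocation motion is commonly described by an Arrhenius-type law, with velocity rising steeply with temperature and with applied shear stress. The sketch below is purely illustrative: the pre-factor, reference stress, and activation energy are invented placeholders, not the fitted parameters from the paper.

```python
import math

# Illustrative thermally activated dislocation-velocity model (a common
# functional form in the dislocation-dynamics literature; NOT the specific
# fit obtained by Lunev, Kuksin, and Starikov):
#   v(T, tau) = v0 * (tau / tau0) * exp(-E_a / (k_B * T))
K_B = 8.617e-5  # Boltzmann constant, eV/K

def dislocation_velocity(temperature_k, shear_stress_mpa,
                         v0=1.0e3,            # pre-factor, m/s (assumed)
                         tau0=100.0,          # reference stress, MPa (assumed)
                         activation_ev=2.0):  # activation energy, eV (assumed)
    """Dislocation velocity under shear stress tau at temperature T."""
    return (v0 * (shear_stress_mpa / tau0)
            * math.exp(-activation_ev / (K_B * temperature_k)))

# The velocity rises steeply with temperature and linearly with stress:
v_cold = dislocation_velocity(1000.0, 50.0)
v_hot = dislocation_velocity(2000.0, 50.0)
```

A model of this shape lets one tabulate v(T, tau) once and then feed it into larger-scale plasticity simulations, which is the kind of hand-off the article describes.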


News Article | November 18, 2016
Site: phys.org

The two sources studied, 4U 0115+63 and V 0332+53, belong to a rather special class of transient X-ray pulsars. These stars alternately act as weak X-ray sources, undergo giant outbursts, and disappear from sight completely. The transitions of pulsars between different states provide valuable information about their magnetic field and the temperature of the surrounding matter. Such information is indispensable, as the immensely strong magnetic fields and extremely high temperatures involved make direct measurements in a laboratory on Earth impossible. The name of a pulsar is preceded by a letter designating the first observatory to discover it, which is followed by a numerical code containing the coordinates of the pulsar. The "V" refers to Vela 5B, a US military satellite that was launched to spy on the Soviets. As for the "4U" in the other name, it stands for the fourth Uhuru catalog, compiled by the first orbital observatory dedicated specifically to X-ray astronomy. The first pulsar ever discovered was originally known as "LGM-1" (for "little green men"), because it was a source of regular radio pulses, leading scientists to believe that they might have received a signal from intelligent extraterrestrials. An X-ray pulsar is a rapidly spinning neutron star with a strong magnetic field. A neutron star can be part of a binary system. In a process that astrophysicists call accretion, the neutron star can channel gas from its normal star companion. The attracted gas spirals toward the neutron star, forming an accretion disk, which is disrupted at the magnetosphere radius. During accretion, the matter penetrates to a certain extent into the magnetosphere, "freezes into it," and flows along the magnetic field lines toward the magnetic poles of the neutron star. Falling toward the poles, the gas is heated to several hundred million degrees, which causes the emission of X-rays.
If the magnetic axis of a neutron star is skewed relative to its rotational axis, the X-ray beams it emits rotate in a manner that resembles the way beacons work. For an "onshore" observer, the source appears to be sending signals at regular intervals ranging from fractions of a second to several minutes. A neutron star is one of the possible remnants left behind by a supernova. It can be formed at the end of stellar evolution, if the original star was massive enough to allow gravitation to compress the stellar matter enough to make electrons combine with protons, yielding neutrons. The magnetic field of a neutron star can be more than 10 orders of magnitude stronger than any magnetic field that could be achieved on Earth. In a binary system, an X-ray pulsar is observed when the neutron star is accreting matter from its normal star companion—often a giant or a supergiant characterized by a strong stellar wind (ejection of matter into space). Alternatively, it can be a smaller star like our own sun that has filled its Roche lobe—the region beyond which it is unable to hold on to the matter attracted by the gravity of the neutron star companion. The 4U 0115+63 and V 0332+53 pulsars are irregular X-ray sources (transients), owing to the fact that their stellar companions belong to the rather unusual Be star class. The axial rotation of a Be star is so rapid that it occasionally starts "bulging" at the equator, and a gas disk is formed around it, filling the Roche lobe. The neutron star starts rapidly accreting the gas from its "donor" companion, causing a sharp increase in X-ray emission called an X-ray outburst. At some point, after the matter in the equatorial bulge starts to deplete, the accretion disk becomes exhausted, and the gas can no longer fall onto the neutron star due to the influence of the magnetic field and the centrifugal force.
This gives rise to a phenomenon known as the "propeller effect": the pulsar enters a state in which accretion does not occur, and the X-ray source is no longer observed. Astronomers use the term "luminosity" to refer to the total amount of energy emitted by a celestial body per unit time. The red line in the diagram represents the threshold luminosity for the 4U 0115+63 pulsar. Observations of the other source (V 0332+53) produced similar results. The blue lines mark the moments in time when the distance between the pulsar and its companion was at a minimum. This proximity of the companion star might cause the neutron star to go into overdrive and resume emission (see diagram), provided that sufficient amounts of matter are still available for accretion. The Russian scientists used the X-ray telescope (XRT) on NASA's Swift space observatory to measure the threshold luminosity that marks the transition of a pulsar to the propeller regime. This parameter depends on the magnetic field and the rotational period of the pulsar. The rotational periods of the sources in this study are known from the intervals between the pulses that we can register: 3.6 s in the case of 4U 0115+63 and 4.3 s for V 0332+53. Knowing both the threshold luminosity and the rotational period, one can calculate the strength of the magnetic field. The research findings are in agreement with the values obtained using other methods. However, the luminosity was only reduced by a factor of 200, compared to the expected 400-fold reduction. The researchers hypothesized that there could be two possible explanations for this discrepancy. First, the neutron star surface could become an additional source of X-rays as it cools down following an outburst. Second, the propeller effect could leave some room for matter transfer between the two stars, as opposed to sealing the neutron star off completely. In other words, some unaccounted-for mechanism could be enabling accretion to continue to a certain extent.
The transition of a pulsar into the propeller mode is challenging to observe, as the low luminosity state cannot be detected easily. For 4U 0115+63 and V 0332+53, this was attempted following the previous outbursts of these sources. However, the instruments available at the time were not sensitive enough to see the pulsars in the "off-mode." This study is the first to demonstrate reliably that these two sources do, indeed, "black out." Moreover, the researchers showed that knowledge of the luminosity that marks the transition of pulsars into the propeller regime can be used to learn more about the structure and intensity of the magnetic fields around neutron stars. Prof. Dr. Alexander Lutovinov of the Russian Academy of Sciences, Head of Laboratory at the Space Research Institute (IKI RAS) and a professor at MIPT, comments, "Knowledge of the structure of the magnetic fields of neutron stars is of paramount importance for our understanding of their formation and evolution. In this research, we determined the dipole magnetic field component, which is linked to the propeller effect, for two neutron stars. We demonstrate that this independently calculated value can be compared to the available results of magnetic field measurements based on the detection of cyclotron lines in the spectra of sources. By doing this, it is possible to estimate the contribution of the other, higher-order components in the field structure." More information: S. S. Tsygankov et al, Propeller effect in two brightest transient X-ray pulsars: 4U 0115+63 and V 0332+53, Astronomy & Astrophysics (2016). DOI: 10.1051/0004-6361/201628236


News Article | February 15, 2017
Site: www.eurekalert.org

Scientists from MIPT, the University of Oxford, and the Lebedev Physical Institute of the Russian Academy of Sciences have estimated the number of stars disrupted by solitary supermassive black holes in galactic centers that formed through mergers of galaxies, each containing its own supermassive black hole. The astrophysicists examined whether the gravitational effects arising as two black holes draw closer to one another can explain why we observe fewer stars being captured by black holes than basic theoretical models predict. In their study, published in The Astrophysical Journal, the researchers looked into the interplay of the various dynamical mechanisms affecting the number of stars in a galaxy that are captured per unit time (the tidal disruption rate). (Spoiler alert: an advanced theoretical model yielded results that are even more inconsistent with observations, leading the team to hypothesize that the disruption of stars in galactic nuclei may be going undetected.) Tidal disruption events, or TDEs, are the only available source of information about inactive galactic nuclei. There is at least one supermassive black hole at the center of most galaxies. Surrounded by dense central star clusters, these black holes occupy regions known as galactic nuclei. As their name suggests, black holes do not emit any light. However, when matter falls onto the central massive object, it is heated to extreme temperatures and can be observed with a telescope. Active galaxies have gas clouds that feed the black hole, making it visible. Most galaxies, however--approximately 90 percent--remain "silent": they contain no gas clouds, so there is no matter for the black hole to feed on, except for stars that occasionally stray too close. When that happens, the star is pulled apart by tidal forces, experiencing what is known as spaghettification, and astronomers detect a tidal disruption event. So far, around 50 flares of radiation linked to TDEs have been observed. 
It is reckoned that the average rate of stellar disruption amounts to one star per 10,000 to 100,000 years per galaxy. Based on these data, the scientists are trying to develop a reliable model of what goes on in inactive galactic nuclei. The simplest theoretical model involves a galaxy whose nucleus is spherical in shape and has a supermassive black hole at its center. The black hole is orbited by stars that change the direction of their motion as they pass by one another, the way billiard balls bounce off one another when they collide on the table. However, whereas a billiard ball needs to be moving straight toward the hole to fall into it, a star has more options: it is enough for its velocity vector to lie within the so-called loss cone for the star to eventually be captured and disrupted by the black hole's gravity. According to this very simple model, an average of one star per galaxy should be captured every 1,000 to 10,000 years, i.e., more frequently than observed. Although the model can be improved by taking a number of other factors into account (e.g., the difference in the mass of stars), this would only further increase the predicted tidal disruption rates. At present, there is only one mechanism discussed in published sources that could account for the fact that fewer stars are captured than expected. Curiously, it requires that most of the low-angular-momentum stars vanish, so to speak. But let us first examine an analogous case involving gas diffusion. Suppose gas molecules are in random motion inside a vessel whose walls can absorb them. Now imagine the molecules closest to the walls have been removed. The obvious consequence would be fewer molecules absorbed per unit time, since the remaining molecules must first travel some distance before they can reach a wall. Similarly, if stars are removed from the center of the galaxy, the stellar disruption rate will fall. 
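To make the capture process above concrete, the distance at which a star is torn apart can be estimated with the standard tidal-radius formula, r_t ~ R_star * (M_bh / M_star)^(1/3), which the article does not quote. A minimal Python sketch; the million-solar-mass black hole and Sun-like star are illustrative assumptions, not values from the study:

```python
# Rough tidal-disruption radius for a Sun-like star near a supermassive
# black hole: r_t ~ R_star * (M_bh / M_star)**(1/3).
# The 1e6 M_sun black hole below is an illustrative assumption.

R_SUN_M = 6.957e8        # solar radius, m
M_SUN_KG = 1.989e30      # solar mass, kg
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8              # speed of light, m/s

def tidal_radius_m(m_bh_solar, m_star_solar=1.0, r_star_solar=1.0):
    """Order-of-magnitude radius at which tidal forces shred the star."""
    return r_star_solar * R_SUN_M * (m_bh_solar / m_star_solar) ** (1.0 / 3.0)

def schwarzschild_radius_m(m_bh_solar):
    """Event-horizon radius of a non-rotating black hole."""
    return 2.0 * G * m_bh_solar * M_SUN_KG / C**2

m_bh = 1e6  # a typical galactic-nucleus black hole, in solar masses
rt = tidal_radius_m(m_bh)
rs = schwarzschild_radius_m(m_bh)
print(f"tidal radius: {rt:.2e} m, horizon: {rs:.2e} m, ratio: {rt/rs:.1f}")
# Since the ratio scales as M**(-2/3), for much heavier black holes
# (~1e8 M_sun) it drops toward 1: the star is swallowed whole.
```

Comparing r_t with the Schwarzschild radius shows why TDEs are visible at all: for a million-solar-mass black hole the disruption point lies well outside the horizon, so the stellar debris can flare before disappearing.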
Naturally, the stars cannot simply vanish into thin air; but if the galaxy hosts a binary black hole, then individual stars can be ejected from the galaxy by means of a so-called gravitational slingshot, a maneuver also known as a gravity assist when man-made spacecraft are involved. The law of conservation of energy implies that when a star is accelerated (i.e., receives additional kinetic energy), the energy of the binary black hole must decrease. As a result, the two black holes draw closer to one another and begin to merge. Eventually, when the merger is almost complete, some of the energy is radiated outward in the form of gravitational waves, as demonstrated by the recent first direct detection of gravitational waves. Although a galaxy merger can be accompanied by a decrease in the rate of star disruption, the opposite effect has also been observed. It has to do with the fact that any galactic nucleus that is the product of a merger is slightly nonspherical in shape. In a nonspherical nucleus, stars are more thoroughly intermixed; hence, more stars have orbits that lie close to the black hole. This means that more stars are available to be captured, and the TDE rate goes up in spite of the slingshot effect. To find out how the interplay of these two opposing factors affects the rate of stellar disruption, Kirill Lezhnin and Eugene Vasiliev--both MIPT graduates--performed the necessary calculations and investigated the influence that black hole mass, nuclear star cluster geometry, and initial conditions have on disruption rates. It turned out that the effect of the removal of stars from the center of the galaxy by means of the gravitational slingshot was negligible in all cases except the spherical-galaxy-in-a-vacuum scenario. It should be noted, however, that the shape of a galaxy formed in a merger is never a perfect sphere. 
The bottom line of the calculations is that an average of one star per 10,000 years per galaxy should be disrupted. And while this number is in good agreement with prior theoretical predictions, it also raises the question: Why are fewer TDEs observed than theoretical models would have us expect? Kirill Lezhnin, one of the authors of the study and a researcher at MIPT's Laboratory of Astrophysics and Physics of Nonlinear Processes, explains the significance of the research findings: "We showed that the observed low disruption rates cannot be accounted for by the slingshot effect. Therefore, another mechanism needs to be found which lies outside the realm of stellar dynamics studies. Alternatively, the TDE rates we arrived at could in fact be accurate. We then need to find an explanation as to why they are not observed."


News Article | November 17, 2016
Site: www.eurekalert.org

An international team of astrophysicists including Russian scientists from the Space Research Institute of the Russian Academy of Sciences (RAS), MIPT, and Pulkovo Observatory of RAS has detected an abrupt decrease in pulsar luminosity following giant outbursts. The phenomenon is associated with the so-called "propeller effect," which was predicted more than 40 years ago. However, this is the first study to reliably observe the transition of the two X-ray pulsars 4U 0115+63 and V 0332+53 to the "propeller regime." The results of the observations, the conclusions reached by the researchers, and the relevant calculations were published in Astronomy & Astrophysics. The two sources studied, 4U 0115+63 and V 0332+53, belong to a rather special class of transient X-ray pulsars. These stars alternately act as weak X-ray sources, undergo giant outbursts, and disappear from sight completely. The transitions of pulsars between different states provide valuable information about their magnetic field and the temperature of the surrounding matter. Such information is indispensable, as the immensely strong magnetic fields and extremely high temperatures make direct measurements impossible in a laboratory on Earth. The name of a pulsar begins with letters designating the observatory that discovered it, followed by a numerical code containing the coordinates of the pulsar. The "V" refers to Vela 5B, a US military satellite that was launched to spy on the Soviets. As for the "4U" in the other name, it stands for the fourth Uhuru catalog, compiled by the first observatory in orbit dedicated specifically to X-ray astronomy. The first pulsar ever discovered was originally designated "LGM-1" (for "little green men"): it was a source of regular radio pulses, leading scientists to suspect that they might have received a signal from intelligent extraterrestrials. An X-ray pulsar is a rapidly spinning neutron star with a strong magnetic field. 
A neutron star can be part of a binary system. In a process that astrophysicists call accretion, the neutron star can channel gas from its normal star companion. The attracted gas spirals toward the neutron star, forming an accretion disk, which is disrupted at the magnetospheric radius. During accretion, the matter penetrates to a certain extent into the magnetosphere, "freezes into it," and flows along the magnetic field lines toward the magnetic poles of the neutron star. Falling toward the poles, the gas is heated to several hundred million degrees, which causes the emission of X-rays. If the magnetic axis of a neutron star is skewed relative to its rotational axis, the X-ray beams it emits sweep around in a manner that resembles the way beacons work. For an "onshore" observer, the source appears to be sending signals at regular intervals ranging from fractions of a second to several minutes. A neutron star is one of the possible remnants left behind by a supernova. It can be formed at the end of stellar evolution, if the original star was massive enough for gravitation to compress the stellar matter to the point where electrons combine with protons, yielding neutrons. The magnetic field of a neutron star can be more than ten orders of magnitude stronger than any magnetic field that could be achieved on Earth. In a binary system, an X-ray pulsar is observed when the neutron star is accreting matter from its normal star companion--often a giant or a supergiant characterized by a strong stellar wind (ejection of matter into space). Alternatively, it can be a smaller star like our own Sun that has filled its Roche lobe--the region beyond which it is unable to hold on to the matter attracted by the gravity of the neutron star companion. A NASA video showing the accretion of matter by a pulsar from its companion star. 
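The magnetospheric radius at which the disk is disrupted, mentioned above, is commonly estimated with the standard Alfven-radius formula, R_m ~ (mu^4 / (2 G M Mdot^2))^(1/7). A rough Python sketch; the 1e12 G field and 1e17 g/s accretion rate below are typical textbook values, not figures from this article:

```python
# Rough magnetospheric (Alfven) radius at which the accretion disk is
# disrupted: R_m ~ (mu^4 / (2 G M Mdot^2))**(1/7), with magnetic dipole
# moment mu = B * R**3.  CGS units; field strength and accretion rate
# are illustrative assumptions.

G_CGS = 6.674e-8          # cm^3 g^-1 s^-2
M_SUN_G = 1.989e33        # g

def alfven_radius_cm(b_gauss, mdot_g_s, m_ns_solar=1.4, r_ns_cm=1e6):
    mu = b_gauss * r_ns_cm**3                     # magnetic dipole moment
    m = m_ns_solar * M_SUN_G
    return (mu**4 / (2.0 * G_CGS * m * mdot_g_s**2)) ** (1.0 / 7.0)

# A 1e12 G pulsar accreting ~1e17 g/s:
r_m = alfven_radius_cm(1e12, 1e17)
print(f"magnetospheric radius ~ {r_m / 1e5:.0f} km")
# Thousands of km, i.e. far above the ~10 km stellar surface: from there
# the gas must ride the field lines down to the magnetic poles.
```

The steep 1/7 power explains why the radius changes only modestly even when the accretion rate swings by orders of magnitude during an outburst.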
The 4U 0115+63 and V 0332+53 pulsars are irregular X-ray sources (transients), owing to the fact that their stellar companions belong to the rather unusual Be star class. The axial rotation of a Be star is so rapid that it occasionally starts "bulging" at the equator, whereby a gas disk forms around it, filling the Roche lobe. The neutron star starts rapidly accreting the gas from its "donor" companion, causing a sharp increase in X-ray emission called an X-ray outburst. At some point, after the matter in the equatorial bulge starts to deplete, the accretion disk becomes exhausted, and the gas can no longer fall onto the neutron star due to the influence of the magnetic field and the centrifugal force. This gives rise to a phenomenon known as the "propeller effect": the pulsar enters a state where accretion does not occur, and the X-ray source is no longer observed. Astronomers use the term "luminosity" to refer to the total amount of energy emitted by a celestial body per unit time. The red line in the diagram represents the threshold luminosity for the 4U 0115+63 pulsar. Observations of the other source (V 0332+53) produced similar results. The blue lines mark the moments in time when the distance between the pulsar and the companion was at a minimum. This proximity of the companion star might cause the neutron star to go into overdrive and resume emission (see diagram), provided that sufficient amounts of matter are still available for accretion. The Russian scientists used the X-ray telescope (XRT) aboard NASA's Swift space observatory to measure the threshold luminosity that marks the transition of a pulsar to the propeller regime. This parameter depends on the magnetic field and the rotational period of the pulsar. The rotational periods of the sources in this study are known from the intervals between the pulses that can be registered: 3.6 s in the case of 4U 0115+63 and 4.3 s for V 0332+53. 
Knowing both the threshold luminosity and the rotational period, one can calculate the strength of the magnetic field. The research findings are in agreement with the values obtained using other methods. However, the luminosity was only reduced by a factor of 200, as compared to the expected 400-fold reduction. The researchers hypothesized two possible explanations for this discrepancy. First, the neutron star surface could become an additional source of X-rays as it cools down following an outburst. Second, the propeller effect could leave some room for matter transfer between the two stars, as opposed to sealing the neutron star off completely. In other words, an unaccounted-for mechanism could be enabling accretion to continue to a certain extent. The transition of a pulsar into the propeller mode is challenging to observe, as the low-luminosity state cannot be detected easily. For 4U 0115+63 and V 0332+53, this was attempted following the previous outbursts of these sources. However, the instruments available at the time were not sensitive enough to see the pulsars in the "off-mode." This study is the first to demonstrate reliably that these two sources do indeed "black out." Moreover, the researchers showed that knowledge of the luminosity that marks the transition of pulsars into the propeller regime can be used to learn more about the structure and intensity of the magnetic fields around neutron stars. Prof. Dr. Alexander Lutovinov of the Russian Academy of Sciences, Head of Laboratory at the Space Research Institute (IKI RAS) and a professor at MIPT, comments, "Knowledge of the structure of the magnetic fields of neutron stars is of paramount importance for our understanding of their formation and evolution. In this research, we determined the dipole magnetic field component, which is linked to the propeller effect, for two neutron stars. 
We demonstrate that this independently calculated value can be compared to the available results of magnetic field measurements based on the detection of cyclotron lines in the spectra of sources. By doing this, it is possible to estimate the contribution of the other, higher-order components in the field structure."
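As a rough sketch of the kind of calculation described above, the propeller threshold luminosity is often approximated in the literature as L_prop ~ 4e37 * k^(7/2) * B12^2 * P^(-7/3) * M1.4^(-2/3) * R6^5 erg/s, which can be inverted for the dipole field. Only the 3.6 s spin period comes from the article; the coefficient, k = 0.5, and the ~1e36 erg/s threshold luminosity used below are assumptions for illustration:

```python
# Invert an approximate propeller-luminosity relation for the surface
# dipole field.  B12 is the field in units of 1e12 G, P the spin period
# in seconds, M1.4 the mass in 1.4 solar masses, R6 the radius in 10 km;
# k ~ 0.5 relates the magnetospheric to the Alfven radius.  Coefficient
# and k are standard approximations, not values from the article.

def b_field_1e12_gauss(l_prop_erg_s, period_s, k=0.5, m14=1.0, r6=1.0):
    """Dipole field (in 1e12 G) implied by a propeller threshold luminosity."""
    coeff = 4e37 * k**3.5 * period_s ** (-7.0 / 3.0) * m14 ** (-2.0 / 3.0) * r6**5
    return (l_prop_erg_s / coeff) ** 0.5

# Hypothetical threshold luminosity of ~1e36 erg/s for a 3.6 s pulsar:
print(f"B ~ {b_field_1e12_gauss(1e36, 3.6):.1f} x 1e12 G")
```

The steep P^(-7/3) dependence is why the spin period must be known precisely: at a fixed threshold luminosity, a slower-spinning pulsar implies a stronger inferred field.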


News Article | October 26, 2016
Site: www.scientificamerican.com

Far beyond the eight planets of the solar system, beyond even Pluto and the diminutive dwarf planets, may lurk a major new world called “Planet Nine.” Few if any discoveries can be as sensational as finding another planet orbiting our sun, making the feat a Holy Grail for astronomers, who have managed to pull it off only a few times over the centuries. No one yet knows exactly where this elusive world might be—or even if it really exists at all. But in the race to find it, researchers are now narrowing down its location through its influence on the rest of the solar system, roughly halving the amount of space they thought they had to search only a few months ago. The scientists detailed their latest advances in the quest last week at the meeting of the American Astronomical Society Division for Planetary Sciences and the European Planetary Science Congress in Pasadena, Calif. In January astrophysicists Konstantin Batygin and Michael Brown at the California Institute of Technology revealed evidence for a big, undiscovered ninth planet. Their computer models suggested the gravitational pull of such a world might explain the strange, tilted orbits of several bodies in the Kuiper Belt of icy objects that haunt the solar system's outer reaches. Scientists are now scrambling to be the first to spot Planet Nine using some of the biggest telescopes on Earth, such as the Subaru Telescope in Hawaii. Batygin and Brown's work narrowed down the planet’s possible mass and orbit to areas where previous observations might have missed it. Their calculations suggest that it has a mass between five and 20 times that of Earth—a figure that is key to knowing the approximate size of the object they are looking for. They also suggest that its orbit is likely tilted about 30 degrees relative to the plane of the solar system—the relatively thin, flat zone in which the eight major planets orbit. 
They also propose that the planet is now likely near its farthest point from the sun, in the sky's northern hemisphere, and that it likely has an elongated orbit averaging between 380 and 980 astronomical units (AU) from the sun. (One AU is the average distance between Earth and the sun.) However, these estimates still leave a swath of sky "about 1,500 square degrees large," says astronomer Scott Sheppard at the Carnegie Institution for Science, who with astronomer Chadwick Trujillo first suggested the existence of Planet Nine in 2014. (In comparison, the full moon as seen from Earth covers about 0.2 degrees of sky.) This swath described by Sheppard corresponds to about 20 nights of observations on Subaru, "and if we get seven nights or so this year, that's three years—if it's not rainy any of those nights," Sheppard says. So the strategy in the race is now largely a matter of reducing the search area by eliminating theoretical possibilities. In an as-yet unpublished set of about 100 new high-resolution computer simulations, Batygin says he and Brown have narrowed down Planet Nine's location to a roughly 600- to 800-square degree patch of sky. They first modeled the solar system over the course of about 4 billion years, focusing on how the gravitational pulls of the system’s largest planets—Jupiter, Saturn, Uranus, Neptune and Planet Nine—might have sculpted the orbits of thousands of randomly scattered Kuiper Belt objects (KBOs). "We're searching for all of the things that Planet Nine does to the solar system," Brown says. In an attempt to refine Planet Nine’s likely orbit, the researchers next compared their results with how the Kuiper Belt looks now. "Our work yielded a synthetic solar system that looks a lot like the real one," Batygin says. "My confidence that Planet Nine is really out there has reached near-completion, given how our results are so close to what we actually see in the solar system." 
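For a sense of scale, Kepler's third law (P in years equals a^(3/2) for a in AU around one solar mass) converts the quoted 380-980 AU distances into orbital periods, under the simplifying assumption that those "average" distances can stand in for the semi-major axis of the elongated orbit:

```python
# Kepler's third law for an orbit around a 1-solar-mass star:
# P [years] = a [AU] ** 1.5.  The 380 and 980 AU values come from the
# article; treating them as semi-major axes is an approximation.

def orbital_period_years(a_au, m_central_solar=1.0):
    return a_au**1.5 / m_central_solar**0.5

for a in (380, 980):
    print(f"a = {a} AU  ->  P ~ {orbital_period_years(a):,.0f} years")
```

Even at the near end of the range, a single orbit takes thousands of years, so the planet's own motion across the sky would be extremely slow.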
Further strategies involve examining Planet Nine’s own possible gravitational influence on a variety of other bodies. Astronomers Yuri Medvedev and Dmitri Vavilov at the Institute of Applied Astronomy of the Russian Academy of Sciences looked at 768 comets entering the solar system for the first time, noting five that might have come close to Planet Nine—whose gravity would have altered their paths—sometime in the past. Their analysis suggests that "maybe Planet Nine made these comets go into the solar system," Vavilov says. "We think comets could help narrow down Planet Nine's location. It would help to find other comets." Sheppard says he would “be wary about using comets to help find Planet Nine, since there may be a lot of forces besides Planet Nine that could influence the comets' orbits. ... Still, while I'm skeptical, it could be helpful." Analyses of Pluto by astrophysicists Matthew Holman and Matthew Payne at the Harvard-Smithsonian Center for Astrophysics failed to find conclusive signs for or against Planet Nine. This was partly due to the messy nature of the old archival images of Pluto, which make it difficult to see whether Pluto veered in its orbit in ways that would hint at Planet Nine's presence or absence, Holman says. However, he notes that the high-quality data on Saturn from the ground-based radio-dish network that monitors the position of NASA's Cassini spacecraft is very promising, and is in line with what Batygin and Brown have reported so far. Analyzing variations in Mars' orbit may also help find Planet Nine, Holman says. Although Planet Nine would have less of an effect on Mars than Saturn—a consequence of Mars being closer and therefore more tightly gravitationally bound to the sun—there are more orbiters around Mars than Saturn, and they have been observing the Red Planet for a longer time, so "they're much more precise with their observations," Holman says. 
In addition, "Cassini ranging data is good to tens of meters, while ranging data between Earth and Mars is as good as one meter." More potential evidence of Planet Nine's influence may be found in how long it takes outer solar system bodies to orbit the sun. For instance, the four KBOs with the longest-known orbits revolve in patterns most readily explained by the presence of Planet Nine, says astronomer Renu Malhotra, chair of theoretical astrophysics at the University of Arizona at Tucson. Work by Malhotra and her colleagues also suggests two likely tilts for Planet Nine's orbit, one closer to the plane of the solar system at 18 degrees and the other steeper at about 48 degrees—information that could help shrink the vast part of the sky to be searched. But some research appears to limit possible locations to such an extent that it may rule out Planet Nine’s existence altogether. For example, while previous studies suggest Planet Nine exists because of the way certain KBO orbits cluster together, another explanation might be that the limited number of available KBO observations only make it look like their orbits are grouped, say astrophysicists Cory Shankman at the University of Victoria and Samantha Lawler at the National Research Council of Canada.* Brown contends that he and Batygin have accounted for the possibility of such observational biases—and that other lines of evidence also suggest Planet Nine is real. For instance, Brown and his colleagues say they have found that the ghostly world’s influence might solve the longstanding mystery of why the plane of the solar system is tilted with respect to the sun. Malhotra at the University of Arizona says she remains agnostic about Planet Nine, but notes that she and her colleagues have found that the orbits of extremely distant KBOs seem tilted in a way that is difficult to otherwise explain. "The amount of warp we see is just crazy," she says. 
"To me, it's the most intriguing evidence for Planet Nine I've run across so far." *Editor's Note (10/26/16): This sentence was changed after posting to correct Samantha Lawler's institutional affiliation.


News Article | February 3, 2016
Site: www.rdmag.com

Researchers from the General Physics Institute of the Russian Academy of Sciences (GPI RAS) and the Moscow Institute of Physics and Technology (MIPT) have developed a new biosensor test system based on magnetic nanoparticles. It is designed to provide highly accurate measurements of the concentration of protein molecules (e.g. markers that indicate the onset or development of a disease) in various samples, including opaque solutions or strongly colored liquids. The new development is similar (in principle) to a pregnancy test. The analysis is conducted using small test strips made of porous material with two test lines. A droplet of the sample liquid is applied to one end of the strip, and after a short period the result is shown as the activation of one or both lines. These test strips can be stored for a long time before being used. The test can be done quickly and does not need to be carried out by specially trained staff; tests can easily be performed next to a patient or even in field conditions. At the molecular level, the magnetic nanoparticles are "linked" with antibodies to the protein of interest and then placed on the porous plate close to the intended point of contact with the test solution. The liquid, which spreads along the plate due to capillary action, picks up the magnetic particles. It then meets two lines - the test line and the control line. The test line contains antibodies that capture the protein in question, and with it the magnetic markers attached to the protein molecules through their own antibody "links." The control line captures only the antibodies with magnetic markers, so it is activated in any case, provided the test strip is usable. The control line thus serves as an indicator that the test is suitable for use, that the protein antibodies in it have not been destroyed by improper storage, and that the test liquid has been applied correctly. 
After the sample has permeated the test strip and the antibodies have interacted with one another, the result can be read. This is as far as the resemblance to a pregnancy test goes. In a "classic" pregnancy test, the result can either be "yes" or "no". With this test, however, scientists can not only detect a protein with a high level of sensitivity, but also accurately determine its concentration. The accuracy of the concentration measurement even exceeds that of methods performed only in laboratory conditions by trained staff. Alexey Orlov, the corresponding author of the study and a Research Fellow of GPI RAS (who also completed an undergraduate degree at MIPT in 2010 and a postgraduate degree in 2013), explains: "Normally, tests that can be performed not only under lab conditions but also in the field use fluorescent or colored markers, and the results are determined visually, by sight or by using a video camera. In our case, we are using magnetic particles, which have a significant advantage: they can be used to conduct analyses even if the test strip is dipped into a completely opaque liquid, to determine the substances in whole blood for example. The precise numerical measurement is conducted entirely electronically using a portable device. This completely excludes any ambiguity." The scientists note that along with the high sensitivity of the concentration measurement, the new test system also allows measurements to be taken over a wide dynamic range: the upper limit of the test concentration is more than 4,000 times greater than the lower limit. Dynamic range is a familiar term in photography: in relation to a camera, it means the ability of the image sensor or film to distinguish gradations of brightness without washing out to white or collapsing into a dark spot. 
In biochemical measurements, dynamic range implies the ability to measure the concentration of a protein in a very dilute solution as well as in a very saturated one. The new system was tested by measuring 0.025 nanograms of prostate-specific antigen per milliliter of blood (the "healthy" range is anything lower than 4 nanograms per milliliter). Prostate-specific antigen (PSA) is one of the most commonly monitored markers in clinical examinations of men: it is a possible marker for prostate cancer, and it is also used in forensics to detect traces of semen. Both applications have certain limitations--neither can conclusively provide a diagnosis or prove the guilt of a suspect--but the new biosensor platform will be able to do more than analyze PSA; this particular protein was chosen as a demonstration of the method's capabilities. This level of sensitivity in determining PSA is enough to tell whether there has been a recurrence after the removal of the prostate, and the results obtained give an idea of the potential of the new development. The system is not only able to indicate when a particular indicator is outside of the normal range, but can also be used to easily trace the dynamics of the concentration of protein markers over the course of a disease. Having checked the results obtained by the new method against the "gold standard" for determining PSA - enzyme-linked immunosorbent assay (ELISA) - the scientists showed that the new test system works well and that it has significant advantages over traditional methods. In the new test system, the researchers used their own patented method, MPQ (magnetic particle quantification), to precisely count magnetic nanoparticles by their nonlinear magnetization. Using this method, the scientists are able to record as little as 60 zeptomoles (the prefix zepto- means ten to the minus twenty-first power) of nanoparticles, over a linear range spanning more than a factor of ten million. 
These parameters have never been achieved at this level before. The method involves applying an alternating magnetic field to the nanoparticles at two frequencies and monitoring the induction response at combinatorial frequencies. Many methods of analyzing substances are based on the fact that test objects, whether particles or molecules, are affected by an electromagnetic field. At the correct frequency, a sample starts either to actively absorb radiation or to radiate in response. In this case, the scientists used a combination of two frequencies of the magnetic field and monitored the response at the frequency that is their linear combination - the so-called "combinatorial" frequency. Maxim Nikitin, the Head of MIPT's Laboratory of Nanobiotechnology and a co-author of the study: "We previously demonstrated the high level of sensitivity of this method of detecting magnetic particles in a joint project with US researchers from the University of Chicago. We used our sensors to record magnetic radioactive nanoparticles based on the isotope 59-Fe in the bodies of animals in vivo (see Nikitin et al., J. Appl. Phys. 2008, 103, 07A304). In particular, it was found that the threshold for detection using this electronic method coincides with the reporting threshold of the accompanying gamma radiation, which means that radioactive markers can be replaced with magnetic nanoparticles in a number of various biophysical studies. In the present study, we use this methodology to achieve ultra-sensitivity for conducting immunoassays." In other words, in a range of biomedical research, magnetic markers and the special device that detects them can now accomplish what was previously possible only with radioactive preparations. 
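The two-frequency idea can be illustrated with a toy simulation: drive a saturating (nonlinear) magnetization with two tones and look for a response at a mixing frequency such as f1 + 2*f2, which a purely linear material does not produce. The tanh saturation model and the specific frequencies below are illustrative assumptions, not the actual MPQ device parameters:

```python
import numpy as np

# Toy illustration of combinatorial-frequency detection: a nonlinear
# (saturating) magnetization driven at two tones f1 and f2 produces
# mixing products such as f1 + 2*f2; a linear response does not.

fs, n = 1000, 1000                  # 1 s of data -> exact 1 Hz frequency bins
t = np.arange(n) / fs
f1, f2 = 110.0, 30.0                # two drive tones, Hz
drive = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

linear = drive                      # linear response: no mixing
saturating = np.tanh(drive)         # saturating response: mixing products

def amplitude_at(signal, freq_hz):
    """Magnitude of the 1 Hz-wide FFT bin at freq_hz."""
    spectrum = np.abs(np.fft.rfft(signal)) / (n / 2)
    return spectrum[int(freq_hz)]

f_mix = f1 + 2 * f2                 # 170 Hz combinatorial frequency
print(f"linear response at {f_mix:.0f} Hz:     {amplitude_at(linear, f_mix):.4f}")
print(f"saturating response at {f_mix:.0f} Hz: {amplitude_at(saturating, f_mix):.4f}")
```

Because only the saturable nanoparticles respond at the combinatorial frequency, detection at that frequency rejects both the drive tones and any linear background from the sample.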
From a doctor's point of view, magnetic products are clearly better for diagnostic studies as they prevent patients from being exposed to excess radiation. Dr. Petr Nikitin, Head of Research and Head of Laboratory at GPI RAS (graduated from MIPT in 1979): "The magnetic methods developed and the recorders for counting nanomarkers on test strips not only provide these limits and ranges of measurement of concentrations of antigens, but they are also able to effectively control all stages of the process: from the development and optimization of immunoassay protocols to conducting and interpreting results. This, in particular, is achieved by conducting quantitative monitoring of the redistribution of nanomarkers during biochemical reactions along all three-dimensional porous components of test strips, which has not previously been possible using any other method. Furthermore, the iron salts used to synthesize the nanoparticles are comparably more accessible and cheaper than the reagents used to synthesize gold nanoparticles, which are commonly used in threshold tests such as the pregnancy test." The combination of reliability, accessibility, and high accuracy and sensitivity of the new method means that it is likely to make a rapid transition from a laboratory prototype to mass production. The developers have not yet given a specific timeframe, but they emphasize that their test system can be used not only to diagnose diseases, but also for a number of other applications. The biosensor is able to conduct analyses on food products and medicines; it will also be able to be used to conduct environmental monitoring. And it will be able to do all this at the location itself, without any complicated or expensive equipment.


News Article | August 30, 2016
Site: www.scientificamerican.com

Based on breathless news reports from many prominent media outlets that should know better, this week’s biggest non-story in science is the discovery of a possible radio signal from talkative aliens elsewhere in the Milky Way. I’m here to tell you, alas, that anyone hoping for this to be the moment of First Contact with another galactic civilization is very likely to be disappointed. It all started innocently enough, with a carefully worded blog post this past Saturday from the respected science journalist Paul Gilster. Gilster wrote about a message he had received from some SETI (Search for Extraterrestrial Intelligence) researchers who reported a curiously powerful 3-second burst of radio waves  from a star less than 100 light-years away. The researchers, led by Nikolai Bursov of the Russian Academy of Sciences, couldn't rule out the possibility that the signal was artificial, and were intrigued enough that they called for “permanent monitoring” of the star. Gilster’s post ignited a firestorm of sensationalistic and credulous news coverage that is still blazing as I write this, burning through newspapers and websites to astound most everyone who encounters it. Soon, I predict, it will burn out—as these sorts of stories (almost) always do. Here's what happened: in May of 2015, the RATAN-600 radio telescope in Zelenchukskaya, Russia detected the signal from the direction of HD 164595, a star about 94 light-years away in the constellation Hercules. This is a star very much like our Sun albeit slightly older, with an estimated age of 6.3 billion years. It also harbors at least one known planet, an uninhabitable Neptune-sized world, though of course others more habitable could exist in the system. If the signal really is artificial, and really does come from HD 164595, its energy source must be gargantuan. 
According to Seth Shostak, an astronomer at the SETI Institute in California who was not involved with the research, a radio blast sent out in all directions by a hypothetical alien civilization would take hundreds of times more power than that of all the sunlight bathing the Earth, based on how bright the signal appeared in the Russian telescope (that's because we'd only be seeing a tiny part of of the total radio energy). If it were instead a beam focused solely on the Earth, the signal would still require twice the electricity used by the United States in an entire year. Clearly, if this is another galactic civilization, they are far, far more advanced than our own. The trouble is, there is no good reason to think that this signal is due to aliens at all—and there never was, according to Eric Korpela, an astronomer at the University of California in Berkeley. Korpela heads SETI@Home, a citizen-science initiative that processes SETI data using home computers; on the SETI@Home blog, Korpela said that after evaluating the available evidence he was “unimpressed” and that the data was “relatively uninteresting.” “SETI@home has seen millions of potential signals with similar characteristics, but it takes more than that to make a good candidate,” Korpela continued. “Multiple detections are a minimum criterion.” That's not what the Russian telescope found. The putative signal only occurred once out of 39 times the RATAN-600 telescope scanned the star, and the scans themselves were performed in such a way that many potential false positives could not be ruled out. Rather than being a beacon from ET, Korpela said, the signal could just as easily have been due to flares from the target star, the burp of a supermassive black hole in a background galaxy, or even the chance magnification of ordinary, all-natural stellar radio emissions by a passing foreground star or a transient ripple of interstellar plasma. 
What this boils down to is that extraordinary claims require extraordinary evidence—evidence that can be very hard to come by for modern SETI programs. Contrary to Hollywood depictions, SETI efforts tend to have meager budgets that only support infrequent observations of small fractions of the sky in narrow bandwidth ranges. Could the new signal from HD 164595 be a genuine transmission from a cosmic civilization? Well, yes, conceivably it could. But so could the millions of other curious optical blips and radio spikes sitting in SETI archives, each a singular phenomenon that whispered just once into a telescope, never to repeat or return. This has happened before—see, for instance, the tantalizing “Wow! signal” from 1977, and the resulting decades of fruitless efforts see it a second time. Even assuming there was enough telescope time available (there isn’t), every SETI program on Earth would likely go bankrupt trying to follow-up each of the millions of potential candidates if each received the care and attention bestowed upon the  Wow! signal. In such a resource-constrained environment, only the very best and most compelling signals should merit much closer attention—and Korpela isn’t sure the signal from HD 164595 has crossed that threshold. The real problem, Korpela goes on, is that the media frenzy over HD 164595 means that no reputable SETI program can now afford to not use precious telescope time to look for other signals from HD 164595. These projects need money, and they need public attention—ignoring the signal even if it is overwhelmingly likely to prove spurious is not an option. “We’ll be along for the ride,” he wrote. “And we’ll all find nothing.” Indeed, according to Shostak, the SETI Institute’s Allen Telescope Array is already looking at the star, though so far they have yet to see anything peculiar. Other results are now coming in. 
Just this morning the Breakthrough Listen project, a private SETI endeavor funded by the billionaire Yuri Milner, released results of a follow-up on HD 164595 using archival data as well as new observations from the Green Bank Telescope in West Virginia. Breakthrough found—you guessed it—nothing. More curiously, Breakthrough’s statistical analysis of the available data suggests that, if produced by natural astrophysical causes, one would expect that previous surveys would have seen such strong signals elsewhere in the sky. Which means that the RATAN-600 team was “either extremely lucky to detect this source in their observations, or that the transient is due to local interference or other calibration issues.” That is, absent the possibility of aliens, even an astrophysical source for the signal looks suspect—a far more likely explanation for the signal would be the passage of a satellite overhead, or an errant signal from electric currents coursing through wires within the observatory itself, or even a software glitch. Nevertheless, the Breakthrough Listen team intends to periodically revisit HD 164595 in the future to keep looking for a repetition of the signal. Other SETI programs will undoubtedly do the same. And perhaps they should. But let’s not fool ourselves: Even in the lofty, noble quest to break our cosmic solitude, sometimes the tail can wag the dog. Now if you’re interested in a truly epochal discovery relevant to the search for extraterrestrial life, I would refer you to last week’s announcement of a rocky planet found in a habitable orbit around the Sun’s nearest neighboring star, Proxima Centauri—which frustratingly seems to have made less of an impression than this week’s far more questionable news.


News Article | November 18, 2016
Site: www.rdmag.com

An international team of astrophysicists including Russian scientists from the Space Research Institute of the Russian Academy of Sciences (RAS), MIPT, and Pulkovo Observatory of RAS has detected an abrupt decrease of pulsar luminosity following giant outbursts. The phenomenon is associated with the so-called "propeller effect," which was predicted more than 40 years ago. However, this is the first study to reliably observe the transition of the two X-ray pulsars 4U 0115+63 and V 0332+53 to the "propeller regime." The results of the observations, the conclusions reached by the researchers, and the relevant calculations were published in Astronomy & Astrophysics. The two sources studied, 4U 0115+63 and V 0332+53, belong to a rather special class of transient X-ray pulsars. These stars alternately act as weak X-ray sources, undergo giant outbursts, and disappear from sight completely. The transitions of pulsars between different states provide valuable information about their magnetic field and the temperature of the surrounding matter. Such information is indispensable, as the immensely strong magnetic fields and extremely high temperatures make direct measurements impossible in a laboratory on Earth. The name of a pulsar is preceded by a letter designating the first observatory to discover it, which is followed by a numerical code containing the coordinates of the pulsar. The "V" refers to Vela 5B, a US military satellite that was launched to spy on the Soviets. As for the "4U" in the other name, it stands for the fourth Uhuru catalog, compiled by the first observatory in orbit dedicated specifically to X-ray astronomy. Following the discovery of the first pulsar, it was originally known as "LGM-1" (for "little green men"), because it was a source of regular radio pulses, leading scientists to believe that they might have received a signal from intelligent extraterrestrials. An X-ray pulsar is a rapidly spinning neutron star with a strong magnetic field. 
A neutron star can be part of a binary system. In a process that astrophysicists call accretion, the neutron star can channel gas from its normal star companion. The attracted gas spirals toward the neutron star forming an accretion disk, which is disrupted at the magnetosphere radius. During accretion the matter penetrates to a certain extent into the magnetosphere, "freezes into it," and flows along the lines of the magnetic field toward the magnetic poles of the neutron star. Falling toward the poles, the gas is heated to several hundred million degrees, which causes the emission of X-rays. If the magnetic axis of a neutron star is skewed relative to its rotational axis, the X-ray beams it emits rotate in a manner that resembles the way beacons work. For an "onshore" observer, the source appears to be sending signals at regular intervals ranging from fractions of a second to several minutes. A neutron star is one of the possible remnants left behind by a supernova. It can be formed at the end of stellar evolution, if the original star was massive enough to allow gravitation to compress the stellar matter enough to make electrons combine with protons yielding neutrons. The magnetic field of a neutron star can be more than ten orders of magnitude stronger that any magnetic field that could be achieved on Earth. A binary system where the normal star has filled its Roche lobe. In a binary system, an X-ray pulsar is observed when the neutron star is accreting matter from its normal star companion--often a giant or a supergiant characterized by a strong stellar wind (ejection of matter into space). Alternatively, it can be a smaller star like our own Sun that has filled its Roche lobe--the region beyond which it is unable to hold on to the matter attracted by the gravity of the neutron star companion. A NASA video showing the accretion of matter by a pulsar from its companion star. 
The 4U 0115+63 and V 0332+53 pulsars are irregular X-ray sources (transients), owing to the fact that their stellar companions belong to the rather unusual Be star class. The axial rotation of a Be star is so rapid that it occasionally starts "bulging" at the equator, whereby a gas disk is formed around it, filling the Roche lobe. The neutron star starts rapidly accreting the gas from its "donor" companion, causing a sharp increase in X-ray emission called an X-ray outburst. At some point, after the matter in the equatorial bulge starts to deplete, the accretion disk becomes exhausted, and the gas can no longer fall onto the neutron star due to the influence of the magnetic field and the centrifugal force. This gives rise to a phenomenon known as the "propeller effect": the pulsar enters a state where accretion does not occur, and the X-ray source is no longer observed. Astronomers use the term "luminosity" to refer to the total amount of energy emitted by a celestial body per unit time. The red line in the diagram represents the threshold luminosity for the 4U 0115+63 pulsar. Observations of the other source (V 0332+53) produced similar results. The blue lines mark the moments in time, when the distance between the pulsar and the companion was at a minimum. This proximity of the companion star might cause the neutron star to go into overdrive and resume emission (see diagram), provided that sufficient amounts of matter are still available for accretion. The Russian scientists used the X-ray telescope (XRT) based on NASA's Swift space observatory to measure the threshold luminosity that marks the transition of a pulsar to the propeller regime. This parameter depends on the magnetic field and the rotational period of the pulsar. The rotational periods of the sources in this study are known based on the intervals between the pulses that we can register: 3.6 s in the case of 4U 0115+63 and 4.3 s for V 0332+53. 
Knowing both the threshold luminosity and the rotational period, one can calculate the strength of the magnetic field. The research findings are in agreement with the values obtained using other methods. However, the luminosity was only reduced by a factor of 200, as compared to the expected 400 times reduction. The researchers hypothesized that there could be two possible explanations for this discrepancy. Firstly, the neutron star surface could become an additional source of X-rays, as it cools down following an outburst. Secondly, the propeller effect could leave some room for matter transfer between the two stars, as opposed to sealing the neutron star off completely. In other words, an unaccounted for mechanism could be involved enabling accretion to continue to a certain extent. The transition of a pulsar into the propeller mode is challenging to observe, as the low luminosity state cannot be detected easily. For 4U 0115+63 and V 0332+53, this was attempted following the previous outbursts of these sources. However, the instruments available at the time were not sensitive enough to see the pulsars in the "off-mode." This study is the first to demonstrate reliably that these two sources do indeed "black out." Moreover, the researchers showed that knowledge of the luminosity that marks the transition of pulsars into the propeller regime can be used to learn more about the structure and intensity of the magnetic fields around neutron stars. Prof. Dr. Alexander Lutovinov of the Russian Academy of Sciences, Head of Laboratory at the Space Research Institute (IKI RAS) and a professor at MIPT, comments, "Knowledge of the structure of the magnetic fields of neutron stars is of paramount importance for our understanding of their formation and evolution. In this research, we determined the dipole magnetic field component, which is linked to the propeller effect, for two neutron stars. 
We demonstrate that this independently calculated value can be compared to the available results of magnetic field measurements based on the detection of cyclotron lines in the spectra of sources. By doing this, it is possible to estimate the contribution of the other, higher-order components in the field structure."


News Article | February 15, 2017
Site: cerncourier.com

Tom Dombeck, an innovative and versatile physicist and project manager, passed away in Kāneʻohe, Hawaii, on 4 November 2016. His legacy includes many measurements in particle physics, the development of new techniques for the production of ultra-cold neutrons and substantial contributions to the management of several major scientific projects. Tom received a BA in physics from Columbia University in 1967 and a PhD in particle physics from Northwestern University in 1972, and his career saw him hold prominent roles at numerous institutes. He was a research associate at Imperial College London from 1972 to 1974 and visiting scientist at Dubna in the former USSR in 1975. Following six years at the University of Maryland, from 1981 to 1988 Tom held various roles at Los Alamos National Laboratory (LANL) after which he spent a year working in the US Department of Energy in the office of the Superconducting Supercollider (SSC). Afterwards he became a staff physicist and ultimately deputy project manager for operations at the SSC laboratory in Texas, where he led the successful “string test”. In 1994 he moved to a role as project manager for the Sloan Digital Sky Survey at the University of Chicago. Tom was deputy head for the technical division at Fermilab from 1997 to 1999, and project manager for the Next Linear Collider project at Fermilab between 2000 and 2002. From 2003 to 2006 he was project manager for the Pan-STARRS telescope at the University of Hawaii and an affiliated graduate faculty member there until 2016. Tom began his scientific research with bubble chambers and was a key participant in the experiment that observed the first neutrino interaction in a hydrogen filled bubble chamber in 1970 at the ZGS at Argonne National Laboratory. For many years he pursued measurements of the electric dipole moment (EDM) of the neutron and was also involved in the development of ultra-cold neutrons by Doppler shifting at pulsed sources. 
He proposed a new method for a neutron EDM measurement that involved Bragg scattering polarised neutrons from a silicon crystal and led an initial effort at the Missouri University Research Reactor, after which he initiated an experiment using the reactor at the NIST Center for Neutron Research. While at LANL, Tom led a neutrino-oscillation search that involved constructing a new beamline and neutrino source at LAMPF and provided improved limits on muon-neutrino to electron-neutrino oscillations. He carried these fundamental physics interests and abilities to his later work as a highly effective scientific programme manager. Tom was able to see the connections between disparate scientific areas and bring together new ideas and approaches that moved the field forwards. He could inspire people around him with his enthusiasm and kindness, and his wry sense of humour and wicked smile were trademarks that will long be remembered by his friends and colleagues. Tom was a devoted family man and is missed greatly by his wife Bonnie, his two children, Daniel and Heidi, and his four grandchildren. Sidney David Drell, professor emeritus of theoretical physics at SLAC National Accelerator Laboratory, senior fellow at Stanford’s Hoover Institution and a giant in the worlds of both academia and policy, died on 21 December 2016 at his home in Palo Alto, California. He was 90 years old. Drell made immense contributions to his field, including uncovering a process that bears his name and working on national and international security. His legacy as a humanitarian includes his friendship and support of Soviet physicist and dissident Andrei Sakharov, who won the Nobel Peace Prize in 1975 for his opposition of the abuse of power in the Soviet Union. Drell was also known for his welcoming nature and genuine, albeit perhaps unwarranted, humility. Drell’s commitment to arms control spanned more than 50 years. 
He served on numerous panels advising US Congress, the intelligence community and military. He was an original member of JASON, a group of academic scientists created to advise the government on national security and defence issues, and from 1992 to 2001 he was a member of the President’s Foreign Intelligence Advisory Board. He was also the co-founder of the Center for International Security and Cooperation at Stanford, and in 2006 he and former Secretary of State George Shultz began a programme at the Hoover Institution dedicated to developing practical steps towards ridding the world of nuclear weapons. In 1974, Drell met Sakharov at a conference hosted by the Soviet Academy of Sciences and they quickly became friends. When Sakharov was internally exiled to Gorky from 1980 to 1986 following his criticism of the Soviet invasion of Afghanistan, Drell exchanged letters with him and called on Soviet leader Mikhail Gorbachev for his release. He also organised a petition to allow another Soviet physicist and dissident, Nohim Meiman, to emigrate to Israel, and obtained the signatures of 118 members of the US National Academy of Sciences. Having graduated with a bachelor’s degree from Princeton University in 1946, Drell earned a master’s degree in 1947 and a PhD in physics in 1949 from the University of Illinois, Urbana-Champaign. He began at Stanford in 1950 as an instructor in physics, leaving to work as a researcher and assistant professor at the Massachusetts Institute of Technology and then returning to Stanford in 1956 as a professor of physics. He served as deputy director of SLAC from 1969 until his retirement from the lab in 1998. Drell’s research was in the fields of quantum electrodynamics and quantum chromodynamics. While at SLAC, he and research associate Tung-Mow Yan formulated the famous Drell–Yan Process, which has become an invaluable tool in particle physics. His theoretical work was critical in setting SLAC on the course that it took. 
As head of the SLAC theory group, Drell brought in a host of younger theoretical physicists who began creating the current picture of the structure of matter. He played an important role in developing the justification for experiments and turning the results into what became the foundation of the Standard Model of particle physics. For his research and lifetime of service to his country, Drell received many prestigious awards, including: the National Medal of Science; the Enrico Fermi Award; a fellowship from the MacArthur Foundation; the Heinz Award for contributions in public policy; the Rumford Medal from the American Academy of Arts and Sciences; and the National Intelligence Distinguished Service Medal. Drell was one of 10 scientists honoured as the founders of satellite reconnaissance as a space discipline by the US National Reconnaissance Office. He was elected to the National Academy of Sciences, the American Academy of Arts and Sciences and the American Philosophical Society, and was president of the American Physical Society in 1986. Drell was also an accomplished violinist who played chamber music throughout his life. He is survived by his wife, Harriet, and his children, Daniel, Virginia, Persis and Joanna. Persis Drell, a former director of SLAC who is also a physicist at Stanford and dean of the School of Engineering, will be the university’s next provost. • Based, with permission, on the obituary published on the Stanford University website on 22 December 2016. Mambillikalathil Govind Kumar Menon, a pioneer in particle physics and a distinguished statesman of science, passed away peacefully on 22 November at his home in New Delhi, India. He graduated with a bachelor of science from Jaswant College, Jodhpur, in 1946, and inspired by Chandrasekhara Venkata Raman, studied under the tutelage of spectroscopist Nanasaheb R Tawde before joining Cecil Powell’s group at the University of Bristol, UK, in 1949. 
Menon’s first important contribution was to establish the bosonic character of the pion through a study of fragments emerging from π-capture by light nuclei. He then focused his attention on the emerging field of K-meson physics. Along with his colleagues at Bristol, notably Peter Fowler, Cecil Powell and Cormac O’Ceallaigh, Menon discovered K+ → π+ π0 and K+ → π+ π– π+ events in nuclear emulsion indicating parity non-conservation, (the τ – θ puzzle). He also identified a sizeable collection of events showing the associated production of kaons and hyperons. In 1955 Menon joined the Tata Institute of Fundamental Research (TIFR), where he worked on cosmic-ray research programmes initiated by Homi Bhabha. Following Bhabha’s death in an air crash over Mont Blanc in 1966, the responsibility of the directorship of TIFR fell squarely on his shoulders, along with the wide-ranging initiatives for national development that Bhabha had started. Notwithstanding these additional demands on his time, his focus on particle physics never wavered. He continued with his research, establishing a collaboration with Arnold W Wolfendale at the University of Durham, UK, and Saburo Miyake of Osaka City University, Japan, for the study of particle physics with detectors deployed deep underground; he detected events induced by cosmic-ray neutrino interactions; and he also launched a dedicated effort to test the early predictions of violation of baryon-number conservation leading to proton decay. During his Bristol years, Menon established a close friendship with William O Lock, who had moved to CERN in 1959. This facilitated collaboration between TIFR and CERN, leading to the development of bubble-chamber techniques to study mesons produced in proton–antiproton collisions. These initial studies eventually led to highly successful collaborations between Indian researchers and the L3 experiment at LEP, and the CMS, ALICE and ATLAS experiments at the LHC. 
Menon won several awards including the Cecil F Powell and C V Raman medals, and was elected to the three scientific academies in India. He was elected as a fellow of the Royal Society in 1970, and subsequently to the Pontifical Academy of Sciences, American Academy of Arts and Sciences, the Russian Academy of Sciences and as an honorary fellow of the Institute of Physics and the Institution of Electrical & Electronics Engineers. He also served two terms as president of the International Council of Scientific Unions, and stimulated its participation in policy issues, including climate change. Menon held a firm conviction that science can bring about technological development and societal progress, which motivated him to work with Abdus Salam in founding the Third World Academy of Sciences. He held several high positions in the Indian government, and thus contributed to the growth of science and technology in India. Alongside his scientific achievements, M G K Menon was also very close to his wife Indumati and their two children Preeti and Anant Kumar. Our warmest thoughts go out to them and to innumerable others whose lives he touched in so many important ways. Helmut Oeschler, an active member of the ALICE collaboration, passed away from heart failure on 3 January while working at his desk. Born in Southern Germany, he received his PhD from the University of Heidelberg in 1972 and held postdoc positions at the Niels Bohr Institute in Copenhagen, and in Strasbourg, Saclay and Orsay in France. From 1981 he was at the Institute for Nuclear Physics of TU Darmstadt. He held a Doctorate Honoris Causa from Dubna University, Russia, and in 2006 he received the Gay-Lussac-Humboldt prize. Oeschler’s physics interests concerned the dynamics of nuclear reactions over a broad energy range, from the Coulomb barrier to ultra-relativistic collisions. 
He was a driving force for building the kaon spectrometer at the GSI in Darmstadt, which made it possible to measure strange particles in collisions of heavy nuclei. From the late 1990s he was actively involved in addressing new aspects of equilibration in relativistic nuclear reactions. Oeschler became a member of the ALICE collaboration at CERN in 2000 and made important contributions to the construction of the experiment. Together with his students, he was involved in developing track reconstruction software for measuring the production of charged particles in lead–lead collisions at the LHC. He also led the analysis efforts for the measurements of identified charged hadrons in the LHC’s first proton–proton collisions. From 2010 to 2014 he led the ALICE editorial board, overseeing the publication of key results relating to quark-gluon matter at the highest energy densities. His deep involvement in the data analysis and interpretation continued unabated and he made important contributions to several research topics. Advising and working in close collaboration with students was a much loved component of Helmut’s activity and was highly appreciated among the ALICE collaboration. Helmut Oeschler was a frequent visitor of South Africa and served there on numerous international advisory committees. He was instrumental in helping the South African community develop the physics of heavy-ion collisions and collaboration with CERN. With Helmut Oeschler we have lost an internationally renowned scientist and particular friend and colleague. His scientific contributions, especially on the production of strange particles in high-energy collisions, are important achievements.


News Article | February 16, 2017
Site: www.rdmag.com

Scientists from MIPT, the University of Oxford, and the Lebedev Physical Institute of the Russian Academy of Sciences estimated the number of stars disrupted by solitary supermassive black holes in galactic centers formed due to mergers of galaxies containing supermassive black holes. The astrophysicists found out whether gravitational effects arising as two black holes draw closer to one another can explain why we observe fewer stars being captured by black holes than basic theoretical models predict. In their study published in The Astrophysical Journal, the researchers looked into the interplay of various dynamic mechanisms affecting the number of stars in a galaxy that are captured per unit time (tidal disruption rate). An advanced theoretical model yielded results that are even more inconsistent with observations, leading the team to hypothesize that the disruption of stars in galactic nuclei may occur without our knowledge. Tidal disruption events, or TDEs, are the only available source of information from inactive galactic nuclei. There is at least one supermassive black hole in the center of most galaxies. Surrounded by dense central star clusters, black holes occupy regions known as galactic nuclei. As their name suggests, black holes do not emit any light. However, when matter falls onto the central massive object, it gets heated to extreme temperatures and can be observed with a telescope. Active galaxies have gas clouds that feed the black hole thus making it visible. However, most of the galaxies--approximately 90 percent of them--remain "silent" because there are no gas clouds in them and so there is no matter for the black hole to feed on, except for stars that occasionally stray too close to it. When this happens, the star is pulled apart by tidal forces, experiencing what is known as spaghettification, and astronomers detect a tidal disruption event (TDE). So far, around 50 flares of radiation linked to TDEs have been observed. 
The average rate of stellar disruption is reckoned to be one star per 10,000 to 100,000 years per galaxy. Based on these data, scientists are trying to develop a reliable model of what goes on in inactive galactic nuclei. The simplest theoretical model involves a galaxy whose nucleus is spherical in shape and has a supermassive black hole at its center. The black hole is orbited by stars that change the direction of their motion as they pass by one another, the way billiard balls bounce off one another when they collide on the table. However, whereas a billiard ball needs to be moving straight toward the hole to fall into it, a star has more options: it is enough for its velocity vector to lie within the so-called loss cone for the star eventually to be captured and disrupted by the black hole's gravity. According to this very simple model, an average of one star per galaxy should be captured every 1,000 to 10,000 years, i.e., more frequently than observed. Although the model can be improved by taking a number of other factors into account (e.g., the difference in the mass of stars), this would only further increase the predicted tidal disruption rates. At present, there is only one mechanism discussed in published sources that could be responsible for the fact that fewer stars are captured than expected. Curiously, it requires that most of the low-angular-momentum stars vanish, so to speak. But let us first examine an analogous case involving gas diffusion. Suppose there are gas molecules in random motion contained inside a vessel whose walls can absorb the molecules. Now imagine the molecules closest to the walls have been removed. The obvious consequence would be fewer molecules absorbed per unit time, since the remaining molecules have yet to travel a certain distance before they can come in contact with a wall. Similarly, if stars are removed from the center of the galaxy, the stellar disruption rate will fall.
Naturally, the stars cannot simply vanish into thin air; but if the galaxy hosts a binary black hole, then individual stars can be ejected from the galaxy by means of a so-called gravitational slingshot, a maneuver also known as a gravity assist when man-made spacecraft are involved. The law of conservation of energy implies that when a star is accelerated (i.e., receives additional kinetic energy), the orbital energy of the black-hole binary must decrease. As a result, the two black holes draw closer to one another and begin to merge. Eventually, when the merger is almost complete, some of the energy is radiated outward in the form of gravitational waves, as demonstrated by the recent sensational detection of gravitational waves. Although a galaxy merger can be accompanied by a decrease in the rate of star disruption, the opposite effect has also been observed. It has to do with the fact that any galactic nucleus which is a product of a merger is slightly nonspherical in shape. In a nonspherical nucleus, stars are more thoroughly intermixed; hence, there are more stars whose orbits lie close to the black hole. This means that more stars are available to be captured and the TDE rate goes up, in spite of the slingshot effect. To find out how the interplay of these two opposing factors impacts the rate of stellar disruption, Kirill Lezhnin and Eugene Vasiliev--both MIPT graduates--performed the necessary calculations and investigated the influence that black hole mass, nuclear star cluster geometry, and initial conditions have on disruption rates. It turned out that the effect of the removal of stars from the center of the galaxy by means of the gravitational slingshot was negligible in all cases except for the spherical-galaxy-in-a-vacuum scenario. It should be noted, however, that the shape of a galaxy formed in a merger is never a perfect sphere.
As far as the results of the calculations are concerned, the bottom line is that an average of one star per 10,000 years per galaxy should be disrupted. And while this number is in good agreement with prior theoretical predictions, it also raises the question: why are fewer TDEs observed than theoretical models would have us expect? Kirill Lezhnin, one of the authors of the study and a researcher at MIPT's Laboratory of Astrophysics and Physics of Nonlinear Processes, explains the significance of the research findings: "We showed that the observed low disruption rates cannot be accounted for by the slingshot effect. Therefore, another mechanism needs to be found which lies outside the realm of stellar dynamics studies. Alternatively, the TDE rates we arrived at could in fact be accurate. We then need to find an explanation as to why they are not observed."


News Article | December 19, 2016
Site: www.eurekalert.org

Physicists from MIPT and the Joint Institute for High Temperatures of the Russian Academy of Sciences described the mobility of line defects, or dislocations, in uranium dioxide. This will enable future predictions of nuclear fuel behavior under operating conditions. The research findings were published in the International Journal of Plasticity. Nuclear fuel has immense potential, as it is one of the most energy-dense resources available: a single uranium dioxide fuel pellet weighing no more than a few grams releases the same amount of energy within the reactor core that is produced by burning several hundred kilograms of anthracite coal or oil. When a nuclear reactor is in operation, the fuel in the pellets undergoes extremely complex transformations caused by both temperature and radiation. Because the underlying mechanisms of these transformations are not yet fully understood, we are still unable to realize the complete potential of nuclear fuel and reduce the risk of accidents to a minimum. The mechanical properties of fuel pellets, which play an important part in nuclear engineering, are determined by the motion of and interaction between dislocations. Dislocation mobility in uranium dioxide at high temperatures and under stress had never been studied in detail until the recent research into dislocation dynamics carried out by Artem Lunev, Alexey Kuksin, and Sergey Starikov. In their paper, the scientists present a simulation of dislocation behavior in uranium dioxide, which is one of the most widespread compounds used as nuclear fuel in power plants across the globe. To be used as nuclear fuel, uranium dioxide is formed into ceramic pellets that are sintered at a high temperature. This material has a very high melting point, is resistant to radiation-induced growth, and does not experience phase transitions within a broad temperature range.
Theoretically, a solid body has a regular, ordered structure (a crystalline structure), with a designated position for each atom. In reality, perfect crystals do not exist, because some atoms or groups of atoms are always out of place, altering the ideal arrangement. In other words, there are defects (imperfections) in an actual crystal. They come in several types, viz., point defects, line defects (dislocations), planar defects, and bulk defects. Defects can move within the crystal, and the nature of their motion depends on external factors. Dislocation dynamics are known to determine fuel properties relevant to nuclear engineering (plasticity, fission-fragment diffusion). In their study, the scientists from MIPT and the Joint Institute for High Temperatures used computational methods to develop a model of an isolated dislocation in a perfect uranium dioxide crystal. They calculated the varying dislocation velocity as a function of temperature and the external forces affecting the crystal. The researchers analyzed simulation results within the framework of statistical physics and obtained a model that describes the behavior of dislocations in a broad temperature range under shear stress of various magnitudes. This model enables the calculation of dislocation velocity based on the known temperature and stress parameters. The model proposed by the Russian scientists could soon be used to simulate more complex systems and study the macroscopic processes occurring in fuel pellets under operating conditions. "This is a major advance toward being able to describe processes as complex as nuclear fuel swelling and embrittlement during operation by means of computer simulations alone," says Sergey Starikov, a coauthor of the study, an associate professor at MIPT, and a senior researcher at the Joint Institute for High Temperatures.
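The paper's actual parametrization is not reproduced here, but the general shape of such a velocity law can be illustrated with a generic thermally activated glide model of the kind commonly fitted to atomistic simulations. Everything in the sketch below (the functional form and all parameter values) is an illustrative assumption, not the study's fitted model:

```python
import math

# Generic thermally activated dislocation-glide law (illustrative sketch;
# the functional form and parameter values are assumptions, not the
# model fitted in the study):
#   v(tau, T) = v0 * exp(-Q / kT) * sinh(tau * V / kT)
# where tau is the resolved shear stress and T the temperature.
K_B = 1.380649e-23   # Boltzmann constant, J/K

def dislocation_velocity(tau, T, v0=1.0e3, Q=1.0e-19, V=1.0e-28):
    """Dislocation velocity in m/s for shear stress tau (Pa) and temperature T (K).

    v0 -- limiting (attempt) velocity, m/s   (assumed value)
    Q  -- activation energy, J (~0.6 eV)     (assumed value)
    V  -- activation volume, m^3             (assumed value)
    """
    kT = K_B * T
    return v0 * math.exp(-Q / kT) * math.sinh(tau * V / kT)

# Velocity rises steeply with both temperature and applied stress:
for T in (1000.0, 1500.0, 2000.0):
    v = dislocation_velocity(tau=50e6, T=T)
    print(f"T = {T:6.0f} K  ->  v = {v:.3e} m/s")
```

A model of this general type, once its parameters are calibrated against simulation data, is what allows dislocation velocity to be computed from temperature and stress alone, as described above.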
Computer modeling enables scientists to trace individual fuel atoms and calculate their velocities and the forces affecting them, along with other parameters. This allows systems of various complex configurations to be simulated and studied. Computer modeling is widely used in situations where performing an experiment is rather problematic, and research into nuclear fuel behavior is precisely one of those areas. Such large-scale calculations rely on modern supercomputers, as massive computing power is required to find the forces affecting individual atoms at each moment in time.


News Article | February 15, 2017
Site: cerncourier.com

Shinichiro Michizono from KEK has been appointed as associate director for the International Linear Collider (ILC), taking over from Mike Harrison, while Jim Brau of the University of Oregon has replaced Hitoshi Yamamoto as associate director for physics and detectors. The Linear Collider collaboration, which encompasses the ILC and CLIC, has recently been granted a further three-year mandate by the International Committee for Future Accelerators. The council of the European Southern Observatory (ESO), which builds and operates some of the world’s most powerful ground-based telescopes, has appointed Xavier Barcons as its next director general. The 57-year-old astronomer will take up his new position on 1 September 2017, when the current director general Tim de Zeeuw completes his mandate. Barcons began his career as a physicist, completing a PhD on hot plasmas. In October 2016, Jianwei Qiu joined the Thomas Jefferson National Accelerator Facility as its new associate director for theoretical and computational physics. Qiu, whose research focus is QCD and its applications in both high-energy particle and nuclear physics, will oversee a broad programme of theoretical research in support of the physics studied with the Continuous Electron Beam Accelerator Facility (CEBAF). Rende Steerenberg has been appointed head of operations in CERN’s Beams Department, effective from 1 January 2017. He takes over from Mike Lamont, who has been in the role since 2009 and oversaw operations from the LHC’s rollercoaster start-up to its latest record performance. Lamont remains deputy group leader of the Beams Department. Former CERN Director-General Rolf-Dieter Heuer has been appointed Chevalier de la Légion d’Honneur (Knight of the Legion of Honour), one of the highest recognitions of achievement in France.
Heuer, who is currently president of the German Physical Society (DPG) and president-elect of the SESAME Council, among other roles, was presented with the medal on 22 November at the residence of the French permanent representative in Geneva. The 2017 Breakthrough Prize in Fundamental Physics has been awarded to Joseph Polchinski, University of California at Santa Barbara, and Andrew Strominger and Cumrun Vafa of Harvard University. The three winners, who received the $3 million award at a glitzy ceremony in San Francisco on 4 December, have made important contributions to fundamental physics including quantum gravity and string theory. Polchinski was recognised in particular for his discovery of D-branes, while the citation for Strominger and Vafa included their derivation of the Bekenstein–Hawking area-entropy relation, which unified the laws of thermodynamics and black-hole dynamics. Recipients of the previously announced Special Prize in Fundamental Physics – Ronald Drever and Kip Thorne of Caltech and Rainer Weiss of MIT, who were recognised in May along with the entire LIGO team for the discovery of gravitational waves – were also present. A further prize, the $100,000 New Horizons in Physics Prize, went to six early-career physicists: Asimina Arvanitaki (Perimeter Institute), Peter Graham (Stanford University) and Surjeet Rajendran (University of California, Berkeley); Simone Giombi (Princeton University) and Xi Yin (Harvard University); and Frans Pretorius (Princeton). This year’s Breakthrough Prize, which was founded in 2012 by Sergey Brin, Anne Wojcicki, Yuri and Julia Milner, Mark Zuckerberg and Priscilla Chan, saw $25 million in prizes awarded for achievements in the life sciences, fundamental physics and mathematics. On 30 November, the Alexander von Humboldt Foundation in Bonn, Germany, granted a Humboldt Research Award to Raju Venugopalan, a senior physicist at Brookhaven National Laboratory and Stony Brook University. 
The €60,000 award recognises Venugopalan’s achievements in theoretical nuclear physics, and comes with the opportunity to collaborate with German researchers at Heidelberg University and elsewhere. US physicist and science policy adviser to the US government, Richard Garwin, was awarded the Presidential Medal of Freedom at a White House ceremony on 22 November. The award is the highest honour that the US government can confer to civilians. Garwin was recognised for his long career in research and invention, which saw him play a leading role in the development of the hydrogen bomb, and for his advice to policy makers. Introducing Garwin, President Obama remarked: “Dick’s not only an architect of the atomic age. Reconnaissance satellites, the MRI, GPS technology, the touchscreen all bear his fingerprints – he even patented a mussel washer for shellfish. Dick has advised nearly every president since Eisenhower, often rather bluntly. Enrico Fermi, also a pretty smart guy, is said to have called Dick the only true genius he ever met.” Fumihiko Suekane of Tohoku University, Japan, has been awarded a 2016 Blaise Pascal Chair to further his research into neutrinos. Established in 1996, and named after the 17th-century French polymath Blaise Pascal, the €200,000 grant allows researchers from abroad to work on a scientific project in an institution in the Ile-de-France region. Suekane will spend a year working at the Astroparticle and Cosmology Laboratory in Paris, where he will focus on R&D for novel neutrino detectors and measurements of reactor neutrinos. In late 2016, theorists Mikhail Danilov, from the Lebedev Institute in Moscow, Sergio Ferrara from CERN and David Gross from the Kavli Institute for Theoretical Physics and the University of California in Santa Barbara were elected as members of the Russian Academy of Sciences. Established in 1724, the body has more than 2000 members. 
President of the Republic of Poland, Andrzej Duda, visited CERN on 15 November and toured the CERN Control Centre. SLAC Director Chi-Chang Kao signed the guestbook with CERN Director-General Fabiola Gianotti on 23 November. From 28 November to 2 December, more than 200 flavour physicists gathered at the Tata Institute of Fundamental Research in Mumbai for the 9th International Workshop on the Cabibbo–Kobayashi–Maskawa Unitarity Triangle (CKM2016). The workshop focuses on weak transitions of quarks from one flavour to another, as described by the CKM matrix, and on the charge–parity (CP) violation present in these transitions, as visualised by the unitarity triangle (UT). Input from theory, particularly lattice QCD, is vital to fully leverage the power of such measurements. It is an exciting time for flavour physics. The mass scales potentially involved in such weak processes are much higher than those that can be directly probed at the LHC, due to the presence of quantum loops that mediate many of the processes of interest, such as B0–B̄0 mixing. In contrast to the absence of new particles so far at the energy frontier, LHCb and the B factories already have significant hints of deviations between measurements and Standard Model (SM) predictions. An example is the persistent discrepancy in the measured differential distributions of the decay products of the rare flavour-changing neutral-current process B0 → K*0 μ+ μ–, first reported by the LHCb collaboration in 2015. A highlight of CKM2016 was the presentation of first results of the same distributions from the Belle experiment in Japan, which also included the related but previously unmeasured process B0 → K*0 e+ e–. The Belle results are more compatible with those of LHCb than with the SM, further supporting the idea that new physics may be manifesting itself, via interference effects, in these observables.
Progress on measuring CP violation in B decays was also reported, with LHCb presenting the first evidence for time-dependent CP violation in the decay of B0 mesons in two separate final states, D+ K– and K+ K–. The latter involves loop diagrams allowing a new-physics-sensitive determination of a UT angle (γ) that can be compared to a tree-level SM determination in the decay B– → D0 K–. For the first time, LHCb also presented results with data from LHC Run 2, which is ultimately expected to increase the size of the LHCb data samples by approximately a factor of four. Longer term, the Belle II experiment based at the SuperKEKB collider recently enjoyed its first beam, and will begin its full physics programme in 2018. By 2024, Belle II should have collected 50 times more data than Belle, allowing unprecedented tests of rare B-meson decays and precision CP-violation measurements. On the same timescale, the LHCb upgrade will also be in full swing, with the goal of increasing the data size by at least a factor of 10 compared to Run 1 and Run 2. Plans for a second LHCb upgrade presented at the meeting would allow LHCb, given the long-term future of the LHC, to run at much higher instantaneous luminosities to yield an enormous data set by 2035. With more data, the puzzles of flavour physics will be resolved thanks to the ongoing programme of LHCb, imminent results from rare-kaon-decay experiments (KOTO and NA62), and the Belle II/LHCb upgrade projects. No doubt there will be more revealing results by the time of the next CKM workshop, to be held in Heidelberg in September 2018. While there are many conferences focusing on physics at the high-energy frontier, the triennial PSI workshop at the Paul Scherrer Institute (PSI) in Switzerland concerns searches for new phenomena at non-collider experiments. These are complementary to direct searches at the LHC and often cover a parameter space that is beyond the reach of the LHC or even future colliders.
The fourth workshop in this series, PSI2016, took place from 16–21 October and attracted more than 170 physicists. Theoretical overviews covered: precision QED calculations; beyond-the-Standard-Model implications of electric-dipole-moment (EDM) searches; axions and other light exotic particles; flavour symmetries; the muon g-2 problem; NLO calculations of the rare muon decay μ → eeeνν; and possible models to explain the exciting flavour anomalies presently seen in B decays. On the experimental side, several new results were presented. Fundamental neutron physics featured prominently, ranging from cold-neutron-beam experiments to those with stored ultracold neutrons at facilities such as ILL, PSI, LANL, TRIUMF and Mainz. Key experiments are measurements of the neutron lifetime, searches for a permanent EDM, measurements of beta-decay correlations and searches for exotic interactions. The future European Spallation Source in Sweden will also allow a new and much improved search for neutron–antineutron oscillations. Atomic physics and related methods offer unprecedented sensitivity to fundamental-physics aspects ranging from QED tests, parity violation in weak interactions, EDM and exotic physics to dark-matter (DM) and dark-energy searches. With the absence of signals from direct DM searches so far, light and ultralight DM is a focus of several upcoming experiments. Atomic physics also comprises precision spectroscopy of exotic atoms, and several highlight talks included the ongoing efforts at CERN’s Antiproton Decelerator with antihydrogen and with light muonic atoms at J-PARC and at PSI. For antiprotons and nuclei, impressive results from recent Penning-trap mass and g-factor measurements were presented with impacts on CPT tests, bound-state QED tests and more. 
Major international efforts devoted to muons and their lepton-flavour-violating decays are under way at PSI (μ → eγ, μ → eee) and at FNAL and J-PARC (μ → e conversion), and the upcoming muon g-2 experiments at FNAL and J-PARC have reported impressive progress. Last but not least, rare kaon decays (at CERN and J-PARC), new long-baseline neutrino oscillation results, developments towards direct neutrino-mass measurements, and CP and CPT tests with B mesons were reported. The field of low-energy precision physics has grown fast over the past few years, and participants plan to meet again at PSI in 2019. The fields of nanomaterials and nanotechnology are quickly evolving, with discoveries frequently reported across a wide range of applications including nanoelectronics, sensor technologies, drug delivery and robotics, in addition to the energy and healthcare sectors. At an academia–industry event on 20–21 October at GSI in Darmstadt, Germany, co-organised by the technology-transfer network HEPTech, delegates explored novel connections between nanotechnology and high-energy physics (HEP). The forum included an overview of the recent experiments at DESY’s hard X-ray source PETRA III, which allows the investigation of physical and chemical processes in situ and under working conditions and serves a large user community in many fields including nanotechnology. Thermal-scanning probe lithography, an increasingly reliable method for rapid and low-cost prototyping of 2D and quasi-3D structures, was also discussed. Much attention was paid to the production and application of nanostructures, where the achievements of the Ion Beam Center at Helmholtz-Zentrum Dresden-Rossendorf in surface nanostructuring and nanopatterning were introduced. UK firm Hardide Coatings Ltd presented its advanced surface-coating technology, the core of which are nano-structured tungsten-carbide-based coatings that have promising applications in HEP and vacuum engineering.
Industry also presented ion-track technology, which is being used to synthesise 3D interconnected nanowire networks in micro-batteries or gas sensors, among other applications. Neutron-research infrastructures and large-scale synchrotrons are emerging as highly suitable platforms for the advanced characterisation of micro- and nano-electronic devices, and the audience heard the latest developments from the IRT Nanoelec Platform for Advanced Characterisation of Grenoble. The meeting addressed how collaboration between academia and industry in the nanotechnology arena can best serve the needs of HEP, with CERN presenting applications in gaseous detectors using the charge-transfer properties of graphene. The technology-transfer office at DESY also shared its experience in developing a marketing strategy for promoting the services of the DESY NanoLab to companies. Both academia and industry representatives left the event with a set of contacts and collaboration arrangements. On 24–25 November, academics and leading companies in the field of superconductivity met in Madrid, Spain, to explore the technical challenges of applying new accelerator technology to medicine. Organised by CIEMAT in collaboration with HEPTech, EUCARD2, CDTI, GSI and the Enterprise Europe Network, the event brought together 120 participants from 19 countries to focus on radioisotope production, particle therapy and gantries. Superconductivity has a range of applications in energy, medicine, fusion and high-energy physics (HEP). The latter are illustrated by CERN’s high-luminosity LHC (HL-LHC), now nearing construction with superconducting magnets made from advanced Nb3Sn technology capable of 12 T fields. The HL-LHC demands greatly advanced superconducting cavities with more efficient and higher-gradient RF systems, plus the development of new devices such as crab cavities that can deflect or rotate single bunches of protons.
On the industry side, new superconducting technology is ready to go into production for medical applications. A dedicated session presented novel developments in cyclotron production, illustrated by the AMIT project of CIEMAT (based on a cyclotron with a compact superconducting design that will be able to produce low-to-moderate rates of dose-on-demand 11C and 18F) and the French industry–academia LOTUS project system, which features a compact 12 MeV superconducting helium-free magnet cyclotron suitable for the production of these isotopes in addition to 68Ga. Antaya Science and Technology, meanwhile, reported on the development of a portable high-field superconducting cyclotron for the production of ammonia-13N in near proximity to the PET cameras. The meeting also heard from MEDICIS, the new facility under construction at CERN that will extend the capabilities of the ISOLDE radioactive ion-beam facility for production of radiopharmaceuticals and develop new accelerator technologies for medical applications (CERN Courier October 2016 p28). Concerning particle therapy, industry presented medical accelerators such as the MEVION S250 – a proton-therapy system based on a gantry-mounted 250 MeV superconducting synchrocyclotron that weighs less than 15 tonnes and generates magnetic fields in excess of 10 T. Global medical-technology company IBA described its two main superconducting cyclotrons for particle therapy: the Cyclone 400 for proton/carbon therapy and the S2C2 dedicated to proton therapy, with a particular emphasis on their superconducting coil systems. IBA also introduced the latest developments concerning ProteusONE – a single-room system that delivers the most clinically advanced form of proton-radiation therapy. 
Researchers from MIT in the US presented a novel compact superconducting synchrocyclotron based on an ironless magnet with a much reduced weight, while the TERA Foundation in Italy is developing superconducting technology for “cyclinacs” – accelerators that combine a cyclotron injector and a linac booster. Finally, the session on gantries covered developments such as a superconducting bending-magnet section for future compact isocentric gantries by researchers at the Paul Scherrer Institute, and a superconducting rotating gantry for carbon radiotherapy designed by the Japanese National Institute of Radiological Sciences. With demand for medical isotopes and advanced cancer therapy rising, we can look forward to rich collaborations between accelerator physics and the medical community in the coming years. The fifth in the series of Higgs Couplings workshops, which began just after the Higgs-boson discovery in 2012 to bring together theorists and experimentalists, was held at SLAC on 9–12 November and drew 148 participants from five continents. Discussions focused on lessons from the current round of LHC analyses that could be applied to future data. Modelling of signal and background is already limiting for some measurements, and new theoretical results and strategies were presented. Other key issues were the use of vector-boson fusion production as a tool, and the power and complementarity of diverse searches for heavy Higgs bosons. Two new themes emerged at the meeting. The first was the possibility of exotic decays of the 125 GeV Higgs boson. These include not only Higgs decays to invisible particles but also decays to lighter Higgs particles, light quarks and leptons (possibly with flavour violation) and new, long-lived particles. A number of searches from ATLAS and CMS reported their first results. The workshop also debated the application of effective field theory as a framework for parametrising precise Higgs measurements. 
The 6th Higgs Couplings meeting will be held in Heidelberg on 6–10 November 2017. We look forward to new ideas for the creative use of the large data samples of Higgs bosons that will become available as the LHC programme continues. The 8th International Conference on Hard and Electromagnetic Probes of High-energy Nuclear Collisions (Hard Probes 2016) was held in Wuhan, China, on 23–27 September. Hard and electromagnetic probes are powerful tools for the study of the novel properties of hot and dense QCD matter created in high-energy nucleus–nucleus collisions, and have provided much important evidence for the formation of quark–gluon plasma (QGP) in heavy-ion collisions at RHIC and the LHC. Hard Probes 2016 attracted close to 300 participants from 28 countries. The main topics discussed were: jet production and modification in QCD matter; high transverse-momentum hadron spectra and correlations; jet-induced medium excitations; jet properties in small systems; heavy flavour hadrons and quarkonia; photons and dileptons; and initial states and related topics. The most recent experimental progress on hard and electromagnetic probes from the ALICE, ATLAS, CMS, LHCb, PHENIX and STAR collaborations, together with many new exciting theoretical and phenomenological developments, were discussed. The next Hard Probes conference will be held in Aix Les Bains, France, in 2018. The International Symposium on EXOtic Nuclei (EXON-2016) took place from 5–9 September in Kazan, Russia, attracting around 170 nuclear experts from 20 countries. The scientific programme focused on recent experiments on the synthesis and study of new super-heavy elements, the discovery of which demonstrates the efficiency of international co-operation. Interesting results were obtained in joint experiments on chemical identification of elements 112 and 114 performed at JINR (Russia), the GSI (Germany) and the Paul Scherrer Institute (Switzerland).
A vivid example of co-operation with US scientists is an experiment on the synthesis of element 117 carried out at the cyclotron of JINR. Recently, the International Union of Pure and Applied Chemistry approved the discovery of the new elements with atomic numbers 113 (“nihonium”), 115 (“moscovium”), 117 (“tennessine”) and 118 (“oganesson”). Five laboratories, which are the co-founders of the symposium, are now creating a new generation of accelerators for the synthesis and study of new exotic nuclei. Projects such as SPIRAL2, RIKEN RI Beam Factory, FAIR, DRIBs, NICA and FRIB will allow us to delve further into the upper limits of the periodic table. The CERN Accelerator School (CAS) and the Wigner Research Centre for Physics jointly organised an introduction-to-accelerator-physics course in Budapest, Hungary, from 2–14 October, attended by more than 120 participants spanning 28 nationalities. This year, CAS will organise a specialised course on beam injection, extraction and transfer (to be held in Erice, Sicily, from 10–19 March) and a second specialised course on vacuum for particle accelerators (near Lund, Sweden, from 6–16 June). The next course on advanced-accelerator physics will be held in the UK in early September, and a Joint International Accelerator School on RF technology will be held in Hayama, Japan, from 16–26 October (www.cern.ch/schools/CAS).


News Article | March 30, 2016
Site: www.nature.com

Space silence The Japan Aerospace Exploration Agency (JAXA) lost contact with its flagship X-ray astronomical satellite, Hitomi — previously known as ASTRO-H — on 26 March. Launched on 17 February, it had been going through initial tests and calibrations. Hitomi’s status remains unknown, but JAXA engineers are working to regain communication. The US Joint Space Operations Center, which tracks space debris, reported five objects near the spacecraft around the time that it went silent, which it characterized as pieces of a “break-up”. On 28 March, unconfirmed reports said that telescopes had seen the satellite tumbling. See go.nature.com/jlkhvg for more. Hydrogen on Ceres The northern polar region of the dwarf planet Ceres contains lots of hydrogen and probably water, as revealed in an image taken by NASA’s Dawn spacecraft and released on 22 March. Dawn scientists compiled this false-colour map using data from the spacecraft’s neutron-counting instrument, which scans the uppermost metre of Ceres’s surface material. Red indicates high neutron counts, and blue shows low counts. Fewer neutrons near the north pole indicate the presence of hydrogen there, probably in the form of water ice. Japan’s whaling Japan’s Institute for Cetacean Research has confirmed that 333 minke whales were killed by the country’s controversial ‘scientific’ whaling initiative in the Antarctic, which started last year. In a 24 March statement, the institute said that 103 males and 230 females — many of which were pregnant — were caught between December last year and March. In 2014, an international court declared that Japan’s whaling programme was not scientific, and the country has struggled to convince the International Whaling Commission to approve a revised programme (see A. S. Brierley and P. J. Clapham Nature 529, 283; 2016; J. Morishita Nature 531, 35; 2016). Philippines satellite The Philippines’s first micro-satellite was successfully launched on 22 March from Cape Canaveral, Florida. 
The craft, Diwata-1 — a collaboration between the University of the Philippines Diliman, the Philippine Department of Science and Technology and Japan’s Tohoku and Hokkaido universities — is part of a resupply mission to the International Space Station, from where it will be placed into orbit. The satellite will beam back images of weather patterns and land and water resources, and represents “a giant leap for Philippine science and technology”, said Jose Cuisia, the country’s US ambassador. Frontier science Microsoft co-founder Paul Allen has pledged US$100 million over 10 years to transformative bioscience projects and investigators. The first grants from the Paul G. Allen Frontiers Group in Seattle, Washington, were announced on 23 March. Four scientists will receive $1.5 million each: Ethan Bier at the University of California, San Diego; James Collins at the Massachusetts Institute of Technology, Cambridge; Jennifer Doudna at the University of California, Berkeley; and Bassem Hassan at the Brain and Spine Institute, Paris. Two universities, Stanford in California and Tufts in Medford, Massachusetts, will each receive $30 million, from the Allen group and partners, over 8 years. Competitions for additional investigators and research centres will be held periodically. Russian funding Concerns have been raised over future support for civilian basic research under a science and technology strategy that the Russian government plans to launch this year. Despite mounting budget pressure, the government’s overall spending on military and civilian science is to remain stable, deputy prime minister Arkady Dvorkovich told the Russian Academy of Sciences last week. But scientists told Nature that they fear that priority research programmes set to be introduced by the end of the year will favour commercial research over fundamental science. Recipients of Russian grants have already lost substantial purchasing power owing to the rapid decline of the rouble. 
Fetal research A US Congress committee is preparing to subpoena 17 universities and research institutions for data on their use of human tissue from aborted fetuses, according to media reports on 24 March. This is the second round of subpoenas from the House Select Investigative Panel on Infant Lives, which was created in October 2015 to investigate allegations that reproductive health-care provider Planned Parenthood was illegally selling fetal tissue to researchers — charges that the non-profit group denies. The committee’s chair, Representative Marsha Blackburn (Republican, Tennessee), is seeking the names of researchers who work with fetal cells and tissue. Canadian science Canada’s government will boost funding for science and technology, finance minister Bill Morneau announced on 22 March. Science-granting agencies will receive an extra Can$76 million (US$58 million) annually from the 2016–17 fiscal year, plus Can$19 million for indirect costs at academic institutions that undertake federally sponsored research. The government also plans to spend up to Can$2 billion over 3 years on new science infrastructure, and Can$800 million over 4 years on a series of “innovation networks and clusters” that aim to foster research and development ties with the private sector. Intel icon dies Andrew Grove, the legendary chairman and chief executive of semiconductor giant Intel, died on 21 March aged 79, the company has announced. Grove was the first engineer to be hired by Intel’s founders in 1968. He later had a crucial role in management as the company, based in Santa Clara, California, drove down the cost of computer chips and boosted their power, both at an exponential rate. Born into a Jewish family in Hungary, Grove survived the Holocaust; in the mid-1950s, he escaped through the Iron Curtain and emigrated to the United States. 
Macchiarini affair The Karolinska Institute announced on 23 March that it has rescinded its contract with controversial surgeon Paolo Macchiarini. Macchiarini, formerly a visiting professor at the institute in Stockholm, had been internationally fêted for his pioneering transplants of artificial windpipes — but allegations of scientific and ethical misconduct began to emerge almost two years ago. The institute’s disciplinary board now says that he “engaged in conduct and research that is incompatible with a position of employment”. Macchiarini says that he rejects the board’s findings. See go.nature.com/qqeiqk for more. Asymmetry pegged The LHCb experiment at CERN’s Large Hadron Collider near Geneva, Switzerland, has improved the accuracy of a crucial measurement of the difference in behaviour between matter and antimatter. At a meeting in La Thuile, Italy, physicist Matthew Kenzie of CERN, Europe’s particle-physics lab, reported on 23 March that one indicator of asymmetry — called γ and measured through the decay of B mesons and their antiparticles — in the behaviour of quarks had been determined with a precision of about 10%, twice that of previous experiments. One of three angles of a triangle, γ encodes the asymmetries in quark behaviour; LHCb physicists hope to measure all three angles with a precision that is better than 1%. Solo observatories Two US radio-astronomy observatories will branch out on their own following a funding crunch, the National Radio Astronomy Observatory (NRAO) in Charlottesville, Virginia, announced on 24 March. The Green Bank Telescope in West Virginia will become the independent Green Bank Observatory, and the Very Long Baseline Array — a set of ten dishes stretching from Hawaii to the US Virgin Islands — will be the Long Baseline Observatory. The changes come as the US National Science Foundation, which funds the NRAO, looks to save money by offloading some of its astronomy facilities. 
Call for drug reform Drug policy needs to be shorn of ideological bias and based on better science, according to the Johns Hopkins–Lancet Commission on Public Health and International Drug Policy. In a 24 March report, the group calls for decriminalization of minor drug offences including use and possession, regulated drug markets and a focus on harm reduction rather than prevention of use (J. Csete et al. Lancet http://doi.org/bdp2; 2016). The commission also says that current global policies are causing huge health problems, and that a more diverse set of funders is needed to provide “non-ideological” science on drug policy and reform. Global investments in renewable energy rose to a record US$286 billion in 2015, more than double the investment in coal and gas-fired power generation, the United Nations Environment Programme (UNEP) announced on 24 March. The world added 134 gigawatts of renewable-energy capacity in 2015 — up 26% from 2014. Most investment went into solar and wind power. For the first time, UNEP reported, investments by developing countries surpassed those of developed countries. 31 March–1 April US President Barack Obama hosts the last of four summits on nuclear security in Washington DC. go.nature.com/4fq3gj 1–2 April Robotics experts gather in Coral Gables, Florida, to wrestle with the legal and policy questions surrounding robots. go.nature.com/sc4fuc 6–7 April The Astroparticle Physics European Consortium holds a meeting in Paris to discuss updating its global-initiatives road map. app2016.in2p3.fr


Lateral photonic integration of oxide-confined leaky vertical-cavity surface-emitting lasers enables their application in data communications and sensing. Vertical-cavity surface-emitting lasers (VCSELs) that operate at 850nm and are based on oxide-confined apertures are widely used in optical interconnects in data centers, supercomputers, wireless backbone networks, and consumer applications.1 As the processor productivity in these applications increases, it is necessary to continuously improve performance and scale transmission speeds accordingly. In recent years, developers have produced a generation of devices capable of transmitting 40Gb/s at moderate current densities,2, 3 and they have recently demonstrated 54Gb/s non-return-to-zero transmission through 2.2km of multimode fiber.4 Now, 108Gb/s per wavelength transmission can be realized over 100–300m of multimode fiber through the use of advanced modulation formats: discrete multi-tone,5 multiCAP,6 and PAM4.7 All of these achievements are made possible through the use of VCSELs operating in a single transverse and longitudinal mode (SM VCSELs). When manufacturing SM VCSELs, developers typically make the oxide aperture in a VCSEL very small (around 2–3μm in diameter). This approach, however, may result in very low optical power, high resistance, and low manufacturing yield. To extend single-mode behavior toward more conventional aperture sizes (5–7μm), several alternative approaches have been proposed, including surface patterning, etching, overgrowth, and ion implantation in combination with photonic crystals.8, 9 These approaches require additional processing steps that must be precisely aligned (oxide aperture and surface pattern). The resulting complexity can reduce the yield and increase the cost of manufacturing. Our approach uses oxide-confined leaky VCSELs, which—through the application of proper epitaxial design—enable the generation of high optical leakage losses for high-order transverse modes. 
Using these devices, we extend the single-mode behavior of the laser toward large oxide aperture diameters. With our approach, we aim to create an additional cavity at a wavelength longer than the VCSEL cavity mode. Upon oxidation, the relative intensity distribution of the optical field between the coupled cavities can be strongly affected in the oxidized section. This breaks the orthogonality between the VCSEL mode and the second cavity mode (at a certain tilt angle), enabling in-plane leakage to occur. High-order modes with field intensity maxima close to the oxide periphery thus have much higher leakage losses.10 We have designed and manufactured oxide-confined leaky VCSELs and observed their leakage process through tilted narrow lobes in the far-field spectrum. The emission comes from the area outside the aperture, and thus does not suffer from diffraction-induced broadening. To model the VCSELs in 3D, we applied finite element analysis based on Maxwell's vector equations in a rotationally symmetric system.11 Figure 1 shows a cross section of the simulated electric field of the fundamental and first excited optical modes of an oxide-confined aluminum gallium arsenide-based leaky VCSEL. A simulated far-field profile of the excited mode can be seen in Figure 2. The simulations show that the leakage effect results in a specific tilted emission over the VCSEL surface at ∼35–37°. Most of the intensity of the leaking light is channeled in the direction parallel to the surface. Figure 1. Radial distribution of the simulated electric field of oxide-confined leaky vertical-cavity surface-emitting laser (VCSEL) optical modes. (a) Fundamental optical mode. (b) First excited mode. An active region (magenta line) placed within the cavity is confined by aluminum gallium arsenide distributed Bragg reflectors. The structure contains oxide apertures (white lines). A semiconductor-air interface is shown as a dotted line in the figure. arb. 
u.: Arbitrary units. Figure 2. Far-field profile simulation of the excited VCSEL mode presented in Figure 1. We manufactured and tested VCSELs according to our design. The far-field measurements of the devices at two current densities can be seen in Figure 3, which shows that at high current densities during multimode operation, narrow lobes arise at ∼35° angles. These lobes are related to the leakage process (see Figure 2). Figure 3. Far-field profiles of a leaky VCSEL operating in fundamental mode (blue, 10kA/cm2) and multi-mode (red, >25kA/cm2). Electroluminescence spectra of the leaky VCSEL at different current densities are shown in Figure 4. We concluded that the VCSEL was predominantly single mode at all the current densities examined, despite the relatively large aperture diameter (5μm). In contrast, the non-leaky VCSEL with thick oxide apertures was heavily multimode, with the excited modes dominant even at small current densities.12 Figure 4. Electroluminescence spectra of an oxide leaky VCSEL with a 5μm aperture. The graph shows dominance of the fundamental mode up to high currents (5.5mA, red). Inset: An optical eye diagram (PRBS7) at 32Gb/s. To summarize, we have shown that it is possible to significantly improve VCSEL spectral quality without involving any additional processing steps. Furthermore, we confirmed the occurrence of in-plane leakage through leakage lobes in the far-field profile of the device. Our findings create opportunities for engineering photonic integrated circuits, for example, by coherent coupling of two or more devices; it may therefore be possible to use the technique for beam steering.13 By operating one VCSEL of a coupled pair under reverse bias, it is possible to realize an on-chip integrated monitor photodiode, thus drastically reducing the cost of packaging (since fewer of the elements require alignment and assembly). 
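The far-field profiles discussed above follow from standard diffraction theory: in the Fraunhofer approximation, the far-field angular spectrum is the Fourier transform of the near-field distribution at the facet, with spatial frequency mapping to emission angle via sin θ = λ·fx. The sketch below is purely illustrative and is not the authors' finite-element model: it assumes a hypothetical Gaussian near field standing in for the fundamental mode of a 5 μm aperture, and estimates the resulting far-field lobe width.

```python
import numpy as np

# Illustrative Fraunhofer far-field sketch (assumption: a Gaussian near
# field models the fundamental mode; parameters loosely match an 850 nm
# VCSEL with a 5 um oxide aperture -- not the article's FEM simulation).
wavelength = 0.85e-6          # emission wavelength (m)
aperture_d = 5e-6             # oxide aperture diameter (m)

n = 4096
x = np.linspace(-50e-6, 50e-6, n)   # near-field coordinate (m)
dx = x[1] - x[0]

# Assumed near field: Gaussian confined by the aperture
near_field = np.exp(-(x / (aperture_d / 2)) ** 2)

# Angular spectrum via FFT; spatial frequency fx maps to sin(theta) = wavelength * fx
fx = np.fft.fftshift(np.fft.fftfreq(n, d=dx))
far_field = np.abs(np.fft.fftshift(np.fft.fft(near_field))) ** 2
far_field /= far_field.max()        # normalize peak intensity to 1

sin_theta = wavelength * fx
valid = np.abs(sin_theta) <= 1      # keep propagating directions only
theta_deg = np.degrees(np.arcsin(sin_theta[valid]))

# Full width at half maximum of the central far-field lobe, in degrees
above = theta_deg[far_field[valid] >= 0.5]
fwhm_deg = above.max() - above.min()
print(f"far-field FWHM ~ {fwhm_deg:.1f} deg")
```

For these assumed parameters the central lobe is several degrees wide, on axis; the tilted leakage lobes at ∼35° reported in the article arise from the coupled-cavity in-plane propagation and would appear as additional off-axis peaks in the measured angular spectrum.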
Our future work will focus on optimization of the leakage effect in order to manufacture high-power and high-speed single-mode VCSELs. This project received funding from the European Union's Horizon 2020 research and innovation program under grant 666866. VI Systems GmbH. Nikolay Ledentsov Jr. received his MSc in physics at the Technical University of Berlin while developing indium gallium arsenide-based LEDs. At VI Systems he is responsible for the design and numerical simulation of optoelectronic devices, and operates an automated testbed for spectral and high-speed characterization. Vitaly Shchukin received a diploma in physics and engineering in the field of semiconductor physics from St. Petersburg State Polytechnical University, St. Petersburg, Russia, and a PhD (1987) and doctor of science (1999) in physics and mathematics from the Ioffe Physical-Technical Institute, Russian Academy of Sciences, St. Petersburg. He is co-author of more than 190 published papers and a monograph, and holds 22 patents. Joerg Kropp holds a doctor of science in the field of atomic physics with optical spectroscopy and laser applications. He has more than 25 years' experience in industry in the field of optical communications through management positions with Siemens and Infineon. Mikel Agustin received a diploma in telecommunications engineering from the Public University of Navarra, Spain, and completed his education at the Institute of Telecommunications, Warsaw University of Technology, Poland. At VI Systems he is responsible for developing energy-efficient ultrafast vertical-cavity surface-emitting lasers and photodetectors. Nikolay N. Ledentsov received a diploma in electrical engineering from the Electrical Engineering Institute in Leningrad (LETI, now Electrotechnical University, St. Petersburg, Russia) in 1982. He obtained his PhD (1987) and doctor of science (1994) in physics and mathematics from the Ioffe Physical-Technical Institute. 
He has been professor of electrical engineering at LETI since 1994 and professor of physics and mathematics at the Ioffe Physical-Technical Institute since 2005. 1. T. R. Fanning, J. Wang, Z.-W. Feng, M. Keever, C. Chu, A. Sridhara, C. Rigo, et al., 28-Gbps 850-nm oxide VCSEL development and manufacturing progress at Avago, Proc. SPIE 9001, p. 900102, 2014. doi:10.1117/12.2039499 3. S. A. Blokhin, J. A. Lott, A. Mutig, G. Fiol, N. N. Ledentsov, M. V. Maximov, A. M. Nadtochiy, V. A. Shchukin, D. Bimberg, 850nm VCSELs operating at bit rates up to 40Gbit/s, Electron. Lett. 45, p. 501-503, 2009. 4. G. Stepniak, A. Lewandowski, J. R. Kropp, N. N. Ledentsov, V. A. Shchukin, N. Ledentsov, G. Schaefer, M. Agustin, J. P. Turkiewicz, 54 Gbit/s OOK transmission using single-mode VCSEL up to 2.2km MMF, Electron. Lett. 52, p. 633-635, 2016. 5. B. Wu, X. Zhou, Y. Ma, J. Luo, K. Zhong, S. Qiu, Z. Feng, et al., Close to 100 Gbps discrete multitone transmission over 100m of multimode fiber using a single transverse mode 850nm VCSEL, Proc. SPIE 9766, p. 97660K, 2016. doi:10.1117/12.2208901 6. R. Puerta, M. Agustin, L. Chorchos, J. Tonski, J.-R. Kropp, N. Ledentsov, V. A. Shchukin, et al., 107.5Gb/s 850nm multi- and single-mode VCSEL transmission over 10 and 100m of multi-mode fiber, OSA Opt. Fiber Commun. Conf. Th5B, p. Th5B.5, 2016. 7. G. Stepniak, L. Chorchos, M. Agustin, J.-R. Kropp, N. N. Ledentsov, V. A. Shchukin, N. N. Ledentsov, J. P. Turkiewicz, Up to 108Gb/s PAM 850nm multi and single mode VCSEL transmission over 100m of multi mode fiber, 2016. Paper accepted at the 42nd Euro. Conf. Opt. Commun. in Düsseldorf, 18-22 September 2016. 8. E. Haglund, A. Haglund, J. Gustavsson, B. Kögel, P. Westbergh, A. Larsson, Reducing the spectral width of high speed oxide confined VCSELs using an integrated mode filter, Proc. SPIE 8276, p. 82760L, 2012. doi:10.1117/12.908424 10. V. Shchukin, N. N. Ledentsov, J. Kropp, G. Steinle, N. Ledentsov, S. Burger, F. 
Schmidt, Single-mode vertical cavity surface emitting laser via oxide-aperture-engineering of leakage of high-order transverse modes, IEEE J. Quantum Electron. 50, p. 990-995, 2014. 11. N. Ledentsov, V. A. Shchukin, N. N. Ledentsov, J.-R. Kropp, S. Burger, F. Schmidt, Direct evidence of the leaky emission in oxide-confined vertical cavity lasers, IEEE J. Quantum Electron. 52, p. 1-7, 2016. 12. N. N. Ledentsov, J. Xu, J. A. Lott, Future Trends in Microelectronics: Frontiers and Innovations, ch. Ultrafast nanophotonic devices for optical interconnects, Wiley, 2013. doi:10.1002/9781118678107.ch11


Grant
Agency: Cordis | Branch: FP7 | Program: CP-CSA-Infra | Phase: INFRA-2008-1.1.1 | Award Amount: 4.00M | Year: 2009

The recently finished FP6 RI Black Sea SCENE project has established a Black Sea Scientific Network of leading environmental and socio-economic research institutes, universities and NGOs from the countries around the Black Sea, and has developed a distributed virtual data and information infrastructure that is populated and maintained by these organisations to improve the identification, access, exchange, quality indication and use of their data and information about the Black Sea. The Black Sea SCENE research infrastructure stimulates scientific cooperation and the exchange of knowledge and expertise, strengthens the regional capacity and performance of marine environmental data and information management, and underpins harmonization with European marine data quality control/assessment procedures and the adoption of international metadata standards and data-management practices, providing improved data and information delivery services for the Black Sea region at a European level. The Up-Grade Black Sea SCENE project aims: a) to extend the existing research infrastructure with 19 marine environmental institutes/organizations from the 6 Black Sea countries; b) to implement the results of the Joint Research Activities of the FP6 RI SeaDataNet project (common communication standards and adapted technologies to ensure the interoperability of the datacenters); c) to network the existing and new Black Sea datacenters active in data collection, and to provide integrated databases of standardized quality on-line; d) to realize and improve on-line access to in-situ and remote-sensing data, metadata and products; and e) to adopt standardized methodologies for data quality checking to ensure the quality, compatibility and coherence of the data issuing from so many sources. The Up-Grade Black Sea SCENE project is undertaken by 51 partners, of which 43 are located in the Black Sea countries.


News Article | November 24, 2016
Site: www.eurekalert.org

A common species of Asian tree frog may actually be two separate species, according to new genetic data collected by an international group of scientists. If the two groups of frogs are confirmed to be different species, assigning their scientific names may require searching historical records of foreign explorers in Japan during the 1800s. Before the frogs are officially recognized as two separate species, researchers will test if individual frogs from the two groups have unique physical or behavioral features and if they can produce healthy offspring. The project began when researchers at European universities expanded their studies on sex determination and population dynamics in amphibians to include Asian species. The species of tree frog that they chose, Hyla japonica, is found throughout Japan, the Korean peninsula, eastern China, and eastern Russia. Collaborators around the world began sending genetic samples from local frog populations to discover their evolutionary relationships. The data revealed evolutionarily distinct groups of frogs in Japan, the Korean peninsula, and eastern Russia. Ancestors of the modern frog populations likely traveled either into or out of Japan by two separate routes: from the North on a chain of islands between Russia and Japan, and from the South along a land bridge on the Philippine Sea Plate between South Korea and Japan. Japanese H. japonica populations may have been isolated into separate East and West groups. Researchers are exploring this possibility in more detail with an ongoing research project led by Ikuo Miura, PhD, an Associate Professor at the Amphibian Research Center, Hiroshima University. The same separation between East and West Japan is known in other species of frogs and skinks. 
Miura explains that the scientific community has no definitive information about exactly what caused the divide between East and West Japan, but suggests the expansion of an ancient basin associated with volcanic activity in central Japan as one possibility. Miura and Yuya Higaki, a fourth-year bachelor's degree student, are currently running genetic analysis on 50 populations of H. japonica from across Japan. They will present their preliminary results on November 26th at the annual conference of the Herpetological Society of Japan. This project is part of Miura's larger research interests in sex determination and its influence on speciation and evolution. If H. japonica is recognized as two separate species, it will be challenging for researchers to decide which species should keep the original name due to the mystery surrounding which population of H. japonica was used for the original species characterization in 1858. The German-British naturalist Albert Gunther named H. japonica after examining a specimen collected years earlier, potentially in 1826, by Philipp Siebold and Heinrich Burger, German botanists and physicians who were among the first Westerners granted official access to Japan. The modern research team visited the British Museum of Natural History to inspect the original specimen, but the location of where Siebold and Burger collected the first H. japonica is recorded only as "Japan." For now, naming the species will remain a historical mystery secondary to the ongoing scientific questions. The current research paper is published in the November 23, 2016 issue of BMC Evolutionary Biology. Authors of the paper are based at the University of Lausanne (Switzerland), Leibniz-Institute of Freshwater Ecology and Inland Fisheries (Germany), Russian Academy of Sciences, Seoul National University, Ewha Woman's University (Republic of Korea), Chinese Academy of Sciences, and Hiroshima University. 
The species Hyla japonica is listed as Least Concern on the International Union for Conservation of Nature (IUCN) Red List. Academic Paper Citation: Dufresnes C, Litvinchuk SN, Borzee A, Jang Y, Li J, Miura I, Perrin N, Stock M. Phylogeography reveals an ancient cryptic radiation in East-Asian tree frogs (Hyla japonica group) and complex relationships between continental and island lineages. BMC Evolutionary Biology, 23 November 2016.


News Article | March 11, 2016
Site: cleantechnica.com

They come from the West Coast, as far south as California, as far north as Alaska, and as far east as the Atlantic coast. Their joint letter refers to “Misrepresentation,” “lack of information,” and “Disregard for science that was not funded by the proponent.” Scientists condemn the flawed review process for Lelu Island, at the mouth of British Columbia’s Skeena River, as “a symbol of what is wrong with environmental decision-making in Canada.” More than 130 scientists signed on to this letter. “This letter is not about being for or against LNG, the letter is about scientific integrity in decision-making,” said Dr. Jonathan Moore, Liber Ero Chair of Coastal Science and Management, Simon Fraser University. One of the other signatories is Otto Langer, former Chief of Habitat Assessment at the Department of Fisheries and Oceans (DFO). These are tough words for a Federal government that promised to put teeth back in the gutted environmental review process. In Prime Minister Justin Trudeau’s defense, this is yet another problem he inherited from the previous administration, and the task of cleaning up this mess seems enormous. That said, this government was aware the environmental review process was broken before it was elected and has not intervened to at least stop the process from moving forward until it is prepared to take action. The Liberal Government appears to be facing a tough decision. So far, it has attempted to work with the provinces. On Lelu Island, as well as the equally controversial proposed Kinder Morgan pipeline expansion and Site C Dam project, continuing to support Premier Clark’s policies in this manner would appear to necessitate betraying the trust of the Canadian people. Here are a few choice excerpts from the public letter that more than 130 scientists sent to Catherine McKenna and Prime Minister Trudeau: “ … The CEAA draft report has not accurately characterized the importance of the project area, the Flora Bank region, for fish. 
The draft CEAA report1 states that the “…marine habitats around Lelu Island are representative of marine ecosystems throughout the north coast of B.C.”. In contrast, five decades of science has repeatedly documented that this habitat is NOT representative of other areas along the north coast or in the greater Skeena River estuary, but rather that it is exceptional nursery habitat for salmon2-6 that support commercial, recreational, and First Nation fisheries from throughout the Skeena River watershed and beyond7. A worse location is unlikely to be found for PNW LNG with regards to potential risks to fish and fisheries….” ” … CEAA’s draft report concluded that the project is not likely to cause adverse effects on fish in the estuarine environment, even when their only evidence for some species was an absence of information. For example, eulachon, a fish of paramount importance to First Nations and a Species of Special Concern8, likely use the Skeena River estuary and project area during their larval, juvenile, and adult life-stages. There has been no systematic study of eulachon in the project area. Yet CEAA concluded that the project posed minimal risks to this fish…” ” … CEAA’s draft report is not a balanced consideration of the best-available science. On the contrary, CEAA relied upon conclusions presented in proponent-funded studies which have not been subjected to independent peer-review and disregarded a large and growing body of relevant independent scientific research, much of it peer-reviewed and published…” ” …The PNW LNG project presents many different potential risks to the Skeena River estuary and its fish, including, but not limited to, destruction of shoreline habitat, acid rain, accidental spills of fuel and other contaminants, dispersal of contaminated sediments, chronic and acute sound, seafloor destruction by dredging the gas pipeline into the ocean floor, and the erosion and food-web disruption from the trestle structure. 
Fisheries and Oceans Canada (DFO) and Natural Resources Canada provided detailed reviews12 on only one risk pathway – habitat erosion – while no such detailed reviews were conducted on other potential impacts or their cumulative effects…” ” … CEAA’s draft report concluded that the project posed moderate risks to marine fish but that these risks could be mitigated. However, the proponent has not fully developed their mitigation plans and the plans that they have outlined are scientifically dubious. For example, the draft assessment states that destroyed salmon habitat will be mitigated; the “proponent identified 90 000 m2 of lower productivity habitats within five potential offsetting sites that could be modified to increase the productivity of fisheries”, when in fact, the proponent did not present data on productivity of Skeena Estuary habitats for fish at any point in the CEAA process. Without understanding relationships between fish and habitat, the proposed mitigation could actually cause additional damage to fishes of the Skeena River estuary…” British Columbia Institute of Technology 1. Marvin Rosenau, Ph.D., Professor, British Columbia Institute of Technology. 2. Eric M. Anderson, Ph.D., Faculty, British Columbia Institute of Technology. British Columbia Ministry of Environment 1. R. S. Hooton, M.Sc., Former Senior Fisheries Management Authority for British Columbia Ministry of Environment, Skeena Region. California Academy of Sciences 1. John E. McCosker, Ph.D., Chair of Aquatic Biology, Emeritus, California Academy of Sciences. Department of Fisheries and Oceans Canada 1. Otto E. Langer, M.Sc., R.P.Bio., Fisheries Biologist, Former Chief of Habitat Assessment, Department of Fisheries and Oceans Canada Memorial University of Newfoundland 1. Ian A. Fleming, Ph.D., Professor, Memorial University of Newfoundland. 2. Brett Favaro, Ph.D., Liber Ero conservation fellow, Memorial University of Newfoundland. Norwegian Institute for Nature Research 1. 
Rachel Malison, Ph.D., Marie Curie Fellow and Research Ecologist, The Norwegian Institute for Nature Research. Russian Academy of Sciences 1. Alexander I. Vedenev, Ph.D., Head of Ocean Noise Laboratory, Russian Academy of Sciences. 2. Victor Afanasiev, Ph.D., Russian Academy of Sciences. Sakhalin Research Institute of Fisheries and Oceanography 1. Alexander Shubin, M.Sc., Fisheries Biologist, Sakhalin Research Institute of Fisheries and Oceanography. Simon Fraser University, BC 1. Jonathan W. Moore, Ph.D., Liber Ero Chair of Coastal Science and Management, Associate Professor, Simon Fraser University. 2. Randall M. Peterman, Ph.D., Professor Emeritus and Former Canada Research Chair in Fisheries Risk Assessment and Management, Simon Fraser University. 3. John D. Reynolds, Ph.D., Tom Buell BC Leadership Chair in Salmon Conservation, Professor, Simon Fraser University. 4. Richard D. Routledge, Ph.D., Professor, Simon Fraser University. 5. Evelyn Pinkerton, Ph.D., School of Resource and Environmental Management, Professor, Simon Fraser University. 6. Dana Lepofsky, Ph.D., Professor, Simon Fraser University. 7. Nicholas Dulvy, Ph.D., Canada Research Chair in Marine Biodiversity and Conservation, Professor, Simon Fraser University. 8. Ken Lertzman, Ph.D., Professor, Simon Fraser University. 9. Isabelle M. Côté, Ph.D., Professor, Simon Fraser University. 10. Brendan Connors, Ph.D., Senior Systems Ecologist, ESSA Technologies Ltd., Adjunct Professor, Simon Fraser University. 11. Lawrence Dill, Ph.D., Professor Emeritus, Simon Fraser University. 12. Patricia Gallaugher, Ph.D., Adjunct Professor, Simon Fraser University. 13. Anne Salomon, Ph.D., Associate Professor, Simon Fraser University. 14. Arne Mooers, Ph.D., Professor, Simon Fraser University. 15. Lynne M. Quarmby, Ph.D., Professor, Simon Fraser University. 16. Wendy J. Palen, Ph.D., Associate Professor, Simon Fraser University. University of Alaska 1. 
Peter Westley, Ph.D., Assistant Professor of Fisheries, University of Alaska Fairbanks. 2. Anne Beaudreau, Ph.D., Assistant Professor of Fisheries, University of Alaska Fairbanks. 3. Megan V. McPhee, Ph.D., Assistant Professor, University of Alaska Fairbanks. University of Alberta 1. David.W. Schindler, Ph.D., Killam Memorial Professor of Ecology Emeritus, University of Alberta. 2. Suzanne Bayley, Ph.D., Emeritus Professor, University of Alberta. University of British Columbia 1. John G. Stockner, Ph.D., Emeritus Senior Scientist DFO, West Vancouver Laboratory, Adjuct Professor, University of British Columbia. 2. Kai M.A. Chan, Ph.D., Canada Research Chair in Biodiversity and Ecosystem Services, Associate Professor, University of British Columbia 3. Hadi Dowlatabadi, Ph.D., Canada Research Chair in Applied Mathematics and Integrated Assessment of Global Change, Professor, University of British Columbia 4. Sarah P. Otto, Ph.D., Professor and Director, Biodiversity Research Centre, University of British Columbia. 5. Michael Doebeli, Ph.D., Professor, University of British Columbia. 6. Charles J. Krebs, Ph.D., Professor, University of British Columbia. 7. Amanda Vincent, Ph.D., Professor, University of British Columbia. 8. Michael Healey, Ph.D., Professor Emeritus, University of British Columbia. University of California (various campuses) 1. Mary E. Power, Ph.D., Professor, University of California, Berkeley 2. Peter B. Moyle, Ph.D., Professor, University of California. 3. Heather Tallis, Ph.D., Chief Scientist, The Nature Conservancy, Adjunct Professor, University of California, Santa Cruz. 4. James A. Estes, Ph.D., Professor, University of California. 5. Eric P. Palkovacs, Ph.D., Assistant Professor, University of California-Santa Cruz. 6. Justin D. Yeakel, Ph.D., Assistant Professor, University of California. 7. John L. Largier, Ph.D., Professor, University of California Davis. University of Montana 1. Jack A. 
Stanford, Ph.D., Professor of Ecology, University of Montana. 2. Andrew Whiteley, Ph.D., Assistant Professor, University of Montana. 3. F. Richard Hauer, Ph.D., Professor and Director, Center for Integrated Research on the Environment, University of Montana. University of New Brunswick 1. Richard A. Cunjak, Ph.D., Professor, University of New Brunswick. University of Ontario Institute of Technology 1. Douglas A. Holdway, Ph.D., Canada Research Chair in Aquatic Toxicology, Professor, University of Ontario Institute of Technology. University of Ottawa 1. Jeremy Kerr, Ph.D., University Research Chair in Macroecology and Conservation, Professor, University of Ottawa University of Toronto 1. Martin Krkosek, Ph.D., Assistant Professor, University of Toronto. Gail McCabe, Ph.D., University of Toronto. University of Victoria 1. Chris T. Darimont, Ph.D., Associate Professor, University of Victoria 2. John Volpe, Ph.D., Associate Professor, University of Victoria. 3. Aerin Jacob, Ph.D., Postdoctoral Fellow, University of Victoria. 4. Briony E.H. Penn, Ph.D., Adjunct Professor, University of Victoria. 5. Natalie Ban, Ph.D., Assistant Professor, School of Environmental Studies, University of Victoria. 6. Travis G. Gerwing, Ph.D., Postdoctoral Fellow, University of Victoria. 7. Eric Higgs, Ph.D., Professor, University of Victoria. 8. Paul C. Paquet, Ph.D., Senior Scientist, Raincoast Conservation Foundation, Adjunct Professor, University of Victoria. 9. James K. Rowe, Ph.D., Assistant Professor, University of Victoria. University of Washington 1. Charles Simenstad, Ph.D., Professor, University of Washington. 2. Daniel Schindler, Ph.D., Harriet Bullitt Endowed Chair in Conservation, Professor, University of Washington. 3. Julian D. Olden, Ph.D., Associate Professor, University of Washington. 4. P. Sean McDonald, Ph.D., Research Scientist, University of Washington. 5. Tessa Francis, Ph.D., Research Scientist, University of Washington. University of Windsor 1. 
Hugh MacIsaac, Ph.D., Canada Research Chair Great Lakes Institute for Environmental Research, Professor, University of Windsor. Photo Credits: 9 of the scientist condemning the CEAA review are professors at the University of Victoria. Photo shows U Vic students listening to a UN official in 2012 by Herb Neufeld via Flickr (CC BY SA, 2.0 License); Screen shot from a Liberal campaign video in which Trudeau promised to bring real change to Ottawa;8 of the scientist condemning the CEAA review are professors at the University of British Columbia. Photo of UBC by abdallahh via Flickr (CC BY SA, 2.0 License);5 of the scientists condemning the CEAA review are from the University of Washington. Photo is Mary Gates Hall, in the University of Washington by PRONam-ho Park Follow via Flickr (CC BY SA, 2.0 License);5 of the scientists condemning the CEAA review are from the Skeena Fisheries Commission. Photo is Coast mountains near the mouth of the Skeena River by Roy Luck via Flickr (CC BY SA, 2.0 License);16 of the scientists condemning the CEAA review were professors at Simon Fraser University. Photo shows SFU’s Reflective Pool by Jon the Happy Web Creative via Flickr (CC BY SA, 2.0 License)    Get CleanTechnica’s 1st (completely free) electric car report → “Electric Cars: What Early Adopters & First Followers Want.”   Come attend CleanTechnica’s 1st “Cleantech Revolution Tour” event → in Berlin, Germany, April 9–10.   Keep up to date with all the hottest cleantech news by subscribing to our (free) cleantech newsletter, or keep an eye on sector-specific news by getting our (also free) solar energy newsletter, electric vehicle newsletter, or wind energy newsletter.  


Scientists have put a high-precision blood assay into a simple test strip: Researchers have developed a new biosensor test system based on magnetic nanoparticles Abstract: Researchers from the General Physics Institute of the Russian Academy of Sciences (GPI RAS) and the Moscow Institute of Physics and Technology (MIPT) have developed a new biosensor test system based on magnetic nanoparticles. It is designed to provide highly accurate measurements of the concentration of protein molecules (e.g. markers that indicate the onset or development of a disease) in various samples, including opaque solutions or strongly coloured liquids. The new development is similar in principle to a pregnancy test. The analysis is conducted using small test strips made of porous material with two test lines. A droplet of the sample liquid is applied to one end of the strip, and after a short period the result is shown by the activation of one or both lines. The test strips can be stored for a long time before use, the test can be done quickly, and it does not need to be carried out by specially trained staff; tests can easily be performed next to a patient or even in field conditions. At the molecular level, the magnetic nanoparticles are "linked" to antibodies against the target protein and placed on the porous strip close to the intended point of contact with the test solution. The liquid, which spreads along the strip by capillary action, picks up the magnetic particles. It then meets two lines: the test line and the control line. The test line contains antibodies that capture the protein in question, and with it the magnetic markers attached to the protein molecules (the nanoparticles themselves being "linked" to antibodies). The control line captures only the antibodies carrying magnetic markers, and it is activated in any case, provided the test strip is usable.
The control line thus indicates whether the test is fit for use: that the protein antibodies have not been destroyed by improper storage, and that the test liquid has been applied correctly. After the sample has permeated the test strip and the antibodies have interacted with one another, the result can be read. This is as far as the resemblance to a pregnancy test goes. In a "classic" pregnancy test, the result can only be "yes" or "no". With this test, however, the scientists can not only detect a protein with high sensitivity but also accurately determine its concentration, and with an accuracy that even exceeds that of methods performed only in laboratory conditions by trained staff. Alexey Orlov, the corresponding author of the study and a research fellow at GPI RAS (who completed an undergraduate degree at MIPT in 2010 and a postgraduate degree in 2013), explains: "Normally, tests that can be performed not only under lab conditions but also in the field use fluorescent or coloured markers, and the results are determined visually, by sight or by using a video camera. In our case, we are using magnetic particles, which have a significant advantage: they can be used to conduct analyses even if the test strip is dipped into a completely opaque liquid, for example to determine substances in whole blood. The precise numerical measurement is conducted entirely electronically using a portable device. This completely excludes any ambiguity." The scientists note that along with its high sensitivity in determining the concentration of a protein, the new test system also allows measurements over a wide dynamic range: the upper limit of the measurable concentration is more than 4,000 times greater than the lower limit.
Dynamic range is a familiar term in photography: for a camera, it means the ability of the image sensor or film to distinguish gradations of brightness without washing out to white or collapsing into a dark spot. In biochemical measurements, dynamic range means the ability to measure the concentration of a protein in a very dilute solution as well as in a very saturated one. The new system was tested by measuring 0.025 nanograms per millilitre of prostate-specific antigen (PSA) in blood (the "healthy" range is anything below 4 nanograms per millilitre). PSA is one of the most commonly monitored markers in clinical examinations of men and one of the possible markers for prostate cancer; it is also used in forensics to detect traces of semen. Both applications have certain limitations, since neither can conclusively provide a diagnosis or prove the guilt of a suspect, but the new biosensor platform will be able to do more than analyse PSA; this particular protein was chosen to demonstrate the method's capabilities. This level of sensitivity is enough to tell whether there has been a recurrence after removal of the prostate, and the results obtained give an idea of the potential of the new development: it can not only indicate when a particular marker is outside the normal range, but also easily trace the dynamics of protein-marker concentrations over the course of a disease. Having checked the results of the new method against the "gold standard" for determining PSA, the enzyme-linked immunosorbent assay (ELISA), the scientists showed that the new test system works well and has significant advantages over traditional methods. In the new test system, the researchers used their own patented method, MPQ (magnetic particle quantification), to precisely count magnetic nanoparticles by their nonlinear magnetization.
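Quantitative immunoassays of this kind typically convert the measured line signal into a concentration via a calibration curve; a common choice in ELISA-style processing is the four-parameter logistic (4PL). The sketch below is illustrative only: the calibration constants are made up, and the article does not state which curve model the researchers use.

```python
def four_pl(conc, a, b, c, d):
    """Four-parameter logistic calibration curve: signal vs. concentration.
    a = signal at zero concentration, d = signal at saturation,
    c = inflection point (EC50), b = slope."""
    return d + (a - d) / (1.0 + (conc / c) ** b)

def concentration_from_signal(signal, a, b, c, d):
    """Invert the calibration curve to recover a concentration."""
    return c * ((a - d) / (signal - d) - 1.0) ** (1.0 / b)

# Hypothetical calibration constants, for illustration only
A, B, C, D = 0.02, 1.1, 5.0, 3.0

sig = four_pl(0.5, A, B, C, D)                    # simulated line signal
est = concentration_from_signal(sig, A, B, C, D)  # recovered concentration
```

In practice the four constants would be fitted to a dilution series of known standards, and the inverse curve would then be applied to the magnetic signal read from the test line.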
Using this method, the scientists can record as little as 60 zeptomoles (the prefix zepto- denotes ten to the minus twenty-first power) of nanoparticles, over a linear range exceeding a factor of ten million; these parameters have never been matched before. The method involves applying an alternating magnetic field to the nanoparticles at two frequencies and monitoring the induction response at combinatorial frequencies. Many methods of analysing substances rely on the fact that the test objects, whether particles or molecules, are affected by an electromagnetic field: at the right frequency, a sample starts either to absorb radiation actively or to radiate in response. In this case, the scientists drive the sample with a combination of two magnetic-field frequencies and monitor the response at a frequency that is a linear combination of them, the "combinatorial" frequency. Maxim Nikitin, head of MIPT's Laboratory of Nanobiotechnology and a co-author of the study: "We previously demonstrated the high sensitivity of this method of detecting magnetic particles in a joint project with US researchers from the University of Chicago. We used our sensors to record magnetic radioactive nanoparticles based on the isotope 59Fe in the bodies of animals in vivo (see Nikitin et al., J. Appl. Phys. 2008, 103, 07A304). In particular, it was found that the detection threshold of this electronic method coincides with the reporting threshold of the accompanying gamma radiation, which means that radioactive markers can be replaced with magnetic nanoparticles in a range of biophysical studies. In the present study, we use this methodology to achieve ultra-sensitive immunoassays."
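The combinatorial-frequency idea can be illustrated numerically: a linear response to a two-tone drive contains only the drive frequencies, while a saturating (nonlinear) magnetization also produces mixing products such as f1 + 2·f2. The toy simulation below uses illustrative frequencies and a tanh saturation model, not the actual parameters of the MPQ instrument.

```python
import numpy as np

# Two-tone magnetic drive; all values are illustrative.
fs = 10_000.0                    # sample rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)  # 1 s of signal
f1, f2 = 200.0, 30.0             # drive frequencies, Hz
h = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

def amplitude_at(signal, freq):
    """Normalized FFT magnitude of `signal` at the bin nearest `freq`."""
    spec = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    return spec[np.argmin(np.abs(freqs - freq))]

m_linear = h                      # linear medium: only f1 and f2 appear
m_saturating = np.tanh(2 * h)     # saturating magnetization mixes the tones

f_comb = f1 + 2 * f2              # one combinatorial frequency (260 Hz)
lin = amplitude_at(m_linear, f_comb)
nonlin = amplitude_at(m_saturating, f_comb)
```

Because ordinary materials in the sample respond linearly, a signal at the combinatorial frequency is, in this picture, attributable specifically to the superparamagnetic nanoparticles, which is what makes the count so selective.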
The observation that "the detection threshold of this electronic method coincides with the reporting threshold of the accompanying gamma radiation" means that, in a range of biomedical studies, magnetic markers and the device that detects them can identify what was previously detectable only with radioactive preparations. From a doctor's point of view, magnetic products are clearly preferable for diagnostic studies, as they spare patients exposure to excess radiation. Dr. Petr Nikitin, head of research and head of laboratory at GPI RAS (who graduated from MIPT in 1979): "The magnetic methods developed and the recorders for counting nanomarkers on test strips not only provide these limits and ranges for measuring antigen concentrations, but can also effectively control all stages of the process, from the development and optimization of immunoassay protocols to conducting assays and interpreting results. This is achieved, in particular, by quantitative monitoring of the redistribution of nanomarkers during biochemical reactions throughout all three-dimensional porous components of the test strips, which has not previously been possible with any other method. Furthermore, the iron salts used to synthesize the nanoparticles are considerably more accessible and cheaper than the reagents used to synthesize gold nanoparticles, which are commonly used in threshold tests such as the pregnancy test." The combination of reliability, accessibility, high accuracy and sensitivity means the new method is likely to make a rapid transition from laboratory prototype to mass production. The developers have not yet given a specific timeframe, but they emphasize that the test system can be used not only to diagnose diseases but also for a number of other applications: the biosensor can analyse food products and medicines, and it could also be used for environmental monitoring.
And it will be able to do all this at the location itself, without any complicated or expensive equipment.


Grant
Agency: Cordis | Branch: FP7 | Program: CPCSA | Phase: INFRA-2007-1.2-03 | Award Amount: 49.02M | Year: 2008

A globally distributed computing Grid now plays an essential role for large-scale, data-intensive science in many fields of research. The concept has been proven viable through the Enabling Grids for E-sciencE project (EGEE and EGEE-II, 2004-2008) and its related projects. EGEE-II is consolidating the operations and middleware of this Grid for use by a wide range of scientific communities, such as astrophysics, computational chemistry, earth and life sciences, fusion and particle physics. Strong quality assurance, training and outreach programmes contribute to the success of this production Grid infrastructure.

Built on the pan-European network GÉANT2, EGEE has become a unique and powerful resource for European science, allowing researchers in all regions to collaborate on common challenges. Worldwide collaborations have extended its reach to the benefit of European science.

The proposed EGEE-III project has two clear objectives that are essential for European research infrastructures: to expand, optimize and simplify the use of Europe's largest production Grid through continuous operation of the infrastructure, support for more user communities, and the addition of further computational and data resources; and to prepare the migration of the existing Grid from a project-based model to a sustainable federated infrastructure based on National Grid Initiatives.

By strengthening interoperable, open-source middleware, EGEE-III will actively contribute to Grid standards and work closely with businesses to ensure commercial uptake of the Grid, which is key to sustainability.

Federating its partners on a national or regional basis, EGEE-III will have a structuring effect on the European Research Area. In particular, EGEE-III will ensure that the European Grid does not fragment into incompatible infrastructures of varying maturity. EGEE-III will provide a world-class, coherent and reliable European Grid, ensuring Europe remains at the forefront of scientific excellence.


News Article | August 24, 2016
Site: www.nature.com

Poor diet plan British health campaigners have pulled to pieces a UK government plan to tackle childhood obesity. The plan includes a levy on high-sugar soft drinks. But the final version, unveiled on 18 August, is significantly weaker than some researchers had hoped. Among the critics is the British Medical Association, which said that the government had “rowed back” on promises to crack down on the problem and instead produced a “weak plan” with “pointless” voluntary targets. Villagers vote to leave for good People in the Alaskan village of Shishmaref voted on 16 August to abandon their tiny island northeast of the Bering Strait and move onto the mainland because of erosion due to global warming and rising sea level. The Inuit village, containing around 600 people, has already been affected by erosion and flooding, which is expected only to worsen in coming decades. The unofficial vote was 89–78 in favour of moving, but there are as yet no funds to pay for relocation. Feverish campaign A logistically challenging emergency vaccination campaign, launched last week, aims to stop the spread of deadly yellow fever in Angola and the Democratic Republic of the Congo. Working with the countries’ health ministries and 56 global partners, the World Health Organization is coordinating the vaccination of 14 million people in more than 8,000 locations, including urban areas and hard-to-reach border regions. Since December, more than 400 people in the region have died in the worst outbreak of the mosquito-borne disease for 30 years. Hidden Zika risk A medical report from Brazil confirms that Zika virus can be transmitted through a blood transfusion. Two people received platelet transfusions in January from a blood donor who showed no symptoms at the time of donation, but who was later found to have been infected with Zika virus. 
The two recipients harboured Zika virus RNA that was genetically related to the virus found in the donor, although they showed no symptoms of infection. The findings add to evidence showing another mode of transmission for Zika, which has been shown to be passed from mother to child in utero, and between sexual partners, say the doctors who reported the case on 17 August (I. J. F. Motta et al. N. Engl. J. Med. http://doi.org/bppb; 2016). Free space for all NASA announced on 16 August that it is granting free public access to any published research funded by the agency. Research data and peer-reviewed publications by NASA-funded scientists will be available for download and reading on the agency’s PubSpace portal within one year of publication. PubSpace was created in response to a 2013 request by the White House Office of Science and Technology Policy. Other US agencies, including the National Institutes of Health and the Food and Drug Administration, are making their research available through the same portal. Patent thriller Fresh accusations have rekindled the battle over who invented the potentially lucrative CRISPR–Cas9 gene-editing technology. In an e-mail released by the University of California (UC) on 15 August as part of a pending patent case, Shuailiang Lin, a former student at the Broad Institute in Cambridge, Massachusetts, claims that, contrary to the Broad’s claims, the lab built its gene-editing technique on the back of discoveries made at UC Berkeley. The Broad denied Lin’s claims and notes that he used the same e-mail to apply for a job at UC Berkeley. The letter was first reported by the MIT Technology Review. Greener lorries The US government announced stricter fuel-economy standards for lorries, buses and vans on 16 August. Although heavy-duty vehicles make up only about 5% of traffic in the United States, they account for more than 20% of fuel consumption in the transport sector, and contribute a similar proportion of carbon dioxide emissions. 
The standards for new vehicles will be introduced gradually, becoming tighter each year over the next decade. Maiden voyage The world’s largest aircraft made its maiden flight in the United Kingdom on 17 August. The 92-metre-long Airlander 10 (pictured) incorporates lighter-than-air technology, combining characteristics of an airship and an aeroplane. It took off from Cardington Airfield in Bedfordshire and performed a circuit of the area before landing safely 20 minutes later. Its developer, Hybrid Air Vehicles in Bedford, says that the hybrid can travel at a cruise speed of 148 kilometres per hour and stay airborne for up to 5 days. The flight marked the start of 200 hours of flight testing. Airlander 10 is intended to be used for surveillance, communication, aid delivery and even passenger travel. Record hot July July this year was the warmest month since systematic global temperature records began in 1880. It was 0.1 degrees warmer than the previous hottest Julys, in 2015, 2011 and 2009, according to a monthly analysis of global temperatures by scientists at NASA’s Goddard Institute for Space Studies in New York City. July was the tenth record-setting warm month in a row, so 2016 looks set to end up as the warmest year in 136 years of modern record-keeping. Mind the data Psychologists can now share their data and early results with colleagues before formal publication. On 15 August, a comparative study of personality traits in 8,600 US students was the first paper to be deposited in PsyArXiv, a new preprint server for the psychological sciences (K. S. Corker and B. Donnellan Preprint at PsyArXiv http://osf.io/xeg7y; 2016). Following the example of the successful physics server arXiv, similar online repositories were launched earlier this year for the social sciences and for engineering. A preprint service for chemists will be up and running soon. Berkeley head quits Nicholas Dirks has resigned as chancellor of the University of California, Berkeley. 
He did not give a specific reason for his decision, announced on 16 August, but he had been criticized for how he managed the university budget and handled allegations of sexual harassment against faculty members and staff — including complaints filed last year against the astronomer Geoffrey Marcy. Marcy stepped down from his position in October 2015, after the complaints came to light. Russian gambit On 19 August, Russian President Vladimir Putin appointed church historian Olga Vasilyeva as the country’s science and education minister. Vasilyeva succeeds Dmitry Livanov, who will become presidential envoy for trade and economic relations with Ukraine. During his four-year term as minister, Livanov oversaw a radical overhaul of the Russian Academy of Sciences, Russia’s main basic-research organization. Vladimir Ivanov, a vice-president of the academy, welcomed Livanov’s replacement. Vasilyeva, formerly in charge of public education in religion and history in the presidential administration, told Russian news agency Interfax that religion will not interfere with her work as education and science minister. See go.nature.com/2c1zbq1 for more. Smallpox fighter Donald Ainslee Henderson, head of the successful campaign to wipe out smallpox, died on 19 August, aged 87. Henderson headed the World Health Organization’s global smallpox eradication campaign between 1966 and 1977. The experience caused him to question the feasibility of other disease-eradication efforts. Smallpox remains the only human disease ever to be wiped out, although decades-long efforts to eliminate polio and Guinea-worm disease may now be nearing completion. The UK government’s backing of a planned 1.8-gigawatt wind farm off the Yorkshire coast cements Britain’s position as the world’s leading producer of offshore wind power. More than 90% of global capacity is installed off the coasts of 11 European countries. Most of the remaining 10% is installed off China. 
In 2015, offshore wind accounted for 24% of total wind-power installations in the European Union. Globally, offshore wind represents only about 3% of installed wind capacity. 23%: the share of 'bee-friendly' garden plants purchased in the United States that contain neonicotinoid pesticides, which are highly toxic to bees. Until two years ago, more than half of garden plants sold by large US retailers were pretreated with the bee-toxic insecticides. Source: Friends of the Earth


News Article | February 15, 2017
Site: www.marketwired.com

BELLEVUE, WA--(Marketwired - February 15, 2017) - 5G Americas, the industry trade association and voice of 5G and LTE for the Americas, today announced the publication of Wireless Technology Evolution Towards 5G: 3GPP Release 13 to Release 15 and Beyond, which outlines the progress and evolution of LTE technology and the developing 5G standards. With the U.S. approaching 90 percent penetration of LTE [Ovum], the stage is being set for a new technology frontier with fifth-generation wireless technologies. "Dozens of 5G trials are occurring around the world, and the process of information gathering by leading operators and manufacturers to feed into the 3GPP standards process is underway," commented Vicki Livingston, Head of Communications, 5G Americas and a contributing writer to the report. "Both LTE and 5G will work together to form our connected future." 3GPP's strong history of standardizing the technologies that drive the largest mobile wireless ecosystems, from GSM to LTE and now to 5G, forms the backbone of this whitepaper by 5G Americas. LTE is expected to be the dominant mobile wireless technology well into the next decade, with LTE-Advanced Pro in Rel-13 completed in 2016. The 5G Americas white paper provides a detailed discussion of the key feature enhancements included in 3GPP Rel-13. It also provides a detailed discussion of the LTE enhancements being worked on as part of Rel-14, including further MIMO enhancements, CA enhancements, enhanced LAA (eLAA), enhanced LWA (eLWA), Voice over LTE (VoLTE) enhancements and enhancements to ProSe/D2D. Jim Seymour, Principal Engineer, Mobility CTO Group, and co-leader of the 5G Americas report, explained, "3GPP Releases 13 and 14 provide additional innovative technical enhancements for the evolution of LTE as the 5G and LTE standards develop in parallel at 3GPP."
Rel-14 and Rel-15 represent the first phase of work in 3GPP towards 5G standards, with Rel-14 focused on the study items towards 5G and Rel-15 on the first phase of normative specifications for 5G. One of the requirements for 5G is the ability to optimally support a wider range of frequency bands, in particular millimeter-wave (mmWave) bands. Given that radio accesses like LTE and HSPA were not designed to be optimized for mmWave frequency bands, 3GPP agreed that Rel-14 and Rel-15 will introduce a New Radio (NR) access technology for 5G that is flexible enough to support not only frequency bands below 6 GHz but also mmWave bands up to 100 GHz. The 5G Americas whitepaper provides details on the 3GPP Rel-14 studies of the New Radio access technology, including channel modeling, RAN requirements, RAN technologies and potential solutions that meet a broad range of use cases. The RAN study item on requirements aims to identify typical deployment scenarios for next-generation access technologies and the required performance and main functional capabilities. The studies, explained in the whitepaper, focus on four main RAN specification areas: 1) physical layer aspects; 2) radio interface protocols and procedures; 3) radio network architecture, interfaces and protocols/procedures; and 4) radio frequency (RF) and performance requirements. At the service-requirements level, the 3GPP studies include over 70 use cases for new opportunities in the next generation of telecommunications systems. The IMT-2020 process is in part driving the work in 3GPP to define a 5G technology through a two-phased approach: Phase I focuses on 5G New Radio and Next Generation System Architecture studies (Rel-14) and normative specifications (Rel-15) by 2018, and Phase II includes everything needed to meet IMT-2020 requirements in Rel-16 by the end of 2019.
Wireless Technology Evolution Towards 5G: 3GPP Release 13 to Release 15 and Beyond was written by members of 5G Americas and is available for free download on the 5G Americas website. Co-leaders of the white paper working group are Betsy Covell, Standards Manager, Nokia Bell Labs; Jim Seymour, Principal Engineer, Mobility CTO Group, Cisco; and Vicki Livingston, Head of Communications at 5G Americas. To highlight the importance of the 3GPP 5G standard, 5G Americas has partnered with FierceWireless for a luncheon event on February 27 at the Mobile World Congress. 5G Evolution: Transitioning from Fixed to Mobile will be the topic of discussion by expert panelists from Qualcomm, Deutsche Telekom and Verizon, joined by Chris Pearson, President of 5G Americas. The FierceWireless event is available for registration. About 5G Americas: The Voice of 5G and LTE for the Americas 5G Americas is an industry trade organization composed of leading telecommunications service providers and manufacturers. The organization's mission is to advocate for and foster the advancement and full capabilities of LTE wireless technology and its evolution beyond to 5G, throughout the ecosystem's networks, services, applications and wirelessly connected devices in the Americas. 5G Americas is invested in developing a connected wireless community while leading 5G development for all the Americas. 5G Americas is headquartered in Bellevue, Washington. More information is available at www.5gamericas.org. Follow our news on Twitter at @5GAmericas and Facebook at www.facebook.com/5gamericas.


Kamchatnov A.M.,Russian Academy of Sciences | Pavloff N.,University Paris - Sud
Physical Review A - Atomic, Molecular, and Optical Physics | Year: 2012

We study the flow of a one-dimensional Bose-Einstein condensate incident onto a narrow obstacle. We consider a configuration in which a dispersive shock is formed and propagates upstream away from the obstacle while the downstream flow reaches a supersonic velocity, generating a sonic horizon. Conditions for obtaining this regime are explicitly derived and the accuracy of our analytical results is confirmed by numerical simulations. © 2012 American Physical Society.
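For context, theoretical studies of this kind are usually built on the one-dimensional Gross-Pitaevskii equation; the following is a sketch of the standard mean-field model, not necessarily the exact formulation used in this paper:

```latex
i\hbar\,\partial_t \psi = -\frac{\hbar^2}{2m}\,\partial_x^2 \psi
  + U(x)\,\psi + g\,|\psi|^2 \psi ,
\qquad
c = \sqrt{\frac{g\,n}{m}} ,
```

where $U(x)$ models the narrow obstacle, $n = |\psi|^2$ is the condensate density, and $c$ is the local speed of sound. A sonic horizon forms at the point where the flow velocity crosses $c$, separating subsonic from supersonic flow.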


Lobanov V.E.,Polytechnic University of Catalonia | Kartashov Y.V.,Polytechnic University of Catalonia | Kartashov Y.V.,Russian Academy of Sciences | Konotop V.V.,University of Lisbon
Physical Review Letters | Year: 2014

Using the parity and time reversal symmetries of a two-dimensional spin-orbit coupled Bose-Einstein condensate in a lattice created by the Zeeman field, we identify and find numerically various families of localized solutions, including multipole and half-vortex solitons. The obtained solutions may exist at any direction of the gauge field with respect to the lattice and can be found either in finite gaps (for repulsive interatomic interactions) or in a semi-infinite gap (for attractive interactions). The existence of half-vortices requires higher symmetry (the reflection with respect to the field direction). Stability of these modes makes them feasible for experimental observation. © 2014 American Physical Society.


Grant
Agency: Cordis | Branch: FP7 | Program: CP-FP-SICA | Phase: HEALTH-2009-4.3.3-1 | Award Amount: 5.68M | Year: 2009

The subject of the proposal is the search for and analysis of genomic variations underlying Alzheimer's disease (AD), alcoholism and schizophrenia, diseases that are widespread in human populations. Schizophrenia and alcoholism are common forms of behavioral pathology and disability in adult life. AD is the most common form of dementia in human populations. Although genomic variations presumably associated with AD, alcoholism and schizophrenia have been described in preliminary studies of European populations, the significance of the putatively associated alleles, the genetic background, and the role of environmental factors remain poorly understood. Within the framework of this project we plan to extend the study of genomic variations underlying these diseases by performing genome-wide association analysis in cohorts of patients and normal individuals from several ethnic populations of Europe and Russia. The genetic factors for the cognition endophenotype will also be studied. Candidate regions, both newly found and previously reported for these diseases, will be additionally analyzed by sequencing. Such large-scale population studies, combined with deep analysis of particular genes and genomic regions, will allow us to reveal the genetic basis of susceptibility to these diseases. On the basis of this research we propose to contribute to the development of a diagnostic instrument for the analysis of genetic risk factors for AD, alcoholism and schizophrenia. Comparison of several ethnic cohorts (different populations from Russia and Central/Western and Southern Europe) will also help to elucidate the influence of genetic background and environmental factors on the etiology of neuropsychiatric diseases. The consortium includes 6 groups from EU/AC countries and 7 groups from Russia. The participants are leading specialists in their fields and have joint publications on subjects related to this proposal.


Grant
Agency: Cordis | Branch: FP7 | Program: CP-IP | Phase: KBBE.2010.3.5-04 | Award Amount: 7.74M | Year: 2011

There is a strong need for new thermostable hydrolases with appropriate performance and/or novel functionalities that could provide huge savings in time, money and energy for industrial processes. The HotZyme project aims to identify such enzymes from hot terrestrial environments, using metagenomic screening methods. New bioinformatic tools will be developed to facilitate function prediction of genes from metagenomes that show low or no sequence homology to enzymes of known function. A range of high-throughput screening technologies will be employed to identify novel hydrolases. The consortium is composed of 13 partners from 10 European countries plus one partner from the USA. The strong expertise of our partners in microbiology, molecular biology, biochemistry, biophysics, geochemistry, nanotechnology and bioinformatics will be integrated in the project to ensure the fulfilment of the proposed tasks. Importantly, the five industrial partners, including three SMEs, will seek to commercialize the project results, thus ensuring a Europe-wide impact after the project ends.


Grant
Agency: Cordis | Branch: FP7 | Program: CP-IP | Phase: HEALTH-2009-2.1.1-2 | Award Amount: 13.75M | Year: 2010

The International Cancer Genome Consortium (ICGC) has the goal of obtaining a comprehensive description of the genomic, transcriptomic and epigenomic changes in 50 different tumour types and/or subtypes, with the aim of elucidating the genomic changes present in the many forms of cancer that contribute to the burden of disease throughout the world. We present a proposal for a European contribution to this effort through the application of state-of-the-art approaches to the genomics of the most common form of renal cell cancer (RCC). RCC is of particular importance within Europe, where the highest global incidence rates are observed. Disease incidence has increased over the last two decades, and it is now the 8th most common cancer in the EU. CAGEKID brings clinical and epidemiological resources that are unique worldwide together with the genetics and genomics expertise required for this effort. In the first phase of the study, we will provide a full genomic characterisation of 100 matched pairs of DNA extracted from tumour and constitutional samples. The DNA will be completely sequenced, and the data brought together with those from whole-genome transcript and methylation analyses. Follow-up studies of potential targets will be made in further samples. The results acquired will be linked to targeted protein analyses. The primary data will be made available to the scientific community, and the programme will contribute to establishing norms for the manipulation and storage of biological samples. CAGEKID will provide the first systematic analysis of this tumour site, yielding new insights into disease aetiology with applications for diagnosis and treatment. It addresses a major need to identify new biological markers for renal cell cancer, one of very few tumour types for which there are currently no biological markers in routine clinical use. Renal cancer is not yet supported by any of the members of the ICGC.


Grant
Agency: Cordis | Branch: FP7 | Program: CP | Phase: ENV.2013.6.4-3 | Award Amount: 7.88M | Year: 2013

The NEAM (North East Atlantic, Mediterranean and Adjacent Seas) region of IOC/UNESCO is known to be exposed to tsunamis and, like other regions of the world, faces increasing levels of risk due to i) the continuous development of coastal areas with critical infrastructure, and ii) the year-round presence of millions of tourists. In recent years, European researchers have greatly advanced knowledge of tsunami hazards and the implementation of operational infrastructure, such as the creation of a regional system of candidate tsunami-watch providers (CTWPs) and national tsunami warning centers (NTWCs). However, significant gaps remain and more effort is needed. ASTARTE (Assessment STrategy And Risk for Tsunami in Europe) aims to develop a comprehensive strategy to mitigate tsunami impact in this region. To achieve this goal, an interdisciplinary consortium has been assembled. It includes all CTWPs of NEAM and expert institutions across Europe and worldwide. ASTARTE will improve i) basic knowledge of tsunami generation and recurrence, going beyond simple catalogues, with novel empirical data and new statistical analyses for assessing the long-term recurrence and hazards of large events in sensitive areas of NEAM, ii) numerical techniques for tsunami simulation, with a focus on real-time codes and novel statistical emulation approaches, and iii) methods for the assessment of hazard, vulnerability, and risk. ASTARTE will also provide i) guidelines for tsunami Eurocodes, ii) better tools for forecast and warning for CTWPs and NTWCs, and iii) guidelines for decision makers to increase the sustainability and resilience of coastal communities. In summary, ASTARTE will develop the basic scientific and technical elements allowing for a significant enhancement of the Tsunami Warning System in the NEAM region in terms of monitoring, early warning and forecast, governance and resilience. Overall, this will lead to the goal of the European/NEAM Horizon 2020 strategy: to foster tsunami-resilient communities.


Grant
Agency: Cordis | Branch: FP7 | Program: CP-FP-SICA | Phase: ENV.2009.4.1.3.2 | Award Amount: 3.31M | Year: 2010

ImpactMin will develop new methods and a corresponding toolset for the environmental impact monitoring of mining operations using Earth observations. Novel technology will be proposed for the combined use of satellite remote sensing, lightweight aerial measurements and unmanned aerial vehicles (UAVs). The proposed methods will be validated at four demonstration sites in Sweden, Bosnia and Herzegovina, Romania and Russia. The theoretical advancements in science and technology will be made public knowledge with the help of a free multimedia e-learning programme, whereas the tools and foreground knowledge will be commercialised by the participating SMEs. The scientific challenge is to use remote sensing technologies to develop cost-effective, reliable and repeatable approaches for monitoring the impact of mining activities on the environment through time, in order to identify, predict and prevent potentially serious consequences for the natural and human environment. On the one hand, it is of great importance that the amount of detail obtained from these data sources is sufficient to monitor relatively small changes through time. On the other hand, it is essential that the methods used are sufficiently flexible to operate under different circumstances at acceptable cost. The composition of the project partnership and the selection of demonstration sites correspond to European interests both in accessing strategic minerals needed for the European economy and in the need to monitor and control environmental pollution from these sources.


Grant
Agency: Cordis | Branch: FP7 | Program: CSA-SA | Phase: SiS-2010-3.0.3.1 | Award Amount: 1.23M | Year: 2011

The objective of the project is to use art to communicate emotions related to the understanding of nature and to stimulate students to create artistic initiatives able to demonstrate the commonalities of artistic and scientific fascination. The objective will be pursued along two closely related lines: 1) produce artistic works based on scientific phenomena at a professional level; 2) stimulate students of EC schools to produce their own works and organize an international competition to reward the best ones. (We consider this a form of very deep and long-lasting interactive action, which we prefer to the sometimes superficial and ephemeral interactive processes available in some science popularization exhibitions.) In practice, we intend to realize artistic events based on scientific issues in each of the following artistic disciplines: 1) modern dance; 2) cinema; 3) contemporary art; 4) imaging; 5) literature. The art works produced will be exploited in two ways: a) by presenting them in live events in the different countries involved in the project, addressing not only the targeted category of persons (high school students, 15-18 years) but also the general public; b) by organizing a competition among EU high school students for each of the 5 disciplines considered (with a consequent interactive process potentially involving thousands of students). The consortium includes several scientists, artists, art critics, film directors, actors, musicians and specialists in science popularization, who will work together to achieve the goals outlined above. The activities will be coordinated by the project leader, who is at the same time a well-known scientist and a person long active in several artistic endeavours. Universities, research institutes, dance schools, museums and theatres will be involved, together with the famous European Synchrotron Radiation Facility, which hosts thousands of scientists every year, including Nobel Prize winners.


Grant
Agency: Cordis | Branch: FP7 | Program: CP | Phase: ENV.2013.6.2-6 | Award Amount: 11.20M | Year: 2013

By 2050, global agricultural productivity will need to increase by at least 70%. In order to guarantee food production for future generations, agricultural production will need to be based on sustainable land management practices. At present, Earth-observation-based (global) crop monitoring systems focus mostly on short-term agricultural forecasts, thereby neglecting longer-term environmental effects. However, it is well known that unsustainable cultivation practices may lead to degradation of the (broader) environment, resulting in lower agricultural productivity. As such, agricultural monitoring systems need to be complemented with methods to also assess the environmental impacts of changes in cropland and shifting cultivation practices. It is important that this is addressed at the global level. SIGMA presents a global partnership of expert institutes in agricultural monitoring, with a strong involvement in GEO and the Global Agricultural Geo-Monitoring (GEOGLAM) initiative. SIGMA aims to develop innovative methods, based upon the integration of in-situ and Earth observation data, to enable the prediction of the impact of crop production on ecosystems and natural resources. The proposed project will address methods to: i. enable the sharing and integration of satellite and in-situ observations according to GEOSS Data CORE principles; ii. assess the impact of cropland areas and cropland change on other ecosystems; iii. understand and assess shifts in cultivation practices and cropping systems to evaluate impacts on biodiversity and the environment. Furthermore, dedicated capacity-building activities are planned to increase national and international capacity to enable the sustainable management of agriculture. Lastly, a strong coordinating mechanism will be put in place, through the project partners, between SIGMA and the G20 Global Agricultural Geo-Monitoring Initiative (GEOGLAM), in order to assure transparency and alignment of the SIGMA activities.


Grant
Agency: Cordis | Branch: FP7 | Program: CPCSA | Phase: INFRA-2007-1.2-01 | Award Amount: 669.89K | Year: 2008

We propose to set up a European digital repository for cosmic ray data by pooling existing data archives and by developing a real-time database with the data of as many European neutron monitor stations as possible. The data will be available through the internet. Cosmic rays provide a diagnostic tool to analyze processes in interplanetary space and at the Sun. Cosmic rays also directly affect the terrestrial environment and serve as indicators of solar variability and non-anthropogenic climate changes on Earth. In the 1950s a worldwide network of standardized neutron monitors was developed to examine temporal and spatial variations in our space environment. Despite decades of tradition, neutron monitors remain the state-of-the-art instrumentation for measuring GeV cosmic rays that cannot be measured by space experiments. Therefore the worldwide network, which presently consists of about 50 stations, ideally complements cosmic ray observations in space. Since the beginning of the coordinated neutron monitor measurements, the data have been collected in world data centers. A big shortcoming of these data centers for today's demands is the fact that the data are not available in real time and only with a time resolution of one hour. Cosmic ray applications, e.g. space weather warnings (geomagnetic storms, solar energetic particle events), need access to neutron monitor measurements in real time and with high time resolution. Reliable forecasts of geomagnetic storms are important in many technical areas (radio communication, electric power lines, etc.). Confident alert prediction of solar energetic particle events is highly important for manned space missions and for airline crews and passengers. The proposal unifies for the first time the cosmic ray community of the European neutron monitor network in a coordinated effort to advance the use of cosmic ray data in cutting-edge applications such as space weather.


Grant
Agency: Cordis | Branch: FP7 | Program: CP | Phase: ICT-2009.1.4 | Award Amount: 4.13M | Year: 2010

Security has always been a stepchild to the developers of embedded systems. In the past they were able to rely on physical protection. Due to the connectivity and ubiquity of today's embedded systems, this is no longer possible. The aim of SecFutur is to develop and establish a security engineering process for embedded systems. In order to achieve this overall goal, SecFutur will provide a set of implemented resource-efficient security building blocks for embedded systems, each addressing a specific complex non-functional requirement, and a security engineering framework that supports the developer in integrating these building blocks into the overall engineering process. SecFutur targets developers of embedded systems, who by using the project results will be able to follow an application-driven security engineering approach and increase the overall security of the system. Practical scenarios from several security-relevant application areas will be used in SecFutur to evaluate and demonstrate the advances towards secure, resource-efficient embedded systems.


Grant
Agency: Cordis | Branch: FP7 | Program: CP-FP | Phase: HEALTH-2007-1.3-3 | Award Amount: 3.78M | Year: 2008

The goal of the OpenTox project is to develop a predictive toxicology framework with unified access to toxicological data, (Q)SAR models and supporting information. It will provide tools for the integration of data from various sources (public and confidential), for the generation and validation of (Q)SAR models, libraries for the development and integration of new (Q)SAR algorithms, and validation routines. OpenTox will attract toxicological experts without (Q)SAR expertise as well as model and algorithm developers. It will move beyond existing attempts to solve individual research issues by providing a flexible and user-friendly framework that integrates existing solutions and new developments. OpenTox will be relevant for REACH, as it gives risk assessors simple access to experimental data, (Q)SAR models and toxicological information that adheres to European and international regulatory requirements. OpenTox will be published as an open source project to allow a critical evaluation of its algorithms, to promote dissemination, and to attract external developers. Facilities for the inclusion of confidential in-house data and for accessing commercial prediction systems will be included. OpenTox will contain high-quality data and (Q)SAR models for chronic, genotoxic and carcinogenic effects. These are the endpoints with the greatest potential to reduce animal testing. The impact of OpenTox will however go beyond REACH and long-term effects, because it will be straightforward to create models for other endpoints (e.g., sensitisation, liver toxicity, cardiotoxicity, ecotoxicity). The proposed framework will support the development of new (Q)SAR models and algorithms by automating routine tasks, providing a testing and validation environment and allowing the easy addition of new data. For this reason we expect that OpenTox will lead to (Q)SAR models for further toxic endpoints and generally improve the acceptance and reliability of (Q)SAR models.


Grant
Agency: Cordis | Branch: FP7 | Program: MC-ITN | Phase: FP7-PEOPLE-2012-ITN | Award Amount: 3.48M | Year: 2013

One-dimensional nanostructures (1DNS) produced from various elemental (Si and Ge) and compound (III-V and II-VI) semiconductors are receiving increasing worldwide attention due to their unique properties and potential for a wide range of applications. They are the building blocks for single-photon emitters, third-generation solar cells and the monolithic integration of optoelectronic devices. 1DNS can be used to fabricate the smallest light-emitting devices and lasers. Despite recent progress, many fundamental and applied challenges still prevent the transfer of 1DNS from laboratories to large-scale industrial use. The proposed NanoEmbrace consortium assembles eight leading industry partners and ten internationally renowned institutions in materials science, engineering, chemistry, condensed matter physics and nanoscale device fabrication. The original vision of NanoEmbrace is to gain superior control and understanding of 1DNS and to transfer 1DNS from laboratory to industry. It is probably the first organised attempt to bring together all the competences and capabilities, experimental and theoretical, necessary to comprehend the mechanisms that govern the growth of 1DNS, which cannot all be described by existing models. We also aim to provide the highest quality multidisciplinary and cross-sectoral training to early-stage researchers (ESRs) in nanoscience to create the next generation of research and industry leaders. The ESRs joining NanoEmbrace will have a unique opportunity to enjoy close personal contact with internationally renowned experts, to put together an unprecedented, complex but unified overall understanding of the growth of 1DNS, and to develop the processes required to produce practical commercial devices. To deliver the highest quality of training to young talented researchers, NanoEmbrace has identified the key research themes: controlled synthesis, theoretical modelling, characterisation of 1DNS, and the integration of 1DNS into device fabrication.


Grant
Agency: Cordis | Branch: FP7 | Program: CP-FP-SICA | Phase: SPA.2010.3.2-01 | Award Amount: 685.41K | Year: 2010

Information on the outlook for yield and production of crops over large regions is essential for government services, food relief agencies, and international organizations monitoring world food production and trade. In 2007, imbalances in the global production of agricultural commodities caused market prices to peak. Against this background, the need for a global monitoring system for agricultural production is undisputed and is included in the priority themes for GEO. In Europe, agricultural monitoring has been implemented through the MARS Crop Yield Forecasting System (MCYFS) operated by JRC and also embedded in the European Union's GMES initiative. Recently, the MCYFS was extended and now includes the monitoring of crops in all of Russia, Central Asia and China. These regions are characterized by harsh winter conditions and warm, dry summer conditions. Winter crops in particular are affected by low temperatures during the winter, which determine whether rapid regrowth is possible in spring. However, the effects of winter kill are poorly described in the crop models used by the MCYFS. Earth observation data provide an opportunity to derive such information and can form a basis for real-time updating of wheat growth parameters in the MCYFS. Within the proposed project we plan to combine Earth observation capabilities with improvements in crop modeling approaches. Data from the Russian RESURS-DK and KMSS sensors, complemented with observations from the MERIS and MODIS sensors, should make it possible to obtain: 1) masks of winter-wheat planting; 2) time series of crop-specific variables; and 3) maps of (relative) winter kill damage. Moreover, recent advances in crop modeling allow EO data to be taken into account through data assimilation techniques. Validation of these principles must be carried out both at the local scale, with observed field data, and at the regional scale, by evaluating the adapted and default models against regional statistics.


Grant
Agency: Cordis | Branch: FP7 | Program: CSA | Phase: INFRA-2010-3.3 | Award Amount: 871.70K | Year: 2010

The FP7 EDGeS project successfully set up a production-level distributed computing infrastructure (DCI) consisting of more than 100,000 PCs from several volunteer and low-cost Desktop Grids, which have been connected to existing Service Grids (including EGEE, SEE-GRID, etc., with about 150,000 processors) based on the new 3G Bridge technology and application development methodology.

The main aim of the DEGISCO project is the further extension of the European DCI infrastructure already interconnected by EDGeS to International Cooperation Partner Countries (ICPC), in strong collaboration with local partners in ICPC countries on the one hand and European e-Infrastructure experts (including 3G Bridge know-how) on the other, and thus to reinforce the global relevance and impact of European distributed infrastructures.

The project will support the creation of new Desktop Grids in ICPC countries and the connection of these Grids to European DCIs and existing Service Grids in ICPC countries by employing 3G Bridge technology. Moreover, building on the solid expertise of the DEGISCO partners, the project will provide recommendations on best practices and define joint roadmaps for ICPC countries and Europe.

The application support activities of DEGISCO will support the use of the two dozen applications already ported on newly connected DCIs in ICPC countries and will support new applications.

The dissemination and training activities will promote, via various channels, such interoperation between Desktop Grid and Service Grid infrastructures on a global scale, leading to greater awareness among the general public of computational science and of distributed computing co-funded by the EC. As a result, more citizens, students, and companies are expected to donate resources to scientific purposes.

As part of these activities, an International Desktop Grid Federation will be operated, allowing for effective exchange of information through participation.


Grant
Agency: Cordis | Branch: FP7 | Program: MC-ITN | Phase: FP7-PEOPLE-2013-ITN | Award Amount: 3.88M | Year: 2013

The realization of efficient, cheap, reliable, scalable and portable terahertz (THz) radiation sources and detectors is one of the important objectives in modern applied physics. THz emitters and detectors have potential applications in biology, medicine, security and nondestructive in-depth imaging. However, none of the existing THz devices satisfies these application requirements. The project consortium, which includes teams from leading European universities (Durham, Vilnius, Paris-7, Exeter, St. Petersburg, Prague, Amiens and the University of Iceland), the Swiss company Alpes laser, and several industrial associated partners, proposes a broad range of new approaches aiming to bridge the terahertz gap: (i) polariton-based THz emission using microcavities in the strong coupling regime, (ii) new types and concepts of semiconducting materials for short-pulsed THz emission, (iii) carbon nanotubes and graphene as THz emitters and detectors, and (iv) the application of ferroelectric and multiferroic materials for THz devices. To achieve this objective, we are planning to educate and train a team of collaborating young physicists and device engineers able to conduct research and exploit its applications in this new area.


News Article | April 11, 2016
Site: www.nrl.navy.mil

Dr. Chandra Pande, a metallurgist at the U.S. Naval Research Laboratory, has been awarded ASM Honorary Membership for the year 2014 in recognition of his "distinguished service to the materials science and engineering profession by his pioneering and seminal research and teaching and its promotion to metallurgical community." ASM (formerly known as the American Society for Metals) established Honorary Membership in the Society in 1919 to recognize distinguished service to the materials profession, to ASM International, and to the progress of mankind. As an interesting note, Thomas A. Edison, who thought of the idea for a research facility like NRL almost 100 years ago, received this award in 1929, and Pande is now the first NRL researcher to receive the award. Pande is a Senior Scientist at NRL and a member of the Multifunctional Materials Branch. He joined NRL in 1980 after receiving his doctorate in physical sciences from Oxford University and after teaching materials science at the Indian Institute of Technology, New Delhi, and conducting research in physical metallurgy at Brookhaven National Laboratory. His main research interests are in materials characterization using electron, X-ray and neutron scattering techniques. In addition, he has developed advanced modeling techniques to analyze the relationship between observed microstructure and material properties of many materials, including nanomaterials and superconductors. Pande is a Fellow of ASM International, a Senior Member of TMS (The Metallurgical Society), and a Life Member of the American Physical Society. He has held numerous elected offices in professional societies, including Chair, Physical Metallurgy Committee of The Metallurgical Society (TMS), 1990 to 1992; Chair, Superconducting Materials Committee of TMS, 1997 to 1999; Chair, Bardeen Award Committee of TMS, 1999; Chair, Electrical, Magnetic and Optical Phenomena Committee of ASM, 1995 to 1999; and Chair, TMS Arora Award Panel, 2000 to 2013.
In addition, he has served as Associate Editor, International Journal of Electronic Materials, 1998 to 2000; as a member of the International Advisory Board of Review of Advanced Materials Science (Russian Academy of Sciences), 2000 to 2005; and is currently a member of the Board of International Materials Reviews. Pande has demonstrated excellence in the fields of physics and materials sciences and has attracted international recognition through over 175 refereed journal articles, 5 book chapters, 4 NRL Alan Berman Research Publication Awards and numerous invited talks at home and abroad. ASM International honored him with the Burgess Award (1995) for exceptional sustained research contributions over a five-year period.

About the U.S. Naval Research Laboratory: The U.S. Naval Research Laboratory provides the advanced scientific capabilities required to bolster our country's position of global naval leadership. The Laboratory, with a total complement of approximately 2,500 personnel, is located in southwest Washington, D.C., with other major sites at the Stennis Space Center, Miss., and Monterey, Calif. NRL has served the Navy and the nation for over 90 years and continues to advance research further than you can imagine. For more information, visit the NRL website or join the conversation on Twitter, Facebook, and YouTube.


News Article | October 26, 2016
Site: www.eurekalert.org

Agricultural biotechnologists at the Moscow Institute of Physics and Technology (MIPT) have developed a screening technique that can quickly determine how chemicals affect the growth of plants. Together with their colleagues from the Research Center of Biotechnology of the Russian Academy of Sciences, Lomonosov Moscow State University, and the University of Freiburg (Germany), the researchers at MIPT have devised a way to quickly identify chemicals that influence the rate at which plants grow. The scientists developed a compound screening technique that relies on pollen and offers an alternative to conventional seed germination assays, which tend to require over a week to complete. The research team used tobacco (Nicotiana tabacum) to test their proposed screening technique. They tested a total of 1,040 compounds in just two hours and identified 65 chemicals that either promoted or inhibited the growth of tobacco. Previously, it would have taken weeks, if not months, of painstaking work to obtain the same results using a less effective compound screening procedure. By contrast, the new technique will allow researchers to spot chemical modulators of plant growth and development within very short periods of time (more details are available in BMC Plant Biology). "This method enables us to rapidly identify completely new molecular targets and mechanisms of plant growth that can be used to selectively slow down the growth of unwanted plants and stimulate the growth of cultivated species--which we currently see as the aim of the research conducted by our international collaboration," says Sergey Leonov, the head of the Laboratory for the Development of Innovative Drugs at MIPT. To identify just one or two compounds that affect plant development, tens or even hundreds of thousands of compounds have to be tested. At present, studying the influence of chemicals takes about one month per compound.
One might call the technique that is currently in use "layman's screening". It involves growing seedlings in a medium containing the chemical being tested. You might imagine yourself doing something similar in your backyard: it is a bit like using a new fertilizer and waiting around for two weeks or so to see if it works. Except that you would have to test a thousand of them to find the best compound. To determine whether the molecules of a given chemical affect plant growth, it is necessary to find out how long it takes for seed germination to complete and for the shoots to grow. The scientists hypothesized that seedlings growing in a backyard undergo processes similar to those involved in pollen germination and the growth of pollen tubes, and that both rely on the activation of the same genes. This implies that pollen grains and seeds can have shared targets in growth-regulating pathways and, as a result, be affected by the same chemicals. Compared to seeds, pollen germinates much more quickly (in about two hours), which makes it convenient for chemical compound screening: an experiment can be completed within hours instead of weeks. Opting for the pollen-tube-based assay yields significantly higher throughput: with this technique, it is possible to test around 10,000 compounds (and select the most potent ones) within a week. Furthermore, the process no longer involves a human observer; instead, the results are based on computer-aided digital image analysis measuring the area occupied by pollen tubes. The whole process is automated, with most of the work performed by the machine. It starts with scientists suspending the pollen in a liquid germination medium, transferring it into multiwell plates, and introducing the test chemicals into the wells. After that, all that needs to be done is to press the right buttons.
An automated microscope photographs the contents of each well at the beginning of the experiment and at different stages in the development of pollen. Every image is processed by specialized software, which was developed by the researchers from MIPT in partnership with Molecular Devices LLC. It is down to the software to ultimately decide if a given chemical acts as a growth modulator or not. This is accomplished by calculating the area occupied by the pollen in each of the 384 wells per plate at the beginning and at the end of the experiment. These figures are compared to the corresponding ones measured for the controls grown without any added chemicals. If the total area occupied by the pollen grain and tube in a certain well is larger than that of the controls, the machine labels the relevant chemical as a growth stimulator, whereas a smaller area (relative to the controls) is seen as an indication of growth inhibitory properties. Knowing the compounds that influence the rate of development of different plants, as well as the corresponding molecular mechanisms of plant growth that are targeted by these compounds will enable biotechnologists to selectively inhibit the growth of pests and promote the growth of crops. This research was carried out as part of the 5-Top100 Russian Academic Excellence Project launched by the government with the aim of creating long-term competitive advantages for the best Russian universities on the global education market.
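The decision rule described above, comparing the pollen-covered area in each well against untreated controls and labelling the compound accordingly, can be sketched in a few lines of code. This is an illustrative simplification: the function names and the tolerance threshold are assumptions, and the actual analysis is performed by proprietary software developed with Molecular Devices.

```python
# Sketch of the screening decision rule described in the article:
# a compound is called a growth stimulator or inhibitor by comparing
# the pollen-tube area gained in its well with the gain in control wells.
# All names and the 20% threshold are illustrative assumptions.

def classify_compound(area_start, area_end, control_growth, threshold=0.2):
    """Label a compound by pollen-area growth relative to controls.

    area_start, area_end -- pollen-covered area in the test well
    control_growth       -- mean area gain in untreated control wells
    threshold            -- fractional deviation treated as a real effect
    """
    growth = area_end - area_start
    if growth > control_growth * (1 + threshold):
        return "stimulator"
    if growth < control_growth * (1 - threshold):
        return "inhibitor"
    return "inactive"

def screen_plate(wells, control_growth):
    """Classify every compound on a plate; wells maps name -> (start, end) areas."""
    return {name: classify_compound(a0, a1, control_growth)
            for name, (a0, a1) in wells.items()}
```

For example, with a mean control gain of 100 area units and a 20% threshold, a well growing from 50 to 200 units (a gain of 150) would be labelled a stimulator, while a gain of 70 would be labelled an inhibitor.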


News Article | November 30, 2016
Site: www.eurekalert.org

A team of scientists from the N. D. Zelinsky Institute of Organic Chemistry of the Russian Academy of Sciences (RAS), the N. K. Kol'tsov Institute of Developmental Biology of the RAS, and Immune Pharmaceuticals LLC, led by MIPT's Prof. Alexander Kiselyov, has synthesized an antitumor compound that could be used to fight chemoresistant cancer. The research findings were published in the European Journal of Medicinal Chemistry. The scientists synthesized a range of new compounds and evaluated their anticancer effect using sea urchin embryos and human cancer cell lines. One molecule proved potent and selective enough to have an effect even on a chemoresistant cancer--ovarian carcinoma. The synthesized compounds belong to a class known as aminoisothiazoles. "We made the decision to experiment with aminoisothiazoles because compounds of this class exhibit a diverse range of pharmacological and biological activities. This led us to expect that, with the appropriate functional groups, they might act as anticancer agents," comments Alexander Kiselyov. The proposed approach enables a straightforward synthesis of the target compounds in high yields. The developed reaction sequence involves only six steps and is based on readily available reagents. To assess antitumor activity, the researchers used in vitro assays based on human cancer cells, as well as the in vivo sea urchin embryo model developed and validated by the team earlier. Of the 37 synthesized compounds, 12 were found either to reduce the proliferation rate of cancer cells with varying potencies or to block their division completely, leading to cancer cell death. The effect of these antitumor compounds is attributed to their capacity to destroy microtubules, which are involved in cell division (mitosis). Microtubules are made of a protein called tubulin, which can be targeted by the anticancer agents, causing the degradation of the microtubule structure.
The potency of the synthesized compounds targeting microtubules was further assessed using sea urchin embryos, as well as a panel of human cancer cells from breast adenocarcinoma, melanoma, and ovarian and lung tumors. Sea urchin embryos have been shown by the team to be a good model for studying specific tubulin-binding agents. Such agents cause embryos to rapidly rotate, as opposed to moving in the ordinary way (cf. the left and right sides of the animation). This effect can be easily observed using a light microscope, enabling the scientists to evaluate the anticancer potential of a compound within a short amount of time. Moreover, the team found sea urchin embryos to be more sensitive toward the identified agents than cancer cells. A difference between the durations of the mitotic cycles of sea urchin embryos and cancer cells (40 minutes vs. 24 hours) may lead to distinct effects of the small molecules on tubulin dynamics and could thus account for this phenomenon. The molecule featuring 3-thiophene and para-methoxyphenyl substituents was identified as the most potent anticancer agent in the studies. According to the researchers, it is this combination of functional groups, as well as the unique topology of the molecule, that is responsible for its distinctive activity. Specifically, the new agent displays antitubulin properties: it blocks cell division by affecting microtubules and thus can destroy the chemoresistant cells of ovarian carcinoma. The scientists plan to use crystallography data and structure modeling techniques to study microtubule degradation in more detail in order to identify the sites, or "spots", where the active compounds bind to tubulin. In their earlier study, the researchers used substances isolated from dill and parsley seeds to synthesize glaziovianin A, another anticancer compound, along with its structural analogs.


News Article | December 12, 2016
Site: www.eurekalert.org

Scientists from the Institute of Physics and Technology of the Russian Academy of Sciences and MIPT have let two electrons loose in a system of quantum dots to create a quantum computer memory cell of a higher dimension than a qubit (a quantum bit). In their study published in Scientific Reports, the researchers demonstrate for the first time how quantum walks of several electrons can help to implement quantum computation. "By studying the system with two electrons, we solved the problems faced in the general case of two identical interacting particles. This paves the way toward compact high-level quantum structures," comments Leonid Fedichkin, Expert at the Russian Academy of Sciences, Vice-Director for Science at NIX (a Russian computer company), and Associate Professor at MIPT's Department of Theoretical Physics. In a matter of hours, a quantum computer would be able to hack through the most popular cryptosystem used even in your web browser. As far as more benevolent applications are concerned, a quantum computer would be capable of molecular modeling that takes into account all interactions between the particles involved. This in turn would enable the development of highly efficient solar cells and new drugs. To have practical applications, a quantum computer needs to incorporate hundreds or even thousands of qubits. And that is where it gets tricky. As it turns out, the unstable nature of the connection between qubits remains the major obstacle preventing us from using quantum walks of particles for quantum computation. Unlike their classical analogs, quantum structures are extremely sensitive to external noise. To prevent a system of several qubits from losing the information stored in it, liquid nitrogen (or helium) needs to be used for cooling. Plenty of schemes have been proposed for the experimental realization of a separate qubit. In an earlier study, a research team led by Prof. 
Fedichkin demonstrated that a qubit could be physically implemented as a particle "taking a quantum walk" between two extremely small semiconductors known as quantum dots, which are connected by a "quantum tunnel." From the perspective of an electron, the quantum dots represent potential wells. Thus, the position of the electron can be used to encode the two basis states of the qubit--|0⟩ and |1⟩--depending on whether the particle is in one well or the other. Rather than sit in one of the two wells, the electron is smeared out between the two different states, taking up a definite position only when its coordinates are measured. In other words, it is in a superposition of two states. If an entangled state is created between several qubits, their individual states can no longer be described separately from one another, and any valid description must refer to the state of the whole system. This means that a system of three qubits has a total of 8 basis states and is in a superposition of them: A|000⟩+B|001⟩+C|010⟩+D|100⟩+E|011⟩+F|101⟩+G|110⟩+H|111⟩. By influencing the system, one inevitably affects all of the 8 coefficients, whereas influencing a system of regular bits only affects their individual states. By implication, n bits can store n variables, while n qubits can store 2ⁿ variables. Qudits offer an even greater advantage, since n four-level qudits (aka ququarts) can encode 4ⁿ, or 2ⁿ×2ⁿ, variables. To put this into perspective, 10 ququarts store approximately 100,000 times more information than 10 bits. With greater values of n, the zeros in this number start to pile up very quickly. In this study, Alexey Melnikov and Leonid Fedichkin obtain a system of two qudits implemented as two entangled electrons quantum-walking around the so-called cycle graph. To make one, the scientists had to "connect the dots" forming a circle (once again, these are quantum dots, and they are connected by the effect called quantum tunneling).
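The capacity figures quoted above are simple exponent arithmetic, and can be verified with a short sketch (this illustration is ours, not part of the study):

```python
# Illustrative check of the capacity figures quoted above:
# n qudits with a given number of levels span levels**n basis states,
# so n qubits (2-level) span 2**n states and n ququarts (4-level) span 4**n.

def basis_states(levels, n):
    """Number of basis states of n qudits with the given number of levels."""
    return levels ** n

qubits_10 = basis_states(2, 10)    # 2**10  = 1,024
ququarts_10 = basis_states(4, 10)  # 4**10  = 1,048,576

# In the article's framing, 10 classical bits hold 10 variables,
# so 10 ququarts hold roughly 100,000 times more.
ratio = ququarts_10 / 10
```

With these definitions, `ratio` comes out to about 104,858, matching the article's "approximately 100,000 times more" for 10 ququarts versus 10 bits.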
The entanglement of the two electrons is caused by the mutual electrostatic repulsion experienced by like charges. It is possible to create a system of even more qudits in the same volume of semiconductor material. To do this, it is necessary to connect quantum dots in a pattern of winding paths and have more wandering electrons. The quantum walks approach to quantum computation is convenient because it is based on a natural process. Nevertheless, the presence of two identical electrons in the same structure was a source of additional difficulties that had remained unsolved. The phenomenon of particle entanglement plays a pivotal role in quantum information processing. However, in experiments with identical particles, it is necessary to distinguish so-called false entanglement, which can arise between electrons that are not interacting, from genuine entanglement. To do this, the scientists performed mathematical calculations for both cases, viz., with and without entanglement. They observed the changing distribution of probabilities for the cases with 6, 8, 10, and 12 dots, i.e., for a system of two qudits with three, four, five, and six levels each. The scientists demonstrated that their proposed system is characterized by a relatively high degree of stability. It has been a long time since people first set their hearts on building a universal quantum computer, but so far we have been unable to connect a sufficient number of qubits. The work of the Russian researchers brings us one step closer to a future where quantum computations are commonplace. And although there are algorithms that quantum computers could never accelerate, others would still benefit enormously from devices able to exploit the potential of large numbers of qubits (or qudits). These alone would be enough to save us a couple of thousand years.
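The setting of the study, electrons taking quantum walks on a cycle of quantum dots, can be illustrated with a minimal single-particle sketch (the actual work tracks two interacting electrons, which is substantially harder; everything below is a simplified, assumed model, not the authors' code):

```python
# Minimal continuous-time quantum walk of a single particle on a cycle
# of N quantum dots. The Hamiltonian couples neighbouring dots, as a
# crude stand-in for quantum tunneling; a simplified illustration only.
import numpy as np

def cycle_walk_probabilities(n_dots, t, start=0):
    """Probability of finding the walker at each dot after time t."""
    # Hopping Hamiltonian of the cycle graph: -1 between neighbours.
    h = np.zeros((n_dots, n_dots))
    for i in range(n_dots):
        h[i, (i + 1) % n_dots] = h[(i + 1) % n_dots, i] = -1.0
    # Evolve the localized initial state under U = exp(-iHt),
    # computed via the eigendecomposition of the symmetric H.
    vals, vecs = np.linalg.eigh(h)
    u = vecs @ np.diag(np.exp(-1j * vals * t)) @ vecs.conj().T
    psi = u[:, start]          # column of U = evolved basis state |start>
    return np.abs(psi) ** 2    # Born-rule probabilities

probs = cycle_walk_probabilities(n_dots=6, t=1.0)
```

Because the evolution is unitary, the probabilities always sum to one, and the symmetry of the cycle makes the distribution mirror-symmetric about the starting dot; both properties are easy sanity checks on the sketch.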


On Thursday, December 1st 2016, the Prix Galien Russia Awards Committee honored excellence in research, development and innovation in the biopharmaceutical industry at its third awards ceremony. The gala dinner was held in the presence of Sergey Tsyb, Vice Minister of Industry and Trade of the Russian Federation; Natalia Sanina, First Vice Chairman of the State Duma Healthcare Committee; Mikhail Murashko, Head of Roszdravnadzor, the National Service of Control in Healthcare; Sergey Muravev, Director of the Department of International Cooperation and Public Relations; and Igor Lanskoy, Advisor to the Minister of Health of the Russian Federation, among others. For the third time in its history, the Prix Galien Russia Awards Committee, a Jury of 15 unrivalled Russian scientists and doctors, including academicians of the Russian Academy of Sciences, recognized leading efforts in advancing the human condition through biopharmaceutical innovation in six categories: Best Orphan Drug, Best Pharmaceutical Agent, Best Biotechnology Product, Best Medical Technology, Best Research in Russia and Best Russian Product. The Prix Galien Russia 2016 awards were presented to the following winners: In the category of Best Orphan Drug, the award was presented to Amgen's Blinatumomab, for the treatment of Philadelphia chromosome-negative (Ph-) relapsed or refractory B-precursor acute lymphoblastic leukemia (ALL). Blinatumomab overcomes the effect of tumor escape from the immune system. Owing to its unique immune mechanism of action, blinatumomab does not inhibit hematopoiesis; this sets it apart from cytotoxic agents, whose myelosuppressive effect limits their therapeutic options. The Medal was presented to Oleg Paroshin, CEO of Amgen Russia, by Ekaterina Zakharova, President of the Association of Rare Diseases.
The 2016 Prix Galien Russia Award for Best Pharmaceutical Agent was presented to Janssen (Pharmaceutical Companies of Johnson & Johnson) for Imbruvica® (ibrutinib) for chronic lymphocytic leukaemia. Ibrutinib is the first BTK inhibitor to offer patients a novel, targeted treatment that combines unparalleled efficacy and good tolerability with the convenience of oral dosing. In the category of Best Biotechnology Product, the 2016 Prix Galien Russia Award was presented to Bristol-Myers Squibb's Yervoy® (ipilimumab) for the treatment of previously treated unresectable or metastatic melanoma. The 2016 Prix Galien Russia Award for Best Medical Technology was presented to Abbott's Absorb Bioresorbable Vascular Scaffold. Absorb is the world's first drug-eluting bioresorbable vascular scaffold (BVS) for the treatment of coronary artery disease. Absorb is designed to open a blocked heart vessel in the same way as a traditional metallic stent and then dissolve naturally. In the category of Best Research in Russia, the award went to Valery Charushin, Oleg Chupakhin and Vladimir Rusinov for their work on the development of a new generation of antivirals of the azoloazine family. The 2016 Prix Galien Russia Award for Best Russian Product was presented to DNA-Technology and the Federal State Budget Institution "Research Center for Obstetrics, Gynecology and Perinatology" of the Ministry of Healthcare of the Russian Federation for "Femoflor®" + "ImmunoQuantex®", for the differential and comprehensive diagnosis of clinical conditions connected with dysbiotic disorders and inflammatory conditions of the lower reproductive tract in women. On the occasion of World HIV Day, the Jury also honoured E. Zvartau, E. Krupitsky, D. Lioznov, G. Woody and J. Samet with a special distinction for their work on the prevalence of HIV in vulnerable and special populations of the Russian Federation.
Reading a letter from Veronika Skvortsova, Igor Lanskoy, adviser to the minister, declared that "for many years, this has been one of the most coveted awards in the field of biopharmaceutical research, opening new perspectives for the improvement of medical science". Mikhail Murashko, presenting the medal for Best Research in Russia, declared: "healthcare cannot develop without innovations, and the Prix Galien award is a successful example of support for new improvements in biomedical science, the main purpose of which is to provide patients with quality and effective medical products". Representing the State Duma for the first time at the event, Natalia Petrovna Sanina declared: "Prix Galien is not just an award, it is a movement aimed at searching for and promoting scientific innovations that contribute to people's health".


News Article | August 31, 2016
Site: www.nature.com

Crowded streets in Moscow boast some unusual adverts. Instead of touting property or consumer goods, they offer services for improving one's h-index and other metrics that will boost the impact of published scientific papers. A thriving market of academic fraud is one ugly side of post-Soviet science. Lamenting former prowess in everything from space exploration to nuclear power is another. Over the past 15 years, Russia has doubled its total spend on research and development (R&D). But this increase has failed to make up for the calamitous decline in Russian science during the 1990s, following the disintegration of the Soviet Union. According to the Russian Academy of Sciences (RAS), total R&D spending in 2013 still hovered about 40% below 1990 levels. And in 2014, the country spent 1.19% of its gross domestic product (GDP) on R&D, according to estimates by the Organisation for Economic Co-operation and Development (OECD) — only a slight improvement on the 1.05% it invested in 2000, and well below the 2.38% OECD average. But funding issues alone cannot explain why Russia has been on the sidelines of the international research stage for years — a lack of competition, poor infrastructure and an inability to retain talent have all played their part. Now a stumbling economy threatens the modest recovery in science made in the past decades, and the chances of reform. In 2014, Russia's R&D spending rose to a post-Soviet high of US$38.2 billion (in 2010 dollars). But Victor Kalinushkin, head of the RAS's trade union and a laser researcher at the A. M. Prokhorov General Physics Institute in Moscow, says that the current level of funding “is a minimal survival level” that does not allow for development, “keeping up with the others or compensating the losses incurred in the 1990s”. 
Twenty-five years into a market economy, R&D is still a state-dominated affair: from 2000 to 2014, the government's share of funding rose by 14%, to 69%, but private funding dropped by a few per cent. By comparison, in countries such as the United States and China about 75% of R&D funding comes from non-government sources. “Regrettably, we have failed to create the stimuli needed for the private sector to at least maintain the level of investment,” says Irina Dezhina, a science-policy researcher at Moscow's Institute for the Economy in Transition. And we cannot change the situation without large-scale economic reforms, she says. Instead of setting new rules to distribute the additional government funds through competitive grants, “this new money started pouring into the old system”, says Mikhail Gelfand, deputy director of the Institute for Information Transmission Problems in Moscow. “It was spread thin,” he says. “We have failed to build a new system, and the one we have is inelastic relative to investment. Funding has increased significantly over the last 15 years, and productivity, however you choose to measure it, has hardly increased at all.” The transition to a competitive-grant-based funding model has been slow. In 2014, just 11.1% of total R&D spending went on research foundation grants, barely up from 9.5% in 2010. Most R&D money is allocated through institution budgets. Adding to funding woes is the fact that precious extra resources are often distributed through politicking rather than through a transparent process. An analysis of 2016 budget plans (go.nature.com/2aisci7, in Russian) by Evgeny Onishchenko, a trade-union activist and semiconductor researcher at the Lebedev Physical Institute in Moscow, revealed that the Kurchatov Institute will receive 40% more funding this year than it did in 2015. 
The institute is headed by physicist Mikhail Kovalchuk, who is the brother of the businessman Yury Kovalchuk — a close friend of Russian President Vladimir Putin. Alexei Khokhlov, vice-rector at Lomonosov Moscow State University and chair of the advisory council for the Russian Ministry of Education and Science, says that the Federal Agency for Scientific Organizations, which was created in a swift reform of the RAS in 2013 to manage the network of institutes, has been "timid" in its efforts to change the status quo of competitive funding. Khokhlov thinks that the amount of research funding is not the central issue. Instead, he says that the lingering Soviet idea that every scientist should have a permanent salaried position that is largely independent of job performance, known as a stavka and much like a tenured professorship, has prevented meaningful reforms. The bulk of research spending will be paying for "payroll for positions set years ago", he says. He also notes that this rigid personnel structure means that labs lack autonomy, allowing little room for scientists to work in emerging fields. This is reflected in how Russian science is represented internationally: the bulk of articles indexed by Web of Science and Scopus are related to physics, astronomy, chemistry, engineering and maths — all areas with a strong Soviet legacy. But the country is poorly represented in disciplines such as medicine or neuroscience. In the past 20 years or so, an exodus of researchers from Russia has contributed to the scientific workforce falling by about one-third, to some 373,900 people. This follows a previous catastrophic loss between 1989 and 1994, when a similar decrease happened in just five years. Pinning down the scale of cross-border 'brain drain' is tricky: estimates range from 30,000 to 200,000 scientists having left since the early 1990s.
A small proportion of the researchers have returned, encouraged by initiatives such as a mega-grant programme to establish new labs in universities and research institutes — a total of 162 projects have been supported since 2010, in areas such as metamaterials, climate change and stem-cell research. Gelfand thinks that, apart from money, the science community's biggest problem is the lack of "a set of career paths with clear rules" for researchers. He says that there are too few opportunities for researchers to establish independent research groups, as well as little mobility in a system that, owing to its inflexibility and red tape, "does not encourage changing research areas, labs or institutes". RAS president Vladimir Fortov has publicly stated that a renewed outflow of younger researchers in the past couple of years is evidence of this problem, a sentiment shared by Gelfand and others. Konstantin Severinov, a molecular biologist at Rutgers University in New Brunswick, New Jersey, who returned to Moscow in 2004 to also work at the Skolkovo Institute of Science and Technology, has compared working in Russia to "swimming in a pool without water". He says that staying in the country precludes young and promising scientists from competing internationally. By 2015, Russia had planned to increase its Web of Science share to an inexplicably precise 2.44%, a target set by Putin's 2012 decree. This did not happen, but the focus on performance indicators is unlikely to go away. Dezhina says that Russia places too much importance on metrics. "Most science-led economies have already passed this stage and now have a stronger appreciation for the role of experts," she says. Kalinushkin also questions whether replacing peer-review assessments with statistics and metrics is a good strategy for evaluating scientists' work. Resistance to what is seen as excessive focus on 'scientometrics' has put a spotlight on those who exploit loopholes in the system.
One recent case of fraud involved two senior researchers at a biophysics institute who added irrelevant references into student conference papers that they edited to boost their own metrics. The two were exploiting a national science-citation index, which was established in 2006 as a way to measure the impact of Russian science that was largely invisible to the international audience. One analysis showed that some 90% of papers published by Russian scientists in 2014 were not indexed in Web of Science or Scopus ( et al. The Economics of Science 1, 6–14 (2015); in Russian). With little reaction from governing bodies to cases of academic fraud and plagiarism, some scientists have taken it upon themselves to expose university boards that routinely rubber-stamp plagiarized or ghostwritten PhD dissertations. Gelfand, who co-manages Dissernet, a volunteer organization dedicated to exposing this fraud, says that the 'immune response' by government has been weak, and groups such as his are in an arms race with unscrupulous people who are abusing the systems designed to protect the integrity of science. “Reputation as a social institution has been destroyed in Russia,” he says. “Nothing is shameful.” During a panel discussion at the St. Petersburg International Economic Forum in June 2016, historian Loren Graham, of the Massachusetts Institute of Technology in Cambridge, was asked if Russia can compete with the rest of the world in science and innovation. He reiterated the “contradictory and strange” fact that Russians tend to be excellent inventors, able to come up with new ideas, but very poor innovators, not able to make commercial successes of those ideas. Russian leaders prohibit demonstrations, suppress political opponents and twist the legal system, creating a regressive authoritarian regime that does not encourage entrepreneurship, Graham told the forum audience. 
“They want the milk without the cow.” Until such policies change, he concluded, “the scientific genius of the Russian people, for which I have such respect, will remain economically unfulfilled”. But things may get worse before they get better, as the Russian recession looks likely to deepen. In June, the Moscow-based Higher School of Economics said that it could see no signs of recovery in the economy. The recent turbulence has led to a 10% cut in science spending in 2016, and further cuts are expected to follow. Limited funding will require better prioritizing. Khokhlov thinks that any attempt to do this should reflect challenges that are specific to Russia. The world in general “will do just fine without us”, he says. He cites a statement by the advisory council for the Russian Ministry of Education and Science, which calls for Russia to prioritize communications, transportation technologies that are appropriate for the country's size, and a transition to automation. Gelfand, however, highlights the importance of a more results-based approach. “I think there's one top priority in Russian science now,” he says. “And that is protecting everything that still has a heartbeat.” The country's leaders have paid lip service to the importance of research. Indeed, in a speech at the RAS, just two weeks after his first inauguration in May 2000, Putin called science “a crucial resource for economic growth”. Sixteen years later, although the economy has grown, Russia's scientific enterprise has a long way to go to regain past glories.


News Article | November 30, 2016
Site: www.eurekalert.org

In the 12 years since David Gross won his Nobel Prize, the UC Santa Barbara theoretical physicist has been celebrated myriad times. His latest accolades come from China and Russia. This year, the University of Chinese Academy of Sciences awarded Gross an honorary doctorate degree, an event so rare in that country that it requires government approval. And more recently, the Russian Academy of Sciences confirmed Gross as a foreign member and awarded him the Medal of Honor in recognition of his "outstanding and fundamental contributions to quantum chromodynamics." Gross shared the Nobel Prize in physics with David Politzer and Frank Wilczek for their work in that field. Quantum chromodynamics is the theory of the nuclear force that holds quarks together and binds them inside protons and neutrons. "It is wonderful to see David recognized for his groundbreaking work and his continued impact on theoretical physics around the world," said Lars Bildsten, director of UCSB's Kavli Institute for Theoretical Physics (KITP). "David is a strong advocate for the value of physics as a pillar in fundamental research, and I look forward to his future term as president of the American Physical Society." (Gross just began a four-year term at the APS, where he is currently vice president.) Gross has longtime scientific ties to both China and Russia. Since winning the Nobel in 2004, he has worked with Chinese physicists to improve the country's Institute of Theoretical Physics and has been an adviser to a project that will build a supercollider at least twice the size of the Large Hadron Collider in Switzerland. When the Soviet Union still existed, Gross was a frequent visitor. Last year, he was invited to the Joint Institute for Nuclear Research in Dubna, outside of Moscow, to mark the building of a new heavy-ion accelerator that will be able to create quark matter. 
"I did my original work formulating the theory of these quarks and quantum chromodynamics, which is well-tested in many regimes," said Gross, who is a permanent member of the KITP and its Chancellor's Chair Professor of Theoretical Physics. "Dubna is going to create an intense heavy-ion beam -- with less energy and lower temperature than the Large Hadron Collider but denser -- so they can probe a different regime of the physics of this quark matter, one that potentially could be very interesting." Gross received his bachelor's and master's degrees from Hebrew University in Jerusalem in 1962 and his doctorate from UC Berkeley in 1966. A junior fellow at Harvard University before moving to Princeton University, he joined UCSB in 1997 as director of the KITP, where he served until 2012. In addition to the Nobel Prize, Gross' many honors and awards include the J.J. Sakurai Prize for Theoretical Particle Physics from the American Physical Society; a MacArthur Fellowship; the Dirac Medal from the International Centre for Theoretical Physics; the Oskar Klein Medal of the Royal Swedish Academy of Sciences; the High Energy and Particle Physics Prize from the European Physical Society; and the Grande Médaille d'Or de l'Académie des Sciences, France. Gross has delivered lectures around the world and holds numerous honorary doctorates and professorships. He has written hundreds of articles as well as conference proceedings and book chapters.


News Article | November 30, 2016
Site: www.rdmag.com

A team of scientists from the N. D. Zelinsky Institute of Organic Chemistry of the Russian Academy of Sciences (RAS), the N. K. Kol'tsov Institute of Developmental Biology of the RAS, and Immune Pharmaceuticals LLC, led by MIPT's Prof. Alexander Kiselyov, has synthesized an antitumor compound that could be used to fight chemoresistant cancer. The research findings were published in the European Journal of Medicinal Chemistry. The scientists synthesized a range of new compounds and evaluated their anticancer effect using sea urchin embryos and human cancer cell lines. One molecule proved potent and selective enough to act even on a chemoresistant cancer -- ovarian carcinoma. The synthesized compounds belong to a class known as aminoisothiazoles. "We made the decision to experiment with aminoisothiazoles because compounds of this class exhibit a diverse range of pharmacological and biological activities. This led us to expect that, with the appropriate functional groups, they might act as anticancer agents," comments Alexander Kiselyov. The proposed approach enables a straightforward synthesis of the target compounds in high yields. The developed reaction sequence involves only six steps and is based on readily available reagents. To assess antitumor activity, the researchers used in vitro assays based on human cancer cells, as well as the in vivo sea urchin embryo model developed and validated by the team earlier. Of the 37 synthesized compounds, 12 were found to slow the proliferation of cancer cells with varying potencies or to block their division completely, leading to the death of the cancer cells. The effect of these antitumor compounds is attributed to their capacity to destroy microtubules, which are involved in cell division (mitosis). Microtubules are made of a protein called tubulin, which can be targeted by anticancer agents, causing the degradation of the microtubule structure.
The potency of the synthesized compounds targeting microtubules was further assessed using sea urchin embryos, as well as a panel of human cancer cells from breast adenocarcinoma, melanoma, and ovarian and lung tumors. Sea urchin embryos have been shown by the team to be a good model for studying specific tubulin-binding agents: such agents cause the embryos to rotate rapidly, as opposed to moving in the ordinary way (cf. the left and right sides of the animation). This effect can easily be observed with a light microscope, enabling the scientists to evaluate the anticancer potential of a compound within a short amount of time. Moreover, the team found sea urchin embryos to be more sensitive to the identified agents than cancer cells. A difference between the durations of the mitotic cycles of sea urchin embryos and cancer cells (40 minutes vs. 24 hours) may lead to distinct effects of the small molecules on tubulin dynamics and could thus account for this phenomenon. The molecule featuring 3-thiophene and para-methoxyphenyl substituents was identified as the most potent anticancer agent in the studies. According to the researchers, it is this combination of functional groups, together with the unique topology of the molecule, that is responsible for its activity. Specifically, the new agent displays antitubulin properties: it blocks cell division by affecting microtubules and can thus destroy the chemoresistant cells of ovarian carcinoma. The scientists plan to use crystallography data and structure-modeling techniques to study microtubule degradation in more detail in order to identify the sites, or "spots," where the active compounds bind to tubulin. In an earlier study, the researchers used substances isolated from dill and parsley seeds to synthesize glaziovianin A, another anticancer compound, along with its structural analogs.


News Article | October 23, 2015
Site: www.nature.com

A biology institute at Russia’s largest and most prestigious university has instructed its scientists to get all research manuscripts approved by the security service before submitting them to conferences or journals. The instructions, which come in response to an amended law on state secrets, appear in minutes from a meeting held on 5 October at the A. N. Belozersky Institute of Physico-Chemical Biology at Lomonosov Moscow State University (MSU). The Russian government says that the amendment is not designed to restrict the publication of basic, non-military research. But scientists say that they believe institutes across the country are issuing similar orders. “This is a return to Soviet times when in order to send a paper to an international journal, we had to get a permission specifying that the result is not new and important and hence may be published abroad,” says Mikhail Gelfand, a bioinformatician at MSU. In 1993, the government passed a law obliging scientists in Russia to get permission from the Federal Security Service (FSB) before publishing results that might have military or industrial significance. This mainly covered work that related to building weapons, including nuclear, biological and chemical ones. However, in May, President Vladimir Putin used a decree to expand the scope of the law to include any science that can be used to develop vaguely defined “new products”. The amendment was part of a broader crackdown that included declaring the deaths and wounding of soldiers during peacetime a secret; this was prompted by accusations that Russian soldiers are involved in conflict in Ukraine. Since then, rumours have emerged that Russian universities and institutes are demanding that manuscripts be approved before submission to comply with the amendment. The minutes from the Belozersky Institute meeting confirm this. 
“Be reminded that current legislation obliges scientists to get approval prior to publication of any article and conference talk or poster,” they say. They note that the rules apply to any publication or conference, foreign or national, and to all staff “without exception”. Scientists will need to seek permission from the university’s First Department — a branch of the FSB that exists at all Russian universities and research institutes, says Viacheslav Shuper, a geographer at the Russian Academy of Sciences in Moscow and MSU. He says that MSU geographers have been given similar instructions. The minutes tell scientists to seek permission “despite the obvious absurdity of the whole situation”. Vladimir Skulachev, director of the Belozersky Institute, did not respond to Nature’s queries as to how the changes might affect research in his department. Shuper and other academics say researchers across Russia have complained that their institutes are also asking for manuscript approval. “Many scientists in Russia don’t dare to speak openly,” says Shuper. “But I know that many are very unhappy about the degradation of their academic freedom.” Letting bureaucrats decide whether any piece of science is a state secret is not just nerve-wracking, but also burdensome, he says. For example, at some institutes, scientists who have written papers in English for foreign publication are obliged to translate them into Russian for the sake of the security service. The changes are also bad for science, says Fyodor Kondrashov, a Russian biologist at the Centre for Genomic Regulation in Barcelona, Spain. “The problem is that it appears that all scientific output is being treated as potentially classified,” he says. “This creates an unhealthy research climate with some scientists preferring not to share information — not to give a talk at a conference abroad, for example. 
I fear that the authorities will choose to apply this law selectively against their critics.” Sergey Salikhov, director of the Russian science ministry’s science and technology department, told Nature that the government does not intend the amendment to restrict the publication of basic research. He says that it is not ordering universities or security services to proactively enforce the law over civilian research. But the amendment leaves interpretation to the security services and science administrators, who tend to be over-zealous, says Gelfand. “Basically, anything new and potentially useful can now be interpreted to be a state secret,” says Konstantin Severinov, a molecular biologist with the Skolkovo Institute of Science and Technology, who graduated from MSU. The demand for approval runs counter to government efforts to strengthen and internationalize Russian science, says Severinov. The government aims to see 5 of the country’s universities enter the top 100 in the world rankings by 2020, and is keen to attract leading foreign scientists to Russia. Gelfand says that he will not comply with the rules imposed by his institute, and he encourages others to follow suit. “A sad sign of overall deterioration here is that many are sheepishly following any absurdity instilled by the bureaucrats,” he says. “I am going to ignore it and hope that a sufficient number of colleagues would do the same.”


News Article | October 28, 2016
Site: www.sciencedaily.com

Agricultural biotechnologists at the Moscow Institute of Physics and Technology (MIPT) have developed a screening technique that can quickly determine how chemicals affect the growth of plants. Together with their colleagues from the Research Center of Biotechnology of the Russian Academy of Sciences, Lomonosov Moscow State University, and the University of Freiburg (Germany), the researchers at MIPT have devised a way to quickly identify chemicals that influence the rate at which plants grow. The scientists developed a compound screening technique that relies on pollen and offers an alternative to the conventional seed germination assays, which tend to require over a week to complete. The research team used tobacco (Nicotiana tabacum) to test their proposed screening technique. To do this, they tested a total of 1,040 compounds in just two hours and identified 65 chemicals that either promoted or inhibited the growth of tobacco. Previously, it would have taken weeks, if not months of painstaking work to obtain the same results by using a less effective compound screening procedure. By contrast, the new technique will allow researchers to spot chemical modulators of plant growth and development within very short periods of time (more details are available in BMC Plant Biology). "This method enables us to rapidly identify completely new molecular targets and mechanisms of plant growth that can be used to selectively slow down the growth of unwanted plants and stimulate the growth of cultivated species -- which we currently see as the aim of the research conducted by our international collaboration," says Sergey Leonov, the head of the Laboratory for the Development of Innovative Drugs at MIPT. To identify just one or two compounds that affect plant development, tens or even hundreds of thousands have to be tested. At present, studying the influence of chemicals takes about one month per compound. 
One might call the technique that is currently in use "layman's screening." It involves growing seedlings in a medium containing the chemical that is being tested. You might imagine yourself doing something similar in your backyard: it is a bit like using a new fertilizer and waiting around for two weeks or so to see if it works. Except that you would have to test a thousand of them to find the best compound. To determine whether the molecules of a given chemical affect plant growth, it is necessary to find out how long it takes before seed germination is complete and the shoots have grown. The scientists hypothesized that seedlings growing in a backyard undergo processes that are similar to those involved in pollen germination and the growth of pollen tubes, and that they also rely on the activation of the same genes. This implies that pollen grains and seeds can have shared targets in growth regulating pathways and, as a result, be affected by the same chemicals. Compared to seeds, pollen germinates much more quickly (in about two hours), which makes it convenient for chemical compound screening, as it means that an experiment can be completed within hours instead of weeks. Opting for the pollen tube based assay results in significantly higher productivity: with this technique, it is possible to test around 10,000 compounds (and select the most potent ones) within a week. Furthermore, the process no longer involves a human observer; instead, the results are based on computer-aided digital image analysis measuring the area occupied by pollen tubes. The whole process is automated, with most of the work being performed by the machine. It starts with scientists suspending the pollen in a liquid germination medium, transferring it into multiwell plates, and introducing the test chemicals into the wells. After that, all that needs to be done is to press the right buttons. 
An automated microscope photographs the contents of each well at the beginning of the experiment and at different stages in the development of pollen. Every image is processed by specialized software, which was developed by the researchers from MIPT in partnership with Molecular Devices LLC. It is down to the software to ultimately decide if a given chemical acts as a growth modulator or not. This is accomplished by calculating the area occupied by the pollen in each of the 384 wells per plate at the beginning and at the end of the experiment. These figures are compared to the corresponding ones measured for the controls grown without any added chemicals. If the total area occupied by the pollen grain and tube in a certain well is larger than that of the controls, the machine labels the relevant chemical as a growth stimulator, whereas a smaller area (relative to the controls) is seen as an indication of growth inhibitory properties. Knowing the compounds that influence the rate of development of different plants, as well as the corresponding molecular mechanisms of plant growth that are targeted by these compounds will enable biotechnologists to selectively inhibit the growth of pests and promote the growth of crops. This research was carried out as part of the 5-Top100 Russian Academic Excellence Project launched by the government with the aim of creating long-term competitive advantages for the best Russian universities on the global education market.
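The scoring logic described above -- compare the area occupied by pollen in each well at the start and end of the run against untreated controls, then label the chemical -- can be sketched in a few lines. This is an illustrative reconstruction, not the actual Molecular Devices software: the well data, the function names and the 20% decision threshold are all assumptions made for the example.

```python
# Hypothetical sketch of the area-based classification described above.
# Real pipelines would extract areas from microscope images; here we
# start from already-measured (start, end) areas in arbitrary units.

def classify_compound(area_start, area_end, control_start, control_end,
                      threshold=0.2):
    """Label a test chemical by comparing pollen-tube area growth in a
    treated well against control wells grown without added chemicals."""
    well_growth = area_end / area_start            # fold-change with compound
    control_growth = control_end / control_start   # fold-change without it
    relative = well_growth / control_growth
    if relative > 1 + threshold:
        return "stimulator"   # pollen covered markedly more area than controls
    if relative < 1 - threshold:
        return "inhibitor"    # pollen covered markedly less area than controls
    return "inactive"         # within the tolerance band around the controls

# A few wells of a (hypothetical) 384-well plate: (area at start, area at end).
wells = {
    "A1": (100.0, 450.0),
    "A2": (100.0, 150.0),
    "A3": (100.0, 300.0),
}
control = (100.0, 300.0)

labels = {w: classify_compound(s, e, *control) for w, (s, e) in wells.items()}
```

Running this over all 384 wells per plate, at each imaging time point, is what lets the software flag growth modulators without a human observer.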


News Article | October 26, 2016
Site: www.newscientist.com

You could hear a pin drop. That’s how quiet the cosmos appears to be despite news that a year-old spike in radio signals could point toward an intelligent extraterrestrial civilisation. The signal, detected on 15 May 2015 by a radio telescope operated by the Russian Academy of Sciences, was so powerful it evoked a radio beacon built by an intelligent civilisation. It appeared to originate from the star HD 164595, in the constellation Hercules, which has one known planet roughly the size of Neptune. SETI astronomers across the globe have jumped into action to try to confirm that signal. But so far, there’s no sign of ET. In an informal report posted online this morning, Steve Croft at the Berkeley SETI Research Center reported how the Breakthrough Listen Initiative — a project that uses radio telescopes across the world to look for signs of intelligent life beyond Earth — searched for the signal in archived data. Unfortunately, no counterparts were spotted in the Naval Radio Astronomy Observatory Catalog — a null result, which is unexpected if the signal is real. This suggests that the Russian team was “either extremely lucky to detect this source in their observations, or that the transient is due to local interference or other calibration issues”, write Croft and his colleagues. Just in case, the team aimed the Green Bank Telescope in West Virginia toward the star on Sunday night. They didn’t detect any ongoing emission from the direction of the star, but they are careful to point out that this doesn’t mean there’s no extraterrestrial civilisation there. Seth Shostak of the SETI Institute and his colleagues also observed the star on Sunday and Monday nights with the Allen Telescope Array in northern California. The first night turned up empty and Shostak hasn’t yet received the results from the second night. 
And Douglas Vakoch, the president of METI International — a group that isn’t just looking for signs of intelligent extraterrestrial life, but would like to send messages to them as well — has made plans to swing the Optical SETI Observatory in Panama toward the star. Unfortunately, a series of thunderstorms has hit the area and are forecast to continue for several days. “Once the skies clear, we will have about an hour shortly after sunset each night to search for signals,” he says.


News Article | April 14, 2016
Site: phys.org

Snapshots 1 and 2 correspond to the fast formation of the initial complex on the PAS (EI); transition through the bottleneck (3) corresponds to the maximum of the pulling force; snapshot 4 reflects the final equilibrium position of the complex and is similar to the X-ray structure of the complex of C547 with mAChE. Credit: Archives of Biochemistry and Biophysics

Reversible inhibition of an enzyme -- an activity in which the inhibiting molecular entity (often a small chemical called a ligand or inhibitor) associates with and dissociates from the protein's binding site -- is a very fast process. During enzyme-substrate interaction, enzyme-inhibitor equilibrium is established within microseconds. Thus, classical reversible inhibitors are characterized by rapid on/off rates. However, a number of enzymes do not respond instantly to reversible inhibitors. In such cases, there is a slow onset of inhibition. This type of reversible inhibition is called slow-binding inhibition (SBI) and is characterized by slow establishment of the enzyme-inhibitor equilibrium. Because SBIs bind slowly to and dissociate slowly from their target, several kinetic parameters are useful to describe toxico/pharmacodynamic processes, enabling an understanding of the endogenous mechanisms of protection against SBI toxicants, and facilitating the discovery of new pharmacological countermeasures. These parameters also account for the efficacy of SBIs as drugs. Researchers from the Laboratory of Neuropharmacology, Kazan Federal University, and the Laboratory of Computer Modeling of Biomolecular Systems and Nanomaterials, Emanuel Institute of Biochemistry of the RAS, conducted a study dedicated to cholinesterases (ChEs), which display a slow onset of inhibition with certain inhibitors. The study highlights the pharmaco-toxicological importance of SBI of these enzymes. The fully integrated study included synthesis of the molecules, enzyme and inhibition kinetics, crystallography, molecular modeling, pharmacokinetics and pharmacodynamics.
One of these enzymes, acetylcholinesterase (AChE), has an important physiological function in terminating the action of the neurotransmitter acetylcholine in the central cholinergic system, in ganglia and at neuromuscular junctions. AChE also has non-cholinergic functions in cell development and embryogenesis, and is involved in the pathogenesis of Alzheimer's disease by promoting the formation of beta-amyloid fibrils. Inhibitors of AChE have been used for the palliative treatment of Alzheimer's disease, glaucoma and myasthenia. Irreversible inhibition of AChE by organophosphorus compounds (mostly pesticides and chemical warfare nerve agents) and carbamates causes a major cholinergic syndrome, responsible for the acute toxicity of these compounds. The related enzyme, butyrylcholinesterase (BChE), has no known physiological function, though it has recently been found to hydrolyze ghrelin, a neuropeptide also called the "hunger hormone." Otherwise, BChE is of toxicological and pharmacological importance. It acts as an endogenous bioscavenger against numerous esters used as drugs, pesticides, or banned chemical warfare agents. Certain potent reversible inhibitors bind slowly to the active center of both ChEs. Slow-onset inhibition introduces kinetic complexities relevant to possible mechanisms of protection against external toxicants, to pharmacological uses of slow-binding inhibitors, and to the design of new drugs with a long residence time on their targets and a short residence time in the bloodstream. To analyze and interpret the data, the researchers used new concepts in pharmacology: residence time, rebinding and micro-pharmacodynamics. SBI of ChEs results either from simple slow interaction, induced fit, or slow conformational selection. In some cases, the slow equilibrium is followed by an irreversible chemical step; the latter was observed in the interaction of ChEs with certain irreversible inhibitors.
Slow-binding inhibitors present pharmacological advantages over classical reversible inhibitors: high selectivity, long target-residence times, and rebinding to the target in micro sub-organ compartments such as neural synapses and neuromuscular junctions, resulting in prolonged efficacy with minimal unwanted side effects -- which could reduce the number of pills consumed and space out the time between doses. For these reasons, slow-binding inhibitors of ChEs, in particular the methyl-uracil derivatives that have been synthesized in Kazan, are promising new drugs for the treatment of Alzheimer's disease and myasthenia, and for neuroprotection. SBI is also of toxicological importance; it may play a role in mechanisms of resistance and protection against poisoning by irreversible agents. To further understand the toxic and/or therapeutic effects of SBI in toxicology, pharmacology and neurophysiology, the research group has been conducting in vivo experiments on model animals and on isolated cholinergic cell systems and muscles, in collaboration with scientists of the A. E. Arbuzov Institute of Organic and Physical Chemistry, Kazan Scientific Centre of the Russian Academy of Sciences. More information: Patrick Masson et al. Slow-binding inhibition of cholinesterases, pharmacological and toxicological relevance, Archives of Biochemistry and Biophysics (2016). DOI: 10.1016/j.abb.2016.02.010
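The "slow establishment of enzyme-inhibitor equilibrium" that defines SBI is conventionally analyzed with the textbook slow-binding progress-curve equation, P(t) = v_s·t + (v0 − v_s)(1 − e^(−k_obs·t))/k_obs, where v0 is the initial rate, v_s the final steady-state rate, and k_obs the rate of approach to equilibrium. The short sketch below simulates that standard equation; the parameter values are arbitrary illustrations and are not taken from the paper.

```python
import math

# Classic slow-binding progress curve:
#   P(t) = v_s*t + (v0 - v_s) * (1 - exp(-k_obs*t)) / k_obs
# v0: initial (uninhibited) rate; v_s: final steady-state rate;
# k_obs: rate constant for approach to enzyme-inhibitor equilibrium.
# All numbers here are illustrative, not fitted values from the study.

def product_formed(t, v0, vs, k_obs):
    """Product accumulated by time t under slow-binding inhibition."""
    return vs * t + (v0 - vs) * (1.0 - math.exp(-k_obs * t)) / k_obs

v0, vs, k_obs = 1.0, 0.1, 0.05  # arbitrary units

# At early times the curve still runs at roughly the uninhibited rate v0...
early = product_formed(1.0, v0, vs, k_obs)

# ...while at long times its slope settles to the inhibited rate v_s.
late_slope = product_formed(1001.0, v0, vs, k_obs) - product_formed(1000.0, v0, vs, k_obs)
```

Fitting k_obs from such curves at several inhibitor concentrations is what yields the on/off rates and the target residence time discussed above.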


James Corden Will Host Special Anniversary Ceremony on Sunday, November 6, 2016 at the Beverly Hilton Hotel HOLLYWOOD, CA--(Marketwired - October 26, 2016) - dick clark productions announced today that Academy Award-winning actress Natalie Portman will be honored with the "Hollywood Actress Award" for her role in "Jackie" and artist Janelle Monáe will receive the "Hollywood Spotlight Award" for her breakout role in "Hidden Figures." Additionally, the cast of the film "Gold," including Academy Award-winning actor Matthew McConaughey, Golden Globe Award-nominated actress Bryce Dallas Howard, Golden Globe Award-nominated actor Edgar Ramirez and Golden Globe Award-winning actor Stacy Keach will receive the "Hollywood Ensemble Award" at the 20th Annual "Hollywood Film Awards® Presented by Virginia Black." The awards ceremony, celebrating its 20th anniversary as the official launch of the awards season®, will be hosted by actor and comedian James Corden, and will take place at the Beverly Hilton Hotel in Beverly Hills, on November 6, 2016. The Hollywood Film Awards honors some of the most acclaimed films and actors, as well as previews highly anticipated films and talent for the upcoming year. Additional artists are also honored in the categories of Cinematography, Visual Effects, Film Composing, Costume Design, Editing, Production Design, Sound and Makeup & Hairstyling. Its honorees over the past 20 years have included the world's biggest stars and more than 110 have gone on to garner Oscar nominations and/or wins. Natalie Portman is being recognized for her role as Jacqueline Kennedy in the film "Jackie," a searing and intimate portrait of one of the most important and tragic moments in American history, seen through the eyes of the iconic First Lady, then Jacqueline Bouvier Kennedy (Natalie Portman). "Jackie" places us in her world during the days immediately following her husband's assassination. 
Here we see a psychological portrait of the First Lady, known for her extraordinary dignity and poise, as she struggles to maintain her husband's legacy and the world of "Camelot" that they created and loved so well. "Jackie" is directed by Pablo Larraín and written by Noah Oppenheim. In addition to Academy Award® winner Portman, the film stars Peter Sarsgaard, Greta Gerwig, Billy Crudup, and Academy Award® nominee John Hurt. "Jackie" is produced by Juan De Dios Larraín, Darren Aronofsky, Mickey Liddell, Scott Franklin, and Ari Handel. The film opens December 2, 2016. Janelle Monáe is being recognized for her role as Mary Jackson in the 20th Century Fox film "Hidden Figures." In the film Monáe plays a member of a team of African-American women who provide NASA with important mathematical data needed to launch the program's first successful space missions. "Gold," inspired by actual events, is the epic tale of one man's pursuit of the American dream and everything he'll do to keep it from falling apart. Oscar® winner Matthew McConaughey stars as Kenny Wells, a man raised in the mining business and desperate for a lucky break. He teams up with a mythic geologist and sets off on an amazing journey to find gold in the uncharted jungle of Indonesia. When they make the largest gold find in history, they realize this dream is only the beginning of the adventure. They battle nature, Wall Street, conspiracies inside of conspiracies, but most of all they test their faith in themselves. Directed by Oscar® winner Stephen Gaghan ("Traffic," "Syriana"), the film stars Oscar® winner Matthew McConaughey ("Interstellar," "Dallas Buyers Club," "The Wolf of Wall Street") and Golden Globe® nominees Edgar Ramirez ("Hands of Stone," "The Girl on the Train") and Bryce Dallas Howard ("Jurassic World," "Pete's Dragon"). "Gold" opens wide Christmas Day.
An Academy Award-winning actress, filmmaker and philanthropist, Natalie Portman first charmed audiences at the age of eleven in Luc Besson's "The Professional." Since then she has starred in over 25 films, including "Beautiful Girls," "Cold Mountain," "Garden State" and "Closer." In 2011, she received an Oscar for her performance in Darren Aronofsky's critically acclaimed film, "Black Swan." Portman has also taken on a variety of roles behind the lens, including "A Tale of Love and Darkness," which she wrote, directed, and starred in. She will soon be seen as Jacqueline Kennedy in Pablo Larrain's "Jackie," to be released by Fox Searchlight in December 2016. When Portman isn't working, she serves as Ambassador of WE and FINCA, in addition to working with Children's Hospital of Los Angeles. A Harvard graduate with a degree in psychology, Portman has also studied at Hebrew University in Jerusalem. Janelle Monáe is a Grammy-nominated singer-songwriter, performer, producer and CoverGirl spokesperson known for her unique style and groundbreaking sound. Immersed in the performing arts at a young age, she founded her record label, the Wondaland Arts Society, releasing the EP "Metropolis: Suite I (The Chase)." In 2010, Monáe released the highly anticipated and critically acclaimed "ArchAndroid," which reached No. 17 on the Billboard charts and earned her two Grammy nominations, including one for the hit single "Tightrope." 2013 saw the release of the critically acclaimed album "The Electric Lady," featuring Prince and Miguel, which reached No. 5 on the Billboard Top 200 chart. In February of 2015, Janelle launched her very own label, Wondaland Records. Most recently, Monáe was featured in the Super Bowl 50 Pepsi commercial titled the "Joy of Dance," in which she pays homage to some of the greatest musical acts of past and present.
In 2017, Janelle takes her talent to the silver screen, starring in the upcoming dramas "Hidden Figures" and "Moonlight." "Hidden Figures" centers around the true story of three African-American women who work for NASA and help make John Glenn the first American to orbit the Earth, in 1962. The film also stars Octavia Spencer, Taraji P. Henson, and Kevin Costner. "Moonlight," produced by Brad Pitt's Plan B Entertainment and A24, chronicles the coming-of-age of Chiron (Ashton Sanders, Trevante Rhodes), a young man facing an uphill battle with his sexuality during Miami's turbulent war-on-drugs era in the 1970s. "Moonlight" also stars Naomie Harris, André Holland and Mahershala Ali. Previously announced honorees for this year's show include: "Hollywood Career Achievement Award," Eddie Murphy; "Hollywood Actor Award," Tom Hanks; "Hollywood Supporting Actor Award," Hugh Grant; "Hollywood Supporting Actress Award," Nicole Kidman; "Hollywood Breakout Actress Award," Naomie Harris; "New Hollywood Award," Lily Collins; "Hollywood Blockbuster Award," "The Jungle Book"; "Hollywood Animation Award," "Zootopia"; "Hollywood Director Award," Mel Gibson; "Hollywood Breakthrough Director Award," Tom Ford; "Hollywood Producer Award," Marc Platt; "Hollywood Screenwriter Award," Kenneth Lonergan; "Hollywood Cinematography Award," Linus Sandgren; "Hollywood Film Composer Award," Mychael Danna; "Hollywood Editor Award," John Gilbert; "Hollywood Visual Effects Award," Stephane Ceretti and Richard Bluff; "Hollywood Sound Award," Christopher Boyes and Frank Eulner; "Hollywood Costume Design Award," Albert Wolsky; "Hollywood Make Up & Hair Styling Award," Shane Thomas, Angela Conte, Bec Taylor and Noriko Watanabe; and "Hollywood Production Design Award," Wynn Thomas. Additional honorees for the 20th Annual Hollywood Film Awards will be announced in the coming weeks. The 20th Annual Hollywood Film Awards are presented by Virginia Black. Virginia Black. Rich.
Decadent. Smooth. Whiskey redefined. CHEAT TWEET: Natalie Portman, @JanelleMonae and the cast of @GOLDTheMovie will be honored at the @HollywoodAwards: hfas.news/holly31b2 #HFAs dick clark productions (dcp) is the world's largest producer and proprietor of televised live event entertainment programming with the "Academy of Country Music Awards," "American Music Awards," "Billboard Music Awards," "Dick Clark's New Year's Rockin' Eve with Ryan Seacrest," "Golden Globe Awards," "Hollywood Film Awards," "Miss America" and the "Streamy Awards." Weekly television programming includes "So You Think You Can Dance" from 19 Entertainment, a division of CORE Media Group and dcp. dcp also owns one of the world's most unique and extensive entertainment archive libraries with more than 55 years of award-winning shows, historic programs, specials, performances and legendary programming. For additional information, visit www.dickclark.com. The Hollywood Film Awards®, founded in 1997, were created to celebrate Hollywood and launch the awards season. The recipients of the awards are selected by an Advisory Team for their body of work and/or a film(s) that is to be released during the calendar year. For additional information, visit www.hollywoodawards.com.


News Article | April 25, 2016
Site: www.rdmag.com

An international team including researchers from Lomonosov Moscow State University has discovered which enzyme enables the Escherichia coli bacterium (E. coli) to breathe. The study is published in Scientific Reports. The scientists worked out how the E. coli bacterium can survive in the human gut; until now, how it breathes there was a mystery to experts. Vitaly Borisov, a senior researcher at the A. N. Belozersky Institute of Physico-Chemical Biology of Lomonosov Moscow State University, Doctor of Biological Sciences, Professor of the Russian Academy of Sciences, and one of the authors, explains that to breathe, E. coli uses special enzymes that are absent in the human body. This means the discovery could contribute to the creation of new drugs that are detrimental to the bacteria without harming humans. The energy for the vital activity of any organism comes from food and is generated by means of redox processes in the body. Food is not converted into energy directly but through intermediates. First, complex molecules are broken down into simpler ones: proteins into amino acids, fats into fatty acids, carbohydrates into monosaccharides. Oxidation of these simpler molecules releases energy, which is carried by electrons. The electrons pass to the respiratory chain via so-called reducing equivalents (electron-carrying compounds): NADH (nicotinamide adenine dinucleotide) and ubiquinol, also known as coenzyme Q. Between them, these two basic reducing equivalents handle the processing of food: NADH is a water-soluble compound and ubiquinol is fat-soluble. Membrane enzymes accept electrons from the reducing equivalents and transfer them to molecular oxygen. The terminal cytochrome oxidase is the main membrane enzyme responsible for human mitochondrial respiration and was thought to serve the respiration of E. coli as well.
The scheme of oxidase action is simple: by transferring electrons to molecular oxygen, the reducing equivalents are oxidized again, and as a result 'the energy currency' of the cell - the proton-motive force - is generated. 'If you stop breathing, you die simply because oxygen no longer flows to the oxidase, and it stops producing energy,' said Vitaly Borisov. E. coli lives in the gastrointestinal tract, where a lot of hydrogen sulfide is produced; free hydrogen sulfide inhibits the work of cytochrome oxidase and thus attenuates mitochondrial respiration. Its concentration there exceeds by several hundred times the minimum concentration required to substantially block this enzyme. It would therefore seem that E. coli cannot 'breathe' - and yet the bacteria somehow survive in the intestine. The researchers assumed that breathing in the presence of hydrogen sulfide is still possible, but through a different oxidase. The point is that breathing in people and in bacteria works differently. Each cell in our body 'breathes' thanks to a single enzyme, cytochrome-c oxidase; we have no other. E. coli, however, has two types of oxidase: a bo-type cytochrome oxidase (an analogue of the 'human' cytochrome-c oxidase) and completely different bd-type cytochromes. 'Our hypothesis was that the bd-type oxidases (bd-I and bd-II) are more resistant to hydrogen sulfide inhibition than the bo-type cytochrome oxidase,' commented Vitaly Borisov. To test this hypothesis, the scientists examined how the presence of sulfide in the environment affects the growth of E. coli cells that carry only one terminal oxidase (bd-I, bd-II or bo) in the respiratory chain. A variety of biochemical, biophysical and microbiological methods and approaches were applied, as well as site-directed mutagenesis. 'The bo-oxidase's activity is completely inhibited by hydrogen sulfide, while the work of the bd-oxidases remains untouched by H2S.
Thus, in order to successfully produce the main types of "the energy currency" under a high concentration of hydrogen sulfide, the inhabitants of the intestinal microflora should use a unique type of terminal oxidase that is missing from the cells of humans and animals,' said Vitaly Borisov. The discovery could be used in the future to develop medicines that regulate the intestinal microflora and rid it of harmful bacteria. Since human cells do not contain bd-type oxidases, it becomes possible in principle to combat disease-causing bacteria without harming the human body. For example, the bacterium causing tuberculosis, whose primary membrane enzyme is also a bd-type oxidase, is quickly gaining resistance to classical antibiotics. This study opens the prospect of a new type of antibiotic that 'turns off' the oxygen only for harmful bacterial cells, not for human cells.
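The inhibition contrast the article describes can be sketched with a toy kinetic model (an illustration only, not the study's data): activity falls as 1/(1 + [H2S]/Ki), and the two oxidase types differ only in a hypothetical inhibition constant Ki.

```python
# Toy model (not from the study): terminal-oxidase activity under
# hydrogen sulfide, using the simple noncompetitive-inhibition form
# v = v_max / (1 + [H2S]/Ki).  The Ki values are illustrative
# placeholders, chosen only so that bo-type activity collapses while
# bd-type activity barely changes, as the article describes.

def relative_activity(h2s_um: float, ki_um: float) -> float:
    """Fraction of uninhibited activity at a given H2S concentration."""
    return 1.0 / (1.0 + h2s_um / ki_um)

KI_BO_UM = 1.0     # hypothetical: bo-type oxidase, strongly inhibited
KI_BD_UM = 1000.0  # hypothetical: bd-type oxidase, H2S-resistant

for h2s in (0.0, 10.0, 100.0):
    bo = relative_activity(h2s, KI_BO_UM)
    bd = relative_activity(h2s, KI_BD_UM)
    print(f"[H2S] = {h2s:6.1f} uM  bo: {bo:.3f}  bd: {bd:.3f}")
```

At a hundredfold excess of H2S over the bo-type Ki, the bo-oxidase retains about 1% of its activity while the bd-oxidase keeps over 90%, reproducing the qualitative picture in the article.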


News Article | December 19, 2016
Site: www.eurekalert.org

Biologists from the Lomonosov Moscow State University, in collaboration with colleagues from the Engelhardt Institute of Molecular Biology, Russian Academy of Sciences, used RNA transfection and in vitro techniques to show how the same mRNA can direct protein synthesis in a cell by four different means. The results have been published in the peer-reviewed journal Scientific Reports. Scientists from the Belozersky Institute of Physico-Chemical Biology, a department of the Lomonosov Moscow State University, together with their colleagues, applied a transfection method for delivering RNA into the cell that allowed them to study the impact of cell stress on protein biosynthesis on a short time scale. Sergey Dmitriev, Senior Researcher at the Belozersky Institute of Physico-Chemical Biology, the Lomonosov Moscow State University, and the Engelhardt Institute of Molecular Biology, Russian Academy of Sciences, and the lead author of the article, comments: "Our project is devoted to studying the mechanisms of protein biosynthesis, including under cell stress. The research highlights three aspects. The first concerns methods: we present a technique which allows analyzing protein synthesis in a cell with the help of short-term RNA transfection". Transfection is a method of delivering DNA or RNA into a living cell. Usually DNA is used: after entering the nucleus it initiates the synthesis of new RNA, and only afterwards are the RNAs exported into the cytoplasm, where they can participate in protein production. The biologists from the Lomonosov Moscow State University instead propose introducing an artificially synthesized RNA into the cell, so that it can serve as a template for protein synthesis immediately. The RNA is delivered to cells with the help of a special chemical agent.
Once it has penetrated into the cytoplasm, the RNA is released and ready to participate in protein production; the only thing left is for it to meet a ribosome. The path to the final product thus becomes much shorter: in as little as 1 or 2 hours one can observe protein activity and estimate its quantity. This technique allows studying the impact of stress on the cell on a short time scale. Cell stresses include, for example, heat shock, caused by elevated temperature; oxidative stress, provoked by reactive oxygen species; and responses to chemical agents that disrupt homeostasis (including antibiotics and medical drugs). Cell stress factors compel the cell to suspend protein biosynthesis (or "reprogram" it) until the system redresses the balance. Sergey clarifies: "Usually these processes last from one to four hours, and our technique of 'fast' RNA transfection is the most convenient way to study them. We've conducted our research on cultured human kidney cells, which serve as a standard model for such studies. Finally, we have elaborated a technique that allows one to obtain artificial RNA, transfer it into cells and obtain the result in a very short period of time. We've named the whole method FLERT (for 'fleeting mRNA transfection'), which sounds in Russian a bit like flirtation," adds Sergey with a smile. Messenger RNA (mRNA) is a polymer of nucleotides coding for a protein; one amino acid is encoded by three nucleotides. A special molecular machine, called the ribosome, exists in the cell for protein production. Moving along the mRNA, the ribosome reads the information triplet by triplet. The structure of the protein synthesis machine is very complicated. It comprises two subunits: a small one (40S) and a large one (60S). A whole ribosome is formed when they join. However, it is designated not 100S but 80S.
The reason is that these figures refer not to the particle mass but to the sedimentation coefficient, determined during centrifugation. This coefficient depends on several parameters, including the shape of the particle, and so is not additive. To start decoding the information, the ribosome first has to find the right starting point - the triplet from which reading begins. Detecting the starting point is not trivial, as there are no marks separating triplets in mRNA; if reading starts from the wrong nucleotide, the reading frame is shifted and everything goes wrong. Special proteins (translation initiation factors) help the ribosome find the right place in the template to start reading triplets. Usually there is a stretch between the beginning of the mRNA chain and the starting point, called the "leader", which the ribosome must pass over without reading. The Russian scientists decided to check what happens if an mRNA begins directly with the start codon - with the "first word". Interestingly, in archaea (single-celled prokaryotic organisms that have lived on the Earth for billions of years and are capable of surviving in extreme conditions) and some other primitive organisms, most mRNAs begin right at the start codon. Such RNAs are called leaderless. Leaderless mRNAs are thought to be an evolutionary prototype of messenger RNAs, because ancient ribosomes were not able to find internal starting points and initiated decoding from the very beginning of the mRNA chain. A ribosome has to pass through several phases in order to engage an mRNA and start protein synthesis. Normally, the 40S subunit of the ribosome binds the mRNA first, and then the large 60S subunit joins it at the start codon. In contrast, a leaderless mRNA can be loaded directly into the whole ribosome. This discovery was made in the 1990s by Ivan Shatsky, Professor at the Lomonosov Moscow State University.
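Triplet decoding and the consequence of starting at the wrong nucleotide can be illustrated with a short sketch (the codon table here is truncated to the few codons used; real translation involves far more machinery):

```python
# Sketch of triplet decoding and the effect of a reading-frame shift.
# 'AUG' is the canonical start codon mentioned in the article; the
# codon table is a tiny illustrative subset of the genetic code.

CODON_TABLE = {
    "AUG": "Met", "UUU": "Phe", "GGC": "Gly", "AAA": "Lys",
    "UGG": "Trp", "GCA": "Ala", "UAA": "STOP", "UGA": "STOP",
}

def translate(mrna: str, start: int) -> list:
    """Read codons triplet by triplet from position `start`."""
    peptide = []
    for i in range(start, len(mrna) - 2, 3):
        aa = CODON_TABLE.get(mrna[i:i + 3], "???")
        if aa == "STOP":
            break
        peptide.append(aa)
    return peptide

# A "leaderless" mRNA: it begins directly with the start codon AUG.
mrna = "AUGUUUGGCAAAUAA"
print(translate(mrna, 0))  # correct frame: Met-Phe-Gly-Lys
print(translate(mrna, 1))  # shifted by one nucleotide: nonsense codons
```

Starting one nucleotide too late regroups every triplet, so the same chain yields an entirely different (and here mostly unrecognized) sequence, which is why finding the true start codon is critical.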
In the new project, the scientists demonstrated that, due to their unique properties, leaderless mRNAs are resistant to many types of stress and continue directing protein synthesis even under conditions in which common leader-containing mRNAs stop working in the first minutes after the impact. Using the FLERT technique, the scientists showed this in living cells. Extending the research brought even more interesting results. It turned out that the unique properties of the leaderless mRNA give it high flexibility in the choice of protein synthesis mechanism. Eukaryotes possess several pathways by which a ribosome can end up at the start codon. These modes are mediated by distinct sets of specialized proteins - the translation initiation factors - and have been shown to operate on different mRNAs. The most common pathway, which can be used by any cellular mRNA, is provided by the eIF2 protein. However, this factor is very quickly inactivated under stress conditions. As a result, ribosomes fail to recognize the start codons of all mRNAs except those that can use other initiation factors. Scientists later discovered that eIF2 is not the only factor able to do this work. For instance, the mRNA of hepatitis C virus can do without eIF2 and use other factors, eIF5B or eIF2D, instead (a discovery also made earlier by scientists from the Lomonosov Moscow State University). This virus was thought to be unique in this respect: while canonical templates passively wait for a ribosome to bind them, the hepatitis C virus mRNA "grasps" the 40S subunit and "puts" it in the right place on the chain. This unusual property makes the use of alternative pathways possible. Now the scientists have shown that leaderless mRNA is capable of acting in the same way. It is also notable that all organisms possess the eIF5B factor, as it is an evolutionarily conserved protein.
In contrast, eIF2 exists only in eukaryotes and archaea, so it is not universal. All the above results suggest that the well-studied classical factor eIF2 is needed only when ribosomes recognize mRNA by actively searching for the start codon. This mode of translation initiation is called scanning, and it requires eIF2. When the start codon is found, eIF2 is replaced by eIF5B and protein synthesis starts. The more evolutionarily ancient leaderless mRNA can use a primitive mechanism, immediately recruiting the eIF5B factor. Sergey Dmitriev concludes: "We've got a nice result, which has explained everything. We've found that a primitive mRNA can use an evolutionarily ancient mechanism. Moreover, it is capable of using the other three pathways: through eIF2, through eIF2D, or by direct recruitment of the whole 80S ribosome".


News Article | December 21, 2016
Site: www.eurekalert.org

Scientists from MIPT and other research institutes and universities have discovered unusual phenomena occurring in a single crystal of cerium hexaboride (CeB6). By performing an electron spin resonance (ESR) experiment, the researchers confirmed the status of a material that had been dubbed an "exception to exceptions" for the way its behavior defies explanation in terms of existing models and conventional theories. The research findings were published in Scientific Reports. Experimental findings of this kind offer a way to test the validity of accepted scientific theories; recognizing the fundamental results of such research is therefore more important than seeking new practical applications for this particular material. Despite being studied for over 40 years, cerium hexaboride never stops challenging our understanding of the physics behind its unusual properties. It is a compound that belongs to the class of strongly correlated materials, i.e., materials whose properties cannot be described adequately without taking into account the interactions between electrons (electronic correlation). Up to now, many theories have been proposed to explain the anomalous physical properties of cerium hexaboride, but they have all proved unable to predict the results of ESR experiments. It may be that the theory of ESR in strongly correlated systems needs to be substantially improved in order to account for the exceptional behavior of cerium hexaboride. ESR spectroscopy is used to study samples that contain particles with unpaired spins, viz., electrons and radicals. The sample is placed in a steady magnetic field and exposed to microwave radiation. As a result, an ESR spectrum of the sample is obtained, from which data on its chemical structure and properties can be extracted.
Absolute calibration of ESR spectra in units of magnetic permeability and ESR spectral line shape analysis enable scientists to find the spectroscopic parameters: g-factor (gyromagnetic ratio), line width (spin relaxation time), and oscillating magnetization or dynamic magnetic susceptibility. ESR in cerium hexaboride was reported in an earlier study by the same authors. They developed a unique experimental technique capable of picking up the ESR signal from cerium hexaboride and similar materials. Conventional ESR spectrometers often face considerable difficulties detecting signals from strongly correlated materials. The experimental findings turned out to be rather unexpected for the researchers. For one thing, their measurements showed that the oscillating magnetization along the [100] crystallographic direction may exceed the total static magnetization of the sample. This runs contrary to the commonsense expectation (and theoretical predictions), since oscillating magnetization is theoretically supposed to be one of the constituent parts forming the magnetic moment of the sample, i.e., it must be less than the total magnetization. According to the scientists, a simple way to explain this experimental fact would be to say that there are some additional, unaccounted for interactions between free electrons and the electrons in the 4f subshell of cerium ions. This qualitative explanation, however, needs to be confirmed by further theoretical calculations. Another unexpected result of the experiment is the correlation between the angular dependences of the magnetoresistance and the ESR spectral line width with respect to the external magnetic field (under crystal sample rotation). The correlation is remarkable, as the above parameters have a completely different physical nature, therefore such correspondence was not anticipated. 
The authors of the study offer the following explanation: since ESR line width is largely determined by spin fluctuations, the value of the material's magnetoresistance may likewise be dominated by band electron scattering on spin fluctuations. The measurements reported in the study were made possible thanks to improvements to the equipment design introduced by Marat Gilmanov and Alexander Samarin, doctoral students at MIPT working under the supervision of Alexey Semeno, a senior research fellow at Prokhorov General Physics Institute of the Russian Academy of Sciences (GPI RAS) who also graduated from MIPT. "We have achieved a greater degree of sensitivity and stability for this class of materials than any other experimenters in the world. This means that no one else can perform ESR measurements of strongly correlated metals as accurately as we can. And it is our improved equipment that enables us to see what others cannot," says MIPT's Prof. Sergey Demishev, who also heads the Department of Low Temperatures and Cryogenic Engineering at Prokhorov General Physics Institute.
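The spectroscopic parameters discussed above follow from the basic ESR resonance condition h·ν = g·μB·B, which relates the microwave frequency, the g-factor, and the resonance field. A minimal sketch, using generic X-band numbers rather than the actual settings of the CeB6 experiment:

```python
# The basic ESR resonance condition, h*nu = g * mu_B * B.
# The frequency and g-factor below are generic illustrations
# (an X-band spectrometer and the free-electron g-value),
# not parameters taken from the CeB6 study.

H = 6.62607015e-34       # Planck constant, J*s
MU_B = 9.2740100783e-24  # Bohr magneton, J/T

def resonance_field(freq_hz: float, g: float) -> float:
    """Magnetic field (tesla) at which resonance occurs."""
    return H * freq_hz / (g * MU_B)

def g_factor(freq_hz: float, field_t: float) -> float:
    """g-factor extracted from a measured resonance field."""
    return H * freq_hz / (MU_B * field_t)

b = resonance_field(9.4e9, 2.0023)   # X-band, free-electron g
print(f"B_res = {b:.4f} T")          # ~0.335 T
print(f"g     = {g_factor(9.4e9, b):.4f}")
```

Inverting the same relation is how an experimental line position yields the g-factor; the line width and intensity then give the relaxation time and the oscillating magnetization.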


Pitjeva E.V.,Russian Academy of Sciences | Pitjev N.P.,Saint Petersburg State University
Monthly Notices of the Royal Astronomical Society | Year: 2013

The high precision of the latest version of the planetary ephemeris EPM2011 (Ephemerides of the Planets and the Moon) enables one to explore more accurately a variety of small effects in the Solar system. The processing of about 678 thousand position observations of planets and spacecraft for 1913-2011, with a predominance of modern radar measurements, resulted in improved PPN parameters, dynamic oblateness of the Sun, and secular variation of the heliocentric gravitational constant GM⊙, as well as stronger limits on the variation of the gravitational constant G. This processing made it possible to estimate the potential additional gravitational influence of dark matter on the motion of Solar system bodies. The density of dark matter ρdm, if any, turned out to be substantially below the accuracy achieved by the present determination of such parameters. At the distance of the orbit of Saturn the density ρdm is estimated to be under 1.1 × 10-20 g cm-3, and the mass of dark matter in the area inside the orbit of Saturn is less than 7.9 × 10-11 M⊙, even taking into account its possible tendency to concentrate in the centre. © 2013 The Authors. Published by Oxford University Press on behalf of the Royal Astronomical Society.


Petoukhov V.,Potsdam Institute for Climate Impact Research | Semenov V.A.,Leibniz Institute of Marine Science | Semenov V.A.,Russian Academy of Sciences
Journal of Geophysical Research: Atmospheres | Year: 2010

The recent overall Northern Hemisphere warming was accompanied by several severe northern continental winters, such as the extremely cold winter of 2005-2006 in Europe and northern Asia. Here we show that an anomalous decrease of wintertime sea ice concentration in the Barents-Kara (B-K) seas could bring about extreme cold events like the winter of 2005-2006. Our simulations with the ECHAM5 general circulation model demonstrate that lower-troposphere heating over the B-K seas in the Eastern Arctic caused by the sea ice reduction may result in a strong anticyclonic anomaly over the Polar Ocean and anomalous easterly advection over the northern continents. This causes a continental-scale winter cooling reaching -1.5°C, with a more than threefold increase in the probability of cold winter extremes over large areas, including Europe. Our results imply that several recent severe winters do not conflict with the global warming picture but rather supplement it, being in qualitative agreement with the simulated large-scale atmospheric circulation realignment. Furthermore, our results suggest that the high-latitude atmospheric circulation response to the B-K sea ice decrease is highly nonlinear and characterized by a transition from anomalous cyclonic circulation to an anticyclonic one, and then back to a cyclonic type of circulation, as the B-K sea ice concentration gradually reduces from 100% to ice-free conditions. We present a conceptual model that may explain the nonlinear local atmospheric response in the B-K seas region by the counterplay between convection over the surface heat source and the baroclinic effect due to modified temperature gradients in the vicinity of the heating area. Copyright © 2010 by the American Geophysical Union.


Chicherin D.,Saint Petersburg State University | Derkachov S.,Russian Academy of Sciences | Kirschner R.,University of Leipzig
Nuclear Physics B | Year: 2014

Yangian symmetry of amplitudes in N=4 super-Yang-Mills theory is formulated in terms of eigenvalue relations for monodromy matrix operators. The Quantum Inverse Scattering Method provides the appropriate tools to treat the extended symmetry and to recover as its consequences many known features like cyclic and inversion symmetry, BCFW recursion, Inverse Soft Limit construction, Grassmannian integral representation, R-invariants and on-shell diagram approach. © 2014 The Authors.


Kozlov V.,Linköping University | Kuznetsov N.,Russian Academy of Sciences
Archive for Rational Mechanics and Analysis | Year: 2014

The two-dimensional free-boundary problem of steady periodic waves with vorticity is considered for water of finite depth. We investigate how flows with small-amplitude Stokes waves on the free surface bifurcate from a horizontal parallel shear flow in which counter-currents may be present. Two bifurcation mechanisms are described: one for waves with fixed Bernoulli’s constant, and the other for waves with fixed wavelength. In both cases the corresponding dispersion equations serve for defining wavelengths from which Stokes waves bifurcate. Necessary and sufficient conditions for the existence of roots of these equations are obtained. Two particular vorticity distributions are considered in order to illustrate the general results. © 2014, Springer-Verlag Berlin Heidelberg.


Ovid'Ko I.A.,Saint Petersburg State University | Sheinerman A.G.,Russian Academy of Sciences
Applied Physics Letters | Year: 2011

A special physical mode of plastic flow and nanograin formation in nanocrystalline and polycrystalline solids deformed at high stresses is suggested and theoretically described. The mode represents the nanoscale rotational deformation (NRD) occurring through the collective events of ideal nanoscale shear in solids. We calculated its stress and energy characteristics. It is found that NRD can effectively occur in nanocrystalline and polycrystalline solids during dynamic loading. © 2011 American Institute of Physics.


Kozlov V.,Linköping University | Kuznetsov N.,Russian Academy of Sciences
Archive for Rational Mechanics and Analysis | Year: 2011

The two-dimensional nonlinear problem of steady gravity waves on water of finite depth is considered. The Benjamin-Lighthill conjecture is proved for these waves provided Bernoulli's constant attains near-critical values. In fact this is a consequence of the following more general results. If Bernoulli's constant is near-critical, then all corresponding waves have sufficiently small heights and slopes. Moreover, for every near-critical value of Bernoulli's constant, there exist only the following waves: a solitary wave and the family of Stokes waves having their crests strictly below the crest of this solitary wave; this family is parametrised by wave heights which increase from zero to the height of the solitary wave. All these waves are unique up to horizontal translations. Most of these results were proved in our previous paper (Kozlov and Kuznetsov in Arch Rational Mech Anal 197, 433-488, 2010), in which it was supposed that wave slopes are bounded a priori. Here we show that the latter condition is superfluous by proving the following theorem. If any steady wave has the free-surface profile of a sufficiently small height, then the slope of this wave is also small. © 2011 Springer-Verlag.


Kamenshchik A.Y.,National Institute of Nuclear Physics, Italy | Kamenshchik A.Y.,Russian Academy of Sciences | Steinwachs C.F.,Albert Ludwigs University of Freiburg
Physical Review D - Particles, Fields, Gravitation and Cosmology | Year: 2015

In the framework of a general scalar-tensor theory, we investigate the equivalence between two different parametrizations of fields that are commonly used in cosmology - the so-called Jordan frame and Einstein frame. While it is clear that both parametrizations are mathematically equivalent at the level of the classical action, the question about their mathematical equivalence at the quantum level as well as their physical equivalence is still a matter of debate in cosmology. We analyze whether the mathematical equivalence still holds when the first quantum corrections are taken into account. We explicitly calculate the one-loop divergences in both parametrizations by using the generalized Schwinger-DeWitt algorithm and compare both results. We find that the quantum corrections do not coincide off shell and hence induce an off-shell dependence on the parametrization. According to the equivalence theorem, the one-loop divergences should however coincide on shell. For a cosmological background, we show explicitly that the on-shell equivalence is indeed realized by a nontrivial cancellation. © 2015 American Physical Society.


Pitjev N.P.,Saint Petersburg State University | Pitjeva E.V.,Russian Academy of Sciences
Astronomy Letters | Year: 2013

We have searched for and estimated the possible gravitational influence of dark matter in the Solar system based on the EPM2011 planetary ephemerides, using about 677 thousand positional observations of planets and spacecraft. Most of the observations are present-day ranging measurements. Our estimates of the dark matter density and mass at various distances from the Sun are generally overridden by their errors (σ). This suggests that the density of dark matter ρdm, if present, is very low and much less than the currently achieved error of these parameters. We have found that ρdm is less than 1.1 × 10-20 g cm-3 at the orbital distance of Saturn, ρdm < 1.4 × 10-20 g cm-3 at the orbital distance of Mars, and ρdm < 1.4 × 10-19 g cm-3 at the orbital distance of the Earth. We have also considered the case of a possible concentration of dark matter toward the Solar system center. The dark matter mass in the sphere within Saturn's orbit should be less than 1.7 × 10-10 M⊙ even if its possible concentration is taken into account. © 2013 Pleiades Publishing, Ltd.
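The quoted density and mass bounds are mutually consistent, as a back-of-the-envelope check shows: a uniform sphere of dark matter at the density limit, extending to roughly Saturn's orbital radius (an assumption made here purely for illustration), contains on the order of 1e-10 solar masses, below the paper's mass bound.

```python
# Order-of-magnitude check (illustrative assumptions, not from the
# paper): mass of a uniform-density sphere at the quoted limit
# rho_dm < 1.1e-20 g/cm^3 out to Saturn's mean orbital radius (~9.58 AU),
# expressed in solar masses.

from math import pi

AU_CM = 1.496e13         # astronomical unit in cm
M_SUN_G = 1.989e33       # solar mass in grams

rho_limit = 1.1e-20      # g/cm^3, density bound at Saturn's orbit
r_saturn = 9.58 * AU_CM  # Saturn's mean orbital radius, cm

volume = 4.0 / 3.0 * pi * r_saturn**3
mass_solar = rho_limit * volume / M_SUN_G
print(f"uniform-density mass inside Saturn's orbit: {mass_solar:.2e} M_sun")
```

This comes out near 7e-11 M⊙, comfortably under the paper's bound of 1.7 × 10-10 M⊙, which additionally allows for central concentration.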


Grant
Agency: Cordis | Branch: FP7 | Program: CP-FP-SICA | Phase: SPA.2010.3.2-01 | Award Amount: 745.66K | Year: 2011

PRE-EARTHQUAKES intends to commit EU and Russian researchers to integrating different observational data (including ESA and ROSKOSMOS satellite data) and to improving their methodologies by cross-validation, in order: to substantially improve our knowledge of the preparatory phases of earthquakes and their possible precursors; to promote a worldwide Earthquake Observation System (EQuOS) as a dedicated component of GEOSS (Global Earth Observation System of Systems); and to develop and offer to the international scientific community an integration platform where independent observations and new data analysis methodologies devoted to research on earthquake precursors can be collected and cross-validated. Different ground- and satellite-based observations, different data analysis methods, and different measured parameters will be compared and integrated in order to move research in this field beyond its present frontiers. The specific scientific objective of the project is to investigate and demonstrate to what extent a systematic integration of independent observations can improve (possibly up to a pre-operational level of precision and reliability) our capabilities for short-term earthquake prediction, which are presently based mostly on single-parameter/single-observation methods. The project will profit from the unique opportunity, offered by ESA and ROSKOSMOS, of free access to their satellite data archives in order to integrate, compare, and improve the different methods and observations relevant to the study of the preparatory phase of earthquakes. Three European (from Italy, Germany and Turkey) and three Russian scientific institutions (from 2 different regions), among the most cited worldwide for this kind of studies, will constitute the partnership of the project, which will run over a 2-year period.


Pitjeva E.V.,Russian Academy of Sciences | Pitjev N.P.,Saint Petersburg State University
Celestial Mechanics and Dynamical Astronomy | Year: 2014

This paper outlines the progress in the development of the numerical planet ephemerides EPM - Ephemerides of Planets and the Moon. EPM was first created in the 1970s in support of Russian space flight missions and has been constantly improved at IAA RAS. A comparison between the available EPM ephemerides (EPM2004, EPM2008, EPM2011) is shown. The first results of the updated EPM2013 version, which takes into account a two-dimensional annulus of small asteroids, are presented. Currently two main factors drive the progress of planet ephemerides: dynamical models of planet motion and observational data, with the crucial role played by spacecraft ranging. EPM ephemerides are the basis for the Russian Astronomical and Nautical Astronomical Yearbooks, are planned for use in the GLONASS and LUNA-RESOURCE programs, and are being used for the determination of physical parameters: masses of asteroids, planet rotation parameters and topography, GM⊙ and its secular variation, the PPN parameters, and the upper limit on the mass of dark matter in the Solar System. The files containing polynomial approximations for the EPM ephemerides (EPM2004, EPM2008, EPM2011), along with TT-TDB and ephemerides of Ceres, Pallas, Vesta, Eris, Haumea, Makemake, and Sedna, are available from ftp://quasar.ipa.nw.ru/incoming/EPM/. Files are provided in IAA's binary and ASCII formats, as well as in the SPK format. © 2014 Springer Science+Business Media Dordrecht.


Grant
Agency: Cordis | Branch: FP7 | Program: CP | Phase: ENERGY.2008.7.2.1 | Award Amount: 4.79M | Year: 2009

The interstate integration of power grids provides multiple advantages concerning operational security, integration of renewable energy, and energy trading. For this reason the UCTE interconnection has expanded continually since its establishment. Consideration is given to different scenarios of joint operation of UCTE and NORDEL with power grids on the territory of the former USSR. Because such an interconnection is second to none in the world in terms of scale, distance, and the number of countries involved, strong R&D and innovation are urgently required along with the recent development of technologies. Bulk power grids may encounter major blackouts, which originate in the increasing complexity of monitoring, operating and controlling interconnected power grids, as well as in limited knowledge of the total system state. Therefore the possible future interconnection between the European and Russian electricity transmission systems requires elaborating methods for the monitoring, control and protection of large-scale systems, and especially for the support of their interconnections. The development and prototype implementation of these new methods and tools is the major goal of the ICOEUR project. New technologies such as Wide Area Monitoring, Control and Protection, as well as advanced network controllers (FACTS) and HVDC systems, will be considered. The envisioned ICOEUR goals can be achieved only through the close cooperation of experts with extensive knowledge of EU and Russian power systems, together with manufacturers and network operators. The ICOEUR consortium involves leading experts in all these domains and guarantees the efficient collaboration and knowledge required for testing the methodologies developed. The joint development of innovative monitoring, simulation and control concepts, tools and equipment by the internationally diversified ICOEUR consortium, and their prototype implementation, will promote their adoption.


Grant
Agency: Cordis | Branch: FP7 | Program: CP | Phase: ICT-2011.9.1 | Award Amount: 3.13M | Year: 2012

The goal of this project is the development of a suite of nanophotonic devices that interface with spins, for application in quantum information and quantum-enabled classical communication technologies. Our technologies will be based on electron and nuclear spins in semiconductor quantum dots (QDs) embedded in nanophotonic devices. We will combine knowledge of the physics of semiconductor spins, photonics and cavity quantum electrodynamics with quantum information and optical communication technology.

In this FET-Open project, we anticipate that a wealth of novel devices and fundamental understanding will result from the solution to one key problem: what is the best form for a hybrid spin-photon quantum memory, how does one transfer quanta of angular momentum from it to a single photon, and how will this angular momentum be encoded? This issue is inadequately addressed so far, and we take highly novel approaches towards it.

We address this question on several fronts. On the photonics side, polarization engineering in photonic nanostructures will be investigated, moving beyond linear polarization to exploit the full range of light angular momentum states. In terms of quantum memories, we will create the technology for long-lived (>1 s) nuclear spin memories, long enough to achieve entanglement over large distances. These might one day be used over thousands of kilometres and, via satellites, potentially anywhere on the globe.

Photonic crystal structures will be used for integrated quantum-optical circuit technology, and plasmonic nanoantennas will enable spin-dependent near-to-far-field coupling and ultrafast control of the electron spin. One may use this spin-photon interface to entangle very large numbers of photons, with the memory allowing time for the measurement operations of a quantum algorithm. The compatibility of these QD technologies means that the components may be combined, paving the way towards an entirely QD-based quantum internet.


Grant
Agency: Cordis | Branch: FP7 | Program: CP | Phase: ICT-2009.1.4 | Award Amount: 8.50M | Year: 2010

The main objective of MASSIF is to achieve a significant advance in the area of Security Information and Event Management (SIEM). On the basis of proper multi-level event correlation, MASSIF will provide innovative techniques to enable the detection of upcoming security threats and trigger remediation actions even before the occurrence of possible security incidents. Thus, MASSIF will develop a new-generation SIEM framework for service infrastructures supporting intelligent, scalable, and multi-level/multi-domain security event processing and predictive security monitoring.

Such service-level SIEM involves the modelling and formal validation of security, including trusted computing concepts (Fraunhofer), and an architecture for dependable and resilient collection of service events (Uni. Lisboa), supported by an extremely scalable and performant event collection and processing framework (UPM), in the context of service-level attack models (SPIIRAS).

Four industrial domains serve as a source for requirements and to validate and demonstrate project results: (i) the Olympic Games IT infrastructure deployed and managed by Atos Origin; (ii) a France Telecom scenario on a mobile-phone-based money transfer service facing security events, especially non-IT and service events; (iii) T-Systems South Africa, which provides managed IT outsourcing services with a high degree of complexity in setting up SIEM systems for large distributed enterprises; and (iv) Epsilon (an SME), which will demonstrate the use of the advanced concepts of SIEM in an IT system supporting a critical infrastructure (a dam).

Deep insight into the current and near-future industrial state of the art is provided by (1) AlienVault, as the creator and maintainer of the leading open-source SIEM product (OSSIM); (2) the integration of MASSIF results into Prelude (the second-largest open-source SIEM product) by Institut Telecom; and (3) the use and deployment of large installations of commercial SIEM products.


Grant
Agency: Cordis | Branch: FP7 | Program: CP-FP | Phase: AAT.2008.3.3.2. | Award Amount: 5.60M | Year: 2009

Atmospheric turbulence encounters are the leading cause of injuries to passengers and crews in non-fatal airline accidents. The number of turbulence accidents has grown by a factor of five since 1980. A whole class of turbulence (Clear Air Turbulence), representing 40% of turbulence accidents, cannot be detected by any existing equipment, including state-of-the-art weather radar. The operational concepts for protection against the turbulence hazard, based on a UV LIDAR, are: short-range measurement of air speed ahead of the aircraft and action on the flight controls; and medium-range detection of turbulence, with passengers secured by fastening seat belts. The short-range concept has been validated within the FP5 AWIATOR project. The objective of DELICAT is to validate medium-range turbulence detection (based on backscattered energy fluctuations). In the development phase, both short and medium range will be integrated into a single piece of equipment. The validation of medium-range turbulence detection is based on the comparison of LIDAR data and aircraft sensor data: a UV LIDAR mock-up is designed, manufactured and flight-tested, and the LIDAR data and the aircraft sensor data are compared offline for the same area of atmosphere. The consortium assembled for the DELICAT project includes all the expertise necessary to reach this objective and will take advantage of previous projects such as FP5 AWIATOR and FP6 FLYSAFE. The project resources are well matched to the project's ambitious objectives and innovation content. DELICAT takes advantage of existing hardware (laser sub-assemblies, test aircraft fairing) to achieve its goal at the lowest possible cost. The DELICAT project will directly contribute to the objectives of the AAT.2008.3.3.2 topic by increasing customer safety; the objective is to reduce the number of turbulence accidents by 40%. Dissemination of DELICAT will be ensured by setting up a website and by gathering an External Experts Advisory Group.


Grant
Agency: Cordis | Branch: FP7 | Program: CP | Phase: ICT-2009.1.1 | Award Amount: 6.09M | Year: 2010

The importance of wireless networks for the Future Internet is rising at a fast pace as mobile devices increasingly become its entry point. However, today's wireless networks are unable to rapidly adapt to evolving contexts and service needs due to their rigid architectural design.

We believe that the wireless Internet's inability to keep up with innovation stems directly from its reliance on the traditional layer-based Internet abstraction. In particular, the Link Layer interface appears far too abstracted from the actual wireless access and coordination needs. FLAVIA fosters a paradigm shift towards the Future Wireless Internet: from pre-designed link services to programmable link processors. The key concept is to expose flexible programmable interfaces enabling service customization and performance optimization through software-based exploitation of low-level operations and control primitives, e.g., transmission timing, frame customization and processing, spectrum and channel management, power control, etc.

FLAVIA's approach is based on three main pillars: i) lower the interface between hardware-dependent layers and upper layers, ii) apply a hierarchical decomposition of the MAC/PHY layer functionalities, and iii) open programmable interfaces at different abstraction levels. To prove the viability of this new architectural vision, FLAVIA will prototype its concept on two currently available wireless technologies, 802.11 and 802.16, representing today's two main radio resource allocation philosophies: contention-based and scheduled. Moreover, FLAVIA will assess the applicability of the proposed architectural concepts to the emerging 3GPP standards.

FLAVIA's concept will allow boosting innovation and reducing the cost of network upgrades.
Operators, manufacturers, network designers, emerging third-party solution developers, and even spontaneous end users, will be able to easily and rapidly optimize and upgrade the wireless network operation, quickly prototype and test their new protocols, and adapt the wireless access operation to emerging scenarios or service needs.


Grant
Agency: Cordis | Branch: FP7 | Program: CP-FP | Phase: AAT.2008.4.1.1. | Award Amount: 3.66M | Year: 2010

The upstream FFAST project addresses the topic Design Systems and Tools (AAT.2008.4.1.1) by developing, implementing and assessing a range of numerical simulation technologies to accelerate future aircraft design. The critical-load identification methods and reduced order modelling techniques developed will potentially provide a step change in the efficiency and accuracy of the dynamic aeroelastic loads process. Identifying the flight conditions that lead to the maximum loads on aircraft structures, and introducing higher-fidelity methods at these conditions, will reduce the cost and turnaround time of the loads process for conventional aircraft. This will lead to significant improvements in product development and manufacture, supporting the ACARE 2020 targets. In addition, innovative designs required for green aircraft can be evaluated more rapidly and at lower risk. Reduced order modelling techniques offer the potential for further step changes in the efficiency of the aeroelastic loads process: they offer the accuracy of high-fidelity methods at a cost close to that of current low-fidelity methods. The target for the FFAST project is to demonstrate a speed-up of two to three orders of magnitude over high-fidelity methods. To meet this target, research will be carried out in work packages to improve the identification of critical loads and to develop reduced order modelling strategies for unsteady aerodynamic and aeroelastic simulation. A work package dedicated to validation and evaluation on a set of industrially relevant test cases will judge the success of the technologies developed and give industry the confidence to make the necessary pull-through investment. Strong industrial support of FFAST allows direct exploitation of the results via focused future investment, the solution database and early-release software. The dissemination of FFAST to a wider audience is vital and will be achieved via a website, targeted lectures and workshops, conferences and journal publications.
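Reduced order modelling of the kind described above is commonly built on proper orthogonal decomposition (POD) of high-fidelity snapshots: expensive solutions are compressed into a small basis, and new fields are approximated cheaply in that basis. A minimal sketch with synthetic data (the snapshot matrix below is random, standing in for real aeroelastic solutions; this is not FFAST's actual method):

```python
import numpy as np

# Hypothetical snapshot matrix: each column is one high-fidelity solution
# (e.g. a load distribution at one flight condition). Built here as random
# rank-3 data so the reduced basis is small by construction.
rng = np.random.default_rng(0)
n_dof, n_snap = 200, 12
snapshots = rng.standard_normal((n_dof, 3)) @ rng.standard_normal((3, n_snap))

# POD via singular value decomposition of the snapshot matrix.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)

# Keep only the modes needed to capture essentially all of the "energy".
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 1.0 - 1e-10)) + 1
basis = U[:, :r]  # reduced basis: n_dof x r instead of n_dof x n_snap

# A field is approximated by projecting onto the reduced basis.
new_field = snapshots[:, 0]
approx = basis @ (basis.T @ new_field)
print(r)  # number of retained modes
```

Because the synthetic snapshots have rank 3, three modes reconstruct any snapshot to machine precision; with real simulation data the retained-mode count trades accuracy against the speed-up the project targets.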


Patent
Liang, Zhao and Russian Academy of Sciences | Date: 2011-06-29

The present invention relates to an independent, tobacco-addiction-dispelling and tobacco-toxicity-detoxifying filter, and a process for preparing the same. Said process is characterized by enabling organic substances in tea leaves to exude by air entraining and heating; immersing with a suitable amount of water and stirring, enhancing the effect of dispelling tobacco addiction and detoxifying tobacco toxicity; adding a portion of celluloses, using the adhesiveness and water-resistant effect of the exuded organic substances, to produce tea powder, tea slices and tea particles; and packaging the tea powder, tea slices, tea particles and active substances and additives within the filter via a device to form a cigarette holder, a cigarette filter rod or a cigarette-holder bullet core having an absorbing resistance controlled to be within the applicable scope. During smoking, the effective substances in the filter volatilize and sublime via heat and water vapor in the smoke, the sucking force of smoking or an electronic smoking device, and are absorbed upon entering the respiratory tract, so as to inhibit nicotine receptors and to achieve a cessation and detoxification effect on cigarette smoking.


Grant
Agency: Cordis | Branch: FP7 | Program: CSA-CA | Phase: NMP-2007-2.1-3 | Award Amount: 666.70K | Year: 2008

Nanostructured electromagnetic materials are rapidly maturing and are becoming increasingly employed in the design of electronic and optical components, integrated circuits and functional devices. A broad class of applications is based upon specialised electromagnetic materials that provide the necessary functionality for electronic devices and constitute the physical layer of technologies dealing with electromagnetic signals. For such applications, the electromagnetic parameters of materials are of primary concern. Characterisation and metrology of engineered nanostructured electromagnetic media have become critical aspects of their development and utilisation in practical applications. The main objective of this project is to consolidate efforts and bring coordination to the European work towards the development, testing, and dissemination of methods and tools for electromagnetic characterisation and metrology of nanostructured composite materials. The main novel characterisation approaches focus on intrinsically interrelated developments: harmonisation of phenomenological material models, standardisation of characteristic parameters, and measurement techniques for evaluating the specified parameters. The main impact will be the development and dissemination of novel techniques for electromagnetic characterisation of nanostructured materials and the preparation of standards in the field of nanomaterials characterisation. This will complement the known techniques for characterisation in terms of particle properties and reactivity by developing unified approaches to characterisation in terms of permittivity, permeability, chirality parameter, surface impedance, grid impedance and similar quantities.


News Article | November 14, 2016
Site: phys.org

Dendra2 is a photo-activated fluorescent protein that was developed 10 years ago by researchers at the M.M. Shemyakin and Yu.A. Ovchinnikov Institute of Bioorganic Chemistry of the Russian Academy of Sciences, based on the Dendra protein from a particular kind of coral, the Dendronephthya sp. octocorals. These proteins are capable of undergoing significant change in their fluorescent properties when irradiated with a certain amount of light of a specific wavelength. Today, they are among the most popular tools for monitoring and tracking proteins, cells and tissues, and are particularly suitable for ultrahigh-resolution fluorescence microscopy. Studies of the mechanisms through which this protein is light-activated, in particular its switching from a green to a red form under violet or blue light, had long been considered exhaustive. However, a year ago, researchers discovered that the weak photoactivation of the protein under blue light could be greatly strengthened by simultaneous irradiation with a laser in the near-infrared range (700 to 780 nm). This renders unnecessary the use of violet light, which can harm living tissue. However, the process itself involves costly infrared lasers. In the new study, researchers from the Institute of Bioorganic Chemistry of the Russian Academy of Sciences and the Nizhny Novgorod State Medical Academy discovered that even light sources with a substantially shorter wavelength, in the red region of the visible spectrum (630-650 nm), are able to induce the same effect. "During the study, we were able to demonstrate a significant increase in photoactivation efficiency when living cells are simultaneously irradiated with blue and red light under ultrahigh-resolution fluorescence microscopy," notes Alexander Mishin, Ph.D., one of the researchers from the IBCh RAS Laboratory of Biophotonics.
"The cheap red lasers that are a standard part of many microscopes have made the new photoactivation method available to a wider range of researchers." The data obtained from this collaborative research are of particular interest for understanding the photoactivation mechanism itself. As the scientists noted, this process is still riddled with unknowns, and even the nature of the unusual intermediate state of the fluorescent protein, which absorbs light in a wide spectral range, remains a mystery to this day. Modern science has several ways of using light to activate these proteins. The most common method involves transforming them from a green to a red fluorescent state. The first such protein was described by Japanese scientists in 2003, who managed to isolate it from a madrepore. They named it Kaede, which means 'maple leaf'. However, this is not the only protein capable of changing its fluorescent properties in this way.
More information:
N. V. Klementieva et al. Green-to-red primed conversion of Dendra2 using blue and red lasers, Chem. Commun. (2016). DOI: 10.1039/C6CC05599K
Nadya G. Gurskaya et al. Engineering of a monomeric green-to-red photoactivatable fluorescent protein induced by blue light, Nature Biotechnology (2006). DOI: 10.1038/nbt1191
William P. Dempsey et al. In vivo single-cell labeling by confined primed conversion, Nature Methods (2015). DOI: 10.1038/nmeth.3405


News Article | September 21, 2016
Site: www.chromatographytechniques.com

Scientists from the Space Research Institute of the Russian Academy of Sciences (IKI RAS), the Moscow Institute of Physics and Technology (MIPT), and the Max Planck Institute for Astrophysics (MPA) have shown that diffusion of gas particles during the formation of the first structures in the early Universe could have affected the relative abundance of helium and hydrogen in the first galaxies. According to their calculations, the diffusion-induced changes in element abundances are comparable to the precision of current cosmological measurements. This means that the observed hydrogen and helium abundances may differ from theoretical predictions. The research was published in the Monthly Notices of the Royal Astronomical Society. The scientists want to know what role diffusion played in the formation of the first stars and galaxies. The reason for their heightened interest is that the accuracy of direct measurements of primordial element abundances has been steadily improving. To describe the state of the Universe during all stages of its development, physicists assign values to the so-called cosmological parameters. Among them are the Hubble parameter, which determines the rate of expansion of the Universe, and the parameters that specify the matter and dark energy content of the Universe. The values of these parameters vary with time. Knowing their correct values for the different stages of the evolution of the Universe would allow physicists to arrive at a correct cosmological model and peer billions of years into the past or into the future. Techniques have been developed to compute the values of the cosmological parameters with a high degree of precision. One of these techniques is the direct observation of primordial element abundances. This allows scientists to determine the baryon-to-photon ratio, which reveals the contribution of ordinary (baryonic) matter to the overall matter-energy density of the Universe.
According to the standard cosmological theory, a process called nucleosynthesis occurred a few minutes after the Big Bang: protons and neutrons coupled into nuclei, forming the primordial plasma made of hydrogen, some helium and a small amount of deuterium and lithium. This epoch is of particular interest to physicists because it is the earliest stage of the evolution of the Universe for which reliable observational data is available. Pavel Medvedev, Sergey Sazonov and Marat Gilfanov proposed that, as the first galaxies were forming, diffusion of gas could take place, effecting a change in the ratio of primordial helium to hydrogen. "Galaxy formation begins with a contraction of dark matter, which is followed by an inflow of gas that is gravitationally attracted towards the centre of the future galaxy. We believe diffusion is possible in this flowing gas. As a consequence, particles of different masses move at different velocities. Suppose there is only hydrogen and helium in the gas. As helium is a heavier particle, it accretes faster than hydrogen, driven by the gravitational field of the forming galaxy. This means that when the galaxy is formed, the helium-to-hydrogen ratio in it is going to differ from that predicted by the nucleosynthesis theory," says Sergey Sazonov of the IKI and MIPT. The scientists examined particle diffusion in the gas during galaxy formation in the early Universe. They estimated the changes in relative helium abundance, which could be induced by this phenomenon, for galaxies of different masses. Their research shows that these changes could be on the order of 0.01 percent or less in the case of diffusion in cold gas. However, if the gas was heated to several thousand degrees during the epoch when the first galaxies were formed (several hundred million years after the Big Bang), then diffusion-driven helium abundance changes could be on the order of 0.1 percent. 
One possible mechanism involving the preheating of gas is the transfer of energy from the first supernovae to the environment via cosmic rays. This was proposed in a recent paper by Sazonov and Sunyaev. Primordial helium abundance is normally inferred from the measurements of interstellar gas in the galaxies close to our own where star formation does not occur. Otherwise, scientists would not be looking at primordial elements, because the composition of the interstellar medium would have been enriched by the products of thermonuclear fusion in stars. The direct measurements of primordial helium abundance enable physicists to constrain cosmological parameters and test the Big Bang nucleosynthesis (BBN) theory. As stated above, cosmological parameters determine the state of the Universe at any given time. That is why finding their precise values is one of the main objectives of cosmology. The diffusion-induced changes are comparable to the precision of current predictions of helium abundance. This means that the effect proposed by the authors could account for galactic helium content changes that are within the accuracy of observations. For this reason, any higher accuracy future predictions that are based on measured data will have to take this effect into account.


News Article | January 26, 2016
Site: www.techtimes.com

There are a trillion pieces of space junk orbiting the planet, and with their numbers steadily increasing, they may pose threats to satellites all over the world. A Russian scientist predicts that something as small as a piece of space junk may provoke global wars: the damage such debris causes to military satellites may be misinterpreted by countries as a deliberate attack. Researchers at the Russian Academy of Sciences in Moscow said the debris poses a special political danger because it is hard to determine whether a satellite was deliberately attacked by another country or accidentally hit by space junk fragments. In the report published in Acta Astronautica, the scientists found that there are about 23,000 pieces of debris longer than 10 centimeters (about 4 inches). They estimate that there are about half a billion pieces of debris ranging from one to 10 centimeters, and trillions of smaller fragments. These fragments pose serious threats to satellites and spacecraft at combined closing speeds of more than 30,000 miles per hour. One of the authors of the study, Vitaly Adushkin, says the impact of space junk on satellites, especially military satellites, may provoke an armed or political conflict between countries. In 2013, a Russian satellite called Blits was destroyed and disabled after it collided with fragments created when China shot down its old weather satellite in 2007. When China used a missile to destroy that satellite, it left around 3,000 fragments in orbit. According to NASA, more than 500,000 pieces of debris or space junk are tracked as they orbit the Earth. On average, they travel at a speed of 17,500 miles per hour, fast enough for even a small piece of debris to damage a satellite in orbit. The increasing amount of space junk poses a danger not only to satellites and spacecraft, but also to the International Space Station (ISS), where humans are aboard. Astronauts took shelter as Russian space debris travelled past the ISS in 2015.
In the 16 years of the ISS's existence, this was the fourth time space junk had threatened the orbiting outpost. Space junk has been a growing problem since the start of satellite launches in 1957. Space debris falls into two categories: natural and man-made. Natural debris consists mostly of meteoroids, while man-made debris comes from non-functional spacecraft, mission-related debris, fragmentation debris and abandoned launch vehicle stages. "The greatest risk to space missions comes from non-trackable debris," said Nicholas Johnson, NASA chief scientist for orbital debris.
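The speeds quoted above make the danger easy to quantify, since kinetic energy grows with the square of speed. A back-of-the-envelope sketch (the 1-gram fragment mass is an illustrative assumption, not a figure from the article):

```python
def kinetic_energy_joules(mass_kg, speed_m_s):
    """Kinetic energy E = (1/2) m v^2 of a debris fragment."""
    return 0.5 * mass_kg * speed_m_s ** 2

MPH_TO_M_S = 0.44704  # exact miles-per-hour to metres-per-second factor

# NASA's quoted average orbital debris speed of 17,500 mph:
speed = 17_500 * MPH_TO_M_S  # about 7,823 m/s

# A hypothetical 1-gram fragment at that speed:
energy = kinetic_energy_joules(0.001, speed)
print(f"{energy / 1000:.1f} kJ")  # roughly 30.6 kJ
```

Even a gram-scale fragment carries tens of kilojoules, which is why debris too small to track is still hazardous to spacecraft.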


News Article | February 15, 2017
Site: cen.acs.org

Density functional theory (DFT) is a widely used computational method for carrying out quantum calculations in chemistry, materials science, and biology research. Despite its enormous popularity and ongoing modifications and updates, DFT seems to be getting worse at predicting key electron properties, according to a study (Science 2017, DOI: 10.1126/science.aah5975). The finding suggests that users of DFT programs should carefully evaluate and benchmark the results of their computations. And it may encourage DFT specialists and developers to redouble their efforts to improve the method’s capabilities. For decades, researchers have depended on quantum methods to calculate electronic structures, bond lengths, and molecular geometries and energies. The values of those properties and others can be obtained with high accuracy from wave-function-based quantum methods. But applying those methods to all but the simplest chemical systems is complex, arduous, and expensive. DFT simplifies the calculations. It sidesteps the use of wave functions to account for the motions of a molecule’s atoms and electrons. Instead, DFT determines electronic properties from the three-dimensional densities of the systems’ electron clouds. That simplification has helped put quantum calculations in the hands of large numbers of researchers, not just hard-core theoreticians. But as Michael G. Medvedev and Ivan S. Bushmarinov of the Russian Academy of Sciences and coworkers now report, although DFT continues to provide ever more accurate energy values, thanks to ongoing method development and refinements, it is getting worse at correctly predicting electron densities. The team carried out DFT calculations on 14 types of atoms and ions using 128 different functionals (the mathematical recipes that map an electron density to an energy) developed since 1970. They compared those results with ones obtained from high-level ab initio wave-function quantum methods, which are known to be highly accurate.
They find that until about 2000, DFT-calculated energies and electron density values improved hand in hand. Since then, however, the energies have continued to improve, but the densities have become less accurate. For some applications in chemistry and biology, the energies and geometries of molecules are the most important pieces of information, says Sharon Hammes-Schiffer, a chemistry professor and specialist in computational methods at the University of Illinois, Urbana-Champaign. “If the electron density does not affect these properties then perhaps the inaccurate electron density is irrelevant,” she adds. It may have little effect on chemical bonding. The key issue in Hammes-Schiffer’s view is that some modern functionals “may be giving the correct energies for the wrong reason.” She argues that this subject merits further investigation because “most scientists would prefer to obtain the correct answer for the correct reason.” Describing the study as “provocative,” Martin Head-Gordon, a theoretician at the University of California, Berkeley, says one way to move forward is a combinatorial approach to developing new functionals that use fewer adjustable parameters than some modern ones. His group is working on that approach, and they plan to test it on electron densities.
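The benchmarking logic described here, comparing a DFT-computed density against a high-level reference on the same grid, can be sketched with a toy error metric. The density values below are invented purely for illustration (they are not real calculations, and RMSD is just one of several possible error measures):

```python
import math

def density_rmsd(rho_test, rho_ref):
    """Root-mean-square deviation between two densities sampled
    on the same grid points."""
    n = len(rho_ref)
    return math.sqrt(sum((t - r) ** 2 for t, r in zip(rho_test, rho_ref)) / n)

# Hypothetical radial density values on a small grid (arbitrary units):
reference = [0.95, 0.60, 0.30, 0.12, 0.04]   # stand-in for an ab initio result
dft_a     = [0.93, 0.62, 0.29, 0.13, 0.04]   # functional A: density close to reference
dft_b     = [0.80, 0.70, 0.35, 0.10, 0.02]   # functional B: worse density

print(density_rmsd(dft_a, reference))  # small error
print(density_rmsd(dft_b, reference))  # larger error
```

The study's point is that a functional like "B" can still yield excellent energies, which is exactly why density-based benchmarks are needed alongside energy-based ones.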


Grant
Agency: Cordis | Branch: FP7 | Program: CSA | Phase: ICT-2007.9.2 | Award Amount: 1.19M | Year: 2009

The ISTOK-SOYUZ project, based on the sound outcomes and lessons learnt from the ISTOK.Ru project (www.istok-ru.eu), implemented in Russia in 2006-2008, will expand the ISTOK experience to the countries of Eastern Europe and Central Asia, identifying and promoting the visibility of mutual RTD potential and collaboration opportunities. The project will: (1) promote the EU ICT programme and raise awareness about the benefits of mutual collaboration; (2) identify the potential for R&D ICT collaboration between the European Union and the 9 addressed countries of Eastern Europe and Central Asia; (3) expand the EU-Russian ICT research community to 4 targeted countries (Ukraine, Belorussia, Armenia and Kazakhstan) through the opening of an ISTOK competence platform and the implementation of pilot actions such as networking and brokerage events and assistance with integration into the European Technology Platforms and Networks of Excellence; and (4) provide support to research teams from the targeted countries with the goal of increasing the number of ICT FP7 partnerships between researchers from Europe and the targeted countries. The project activities will be carried out in collaboration with national stakeholders and other relevant cooperation projects with a view to exploiting synergies and maximising impact. The clustering approach involving two other INCO projects, SCUBE-ICT and EXTEND, will be applied in particular to the specific activities in Ukraine, Belorussia and Armenia in order to increase the benefit to European research from the support actions of the three projects.


News Article | April 11, 2016
Site: www.nrl.navy.mil

U.S. Naval Research Laboratory (NRL) plasma physicist Dr. Alexander L. Velikovich receives the 2015 IEEE Plasma Science and Applications Award for advancing the theory of plasma shocks, hydrodynamics and magneto-hydrodynamics, enabling many-fold increases in both Z-pinch and laser-plasma experimental performance in radiation and fusion applications. Presented by the Nuclear and Plasma Sciences Society (NPSS), the award recognizes outstanding contributions to the field of plasma science and engineering, including plasma dynamics, thermonuclear fusion, plasma sources, relativistic electron beams, laser-plasma interactions, diagnostics and solid-state plasmas. Author and co-author of more than 170 publications, with over 2,500 citations, Velikovich developed the first analytical theory to calculate the time-dependent growth of the compressible Richtmyer-Meshkov (RM) instability in the linear regime, as well as a non-linear RM theory explaining the reduction of its growth rate at large initial amplitude. Most recently, Velikovich developed a theory explaining the effect of shock-generated turbulence, discovered in numerical simulations at NRL over a decade ago, on the Rankine-Hugoniot jump conditions, which are derived from the conservation laws of mass, momentum and energy. Many physical effects first observed on the Nike krypton fluoride (KrF) laser at NRL were recreated on both the Nova laser at the Lawrence Livermore National Laboratory, California, and the Omega laser at the Laboratory for Laser Energetics (LLE) in New York. Velikovich's research on High Energy Density Physics (HEDP) and Inertial Confinement Fusion (ICF), particularly laser-fusion and Z-pinch-related plasma hydrodynamics, formed the theoretical basis for x-ray generation in Z-pinch plasma radiation sources.
Modeling and interpretation of these results were instrumental in establishing the physical picture of the Rayleigh-Taylor (RT) instability seeding in laser fusion targets caused by the roughness of the front and rear surfaces of a laser target. This research later translated into the development of a theory that provided the physical basis for most hydrodynamic experiments performed on the Nike laser over the past decade. Earning a Master of Science equivalent degree in physics from the Moscow State University Department of Physics, Moscow, Russia, in 1974, Velikovich completed a Ph.D. equivalent degree in plasma physics and chemistry awarded by the Kapitza Institute for Physical Problems, U.S.S.R. Academy of Sciences, in 1978. In 1991 he earned the advanced degree of Doctor of Science (the equivalent of the Habilitation in European Union countries) in electrophysics, awarded by the High Current Electronics Institute, Russian Academy of Sciences, Tomsk, Russia. Velikovich started at NRL in 1993 as a contractor, transitioning to federal civil service at NRL in 1999. In 2005 Velikovich was elected a Fellow of the American Physical Society upon the recommendation of its Division of Plasma Physics for "outstanding contributions to the theories of dynamics and stability of Z-pinch plasmas, Richtmyer-Meshkov instability and related effects of early-time perturbation seeding and evolution in laser plasma targets." In 2010, along with colleagues from Sandia and NRL, Velikovich shared the 2010 Department of Energy (DoE) Defense Programs Award of Excellence for "increased cold x-ray source yields, improved source characterization and debris mitigation techniques to qualify stockpile components on refurbished Z machine." In 2000, 2007, 2010, and 2012, he was the recipient of the NRL Alan Berman Research Publication Award. In 2015 he received the NRL Sigma Xi award for Pure Science. About the U.S. Naval Research Laboratory The U.S.
Naval Research Laboratory provides the advanced scientific capabilities required to bolster our country's position of global naval leadership. The Laboratory, with a total complement of approximately 2,500 personnel, is located in southwest Washington, D.C., with other major sites at the Stennis Space Center, Miss., and Monterey, Calif. NRL has served the Navy and the nation for over 90 years and continues to advance research further than you can imagine. For more information, visit the NRL website or join the conversation on Twitter, Facebook, and YouTube.


News Article | December 6, 2016
Site: phys.org

A group of researchers from the Institute of Bioorganic Chemistry of the RAS and Moscow State University, in collaboration with the University of Utah, has for the first time successfully made use of the FRET microscopy method to demonstrate the ability of nucleosomes to reversibly unwind under the action of FACT without expending any energy. The data obtained will illuminate the role this protein complex plays in actively dividing cells in tumor tissues. The results were published in Nature Structural & Molecular Biology. "This research work can be viewed as a perfect example of the successful collaboration between the two research groups," says Kseniya Kudryashova, Ph.D., a researcher with the Bioengineering Department of the Institute of Bioorganic Chemistry of the Russian Academy of Sciences. The DNA molecule is compactly coiled within cell nuclei. Special proteins ensure the correct packing of the genome; together with the DNA, they form a structure called chromatin. The nucleosome acts as the unit of compaction for the chromatin. Nucleosomes are often compared to a thread spool, in which the DNA 'thread' is wound on a barrel of histone proteins – the 'spool.' The dense packing of chromatin in the nucleus is disrupted during transcription, when genetic information is actively read from the DNA. The conserved protein FACT, which has become a research focus, facilitates transcription through chromatin. Although the interaction of FACT with the nucleosome has been demonstrated in earlier works, this is the first time that scientists have discovered the important details of the process. The researchers studied the interaction of the FACT protein factor with the nucleosome using the spFRET microscopy method. "As a result of the productive work that was started back in 2010, scientists were able to develop the spFRET method from scratch and apply it specifically in the study of nucleosomes," explains Kudryashova. 
"Based on this method, scientists are able to work with the microscopy of single molecules. Fluorescent markers are introduced into adjacent loops of the nucleosomal DNA. One of the markers serves as an energy donor, the other as the acceptor. The donor can be excited using a laser of a specific wavelength. If the donor is located close to the acceptor, energy is transferred to the acceptor. The closer the markers, the brighter the signal from the acceptor. In this way, it is possible to monitor the distance between adjacent DNA loops and assess how compactly the nucleosome is folded. This is a unique development for Russia that allows us to study structural changes in nucleosome complexes with a resolution of a few nanometers, with the information collected from each molecule individually." Using this method, scientists for the first time demonstrated the ability of the nucleosome to unwind reversibly under the action of FACT in vitro (in a test tube) without expending energy, which is quite unusual, given that the ATP-dependent remodeling complexes for chromatin reorganization expend a lot of energy. During the formation of the FACT-nucleosome complex, the DNA strands are completely straightened but remain bound to the histone proteins. If FACT is removed from the complex, everything reverts to the starting state: the nucleosomal DNA winds itself back onto the histone core. Thus, FACT is a rare example of an ATP-independent chromatin-remodeling complex, one that operates without an external energy source. According to Maria Valieva, the first author of the article, the ability of FACT to reversibly change chromatin structure in a cell may activate certain genes through the reorganization of DNA components. However, nothing more specific can be said about similar processes occurring inside cells, as the issue is still poorly understood. 
The author notes that the study of this protein complex is important not only scientifically, but also from a medical perspective. This is because tumor tissues contain large quantities of FACT. That is why scientists are now investigating the role played by this protein complex in actively dividing cells. More information: Maria E Valieva et al. Large-scale ATP-independent nucleosome unfolding by a histone chaperone, Nature Structural & Molecular Biology (2016). DOI: 10.1038/nsmb.3321
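The distance readout that spFRET provides rests on the steep distance dependence of Förster resonance energy transfer. A minimal sketch of that relationship, assuming a hypothetical Förster radius R0 of 5 nm (typical of common donor-acceptor dye pairs; not a value taken from the article):

```python
# Distance dependence underlying the spFRET readout described above.
# R0 (the Forster radius, at which transfer efficiency is 50%) is an
# assumed, illustrative value, not one reported in the study.

def fret_efficiency(r_nm, r0_nm=5.0):
    """Forster transfer efficiency E = 1 / (1 + (r/R0)^6)."""
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

def distance_from_efficiency(e, r0_nm=5.0):
    """Invert E to estimate the donor-acceptor distance in nm."""
    return r0_nm * ((1.0 / e) - 1.0) ** (1.0 / 6.0)

if __name__ == "__main__":
    # Markers on adjacent DNA loops: a compact nucleosome keeps them
    # close (bright acceptor); unwinding moves them apart (dim acceptor).
    for r in (2.0, 5.0, 8.0):
        print(f"r = {r:.1f} nm -> E = {fret_efficiency(r):.3f}")
```

Because efficiency falls off as the sixth power of distance, changes of only a few nanometers between the labeled loops produce large, easily detected changes in acceptor brightness, which is what makes single-molecule monitoring of nucleosome folding feasible.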


Researchers at the Institute of Bioorganic Chemistry of the Russian Academy of Sciences have collaborated with colleagues from other research institutions and created a microfluidic system for ultra-high-throughput screening in double emulsion droplets. This technique can be used when studying the unique properties of single living cells, and is 30,000 times more productive than robotic workstations. At the same time, it greatly simplifies the work of researchers involved in determining the functionality of biological objects for the creation of biomedicines. The results of this research were published in the journal PNAS. "Researchers tend to spend a lot of their working hours testing, purifying and isolating highly active proteins. So we have tried to solve this problem by developing an ultra-high-throughput screening system for isolating biomolecules based on microfluidic emulsions. The result is a system that allows us to isolate interesting biological functions from the enormous diversity of any microscopic biological object, not only enzymes," explains Stanislav Terekhov, junior researcher at the Russian Academy of Sciences and one of the authors of the article. The idea came about three years ago, when Stanislav proposed developing a technology to quickly determine the activity of the hundreds of millions of new enzyme variants produced by the group of his colleague, Ivan Smirnov. This working group was engaged in creating and selecting biocatalysts from combinatorial libraries of enzymes that speed up reactions for which there are no natural enzymes. Enzymes for the inactivation of organophosphorus toxins and nerve agents, for example, are increasingly relevant given the growing use of pesticides and chemical warfare agents. Previously, the researchers had to spend years to obtain only a few dozen new proteins. 
Using the photolithography method used to create computer chips, researchers from the Institute of Bioorganic Chemistry and collaborators created microfluidic chips with channels thinner than the diameter of a hair to generate emulsion droplets. The researchers used droplets of water-oil-water double emulsion to isolate single cells, allowing them to study their unique properties. Using microfluidic chips, Stanislav and his colleagues put individual living cells into the droplets, after which the enzymatic and biological activity of the cells in the drops was studied using a fluorescence-activated cell sorter. The fluorescence of the drops helped detect the most active cells. The isolated cells in the drops were then further analyzed using classical molecular-biological methods and modern metabolic methods of analysis, as well as large-scale sequencing performed at the Federal Research and Clinical Centre of Physical-Chemical Medicine. "As a result, we were able to obtain approximately 10^8 drops per hour, and in a day, we managed to isolate the necessary number of enzyme variants with the required activity," says Stanislav. "For example, we managed to improve the enzyme butyrylcholinesterase so that it not only bound the organophosphorus toxin but also hydrolyzed it and could then bind the next toxin molecule. Afterwards, we encapsulated bacterial cells in order to trace which microorganisms inhibit the growth of the highly pathogenic bacterium Staphylococcus aureus. Thus, our screening method is suitable for discovering new drugs that are enzyme-based or based on microorganisms, their metabolites and other biological objects." This universal microfluidic screening platform that the researchers have developed requires significantly less time and financial resources. More information: Stanislav S. Terekhov et al. 
Microfluidic droplet platform for ultrahigh-throughput single-cell screening of biodiversity, Proceedings of the National Academy of Sciences (2017). DOI: 10.1073/pnas.1621226114
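Encapsulating single cells in droplets, as described above, is usually done by random loading, which follows Poisson statistics. The sketch below, with illustrative mean occupancies λ that are not taken from the paper, shows why dilute loading yields mostly empty or single-cell droplets:

```python
# Poisson statistics of random single-cell encapsulation in droplets.
# lambda (mean cells per droplet) is an illustrative loading parameter.
import math

def poisson_pmf(k, lam):
    """P(droplet contains exactly k cells) under Poisson loading."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

def single_cell_fraction(lam):
    """Fraction of droplets containing exactly one cell."""
    return poisson_pmf(1, lam)

if __name__ == "__main__":
    for lam in (0.1, 0.3, 1.0):
        occupied = 1.0 - poisson_pmf(0, lam)
        print(f"lambda={lam}: single-cell={single_cell_fraction(lam):.3f}, "
              f"occupied={occupied:.3f}")
```

At low λ most occupied droplets hold exactly one cell, which is what lets a fluorescence-activated sorter attribute each droplet's signal to a single cell; the cost is that many droplets are empty, which is why throughputs of order 10^8 droplets per hour matter.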


News Article | April 20, 2016
Site: news.yahoo.com

Berlin (AFP) - Climate researchers in Germany said Wednesday they had found a way to more accurately predict the Indian monsoon, which could help maximise the subcontinent's food and hydro-power supplies. Improved forecasts of when the heavy summer rains start and end could help millions of farmers plant crops at the right time, and allow energy providers to estimate when dams and reservoirs fill up, they said. Global warming already affects monsoon stability and will make accurate forecasting ever more important, as deviations can spark droughts and floods, said the Potsdam Institute for Climate Impact Research (PIK). "The timing of Indian summer monsoons, on which the livelihoods of many millions of people depend, is likely becoming more erratic," said project leader Juergen Kurths. "This makes early and accurate forecasting ever more crucial." The scientists said they had developed a novel prediction method based on a network analysis of regional weather data, and would propose their model to the Indian Meteorological Department. "We can predict the beginning of the Indian monsoon two weeks earlier, and the end of it even six weeks earlier than before -- which is quite a breakthrough, given that for the farmers, every day counts," said Veronika Stolbova of PIK and Zurich University. "We found that in North Pakistan and the Eastern Ghats, a mountain range close to the Indian Ocean, changes of temperature and humidity mark a critical transition to monsoon," said Stolbova in a statement. Usually the focus has been on southern India's Kerala region, said Stolbova, lead author of the study published in the journal Geophysical Research Letters. The team said it used an advanced mathematical approach called network analysis of complex non-linear systems, combined with subtle statistical analyses of the early warning signals for the monsoon onset and withdrawal. 
"These precursor phenomena are often buried by huge piles of weather data and hence get overlooked," said PIK guest scientist Elena Surovyatkina of the Russian Academy of Sciences' Space Research Institute. Kurths said they had looked at the climate system "as a network, just like the social networks so many people are using in their everyday life". "On Facebook or Twitter, you can follow how news is spreading, one posting leading to many others. In the climate system, not people but geographical regions are communicating -- admittedly in a quite complex way." Like Facebook postings or tweets that get shared again and again, the scientists explained, temperature and humidity get transported from one place to another by atmospheric flows such as winds. Information about monsoon timing is key for Indian farmers, who usually grow all-important crops like rice, soybean and cotton during the June-September monsoon season. The scientists said they had tested their method with historical monsoon data and achieved correct predictions in more than 70 percent of cases for the start of the monsoon, and in more than 80 percent for its withdrawal. The authors said their method could improve the time horizon of monsoon prediction compared to that now used in India -- both during relatively normal times, and in years when the El Nino phenomenon affects the rainy season.
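The climate-network idea Kurths describes can be sketched in a few lines: grid points become nodes, and an edge is drawn wherever two points' weather time series are strongly correlated. The data, the Pearson-correlation measure, and the 0.8 threshold below are all illustrative assumptions, not the study's actual method or data:

```python
# Toy correlation network over regional weather series, illustrating
# the "regions communicating like a social network" analogy above.
# Series and threshold are synthetic/assumed, for illustration only.
import math
import random

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

def build_network(series, threshold=0.8):
    """Edge list: pairs of regions whose series correlate strongly."""
    nodes = list(series)
    return [(a, b) for i, a in enumerate(nodes) for b in nodes[i + 1:]
            if abs(pearson(series[a], series[b])) >= threshold]

if __name__ == "__main__":
    random.seed(0)
    base = [random.gauss(0, 1) for _ in range(100)]
    series = {
        "North Pakistan": base,
        "Eastern Ghats": [v + random.gauss(0, 0.2) for v in base],
        "Kerala": [random.gauss(0, 1) for _ in range(100)],
    }
    print(build_network(series))
```

In a real analysis the nodes would be many gridded observation points and the linking statistic far more sophisticated, but the structure is the same: precursor signals show up as changes in how strongly certain regions are connected, which is what the team mined for onset and withdrawal forecasts.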


News Article | March 2, 2017
Site: phys.org

Researchers at the Institute of Bioorganic Chemistry of the Russian Academy of Sciences and colleagues have proposed a new mechanism for the dynamic self-organization of spatial structures in embryogenesis. Using mathematical modeling methods, the researchers have demonstrated that this self-organization may be due to a significant difference in the mutual penetration rates (diffusion) of morphogen proteins, which arises because the extracellular matrix binds the biologically active substances (morphogens) to different degrees. The results of this research, which were published in PLOS ONE, create the preconditions for new models describing the variety of forms in the early stages of organism development. In the early stages of development, the embryonic organs are composed of a plurality of identical cells, which then become complex spatial structures whose sizes are much larger than the cells themselves. How does this happen? It is believed that such structures are formed as a result of dynamic self-organization, a process in which morphogen proteins secreted by cells and propagated over long distances play an important role. One of the conditions necessary for self-organization is that the system be far from equilibrium, that is, in a state of strong energy dissipation. Therefore, structures formed during self-organization are often referred to as "dissipative." "The increasing complexity of the embryo can be simplistically reduced to laws of territorial division, in which differentiated cells, i.e. those that have different functions, play different roles in the body. In many cases, instructions for this orderly spatial differentiation of embryonic tissue cells are provided by the formation of dissipative structures. They usually appear as concentration gradients of morphogen proteins. 
As a result, embryonic cells at different locations along this gradient are exposed to different morphogen concentrations—for example, hormones—and thus receive signals to differentiate," explains Andrey Zaraiskii, head of the Laboratory of Molecular Bases of Embryogenesis at the Shemyakin-Ovchinnikov Institute of Bioorganic Chemistry of the Russian Academy of Sciences. It is known that complex structures arise when there are at least two diffusing and nonlinearly interacting morphogens with sharply differing diffusion coefficients, i.e. the rates at which the substances spread. However, real morphogen proteins have similar sizes and approximately the same mobility in aqueous solutions. "What exactly produces the difference between the diffusion rates of the morphogens that is necessary for the self-organization of dissipative structures? For a long time, this question remained open," says Alexey Nesterenko, a researcher from the Lomonosov Moscow State University. "Previously, we have shown that during diffusion in the intercellular space, different morphogens may bind with varying strength to proteoglycans, specific proteins of the extracellular matrix." The researchers assumed that it is this difference in the nonspecific binding of morphogens that may provide a significant difference in their rates of diffusion. "We have applied a mathematical model to demonstrate that a system of two equally mobile morphogens, in which one adsorbs onto the extracellular matrix, can produce spatial structures through dynamic self-organization," explains Maxim Kuznetsov, one of the researchers, from the P.N. Lebedev Physical Institute of the Russian Academy of Sciences. The authors applied the new model to examples in multiple organisms, including the process of cuckoo-catfish coloring. 
"This fish of the mochokid catfish family has a yellow coloring, with numerous black spots scattered throughout its body. The model that we have developed provides an adequate explanation for the formation of the spots and the regular decrease in their number in the tail-to-head direction," says Daria Korotkov, one of the researchers, a student at the Lomonosov Moscow State University. The approach creates the preconditions for the development of mathematical models for increasingly diverse morphological forms in embryogenesis. The researchers are currently working on further experimental confirmation. More information: Alexey M. Nesterenko et al. Morphogene adsorption as a Turing instability regulator: Theoretical analysis and possible applications in multicellular embryonic systems, PLOS ONE (2017). DOI: 10.1371/journal.pone.0171212
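The paper's central quantitative point can be made concrete. Under fast, linear adsorption with a bound-to-free ratio K, a morphogen's effective diffusivity is reduced to D/(1 + K), so two proteins with identical mobility in solution can satisfy the classic Turing requirement of a slow activator and a fast inhibitor. The D and K values below are illustrative assumptions, not numbers from the article:

```python
# How matrix adsorption turns equal free diffusivities into the
# unequal *effective* diffusivities needed for Turing patterning.
# D and the binding ratios K are assumed, illustrative values.

def effective_diffusivity(d_free, k_bind):
    """Effective diffusivity D / (1 + K) under fast linear adsorption,
    where K is the ratio of matrix-bound to free morphogen."""
    return d_free / (1.0 + k_bind)

if __name__ == "__main__":
    D = 10.0           # um^2/s, same for both morphogens in solution
    activator_K = 50.0 # strongly adsorbed on proteoglycans (assumed)
    inhibitor_K = 0.0  # essentially unbound (assumed)
    d_act = effective_diffusivity(D, activator_K)
    d_inh = effective_diffusivity(D, inhibitor_K)
    print(f"D_act = {d_act:.2f}, D_inh = {d_inh:.2f}, "
          f"ratio = {d_inh / d_act:.0f}")
```

A diffusivity ratio well above one, here generated entirely by differential binding rather than by any intrinsic difference between the proteins, is exactly the ingredient the authors identify as the regulator of the Turing instability.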


News Article | February 28, 2017
Site: www.eurekalert.org

Planets are thought to form in the disks of dust and gas found around young stars. But astronomers have struggled to assemble a complete theory of their origin that explains how the initial dust develops into planetary systems. A French-UK-Australian team now think they have the answer, with their simulations showing the formation of 'dust traps' where pebble-sized fragments collect and stick together, to grow into the building blocks of planets. They publish their results in Monthly Notices of the Royal Astronomical Society. Our Solar system, and other planetary systems, began life with disks of gas and dust grains around a young star. The processes that convert these tiny grains, each a few millionths of a metre (a micron) across, into aggregates a few centimetres in size, and the mechanism for making kilometre-sized 'planetesimals' into planetary cores, are both well understood. The intermediate stage, taking pebbles and joining them together into objects the size of asteroids, is less clear, but with more than 3,500 planets already found around other stars, the whole process must be ubiquitous. Dr Jean-Francois Gonzalez, of the Centre de Recherche Astrophysique de Lyon, in France, led the new work. He comments: "Until now we have struggled to explain how pebbles can come together to form planets, and yet we've now discovered huge numbers of planets in orbit around other stars. That set us thinking about how to solve this mystery." There are two main barriers that need to be overcome for pebbles to become planetesimals. Firstly the drag of gas on dust grains in a disk makes them drift rapidly towards the central star, where they are destroyed, leaving no material to form planets. The second challenge is that growing grains can be broken up in high-speed collisions, breaking them into a large number of smaller pieces and reversing the aggregation process. The only locations in planet forming disks where these problems can be overcome are so-called 'dust traps'. 
In these high-pressure regions, the drift motion slows, allowing dust grains to accumulate. With their reduced velocity, the grains can also avoid fragmentation when they collide. Until now, astronomers thought that dust traps could only exist in very specific environments, but the computer simulations run by the team indicate that they are very common. Their model pays particular attention to the way the dust in a disk drags on the gas component. In most astronomical simulations, gas causes the dust to move, but sometimes, in the dustiest settings, the dust acts more strongly on the gas. This effect, known as aerodynamic drag back-reaction, is usually negligible, so up to now has been ignored in studies of growing and fragmenting grains. But its effects become important in dust rich environments, like those found where planets are forming. The effect of the back-reaction is to slow the inward drift of the grains, which gives them time to grow in size. Once large enough, the grains are their own masters, and the gas can no longer govern their motion. The gas, under the influence of this back-reaction, will be pushed outwards and form a high-pressure region: the dust trap. These spontaneous traps then concentrate the grains coming from the outer disk regions, creating a very dense ring of solids, and giving a helping hand to the formation of planets. Gonzalez concludes: "We were thrilled to discover that, with the right ingredients in place, dust traps can form spontaneously, in a wide range of environments. This is a simple and robust solution to a long standing problem in planet formation." Observatories like ALMA in Chile already see bright and dark rings in forming planetary systems that are thought to be dust traps. Gonzalez and his team, and other research groups around the world, now plan to extend the trap model all the way to the formation of planetesimals. 
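The slowing of inward drift in dust-rich regions can be illustrated with the standard expression for the radial drift speed of dust in a gas disk (Nakagawa, Sekiya & Hayashi 1986), v_drift = -2 η v_K St / (St² + (1 + ε)²), where St is the Stokes number and ε the dust-to-gas mass ratio. This is a textbook formula, not the team's simulation code, and all numbers are illustrative:

```python
# Back-reaction slows radial drift: raising the dust-to-gas ratio eps
# shrinks the drift speed, giving grains time to grow (see text above).
# eta (pressure-gradient parameter) and v_k (Keplerian speed, m/s)
# are assumed, illustrative values.

def drift_speed(st, eps, eta=0.002, v_k=30_000.0):
    """Radial drift speed in m/s (negative = inward), with
    dust back-reaction included via the (1 + eps)^2 term."""
    return -2.0 * eta * v_k * st / (st ** 2 + (1.0 + eps) ** 2)

if __name__ == "__main__":
    # St = 1 grains drift fastest; compare gas-dominated vs dust-rich.
    for eps in (0.01, 1.0, 5.0):
        print(f"eps = {eps}: v_drift = {drift_speed(1.0, eps):.1f} m/s")
```

In the dust-poor case the pebbles spiral in rapidly, while at ε of order unity or higher the drift stalls by an order of magnitude, which is the feedback that lets the simulated traps form spontaneously.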
An image of a protoplanetary disk, made using results from the new model, after the formation of a spontaneous dust trap, visible as a bright dust ring. Gas is depicted in blue and dust in red. Credit: Jean-Francois Gonzalez

This cartoon illustrates the stages of the formation mechanism for dust traps. The central star is depicted as yellow, surrounded by the protoplanetary disk, here shown in blue. The dust grains make up the band running through the disk. In the first stage, the dust grains grow in size and move inwards towards the central star. The now pebble-sized larger grains (in the second panel) then pile up and slow down, and in the third stage the gas is pushed outwards by the back-reaction, creating regions where dust accumulates, the so-called dust traps. The traps then allow the pebbles to aggregate to form planetesimals, and eventually planet-sized worlds. Credit: © Volker Schurbert

The new work appears in "Self-induced dust traps: overcoming planet formation barriers", J.-F. Gonzalez, G. Laibe, and S. T. Maddison, Monthly Notices of the Royal Astronomical Society, in press. After the embargo expires the final paper will be available from OUP via http://doi. A preprint of the paper is available from http://www. The Royal Astronomical Society (RAS), founded in 1820, encourages and promotes the study of astronomy, solar-system science, geophysics and closely related branches of science. The RAS organizes scientific meetings, publishes international research and review journals, recognizes outstanding achievements by the award of medals and prizes, maintains an extensive library, supports education through grants and outreach activities and represents UK astronomy nationally and internationally. Its more than 4,000 members (Fellows), a third based overseas, include scientific researchers in universities, observatories and laboratories as well as historians of astronomy and others. 
The RAS accepts papers for its journals based on the principle of peer review, in which fellow experts on the editorial boards accept the paper as worth considering. The Society issues press releases based on a similar principle, but the organisations and scientists concerned have overall responsibility for their content.


News Article | September 28, 2016
Site: www.theenergycollective.com

The first American housing project that puts an urban farm intentionally in the centre of a community has begun to win awards and serve its first harvest. It is billed as California’s first farm-to-table new home community and aims to become a state-of-the-art sustainable urban farming showcase. The development covers 7.4 acres, of which the farm is 5.5 acres, including 4 acres of farmland with organic vegetables, poultry and orchard fruit. Called The Cannery, it is an innovative mixed-use community with 583 residences on the site of a former tomato cannery in Davis, on the outskirts of Sacramento near San Francisco. Retail shops, a recreation centre, outdoor amphitheatres and miles of trails are also included in the plan. The homes are diverse, for all generations, lifestyles and size of family. The mixed housing includes accommodation to buy and rent, detached and attached, both high end and affordable homes. Furthermore, multigenerational life space designs include guesthouses and private quarters. Every single home is within 300 feet of a park or trail/cycle path. The first crop from the urban farm is being harvested this month, sold through a farm shop that is open twice a week. Mary Kimball, the executive director of the Centre for Land-Based Learning, the organisation managing the urban farm, says, “We have three beginning farmers, all graduates of the California Farm Academy, who have started new farming businesses and are now providing the residents of the town very local produce.” “To see The Cannery today becoming a viable farm community is not only personally exciting for me, but also one of the most fulfilling accomplishments in my career,” said Craig McNamara, founder of the Center for Land-Based Learning (CLBL). “The Cannery Urban Farm honours what I believe in most: connecting eaters directly to food.” The project includes a teaching academy for sustainable farming. 
The Centre launched the California Farm Academy five years ago to help those wanting to break into a career in agriculture. “These first farm harvests at The Cannery signify about 6 ½ years of meetings, discussions, flexibility and creativity,” said Kimball. “From the earliest planning stages, there was active collaboration between the builder (The New Home Company), the City of Davis and the CLBL. It’s been a good example of public-private-non-profit partnership.” There are several “agrihoods” around the nation, such as Agritopia in Phoenix and Serenbe in Atlanta, she says, but they tend to have different arrangements with the farmers. “As far as we can tell, The Cannery is the nation’s first farm-to-table housing development focused on beginning farmers.” The Academy offers a full range of training and internship opportunities, focussing not just on growing but also on conservation; among the topics covered are access to land and equipment, the most expensive barriers to starting a farming business. “The cost of land is really expensive,” says Hope Sippola, one of the farmers. “The only way to make it affordable is to lease land through the centre.” This is one example of how the Centre partners with public and private landowners to provide low-cost lease opportunities. Weekly vegetable subscriptions via veggie boxes – Community Supported Agriculture – are being offered. The Cannery is managed by New Home Company, which at the end of June was given a Grand Award for “Residential Community of the Year Master Plan” at the Gold Nugget Awards, the climax of PCBC (Pacific Coast Builders Conference), the building industry’s leading West Coast conference and trade show. “We have worked extremely hard over the past several years to get to this moment,” said Kevin Carson, Northern California President for New Home. 
“The Cannery is unlike any other community in the western United States and it has truly been a rewarding experience to contribute to such an innovative concept.” He was also elected to the Hall of Fame at the Gold Nugget Awards. The New Home team also shared the Residential Community of the Year honour with design consultants who helped shape the vision for The Cannery. The site also reduces reliance on cars by supporting bicycles and walking, limits energy consumption with highly energy-efficient buildings, and features renewable energy production: a 1.5 kW photovoltaic solar system and electric vehicle charging come with every home. Residents can also upgrade to net-zero living. Parks and paths connect neighbours everywhere they turn. The homes themselves are built by Shea Homes, a large West Coast developer founded in 1881. The New Home Company developed the Urban Farm’s infrastructure, including a new agricultural well and conveyance system on the farm. The orchard land and infrastructure are being donated to the City of Davis, which is leasing it inexpensively to CLBL. The Urban Farm is managed by CLBL, which is leasing the farmland to three graduates of its California Farm Academy Program. CLBL’s training program and farm business incubator teaches and mentors new farmers in agricultural production, business planning and marketing. CLBL is partnering with the University of California Division of Agriculture and Natural Resources (UCANR) to develop community services and educational programs. The Sacramento region has had a push in recent years to establish itself as America’s Farm-to-Fork Capital. In August, Sacramento’s Elk Grove City Council expressed unanimous support for a plan to introduce urban, commercial farming within non-agriculturally zoned areas in the city. 
“Urban farming” is described in a city report as “a type of urban agriculture that entails the production of produce that is grown in an urban environment, and primarily for sale or consumed by someone other than the grower.” If the idea becomes a reality in Elk Grove, similar farms could be operated on currently vacant, unimproved, or otherwise underdeveloped parcels in the city. They can be for-profit, non-profit, and/or social enterprises. Their products can be sold at such places as on-site stands, farmers’ markets, grocery stores and restaurants. These farms can also contribute to food banks. Urban farms are already located in many American cities, including Chicago, Baltimore, New York City, San Francisco, Los Angeles, Sacramento and West Sacramento. They are typically found on properties ranging in size from one acre to three acres. Cities need to do much more to feed themselves and reduce the environmental impact of farming, including food miles. Planning new communities around urban farms also reconnects city dwellers to nature and the process of growing food, from which all too often they are distant and alienated. David Thorpe is the author of:


Samoylovich M.I.,Central Research and Technology Institute TechnoMash | Talis A.L.,Russian Academy of Sciences
Acta Crystallographica Section A: Foundations of Crystallography | Year: 2010

It is established that the eight-dimensional lattice E8 and the Mathieu group M12 determine a unique sequence of algebraic geometry constructions which define a special class of simple 24-vertex, 14-face polyhedra with four-, five- and six-edge faces. As an example, the graphs of the ten stereohedra that generate most known tetrahedrally coordinated water cages of gas hydrates have been derived a priori. A structural model is proposed for the phase transition between gas hydrate I and ice. © 2010 International Union of Crystallography Printed in Singapore - all rights reserved.


Bisnovatyi-Kogan G.S.,Russian Academy of Sciences | Lovelace R.V.E.,Cornell University
Astrophysical Journal | Year: 2012

In earlier works we pointed out that the disk's surface layers are non-turbulent and thus highly conducting (or non-diffusive) because the hydrodynamic and/or magnetorotational instabilities are suppressed high in the disk where the magnetic and radiation pressures are larger than the plasma thermal pressure. Here, we calculate the vertical profiles of the stationary accretion flows (with radial and azimuthal components), and the profiles of the large-scale, magnetic field taking into account the turbulent viscosity and diffusivity and the fact that the turbulence vanishes at the surface of the disk. Also, here we require that the radial accretion speed be zero at the disk's surface and we assume that the ratio of the turbulent viscosity to the turbulent magnetic diffusivity is of order unity. Thus, at the disk's surface there are three boundary conditions. As a result, for a fixed dimensionless viscosity α-value, we find that there is a definite relation between the ratio of the accretion power going into magnetic disk winds to the viscous power dissipation and the midplane plasma-β, which is the ratio of the plasma to magnetic pressure in the disk. For a specific disk model with of order unity we find that the critical value required for a stationary solution is βc ≈ 2.4r/(αh), where h is the disk's half thickness. For weaker magnetic fields, β > βc, we argue that the poloidal field will advect outward while for β < βc it will advect inward. Alternatively, if the disk wind is negligible (), there are stationary solutions with β ≫ βc. © 2012. The American Astronomical Society. All rights reserved.
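The critical midplane plasma-β quoted in the abstract, βc ≈ 2.4 r/(αh), can be evaluated directly; the sample viscosity parameter α and aspect ratio r/h below are illustrative choices, not values from the paper:

```python
# Evaluating the critical plasma-beta from the abstract's relation
# beta_c ~ 2.4 * r / (alpha * h). alpha and r/h are assumed values
# typical of thin-disk models, used only for illustration.

def beta_critical(r_over_h, alpha):
    """Critical midplane plasma-beta separating inward from
    outward advection of the poloidal field."""
    return 2.4 * r_over_h / alpha

if __name__ == "__main__":
    alpha = 0.1     # dimensionless turbulent viscosity (assumed)
    r_over_h = 20.0 # radius over disk half-thickness (assumed)
    print(f"beta_c = {beta_critical(r_over_h, alpha):.0f}")
```

For these sample values the threshold sits at βc of several hundred, so even a fairly weak large-scale field (β below that threshold) would, on this criterion, be advected inward rather than outward.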


Gor'kov L.P.,Florida State University | Gor'kov L.P.,Russian Academy of Sciences
Physical Review B - Condensed Matter and Materials Physics | Year: 2013

The Fermi-liquid-like (FL) resistivity recently observed in clean Hg1201 below the pseudogap temperature was related to carriers at the nodal points on the Fermi surface (FS) (N. Barišić, arXiv:1207.1504, doi:10.1073/pnas.1301989110). We show that this has important implications for the electronic spectrum of underdoped (UD) cuprates as a whole. Photoemission experiments (angle-resolved photoemission spectroscopy) in other cuprates picture the spectrum as "metallic arcs" separated from each other by regions with large energy gaps. We rigorously solve the kinetic equation in such a model. The Fermi arcs' carriers contribute to the FL resistivity, if scattering between the opposite nodal points admits the umklapp processes. The Hall coefficient defines the effective number of carriers on the arcs and at weak magnetic fields it has a positive sign. All parameters that determine the arcs' widths are measurable experimentally. We conclude that the T2 resistivity gives support to the Fermi arcs' concept and argue that the idea of a reconstructed FS in UD cuprates is not consistent with the latter. © 2013 American Physical Society.


Fedushkin I.L.,Russian Academy of Sciences | Nikipelov A.S.,Russian Academy of Sciences | Lyssenko K.A.,RAS Nesmeyanov Institute of Organoelement Compounds
Journal of the American Chemical Society | Year: 2010

Different alkynes add reversibly to the gallium complex of the dpp-Bian dianion. The reactions proceed with addition of the alkynes across the Ga-N-C fragment, resulting in new carbon-carbon and carbon-gallium bonds. In the case of adducts 3 and 4, full elimination of the alkyne takes place at T < 100 °C, whereas for adducts 5 and 6 it occurs on heating to ca. 200 °C. © 2010 American Chemical Society.


Blagodatsky S.,University of Aberdeen | Blagodatsky S.,Russian Academy of Sciences | Smith P.,University of Aberdeen
Soil Biology and Biochemistry | Year: 2012

One of the issues hampering progress in modelling greenhouse gas (GHG) emissions from soils is a lack of co-ordination between models originating from different disciplines: soil physics and soil biology. We have reviewed recent advances in modelling both gaseous transport and the biochemical processes in the soil that lead to the emission of the main biogenic GHGs: CO2, N2O, and CH4. The precise coupling of gaseous transport and biochemistry is necessary because CH4 and N2O can be both produced and consumed in soil, and the eventual flux to the atmosphere depends on the position of reaction sites and the escape pathways for these gases. The CO2 production rate depends in turn on the efficiency of oxygen transport in the soil. Principles leading to successful simulation are: keeping a balanced level of detail in coupled model systems describing biochemical reactions and transport; reducing unnecessary complexity by using the most essential relationships elucidated by comprehensive statistical model testing; and considering all transport mechanisms in relation to prevailing ecological conditions, i.e., diffusion and convection in the air and liquid phases, plant-mediated transport, and ebullition. It is important to model all three major GHGs in accord with the description of O2 and N2 transport and concentration in soil. This helps: i) to estimate the full global warming potential; ii) to apply model algorithms considering partial gas pressures and gas species interactions; and iii) to describe the O2 effect on the biochemical processes in soil. We discuss approaches linking the simple and more complex process-oriented models, and propose a strategy for up-scaling model results from soil aggregate to profile and to the field/catchment. © 2011 Elsevier Ltd.
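The coupled transport-biochemistry principle the review calls for can be caricatured with a minimal one-dimensional diffusion-production sketch; the grid spacing, diffusivity, and production rates below are illustrative assumptions, not values from the review:

```python
import numpy as np

# Minimal 1-D soil-profile sketch: gas diffusion plus a depth-dependent
# production term, as a caricature of coupled transport-biochemistry models.
# All parameter values are illustrative assumptions.

nz, dz, dt = 50, 0.01, 1.0           # 50 cells of 1 cm, 1 s time steps
D = 1e-5                             # effective gas diffusivity in soil (m^2/s)
prod = np.linspace(2e-6, 0.0, nz)    # production rate, strongest near the surface
c = np.zeros(nz)                     # gas concentration; atmosphere held at c = 0

for _ in range(20000):
    lap = np.zeros(nz)               # explicit finite-difference diffusion step
    lap[1:-1] = (c[2:] - 2 * c[1:-1] + c[:-2]) / dz**2
    lap[0] = (c[1] - 2 * c[0] + 0.0) / dz**2   # open upper boundary (atmosphere)
    lap[-1] = (c[-2] - c[-1]) / dz**2          # no-flux lower boundary
    c += dt * (D * lap + prod)

surface_flux = D * (c[0] - 0.0) / dz  # flux escaping to the atmosphere
print(surface_flux)
```

The point of the sketch is the coupling: where the gas is produced relative to its escape pathway determines the surface flux, which is exactly why transport and biochemistry cannot be modelled separately.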


Habchi J.,Aix - Marseille University | Habchi J.,CNRS Architecture and Functions of Biological Macromolecules Lab | Tompa P.,Vrije Universiteit Brussel | Tompa P.,Hungarian Academy of Sciences | And 4 more authors.
Chemical Reviews | Year: 2014

Proteins are the major component of the living cell. They play crucial roles in the maintenance of life, and their dysfunctions are known to cause different pathologies. Simple amino acid propensities reflect some basic physical or sequence features. Propensity-based predictors rely on simple statistics of amino acid propensity, on the physical/chemical features of amino acids, or on a preliminary concept of the physical background of disorder. Regions of missing electron density in the PDB are generally short, as long disordered regions prevent crystallization. As a result, short disorder is overrepresented in databases of disordered regions, and hence these predictors tend to perform better at predicting short disorder than long disorder. Predictors can also be classified by the binary nature of the prediction. Examples of binary predictors are the charge-hydropathy (CH) plot and the cumulative distribution function (CDF) analysis.
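A minimal sketch of the CH-plot style of binary prediction follows; the Kyte-Doolittle hydropathy scale (rescaled to [0, 1]) and the commonly quoted boundary line <H> = (<R> + 1.151)/2.785 are used here as assumptions, and real predictors are considerably more careful:

```python
# Minimal charge-hydropathy (CH) plot sketch for binary disorder prediction.
# The Kyte-Doolittle scale and the boundary constants are treated as assumptions.

KD = {'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5, 'Q': -3.5,
      'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5, 'L': 3.8, 'K': -3.9,
      'M': 1.9, 'F': 2.8, 'P': -1.6, 'S': -0.8, 'T': -0.7, 'W': -0.9,
      'Y': -1.3, 'V': 4.2}

def ch_plot_disordered(seq):
    """True if the sequence falls on the disordered side of the CH boundary."""
    h = sum((KD[a] + 4.5) / 9.0 for a in seq) / len(seq)   # mean scaled hydropathy
    r = abs(sum(+1 if a in 'KR' else -1 if a in 'DE' else 0
                for a in seq)) / len(seq)                  # mean net charge
    return h < (r + 1.151) / 2.785

print(ch_plot_disordered('EEKEEGSPEEKE' * 5))   # charged, low hydropathy
print(ch_plot_disordered('ILVFLAVLIVAG' * 5))   # hydrophobic, core-like
```

Highly charged, low-hydropathy sequences land on the disordered side of the line, while hydrophobic sequences land on the ordered side, which is the whole intuition behind the binary CH classifier.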


Patent
Russian Academy of Sciences and Uop Llc | Date: 2012-12-10

One embodiment can be an apparatus for contacting a first liquid and a second liquid. The apparatus can include a vessel, which in turn, may include at least one wall, a first member, and a funnical frustum. The at least one wall can form a perimeter about an interior space. Also, the first member can be positioned within the interior space and form a plurality of passageways. Each passageway may be bordered by a first side and a second side from an outer region to an interior surface of the first member. Generally, the funnical frustum is positioned downstream of the first member to facilitate a swirling of at least one of the first and second liquids.


Grant
Agency: Cordis | Branch: FP7 | Program: CSA-CA | Phase: KBBE.2010.1.4-06 | Award Amount: 560.40K | Year: 2011

Soil microbes play key roles in ecosystems and influence ecosystem processes, including nutrient acquisition, nitrogen cycling, carbon cycling, and soil formation. Soil microbes make up the majority of soil organisms and account for a large portion of the genetic diversity on Earth. Many microbes live in association with plant roots and can promote plant productivity and provide protection against stress and disease. Other microbes act as pathogens and reduce plant growth. Sound environmental and agricultural practices favour a balanced population of biotic agents in the rhizosphere. A well-balanced microbial population favours colonization of soil and nutrient uptake, limits the occurrence of pathogens (biotic stress), and supports the development of commensal or symbiotic relationships between plants and micro-organisms, with a tempering effect on abiotic stress (good water-retention capacity, available macro- and micronutrients). Cultivated soils have been extensively studied. Pristine rhizospheres are less studied, although these ecological niches contain important micro-flora and -fauna and can help elucidate the complex ecological microbe-plant relationships in the soil and the rhizosphere. The purposes of this project are: to organize cooperation between specialised collections containing microbes isolated from the rhizosphere, and to constitute a common wide-range pool of micro-biodiversity exploitable for research and industry: the Pan-European Rhizosphere Resources Network (PERN). Collaboration between culture collections holding material from West-European ecosystems and Russian Biological Resource Centres holding micro-organisms from East-European biotopes will create a coordinated network of combined human expertise and culture collections. The network will offer scientists facilitated access to a wide range of micro-organisms from a broad spectrum of ecological zones spread over an extensive geographical range to study and exploit.


Grant
Agency: Cordis | Branch: H2020 | Program: RIA | Phase: NFRP-12-2015 | Award Amount: 3.05M | Year: 2015

HoNESt (History of Nuclear Energy and Society) involves an interdisciplinary team of experienced researchers and 24 high-profile research institutions. HoNESt's goal is to conduct a three-year interdisciplinary analysis of the experience of nuclear developments and their relationship to contemporary society, with the aim of improving the understanding of their dynamics over the last 60 years. HoNESt's results will assist the current debate on future energy sources and the transition to affordable, secure, and clean energy production. Civil society's interaction with nuclear developments changes over time, and it is locally, nationally and transnationally specific. HoNESt will embrace the complexity of political, technological and economic challenges: safety, risk perception and communication, public engagement, media framing, social movements, etc. Research on these interactions has thus far been mostly fragmented. We will develop a pioneering integrated interdisciplinary approach, conceptually informed by Large Technological Systems (LTS) and Integrated Socio-technical Systems (IST) and based on a close and innovative collaboration of historians and social scientists in this field. HoNESt will first collect extensive historical data from over 20 countries. These data will be jointly analyzed by historians and social scientists through the lens of an innovative integrated approach, in order to improve our understanding of the mechanisms underlying decision making and associated citizen engagement with nuclear power. Through an innovative application of backcasting techniques, HoNESt will bring novel content to the debate on sustainable nuclear engagement futures. Looking backwards from these futures to the present, HoNESt will strategize and plan how suitable engagement futures could be achieved. HoNESt will engage key stakeholders from industry, policy makers and civil society in a structured dialogue to insert the results into the public debate on nuclear energy.


News Article | January 18, 2017
Site: www.techtimes.com

The Russian Federal Space Agency (Roscosmos) and NASA may team up for a mission to Venus. The two would initiate research into the scorching hot planet's mysteries, including whether Venus is hostile to life. The mission has been dubbed Venera-D and would send an orbiter and a lander to the planet. The D in the name stands for dolgozhivushaya, which means "long lasting" in Russian. NASA passed over two proposed Venus missions, VERITAS and DAVINCI, earlier this month; both were designed to examine the surface and atmosphere of Earth's neighbor. Venera-D is one more Venus mission, with two of the world's best-equipped space agencies teaming up for the purpose. The proposed joint mission would analyze Venus' atmosphere for three years with an orbiter, as well as a lander that would operate for a few hours on the planet's burning surface. Although Russia wanted more time on the planet's surface, the costs involved made this impossible. "Russia has always been interested in going back to Venus," said David Senske of NASA's Jet Propulsion Laboratory in Pasadena, California. Senske added that NASA got involved about three years ago, when Russia asked the U.S. space agency about its willingness to collaborate on the mission. NASA and Roscosmos are anticipating the possibility of finding microbial life and hope to reach a deeper understanding of potentially habitable environments in Venus' atmosphere. "A joint working team put forward several scenarios of the project in October, with Russian and American competences complementing each other. After that we discussed the matter with NASA Deputy Administrator Dava Newman, who confirmed NASA's interest to the Venera-D project," said Lev Zeleny, director of the Space Research Institute of the Russian Academy of Sciences. Space.com reported that Russia would be in the pilot's seat for this mission, developing the launch systems, the lander and the orbiter.
NASA is also expected to contribute additional scientific instruments to aid the mission. Discussions on the possibility of cooperation with NASA on the project have been ongoing for nearly two years. The last landing on Venus dates back to 1985, when a capsule of the Soviet Vega 2 probe reached the planet. The Venera-D mission is expected to launch in the second half of the 2020s. An international team of scientists will deliver its final report on the mission to the two space agencies later this month. © 2017 Tech Times, All rights reserved. Do not reproduce without permission.


News Article | December 22, 2016
Site: phys.org

Despite 40 years of study, cerium hexaboride still presents research challenges due to its unusual properties. It is a compound that belongs to the class of strongly correlated materials, i.e., materials whose properties cannot be described adequately without accounting for the interactions between electrons (aka electronic correlation). Many theories have been proposed to explain the anomalous physical properties of cerium hexaboride, but they all proved unable to predict the results of ESR experiments. It might be the case that the theory of ESR in strongly correlated systems needs to be substantially improved in order to account for the exceptional behavior of cerium hexaboride. ESR spectroscopy is used to study samples that contain particles with unpaired spins, viz., electrons and radicals. A sample is placed in a steady magnetic field and exposed to microwave radiation. An ESR spectrum of the sample is obtained, from which data on its chemical structure and properties can be extracted. Absolute calibration of ESR spectra in units of magnetic permeability and ESR spectral line shape analysis enable scientists to find the spectroscopic parameters: g-factor (gyromagnetic ratio), line width (spin relaxation time), and oscillating magnetization or dynamic magnetic susceptibility. ESR in cerium hexaboride was reported in an earlier study by the same authors. They developed a unique experimental technique capable of picking up the ESR signal from cerium hexaboride and similar materials. Conventional ESR spectrometers often face considerable difficulties detecting signals from strongly correlated materials. The experimental findings were unexpected. For one thing, their measurements showed that the oscillating magnetization along the [100] crystallographic direction may exceed the total static magnetization of the sample. 
This runs contrary to commonsense expectations (and theoretical predictions), since oscillating magnetization is theoretically supposed to be one of the constituents of the magnetic moment of the sample, i.e., it must be less than the total magnetization. According to the scientists, a simple way to explain this finding would be to say that there are some additional, unaccounted-for interactions between free electrons and the electrons in the 4f subshell of cerium ions. This qualitative explanation, however, needs to be confirmed by further theoretical calculations. Another unexpected result of the experiment is the correlation between the angular dependences of the magnetoresistance and the ESR spectral line width with respect to the external magnetic field (under crystal sample rotation). The correlation is remarkable, as the above parameters have a completely different physical nature. Therefore, this correspondence was not anticipated. The authors of the study offer the following explanation: Since ESR line width is largely determined by spin fluctuations, the value of the material's magnetoresistance may likewise be dominated by band electron scattering on spin fluctuations. The measurements reported in the study were made possible thanks to improvements to the equipment design introduced by Marat Gilmanov and Alexander Samarin, doctoral students at MIPT working under the supervision of Alexey Semeno, a senior research fellow at Prokhorov General Physics Institute of the Russian Academy of Sciences (GPI RAS), who also graduated from MIPT. "We have achieved a greater degree of sensitivity and stability for this class of materials than any other experimenters in the world. This means that no one else can perform ESR measurements of strongly correlated metals as accurately as we can. And it is our improved equipment that enables us to see what others cannot," says MIPT's Prof. 
Sergey Demishev, who also heads the Department of Low Temperatures and Cryogenic Engineering at Prokhorov General Physics Institute. More information: A. V. Semeno et al, Magnetic resonance anisotropy in CeB6: an entangled state of the art, Scientific Reports (2016). DOI: 10.1038/srep39196
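For context, the g-factor mentioned above follows from the ESR resonance condition hf = g·μB·B. A minimal sketch with generic X-band numbers (the frequency/field pair is illustrative, not data from the cerium hexaboride experiments):

```python
# Resonance condition h*f = g * mu_B * B  =>  g = h*f / (mu_B * B).
# The frequency/field pair below is a generic X-band example, not data
# from the cerium hexaboride study.

H_PLANCK = 6.62607015e-34   # Planck constant, J*s
MU_BOHR = 9.2740100783e-24  # Bohr magneton, J/T

def g_factor(freq_hz, field_t):
    """g-factor from the microwave frequency and the resonance field."""
    return H_PLANCK * freq_hz / (MU_BOHR * field_t)

print(g_factor(9.4e9, 0.335))   # close to the free-electron value ~2.0
```

Shifts of the measured g-factor away from the free-electron value, together with the line width, are what carry the physics in such experiments.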


News Article | November 16, 2016
Site: www.eurekalert.org

Researchers in Japan and Russia have found snail species that counterattack predators by swinging their shells, suggesting the importance of predator-prey interactions in animal evolution. Until now, snails were thought to protectively withdraw into their shells when attacked. However, an international research team has found a pair of snail species that use their shells like a club to hit predators and knock them over. Evolutionary scientists have long asked how predator-prey interactions affect the evolution of prey. However, they have yet to resolve whether this interaction induces the diversification of prey species and their morphological features and behaviours, and if so, why. Researchers from Japan's Hokkaido University and Tohoku University collaborated with colleagues at the Russian Academy of Sciences to closely study snail species from the genus Karaftohelix in both countries. They observed each species' defensive behaviours against their predator, the carabid beetle, and conducted shell measurements and species comparisons. The team used DNA sequencing to analyse how closely related the species were to each other. They found that two snail species, Karaftohelix (Ezohelix) gainesi in Hokkaido, Japan and Karaftohelix selskii in the Far East region of Russia, swing their shells to hit the carabid beetles, demonstrating a unique, active defence strategy, while other closely related snail species withdraw their soft bodies into their shells and wait until the opponent stops attacking. "The difference in their defensive behaviours is also reflected in their shell morphology, indicating that their behaviours and shell shapes are interrelated to optimize the preferred defence strategy," says Yuta Morii, the study's lead author. By analysing DNA sequences of each species, the team also discovered that the two active-or-passive defensive methods evolved independently in the Japanese and Russian species.
Their findings suggest that the selection of each method has led to the diversification of the behaviours, shapes and species of the snails. This study, published in the journal Scientific Reports, is one of only a few to report on land snails using their shells for active defence by swinging them against a predator. "Our study showcases the importance of predator-prey interactions along with resource competition as major selective forces affecting the evolution of morphological and behavioural traits in organisms," Morii adds.


News Article | March 30, 2016
Site: www.rdmag.com

Physicists at MIPT (Moscow Institute of Physics and Technology) and the Space Research Institute of the Russian Academy of Sciences have developed optical technology for the "correction" of light coming from distant stars, which will significantly improve the "seeing" of telescopes and could therefore enable us to directly observe exoplanets that are Earth twins. Their work has been published in the Journal of Astronomical Telescopes, Instruments, and Systems (JATIS). The first exoplanets (extrasolar planets), planets outside our solar system, were discovered in the late 20th century, and more than two thousand have now been detected. It is almost impossible to see the faint light of the planets themselves without special tools: it is "overshadowed" by the radiation of the parent star. Therefore, exoplanets are discovered by indirect methods: by registering the weak periodic fluctuations in the luminosity of the star when a planet passes in front of its disk (the transit method), or by measuring the spectral shifts of the star itself caused by the planet's gravity (the radial-velocity method). In the late 2000s, astronomers were for the first time able to directly obtain images of exoplanets. So far, about 65 such images have been taken. To obtain them, scientists use stellar coronagraphs, first created in the 1930s for observations of the solar corona outside eclipses. These devices have a focal mask, an "artificial moon" inside them, which blocks part of the field of view: it covers the solar disk, allowing the dim solar corona to be seen. To repeat this technique for stars, we need a much higher level of accuracy and a telescope of much higher resolution to accommodate a coronagraph. The apparent size of the orbit of the Earth-type planets nearest to us is about 0.1 arcseconds.
This is close to the resolution limit of modern space telescopes (for example, the resolution of the Hubble space telescope is about 0.05 arcseconds). To remove the effects of atmospheric distortions in ground-based telescopes, scientists use adaptive optics: mirrors that can change shape while adjusting to the state of the atmosphere. In some cases, the mirror shape can be maintained with an accuracy of 1 nanometer, but such systems do not keep pace with the dynamics of atmospheric changes and are extremely expensive. A team led by Alexander Tavrov, an associate professor at MIPT and the head of the Planetary Astronomy Laboratory at the Space Research Institute of the Russian Academy of Sciences, has found a way to obtain the highest resolution while using relatively simple and inexpensive adaptive optics. They used the idea of an EUI (extremely unbalanced interferometer) proposed by one of the article's authors, Jun Nishikawa, a Japanese scientist working at the National Astronomical Observatory of Japan. Conventional interferometry combines waves of approximately equal intensity into a single wavefront to produce a clear, sharp image. In the EUI, light is divided into two beams (weak and strong) whose amplitudes have a preset ratio of approximately 1:10. The weak beam passes through the adaptive optics system, after which the two beams are brought together again and interfere with each other. As a result, the weak beam, so to speak, "smooths out" the light of the strong beam, which can significantly reduce both the distortion of the wavefront and the contribution of stellar speckle patterns (a random interference pattern). "Through the use of a relatively simple optical set-up, we can obtain the image contrast at the quality necessary for the direct observation of Earth-type planets by means of coronagraphs.
Of course, compared to foreign developments, our system requires a more complex control technique, but at the same time it is much less dependent on temperature stability, which greatly simplifies its operation in space," team leader Alexander Tavrov said. With the help of computer simulation, they have determined the approximate characteristics of the system they developed. According to the calculations, the resulting scheme provides an image contrast of about 10⁻⁹. Furthermore, it was demonstrated that the EUI shows achromatism, i.e., the reduction of aberrations with increasing wavelength. In the future, the scientists plan to create a laboratory prototype and perform a number of experiments on it. As Alexander Tavrov notes, "We want to see the distant worlds through a telescope, but it implies that the distant worlds might see us as well. Advanced technology, within only some 50 to 100 years, could be enough to do it many times more precisely than we are able to do it now."
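The smoothing role of the weak beam can be sketched numerically. The roughly 1:10 amplitude ratio follows the article, while the error amplitude and the first-order phase-cancellation rule below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
phi_err = 0.01 * rng.standard_normal(1000)  # residual wavefront error (rad), assumed scale

A_s, A_w = 1.0, 0.1  # strong:weak beam amplitudes at the ~1:10 ratio of the EUI

# Strong beam alone: the wavefront still carries the full phase error.
E_plain = A_s * np.exp(1j * phi_err)

# The weak beam passes through the adaptive optics, which sets its phase so that
# the imaginary parts cancel:  A_s*sin(phi_err) + A_w*sin(phi_c) = 0.
phi_c = -np.arcsin(np.clip(A_s * np.sin(phi_err) / A_w, -1.0, 1.0))
E_corr = A_s * np.exp(1j * phi_err) + A_w * np.exp(1j * phi_c)

# Residual phase ripple of the combined wavefront (what drives stellar speckle):
print(np.angle(E_plain).std(), np.angle(E_corr).std())
```

In this toy picture the weak beam can only null phase errors up to roughly A_w/A_s radians, which is why a strongly unbalanced ratio still suffices once the residual errors are small.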






News Article | December 7, 2015
Site: motherboard.vice.com

On the main drag of Khuzhir, the principal town of Olkhon Island in Lake Baikal, Siberia, the dust blown from buses delivering tourists billows through the windswept streets. By the upturned chassis of vehicles picked clean of parts, babushkas wade through piles of trash, searching for anything worth claiming. In the shadows of a former prison, sunbathers line a beach around clumps of algae and rusted metal detritus. While Buryat shamans, who revere the lake as holy, perform ceremonies around totem poles festooned with ribbons on a bluff above town, in a ship graveyard below them, children toss empty vodka bottles at each other for entertainment. Stretching for 395 miles, thirty-million-year-old Baikal is the world’s deepest lake, its volume roughly equivalent to the five Great Lakes of North America combined. A UNESCO World Heritage Site, Baikal contains one-fifth of the unfrozen freshwater on the planet. Its unique closed ecosystem is home to over 3,500 species and subspecies of animals and plants, roughly sixty percent of which are not to be found anywhere else on Earth. The lake is now subject to an unprecedented catalogue of threats. I visited the region this summer, when wildfires the likes of which had never been seen before were raging, leading locals to describe the scene as feeling “like doomsday.” The lake faces a range of environmental issues: phosphate run-off from unplanned tourist developments and poor sewage treatment, the rampant growth of algae mats and a sponge die-off, and low inflow that saw water levels hit critical marks this year, down 40cm since 2013. A waterfront industrial plant that produced cellulose fiber for Soviet aircraft tyres is now closed, but dotted around the dimly-lit derelicts lie 13 toxic reservoirs, each the size of two football fields. 
Now, Baikal is further endangered by Mongolian plans for hydropower plants which, Professor Marianne Moore from Wellesley College told Motherboard, could potentially starve the deep waters of the lake of oxygen. Detritus on the beach near the tourist town of Khuzhir. Image: Stephen M. Bland. A few miles from Khuzhir, beyond an abandoned fish factory that had operated as a gulag, I came across the rotting corpses of Nerpa seals which had washed ashore. According to Greenpeace, the number of these unique seals, one of only three entirely freshwater seal species, has decreased by approximately a third since the beginning of the 1990s. Commenting on this "huge die-off," Greenpeace Russia campaigner Roman Vazhenkov noted that although the seals had died of disease, chlorine compounds found in the creatures' fatty tissues suggest their immune systems had been weakened. I spoke to Professor Oleg Timoshkin from the Limnological Institute of the Russian Academy of Sciences about the levels of pollution threatening the lake. "In some areas of Northern Baikal, a crust of rotten spirogyra algae up to ten meters [wide] covers the once cozy beaches," he told me. "Even cows and horses refuse to drink the water." Emissions from a failed sewage treatment plant in the town of Severobaikalsk on the northern shore have caused a bank of foul-smelling algae to form that stretches for six miles. In addition to this, locals report that for years, sewage trucks pulled up daily to dispose of wastewater at another dilapidated station near the shoreline. "The [new] plant failed in large part because the railroad industry dumped a bunch of cleaning products into the sewage treatment facility," Moore told Motherboard. "They were cleaning railroad cars with heavy duty detergent and it killed the microbes that helped remove nutrients. Untreated sewage has been entering the shallow waters up there." Discarded piles of litter at the abandoned gulag fish factory, Khuzhir. Image: Stephen M.
Bland. I took a hydrofoil across the lake to Ust-Barguzin, the boat cutting through swathes of smoke rising from the mainland. In this remote hamlet of feral dogs and high-walled compounds, we docked at a port choked with the rusting hulks of half-sunken ships. In an effort to dampen the acrid fumes, some residents had hung wet cloths across their windows. Despite being home to some 7,000 people, the sewage treatment facilities in Ust-Barguzin are, according to Timoshkin, "completely destroyed." With tourist numbers around Baikal rapidly increasing from 300,000 in 2009 to 1.3 million in 2015, according to the Siberian Times, infrastructure is struggling to keep up with demand. "Down at the southern end of the lake, there’s a town called Listvianka," Moore continued. "It’s a tourist mecca and the hotels there, many of which have gone up in the last fifteen years, none of them have treatment facilities, so sewage is going right into the lake." During his group’s latest expedition, Timoshkin and his colleagues also found that a pathogen had been killing the lake's sponges, which naturally filter the water. In the area under investigation, "from 30 to 100 percent of branched Lubomirskia baicalensis specimens were either sick or damaged and died," Timoshkin commented. On the shoreline near the monogorod (single-factory town) of Baikalsk lie the ruins of the Baikal Paper and Pulp Mill. The plant, from which chlorinated waste found its way into the lake, finally closed at the end of 2013—for financial, not ecological reasons—but 6.2 million tons of toxic waste still remain in the aforementioned reservoirs. The Baikal trough is located on a rift zone, and should an earthquake strike, the contaminated holding ponds could easily rupture, causing an ecological disaster. Buryat shaman performing a ceremony on the bluff above Maloe More. Image: Stephen M.
Bland. In 2014, the Moscow Times reported that the site of the old factory would be turned into a "Russian Disneyland" called "Precious Russia" following the biggest clean-up in the country’s history, projected to take six years. With the economic crisis continuing, though, those plans have now been shelved. "I just don’t think Russia has the political will," Moore told Motherboard. Now, with Mongolia planning to build a series of massive dams upstream, Lake Baikal is facing a challenge which Moore believes could make all other problems "pale in comparison." The plans, which are being evaluated as part of a World Bank-funded environmental and social impact assessment, include a project to dam the Selenga River, which provides nearly 50 percent of the lake’s water. "If this dam were built then this might cause major damage to Baikal’s ecosystem," Professor Anson Mackay of University College London told Motherboard. "The Selenga is the lake’s largest tributary. Should flow be reduced [there could be] long-term catastrophic consequences for the ecology and wildlife in and around the lake." When NGOs met with local authorities in Irkutsk and Baikalsk last week, Mongolian plans for the so-called Shuren Hydropower Project were high on the agenda. As the Convention for the Protection of the World Cultural and Natural Heritage states that countries should not take actions that could affect World Heritage Sites in other nations, environmental groups have questioned the legality of the project.
As this row rumbles on, the Russian authorities have continued to drain the lake for an existing hydroelectric station downstream, which is at least partly culpable for water levels that Jennie Sutton from NGO Baikal Wave describes as “critically low.” A source, who spoke on condition of anonymity because they did not have authorization to speak publicly, said that although the state has poured 54 billion roubles into protecting the lake over the last three years, mismanagement of government funds has seen the state of the region continue to deteriorate. Disregarding evidence to the contrary, however, the most recent report from the Ministry of Environment and Ecology on the state of Baikal maintains that the ecosystem has not undergone “any significant changes.” Ruminating on what needs to be done to save the lake, which he describes as “seriously ill,” Timoshkin says he is calling for the introduction of a more effective system of government monitoring. He is not alone in concluding that the current approach is only fit to “diagnose cancer in the last stage.” UNESCO has accused Moscow of "dereliction of duty," per Deutsche Presse-Agentur, with regard to its handling of the Baikal region. As the sun set, having completed their daily pilgrimage to the sights, a clutch of minivan drivers returned tourists to Khuzhir. Beneath ominous skies, Russian holidaymakers hit the bars, discarding their trash as they tottered down the dung-spattered streets. Illegal campfires illuminated the dusky woods along the Maloe More Strait. Mats of algae and flotsam washed ashore. With more of Baikal than ever before open to tourism, dependence on this source of income continues to grow. The future is uncertain for a lake that until recently was considered the cleanest on Earth. Clarification: This story originally referred to Lake Baikal as the "largest" lake in the world. It's the largest by volume, unless you count the Caspian Sea, which is a matter of debate.
Baikal, however, is definitely the deepest freshwater body on Earth, which makes for a clearer headline, so we've referred to it as such. The body of the story never referred to Lake Baikal as the world's largest lake. Apologies for any confusion.


News Article | February 28, 2017
Site: www.eurekalert.org

TALLAHASSEE, Fla. -- A Florida State University researcher will lead a joint U.S.-Russia project that will examine the effect of space travel on astronauts' vision, an ongoing problem that NASA has been eager to solve. "The issue of vision impairment is very important to NASA," said Michael Delp, dean of the College of Human Sciences and the lead investigator on the project. "When astronauts go into space, they lose bone density and muscle mass, but that's mostly preventable or recoverable. The vision may not be, and not everyone is willing to risk their sight." The work will be funded by a $750,000 grant from NASA. In recent years, NASA has spearheaded research into vision problems experienced by astronauts returning from space. On shorter trips, astronauts have often experienced minor vision changes that eventually self-corrected. But longer trips to space have caused more structural changes to the eye that have permanently altered astronauts' ability to see. Delp and colleagues at the Russian Academy of Sciences previously collaborated on a study that found space travel may diminish the body's ability to regulate blood flowing to the brain, which could contribute to temporary or permanent changes in vision for astronauts. For this study, researchers will send male mice into space on a SpaceX rocket in August 2017. The mice will live at the International Space Station for 30 days and then return to Earth to be recovered by the U.S. Navy off the California coast. Scientists will then examine changes to lymphatic and vascular functions to see how they may affect vision. "We're looking at cerebral arteries, cerebral veins and the lymphatics that drain the brain to see if vascular function in any of those areas is changed with microgravity," Delp said. "We're also looking at the blood-brain barrier. So if the vessels become leaky and allow fluid out into the brain, that might be another reason we're seeing vision problems in astronauts."
Delp's research has been funded by NASA for several decades. Prior to becoming dean of the College of Human Sciences in 2013, he worked as a professor at the University of Florida, West Virginia University and Texas A&M University. He received his doctoral degree from the University of Georgia. The grant will fund three graduate students and one postdoctoral researcher at Florida State University, as well as scientists and graduate students from Texas A&M University.


News Article | November 17, 2016
Site: www.sciencedaily.com

Researchers in Japan and Russia have found some snail species that counterattack predators by swinging their shells, suggesting the importance of predator-prey interactions in animal evolution. Until now, snails were thought to protectively withdraw into their shells when attacked. However, an international research team has found a pair of snail species that use their shells like a club to hit predators and knock them over. Evolutionary scientists have long asked how predator-prey interactions affect the evolution of prey. However, they are yet to resolve whether this interaction drives the diversification of prey species and their morphological features and behaviours, and if so, why. Researchers from Japan's Hokkaido University and Tohoku University collaborated with colleagues at the Russian Academy of Sciences to closely study snail species from the genus Karaftohelix in both countries. They observed each species' defensive behaviours against their predator, the carabid beetle, and conducted shell measurements and species comparisons. The team used DNA sequencing to analyse how closely related the species were to each other. They found that two snail species -- Karaftohelix (Ezohelix) gainesi in Hokkaido, Japan and Karaftohelix selskii in the Far East region of Russia -- swing their shells to hit the carabid beetles, demonstrating a unique, active defence strategy, while other closely related snail species withdraw their soft bodies into their shells and wait until the opponent stops attacking. "The difference in their defensive behaviours is also reflected in their shell morphology, indicating that their behaviours and shell shapes are interrelated to optimize the preferred defence strategy," says Yuta Morii, the study's lead author. By analysing DNA sequences of each species, the team also discovered that the two active-or-passive defensive methods evolved independently in the Japanese and Russian species.
Their findings suggest that the selection of each method has led to the diversification of the behaviours, shapes and species of the snails. This study, published in the journal Scientific Reports, is one of only a few to report on land snails using their shells for active defence by swinging them against a predator. "Our study showcases the importance of predator-prey interactions along with resource competition as major selective forces affecting the evolution of morphological and behavioural traits in organisms," Morii adds.


News Article | September 22, 2016
Site: phys.org

Scientists from the Space Research Institute of the Russian Academy of Sciences (IKI RAS), the Moscow Institute of Physics and Technology (MIPT), and the Max Planck Institute for Astrophysics (MPA) have shown that diffusion of gas particles during the formation of the first structures in the early universe could have impacted the relative abundance of helium and hydrogen in the first galaxies.


News Article | November 16, 2016
Site: phys.org

Until now, snails were thought to protectively withdraw into their shells when attacked. However, an international research team has found a pair of snail species that use their shells like a club to hit predators and knock them over. Evolutionary scientists have long asked how predator-prey interactions affect the evolution of prey. However, they are yet to resolve whether this interaction drives the diversification of prey species and their morphological features and behaviours, and if so, why. Researchers from Japan's Hokkaido University and Tohoku University collaborated with colleagues at the Russian Academy of Sciences to closely study snail species from the genus Karaftohelix in both countries. They observed each species' defensive behaviours against their predator, the carabid beetle, and conducted shell measurements and species comparisons. The team used DNA sequencing to analyse how closely related the species were to each other. They found that two snail species—Karaftohelix (Ezohelix) gainesi in Hokkaido, Japan and Karaftohelix selskii in the Far East region of Russia—swing their shells to hit the carabid beetles, demonstrating a unique, active defence strategy, while other closely related snail species withdraw their soft bodies into their shells and wait until the opponent stops attacking. "The difference in their defensive behaviours is also reflected in their shell morphology, indicating that their behaviours and shell shapes are interrelated to optimize the preferred defence strategy," says Yuta Morii, the study's lead author. By analysing DNA sequences of each species, the team also discovered that the two active-or-passive defensive methods evolved independently in the Japanese and Russian species. Their findings suggest that the selection of each method has led to the diversification of the behaviours, shapes and species of the snails.
This study, published in the journal Scientific Reports, is one of only a few to report on land snails using their shells for active defence by swinging them against a predator. "Our study showcases the importance of predator-prey interactions along with resource competition as major selective forces affecting the evolution of morphological and behavioural traits in organisms," Morii adds. More information: Yuta Morii et al. Parallel evolution of passive and active defence in land snails, Scientific Reports (2016). DOI: 10.1038/srep35600


News Article | November 17, 2016
Site: www.gizmag.com

Faced with an incoming predator, most snails don't really have a lot of options other than to retreat into their shells. But scientists have discovered that members of certain species won't take such provocations lying down, instead aggressively swinging their shells from side to side to gain the upper hand on an assailant. The species in question hail from Russia and Japan, and are known as Karaftohelix selskii and Karaftohelix (Ezohelix) gainesi, respectively. Scientists at Japan's Hokkaido University and Tohoku University teamed up with researchers at the Russian Academy of Sciences to study how the two snails behave when under threat from a carabid beetle, and were surprised to find that they were capable of launching some pretty brutal counterattacks. The scientists observed how once the snails sensed the carabid beetle in the vicinity, they would begin to swing their shells about to clear the area and knock the beetle down. They say that this appears to be a highly unusual trait, with other closely related snail species instead hiding away inside the shell until the attacker stands down. "The difference in their defensive behaviors is also reflected in their shell morphology, indicating that their behaviors and shell shapes are interrelated to optimize the preferred defense strategy," says Yuta Morii, the study's lead author. What's more, the researchers analyzed the DNA sequences of the snails and found that these behaviors evolved independently of one another. That these traits evolved in parallel in separate lineages, the researchers say, highlights the significance of predator-prey relations in shaping the evolution of organisms. The research was published in the journal Scientific Reports, and you can see an Ezohelix gainesi snail and carabid beetle go at it in the video below.


News Article | February 21, 2017
Site: phys.org

Ionised matter, like plasma, still holds secrets. Physicists working with plasma jets, made of a stream of ionised matter, have just discovered a new phenomenon. Eduard Sosnin from the Institute of High Current Electronics, Russian Academy of Sciences in Tomsk, Russia, and colleagues found a new type of discharge phenomenon in an atmospheric pressure plasma. It has been dubbed apokamp - from the Greek words for 'off' and 'bend' - because it appears at a perpendicular angle to where plasma jets bend. Their findings have recently been published in EPJ D and are particularly relevant for the development of novel applications in medicine, health care and materials processing, because they involve air at normal atmospheric pressure, which would make them cheaper than applications using inert gases or nitrogen. The authors established the conditions for the phenomenon to occur. It takes two electrodes positioned at an angle to each other, together with electric field lines which are curved upward between the two electrodes. It requires not one but both electrodes to be at high voltage to obtain an apokamp plasma jet, which typically develops from the bending point of the discharge channel. The apokamp can vary from a single needle to a 6-7-cm-long conical jet attached to the arc of the plasma current channel. Using high-speed photography data on the apokamp dynamics, the authors elucidated its nature as consisting of ionisation waves - so-called plasma bullets - that move with a velocity of 100-220 km/s. Such plasma bullets have previously been detected in inert gases and in nitrogen in the presence of negatively charged gas. The next step would require gathering sufficient data to build a model that can further explain the apokamp. The apokamp may also help explain the blue jets identified in 1994 in the upper atmosphere, strange upward-facing jets that develop from thunderstorm clouds. More information: Eduard A.
Sosnin et al, Dynamics of apokamp-type atmospheric pressure plasma jets, The European Physical Journal D (2017). DOI: 10.1140/epjd/e2016-70466-0


The European Space Agency has announced details of the second phase of the ExoMars mission in 2020. Comprising two science elements, the mission will introduce a rover and a surface platform for Martian exploration. During the mission, the rover will scout for life and organic materials across Mars, while the surface platform will stay stationary. To be managed by Roscosmos and the Space Research Institute of the Russian Academy of Sciences, the surface platform will study the landing site's outer environment. Designed for a lifetime of one Earth year, the surface platform will carry a suite of instruments and sensors and will have a mass of 827.9 kilograms (1,825.2 pounds), including a scientific payload of 45 kilograms (99 pounds). According to ESA, the main mandate of the ExoMars program is to investigate whether life ever existed on Mars. The ExoMars program has two missions. The first involves the Trace Gas Orbiter with an entry, descent and landing demonstrator module named Schiaparelli. The former was launched on March 14, 2016 and was in the news recently for the crash of Schiaparelli. The new details concern the second mission, in which a rover will be launched in 2020. According to ESA, the priorities of the surface platform investigations will involve climate monitoring and context imaging, as well as a series of atmospheric investigations. The surface platform instruments will also examine the subsurface water distribution along with the exchange of volatile elements and compounds between the red planet's surface and atmosphere. The internal structure will also be studied via geophysical investigations. The on-board instruments will also detect and monitor the planet's radiation. The measurements will be compared to those taken by the Fine Resolution Epithermal Neutron Detector on the Trace Gas Orbiter. There is good support for the Mars mission from the European scientific community, which sent nine proposals to ESA after a call in March.
The ESA, on its part, approved six European elements in 2015: two European-led instruments and four sensors for the Russian-led instruments. The Lander Radioscience experiment, or LaRa, and the Habitability, Brine Irradiation and Temperature package, or HABIT, are the two European-led instruments that will be on the surface platform. LaRa will probe the planet's internal structure through precise measurements of its rotation and orientation, achieved by studying two-way Doppler frequency shifts between the Earth and the surface platform. This will reveal variations in angular momentum resulting from the redistribution of masses, such as when ice migrates from the polar areas to the Martian atmosphere. HABIT will focus on water vapor in the atmosphere, with impacts coming from seasonal variations in the ground, the UV radiation environment and air temperatures. Other sensor packages from the European side will track pressure, plasma environment, humidity, dust and local magnetic fields. ESA's new update comes days after a high-resolution image was released by NASA's Mars Reconnaissance Orbiter, revealing the area where the Schiaparelli module crashed. Some experts attribute the Schiaparelli crash to a software failure in the on-board computer. ESA says the dark spot in the images, about 2.4 meters (7.9 feet) across, is a crater created by the 660-pound object when it hit the ground at a speed of several hundred miles per hour. © 2017 Tech Times, All rights reserved. Do not reproduce without permission.


Turitsyn S.K.,Aston University | Bale B.G.,Aston University | Bale B.G.,Lincoln Laboratory | Fedoruk M.P.,Russian Academy of Sciences
Physics Reports | Year: 2012

Nonlinear systems with periodic variations of nonlinearity and/or dispersion occur in a variety of physical problems and engineering applications. The mathematical concept of dispersion-managed solitons has already made an impact on the development of fibre communications, optical signal processing and laser science. Here we overview the field of dispersion-managed solitons, starting from mathematical theories of Hamiltonian and dissipative systems, and then discuss recent advances in the practical implementation of this concept in fibre optics and lasers. © 2012 Elsevier B.V.


Khomich V.G.,Russian Academy of Sciences | Boriskina N.G.,Russian Academy of Sciences | Santosh M.,China University of Geosciences
Gondwana Research | Year: 2014

The world-class gold-ore districts of South-East Russia and North-East China are located above the frontal and peripheral regions of subducted oceanic lithosphere that accumulated as stagnant slabs in the mantle transition zone during paleo-Pacific plate subduction. The highly productive ore systems of the Aldan (South Yakutiya), Baley (Transbaikalia), and Zhao-Ye (Jiaodong) metallogenic belts, which are widely separated in space but formed at the same time, are associated with similar mechanisms of huge magma flare-ups in the Middle Jurassic-Early Cretaceous driven by deep mantle processes. Their common features and crust-mantle interaction signature offer important clues to mantle dynamics rooted along the periphery of the subducted slabs in a mega-convergent zone in East Asia. © 2014 International Association for Gondwana Research.


Turitsyn S.K.,Aston University | Turitsyn S.K.,Novosibirsk State University | Babin S.A.,Novosibirsk State University | Churkin D.V.,Aston University | And 4 more authors.
Physics Reports | Year: 2014

The concept of random lasers exploiting multiple scattering of photons in an amplifying disordered medium in order to generate coherent light without a traditional laser resonator has attracted a great deal of attention in recent years. This research area lies at the interface of the fundamental theory of disordered systems and laser science. The idea was originally proposed in the context of astrophysics in the 1960s by V.S. Letokhov, who studied scattering with "negative absorption" in interstellar molecular clouds. Research on random lasers has since developed into a mature experimental and theoretical field. A simple design of such lasers would be promising for potential applications. However, in traditional random lasers the properties of the output radiation are typically characterized by complex features in the spatial, spectral and time domains, making them less attractive than standard laser systems in terms of practical applications. Recently, an interesting and novel type of one-dimensional random laser that operates in a conventional telecommunication fibre without any pre-designed resonator mirrors, the random distributed feedback fibre laser, was demonstrated. The positive feedback required for laser generation in random fibre lasers is provided by Rayleigh scattering from the inhomogeneities of the refractive index that are naturally present in silica glass. In the proposed laser concept, the randomly backscattered light is amplified through the Raman effect, providing distributed gain over distances up to 100 km. Although the effective reflection due to Rayleigh scattering is extremely small (~0.1%), the lasing threshold may be exceeded when a sufficiently large distributed Raman gain is provided. Such a random distributed feedback fibre laser has a number of interesting and attractive features.
The fibre waveguide geometry provides transverse confinement, and effectively one-dimensional random distributed feedback leads to the generation of a stationary near-Gaussian beam with a narrow spectrum. A random distributed feedback fibre laser has efficiency and performance comparable to, and even exceeding, those of similar conventional fibre lasers. The key features of the generated radiation of random distributed feedback fibre lasers include: a stationary narrow-band continuous modeless spectrum that is free of mode competition, nonlinear power broadening, and an output beam with a Gaussian profile in the fundamental transverse mode (generated in both single-mode and multi-mode fibres). This review presents the current status of research in the field of random fibre lasers and shows their potential and perspectives. We start with an introductory overview of conventional distributed feedback lasers and traditional random lasers to set the stage for a discussion of random fibre lasers. We then present a theoretical analysis and experimental studies of various random fibre laser configurations, including widely tunable, multi-wavelength, narrow-band generation, and random fibre lasers operating in different spectral bands in the 1-1.6 μm range. Then we discuss existing and future applications of random fibre lasers, including telecommunication and long-reach distributed sensor systems. A theoretical description of random lasers is very challenging and is strongly linked with the theory of disordered systems and kinetic theory. We outline two key models governing the generation of random fibre lasers: the average power balance model and the nonlinear Schrödinger equation based model. Recently invented random distributed feedback fibre lasers represent a new and exciting field of research that brings together such diverse areas of science as laser physics, the theory of disordered systems, fibre optics and nonlinear science.
Stable random generation in optical fibre opens up new possibilities for research on wave transport and localization in disordered media. We hope that this review will provide background information for research in various fields and will stimulate cross-disciplinary collaborations on random fibre lasers. © 2014 Elsevier B.V.
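The average power balance approach mentioned in the abstract can be illustrated with a minimal numerical sketch. The sketch below forward-integrates a generic steady-state Raman pump/Stokes power-balance system with a simple Euler scheme; the equations are the textbook co-propagating form, and all parameter values, the frequency ratio and the function name are illustrative assumptions, not figures from the review.

```python
import numpy as np

def raman_power_balance(Pp0, Ps0, length_km, steps, gR=0.5, ap=0.06, as_=0.046):
    """Integrate a minimal steady-state power-balance model for co-propagating
    Raman pump (Pp) and Stokes (Ps) waves:
        dPp/dz = -(nu_p/nu_s) * gR * Pp * Ps - ap * Pp
        dPs/dz =  gR * Pp * Ps - as_ * Ps
    Powers in W, z in km, gR in 1/(W km), losses ap/as_ in 1/km.
    All default values are illustrative, not measured fibre parameters."""
    nu_ratio = 1.07  # assumed pump/Stokes frequency ratio (~1455/1550 nm)
    dz = length_km / steps
    Pp, Ps = float(Pp0), float(Ps0)
    for _ in range(steps):
        dPp = (-nu_ratio * gR * Pp * Ps - ap * Pp) * dz  # pump depletion + loss
        dPs = (gR * Pp * Ps - as_ * Ps) * dz             # Raman gain - loss
        Pp, Ps = Pp + dPp, Ps + dPs
    return Pp, Ps
```

With a 2 W pump launched into 10 km of fibre, a weak Stokes seed grows by several orders of magnitude while the pump decays through loss and depletion, which is the qualitative behaviour such balance models are used to capture.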


Smirnov S.V.,Novosibirsk State University | Churkin D.V.,Aston University | Churkin D.V.,Russian Academy of Sciences
Optics Express | Year: 2013

For the first time, we report full numerical NLSE-based modeling of the generation properties of a random distributed feedback fiber laser based on Rayleigh scattering. The model, which takes the random backscattering into account via its average strength only, describes well the power and spectral properties of random DFB fiber lasers. The influence of dispersion and nonlinearity on spectral and statistical properties is investigated. Evidence of non-Gaussian intensity statistics is found. © 2013 Optical Society of America.


Mekler V.,Rutgers University | Severinov K.,Rutgers University | Severinov K.,Russian Academy of Sciences
Nucleic Acids Research | Year: 2013

RNA polymerase (RNAP) melts promoter DNA to form the transcription-competent open promoter complex (RPo). Interaction of the RNAP σ subunit with non-template strand bases of a conserved -10 element (consensus sequence T(-12)A(-11)T(-10)A(-9)A(-8)T(-7)) is an important source of energy driving localized promoter melting. Here, we used an RNAP molecular beacon assay to investigate interdependencies of RNAP interactions with -10 element nucleotides. The results reveal a strong cooperation between RNAP interactions with individual -10 element non-template strand nucleotides and indicate that recognition of the -10 element bases occurs only when the free energy of the overall RNAP -10 element binding reaches a certain threshold level. The threshold-like mode of -10 element recognition may be related to the energetic cost of attaining a conformation of the -10 element that is recognizable by RNAP. The RNAP interaction with the T/A(-12) base pair was found to be strongly stimulated by RNAP interactions with other -10 element bases and with the promoter spacer between the -10 and -35 promoter elements. The data also indicate that an unmelted -10 promoter element can impair RNAP interactions with promoter DNA upstream of the -11 position. We suggest that cooperativity and threshold effects are important factors guiding the dynamics and selectivity of RPo formation. © 2013 The Author(s).


Batalin I.A.,Russian Academy of Sciences | Bering K.,Masaryk University
Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics | Year: 2015

In this paper we work out the explicit form of the change of variables that reproduces an arbitrary change of gauge in a higher-order Lagrangian formalism. © 2015 The Authors.


Maslov D.L.,University of Florida | Yudson V.I.,Russian Academy of Sciences | Chubukov A.V.,University of Wisconsin - Madison
Physical Review Letters | Year: 2011

We analyze the effect of the electron-electron interaction on the resistivity of a metal near a Pomeranchuk quantum phase transition (QPT). We show that umklapp processes are not effective near a QPT, and one must consider both interactions and disorder to obtain a finite and T-dependent resistivity. By power counting, the correction to the residual resistivity at low T scales as A T^((D+2)/3) near a z = 3 QPT. We show, however, that A = 0 for a simply connected, convex Fermi surface in 2D, due to the hidden integrability of the electron motion. We argue that A > 0 in a two-band (s-d) model and propose this model as an explanation for the observed T^((D+2)/3) behavior. © 2011 American Physical Society.
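For reference, the power-counting exponent quoted above works out as follows for the physically relevant spatial dimensions (a trivial numeric check, not part of the paper's derivation):

```python
# Resistivity correction near a z = 3 Pomeranchuk QPT scales as T^((D+2)/3):
# T^(4/3) in two dimensions and the familiar T^(5/3) in three dimensions.
for D in (2, 3):
    exponent = (D + 2) / 3
    print(f"D = {D}: resistivity correction ~ T^({D}+2)/3 = T^{exponent:.3f}")
```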


Vatnik I.D.,Russian Academy of Sciences | Churkin D.V.,Aston University | Babin S.A.,Novosibirsk State University
Optics Express | Year: 2012

We present a comprehensive study of the power output characteristics of random distributed feedback Raman fiber lasers. The calculated optimal slope efficiency of the backward wave generation in the one-arm configuration is shown to be as high as ∼90% for a 1 W threshold. Nevertheless, in real applications the presence of even a small reflection at the fiber ends can appreciably degrade the power performance. The developed numerical model describes the experimental data well. © 2012 Optical Society of America.
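Slope efficiency enters through the standard linear above-threshold relation P_out = η(P_pump − P_th). The minimal sketch below uses the abstract's ~90% slope and 1 W threshold purely as illustrative inputs to that relation, not as a reproduction of the paper's model.

```python
# Linear above-threshold approximation for laser output power.
def output_power(p_pump_w, slope=0.90, p_threshold_w=1.0):
    """P_out = slope * (P_pump - P_th), clipped to zero below threshold."""
    return max(0.0, slope * (p_pump_w - p_threshold_w))

for p in (0.5, 1.0, 2.0, 5.0):
    print(f"pump {p:4.1f} W -> output {output_power(p):.2f} W")
```

Parasitic end reflections of the kind the abstract warns about effectively lower the slope and shift the threshold, which is why even small reflections visibly degrade the output in practice.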


Kamenshchik A.Y.,National Institute of Nuclear Physics, Italy | Kamenshchik A.Y.,Russian Academy of Sciences
Classical and Quantum Gravity | Year: 2013

The development of dark energy models has stimulated interest in cosmological singularities that differ from the traditional Big Bang and Big Crunch singularities. We review a broad class of phenomena connected with soft cosmological singularities in classical and quantum cosmology. We discuss the classification of singularities from the geometrical point of view and from the point of view of the behavior of finite-size objects crossing such singularities. We discuss in some detail the quantum and classical cosmology of models based on perfect fluids (anti-Chaplygin gas and anti-Chaplygin gas plus dust), of models based on Born-Infeld-type fields, and of the model of a scalar field with a potential inversely proportional to the field itself. We also dwell on the phenomenon of phantom divide line crossing in scalar field models with cusped potentials. Then we discuss the Friedmann equations modified by quantum corrections to the effective action of the models under consideration and the influence of such modifications on the nature and the existence of soft singularities. We also review the quantum cosmology of models where the initial quantum state of the universe is represented by a density matrix (a mixed state). Finally, we discuss the exotic singularities arising in braneworld cosmological models. © 2013 IOP Publishing Ltd.


Mashonkina L.,Universitats Sternwarte Munich | Mashonkina L.,Russian Academy of Sciences
Astronomy and Astrophysics | Year: 2013

Context. Magnesium abundances of cool stars with different metallicities are important for understanding galactic chemical evolution. Aims. This study tests atomic data used in stellar magnesium abundance analyses. Methods. We evaluate non-local thermodynamic equilibrium (NLTE) line formation for Mg I, using the most up-to-date theoretical and experimental atomic data available so far, and check the Mg abundances from individual lines in the Sun, four well-studied A-type stars, and three reference metal-poor stars. Results. With the adopted gf-values, NLTE abundances derived from the Mg I 4703 Å, 5528 Å, and Mg Ib lines are consistent within 0.05 dex for each A-type star. The same four Mg I lines in the solar spectrum give consistent NLTE abundances at log(N_Mg/N_H) = -4.45, when correcting the van der Waals damping constants inferred from perturbation theory. Inelastic Mg+H collisions as treated by Barklem, Belyaev, Spielfiedel, Guitou, and Feautrier serve as an efficient thermalizing process for the statistical equilibrium of Mg I in the atmospheres of metal-poor stars. The use of the Mg+H collision data improves the Mg abundance determinations for HD 84937 and HD 122563, though it does not completely remove the differences between different lines. © 2013 ESO.
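For readers used to the A(X) = 12 + log(N_X/N_H) abundance scale common in stellar spectroscopy, the solar value quoted above converts directly:

```python
# Convert the quoted solar magnesium abundance, log(N_Mg/N_H) = -4.45,
# to the A(X) = 12 + log(N_X/N_H) scale (hydrogen fixed at A(H) = 12).
log_nmg_nh = -4.45
A_mg = 12 + log_nmg_nh
print(f"A(Mg) = {A_mg:.2f}")
```

The result, A(Mg) = 7.55, is close to commonly adopted solar magnesium abundances (~7.5-7.6).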


Grant
Agency: Cordis | Branch: FP7 | Program: CP-FP-SICA | Phase: SSH.2013.4.1-1 | Award Amount: 3.27M | Year: 2014

CASCADE will: scrutinise Caucasian states and societies for the root causes of conflict and insecurity; conduct synergetic research on democracy and security in the Caucasus and investigate their interaction; analyse the links between the region and its wider neighbourhood; provide forward-looking analysis of regional security and democratisation processes; and develop a set of proposals on how the EU could enhance its role in the region. The project will de-compartmentalise research on the Caucasus by exploring linkages between societal challenges, political developments and conflicts, and by investigating the interactions between the North and South Caucasus, as well as between the Caucasus and its wider neighbourhood. On the basis of a strong comparative and interdisciplinary approach, we will seek to provide a more accurate understanding of how democracy and security are perceived, understood, experienced and exploited as political and social resources by Caucasus actors and other actors involved in the region. Cooperation between researchers with backgrounds in different humanities and social science disciplines will uncover the various facets of the relationship between democracy and security. Drawing upon extensive fieldwork in the North and South Caucasus, including in the conflict areas and de facto states, CASCADE will generate a wealth of empirical data as well as new and important insights into security and democracy in the Caucasus. The project's outcomes will also be largely policy-driven, and CASCADE's impact will be considerable in terms of shaping EU policy toward the Caucasus. The project will act as a knowledge hub to spur debate and bring together academic and policy communities from Europe and the Caucasus.


Archibald J.D.,San Diego State University | Averianov A.,Russian Academy of Sciences
Zoological Journal of the Linnean Society | Year: 2012

The eutherian, family-level clade Zhelestidae is consistently, although weakly, supported in five phylogenetic analyses that we performed on all Cretaceous eutherians. Additionally, in the fifth analysis, which included some placentals, Zhelestidae is placed as a stem eutherian clade rather than grouping within the crown clade Placentalia, as argued in some previous studies but not others. The subfamily Zhelestinae from Dzharakuduk (Turonian-?Coniacian), Kyzylkum Desert, Uzbekistan, includes Zhelestes temirkazyk, Aspanlestes aptap, Parazhelestes mynbulakensis (= Sorlestes budan), Parazhelestes robustus, and Eoungulatum kudukensis. Additional taxa provisionally recognized as Zhelestidae incertae sedis are: Sheikhdzheilia rezvyii (Cenomanian, Uzbekistan), Borisodon kara gen. nov. (= 'Sorlestes' kara) (Turonian, Kazakhstan), Lainodon orueetxebarriai (Campanian or Maastrichtian, Spain), Labes quintanillensis (Maastrichtian, Spain), Labes garimondi (Campanian, France), Gallolestes pachymandibularis (Campanian, Mexico), Gallolestes agujaensis (Campanian, USA), and Avitotherium utahensis (Campanian, USA). Eozhelestes mangit (Cenomanian, Uzbekistan) is a questionable zhelestid (?Zhelestidae), possibly stem to Zhelestidae. Paranyctoides (Asia and North America) is often linked to Zhelestidae. Alostera, previously referred to Zhelestidae, is a eutherian of unknown affinities. Associated skull fragments permitted the first reconstruction of a zhelestid (Aspanlestes) skull. Abundant dentulous and edentulous dentaries allowed examination of dental replacement from the canine posteriorly in Dzharakuduk zhelestids as follows: [dc, p1, dp2, p3, dp4, dp5] -> m1 -> p2 -> c, p4, m2 -> p5 -> m3. © 2012 The Linnean Society of London.


Grant
Agency: Cordis | Branch: FP7 | Program: CP-IP | Phase: ENV.2011.1.1.3-1 | Award Amount: 9.34M | Year: 2011

Permafrost is defined as ground that remains continuously at or below 0 °C for at least two consecutive years; some 24% of the land surface in the Northern Hemisphere is classified as permafrost. In the northern high latitudes, strong warming has been observed over recent decades, and climate models project strong future warming. A projected decline in the extent of permafrost will have a major impact on the Earth system, affecting global climate through the mobilization of carbon and nitrogen stored in permafrost. PAGE21 aims to understand and quantify the vulnerability of permafrost environments to a changing global climate, and to investigate the feedback mechanisms associated with increasing greenhouse gas emissions from permafrost zones. This research makes use of a unique set of Arctic permafrost investigations performed at stations that span the full range of Arctic bioclimatic zones. The project brings together the best European permafrost researchers and eminent scientists from Canada, Russia, the USA, and Japan. In a truly original approach we combine field measurements of permafrost processes, pools, and fluxes with remote sensing data and global climate models at local, regional and, for the first time, pan-Arctic scales. The output from this research will help to advance our understanding of permafrost processes at multiple scales, resulting in improvements in global numerical permafrost modeling and the ensuing future climate projections, as well as in the assessment of stabilisation scenarios. These outputs will feed into global assessments and international monitoring programs, in which most of the consortium members are already actively participating in leading roles. This project will, in particular, provide projections on a pan-Arctic scale of greenhouse gas releases from the projected thawing of permafrost terrain during the 21st century, with direct implications for global policy discussions on emission reduction targets.


Grant
Agency: Cordis | Branch: FP7 | Program: CP-FP-SICA | Phase: SPA.2010.3.2-01 | Award Amount: 668.74K | Year: 2011

ZAPS delivers a set of procedures and products for forest resource assessment and monitoring. The team consists of a balanced contribution of leading experts from Europe and Russia in the field of Earth observation for forestry and includes a Russian Forest Research Institute which also applies remote sensing techniques. ZAPS investigates and cross-validates methodologies using both Russian and European Earth observation data to develop a worldwide observation system. Earth observation data include ENVISAT MERIS and ASAR in different acquisition modes, METEOR-M and RESURS-DK1. The methodologies include state-of-the-art optical and radar retrieval algorithms and their improvement, as well as the investigation of innovative synergistic approaches. Products include biomass maps and biomass change for the years 2007-2008-2009 on a local scale, a biomass and improved land cover map on the regional scale, and, in addition, a 1 km scale land cover map as input to a carbon accounting model. These products serve the inventory community (e.g. the FAO Forest Resource Assessment) as well as the Kyoto Protocol implementation bodies. ZAPS is unique with respect to its team composition, Earth observation operational expertise, interdisciplinarity and envisioned improved synergistic products. In specified regions, land cover change dynamics, specifically land abandonment, will be investigated. A unique feature of ZAPS is the investigation of up- and down-scaling effects of data and products to improve the overall accuracies. A bilingual web portal (as already exists at partner locations) provides access to findings, data and products for the outside forest and Earth observation community.


Grant
Agency: Cordis | Branch: FP7 | Program: CP | Phase: SPA.2010.2.3-1 | Award Amount: 1.56M | Year: 2011

Geomagnetically induced currents (GIC), occurring during magnetic storms, pose a natural disaster risk to the reliable operation of electric power transmission grids, a European critical infrastructure as defined by Council Directive 2008/114/EC. The EURISGIC project will produce the first European-wide real-time prototype forecast service of GIC in power systems, based on in-situ solar wind observations and comprehensive simulations of the Earth's magnetosphere. By utilising geomagnetic recordings, we will also derive the first map of the statistical risk of large GIC throughout Europe. Because the most intense geomagnetic storms constitute the most remarkable threat, with a risk of power grid blackouts and destruction of transformers, we will also investigate worst-case GIC scenarios based on historical data. EURISGIC will exploit the knowledge and advanced modelling methods developed in Europe and North America. Close communication throughout the project with a stakeholder advisory group will help in directing the research and outreach appropriately. The results of this study will help in the future design of more robust and secure protection against GIC in European power transmission grids, which are anticipated to become increasingly interconnected and geographically wider. GIC are observed in power transmission grids, oil and gas pipelines, telecommunication cables and railways. This project focuses on high-voltage power transmission networks, which are probably currently the most susceptible to GIC effects. Geomagnetic storms cover large geographical regions, at times the whole globe. Consequently, power networks are rightly described as European critical infrastructures whose disruption or destruction would have a significant impact on at least two member states (Council Directive 2008/114/EC; Article 2b; Annex I).


News Article | February 7, 2017
Site: www.techtimes.com

A top science body in Russia has dubbed homeopathy a "pseudoscience," arguing that this alternative medicine treatment has no scientific basis. In a new memorandum, the Commission to Combat Pseudoscience and Falsification of Scientific Research, which is part of the Russian Academy of Sciences, said the methods of homeopathy contradict biological, chemical, and physical laws. "Homeopathic diagnosis and treatment should be qualified as pseudoscientific," the memorandum said. Furthermore, the Russian commission urges people not to confuse homeopathy with phytotherapy, the therapeutic use of plants, which involves treating diseases with plants that have relevant medicinal properties. Homeopathy is a system of alternative medicine developed by Samuel Hahnemann in 1796. It is based on the belief that a substance that causes symptoms of an illness in healthy people will cure the same symptoms in sick people. For instance, a fly agaric mushroom causes hallucinations when eaten. According to homeopathy, the same mushroom should be used in small doses to treat hallucinations. However, there is no scientific evidence that the mushroom has any such medicinal qualities. An Australian report published in 2016 concluded that homeopathy was effective in 0 out of 68 illnesses studied, calling the practice a "therapeutic dead-end." "Though that body [of evidence] was mixed in size and quality, no clear signal of effectiveness emerged from the higher quality studies," said Paul Glasziou, a professor of medicine who led the study. In 2009, the World Health Organization (WHO) said that it does not support the use of homeopathy against diseases such as tuberculosis, HIV, infant diarrhea, influenza, and malaria. In 2015, the Food and Drug Administration (FDA) announced plans to regulate homeopathic products, which currently reach the market without FDA evaluation for safety or effectiveness.
In September 2016, the FDA warned against the use of homeopathic gels and tablets, as they may pose a risk to children and infants. In 2016, His Royal Highness the Prince of Wales shared at a conference that he had used homeopathy on his farm animals. The Prince said he chose homeopathy for his farm animals to reduce the use of antibiotics and prevent the emergence of antibiotic resistance, which has become a global threat. Meanwhile, the National Center for Complementary and Integrative Health (NCCIH) advises that homeopathy should not be used as a substitute for proven conventional care or as a reason to postpone seeing a health care provider about a medical issue. Consumers should also consult a health care provider before taking any homeopathic product. © 2017 Tech Times, All rights reserved. Do not reproduce without permission.


News Article | November 10, 2016
Site: www.eurekalert.org

Scientists from the Moscow Institute of Physics and Technology (MIPT), Semenov Institute of Chemical Physics of the Russian Academy of Sciences (ICP RAS), and Chuiko Institute of Surface Chemistry of the National Academy of Sciences of Ukraine (ISC NASU) have proposed a model nanosized dipole photomotor based on the phenomenon of light-induced charge redistribution. Triggered by a laser pulse, this tiny device is capable of directed motion at a record speed and is powerful enough to carry a certain load. The research findings were published in the Journal of Chemical Physics. "The unprecedented characteristics of dipole photomotors based on semiconductor nanoclusters offer the prospect of more than just addressing a certain scarcity of the translational photomotors family. These devices could actually be applied wherever rapid nanoparticle transport is required. In chemistry and physics, they could help develop new analytical and synthetic instruments, while in biology and medicine they could be used to deliver drugs to diseased tissues, improve gene therapy strategies, and so on," says Prof. Leonid Trakhtenberg of the Department of Molecular and Chemical Physics at MIPT, who is the leader of the research team and the head of the Laboratory of Functional Nanocomposites at ICP RAS. Prof. Trakhtenberg collaborated with Prof. Viktor Rozenbaum, who heads the Department of Theory of Nanostructured Systems at ISC NASU, to develop the theory of photoinduced molecular transport. This theory provides a framework for the design of nanomachines, whose motion can be controlled by a laser. The scientists have established the relationship between several model parameters (e.g., particle dimensions, photoexcitation conditions etc.) and the key performance characteristic of the device--its average velocity. Directed nanomotors have prototypes in nature. 
Living organisms make use of protein devices driven by external nonequilibrium processes of a different nature, which are known as Brownian, or molecular motors. They are capable of converting random Brownian motion into directed translational motion, reciprocation, or rotation. Brownian motors are involved in muscle contraction, cell mobility (flagellar motility of bacteria), and the intra- and intercellular transport of organelles and relatively large particles of various substances (e.g., phagocytosis, or "cell eating", and elimination of metabolic waste products from the cell). These devices operate with an amazingly high efficiency approaching 100%. "Understanding the underlying mechanisms of the operation of naturally occurring molecular motors enables us not only to replicate them but also to design new highly efficient multifunctional artificial devices that could eventually be applied in nanorobotics. For the last several decades, researchers and engineers in various fields have been working together and making some real progress towards the development of controllable nanomachines. The results of their work were recognized as a highly relevant achievement and a significant advance in science and technology, when the 2016 Nobel Prize in Chemistry was awarded 'for the design and synthesis of molecular machines,'" says Prof. Rozenbaum. A Brownian motor operates by switching between at least two discrete states, which is achieved by means of chemical reactions, thermal action, AC signals, or light pulses. In the latter case, the device is referred to as a photomotor. About ten years ago, a model was developed to describe the work of a translational dipole photomotor that operates due to photoexcitation of the molecule (particle) into a state with a dipole moment different from that in the ground state. 
The larger the difference between the total dipole moments of the nanoparticle in the two energy states, the higher the average velocity and efficiency of the motor. The proposed motor is activated by a resonant laser pulse, which excites electrons in the cylinder-shaped semiconductor nanocluster, causing a separation of charges and giving rise to an electrostatic interaction between the particle and the polar substrate. Subjecting the nanocylinder to periodic resonant laser pulses causes its potential energy in the field of the substrate to vary with time, which in turn enables directed motion. Photomotors based on inorganic nanoparticles outperform their organic-molecule-based counterparts in terms of efficiency and average velocity. In a cylinder-shaped semiconductor nanocluster, the value of the dipole moment before irradiation is close to zero, but photoexcitation of an electron from the bulk to the surface gives rise to an enormous dipole moment (approx. 40 D for a cylinder with a height of ca 15 Å). "Owing to the fact that the parameters of the device have been optimized, our proposed model photomotor based on a semiconductor nanocylinder moves at a record speed of 1 mm/s, which is approximately three orders of magnitude faster than similar models based on organic molecules or motor proteins in living organisms," the authors of the study told us.
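The ~40 D figure quoted above is consistent with a simple p = e·d estimate for an electron displaced across part of the nanocylinder. The effective displacement used below is our own illustrative assumption, not a value taken from the paper.

```python
# Order-of-magnitude check of the ~40 D photoinduced dipole moment:
# an electron displaced by roughly half the 15 Å cylinder height.
E_CHARGE = 1.602176634e-19    # elementary charge, C
DEBYE = 3.33564e-30           # C*m per debye

d = 8e-10                     # effective charge separation, m (assumed)
p_debye = E_CHARGE * d / DEBYE
print(f"p = e*d ~ {p_debye:.0f} D")
```

An 8 Å separation gives roughly 38 D, the same order as the quoted value; the exact number depends on the charge distribution before and after excitation.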


News Article | February 21, 2017
Site: www.eurekalert.org

Ionised matter, like plasma, still holds secrets. Physicists working with plasma jets, made of a stream of ionised matter, have just discovered a new phenomenon. Indeed, Eduard Sosnin from the Institute of High Current Electronics, Russian Academy of Sciences in Tomsk, Russia, and colleagues found a new type of discharge phenomenon in an atmospheric pressure plasma. It has been dubbed apokamp - from the Greek words for 'off' and 'bend', because it appears at a perpendicular angle to where plasma jets bend. Their findings have been recently published in EPJ D and are particularly relevant for the development of novel applications in medicine, health care and materials processing because they involve air at normal atmospheric pressure, which would make it cheaper than applications in inert gases or nitrogen. The authors established the conditions for the phenomenon to occur. It takes two electrodes positioned at an angle to each other, together with electric field lines which are curved upward between the two electrodes. It requires not one but both electrodes to have a high voltage to obtain an apokamp plasma jet, which typically develops from the bending point of the discharge channel. The apokamp can vary from a single needle to a 6-7-cm-long conical jet attached to the arc of the plasma current channel. Using high-speed photography data on the apokamp dynamics, the authors elucidated its nature as consisting of ionisation waves - so-called plasma bullets - that move with a velocity of 100-220 km/s. Such plasma bullets have previously been detected in inert gases and in nitrogen in the presence of negatively charged gas. The next step would require gathering sufficient data to build a model that can further explain the apokamp phenomenon. This phenomenon can help explain the blue jet phenomenon identified in 1994 in the upper atmosphere, where strange upwards-facing jets develop from thunderstorm clouds.
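The quoted 100-220 km/s plasma-bullet velocities come from tracking the luminous front across successive high-speed frames. The sketch below illustrates that displacement-over-time estimate with invented numbers; the frame interval and positions are assumptions for demonstration, not the authors' data.

```python
# Estimate a front velocity from high-speed photography: track the front
# position in consecutive frames and divide displacement by frame interval.
positions_cm = [0.0, 1.5, 3.1, 4.6]      # front position per frame (invented)
frame_interval_s = 100e-9                # 100 ns between frames (assumed)

velocities_m_s = [
    (b - a) / 100 / frame_interval_s     # cm -> m, then divide by dt
    for a, b in zip(positions_cm, positions_cm[1:])
]
mean_v_km_s = sum(velocities_m_s) / len(velocities_m_s) / 1e3
print(f"mean front velocity ~ {mean_v_km_s:.0f} km/s")
```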


News Article | November 10, 2016
Site: www.eurekalert.org

Olga Vinogradova, Professor at the Faculty of Physics of Lomonosov Moscow State University and head of a laboratory at the Frumkin Institute of Physical Chemistry and Electrochemistry (Russian Academy of Sciences), together with Salim Maduar, a junior researcher in her group, as part of an international scientific team, has suggested a new method of manipulating microparticles at the solid-liquid interface in water. The research has been published in Scientific Reports, a journal of the Nature Publishing Group. The method is based on adding a photoresponsive surfactant to water. Illuminating this surfactant with laser light produces fast water flows that move particles. This makes it possible to manipulate particles, for instance to clean surfaces of contamination without any risk of damage, or, conversely, to gather or pattern an assembly of micro- and nanoparticles of the desired configuration and size at a solid-liquid interface. The key component of the proposed method is a photosensitive surfactant that changes its conformation under illumination at an appropriate wavelength: in one conformation the molecule is rod-shaped, in the other it is bent like a check mark. If a solution of such a surfactant is illuminated, the molecules inside the light spot change conformation, and the system generates concentration gradients of the two forms. The theoretical physicists from Moscow explained that concentration gradients near a charged solid-liquid interface lead to an unusual phenomenon, a diffusio-osmotic flow, which allows one to manipulate particles at the interface. The authors have shown that by selecting the laser wavelength correctly, one can make particles move in the required direction: either away from the light spot or, in contrast, toward its center.
The scientists succeeded in describing the system theoretically, which in turn allowed them to optimize the conditions for the highest liquid velocity. It was found that the diffusio-osmotic flow is very sensitive to whether the water is salty or pure; in the latter case, the velocity can be several times higher. Although the initial aim of the new method was the gentle cleaning of surfaces, such as semiconductor crystals for microelectronics, the scientists have also found several unusual applications. For instance, by moving a laser spot one can "draw" on the surface, as the laser leaves a visible trace with an enhanced or, conversely, decreased concentration of microparticles. In the original article, the authors also show photos and videos in which the logo of the University of Potsdam, a "happy man", and a heart-shaped pattern, all made of microparticles, were created by laterally repositioning the laser spot across the solid-liquid interface.


Grant
Agency: Cordis | Branch: FP7 | Program: CSA-CA | Phase: NMP-2008-2.6-3 | Award Amount: 684.05K | Year: 2009

Membranes are nano-/micro-porous multifunctional materials whose main property is permselectivity with respect to different ionic and molecular species. This property underlies a great number of applications in the separation of aqueous and gaseous mixtures. The low material and energy consumption and high eco-efficiency of membrane separation give membrane processes a strategic role as technologies bridging industry and the environment. The main objective of the proposal is to take a step towards effective integration of research activities, training, and equipment sharing, and thus answer the need for coordinated membrane science and technology R&D in Europe and Russia, oriented primarily at the development of eco-efficient methods in industry. This objective responds to one of the most important priorities of FP7: the elaboration of concepts aimed at sustainable development and societal innovation. It will be attained by bringing together two membrane networks: the Network of Excellence NanoMemPro in Europe and the Russian Membrane Network, which is currently taking shape. The project foresees the organisation of two meetings of 25 representatives of European institutions belonging to NanoMemPro and an equal number of Russian and NIS scientists. The basis for a European-Russian Membrane Science and Innovation Technology Platform will also be laid. As a result of the project, a well-structured programme of scientific collaboration and diverse actions (in research, training, equipment sharing, personal mobility, and technological innovation) for the next several years will be elaborated. The project should prepare the ground for further, larger actions between the European and Russian membrane networks along the axis of environmental protection, within FP7 and other international and regional programmes.


Scientists come up with light-driven motors to power nanorobots of the future: Researchers from Russia and Ukraine propose a nanosized motor controlled by a laser with potential applications across the natural sciences and medicine Abstract: Scientists from the Moscow Institute of Physics and Technology (MIPT), Semenov Institute of Chemical Physics of the Russian Academy of Sciences (ICP RAS), and Chuiko Institute of Surface Chemistry of the National Academy of Sciences of Ukraine (ISC NASU) have proposed a model nanosized dipole photomotor based on the phenomenon of light-induced charge redistribution. Triggered by a laser pulse, this tiny device is capable of directed motion at a record speed and is powerful enough to carry a certain load. The research findings were published in the Journal of Chemical Physics. "The unprecedented characteristics of dipole photomotors based on semiconductor nanoclusters offer the prospect of more than just addressing a certain scarcity of the translational photomotors family. These devices could actually be applied wherever rapid nanoparticle transport is required. In chemistry and physics, they could help develop new analytical and synthetic instruments, while in biology and medicine they could be used to deliver drugs to diseased tissues, improve gene therapy strategies, and so on," says Prof. Leonid Trakhtenberg of the Department of Molecular and Chemical Physics at MIPT, who is the leader of the research team and the head of the Laboratory of Functional Nanocomposites at ICP RAS. Prof. Trakhtenberg collaborated with Prof. Viktor Rozenbaum, who heads the Department of Theory of Nanostructured Systems at ISC NASU, to develop the theory of photoinduced molecular transport. This theory provides a framework for the design of nanomachines, whose motion can be controlled by a laser.
The scientists have established the relationship between several model parameters (e.g., particle dimensions and photoexcitation conditions) and the key performance characteristic of the device: its average velocity.

Brownian motors

Directed nanomotors have prototypes in nature. Living organisms make use of protein devices driven by external nonequilibrium processes of various kinds, known as Brownian, or molecular, motors. They are capable of converting random Brownian motion into directed translational motion, reciprocation, or rotation. Brownian motors are involved in muscle contraction, cell motility (flagellar motility of bacteria), and the intra- and intercellular transport of organelles and relatively large particles of various substances (e.g., phagocytosis, or "cell eating", and elimination of metabolic waste products from the cell). These devices operate with an amazingly high efficiency approaching 100%. "Understanding the underlying mechanisms of the operation of naturally occurring molecular motors enables us not only to replicate them but also to design new highly efficient multifunctional artificial devices that could eventually be applied in nanorobotics. For the last several decades, researchers and engineers in various fields have been working together and making some real progress towards the development of controllable nanomachines. The results of their work were recognized as a highly relevant achievement and a significant advance in science and technology, when the 2016 Nobel Prize in Chemistry was awarded 'for the design and synthesis of molecular machines,'" says Prof. Rozenbaum. A Brownian motor operates by switching between at least two discrete states, which is achieved by means of chemical reactions, thermal action, AC signals, or light pulses. In the latter case, the device is referred to as a photomotor.
About ten years ago, a model was developed to describe the operation of a translational dipole photomotor driven by photoexcitation of the molecule (particle) into a state with a dipole moment different from that in the ground state. The larger the difference between the total dipole moments of the nanoparticle in the two energy states, the higher the average velocity and efficiency of the motor.

Laser triggering

The proposed motor is activated by a resonant laser pulse, which excites electrons in the cylinder-shaped semiconductor nanocluster, causing a separation of charges and giving rise to an electrostatic interaction between the particle and the polar substrate. Subjecting the nanocylinder to periodic resonant laser pulses causes its potential energy in the field of the substrate to vary with time, which in turn enables directed motion (see diagram). Photomotors based on inorganic nanoparticles outperform their organic-molecule-based counterparts in terms of efficiency and average velocity. In a cylinder-shaped semiconductor nanocluster, the dipole moment before irradiation is close to zero, but photoexcitation of an electron from the bulk to the surface gives rise to an enormous dipole moment (approx. 40 D for a cylinder with a height of ca. 15 Å). "Owing to the fact that the parameters of the device have been optimized, our proposed model photomotor based on a semiconductor nanocylinder moves at a record speed of 1 mm/s, which is approximately three orders of magnitude faster than similar models based on organic molecules or motor proteins in living organisms," the authors of the study told us.
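The two-state operating principle described above can be illustrated with a minimal "flashing ratchet" simulation: an asymmetric sawtooth potential is periodically switched on and off, and unbiased Brownian motion is rectified into net drift. This is a generic sketch of the Brownian-motor mechanism, not the authors' photomotor model; all parameters (potential shape, diffusion coefficient, switching times) are illustrative assumptions.

```python
import numpy as np

# Minimal "flashing ratchet" sketch of a two-state Brownian motor.
# An asymmetric sawtooth potential (period L, peak at x = a, height V0)
# is switched on and off; diffusion during the "off" phase plus the
# asymmetry rectifies Brownian motion into net drift. All parameters
# are illustrative, not taken from the photomotor paper.
L, a, V0 = 1.0, 0.2, 1.0      # spatial period, asymmetry point, barrier height
D, dt = 0.05, 1e-3            # diffusion coefficient (k_B T / gamma), time step
t_on, t_off = 1.0, 0.5        # durations of the two motor states

def force(x):
    """Force -dV/dx of the sawtooth: minima at integers, peak at x = a (mod L)."""
    xm = x % L
    return np.where(xm < a, -V0 / a, V0 / (L - a))

rng = np.random.default_rng(0)
x = np.zeros(500)             # ensemble of particles, all starting at a minimum

for _ in range(30):           # 30 on/off cycles (overdamped Langevin dynamics)
    for _ in range(int(t_on / dt)):   # potential on: particles settle into minima
        x += force(x) * dt + np.sqrt(2 * D * dt) * rng.standard_normal(x.size)
    for _ in range(int(t_off / dt)):  # potential off: free diffusion
        x += np.sqrt(2 * D * dt) * rng.standard_normal(x.size)

print(f"mean displacement after 30 cycles: {x.mean():.2f} periods")
```

Because the barrier peak sits close ahead of each minimum (a = 0.2), forward excursions during the off phase are far more likely to be captured by the next well than backward ones, so the ensemble drifts in the positive direction; mirroring the potential reverses the drift.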


Kurganova I.,University of Gottingen | Lopes de Gerenyu V.,Russian Academy of Sciences | Six J.,ETH Zurich | Kuzyakov Y.,University of Gottingen
Global Change Biology | Year: 2014

The collapse of collective farming in Russia after 1990 and the subsequent economic crisis led to the abandonment of more than 45 million ha of arable lands (23% of the agricultural area). This was the most widespread and abrupt land use change in the 20th century in the northern hemisphere. The withdrawal of land area from cultivation led to several benefits including carbon (C) sequestration. Here, we provide a geographically complete and spatially detailed analysis of C sequestered in these abandoned lands. The average C accumulation rate in the upper 20 cm of mineral soil was 0.96 ± 0.08 Mg C ha-1 yr-1 for the first 20 years after abandonment and 0.19 ± 0.10 Mg C ha-1 yr-1 during the next 30 years of postagrogenic evolution and natural vegetation establishment. The amount of C sequestered over the period 1990-2009 amounts to 42.6 ± 3.8 Tg C per year. This C sequestration rate is equivalent to ca. 10% of the annual C sink in all Russian forests. Furthermore, it compensates for all fire and post-fire CO2 emissions in Russia and covers about 4% of the global CO2 release due to deforestation and other land use changes. Our assessment shows a significant mitigation of increasing atmospheric CO2 by prolonged C accumulation in Russian soils caused by the collapse of collective farming. © 2013 John Wiley & Sons Ltd.
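As a rough consistency check, multiplying the abandoned area by the early-stage accumulation rate reproduces the reported annual sink. This single multiplication ignores the timing of abandonment and the slower late-stage rate, so it is a back-of-envelope sketch, not a reproduction of the paper's spatially detailed analysis.

```python
# Back-of-envelope check of the reported soil carbon sink. Uses only the
# early-stage accumulation rate; a consistency check, not the paper's method.
area_ha = 45e6                  # abandoned arable land, hectares
rate = 0.96                     # Mg C per ha per year (first 20 years)
sink_tg = area_ha * rate / 1e6  # 1 Tg = 1e6 Mg
print(f"estimated sink: {sink_tg:.1f} Tg C per year")  # reported: 42.6 +/- 3.8
```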


Ptuskin V.,Russian Academy of Sciences | Zirakashvili V.,Russian Academy of Sciences | Seo E.-S.,University of Maryland University College
Astrophysical Journal | Year: 2010

The spectra of high-energy protons and nuclei accelerated by supernova remnant (SNR) shocks are calculated, taking into account magnetic field amplification and Alfvénic drift both upstream and downstream of the shock for different types of SNRs during their evolution. The maximum energy of accelerated particles may reach 5×10¹⁸ eV for Fe ions in Type IIb SNRs. The calculated energy spectrum of cosmic rays after propagation through the Galaxy is in good agreement with the spectrum measured at the Earth. © 2010. The American Astronomical Society. All rights reserved.


Grant
Agency: Cordis | Branch: FP7 | Program: CSA-SA | Phase: ENV.2008.5.1.0.2. | Award Amount: 737.77K | Year: 2009

The overall strategic objective of the E-URAL project is to improve, in both quantity and quality, the participation of Russian researchers and SMEs in the Environment (including climate change) theme of the Seventh Framework Programme (FP7). This objective addresses specific gaps that hamper the participation of Russia, such as limited knowledge of FP7 and the Environment theme among researchers and multipliers in Russia, a lack of know-how on submission procedures under the EU Framework Programmes, and limited awareness in Europe of skilled potential partners from Russia to involve in EU-FP projects. The E-URAL project, through its multi-skilled consortium of leading, long-experienced institutions from the EU and Russia, addresses these major obstacles in 4 main lines of activity: mapping of research expertise and needs in Russia in the field of environment; awareness raising and capacity building for multipliers and researchers in Russia through training activities and staff exchanges; promotion of cooperation opportunities between the EU and Russia; and liaison with Russian and European stakeholders in the field. The achievement of these strategic and ambitious objectives is ensured by the effective structure of the work plan and a partnership of Russian and European partners with long experience in environmental research and training activities. As a result, the project will create new research consortia, raise the number of Russian participants in the Framework Programme, and identify key RTD topics for future work programmes in the FP7 Environment theme.


Grant
Agency: Cordis | Branch: FP7 | Program: CP-FP-SICA | Phase: SPA.2010.3.2-01 | Award Amount: 728.38K | Year: 2010

The project is intended to investigate the short-term transient processes in the global lithosphere-atmosphere-ionosphere coupled system using synchronous satellite and ground-based observations. The research will be based on analysis of the data obtained from several micro-satellites, including the French spacecraft DEMETER. The main scientific objectives of DEMETER are directly related to the study of ionospheric perturbations due to seismic and man-made activities. Satellite data will be analysed together with those collected from the specialized network of identical VLF receivers distributed across Russia, Europe and Japan. The network operates in conjunction with powerful transmitters deployed all over the world and enables the search for electromagnetic earthquake precursors in highly seismo-active regions such as the Far East and Southern Europe. A unique experimental database of electromagnetic and seismic observations from both satellite and ground-based sources will be created using data covering the Far East and Southern European regions. The combined use of satellite and ground-based datasets enables the separation of spatial and temporal variations, a problem that is insoluble using single-satellite measurements. The work plan of the project foresees combined data processing and extensive theoretical modelling of the interaction processes and the signal transfer mechanisms in the lithosphere-atmosphere-ionosphere system. The method of global diagnostics of seismicity using both DEMETER and ground VLF signal observations has already been tested over Japan. The participants of the project have extensive experience in investigations of seismicity and related geophysical phenomena, as well as in the processing and analysis of satellite and ground-based data. Overall, the project will fill several significant gaps in the understanding of the physics of the formation of ionospheric disturbances caused by seismic activity.


Grant
Agency: Cordis | Branch: FP7 | Program: CP-FP-SICA | Phase: SPA.2010.3.2-04 | Award Amount: 742.34K | Year: 2011

BIOSMHARS is the first phase of a Europe-Russia research effort aiming to develop the scientific and technical tools for a comprehensive approach to biocontamination inside manned spacecraft, and to develop appropriate countermeasures. BIOSMHARS will develop and calibrate a mathematical model predicting the transport of bioaerosols in a closed environment and the resulting biocontamination. The model will be calibrated in the BIOS-3 facility in Krasnoyarsk as an analogue of the International Space Station, and will integrate parameters enabling it to be adapted to a real spacecraft. A first experiment will calibrate the physical model and particle transport. A second experiment will correlate the actual biocontamination induced by injecting calibrated bioaerosols of bacteria and fungi with the calculations of the model. BIOSMHARS will also develop a scientific and technical roadmap to facilitate links with other relevant EU initiatives and to plan further phases. The project is proposed by a consortium of leading European and Russian research institutions in space, modelling, and microbiology. The consortium is coordinated by MEDES, the Institute for Space Physiology and Medicine in France. It also includes VTT, the Technical Research Centre of Finland; the University of Eastern Finland; and the Belgian Nuclear Research Centre as European partners. For Russia, it includes IBMP, the Institute of Biomedical Problems in Moscow, and IBP, the Institute of Biophysics in Krasnoyarsk. The project duration is two years.


Flash Physics is our daily pick of the latest need-to-know developments from the global physics community, selected by Physics World's team of editors and reporters. Calcium-iron arsenide, which is usually not a superconductor, has been made to superconduct by Paul Chu and colleagues at the University of Houston in the US. This was done using an idea first proposed in the 1970s: that superconductivity can be enhanced or even created at the interface between two materials. Chu and colleagues heated calcium-iron arsenide so that it coexists in two different structural phases, neither of which is superconducting. The sample is then cooled carefully to preserve the two phases. When cooled to below 25 K, the material is a superconductor at the interface between the phases. While this superconducting temperature is too low to be of practical use, Chu believes that the work offers a new direction in the search for more efficient, less expensive superconducting materials. The research is described in Proceedings of the National Academy of Sciences. A bibliometric study by researchers at the National Research University Higher School of Economics (HSE) in Russia has measured the scientific impact of 39 physics institutions belonging to the Russian Academy of Sciences (RAS). The study, carried out by HSE sociologists Yuriy Kachanov and Natalia Shmatko together with Yulia Markova from the American Association for the Advancement of Science, found that the Joint Institute for Nuclear Research, the Alikhanov Institute for Theoretical and Experimental Physics, the Lebedev Physical Institute – all based in Moscow – and the Ioffe Institute in St Petersburg are the top physics research institutions in the country. The study looked at the number of researchers based at each institution, together with publication statistics.
"We were able to prove that big institutions held authority on the global science scene and produced more scientific data, which was highly received by the physics community," says Shmatko. A new initiative aimed at strengthening ties between tech firms and the UK's National Physical Laboratory (NPL) was officially launched last night at the Institution of Engineering and Technology in London. The project, known as NPL Instruments, will see experts at the Teddington-based national measurement institute work closely with companies to develop bespoke instruments, products and related services. At the event, NPL chief executive Peter Thompson told Physics World that the new business unit would focus on products at a moderate stage of development (equivalent to Technology Readiness Levels 4 and 5) in the areas of advanced manufacturing, environment, health and life sciences, and the digital sector. NPL's work on instruments tends to be "hidden in plain view", Thompson told an audience of around 100 lab personnel, industry scientists, engineers and academics at the event, adding that the new business unit is intended to help publicize and expand the lab's role as an "instrument development partner". Paul Shore, who leads both the new unit and NPL's engineering measurement division, gave indoor GPS technologies and "smaller, faster, cheaper" atomic clocks as examples of products where the lab's existing strengths in measurement and sensing could help to catalyse technical advances. The initiative comes on the heels of a transition period for NPL, which announced in August that it would make up to 50 staff members redundant as part of what Thompson called a "rebalancing" of the 116-year-old lab.


News Article | January 18, 2016
Site: www.materialstoday.com

A two-dimensional metamaterial composed of silver elements will have a negative refractive index for light in the most energetic region of the visible spectrum, 400 to 500 nanometers (violet, blue, and light blue), according to computer simulations by the team of Sergey Belan, Vladimir Parfenyev, and Sergey Vergeles from the Landau Institute for Theoretical Physics of the Russian Academy of Sciences, Moscow; Belan is also affiliated with the Moscow Institute of Physics and Technology in Dolgoprudny. Writing in the journal Optical Materials Express, the researchers hint at its potential for compact optical devices. [DOI: 10.1364/OME.5.002843] The computations, performed with the COMSOL Multiphysics package on a two-dimensional metalattice composed of pairs of closely spaced silver cylinders with radii of the order of 100 nanometers, suggest a refractive efficiency of 70 percent and a reflective efficiency of 80 percent for violet to blue light. Earlier metamaterials are usually three-dimensional and have much more complex geometries. Moreover, they have mostly been demonstrated to operate at microwave wavelengths, which limits their potential applications. The silver-lattice metamaterial works through diffraction, splitting incident light into rays depending on the angle of incidence, the wavelength, and the period of the lattice. The structure of the unit cell determines how the energy of the incident light is distributed between the rays. In a material with a negative refractive index, all but one of the diffracted rays are suppressed, and the remainder emerges in the desired direction. That direction is counterintuitive given our experience with the appearance of a stick dipped into a swimming pool, where water has a natural, positive refractive index. At the mechanistic level, light's interaction with the pairs of metal cylinders is due to a plasmon resonance effect.
The researchers have demonstrated that the extraordinary optical response of the proposed material arises from the excitation of plasmonic modes in the gaps between cylinders, Belan told Materials Today. Tweaking the lattice allows a negative refractive index to be manifest over a wide range of angles of incidence. This suggests potential applications in controlling signals in ultra-compact devices for optical telecommunications and future computing. The next step will be to manufacture, with very high tolerance, the requisite smooth metal cylinders with their less than 10 nanometer separation for laboratory testing of the metamaterial's true potential. "We are now working on optimization of the proposed design in order to improve efficiency," Belan told us. "Specifically, we are testing in simulations the metal rods with a non-circular cross-section. In addition, we are looking for collaboration with experimental groups capable of realizing negative refraction in the laboratory." David Bradley blogs at Sciencebase Science Blog and tweets @sciencebase; he is the author of the bestselling science book "Deceived Wisdom".
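The diffraction picture described above can be sketched with the ordinary grating equation: for a suitable ratio of lattice period to wavelength, the m = -1 order emerges on the same side of the normal as the incident beam, which is what "negative refraction" means for such a lattice. The period and wavelength below are assumed values for illustration; they are not taken from the paper.

```python
import math

# Grating-equation sketch of negative refraction by a periodic lattice:
# the m = -1 diffracted order can emerge on the same side of the normal
# as the incident beam. Period d and wavelength are assumptions.
d = 500e-9                      # lattice period (assumed)
lam = 450e-9                    # blue light, inside the 400-500 nm band
theta_i = math.radians(30)      # angle of incidence

s = math.sin(theta_i) - lam / d          # grating equation for order m = -1
theta_m = math.degrees(math.asin(s))
print(f"m = -1 order emerges at {theta_m:.1f} degrees")
```

A negative output angle means the diffracted ray leaves on the same side of the normal as the incident ray, the counterintuitive direction the article describes.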


COLORADO SPRINGS, Colo., Feb. 16, 2017 /PRNewswire/ -- The American Academy of Forensic Sciences (AAFS) presented Robert E. Gaensslen, PhD, with the prestigious Gradwohl Medallion as a Gradwohl Laureate on Feb. 15 during the Academy's 69th Annual Scientific Meeting in New Orleans,...


News Article | January 16, 2016
Site: news.yahoo.com

The injuries on this mammoth carcass, excavated here by Sergey Gorbunov, suggest that ancient people butchered it about 45,000 years ago. The slashed and punctured bones of a woolly mammoth suggest that humans lived in the far northern reaches of Siberia earlier than scientists had previously thought, a new study finds. Before the surprising discovery, researchers thought that humans lived in the freezing Siberian Arctic no earlier than about 30,000 to 35,000 years ago. Now, the newly studied mammoth carcass suggests that people lived in the area, where they butchered the likes of this giant animal, about 45,000 years ago. "We now have an enormous extension of the space that was inhabited at 45,000 years ago," said Vladimir Pitulko, a senior research scientist at the Russian Academy of Sciences and co-lead researcher on the study. [Image Gallery: Stunning Mammoth Unearthed] Paleolithic human remains are rarely found in the Eurasian Arctic. But all expectations were overturned in 2012, when a team found the carcass of an "exceptionally complete" woolly mammoth on the eastern shore of Yenisei Bay, located in the central Siberian Arctic, the researchers wrote in the study. The extreme cold preserved some of the male mammoth's soft tissue, including the remains of its fat hump and its penis, they said. However, injuries found on the mammoth's bones — including its ribs, left shoulder bone, right tusk and cheekbone — suggest that it had a violent end. Some of the bones have dents and punctures, possibly from thrusting spears, the researchers said. "[These injuries] are clearly related to the death of the animal, which was killed and then partly butchered," Pitulko said in a statement he emailed to journalists. The ancient hunters likely removed the mammoth's tongue and some of its internal organs, but it's unclear why they didn't take more of the beast. "Maybe some obstacle appeared and prevented them from returning — who knows?" Pitulko told Live Science.
Using radiocarbon dating, the researchers dated the mammoth's tibia (shinbone) and surrounding materials to about 45,000 years ago. Radiocarbon dating measures the amount of carbon-14 (a carbon isotope, or variant with a different number of neutrons in its nucleus) left in a once-living organism, and can be used reliably to date material to about 50,000 years ago, although some techniques allow researchers to date older organic objects. The researchers also found a Pleistocene wolf humerus (arm bone) that had been injured by a "sharp implement with a conical tip," Pitulko said in the statement. The bone, also discovered in Arctic Siberia, dates to about 47,000 years ago, they found. The wolf bone was uncovered near the bones of ancient bison, reindeer and rhinoceros, all of which have evidence of human modification. This finding suggests that ancient humans hunted and ate a variety of mammals, not just mammoths, Pitulko said. [In Images: Ancient Beasts of the Arctic] The hunters who butchered the mammoth and wolf were far from the Bering Land Bridge, which lay exposed at that time. However, perhaps their advanced hunting knowledge helped them survive in the Arctic. It may have also helped those who crossed the land bridge survive the journey, Pitulko said. The new study is "splendidly done," said Ross MacPhee, a curator of mammalogy at the American Museum of Natural History in New York City, who was not involved in the new research. If the mammoth had sustained just one wound, it would have been harder to say it was caused by humans, MacPhee said. For instance, the damaged tusk could be the result of daily mammoth living, he said. "[But] there's not just the one wound; there's lots of them, and they're quite convincing," MacPhee said. He added that the mammoth finding is "another nail in the coffin that people exclusively caused the extinctions of these megabeasts." 
If people had been hunting mammoths as early as 45,000 years ago, they would have needed to overhunt them quickly to wipe them out; otherwise, the giants would likely have had enough individuals to continue breeding, MacPhee said. The study was published online yesterday (Jan. 14) in the journal Science. Copyright 2016 LiveScience, a Purch company. All rights reserved.
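The radiocarbon ages quoted in the article sit near the practical limit of the method, which a one-line decay calculation makes clear (taking the standard carbon-14 half-life of 5,730 years):

```python
# Fraction of the original carbon-14 remaining after t years,
# using the standard half-life of 5730 years. At ~45,000 years less
# than half a percent survives, which is why ~50,000 years is quoted
# as the practical limit of radiocarbon dating.
HALF_LIFE = 5730.0  # years

def c14_fraction(t_years):
    return 2.0 ** (-t_years / HALF_LIFE)

for t in (30_000, 45_000, 50_000):
    print(f"{t:>6} yr: {100 * c14_fraction(t):.2f}% of C-14 remains")
```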


News Article | November 13, 2015
Site: phys.org

Sulfur derivatives are by far the richest fossil source of functional molecules available in nature. Indeed, a diversity of sulfur species is present as contaminants in crude oil. Unfortunately, there are still no efficient technological tools to separate sulfur compounds from crude oil and utilize them in materials production, and the petroleum industry wastes billions of tons of valuable compounds annually. It is well known that humans are very sensitive to thiols. Small molecular thiols have an extremely unpleasant smell, which even at trace-level concentrations (1 to 5 parts per billion) can easily be detected by the human nose. A unique palladium catalyst was developed in the laboratory of Prof. Ananikov at the Zelinsky Institute of Organic Chemistry, Russian Academy of Sciences. A Pd complex with an NHC ligand effected the chemical transformation of thiols into vinyl monomers, a useful component for a new generation of polymeric materials. Even the challenging thiols EtSH and PrSH were involved in the reaction with excellent outcomes. The transformation was performed using an atom-economic approach, which ensures high yield and complete selectivity: a pure product can be obtained immediately after completion of the reaction and isolation of the catalyst. Mechanistic studies have revealed the nuclearity of the transition metal complexes (Figure 2) in the catalytic cycle: the monometallic Pd complex mediated a fast reaction, whereas the bimetallic Pd complex reacted much more slowly. The mechanistic findings are connected to the catalyst evolution problem and to the role of nucleation into nanoparticles revealed by this group earlier. Upon addition to alkynes, thiols were efficiently converted to vinyl thioethers: stable monomers that are easy to handle and do not have an unpleasant odour. Hence the logical solution to many chemical dilemmas: the right catalyst may turn even unpleasant chemicals into valuable products.
The article "Pd-NHC Catalytic System for the Efficient Atom-Economic Synthesis of Vinyl Sulfides from Tertiary, Secondary, or Primary Thiols" by Evgeniya Degtyareva, Julia Burykina, Artem Fakhrutdinov, Evgeniy Gordeev, Victor Khrustalev, and Valentine Ananikov was published in ACS Catalysis, a journal of the American Chemical Society. The group's earlier mechanistic work is described in: Alexey S. Kashin et al., "Catalytic C–C and C–Heteroatom Bond Formation Reactions: In Situ Generated or Preformed Catalysts? Complicated Mechanistic Picture Behind Well-Known Experimental Procedures," The Journal of Organic Chemistry (2013). DOI: 10.1021/jo402038p


Two electrons go on a quantum walk and end up in a qudit: Russian scientists find a way to reliably connect quantum elements Abstract: Scientists from the Institute of Physics and Technology of the Russian Academy of Sciences and MIPT have let two electrons loose in a system of quantum dots to create a quantum computer memory cell of a higher dimension than a qubit (a quantum bit). In their study published in Scientific Reports, the researchers demonstrate for the first time how quantum walks of several electrons can help to implement quantum computation. "By studying the system with two electrons, we solved the problems faced in the general case of two identical interacting particles. This paves the way toward compact high-level quantum structures," comments Leonid Fedichkin, Expert at the Russian Academy of Sciences, Vice-Director for Science at NIX (a Russian computer company), and Associate Professor at MIPT's Department of Theoretical Physics. In a matter of hours, a quantum computer would be able to hack through the most popular cryptosystem used even in your web browser. As far as more benevolent applications are concerned, a quantum computer would be capable of molecular modeling that takes into account all interactions between the particles involved. This in turn would enable the development of highly efficient solar cells and new drugs. To have practical applications, a quantum computer needs to incorporate hundreds or even thousands of qubits. And that is where it gets tricky. As it turns out, the unstable nature of the connection between qubits remains the major obstacle preventing us from using quantum walks of particles for quantum computation. Unlike their classical analogs, quantum structures are extremely sensitive to external noise. To prevent a system of several qubits from losing the information stored in it, liquid nitrogen (or helium) needs to be used for cooling.
Plenty of schemes have been proposed for the experimental realization of a separate qubit. In an earlier study, a research team led by Prof. Fedichkin demonstrated that a qubit could be physically implemented as a particle "taking a quantum walk" between two extremely small semiconductors known as quantum dots, which are connected by a "quantum tunnel." From the perspective of an electron, the quantum dots represent potential wells. Thus, the position of the electron can be used to encode the two basis states of the qubit, |0⟩ and |1⟩, depending on whether the particle is in one well or the other. Rather than sit in one of the two wells, the electron is smeared out between the two different states, taking up a definite position only when its coordinates are measured. In other words, it is in a superposition of two states. If an entangled state is created between several qubits, their individual states can no longer be described separately from one another, and any valid description must refer to the state of the whole system. This means that a system of three qubits has a total of 8 basis states and is in a superposition of them: A|000⟩+B|001⟩+C|010⟩+D|100⟩+E|011⟩+F|101⟩+G|110⟩+H|111⟩. By influencing the system, one inevitably affects all of the 8 coefficients, whereas influencing a system of regular bits only affects their individual states. By implication, n bits can store n variables, while n qubits can store 2ⁿ variables. Qudits offer an even greater advantage, since n four-level qudits (aka ququarts) can encode 4ⁿ, or 2ⁿ×2ⁿ, variables. To put this into perspective, 10 ququarts store approximately 100,000 times more information than 10 bits. With greater values of n, the zeros in this number start to pile up very quickly. In this study, Alexey Melnikov and Leonid Fedichkin obtain a system of two qudits implemented as two entangled electrons quantum-walking around the so-called cycle graph.
To make one, the scientists had to "connect the dots" forming a circle (once again, these are quantum dots, and they are connected by the effect called quantum tunneling). The entanglement of the two electrons is caused by the mutual electrostatic repulsion experienced by like charges. It is possible to create a system of even more qudits in the same volume of semiconductor material. To do this, it is necessary to connect quantum dots in a pattern of winding paths and have more wandering electrons. The quantum walks approach to quantum computation is convenient because it is based on a natural process. Nevertheless, the presence of two identical electrons in the same structure was a source of additional difficulties that had remained unsolved. The phenomenon of particle entanglement plays a pivotal role in quantum information processing. However, in experiments with identical particles, it is necessary to distinguish so-called false entanglement, which can arise between electrons that are not interacting, from genuine entanglement. To do this, the scientists performed mathematical calculations for both cases, viz., with and without entanglement. They observed the changing distribution of probabilities for the cases with 6, 8, 10, and 12 dots, i.e., for a system of two qudits with three, four, five, and six levels each. The scientists demonstrated that their proposed system is characterized by a relatively high degree of stability. It has been a long time since people first set their hearts on building a universal quantum computer, but so far we have been unable to connect a sufficient number of qubits. The work of the Russian researchers brings us one step closer to a future where quantum computations are commonplace. And although there are algorithms that quantum computers could never accelerate, others would still benefit enormously from devices able to exploit the potential of large numbers of qubits (or qudits). 
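A single-particle continuous-time quantum walk on a cycle of quantum dots, the building block underlying the two-electron system described above, can be sketched as follows (a toy model in arbitrary units; the paper itself treats two interacting electrons, which this one-particle sketch does not capture):

```python
import numpy as np
from scipy.linalg import expm

n_dots = 8                      # cycle graph with 8 vertices (quantum dots)
J = 1.0                         # tunneling amplitude between neighbouring dots

# Adjacency (hopping) matrix of the cycle graph: each dot couples to its
# two neighbours, with periodic boundary conditions closing the circle.
A = np.zeros((n_dots, n_dots))
for i in range(n_dots):
    A[i, (i + 1) % n_dots] = A[(i + 1) % n_dots, i] = J

psi0 = np.zeros(n_dots, dtype=complex)
psi0[0] = 1.0                   # electron starts localized in dot 0

t = 2.0                         # evolution time (hbar = 1)
U = expm(-1j * A * t)           # unitary evolution generated by the hopping matrix
psi = U @ psi0
prob = np.abs(psi) ** 2         # position distribution after the walk

print(np.round(prob, 3))
```

The distribution stays normalized and is symmetric about the starting dot, reflecting the symmetry of the cycle graph.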
These alone would be enough to save us a couple of thousand years. Issuers of news releases, not 7th Wave, Inc. or Nanotechnology Now, are solely responsible for the accuracy of the content.


News Article | November 13, 2015
Site: www.nanotech-now.com

Abstract: An NHC catalytic system developed in the Ananikov laboratory targets an alternative technology for the chemical utilization of organic sulfur species from crude oil (DOI: 10.1021/acscatal.5b01815). Mercaptans, or thiols, are a special class of organic compounds that contain the sulfur functional group RSH. Various sulfur compounds are in high demand for the formation of new materials in photonics, optics, the pharmaceutical industry, organic chemistry, and nanotechnology. Sulfur derivatives are, by far, the richest fossil source of functional molecules available in nature. Indeed, a diversity of sulfur species is present as contaminants in crude oil. Unfortunately, there are still no efficient technological tools to separate sulfur compounds from crude oil and utilize them in materials production. The petroleum industry wastes billions of tonnes of valuable compounds, which are annually destroyed to elemental sulfur. It is a well-known fact that humans are very sensitive to thiols. Small molecular thiols have an extremely unpleasant smell, which even at trace-level concentrations (1-5 parts per billion) can be easily detected by the human nose. A unique palladium catalyst was developed in the laboratory of Prof. Ananikov at the Zelinsky Institute of Organic Chemistry, Russian Academy of Sciences. A Pd complex with an NHC ligand furnished chemical transformations of thiols into vinyl monomers, a useful component of a new generation of polymeric materials. Even the challenging EtSH and PrSH thiols were involved in the reaction and produced an excellent outcome. The chemical transformation was performed using an atom-economic approach, which assures high yield and complete selectivity. This means that a pure product can be obtained just after completion of the reaction and isolation of the catalyst. Mechanistic studies have revealed the key role of the nuclearity of transition metal complexes (Figure 2) in the catalytic cycle.
The monometallic Pd complex mediated a fast reaction, whereas the bimetallic Pd complex reacted much more slowly. The mechanistic findings are connected to the catalyst evolution problem and to the role of nucleation into nanoparticles revealed by this group earlier (doi: 10.1021/jo402038p). Upon addition to alkynes, thiols were efficiently converted to vinyl thioethers - stable monomers that are easy to handle and do not have an unpleasant odour. Here comes the logical solution to many chemical dilemmas: the right catalyst may turn even unpleasant chemicals into valuable and friendly products. The article «Pd-NHC Catalytic System for the Efficient Atom-Economic Synthesis of Vinyl Sulfides from Tertiary, Secondary, or Primary Thiols» by Evgeniya Degtyareva, Julia Burykina, Artem Fakhrutdinov, Evgeniy Gordeev, Victor Khrustalev, and Valentine Ananikov was published in ACS Catalysis, a journal of the American Chemical Society. Contacts: Anna Mikhailova, Press Officer, +79105403666, Ananikov Laboratory. Prof. Valentine P. Ananikov, Laboratory Head, Zelinsky Institute of Organic Chemistry, Elected Member of the Russian Academy of Sciences, Professor of Moscow State University. Zelinsky Institute of Organic Chemistry, Russian Academy of Sciences, Leninsky Prospekt 47, Moscow, 119991, Russia.


Kaganer V.M.,Paul Drude Institute for Solid State Electronics | Sabelfeld K.K.,Russian Academy of Sciences
Acta Crystallographica Section A: Foundations of Crystallography | Year: 2010

X-ray diffraction peak profiles are calculated by the Monte Carlo method for arbitrarily correlated dislocations without making any approximations or simplifications. The arrangement of dislocations in pairs with opposite Burgers vectors provides screening of the long-range strains. Moreover, any screening can be modeled by an appropriate distribution of the dislocation pairs. An analytical description of the peak profiles is compared with the Monte Carlo results. Symmetric peaks due to screw dislocations and asymmetric peaks due to edge dislocations are simulated and analyzed. © 2010 International Union of Crystallography. Printed in Singapore - all rights reserved.
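The core idea, Monte Carlo averaging of the diffraction correlation function over random configurations of dislocation pairs with opposite Burgers vectors, can be illustrated with a deliberately simplified 1D toy model (our own illustration with an arbitrary smoothed-step displacement field and arbitrary parameter values, not the authors' full elastic calculation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D parameters (all values arbitrary, for illustration only).
L = 200.0          # crystal length
n_pairs = 20       # dislocation pairs per configuration
sep = 5.0          # separation within a +/- pair (provides screening)
b = 1.0            # Burgers vector magnitude
d = 2.0            # core smoothing width
Q = 2 * np.pi      # scattering vector component along the displacement

x = np.linspace(-L / 2, L / 2, 512)

def displacement(x, centers, signs):
    """Toy displacement field: a smoothed step of height +/- b per defect."""
    u = np.zeros_like(x)
    for c, s in zip(centers, signs):
        u += s * (b / np.pi) * np.arctan((x - c) / d)
    return u

# Monte Carlo average of the pair correlation function
# G(x) = < exp(i Q [u(x) - u(0)]) > over random pair configurations.
n_config = 200
G = np.zeros(len(x), dtype=complex)
for _ in range(n_config):
    plus = rng.uniform(-L / 2, L / 2, n_pairs)
    centers = np.concatenate([plus, plus + sep])        # paired dipoles
    signs = np.concatenate([np.ones(n_pairs), -np.ones(n_pairs)])
    u = displacement(x, centers, signs)
    u0 = displacement(np.array([0.0]), centers, signs)[0]
    G += np.exp(1j * Q * (u - u0))
G /= n_config

# The peak profile is (up to normalization) the Fourier transform of G(x).
I = np.abs(np.fft.fftshift(np.fft.fft(G)))
```

Because each configuration contributes a unit-modulus phase factor, |G(x)| never exceeds 1, and the averaged profile sharpens as screening (smaller `sep`) localizes the strain fields.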


Large R.R.,University of Tasmania | Bull S.W.,University of Tasmania | Maslennikov V.V.,Russian Academy of Sciences
Economic Geology | Year: 2011

This paper presents evidence and arguments that carbonaceous sedimentary rocks were a source for Au and As in sediment-hosted orogenic and Carlin-type gold deposits and develops a corresponding genetic model. In this two-stage basin-scale model, gold and arsenic are introduced early into black shale and turbidite basins during sedimentation and diagenesis (stage 1) and concentrated to ore grades by later hydrothermal, structural, or magmatic processes (stage 2). In reduced continental margin basin settings, organic matter, sedimented under anoxic to euxinic conditions, immobilizes and concentrates gold, arsenic, and a range of trace elements (particularly V, Ni, Se, Ag, Zn, Mo, Cu, U) present in marine bottom waters, into fine-grained black mudstone and siltstone of slope and basin facies. During early diagenesis, gold and certain other trace elements (Ni, Se, Te, Ag, Mo, Cu, ±PGE) are preferentially partitioned into arsenian pyrite that grows in the muds. These processes produce regionally extensive black shale and turbidite sequences enriched in syngenetic gold and arsenic, commonly from 5 to 100 ppb Au and 10 to 200 ppm As. Rare organic- and sulfide-rich metalliferous black shales may contain up to 1 to 2 ppm Au and over 1,000 ppm As, present as refractory gold in arsenian pyrite and nanoparticles of free gold. During late diagenesis and early metamorphism (stage 2) the diagenetic arsenian pyrite is recrystallized to form coarser grained pyrite generations, and the organic matter is cooked to bitumen. Under higher grade metamorphism (lower greenschist facies and above) arsenian pyrite in carbonaceous shales is converted to pyrrhotite. These processes release gold, arsenic, sulfur and other elements (Sb, Te, Cu, Zn, Mo, Bi, Tl, and Pb) from the source rocks to become concentrated by hydrothermal processes, locally to produce gold ores, in structural sites such as fold hinge zones, shear or breccia zones within or above the black shale sequence. 
LA-ICP-MS analyses of diagenetic pyrite in carbonaceous sediments, both associated and not associated with gold deposits, suggest that invisible gold contents of greater than 250 ppb in diagenetic pyrite are indicative of carbonaceous shale source rocks with the potential to produce economic gold deposits. Application of this sedimentary source-rock model enables a systematic exploration approach for sediment-hosted gold deposits, based on the distribution, composition, and structure of carbonaceous shale sequences and their contained diagenetic pyrite. © 2011 Society of Economic Geologists, Inc.
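The 250 ppb prospectivity criterion quoted above lends itself to a trivial screening sketch (sample names and values are invented for illustration):

```python
# Screening sketch based on the threshold quoted in the abstract:
# invisible Au above 250 ppb in diagenetic pyrite flags a carbonaceous
# shale sequence as a prospective source rock.
AU_THRESHOLD_PPB = 250.0

def is_prospective(au_ppb):
    """True if the invisible-gold content of diagenetic pyrite (ppb)
    exceeds the 250 ppb prospectivity threshold."""
    return au_ppb > AU_THRESHOLD_PPB

# Hypothetical LA-ICP-MS results for three shale sequences.
samples = {"shale_A": 310.0, "shale_B": 45.0, "shale_C": 980.0}
prospective = [name for name, au in samples.items() if is_prospective(au)]
print(prospective)  # shale_A and shale_C exceed the threshold
```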


Olovnikov I.,California Institute of Technology | Olovnikov I.,Russian Academy of Sciences | Aravin A.A.,California Institute of Technology | Fejes Toth K.,California Institute of Technology
Current Opinion in Genetics and Development | Year: 2012

Eukaryotes use several classes of small RNA molecules to guide diverse protein machineries to target messenger RNA. The role of small RNA in post-transcriptional regulation of mRNA stability and translation is now well established. Small RNAs can also guide sequence-specific modification of chromatin structure and thus contribute to establishment and maintenance of distinct chromatin domains. In this review we summarize the model for the inter-dependent interaction between small RNA and chromatin that has emerged from studies on fission yeast and plants. We focus on recent results that link a distinct class of small RNAs, the piRNAs, to chromatin regulation in animals. © 2012 Elsevier Ltd.


Kotov O.V.,Russian Academy of Sciences | Kol'chenko M.A.,Samsung | Lozovik Y.E.,Moscow Institute of Physics and Technology
Optics Express | Year: 2013

The behavior of the TE and TM electromagnetic waves in graphene at the interface between two semi-infinite dielectric media is studied. A dramatic influence on TE wave propagation is predicted even for very small changes in the optical contrast between the two dielectric media. Frequencies of the TE waves are found to lie only in the window determined by the contrast. We consider this effect in connection with the design of a graphene-based optical gas sensor. Near the frequency where the imaginary part of the conductivity of graphene becomes zero, ultrahigh refractive-index sensitivity and a very low detection limit are revealed. The proposed graphene-based optical gas sensor outperforms modern volume refractive-index sensors by several orders of magnitude. © 2013 Optical Society of America.
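The frequency where the imaginary part of the graphene conductivity changes sign can be located from the standard zero-temperature Kubo expression in the collisionless limit (a textbook result, not taken from the article):

```python
import numpy as np
from scipy.optimize import brentq

# Im(sigma) for graphene at T = 0, collisionless Kubo (RPA) limit.
# In units of sigma0 = e^2 / (4 hbar), with Omega = hbar*omega / E_F:
#   Im(sigma)/sigma0 = (1/pi) * (4/Omega - ln|(Omega + 2)/(Omega - 2)|)
def im_sigma(Omega):
    return (4.0 / Omega - np.log(abs((Omega + 2) / (Omega - 2)))) / np.pi

# Im(sigma) changes sign between Omega = 1 and the interband threshold
# at Omega = 2; TE surface waves can exist where Im(sigma) < 0.
root = brentq(im_sigma, 1.0, 1.9)
print(round(root, 3))  # ~1.667
```

The sign change just below the interband absorption threshold (at roughly hbar*omega = 1.667 E_F) is what confines TE waves to a narrow frequency window, consistent with the sensing mechanism described in the abstract.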


Kaganer V.M.,Paul Drude Institute for Solid State Electronics | Belov A.Yu.,Russian Academy of Sciences
Physical Review B - Condensed Matter and Materials Physics | Year: 2012

Lattice distortions in a cylindrical nanowire with nonuniform intrinsic strains arbitrarily varying along its axis are analyzed, with a special emphasis on heterostructures. We find that, as a result of the elastic relaxation on the side surface, the lateral mismatch does not change the average longitudinal lattice period of the whole cylinder but affects the average period of the heterostructure. As a consequence, the positions of the x-ray diffraction peaks due to a periodic axial heterostructure depend on the ratio of the total height of the heterostructure to the cylinder diameter. The peaks attain the positions of the planar heterostructure peaks only when the diameter becomes orders of magnitude larger than the heterostructure height. The typical nanowire heterostructure parameters correspond to the opposite limit of full lateral relaxation. © 2012 American Physical Society.


Grant
Agency: Cordis | Branch: FP7 | Program: CP-FP | Phase: NMP.2012.1.3-2 | Award Amount: 1.28M | Year: 2013

The central goal of our proposal is to develop physically justified models and computational tools to quantitatively describe and understand the molecular mechanisms of nanoparticle-cell membrane interactions, which we consider to be a crucial point in any predictive model of nanoparticle toxicity. We consider mechanisms of nanoparticle protein corona formation, the protective function of the membrane, nanoparticle uptake into the cell, and the effect of nanoparticles on the cell membrane. We plan to develop a consistent multiscale simulation scheme starting from nanoparticle-biomolecule interaction at the atomistic scale using molecular dynamics simulation, and then systematically constructing coarse-grained mesoscale models for simulating the structure and dynamics of the cell membrane perturbed by nanoparticles at the physiologically relevant time and length scales. We will develop and test a universal method for evaluating the rates of nanoparticle translocation through membranes and evaluate associated specific toxicity effects. Based on the information acquired from the simulations and analyzed together with available experimental data, the toxicological impact will be deduced. We will apply our approach to a range of common engineered nanoparticles, relating their physicochemical properties such as size and shape, surface charge, hydrophobicity (logP), and plasma protein binding affinity to the toxicological effects, and develop a test suite that allows toxicity predictions to be made on the basis of purely computational or limited in vitro screening tests.


Grant
Agency: Cordis | Branch: FP7 | Program: CP-SICA | Phase: KBBE-2008-3-1-04 | Award Amount: 2.60M | Year: 2009

Advances in the technologies for expressing proteins and extracting them from plants have allowed several plant-made products to be assessed for safety and efficacy. The results have been favourable and have culminated in the demonstration that a plant-produced vaccine can protect target animals against challenge. However, most of these successes have concerned the production of antigens which had previously been produced using established methods such as mammalian cell culture. For plants to fulfil their potential as a means of producing vaccines, it is now imperative that methods are developed for the rapid production and characterisation of a large number of vaccine candidates. This project will exploit recent developments in transient expression technologies to screen a range of vaccine candidates in plants. These methods can produce milligram quantities of candidate proteins in a matter of days using only small amounts (tens of grams) of plant tissue. The project will concentrate on screening vaccine candidate proteins which are capable of forming virus-like particles (VLPs), as such particulate structures are known to be potent stimulators of the immune system. Furthermore, they can be used as carriers of additional immunogenic sequences for the development of novel vaccines. The project will focus on diseases which are particularly relevant to both the EU and Russia, including Avian Influenza virus (AIV), Blue Tongue Virus (BTV), and Porcine Respiratory and Reproductive Syndrome Virus (PRRSV). The ability to screen many candidate VLPs will result in the development of novel vaccines against these and other important pathogens. At the same time as the screening is carried out, methods will be developed to allow the rapid translation of the information gained through the transient studies into larger scale production systems for the most promising candidates. This will enable low cost vaccines to be developed for use in livestock and, ultimately, humans.
