Patent
Chinese Academy of Sciences | Date: 2015-07-30

The present invention discloses a polymetallocarbosilane obtained by organometallic-catalysed polymerization and uses thereof, said polymetallocarbosilane having a structural formula as shown in (I). In the formula, R is methyl, ethyl, propyl, ethenyl, chloromethyl, phenyl or phenethyl; M is Ti, Zr or Hf; m is an integer equal to or greater than 1; n is an integer equal to or greater than 0; and Cp_(1) and Cp_(2) are each a cyclopentadienyl or substituted cyclopentadienyl group. The present invention produces polymetallocarbosilane by metallocene-catalysed addition polymerization of an organosilane, offering adjustable metal content in the polymer, simple reaction steps, mild reaction conditions and low preparation cost.


Patent
Chinese Academy of Sciences | Date: 2016-08-30

The present invention relates to a synthetic method for a graphitic carbon nitride material. The method involves homogeneously mixing a carbon nitride precursor with an ammonium salt and calcining the mixture to obtain a porous graphitic carbon nitride material. The ammonium salt is any one, or a combination of at least two, of salts that release gaseous NH_(3) during thermolysis. The present invention uses a thermolabile ammonium salt as a pore former; the thermolysis of the ammonium salt releases gas bubbles during calcination, and the subsequent bursting of these bubbles leads to the formation of a nanoporous structure. The proposed method is template-free and environmentally friendly, and the resultant material exhibits high photocatalytic activity in gas and water decontamination.


Patent
Chinese Academy of Sciences | Date: 2014-12-10

A system includes a named data networking network, a content delivery network, and a joint processing gateway. The named data networking network is used for providing data content to a node in the content delivery network or sending a content request command to the content delivery network; it is a network formed by devices supporting a named data networking protocol. The content delivery network is used for providing data content to a node in the named data networking network or sending a content request command to the named data networking network; it is a content delivery network set up over an IP network. The joint processing gateway is used for converting content data and transmitting the converted data from the named data networking network to the content delivery network, and for converting content data and transmitting the converted data from the content delivery network to the named data networking network.


Provided are a method for preparing an induced pluripotent stem cell and a composition used in the method. The method comprises: introducing a composition for promoting the formation of an induced pluripotent stem cell into a somatic cell, the composition comprising: (i) a c-Jun antagonist and one group of factors from among the following seven such groups: (1) Sox2, Klf4 and c-Myc, (2) Klf4 and c-Myc, (3) Oct3/4, Klf4 and c-Myc, (4) Sox2, Nanog and Lin28, (5) Oct3/4, Nanog and Lin28, (6) Oct3/4, Klf and Sox2, and (7) Klf4 and Sox2; or (ii) the c-Jun antagonist, Jhdm1b and Id1, and at least one of Glis1, Sall4 or Lrh1; or (iii) the c-Jun antagonist, Jhdm1b and Id1, and at least one of: Oct4, Klf4, Sox2, Lin28, Esrrb, Lef1, Utf1 or miRNA C. The present method allows for successful preparation of induced pluripotent stem cells with no generation of abnormal chromosomes.


Patent
Chinese Academy of Sciences | Date: 2016-09-02

The present invention relates to a spin logic device and electronic equipment comprising the same. A spin logic device may include: a spin Hall effect (SHE) layer formed of a conductive material exhibiting the spin Hall effect and configured to receive a first logic input current and a second logic input current, both being in-plane currents; a magnetic tunnel junction provided on the SHE layer, comprising a free magnetic layer in contact with the SHE layer, a barrier layer disposed on the free magnetic layer, and a reference magnetic layer disposed on the barrier layer; and a current wiring connected to the reference-magnetic-layer side of the magnetic tunnel junction, the current wiring cooperating with the SHE layer to apply a read current through the magnetic tunnel junction between them.


Patent
Chinese Academy of Sciences and A+ Network | Date: 2017-03-15

The present invention relates to a method, a device and a system for processing media resource information. The method comprises: a server acquires media resource information and user viewing information, the media resource information comprising media features, and the user viewing information comprising user features; the server ranks the media resource information according to the media features and the user features, so as to generate a first sequence; the server schedules the first sequence into time periods of idle channels according to preset priority of channels and time periods so as to generate a program list, and sends the program list to a client; the server receives an update request sent by the client, the update request comprising time baseline changes; and the server updates the program list according to the time baseline changes.
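The ranking-then-scheduling flow described above can be sketched in a few lines: score media items against user features, then greedily fill idle channel time periods in priority order. The scoring function, data shapes and all names below are illustrative assumptions, not the patent's actual formulas.

```python
# Hypothetical sketch of ranking media by user features and scheduling the
# resulting sequence into idle (channel, period) slots by preset priority.

def rank_media(items, user_weights):
    """Rank media items by the dot product of media features and user features."""
    score = lambda it: sum(user_weights.get(k, 0.0) * v
                           for k, v in it["features"].items())
    return sorted(items, key=score, reverse=True)

def schedule(ranked, idle_slots):
    """Assign ranked items to idle slots, highest channel/period priority first.

    idle_slots: list of (priority, channel, period); lower number = higher priority.
    Returns a program list mapping (channel, period) -> item id.
    """
    program = {}
    slots = sorted(idle_slots)                 # order by preset priority
    for item, (_, channel, period) in zip(ranked, slots):
        program[(channel, period)] = item["id"]
    return program

items = [
    {"id": "doc", "features": {"news": 0.2, "film": 0.9}},
    {"id": "talk", "features": {"news": 0.8, "film": 0.1}},
]
user = {"news": 1.0, "film": 0.3}              # this user prefers news
ranked = rank_media(items, user)
plan = schedule(ranked, [(1, "ch1", "20:00"), (2, "ch2", "21:00")])
```

A time-baseline update would simply recompute `plan` over the shifted slot list, matching the update request described in the abstract.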


Patent
Chinese Academy of Sciences | Date: 2017-01-04

An embodiment of the present disclosure provides a key protection method. By assigning each core of a multi-core processor its own symmetric master key, dynamically recovering the plaintext private key of the asymmetric algorithm through a decryption operation, and using Intel TSX, the method ensures at the hardware level that the private key and the intermediate variables used in the computation are stored only in the cache occupied by the operating core. This prevents attackers from stealing the private key from physical memory and secures the implementation of the public-key cryptographic algorithm in the computer system. Further, even if the OS is compromised and the attacker can directly read the memory storing the key, the attacker cannot obtain the plaintext private key, since the Intel TSX mechanism ensures the atomicity of the memory operations. Moreover, while physical attacks and system intrusions are resisted, the other cores of the multi-core processor may still perform cryptographic operations, which enhances operating efficiency.


An aflatoxin nanobody immunosorbent, an immunoaffinity column, and a preparation method and use thereof. The immunosorbent comprises a solid-phase carrier and the aflatoxin B1 nanobody 2014AFB-G15 coupled to the carrier. The 50% inhibitory concentration (IC_(50)) of nanobody 2014AFB-G15 against aflatoxin B1 is 0.66 ng/mL, and its cross-reactivities with aflatoxins B2, G1, G2 and M1 are 22.6%, 10.95%, 32.1% and 26%, respectively. The amino acid sequence of nanobody 2014AFB-G15 is as depicted in SEQ ID NO: 7, and its coding gene sequence is as depicted in SEQ ID NO: 8. The aflatoxin nanobody immunoaffinity column can be used for purification and concentration of sample extracts prior to instrumental analysis, and the column can be reused repeatedly.


Patent
Chinese Academy of Sciences | Date: 2017-03-01

The present invention provides a precision actuating device achieving precision actuation in at least two directions. The device includes: a first moving component having an installation space formed therein, wherein the first moving component has an upper surface and an inner side face, the installation space is recessed from the upper surface towards the interior of the first moving component and is open towards the upper surface, and the periphery of the installation space is defined by the inner side face; a first precision driving mechanism in transmission connection with the first moving component, for driving the first moving component along a first direction; and a second moving component and a second precision driving mechanism, both arranged and held within the installation space so as to move along the first direction together with the first moving component, wherein the second precision driving mechanism is in transmission connection with the second moving component and drives it within the installation space along a second direction different from the first direction. The precision actuating device has a compact structural design, a small footprint and a large movement range.


Patent
Chinese Academy of Sciences, A+ Network and Beijing Hili Technology Co. | Date: 2017-01-11

The present invention provides a system and method for providing an on-site service, said system containing a plurality of nodes, each node containing: a neighborhood node set generation module, used for generating a neighborhood node set on the basis of the bidirectional link bandwidth between the local node and each neighboring node; a neighborhood information index table generation module, used for generating a neighborhood information index table of the local node; a candidate service point selection module, used for selecting, according to a selection function, a candidate service node from the set of neighboring nodes, the selection function being defined as follows: for a current service request, compute the difference between the QoS of neighboring node i executing the service request and the QoS of the local node executing the service request; if the computed difference is smaller than a set threshold, neighboring node i serves as a candidate service node; and a service scheduling module, used for receiving status information and feedback information provided in real time by the candidate service nodes, and selecting, on the basis of this information, a candidate node or the local node to serve as the service-executing node.
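The selection function described above reduces to a simple threshold test on a QoS difference. A minimal sketch, assuming QoS is modeled as a single latency-like cost per node (the patent does not specify the QoS metric; the names and numbers below are invented):

```python
# Candidate selection: neighbor i becomes a candidate service node when
# QoS(i) - QoS(local) is smaller than a set threshold.

def select_candidates(local_qos, neighbor_qos, threshold):
    """Return ids of neighbors whose QoS difference vs. the local node
    is below the threshold (lower cost = better QoS here)."""
    return [i for i, q in neighbor_qos.items() if q - local_qos < threshold]

local = 120.0                       # cost of executing the request locally
neighbors = {"n1": 100.0, "n2": 180.0, "n3": 125.0}
candidates = select_candidates(local, neighbors, threshold=10.0)
```

The service scheduling module would then pick the executing node from `candidates` (or fall back to the local node) based on real-time status and feedback.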


Patent
Chinese Academy of Sciences | Date: 2017-01-11

The present invention provides a named data networking-based content delivery system and method. The system comprises a named data networking network, a content delivery network, and a joint processing gateway. The named data networking network is used for providing data content to a node in the content delivery network or sending a content request command to the content delivery network; it is a network formed by devices supporting a named data networking protocol. The content delivery network is used for providing data content to a node in the named data networking network or sending a content request command to the named data networking network; it is a content delivery network set up over an IP network. The joint processing gateway is used for converting content data and transmitting the converted data from the named data networking network to the content delivery network, and for converting content data and transmitting the converted data from the content delivery network to the named data networking network.
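The gateway's conversion step amounts to translating between the two networks' addressing schemes. A toy illustration, with naming conventions invented for the sketch (real NDN names and CDN URLs are richer than this):

```python
# Hypothetical name translation inside a joint processing gateway:
# an NDN content name maps to a CDN URL and back; payloads pass unchanged.

def ndn_to_cdn(interest_name):
    """'/cdn/example.com/video/1' -> 'http://example.com/video/1'"""
    parts = interest_name.strip("/").split("/")
    if parts[0] != "cdn":
        raise ValueError("not a CDN-routable NDN name")
    return "http://" + parts[1] + "/" + "/".join(parts[2:])

def cdn_to_ndn(url):
    """'http://example.com/video/1' -> '/cdn/example.com/video/1'"""
    rest = url.split("://", 1)[1]
    return "/cdn/" + rest
```

Round-tripping a name through both functions returns the original, which is the minimal correctness property such a gateway must preserve.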


Patent
Chinese Academy of Sciences | Date: 2017-02-01

The invention discloses a cell culture and experiment device for use in the field of biological and genetic engineering experiment apparatus, comprising a central distribution compartment, a culture compartment, a treatment compartment, and pipelines for delivering liquid between the central distribution compartment and the culture compartment and between the central distribution compartment and the treatment compartment. The central distribution compartment is equipped with a distribution chamber and a piston that can be moved forward and backward in the distribution chamber to alter the chamber's working volume. At the bottom of the distribution chamber, the central distribution compartment is equipped with a distribution valve controlling the connectivity between the distribution chamber and any of the pipelines. The invention provides a miniaturized apparatus integrating the central distribution compartment, the culture compartment and the treatment compartment, which can replace manual operations, save time and labor, and avoid wasting experimental raw materials.


Patent
Chinese Academy of Sciences and A+ Network | Date: 2017-04-05

The present invention provides a system and a method for maintaining a connection channel in a multi-device interworking service. The system comprises a multi-screen channel control client module and a multi-screen channel control server module. The multi-screen channel control client module is located on each terminal device and is used for acquiring an identifier of the terminal device and reporting the binding relationship between the terminal and a multi-screen channel to the multi-screen channel control server. The multi-screen channel control server module is located on a server that provides a data service and is used for recording the binding relationship between the terminal and the multi-screen channel and pushing services to the bound terminals. The present invention has the advantages that binding-relationship maintenance and UI synchronization can be implemented on a newly added Websocket layer without affecting the original service, enabling stable service upgrades; and during the implementation of a Websocket channel on a client, multiplexing of the Websocket channel between pages is implemented in a nested iframe mode, thereby reducing channel maintenance complexity.
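The server-side bookkeeping described above (record terminal-to-channel bindings, push to bound terminals) can be sketched as a small registry. The class, method names and message format are invented for illustration; a real implementation would push over the Websocket layer rather than return a list.

```python
# Toy sketch of multi-screen channel binding maintenance on the server.

class ChannelControlServer:
    def __init__(self):
        self.bindings = {}                 # channel id -> set of terminal ids

    def report_binding(self, terminal_id, channel_id):
        """Record that a terminal is bound to a multi-screen channel."""
        self.bindings.setdefault(channel_id, set()).add(terminal_id)

    def push(self, channel_id, message):
        """Return the (terminal, message) pairs that would be pushed."""
        return [(t, message) for t in sorted(self.bindings.get(channel_id, ()))]

server = ChannelControlServer()
server.report_binding("phone-1", "ch-A")
server.report_binding("tv-1", "ch-A")
pushed = server.push("ch-A", "sync-ui")
```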


Patent
Chinese Academy of Sciences | Date: 2017-02-01

Disclosed is aflatoxin B1 nanobody 2014AFB-G15. Its amino acid sequence is shown in SEQ ID NO: 7, and its gene coding sequence is shown in SEQ ID NO: 8. Aflatoxin B1 nanobody 2014AFB-G15 obtained in the present disclosure is resistant to organic reagents, resistant to high temperature, and highly stable. The 50% inhibitory concentration (IC_(50)) of aflatoxin B1 nanobody 2014AFB-G15 against aflatoxin B1 is 0.66 ng/mL, and its cross-reaction rates with aflatoxin B2, aflatoxin G1, aflatoxin G2, and aflatoxin M1 are 22.6%, 0.95%, 32.1%, and 26%, respectively.


Patent
Chinese Academy of Sciences and Dongguan Eontec Co. | Date: 2017-01-11

Provided are a Zr-Cu-Ni-Al-Ag-Y bulk amorphous alloy, a preparation method therefor and an application thereof. The alloy consists, by atomic percentage, of: 41%-63% Zr, 18%-46% Cu, 1.5%-12.5% Ni, 4%-15% Al, 0.01%-5% Ag and 0.01%-5% Y. The amorphous alloy is prepared by a copper mold casting method and can be used as an antibacterial material in many fields.


Patent
Chinese Academy of Sciences | Date: 2017-04-19

The present invention provides a communication method based on an assembled communication protocol stack. The method comprises: constructing protocols to form protocol modules and placing the protocol modules into a protocol module library; extracting the required protocol modules from the library and assembling them to form a communication protocol stack; installing the assembled communication protocol stack on a protocol stack running device; and having an application on the device implement data communication using the installed stack. The construction of protocol modules specifically comprises: building execution codes of the protocols for the target operating system to form protocol modules; and defining PDUs, that is, defining the corresponding data length and data structure for each mutual input/output relationship between a protocol module and the other protocol modules in the library. The relationships among the modules to be assembled are established according to module compatibility, either by visually dragging or by configuring a file; after assembly is completed, a protocol stack configuration file and a protocol stack execution file are generated.
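The assembly idea, modules declaring the PDU they consume and produce, a library storing them, and an assembler chaining only compatible modules, can be sketched as follows. The module names, PDU types and encode steps are invented for illustration, not the patent's actual definitions.

```python
# Minimal sketch of a protocol module library with PDU-compatibility checks.

class ProtocolModule:
    def __init__(self, name, consumes, produces, handler):
        self.name = name
        self.consumes = consumes      # PDU type expected from the layer above
        self.produces = produces      # PDU type handed to the layer below
        self.handler = handler        # per-module encode step

library = {}

def register(module):
    library[module.name] = module

def assemble(names):
    """Chain modules from the library, checking PDU compatibility."""
    stack = [library[n] for n in names]
    for upper, lower in zip(stack, stack[1:]):
        if upper.produces != lower.consumes:
            raise ValueError(f"{upper.name} -> {lower.name}: incompatible PDUs")
    return stack

def send(stack, payload):
    """Run a payload down the assembled stack, encoding layer by layer."""
    for mod in stack:
        payload = mod.handler(payload)
    return payload

register(ProtocolModule("app", "text", "segment", lambda p: f"SEG[{p}]"))
register(ProtocolModule("transport", "segment", "packet", lambda p: f"PKT[{p}]"))
register(ProtocolModule("network", "packet", "frame", lambda p: f"FRM[{p}]"))

stack = assemble(["app", "transport", "network"])
wire = send(stack, "hello")
```

The compatibility check in `assemble` plays the role of the configuration-time validation the abstract describes; an incompatible chain fails before any stack is installed.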


Zhigang Y.,Chinese Academy of Sciences
ACM International Conference Proceeding Series | Year: 2016

The telephone speech corpus is the basis for developing human-machine interaction systems designed for communication and the mobile internet. The main problem in constructing a qualified speech corpus today is the lack of a standard scheme. This research tries to find a standardization program that allows a corpus to be established more efficiently and to be used or shared more easily. The specifications for constructing a speech corpus are also introduced in the paper. Finally, a telephone speech corpus, TSC973, is presented as an example to illustrate the standardization program. © 2016 ACM.


Qiu Z.,Chinese Academy of Sciences | Li X.,Chinese Academy of Sciences
Neuroscience Bulletin | Year: 2017

Modeling brain disorders has always been one of the key tasks in neurobiological studies. A wide range of organisms including worms, fruit flies, zebrafish, and rodents have been used for modeling brain disorders. However, whether complicated neurological and psychiatric symptoms can be faithfully mimicked in animals is still debatable. In this review, we discuss key findings using non-human primates to address the neural mechanisms underlying stress and anxiety behaviors, as well as technical advances for establishing genetically-engineered non-human primate models of autism spectrum disorders and other disorders. Considering the close evolutionary connections and similarity of brain structures between non-human primates and humans, together with the rapid progress in genome-editing technology, non-human primates will be indispensable for pathophysiological studies and exploring potential therapeutic methods for treating brain disorders. © 2017, The Author(s).


Nan H.,Chinese Academy of Sciences
Nature Genetics | Year: 2017

Soybean is a major legume crop originating in temperate regions, and photoperiod responsiveness is a key factor in its latitudinal adaptation. Varieties from temperate regions introduced to lower latitudes mature early and have extremely low grain yields. Introduction of the long-juvenile (LJ) trait extends the vegetative phase and improves yield under short-day conditions, thereby enabling expansion of cultivation in tropical regions. Here we report the cloning and characterization of J, the major classical locus conferring the LJ trait, and identify J as the ortholog of Arabidopsis thaliana EARLY FLOWERING 3 (ELF3). J depends genetically on the legume-specific flowering repressor E1, and J protein physically associates with the E1 promoter to downregulate its transcription, relieving repression of two important FLOWERING LOCUS T (FT) genes and promoting flowering under short days. Our findings identify an important new component in flowering-time control in soybean and provide new insight into soybean adaptation to tropical regions. © 2017 Nature Publishing Group, a division of Macmillan Publishers Limited. All Rights Reserved.


Alterations in cellular ubiquitin (Ub) homeostasis, known as Ub stress, characterize and affect cellular responses in multiple conditions, yet the underlying mechanisms are incompletely understood. Here we report that the autophagy receptor p62/sequestosome-1 interacts with the E2 Ub conjugating enzymes UBE2D2 and UBE2D3. Endogenous p62 undergoes E2-dependent ubiquitylation during upregulation of Ub homeostasis, a condition termed Ub+ stress, which is intrinsic to Ub overexpression, heat shock or prolonged proteasomal inhibition by bortezomib, a chemotherapeutic drug. Ubiquitylation of p62 disrupts dimerization of the UBA domain of p62, liberating its ability to recognize polyubiquitylated cargoes for selective autophagy. We further demonstrate that this mechanism might be critical for autophagy activation under Ub+ stress conditions. Delineating the mechanism and regulatory roles of p62 in sensing Ub stress and controlling selective autophagy could help in understanding and modulating cellular responses to a variety of endogenous and environmental challenges, potentially opening a new avenue for the development of therapeutic strategies against autophagy-related maladies. Cell Research advance online publication 21 March 2017; doi:10.1038/cr.2017.40. © 2017 Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences


Guo X.,Chinese Academy of Sciences
Cell Research | Year: 2017

T-cell receptor-CD3 complex (TCR) is a versatile signaling machine that can initiate antigen-specific immune responses based on various biochemical changes of CD3 cytoplasmic domains, but the underlying structural basis remains elusive. Here we developed biophysical approaches to study the conformational dynamics of CD3ε cytoplasmic domain (CD3εCD). At the single-molecule level, we found that CD3εCD could have multiple conformational states with different openness of three functional motifs, i.e., ITAM, BRS and PRS. These conformations were generated because different regions of CD3εCD had heterogeneous lipid-binding properties and therefore had heterogeneous dynamics. Live-cell imaging experiments demonstrated that different antigen stimulations could stabilize CD3εCD at different conformations. Lipid-dependent conformational dynamics thus provide structural basis for the versatile signaling property of TCR.Cell Research advance online publication 24 March 2017; doi:10.1038/cr.2017.42. © 2017 Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences


Fan X.,Chinese Academy of Sciences
Cell Research | Year: 2017

Extensive pre-mRNA back-splicing generates numerous circular RNAs (circRNAs) in human transcriptome. However, the biological functions of these circRNAs remain largely unclear. Here we report that N6-methyladenosine (m6A), the most abundant base modification of RNA, promotes efficient initiation of protein translation from circRNAs in human cells. We discover that consensus m6A motifs are enriched in circRNAs and a single m6A site is sufficient to drive translation initiation. This m6A-driven translation requires initiation factor eIF4G2 and m6A reader YTHDF3, and is enhanced by methyltransferase METTL3/14, inhibited by demethylase FTO, and upregulated upon heat shock. Further analyses through polysome profiling, computational prediction and mass spectrometry reveal that m6A-driven translation of circRNAs is widespread, with hundreds of endogenous circRNAs having translation potential. Our study expands the coding landscape of human transcriptome, and suggests a role of circRNA-derived proteins in cellular responses to environmental stress.Cell Research advance online publication 10 March 2017; doi:10.1038/cr.2017.31. © 2017 Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences


Gong T.,Chinese Academy of Sciences | Luo D.,CAS Institute of Atmospheric Physics
Journal of Climate | Year: 2017

In this paper, the lead-lag relationship between Arctic sea ice variability over the Barents-Kara Sea (BKS) and Ural blocking (UB) in winter (DJF), from 1979/80 to 2011/12, is examined. It is found that in a regressed DJF-mean field an increased UB frequency (days) corresponds to an enhanced sea ice decline over the BKS, while high sea surface temperature over the BKS is accompanied by a significant Arctic sea ice reduction. Lagged daily regression and correlation reveal that the growth and maintenance of the UB that is related to the positive North Atlantic Oscillation (NAO+) through the negative east Atlantic/west Russia (EA/WR-) wave train are accompanied by an intensified negative BKS sea ice anomaly, and that the BKS sea ice reduction lags the UB pattern by about four days. Because the intensified UB pattern occurs together with enhanced downward infrared radiation (IR) associated with intensified moisture flux convergence and total column water over the BKS, the UB pattern contributes significantly to the BKS sea ice decrease on a time scale of weeks through intensified positive surface air temperature (SAT) anomalies resulting from enhanced downward IR. It is also found that the BKS sea ice decline can persist even when the UB has disappeared, indicating that the UB pattern is an important amplifier of the BKS sea ice reduction. Moreover, it is demonstrated that the EA/WR- wave train formed by the combined NAO+ and UB patterns is closely related to the amplified warming over the BKS through the strengthening (weakening) of mid-to-high-latitude westerly winds over the North Atlantic (Eurasia). © 2017 American Meteorological Society.
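The lagged daily regression diagnostic can be illustrated with a plain lagged Pearson correlation: shift one daily series against the other and find the lag with the strongest (most negative) correlation. The synthetic series below, in which the ice anomaly follows the blocking index by four days, is purely illustrative and not the paper's data.

```python
# Lagged correlation between a daily blocking index and a sea ice anomaly.

import math
import random

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def lagged_corr(x, y, max_lag):
    """Correlation of x(t) with y(t + lag) for lag = 0..max_lag."""
    return {lag: pearson(x[:len(x) - lag], y[lag:]) for lag in range(max_lag + 1)}

random.seed(0)
ub = [random.gauss(0, 1) for _ in range(200)]        # daily blocking index
ice = [-0.8 * ub[t - 4] + random.gauss(0, 0.3)       # ice anomaly lags UB by 4 d
       for t in range(200)]
corrs = lagged_corr(ub, ice, max_lag=8)
best_lag = min(corrs, key=corrs.get)                 # most negative correlation
```

With these synthetic series the strongest negative correlation appears at a four-day lag, mirroring the kind of lead-lag signal the study extracts from reanalysis data.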


Zhang Q.S.,Chinese Academy of Sciences
Astrophysical Journal | Year: 2017

The resistance coefficients in the screened Coulomb potential of stellar plasma are evaluated to high accuracy. I have analyzed the possible singularities in the integral of the scattering angle. Singularities are possible in the case of an attractive potential, which may cause problems for the numerical integral. In order to avoid these problems, I have used a proper scheme, e.g., splitting the domain into many subintervals, where the width of each subinterval is determined by the variation of the integrand, to calculate the scattering angle. The collision integrals are calculated using Romberg's method, so the accuracy is high (i.e., ∼10^(-12)). The results of collision integrals and their derivatives for -7 ≤ ψ ≤ 5 are listed. Using Hermite polynomial interpolation of these data, the collision integrals can be obtained with an accuracy of 10^(-10). For very weakly coupled plasma (ψ ≥ 4.5), analytical fittings for the collision integrals are available with an accuracy of 10^(-11). I have compared the final results of the resistance coefficients with other works and found that, for a repulsive potential, the results are basically the same as others'; for an attractive potential, the results in cases of intermediate and strong coupling show significant differences. The resulting resistance coefficients are tested in the solar model. Compared with the widely used models of Cox et al. and Thoul et al., the resistance coefficients in the screened Coulomb potential lead to a slightly weaker effect in the solar model, which is contrary to the expectation of attempts to solve the solar abundance problem. © 2017. The American Astronomical Society. All rights reserved.
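The numerical strategy named above, Romberg integration applied per subinterval so that rapidly varying integrands are tamed by splitting the domain, can be sketched generically. This is a standard-textbook Romberg implementation, not the paper's collision-integral code; the breakpoints would in practice be chosen from the variation of the integrand.

```python
# Romberg integration: repeated trapezoid refinement + Richardson
# extrapolation, summed over user-chosen subintervals.

import math

def romberg(f, a, b, levels=12):
    """Romberg integration of f on [a, b]."""
    R = [[0.0] * levels for _ in range(levels)]
    h = b - a
    R[0][0] = 0.5 * h * (f(a) + f(b))
    for i in range(1, levels):
        h *= 0.5
        # trapezoid refinement: add the new midpoints at this level
        s = sum(f(a + (2 * k - 1) * h) for k in range(1, 2 ** (i - 1) + 1))
        R[i][0] = 0.5 * R[i - 1][0] + h * s
        for j in range(1, i + 1):   # Richardson extrapolation columns
            R[i][j] = R[i][j - 1] + (R[i][j - 1] - R[i - 1][j - 1]) / (4 ** j - 1)
    return R[levels - 1][levels - 1]

def integrate_split(f, points, levels=12):
    """Integrate over [points[0], points[-1]], split at the given breakpoints."""
    return sum(romberg(f, a, b, levels) for a, b in zip(points, points[1:]))

val = integrate_split(math.sin, [0.0, 1.0, math.pi])   # exact value is 2
```

Splitting at breakpoints where the integrand changes character is what keeps the extrapolation table convergent near the (near-)singular regions of the attractive-potential case.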


Xie F.-G.,Chinese Academy of Sciences | Yuan F.,Chinese Academy of Sciences
Astrophysical Journal | Year: 2017

A correlation among the radio luminosity (L_R), X-ray luminosity (L_X), and black hole (BH) mass (M_BH) in active galactic nuclei (AGNs) and BH binaries is known to exist and is called the "fundamental plane" of BH activity. Yuan and Cui predict that the radio/X-ray correlation index, ξ_X, changes from ξ_X ≈ 0.6 to ξ_X ≈ 1.2-1.3 when L_X/L_Edd decreases below a critical value of ∼10^(-6). While many works favor such a change, there are also several works claiming the opposite. In this paper, we gather from the literature the largest quiescent AGN (defined as L_X/L_Edd ≲ 10^(-6)) sample to date, consisting of 75 sources. We find that these quiescent AGNs follow a ξ_X ≈ 1.23 radio/X-ray relationship, in excellent agreement with the Yuan and Cui prediction. The reason for the discrepancy between the present result and some previous works is that their samples contain not only quiescent sources but also "normal" ones (i.e., L_X/L_Edd ≳ 10^(-6)). In this case, the quiescent sources will mix up with those normal ones in L_R and L_X. The value of ξ_X will then be between 0.6 and ∼1.3, with the exact value being determined by the sample composition, i.e., the fraction of the quiescent and normal sources. Based on this result, we propose that a more physical way to study the fundamental plane is to replace L_R and L_X with L_R/L_Edd and L_X/L_Edd, respectively. © 2017. The American Astronomical Society. All rights reserved.
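The correlation index ξ_X is the slope of an ordinary least-squares fit in log-log space, log L_R = ξ_X · log L_X + const. The sketch below builds a synthetic sample with a slope of 1.23 to mirror the quiescent-branch value and recovers it; the luminosities are invented, not the paper's sample.

```python
# OLS slope estimate of the radio/X-ray correlation index in log-log space.

import random

def ols_slope(x, y):
    """Least-squares slope of y against x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sum((a - mx) ** 2 for a in x)
    return num / den

random.seed(1)
log_lx = [random.uniform(34.0, 40.0) for _ in range(75)]   # 75 synthetic sources
log_lr = [1.23 * lx + 5.0 + random.gauss(0, 0.1) for lx in log_lx]
xi_x = ols_slope(log_lx, log_lr)
```

The paper's proposed normalization amounts to subtracting log L_Edd from both axes before fitting; for a fixed-mass subsample this leaves the slope unchanged, which is why mixing populations with different M_BH is what biases the unnormalized fit.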


Jithesh V.,Chinese Academy of Sciences | Wang Z.,Chinese Academy of Sciences
Astrophysical Journal | Year: 2017

We report the identification of seven transient X-ray sources in the nearby Magellanic-type galaxy NGC 4449 using archival multi-epoch X-ray observations conducted with the Chandra, XMM-Newton, and Swift telescopes over the years 2001-2013. Among them, two sources are classified as supersoft X-ray sources (SSSs) because of their soft X-ray color; the rest of the sources are X-ray binaries (XRBs). Transient SSSs' spectra can be fitted with a blackbody of effective temperature ∼80-105 eV, and luminosities were ≃10^(37)-10^(38) erg s^(-1) in 0.3-8 keV. These properties are consistent with the widely accepted model for SSSs, an accreting white dwarf with steady nuclear burning on its surface, and the SSS emission has also been observed in many post-nova systems. Detailed analysis of one sufficiently bright SSS revealed strong short-term variability, possibly showing a 2.3-hr periodic modulation, and long-term variability, detectable over 23 years with different X-ray telescopes before the year 2003. The X-ray properties of four other transients are consistent with neutron star or black hole binaries in their hard state, whereas the remaining source is most likely an XRB with a quasi-soft X-ray spectrum. Analysis of archival Hubble Space Telescope image data was also conducted, and multiple massive stars were found as possible counterparts. We conclude that the X-ray transient properties in NGC 4449 are similar to those in other Magellanic-type galaxies. © 2017. The American Astronomical Society. All rights reserved.


Sun R.,Chinese Academy of Sciences | Jia P.,Taiyuan University of Technology
Publications of the Astronomical Society of the Pacific | Year: 2017

Space debris constitutes a special class of fast-moving near-Earth objects and is an interesting topic in time-domain astronomy. Optical surveys are the main technique for observing space debris and contribute much to studies of the space environment. However, due to the motion of the registered objects, image degradation is critical in optical space debris observations, as it affects the efficiency of data reduction and lowers the precision of astrometry. Therefore, image restoration in the form of deconvolution can be applied to improve data quality and reduction accuracy. To promote the image processing and optimize the reduction, the image degradation across the field of view is modeled statistically with principal component analysis, and an efficient mean point-spread function (PSF) is derived from the raw images, which is further used in the image restoration. To test the efficiency and reliability, trial observations were made for both low-Earth-orbit and high-Earth-orbit objects. The positions of all targets were measured using our novel approach and compared with the reference positions. The performance of image restoration employing our estimated PSF was compared with several competitive approaches. The proposed image restoration outperformed the others, so that the influence of image degradation was distinctly reduced, resulting in a higher signal-to-noise ratio and more precise astrometric measurements. © 2017. The Astronomical Society of the Pacific. All rights reserved.
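The PSF-modeling step can be sketched minimally: treat each star cutout as a vector, average the flux-normalized cutouts to get a mean PSF, and extract the leading variation mode with a power iteration on the sample covariance. This is a stand-in for the paper's principal component analysis; the 1-D toy cutouts and all names are invented for illustration.

```python
# Mean PSF and leading PCA mode from star cutouts (pure-Python sketch).

def mean_psf(cutouts):
    """Average of flux-normalized cutouts (equal-length lists of pixels)."""
    norm = [[p / sum(c) for p in c] for c in cutouts]
    n = len(norm)
    return [sum(col) / n for col in zip(*norm)]

def first_pc(cutouts, iters=200):
    """Leading principal component of mean-subtracted, normalized cutouts,
    found by power iteration on X^T X."""
    mu = mean_psf(cutouts)
    X = [[p / sum(c) - m for p, m in zip(c, mu)] for c in cutouts]
    v = [1.0] + [0.0] * (len(mu) - 1)          # deterministic start vector
    for _ in range(iters):
        proj = [sum(a * b for a, b in zip(row, v)) for row in X]   # X v
        w = [sum(X[i][j] * proj[i] for i in range(len(X)))         # X^T (X v)
             for j in range(len(v))]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# two toy 1-D "cutouts": a narrow and a broad profile
cuts = [[0.0, 1.0, 4.0, 1.0, 0.0],
        [0.5, 1.0, 2.0, 1.0, 0.5]]
psf = mean_psf(cuts)     # unit-flux mean PSF
pc1 = first_pc(cuts)     # unit-norm leading variation mode
```

In the paper's pipeline the analogue of `psf` (plus a few leading modes) feeds the deconvolution step; the sketch only shows the statistical modeling idea.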


Qiu J.,Montana State University | Cheng J.,Chinese Academy of Sciences
Astrophysical Journal Letters | Year: 2017

We report observations of a two-stage coronal dimming in an eruptive event of a two-ribbon flare and a fast coronal mass ejection (CME). Weak gradual dimming persists for more than half an hour before the onset of the two-ribbon flare and the fast rise of the CME. It is followed by abrupt rapid dimming. The two-stage dimming occurs in a pair of conjugate dimming regions adjacent to the two flare ribbons, and the flare onset marks the transition between the two stages of dimming. At the onset of the two-ribbon flare, transient brightenings are also observed inside the dimming regions, before rapid dimming occurs at the same places. These observations suggest that the CME structure, most probably anchored at the twin dimming regions, undergoes a slow rise before the flare onset, and its kinematic evolution has significantly changed at the onset of flare reconnection. We explore diagnostics of the CME evolution in the early phase with analysis of the gradual dimming signatures prior to the CME eruption. © 2017. The American Astronomical Society. All rights reserved.


Chi X.,Peking University | Chi X.,Chinese Academy of Sciences | Guo Q.,Peking University | Fang J.,Peking University | And 2 more authors.
Journal of Plant Ecology | Year: 2017

To quantify the seasonal differences in the effects of leaf habit, species identity, initial diameter, neighborhood interaction and stand environment on absolute tree diameter growth rates in a subtropical forest in China. Methods We used man-made dendrometer bands to record the radial increments of all trees with diameter at breast height (DBH) ≥5 cm and height ≥3 m within 25 comparative study plots (30 × 30 m each) of the Biodiversity-Ecosystem Functioning Experiment China (BEF-China) in the Gutianshan National Nature Reserve, Zhejiang Province, China. We measured stem circumferences twice a year from 2011 to 2014 to calculate the absolute diameter growth rate for a warm and wet season (WWS, April to September) and a dry and cold season (DCS, October to the next March) for each individual tree: annual growth (GRyear), growth during the WWS (GRWWS) and growth during the DCS (GRDCS). We first tested the differences in growth rates between seasons using paired t-tests with Bonferroni correction. We then applied linear mixed models to explore the effects of leaf habit, species identity, initial diameter, neighborhood interaction (indicated by richness, density and total basal area of all neighboring trees within a radius of 5 m around target trees), stand age and topography (elevation, slope and aspect) on tree growth rates in the two seasons for three deciduous and 14 evergreen species. Important Findings GRyear, GRWWS and GRDCS ranged from 0.04 to 0.50 cm year-1 (mean = 0.21), 0.03 to 0.46 cm season-1 (mean = 0.18) and 0.01 to 0.05 cm season-1 (mean = 0.03) across the 17 species, respectively. GRWWS was significantly higher than GRDCS for all species. Faster-growing species tended to have larger absolute differences between the WWS and DCS.
Tree growth rates of both seasons and of the year (GRyear, GRWWS and GRDCS) varied significantly with leaf habit and species identity, increased allometrically with initial diameter and decreased with stand age, but were not significantly related to topography or to neighborhood richness or density. GRWWS decreased with neighborhood total basal area, while GRDCS did not. In conclusion, species might be temporally complementary, contributing to plot growth at different times of the year. © The Author 2016. Published by Oxford University Press on behalf of the Institute of Botany, Chinese Academy of Sciences and the Botanical Society of China. All rights reserved.
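The per-species seasonal comparison above (paired t-tests with Bonferroni correction) can be sketched as follows. The species names and growth values here are synthetic placeholders, not the study's data:

```python
import numpy as np
from scipy import stats

def paired_tests_bonferroni(per_species):
    """Paired t-test of WWS vs DCS growth for each species.

    Bonferroni correction: each raw p-value is multiplied by the
    number of tests performed and capped at 1."""
    m = len(per_species)
    return {name: min(stats.ttest_rel(wws, dcs).pvalue * m, 1.0)
            for name, (wws, dcs) in per_species.items()}

rng = np.random.default_rng(0)
data = {}
for sp in ["sp01", "sp02", "sp03"]:
    dcs = rng.normal(0.03, 0.005, size=30)        # per-tree DCS growth (cm)
    wws = dcs + rng.normal(0.15, 0.02, size=30)   # same trees grow faster in WWS
    data[sp] = (wws, dcs)
p_corr = paired_tests_bonferroni(data)
```

Pairing the two seasonal rates tree by tree (rather than a two-sample test) removes between-tree variability, which is why the seasonal contrast is detected so sharply.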


Guo Q.,Peking University | Chi X.,Peking University | Chi X.,Chinese Academy of Sciences | Xie Z.,CAS Institute of Botany | Tang Z.,Peking University
Journal of Plant Ecology | Year: 2017

Asymmetric competition for light may depress growth rates (GRs) to different extents for tree individuals of different sizes. Because functional groups respond differently to light availability, tree individuals of different functional groups may experience different competition intensities; e.g., canopy and deciduous species grow faster and demand more light than understory and evergreen species. In this study, we estimated the effects of asymmetric competition for light using individual GRs and explored the effects of asymmetric competition on growth among different functional groups (e.g., canopy vs. understory species and deciduous vs. evergreen species). Methods We measured growth in circumference to determine the radial increments of a total of 2233 stems with diameter at breast height ≥ 5.0 cm in a permanent plot (140 × 80 m) located in a typical evergreen and deciduous broadleaved mixed forest on Mt Shennongjia, China. All measurements were carried out at ~6-month intervals, every April and October from 2012 to 2014, and the biomass of each individual was calculated based on its diameter and species-specific allometry. We then calculated GRs of annual biomass growth (growth between October and the next October). Considering the hypothesis that asymmetric competition for light among trees of different sizes may result in a steeper allometric growth curve with increasing tree size, we further divided the sampled trees into subsets according to their height, at intervals of 1 m, and then fitted the scaling relationship between the logarithm of the biomass GR (logGR) and the logarithm of diameter (logD) for each height class using standardized major axis regression. Finally, we used simple linear regression to test whether the scaling exponent was related to tree height. The above analyses were conducted for the annual growth of all tree species, canopy species, understory species and treelets, and deciduous vs. evergreen species.
Important Findings We observed a concave curve for the relationship between logGR and logD, with the scaling exponent between logGR and logD increasing with tree height. This pattern held for the annual growth of canopy species and deciduous species but not for the annual growth of understory species, treelets or evergreen species. These results suggest that asymmetric competition for light is more important in regulating the GRs of fast-growing species, such as canopy species and deciduous species, than those of shade-tolerant species, such as understory species, treelets and evergreen species. © The Author 2016. Published by Oxford University Press on behalf of the Institute of Botany, Chinese Academy of Sciences and the Botanical Society of China. All rights reserved.
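The allometric analysis described above reduces to the standardized major axis (SMA) identity slope = sign(r) · sd(logGR)/sd(logD), fitted per height class, with the resulting exponent then regressed on height. A minimal sketch with synthetic data (all numbers, height ranges, and the built-in "steeper allometry for taller trees" assumption are invented for illustration):

```python
import numpy as np

def sma_slope(x, y):
    """Standardized major axis slope of y on x: sign(r) * sd(y)/sd(x)."""
    r = np.corrcoef(x, y)[0, 1]
    return np.sign(r) * np.std(y, ddof=1) / np.std(x, ddof=1)

rng = np.random.default_rng(1)
heights = np.arange(5, 15)                 # height classes (m), hypothetical
exponents = []
for h in heights:
    logD = rng.uniform(0.7, 1.4, size=80)  # log10 DBH (cm) within the class
    # Build in a steeper allometry for taller trees, mimicking the
    # asymmetric-light-competition hypothesis tested in the paper.
    logGR = (0.4 + 0.05 * h) * logD + rng.normal(0, 0.02, size=80)
    exponents.append(sma_slope(logD, logGR))

# Simple linear regression of the scaling exponent on tree height:
trend = np.polyfit(heights, exponents, 1)[0]
```

A positive `trend` is the signature the study looks for: the logGR-logD curve steepens with height, i.e. a concave growth-diameter relationship overall.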


Zhan W.L.,Chinese Academy of Sciences
IPAC 2016 - Proceedings of the 7th International Particle Accelerator Conference | Year: 2016

High-power accelerator-driven production of intense external neutrons for a closed fuel cycle is a new approach to sustainable fission energy: it utilizes more than 90% of fission resources, roughly two orders of magnitude more efficiently than existing fission energy. The new approach comprises a fission-fuel burner and used nuclear fuel (UNF) recycling. The burner is designed for nuclear waste transmutation, fissile material breeding, and in-situ energy production, driven by the accelerator at start-up. The UNF recycle removes about 50% of the fission products (FP); the residue is fabricated into recycled fuel and then burned in the burner. Typically, LWR FP can be removed by an extended "AIROX" process for volatile FP and by rare-earth-element extraction for lanthanide FP. The new approach could thus sustain fission energy for close to 10,000 years while limiting nuclear waste to <4% in quantity, with a lifetime of <500 years. The Chinese development roadmap has four phases, and new research sites are introduced. The burner, an evolution of the ADS, consists of a high-power superconducting linac (SCL), a spallation target, and a subcritical core. A ~25 MeV SCL prototype will extract a proton beam before the end of 2016; at present, a 10 MeV SCL is being tuned with a CW beam of tens of kW. R&D on the 14-18 GHz ECRIS shows more stable operation at 1/10 the RF power and 1/3 the gas feed-in/pump-out. A new prototype of an intense granular-fluid spallation neutron target demonstrates simple and stable operation, like an hourglass, with heat removed off-line; this kind of target could extend spallation neutron target power to 10-100 MW, and different target materials could serve different purposes. Copyright © 2016 CC-BY-3.0 and by the respective authors.


Wang W.,Chinese Academy of Sciences
Annals of Plastic Surgery | Year: 2017

BACKGROUND: Flap prefabrication turns a random flap into an axial flap by transferring a vascular pedicle. METHODS: Over the past 13 years, we prefabricated 20 flaps in 20 patients using the superficial temporal artery and its concomitant veins. Typically, a 50- to 800-mL tissue expander was implanted in the donor site. After flap maturation, the prefabricated flap was raised and transferred locally to cover a large defect on the face. All cases were followed up regularly. RESULTS: The patients' ages ranged from 3 to 27 years, flap sizes from 3.5 × 5.5 cm to 13 × 15 cm, and the length of the superficial temporal artery from 10 to 15 cm. All flaps were transferred successfully: 10 flaps had venous congestion, and partial epidermal exfoliation and flap necrosis occurred in 4 flaps. All cases were followed up for at least 1 year; the longest follow-up period was 9 years. Long-term follow-up showed that the prefabricated flaps survived in good condition with satisfactory outcomes. CONCLUSIONS: Because flap prefabrication is practical, and long-term follow-up has proved its favorable characteristics and stability, it is a fine method for large-area facial reconstruction. Copyright © 2017 Wolters Kluwer Health, Inc. All rights reserved.


Zhao J.-F.,Chinese Academy of Sciences
Proceedings - 2016 International Conference on Information System and Artificial Intelligence, ISAI 2016 | Year: 2016

With the rapid development of computer technology, computers are widely used in fields such as politics, economics, military affairs, culture, and education. As society enters the information age, information security has become a fundamental concern. The basic principle of the Von Neumann computer architecture is the stored program: programs and data reside in memory, and the computer processes a problem step by step under program control until results are obtained. As the main store of program instructions and data, memory has drawn increasing attention for its security protection. This paper surveys the various attacks on memory and their characteristics, and reviews memory security research at home and abroad, covering confidentiality techniques, integrity techniques, and their open problems. Finally, research on memory security protection is summarized and future directions are discussed. © 2016 IEEE.


Zhao J.-F.,Chinese Academy of Sciences
Proceedings - 2016 International Conference on Information System and Artificial Intelligence, ISAI 2016 | Year: 2016

Software aging is an objective phenomenon in which a software system's performance decreases and its failure rate increases over time; it is a severe test of system reliability. Because the onset of software aging is uncertain, a new method based on non-stationary time series modeling is proposed to address the difficulty of detecting and analyzing software aging and of scheduling software rejuvenation. To provide a more accurate reference for software rejuvenation, models of the key parameters of the running software are built with non-stationary time series. An experimental platform based on the popular Apache server software is presented. Modeling results show that the average prediction error for the key aging parameters is within 4%, which describes the software aging phenomenon and provides a more accurate reference for real servers. © 2016 IEEE.
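A minimal illustration of the idea: difference a trending (non-stationary) resource series to make it stationary, fit a simple model to the differences, and forecast forward. The AR(1)-on-differences model and the synthetic "memory usage" trace below are stand-ins for illustration, not the paper's actual model or data:

```python
import numpy as np

def fit_ar1_on_diff(series):
    """Fit an AR(1) model to the first differences of a non-stationary
    series (differencing removes the aging trend)."""
    d = np.diff(series)
    x, y = d[:-1], d[1:]
    return float(np.dot(x, y) / np.dot(x, x))

def forecast(series, phi, steps):
    """Roll the AR(1)-on-differences model forward `steps` points."""
    last_diff, level, out = np.diff(series)[-1], series[-1], []
    for _ in range(steps):
        last_diff = phi * last_diff
        level = level + last_diff
        out.append(level)
    return np.array(out)

# Synthetic 'memory usage' trace with a linear aging trend (MB).
t = np.arange(100)
usage = 200.0 + 0.5 * t
phi = fit_ar1_on_diff(usage[:90])
pred = forecast(usage[:90], phi, steps=10)
rel_err = np.mean(np.abs(pred - usage[90:]) / usage[90:])
```

In practice the forecast error (here `rel_err`) is what would be compared against a rejuvenation threshold such as the 4% figure reported above.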


Extracellular signals have been shown to impact on alternative pre-mRNA splicing; however, the molecular mechanisms and biological significance of signal-induced splicing regulation remain largely unknown. Here, we report that epidermal growth factor (EGF) induces splicing changes through ubiquitylation of a well-known splicing regulator, hnRNP A1. EGF signaling upregulates an E3 ubiquitin (Ub) ligase adaptor, SPRY domain-containing SOCS box protein 1 (SPSB1), which recruits Elongin B/C-Cullin complexes to conjugate lysine 29-linked polyUb chains onto hnRNP A1. Importantly, SPSB1 and ubiquitylation of hnRNP A1 have a critical role in EGF-driven cell migration. Mechanistically, EGF-induced ubiquitylation of hnRNP A1 together with the activation of SR protein kinases (SRPKs) results in the upregulation of a Rac1 splicing isoform, Rac1b, to promote cell motility. These findings unravel a novel crosstalk between protein ubiquitylation and alternative splicing in EGF/EGF receptor signaling, and identify a new EGF/SPSB1/hnRNP A1/Rac1 axis in modulating cell migration, which may have important implications for cancer treatment.Cell Research advance online publication 13 January 2017; doi:10.1038/cr.2017.7. © 2017 Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences


Zhang Y.,Chinese Academy of Sciences
Energy Policy | Year: 2017

A three-region input–output model was applied in this study to analyze the emission spillover–feedback effects across the eastern, middle, and western regions of China. Results revealed that the interregional trade has important spillover effects (SEs) on the emissions of each region, particularly in the middle and western regions, but the feedback effects are few. Although the eastern regional final demands have a smaller economic SE per unit than those of the middle and western regions in 2002–2010, its emission SE gradually exceeded that of the two other regions. The interregional trade policy has to be enforced in the future, but the emission SEs should be controlled efficiently. Therefore, the central government should continue to implement the policies on the reduction of energy and carbon intensities from the past decade, limit coal consumption, and encourage renewable fuel development. At the same time, the central government and the eastern region can help the middle and western regions control their carbon intensity by providing fiscal, technological, and training assistance. The middle and western regions should set strict admittance standards for energy-intensive plants that transferred from the eastern region. © 2016 Elsevier Ltd
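The spillover calculation in a multi-region input-output model follows the Leontief framework: emissions induced by a final-demand vector y are f ⊙ (I − A)⁻¹ y, and the components that land in other regions are the spillover. A toy sketch with one aggregate sector per region; the coefficient matrix and emission intensities are invented for illustration, not taken from the study:

```python
import numpy as np

# Hypothetical interregional technical-coefficient matrix A (rows/cols:
# eastern, middle, western) and emission intensities f (emissions per
# unit output). All numbers are illustrative only.
A = np.array([[0.20, 0.05, 0.02],
              [0.10, 0.25, 0.04],
              [0.03, 0.06, 0.15]])
f = np.array([0.8, 1.5, 2.0])
L = np.linalg.inv(np.eye(3) - A)          # Leontief inverse

def induced_emissions(final_demand):
    """Emissions generated in each region by a final-demand vector."""
    return f * (L @ final_demand)

# One unit of eastern final demand: element 0 is domestic emissions,
# elements 1 and 2 are the spillover into the middle and western regions.
e = induced_emissions(np.array([1.0, 0.0, 0.0]))
spillover = e[1] + e[2]
```

With higher emission intensities assumed in the interior regions, even modest trade coefficients produce a non-trivial spillover, which mirrors the study's finding that eastern demand increasingly drives middle and western emissions.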


Chen L.-L.,Chinese Academy of Sciences | Chen L.-L.,University of Shanghai for Science and Technology | Yang L.,CAS Shanghai Institutes for Biological Sciences | Yang L.,University of Shanghai for Science and Technology
Trends in Cell Biology | Year: 2017

Alu elements belong to the primate-specific SINE family of retrotransposons and constitute almost 11% of the human genome. Alus are transcribed by RNA polymerase (Pol) III and are inserted back into the genome with the help of autonomous LINE retroelements. Since Alu elements are preferentially located near to or within gene-rich regions, they can affect gene expression by distinct mechanisms of action at both the DNA and RNA levels. In this review we focus on recent advances in how Alu elements are pervasively involved in gene regulation. We discuss the impacts of Alu DNA sequences that are in close proximity to genes, Pol-III-transcribed free Alu RNAs, and Pol-II-transcribed Alu RNAs that are embedded within coding or noncoding RNA transcripts. The recent elucidation of Alu functions reveals previously underestimated roles of these selfish or junk DNA sequences in the human genome. Trends: Primate-specific Alus constitute 11% of the human genome, with >1 million copies, and their genomic distribution is biased toward gene-rich regions. The functions of Alus are highly associated with their sequence and structural features. Alus can regulate gene expression by serving as cis elements. Pol-III-transcribed free Alus mainly affect Pol II transcription and mRNA translation in trans. Embedded Alus within Pol-II-transcribed mRNAs can impact their host gene expression through the regulation of alternative splicing, RNA stability and translation. Nearly half of annotated Alus are located in introns; RNA pairing formed by orientation-opposite Alus across introns promotes circRNA biogenesis. © 2017 Elsevier Ltd.


Hao C.,Chinese Academy of Sciences | Hao C.,University of Chinese Academy of Sciences
Archive for Rational Mechanics and Analysis | Year: 2017

For the free boundary problem of the plasma–vacuum interface to 3D ideal incompressible magnetohydrodynamics, the a priori estimates of smooth solutions are proved in Sobolev norms by adopting a geometrical point of view and some quantities such as the second fundamental form and the velocity of the free interface are estimated. In the vacuum region, the magnetic fields are described by the div–curl system of pre-Maxwell dynamics, while at the interface the total pressure is continuous and the magnetic fields are tangential to the interface, but we do not need any restrictions on the size of the magnetic fields on the free interface. We introduce the “fictitious particle” endowed with a fictitious velocity field in vacuum to reformulate the problem to a fixed boundary problem under the Lagrangian coordinates. The L2-norms of any order covariant derivatives of the magnetic fields both in vacuum and on the boundaries are bounded in terms of initial data and the second fundamental forms of the free interface and the rigid wall. The estimates of the curl of the electric fields in vacuum are also obtained, which are also indispensable in elliptic estimates. © 2017 Springer-Verlag Berlin Heidelberg
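For orientation, the pre-Maxwell div-curl description of the vacuum field and the interface conditions referred to above take the following standard form (notation assumed here: H is the vacuum magnetic field, B the plasma magnetic field, p the plasma pressure, n the interface normal); this is the generic formulation of the plasma-vacuum problem, not quoted from the paper:

```latex
\nabla \times \mathcal{H} = 0, \qquad \nabla \cdot \mathcal{H} = 0
  \quad \text{in the vacuum region},
```

while on the free interface the total pressure is continuous and both fields are tangential:

```latex
p + \tfrac{1}{2}\lvert B \rvert^{2} = \tfrac{1}{2}\lvert \mathcal{H} \rvert^{2},
  \qquad B \cdot n = 0, \qquad \mathcal{H} \cdot n = 0 .
```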


Deng L.,Chinese Academy of Sciences
2016 3rd International Conference on Systems and Informatics, ICSAI 2016 | Year: 2016

The complex spatial and temporal behaviors of long-term solar magnetic activity are an important aspect of solar data analysis, but studying them is not easy in practice owing to the complex features of the solar dynamo process. This paper improves our knowledge of the periodic variations of the F10.7 radio flux and sunspot areas by determining their phase relationship through periodic identification and time series decomposition. Several quantitative analysis approaches, including auto-correlation analysis, cross-correlation analysis, and empirical mode decomposition, are combined to study their periodic behavior, phase relationship, and multi-scale modes. The results indicate that these approaches can decompose a data set into several meaningful components. In sum, these techniques can be used to study the periodic features of solar magnetic activity and to identify its natural modes or components, i.e., high-frequency, mid-frequency, and low-frequency modes. © 2016 IEEE.
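The phase relationship between two activity indices can be found with a lagged cross-correlation, as used above. A minimal sketch with synthetic periodic series; the signal shapes, the 50-sample period, and the 7-sample lag are invented for the demo:

```python
import numpy as np

def lagged_corr(a, b, lag):
    """Pearson correlation of a[t] with b[t + lag]."""
    if lag >= 0:
        x, y = a[:len(a) - lag], b[lag:]
    else:
        x, y = a[-lag:], b[:len(b) + lag]
    return np.corrcoef(x, y)[0, 1]

def best_lag(a, b, max_lag):
    """Lag at which b best matches a; positive lag means b lags a."""
    lags = np.arange(-max_lag, max_lag + 1)
    cc = np.array([lagged_corr(a, b, l) for l in lags])
    return int(lags[np.argmax(cc)]), float(cc.max())

t = np.arange(400)
a = np.sin(2 * np.pi * t / 50)        # e.g. one activity index
b = np.sin(2 * np.pi * (t - 7) / 50)  # second index, lagging by 7 samples
lag, r = best_lag(a, b, max_lag=20)
```

In a full analysis, empirical mode decomposition would first separate each series into its high-, mid-, and low-frequency modes, and the lag analysis would be repeated per mode.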


Yang X.,South China University of Technology | Yang X.,Chinese Academy of Sciences
Applied Microbiology and Biotechnology | Year: 2017

N-acetylglutamate kinase (NAGK) catalyzes the phosphorylation of N-acetylglutamate. In many bacteria, NAGK catalysis is the rate-controlling step in the L-arginine biosynthesis pathway from glutamate to L-arginine and is allosterically inhibited by L-arginine. Considerable data show that the conformational dynamics of NAGKs are essential for their function. Demonstrating the conformational mechanism offers a potential way to improve the yield of arginine. Because mammals lack the NAGK catalysis step in their arginine synthesis route, elucidating the dynamic mechanism can also guide the design of new antibacterial drugs. This paper reviews how dynamics affect the activity of NAGKs and how they are controlled by effectors. X-ray crystallography and modeling data have shown that in NAGKs the structural elements required for inhibitor and substrate binding, catalysis, and product release are highly mobile. It is possible to eliminate the inhibition by arginine and/or block the synthesis of arginine by perturbing the flexibility of NAGKs. The amino acid kinase family is thought to share some common dynamic features; the flexible structural elements of NAGKs have been identified, but the details of the dynamics and the signal transfer pathways are yet to be elucidated. © 2017 Springer-Verlag Berlin Heidelberg


Fan B.,Nanjing University of Posts and Telecommunications | Cong Y.,Chinese Academy of Sciences
Pattern Recognition | Year: 2017

Most multi-task learning based trackers adopt a similar task definition, assuming that all tasks share a common feature set, which does not cover real situations well. In this paper, we define the subtasks from a novel perspective and develop a structured and consistent multi-layer multi-subtask tracker with graph regularization. The tracking task is completed by the collaboration of multi-layer subtasks; different subtasks correspond to the tracking of different parts of the target area, and the correspondences of the subtasks among adjacent frames are consistent and smooth. The proposed model introduces a hyper-graph regularizer to preserve the global and local intrinsic geometric structure among and inside target candidates or training samples, and decomposes the representative matrix of the subtasks into two components: a low-rank component that captures the subtask relationships and a group-sparse component that identifies outlier subtasks. Moreover, a collaborative metric scheme is developed to find the best candidate by considering both discrimination reliability and representation accuracy. We show that the proposed multi-layer multi-subtask learning based tracker is a general model that accommodates most existing multi-task trackers with their respective merits. Encouraging experimental results on a large set of public video sequences justify the effectiveness and robustness of the proposed tracker, which achieves comparable performance against many state-of-the-art methods. © 2017 Elsevier Ltd


Deng L.,Chinese Academy of Sciences
Proceedings - 2016 9th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics, CISP-BMEI 2016 | Year: 2016

Synchronization is a fundamental property of dynamical systems in many scientific fields, used to describe asynchronous behaviour in the time-frequency domain. Statistical analysis of solar time series can yield useful information for characterizing and predicting the long-term variability of the Sun. In this paper, two analysis approaches, wavelet coherence and cross-correlation analysis, are combined to analyze the statistical connection of polar faculae between low and high latitudes for the period from 1951 to 1998. The results indicate that the combined technique is an appropriate choice for studying the asynchronous behavior between the two time series, because classical analysis techniques alone can easily yield incorrect results. © 2016 IEEE.


Two partial human skulls discovered in China may have belonged to the mysterious cousins of the Neanderthals, the extinct ice age humans known as Denisovans. The Denisovans and Neanderthals are known to have a common ancestor that had split from the modern human lineage; in a 2016 study, researchers found nuclear DNA evidence suggesting that this split may have happened 765,000 years ago. The Denisovans have been known only from bits of DNA taken from a partial finger bone found in Denisova Cave in the Altai Mountains of Siberia. In 2010, researchers from the ancient DNA laboratory of the Max Planck Institute in Germany recovered a complete genome of what was previously an unknown type of human from a fragment of a young girl's pinky finger. That sliver of bone served as the first evidence of the Denisovans, a distinct branch of the Homo family tree that mated with both the Neanderthals and modern humans and whose genes continue to live on today among modern Europeans, Asians, and the Melanesians of Papua New Guinea. The Denisovans are believed to have walked the lands of Asia more than 100,000 years ago with tools as sophisticated as those made by humans. Besides the pinky bone of the young girl, three molars found in the same cave in Siberia also point to the existence of the Denisovans. Since the discovery of the Denisovans, however, researchers have found little tangible evidence to prove these archaic humans existed. Now, the newly discovered fossils from China, estimated to be between 105,000 and 125,000 years old, are suspected to be new evidence of the Denisovans. The bones, referred to as "archaic Homo," emerge as prime candidates to show what these extinct human relatives may have looked like.
In a paper published in the journal Science, Zhan-Yang Li, from the Chinese Academy of Sciences in Beijing, and colleagues avoided using the word Denisovans in their report but noted that the bones could have belonged to a new type of human or an eastern variant of the Neanderthals. "Some features are ancestral and similar to those of earlier eastern Eurasian humans, some are derived and shared with contemporaneous or later humans elsewhere, and some are closer to those of Neanderthals," Zhan-Yang Li and colleagues wrote in their study. Although the paper did not mention the Denisovans, other researchers think the bones may have belonged to them. María Martinón-Torres, from University College London, said that the skulls definitely fit what could be expected of a Denisovan. The paleoanthropologist said that the fossils have something with an Asian flavor that is closely related to the Neanderthals. "This would be the combination that one would expect based on the ancient DNA analysis of Denisovans, who were closely related to Neanderthals," said Katerina Harvati, a Neanderthal expert from the University of Tübingen in Germany, who was not involved in the research. Because the investigators have not yet extracted DNA from the skulls, the possibility that these belonged to the Denisovans remains speculation. © 2017 Tech Times, All rights reserved. Do not reproduce without permission.


(Institute of Atmospheric Physics, Chinese Academy of Sciences) A Chinese program examined the impacts of astronomical and Earth-motion factors on climate change. Solar impacts on Earth's climate are most pronounced in the polar and tropical Pacific regions, and monsoon activity plays a crucial role in the propagation of the solar signal between different latitudes.


News Article | May 3, 2017
Site: phys.org

China is under pressure to write its own encyclopaedia so it can guide public thought, according to a statement by the project's executive editor Yang Muzhi published last month on the website of the Chinese Academy of Sciences. He once listed Wikipedia, which is available in China, and Britain's Encyclopaedia Britannica as potential rivals and said the project aims to exceed them, according to an article he wrote late last year. The project, which will be under the guidance of the state-owned China Publishing Group, "must have Chinese characteristics," he wrote, adding it would be a "symbol of the country's cultural and technological development" and increase its soft power and international influence. Unlike Wikipedia—and its Chinese version Baidu Baike—which are written by volunteers and are in a constant state of revision, the new project, which was approved in 2011, will be written entirely by professionals. So far over 20,000 scholars and academics have been enlisted to compile the project, which aims to have more than 300,000 entries by its 2018 launch. The new encyclopaedia will be based on a previous printed version, published in book form in 1993. A second edition, which can be accessed through a special terminal, was released in 2009. The newest version will be released online before being published in a bound edition. China has over 700 million internet users, but a 2015 report by US think tank Freedom House found that the country had the most restrictive online use policies of the 65 nations it studied, ranking below Iran and Syria. It has maintained that its various forms of web censorship—collectively known as "The Great Firewall"—are necessary for protecting its national security. Sites blocked due to their content or sensitivity, among them Facebook and Twitter, cannot be accessed in China without special software that allows users to bypass the strict controls.
Beijing issued a new restriction for online freedoms, requiring Chinese Internet users to provide their real names when accessing online news sources. The new restriction will come into effect on June 1.


News Article | April 17, 2017
Site: co.newswire.com

Distributed cloud computing platform iExec has announced a roll call of advisors that includes Aurelien Menant of Gatecoin and Marco Streng of Genesis Mining. The list of advisors covers a broad range of fields from security and legal to business and industry. Together, they will ensure the rapid and sustainable growth of iExec.

iExec Advisors & Partnerships
Business Advisor: Aurélien Menant, founder and CEO, Gatecoin
Business Advisor: Han Feng, President, Digital Assets Coalition of Asia (DACA)
Financial Support: HyperChain Capital, Fintech Blockchain Group (FBG), Wanxiang Blockchain Lab
Industry Partner: Marco Streng, founder and CEO, Genesis Mining
Industry Partner: Christophe Perron, CEO, Stimergy
Industry Advisor: Feller Gao, Strategy Cooperation Director, Huawei Technology
Academic Partner: CAS, Chinese Academy of Sciences
Academic Partner: CNRS, French National Research Center
Academic Partner: INRIA, French National Institute of Computer and Automation
Legal Advisor: Simon Polrot, Lawyer, Fieldfisher
Investment Platforms: ICO365, imToken, ICOage

iExec are very pleased to welcome their newest industry partner, Marco Streng, CEO of Genesis Mining. Marco founded Genesis Mining in 2013 and has grown it to become the world's largest cloud mining provider in Bitcoin and Ethereum. Genesis Mining operates the largest GPU farms in the world which, together with iExec, will provide true high-performance computing capabilities to DApps. The team also welcomes the support of Aurélien Menant, founder and CEO of Gatecoin and founding member of the Bitcoin Association of Hong Kong, as business advisor. Aurélien Menant has provided unique insight into the blockchain market, advising the co-founders on market perspectives and investments. Gatecoin will list the RLC token straight after the ICO. He is joined by Han Feng, President of the Digital Assets Coalition of Asia (DACA).
The DACA Association is committed to serving Bitcoin and blockchain development in Asia and was originally initiated by OKCoin, BTCC, Huobi and BTCtrader, among others. The first service provider on the iExec network will be Stimergy, which partnered with iExec back in December 2016. The start-up, founded by Christophe Perron, designs and produces distributed data-centers. Stimergy will be the first to accept the RLC token through the iExec platform in return for its cloud services. Furthermore, the involvement of Feller Gao, strategy director at Huawei Technology, is indicative of the mainstream potential of blockchain cloud computing. iExec is built on technology from the field of distributed computing developed at the French National Institute of Computer and Automation (INRIA) and the French National Research Center (CNRS). Both research centers will now partner with iExec, bringing a range of benefits. CNRS has 32,000 employees and as such is the largest governmental research organisation in France and the largest fundamental science agency in Europe. INRIA has 3,500 researchers across 179 teams and is a research institute focused on computer science and applied mathematics. iExec also enjoys close ties with the Chinese Academy of Sciences (CAS) and has been through the accelerator program of Tsinghua University in Beijing. CAS will join INRIA and CNRS in the role of academic partners, cooperating with iExec to tackle research challenges such as distributed system theory, smart contract formal verification, and applied cryptography. iExec also recently won the Blockgrant-X program and received financial support from Wanxiang Blockchain Lab. In addition, the team benefits from the expertise of Simon Polrot, the leading expert in France on the legal aspects of blockchain technologies. Simon Polrot has a deep understanding of the regulatory aspects of token crowdsales.
He has participated in parliamentary blockchain events and provides his expertise to the French market regulation authority (AMF). iExec will launch a crowdfund this week to raise funds for the development and launch of its distributed cloud platform. Participants will receive RLC tokens that they can use to interact with the iExec cloud network. iExec is cooperating with three investment platforms specialised in token crowdsales: ICO365, ICOage and imToken. imToken is a next-generation Ethereum wallet that will accept the RLC token and allows its users to participate in the RLC crowdsale. The crowdsale opens on 12 April 13:00 UTC. To find out more visit http://crowdsale.iex.ec


News Article | May 2, 2017
Site: www.eurekalert.org

An international team of researchers at the University of Calgary and the Nanjing Institute of Geology and Palaeontology of the Chinese Academy of Sciences has shown just how precarious the recovery of life was following Earth's greatest extinction event, about 251.9 million years ago. A site near Shangsi in China's Sichuan Province highlights a short-lived community of organisms that may hold clues to forces shaping our planet today and into the future. In a paper published online this Monday in Geology entitled "Precarious ephemeral refugia during the earliest Triassic", international scientists highlight an assemblage including microbial mats, trace fossils, bivalves, and echinoids that represents a refuge in a moderately deep-water setting. "Refuge" describes an ecosystem that acts as a sanctuary for organisms during and immediately following times of environmental stress. The echinoids normally live in shallow-water environments, but in this case they sought refuge from lethally hot surface waters. The culprit was global warming associated with massive volcanic eruptions in Siberia, but modern-day events may lead to similar changes in today's oceans. The community was short-lived, and was extinguished by a relatively minor ecologic disturbance, as determined from the geochemistry of the host rocks, only to be replaced by a low-diversity community of 'disaster taxa', opportunistic organisms that thrive while others go extinct. The team envisages the earliest Triassic ocean floor as a shifting patchwork of temporary or ephemeral refugia, in which some communities survived and others died off depending on local conditions. As conditions improved throughout the Early Triassic, these communities no longer had to cling to life in ephemeral refugia, but could expand into normal habitats around the world.
The echinoids at this site are the ancestors of a diverse group of modern echinoids, or sea urchins, that live in reef communities, on rocky shorelines and on sandy shelves today. The study will help bring about a deeper understanding of how modern oceans might respond to intense global warming due to natural or anthropogenic effects. It could inform the management of our oceanic resources as they continue to be affected by environmental stressors. The rock record is cryptic, but it records events that have run their full course. If we can decipher the story, it can better inform us about what might happen in the future as changes to our environment continue to occur. This study was supported by the National Natural Science Foundation of China, the Strategic Priority Research Program (B) of the Chinese Academy of Sciences, and a Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery Grant.


News Article | April 27, 2017
Site: en.prnasia.com

SINGAPORE, April 27, 2017 /PRNewswire/ -- As Guangzhou prepares to host the 2017 Fortune Global Forum later this year, key representatives from the municipal government hosted a roadshow in Singapore to discuss openness, innovation and collaboration between the two commercial hubs. At the roadshow, two Letters of Intent (LOI) were signed between Sino-Singapore International Joint Research Institute and Chinese Academy of Sciences Holdings Co., Ltd; and between Guangzhou Gas Group, Guangzhou Port Group and Royal Golden Eagle, outlining agreements on the Sino-Singapore collaboration on technology and intellectual property rights. "Guangzhou and Singapore share many similarities, including city scale, cultural background and languages - in fact Singapore even hosted the first Fortune Global Forum years ago in 1995,"  Cai Chaolin, Guangzhou Vice Mayor, said. "What's more, our forward-thinking outlook and cooperation through the years have enabled us to pursue meaningful projects together that enrich our economies. What lies ahead is exciting, and we look forward to more collaborations." Today Guangzhou is among the most important business centers in China and a gateway to the outside world. The economically dynamic region is built on a centuries-old foundation of innovation and wisdom, and continues to thrive on openness and modernization. In the last five years, Guangzhou's gross domestic product (GDP) saw an average increase of 10.1 percent, with the local service industry's growth approaching that of developed economies, representing 66.77 percent of the total GDP in 2016. Guangzhou has continued to increase its investment in advancing technologies and talent, and to improve the convenience, effectiveness and efficiency of trade facilitation and services. As a result, the relationship with, and opportunities for, Singapore in the region have been further expanded. 
In recent years, Guangzhou has introduced a series of policies to encourage even more innovation in the region, and is providing on-going support for enterprises, start-ups and talent. Guangzhou also continues to further optimize the local business environment and reduce the burden on foreign enterprises through governmental self-reforms, including the streamlining of administrative examinations and approval permissions. To develop a more business-friendly environment, Guangzhou has reduced administrative fees, set up government funds, implemented policies to lower insurance rates, and continues to provide incentives for foreign corporations. During his visit to Singapore, Guangzhou Vice Mayor Cai Chaolin met key leaders and representatives in the region to discuss the next chapter for Guangzhou and Singapore, including open perspectives, innovative approaches, and jointly building an international innovative hub for technology. He was joined by representatives from the Guangzhou government, the Chinese Embassy in Singapore, International Enterprise (IE) Singapore, A*STAR, the Intellectual Property Office of Singapore, the Nanyang Technological University, the Urban Renewal Authority, executives from the Fortune Global Forum, the Chairman of Ascendas-Singbridge, Mr. Wong Kan Seng, and corporate representatives across different industries. "With more than 2,230 years of history, we are surely an ancient city," said Vice Mayor Cai. "But today, all eyes are on the future and what tremendous things are still to come for the region."


News Article | March 20, 2017
Site: www.techtimes.com

Mushrooms, or Agaricomycetes, are fungal fruiting bodies that typically develop as a domed cap atop a stalk, with gills on the underside of the cap. In an exciting discovery, mushrooms that have been well preserved for 99 million years have been unearthed along with mushroom-feeding beetles about 125 million years old. Both were preserved surprisingly well inside amber, a fossilized tree resin. A research group led by Prof. Huang Diying from the Nanjing Institute of Geology and Palaeontology (NIGPAS) of the Chinese Academy of Sciences extracted a variety of gilled mushrooms and mycophagous rove beetles from the Burmese amber. The reported beetles belong to Oxyporinae, whose modern members have an obligate association with soft-textured mushrooms. Mushrooms are among the most prominent, common, and morphologically varied kinds of fungi. Most mushroom fruiting bodies are short-lived, which is why fossils of these fungi are so rare. One of the discovered species of mushroom is Palaeoagaracites antiquus, which is about 99 million years old. The other is Archaeomarasmius leggetti, which is nearly 90 million years old. The remaining mushroom fossils come from Miocene Dominican amber, which is about 20 million years old. The mushrooms have been preserved so well because they were encased in amber, and they can be divided into four groups. A stalk along with a cap and gills is visible in most of the amber specimens. The oldest kind of amber mushroom ever found is a million years older than the newly discovered mushrooms. The newly discovered fossils are part of the 111,000 fossils collected from Burma that are kept at the Nanjing Institute of Geology and Palaeontology in China. The findings of the research team portray an ancient ecological community that included varied mushrooms and beetles.
The preserved beetles have large mandibles, enlarged apical labial palpomeres, and a distinctive sensory organ system. According to the study, the beetles were discovered inside amber from northeastern China and Burma. The study also shows that these preserved fossils share the features of modern beetles, including their mushroom-feeding habit. The researchers were surprised by how well the fossils have been preserved and said they looked similar to present-day mushrooms. The results of the study were published in the journal Nature Communications. © 2017 Tech Times, All rights reserved. Do not reproduce without permission.


News Article | May 5, 2017
Site: www.eurekalert.org

Researchers at the Institute of Acoustics (IOA) of the Chinese Academy of Sciences have designed and fabricated an underwater acoustic carpet cloak using transformation acoustics, a scientific first. The research was published online in Scientific Reports on April 6. An acoustic cloak is a material shell that can control the propagation direction of sound waves to make a target undetectable in an acoustic system. The carpet cloak modifies the acoustic signature of the target and mimics the acoustic field obtained from a reflecting plane, so that the cloaked target is indistinguishable from the reflecting surface. The field of transformation acoustics focuses on the design of new acoustic structures and shows how to control the propagation of acoustic waves. The parameters of the cloak shell can be derived through transformation acoustics. However, in most cases, these parameters are too complex for practical use. To solve this problem, YANG Jun and his IOA team adopted a scaling factor and simplified the structure of the carpet cloak with only a modest impedance mismatch. The research team then used layers of brass plates featuring small channels filled with water to construct the model cloak. This material possesses an effective anisotropic mass density in the long-wavelength regime. The structure of the carpet cloak, comprised of layered brass plates, is therefore simplified at the cost of some impedance mismatch. "The carpet cloak has a unit cell size of about 1/40 of the wavelength, making it able to control underwater acoustic waves at the deep subwavelength scale," said YANG Jun. The proposed carpet cloak has shown good performance in experimental results across a wide frequency range. In tests, a short Gaussian pulse propagates towards a target bump covered with the carpet cloak; the scattered wave then returns in the backscattering direction. The cloaked object successfully mimics the reflecting plane and is imperceptible to sound detection.
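For readers unfamiliar with the method, the transformation-acoustics recipe referred to above can be sketched in its standard textbook form (these are the general relations, not the specific parameters of the IOA device): a coordinate map with Jacobian matrix \(\Lambda\) replaces the background density \(\rho_0\) and bulk modulus \(\kappa_0\) inside the cloak with

```latex
% Standard transformation-acoustics relations:
% anisotropic mass density tensor and scalar bulk modulus of the cloak region
\boldsymbol{\rho}' = \det(\Lambda)\,\Lambda^{-\mathsf{T}}\,\rho_0\,\Lambda^{-1},
\qquad
\kappa' = \det(\Lambda)\,\kappa_0 .
```

The anisotropic density tensor is what the alternating brass-and-water layers approximate in the long-wavelength limit; the scaling factor mentioned above trades exactness in these parameters for a structure that can actually be built, at the cost of a modest impedance mismatch.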
Previously, the IOA researchers had designed and fabricated a carpet cloak in air. The results of this earlier research were published in the Journal of Applied Physics (Volume 113, Issue 2, January 2013).


The world’s most popular drink (after water) is under threat. We already know much about the threat of climate change to staple crops such as wheat, maize and rice, but the impact on tea is just coming into focus. Early research indicates that tea grown in some parts of Asia could see yields decline by up to 55% due to drought or excessive heat, and the quality of the tea is also falling. The intensive use of pesticides and chemical fertilisers in tea plantations has also led to soil degradation at an average annual rate of 2.8%. This also causes chemical runoff into waterways, which can lead to serious problems for human health and the environment. However, hope may be on the horizon now that scientists at the Kunming Institute of Botany at the Chinese Academy of Sciences have sequenced the entire tea genome. Mapping the exact sequence of DNA in this way provides the foundation for extracting all the genetic information needed to help breed and speed up development of new varieties of the tea plant. And it could even help improve the drink’s flavour and nutritional value. In particular, the whole tea tree genome reveals the genetic basis for tea’s tolerance to environmental stresses, pest and disease resistance, flavour, productivity and quality. So breeders could more precisely produce better tea varieties that give higher crop yields and use water and nutrients more efficiently. And they could do this while widening the genetic diversity of tea plants, improving the overall health of the tea plant population. This is also an important milestone for scientists because it provides a deeper understanding of the complex evolution and the functions of key genes associated with stress tolerance, tea flavour and adaptation. The new tea genome is very large, containing nearly 37,000 genes, and is more than four times the size of the coffee plant genome.
The process of evolution by natural selection has already helped the tea plant develop hundreds of genes related to resisting environmental stress from drought and disease. These genes act as molecular markers that scientists can identify when selecting plants for use in breeding. This will allow them to be more certain that the next generation of plants they produce will have the genes, and so the traits, they want, speeding up the breeding process. Sequencing the genome also raises the possibility of using genetic modification (GM) technologies to turn on or enhance desirable genes (or turn off undesirable ones). The same principles could also be used to enhance the nutritional or medicinal value of certain tea varieties. The genome sequence includes genes associated with biosynthesis. This is the production of the proteins and enzymes involved in creating the compounds that make tea so drinkable, such as flavonoids, terpenes and caffeine. These are closely related to the aroma, flavour and quality of tea, so genetic breeding techniques could help make tea more flavourful and nutritious. For example, we could remove the caffeine biosynthetic genes from the tea plant to help breed low-caffeine or caffeine-free varieties. By boosting certain compounds at the same time, we could make tea healthier and develop entirely new flavours to make caffeine-free tea more appealing. An estimated 5.56m tons of tea is commercially grown on more than 3.8m hectares of land (as of 2014). And its huge cultural importance, as well as its economic value, means securing a sustainable future for tea is vitally important for millions of people. So the first successful sequencing of the tea genome is a crucial step to making tea plants more robust, productive and drinkable in the face of massive environmental challenges. This article was originally published on The Conversation. Read the original article.
Chungui Lu does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.


News Article | April 17, 2017
Site: www.eurekalert.org

As the most abundant gas in Earth's atmosphere, nitrogen has been an attractive option as a source of renewable energy. But nitrogen gas -- which consists of two nitrogen atoms held together by a strong triple covalent bond -- doesn't break apart under normal conditions, presenting a challenge to scientists who want to transfer the chemical energy of the bond into electricity. In the journal Chem on April 13, researchers in China present one approach to capturing atmospheric nitrogen that can be used in a battery. The "proof-of-concept" design works by reversing the chemical reaction that powers existing lithium-nitrogen batteries. Instead of generating energy from the breakdown of lithium nitride (Li3N) into lithium and nitrogen gas, the researchers' battery prototype takes in atmospheric nitrogen under ambient conditions, which reacts with lithium to form lithium nitride. Its energy output is brief but comparable to that of other lithium-metal batteries. "This promising research on a nitrogen fixation battery system not only provides fundamental and technological progress in the energy storage system but also creates an advanced N2/Li3N (nitrogen gas/lithium nitride) cycle for a reversible nitrogen fixation process," says senior author Xin-Bo Zhang, of the Changchun Institute of Applied Chemistry, part of the Chinese Academy of Sciences. "The work is still at the initial stage. More intensive efforts should be devoted to developing the battery systems." This work is financially supported by the Ministry of Science and Technology of China and the National Natural Science Foundation of China. Chem, Ma and Bao et al.: "Reversible Nitrogen Fixation Based on Rechargeable Lithium-Nitrogen Battery for Energy Storage" http://www.cell.com/chem/fulltext/S2451-9294(17)30129-8 Chem (@Chem_CP) is the first physical science journal published by Cell Press.
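The N2/Li3N cycle described above can be written out as a balanced reaction (the stoichiometry follows from the chemical formula of lithium nitride; the article itself does not spell out the equation):

```latex
% Discharge (nitrogen fixation): atmospheric N2 is reduced and combines with lithium
\mathrm{N_2 + 6\,Li \;\longrightarrow\; 2\,Li_3N}
% Charge: the reverse reaction regenerates lithium metal and releases nitrogen gas
\mathrm{2\,Li_3N \;\longrightarrow\; 6\,Li + N_2}
```

Existing lithium-nitrogen cells generate energy by running the second reaction; the prototype's novelty is running the first direction reversibly, so that each discharge fixes nitrogen from the air.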
The sister journal to Cell, Chem provides a home for seminal and insightful research and showcases how fundamental studies in chemistry and its sub-disciplines may help in finding potential solutions to the global challenges of tomorrow. Visit http://www. . To receive Cell Press media alerts, contact press@cell.com.


HONG KONG, April 28, 2017 /PRNewswire/ -- Digital China Holdings Limited ("DC Holdings" or the "Group"; stock code: 00861.HK, 910861.TW), China's largest integrated IT service provider, is pleased to announce that the signing ceremony of China Healthcare Big Data Development Co., Ltd. (CHBDDC) was held in Beijing. DC Holdings initiated the establishment of CHBDDC. Guo Wei, Chairman of DC Holdings, and Jin Xiaotao, Deputy Director of the National Health and Family Planning Commission, delivered speeches at the ceremony. Lu Jiang, Vice Mayor of Xiamen, and representatives of the Changzhou Municipal Government, the China Population and Development Research Center, Peking Union Medical College Hospital, Peking University, and Philips (China) Investment Co., Ltd attended the signing ceremony. Under the guidance of the National Health and Family Planning Commission, CHBDDC was initiated by DC Holdings and co-sponsored by Industrial and Commercial Bank of China Limited, Bank of China Limited, Chinese Academy of Sciences Holding Co., Ltd., China Telecom Corporation Limited, China Cinda Asset Management Co., Ltd., Shougang Corporation, Guangzhou Urban Construction Investment Group Co., Ltd., Wonders Information Co., Ltd., Neusoft Corporation, Inspur and Ylz Information Technology, Bringspring Science and Technology Co., Ltd. and many other central enterprises, state-owned enterprises and well-known listed companies. The new company will respond to the country's policy guidelines to promote the "Interconnection, Open Sharing" of Big Data in healthcare, promote supply-side structural reform in healthcare, and fulfill the mission of "Digitalised China" for the benefit of the people. Based on the top-level design for industrial parks of healthcare Big Data and combined with the development plan of "Healthy China", CHBDDC will undertake the construction of national and local industrial parks of healthcare Big Data.
CHBDDC will focus its efforts on the construction of Big Data centers, Big Data platforms, medical services, healthcare services, integrated management, precision medicine, precision insurance, insurance audit, medical payment and other key areas, to promote the full layout and construction of industrial parks of healthcare Big Data, and to become a new support for medical reform and a new driver of the local economy. The new company will serve as a platform to integrate the relevant advantages of state-owned enterprises and listed enterprises: to invest in and operate the National Healthcare Big Data Center and industrial parks so as to build a service system of healthcare Big Data under the principles of "Government-led, Commercial Operation and Joint Innovation for Win-win"; to open up the data of the whole industry chain to build a Sm@rt Big Data ecology; and to utilize financial means to promote the incubation and cultivation of the health industry, build a large healthcare data ecosystem and develop the construction of healthcare Big Data. DC Holdings upholds its mission to drive the "Digitalised China". Combining technology and capital, DC Holdings has made cloud computing and Big Data its core capabilities. The Company is working towards a transformation to cloud computing and Big Data, based on the ecological circle created through an internal bottom-up innovation system and external investment in M&As, with Sm@rt City, modern agriculture, precision medicine and intelligent manufacturing as core business areas. The integration of healthcare resources and the building of an integration platform by DC Holdings will fully demonstrate the benefits of healthcare Big Data. As one of the key parts of DC Holdings' core business areas, Digital China Health is determined to build the top brand of healthcare Big Data in China by providing comprehensive and accurate information services and cancer-related data services.
About Digital China Holdings Limited Digital China Holdings Limited ("DC Holdings", Stock Code: 0861.HK) was listed on the Main Board of The Stock Exchange of Hong Kong in 2001 following a successful spin-off from the then Legend Group in 2000. It has developed its capital platforms across Mainland China, Hong Kong and Taiwan through the following four listed companies – DC Holdings, Digital China Information Service Company Ltd., Digiwin Software Company Ltd., and HC International, Inc. Their combined market capitalization reaches nearly HK$50 billion. Since its listing 16 years ago, DC Holdings has adhered to the mission of driving the "Digitalization in China". With the corporate value of "Commitment, Passion and Innovation", the Company evolved from China's largest IT product distributor into the largest integrated IT services provider and then the most influential Sm@rt City solutions provider in China. According to its latest development strategy, DC Holdings will capture the opportunities arising from the "Internet +" strategy and leverage on its technological strengths and capital resources to drive breakthroughs in Sm@rt City, precision healthcare, modern agriculture and Sm@rt manufacturing based on the Cloud computing and Big Data technology. With comprehensive innovation mechanism and multi-layer incubation system, the Company is determined to become a genuine innovative enterprise. For additional information about DC Holdings, please visit the Group's website at www.dcholdings.com.hk. 
For investor and media inquiries:
Charles Chan, PRChina Limited, Tel: 852-2522-1838, Email: ckchan@prchina.com.hk
Emma Liang, PRChina Limited, Tel: 852-2522-1838, Email: eliang@prchina.com.hk
Digital China Holdings Limited, Tel: 852-3416-8000, Email: ir@digitalchina.com
To view the original version on PR Newswire, visit: http://www.prnewswire.com/news-releases/dc-holdings-initiates-the-establishment-of-china-healthcare-big-data-development-co-ltd-to-construct-the-healthcare-big-data-ecosystem-300447914.html


News Article | April 17, 2017
Site: www.eurekalert.org

Eusocial insects, such as ants, social wasps and bees, and termites, include some of the most ecologically ubiquitous terrestrial animals. The nests of these insects are well protected and provide a safe, communal space for storing resources and producing brood, so the nests are often cohabited by various highly specialized symbionts that take advantage of the abundant resources and protection inside. Recently, a research team led by Dr. CAI Chenyang and Prof. HUANG Diying from the Nanjing Institute of Geology and Palaeontology (NIGPAS) of the Chinese Academy of Sciences reported the oldest horseshoe-crab-shaped, obligate termite-loving rove beetles from mid-Cretaceous Burmese amber. These fossils represent the oldest known termitophiles, which were able to hack into termite nests to exploit their controlled physical conditions and "steal" the plentiful resources (e.g., fungi) inside. The discovery reveals that ancient termite societies were quickly invaded by beetles about 99 million years ago. Termitophiles, symbionts that live in termite nests, include a wide range of morphologically and behaviorally specialized organisms. Understanding the early evolution of termitophily is challenging due to a scarcity of fossil termitophiles, with all known reliable records coming from the Miocene Dominican and Mexican ambers (approximately 19 million years ago). Mesozoic termitophiles are of great significance for understanding the origin of eusocial termite societies and the early evolution of specialized termitophily. To integrate into their hosts' societies, termitophilous beetles have repeatedly evolved physogastry (swollen abdomens) and limuloid (horseshoe-crab-shaped) body shapes, the two principal forms. Both morphological adaptations have arisen convergently many times in beetles (Coleoptera) as well as in flies (Diptera).
The peculiar fossil rove beetle, named Cretotrichopsenius burmiticus Cai et al., 2017, exhibits the characteristic features of the modern aleocharine tribe Trichopseniini, including the articulation of the hind leg whereby the coxae are fully fused and incorporated into the metaventrite. Cretotrichopsenius burmiticus has a protective horseshoe-crab-shaped body form typical of many modern termitophiles, with a concealed head and antennae and strong, posteriorly directed abdominal setae. The discovery represents the earliest definitive termitophiles, pushing back the fossil record of termitophiles by 80 million years. Recent species of Trichopseniini are usually associated with derived neoisopteran termites of Rhinotermitidae, and less frequently with Termitidae. Interestingly, some trichopseniines are known to live within the nests of the basal-most termites (Mastotermitidae) and drywood termites (Kalotermitidae). Because host specificity is rather low in extant trichopseniines, it is likely that Cretotrichopsenius was associated with the variety of termite groups known from Burmese amber. The result was published in Current Biology on April 13th, 2017. This study was jointly supported by the Chinese Academy of Sciences, the National Natural Science Foundation, the Natural Sciences Foundation of Jiangsu Province, and the Ministry of Science and Technology of China.


News Article | April 19, 2017
Site: www.prweb.com

On 17 April 2017, Chinese geneticists reported that major mutations in vitamin D-related genes are much more common in autism subjects than in controls: Li et al. Vitamin D-related genes are subjected to significant de novo mutation burdens in autism spectrum disorder. American Journal of Medical Genetics Part B: Neuropsychiatric Genetics, April 17, 2017. Dr. Li and colleagues found multiple mutations of vitamin D-related genes (VDRG), writing that this could be part of the genetic mechanism "underlying ASD pathogenesis." A meta-analysis found vitamin D deficiency is much more common in autistic children than in typically developing children, which had been thought to be due to less sun exposure. But vitamin D levels in autistic individuals are already lower at 3 months' gestation, at birth and at age six, so sun exposure does not explain the deficit. Also, two open-label trials and a randomized controlled trial found that high-dose vitamin D has a treatment effect in established autism. For details of these vitamin D and autism papers, contact Dr. Cannell below. Dr. Li and colleagues concluded that low vitamin D in autism is due to genetics. They found that de novo mutations (new mutations arising in the egg or sperm that were not present in the parents) were much more likely in vitamin D-related genes. This may explain why autistic children have such low vitamin D levels and why vitamin D may help treat and/or prevent autism. It is important to understand that even if you inherit low vitamin D levels, sunshine or supplements may overcome the genetics. To contact the authors: Dr. Zhong Sheng Sun, Beijing Institutes of Life Science, Chinese Academy of Sciences, Beijing, China. Email: sunzs(at)biols(dot)ac(dot)cn. Kun Xia, the State Key Laboratory of Medical Genetics, School of Life Sciences, Central South University, Changsha, China.
Email: xiakun(at)sklmg.edu(dot)cn This press release was sponsored by: John Cannell MD Vitamin D Council 1411 Marsh Street, Ste 203 San Luis Obispo, CA 93401 jjcannell(at)vitaminDcouncil(dot)org 805 439-1075


Shallow cumulus (SCu) clouds play an important role in the global redistribution of water and energy and in the transport of surface heat, moisture and momentum to the free troposphere. SCu clouds, or fair-weather cumuli, are characterized by their small size, relatively weak convection, and lack of precipitation, which makes them significantly different from cumulus congestus and deep convective clouds. SCu clusters can often be observed in summer over the Inner Mongolia Grassland (IMG), the largest grassland in China; yet few studies of SCu in this region have been conducted. Recently, a team led by Prof. Hongbin CHEN from the Institute of Atmospheric Physics, Chinese Academy of Sciences, provided an initial insight into the features of SCu over the IMG. An intensive radiosonde experiment was performed in the summer of 2014, and the findings, published in Advances in Atmospheric Sciences (Shi et al., 2017), constitute the first report of SCu observations over this region. "We made some interesting discoveries regarding shallow cumulus over the IMG," explains Hongrong Shi, the first author of the paper. "The cloud base height of 3.4 km and cloud top height of 5 km over this region were far in excess of those over the sea, but were relatively close to those of the Southern Great Plains in the United States." Further analysis indicated that the formation and maintenance of SCu were a response to wind shear, subsidence, surface forcing and the development of the boundary layer. CHEN, the corresponding author, comments further that "...Although some interesting features associated with shallow cumulus have been revealed by the analysis of the intensive radiosonde measurements, we should keep in mind that this is a preliminary study, since the results were mainly based on one case. Further studies of shallow cumulus in this specific key region are still required.
A combination of radiosonde and satellite measurements, as well as model simulations, would shed new light on the formation and maintenance of these clouds".


The highest and largest plateau in the Northern Hemisphere, the Tibetan Plateau (TP), lies in the subtropical region of Asia. The air density above the TP is only about 60% of its sea-level value. In addition, because radiative processes over the plateau, especially in the boundary layer, differ significantly from those over low-altitude regions, the thermal processes over the TP are quite distinctive. Through its special thermodynamic and dynamic effects, the TP, together with its adjacent Iranian Plateau (IP), has significant impacts on the circulation and climate over the plateaus as well as the adjacent region and the globe. However, the interaction and feedback among the heat sources of the TP and IP and the circulation remain unclear. Recent research papers have made new progress on this issue. The study was conducted by Professors Wu Guoxiong and Liu Yimin and their students from the Institute of Atmospheric Physics, Chinese Academy of Sciences. With theoretical diagnostic analysis and regional-model simulations, the researchers revealed the physical processes of interaction and feedback between two types of summertime heating: the surface sensible heating and condensation heating over the TP, and the surface sensible heating over the Tibetan-Iranian Plateaus. They also discovered how this interaction influences the vertical thermodynamic structure near the tropopause over Asia and the global atmospheric circulation. The atmosphere can maintain a steady state only by obtaining external energy, particularly through surface sensible heating and evaporation-condensation heating. This study showed that TP surface sensible heating can generate convective precipitation over the southern and eastern TP, whereas precipitation over the TP can reduce the in situ surface sensible heating. This indicates the existence of a feedback, or interaction, between the two types of diabatic heating over the TP.
Furthermore, they documented that the surface sensible heating fields over the two plateaus not only influence each other but also feed back on each other. IP surface sensible heating can reduce the surface sensible heating and increase the condensation heating over the TP, whereas TP surface sensible heating can increase IP sensible heating, thereby establishing a quasi-equilibrium among the surface sensible heating and condensation heating over the TP, the IP surface sensible heating, and the vertical motion of the atmosphere. A so-called Tibetan-Iranian Plateau coupling system (TIPS) is thus constructed, which influences the atmospheric circulation (Figure). "The interaction between surface sensible heating and latent heating over the TP plays a leading role in the TIPS," said Prof. Wu. The influences of IP and TP surface sensible heating on other regions partly superimpose and partly offset each other; their combined influence represents the major contribution to the convergence of water vapor transport in the Asian subtropical monsoon region. In addition, the heating of the TIPS increases the upper-tropospheric temperature maximum and lifts the tropopause, cooling the lower stratosphere. Combined with the large-scale thermal forcing of the Eurasian continent, the TIPS produces a strong anticyclonic circulation and the South Asian High, which warms the upper troposphere and cools the lower stratosphere, thereby affecting regional and global weather and climate. The results improve understanding of the unique features of TP climate dynamics and will also help regional weather and climate prediction. The study was funded by the Key Program and the Integration Program of the Major Research Plan of the National Natural Science Foundation of China (Nos. 91437219 and 91637312).
Liu Y M, Wang Z Q, Zhuo H F, Wu G X. 2017. Two types of summertime heating over Asian large-scale orography and excitation of potential-vorticity forcing II: Sensible heating over Tibetan-Iranian Plateau. Science China Earth Sciences, 60(4): 733-744. Wu G X, Zhuo H F, Wang Z Q, Liu Y M. 2016. Two types of summertime heating over the Asian large-scale orography and excitation of potential-vorticity forcing I: Over Tibetan Plateau. Science China Earth Sciences, 59(10): 1996-2008.


News Article | April 17, 2017
Site: www.greencarcongress.com

As the most abundant gas in Earth's atmosphere, nitrogen has been an attractive option as a source of renewable energy. But nitrogen gas—which consists of two nitrogen atoms held together by a strong, triple covalent bond—doesn't break apart under normal conditions, presenting a challenge to scientists who want to transfer the chemical energy of the bond into electricity. Now, researchers in China have developed a rechargeable lithium-nitrogen (Li-N₂) battery with the proposed reversible reaction 6Li + N₂ ⇌ 2Li₃N. The assembled N₂ fixation battery system, consisting of a Li anode, an ether-based electrolyte, and a carbon cloth cathode, shows a promising electrochemical faradaic efficiency (59%). The "proof-of-concept" design, described in an open-access paper in the journal Chem, works by reversing the chemical reaction that powers existing lithium-nitrogen batteries. Instead of generating energy from the breakdown of lithium nitride (Li₃N) into lithium and nitrogen gas, the researchers' battery prototype runs on atmospheric nitrogen in ambient conditions, which reacts with lithium to form lithium nitride. Its energy output is brief but comparable to that of other lithium-metal batteries. Although it constitutes about 78% of Earth's atmosphere, N₂ in its molecular form is unusable in most organisms because of its strong nonpolar N≡N covalent triple-bond energy, negative electron affinity, high ionization energy, and so on. In terms of energy efficiency, the honorable Haber-Bosch process, which was put forward more than 100 years ago, is the most efficient process for producing the needed N fertilizers from atmospheric N₂ in industrial processes.
However, the energy-intensive Haber-Bosch process is inevitably associated with major environmental concerns under high temperature and pressure, leaving almost no room for further improvement by industry optimization. … Inspired by rechargeable metal-gas batteries such as Li-O₂, Li-CO₂, Li-SO₂, Al-CO₂, and Na-CO₂ (which have attracted much attention because of their high specific energy density and ability to reduce the gas constituent), research on Li-N₂ batteries has not seen any major breakthroughs yet. Although Li-N₂ batteries have never been demonstrated in rechargeable conditions, the chemical process is similar to that of the previously mentioned Li-gas systems. During discharging reactions, the injected N₂ molecules accept electrons from the cathode surface, and the activated N₂ molecules subsequently combine with Li ions to form Li-containing solid discharge products. From theoretical calculations, the proposed Li-N₂ batteries show an energy density of 1,248 Wh kg⁻¹, which is comparable to that of rechargeable Li-SO₂ and Li-CO₂ batteries. The research team demonstrated that a rechargeable Li-N₂ battery is possible under room temperature and atmospheric pressure via the reversible battery reaction given above (6Li + N₂ ⇌ 2Li₃N). The team investigated the use of Ru-CC and ZrO₂-CC composite cathodes to improve the N₂ fixation efficiency. Li-N₂ batteries with catalyst cathodes showed higher fixation efficiency than pristine CC cathodes. This promising research on a nitrogen fixation battery system not only provides fundamental and technological progress in energy storage systems but also creates an advanced N₂/Li₃N (nitrogen gas/lithium nitride) cycle for a reversible nitrogen-fixation process. The work is still at the initial stage. More intensive efforts should be devoted to developing the battery systems.
—senior author Xin-Bo Zhang, of the Changchun Institute of Applied Chemistry, part of the Chinese Academy of Sciences. This work was financially supported by the Ministry of Science and Technology of China and the National Natural Science Foundation of China.
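The quoted theoretical energy density follows from the standard metal-gas battery relation: specific energy = nFV divided by the mass of active material. A back-of-envelope sketch is below; the average cell potential (0.54 V) and the choice of the discharge product Li₃N as the mass basis are illustrative assumptions, not values taken from the paper.

```python
# Back-of-envelope check of the theoretical specific energy of a Li-N2 cell,
# based on the overall reaction 6Li + N2 -> 2Li3N (6 electrons transferred).
# The average cell potential and the mass basis (discharge product Li3N) are
# assumptions for illustration, not values taken from the paper.

F = 96485.0  # Faraday constant, C/mol


def specific_energy_wh_per_kg(n_electrons: int, cell_voltage: float, mass_kg: float) -> float:
    """Theoretical specific energy: n*F*V joules per formula reaction, converted to Wh/kg."""
    return n_electrons * F * cell_voltage / (3600.0 * mass_kg)


# 2 mol Li3N: molar mass ~ 3*6.941 + 14.007 = 34.83 g/mol -> 69.66 g total
mass_li3n = 2 * 34.83e-3   # kg
assumed_voltage = 0.54     # V, assumed average cell potential

e = specific_energy_wh_per_kg(6, assumed_voltage, mass_li3n)
print(f"{e:.0f} Wh/kg")  # lands near the ~1,248 Wh/kg quoted in the article
```

With these assumed inputs the estimate comes out close to the paper's quoted value; a different mass basis (e.g. Li metal only) or cell potential would shift the number.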


News Article | April 24, 2017
Site: www.eurekalert.org

The vast Tibetan Plateau, with its high altitude and intense uplift, is regarded as a holy land for earth science research. It has earned the reputation of "The Third Pole of the World", alongside the Arctic and the Antarctic. A recent study reveals the processes of the India-Eurasia continental collision, which led to the eventual formation of the Tibetan Plateau. The review paper, entitled "Processes of initial collision and suturing between India and Asia", has been published in the journal Science China Earth Sciences, No. 3, 2017. As the first and corresponding author of this paper, Ding Lin, from the Institute of Tibetan Plateau Research, Chinese Academy of Sciences, has reviewed dozens of research approaches to the initial collision between the Indian and Asian plates, concluding that the tectono-sedimentary response in the peripheral foreland basin provides the most sensitive index of this event, and that paleomagnetism offers independent evidence as an alternative, reliable, and quantitative research method. Based on a systematic overview of previous studies, the paper suggests that the initial collision first occurred in the center of the Yarlung Tsangpo suture zone (YTSZ) between ca. 65 Ma and 63 Ma and then spread both eastwards and westwards. The collision between India and Asia was perhaps the most spectacular geological event of the last 500 million years. Although there are numerous records of ocean closures and continental collisions in geological history, only the Indian-Asian collision has produced such extensive surface uplift. The ongoing collision has also affected Tibet as well as central and southeast Asia. Thus, the collision between India and Asia and the resultant formation of the Tibetan Plateau likely involve a number of unique processes of both continental collision and mechanisms of intracontinental deformation.
During the 1980s and 1990s, geoscientists first proposed that the Indian and Asian continents initially collided along the western syntaxis ca. 55 Ma, with suturing then propagating diachronously eastwards. In recent years, the earliest peripheral foreland basin related to the collision has been recognized; it developed much closer to the suture zone, on the Indian continent, and contains the earliest detrital material sourced from the Eurasian continent, and thus can represent the initial timing of the continental collision. Integrating tectonic deformation, foreland-basin records, provenance analysis and paleomagnetism, an alternative model was first proposed in which the collision between India and Asia first occurred in the central section of the YTSZ between ca. 65 Ma and 63 Ma, then progressed both eastwards and westwards. This new model, suggesting an earlier collision between India and Asia, predicts that: (1) large-scale continental subduction occurred within the Tibetan Plateau along the main suture zones in order to accommodate additional shortening of about 1300 km; (2) the resultant large-scale continental subduction would have generated far-reaching deformation effects across central Asia; and (3) post-collisional igneous rocks and mineral deposits would have formed within the continental subduction belts. Moreover, along with the continental collision, the Himalayas grew southward. Once the elevation of the Himalayas exceeded that of the proto-Tibetan Plateau, the plateau interior became intensely arid, which eventually modified the South Asian monsoon into its current pattern. For more details, see the original paper: Ding L, Maksatbek S, Cai F L, Wang H Q, Song P P, Ji W Q, Xu Q, Zhang L Y, Muhammad Q, Upendra B. 2017. Processes of initial collision and suturing between India and Asia. Science China Earth Sciences, doi: 10.1007/s11430-016-5244-x.


News Article | May 5, 2017
Site: hosted2.ap.org

China compiles its own Wikipedia, but public can't edit it (AP) — It'll be free. It'll be uniquely Chinese. It'll be an online encyclopedia to rival Wikipedia — but without the participation of the public. And don't expect entries on "Tiananmen Square 1989" or "Falun Gong spiritual group" to come up in your searches, either. Scholars and experts hand-picked by Beijing to work on the project say only they will be able to make entries — the latest example of the Chinese government's efforts to control information available on the internet. The scholars say truth is their guiding light, and their editing and review process is a rigorous one. If there is a difference of opinion, a committee should figure it out, said Zhang Baichun, chief editor of the history of science and technology section. "Of course, science does not come from democratic votes, to convince others you will have to present the most convincing proof," he told The Associated Press. The effort to compile 300,000 entries that span science, literature, politics and history is being led by the ruling Communist Party's Central Propaganda Department, which guides public opinion through instructions to China's media, internet companies and publishing industry as well as overseeing the education sector. It has instructed the Encyclopedia of China Publishing House, known for its offline Chinese Encyclopedia, to produce it. The ruling party has struggled to manage public opinion in the internet age, when citizens can comment on news and topics of outrage and post photos of protests on social media — at least until such messages are scrubbed away or rendered unsearchable by censors. China also regularly blocks overseas sites including Facebook and Twitter, and has periodically blocked Wikipedia's English and Chinese versions. Currently, the Chinese Wikipedia is inaccessible on the mainland. 
Jiang Lijun, senior editor at the Encyclopedia of China Publishing House, said they had met with a team from San Francisco-based Wikipedia to learn from their experience. China has had a private sector version of Wikipedia since 2006, run by Baidu, the operator of the country's most popular internet search engine. It has more than 14 million entries, and more than 6 million people have edited it, according to its website. Jiang said Thursday that they plan to have entries on political leaders, the history of the Communist Party, and subjects such as virtual reality, artificial intelligence and the European Union. The online Chinese Encyclopedia will focus primarily on entries that are less likely to change as opposed to recent events, and with academic value, "while also trying to strike a balance between that, being timely and what people are searching for," she said. She declined to comment on how events that are politically sensitive in China, like the Cultural Revolution and the 1989 Tiananmen Square crackdown, would be treated. Qiao Mu, an independent media analyst in Beijing, said the Chinese Encyclopedia would be "quite different" from Wikipedia because of the need to toe the line on political taboos. "If it's not blocked in China, the publisher must accept censorship, either self-censorship or censored by authorities," he said. He said the encyclopedia would likely present a single, official version of sensitive historical events, and exclude items like the Tiananmen crackdown and the outlawed Falun Gong spiritual group, which "never exist on the internet." The publishing house behind the Chinese Encyclopedia is paying 20,000 scholars and experts from universities and research institutes to write entries and it is slated to go online next year. Jiang said initially the encyclopedia will just be in Chinese, but they are also doing research to see how viable an English version would be. 
Wikipedia is edited and maintained by hundreds of thousands of volunteers around the world, and has more than 40 million articles encompassing nearly 300 languages. More than 900,000 entries are in Chinese, compared with more than 5 million in English. "There is Chinese content on Wikipedia too, but sometimes it is not as accurate as it could be," said Jiang, the senior editor. Jiang said that as Wikipedia's content is generated by users, they can create more entries faster. "But we try to eliminate self-promotion and inaccuracy as much as possible." Zhang, the professor of history of science and technology, said the online version will make it easier to reach more people, particularly young readers. To create the history of science and technology entries, Zhang said professors from the Chinese Academy of Sciences' Institute of Natural Sciences first hold meetings with veterans and young experts in their fields to form a committee. The committee will then find the most authoritative person on the topic to write the draft, including sometimes foreign experts, said Zhang, who is director of the institute. The draft is reviewed by a section chief editor and then the committee. "If there is a difference of opinion, all deputy and chief editors should participate in the discussion and figure it out together," Zhang said. "We will reason things out with the author until we reach an agreement, or change the author."


News Article | April 27, 2017
Site: www.eurekalert.org

Whether you prefer a cool summer night with a gentle breeze or a crystal-clear and still winter day, the human perception of temperature, or thermal comfort, whilst largely dependent on the temperature itself, involves several other climate variables, such as humidity and wind speed. For example, during summer, high humidity can reduce the evaporation of sweat, needed for cooling, from the human body, consequently increasing levels of heat stress and making people feel uncomfortable or even sick. Meanwhile, a nice breeze removes heat by accelerating evaporation in hot conditions, leading to a cooling effect. In winter, the presence of wind removes heat through the process of convection, thereby further elevating the feeling of chilliness. In China, this is especially pronounced in the north of the country during periods when strong northerly winds blow cold air from Siberia. Meanwhile, owing to humid air conducting more heat, the wetter climate of southern China in winter typically leads to a perception of colder conditions for the human body. So how has thermal comfort changed in recent decades over China against the background of global warming? To answer the question, scientists from the Institute of Atmospheric Physics of the Chinese Academy of Sciences, the National Climate Center of the China Meteorological Administration, and other institutes carried out an investigation over the Chinese mainland using the index of effective temperature (ET), which combines the effects of temperature, humidity and wind speed. The human perception of temperature was classified on a scale based on the values of the ET index, ranging from very cold, cold, cool and comfortable to warm, hot and very hot. The study reveals an increase in ET in recent decades (1961-2014) over China, mostly due to the increase in temperature and decrease in wind speed.
An average of 255 very cold and cold days per year was found, reflecting the fact that China is located in the mid and high latitudes and comprises several high-elevation areas (e.g., the vast Tibetan Plateau). However, a marked decline, at a rate of 3.5 days per decade, was also found, indicating around 15 fewer cold days in the last decade of the study period compared with the 1960s. At the other end of the spectrum, warming led to an increase in the number of comfortable days (an average of 27 per year) at a rate of 1.3 days per decade, i.e. an increase from 25 days in the 1960s to 29 days in the last decade. However, the increase showed seasonal and geographical differences. For example, in summer, a significant decrease in the number of comfortable days dominated in eastern China, where population density is high. The number of hot and very hot days at the end of the study period amounted to only 11 per year; however, this figure had been increasing at a rate of 0.7 days per decade. In other words, 30% more hot and very hot days were observed in the last decade of the study period compared with the 1960s (12.5 vs. 9.5 days). The study's findings not only provide an interesting insight into the changes in thermal comfort that have already occurred, but also give us plenty to think about as the climate continues to change in the future. Indeed, GAO Xuejie, one of the study's authors from the Institute of Atmospheric Physics, warns: "We can expect further and possibly accelerating levels of change in the future under global warming and, as such, our group is currently working on high-resolution regional climate model simulations to build a clearer picture in this regard". The findings are published in the International Journal of Climatology.
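The ET index folds temperature, humidity and wind speed into a single perceived-temperature value. As a minimal sketch, the classical Missenard "net effective temperature" formulation is shown below; the coefficients are from that classical formula and may differ from the exact variant used in the study.

```python
# Net effective temperature (NET), a classical formulation (after Missenard)
# combining air temperature, relative humidity and wind speed. The study may
# use a variant of the ET index, so treat these coefficients as illustrative.

def effective_temperature(t_c: float, rh_pct: float, wind_ms: float) -> float:
    """NET in deg C from temperature (deg C), relative humidity (%) and wind (m/s)."""
    return (37.0
            - (37.0 - t_c) / (0.68 - 0.0014 * rh_pct
                              + 1.0 / (1.76 + 1.4 * wind_ms ** 0.75))
            - 0.29 * t_c * (1.0 - rh_pct / 100.0))


# A humid, calm summer day feels warmer than the same day with a breeze:
still = effective_temperature(30.0, 70.0, 0.5)
breezy = effective_temperature(30.0, 70.0, 4.0)
print(round(still, 1), round(breezy, 1))
```

The formula captures the behavior described above: stronger wind lowers the perceived temperature in hot conditions, and high humidity raises it.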


Kang L.,Chinese Academy of Sciences
Annual Review of Entomology | Year: 2014

Phase change in locusts is an ideal model for studying the genetic architectures and regulatory mechanisms associated with phenotypic plasticity. The recent development of genomic and metabolomic tools and resources has furthered our understanding of the molecular basis of phase change in locusts. Thousands of phase-related genes and metabolites have been highlighted using large-scale expressed sequence tags, microarrays, high-throughput transcriptomic sequencing, or metabolomic approaches. However, only a few key factors, including genes, metabolites, and pathways, have a critical role in phase transition in locusts. For example, the CSP (chemosensory protein) and takeout genes, the dopamine pathway, protein kinase A, and carnitines were found to be involved in the regulation of behavioral phase change, and gram-negative bacteria-binding proteins in the prophylactic disease resistance of gregarious locusts. Epigenetic mechanisms, including small noncoding RNAs and DNA methylation, have also been implicated. We review these new advances in the molecular basis of phase change in locusts and present some challenges that need to be addressed. © 2014 by Annual Reviews. All rights reserved.


Wang T.,Peking University | Jiao N.,Peking University | Jiao N.,Chinese Academy of Sciences
Journal of the American Chemical Society | Year: 2013

A novel TEMPO-catalyzed aerobic oxygenation and nitrogenation of hydrocarbons via C=C double-bond cleavage has been disclosed. The reaction employs molecular oxygen as the terminal oxidant and oxygen-atom source by metal-free catalysis under mild conditions. This method can be used for the preparation of industrially and pharmaceutically important N- and O-containing motifs, directly from simple and readily available hydrocarbons. © 2013 American Chemical Society.


Li Z.,Peking University | Li Z.,Chinese Academy of Sciences
Physical Review D - Particles, Fields, Gravitation and Cosmology | Year: 2012

We note that the theoretical prediction of neutrinos from gamma-ray bursts (GRBs) by IceCube overestimates the GRB neutrino flux, because it ignores both the energy dependence of the fraction of proton energy transferred to charged pions and the radiative energy loss of secondary pions and muons when calculating the normalization of the neutrino flux. After correction, the GRB neutrino flux is reduced by a factor of ∼5 for typical GRB spectral parameters and may be consistent with the zero events detected so far by IceCube. More observations are important to push the sensitivity below the prediction and test whether GRBs are the sources of ultrahigh-energy cosmic rays. © 2012 American Physical Society.


Zhang B.,Chinese Academy of Sciences | Zhang Y.,Peking University | Zhu D.,Chinese Academy of Sciences
Chemical Communications | Year: 2012

An organic-inorganic hybrid combining a semiconducting BEDT-TTF layer and a Jahn-Teller-distorted, oxalato-bridged honeycomb antiferromagnetic layer [Cu₂(C₂O₄)₃²⁻]ₙ was obtained and characterized.


Pan Z.,University of California at Davis | Yu C.,Chinese Academy of Sciences
Astrophysical Journal | Year: 2015

The Blandford-Znajek (BZ) mechanism describes a process for extracting the rotation energy from a spinning black hole (BH) via magnetic field lines penetrating the event horizon of the central BH. In this paper, we present a perturbation approach to study force-free jets launched by the BZ mechanism, and then discuss its two immediate applications: (1) we present a high-order split-monopole perturbation solution to the BZ mechanism which accurately pins down the energy extraction rate Ė and well describes the structure of the BH magnetosphere for the entire range of BH spins (0 ≤ a ≤ 1); (2) the approach yields an exact constraint for the monopole field configuration in Kerr spacetime, I = Ω(1 − Aφ²), where Aφ is the φ component of the vector potential of the electromagnetic field, Ω is the angular velocity of the magnetic field lines, and I is the poloidal electric current. The constraint is of particular importance for benchmarking the accuracy of numerical simulations. © 2015. The American Astronomical Society. All rights reserved.


Liu X.-W.,Peking University | Zhao G.,CAS National Astronomical Observatories | Hou J.-L.,Chinese Academy of Sciences
Research in Astronomy and Astrophysics | Year: 2015

At the time of writing, the ongoing LAMOST Galactic surveys have collected approximately 4.5 million stellar spectra with signal-to-noise ratios better than 10 per pixel. This special issue is devoted to early results from the surveys, mostly based on the LAMOST Data Release 1 (DR1; Luo et al., this volume), which contains data secured by May 2013, the end of the first year of regular surveys, although a few studies have made use of data collected in the second year of regular surveys. LAMOST DR1 was released to the Chinese astronomical community and international partners in August 2013 and made public to the whole world in March 2015. Here we briefly review the scope and motivation, data reduction and release, as well as early results of the surveys. As the project advances, one can expect that these surveys will yield an exquisite description of the distribution, kinematics and chemistry of Galactic stellar populations, especially those within a few kpc of the Sun, and a robust measurement of the local dark matter density, and, consequently, shed light on how our Galaxy, and other galaxies in general, form and evolve. © 2015 National Astronomical Observatories, Chinese Academy of Sciences and IOP Publishing Ltd.


Xu Z.,Peking University | Zhang C.,Peking University | Jiao N.,Peking University | Jiao N.,Chinese Academy of Sciences
Angewandte Chemie - International Edition | Year: 2012

A fragment-assembling strategy is used to form oxazoles from aryl acetaldehydes, amines, and molecular oxygen under mild conditions (see scheme). The transformation is highly efficient with the removal of six hydrogen atoms, including the cleavage of four C(sp3)-H bonds. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.


Ding S.,Peking University | Yan Y.,Peking University | Jiao N.,Peking University | Jiao N.,Chinese Academy of Sciences
Chemical Communications | Year: 2013

The aerobic direct dehydrogenative annulation of N-iminopyridinium ylides with terminal alkynes leading to pyrazolo[1,5-a]pyridine derivatives has been developed. © 2013 The Royal Society of Chemistry.


Lin R.,Peking University | Chen F.,Peking University | Jiao N.,Peking University | Jiao N.,Chinese Academy of Sciences
Organic Letters | Year: 2012

A metal-free, N-hydroxyphthalimide (NHPI) catalyzed aerobic oxidative cleavage of olefins has been developed. Molecular oxygen is used as the oxidant and reagent for this oxygenation reaction. This methodology avoids the use of toxic metals and superstoichiometric amounts of traditional oxidants, offering economic and environmental advantages. Based on the experimental observations, a plausible mechanism is proposed. © 2012 American Chemical Society.


Qin C.,Peking University | Shen T.,Peking University | Tang C.,Peking University | Jiao N.,Peking University | Jiao N.,Chinese Academy of Sciences
Angewandte Chemie - International Edition | Year: 2012

Ironing it out: An efficient and convenient nitrogenation strategy involving C-C bond cleavage for the straightforward synthesis of versatile arylamines is presented. Various alkyl azides and alkylarenes, including the common industrial by-product cumene, react using this protocol. Moreover, this method provides a potential strategy for the degradation of polystyrene. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.


Ding S.,Peking University | Jiao N.,Peking University | Jiao N.,Chinese Academy of Sciences
Angewandte Chemie - International Edition | Year: 2012

Often used as a common solvent for chemical reactions and widely utilized in industry as a reagent, N,N-dimethylformamide (DMF) has long played an important role in organic synthesis. Numerous highly useful articles and reviews discussing its utilization have been published. With a focus on the performance of DMF as a multipurpose precursor for various units in numerous reactions, this Minireview summarizes recent developments in the employment of DMF in the fields of formylation, aminocarbonylation, amination, amidation, and cyanation, as well as its reaction with arynes. © 2012 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.


Shen T.,Peking University | Yuan Y.,Peking University | Song S.,Peking University | Jiao N.,Peking University | Jiao N.,Chinese Academy of Sciences
Chemical Communications | Year: 2014

A novel iron-catalyzed efficient approach to construct sulfone-containing oxindoles, which play important roles in the structural library design and drug discovery, has been developed. The use of readily available benzenesulfinic acids, an inexpensive iron salt as the catalyst, and air as the oxidant makes this sulfur incorporation protocol very efficient and practical. This journal is © the Partner Organisations 2014.


Li Z.,Peking University | Li Z.,Chinese Academy of Sciences
Astrophysical Journal Letters | Year: 2013

If gamma-ray bursts (GRBs) produce high-energy cosmic rays, neutrinos are expected to be generated in GRBs via photo-pion production. However, we stress that the same process also generates electromagnetic (EM) emission induced by the secondary electrons and photons, and that the EM emission is expected to be correlated with the neutrino flux. Using Fermi/Large Area Telescope results on the gamma-ray flux from GRBs, the GRB neutrino emission is limited to <20 GeV m⁻² per GRB event on average, independent of the unknown GRB proton luminosity. This neutrino limit suggests that IceCube, operating at full scale, requires stacking of more than 130 GRBs in order to detect one GRB muon neutrino. © 2013. The American Astronomical Society. All rights reserved.


Shi Z.,Peking University | Zhang C.,Peking University | Tang C.,Peking University | Jiao N.,Peking University | Jiao N.,Chinese Academy of Sciences
Chemical Society Reviews | Year: 2012

For green and sustainable chemistry, molecular oxygen is considered an ideal oxidant owing to its natural, inexpensive, and environmentally friendly character, and it therefore offers attractive academic and industrial prospects. This critical review introduces the recent advances over the past 5 years in transition-metal-catalyzed reactions using molecular oxygen as the oxidant. This review highlights the scope and limitations, as well as the mechanisms, of these oxidation reactions (184 references). © 2012 The Royal Society of Chemistry.


Zhang C.,Peking University | Feng P.,Peking University | Jiao N.,Peking University | Jiao N.,Chinese Academy of Sciences
Journal of the American Chemical Society | Year: 2013

A novel Cu-catalyzed aerobic oxidative esterification reaction of 1,3-diones for the synthesis of α-ketoesters has been developed. This method combines C-C σ-bond cleavage, dioxygen activation, and oxidative C-H bond functionalization, and provides a practical, neutral, and mild synthetic approach to α-ketoesters, which are important units in many biologically active compounds and useful precursors in a variety of functional group transformations. A plausible radical process is proposed on the basis of mechanistic studies. © 2013 American Chemical Society.


Zhang C.,Peking University | Xu Z.,Peking University | Zhang L.,Peking University | Jiao N.,Peking University | Jiao N.,Chinese Academy of Sciences
Angewandte Chemie - International Edition | Year: 2011

Efficient and practical: The title reaction provides an efficient route to α-ketoamide compounds, which are ubiquitous structural units in a number of biologically active compounds. N-substituted anilines are suitable substrates for this transformation. Two C(sp3)-H bonds as well as one C(sp2)-H bond and one N-H bond are cleaved in this reaction. Molecular oxygen (1 atm) is used as the oxidant and the reaction involves dioxygen activation. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.


Shen T.,Peking University | Wang T.,Peking University | Qin C.,Peking University | Jiao N.,Peking University | Jiao N.,Chinese Academy of Sciences
Angewandte Chemie - International Edition | Year: 2013

Three in one blow! A novel direct transformation of alkynes into nitriles by a silver-catalyzed nitrogenation reaction through C≡C bond cleavage has been developed. This research provides both a new application for alkynes in organic synthesis, and valuable mechanistic insights into nitrogenation chemistry. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.


Shen Y.,Chinese Academy of Sciences | Chen Y.,Xinjiang Institute of Ecology and Geography
Hydrological Processes | Year: 2010

Arid and semiarid regions comprise a large part of the world's terrestrial area and are home to hundreds of millions of people. Water resources in arid regions are scarce and critical to society and to ecosystems. The hydrologic cycle in arid and semiarid regions has been greatly altered by long-term human exploitation. Under conditions of global warming, water resources in these regions are expected to become more unstable, and ecosystems will likely suffer from severe water stress. In the current special issue, which is devoted to understanding ecohydrologic processes and water-related problems in the arid regions of western China, this paper provides a global perspective on the hydrology and water balance of six major arid basins of the world. A number of global datasets, including the state-of-the-art ensemble simulation of land surface models by GSWP2 (Global Soil Wetness Project II, a project by GEWEX), were used to address the water balance terms in the world's major hydroclimatic regions. The common characteristics of hydrologic cycles and water balance in arid basins are as follows: strong evapotranspiration dominates the hydrological cycle, and among water use sectors irrigation consumes a large amount of water, resulting in degradation of native vegetation. From the ecohydrology viewpoint, a comprehensive study of the hydrological and ecological processes of water utilization in arid basins is urgently needed. Copyright © 2009 John Wiley & Sons, Ltd.
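The water balance terms mentioned in this abstract follow the standard basin-scale bookkeeping (a textbook formulation, not one specific to this paper):

```latex
P = ET + Q + \frac{\Delta S}{\Delta t}
```

where $P$ is precipitation, $ET$ is evapotranspiration, $Q$ is runoff out of the basin, and $\Delta S/\Delta t$ is the change in storage (soil moisture, groundwater, snow) per unit time; in arid basins $ET$ is the dominant outgoing term.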


Shen T.,Peking University | Yuan Y.,Peking University | Jiao N.,Peking University | Jiao N.,Chinese Academy of Sciences
Chemical Communications | Year: 2014

A novel and direct metal-free nitro-carbocyclization of activated alkenes leading to valuable nitro-containing oxindoles via cascade C-N and C-C bond formation has been developed. The mechanistic study indicates that the initial NO and NO2 radical addition and the following C-H functionalization processes are involved in this transformation. This journal is © The Royal Society of Chemistry 2014.


Tang C.,Peking University | Jiao N.,Peking University | Jiao N.,Chinese Academy of Sciences
Angewandte Chemie - International Edition | Year: 2014

A novel copper-catalyzed aerobic oxidative C(CO)-C(alkyl) bond cleavage reaction of aryl alkyl ketones for C-N bond formation is described. A series of acetophenone derivatives as well as more challenging aryl ketones with long-chain alkyl substituents could be selectively cleaved and converted into the corresponding amides, which are frequently found in biologically active compounds and pharmaceuticals. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.


Liang Y.-F.,Peking University | Jiao N.,Peking University | Jiao N.,Chinese Academy of Sciences
Angewandte Chemie - International Edition | Year: 2014

A transition-metal-free Cs2CO3-catalyzed α-hydroxylation of carbonyl compounds with O2 as the oxygen source is described. This reaction provides an efficient approach to tertiary α-hydroxycarbonyl compounds, which are highly valued chemicals and widely used in the chemical and pharmaceutical industry. The simple conditions and the use of molecular oxygen as both the oxidant and the oxygen source make this protocol very environmentally friendly and practical. This transformation is highly efficient and highly selective for tertiary C(sp3)-H bond cleavage. OH, so simple! A transition-metal-free Cs2CO3-catalyzed α-hydroxylation of carbonyl compounds with O2 provided a variety of tertiary α-hydroxycarbonyl compounds (see scheme; DMSO = dimethyl sulfoxide), which are widely used in the chemical and pharmaceutical industry. The simple conditions and the use of molecular oxygen as both the oxidant and the oxygen source make this protocol very efficient and practical. © 2014 WILEY-VCH Verlag GmbH.


Qin C.,Peking University | Jiao N.,Peking University | Jiao N.,Chinese Academy of Sciences
Journal of the American Chemical Society | Year: 2010

This paper describes the first direct approach to alkenyl nitriles from allylarenes or alkenes facilitated by an inexpensive homogeneous iron catalyst. Three C-H bond cleavages occur under the mild conditions during this process. Mechanistic studies indicate that the cleavage of the allyl C(sp3)-H bond is involved in the rate-determining step. This observation may provide an opportunity to achieve C(sp3)-H functionalization catalyzed by an iron catalyst. © 2010 American Chemical Society.


Jiang J.-W.,National University of Singapore | Wang J.-S.,National University of Singapore | Wang B.-S.,Chinese Academy of Sciences
Applied Physics Letters | Year: 2011

The minimum thermal conductance versus supercell size (ds) is revealed in graphene and boron nitride superlattice with ds far below the phonon mean free path. The minimum value is reached at a constant ratio of ds/L ≈ 5, where L is the thickness of the superlattice; thus, the minimum point of ds depends on L. The phenomenon is attributed to the localization property and the number of confined modes in the superlattice. With the increase of ds, the localization of the confined mode is enhanced while the number of confined modes decreases, which directly results in the minimum thermal conductance. © 2011 American Institute of Physics.


Jiang J.-W.,National University of Singapore | Wang B.-S.,Chinese Academy of Sciences | Wang J.-S.,National University of Singapore
Applied Physics Letters | Year: 2011

The thermal conductance in graphene nanoribbon with a vacancy or silicon point defect is investigated by nonequilibrium Green's function (NEGF) formalism combined with first-principles calculations of density-functional theory with local density approximation. The thermal conductance is very sensitive to the position of the vacancy defect, while insensitive to the position of silicon defect. A vacancy defect situated at the center of the nanoribbon generates a saddlelike surface, which greatly reduces the thermal conductance by strong scattering to all phonon modes; while an edge vacancy defect only results in a further reconstruction of the edge and slightly reduces the thermal conductance. © 2011 American Institute of Physics.


Ding S.,Peking University | Jiao N.,Peking University | Jiao N.,Chinese Academy of Sciences
Journal of the American Chemical Society | Year: 2011

This paper describes the direct cyanation of indoles and benzofurans employing N,N-dimethylformamide (DMF) as both reagent and solvent. Isotopic labeling experiments indicated that both the N and the C of the cyano group are derived from DMF. This transformation offers an alternative method for preparing aryl nitriles, though it is currently limited in scope to indoles and benzofurans. © 2011 American Chemical Society.


Liu R.,Dalian University of Technology | Cao J.,Dalian University of Technology | Lin Z.,Peking University | Shan S.,Chinese Academy of Sciences
Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition | Year: 2014

Partial Differential Equations (PDEs) have been successful in solving many low-level vision tasks. However, it is challenging to directly utilize PDEs for visual saliency detection because of the difficulty of incorporating human perception and high-level priors into a PDE system. Instead of designing PDEs with a fixed formulation and boundary condition, this paper proposes a novel framework for adaptively learning a PDE system from an image for visual saliency detection. We assume that the saliency of image elements can be derived from their relevance to the saliency seeds (i.e., the most representative salient elements). In this view, a general Linear Elliptic System with Dirichlet boundary (LESD) is introduced to model the diffusion from seeds to other relevant points. For a given image, we first learn a guidance map to fuse human prior knowledge into the diffusion system. Then, by optimizing a discrete submodular function constrained by this LESD and a uniform matroid, the saliency seeds (i.e., boundary conditions) can be learnt for the image, thus achieving an optimal PDE system to model the evolution of visual saliency. Experimental results on various challenging image sets show the superiority of the proposed learning-based PDEs for visual saliency detection. © 2014 IEEE.
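The seed-to-image diffusion described in this abstract can be sketched generically: seed pixels are held at fixed saliency values (Dirichlet conditions) while the remaining pixels relax toward the average of their neighbours. This is an illustrative Jacobi relaxation on a pixel grid, not the authors' learned LESD system; the function name and parameters are hypothetical.

```python
import numpy as np

def diffuse_saliency(shape, seeds, n_iter=500):
    """Propagate saliency from fixed seeds over a 4-neighbour pixel grid.

    shape: (H, W) grid size; seeds: {(row, col): saliency value in [0, 1]}.
    Seeds act as Dirichlet boundary conditions; all other pixels relax
    toward the mean of their neighbours (Jacobi iteration for Laplace's
    equation), yielding a smooth saliency field between the seeds.
    """
    u = np.zeros(shape)
    for (r, c), v in seeds.items():
        u[r, c] = v
    for _ in range(n_iter):
        # Edge padding gives border pixels copies of themselves as their
        # missing neighbours (a reflecting boundary at the image edge).
        up = np.pad(u, 1, mode='edge')
        u = 0.25 * (up[:-2, 1:-1] + up[2:, 1:-1]
                    + up[1:-1, :-2] + up[1:-1, 2:])
        for (r, c), v in seeds.items():  # re-impose the Dirichlet seeds
            u[r, c] = v
    return u

# One salient seed and one background seed produce a saliency field that
# smoothly interpolates between them.
sal = diffuse_saliency((8, 8), {(0, 0): 1.0, (7, 7): 0.0})
```

In the paper both the diffusion operator and the seed set are learned from the image itself; this sketch only shows the Dirichlet-constrained relaxation that such a system performs once the seeds and guidance map are fixed.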


Tang C.,Peking University | Jiao N.,Peking University | Jiao N.,Chinese Academy of Sciences
Journal of the American Chemical Society | Year: 2012

A novel and efficient copper-catalyzed azidation reaction of anilines via C-H activation has been developed. This method, in which the primary amine acts as a directing group by coordinating to the metal center, provides ortho azidation products regioselectively under mild conditions. This effective route for the synthesis of aryl azides is of great significance in view of the versatile reactivity of the azide products. © 2012 American Chemical Society.


Patent
Chinese Academy of Sciences and DongGuan Eontec Co. | Date: 2015-07-23

A bulk amorphous alloy, including, based on atomic percentage amounts, between 41 and 63% of Zr, between 18 and 46% of Cu, between 1.5 and 12.5% of Ni, between 4 and 15% of Al, between 0.01 and 5% of Ag, and between 0.01 and 5% of Y.


Grant
Agency: European Commission | Branch: FP7 | Program: CSA-CA | Phase: KBBE.2012.2.4-03 | Award Amount: 1.18M | Year: 2012

Guaranteeing the long-term availability of safe food is a global concern that has initiated a large number of activities, including research, policy development and implementation, legislation, and training. Extensive information is generated about food safety, but it is fragmented and not internationally disseminated. The Collab4Safety consortium will establish a global network on food safety with the aim of developing a sustainable coordination platform for the exchange of food safety information on research findings, capacity building, and policies, and of facilitating the control and mitigation of existing and emerging food risks. Tried and tested methods will be used to identify problems and gaps in knowledge, resulting in the generation of outputs valuable to research managers and interested stakeholders globally. Establishing a permanent structure will contribute to the development of trust between key players and institutions, which is needed to create an international forum for the exchange of information and opinions on matters pertaining to food safety in the food and feed chain.


Human life and the entire ecosystem of South East Asia depend upon the monsoon climate and its predictability. More than 40% of the earth's population lives in this region. Droughts and floods associated with the variability of rainfall frequently cause serious damage to ecosystems in these regions and, more importantly, injury and loss of human life. The headwater areas of seven major rivers of SE Asia, i.e. the Yellow River, Yangtze, Mekong, Salween, Irrawaddy, Brahmaputra and Ganges, are located in the Tibetan Plateau. Estimates of the Plateau water balance rely on sparse and scarce observations that cannot provide the required accuracy, spatial density and temporal frequency. Fully integrated use of satellite and ground observations is therefore necessary to support water resources management in SE Asia and to clarify the roles of the interactions between the land surface and the atmosphere over the Tibetan Plateau in the Asian monsoon system. The goals of this project are to: 1. Construct, out of existing ground measurements and current/future satellites, an observing system to determine and monitor the water yield of the Plateau, i.e. how much water finally flows into the seven major rivers of SE Asia; this requires estimating snowfall, rainfall, evapotranspiration and changes in soil moisture; 2. Monitor the evolution of snow, vegetation cover, surface wetness and surface fluxes and analyze the linkage with convective activity, (extreme) precipitation events and the Asian Monsoon; this aims at using the monitoring of snow, vegetation and surface fluxes as a precursor of intense precipitation, towards improving forecasts of (extreme) precipitation in SE Asia. A series of international efforts was initiated in 1996 with the GAME-Tibet project. The effort described in this proposal builds upon 10 years of experimental and modeling research, and the consortium includes many key players and pioneers of this long-term research initiative.


Grant
Agency: European Commission | Branch: H2020 | Program: CSA | Phase: ISSI-5-2015 | Award Amount: 3.64M | Year: 2016

The RRI-Practice project will bring together a unique group of international experts in RRI to understand the barriers and drivers to the successful implementation of RRI both in European and global contexts; to promote reflection on organisational structures and cultures of research conducting and research funding organisations; and to identify and support best practices to facilitate the uptake of RRI in organisations and research programmes. The project will review RRI related work in 22 research conducting and research funding organisations and will develop RRI Outlooks outlining RRI objectives, targets and indicators for each organisation. It will involve comparative analysis of the five EC keys of RRI locating these within broader, evolving discourses on RRI. Within each identified RRI dimension the project will analyse how the topic has developed in particular social and institutional contexts, how the RRI concept and configuration meshes, overlaps and challenges existing organisational practices and cultures, leading to an analysis of the barriers and drivers associated with operationalising and implementing RRI. 12 national case studies will allow for in depth studies of, and dialogue with, the included organisations, and will form the basis for systematic analysis and comparison of drivers, barriers and best practices on each dimension of RRI. The project design also allows analysis of such drivers, barriers and best practices related to national and organisational characteristics, safeguarding the need to take into account diversity and pluralism in regional RRI programs. These analyses will ultimately end up in recommendations to the EC about effective, efficient and targeted strategies for increasing RRI uptake in different kinds of organisations and national cultures, in Europe and in selected major S&T intensive economies worldwide. The project will also develop user-friendly guidance aimed directly at research and funding organisations themselves.


Grant
Agency: European Commission | Branch: FP7 | Program: CP-IP | Phase: SSH.2011.2.1-1 | Award Amount: 10.36M | Year: 2012

One of the biggest challenges facing global society today is the widespread and growing presence of hunger and food insecurity. Given that the lead time for some social and technological solutions is long, a long-term framework on global food and nutrition security (FNS) is required. FoodSecure aims at improving the resilience of the food system by providing a means to mitigate risks and uncertainties in the world food system caused by economic and climatic shocks, while providing for sustainable economic growth. The project provides an analytical toolbox to experiment with, analyse, and coordinate the effects of short- and medium-term policies, thereby allowing for the execution of consistent, coherent, long-term strategies with desirable consequences. The FoodSecure collaboration responds to the challenge of food shortages and volatility by providing stakeholders, in the EU and beyond, with the capacity to assess and address the short-term and long-term challenges of food and nutrition security both effectively and sustainably. The project draws on an expert, multi-disciplinary science team to provide a complete set of knowledge to inform and guide decision makers and other stakeholders in formulating strategies to alleviate food shortages. The food system is analysed in relation to the ecosystem, energy, and financial markets, all of which are potential sources of shocks that can disrupt the food system. In addition, it is examined in light of fundamental societal trends and changing attitudes towards food consumption and production. The project emphasises the diversity of the challenges of FNS across countries and regions. The project delivers new empirical evidence on the drivers of global FNS, and classifies regions and livelihood systems into typologies. A harmonised data system and modelling toolbox are developed for short-term forecasts and forward looks (towards 2050) on future hunger.
Support for effective and sustainable actions will include the identification of the critical pathways for technological and institutional change and for EU policies in the areas of development aid, climate change, trade, the common agricultural policy and renewable energy, including sustainability criteria.


Grant
Agency: European Commission | Branch: H2020 | Program: RIA | Phase: WASTE-2-2014 | Award Amount: 9.44M | Year: 2015

The overall aim of the REFRESH project is to contribute significantly towards the objective of reducing food waste across the EU by 30% by 2025 (which amounts to between 25 and 40 million tonnes of food not being wasted in 2025[1], worth tens of billions of euros a year) and maximizing the value from unavoidable food waste and packaging materials. To achieve this ambitious goal, we will adopt a systemic approach and use cutting-edge science to enable action by businesses, consumers and public authorities. A central ambition of the REFRESH project is to develop a Framework for Action model that is based on strategic agreements across all stages of the supply chain (backed by governments), delivered through collaborative working and supported by evidence-based tools to allow targeted, cost-effective interventions. Success will support transformation towards a more sustainable and secure EU food system, benefitting Europe's economy, environment and society.


Grant
Agency: European Commission | Branch: H2020 | Program: RIA | Phase: SFS-13-2015 | Award Amount: 6.43M | Year: 2016

MycoKey aims to generate innovative and integrated solutions that will support stakeholders in effective and sustainable mycotoxin management along food and feed chains. The project will contribute to reducing mycotoxin contamination mainly in Europe and China, where frequent and severe mycotoxin contamination occurs in crops, and where international trade in commodities and contaminated batches is increasing. MycoKey will address the major affected crops (maize, wheat and barley), their associated toxigenic fungi and related mycotoxins (aflatoxins, deoxynivalenol, zearalenone, ochratoxin A, fumonisins). The project will integrate key information and practical solutions for mycotoxin management into a smart ICT tool (the MycoKey App), providing answers to stakeholders who require rapid, customized forecasting, descriptive information on contamination risk/levels, decision support and practical, economically sound suggestions for intervention. Tools and methodologies will be strategically targeted for cost-effective application in the field and during storage, processing and transportation. Alternative and safe ways to use contaminated batches will also be delivered. The focus of MycoKey will be: i) innovating the communication of mycotoxin management by applying ICT, providing input for legislation, and enhancing knowledge and networks; ii) selecting and improving a range of tools for mycotoxin monitoring; iii) assessing the use of reliable solutions and sustainable compounds/green technologies in prevention, intervention and remediation. The multi-disciplinary consortium, composed of scientific, industrial and association partners (32), includes 11 Chinese institutions and will conduct the 4-year programme within a framework of international networks.


Patent
Chinese Academy of Sciences and Dongguan Eontec Co. | Date: 2016-10-05

Provided is a device for casting forming of amorphous alloy components. The device comprises an injection system, an alloy smelting system, a raw material feeding system, a mould system, a vacuum system and a protective atmosphere system. The vacuum system comprises a vacuum tank (6). The protective atmosphere system comprises a gas cylinder (1) for a protective atmosphere. The vacuum tank or the gas cylinder for a protective atmosphere is provided to effectively realize the acquisition of a vacuum or a protective atmosphere with a positive pressure in the forming process so as to achieve the casting forming of the amorphous alloy components under the protection of the vacuum and the protective atmosphere with a positive pressure. The mould is provided with an exhaust port to prevent the formation of micro shrinkage cavities on the surface in the process of forming the alloy components. Also provided is a process for the casting forming of the amorphous alloy components. The device and process substantially reduce the space of the vacuum or the protective atmosphere with a positive pressure, and can improve the quality of the amorphous alloy components, save on cost and improve the production efficiency.


Patent
Chinese Academy of Sciences and Dongguan Eontec Co. | Date: 2016-10-05

Provided is a device for the casting forming of amorphous alloy components. The device comprises an injection system, an alloy smelting system, a raw material feeding system, a mould system, a vacuum system and a protective atmosphere system, which are used for the preparation of the amorphous alloy components, and can realize forming by squeeze casting of the amorphous alloy components under a vacuum or a protective atmosphere with a positive pressure. The device is provided with an exhaust port on the mould to effectively solve the problem of micro shrinkage cavities forming on the surface in the process of forming the alloy components, so as to improve the quality of the amorphous alloy components. Also provided is a process for the casting forming of the amorphous alloy components. A high-vacuum tank or a protective atmosphere tank is used to shorten the time needed to establish the vacuum or positive-pressure protective atmosphere, so as to shorten the forming period, save production cost, and improve production efficiency.


Su Y.,Peking University | Zhang L.,Peking University | Jiao N.,Peking University | Jiao N.,Chinese Academy of Sciences
Organic Letters | Year: 2011

A novel, efficient oxidation of α-aryl halogen derivatives to the corresponding α-aryl carbonyl compounds at room temperature has been disclosed. Natural sunlight and air are successfully utilized in this approach through the combination of photocatalysis and organocatalysis. A plausible mechanism was proposed on the basis of the mechanistic studies. © 2011 American Chemical Society.


Zhang X.-O.,CAS Shanghai Institutes for Biological Sciences | Wang H.-B.,Chinese Academy of Sciences | Wang H.-B.,Shanghai University | Zhang Y.,Chinese Academy of Sciences | And 3 more authors.
Cell | Year: 2014

Exon circularization has been identified from many loci in mammals, but the detailed mechanism of its biogenesis has remained elusive. By using genome-wide approaches and circular RNA recapitulation, we demonstrate that exon circularization is dependent on flanking intronic complementary sequences. Such sequences and their distribution exhibit rapid evolutionary changes, showing that exon circularization is evolutionarily dynamic. Strikingly, exon circularization efficiency can be regulated by competition between RNA pairing across flanking introns or within individual introns. Importantly, alternative formation of inverted repeated Alu pairs and the competition between them can lead to alternative circularization, resulting in multiple circular RNA transcripts produced from a single gene. Collectively, exon circularization mediated by complementary sequences in human introns and the potential to generate alternative circularization products extend the complexity of mammalian posttranscriptional regulation. © 2014 Elsevier Inc.


Grant
Agency: European Commission | Branch: FP7 | Program: CSA-CA | Phase: KBBE.2010.1.3-05 | Award Amount: 1.21M | Year: 2010

This third European Surveillance Network for Influenza in Pigs (ESNIP 3) will maintain and expand surveillance networks established during previous EC concerted actions (ESNIP 1, QLK2-CT-2000-01636; ESNIP 2, SSPE-022749). Three work packages (WP 2, 3, 4) aim to increase knowledge of the epidemiology and evolution of swine influenza (SI) virus (SIV) in European pigs through organised field surveillance programmes (WP2). Virus strains detected in these programmes will be subjected to detailed antigenic (WP3) and genetic (WP4) characterisation using standardised methodology. Specifically, this will involve timely information on genomic data and the generation of antigenic maps using the latest technology. These analyses will provide significant and timely added value to knowledge of SIV. A strong focus will be monitoring the spread and independent evolution of the pandemic H1N1 2009 virus in pigs. All these data will in turn be used to improve the diagnosis of SI by updating the reagents used in the recommended techniques (WP2). The virus bank and electronic database established during ESNIPs 1 and 2 will also be expanded and formally curated with relevant SIV isolates and information for global dissemination within and outwith the consortium (WP5). ESNIP 3 represents the only organised surveillance network for influenza in pigs and seeks to strengthen formal interactions with the human and avian surveillance networks previously established in ESNIP 2. Timely and transparent interaction with these networks will be a key output. These approaches are entirely consistent with improved pandemic preparedness and planning for human influenza, whilst providing an evidence base for decisions in relation to veterinary health. The project consortium consists of 24 participants, which contribute a blend of different specialisms and skills, ensuring multi-disciplinary, cutting-edge outputs.
The vast majority of the partners are actively working with SIV, including in field settings. Twenty-one participants are from 11 EU member states, seven of which were actively involved in ESNIP 2. Co-operation with partners in China and North America will continue to promote a greater understanding of the epidemiology of SIVs at a global level.


Grant
Agency: European Commission | Branch: FP7 | Program: CP-IP-SICA | Phase: KBBE-2009-2-4-02 | Award Amount: 7.82M | Year: 2010

Plant food supplements, or botanicals, have high acceptance among European consumers. Potentially, they can deliver significant health benefits safely and at relatively low cost. New regulations and EFSA guidance are also now in place. However, concerns about the safety, quality and efficacy of these products remain, and bottlenecks in risk and benefit assessment need to be resolved. PlantLIBRA (PLANT food supplements: Levels of Intake, Benefit and Risk Assessment) aims to foster the safe use of food supplements containing plants or herbal extracts by increasing science-based decision-making by regulators and food chain operators. To make informed decisions, competent authorities and food businesses need more quality-assured and accessible information and better tools (e.g., metadatabanks). PlantLIBRA is structured to develop, validate and disseminate data and methodologies for risk and benefit assessment and to implement sustainable international cooperation. International cooperation and on-the-spot, in-language capacity building are necessary to ensure the quality of the plants imported into the EU. PlantLIBRA spans 4 continents and 23 partners, comprising leading academics, small- and medium-sized enterprises, industry and non-profit organizations. Through its partners it exploits the databases and methodologies of two Networks of Excellence, EuroFIR and Moniqa. PlantLIBRA will also fill the gap in intake data by conducting harmonized field surveys in the regions of the EU and applying consumer science to botanicals. Existing composition and safety data will be collated into a meta-databank, and new analytical data and methods will be generated. The overarching aim is to integrate diverse scientific expertise into a single science of botanicals. PlantLIBRA works closely with EFSA, since several PlantLIBRA partners or experts are involved in the relevant EFSA Working Groups, and also plans shoulder-to-shoulder cooperation with competent authorities and stakeholders.


Grant
Agency: European Commission | Branch: FP7 | Program: CSA-CA | Phase: HEALTH-2007-2.1.2-7 | Award Amount: 1.12M | Year: 2009

In contrast to the reductionist approach of Western medicine, which is based on modern anatomy, cell and molecular biology, Traditional Chinese Medicine (TCM) uses a unique theoretical system and an individualised, holistic approach to describe health and disease, based on the philosophy of Yin-Yang balance and an emphasis on harmony of functions. The two medical systems often disagree, as each observes health from its own limited perspective. GP-TCM aims to inform best practice and harmonise research on the safety and efficacy of TCM, especially Chinese herbal medicines (CHM) and acupuncture, in EU Member States using a functional genomics approach, through exchange of opinions, experience and expertise among scientists in EU Member States and China. In 10 proposed work packages, we will review the current status and identify problems and solutions in the quality control, extraction and analysis of CHM. While these fundamental issues are addressed, discussion forums emphasising the use of functional genomics methodology in research on the safety, efficacy and mechanisms of CHM and acupuncture will be the core of this Coordination project, including the application of the technique in cell-based models, animal models and clinical studies. Guidelines on good practice and agreed protocols in related research areas will be published to promote future TCM research in all EU Member States; online tools and research resources will be made available to EU Member States; and EU Member States and additional Chinese partners will be invited to join this network. The GP-TCM Research Association will be established during this project and will continue running autonomously to guide and coordinate EU-China collaboration in TCM research.


Grant
Agency: European Commission | Branch: FP7 | Program: CP-FP | Phase: KBBE-2008-1-4-05 | Award Amount: 3.08M | Year: 2009

The overall objective of the project is to collect and analyze new data on non-tariff measures (NTMs), particularly on governmental standards and regulations that prescribe the conditions for importing agri-food products into the EU market and into the markets of the main competing players. Furthermore, the impacts of EU NTMs on least developed country (LDC) exports are examined. The project will deliver the following results: 1. An analytical framework for defining measures, methods, products and countries. 2. A database on NTMs in the EU, USA, Canada, Japan, China, India, Brazil, Argentina, Australia, Russia and New Zealand. 3. Comparative analyses of the impact of NTMs on agri-food trade of the EU. 4. Policy recommendations from case studies quantifying NTMs in the fruit and vegetable, meat and dairy trade clusters with the EU. 5. Policy recommendations from case studies on the impacts of EU private and public standards in LDCs. 6. Dissemination of project results to key stakeholders. This will be achieved: A. By optimizing complementarities of the project with ongoing NTM research on the TRAINS database at UNCTAD. B. By organizing the work into research, database, management and dissemination work packages. C. By developing research methodologies that are innovative and robust, optimizing the direct usefulness of the end results for the end users. D. By proposing a partner consortium that together covers the relevant needs: scientific excellence and international project experience; appropriate geographic coverage to collect the required data in all countries; linkages and complementarities with ongoing international NTM analyses (UNCTAD, OECD, World Bank, IFPRI); policy contacts, dialogue and influence; and efficient and effective project management. E. With a budget of 314.5 person-months and an EC request of 2.372 M, for 19 partners, over 30 months.


Grant
Agency: European Commission | Branch: FP7 | Program: CP-FP | Phase: ENV.2007.4.1.3.3. | Award Amount: 3.44M | Year: 2008

Soil and land information is needed for a wide range of applications, but available data are often inaccessible, incomplete, or out of date. GEOSS plans a global Earth Observation System and, within this framework, the e-SOTER project addresses the need for a global soil and terrain database. As the European contribution to a Global Soil Observing System, it will deliver a web-based regional pilot platform with data, methodology, and applications, using remote sensing to validate, augment and extend existing data. Technical barriers that have to be overcome include: quantitative mapping of landforms; soil parent material and soil attribute characterization and pattern recognition by remote sensing; and standardization of methods and measures of soil attributes to convert legacy data. Two major research thrusts involve: 1) improvement of the current SOTER methodology at scale 1:1 million in four windows in Europe, China and Morocco, where moderate-resolution optical remote sensing will be combined with existing parent material/geology and soil information, making use of advanced statistical procedures; 2) within 1:250 000-scale pilot areas, development of advanced remote sensing applications: geomorphic landscape analysis, geologically re-classified remote sensing, and remote sensing of soil attributes. Advances beyond the state of the art include: transformation of pre-existing data and addition of new information with remote sensing and DEMs; interpretations of the e-SOTER database that address threats defined in the EU Soil Thematic Strategy, with comparison of the results against current assessments; and delivery of a data portal through a web service.
e-SOTER will deliver a Pilot Platform and a portal that provides open access to: 1) a methodology to create 1:1 million-scale SOTER databases, and an enhanced soil and terrain database at scale 1:1 million for the four windows; 2) an artifact-free 90m digital elevation model; 3) methodologies to create 1:250 000-scale enhanced SOTER databases, and the databases themselves for four pilots; 4) advanced remote sensing techniques to obtain soil attribute data; 5) validation and uncertainty propagation analysis; 6) dedicated applications related to major threats to soil quality and performance.


Grant
Agency: European Commission | Branch: H2020 | Program: RIA | Phase: SFS-04-2014 | Award Amount: 5.31M | Year: 2015

LANDMARK is a pan-European multi-actor consortium of leading academic and applied research institutes, chambers of agriculture and policy makers that will develop a coherent framework for soil management aimed at sustainable food production across Europe. The LANDMARK proposal builds on the concept that soils are a finite resource providing a range of ecosystem services known as soil functions. Functions relating to agriculture include: primary productivity, water regulation & purification, carbon sequestration & regulation, habitat for biodiversity, and nutrient provision & cycling. Trade-offs between these functions may occur: for example, management aimed at maximising primary production may inadvertently affect the water purification or habitat functions. This has led to conflicting management recommendations and policy initiatives. There is now an urgent need to develop a coherent scientific and practical framework for the sustainable management of soils. LANDMARK will uniquely respond to the breadth of this challenge by delivering (through multi-actor development): 1. LOCAL SCALE: a toolkit for farmers with cost-effective, practical measures for sustainable (and context-specific) soil management. 2. REGIONAL SCALE: a blueprint for a soil monitoring scheme, using harmonised indicators; this will facilitate the assessment of soil functions for different soil types and land uses across all major EU climatic zones. 3. EU SCALE: an assessment of EU policy instruments for incentivising sustainable land management. There have been many individual research initiatives that either address the management & assessment of individual soil functions, or address multiple soil functions but only at local scales. LANDMARK will build on these existing R&D initiatives: the consortium partners bring together a wide range of significant national and EU datasets, with the ambition of developing an interdisciplinary scientific framework for sustainable soil management.


Grant
Agency: European Commission | Branch: H2020 | Program: RIA | Phase: SFS-04-2014 | Award Amount: 6.88M | Year: 2015

Knowledge regarding the complex interplay between agricultural land use and management and soil quality and function is fragmented and incomplete, in particular with regard to underlying principles and regulating mechanisms. The main aim of iSQAPER is to develop an interactive soil quality assessment tool (SQAPP) for agricultural land users that integrates newly derived process understanding and accounts for the impact of agricultural land use and management on soil properties and functions, and related ecosystem services. For this purpose, >30 long-term experimental field trials in the EU and China will be analysed to derive regulating principles for integration in SQAPP. SQAPP will be developed using a multi-actor approach aimed at facilitating social innovation and providing options to land users for cost-effective agricultural management activities to enhance soil quality and crop productivity. SQAPP will be tested extensively in 14 dedicated case study sites in the EU and China covering a wide spectrum of farming systems and pedo-climatic zones, and rolled out across the two continents thereafter. Within the case study sites, a range of alternative agricultural practices will be selected, implemented and evaluated with regard to their effects on improving soil quality and crop productivity. Proven practices will be evaluated for their potential applicability at EU and China levels, and used to assess the related soil environmental footprint under current and future agricultural trends and various agricultural policy scenarios. How the soil quality tool can be utilized for different policy purposes, e.g. in cross-compliance and agri-environmental measures, will also be investigated and demonstrated. A comprehensive dissemination and communication strategy, including a web-based information portal, will ensure that project results are available to a variety of stakeholders at the right time and in appropriate formats to enhance soil quality and productivity in the EU and China.


Grant
Agency: European Commission | Branch: FP7 | Program: CP-FP | Phase: SPA.2013.3.2-01 | Award Amount: 2.90M | Year: 2013

Due to strong economic growth in China over the past decade, air pollution has become a serious issue in many parts of the country. Up-to-date regional air pollution information and means of controlling emissions of the main pollutants are therefore important for China. In particular, the Beijing-Tianjin-Hebei region and the Yangtze River and Pearl River deltas are three focal regions with serious air pollution where air quality policies are especially important. Within the FP6 project AMFIC, atmospheric environmental monitoring over China was addressed by a team of both Chinese and European scientists. AMFIC concluded that air quality modelling, and therefore forecasting capability, is hampered by the rapidly changing emissions that accompany economic growth. In addition, air quality measures could not be related directly to changes in emissions. Therefore, within the follow-up project MarcoPolo, the focus will be on emission estimates from space and the refinement of these estimates through spatial downscaling and source sector apportionment. The pollutants considered cover both anthropogenic and biogenic sources. A wide range of satellite data from various instruments will be used. From these satellite data, emission estimates will be made for NOx, SO2, PM and biogenic sources. Emission inventories will be created with various state-of-the-art techniques and intercompared. By combining these emission data with known information from the ground, a new emission database for MarcoPolo will be constructed. Because of China's strongly growing economy, regular emission inventories are quickly outdated; within MarcoPolo, the emissions will be updated monthly based on the latest satellite observations. The improved emission inventory is input to regional air quality models at the meso- and urban scales. End-users and decision makers will be informed about air quality via visualized model results and forecasts.


News Article | November 30, 2016
Site: www.prnewswire.co.uk

BEIJING, Nov. 30, 2016 /PRNewswire/ -- Today, the Edge Computing Consortium (ECC) was officially established in Beijing, China. The initiative was jointly created by Huawei Technologies Co., Ltd., the Shenyang Institute of Automation of the Chinese Academy of Sciences, the China Academy of Information and Communications Technology (CAICT), Intel Corporation, ARM, and iSoftStone. The ECC aims to build a cooperative platform for the edge computing industry that will give impetus to openness and collaboration in the Operational Technology (OT) and Information and Communications Technology (ICT) industries, nurture industrial best practices, and stimulate the healthy and sustainable development of edge computing. Today's global digital revolution is driving a new round of industrial restructuring. Through the digital transformation of industries, products are becoming intelligent and interconnected. In-depth coordination and convergence of OT and ICT help improve industrial automation, meet customized requirements for products and services, promote the full-lifecycle transformation from products to service operations, and trigger innovation in products, services, and business models. This will have a lasting impact on the value chain, supply chain, and ecosystem. Yu Haibin, Chairman of the ECC and Director of the Shenyang Institute of Automation, Chinese Academy of Sciences, said, "In the 13th Five-Year Plan, China launched two national strategies: the integration of digitization and industrialization, and 'Made in China 2025'. These place heavy demands on ICT and OT convergence, and edge computing is key to supporting and enabling this convergence. Meanwhile, industrial development is also facing a turning point. Industrial automation technology systems will evolve from layered architectures and information silos to IoT, cloud computing, and Big Data analytics architectures.
Amidst this evolution, edge computing will bolster a distributed, self-controlled industrial automation architecture. Therefore, the ECC will keep an eye on the design of the architecture and the choice of technical roadmap, and will promote industrial development through standardization. In addition, building an ecosystem will also be a focus," continued Yu Haibin. The ECC pursues the OICT concept, under which OT, information technology (IT), and communications technology (CT) resources integrate and coordinate with one another, and adheres to a spirit of consensus, unity, and win-win cooperation to drive the ECC's healthy development. The ECC strives to advance cooperation among industry resources from the government, vendor, academic, research, and customer sectors. The Edge Computing Consortium's White Paper was also released at the 2016 Edge Computing Industrial Summit, during the ECC's launch ceremony. It highlights the edge computing industry's trends and major challenges, elaborates on the definition and scope of edge computing, presents the ECC's top-level design and operational model, and formulates the reference architecture and technological framework of edge computing, guiding the ECC's future development.


News Article | January 28, 2016
Site: www.techtimes.com

A team of scientists from China and the United States has discovered how relatively young stars make their way into dense collections of older stars known as globular clusters. While it was initially thought that these clusters form all of their stars at the same time, it has been revealed that they are also capable of producing thousands of second- and even third-generation sibling stars. In a study featured in the journal Nature, researchers from Peking University and the National Astronomical Observatories of the Chinese Academy of Sciences (NAOC) in China and the Adler Planetarium and Northwestern University (NU) in the U.S. have found that globular clusters are capable of taking in gas from outside sources, which can then lead to the formation of new stars. This discovery contradicts the earlier notion that it is the aging stars themselves that shed the gas that triggers the creation of newer stars. Chengyuan Li, a researcher at Peking's Kavli Institute for Astronomy and Astrophysics (KIAA) and lead author of the study, explained that the research offers a new perspective on how multiple stellar populations can form within star clusters. The findings suggest that the gas from which new stars form likely originates from outside globular clusters rather than from within. The process can be compared to adoption: globular clusters are capable of producing their own progeny of stars, but it appears that they instead "adopt" young stars, or at the very least the materials from which new stars can be formed. "Our explanation that secondary stellar populations originate from gas accreted from the clusters' environments is the strongest alternative idea put forward to date," KIAA astronomer Richard de Grijs said. "Globular clusters have turned out to be much more complex than we once thought." 
The Milky Way is known to contain hundreds of spherical, densely packed globular clusters at its outskirts. A large number of these clusters are already quite old, which is why Li and his colleagues chose to focus their study on younger clusters. The team found their target clusters in two dwarf galaxies known as the Magellanic Clouds. Using data gathered by the Hubble Space Telescope, the researchers identified three particular globular clusters: NGC 1696 and NGC 1783 in the Large Magellanic Cloud and NGC 411 in the Small Magellanic Cloud. In the NGC 1783 star cluster, Li and his team identified an initial stellar population that is already 1.4 billion years old, as well as two other stellar populations that are 450 million and 890 million years old. The difference in the ages of the star populations was first thought to arise because the clusters retain enough dust and gas to produce several generations of stars. However, this appears to be unlikely according to the researchers. NU astronomer Aaron M. Geller said that after massive stars are formed, they have only an estimated 10 million years before they meet their end in powerful supernovae that can eliminate any remaining dust or gas in the surrounding area. Only after the explosions could dust and gas accumulate in the area once again around the remaining lower-mass stars. The researchers believe globular clusters take up material from stray dust and gas as they move about their host galaxies. Li and his colleagues are now planning to extend their study to globular clusters in the Milky Way, beyond the Magellanic Clouds.


News Article | August 22, 2016
Site: cen.acs.org

For a long time, the club of scientists working with terahertz radiation was fairly exclusive. Its “members” included instrumentation aficionados who made customized equipment for probing molecules in the terahertz range (1 THz = 10¹² Hz), also known as the far-infrared range. These scientists used light in this portion of the electromagnetic spectrum, which sits between the infrared and microwave regions, for studies in a number of areas such as spectroscopy and astronomy. For example, astronomers have used it to study the abundance of water, carbon monoxide, and oxygen in interstellar clouds. In recent years, however, affordable commercial instruments for generating and detecting terahertz light have become widely available. As a result, the once-exclusive club now includes many researchers who are applying terahertz light to areas that were difficult to study previously. These scientists are harnessing the unique analytical abilities of terahertz light. Unlike infrared light, for example, which induces bending and stretching motions within molecules, terahertz light causes collective motions of groups of molecules. This provides new ways of interrogating molecular systems. In one study, researchers used terahertz spectroscopy to measure the elasticity of polyproline helices (Angew. Chem. Int. Ed. 2016, DOI: 10.1002/anie.201602268). Because polyproline is a key component of collagen, a protein that imparts strength and structural integrity to connective tissues, the elasticity of the biopolymer is related to the springiness of skin and tendons. Polyproline has long been considered a rather rigid molecule, says Syracuse University chemist Timothy M. Korter. The rigidity is often invoked to explain the basis of mechanical strength in proline-based proteins, he adds, but it has been difficult to measure that property directly. 
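For readers unfamiliar with the quantity at stake here, Young’s modulus is the ratio of tensile stress to strain in a material’s linear elastic regime; this is standard textbook background rather than anything stated in the studies themselves:

```latex
% Young's modulus: the slope of the stress-strain curve in the
% linear elastic regime (a higher E means a stiffer material)
E = \frac{\sigma}{\varepsilon} = \frac{F/A}{\Delta L / L_0},
\qquad \text{1 GPa} = 10^{9}\,\text{Pa}
```

On this scale, values of a few gigapascals, like those reported for the polyproline helices, are far below the stiffness of a material such as steel, at roughly 200 GPa.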
Korter and coworkers used terahertz spectroscopy, quantum calculations, and other methods to determine Young’s modulus, a measure of springiness, for two helical forms of poly-L-proline: form I, which is a right-handed helix, and form II, which is a left-handed helix. Their findings show that conventional wisdom is wrong about polyproline. It turns out that the helices are considerably less rigid than many other natural and synthetic polymers. And because of differences between the peptide bond geometries in the two forms, they differ greatly from each other. The team reports Young’s moduli of 4.9 and 9.6 gigapascals for forms I and II, respectively. For comparison, the polymer polyglycine has a Young’s modulus of 42 GPa. In the other terahertz-polymer study, scientists used terahertz light to induce conformational changes and crystallization in poly(3-hydroxybutyrate) (PHB), a well-studied bioderived plastic (Sci. Rep. 2016, DOI: 10.1038/srep27180). The study may offer a new low-energy way to control the orientation of molecular chains and the degree to which they are ordered. Those parameters often dictate a polymeric material’s physical properties, such as flexibility, heat resistance, transparency, and toughness. And they affect the biological activity of biopolymers. Hiromichi Hoshina, a senior scientist at Japan’s RIKEN research institute, and coworkers irradiated PHB-chloroform solutions with intense terahertz light as the solvent evaporated. By coupling control experiments with electron microscopy analysis, the researchers showed that the terahertz light induced obvious microscopic changes in the polymer films’ morphologies, but not as a result of thermal effects. And by using spectroscopy methods, they showed that the terahertz light increased the degree of crystallization by up to 20%. The mechanism controlling crystallization remains unclear, Hoshina acknowledges. 
The studies indicate that terahertz light activates intermolecular motions between polymer chains and solvent molecules, he says, but it probably occurs indirectly, perhaps through the effects of terahertz-generated shock waves. His team is now investigating that hypothesis. The team also aims to study whether the method can induce crystallization in biopolymers that are difficult to crystallize. “It’s exciting to see how terahertz methods are being used nowadays in a growing number of important applications,” says Yale University’s Charles A. Schmuttenmaer, one of the field’s pioneers. The polyproline study “beautifully demonstrates” the power of combining terahertz spectroscopy with high-level computational methods, he says. And the PHB study shows that polymer crystallization can be driven by terahertz radiation’s unique ability to tap into low-frequency intermolecular vibrations. University at Buffalo, SUNY, physicist Andrea Markelz, a terahertz spectroscopist who specializes in biomolecular dynamics, is also enthusiastic about these studies. Regarding the significance of the Syracuse work, Markelz notes that polypeptide elasticity is important biologically, but it has been very challenging to design an experiment to measure that parameter directly. Meanwhile, the RIKEN researchers convincingly conducted control tests and observed a “clear effect of terahertz light on crystal growth,” she says. “I have never seen anything like this before.” Jiang Zhao, a polymer physicist at the Institute of Chemistry of the Chinese Academy of Sciences, notes that after decades of investigations “there still are unsolved problems in basic polymer science.” For example, scientists would like to understand the relationship between polymer chain structure and polymer properties, as well as learn about molecular behavior during phase transitions such as crystallization. 
“These studies show that terahertz methods can certainly help advance our understanding of fundamental polymer properties.”


News Article | October 25, 2016
Site: news.yahoo.com

This illustration shows the fish called Qilinyu that lived 423 million years ago during the Silurian Period. A fossil of the fish was unearthed in China’s Yunnan province and is described in the journal Science. Dinghua Yang/Chinese Academy of Sciences/Handout via REUTERS WASHINGTON (Reuters) - A bottom-dwelling, mud-grubbing, armored fish that swam in tropical seas 423 million years ago is fundamentally changing the understanding of the evolution of an indisputably indispensable anatomical feature: the jaw. Scientists said on Thursday they unearthed in China's Yunnan province fossils of a primordial fish called Qilinyu rostrata that was about 12 inches (30 cm) long and possessed the telltale bones present in modern vertebrate jaws including in people. Qilinyu was part of an extinct fish group called placoderms, clad in bony armor covering the head and much of the body and boasting jaws armed with bony plates that acted as teeth to slice and dice prey. Fish were Earth's first vertebrates when they appeared more than half a billion years ago, but they were primitive and jawless, with sucker-like mouths. Placoderms were the first vertebrates with jaws, a pivotal evolutionary advance that enabled them to grasp prey, but they had no teeth. Teeth appeared for the first time in later fish. Qilinyu had three bones, the dentary, maxilla and premaxilla, that characterize the modern vertebrate jaw seen in bony fish, amphibians, reptiles, birds and mammals, though they are absent in the cartilaginous sharks and rays. Scientists long viewed placoderms as a fascinating evolutionary dead-end. But the fossils of Qilinyu and another placoderm called Entelognathus that also possessed the three bones indicate that the elements of the modern jaw first appeared in placoderms. The maxilla and premaxilla are bones of the upper jaw. The dentary is a bone of the lower jaw. 
It appears they evolved from the bony plates that placoderms used to shear flesh in lieu of teeth, said paleontologist Per Ahlberg of Sweden's Uppsala University, who helped lead the study published in the journal Science. "In us, the lower jaw is made entirely from the dentary. Most of the upper jaw is composed from the maxilla, but the bit that carries the incisor teeth is the premaxilla," Ahlberg said. The findings contradict the long-held notion that the modern jaw architecture evolved later, in the earliest bony fish. "Now we know that one branch of placoderms evolved into modern jawed vertebrates," said study co-leader Zhu Min, a paleontologist at the Chinese Academy of Sciences' Institute of Vertebrate Paleontology and Paleoanthropology. "In this sense, placoderms are not extinct." Qilinyu had a flat underside and ate soft-bodied, mud-dwelling invertebrates. It was a modest member of the placoderm group, which included Earth's first true monster, Dunkleosteus, a fish with huge, powerful jaws that was bigger than a great white shark.


News Article | April 19, 2016
Site: motherboard.vice.com

Image of the embryos having developed to the blastocyst stage 80 hours after launch. Image: Enkui Duan Chinese scientists are creeping a tiny bit closer to the future dream of humans colonizing and reproducing in space. They have succeeded, reports the Chinese Academy of Sciences, in developing early-stage mouse embryos aboard the SJ-10, a satellite that was launched into orbit on April 6 from the Jiuquan Satellite Launch Center in northwest China's Gansu Province. "This research is a very first step for [us humans] to make interstellar travel and planet colonization come true," Enkui Duan, the principal investigator of the space mouse embryo project and a researcher at the State Key Laboratory of Stem Cell and Reproductive Biology in Beijing, told me over email. I caught Duan as he spent a sleepless night travelling to retrieve the mouse embryos (some of which survived) from Sizi Wangqi in Inner Mongolia, where the SJ-10 satellite landed on April 18, and back again to his team's lab in Beijing for further analysis. "The experiment we have proposed in space was a big challenge. We boarded more than 6,000 mouse embryos on China's SJ-10 recoverable satellite by using our newly developed large-scale mammalian embryo freezing and thawing technology," said Duan. The embryos before launch, at the two-cell stage (not yet developed to blastocysts). Image: Enkui Duan The team developed an embryo culture system and placed it within a small enclosed chamber that provides the ideal conditions for the embryos to develop in space. While the chamber was in orbit, a camera attached to the experiment took photographs of the embryos as they developed in microgravity, and sent these images back to Earth. With the aid of their imaging technology, the researchers were able to observe how the mammalian two-cell stage embryos developed into blastocysts under microgravity after four days. Blastocysts are structures formed in the very early development of mammals. 
In humans, blastocysts begin to form five days after fertilization. The researchers will now compare their space-developed embryos to those cultured in normal laboratory environments on Earth to see what differences there are between the two at both the cellular and molecular levels. In the long run, the researchers are tying their work into the broader questions of whether humans could survive and live healthily in space, whether they could have healthy offspring in space, and whether short- or long-term space travel could affect human fertility owing to exposure to harsh space environments. In other words, they're dreaming big. "The question we focused on is whether humans could achieve the dream of surviving and reproducing in outer space in the future," said Duan. "Now, we have finally proven that the most crucial step in our reproduction—early embryo development—is possible in outer space." L-R: Zheng WB (designer of the embryo culture box), Enkui Duan, and Lei XH (embryo researcher) at the payload transfer area. Image: Enkui Duan Duan and his team have been working on space reproductive technologies for the last couple of years, and they first attempted to develop mouse embryos in space back in 2006. That time, the team placed four-cell stage mouse embryos in the SJ-8 satellite, which beamed back high-resolution images of how those embryos were getting on. "Unfortunately, all embryos failed to develop because of the high temperature in the culture system, according to the data and images transmitted from the SJ-8 satellite," said Duan, who didn't give up. He and his team spent the next few years persuading Chinese state officials that "failure is inevitable in the path of such space exploration," and that the team was set on succeeding if it was given a second chance. In the meantime, Duan also collaborated with researchers from the Shanghai Institute of Technical Physics to improve their space-faring equipment and in-lab culture systems. 
Though Duan admitted that humans still had a long way to go before they could colonize space, he was adamant that his team’s project was a leap in the right direction. “As we know, after the embryo develops to blastocyst, it must implant into the uterus and then develop into a fetus. Next, we want to see whether the embryo developed in outer space could implant into the uterus correctly and develop into the final step—the fetus,” said Duan. “We will further focus on the possibility of mammalian embryo implantation and subsequent development, as well as the human ability to become pregnant in outer space. Our final conquest is the sea of stars.”


News Article | March 16, 2016
Site: cen.acs.org

A diverse team of global experts has been selected to lead ACS Omega, the American Chemical Society’s newest open access journal publishing peer-reviewed articles. Based in the Americas, Europe, India, and China, the editors not only represent key geographic regions of active R&D but also bring expertise from four distinct scientific areas of interest. The new editors are Cornelia Bohne, a professor of chemistry at the University of Victoria in Canada; Krishna Ganesh, director of the Indian Institute of Science Education & Research in India; Luis Liz-Marzán, Ikerbasque research professor and scientific director at CIC biomaGUNE in Spain; and Deqing Zhang, director of the Institute of Chemistry, Chinese Academy of Sciences, in China. Bohne’s research focuses on developing a fundamental understanding of the dynamics of supramolecular systems and on applying this knowledge to functional supramolecular materials. Ganesh is an expert in modified DNA and peptide nucleic acids as novel cell-penetrating agents. As the founding and current director of IISER, Ganesh has built a unique, interdisciplinary infrastructure in which teaching and education are wholly integrated into state-of-the-art research. Liz-Marzán’s research focuses on nanoparticle synthesis and assembly, nanoplasmonics, and the development of nanoparticle-based sensing and diagnostic tools. He most recently served as a senior editor of the ACS journal Langmuir. Zhang’s research focuses on organic functional materials, involving the synthesis of organic functional molecules, spectroscopic studies, characterization of self-assembled structures and optoelectronic properties, as well as applications in chemo/biosensing and imaging. “The ACS Omega editors have themselves authored in aggregate more than 850 peer-reviewed research articles, book chapters, and patents,” says Penelope Lewis, director of editorial and new product development in ACS Publications. 
“Their prolific publishing records and academic and professional achievements set the foundation for a team that will define and lead the editorial vision for the journal, drawing on a geographically diverse editorial board they will soon enlist—to be composed of active researchers with wide-ranging expertise and scientific backgrounds across chemistry, chemical engineering, and allied interdisciplinary scientific fields.” ACS Omega will begin accepting research submissions in April 2016 and will publish its first articles online early this summer.


DONG Chunhua's group and ZOU Changling from the Chinese Academy of Sciences have experimentally demonstrated, for the first time, non-magnetic non-reciprocity using optomechanical interactions in a whispering-gallery microresonator. This work was published in Nature Photonics. The study exploits the ordinary optomechanical interaction in whispering-gallery microresonators, where the two optical modes are the degenerate clockwise (CW) and counter-clockwise (CCW) traveling-wave whispering-gallery modes with opposite orbital angular momenta. In this interaction, the CW and CCW modes couple independently to the mechanical mode. Because orbital angular momentum is conserved, the driving field can stimulate coherent interaction between signal photons and phonons only when the driving and signal optical fields are coupled to the same optical mode. As a result, the directional driving field breaks time-reversal symmetry and leads to non-reciprocal transmittance for the signal light. Optomechanically induced non-reciprocal transparency (OMIT) and amplification (OMIA) were observed, and a non-reciprocal phase shift of up to 40 degrees was demonstrated. The non-reciprocity is also controllable: with two oppositely propagating driving fields exciting the CW and CCW modes simultaneously, the device behaves as a controllable narrowband reflector with non-reciprocal transmittance. Note that the underlying mechanism of non-reciprocity demonstrated in this study is universal and can be generalized to any traveling-wave resonator via dispersive coupling with a mechanical resonator. With the mechanical vibrations cooled to their ground states, applications in the quantum regime, such as single-photon isolators and circulators, also become possible. 
Aside from these applications, non-reciprocal phase shift is of fundamental interest for exploring exotic topological photonics, such as the realization of chiral edge states and topological protection. The results of this study represent an important step toward integrated all-optical controllable isolators and circulators, as well as non-reciprocal phase shifters. This work extends last year's research by DONG's group on Brillouin-scattering non-reciprocity (Nature Communications), broadening the applications of non-reciprocal devices based on cavity optomechanics across the whole optical band and even into the microwave regime. In particular, with the system in its ground state, single-photon isolators and circulators could also contribute to hybrid quantum-internet technology. More information: Zhen Shen et al, Experimental realization of optomechanically induced non-reciprocity, Nature Photonics (2016). DOI: 10.1038/nphoton.2016.161
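The momentum-conservation argument described above can be sketched in standard cavity-optomechanics notation; the symbols below (mode operators a_cw and a_ccw, phonon operator b, single-photon coupling g_0, drive amplitude α) are conventional textbook quantities, not notation taken from the paper:

```latex
% Two degenerate traveling-wave modes share one mechanical mode; orbital
% angular momentum conservation forbids a direct a_cw^† a_ccw b cross term:
H_{\mathrm{int}} = \hbar g_0 \,\bigl( a_{\mathrm{cw}}^{\dagger} a_{\mathrm{cw}}
                 + a_{\mathrm{ccw}}^{\dagger} a_{\mathrm{ccw}} \bigr)\,\bigl( b + b^{\dagger} \bigr)

% A strong red-detuned drive of amplitude \alpha injected in the CW direction
% linearizes only the CW term into a beam-splitter (photon-phonon swap) form:
H_{\mathrm{lin}} \approx \hbar G \,\bigl( a_{\mathrm{cw}}^{\dagger} b
                 + a_{\mathrm{cw}} b^{\dagger} \bigr), \qquad G = g_0 \alpha .
```

In this picture a CW signal interferes with the phonon pathway and acquires a transparency window (or gain for a blue-detuned drive), while a CCW signal shares no phonon pathway with the drive and passes unaffected, so transmission becomes direction-dependent.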


News Article | December 1, 2016
Site: www.prnewswire.co.uk

BEIJING, Dec. 1, 2016 /PRNewswire/ -- The Edge Computing Consortium (ECC) was officially established today in Beijing, China. The initiative was jointly created by Huawei Technologies Co., Ltd., the Shenyang Institute of Automation of the Chinese Academy of Sciences, the China Academy of Information and Communications Technology (CAICT), Intel Corporation, ARM and iSoftStone. The ECC aims to develop a cooperative platform for the edge computing industry that will boost openness and collaboration in the operational technology (OT) and information and communications technology (ICT) sectors, thereby fostering industry best practices and stimulating the healthy, sustainable development of edge computing. Today's global digital revolution is driving a new round of industrial restructuring. Through the digital transformation of industries, products are being brought into intelligent interconnection. Deep coordination and convergence of OT and ICT help improve industrial automation, meeting customized requirements for products and services, promoting the transformation of the full life cycle of product and service operations, and triggering innovation in products, services and business models. This will have a lasting impact on the value chain, the supply chain and the ecosystem. Yu Haibin, chairman of the ECC and director of the Shenyang Institute of Automation of the Chinese Academy of Sciences, said: "In its 13th Five-Year Plan, China launched two national strategies: the integration of digitalization and industrialization, and 'Made in China 2025.' This requires the convergence of ICT and OT. Edge computing is key to supporting and enabling this convergence. Meanwhile, industrial development is also facing a turning point." 
"Industrial automation technology systems will evolve from a layered architecture and information silos to an architecture built on IoT, cloud computing and big data analytics. Amid this evolution, edge computing will underpin a distributed, autonomously controlled industrial architecture. The ECC will therefore keep its sights on architecture design and the choice of technical road map, in addition to promoting industrial development through standardization. It will also focus on developing an ecosystem," Yu Haibin added. The ECC pursues the OICT concept, under which OT, information technology (IT) and communications technology (CT) resources are to be integrated and coordinated with one another while upholding the spirit of consensus, unity and win-win cooperation, in order to drive the healthy development of the ECC. The ECC strives to advance cooperation among industry resources from the government, vendor, academic, research and customer sectors. The white paper of the Edge Computing Consortium was released at the 2016 Edge Computing Industrial Summit, during the ECC launch ceremony. It highlights the trends and major challenges of the edge computing industry, elaborates the definition and scope of edge computing, presents the ECC's top-level design and operating model, and formulates the reference architecture and technology framework of edge computing, which is intended to guide the future development of the ECC.


News Article | December 28, 2015
Site: phys.org

The nomination came as part of a campaign by the Information Technology & Innovation Foundation (ITIF), a leading science and technology policy think tank, to call out the "worst of the year's worst innovation killers." It's an odd juxtaposition, to say the least. The Luddite Awards – named after an 18th-century English worker who inspired a backlash against the Industrial Revolution – highlight what ITIF refers to as "egregious cases of neo-Luddism in action." Musk, of course, is hardly a shrinking violet when it comes to promoting technology innovation. Whether it's self-driving cars, reusable commercial rockets or the futuristic "hyperloop," he's not known for being a tech party pooper. ITIF, as it turns out, took exception to Musk's concerns over the potential dangers of artificial intelligence (AI) – along with those other well-known "neo-Luddites," Stephen Hawking and Bill Gates. ITIF is right to highlight the importance of technology innovation as an engine for growth and prosperity. But what it misses by a mile is the importance of innovating responsibly. Back in 2002, the European Environment Agency (EEA) published its report Late Lessons from Early Warnings. The report – and its 2013 follow-on publication – catalogs innovations, from PCBs to the use of asbestos, that damaged lives and environments because early warnings of possible harm were either ignored or overlooked. This is a picture that is all too familiar these days as we grapple with the consequences of unfettered innovation – whether it's climate change, environmental pollution or the health impacts of industrial chemicals. Things get even more complex, though, with emerging technologies like AI, robotics and the "internet of things." With these and other innovations, it's increasingly unclear what future risks and benefits lie over the horizon – especially when they begin to converge together. 
This confluence – the "Fourth Industrial Revolution" as it's being called by some – is generating remarkable opportunities for economic growth. But it's also raising concerns. Klaus Schwab, Founder of the World Economic Forum and an advocate of the new "revolution," writes "the [fourth industrial] revolution could yield greater inequality, particularly in its potential to disrupt labor markets. As automation substitutes for labor across the entire economy, the net displacement of workers by machines might exacerbate the gap between returns to capital and returns to labor." Schwab is, by any accounting, a technology optimist. Yet he recognizes the social and economic complexities of innovation, and the need to act responsibly if we are to see a societal return on our techno-investment. Of course every generation has had to grapple with the consequences of innovation. And it's easy to argue that past inventions have led to a better present – especially if you're privileged and well-off. Yet our generation faces unprecedented technology innovation challenges that simply cannot be brushed off by assuming business as normal. For the first time in human history, for instance, we can design and engineer the stuff around us at the level of the very atoms it's made of. We can redesign and reprogram the DNA at the core of every living organism. We can aspire to creating artificial systems that are a match for human intelligence. And we can connect ideas, people and devices together faster and with more complexity than ever before. This explosion of technological capabilities offers unparalleled opportunities for fighting disease, improving well-being and eradicating inequalities. But it's also fraught with dangers. And like any complex system, it's likely to look great… right up to the moment it fails. Because of this, an increasing number of people and organizations are exploring how we as a society can avoid future disasters by innovating responsibly. 
It's part of the reasoning behind why Arizona State University launched the new School for the Future of Innovation in Society earlier this year, where I teach. And it's the motivation behind Europe's commitment to Responsible Research and Innovation. Far from being a neo-Luddite movement, people the world over are starting to ask how we can proactively innovate to improve lives, and not simply innovate in the hope that things will work out OK in the end. This includes some of the world's most august scientific bodies. In December, for instance, the US National Academy of Sciences, the Chinese Academy of Sciences and the UK's Royal Society jointly convened a global summit on human gene editing. At stake was the responsible development and use of techniques that enable the human genome to be redesigned and passed on to future generations. In a joint statement, the summit organizers recommended "It would be irresponsible to proceed with any clinical use of germline editing unless and until (i) the relevant safety and efficacy issues have been resolved, based on appropriate understanding and balancing of risks, potential benefits, and alternatives, and (ii) there is broad societal consensus about the appropriateness of the proposed application." Neo-Luddites? Or simply responsible scientists? I'd go for the latter. If innovation is to serve society's needs, we need to ask tough questions about what the consequences might be, and how we might do things differently to avoid mistakes. And rather than deserving the label "neo-Luddite," Musk and others should be applauded for asking what could go wrong with technology innovation, and thinking about how to avoid it. That said, if anything, they sometimes don't go far enough. Musk's answer to his AI fears, for instance, was to launch an open AI initiative – in effect accelerating the development of AI in the hopes that the more people are involved, the more responsible it'll be. 
It's certainly a novel approach – and one that seriously calls into question ITIF's Luddite label. But it still adheres to the belief that the answer to technology innovation is… more technology innovation. The bottom line is that innovation that improves the lives and livelihoods of all – not just the privileged – demands a willingness to ask questions, challenge assumptions and work across boundaries to build a better society. If that's what it means to be a Luddite, count me in!


News Article | November 16, 2016
Site: www.nature.com

I first met Joseph L. Birman in 1979. He attended an unofficial seminar in Moscow for scientists like me who had lost access to academic institutions in the Soviet Union because of our political views or because we had applied to leave the country. We gathered in the apartment of a computer scientist who was under KGB surveillance (and later spent five years in prison and exile for anti-Soviet actions). This was before the Internet and social media. National borders were closed; we felt isolated and threatened by our government. Although few Western academics dared attend our gatherings, there were many more people than chairs. Birman, a tall man, sat uncomfortably on the floor, trying to find space for his legs. Then, as always, he was talkative and cheerful: everybody's uncle. Birman, who died on 1 October, was born in New York City on 21 May 1927, the grandson of Jewish immigrants from Russia. In 1943, he graduated from the Bronx High School of Science, famously an incubator of prominent researchers. He received a bachelor of science degree from the City College of New York and a doctorate in theoretical physics from Columbia University in 1952, going on to work on the optical properties of semiconductors at GTE Laboratories in New York. A decade later, he became a professor at New York University, and in 1974, he joined the faculty of City College, where he remained until his death. Sharply experimental in his thinking and prohibitively mathematical, Birman demonstrated how the branch of mathematics known as group theory can be applied to understand transitions between crystal phases and to predict light scattering and other optical properties of solids. He leveraged the respect he gained from seminal papers in the 1960s and 1970s into advocacy for hundreds of scientists. 
In a letter endorsing Birman for the Andrei Sakharov Prize of the American Physical Society (APS), which recognizes scientists who fight for human rights, Iranian physicist Hadi Hadizadeh wrote: “His efforts to get me released from detention and solitary confinement in 2001 will not be forgotten by me, my family, and many scientists worldwide.” Winning the award in 2010, Birman was delighted to see his name attached to that of the notable Soviet dissident and nuclear physicist. Birman's trips to the Soviet Union began in the 1970s with official invitations from the Soviet Academy of Sciences. During those trips he learned about the plight of Jewish scientists in the country. They were often denied promotion, travel abroad and positions at top research institutions. Applying for an exit visa frequently resulted in loss of employment, but rarely in permission to leave. Open protests led to arrests and imprisonment. Birman used his travels and eminence to challenge the heads of Soviet research institutions on behalf of scientists caught in this plight. It is thanks to his efforts and those of his colleagues that I did not end up in jail, despite multiple KGB interrogations, and was finally allowed to leave the Soviet Union. In the early 1990s, when many scientists in Russia were finally allowed to emigrate, Birman helped to establish the Program for Refugee Scientists in the United States, raising funds from private foundations. This supported visiting positions for more than a hundred émigré scientists in US universities and gave them time to secure permanent positions in industry and academia. Birman played a crucial part, along with particle physicist Robert Marshak, in recovering a generation of Chinese physicists lost to Mao Zedong's cultural revolution in the 1960s and 1970s. 
During this time, most scientific research ceased, concepts such as Einstein's theory of relativity were denounced as bourgeois and scientists were sent to do manual labour in the countryside. In 1983, Birman and Marshak travelled to Beijing on behalf of the APS and signed an agreement with the Chinese Academy of Sciences and the Ministry of Education that brought more than 60 middle-aged physicists to work in labs throughout the United States for up to three years. Many leaders of Chinese physics are alumni of that programme, and the scientific cooperation between Chinese and US physicists that now exists evolved largely from it. When the programme came to an end in the tragic aftermath of the Tiananmen Square protests in 1989, Birman redirected his efforts to achieve justice for Chinese scholars who openly spoke their minds. He would get phone calls and even surprise visits from Chinese scientists. He welcomed and did everything in his power to help these people, counselling them on how to manage their careers and providing contacts and recommendations. Birman chaired human-rights committees at the APS and the New York Academy of Sciences and in that capacity wrote hundreds of letters to heads of governments, kings and religious leaders. He publicized cases of unjustly imprisoned scientists. For more than 40 years, he served as vice-chair of the Committee of Concerned Scientists dedicated to protecting human rights and scientific freedom around the world. Joe met Joan Sylvia Lyttle when they were both at graduate school. They married in 1950 and had three children. She became a professor of mathematics at Columbia University. The day before Joe died, he and Joan had spent hours discussing a potential overlap between her work on topology and his model of how the phase of a particle is influenced by its trajectory through space. As much as Joe loved physics, getting a scientist out of prison had infinitely greater value to him than any scientific achievement. 
As a physicist and a humanitarian, Joe's influence touched so many lives. He will be dearly missed by his friends and remembered by hundreds of people he helped.


News Article | April 26, 2016
Site: www.sciencenews.org

An amber collector in Germany has spotted the ancient remains of a beetle never before seen in the fossil record. Two itty-bitty specimens, entombed in amber since the Eocene epoch some 54.5 million to 37 million years ago, represent a new species of Jacobson’s beetle, researchers report online March 28 in the Journal of Paleontology. The beetles, Derolathrus groehni, are, like their modern relatives, about as long as the width of a grain of rice. MicroCT scans and other images revealed narrow bodies, a shiny brown exterior and two wispy featherlike wings protruding from the hindquarters, angled like the blades of a helicopter. The fossils look just like today’s Jacobson’s beetles, says study coauthor Chenyang Cai of the Chinese Academy of Sciences. Fringed, eyelashlike wings may have helped the beetles ride the wind, eventually spreading to far-flung regions of the world — from western Russia (a big source of Baltic amber) to distant habitats in Fiji, Sri Lanka and even Alabama, where Jacobson's beetles have been spotted recently.


News Article | December 16, 2015
Site: cen.acs.org

As infectious bacteria continue to evolve defenses against conventional antibiotics, many people are focused on the need for more potent drugs. But Shu Wang of the Chinese Academy of Sciences presented a different route for fighting antibiotic resistance on Tuesday during the 7th International Chemical Congress of Pacific Basin Societies, or Pacifichem. Wang and his team have developed polymers with antimicrobial activity that can essentially be turned on and off (Angew. Chem. Int. Ed. 2015, DOI: 10.1002/anie.201504566). The polymers are actively antimicrobial only when they need to be so they don’t continually pressure bacteria to evolve resistance, Wang says. Although making switchable antibiotics isn’t a new strategy, Wang and his team believe they have a fresh, promising approach. Previously, researchers have investigated compounds that can be switched on or off using light. Wang’s antibiotics use reversible supramolecular chemistry that also works in the dark, he tells C&EN. These biocides are cationic polymers derived from poly(phenylene vinylene). The polymers have linear backbones with cationic ammonium arms that control the material’s antibiotic properties. The positively charged arms help kill bacteria by penetrating into cell membranes. This behavior is governed primarily by electrostatics, although hydrophobic interactions may also contribute, Wang says. He adds that the polymers are designed to attack bacterial cells and not mammalian cells, including healthy human cells. To switch off the antimicrobial activity, Wang’s team exposes the polymers to ring-shaped cucurbit[7]uril molecules, or CB[7]. These compounds cuff the ammonium arms, hindering the polymer’s ability to latch on to bacteria. To restore the polymer’s antimicrobial activity, the team adds the small molecule amantadine to the bacterial system, usually a culture of E. coli. Amantadine bonds with CB[7], removing the cuffs from the polymers, Wang explains. 
This is an interesting approach, says Ben L. Feringa of the University of Groningen, who was not involved in this study but pioneered earlier examples of switchable antibiotics. When it comes to combatting antibiotic resistance, “innovative approaches should be applauded,” Feringa adds. Yet he says he has a hard time imagining how this multiple-component process would be administered to patients in the clinic. Wang is optimistic, however, and he points out that the polymers could also be used to deactivate antimicrobials before they are released into the environment, for instance, in wastewater from agriculture. “Antibiotic resistance is a very big problem for society,” he says. “We think this could be a fast way to reduce that trouble.”
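The on/off switch described above is, at bottom, a competitive host-guest equilibrium: CB[7] can cuff either a polymer ammonium arm or an amantadine molecule, and whichever binds more tightly wins. A minimal numerical sketch of that competition follows; the binding constants K1 and K2 are illustrative placeholders (the only assumption carried over from the chemistry is that amantadine binds CB[7] far more tightly than the polymer arms do), not measured values from the paper.

```python
# Toy competitive host-guest equilibrium for the CB[7] "cuffing" switch.
# K1: polymer ammonium arm + CB[7]; K2: amantadine + CB[7]. Both in M^-1.
# All concentrations in mol/L. Values are illustrative, not measured.

def free_host(C_t, P_t, A_t, K1, K2, tol=1e-15):
    """Solve the CB[7] mass balance for the free host concentration c by
    bisection:  c + K1*c*P_t/(1+K1*c) + K2*c*A_t/(1+K2*c) = C_t.
    The left side grows monotonically with c, so bisection on [0, C_t] works."""
    lo, hi = 0.0, C_t
    while hi - lo > tol:
        c = 0.5 * (lo + hi)
        total = c + K1 * c * P_t / (1 + K1 * c) + K2 * c * A_t / (1 + K2 * c)
        if total > C_t:
            hi = c
        else:
            lo = c
    return 0.5 * (lo + hi)

def cuffed_fraction(C_t, P_t, A_t, K1, K2):
    """Fraction of polymer arms with a CB[7] cuff (antimicrobial 'off')."""
    c = free_host(C_t, P_t, A_t, K1, K2)
    return K1 * c / (1 + K1 * c)

K1 = 1e6    # illustrative arm/CB[7] affinity
K2 = 1e12   # illustrative amantadine/CB[7] affinity, assumed far larger

# 10 uM arms + 10 uM CB[7], no amantadine: most arms cuffed ("off" state).
off = cuffed_fraction(1e-5, 1e-5, 0.0, K1, K2)
# Add 20 uM amantadine: it strips CB[7] off the arms ("on" state restored).
on = cuffed_fraction(1e-5, 1e-5, 2e-5, K1, K2)
print(f"cuffed without amantadine: {off:.2f}; with amantadine: {on:.4f}")
```

The sketch reproduces the qualitative behavior Wang describes: with no competitor most arms are cuffed and the polymer is inert, and adding an excess of the tighter-binding guest frees essentially all of the arms.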


News Article | November 30, 2016
Site: www.prnewswire.co.uk

BEIJING, Nov. 30, 2016 /PRNewswire/ -- The Edge Computing Consortium (ECC) was officially established today in Beijing, China. The initiative was jointly created by Huawei Technologies Co., Ltd., the Shenyang Institute of Automation of the Chinese Academy of Sciences, the China Academy of Information and Communications Technology (CAICT), Intel Corporation, ARM and iSoftStone. The ECC is intended to build a cooperative platform for the edge computing industry that encourages openness and collaboration in the operational technology (OT) and information and communications technology (ICT) sectors, promotes industry-relevant best practices, and stimulates the healthy and sustainable development of edge computing. Today's digital revolution is driving a new round of industrial restructuring. Through the digital transformation of industries, products are connected by intelligent interlinking. Careful coordination and convergence of OT and ICT help improve industrial automation, meet customized requirements for products and services, promote the transformation of the full life cycle of products into service operations, and spark innovation in products, services and business models. This will have a long-term impact on the value chain, the supply chain and the ecosystem. Yu Haibin, chairman of the ECC and director of the Shenyang Institute of Automation, Chinese Academy of Sciences, said: "In the 13th Five-Year Plan, China introduced two national strategies, the integration of digitalization and industrialization, as well as 'Made in China 2025.' This requires a high degree of convergence of ICT and OT. Edge computing is crucial to supporting and enabling this convergence. 
Meanwhile, industrial development is also at a turning point." "Industrial automation technology systems will evolve from a layered architecture and information silos to an architecture characterized by IoT, cloud computing and big data analytics. Amid this evolution, edge computing will strengthen a distributed, automatically self-controlled industrial architecture. The ECC will therefore keep an eye on architecture design and the choice of the technical road map, and will promote industrial development through standardization. In addition, it will also focus on building an ecosystem," Yu Haibin continued. The ECC pursues the OICT concept, according to which operational technology (OT), information technology (IT) and communications technology (CT) resources should be integrated and coordinated with one another, guided by consensus, unity and win-win cooperation, in order to drive the healthy development of the ECC. The ECC is committed to cooperation among industry resources from government, the vendor community, academia, research and the customer sector. The white paper of the Edge Computing Consortium was also released at the 2016 Edge Computing Industrial Summit during the ECC launch ceremony. It highlights the trends and key challenges of the edge computing industry, elaborates the definition and scope of edge computing, presents the ECC's top-level design and operating model, and conceives the reference architecture and technology framework of edge computing, which serves as a guideline for the ECC's future development.


News Article | December 9, 2016
Site: news.yahoo.com

An avalanche of ice that killed nine in western Tibet may be a sign that climate change has come to the region, a new study finds. The avalanche at the Aru glacier in July 2016 was a massive event that spilled ice and rock 98 feet (30 meters) thick over an area of 4 square miles (10 square kilometers). Nine nomadic herders and many of their animals died during the 5-minute cataclysm. It was the second-biggest glacial avalanche ever recorded, and it initially mystified scientists. "This is new territory scientifically," Andreas Kääb, a glaciologist at the University of Oslo, said in a statement in September. "It is unknown why an entire glacier tongue would shear off like this." Now, an international group of scientists thinks they know the reason: meltwater at the base of the glacier must have hastened the slide of the debris. "Given the rate at which the event occurred and the area covered, I think it could only happen in the presence of meltwater," Lonnie Thompson, a professor of Earth sciences at The Ohio State University, said in a statement. Thompson and his colleagues from the university's Byrd Polar and Climate Research Center worked with scientists from the Chinese Academy of Sciences to measure the icefall and recreate it with a computer model. They based the model on satellite and global positioning system (GPS) data, allowing for a precise understanding of how much debris fell. The simulations could only recreate the catastrophic collapse if meltwater was present. Liquid water at the base of a glacier speeds its advance by reducing friction, as is frequently seen in Greenland. Meltwater may also bring heat to the interior of the glacier, warming it from the inside, according to 2013 research on Greenland's glaciers. In western Tibet, the origin of the possible meltwater is unknown, Thompson said in the statement. However, the region is undoubtedly heating up. 
"[G]iven that the average temperature at the nearest weather station has risen by about 1.5 degrees Celsius (2.7 degrees Fahrenheit) over the last 50 years, it makes sense that snow and ice are melting and the resulting water is seeping down beneath the glacier," Thompson said. That's particularly alarming because western Tibet's glaciers have so far been stable in the face of warming temperatures, according to the researchers. In southern and eastern Tibet, the glaciers have been melting much more rapidly. Above-average snowfall in western Tibet has even expanded some glaciers, according to study author Lide Tian, a glaciologist at the Institute of Tibetan Plateau Research at the Chinese Academy of Sciences. Paradoxically, Tian said in a statement, that extra snowfall may have created more meltwater and made the devastating avalanche more likely. A second avalanche hit just a few kilometers away in September 2016. No one was harmed in that icefall, but Kääb and his colleagues said that the two collapses, so close in time and space, were unprecedented.


The government-backed effort, known as the Grain-for-Green Program, has transformed 28 million hectares (69.2 million acres) of cropland and barren scrubland back to forest in an effort to prevent erosion and alleviate rural poverty. While researchers around the world have studied the program, little attention has been paid to understanding how the program has affected biodiversity until now. New research led by Princeton University and published in the journal Nature Communications finds that China's Grain-for-Green Program overwhelmingly plants monoculture forests and therefore falls dramatically short of restoring the biodiversity of China's native forests, which contain many tree species. In its current form, the program fails to benefit, protect and promote biodiversity. Following a literature review, two years of fieldwork and rigorous economic analyses, the researchers found the vast majority of new forests contain only one tree species. While these monocultures may be a simpler route for China's rural residents—who receive cash and food payments, as well as technical support to reforest land—the single-species approach brings very limited biodiversity benefits, and, in some cases, even harms wildlife. The researchers conclude that restoring the full complement of native trees that once grew on the land would provide the best outcome for biodiversity. If native forests are unachievable within the current scope of the program, the researchers recommend mixed forests—which contain multiple tree species and more closely resemble natural forests—as a second option. Mixed forests better protect wildlife than monoculture forests, and would not financially burden farmers participating in the program. Both native and mixed forests also help to mitigate climate change. 
"Around the world, people are leaving rural areas and moving into cities, potentially creating new opportunities to restore forests on abandoned farmland," said co-author David Wilcove, professor of ecology and evolutionary biology and public affairs in Princeton's Woodrow Wilson School of Public and International Affairs and the Princeton Environmental Institute. "In many places, we're seeing efforts to reforest areas that have once been cleared, and China is the first country to do it on this large of a scale," he said. "The critical policy question is how to restore forests that provide multiple benefits to society, including preventing soil erosion, providing timber and sustaining wildlife. China has an opportunity to do it right and turn these monocultures into mixed or native forests that will be more valuable for wildlife in future years." "If the Chinese government is willing to expand the scope of the program, restoring native forests is, without doubt, the best approach for biodiversity," said lead author Fangyuan Hua, a postdoctoral research associate in the Program in Science, Technology and Environmental Policy in Princeton's Woodrow Wilson School. "But even within the current scope of the program, our analysis shows there are economically feasible ways to restore forests while also improving biodiversity." During the Great Leap Forward in the late 1950s, China converted millions of hectares of native forest to cropland. The country's unprotected forests continued to be heavily exploited in the decades that followed, but without an effective conservation system in place. After a series of powerful floods in the late 1990s, China's government launched a series of landmark ecological initiatives aimed at controlling soil erosion, including the Grain-for-Green Program. 
The program is in place in 26 of China's 31 mainland provinces and, while its central goal is to prevent erosion, most of the reestablished forest is now used for the production of timber, fiber, tree fruits and other cash crops. Rural residents are encouraged with cash and food incentives to plant forests, shrubs and grasslands, but there seems to be little consideration for biodiversity in determining what is planted. The research team—which included scholars from the Chinese Academy of Sciences, Sichuan University, the University of East Anglia in the United Kingdom and the University of Vermont—wanted to investigate how these approaches to planting influenced biodiversity. They examined four specific questions. "We asked: What types of forest are being established by the program across China?" Hua said. "Then, focusing on a particular region, we asked: How does the biodiversity of the new forests compare to the biodiversity of the croplands they are replacing? How do the new forests compare to native forests? And, would planting more diverse forests result in any biodiversity benefits while also being economically feasible?" The team examined 258 publications, most of which were written in Mandarin, to determine the current tree composition within forests planted by the program. Although the program included a large number of species across China as a whole, they found that the majority of individual forests were planted with only one tree species, such as bamboo, eucalyptus or Japanese cedar. Only three locations actually planted forest native to the area. "To our knowledge, this is the first nationwide synthesis of the tree-species composition of forests reestablished under the program," Hua said. "This is essential to understanding the program's biodiversity implications." Next, the team zeroed in on Sichuan Province in south-central China and conducted fieldwork on bird and bee diversity across all seasons. 
Birds and bees are good indicators of the overall biodiversity of a particular area, the researchers noted. "Birds are sensitive to the types of trees, the overall age of the forest and the insects within the forest, and bees depend more on resources like pollen or nectar from the understory. Together, these two taxa provide a well-rounded picture of biodiversity within a forest," Hua said. Birds were surveyed using point counts, a method that entails counting the birds seen and heard from a grid of points, separated by set distances, placed in the forests and cropland. The bee species were collected and identified using DNA barcoding. All fieldwork was conducted across different types of land, including monocultures, mixed forests, cropland and native forest. The researchers found that reforesting land with monocultures resulted in more harm than good for birds. In regions with monocultures, there were fewer bird species, and birds tended to be less abundant. Mixed forests, however, harbored more bird species and similar overall numbers of birds compared with cropland. The bees suffered from reforestation regardless of forest type, likely because of the lack of floral resources in replanted forests. Overall, the best environment for birds and bees is native forest, the researchers found, as opposed to the forests reestablished under the Grain-for-Green Program. "Together, our findings point to the enormous potential of biodiversity benefits that China's Grain-for-Green Program has yet to realize," Hua said. In the final part of the study, the researchers conducted economic analyses in order to understand the economic impacts of reforestation. They interviewed 166 households and asked what percentage of household income came from forest production. The researchers also calculated the average annual cost of—and income from—forest production per hectare across different types of forests.
The median and mean percentages of annual household income contributed by forest production were 5 percent and 12.8 percent, respectively. The net annual profits were not that high, hovering around $400 per hectare (roughly $160 per acre). In terms of profit, mixed forests yielded gains similar to those derived from monocultures. Therefore, switching to a mixed forest, which would improve biodiversity, is unlikely to pose economic risks to households, the researchers concluded. "The work done by Fangyuan and her team is an enormous task," Wilcove said. "These data are crucial. Restoring forests is a tremendously positive thing to do for the world, but you can get a lot more bang for the buck in terms of benefits to society if you know how to do it right based on sound biological and economic data. Fangyuan's work provides this type of keen analysis." More information: Fangyuan Hua et al, Opportunities for biodiversity gains under the world's largest reforestation programme, Nature Communications (2016). DOI: 10.1038/ncomms12717
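The per-area figures quoted above can be sanity-checked with a quick unit conversion; the $400/ha profit is the article's rounded number, not raw study data, and the conversion factor is the standard 1 hectare = 2.471 acres.

```python
# Quick check that the article's per-hectare and per-acre figures agree.
ACRES_PER_HECTARE = 2.471  # standard conversion factor

profit_per_hectare = 400  # US dollars, approximate net annual profit
profit_per_acre = profit_per_hectare / ACRES_PER_HECTARE
print(f"${profit_per_acre:.0f} per acre")  # ~$162, matching "roughly $160"
```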


News Article | April 21, 2016
Site: www.techtimes.com

A 3D printer designed for use in space has been constructed by researchers in China. Initial reports suggest it is currently the largest and most versatile device of its type anywhere on (or above) Earth. The Chinese Academy of Sciences designed the printer specifically to operate in the microgravity environment of outer space. It is intended to quickly manufacture parts needed for spacecraft while the ship is in orbit. If this new design succeeds at producing affordable spacecraft equipment quickly, the invention could eliminate the need to carry much of the redundant equipment currently stowed aboard spacecraft. Such an advance could significantly lower the cost of reaching space and carrying out missions. In March, NASA launched its own 3D printer to the International Space Station (ISS). However, Chinese researchers state their device is able to print out parts that are 20 percent larger than its American counterpart can produce. China is not participating in the ISS program, but its space agency hopes to launch its own space station by the year 2020. The nation has been barred from the ISS since 2011, when the U.S. Congress passed legislation forbidding official American contact with the Chinese space program over national security concerns. This new 3D printer, and technology derived from the invention, could be used to assist in the construction and maintenance of the upcoming Chinese-managed orbiting outpost. "Scientists with both CAS's Chongqing Institute of Green and Intelligent Technology and the Technology and Engineering Center for Space Utilization were behind the two-year-long project," the Chinese Academy of Sciences said. During 93 parabolic test flights conducted in France, the 3D printer was shown to operate perfectly in a microgravity environment, constructing products from five different materials using two different printing technologies.
The printer is capable of production in a range of gravitational environments and accelerations, and even while on a vibrating surface, researchers announced. One of the goals that NASA has set for itself in the near future is placing a human crew on the surface of Mars. Currently, that mission is scheduled to touch down on the Red Planet sometime in the mid-2030s. Utilizing 3D printers on board these flights could allow space travelers to construct replacement equipment as needed, greatly reducing the cost and timescale of such a mission.


News Article | November 30, 2016
Site: en.prnasia.com

BEIJING, Nov. 30, 2016 /PRNewswire/ -- Today, the Edge Computing Consortium (ECC) was officially established in Beijing, China. The initiative was jointly created by Huawei Technologies Co., Ltd., the Shenyang Institute of Automation of the Chinese Academy of Sciences, the China Academy of Information and Communications Technology (CAICT), Intel Corporation, ARM, and iSoftStone. The ECC aims to build a cooperative platform for the edge computing industry that will give impetus to openness and collaboration in the Operational Technology (OT) and Information and Communications Technology (ICT) industries, nurture industrial best practices, and stimulate the healthy and sustainable development of edge computing. Today's global digital revolution is driving a new round of industrial restructuring. Through the digital transformation of industries, products are becoming intelligent and interconnected. In-depth coordination and convergence of OT and ICT help improve industrial automation, meet the customized requirements of products and services, promote full-lifecycle transformation from products to service operations, and trigger the innovation of products, services, and business models. This will have a lasting impact on the value chain, supply chain, and ecosystem. Yu Haibin, Chairman of the ECC and Director of the Shenyang Institute of Automation, Chinese Academy of Sciences, said, "In the 13th Five Year Plan, China launched two national strategies: the integration of digitization and industrialization, and 'Made in China 2025'. Both place heavy demands on ICT and OT convergence, and edge computing is key to supporting and enabling that convergence. Meanwhile, industrial development is also facing a turning point. "Industrial automation technology systems will evolve from layered architectures and information silos to an IoT, cloud computing, and Big Data analytics architecture.
Amidst this evolution, edge computing will bolster a distributed, self-controlling industrial automation architecture. Therefore, the ECC will keep an eye on the design of the architecture and the choice of technical roadmap, and will promote industrial development through standardization. Building an ecosystem will also be a focus," continued Yu Haibin. The ECC pursues the OICT concept, under which OT, information technology (IT), and communications technology (CT) resources integrate and coordinate with one another, and holds to a spirit of consensus, unity, and win-win cooperation to drive the consortium's healthy development. The ECC strives to advance cooperation among industry resources from the government, vendor, academic, research, and customer sectors. The Edge Computing Consortium's White Paper was also released at the 2016 Edge Computing Industrial Summit, during the ECC's launch ceremony. It examines the edge computing industry's trends and major challenges, elaborates on the definition and scope of edge computing, presents the ECC's top-level design and operational model, and formulates the reference architecture and technological framework of edge computing, guiding the ECC's future development.


News Article | November 4, 2015
Site: www.washingtonpost.com

Human attitude toward risk is a complicated thing — and it doesn’t always seem to make sense on the outside. In a lead-zinc mining village in China’s western Hunan province, for instance, scientists recently observed some confusing patterns when it came to the villagers’ perceptions of the environmental and human health risks associated with mining for heavy metals. In their study, published on Oct. 22 in the Journal of Environmental Psychology, the researchers found that villagers who were not directly involved with the mines (which were privately operated, not government-owned) perceived the risks associated with mining to be much higher, and were more likely to oppose the practice than people who actually worked in the mines and were more directly at risk of heavy-metal toxicity. The researchers conducted the study by distributing questionnaires, which asked 220 local villagers questions about their level of concern for the effects of pollution on their village, crops and families; how much benefit they perceived themselves to be deriving from the mining operations; and whether they supported or opposed lead-zinc mining in the village. The researchers chose to focus their study on lead-zinc mining specifically because of its prevalence in China and its demonstrably serious health and environmental consequences, said Shu Li, a professor at the Chinese Academy of Sciences’ Institute of Psychology and the study’s lead author, in an e-mail to The Post. “China is the world’s largest producer of lead and zinc, accounting for nearly half of the global lead mine production and more than a third of the global zinc mine production,” Li said in the e-mail. “Lead-zinc mining and smelting activities are, however, some of the primary sources of heavy metals pollution.” Lead is a toxic metal, which can contaminate the environment and cause serious health problems in humans. 
According to the World Health Organization, lead exposure accounts for 0.6 percent of the global burden of disease. It’s capable of affecting many of the body’s functions and is particularly known for its ability to cause neurological damage, especially in children. It’s been a particular concern in China, where thousands of cases of lead poisoning have been reported in recent years. One of the problems could be that environmental regulations in China are not always adequately enforced, suggested Yixiu Wu, a toxics campaigner with Greenpeace East Asia who works in China. “It’s a mixed picture because the regulation [is] actually quite stringent in terms of evaluating the concentration of heavy metals in the air and in the water,” Wu said. “But the disadvantage is that most of the time there is not enough monitoring to make sure that the regulation is being followed up.” Wu has been involved in several Greenpeace-led studies of heavy-metal pollution in China, including a study that found widespread lead and cadmium contamination near China’s largest lead mine in the Yunnan province. So it seemed counterintuitive to find, in the new psychology study, that people who are more directly involved with the mining process — and, thus, more susceptible to its risks — were less concerned. It’s a phenomenon the researchers refer to as the “psychological typhoon eye” effect, and it’s been observed in other cases as well. In fact, according to lead author Li, the researchers were inspired to conduct the study after witnessing a similar phenomenon following the 2008 Wenchuan earthquake, which killed tens of thousands of people. In that case, shortly afterward, research showed that people in the most severely devastated areas were least concerned about another earthquake. 
While such attitudes might be motivated by different factors in different cases, the researchers’ theory in the case of the lead-zinc mining community is that risk perception is driven by the perceived benefits of the risky activity. In other words, people who stand to benefit more from the mines themselves (for instance, by working and earning money from them) are more likely to perceive their dangers as being lower. And, indeed, the researchers’ surveys showed that people who perceived more benefit from the mining also perceived less risk. People who saw less benefit perceived greater risk — and these people were also more likely to oppose lead-zinc mining in the community at all. And from a psychological standpoint, these results aren’t surprising at all, said Stuart Carlton, a coastal ecosystem and social science specialist at Texas Sea Grant, who was not involved in the study but has conducted other research on human perceptions of environmental issues, including climate change. “Whether or not [people] have a stake in something changes the way they perceive risks related to it,” Carlton said. What’s important to remember, he said, is that people don’t generally evaluate risk in rational ways, as a scientist might. “People assess risks based on their feelings about it, their experience with it,” he said. And in the brain, risk and benefit are often tied into the same process, said Paul Slovic, a professor of psychology at the University of Oregon and president of Decision Research, who was also not involved in the study. “The brain doesn’t keep these separate,” he said. “They kind of blend these into an overall feeling of ‘this is something good or bad.’” This means that perceived benefits and perceived risk are often negatively correlated in the mind — meaning the greater benefit a person sees in an activity, the less risk he or she will associate with it, and vice versa. It’s a phenomenon that’s shown up time and again in the case of environmental issues. 
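The negative correlation between perceived benefit and perceived risk described above can be made concrete with a small illustration; the 1–5 survey scores below are invented for demonstration and are not data from the study.

```python
# Hypothetical illustration of the risk-benefit pattern described above:
# invented 1-5 survey scores where higher perceived benefit (e.g., mine
# workers) goes with lower perceived risk, yielding a negative Pearson r.
def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)


benefit = [5, 4, 4, 3, 2, 2, 1]  # hypothetical perceived-benefit scores
risk = [1, 2, 2, 3, 4, 5, 5]     # hypothetical perceived-risk scores
print(f"r = {pearson_r(benefit, risk):.2f}")  # strongly negative
```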
One example is in the public’s reactions to early communication efforts by the nuclear power industry, Slovic said. One comparison that the industry made was to say that the risks associated with a properly operated nuclear plant were less than the risks of driving a car every day, he said. “People did not take kindly to that comparison,” he said. “They felt these are apples and oranges. I’m in control of my car, I’m getting benefit from driving my car — I don’t necessarily feel that I’m getting a benefit from a nuclear power plant in the region.” But in the case of the lead-mining village, Wu, the Greenpeace campaigner, said she’s concerned that the villagers’ attitudes could have been compounded by a lack of adequate education on the risks of heavy-metal contamination — an issue she said is widespread throughout the country. “I don’t think even the workers working in the mines know enough about it,” she said. “I think they know very little about how this could potentially harm their health or harm their family’s health.” But when it comes to changing attitudes toward risk, mere education is often not enough, Carlton said. This is an issue that arises frequently in conversations surrounding anthropogenic climate change, where research has shown that simply presenting facts and statistics to climate doubters is rarely enough to change their minds. “Information is not education, and education is not behavior change,” Carlton said. “People do not behave purely rationally around risks.” But understanding the factors that influence risk perceptions could help activists and policymakers become better communicators on environmental issues, nonetheless. Changing the way costs and benefits are framed is one suggestion the study’s authors make, for instance by more clearly communicating the cost of lead-zinc mining in a way that minimizes its benefits — or by promoting other livelihoods, such as cash-crop farming, that could provide better benefits with fewer risks. 
Such strategies could help influence the kinds of environmental policies citizens support and oppose, as shown in this study. “Policymakers and managers in risk management should think twice about the desires of different interest groups before they act,” the authors wrote.


News Article | November 2, 2016
Site: www.sciencemag.org

Mark Hutchinson could read the anguish on the participants’ faces in seconds. As a graduate student at the University of Adelaide in Australia in the late 1990s, he helped with studies in which people taking methadone to treat opioid addiction tested their pain tolerance by dunking a forearm in ice water. Healthy controls typically managed to stand the cold for roughly a minute. Hutchinson himself, “the young, cocky, Aussie bloke chucking my arm in the water,” lasted more than 2 minutes. But the methadone patients averaged only about 15 seconds. “These aren’t wimps. These people are injecting all sorts of crazy crap into their arms. … But they were finding this excruciating,” Hutchinson says. “It just fascinated me.” The participants were taking enormous doses of narcotics. How could they experience such exaggerated pain? The experiment was Hutchinson’s first encounter with a perplexing phenomenon called opioid-induced hyperalgesia (OIH). At high doses, opioid painkillers actually seem to amplify pain by changing signaling in the central nervous system, making the body generally more sensitive to painful stimuli. “Just imagine if all the diabetic medications, instead of decreasing blood sugar, increased blood sugar,” says Jianren Mao, a physician and pain researcher at Massachusetts General Hospital in Boston who has studied hyperalgesia in rodents and people for more than 20 years. But how prevalent hyperalgesia is, and whether it plays a role in the U.S. epidemic of opioid abuse and overdose, is unclear. A lack of reliable testing methods and a series of contradictory papers have created believers and skeptics. A few researchers, like Mao, think hyperalgesia is an underappreciated puzzle piece in the opioid epidemic—a force that can pile on pain, drive up doses, and make it harder for chronic users to come off their drugs. 
Some of those researchers are looking for ways to turn down hyperalgesia, to help patients function on lower doses of their oxycodone, for example, or make it easier to taper off it altogether. Others see OIH as an oddity in the literature—real, and a powerful clue to the workings of  pain pathways, but unlikely to tighten the grip of opioids on most patients. Hutchinson thinks the majority of physicians are either unaware of hyperalgesia or unconvinced of its importance. “I think if you surveyed prescribers of opioids, they would be divided probably 60–40.” Paradoxical as it may seem, OIH makes evolutionary sense. “Nature didn’t come up with pain just to torture mankind,” says Martin Angst, an anesthesiologist and clinical pharmacologist at Stanford University in Palo Alto, California. Pain causes us to recoil from a hot stove and to stay off an injured leg while it heals. And when it’s crucial that we temporarily ignore pain—say, when we run on that injured leg to evade a charging lion—the body has a way of numbing it, in part by releasing its own opioids. These natural molecules bind to receptors on neurons to block pain signals and activate reward centers in the brain. But doses of prescription opioids are orders of magnitude higher than our endogenous levels, Angst says. Confronted by these, “your biology fights back and says, ‘I’m blindfolded to pain by all these chemicals. I need to be able to sense pain again.’” Mao was among the first to delve into potential mechanisms of OIH in an animal model. In 1994, while at Virginia Commonwealth University in Richmond,  he and his colleagues showed that after 8 days of spinal morphine injections, rats were quicker to pull their paws away from a gradually heated glass surface. The animals’ baseline pain threshold had changed, and the effect was something more than tolerance, in which the body requires increasing doses of a drug to get the same effect. 
In this case, a higher dose could actually increase sensitivity to pain. The researchers found they could reverse the hyperalgesic effect by blocking certain receptors on neurons in the animals’ spinal cord. These N-methyl-D-aspartate (NMDA) receptors pick up chemical signals—notably an excitatory molecule called glutamate—released by sensory neurons projecting from the skin and organs, and transmit pain signals up to the brain. Researchers already knew that even without opioids, some people with chronic pain from nerve damage or fibromyalgia, for example, experience hyperalgesia when normal pain signaling gets reinforced and amplified over time. It appeared that, at least in animals, opioids had a similar effect. By 2000, Mao was turning his attention to patients, and the population of opioid users was expanding. Doctors had begun to consider the drugs relatively safe options for managing chronic pain. With the release and aggressive marketing of the long-acting narcotic OxyContin in 1996, a class of drugs that had largely been reserved for cancer patients was becoming a go-to treatment for conditions such as lower back pain. As prescribing skyrocketed, so did overdoses. U.S. deaths from prescription opioids have roughly quadrupled in the last 2 decades, reaching 21,000 in 2014. Making things worse, abundant prescription opioids have been diverted for recreational use, which has driven up rates of heroin addiction as users have sought cheaper or more accessible alternatives. Both prescription and illegal opioids kill when high doses slow breathing, especially when combined with alcohol or antianxiety drugs called benzodiazepines. “I’m not sure you could find an example of physicians doing more harm to human beings than we have achieved in our liberal opiate prescribing,” says David Clark, an anesthesiologist at Stanford. Mao and others wondered whether hyperalgesia was another important opioid side effect. 
People might be seeking a higher dose as drug-induced pain compounded the original pain, he thought. If so, doctors who ignore hyperalgesia might bump up the dose when the right decision was to reduce it. And when a patient tried to taper off a drug, a temporarily lowered pain threshold might make it harder for them to manage without it. “If they’re hyperalgesic, they can just go back to the drug again to feel okay,” says Jose Moron-Concepcion, a neuroscientist at the Washington University School of Medicine in St. Louis in Missouri. The evidence for hyperalgesia is clearest in people taking extreme doses—for instance, in opioid abusers or terminal cancer patients managing severe pain. Surgical patients given large amounts of the opioid remifentanil have shown signs of hyperalgesia; they have larger areas of soreness around their wounds and seem predisposed to chronic pain following surgery. But what about patients who take lower doses of opioids daily over months or years to manage chronic pain? As a pain specialist at a large teaching hospital, Mao frequently encounters patients who can’t find relief from increasing opioid doses and who tell him that their pain has become worse—diffuse, nagging, and harder to pinpoint. But just how many people experience OIH, and at what opioid dose, is hard to say. The phenomenon can be very hard to distinguish from tolerance, when pain increases as the drug loses its effectiveness over time. (It’s also possible that a patient’s underlying condition has changed, or that the chronic pain itself has kicked their pain signaling into high gear.) Because diagnosing hyperalgesia can be a guessing game in the clinic, some researchers have turned to the lab. They have tried to document changing pain thresholds with quantitative sensory tests, like the so-called cold pressor test Hutchinson witnessed in the methadone patients in Australia, or contraptions that apply heat or pressure to the skin. 
But the studies have been small and the results inconsistent. “Nobody has actually shown that that particular stimulus in a human being is a valid way to say, ‘Yes, this person has become hyperalgesic,’” Angst says. In 2006, for instance, a team that included Angst and Clark gave the cold pressor test to six people with chronic lower back pain before and after a monthlong course of morphine pills. After the drug treatment, the team found signs of hyperalgesia: On average, the subjects registered pain from the ice water about 2 seconds earlier, and removed their hands about 8 seconds earlier, than they had beforehand. But those results didn’t hold up in a larger group of 139 patients randomized to take opioids or placebo, nor did they appear in a different pain test that applied a gradually heated probe to the forearm. Then in 2013, a study with a different methodology seemed to confirm the effect. A research team in Israel reported evidence of hyperalgesia in 17 of 30 patients with radiating spinal nerve pain by asking them to rate the intensity of heat pain on a numerical scale before and after a 4-week course of hydromorphone. If you can’t reliably diagnose hyperalgesia, it’s hard to predict its long-term effects, says Michael Hooten, an anesthesiologist at the Mayo Medical School in Rochester, Minnesota. His group found evidence in 91 patients tapering off opioids that those whose doses were higher at the start, forcing them to make greater reductions over the 3-week program, had worse measures of heat pain hyperalgesia. But the team wasn’t able to track these patients long-term to ask the bigger questions: How long until their pain thresholds bounced back to normal? Do hyperalgesic patients who manage to quit taking opioids ultimately see improvements in pain? Are hyperalgesic patients more or less prone to addiction or relapse? For some, this lack of evidence makes research into hyperalgesia look like a dead end. 
“When I go to work every day, I don’t think about opioid-induced hyperalgesia,” says Gary Bennett, a pain researcher at the University of California in San Diego. “We know that it’s real. We don’t know how important it is, and it’s really, really hard to answer that question, so let’s move on.” Mao isn't ready to move on. He believes the risk of hyperalgesia should motivate doctors to try tapering patients off their opioids when their pain worsens without an obvious cause. But in his experience, only about a third of chronic pain patients are willing to try that. So he’s hoping for a different solution: a drug that targets the mechanisms behind hyperalgesia and that might be given alongside an opioid, either when it’s first prescribed or when a doctor suspects OIH. Mao is recruiting patients for clinical trials to test two candidate drugs. One is ketamine, an anesthetic that blocks NMDA receptors. The other, guanfacine, is currently used to treat high blood pressure and is thought to keep sensory neurons from releasing glutamate into the spinal cord. A team led by Peggy Compton of Georgetown University in Washington, D.C., meanwhile, is investigating a pain and antiseizure drug called gabapentin that may block neural transmission to reduce excessive pain signals. Other groups are attacking opioid side effects, including hyperalgesia, from a very different angle. In the early 2000s, researchers began exploring the role of glia, star-shaped immune cells in the brain and spinal cord, which were traditionally thought to function as mere “housekeepers,” offering structural support for neurons and removing debris. But when the immune system becomes activated in response to an illness or injury, glia in regions associated with pain processing seem to take on another role: They release inflammatory molecules that interact with nearby neurons to amplify pain signals. 
In 2001, researchers at the Chinese Academy of Sciences in Shanghai reported that chronic morphine administration in rats activated glial cells called astrocytes in the spinal cord. Subsequent studies showed that inhibiting the inflammatory molecules released by glia could reverse hyperalgesia and tolerance in the rats. The results suggested that opioids may trigger glia to set off system-wide pain signaling that both counteracts the pain relief from the drug and makes the body generally more sensitive to pain. Many see dampening this inflammatory response as a promising way to fight hyperalgesia, because it would not interfere with opioids’ pain-relieving activity on neural receptors. Several efforts are underway. The San Diego, California–based biotech company MediciNova recently completed a phase II trial of a glia-inhibiting drug called ibudilast, already approved as an asthma treatment in Japan, to relieve pain and treat withdrawal in opioid abusers. A study led by researchers at Yale University is testing the antibiotic acne medication minocycline, which is also thought to block glial activation in the brain. And research spun out of neuroscientist Linda Watkins’s group at the University of Colorado in Boulder is testing a new pain drug that may tame glia in the spinal cord by blocking a signaling protein on their surface. If inflammation turns out to be a key driver of OIH, it might also point the way to a better test for the effect, says Lesley Colvin, a pain researcher at the University of Edinburgh. Markers of inflammation in the blood might correlate with clinical signs of hyperalgesia or declining pain thresholds on sensory tests. Colvin says she already sees strong evidence of hyperalgesia in high-dose opioid users at the clinic where she works. With so much at stake, she is eager to understand the phenomenon and how it might affect them long term. “Although it’s complicated,” she says, “that doesn’t mean we shouldn’t try and work out the details.”
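The quantitative sensory tests described in this piece, like the cold pressor test, boil down to a paired before-and-after comparison of each patient's pain latencies. The sketch below shows that arithmetic on entirely hypothetical numbers; the invented values are tuned only to echo the roughly 2-second average shift in pain onset that the 2006 morphine study reported, and none of them come from real data.

```python
# Paired pre/post cold-pressor latencies in seconds until the subject first
# reports pain. All values are hypothetical, for illustration only.
pre_onset  = [12.0, 15.5, 10.2, 18.0, 14.3, 11.0]   # before opioid course
post_onset = [10.5, 13.0,  8.9, 15.2, 12.6,  8.8]   # after opioid course

def mean_paired_difference(before, after):
    """Mean of (before - after); a positive value means pain was
    reported earlier after treatment, consistent with hyperalgesia."""
    diffs = [b - a for b, a in zip(before, after)]
    return sum(diffs) / len(diffs)

shift = mean_paired_difference(pre_onset, post_onset)
print(f"Pain reported {shift:.1f} s earlier on average after treatment")
```

A real analysis would add a paired significance test and a placebo arm, which is exactly where the small studies described above diverged.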


News Article | November 17, 2015
Site: phys.org

(Phys.org)—A team of researchers with Peking University, the Chinese Academy of Sciences and Tsinghua University has identified a protein that aligns with a magnetic field when polymerized and coupled with another well-known protein. In their paper published in the journal Nature Materials, the researchers suggest the protein complex may be the means by which many insects and animals orient themselves using the Earth's magnetic field.


News Article | December 16, 2015
Site: www.rdmag.com

As the year comes to a close, it’s only natural to speculate about what lies ahead. In the laboratory world, the “lab of the future” is always a hot topic. Augmented reality, robots as lab personnel, the complete elimination of paper—these subjects are all common discussion points around this time of year. But the fact is, we’re still years away from that version of a laboratory. Once the clock strikes 12:00 a.m. on Jan. 1, 2016, labs around the world will remain mostly unchanged. Spectrometers and chromatographs will still take up bench space, freezers and refrigerators will still be noisy and the lighting will still not be quite right. But that’s not to say the lab of 2016 is without its opportunities—and challenges. The discussion around the revolutionary CRISPR-Cas9 genome editing technique is expected to continue as scientists work to improve the system daily. Refined approaches and breakthroughs occurred frequently in 2015, even as scientists called for a moratorium on the controversial system. While there will always be scientific challenges in a laboratory, participants in a recent panel titled “Eye on the Future” at the R&D 100 Awards & Technology Conference pointed to two less traditional challenges the science and technology industry must overcome. Daryl Belock, VP of Innovation and R&D Collaboration at Thermo Fisher Scientific, said one of the biggest challenges in coming up with the next generation of technology is finding the right talent to do so. While Thomas Mason, Director of Oak Ridge National Laboratory (ORNL), and Dean Kamen, entrepreneur and inventor, agreed with Belock, they also pointed to another hardship facing the science industry—poor public perception.

Challenge 1: Scientific workforce

According to the National Science Board’s (NSB) biennial Science and Engineering Indicators 2014 report, the science and engineering (S&E) workforce has grown steadily over time.
Between 1960 and 2011, the number of workers in S&E occupations grew at an average annual rate of 3.3 percent, greater than the 1.5 percent growth rate of the total workforce. Additionally, during the 2007-2009 recession, S&E employment compared favorably to overall employment trends. While the idea of a shortage in the field has been pervasive recently, the numbers seem to indicate otherwise. It’s not so much finding employees as finding the right employee with the right skill set. “There always seems to be a small group of very talented people that stimulate breakthroughs in innovation,” said Belock. “It’s very easy to fall into a short-term vision of the future, where teams get into incrementalism in terms of what they are bringing to the market. Taking a longer-term view and developing talent, and rare talent, is critical.” Terry Adams, Shimadzu Scientific’s Vice President of Marketing, agrees that a long-term vision, and cultivating a culture that is committed to the long-term, is half the hiring battle these days. “We’re at a stage where we need experienced people, but they come with history and baggage based on their past jobs or what they perceive this job to be,” Adams told Laboratory Equipment. “[New employees] are surprised about our patience, how deliberate we are, how much planning we do—we work off of a very deliberate 9-year plan in 3-year increments. We’re not driven quarter to quarter or half to half. We’re thinking year to year.” As in every other industry, the aging of the baby boomer generation is having, and will continue to have, significant implications for science and technology. Five years ago, the oldest baby boomers were 60, and still a productive part of the workforce. In fact, according to the NSB report, between 1993 and 2010, increasing percentages of scientists and engineers in their 60s reported they were still in the labor force.
Whereas 54 percent of scientists and engineers between the ages of 60 and 69 were employed in 1993, the comparable percentage rose to 63 percent in 2010. However, more recent estimates from the Bureau of Labor Statistics indicate that the oldest baby boomers—of which about 10,000 turn 65 years old every day—have begun the wave of retirements, significantly shifting the country’s age demographics. “Fifty percent of our staff has been with us less than 10 years,” said ORNL’s Mason. “It’s tremendous [and] exciting, lots of new people and ideas coming in. But how can we create in them the kind of culture that will allow us to continue to push the envelope? That’s probably the thing I feel the greatest need around.” In industry specifically, a popular approach to hiring young staff members is to train them as early as possible, even while they are still in school. For example, Shimadzu (and other equipment manufacturers) have established Centers of Excellence in research-heavy universities. These centers are typically gifted with millions of dollars of a manufacturer’s state-of-the-art equipment. Students working in the center are then heavily exposed to the company’s equipment and techniques well before they look to enter the workforce. “[Because] the students are already being trained on our instruments, we can pluck them right out of school to start the lower-tier jobs and move everyone else up through the ranks,” Shimadzu’s Adams explained. “We try to grow our own talent. [The students] come in already knowing our technology and science, now it’s time for them to understand the business world. If we can move them up through the corporate ladder, they do fit the culture. They are already a part of it.” Adams also commented that, recently, the pool of qualified candidates has featured more women and minorities than in the past. According to the 2014 NSB report, women remain underrepresented in the S&E workforce, but to a lesser degree than in the past.
Additionally, the number and percentage of undergraduate and master’s degrees in most major S&E fields have increased for both women and minorities since 2000. “The only thing that has yet to be commoditized, the only thing that will add value is innovation,” said Kamen, the inventor of the insulin pump and Segway. “To do that innovation, more than ever, will require people. Smart people with vision and courage that will focus on the right things.”

Challenge 2: Public perception

The general public and scientists express strikingly different views on science-related issues, according to a Pew Research Center report that was published in January 2015. The report found significant differences in views on 13 science-related issues, the largest of which include:

• A 51-percentage-point gap between scientists and the public about the safety of eating genetically modified foods – 88% of American Association for the Advancement of Science (AAAS) scientists think eating GM food is safe, while 37% of the public believes that.
• A 42-percentage-point gap over the issue of using animals in research – 89% of scientists favor it, while 47% of the public backs the idea.
• A 40-percentage-point gap on the question of whether it is safe to eat foods grown with pesticides – 68% of scientists say that it is, compared with 28% of citizens.
• A 37-percentage-point gap over whether climate change is mostly caused by human activity – 87% of AAAS scientists say it is, while 50% of the public does.

“Scientists get an A+ for the accelerated rate at which technology is happening, but we get a C- for engaging the global public in understanding why it’s so important, why we need to keep moving with it and how we can do it in a responsible way,” said Kamen.
“If we don’t deliver that clear message over and over again until the world accepts it, we’re headed for a train wreck.” Overall, the report—based on two surveys performed in collaboration with AAAS—suggests science holds an esteemed place among citizens and professionals, but both groups are less upbeat about the scientific enterprise than they were in 2009—a concerning fact, and something to monitor closely in the coming years. From the public perspective, the trust decline could be due to a series of high-profile scientific papers found to be fraudulent, the most prominent of which was a study claiming “acid baths” offered an easy pathway to the generation of new stem cells called STAP. The papers, along with the authors from RIKEN, immediately gained notoriety. However, no one could duplicate the results, and complaints started surfacing just a few days after the papers’ release in January 2014. Six months later, Nature retracted the papers, citing critical errors and falsified data. But the damage was already done. News of this “amazing” new stem cell had already hit mainstream media, and the scandal—which saw one co-author take his own life—became a story all its own. “One of the concerns I have is you have instances where people have actually been doing deceitful things, and that gets discovered—in a sense you can say that is the system working or self-correcting. However, then you juxtapose that against some of the areas of science that are politically or socially controversial, like climate change, for example. In the minds of the general public, it’s hard to extinguish fraudulent behavior. If you’re not an expert in the field, it is extremely difficult to distinguish those cases,” said Mason. Despite that, the Pew Research Center found there is still broad public support for government investment in scientific research. Seven in 10 adults say government investments in engineering and technology and basic scientific research usually pay off in the long run.
Some 61 percent say government investment is essential for scientific progress, while 34 percent believe private investment is enough to ensure that scientific progress is made.

Opportunity: CRISPR

CRISPR-Cas9 is a genetic engineering technique developed by molecular biologists Jennifer Doudna and Emmanuelle Charpentier that is capable of quickly, easily and inexpensively performing precise changes in DNA. Cas9, a naturally occurring protein in the immune system of certain bacteria, acts like a pair of molecular scissors to precisely cut or edit specific sections of DNA. Since the technique’s debut in 2012, interest in it has increased dramatically, prompting scientists to gather at a summit earlier this month to establish guidelines for its responsible use in the future. The main concern with CRISPR (and similar techniques) is that it can be used to perform germline genetic modifications, which means making changes in a human egg, sperm or embryo. These modifications would be passed down for generations, impacting an entire lineage rather than just one person. However, at the same time, CRISPR’s ability to edit human DNA could eliminate genetic diseases. Fortunately, the recent uptick of interest in the technology has been accompanied by dramatic improvements in the past year—including multiple enhancements in just the last few months. While Cas9 is highly efficient at cutting its target site, a major drawback of the system has been that, once inside a cell, it can bind to and cut additional sites that are not targeted. This has the potential to produce undesired edits that can alter gene expression or knock a gene out entirely, which could lead to the development of cancer or other problems. But, researchers at the Broad Institute of MIT and Harvard and the McGovern Institute for Brain Research at MIT have devised a way to dramatically reduce “off-target editing” to undetectable levels.
Using previous knowledge about the structure of the Cas9 enzyme, Feng Zhang and colleagues were able to predict that replacing some of the positively charged amino acids in Cas9 with neutral ones would decrease the binding of “off-target” sequences much more than “on-target” sequences. After experimenting with various changes, Zhang’s team found that mutations in just three of approximately 1,400 amino acids dramatically reduced “off-target” cuts. The newly engineered enzyme, which the team calls enhanced S. pyogenes Cas9, or eSpCas9, will be useful for genome editing applications that require a high level of specificity. The lab has made the eSpCas9 enzyme available for researchers worldwide. “We hope the development of eSpCas9 will help address some of [safety] concerns [related to off-target effects],” said Zhang, who was a speaker at the International Summit on Human Gene Editing. “But we certainly don’t see this as a magic bullet. The field is advancing at a rapid pace, and there is still a lot to learn before we can consider applying this technology for clinical use.” Additional research out of Harvard’s Wyss Institute has recently added another layer of safety to CRISPR with the ability to reverse the spread of imposed genetic traits. George Church and Kevin Esvelt developed molecular confinement mechanisms to prevent gene drives from functioning in the wild by manipulating CRISPR’s biological components. By separating the guide RNA and the Cas9 protein so they are not encoded together in the same organism, or by inserting an artificial sequence into the target gene, gene drives can only be activated in lab organisms. According to Harvard, using this safeguard, essentially any population-level change mediated by a gene drive could be overwritten if the need ever arose.
In such a case, the originally imposed trait would be reversed and the biological machinery of the CRISPR gene drive system—the guide RNAs and the Cas9 protein—would remain present, albeit inactive, in the DNA of organisms. What’s more, the reversibility mechanism isn’t just a useful backup in case a gene drive ever had an unexpected side effect—the researchers believe the ability to impose or reverse gene drive effects could one day prove powerful for the management of disease-transmitting organisms, invasive species and crop-destroying insects. “The gene drive research community has been actively discussing what should be done to safeguard shared ecosystems, and now we have demonstrated that the proposed safeguards work extremely well and should therefore be used by every gene drive researcher in every relevant lab organism,” said Esvelt. Opportunity: CRISPR guidelines The International Summit on Human Gene Editing—hosted by the U.S. National Academies, UK Royal Society and the Chinese Academy of Sciences—concluded on Dec. 3 after three days of discussion on the scientific, ethical and governance issues associated with human gene editing technologies like CRISPR. The 12-member organizing committee acknowledged that basic and preclinical research is very much needed and should proceed; however, modified cells should not be used to establish a pregnancy. In fact, the committee cautioned against the clinical use of gene editing in regards to the germline, which can pass edits and mutations on to subsequent generations. “It would be irresponsible to proceed with any clinical use of germline editing unless and until (i) the relevant safety and efficacy issues have been resolved, based on appropriate understanding and balancing of risks, potential benefits, and alternatives, and (ii) there is broad societal consensus about the appropriateness of the proposed application,” the committee wrote in its statement. 
However, the green light was given to clinical applications of gene editing that are directed toward altering somatic cells only—or those cells whose genomes will not be transmitted to the next generation. Since the proposed clinical uses are intended to affect only the individual who receives care, the committee approved use of the technology within “existing and evolving regulatory framework.” Overall, the committee stopped short of calling for a permanent ban on editing human embryos and germline cells. “As scientific knowledge advances and societal views evolve, the clinical use of germline editing should be revisited on a regular basis,” committee members wrote. That regular basis will begin immediately, with the national academies that co-hosted the summit pledging to take the lead in creating an ongoing international forum. Over the next year, scientists and ethicists from the U.S., UK and China will convene to examine some of the issues raised at the meeting, and release an additional report with more concrete guidelines in late 2016.
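Two sets of numbers in the piece above lend themselves to a quick sanity check: the Pew percentage-point gaps are simple differences between scientist and public agreement, and the NSB growth rates compound to very different cumulative factors over the 1960–2011 window. A short Python check (all figures are taken from the text; applying the average annual rates uniformly across all 51 years is a simplifying assumption):

```python
# Pew figures from the text: percent agreeing, (AAAS scientists, general public).
pew = {
    "GM food is safe to eat":         (88, 37),
    "Animal research is acceptable":  (89, 47),
    "Pesticide-grown food is safe":   (68, 28),
    "Climate change is human-caused": (87, 50),
}
for issue, (scientists, public) in pew.items():
    print(f"{issue}: {scientists - public}-point gap")   # 51, 42, 40, 37

# NSB workforce growth, 1960-2011: average annual rates of 3.3% (S&E)
# versus 1.5% (total workforce), applied uniformly as a simplification.
years = 2011 - 1960                    # 51 years
se_growth = 1.033 ** years             # cumulative S&E growth factor
total_growth = 1.015 ** years          # cumulative total-workforce growth factor
print(f"S&E workforce: ~{se_growth:.1f}x; total workforce: ~{total_growth:.1f}x")
```

The compounding makes the article’s point concrete: a 3.3 percent annual rate roughly quintuples the S&E workforce over the period, while 1.5 percent only about doubles the total workforce.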


News Article | December 11, 2016
Site: www.techtimes.com

Climate change is taking a toll on some of the most stable parts of the Tibetan Plateau. The avalanche on July 17, 2016, is a case in point: more than 70 million tons of ice from the Aru glacier in western Tibet tumbled down a valley, killing nine nomadic yak herders living there and destroying property. The connection between climate change and the disaster was proposed by a team of international researchers who published an analysis of the July 2016 avalanche disaster in the Journal of Glaciology. To explore what caused the avalanche, researchers from the Chinese Academy of Sciences collaborated with two glaciologists from the Ohio State University's Byrd Polar and Climate Research Center, professors Lonnie Thompson and Ellen Mosley-Thompson. According to NASA estimates, the debris of the Aru avalanche, containing streams of ice and rock, was spread across 4 square miles, with a thickness of nearly 98 feet. Aside from the yak herders, the casualties included 350 sheep and 110 yaks in the village of Dungru. The debris spread makes it one of the most massive ice avalanches ever recorded. Comparable events include the avalanche from Kolka Glacier in the Caucasus in 2002, explained Andreas Kääb, a glaciologist at the University of Oslo. Using satellite data and GPS, scientists have been investigating how much ice fell in the first avalanche and the area it covered. The scientists noted that the avalanche only lasted for around 5 minutes but had buried 3.7 square miles of the valley floor within that short span of time. Thompson said the likely cause for the speed at which the ice came down could be meltwater at the glacier base that lubricated the ice. Computer simulations also pointed to meltwater as the only plausible explanation. Noting that the origin of the meltwater is still unknown, Thompson said the assumption is based on the average temperature at the nearest weather station, which has increased by about 1.5 degrees Celsius in the past 50 years.
The warmer climate could have driven up the melting of snow and ice, allowing meltwater to seep down beneath the glacier. Noting that nearby glaciers are also vulnerable, Thompson said the unfortunate part is there is no mechanism to predict such disasters. That proved true: researchers were unable to predict the collapse of a neighboring glacier in the same mountain range in September. Western Tibet is known as a stable region, unlike southern and eastern Tibet where glaciers are melting at a higher rate, which is why scientists are puzzled about the collapse of the Aru glacier. This area on the Tibetan Plateau has even experienced excessive snowfall, which added to the mass of the glaciers. Lead author of the paper Lide Tian, a glaciologist with the Institute of Tibetan Plateau Research, said the extra snowfall might have played a role in the avalanche by creating more meltwater, suggesting that once-stable regions could become extremely dangerous as climate change continues to accelerate glacier surging. © 2017 Tech Times, All rights reserved. Do not reproduce without permission.
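The article's own figures allow a rough consistency check on that "nearly 98 feet" debris thickness. Treating the "more than 70 million tons" as metric tons (an assumption; the article does not specify) and using a standard glacier-ice density of about 900 kg/m³, spreading that mass over the 3.7 square miles of buried valley floor implies an average depth of roughly 8 meters, so the 98-foot figure is plausibly a local maximum rather than an average:

```python
# Back-of-envelope: average debris depth implied by the reported figures.
MASS_KG     = 70e9       # "more than 70 million tons", read as metric tons
ICE_DENSITY = 900.0      # kg/m^3, typical for glacier ice
AREA_MI2    = 3.7        # square miles of valley floor buried
M2_PER_MI2  = 2.59e6     # square meters per square mile
FT_PER_M    = 3.2808

volume_m3   = MASS_KG / ICE_DENSITY     # ~7.8e7 m^3 of ice
area_m2     = AREA_MI2 * M2_PER_MI2
thickness_m = volume_m3 / area_m2       # average depth over the buried area

print(f"Implied average depth: {thickness_m:.1f} m "
      f"(~{thickness_m * FT_PER_M:.0f} ft)")
```

Back-of-envelope checks like this are useful when a reported maximum (the 98-foot thickness) and a reported total (the 70-million-ton mass) come from different measurements.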


News Article | January 27, 2016
Site: phys.org

Instead of having all their stellar progeny at once, globular clusters can somehow bear second or even third sets of thousands of sibling stars. Now a new study led by researchers at the Kavli Institute for Astronomy and Astrophysics (KIAA) at Peking University, and including astronomers at Northwestern University, the Adler Planetarium and the National Astronomical Observatories of the Chinese Academy of Sciences (NAOC), might explain these puzzling, successive stellar generations. Using observations by the Hubble Space Telescope, the research team has for the first time found young populations of stars within globular clusters that have apparently developed courtesy of star-forming gas flowing in from outside of the clusters themselves. This method stands in contrast to the conventional idea of the clusters' initial stars shedding gas as they age in order to spark future rounds of star birth. The study will be published in the Jan. 28 issue of the journal Nature. "This study offers new insight on the problem of multiple stellar populations in star clusters," said study lead author Chengyuan Li, an astronomer at KIAA and NAOC who also is affiliated with the Chinese Academy of Sciences' Purple Mountain Observatory. "Our study suggests the gaseous fuel for these new stellar populations has an origin that is external to the cluster, rather than internal." In a manner of speaking, globular clusters appear capable of "adopting" baby stars—or at least the material with which to form new stars—rather than creating more "biological" children as parents in a human family might choose to do. "Our explanation that secondary stellar populations originate from gas accreted from the clusters' environments is the strongest alternative idea put forward to date," said Richard de Grijs, also an astronomer at KIAA and Chengyuan's Ph.D. advisor. "Globular clusters have turned out to be much more complex than we once thought." 
Globular clusters are spherical, densely packed groups of stars orbiting the outskirts of galaxies. Our home galaxy, the Milky Way, hosts several hundred. Most of these local, massive clusters are quite old, however, so the KIAA-led research team turned their attention to young and intermediate-aged clusters found in two nearby dwarf galaxies, collectively called the Magellanic Clouds. Specifically, the researchers used Hubble observations of the globular clusters NGC 1783 and NGC 1696 in the Large Magellanic Cloud, along with NGC 411 in the Small Magellanic Cloud. Scientists routinely infer the ages of stars by looking at their colors and brightnesses. Within NGC 1783, for example, Li, de Grijs and colleagues identified an initial population of stars aged 1.4 billion years, along with two newer populations that formed 890 million and 450 million years ago. What is the most straightforward explanation for these unexpectedly differing stellar ages? Some globular clusters might retain enough gas and dust to crank out multiple generations of stars, but this seems unlikely, said study co-author Aaron M. Geller of Northwestern University and the Adler Planetarium in Chicago. "Once the most massive stars form, they are like ticking time bombs, with only about 10 million years until they explode in powerful supernovae and clear out any remaining gas and dust," Geller said. "Afterwards, the lower-mass stars, which live longer and die in less violent ways, may allow the cluster to build up gas and dust once again." The KIAA-led research team proposes that globular clusters can sweep up stray gas and dust they encounter while moving about their respective host galaxies. The theory of newborn stars arising in clusters as they "adopt" interstellar gases actually dates back to a 1952 paper. More than a half-century later, this once speculative idea suddenly has key evidence to support it. 
In the study, the KIAA researchers analyzed Hubble observations of these star clusters, and then Geller and his Northwestern colleague Claude-André Faucher-Giguère carried out calculations that show this theoretical explanation is possible in the globular clusters this team studied. "We have now finally shown that this idea of clusters forming new stars with accreted gas might actually work," de Grijs said, "and not just for the three clusters we observed for this study, but possibly for a whole slew of them." Future studies will aim to extend the findings to other Magellanic Cloud clusters as well as to Milky Way globular clusters. More information: Formation of new stellar populations from gas accreted by massive young star clusters, Nature, nature.com/articles/doi:10.1038/nature16493


News Article | February 15, 2017
Site: phys.org

But all bets are off if the students journey to the center of the Earth, à la Jules Verne's Otto Lidenbrock, or if they venture to one of the solar system's large planets, such as Jupiter or Saturn. "That's because extremely high pressure, like that found at the Earth's core or its giant neighbors, completely alters helium's chemistry," says Boldyrev, faculty member in USU's Department of Chemistry and Biochemistry. It's a surprising finding, he says, because, on Earth, helium is a chemically inert and unreactive element that eschews connections with other elements and compounds. The first of the noble gases, helium features an extremely stable, closed-shell electronic configuration, leaving no openings for connections. Further, Boldyrev's colleagues confirmed computationally and experimentally that sodium, never an earthly comrade to helium, readily bonds with the standoffish gas under high pressure to form the curious Na2He compound. These findings were so unexpected, Boldyrev says, that he and colleagues struggled for more than two years to convince science reviewers and editors to publish their results. Persistence paid off. Boldyrev and his doctoral student Ivan Popov, as members of an international research group led by Artem Oganov of Stony Brook University, published the pioneering findings in the Feb. 6, 2017, issue of Nature Chemistry. Additional authors on the paper include researchers from China's Nankai University, Center for High Pressure Science and Technology, Chinese Academy of Sciences, Northwestern Polytechnical University, Xi'an and Nanjing University; Russia's Skolkovo Institute of Science and Technology, Moscow Institute of Physics and Technology, Sobolev Institute of Geology and Mineralogy and RUDN University; the Carnegie Institution of Washington, Lawrence Livermore National Laboratory, Italy's University of Milan, the University of Chicago and Germany's Aachen University and Photon Science DESY.
Boldyrev and Popov's role in the project was to interpret the chemical bonding in the computational model developed by Oganov and the experimental results generated by Carnegie's Alexander Goncharov. Initially, the Na2He compound was found to consist of Na8 cubes, of which half were occupied by helium atoms and half were empty. "Yet, when we performed chemical bonding analysis of these structures, we found each 'empty' cube actually contained an eight-center, two-electron bond," Boldyrev says. "This bond is what's responsible for the stability of this enchanting compound." Their findings advanced the research another step. "As we explore the structure of this compound, we're deciphering how this bond occurs and we predicted that, adding oxygen, we could create a similar compound," Popov says. Such knowledge raises big questions about chemistry and how elements behave beyond the world we know. Questions, Boldyrev says, Earth's inhabitants need to keep in mind as they consider long-term space travel. "With the recent discovery of multiple exoplanets, we're reminded of the vastness of the universe," he says. "Our understanding of chemistry has to change and expand beyond the confines of our own planet." More information: A stable compound of helium and sodium at high pressure, Nature Chemistry, DOI: 10.1038/nchem.2716


News Article | December 21, 2016
Site: www.businesswire.com

SUZHOU, China--(BUSINESS WIRE)--Chinese firms need to innovate or face oblivion – that was the message from the country’s President Xi Jinping at the G20 summit this year. He said rising protectionism and a sluggish global economy meant “supply side reform” was necessary. It is a call to action that has not gone unheeded in Suzhou Industrial Park (SIP), which is staying ahead of the game by rolling out a raft of initiatives to attract cutting-edge tech firms and encourage outdated plants to become cleaner, smarter and faster. “It’s fair to say that everybody will eventually face these challenges but the eastern coastal areas are the first to see the glass ceilings in this regard,” says Barry Yang, Chairman of SIP. “Therefore, it’s no surprise that we took a proactive approach to explore a transformation route.” Coastal cities such as Suzhou are already losing their competitive edge to cheaper countries, and there is also an oversupply of lower-end products, which means companies, as well as the industrial parks where they are located, urgently need to move up the value chain if they want to survive. To make this happen, SIP is dishing out relocation payouts, tax breaks, interest-free loans and R&D grants, and setting up incubators for a series of high-value industries, in the hope of becoming the country’s next Silicon Valley.

Out With the Old, In With the New

SIP was created two decades ago as a joint-venture project between the Chinese and Singapore governments and is already an internationally competitive high-tech industrial park that resembles a garden-like metropolis. High-tech firms and services-based industries now make up about half of all manufacturing there. 
Among the latest innovative tenants to choose SIP are the Chinese Academy of Sciences’ electrical institute and nanotech research centre, R&D centers for big multinationals such as Microsoft and Siemens, and international collaborations such as the China-New Zealand Joint Center of Innovation and the China-Israel Joint Center of Healthcare Innovations. Meanwhile the old firms making simple products in polluting factories are on the way out. “Should tenants fail to upgrade their facilities voluntarily, they will get relocated elsewhere,” says Zhang Dongchi, Director of SIP’s Science, Technology & Informatization Bureau. “Hence we need to keep bringing in burgeoning firms.” Although there has been a steady outflow of tenants from traditional industries over the past five years, the park’s GDP has continued to grow – a testament to the strength of the higher-value sectors, Zhang says. The park has also launched a range of talent scouting programs, and under the latest overseas incubator program, launched this year, foreigners can incubate their projects at SIP and tap into its resources without themselves relocating to the city. “We want to use all available resources to attract innovative professionals from around the world,” says Barry Yang. “New mechanisms and approaches are critical for this and a more flexible and open mindset will help us bring in some of the world’s best people. This offshore talent incubator program is a showcase of our pursuit in this direction.” SIP aims to have over 500 projects nurtured in this cross-border program by 2020, and hopes about 30 such companies will eventually become publicly listed. Like any living, breathing organism, businesses need suitable conditions to thrive, and SIP is working hard to create the right business ecology to nurture its nascent hi-tech industries. 
From setting up a string of world-class research institutes to building state-of-the-art incubating centers that house a mixture of start-ups and mature enterprises, SIP hopes the clustering of academies and companies can help foster innovation and test-bed new ideas. One of the brightest prospects is the SIP CMO Base at Sangtian Island – a massive pharmaceutical and nanotechnology research and commercialization project whose first phase began operating this year. It is home to Chinese and international companies including Roche Diagnostics, BeiGene, Zai Lab, Beike Biotech, Beckman Coulter and Hyssen, a bio-nano company incubated successfully by SIP. SIP has also nurtured vertically-specialized incubators such as bioBAY, Nanotech National University Science Park and Nanopolis, which collectively help spearhead the development of nanotechnology in a wide range of areas including materials, energy, environmental protection, biology and medicine, and information and advanced manufacturing. SIP is now among the world’s top eight hubs for nanotechnology. This kind of ecosystem allows entrepreneurs to quickly prototype concepts into physical products, while the presence of more than 300 venture capital, private equity and fund management companies means start-ups can easily find financial support that covers all stages of growth. “A few years ago, most of the high-tech manufacturing plants were based in Shenzhen but we are starting to see more of them spring up in the park and around the Yangtze River Delta region,” says Wu Xiaozhen, Chief Administration Officer & HR Director at Aispeech, which specializes in developing voice control software for smart consumer products, including smart auto-gadgets, robots and home systems. Aispeech established itself in 2014 at SIP’s Dushuhu-based SISPARK (Suzhou International Science Park). “One of our clients is a robot maker in the area and we are also looking to partner up with more smart-product makers here. 
The maturing of the AI (artificial intelligence) industry in SIP is helping to close the loop in commercializing our research and latest technology,” Wu adds. Smart gadgets are the next big thing, and manufacturers around the world are rushing to create voice-activated gadgets that can interact seamlessly with humans and execute tasks smoothly. This has led to a boom in demand for AI researchers globally, and intense competition in this field means companies and industrial parks alike need to be even more creative in attracting talent. One such initiative allows scientists who have not yet joined the corporate sector to test their work, while keeping their academic status, at research labs set up jointly by Aispeech and SIP. Opportunities for collaboration between local and foreign companies – both looking for technological advancement – are also a catalyst for innovation. “This means investors in the same or supporting industries will be more willing to work together in areas such as strategic alliance, licensing, and joint venture,” says Deli Yang, R. Burr & D. Clark Professor of International Business at Trinity University. SIP’s total output is expected to exceed 500 billion yuan by 2020, with electronic & IT as well as machinery manufacturing surpassing 400 billion yuan. The hi-tech sectors, namely biopharma, nanotech and cloud computing, are targeted to exceed 200 billion yuan, compared with 110 billion yuan recorded in 2016. China has since 2008 been drawing hundreds of experts to the country by offering generous relocation and research grants. The progress achieved has led some industry experts to say China is now in a golden age of innovation and the technological gap between it and the US, Japan and major EU nations will narrow quickly. 
China has topped domestic patent filings for the past two years, outstripping the combined total of its next-closest followers, the US and Japan, according to the World Intellectual Property Organization. The number of patent applications received in China was 1.48 million in the first six months of 2016, up 38% from a year ago. “There isn’t anywhere else in the world that is spending as much effort as China to drive R&D in science and technology,” said Du Zhengming, senior vice president of BeiGene, a Nasdaq-listed biopharma company developing molecularly targeted cancer drugs that is setting up in the new SIP CMO Base at Sangtian Island. Du says China’s maturing system of trading in intellectual property is also boosting progress. He also points out that entrepreneurship is beginning to find its feet in China after defining itself for decades against entrepreneurship elsewhere. Companies that were founded to do the same thing as their foreign counterparts but better or cheaper still exist, he says, but there is now creative thinking too. “People here have been isolated for such a long time that it takes time to catch up on the journey of being truly original and creative,” says Du, an American citizen who has over 20 years of pharmaceutical experience. “But there is a lot of exciting work being done and I believe you will see Chinese companies leading in these fields in eight to ten years.”


News Article | April 20, 2016
Site: www.nature.com

The US Department of Agriculture (USDA) will not regulate a mushroom genetically modified with the gene-editing tool CRISPR–Cas9. The long-awaited decision means that the mushroom can be cultivated and sold without passing through the agency's regulatory process — making it the first CRISPR-edited organism to receive a green light from the US government. “The research community will be very happy with the news,” says Caixia Gao, a plant biologist at the Chinese Academy of Sciences’s Institute of Genetics and Developmental Biology in Beijing, who was not involved in developing the mushroom. “I am confident we'll see more gene-edited crops falling outside of regulatory authority.” Yinong Yang, a plant pathologist at Pennsylvania State University (Penn State) in University Park, engineered the common white button (Agaricus bisporus) mushroom to resist browning. The effect is achieved by targeting the family of genes that encodes polyphenol oxidase (PPO) — an enzyme that causes browning. By deleting just a handful of base pairs in the mushroom’s genome, Yang knocked out one of six PPO genes — reducing the enzyme’s activity by 30%. The mushroom is one of about 30 genetically modified organisms (GMOs) to sidestep the USDA regulatory system in the past five years. In each case, the agency's Animal and Plant Health Inspection Service (APHIS) has said that the organisms — mostly plants — do not qualify as something the agency must regulate. (Once a crop passes the USDA reviews, it may still undergo a voluntary review by the US Food and Drug Administration.) Several of the plants that bypassed the USDA were made using gene-editing techniques such as the zinc-finger nuclease (ZFN) and transcription activator-like effector nuclease (TALEN) systems. But until now, it was not clear whether the USDA would give the same pass to organisms engineered with science’s hottest new tool, CRISPR–Cas9. 
Yang first presented the crop to a small group of USDA regulators in October 2015, after being encouraged to do so by an APHIS official. “They were very excited,” Yang says. “There was certainly interest and a positive feeling” at the meetings. He followed up with an official letter of inquiry to the agency later that month. The USDA’s answer came this week. “APHIS does not consider CRISPR/Cas9-edited white button mushrooms as described in your October 30, 2015 letter to be regulated,” the agency wrote in a 13 April letter to Yang. Yang’s mushroom did not trigger USDA oversight because it does not contain foreign DNA from ‘plant pests’ such as viruses or bacteria. Such organisms were necessary for genetically modifying plants in the 1980s and 1990s, when the US government developed its framework for regulating GMOs. But newer gene-editing techniques that do not involve plant pests are quickly supplanting the old tools. The United States is revamping its rules for regulating GMOs, which collectively are known as the Coordinated Framework for Regulation of Biotechnology. To that end, the US National Academies of Sciences, Engineering and Medicine have convened a committee that is charged with predicting what advances will be made in biotechnology products over the next 5–10 years. It will hold its first meeting on 18 April. In the meantime, Yang is mulling over whether to start a company to commercialize his modified mushroom. Fruits and vegetables that resist browning are valuable because they keep their color longer when sliced, which lengthens shelf life. In the past 18 months, biotech companies have commercialized genetically engineered non-browning apples and potatoes. “I need to talk to my dean about that. We’ll have to see what the university wants to do next,” he says about the prospect of bringing his mushroom to market. But he notes that in September 2015, Penn State filed a provisional patent application on the technology.
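Yang's anti-browning edit illustrates a general point: deleting a handful of base pairs whose count is not a multiple of three shifts the reading frame and scrambles every downstream codon, typically knocking out the gene. A hypothetical sketch of the mechanism (the sequence and cut coordinates below are invented for illustration, not the real PPO gene):

```python
# Hypothetical illustration of a CRISPR-style small deletion causing a
# frameshift: removing a number of bases not divisible by 3 scrambles
# every downstream codon. The sequence is invented, not the real PPO gene.

def codons(seq):
    """Split a DNA sequence into triplet codons (ignoring trailing bases)."""
    return [seq[i:i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]

def delete(seq, start, n):
    """Delete n bases starting at index `start` (a Cas9-like small deletion)."""
    return seq[:start] + seq[start + n:]

wild_type = "ATGGCTCATTGGAAACCCGGG"
edited = delete(wild_type, 6, 2)  # remove 2 bp: 2 % 3 != 0, so frame shifts

print("wild type:", codons(wild_type))
print("edited:   ", codons(edited))
# Codons before the cut site still match; every codon after it no longer
# lines up with the wild type, so the encoded protein is disrupted.
```

A deletion of exactly three (or six, or nine) bases would instead remove whole codons and leave the rest of the frame intact, which is why the size of the deletion matters as much as its location.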


News Article | October 26, 2016
Site: www.eurekalert.org

Researchers led by Carnegie Mellon University's Neil M. Donahue have shown that semi-volatile organic compounds can readily diffuse into the billions of tiny atmospheric particles that inhabit the air, easily moving among them. The findings, published in the early online edition of the Proceedings of the National Academy of Sciences (PNAS), provide greater insight into how organic particles behave in the atmosphere. The air is full of microscopic airborne particles called aerosols. Aerosols can come from natural sources, like fires or sea spray, or they can come from man-made sources, like emissions from cars and power plants. As the aerosol particles travel through the atmosphere, they encounter other particle populations and chemically evolve, resulting in a dense soup of oxidized organic matter. While many atmospheric particles start off too small to influence climate, as they grow their potential to impact climate increases. Understanding how these particles change is crucial to understanding how they affect the environment and human health. As the particles grow and travel through the atmosphere, they pick up material called secondary organic aerosol (SOA). Much of the SOA consists of semi-volatile organic compounds (SVOCs) that can diffuse into particles--moving from one particle, entering the gas phase, and moving to another particle. Recently, there has been controversy over whether SVOCs are able to diffuse into "glassy" atmospheric particles. If SVOCs can't diffuse into these particles, they will not condense onto them, which will slow down particle growth rates. 
"Our work shows that some particles are kind of crunchy when they are dry -- they are glassy -- but they turn gooey when they get wet; under most conditions, the semi-volatile compounds will diffuse into particles quite easily," said Donahue, the Lord Professor of Chemistry in the Mellon College of Science, and Professor of Chemical Engineering and Engineering and Public Policy in the College of Engineering. Donahue is also a member of Carnegie Mellon University's Center for Atmospheric Particle Studies (CAPS), which is a leader in studying the chemistry of atmospheric particles and has completed groundbreaking studies that are revealing how these atmospheric particles change over time. In the current study, chemistry doctoral student Qing Ye used single-particle mass spectrometry to see if SVOCs diffused from one group of particles into another, adding to the particles' complexity. Ye looked at two different types of secondary organic aerosols formed by the oxidation of two organic gases: alpha-pinene, a molecule given off by pine trees, and toluene, an aromatic hydrocarbon in gasoline that is also often used as a solvent or in the production of industrial materials. She combined two populations of the particles, one of which was isotopically labeled, and measured the populations over time. In the alpha-pinene particles, the isotopes from the labeled particles easily evaporated and diffused into the unlabeled particles. The toluene particles also exchanged material easily, but only if the relative humidity was above 30 percent. The findings show that SVOCs can travel between atmospheric particles, but the conditions under which they can travel depend on the particle's original source. Other study authors include Ryan Sullivan, Penglin Ye and Ellis S. Robinson of Carnegie Mellon University, and Xiang Ding of the Guangzhou Institute of Geochemistry, Chinese Academy of Sciences. 
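The labeled/unlabeled experiment can be caricatured with a minimal exchange model (the rate constant and amounts below are invented for illustration, not the paper's data): each particle population loses a fraction of its semi-volatile material to a shared gas phase at every step, and the gas recondenses evenly onto both populations, so the isotope label spreads until the two match:

```python
# Minimal sketch of the labeled/unlabeled particle mixing experiment:
# an SVOC evaporates from each particle population into a shared gas
# phase and recondenses onto both, so the isotope label equilibrates.
# The exchange fraction k and the amounts are invented for illustration.

def mix(labeled, unlabeled, k=0.05, steps=200):
    """Exchange a fraction k of each population's SVOC per step via the gas phase."""
    for _ in range(steps):
        evaporated = k * labeled + k * unlabeled       # both populations emit
        labeled = labeled * (1 - k) + evaporated / 2   # gas recondenses evenly
        unlabeled = unlabeled * (1 - k) + evaporated / 2
    return labeled, unlabeled

a, b = mix(labeled=1.0, unlabeled=0.0)
print(round(a, 3), round(b, 3))  # both approach 0.5: the label has spread
```

The dry, glassy toluene case below 30 percent relative humidity corresponds to the effective exchange fraction collapsing toward zero, in which case the label simply stays where it started.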
This research was supported by the National Science Foundation (CHE1412309 and CBET0922643), the Wallace Research Foundation, the US EPA STAR Program, and the Faculty for the Future Fellowship from the Schlumberger Foundation.


News Article | March 18, 2016
Site: www.nanotech-now.com

Abstract: You probably don't think much of fungi, and especially those that turn bread moldy, but researchers reporting in the Cell Press journal Current Biology on March 17, 2016 have evidence that might just change your mind. Their findings suggest that a red bread mold could be the key to producing more sustainable electrochemical materials for use in rechargeable batteries. The researchers show for the first time that the fungus Neurospora crassa can transform manganese into a mineral composite with favorable electrochemical properties. "We have made electrochemically active materials using a fungal manganese biomineralization process," says Geoffrey Gadd of the University of Dundee in Scotland. "The electrochemical properties of the carbonized fungal biomass-mineral composite were tested in a supercapacitor and a lithium-ion battery, and it [the composite] was found to have excellent electrochemical properties. This system therefore suggests a novel biotechnological method for the preparation of sustainable electrochemical materials." Gadd and his colleagues have long studied the ability of fungi to transform metals and minerals in useful and surprising ways. In earlier studies, the researchers showed that fungi could stabilize toxic lead and uranium, for example. That led the researchers to wonder whether fungi could offer a useful alternative strategy for the preparation of novel electrochemical materials too. "We had the idea that the decomposition of such biomineralized carbonates into oxides might provide a novel source of metal oxides that have significant electrochemical properties," Gadd says. In fact, there have been many efforts to improve lithium-ion battery or supercapacitor performance using alternative electrode materials such as carbon nanotubes and other manganese oxides. But few had considered a role for fungi in the manufacturing process. 
In the new study, Gadd and his colleagues incubated N. crassa in media amended with urea and manganese chloride (MnCl2) and watched what happened. The researchers found that the long branching fungal filaments (or hyphae) became biomineralized and/or enveloped by minerals in various formations. After heat treatment, they were left with a mixture of carbonized biomass and manganese oxides. Further study of those structures shows that they have ideal electrochemical properties for use in supercapacitors or lithium-ion batteries. "We were surprised that the prepared biomass-Mn oxide composite performed so well," Gadd says. In comparison to other reported manganese oxides in lithium-ion batteries, the carbonized fungal biomass-mineral composite "showed an excellent cycling stability and more than 90% capacity was retained after 200 cycles," he says. The new study is the first to demonstrate the synthesis of active electrode materials using a fungal biomineralization process, illustrating the great potential of these fungal processes as a source of useful biomaterials. Gadd says they'll continue to explore the use of fungi in producing various potentially useful metal carbonates. They're also interested in investigating such processes for the biorecovery of valuable or scarce metal elements in other chemical forms. ### The authors acknowledge financial support from the China Scholarship Council and the 1000 Talents Plan with the Xinjiang Institute of Ecology and Geography, Chinese Academy of Sciences.
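The cycling figure Gadd quotes (more than 90% capacity retained after 200 cycles) implies a very small loss per cycle. A quick back-of-envelope check, assuming, purely for illustration, a constant fractional fade per cycle (real cells fade unevenly):

```python
# Back-of-envelope: if 90% of capacity remains after 200 cycles and the
# fade were a constant fraction per cycle, each cycle would retain about
# 0.9 ** (1/200), i.e. roughly 99.95% of the previous cycle's capacity.

retained_after_200 = 0.90
per_cycle_retention = retained_after_200 ** (1 / 200)
per_cycle_loss_pct = (1 - per_cycle_retention) * 100

print(f"per-cycle retention: {per_cycle_retention:.5f}")
print(f"per-cycle loss:      {per_cycle_loss_pct:.3f}%")
```

Seen this way, the composite's "excellent cycling stability" amounts to losing only about a twentieth of a percent of capacity per charge-discharge cycle.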


News Article | December 31, 2015
Site: www.nanotech-now.com

Abstract:
• Capitalizes on strengths of both SITRI and Accelink to advance optical technology, products and markets
• Strengthens SITRI’s R&D capabilities in optical technology

SITRI, the innovation center for accelerating the development and commercialization of “More than Moore” solutions to power the Internet of Things, and Accelink Technologies Co., Ltd., the leading opto-electronic components supplier in China, announced today they have signed an agreement to collaborate on the development of opto-electronic communication. The agreement will allow Accelink and SITRI to combine their respective strengths in optical technology and further promote MEMS optical components, silicon photonics, MEMS technology, and opto-electronics technology. It calls for strategic cooperation at the technology and product levels, as well as in market development. “China’s optical communications industry has developed an international competitive advantage,” said Charles Yang, President of SITRI. “Accelink is a leader in the field and has developed several standards for optical components in China. By joining forces with them, SITRI not only strengthens our position in the vital optical market, we also build on our R&D capabilities in optical technologies.” As the largest opto-electronic components supplier in China, Accelink is the leading Chinese company with the ability to conduct systematic and strategic R&D on the component level and is the first Chinese company to own chip-level technology and mass production capabilities. “MEMS and silicon photonics technology is vital to the future development of optical components,” said Hao Mao, Vice President of Accelink. 
“The MEMS and silicon photonics technology and expertise SITRI brings to this relationship are key to helping us fulfill our strategic vision for the optical market.” SITRI is emerging as the center for “More than Moore” commercialization and industry development, providing 360-degree solutions for companies and startups pursuing these new technologies, including investment, design, simulation, market engagement and company growth support. SITRI is associated with the Shanghai Institute of Microsystem and Information Technology (SIMIT) and the Chinese Academy of Sciences, and has established strong ties to a broad range of Chinese industry, research and university players. This ecosystem enables these new businesses to grow by quickly taking their innovations from concept to commercialization. The addition of Accelink as a SITRI collaborator represents an important step in developing and promoting optical communications and opto-electronic solutions.

About SITRI

SITRI (Shanghai Industrial µTechnology Research Institute) is an innovation center established to accelerate the development and commercialization of “More than Moore” solutions to power the Internet of Things. As a global organization, SITRI offers its partners and customers a 360-degree solution offering – from funding, IP and ecosystem development to engineering services, marketing research, manufacturing resources and China market access – helping them more quickly move their ideas from concept to commercialization. For more information, visit www.sitrigroup.com.

About Accelink

Accelink Technologies Co., Ltd. (Stock Code: 002281) is currently the largest opto-electronic components supplier in China. 
It is the only Chinese company with the ability to conduct systematic and strategic R&D on the component level and is the first Chinese company to own chip-level technology and mass production capabilities. Accelink was founded in 2001, with the Solid-state Device Institute of the China P&T Ministry as its predecessor, which has over 20 years of R&D experience in opto-electronic components. In August 2009, Accelink went public on the Shenzhen Stock Exchange, becoming the first opto-electronic components supplier in China to do so. In December 2012, Accelink merged with Wuhan Telecommunication Devices Co., Ltd. (WTD). Accelink’s basic technology platforms include: semiconductor material growth; semiconductor processing and planar lightwave guide; optics design and packaging; thermal analysis and mechanics design; high frequency simulation and design; software and subsystem development. With these platforms, Accelink can provide a vast product portfolio and various solutions, as well as vertical integration from the chip level to the subsystem level. Applications for Accelink solutions include telecom, datacom, wireless, and sensing, among others. For more information, visit: www.accelink.com


News Article | November 16, 2016
Site: www.eurekalert.org

LA JOLLA--(November 16, 2016) Salk Institute researchers have discovered a holy grail of gene editing--the ability to, for the first time, insert DNA at a target location into the non-dividing cells that make up the majority of adult organs and tissues. The technique, which the team showed was able to partially restore visual responses in blind rodents, will open new avenues for basic research and a variety of treatments, such as for retinal, heart and neurological diseases. "We are very excited by the technology we discovered because it's something that could not be done before," says Juan Carlos Izpisua Belmonte, a professor in Salk's Gene Expression Laboratory and senior author of the paper published on November 16, 2016 in Nature. "For the first time, we can enter into cells that do not divide and modify the DNA at will. The possible applications of this discovery are vast." Until now, techniques that modify DNA--such as the CRISPR-Cas9 system--have been most effective in dividing cells, such as those in skin or the gut, using the cells' normal copying mechanisms. The new Salk technology is ten times more efficient than other methods at incorporating new DNA into cultures of dividing cells, making it a promising tool for both research and medicine. But, more importantly, the Salk technique represents the first time scientists have managed to insert a new gene into a precise DNA location in adult cells that no longer divide, such as those of the eye, brain, pancreas or heart, offering new possibilities for therapeutic applications in these cells. To achieve this, the Salk researchers targeted a DNA-repair cellular pathway called NHEJ (for "non-homologous end-joining"), which repairs routine DNA breaks by rejoining the original strand ends. They paired this process with existing gene-editing technology to successfully place new DNA into a precise location in non-dividing cells. 
"Using this NHEJ pathway to insert entirely new DNA is revolutionary for editing the genome in live adult organisms," says Keiichiro Suzuki, a senior research associate in the Izpisua Belmonte lab and one of the paper's lead authors. "No one has done this before." First, the Salk team worked on optimizing the NHEJ machinery for use with the CRISPR-Cas9 system, which allows DNA to be inserted at very precise locations within the genome. The team created a custom insertion package made up of a nucleic acid cocktail, which they call HITI, or homology-independent targeted integration. Then they used an inert virus to deliver HITI's package of genetic instructions to neurons derived from human embryonic stem cells. "That was the first indication that HITI might work in non-dividing cells," says Jun Wu, staff scientist and co-lead author. With that feat under their belts, the team then successfully delivered the construct to the brains of adult mice. Finally, to explore the possibility of using HITI for gene-replacement therapy, the team tested the technique on a rat model for retinitis pigmentosa, an inherited retinal degeneration condition that causes blindness in humans. This time, the team used HITI to deliver to the eyes of 3-week-old rats a functional copy of Mertk, one of the genes that is damaged in retinitis pigmentosa. Analysis performed when the rats were 8 weeks old showed that the animals were able to respond to light, and passed several tests indicating healing in their retinal cells. "We were able to improve the vision of these blind rats," says co-lead author Reyna Hernandez-Benitez, a Salk research associate. "This early success suggests that this technology is very promising." The team's next steps will be to improve the delivery efficiency of the HITI construct. As with all genome editing technologies, getting enough cells to incorporate the new DNA is a challenge. 
The beauty of HITI technology is that it is adaptable to any targeted genome engineering system, not just CRISPR-Cas9. Thus, as the safety and efficiency of these systems improve, so too will the usefulness of HITI. "We now have a technology that allows us to modify the DNA of non-dividing cells, to fix broken genes in the brain, heart and liver," says Izpisua Belmonte. "It allows us for the first time to be able to dream of curing diseases that we couldn't before, which is exciting." Other researchers on the study were Euiseok J. Kim, Fumiyuki Hatanaka, Mako Yamamoto, Toshikazu Araoka, Masakazu Kurita, Tomoaki Hishida, Mo Li, Emi Aizawa, April Goebl, Rupa Devi Soligalla, Concepcion Rodriguez Esteban, Travis Berggren and Edward M. Callaway of the Salk Institute; Yuji Tsunekawa and Fumio Matsuzaki of RIKEN Center for Developmental Biology; Pierre Magistretti of King Abdullah University of Science and Technology; Jie Zhu, Tingshuai Jiang, Xin Fu, Maryam Jafari and Kang Zhang of Shiley Eye Institute and Institute for Genomic Medicine, University of California San Diego; Zhe Li, Shicheng Guo, Song Chen and Kun Zhang of Institute of Engineering in Medicine, University of California San Diego; Jing Qu and Guang-Hui Liu of Chinese Academy of Sciences; Jeronimo Lajara, Estrella Nuñez and Pedro Guillen of Universidad Catolica San Antonio de Murcia; and Josep M. Campistol of the University of Barcelona. The work and the researchers involved were supported in part by the National Institutes of Health, The Leona M. and Harry B. Helmsley Charitable Trust, the G. Harold and Leila Y. Mathers Charitable Foundation, The McKnight Foundation, The Moxie Foundation, the Dr. Pedro Guillen Foundation and Universidad Catolica San Antonio de Murcia, Spain. Every cure has a starting point. The Salk Institute embodies Jonas Salk's mission to dare to make dreams into reality. 
Its internationally renowned and award-winning scientists explore the very foundations of life, seeking new understandings in neuroscience, genetics, immunology and more. The Institute is an independent nonprofit organization and architectural landmark: small by choice, intimate by nature and fearless in the face of any challenge. Be it cancer or Alzheimer's, aging or diabetes, Salk is where cures begin. Learn more at: salk.edu.


Grant
Agency: European Commission | Branch: FP7 | Program: CSA | Phase: INFRA-2010-3.3 | Award Amount: 1.57M | Year: 2010

Over the past six years, the EC has invested in extending European eInfrastructure technology, and European eInfrastructure (and particularly Grid) operational and organisational principles, to a number of regions of the world, and in reinforcing close collaboration and the exchange of know-how with similar technologies in other areas. A number of different collaboration models have thus been established between Europe and the rest of the world, while the projects implementing these collaborations have had impacts typically focused on their regions. The CHAIN project aims to coordinate and leverage these efforts and their results with a vision of a harmonised and optimised interaction model for eInfrastructure, and specifically Grid, interfaces between Europe and the rest of the world. The project will elaborate a strategy and define the instruments needed to ensure coordination and interoperation of the European Grid Infrastructures with other external e-Infrastructures. The CHAIN consortium, consisting of leading organisations in all the regions addressed by the project, will ensure global coverage, European leadership, and the most efficient leveraging of results with respect to preceding regional initiatives. First, the project will define a coherent operational and organisational model, in which a number of EU countries/regions will possibly act, in collaboration with EGI.eu, as bridges/gateways to other regions/continents. Further, the project will validate this model by supporting the extension and consolidation of worldwide virtual communities, which increasingly require distributed facilities (large instruments, distributed data and databases, digital repositories, etc.) across the regions for trans-continental research. Finally, the project will act as a worldwide policy-watch and coordination instrument, by exploring and proposing concrete steps for coordination with other initiatives and studying the evolution of e-Infrastructures.


Grant
Agency: European Commission | Branch: FP7 | Program: CSA | Phase: INFRA-2012-3.3. | Award Amount: 2.12M | Year: 2012

The CHAIN-REDS vision is to promote and support technological and scientific collaboration across the different eInfrastructures established and operated in various continents, in order to facilitate their uptake and use by established and emerging Virtual Research Communities (VRCs) as well as by individual researchers, promoting instruments and practices that can facilitate their inclusion in the community of users. The project aims to support and disseminate the best practices currently adopted in Europe and other continents, and to promote and facilitate interoperability among different regional eInfrastructures. CHAIN-REDS gathers the main stakeholders of regional eInfrastructures, collectively engaged in studying and defining a path towards a global eInfrastructure ecosystem that will allow VRCs, research groups and even single researchers to access and efficiently use worldwide distributed resources (i.e. computing, storage, data, services, tools, applications). The core objective of the CHAIN-REDS project is to promote, coordinate and support the effort of a critical mass of non-European eInfrastructures for Research and Education to collaborate with Europe in addressing interoperability and interoperation of Grids and other Distributed Computing Infrastructures. From this perspective, CHAIN-REDS will optimise the interoperation of European infrastructures with those present in six other regions of the world, from both a development and a use point of view, while catering to different communities. Overall, CHAIN-REDS will provide input for future strategies and decision-making regarding collaboration with other regions on eInfrastructure deployment and the availability of related data; it will raise the visibility of eInfrastructures towards intercontinental audiences, covering most of the world, and will provide support for establishing globally connected and interoperable infrastructures, in particular between the EU and the developing regions.


News Article | November 23, 2016
Site: www.businesswire.com

SALT LAKE CITY--(BUSINESS WIRE)--SC16, the 28th annual international conference of high performance computing, networking, storage and analysis, celebrated the contributions of researchers and scientists - from those just starting their careers to those whose contributions have made lasting impacts. The conference drew more than 11,100 registered attendees and featured a technical program spanning six days. The exhibit hall featured 349 exhibitors from industry, academia and research organizations from around the world. “There has never been a more important time for high performance computing, networking and data analysis,” said SC16 General Chair John West from the Texas Advanced Computing Center. “But it is also an acute time for growing our workforce and expanding diversity in the industry. SC16 was the perfect blend of research, technological advancement, career recognition and improving the ways in which we attract and retain that next generation of scientists.” According to Trey Breckenridge, SC16 Exhibits Chair from Mississippi State University, the SC16 Exhibition was the largest in the history of the conference. The overall size of the exhibition was 150,000 net square feet (breaking the 2015 record of 141,430). The 349 industry and research-focused exhibits included 44 first-timers and 120 organizations from 25 countries outside the United States. During the conference, Salt Lake City also became the hub for the world’s fastest computer network: SCinet, SC16’s custom-built network which delivered 3.15 terabits per second in bandwidth. The network featured 56 miles of fiber deployed throughout the convention center and $32 million in loaned equipment. It was all made possible by 200 volunteers representing global organizations spanning academia, government and industry. 
For the third year, SC featured an opening “HPC Matters” plenary, which this year focused on precision medicine, examining what the future holds for the field and how advances are possible only through the power of high performance computing and big data. Leading voices from the frontlines of clinical care, medical research, HPC system evolution, pharmaceutical R&D and public policy shared diverse perspectives on the future of precision medicine and how it will impact society. The Technical Program again offered the highest quality original HPC research. The SC workshops set a record with more than 2,500 attendees. There were 14 Best Paper Finalists and six Gordon Bell Finalists. These submissions represent the best of the best in a wide variety of research topics in HPC. “These awards are very important for the SC Conference Series. They celebrate the best and the brightest in high performance computing,” said Satoshi Matsuoka, SC16 Awards Chair from Tokyo Institute of Technology. “These awards are not just plaques or certificates. They define excellence. They set the bar for the years to come and are powerful inspiration for both early career and senior researchers.” Following is the list of Technical Program awards presented at SC16: SC16 received 442 paper submissions, of which 81 were accepted (18.3 percent acceptance rate). Of those, 13 were selected as finalists for the Best Paper (six) and Best Student Paper (seven) awards. The Best Paper Award went to “Daino: A High-Level Framework for Parallel and Efficient AMR on GPUs” by Mohamed Wahib Attia and Naoya Maruyama, RIKEN; and Takayuki Aoki, Tokyo Institute of Technology. The Best Student Paper Award went to “Flexfly: Enabling a Reconfigurable Dragonfly Through Silicon Photonics” by Ke Wen, Payman Samadi, Sebastien Rumley, Christine P. Chen, Yiwen Shen, Meisam Bahadori, and Karen Bergman, Columbia University and Jeremiah Wilke, Sandia National Laboratories. 
The ACM Gordon Bell Prize is awarded for outstanding team achievement in high performance computing and tracks the progress of parallel computing. This year, the prize was awarded to a 12-member Chinese team for their research project, “10M-Core Scalable Fully-Implicit Solver for Nonhydrostatic Atmospheric Dynamics.” The winning team presented a solver (a numerical method for computing solutions) for atmospheric dynamics. In the abstract of their presentation, the winning team writes, “On the road to the seamless weather-climate prediction, a major obstacle is the difficulty of dealing with various spatial and temporal scales. The atmosphere contains time-dependent multi-scale dynamics that support a variety of wave motions.” To simulate the vast number of variables inherent in a weather system developing in the atmosphere, the winning group presented a highly scalable fully implicit solver for three-dimensional nonhydrostatic atmospheric simulations governed by the fully compressible Euler equations. The Euler equations are a set of equations frequently used to understand fluid dynamics (liquids and gases in motion). Winning team members are Chao Yang, Chinese Academy of Sciences; Wei Xue, Weimin Zheng, Guangwen Yang, Ping Xu, and Haohuan Fu, Tsinghua University; Hongtao You, National Research Center of Parallel Computer Engineering and Technology; Xinliang Wang, Beijing Normal University; Yulong Ao and Fangfang Liu, Chinese Academy of Sciences; Lin Gan, Tsinghua University; and Lanning Wang, Beijing Normal University. This year, SC received 172 detailed poster submissions that went through a rigorous review process. In the end, 112 posters were accepted and five finalists were selected for the Best Poster Award. As part of its research poster activities, SC16 also hosted the ACM Student Research Competition for both undergraduate and graduate students. 
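For reference, the compressible Euler equations mentioned in the Gordon Bell abstract can be written in a standard conservative form. This generic form is illustrative only: the winning team's nonhydrostatic atmospheric formulation includes additional terms (such as Coriolis forcing and specific treatments of gravity and moisture) not shown here.

```latex
% Compressible Euler equations, conservative form.
% rho: density, u: velocity, p: pressure, E: total energy density, g: gravity.
\begin{aligned}
\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{u}) &= 0, \\
\frac{\partial (\rho \mathbf{u})}{\partial t}
  + \nabla \cdot (\rho\, \mathbf{u} \otimes \mathbf{u}) + \nabla p &= \rho \mathbf{g}, \\
\frac{\partial E}{\partial t} + \nabla \cdot \big( (E + p)\, \mathbf{u} \big) &= \rho\, \mathbf{g} \cdot \mathbf{u},
\end{aligned}
```

closed by an ideal-gas equation of state, $p = (\gamma - 1)\left(E - \tfrac{1}{2}\rho |\mathbf{u}|^2\right)$. A fully implicit solver advances all three coupled equations in a single nonlinear solve per time step, which is what makes the scheme stable at large time steps but hard to scale to 10 million cores.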
In all, 63 submissions were received and 26 Student Research Competition posters were accepted – 14 in the graduate category and 12 in the undergraduate category. The Best Poster Award went to “A Fast Implicit Solver with Low Memory Footprint and High Scalability for Comprehensive Earthquake Simulation System” with Kohei Fujita from RIKEN as the lead author. First Place: “Touring Dataland? Automated Recommendations for the Big Data Traveler” by William Agnew and Michael Fischer, Advisors: Kyle Chard and Ian Foster. Second Place: “Analysis of Variable Selection Methods on Scientific Cluster Measurement Data” by Jonathan Wang, Advisors: Wucherl Yoo and Alex Sim. Third Place: “Discovering Energy Usage Patterns on Scientific Clusters” by Matthew Bae, Advisors: Wucherl Yoo, Alex Sim and Kesheng Wu. First Place: “Job Startup at Exascale: Challenges and Solutions” by Sourav Chakroborty, Advisor: Dhabaleswar K. Panda. Second Place: “Performance Modeling and Engineering with Kerncraft,” by Julian Hammer, Advisors: Georg Hager and Gerhard Wellein. Third Place: “Design and Evaluation of Topology-Aware Scatter and AllGather Algorithms for Dragonfly Networks” by Nathanael Cheriere, Advisor: Matthieu Dorier. The Scientific Visualization and Data Analytics Award featured six finalists. The award went to “Visualization and Analysis of Threats from Asteroid Ocean Impacts” with John Patchett as the lead author. The Student Cluster Competition returned for its 10th year. The competition, which debuted at SC07 in Reno and has since been replicated in Europe, Asia and Africa, is a real-time, non-stop, 48-hour challenge in which teams of six undergraduates assemble a small cluster at SC16 and race to complete a real-world workload across a series of scientific applications, demonstrate knowledge of system architecture and application performance, and impress HPC industry judges. 
The students partner with vendors to design and build a cutting-edge cluster from commercially available components, not to exceed a 3120-watt power limit, and work with application experts to tune and run the competition codes. For the first time ever, the team that won top honors also won the award for achieving highest performance for the Linpack benchmark application. The team “SwanGeese” is from the University of Science and Technology of China. In traditional Chinese culture, the rare Swan Goose stands for teamwork, perseverance and bravery. This is the university’s third appearance in the competition. Also, an ACM SIGHPC Certificate of Appreciation is presented to the authors of a recent SC paper to be used for the SC16 Student Cluster Competition Reproducibility Initiative. The selected paper was “A Parallel Connectivity Algorithm for de Bruijn Graphs in Metagenomic Applications” by Patrick Flick, Chirag Jain, Tony Pan and Srinivas Aluru from Georgia Institute of Technology. The George Michael Memorial HPC Fellowship honors exceptional Ph.D. students. The first recipient is Johann Rudi from the Institute for Computational Engineering and Sciences at the University of Texas at Austin for his project, “Extreme-Scale Implicit Solver for Nonlinear, Multiscale, and Heterogeneous Stokes Flow in the Earth’s Mantle.” The second recipient is Axel Huebl from Helmholtz-Zentrum Dresden-Rossendorf at the Technical University of Dresden for his project, “Scalable, Many-core Particle-in-cell Algorithms to Simulate Next Generation Particle Accelerators and Corresponding Large-scale Data Analytics.” The SC Conference Series also serves as the venue for recognizing leaders in the HPC community for their contributions during their careers. 
Here are the career awards presented at SC16: The IEEE-CS Seymour Cray Computer Engineering Award recognizes innovative contributions to high performance computing systems that best exemplify the creative spirit demonstrated by Seymour Cray. The 2016 IEEE-CS Seymour Cray Computer Engineering Award was presented to William J. Camp of Los Alamos National Laboratory “for visionary leadership of the Red Storm project, and for decades of leadership of the HPC community.” Camp previously served as Intel’s Chief Supercomputing Architect and directed Intel’s Exascale R&D efforts. Established in memory of Ken Kennedy, the founder of Rice University’s nationally ranked computer science program and one of the world’s foremost experts on high-performance computing, the ACM/IEEE-CS Ken Kennedy Award recognizes outstanding contributions to programmability or productivity in high-performance computing together with significant community service or mentoring contributions. The 2016 Ken Kennedy Award was presented to William D. Gropp “for highly influential contributions to the programmability of high-performance parallel and distributed computers, and extraordinary service to the profession.” Gropp is the Acting Director of the National Center for Supercomputing Applications, Director of the Parallel Computing Institute, and the Thomas M. Siebel Chair in Computer Science at the University of Illinois at Urbana-Champaign. The IEEE-CS Sidney Fernbach Memorial Award is awarded for outstanding contributions in the application of high performance computers using innovative approaches. The 2016 IEEE-CS Sidney Fernbach Memorial Award was presented to Vipin Kumar “for foundational work on understanding scalability, and highly scalable algorithms for graph partitioning, sparse linear systems and data mining.” Kumar is a Regents Professor at the University of Minnesota. 
The Supercomputing Conference Test of Time Award recognizes an outstanding paper that has appeared at the SC conference and has deeply influenced the HPC discipline. It is a mark of historical impact and recognition that the paper has changed HPC trends. The winning paper is “Automatically Tuned Linear Algebra Software” by Clint Whaley from the University of Tennessee and Jack Dongarra from the University of Tennessee and Oak Ridge National Laboratory. The IEEE TCSC Award for Excellence in Scalable Computing for Early Career Researchers recognizes individuals who have made outstanding and potentially long-lasting contributions to the field within five years of receiving their Ph.D. The 2016 awards were presented to Kyle Chard, Computation Institute, University of Chicago and Argonne National Laboratory; Sunita Chandrasekaran, University of Delaware; and Seyong Lee, Oak Ridge National Laboratory. SC17 will be held November 12-17, 2017, in Denver, Colorado. For more details, go to http://sc17.supercomputing.org/. SC16, sponsored by the IEEE Computer Society and ACM (Association for Computing Machinery), offers a complete technical education program and exhibition to showcase the many ways high performance computing, networking, storage and analysis lead to advances in scientific discovery, research, education and commerce. This premier international conference includes a globally attended technical program, workshops, tutorials, a world-class exhibit area, demonstrations and opportunities for hands-on learning. For more information on SC16, visit: http://sc16.supercomputing.org.


News Article | December 3, 2015
Site: www.sciencenews.org

WASHINGTON, D.C. — Human gene-editing research, even on embryos, is needed and should go ahead, with one major caveat: No pregnancies can result, leaders of an international summit on the topic said December 3. In recent years, scientists have devised increasingly precise molecular scissors for cutting and pasting DNA. These tools, especially the guided scissors known as CRISPR/Cas9, have become so cheap and easy to use that it may be possible to use them to correct genetic diseases. Many see the technology as a medical boon; others, though, say that the prospect of designer babies and tinkering with the DNA of future generations should be out of bounds (SN: 5/30/15, p. 16). The U.S. National Academies of Sciences and Medicine, the Chinese Academy of Sciences and the United Kingdom’s Royal Society convened the summit to discuss the state of the science as well as ethical, legal and regulatory considerations surrounding gene-editing technology. Gene editing of human body, or somatic, cells, which do not pass genetic information to future generations, is already in clinical trials. Most of those studies have involved older technologies and cells that were edited outside the body and then given to a patient later, such as a baby with leukemia treated with edited immune cells (SN: 12/12/15, p. 7). A company called Sangamo BioSciences announced December 1 that clinical trials using gene editing to replace a broken gene in adult hemophiliacs could begin next year. Such research could continue and would fall under current regulations for gene therapy, the 12-member organizing committee of the International Summit on Human Gene Editing said in a statement. But moral, ethical and safety concerns would make it “irresponsible” to proceed with clinical studies in germline cells — eggs, sperm, embryos and other cells that transmit DNA to future generations, the statement added. That doesn’t mean all germ cell editing would be off-limits. 
Researchers who edit embryos or other germ cells in labs would not be doing germline editing if the resulting embryos are not implanted in the uterus for reproductive purposes, said committee chairman David Baltimore of Caltech. The scientists purposely did not call their statement a ban or even a moratorium. Instead, the recommendations should be revisited on a regular basis as research advances and societal opinions evolve. The panel also called for an ongoing forum to discuss human germline editing. Recommendations from the scientists are not legally binding, but peer pressure could be an effective deterrent. For instance, researchers who violate agreements might not be able to get their work published or could lose funding. Scientists also must still follow their individual countries’ laws and regulations on working with embryos. In the United States, such work is not banned, but researchers cannot get government funds to do it. A study conducted by a separate committee of scientists commissioned by the science academies will produce a report on the advisability of germline editing, expected by the end of 2016. Editor's note: The headline of this story was edited on December 7 to clarify, as noted elsewhere, that the recommendations endorsed germ cell gene editing for research purposes only.



News Article | January 13, 2016
Site: www.nature.com

In the northern reaches of the Tibetan Plateau, dozens of yaks graze on grasslands that look like a threadbare carpet. The pasture has been munched down to bare soil in places, and deep cracks run across the snow-dusted landscape. The animals' owner, a herder named Dodra, emerges from his home wearing a black robe, a cowboy hat and a gentle smile tinged with worry. “The pastures are in a bad state and lack the kind of plants that make livestock strong and grow fat,” says Dodra. “The yaks are skinny and produce little milk.” His family of eight relies on the yaks for most of its livelihood — milk, butter, meat and fuel. Dodra was forced to give up half of his animals a decade ago, when the Chinese government imposed strict limits on livestock numbers. Although his family receives financial compensation, nobody knows how long it will last. “We barely survive these days,” he says. “It's a hand-to-mouth existence.” If the grasslands continue to deteriorate, he says, “we will lose our only lifeline”. The challenges that face Dodra and other Tibetan herders are at odds with glowing reports from Chinese state media about the health of Tibetan grasslands — an area of 1.5 million square kilometres — and the experiences of the millions of nomads there. Since the 1990s, the government has carried out a series of policies that moved once-mobile herders into settlements and sharply limited livestock grazing. According to the official account, these policies have helped to restore the grasslands and to improve standards of living for the nomads. But many researchers argue that available evidence shows the opposite: that the policies are harming the environment and the herders. “Tibetan grasslands are far from safe,” says Wang Shiping, an ecologist at the Chinese Academy of Sciences' (CAS) Institute of Tibetan Plateau Research (ITPR) in Beijing. 
“A big part of the problem is that the policies are not guided by science, and fail to take account of climate change and regional variations.” The implications of that argument stretch far beyond the Tibetan Plateau, which spans 2.5 million square kilometres — an area bigger than Greenland — and is mostly controlled by China. The grasslands, which make up nearly two-thirds of the plateau, store water that feeds into Asia's largest rivers. Those same pastures also serve as a gigantic reservoir of carbon, some of which could escape into the atmosphere if current trends continue. Degradation of the grasslands “will exacerbate global warming, threaten water resources for over 1.4 billion people and affect Asian monsoons”, says David Molden, director general of the International Centre for Integrated Mountain Development (ICIMOD) in Kathmandu, Nepal. Such concerns propelled me to make a 4,700-kilometre journey last year from Xining, on the northeastern fringe of the plateau, to Lhasa in the Tibetan heartland (see 'Trek across Tibet'). Meeting with herders and scientists along the way, I traversed diverse landscapes and traced the Yellow and Yangtze Rivers to their sources. The trip revealed that Tibetan grasslands are far less healthy than official government reports suggest, and scientists are struggling to understand how and why the pastures are changing. It began to drizzle soon after we set off from the city of Xining on a stretch of newly built highway along the Yellow River. As our Land Cruiser climbed onto a 3,800-metre-high part of the plateau, the vista opened to reveal rolling hills blanketed by a thick layer of alpine meadow, resembling a gigantic golf course. We passed herds of sheep and yaks, white tents and nomads in colourful robes — along with barbed-wired fences that cut the rangeland into small blocks. This part of the Tibetan Plateau, in a region known as Henan county, is blessed with abundant monsoonal rains every summer. 
The herders who live here are able to maintain healthy livestock and can make a decent living. “We have plenty to go around, and the livestock are well taken care of,” says herder Gongbu Dondrup. But life has been different since the government began to fence off grasslands around a decade ago, says Dondrup. Before that, he took his herd to the best pastures at high elevations in the summer, and then came back down in the winter. Now, he must keep the yaks in an 80-hectare plot that the government assigned to his family. The pasture looks worn, and he is being pressed by the government to further downsize his herd. “I don't know how long it can keep us going,” he says. The fencing initiative is the latest of a string of Chinese grassland policies. After annexing Tibet in 1950, the young revolutionary Chinese republic turned all livestock and land into state properties. Large state farms competed with each other to maximize production, and livestock numbers on the plateau doubled over two decades, reaching nearly 100 million by the late 1970s. But in the 1980s, as China moved towards a market-based economy, Beijing swung to the other extreme: it privatized the pastures and gave yaks back to individual households, hoping that the move would push Tibetans to better manage their land and so boost its productivity. Despite the privatization, nomads continued to use the rangeland communally — often in groups led by village elders. Then the government began to limit herds, and it built fences to separate households and villages. “This has totally changed the way livestock are traditionally raised on the plateau, turning a mobile lifestyle into a sedentary existence,” says Yang Xiaosheng, director of Henan county's rangeland-management office. The fencing policy does have merits when applied in moderation, says Yönten Nyima, a Tibetan policy researcher at Sichuan University in Chengdu. 
Because an increasing number of nomads now lead a settled life — at least for parts of the year — it helps to control the level of grazing in heavily populated areas, he says. “Fencing is an effective way to keep animals out of a patch of meadow.” Many herders also say that it makes life much easier: they do not have to spend all day walking the hills to herd their yaks and sheep, and if they go away for a few days, they don't worry about the animals running off. But the convenience comes at a cost, says Cao Jianjun, an ecologist at Northwest Normal University in Lanzhou. Fenced pastures often show signs of wear after a few years. In a 2013 study, Cao and his colleagues measured growth of the sedge species preferred by livestock in two scenarios: enclosed pastures and much larger patches of land jointly managed by up to 30 households. Despite similar livestock densities in both cases, the sedge grew twice as fast in the larger pastures, where animals could roam and plants had more opportunity to recover1. That matches the experience of Henan county herders, who say that their land sustains fewer animals than it has in the past. The future of the grasslands looked even bleaker as we left relatively well-to-do Henan county and ventured into the much higher, arid territory to the west. After 700 kilometres, we reached Madoi county, also known as qianhu xian ('county of a thousand lakes'), where the Yellow River begins. Although this region gets only 328 millimetres of rain on average each year, about half of what Henan receives, Madoi was once one of the richest counties on the plateau — famous for its fish, high-quality livestock and gold mines. Now, the wetlands are drying up and sand dunes are replacing the prairies, which means that less water flows into the Yellow River. Such changes on the plateau have contributed to recurring water shortages downstream: the Yellow River often dries up well before it reaches the sea, an event not recorded before 1970. 
In 2000, China sought to protect this region, along with adjacent areas that give rise to the Yangtze and Mekong Rivers, by establishing the Sanjiangyuan (or Three-Rivers' Headwaters) National Nature Reserve, an area nearly two-thirds the size of the United Kingdom. Nearly one-tenth of the reserve area falls into core zones in which all activities, including herding, are prohibited. The government spends hundreds of millions of US dollars each year on moving nomads out of those core areas, constructing steel meshes to stabilize the slopes and planting artificially bred grass species to restore the eroded land. Outside the core regions, officials have banned grazing on 'severely degraded grasslands', where vegetation typically covers less than 25% of the ground. Land that is 'moderately degraded', where vegetation coverage measures 25–50%, can be grazed for half of the year. Such policies — and related initiatives to limit livestock numbers and fence off areas of pasture — have not been easy on the herders, says Guo Hongbao, director of the livestock-husbandry bureau in Nagchu county in the southern Tibetan Plateau. “The nomads have made sacrifices for protecting the grasslands,” he says. But he also says that the strategies have paid off. Guo and other officials point to satellite studies showing that the plateau has grown greener in the past three decades2. This increase in vegetation growth, possibly the result of a combination of grazing restrictions and climate change, “has had a surprisingly beneficial effect on climate by dampening surface warming”, says Piao Shilong, a climate modeller at Peking University. But ecologists say that such measurements look only at surface biomass and thus are not a good indicator of grassland health. “Not all vegetation species are equal,” says Wang. 
“And satellites can't see what's going on underground.” This is particularly important in the case of the sedge species that dominate much of the Tibetan Plateau, and that are the preferred food of livestock. These species, part of the Kobresia genus, grow only 2 centimetres above the surface and have a dense, extensive root mat that contains 80% of the total biomass. Studies of pollen in lake sediments show that Kobresia and other dominant sedges emerged about 8,000 years ago, when early Tibetans began burning forests to convert them to grasslands for livestock3. The prehistoric grazing helped to create the thick root mat that blankets the vast plateau and that has stored 18.1 billion tonnes of organic carbon. But Kobresia plants are being driven out by other types of vegetation, and there is a risk that the locked-up carbon could be released and contribute to global warming. Every now and then on the trip to Lhasa, we passed fields blooming with the beautiful red and white flowers of Stellera chamaejasme, also known as wolf poison. “It's one of a dozen poisonous species that have increasingly plagued China's grasslands,” says Zhao Baoyu, an ecologist at the Northwest Agriculture and Forestry University in Yangling. Zhao and his colleagues estimated that poisonous weeds have infested more than 160,000 square kilometres of the Tibetan grasslands, killing tens of thousands of animals a year4. Herders also report seeing new grass species and weeds emerge in recent years. Although most are not toxic, they are much less nutritious than Kobresia pastures, says Karma Phuntsho, a specialist on natural-resource management at ICIMOD. “Some parts of the plateau may seem lush to an untrained eye,” he says. 
“But it's a kind of 'green desertification' that has little value.” In one unpublished study of the northeastern Tibetan Plateau, researchers found that Kobresia pastures that had gone ungrazed for more than a decade had been taken over by toxic weeds and much taller, non-palatable grasses: the abundance of the sedge species had dropped from 40% to as low as 1%. “Kobresia simply doesn't stand a chance when ungrazed,” says Elke Seeber, a PhD student at the Senckenberg Natural History Museum in Görlitz, Germany, who conducted the field experiment for a project supported by the German Research Foundation (DFG). The changes in vegetation composition have important implications for long-term carbon storage, says project member Georg Guggenberger, a soil scientist at Leibniz University of Hanover in Germany. In moderately grazed Kobresia pastures, up to 60% of the carbon that is fixed by photosynthesis went into the roots and soil instead of the above-ground vegetation — three times the amount seen in ungrazed plots5. This underground organic carbon is much more stable than surface biomass, which normally decomposes within a couple of years and releases its stored carbon into the air. So a shift from Kobresia sedge to taller grasses on the plateau will ultimately release a carbon sink that has remained buried for thousands of years, says Guggenberger. Critics of the grazing restrictions in Tibet say that the government has applied them in a blanket way, without proper study and without taking on board scientific findings. In some cases, they make sense, says Tsechoe Dorji, an ecologist at the ITPR's Lhasa branch, who grew up in a herder family in western Tibet. “A total grazing ban can be justified in regions that are severely degraded”, he says, but he objects to the simple system used by the government to classify the health of the grasslands. 
It considers only the percentage of land covered by vegetation and uses the same threshold for all areas, without adjusting for elevation or natural moisture levels. “Pastures with 20% vegetation cover, for instance, could be severely degraded at one place but totally normal at another,” says Dorji. This means that some of the grasslands that are classified as severely degraded are actually doing fine — and the grazing ban is hurting the ecosystem. “Having a sweeping grazing policy regardless of geographical variations is a recipe for disaster,” he says. China's grazing policy is only one of several factors responsible for such damaging changes, say the researchers. Pollution, global warming and a rash of road-building and other infrastructure-construction projects have all taken a toll on the grasslands. Ten days after leaving Xining, we caught a glimpse of Tibet's future when we arrived at Nam Tso, a massive glacial lake in the southern part of the plateau. Here Dorji and Kelly Hopping, a graduate student at Colorado State University in Fort Collins, have been turning the clock forward by surrounding small patches of grassland with open-topped plastic chambers that artificially raise the temperature. These experiments are important because Tibet is a hotspot in terms of climate change; the average temperature on the plateau has soared by 0.3–0.4 °C per decade since 1960 — about twice the global average. In trials over the past six years, they found that Kobresia pygmaea, the dominant sedge species, develops fewer flowers and blooms much later under warming conditions6. Such changes, says Dorji, “may compromise its reproductive success and long-term competitiveness”. At the experimental site, the artificially warmed pastures have been taken over by shrubs, lichens, toxic weeds and non-palatable grass species, says Hopping. 
But when the researchers added snow to some heated plots, Kobresia did not lose out to the other plants, which suggests that the loss of soil moisture might be driving the shift in species. Higher temperatures increase evaporation, which can be especially potent at high elevations. “This is not good news for species with shallow roots”, such as the Kobresia favoured by livestock, she says. Piao says that “this interplay between temperature and precipitation illustrates the complexity of ecosystem responses to climate change”. But researchers have too little information at this point to build models that can reliably predict how global warming will affect the grasslands, he says. To fill that gap, Wang and his colleagues started a decade-long experiment in 2013 at Nagchu, where they are using heat lamps to warm patches of grassland by precise amounts, ranging from 0.5 °C to 4 °C. They are also varying the amount of rainfall on the plots, and they are measuring a host of factors, such as plant growth, vegetation composition, nutrient cycling and soil carbon content. They hope to improve projections for how the grasslands will change — and also to determine whether there is a tipping point that would lead to an irreversible collapse of the ecosystem, says Piao. A fortnight into the trip, we finally arrived at the outskirts of Lhasa. At the end of the day, herders were rounding up their sheep and yaks in the shadows cast by snow-capped peaks. They and the other pastoralists across the plateau will have a difficult time in coming decades, says Nyima. Climate change was not a consideration when grassland policies were conceived over a decade ago, and so “many pastoralists are ill prepared for a changing environment”, he says. 
“There is a pressing need to take this into account and identify sound adaptation strategies.” As a start, researchers would like to conduct a comprehensive survey of plant cover and vegetation composition at key locations across different climate regimes. “The information would form the baseline against which future changes can be measured,” says Wang. Many scientists would also support changes to the grazing ban and fencing policies that have harmed the grasslands. Dorji says that the government should drop the simplistic practice of 'one policy fits all' across the plateau and re-evaluate whether individual regions are degraded enough to merit a ban on grazing. “Unless the pastures are severely degraded, moderate grazing will help to restore the ecosystems,” he says. But scientists are not banking on such reforms happening soon. Policies in Tibet are driven less by scientific evidence than by bureaucrats' quest for power and funds, says a Lhasa-based researcher who requests anonymity for fear of political repercussions. Local officials often lobby Beijing for big investments and expensive projects in the name of weiwen (meaning 'maintaining stability'). Because resistance to Chinese control over Tibet continues to flare up, the government is mostly concerned with maintaining political stability, and it does not require local officials to back up plans with scientific support, says the researcher. “As long as it's for weiwen, anything goes.” But officials such as Guo say that their policies are intended to help Tibet. “Although there is certainly room for improvement in some of the policies, our primary goals are to promote economic development and protect the environment,” he says. Far away from Lhasa, herders such as Dodra say that they are not seeing the benefits of government policies. After we finish our visit at his home, Dodra's entire family walks us into the courtyard — his mother-in-law spinning a prayer wheel and his children trailing behind. 
It has stopped snowing, and the sky has turned a crystal-clear, cobalt blue. “The land has served us well for generations,” says Dodra as he looks uneasily over his pasture. “Now things are falling apart — but we don't get a say about how best to safeguard our land and future.”


News Article | November 24, 2016
Site: www.eurekalert.org

A common species of Asian tree frog may actually be two separate species, according to new genetic data collected by an international group of scientists. If the two groups of frogs are confirmed to be different species, assigning their scientific names may require searching historical records of foreign explorers in Japan during the 1800s. Before the frogs are officially recognized as two separate species, researchers will test whether individual frogs from the two groups have unique physical or behavioral features and whether they can produce healthy offspring. The project began when researchers at European universities expanded their studies on sex determination and population dynamics in amphibians to include Asian species. The species of tree frog that they chose, Hyla japonica, is found throughout Japan, the Korean peninsula, eastern China, and eastern Russia. Collaborators around the world began sending genetic samples from local frog populations to discover their evolutionary relationships. The data revealed evolutionarily distinct groups of frogs in Japan, the Korean peninsula, and eastern Russia. Ancestors of the modern frog populations likely traveled either into or out of Japan by two separate routes: from the North on a chain of islands between Russia and Japan, and from the South along a land bridge on the Philippine Sea Plate between South Korea and Japan. Japanese H. japonica populations may have been isolated into separate East and West groups. Researchers are exploring this possibility in more detail with an ongoing research project led by Ikuo Miura, PhD, an Associate Professor at the Amphibian Research Center of Hiroshima University. The same separation between East and West Japan is known in other species of frogs and skinks. 
Miura explains that the scientific community has no definitive information about exactly what caused the divide between East and West Japan, but suggests the possibility of the expansion of an ancient basin associated with volcanic activity in central Japan. Miura and Yuya Higaki, a fourth-year bachelor's degree student, are currently running genetic analysis on 50 populations of H. japonica from across Japan. They will present their preliminary results on November 26th at the annual conference of the Herpetological Society of Japan. This project is part of Miura's larger research interests in sex determination and its influence on speciation and evolution. If H. japonica is recognized as two separate species, it will be challenging for researchers to decide which species should keep the original name due to the mystery surrounding which population of H. japonica was used for the original species characterization in 1858. The German-British naturalist Albert Gunther named H. japonica after examining a specimen collected years earlier, potentially in 1826, by Philipp Siebold and Heinrich Burger, German botanists and physicians who were among the first Westerners granted official access to Japan. The modern research team visited the British Museum of Natural History to inspect the original specimen, but the location where Siebold and Burger collected the first H. japonica is recorded only as "Japan." For now, naming the species will remain a historical mystery secondary to the ongoing scientific questions. The current research paper is published in the November 23, 2016 issue of BioMed Central Evolutionary Biology. Authors of the paper are based at the University of Lausanne (Switzerland), Leibniz-Institute of Freshwater Ecology and Inland Fisheries (Germany), Russian Academy of Sciences, Seoul National University, Ewha Woman's University (Republic of Korea), Chinese Academy of Sciences, and Hiroshima University. 
The species Hyla japonica is listed as Least Concern on the International Union for Conservation of Nature (IUCN) Red List. Academic paper citation: Dufresnes C, Litvinchuk SN, Borzee A, Jang Y, Li J, Miura I, Perrin N, Stock M. Phylogeography reveals an ancient cryptic radiation in East-Asian tree frogs (Hyla japonica group) and complex relationships between continental and island lineages. BioMed Central Evolutionary Biology. 23 November 2016.


News Article | November 20, 2015
Site: www.nanotech-now.com

Abstract: Deciding whether to cook or toss a steak that's been in the fridge for a few days calls for a sniff test. This generally works well for home cooks. But food manufacturers that supply tons of meats to consumers require more reliable measures. In a new journal called ACS Sensors, scientists report a simple method that uses nanotubes to quickly detect spoilage. It could help make sure meats are safe when they hit store shelves. Transporting meats and seafood from the farm or sea to the market while they're still fresh is a high priority. But telling whether a product has gone bad isn't a simple process. Current strategies for measuring freshness can be highly sensitive to spoilage but require bulky, slow equipment, which prevents real-time analysis. Some newer methods designed to speed up the testing process have fallen short in sensitivity. Yanke Che and colleagues wanted to develop one simple test that could deliver both rapid and sensitive results. The researchers turned to highly fluorescent, hollow nanotubes that grow dim when they react with compounds given off by meat as it decomposes. To test the nanotubes, the team sealed commercial samples — 1 gram each — of pork, beef, chicken, fish and shrimp in containers for up to four days. When they exposed the portable system to a teaspoon of vapor emitted by the samples, it reacted in under an hour, fast enough to serve as a real-time measure of freshness. The researchers also found that if the tubes' glow dulled by more than 10 percent, this meant a sample was spoiled. The authors acknowledge funding from the Chinese Academy of Sciences. About American Chemical Society The American Chemical Society is a nonprofit organization chartered by the U.S. Congress. With more than 158,000 members, ACS is the world's largest scientific society and a global leader in providing access to chemistry-related research through its multiple databases, peer-reviewed journals and scientific conferences. 
Its main offices are in Washington, D.C., and Columbus, Ohio. Contacts: Michael Bernstein, 202-872-6042; Yanke Che, Ph.D., Key Laboratory of Photochemistry, Institute of Chemistry, Chinese Academy of Sciences, Beijing, China.
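The pass/fail criterion reported for the nanotube sensor reduces to a simple relative-intensity threshold: a sample counts as spoiled when the tubes' fluorescence dims by more than 10 percent. As a rough illustration only — the function names and the baseline/reading values below are hypothetical, not from the ACS Sensors paper; only the 10 percent threshold comes from the article:

```python
def quenching_percent(baseline: float, measured: float) -> float:
    """Percentage drop in fluorescence intensity relative to a fresh baseline."""
    return 100.0 * (baseline - measured) / baseline

def is_spoiled(baseline: float, measured: float, threshold: float = 10.0) -> bool:
    """Flag a sample as spoiled if the fluorescence dimmed past the threshold."""
    return quenching_percent(baseline, measured) > threshold

# Hypothetical readings: a drop from 100 to 85 is a 15% dimming -> spoiled;
# a drop to 95 is only 5% -> still fresh.
print(is_spoiled(100.0, 85.0))  # True
print(is_spoiled(100.0, 95.0))  # False
```

In practice the baseline would come from the sensor's fluorescence before exposure to the meat's vapor, and the measured value after the roughly one-hour reaction time the article describes.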


News Article | October 26, 2016
Site: www.eurekalert.org

Severe weather is becoming more and more frequent around the globe. Often, the difference between minor property damage and something far more devastating is the ability to predict a storm in the first place, especially when it comes to the short-lived but severe variety. "Severe convective storms are intense local phenomena. Research on such phenomena requires high-resolution observational networks, and high-resolution numerical simulation and prediction tools, as well as researchers well trained on the subject," said XUE Ming, a professor of meteorology and the director of the Center for Analysis and Prediction of Storms at the University of Oklahoma. XUE is also an adjunct professor at the School of Atmospheric Sciences at Nanjing University, and a member of the Key Laboratory of Mesoscale Severe Weather/Ministry of Education, affiliated with Nanjing University. "For the past decade, rapid progress has been made in establishing modern, integrated meteorological observation systems in China," said XUE, who also pointed out the availability of mobile research weather radars and other instruments needed to study convective storms. "The time was right to carry out systematic and organized research on convective storms, with the goal of improving the prediction skills of such weather systems." With that, the "Observation, Prediction and Analysis of severe Convection of China (OPACC)" project was formed. The five-year venture was started in 2013, and has more than $5 million in funding from the Chinese Ministry of Science and Technology. XUE serves as the principal scientist, and Nanjing University leads the project. Other members include Peking University, the Institute of Atmospheric Physics, the National Meteorological Center, the Nanjing University of Information Science and Technology, Lanzhou University, Zhejiang University, and the Beijing Institute of Urban Meteorology. 
Since 2013, project scientists have published 224 papers on convective weather in peer-reviewed journals. To help publicize the OPACC project, and the research results so far, Advances in Atmospheric Sciences - sponsored by the Institute of Atmospheric Physics of the Chinese Academy of Sciences and published by Springer - organized a special collection of 16 original papers spanning the October and November 2016 issues. The papers cover severe convection subjects including extreme precipitation climatology, data assimilation, numerical simulation and prediction, radar observation, and analyses for convective weather, as well as a study on tornadic events. XUE, a co-chief editor of the journal with LYU Daren and ZHU Jiang, authored the collection's preface. XUE, as an expert on tornado dynamics and convective-scale weather in general, also contributed several papers to the collection, including a featured highlight article on several recent tornadoes in China. "In China, heavy and extreme precipitation, as well as typhoons, have received much more attention in research over the past half a century or so, because they tend to produce widespread damage, and their large sizes also make observational data on these systems much more readily available," XUE said. Observational data on tornadoes is much more difficult to collect, as the equipment has to be physically close to the twister. The tools are more readily available in the United States, where, with more than a thousand tornadoes a year, tornado research and prediction have been a top priority. China has only about one-tenth as many tornadoes as the United States. However, China's tornadoes tend to be in heavily populated areas along the coast, and cause significant damage and loss of life. The tornadoes introduced in XUE's paper were spawned from a variety of weather conditions. 
The paper serves as a call to action for more tornado research in China, as well as the development and implementation of forecasting and warning tools and operations. "The exact causes of tornado formation are still not well understood... Without understanding the tornado formation processes, and the necessary and sufficient conditions for tornadogenesis, it is difficult to predict tornadoes, or to provide accurate and effective tornado warnings to the public," XUE said. "Research on tornadoes will help us understand [these] questions, and to construct numerical models and establish methods and tools for predicting tornadoes in advance, and... to provide advanced warnings to the public and reduce loss of life and property." XUE co-authored another paper that addresses this prediction problem, focusing on the July 21, 2012 flash flood in the Beijing area. Seventy-nine people died, and the area suffered 11.64 billion Chinese Yuan (more than $1.8 billion U.S. dollars) in damages. XUE's study uses the 2012 flood to demonstrate the efficacy of an ensemble forecasting system that uses a four-kilometer spacing grid to predict extreme rainfall. The system can successfully predict accumulated precipitation of more than 400 millimeters, and it can calculate the probability of extreme precipitation. "Compared to single deterministic forecasts, ensemble forecasts can better capture extreme events and, at the same time, provide information on the reliability or uncertainty of the forecasts. The ensemble mean forecast is often more accurate than individual forecasts," XUE said. "Rapidly updated ensemble probabilistic prediction at convection-resolving resolutions is the future for the forecasting and warning of severe convective weather, and the effective assimilation of very high-resolution weather observations into the forecasting models is also essential." 
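The ensemble quantities XUE describes can be illustrated with a toy calculation: the probability of extreme precipitation at a grid point is simply the fraction of ensemble members whose forecast accumulation exceeds a threshold, and the ensemble mean is the average over members. A minimal sketch, using hypothetical member values rather than any OPACC output:

```python
def exceedance_probability(members_mm: list[float], threshold_mm: float) -> float:
    """Fraction of ensemble members forecasting accumulation above the threshold."""
    return sum(m > threshold_mm for m in members_mm) / len(members_mm)

def ensemble_mean(members_mm: list[float]) -> float:
    """Average accumulated precipitation across all ensemble members."""
    return sum(members_mm) / len(members_mm)

# Ten hypothetical members' accumulated rainfall (mm) at one grid point:
members = [320.0, 455.0, 410.0, 290.0, 510.0, 380.0, 430.0, 360.0, 405.0, 470.0]
print(exceedance_probability(members, 400.0))  # 0.6 -> 60% chance of >400 mm
print(ensemble_mean(members))                  # 403.0
```

A real convection-resolving system evaluates this at every grid point of every forecast cycle, but the probabilistic product delivered to forecasters is this same member-counting idea.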
XUE hopes that the special issue of Advances in Atmospheric Sciences will heighten awareness regarding convective weather research associated with the OPACC project, as well as potentially attract more scientists to collaborate in future research endeavors.


News Article | November 15, 2016
Site: en.prnasia.com

SHANGHAI, Nov. 15, 2016 /PRNewswire/ -- Semiconductor Manufacturing International Corporation ("SMIC"; NYSE: SMI; SEHK: 981), one of the leading semiconductor foundries in the world and the largest and most advanced foundry in mainland China, and The Institute of Microelectronics of the Chinese Academy of Sciences ("IMECAS") announced the signing of a cooperation agreement for a MEMS R&D foundry platform to jointly develop MEMS sensor standard processes and build a complete MEMS supply chain. According to the agreement, SMIC and IMECAS will work together closely to take advantage of IMECAS's experience in MEMS sensor design and packaging technology and SMIC's standardized process technology platforms and industry and market influence. Starting with the development of a MEMS environmental sensor and combining the features of other types of MEMS sensors, SMIC and IMECAS will collaborate to create a platform-based standard as well as mass production technologies to shorten the development cycle from design to production, thus helping the MEMS industry grow more effectively and efficiently. "SMIC's R&D team has made a lot of achievements in developing new sensor technology platforms and introducing new customers. SMIC is willing to open our platforms to support commercialized production and the R&D of universities and research institutions," said Dr. Tzu-Yin Chiu, Chief Executive Officer and Executive Director of SMIC. "SMIC and IMECAS have cooperated in numerous logic process development projects. This time we will expand our collaboration and promote the R&D of complete standardized MEMS sensor technologies to help integrate and improve the MEMS supply chain." 
Ye Tianchun, Director of IMECAS, visited SMIC's middle-end production line of MEMS sensors and said, "Through the cooperation between SMIC and IMECAS, we can exploit our advantages and jointly build an open MEMS technology service platform and an electronic information integration platform for the MEMS supply chain. With the integration of design, manufacturing, packaging, testing, public platforms and venture investment, we can form a supply-chain ecosystem and support the development of a global as well as domestic Chinese MEMS industry."

Semiconductor Manufacturing International Corporation ("SMIC"; NYSE: SMI; SEHK: 981) is one of the leading semiconductor foundries in the world and the largest and most advanced foundry in mainland China. SMIC provides integrated circuit (IC) foundry and technology services on process nodes from 0.35 micron to 28 nanometer. Headquartered in Shanghai, China, SMIC has an international manufacturing and service base. In China, SMIC has a 300mm wafer fabrication facility (fab) and a 200mm mega-fab in Shanghai; a 300mm mega-fab and a majority-owned 300mm fab for advanced nodes in Beijing; 200mm fabs in Tianjin and Shenzhen; and a majority-owned joint-venture 300mm bumping facility in Jiangyin; additionally, in Italy SMIC has a majority-owned 200mm fab. SMIC also has marketing and customer service offices in the U.S., Europe, Japan, and Taiwan, and a representative office in Hong Kong. For more information, please visit www.smics.com.

This press release contains, in addition to historical information, "forward-looking statements" within the meaning of the "safe harbor" provisions of the U.S. Private Securities Litigation Reform Act of 1995. These forward-looking statements, including statements under "Fourth Quarter 2016 Guidance" and "CapEx Summary" and the statements contained in the quotes of our CEO, are based on SMIC's current assumptions, expectations and projections about future events.
SMIC uses words like "believe," "anticipate," "intend," "estimate," "expect," "project," "target" and similar expressions to identify forward-looking statements, although not all forward-looking statements contain these words. These forward-looking statements involve significant risks, both known and unknown, uncertainties and other factors that may cause SMIC's actual performance, financial condition or results of operations to be materially different from those suggested by the forward-looking statements, including, among others, risks associated with the cyclical nature of the semiconductor industry, changes in demand for our products, competition in our markets, our reliance on a small number of customers, orders or judgments from pending litigation, intensive intellectual property lawsuits in the semiconductor industry, financial stability in end markets, general economic conditions and fluctuations in currency exchange rates. Investors should consider the information contained in SMIC's filings with the U.S. Securities and Exchange Commission ("SEC"), including its annual report on Form 20-F filed with the SEC on April 25, 2016, especially the consolidated financial statements, and such other documents that SMIC may file with the SEC or The Hong Kong Stock Exchange Limited ("SEHK") from time to time, including current reports on Form 6-K. Other unknown or unpredictable factors could also have material adverse effects on SMIC's future results, performance or achievements. In light of these risks, uncertainties, assumptions and factors, the forward-looking events discussed in this press release may not occur. You are cautioned not to place undue reliance on these forward-looking statements, which speak only as of the date stated, or if no date is stated, as of the date of this press release.
Except as may be required by law, SMIC undertakes no obligation and does not intend to update any forward-looking statement, whether as a result of new information, future events or otherwise.

The Institute of Microelectronics of the Chinese Academy of Sciences ("IMECAS") has become a key research institution that integrates fundamental research with strongly industry-oriented technology development. It also encompasses two centers and one college under CAS: the CAS R&D Center for The Internet of Things, the CAS EDA Center (Electronic Design Automation Center) and the College of Microelectronics of UCAS (University of the Chinese Academy of Sciences). There are 14 departments at IMECAS, including two key CAS laboratories, four R&D centers for industry service, five R&D centers for typical applications, and three R&D centers for core product development. Among these, the Smart Sensing Center conducts research and development on sensor design, fabrication, packaging and testing. The CAS R&D Center for The Internet of Things has also established public service platforms, including a MEMS Design and Manufacture Platform and SIP Packaging Platform, a Communication System and Chip Design Test and Verification Platform, and more. For more information, please visit http://english.ime.cas.cn.


News Article | February 22, 2017
Site: www.nature.com

A laboratory in Wuhan is on the cusp of being cleared to work with the world’s most dangerous pathogens. The move is part of a plan to build between five and seven biosafety level-4 (BSL-4) labs across the Chinese mainland by 2025, and has generated much excitement, as well as some concerns. Some scientists outside China worry about pathogens escaping, and the addition of a biological dimension to geopolitical tensions between China and other nations. But Chinese microbiologists are celebrating their entrance to the elite cadre empowered to wrestle with the world’s greatest biological threats. “It will offer more opportunities for Chinese researchers, and our contribution on the BSL‑4-level pathogens will benefit the world,” says George Gao, director of the Chinese Academy of Sciences Key Laboratory of Pathogenic Microbiology and Immunology in Beijing. There are already two BSL-4 labs in Taiwan, but the National Bio-safety Laboratory, Wuhan, would be the first on the Chinese mainland. The lab was certified as meeting the standards and criteria of BSL-4 by the China National Accreditation Service for Conformity Assessment (CNAS) in January. The CNAS examined the lab’s infrastructure, equipment and management, says a CNAS representative, paving the way for the Ministry of Health to give its approval. A representative from the ministry says it will move slowly and cautiously; if the assessment goes smoothly, it could approve the laboratory by the end of June. BSL-4 is the highest level of biocontainment: its criteria include filtering air and treating water and waste before they leave the laboratory, and stipulating that researchers change clothes and shower before and after using lab facilities. Such labs are often controversial. The first BSL-4 lab in Japan was built in 1981, but operated with lower-risk pathogens until 2015, when safety concerns were finally overcome. 
The expansion of BSL-4-lab networks in the United States and Europe over the past 15 years — with more than a dozen now in operation or under construction in each region — also met with resistance, including questions about the need for so many facilities. The Wuhan lab cost 300 million yuan (US$44 million), and to allay safety concerns it was built far above the flood plain and with the capacity to withstand a magnitude-7 earthquake, although the area has no history of strong earthquakes. It will focus on the control of emerging diseases, store purified viruses and act as a World Health Organization ‘reference laboratory’ linked to similar labs around the world. “It will be a key node in the global biosafety-lab network,” says lab director Yuan Zhiming. The Chinese Academy of Sciences approved the construction of a BSL-4 laboratory in 2003, and the epidemic of SARS (severe acute respiratory syndrome) around the same time lent the project momentum. The lab was designed and constructed with French assistance as part of a 2004 cooperative agreement on the prevention and control of emerging infectious diseases. But the complexity of the project, China’s lack of experience, difficulty in maintaining funding and long government approval procedures meant that construction wasn’t finished until the end of 2014. The lab’s first project will be to study the BSL-3 pathogen that causes Crimean–Congo haemorrhagic fever: a deadly tick-borne virus that affects livestock across the world, including in northwest China, and that can jump to people. Future plans include studying the pathogen that causes SARS, which also doesn’t require a BSL-4 lab, before moving on to Ebola and the West African Lassa virus, which do. Some one million Chinese people work in Africa; the country needs to be ready for any eventuality, says Yuan. 
“Viruses don’t know borders.” Gao travelled to Sierra Leone during the recent Ebola outbreak, allowing his team to report the speed with which the virus mutated into new strains [1]. The Wuhan lab will give his group a chance to study how such viruses cause disease, and to develop treatments based on antibodies and small molecules, he says. The opportunities for international collaboration, meanwhile, will aid the genetic analysis and epidemiology of emergent diseases. “The world is facing more new emerging viruses, and we need more contribution from China,” says Gao. In particular, the emergence of zoonotic viruses — those that jump to humans from animals, such as SARS or Ebola — is a concern, says Bruno Lina, director of the VirPath virology lab in Lyon, France. Many staff from the Wuhan lab have been training at a BSL-4 lab in Lyon, which some scientists find reassuring. And the facility has already carried out a test-run using a low-risk virus. But worries surround the Chinese lab, too. The SARS virus has escaped from high-level containment facilities in Beijing multiple times, notes Richard Ebright, a molecular biologist at Rutgers University in Piscataway, New Jersey. Tim Trevan, founder of CHROME Biosafety and Biosecurity Consulting in Damascus, Maryland, says that an open culture is important to keeping BSL-4 labs safe, and he questions how easy this will be in China, where society emphasizes hierarchy. “Diversity of viewpoint, flat structures where everyone feels free to speak up and openness of information are important,” he says. Yuan says that he has worked to address this issue with staff. “We tell them the most important thing is that they report what they have or haven’t done,” he says. And the lab’s international collaborations will increase openness. “Transparency is the basis of the lab,” he adds. The plan to expand into a network heightens such concerns.
One BSL-4 lab in Harbin is already awaiting accreditation; the next two are expected to be in Beijing and Kunming, the latter focused on using monkey models to study disease. Lina says that China’s size justifies this scale, and that the opportunity to combine BSL-4 research with an abundance of research monkeys — Chinese researchers face less red tape than those in the West when it comes to research on primates — could be powerful. “If you want to test vaccines or antivirals, you need a non-human primate model,” says Lina. But Ebright is not convinced of the need for more than one BSL-4 lab in mainland China. He suspects that the expansion there is a reaction to the networks in the United States and Europe, which he says are also unwarranted. He adds that governments will assume that such excess capacity is for the potential development of bioweapons. “These facilities are inherently dual use,” he says. The prospect of ramping up opportunities to inject monkeys with pathogens also worries, rather than excites, him: “They can run, they can scratch, they can bite.” Trevan says China’s investment in a BSL-4 lab may, above all, be a way to prove to the world that the nation is competitive. “It is a big status symbol in biology,” he says, “whether it’s a need or not.”


October 31, 2016 - The Chinese Academy of Sciences and Clarivate Analytics today released "Research Fronts 2016", an annual report identifying prominent areas of scientific research over the past year. This is the third collaborative report from the two organizations. The joint paper identifies 180 key research fronts, including 100 hot and 80 emerging fronts, based on a comprehensive analysis of scientific literature citations. The analysis generated 12,188 research fronts in the Essential Science Indicators (ESI) database covering 2009 through 2015. Research fronts are specialties discovered when scientists cite one another's work, reflecting a specific commonality in the research, which can be experimental data, a concept or hypothesis, or even a method. Working in collaboration with the Chinese Academy of Sciences, Clarivate bibliometric experts utilized the ESI database, a web-based research analytics platform and a unique compilation of science performance metrics and trend data based on scholarly paper publication counts and citation data from the Web of Science. Each research front builds on recently published "core" or foundational journal articles. The 2016 report also features an analysis of the current and potential performance of six leading countries in the 180 research fronts: the USA, China, the UK, Germany, France and Japan. Analysts at the Institutes of Science and Development and the National Science Library of the Chinese Academy of Sciences also analyzed the 180 research fronts provided by Clarivate in depth and highlighted 28 research fronts of particular interest. Read the full report: "Research Fronts 2016". Learn more about Essential Science Indicators, InCites and Web of Science. The Chinese Academy of Sciences is the linchpin of China's drive to explore and harness high technology and the natural sciences for the benefit of China and the world.
Comprising a comprehensive research and development network, a merit-based learned society and a system of higher education, CAS brings together scientists and engineers from China and around the world to address both theoretical and applied problems using world-class scientific and management approaches. Since its founding, CAS has fulfilled multiple roles -- as a national team and a locomotive driving national technological innovation, a pioneer in supporting nationwide S&T development, a think tank delivering S&T advice and a community for training young S&T talent. For more information, please visit http://english. . Clarivate Analytics accelerates the pace of innovation by providing trusted insights and analytics to customers around the world, enabling them to discover, protect and commercialize new ideas, faster. Formerly the Intellectual Property and Science business of Thomson Reuters, we've been assisting our customers for over 60 years. Now as an independent company with over 4,000 employees, operating in more than 100 countries around the world, we remain expert, objective and agile. For more information, please visit us at Clarivate.com.
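The co-citation approach the report describes can be sketched with toy data. Everything below (the paper names, the reference lists and the threshold of two shared citations) is invented for illustration; it is not the actual ESI methodology:

```python
from itertools import combinations
from collections import Counter

# Hypothetical citing papers, each listing the "core" papers it cites.
citing_papers = {
    "paper_A": ["core1", "core2", "core3"],
    "paper_B": ["core1", "core2"],
    "paper_C": ["core2", "core3"],
    "paper_D": ["core1", "core2", "core4"],
}

# Count how often each pair of core papers is cited together.
co_citations = Counter()
for refs in citing_papers.values():
    for pair in combinations(sorted(refs), 2):
        co_citations[pair] += 1

# Pairs co-cited at least `threshold` times are linked into a "research front".
threshold = 2
front_links = [pair for pair, n in co_citations.items() if n >= threshold]
print(front_links)  # [('core1', 'core2'), ('core2', 'core3')]
```

In a real bibliometric pipeline these pairwise links would then be clustered into named specialties, but the core signal is the same: papers repeatedly cited together define a front.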


News Article | October 12, 2016
Site: www.nanotech-now.com

Abstract: Scientists at Moscow Institute of Physics and Technology (MIPT) and several universities in the US came up with a technology for faster structure analysis of receptor proteins, which are important for human health. An international team of scientists has learnt how to determine the spatial structure of a protein obtained with an X-ray laser using the sulfur atoms it contains. This development is the next stage in the project of a group led by Vadim Cherezov to create an effective method of studying receptor proteins. A detailed description of the study has been published in the journal Science Advances. Receptor proteins (GPCRs) allow signals to be transmitted within cells, which, in turn, enables the cells to obtain information about their environment and interact with one another. As a result, we are able to see, feel, maintain blood pressure etc., i.e. everything that is needed for the functioning of our bodies. Any disorders in the way these proteins work can result in serious consequences, such as blindness. Developing medicines to restore the normal function of receptors is not possible without a precise understanding of the way in which GPCRs operate, which, as with other proteins, is determined by their spatial structure, i.e. the way in which the protein folds. The best method of doing this is to use X-ray crystallography. For X-rays, a crystal is a three-dimensional diffraction lattice in which the radiation is scattered on the atoms. A particular problem with this method is obtaining protein crystals. In order to do this, receptor proteins have to be extracted from a cell membrane and placed in a special lipid environment. Then, by selecting the temperature and using substances to speed up the nucleation process, the protein crystallizes. One challenge with GPCRs is that they are highly mobile and dynamic molecules that frequently change their spatial structure. 
This means that it is difficult to grow the large crystals of these proteins that are needed for the classical diffraction procedure. This procedure involves exposing the crystal to radiation at different angles for a relatively long period of time. X-rays ionize the atoms, which destroys the protein molecules. Large crystals of a few dozen microns are what is needed to compensate for this effect. Thanks to the new experimental method of Serial Femtosecond Crystallography, it is now possible to solve this problem. The method has been developed over the past few years by an international team of scientists from Arizona State University and the University of Zurich, SLAC National Accelerator Laboratory in Stanford, the iHuman Institute at ShanghaiTech University, the Institute of Biophysics of the Chinese Academy of Sciences, the CFEL center in Hamburg, the University of Southern California and MIPT. One of the leaders of the team is Vadim Cherezov, a professor at the University of Southern California and MIPT. The method is based on the use of new-generation X-ray sources - free-electron lasers. The radiation they emit is so powerful that it fully ionizes atoms in a crystal as it passes through, essentially destroying it. However, as the laser pulse has a very short duration (a few femtoseconds, 10^-15 s), a diffraction pattern can be recorded before the atoms move from their position. This has meant that the scientists are able to avoid the difficulties associated with radiation damage. As the crystal is destroyed immediately, it is not possible to measure it at different orientations. To solve this problem, scientists collect and process data from several crystals. Using a special injector, the researchers push the lipid environment in which the crystals are situated into the path of the X-ray pulses. The whole process is similar to squeezing toothpaste out of a tube.
The result is millions of diffraction images that need to be processed: selecting images with crystals, finding their orientation, and then putting them together in a three-dimensional diffraction pattern. Two parameters must be known in order to decipher the structure: the amplitude and the phase of the reflected radiation. The amplitude value is measured on a detector during the experiment; however, determining the phase is a complex task, and there are a number of ways of solving the problem. For example, if we know of a certain protein that has a similar structure, we can use it as a first approximation. Of course, this is not possible in all cases. Another popular method is to use an effect known as anomalous scattering. This occurs when the X-ray wavelength is close to the electron transition energy in the atoms, which results in the wave being absorbed and re-emitted. This causes a change in the amplitudes and phases. If the amplitudes are measured very precisely, the differences between them can be used to reconstruct the phases. However, most of the atoms that make up proteins (carbon, oxygen and nitrogen) are not suitable for this. A relatively heavy element found in almost all proteins is sulfur, and this is the element the researchers used in their most recent study to reconstruct the phases. Special software had to be developed specifically for the task. Out of 7 million images obtained, the researchers had to pick out those with diffracted reflections. They then had to determine the orientation of the crystal and the intensity of all reflections and subsequently collate all the data obtained. In all, 600,000 diffraction patterns were found and then used to successfully reconstruct the structure of a protein with a resolution of 2.5 Å. By combining the data with the results obtained at a different X-ray wavelength, the researchers were able to increase the resolution to 1.9 Å.
This level of precision not only enables the structure of receptor proteins to be determined with high accuracy, but also allows scientists to see molecules of water, ions and lipids that surround them, which is extremely important for understanding how proteins function and modeling their interaction with other substances. "When I participated in a study to determine the structure of the first receptor protein, it took me about a year to obtain crystals that were large enough to conduct classical X-ray diffraction. We hope that the method we have developed will help to greatly increase the speed of this process," said Prof. Cherezov commenting on the significance of the research. Of the 800 receptor proteins that exist, we currently know the structure of only 34. The experimental method developed by the scientists will significantly speed up the studies of the remaining proteins. This, in turn, will help in developing new and effective drugs to treat a vast number of diseases.
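The image-processing steps described above (hit finding, then merging reflections into one three-dimensional data set) can be sketched in miniature. The frames, Miller indices and peak threshold below are invented for illustration; the real pipeline works on millions of detector images with far more sophisticated software:

```python
from collections import defaultdict

# Hypothetical detector frames: each frame maps a reflection index (h, k, l)
# to a measured intensity. Frames with no crystal in the beam show few peaks.
frames = [
    {(1, 0, 0): 120.0, (0, 1, 0): 80.0, (1, 1, 0): 45.0},  # crystal hit
    {(1, 0, 0): 115.0},                                    # too few peaks: reject
    {(1, 0, 0): 130.0, (0, 1, 0): 70.0, (1, 1, 0): 55.0},  # crystal hit
]

MIN_PEAKS = 3  # a frame counts as a "hit" only with enough Bragg peaks

# Step 1: hit finding - keep only frames that show a diffracting crystal.
hits = [frame for frame in frames if len(frame) >= MIN_PEAKS]

# Step 2: merging - average each reflection's intensity over all hits,
# assembling one consistent three-dimensional diffraction data set.
sums, counts = defaultdict(float), defaultdict(int)
for frame in hits:
    for hkl, intensity in frame.items():
        sums[hkl] += intensity
        counts[hkl] += 1
merged = {hkl: sums[hkl] / counts[hkl] for hkl in sums}
print(merged)  # e.g. {(1, 0, 0): 125.0, ...}
```

The real software must additionally index each hit (determine the crystal's random orientation) before merging, which is the hard part the article alludes to.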


News Article | October 26, 2016
Site: phys.org

Banxing-2 snaps Tiangong-2 and Shenzhou-11 using a fisheye camera. Credit: Chinese Academy of Sciences

Here's a great new view of China's Tiangong II space station, taken by a new 'selfie' satellite. The Banxing-2 satellite is about the size of a desktop printer and was released from the station on Sunday. It has been nicknamed the "Selfie Stick" by Chinese officials and is taking pictures of the station and the docked Shenzhou XI spacecraft. The Chinese astronauts who boarded the station last week aren't just joining the selfie craze; the 25-megapixel camera with wide-angle and infrared imagers has a specific job. "The companion satellite monitors the conditions of Tiangong II and Shenzhou XI all the time, which is helpful in detecting failures," said Chen Hongyu, chief engineer of the satellite program and a researcher with the Chinese Academy of Sciences' Micro-satellite Innovation Institute. The microsatellite has three solar panels, so it can generate enough power to adjust its orbit to shoot pictures of the lab and spacecraft. Its predecessor, Banxing-1, accomplished the same mission for Shenzhou VII in 2008. The Chinese Academy of Sciences says the new model is smaller and has a higher capacity. Now well into their 30-day mission, astronauts Jing Haipeng and Chen Dong boarded China's second version of its "Heavenly Palace" last week. They launched Monday, October 17 from the Jiuquan Satellite Launch Center in the Gobi Desert on a Long March 2F rocket, and Shenzhou-11 completed a fully automated approach and docking to Tiangong-2 on Tuesday. During their mission, the two crew members will perform experiments from 14 different areas including biology, space life science and technological demonstrations. They have set up plant cultivation and growing experiments and have six silkworms on board for a student-based study to see how silkworms produce silk in microgravity.
The crew members are also doing medical testing on themselves, using Tiangong II's onboard ultrasound equipment to scan their cardiovascular and pulmonary systems. They'll also be checking for bone and muscle degradation and tracking any changes to their eyesight. NASA and ESA have found that the majority of astronauts on long-duration space flights aboard the International Space Station have suffered various kinds of vision problems while in space, or upon their return. This 30-day medium-duration mission is China's longest space mission to date, and the main task of the Tiangong crew is to help prepare for longer future missions on a larger, modular space station that, according to reports, China hopes to launch by 2018.


News Article | February 15, 2017
Site: www.eurekalert.org

WASHINGTON - Clinical trials for genome editing of the human germline - adding, removing, or replacing DNA base pairs in gametes or early embryos - could be permitted in the future, but only for serious conditions under stringent oversight, says a new report from the National Academy of Sciences and the National Academy of Medicine. The report outlines several criteria that should be met before allowing germline editing clinical trials to go forward. Genome editing has already entered clinical trials for non-heritable applications, but should be allowed only for treating or preventing diseases or disabilities at this time. Genome editing is not new. But new powerful, precise, and less costly genome editing tools, such as CRISPR/Cas9, have led to an explosion of new research opportunities and potential clinical applications, both heritable and non-heritable, to address a wide range of human health issues. Recognizing the promise and the concerns related to this technology, NAS and NAM appointed a study committee of international experts to examine the scientific, ethical, and governance issues surrounding human genome editing. Human genome editing is already widely used in basic research and is in the early stages of development and trials for clinical applications that involve non-heritable (somatic) cells. These therapies affect only the patient, not any offspring, and should continue for treatment and prevention of disease and disability, using the existing ethical norms and regulatory framework for development of gene therapy. Oversight authorities should evaluate safety and efficacy of proposed somatic applications in the context of the risks and benefits of intended use. However, there is significant public concern about the prospect of using these same techniques for so-called "enhancement" of human traits and capacities such as physical strength, or even for uses that are not possible, such as improving intelligence. 
The report recommends that genome editing for enhancement should not be allowed at this time, and that broad public input and discussion should be solicited before allowing clinical trials for somatic genome editing for any purpose other than treating or preventing disease or disability. "Human genome editing holds tremendous promise for understanding, treating, or preventing many devastating genetic diseases, and for improving treatment of many other illnesses," said Alta Charo, co-chair of the study committee and Sheldon B. Lubar Distinguished Chair and Warren P. Knowles Professor of Law and Bioethics, University of Wisconsin-Madison. "However, genome editing to enhance traits or abilities beyond ordinary health raises concerns about whether the benefits can outweigh the risks, and about fairness if available only to some people." Germline genome editing, in contrast, is contentious because genetic changes would be inherited by the next generation. Many view germline editing as crossing an "ethically inviolable" line, the report says. Concerns raised range from spiritual objections to interfering with human reproduction, to speculation about effects on social attitudes toward people with disabilities, to possible risks to the health and safety of future children. But germline genome editing could provide some parents who are carriers of genetic diseases with their best or most acceptable option for having genetically related children who are born free of these diseases. Heritable germline editing is not ready to be tried in humans. Much more research is needed before it could meet the appropriate risk and benefit standards for clinical trials. The technology is advancing very rapidly, though, making heritable genome editing of early embryos, eggs, sperm, or precursor cells in the foreseeable future "a realistic possibility that deserves serious consideration," the report says.
Although heritable germline genome editing trials must be approached with caution, the committee said, caution does not mean prohibition. At present, heritable germline editing is not permissible in the United States, due to an ongoing prohibition on the U.S. Food and Drug Administration's ability to use federal funds to review "research in which a human embryo is intentionally created or modified to include a heritable genetic modification." A number of other countries have signed an international convention that prohibits germline modification. If current restrictions are removed, and for countries where germline editing would already be permitted, the committee recommended stringent criteria that would need to be met before going forward with clinical trials. They include: (1) absence of reasonable alternatives; (2) restriction to editing genes that have been convincingly demonstrated to cause or strongly predispose to a serious disease or condition; (3) credible pre-clinical and/or clinical data on risks and potential health benefits; (4) ongoing, rigorous oversight during clinical trials; (5) comprehensive plans for long-term multigenerational follow-up; and (6) continued reassessment of both health and societal benefits and risks, with wide-ranging, ongoing input from the public. Policymaking surrounding human genome editing applications should incorporate public participation, and funding of genome editing research should include support to study the socio-political, ethical, and legal aspects and evaluate efforts to build public communication and engagement on these issues. "Genome editing research is very much an international endeavor, and all nations should ensure that any potential clinical applications reflect societal values and be subject to appropriate oversight and regulation," said committee co-chair Richard Hynes, Howard Hughes Medical Institute Investigator and Daniel K. 
Ludwig Professor for Cancer Research, Massachusetts Institute of Technology. "These overarching principles and the responsibilities that flow from them should be reflected in each nation's scientific community and regulatory processes. Such international coordination would enhance consistency of regulation." The study was funded by the Defense Advanced Research Projects Agency, the Greenwall Foundation, the John D. and Catherine T. MacArthur Foundation, U.S. Department of Health and Human Services, U.S. Food and Drug Administration, and the Wellcome Trust, with additional support from the National Academies' Presidents' Circle Fund and the National Academy of Sciences W.K. Kellogg Foundation Fund. The National Academy of Sciences and the National Academy of Medicine are private, nonprofit institutions that, along with the National Academy of Engineering, provide independent, objective analysis and advice to the nation to solve complex problems and inform public policy decisions related to science, technology, and medicine. The Academies operate under an 1863 congressional charter to the National Academy of Sciences, signed by President Lincoln. For more information, visit http://www. . Copies of Human Genome Editing: Science, Ethics, and Governance are available at http://www. or by calling 202-334-3313 or 1-800-624-6242. Reporters may obtain a copy from the Office of News and Public Information (contacts listed above).

Members of the study committee:
- R. Alta Charo (co-chair), Sheldon B. Lubar Distinguished Chair and Warren P. Knowles Professor of Law and Bioethics, University of Wisconsin-Madison
- Richard O. Hynes (co-chair), Investigator, Howard Hughes Medical Institute, and Daniel K. Ludwig Professor for Cancer Research, Massachusetts Institute of Technology, Cambridge
- Ellen Wright Clayton, Craig Weaver Professor of Pediatrics and Professor of Law, Vanderbilt University, Nashville, Tenn.
- Barry S. Coller, David Rockefeller Professor of Medicine, Physician in Chief, and Head, Allen and Frances Adler Laboratory of Blood and Vascular Biology, Rockefeller University, New York City
- Ephrat Levy-Lahad, Director, Fuld Family Department of Medical Genetics, Shaare Zedek Medical Center, Faculty of Medicine, Hebrew University of Jerusalem, Jerusalem
- Luigi Naldini, Professor of Cell and Tissue Biology and of Gene and Cell Therapy, San Raffaele University, and Director, San Raffaele Telethon Institute for Gene Therapy, Milan
- Duanqing Pei, Professor and Director General, Guangzhou Institute of Biomedicine and Health, Chinese Academy of Sciences, Guangzhou, China
- Janet Rossant, Senior Scientist and Chief of Research Emeritus, Hospital for Sick Children, University of Toronto, Toronto
- Dietram A. Scheufele, John E. Ross Professor in Science Communication and Vilas Distinguished Achievement Professor, University of Wisconsin-Madison
- Jonathan Weissman, Professor, Department of Cellular and Molecular Pharmacology, University of California, San Francisco
- Keith R. Yamamoto, Vice Chancellor for Science Policy and Strategy, University of California, San Francisco


News Article | November 14, 2016
Site: www.theenergycollective.com

The development of new nuclear fuels is crucial to the success of new fast reactor designs. Examples include TRISO fuel for HTGRs and molten salt fuel for 21st century iterations of the work done at Oak Ridge in the 1960s. (WNN) Russia has started testing its new type of nuclear fuel, REMIX, at the MIR research reactor at the Research Institute of Atomic Reactors in Dimitrovgrad, which is in the Ulyanovsk region. Rosatom said on 11/3 that REMIX fuel rods manufactured in July had been “immersed in the active zone” of MIR. Development of REMIX (from Regenerated Mixture) fuel is part of state nuclear corporation Rosatom’s strategy to enable better use of recycled uranium and plutonium on an industrial scale in pressurized water reactors. Some of the plutonium may come from nuclear weapons decommissioned as part of international treaties. Russia is also using this inventory of surplus plutonium to make MOX fuel for its BN-800 fast reactor, which was recently connected to the grid to generate electricity. A loop-type research reactor, MIR is designed mainly for testing fuel elements, fuel assemblies and other core components of different types of operating and promising nuclear power reactors. The first data from testing the fuel in MIR will include the “swelling, gassing and distribution of fission products and, of course, the isotopic composition of the used fuel rods,” the head of innovation at the Khlopin Radium Institute, Andrey Belozub, said in the Rosatom statement. Use of the MIR research reactor is an “extremely important step”, Rosatom said, towards full implementation of the project to introduce REMIX into the Russian fuel cycle. According to World Nuclear News, REMIX fuel is produced directly from a non-separated mix of recycled uranium and plutonium from reprocessing used fuel, with a low-enriched uranium (LEU, up to 17% U-235) make-up comprising about 20% of the mix.
This gives fuel initially containing about 1% Pu-239 and 4% U-235, which can sustain burn-up of 50 GWd/t over four years. REMIX fuel can be repeatedly recycled with a 100% core load in current VVER-1000 reactors, and correspondingly reprocessed many times – up to five times, according to Russian nuclear fuel manufacturer Tenex. With less than three fuel loads in circulation, a reactor could thus run for 60 years on the same fuel, with LEU recharge and waste removal on each cycle. (WNN) Canadian reactor designer StarCore Nuclear has applied to the Canadian Nuclear Safety Commission (CNSC) to begin the vendor design review process for its Generation IV high temperature gas reactor (HTGR). The CNSC’s pre-licensing vendor review process is an optional service to provide an assessment of a nuclear power plant design based on a vendor’s reactor technology. The three-phase review is not a required part of the licensing process for a new nuclear power plant, but aims to verify the acceptability of a nuclear power plant design with respect to Canadian nuclear regulatory requirements and expectations. Earlier this year the CNSC agreed to conduct a phase 1 vendor design review for Terrestrial Energy’s integral molten salt reactor design concept. StarCore CEO David Dabney said the company’s application to the CNSC, lodged on 24 October, marked the culmination of eight years’ work. “We are confident that our plant size and technology will enable us to bring safe, clean energy to the many remote sites in Northern Canada that currently have no choice other than to use costly, unreliable and polluting carbon-based fuels,” he said. Montréal-based StarCore, founded in 2008, is focused on developing small modular reactors (SMRs) to provide power and potable water to remote communities in Canada. Its standard HTGR unit would produce 20 MWe (36 MWth), expandable to 100 MWe, from a unit small enough to be delivered by truck.
The helium-cooled reactor uses TRISO fuel, manufactured by BWXT Technologies – spherical particles of uranium fuel coated with carbon, which effectively gives each tiny particle its own primary containment system. Each reactor would require refueling at five-year intervals. StarCore describes its reactor as “inherently safe.” The use of helium, which does not become radioactive, as a coolant means that any loss of coolant would be “inconsequential”, the company says. The reactors would be embedded 50 metres underground in concrete silos sealed with ten-tonne caps. DOE Inks Deal with GE-Hitachi for Laser Enrichment Plant at Paducah The Department of Energy (DOE) has agreed to sell 300,000 tonnes of depleted uranium hexafluoride (UF6) to GE Hitachi Global Laser Enrichment, LLC (GLE) over a 40-year period, for re-enrichment at a proposed state-of-the-art GLE plant. The agreement paves the way for commercialization of Silex laser enrichment technology. The proposed facility would use the depleted uranium to produce natural uranium, which is used for production of fuel for civil nuclear reactors. It would be built near DOE’s Paducah Gaseous Diffusion Plant in western Kentucky. The construction and operation of the billion-dollar facility at Paducah could bring approximately 800 to 1,200 jobs to the local community. “This agreement furthers the Energy Department’s environmental cleanup mission while reducing cleanup costs, creating good local jobs, and supporting an economical enrichment enterprise for our energy needs,” said Energy Secretary Ernest Moniz. GLE will finance, construct, own and operate the Paducah Laser Enrichment Facility (PLEF) adjacent to the Energy Department site.
The facility will be a commercial uranium enrichment production facility under a Nuclear Regulatory Commission (NRC) license. DOE’s inventory of depleted uranium is safely stored in approximately 65,000 specialized storage cylinders at the Department’s Paducah and Portsmouth (Ohio) sites. The Paducah plant was constructed in the 1950s to enrich uranium for national security applications, and later enriched uranium for commercial nuclear power generation. The Energy Department resumed control of the plant enrichment facilities in 2014 after the operator ceased gaseous-diffusion enrichment operations in 2013. GLE is a joint business venture of GE (51%), Hitachi (25%) and Cameco (24%). Earlier this year GE Hitachi announced its desire to reduce its equity interest in GLE and in April signed a term sheet with Silex giving the Australian company an exclusive option to acquire GE Hitachi’s entire 76% interest in GLE. In 2012, the US NRC granted GLE a combined construction and operating licence for a laser enrichment plant of up to 6 million separative work units per year at Wilmington, North Carolina. GLE has successfully demonstrated the concept in a test loop at Global Nuclear Fuel’s Wilmington fuel fabrication facility but has not yet decided whether to proceed with a full-scale commercial plant there. (NucNet): Russia is considering asking foreign partners to join its development of the Generation IV SVBR 100 reactor design, but has denied reports that the cost of the project has more than doubled. The original cost of the project was put at 15bn rubles (€209m, $226m) and this has not changed, Rosatom said. The SVBR 100 is one of six designs chosen by the Generation IV International Forum (GIF) for its program of research and development into next-generation nuclear energy systems. GIF said the SVBR 100 is a lead-cooled fast reactor which features a fast neutron spectrum, high-temperature operation, and cooling by molten lead or lead-bismuth.
It would have multiple applications including production of electricity, hydrogen and process heat. Molten Salt Reactors: IAEA to Establish New Platform for Collaboration Experts from 17 countries laid the foundations last week for enhanced international cooperation on a technology that promises to deliver nuclear power with a lower risk of severe accidents, helping to decrease the world’s dependence on fossil fuels and mitigate climate change. “It is the first time a comprehensive IAEA international meeting on molten salt reactors has ever taken place,” said Stefano Monti, Head of the Nuclear Power Development Section at the IAEA. “Given the interest of Member States, the IAEA could provide a platform for international cooperation and information exchange on the development of these advanced nuclear systems.” Molten salt reactor technology has attracted private funding over the last few years, and several reactor concepts are under development. One area under research is the compatibility between the salt coolant and the structural materials and, for some designs, the chemical processes related to the associated fuel cycle, Monti said. The challenges are not only technical. Nuclear regulators will need to review existing safety regulations to see how these can be modified, if necessary, to fit molten salt reactors, since they differ significantly from reactors in use today, said Stewart Magruder, senior nuclear safety officer at the IAEA. Participants, including researchers, designers and industry representatives, emphasized the need for an international platform for information exchange. “While the United States is actively developing both technology and safety regulations for molten salt reactors, the meeting is an important platform to exchange knowledge and information with Member States not engaged in the existing forums,” said David Holcomb from the Oak Ridge National Laboratory. 
Molten salt reactors, nuclear power reactors that use liquid salt as primary coolant or a molten salt mixture as fuel, have many favorable characteristics for nuclear safety and sustainability. The concept was developed in the 1960s, but put aside in favor of what has since become mainstream nuclear technology. In recent years, however, technological advances have led to growing interest in molten salt technology and to the launch of new initiatives. The technology needs at least a decade of further intensive research, validation and qualification before commercialization. Molten salt reactors operate at higher temperatures, making them more efficient in generating electricity. In addition, their low operating pressure can reduce the risk of coolant loss, which could otherwise result in an accident. Molten salt reactors can run on various types of nuclear fuel and use different fuel cycles. This conserves fuel resources and reduces the volume, radiotoxicity and lifetime of high-level radioactive waste. To help speed up research, it is essential to move from bilateral to multilateral cooperation, said Chen Kun from the Shanghai Institute of Applied Physics of the Chinese Academy of Sciences. “It is the first time China has the opportunity to share knowledge with India, Indonesia and Turkey on this technology.” Indonesia is considering building its first nuclear power plant with a molten salt reactor design, said Bob Soelaiman Effendi of the Indonesia Thorium Energy Community. (WNN) China and the UK have signed a joint R&D agreement which created their Joint Research and Innovation Centre (JRIC), to be opened soon in Manchester, England. Initial work is expected to include developing advanced manufacturing methods. JRIC will support innovation in nuclear research and development through UK-China collaboration.
The centre will develop, it said, “leading-edge research and innovative technologies which will support safe and reliable nuclear energy around the globe.” With the UK’s National Nuclear Laboratory (NNL) and the China National Nuclear Corporation (CNNC) each owning a 50% share, the partners will jointly pay for the centre’s research and development expenses and plan to invest 422 million yuan ($65.1 million) over a five-year period, CNNC said. (WNN) The UK’s Nuclear Advanced Manufacturing Research Centre (AMRC) said it has signed a new agreement with the US Nuclear Infrastructure Council (USNIC) to work together on research and development to support the UK’s civil nuclear program. The memorandum of understanding was signed by Jay Shaw, senior business development manager for the Nuclear AMRC, and David Blee, executive director of USNIC, during a visit to the Nuclear AMRC on 10/26.


Human recombinant BDNF was purchased from Millipore; d-2-amino-5-phosphonovalerate (d-AP5) and NSC-23766 were from Tocris; 4-amino-1-tert-butyl-3-(1′-naphthylmethyl)pyrazolo[3,4-d]pyrimidine (1NMPP1) was from Santa Cruz and the Shanghai Institute of Materia Medica, Chinese Academy of Sciences; and TrkB-Ig was a gift from Regeneron. The tat-CN21 peptide (YGRKKRRQRRRKRPPKLGQIGRSKRVVIEDDR) was synthesized by GenScript. All animal procedures were approved by the Duke University School of Medicine Animal Care and Use Committee. Both male and female rats and mice were used. TrkbF616A mutant mice were provided by D. Ginty (ref. 21). Bdnffl/fl and Trkbfl/fl mice were provided by L. Parada (refs 17, 36). Rac1fl/fl animals were acquired from C. Brakebusch (ref. 30). The genotype of each animal was verified by PCR of genomic DNA, isolated from tail samples before slice preparation and from slice samples afterwards. Plasmids containing human RAC1 and PAK1(65–118) were gifts from M. Matsuda and S. Soderling, respectively. The Pak GTPase binding domain of PAK2 (PBD2) was prepared by introducing mutations L77P and S115L into PAK1(60–118) using a Site-Directed Mutagenesis kit (Stratagene). W56–mCh–MTBD was prepared by amplifying the Rac1 inhibitory peptide W56 (ref. 23) using overhang PCR with a C-terminal linker (GGGGGGGGGGGGGGGGGGGGGGGGMADQLTEEWHRGTAGPGS) and inserting it into pCAG-mCh-mCh (ref. 3) by removing the first mCh with EcoRI and KpnI restriction digest and replacing it with the W56-linker amplicon, creating pCAG-W56-(linker)-mCh. In parallel, the MTBD of human MAP2 (272–end; ref. 25) was isolated from a human cDNA library by PCR amplification. This amplicon was then further amplified with overhang PCR to contain a linker (same as above), and then inserted into pCAG-mCh-mCh using BamHI-NotI restriction digest to produce pCAG-mCh-(linker)-MTBD. The two constructs were then combined using BamHI plus NotI restriction digest to create pCAG-W56-(linker)-mCh-(linker)-MTBD.
The scrambled variant of W56–MTBD was created by randomly re-ordering the residues of W56. ARHGAP15-mCh-MTBD was made by inserting ARHGAP15 (1–723; Addgene plasmid 38903) into the -mCh-MTBD sequence described above by adding EcoRI and KpnI sites at the N and C terminus, respectively. DNRhoA and DNCdc42 variants of the X-mCh-MTBD construct were prepared by first incorporating an MfeI digestion site on the 3′ end of W56, then removing W56 by digestion with NheI/MfeI and insertion of the dominant-negative construct. Hippocampal slices were prepared from postnatal day 5–7 rats or mice in accordance with the animal care and use guidelines of Duke University Medical Centre. In brief, we deeply anaesthetized the animal with isoflurane, after which the animal was quickly decapitated and the brain removed. The hippocampi were isolated and cut into 350-μm sections using a McIlwain tissue chopper. Hippocampal slices were plated on tissue culture inserts (Millicell) fed by tissue medium (for 2.5 l: 20.95 g MEM, 17.9 g HEPES, 1.1 g NaHCO3, 5.8 g d-glucose, 120 μl 25% ascorbic acid, 12.5 ml l-glutamine, 2.5 ml insulin, 500 ml horse serum, 5 ml 1 M MgSO4, 2.5 ml 1 M CaCl2). Slices were incubated at 35 °C in 3% CO2. After 1–2 weeks in culture, CA1 pyramidal neurons were transfected by ballistic gene transfer using gold beads (8–12 mg) coated with plasmids containing 30 μg of total cDNA (Rac1 sensor, donor:acceptor = 1:2; eGFP + W56–MTBD, 5:1; Rac1 sensor + W56–MTBD, donor:acceptor:inhibitor = 2:4:1; TrkB sensor, donor:acceptor = 1:1; Cdc42 sensor, donor:acceptor = 1:1; RhoA sensor, donor:acceptor = 1:1). Cells expressing only eGFP were imaged 1–5 days after transfection, cells expressing TrkB were imaged 1–2 days after transfection, and all other plasmid combinations were imaged 2–5 days after transfection.
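For reference, the solid components of the medium recipe above can be converted to final concentrations in the 2.5 l batch. A minimal sketch, assuming standard molar masses for the pure solids (MEM, horse serum and the liquid stocks are mixtures and are left out):

```python
# Convert the culture-medium recipe (grams per 2.5 l batch) to final molar
# concentrations. Molar masses are standard textbook values (assumed here);
# only pure solids are converted.
RECIPE_G = {"HEPES": 17.9, "NaHCO3": 1.1, "d-glucose": 5.8}
MOLAR_MASS = {"HEPES": 238.30, "NaHCO3": 84.01, "d-glucose": 180.16}  # g/mol
VOLUME_L = 2.5

def final_mM(component):
    """Final concentration in mM: (mass / molar mass) / volume * 1000."""
    return RECIPE_G[component] / MOLAR_MASS[component] / VOLUME_L * 1000.0

for c in RECIPE_G:
    print(f"{c}: {final_mM(c):.1f} mM")
```

With these assumed molar masses the recipe works out to roughly 30 mM HEPES, 5 mM bicarbonate and 13 mM glucose, which is a quick sanity check on any scaled-up or scaled-down batch.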
For structural plasticity experiments, conditional knockout slices (Bdnffl/fl and Rac1fl/fl) were transfected with either eGFP alone or eGFP and tdTomato-Cre (1:1) for 3–7 days before imaging. For sensor experiments in these slices, the sensors were used in the ratios listed above with an amount of Cre recombinase equal to the amount of donor DNA. The presence of Cre was confirmed by nuclear-localized tdTomato signal. HEK293T cells (ATCC) were cultured in DMEM supplemented with 10% fetal calf serum at 37 °C in 5% CO2. Transfection was performed at ~50–90% cell confluency using Lipofectamine (Invitrogen) and 2 μg ml−1 of total cDNA/35 mm dish, following the ratios listed above. Cells were used as an expression platform only, and were thus not rigorously tested for potential contamination from other cell lines. FRET imaging using a custom-built two-photon fluorescence lifetime imaging microscope was performed as previously described (refs 3, 31, 32). Two-photon imaging was performed using a Ti:sapphire laser (MaiTai, Spectra-Physics) tuned to a wavelength of 920 nm, allowing simultaneous excitation of eGFP and mCh. All samples were imaged using <2 mW laser power measured at the objective. Fluorescence emission was collected using an immersion objective (60×, numerical aperture 0.9, Olympus), divided with a dichroic mirror (565 nm), and detected with two separate photomultiplier tubes (PMTs) placed downstream of two wavelength filters (Chroma, HQ510-2p to select for green and HQ620/90-2p to select for red). The green channel was fitted with a PMT having a low transit time spread (H7422-40p; Hamamatsu) to allow for fluorescence lifetime imaging, while the red channel was fitted with a wide-aperture PMT (R3896; Hamamatsu).
Photon counting for fluorescence lifetime imaging was performed using a time-correlated single photon counting board (SPC-150; Becker and Hickl) controlled with custom software (ref. 31), while the red channel signal was acquired using a separate data acquisition board (PCI-6110) controlled with Scanimage software (ref. 33). A second Ti:sapphire laser tuned to a wavelength of 720 nm was used to uncage 4-methoxy-7-nitroindolinyl-caged-l-glutamate (MNI-caged glutamate) in extracellular solution with a train of 4–6 ms, 4–5 mW pulses (30 times at 0.5 Hz) near a spine of interest (‘sLTP stimulus’). Experiments were performed in Mg2+-free artificial cerebrospinal fluid (ACSF; 127 mM NaCl, 2.5 mM KCl, 4 mM CaCl2, 25 mM NaHCO3, 1.25 mM NaH2PO4 and 25 mM glucose) containing 1 μM tetrodotoxin (TTX) and 4 mM MNI-caged l-glutamate, aerated with 95% O2 and 5% CO2 at 30 °C, as described previously. Subthreshold stimuli were delivered using a train of 1 ms, 4–5 mW pulses (30 times at 0.5 Hz). Crosstalk experiments were performed by first delivering an sLTP stimulus (4–6 ms), then delivering a subthreshold stimulus to a nearby (~2–5 μm) spine on the same dendrite ~90 s later, as previously described (ref. 19). One to five spines were stimulated per cell, and a maximum of 3 crosstalk experiments were performed on a single cell. Spine volume was calculated as the background-subtracted integrated fluorescence intensity over a region of interest around the dendritic spine head (fluorescence, F). Change in spine volume was measured as ΔF/F0, in which F0 is the average fluorescence intensity before stimulation. Analysis of two-photon images outside of the context of 2pFLIM was performed in ImageJ. All experiments involving dendritic inhibitor constructs were performed in a blinded fashion until the experiments were complete (when the groups significantly diverged, or until 15–20 individual experiments across at least three cells were complete, whichever came first).
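The spine-volume readout described above is a simple background-subtracted intensity integral normalized to baseline. A minimal numpy sketch (the array shapes, background handling and toy numbers are illustrative assumptions, not the authors' analysis code):

```python
import numpy as np

def spine_volume_change(roi_stack, background, baseline_frames):
    """Relative spine volume estimated from fluorescence intensity.

    roi_stack: frames x pixels array of intensities inside the ROI drawn
        around the spine head.
    background: per-frame background level (same units, per pixel).
    baseline_frames: number of frames acquired before stimulation.

    Volume is proxied by the background-subtracted integrated fluorescence
    F; change is reported as F/F0, where F0 is the mean F over the
    pre-stimulation frames.
    """
    roi_stack = np.asarray(roi_stack, dtype=float)
    background = np.asarray(background, dtype=float)
    f = (roi_stack - background[:, None]).sum(axis=1)  # integrated F per frame
    f0 = f[:baseline_frames].mean()
    return f / f0

# Toy trace: three baseline frames, then the integrated intensity doubles
stack = np.array([[10.0, 10.0], [10.0, 10.0], [10.0, 10.0], [20.0, 20.0]])
bg = np.zeros(4)
print(spine_volume_change(stack, bg, baseline_frames=3))  # → [1. 1. 1. 2.]
```

Normalizing to the pre-stimulus mean makes traces from spines of different absolute brightness directly comparable.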
To measure the fraction of donor bound to acceptor, we fit a fluorescence lifetime curve summing all pixels over a whole image with a double exponential function convolved with the Gaussian pulse response function:

F(t) = F0 [P_D H(t, t0, τ_D, τ_G) + P_AD H(t, t0, τ_AD, τ_G)]  (1)

where τ_AD is the fluorescence lifetime of the donor bound with acceptor, P_D and P_AD are the fractions of free donor and of donor bound with acceptor, respectively, and H(t) is a fluorescence lifetime curve with a single exponential function convolved with the Gaussian pulse response function:

H(t, t0, τ, τ_G) = (1/2) exp(τ_G^2/(2τ^2) − (t − t0)/τ) erfc((τ_G^2/τ − (t − t0))/(√2 τ_G))

in which τ_D is the fluorescence lifetime of the free donor, τ_G is the width of the Gaussian pulse response function, F0 is the peak fluorescence before convolution, t0 is the time offset, and erfc is the complementary error function. We fixed τ_D to the fluorescence lifetime obtained from free eGFP (2.6 ns). To generate the fluorescence lifetime image, we calculated the mean photon arrival time, 〈t〉, in each pixel as:

〈t〉 = ∫ t F(t) dt / ∫ F(t) dt

then the mean photon arrival time is related to the mean fluorescence lifetime, 〈τ〉, by an offset arrival time, t0, which is obtained by fitting the whole image:

〈τ〉 = 〈t〉 − t0

For small regions of interest (ROIs) in an image (spines or dendrites), we calculated the binding fraction (P_AD) as:

P_AD = τ_D (τ_D − 〈τ〉) / [(τ_D − τ_AD)(τ_D + τ_AD − 〈τ〉)]

Polyhistidine-tagged super-folder GFP (sfGFP)–Rac1, mCh–PBD2 and their mutants were cloned into the pRSET bacterial expression vector (Invitrogen). Proteins were overexpressed in Escherichia coli (DH5α), purified on a Ni2+-nitrilotriacetate (NTA) column (HiTrap, GE Healthcare), and desalted with a desalting column (PD10, GE Healthcare) equilibrated with PBS. The concentration of the purified protein was measured from the absorbance of the fluorophore (sfGFP, ε = 83,000 M−1 cm−1 (ref. 34); mCh, ε = 72,000 M−1 cm−1 (ref. 35)). Purified sfGFP–Rac1 was loaded with GppNHp (2′,3′-O-N-methylanthraniloyl-GppNHp) or GDP by incubating with a tenfold molar excess of GppNHp or GDP, respectively, in MgCl2-free PBS containing 1 mM EDTA for 10 min. The reaction was terminated by adding 10 mM MgCl2.
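Under the two-component (free/bound) model used in this analysis, the binding fraction follows algebraically from the intensity-weighted mean fluorescence lifetime. A small sketch, assuming the 2.6 ns free-eGFP donor lifetime quoted in the text and an illustrative placeholder for the bound-state lifetime:

```python
def binding_fraction(mean_tau, tau_d=2.6, tau_ad=1.1):
    """Fraction of donor bound to acceptor, from the intensity-weighted
    mean lifetime <tau> (all values in ns).

    tau_d is the free-donor lifetime (2.6 ns for free eGFP per the text);
    tau_ad, the bound-donor lifetime, is an illustrative placeholder.
    Obtained by inverting the two-component mean-lifetime relation
    <tau> = (P_D*tau_d**2 + P_AD*tau_ad**2) / (P_D*tau_d + P_AD*tau_ad).
    """
    return (tau_d * (tau_d - mean_tau)) / (
        (tau_d - tau_ad) * (tau_d + tau_ad - mean_tau))

# Limiting cases: mean lifetime equal to tau_d means no binding;
# equal to tau_ad means full binding.
print(binding_fraction(2.6))  # ≈ 0.0
print(binding_fraction(1.1))  # ≈ 1.0
```

The two limiting cases are a quick consistency check: the formula interpolates between 0 and 1 as the mean lifetime moves from the free-donor to the bound-donor value.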
sfGFP–Rac1 and mCh–PBD2 were mixed and incubated at room temperature for 20 min. FRET between sfGFP and mCh was measured under 2pFLIM, and the fraction of sfGFP–Rac1 bound to mCh–PBD2 was calculated by fitting the fluorescence lifetime curve with a double exponential function (equation (1)). The dissociation constant was obtained by fitting the relationship between the binding fraction and the concentration of mCh–PBD2 ([mCh–PBD2]) with a Michaelis–Menten function. Sample sizes for all experiments were chosen based on signal-to-noise ratios identified in pilot experiments. The variances of all data were estimated and compared using Bartlett’s test or Levene’s test before further statistical analysis. The distribution patterns of Rho GTPase sensor activity were assessed by performing a Shapiro–Wilk test for normality on the peak response (the same points used for statistical comparisons). All of the sensors tested were consistent with the null hypothesis, and thus are considered normally distributed. As such, parametric statistics were used to compare Rho GTPase responses. For multiple comparisons of sensor activity, data were first subjected to ANOVA, followed by a post-hoc test to determine statistical significance, according to the structure of the comparison being made. In cases where each condition was compared to all other conditions in the group, the Tukey–Kramer method was used. In cases where each condition was compared to a single control, Dunnett’s test was used instead. To compare non-normally distributed changes in spine volume, data were log-transformed to resolve skewness, then subjected to standard parametric statistics, as indicated in the figure legends. To support these statistical claims, non-parametric statistics were also applied to the original, non-transformed data using a Wilcoxon rank-sum test in place of t-tests, and the Kruskal–Wallis procedure in place of ANOVA, followed by a post-hoc analysis using Dunn’s test.
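The Michaelis–Menten fit for the dissociation constant can be sketched on synthetic data (the titration points, Kd = 2 μM and P_max = 0.9 are illustrative assumptions, not the measured values). With noiseless data, a double-reciprocal linearization recovers the parameters exactly, so no iterative fitter is needed for the demonstration:

```python
import numpy as np

def michaelis_menten(conc, p_max, kd):
    """Binding fraction vs. ligand concentration: P = P_max*[L]/([L] + Kd)."""
    return p_max * conc / (conc + kd)

# Synthetic, noiseless titration (concentrations in uM) generated with an
# assumed Kd of 2 uM and P_max of 0.9 -- illustrative values only.
conc = np.array([0.1, 0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
frac = michaelis_menten(conc, p_max=0.9, kd=2.0)

# Linearize (double reciprocal): 1/P = (Kd/P_max)*(1/[L]) + 1/P_max,
# then read Kd and P_max off the slope and intercept.
slope, intercept = np.polyfit(1.0 / conc, 1.0 / frac, 1)
p_max_fit = 1.0 / intercept
kd_fit = slope / intercept
print(round(kd_fit, 3), round(p_max_fit, 3))  # recovers Kd = 2.0, P_max = 0.9
```

With real, noisy titration data a direct nonlinear least-squares fit of the hyperbola is preferable, since the reciprocal transform amplifies noise at low concentrations; the linearized version is used here only because it keeps the sketch dependency-free.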
All of the data tested were significant by both of these approaches. Data were only excluded if obvious signs of poor cellular health (for example, dendritic blebbing, spine collapse) were apparent. Crosstalk experiments comparing different genetic perturbations were performed in a blinded fashion. Experimenters were unblinded when either statistical significance was reached, or when experimental number was comparable to similar experiments that had reached statistical significance.


News Article | March 31, 2016
Site: cleantechnica.com

I know we already publish more clean transport stories than most readers can consume, but the arena is getting so popular that there’s a lot of bike, rail, bus, electric car, and even Tesla news that we can never get to. Below are several stories I think are worth a read, or at least a glance, if you have time. WeCycle Atlanta uses donated bicycles to reward students who complete a four-week educational program that teaches them about growing their own food, recycling and sustainability. Engineering an Innovation: The Inside Story of the Green Lane Project It was 2012, and Leah Golby was sitting in a meeting she’d been looking forward to for three years. A recently elected city councilor for the 10th Ward of Albany, N.Y., Golby had heard years of lobbying from 10th Ward residents for a redesign of Madison Avenue, a four-lane thoroughfare just south of downtown. Albany was finally holding a work session about a road diet for the street. The American Public Transit Association (APTA) recently released its February Savings Report. The association releases a monthly savings report to analyze how much money the average two-person household can save by taking public transportation and using one less automobile. The average commuter is looking at a savings of more than $754 a month. On a recent episode of Jimmy Kimmel Live, actor and frequent narrator Morgan Freeman lent his signature voice to a segment exploring the “pedestrian experience” on Hollywood Boulevard via a hidden street camera. Noting a certain unsuspecting gentleman and his ‘selfie stick,’ Freeman unleashes a laugh-out-loud minute of commentary. Plug-in Hybrid Sales Are Exploding On The Continent Plug-in hybrid sales are exploding in Europe and the UK, where demand is stronger than manufacturers predicted. Will that trend carry over to the US market?
Still without Poland counted, the European market had more than 13,000 registrations in January, a 46% increase over February 2015. With the EV market returning to its previous growth rate, it seems the December sales hangover experienced last month has now passed. Volkswagen just gave the BUDD-e concept car its NYC debut at the New York International Auto Show. This follows the concept’s official unveiling at the January Consumer Electronics Show in Las Vegas. The BUDD-e electric vehicle (EV) concept is notable because, as we’ve reported previously, it’s Volkswagen’s first vehicle based on the new Modular Electric Platform. If, like me, you’re waiting for your 2017 Volt to be delivered… you can now read the manual while waiting! Massachusetts offers a $2,500 rebate for new buyers of electric vehicles, and it could get even better. The Joint Committee on Transportation is working on a bill that would allow EV drivers to use the high-occupancy vehicle (HOV) lanes on all state highways. In addition, it would increase access to charging stations in the state, and require information about the charging power and compatibility of each station to be posted online. Top 5 Tesla Model X Tidbits You Might Not Know If you’re on the fence about ordering a Tesla Model X, just know that you will never see this level of detail and engineering, in any production vehicle, ever again. If you’re an existing Model X owner, know that you’re driving a “once in a lifetime” classic. Tesla Drops 70 kWh Battery For Model X… Then Adds It Back Over the weekend, Tesla quietly dropped any reference to a Model X with the 70 kWh battery. It also deleted reference to a 90D version. And then it added them back. Are new batteries about to be announced by the company? Or just website glitches? I found almost the exact 2017 Volt that I wanted online at a dealer about 45 minutes away. The only thing missing was the ash interior, but I could live without it. So I decided to schedule a test drive.
Unfortunately, they had just sold the one I wanted. So I test drove a loaded Premier trim instead. I live in Southern California, and over the past week I took my family to Palm Springs for a short Spring Break vacation. We went to the ACE Hotel, a place we have been many times. I pulled up in my Volt and asked a hotel employee if they had any EV charging stations. His face lit up and he proudly pointed over “there.” What he was referring to was a new Tesla Supercharger station. This hotel took two prime parking spots in the front of the hotel and reserved them for Teslas only, with the non-standard Tesla charger – meaning only a $76,000-plus Tesla can park and charge. Upon further investigation, they did have a 120-volt outlet at the back of the hotel within cord distance of a parking spot, but no charging adapter – so basically “bring your own portable charger” and “park in the back.” Back in the early days of the Third Age of the Electric Car, the need for range and the ability to go long distances in a 100% electric car was a hot topic, because EVs needed to get out of their natural habitat (metropolitan areas) and avoid mistakes of past lives. Long gone are the days of a simple key to open your car’s door and turn on the ignition. Keys today aren’t even really keys in the traditional sense. With many cars, you only need to have the key on you, and as you grab the handle of the vehicle the door will unlock. Then, you get into the vehicle and instead of inserting the key into an ignition, you simply push a button and the car will turn on. This is, in fact, how the BMW i3 works, as long as you ordered the car with the optional Comfort Access feature. Executives from Renault and Audi told separate audiences this week that the electric car revolution won’t happen until an infrastructure of high-power charging stations is in place. The only question is, who will pay for them?
Danish car designer Henrik Fisker appears to have moved on from his eponymous company and the Karma plug-in luxury sedan. Fisker Automotive is now Karma Automotive, and is working its way back from bankruptcy under the wing of Chinese automotive supplier Wanxiang. Tesla has begun shipping its cars to Norway on LNG-powered ferries owned by Nor Lines. Going by sea will eliminate the diesel emissions from hundreds of car transports a year. BorgWarner’s eGearDrive® transmission will propel the Geely EC7-EV sedan. The Chinese automaker’s first mass-produced electric vehicle is powered by a 129 horsepower electric motor with a top speed of 140 km/h (87 mph). Season 2 of Formula E allows the teams to come up with innovative new power unit solutions — you can see the different designs here. If you’ve ever dreamed about watching Formula E racing driver Daniel Abt get naked and sing Taylor Swift songs, you’ll want to pay attention. Stanford Team Develops New Simple Approach for Viable Li-Metal Anodes for Advanced Batteries Lithium-metal anodes are favored for use in next-generation rechargeable Li-air or Li-sulfur batteries due to a tenfold higher theoretical specific capacity than graphite (3,860 mAh/g vs. 372 mAh/g), light weight, and the lowest anode potential. However, safety issues resulting from dendrite formation and instability caused by volume expansion have hampered the development and deployment of commercially viable solutions. If monks can sell caskets direct-to-public, bypassing the middleman as they did after Hurricane Katrina, then Tesla should be able to sell cars direct-to-public too. That’s the tack Tesla will take in federal court. If challenged on relevance, Tesla’s legal team has Exhibit A totally in the bag… Tesla has filed for trademark protection for the official logo of its new Model 3. It is using the E from the company name instead of the number 3. That letter is similar to the number 3 in Chinese.
Why Tesla Will Likely See 100k Model 3 Reservations Within the First 24 Hours
With all the buzz surrounding the upcoming Model 3 unveil and how Tesla will likely book 100,000 reservations within the first 24 hours of reservations opening, we decided to do a little math of our own to see what that may mean given some basic assumptions. We know that there are 221 Tesla stores worldwide that will begin taking Model 3 reservations at 10 am local time. Of those stores, there's a high probability, based on feedback from staff at Tesla stores regarding the volume of inquiries about the Model 3 March 31st in-store reservation process, that hundreds of people will already be lined up before each store opens, many of whom camped overnight to guarantee their early spot in line. It's important to note that we are assuming 24 hours from the 10 am Pacific time opening, not from the 10 am opening at the Sydney, Australia Tesla store, which is 18 hours ahead.

When They're Aping Your Product, Tesla, You SHOULD Charge More
First, Tesla was a joke. Then it intrigued some influential auto execs with its battery tech, and even landed a couple of supply contracts and investments from Toyota and Daimler. Then it prodded everyone to re-assess their half-assed EV programs and make them at least three-quarter-assed. Now comes the really humbling stage, which is "me too." Imitating Tesla may be embarrassing and risky to your brand, but ignoring Tesla carries even more risk. You want to be the next Oldsmobile?

Gig 2, Where Are You?
Demand for Tesla Energy batteries could max out Gig 1, Musk has said. Yet it's very possible that demand for Model ☰ could also gobble up the capacity of Gig 1. You realize what this means? It means the state governors who lost out on Gig 1 should jump into their dumpsters and brush the coffee grounds off their "come hither" proposals. They might get another shot. Remember the word tracks?
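The article's back-of-envelope reservation math can be sketched directly. The 221-store count and the 10 am local opening come from the text; the queue size per store and the online reservation rate are purely hypothetical assumptions for illustration.

```python
# Sketch of the article's reservation estimate.
# Known from the text: 221 Tesla stores open reservations at 10 am local time.
# people_per_store and online_per_hour are hypothetical assumptions.
STORES = 221

def estimated_reservations(people_per_store, online_per_hour, hours=24):
    """Total reservations in the first `hours` under the stated assumptions."""
    in_store = STORES * people_per_store
    online = online_per_hour * hours
    return in_store + online

# e.g. 200 people queued per store plus 3,000 online reservations per hour
print(estimated_reservations(200, 3000))  # → 116200
```

With even these modest assumptions the 100,000 figure is cleared comfortably; the in-store queues alone contribute 44,200 of the total.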
A team from the Shenzhen Institutes of Advanced Technology (SIAT) of the Chinese Academy of Sciences has developed a novel, environmentally friendly, low-cost battery. The new aluminum-graphite dual-ion battery (AGDIB) offers significantly reduced weight, volume, and fabrication cost, as well as higher energy density, in comparison with conventional LIBs.

11 Best Fuel Efficient Motorcycles You Can Buy in 2016
This is it: the semi-official list of the 11 best fuel-efficient motorcycles your money will be able to buy in 2016, courtesy of Gas2. Enjoy!

Florida City First In U.S. To Subsidize Uber Rides
Reuters.com recently reported that Altamonte Springs, Florida, will be the first U.S. city to subsidize Uber services in an attempt to reduce traffic and increase transit ridership. The Orlando suburb has allocated $500,000 to cover 20% of Uber trips within city limits and 25% of trips to or from a SunRail station.
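The Altamonte Springs subsidy rates translate into simple fare arithmetic. This sketch assumes the subsidy is applied straight off the rider's fare (the article does not specify the payment mechanics); the rates themselves (20% in-city, 25% for SunRail trips) are from the text.

```python
def rider_cost(fare, sunrail_trip=False):
    """Rider's out-of-pocket cost for a trip within Altamonte Springs:
    the city covers 20% of in-city trips and 25% of trips to or from a
    SunRail station (assuming the subsidy comes straight off the fare)."""
    subsidy = 0.25 if sunrail_trip else 0.20
    return round(fare * (1 - subsidy), 2)

print(rider_cost(10.00))                      # → 8.0 (20% covered)
print(rider_cost(10.00, sunrail_trip=True))   # → 7.5 (25% covered)
```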


News Article | December 27, 2016
Site: www.eurekalert.org

Carbon dioxide (CO2) is one of the major greenhouse gases, and the rapid increase in its atmospheric concentration is a cause of great concern. China launched its first minisatellite dedicated to carbon dioxide detection and monitoring at 15:22 UTC on December 22, 2016. The Chinese Carbon Dioxide Observation Satellite (TANSAT) was designed to focus on global observation of CO2. For retrieving carbon dioxide from TANSAT observations, cloud detection is an essential preprocessing step. The TANSAT project is one of the National High-tech Research and Development Programs funded by the Ministry of Science and Technology of the People's Republic of China and the Chinese Academy of Sciences. During the pre-launch study of TANSAT, a cloud-screening scheme for the Cloud and Aerosol Polarization Imager (CAPI) was proposed by a team at the Department of Atmospheric and Oceanic Sciences, School of Physics, Peking University. They noticed that previous cloud-screening algorithms were generally designed to exploit sensors with many channels spanning a wide spectral range. For TANSAT/CAPI, however, the channels available for cloud screening cover only five spectral bands, so such a sensor needs a more effective method to regroup the results of its few threshold tests. Their work relies on radiance data from the Visible and Infrared Radiometer (VIRR) onboard the Chinese FengYun-3A Polar-orbiting Meteorological Satellite (FY-3A), which uses four wavebands similar to those of CAPI and can serve as a proxy for its measurements. The cloud-screening scheme for TANSAT/CAPI, built on previous cloud-screening algorithms, defines a method to regroup individual threshold tests on a pixel-by-pixel basis according to the derived clear confidence level (CCL). The scheme has been applied to a number of FY3A/VIRR scenes over four target areas (desert, snow, ocean, forest) in China for all seasons.
Comparisons against the cloud-screening product from MODIS suggest that the proposed scheme inherits the advantages of schemes described in previous publications and delivers improved cloud-screening results. The scheme proves especially efficient for sensors with few channels or frequencies available for cloud screening.


News Article | October 5, 2016
Site: www.nature.com

Human recombinant BDNF and human recombinant β-NGF were purchased from Millipore; K252a, d-2-amino-5-phosphonovalerate (d-AP5) and 2,3-dihydroxy-6-nitro-7-sulfamoyl-benzo[f]quinoxaline-2,3-dione (NBQX) were from Tocris; human IgG was from Sigma; and 1′-naphthylmethyl-4-amino-1-tert-butyl-3-(p-methylphenyl)pyrazolo[3,4-d]pyrimidine (1NMPP1) was from Santa Cruz and the Shanghai Institute of Materia Medica, Chinese Academy of Sciences. TrkB-Ig was a gift from Regeneron, and the tat-CN21 peptide (YGRKKRRQRRRKRPPKLGQIGRSKRVVIEDDR) was synthesized by GenScript. TrkB–eGFP was prepared by inserting the coding sequence of mouse TrkB (obtained from a previously described plasmid (ref. 29)) into pEGFP-N1 (Clontech) containing the A206K monomeric mutation in eGFP and the CAG promoter (ref. 30). The linker between TrkB and eGFP is TGRH. mRFP1–PLC–mRFP1 was prepared by inserting the coding sequence for the C-terminal SH2 domain of human PLC-γ1 (659–769; obtained from full-length human PLC-γ1 purchased from Origene) into a tandem-mRFP1 plasmid containing the CAG promoter. The linkers between the mRFP1s and PLC-γ1 (659–769) are RSRAQASNS for the N terminus and GSG for the C terminus. TrkBY816F–eGFP was prepared by introducing a point mutation using the Site-Directed Mutagenesis Kit (Stratagene). Tandem mCherry (mCh–mCh) was generated as previously described (ref. 16). HA–BDNF–Flag was a gift from A. West. The coding sequence for SEP (obtained from SEP-GluA1; ref. 31) was incorporated onto the 3′ end of HA–BDNF–Flag to generate HA–BDNF–Flag–SEP. HA–BDNF–Flag–mRFP1 was generated in a similar fashion. A plasmid containing mCh-IRES-TeTX was a gift from M. Ehlers. POMC–mCh was generated by amplifying the POMC peptide (MWCLESSQCQDLTTESNLLACIRACRLDL) (ref. 27) using overhang PCR with a C-terminal linker (GGGGGGGGGGGGGGGGGGGGGGGGMADQLTEEWHRGTAGPGS). This amplicon was then inserted into the tandem-mCh plasmid by replacing the coding sequence of the first mCh.
All animal procedures were approved by the Duke University School of Medicine Animal Care and Use Committee and the Max Planck Florida Institute for Neuroscience and Weill Cornell Medical College Institutional Animal Care and Use Committees, and were conducted in accordance with the NIH Guide for the Care and Use of Laboratory Animals. We used both male and female rats and mice. Rats and C57BL/6 mice were obtained from Charles River, TrkbF616A mutant mice were provided by D. Ginty (ref. 28), Bdnffl/fl and Trkbfl/fl mice were provided by L. Parada (ref. 32), and Bdnf-HA mice were generated as previously described (ref. 26). The genotype of each animal was verified by PCR of genomic DNA, isolated from tail samples before preparing slices and from slice samples afterwards. HeLa cells were obtained from the Duke University Cell Culture Facility. These cells had been authenticated using short-tandem-repeat profiling and evaluated for mycoplasma contamination. Cells were cultured and maintained as previously described (ref. 16). Cells were transfected with Lipofectamine 2000 using the manufacturer's protocol (Invitrogen). Concentrations used were 0.5 μl ml−1 Lipofectamine and 1 μg ml−1 total cDNA (1:1 ratio of TrkB–eGFP to mRFP1–PLC–mRFP1 DNA). Then, 24–48 h later, the culture media was replaced with HEPES-buffered ACSF for imaging (HACSF; 20 mM HEPES, 130 mM NaCl, 2 mM NaHCO3, 25 mM d-glucose, 2.5 mM KCl and 1.25 mM NaH2PO4; adjusted to pH 7.4 and 310 mOsm). After a 30-min equilibration period, transfected cells were imaged using 2pFLIM as described below. Cell stimulation was performed by directly adding BDNF or vehicle to the HACSF bathing the cells. Mixed cortical cultures were prepared as described previously (ref. 33) and transfected with Lipofectamine 2000 using a modified protocol. For transfection of neurons in 3.5 cm dishes, 1 μl Lipofectamine was mixed with 1 μg of plasmid DNA (1 μg per construct transfected) in 100 μl of culture media for 20 min.
Culture media was removed from the 3.5 cm dish until only 1 ml remained. The Lipofectamine/DNA solution was added to the neurons for 45 min. At this point, all the media was removed and replaced with 2 ml conditioned culture media. After 24–48 h, culture media was replaced with HACSF. To stimulate cells, we added BDNF or NGF directly to the HACSF bathing the cells. 30 min after stimulation, we added K252a to the HACSF. Cultured hippocampal slices were prepared from post-natal day 5–7 rats or mice, as previously described (ref. 34), in accordance with the animal care and use guidelines of Duke University Medical Center. After 5–12 days in culture, CA1 pyramidal neurons were transfected by biolistic gene transfer using gold beads (12 mg; Biorad) coated with plasmids containing 20–40 μg of total cDNA (TrkB sensor: 15 μg TrkB–eGFP and 15 μg mRFP1–PLC–mRFP1; TrkB sensor plus mCh: 5 μg TrkB–eGFP, 5 μg mRFP1–PLC–mRFP1, and 20 μg mCh–mCh; TrkB sensor plus mCh and Cre: 5 μg TrkB–eGFP, 5 μg mRFP1–PLC–mRFP1, 5 μg tdTom-Cre, and 15 μg mCh–mCh; BDNF–SEP plus mCh: 20 μg BDNF–SEP and 10 μg mCh–mCh; BDNF–SEP plus TeTX: 20 μg BDNF–SEP and 10 μg mCh-IRES-TeTX; BDNF–SEP plus POMC: 20 μg BDNF–SEP and 10 μg POMC–mCh; eGFP: 20 μg eGFP; and eGFP plus Cre: 10 μg eGFP plus 10 μg tdTom-Cre). Neurons expressing the TrkB sensor were imaged 12–48 h after transfection. Neurons expressing the TrkB sensor with mCh or mCh plus Cre were imaged 5–7 days after transfection. The addition of the mCh proved critical in limiting TrkB sensor expression, thereby allowing neurons to survive longer with the sensor present. Neurons expressing only eGFP were imaged 1–7 days after transfection. Neurons expressing eGFP plus Cre were imaged 5–9 days after transfection. FRET imaging using a custom-built two-photon fluorescence lifetime imaging microscope was performed as previously described (refs 13, 35).
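The biolistic plasmid mixes enumerated above are effectively a configuration table; capturing them as one makes it easy to sanity-check that every mix lands within the stated 20–40 μg of total cDNA. The amounts are taken from the text; the dictionary layout itself is just an illustration.

```python
# Plasmid mixes (μg per construct) for biolistic transfection, as listed
# in the text, plus a check that every mix totals within the stated
# 20–40 μg of cDNA loaded onto the gold beads.
MIXES = {
    "TrkB sensor":             {"TrkB-eGFP": 15, "mRFP1-PLC-mRFP1": 15},
    "TrkB sensor + mCh":       {"TrkB-eGFP": 5, "mRFP1-PLC-mRFP1": 5, "mCh-mCh": 20},
    "TrkB sensor + mCh + Cre": {"TrkB-eGFP": 5, "mRFP1-PLC-mRFP1": 5,
                                "tdTom-Cre": 5, "mCh-mCh": 15},
    "BDNF-SEP + mCh":          {"BDNF-SEP": 20, "mCh-mCh": 10},
    "BDNF-SEP + TeTX":         {"BDNF-SEP": 20, "mCh-IRES-TeTX": 10},
    "BDNF-SEP + POMC":         {"BDNF-SEP": 20, "POMC-mCh": 10},
    "eGFP":                    {"eGFP": 20},
    "eGFP + Cre":              {"eGFP": 10, "tdTom-Cre": 10},
}

for name, mix in MIXES.items():
    total = sum(mix.values())
    assert 20 <= total <= 40, f"{name} totals {total} μg, outside 20-40 μg"
```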
Two-photon imaging was performed using a Ti-sapphire laser (MaiTai, Spectraphysics) tuned to a wavelength of 920 nm, allowing simultaneous excitation of eGFP, mRFP1 and mCh. All samples were imaged using <2 mW laser power measured at the objective. Fluorescence emission was collected using an immersion objective (60×, numerical aperture 0.9, Olympus), divided with a dichroic mirror (565 nm), and detected with two separate photomultiplier tubes (PMTs) placed downstream of two wavelength filters (Chroma, HQ510-2p to select for green and HQ620/90-2p to select for red). The green channel was fitted with a PMT having a low transit-time spread (H7422-40p; Hamamatsu) to allow for fluorescence lifetime imaging, while the red channel was fitted with a wide-aperture PMT (R3896; Hamamatsu). Photon counting for fluorescence lifetime imaging was performed using a time-correlated single-photon counting board (SPC-150; Becker and Hickl) controlled with custom software (ref. 13), while the red channel signal was acquired using a separate data acquisition board (PCI-6110) controlled with Scanimage software (ref. 36). A second Ti-sapphire laser tuned to a wavelength of 720 nm was used to uncage 4-methoxy-7-nitroindolinyl-caged-l-glutamate (MNI-caged glutamate) in extracellular solution with a train of 4–6 ms, 4–5 mW pulses (30 times at 0.5 Hz) near a spine of interest. Experiments were performed in Mg2+-free artificial cerebral spinal fluid (ACSF; 127 mM NaCl, 2.5 mM KCl, 4 mM CaCl2, 25 mM NaHCO3, 1.25 mM NaH2PO4 and 25 mM glucose) containing 1 μM tetrodotoxin (TTX) and 4 mM MNI-caged l-glutamate, aerated with 95% O2 and 5% CO2. Experiments were performed at 24–26 °C (room temperature) or 30–32 °C using a heating block holding the ACSF container. Temperature measurements were made from ACSF within the perfusion chamber holding the slice.
To measure the fraction of donor bound to acceptor, we fit a fluorescence lifetime curve summing all pixels over a whole image with a double exponential function convolved with the Gaussian pulse response function:

F(t) = F0 [P_D H(t, t0, τ_D, τ_G) + P_AD H(t, t0, τ_AD, τ_G)],

in which τ_AD is the fluorescence lifetime of donor bound with acceptor, P_D and P_AD are the fractions of free donor and donor bound with acceptor, respectively, and H(t) is a fluorescence lifetime curve with a single exponential function convolved with the Gaussian pulse response function:

H(t, t0, τ, τ_G) = (1/2) exp(τ_G²/(2τ²) − (t − t0)/τ) erfc((τ_G² − τ(t − t0))/(√2 τ τ_G)),

in which τ_D is the fluorescence lifetime of the free donor, τ_G is the width of the Gaussian pulse response function, F0 is the peak fluorescence before convolution, t0 is the time offset, and erfc is the complementary error function. We fixed τ_D to the fluorescence lifetime obtained from free mEGFP (2.6 ns). To generate the fluorescence lifetime image, we calculated the mean photon arrival time, 〈t〉, in each pixel as:

〈t〉 = ∫ t F(t) dt / ∫ F(t) dt;

then, the mean photon arrival time is related to the mean fluorescence lifetime, 〈τ〉, by an offset arrival time, t0, which is obtained by fitting the whole image:

〈t〉 = 〈τ〉 + t0.

For small regions-of-interest (ROIs) in an image (spines or dendrites), we calculated the binding fraction (P_AD) as:

P_AD = τ_D(τ_D − 〈τ〉) / [(τ_D − τ_AD)(τ_D + τ_AD − 〈τ〉)].

BDNF–SEP imaging was performed by interleaving 8 Hz two-photon imaging with two-photon glutamate uncaging (30 pulses at 0.5 Hz). Multiple (1–30) spines were imaged on each neuron. Change in BDNF–SEP fluorescence was measured as ΔF/F after subtracting background fluorescence. Uncaging-triggered averages were calculated as the average increase in SEP fluorescence after each individual uncaging pulse. Red fluorescence increase was smoothed using a 16-frame window. For visualizing BDNF–mRFP1 localization in CA1 pyramidal neurons, images were obtained using a Leica SP5 laser scanning confocal microscope (Leica). During 2pFLIM and BDNF–SEP imaging (Figs 2, 3), spine volume was reported using the red fluorescent intensity from mRFP1 or mCh. For two-photon imaging without FLIM (Fig. 4), green fluorescent intensity from eGFP was used. In all experiments, spine volume was measured as the integrated fluorescent intensity after subtracting background (F). Spine volume change was calculated as F/F0, in which F0 is the average spine volume before stimulation. Additionally, to compare basal spine size/morphology between various conditions, maximal spine (F_spine) and dendrite (F_dendrite) fluorescent intensities were measured and the F_spine/F_dendrite ratio was calculated after subtracting background fluorescence. E14.5/15.5 timed-pregnant Bdnffl/fl mice were deeply anaesthetized using an isoflurane–oxygen mixture. The uterine horns were exposed and approximately 1–2 μl of AAV solution mix (containing AAV1.CAG.EGFP, AAV1.CAG.Flex.tdTomato and AAV1.hSyn.Cre, all from the U Penn vector core) was injected through a pulled-glass capillary tube into the right lateral ventricle of each embryo. To achieve sufficient labelling of eGFP CA1 neurons alongside sparse expression of Cre + BDNF-knockout tdTomato neurons, eGFP and Flex-tdTomato viruses were used at a concentration of ~10^12 viral genome copies per μl, and Cre was diluted (~100-fold) in PBS at a dilution determined to achieve a sparse labelling density of Cre-positive CA1 neurons. LTP experiments in Fig. 5 and Extended Data Fig. 10 were performed at the Max Planck Florida Institute (MPFI) and Duke University, respectively. Mice (wild type, TrkbF616A, or Bdnffl/fl; age 21–42 days) were sedated by isoflurane inhalation, and the brain was removed and dissected in a chilled cutting solution (124 mM choline chloride, 2.5 mM KCl, 26 mM NaHCO3, 3.3 mM MgCl2, 1.2 mM NaH2PO4, 10 mM d-glucose and 0.5 mM CaCl2 at MPFI; or 110 mM sucrose, 60 mM NaCl, 3 mM KCl, 1.25 mM NaH2PO4, 28 mM NaHCO3, 0.5 mM CaCl2, 7.0 mM MgCl2, and 5 mM d-glucose at Duke. The solutions were saturated with 95% O2 plus 5% CO2, pH 7.4) (ref. 37).
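The lifetime-to-binding-fraction conversion used in the 2pFLIM analysis above can be sketched numerically. Under the double-exponential decay model described there (free donor with lifetime τ_D, bound donor with lifetime τ_AD), the binding fraction follows algebraically from the mean lifetime; the formula below is derived from that model rather than quoted from the paper, τ_D = 2.6 ns is from the text, and the bound-state lifetime τ_AD = 1.1 ns is a hypothetical value for illustration only.

```python
def binding_fraction(mean_tau, tau_d=2.6, tau_ad=1.1):
    """Fraction of donor bound to acceptor (P_AD), in ns units, assuming
    the double-exponential decay described in the text:
        F(t) ∝ (1 - P_AD) * exp(-t / tau_d) + P_AD * exp(-t / tau_ad)
    tau_d = 2.6 ns (free mEGFP, from the text); tau_ad = 1.1 ns is a
    hypothetical bound-state lifetime, not a value from the paper."""
    num = tau_d * (tau_d - mean_tau)
    den = (tau_d - tau_ad) * (tau_d + tau_ad - mean_tau)
    return num / den

# Sanity checks on the model's limits:
# mean lifetime equal to the free-donor lifetime → no binding (P_AD = 0);
# mean lifetime equal to the bound lifetime → full binding (P_AD ≈ 1).
assert binding_fraction(2.6) == 0.0
assert abs(binding_fraction(1.1) - 1.0) < 1e-9
```

In the imaging pipeline, `mean_tau` for a spine ROI would come from the mean photon arrival time 〈t〉 minus the whole-image offset t0, as described in the text.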
Coronal slices (250 μm; MPFI) or transverse hippocampal slices (400 μm; Duke) were prepared and maintained in oxygenated ACSF (MPFI/Duke: 127/124 mM NaCl, 2.5/1.75 mM KCl, 10/11 mM d-glucose, 26/25 mM NaHCO3, 1.25/0 mM NaH2PO4, 0/1.25 mM KH2PO4, 1.3/2.0 mM MgCl2 and 2.4/2.0 mM CaCl2) in a submerged chamber at 32–34 °C for at least 1 h before use. Electrophysiological recordings were performed in ACSF (plus picrotoxin at MPFI). CA1 pyramidal neurons in acute hippocampal slices from wild-type and TrkbF616A mice were visualized using oblique illumination or differential interference contrast (DIC). For Bdnffl/fl experiments, Cre-negative (eGFP-expressing) and Cre-positive (tdTomato-expressing) neurons were identified and targeted with fluorescence microscopy. Patch pipettes (3–6 MΩ) were filled with an internal solution (130 mM K-gluconate, 10 mM Na-phosphocreatine, 4 mM MgCl2, 4 mM Na2ATP, 0.3 mM MgGTP, 3 mM l-ascorbic acid and 10 mM HEPES, pH 7.4 and 310 mOsm, at MPFI; or K-gluconate 140 mM, HEPES 10 mM, EGTA 1 mM, NaCl 4 mM, MgATP 4 mM, and MgGTP 0.3 mM, pH 7.25 and 290 mOsm, at Duke). Series resistances (10–40 MΩ) and input resistances (100–300 MΩ) were monitored throughout the experiment using negative voltage steps. The membrane potential was held at −70 mV. Experiments were performed at room temperature (~21 °C) and slices were perfused with oxygenated ACSF. For TrkbF616A/wild-type experiments, 1NMPP1 or vehicle was added to the ACSF before stimulation. For TrkB-Ig experiments, slices were incubated in 2 μg ml−1 TrkB-Ig or control human IgG for at least 2 h before the experiments. EPSCs were evoked by extracellular stimulation of Schaffer collaterals using a concentric bipolar stimulating electrode (World Precision Instruments) at a rate of 0.03 Hz. LTP was induced by pairing 2-Hz stimulation with a postsynaptic depolarization to 0 mV for 15 s (MPFI) or 75 s (Duke).
EPSC potentiation was assessed for 30–45 min (for TrkbF616A experiments), 40–60 min (for Bdnffl/fl experiments) or 20–30 min (for TrkB-Ig experiments) after stimulation. HeLa cells were transfected with the TrkB sensor (TrkB–eGFP and mRFP1–PLC–mRFP1) using Lipofectamine 2000 as described above. Then, 24–48 h after transfection, the media bathing the cells was exchanged for HEPES-buffered ACSF for biochemistry (150 mM NaCl, 3 mM KCl, 10 mM HEPES pH 7.35, 20 mM glucose, and 310 mOsm). After a 30-min equilibration period, cells were stimulated with 100 ng ml−1 BDNF for 10 min. Following stimulation, cells were washed in ice-cold PBS (Gibco), and then lysed in modified RIPA buffer (50 mM Tris-HCl pH 7.4, 150 mM NaCl, 1% NP-40, 0.25% sodium deoxycholate, 1 mM EDTA, 1 mM PMSF, 1 mM Na3VO4, and protease inhibitors) for 10 min on ice. The supernatant was collected after a 10 min centrifugation at 16,000g at 4 °C. At this point, a small volume of the supernatant was added to SDS-sample buffer and saved as the ‘cell lysate’ sample. The remaining supernatant was pre-cleared using protein G Sepharose beads (25 μl, Roche) for 30 min at 4 °C. After pre-clearing, the supernatant was incubated with 20 μg mouse monoclonal anti-phosphotyrosine (BD Transduction Labs) at 4 °C overnight. The immunocomplexes were precipitated with protein G Sepharose beads (50 μl) for 3 h at 4 °C and then analysed by western blotting. Antibodies used in western blotting included TrkB (Millipore), GFP (Abcam), actin (Sigma), and pTrkB(Y515) (Sigma). Male adult (~2–3 months old) Bdnf-HA knock-in mice (ref. 26) and age-matched wild-type C57BL/6 mice were used. The same investigator (T.A.M.) perfused all mice (Bdnf-HA and wild type) to maintain consistency between groups. Mice (3 per group) were deeply anaesthetized with sodium pentobarbital (150 mg kg−1, i.p.)
and perfused sequentially through the ascending aorta with: (1) ~5 ml saline (0.9%) containing 2% heparin, and (2) 30 ml of 3.75% acrolein and 2% paraformaldehyde in 0.1 M phosphate buffer (PB; pH 7.4) (ref. 38). Following removal from the skull, the brain was post-fixed in 2% acrolein and 2% paraformaldehyde in PB for 30 min. Brains were then sectioned (40 μm thick) on a Vibratome and stored at −20 °C in cryoprotectant until use. For each animal, two dorsal hippocampal sections were processed for immunoelectron microscopy (immunoEM) experiments using previously described methods (ref. 38). Before immunohistochemical processing, sections were rinsed in PB, and experimental groups were coded with hole-punches so that tissue could be run in single crucibles, ensuring identical exposure to all reagents. Before processing for immunolabelling, sections were treated with 1% sodium borohydride for 30 min to remove free aldehyde sites. Sections were then rinsed in PB, followed by a rinse in 0.1 M Tris-saline (TS; pH 7.6) and a 30 min incubation in 0.5% BSA in TS. Sections were then incubated in primary rabbit anti-HA (1:1,000; Sigma) in 0.025% Triton-X 100 and 0.1% BSA in TS for 1 day at room temperature and 4 days at 4 °C. Sections were then incubated in donkey anti-rabbit biotinylated IgG (1:400; Jackson Immunoresearch Laboratories) for 30 min, followed by a 30 min incubation in avidin–biotin complex (ABC; Vectastain Elite Kit, Vector Laboratories) in TS (1:100 dilution). Sections were developed in 3,3′-diaminobenzidine (Sigma-Aldrich) and H2O2 in TS. All antibody incubations were performed in 0.1% BSA/TS and separated by washes in TS. Sections were post-fixed in 2% osmium tetroxide for 1 h, dehydrated, and flat-embedded in Embed-812 (EMS) between two sheets of Aclar plastic. Brain sections containing the CA1 and dentate gyrus were selected from the plastic-embedded sections, glued onto Epon blocks and trimmed to 1 mm-wide trapezoids.
Ultra-thin sections (70 nm thickness) through the tissue–plastic interface were cut with a diamond knife (EMS) on a Leica EM UC6 ultratome, and sections were collected on 400-mesh, thin-bar copper grids (EMS). Grids were then counterstained with uranyl acetate and Reynolds lead citrate. An investigator blinded to animal condition performed the data collection and analysis. One section from each of the Bdnf-HA and wild-type animals was analysed (n = 3 per group). The thin sections were examined and photographed on a Tecnai Biotwin transmission electron microscope (FEI). Cell profiles were identified by defined morphological criteria (ref. 39). Dendritic profiles generally were postsynaptic to axon terminals and contained regular microtubule arrays. Dendritic spines also were usually postsynaptic to axon terminal profiles and sometimes contained a spine apparatus. Axon terminals contained small synaptic vesicles and occasional dense-core vesicles. Unmyelinated axons were profiles smaller than 0.15 μm that contained a few small synaptic vesicles and lacked a synaptic junction in the plane of section. Glial profiles were distinguished by the presence of glial filaments (astrocytic profiles), by the presence of microtubules, and/or by their tendency to conform irregularly to the boundaries of surrounding profiles. ‘Unknown profiles’ were those that contained immunoperoxidase reaction product but could not be definitively placed in one of the above categories. From each block, 4 grid squares (each square 55 × 55 μm²), each from the CA1 near stratum radiatum (nSR in Fig. 3; that is, adjacent to the pyramidal cell layer) and distal stratum radiatum (dSR in Fig. 3; that is, 50–150 μm away from the pyramidal cell layer), were randomly sampled for analysis. Thus, 12,100 μm² was sampled for each area in each block. Grid squares were selected at the plastic–tissue interface to ensure even antibody penetration of the tissue (ref. 38).
Immunoperoxidase labelling for HA was evident as a characteristic, electron-dense DAB reaction product precipitate. All peroxidase-labelled profiles from each square were photographed and categorized. Animal codes were not broken until all 6 blocks were analysed. Sample sizes for all experiments were chosen based on signal-to-noise ratios identified in pilot experiments. Variances of all data sets were estimated and compared using Bartlett’s or Levene’s test before further statistical analysis. Randomization of animals and/or slices was not needed. To evaluate the distribution patterns of TrkB sensor activity, spine volume change, and BDNF–SEP signal, peak responses for each data set (the same points used for statistical comparisons) were subjected to a Shapiro–Wilk test for normality. TrkB sensor activity adhered to the null hypothesis (normal distribution), while spine volume change and BDNF–SEP signal did not. Because TrkB sensor activity had a normal distribution, parametric statistics were used: paired and unpaired two-tailed t-tests, ANOVA, and repeated-measures ANOVA with appropriate post-hoc analysis, as indicated in the figure legends and supplementary note. For t-tests, homoscedasticity between groups was evaluated using the F-test. If variance was unequal, Welch’s corrected t-test was performed. For ANOVA, homoscedasticity was evaluated with Bartlett’s test. For multiple comparisons of sensor activity, data were subjected to ANOVA or repeated-measures ANOVA followed by a post-hoc test to determine statistical significance. In cases where each condition was compared to all other conditions in the experiment, the Tukey–Kramer method was employed. In cases where each condition was compared to a single control, Dunnett’s test was used. Since spine volume change had a non-normal distribution, data were log-transformed to resolve skewness and then analysed with parametric statistics (the same tests described above), as indicated in the figure legends.
For the BDNF–SEP signal, log-transformation of the data did not resolve the skewness. As such, non-parametric statistics were used: the Wilcoxon rank-sum test and the Kruskal–Wallis test followed by Dunn’s test. Data were excluded only if obvious signs of poor cellular health (dendritic blebbing, spine collapse, etc.) were apparent.
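The test-selection logic described above (normal data get parametric tests, data whose skew a log-transform resolves get parametric tests on the logs, everything else falls back to rank-based tests) can be sketched as a small decision function. A simple skewness cutoff stands in here for the Shapiro–Wilk test the paper actually used; the cutoff value of 0.5 is an arbitrary illustration, not from the paper.

```python
import math
import statistics

def skewness(xs):
    """Sample skewness: mean cubed z-score."""
    m, s = statistics.mean(xs), statistics.stdev(xs)
    return sum(((x - m) / s) ** 3 for x in xs) / len(xs)

def choose_test_family(samples, skew_cutoff=0.5):
    """Mirror the decision flow described in the text: roughly normal data
    get parametric tests; data whose skew a log-transform resolves get
    parametric tests on the logs; otherwise fall back to rank-based
    (non-parametric) tests. The skewness cutoff stands in for the
    Shapiro-Wilk normality test the paper used."""
    if abs(skewness(samples)) < skew_cutoff:
        return "parametric"
    if all(x > 0 for x in samples):  # log-transform only defined for positive data
        logs = [math.log(x) for x in samples]
        if abs(skewness(logs)) < skew_cutoff:
            return "parametric on log-transformed data"
    return "non-parametric (rank-based)"

print(choose_test_family([1.0, 2.0, 3.0, 4.0, 5.0]))   # → parametric
print(choose_test_family([1.0, 10.0, 100.0, 1000.0]))  # → parametric on log-transformed data
```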


Filipin III was from Sigma. The Amplex Red cholesterol assay kit was from Invitrogen. IL-2 was from Promega. For the flow cytometric analysis, anti-mCD4 (RM4-5), anti-mCD8 (53-6.7), anti-mCD3ε (145-2C11), anti-IFNγ (XMG1.2), anti-TNFα (MP6-XT22), anti-granzyme B (NGZB), anti-CD44 (IM7), anti-CD69 (H1.2F3), anti-PD-1 (J43), anti-CTLA-4 (UC10-4B9), anti-Ki-67 (16A8), anti-FoxP3 (FJK-16s), anti-Gr1 (RB6-8C5), anti-CD11b (M1/70) and anti-CD45 (30-F11) were purchased from eBioscience. For western blots, anti-pCD3ζ, anti-CD3ζ, anti-pZAP70, anti-ZAP70, anti-pLAT, anti-LAT, anti-pERK1/2 and anti-ERK1/2 were from Cell Signaling Technology. Avasimibe was from Selleck. MβCD-cholesterol and MβCD were from Sigma. Lovastatin was from Sigma. U18666A was from Merck. K604 was chemically synthesized in F.-J. Nan’s laboratory. CP113,818 was a research gift from P. Fabre. MTS (3-(4,5-dimethylthiazol-2-yl)-5-(3-carboxymethoxyphenyl)-2-(4-sulfophenyl)-2H-tetrazolium) was from Promega. B16F10, Lewis lung carcinoma and EL-4 cell lines were originally obtained from the American Type Culture Collection and were confirmed to be mycoplasma-free. Listeria monocytogenes was provided by Q. Leng. C57BL/6 mice were purchased from SLAC. OT-I TCR transgenic mice were from the Jackson Laboratory. CD4cre transgenic mice were described previously (ref. 31). InGeneious Labs produced the homozygous Acat1flox/flox mouse. To produce this mouse, the Acat1 loxP construct was made by inserting two loxP sites flanking Acat1 exon 14, which encodes His460, a residue known to be essential for enzymatic activity (ref. 32). The construct was injected into embryonic stem cells. Correctly targeted clones, as determined by Southern blot and diagnostic PCR, were injected into C57BL/6 blastocysts. To remove the Neo marker, the mice were further backcrossed to C57BL/6 Frt mice.
Through mouse crossing, the wild-type Acat1 allele (Acat1+/+), heterozygous Acat1 loxP allele (Acat1flox/+) and homozygous Acat1 loxP allele (Acat1flox/flox) were obtained and confirmed by diagnostic PCR. Acat1flox/flox mice were crossed with CD4cre transgenic mice to obtain Acat1CKO mice with ACAT1 deficiency in T cells. Acat1CKO mice were further crossed with OT-I TCR transgenic mice to obtain Acat1CKO OT-I mice. Animal experiments using Acat1CKO mice were controlled with their littermates with normal ACAT1 expression (Acat1flox/flox). Animal experiments using Acat1CKO OT-I mice were controlled with their littermates with normal ACAT1 and OT-I TCR expression (Acat1flox/flox OT-I). Acat2−/− mice were purchased from the Jackson Laboratory. All mice were maintained in pathogen-free facilities at the Institute of Biochemistry and Cell Biology. All animal experiments used mice matched for age and sex. Animals were randomly allocated to experimental groups. The animal experiments performed in a blinded manner are described below. All animal experiments were approved by the Institutional Animal Care and Use Committee (IACUC) of the Institute of Biochemistry and Cell Biology, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences. The maximal tumour measurements/volumes were in accordance with IACUC guidelines. All human studies were approved by the Research Ethical Committee of ChangZheng Hospital, Shanghai, China. Informed consent was obtained from all study subjects.
Total RNA was extracted with Trizol (Life Technologies) from the indicated cells and subjected to quantitative reverse transcription PCR (qRT–PCR) using gene-specific primers (5′–3′): Acat1 (forward, GAAACCGGCTGTCAAAATCTGG; reverse, TGTGACCATTTCTGTATGTGTCC); Acat2 (forward, ACAAGACAGACCTCTTCCCTC; reverse, ATGGTTCGGAAATGTTCACC); Nceh (forward, TTGAATACAGGCTAGTCCCACA; reverse, CAACGTAGGTAAACTGTTGTCCC); Srebp1 (forward, GCAGCCACCATCTAGCCTG; reverse, CAGCAGTGAGTCTGCCTTGAT); Srebp2 (forward, GCAGCAACGGGACCATTCT; reverse, CCCCATGACTAAGTCCTTCAACT); Acaca (forward, ATGGGCGGAATGGTCTCTTTC; reverse, TGGGGACCTTGTCTTCATCAT); Fasn (forward, GGAGGTGGTGATAGCCGGTAT; reverse, TGGGTAATCCATAGAGCCCAG); Hmgcs (forward, AACTGGTGCAGAAATCTCTAGC; reverse, GGTTGAATAGCTCAGAACTAGCC); Hmgcr (forward, AGCTTGCCCGAATTGTATGTG; reverse, TCTGTTGTGAACCATGTGACTTC); Sqle (forward, ATAAGAAATGCGGGGATGTCAC; reverse, ATATCCGAGAAGGCAGCGAAC); Ldlr (forward, TGACTCAGACGAACAAGGCTG; reverse, ATCTAGGCAATCTCGGTCTCC); Idol (forward, TGCAGGCGTCTAGGGATCAT; reverse, GTTTAAGGCGGTAAGGTGCCA); Abca1 (forward, AAAACCGCAGACATCCTTCAG; reverse, CATACCGAAACTCGTTCACCC); Abcg1 (forward, CTTTCCTACTCTGTACCCGAGG; reverse, CGGGGCATTCCATTGATAAGG); Ifng (forward, ATGAACGCTACACACTGCATC; reverse, CCATCCTTTTGCCAGTTCCTC). Three methods were used to measure the cholesterol level of T cells. Filipin III was dissolved in ethanol to a final concentration of 5 mg ml−1. Cells were fixed with 4% paraformaldehyde (PFA) and stained with 50 μg ml−1 filipin III for 30 min at 4 °C. Images were collected using a Leica SP8 confocal microscope and analysed using Leica LAS AF software. The total cellular cholesterol level was quantified using the Amplex Red cholesterol assay kit (Invitrogen). To quantify intracellular cholesterol, CD8+ T cells were fixed with 0.1% glutaraldehyde and then treated with 2 U ml−1 cholesterol oxidase for 15 min to oxidize the plasma membrane cholesterol.
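The primer pairs listed above lend themselves to programmatic bookkeeping. As a sketch, two of the pairs from the text are stored in a dictionary and checked with the Wallace rule, Tm ≈ 2(A+T) + 4(G+C), a generic lab rule of thumb for short oligos, not something the paper specifies.

```python
# Two of the qRT-PCR primer pairs from the text (5'-3'), with a rough
# melting-temperature estimate via the Wallace rule: Tm ≈ 2(A+T) + 4(G+C).
# (The rule is a generic heuristic for short primers, not from the paper.)
PRIMERS = {
    "Acat1": ("GAAACCGGCTGTCAAAATCTGG", "TGTGACCATTTCTGTATGTGTCC"),
    "Acat2": ("ACAAGACAGACCTCTTCCCTC", "ATGGTTCGGAAATGTTCACC"),
}

def wallace_tm(seq):
    """Wallace-rule melting temperature (°C) for a short DNA primer."""
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    assert at + gc == len(seq), "unexpected base in primer"
    return 2 * at + 4 * gc

for gene, (fwd, rev) in PRIMERS.items():
    print(gene, wallace_tm(fwd), wallace_tm(rev))
```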
The intracellular cholesterol was then extracted with methanol/chloroform (vol/vol, 1:2) and quantified using the Amplex Red cholesterol assay kit. The plasma membrane cholesterol value was obtained by subtracting the intracellular cholesterol from the total cellular cholesterol. Plasma membrane cholesterol level was also measured as previously described33. The plasma membrane of CD8+ T cells was biotinylated with 1 mg ml−1 sulfo-NHS-S-biotin, and the cells were then lysed by passing 13 times through a ball-bearing homogenizer. Plasma membrane was isolated from the supernatant of the homogenate with streptavidin magnetic beads. Lipids were extracted with hexane/isopropanol (vol/vol, 3:2) and used for measurement of unesterified cholesterol with the Amplex Red Cholesterol Assay Kit and of choline-containing phospholipids with the EnzyChrom Phospholipid Assay Kit. The relative plasma membrane cholesterol level was normalized to the total phospholipids. To deplete cholesterol from the plasma membrane, CD8+ T cells were treated with 0.1–1 mM MβCD for 5 min at 37 °C and then washed three times with PBS. To add cholesterol to the plasma membrane, CD8+ T cells were incubated in culture medium supplemented with 1–20 μg ml−1 MβCD-coated cholesterol at 37 °C for 15 min. The cells were then washed three times with PBS. Peripheral T cells were isolated from mouse spleen and draining lymph nodes with a CD8+ or CD4+ T-cell negative selection kit (STEMCELL Technologies). To analyse tumour-infiltrating T cells, tumours were first digested with collagenase IV (Sigma), and tumour-infiltrating leukocytes were isolated by 40–70% Percoll (GE) gradient centrifugation. To measure the effector function of CD8+ T cells, the isolated cells were first stimulated with 1 μM ionomycin and 50 ng ml−1 phorbol 12-myristate 13-acetate (PMA) for 4 h in the presence of 5 μg ml−1 BFA, and then stained with PerCP-conjugated anti-CD8a.
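The compartment arithmetic above (intracellular cholesterol measured after oxidizing the plasma-membrane pool, plasma-membrane cholesterol obtained by subtraction, and the relative level normalized to phospholipid) can be sketched as follows; the function names and all numeric values are illustrative, not from the study's data:

```python
def membrane_cholesterol(total_chol, intracellular_chol):
    """Plasma-membrane cholesterol = total cellular - intracellular pool.

    Both inputs are Amplex Red readouts for the same cell number, in the
    same units (e.g. ug per 10^6 cells); values are illustrative.
    """
    if intracellular_chol > total_chol:
        raise ValueError("intracellular pool cannot exceed total cholesterol")
    return total_chol - intracellular_chol


def relative_pm_cholesterol(free_chol, phospholipid):
    """Relative plasma-membrane cholesterol: unesterified cholesterol
    normalized to choline-containing phospholipids measured on the same
    membrane preparation."""
    return free_chol / phospholipid
```

For example, with an illustrative total of 10.0 and an intracellular pool of 6.5 (same units), the plasma-membrane pool is 3.5.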
Next, cells were fixed with 4% PFA and stained with FITC-conjugated anti-granzyme B, allophycocyanin (APC)-conjugated anti-IFNγ and phycoerythrin (PE)-conjugated anti-TNFα. In general, to gate the cytokine- or granule-producing cells, T cells without stimulation or stained with an isotype control antibody were used as negative controls. This gating strategy is applicable to most of the flow cytometric analyses. To detect MDSCs in the tumour, the Percoll-isolated leukocytes were stained with anti-CD45, anti-CD11b and anti-Ly6G (Gr1); the CD45+ population was gated first, after which the MDSC population (CD11b+ Gr1+) within the CD45+ cells was gated. A pan T-cell isolation kit (Miltenyi Biotec) was used to deplete T cells from splenocytes isolated from C57BL/6 mice. The T-cell-depleted splenocytes were pulsed with antigenic peptides for 2 h and washed three times. SIINFEKL (OVA or N4), SAINFEKL (A2), SIITFEKL (T4) and SIIGFEKL (G4) are four agonist antigens with strong to weak TCR affinities. RTYTYEKL (Catnb) is a self-antigen of the OT-I TCR. SIIRFEKL (R4) supports the positive selection of OT-I T cells and thus mimics a self-antigen. The T-cell-depleted, antigen-pulsed splenocytes were co-incubated with Acat1CKO OT-I T cells or wild-type OT-I T cells for 24 h. Cytokine production of CD8+ T cells was measured by intracellular staining and flow cytometric analysis. To generate mature CTLs, splenocytes isolated from Acat1CKO OT-I mice or wild-type OT-I mice were stimulated with OVA (N4) for 3 days in the presence of 10 ng ml−1 IL-2. Cells were centrifuged and cultured in fresh medium containing 10 ng ml−1 IL-2 for 2 more days, after which most of the cells in the culture were CTLs. To measure CD8+ T-cell cytotoxicity, EL-4 cells were pulsed with 2 nM antigenic peptide (N4, A2, T4, G4, R4 or Catnb) for 30 min.
After washing EL-4 cells and CTLs three times with PBS, we mixed CTLs and antigen-pulsed EL-4 cells (1 × 105) in the killing medium (phenol-free RPMI 1640, 2% FBS) at ratios of 1:1, 2:1 and 5:1, respectively. After 4 h, the cytotoxic efficiency was measured by quantifying the release of endogenous lactate dehydrogenase (LDH) from EL-4 cells using a CytoTox 96 Non-Radioactive Cytotoxicity kit (Promega). Human peripheral blood mononuclear cells from healthy donors were stimulated with 5 μg ml−1 phytohaemagglutinin (Sigma) for 2 days and then rested for 1 day. Cells were pretreated with vehicle (DMSO), CP113,818 or avasimibe for 12 h and then stimulated with 5 μg ml−1 plate-bound anti-CD3 and anti-CD28 antibodies for 24 h. Intracellular staining and flow cytometry were used to measure cytokine production by CD8+ T cells. Oxygen consumption rates and extracellular acidification rates were measured in non-buffered DMEM (Sigma) containing either 25 mM or 10 mM glucose, 2 mM l-glutamine and 1 mM sodium pyruvate, under basal conditions and in response to 1 μM oligomycin (to block ATP synthesis), 1.5 μM FCCP (to uncouple ATP synthesis from the electron transport chain), 0.5 μM rotenone and antimycin A (to block complexes I and III of the electron transport chain, respectively), and 200 μM etomoxir (to block mitochondrial fatty acid oxidation) on XF-24 or XF-96 Extracellular Flux Analyzers (Seahorse Bioscience) according to the manufacturer's recommendations. B16F10 cells (5 × 103) in 100 μl media containing avasimibe or DMSO were cultured for 24, 48 or 72 h. MTS reagent (20 μl) (CellTiter 96 AQueous One Solution Cell Proliferation Assay, Promega) was added into each well. After a 2–3-h incubation, the absorbance at 490 nm was measured. The effect of avasimibe on cell viability was obtained by normalizing the absorbance of avasimibe-treated cells to that of DMSO-treated cells. The viability value of DMSO-treated cells was set as 1.
L. monocytogenes (2 × 104–7 × 104 colony-forming units (CFU)) expressing a truncated OVA protein were intravenously injected into Acat1CKO and littermate wild-type mice aged 8–10 weeks. On day 6, T cells isolated from spleens were stimulated with 50 ng ml−1 PMA and 1 μM ionomycin for 4 h in the presence of brefeldin A and then assessed by flow cytometry to detect IFNγ production. At the same time, the serum IFNγ level was assessed by ELISA. To detect the antigen-specific response of CD8+ T cells, the splenocytes were stimulated with 1 μM OVA peptide for 24 h. IFNγ production was analysed as mentioned above. To determine the L. monocytogenes titre in the livers of infected mice, the livers were homogenized in 10 ml 0.2% (vol/vol) Nonidet P-40 in PBS, and the organ homogenates were diluted and plated on agar plates to determine the CFU of L. monocytogenes. The investigator was blinded to group allocation during the experiment and when assessing the outcome. B16F10 cells were washed three times with PBS and filtered through a 40-μm strainer. In the skin melanoma model, B16F10 cells (2 × 105) were subcutaneously injected into the dorsal part of mice (aged 8–10 weeks). From day 10, tumour size was measured every 2 days, and the animal survival rate was recorded every day. Tumour size was calculated as length × width. Mice with a tumour larger than 20 mm at the longest axis were euthanized for ethical reasons. To analyse the effector function of tumour-infiltrating T cells, mice were euthanized on day 16. For avasimibe therapy, melanoma-bearing mice with similar tumour sizes were randomly divided into two groups. From day 10, avasimibe was injected intraperitoneally into the mice at a dose of 15 mg kg−1 every 2 days. In the lung-metastatic melanoma model, B16F10 cells (2 × 105) were intravenously injected into mice (aged 8–10 weeks). The animal survival rate was recorded every day. To study tumour growth, mice were euthanized on day 20 and tumour numbers on the lungs were counted.
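The liver titre determination above (homogenize in 10 ml, dilute, plate, count colonies) reduces to multiplying the colony count back through the dilution. A minimal sketch; the function name and the numbers in the example are illustrative, only the 10-ml homogenate volume comes from the protocol:

```python
def cfu_per_organ(colonies, dilution_factor, plated_volume_ml,
                  homogenate_volume_ml=10.0):
    """Back-calculate the total CFU in an organ homogenate.

    colonies: colonies counted on one plate
    dilution_factor: total fold-dilution of the plated sample (e.g. 1e4)
    plated_volume_ml: volume spread on the plate
    homogenate_volume_ml: total homogenate volume (10 ml NP-40/PBS above)
    """
    cfu_per_ml = colonies * dilution_factor / plated_volume_ml
    return cfu_per_ml * homogenate_volume_ml
```

For example, 50 colonies from 0.1 ml of a 10,000-fold dilution of a 10-ml homogenate back-calculates to 5 × 10^7 CFU per liver.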
Lung-infiltrating T cells were isolated and analysed as mentioned above. In the lung-metastatic melanoma model, the investigator was blinded to group allocation during the experiment and when assessing the outcome. B16F10-OVA cells (2 × 105) were injected subcutaneously into C57BL/6 mice at age 8–10 weeks. On day 16, naive wild-type or Acat1CKO OT-I CD8+ T cells were isolated and labelled with the live-cell dye CFSE or CTDR (Cell Tracker Deep Red, Life Technologies), respectively. The labelled wild-type and CKO cells were mixed at a 1:1 ratio, and 1 × 107 mixed cells per mouse were injected intravenously into the B16F10-OVA-bearing mice. After 12 h, blood, spleens, inguinal lymph nodes (draining) and mesenteric lymph nodes (non-draining) were collected. Single-cell suspensions from these tissues were stained with the anti-CD8a antibody, and the ratio of transferred cells in the CD8+ population was analysed by flow cytometry. Lewis lung carcinoma cells were washed twice with PBS and filtered through a 40-μm strainer; the cells (2 × 106) were then intravenously injected into wild-type or Acat1CKO mice at age 8–10 weeks. To determine tumour multiplicity in the lung, the mice were euthanized on day 35 after tumour inoculation and tumour numbers in the lung were counted. For avasimibe therapy, mice were randomly divided into two groups. From day 10 to day 35 after tumour inoculation, avasimibe was delivered to the mice by intragastric administration at a dose of 15 mg kg−1 every 3 days. B16F10-OVA cells (2 × 105) were injected subcutaneously into C57BL/6 mice at age 8–10 weeks. On day 10, melanoma-bearing mice with similar tumour sizes were randomly divided into three groups (n = 9–10) that received PBS, wild-type OT-I CTLs (1.5 × 106) or Acat1CKO OT-I CTLs (1.5 × 106), respectively, by intravenous injection. From day 13, tumour size was measured every two days, and the animal survival rate was recorded every day.
Tumour size was calculated as length × width. Mice with a tumour larger than 20 mm at the longest axis were euthanized for ethical reasons. B16F10 cells (2 × 105) were injected subcutaneously into C57BL/6 mice at age 8–12 weeks. On day 10, melanoma-bearing mice with similar tumour sizes were randomly divided into four groups (n = 8–10) that received PBS, avasimibe, anti-PD-1 antibody, or both avasimibe and anti-PD-1 antibody, respectively. Avasimibe was delivered every 2 days at a dose of 15 mg kg−1 by intragastric administration. Anti-PD-1 antibody (RMP1-14, Bio X Cell, 200 μg per injection) was injected intraperitoneally every 3 days. Tumour size and survival were measured as mentioned above. Mice with a tumour larger than 20 mm at the longest axis were euthanized for ethical reasons. Super-resolution STORM imaging was performed on a custom-modified Nikon N-STORM microscope equipped with a motorized inverted microscope ECLIPSE Ti-E, an Apochromat TIRF 100× oil-immersion lens with a numerical aperture of 1.49 (Nikon), an electron-multiplying charge-coupled device (EMCCD) camera (iXon3 DU-897E, Andor Technology), and a quad band filter composed of a quad line beam splitter (zt405/488/561/640rpc TIRF, Chroma Technology Corporation) and a quad line emission filter (Brightline HC 446, 523, 600, 677, Semrock, Inc.). The TIRF angle was adjusted to oblique-incidence excitation at a value of 3,950–4,000, allowing the capture of images at a depth of about 1 μm into the sample. The focus was kept stable during acquisition using the Nikon focus system. For excitation of Alexa 647, the 647 nm continuous-wave visible fibre laser was used, and the 405 nm diode laser (CUBE 405-100C, Coherent Inc.) was used for switching the fluorophores back from the dark to the fluorescent state. The EMCCD camera acquired images at 90–95 frames per second.
To image the TCR distribution in the plasma membrane, naive CD8+ T cells or activated CD8+ T cells (stimulated with 10 μg ml−1 anti-CD3 for 10 min at 37 °C) were placed in an Ibidi 35 mm μ-Dish and fixed with 4% PFA, followed by surface staining with 5 μg ml−1 anti-mCD3ε (145-2C11) for 4 h at 4 °C; after washing with PBS ten times, the cells were stained with 2 μg ml−1 Alexa 647-conjugated goat anti-hamster IgG (the secondary antibody) for 2 h at 4 °C. Before imaging, the buffer in the dish was replaced with imaging buffer containing 100 mM β-mercaptoethylamine (MEA) to ensure sufficient blinking of the fluorophores. Super-resolution images were reconstructed from a series of 20,000–25,000 frames using the N-STORM analysis module of NIS Elements AR (Laboratory Imaging s.r.o.). Molecule distribution and cluster position were analysed with MATLAB (MathWorks) based on Ripley's K function. L(r) − r represents the efficiency of molecule clustering, and the r value represents the cluster radius. The r value at the maximum of L(r) − r represents the cluster size with the highest probability34. Planar lipid bilayers (PLBs) containing biotinylated lipids were prepared to bind biotin-conjugated stimulating antibody via streptavidin, as previously described35, 36. Biotinylated liposomes were prepared by sonicating 1,2-dioleoyl-sn-glycero-3-phosphocholine and 1,2-dioleoyl-sn-glycero-3-phosphoethanolamine-cap-biotin (25:1 molar ratio, Avanti Polar Lipids) in PBS at a total lipid concentration of 5 mM. PLBs were formed in Lab-Tek chambers (Nalge Nunc) in which the cover glasses were replaced with nanostrip-washed coverslips. Coverslips were incubated with 0.1 mM biotinylated liposomes in PBS for 20 min. After washing with 10 ml PBS, PLBs were incubated with 20 nM streptavidin for 20 min, and excess streptavidin was removed by washing with 10 ml PBS. Streptavidin-containing PLBs were incubated with 20 nM biotinylated anti-mCD3ε (145-2C11) (Biolegend).
Excess antibody was removed by washing with PBS. Next, PLBs were treated with 5% FBS in PBS for 30 min at 37 °C and washed thoroughly for TIRFM of T cells. Adhesion ligands necessary for immunological synapse formation were provided by treating the bilayer with serum. Freshly isolated mouse splenocytes were stained with Alexa568-anti-mTCRβ Fab and FITC-anti-mCD8 and washed twice. Anti-mTCRβ antibody was labelled with Alexa568-NHS ester (Molecular Probes) and digested with the Pierce Fab Micro Preparation Kit (Thermo) to obtain Fab fragments. Cells were then placed on anti-mCD3ε-containing PLBs to crosslink TCR. Time-lapse TIRFM images were acquired on a heated stage with a 3-s interval at 37 °C, 5% CO2, using a Zeiss Axio Observer SD microscope equipped with a TIRF port, an Evolve 512 EMCCD camera and a Zeiss Alpha Plan-Apochromat 100× oil lens. Acquisition was controlled by ZEN 2012 software. An OPSL 488 nm laser and a DPSS 561 nm laser were used. A field of 512 × 512 pixels was used to capture 6–8 CD8+ T cells per image. Results of synapse formation and TCR movements were the population averages of all CD8+ T cells from 2–3 individual images. The movements of TCR microclusters were classified into directed, confined and random movement using a previously described method37. To sort the three movement types, the MSD plot of each TCR microcluster was fitted with three functions as described37. Microclusters with a good fit (square of correlation coefficient (R2) ≥ 0.33) were selected for further classification. For a given TCR microcluster, the movement was defined as random if s.d. < 0.010. The distinction between directed and confined movement depended on which function fitted better in the population with s.d. ≥ 0.010. Images were analysed with Image Pro Plus software (Media Cybernetics), ImageJ (NIH) and MATLAB (MathWorks).
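The trajectory classification above fits each microcluster's mean-squared displacement (MSD) to directed-, confined- and random-motion model functions per ref. 37 (MATLAB, available from the authors). As a simpler illustrative stand-in, not the paper's criteria, trajectories can be sorted by the anomalous-diffusion exponent (the log-log slope of MSD versus lag); all thresholds below are assumptions:

```python
import numpy as np


def msd(track):
    """MSD of a 2D trajectory (N x 2 array of positions) at all non-zero lags."""
    n = len(track)
    lags = np.arange(1, n)
    vals = np.array([np.mean(np.sum((track[l:] - track[:-l]) ** 2, axis=1))
                     for l in lags])
    return lags, vals


def classify(track, hi=1.3, lo=0.7):
    """Crude motion classifier via the diffusion exponent alpha:
    alpha ~ 2 for directed, ~1 for random (Brownian), << 1 for confined
    motion. The hi/lo cutoffs are illustrative, not from ref. 37."""
    lags, vals = msd(track)
    alpha = np.polyfit(np.log(lags), np.log(vals), 1)[0]
    if alpha > hi:
        return "directed"
    if alpha < lo:
        return "confined"
    return "random"
```

A purely ballistic track (constant velocity) gives MSD proportional to lag squared, so alpha = 2 and the call is "directed"; positions jittering in place give a flat MSD and a "confined" call.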
For granule polarization imaging, CTLs stained with Alexa568-anti-mTCRβ Fab were placed on anti-mCD3ε-containing PLBs for the indicated times and fixed with 4% PFA. After permeabilization, cells were stained with Alexa488-anti-mCD107a (1D4B) antibody. Three-dimensional spinning-disc confocal microscopy was used to image the granules polarized at 0–2 μm distance from the synapse. Total granule volumes were quantified with Imaris software. The degranulation level was measured as previously described38. OT-I CTLs were mixed with OVA-pulsed EL4 cells at a 1:1 ratio. The mixed cells were then cultured in medium supplemented with 1 μg ml−1 Alexa488-anti-CD107a antibody and 2 μM monensin for 1, 2 and 4 h. Cells were then washed with PBS and further stained with PE–Cy7-anti-CD8a antibody. Flow cytometry was used to assess the surface and internalized CD107a levels. The MATLAB code used for the STORM and TIRFM data analysis can be accessed by contacting W.L. (liuwanli@biomed.tsinghua.edu.cn). All sample sizes are large enough to ensure proper statistical analysis. Statistical analyses were performed using GraphPad Prism (GraphPad Software, Inc.). Statistical significance was determined as indicated in the figure legends. P < 0.05 was considered significant; *P < 0.05; **P < 0.01; ***P < 0.001. All t-test analyses are two-tailed unpaired t-tests. The replicates in Figs 2, 3b, i, k–o, 4a, b, e–j, l, m and Extended Data Figs 1a, 3a–c, g–l, 4f, 5a–e, 6, 7g, 8, 9e, h, j and 10 were biological replicates. The replicates in Figs 1, 3c, d, p, 4o, p and Extended Data Figs 1b–i, 2, 3d–f, m, n, 4b–e, 5f, g, 7a, b, i–l and 9a–c were technical replicates. The centre values shown in all figures are average values.


Animal procedures were carried out according to the ethical guidelines of the Institute of Biochemistry and Cell Biology, Chinese Academy of Sciences, China. All animal experiments were randomized and the experimenters were blinded to allocation during experiments and outcome assessment. No statistical methods were used to predetermine sample size. The Tet1 and Tet3 gene-targeting schemes were described previously28, 31. The Tet2 gene was inactivated by targeting exons 9–10, which encode part of the catalytic domain20. The schematics of the targeted disruption of Tet1/2/3 are shown in Extended Data Fig. 1a. The strains of Stra8-Cre and Zp3-Cre mice used in this study were FVB/N-Tg(Stra8-Cre)1Reb32 and C57BL/6-TgN(Zp3-Cre)93Knw33, respectively. To inactivate the three Tet genes in germ cells from the postnatal stage, we generated Tet1/2/3 conditional knockout parents by crossing Tet1/2/3 floxed mice with Stra8-Cre and Zp3-Cre transgenic mice, respectively. An alternative mating strategy was used that allowed us to obtain littermates of Tet triple knockout (TKO) embryos and controls retaining one wild-type allele of a Tet gene (Extended Data Fig. 3a). Stra8-Cre mice express Cre in spermatogonia and Zp3-Cre mice express Cre exclusively in growing oocytes. Tet1f/−Tet2f/−Tet3f/−; Stra8-Cre (hereafter 3 × f/−; Stra8-Cre) males and Tet1f/−Tet2f/−Tet3f/−; Zp3-Cre (hereafter 3 × f/−; Zp3-Cre) females deleted the floxed Tet sequences in their respective germ cells with high efficiency (Extended Data Fig. 1b, c). The triple-homozygous progeny of (3 × f/−; Stra8-Cre) males and (3 × f/−; Zp3-Cre) females lacked all three TET proteins beginning at the zygotic stage. The conditional knockout mice were on a mixed C57BL/6J–129Sv genetic background. Embryos were fixed in 4% paraformaldehyde or Bouin's solution at 4 °C and then processed for paraffin wax embedding. Sections 5 μm thick were cut, dewaxed in xylene and rehydrated through an ethanol series into PBS.
Haematoxylin and eosin counterstaining was performed according to standard protocols. Images were taken on an Olympus BX51 microscope. For immunostaining, E7.5 embryos were fixed with 4% paraformaldehyde/PBS at 4 °C overnight and impregnated with 30% sucrose. The embryos were then embedded in JUNG tissue freezing medium (Leica) and cut into 8-μm sections. After washing with PBS, sections were blocked with 10% goat serum (Abcam), 1% BSA (Sigma) and 0.3% Triton X-100 for 1 h, followed by incubation with primary antibody (anti-Histone H3-phospho-S28, Abcam 5169; anti-E-cadherin, BD #610182; anti-Snail, CST #3895) overnight at 4 °C and with secondary antibody at room temperature for 1 h. Finally, the slides were mounted in Anti-fade Reagent (Invitrogen) and imaged using a Leica TCS SP5 II confocal microscope. For immunostaining with anti-5mC (Eurogentec #BI-MECY-0100) and anti-5hmC (Active Motif #39792) antibodies, sections were first treated with HCl solution (4 N HCl, 0.1% Triton X-100 in distilled water) for 15 min. For TUNEL staining, sections were pre-incubated with 1× TdT buffer (Promega, M1893) for 5 min and then incubated with terminal transferase (Promega, M1871) and biotin-dUTP (Roche, 11093070910) at 37 °C for 1 h. After washing with PBS, the slides were stained with Streptavidin-CF555 at room temperature for 1 h. We mainly followed the published protocols34. Genomic DNA was isolated from E8.0–8.5 embryos with the DNeasy Blood & Tissue Kit (Qiagen) according to the manufacturer's instructions. The extracted DNA (800 ng) was digested to nucleosides with 1.0 U DNase I, 2.0 U calf intestinal phosphatase and 0.005 U snake venom phosphodiesterase I at 37 °C overnight.
The digests were filtered through ultrafiltration tubes to remove the enzymes and then subjected to ultra-high-pressure liquid chromatography and tandem mass spectrometry (UHPLC–MS/MS) analysis for detection of 5mC and 5hmC. The stable isotopes 5′-(methyl-d3) 2′-deoxycytidine ([D3]5mC) and 5′-(hydroxymethyl-d2) 2′-deoxycytidine ([D2]5hmC) were used as internal standards for calibrating the UHPLC–MS/MS quantitation of 5mC and 5hmC. Analysis of 5mC and 5hmC was performed with a Zorbax SB-Aq column (2.1 mm × 100 mm, 1.8 μm, Agilent) for separation and electrospray MS/MS (Agilent 6490, Santa Clara) for detection in positive ion mode. The number of embryos analysed by UHPLC–MS/MS is listed in Supplementary Table 3. For collection of mature oocytes, oviducts were removed from female mice 13–15 h after human chorionic gonadotropin injection. Cumulus–oocyte complexes were released into HEPES-buffered CZB medium (HCZB) containing 0.1% bovine testicular hyaluronidase (300 USP units per mg; ICN Biomedicals Inc.). Cavitated E3.5 blastocysts were flushed from the uteri of naturally mated mice into M2 or DMEM, followed by sequential washing in KSOM. The isolated MII oocytes and blastocysts were then subjected to RNA-seq. Tet-TKO ES cell lines were derived by intercrossing (3 × f/−; Stra8-Cre) male and (3 × f/−; Zp3-Cre) female mice using a standard protocol. ESCs were maintained on feeders and cultured in DMEM supplemented with 15% heat-inactivated fetal bovine serum (FBS, Invitrogen), 2 mM glutamine (Gibco-BRL), 0.1 mM nonessential amino acids (Gibco-BRL), 1 mM sodium pyruvate (Gibco-BRL), 0.1 mM β-mercaptoethanol (Sigma), 1 μM PD0325901 (StemGent), 3 μM CHIR99021 (Axon Medchem), 1,000 U ml−1 murine leukaemia inhibitory factor (LIF, Chemicon) and 1× penicillin/streptomycin (PS, Gibco-BRL). All ES cell lines were tested for mycoplasma contamination and analysed for karyotype.
E6.5 embryos were carefully recovered from the uteri and isolated from the decidua using fine forceps and a needle under a dissecting microscope. After incubation in PBS with 0.25% pancreatin (Sigma, P7545) and 0.05% trypsin (Gibco, 27250-018) for 10 min at 4 °C, embryos were transferred to Dulbecco's Modified Eagle Medium (DMEM) (Invitrogen) supplemented with 10% FBS, and the visceral endoderm layer was detached from the embryos by gentle aspiration through a mouth pipette several times. The epiblast was separated from the extraembryonic ectoderm using fine tungsten needles. The visceral endoderm layer or the extraembryonic ectoderm was used for genotyping. Thereafter, isolated epiblasts were subjected to RNA-seq or PBAT whole-genome bisulfite sequencing analysis. DNA and total RNA were simultaneously isolated from snap-frozen embryos in Buffer RLT Plus with the Qiagen AllPrep DNA/RNA Mini Kit according to the manufacturer's instructions. Isolated DNA and RNA from E6.5 epiblasts were subjected to PBAT whole-genome bisulfite sequencing and RNA-seq analysis, respectively. For real-time PCR analysis, total RNA was treated with DNase and reverse-transcribed into first-strand cDNA with the One Step SYBR PrimeScript RT–PCR Kit (Takara). Real-time PCR was performed on a Bio-Rad CFX96 using SYBR Premix Ex Taq (Takara). The PCR efficiency and specificity of each primer pair were examined by a standard curve of serially diluted cDNA and by melting-curve analysis, respectively. Fold change was calculated by the 2−ΔΔCt method after normalization to the transcript level of the housekeeping gene Gapdh. Gene-specific primers used for real-time PCR are listed in Supplementary Table 4. For embryo preparation, mice were maintained on a 12/12-h light/dark cycle. After mating, the morning on which a vaginal plug appeared was designated embryonic day 0.5 (E0.5). At the appropriate gestation point, embryos were dissected from the uteri and isolated from the decidua in 5% FBS in RNase-free PBS.
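The 2−ΔΔCt calculation above can be made concrete. A minimal sketch, with illustrative Ct values; it assumes near-100% amplification efficiency for both primer pairs, which is what the standard-curve check above verifies:

```python
def fold_change(ct_target_sample, ct_ref_sample,
                ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ddCt method.

    dCt = Ct(target) - Ct(housekeeping gene, e.g. Gapdh) within each sample;
    ddCt = dCt(sample) - dCt(control); fold change = 2 ** -ddCt.
    Assumes ~100% PCR efficiency for both primer pairs.
    """
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_control = ct_target_control - ct_ref_control
    return 2.0 ** -(d_ct_sample - d_ct_control)
```

For example, a target gene at Ct 24 (Gapdh 20) in the sample versus Ct 25 (Gapdh 20) in the control gives ΔΔCt = −1, that is, a twofold upregulation.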
The embryos were then fixed in 4% paraformaldehyde at 4 °C, dehydrated into methanol through a graded methanol/PBT series (25%, 50%, 75%, 100% methanol) (PBT: 1× PBS plus 0.1% Tween) and stored in 100% methanol at −20 °C. In situ hybridization on whole embryos was performed according to standard protocols as described previously35. Standard probes for Bmp4, Cer1, Dkk1, Eomes, Fgf8, Gbx2, Gsc, Hesx1, Foxa2, Lefty1, Lefty2, Lim1, Mixl1, Meox1, Nodal, Oct4, Otx2, Shh, Sox17, T, Tbx6 and Wnt3 were used. Tet1, Tet2 and Tet3 in situ hybridization probes were prepared using the primers listed in Supplementary Table 4. Briefly, RNA probes were generated by in vitro transcription from a linearized template using a DIG RNA Labelling Kit (SP6/T7). The hybridization solution contained SSC at pH 4.5. The embryos were incubated in this solution with RNA probes overnight at 66 °C. The anti-DIG-AP antibody (Roche, 11093274910) was used at a dilution of 1:2,500 in blocking solution (Roche, 1096176) overnight at 4 °C. Post-antibody washes consisted of four one-hour washes at room temperature and one overnight wash at 4 °C. The embryos were then stained in 7.5 μl ml−1 NBT/BCIP (Roche, 1681451) at room temperature. The number of embryos analysed by in situ hybridization is listed in Supplementary Table 3. Embryos were photographed on an Olympus SZX16 Zoom Stereo microscope. For sectioning of whole-mount stained embryos, specimens were post-fixed in 4% paraformaldehyde before being embedded in JUNG tissue freezing medium (Leica). Sections were cut at 10 μm thickness and photographed using an Olympus BX51 microscope. We mainly followed the published protocols36, 37, 38. For production of Cas9 mRNA, the T7 promoter was added to the Cas9 coding region by PCR amplification from the pX330 plasmid. The T7-Cas9 PCR product was purified and used as the template for in vitro transcription with the mMESSAGE mMACHINE T7 ULTRA kit (Life Technologies).
For production of sgRNA, the template DNA with a T7 promoter was amplified by PCR from pX330 and the product was purified for in vitro transcription using the MEGAshortscript T7 kit (Life Technologies). After transcription, the Cas9 mRNA was purified by the lithium chloride (LiCl) precipitation method and sgRNAs were purified with the MEGAclear kit (Life Technologies) according to the manufacturer's instructions. sgRNA target sites and oligonucleotides are available in Supplementary Table 5. For microinjection of one-cell embryos, Tet1/2/3 germline-specific conditional knockout female mice and ICR mice were used as embryo donors and foster mothers, respectively. The embryo-donor female mice were superovulated and mated with the appropriate male mice. One-cell-stage embryos were collected from oviducts and injected in the cytoplasm with Cas9 mRNA (100 ng μl−1) and sgRNA (50 ng μl−1) in M2 medium (Sigma). For injection of donor oligonucleotides, the oligonucleotide (final concentration 100 ng μl−1) was mixed with Cas9 mRNA (100 ng μl−1) and sgRNA (50 ng μl−1) and injected into the cytoplasm of an embryo. The injected embryos were cultured in KSOM at 37 °C under 5% CO2 until the 2-cell stage and transferred into the oviducts of pseudopregnant ICR females at 0.5 dpc. Mice were genotyped using a 2-mm piece of the tail tip. The tail tips were incubated overnight at 55 °C in 400 μl of lysis buffer (50 mM NaCl, 10 mM Tris-HCl (pH 8.0), 5 mM EDTA, 0.1% SDS) containing 150 μg of proteinase K. The next day, DNA was precipitated with an equal volume of isopropanol and dissolved in 200 μl TE buffer. For a single embryo or a few cells, DNA was extracted with the KAPA Express Extract Kit according to the manufacturer's instructions. PCR reactions were carried out using 2× Taq PCR Master (LifeFeng, PT102-01). Embryos were genotyped retrospectively after whole-mount in situ hybridization and removal of the ectoplacental cone. PCR primers are listed in Supplementary Table 4.
For bisulfite sequencing, genomic DNA was extracted from E6.5 epiblasts or from E7.0–7.5 embryonic areas dissected manually from embryos after in situ hybridization. The genomic DNA was treated with the EZ DNA Methylation-Direct Kit (Zymo Research) according to the manufacturer's instructions. Bisulfite-treated DNA was subjected to PCR amplification. The bisulfite primers are listed in Supplementary Table 4. PCR products were purified with a Gel Extraction Kit (Qiagen) and cloned into the pMD19-T vector (Takara). Individual clones were sequenced by standard Sanger sequencing. Data were analysed with the online tool BISMA (http://services.ibc.uni-stuttgart.de/BDPC/BISMA/)39. The number of embryos analysed by bisulfite sequencing is listed in Supplementary Table 3. TAB-seq was performed according to standard protocols as described previously20. Briefly, genomic DNA was extracted from ESCs or embryos with the AllPrep Micro Kit (Qiagen), then glucosylated with T4 phage β-glucosyltransferase (NEB), oxidized with TET2 and treated with bisulfite, sequentially. Affinity purification of the DNA after each enzymatic reaction was carried out using DNA Clean & Concentrator (Zymo Research). For MII oocytes, total RNA was extracted from pooled oocytes with the RNeasy Micro Kit (Qiagen, Cat. No. 74004), and residual genomic DNA was digested with the RNase-Free DNase Set (Qiagen, Cat. No. 79254). Libraries were then constructed using the NEBNext Ultra RNA Library Prep Kit for Illumina (NEB, Cat. No. E7530L) following the manual. The cDNA libraries were sequenced on an Illumina HiSeq X Ten instrument with 150-bp paired-end reads. For E3.5 blastocysts and E6.5 epiblasts, about 1 ng of extracted RNA was first amplified for 18 cycles according to a previously published single-cell RNA-seq protocol40, 41. Briefly, 1 ng of extracted RNA was reverse transcribed into cDNA, and the free primers were then removed with ExoSAP-IT.
TdT was used to add poly(A) to the 3′ end of the cDNA, the second strand was synthesized, and the double-stranded DNA product was amplified for 18 cycles to obtain enough material for DNA library construction. The amplified DNA product was purified with DNA Clean & Concentrator-5 (Zymo Research) and further purified with the Zymoclean Gel DNA Recovery Kit (Zymo Research) to select DNA products of 0.5–8 kb. The gel-purified products were sheared with a Covaris S2 to generate fragments of about 250 bp in length and purified with DNA Clean & Concentrator-5 (Zymo Research). The NEBNext Ultra DNA Library Prep Kit for Illumina (NEB, E7370L) was then used to construct the cDNA library, with reagent amounts reduced to half of the standard reactions. The cDNA libraries were sequenced on an Illumina HiSeq 2500 instrument with 100-bp paired-end reads. The number of embryos analysed by RNA-seq is listed in Supplementary Table 3. First, a customized Perl script was applied to trim the raw paired-end RNA-seq FASTQ data, removing low-quality bases and adaptor sequences. The clean RNA-seq FASTQ data were then mapped to the mouse reference genome mm9 using TopHat42 (v2.0.12) with default settings. The transcription levels of annotated genes (FPKM, fragments per kilobase of transcript per million mapped reads) were quantified and normalized using Cuffquant (version 2.2.1) and Cuffnorm43 (version 2.2.1) with default parameters. For differential gene expression analysis, we used HTSeq with default settings to produce raw counts. From the raw counts, differential expression analysis was performed using the DESeq2 package44. Only genes with an adjusted P value of less than 0.05 and at least a twofold change were considered differentially expressed. We mainly followed the published protocol45. About 5 ng of genomic DNA was used to construct the PBAT library using a protocol slightly modified from those previously published45, 46, and almost identical to another previously published protocol47.
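The differential-expression call above was made in R with DESeq2; downstream of it, the stated thresholds (adjusted P < 0.05 and at least a twofold change) amount to a simple filter over the results table. A sketch over an illustrative list of records; the field names mirror DESeq2's `padj` and `log2FoldChange` result columns, and the gene names in the example are illustrative:

```python
import math


def differentially_expressed(results, padj_cut=0.05, min_fold=2.0):
    """Filter DESeq2-style results: keep genes with an adjusted P value
    below padj_cut and an absolute fold change of at least min_fold.
    Genes with padj == None (filtered out by DESeq2) are skipped."""
    min_l2fc = math.log2(min_fold)  # twofold change == |log2FC| >= 1
    return [r["gene"] for r in results
            if r["padj"] is not None
            and r["padj"] < padj_cut
            and abs(r["log2FoldChange"]) >= min_l2fc]
```

Note that the fold-change cutoff is applied to the absolute log2 fold change, so twofold down-regulated genes (log2FC ≤ −1) are kept as well.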
Briefly, the isolated genomic DNA, together with 1% unmethylated lambda DNA (Thermo Scientific), was converted and purified with the MethylCode Bisulfite Conversion Kit (Invitrogen) following the manufacturer’s instructions. Random nonamer primers carrying a 5′ biotin tag and a truncated Illumina P5 adaptor (5′-biotin-CTACACGACGCTCTTCCGATCTNNNNNNNNN-3′) and 50 units of Klenow polymerase (3′ to 5′ exo-, NEB) were then used to linearly amplify the bisulfite-converted templates. Excess primers remaining from this reaction were removed with 40 U of Exonuclease I (NEB) before the amplified products were purified with 0.8× Agencourt AMPure XP beads (Beckman Coulter). The newly synthesized biotinylated DNA strands were captured with Dynabeads M-280 Streptavidin (Invitrogen), and the original bisulfite-treated DNA templates were washed away with 0.1 N NaOH. Second strands were synthesized using 50 U of Klenow polymerase with random nonamer primers containing a truncated Illumina P7 adaptor (5′-AGACGTGTGCTCTTCCGATCTNNNNNNNNN-3′). The beads carrying the double-stranded DNA products were collected and washed several times, and the libraries were generated with 5–6 cycles of PCR amplification using 1 unit of KAPA HiFi HS DNA Polymerase (KAPA Biosystems), together with 0.4 μM Illumina Forward PE1.0 primer (5′-AATGATACGGCGACCACCGAGATCTACACTCTTTCCCTACACGACGCTCTTCCGATCT-3′) and 0.4 μM pre-indexed Illumina Reverse primer (5′- CAAGCAGAAGACGGCATACGAGAT GTGACTGGAGTTCAGACGTGTGCTCTTCCGATCT-3′; the underlined hexamer indicates the index sequence). Amplified libraries were purified twice with 0.8× Agencourt AMPure XP beads, assessed with the Fragment Analyzer (Advanced Analytical) and quantified with a standard-curve-based qPCR assay. The final quality-checked libraries were pooled and sequenced on the Illumina HiSeq 4000 sequencer with 150-bp paired-end reads. Raw paired-end FASTQ reads were trimmed with Trim Galore (v0.3.3) to remove low-quality bases and adaptor sequences.
The clean reads were then aligned in two steps using Bismark48 (v0.7.6). First, the reads were mapped to the mouse reference genome (mm9) in paired-end, non-directional mode. The reads left unmapped after this first step were then re-aligned to the same reference in single-end mode. Duplicate reads were removed with SAMtools49 (v0.1.19-44428cd) before subsequent analysis. The bisulfite conversion rate was estimated from the lambda genome, which was included in the reference as an extra chromosome; only samples with a conversion rate of at least 99% were kept for DNA methylation analysis. DNA methylation levels in Extended Data Fig. 8a were calculated from CpG sites with at least 5× sequencing coverage. The single-C files of the replicates were then merged, and only CpG sites with at least 5× coverage were kept for further analysis. For the methylated-tile analysis, the mouse genome (mm9) was first divided into 100-bp tiles, and only tiles containing at least 3 CpGs were kept for further analysis. Tiles with at least a 20% absolute methylation level difference between Tet-TKO and wild-type samples were defined as differentially methylated, either hypermethylated or hypomethylated in Tet-TKO relative to wild type. We annotated the hypermethylated tiles by gene element, and divided the number of tiles in each gene element by the total number of hypermethylated tiles to obtain the hypermethylated-tile ratio for each element; the hypomethylated-tile and total 100-bp tile ratios were calculated in the same way. Dividing the hypermethylated and hypomethylated tile ratios by the total tile ratio gave the relative enrichment of hypermethylated and hypomethylated tiles in each gene element. DMRs were analysed with MethPipe (v3.3.1)50, which is based on a hidden Markov model.
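The 100-bp tile comparison above can be sketched in a few lines: aggregate covered CpGs into fixed tiles, keep tiles with at least 3 CpGs, and call tiles with at least a 20% absolute methylation difference. This is a minimal sketch under stated assumptions; the per-CpG input format, function names and example coordinates are illustrative, not the paper's actual code.

```python
TILE_SIZE = 100  # tile width in bp, as in the described analysis

def tile_methylation(cpgs):
    """Aggregate per-CpG methylation into 100-bp tiles.

    `cpgs` is a hypothetical list of (chrom, pos, methylation_level) tuples,
    assumed to be pre-filtered to sites with >= 5x coverage.
    Returns {(chrom, tile_start): (mean_level, n_cpgs)}.
    """
    tiles = {}
    for chrom, pos, level in cpgs:
        key = (chrom, (pos // TILE_SIZE) * TILE_SIZE)
        tiles.setdefault(key, []).append(level)
    return {k: (sum(v) / len(v), len(v)) for k, v in tiles.items()}

def differential_tiles(tko, wt, min_cpgs=3, min_diff=0.20):
    """Classify shared tiles as 'hyper' or 'hypo' in Tet-TKO vs wild type."""
    calls = {}
    for key in tko.keys() & wt.keys():
        (m_tko, n_tko), (m_wt, n_wt) = tko[key], wt[key]
        if n_tko < min_cpgs or n_wt < min_cpgs:
            continue  # tile lacks the required 3 covered CpGs
        diff = m_tko - m_wt
        if diff >= min_diff:
            calls[key] = "hyper"
        elif diff <= -min_diff:
            calls[key] = "hypo"
    return calls

# Illustrative example: one tile gains methylation in Tet-TKO.
tko = tile_methylation([("chr1", 105, 0.9), ("chr1", 130, 0.8), ("chr1", 180, 1.0)])
wt = tile_methylation([("chr1", 105, 0.4), ("chr1", 130, 0.5), ("chr1", 180, 0.6)])
print(differential_tiles(tko, wt))
```

The subsequent enrichment step would then divide each gene element's share of hyper- or hypomethylated tiles by that element's share of all 100-bp tiles.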
The Bismark-mapped reads (.bam format) were converted to the .mr format using MethPipe. Hypomethylated and hypermethylated DMRs were identified following the MethPipe manual, and DMRs were then filtered by length (at least 400 bp), CpG number (at least 4 CpGs per DMR) and the methylation level difference between wild type and Tet-TKO (at least 20% absolute difference). Annotations of exons, introns, DNase I-hypersensitive sites (DHSs), CpG islands (CGIs), 5′ UTRs and 3′ UTRs were downloaded from the UCSC Genome Browser (mm9), and repetitive elements were annotated with RepeatMasker (mm9). Regions within ±0.5 kb of the enhancer sites of enhancer–promoter units (EPUs)51 from all tissues were combined with enhancer regions annotated in the VISTA Enhancer Browser to form the enhancer annotation. Promoters were defined as the regions from −1.5 to +1.5 kb around the transcription start site (TSS).
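The three DMR filters above (length ≥ 400 bp, ≥ 4 CpGs, ≥ 20% absolute methylation difference) compose a simple post-processing pass over MethPipe's candidate DMRs. The sketch below is a hypothetical illustration: the record fields and example values are assumptions, not MethPipe's actual output format.

```python
def filter_dmrs(dmrs, min_len=400, min_cpgs=4, min_diff=0.20):
    """Keep candidate DMRs passing the length, CpG-count and difference cutoffs.

    `dmrs` is a hypothetical list of dicts with keys:
    start, end (bp coordinates), n_cpgs, meth_wt, meth_tko (levels in [0, 1]).
    """
    kept = []
    for d in dmrs:
        length = d["end"] - d["start"]
        diff = abs(d["meth_wt"] - d["meth_tko"])
        if length >= min_len and d["n_cpgs"] >= min_cpgs and diff >= min_diff:
            kept.append(d)
    return kept

# Illustrative candidates: only the first passes all three filters.
candidates = [
    {"start": 1000, "end": 1600, "n_cpgs": 8, "meth_wt": 0.2, "meth_tko": 0.7},    # passes
    {"start": 5000, "end": 5300, "n_cpgs": 6, "meth_wt": 0.1, "meth_tko": 0.6},    # < 400 bp
    {"start": 9000, "end": 9800, "n_cpgs": 3, "meth_wt": 0.2, "meth_tko": 0.9},    # < 4 CpGs
    {"start": 12000, "end": 12500, "n_cpgs": 5, "meth_wt": 0.5, "meth_tko": 0.6},  # diff < 20%
]
print(len(filter_dmrs(candidates)))
```

Each surviving region would then be intersected with the gene-element, repeat, enhancer and promoter annotations described above.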


News Article | August 22, 2016
Site: www.materialstoday.com

The recipient of the 2017 Acta Materialia Silver Medal is Jingyang Wang, distinguished professor and division head of the High-Performance Ceramic Division at the Shenyang National Laboratory for Materials Science, Institute of Metal Research, Chinese Academy of Sciences. He is also assistant director of the Shenyang National Laboratory for Materials Science. Jingyang Wang received a B.A. degree in Physics from Peking University in 1992, and M.A. (1995) and Ph.D. (1998) degrees in Materials Physics from the Institute of Metal Research, Chinese Academy of Sciences. He joined the faculty of the Institute of Metal Research, where he became assistant professor in 1998, associate professor in 2002, and full professor in 2006. He was a visiting scientist at the International Centre for Theoretical Physics (Italy) in 2001, the University of Trento (Italy) in 2001, and the International Center for Young Scientists (ICYS) at the National Institute for Materials Science (Japan) in 2007. Professor Wang has devoted more than 15 years of research to the materials science of advanced engineering ceramics. He has published more than 180 peer-reviewed SCI papers (H-index 36), including 30 in Acta Materialia and Scripta Materialia, and holds 17 patents in the field of ceramics. In addition, he has presented about 50 keynote/invited talks and served 25 times as an advisory board member or symposium organizer at international conferences. He is internationally recognized for his scientific contributions and leadership in high-throughput materials design and modeling, novel methods for processing bulk, low-dimensional and porous ceramic materials, and multi-scale structure-property relationships of high-performance structural ceramics. His contributions have been recognized through service on many scientific advisory boards and committees of the American Ceramic Society (ACerS) and ASM International,
and he serves on the International Advisory Board of the UK CAFFE consortium (University of Cambridge, Imperial College London and University of Manchester) on ceramics for nuclear applications. He has also served as volume editor of Ceramic Engineering and Science Proceedings, as book editor of Developments in Strategic Materials and Computational Design, both published by John Wiley & Sons, Inc., and as executive editor of the Journal of Materials Science and Technology, published by Elsevier. Professor Wang’s scientific career has also been recognized with many awards and honors, including the ASM-IIM Visiting Lecturer Award in 2016, Distinguished Professor of the CAS Distinguished Research Fellow Program from the Chinese Academy of Sciences (CAS) in 2016, the National Leading Talent of Young and Middle-aged Scientist Award from the Ministry of Science and Technology of China in 2015, the Late Shri Sardar Pratap Singh Memorial Award from the Indian Ceramic Society in 2015, the JACerS Author Loyalty Recognition Award in 2014 and the Global Star Award in 2012, both from ACerS, the Second Prize in 2012 and First Prize in 2011 of the Science and Technology Progress Award of China, and the First Prize of the Natural Science Award from Liaoning Province in 2005. The Acta Materialia Silver Medal honors and recognizes scientific contributions and leadership from academic, industry and public sector leaders in materials research in the midst of their careers. The Silver Medal was established in 2016, and nominees are solicited each year from the Cooperating Societies and Sponsoring Societies of Acta Materialia, Inc. Professor Wang will receive the Silver Medal at the TMS Annual Meeting in San Diego in March 2017.


Scientific understanding of the role of humans in influencing and altering the global climate has been evolving for over a century. That understanding is now extremely advanced, combining hundreds of years of observations of many different climatic variables, millions of years of paleoclimatic evidence of past natural climatic variations, extended application of fundamental physical, chemical, and biological processes, and the most sophisticated computer modeling ever conducted. There is no longer any reasonable doubt that humans are altering the climate, that those changes will grow in scope and severity in the future, and that the economic, ecological, and human health consequences will be severe. While remaining scientific uncertainties are still being studied and analyzed, the state of the science has for several decades been sufficient to support implementing local, national, and global policies to address growing climate risks. This is the conclusion of scientific studies, syntheses, and reports to policymakers extending back decades. Because of the strength of the science, and the depth of the consensus about climate change, the scientific community has worked hard to clearly and consistently present the state of understanding to the public and policymakers to help them make informed decisions. The scientific community does this in various ways. Individual scientists speak out, presenting scientific results to journalists and the public. Scientists and scientific organizations prepare, debate, and publish scientific statements and declarations based on their expertise and concerns. And national scientific organizations, especially the formal “Academies of Sciences,” prepare regular reports on climate issues that synthesize all relevant climate science and knowledge. The number and scope of these statements are truly impressive.
Not a single major scientific organization or national academy of science on Earth denies that the climate is changing, that humans are responsible, and that some form of action should be taken to address the risks to people and the planet. This consensus is not to be taken lightly. Indeed, it is an extraordinarily powerful result given the contentious nature of science and the acclaim that accrues to scientists who find compelling evidence that overthrows an existing paradigm (as Galileo, Darwin, Einstein, Wegener, and others did in their fields). In a peculiar twist, some have tried to argue that acceptance of the strength of the evidence and the massive consensus in the geoscience community about human-caused climate change is simply “argument from consensus” or “argument from authority” – a classic potential “logical fallacy.” Indeed, the mere fact that nearly 100 percent of climate and geoscience professionals believe humans are changing the climate does not guarantee that the belief is correct. But arguing that something is false simply because there is a strong consensus for it is an even worse logical fallacy, especially when the consensus is based on deep, extensive, and constantly tested scientific evidence. In fact, this false argument has a name: the Galileo Gambit. It is used by those who deny well-established scientific principles such as the theory of climate change as follows: because Galileo was mocked and criticized for his views by a majority, but later shown to be right, current minority views that are mocked and criticized must also be right. The obvious flaw in the Galileo Gambit is that being criticized for one’s views does not correlate with being right – especially when the criticism is based on scientific evidence. Galileo was right because the scientific evidence supported him, not because he was mocked and criticized.
The late professor Carl Sagan addressed this use of the Galileo Gambit in a humorous way when he noted that although people laughed at Columbus and the Wright brothers, they also laughed at Bozo the Clown. These statements and declarations about climate change by the world’s leading scientific organizations represent the most compelling summary of the state of knowledge and concern about the global geophysical changes now underway, and they provide the foundation and rationale for actions now being debated and implemented around the world. The world ignores them at its peril. Here, based on information available as of early January 2017, is a synthesis, listing, and links for these public positions and declarations. These statements are summarized below for more than 140 of the planet’s national academies and top scientific health, geosciences, biological, chemical, physical, agricultural, and engineering organizations. Each statement is archived online as noted in the links. Only abbreviated sections of statements are presented, so readers should consult the full statements for context and content. Also, scientific organizations and committees periodically update, revise, edit, and re-issue position statements. Please send me any corrections, updates, additions, and changes. The AAN is a signatory to the April 2016 statement: http://www.lung.org/our-initiatives/healthy-air/outdoor/climate-change/declaration-on-climate-change.html?referrer=https://www.google.com/ Rising global temperatures are causing major physical, chemical, and ecological changes in the planet. There is wide consensus among scientific organizations and climatologists that these broad effects, known as “climate change,” are the result of contemporary human activity.
Climate change poses threats to human health, safety, and security, and children are uniquely vulnerable to these threats… The social foundations of children’s mental and physical health are threatened by the specter of far-reaching effects of unchecked climate change, including community and global instability, mass migrations, and increased conflict. Given this knowledge, failure to take prompt, substantive action would be an act of injustice to all children… Pediatricians have a uniquely valuable role to play in the societal response to this global challenge… [The AAP is also a signatory to the April 2016 statement: http://www.lung.org/our-initiatives/healthy-air/outdoor/climate-change/declaration-on-climate-change.html?referrer=https://www.google.com/] The scientific evidence is clear: global climate change caused by human activities is occurring now, and it is a growing threat to society. Accumulating data from across the globe reveal a wide array of effects: rapidly melting glaciers, destabilization of major ice sheets, increases in extreme weather, rising sea level, shifts in species ranges, and more. The pace of change and the evidence of harm have increased markedly over the last five years. The time to control greenhouse gas emissions is now. [The AAAS has also signed onto more recent letters on climate from an array of scientific organizations, including the June 28, 2016 letter to the U.S. Congress: https://www.eurekalert.org/images/2016climateletter6-28-16.pdf] There is widespread scientific agreement that the world’s climate is changing and that the weight of evidence demonstrates that anthropogenic factors have and will continue to contribute significantly to global warming and climate change. 
It is anticipated that continuing changes to the climate will have serious negative impacts on public, animal and ecosystem health due to extreme weather events, changing disease transmission dynamics, emerging and re-emerging diseases, and alterations to habitat and ecological systems that are essential to wildlife conservation. Furthermore, there is increasing recognition of the inter-relationships of human, domestic animal, wildlife, and ecosystem health, as illustrated by the fact that the majority of recent emerging diseases have a wildlife origin. Consequently, there is a critical need to improve the capacity to identify, prevent, and respond to climate-related threats. The following statements present the American Association of Wildlife Veterinarians (AAWV) position on climate change, wildlife diseases, and wildlife health…. The American Geophysical Union (AGU) notes that human impacts on the climate system include increasing concentrations of greenhouse gases in the atmosphere, which is significantly contributing to the warming of the global climate. The climate system is complex, however, making it difficult to predict detailed outcomes of human-induced change: there is as yet no definitive theory for translating greenhouse gas emissions into forecasts of regional weather, hydrology, or response of the biosphere. As the AGU points out, our ability to predict global climate change, and to forecast its regional impacts, depends directly on improved models and observations. The American Astronomical Society (AAS) joins the AGU in calling for peer-reviewed climate research to inform climate-related policy decisions, and, as well, to provide a basis for mitigating the harmful effects of global change and to help communities adapt and become resilient to extreme climatic events.
In endorsing the “Human Impacts on Climate” statement, the AAS recognizes the collective expertise of the AGU in scientific subfields central to assessing and understanding global change, and acknowledges the strength of agreement among our AGU colleagues that the global climate is changing and human activities are contributing to that change. Careful and comprehensive scientific assessments have clearly demonstrated that the Earth’s climate system is changing in response to growing atmospheric burdens of greenhouse gases (GHGs) and absorbing aerosol particles. (IPCC, 2007) Climate change is occurring, is caused largely by human activities, and poses significant risks for—and in many cases is already affecting—a broad range of human and natural systems. (NRC, 2010a) The potential threats are serious and actions are required to mitigate climate change risks and to adapt to deleterious climate change impacts that probably cannot be avoided. (NRC, 2010b, c) This statement reviews key probable climate change impacts and recommends actions required to mitigate or adapt to current and anticipated consequences. …comprehensive scientific assessments of our current and potential future climates clearly indicate that climate change is real, largely attributable to emissions from human activities, and potentially a very serious problem. This sober conclusion has been recently reconfirmed by an in-depth set of studies focused on “America’s Climate Choices” (ACC) conducted by the U.S. National Academies (NRC, 2010a, b, c, d). 
The ACC studies, performed by independent and highly respected teams of scientists, engineers, and other skilled professionals, reached the same general conclusions that were published in the latest comprehensive assessment conducted by the Intergovernmental Panel on Climate Change (IPCC, 2007)… The range of observed and potential climate change impacts identified by the ACC assessment includes a warmer climate with more extreme weather events, significant sea level rise, more constrained fresh water sources, deterioration or loss of key land and marine ecosystems, and reduced food resources—many of which may pose serious public health threats. (NRC, 2010a) The effects of an unmitigated rate of climate change on key Earth system components, ecological systems, and human society over the next 50 years are likely to be severe and possibly irreversible on century time scales… [The ACS has also signed onto more recent letters on climate from an array of scientific organizations, including the June 28, 2016 letter to the U.S. Congress: https://www.eurekalert.org/images/2016climateletter6-28-16.pdf] THAT: The American College of Preventive Medicine (ACPM) accept the position that global warming and climate change are occurring, that there is potential for abrupt climate change, that human practices that increase greenhouse gases exacerbate the problem, and that the public health consequences may be severe. THAT: The ACPM staff and appropriate committees continue to explore opportunities to address this matter, including sessions at Preventive Medicine conferences and the development of a policy position statement as well as other modes of communicating this issue to the ACPM membership.
[The ACPM is also a signatory to the April 2016 statement: http://www.lung.org/our-initiatives/healthy-air/outdoor/climate-change/declaration-on-climate-change.html?referrer=https://www.google.com/] Humanity is the major influence on the global climate change observed over the past 50 years. Rapid societal responses can significantly lessen negative outcomes. Human activities are changing Earth’s climate. At the global level, atmospheric concentrations of carbon dioxide and other heat‐trapping greenhouse gases have increased sharply since the Industrial Revolution. Fossil fuel burning dominates this increase. Human‐caused increases in greenhouse gases are responsible for most of the observed global average surface warming of roughly 0.8°C (1.5°F) over the past 140 years. Because natural processes cannot quickly remove some of these gases (notably carbon dioxide) from the atmosphere, our past, present, and future emissions will influence the climate system for millennia. Extensive, independent observations confirm the reality of global warming. These observations show large‐scale increases in air and sea temperatures, sea level, and atmospheric water vapor; they document decreases in the extent of mountain glaciers, snow cover, permafrost, and Arctic sea ice. These changes are broadly consistent with long understood physics and predictions of how the climate system is expected to respond to human‐caused increases in greenhouse gases. The changes are inconsistent with explanations of climate change that rely on known natural influences… [The AGU has also signed onto more recent letters on climate from an array of scientific organizations, including the June 28, 2016 letter to the U.S. Congress: https://www.eurekalert.org/images/2016climateletter6-28-16.pdf] [The AIBS is a signatory to the June 28, 2016 letter to the U.S. 
Congress: https://www.eurekalert.org/images/2016climateletter6-28-16.pdf] The Governing Board of the American Institute of Physics has endorsed a position statement on climate change adopted by the American Geophysical Union (AGU) Council in December 2003. AGU is one of ten Member Societies of the American Institute of Physics. The statement follows: Human activities are increasingly altering the Earth’s climate. These effects add to natural influences that have been present over Earth’s history. Scientific evidence strongly indicates that natural influences cannot explain the rapid increase in global near-surface temperatures observed during the second half of the 20th century. Human impacts on the climate system include increasing concentrations of atmospheric greenhouse gases (e.g., carbon dioxide, chlorofluorocarbons and their substitutes, methane, nitrous oxide, etc.), air pollution, increasing concentrations of airborne particles, and land alteration. A particular concern is that atmospheric levels of carbon dioxide may be rising faster than at any time in Earth’s history, except possibly following rare events like impacts from large extraterrestrial objects… The ALA is a signatory to the April 2016 statement: http://www.lung.org/our-initiatives/healthy-air/outdoor/climate-change/declaration-on-climate-change.html?referrer=https://www.google.com/ If physicians want evidence of climate change, they may well find it in their own offices. Patients are presenting with illnesses that once happened only in warmer areas. Chronic conditions are becoming aggravated by more frequent and extended heat waves. Allergy and asthma seasons are getting longer. Spates of injuries are resulting from more intense ice storms and snowstorms. Scientific evidence shows that the world’s climate is changing and that the results have public health consequences. 
The American Medical Association is working to ensure that physicians and others in health care understand the rise in climate-related illnesses and injuries so they can prepare and respond to them. The Association also is promoting environmentally responsible practices that would reduce waste and energy consumption. Amicus Brief filed before the Supreme Court in support of the Clean Power Plan. Failure to uphold the Clean Power Plan would undermine [the] EPA’s ability to carry out its legal obligation to regulate carbon emissions that endanger human health and would negatively impact the health of current and future generations. Carbon emissions are a significant driver of the anthropogenic greenhouse gas emissions that cause climate change and consequently harm human health. Direct impacts from the changing climate include health-related illness, declining air quality and increased respiratory and cardiovascular illness. Changes in climate also facilitate the migration of mosquito-borne diseases, such as dengue fever, malaria and most recently the Zika Virus. “In surveys conducted by three separate U.S. medical professional societies,” the brief said, “a significant majority of surveyed physicians concurred that climate change is occurring … is having a direct impact on the health of their patients, and that physicians anticipate even greater climate-driven adverse human health impacts in the future.” [This statement is considered in force until August 2017 unless superseded by a new statement issued by the AMS Council before this date.] …Warming of the climate system now is unequivocal, according to many different kinds of evidence.  Observations show increases in globally averaged air and ocean temperatures, as well as widespread melting of snow and ice and rising globally averaged sea level. 
Surface temperature data for Earth as a whole, including readings over both land and ocean, show an increase of about 0.8°C (1.4°F) over the period 1901-2010 and about 0.5°C (0.9°F) over the period 1979–2010 (the era for which satellite-based temperature data are routinely available). Due to natural variability, not every year is warmer than the preceding year globally. Nevertheless, all of the 10 warmest years in the global temperature records up to 2011 have occurred since 1997, with 2005 and 2010 being the warmest two years in more than a century of global records. The warming trend is greatest in northern high latitudes and over land. In the U.S., most of the observed warming has occurred in the West and in Alaska; for the nation as a whole, there have been twice as many record daily high temperatures as record daily low temperatures in the first decade of the 21st century… There is unequivocal evidence that Earth’s lower atmosphere, ocean, and land surface are warming; sea level is rising; and snow cover, mountain glaciers, and Arctic sea ice are shrinking. The dominant cause of the warming since the 1950s is human activities. This scientific finding is based on a large and persuasive body of research. The observed warming will be irreversible for many years into the future, and even larger temperature increases will occur as greenhouse gases continue to accumulate in the atmosphere. Avoiding this future warming will require a large and rapid reduction in global greenhouse gas emissions. The ongoing warming will increase risks and stresses to human societies, economies, ecosystems, and wildlife through the 21st century and beyond, making it imperative that society respond to a changing climate. To inform decisions on adaptation and mitigation, it is critical that we improve our understanding of the global climate system and our ability to project future climate through continued and improved monitoring and research. 
This is especially true for smaller (seasonal and regional) scales and weather and climate extremes, and for important hydroclimatic variables such as precipitation and water availability… Technological, economic, and policy choices in the near future will determine the extent of future impacts of climate change. Science-based decisions are seldom made in a context of absolute certainty. National and international policy discussions should include consideration of the best ways to both adapt to and mitigate climate change. Mitigation will reduce the amount of future climate change and the risk of impacts that are potentially large and dangerous. At the same time, some continued climate change is inevitable, and policy responses should include adaptation to climate change. Prudence dictates extreme care in accounting for our relationship with the only planet known to be capable of sustaining human life. [The AIBS is also a signatory to the June 28, 2016 letter to the U.S. Congress: https://www.eurekalert.org/images/2016climateletter6-28-16.pdf] Earth’s changing climate is a critical issue and poses the risk of significant environmental, social and economic disruptions around the globe. While natural sources of climate variability are significant, multiple lines of evidence indicate that human influences have had an increasingly dominant effect on global climate warming observed since the mid-twentieth century. Although the magnitudes of future effects are uncertain, human influences on the climate are growing. The potential consequences of climate change are great and the actions taken over the next few decades will determine human influences on the climate for centuries. As summarized in the 2013 report of the Intergovernmental Panel on Climate Change (IPCC), there continues to be significant progress in climate science. 
In particular, the connection between rising concentrations of atmospheric greenhouse gases and the increased warming of the global climate system is more compelling than ever. Nevertheless, as recognized by Working Group 1 of the IPCC, scientific challenges remain in our abilities to observe, interpret, and project climate changes. To better inform societal choices, the APS urges sustained research in climate science. The APS reiterates its 2007 call to support actions that will reduce the emissions, and ultimately the concentration, of greenhouse gases as well as increase the resilience of society to a changing climate, and to support research on technologies that could reduce the climate impact of human activities. … The APA is a signatory to the April 2016 statement: http://www.lung.org/our-initiatives/healthy-air/outdoor/climate-change/declaration-on-climate-change.html?referrer=https://www.google.com/ [This policy builds upon and replaces existing policies 20078 (Addressing the Urgent Threat of Global Climate Change to Public Health and the Environment) and 9510 (Global Climate Change)] Public Health Opportunities to Address the Health Effects of Climate Change Climate change poses major threats to human health, human and animal populations, ecological stability, and human social, financial, and political stability and well-being. Observed health impacts of climate change include increased heat-related morbidity and mortality, expanded ranges and frequency of infectious disease outbreaks, malnutrition, trauma, violence and political conflict, mental health issues, and loss of community and social connections. Certain populations will experience disproportionate negative effects, including pregnant women, children, the elderly, marginalized groups such as racial and ethnic minorities, outdoor workers, those with chronic diseases, and those in economically disadvantaged communities. 
Climate change poses significant ethical challenges as well as challenges to global and health equity. The economic risks of inaction may be significant, yet many strategies to combat climate change offer near- and long-term co-benefits to health, producing cost savings that could offset implementation costs. At present, there are major political barriers to adopting strategies to mitigate and adapt to climate change. Recognizing the urgency of the issue and importance of the public health role, APHA, the Centers for Disease Control and Prevention, and others have developed resources and tools to help support public health engagement. APHA calls for individual, community, national, and global action to address the health risks posed by climate change. The public health community has critical roles to play, including advocating for action, especially among policymakers; engaging in health prevention and preparedness efforts; conducting surveillance and research on climate change and health; and educating public health professionals. [The APHA is also a signatory to the April 2016 statement: http://www.lung.org/our-initiatives/healthy-air/outdoor/climate-change/declaration-on-climate-change.html?referrer=https://www.google.com/] [The APHA is also a signatory to the June 28, 2016 letter to the U.S. Congress: https://www.eurekalert.org/images/2016climateletter6-28-16.pdf] Letter to EOS of the Council of the AQA The available scientific evidence clearly shows that the Earth on average is becoming warmer… Few credible scientists now doubt that humans have influenced the documented rise of global temperatures since the Industrial Revolution. The first government led U.S. Climate Change Science Program synthesis and assessment report supports the growing body of evidence that warming of the atmosphere, especially over the past 50 years, is directly impacted by human activity. 
In 2003, the ASM issued a policy report in which they recommend “reducing net anthropogenic CO2 emissions to the atmosphere” and “minimizing anthropogenic disturbances of” atmospheric gases: “Carbon dioxide concentrations were relatively stable for the past 10,000 years but then began to increase rapidly about 150 years ago… as a result of fossil fuel consumption and land use change. Of course, changes in atmospheric composition are but one component of global change, which also includes disturbances in the physical and chemical conditions of the oceans and land surface. Although global change has been a natural process throughout Earth’s history, humans are responsible for substantially accelerating present-day changes. These changes may adversely affect human health and the biosphere on which we depend. Outbreaks of a number of diseases, including Lyme disease, hantavirus infections, dengue fever, bubonic plague, and cholera, have been linked to climate change.” A comprehensive body of scientific evidence indicates beyond reasonable doubt that global climate change is now occurring and that its manifestations threaten the stability of societies as well as natural and managed ecosystems. Increases in ambient temperatures and changes in related processes are directly linked to rising anthropogenic greenhouse gas (GHG) concentrations in the atmosphere. The potential related impacts of climate change on the ability of agricultural systems, which include soil and water resources, to provide food, feed, fiber, and fuel, and maintenance of ecosystem services (e.g., water supply and habitat for crop landraces, wild relatives, and pollinators) as well as the integrity of the environment, are major concerns. Around the world and in the United States (US), agriculture—which is comprised of field, vegetable, and tree crops, as well as livestock production—constitutes a major land use which influences global ecosystems. 
Globally, crop production occupies approximately 1.8 Billion (B) hectares out of a total terrestrial land surface of about 13.5 B hectares. In addition, animal production utilizes grasslands, rangelands, and savannas, which altogether cover about a quarter of the Earth’s land. Even in 2010, agriculture remains the most basic and common human occupation on the planet and a major contributor to human well-being. Changes in climate are already affecting the sustainability of agricultural systems and disrupting production. [The May 2011 statement was also signed by the Crop Science Society of America and the Soil Science Society of America.] [The ASoA is also a signatory to the June 28, 2016 letter to the U.S. Congress: https://www.eurekalert.org/images/2016climateletter6-28-16.pdf] There is strong evidence that the climate is changing and will continue to change. Climate scientists project that there will be substantial increases in temperature with related increases in atmospheric water vapor and increases in extreme precipitation amounts and intensities in most geographic regions as a result of climate change. However, while there is clear evidence of a changing climate, understanding the significance of climate change at the temporal and spatial scales as it relates to engineering practice is more difficult. There is an increasing demand for engineers to incorporate future climate change into project design criteria; however, current practices and rules governing such practices do not adequately address concerns associated with climate change… Climate change poses a potentially serious impact on worldwide water resources, energy production and use, agriculture, forestry, coastal development and resources, flood control and public infrastructure… The ASIH is a signatory to the June 28, 2016 letter to the U.S. Congress: https://www.eurekalert.org/images/2016climateletter6-28-16.pdf The ASN is a signatory to the June 28, 2016 letter to the U.S. 
Congress: https://www.eurekalert.org/images/2016climateletter6-28-16.pdf [The ASPB is a signatory to the June 28, 2016 letter to the U.S. Congress: https://www.eurekalert.org/images/2016climateletter6-28-16.pdf] Adopted by the ASA Board of Directors The American Statistical Association (ASA) recently convened a workshop of leading atmospheric scientists and statisticians involved in climate change research. The goal of this workshop was to identify a consensus on the role of statistical science in current assessments of global warming and its impacts. Of particular interest to this workshop was the recently published Fourth Assessment Report of the United Nations’ Intergovernmental Panel on Climate Change (IPCC), endorsed by more than 100 governments and drawing on the expertise of a large portion of the climate science community. Through a series of meetings spanning several years, IPCC drew in leading experts and assessed the relevant literature in the geosciences and related disciplines as it relates to climate change. The Fourth Assessment Report finds that “Warming of the climate system is unequivocal, as is now evident from observations of increases in global average air and ocean temperatures, widespread melting of snow and ice, and rising mean sea level. … Most of the observed increase in globally averaged temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations. … Discernible human influences now extend to other aspects of climate, including ocean warming, continental-average temperatures, temperature extremes, and wind patterns. [The ASA is also a signatory to the June 28, 2016 letter to the U.S. Congress: https://www.eurekalert.org/images/2016climateletter6-28-16.pdf] After people, water is our most critical and strategic natural resource, yet the U.S. lacks a national strategy for water resources management. In addition, Americans are the world’s largest water consumers. 
Threats of an aging infrastructure, climate change and population growth are so significant that the nation can no longer afford to postpone action. It’s imperative that a focused effort be articulated and initiated to create and demonstrate strategies to sustain U.S. water resources. The country’s future growth and prosperity depend on it. The ATS is also a signatory to the April 2016 statement: http://www.lung.org/our-initiatives/healthy-air/outdoor/climate-change/declaration-on-climate-change.html?referrer=https://www.google.com/ The ASLO is a signatory to the June 28, 2016 letter to the U.S. Congress: https://www.eurekalert.org/images/2016climateletter6-28-16.pdf The ATBC is a signatory to the June 28, 2016 letter to the U.S. Congress: https://www.eurekalert.org/images/2016climateletter6-28-16.pdf The AERC is a signatory to the June 28, 2016 letter to the U.S. Congress: https://www.eurekalert.org/images/2016climateletter6-28-16.pdf The AAFA is a signatory to the April 2016 statement: http://www.lung.org/our-initiatives/healthy-air/outdoor/climate-change/declaration-on-climate-change.html?referrer=https://www.google.com/ There is broad scientific consensus that coral reefs are heavily affected by the activities of man and there are significant global influences that can make reefs more vulnerable such as global warming… It is highly likely that coral bleaching has been exacerbated by global warming. There is almost total consensus among experts that the earth’s climate is changing as a result of the build-up of greenhouse gases. The IPCC (involving over 3,000 of the world’s experts) has come out with clear conclusions as to the reality of this phenomenon. 
One does not have to look further than the collective academy of scientists worldwide to see the string of statements on this worrying change to the earth’s atmosphere… Given the observed damage caused by a temperature increase of ~1°C above pre-industrial levels, we urge all possible actions to keep future warming below the 1.5°C target set by the Paris Agreement. The following proposed initiatives will act to reduce the severity of climate-inflicted damage on reefs, helping to avoid total ecological collapse. The ACRS strongly supports the following proposed actions… The AIP supports a reduction of the greenhouse gas emissions that are leading to increased global temperatures, and encourages research that works towards this goal… Research in Australia and overseas shows that an increase in global temperature will adversely affect the Earth’s climate patterns. The melting of the polar ice caps, combined with thermal expansion, will lead to rises in sea levels that may impact adversely on our coastal cities. The impact of these changes on biodiversity will fundamentally change the ecology of Earth… Human health is ultimately dependent on the health of the planet and its ecosystem. The AMA recognises the latest findings regarding the science of climate change, the role of humans, past observations and future projections. The consequences of climate change have serious direct and indirect, observed and projected health impacts both globally and in Australia. There is inequity in the distribution of these health impacts both within and between countries, with some groups being particularly vulnerable. In recognition of these issues surrounding climate change and health, the AMA believes that: Global climate has changed substantially. Global climate change and global warming are real and observable… Human influence has been detected in the warming of the atmosphere and the ocean globally, and in Australia. 
It is now certain that the human activities that have increased the concentration of greenhouse gases in the atmosphere contribute significantly to observed warming. Further it is extremely likely that these human activities are responsible for most of the observed global warming since 1950. The warming associated with increases in greenhouse gases originating from human activity is called the enhanced greenhouse effect…. Our climate is very likely to continue to change as a result of human activity. Global temperature increases are already set to continue until at least the middle of this century even if emissions were reduced to zero. The magnitude of warming and related changes can be limited depending on the total amount of carbon dioxide and other greenhouse gases ultimately emitted as a result of human activities; future climate scenarios depend critically on future changes in emissions… BioQUEST is a signatory to the June 28, 2016 letter to the U.S. Congress: https://www.eurekalert.org/images/2016climateletter6-28-16.pdf The BSA is a signatory to the June 28, 2016 letter to the U.S. Congress: https://www.eurekalert.org/images/2016climateletter6-28-16.pdf We, the members of the Board of Trustees of CFCAS and Canadian climate science leaders from the public and academic sectors in Canada, concur with The Joint Science Academies statement that “climate change is real” and note that the 2004 Arctic Climate Impact Assessment concluded that Arctic temperatures have risen at almost twice the rate of the rest of the world over the past few decades. 
Furthermore, we endorse the assessment of climate science undertaken by the Intergovernmental Panel on Climate Change (IPCC) and its conclusion that “There is new and stronger evidence that most of the warming observed over the last 50 years is attributable to human activities.” There is now increasing unambiguous evidence of a changing climate in Canada and around the world… There is an increasing urgency to act on the threat of climate change. Significant steps are needed to stop the growth in atmospheric greenhouse gas concentrations by reducing emissions. Since mitigation measures will become effective only after many years, adaptive strategies as well are of great importance and need to begin now…. …Since the industrial revolution of the early 19th century, human activities have also markedly influenced the climate. This well-documented human-induced change is large and very rapid in comparison to past changes in the Earth’s climate… Even if the human-induced emission of greenhouse gases into the atmosphere were to cease today, past emissions have committed the world to long-term changes in climate. Carbon dioxide emitted from the combustion of fossil fuels will remain in the atmosphere for centuries to millennia, and the slow ocean response to atmospheric warming will cause the climate change to persist even longer. Further CO2 emissions will lead to greater human-induced change in proportion to total cumulative emissions. Meaningful interventions to mitigate climate change require a reduction in emissions. 
To avoid societally, economically, and ecologically disruptive changes to the Earth’s climate, we will have little choice but to leave much of the unextracted fossil fuel carbon in the ground… The urgent challenges for the global community, and Canadians in particular, are to learn how to adapt to the climate changes to which we are already committed and to develop effective and just responses to avoid further damaging climate change impacts for both present and future generations. The COL is a signatory to the June 28, 2016 letter to the U.S. Congress: https://www.eurekalert.org/images/2016climateletter6-28-16.pdf A comprehensive body of scientific evidence indicates beyond reasonable doubt that global climate change is now occurring and that its manifestations threaten the stability of societies as well as natural and managed ecosystems. Increases in ambient temperatures and changes in related processes are directly linked to rising anthropogenic greenhouse gas (GHG) concentrations in the atmosphere. The potential related impacts of climate change on the ability of agricultural systems, which include soil and water resources, to provide food, feed, fiber, and fuel, and maintenance of ecosystem services (e.g., water supply and habitat for crop landraces, wild relatives, and pollinators) as well as the integrity of the environment, are major concerns. Around the world and in the United States (US), agriculture—which is comprised of field, vegetable, and tree crops, as well as livestock production—constitutes a major land use which influences global ecosystems. Globally, crop production occupies approximately 1.8 Billion (B) hectares out of a total terrestrial land surface of about 13.5 B hectares. In addition, animal production utilizes grasslands, rangelands, and savannas, which altogether cover about a quarter of the Earth’s land. 
Even in 2010, agriculture remains the most basic and common human occupation on the planet and a major contributor to human well-being. Changes in climate are already affecting the sustainability of agricultural systems and disrupting production. [The May 2011 Statement was also signed by the American Society of Agronomy and the Soil Science Society of America.] [The CSSA is also a signatory to the June 28, 2016 letter to the U.S. Congress: https://www.eurekalert.org/images/2016climateletter6-28-16.pdf] Ecosystems are already responding to climate change. Continued warming—some of which is now unavoidable—may impair the ability of many such systems to provide critical resources and services like food, clean water, and carbon sequestration. Buffering against the impacts of climate change will require new strategies to both mitigate the extent of change and adapt to changes that are inevitable. The sooner such strategies are deployed, the more effective they will be in reducing irreversible damage. Ecosystems can be managed to limit and adapt to both the near- and long-term impacts of climate change. Strategies that focus on restoring and maintaining natural ecosystem function (reducing deforestation, for example) are the most prudent; strategies that drastically alter ecosystems may have significant and unpredictable impacts… The Earth is warming— average global temperatures have increased by 0.74°C (1.3°F) in the past 100 years. The scientific community agrees that catastrophic and possibly irreversible environmental change will occur if average global temperatures rise an additional 2°C (3.6°F). Warming to date has already had significant impacts on the Earth and its ecosystems, including increased droughts, rising sea levels, disappearing glaciers, and changes in the distribution and seasonal activities of many species… Most warming seen since the mid 1900s is very likely due to greenhouse gas emissions from human activities. 
Global emissions have risen rapidly since pre-industrial times, increasing 70% between 1970 and 2004 alone… Even if greenhouse gas emissions stop immediately, global temperatures will continue to rise at least for the next 100 years. Depending on the extent and effectiveness of climate change mitigation strategies, global temperatures could rise 1-6°C (2-10°F) by the end of the 21st century, according to the Intergovernmental Panel on Climate Change. Swift and significant emissions reductions will be vital in minimizing the impacts of warming… [The ESA is also a signatory to the June 28, 2016 letter to the U.S. Congress: https://www.eurekalert.org/images/2016climateletter6-28-16.pdf] Engineers Australia accepts the comprehensive scientific basis regarding climate change, the influence of anthropogenic global warming, and that climate change can have very serious community consequences. Engineers are uniquely placed to provide both mitigation and adaptation solutions for this serious global problem, as well as address future advances in climate change science. This Climate Change Policy Statement has been developed to enable organisational governance on the problem, and provide support for members in the discipline and practice of the engineering profession. Building upon a long history of Engineers Australia policy development, and as the largest technically informed professional body in Australia, Engineers Australia advocates that Engineers must act proactively to address climate change as an ecological, social and economic risk… The ESA is also a signatory to the June 28, 2016 letter to the U.S. Congress: https://www.eurekalert.org/images/2016climateletter6-28-16.pdf Human activity is most likely responsible for climate warming. Most of the climatic warming over the last 50 years is likely to have been caused by increased concentrations of greenhouse gases in the atmosphere. 
Documented long-term climate changes include changes in Arctic temperatures and ice, widespread changes in precipitation amounts, ocean salinity, wind patterns and extreme weather including droughts, heavy precipitation, heat waves and the intensity of tropical cyclones. The above development potentially has dramatic consequences for mankind’s future… The EFG recognizes the work of the IPCC and other organizations, and subscribes to the major findings that climate change is happening, is predominantly caused by anthropogenic emissions of CO2, and poses a significant threat to human civilization. Anthropogenic CO2 emissions come from fossil carbon sources, such as coal, oil, natural gas, limestone and carbonate rocks. Thriving and developing economies currently depend on these resources. Since geologists play a crucial role in their exploration and exploitation, we feel praised by the increasing welfare, but also implicated by the carbon curse. It is clear that major efforts are necessary to quickly and strongly reduce CO2 emissions. The EFG strongly advocates renewable and sustainable energy production, including geothermal energy, as well as the need for increasing energy efficiency. Impacts of ocean acidification may be just as dramatic as those of global warming (resulting from anthropogenic activities on top of natural variability) and the combination of both is likely to exacerbate consequences, resulting in potentially profound changes throughout marine ecosystems and in the services that they provide to humankind… Since the beginning of the industrial revolution the release of carbon dioxide (CO2) from our industrial and agricultural activities has resulted in atmospheric CO2 concentrations that have increased from approximately 280 to 385 parts per million (ppm). 
The atmospheric concentration of CO2 is now higher than experienced on Earth for at least the last 800,000 years (direct ice core evidence) and probably the last 25 million years, and is expected to continue to rise at an increasing rate, leading to significant temperature increases in the atmosphere and ocean in the coming decades… Ocean acidification is already occurring today and will continue to intensify, closely tracking atmospheric CO2 increase. Given the potential threat to marine ecosystems and its ensuing impact on human society and economy, especially as it acts in conjunction with anthropogenic global warming, there is an urgent need for immediate action. This rather new recognition that, in addition to the impact of CO2 as a greenhouse gas on global climate change, OA is a direct consequence of the absorption of anthropogenic CO2 emissions, will hopefully help to set in motion an even more stringent CO2 mitigation policy worldwide. The only solutions to avoid excessive OA are a long-term mitigation strategy to limit future release of CO2 to the atmosphere and/or enhance removal of excess CO2 from the atmosphere. The emission of anthropogenic greenhouse gases, among which carbon dioxide is the main contributor, has amplified the natural greenhouse effect and led to global warming. The main contribution stems from burning fossil fuels. A further increase will have decisive effects on life on earth. An energy cycle with the lowest possible CO2 emission is called for wherever possible to combat climate change. The forthcoming United Nations Climate Change Conference (Paris, December 2015) will be held with the objective of achieving a binding and global agreement on climate-related policy from all nations of the world. This conference, seeking to protect the climate, will be a great opportunity to find solutions in the human quest for sustainable energy as a global endeavour. 
The Energy Group of the European Physical Society (EPS) welcomes the energy policy of the European Union (EU) to promote renewable energies for electricity generation, together with energy efficiency measures. This policy needs to be implemented by taking into account the necessary investments and the impact on the economic position of the EU in the world. Since the direct impact of any EU energy policy on world CO2 emissions is rather limited, the best strategy is to take the lead in mitigating climate change and in developing an energy policy that offers an attractive and economically viable model with reduced CO2 emissions and lower energy dependence… The scientific evidence is now overwhelming that climate change is a serious global threat which requires an urgent global response, and that climate change is driven by human activity… Enough is now known to make climate change the challenge of the 21st century, and the research community is poised to address this challenge… There is now convincing evidence that since the industrial revolution, human activities, resulting in increasing concentrations of greenhouse gases, have become a major agent of climate change. These greenhouse gases affect the global climate by retaining heat in the troposphere, thus raising the average temperature of the planet and altering global atmospheric circulation and precipitation patterns. While on-going national and international actions to curtail and reduce greenhouse gas emissions are essential, the levels of greenhouse gases currently in the atmosphere, and their impact, are likely to persist for several decades. 
On-going and increased efforts to mitigate climate change through reduction in greenhouse gases are therefore crucial… The European Space Sciences Committee (ESSC) supports Article (2) on climate change of the Declaration of the ‘2015 Budapest World Science Forum on the enabling power of science’, which urges a universal agreement aiming at stabilising atmospheric concentrations of greenhouse gases and reducing the amount of airborne particles. The ESSC encourages countries to reduce their emissions in order to avoid dangerous anthropogenic interference with the climate system, which could lead to disastrous consequences. Such consequences, albeit from natural evolution, are witnessed in other objects of our Solar System. Global climate change is real and measurable. Since the start of the 20th century, the global mean surface temperature of the Earth has increased by more than 0.7°C and the rate of warming has been largest in the last 30 years… Key vulnerabilities arising from climate change include water resources, food supply, health, coastal settlements, biodiversity and some key ecosystems such as coral reefs and alpine regions. As the atmospheric concentration of greenhouse gases increases, impacts become more severe and widespread. To reduce the global net economic, environmental and social losses in the face of these impacts, the policy objective must remain squarely focused on returning greenhouse gas concentrations to near pre-industrial levels through the reduction of emissions… The spatial and temporal fingerprint of warming can be traced to increasing greenhouse gas concentrations in the atmosphere, which are a direct result of burning fossil fuels, broad-scale deforestation and other human activity. Decades of scientific research have shown that climate can change from both natural and anthropogenic causes. 
The Geological Society of America (GSA) concurs with assessments by the National Academies of Science (2005), the National Research Council (2011), the Intergovernmental Panel on Climate Change (IPCC, 2013) and the U.S. Global Change Research Program (Melillo et al., 2014) that global climate has warmed in response to increasing concentrations of carbon dioxide (CO2) and other greenhouse gases. The concentrations of greenhouse gases in the atmosphere are now higher than they have been for many thousands of years. Human activities (mainly greenhouse-gas emissions) are the dominant cause of the rapid warming since the middle 1900s (IPCC, 2013). If the upward trend in greenhouse-gas concentrations continues, the projected global climate change by the end of the twenty-first century will result in significant impacts on humans and other species. The tangible effects of climate change are already occurring. Addressing the challenges posed by climate change will require a combination of adaptation to the changes that are likely to occur and global reductions of CO2 emissions from anthropogenic sources… [The GSA is also a signatory to the June 28, 2016 letter to the U.S. Congress: https://www.eurekalert.org/images/2016climateletter6-28-16.pdf] The HCWH is a signatory to the April 2016 statement: http://www.lung.org/our-initiatives/healthy-air/outdoor/climate-change/declaration-on-climate-change.html?referrer=https://www.google.com/ The HCCC is a signatory to the April 2016 statement: http://www.lung.org/our-initiatives/healthy-air/outdoor/climate-change/declaration-on-climate-change.html?referrer=https://www.google.com/ Human activities have increased the concentration of these atmospheric greenhouse gases, and although the changes are relatively small, the equilibrium maintained by the atmosphere is delicate, and so the effect of these changes is significant. The world’s most important greenhouse gas is carbon dioxide, a by-product of the burning of fossil fuels. 
… Professional engineers commonly deal with risk, and frequently have to make judgments based on incomplete data. The available evidence suggests very strongly that human activities have already begun to make significant changes to the earth’s climate, and that the long-term risk of delaying action is greater than the cost of avoiding/minimising the risk. Scientific evidence is overwhelming that current energy trends are unsustainable. Immediate action is required to effect change in the timeframe needed to address significant ecological, human health and development, and energy security needs. Aggressive changes in policy are thus needed to accelerate the deployment of superior technologies. With a combination of such policies at the local, national, and international level, it should be possible—both technically and economically—to elevate the living conditions of most of humanity, while simultaneously addressing the risks posed by climate change and other forms of energy-related environmental degradation and reducing the geopolitical tensions and economic vulnerabilities generated by existing patterns of dependence on predominantly fossil-fuel resources… The Study Panel believes that, given the dire prospect of climate change, the following three recommendations should be acted upon without delay and simultaneously. Taking into account the three urgent recommendations above, another recommendation stands out by itself as a moral and social imperative and should be pursued with all means available. While the Earth’s climate has changed many times during the planet’s history because of natural factors, including volcanic eruptions and changes in the Earth’s orbit, never before have we observed the present rapid rise in temperature and carbon dioxide (CO2). Human activities resulting from the industrial revolution have changed the chemical composition of the atmosphere… 
Deforestation is now the second largest contributor to global warming, after the burning of fossil fuels. These human activities have significantly increased the concentration of “greenhouse gases” in the atmosphere… As the Earth’s climate warms, we are seeing many changes: stronger, more destructive hurricanes; heavier rainfall; more disastrous flooding; more areas of the world experiencing severe drought; and more heat waves. As reported by the Intergovernmental Panel on Climate Change (IPCC), most of the observed global warming since the mid-20th century is very likely due to human-produced emission of greenhouse gases and this warming will continue unabated if present anthropogenic emissions continue or, worse, expand without control. CAETS, therefore, endorses the many recent calls to decrease and control greenhouse gas emissions to an acceptable level as quickly as possible. There is now strong evidence that significant global warming is occurring. The evidence comes from direct measurements of rising surface air temperatures and subsurface ocean temperatures and, indirectly, from increases in average global sea levels, retreating glaciers, and changes in many physical and biological systems. It is very likely that most of the observed increase in global temperatures since the mid-twentieth century is due to human-induced increases in greenhouse gas concentrations in the atmosphere (IPCC 2007). Human activities are now causing atmospheric concentrations of greenhouse gases – including carbon dioxide, methane, tropospheric ozone, and nitrous oxide – to rise well above pre-industrial levels. Carbon dioxide levels have increased from 280 ppm in 1750 to over 380 ppm today, higher than any previous levels in at least the past 650,000 years. Increases in greenhouse gases are causing temperatures to rise; the Earth’s surface warmed by approximately 0.6°C over the twentieth century. 
The Intergovernmental Panel on Climate Change (IPCC) has forecast that average global surface temperatures will continue to increase, reaching between 1.1°C and 6.4°C above 1990 levels by 2100. The uncertainties about the amount of global warming we face in coming decades can be reduced through further scientific research. Part of this research must be better documenting and understanding past climate change. Research on Earth’s climate in the recent geologic past provides insights into ways in which climate can change in the future. It also provides data that contribute to the testing and improvement of the computer models that are used to predict future climate change. Reduce the causes of climate change: The scientific understanding of climate change is now sufficiently clear to justify nations taking prompt action. A lack of full scientific certainty about some aspects of climate change is not a reason for delaying an immediate response that will, at a reasonable cost, prevent dangerous anthropogenic interference with the climate system. It is vital that all nations identify cost-effective steps that they can take now to contribute to substantial and long-term reduction in net global greenhouse gas emissions. Action taken now to reduce significantly the build-up of greenhouse gases in the atmosphere will lessen the magnitude and rate of climate change. Fossil fuels, which are responsible for most of the carbon dioxide emissions produced by human activities, provide valuable resources for many nations and will provide 85% of the world’s energy demand over the next 25 years (IEA 2004). Minimizing the amount of this carbon dioxide reaching the atmosphere presents a huge challenge but must be a global priority.
The advances in scientific understanding of the Earth system generated by collaborative international, regional, and national observations and research programs; and The comprehensive and widely accepted and endorsed scientific assessments carried out by the Intergovernmental Panel on Climate Change and regional and national bodies, which have firmly established, on the basis of scientific evidence, that human activities are the primary cause of recent climate change; Continuing reliance on combustion of fossil fuels as the world’s primary source of energy will lead to much higher atmospheric concentrations of greenhouse gases, which will, in turn, cause significant increases in surface temperature, sea level, ocean acidification, and their related consequences to the environment and society; Stabilization of climate to avoid “dangerous anthropogenic interference with the climate system”, as called for in the UN Framework Convention on Climate Change, will require significant cutbacks in greenhouse gas emissions during the 21st century; and Mitigation of and adaptation to climate change can be made more effective by reducing uncertainties regarding feedbacks and the associated mechanisms; Nations collectively to begin to reduce sharply global atmospheric emissions of greenhouse gases and absorbing aerosols, with the goal of urgently halting their accumulation in the atmosphere and holding atmospheric levels at their lowest practicable value; National and international agencies to adequately support comprehensive observation and research programs that can clarify the urgency and extent of needed mitigation and promote adaptation to the consequences of climate change; Resource managers, planners, and leaders of public and private organizations to incorporate information on ongoing and projected changes in climate and its ramifications into their decision-making, with goals of limiting emissions, reducing the negative consequences of climate change, and enhancing 
adaptation, public well-being, safety, and economic vitality; and Organizations around the world to join with IUGG and its member Associations to encourage scientists to communicate freely and widely with public and private decision-makers about the consequences and risks of on-going climate change and actions that can be taken to limit climate change and promote adaptation; and To act with its member Associations to develop and implement an integrated communication and outreach plan to increase public understanding of the nature and implications of human-induced impacts on the Earth system, with the aim of reducing detrimental consequences. The LMS is a signatory to the July 21, 2015 UK science communiqué on climate change. The NACCHO is a signatory to the April 2016 declaration: http://www.lung.org/our-initiatives/healthy-air/outdoor/climate-change/declaration-on-climate-change.html?referrer=https://www.google.com/ The National Association of Geoscience Teachers (NAGT) recognizes: (1) that Earth’s climate is changing, (2) that present warming trends are largely the result of human activities, and (3) that teaching climate change science is a fundamental and integral part of earth science education. The core mission of NAGT is to “foster improvement in the teaching of the earth sciences at all levels of formal and informal instruction, to emphasize the cultural significance of the earth sciences and to disseminate knowledge in this field to the general public.” The National Science Education Standards call for a populace that understands how scientific knowledge is both generated and verified, and how complex interactions between human activities and the environment can impact the Earth system. Climate is clearly an integral part of the Earth system connecting the physical, chemical and biological components and playing an essential role in how the Earth’s environment interacts with human culture and societal development.
Thus, climate change science is an essential part of Earth Science education and is fundamental to the mission set forth by NAGT. In recognition of these imperatives, NAGT strongly supports and will work to promote education in the science of climate change, the causes and effects of current global warming, and the immediate need for policies and actions that reduce the emission of greenhouse gases. The NAHN is a signatory to the April 2016 declaration: http://www.lung.org/our-initiatives/healthy-air/outdoor/climate-change/declaration-on-climate-change.html?referrer=https://www.google.com/ The NAML is a signatory to the June 28, 2016 letter to the U.S. Congress: https://www.eurekalert.org/images/2016climateletter6-28-16.pdf The NEHA is a signatory to the April 2016 declaration: http://www.lung.org/our-initiatives/healthy-air/outdoor/climate-change/declaration-on-climate-change.html?referrer=https://www.google.com/ The NMA is a signatory to the April 2016 declaration: http://www.lung.org/our-initiatives/healthy-air/outdoor/climate-change/declaration-on-climate-change.html?referrer=https://www.google.com/ Many national science academies have published formal statements and declarations acknowledging the state of climate science, the fact that climate is changing, the compelling evidence that humans are responsible, and the need to debate and implement strategies to reduce emissions of greenhouse gases. A few examples of joint academy statements are listed here. Following the release of the third in the ongoing series of international reviews of climate science conducted by the Intergovernmental Panel on Climate Change (IPCC), seventeen national science academies issued a joint statement, entitled “The Science of Climate Change,” acknowledging the IPCC study to be the scientific consensus on climate change science.
The statement was signed by: Australian Academy of Sciences, Royal Flemish Academy of Belgium for Sciences and the Arts, Brazilian Academy of Sciences, Royal Society of Canada, Caribbean Academy of Sciences, Chinese Academy of Sciences, French Academy of Sciences, German Academy of Natural Scientists Leopoldina, Indian National Science Academy, Indonesian Academy of Sciences, Royal Irish Academy, Accademia Nazionale dei Lincei (Italy), Academy of Sciences Malaysia, Academy Council of the Royal Society of New Zealand, Royal Swedish Academy of Sciences, Turkish Academy of Sciences, and Royal Society (UK). Eleven national science academies, including all of the largest emitters of greenhouse gases, signed a statement that the scientific understanding of climate change was sufficiently strong to justify prompt action. The statement explicitly endorsed the IPCC consensus and stated: “…there is now strong evidence that significant global warming is occurring. The evidence comes from direct measurements of rising surface air temperatures and subsurface ocean temperatures and from phenomena such as increases in average global sea levels, retreating glaciers, and changes to many physical and biological systems. It is likely that most of the warming in recent decades can be attributed to human activities (IPCC 2001). This warming has already led to changes in the Earth’s climate.” The statement was signed by the science academies of: Brazil, Canada, China, France, Germany, India, Italy, Japan, Russia, the United Kingdom, and the United States. In 2007, thirteen national academies issued a joint declaration reconfirming previous statements and strengthening language based on new research from the fourth assessment report of the IPCC, including the following: “It is unequivocal that the climate is changing, and it is very likely that this is predominantly caused by the increasing human interference with the atmosphere.
These changes will transform the environmental conditions on Earth unless counter-measures are taken.” The thirteen signatories were the national science academies of Brazil, Canada, China, France, Germany, Italy, India, Japan, Mexico, Russia, South Africa, the United Kingdom, and the United States. In 2007, the Network of African Science Academies submitted a joint “statement on sustainability, energy efficiency, and climate change:” “A consensus, based on current evidence, now exists within the global scientific community that human activities are the main source of climate change and that the burning of fossil fuels is largely responsible for driving this change. The Intergovernmental Panel on Climate Change (IPCC) reached this conclusion with “90 percent certainty” in its Fourth Assessment issued earlier this year. The IPCC should be congratulated for the contribution it has made to public understanding of the nexus that exists between energy, climate and sustainability.” The thirteen signatories were the science academies of Cameroon, Ghana, Kenya, Madagascar, Nigeria, Senegal, South Africa, Sudan, Tanzania, Uganda, Zambia, Zimbabwe, as well as the African Academy of Sciences. In 2008, the thirteen signers of the 2007 joint academies declaration issued a statement reiterating previous statements and reaffirming “that climate change is happening and that anthropogenic warming is influencing many physical and biological systems.” Among other actions, the declaration urges all nations to “(t)ake appropriate economic and policy measures to accelerate transition to a low carbon society and to encourage and effect changes in individual and national behaviour.” The thirteen signatories were the national science academies of Brazil, Canada, China, France, Germany, Italy, India, Japan, Mexico, Russia, South Africa, the United Kingdom, and the United States. 
In May 2009, thirteen national academies issued a joint statement that said among other things: “The IPCC 2007 Fourth Assessment of climate change science concluded that large reductions in the emissions of greenhouse gases, principally CO2, are needed soon to slow the increase of atmospheric concentrations, and avoid reaching unacceptable levels. However, climate change is happening even faster than previously estimated; global CO2 emissions since 2000 have been higher than even the highest predictions, Arctic sea ice has been melting at rates much faster than predicted, and the rise in the sea level has become more rapid. Feedbacks in the climate system might lead to much more rapid climate changes. The need for urgent action to address climate change is now indisputable.” The thirteen signatories were the national science academies of Brazil, Canada, China, France, Germany, Italy, India, Japan, Mexico, Russia, South Africa, the United Kingdom, and the United States. In addition to the statement signed in 2001 by the Royal Flemish Academy of Belgium for Sciences and the Arts, the Academie Royale des Sciences, des Lettres & des Beaux-arts de Belgique (the French language academy in Belgium) issued a formal statement: In July 2015, the Royal Society and member organizations issued a joint “U.K. Science Communiqué on Climate Change.” In part, that statement reads: “The scientific evidence is now overwhelming that the climate is warming and that human activity is largely responsible for this change through emissions of greenhouse gases. Governments will meet in Paris in November and December this year to negotiate a legally binding and universal agreement on tackling climate change. Any international policy response to climate change must be rooted in the latest scientific evidence. 
This indicates that if we are to have a reasonable chance of limiting global warming in this century to 2°C relative to the pre-industrial period, we must transition to a zero-carbon world by early in the second half of the century. To achieve this transition, governments should demonstrate leadership by recognising the risks climate change poses, embracing appropriate policy and technological responses, and seizing the opportunities of low-carbon and climate-resilient growth.” It was signed by: The Academy of Medical Sciences (UK), The Academy of Social Sciences (UK), The British Academy for the Humanities and Social Sciences, The British Ecological Society, The Geological Society (UK), The Challenger Society for Marine Sciences, The Institution of Civil Engineers (UK), The Institution of Chemical Engineers, The Institution of Environmental Sciences, The Institute of Physics, The Learned Society of Wales, London Mathematical Society, Royal Astronomical Society, Royal Economic Society, Royal Geographical Society, Royal Meteorological Society, Royal Society, Royal Society of Biology, Royal Society of Chemistry, Royal Society of Edinburgh, Society for General Microbiology, Wellcome Trust, Zoological Society of London Climate change is occurring, is caused largely by human activities, and poses significant risks for — and in many cases is already affecting — a broad range of human and natural systems. The compelling case for these conclusions is provided in Advancing the Science of Climate Change, part of a congressionally requested suite of studies known as America’s Climate Choices. While noting that there is always more to learn and that the scientific process is never closed, the book shows that hypotheses about climate change are supported by multiple lines of evidence and have stood firm in the face of serious debate and careful evaluation of alternative explanations. [The U.S.
National Academies of Sciences have also signed a long series of statements with other national academies around the world in support of the state-of-the-science.] The NSCA is a signatory to the June 28, 2016 letter to the U.S. Congress: https://www.eurekalert.org/images/2016climateletter6-28-16.pdf Acid rain, toxic air pollutants, and greenhouse gas emissions are a major threat to human health and welfare, as well as plant and animal life. Based on recognized adequate research of the causes and effects of the various forms of air pollution, the federal government should establish environmentally and economically sound standards for the reduction and control of these emissions. The OBFS is a signatory to the June 28, 2016 letter to the U.S. Congress: https://www.eurekalert.org/images/2016climateletter6-28-16.pdf The PHI is a signatory to the April 2016 declaration: http://www.lung.org/our-initiatives/healthy-air/outdoor/climate-change/declaration-on-climate-change.html?referrer=https://www.google.com/ The RAS is a signatory to the July 21, 2015 UK science communiqué on climate change. https://royalsociety.org/~/media/policy/Publications/2015/21-07-15-climate-communique.PDF The RES is a signatory to the July 21, 2015 UK science communiqué on climate change. https://royalsociety.org/~/media/policy/Publications/2015/21-07-15-climate-communique.PDF The RGS is a signatory to the July 21, 2015 UK science communiqué on climate change. https://royalsociety.org/~/media/policy/Publications/2015/21-07-15-climate-communique.PDF The Fourth Assessment Report (AR4) of the Inter-Governmental Panel on Climate Change (IPCC) is unequivocal in its conclusion that climate change is happening and that humans are contributing significantly to these changes. The evidence, from not just one source but a number of different measurements, is now far greater and the tools we have to model climate change contain much more of our scientific knowledge within them. 
The world’s best climate scientists are telling us it’s time to do something about it. Carbon dioxide is such an important greenhouse gas because there is an increasing amount of it in the atmosphere from the burning of fossil fuels and it stays in the atmosphere for such a long time: a hundred years or so. The changes we are seeing now in our climate are the result of emissions since industrialisation and we have already set in motion the next 50 years of global warming – what we do from now on will determine how much worse it will get. The RMS is also a signatory to the July 21, 2015 UK science communiqué on climate change. https://royalsociety.org/~/media/policy/Publications/2015/21-07-15-climate-communique.PDF The RS is a signatory to the July 21, 2015 UK science communiqué on climate change. https://royalsociety.org/~/media/policy/Publications/2015/21-07-15-climate-communique.PDF Climate change is one of the defining issues of our time. It is now more certain than ever, based on many lines of evidence, that humans are changing Earth’s climate. The atmosphere and oceans have warmed, accompanied by sea-level rise, a strong decline in Arctic sea ice, and other climate-related changes. The evidence is clear. We strongly support the introduction of policies to significantly reduce UK and global greenhouse gas emissions, as we feel that the consequences of climate change will be severe. We believe that biologists have a crucial role to play in developing innovative biotechnologies to generate more efficient and environmentally sustainable biofuels, and to capture and store greenhouse gases from power stations and the atmosphere. It is important for the government to continue to consult scientists, to review policy, and to encourage new technologies so as to ensure the best possible strategies are used to combat this complex issue. We are in favour of reducing energy demands, in particular by improvements in public transport and domestic appliances.
As some degree of climate change is inevitable, we encourage the development of adaptation strategies to reduce the effects of global warming on our environment. There is an overwhelming scientific consensus worldwide, and a broad political consensus, that greenhouse gas emissions are affecting global climate, and that measures are needed to reduce these emissions significantly so as to limit the extent of climate change. The term ‘climate change’ is used predominantly to refer to global warming and its consequences, and this policy briefing will address these issues. Although long-term fluctuations in global temperature occur due to various factors such as solar activity, there is scientific agreement that the rapid global warming that has occurred in recent years is mostly anthropogenic, i.e. due to human activity. The absorption and emission of solar radiation by greenhouse gases causes the atmosphere to warm. Human activities such as fossil fuel consumption and deforestation have elevated atmospheric levels of greenhouse gases such as carbon dioxide, methane and nitrous oxide significantly since pre-industrial times. The RSB is also a signatory to the July 21, 2015 UK science communiqué on climate change. https://royalsociety.org/~/media/policy/Publications/2015/21-07-15-climate-communique.PDF The RSC is a signatory to the July 21, 2015 UK science communiqué on climate change. https://royalsociety.org/~/media/policy/Publications/2015/21-07-15-climate-communique.PDF The RSE is a signatory to the July 21, 2015 UK science communiqué on climate change. https://royalsociety.org/~/media/policy/Publications/2015/21-07-15-climate-communique.PDF Warming of the climate system is unequivocal, and since the 1950s, many of the observed changes are unprecedented over decades to millennia. The atmosphere and oceans have warmed, the amounts of snow and ice have diminished, and sea level has risen. 
Global surface temperatures have warmed, on average, by around one degree Celsius since the late 19th century. Much of the warming, especially since the 1950s, is very likely a result of increased amounts of greenhouse gases in the atmosphere, resulting from human activity. The Northern Hemisphere has warmed much faster than the global average, while the southern oceans south of New Zealand latitudes have warmed more slowly. Generally, continental regions have warmed more than the ocean surface at the same latitudes. Global sea levels have risen around 19 cm since the start of the 20th century, and are almost certain to rise at a faster rate in future. Surface temperature is projected to rise over the 21st century under all assessed emission scenarios. It is very likely that heat waves will occur more often and last longer, and that extreme precipitation events will become more intense and frequent in many regions. The ocean will continue to warm and acidify, and global mean sea level will continue to rise. Relatively small changes in average climate can have a big effect on the frequency of occurrence or likelihood of extreme events. How the future plays out depends critically on the emissions of greenhouse gases that enter the atmosphere over coming decades. New Zealand is being affected by climate change and impacts are set to increase in magnitude and extent over time. Floods, storms, droughts and fires will become more frequent unless significant action is taken to reduce global emissions of greenhouse gases, which are changing the climate. Even small changes in average climate conditions are likely to lead to large changes in the frequency of occurrence of extreme events. Our societies are not designed to cope with such rapid changes. The SGM is a signatory to the July 21, 2015 UK science communiqué on climate change.
https://royalsociety.org/~/media/policy/Publications/2015/21-07-15-climate-communique.PDF The SIAM is a signatory to the June 28, 2016 letter to the U.S. Congress: https://www.eurekalert.org/images/2016climateletter6-28-16.pdf The SMB is a signatory to the June 28, 2016 letter to the U.S. Congress: https://www.eurekalert.org/images/2016climateletter6-28-16.pdf The SSAR is a signatory to the June 28, 2016 letter to the U.S. Congress: https://www.eurekalert.org/images/2016climateletter6-28-16.pdf The Society of American Foresters (SAF) believes that climate change policies and actions should recognize the role that forests play in reducing greenhouse gas (GHG) emissions through 1) the substitution of wood products for nonrenewable building materials, 2) forest biomass substitution for fossil fuel-based energy sources, 3) reducing wildfire and other disturbance emissions, and 4) avoided land-use change. SAF also believes that sustainably managed forests can reduce GHG concentrations by sequestering atmospheric carbon in trees and soil, and by storing carbon in wood products made from the harvested trees. Finally, climate change policies can invest in sustainable forest management to achieve these benefits, and respond to the challenges and opportunities that a changing climate poses for forests. Of the many ways to reduce GHG emissions and atmospheric particulate pollution, the most familiar are increasing energy efficiency and conservation, and using renewable energy sources as a substitute for fossil fuels. Equally important is using forests to address climate change. Forests play an essential role in controlling GHG emissions and atmospheric GHGs, while simultaneously providing essential environmental and social benefits, including clean water, wildlife habitat, recreation, and forest products that, in turn, store carbon.
Finally, changes in long-term patterns of temperature and precipitation have the potential to dramatically affect forests nationwide through a variety of changes to growth and mortality (USDA Forest Service 2012). Many such changes are already evident, such as longer growing and wildfire seasons, increased incidence of pest and disease, and climate-related mortality of specific species (Westerling et al. 2006). These changes have been associated with increasing concentrations of atmospheric carbon dioxide (CO2) and other GHGs in the atmosphere. Successfully achieving the benefits forests can provide for addressing climate change will therefore require explicit and long-term policies and investment in managing these changes, as well as helping private landowners and public agencies understand the technologies and practices that can be used to respond to changing climate conditions… The SoN is a signatory to the June 28, 2016 letter to the U.S. Congress: https://www.eurekalert.org/images/2016climateletter6-28-16.pdf The SSB is a signatory to the June 28, 2016 letter to the U.S. Congress: https://www.eurekalert.org/images/2016climateletter6-28-16.pdf A comprehensive body of scientific evidence indicates beyond reasonable doubt that global climate change is now occurring and that its manifestations threaten the stability of societies as well as natural and managed ecosystems. Increases in ambient temperatures and changes in related processes are directly linked to rising anthropogenic greenhouse gas (GHG) concentrations in the atmosphere. The potential related impacts of climate change on the ability of agricultural systems, which include soil and water resources, to provide food, feed, fiber, and fuel, and maintenance of ecosystem services (e.g., water supply and habitat for crop landraces, wild relatives, and pollinators) as well as the integrity of the environment, are major concerns. 
Around the world and in the United States (US), agriculture—which comprises field, vegetable, and tree crops, as well as livestock production—constitutes a major land use which influences global ecosystems. Globally, crop production occupies approximately 1.8 Billion (B) hectares out of a total terrestrial land surface of about 13.5 B hectares. In addition, animal production utilizes grasslands, rangelands, and savannas, which altogether cover about a quarter of the Earth’s land. Even in 2010, agriculture remains the most basic and common human occupation on the planet and a major contributor to human well-being. Changes in climate are already affecting the sustainability of agricultural systems and disrupting production. [The May 2011 Statement was also signed by the American Society of Agronomy and the Crop Science Society of America.] [The SSSA is also a signatory to the June 28, 2016 letter to the U.S. Congress: https://www.eurekalert.org/images/2016climateletter6-28-16.pdf] The AMS is a signatory to the July 21, 2015 UK science communiqué on climate change. https://royalsociety.org/~/media/policy/Publications/2015/21-07-15-climate-communique.PDF The AoSS is a signatory to the July 21, 2015 UK science communiqué on climate change. https://royalsociety.org/~/media/policy/Publications/2015/21-07-15-climate-communique.PDF The BAHSS is a signatory to the July 21, 2015 UK science communiqué on climate change. https://royalsociety.org/~/media/policy/Publications/2015/21-07-15-climate-communique.PDF The BES is a signatory to the July 21, 2015 UK science communiqué on climate change. https://royalsociety.org/~/media/policy/Publications/2015/21-07-15-climate-communique.PDF The CSMS is a signatory to the July 21, 2015 UK science communiqué on climate change.
https://royalsociety.org/~/media/policy/Publications/2015/21-07-15-climate-communique.PDF The last century has seen a rapidly growing global population and much more intensive use of resources, leading to greatly increased emissions of gases, such as carbon dioxide and methane, from the burning of fossil fuels (oil, gas and coal), and from agriculture, cement production and deforestation. Evidence from the geological record is consistent with the physics that shows that adding large amounts of carbon dioxide to the atmosphere warms the world and may lead to: higher sea levels and flooding of low-lying coasts; greatly changed patterns of rainfall; increased acidity of the oceans; and decreased oxygen levels in seawater… There is now widespread concern that the Earth’s climate will warm further, not only because of the lingering effects of the added carbon already in the system, but also because of further additions as human population continues to grow… [The GS is also a signatory to the July 21, 2015 UK science communiqué on climate change. https://royalsociety.org/~/media/policy/Publications/2015/21-07-15-climate-communique.PDF] The IoP is a signatory to the July 21, 2015 UK science communiqué on climate change. https://royalsociety.org/~/media/policy/Publications/2015/21-07-15-climate-communique.PDF The ICE is a signatory to the July 21, 2015 UK science communiqué on climate change. https://royalsociety.org/~/media/policy/Publications/2015/21-07-15-climate-communique.PDF The IES is a signatory to the July 21, 2015 UK science communiqué on climate change. https://royalsociety.org/~/media/policy/Publications/2015/21-07-15-climate-communique.PDF The LSoW is a signatory to the July 21, 2015 UK science communiqué on climate change.
https://royalsociety.org/~/media/policy/Publications/2015/21-07-15-climate-communique.PDF Human activities over the past 100 years have caused significant changes in the earth’s climatic conditions, resulting in severe alterations in regional temperature and precipitation patterns that are expected to continue and become amplified over the next 100 years or more. Although climates have varied since the earth was formed, few scientists question the role of humans in exacerbating recent climate change through the increase in emissions of greenhouse gases (e.g., carbon dioxide, methane, water vapor). Human activities contributing to climate warming include the burning of fossil fuels, slash-and-burn agriculture, methane production from animal husbandry practices, and land-use changes. The critical issue is no longer “whether” climate change is occurring, but rather how to address its effects on wildlife and wildlife habitats… The TFAA is a signatory to the April 2016 statement: http://www.lung.org/our-initiatives/healthy-air/outdoor/climate-change/declaration-on-climate-change.html?referrer=https://www.google.com/ The USCHA is a signatory to the April 2016 statement: http://www.lung.org/our-initiatives/healthy-air/outdoor/climate-change/declaration-on-climate-change.html?referrer=https://www.google.com/ The UCAR is a signatory to the June 28, 2016 letter to the U.S. Congress: https://www.eurekalert.org/images/2016climateletter6-28-16.pdf Wellcome is a signatory to the July 21, 2015 UK science communiqué on climate change. https://royalsociety.org/~/media/policy/Publications/2015/21-07-15-climate-communique.PDF Now that the world has negotiated the Paris agreement to mitigate GHGs and pursue adaptation to the changing climate, the focus must turn to implementation to translate the words into action. The world’s engineers are a human resource that must be tapped to contribute to this implementation.
All countries use engineers to deliver services that provide the quality of life that society enjoys, in particular, potable water, sanitation, shelter, buildings, roads, bridges, power, energy and other types of infrastructure. There are opportunities to achieve GHG reduction as well as to improve the climate resilience of this infrastructure through design, construction and operation, all of which require the expertise and experience of engineers. Engineers are problem-solvers and seek to develop feasible solutions that are cost-effective and sustainable. Engineers serve the public interest and offer objective, unbiased review and advice. Their expertise should be engaged to evaluate the technical feasibility and economic viability of proposals to reduce GHGs and to adapt to climate change impacts. Engineers’ input and action are required to implement solutions at country and local levels. The international organization known as the World Federation of Engineering Organizations consists of members of national engineering organizations from over 90 developing and developed countries, representing more than 20 million engineers. The WFEO offers to facilitate contact and engagement with these organizations to identify subject matter experts who will contribute their time and expertise as members of the engineering profession. The expertise of the world’s engineers is needed to help successfully implement the Paris agreement. We encourage all countries to engage their engineers in this effort. The WFEO is prepared to assist in this effort. The WFEO consists of national members representing more than 85 countries as well as 10 regional engineering organizations. These members collectively engage with more than 20 million engineers worldwide who are committed to serve the public interest through Codes of Practice and a Code of Ethics that emphasize professional practice in sustainable development, environmental stewardship and climate change. 
WFEO, the International Council for Science (ICSU) and the International Social Science Council (ISSC) are co-organizing partners of the UN Major Group on Scientific and Technological Communities, one of the nine major groups of civil society recognized by the United Nations. Engineers acknowledge that climate change is underway and that sustained efforts must be undertaken to address this worldwide challenge to society, our quality of life and prosperity. Urgent actions are required and the engineering profession is prepared to do its part towards implementing cost-effective, feasible and sustainable solutions working in partnership with stakeholders. Noting the conclusions of the United Nations’ Intergovernmental Panel on Climate Change (IPCC) and other climatologists that anthropogenic greenhouse gases, which contribute to global climate change, have substantially increased in atmospheric concentration beyond natural processes and have increased by 28 percent since the industrial revolution….Realizing that subsequent health effects from such perturbations in the climate system would likely include an increase in: heat-related mortality and morbidity; vector-borne infectious diseases,… water-borne diseases…(and) malnutrition from threatened agriculture….the World Federation of Public Health Associations…recommends precautionary primary preventive measures to avert climate change, including reduction of greenhouse gas emissions and preservation of greenhouse gas sinks through appropriate energy and land use policies, in view of the scale of potential health impacts… Over the last 50 years, human activities – particularly the burning of fossil fuels – have released sufficient quantities of carbon dioxide and other greenhouse gases to trap additional heat in the lower atmosphere and affect the global climate. In the last 130 years, the world has warmed by approximately 0.85 °C. 
Each of the last 3 decades has been successively warmer than any preceding decade since 1850. Sea levels are rising, glaciers are melting and precipitation patterns are changing. Extreme weather events are becoming more intense and frequent… Many policies and individual choices have the potential to reduce greenhouse gas emissions and produce major health co-benefits. For example, cleaner energy systems, and promoting the safe use of public transportation and active movement – such as cycling or walking as alternatives to using private vehicles – could reduce carbon emissions, and cut the burden of household air pollution, which causes some 4.3 million deaths per year, and ambient air pollution, which causes about 3 million deaths every year. In 2015, the WHO Executive Board endorsed a new work plan on climate change and health. This includes: Partnerships: to coordinate with partner agencies within the UN system, and ensure that health is properly represented in the climate change agenda. Awareness raising: to provide and disseminate information on the threats that climate change presents to human health, and opportunities to promote health while cutting carbon emissions. Science and evidence: to coordinate reviews of the scientific evidence on the links between climate change and health, and develop a global research agenda. Support for implementation of the public health response to climate change: to assist countries to build capacity to reduce health vulnerability to climate change, and promote health while reducing carbon emissions. Climate change is the greatest threat to global health in the 21st century. Health professionals have a duty of care to current and future generations. 
You are on the front line in protecting people from climate impacts – from more heat-waves and other extreme weather events; from outbreaks of infectious diseases such as malaria, dengue and cholera; from the effects of malnutrition; as well as treating people who are affected by cancer, respiratory, cardiovascular and other non-communicable diseases caused by environmental pollution. Already the hottest year on record, 2015 will see nations attempt to reach a global agreement to address climate change at the United Nations Climate Change Conference (COP) in Paris in December. This may be the most important health agreement of the century: an opportunity not only to reduce climate change and its consequences, but to promote actions that can yield large and immediate health benefits, and reduce costs to health systems and communities… Since the beginning of the 20th century, scientists have been observing a change in the climate that cannot be attributed solely to natural influences. This change has occurred faster than any other climate change in Earth’s history and will have consequences for future generations. Scientists agree that this climate change is anthropogenic (human-induced). It is principally attributable to the increase of certain heat-absorbing greenhouse gases in our atmosphere since the industrial revolution. The ever-increasing amount of these gases has directly led to more heat being retained in the atmosphere and thus to increasing global average surface temperatures. The partners in the WMO Global Atmosphere Watch (GAW) compile reliable scientific data and information on the chemical composition of the atmosphere and its natural and anthropogenic change. This helps to improve the understanding of interactions between the atmosphere, the oceans and the biosphere. 
The World Meteorological Organization has published a detailed analysis of the global climate 2011-2015 – the hottest five-year period on record – and the increasingly visible human footprint on extreme weather and climate events with dangerous and costly impacts. The record temperatures were accompanied by rising sea levels and declines in Arctic sea-ice extent, continental glaciers and northern hemisphere snow cover. All these climate change indicators confirmed the long-term warming trend caused by greenhouse gases. Carbon dioxide reached the significant milestone of 400 parts per million in the atmosphere for the first time in 2015, according to the WMO report, which was submitted to the U.N. climate change conference. The Zoological Society is a signatory to the July 21, 2015 UK science communiqué on climate change. https://royalsociety.org/~/media/policy/Publications/2015/21-07-15-climate-communique.PDF [Edited, compiled by Dr. Peter Gleick. Please send any corrections, additions, updates…]


Jiang F.,Chinese Academy of Sciences | Waterfield N.R.,University of Warwick | Yang J.,Chinese Academy of Sciences | Yang G.,Chinese Academy of Sciences | Jin Q.,Chinese Academy of Sciences
Cell Host and Microbe | Year: 2014

Widely found in animal and plant-associated proteobacteria, type VI secretion systems (T6SSs) are potentially capable of facilitating diverse interactions with eukaryotes and/or other bacteria. Pseudomonas aeruginosa encodes three distinct T6SS haemolysin coregulated protein (Hcp) secretion islands (H1, H2, and H3-T6SS), each involved in different aspects of the bacterium's interaction with other organisms. Here we describe the characterization of a P. aeruginosa H3-T6SS-dependent phospholipase D effector, PldB, and its three tightly linked cognate immunity proteins. PldB targets the periplasm of prokaryotic cells and exerts an antibacterial activity. Surprisingly, PldB also facilitates intracellular invasion of host eukaryotic cells by activation of the PI3K/Akt pathway, revealing it to be a trans-kingdom effector. Our findings imply a potentially widespread T6SS-mediated mechanism, which deploys a single phospholipase effector to influence both prokaryotic cells and eukaryotic hosts. © 2014 Elsevier Inc.


Zhang X.-G.,Chinese Academy of Sciences | Zhang X.-G.,Chinese Center for Antarctic Astronomy
Monthly Notices of the Royal Astronomical Society | Year: 2013

In this paper, the optical spectral index-luminosity relationship is checked for 17 well-known individually mapped quasi-stellar objects (QSOs), in order to give a clearer conclusion on the so far conflicting dependence of the spectral index on the luminosity for an active galactic nucleus (AGN). Unlike the global relationships based on the colour difference (photometry parameters) for samples of AGNs, a more reliable relationship is determined for the multi-epoch observed individually mapped QSOs, with no contamination from the host galaxies, the line variabilities or very different central properties. The final confirmed results are as follows. (i) No strong dependence of the optical spectral index on the continuum luminosity can be found for any of the 17 QSOs, apart from two objects (PG 0026 and PG 1613) that show weak trends (at the 3σ confidence level). In other words, the common expectation that 'AGNs get bluer when they get brighter' is not so common. (ii) There are very different damped intrinsic variability time-scales for the variability modes of the optical spectral index and the continuum emission, as determined through the widely applied damped random walk method for AGN variability. In other words, different intrinsic mechanisms control the variabilities of the optical spectral index and the power-law AGN continuum emission. Therefore, the much weaker dependence of the optical spectral index on the continuum luminosity can be further confirmed. © 2013 The Authors Published by Oxford University Press on behalf of the Royal Astronomical Society.
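The damped random walk mentioned above is an Ornstein–Uhlenbeck process, fully specified by a damping time-scale, a long-term mean and a variability amplitude. A minimal simulation sketch (the parameter values below are illustrative assumptions, not fits to any of the 17 QSOs):

```python
import numpy as np

def simulate_drw(n_steps, dt, tau, sigma, mean, seed=0):
    """Simulate a damped random walk (Ornstein-Uhlenbeck process).

    tau   : damping (relaxation) time-scale, same units as dt
    sigma : long-term standard deviation of the variability
    mean  : long-term mean level of the light curve
    """
    rng = np.random.default_rng(seed)
    rho = np.exp(-dt / tau)       # exact one-step autocorrelation for an OU process
    x = np.empty(n_steps)
    x[0] = mean
    for i in range(1, n_steps):
        x[i] = mean + rho * (x[i - 1] - mean) \
               + sigma * np.sqrt(1.0 - rho**2) * rng.normal()
    return x

# Illustrative parameters: 1000 daily samples, 200-day damping time,
# 0.1 mag amplitude around a mean magnitude of 15
lc = simulate_drw(n_steps=1000, dt=1.0, tau=200.0, sigma=0.1, mean=15.0)
print(lc.shape)
```

Fitting such a model to the spectral index series and to the continuum series separately would yield the two (different) damping time-scales the abstract refers to.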


Zhang X.-G.,Chinese Academy of Sciences | Zhang X.-G.,Chinese Center for Antarctic Astronomy
Monthly Notices of the Royal Astronomical Society | Year: 2013

In this paper, the properties of the proposed intermediate broad emission line region (BLR) are checked for the mapped AGN PG 0052+251. After accounting for the apparent effects of the broad He II line on the observed broad Hβ profile, the line parameters (especially the line width and the line flux) of the observed broad Hα and broad Hβ are carefully determined. Based on the measured line parameters, the model with two broad components applied to each observed broad Balmer line is preferred, and is then confirmed by the very different calculated time lags for the inner/intermediate broad components and the corresponding virial black hole mass ratio determined from the properties of the inner and intermediate broad components. The correlation between the broad line width and the broad line flux is then checked for the two broad components: a clearly strong negative correlation for the inner broad component and a positive correlation for the intermediate broad component. The different correlations for the two broad components strongly support the intermediate BLR of PG 0052+251. © 2013 The Authors. Published by Oxford University Press on behalf of the Royal Astronomical Society.
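The virial black hole mass check referred to above follows from M_BH = f·R_BLR·V²/G with R_BLR = c·τ taken from the measured time lag. A sketch with hypothetical lag and width values (not the measured PG 0052+251 numbers): under virialization, an inner component with a short lag and broad lines and an intermediate component with a long lag and narrower lines should return consistent masses, i.e. a mass ratio near unity.

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8            # speed of light, m s^-1
M_SUN = 1.989e30       # solar mass, kg
DAY = 86400.0          # seconds per day

def virial_black_hole_mass(lag_days, line_width_kms, f=1.0):
    """Virial mass M_BH = f * R_BLR * V^2 / G, with R_BLR = c * tau."""
    r_blr = C * lag_days * DAY            # BLR radius in metres
    v = line_width_kms * 1e3              # line width in m/s
    return f * r_blr * v**2 / G / M_SUN   # in solar masses

# Hypothetical illustrative values: inner component (short lag, broad),
# intermediate component (4x longer lag, half the width)
m_inner = virial_black_hole_mass(lag_days=30, line_width_kms=4000)
m_inter = virial_black_hole_mass(lag_days=120, line_width_kms=2000)
print(f"{m_inner:.2e} {m_inter:.2e} ratio {m_inner / m_inter:.2f}")  # ratio 1.00
```

With these chosen numbers the lag scales as the inverse square of the width, so the two components give the same mass by construction, which is exactly the kind of consistency used in the abstract to support the two-component model.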


Zhang X.-G.,Chinese Academy of Sciences | Zhang X.-G.,Chinese Center for Antarctic Astronomy
Monthly Notices of the Royal Astronomical Society | Year: 2013

In this paper, we carefully check the correlation between the line width (second moment) and the line flux of the double-peaked broad Hα of the well-known mapped active galactic nucleus (AGN) 3C 390.3, in order to show some further distinctions between double-peaked emitters and normal broad-line AGN. Based on the virialization assumption M_BH ∝ R_BLR × V²(BLR) and the empirical relation R_BLR ∝ L^0.5, a strong negative correlation between the line width and the line flux of the double-peaked broad lines should be expected for 3C 390.3, such as the negative correlation confirmed for the mapped broad-line object NGC 5548: R_BLR × V²(BLR) ∝ L^0.5 × σ² = constant. Moreover, based on the public spectra around 1995 from the AGN WATCH project for 3C 390.3, one reliable positive correlation is found between the line width and the line flux of the double-peaked broad Hα. In the context of the proposed theoretical accretion disc model for double-peaked emitters, the unexpected positive correlation can be naturally explained by the different time delays for the inner and outer parts of the disc-like broad-line region (BLR) of 3C 390.3. Moreover, the virialization assumption is checked and found to still hold for 3C 390.3. However, the time-varying size of the BLR of 3C 390.3 cannot be predicted by the empirical relation R_BLR ∝ L^0.5. In other words, the mean size of the BLR of 3C 390.3 can be estimated from the continuum luminosity (line luminosity), while strengthening continuum emission leads to a decreasing (not increasing) BLR size at different epochs for 3C 390.3. We then compared our results for 3C 390.3 with previous results reported in the literature for other double-peaked emitters, and found that until the effects of varying disc physical parameters (such as the effects of disc precession) are clearly corrected for in long-term observed line spectra, it is not meaningful to discuss the correlation of the line parameters of double-peaked broad lines. 
Furthermore, due to the probable 'external' ionizing source with structures that are so far unclear, it is hard to conclude that the positive correlation between the line width and the line flux can be found for all double-peaked emitters, even after accounting for varying disc physical parameters. However, once a positive correlation of broad-line parameters is found, the accretion disc origin of the broad line should be considered first. © 2012 The Author. Published by Oxford University Press on behalf of the Royal Astronomical Society.
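The expected negative width-flux correlation can be made concrete: combining the virialization assumption M_BH ∝ R_BLR × V² = constant with the empirical relation R_BLR ∝ L^0.5 forces V ∝ L^(-1/4), so brighter epochs should show narrower lines. A short numerical check on synthetic luminosities:

```python
import numpy as np

# Under M_BH ∝ R_BLR * V^2 = constant (fixed black hole mass) and
# R_BLR ∝ L^0.5, the line width must scale as V ∝ L^(-1/4): the
# negative width-flux correlation expected for a normal broad-line
# AGN, and violated by the double-peaked emitter 3C 390.3.
L = np.linspace(1.0, 4.0, 50)   # continuum luminosity, arbitrary units
R = L**0.5                      # R_BLR ∝ L^0.5
V = 1.0 / np.sqrt(R)            # from R_BLR * V^2 = constant

# Slope of log V against log L should be exactly -0.25
slope = np.polyfit(np.log(L), np.log(V), 1)[0]
print(round(slope, 3))  # -0.25
```

The positive correlation actually measured for 3C 390.3 is therefore a genuine departure from this scaling, not a unit or calibration effect.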


Zhu X.-G.,Chinese Academy of Sciences | Zhu X.-G.,CAS Institute of Plant Physiology and Ecology | Zhu X.-G.,University of Illinois at Urbana - Champaign | Long S.P.,University of Illinois at Urbana - Champaign | And 2 more authors.
Annual Review of Plant Biology | Year: 2010

Increasing the yield potential of the major food grain crops has contributed very significantly to a rising food supply over the past 50 years, which has until recently more than kept pace with rising global demand. Whereas improved photosynthetic efficiency has played only a minor role in the remarkable increases in productivity achieved in the last half century, further increases in yield potential will rely in large part on improved photosynthesis. Here we examine inefficiencies in photosynthetic energy transduction in crops from light interception to carbohydrate synthesis, and how classical breeding, systems biology, and synthetic biology are providing new opportunities to develop more productive germplasm. Near-term opportunities include improving the display of leaves in crop canopies to avoid light saturation of individual leaves and further investigation of a photorespiratory bypass that has already improved the productivity of model species. Longer-term opportunities include engineering into plants carboxylases that are better adapted to current and forthcoming CO2 concentrations, and the use of modeling to guide molecular optimization of resource investment among the components of the photosynthetic apparatus, to maximize carbon gain without increasing crop inputs. Collectively, these changes have the potential to more than double the yield potential of our major crops. Copyright © 2010 by Annual Reviews. All rights reserved.


Yang C.-Y.,Chinese Academy of Sciences | Fang Z.,Chinese Academy of Sciences | Li B.,Kangda BioEnergy Technology Co. | Long Y.-F.,542 Beijing Road
Renewable and Sustainable Energy Reviews | Year: 2012

Jatropha curcas L. is chosen as an ideal biodiesel crop in China because its seed kernel has a high oil content (43-61%) and it does not compete with food. Its oil is non-edible, and the trees can resist drought and grow on barren and marginal lands without using arable land. This article reviews the history of Jatropha in China, its current development status, and problems concerning its seeds, propagation, plantation management, oil extraction, biodiesel processing and other value-added production techniques. The commercial production of seed, oil and biodiesel, as well as research advances in China, is also introduced and discussed. Examples of our newly bred mutant and selected high-oil-yield Jatropha varieties, high-quality biodiesel, and a biodiesel pilot plant are presented. Finally, future prospects of the Jatropha biodiesel industry in China are discussed. © 2012 Elsevier Ltd. All rights reserved.


Shi G.,KTH Royal Institute of Technology | Johansson K.H.,KTH Royal Institute of Technology | Hong Y.,Chinese Academy of Sciences
IEEE Transactions on Automatic Control | Year: 2013

In this paper, a multi-agent system minimizing a sum of objective functions, where each component is known only to a particular node, is considered for continuous-time dynamics with time-varying interconnection topologies. Assuming that each node can observe a convex solution set of its optimization component, and that the intersection of all such sets is nonempty, the considered optimization problem is converted to an intersection computation problem. By a simple distributed control rule, the considered multi-agent system with continuous-time dynamics achieves not only a consensus, but also an optimal agreement within the optimal solution set of the overall optimization objective. Directed and bidirectional communications are studied, respectively, and connectivity conditions are given to ensure a global optimal consensus. In this way, the corresponding intersection computation problem is solved by the proposed decentralized continuous-time algorithm. We establish several important properties of the distance functions with respect to the global optimal solution set and a class of invariant sets with the help of convex and non-smooth analysis. © 1963-2012 IEEE.
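The intersection-computation idea can be sketched with the simplest possible sets. The rule below, consensus feedback plus a projection term pulling each node toward its own convex set, is an Euler-discretized sketch of this kind of dynamics, not the paper's exact control law; the interval sets, complete graph and step size are illustrative assumptions:

```python
import numpy as np

# Each node i observes a convex set X_i (here, an interval on the real line)
# and runs   x_i' = sum_j a_ij (x_j - x_i) + (P_{X_i}(x_i) - x_i),
# where P_{X_i} is the Euclidean projection onto X_i. If the intersection
# of all sets is nonempty, the states reach consensus inside it.
sets = [(2.0, 5.0), (3.0, 7.0), (1.0, 4.0)]   # intersection: [3, 4]

def project(val, lo, hi):
    """Projection of a scalar onto the interval [lo, hi]."""
    return min(max(val, lo), hi)

x = np.array([0.0, 10.0, -5.0])   # initial states, all outside some set
A = np.ones((3, 3)) - np.eye(3)   # complete graph, unit weights
dt = 0.05                         # Euler step

for _ in range(2000):
    cons = A @ x - A.sum(axis=1) * x   # consensus term: sum_j a_ij (x_j - x_i)
    proj = np.array([project(x[i], *sets[i]) - x[i] for i in range(3)])
    x = x + dt * (cons + proj)

print(np.round(x, 3))  # all three states close to a common point in [3, 4]
```

The same structure generalizes to vector states and switching graphs; the paper's contribution is proving convergence under those far weaker conditions.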


Sun C.,CAS Institute of Physics | Sun C.,Chinese Academy of Sciences | Li H.,CAS Institute of Physics | Li H.,Chinese Academy of Sciences | And 2 more authors.
Energy and Environmental Science | Year: 2012

The controllable synthesis of nanostructured CeO2-based materials is an imperative issue for environment- and energy-related applications. In this review, we present the recent technological and theoretical advances related to CeO2-based nanomaterials, with a focus on the synthesis from one-dimensional to mesoporous ceria as well as the properties from defect chemistry to nano-size effects. Seven extensively studied aspects regarding the applications of nanostructured ceria-based materials are selectively surveyed as well. New experimental approaches have been demonstrated with atomic-scale-resolution characterization. Density functional theory (DFT) calculations can provide insight into the rational design of highly reactive catalysts and understanding of the interactions between the noble metal and ceria support. Achieving desired morphologies with designed crystal facets and oxygen vacancy clusters in ceria via a controlled synthesis process is quite important for highly active catalysts. Finally, remarks on the challenges and perspectives on this exciting field are proposed. © 2012 The Royal Society of Chemistry.


Zhang X.-G.,Chinese Academy of Sciences | Zhang X.-G.,Chinese Center for Antarctic Astronomy
Monthly Notices of the Royal Astronomical Society: Letters | Year: 2013

In this Letter, under the widely accepted theoretical accretion disc model for the double-peaked emitter 3C 390.3, the extended disc-like broad-line region can be well split into 10 rings, and then the time lags between the lines from the rings and the continuum emission are estimated, based on the observed spectra around 1995. We can find one very strong correlation between the determined time lags (in units of light-days) and the flux-weighted radii (in units of RG) of the rings, which is well consistent with the expected results through the theoretical accretion disc model. Moreover, through the strong correlation, the black hole mass of 3C 390.3 is independently estimated as 10^9 M⊙, the same as the reported black hole masses in the literature. The consistencies provide further evidence to strongly support the accretion disc origination of the double-peaked broad Balmer lines of 3C 390.3. © 2013 The Author Published by Oxford University Press on behalf of the Royal Astronomical Society.
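The conversion between the two radius units used above is direct: a ring at r gravitational radii (R_G = GM/c²) lags the continuum by roughly r·R_G/c of light-travel time. A sketch for a 10^9 M⊙ black hole, as quoted for 3C 390.3 (the 500 R_G ring radius is an illustrative value, not a measured one):

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m s^-1
M_SUN = 1.989e30     # solar mass, kg
LIGHT_DAY = C * 86400.0   # one light-day in metres

def lag_light_days(radius_in_rg, mbh_solar):
    """Light-travel lag (days) for a ring at radius_in_rg gravitational radii."""
    r_g = G * mbh_solar * M_SUN / C**2   # gravitational radius in metres
    return radius_in_rg * r_g / LIGHT_DAY

# For 10^9 solar masses, R_G is about 0.06 light-days, so a ring at a few
# hundred R_G lags the continuum by tens of light-days.
print(round(lag_light_days(500, 1e9), 1))  # 28.5
```

This is why a strong lag-versus-radius correlation across the rings pins down the mass: the slope of lag against radius (in R_G) is R_G/c, which is proportional to M_BH.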


Zhu W.,Chongqing University of Posts and Telecommunications | Zhu W.,Chinese Academy of Sciences | Cheng D.,Chinese Academy of Sciences
Automatica | Year: 2010

In this paper, a leader-following consensus problem of second-order multi-agent systems with fixed and switching topologies as well as non-uniform time-varying delays is considered. For the case of fixed topology, a necessary and sufficient condition is obtained. For the case of switching topology, a sufficient condition is obtained under the assumption that the total period over which the leader is globally reachable is sufficiently large. We not only prove that a consensus is reachable asymptotically but also give an estimation of the convergence rate. An example with simulation is presented to illustrate the theoretical results. © 2010 Elsevier Ltd. All rights reserved.
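A discretized sketch of the fixed-topology case, omitting the non-uniform time-varying delays that are the paper's main subject; the chain graph, gains and constant-velocity leader are illustrative assumptions, not the paper's protocol:

```python
import numpy as np

# Second-order leader-following consensus: follower i applies feedback on
# neighbours' relative positions and velocities, plus a direct leader term
# with gain b_i > 0 only for followers that observe the leader.
n = 4
A = np.array([[0, 1, 0, 0],      # fixed undirected chain 0-1-2-3
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], float)
b = np.array([1.0, 0.0, 0.0, 0.0])   # only follower 0 sees the leader
k1, k2 = 1.0, 2.0                    # illustrative position/velocity gains
dt = 0.01

x = np.array([4.0, -2.0, 1.0, 6.0])  # follower positions
v = np.zeros(n)                      # follower velocities
x0, v0 = 0.0, 0.5                    # leader: constant velocity

for _ in range(10000):
    u = np.zeros(n)
    for i in range(n):
        u[i] = sum(A[i, j] * (k1 * (x[j] - x[i]) + k2 * (v[j] - v[i]))
                   for j in range(n))
        u[i] += b[i] * (k1 * (x0 - x[i]) + k2 * (v0 - v[i]))
    x, v = x + dt * v, v + dt * u    # Euler step
    x0 += dt * v0                    # leader moves at constant velocity

print(np.round(x - x0, 3), np.round(v - v0, 3))  # tracking errors near zero
```

Because the leader is globally reachable through the chain, position and velocity errors decay to zero; the paper's analysis extends this to switching topologies and communication delays, where reachability only needs to hold often enough.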


Ma Q.,Hainan University | Lu H.,Chinese Academy of Sciences
Desalination | Year: 2011

Throughout the world, desalination is intensively used as a means to reduce current or future water scarcity, especially in coastal areas. However, the dramatic increase in desalinated water supply will create a series of problems, the most significant of which relate to energy consumption and environmental impacts. Renewable energy provides both energy security and an environmentally friendly option at a time when decreasing global reserves of fossil fuels threaten the long-term sustainability of the global economy. Thus, the integration of renewable resources into desalination and water purification is becoming increasingly attractive. This paper presents a brief review of the highlights achieved worldwide in recent years and the state of the art of the most important efforts in desalination powered by wind energy, one of the most common forms of renewable energy. Wind energy conversion schemes, modeling and experimental studies of various wind-energy-powered desalination plants, and the prototypes established worldwide are the main topics discussed. Moreover, two important technological problems in wind utilization are discussed, and present or potential countermeasures for the intermittent character and direct utilization of wind energy are presented. © 2011.