Cambridge, MA, United States

The Massachusetts Institute of Technology is a private research university in Cambridge, Massachusetts. Founded in 1861 in response to the increasing industrialization of the United States, MIT adopted a European polytechnic university model and stressed laboratory instruction in applied science and engineering. Researchers worked on computers, radar, and inertial guidance during World War II and the Cold War. Post-war defense research contributed to the rapid expansion of the faculty and campus under James Killian. The current 168-acre campus opened in 1916 and extends over 1 mile along the northern bank of the Charles River basin.

MIT, with five schools and one college containing a total of 32 departments, is traditionally known for research and education in the physical sciences and engineering, and more recently in biology, economics, linguistics, and management. The "Engineers" sponsor 31 sports, with most teams competing in the NCAA Division III's New England Women's and Men's Athletic Conference; the Division I rowing programs compete as part of the EARC and EAWRC.

MIT is often cited as among the world's top universities. As of 2014, 81 Nobel laureates, 52 National Medal of Science recipients, 45 Rhodes Scholars, 38 MacArthur Fellows, and 2 Fields Medalists have been affiliated with MIT. MIT has a strong entrepreneurial culture, and the aggregated revenues of companies founded by MIT alumni would rank as the eleventh-largest economy in the world. (Wikipedia)

Agency: NSF | Branch: Standard Grant | Program: | Phase: ATMOSPHERIC CHEMISTRY | Award Amount: 470.71K | Year: 2015

This project focuses on an investigation of the effects of recent small volcanic eruptions on stratospheric ozone. It was once believed that only very explosive volcanic eruptions could affect the content of small particles in the stratosphere. However, recent observations suggest that substantial volcanic enhancements of the stratospheric aerosol layer have occurred due to a series of smaller eruptions since about 2005. These eruptions have occurred just as ozone recovery should be becoming increasingly discernible. Volcanic aerosols have had major, well-documented effects on past changes in ozone, but have received only qualitative rather than quantitative attention in efforts to identify recovery to date. This research will advance the understanding of the sensitivity of ozone to changes in stratospheric aerosols and improve the basis for estimating uncertainty in ozone recovery.

A broad range of aerosol data since 2000 from satellite- and ground-based methods will be compared and used to improve the characterization of recent stratospheric aerosol changes and the uncertainties across different measurements. Data to be used include the CCMI (Chemistry-Climate Model Initiative) project aerosol data together with lidar data from distributed stations, satellite observations, and the University of Wyoming balloonsondes. The Specified Dynamics Whole Atmosphere Community Climate Model (SD-WACCM) will be used to conduct sensitivity tests examining how uncertainties in kinetic rate constants, in background aerosol levels, and in variability of water vapor influence how volcanic surface area changes affect ozone depletion in the lowermost stratosphere, including the region close to the tropopause.

The largest stratospheric aerosol burden since 1991 was observed in 2011 following the Nabro eruption. At that time, reported column abundances of stratospheric aerosol at some locations and times reached as high as 30-50% of the maximum values obtained after Pinatubo. To date, the implications of these perturbations for mid-latitude and global ozone loss and recovery have not been assessed. Ensuring that the signature of chemical recovery can be fully understood is broadly acknowledged as a needed step in the comprehensive evaluation of scientific understanding and its role in the science-policy interface on this highly visible issue.

Bove V.M., Massachusetts Institute of Technology
Proceedings of the IEEE | Year: 2012

Holography, with its stunning 3-D realism and its expressive potential, in the 1970s and 1980s seemed poised to become the next step in the evolution of visual display. Yet apart from certain specialized niches, display holograms are perhaps more rarely encountered in everyday life than they were 20 or 30 years ago. But the recent resurgence of interest in 3-D video for entertainment applications has underlined the limitations of left/right stereoscopic imaging and created a desire for more natural 3-D imagery in which no glasses are required and all perceptual cues to depth are provided in a consistent fashion. Could holography, whose transition from darkroom to digital has taken some years longer than that of photography, capitalize on this opportunity? In this paper, I examine digital developments in holographic printing, holographic projection, and holographic television, and explore connections between holographic imaging and areas such as integral imaging and telepresence. © 2012 IEEE.

Chan V.W.S., Massachusetts Institute of Technology
Proceedings of the IEEE | Year: 2012

Present-day networks are being challenged by dramatic increases in data rate demands of emerging applications. A new network architecture, incorporating optical flow switching, will enable significant rate growth, power efficiency, and cost-effective scalability of next-generation networks. We will explore architecture concepts germinated 22 years ago, technology and testbed demonstrations performed in the last 17 years, and the architecture construct from the physical layer to the transport layer of an implementable optical flow switching network that is scalable and manageable. © 2012 IEEE.

Weidman M.C., Massachusetts Institute of Technology
Nature Materials | Year: 2016

On solvent evaporation, non-interacting monodisperse colloidal particles self-assemble into a close-packed superlattice. Although the initial and final states can be readily characterized, little is known about the dynamic transformation from colloid to superlattice. Here, by using in situ grazing-incidence X-ray scattering, we tracked the self-assembly of lead sulfide nanocrystals in real time. Following the first appearance of an ordered arrangement, the superlattice underwent uniaxial contraction and collective rotation as it approached its final body-centred cubic structure. The nanocrystals became crystallographically aligned early in the overall self-assembly process, showing that nanocrystal ordering occurs on a faster timescale than superlattice densification. Our findings demonstrate that synchrotron X-ray scattering is a viable method for studying self-assembly in its native environment, with ample time resolution to extract kinetic rates and observe intermediate configurations. The method could be used for real-time direction of self-assembly processes and to better understand the forces governing self-organization of soft materials. © 2016 Nature Publishing Group

Zeng H., Massachusetts Institute of Technology
Nature Genetics | Year: 2015

The contribution of repetitive elements to quantitative human traits is largely unknown. Here we report a genome-wide survey of the contribution of short tandem repeats (STRs), which constitute one of the most polymorphic and abundant repeat classes, to gene expression in humans. Our survey identified 2,060 significant expression STRs (eSTRs). These eSTRs were replicable in orthogonal populations and expression assays. We used variance partitioning to disentangle the contribution of eSTRs from that of linked SNPs and indels and found that eSTRs contribute 10–15% of the cis heritability mediated by all common variants. Further functional genomic analyses showed that eSTRs are enriched in conserved regions, colocalize with regulatory elements and may modulate certain histone modifications. By analyzing known genome-wide association study (GWAS) signals and searching for new associations in 1,685 whole genomes from deeply phenotyped individuals, we found that eSTRs are enriched in various clinically relevant conditions. These results highlight the contribution of STRs to the genetic architecture of quantitative human traits. © 2015 Nature Publishing Group, a division of Macmillan Publishers Limited. All Rights Reserved.

Farahat W.A., Massachusetts Institute of Technology
PLoS computational biology | Year: 2010

Integrative approaches to studying the coupled dynamics of skeletal muscles with their loads while under neural control have focused largely on questions pertaining to the postural and dynamical stability of animals and humans. Prior studies have focused on how the central nervous system actively modulates muscle mechanical impedance to generate and stabilize motion and posture. However, the question of whether muscle impedance properties can be neurally modulated to create favorable mechanical energetics, particularly in the context of periodic tasks, remains open. Through muscle stiffness tuning, we hypothesize that a pair of antagonist muscles acting against a common load may produce significantly more power synergistically than individually when impedance matching conditions are met between muscle and load. Since neurally modulated muscle stiffness contributes to the coupled muscle-load stiffness, we further anticipate that power-optimal oscillation frequencies will occur at frequencies greater than the natural frequency of the load. These hypotheses were evaluated computationally by applying optimal control methods to a bilinear muscle model, and also evaluated through in vitro measurements on frog Plantaris longus muscles acting individually and in pairs upon a mass-spring-damper load. We find a 7-fold increase in mechanical power when antagonist muscles act synergistically compared to individually at a frequency higher than the load natural frequency. These observed behaviors are interpreted in the context of resonance tuning and the engineering notion of impedance matching. These findings suggest that the central nervous system can adopt strategies to harness inherent muscle impedance in relation to external loads to attain favorable mechanical energetics.
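The resonance-tuning interpretation above can be illustrated with a toy calculation (all values hypothetical; this is a simplification, not the paper's bilinear muscle model): a neurally modulated muscle stiffness adds in parallel to the load stiffness, so the coupled natural frequency, the power-optimal operating point under impedance matching, sits above that of the load alone.

```python
import math

# Toy sketch of resonance tuning: muscle stiffness k_muscle acting in parallel
# with load stiffness k_load raises the coupled natural frequency of the
# mass-spring system. All values below are arbitrary illustrative numbers.
m, k_load, k_muscle = 0.1, 40.0, 25.0    # kg, N/m, N/m (hypothetical)

f_load = math.sqrt(k_load / m) / (2 * math.pi)                  # load alone
f_coupled = math.sqrt((k_load + k_muscle) / m) / (2 * math.pi)  # muscle + load

# Consistent with the hypothesis: the optimum exceeds the load natural frequency.
print(f_coupled > f_load)   # True
```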

Wachinger C., Massachusetts Institute of Technology | Navab N., TU Munich
IEEE Transactions on Pattern Analysis and Machine Intelligence | Year: 2013

We address the alignment of a group of images with simultaneous registration. To this end, we provide further insights into a recently introduced framework for multivariate similarity measures, referred to as accumulated pair-wise estimates (APE), and derive efficient optimization methods for it. More specifically, we show a strict mathematical deduction of APE from a maximum-likelihood framework and establish a connection to the congealing framework. This is only possible after an extension of the congealing framework with neighborhood information. Moreover, we address the increased computational complexity of simultaneous registration by deriving efficient gradient-based optimization strategies for APE: Gauss-Newton and the efficient second-order minimization (ESM). In addition to SSD, we show how intrinsically nonsquared similarity measures can be used in this least-squares optimization framework. The fundamental assumption of ESM, the approximation of the perfectly aligned moving image through the fixed image, limits its application to monomodal registration. We therefore incorporate recently proposed structural representations of images, which allow us to perform multimodal registration with ESM. Finally, we evaluate the performance of the optimization strategies with respect to the similarity measures, leading to very good results for ESM. The extension to multimodal registration is in this context very interesting because it offers further possibilities for evaluation, due to publicly available datasets with ground-truth alignment. © 1979-2012 IEEE.
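The Gauss-Newton strategy mentioned above can be sketched in a minimal generic form: estimating a 1-D translation that minimizes SSD between a fixed and a moving signal. This is a hedged illustration of the general least-squares update, not the paper's APE or ESM formulation; the signals and the shift value are invented.

```python
import numpy as np

# Generic Gauss-Newton sketch for 1-D translational registration under SSD.
x = np.linspace(0, 2 * np.pi, 200)
fixed = np.sin(x)
true_t = 0.3
moving = np.sin(x - true_t)          # moving signal is the fixed one, shifted

t = 0.0
for _ in range(20):
    warped = np.interp(x + t, x, moving)   # resample moving at the shifted grid
    residual = fixed - warped
    J = np.gradient(warped, x)             # Jacobian of warped w.r.t. t
    t += (J @ residual) / (J @ J)          # Gauss-Newton update step

# t converges toward the true shift of 0.3 (up to small boundary effects).
```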

Greenwald M., Massachusetts Institute of Technology
Physics of Plasmas | Year: 2010

Dramatic progress in the scope and power of plasma simulations over the past decade has extended our understanding of these complex phenomena. However, as codes embody imperfect models for physical reality, a necessary step toward developing a predictive capability is demonstrating agreement, without bias, between simulations and experimental results. While comparisons between computer calculations and experimental data are common, there is a compelling need to make these comparisons more systematic and more quantitative. Tests of models are divided into two phases, usually called verification and validation. Verification is an essentially mathematical demonstration that a chosen physical model, rendered as a set of equations, has been accurately solved by a computer code. Validation is a physical process which attempts to ascertain the extent to which the model used by a code correctly represents reality within some domain of applicability, to some specified level of accuracy. This paper will cover principles and practices for verification and validation including lessons learned from related fields. © 2010 American Institute of Physics.

Schlaufman K.C., Massachusetts Institute of Technology
Astrophysical Journal | Year: 2014

Kepler has identified over 600 multiplanet systems, many of which have several planets with orbital distances smaller than that of Mercury. Because these systems may be difficult to explain in the paradigm of core accretion and disk migration, it has been suggested that they formed in situ within protoplanetary disks with high solid surface densities. The strong connection between giant planet occurrence and stellar metallicity is thought to be linked to enhanced solid surface densities in disks around metal-rich stars, so the presence of a giant planet can be a sign of planet formation in a high solid surface density disk. I formulate quantitative predictions for the frequency of long-period giant planets in these in situ models by translating the proposed increase in disk mass into an equivalent metallicity enhancement. I rederive the scaling of giant planet occurrence with metallicity as Pgp = 0.05 (+0.02/-0.02) × 10^((2.1±0.4)[M/H]) = 0.08 (+0.02/-0.03) × 10^((2.3±0.4)[Fe/H]) and show that there is significant tension between the frequency of giant planets suggested by the minimum mass extrasolar nebula scenario and the observational upper limits. Consequently, high-mass disks alone cannot explain the observed properties of the close-in Kepler multiplanet systems and therefore migration is still important. More speculatively, I combine the metallicity scaling of giant planet occurrence with small planet occurrence rates to estimate the number of solar system analogs in the Galaxy. I find that in the Milky Way there are perhaps 4 × 10^6 true solar system analogs with an FGK star hosting both a terrestrial planet in the habitable zone and a long-period giant planet companion. © 2014. The American Astronomical Society. All rights reserved.
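The quoted occurrence scaling is straightforward to evaluate numerically. The sketch below uses only the central values of the [Fe/H] form and ignores the quoted uncertainties; it is an illustration, not a reproduction of the paper's fit.

```python
def giant_planet_occurrence(fe_h):
    """Central-value occurrence rate of long-period giant planets as a
    function of stellar metallicity: Pgp = 0.08 * 10**(2.3*[Fe/H])."""
    return 0.08 * 10 ** (2.3 * fe_h)

# At solar metallicity ([Fe/H] = 0) the baseline rate is 8%.
print(giant_planet_occurrence(0.0))   # 0.08
# A metal-rich star at [Fe/H] = +0.3 is roughly 5x more likely to host one.
print(giant_planet_occurrence(0.3))
```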

Drucker A., Massachusetts Institute of Technology
Proceedings - Annual IEEE Symposium on Foundations of Computer Science, FOCS | Year: 2012

Given an instance of a hard decision problem, a limited goal is to compress that instance into a smaller, equivalent instance of a second problem. As one example, consider the problem where, given Boolean formulas ψ1, ..., ψt, we must determine if at least one ψj is satisfiable. An OR-compression scheme for SAT is a polynomial-time reduction that maps (ψ1, ..., ψt) to a string z, such that z lies in some "target" language L′ if and only if ∨j [ψj ∈ SAT] holds. (Here, L′ can be arbitrarily complex.) AND-compression schemes are defined similarly. A compression scheme is strong if |z| is polynomially bounded in n = maxj |ψj|, independent of t. Strong compression for SAT seems unlikely. Work of Harnik and Naor (FOCS '06/SICOMP '10) and Bodlaender, Downey, Fellows, and Hermelin (ICALP '08/JCSS '09) showed that the infeasibility of strong OR-compression for SAT would show limits to instance compression for a large number of natural problems. Bodlaender et al. also showed that the infeasibility of strong AND-compression for SAT would have consequences for a different list of problems. Motivated by this, Fortnow and Santhanam (STOC '08/JCSS '11) showed that if SAT is strongly OR-compressible, then NP ⊆ coNP/poly. Finding similar evidence against AND-compression was left as an open question. We provide such evidence: we show that strong AND- or OR-compression for SAT would imply non-uniform, statistical zero-knowledge proofs for SAT - an even stronger and more unlikely consequence than NP ⊆ coNP/poly. Our method applies against probabilistic compression schemes of sufficient "quality" with respect to the reliability and compression amount (allowing for tradeoff). This greatly strengthens the evidence given by Fortnow and Santhanam against probabilistic OR-compression for SAT.
We also give variants of these results for the analogous task of quantum instance compression, in which a polynomial-time quantum reduction must output a quantum state that, in an appropriate sense, "preserves the answer" to the input instance. The central idea in our proofs is to exploit the information bottleneck in an AND-compression scheme for a language L in order to fool a cheating prover in a proof system for L̄. Our key technical tool is a new method to "disguise" information being fed into a compressive mapping; we believe this method may find other applications. © 2012 IEEE.

Tuller H.L., Massachusetts Institute of Technology
Sensors and Actuators, B: Chemical | Year: 2013

Extensive emissions and fuel economy directives are stimulating the development of ever more sophisticated sensors, catalyst systems and emissions traps with improved performance and self-evaluation capabilities. For the development of such systems, deeper insights are required to understand the often complex interplay between the chemical state of the material and its electrical, optical, thermal and mechanical properties. Three examples, illustrating how information gained about the defect, electronic and transport structure of oxide materials can be used to optimize sensor and catalyst properties, are presented. Also discussed are means for in situ monitoring of the status of exhaust catalysts and traps. © 2012 Elsevier B.V. All rights reserved.

Muller B., Duke University | Schukraft J., CERN | Wyslouch B., Massachusetts Institute of Technology
Annual Review of Nuclear and Particle Science | Year: 2012

At the end of 2010, the Large Hadron Collider (LHC) at CERN started operation with heavy-ion beams, colliding lead nuclei at a center-of-mass energy of 2.76 TeV per nucleon. These collisions ushered in a new era in ultrarelativistic heavy-ion physics at energies exceeding that of previous accelerators by more than an order of magnitude. This review summarizes the results from the first year of heavy-ion physics at the LHC obtained by the three experiments participating in the heavy-ion program: ALICE, ATLAS, and CMS. © 2012 by Annual Reviews.

Kounine A., Massachusetts Institute of Technology
International Journal of Modern Physics E | Year: 2012

The Alpha Magnetic Spectrometer (AMS-02) is a general-purpose high-energy particle detector which was successfully deployed on the International Space Station (ISS) on May 19, 2011 to conduct a unique long-duration mission of fundamental physics research in space. The physics objectives of AMS include the search for an understanding of dark matter and antimatter, the origin of cosmic rays, and the exploration of new physics phenomena that cannot be studied with ground-based experiments. This paper reviews the layout of the AMS-02 detector, tests and calibrations performed with the detector on the ground, and its performance on the ISS, illustrated with data collected during the first year of operations in space. © 2012 World Scientific Publishing Company.

Sterman J.D., Massachusetts Institute of Technology
Climatic Change | Year: 2011

The Intergovernmental Panel on Climate Change (IPCC) has been extraordinarily successful in the task of knowledge synthesis and risk assessment. However, the strong scientific consensus on the detection, attribution, and risks of climate change stands in stark contrast to widespread confusion, complacency and denial among policymakers and the public. Risk communication is now a major bottleneck preventing science from playing an appropriate role in climate policy. Here I argue that the ability of the IPCC to fulfill its mission can be enhanced through better understanding of the mental models of the audiences it seeks to reach, then altering the presentation and communication of results accordingly. Few policymakers are trained in science, and public understanding of basic facts about climate change is poor. But the problem is deeper. Our mental models lead to persistent errors and biases in complex dynamic systems like the climate and economy. Where the consequences of our actions spill out across space and time, our mental models have narrow boundaries and focus on the short term. Where the dynamics of complex systems are conditioned by multiple feedbacks, time delays, accumulations and nonlinearities, we have difficulty recognizing and understanding feedback processes, underestimate time delays, and do not understand basic principles of accumulation or how nonlinearities can create regime shifts. These problems arise not only among laypeople but also among highly educated elites with significant training in science. They arise not only in complex systems like the climate but also in familiar contexts such as filling a bathtub. Therefore they cannot be remedied merely by providing more information about the climate, but require different modes of communication, including experiential learning environments such as interactive simulations. © 2011 Springer Science+Business Media B.V.

Nikurashin M., Princeton University | Ferrari R., Massachusetts Institute of Technology
Geophysical Research Letters | Year: 2011

A global estimate of the energy conversion rate from geostrophic flows into internal lee waves in the ocean is presented. The estimate is based on a linear theory applied to bottom topography at O(1-10) km scales obtained from single beam echo soundings, to bottom stratification estimated from climatology, and to bottom velocity obtained from a global ocean model. The total energy flux into internal lee waves is estimated to be 0.2 TW which is 20% of the global wind power input into the ocean. The geographical distribution of the energy flux is largest in the Southern Ocean which accounts for half of the total energy flux. The results suggest that the generation of internal lee waves at rough topography is a significant energy sink for the geostrophic flows as well as an important energy source for internal waves and the associated turbulent mixing in the deep ocean. © 2011 by the American Geophysical Union.

Schlichting H.E., Massachusetts Institute of Technology
Astrophysical Journal Letters | Year: 2014

Recent observations by the Kepler space telescope have led to the discovery of more than 4000 exoplanet candidates consisting of many systems with Earth- to Neptune-sized objects that reside well inside the orbit of Mercury around their respective host stars. How and where these close-in planets formed is one of the major unanswered questions in planet formation. Here, we calculate the required disk masses for in situ formation of the Kepler planets. We find that if close-in planets formed as isolation masses, then standard gas-to-dust ratios yield corresponding gas disks that are gravitationally unstable for a significant fraction of systems, ruling out such a scenario. We show that the maximum width of a planet's accretion region in the absence of any migration is 2v_esc/Ω, where v_esc is the escape velocity of the planet and Ω is the Keplerian frequency, and we use it to calculate the required disk masses for in situ formation with giant impacts. Even with giant impacts, formation without migration requires disk surface densities in solids of 10^3-10^5 g cm^-2 at semi-major axes less than 0.1 AU, implying typical enhancements above the minimum-mass solar nebula (MMSN) by at least a factor of 20. Corresponding gas disks are below but not far from the gravitational stability limit. In contrast, formation beyond a few AU is consistent with MMSN disk masses. This suggests that the migration of either solids or fully assembled planets is likely to have played a major role in the formation of close-in super-Earths and mini-Neptunes. © 2014. The American Astronomical Society. All rights reserved.
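The accretion-zone width 2v_esc/Ω is easy to check with a back-of-the-envelope calculation. The sketch below assumes, purely for illustration, an Earth-mass, Earth-radius planet at 0.1 AU around a solar-mass star; the numbers are not from the paper.

```python
import math

# Rough evaluation of the feeding-zone width delta_a = 2 * v_esc / Omega.
GM_sun = 1.327e20      # m^3 s^-2, solar gravitational parameter
AU = 1.496e11          # m
v_esc = 11.2e3         # m/s, Earth's escape velocity (illustrative planet)

a = 0.1 * AU
omega = math.sqrt(GM_sun / a**3)     # Keplerian frequency at 0.1 AU
width_au = 2 * v_esc / omega / AU    # accretion-zone width in AU

print(round(width_au, 3))            # about 0.024 AU, i.e. ~24% of the orbit
```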

Perron J.T., Massachusetts Institute of Technology
Journal of Geophysical Research: Earth Surface | Year: 2011

The numerical methods used to solve nonlinear sediment transport equations often set very restrictive limits on the stability and accuracy of landscape evolution models. This is especially true for hillslope transport laws in which sediment flux increases nonlinearly as the surface slope approaches a limiting value. Explicit-time finite difference methods applied to such laws are subject to fundamental limits on numerical stability that require time steps much shorter than the timescales over which landscapes evolve, creating a heavy computational burden. I present an implicit method for nonlinear hillslope transport that builds on a previously proposed approach to modeling alluvial sediment transport and improves stability and accuracy by avoiding the direct calculation of sediment flux. This method can be adapted to any transport law in which the expression for sediment flux is differentiable. Comparisons of numerical solutions with analytic solutions in one and two dimensions show that the implicit method retains the accuracy of a standard explicit method while permitting time steps several orders of magnitude longer than the maximum stable time step for the explicit method. The ability to take long time steps affords a substantial savings in overall computation time, despite the implicit method's higher per-iteration computational cost. Implicit models for hillslope evolution also offer a distinct advantage when modeling the response of hillslopes to incising channels. Copyright 2011 by the American Geophysical Union.
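The stability contrast described above can be sketched for simple linear diffusion. This is a minimal backward Euler example illustrating why implicit stepping permits time steps far beyond the explicit limit; the paper itself treats a nonlinear flux law with a different implicit formulation.

```python
import numpy as np

# 1-D linear hillslope diffusion dz/dt = D d2z/dx2, stepped implicitly at
# 100x the explicit (FTCS) stability limit dt_max = dx^2 / (2D).
D, dx, n = 0.01, 1.0, 51              # diffusivity (m^2/yr), spacing (m), nodes
dt = 100 * dx**2 / (2 * D)            # far beyond the explicit stable step

z = np.zeros(n)
z[n // 2] = 1.0                       # initial bump; fixed z = 0 boundaries

# Backward Euler: (I + dt*D*L) z_new = z_old, with L the negative 1-D Laplacian.
r = dt * D / dx**2
A = np.eye(n)
for i in range(1, n - 1):
    A[i, i - 1] -= r
    A[i, i] += 2 * r
    A[i, i + 1] -= r
z_new = np.linalg.solve(A, z)

# The implicit update stays bounded and non-negative even at this step size.
print(z_new.max() <= z.max())         # True
```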

Colton C.K., Massachusetts Institute of Technology
Advanced Drug Delivery Reviews | Year: 2014

Therapeutic cells encapsulated in immunobarrier devices have promise for treatment of a variety of human diseases without immunosuppression. The absence of sufficient oxygen supply to maintain viability and function of encapsulated tissue has been the most critical impediment to progress. Within the framework of oxygen supply limitations, we review the major issues related to development of these devices, primarily in the context of encapsulated islets of Langerhans for treating diabetes, including device designs and materials, supply of tissue, protection from immune rejection, and maintenance of cell viability and function. We describe various defensive measures investigated to enhance survival of transplanted tissue, and we review the diverse approaches to enhancement of oxygen transport to encapsulated tissue, including manipulation of diffusion distances and oxygen permeability of materials, induction of neovascularization with angiogenic factors and vascularizing membranes, and methods for increasing the oxygen concentration adjacent to encapsulated tissue so as to exceed that in the microvasculature. Recent developments, particularly in this latter area, suggest that the field is ready for clinical trials of encapsulated therapeutic cells to treat diabetes. © 2014 Elsevier B.V.

Vishwanath A., University of California at Berkeley | Vishwanath A., Lawrence Berkeley National Laboratory | Senthil T., Massachusetts Institute of Technology
Physical Review X | Year: 2013

We discuss physical properties of "integer" topological phases of bosons in D = 3 + 1 dimensions, protected by internal symmetries like time reversal and/or charge conservation. These phases invoke interactions in a fundamental way but do not possess topological order; they are bosonic analogs of free-fermion topological insulators and superconductors. While a formal cohomology-based classification of such states was recently discovered, their physical properties remain mysterious. Here, we develop a field-theoretic description of several of these states and show that they possess unusual surface states, which, if gapped, must either break the underlying symmetry or develop topological order. In the latter case, symmetries are implemented in a way that is forbidden in a strictly two-dimensional theory. While these phases are the usual fate of the surface states, exotic gapless states can also be realized. For example, tuning parameters can naturally lead to a deconfined quantum critical point or, in other situations, to a fully symmetric vortex metal phase. We discuss cases where the topological phases are characterized by a quantized magnetoelectric response Θ, which, somewhat surprisingly, is an odd multiple of 2π. Two different surface theories are shown to capture these phenomena: The first is a nonlinear sigma model with a topological term. The second invokes vortices on the surface that transform under a projective representation of the symmetry group. We identify a bulk field theory consistent with these properties, which is a multicomponent background-field theory supplemented, crucially, with a topological term. We also provide bulk sigma-model field theories of these phases and discuss a possible topological phase characterized by the thermal analog of the magnetoelectric effect.

Seager S., Massachusetts Institute of Technology | Deming D., Goddard Space Flight Center
Annual Review of Astronomy and Astrophysics | Year: 2010

At the dawn of the first discovery of exoplanets orbiting Sun-like stars in the mid-1990s, few believed that observations of exoplanet atmospheres would ever be possible. After the 2002 Hubble Space Telescope detection of a transiting exoplanet atmosphere, many skeptics discounted it as a one-object, one-method success. Nevertheless, the field is now firmly established, with over two dozen exoplanet atmospheres observed today. Hot Jupiters are the type of exoplanet currently most amenable to study. Highlights include: detection of molecular spectral features, observation of day-night temperature gradients, and constraints on vertical atmospheric structure. Atmospheres of giant planets far from their host stars are also being studied with direct imaging. The ultimate exoplanet goal is to answer the enigmatic and ancient question, "Are we alone?" via detection of atmospheric biosignatures. Two exciting prospects are the immediate focus on transiting super-Earths orbiting in the habitable zone of M dwarfs, and ultimately the spaceborne direct imaging of true Earth analogs. © 2010 by Annual Reviews.

A strategy to enable zero-carbon variable electricity production with full utilization of renewable and nuclear energy sources has been developed. Wind and solar systems send electricity to the grid. Nuclear plants operate at full capacity with variable steam to turbines to match electricity demand with production (renewables and nuclear). Excess steam at times of low electricity prices and electricity demand goes to hybrid fuel production and storage systems. The characteristic of these hybrid technologies is that the economic penalties for variable nuclear steam inputs are small. Three hybrid systems were identified that could be deployed at the required scale. The first option is the gigawatt-year hourly-to-seasonal heat storage system where excess steam from the nuclear plant is used to heat rock a kilometer underground to create an artificial geothermal heat source. The heat source produces electricity on demand using geothermal technology. The second option uses steam from the nuclear plant and electricity from the grid with high-temperature electrolysis (HTE) cells to produce hydrogen and oxygen. Hydrogen is primarily for industrial applications; however, the HTE can be operated in reverse using hydrogen for peak electricity production. The third option uses variable steam and electricity for shale oil production. © 2013 Elsevier Ltd.

Kamrin K.,Massachusetts Institute of Technology | Koval G.,National Institute for Applied Sciences, Strasbourg
Physical Review Letters | Year: 2012

Extending recent modeling efforts for emulsions, we propose a nonlocal fluidity relation for flowing granular materials, capturing several known finite-size effects observed in steady flow. We express the local Bagnold-type granular flow law in terms of a fluidity ratio and then extend it with a particular Laplacian term that is scaled by the grain size. The resulting model is calibrated against a sequence of existing discrete element method data sets for two-dimensional annular shear, where it is shown that the model correctly describes the divergence from a local rheology due to the grain size as well as the rate-independence phenomenon commonly observed in slowly flowing zones. The same law is then applied in two additional inhomogeneous flow geometries, and the predicted velocity profiles are compared against corresponding discrete element method simulations utilizing the same grain composition as before, yielding favorable agreement in each case. © 2012 American Physical Society.
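The nonlocal extension described here is commonly written in a fluidity form; the following schematic equation is a sketch of that structure (the symbols are illustrative, not quoted from the paper):

```latex
% Local Bagnold-type rheology recast through a granular fluidity g,
% then extended with a Laplacian term whose coefficient is set by the
% grain size d through a cooperativity length \xi:
g \;=\; g_{\mathrm{loc}}(\mu, P) \;+\; \xi^{2}\,\nabla^{2} g,
\qquad \xi \;\propto\; d,
\qquad g \;\equiv\; \dot{\gamma}/\mu .
```

Here \mu is the ratio of shear to normal stress, P the pressure, and \dot{\gamma} the shear rate; setting \xi \to 0 recovers the local flow law, while the Laplacian term carries the grain-size-dependent deviations seen in the annular-shear data.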

Orr-Weaver T.L.,Massachusetts Institute of Technology
Trends in Genetics | Year: 2015

Defining how organ size is regulated, a process controlled not only by the number of cells but also by the size of the cells, is a frontier in developmental biology. Large cells are produced by increasing DNA content or ploidy, a developmental strategy employed throughout the plant and animal kingdoms. The widespread use of polyploidy during cell differentiation makes it important to define how this hypertrophy contributes to organogenesis. I discuss here examples from a variety of animals and plants in which polyploidy controls organ size, the size and function of specific tissues within an organ, or the differentiated properties of cells. In addition, I highlight how polyploidy functions in wound healing and tissue regeneration. © 2015 Elsevier Ltd.

Brettmann B.K.,Massachusetts Institute of Technology
Pharmaceutical research | Year: 2013

To investigate the use of electrospinning for forming solid dispersions containing crystalline active pharmaceutical ingredients (API) and understand the relevant properties of the resulting materials. Free surface electrospinning was used to prepare nanofiber mats of poly(vinyl pyrrolidone) (PVP) and crystalline albendazole (ABZ) or famotidine (FAM) from a suspension of the drug crystals in a polymer solution. SEM and DSC were used to characterize the dispersion, XRD was used to determine the crystalline polymorph, and dissolution studies were performed to determine the influence of the preparation method on the dissolution rate. The electrospun fibers contained 31 wt% ABZ and 26 wt% FAM for the 1:2 ABZ:PVP and 1:2 FAM:PVP formulations, respectively, and both APIs retained their crystalline polymorphs throughout processing. The crystals had an average size of about 10 μm and were well-dispersed throughout the fibers, resulting in a higher dissolution rate for electrospun tablets than for powder tablets. Previously used to produce amorphous formulations, electrospinning has now been demonstrated to be a viable option for producing fibers containing crystalline API. Due to the dispersion of the crystals in the polymer, tablets made from the fiber mats may also exhibit improved dissolution properties over traditional powder compression.

Poliannikov O.V.,Massachusetts Institute of Technology
Geophysics | Year: 2011

Source-receiver wavefield interferometry has been proposed recently as an extension of classical correlation-based interferometry. Under idealized assumptions, it allows the Green's function between a source and a receiver to be reconstructed from boundary data that are collected using two closed contours of sources and receivers, respectively. An intuitive geometric description of this method, based on ray theory, can be used to design a method for reconstructing virtual-gather events that typically are lost when conventional interferometry is employed. In classical interferometry, events in the Green's function between two receiver locations are reconstructed by automatically finding a ray that emanates from a stationary source, passes through the virtual source, reflects off the structure, and finally is recorded by the second receiver. The common path to both receivers is then canceled by a suitable crosscorrelation. If illumination of the structure is not ideal, such a ray may not exist and the reflection is not reconstructed. Source-receiver wavefield interferometry can be given a similar geometric description. The reflection off the structure can be constructed using not one but multiple rays produced simultaneously by two stationary sources. This new redatuming technique proves successful in geometries in which classical interferometry fails. Extensive numerical simulations support these theoretical conclusions. © 2011 Society of Exploration Geophysicists.

Ernst J.,University of California at Los Angeles | Kellis M.,Massachusetts Institute of Technology | Kellis M.,The Broad Institute of MIT and Harvard
Nature Biotechnology | Year: 2015

With hundreds of epigenomic maps, the opportunity arises to exploit the correlated nature of epigenetic signals, across both marks and samples, for large-scale prediction of additional datasets. Here, we undertake epigenome imputation by leveraging such correlations through an ensemble of regression trees. We impute 4,315 high-resolution signal maps, of which 26% are also experimentally observed. Imputed signal tracks show overall similarity to observed signals and surpass experimental datasets in consistency, recovery of gene annotations and enrichment for disease-associated variants. We use the imputed data to detect low-quality experimental datasets, to find genomic sites with unexpected epigenomic signals, to define high-priority marks for new experiments and to delineate chromatin states in 127 reference epigenomes spanning diverse tissues and cell types. Our imputed datasets provide the most comprehensive human regulatory region annotation to date, and our approach and the ChromImpute software constitute a useful complement to large-scale experimental mapping of epigenomic information. © 2015 Nature America, Inc. All rights reserved.
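The core statistical idea, predicting a missing signal track from correlated observed tracks with an ensemble of regression trees, can be sketched in a few lines. Everything below is invented for illustration: the synthetic "tracks", the depth-one stump learner, and the boosting schedule are stand-ins, not ChromImpute's actual features or trees.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for imputation: predict a target signal track (y)
# from two correlated observed tracks (columns of X).
X = rng.normal(size=(500, 2))
y = 2.0 * X[:, 0] + np.sin(3 * X[:, 1]) + 0.1 * rng.normal(size=500)

def fit_stump(X, r):
    """Best single split (feature, threshold, left/right means) for residual r."""
    best = None
    for j in range(X.shape[1]):
        for t in np.quantile(X[:, j], np.linspace(0.1, 0.9, 9)):
            left = X[:, j] <= t
            if left.all() or (~left).all():
                continue
            pred = np.where(left, r[left].mean(), r[~left].mean())
            err = ((r - pred) ** 2).sum()
            if best is None or err < best[0]:
                best = (err, j, t, r[left].mean(), r[~left].mean())
    return best[1:]

def predict_stump(stump, X):
    j, t, lo, hi = stump
    return np.where(X[:, j] <= t, lo, hi)

# Boosting: each stump fits the residual of the ensemble built so far.
stumps, lr, F = [], 0.2, np.zeros(len(y))
for _ in range(200):
    s = fit_stump(X, y - F)
    F += lr * predict_stump(s, X)
    stumps.append(s)

print(np.corrcoef(F, y)[0, 1])  # in-sample correlation, close to 1
```

The same fit-residuals-with-trees loop, applied with real epigenomic tracks as features, is the flavor of ensemble regression the paper leverages at scale.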

Wichterle H.,Columbia University | Gifford D.,Massachusetts Institute of Technology | Mazzoni E.,New York University
Science | Year: 2013

A universal method for classifying neuronal subtypes will increase our understanding of the human brain.

Spokoyny A.M.,Massachusetts Institute of Technology
Pure and Applied Chemistry | Year: 2013

Two hundred years of research with carbon-rich molecules have shaped the development of modern chemistry. Research pertaining to the chemistry of boron-rich species has historically trailed behind that of its more distinguished neighbor (carbon) in the periodic table. Notably, a potentially rich and, in many cases, unmatched field of coordination chemistry using boron-rich clusters remains fundamentally underdeveloped. Our work has been devoted to examining several basic concepts related to the functionalization of icosahedral boron-rich clusters and their use as ligands, aimed at designing fundamentally new hybrid molecular motifs and materials. Particularly interesting are icosahedral carboranes, which can be regarded as 3D analogs of benzene. These species comprise a class of boron-rich clusters that were discovered in the 1950s during the "space race", while researchers were developing energetic materials for rocket fuels. Ultimately, the unique chemical and physical properties of carborane species, such as rigidity, indefinite stability to air and moisture, and 3D aromaticity, may allow one to access a set of properties not normally available in carbon-based chemistry. While technically these species are considered inorganic clusters, their chemical properties make these boron-rich species suitable for replacing and/or altering structural and functional features of organic and organometallic molecules, a phenomenon best described as "organomimetic". Aside from the purely fundamental features associated with the organomimetic chemistry of icosahedral carboranes, their use can also provide new avenues in the development of systems relevant to solving current problems in energy production, storage, and conversion. © 2013 IUPAC.

Stokes L.C.,Massachusetts Institute of Technology
Energy Policy | Year: 2013

Designing and implementing a renewable energy policy involves political decisions and actors. Yet most research on renewable energy policies comparatively evaluates instruments from an economic or technical perspective. This paper presents a case study of Ontario's feed-in tariff policies between 1997 and 2012 to analyze how the political process affects renewable energy policy design and implementation. Ontario's policy, although initially successful, has met with increasing resistance over time. The case reveals key political tensions that arise during implementation. First, high-level support for a policy does not necessarily translate into widespread public support, particularly for local deployment. Second, the government often struggles under asymmetric information during price setting, which may weaken the policy's legitimacy with the public due to higher costs. Third, there is an unacknowledged tension that governments must navigate between policy stability, to spur investment, and adaptive policymaking, to improve policy design. Fourth, when multiple jurisdictions pursue the same policies simultaneously, international political conflict over jobs and innovation may occur. These implementation tensions result from political choices during policy design and present underappreciated challenges to transforming the electricity system. Governments need to critically recognize the political dimension of renewable energy policies to secure sustained political support. © 2013 Elsevier Ltd.

Schrock R.R.,Massachusetts Institute of Technology
Accounts of Chemical Research | Year: 2014

Conspectus: Some of the most readily available and inexpensive monomers for ring-opening metathesis polymerization (ROMP) are norbornenes or substituted norbornadienes. Polymers made from them have tacticities (the stereochemical relationship between monomer units in the polymer chain) that remain after the C=C bonds in the polymer backbone are hydrogenated. Formation of polymers with exclusively a single structure (one tacticity) was rare until approximately 20 years ago, when well-defined ROMP catalysts based on molybdenum imido alkylidene complexes that contain a chiral biphenolate or binaphtholate ligand were shown to yield cis,isotactic-poly(2,3-dicarbomethoxynorbornadiene) and related polymers through addition of the monomer to the same side of the M=C bond in each step. Over the past few years, molybdenum and tungsten monoaryloxide pyrrolide (MAP) imido alkylidene initiators have been found to produce cis,syndiotactic polynorbornenes and substituted norbornadienes through addition of the monomer to one side of the M=C bond in one step followed by addition to the other side of the M=C bond in the next step. This "stereogenic metal control" is possible as a consequence of the fact that the configuration of the stereogenic metal center switches with each step in the polymerization. Stereogenic metal control also allows syndiotactic polymers to be prepared from racemic monomers in which enantiomers of the monomer are incorporated alternately into the main chain. Because pure trans polymers have not yet been prepared through some predictable mechanism of stereochemical control, it seems unlikely that all four basic polymer structures from a single given monomer can be prepared simply by choosing the right initiator.
However, because tactic, and relatively oxygen-stable, hydrogenated polymers are often a desirable goal, the ability to form pure cis,isotactic polymers (through enantiomorphic site control) and cis,syndiotactic polymers (through stereogenic metal control) is sufficient for preparing hydrogenated polymers with a single structure. It is hoped that the principles of forming polymers that have a single structure through ring-opening metathesis polymerization will be general for a relatively large number of monomers and that some important problems in ROMP polymer chemistry can benefit from knowledge of polymer structure at a molecular level. With an increase in knowledge concerning the mechanistic details of polymerization by well-defined initiators, more elaborate ROMP polymers and copolymers with stereoregular structures may be possible. © 2014 American Chemical Society.

Lobel I.,New York University | Ozdaglar A.,Massachusetts Institute of Technology
IEEE Transactions on Automatic Control | Year: 2011

We consider the problem of cooperatively minimizing the sum of convex functions, where the functions represent local objective functions of the agents. We assume that each agent has information only about its own local function and communicates with the other agents over a time-varying network topology. For this problem, we propose a distributed subgradient method that uses averaging algorithms for locally sharing information among the agents. In contrast to previous works on multi-agent optimization that make worst-case assumptions about the connectivity of the agents (such as bounded communication intervals between nodes), we assume that links fail according to a given stochastic process. Under the assumption that the link failures are independent and identically distributed over time (possibly correlated across links), we provide almost sure convergence results for our subgradient algorithm. © 2006 IEEE.
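A minimal sketch of such a scheme, under assumed quadratic local costs and i.i.d. failures of ring links (all of these choices are illustrative, not taken from the paper): each step averages over the surviving links with a doubly stochastic matrix, then takes a local subgradient step with a diminishing step size.

```python
import numpy as np

rng = np.random.default_rng(1)

# Each agent i holds a private convex cost f_i(x) = (x - a_i)^2;
# the team objective sum_i f_i is minimized at mean(a).
a = np.array([0.0, 2.0, 4.0, 10.0])
n = len(a)
x = np.zeros(n)          # local estimates

p_fail = 0.3             # i.i.d. link-failure probability
for k in range(20000):
    # Random communication graph on a ring: link (i, i+1) up w.p. 1 - p_fail.
    W = np.eye(n)
    for i in range(n):
        j = (i + 1) % n
        if rng.random() > p_fail:
            # symmetric pairwise averaging weight (keeps W doubly stochastic)
            W[i, i] -= 0.25; W[i, j] += 0.25
            W[j, j] -= 0.25; W[j, i] += 0.25
    x = W @ x                      # consensus (averaging) step
    step = 1.0 / (k + 1)           # diminishing step size
    x = x - step * 2 * (x - a)     # local subgradient of f_i at x_i

print(x, a.mean())  # every estimate converges toward a.mean() = 4.0
```

The averaging step preserves the network average while shrinking disagreement, and the diminishing step size lets the gradient noise from heterogeneous local costs die out, which is the mechanism behind the almost-sure convergence results.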

Sommer C.,Massachusetts Institute of Technology
ACM Computing Surveys | Year: 2014

We consider the point-to-point (approximate) shortest-path query problem, which is the following generalization of the classical single-source (SSSP) and all-pairs shortest-path (APSP) problems: we are first presented with a network (graph). A so-called preprocessing algorithm may compute certain information (a data structure or index) to prepare for the next phase. After this preprocessing step, applications may ask shortest-path or distance queries, which should be answered as fast as possible. Due to its many applications in areas such as transportation, networking, and social science, this problem has been considered by researchers from various communities (sometimes under different names): algorithm engineers construct fast route planning methods; database and information systems researchers investigate materialization tradeoffs, query processing on spatial networks, and reachability queries; and theoretical computer scientists analyze distance oracles and sparse spanners. Related problems are considered for compact routing and distance labeling schemes in networking and distributed computing and for metric embeddings in geometry as well. In this survey, we review selected approaches, algorithms, and results on shortest-path queries from these fields, with the main focus lying on the tradeoff between the index size and the query time. We survey methods for general graphs as well as specialized methods for restricted graph classes, in particular for those classes with arguable practical significance such as planar graphs and complex networks. © 2014 ACM.
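The preprocessing/query split can be illustrated with the simplest landmark-based distance oracle: precompute shortest-path distances from a few landmark vertices, then answer queries via the triangle inequality. This is a generic textbook scheme for unweighted graphs, not a method from any single surveyed paper.

```python
from collections import deque

def bfs(adj, src):
    """Single-source shortest paths (unweighted graph) by BFS."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def preprocess(adj, landmarks):
    # Index: one distance table per landmark (index size ~ |landmarks| * n).
    return {l: bfs(adj, l) for l in landmarks}

def query_upper(index, u, v):
    # Triangle inequality: d(u,v) <= d(u,l) + d(l,v) for every landmark l,
    # answered in O(|landmarks|) time, no graph search at query time.
    return min(d[u] + d[v] for d in index.values())

# Toy graph: the 6-cycle 0-1-2-3-4-5-0, with vertex 0 as the only landmark.
adj = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
index = preprocess(adj, landmarks=[0])
print(query_upper(index, 1, 5))   # 2, which happens to equal d(1,5)
print(query_upper(index, 2, 4))   # 4, an overestimate of d(2,4) = 2
```

The two queries show the tradeoff concretely: a tiny index gives constant-time answers, but only approximate ones; adding landmarks (more space) tightens the estimates.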

Bookatz A.D.,Massachusetts Institute of Technology
Quantum Information and Computation | Year: 2014

In this paper we give an overview of the quantum computational complexity class QMA and a description of known QMA-complete problems to date. Such problems are believed to be difficult to solve, even with a quantum computer, but have the property that if a purported solution to the problem is given, a quantum computer would easily be able to verify whether it is correct. An attempt has been made to make this paper as self-contained as possible so that it can be accessible to computer scientists, physicists, mathematicians, and quantum chemists. Problems of interest to all of these professions can be found here. © Rinton Press.

Leveson N.G.,Massachusetts Institute of Technology
Safety Science | Year: 2011

Major accidents keep occurring that seem preventable and that have similar systemic causes. Too often, we fail to learn from the past and make inadequate changes in response to losses. Examining the assumptions and paradigms underlying safety engineering may help identify the problem. The assumptions questioned in this paper involve four different areas: definitions of safety and its relationship to reliability, accident causality models, retrospective vs. prospective analysis, and operator error. Alternatives based on systems thinking are proposed. © 2010 Elsevier Ltd.

Oromendia A.B.,Howard Hughes Medical Institute | Oromendia A.B.,Massachusetts Institute of Technology | Dodgson S.E.,Howard Hughes Medical Institute | Amon A.,Howard Hughes Medical Institute
Genes and Development | Year: 2012

Gains or losses of entire chromosomes lead to aneuploidy, a condition tolerated poorly in all eukaryotes analyzed to date. How aneuploidy affects organismal and cellular physiology is poorly understood. We found that aneuploid budding yeast cells are under proteotoxic stress. Aneuploid strains are prone to aggregation of endogenous proteins as well as of ectopically expressed hard-to-fold proteins such as those containing polyglutamine (polyQ) stretches. Protein aggregate formation in aneuploid yeast strains is likely due to limiting protein quality-control systems, since the proteasome and at least one chaperone family, Hsp90, are compromised in many aneuploid strains. The link between aneuploidy and the formation and persistence of protein aggregates could have important implications for diseases such as cancer and neurodegeneration. © 2012 by Cold Spring Harbor Laboratory Press.

Jagoutz O.E.,Massachusetts Institute of Technology
Contributions to Mineralogy and Petrology | Year: 2010

Results of simple model calculations that integrate cumulate compositions from the Kohistan arc terrain are presented in order to develop a consistent petrogenetic model that explains the Kohistan island arc granitoids. The model allows a quantitative approximation of the possible relative roles of fractional crystallization and assimilation in explaining the silica-rich upper crust composition of oceanic arcs. Depending in detail on the parental magma composition, hydrous moderate-to-high-pressure fractional crystallization in the lower crust/upper mantle is an adequate upper-continental-crust-forming mechanism in terms of volume and composition. Accordingly, assimilation and partial melting in the lower crust are not per se necessary processes to explain island arc granitoids. However, deriving a few percent of melt by a low degree of dehydration melting is a crucial process for producing volumetrically important amounts of upper continental crust from silica-poorer parental magmas. Even though the model can explain the silica-rich upper crustal composition of the Kohistan, the fractionation model does not predict the accepted composition of the bulk continental crust. This finding supports the idea that additional crustal refining mechanisms (e.g., delamination of lower crustal rocks) and/or non-cogenetic magmatic processes were critical to create the bulk continental crust composition. © 2010 Springer-Verlag.

Garcia O.E.,University of Tromso | Garcia O.E.,Massachusetts Institute of Technology
Physical Review Letters | Year: 2012

Single-point measurements of fluctuations in the scrape-off layer of magnetized plasmas are generally found to be dominated by large-amplitude bursts which are associated with radial motion of blob-like structures. A stochastic model for these fluctuations is presented, with the plasma density given by a random sequence of bursts with a fixed wave form. When the burst events occur in accordance with a Poisson process, this model predicts a parabolic relation between the skewness and kurtosis moments of the plasma fluctuations. In the case of an exponential wave form and exponentially distributed burst amplitudes, the probability density function for the fluctuation amplitudes is shown to be a Gamma distribution with the scale parameter given by the average burst amplitude and the shape parameter given by the ratio of the burst duration and waiting times. © 2012 American Physical Society.
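A quick numerical sketch of this model (parameter values are arbitrary): superpose one-sided exponential bursts with exponentially distributed amplitudes at Poisson arrival times, then compare the lowest moments with the Gamma prediction, mean γ⟨A⟩ and variance γ⟨A⟩², where the shape parameter is γ = τ_d/τ_w.

```python
import numpy as np

rng = np.random.default_rng(0)

tau_d = 1.0            # burst duration time
tau_w = 5.0            # average waiting time between bursts
gamma = tau_d / tau_w  # shape (intermittency) parameter
A_mean = 1.0           # average burst amplitude (scale parameter)

T, dt = 5.0e4, 0.01
t = np.arange(0.0, T, dt)

# Poisson process: Poisson-distributed burst count, uniform arrival times.
n_bursts = rng.poisson(T / tau_w)
arrival = rng.uniform(0.0, T, n_bursts)
amp = rng.exponential(A_mean, n_bursts)

# Superpose one-sided exponential wave forms (tails truncated at 10 tau_d).
signal = np.zeros_like(t)
for t_k, A_k in zip(arrival, amp):
    i0 = int(t_k / dt)
    n = min(len(t) - i0, int(10 * tau_d / dt))
    signal[i0:i0 + n] += A_k * np.exp(-(t[i0:i0 + n] - t_k) / tau_d)

# Gamma prediction: mean = gamma * A_mean, variance = gamma * A_mean**2.
print(signal.mean(), gamma * A_mean)
print(signal.var(), gamma * A_mean**2)

# Empirical skewness S and flatness F versus the parabolic relation F = 3 + 1.5 S^2.
s = (signal - signal.mean()) / signal.std()
S, F = (s ** 3).mean(), (s ** 4).mean()
print(S, F, 3 + 1.5 * S ** 2)
```

With γ = 0.2 the signal is strongly intermittent (mostly quiet, punctuated by large bursts), and the simulated mean and variance land close to the Gamma values.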

Wilczek F.,Massachusetts Institute of Technology
Physical Review Letters | Year: 2012

Some subtleties and apparent difficulties associated with the notion of spontaneous breaking of time-translation symmetry in quantum mechanics are identified and resolved. A model exhibiting that phenomenon is displayed. The possibility and significance of breaking of imaginary time-translation symmetry is discussed. © 2012 American Physical Society.

Shapere A.,University of Kentucky | Wilczek F.,Massachusetts Institute of Technology
Physical Review Letters | Year: 2012

We consider the possibility that classical dynamical systems display motion in their lowest-energy state, forming a time analogue of crystalline spatial order. Challenges facing that idea are identified and overcome. We display arbitrary orbits of an angular variable as lowest-energy trajectories for nonsingular Lagrangian systems. Dynamics within orbits of broken symmetry provide a natural arena for formation of time crystals. We exhibit models of that kind, including a model with traveling density waves. © 2012 American Physical Society.
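A textbook-style illustration of how motion can minimize energy (offered here in the spirit of the paper, not quoted from it): take a Lagrangian that is quartic in the velocity,

```latex
L \;=\; \tfrac{1}{12}\,\dot{\phi}^{4} \;-\; \tfrac{1}{2}\,\dot{\phi}^{2},
\qquad
E \;=\; \dot{\phi}\,\frac{\partial L}{\partial \dot{\phi}} \;-\; L
  \;=\; \tfrac{1}{4}\,\dot{\phi}^{4} \;-\; \tfrac{1}{2}\,\dot{\phi}^{2}.
```

The energy is minimized at \dot{\phi} = \pm 1 with E = -1/4, so every lowest-energy trajectory moves at nonzero velocity; confining such motion to an orbit of an angular variable is the kind of construction the paper develops into nonsingular time-crystal models.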

Hauser C.A.E.,Institute Of Bioengineering And Nanotechnology, Singapore | Zhang S.,Institute Of Bioengineering And Nanotechnology, Singapore | Zhang S.,Massachusetts Institute of Technology
Chemical Society Reviews | Year: 2010

Scientists and bioengineers have dreamed of designing materials from the bottom up with the finest detail and ultimate control at the single molecular level. The discovery of a class of self-assembling peptides that spontaneously undergo self-organization into well-ordered structures opened a new avenue for molecular fabrication of biological materials. Since this discovery, diverse classes of short peptides have been invented with broad applications, including 3D tissue cell culture, reparative and regenerative medicine, tissue engineering, slow drug release and medical device development. Molecular design of new materials using short peptides is poised to become increasingly important in biomedical research, biomedical technology and medicine, and is covered in this tutorial review. © 2010 The Royal Society of Chemistry.

Wang L.,Massachusetts Institute of Technology | Renner R.,ETH Zurich
Physical Review Letters | Year: 2012

The one-shot classical capacity of a quantum channel quantifies the amount of classical information that can be transmitted through a single use of the channel such that the error probability is below a certain threshold. In this work, we show that this capacity is well approximated by a relative-entropy-type measure defined via hypothesis testing. Combined with a quantum version of Stein's lemma, our results give a conceptually simple proof of the well-known Holevo-Schumacher-Westmoreland theorem for the capacity of memoryless channels. More generally, we obtain tight capacity formulas for arbitrary (not necessarily memoryless) channels. © 2012 American Physical Society.

Zhang S.,Massachusetts Institute of Technology
Accounts of Chemical Research | Year: 2012

One important question in prebiotic chemistry is the search for simple structures that might have enclosed biological molecules in a cell-like space. Phospholipids, the components of biological membranes, are highly complex. Instead, we looked for molecules that might have been available on prebiotic Earth. Simple peptides with hydrophobic tails and hydrophilic heads, made up merely of a combination of robust, abiotically synthesized amino acids and able to self-assemble into nanotubes or nanovesicles, fulfilled our initial requirements. These molecules could provide a primitive enclosure for the earliest enzymes based on either RNA or peptides and for other molecular structures with a variety of functions. We discovered and designed a class of these simple lipid-like peptides, which we describe in this Account. These peptides consist of natural amino acids (glycine, alanine, valine, isoleucine, leucine, aspartic acid, glutamic acid, lysine, and arginine) and exhibit lipid-like dynamic behaviors. These structures further undergo spontaneous assembly to form ordered arrangements including micelles, nanovesicles, and nanotubes with visible openings. Because of their simplicity and stability in water, such assemblies could provide examples of prebiotic molecular evolution that may predate the RNA world. These short and simple peptides have the potential to self-organize into simple enclosures that stabilize other fragile molecules and to bring low-concentration molecules into a local environment, enhancing their local concentration. As a result, these structures plausibly could not only accelerate the dehydration process for new chemical bond formation but also facilitate further self-organization and prebiotic evolution in a dynamic manner. We also expect that this class of lipid-like peptides will likely find a wide range of uses in the real world.
Because of their favorable interactions with lipids, these lipid-like peptides have been used to solubilize and stabilize membrane proteins, both for scientific studies and for the fabrication of nanobiotechnological devices. They can also increase the solubility of other water-insoluble molecules and increase the long-term stability of some water-soluble proteins. Likewise, because of their lipophilicity, these structures can deliver molecular cargo, such as small molecules, siRNA, and DNA, in vivo for potential therapeutic applications. © 2012 American Chemical Society.

Nandkishore R.,Massachusetts Institute of Technology
Physical Review B - Condensed Matter and Materials Physics | Year: 2012

We point out that a system which supports chiral superconductivity should also support a chiral pseudogap phase: a finite temperature phase wherein superconductivity is lost but time-reversal symmetry is still broken. This chiral pseudogap phase can be viewed as a state with phase incoherent Cooper pairs of a definite angular momentum. This physical picture suggests that the chiral pseudogap phase should have definite magnetization, should exhibit a (nonquantized) charge Hall effect, and should possess protected edge states that lead to a quantized thermal Hall response. We explain how these phenomena are realized in a Ginzburg-Landau description, and comment on the experimental signatures of the chiral pseudogap phase. We expect this work to be relevant for all systems that exhibit chiral superconductivity, including doped graphene and strontium ruthenate. © 2012 American Physical Society.

Wen X.-G.,Massachusetts Institute of Technology
Physical Review B - Condensed Matter and Materials Physics | Year: 2012

Symmetry-protected topological (SPT) phases are gapped quantum phases with a certain symmetry, which can all be smoothly connected to the same trivial product state if we break the symmetry. For noninteracting fermion systems with time reversal (T), charge conjugation (C), and/or U(1) (N) symmetries, the total symmetry group can depend on the relations between those symmetry operations, such as TNT⁻¹ = N or TNT⁻¹ = −N. As a result, the SPT phases of those fermion systems with different symmetry groups have different classifications. In this paper, we use Kitaev's K-theory approach to classify the gapped free-fermion phases for those possible symmetry groups. In particular, we can view the U(1) as a spin rotation. We find that superconductors with the S_z spin-rotation symmetry are classified by Z in even dimensions, while superconductors with the time reversal plus the S_z spin-rotation symmetries are classified by Z in odd dimensions. We show that all 10 classes of gapped free-fermion phases can be realized by electron systems with certain symmetries. We also point out that, to properly describe the symmetry of a fermionic system, we need to specify its full symmetry group, which includes the fermion-number-parity transformation (−1)^N. The full symmetry group is actually a projective symmetry group. © 2012 American Physical Society.

McDonald M.,Massachusetts Institute of Technology
Astrophysical Journal Letters | Year: 2011

In recent years the number of known galaxy clusters beyond z ≳ 0.2 has increased drastically with the release of multiple catalogs containing >30,000 optically detected galaxy clusters over the range 0 < z < 0.6. Combining these catalogs with the availability of optical spectroscopy of the brightest cluster galaxy (BCG) from the Sloan Digital Sky Survey allows for the evolution of optical emission-line nebulae in cluster cores to be quantified. For the first time, the continuous evolution of optical line emission in BCGs over the range 0 < z < 0.6 is determined. A minimum in the fraction of BCGs with optical line emission is found at z ∼ 0.3, suggesting that complex, filamentary emission in systems such as Perseus A is a recent phenomenon. Evidence for an upturn in the number of strongly emitting systems is reported beyond z > 0.3, hinting at an earlier epoch of strong cooling. We compare the evolution of emission-line nebulae to the X-ray-derived cool core (CC) fraction from the literature over the same redshift range and find overall agreement, with the exception that an upturn in the strong CC fraction is not observed at z > 0.3. The overall agreement between the evolution of CCs and optical line emission at low redshift suggests that emission-line surveys of galaxy clusters may provide an efficient method of indirectly probing the evolution of CCs and thus provide insights into the balance of heating and cooling processes at early cosmic times. © 2011. The American Astronomical Society. All rights reserved.

Nepf H.M.,Massachusetts Institute of Technology
Journal of Hydraulic Research | Year: 2012

This paper highlights some recent trends in vegetation hydrodynamics, focusing on conditions within channels and spanning spatial scales from individual blades, to canopies or vegetation patches, to the channel reach. At the blade scale, the boundary layer formed on the plant surface plays a role in controlling nutrient uptake. Flow resistance and light availability are also influenced by the reconfiguration of flexible blades. At the canopy scale, there are two flow regimes. For sparse canopies, the flow resembles a rough boundary layer. For dense canopies, the flow resembles a mixing layer. At the reach scale, flow resistance is more closely connected to the patch-scale vegetation distribution, described by the blockage factor, than to the geometry of individual plants. The impact of vegetation distribution on sediment movement is discussed, with attention being paid to methods for estimating bed stress within regions of vegetation. The key research challenges of the hydrodynamics of vegetated channels are highlighted. © 2012 International Association for Hydro-Environment Engineering and Research.

Zhao Y.,Massachusetts Institute of Technology
Combinatorics Probability and Computing | Year: 2010

We show that the number of independent sets in an N-vertex, d-regular graph is at most (2^(d+1) − 1)^(N/(2d)), where the bound is sharp for a disjoint union of complete d-regular bipartite graphs. This settles a conjecture of Alon in 1991 and Kahn in 2001. Kahn proved the bound when the graph is assumed to be bipartite. We give a short proof that reduces the general case to the bipartite case. Our method also works for a weighted generalization, i.e., an upper bound for the independence polynomial of a regular graph. © 2009 Cambridge University Press.
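The bound is easy to sanity-check by brute force on small regular graphs (a quick illustration, not part of the proof): K_{d,d} attains it with equality, while other d-regular graphs fall strictly below.

```python
from itertools import product

def independent_set_count(n, edges):
    """Count all independent sets (including the empty set) by brute force."""
    count = 0
    for mask in product([0, 1], repeat=n):
        if all(not (mask[u] and mask[v]) for u, v in edges):
            count += 1
    return count

def kahn_zhao_bound(n, d):
    # i(G) <= (2^(d+1) - 1)^(n / (2d)) for every n-vertex d-regular graph
    return (2 ** (d + 1) - 1) ** (n / (2 * d))

# K_{2,2}: 4-vertex 2-regular complete bipartite graph; the bound is tight here.
edges_k22 = [(0, 2), (0, 3), (1, 2), (1, 3)]
print(independent_set_count(4, edges_k22), kahn_zhao_bound(4, 2))  # 7 7.0

# C_5: 5-vertex 2-regular cycle; strictly below the bound (7^(5/4) ~ 11.4).
edges_c5 = [(i, (i + 1) % 5) for i in range(5)]
print(independent_set_count(5, edges_c5))  # 11
```

For K_{d,d} the count is 2^d + 2^d − 1 = 2^(d+1) − 1 (independent sets lie entirely on one side, with the empty set counted once), which is exactly the bound raised to the power N/(2d) = 1.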

Brady T.F.,Massachusetts Institute of Technology
Journal of vision | Year: 2011

Traditional memory research has focused on identifying separate memory systems and exploring different stages of memory processing. This approach has been valuable for establishing a taxonomy of memory systems and characterizing their function but has been less informative about the nature of stored memory representations. Recent research on visual memory has shifted toward a representation-based emphasis, focusing on the contents of memory and attempting to determine the format and structure of remembered information. The main thesis of this review will be that one cannot fully understand memory systems or memory processes without also determining the nature of memory representations. Nowhere is this connection more obvious than in research that attempts to measure the capacity of visual memory. We will review research on the capacity of visual working memory and visual long-term memory, highlighting recent work that emphasizes the contents of memory. This focus impacts not only how we estimate the capacity of the system--going beyond quantifying how many items can be remembered and moving toward structured representations--but how we model memory systems and memory processes.

Tasoglu S.,Harvard University | Demirci U.,Harvard University | Demirci U.,Massachusetts Institute of Technology
Trends in Biotechnology | Year: 2013

Recently, there has been growing interest in applying bioprinting techniques to stem cell research. Several bioprinting methods have been developed utilizing acoustics, piezoelectricity, and lasers to deposit living cells onto receiving substrates. Using these technologies, spatially defined gradients of immobilized biomolecules can be engineered to direct stem cell differentiation into multiple subpopulations of different lineages. Stem cells can also be patterned in a high-throughput manner onto flexible implementation patches for tissue regeneration or onto substrates with the goal of accessing encapsulated stem cells of interest for genomic analysis. Here, we review recent achievements with bioprinting technologies in stem cell research, and identify future challenges and potential applications including tissue engineering and regenerative medicine, wound healing, and genomics. © 2012 Elsevier Ltd.

Bonoli P.T.,Massachusetts Institute of Technology
Physics of Plasmas | Year: 2014

Progress in experiment and simulation capability in the lower hybrid range of frequencies at ITER relevant parameters is reviewed. Use of LH power in reactor devices is motivated in terms of its potential for efficient off-axis current profile control. Recent improvements in simulation capability including the development of full-wave field solvers, inclusion of the scrape off layer (SOL) in wave propagation codes, the use of coupled ray tracing/full-wave/3D (r, v⊥, v∥) Fokker-Planck models, and the inclusion of wave scattering as well as nonlinear broadening effects in ray tracing/Fokker-Planck codes are discussed. Experimental and modeling results are reviewed which are aimed at understanding the spectral gap problem in LH current drive (LHCD) and the density limit that has been observed and mitigated in LHCD experiments. Physics mechanisms that could be operative in these experiments are discussed, including toroidally induced variations in the parallel wavenumber, nonlinear broadening of the pump wave, scattering of LH waves from density fluctuations in the SOL, and spectral broadening at the plasma edge via full-wave effects. © 2014 AIP Publishing LLC.

Bendor D.,Massachusetts Institute of Technology
Journal of Neurophysiology | Year: 2012

Pitch perception is an important component of hearing, allowing us to appreciate melodies and harmonies as well as recognize prosodic cues in speech. Multiple studies over the last decade have suggested that pitch is represented by a pitch-processing center in auditory cortex. However, recent data (Barker D, Plack CJ, Hall DA. Cereb Cortex. In press; Hall DA, Plack CJ. Cereb Cortex 19: 576-585, 2009) now challenge these previous claims of a human "pitch center." © 2012 the American Physiological Society.

Yildiz B.,Massachusetts Institute of Technology
MRS Bulletin | Year: 2014

Elastic strain engineering offers a new route to enable high-performance catalysts, electrochemical energy conversion devices, separation membranes and memristors. By applying mechanical stress, the inherent energy landscape of reactions involved in the material can be altered. This is the so-called mechano-chemical coupling. Here we discuss how elastic strain activates reactions on metals and oxides. We also present analogies to strained polymer reactions. A rich set of investigations have been performed on strained metal surfaces over the last 15 years, and the mechanistic reasons behind strain-induced reactivity are explained by an electronic structure model. On the other hand, the potential of strain engineering of oxides for catalytic and energy applications has been largely underexplored. In oxides, mechanical stress couples to reaction and diffusion kinetics by altering the oxygen defect formation enthalpy, migration energy barrier, adsorption energy, dissociation barrier, and charge transfer barrier. A generalization of the principles for stress activated reactions from polymers to metals to oxides is offered, and the prospect of using elastic strain to tune reaction and diffusion kinetics in functional oxides is discussed. © 2014 Materials Research Society.

Wagner J.C.,Massachusetts Institute of Technology
Nature Methods | Year: 2014

Malaria is a major cause of global morbidity and mortality, and new strategies for treating and preventing this disease are needed. Here we show that the Streptococcus pyogenes Cas9 DNA endonuclease and single guide RNAs (sgRNAs) produced using T7 RNA polymerase (T7 RNAP) efficiently edit the Plasmodium falciparum genome. Targeting the genes encoding native knob-associated histidine-rich protein (kahrp) and erythrocyte binding antigen 175 (eba-175), we achieved high (≥50-100%) gene disruption frequencies within the usual time frame for generating transgenic parasites.

Mayers J.R.,Massachusetts Institute of Technology
Nature Medicine | Year: 2014

Most patients with pancreatic ductal adenocarcinoma (PDAC) are diagnosed with advanced disease and survive less than 12 months. PDAC has been linked with obesity and glucose intolerance, but whether changes in circulating metabolites are associated with early cancer progression is unknown. To better understand metabolic derangements associated with early disease, we profiled metabolites in prediagnostic plasma from individuals with pancreatic cancer (cases) and matched controls from four prospective cohort studies. We find that elevated plasma levels of branched-chain amino acids (BCAAs) are associated with a greater than twofold increased risk of future pancreatic cancer diagnosis. This elevated risk was independent of known predisposing factors, with the strongest association observed among subjects with samples collected 2 to 5 years before diagnosis, when occult disease is probably present. We show that plasma BCAAs are also elevated in mice with early-stage pancreatic cancers driven by mutant Kras expression but not in mice with Kras-driven tumors in other tissues, and that breakdown of tissue protein accounts for the increase in plasma BCAAs that accompanies early-stage disease. Together, these findings suggest that increased whole-body protein breakdown is an early event in development of PDAC.

Gibson L.J.,Massachusetts Institute of Technology
Journal of the Royal Society Interface | Year: 2012

The cell walls in plants are made up of just four basic building blocks: cellulose (the main structural fibre of the plant kingdom), hemicellulose, lignin and pectin. Although the microstructure of plant cell walls varies in different types of plants, broadly speaking, cellulose fibres reinforce a matrix of hemicellulose and either pectin or lignin. The cellular structure of plants varies too, from the largely honeycomb-like cells of wood to the closed-cell, liquid-filled foam-like parenchyma cells of apples and potatoes and to composites of these two cellular structures, as in arborescent palm stems. The arrangement of the four basic building blocks in plant cell walls and the variations in cellular structure give rise to a remarkably wide range of mechanical properties: Young's modulus varies from 0.3 MPa in parenchyma to 30 GPa in the densest palm, while the compressive strength varies from 0.3 MPa in parenchyma to over 300 MPa in dense palm. The moduli and compressive strength of plant materials span this entire range. This study reviews the composition and microstructure of the cell wall as well as the cellular structure in three plant materials (wood, parenchyma and arborescent palm stems) to explain the wide range in mechanical properties in plants as well as their remarkable mechanical efficiency. © 2012 The Royal Society.
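The density-stiffness link described above is often summarized by the Gibson-Ashby scaling law for cellular solids, E*/E_s ≈ C (ρ*/ρ_s)^2 for bending-dominated (open-cell) structures. The snippet below is an illustrative sketch, not a calculation from the paper; the prefactor C, the exponent, and the 30 GPa cell-wall modulus (the densest-palm value quoted above) are assumptions for the example:

```python
def foam_modulus(E_s, rel_density, C=1.0, exponent=2):
    """Gibson-Ashby scaling for a bending-dominated (open-cell) cellular
    solid: E* = C * E_s * (rho*/rho_s)**exponent.  C and the exponent are
    empirical and vary with cell geometry; C = 1 here for illustration."""
    return C * E_s * rel_density ** exponent

# Sweep relative density with an assumed solid cell-wall modulus of 30 GPa.
E_s = 30e9  # Pa
for rel_rho in (0.05, 0.2, 1.0):
    print(rel_rho, foam_modulus(E_s, rel_rho) / 1e6, "MPa")
```

Even this crude quadratic scaling spans several orders of magnitude in stiffness, consistent with the MPa-to-GPa range quoted for parenchyma versus dense palm.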

Reppert M.,Massachusetts Institute of Technology
Journal of Physical Chemistry Letters | Year: 2011

Resonant hole-burned (HB) spectra provide detailed insight into optical line shapes and single-site absorption spectra. In this work, a new method is presented for modeling low-fluence resonant HB spectra in excitonically coupled systems. Our calculations make use of the line shape theory of Renger and Marcus [ J. Chem. Phys. 2002, 116, 9997-10019 ] to include explicitly the effects of excitonic interactions and energy transfer. We find that an accurate treatment of lifetime broadening is crucial for reproducing resonant hole shapes and zero-phonon action spectra. Representative spectra are calculated for a series of simple dimers, with a description and physical explanation of the main features of the calculated spectra. Qualitatively, the results are in excellent agreement with experimental literature data. It is anticipated that the new method will provide both more detailed insight into the excitonic information encoded in resonant HB spectra and a means of testing theoretical treatments of excitonic optical line shapes. © 2011 American Chemical Society.

Hanna J.H.,Whitehead Institute For Biomedical Research | Saha K.,Whitehead Institute For Biomedical Research | Jaenisch R.,Whitehead Institute For Biomedical Research | Jaenisch R.,Massachusetts Institute of Technology
Cell | Year: 2010

Direct reprogramming of somatic cells to induced pluripotent stem cells by ectopic expression of defined transcription factors has raised fundamental questions regarding the epigenetic stability of the differentiated cell state. In addition, evidence has accumulated that distinct states of pluripotency can interconvert through the modulation of both cell-intrinsic and exogenous factors. To fully realize the potential of in vitro reprogrammed cells, we need to understand the molecular and epigenetic determinants that convert one cell type into another. Here we review recent advances in this rapidly moving field and emphasize unresolved and controversial questions. © 2010 Elsevier Inc.

Wen X.-G.,Perimeter Institute for Theoretical Physics | Wen X.-G.,Massachusetts Institute of Technology
Physical Review B - Condensed Matter and Materials Physics | Year: 2014

Recently, it was realized that quantum states of matter can be classified as long-range entangled states (i.e., with nontrivial topological order) and short-range entangled states (i.e., with trivial topological order). We can use group cohomology classes H^d(SG, R/Z) to systematically describe the SRE states with a symmetry SG [referred to as symmetry-protected trivial (SPT) or symmetry-protected topological (SPT) states] in d-dimensional space-time. In this paper, we study the physical properties of those SPT states, such as the fractionalization of the quantum numbers of the global symmetry on some designed point defects and the appearance of fractionalized SPT states on some designed defect lines/membranes. Those physical properties are SPT invariants of the SPT states which allow us to experimentally or numerically detect those SPT states, i.e., to measure the elements in H^d(G, R/Z) that label different SPT states. For example, 2+1-dimensional bosonic SPT states with Zn symmetry are classified by a Zn integer m ∈ H^3(Zn, R/Z) = Zn. We find that n identical monodromy defects, in a Zn SPT state labeled by m, carry a total Zn charge 2m (which is not a multiple of n in general). © 2014 American Physical Society.

Watts M.R.,Massachusetts Institute of Technology
Optics Letters | Year: 2010

A class of whispering-gallery-mode resonators, herein referred to as adiabatic microring resonators, is proposed and numerically demonstrated. Adiabatic microrings enable electrical and mechanical contact to be made to the resonator without inducing radiation, while supporting only a single radial mode and therein achieving an uncorrupted free spectral range (FSR). Rigorous finite-difference time-domain simulations indicate that adiabatic microrings with outer diameters as small as 4 μm can achieve resonator quality factors (Qs) as high as Q = 88,000 and an FSR of 8.2 THz, despite large internal contacts. © 2010 Optical Society of America.
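The quoted diameter and FSR can be sanity-checked against the standard ring-resonator relation FSR = c/(π D n_g). The group index inferred below is an assumption derived from the abstract's numbers, not a value stated in the paper:

```python
import math

c = 2.998e8  # speed of light, m/s

def fsr_hz(diameter_m, n_group):
    """Free spectral range of a ring of circumference pi*D: FSR = c/(pi*D*n_g)."""
    return c / (math.pi * diameter_m * n_group)

# Invert the relation for the values quoted above (D = 4 um, FSR = 8.2 THz);
# the implied group index of ~2.9 is plausible for a silicon microring.
n_g = c / (math.pi * 4e-6 * 8.2e12)
print(round(n_g, 2))   # ~2.91
```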

Rajagopala K.,Massachusetts Institute of Technology
Journal of High Energy Physics | Year: 2010

We solve second order relativistic hydrodynamics equations for a boost-invariant 1+1-dimensional expanding fluid with an equation of state taken from lattice calculations of the thermodynamics of strongly coupled quark-gluon plasma. We investigate the dependence of the energy density as a function of proper time on the values of the shear viscosity η, the bulk viscosity ζ, and second order coefficients, confirming that large changes in the values of the latter have negligible effects. Varying the shear viscosity between zero and a few times s/4π, with s the entropy density, has significant effects, as expected based on other studies. Introducing a nonzero bulk viscosity also has significant effects. In fact, if the bulk viscosity peaks near the crossover temperature Tc to the degree indicated by recent lattice calculations in QCD without quarks, it can make the fluid cavitate - falling apart into droplets. It is interesting to see a hydrodynamic calculation predicting its own breakdown, via cavitation, at the temperatures where hadronization is thought to occur in ultrarelativistic heavy ion collisions. © SISSA 2010.
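A first-order (Navier-Stokes) estimate illustrates how bulk viscosity can drive cavitation in boost-invariant Bjorken flow, where the expansion rate is 1/τ and the viscous stresses reduce the longitudinal pressure. This is a simplified sketch with made-up numbers, not the second-order equations solved in the paper:

```python
def longitudinal_pressure(p, eta, zeta, tau):
    """First-order (Navier-Stokes) estimate of the longitudinal pressure in
    boost-invariant Bjorken flow (expansion rate 1/tau):
    p_z = p - 4*eta/(3*tau) - zeta/tau.  Cavitation sets in when p_z < 0."""
    return p - (4.0 * eta / 3.0 + zeta) / tau

# Illustrative (made-up) numbers in GeV/fm^3 and fm/c: a bulk-viscosity peak
# near T_c can push p_z negative even when shear viscosity alone does not.
p, eta, tau = 0.3, 0.2, 2.0
print(longitudinal_pressure(p, eta, 0.0, tau))   # positive: no cavitation
print(longitudinal_pressure(p, eta, 0.5, tau))   # negative: fluid cavitates
```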

Van Der Veen A.G.,Whitehead Institute For Biomedical Research | Ploegh H.L.,Whitehead Institute For Biomedical Research | Ploegh H.L.,Massachusetts Institute of Technology
Annual Review of Biochemistry | Year: 2012

The eukaryotic ubiquitin family encompasses nearly 20 proteins that are involved in the posttranslational modification of various macromolecules. The ubiquitin-like proteins (UBLs) that are part of this family adopt the β-grasp fold that is characteristic of its founding member ubiquitin (Ub). Although structurally related, UBLs regulate a strikingly diverse set of cellular processes, including nuclear transport, proteolysis, translation, autophagy, and antiviral pathways. New UBL substrates continue to be identified and further expand the functional diversity of UBL pathways in cellular homeostasis and physiology. Here, we review recent findings on such novel substrates, mechanisms, and functions of UBLs. © 2012 by Annual Reviews. All rights reserved.

Hemann M.T.,Massachusetts Institute of Technology
Cancer Discovery | Year: 2014

DNA repair deficiencies are common among cancer cells and represent a potential vulnerability that might be exploited by targeting compensatory repair pathways. However, the identification of synthetically lethal combinations of DNA repair defects, although of significant clinical relevance, has been somewhat anecdotal. Although numerous models have been proposed to explain synthetic lethality among DNA repair mutations, we have only a limited understanding of why a given mutation should render cells sensitive to another. In this issue of Cancer Discovery, Dietlein and colleagues define a general connection between mutations in genes involved in homologous recombination and sensitivity to inhibitors of non-homologous end joining. In doing so, they provide a mechanism to demarcate a set of seemingly diverse tumors that may be highly responsive to established DNA repair-targeted therapeutics. © 2014 American Association for Cancer Research.

Highly efficient exfoliation of individual single-walled carbon nanotubes (SWNTs) was successfully demonstrated by utilizing biocompatible phenoxylated dextran, a kind of polysaccharide, as a SWNT dispersion agent. Phenoxylated dextran shows greater ability in producing individual SWNTs from raw materials than any other dispersing agent, including anionic surfactants and another polysaccharide. Furthermore, with this novel polymer, SWNT bundles or impurities present in raw materials are removed under much milder processing conditions compared to those of ultra-centrifugation procedures. There exists an optimal composition of phenoxy groups (∼13.6 wt%) that leads to the production of high-quality SWNT suspensions, as confirmed by UV-vis-nIR absorption and nIR fluorescence spectroscopy. Furthermore, phenoxylated dextran strongly adsorbs onto SWNTs, enabling SWNT fluorescence even in solid-state films in which metallic SWNTs co-exist. By bypassing ultra-centrifugation, this low-energy dispersion scheme can potentially be scaled up to industrial production levels.

Whitfield-Gabrieli S.,Massachusetts Institute of Technology
Molecular Psychiatry | Year: 2015

We asked whether brain connectomics can predict response to treatment for a neuropsychiatric disorder better than conventional clinical measures. Pre-treatment resting-state brain functional connectivity and diffusion-weighted structural connectivity were measured in 38 patients with social anxiety disorder (SAD) to predict subsequent treatment response to cognitive behavioral therapy (CBT). We used a priori bilateral anatomical amygdala seed-driven resting connectivity and probabilistic tractography of the right inferior longitudinal fasciculus together with a data-driven multivoxel pattern analysis of whole-brain resting-state connectivity before treatment to predict improvement in social anxiety after CBT. Each connectomic measure improved the prediction of individuals’ treatment outcomes significantly better than a clinical measure of initial severity, and combining the multimodal connectomics yielded a fivefold improvement in predicting treatment response. Generalization of the findings was supported by leave-one-out cross-validation. After dividing patients into better or worse responders, logistic regression of connectomic predictors and initial severity combined with leave-one-out cross-validation yielded a categorical prediction of clinical improvement with 81% accuracy, 84% sensitivity and 78% specificity. Connectomics of the human brain, measured by widely available imaging methods, may provide brain-based biomarkers (neuromarkers) supporting precision medicine that better guide patients with neuropsychiatric diseases to optimal available treatments, and thus translate basic neuroimaging into medical practice. Molecular Psychiatry advance online publication, 11 August 2015; doi:10.1038/mp.2015.109. © 2015 Macmillan Publishers Limited.
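The reported 81% accuracy, 84% sensitivity and 78% specificity are standard confusion-matrix quantities over the cross-validated predictions. A minimal sketch of how they are computed from binary responder labels, using toy data rather than the study's:

```python
def confusion_metrics(y_true, y_pred):
    """Accuracy, sensitivity (true-positive rate) and specificity
    (true-negative rate) from binary labels; 1 = better responder."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity

# Toy labels (not the study's data): 10 patients, 8 classified correctly.
y_true = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 1, 0, 0, 0, 0, 0, 1]
print(confusion_metrics(y_true, y_pred))  # (0.8, 0.8, 0.8)
```

In a leave-one-out scheme, `y_pred[i]` would come from a model trained on all patients except patient i.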

Demory B.-O.,Astrophysics Group | Demory B.-O.,Massachusetts Institute of Technology
Astrophysical Journal Letters | Year: 2014

Exoplanet research focusing on the characterization of super-Earths is currently limited to the handful of targets orbiting bright stars that are amenable to detailed study. This Letter proposes to look at alternative avenues to probe the surface and atmospheric properties of this category of planets, known to be ubiquitous in our galaxy. I conduct Markov Chain Monte Carlo light-curves analyses for 97 Kepler close-in (R_P ≲ 2.0 R⊕) super-Earth candidates with the aim of detecting their occultations at visible wavelengths. Brightness temperatures and geometric albedos in the Kepler bandpass are constrained for 27 super-Earth candidates. A hierarchical Bayesian modeling approach is then employed to characterize the population-level reflective properties of these close-in super-Earths. I find median geometric albedos Ag in the Kepler bandpass ranging between 0.16 and 0.30, once decontaminated from thermal emission. These super-Earth geometric albedos are statistically larger than for hot Jupiters, which have medians Ag ranging between 0.06 and 0.11. A subset of objects, including Kepler-10b, exhibit significantly larger albedos (Ag ≳ 0.4). I argue that a better understanding of the incidence of stellar irradiation on planetary surface and atmospheric processes is key to explain the diversity in albedos observed for close-in super-Earths. © 2014. The American Astronomical Society. All rights reserved.
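The reflected-light occultation depth being fit here is approximately δ = A_g (R_p/a)^2. The sketch below uses rough, Kepler-10b-like values (assumed for illustration, not taken from the Letter) to show why these signals are only a few parts per million, motivating the population-level hierarchical analysis:

```python
R_EARTH = 6.371e6    # m
AU = 1.496e11        # m

def occultation_depth_ppm(a_g, r_planet_m, a_orbit_m):
    """Reflected-light occultation depth: delta = A_g * (R_p / a)**2, in ppm."""
    return a_g * (r_planet_m / a_orbit_m) ** 2 * 1e6

# Illustrative numbers roughly matching Kepler-10b (R_p ~ 1.47 R_earth,
# a ~ 0.017 AU); an albedo A_g ~ 0.4 yields a depth of only a few ppm.
depth = occultation_depth_ppm(0.4, 1.47 * R_EARTH, 0.017 * AU)
print(round(depth, 1), "ppm")
```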

Argon A.S.,Massachusetts Institute of Technology
Polymer | Year: 2011

In spite of the continued interest in the yield, flow and fracture behavior of glassy homopolymers and the important role that crazing plays in their ultimate mechanical response, the basic understanding of the molecular level processes that govern craze initiation still remains murky. Here we revisit some early experiments of Argon and Hannoosh from the 1970s on craze initiation in homopolystyrene under combinations of both deviatoric shear stress and mean normal stress and also take note of more recent computer simulations of ultimate plastic flow and cavitation response of model glassy polymers. We then formulate a new craze initiation model and compare its predictions with the earlier experiments to find much better agreement over a wider range of applicability that sheds new light on this hitherto inadequately understood phenomenon. © 2011 Elsevier Ltd. All rights reserved.

Dimarogonas D.V.,KTH Royal Institute of Technology | Frazzoli E.,Massachusetts Institute of Technology | Johansson K.H.,KTH Royal Institute of Technology
IEEE Transactions on Automatic Control | Year: 2012

Event-driven strategies for multi-agent systems are motivated by the future use of embedded microprocessors with limited resources that will gather information and actuate the individual agent controller updates. The controller updates considered here are event-driven, depending on the ratio of a certain measurement error with respect to the norm of a function of the state, and are applied to a first order agreement problem. A centralized formulation is considered first and then its distributed counterpart, in which agents require knowledge only of their neighbors' states for the controller implementation. The results are then extended to a self-triggered setup, where each agent computes its next update time at the previous one, without having to keep track of the state error that triggers the actuation between two consecutive update instants. The results are illustrated through simulation examples. © 2012 IEEE.
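The event-driven idea can be sketched in a few lines: hold the last broadcast control u = -L·x̂ and refresh x̂ only when the measurement error reaches a fraction σ of ||Lx||. The toy centralized simulation below (Euler integration, a path graph of three agents; σ, dt and the trigger form are choices for the example, not the paper's exact triggering function) reaches agreement at the initial average:

```python
def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def norm(v):
    return sum(c * c for c in v) ** 0.5

def event_triggered_consensus(x0, L, sigma=0.1, dt=0.01, steps=5000):
    """Centralized event-triggered first-order agreement (a sketch of the
    idea): the broadcast state x_hat is refreshed only when the measurement
    error ||x - x_hat|| reaches sigma * ||L x||; between events the stale
    control u = -L * x_hat is held constant."""
    x, x_hat, events = list(x0), list(x0), 0
    for _ in range(steps):
        err = norm([a - b for a, b in zip(x, x_hat)])
        if err >= sigma * norm(matvec(L, x)):
            x_hat, events = list(x), events + 1   # event: rebroadcast the state
        u = [-v for v in matvec(L, x_hat)]        # held control between events
        x = [xi + dt * ui for xi, ui in zip(x, u)]
    return x, events

# Path graph on three agents; the consensus value is the initial average 2.0.
L = [[1, -1, 0], [-1, 2, -1], [0, -1, 1]]
x, events = event_triggered_consensus([0.0, 1.0, 5.0], L)
print([round(v, 2) for v in x], events)
```

Because the Laplacian has zero row sums, the held control preserves the state average, so the agents settle near 2.0 while updating the control only at discrete event times.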

Brenner H.,Massachusetts Institute of Technology
Physical Review E - Statistical, Nonlinear, and Soft Matter Physics | Year: 2011

This paper offers a simple macroscopic approach to the question of the slip boundary condition to be imposed upon the tangential component of the fluid velocity at a solid boundary. Plausible reasons are advanced for believing that it is the energy equation rather than the momentum equation that determines the correct fluid-mechanical boundary condition. The scheme resulting therefrom furnishes the following general, near-equilibrium linear constitutive relation for the slip velocity of mass along a relatively flat wall bounding a single-component gas or liquid: (v_m)_slip = -α ∂lnρ/∂s|_wall, where α and ρ are, respectively, the fluid's thermometric diffusivity and mass density, while s refers to distance measured along the wall in the direction in which the slip or creep occurs. This constitutive relation is shown to agree with experimental data for gases and liquids undergoing thermal creep or pressure-driven viscous creep at solid surfaces. © 2011 American Physical Society.
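For an ideal gas at uniform pressure, ρ ∝ 1/T, so ∂lnρ/∂s = -(1/T)∂T/∂s and the constitutive relation predicts thermal creep from cold toward hot at speed (α/T)dT/ds. The numbers below are illustrative values for air near room temperature, assumed for the example rather than taken from the paper:

```python
def slip_velocity(alpha, dlnrho_ds):
    """The proposed constitutive relation: v_slip = -alpha * d(ln rho)/ds."""
    return -alpha * dlnrho_ds

# Thermal creep in an ideal gas at uniform pressure: rho ~ 1/T, so
# d(ln rho)/ds = -(1/T) dT/ds and hence v_slip = (alpha/T) dT/ds.
# Illustrative values (assumptions, not from the paper):
# alpha ~ 2.2e-5 m^2/s for air, T = 300 K, dT/ds = 1000 K/m.
alpha, T, dT_ds = 2.2e-5, 300.0, 1000.0
v = slip_velocity(alpha, -dT_ds / T)
print(v)   # ~7.3e-5 m/s, directed from cold toward hot
```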

Spivak D.I.,Massachusetts Institute of Technology
PloS one | Year: 2012

In this paper we introduce the olog, or ontology log, a category-theoretic model for knowledge representation (KR). Grounded in formal mathematics, ologs can be rigorously formulated and cross-compared in ways that other KR models (such as semantic networks) cannot. An olog is similar to a relational database schema; in fact an olog can serve as a data repository if desired. Unlike database schemas, which are generally difficult to create or modify, ologs are designed to be user-friendly enough that authoring or reconfiguring an olog is a matter of course rather than a difficult chore. It is hoped that learning to author ologs is much simpler than learning a database definition language, despite their similarity. We describe ologs carefully and illustrate with many examples. As an application we show that any primitive recursive function can be described by an olog. We also show that ologs can be aligned or connected together into a larger network using functors. The various methods of information flow and institutions can then be used to integrate local and global world-views. We finish by providing several different avenues for future research.

Ceder G.,Massachusetts Institute of Technology
MRS Bulletin | Year: 2010

The idea of first-principles methods is to determine the properties of materials by solving the basic equations of quantum mechanics and statistical mechanics. With such an approach, one can, in principle, predict the behavior of novel materials without the need to synthesize them and create a virtual design laboratory. By showing several examples of new electrode materials that have been computationally designed, synthesized, and tested, the impact of first-principles methods in the field of Li battery electrode materials will be demonstrated. A significant advantage of computational property prediction is its scalability, which is currently being implemented into the Materials Genome Project at the Massachusetts Institute of Technology. Using a high-throughput computational environment, coupled to a database of all known inorganic materials, basic information on all known inorganic materials and a large number of novel "designed" materials is being computed. Scalability of high-throughput computing can easily be extended to reach across the complete universe of inorganic compounds, although challenges need to be overcome to further enable the impact of first-principles methods.

Han J.W.,Massachusetts Institute of Technology
Topics in Catalysis | Year: 2012

Highly stepped metal surfaces can define intrinsically chiral structures, and can potentially be used to separate chiral molecules. The decoration of steps on these surfaces with additional metal atoms may be one potential avenue for improving the enantiospecificity of those surfaces. Density functional theory (DFT) calculations have been performed to study the enantiospecific chemisorption of amino acids adsorbed on the pure, Pd-decorated, and Au-decorated Cu(643)^S surfaces. Negligible differences in adsorption energies for the most stable minima of enantiomers of alanine were found on these surfaces. There are, however, measurable energy differences between the two enantiomers of both serine and cysteine in their most stable states for all surfaces. For serine and cysteine in μ3 adsorption geometries, no enhancement in enantiospecificity upon the step decoration is observed, while for those in μ4 geometries, it is improved, especially on the Au-decorated surface. Our results provide initial information on how tuning the chemistry of intrinsically chiral surfaces can affect the enantiospecific adsorption of amino acids on these surfaces. © Springer Science+Business Media, LLC 2012.

Jagoutz O.,Massachusetts Institute of Technology | Schmidt M.W.,ETH Zurich
Chemical Geology | Year: 2012

The intraoceanic Kohistan arc, northern Pakistan, exposes a complete crustal section composed of infracrustal basal cumulates formed at ≤55 km depth, a broadly basaltic/gabbroic lower crust, a 26 km thick calc-alkaline batholith, and 4 km of a volcanoclastic/sedimentary sequence. The bulk composition of the Kohistan arc crust is approximated by estimating the relative volumes of exposed rocks through detailed field observations, in particular along a representative km-wide transect across the arc, through geobarometric constraints to determine the unit thicknesses, and through satellite images to estimate their lateral extent. We separated the arc into three major units: lower, mid-, and mid- to upper crust, which contain a total of 17 subunits whose average compositions were derived from employing a total of 594 whole rock analyses. The volume-integrated compositions of each unit yield the bulk composition of the arc crust. While the details of the resulting bulk composition depend slightly on the method of integration, all models yield an andesitic bulk supra-Moho composition, with an average SiO2 of 56.6-59.3 wt.% and X_Mg of 0.51-0.55. The Kohistan arc composition is similar to global continental crust estimates, suggesting that modern style arc activity is the dominant process that formed the (preserved) continental crust. A slight deficit in high field strength and incompatible elements in the Kohistan arc with respect to the global continental crust can be mitigated by adding 6-8 wt.% of (basaltic) intraplate type magmas. Our results document that infra arc processes, even in a purely oceanic environment, result in an overall andesitic crust composition in mature arcs, contrary to the widely accepted view that oceanic arcs are basaltic. Bulk crust differentiation from a basaltic parent occurs through foundering of ultramafic cumulates.
Our results imply that secondary reworking processes such as continental collision are of secondary importance to explain the major element chemistry of the bulk continental crust composition. © 2011.

Alexander-Katz A.,Massachusetts Institute of Technology
Macromolecules | Year: 2014

Blood clotting provides a beautiful playground to explore polymer-based materials used in natural self-healing scaffolds with applications in multiple highly sought-after areas that include rheological modifiers, self-healing materials, and targeted drug delivery. In this Perspective we present an overview of blood clotting with particular emphasis on the dynamics of biopolymers involved during the first stages of the clotting cascade. We cover from single molecule dynamics in multiple types of flows to complete plug assembly. Different interaction models between monomers and between monomers and platelets/colloids are considered. The many-body polymer and colloid dynamics during the assembly of blood-clotting scaffolds is discussed in detail. Throughout this Perspective insights and connections between the natural system and synthetic systems are highlighted. We finalize by presenting a list of emerging areas inspired by the clotting process. These emerging areas not only present new and interesting scientific challenges but also offer the possibility of controlling and tailoring the properties of polymer systems in novel ways. © 2014 American Chemical Society.

Royden L.H.,Massachusetts Institute of Technology | Papanikolaou D.J.,National and Kapodistrian University of Athens
Geochemistry, Geophysics, Geosystems | Year: 2011

The Hellenic subduction zone displays well-defined temporal and spatial variations in subduction rate and offers an excellent natural laboratory for studying the interaction among slab buoyancy, subduction rate, and tectonic deformation. In space, the active Hellenic subduction front is dextrally offset by 100-120 km across the Kephalonia Transform Zone, coinciding with the junction of a slowly subducting Adriatic continental lithosphere in the north (5-10 mm/yr) and a rapidly subducting Ionian oceanic lithosphere in the south (∼35 mm/yr). Subduction rates can be shown to have decreased from late Eocene time onward, reaching 5-12 mm/yr by late Miocene time, before increasing again along the southern portion of the subduction system. Geodynamic modeling demonstrates that the differing rates of subduction and the resultant trench offset arise naturally from subduction of oceanic (Pindos) lithosphere until late Eocene time, followed by subduction of a broad tract of continental or transitional lithosphere (Hellenic external carbonate platform) and then by Miocene entry of high-density oceanic (Ionian) lithosphere into the southern Hellenic trench. Model results yield an initiation age for the Kephalonia Transform of 6-8 Ma, in good agreement with observations. Consistency between geodynamic model results and geologic observations suggests that the middle Miocene and younger deformation of the Hellenic upper plate, including formation of the Central Hellenic Shear Zone, can be quantitatively understood as the result of spatial variations in the buoyancy of the subducting slab. Using this assumption, we make late Eocene, middle Miocene, and Pliocene reconstructions of the Hellenic system that include quantitative constraints from subduction modeling and geologic constraints on the timing and mode of upper plate deformation. Copyright 2011 by the American Geophysical Union.

Suzuki T.,Massachusetts Institute of Technology
Nature Physics | Year: 2016

The quantum mechanical (Berry) phase of the electronic wavefunction plays a critical role in the anomalous and spin Hall effects, including their quantized limits. While progress has been made in understanding these effects in ferromagnets, less is known in antiferromagnetic systems. Here we present a study of the antiferromagnet GdPtBi, whose electronic structure is similar to that of the topologically non-trivial HgTe, and where the Gd ions offer the possibility to tune the Berry phase via control of the spin texture. We show that this system supports an anomalous Hall angle Θ_AH > 0.1, comparable to the largest observed in bulk ferromagnets and significantly larger than in other antiferromagnets. Neutron scattering measurements and electronic structure calculations suggest that this effect originates from avoided crossings or Weyl points that develop near the Fermi level due to a breaking of combined time-reversal and lattice symmetries. Berry phase effects associated with such symmetry breaking have recently been explored in kagome networks; our results extend this to half-Heusler systems with non-trivial band topology. The magnetic textures indicated here may also provide pathways towards realizing the topological insulating and semimetallic states predicted in this material class. © 2016 Nature Publishing Group

Loebrich S.,Massachusetts Institute of Technology
Communicative and Integrative Biology | Year: 2014

Clathrin-mediated endocytosis is one of several mechanisms for retrieving transmembrane proteins from the cell surface. This key mechanism is highly conserved in evolution and is found in any eukaryotic cell from yeast to mammals. Studies from several model organisms have revealed that filamentous actin (F-actin) plays multiple distinct roles in shaping Clathrin-mediated endocytosis. Yet, despite the identification of numerous molecules at the interface between endocytic machinery and the cytoskeleton, our mechanistic understanding of how F-actin regulates endocytosis remains limited. Key insights come from neurons where vesicular release and internalization are critical to pre- and postsynaptic function. Recent evidence from human genetics puts postsynaptic organization, glutamate receptor trafficking, and F-actin remodeling in the spotlight as candidate mechanisms underlying neuropsychiatric disorders. Here I review recent findings that connect the F-actin cytoskeleton mechanistically to Clathrin-mediated endocytosis in the central nervous system, and discuss their potential involvement in conferring risk for neuropsychiatric disorder. © 2014 Landes Bioscience.

Bertsekas D.P.,Massachusetts Institute of Technology
IEEE Transactions on Automatic Control | Year: 2011

We consider projected equations for approximate solution of high-dimensional fixed point problems within low-dimensional subspaces. We introduce an analytical framework based on an equivalence with variational inequalities, and algorithms that may be implemented with low-dimensional simulation. These algorithms originated in approximate dynamic programming (DP), where they are collectively known as temporal difference (TD) methods. Even when specialized to DP, our methods include extensions/new versions of TD methods, which offer special implementation advantages and reduced overhead over the standard LSTD and LSPE methods, and can deal with near singularity in the associated matrix inversion. We develop deterministic iterative methods and their simulation-based versions, and we discuss a sharp qualitative distinction between them: the performance of the former is greatly affected by direction and feature scaling, yet the latter have the same asymptotic convergence rate regardless of scaling, because of their common simulation-induced performance bottleneck. © 2011 IEEE.
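
The projected-equation viewpoint can be illustrated on a toy problem. The sketch below (Python/NumPy, with an invented 5-state Markov chain and a 2-dimensional feature subspace, not code from the paper) forms the matrix form of the projected Bellman equation Φr = ΠT(Φr), solves it in LSTD fashion, and checks that the result is indeed a fixed point of the projected equation:

```python
import numpy as np

# Toy 5-state Markov chain with rewards; exact value function for reference
n, gamma = 5, 0.9
rng = np.random.default_rng(0)
P = rng.random((n, n)); P /= P.sum(axis=1, keepdims=True)   # transition matrix
g = rng.random(n)                                            # per-state reward
v = np.linalg.solve(np.eye(n) - gamma * P, g)                # exact v = (I - gamma P)^-1 g

# Low-dimensional subspace spanned by the columns of the feature matrix Phi (n x 2)
Phi = np.column_stack([np.ones(n), np.arange(n, dtype=float)])

# Stationary distribution xi: weights of the projection norm
evals, evecs = np.linalg.eig(P.T)
xi = np.real(evecs[:, np.argmax(np.real(evals))]); xi /= xi.sum()
Xi = np.diag(xi)

# LSTD-style solution of the projected Bellman equation: A r = b
A = Phi.T @ Xi @ (np.eye(n) - gamma * P) @ Phi
b = Phi.T @ Xi @ g
r = np.linalg.solve(A, b)

# Phi r is the fixed point of Pi T: the residual Pi T(Phi r) - Phi r vanishes
Pi = Phi @ np.linalg.solve(Phi.T @ Xi @ Phi, Phi.T @ Xi)     # xi-weighted projection
residual = Pi @ (g + gamma * P @ (Phi @ r)) - Phi @ r
print(np.max(np.abs(residual)), np.linalg.norm(Phi @ r - v))
```

The simulation-based TD methods discussed in the paper replace the exact matrices A and b above with sample-based estimates built from a single trajectory of the chain.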

Lake B.M.,New York University | Salakhutdinov R.,Kings College | Tenenbaum J.B.,Massachusetts Institute of Technology
Science | Year: 2015

People learning new concepts can often generalize successfully from just a single example, yet machine learning algorithms typically require tens or hundreds of examples to perform with similar accuracy. People can also use learned concepts in richer ways than conventional algorithms - for action, imagination, and explanation. We present a computational model that captures these human learning abilities for a large class of simple visual concepts: handwritten characters from the world's alphabets. The model represents concepts as simple programs that best explain observed examples under a Bayesian criterion. On a challenging one-shot classification task, the model achieves human-level performance while outperforming recent deep learning approaches. We also present several "visual Turing tests" probing the model's creative generalization abilities, which in many cases are indistinguishable from human behavior.

Selin N.E.,Massachusetts Institute of Technology
Journal of Environmental Monitoring | Year: 2011

Despite decades of scientific research and policy actions to control mercury, exposure to toxic methylmercury continues to pose risks to humans and the environment. This article critically reviews the linkages between scientific advancements and mercury reduction policies aimed at reducing this risk, focusing on the challenges that mercury poses as an issue that crosses both spatial and temporal scales. Scientific aspects of the mercury issue at various spatial and temporal scales are reviewed, and policy examples at the global, national and local scales are analysed. Policy activity to date has focused on the mercury problem at a single level of spatial scale, and on near-term timescales. Efforts at the local scale have focused on monitoring levels in fish and addressing local contamination issues; national-scale assessments have addressed emissions from particular sources; and global-scale reports have integrated long-range transport of emissions and commercial trade concerns. However, aspects of the mercury issue that cross political scales (such as interactions between different forms of mercury) as well as contamination problems with long timescales are at present beyond the reach of current policies. It is argued that these unaddressed aspects of the mercury problem may be more effectively addressed by (1) expanded cross-scale policy coordination on mitigation actions and (2) better incorporating adaptation into policy decision-making to minimize impacts. © 2011 The Royal Society of Chemistry.

Cusumano M.A.,Massachusetts Institute of Technology
Communications of the ACM | Year: 2014

Bitcoins are currently less like a currency and more like a computer-generated commodity. Those who solve calculations receive blocks of bitcoins in decreasing size as more are produced. The value of a single bitcoin, which has ranged from fractions of a penny to over a thousand dollars, comes primarily from speculators. Yet bitcoins can be divided into fractions and more fractions, which are especially useful for conducting micropayments over the Internet, such as for music or newspaper articles. The Bitcoin ecosystem must solve several common platform-market problems. The wallets resemble smartphone apps with the equivalent of digital bar codes to hold keys and transaction information. Payment processors and wallet firms are critical intermediaries and have also raised a lot of venture capital.

Sadowski A.,Massachusetts Institute of Technology | Narayan R.,Harvard - Smithsonian Center for Astrophysics
Monthly Notices of the Royal Astronomical Society | Year: 2015

We describe a set of simulations of supercritical accretion on to a non-rotating supermassive black hole (BH). The accretion flow takes the form of a geometrically thick disc with twin low-density funnels around the rotation axis. For accretion rates ≳10 Ṁ_Edd, there is sufficient gas in the funnel to make this region optically thick. Radiation from the disc first flows into the funnel, after which it accelerates the optically thick funnel gas along the axis. The resulting jet is baryon loaded and has a terminal density-weighted velocity ≈0.3c. Much of the radiative luminosity is converted into kinetic energy by the time the escaping gas becomes optically thin. These jets are not powered by BH rotation or magnetic driving, but purely by radiation. Their characteristic beaming angle is ~0.2 rad. For an observer viewing down the axis, the isotropic equivalent luminosity of total energy is as much as 10^48 erg s^-1 for a 10^7 M⊙ BH accreting at 10^3 times the Eddington rate. Therefore, energetically, the simulated jets are consistent with observations of the most powerful tidal disruption events, e.g. Swift J1644. The jet velocity is, however, too low to match the Lorentz factor γ > 2 inferred in J1644. There is no such conflict in the case of other tidal disruption events. Since favourably oriented observers see isotropic equivalent luminosities that are highly super-Eddington, the simulated models can explain observations of ultraluminous X-ray sources, at least in terms of luminosity and energetics, without requiring intermediate-mass BHs. © 2015 The Authors. Published by Oxford University Press on behalf of the Royal Astronomical Society.

Fu L.,Massachusetts Institute of Technology | Kane C.L.,University of Pennsylvania
Physical Review Letters | Year: 2012

A field theory of the Anderson transition in two-dimensional disordered systems with spin-orbit interactions and time-reversal symmetry is developed, in which the proliferation of vortexlike topological defects is essential for localization. The sign of vortex fugacity determines the Z2 topological class of the localized phase. There are two distinct fixed points with the same critical exponents, corresponding to transitions from a metal to an insulator and a topological insulator, respectively. The critical conductivity and correlation length exponent of these transitions are computed in an N = 1 - ε expansion in the number of replicas, where for small ε the critical points are perturbatively connected to the Kosterlitz-Thouless critical point. Delocalized states, which arise at the surface of weak topological insulators and topological crystalline insulators, occur because vortex proliferation is forbidden due to the presence of symmetries that are violated by disorder, but are restored by disorder averaging. © 2012 American Physical Society.

Simcoe R.A.,Massachusetts Institute of Technology
Astrophysical Journal | Year: 2011

This paper presents ionization-corrected measurements of the carbon abundance in intergalactic gas at 4.0 < z < 4.5, using spectra of three bright quasars obtained with the Magellan Inamori Kyocera Echelle spectrograph on Magellan. By measuring the C IV strength in a sample of 131 discrete H I-selected quasar absorbers with ρ/ρ̄ ≥ 1.6, we derive a median carbon abundance of [C/H] = -3.55, with a lognormal scatter of σ ≈ 0.8 dex. This median value is a factor of two to three lower than similar measurements made at z ∼ 2.4 using C IV and O VI. The strength of evolution is modestly dependent on the choice of UV background spectrum used to make ionization corrections, although our detection of an abundance evolution is generally robust with respect to this model uncertainty. We present a framework for analyzing the effects of spatial fluctuations in the UV ionizing background at frequencies relevant for C IV production. We also explore the effects of reduced flux between 3 and 4 Rydbergs (as from He II Lyman series absorption) on our abundance estimates. At He II line absorption levels similar to published estimates, the effects are very small, although a larger optical depth could reduce the strength of the abundance evolution. Our results imply that ∼50% of the heavy elements seen in the intergalactic medium at z ∼ 2.4 were deposited in the 1.3 Gyr between z ∼ 4.3 and z ∼ 2.4. The total implied mass flux of carbon into the Lyα forest would constitute 30% of the IMF-weighted carbon yield from known star-forming populations over this period. © 2011. The American Astronomical Society. All rights reserved.

Kwong J.,Texas Instruments | Chandrakasan A.P.,Massachusetts Institute of Technology
IEEE Journal of Solid-State Circuits | Year: 2011

This paper presents an energy-efficient processing platform for wearable sensor nodes, designed to support diverse biological signals and algorithms. The platform features a 0.5 V-1.0 V 16-bit microcontroller, SRAM, and accelerators for biomedical signal processing. Voltage scaling and block-level power gating allow optimizing energy efficiency under applications of varying complexity. Programmable accelerators support numerous usage scenarios and perform signal processing tasks at 133× to 215× lower energy than the general-purpose CPU. When running complete EEG and EKG applications using both the CPU and accelerators, the platform achieves 10.2× and 11.5× energy reductions, respectively, compared to CPU-only implementations. © 2011 IEEE.

Haeupler B.,Massachusetts Institute of Technology
Proceedings of the Annual ACM Symposium on Theory of Computing | Year: 2011

We introduce projection analysis - a new technique to analyze the stopping time of gossip protocols that are based on random linear network coding (RLNC). Projection analysis drastically simplifies, extends and strengthens previous results. We analyze RLNC gossip in a general framework for network and communication models that encompasses and unifies the models used previously in this context. We show, in most settings for the first time, that RLNC gossip converges with high probability in optimal time. Most stopping times are of the form O(k + T), where k is the number of messages to be distributed and T is the time it takes to disseminate one message. This means RLNC gossip achieves "perfect pipelining". Our analysis directly extends to highly dynamic networks in which the topology can change completely at any time. This remains true even if the network dynamics are controlled by a fully adaptive adversary that knows the complete network state. Virtually nothing besides simple O(kT) sequential flooding protocols was previously known for such a setting. While RLNC gossip works in this wide variety of networks, our analysis remains the same and extremely simple. This contrasts with the more complex proofs that were put forward to give weaker results for various special cases. © 2011 ACM.
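
The mechanics of RLNC itself can be sketched in a few lines. The Python toy below (a single sender and receiver over GF(2), a drastic simplification of the gossip setting analyzed in the paper) has the sender transmit random linear combinations of k messages; the receiver decodes by Gaussian elimination once the received coding vectors reach rank k:

```python
import numpy as np

rng = np.random.default_rng(1)
k, L = 8, 16                      # k messages, each a length-L bit vector
M = rng.integers(0, 2, (k, L))    # original messages (rows)

coeffs, payloads = [], []         # received (coding vector, coded payload) pairs

def gf2_rank(A):
    """Rank of a 0/1 matrix over GF(2), by elimination."""
    A = A.copy() % 2
    r = 0
    for c in range(A.shape[1]):
        piv = next((i for i in range(r, A.shape[0]) if A[i, c]), None)
        if piv is None:
            continue
        A[[r, piv]] = A[[piv, r]]
        for i in range(A.shape[0]):
            if i != r and A[i, c]:
                A[i] ^= A[r]
        r += 1
    return r

rounds = 0
while not coeffs or gf2_rank(np.array(coeffs)) < k:
    c = rng.integers(0, 2, k)                 # random GF(2) coding vector
    coeffs.append(c)
    payloads.append(c @ M % 2)                # coded packet = c . M over GF(2)
    rounds += 1

# Decode: solve C X = P over GF(2) by elimination on the augmented matrix [C | P]
aug = np.hstack([np.array(coeffs), np.array(payloads)]) % 2
r = 0
for col in range(k):
    piv = next(i for i in range(r, aug.shape[0]) if aug[i, col])
    aug[[r, piv]] = aug[[piv, r]]
    for i in range(aug.shape[0]):
        if i != r and aug[i, col]:
            aug[i] ^= aug[r]
    r += 1
decoded = aug[:k, k:]
print(rounds, np.array_equal(decoded, M))     # rounds is typically k plus a few
```

The "perfect pipelining" result concerns how quickly such coded packets reach rank k at every node of a (possibly adversarially changing) network, rather than at a single receiver.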

Wang Q.,Massachusetts Institute of Technology
Journal of Computational Physics | Year: 2013

This paper describes a forward algorithm and an adjoint algorithm for computing sensitivity derivatives in chaotic dynamical systems, such as the Lorenz attractor. The algorithms compute the derivative of long time averaged "statistical" quantities to infinitesimal perturbations of the system parameters. The algorithms are demonstrated on the Lorenz attractor. We show that sensitivity derivatives of statistical quantities can be accurately estimated using a single, short trajectory (over a time interval of 20) on the Lorenz attractor. © 2012 Elsevier Inc.
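
For reference, the kind of quantity whose parameter derivative the paper computes can be reproduced with a few lines of standard integration. The Python toy below (an RK4 integrator with invented tolerances; it is not the paper's forward or adjoint algorithm) computes a long time average of z on the Lorenz attractor; naively finite-differencing such averages over a parameter such as ρ is exactly what the paper's sensitivity algorithms are designed to replace:

```python
import numpy as np

def lorenz_avg_z(rho, T=100.0, dt=0.005, transient=10.0, u0=(1.0, 1.0, 20.0)):
    """Long-time average of z on the Lorenz attractor, via RK4 integration."""
    sigma, beta = 10.0, 8.0 / 3.0

    def f(u):
        x, y, z = u
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

    u = np.array(u0, float)
    n_trans, n_avg = int(transient / dt), int(T / dt)
    zsum = 0.0
    for i in range(n_trans + n_avg):
        k1 = f(u); k2 = f(u + 0.5 * dt * k1)
        k3 = f(u + 0.5 * dt * k2); k4 = f(u + dt * k3)
        u = u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        if i >= n_trans:
            zsum += u[2]
    return zsum / n_avg

avg = lorenz_avg_z(28.0)
print(avg)   # empirically about 23.5 for the classical parameters
```

Because nearby trajectories diverge exponentially, differentiating a single finite-time average with respect to ρ gives noisy, ill-conditioned results; the paper's point is that derivatives of the statistical averages can nevertheless be estimated from a single short trajectory.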

Economists traditionally view a Pigouvian fee on carbon dioxide and other greenhouse gas emissions, either via carbon taxes or emissions caps and permit trading ("cap-and-trade"), as the economically optimal or "first-best" policy to address climate change-related externalities. Yet several political economy factors can severely constrain the implementation of these carbon pricing policies, including opposition of industrial sectors with a concentration of assets that would lose considerable value under such policies; the collective action nature of climate mitigation efforts; principal agent failures; and a low willingness-to-pay for climate mitigation by citizens. Real-world implementations of carbon pricing policies can thus fall short of the economically optimal outcomes envisioned in theory. Consistent with the general theory of the second-best, the presence of binding political economy constraints opens a significant "opportunity space" for the design of creative climate policy instruments with superior political feasibility, economic efficiency, and environmental efficacy relative to the constrained implementation of carbon pricing policies. This paper presents theoretical political economy frameworks relevant to climate policy design and provides corroborating evidence from the United States context. It concludes with a series of implications for climate policy making and argues for the creative pursuit of a mix of second-best policy instruments.
•Political economy constraints can bind carbon pricing policies.
•These constraints can prevent implementation of theoretically optimal carbon prices.
•U.S. household willingness-to-pay for climate policy likely falls in the range of $80-$200 per year.
•U.S. carbon prices may be politically constrained to as low as $2-$8 per ton of CO2.
•An opportunity space exists for improvements in climate policy design and outcomes. © 2014 Elsevier Ltd.

Wei W.,University of Alberta | Wei W.,Massachusetts Institute of Technology | Cui X.,University of Alberta | Chen W.,University of Alberta | Ivey D.G.,University of Alberta
Chemical Society Reviews | Year: 2011

Electrochemical supercapacitors (ECs), characterized by high power and reasonably high energy densities, have become a versatile solution to various emerging energy applications. This critical review describes some materials science aspects of manganese oxide-based materials for these applications, primarily including the strategic design and fabrication of these electrode materials. Nanostructurization, chemical modification and incorporation with high surface area, conductive nanoarchitectures are the three major strategies in the development of high-performance manganese oxide-based electrodes for EC applications. Numerous works reviewed herein have shown enhanced electrochemical performance in the manganese oxide-based electrode materials. However, many fundamental questions remain unanswered, particularly with respect to characterization and understanding of electron transfer and atomic transport of the electrochemical interface processes within the manganese oxide-based electrodes. In order to fully exploit the potential of manganese oxide-based electrode materials, an unambiguous appreciation of these basic questions and optimization of synthesis parameters and material properties are critical for the further development of EC devices (233 references). © 2011 The Royal Society of Chemistry.

Homan J.,Massachusetts Institute of Technology
Astrophysical Journal Letters | Year: 2012

Relativistic Lense-Thirring precession of a tilted inner accretion disk around a compact object has been proposed as a mechanism for low-frequency (∼0.01-70 Hz) quasi-periodic oscillations (QPOs) in the light curves of X-ray binaries. A substantial misalignment angle (∼15°-20°) between the inner-disk rotation axis and the compact-object spin axis is required for the effects of this precession to produce observable modulations in the X-ray light curve. A consequence of this misalignment is that in high-inclination X-ray binaries the precessing inner disk will quasi-periodically intercept our line of sight to the compact object. In the case of neutron-star systems, this should have a significant observational effect, since a large fraction of the accretion energy is released on or near the neutron-star surface. In this Letter, I suggest that this specific effect of Lense-Thirring precession may already have been observed as ∼1 Hz QPOs in several dipping/eclipsing neutron-star X-ray binaries. © 2012. The American Astronomical Society. All rights reserved.

Varela C.,Massachusetts Institute of Technology
Journal of Neurophysiology | Year: 2013

Thalamic input to the neocortex is crucial for sensory perception and constitutes the basis of complex awake behavior. Connections within the neocortex play an important role in internally generated neural activity, which is considered critical for memory retrieval and for the generation of imagery in our dreams. Modulatory neurotransmitters, such as noradrenaline and acetylcholine, gate information transmission in the brain. Favero et al. (J Neurophysiol 108: 1010-1024, 2012) show that modulators differentially facilitate thalamocortical relative to intracortical transmission in the input layers of cortex. © 2013 the American Physiological Society.

Meyers E.M.,Massachusetts Institute of Technology
Frontiers in Neuroinformatics | Year: 2013

Population decoding is a powerful way to analyze neural data; however, currently only a small percentage of systems neuroscience researchers use this method. In order to increase the use of population decoding, we have created the Neural Decoding Toolbox (NDT), a Matlab package that makes it easy to apply population decoding analyses to neural activity. The design of the toolbox revolves around four abstract object classes, which enable users to interchange particular modules in order to try different analyses while keeping the rest of the processing stream intact. The toolbox is capable of analyzing data from many different types of recording modalities, and we give examples of how it can be used to decode basic visual information from neural spiking activity and how it can be used to examine how invariant the activity of a neural population is to stimulus transformations. Overall, this toolbox will make it much easier for neuroscientists to apply population decoding analyses to their data, which should help increase the pace of discovery in neuroscience. © 2013 Meyers.
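
The core of a population decoding analysis is easy to sketch outside of Matlab. The Python toy below (synthetic Poisson spike counts and a nearest-centroid decoder; an illustration of the general idea, not the NDT API or its cross-validation scheme) trains on half the trials and decodes stimulus identity from the held-out half:

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_stim, n_trials = 30, 4, 40

# Synthetic data: each stimulus evokes a different mean firing-rate pattern
rates = rng.uniform(2.0, 20.0, (n_stim, n_neurons))
X = np.vstack([rng.poisson(rates[s], (n_trials, n_neurons)) for s in range(n_stim)])
y = np.repeat(np.arange(n_stim), n_trials)

# Train/test split and a nearest-centroid decoder on the population vectors
idx = rng.permutation(len(y))
train, test = idx[: len(y) // 2], idx[len(y) // 2 :]
centroids = np.array([X[train][y[train] == s].mean(axis=0) for s in range(n_stim)])

def decode(x):
    # pick the stimulus whose mean population vector is closest in Euclidean distance
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

acc = np.mean([decode(X[t]) == y[t] for t in test])
print(acc)   # well above the 1/4 chance level
```

The modular design the abstract describes corresponds to swapping out pieces of this pipeline (the data source, the classifier, the cross-validation scheme) while the rest stays fixed.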

Price E.,Massachusetts Institute of Technology
Proceedings of the Annual ACM-SIAM Symposium on Discrete Algorithms | Year: 2011

We develop an algorithm for estimating the values of a vector x ∈ R^n over a support S of size k from a randomized sparse binary linear sketch Ax of size O(k). Given Ax and S, we can recover x′ with ∥x′ - x_S∥_2 ≤ ε∥x - x_S∥_2 with probability at least 1 - k^(-Ω(1)). The recovery takes O(k) time. While interesting in its own right, this primitive also has a number of applications. For example, we can: 1. Improve the linear k-sparse recovery of heavy hitters in Zipfian distributions with O(k log n) space from a 1 + ε approximation to a 1 + o(1) approximation, giving the first such approximation in O(k log n) space when k ≤ O(n^(1-ε)). 2. Recover block-sparse vectors with O(k) space and a 1 + ε approximation. Previous algorithms required either ω(k) space or ω(1) approximation.
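
The flavor of the set-query primitive can be imitated with a Count-Sketch-style estimator. The Python toy below is a stand-in only: it uses a larger-than-O(k) sketch and a median estimator rather than the paper's O(k) sparse binary construction, and all the sizes are invented. Given the sketch and the known support S, each coordinate is estimated from its hashed buckets:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 10_000, 20
d, w = 9, 40 * k          # d hash rows of width w (toy sizing, not the paper's O(k))

# k-sparse signal x with known support S
x = np.zeros(n)
support = rng.choice(n, k, replace=False)
x[support] = 10 * rng.normal(size=k)

# Count-Sketch: per-row bucket hashes and random signs
h = rng.integers(0, w, (d, n))
s = rng.choice([-1.0, 1.0], (d, n))
sketch = np.zeros((d, w))
for r in range(d):
    np.add.at(sketch[r], h[r], s[r] * x)

# Set query: estimate x_i for i in S as the median over rows of its signed bucket
est = np.array([np.median([s[r, i] * sketch[r, h[r, i]] for r in range(d)])
                for i in support])
err = np.max(np.abs(est - x[support]))
print(err)   # zero whenever a majority of rows are collision-free for every i in S
```

Because x here is exactly k-sparse, any coordinate whose bucket is collision-free in a majority of rows is recovered exactly; the paper's contribution is achieving comparable guarantees with a sketch of size O(k) and O(k) decoding time.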

Li E.Y.,Massachusetts Institute of Technology | Marzari N.,Ecole Polytechnique Federale de Lausanne
ACS Nano | Year: 2011

We address the issue of the low electrical conductivity observed in carbon nanotube networks using first-principles calculations of the structure, stability, and ballistic transport of different nanotube junctions. We first study covalent linkers, using the nitrene-pyrazine case as a model for conductance-preserving [2 + 1] cycloadditions, and discuss the reasons for their poor performance. We then characterize the role of transition-metal adsorbates in improving mechanical coupling and electrical tunneling between the tubes. We show that the strong hybridization between the transition-metal d orbitals with the π orbitals of the nanotube can provide an excellent electrical bridge for nanotube-nanotube junctions. This effect is maximized in the case of nitrogen-doped nanotubes, thanks to the strong mechanical coupling between the tubes mediated by a single transition metal adatom. Our results suggest effective strategies to optimize the performance of carbon nanotube networks. © 2011 American Chemical Society.

Hendrickx J.M.,Catholic University of Louvain | Tsitsiklis J.N.,Massachusetts Institute of Technology
IEEE Transactions on Automatic Control | Year: 2013

We consider continuous-time consensus seeking systems whose time-dependent interactions are cut-balanced, in the following sense: if a group of agents influences the remaining ones, the former group is also influenced by the remaining ones by at least a proportional amount. Models involving symmetric interconnections and models in which a weighted average of the agent values is conserved are special cases. We prove that such systems always converge. We give a sufficient condition on the evolving interaction topology for the limit values of two agents to be the same. Conversely, we show that if our condition is not satisfied, then these limits are generically different. These results allow treating systems where the agent interactions are a priori unknown, being for example random or determined endogenously by the agent values. © 2012 IEEE.
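
A minimal discrete-time analogue (Python, with symmetric weights, the simplest cut-balanced special case, and invented numbers) shows both claimed behaviors: the agent values converge, and the conserved weighted average stays fixed. The update is an Euler step of the continuous-time system x' = -Lx:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6
# Symmetric interaction weights a_ij = a_ji: a special case of cut-balance
A = rng.random((n, n)); A = (A + A.T) / 2; np.fill_diagonal(A, 0.0)
L = np.diag(A.sum(axis=1)) - A        # graph Laplacian of the interaction graph
eps = 0.9 / A.sum(axis=1).max()       # step size keeping I - eps*L nonnegative
W = np.eye(n) - eps * L               # doubly stochastic update matrix

x = rng.normal(0, 5, n)
mean0 = x.mean()
for _ in range(500):
    x = W @ x                         # Euler step of  x' = -L x

print(np.ptp(x), abs(x.mean() - mean0))   # spread -> 0, average conserved
```

With symmetric weights the whole graph is effectively one "cluster", so all limits coincide; the paper's condition on the evolving topology determines which pairs of agents end up with the same limit when the interactions are merely cut-balanced rather than symmetric.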

Janes K.A.,University of Virginia | Lauffenburger D.A.,Massachusetts Institute of Technology
Journal of Cell Science | Year: 2013

Computational models of cell signalling are perceived by many biologists to be prohibitively complicated. Why do math when you can simply do another experiment? Here, we explain how conceptual models, which have been formulated mathematically, have provided insights that directly advance experimental cell biology. In the past several years, models have influenced the way we talk about signalling networks, how we monitor them, and what we conclude when we perturb them. These insights required wet-lab experiments but would not have arisen without explicit computational modelling and quantitative analysis. Today, the best modellers are cross-trained investigators in experimental biology who work closely with collaborators but also undertake experimental work in their own laboratories. Biologists would benefit by becoming conversant in core principles of modelling in order to identify when a computational model could be a useful complement to their experiments. Although the mathematical foundations of a model are useful to appreciate its strengths and weaknesses, they are not required to test or generate a worthwhile biological hypothesis computationally. © 2013. Published by The Company of Biologists Ltd.

Shapiro J.H.,Massachusetts Institute of Technology
Physical Review A - Atomic, Molecular, and Optical Physics | Year: 2011

The effect of scintillation, arising from propagation through atmospheric turbulence, on the sift and error probabilities of a quantum key distribution (QKD) system that uses the weak-laser-pulse version of the Bennett-Brassard 1984 (BB84) protocol is evaluated. Two earth-space scenarios are examined: satellite-to-ground and ground-to-satellite transmission. Both lie in the far-field power-transfer regime. This work complements previous analysis of turbulence effects in near-field terrestrial BB84 QKD. More importantly, it shows that scintillation has virtually no impact on the sift and error probabilities in earth-space BB84 QKD, something that has been implicitly assumed in prior analyses for that application. This result contrasts rather sharply with what is known for high-speed laser communications over such paths, in which deep, long-lived scintillation fades present a major challenge to high-reliability operation. © 2011 American Physical Society.

Kaminer I.,Massachusetts Institute of Technology
Nature Photonics | Year: 2015

Rapid progress in nanofabrication methods has fuelled a quest for ultra-compact photonic integrated systems and nanoscale light sources. The prospect of small-footprint, high-quality emitters of short-wavelength radiation is especially exciting due to the importance of extreme-ultraviolet and X-ray radiation as research and diagnostic tools in medicine, engineering and the natural sciences. Here, we propose a highly directional, tunable and monochromatic radiation source based on electrons interacting with graphene plasmons. Our complementary analytical theory and ab initio simulations demonstrate that the high momentum of the strongly confined graphene plasmons enables the generation of high-frequency radiation from relatively low-energy electrons, bypassing the need for lengthy electron acceleration stages or extreme laser intensities. For instance, highly directional 20 keV photons could be generated in a table-top design using electrons from conventional radiofrequency electron guns. The conductive nature and high damage threshold of graphene make it especially suitable for this application. Our electron–plasmon scattering theory is readily extended to other systems in which free electrons interact with surface waves. © 2015 Nature Publishing Group

Wilczek F.,Massachusetts Institute of Technology
Physical Review Letters | Year: 2013

I present a simple model that exhibits a temporal analogue of superconducting crystalline (Larkin-Ovchinnikov-Ferrell-Fulde) ordering, with a time-dependent order parameter. I sketch designs for minimally dissipative ac circuits, all based on time translation symmetry (τ) invariant dynamics, exploiting weak links (Josephson effects). These systems violate τ spontaneously. I also discuss effective theories of that phenomenon, and space-time generalizations. © 2013 American Physical Society.

Chen S.,University of Chicago | Chen S.,Massachusetts Institute of Technology | Krinsky B.H.,University of Chicago | Long M.,University of Chicago
Nature Reviews Genetics | Year: 2013

During the course of evolution, genomes acquire novel genetic elements as sources of functional and phenotypic diversity, including new genes that originated in recent evolution. In the past few years, substantial progress has been made in understanding the evolution and phenotypic effects of new genes. In particular, an emerging picture is that new genes, despite being present in the genomes of only a subset of species, can rapidly evolve indispensable roles in fundamental biological processes, including development, reproduction, brain function and behaviour. The molecular underpinnings of how new genes can develop these roles are starting to be characterized. These recent discoveries yield fresh insights into our broad understanding of biological diversity at refined resolution. © 2013 Macmillan Publishers Limited. All rights reserved.

Guarente L.,Massachusetts Institute of Technology
Methods in Molecular Biology | Year: 2013

Over the past 15 years, the number of papers published on sirtuins has exploded. The initial link between sirtuins and aging comes from studies in yeast, in which it was shown that the life span of yeast mother cells (replicative aging) was proportional to the SIR2 gene dosage. Subsequent studies have shown that SIR2 homologs also slow aging in C. elegans, Drosophila, and mice. An important insight into the function of sirtuins came from the finding that yeast Sir2p and mammalian SIRT1 are NAD+-dependent protein deacetylases. In mammals, there are seven sirtuins (SIRT1-7). Their functions do not appear to be redundant, in part because three are primarily nuclear (SIRT1, 6, and 7), three are mitochondrial (SIRT3, 4, and 5), and one is cytoplasmic (SIRT2). The past decade has provided an avalanche of data showing deacetylation of many key transcription factors. In this chapter, I will address the evidence that sirtuins mediate the effects of caloric restriction (CR) on physiology and will then turn to the evidence of a relationship between sirtuins and aging and life span. Finally, I will discuss the roles of sirtuins in diseases of aging and the prospects of translating these findings to novel therapeutic strategies to treat major diseases. © 2013 Springer Science+Business Media, LLC.

Vandiver J.K.,Massachusetts Institute of Technology
Journal of Fluids and Structures | Year: 2012

A dimensionless damping parameter, c* = 2cω/(ρU²), is defined for cylinders experiencing flow-induced vibration. It overcomes the limitations of "mass-damping" parameters, which first came into use in 1955. A review of the history of mass-damping parameters reveals that they have been used in three principal variations, commonly expressed as S_c, S_G and α. For spring-mounted rigid cylinders all three forms reduce to a constant times the following dimensionless group, 2c/(πρD²ω_n), where c is the structural damping constant per unit length of cylinder and ω_n is the natural frequency of the oscillator, including, when so specified, the fluid added mass. All have been used to predict A*_max = A_max/D, the peak response amplitude for VIV. None are useful at organizing response at reduced velocities away from the peak in response. The proposed alternative, c*, may be used to characterize VIV at all reduced velocities in the lock-in range. The simple product of A* and c* is shown to equal C_L, the lift coefficient, thus providing a simple method for compiling C_L data from free response measurements. Mass-damping parameters are not well-suited to organizing the response of flexible cylinders in sheared flows or for cylinders equipped with strakes or fairings. c* is well-suited for use with sheared flows or for cylinders with partial coverage of strakes or fairings. Data from three independent sources are used to illustrate the applications of c*. It is shown that the method of modal analysis may be used to generalize the application of c* to flexible risers. An example for a riser with partial fairing coverage is presented. © 2012 Elsevier Ltd.
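
The definition is simple enough to evaluate directly. The Python snippet below plugs purely illustrative numbers (all values invented, not taken from the paper's data sets) into c* = 2cω/(ρU²) and into the A*·c* = C_L relation:

```python
import math

# Hypothetical values for a cylinder in water (illustrative only)
rho = 1000.0                  # fluid density, kg/m^3
c = 50.0                      # structural damping constant per unit length, N·s/m^2
U = 1.0                       # flow speed, m/s
omega = 2 * math.pi * 1.5     # response frequency, rad/s (1.5 Hz)

c_star = 2 * c * omega / (rho * U**2)   # dimensionless damping c*
A_star = 0.6                             # hypothetical measured A/D at this reduced velocity
C_L = A_star * c_star                    # lift coefficient from free response, per A*·c* = C_L
print(round(c_star, 3), round(C_L, 3))   # → 0.942 0.565
```

Because c* normalizes damping by the dynamic pressure ρU² rather than by structural mass, it remains defined at every reduced velocity, which is what lets it organize response away from the resonant peak.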

Ebert M.S.,Massachusetts Institute of Technology
Cell | Year: 2012

Biological systems use a variety of mechanisms to maintain their functions in the face of environmental and genetic perturbations. Increasing evidence suggests that, among their roles as posttranscriptional repressors of gene expression, microRNAs (miRNAs) help to confer robustness to biological processes by reinforcing transcriptional programs and attenuating aberrant transcripts, and they may in some network contexts help suppress random fluctuations in transcript copy number. These activities have important consequences for normal development and physiology, disease, and evolution. Here, we will discuss examples and principles of miRNAs that contribute to robustness in animal systems. Copyright © 2012 Elsevier Inc. All rights reserved.

Jagoutz O.,Massachusetts Institute of Technology | Schmidt M.W.,ETH Zurich
Earth and Planetary Science Letters | Year: 2013

Most primitive arc melts are basaltic in composition, yet the bulk continental crust, thought to be generated in arcs, is andesitic. In order to produce an andesitic crust from primitive arc basalts, rocks complementary to the andesitic crust have to be fractionated and subsequently removed, most likely through density sorting in the lower arc crust. The Kohistan Arc in northern Pakistan offers a unique opportunity to constrain the composition and volume of material fluxes involved in this process. In a >10 km lower crustal section, cumulates (dunites, wehrlites, websterites, clinopyroxene-bearing garnetites and hornblendites, and garnet-gabbros) are exposed that are 0.1-0.3 g/cm³ denser than the underlying mantle. The cumulates combine with the andesitic bulk Kohistan Arc crust to reproduce the major and trace element composition of primitive basaltic arc melts. Our petrochemical analysis suggests that fractionation and subsequent foundering of wehrlites + ultramafic hornblende-garnet-clinopyroxene cumulates + garnet-gabbros is a viable mechanism for producing andesitic crust from a calc-alkaline/tholeiitic primitive high-Mg basalt. The mass of the foundered material is approximately twice that of the arc crust generated. For an overall andesitic arc composition, we estimate a magma flux into the arc (11-15 km³/yr) about three times the rate of arc crust production itself. Foundering fluxes of cumulates (6.4-8.1 km³/yr) are one third to half those of the globally subducted oceanic crust (~19 km³/yr). Hence, the delaminate forms a volumetrically significant, albeit refractory and depleted, geochemical reservoir in the mantle. Owing to its low U/Pb and high Lu/Hf, the foundered material evolves with time into a reservoir characterized by unradiogenic Pb and highly radiogenic Hf isotopes, unlike any of the common mantle endmembers defined by OIB chemistry. The unradiogenic Pb of the foundered arc cumulates could counterbalance the radiogenic Pb composition of the depleted mantle. The predicted highly radiogenic Hf (at rather unradiogenic Nd) of the foundered material can explain the εHf-εNd systematics observed in some abyssal peridotites and mantle xenoliths. © 2013.

Wunsch C.,Massachusetts Institute of Technology
Journal of Physical Oceanography | Year: 2010

The time- and space-scale descriptive power of two-dimensional Fourier analysis is exploited to reanalyze the behavior of midlatitude variability as seen in altimetric data. These data are used to construct a purely empirical and analytical frequency-zonal wavenumber spectrum of ocean variability for periods between about 20 days and 15 yr and on spatial scales of about 200-10 000 km. The spectrum is dominated by motions along a "nondispersive" line, which is a robust feature of the data but for whose prominence a complete theoretical explanation is not available. The estimated spectrum also contains significant energy at all frequencies and wavenumbers in this range, including eastward-propagating motions, which are likely some combination of nonlinear spectral cascades, wave propagation, and wind-forced motions. The spectrum can be used to calculate statistical expectations of spatial average sea level and transport variations. However, because the statistics of trend determination in quantities such as sea level and volume transports depend directly upon the spectral limit of the frequency approaching zero, the appropriate significance calculations remain beyond reach, because low-frequency variability is indistinguishable from trends already present in the data. © 2010 American Meteorological Society.
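The "nondispersive line" the abstract describes can be illustrated with a toy two-dimensional Fourier analysis: a synthetic, nondispersively propagating signal concentrates essentially all of its spectral power on the line ω = c·k. This is a sketch on synthetic data, not the altimetric analysis itself; the grid sizes and wavenumbers are arbitrary.

```python
# Toy illustration (synthetic data, not altimetry): a westward-
# propagating nondispersive signal eta(x,t) = cos(k0*x - w0*t)
# puts its 2-D Fourier power on the line omega = c*k, here a single
# frequency-wavenumber pair and its complex conjugate.
import numpy as np

nx, nt = 64, 64
x = np.arange(nx)
t = np.arange(nt)
k0, w0 = 5, 10                       # integer cycles across the domain
X, T = np.meshgrid(x, t)             # T varies along axis 0
eta = np.cos(2 * np.pi * (k0 * X / nx - w0 * T / nt))

power = np.abs(np.fft.fft2(eta))**2
# The strongest bin sits at (w0, nx-k0) or its conjugate (nt-w0, k0),
# depending on floating-point ties; both lie on the line omega = c*k.
it, ik = np.unravel_index(np.argmax(power), power.shape)
print(it, ik)
```

Real altimeter spectra are of course far broader than a single line, but the same 2-D transform is what separates westward, eastward, and standing variability in frequency-wavenumber space.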

Schmitt A.J.,Massachusetts Institute of Technology | Snyder L.V.,Lehigh University
Computers and Operations Research | Year: 2012

We consider a firm facing supply chain risk in two forms: disruptions and yield uncertainty. We demonstrate the importance of analyzing a sufficiently long time horizon when modeling inventory systems subject to supply disruptions. Several previous papers have used single-period newsboy-style models to study supply disruptions, and we show that such models underestimate the risk of supply disruptions and generate sub-optimal solutions. We consider one case where a firm's only sourcing option is an unreliable supplier subject to disruptions and yield uncertainty, and a second case where a second, reliable (but more expensive) supplier is available. We develop models for both cases to determine the optimal order and reserve quantities. We then compare these results to those found when a single-period approximation is used. We demonstrate that a single-period approximation causes increases in cost, under-utilizes the unreliable supplier, and distorts the order quantities that should be placed with the reliable supplier in the two-supplier case. Moreover, using a single-period model can lead to selecting the wrong strategy for mitigating supply risk. © 2009 Elsevier Ltd. All rights reserved.

Kwak S.K.,Massachusetts Institute of Technology
Journal of High Energy Physics | Year: 2010

We investigate the full set of equations of motion in double field theory and discuss their O(D,D) symmetry and gauge transformation properties. We obtain a Ricci-like tensor, its associated Bianchi identities, and relate our results to those with a generalized metric formulation. © 2010 SISSA.

Paltsev S.,Massachusetts Institute of Technology
Energy Economics | Year: 2014

Russia is an important energy supplier as it holds the world's largest natural gas reserves and is the world's largest exporter of natural gas. Despite a recent reduction in Russia's exports to Europe, it plans to build new pipelines. We explore long-term (up to 2050) scenarios of Russian natural gas exports to Europe and Asia using the MIT Emissions Prediction and Policy Analysis (EPPA) model, a computable general equilibrium model of the world economy. We find that over the next 20-40 years natural gas can still play a substantial role in Russian exports, and there are substantial reserves to support development of a gas-oriented energy system both in Russia and in its current and potential gas importers. Based on the considered scenarios, Russia does not need any new pipeline capacity to the EU unless it wants to diversify its export routes to supply the EU without any gas transit via Ukraine and Belarus. Asian markets are attractive for Russian gas and substantial volumes may be exported there. Relatively cheap shale gas in China may sufficiently alter the prospects of Russian gas, especially in Asian markets. In the Reference scenario, exports of natural gas grow from Russia's current 7 Tcf to 11-12 Tcf in 2030 and 13-14 Tcf in 2050. Alternative scenarios provide a wider range of projections, with the share of Russian gas exports shipped to Asian markets rising to more than 30% by 2030 and almost 50% in 2050. Europe's reliance on LNG imports increases, while it still maintains sizable imports from Russia. © 2014 Elsevier B.V.

Kumar S.,Massachusetts Institute of Technology
IEEE Journal on Selected Topics in Quantum Electronics | Year: 2011

Terahertz quantum cascade lasers (QCLs) emit radiation due to intersubband optical transitions in semiconductor superlattices that can be engineered by design. Among a variety of possible design schemes, we have pursued designs that utilize strong electron-phonon interaction in the semiconductor as a means to establish population inversion for optical gain. This report describes recent progress in phonon-depopulated terahertz QCLs. Operation above 160 K has been realized in GaAs/AlGaAs-based QCLs with metal-metal waveguides for frequencies ranging from 1.8 to 4.4 THz (λ ≈ 17-70 μm). A record operating temperature of 186 K has been demonstrated for a 3.9-THz QCL based on a diagonal design scheme. Also, operation down to a frequency of 1.45 THz (λ ≈ 205 μm) has been achieved. Whereas metal-metal waveguides provide strong mode confinement and low loss at terahertz frequencies, obtaining single-mode operation in a narrow beam pattern posed unconventional challenges due to the subwavelength dimensions of the emitting aperture. New techniques in waveguide engineering have been developed to overcome those challenges. Finally, a unique method to tune the resonant-cavity mode of metal-metal terahertz wire lasers has been demonstrated, realizing continuous tuning over a range of 137 GHz for a 3.8-THz QCL. © 2010 IEEE.

Frebel A.,Massachusetts Institute of Technology | Bromm V.,University of Texas at Austin
Astrophysical Journal | Year: 2012

We utilize metal-poor stars in the local, ultra-faint dwarf galaxies (UFDs; L_tot ≤ 10⁵ L_⊙) to empirically constrain the formation process of the first galaxies. Since UFDs have much simpler star formation histories than the halo of the Milky Way, their stellar populations should preserve the fossil record of the first supernova (SN) explosions in their long-lived, low-mass stars. Guided by recent hydrodynamical simulations of first galaxy formation, we develop a set of stellar abundance signatures that characterize the nucleosynthetic history of such an early system if it was observed in the present-day universe. Specifically, we argue that the first galaxies are the product of chemical "one-shot" events, where only one (long-lived) stellar generation forms after the first, Population III, SN explosions. Our abundance criteria thus constrain the strength of negative feedback effects inside the first galaxies. We compare the stellar content of UFDs with these one-shot criteria. Several systems (Ursa Major II, and also Coma Berenices, Bootes I, Leo IV, Segue 1) largely fulfill the requirements, indicating that their high-redshift predecessors did experience strong feedback effects that shut off star formation. We term the study of the entire stellar population of a dwarf galaxy for the purpose of inferring details about the nature and origin of the first galaxies "dwarf galaxy archaeology." This will provide clues to the connection of the first galaxies, the surviving metal-poor dwarf galaxies, and the building blocks of the Milky Way. © 2012. The American Astronomical Society. All rights reserved.

Hutchinson I.H.,Massachusetts Institute of Technology
Physical Review Letters | Year: 2011

The transverse force on a spherical charged grain lying in the plasma wake of another grain is analyzed to assess the importance of the ion-drag perturbation in addition to the wake-potential gradient. The ion-drag perturbation is intrinsically one order smaller than the wake-potential force in the ratio of grain size (r_p) to Debye length (λ_De), so the ion-drag perturbation is important only in nonlinear wakes. Rigorous particle-in-cell calculations of the force are performed in the nonlinear regime with two interacting grains. It is found that even for quite large grains, r_p/λ_De = 0.1, the force is dominated by the wake-potential gradient. The wake-potential structure can then help explain the preferred alignment of floating dust grains. © 2011 American Physical Society.

Silvestri A.,Massachusetts Institute of Technology
Physical Review Letters | Year: 2011

I study the profile of the chameleon field around a radially pulsating mass. Focusing on the case in which the background (static) chameleon profile exhibits a thin shell, I add small perturbations to the source in the form of time-dependent radial pulsations. It is found that the chameleon field inherits a time dependence and there is a resultant scalar radiation from the region of the source. This has several interesting and potentially testable consequences. © 2011 American Physical Society.

Korolev K.S.,Massachusetts Institute of Technology | Nelson D.R.,Harvard University
Physical Review Letters | Year: 2011

Mutualism is a major force driving evolution and sustaining ecosystems. Although the importance of spatial degrees of freedom and number fluctuations is well known, their effects on mutualism are not fully understood. With range expansions of microbes in mind, we show that, even when mutualism confers a selective advantage, it persists only in populations with high density and frequent migrations. When these parameters are reduced, mutualism is generically lost via a directed percolation (DP) process, with a phase diagram strongly influenced by an exceptional symmetric DP (DP2) transition. © 2011 American Physical Society.

Cusumano M.A.,Massachusetts Institute of Technology
Communications of the ACM | Year: 2013

The article discusses the key elements of successful startups. A strong management team has the right level and breadth of experience, and needs strong technical leadership if it is a technology-driven company. At the same time, ventures dominated by technology often spend too much money refining the product and too little effort getting ready for customers and closing deals. Successful startups usually focus on markets capable of becoming large, fast growing, and profitable for new entrants; such structural factors matter more than a market's current state. The entrepreneurs then need to convince investors of their particular advantages over competitors, and how their firms will reach customers and capitalize on those advantages. Some startups enter a market with a product or service that is cheaper but less functional than what large, established firms offer.

Reis P.M.,Massachusetts Institute of Technology
Nature Materials | Year: 2011

Hierarchical patterns of localized folds have been observed in thin films under biaxial compression, showing an intriguing resemblance to fracture patterns in drying pastes and to venation networks in leaves. Kim, Abkarian and Stone report that repetitive and successive wrinkle-to-fold transitions in a thin film under biaxial compression result in hierarchical patterns of localized folds, and provide insights into their nucleation and evolution. Kim and co-workers used a film consisting of a stiff, thin polymer crust floating on a viscoelastic foundation. When irradiated with plasma, the crust expands isotropically, and the resulting confinement-induced equibiaxial compression leads to the development of a regular wrinkling field. The authors found that the patterns in the network of folds are hierarchical: consecutive generations of small-scale structures form progressively under increased compression, creating ever smaller domains.

Malik R.,Massachusetts Institute of Technology | Zhou F.,University of California at Los Angeles | Ceder G.,University of California at Los Angeles
Nature Materials | Year: 2011

Lithium-ion batteries are a key technology for multiple clean energy applications. Their energy and power density is largely determined by the cathode materials, which store Li by incorporation into their crystal structure. Most commercialized cathode materials, such as LiCoO2, LiMn2O4, Li(Ni,Co,Al)O2 or Li(Ni,Co,Mn)O2, form solid solutions over a large concentration range, with occasional weak first-order transitions as a result of ordering of Li or electronic effects. An exception is LiFePO4, which stores Li through a two-phase transformation between FePO4 and LiFePO4 (refs 1-4). Notwithstanding having to overcome extra kinetic barriers, such as nucleation of the second phase and growth through interface motion, the observed rate capability of LiFePO4 has become remarkably high. In particular, once transport limitations at the electrode level are removed through carbon addition and particle size reduction, the innate rate capability of LiFePO4 is revealed to be very high. We demonstrate that the reason LiFePO4 functions as a cathode at reasonable rate is the availability of a single-phase transformation path at very low overpotential, allowing the system to bypass nucleation and growth of a second phase. The LixFePO4 system is an example where the kinetic transformation path between LiFePO4 and FePO4 is fundamentally different from the path deduced from its equilibrium phase diagram. © 2011 Macmillan Publishers Limited. All rights reserved.

Through the resilience design approach, I propose to extend the resilience paradigm by re-examining the components of adaptive decision-making and governance processes. The approach can be divided into three core components: (1) equity design, i.e., the integration of collaborative approaches to conservation and adaptive governance that generates effective self-organization and emergence in conservation and natural resource stewardship; (2) process design, i.e., the generation of more effective knowledge through strategic development of information inputs; and (3) outcome design, i.e., the pragmatic synthesis of the previous two approaches, generating a framework for developing durable and dynamic conservation and stewardship. The design of processes that incorporate perception and learning is critical to generating durable solutions, especially in developing linkages between wicked social and ecological challenges. Starting from first principles based on human cognition, learning, and collaboration, coupled with nearly two decades of practical experience designing and implementing ecosystem-level conservation and restoration programs, I show how design-based approaches to conservation and stewardship can be put into practice. This context is critical in helping practitioners and resource managers undertake more effective policy and practice. © 2014 by the author(s).

Beste M.T.,Massachusetts Institute of Technology
Science translational medicine | Year: 2014

Clinical management of endometriosis is limited by the complex relationship between symptom severity, heterogeneous surgical presentation, and variability in clinical outcomes. As a complement to visual classification schemes, molecular profiles of disease activity may improve risk stratification to better inform treatment decisions and identify new approaches to targeted treatment. We use a network analysis of information flow within and between inflammatory cells to discern consensus behaviors characterizing patient subpopulations. Unsupervised multivariate analysis of cytokine profiles quantified by multiplex immunoassays identified a subset of patients with a shared "consensus signature" of 13 elevated cytokines that was associated with common clinical features of endometriosis, but was not observed among patient subpopulations defined by morphologic presentation alone. Enrichment analysis of consensus markers reinforced the primacy of peritoneal macrophage infiltration and activation, which was demonstrably elevated in ex vivo cultures. Although familiar targets of the nuclear factor κB family emerged among overrepresented transcriptional binding sites for consensus markers, our analysis provides evidence for an unexpected contribution from c-Jun, c-Fos, and AP-1 effectors of mitogen-activated kinase signaling. Their crucial involvement in propagation of macrophage-driven inflammatory networks was confirmed via targeted inhibition of upstream kinases. Collectively, these analyses suggest a clinically relevant inflammatory network that may serve as an objective measure for guiding treatment decisions for endometriosis management, and in the future may provide a mechanistic endpoint for assessing efficacy of new agents aimed at curtailing inflammatory mechanisms that drive disease progression.

Matusik W.,Massachusetts Institute of Technology
ACM Transactions on Graphics | Year: 2013

We present an interactive design system that allows non-expert users to create animated mechanical characters. Given an articulated character as input, the user iteratively creates an animation by sketching motion curves indicating how different parts of the character should move. For each motion curve, our framework creates an optimized mechanism that reproduces it as closely as possible. The resulting mechanisms are attached to the character and then connected to each other using gear trains, which are created in a semi-automated fashion. The mechanical assemblies generated with our system can be driven with a single input driver, such as a hand-operated crank or an electric motor, and they can be fabricated using rapid prototyping devices. We demonstrate the versatility of our approach by designing a wide range of mechanical characters, several of which we manufactured using 3D printing. While our pipeline is designed for characters driven by planar mechanisms, significant parts of it extend directly to non-planar mechanisms, allowing us to create characters with compelling 3D motions. Copyright © ACM 2013.

Hin C.,Massachusetts Institute of Technology
Advanced Functional Materials | Year: 2011

The kinetic anisotropy of lithium-ion adsorption and lithium absorption for LixFePO4 olivine nanocrystals is simulated and reported. The kinetics depend on the orientation of the electrolyte/LixFePO4 interface with respect to the far-field ionic flux. As a consequence of these kinetics and a Li miscibility gap in LixFePO4, the particle geometry and orientation also have an effect on the morphology of the two-phase evolution. These processes accompany the charge and discharge behavior in battery microstructures, and a direct influence on battery behavior is suggested. A kinetic Monte Carlo (KMC) algorithm based on a rigid lattice for the cathode particle is used to simulate the kinetics in this system. In these simulations the adsorption kinetics of the electrolyte/electrode interface are treated by coupling the normal flux outside the particle, from a continuum numerical simulation of Li-ion diffusion in the electrolyte, to the atomistic KMC model within the particle. The interfacial reaction depends on the local concentration and the potential drop at the interface via the Butler-Volmer (B-V) relation. The atomic potentials for the KMC simulation are derived from empirical solubility limits (as determined by OCV measurements). The main results show that the galvanostatic lithium-uptake/cell-voltage curve has three regimes: 1) a decreasing cell potential for Li insertion into a Li-poor phase; 2) a nearly constant potential after the nucleation of a Li-rich phase Li(1-β)FePO4; 3) a decreasing cell potential after the Li-poor phase has evolved into a Li-rich phase. The behavior in the second regime is sensitive to crystallographic orientation. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
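The Butler-Volmer relation invoked above for the interfacial reaction can be sketched in its standard textbook form; the paper's actual parameterization (including its concentration dependence) may differ, and the numbers below are generic defaults, not values from the study.

```python
# Sketch of the standard Butler-Volmer relation (textbook form,
# not the paper's specific implementation):
#   i = i0 * (exp(alpha_a*F*eta/(R*T)) - exp(-alpha_c*F*eta/(R*T)))
# where eta is the overpotential at the electrolyte/electrode interface.
import math

F = 96485.0    # Faraday constant, C/mol
R = 8.314      # gas constant, J/(mol*K)

def butler_volmer(eta, i0=1.0, alpha_a=0.5, alpha_c=0.5, T=298.0):
    """Net interfacial current density for overpotential eta (V).

    i0, alpha_a, alpha_c are generic placeholder values: the exchange
    current density and the anodic/cathodic transfer coefficients.
    """
    f = F / (R * T)
    return i0 * (math.exp(alpha_a * f * eta) - math.exp(-alpha_c * f * eta))

# Zero overpotential gives zero net current; the sign follows eta,
# and with symmetric transfer coefficients the curve is antisymmetric.
print(butler_volmer(0.0), butler_volmer(0.05), butler_volmer(-0.05))
```

In the KMC context described above, an expression of this shape would set the rate of Li insertion at the interface as a function of the local potential drop, which is what couples the continuum electrolyte model to the atomistic particle model.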

Centola D.,Massachusetts Institute of Technology
Science | Year: 2010

How do social networks affect the spread of behavior? A popular hypothesis states that networks with many clustered ties and a high degree of separation will be less effective for behavioral diffusion than networks in which locally redundant ties are rewired to provide shortcuts across the social space. A competing hypothesis argues that when behaviors require social reinforcement, a network with more clustering may be more advantageous, even if the network as a whole has a larger diameter. I investigated the effects of network structure on diffusion by studying the spread of health behavior through artificially structured online communities. Individual adoption was much more likely when participants received social reinforcement from multiple neighbors in the social network. The behavior spread farther and faster across clustered-lattice networks than across corresponding random networks.
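The role of social reinforcement described above can be caricatured with a deterministic threshold model on a clustered ring lattice (a toy sketch, not Centola's experimental design): a behavior requiring adoption by at least two neighbors still spreads from a small seed cluster, because adjacent nodes on the lattice share neighbors.

```python
# Toy "complex contagion" sketch (illustrative only): on a clustered
# ring lattice, a behavior that a node adopts only after seeing >= 2
# adopting neighbors spreads wall-like from a seed pair, because
# clustering guarantees the needed redundant (reinforcing) ties.

def ring_lattice(n, k=4):
    """Each node linked to its k nearest neighbors (k/2 on each side)."""
    nbrs = {i: set() for i in range(n)}
    for i in range(n):
        for d in range(1, k // 2 + 1):
            nbrs[i] |= {(i - d) % n, (i + d) % n}
    return nbrs

def spread(nbrs, seeds, threshold=2):
    """Iterate to a fixed point: adopt when >= threshold neighbors have."""
    adopted = set(seeds)
    changed = True
    while changed:
        changed = False
        for node in nbrs:
            if node not in adopted and len(nbrs[node] & adopted) >= threshold:
                adopted.add(node)
                changed = True
    return adopted

net = ring_lattice(30)
final = spread(net, seeds={0, 1})
print(len(final))   # the behavior reaches every node on the lattice
```

On a randomly rewired network the same threshold rule can stall, since a long-range tie delivers only a single adopting neighbor; that asymmetry is the intuition behind the abstract's finding that clustered lattices spread the health behavior farther and faster.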

Hu R.,Massachusetts Institute of Technology
Astrophysical Journal | Year: 2010

It has been suggested that chondrules and calcium-aluminum-rich inclusions (CAIs) were formed at the inner edge of the protoplanetary disk and then entrained in magnetocentrifugal X-winds. We study trajectories of such solid bodies with the consideration of the central star gravity, the protoplanetary disk gravity, and the gas drag of the wind. The efficiency of the gas drag depends on a parameter η, which is the product of the solid body size and density. We find that the gravity of the protoplanetary disk has a non-negligible effect on the trajectories. If a solid body re-enters the flared disk, the re-entering radius depends on the stellar magnetic dipole moment, the disk's gravity, the parameter η, and the initial launching angle. The disk's gravity can make the re-entering radius lower by up to 30%. We find a threshold η, denoted η_t, for any particular configuration of the X-wind, below which the solid bodies will be expelled from the planetary system. η_t sensitively depends on the initial launching angle, and also depends on the mass of the disk. Only the solid bodies with an η larger than but very close to η_t can be launched to a re-entering radius larger than 1 AU. This size-sorting effect may explain why chondrules come with a narrow range of sizes within each chondritic class. In general, the size distributions of CAIs and chondrules in chondrites can be determined from the initial size distribution as well as the distribution over the initial launching angle. © 2010. The American Astronomical Society. All rights reserved.

Shapiro B.J.,University of Montreal | Polz M.F.,Massachusetts Institute of Technology
Trends in Microbiology | Year: 2014

We propose that microbial diversity must be viewed in light of gene flow and selection, which define units of genetic similarity, and of phenotype and ecological function, respectively. We discuss to what extent ecological and genetic units overlap to form cohesive populations in the wild, based on recent evolutionary modeling and on evidence from some of the first microbial populations studied with genomics. These show that if recombination is frequent and selection moderate, ecologically adaptive mutations or genes can spread within populations independently of their original genomic background (gene-specific sweeps). Alternatively, if the effect of recombination is smaller than selection, genome-wide selective sweeps should occur. In both cases, however, distinct units of overlapping ecological and genotypic similarity will form if microgeographic separation, likely involving ecological tradeoffs, induces barriers to gene flow. These predictions are supported by (meta)genomic data, which suggest that a 'reverse ecology' approach, in which genomic and gene flow information is used to make predictions about the nature of ecological units, is a powerful approach to ordering microbial diversity. © 2014 Elsevier Ltd.

Hohm O.,Massachusetts Institute of Technology | Samtleben H.,Ecole Normale Superieure de Lyon
Physical Review D - Particles, Fields, Gravitation and Cosmology | Year: 2014

We develop exceptional field theory for E8(8), defined on a (3+248)-dimensional generalized spacetime with extended coordinates in the adjoint representation of E8(8). The fields transform under E8(8) generalized diffeomorphisms and are subject to covariant section constraints. The bosonic fields include an "internal" dreibein and an E8(8)-valued "zweihundertachtundvierzigbein" (248-bein). Crucially, the theory also features gauge vectors for the E8(8) E bracket governing the generalized diffeomorphism algebra and covariantly constrained gauge vectors for a separate but constrained E8(8) gauge symmetry. The complete bosonic theory, with a novel Chern-Simons term for the gauge vectors, is uniquely determined by gauge invariance under internal and external generalized diffeomorphisms. The theory consistently comprises components of the dual graviton encoded in the 248-bein. Upon picking particular solutions of the constraints the theory reduces to D=11 or type IIB supergravity, for which the dual graviton becomes pure gauge. This resolves the dual graviton problem, as we discuss in detail. © 2014 American Physical Society.

Browne E.P.,Massachusetts Institute of Technology
Journal of Virology | Year: 2015

Interleukin-1 beta (IL-1β) is an inflammatory cytokine that is secreted in response to inflammasome activation by innate microbe-sensing pathways. Although some retroviruses can trigger IL-1β secretion through the DNA-sensing molecule IFI16, the effect of IL-1β on the course of infection is unknown. To test whether IL-1β secretion affects retroviral replication in vivo, I constructed a novel murine leukemia virus strain (FMLV-IL-1β) that encodes the mature form of IL-1β. This virus replicated with kinetics similar to that of wild-type virus in tissue culture but caused a dramatically more aggressive infection of both C57BL/6 and BALB/c mice. By 7 days postinfection (p.i.), mice infected with FMLV-IL-1β exhibited splenomegaly and viral loads 300-fold higher than those in mice infected with wild-type FMLV. Furthermore, the enlarged spleens of FMLV-IL-1β-infected mice correlated with a large expansion of Gr-1+ CD11b+ myeloid-derived suppressor cells, as well as elevated levels of immune activation. Although FMLV-IL-1β infection was controlled by C57BL/6 mice by 14 days p.i., FMLV-IL-1β was able to establish a significant persistent infection and immune activation in BALB/c mice. These results demonstrate that IL-1β secretion is a powerful positive regulator of retroviral infection and that FMLV-IL-1β represents a new model of proinflammatory retroviral infection. © 2015, American Society for Microbiology.

Weiss B.P.,Massachusetts Institute of Technology | Elkins-Tanton L.T.,Carnegie Institution of Washington
Annual Review of Earth and Planetary Sciences | Year: 2013

Meteorites are samples of dozens of small planetary bodies that formed in the early Solar System. They exhibit great petrologic diversity, ranging from primordial accretional aggregates (chondrites), to partially melted residues (primitive achondrites), to once fully molten magmas (achondrites). It has long been thought that no single parent body could be the source of more than one of these three meteorite lithologies. This view is now being challenged by a variety of new measurements and theoretical models, including the discovery of primitive achondrites, paleomagnetic analyses of chondrites, thermal modeling of planetesimals, the discoveries of new metamorphosed chondrites and achondrites with affinities to some chondrite groups, and the possible identification of extant partially differentiated asteroids. These developments collectively suggest that some chondrites could in fact be samples of the outer, unmelted crusts of otherwise differentiated planetesimals with silicate mantles and metallic cores. This may have major implications for the origin of meteorite groups, the meaning of meteorite paleomagnetism, the rates and onset times of accretion, and the interior structures and histories of asteroids. © 2013 by Annual Reviews. All rights reserved.

Gershman S.J.,Harvard University | Horvitz E.J.,Microsoft | Tenenbaum J.B.,Massachusetts Institute of Technology
Science | Year: 2015

After growing up together, and mostly growing apart in the second half of the 20th century, the fields of artificial intelligence (AI), cognitive science, and neuroscience are reconverging on a shared view of the computational foundations of intelligence that promotes valuable cross-disciplinary exchanges on questions, methods, and results. We chart advances over the past several decades that address challenges of perception and action under uncertainty through the lens of computation. Advances include the development of representations and inferential procedures for large-scale probabilistic inference and machinery for enabling reflection and decisions about tradeoffs in effort, precision, and timeliness of computations. These tools are deployed toward the goal of computational rationality: identifying decisions with highest expected utility, while taking into consideration the costs of computation in complex real-world problems in which most relevant calculations can only be approximated. We highlight key concepts with examples that show the potential for interchange between computer science, cognitive science, and neuroscience.

Miller A.R.,University of Virginia | Tucker C.,Massachusetts Institute of Technology
Information Systems Research | Year: 2013

Given the demand for authentic personal interactions over social media, it is unclear how much firms should actively manage their social media presence. We study this question empirically in a health care setting. We show that active social media management drives more user-generated content. However, we find that this is due to an incremental increase in user postings from an organization's employees rather than from its clients. This result holds when we explore exogenous variation in social media policies, employees, and clients that is explained by medical marketing laws, medical malpractice laws, and distortions in Medicare incentives. Further examination suggests that content being generated mainly by employees can be avoided if a firm's postings are entirely client focused. However, most firm postings seem not to be specifically targeted to clients' interests, instead highlighting more general observations or achievements of the firm itself. We show that untargeted postings like these provoke activity by employees rather than clients. This may not be a bad thing because employee-generated content may help with employee motivation, recruitment, or retention, but it does suggest that social media should not be funded or managed exclusively as a marketing function of the firm. © 2013 INFORMS.

Little J.D.C.,Massachusetts Institute of Technology
Operations Research | Year: 2011

Fifty years ago, the author published a paper in Operations Research with the title, "A proof for the queuing formula: L = λW" [Little, J. D. C. 1961. A proof for the queuing formula: L = λW. Oper. Res. 9(3) 383-387]. Over the years, L = λW has become widely known as "Little's Law." Basically, it is a theorem in queuing theory. It has become well known because of its theoretical and practical importance. We report key developments in both areas with the emphasis on practice. In the latter, we collect new material and search for insights on the use of Little's Law within the fields of operations management and computer architecture. © 2011 INFORMS.
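The identity is easy to check numerically: both L and λW equal the same area under the queue-length curve divided by the observation horizon. A minimal single-server simulation sketch (the queue model and parameter values are illustrative, not from the paper):

```python
# Little's Law: L = lambda * W, checked on a simple single-server queue
# with exponential interarrival and service times (illustrative only).
import random

random.seed(0)

def simulate(arrival_rate, service_rate, n_customers=100_000):
    """Return (time-avg number in system L, observed arrival rate, avg sojourn W)."""
    t_arrive = 0.0
    t_free = 0.0          # time at which the single server next becomes free
    total_sojourn = 0.0   # sum of per-customer times in system
    for _ in range(n_customers):
        t_arrive += random.expovariate(arrival_rate)
        start = max(t_arrive, t_free)         # wait if the server is busy
        t_free = start + random.expovariate(service_rate)
        total_sojourn += t_free - t_arrive
    horizon = t_free                          # all customers done by now
    W = total_sojourn / n_customers
    lam = n_customers / horizon
    L = total_sojourn / horizon               # area under N(t), divided by time
    return L, lam, W

L, lam, W = simulate(arrival_rate=0.8, service_rate=1.0)
print(L, lam * W)   # the two quantities coincide
```

The agreement here is exact rather than approximate, which is the heart of the proof: summing sojourn times and integrating the queue length over time count the same area.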

Kaastrup K.,Massachusetts Institute of Technology
Lab on a chip | Year: 2012

Although polymerization-based amplification (PBA) has demonstrated promise as an inexpensive technique for use in molecular diagnostics, oxygen inhibition of radical photopolymerization has hindered its implementation in point-of-care devices. The addition of 0.3-0.7 μM eosin to an aqueous acrylate monomer solution containing a tertiary amine allows an interfacial polymerization reaction to proceed in air only near regions of a test surface where additional eosin initiators coupled to proteins have been localized as a function of molecular recognition events. The dose of light required for the reaction is inversely related to eosin concentration. This system achieves sensitivities comparable to those reported for inert gas-purged systems and requires significantly shorter reaction times. We provide several comparisons of this system with other implementations of polymerization-based amplification.

Venderbos J.W.F.,Massachusetts Institute of Technology
Physical Review B - Condensed Matter and Materials Physics | Year: 2016

In this work we introduce a symmetry classification for electronic density waves which break translational symmetry due to commensurate wave-vector modulations. The symmetry classification builds on the concept of extended point groups: symmetry groups which contain, in addition to the lattice point group, translations that do not map the enlarged unit cell of the density wave to itself, and become "nonsymmorphic"-like elements. Multidimensional representations of the extended point group are associated with degenerate wave vectors. Electronic properties such as (nodal) band degeneracies and topological character can be straightforwardly addressed, and often follow directly. To further flesh out the idea of symmetry, the classification is constructed so as to manifestly distinguish time-reversal invariant charge (i.e., site and bond) order, and time-reversal breaking flux order. For the purpose of this work, we particularize to spin-rotation invariant density waves. As a first example of the application of the classification we consider the density waves of a simple single- and two-orbital square lattice model. The main objective, however, is to apply the classification to two-dimensional (2D) hexagonal lattices, specifically the triangular and the honeycomb lattices. The multicomponent density waves corresponding to the commensurate M-point ordering vectors are worked out in detail. To show that our results generally apply to 2D hexagonal lattices, we develop a general low-energy SU(3) theory of (spinless) saddle-point electrons. © 2016 American Physical Society.

Moitra A.,Massachusetts Institute of Technology
Proceedings of the Annual ACM Symposium on Theory of Computing | Year: 2015

Super-resolution is a fundamental task in imaging, where the goal is to extract fine-grained structure from coarse-grained measurements. Here we are interested in a popular mathematical abstraction of this problem that has been widely studied in the statistics, signal processing and machine learning communities. We exactly resolve the threshold at which noisy super-resolution is possible. In particular, we establish a sharp phase transition for the relationship between the cutoff frequency (m) and the separation (Δ). If m > 1/Δ + 1, our estimator converges to the true values at an inverse polynomial rate in terms of the magnitude of the noise. And when m < (1 - ε)/Δ no estimator can distinguish between a particular pair of Δ-separated signals even if the magnitude of the noise is exponentially small. Our results involve making novel connections between extremal functions and the spectral properties of Vandermonde matrices. We establish a sharp phase transition for their condition number which in turn allows us to give the first noise tolerance bounds for the matrix pencil method. Moreover we show that our methods can be interpreted as giving preconditioners for Vandermonde matrices, and we use this observation to design faster algorithms for super-resolution. We believe that these ideas may have other applications in designing faster algorithms for other basic tasks in signal processing. © Copyright 2015 ACM.
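The Vandermonde conditioning behind the phase transition is easy to observe numerically. In this sketch (illustrative parameters, not the paper's exact bounds), the condition number of a Vandermonde matrix with Δ-separated nodes on the unit circle is benign when the number of rows m is well above 1/Δ and blows up when m falls below it:

```python
# Conditioning of a Vandermonde matrix with nodes on the unit circle,
# illustrating the m vs. 1/Delta threshold (a numerical sketch, not the
# paper's formal bounds).
import numpy as np

def vandermonde_cond(freqs, m):
    """Condition number of the m x k Vandermonde matrix with nodes e^{2*pi*i*f}."""
    nodes = np.exp(2j * np.pi * np.asarray(freqs))
    V = nodes[None, :] ** np.arange(m)[:, None]   # row j holds the j-th powers
    return np.linalg.cond(V)

delta = 0.01                      # minimum separation, so 1/Delta = 100
freqs = [0.0, delta, 2 * delta]   # three Delta-separated point sources

well = vandermonde_cond(freqs, m=200)   # m well above 1/Delta
ill = vandermonde_cond(freqs, m=5)      # m well below 1/Delta
print(well, ill)   # the second is far larger
```

With m above the threshold the columns are nearly orthogonal, so the matrix pencil method is stable; below it the columns are almost parallel and no estimator can separate the sources in noise.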

Atkin D.,Massachusetts Institute of Technology
American Economic Review | Year: 2016

Anthropologists have documented substantial and persistent differences in food preferences across social groups. My paper asks whether such food cultures can constrain caloric intake. I first document that interstate migrants within India consume fewer calories per rupee of food expenditure compared to their neighbors. Second, I show that migrants bring their origin-state food preferences with them. Third, I link these findings by showing that the gap in caloric intake between locals and migrants depends on the suitability and intensity of the migrants' origin-state preferences. The most affected migrants would consume seven percent more calories if they possessed their neighbors' preferences. (JEL D12, I12, O15, R23, Z12, Z13).

Mousseau J.J.,Massachusetts Institute of Technology | Charette A.B.,University of Montreal
Accounts of Chemical Research | Year: 2013

The possibility of finding novel disconnections for the efficient synthesis of organic molecules has driven the interest in developing technologies to directly functionalize C-H bonds. The ubiquity of these bonds makes such transformations attractive, while also posing several challenges. The first, and perhaps most important, is the selective functionalization of one C-H bond over another. Another key problem is inducing reactivity at sites that have been historically unreactive and difficult to access without prior inefficient prefunctionalization. Although remarkable advances have been made over the past decade toward solving these and other problems, several difficult tasks remain as researchers attempt to bring C-H functionalization reactions into common use. The functionalization of sp3 centers continues to be challenging relative to their sp and sp2 counterparts. Directing groups are often needed to increase the effective concentration of the catalyst at the targeted reaction site, forming thermodynamically stable coordination complexes. As such, the development of removable or convertible directing groups is desirable. Finally, the replacement of expensive rare earth reagents with less expensive and more sustainable catalysts or abandoning the use of catalysts entirely is essential for future practicality. This Account describes our efforts toward solving some of these quandaries. We began our work in this area with the direct arylation of N-iminopyridinium ylides as a universal means to derivatize the germane six-membered heterocycle. We found that the Lewis basic benzoyl group of the pyridinium ylide could direct a palladium catalyst toward insertion at the 2-position of the pyridinium ring, forming a thermodynamically stable six-membered metallocycle. Subsequently we discovered the arylation of the benzylic site of 2-picolinium ylides. The same N-benzoyl group could direct a number of inexpensive copper salts to the 2-position of the pyridinium ylide, which led to the first description of a direct copper-catalyzed alkenylation onto an electron-deficient arene. This particular directing group offers two advantages: (1) it can be easily appended and removed to reveal the desired pyridine target, and (2) it can be incorporated in a cascade process in the preparation of pharmacologically relevant 2-pyrazolo[1,5-a]pyridines. This work has solved some of the challenges in the direct arylation of nonheterocyclic arenes, including reversing the reactivity often observed with such transformations. Readily convertible directing groups were applied to facilitate the transformation. We also demonstrated that iron can promote intermolecular arylations effectively and that the omission of any metal still permits intramolecular arylation reactions. Lastly, we recently discovered a nickel-catalyzed intramolecular arylation of sp3 C-H bonds. Our mechanistic investigations of these processes have elucidated radical pathways, opening new avenues in future direct C-H functionalization reactions. © 2012 American Chemical Society.

Henann D.L.,Brown University | Kamrin K.,Massachusetts Institute of Technology
Physical Review Letters | Year: 2014

Recent dense granular flow experiments have shown that shear deformation in one region of a granular medium fluidizes its entirety, including regions far from the sheared zone, effectively erasing the yield condition everywhere. This enables slow creep deformation to occur when an external force is applied to a probe in the nominally static regions of the material. The apparent change in rheology induced by far-away motion is termed the "secondary rheology," and a theoretical rationalization of this phenomenon is needed. Recently, a new nonlocal granular rheology was successfully used to predict steady granular flow fields, including grain-size-dependent shear-band widths in a wide variety of flow configurations. We show that the nonlocal fluidity model is also capable of capturing secondary rheology. Specifically, we explore creep of a circular intruder in a two-dimensional annular Couette cell and show that the model captures all salient features observed in experiments, including both the rate-independent nature of creep for sufficiently slow driving rates and the faster-than-linear increase in the creep speed with the force applied to the intruder. © 2014 American Physical Society.

Pilawa-Podgurski R.C.N.,University of Illinois at Urbana - Champaign | Perreault D.J.,Massachusetts Institute of Technology
IEEE Transactions on Power Electronics | Year: 2013

This paper explores the benefits of distributed power electronics in solar photovoltaic applications through the use of submodule integrated maximum power point trackers (MPPT). We propose a system architecture that provides a substantial increase in captured energy during partial shading conditions, while at the same time enabling significant overall cost reductions. This is achieved through direct integration of miniature MPPT power converters into existing junction boxes. We describe the design and implementation of a high-efficiency (>98%) synchronous buck MPPT converter, along with digital control techniques that ensure both local and global maximum power extraction. Through detailed experimental measurements under real-world conditions, we verify the increase in energy capture and quantify the benefits of the architecture. ©2012 IEEE.
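The abstract does not specify the converter's control law; as a hedged illustration of the general idea behind local maximum power point tracking, here is a textbook perturb-and-observe hill climb on a toy PV curve. Both the `pv_power` model and all parameter values are invented for illustration and are not the authors' design:

```python
# Perturb-and-observe MPPT hill climbing on a synthetic PV power curve.
# Illustrative sketch only; the paper's actual digital control may differ.

def pv_power(v):
    """Toy photovoltaic P(V) curve with a single maximum near v = 30 V."""
    i = 8.0 * (1.0 - (v / 40.0) ** 8)   # crude current roll-off toward open circuit
    return max(v * i, 0.0)

def perturb_and_observe(v=20.0, step=0.5, iters=200):
    """Step the operating voltage; reverse direction whenever power drops."""
    p_prev = pv_power(v)
    direction = +1.0
    for _ in range(iters):
        v += direction * step
        p = pv_power(v)
        if p < p_prev:            # power fell: we overshot the peak, turn around
            direction = -direction
        p_prev = p
    return v

v_mpp = perturb_and_observe()
print(round(v_mpp, 1))   # settles near the maximum-power voltage
```

A real submodule tracker would also need a global search stage to escape local maxima created by partial shading, which is precisely the local-versus-global distinction the abstract mentions.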

Bonnet N.,Massachusetts Institute of Technology | Marzari N.,Ecole Polytechnique Federale de Lausanne
Physical Review Letters | Year: 2013

A first-principles model of the electrochemical double layer is applied to study surface energies and surface coverage under realistic electrochemical conditions and to determine the equilibrium shape of metal nanoparticles as a function of applied potential. The potential bias is directly controlled by adding electronic charge to the system, while total energy calculations and thermodynamic relations are used to predict electrodeposition curves and changes in surface energies and coverage. This approach is applied to Pt surfaces subject to hydrogen underpotential deposition. The shape of Pt nanoparticles under a cathodic scan is shown to undergo an octahedric-to-cubic transition, which is more pronounced in alkaline media due to the interaction energy of the pH-dependent surface charge with the surface dipole. © 2013 American Physical Society.

Senthil T.,Massachusetts Institute of Technology | Levin M.,University of Maryland University College
Physical Review Letters | Year: 2013

A simple physical realization of an integer quantum Hall state of interacting two-dimensional bosons is provided. This is an example of a symmetry-protected topological (SPT) phase which is a generalization of the concept of topological insulators to systems of interacting bosons or fermions. Universal physical properties of the boson integer quantum Hall state are described and shown to correspond with those expected from general classifications of SPT phases. © 2013 American Physical Society.

Cao J.,Massachusetts Institute of Technology
Journal of Physical Chemistry B | Year: 2011

Many enzymatic reactions in biochemistry are far more complex than the celebrated Michaelis-Menten scheme, but the observed turnover rate often obeys the hyperbolic dependence on the substrate concentration, a relation established almost a century ago for the simple Michaelis-Menten mechanism. To resolve the longstanding puzzle, we apply the flux balance method to predict the functional form of the substrate dependence in the mean turnover time of complex enzymatic reactions and identify detailed balance (i.e., the lack of unbalanced conformational current) as a sufficient condition for the Michaelis-Menten equation to describe the substrate concentration dependence of the turnover rate in an enzymatic network. This prediction can be verified in single-molecule event-averaged measurements using the recently proposed signatures of detailed balance violations. The finding helps analyze recent single-molecule studies of enzymatic networks and can be applied to other external variables, such as force-dependence and voltage-dependence. © 2011 American Chemical Society.
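The hyperbolic dependence referred to here is the Michaelis-Menten form v = k_cat[S]/(K_M + [S]). A small sketch with illustrative parameter values (not taken from the paper):

```python
# Hyperbolic Michaelis-Menten dependence of turnover rate on substrate
# concentration: v = kcat * S / (KM + S). Parameter values are illustrative.

def turnover_rate(s, kcat=100.0, km=5.0):
    """Turnover rate (1/s) at substrate concentration s (same units as km)."""
    return kcat * s / (km + s)

# The rate is half-maximal exactly at S = KM and saturates toward kcat
# as substrate becomes abundant.
assert abs(turnover_rate(5.0) - 50.0) < 1e-9
print(turnover_rate(0.5), turnover_rate(50.0), turnover_rate(5000.0))
```

The paper's point is that this single-enzyme functional form can survive in far more complex networks, provided the conformational currents obey detailed balance.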

Kim D.H.,Massachusetts Institute of Technology
Annual Review of Genetics | Year: 2013

The molecular genetic analysis of longevity of Caenorhabditis elegans has yielded fundamental insights into evolutionarily conserved pathways and processes governing the physiology of aging. Recent studies suggest that interactions between C. elegans and its microbial environment may influence the aging and longevity of this simple host organism. Experimental evidence supports a role for bacteria in affecting longevity through distinct mechanisms: as a nutrient source, as a potential pathogen that induces double-edged innate immune and stress responses, and as a coevolved sensory stimulus that modulates neuronal signaling pathways regulating longevity. Motivating this review is the anticipation that the molecular genetic dissection of the integrated host immune, stress, and neuroendocrine responses to microbes in C. elegans will uncover basic insights into the cellular and organismal physiology that governs aging and longevity. © 2013 by Annual Reviews. All rights reserved.

Metallo C.M.,University of California at San Diego | Vander Heiden M.G.,Massachusetts Institute of Technology | Vander Heiden M.G.,Dana-Farber Cancer Institute
Molecular Cell | Year: 2013

Metabolism impacts all cellular functions and plays a fundamental role in biology. In the last century, our knowledge of metabolic pathway architecture and the genomic landscape of disease has increased exponentially. Combined with these insights, advances in analytical methods for quantifying metabolites and systems approaches to analyze these data now provide powerful tools to study metabolic regulation. Here we review the diverse mechanisms cells use to adapt metabolism to specific physiological states and discuss how metabolic flux analyses can be applied to identify important regulatory nodes to understand normal and pathological cell physiology. © 2013 Elsevier Inc.

Dekker J.,University of Massachusetts Medical School | Marti-Renom M.A.,Genome Biology Group | Marti-Renom M.A.,Center for Genomic Regulation | Mirny L.A.,Massachusetts Institute of Technology
Nature Reviews Genetics | Year: 2013

How DNA is organized in three dimensions inside the cell nucleus and how this affects the ways in which cells access, read and interpret genetic information are among the longest standing questions in cell biology. Using newly developed molecular, genomic and computational approaches based on the chromosome conformation capture technology (such as 3C, 4C, 5C and Hi-C), the spatial organization of genomes is being explored at unprecedented resolution. Interpreting the increasingly large chromatin interaction data sets is now posing novel challenges. Here we describe several types of statistical and computational approaches that have recently been developed to analyse chromatin interaction data. © 2013 Macmillan Publishers Limited.

Bazant M.Z.,Massachusetts Institute of Technology | Storey B.D.,Franklin W. Olin College Of Engineering | Kornyshev A.A.,Imperial College London
Physical Review Letters | Year: 2011

We develop a simple Landau-Ginzburg-type continuum theory of solvent-free ionic liquids and use it to predict the structure of the electrical double layer. The model captures overscreening from short-range correlations, dominant at small voltages, and steric constraints of finite ion sizes, which prevail at large voltages. Increasing the voltage gradually suppresses overscreening in favor of the crowding of counterions in a condensed inner layer near the electrode. This prediction, the ion profiles, and the capacitance-voltage dependence are consistent with recent computer simulations and experiments on room-temperature ionic liquids, using a correlation length of order the ion size. © 2011 American Physical Society.

Ando Y.,Osaka University | Fu L.,Massachusetts Institute of Technology
Annual Review of Condensed Matter Physics | Year: 2015

In this review, we discuss recent progress in the explorations of topological materials beyond topological insulators; specifically, we focus on topological crystalline insulators and bulk topological superconductors. The basic concepts, model Hamiltonians, and novel electronic properties of these new topological materials are explained. The key role of the symmetries that underlie their topological properties is elucidated. Key issues in their materials realizations are also discussed. © 2015 by Annual Reviews.

Lee T.I.,Whitehead Institute For Biomedical Research | Young R.A.,Whitehead Institute For Biomedical Research | Young R.A.,Massachusetts Institute of Technology
Cell | Year: 2013

The gene expression programs that establish and maintain specific cell states in humans are controlled by thousands of transcription factors, cofactors, and chromatin regulators. Misregulation of these gene expression programs can cause a broad range of diseases. Here, we review recent advances in our understanding of transcriptional regulation and discuss how these have provided new insights into transcriptional misregulation in disease. © 2013 Elsevier Inc.

Long M.,University of Chicago | Vankuren N.W.,University of Chicago | Chen S.,Massachusetts Institute of Technology | Vibranovski M.D.,University of Sao Paulo
Annual Review of Genetics | Year: 2013

Genes are perpetually added to and deleted from genomes during evolution. Thus, it is important to understand how new genes are formed and how they evolve to be critical components of the genetic systems that determine the biological diversity of life. Two decades of effort have shed light on the process of new gene origination and have contributed to an emerging comprehensive picture of how new genes are added to genomes, ranging from the mechanisms that generate new gene structures to the presence of new genes in different organisms to the rates and patterns of new gene origination and the roles of new genes in phenotypic evolution. We review each of these aspects of new gene evolution, summarizing the main evidence for the origination and importance of new genes in evolution. We highlight findings showing that new genes rapidly change existing genetic systems that govern various molecular, cellular, and phenotypic functions. © 2013 by Annual Reviews. All rights reserved.

Boyden E.S.,Massachusetts Institute of Technology
Nature Neuroscience | Year: 2015

Over the last 10 years, optogenetics has become widespread in neuroscience for the study of how specific cell types contribute to brain functions and brain disorder states. The full impact of optogenetics will emerge only when other toolsets mature, including neural connectivity and cell phenotyping tools and neural recording and imaging tools. The latter tools are rapidly improving, in part because optogenetics has helped galvanize broad interest in neurotechnology development.

Dresselhaus M.S.,Massachusetts Institute of Technology
ACS Nano | Year: 2010

A review of recent advances in carbon nanotube science and applications is presented in terms of what was learned at the NT10 11th International Conference on the Science and Application of Nanotubes held in Montreal, Canada, June 29-July 2, 2010. © 2010 American Chemical Society.

Dragoi G.,Massachusetts Institute of Technology
Philosophical transactions of the Royal Society of London. Series B, Biological sciences | Year: 2014

Internal representations about the external world can be driven by the external stimuli or can be internally generated in their absence. It has been a matter of debate whether novel stimuli from the external world are instructive over the brain network to create de novo representations or, alternatively, are selecting from existing pre-representations hosted in preconfigured brain networks. The hippocampus is a brain area necessary for normal internally generated spatial-temporal representations and its dysfunctions have resulted in anterograde amnesia, impaired imagining of new experiences, and hallucinations. The compressed temporal sequence of place cell activity in the rodent hippocampus serves as an animal model of internal representation of the external space. Based on our recent results on the phenomenon of novel place cell sequence preplay, we submit that the place cell sequence of a novel spatial experience is determined, in part, by a selection of a set of cellular firing sequences from a repertoire of existing temporal firing sequences in the hippocampal network. Conceptually, this indicates that novel stimuli from the external world select from their pre-representations rather than create de novo our internal representations of the world.

Tucker C.E.,Massachusetts Institute of Technology
International Journal of Industrial Organization | Year: 2012

One of the new realities of advertising is that personal information can be used to ensure that advertising is only shown and designed for a select group of consumers who stand to gain most from this information. However, to gather the data used for targeting requires some degree of privacy intrusion by advertisers. This sets up a tradeoff between the informativeness of advertising and the degree of privacy intrusion. This paper summarizes recent empirical research that illuminates this tradeoff. © 2012 Elsevier B.V. All rights reserved.

Hohm O.,Massachusetts Institute of Technology | Samtleben H.,Ecole Normale Superieure de Lyon
Physical Review D - Particles, Fields, Gravitation and Cosmology | Year: 2014

We present the details of the recently constructed E6(6)-covariant extension of 11-dimensional supergravity. This theory requires a 5+27-dimensional spacetime in which the "internal" coordinates transform in the 27̄ of E6(6). All fields are E6(6) tensors and transform under (gauged) internal generalized diffeomorphisms. The "Kaluza-Klein" vector field acts as a gauge field for the E6(6)-covariant "E-bracket" rather than a Lie bracket, requiring the presence of 2-forms akin to the tensor hierarchy of gauged supergravity. We construct the complete and unique action that is gauge invariant under generalized diffeomorphisms in the internal and external coordinates. The theory is subject to covariant section constraints on the derivatives, implying that only a subset of the extra 27 coordinates is physical. We give two solutions of the section constraints: the first preserves GL(6) and embeds the action of the complete (i.e. untruncated) 11-dimensional supergravity; the second preserves GL(5)×SL(2) and embeds complete type IIB supergravity. As a byproduct, we thus obtain an off-shell action for type IIB supergravity. © 2014 American Physical Society.

Zegras C.,Massachusetts Institute of Technology
Urban Studies | Year: 2010

This paper examines the relationships between the built environment (both 'neighborhood' design characteristics and relative location) and motor vehicle ownership and use in a rapidly motorising, developing city context, that of Santiago de Chile. A vehicle choice model suggests that income dominates the household vehicle ownership decision, but also detects a relationship between several built environment characteristics and a household's likelihood of car ownership. A second model, directly linked to the ownership model to correct for selection bias and endogeneity, suggests a strong relationship with locational characteristics like distance to the central business district and Metro stations. Elasticities of vehicle kilometres travelled (VKT), calculated via the combined models, suggest that income plays the overall largest single role in determining VKT. In combination, however, a range of different design and relative location characteristics also display a relatively strong association with VKT. © 2010 Urban Studies Journal Limited.

Wurtman R.,Massachusetts Institute of Technology
Metabolism: Clinical and Experimental | Year: 2015

Traditionally Alzheimer's disease (AD) has been diagnosed and its course followed based on clinical observations and cognitive testing, and confirmed postmortem by demonstrating amyloid plaques and neurofibrillary tangles in the brain. But the growing recognition that the disease process is ongoing, damaging the brain long before clinical findings appear, has intensified a search for biomarkers that might allow its very early diagnosis and the objective assessment of its responses to putative treatments. At present, at least eight biochemical measurements or scanning procedures are used as biomarkers, usually in panels, by neurologists and others. The biochemical measurements are principally of amyloid proteins and their A-beta precursors, or of tau proteins. Brain atrophy can be assessed by means of structural magnetic resonance imaging (sMRI), and decreased blood flow and metabolism can be estimated by functional magnetic resonance imaging (fMRI). [18F]fluorodeoxyglucose-positron emission tomography (FDG-PET) is used to measure the brain's energy utilization and to infer synaptic number. Impaired connectivity between brain regions is indicated by diffusion tensor imaging (DTI), while magnetic resonance spectroscopy (MRS) provides metabolic markers of diminished cell number. Additional proposed biomarkers utilize electroencephalography (EEG) and magnetoencephalography (MEG) for quantifying impairments in connectivity. Genetic analyses illustrate the heterogeneity of disease processes that can cause cognitive impairment syndromes. Recent observations awaiting confirmation suggest that levels of some plasma phospholipids can also be biomarkers of AD and that reductions in these levels can enable the accurate prediction that a cognitively normal individual will go on to develop mild cognitive impairment (MCI) or AD within 2 years. © 2015 Elsevier Inc.

Venderbos J.W.F.,Massachusetts Institute of Technology
Physical Review B - Condensed Matter and Materials Physics | Year: 2016

We study hexagonal spin-channel ("triplet") density waves with commensurate M-point propagation vectors. We first show that the three Q=M components of the singlet charge density and charge-current density waves can be mapped to multicomponent Q=0 nonzero angular momentum order in three dimensions (3D) with cubic crystal symmetry. This one-to-one correspondence is exploited to define a symmetry classification for triplet M-point density waves using the standard classification of spin-orbit coupled electronic liquid crystal phases of a cubic crystal. Through this classification we naturally identify a set of noncoplanar spin density and spin-current density waves: the chiral spin density wave and its time-reversal invariant analog. These can be thought of as 3D L = 2 and L = 4 spin-orbit coupled isotropic β-phase orders. In contrast, uniaxial spin density waves are shown to correspond to α phases. The noncoplanar triple-M spin-current density wave realizes a novel 2D semimetal state with three flavors of four-component spin-momentum locked Dirac cones, protected by a crystal symmetry akin to nonsymmorphic symmetry, and sits at the boundary between a trivial and topological insulator. In addition, we point out that a special class of classical spin states, defined as classical spin states respecting all lattice symmetries up to global spin rotation, are naturally obtained from the symmetry classification of electronic triplet density waves. These symmetric classical spin states are the classical long-range ordered limits of chiral spin liquids. © 2016 American Physical Society.

Del Vecchio D.,Massachusetts Institute of Technology
Trends in Biotechnology | Year: 2015

The ability to link systems together such that they behave as predicted once they interact with each other is an essential requirement for the forward-engineering of robust synthetic biological circuits. Unfortunately, because of context-dependencies, parts and functional modules often behave unpredictably once interacting in the cellular environment. This paper reviews recent advances toward establishing a rigorous engineering framework for insulating parts and modules from their context to improve modularity. Overall, a synergy between engineering better parts and higher-level circuit design will be important to resolve the problem of context-dependence. © 2014 Elsevier Ltd.

McCunney R.J.,Massachusetts Institute of Technology | Li J.,Harvard University
Chest | Year: 2014

The National Lung Cancer Screening Trial (NLST) demonstrated that screening with low-dose CT (LDCT) scan reduced lung cancer mortality and overall mortality by 20% and 7%, respectively. LDCT scanning involves an approximate 2-mSv dose, whereas full-chest CT scanning, the major diagnostic study used to follow up nodules, may involve a dose of 8 mSv. Radiation associated with CT scanning and other diagnostic studies to follow up nodules may present an independent risk of lung cancer. On the basis of the NLST, we estimated the incidence and prevalence of nodules detected in screening programs. We followed the Fleischner guidelines for follow-up of nodules to assess cumulative radiation exposure over 20- and 30-year periods. We then evaluated nuclear worker cohort studies and atomic bomb survivor studies to assess the risk of lung cancer from radiation associated with long-term lung cancer screening programs. The findings indicate that a 55-year-old lung screening participant may experience a cumulative radiation exposure of up to 280 mSv over a 20-year period and 420 mSv over 30 years. These exposures exceed those of nuclear workers and atomic bomb survivors. This assessment suggests that long-term (20-30 years) LDCT screening programs are associated with nontrivial cumulative radiation doses. Current lung cancer screening protocols, if conducted over 20- to 30-year periods, can independently increase the risk of lung cancer beyond cigarette smoking as a result of cumulative radiation exposure. Radiation exposures from LDCT screening and follow-up diagnostic procedures exceed lifetime radiation exposures among nuclear power workers and atomic bomb survivors. © 2014 American College of Chest Physicians.
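The cumulative-dose arithmetic above can be reproduced with a short script. The 2-mSv LDCT figure comes from the abstract; the 12-mSv average annual follow-up burden is a hypothetical value chosen only so that the totals match the reported 280 mSv (20 years) and 420 mSv (30 years), not a figure taken from the study itself.

```python
# Illustrative dose arithmetic; the follow-up figure is an assumption,
# not a value reported in the study.
LDCT_MSV = 2.0        # annual low-dose CT screening scan (per the abstract)
FOLLOWUP_MSV = 12.0   # hypothetical average annual follow-up diagnostic burden

def cumulative_dose_msv(years: int) -> float:
    """Worst-case cumulative effective dose over a screening program."""
    return years * (LDCT_MSV + FOLLOWUP_MSV)

print(cumulative_dose_msv(20))  # 280.0 mSv, as reported for 20 years
print(cumulative_dose_msv(30))  # 420.0 mSv, as reported for 30 years
```

Any per-year breakdown consistent with a 14-mSv average would reproduce the same totals; the point is only that follow-up imaging, not the screening scan itself, dominates the cumulative dose.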

Stone M.T.,Massachusetts Institute of Technology
Organic Letters | Year: 2011

A modified Larock method has been developed for the one-pot synthesis of substituted quinolines via a Heck reaction of 2-bromoanilines and allylic alcohols followed by dehydrogenation with diisopropyl azodicarboxylate (DIAD). © 2011 American Chemical Society.

Edwards M.D.,Massachusetts Institute of Technology
BMC bioinformatics | Year: 2012

Modern genetics has been transformed by high-throughput sequencing. New experimental designs in model organisms involve analyzing many individuals, pooled and sequenced in groups for increased efficiency. However, the uncertainty from pooling and the challenge of noisy sequencing data demand advanced computational methods. We present MULTIPOOL, a computational method for genetic mapping in model organism crosses that are analyzed by pooled genotyping. Unlike other methods for the analysis of pooled sequence data, we simultaneously consider information from all linked chromosomal markers when estimating the location of a causal variant. Our use of informative sequencing reads is formulated as a discrete dynamic Bayesian network, which we extend with a continuous approximation that allows for rapid inference without a dependence on the pool size. MULTIPOOL generalizes to include biological replicates and case-only or case-control designs for binary and quantitative traits. Our increased information sharing and principled inclusion of relevant error sources improve resolution and accuracy when compared to existing methods, localizing associations to single genes in several cases. MULTIPOOL is freely available at http://cgs.csail.mit.edu/multipool/.

Soto J.A.,Massachusetts Institute of Technology
Proceedings of the Annual ACM-SIAM Symposium on Discrete Algorithms | Year: 2011

In the Matroid Secretary Problem, introduced by Babaioff et al. [5], the elements of a given matroid are presented to an online algorithm in random order. When an element is revealed, the algorithm learns its weight and decides whether or not to select it. The objective is to return a maximum weight independent set of the matroid. There are different variants of this problem depending on the information known about the weights beforehand. In the random assignment model, a hidden list of weights is randomly assigned to the matroid ground set, independently from the random order in which they are revealed to the algorithm. Our main result is the first constant competitive algorithm for this version of the problem, solving an open question of Babaioff et al. Our algorithm achieves a competitive ratio of 2e2/(e - 1). It exploits the notion of principal partition of a matroid, its decomposition into uniformly dense minors, and a 2e-competitive algorithm for uniformly dense matroids that we also develop. We also present constant competitive algorithms in the standard model, where the weights are assigned adversarially, for various classes of matroids including cographic matroids, low-density matroids, k-column-sparse linear matroids, and the case in which every element is in a small cocircuit. In the same model, we give a new O(log r)-competitive algorithm for matroids of rank r which only uses the relative order of the weights seen and not their actual values, as previously needed.
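For intuition about the random-order setting, the classical single-item secretary rule (reject a prefix, then take the first element beating it) is the building block such matroid algorithms generalize. The simulation below is a sketch of that classical rule under my own toy parameters, not the paper's algorithm; the constant 2e²/(e−1) ≈ 8.6 is the competitive ratio quoted above.

```python
import math
import random

def secretary(weights):
    """Classical 1/e threshold rule: reject the first n/e elements,
    then accept the first one exceeding the best weight seen so far."""
    n = len(weights)
    cutoff = int(n / math.e)
    best_seen = max(weights[:cutoff], default=float("-inf"))
    for w in weights[cutoff:]:
        if w > best_seen:
            return w
    return weights[-1]  # forced to take the last element

random.seed(0)
trials = 20000
wins = 0
for _ in range(trials):
    ws = random.sample(range(1000), 20)  # distinct weights in random order
    if secretary(ws) == max(ws):
        wins += 1

print(wins / trials)                    # close to 1/e ~ 0.368
print(2 * math.e**2 / (math.e - 1))     # ~8.60, the ratio quoted above
```

The rule picks the overall maximum with probability close to 1/e; the matroid versions must instead build an independent set, which is what drives the larger constant.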

Dimarogonas D.V.,Massachusetts Institute of Technology | Johansson K.H.,KTH Royal Institute of Technology
Automatica | Year: 2010

The spectral properties of the incidence matrix of the communication graph are exploited to provide solutions to two multi-agent control problems. In particular, we consider the problem of state agreement with quantized communication and the problem of distance-based formation control. In both cases, stabilizing control laws are provided when the communication graph is a tree. It is shown how the relation between tree graphs and the null space of the corresponding incidence matrix encodes fundamental properties for these two multi-agent control problems. © 2010 Elsevier Ltd. All rights reserved.
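The tree property invoked above can be checked numerically: the oriented incidence matrix of a connected graph on n nodes has rank n−1, so for a tree (with exactly n−1 edges) its columns are independent and the edge-space null space is trivial, while closing a cycle adds a dependent column. A minimal numpy sketch on a toy graph of my own choosing:

```python
import numpy as np

def incidence(n, edges):
    """Oriented node-edge incidence matrix: column k is +1 at the tail
    and -1 at the head of edge k."""
    B = np.zeros((n, len(edges)))
    for k, (i, j) in enumerate(edges):
        B[i, k], B[j, k] = 1.0, -1.0
    return B

tree = [(0, 1), (1, 2), (2, 3)]           # a path: a tree on 4 nodes
B_tree = incidence(4, tree)
print(np.linalg.matrix_rank(B_tree))      # 3 = n-1: trivial edge null space

B_cycle = incidence(4, tree + [(3, 0)])   # close the cycle
print(np.linalg.matrix_rank(B_cycle))     # still 3: the cycle edge is dependent
```

It is exactly this trivial null space on trees that makes the incidence matrix invertible on its column space, which is what the stabilizing control laws in the abstract rely on.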

Heng K.,University of Bern | Demory B.-O.,Massachusetts Institute of Technology
Astrophysical Journal | Year: 2013

Unlike previously explored relationships between the properties of hot Jovian atmospheres, the geometric albedo and the incident stellar flux do not exhibit a clear correlation, as revealed by our re-analysis of Q0-Q14 Kepler data. If the albedo is primarily associated with the presence of clouds in these irradiated atmospheres, a holistic modeling approach needs to relate the following properties: the strength of stellar irradiation (and hence the strength and depth of atmospheric circulation), the geometric albedo (which controls both the fraction of starlight absorbed and the pressure level at which it is predominantly absorbed), and the properties of the embedded cloud particles (which determine the albedo). The anticipated diversity in cloud properties renders any correlation between the geometric albedo and the stellar flux weak and characterized by considerable scatter. In the limit of vertically uniform populations of scatterers and absorbers, we use an analytical model and scaling relations to relate the temperature-pressure profile of an irradiated atmosphere and the photon deposition layer and to estimate whether a cloud particle will be lofted by atmospheric circulation. We derive an analytical formula for computing the albedo spectrum in terms of the cloud properties, which we compare to the measured albedo spectrum of HD 189733b by Evans et al. Furthermore, we show that whether an optical phase curve is flat or sinusoidal depends on whether the particles are small or large as defined by the Knudsen number. This may be an explanation for why Kepler-7b exhibits evidence for the longitudinal variation in abundance of condensates, while Kepler-12b shows no evidence for the presence of condensates despite the incident stellar flux being similar for both exoplanets. We include an "observer's cookbook" for deciphering various scenarios associated with the optical phase curve, the peak offset of the infrared phase curve, and the geometric albedo. © 2013. 
The American Astronomical Society. All rights reserved.

Reineke S.,Massachusetts Institute of Technology | Thomschke M.,TU Dresden | Lussem B.,TU Dresden | Leo K.,TU Dresden
Reviews of Modern Physics | Year: 2013

White organic light-emitting diodes (OLEDs) are ultrathin, large-area light sources made from organic semiconductor materials. Over the past decades, much research has been spent on finding suitable materials to realize highly efficient monochrome and white OLEDs. With their high efficiency, color tunability, and color quality, white OLEDs are emerging as one of the next-generation light sources. In this review, the physics of a variety of device concepts that have been introduced to realize white OLEDs based on both polymer and small-molecule organic materials are discussed. Owing to the fact that about 80% of the internally generated photons are trapped within the thin-film layer structure, a second focus is put on reviewing promising concepts for improved light outcoupling. © 2013 American Physical Society.

Buehler M.J.,Massachusetts Institute of Technology
MRS Bulletin | Year: 2013

Biological materials are effectively synthesized, controlled, and used for a variety of purposes in nature, in spite of limitations in the energy, quality, and quantity of their building blocks. Whereas the chemical composition of materials in the living world plays some role in achieving functional properties, the way components are connected at different length scales defines what material properties can be achieved, how they can be altered to meet functional requirements, and how they fail in disease states and other extreme conditions. Recent work has demonstrated this using large-scale computer simulations to predict materials properties from fundamental molecular principles, combined with experimental work and new mathematical techniques to categorize complex structure-property relationships into a systematic framework. Enabled by such categorization, we discuss opportunities based on the exploitation of concepts from distinct hierarchical systems that share common principles in how function is created, even linking music to materials science. Copyright © Materials Research Society 2013.

Slocum A.,Massachusetts Institute of Technology
International Journal of Machine Tools and Manufacture | Year: 2010

From the humble three-legged milking stool to a SEMI standard wafer pod location to numerous sub-micron fixturing applications in instruments and machines, exactly constrained mechanisms provide precision, robustness, and certainty of location and design. Kinematic couplings exactly constrain six degrees of freedom between two parts, and hence closed-form equations can be written to describe the structural performance of the coupling. Hertz contact theory can also be used to design the contact interface so that very high stiffness and load capacity can be achieved. Potential applications such as mechanical/electrical couplings for batteries could enable electric vehicles to rapidly exchange battery packs. © 2009 Elsevier Ltd. All rights reserved.

Von Stetina J.R.,Massachusetts Institute of Technology
Cold Spring Harbor perspectives in biology | Year: 2011

Production of functional eggs requires meiosis to be coordinated with developmental signals. Oocytes arrest in prophase I to permit oocyte differentiation, and in most animals, a second meiotic arrest links completion of meiosis to fertilization. Comparison of oocyte maturation and egg activation between mammals, Caenorhabditis elegans, and Drosophila reveals conserved signaling pathways and regulatory mechanisms as well as unique adaptations for reproductive strategies. Recent studies in mammals and C. elegans show the role of signaling between surrounding somatic cells and the oocyte in maintaining the prophase I arrest and controlling maturation. Proteins that regulate levels of active Cdk1/cyclin B during prophase I arrest have been identified in Drosophila. Protein kinases play crucial roles in the transition from meiosis in the oocyte to mitotic embryonic divisions in C. elegans and Drosophila. Here we will contrast the regulation of key meiotic events in oocytes.

Young R.A.,Whitehead Institute For Biomedical Research | Young R.A.,Massachusetts Institute of Technology
Cell | Year: 2011

Embryonic stem cells and induced pluripotent stem cells hold great promise for regenerative medicine. These cells can be propagated in culture in an undifferentiated state but can be induced to differentiate into specialized cell types. Moreover, these cells provide a powerful model system for studies of cellular identity and early mammalian development. Recent studies have provided insights into the transcriptional control of embryonic stem cell state, including the regulatory circuitry underlying pluripotency. These studies have, as a consequence, uncovered fundamental mechanisms that control mammalian gene expression, connect gene expression to chromosome structure, and contribute to human disease. © 2011 Elsevier Inc.

Nocera D.G.,Massachusetts Institute of Technology
Energy and Environmental Science | Year: 2010

New R&D is needed to provide the nonlegacy world with the "fast food" equivalent of solar energy: lightweight and highly manufacturable solar capture and storage systems. © 2010 The Royal Society of Chemistry.

Krishnan Y.,National Centre for Biological Sciences | Bathe M.,Massachusetts Institute of Technology
Trends in Cell Biology | Year: 2012

Recent advances in nucleic acid sequencing, structural, and computational technologies have resulted in dramatic progress in our understanding of nucleic acid structure and function in the cell. This knowledge, together with the predictable base-pairing of nucleic acids and powerful synthesis and expression capabilities now offers the unique ability to program nucleic acids to form precise 3D architectures with diverse applications in synthetic and cell biology. The unique modularity of structural motifs that include aptamers, DNAzymes, and ribozymes, together with their well-defined construction rules, enables the synthesis of functional higher-order nucleic acid complexes from these subcomponents. As we illustrate here, these highly programmable, smart complexes are increasingly enabling researchers to probe and program the cell in a sophisticated manner that moves well beyond the use of nucleic acids for conventional genetic manipulation alone. © 2012 Elsevier Ltd.

Kholodenko B.,University College Dublin | Yaffe M.B.,Massachusetts Institute of Technology | Kolch W.,University College Dublin
Science Signaling | Year: 2012

The advancements in "omics" (proteomics, genomics, lipidomics, and metabolomics) technologies have yielded large inventories of genes, transcripts, proteins, and metabolites. The challenge is to find out how these entities work together to regulate the processes by which cells respond to external and internal signals. Mathematical and computational modeling of signaling networks has a key role in this task, and network analysis provides insights into biological systems and has applications for medicine. Here, we review experimental and theoretical progress and future challenges toward this goal. We focus on how networks are reconstructed from data, how these networks are structured to control the flow of biological information, and how the design features of the networks specify biological decisions.

Ma M.,Massachusetts Institute of Technology | Bong D.,Ohio State University
Accounts of Chemical Research | Year: 2013

Lipid membrane fusion is a fundamental noncovalent transformation as well as a central process in biology. The complex and highly controlled biological machinery of fusion has been the subject of intense investigation. In contrast, fewer synthetic approaches that demonstrate selective membrane fusion have been developed. Artificial recapitulation of membrane fusion is an informative pursuit in that fundamental biophysical concepts of biomembrane merger may be generally tested in a controlled reductionist system. A key concept that has emerged from extensive studies on lipid biophysics and biological membrane fusion is that selective membrane fusion derives from the coupling of surface recognition with local membrane disruption, or strain. These observations from native systems have guided the development of de novo-designed biomimetic membrane fusion systems that have unequivocally established the generality of these concepts in noncovalent chemistry. In this Account, we discuss the function and limitations of the artificial membrane fusion systems that have been constructed to date and the insights gained from their study by our group and others. Overall, the synthetic systems are highly reductionist and chemically selective, though there remain aspects of membrane fusion that are not sufficiently understood to permit designed function. In particular, membrane fusion with efficient retention of vesicular contents within the membrane-bound compartments remains a challenge. We discuss examples in which lipid mixing and some degree of vesicle-contents mixing is achieved, but the determinants of aqueous-compartment mixing remain unclear and therefore are difficult to generally implement. The ability to fully design membrane fusogenic function requires a deeper understanding of the biophysical underpinnings of membrane fusion, which has not yet been achieved.
Thus, it is critical that biological and synthetic studies continue to further elucidate this biologically important process. Examination of lipid membrane fusion from a synthetic perspective can also reveal the governing noncovalent principles that drive chemically determined release and controlled mixing within nanometer-scale compartments. These are processes that figure prominently in numerous biotechnological and chemical applications. A rough guide to the construction of a functional membrane fusion system may already be assembled from the existing studies: surface-directed membrane apposition may generally be elaborated into selective fusion by coupling to a membrane-disruptive element, as observed over a range of systems that include small-molecule, DNA, or peptide fusogens. Membrane disruption may take different forms, and we briefly describe our investigation of the sequence determinants of fusion and lysis in membrane-active viral fusion peptide variants. These findings set the stage for further investigation of the critical elements that enable efficient, fully functional fusion of both membrane and aqueous compartments and the application of these principles to unite synthetic and biological membranes in a directed fashion. Controlled fusion of artificial and living membranes remains a chemical challenge that is biomimetic of native chemical transport and has a direct impact on drug delivery approaches. © 2013 American Chemical Society.

Erkmen B.I.,Jet Propulsion Laboratory | Shapiro J.H.,Massachusetts Institute of Technology
Advances in Optics and Photonics | Year: 2010

Ghost-imaging experiments correlate the outputs from two photodetectors: a high-spatial-resolution (scanning pinhole or CCD array) detector that measures a field that has not interacted with the object to be imaged, and a bucket (single-pixel) detector that collects a field that has interacted with the object. We give a comprehensive review of ghost imaging, within a unified Gaussian-state framework, presenting detailed analyses of its resolution, field of view, image contrast, and signal-to-noise ratio behavior. We consider three classes of illumination: thermal-state (classical), biphoton-state (quantum), and classical-state phase-sensitive light. The first two have been employed in a variety of ghost-imaging demonstrations. The third is the classical Gaussian state that produces ghost images that most closely mimic those obtained from biphoton illumination. The insights we develop lead naturally to a new, single-beam approach to ghost imaging, called computational ghost imaging, in which only the bucket detector is required. We provide quantitative results while simultaneously emphasizing the underlying physics of ghost imaging. The key to developing the latter understanding lies in the coherence behavior of a pair of Gaussian-state light beams with either phase-insensitive or phase-sensitive cross correlation. © 2010 Optical Society of America.
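The single-beam "computational ghost imaging" idea lends itself to a toy simulation: when the illumination patterns are computed rather than measured, correlating each pattern with the corresponding bucket reading recovers the object's transmission mask. The numpy sketch below illustrates the correlation principle only, with a made-up binary object; it is not the paper's Gaussian-state analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
obj = np.zeros((8, 8))
obj[2:6, 3:5] = 1.0                          # binary transmission mask (the object)

N = 4000
patterns = rng.random((N, 8, 8))             # known, computed illumination patterns
bucket = (patterns * obj).sum(axis=(1, 2))   # single-pixel detector readings

# Correlate bucket fluctuations with the patterns to reconstruct the object
ghost = ((bucket - bucket.mean())[:, None, None] * patterns).mean(axis=0)

# Transmitting pixels correlate far more strongly than background pixels
print(ghost[obj == 1].mean() > ghost[obj == 0].mean())  # True
```

Thresholding the correlation image midway between the two levels recovers the mask exactly in this noise-free setting; real implementations must additionally contend with detector noise and finite coherence, which is where the Gaussian-state analysis in the abstract comes in.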

Penna R.F.,Massachusetts Institute of Technology
Journal of High Energy Physics | Year: 2016

The Bondi-van der Burg-Metzner-Sachs (BMS) group is the asymptotic symmetry group of asymptotically flat spacetime. It is infinite dimensional and entails an infinite number of conservation laws. According to the black hole membrane paradigm, null infinity (in asymptotically flat spacetime) and black hole event horizons behave like fluid membranes. The fluid dynamics of the membrane is governed by an infinite set of symmetries and conservation laws. Our main result is to point out that the infinite set of symmetries and conserved charges of the BMS group and the membrane paradigm are the same. This relationship has several consequences. First, it sheds light on the physical interpretation of BMS conservation laws. Second, it generalizes the BMS conservation laws to arbitrary subregions of arbitrary null surfaces. Third, it clarifies the identification of the superrotation subgroup of the BMS group. We briefly comment on the black hole information problem. © 2016, The Author(s).

Hohm O.,Massachusetts Institute of Technology
Progress of Theoretical Physics Supplement | Year: 2011

We review the recently constructed 'double field theory', which introduces, in addition to the conventional coordinates associated with momentum modes, coordinates associated with winding modes. Thereby, T-duality becomes a global symmetry of the theory, which can be viewed as an 'O(D,D) covariantization' of the low-energy effective space-time action of closed string theory. We discuss its symmetries with a special emphasis on the relation between global duality symmetries and local gauge symmetries.

Wunsch C.,Massachusetts Institute of Technology
Quaternary Science Reviews | Year: 2010

A comparison is made between some of the frameworks used to discuss paleoceanography and parallel situations in modern physical oceanography. A main inference is that too often the paleo literature aims to rationalize why a particular hypothesis remains appropriate, rather than undertaking to deliberately test that hypothesis. © 2010 Elsevier Ltd.

Schneider T.,California Institute of Technology | O'Gorman P.A.,Massachusetts Institute of Technology | Levine X.J.,California Institute of Technology
Reviews of Geophysics | Year: 2010

Water vapor is not only Earth's dominant greenhouse gas. Through the release of latent heat when it condenses, it also plays an active role in dynamic processes that shape the global circulation of the atmosphere and thus climate. Here we present an overview of how latent heat release affects atmosphere dynamics in a broad range of climates, ranging from extremely cold to extremely warm. Contrary to widely held beliefs, atmospheric circulation statistics can change nonmonotonically with global-mean surface temperature, in part because of dynamic effects of water vapor. For example, the strengths of the tropical Hadley circulation and of zonally asymmetric tropical circulations, as well as the kinetic energy of extratropical baroclinic eddies, can be lower than they presently are both in much warmer climates and in much colder climates. We discuss how latent heat release is implicated in such circulation changes, particularly through its effect on the atmospheric static stability, and we illustrate the circulation changes through simulations with an idealized general circulation model. This allows us to explore a continuum of climates, to constrain macroscopic laws governing this climatic continuum, and to place past and possible future climate changes in a broader context. Copyright 2010 by the American Geophysical Union.

Delong E.F.,University of Hawaii at Manoa | Delong E.F.,Massachusetts Institute of Technology
Cell | Year: 2014

Animals harbor gut microbiota characteristic of the host and diet of origin. Whether bacteria from diverse nonindigenous origins successfully invade foreign gut habitats is not well known. Now, Seedorf et al. show that microbiota from a variety of disparate habitats can successfully colonize and compete in the mammalian gut environment. © 2014 Elsevier Inc.

Chlipala A.,Massachusetts Institute of Technology
Proceedings of the ACM SIGPLAN International Conference on Functional Programming, ICFP | Year: 2013

We report on the design and implementation of an extensible programming language and its intrinsic support for formal verification. Our language is targeted at low-level programming of infrastructure like operating systems and runtime systems. It is based on a cross-platform core combining characteristics of assembly languages and compiler intermediate languages. From this foundation, we take literally the saying that C is a "macro assembly language": we introduce an expressive notion of certified low-level macros, sufficient to build up the usual features of C and beyond as macros with no special support in the core. Furthermore, our macros have integrated support for strongest postcondition calculation and verification condition generation, so that we can provide a high-productivity formal verification environment within Coq for programs composed from any combination of macros. Our macro interface is expressive enough to support features that low-level programs usually only access through external tools with no formal guarantees, such as declarative parsing or SQL-inspired querying. The abstraction level of these macros only imposes a compile-time cost, via the execution of functional Coq programs that compute programs in our intermediate language; but the run-time cost is not substantially greater than for more conventional C code. We describe our experiences constructing a full C-like language stack using macros, with some experiments on the verifiability and performance of individual programs running on that stack.

Bonvillian W.B.,Massachusetts Institute of Technology
Science | Year: 2013

U.S. innovation is heavy on early-stage R&D; it requires additional focus on manufacturing stages.

Bouffanais R.,Massachusetts Institute of Technology
Computers and Fluids | Year: 2010

Large-eddy simulation (LES) has become one of the most promising and successful methodologies for simulating turbulent flows. It is reaching a level of maturity that brings this approach into the mainstream of engineering and industrial computations, while opening new opportunities and posing new challenges for further progress. These advances and challenges, in the framework of industrial applications, were the subject of a discussion meeting held at the Royal Society that brought together leading LES experts and industrial practitioners. The outcome of this discussion meeting is reported in a recent issue of Philosophical Transactions of the Royal Society A, and is thoroughly reviewed in this article. Advances and challenges of LES in industrial applications are concurrently reviewed and further discussed. © 2009 Elsevier Ltd. All rights reserved.

Hohm O.,Massachusetts Institute of Technology
Journal of High Energy Physics | Year: 2011

Some features of Einstein gravity are most easily understood from string theory but are not manifest at the level of the usual Lagrangian formulation. One example is the factorization of gravity amplitudes into gauge theory amplitudes. Based on the recently constructed 'double field theory' and a geometrical frame-like formalism developed by Siegel, we provide a framework of perturbative Einstein gravity coupled to a 2-form and a dilaton in which, as a consequence of T-duality, the Feynman rules factorize to all orders in perturbation theory. We thereby establish the precise relation between the field variables in different formulations and discuss the Lagrangian that, when written in terms of these variables, makes a left-right factorization manifest. © SISSA 2011.

Guarente L.,Massachusetts Institute of Technology
Cold Spring Harbor Symposia on Quantitative Biology | Year: 2011

Sirtuins are nicotinamide adenine dinucleotide (NAD)-dependent protein deacetylases that link protein acetylation, metabolism, aging, and diseases of aging. Sirtuins were initially found to slow aging in lower organisms and more recently shown to mediate many effects of calorie restriction on metabolism and longevity in mammals. This chapter focuses on two key mammalian sirtuins, SIRT1 (which resides mainly in the nucleus) and SIRT3 (which is mitochondrial). I discuss the many protein substrates of these sirtuins and how they determine the metabolic strategy most efficacious under scarce or abundant food supplies. I also discuss the logic by which sirtuins link protein acetylation and metabolism. Finally, I discuss emerging data showing protection by sirtuins against most of the common diseases of aging. It is possible that sirtuins will be novel targets to combat these diseases pharmacologically. © 2011 Cold Spring Harbor Laboratory Press.

Thaler J.,Massachusetts Institute of Technology
Physical Review D - Particles, Fields, Gravitation and Cosmology | Year: 2015

Jet finding is a type of optimization problem, where hadrons from a high-energy collision event are grouped into jets based on a clustering criterion. As three interesting examples, one can form a jet cluster that (i) optimizes the overall jet four-vector, (ii) optimizes the jet axis, or (iii) aligns the jet axis with the jet four-vector. In this paper, we show that these three approaches to jet finding, despite being philosophically quite different, can be regarded as descendants of a mother optimization problem. For the special case of finding a single cone jet of fixed opening angle, the three approaches are genuinely identical when defined appropriately, and the result is a stable cone jet with the largest value of a quantity J. This relationship is only approximate for cone jets in the rapidity-azimuth plane, as used at the Large Hadron Collider, though the differences are mild for small radius jets. © 2015 American Physical Society.
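A stable cone in the sense used above is a fixed point of the map "axis → direction of the summed momentum inside the cone". The sketch below iterates that map on toy massless particles represented as 3-vectors; it illustrates the fixed-point structure only, with made-up particle data, and is not the paper's formulation in the rapidity-azimuth plane.

```python
import numpy as np

def stable_cone_axis(momenta, seed, R, iters=100):
    """Iterate the cone axis until it coincides with the direction of the
    summed momentum of the particles inside opening angle R (a stable cone)."""
    axis = seed / np.linalg.norm(seed)
    for _ in range(iters):
        cos_angles = momenta @ axis / np.linalg.norm(momenta, axis=1)
        angles = np.arccos(np.clip(cos_angles, -1.0, 1.0))
        new = momenta[angles < R].sum(axis=0)
        new /= np.linalg.norm(new)
        if np.dot(new, axis) > 1 - 1e-12:   # fixed point reached
            return new
        axis = new
    return axis

rng = np.random.default_rng(0)
# a collimated spray of particles around the z-axis plus one stray particle
jet = np.array([0.0, 0.0, 1.0]) + 0.05 * rng.standard_normal((20, 3))
stray = np.array([[1.0, 0.0, 0.0]])
momenta = np.vstack([jet, stray])

axis = stable_cone_axis(momenta, seed=momenta[0], R=0.4)
print(axis @ np.array([0.0, 0.0, 1.0]))   # close to 1: the axis locks onto the jet
```

Different seeds can converge to different stable cones; the criterion discussed in the abstract then selects among them the cone maximizing the quantity J.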

Forney Jr. G.D.,Massachusetts Institute of Technology
IEEE Transactions on Information Theory | Year: 2011

A conceptual framework involving partition functions of normal factor graphs is introduced, paralleling a similar recent development by Al-Bashabsheh and Mao. The partition functions of dual normal factor graphs are shown to be a Fourier transform pair, whether or not the graphs have cycles. The original normal graph duality theorem follows as a corollary. Within this framework, MacWilliams identities are found for various local and global weight generating functions of general group or linear codes on graphs; this generalizes and provides a concise proof of the MacWilliams identity for linear time-invariant convolutional codes that was recently found by Gluesing-Luerssen and Schneider. Further MacWilliams identities are developed for terminated convolutional codes, particularly for tail-biting codes, similar to those studied recently by Bocharova, Hug, Johannesson, and Kudryashov. © 2011 IEEE.
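The classical identity that these graph-based results generalize relates a binary linear code's weight enumerator to its dual's: W_{C⊥}(x, y) = |C|⁻¹ W_C(x+y, x−y). A quick numeric check for the [3,1] repetition code (my own minimal example):

```python
from itertools import product

def weight_enum(code, x, y, n):
    """Weight enumerator W(x, y) = sum over codewords of x^(n-wt) * y^wt."""
    return sum(x ** (n - sum(c)) * y ** sum(c) for c in code)

n = 3
C = [(0, 0, 0), (1, 1, 1)]  # the [3,1] binary repetition code
# dual code: all words orthogonal (mod 2) to every codeword of C
dual = [w for w in product((0, 1), repeat=n)
        if all(sum(a * b for a, b in zip(w, c)) % 2 == 0 for c in C)]

x, y = 2.0, 0.5
lhs = weight_enum(dual, x, y, n)                # W_dual(x, y)
rhs = weight_enum(C, x + y, x - y, n) / len(C)  # MacWilliams transform of W_C
print(lhs, rhs)  # both 9.5
```

Here W_C(x, y) = x³ + y³ and the dual is the even-weight code with W(x, y) = x³ + 3xy², so the identity checks out term by term; the paper's contribution is extending such identities to weight generating functions of codes on graphs.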

Allais A.,Harvard University | Mezei M.,Massachusetts Institute of Technology
Physical Review D - Particles, Fields, Gravitation and Cosmology | Year: 2015

We study how the universal contribution to entanglement entropy in a conformal field theory depends on the entangling region. We show that for a deformed sphere the variation of the universal contribution is quadratic in the deformation amplitude. We generalize these results for Rényi entropies. We obtain an explicit expression for the second order variation of entanglement entropy in the case of a deformed circle in a three-dimensional conformal field theory with a gravity dual. For the same system, we also consider an elliptic entangling region and determine numerically the entanglement entropy as a function of the aspect ratio of the ellipse. Based on these three-dimensional results and Solodukhin's formula in four dimensions, we conjecture that the sphere minimizes the universal contribution to entanglement entropy in all dimensions. © 2015 American Physical Society.

Penna R.F.,Massachusetts Institute of Technology
Physical Review D - Particles, Fields, Gravitation and Cosmology | Year: 2015

Numerical simulations indicate that black holes carrying linear momentum and/or orbital momentum can power jets. The jets extract the kinetic energy stored in the black hole's motion. This could provide an important electromagnetic counterpart to gravitational wave searches. We develop the theory underlying these jets. In particular, we derive the analogues of the Penrose process and the Blandford-Znajek jet power prediction for boosted black holes. The jet power we find is (v/2M)²Φ²/(4π), where v is the hole's velocity, M is its mass, and Φ is the magnetic flux. We show that energy extraction from boosted black holes is conceptually similar to energy extraction from spinning black holes. However, we highlight two key technical differences: in the boosted case, jet power is no longer defined with respect to a Killing vector, and the relevant notion of black hole mass is observer dependent. We derive a new version of the membrane paradigm in which the membrane lives at infinity rather than the horizon and we show that this is useful for interpreting jets from boosted black holes. Our jet power prediction and the assumptions behind it can be tested with future numerical simulations. © 2015 American Physical Society.

Jones B.J.P.,Massachusetts Institute of Technology
Physical Review D - Particles, Fields, Gravitation and Cosmology | Year: 2015

In this paper we consider the coherence properties of neutrinos produced by the decays of pions in conventional neutrino beams. Using a multiparticle density matrix formalism we derive the oscillation probability for neutrinos emitted by a decaying pion in an arbitrary quantum state. Then, using methods from decoherence theory, we calculate the pion state which evolves through interaction with decay-pipe gases in a typical accelerator neutrino experiment. These two ingredients are used to obtain the distance scales for neutrino beam coherence loss. We find that for the known neutrino mass splittings, no nonstandard oscillation effects are expected on terrestrial baselines. Heavy sterile neutrinos may experience terrestrial loss of coherence, and we calculate both the distance over which this occurs and the energy resolution required to observe the effect. By treating the pion-muon-neutrino-environment system quantum mechanically, neutrino beam coherence properties are obtained without assuming arbitrary spatial or temporal scales at the neutrino production vertex. © 2015 American Physical Society.

Penna R.F.,Massachusetts Institute of Technology
Physical Review D - Particles, Fields, Gravitation and Cosmology | Year: 2015

Black hole jet power depends on the angular velocity of magnetic field lines, ΩF. Force-free black hole magnetospheres typically have ΩF/ΩH≈0.5, where ΩH is the angular velocity of the horizon. We give a streamlined proof of this result using an extension of the classical black hole membrane paradigm. The proof is based on an impedance-matching argument between membranes at the horizon and infinity. Then we consider a general relativistic magnetohydrodynamic simulation of an accreting, spinning black hole and jet. We find that the theory correctly describes the simulation in the jet region. However, the field lines threading the horizon near the equator have much smaller ΩF/ΩH because the force-free approximation breaks down in the accretion flow. © 2015 American Physical Society.

Forney Jr. G.D.,Massachusetts Institute of Technology
IEEE Transactions on Information Theory | Year: 2011

Given a discrete-time linear system C, a shortest basis for C is a set of linearly independent generators for C with the least possible lengths. A basis B is a shortest basis if and only if it has the predictable span property (i.e., has the predictable delay and degree properties, and is non-catastrophic), or alternatively if and only if it has the subsystem basis property (for any interval J, the generators in B whose span is in J form a basis for the subsystem C_J). The dimensions of the minimal state spaces and minimal transition spaces of C are simply the numbers of generators in a shortest basis B that are active at any given state or symbol time, respectively. A minimal linear realization for C in controller canonical form follows directly from a shortest basis for C, and a minimal linear realization for C in observer canonical form follows directly from a shortest basis for the orthogonal system C^⊥. This approach seems conceptually simpler than that of classical minimal realization theory. © 2006 IEEE.

Senthil T.,Massachusetts Institute of Technology
Annual Review of Condensed Matter Physics | Year: 2015

We describe recent progress in our understanding of the interplay between interactions, symmetry, and topology in states of quantum matter. We focus on a minimal generalization of the celebrated topological band insulators (TBIs) to interacting many-particle systems known as symmetry-protected topological (SPT) phases. As with the TBIs, these states have a bulk gap and no exotic excitations but have nontrivial surface states that are protected by symmetry. We describe the various possible phases and their properties in three-dimensional systems with realistic symmetries. We develop many key ideas for the theory of these states using simple examples. The emphasis is on physical rather than mathematical properties. We survey insights obtained from the study of SPT phases for a number of other theoretical problems. © 2015 by Annual Reviews.

Orlin J.B.,Massachusetts Institute of Technology
Proceedings of the Annual ACM Symposium on Theory of Computing | Year: 2010

We give the first strongly polynomial time algorithm for computing an equilibrium for the linear utilities case of Fisher's market model. We consider a problem with a set B of buyers and a set G of divisible goods. Each buyer i starts with an initial integral allocation e_i of money. The integral utility for buyer i of good j is U_ij. We first develop a weakly polynomial time algorithm that runs in O(n^4 log U_max + n^3 e_max) time, where n = |B| + |G|. We further modify the algorithm so that it runs in O(n^4 log n) time. These algorithms improve upon the previous best running time of O(n^8 log U_max + n^7 log e_max), due to Devanur et al. © 2010 ACM.

Demkowicz M.J.,Massachusetts Institute of Technology | Thilly L.,University of Poitiers
Acta Materialia | Year: 2011

Atomistic modeling is used to investigate the shear resistance and interaction with point defects of a Cu-Nb interface found in nanocomposites synthesized by severe plastic deformation. The shear resistance of this interface is highly anisotropic: in one direction shearing occurs at stresses <1200 MPa, while in the other it does not occur at all. The binding energy of vacancies, interstitials and He impurities to this interface depends sensitively on the binding location, but there is no point defect delocalization, nor does this interface contain any constitutional defects. These behaviors are markedly dissimilar from a different Cu-Nb interface found in magnetron sputtered composites. The dissimilarities may, however, be explained by quantitative differences in the detailed structure of these two interfaces. © 2011 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.

Hertzberg M.P.,Massachusetts Institute of Technology
Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics | Year: 2015

We examine the role of using symmetry and effective field theory in inflationary model building. We describe the standard formulation of starting with an approximate shift symmetry for a scalar field, and then introducing corrections systematically in order to maintain control over the inflationary potential. We find that this leads to models in good agreement with recent data. On the other hand, there are attempts in the literature to deviate from this paradigm by invoking other symmetries and corrections. In particular: in a suite of recent papers, several authors have made the claim that standard Einstein gravity with a cosmological constant and a massless scalar carries conformal symmetry. They claim this conformal symmetry is hidden when the action is written in the Einstein frame, and so has not been fully appreciated in the literature. They further claim that such a theory carries another hidden symmetry: a global SO(1, 1) symmetry. By deforming around the global SO(1, 1) symmetry, they are able to produce a range of inflationary models with asymptotically flat potentials, whose flatness is claimed to be protected by these symmetries. These models tend to give rise to B-modes with small amplitude. Here we explain that standard Einstein gravity does not in fact possess conformal symmetry. Instead these authors are merely introducing a redundancy into the description, not an actual conformal symmetry. Furthermore, we explain that the only real (global) symmetry in these models is not at all hidden, but is completely manifest when expressed in the Einstein frame; it is in fact the shift symmetry of a scalar field. When analyzed systematically as an effective field theory, deformations do not generally produce asymptotically flat potentials and small B-modes as suggested in these recent papers. Instead, systematically deforming around the shift symmetry tends to produce models of inflation with B-modes of appreciable amplitude. Such simple models typically also produce the observed red spectral index, Gaussian fluctuations, etc. In short: simple models of inflation, organized by expanding around a shift symmetry, are in excellent agreement with recent data. © 2015 The Author.

Bathe K.-J.,Massachusetts Institute of Technology
Computers and Structures | Year: 2013

The objective in this paper is to present some recent developments regarding the subspace iteration method for the solution of frequencies and mode shapes. The developments pertain to speeding up the basic subspace iteration method by choosing an effective number of iteration vectors and by the use of parallel processing. The subspace iteration method lends itself particularly well to shared and distributed memory processing. We present the algorithms used and illustrative sample solutions. The present paper may be regarded as an addendum to the publications presented in the early 1970s, see Refs. [1,2], taking into account the changes in computers that have taken place. © 2012 Elsevier Ltd.
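The block iteration at the core of the method can be sketched in a few lines. The following is an illustrative dense NumPy/SciPy version of subspace iteration for the generalized eigenproblem K·φ = λ·M·φ, not Bathe's production algorithm: the explicit inverse, the fixed iteration count, and the starting-vector choice are assumptions made for brevity (a real code factorizes K once, carries extra iteration vectors, and checks convergence).

```python
import numpy as np
from scipy.linalg import eigh

def subspace_iteration(K, M, p, iters=50, seed=0):
    """Sketch of subspace iteration for K @ x = lam * M @ x:
    inverse iteration on a block of p vectors, followed by a
    Rayleigh-Ritz projection at every step."""
    n = K.shape[0]
    X = np.random.default_rng(seed).standard_normal((n, p))
    Kinv = np.linalg.inv(K)        # stand-in for factorizing K once
    for _ in range(iters):
        Xbar = Kinv @ (M @ X)      # inverse iteration on the block
        Kr = Xbar.T @ K @ Xbar     # reduced (projected) stiffness
        Mr = Xbar.T @ M @ Xbar     # reduced (projected) mass
        lam, Q = eigh(Kr, Mr)      # Rayleigh-Ritz eigenproblem
        X = Xbar @ Q               # improved, M-orthonormal vectors
    return lam, X
```

Because `scipy.linalg.eigh` returns M-orthonormal Ritz vectors, the block stays well conditioned from one iteration to the next; parallelism enters naturally in the matrix products and the block solve.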

Arai K.,Massachusetts Institute of Technology
Nature Nanotechnology | Year: 2015

Optically detected magnetic resonance using nitrogen–vacancy (NV) colour centres in diamond is a leading modality for nanoscale magnetic field imaging, as it provides single electron spin sensitivity, three-dimensional resolution better than 1 nm (ref. 5) and applicability to a wide range of physical and biological samples under ambient conditions. To date, however, NV-diamond magnetic imaging has been performed using ‘real-space’ techniques, which are either limited by optical diffraction to ∼250 nm resolution or require slow, point-by-point scanning for nanoscale resolution, for example, using an atomic force microscope, magnetic tip, or super-resolution optical imaging. Here, we introduce an alternative technique of Fourier magnetic imaging using NV-diamond. In analogy with conventional magnetic resonance imaging (MRI), we employ pulsed magnetic field gradients to phase-encode spatial information on NV electronic spins in wavenumber or ‘k-space’ followed by a fast Fourier transform to yield real-space images with nanoscale resolution, wide field of view and compressed sensing speed-up. © 2015 Nature Publishing Group
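The k-space idea behind the method can be illustrated with a toy calculation: under pulsed field gradients the acquired signal is, up to constants, the spatial Fourier transform of the image, so a fast Fourier transform takes the phase-encoded data back to real space. The image and feature below are hypothetical stand-ins, not NV data.

```python
import numpy as np

# Toy k-space encoding/decoding: "acquire" the Fourier transform of a
# 2D field map, then reconstruct real space with a fast Fourier transform.
image = np.zeros((64, 64))
image[20:30, 40:50] = 1.0            # hypothetical spin/field feature
kspace = np.fft.fft2(image)          # phase-encoded ("acquired") data
recon = np.fft.ifft2(kspace).real    # FFT reconstruction of real space
```

In the actual experiment, resolution is set by the largest gradient-encoded wavenumber rather than by optical diffraction, which is what permits nanoscale imaging over a wide field of view.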

Fee M.S.,Massachusetts Institute of Technology
Current Opinion in Neurobiology | Year: 2014

Reinforcement learning requires the convergence of signals representing context, action, and reward. While models of basal ganglia function have well-founded hypotheses about the neural origin of signals representing context and reward, the function and origin of signals representing action are less clear. Recent findings suggest that exploratory or variable behaviors are initiated by a wide array of 'action-generating' circuits in the midbrain, brainstem, and cortex. Thus, in order to learn, the striatum must incorporate an efference copy of action decisions made in these action-generating circuits. Here we review several recent neural models of reinforcement learning that emphasize the role of efference copy signals. Also described are ideas about how these signals might be integrated with inputs signaling context and reward. © 2014 Elsevier Ltd.

Barouch D.H.,Beth Israel Deaconess Medical Center | Barouch D.H.,Massachusetts Institute of Technology | Picker L.J.,Oregon Health And Science University
Nature Reviews Microbiology | Year: 2014

The ultimate solution to the global HIV-1 epidemic will probably require the development of a safe and effective vaccine. Multiple vaccine platforms have been evaluated in preclinical and clinical trials, but given the disappointing results of clinical efficacy studies so far, novel vaccine approaches are needed. In this Opinion article, we discuss the scientific basis and clinical potential of novel adenovirus and cytomegalovirus vaccine vectors for HIV-1 as two contrasting but potentially complementary vector approaches. Both of these vector platforms have demonstrated partial protection against stringent simian immunodeficiency virus challenges in rhesus monkeys using different immunological mechanisms. © 2014 Macmillan Publishers Limited. All rights reserved.

England J.L.,Massachusetts Institute of Technology
Nature Nanotechnology | Year: 2015

In a collection of assembling particles that is allowed to reach thermal equilibrium, the energy of a given microscopic arrangement and the probability of observing the system in that arrangement obey a simple exponential relationship known as the Boltzmann distribution. Once the same thermally fluctuating particles are driven away from equilibrium by forces that do work on the system over time, however, it becomes significantly more challenging to relate the likelihood of a given outcome to familiar thermodynamic quantities. Nonetheless, it has long been appreciated that developing a sound and general understanding of the thermodynamics of such non-equilibrium scenarios could ultimately enable us to control and imitate the marvellous successes that living things achieve in driven self-assembly. Here, I suggest that such a theoretical understanding may at last be emerging, and trace its development from historic first steps to more recent discoveries. Focusing on these newer results, I propose that they imply a general thermodynamic mechanism for self-organization via dissipation of absorbed work that may be applicable in a broad class of driven many-body systems. © 2015 Macmillan Publishers Limited. All rights reserved.
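The exponential relationship referred to above can be made concrete with a small numerical sketch (illustrative energy levels, with k_B set to 1 so that energies and temperature share units):

```python
import numpy as np

def boltzmann(energies, T):
    """Equilibrium occupation probabilities: P_i proportional to
    exp(-E_i / T), normalized over the listed microstates (k_B = 1)."""
    w = np.exp(-np.asarray(energies, dtype=float) / T)
    return w / w.sum()

# Three hypothetical energy levels at unit temperature: each unit of
# extra energy suppresses the probability by a factor of e.
p = boltzmann([0.0, 1.0, 2.0], T=1.0)
```

It is precisely this simple energy-probability link that is lost once external driving forces do work on the system, which is the difficulty the non-equilibrium results discussed here address.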

Cai T.T.,University of Pennsylvania | Wang L.,Massachusetts Institute of Technology
IEEE Transactions on Information Theory | Year: 2011

We consider the orthogonal matching pursuit (OMP) algorithm for the recovery of a high-dimensional sparse signal based on a small number of noisy linear measurements. OMP is an iterative greedy algorithm that selects at each step the column most correlated with the current residual. In this paper, we present a fully data driven OMP algorithm with explicit stopping rules. It is shown that under conditions on the mutual incoherence and the minimum magnitude of the nonzero components of the signal, the support of the signal can be recovered exactly by the OMP algorithm with high probability. In addition, we also consider the problem of identifying significant components in the case where some of the nonzero components are possibly small. It is shown that in this case the OMP algorithm will still select all the significant components before possibly selecting incorrect ones. Moreover, with modified stopping rules, the OMP algorithm can ensure that no zero components are selected. © 2011 IEEE.
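The greedy step described above can be sketched as follows. This is a generic OMP implementation for illustration, not the paper's data-driven variant: the fixed sparsity budget k and the residual tolerance stand in for the authors' explicit stopping rules.

```python
import numpy as np

def omp(A, y, k, tol=1e-10):
    """Orthogonal matching pursuit sketch: at each step pick the
    column of A most correlated with the residual, then re-fit the
    selected support by least squares."""
    m, n = A.shape
    residual = y.copy()
    support = []
    x = np.zeros(n)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))  # greedy selection
        if j not in support:
            support.append(j)
        # least-squares refit on the current support
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
        if np.linalg.norm(residual) < tol:
            break
    x[support] = coef
    return x

# Usage on a trivial orthonormal dictionary: a 2-sparse signal is
# recovered in two greedy steps.
A = np.eye(10)
y = np.zeros(10); y[3], y[7] = 2.0, -1.0
x_hat = omp(A, y, k=2)
```

The mutual-incoherence conditions in the paper guarantee that, with high probability, each greedy selection lands in the true support, so the refit drives the residual to the noise level.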

Ma W.J.,New York University | Jazayeri M.,Massachusetts Institute of Technology
Annual Review of Neuroscience | Year: 2014

Organisms must act in the face of sensory, motor, and reward uncertainty stemming from a pandemonium of stochasticity and missing information. In many tasks, organisms can make better decisions if they have at their disposal a representation of the uncertainty associated with task-relevant variables. We formalize this problem using Bayesian decision theory and review recent behavioral and neural evidence that the brain may use knowledge of uncertainty, confidence, and probability. © 2014 by Annual Reviews. All rights reserved.

Dwyer D.J.,University of Maryland University College | Collins J.J.,Howard Hughes Medical Institute | Collins J.J.,Harvard University | Collins J.J.,Boston University | Walker G.C.,Massachusetts Institute of Technology
Annual Review of Pharmacology and Toxicology | Year: 2015

We face an impending crisis in our ability to treat infectious disease brought about by the emergence of antibiotic-resistant pathogens and a decline in the development of new antibiotics. Urgent action is needed. This review focuses on a less well-understood aspect of antibiotic action: the complex metabolic events that occur subsequent to the interaction of antibiotics with their molecular targets and play roles in antibiotic lethality. Independent lines of evidence from studies of the action of bactericidal antibiotics on diverse bacteria collectively suggest that the initial interactions of drugs with their targets cannot fully account for the antibiotic lethality and that these interactions elicit the production of reactive oxidants including reactive oxygen species that contribute to bacterial cell death. Recent challenges to this concept are considered in the context of the broader literature of this emerging area of research. Possible ways that this new knowledge might be exploited to improve antibiotic therapy are also considered. ©2015 by Annual Reviews. All rights reserved.

McLaren P.J.,Ecole Polytechnique Federale de Lausanne | Carrington M.,Frederick National Laboratory for Cancer Research | Carrington M.,Massachusetts Institute of Technology
Nature Immunology | Year: 2015

The outcome after infection with the human immunodeficiency virus type 1 (HIV-1) is a complex phenotype determined by interactions among the pathogen, the human host and the surrounding environment. An impact of host genetic variation on HIV-1 susceptibility was identified early in the pandemic, with a major role attributed to the genes encoding class I human leukocyte antigens (HLA) and the chemokine receptor CCR5. Studies using genome-wide data sets have underscored the strength of these associations relative to variants located throughout the rest of the genome. However, the extent to which additional polymorphisms influence HIV-1 disease progression, and how much of the variability in outcome can be attributed to host genetics, remain largely unclear. Here we discuss findings concerning the functional impact of associated variants, outline methods for quantifying the host genetic component and examine how available genome-wide data sets may be leveraged to discover gene variants that affect the outcome of HIV-1 infection.

Janak P.H.,Johns Hopkins University | Tye K.M.,Massachusetts Institute of Technology
Nature | Year: 2015

The amygdala has long been associated with emotion and motivation, playing an essential part in processing both fearful and rewarding environmental stimuli. How can a single structure be crucial for such different functions? With recent technological advances that allow for causal investigations of specific neural circuit elements, we can now begin to map the complex anatomical connections of the amygdala onto behavioural function. Understanding how the amygdala contributes to a wide array of behaviours requires the study of distinct amygdala circuits. © 2015 Macmillan Publishers Limited.

Fernandez-Ramirez C.,Complutense University of Madrid | Bernstein A.M.,Massachusetts Institute of Technology
Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics | Year: 2013

With the availability of the new data on neutral pion photoproduction from the proton from the A2 and CB-TAPS Collaborations at Mainz, it is mandatory to revisit Heavy Baryon Chiral Perturbation Theory (HBChPT) and address the extraction of the partial waves as well as other issues such as the value of the low-energy constants (LECs), the energy range where the calculation provides a good agreement with the data, and the impact of unitarity. We find that, within the current experimental status, HBChPT with the fitted LECs gives a good agreement with the existing neutral pion photoproduction data up to ~170 MeV and that imposing unitarity does not improve this picture. Above this energy the data call for further improvement in the theory, such as the explicit inclusion of the Δ(1232). We also find that data and multipoles can be well described up to ~185 MeV with Taylor expansions in the partial waves up to first order in pion energy. © 2013 Elsevier B.V.

Taylor J.A.,University of California at Berkeley | Hover F.S.,Massachusetts Institute of Technology
IEEE Transactions on Power Systems | Year: 2012

We derive new mixed-integer quadratic, quadratically constrained, and second-order cone programming models of distribution system reconfiguration, which are to date the first formulations of the ac problem that have convex, continuous relaxations. Each model can be reliably and efficiently solved to optimality using standard commercial software. In the course of deriving each model, we obtain original quadratically constrained and second-order cone approximations to power flow in radial networks. © 2012 IEEE.

Swingle B.,Massachusetts Institute of Technology | Swingle B.,Harvard University
Physical Review D - Particles, Fields, Gravitation and Cosmology | Year: 2012

We show how recent progress in real space renormalization group methods can be used to define a generalized notion of holography inspired by holographic dualities in quantum gravity. The generalization is based upon organizing information in a quantum state in terms of scale and defining a higher-dimensional geometry from this structure. While states with a finite correlation length typically give simple geometries, the state at a quantum critical point gives a discrete version of anti-de Sitter space. Some finite temperature quantum states include black hole-like objects. The gross features of equal time correlation functions are also reproduced. © 2012 American Physical Society.

Kwak R.,Massachusetts Institute of Technology
Analytical chemistry | Year: 2011

We present a novel continuous-flow nanofluidic biomolecule/cell concentrator, utilizing the ion concentration polarization (ICP) phenomenon. The device has one main microchannel which bifurcates into two channels, one for a narrow, concentrated stream and the other for a wider but target-free stream. A nanojunction [cation-selective material (Nafion)] is patterned along the tilted concentrated channel. Application of an electric field generates the ICP zone near the nanojunction so that biomolecules and cells are guided into the narrow, concentrated channel by hydrodynamic force. Once biomolecules from the main channel are continuously streamed out to the concentrated channel, one can achieve a continuous flow of the same sample solution but with higher concentrations up to 100-fold. By controlling the hydrodynamic resistance of the main and concentrated channels, the concentration factors can be adjusted. We demonstrated the continuous-flow concentration with various targets: fluorescein sodium salt, recombinant green fluorescent protein (rGFP), red blood cells (RBCs), and Escherichia coli (E. coli). Specifically, fluorescein isothiocyanate (FITC)-conjugated lectin from Lens culinaris (lentil) (FITC-lectin) was tested under different buffer conditions to clarify the effect of the polarities of the target sample. This system is ideally suited as a generic concentration front-end for a wide variety of biosensors, with minimal integration-related complications.

Tegmark M.,Massachusetts Institute of Technology
Physical Review D - Particles, Fields, Gravitation and Cosmology | Year: 2012

We analyze cosmology assuming unitary quantum mechanics, using a tripartite partition into system, observer, and environment degrees of freedom. This generalizes the second law of thermodynamics to "The system's entropy cannot decrease unless it interacts with the observer, and it cannot increase unless it interacts with the environment." The former follows from the quantum Bayes theorem we derive. We show that because of the long-range entanglement created by cosmological inflation, the cosmic entropy decreases exponentially rather than linearly with the number of bits of information observed, so that a given observer can reduce entropy by much more than the amount of information her brain can store. Indeed, we argue that as long as inflation has occurred in a non-negligible fraction of the volume, almost all sentient observers will find themselves in a post-inflationary low-entropy Hubble volume, and we humans have no reason to be surprised that we do so as well, which solves the so-called inflationary entropy problem. An arguably worse problem for unitary cosmology involves gamma-ray-burst constraints on the "big snap," a fourth cosmic doomsday scenario alongside the "big crunch," "big chill," and "big rip," where an increasingly granular nature of expanding space modifies our life-supporting laws of physics. Our tripartite framework also clarifies when the popular quantum gravity approximation Gμν ≈ 8πG⟨Tμν⟩ is valid, and how problems with recent attempts to explain dark energy as gravitational backreaction from superhorizon scale fluctuations can be understood as a failure of this approximation. © 2012 American Physical Society.

Spitz J.,Massachusetts Institute of Technology
Physical Review D - Particles, Fields, Gravitation and Cosmology | Year: 2012

Monoenergetic muon neutrinos (235.5 MeV) from positive kaon decay at rest are considered as a source for an electron neutrino appearance search. In combination with a liquid argon time projection chamber based detector, such a source could provide discovery-level sensitivity to the neutrino oscillation parameter space (Δm² ∼ 1 eV²) indicative of a sterile neutrino. Current and future intense 3 GeV kinetic energy proton facilities around the world can be employed for this experimental concept. © 2012 American Physical Society.

Briggs D.E.G.,Yale University | Summons R.E.,Massachusetts Institute of Technology
BioEssays | Year: 2014

The discovery of traces of a blood meal in the abdomen of a 50-million-year-old mosquito reminds us of the insights that the chemistry of fossils can provide. Ancient DNA is the best known fossil molecule. It is less well known that new fossil targets and a growing database of ancient gene sequences are paralleled by discoveries on other classes of organic molecules. New analytical tools, such as the synchrotron, reveal traces of the original composition of arthropod cuticles that are more than 400 my old. Pigments such as melanin are readily fossilized, surviving virtually unaltered for ~200 my. Other biomarkers provide evidence of microbial processes in ancient sediments, and have been used to reveal the presence of demosponges, for example, more than 635 mya, long before their spicules appear in the fossil record. Ancient biomolecules are a powerful complement to fossil remains in revealing the history of life. Ancient biomolecules range from DNA hundreds of thousands of years old to lipids and structural macromolecules that survive for billions of years. Extracted from fossils and sedimentary rocks, they reveal organism relationships, past environments, and the origin of fossil fuels. In the oldest rocks they may be the only evidence of particular life forms. © 2014 WILEY Periodicals, Inc.

Xu C.,University of California at Santa Barbara | Senthil T.,Massachusetts Institute of Technology
Physical Review B - Condensed Matter and Materials Physics | Year: 2013

We study the structure of the ground-state wave functions of bosonic symmetry protected topological (SPT) insulators in three space dimensions. We demonstrate that the differences with conventional insulators are captured simply in a dual vortex description. As an example, we show that a previously studied bosonic topological insulator with both global U(1) and time-reversal symmetry can be described by a rather simple wave function written in terms of dual "vortex ribbons." The wave function is a superposition of all the vortex-ribbon configurations of the boson, and a factor (-1) is associated with each self-linking of the vortex ribbons. This wave function can be conveniently derived using an effective field theory of the SPT phase in the strong-coupling limit, and it naturally explains all the phenomena of this SPT phase discussed previously. The ground-state structure for other three-dimensional (3D) bosonic SPT phases are also discussed similarly in terms of vortex loop gas wave functions. We show that our methods reproduce known results on the ground-state structure of some 2D SPT phases. © 2013 American Physical Society.

Wong S.Y.,Massachusetts Institute of Technology
Biomacromolecules | Year: 2012

Polyelectrolyte multilayer films assembled from a hydrophobic N-alkylated polyethylenimine and a hydrophilic polyacrylate were discovered to exhibit strong antifouling, as well as antimicrobial, activities. Surfaces coated with these layer-by-layer (LbL) films, which range from 6 to 10 bilayers (up to 45 nm in thickness), adsorbed up to 20 times less protein from blood plasma than the uncoated controls. The dependence of the antifouling activity on the nature of the polycation, as well as on assembly conditions and the number of layers in the LbL films, was investigated. Changing the hydrophobicity of the polycation altered the surface composition and the resistance to protein adsorption of the LbL films. Importantly, this resistance was greater for coated surfaces with the polyanion on top; for these films, the average zeta potential pointed to a near neutral surface charge, thus, presumably minimizing their electrostatic interactions with the protein. The film surface exhibited a large contact angle hysteresis, indicating a heterogeneous topology likely due to the existence of hydrophobic-hydrophilic regions on the surface. Scanning electron micrographs of the film surface revealed the existence of nanoscale domains. We hypothesize that the existence of hydrophobic/hydrophilic nanodomains, as well as surface charge neutrality, contributes to the LbL film's resistance to protein adsorption.

Dunkel J.,Massachusetts Institute of Technology | Hilbert S.,Max Planck Institute for Astrophysics
Nature Physics | Year: 2013

Over the past 60 years, a considerable number of theories and experiments have claimed the existence of negative absolute temperature in spin systems and ultracold quantum gases. This has led to speculation that ultracold gases may be dark-energy analogues and also suggests the feasibility of heat engines with efficiencies larger than one. Here, we prove that all previous negative temperature claims and their implications are invalid as they arise from the use of an entropy definition that is inconsistent both mathematically and thermodynamically. We show that the underlying conceptual deficiencies can be overcome if one adopts a microcanonical entropy functional originally derived by Gibbs. The resulting thermodynamic framework is self-consistent and implies that absolute temperature remains positive even for systems with a bounded spectrum. In addition, we propose a minimal quantum thermometer that can be implemented with available experimental techniques. © 2014 Macmillan Publishers Limited. All rights reserved.
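The two competing entropy definitions at issue can be stated compactly. As a sketch in standard notation (not quoted from this paper): with ω(E) the density of states, Ω(E) the number of states with energy at most E, and ε a constant with dimensions of energy,

```latex
S_B(E) = k_B \ln\!\bigl[\epsilon\,\omega(E)\bigr],
\qquad
S_G(E) = k_B \ln \Omega(E),
\qquad
T = \left(\frac{\partial S}{\partial E}\right)^{-1}.
```

Because Ω(E) is non-decreasing in E, the Gibbs temperature derived from S_G is never negative; ω(E), by contrast, can decrease with E in a system with a bounded spectrum, which is exactly the regime where the Boltzmann definition S_B yields T < 0.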

Chen J.-J.,Massachusetts Institute of Technology
Blood | Year: 2014

In this issue of Blood, Hahn and Lowrey demonstrate enhanced translation of γ-globin messenger RNA (mRNA) upon activation of phosphorylated eukaryotic initiation factor 2α (eIF2αP) stress signaling. This finding underscores translational control as a novel mechanism regulating production of fetal hemoglobin (HbF).1 © 2014 by The American Society of Hematology.

Fine C.,Massachusetts Institute of Technology
Journal of Supply Chain Management | Year: 2013

In the past twenty years, there has been an evolution in the way that sourcing and global manufacturing are viewed. While price is important, it is no longer supreme: increased complexity and transparency have led leading firms to think about intelli-sourcing, simultaneously balancing economics and protecting the firm's reputation. © 2013 Institute for Supply Management, Inc.

Chesler P.M.,Massachusetts Institute of Technology | Yaffe L.G.,University of Washington
Physical Review Letters | Year: 2011

Using holography, we study the collision of planar shock waves in strongly coupled N=4 supersymmetric Yang-Mills theory. This requires the numerical solution of a dual gravitational initial value problem in asymptotically anti-de Sitter spacetime. © 2011 The American Physical Society.

Luo T.,University of Notre Dame | Chen G.,Massachusetts Institute of Technology
Physical Chemistry Chemical Physics | Year: 2013

Heat transfer can differ distinctly at the nanoscale from that at the macroscale. Recent advances in computational and experimental techniques have enabled a large number of interesting observations of, and insights into, heat transfer processes at the nanoscale. In this review, we first discuss recent advances in the computational and experimental methods used in nanoscale thermal transport studies, followed by reviews of novel thermal transport phenomena at the nanoscale observed in both computational and experimental studies, and a discussion of the current understanding of these phenomena. Our perspectives on challenges and opportunities in computational and experimental methods are also presented. This journal is © 2013 the Owner Societies.

Miller A.R.,University of Virginia | Tucker C.,Massachusetts Institute of Technology
Journal of Health Economics | Year: 2014

There are many technology platforms that bring benefits only when users share data. In healthcare, this is a key policy issue, because of the potential cost savings and quality improvements from 'big data' in the form of sharing electronic patient data across medical providers. Indeed, one criterion used for federal subsidies for healthcare information technology is whether the software has the capability to share data. We find empirically that larger hospital systems are more likely to exchange electronic patient information internally, but are less likely to exchange patient information externally with other hospitals. This pattern is driven by instances where there may be a commercial cost to sharing data with other hospitals. Our results suggest that the common strategy of using 'marquee' large users to kick-start a platform technology has an important drawback of potentially creating information silos. This suggests that federal subsidies for health data technologies based on 'meaningful use' criteria that are based simply on the capability to share data, rather than on actual sharing of data, may be misplaced. © 2013 Elsevier B.V.

Magnetic skyrmions are topologically protected spin textures that exhibit fascinating physical behaviours and large potential in highly energy-efficient spintronic device applications. The main obstacles so far are that skyrmions have been observed in only a few exotic materials and at low temperatures, and fast current-driven motion of individual skyrmions has not yet been achieved. Here, we report the observation of stable magnetic skyrmions at room temperature in ultrathin transition metal ferromagnets with magnetic transmission soft X-ray microscopy. We demonstrate the ability to generate stable skyrmion lattices and drive trains of individual skyrmions by short current pulses along a magnetic racetrack at speeds exceeding 100 m s-1 as required for applications. Our findings provide experimental evidence of recent predictions and open the door to room-temperature skyrmion spintronics in robust thin-film heterostructures. © 2016 Nature Publishing Group

Rock J.M.,Massachusetts Institute of Technology | Amon A.,Howard Hughes Medical Institute
Genes and Development | Year: 2011

In budding yeast, a Ras-like GTPase signaling cascade known as the mitotic exit network (MEN) promotes exit from mitosis. To ensure the accurate execution of mitosis, MEN activity is coordinated with other cellular events and restricted to anaphase. The MEN GTPase Tem1 has been assumed to be the central switch in MEN regulation. We show here that during an unperturbed cell cycle, restricting MEN activity to anaphase can occur in a Tem1 GTPase-independent manner. We found that the anaphase-specific activation of the MEN in the absence of Tem1 is controlled by the Polo kinase Cdc5. We further show that both Tem1 and Cdc5 are required to recruit the MEN kinase Cdc15 to spindle pole bodies, which is both necessary and sufficient to induce MEN signaling. Thus, Cdc15 functions as a coincidence detector of two essential cell cycle oscillators: the Polo kinase Cdc5 synthesis/degradation cycle and the Tem1 G-protein cycle. The Cdc15-dependent integration of these temporal (Cdc5 and Tem1 activity) and spatial (Tem1 activity) signals ensures that exit from mitosis occurs only after proper genome partitioning. © 2011 by Cold Spring Harbor Laboratory Press.

Lauren J.,Massachusetts Institute of Technology
Journal of Alzheimer's Disease | Year: 2014

Soluble oligomeric species of amyloid-β (Aβ) peptide are presumed to be drivers of synaptic impairment, and the resulting cognitive dysfunction, in Alzheimer's disease. In 2009, cellular prion protein (PrPC) was identified in a genome-wide screen as a high-affinity receptor for Aβ oligomers, and since then, many studies have explored the role of PrPC in Alzheimer's disease. Herein, I systematically assess the current level of target validation for PrPC in Alzheimer's disease and the merits of the identified approaches to therapeutically affect the PrPC:Aβ oligomer interaction. The interaction of Aβ oligomers with PrPC in mice impairs hippocampal long-term potentiation, memory, and learning in a manner that involves Fyn, tau, and glutamate receptors. Furthermore, PrPC acts to catalyze the formation of certain Aβ oligomeric species in the synapse and may mediate the toxic effects of other β-sheet-rich oligomers as well. Therapeutic approaches utilizing the soluble PrPC ectodomain or monoclonal antibodies targeting PrPC can at least partially prevent the neurotoxic effects of Aβ oligomers in mice. © 2014 IOS Press.

Heald C.L.,Massachusetts Institute of Technology | Spracklen D.V.,University of Leeds
Chemical Reviews | Year: 2015

The authors review current understanding of the interplay between land use change and atmospheric chemistry, with a focus on short-lived atmospheric pollutants. Both natural and anthropogenic land use change may substantially impact global air quality, with significant radiative effects on global and local climate. In particular, they focus on how historical and projected land use change will alter emissions of biogenic volatile organic compounds (BVOCs), soil NOx, dust, smoke, and bioaerosols, as well as the dry deposition of ozone. These constitute the key drivers of air quality changes related to a dynamic land surface. The review suggests that land use change has led to an overall cooling over the 20th century, with aerosol effects alone equivalent to 10-50% of the aerosol direct radiative forcing due to anthropogenic emissions. This cooling is likely to continue through the 21st century, but is subject to large uncertainties associated with future agricultural practices. In an era of declining pollution emissions, anthropogenic land use change may become the dominant human fingerprint on ozone and aerosol climate forcing. In addition, natural land use change represents a critical climate feedback which impacts both air quality and our assessment of how anthropogenic pollution has affected cloud formation, and the associated climate cooling.

Lee Y.-J.,Massachusetts Institute of Technology
Journal of Physics G: Nuclear and Particle Physics | Year: 2011

We report the measurements of nuclear modification factors of the Z bosons, isolated photons and charged particles in √sNN = 2.76 TeV Pb-Pb collisions with the Compact Muon Solenoid detector. The nuclear modification factors are constructed by dividing the Pb-Pb pT spectra, normalized to the number of binary collisions, by the pp references. No modifications are observed in the isolated photon and Z boson production with respect to the pp references, while a large suppression is observed in the charged particles. © CERN 2011. Published under licence by IOP Publishing Ltd.

Li W.,Massachusetts Institute of Technology
Journal of Physics G: Nuclear and Particle Physics | Year: 2011

Measurements of charged dihadron angular correlations are presented in proton-proton (pp) and lead-lead (PbPb) collisions, over a broad range of pseudorapidity and azimuthal angle, using the CMS detector at the LHC. In very high multiplicity pp events at a center-of-mass energy of 7 TeV, a striking 'ridge'-like structure emerges in the two-dimensional correlation function for particle pairs with intermediate pT of 1-3 GeV/c, in the kinematic region 2.0 < |Δη| < 4.8 and small Δφ, which is similar to observations in heavy-ion collisions. Studies of this new effect as a function of particle transverse momentum are discussed. The long-range and short-range dihadron correlations are also studied in PbPb collisions at a nucleon-nucleon center-of-mass energy of 2.76 TeV, as a function of transverse momentum and collision centrality. A Fourier analysis of the long-range dihadron correlations is presented and discussed in the context of CMS measurements of higher-order flow coefficients. © CERN 2011. Published under licence by IOP Publishing Ltd.
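The Fourier analysis of the long-range correlations follows a standard heavy-ion decomposition (a sketch in conventional notation, not quoted from this proceedings): the per-trigger-particle associated yield at large |Δη| is expanded as

```latex
\frac{1}{N_{\mathrm{trig}}}\frac{dN^{\mathrm{pair}}}{d\Delta\phi}
\;\propto\;
1 + \sum_{n} 2\,V_{n\Delta}\cos(n\,\Delta\phi),
```

and, if the correlations are collective in origin, the coefficients approximately factorize as V_{nΔ} ≈ v_n(pT,trig) · v_n(pT,assoc), which is what connects dihadron correlations to the single-particle flow coefficients v_n mentioned above.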

Solar-Lezama A.,Massachusetts Institute of Technology
International Journal on Software Tools for Technology Transfer | Year: 2013

Sketching is a synthesis methodology that aims to bridge the gap between a programmer's high-level insights about a problem and the computer's ability to manage low-level details. In sketching, the programmer uses a partial program, a sketch, to describe the desired implementation strategy, and leaves the low-level details of the implementation to an automated synthesis procedure. In order to generate an implementation from the programmer-provided sketch, the synthesizer uses counterexample-guided inductive synthesis (CEGIS). Inductive synthesis refers to the process of generating candidate implementations from concrete examples of correct or incorrect behavior. CEGIS combines a SAT-based inductive synthesizer with an automated validation procedure, a bounded model checker, that checks whether the candidate implementation produced by inductive synthesis is indeed correct and, if it is not, produces new counterexamples. The result is a synthesis procedure that is able to handle complex problems from a variety of domains including ciphers, scientific programs, and even concurrent data structures. © 2012 Springer-Verlag.
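The CEGIS loop described above can be sketched in a few lines. This is an illustrative toy, not the Sketch tool's actual interface: a single integer hole, brute-force search in place of a SAT solver, and exhaustive enumeration of a bounded domain in place of a bounded model checker. All names here are invented for the example.

```python
# Toy CEGIS: find the value of one integer "hole" C so that the sketch
# "x + C" agrees with a reference implementation on a bounded domain.

DOMAIN = range(-50, 51)           # bounded verification domain

def reference(x):                 # the full specification
    return x + 3

def candidate(c, x):              # the sketch with hole value c plugged in
    return x + c

def synthesize(examples):
    """Inductive step: propose a hole value consistent with all examples."""
    for c in range(-100, 101):
        if all(candidate(c, x) == y for x, y in examples):
            return c
    return None                   # no consistent hole value exists

def verify(c):
    """Bounded check: return a counterexample input, or None if correct."""
    for x in DOMAIN:
        if candidate(c, x) != reference(x):
            return x
    return None

def cegis():
    examples = [(0, reference(0))]            # seed with one example
    while True:
        c = synthesize(examples)
        if c is None:
            return None                       # the sketch is unrealizable
        cex = verify(c)
        if cex is None:
            return c                          # verified on the whole domain
        examples.append((cex, reference(cex)))

print(cegis())  # -> 3
```

The structure mirrors the abstract: an inductive synthesizer that only sees finitely many input/output examples, and a verifier whose counterexamples drive the next round of synthesis.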

Wurtman R.J.,Massachusetts Institute of Technology
Metabolism: Clinical and Experimental | Year: 2013

Patients exhibiting the classic manifestations of parkinsonism - tremors, rigidity, postural instability, slowed movements and, sometimes, sleep disturbances and depression - may also display severe cognitive disturbances. All of these particular motoric and behavioral symptoms may arise from Parkinson's disease [PD] per se, but they can also characterize Lewy body dementia [LBD] or concurrent Parkinson's and Alzheimer's diseases [PD & AD]. Abnormalities of both movement and cognition are also observed in numerous other neurologic diseases, for example Huntington's disease and the frontotemporal dementias. Distinguishing among these diseases in an individual patient is important in personalizing his or her mode of treatment, since an agent that is often highly effective in one of the diagnoses (e.g., l-dopa or muscarinic antagonists in PD) might be ineffective or even damaging in one of the others. That such personalization, based on genetic, biochemical, and imaging-based biomarkers, is feasible is suggested by the numerous genetic abnormalities already discovered in patients with parkinsonism, Alzheimer's disease, and Huntington's disease (HD), and by the variety of regional and temporal patterns that these diseases can produce, as shown using imaging techniques. © 2013 Elsevier Inc.

Slatyer T.R.,Massachusetts Institute of Technology
Physical Review D - Particles, Fields, Gravitation and Cosmology | Year: 2016

Any injection of electromagnetically interacting particles during the cosmic dark ages will lead to increased ionization, heating, production of Lyman-α photons and distortions to the energy spectrum of the cosmic microwave background, with potentially observable consequences. In this paper we describe numerical results for the low-energy electrons and photons produced by the cooling of particles injected at energies from keV to multi-TeV scales, at arbitrary injection redshifts (but focusing on the post-recombination epoch). We use these data, combined with existing calculations modeling the cooling of these low-energy particles, to estimate the resulting contributions to ionization, excitation and heating of the gas, and production of low-energy photons below the threshold for excitation and ionization. We compute corrected deposition-efficiency curves for annihilating dark matter, and demonstrate how to compute equivalent curves for arbitrary energy-injection histories. These calculations provide the necessary inputs for the limits on dark matter annihilation presented in the accompanying paper I, but also have potential applications in the context of dark matter decay or deexcitation, decay of other metastable species, or similar energy injections from new physics. We make our full results publicly available at http://nebel.rc.fas.harvard.edu/epsilon, to facilitate further independent studies. In particular, we provide the full low-energy electron and photon spectra, to allow matching onto more detailed codes that describe the cooling of such particles at low energies. © 2016 American Physical Society.

Mross D.F.,California Institute of Technology | Senthil T.,Massachusetts Institute of Technology
Physical Review X | Year: 2015

Spontaneous breaking of translational symmetry, known as density-wave order, is common in nature. However, such states are strongly sensitive to impurities or other forms of frozen disorder, leading to fascinating glassy phenomena. We analyze impurity effects on a particularly ubiquitous form of broken translation symmetry in solids: a spin-density wave (SDW) with spatially modulated magnetic order. Related phenomena occur in pair-density-wave (PDW) superconductors where the superconducting order is spatially modulated. For weak disorder, we find that the SDW or PDW order can generically give way to an SDW or PDW glass, new phases of matter with a number of striking properties, which we introduce and characterize here. In particular, they exhibit an interesting combination of conventional (symmetry-breaking) and spin-glass (Edwards-Anderson) order. This is reflected in the dynamic response of such a system, which, as expected for a glass, is extremely slow in certain variables but, surprisingly, fast in others. Our results apply to all uniaxial metallic SDW systems where the ordering vector is incommensurate with the crystalline lattice. In addition, the possibility of a PDW glass has important consequences for some recent theoretical and experimental work on La2-xBaxCuO4.

Ford B.,Colorado State University | Heald C.L.,Massachusetts Institute of Technology
Atmospheric Chemistry and Physics | Year: 2013

We investigate the seasonality in aerosols over the Southeastern United States using observations from several satellite instruments (MODIS, MISR, CALIOP) and surface network sites (IMPROVE, SEARCH, AERONET). We find that the strong summertime enhancement in satellite-observed aerosol optical depth (AOD) (factor 2-3 enhancement over wintertime AOD) is not present in surface mass concentrations (25-55% summertime enhancement). Goldstein et al. (2009) previously attributed this seasonality in AOD to biogenic organic aerosol; however, surface observations show that organic aerosol only accounts for ∼35% of fine particulate matter (smaller than 2.5 μm in aerodynamic diameter, PM2.5) and exhibits similar seasonality to total surface PM2.5. The GEOS-Chem model generally reproduces these surface aerosol measurements, but underrepresents the AOD seasonality observed by satellites. We show that seasonal differences in water uptake cannot sufficiently explain the magnitude of AOD increase. As CALIOP profiles indicate the presence of additional aerosol in the lower troposphere (below 700 hPa), which cannot be explained by vertical mixing, we conclude that the discrepancy is due to a missing source of aerosols above the surface layer in summer. © Author(s) 2013.

Armour K.C.,Massachusetts Institute of Technology | Bitz C.M.,University of Washington | Roe G.H.,University of Washington
Journal of Climate | Year: 2013

The sensitivity of global climate with respect to forcing is generally described in terms of the global climate feedback: the global radiative response per degree of global annual mean surface temperature change. While the global climate feedback is often assumed to be constant, its value, diagnosed from global climate models, shows substantial time variation under transient warming. Here a reformulation of the global climate feedback in terms of its contributions from regional climate feedbacks is proposed, providing a clear physical insight into this behavior. Using (i) a state-of-the-art global climate model and (ii) a low-order energy balance model, it is shown that the global climate feedback is fundamentally linked to the geographic pattern of regional climate feedbacks and the geographic pattern of surface warming at any given time. Time variation of the global climate feedback arises naturally when the pattern of surface warming evolves, actuating feedbacks of different strengths in different regions. This result has substantial implications for the ability to constrain future climate changes from observations of past and present climate states. The regional climate feedbacks formulation also reveals fundamental biases in a widely used method for diagnosing climate sensitivity, feedbacks, and radiative forcing: the regression of the global top-of-atmosphere radiation flux on global surface temperature. Further, it suggests a clear mechanism for the "efficacies" of both ocean heat uptake and radiative forcing. © 2013 American Meteorological Society.
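The reformulation has a compact algebraic core. As a sketch in generic notation (the symbols are illustrative, not necessarily the paper's): writing the global-mean radiative response R as an area-weighted sum over regions i with feedbacks λ_i, area fractions a_i, and warming T_i',

```latex
R(t) = \sum_i a_i\,\lambda_i\,T_i'(t),
\qquad
\lambda_{\mathrm{eff}}(t) \equiv \frac{R(t)}{\overline{T}'(t)}
      = \sum_i \lambda_i\,\frac{a_i\,T_i'(t)}{\overline{T}'(t)},
```

so even if every regional feedback λ_i is constant in time, the effective global feedback λ_eff varies whenever the normalized warming pattern T_i'/T̄' evolves, which is the time variation described above.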

Wunsch C.,Massachusetts Institute of Technology
Deep-Sea Research Part II: Topical Studies in Oceanography | Year: 2013

The problem of understanding linear predictability of elements of the ocean circulation is explored in the Atlantic Ocean for two disparate elements: (1) sea surface temperature (SST) under the storm track in a small region east of the Grand Banks, and (2) the meridional overturning circulation north of 30.5°S. To be worthwhile, any nonlinear method would need to exhibit greater skill, and so a rough baseline from which to judge more complex methods is the goal. A 16-year ocean state estimate is used, under the assumption that internal oceanic variability is dominating externally imposed changes. No evidence exists of significant nonlinearity in the bulk of the system over this time span. Linear predictability is the story of time and space correlations, and some predictive skill exists for a few months in SST, with some minor capability extending to a few years. Sixteen years is, however, far too short for an evaluation of interannual, much less decadal, variability, although orders of magnitude are likely stably estimated. The meridional structure of the meridional overturning circulation (MOC), defined as the time-varying vertical integral to the maximum meridional volume transport at each latitude, shows nearly complete decorrelation in the variability across about 35°N - the Gulf Stream system. If a time-scale exists displaying coherence of the MOC between subpolar and subtropical gyres, it lies beyond the existing observation duration, and that has consequences for observing system strategies and the more general problem of detectability of change. © 2012 Elsevier Ltd.

Irvine D.J.,Massachusetts Institute of Technology | Irvine D.J.,Massachusetts General Hospital | Irvine D.J.,Howard Hughes Medical Institute
Nature Materials | Year: 2011

Nanoparticles capable of specifically binding to target cells and delivering high doses of therapeutic compounds are finding applications in cancer chemotherapy and the treatment of several other diseases. A number of nanoparticle-based therapeutics in clinical use are based on liposomes, vesicles comprising a lipid bilayer surrounding an aqueous core, with low immunogenicity and excellent safety profiles in humans. Targeted delivery is achieved by functionalizing the surface of nanoparticles with ligands that bind to specific molecules on target cells while avoiding nonspecific binding to other cells, blood, or tissue components. High-avidity binding to target cells can be achieved with lower densities of targeting ligand capable of clustering to bind multivalently to cell-surface receptors. Adsorption-mediated drug loading in the particle cores led to 1,000-fold greater doses of the common chemotherapy drug doxorubicin on a per-particle basis, compared with liposomes prepared by clinically employed methods.

Fu L.,Massachusetts Institute of Technology
Physical Review B - Condensed Matter and Materials Physics | Year: 2014

CuxBi2Se3 was recently proposed as a promising candidate for time-reversal-invariant topological superconductors. In this work, we argue that the unusual anisotropy of the Knight shift observed by Zheng and co-workers (unpublished), taken together with specific heat measurements, provides strong support for an unconventional odd-parity pairing in the two-dimensional Eu representation of the D3d crystal point group, which spontaneously breaks the threefold rotational symmetry of the crystal, leading to a subsidiary nematic order. We predict that the spin-orbit interaction associated with hexagonal warping plays a crucial role in pinning the two-component order parameter and makes the superconducting state generically fully gapped, leading to a topological superconductor. Experimental signatures of the Eu pairing related to the nematic order are discussed. © 2014 American Physical Society.

Wurtman R.J.,Massachusetts Institute of Technology
Nutrients | Year: 2014

Brain neurons form synapses throughout the life span. This process is initiated by neuronal depolarization; however, the numbers of synapses thus formed depend on brain levels of three key nutrients: uridine, the omega-3 fatty acid DHA, and choline. Given together, these nutrients accelerate formation of synaptic membrane, the major component of synapses. In infants, when synaptogenesis is maximal, relatively large amounts of all three nutrients are provided in bioavailable forms (e.g., uridine in the UMP of mothers' milk and infant formulas). However, in adults the uridine in foods, mostly present as RNA, is not bioavailable, and no food has ever been compellingly demonstrated to elevate plasma uridine levels. Moreover, the quantities of DHA and choline in regular foods can be insufficient for raising their blood levels enough to promote optimal synaptogenesis. In Alzheimer's disease (AD) the need for extra quantities of the three nutrients is enhanced, both because their basal plasma levels may be subnormal (reflecting impaired hepatic synthesis), and because especially high brain levels are needed for correcting the disease-related deficiencies in synaptic membrane and synapses. © 2014 by the authors; licensee MDPI, Basel, Switzerland.

Panigrahi D.,Massachusetts Institute of Technology
Proceedings of the Annual ACM-SIAM Symposium on Discrete Algorithms | Year: 2011

Survivable network design is an important suite of algorithmic problems where the goal is to select a minimum cost network subject to the constraint that some desired connectivity property has to be satisfied by the network. Traditionally, these problems have been studied in a model where individual edges (and sometimes nodes) have an associated cost. This model does not faithfully represent wireless networks, where the activation of an edge is dependent on the selection of parameter values at its endpoints, and the cost incurred is a function of these values. We present a realistic optimization model for the design of survivable wireless networks that generalizes various connectivity problems studied in the theory literature, e.g. node-weighted Steiner network, power optimization, minimum connected dominating set, and in the networking literature, e.g. installation cost optimization, minimum broadcast tree. We obtain the following algorithmic results for our general model: 1. For k = 1 and 2, we give O(log n)-approximation algorithms for both the vertex and edge connectivity versions of the k-connectivity problem. These results are tight (up to constants); we show that even for k = 1, it is NP-hard to obtain an approximation factor of o(log n). 2. For the minimum Steiner network problem, we give a tight (up to constants) O(log n)-approximation algorithm. 3. We give a reduction from the k-edge connectivity problem to a more tractable degree-constrained problem. This involves proving new connectivity theorems that might be of independent interest. We apply this result to obtain new approximation algorithms in the power optimization and installation cost optimization applications.

Marshall J.,Massachusetts Institute of Technology | Speer K.,Florida State University
Nature Geoscience | Year: 2012

The meridional overturning circulation of the ocean plays a central role in climate and climate variability by storing and transporting heat, fresh water and carbon around the globe. Historically, the focus of research has been on the North Atlantic Basin, a primary site where water sinks from the surface to depth, triggered by loss of heat, and therefore buoyancy, to the atmosphere. A key part of the overturning puzzle, however, is the return path from the interior ocean to the surface through upwelling in the Southern Ocean. This return path is largely driven by winds. It has become clear over the past few years that the importance of Southern Ocean upwelling for our understanding of climate rivals that of North Atlantic downwelling, because it controls the rate at which ocean reservoirs of heat and carbon communicate with the surface. © 2012 Macmillan Publishers Limited. All rights reserved.

Rose B.E.J.,University of Washington | Ferreira D.,Massachusetts Institute of Technology
Journal of Climate | Year: 2013

The authors study the role of ocean heat transport (OHT) in the maintenance of a warm, equable, ice-free climate. An ensemble of idealized aquaplanet GCM calculations is used to assess the equilibrium sensitivity of global mean surface temperature (T) and its equator-to-pole gradient (ΔT) to variations in OHT, prescribed through a simple analytical formula representing export out of the tropics and poleward convergence. Low-latitude OHT warms the mid- to high latitudes without cooling the tropics; T increases by 1°C and ΔT decreases by 2.6°C for every 0.5-PW increase in OHT across 30° latitude. This warming is relatively insensitive to the detailed meridional structure of OHT. It occurs in spite of near-perfect atmospheric compensation of large imposed variations in OHT: the total poleward heat transport is nearly fixed. The warming results from a convective adjustment of the extratropical troposphere. Increased OHT drives a shift from large-scale to convective precipitation in the midlatitude storm tracks. Warming arises primarily from enhanced greenhouse trapping associated with convective moistening of the upper troposphere. Warming extends to the poles by atmospheric processes even in the absence of high-latitude OHT. A new conceptual model for equable climates is proposed, in which OHT plays a key role by driving enhanced deep convection in the midlatitude storm tracks. In this view, the climatic impact of OHT depends on its effects on the greenhouse properties of the atmosphere, rather than its ability to increase the total poleward energy transport. © 2013 American Meteorological Society.

Wang C.,Massachusetts Institute of Technology
Atmospheric Research | Year: 2013

The climate impact of anthropogenic absorbing aerosols has attracted wide attention recently. The unique forcing distribution of these aerosols displays, instantaneously and in the solar band, significant heating of the atmosphere and cooling of comparable but smaller magnitude at the Earth's surface, leading to a positive net forcing of the Earth-atmosphere system, i.e., the forcing at the top of the atmosphere, which brings a warming tendency to the climate system. On the other hand, the atmospheric heating and surface cooling introduced by these aerosols have been demonstrated to be able to interact with dynamical processes on various scales to alter atmospheric circulation, and hence clouds and precipitation. Recent studies have suggested that the changes in precipitation caused by persistent forcing of anthropogenic absorbing aerosols through certain dynamical interactions, often appearing distant from the aerosol-laden regions, are likely more significant than those caused through the aerosol-cloud microphysical connection confined locally to the aerosol-concentrated areas. An active research field is forming to understand the changes in cloud and precipitation caused by anthropogenic absorbing aerosols through various dynamical linkages. This review discusses several recent findings regarding the effect of anthropogenic absorbing aerosols on cloud and precipitation, with an emphasis on work related to the coupling between aerosol forcing and dynamical processes. © 2012 Elsevier B.V.

Goemans M.X.,Massachusetts Institute of Technology
Mathematical Programming | Year: 2015

In this note, we consider the permutahedron, the convex hull of all permutations of {1, 2, …, n}. We show how to obtain an extended formulation for this polytope from any sorting network. By using the optimal Ajtai–Komlós–Szemerédi sorting network, this extended formulation has Θ(n log n) variables and inequalities. Furthermore, from basic polyhedral arguments, we show that this is best possible (up to a multiplicative constant) since any extended formulation has at least Ω(n log n) inequalities. The results easily extend to the generalized permutahedron. © 2014, Springer-Verlag Berlin Heidelberg and Mathematical Optimization Society.
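The size of the extended formulation tracks the comparator count of the chosen sorting network. As a sketch of that correspondence, using a simple odd-even transposition network with Θ(n²) comparators (rather than the Θ(n log n) AKS network, which is impractical to write out), one can enumerate the comparators and verify the network sorts via the 0-1 principle:

```python
from itertools import product

def odd_even_transposition_network(n):
    """Comparators (i, j) with i < j of a simple n-round sorting network.

    Uses n * (n - 1) / 2 comparators, so the induced extended formulation
    has Theta(n^2) size; the AKS network used in the paper brings this
    down to Theta(n log n).
    """
    comparators = []
    for rnd in range(n):
        for i in range(rnd % 2, n - 1, 2):
            comparators.append((i, i + 1))
    return comparators

def run_network(comparators, seq):
    """Apply each comparator in order, swapping out-of-order pairs."""
    a = list(seq)
    for i, j in comparators:
        if a[i] > a[j]:
            a[i], a[j] = a[j], a[i]
    return a

# 0-1 principle: a comparator network sorts all inputs iff it sorts
# every 0/1 input.
net = odd_even_transposition_network(6)
assert len(net) == 6 * 5 // 2
assert all(run_network(net, bits) == sorted(bits)
           for bits in product((0, 1), repeat=6))
```

Each comparator contributes a constant number of variables and inequalities to the extended formulation, which is why the optimal network size directly determines the Θ(n log n) bound.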

Cheung W.C.,Massachusetts Institute of Technology | Quek T.Q.S.,Institute for Infocomm Research | Kountouris M.,Supelec
IEEE Journal on Selected Areas in Communications | Year: 2012

The deployment of femtocells in a macrocell network is an economical and effective way to increase network capacity and coverage. Nevertheless, such deployment is challenging due to the presence of inter-tier and intra-tier interference, and the ad hoc operation of femtocells. Motivated by the flexible subchannel allocation capability of OFDMA, we investigate the effect of spectrum allocation in two-tier networks, where the macrocells employ closed access policy and the femtocells can operate in either open or closed access. By introducing a tractable model, we derive the success probability for each tier under different spectrum allocation and femtocell access policies. In particular, we consider joint subchannel allocation, in which the whole spectrum is shared by both tiers, as well as disjoint subchannel allocation, whereby disjoint sets of subchannels are assigned to both tiers. We formulate the throughput maximization problem subject to quality of service constraints in terms of success probabilities and per-tier minimum rates, and provide insights into the optimal spectrum allocation. Our results indicate that with closed access femtocells, the optimized joint and disjoint subchannel allocations provide the highest throughput among all schemes in sparse and dense femtocell networks, respectively. With open access femtocells, the optimized joint subchannel allocation provides the highest possible throughput for all femtocell densities. © 2006 IEEE.

Gilbert W.V.,Massachusetts Institute of Technology
Trends in Biochemical Sciences | Year: 2011

Ribosomes are highly conserved macromolecular machines that are responsible for protein synthesis in all living organisms. Work published in the past year has shown that changes to the ribosome core can affect the mechanism of translation initiation that is favored in the cell, which potentially leads to specific changes in the relative efficiencies with which different proteins are made. Here, I examine recent data from expression and proteomic studies that suggest that cells make slightly different ribosomes under different growth conditions, and discuss genetic evidence that such differences are functional. In particular, I argue that eukaryotic cells probably produce ribosomes that lack one or more core ribosomal proteins (RPs) under some conditions, and that core RPs contribute differentially to translation of distinct subpopulations of mRNAs. © 2010 Elsevier Ltd.

Aaronson S.,Massachusetts Institute of Technology
Proceedings of the Annual ACM Symposium on Theory of Computing | Year: 2010

The relationship between BQP and PH has been an open problem since the earliest days of quantum computing. We present evidence that quantum computers can solve problems outside the entire polynomial hierarchy, by relating this question to topics in circuit complexity, pseudorandomness, and Fourier analysis. First, we show that there exists an oracle relation problem (i.e., a problem with many valid outputs) that is solvable in BQP, but not in PH. This also yields a non-oracle relation problem that is solvable in quantum logarithmic time, but not in AC0. Second, we show that an oracle decision problem separating BQP from PH would follow from the Generalized Linial-Nisan Conjecture, which we formulate here and which is likely of independent interest. The original Linial-Nisan Conjecture (about pseudorandomness against constant-depth circuits) was recently proved by Braverman, after being open for twenty years. © 2010 ACM.

Madry A.,Massachusetts Institute of Technology
Proceedings - Annual IEEE Symposium on Foundations of Computer Science, FOCS | Year: 2010

We present a general method of designing fast approximation algorithms for cut-based minimization problems in undirected graphs. In particular, we develop a technique that, given any such problem that can be approximated quickly on trees, allows approximating it almost as quickly on general graphs while only losing a poly-logarithmic factor in the approximation guarantee. To illustrate the applicability of our paradigm, we focus our attention on the undirected sparsest cut problem with general demands and the balanced separator problem. By a simple use of our framework, we obtain poly-logarithmic approximation algorithms for these problems that run in time close to linear. The main tool behind our result is an efficient procedure that decomposes general graphs into simpler ones while approximately preserving the cut-flow structure. This decomposition is inspired by the cut-based graph decomposition of Räcke that was developed in the context of oblivious routing schemes, as well as by the construction of ultrasparsifiers due to Spielman and Teng that was employed for preconditioning symmetric diagonally dominant matrices. © 2010 IEEE.

Kim D.H.,Massachusetts Institute of Technology
Current Biology | Year: 2015

A recent study identifies a role for a Toll pathway with likely non-canonical features in the developmental specification of BAG neurons in Caenorhabditis elegans. These neurons function to sense carbon dioxide, which is shown to facilitate avoidance of pathogenic bacteria. © 2015 Elsevier Ltd. All rights reserved.

Hong S.,Massachusetts Institute of Technology
Frontiers in Human Neuroscience | Year: 2013

There is a growing number of roles that midbrain dopamine (DA) neurons assume, such as reward, aversion, alerting, and vigor. Here I propose a theory that may be able to explain why the suggested functions of DA came about. It has been suggested that largely parallel cortico-basal ganglia-thalamo-cortical loops exist to control different aspects of behavior. I propose that (1) the midbrain DA system is organized in a similar manner, with different groups of DA neurons corresponding to these parallel neural pathways (NPs). The DA system can be viewed as the "manager" of these parallel NPs in that it recruits and activates only the task-relevant NPs when they are needed. It is likely that the functions of those NPs that have been consistently activated by the corresponding DA groups are facilitated. I also propose that (2) there are two levels of DA roles: the How and What roles. The How role is encoded in tonic and phasic DA neuron firing patterns and gives a directive to its target NP: how vigorously its function needs to be carried out. The tonic DA firing is to provide the needed level of DA in the target NPs to support their expected behavioral and mental functions; it is only when a sudden unexpected boost or suppression of activity is required by the relevant target NP that DA neurons in the corresponding NP act in a phasic manner. The What role is the implementational aspect of the role of DA in the target NP, such as binding to D1 receptors to boost working memory. This What aspect of DA explains why DA seems to assume different functions depending on the region of the brain in which it is involved. In terms of the role of the lateral habenula (LHb), the LHb is expected to suppress maladaptive behaviors and mental processes by controlling the DA system. The demand-based smart management by the DA system may have given animals an edge in evolution with adaptive behaviors and a better survival rate in resource-scarce situations. © 2013 Hong.

Madry A.,Massachusetts Institute of Technology
Proceedings of the Annual ACM Symposium on Theory of Computing | Year: 2010

We combine the work of Garg and Könemann, and Fleischer with ideas from dynamic graph algorithms to obtain faster (1-ε)-approximation schemes for various versions of the multicommodity flow problem. In particular, if ε is moderately small and the size of every number used in the input instance is polynomially bounded, the running times of our algorithms match - up to poly-logarithmic factors and some provably optimal terms - the Ω(mn) flow-decomposition barrier for single-commodity flow. © 2010 ACM.

Bull J.A.,Imperial College London | Mousseau J.J.,Massachusetts Institute of Technology | Pelletier G.,University of Montreal | Charette A.B.,University of Montreal
Chemical Reviews | Year: 2012

A study was conducted to demonstrate the synthesis of pyridine and dihydropyridine derivatives by regio- and stereoselective addition to N-activated pyridines. The study examined the functionalization of N-activated pyridinium species by the addition of nucleophiles and electrophiles, along with transition-metal-mediated functionalization. The investigations emphasized the bimolecular addition of nucleophilic and electrophilic reagents directly to the ring of N-activated pyridinium species. It was observed that mixtures of substituted 1,2- and 1,4-dihydropyridines were frequently obtained upon the addition of an organometallic nucleophile to the ring, rather than addition at the 1-acyl carbon. The regioselectivity of addition to pyridines activated by methyl chloroformate was also investigated by other researchers, who compared the nature of the nucleophile.

Cima M.J.,Massachusetts Institute of Technology
Annual Review of Chemical and Biomolecular Engineering | Year: 2011

Medical technologies are evolving at a very rapid pace. Portable communication devices and other handheld electronics are influencing our expectations of future medical tools. The advanced medical technologies of our future will not necessarily be large expensive systems. They are just as likely to be small and disposable. This paper reviews how microsystems are already impacting health care as commercial products or in clinical development. Example systems for point-of-care testing (POCT), patient monitoring tools, systemic drug delivery, local drug delivery, and surgical tools are described. These technologies are moving care from hospitals to outpatient settings, physicians' offices, community health centers, nursing homes, and the patients' homes. Microsystems that are rapidly adopted fulfill significant medical needs and are compatible with existing clinical practice. © Copyright 2011 by Annual Reviews. All rights reserved.

Park D.S.,Massachusetts Institute of Technology
Journal of High Energy Physics | Year: 2012

Six-dimensional supergravity theories with N = (1, 0) supersymmetry must satisfy anomaly equations. These equations come from demanding the cancellation of gravitational, gauge and mixed anomalies. The anomaly equations have implications for the geometrical data of Calabi-Yau threefolds, since F-theory compactified on an elliptically fibered Calabi-Yau threefold with a section generates a consistent six-dimensional N = (1, 0) supergravity theory. In this paper, we show that the anomaly equations can be summarized by three intersection theory identities. In the process we also identify the geometric counterpart of the anomaly coefficients - in particular, those of the abelian gauge groups - that govern the low-energy dynamics of the theory. We discuss the results in the context of investigating string universality in six dimensions. © SISSA 2012.
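One of the constraints summarized by the paper is the purely gravitational anomaly condition, which in 6D N = (1, 0) supergravity reads H − V + 29T = 273 for the numbers of hyper-, vector, and tensor multiplets. As a small illustrative check (the condition is standard background, quoted here rather than taken from the paper's intersection-theory identities), a generic F-theory compactification over P², with h²¹ = 272 and hence 273 neutral hypermultiplets and no gauge or tensor sector, satisfies it:

```python
def grav_anomaly_satisfied(n_hyper, n_vector, n_tensor):
    """Standard 6D N=(1,0) gravitational anomaly condition: H - V + 29*T = 273."""
    return n_hyper - n_vector + 29 * n_tensor == 273

# Generic elliptically fibered Calabi-Yau threefold over P^2:
# h^{2,1} = 272 complex structure moduli give H = 272 + 1 = 273
# neutral hypermultiplets, with V = T = 0.
assert grav_anomaly_satisfied(273, 0, 0)
```

The gauge and mixed anomaly conditions impose further relations on the anomaly coefficients, which the paper identifies with intersection-theoretic data of the Calabi-Yau threefold.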

Taylor W.,Massachusetts Institute of Technology
Journal of High Energy Physics | Year: 2012

The Hodge numbers of generic elliptically fibered Calabi-Yau threefolds over toric base surfaces fill out the "shield" structure previously identified by Kreuzer and Skarke. The connectivity structure of these spaces and bounds on the Hodge numbers are illuminated by considerations from F-theory and the minimal model program. In particular, there is a rigorous bound on the Hodge number h21 ≤ 491 for any elliptically fibered Calabi-Yau threefold. The threefolds with the largest known Hodge numbers are associated with a sequence of blow-ups of toric bases beginning with the Hirzebruch surface F12 and ending with the toric base for the F-theory model with largest known gauge group. © SISSA 2012.