
The California Institute of Technology is a private research university located in Pasadena, California, United States. Caltech has six academic divisions with a strong emphasis on science and engineering. Its 124-acre primary campus is located approximately 11 mi northeast of downtown Los Angeles. Although founded as a preparatory and vocational school by Amos G. Throop in 1891, the college attracted influential scientists such as George Ellery Hale, Arthur Amos Noyes, and Robert Andrews Millikan in the early 20th century. The vocational and preparatory schools were disbanded and spun off in 1910, and the college assumed its present name in 1921. In 1934, Caltech was elected to the Association of American Universities, and the antecedents of NASA's Jet Propulsion Laboratory, which Caltech continues to manage and operate, were established between 1936 and 1943 under Theodore von Kármán. The university is one of a small group of institutes of technology in the United States primarily devoted to the instruction of technical arts and applied science. Despite its small size, 33 Caltech alumni and faculty have won a total of 34 Nobel Prizes, and 71 have won the United States National Medal of Science or Technology. There are 112 faculty members who have been elected to the National Academies, and numerous faculty members are associated with the Howard Hughes Medical Institute as well as NASA. Caltech managed $332 million in sponsored research in 2011 and held a $1.85 billion endowment in 2013. It also has a long-standing rivalry with the Massachusetts Institute of Technology. First-year students are required to live on campus, and 95% of undergraduates remain in the on-campus house system. Although Caltech has a strong tradition of practical jokes and pranks, student life is governed by an honor code which allows faculty to assign take-home examinations. 
The Caltech Beavers compete in 13 intercollegiate sports in the NCAA Division III's Southern California Intercollegiate Athletic Conference. (Wikipedia)


Piro A.L.,California Institute of Technology | Nakar E.,Tel Aviv University
Astrophysical Journal | Year: 2014

Ongoing transient surveys are presenting an unprecedented account of the rising light curves of Type Ia supernovae (SNe Ia). This early emission probes the shallowest layers of the exploding white dwarf (WD), which can provide constraints on the progenitor star and the properties of the explosive burning. We use semianalytic models of radioactively powered rising light curves to analyze these observations. As we have summarized in previous work, the main limiting factor in determining the surface distribution of 56Ni is the lack of an unambiguously identified time of explosion, as would be provided by detection of shock breakout or shock-heated cooling. Without this the SN may in principle exhibit a "dark phase" for a few hours to days, where the only emission is from shock-heated cooling that is too dim to be detected. We show that by assuming a theoretically motivated time-dependent velocity evolution, the explosion time can be better constrained, albeit with potential systematic uncertainties. This technique is used to infer the surface 56Ni distributions of three recent SNe Ia that were caught especially early in their rise. In all three we find fairly similar 56Ni distributions. Observations of SN 2011fe and SN 2012cg probe shallower depths than SN 2009ig, and in these two cases 56Ni is present within a mere ~10^-2 M⊙ of the WDs' surfaces. The uncertainty in this result is up to an order of magnitude given the difficulty of precisely constraining the explosion time. We also use our conclusions about the explosion times to reassess radius constraints for the progenitor of SN 2011fe, as well as discuss the roughly t^2 power law that is inferred for many observed rising light curves. © 2014. The American Astronomical Society. All rights reserved. Source
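The roughly t^2 rise discussed above also suggests how an explosion time can be estimated directly from early photometry. Below is a minimal sketch, not the authors' code: the data, noise level, and normalization are invented for illustration, and a simple power law L = C (t - t_exp)^2 is fit to synthetic early-time luminosities.

```python
import numpy as np
from scipy.optimize import curve_fit

def rising_lc(t, t_exp, c):
    """Power-law rise L = c * (t - t_exp)^2, zero before the explosion time."""
    dt = np.clip(t - t_exp, 0.0, None)
    return c * dt ** 2

# Synthetic early-time luminosities (arbitrary units), true explosion at t = 1.0 d
rng = np.random.default_rng(0)
t_obs = np.linspace(2.0, 6.0, 20)                      # days
lum = rising_lc(t_obs, 1.0, 3.0) * (1 + 0.02 * rng.standard_normal(20))

popt, pcov = curve_fit(rising_lc, t_obs, lum, p0=[0.5, 1.0])
t_exp_fit, c_fit = popt
```

The covariance between t_exp and the normalization in such fits is one illustration of why precisely constraining the explosion time is difficult.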


Kanamori H.,California Institute of Technology
Annual Review of Earth and Planetary Sciences | Year: 2014

With the advent of broadband seismology and GPS, significant diversity in the source radiation spectra of large earthquakes has been clearly demonstrated. This diversity requires different approaches to mitigate hazards. In certain tectonic environments, seismologists can forecast the future occurrence of large earthquakes within a solid scientific framework using the results from seismology and GPS. Such forecasts are critically important for long-term hazard mitigation practices, but because stochastic fracture processes are complex, the forecasts are inevitably subject to large uncertainty, and unexpected events will continue to surprise seismologists. Recent developments in real-time seismology will help seismologists to cope with and prepare for tsunamis and earthquakes. Combining a better understanding of earthquake diversity with modern technology is the key to effective and comprehensive hazard mitigation practices. © 2014 by Annual Reviews. All rights reserved. Source


Szilagyi B.,California Institute of Technology
International Journal of Modern Physics D | Year: 2014

As a network of advanced-era gravitational wave detectors is nearing its design sensitivity, efficient and accurate waveform modeling becomes more and more relevant. Understanding of the nature of the signal being sought can have an order unity effect on the event rates seen in these instruments. The paper provides a description of key elements of the Spectral Einstein Code (SpEC), with details of our spectral adaptive mesh refinement (AMR) algorithm that has been optimized for binary black hole (BBH) evolutions. We expect that the gravitational waveform catalog produced by our code will have a central importance in both the detection and parameter estimation of gravitational waves in these instruments. © 2014 World Scientific Publishing Company. Source


Metzger B.D.,Columbia University | Piro A.L.,California Institute of Technology
Monthly Notices of the Royal Astronomical Society | Year: 2014

The coalescence of binary neutron stars (NSs) may in some cases produce a stable massive NS remnant rather than a black hole. Due to the substantial angular momentum from the binary, such a remnant is born rapidly rotating and likely acquires a strong magnetic field (a 'millisecond magnetar'). Magnetic spin-down deposits a large fraction of the rotational energy from the magnetar behind the small quantity of mass ejected during the merger. If the magnetar outflow is indeed trapped behind the ejecta (instead of placing most of its energy into a collimated jet), this has the potential for creating a bright transient that could be useful for determining whether an NS or black hole was formed in the merger. We investigate the expected signature of such an event, including for the first time the important impact of e± pairs injected by the millisecond magnetar into the surrounding nebula. These pairs cool via synchrotron and inverse Compton emission, producing a pair cascade and hard X-ray spectrum. A fraction of these X-rays are absorbed by the ejecta walls and re-emitted as thermal radiation, leading to an optical/UV transient peaking at a luminosity of ~10^43-10^44 erg s^-1 on a time-scale of several hours to days. This is dimmer than predicted by simpler analytic models because the large optical depth of e± pairs across the nebula suppresses the efficiency with which the magnetar spin-down luminosity is thermalized. Nevertheless, the optical/UV emission is more than two orders of magnitude brighter than a radioactively powered 'kilonova'. In some cases, nebular X-rays are sufficiently luminous to re-ionize the ejecta, in which case nonthermal X-rays escape the ejecta unattenuated with a similar peak luminosity and time-scale as the optical radiation. We discuss the implications of our results for the temporally extended X-ray emission that is observed to follow some short gamma-ray bursts (GRBs), including the kilonova candidates GRB 080503 and GRB 130603B. 
© 2014 The Authors. Published by Oxford University Press on behalf of the Royal Astronomical Society. Source


Herczeg G.J.,Peking University | Hillenbrand L.A.,California Institute of Technology
Astrophysical Journal | Year: 2015

Absolute ages of young stars are important for many issues in pre-main-sequence stellar and circumstellar evolution but have long been recognized as difficult to derive and calibrate. In this paper, we use literature spectral types and photometry to construct empirical isochrones in Hertzsprung-Russell diagrams for low mass stars and brown dwarfs in the η Cha, ε Cha, and TW Hya Associations and the β Pic and Tuc-Hor Moving Groups. A successful theory of pre-main-sequence evolution should match the shapes of the stellar loci for these groups of young stars. However, when comparing the combined empirical isochrones to isochrones predicted from evolutionary models, discrepancies lead to a spectral type (mass) dependence in stellar age estimates. Improved prescriptions for convection and boundary conditions in the latest models of pre-main-sequence evolution lead to a significantly improved correspondence between empirical and model isochrones, with small offsets at low temperatures that may be explained by observational uncertainties or by model limitations. Independent of model predictions, linear fits to combined stellar loci of these regions provide a simple empirical method to order clusters by luminosity with a reduced dependence on spectral type. Age estimates calculated from various sets of modern models that reproduce Li depletion boundary ages of the β Pic Moving Group also imply a ∼4 Myr age for the low mass members of the Upper Sco OB Association, which is younger than the 11 Myr age that has been recently estimated for intermediate and high mass members. © 2015. The American Astronomical Society. All rights reserved. Source


A widespread phenomenon in nature is sex ratio distortion of arthropod populations caused by microbial and genetic parasites. Currently little is known about how these agents alter host developmental processes to favor one sex or the other. The paternal sex ratio (PSR) chromosome is a nonessential, paternally transmitted centric fragment that segregates in natural populations of the jewel wasp, Nasonia vitripennis. To persist, PSR is thought to modify the hereditary material of the developing sperm, with the result that all nuclear DNA other than the PSR chromosome is destroyed shortly after fertilization. This results in the conversion of a fertilized embryo, normally a female, into a male, thereby ensuring transmission of the "selfish" PSR chromosome, and simultaneously leading to wasp populations that are male-biased. To begin to understand this system at the mechanistic level, we carried out transcriptional profiling of testes from wild-type (WT) and PSR-carrying males. We identified a number of transcripts that are differentially expressed between these conditions. We also discovered nine transcripts that are uniquely expressed from the PSR chromosome. Four of these PSR-specific transcripts encode putative proteins, whereas the others have very short open reading frames and no homology to known proteins, suggesting that they are long noncoding RNAs. We propose several different models for how these transcripts could facilitate PSR-dependent effects. Our analyses also revealed 15.71 Mb of novel transcribed regions in the N. vitripennis genome, thus increasing the current annotation of total transcribed regions by 53.4%. Finally, we detected expression of multiple meiosis-related genes in the wasp testis, despite the lack of conventional meiosis in the male sex. Source


Akbari O.S.,California Institute of Technology
G3 (Bethesda, Md.) | Year: 2013

Mosquitoes are vectors of a number of important human and animal diseases. The development of novel vector control strategies requires a thorough understanding of mosquito biology. To facilitate this, we used RNA-seq to identify novel genes and provide the first high-resolution view of the transcriptome throughout development and in response to blood feeding in a mosquito vector of human disease, Aedes aegypti, the primary vector for dengue and yellow fever. We characterized mRNA expression at 34 distinct time points throughout Aedes development, including adult somatic and germline tissues, by using polyA+ RNA-seq. We identify a total of 14,238 novel transcribed regions corresponding to 12,597 new loci, as well as many novel transcript isoforms of previously annotated genes. Altogether these results increase the annotated fraction of the transcribed genome into long polyA+ RNAs by more than twofold. We also identified a number of patterns of shared gene expression, as well as genes and/or exons expressed sex-specifically or sex-differentially. Expression profiles of small RNAs in ovaries, early embryos, testes, and adult male and female somatic tissues also were determined, resulting in the identification of 38 new Aedes-specific miRNAs, and ~291,000 small RNA new transcribed regions, many of which are likely to be endogenous small-interfering RNAs and Piwi-interacting RNAs. Genes of potential interest for transgene-based vector control strategies also are highlighted. Our data have been incorporated into a user-friendly genome browser located at www.Aedes.caltech.edu, with relevant links to Vectorbase (www.vectorbase.org). Source


Echenard B.,California Institute of Technology
Advances in High Energy Physics | Year: 2012

Many extensions of the Standard Model include the possibility of light new particles, such as light Higgs bosons or dark matter candidates. These scenarios can be probed using the large datasets collected by B factories, complementing measurements performed at the LHC. This paper summarizes recent searches for light new physics conducted by the BABAR and Belle experiments. © 2012 Bertrand Echenard. Source


Kasliwal M.M.,Carnegie Institution for Science | Nissanke S.,California Institute of Technology
Astrophysical Journal Letters | Year: 2014

We present the first simulation addressing the prospects of finding an electromagnetic (EM) counterpart to gravitational wave (GW) detections during the early years of only two advanced detectors. The perils of such a search may have appeared insurmountable when considering the coarse ring-shaped GW localizations spanning thousands of square degrees using time-of-arrival information alone. Leveraging the amplitude and phase information of the predicted GW signal narrows the localization to arcs with a median area of only a few hundred square degrees, thereby making an EM search tractable. Based on the locations and orientations of the two LIGO detectors, we find that the GW sensitivity is limited to only two of the four sky quadrants. Thus, the rate of GW events with two interferometers is only ≈40% of the rate with three interferometers of similar sensitivity. Another important implication of the sky quadrant bias is that EM observatories in North America and Southern Africa would be able to systematically respond to GW triggers several hours sooner than Russia and Chile. Given the larger sky areas and the relative proximity of detected mergers, 1 m class telescopes with very wide-field cameras are well-positioned for the challenge of finding an EM counterpart. Identification of the EM counterpart amidst the larger numbers of false positives further underscores the importance of building a comprehensive catalog of foreground stellar sources, background active galactic nuclei, and potential host galaxies in the local universe. This initial study is based on a small sample of 17 detected mergers; future works will expand this sample. © 2014. The American Astronomical Society. All rights reserved. Source


Patterson R.B.,California Institute of Technology
Nuclear Physics B - Proceedings Supplements | Year: 2013

The NOvA long-baseline neutrino oscillation experiment is currently under construction and will use an upgraded NuMI neutrino source at Fermilab and a 14-kton detector at Ash River, Minnesota to explore the neutrino sector. NOvA uses a highly active, finely segmented detector design that offers superb event identification capability, allowing precision measurements of νe/ν̄e appearance and νμ/ν̄μ disappearance, through which NOvA will provide constraints on θ13, θ23, |Δm²atm|, the neutrino mass hierarchy, and the CP-violating phase δ. In this article, we review NOvA's uniquely broad physics scope, including sensitivity updates in light of the latest knowledge of θ13, and we discuss the experiment's construction and operation timeline. © 2013. Source


Robertson B.E.,California Institute of Technology
Astrophysical Journal Letters | Year: 2010

As deeper observations discover increasingly distant galaxies, characterizing the properties of high-redshift galaxy populations will become increasingly challenging and paramount. We present a method for measuring the clustering bias of high-redshift galaxies from the field-to-field scatter in their number densities induced by cosmic variance. Multiple widely separated fields are observed to provide a large number of statistically independent samples of the high-redshift galaxy population. The expected Poisson uncertainty is removed from the measured dispersion in the distribution of galaxy number counts observed across these many fields, leaving, on average, only the contribution to the scatter expected from cosmic variance. With knowledge of the Λ-cold dark matter power spectrum, the galaxy bias is then calculated from the measured cosmic variance. The results of cosmological N-body simulations can then be used to estimate the halo mass associated with the measured bias. We use Monte Carlo simulations to demonstrate that Hubble Space Telescope pure parallel programs will be able to determine galaxy bias at z ≳ 6 using this method, complementing future measurements from correlation functions. © 2010. The American Astronomical Society. All rights reserved. Source
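The estimator described above can be sketched compactly: the observed field-to-field variance, minus the expected Poisson contribution, is attributed to cosmic variance and converted to a bias. The toy Monte Carlo below is illustrative only (sigma_dm stands in for the dark matter fluctuation amplitude on the field scale, and the Gaussian overdensity model is an assumption, not the paper's pipeline); it checks that a known input bias is recovered.

```python
import numpy as np

def galaxy_bias(counts, sigma_dm):
    """Estimate clustering bias from field-to-field count scatter.
    Model: Var(N) = <N> (Poisson) + b^2 * sigma_dm^2 * <N>^2 (cosmic variance)."""
    nbar = counts.mean()
    excess = counts.var(ddof=1) - nbar        # remove expected Poisson variance
    if excess <= 0:
        return 0.0                            # scatter consistent with pure Poisson
    return np.sqrt(excess) / (nbar * sigma_dm)

# Monte Carlo check: draw per-field counts with a known input bias
rng = np.random.default_rng(1)
b_true, sigma_dm, nbar = 3.0, 0.05, 200.0
delta = b_true * sigma_dm * rng.standard_normal(20000)   # galaxy overdensity per field
lam = np.clip(nbar * (1.0 + delta), 0.0, None)
counts = rng.poisson(lam).astype(float)
b_est = galaxy_bias(counts, sigma_dm)
```

With many independent fields the input bias is recovered to a few percent; with the tens of fields available in practice, the scatter on the estimate is correspondingly larger.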


Lavaux G.,University of Illinois at Urbana - Champaign | Wandelt B.D.,California Institute of Technology
Monthly Notices of the Royal Astronomical Society | Year: 2010

We propose a new definition of cosmic voids based on methods of Lagrangian orbit reconstruction as well as an algorithm to find them in actual data called DynamIcal Void Analysis. Our technique is intended to yield results which can be modelled sufficiently accurately to create a new probe of precision cosmology. We then develop an analytical model of the ellipticity of voids found by our method based on the Zel'dovich approximation. We measure in N-body simulation that this model is precise at the ∼0.1 per cent level for the mean ellipticity of voids of size greater than ∼4 h-1 Mpc. We estimate that at this scale we are able to predict the ellipticity with an accuracy of σ_e ∼ 0.02. Finally, we compare the distribution of void shapes in N-body simulation for two different equations of state w of the dark energy. We conclude that our method is far more accurate than Eulerian methods and is therefore promising as a precision probe of dark energy phenomenology. © 2010 The Authors. Journal compilation © 2010 RAS. Source


Dahl T.W.,Niels Bohr Institute | Stevenson D.J.,California Institute of Technology
Earth and Planetary Science Letters | Year: 2010

In the current view of planet formation, the final assembly of the Earth involved giant collisions between proto-planets (>1000 km radius), with the Moon formed as a result of one such impact. At this stage the colliding bodies had likely differentiated into a metallic core surrounded by a silicate mantle. During the Moon-forming impact, nearly all metal sank into the Earth's core. We investigate to what extent large self-gravitating iron cores can mix with surrounding silicate and how this influences the short-lived chronometer, Hf-W, used to infer the age of the Moon. We present fluid dynamical models of turbulent mixing in fully liquid systems, attempting to place constraints on the degree of mixing. Erosion of sinking cores driven by Rayleigh-Taylor instability does lead to intimate mixing and equilibration, but large blobs (>10 km diameter) do not emulsify entirely. Emulsification is enhanced if most of the accreting metal cores deform into thin structures during descent through the Earth's mantle. Yet, only 1-20% of Earth's core would equilibrate with silicate during Earth's accretion. The initial speed of the impactor is of little importance. We proceed to evaluate the mixing potential for shear instabilities where silicate entrainment across vertical walls causes mixing. The turbulent structure indicates that vortices remain at the largest scale and do not mix to centimeter length scale, where diffusion operates and isotopes can equilibrate. Thus, incomplete emulsification and equilibration of accreting iron cores is likely to occur. The extent of metal-silicate equilibration provides key information for interpretation of siderophile budgets and the timing of core formation using the Hf-W chronometer. The time scale of core formation derived from the Hf-W chronometer is usually tied to the last major metal-silicate re-equilibration, believed to coincide with time of the Moon-forming impact. 
However, we show that large cores have limited ability to reset the Hf-W system in the silicate Earth. Excess 182W in bulk silicate Earth is more sensitive to early core formation processes than to radiogenic ingrowth after the last giant impact. © 2010 Elsevier B.V. Source


Dabiri J.O.,California Institute of Technology
Geophysical Research Letters | Year: 2010

Recent efforts to empirically measure and numerically simulate biogenic ocean mixing have consistently observed low mixing efficiency. This suggests that the buoyancy flux achieved by swimming animals in the ocean may be negligible in spite of the observed large kinetic energy dissipation rates. The present letter suggests that vertical migration across isopycnals may be necessary in order to generate overturning and subsequent mixing at length scales significantly larger than the individual animals. The animal-fluid interactions are simulated here using a simplified potential flow model of solid spheres migrating vertically in a stably stratified fluid. The interaction of successive solid bodies with each parcel of fluid is shown to lead, under certain conditions, to vertical displacement of the fluid parcels over distances much larger than the individual body size. These regions of displaced fluid are unstably stratified and, hence, amenable to large-scale overturning and efficient mixing. Copyright © 2010 by the American Geophysical Union. Source


Woodward J.,California Institute of Technology
Biology and Philosophy | Year: 2010

This paper attempts to elucidate three characteristics of causal relationships that are important in biological contexts. Stability has to do with whether a causal relationship continues to hold under changes in background conditions. Proportionality has to do with whether changes in the state of the cause "line up" in the right way with changes in the state of the effect and with whether the cause and effect are characterized in a way that contains irrelevant detail. Specificity is connected both to David Lewis' notion of "influence" and also with the extent to which a causal relation approximates to the ideal of one cause-one effect. Interrelations among these notions and their possible biological significance are also discussed. © 2010 Springer Science+Business Media B.V. Source


Stevenson M.A.,California Institute of Technology
Astrophysical Journal | Year: 2014

An analytical function for the spectrum of spinning dust emission is presented. It is derived through the application of careful approximations, with each step tested against numerical calculations. This approach ensures accuracy while providing an intuitive picture of the physics. The final result may be useful for fitting of anomalous microwave emission observations, as is demonstrated by a comparison with the Planck observations of the Perseus Molecular Cloud. It is hoped that this will lead to a broader consideration of the spinning dust model when interpreting microwave continuum observations, and that it will provide a standard framework for interpreting and comparing the variety of anomalous microwave emission observations. © 2014. The American Astronomical Society. All rights reserved. Source


Booth I.R.,University of Aberdeen | Booth I.R.,California Institute of Technology
Current Opinion in Microbiology | Year: 2014

Bacterial mechanosensitive channels sense the changes in lateral tension in the bilayer of the cytoplasmic membrane generated by rapid water flow into the cell. Two major structural families are found widely distributed across bacteria and archaea: MscL and MscS. Our understanding of the mechanisms of gating has advanced rapidly through genetic analysis, structural biology and electrophysiology. It is only recently that the analysis of the physiological roles of the channels has kept pace with mechanistic studies. Recent advances have increased our understanding of the role of the channels in preventing structural perturbation during osmotic transitions and its relationship to water flow across the membrane. It is to these recent developments that this review is dedicated. © 2014 The Author. Source


Kobilarov M.,California Institute of Technology
International Journal of Robotics Research | Year: 2012

This paper is concerned with motion planning for non-linear robotic systems operating in constrained environments. A method for computing high-quality trajectories is proposed building upon recent developments in sampling-based motion planning and stochastic optimization. The idea is to equip sampling-based methods with a probabilistic model that serves as a sampling distribution and to incrementally update the model during planning using data collected by the algorithm. At the core of the approach lies the cross-entropy method for the estimation of rare-event probabilities. The cross-entropy method is combined with recent optimal motion planning methods such as the rapidly exploring random trees (RRT*) in order to handle complex environments. The main goal is to provide a framework for consistent adaptive sampling that correlates the spatial structure of trajectories and their computed costs in order to improve the performance of existing planning methods. © 2012 The Author(s). Source
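The core loop of the cross-entropy method is simple enough to sketch. The toy below is illustrative only: a real planner would sample trajectory parameters and evaluate collision-aware costs within the RRT*-style framework described above, whereas here a bare vector is optimized against an invented quadratic cost. Each iteration fits the Gaussian sampling distribution to the elite (lowest-cost) samples.

```python
import numpy as np

def cross_entropy_minimize(cost, dim, iters=50, n=200, elite_frac=0.1, seed=0):
    """Cross-entropy method: iteratively refit a Gaussian sampling
    distribution to the lowest-cost ('elite') samples."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(dim), np.full(dim, 2.0)
    n_elite = max(1, int(n * elite_frac))
    for _ in range(iters):
        x = rng.normal(mu, sigma, size=(n, dim))          # sample candidates
        order = np.argsort([cost(xi) for xi in x])        # rank by cost
        elite = x[order[:n_elite]]
        mu = elite.mean(axis=0)                           # refit the distribution
        sigma = elite.std(axis=0) + 1e-6                  # floor avoids collapse
    return mu

# Toy "trajectory" cost: squared distance of waypoints to a target path
target = np.array([1.0, 2.0, 3.0])
cost = lambda w: float(np.sum((w - target) ** 2))
w_opt = cross_entropy_minimize(cost, dim=3)
```

The sampling distribution plays the role of the probabilistic model in the paper: it concentrates future samples in low-cost regions found by earlier ones.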


Adolphs R.,California Institute of Technology
Current Biology | Year: 2010

When asked "what is an emotion?" most people answer in one of three ways. One answer is to list the most salient attributes of emotions. The psychologist and philosopher William James, in an 1884 essay with the eponymous title of our question, causally linked two commonsense attributes. According to James, certain stimuli can trigger emotional bodily reactions, and our perception of those changes constitutes our conscious experience of emotions, feelings. We see a bear: our heart rate accelerates, our blood pressure shoots up, and many other bodily changes transpire. Our perception of those changes in our body constitutes our fear of the bear. More recent accounts propose neurobiological substrates involved in causing emotional reactions and perceiving the feelings, laying the foundation for conceiving of emotions as neural states. Modern emotion theories typically try to account for the observations that emotions are triggered by events of some significance or relevance to an organism, that they encompass a coordinated set of changes in brain and body, and that they appear adaptive in the sense that they are directed towards coping with whatever challenge was posed by the triggering event. Emotions also have an onset, a dynamic timecourse, and an offset or resolution; their phasic nature is one feature that distinguishes them from moods. Additional layers of complexity are added, especially in humans, through our capacity to control and regulate our emotions (at least to some extent), and to vicariously experience the emotions of other people through empathy, both of which are current major themes in emotion research. © 2010 Elsevier Ltd. Source


Nakar E.,Tel Aviv University | Piro A.L.,California Institute of Technology
Astrophysical Journal | Year: 2014

Early observations of supernova light curves are powerful tools for shedding light on the pre-explosion structures of their progenitors and their mass-loss histories just prior to explosion. Some core-collapse supernovae that are detected during the first days after the explosion prominently show two peaks in the optical bands, including the R and I bands, where the first peak appears to be powered by the cooling of shocked surface material and the second peak is clearly powered by radioactive decay. Such light curves have been explored in detail theoretically for SN 1993J and 2011dh, where it was found that they may be explained by progenitors with extended, low-mass envelopes. Here, we generalize these results. We first explore whether any double-peaked light curve of this type can be generated by a progenitor with a "standard" density profile, such as a red supergiant or a Wolf-Rayet star. We show that a standard progenitor (1) cannot produce a double-peaked light curve in the R and I bands and (2) cannot exhibit a fast drop in the bolometric luminosity as is seen after the first peak. We then explore the signature of a progenitor with a compact core surrounded by extended, low-mass material. This may be a hydrostatic low-mass envelope or material ejected just prior to the explosion. We show that it naturally produces both of these features. We use this result to provide simple formulae to estimate (1) the mass of the extended material from the time of the first peak, (2) the extended material radius from the luminosity of the first peak, and (3) an upper limit on the core radius from the luminosity minimum between the two peaks. © 2014. The American Astronomical Society. All rights reserved. Source
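The qualitative shape argued for above, a fast shock-cooling peak followed by a radioactively powered peak with a luminosity minimum between them, can be illustrated with a toy model. The functional forms and numbers below are invented purely for illustration and are not the paper's light-curve models.

```python
import numpy as np

# Illustrative-only components (arbitrary shapes and units):
# a fast-evolving shock-cooling peak plus a slower, broader radioactive peak.
t = np.linspace(0.2, 40.0, 500)                        # days since explosion
L_cool = 5.0 * t * np.exp(-(t / 2.0) ** 2)             # early peak, rapid drop
L_rad = 3.0 * np.exp(-((t - 18.0) / 8.0) ** 2)         # later radioactive peak
L = L_cool + L_rad

# Locate local maxima of the summed light curve
d = np.diff(L)
peaks = np.where((d[:-1] > 0) & (d[1:] < 0))[0] + 1
```

In the paper's framework, the epoch of the first peak, its luminosity, and the depth of the minimum between the peaks are what constrain the extended-material mass and radius and bound the core radius.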


Li N.,California Institute of Technology | Marden J.R.,University of Colorado at Boulder
IEEE Journal on Selected Topics in Signal Processing | Year: 2013

The central goal in multiagent systems is to design local control laws for the individual agents to ensure that the emergent global behavior is desirable with respect to a given system level objective. Ideally, a system designer seeks to satisfy this goal while conditioning each agent's control law on the least amount of information possible. This paper focuses on achieving this goal using the field of game theory. In particular, we derive a systematic methodology for designing local agent objective functions that guarantees (i) an equivalence between the resulting Nash equilibria and the optimizers of the system level objective and (ii) that the resulting game possesses an inherent structure that can be exploited in distributed learning, e.g., potential games. The control design can then be completed utilizing any distributed learning algorithm which guarantees convergence to a Nash equilibrium for the attained game structure. Furthermore, in many settings the resulting controllers will be inherently robust to a host of uncertainties including asynchronous clock rates, delays in information, and component failures. © 2013 IEEE. Source
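One concrete instance of such a design is the marginal-contribution ("wonderful life") utility, under which each agent is paid its marginal contribution to the system objective; the global welfare is then an exact potential for the resulting game, so equilibria align with the system-level optimum. The toy coverage example below is invented for illustration and shows best-response dynamics reaching the welfare-maximizing joint action.

```python
import numpy as np

# Toy coverage problem: two agents each pick one of three spots;
# welfare counts distinctly covered spots, weighted by value.
values = np.array([1.0, 2.0, 3.0])

def welfare(actions):
    return sum(values[s] for s in set(actions))

def marginal_utility(i, actions):
    """Wonderful-life utility: W(a) minus welfare with agent i removed.
    A change in agent i's action changes its utility by exactly the
    change in W, so W is a potential for the designed game."""
    others = actions[:i] + actions[i + 1:]
    return welfare(actions) - welfare(others)

# Best-response dynamics on the designed utilities
actions = [0, 0]
for _ in range(10):
    for i in range(2):
        actions[i] = max(
            range(3),
            key=lambda a: marginal_utility(i, actions[:i] + [a] + actions[i + 1:]),
        )
```

Because the welfare is a potential function, these best-response updates converge to a Nash equilibrium, here the joint action covering the two most valuable spots.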


Michener J.K.,California Institute of Technology | Smolke C.D.,Stanford University
Metabolic Engineering | Year: 2012

Metabolic engineering can produce a wide range of bulk and fine chemicals using renewable resources. These approaches frequently require high levels of activity from multiple heterologous enzymes. Directed evolution techniques have been used to improve the activity of a wide range of enzymes but can be difficult to apply when the enzyme is used in whole cells. To address this limitation, we developed generalizable in vivo biosensors using engineered RNA switches to link metabolite concentrations and GFP expression levels in living cells. Using such a sensor, we quantitatively screened large enzyme libraries in high throughput based on fluorescence, either in clonal cultures or in single cells by fluorescence activated cell sorting (FACS). By iteratively screening libraries of a caffeine demethylase, we identified beneficial mutations that ultimately increased the enzyme activity in vivo by 33-fold and the product selectivity by 22-fold. As aptamer selection strategies allow RNA switches to be readily adapted to recognize new small molecules, these RNA-based screening techniques are applicable to a broad range of enzymes and metabolic pathways. © 2012 Elsevier Inc. Source


Tsang D.,California Institute of Technology
Astrophysical Journal | Year: 2011

Waves reflected by the inner edge of a protoplanetary disk are shown to significantly modify Type I migration, even allowing the trapping of small planets near the inner disk edge for a range of disk parameters. This may inform the distribution of planets close to their central stars, as observed recently by the Kepler mission. © 2011. The American Astronomical Society. All rights reserved. Source


Minnich A.J.,California Institute of Technology
Physical Review Letters | Year: 2012

The mean free paths (MFPs) of thermal phonons are mostly unknown in many solids. Recent work indicates that MFPs may be measured using experimental observations of quasiballistic thermal transport, but the precise relationship between the measurements and the MFP distribution remains unclear. Here, we present a method that can accurately reconstruct the MFP distribution from quasiballistic thermal measurements without any assumptions regarding the phonon scattering mechanisms. Our result will enable a substantially improved understanding of thermal transport in many solids, particularly thermoelectrics. © 2012 American Physical Society. Source
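The reconstruction inverts a forward model in which each phonon channel's contribution to the apparent conductivity is suppressed once its MFP exceeds the measurement length scale. A toy sketch of such a forward model follows, with a hypothetical suppression function S(Kn) = 1/(1 + Kn) and made-up channel values; the paper's point is precisely that the true relationship can be handled without assuming a particular scattering model.

```python
# Toy forward model: apparent thermal conductivity vs. measurement length L.
# Each phonon "channel" carries a fraction of the bulk conductivity with mean
# free path mfp_i; channels with mfp_i >> L travel quasiballistically and are
# suppressed. S(Kn) = 1/(1 + Kn) is a stand-in form, not the paper's result.

CHANNELS = [(0.6, 50e-9), (0.3, 500e-9), (0.1, 5e-6)]  # (bulk fraction, MFP in m)
K_BULK = 150.0  # hypothetical bulk conductivity, W/(m K)

def apparent_conductivity(L):
    """Apparent conductivity at length scale L: bulk value times the
    suppression-weighted sum over the assumed MFP distribution."""
    return K_BULK * sum(f / (1.0 + mfp / L) for f, mfp in CHANNELS)

for L in (10e-6, 1e-6, 100e-9):
    print(L, round(apparent_conductivity(L), 1))  # shrinks as L decreases
```

Measuring this curve at several values of L and inverting it for the channel weights is the schematic content of the MFP-reconstruction problem.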


Jordan P.,University of Poitiers | Colonius T.,California Institute of Technology
Annual Review of Fluid Mechanics | Year: 2013

Turbulent jet noise is a controversial fluid mechanical puzzle that has amused and bewildered researchers for more than half a century. Whereas numerical simulations are now capable of simultaneously predicting turbulence and its radiated sound, the theoretical framework that would guide noise-control efforts is incomplete. Wave packets are intermittent, advecting disturbances that are correlated over distances far exceeding the integral scales of turbulence. Their signatures are readily distinguished in the vortical, turbulent region; the irrotational, evanescent near field; and the propagating far field. We review evidence of the existence, energetics, dynamics, and acoustic efficiency of wave packets. We highlight how extensive data available from simulations and modern measurement techniques can be used to distill acoustically relevant turbulent motions. The evidence supports theories that seek to represent wave packets as instability waves, or more general modal solutions of the governing equations, and confirms the acoustic importance of these structures in the aft-angle radiation of high subsonic and supersonic jets. The resulting unified view of wave packets provides insights that can help guide control strategies. Copyright © 2013 by Annual Reviews. All rights reserved. Source


Riechers D.A.,California Institute of Technology
Astrophysical Journal | Year: 2011

We report the detection of luminous CO(J=2→1), CO(J=3→2), and CO(J=4→3) emission in the strongly lensed high-redshift quasars B1938+666 (z=2.059), HE0230-2130 (z=2.166), HE1104-1805 (z=2.322), and B1359+154 (z=3.240), using the Combined Array for Research in Millimeter-wave Astronomy. B1938+666 was identified in a "blind" CO redshift search, demonstrating the feasibility of such investigations with millimeter interferometers. These galaxies are lensing-amplified by factors of μ_L ≃ 11-170, and thus allow us to probe the molecular gas in intrinsically fainter galaxies than currently possible without the aid of gravitational lensing. We report lensing-corrected intrinsic CO line luminosities of L′_CO = 0.65-21 × 10^9 K km s^-1 pc^2, translating to H2 masses of M(H2) = 0.52-17 × 10^9 (α_CO/0.8) M_⊙. To investigate whether or not the active galactic nucleus (AGN) in luminous quasars substantially contributes to L_FIR, we study the L′_CO-L_FIR relation for quasars relative to galaxies without a luminous AGN as a function of redshift. We find no substantial differences between submillimeter galaxies and high-z quasars, but marginal evidence for an excess in L_FIR in nearby low-L_FIR AGN galaxies. This may suggest that an AGN contribution to L_FIR is significant in systems with relatively low gas and dust content, but only minor in the most far-infrared-luminous galaxies (in which L_FIR is dominated by star formation). © 2011. The American Astronomical Society. All rights reserved. Source
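The quoted mass conversion is the linear scaling M(H2) = α_CO L′_CO with α_CO = 0.8 M_⊙ (K km s^-1 pc^2)^-1, applied after dividing the observed line luminosity by the lensing magnification. A minimal sketch (function names are ours):

```python
ALPHA_CO = 0.8  # M_sun per (K km/s pc^2), the value adopted in the abstract

def h2_mass(l_co_prime, alpha=ALPHA_CO):
    """Molecular gas mass from CO line luminosity: M(H2) = alpha_CO * L'_CO."""
    return alpha * l_co_prime

def intrinsic_luminosity(observed, mu_lens):
    """Correct an observed line luminosity for lensing magnification mu_lens."""
    return observed / mu_lens

# end-member intrinsic luminosities quoted in the abstract
print(h2_mass(0.65e9))  # ~0.52e9 M_sun, the low end quoted above
print(h2_mass(21e9))    # ~17e9 M_sun, the high end quoted above
```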


Flanagan E.E.,Cornell University | Hinderer T.,California Institute of Technology
Physical Review Letters | Year: 2012

We show that transient resonances occur in the two-body problem in general relativity for spinning black holes in close proximity to one another when one black hole is much more massive than the other. These resonances occur when the ratio of polar and radial orbital frequencies, which is slowly evolving under the influence of gravitational radiation reaction, passes through a low order rational number. At such points, the adiabatic approximation to the orbital evolution breaks down, and there is a brief but order unity correction to the inspiral rate. The resonances cause a perturbation to orbital phase of order a few tens of cycles for mass ratios ∼10^-6, make orbits more sensitive to changes in initial data (though not quite chaotic), and are genuine nonperturbative effects that are not seen at any order in a standard post-Newtonian expansion. Our results apply to an important potential source of gravitational waves, the gradual inspiral of white dwarfs, neutron stars, or black holes into much more massive black holes. The effects of these resonances will increase the computational challenge of accurately modeling these sources. © 2012 American Physical Society. Source
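The trigger condition can be illustrated with a toy scan for crossings of low-order rationals by a slowly drifting frequency ratio; the drift path and the denominator cutoff q ≤ 5 below are hypothetical choices, not taken from the paper.

```python
from fractions import Fraction

def low_order_crossings(ratio_path, max_denominator=5):
    """Flag the low-order rationals p/q (q <= max_denominator) crossed by a
    slowly drifting frequency ratio -- the condition for a transient resonance."""
    rationals = sorted({Fraction(p, q)
                        for q in range(1, max_denominator + 1)
                        for p in range(1, 3 * q)})
    hits = []
    for a, b in zip(ratio_path, ratio_path[1:]):
        lo, hi = min(a, b), max(a, b)
        hits += [r for r in rationals if lo < r <= hi]
    return hits

# ratio of polar to radial frequency drifting under radiation reaction (toy path)
path = [1.45 + 0.01 * k for k in range(16)]  # drifts from 1.45 to 1.60
print(low_order_crossings(path))  # crosses 3/2, among other low-order rationals
```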


Jeong D.,Johns Hopkins University | Kamionkowski M.,Johns Hopkins University | Kamionkowski M.,California Institute of Technology
Physical Review Letters | Year: 2012

Many inflationary theories introduce new scalar, vector, or tensor degrees of freedom that may then affect the generation of primordial density perturbations. Here we show how to search a galaxy (or 21-cm) survey for the imprint of primordial scalar, vector, and tensor fields. These new fields induce local departures to an otherwise statistically isotropic two-point correlation function, or equivalently, nontrivial four-point correlation functions (or trispectra, in Fourier space), that can be decomposed into scalar, vector, and tensor components. We write down the optimal estimators for these various components and show how the sensitivity to these modes depends on the galaxy-survey parameters. New probes of parity-violating early-Universe physics are also presented. © 2012 American Physical Society. Source


Zaletel M.P.,University of California at Berkeley | Mong R.S.K.,California Institute of Technology
Physical Review B - Condensed Matter and Materials Physics | Year: 2012

We show that the model wave functions used to describe the fractional quantum Hall effect have exact representations as matrix product states (MPS). These MPS can be implemented numerically in the orbital basis of both finite and infinite cylinders, which provides an efficient way of calculating arbitrary observables. We extend this approach to the charged excitations and numerically compute their Berry phases. Finally, we present an algorithm for numerically computing the real-space entanglement spectrum starting from an arbitrary orbital basis MPS, which allows us to study the scaling properties of the real-space entanglement spectra on infinite cylinders. The real-space entanglement spectrum obeys a scaling form dictated by the edge conformal field theory, allowing us to accurately extract the two entanglement velocities of the Moore-Read state. In contrast, the orbital space spectrum is observed to scale according to a complex set of power laws that rule out a similar collapse. © 2012 American Physical Society. Source


Dust grains in neutral gas behave as aerodynamic particles, so they can develop large local density fluctuations entirely independent of gas density fluctuations. Specifically, gas turbulence can drive order-of-magnitude "resonant" fluctuations in the dust density on scales where the gas stopping/drag timescale is comparable to the turbulent eddy turnover time. Here we show that for large grains (size ≳ 0.1 μm, containing most grain mass) in sufficiently large molecular clouds (radii ≳ 1-10 pc, masses ≳ 10^4 M⊙), this scale becomes larger than the characteristic sizes of prestellar cores (the sonic length), so large fluctuations in the dust-to-gas ratio are imprinted on cores. As a result, star clusters and protostellar disks formed in large clouds should exhibit significant abundance spreads in the elements preferentially found in large grains (C, O). This naturally predicts populations of carbon-enhanced stars, certain highly unusual stellar populations observed in nearby open clusters, and may explain the "UV upturn" in early-type galaxies. It will also dramatically change planet formation in the resulting protostellar disks, by preferentially "seeding" disks with an enhancement in large carbonaceous or silicate grains. The relevant threshold for this behavior scales simply with cloud densities and temperatures, making straightforward predictions for clusters in starbursts and high-redshift galaxies. Because of the selective sorting by size, this process is not necessarily visible in extinction mapping. We also predict the shape of the abundance distribution - when these fluctuations occur, a small fraction of the cores may actually be seeded with abundances Z ∼ 100 〈Z〉 such that they are almost "totally metal" (Z ∼ 1)! Assuming the cores collapse, these totally metal stars would be rare (1 in ∼10^4 in clusters where this occurs), but represent a fundamentally new stellar evolution channel. © 2014. The American Astronomical Society. All rights reserved. Source


Loncar M.,Harvard University | Faraon A.,California Institute of Technology
MRS Bulletin | Year: 2013

Advances in nanotechnology have enabled the opportunity to fabricate nanoscale optical devices and chip-scale systems in diamond that can generate, manipulate, and store optical signals at the single-photon level. In particular, nanophotonics has emerged as a powerful interface between optical elements such as optical fibers and lenses, and solid-state quantum objects such as luminescent color centers in diamond that can be used effectively to manipulate quantum information. While quantum science and technology has been the main driving force behind recent interest in diamond nanophotonics, such a platform would have many applications that go well beyond the quantum realm. For example, diamond's transparency over a wide wavelength range, large third-order nonlinearity, and excellent thermal properties are of great interest for the implementation of frequency combs and integrated Raman lasers. Diamond is also an inert material that makes it well suited for biological applications and for devices that must operate in harsh environments. Copyright © Materials Research Society 2013. Source


Anderson D.L.,California Institute of Technology | Natland J.H.,University of Miami
Proceedings of the National Academy of Sciences of the United States of America | Year: 2014

Convection in an isolated planet is characterized by narrow downwellings and broad updrafts--consequences of Archimedes' principle, the cooling required by the second law of thermodynamics, and the effect of compression on material properties. A mature cooling planet with a conductive low-viscosity core develops a thick insulating surface boundary layer with a thermal maximum, a subadiabatic interior, and a cooling highly conductive but thin boundary layer above the core. Parts of the surface layer sink into the interior, displacing older, colder material, which is entrained by spreading ridges. Magma characteristics of intraplate volcanoes are derived from within the upper boundary layer. Upper mantle features revealed by seismic tomography and that are apparently related to surface volcanoes are intrinsically broad and are not due to unresolved narrow jets. Their morphology, aspect ratio, inferred ascent rate, and temperature show that they are passively responding to downward fluxes, as appropriate for a cooling planet that is losing more heat through its surface than is being provided from its core or from radioactive heating. Response to downward flux is the inverse of the heat-pipe/mantle-plume mode of planetary cooling. Shear-driven melt extraction from the surface boundary layer explains volcanic provinces such as Yellowstone, Hawaii, and Samoa. Passive upwellings from deeper in the upper mantle feed ridges and near-ridge hotspots, and others interact with the sheared and metasomatized surface layer. Normal plate tectonic processes are responsible both for plate boundary and intraplate swells and volcanism. Source


Fidkowski L.,California Institute of Technology
Physical Review Letters | Year: 2010

We study two a priori unrelated constructions: the spectrum of edge modes in a band topological insulator or superconductor with a physical edge, and the ground state entanglement spectrum in an extended system where an edge is simulated by an entanglement bipartition. We prove an exact relation between the ground state entanglement spectrum of such a system and the spectrum edge modes of the corresponding spectrally flattened Hamiltonian. In particular, we show that gapless edge modes result in degeneracies of the entanglement spectrum. © 2010 The American Physical Society. Source


Piro A.L.,California Institute of Technology
Astrophysical Journal Letters | Year: 2011

The recently discovered system J0651 is the tightest known detached white dwarf (WD) binary. Since it has not yet initiated Roche-lobe overflow, it provides a relatively clean environment for testing our understanding of tidal interactions. I investigate the tidal heating of each WD, parameterized in terms of its tidal Q parameter. Assuming that the heating can be radiated efficiently, the current luminosities are consistent with Q_1 ≈ 7 × 10^10 and Q_2 ≈ 2 × 10^7, for the He and C/O WDs, respectively. Conversely, if the observed luminosities are merely from the cooling of the WDs, these estimated values of Q represent the upper limits. A large Q_1 for the He WD means its spin velocity will be slower than that expected if it was tidally locked, which, since the binary is eclipsing, may be measurable via the Rossiter-McLaughlin effect. After one year, gravitational wave emission shifts the time of eclipses by 5.5 s, but tidal interactions cause the orbit to shrink more rapidly, changing the time by up to an additional 0.3 s after a year. Future eclipse timing measurements may therefore infer the degree of tidal locking. © 2011. The American Astronomical Society. All rights reserved. Source
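For a constant period derivative, the accumulated eclipse-timing (O−C) shift grows quadratically with elapsed time, Δt = |Ṗ|T²/2P. A sketch using J0651's ~765 s (12.75 min) orbital period and an assumed, illustrative GW-driven Ṗ (not a value fit to the data in this paper):

```python
YEAR = 3.156e7  # seconds

def eclipse_time_shift(pdot, period, elapsed):
    """Accumulated O-C eclipse-timing shift for a constant period derivative:
    delta_t = 0.5 * |Pdot| * T**2 / P, quadratic in the elapsed time T."""
    return 0.5 * abs(pdot) * elapsed ** 2 / period

P_ORB = 765.0        # J0651 orbital period in seconds (~12.75 min)
PDOT_GW = -8.2e-12   # assumed GW-driven period derivative (s/s), illustrative

print(round(eclipse_time_shift(PDOT_GW, P_ORB, YEAR), 1))  # a few seconds/yr
```

With this assumed Ṗ the one-year shift comes out near the ~5.5 s quoted in the abstract, and the quadratic growth is what makes long eclipse-timing baselines so sensitive.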


Glassman M.J.,Massachusetts Institute of Technology | Chan J.,California Institute of Technology | Olsen B.D.,Massachusetts Institute of Technology
Advanced Functional Materials | Year: 2013

Shear thinning hydrogels are promising materials that exhibit rapid self-healing following the cessation of shear, making them attractive for applications including injectable biomaterials. Here, self-assembly is demonstrated as a strategy to introduce a reinforcing network within shear thinning artificially engineered protein gels, enabling a responsive transition from an injectable state at low temperatures with a low yield stress to a stiffened state at physiological temperatures with resistance to shear thinning, higher toughness, and reduced erosion rates and creep compliance. Protein-polymer triblock copolymers capable of the responsive self-assembly of two orthogonal networks are synthesized. Midblock association forms a shear-thinning network, while endblock aggregation at elevated temperatures introduces a second, independent physical network into the protein hydrogel. These reversible crosslinks introduce extremely long relaxation times and lead to a five-fold increase in the elastic modulus, significantly larger than is expected from transient network theory. Thermoresponsive reinforcement reduces the high temperature creep compliance by over four orders of magnitude, decreases the erosion rate by at least a factor of five, and increases the yield stress by up to a factor of seven. Combined with the demonstrated potential of shear thinning artificial protein hydrogels for various uses, this reinforcement mechanism broadens the range of applications that can be addressed with shear-thinning physical gels. Hybrid triblock copolymers prepared from associative proteins and thermoresponsive polymers form shear thinning gels with a low yield stress at low temperatures. Upon increasing temperature, an orthogonal network of reinforcing nanostructures is formed that results in gels with significantly higher stiffness, resistance to shear thinning, higher toughness, reduced erosion rates, and lower creep compliance at physiological temperatures. 
Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim. Source


O'Connell R.M.,University of Utah | Baltimore D.,California Institute of Technology
Current Topics in Developmental Biology | Year: 2012

Hematopoiesis is a dynamic and highly complex developmental process that gives rise to a multitude of the cell types that circulate in the blood of multicellular organisms. These cells provide tissues with oxygen, guard against infection, prevent bleeding by clotting, and mediate inflammatory reactions. Because the hematopoietic system plays such a central role in human diseases such as infections, cancer, autoimmunity, and anemia, it has been intensely studied for more than a century. This scrutiny has helped to shape many of the developmental paradigms that exist today and has identified specific protein factors that serve as master regulators of blood cell lineage specification. Despite this progress, many aspects of blood cell development remain obscure, suggesting that novel layers of regulation must exist. Consequently, the emergence of regulatory noncoding RNAs, such as the microRNAs (miRNAs), is beginning to provide new insights into the molecular control networks underlying hematopoiesis and diseases that stem from aberrations in this process. This review will discuss how miRNAs fit into our current understanding of hematopoietic development in mammals and how breakdowns in these pathways can trigger disease. © 2012 Elsevier Inc. Source


Benson A.J.,California Institute of Technology
New Astronomy | Year: 2012

We describe a new, free and open source semi-analytic model of galaxy formation, Galacticus. The Galacticus model was designed to be highly modular to facilitate expansion and the exploration of alternative descriptions of key physical ingredients. We detail the Galacticus engine for evolving galaxies through a merging hierarchy of dark matter halos and give details of the specific implementations of physics currently available in Galacticus. Finally, we show results from an example model that is in reasonably good agreement with several observational datasets. We use this model to explore numerical convergence and to demonstrate the types of information which can be extracted from Galacticus. © 2011 Elsevier B.V. All rights reserved. Source


Adolphs R.,California Institute of Technology
Annals of the New York Academy of Sciences | Year: 2010

The amygdala has received intense recent attention from neuroscientists investigating its function at the molecular, cellular, systems, cognitive, and clinical level. It clearly contributes to processing emotionally and socially relevant information, yet a unifying description and computational account have been lacking. The difficulty of tying together the various studies stems in part from the sheer diversity of approaches and species studied, in part from the amygdala's inherent heterogeneity in terms of its component nuclei, and in part because different investigators have simply been interested in different topics. Yet, a synthesis now seems close at hand in combining new results from social neuroscience with data from neuroeconomics and reward learning. The amygdala processes a psychological stimulus dimension related to saliency or relevance; mechanisms have been identified to link it to processing unpredictability; and insights from reward learning have situated it within a network of structures that include the prefrontal cortex and the ventral striatum in processing the current value of stimuli. These aspects help to clarify the amygdala's contributions to recognizing emotion from faces, to social behavior toward conspecifics, and to reward learning and instrumental behavior. © 2010 New York Academy of Sciences. Source


Brown M.E.,California Institute of Technology
Annual Review of Earth and Planetary Sciences | Year: 2012

Objects in the Kuiper belt are difficult to study in detail, even with the best telescopes available. Therefore, for many years, studies of the compositions of these objects were relegated to collections of moderate-quality spectroscopic and photometric data that remained difficult to interpret. Much early effort was put into simple correlations of surface colors and identifications of spectral features, but connecting these observations to a larger understanding of the region remained elusive. The past decade, however, has seen a blossoming in our understanding, a product of the discoveries of larger-and thus easier to study-objects, continued collection of high-quality photometric and spectroscopic observations, and continued work at the laboratory and theoretical levels. Today, we now know of many processes that affect these objects' surface compositions, including atmospheric loss, differentiation and cryovolcanism, radiation processing, the effects of giant impacts, and the early dynamical excitation of the Kuiper belt. I review the large quantity of data now available and attempt to build a comprehensive framework for understanding the compositions and their causes. © 2012 by Annual Reviews. All rights reserved. Source


Experimental observations on cyclic splitting and bending by a flagellar doublet pair are modeled using forces obtained from a model for dynein mechanochemistry, based on ideas introduced by Andrew Huxley and Terrill Hill and extended previously for modeling flagellar movements. The new feature is elastic attachment of dynein to the A doublet, which allows movement perpendicular to the A doublet and provides adhesive force that can strain attached dyneins. This additional strain influences the kinetics of dynein attachment and detachment. Computations using this dynein model demonstrate that very simple and realistic ideas about dynein mechanochemistry are sufficient for explaining the separation and reattachment seen experimentally with flagellar doublet pairs. Additional simulations were performed after adding a "super-adhesion" elasticity. This elastic component is intended to mimic interdoublet connections, normally present in an intact axoneme, that would prevent visible splitting but allow sufficient separation to cause dynein detachment and cessation of shear force generation. This is the situation envisioned by Lindemann's "geometric clutch" hypothesis for control of dynein function in flagella and cilia. The simulations show abrupt disengagement of the "clutch" at one end of a bend, and abrupt reengagement of the "clutch" at the other end of a bend, ensuring that active sliding is only operating where it will cause bend propagation from base to tip. © 2014 Wiley Periodicals, Inc. Source


Sokic M.,California Institute of Technology
Order | Year: 2012

We classify Fraïssé classes of finite posets with convex linear orderings with respect to the Ramsey property and extend the list of extremely amenable groups and universal minimal flows thanks to a theory developed by Kechris et al. (Geom Funct Anal 15:106-189, 2005). For the structures from the Schmerl list for which this technique is not applicable, we provide a direct calculation of universal minimal flows. © 2011 Springer Science+Business Media B.V. Source


Kochmann D.M.,California Institute of Technology
Physica Status Solidi (B) Basic Research | Year: 2012

Recent experimental findings and theoretical analyses have confirmed the bound-exceeding performance of composite materials with one constituent of so-called negative stiffness (i.e., with non-positive-definite elastic moduli): the overall elastic properties greatly exceed those of the composite constituents, when the negative-stiffness phase's properties are appropriately tuned. However, the stability of such composite materials has remained a key open question. It has been shown, e.g., that a spherical particle of a negative-stiffness material can be stabilized when embedded in a sufficiently stiff and thick coating to impose a geometrical constraint on the negative-stiffness phase. For general composite geometries (as those arising from actual manufacturing processes), no such investigation has been reported. We review the classical stability conditions for homogeneous linear elastic solids and outline methods to determine the sufficient stability conditions for elastic composites. In addition, a numerical technique to obtain the stability restrictions on the elastic moduli of a composite with, in principle, arbitrary geometry is presented. Based on this method, we investigate the stability of simple elastic two-phase composites consisting of an inclusion (having non-positive-definite elastic moduli) embedded in a different coating material. In particular, the influence of the geometry of the encapsulated particles and the surrounding matrix is shown to considerably affect the overall stability. Our results compare the stability limits for two- (2D) and three-dimensional (3D) composite arrangements and provide design guidelines for optimal stability. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim. Source
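The bound-exceeding effect can be caricatured in one dimension: for two phases loaded in series, compliances add, so a slightly negative inclusion stiffness can nearly cancel the matrix compliance and drive the effective stiffness far above either constituent. The values below are hypothetical, and, as the abstract stresses, such an inclusion is unstable unless geometrically constrained.

```python
def series_stiffness(f1, k1, f2, k2):
    """Effective stiffness of two phases loaded in series (Reuss-type estimate):
    compliances add, 1/k_eff = f1/k1 + f2/k2."""
    return 1.0 / (f1 / k1 + f2 / k2)

k_matrix = 1.0
# a slightly negative inclusion stiffness nearly cancels the matrix compliance
k_eff = series_stiffness(0.5, k_matrix, 0.5, -1.02)
print(k_eff)  # ~102: far stiffer than the positive-stiffness matrix alone
```

Tuning k2 toward the cancellation point sends k_eff toward infinity, which is exactly why the stability of the tuned composite becomes the central question.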


Patterson P.H.,California Institute of Technology
Pediatric Research | Year: 2011

A variety of features of autism can be simulated in rodents, including the core behavioral hallmarks of stereotyped and repetitive behaviors, and deficits in social interaction and communication. Other behaviors frequently found in autism spectrum disorders (ASDs) such as neophobia, enhanced anxiety, abnormal pain sensitivity and eye blink conditioning, disturbed sleep patterns, seizures, and deficits in sensorimotor gating are also present in some of the animal models. Neuropathology and some characteristic neurochemical changes that are frequently seen in autism, and alterations in the immune status in the brain and periphery are also found in some of the models. Several known environmental risk factors for autism have been successfully established in rodents, including maternal infection and maternal valproate administration. Also under investigation are a number of mouse models based on genetic variants associated with autism or on syndromic disorders with autistic features. This review briefly summarizes recent developments in this field, highlighting models with face and/or construct validity, and noting the potential for investigation of pathogenesis, and early progress toward clinical testing of potential therapeutics. Wherever possible, reference is made to reviews rather than to primary articles. Copyright © 2011 International Pediatric Research Foundation, Inc. Source


Shu J.,CAS Institute of Theoretical Physics | Zhang Y.,California Institute of Technology
Physical Review Letters | Year: 2013

We observe a generic connection between LHC Higgs data and electroweak baryogenesis: the particle that contributes to the CP-odd hgg or hγγ vertex would provide the CP-violating source during a first-order phase transition. It is illustrated in the two Higgs doublet model that a common complex phase controls the lightest Higgs properties at the LHC, electric dipole moments, and the CP-violating source for electroweak baryogenesis. We perform a general parametrization of Higgs effective couplings and a global fit to the LHC Higgs data. Current LHC measurements prefer a nonzero phase for tanβ ∼ 1 and electric dipole moment constraints still allow an order-one phase for tanβ ∼ 1, which gives sufficient room to generate the correct cosmic baryon asymmetry. We also give some prospects in the direct measurements of CP violation in the Higgs sector at the LHC. © 2013 American Physical Society. Source


Dabiri J.O.,California Institute of Technology
Journal of Renewable and Sustainable Energy | Year: 2011

Modern wind farms comprised of horizontal-axis wind turbines (HAWTs) require significant land resources to separate each wind turbine from the adjacent turbine wakes. This aerodynamic constraint limits the amount of power that can be extracted from a given wind farm footprint. The resulting inefficiency of HAWT farms is currently compensated by using taller wind turbines to access greater wind resources at high altitudes, but this solution comes at the expense of higher engineering costs and greater visual, acoustic, radar, and environmental impacts. We investigated the use of counter-rotating vertical-axis wind turbines (VAWTs) in order to achieve higher power output per unit land area than existing wind farms consisting of HAWTs. Full-scale field tests of 10-m tall VAWTs in various counter-rotating configurations were conducted under natural wind conditions during summer 2010. Whereas modern wind farms consisting of HAWTs produce 2-3 W of power per square meter of land area, these field tests indicate that power densities an order of magnitude greater can potentially be achieved by arranging VAWTs in layouts that enable them to extract energy from adjacent wakes and from above the wind farm. Moreover, this improved performance does not require higher individual wind turbine efficiency, only closer wind turbine spacing and a sufficient vertical flux of turbulence kinetic energy from the atmospheric surface layer. The results suggest an alternative approach to wind farming that has the potential to concurrently reduce the cost, size, and environmental impacts of wind farms. © 2011 American Institute of Physics. Source
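The quoted areal power densities translate directly into farm output per land footprint; a trivial sketch using the abstract's figures (the VAWT density is the order-of-magnitude estimate from the field tests, not a measured constant):

```python
def farm_power_mw(power_density_w_m2, area_m2):
    """Total wind-farm output in MW for a given areal power density."""
    return power_density_w_m2 * area_m2 / 1e6

AREA = 1e6  # one square kilometre of land, in m^2

hawt = farm_power_mw(2.5, AREA)   # 2-3 W/m^2 quoted for HAWT farms
vawt = farm_power_mw(25.0, AREA)  # an order of magnitude higher, per the tests
print(hawt, vawt)  # 2.5 MW vs 25.0 MW from the same footprint
```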


Colas J.T.,California Institute of Technology | Hsieh P.-J.,National University of Singapore
Human Brain Mapping | Year: 2014

Intuition and an assumption of basic rationality would suggest that people evaluate a stimulus on the basis of its properties and their underlying utility. However, various findings suggest that evaluations often depend not only on what is being evaluated, but also on contextual factors. Here we demonstrate a further departure from normative decision making: Aesthetic evaluations of abstract fractal art by human subjects were predicted from pre-stimulus patterns of BOLD fMRI signals across a distributed network of frontal regions before the stimuli were presented. This predictive power was dissociated from motor biases in favor of pressing a particular button to indicate one's choice. Our findings suggest that endogenous neural signals present before stimulation can bias decisions at multiple levels of representation when evaluating stimuli. © 2013 Wiley Periodicals, Inc. Source


Weinstein A.J.,California Institute of Technology
Classical and Quantum Gravity | Year: 2012

With the advanced gravitational wave detectors coming on line in the next 5 years, we expect to make the first detections of gravitational waves from astrophysical sources, and study the properties of the waves themselves as tests of general relativity. In addition, these gravitational waves will be powerful tools for the study of their astrophysical sources and source populations. They carry information that is quite complementary to what can be learned from electromagnetic or neutrino observations, probing the central gravitational engines that power the electromagnetic emissions at the outer layers of the source. Preparations are being made to enable near-simultaneous observations of both gravitational wave and electromagnetic observations of transient sources, using low-latency search pipelines and rapid sky localization. We will review the many opportunities for multi-messenger astronomy and astrophysics with gravitational waves enabled by the advanced detectors, and the preparations that are being made to quickly and fully exploit them. © 2012 IOP Publishing Ltd. Source


Goldreich P.,California Institute of Technology | Goldreich P.,Institute for Advanced Study | Schlichting H.E.,Massachusetts Institute of Technology
Astronomical Journal | Year: 2014

We assess the multi-planet systems discovered by the Kepler satellite in terms of current ideas about orbital migration and eccentricity damping due to planet-disk interactions. Our primary focus is on first order mean motion resonances, which we investigate analytically to lowest order in eccentricity. Only a few percent of planet pairs are in close proximity to a resonance. However, predicted migration rates (parameterized by the timescale τ_n) imply that during convergent migration most planets would have been captured into first order resonances. Eccentricity damping (parameterized by the timescale τ_e) offers a plausible resolution. Estimates suggest τ_e/τ_n ~ (h/a)^2 ~ 10^-2, where h/a is the ratio of disk thickness to radius. Together, eccentricity damping and orbital migration give rise to an equilibrium eccentricity, e_eq ~ (τ_e/τ_n)^1/2. Capture is permanent provided e_eq ≲ μ^1/3, where μ denotes the planet to star mass ratio. But for e_eq ≳ μ^1/3, capture is only temporary because librations around equilibrium are overstable and lead to passage through resonance on timescale τ_e. Most Kepler planet pairs have e_eq > μ^1/3. Since τ_n ≫ τ_e is the timescale for migration between neighboring resonances, only a modest percentage of pairs end up trapped in resonances after the disk disappears. Thus the paucity of resonances among Kepler pairs should not be taken as evidence for in situ planet formation or the disruptive effects of disk turbulence. Planet pairs close to a mean motion resonance typically exhibit period ratios 1%-2% larger than those for exact resonance. The direction of this shift undoubtedly reflects the same asymmetry that requires convergent migration for resonance capture. Permanent resonance capture at these separations from exact resonance would demand μ(τ_n/τ_e)^1/2 ≳ 0.01, a value that estimates of μ from transit data and (τ_e/τ_n)^1/2 from theory are insufficient to match. Plausible alternatives involve eccentricity damping during or after disk dispersal.
The overstability referred to above has applications beyond those considered in this investigation. It was discovered numerically by Meyer & Wisdom in their study of the tidal evolution of Saturn's satellites. © 2014. The American Astronomical Society. All rights reserved. Source
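The capture criterion quoted in this abstract can be checked with a few lines of arithmetic. Below is a minimal sketch; the numerical values of τ_e, τ_n, and μ are illustrative assumptions, not taken from the paper:

```python
# Sketch of the resonance-capture criterion from the abstract:
# e_eq = (tau_e / tau_n)**(1/2), and capture is permanent when e_eq <~ mu**(1/3).
# Parameter values below are illustrative assumptions only.

def equilibrium_eccentricity(tau_e, tau_n):
    """e_eq = sqrt(tau_e / tau_n): balance of damping and migration."""
    return (tau_e / tau_n) ** 0.5

def capture_is_permanent(tau_e, tau_n, mu):
    """Permanent capture requires e_eq <= mu**(1/3); otherwise librations
    around equilibrium are overstable and the pair escapes the resonance."""
    return equilibrium_eccentricity(tau_e, tau_n) <= mu ** (1 / 3)

# Illustrative super-Earth: mu ~ 1e-5, damping 100x faster than migration,
# consistent with the abstract's estimate tau_e/tau_n ~ (h/a)^2 ~ 1e-2.
mu = 1e-5
tau_e, tau_n = 1e3, 1e5
e_eq = equilibrium_eccentricity(tau_e, tau_n)
print(f"e_eq = {e_eq:.3f}, mu^(1/3) = {mu ** (1/3):.3f}")
print("permanent capture" if capture_is_permanent(tau_e, tau_n, mu)
      else "overstable: passage through resonance")
```

With these assumed values e_eq = 0.1 exceeds μ^1/3 ≈ 0.022, reproducing the abstract's conclusion that most such pairs escape resonance.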


Lukic M.,California Institute of Technology
Communications in Mathematical Physics | Year: 2011

We consider probability measures on the real line or unit circle with Jacobi or Verblunsky coefficients satisfying an ℓ^p condition and a generalized bounded variation condition. This latter condition requires that a sequence can be expressed as a sum of sequences β^(l), each of which has rotated bounded variation, i.e., Σ_n |β^(l)_{n+1} - e^{iφ_l} β^(l)_n| < ∞ for some φ_l. This includes a large class of discrete Schrödinger operators with almost periodic potentials modulated by ℓ^p decay, i.e. linear combinations of λ_n cos(2παn + φ) with λ ∈ ℓ^p of bounded variation and any α. In all cases, we prove absence of singular continuous spectrum, preservation of absolutely continuous spectrum from the corresponding free case, and that pure points embedded in the continuous spectrum can only occur in an explicit finite set. © 2011 Springer-Verlag. Source


Adkins J.F.,California Institute of Technology
Paleoceanography | Year: 2013

The glacial cycles of the Pleistocene involve changes in the circulation of the deep ocean in important ways. This review seeks to establish what the robust patterns of deep-sea water mass changes were and how they might have influenced important parts of the last glacial cycle. After a brief review of how tracers in the modern ocean can be used to understand the distribution of water masses, I examine the data for biogeochemical, circulation rate, and conservative tracers during glacial climates. Some of the robust results from the literature of the last 30 years include: a shoaled version of northern source deep water in the Atlantic, expanded southern source water in the abyss and deep ocean, salt (rather than heat) stratification of the last glacial maximum (LGM) deep sea, and several lines of evidence for slower overturning circulation in the southern deep cell. We combine these observations into a new idea for how the ocean-atmosphere system moves from interglacial to glacial periods across a single cycle. By virtue of its influence on the melting of land-based ice around Antarctica, cooling North Atlantic Deep Water (NADW) leads to a cold and salty version of Antarctic Bottom Water (AABW). This previously underappreciated feedback can lead to a more stratified deep ocean that operates as a more effective carbon trap than the modern one, helping to lower atmospheric CO2 and providing a mechanism for the deep ocean to synchronize the hemispheres in a positive feedback that drives the system to further cooling. Key Points: there is a lot of information about the past deep ocean; tracers can be used to constrain the past circulation; cooling NADW is a mechanism for entering glacials. ©2013. American Geophysical Union. All Rights Reserved. Source


Wierman A.,California Institute of Technology
Surveys in Operations Research and Management Science | Year: 2011

Traditionally, the study of scheduling policies has focused on performance metrics such as response time, queue length, and throughput. However, the vaguer notion of 'fairness' is often equally or more important than these traditional performance metrics. But the concept of fairness is difficult to define, and so it has been studied only sporadically. This has changed over the past decade, and a growing literature providing an analytic framework for studying fairness has emerged. This article surveys recent developments, which include a rich variety of fairness metrics as well as a growing understanding of the fairness of common scheduling policies. © 2010 Elsevier Ltd. Source


Tsai V.C.,California Institute of Technology
Journal of Geophysical Research: Solid Earth | Year: 2011

Cross correlation of ambient seismic noise is known to result in time series from which station-station travel-time measurements can be made. Part of the reason that these cross-correlation travel-time measurements are reliable is that there exists a theoretical framework that quantifies how these travel times depend on the features of the ambient noise. However, corresponding theoretical results do not currently exist to describe how the amplitudes of the cross correlation depend on such features. For example, currently it is not possible to take a given distribution of noise sources and calculate the cross correlation amplitudes one would expect from such a distribution. Here, we provide a ray-theoretical framework for calculating cross correlations. This framework differs from previous work in that it explicitly accounts for attenuation as well as the spatial distribution of sources and therefore can address the issue of quantifying amplitudes in noise correlation measurements. After introducing the general framework, we apply it to two specific problems. First, we show that we can quantify the amplitudes of coherency measurements, and find that the decay of coherency with station-station spacing depends crucially on the distribution of noise sources. We suggest that researchers interested in performing attenuation measurements from noise coherency should first determine how the dominant sources of noise are distributed. Second, we show that we can quantify the signal-to-noise ratio of noise correlations more precisely than previous work, and that these signal-to-noise ratios can be estimated for given situations prior to the deployment of seismometers. It is expected that there are applications of the theoretical framework beyond the two specific cases considered, but these applications await future work. Copyright 2011 by the American Geophysical Union. Source


Brewster R.,California Institute of Technology | Safran S.A.,Weizmann Institute of Science
Biophysical Journal | Year: 2010

A simple model of the line activity of a hybrid lipid (e.g., POPC) with one fully saturated chain and one partially unsaturated chain demonstrates that these lipids preferentially pack at curved interfaces between phase-separated saturated and unsaturated domains. We predict that the domain sizes typically range from tens to hundreds of nm, depending on molecular interactions and parameters such as molecular volume and area per headgroup in the bulk fluid phase. The role of cholesterol is taken into account by an effective change in the headgroup areas and the domain sizes are predicted to increase with cholesterol concentration. © 2010 by the Biophysical Society. Source


Reeves R.,California Institute of Technology
Journal of Low Temperature Physics | Year: 2012

The Q/U Imaging ExperimenT (QUIET) consists of a pair of radiometer arrays designed to measure the B-mode polarization of the Cosmic Microwave Background (CMB) at 44 GHz (Q-band) and 95 GHz (W-band) with 19 and 90 modules at each frequency. The architecture of both arrays is based on miniaturized coherent polarimeters using High Electron Mobility Transistor (HEMT) chips for amplification; QUIET is the largest coherent microwave array ever fielded. We describe the detector technology used in Phase I of QUIET and our efforts to improve both the W-band module design and the fundamental performance of the HEMT amplifiers. These technology developments have wide application in the field of radio astronomy. © Springer Science+Business Media, LLC 2012. Source


Tropp J.A.,California Institute of Technology
Foundations of Computational Mathematics | Year: 2012

This paper presents new probability inequalities for sums of independent, random, self-adjoint matrices. These results place simple and easily verifiable hypotheses on the summands, and they deliver strong conclusions about the large-deviation behavior of the maximum eigenvalue of the sum. Tail bounds for the norm of a sum of random rectangular matrices follow as an immediate corollary. The proof techniques also yield some information about matrix-valued martingales. In other words, this paper provides noncommutative generalizations of the classical bounds associated with the names Azuma, Bennett, Bernstein, Chernoff, Hoeffding, and McDiarmid. The matrix inequalities promise the same diversity of application, ease of use, and strength of conclusion that have made the scalar inequalities so valuable. © 2011 The Author(s). Source
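The flavor of these inequalities can be illustrated numerically. The sketch below checks a standard matrix-Hoeffding-type expectation bound of the kind the paper generalizes, E[λ_max(Σ ε_i A_i)] ≤ sqrt(2 σ² log d) with σ² = ‖Σ A_i²‖ for Rademacher signs ε_i; the matrices, dimensions, and trial counts are arbitrary assumptions:

```python
import numpy as np

# Empirical check of a matrix-Hoeffding-type bound: for X_i = eps_i * A_i
# with Rademacher eps_i and fixed symmetric A_i,
#   E[lambda_max(sum_i X_i)] <= sqrt(2 * sigma2 * log d),
# where sigma2 = ||sum_i A_i^2|| (spectral norm).  Constants follow the
# standard statement of the bound, not a specific theorem in the paper.

rng = np.random.default_rng(0)
d, n = 8, 50
A = rng.standard_normal((n, d, d))
A = (A + A.transpose(0, 2, 1)) / 2            # make each A_i symmetric

sigma2 = np.linalg.norm(sum(a @ a for a in A), 2)   # ||sum A_i^2||
bound = np.sqrt(2 * sigma2 * np.log(d))

trials = 200
lmax = np.empty(trials)
for t in range(trials):
    eps = rng.choice([-1.0, 1.0], size=n)     # Rademacher signs
    S = np.tensordot(eps, A, axes=1)          # sum_i eps_i * A_i
    lmax[t] = np.linalg.eigvalsh(S)[-1]       # largest eigenvalue

print(f"empirical mean lambda_max = {lmax.mean():.2f}, bound = {bound:.2f}")
```

The empirical mean of the maximum eigenvalue stays below the bound, as the theorem requires; the slack reflects the generality of the hypothesis.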


Lamb M.P.,California Institute of Technology | Fonstad M.A.,Texas State University
Nature Geoscience | Year: 2010

Deep river canyons are thought to form slowly over geological time (see, for example, ref. 1), cut by moderate flows that recur every few years. In contrast, some of the most spectacular canyons on Earth and Mars were probably carved rapidly during ancient megaflood events. Quantification of the flood discharge, duration and erosion mechanics that operated during such events is hampered because we lack modern analogues. Canyon Lake Gorge, Texas, was carved in 2002 during a single catastrophic flood. The event offers a rare opportunity to analyse canyon formation and test palaeohydraulic-reconstruction techniques under known topographic and hydraulic conditions. Here we use digital topographic models and visible/near-infrared aerial images from before and after the flood, discharge measured during the event, field measurements and sediment-transport modelling to show that the flood moved metre-sized boulders, excavated ∼7 m of limestone and transformed a soil-mantled valley into a bedrock canyon in just ∼3 days. We find that canyon morphology is strongly dependent on rock type: plucking of limestone blocks produced waterfalls, inner channels and bedrock strath terraces, whereas abrasion of cemented alluvium sculpted walls, plunge pools and streamlined islands. Canyon formation was so rapid that erosion might have been limited by the ability of the flow to transport sediment. We suggest that our results might improve hydraulic reconstructions of similar megafloods on Earth and Mars. © 2010 Macmillan Publishers Limited. All rights reserved. Source


Ma C.H.I.,California Institute of Technology
American Mineralogist | Year: 2010

Hibonite-(Fe), (Fe,Mg)Al12O19, is the Fe2+-dominant analog of hibonite, CaAl12O19, discovered in a highly altered Ca-,Al-rich refractory inclusion from the Allende meteorite. It occurs as scattered micrometer-sized single crystals within an aggregate of hercynite (Fe,Mg)Al2O4, adjacent to nepheline, ilmenite, ferroan spinel, perovskite, and hibonite. The mean chemical composition determined by electron microprobe analysis of hibonite-(Fe) is Al2O3 90.05 wt%, FeO 3.60, SiO2 2.09, MgO 1.61, Na2O 0.55, CaO 0.28, TiO2 0.04, V2O3 0.02, sum 98.25, corresponding to an empirical formula of (Fe2+0.34Mg0.27Na0.12Al0.11Ca0.03)Σ0.87(Al11.77Si0.23)Σ12.00O19. Hibonite-(Fe) is hexagonal, P63/mmc; a = 5.613 Å, c = 22.285 Å, V = 608.0 Å3, and Z = 2. Its electron backscatter diffraction pattern is a good match to that of the hibonite structure. Hibonite-(Fe) is apparently a secondary alteration product formed by iron-alkali-halogen metasomatism, whereas hibonite is a primary refractory phase. Source


Gauthier J.-R.,California Institute of Technology
Monthly Notices of the Royal Astronomical Society | Year: 2013

We present the results of a spectroscopic survey of galaxies in the vicinity of an ultra-strong MgII λλ2796, 2803 absorber of rest-frame absorption equivalent width Wr(2796) = 4.2 Å at z = 0.5624. This absorber was originally found at projected separation ρ = 246 kpc from a luminous red galaxy (LRG) at z = 0.5604. Inamori-Magellan Areal Camera and Spectrograph (IMACS) spectroscopy has revealed two galaxies at ρ < 60 kpc (z = 0.5623 and z = 0.5621) and a third one at ρ = 209 kpc (z = 0.5623) near the redshift of the absorber. These findings indicate that the absorbing gas resides in a group environment. Combining Sloan Digital Sky Survey (SDSS) broad-band photometry with additional B- and Ks-band images and optical spectroscopy, we perform a stellar population synthesis analysis of the group members to characterize their star formation histories, on-going star formation rates (SFR), and stellar masses. We find that the two group members at ρ < 60 kpc are best characterized by old stellar populations (>1 Gyr) and little on-going star formation activity (SFR < 2.9 M⊙ yr-1), while the third object at ρ = 209 kpc exhibits [O II]- and continuum-derived SFR consistent with SFR > 3.0 M⊙ yr-1. Including the two ultra-strong MgII absorbers analysed by Nestor et al., this is the third ultra-strong MgII absorber for which a detailed study of the galactic environment is available. All three absorbers are found in galaxy groups. We examine different physical mechanisms giving rise to the absorbing gas, including starburst-driven outflows, cold filaments, extended rotating discs and stripped gas. We argue that the large equivalent width observed in these absorbers is more likely due to the gas dynamics of the intragroup medium rather than driven by starburst outflows. © 2013 The Author. Published by Oxford University Press on behalf of the Royal Astronomical Society. Source


Bisquert J.,Jaume I University | Marcus R.A.,California Institute of Technology
Topics in Current Chemistry | Year: 2014

We review the concepts and methods of modeling of the dye-sensitized solar cell, starting from fundamental electron transfer theory and using phenomenological transport-conservation equations. The models reviewed here are aimed at describing the components of the current-voltage curve of the solar cell, based on small perturbation experimental methods, and to such an end a range of phenomena occurring in the nanoparticulate electron transport materials, and at interfaces, are covered. Disorder plays a major role in the definition of kinetic parameters, and we introduce single particle as well as collective definitions of the diffusion coefficient and electron lifetime. Based on these fundamental considerations, applied tools of analysis by impedance spectroscopy are described, and we outline in detail the theory of recombination via surface states that successfully describes the measured recombination resistance and lifetime. © 2013 Springer-Verlag Berlin Heidelberg. Source


Doostan A.,University of Colorado at Boulder | Owhadi H.,California Institute of Technology
Journal of Computational Physics | Year: 2011

We propose a method for the approximation of solutions of PDEs with stochastic coefficients based on the direct, i.e., non-adapted, sampling of solutions. This sampling can be done by using any legacy code for the deterministic problem as a black box. The method converges in probability (with probabilistic error bounds) as a consequence of sparsity and a concentration of measure phenomenon on the empirical correlation between samples. We show that the method is well suited for truly high-dimensional problems. © 2011 Elsevier Inc. Source
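The role of sparsity in recovery from non-adapted random samples can be illustrated on a toy linear system. The paper works with ℓ1-minimization on polynomial chaos expansions; the sketch below substitutes simple orthogonal matching pursuit on a generic sparse system as a stand-in, and every size and value is an assumption for illustration:

```python
import numpy as np

# Toy illustration of sparsity-exploiting recovery from non-adapted random
# samples: far fewer samples (rows) than basis functions (columns) still
# recover a sparse coefficient vector.  Orthogonal matching pursuit is used
# here as a simple stand-in for the paper's l1-based approach.

def omp(A, y, k):
    """Recover a k-sparse x with y = A x by orthogonal matching pursuit."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated column
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef          # re-fit on the support
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(2)
m, n, k = 40, 200, 3                  # 40 random samples, 200 basis functions
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[5, 50, 120]] = [1.0, -2.0, 0.5]
y = A @ x_true                        # non-adapted "black box" evaluations
x_hat = omp(A, y, k)
print("max recovery error:", np.max(np.abs(x_hat - x_true)))
```

With 40 samples and 200 unknowns the 3-sparse vector is recovered essentially exactly, the concentration-of-measure effect the method exploits.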


Davis M.E.,California Institute of Technology
Chemistry of Materials | Year: 2014

Zeolites and zeolite-like materials are continually finding new applications. Because of the uniformity of these solids, the expression of macroscale materials properties that are controlled by the materials chemistry at the atomic/molecular scale are achievable. In this Perspective, I discuss the following areas of current interest in zeolites and zeolite-like materials that rely on manipulation of the materials chemistry for their preparation and provide new opportunities for application: (i) exploitation of organic structure-directing agents (SDAs) for new materials, (ii) the synthesis of zeolites without SDAs, (iii) the synthesis of very hydrophobic materials, (iv) conversions of two-dimensional (2D) to 3D materials and vice versa, (v) hierarchically organized materials, (vi) chiral materials, and (vii) direction of tetrahedral atoms to specific framework positions. © 2013 American Chemical Society. Source


Babichenko Y.,California Institute of Technology
Proceedings of the Annual ACM Symposium on Theory of Computing | Year: 2014

We study the query complexity of approximate notions of Nash equilibrium in games with a large number of players n and a constant number of actions m. Our main result states that even for constant ε, the query complexity of an ε-well-supported Nash equilibrium is exponential in n. © 2014 ACM. Source


Rothenberg E.V.,California Institute of Technology
Briefings in Functional Genomics | Year: 2013

Three interlocking problems in gene regulation are: how to explain genome-wide targeting of transcription factors in different cell types, how prior transcription factor action can establish an 'epigenetic state' that changes the options for future transcription factor action, and how directly a sequence of developmental decisions can be memorialized in a hierarchy of repression structures applied to key genes of the 'paths not taken'. This review uses the finely staged process of T-cell lineage commitment as a test case in which to examine how changes in developmental status are reflected in changes in transcription factor expression, transcription factor binding distribution across genomic sites, and chromatin modification. These are evaluated in a framework of reciprocal effects of previous chromatin structure features on transcription factor access and of transcription factor binding on other factors and on future chromatin structure. © The Author 2013. Published by Oxford University Press. All rights reserved. Source


Cai L.,California Institute of Technology
Briefings in Functional Genomics | Year: 2013

In this review, we discuss a strategy to bring genomics and proteomics into single cells by super-resolution microscopy. The basis for this new approach is the following: given the 10 nm resolution of a super-resolution microscope and a typical cell with a size of (10 μm)^3, individual cells contain effectively 10^9 super-resolution pixels or bits of information. Most eukaryotic cells have 10^4 genes and cellular abundances of 10-100 copies per transcript. Thus, under a super-resolution microscope, an individual cell has 1000 times more pixel volume or information capacity than is needed to encode all transcripts within that cell. Individual species of mRNA can be uniquely identified by labeling them each with a distinct combination of fluorophores by fluorescence in situ hybridization. With at least 15 fluorophores available in super-resolution, hundreds of genes can be barcoded with a three-color barcode (C(15,3) = 455). These calculations suggest that by combining super-resolution microscopy and barcode labeling, single cells can be turned into informatics platforms denser than microarrays and that molecular species in individual cells can be profiled in a massively parallel fashion. © The Author 2012. Published by Oxford University Press. All rights reserved. Source
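The two counting arguments in this abstract are easy to reproduce:

```python
from math import comb

# Super-resolution pixels per cell: (10 um / 10 nm)^3 = 1000^3 = 10^9.
pixels = (10_000 // 10) ** 3        # 10 um = 10,000 nm, at 10 nm resolution

# Three-color barcodes drawn from 15 distinguishable fluorophores:
# C(15, 3) = 455 distinct codes, enough to label hundreds of genes.
barcodes = comb(15, 3)

print(pixels, barcodes)             # 1000000000 455
```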


Batygin K.,California Institute of Technology
Monthly Notices of the Royal Astronomical Society | Year: 2015

The early stages of dynamical evolution of planetary systems are often shaped by dissipative processes that drive orbital migration. In multi-planet systems, convergent migration of orbits inevitably leads to encounters with rational period ratios, which may result in establishment of mean-motion resonances. The success or failure of resonant capture yields exceedingly different subsequent evolutions, and thus plays a central role in determining the ensuing orbital architecture of planetary systems. In this work, we employ an integrable Hamiltonian formalism for first order planetary resonances that allows both secondary bodies to have finite masses and eccentricities, and construct a comprehensive theory for resonant capture. Particularly, we derive conditions under which orbital evolution lies within the adiabatic regime, and provide a generalized criterion for guaranteed resonant locking as well as a procedure for calculating capture probabilities when capture is not certain. Subsequently, we utilize the developed analytical model to examine the evolution of Jupiter and Saturn within the protosolar nebula, and investigate the origins of the dominantly non-resonant orbital distribution of sub-Jovian extrasolar planets. Our calculations show that the commonly observed extrasolar orbital structure can be understood if planet pairs encounter mean-motion commensurabilities on slightly eccentric (e ~ 0.02) orbits. Accordingly, we speculate that resonant capture among low-mass planets is typically rendered unsuccessful due to subtle axial asymmetries inherent to the global structure of protoplanetary discs. © 2015 The Authors. Source


Adolphs R.,California Institute of Technology
Current Biology | Year: 2013

Each of us has felt afraid, and we can all recognize fear in many animal species. Yet there is no consensus in the scientific study of fear. Some argue that 'fear' is a psychological construct rather than something discoverable through scientific investigation. Others argue that the term 'fear' cannot properly be applied to animals because we cannot know whether they feel afraid. Studies in rodents show that there are highly specific brain circuits for fear, whereas findings from human neuroimaging seem to make the opposite claim. Here, I review the field and urge three approaches that could reconcile the debates. For one, we need a broadly comparative approach that would identify core components of fear conserved across phylogeny. This also pushes us towards the second point of emphasis: an ecological theory of fear that is essentially functional. Finally, we should aim even to incorporate the conscious experience of being afraid, reinvigorating the study of feelings across species. © 2013 Elsevier Ltd All rights reserved. Source


Oggier F.,Nanyang Technological University | Hassibi B.,California Institute of Technology
IEEE Transactions on Information Theory | Year: 2011

We consider the MIMO wiretap channel, that is, a MIMO broadcast channel where the transmitter sends some confidential information to one user which is a legitimate receiver, while the other user is an eavesdropper. Perfect secrecy is achieved when the transmitter and the legitimate receiver can communicate at some positive rate, while ensuring that the eavesdropper gets zero bits of information. In this paper, we compute the perfect secrecy capacity of the multiple antenna MIMO broadcast channel, where the number of antennas is arbitrary for both the transmitter and the two receivers. Our technique involves a careful study of a Sato-like upper bound via the solution of a certain algebraic Riccati equation. © 2011 IEEE. Source
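The achievable-rate expression underlying the secrecy capacity can be evaluated directly for a fixed input covariance. The sketch below assumes real-valued channels and an isotropic input covariance Q, and does not perform the optimization over Q that the paper resolves via the Riccati equation; the channel realizations are arbitrary assumptions:

```python
import numpy as np

# Achievable secrecy rate for the Gaussian MIMO wiretap channel with a fixed
# input covariance Q:
#   R_s(Q) = log2 det(I + H_m Q H_m^T) - log2 det(I + H_e Q H_e^T),
# where H_m is the legitimate (main) channel and H_e the eavesdropper's.
# The secrecy capacity maximizes R_s over Q >= 0 under a power constraint;
# here we only evaluate R_s at an assumed isotropic Q.

def secrecy_rate(Hm, He, Q):
    def cap(H):
        M = np.eye(H.shape[0]) + H @ Q @ H.conj().T
        return np.log2(np.linalg.det(M).real)
    return cap(Hm) - cap(He)

rng = np.random.default_rng(1)
nt, nm, ne = 4, 4, 2
Hm = rng.standard_normal((nm, nt))        # legitimate channel (assumed real)
He = 0.3 * rng.standard_normal((ne, nt))  # weaker eavesdropper channel
Q = np.eye(nt)                            # isotropic input, unit power/antenna
print(f"secrecy rate with isotropic Q: {secrecy_rate(Hm, He, Q):.2f} bits/use")
```

When the eavesdropper's channel is degraded relative to the main channel, the rate is positive; if both receivers see the same channel, it is exactly zero.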


Beigi S.,California Institute of Technology
Quantum Information and Computation | Year: 2010

Although it is believed unlikely that NP-hard problems admit efficient quantum algorithms, it has been shown that a quantum verifier can solve NP-complete problems given a short quantum proof; more precisely, NP ⊆ QMA_log(2), where QMA_log(2) denotes the class of quantum Merlin-Arthur games in which there are two unentangled provers who send two logarithmic size quantum witnesses to the verifier. The inclusion NP ⊆ QMA_log(2) was proved by Blier and Tapp by giving a quantum Merlin-Arthur protocol for 3-coloring with perfect completeness and gap 1/(24n^6). Moreover, Aaronson et al. have shown the above inclusion with a constant gap by considering O(√n) witnesses of logarithmic size. However, we still do not know if QMA_log(2) with a constant gap contains NP. In this paper, we show that 3-SAT admits a QMA_log(2) protocol with gap 1/n^(3+ε) for every constant ε > 0. © Rinton Press. Source


Eiler J.M.,California Institute of Technology
Quaternary Science Reviews | Year: 2011

Carbonate clumped isotope thermometry is a relatively new paleotemperature proxy based on measurements of the degree of ordering of 13C and 18O into bonds with each other (making the 13C18O16O2^2- ion group) in lattices of carbonate minerals. This technique has several unusual properties that complement existing methods of paleoclimate reconstruction. Most importantly, it is based on a homogeneous isotope exchange equilibrium and thus constrains temperature independent of the isotopic composition of waters from which carbonates grew. This method also appears to be generally insensitive to 'vital effects' that compromise many other paleothermometers based on the chemical properties of biominerals or organic matter, at least for those organisms that have been subjected to systematic study to date (corals and foraminifera); however, discrepancies among some calibrations, particularly at low temperatures, may point toward the existence of vital effects in mollusks and other organisms. This review discusses the principles and calibrations of the technique, its use in combination with conventional stable isotope measurements to constrain the δ18O of past waters, preservation of paleotemperatures in ancient materials, as well as current problems in our understanding of calibrations and interlaboratory data comparisons. © 2011 Elsevier Ltd. Source


Konig R.,California Institute of Technology
Quantum Information and Computation | Year: 2010

Perturbative gadgets were originally introduced to generate effective k-local interactions in the low-energy sector of a 2-local Hamiltonian. Extending this idea, we present gadgets which are specifically suited for realizing Hamiltonians exhibiting non-abelian anyonic excitations. At the core of our construction is a perturbative analysis of a widely used hopping-term Hamiltonian. We show that in the low-energy limit, this Hamiltonian can be approximated by a certain ordered product of operators. In particular, this provides a simplified realization of Kitaev's quantum double Hamiltonians. Source


Kapustin A.,California Institute of Technology | Thorngren R.,University of California at Berkeley
Physical Review Letters | Year: 2014

We study 't Hooft anomalies for a global discrete internal symmetry G. We construct examples of bosonic field theories in three dimensions with a nonvanishing 't Hooft anomaly for a discrete global symmetry. We also construct field theories in three dimensions with a global discrete internal symmetry G1×G2 such that gauging G1 necessarily breaks G2 and vice versa. This is analogous to the Adler-Bell-Jackiw axial anomaly in four dimensions and parity anomaly in three dimensions. © 2014 American Physical Society. Source


Vazirani U.,University of California at Berkeley | Vidick T.,California Institute of Technology
Physical Review Letters | Year: 2014

Quantum cryptography promises levels of security that are impossible to replicate in a classical world. Can this security be guaranteed even when the quantum devices on which the protocol relies are untrusted? This central question dates back to the early 1990s when the challenge of achieving device-independent quantum key distribution was first formulated. We answer this challenge by rigorously proving the device-independent security of a slight variant of Ekert's original entanglement-based protocol against the most general (coherent) attacks. The resulting protocol is robust: While assuming only that the devices can be modeled by the laws of quantum mechanics and are spatially isolated from each other and from any adversary's laboratory, it achieves a linear key rate and tolerates a constant noise rate in the devices. In particular, the devices may have quantum memory and share arbitrary quantum correlations with the eavesdropper. The proof of security is based on a new quantitative understanding of the monogamous nature of quantum correlations in the context of a multiparty protocol. © 2014 American Physical Society. Source


Koumakis N.,University of Crete | Brady J.F.,California Institute of Technology | Petekidis G.,University of Crete
Physical Review Letters | Year: 2013

The yielding behavior of hard sphere glasses under large-amplitude oscillatory shear has been studied by probing the interplay of Brownian motion and shear-induced diffusion at varying oscillation frequencies. Stress, structure and dynamics are followed by experimental rheology and Brownian dynamics simulations. Brownian-motion-assisted cage escape dominates at low frequencies, while escape through shear-induced collisions dominates at high ones, both related to a yielding peak in G′′. At intermediate frequencies a double peak in G′′, novel for hard sphere glasses, is revealed, reflecting both mechanisms. At high frequencies and strain amplitudes a persistent structural anisotropy causes a stress drop within the cycle after strain reversal, while higher stress harmonics are minimized at certain strain amplitudes, indicating an apparent harmonic response. © 2013 American Physical Society. Source


Fujiwara Y.,California Institute of Technology
Physical Review A - Atomic, Molecular, and Optical Physics | Year: 2013

Locating the boundaries of consecutive blocks of quantum information is a fundamental building block for advanced quantum computation and quantum communication systems. We develop a coding theoretic method for properly locating boundaries of quantum information without relying on external synchronization when block synchronization is lost. The method also protects qubits from decoherence in a manner similar to conventional quantum error-correcting codes, seamlessly achieving synchronization recovery and error correction. A family of quantum codes that are simultaneously synchronizable and error-correcting is given through this approach. © 2013 American Physical Society. Source


Batygin K.,California Institute of Technology | Morbidelli A.,Observatoire de la Côte d'Azur | Holman M.J.,Harvard - Smithsonian Center for Astrophysics
Astrophysical Journal | Year: 2015

On timescales that greatly exceed an orbital period, typical planetary orbits evolve in a stochastic yet stable fashion. On even longer timescales, however, planetary orbits can spontaneously transition from bounded to unbound chaotic states. Large-scale instabilities associated with such behavior appear to play a dominant role in shaping the architectures of planetary systems, including our own. Here we show how such transitions are possible, focusing on the specific case of the long-term evolution of Mercury. We develop a simple analytical model for Mercury's dynamics and elucidate the origins of its short-term stochastic behavior as well as of its sudden progression to unbounded chaos. Our model allows us to estimate the timescale on which this transition is likely to be triggered, i.e., the dynamical lifetime of the solar system as we know it. The formulated theory is consistent with the results of numerical simulations and is broadly applicable to extrasolar planetary systems dominated by secular interactions. These results constitute a significant advancement in our understanding of the processes responsible for sculpting the dynamical structures of generic planetary systems. © 2015. The American Astronomical Society. All rights reserved. Source


Kedlaya K.S.,Massachusetts Institute of Technology | Umans C.,California Institute of Technology
SIAM Journal on Computing | Year: 2011

We obtain randomized algorithms for factoring degree n univariate polynomials over F_q requiring O(n^{1.5+o(1)} log^{1+o(1)} q + n^{1+o(1)} log^{2+o(1)} q) bit operations. When log q < n, this is asymptotically faster than the best previous algorithms [J. von zur Gathen and V. Shoup, Comput. Complexity, 2 (1992), pp. 187-224; E. Kaltofen and V. Shoup, Math. Comp., 67 (1998), pp. 1179-1197]; for log q ≥ n, it matches the asymptotic running time of the best known algorithms. The improvements come from new algorithms for modular composition of degree n univariate polynomials, which is the asymptotic bottleneck in fast algorithms for factoring polynomials over finite fields. The best previous algorithms for modular composition use O(n^{(ω+1)/2}) field operations, where ω is the exponent of matrix multiplication [R. P. Brent and H. T. Kung, J. Assoc. Comput. Mach., 25 (1978), pp. 581-595], with a slight improvement in the exponent achieved by employing fast rectangular matrix multiplication [X. Huang and V. Y. Pan, J. Complexity, 14 (1998), pp. 257-299]. We show that modular composition and multipoint evaluation of multivariate polynomials are essentially equivalent, in the sense that an algorithm for one achieving exponent α implies an algorithm for the other with exponent α + o(1), and vice versa. We then give two new algorithms that solve the problem near-optimally: an algebraic algorithm for fields of characteristic at most n^{o(1)}, and a nonalgebraic algorithm that works in arbitrary characteristic. The latter algorithm works by lifting to characteristic 0, applying a small number of rounds of multimodular reduction, and finishing with a small number of multidimensional FFTs. The final evaluations are reconstructed using the Chinese remainder theorem. As a bonus, this algorithm produces a very efficient data structure supporting polynomial evaluation queries, which is of independent interest.
Our algorithms use techniques that are commonly employed in practice, in contrast to all previous subquadratic algorithms for these problems, which relied on fast matrix multiplication. © 2011 Society for Industrial and Applied Mathematics. Source
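Modular composition, the bottleneck operation named above, computes f(g(x)) mod h(x) over F_p. A naive quadratic-time sketch for intuition (this is emphatically not the paper's nearly linear-time algorithm; coefficient lists are lowest-degree-first and p is assumed prime):

```python
# Naive modular composition f(g) mod h over GF(p), via Horner's rule.
def poly_mod(a, h, p):
    """Remainder of a divided by h (mod p), h not necessarily monic."""
    a, dh = a[:], len(h) - 1
    inv = pow(h[-1], p - 2, p)            # leading-coefficient inverse (Fermat)
    while len(a) - 1 >= dh and any(a):
        if a[-1] == 0:
            a.pop(); continue
        c, s = a[-1] * inv % p, len(a) - 1 - dh
        for i, hc in enumerate(h):        # subtract c * x^s * h
            a[s + i] = (a[s + i] - c * hc) % p
        a.pop()
    return a or [0]

def poly_mul(a, b, p):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

def mod_compose(f, g, h, p):
    res = [0]
    for c in reversed(f):                 # Horner: res = res*g + c  (mod h)
        res = poly_mod(poly_mul(res, g, p), h, p)
        res[0] = (res[0] + c) % p
    return res
```

For h = x + 5 over GF(7), which has root x = 2, composing f = 1 + 2x + 3x² with g = 4 + x reduces to the scalar f(g(2)) = f(6) = 121 ≡ 2 (mod 7).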


Cho I.H.,California Institute of Technology
Earthquake Spectra | Year: 2013

We embarked upon developing a novel parallel simulation platform that is rooted in microphysical mechanisms. Primarily aiming at large-scale reinforced-concrete structures exposed to cyclic loading, we sought to settle the question as to how to capture nonlinear shear, localized damage and progressive buckling of reinforcing bar. We proposed a tribology-inspired three-dimensional (3-D) interlocking mechanism in the well-established framework of multidirectional smeared crack models. Strong correlation between random material property and localized damage has been shown, notably at the global system level. An automated platform has been suggested to capture progressive buckling phenomena. Validation and applications straddle a wide range, from small laboratory tests to large-scale 3-D experiments, successfully offering a clear causal pathway between underlying physical mechanisms and the unresolved issues addressed above. © 2013, Earthquake Engineering Research Institute. Source


Kim I.H.,California Institute of Technology
Physical Review Letters | Year: 2013

A general inequality between entanglement entropy and the number of topologically ordered states is derived, even without using the properties of the parent Hamiltonian or the formalism of topological quantum field theory. Given a quantum state, we obtain an upper bound on the number of distinct states that are locally indistinguishable from it. The upper bound is determined only by the entanglement entropy of some local subsystems. As an example, we show that log N ≤ 2γ for a large class of topologically ordered systems on a torus, where N is the number of topologically protected states and γ is the constant subcorrection term of the entanglement entropy. We discuss applications to quantum many-body systems that do not have any low-energy topological quantum field theory description, as well as tradeoff bounds for general quantum error correcting codes. © 2013 American Physical Society. Source
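As a concrete instance of the bound (our illustration, not part of the abstract): the toric code on a torus has N = 4 locally indistinguishable ground states and topological entanglement entropy γ = log 2, so the inequality is saturated:

```latex
\log N \;=\; \log 4 \;=\; 2\log 2 \;=\; 2\gamma .
```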


Groot P.J.,California Institute of Technology | Groot P.J.,Radboud University Nijmegen
Astrophysical Journal | Year: 2012

In eclipsing binaries the stellar rotation of the two components will cause a rotational Doppler beaming during eclipse ingress and egress when only part of the eclipsed component is covered. For eclipsing binaries with fast spinning components this photometric analog of the well-known spectroscopic Rossiter-McLaughlin effect can exceed the strength of the orbital effect. Example light curves are shown for a detached double white dwarf binary, a massive O-star binary and a transiting exoplanet case, similar to WASP-33b. Inclusion of the rotational Doppler beaming in eclipsing systems is a prerequisite for deriving the correct stellar parameters from fitting high-quality photometric light curves and can be used to determine stellar obliquities as well as, e.g., an independent measure of the rotational velocity in those systems that may be expected to be fully synchronized. © 2012 The American Astronomical Society. All rights reserved. Source


Wegg C.,California Institute of Technology
Astrophysical Journal | Year: 2012

We describe a pseudo-Newtonian potential which, to within 1% error at all angular momenta, reproduces the precession due to general relativity of particles whose specific orbital energy is small compared to c² in the Schwarzschild metric. For bound orbits, the constraint of low energy is equivalent to requiring the apoapsis of a particle to be large compared to the Schwarzschild radius. Such low-energy orbits are ubiquitous close to supermassive black holes in galactic nuclei, but the potential is relevant in any context containing particles on low-energy orbits. Like the more complex post-Newtonian expressions, the potential correctly reproduces the precession in the far field, but also correctly reproduces the position and magnitude of the logarithmic divergence in precession for low angular momentum orbits. An additional advantage lies in its simplicity, both in computation and implementation. We also provide two simpler, but less accurate potentials, for cases where orbits always remain at large angular momenta, or when the extra accuracy is not needed. In all of the presented cases, the accuracy in precession in low-energy orbits exceeds that of the well-known potential of Paczyński & Wiita, which has ∼30% error in the precession at all angular momenta. © 2012. The American Astronomical Society. All rights reserved. Source
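The far-field precession that such potentials are built to reproduce is the standard Schwarzschild apsidal advance Δφ = 6πGM/(c²a(1−e²)) per orbit; evaluating it for Mercury recovers the familiar ∼43″ per century:

```python
import math

# Far-field Schwarzschild apsidal precession per orbit (standard GR result):
#   dphi = 6*pi*G*M / (c^2 * a * (1 - e^2))
G, c = 6.674e-11, 2.998e8           # SI units
M_sun = 1.989e30                    # kg
a, e = 5.791e10, 0.2056             # Mercury: semimajor axis (m), eccentricity
T_orbit = 87.97                     # Mercury's orbital period (days)

dphi = 6*math.pi*G*M_sun / (c**2 * a * (1 - e**2))   # radians per orbit
orbits_per_century = 100*365.25 / T_orbit
arcsec_per_century = dphi * orbits_per_century * (180/math.pi) * 3600
# ~43 arcsec per century: Mercury's famous anomalous perihelion advance
```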


Tocheva E.I.,California Institute of Technology
Cold Spring Harbor perspectives in biology | Year: 2010

Electron cryotomography (ECT) is an emerging technology that allows thin samples such as macromolecular complexes and small bacterial cells to be imaged in 3-D in a nearly native state to "molecular" (approximately 4 nm) resolution. As such, ECT is beginning to deliver long-awaited insight into the positions and structures of cytoskeletal filaments, cell wall elements, motility machines, chemoreceptor arrays, internal compartments, and other ultrastructures. This article describes the technique and summarizes its contributions to bacterial cell biology. For comparable recent reviews, see (Subramaniam 2005; Jensen and Briegel 2007; Murphy and Jensen 2007; Li and Jensen 2009). For reviews on the history, technical details, and broader application of electron tomography in general, see for example (Subramaniam and Milne 2004; Lucić et al. 2005; Leis et al. 2008; Midgley and Dunin-Borkowski 2009). Source


Nakayama Yu.,California Institute of Technology
Physical Review D - Particles, Fields, Gravitation and Cosmology | Year: 2013

We discuss boundary conditions for conformal field theories that preserve the boundary Poincaré invariance. As in the bulk field theories, a question arises whether boundary scale invariance leads to boundary conformal invariance. With unitarity, Cardy's condition of vanishing momentum flow is necessary for the boundary conformal invariance, but it is not sufficient in general. We show both a proof and a counterexample of the enhancement of boundary conformal invariance in (1+1) dimension, which depends on the extra assumption we make. In (1+2) dimension, Cardy's condition is shown to be sufficient. In higher dimensions, we give a perturbative argument in favor of the enhancement based on the boundary g-theorem. With the help of the holographic dual recently proposed, we show a holographic proof of the boundary conformal invariance under the assumption of the boundary strict null energy condition, which also gives a sufficient condition for the strong boundary g-theorem. © 2013 American Physical Society. Source


Thrane E.,California Institute of Technology
Physical Review D - Particles, Fields, Gravitation and Cosmology | Year: 2013

A stochastic gravitational-wave background (SGWB) can arise from the superposition of many independent events. If the fraction of time that events emit in some frequency band is sufficiently high, the resulting background is Gaussian in that band, which is to say that it is characterized only by a gravitational-wave strain power spectrum. Alternatively, if this fraction is low, we expect a non-Gaussian background, characterized by intermittent subthreshold signals. Many experimentally accessible models of the SGWB, such as the SGWB arising from compact binary coalescences, are expected to be of this non-Gaussian variety. Primordial backgrounds from the early Universe, on the other hand, are more likely to be Gaussian. Measuring the Gaussianity of the SGWB can therefore provide additional information about its origin. In this paper we introduce a novel maximum likelihood estimator that can be used to estimate the non-Gaussian component of an SGWB signature measured in a network of interferometers. This method can be robustly applied to spatially separated interferometers with colored, non-Gaussian noise. Furthermore, it can be cast as a generalization of the widely used stochastic radiometer algorithm. © 2013 American Physical Society. Source
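The Gaussian-versus-intermittent distinction described above can be illustrated with a toy single-detector mixture likelihood in which each sample carries a signal burst with duty cycle ξ; the numbers and the grid-search estimator here are our simplifications, not the paper's multi-detector method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy intermittent background: each sample carries a Gaussian burst
# (variance S) with probability xi (the duty cycle), atop noise (variance N).
xi_true, S, N, n = 0.3, 4.0, 1.0, 20000
has_signal = rng.random(n) < xi_true
x = rng.normal(0, np.sqrt(N + S*has_signal))

def neg_log_like(xi, x, S, N):
    # Mixture density: xi*Normal(0, N+S) + (1-xi)*Normal(0, N)
    p_sig = np.exp(-x**2 / (2*(N + S))) / np.sqrt(2*np.pi*(N + S))
    p_noise = np.exp(-x**2 / (2*N)) / np.sqrt(2*np.pi*N)
    return -np.sum(np.log(xi*p_sig + (1 - xi)*p_noise))

# Grid-search maximum likelihood estimate of the duty cycle.
grid = np.linspace(0.01, 0.99, 99)
xi_hat = grid[np.argmin([neg_log_like(g, x, S, N) for g in grid])]
```

A duty cycle estimate near 1 would indicate a Gaussian band; a small ξ̂ flags the intermittent, non-Gaussian regime the abstract targets.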


Tseng C.-Y.,California Institute of Technology
Physical Review D - Particles, Fields, Gravitation and Cosmology | Year: 2013

Quantum decoherence and the transition to semiclassical behavior during inflation have been extensively considered in the literature. In this paper, we use a simple model to analyze the same process in ekpyrosis. Our result is that the quantum to classical transition would not happen during an ekpyrotic phase even for superhorizon modes, and therefore the fluctuations cannot be interpreted as classical. This implies the prediction of a scale-free power spectrum in an ekpyrotic/cyclic universe model requires more inspection. © 2013 American Physical Society. Source


Schwarz J.H.,California Institute of Technology
Journal of High Energy Physics | Year: 2014

It is conjectured that the world-volume action of a probe D3-brane in an AdS_5 × S^5 background of type IIB superstring theory, with one unit of flux, can be reinterpreted as the exact effective action (or highly effective action) for U(2) N = 4 super Yang-Mills theory on the Coulomb branch. An analogous conjecture for U(2)_k × U(2)_{-k} ABJM theory is also presented. The main evidence supporting these conjectures is that the brane actions have all of the expected symmetries and dualities. Highly effective actions have general coordinate invariance, even though they describe nongravitational theories. © 2014 SISSA. Source


Candes E.J.,Stanford University | Plan Y.,California Institute of Technology | Plan Y.,Stanford University
IEEE Transactions on Information Theory | Year: 2011

This paper presents several novel theoretical results regarding the recovery of a low-rank matrix from just a few measurements consisting of linear combinations of the matrix entries. We show that properly constrained nuclear-norm minimization stably recovers a low-rank matrix from a constant number of noisy measurements per degree of freedom; this seems to be the first result of this nature. Further, with high probability, the recovery error from noisy data is within a constant of three targets: 1) the minimax risk, 2) an "oracle" error that would be available if the column space of the matrix were known, and 3) a more adaptive "oracle" error which would be available with the knowledge of the column space corresponding to the part of the matrix that stands above the noise. Lastly, the error bounds regarding low-rank matrices are extended to provide an error bound when the matrix has full rank with decaying singular values. The analysis in this paper is based on the restricted isometry property (RIP). © 2011 IEEE. Source
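The workhorse inside nuclear-norm minimization is singular value soft-thresholding, the proximal operator of the nuclear norm; a denoising sketch on a synthetic rank-2 matrix (the threshold is a rule of thumb we assume, not the paper's calibrated constant):

```python
import numpy as np

rng = np.random.default_rng(1)

# Rank-2 ground truth observed through noisy entrywise measurements.
n, r, sigma = 50, 2, 0.1
M = rng.normal(size=(n, r)) @ rng.normal(size=(r, n))
Y = M + sigma*rng.normal(size=(n, n))

def svt(Y, tau):
    # Singular value soft-thresholding: the proximal operator of the
    # nuclear norm, the basic step inside nuclear-norm minimization.
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0)) @ Vt

# Threshold at roughly the noise spectral level ~ 2*sigma*sqrt(n).
M_hat = svt(Y, 2*sigma*np.sqrt(n))
err = np.linalg.norm(M_hat - M) / np.linalg.norm(M)
```

Because the signal's singular values dwarf the noise level here, the thresholded estimate recovers M with small relative error, the qualitative behavior the paper's RIP analysis makes precise.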


Bravyi S.,IBM | Haah J.,California Institute of Technology
Physical Review Letters | Year: 2013

A big open question in quantum information theory concerns the feasibility of a self-correcting quantum memory. A quantum state recorded in such a memory can be stored reliably for a macroscopic time without need for active error correction, if the memory is in contact with a cold enough thermal bath. Here we report analytic and numerical evidence for self-correcting behavior in the quantum spin lattice model known as the 3D cubic code. We prove that its memory time is at least L^{cβ}, where L is the lattice size, β is the inverse temperature of the bath, and c > 0 is a constant coefficient. However, this bound applies only if the lattice size L does not exceed a critical value which grows exponentially with β. In that sense, the model can be called a partially self-correcting memory. We also report a Monte Carlo simulation indicating that our analytic bounds on the memory time are tight up to constant coefficients. To model the readout step we introduce a new decoding algorithm, which can be implemented efficiently for any topological stabilizer code. A longer version of this work can be found in Bravyi and Haah, arXiv:1112.3252. © 2013 American Physical Society. Source


Arkani-Hamed N.,Institute for Advanced Study | Trnka J.,California Institute of Technology
Journal of High Energy Physics | Year: 2014

Perturbative scattering amplitudes in gauge theories have remarkable simplicity and hidden infinite dimensional symmetries that are completely obscured in the conventional formulation of field theory using Feynman diagrams. This suggests the existence of a new understanding for scattering amplitudes where locality and unitarity do not play a central role but are derived consequences from a different starting point. In this note we provide such an understanding for N = 4 SYM scattering amplitudes in the planar limit, which we identify as “the volume” of a new mathematical object — the Amplituhedron — generalizing the positive Grassmannian. Locality and unitarity emerge hand-in-hand from positive geometry. © The Authors. Source


Favata M.,Montclair State University | Favata M.,California Institute of Technology | Favata M.,University of Wisconsin - Milwaukee
Physical Review Letters | Year: 2014

The coalescence of two neutron stars is an important gravitational wave source for LIGO and other detectors. Numerous studies have considered the precision with which binary parameters (masses, spins, Love numbers) can be measured. Here I consider the accuracy with which these parameters can be determined in the presence of systematic errors due to waveform approximations. These approximations include truncation of the post-Newtonian (PN) series and neglect of neutron star (NS) spin, tidal deformation, or orbital eccentricity. All of these effects can yield systematic errors that exceed statistical errors for plausible parameter values. In particular, neglecting spin, eccentricity, or high-order PN terms causes a significant bias in the NS Love number. Tidal effects will not be measurable with PN inspiral waveforms if these systematic errors are not controlled. © 2014 American Physical Society. Source


Fujiwara Y.,California Institute of Technology
Physical Review Letters | Year: 2013

Known quantum error correction schemes are typically able to take advantage of only a limited class of classical error-correcting codes. Entanglement-assisted quantum error correction is a partial solution which made it possible to exploit any classical linear codes over the binary or quaternary finite field. However, the known entanglement-assisted scheme requires noiseless qubits that help correct quantum errors on noisy qubits, which can be too severe an assumption. We prove that a more relaxed and realistic assumption is sufficient by presenting encoding and decoding operations assisted by qubits on which quantum errors of one particular kind may occur. As in entanglement assistance, our scheme can import any binary or quaternary linear codes. If the auxiliary qubits are noiseless, our codes become entanglement-assisted codes, and saturate the quantum Singleton bound when the underlying classical codes are maximum distance separable. © 2013 American Physical Society. Source


Galley C.R.,Jet Propulsion Laboratory | Galley C.R.,California Institute of Technology
Physical Review Letters | Year: 2013

Hamilton's principle of stationary action lies at the foundation of theoretical physics and is applied in many other disciplines from pure mathematics to economics. Despite its utility, Hamilton's principle has a subtle pitfall that often goes unnoticed in physics: it is formulated as a boundary value problem in time but is used to derive equations of motion that are solved with initial data. This subtlety can have undesirable effects. I present a formulation of Hamilton's principle that is compatible with initial value problems. Remarkably, this leads to a natural formulation for the Lagrangian and Hamiltonian dynamics of generic nonconservative systems, thereby filling a long-standing gap in classical mechanics. Thus, dissipative effects, for example, can be studied with new tools that may have applications in a variety of disciplines. The new formalism is demonstrated by two examples of nonconservative systems: an object moving in a fluid with viscous drag forces and a harmonic oscillator coupled to a dissipative environment. © 2013 American Physical Society. Source
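As an illustration of the doubled-variable construction (our sketch; the conventions, e.g. q_± ≡ (q_1 ± q_2)/2 and the specific choice of K, are assumptions rather than quotations from the paper):

```latex
S[q_1,q_2] \;=\; \int dt\,\Big[\,L(q_1,\dot q_1) \;-\; L(q_2,\dot q_2)
      \;+\; K(q_1,q_2,\dot q_1,\dot q_2,t)\,\Big]
% Varying and then taking the physical limit q_1 = q_2 = q gives
\frac{d}{dt}\frac{\partial L}{\partial \dot q} \;-\; \frac{\partial L}{\partial q}
      \;=\; \left[\frac{\partial K}{\partial q_-}
      \;-\; \frac{d}{dt}\frac{\partial K}{\partial \dot q_-}\right]_{q_1 = q_2 = q}
```

With L = ½mq̇² − ½kq² and the choice K = −λq̇_+ q_-, the right-hand side evaluates to −λq̇ in the physical limit, yielding the damped oscillator mq̈ + λq̇ + kq = 0 as a minimal nonconservative example.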


Deser S.,California Institute of Technology | Deser S.,Brandeis University | Waldron A.,University of California at Davis
Physical Review Letters | Year: 2013

We show, by analyzing its characteristics, that the ghost-free, five-degree-of-freedom Wess-Zumino massive gravity model admits superluminal shock wave solutions and thus is acausal. Ironically, this pathology arises from the very constraint that removes the (sixth) Boulware-Deser ghost mode. © 2013 American Physical Society. Source


Bravyi S.,IBM | Haah J.,California Institute of Technology
Physical Review Letters | Year: 2011

We explore the feasibility of a quantum self-correcting memory based on 3D spin Hamiltonians with topological quantum order in which thermal diffusion of topological defects is suppressed by macroscopic energy barriers. To this end we characterize the energy landscape of stabilizer code Hamiltonians with local bounded-strength interactions which have a topologically ordered ground state but do not have stringlike logical operators. We prove that any sequence of local errors mapping a ground state of such a Hamiltonian to an orthogonal ground state must cross an energy barrier growing at least as a logarithm of the lattice size. Our bound on the energy barrier is tight up to a constant factor for one particular 3D spin Hamiltonian. © 2011 American Physical Society. Source


Piro A.L.,California Institute of Technology
Astrophysical Journal | Year: 2012

The rising luminosity of the recent, nearby supernova 2011fe shows a quadratic dependence with time during the first 0.5-4 days. In addition, studies of the composite light curves formed from stacking together many Type Ia supernovae (SNe Ia) have found similar power-law indices for the rise, but may also show some dispersion that may indicate diversity. I explore what range of power-law rises are possible due to the presence of radioactive material near the surface of the exploding white dwarf (WD). I summarize what constraints such a model places on the structure of the progenitor and the distribution and velocity of ejecta. My main conclusion is that for the inferred explosion time for SN 2011fe, its rise requires an increasing mass fraction X_56 ≈ (4-6) × 10^-2 of ^56Ni distributed between a depth of 10^-2 and 0.3 M_⊙ below the WD's surface. Radioactive elements this shallow are not found in simulations of a single C/O detonation. Scenarios that may produce this material include helium-shell burning during a double-detonation ignition, a gravitationally confined detonation, and a subset of deflagration-to-detonation transition models. In general, the power-law rise can differ from quadratic depending on the details of the velocity, density, and radioactive deposition gradients in a given event. Therefore, comparisons of this work with observed bolometric rises of SNe Ia would place strong constraints on the properties of the shallow outer layers, providing important clues for identifying the elusive progenitors of SNe Ia. © 2012. The American Astronomical Society. All rights reserved. Source
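The power-law index of a rising light curve, such as the quadratic rise cited for SN 2011fe, can be estimated by a straight-line fit in log-log space; a synthetic sketch (noise level and luminosity scale hypothetical):

```python
import numpy as np

# Rising light curve L(t) ~ t^n: recover the index n by linear regression
# of log L against log t (synthetic quadratic rise with 2% noise).
rng = np.random.default_rng(2)
t = np.linspace(0.5, 4.0, 40)                           # days since explosion
L = 1e40 * t**2 * (1 + 0.02*rng.normal(size=t.size))    # erg/s, hypothetical

slope, intercept = np.polyfit(np.log(t), np.log(L), 1)
# slope ~ 2 recovers the quadratic rise
```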


Rothstein I.Z.,California Institute of Technology
Physical Review Letters | Year: 2013

We present a higher dimensional model where gravity is bound to a brane due to Anderson localization. The extra dimensions are taken to be a disordered crystal of branes, with randomly distributed tensions of order the fundamental scale. Such geometries bind the graviton and thus allow for arbitrarily large extra dimensions even when the curvature is small. Thus this model is quite distinct from that of Randall and Sundrum where localization is a consequence of curvature effects in the bulk. The hierarchy problem can be solved by having the standard model brane live a distance away from the brane on which the graviton is localized. The statistical properties of the system are worked out and it is shown that the scenario leads to a continuum of four dimensional theories with differing strengths of gravitational interactions. We live on one particular brane whose gravitational constant is G_N. © 2013 American Physical Society. Source


Thrane E.,California Institute of Technology | Romano J.D.,University of Texas at Brownsville
Physical Review D - Particles, Fields, Gravitation and Cosmology | Year: 2013

We propose a graphical representation of detector sensitivity curves for stochastic gravitational-wave backgrounds that takes into account the increase in sensitivity that comes from integrating over frequency in addition to integrating over time. This method is valid for backgrounds that have a power-law spectrum in the analysis band. We call these graphs "power-law integrated curves." For simplicity, we consider cross-correlation searches for unpolarized and isotropic stochastic backgrounds using two or more detectors. We apply our method to construct power-law integrated sensitivity curves for second-generation ground-based detectors such as Advanced LIGO, space-based detectors such as LISA and the Big Bang Observer, and timing residuals from a pulsar timing array. The code used to produce these plots is available at https://dcc.ligo.org/LIGO-P1300115/public for researchers interested in constructing similar sensitivity curves. © 2013 American Physical Society. Source
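The construction of a power-law integrated curve can be sketched as follows: for each spectral index β, find the amplitude Ω_β that reaches a threshold SNR after integrating over frequency and time, then take the envelope over β. The effective noise curve Ω_eff below is invented purely for illustration and is not any detector's actual sensitivity:

```python
import numpy as np

f = np.logspace(1, 3, 200)                 # Hz
f_ref, T, rho_thr = 100.0, 1.0e7, 1.0      # reference freq, obs. time (s), SNR threshold
Omega_eff = 1e-9 * ((f/f_ref)**-4 + 1 + (f/f_ref)**3)   # toy effective noise curve

betas = np.arange(-8, 9)
pi_curve = np.zeros_like(f)
for beta in betas:
    # SNR^2 = 2*T * Omega_beta^2 * integral of (f/f_ref)^(2*beta) / Omega_eff^2 df
    y = (f/f_ref)**(2*beta) / Omega_eff**2
    integral = np.sum(0.5*(y[1:] + y[:-1])*np.diff(f))   # trapezoid rule
    Omega_beta = rho_thr / np.sqrt(2*T*integral)
    # Envelope of the detectable power laws = the PI sensitivity curve
    pi_curve = np.maximum(pi_curve, Omega_beta*(f/f_ref)**beta)
```

By construction, any power-law background lying entirely above the envelope would be detected with SNR above threshold, which is the graphical property the paper advocates.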


Nakayama Y.,California Institute of Technology
Physical Review D - Particles, Fields, Gravitation and Cosmology | Year: 2013

We give a holographic explanation of how the renormalization group approach to singular perturbations in nonlinear differential equations proposed by Chen, Goldenfeld, and Oono is indeed equivalent to a renormalization group method in quantum field theories proposed by Gell-Mann and Low via AdS/CFT correspondence. © 2013 American Physical Society. Source


Bergman D.L.,California Institute of Technology
Physical Review Letters | Year: 2011

The strong topological insulator in 3D is expected to realize a quantized magnetoelectric response, the so-called axion response. However, many of the materials predicted to be topological insulators have turned out to be metallic, with bulk Fermi surfaces. Following the result of Bergman and Refael that the surface states of the topological insulator persist even when the band structure gap is closed, we explore the fate of the magnetoelectric response in such systems. We find that a nonquantized magnetoelectric coupling remains once a bulk Fermi surface opens. More generally, we find higher-dimensional analogs of the intrinsic anomalous Hall effect for all Chern forms: quantized transport coefficients in the gapped case become nonquantized when the gap is closed. In particular, the nonquantized magnetoelectric response in 3D descends from the intrinsic anomalous Hall effect analog in 4D. © 2011 American Physical Society. Source


Morton T.D.,California Institute of Technology
Astrophysical Journal | Year: 2012

Surveys searching for transiting exoplanets have found many more candidates than they have been able to confirm as true planets. This situation is especially acute with the Kepler survey, which has found over 2300 candidates but has to date confirmed only a small fraction of them as planets. I present here a general procedure that can quickly be applied to any planet candidate to calculate its false positive probability. This procedure takes into account the period, depth, duration, and shape of the signal; the colors of the target star; arbitrary spectroscopic or imaging follow-up observations; and informed assumptions about the populations and distributions of field stars and multiple-star properties. Applying these methods to a sample of known Kepler planets, I demonstrate that many signals can be validated with very limited follow-up observations: in most cases with only a spectrum and an adaptive optics image. Additionally, I demonstrate that this procedure can reliably identify false positive signals. Because of the computational efficiency of this analysis, it is feasible to apply it to all Kepler planet candidates in the near future, and it will streamline the follow-up efforts for Kepler and other current and future transit surveys. © 2012. The American Astronomical Society. All rights reserved. Source
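The core bookkeeping behind such a procedure is Bayesian: the false positive probability is one minus the planet scenario's share of the total evidence. A toy sketch with entirely hypothetical priors and likelihoods (the paper's method folds in signal shape, stellar colors, and follow-up constraints):

```python
# Toy false positive probability: weigh the planet hypothesis against
# false positive scenarios. All numbers below are hypothetical.
priors = {"planet": 0.40, "eclipsing_binary": 0.05,
          "background_EB": 0.30, "hierarchical_EB": 0.25}   # scenario priors
likes  = {"planet": 0.80, "eclipsing_binary": 0.01,
          "background_EB": 0.02, "hierarchical_EB": 0.03}   # P(signal | scenario)

evidence = {k: priors[k]*likes[k] for k in priors}
total = sum(evidence.values())
fpp = 1 - evidence["planet"]/total   # false positive probability
```

A candidate with FPP below some validation threshold (e.g. ~1%) could be declared a validated planet; follow-up observations enter by sharpening the per-scenario likelihoods.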


Brown M.E.,California Institute of Technology
Astrophysical Journal Letters | Year: 2013

The formation of the largest objects in the Kuiper belt, with measured densities of ∼1.5 g cm^-3 and higher, from the coagulation of small bodies, with measured densities below 1 g cm^-3, is difficult to explain without invoking significant porosity in the smallest objects. If such porosity does occur, measured densities should begin to increase at the size at which significant porosity is no longer supported. Among the asteroids, this transition occurs for diameters larger than ∼350 km. In the Kuiper belt, no density measurements have been made between ∼350 km and ∼850 km, the diameter range where porosities might first begin to drop. Objects in this range could provide key tests of the rock fraction of small Kuiper belt objects (KBOs). Here we report the orbital characterization, mass, and density determination of the 2002 UX25 system in the Kuiper belt. For this object, with a diameter of ∼650 km, we find a density of 0.82 ± 0.11 g cm^-3, making it the largest known solid object in the solar system with a measured density below that of pure water ice. We argue that the porosity of this object is unlikely to be above ∼20%, suggesting a low rock fraction. If the currently measured densities of KBOs are a fair representation of the sample as a whole, creating ∼1000 km and larger KBOs with rock mass fractions of 70% and higher from coagulation of small objects with rock fractions as low as those inferred from 2002 UX25 is difficult. © 2013. The American Astronomical Society. All rights reserved. Source
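The density determination reduces to ρ = M / (4/3 πr³) for an assumed sphere; back-computing the implied mass from the reported diameter and density illustrates the bookkeeping (values rounded from the abstract):

```python
import math

# Sphere bookkeeping for 2002 UX25: rho = M / (4/3 * pi * r^3).
diameter_km = 650.0                # ~650 km, from the occultation/thermal size
rho = 820.0                        # kg m^-3, i.e. the reported 0.82 g cm^-3
r = diameter_km*1e3 / 2            # radius in meters
volume = 4/3 * math.pi * r**3      # m^3
mass = rho * volume                # implied system mass, ~1.2e20 kg
```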


Couch S.M.,University of Chicago | Ott C.D.,California Institute of Technology | Ott C.D.,University of Tokyo
Astrophysical Journal Letters | Year: 2013

Multi-dimensional simulations of advanced nuclear burning stages of massive stars suggest that the Si/O layers of presupernova stars harbor large deviations from the spherical symmetry typically assumed for presupernova stellar structure. We carry out three-dimensional core-collapse supernova simulations with and without aspherical velocity perturbations to assess their potential impact on the supernova hydrodynamics in the stalled-shock phase. Our results show that realistic perturbations can qualitatively alter the postbounce evolution, triggering an explosion in a model that fails to explode without them. This finding underlines the need for a multi-dimensional treatment of the presupernova stage of stellar evolution. © 2013. The American Astronomical Society. All rights reserved. Source


Brown M.E.,California Institute of Technology
Astrophysical Journal Letters | Year: 2013

A recent stellar occultation by the dwarf planet Makemake provided an excellent opportunity to measure the size and shape of one of the largest objects in the Kuiper belt. The analysis of these results provided what were reported to be precise measurements of the lengths of the projected axes, the albedo, and even the density of Makemake, but these results were, in part, derived from qualitative arguments. We reanalyzed the occultation timing data using a quantitative statistical description, and, in general, found the previously reported results on the shape of Makemake to be unjustified. In our solution, in which we use our inference from photometric data that Makemake is being viewed nearly pole-on, we find a 1σ upper limit to the projected elongation of Makemake of 1.02, with a measured equatorial diameter of 1434 ± 14 km and a projected polar diameter of 1422 ± 14 km, yielding an albedo of 0.81 +0.01/−0.02. If we remove the external constraint on the pole position of Makemake, we find instead a 1σ upper limit to the elongation of 1.06, with a measured equatorial diameter of 1434 +48/−18 km and a projected polar diameter of 1420 +18/−24 km, yielding an albedo of 0.81 +0.03/−0.05. Critically, we find that the reported measurement of the density of Makemake was based on the misapplication of volatile retention models. A corrected analysis shows that the occultation measurements provide no meaningful constraint on the density of Makemake. © 2013. The American Astronomical Society. All rights reserved. Source


Dorrestein P.C.,University of California at San Diego | Mazmanian S.K.,California Institute of Technology | Knight R.,BioFrontiers Institute | Knight R.,Howard Hughes Medical Institute
Immunity | Year: 2014

The unexpected diversity of the human microbiome and metabolome far exceeds the complexity of the human genome. Although we now understand microbial taxonomic and genetic repertoires in some populations, we are just beginning to assemble the necessary computational and experimental tools to understand the metabolome in comparable detail. However, even with the limited current state of knowledge, individual connections between microbes and metabolites, between microbes and immune function, and between metabolites and immune function are being established. Here, we provide our perspective on these connections and outline a systematic research program that could turn these individual links into a broader network that allows us to understand how these components interact. This program will enable us to exploit connections among the microbiome, metabolome, and host immune system to maintain health and perhaps help us understand how to reverse the processes that lead to a wide range of immune and other diseases. Recent research has developed specific connections among particular microbes, metabolites, and host immune processes, but a generalized map of this interaction network is still lacking. Dorrestein et al. describe these connections and outline how we could move toward a global view. © 2014 Elsevier Inc. Source


Brown M.E.,California Institute of Technology | Hand K.P.,Jet Propulsion Laboratory
Astronomical Journal | Year: 2013

The surface of Europa could contain the compositional imprint of an underlying interior ocean, but competing hypotheses differ over whether spectral observations from the Galileo spacecraft show the signature of ocean evaporates or simply surface radiation products unrelated to the interior. Using adaptive optics at the W. M. Keck Observatory, we have obtained spatially resolved spectra of most of the disk of Europa at a spectral resolution ∼40 times higher than seen by the Galileo spacecraft. These spectra show a previously undetected distinct signature of magnesium sulfate salts on Europa, but the magnesium sulfate is confined to the trailing hemisphere and spatially correlated with the presence of radiation products like sulfuric acid and SO2. On the leading, less irradiated, hemisphere, our observations rule out the presence of many of the proposed sulfate salts, but do show the presence of distorted water ice bands. Based on the association of the potential MgSO4 detection on the trailing side with other radiation products, we conclude that MgSO4 is also a radiation product, rather than a constituent of a Europa ocean brine. Based on ocean chemistry models, we hypothesize that, prior to irradiation, magnesium is primarily in the form of MgCl2, and we predict that NaCl and KCl are even more abundant, and, in fact, dominate the non-ice component of the leading hemisphere. We propose observational tests of this new hypothesis. © 2013. The American Astronomical Society. All rights reserved. Source


Marden J.R.,University of Colorado at Boulder | Wierman A.,California Institute of Technology
Operations Research | Year: 2013

Game-theoretic tools are becoming a popular design choice for distributed resource allocation algorithms. A central component of this design choice is the assignment of utility functions to the individual agents. The goal is to assign each agent an admissible utility function such that the resulting game possesses a host of desirable properties, including scalability, tractability, and existence and efficiency of pure Nash equilibria. In this paper we formally study this question of utility design on a class of games termed distributed welfare games. We identify several utility design methodologies that guarantee desirable game properties irrespective of the specific application domain. Lastly, we illustrate the results in this paper on two commonly studied classes of resource allocation problems: "coverage" problems and "coloring" problems. © 2013 INFORMS. Source
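One widely studied utility design of the kind this abstract refers to is the marginal-contribution utility, which makes the global welfare an exact potential for the game, so simple best-response dynamics terminate at a pure Nash equilibrium. The sketch below applies it to a toy "coverage" problem; the instance (resource values, agent names, and choice sets) is invented purely for illustration and is not taken from the paper.

```python
# Hypothetical coverage instance: three resources with values, and the
# subset of resources each agent is allowed to cover.
values = {"r1": 3.0, "r2": 2.0, "r3": 1.0}
choices = {"A": ["r1", "r2"], "B": ["r1", "r3"], "C": ["r2", "r3"]}

def welfare(assign):
    # Total value of resources covered by at least one agent.
    return sum(values[r] for r in set(assign.values()))

def utility(agent, assign):
    # Marginal-contribution utility: W(a) - W(a with this agent removed),
    # i.e. the value the agent uniquely adds to the coverage.
    others = {k: v for k, v in assign.items() if k != agent}
    return welfare(assign) - welfare(others)

def best_response_dynamics(assign):
    # With marginal-contribution utilities, welfare is an exact potential
    # for the game, so this loop is guaranteed to terminate at a pure
    # Nash equilibrium.
    improved = True
    while improved:
        improved = False
        for agent in assign:
            best = max(choices[agent],
                       key=lambda r: utility(agent, {**assign, agent: r}))
            if utility(agent, {**assign, agent: best}) > utility(agent, assign):
                assign[agent] = best
                improved = True
    return assign

equilibrium = best_response_dynamics({"A": "r1", "B": "r1", "C": "r2"})
print(equilibrium, welfare(equilibrium))
```

Starting from the inefficient state where A and B both cover r1, the dynamics move B to r3 and settle at {A: r1, B: r3, C: r2}, which covers all three resources.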


Eiler J.M.,California Institute of Technology
Annual Review of Earth and Planetary Sciences | Year: 2013

Most natural compounds are composed of diverse isotopologues that differ in the number and/or symmetrically unique atomic locations of isotopic substitutions. Little of this isotopic diversity is observed by conventional methods of stable isotope geochemistry, which generally measure concentrations of rare isotopes without constraining differences in isotopic composition between different atomic sites or nonrandom probabilities of multiple isotopic substitutions in the same molecule. Recent advances in analytical instrumentation and methodology have created a set of geochemical tools (geothermometers, biosynthetic signatures, forensic fingerprints) based on these position-specific isotope effects and multiply substituted isotopologues. This progress suggests we are entering a period in which many new geochemical tools of this type will be created. This review describes the principles, background, analytical methods, existing applied tools, and likely future progress of this emerging field. © Copyright 2013 by Annual Reviews. All rights reserved. Source


Piro A.L.,California Institute of Technology
Astrophysical Journal Letters | Year: 2013

It has long been expected that some massive stars produce stellar mass black holes (BHs) upon death. Unfortunately, the observational signature of such events has been unclear. It has even been suggested that the result may be an "unnova," in which the formation of a BH is marked by the disappearance of a star rather than an electromagnetic outburst. I argue that when the progenitor is a red supergiant, evidence for BH creation may instead be a ≈3-10 day optical transient with a peak luminosity of ≈10^40-10^41 erg s^-1, a temperature of ≈10^4 K, slow ejection speeds of ≈200 km s^-1, and a spectrum devoid of the nucleosynthetic products associated with explosive burning. This signal is the breakout of a shock generated by the hydrodynamic response of a massive stellar envelope when the protoneutron star loses ∼few × 0.1 M⊙ to neutrino emission prior to collapse to a BH. Current and future wide-field, high-cadence optical surveys make this an ideal time to discover and study these events. Motivated by the unique parameter space probed by this scenario, I discuss more broadly the range of properties expected for shock breakout flashes, with an emphasis on progenitors with large radii and/or small shock energies. This may have application in a wider diversity of explosive events, from pair instability supernovae to newly discovered but yet to be understood transients. © 2013. The American Astronomical Society. All rights reserved. Source


Varshavsky A.,California Institute of Technology
Protein Science | Year: 2011

The N-end rule relates the regulation of the in vivo half-life of a protein to the identity of its N-terminal residue. Degradation signals (degrons) that are targeted by the N-end rule pathway include a set called N-degrons. The main determinant of an N-degron is a destabilizing N-terminal residue of a protein. In eukaryotes, the N-end rule pathway is a part of the ubiquitin system and consists of two branches, the Ac/N-end rule and the Arg/N-end rule pathways. The Ac/N-end rule pathway targets proteins containing Nα-terminally acetylated (Nt-acetylated) residues. The Arg/N-end rule pathway recognizes unacetylated N-terminal residues and involves N-terminal arginylation. Together, these branches target for degradation a majority of cellular proteins. For example, more than 80% of human proteins are cotranslationally Nt-acetylated. Thus, most proteins harbor a specific degradation signal, termed Ac/N-degron, from the moment of their birth. Specific N-end rule pathways are also present in prokaryotes and in mitochondria. Enzymes that produce N-degrons include methionine-aminopeptidases, caspases, calpains, Nt-acetylases, Nt-amidases, arginyl-transferases, and leucyl-transferases. Regulated degradation of specific proteins by the N-end rule pathway mediates a legion of physiological functions, including the sensing of heme, oxygen, and nitric oxide; selective elimination of misfolded proteins; the regulation of DNA repair, segregation, and condensation; the signaling by G proteins; the regulation of peptide import, fat metabolism, viral and bacterial infections, apoptosis, meiosis, spermatogenesis, neurogenesis, and cardiovascular development; and the functioning of adult organs, including the pancreas and the brain. Discovered 25 years ago, this pathway continues to be a fount of biological insights. © 2011 The Protein Society. Source


Haswell E.S.,Washington University in St. Louis | Phillips R.,California Institute of Technology | Rees D.C.,Howard Hughes Medical Institute
Structure | Year: 2011

While mechanobiological processes employ diverse mechanisms, at their heart are force-induced perturbations in the structure and dynamics of molecules capable of triggering subsequent events. Among the best characterized force-sensing systems are bacterial mechanosensitive channels. These channels reflect an intimate coupling of protein conformation with the mechanics of the surrounding membrane; the membrane serves as an adaptable sensor that responds to an input of applied force and converts it into an output signal, interpreted for the cell by mechanosensitive channels. The cell can exploit this information in a number of ways: ensuring cellular viability in the presence of osmotic stress and perhaps also serving as a signal transducer for membrane tension or other functions. This review focuses on the bacterial mechanosensitive channels of large (MscL) and small (MscS) conductance and their eukaryotic homologs, with an emphasis on the outstanding issues surrounding the function and mechanism of this fascinating class of molecules. © 2011 Elsevier Ltd. Source


Pullen A.R.,Jet Propulsion Laboratory | Hirata C.M.,California Institute of Technology
Publications of the Astronomical Society of the Pacific | Year: 2013

Primordial non-Gaussianity of local type is predicted to lead to enhanced halo clustering on very large scales. Photometric quasars, which can be seen from cosmological redshifts z > 2 even in wide-shallow optical surveys, are promising tracers for constraining non-Gaussianity using this effect. However, large-scale systematics can also mimic this signature of non-Gaussianity. In order to assess the contribution of systematic effects, we cross-correlate overdensity maps of photometric quasars from the Sloan Digital Sky Survey (SDSS) Data Release 6 (DR6) in different redshift ranges. We find that the maps are significantly correlated on large scales, even though we expect the angular distributions of quasars at different redshifts to be uncorrelated. This implies that the quasar maps are contaminated with systematic errors. We investigate the use of external templates that provide information on the spatial dependence of potential systematic errors to reduce the level of spurious clustering in the quasar data. We find that templates associated with stellar density, the stellar color locus, airmass, and seeing are major contaminants of the quasar maps, with seeing having the largest effect. Using template projection, we are able to decrease the significance of the cross-correlation measurement on the largest scales from 9.2σ to 5.4σ. Although this is an improvement, the remaining cross-correlation suggests the contamination in this quasar sample is too great to allow a competitive constraint on fNL by correlations internal to this sample. The SDSS quasar catalog exhibits spurious number density fluctuations of approximately 2% rms, and we need a contamination level less than 1% (0.6%) in order to measure values of fNL less than 100 (10). Properly dealing with these systematics will be paramount for future large scale structure surveys that seek to constrain non-Gaussianity. © 2013. The Astronomical Society of the Pacific. All rights reserved. Printed in U.S.A. Source


Davidson E.H.,California Institute of Technology
EMBO Journal | Year: 2014

Viewed through the lens of comparative regulatory mechanisms in developmental processes, the article of Calero-Nieto et al (2014, this issue) is of particular interest. This work uncovers the causal combinatorial subtleties of the distinct enhancer occupancy profiles displayed by ten different transcription factors, which are expressed in common in two hematopoietic cell types, a stem cell-like precursor and primary mast cells. © 2014 The Author. Source


Phillips R.,California Institute of Technology | Phillips R.,CNRS Gulliver Laboratory
Annual Review of Condensed Matter Physics | Year: 2015

It has been said that the cell is the test tube of the twenty-first century. If so, the theoretical tools needed to quantitatively and predictively describe what goes on in such test tubes lag sorely behind the stunning experimental advances in biology seen in the decades since the molecular biology revolution began. Perhaps surprisingly, one of the theoretical tools that has been used with great success on problems ranging from how cells communicate with their environment and each other to the nature of the organization of proteins and lipids within the cell membrane is statistical mechanics. A knee-jerk reaction to the use of statistical mechanics in the description of cellular processes is that living organisms are so far from equilibrium that one has no business even thinking about it. But such reactions are probably too hasty given that there are many regimes in which, because of a separation of timescales, for example, such an approach can be a useful first step. In this article, we explore the power of statistical mechanical thinking in the biological setting, with special emphasis on cell signaling and regulation. We show how such models are used to make predictions and describe some recent experiments designed to test them. We also consider the limits of such models based on the relative timescales of the processes of interest. © 2015 by Annual Reviews. Source
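As a minimal illustration of the kind of equilibrium model the review has in mind (a generic textbook sketch, not code from the article): the occupancy of a receptor follows from Boltzmann weights on its bound and unbound states, and a Hill exponent n > 1 is a common shorthand for cooperative binding.

```python
def p_bound(c, Kd, n=1):
    """Equilibrium probability that a receptor is ligand-bound.

    Two-state Boltzmann weighting: the unbound state has weight 1 and
    the bound state has weight (c/Kd)**n, where c is the ligand
    concentration, Kd the dissociation constant, and n a Hill exponent
    (n > 1 models cooperative binding).
    """
    w = (c / Kd) ** n
    return w / (1.0 + w)

# Occupancy passes through 1/2 at c = Kd and the response sharpens
# with cooperativity.
for c in (0.1, 1.0, 10.0):
    print(c, p_bound(c, Kd=1.0), p_bound(c, Kd=1.0, n=4))
```

Models of this form are equilibrium statements, so they are useful exactly in the regime the review emphasizes: when binding equilibrates much faster than the downstream process being described.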


Doty D.,California Institute of Technology
Communications of the ACM | Year: 2012

Self-assembly is the process by which small components automatically assemble themselves into large, complex structures. Examples in nature abound: lipids self-assemble a cell's membrane, and bacteriophage virus proteins self-assemble a capsid that allows the virus to invade other bacteria. Even a phenomenon as simple as crystal formation is a process of self-assembly. How could such a process be described as algorithmic? The key word in the first sentence is automatically. Algorithms automate a series of simple computational tasks. Algorithmic self-assembly systems automate a series of simple growth tasks, in which the object being grown is simultaneously the machine controlling its own growth. © 2012 ACM 0001-0782/12/12. Source
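A cartoon of this idea (an invented illustration, not a model from the article): seed an L-shaped boundary, then let each new "tile" determine its type from its already-placed west and north neighbors, so the object being grown is simultaneously the program controlling its growth. With an XOR rule the assembly computes Pascal's triangle mod 2 as it grows, i.e., a Sierpinski pattern.

```python
# Seed an L-shaped boundary of 1-tiles, then fill in the plane: each new
# site's tile type is a function (here XOR) of its west and north
# neighbors, so the assembly computes as it grows.
N = 8
grid = [[0] * N for _ in range(N)]
grid[0] = [1] * N              # seed row
for row in grid:
    row[0] = 1                 # seed column

for i in range(1, N):
    for j in range(1, N):
        grid[i][j] = grid[i - 1][j] ^ grid[i][j - 1]

for row in grid:
    print("".join("#" if t else "." for t in row))
```

Because addition mod 2 is XOR, cell (i, j) ends up holding the binomial coefficient C(i+j, i) mod 2; a real tile-assembly model would additionally enforce local attachment rules (glue strengths, temperature), which this sketch omits.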


Atwater H.,California Institute of Technology
MRS Bulletin | Year: 2011

Solar energy is currently enjoying substantial growth and investment, owing to worldwide sensitivity to energy security and climate change. Solar energy is an inexhaustible resource and is in abundant supply on all continents of the world. The power density of sunlight (∼1000 W/m2) and the efficiency of photovoltaic devices (∼10-25%) are high enough so that land use does not limit photovoltaic deployment at the terawatt scale. However, solar photovoltaics are currently too expensive to achieve parity with other forms of electricity generation based on fossil fuels. This is largely due to the cost (and for some cases, the abundance) of materials used in photovoltaic modules and systems, and the cost of deploying in current form. This economic and social context has created the present situation where there is widespread interest in photovoltaic technology for power generation, but the cumulative installed world capacity for photovoltaics is <50 GW, and it appears to be very challenging for photovoltaics to play a very substantial role in large-scale (terawatt) electricity generation in the short term. © 2011 Materials Research Society. Source
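The land-use claim can be checked with back-of-envelope arithmetic. The ~1000 W/m^2 and the 20% efficiency come from the figures quoted above; the ~20% capacity factor (averaging over night and weather) and the 1 TW target are illustrative assumptions, not numbers from the article.

```python
# Land area needed to average 1 TW of photovoltaic output, under
# assumed numbers: 1000 W/m^2 peak insolation, 20% module efficiency
# (mid-range of the quoted 10-25%), and an assumed ~20% capacity factor.
PEAK_INSOLATION = 1000.0   # W/m^2
EFFICIENCY = 0.20          # module efficiency
CAPACITY_FACTOR = 0.20     # assumed average/peak output ratio
TARGET = 1e12              # 1 TW average electrical output

area_m2 = TARGET / (PEAK_INSOLATION * EFFICIENCY * CAPACITY_FACTOR)
area_km2 = area_m2 / 1e6
print(f"~{area_km2:,.0f} km^2 for 1 TW")  # ~25,000 km^2, roughly a 160 km square
```

Twenty-five thousand square kilometers is a small fraction of the land area of any continent, which is the sense in which land use does not limit terawatt-scale deployment.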


Menon K.P.,California Institute of Technology
Wiley interdisciplinary reviews. Developmental biology | Year: 2013

The Drosophila larval neuromuscular system is relatively simple, containing only 32 motor neurons in each abdominal hemisegment, and its neuromuscular junctions (NMJs) have been studied extensively. NMJ synapses exhibit developmental and functional plasticity while displaying stereotyped connectivity. Drosophila Type I NMJ synapses are glutamatergic, while the vertebrate NMJ uses acetylcholine as its primary neurotransmitter. The larval NMJ synapses use ionotropic glutamate receptors (GluRs) that are homologous to AMPA-type GluRs in the mammalian brain, and they have postsynaptic scaffolds that resemble those found in mammalian postsynaptic densities. These features make the Drosophila neuromuscular system an excellent genetic model for the study of excitatory synapses in the mammalian central nervous system. The first section of the review presents an overview of NMJ development. The second section describes genes that regulate NMJ development, including: (1) genes that positively and negatively regulate growth of the NMJ, (2) genes required for maintenance of NMJ bouton structure, (3) genes that modulate neuronal activity and alter NMJ growth, (4) genes involved in transsynaptic signaling at the NMJ. The third section describes genes that regulate acute plasticity, focusing on translational regulatory mechanisms. As this review is intended for a developmental biology audience, it does not cover NMJ electrophysiology in detail, and does not review genes for which mutations produce only electrophysiological but no structural phenotypes. Copyright © 2013 Wiley Periodicals, Inc. Source


Lifshitz R.,California Institute of Technology
Physical Review Letters | Year: 2014

A Comment on the Letter by S. Gopalakrishnan, I. Martin, and E. A. Demler, Phys. Rev. Lett. 111, 185304 (2013). The authors of the Letter offer a Reply. © 2014 American Physical Society. Source


Liang J.,California Institute of Technology | Bloom R.,Stanford University | Smolke C.,Stanford University
Molecular Cell | Year: 2011

RNA molecules play diverse functional roles in natural biological systems. There has been growing interest in designing synthetic RNA counterparts for programming biological function. The design of synthetic RNA molecules that exhibit diverse activities, including sensing, regulatory, information processing, and scaffolding activities, has highlighted the advantages of RNA as a programmable design substrate. Recent advances in implementing these engineered RNA molecules as key control elements in synthetic genetic networks are highlighting the functional relevance of this class of synthetic elements in programming cellular behaviors. © 2011 Elsevier Inc. Source


Brady J.F.,California Institute of Technology
Journal of Fluid Mechanics | Year: 2011

Diffusiophoresis, the motion of a particle in response to an externally imposed concentration gradient of a solute species, is analysed from both the traditional coarse-grained macroscopic (i.e. continuum) perspective and from a fine-grained micromechanical level in which the particle and the solute are treated on the same footing as Brownian particles dispersed in a solvent. It is shown that although the two approaches agree when the solute is much smaller in size than the phoretic particle and is present at very dilute concentrations, the micromechanical colloidal perspective relaxes these restrictions and applies to any size ratio and any concentration of solute. The different descriptions also provide different mechanical analyses of phoretic motion. At the continuum level the macroscopic hydrodynamic stress and interactive force with the solute sum to give zero total force, a condition for phoretic motion. At the colloidal level, the particle's motion is shown to have two contributions: (i) a back-flow contribution composed of the motion of the particle due to the solute chemical potential gradient force acting on it and a compensating fluid motion driven by the long-range hydrodynamic velocity disturbance caused by the chemical potential gradient force acting on all the solute particles and (ii) an indirect contribution arising from the mutual interparticle and Brownian forces on the solute and phoretic particle, that contribution being non-zero because the distribution of solute about the phoretic particle is driven out of equilibrium by the chemical potential gradient of the solute. At the colloidal level the forces acting on the phoretic particle (both the statistical or thermodynamic chemical potential gradient and Brownian forces and the interparticle force) are balanced by the Stokes drag of the solvent to give the net phoretic velocity.
For a particle undergoing self-phoresis or autonomous motion, as can result from chemical reactions occurring asymmetrically on a particle surface, e.g. catalytic nanomotors, there is no imposed chemical potential gradient and the back-flow contribution is absent. Only the indirect Brownian and interparticle forces contribution is responsible for the motion. The velocity of the particle resulting from this contribution can be written in terms of a mobility times the integral of the local solute pressure (the solute concentration times the thermal energy) over the surface of contact between the particle and the solute. This was the approach taken by Córdova-Figueroa & Brady (Phys. Rev. Lett., vol. 100, 2008, 158303) in their analysis of self-propulsion. It is shown that full hydrodynamic interactions can be incorporated into their analysis by a simple scale factor. © 2010 Cambridge University Press. Source


Rothenberg E.V.,California Institute of Technology
Current Opinion in Immunology | Year: 2012

The T-cell development program is specifically triggered by Notch-Delta signaling, but most transcription factors needed to establish T-cell lineage identity also have crossover roles in other hematopoietic lineages. This factor sharing complicates full definition of the core gene regulatory circuits required for T-cell specification. But new advances illuminate the roles of three of the most T-cell specific transcription factors. Commitment to the T-cell lineage is now shown to depend on Bcl11b, while initiation of the T-cell differentiation program begins earlier with the induction of TCF-1 (Tcf7 gene product) and GATA-3. Several reports now reveal how TCF-1 and GATA-3 are mobilized in early T cells and the pathways for their T-lineage specific effects. © 2012 Elsevier Ltd. Source


Hackl A.,California Institute of Technology | Vojta M.,TU Dresden
Physical Review Letters | Year: 2011

The heavy-fermion metal YbRh2Si2 displays a field-driven quantum phase transition where signatures of a Fermi-surface reconstruction have been identified, often interpreted as a breakdown of the Kondo effect. We argue that instead many properties of the material can be consistently described by assuming a Zeeman-driven Lifshitz transition of narrow heavy-fermion bands. Using a suitable quasiparticle model, we find a smeared jump in the Hall constant and lines of maxima in susceptibility and specific heat, very similar to experimental data. An intermediate non-Fermi-liquid regime emerges due to the small effective Fermi energy near the transition. Further experiments to discriminate the different scenarios are proposed. © 2011 American Physical Society. Source


Koppens F.H.L.,ICFO - Institute of Photonic Sciences | Chang D.E.,California Institute of Technology | Garcia De Abajo F.J.,CSIC - Institute of Optics | Garcia De Abajo F.J.,University of Southampton
Nano Letters | Year: 2011

Graphene plasmons provide a suitable alternative to noble-metal plasmons because they exhibit much tighter confinement and relatively long propagation distances, with the advantage of being highly tunable via electrostatic gating. Here, we propose to use graphene plasmons as a platform for strongly enhanced light-matter interactions. Specifically, we predict unprecedented high decay rates of quantum emitters in the proximity of a carbon sheet, observable vacuum Rabi splittings, and extinction cross sections exceeding the geometrical area in graphene nanoribbons and nanodisks. Our theoretical results provide the basis for the emerging and potentially far-reaching field of graphene plasmonics, offering an ideal platform for cavity quantum electrodynamics, and supporting the possibility of single-molecule, single-plasmon devices. © 2011 American Chemical Society. Source


Minnich A.J.,California Institute of Technology
Physical Review B - Condensed Matter and Materials Physics | Year: 2015

The thermal properties of anisotropic crystals are of both fundamental and practical interest, but transport phenomena in anisotropic materials such as graphite remain poorly understood because solutions of the Boltzmann equation often assume isotropy. Here, we extend an analytic solution of the transient, frequency-dependent Boltzmann equation to highly anisotropic solids and examine its predictions for graphite. We show that this simple model predicts key results, such as long c-axis phonon mean free paths and a negative correlation of cross-plane thermal conductivity with in-plane group velocity, that were previously observed with computationally expensive molecular-dynamics simulations. Further, using our analytic solution, we demonstrate a method to reconstruct the anisotropic mean free path spectrum of crystals with arbitrary dispersion relations without any prior knowledge of their harmonic or anharmonic properties using observations of quasiballistic heat conduction. These results provide a useful analytic framework to understand thermal transport in anisotropic crystals. © 2015 American Physical Society. Source


Haah J.,California Institute of Technology
Physical Review A - Atomic, Molecular, and Optical Physics | Year: 2011

We suggest concrete models for self-correcting quantum memory by reporting examples of local stabilizer codes in 3D that have no string logical operators. Previously known local stabilizer codes in 3D all have stringlike logical operators, which make the codes non-self-correcting. We introduce a notion of "logical string segments" to avoid difficulties in defining one-dimensional objects in discrete lattices. We prove that every stringlike logical operator of our code can be deformed to a disjoint union of short segments, each of which is in the stabilizer group. The code has surfacelike logical operators whose partial implementation has unsatisfied stabilizers along its boundary. © 2011 American Physical Society. Source


Jordan S.P.,California Institute of Technology
Quantum Information and Computation | Year: 2010

In topological quantum computation the geometric details of a particle trajectory are irrelevant; only the topology matters. Taking this one step further, we consider a model of computation that disregards even the topology of the particle trajectory, and computes by permuting particles. Whereas topological quantum computation requires anyons, permutational quantum computation can be performed with ordinary spin-1/2 particles, using a variant of the spin-network scheme of Marzuoli and Rasetti. We do not know whether permutational computation is universal. It may represent a new complexity class within BQP. Nevertheless, permutational quantum computers can in polynomial time approximate matrix elements of certain irreducible representations of the symmetric group and approximate certain transition amplitudes from the Ponzano-Regge spin foam model of quantum gravity. No polynomial time classical algorithms for these problems are known. © Rinton Press. Source


Sesar B.,California Institute of Technology
Astronomical Journal | Year: 2012

We present template radial velocity curves of ab-type RR Lyrae stars constructed from high-precision measurements of Hα, Hβ, and Hγ lines. Amplitude correlations between the Balmer line velocity curves, Johnson V band, and Sloan Digital Sky Survey (SDSS) g- and r-band light curves are also derived. Compared to previous methods, these templates and derived correlations reduce the uncertainty in measured systemic (center-of-mass) velocities of RR Lyrae stars by up to 15 km s^-1, and will be of particular interest to wide-area spectroscopic surveys such as the SDSS and LAMOST Experiment for Galactic Understanding and Exploration. © 2012. The American Astronomical Society. All rights reserved. Source


Rothenberg E.V.,California Institute of Technology
Annual Review of Immunology | Year: 2014

T and B cells share a common somatic gene rearrangement mechanism for assembling the genes that code for their antigen receptors; they also have developmental pathways with many parallels. Shared usage of basic helix-loop-helix E proteins as transcriptional drivers underlies these common features. However, the transcription factor networks in which these E proteins are embedded are different both in membership and in architecture for T and B cell gene regulatory programs. These differences permit lineage commitment decisions to be made in different hierarchical orders. Furthermore, in contrast to B cell gene networks, the T cell gene network architecture for effector differentiation is sufficiently modular so that E protein inputs can be removed. Complete T cell-like effector differentiation can proceed without T cell receptor rearrangement or selection when E proteins are neutralized, yielding natural killer and other innate lymphoid cells. © 2014 by Annual Reviews. All rights reserved. Source


Nakayama Y.,California Institute of Technology
Journal of High Energy Physics | Year: 2011

We study the superconformal index of the N = 4 super-Yang-Mills theory on S^3 × S^1 with the half BPS superconformal surface operator (defect) inserted at the great circle of S^3. The half BPS superconformal surface operators preserve the same supersymmetry as well as the symmetry of the chemical potential used in the definition of the superconformal index, so the structure and the parameterization of the superconformal index remain unaffected by the presence of the surface operator. On the surface defect, a two-dimensional (4, 4) superconformal field theory resides, and the four-dimensional superconformal index may be regarded as a superconformal index of the two-dimensional (4, 4) superconformal field theory coupled with the four-dimensional bulk system. We construct the matrix model that computes the superconformal index with the surface operator when it couples with the bulk N = 4 super-Yang-Mills theory through the defect hypermultiplets on it. © 2011 SISSA. Source


Zhang Y.,California Institute of Technology
Journal of Cosmology and Astroparticle Physics | Year: 2015

We calculate the early universe evolution of perturbations in the dark matter energy density in the context of simple dark sector models containing a GeV scale light mediator. We consider the case that the mediator is long-lived, with lifetime up to a second, and before decaying it temporarily dominates the energy density of the universe. We show that for primordial perturbations that enter the horizon around this period, the interplay between linear growth during matter domination and collisional damping can generically lead to a sharp peak in the spectrum of dark matter density perturbation. As a result, the population of the smallest DM halos gets enhanced. Possible implications of this scenario are discussed. Source


Hatzenpichler R.,California Institute of Technology
Applied and Environmental Microbiology | Year: 2012

Nitrification, the aerobic oxidation of ammonia to nitrate via nitrite, has been suggested to have been a central part of the global biogeochemical nitrogen cycle since the oxygenation of Earth. The cultivation of several ammonia-oxidizing archaea (AOA) as well as the discovery that archaeal ammonia monooxygenase (amo)-like gene sequences are nearly ubiquitously distributed in the environment and outnumber their bacterial counterparts in many habitats fundamentally revised our understanding of nitrification. Surprising insights into the physiological distinctiveness of AOA are mirrored by the recognition of the phylogenetic uniqueness of these microbes, which fall within a novel archaeal phylum now known as Thaumarchaeota. The relative importance of AOA in nitrification, compared to ammonia-oxidizing bacteria (AOB), is still under debate. This minireview provides a synopsis of our current knowledge of the diversity and physiology of AOA, the factors controlling their ecology, and their role in carbon cycling as well as their potential involvement in the production of the greenhouse gas nitrous oxide. It emphasizes the importance of activity-based analyses in AOA studies and formulates priorities for future research. © 2012, American Society for Microbiology.


Anderson D.L.,California Institute of Technology
Journal of Petrology | Year: 2011

Recent high-resolution seismic observations and geodynamic calculations suggest that mid-plate swells and volcanoes are plausibly controlled by processes and materials entirely in the upper boundary layer (<220 km depth) of the mantle rather than by deep-seated thermal instabilities. The upper boundary layer (BL) of the mantle is fertile enough, hot enough and variable enough to provide the observed range of temperatures and compositions of mid-plate magmas, and it is conveniently located to supply them. Seismic data show that the outer ~220 km of the mantle is heterogeneous, anisotropic and has a substantially superadiabatic vertical temperature gradient. This is the shear, and thermal, BL of the upper mantle. It is usually referred to as the 'asthenosphere' and erroneously thought of as simply part of the well-mixed 'convecting mantle'. Because it supports both a shear and a thermal gradient, the lower portions are hot and move slowly with respect to the surface and can be levitated and exposed by normal plate tectonic processes, even if not buoyant. The nature of BL anisotropy is consistent with a shear-induced laminated structure with aligned melt-rich lenses. The two polarizations of shear waves travel at different velocities, VSV and VSH, and they vary differently with depth. VSH is mainly sensitive to the temperature gradient and indicates a high thermal gradient to 220 km depth. VSV is mainly sensitive to melt content. The depth of the minimum isotropic shear velocity, Vs, under young plates occurs near 60 km and this rapidly increases to 150 km under older oceanic plates, including Hawaii; 150 km may represent the depth of isostatic compensation for swells and the source of tholeiitic basalt magmas. The high-velocity seismic lid thickens as the square-root of age across the entire Pacific, but the underlying mantle is not isothermal; average sub-ridge mantle is colder, by various measures, than mid-plate mantle.
Ambient mantle potential temperature at depth under the central Pacific may be ~200°C higher, without deep mantle plume input, than near spreading ridges. This is consistent with bathymetry and seismic velocities and the temperature range of non-ridge magmas. Some of the thinnest and, in terms of traditional interpretations, hottest transition zones (TZ; ~410-650 km depth) are under hotspot-free areas of western North America, Greenland, Europe, Russia, Brazil and India. The lowest seismic velocity regions in the upper mantle BL are under young oceanic plates, back-arc basins and hotspot-free areas of California and the Pacific and Indian oceans. Cold slabs may displace hotter material out of the TZ but geophysical data, and geodynamic simulations, do not require deeper sources. Magmas extracted from deep in a thick conduction layer are expected to be hotter than shallower oceanic ridge magmas and more variable in temperature. Mid-plate magmas appear to represent normal ambient mantle at depths of ~150 km, rather than very localized very deep upwellings. Shear-driven upwellings from the base of the BL explain mid-plate magmatism and its association with fracture zones and anomalous anisotropy, and the persistence of some volcanic chains and the short duration of others. The hotter deeper part of the surface BL is moving at a fraction of the plate velocity and is sampled only where sheared or displaced upwards by tectonic structures and processes that upset the usual stable laminar flow. If mid-plate volcanoes are sourced in the lower half of the BL, between 100 and 220 km depth, or below, then they will appear to define a relatively fixed reference system and the associated temperatures will increase with depth of magma extraction. Lithospheric architecture and stress control the locations of volcanoes, not localized thermal anomalies or deep mantle plumes. © The Author 2010. Published by Oxford University Press. All rights reserved.


O'Doherty J.P.,California Institute of Technology
Annals of the New York Academy of Sciences | Year: 2011

In this article, it will be argued that one of the key contributions of the ventromedial prefrontal cortex (vmPFC) to goal-directed action selection lies both in retrieving the value of goals that are the putative outcomes of the decision process and in establishing a relative preference ranking for these goals by taking into account the value of each of the different goals under consideration in a given decision-making scenario. These goal-value signals are then suggested to be used as an input into the on-line computation of action values mediated by brain regions outside of the vmPFC, such as parts of the parietal cortex, supplementary motor cortex, and dorsal striatum. Collectively, these areas can be considered to be constituent elements of a multistage decision process whereby the values of different goals must first be represented and ranked before the value of different courses of action available for the pursuit of those goals can be computed. © 2011 New York Academy of Sciences.


Thompson A.J.,University of Cambridge | Lester H.A.,California Institute of Technology | Lummis S.C.R.,University of Cambridge
Quarterly Reviews of Biophysics | Year: 2010

Cys-loop receptors are membrane-spanning neurotransmitter-gated ion channels that are responsible for fast excitatory and inhibitory transmission in the peripheral and central nervous systems. The best studied members of the Cys-loop family are nACh, 5-HT3, GABAA and glycine receptors. All these receptors share a common structure of five subunits, pseudo-symmetrically arranged to form a rosette with a central ion-conducting pore. Some are cation selective (e.g. nACh and 5-HT3) and some are anion selective (e.g. GABAA and glycine). Each receptor has an extracellular domain (ECD) that contains the ligand-binding sites, a transmembrane domain (TMD) that allows ions to pass across the membrane, and an intracellular domain (ICD) that plays a role in channel conductance and receptor modulation. Cys-loop receptors are the targets for many currently used clinically relevant drugs (e.g. benzodiazepines and anaesthetics). Understanding the molecular mechanisms of these receptors could therefore provide the catalyst for further development in this field, as well as promoting the development of experimental techniques for other areas of neuroscience. In this review, we present our current understanding of Cys-loop receptor structure and function. The ECD has been extensively studied. Research in this area has been stimulated in recent years by the publication of high-resolution structures of nACh receptors and related proteins, which have permitted the creation of many Cys-loop receptor homology models of this region. Here, using the 5-HT3 receptor as a typical member of the family, we describe how homology modelling and ligand docking can provide useful but not definitive information about ligand interactions. We briefly consider some of the many Cys-loop receptor modulators. We discuss the current understanding of the structure of the TMD, and how this links to the ECD to allow channel gating, and consider the roles of the ICD, whose structure is poorly understood.
We also describe some of the current methods that are beginning to reveal the differences between different receptor states, and may ultimately show structural details of transitions between them. © 2010 Cambridge University Press.


Piro A.L.,California Institute of Technology | Nakar E.,Tel Aviv University
Astrophysical Journal | Year: 2013

The light curve of the explosion of a star with a radius ≲ 10-100 R⊙ is powered mostly by radioactive decay. Observationally, such events are dominated by hydrogen-deficient progenitors and classified as Type I supernovae (SNe I), i.e., white dwarf thermonuclear explosions (Type Ia), and core collapses of hydrogen-stripped massive stars (Type Ib/c). Current transient surveys are finding SNe I in increasing numbers and at earlier times, allowing their early emission to be studied in unprecedented detail. Motivated by these developments, we summarize the physics that produces their rising light curves and discuss ways in which observations can be utilized to study these exploding stars. The early radioactive-powered light curves probe the shallowest deposits of 56Ni. If the amount of 56Ni mixing in the outermost layers of the star can be deduced, then it places important constraints on the progenitor and properties of the explosive burning. In practice, we find that it is difficult to determine the level of mixing because it is hard to disentangle whether the explosion occurred recently and one is seeing radioactive heating near the surface or whether the explosion began in the past and the radioactive heating is deeper in the ejecta. In the latter case, there is a "dark phase" between the moment of explosion and the first observed light emitted once the shallowest layers of 56Ni are exposed. Because of this, simply extrapolating a light curve from radioactive heating back in time is not a reliable method for estimating the explosion time. The best solution is to directly identify the moment of explosion, either through observing shock breakout (in X-ray/UV) or the cooling of the shock-heated surface (in UV/optical), so that the depth being probed by the rising light curve is known. However, since this is typically not available, we identify and discuss a number of other diagnostics that are helpful for deciphering how recently an explosion occurred. 
As an example, we apply these arguments to the recent SN Ic PTF 10vgv. We demonstrate that just a single measurement of the photospheric velocity and temperature during the rise places interesting constraints on its explosion time, radius, and level of 56Ni mixing. © 2013. The American Astronomical Society. All rights reserved.
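The radioactive heating that powers these rising light curves can be sketched with the standard two-exponential approximation for the 56Ni → 56Co → 56Fe decay chain. The mean lifetimes and luminosity coefficients below are commonly quoted literature values, not numbers taken from this paper, and the observed luminosity differs from this instantaneous heating rate early on because photons must first diffuse out of the ejecta (the "dark phase" discussed above).

```python
import math

# Commonly quoted two-exponential approximation for the radioactive
# heating rate of the 56Ni -> 56Co -> 56Fe decay chain (illustrative
# standard values, not parameters from the abstract above).
TAU_NI = 8.8     # 56Ni mean lifetime, days
TAU_CO = 111.3   # 56Co mean lifetime, days

def radioactive_luminosity(t_days, m_ni_msun):
    """Instantaneous decay power (erg/s) deposited by m_ni_msun solar
    masses of freshly synthesized 56Ni, t_days after explosion."""
    l_ni = 6.45e43 * math.exp(-t_days / TAU_NI)
    l_co = 1.45e43 * math.exp(-t_days / TAU_CO)
    return (l_ni + l_co) * m_ni_msun

# A typical SN Ia synthesizes ~0.6 Msun of 56Ni:
for t in (1.0, 10.0, 20.0):
    print(t, radioactive_luminosity(t, 0.6))
```

Because this heating rate declines monotonically while the diffusion depth increases, extrapolating an observed rise back in time with this formula alone cannot pin down the explosion epoch, which is the paper's point.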


Metasurfaces are planar structures that locally modify the polarization, phase and amplitude of light in reflection or transmission, thus enabling lithographically patterned flat optical components with functionalities controlled by design. Transmissive metasurfaces are especially important, as most optical systems used in practice operate in transmission. Several types of transmissive metasurface have been realized, but with either low transmission efficiencies or limited control over polarization and phase. Here, we show a metasurface platform based on high-contrast dielectric elliptical nanoposts that provides complete control of polarization and phase with subwavelength spatial resolution and an experimentally measured efficiency ranging from 72% to 97%, depending on the exact design. Such complete control enables the realization of most free-space transmissive optical elements such as lenses, phase plates, wave plates, polarizers, beamsplitters, as well as polarization-switchable phase holograms and arbitrary vector beam generators using the same metamaterial platform. © 2015 Nature Publishing Group
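The "complete control of polarization and phase" claim can be phrased in Jones-calculus terms: an idealized lossless birefringent post imposes two independent phases along its principal axes, and rotating the post in plane yields a symmetric unitary Jones matrix. The check below is a hedged numerical illustration with invented phase values, not one of the paper's actual designs:

```python
import cmath
import math

def rot(theta):
    """2x2 in-plane rotation matrix."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def matmul(a, b):
    """2x2 matrix product (entries may be complex)."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def nanopost_jones(phi_x, phi_y, theta):
    """Jones matrix of an idealized lossless birefringent post with
    phases phi_x, phi_y along its principal axes, rotated by theta."""
    d = [[cmath.exp(1j * phi_x), 0.0], [0.0, cmath.exp(1j * phi_y)]]
    return matmul(matmul(rot(theta), d), rot(-theta))

T = nanopost_jones(1.0, 2.5, 0.6)
# T is symmetric (T[0][1] == T[1][0]) and unitary, the class of Jones
# matrices such a two-phase, rotatable element can realize.
```

Varying phi_x, phi_y and theta from post to post is what lets a single lithographic layer encode lenses, wave plates and vector beam generators.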


Schmidt M.,California Institute of Technology
Annual Review of Astronomy and Astrophysics | Year: 2015

I cover my life and career, first in Holland during World War II and then in Pasadena, California, after we emigrated in 1959. My main work concerned the rate of star formation and the discovery of quasars and their cosmological evolution. It's a great honor to write an essay for this volume of the Annual Review of Astronomy and Astrophysics. It is primarily an autobiography with an attempt to link my own career to some of the major developments in astronomy. © 2015 by Annual Reviews.


Bergman D.L.,California Institute of Technology
Physical Review B - Condensed Matter and Materials Physics | Year: 2013

Following the recent realization of an artificial version of graphene in the electronic surface states of copper with judiciously placed carbon monoxide molecules inducing the honeycomb lattice symmetry, we demonstrate that these can be used to realize a vortex in a Kekulé texture of the honeycomb lattice. The Kekulé texture is mathematically analogous to a superconducting order parameter in the Bogoliubov-de Gennes equations, opening a spectral gap in the massless Dirac point spectrum of the graphene structure. The core of a vortex in the texture order parameter supports subgap states, which for this system are mathematical analogs of Majorana fermions in some superconducting states. The subgap states bind a fractional charge of e/2 to the vortex core, in effect delocalizing a single electron between two vortex cores, or between one vortex core and the system boundary. The Kekulé texture as realized in the molecular graphene system realizes three different domain types, and we show that a Y junction between them realizes the coveted Kekulé vortex. © 2013 American Physical Society.


Keith D.W.,Harvard University | MacMartin D.G.,California Institute of Technology
Nature Climate Change | Year: 2015

The risks and benefits of solar geoengineering, or solar radiation management (SRM), depend on assumptions about its implementation. Claims that SRM will reduce precipitation, increase ocean acidification or deplete stratospheric ozone, or that it must be continued forever once started, are not inherent features of SRM; rather, they are features of common scenarios for its implementation. Most analyses assume, for example, that SRM would be used to stop the increase in global temperature or restore temperature to pre-industrial values. We argue that these are poor scenario choices on which to base policy-relevant judgements about SRM. As a basis for further analysis, we provide a scenario that is temporary in that its end point is zero SRM, is moderate in that it offsets only half of the growth in anthropogenic climate forcing and is responsive in that it recognizes that the amount of SRM will be adjusted in light of new information.


Markovic D.,California Institute of Technology
Methods in Enzymology | Year: 2013

Alternative mRNA splicing, an important mechanism for generating protein diversity, is estimated to affect 60-70% of human genes (Wang et al., 2008). The G protein-coupled receptors (GPCRs), the largest group of membrane receptors, have a broad distribution and function. Posttranslational modifications and/or association with regulatory proteins can govern cell- and tissue-specific pharmacological, signaling, and regulatory characteristics of GPCRs. However, an increasing body of evidence suggests that alternative splicing of GPCRs is a mechanism that can modulate GPCRs' function (Einstein et al., 2008). Many GPCRs (∼50%) have the potential to undergo alternative pre-mRNA splicing. Due to a general lack of isoform-specific antibodies, GPCRs' alternative splicing is mainly studied at the mRNA level employing various PCR techniques. Functional characteristics (including signaling and structure) of alternatively spliced GPCRs are studied in overexpression systems employing standard biochemical techniques. In this chapter, we describe the methods for detection and quantification of alternatively spliced GPCR mRNA. As the experimental paradigm, we use the type 1 corticotropin-releasing hormone receptor (CRH-R1). © 2013 Elsevier Inc.


Mackey B.H.,California Institute of Technology | Roering J.J.,University of Oregon
Bulletin of the Geological Society of America | Year: 2011

In mountainous landscapes with weak, fine-grained rocks, earthflows can dominate erosion and landscape evolution by supplying sediment to channels and controlling hillslope morphology. To estimate the contribution of earthflows to regional sediment budgets and identify patterns of landslide activity, earthflow movement needs to be quantified over significant spatial and temporal scales. Presently, there is a paucity of data that can be used to predict earthflow behavior beyond the seasonal scale or over spatially extensive study areas. Across 226 km2 of rapidly eroding Franciscan Complex rocks of the Eel River catchment, northern California, we used a combination of LiDAR (light detection and ranging) and orthorectified historical aerial photographs to objectively map earthflow movement between 1944 and 2006. By tracking the displacement of trees growing on earthflow surfaces, we find that 7.3% of the study area experienced movement over this 62 yr interval, preferentially in sheared argillaceous lithology. This movement is distributed across 122 earthflow features that have intricate, elongate planform shapes, a preferred south-southwesterly aspect, and a mean longitudinal slope of 31%. The distribution of mapped earthflow areas is well approximated by a lognormal distribution with a median size of 36,500 m2. Approximately 6% of the study area is composed of earthflows that connect to major channels; these flows generated an average sediment yield of 19,000 t km-2 yr-1 (rock erosion rate of ~7.6 mm/yr) over the 62 yr study period, equating to a regional yield of 1100 t km-2 yr-1 (~0.45 mm/yr) if distributed across the study area. As such, a small fraction of the landscape can account for half of the regional denudation rate estimated from suspended sediment records (2200 t km-2 yr-1 or ~0.9 mm/yr).
We propose a conceptual model for long-term earthflow evolution wherein earthflows experience intermittent activity and long periods of dormancy when limited by the availability of readily mobilized sediment on upper slopes. Ultimately, high-order river channels and ephemeral gully networks may serve to destabilize hillslopes, controlling the evolution of earthflow-prone terrain. © 2011 Geological Society of America.
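The abstract's conversions between sediment yield (t km-2 yr-1) and rock-lowering rate (mm/yr) can be reproduced with a single density assumption. The ~2.5 t/m3 rock density below is our illustrative choice; the paper's exact value is not stated in the abstract:

```python
# Convert a mass sediment yield (t km^-2 yr^-1) to a rock-lowering
# rate (mm/yr), assuming a rock density of ~2.5 t/m^3 (our assumption).
ROCK_DENSITY = 2.5  # t/m^3

def yield_to_mm_per_yr(sediment_yield_t_km2_yr):
    volume_m3_per_km2 = sediment_yield_t_km2_yr / ROCK_DENSITY  # m^3 km^-2 yr^-1
    # Spread that volume over 1 km^2 = 1e6 m^2, then express in mm:
    return volume_m3_per_km2 / 1e6 * 1000.0

print(yield_to_mm_per_yr(19000))  # earthflow-connected areas: 7.6 mm/yr
print(yield_to_mm_per_yr(1100))   # regional equivalent: ~0.44 mm/yr
print(yield_to_mm_per_yr(2200))   # suspended-sediment estimate: ~0.88 mm/yr
```

These reproduce the abstract's quoted ~7.6, ~0.45 and ~0.9 mm/yr to within rounding, which suggests a density near 2.5 t/m3 was indeed used.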


Hauksson E.,California Institute of Technology
Geophysical Journal International | Year: 2011

The geographical distribution of the (1981-2005) seismicity in southern California forms a ±150 km broad zone adjacent to the Pacific-North America plate boundary, ranging from depths of ∼1-∼30 km, with the bulk of the focal depths in the range of 2-12 km. The distribution of the seismicity that includes both mainshock-aftershock sequences and background events is affected by both static and kinematic geophysical parameters of the crust. The static parameters include heat flow, topography, crustal density, Vp/Vs ratio, hypocentral fault-distance and crustal thickness from receiver functions. The tectonic loading is represented by kinematic parameters such as the crustal shear strain rate field, and the dilatational strain rate field. In our analysis, we normalize the seismicity relative to the areal density of the range of values of each of the parameters. Most of the seismicity occurs in areas of average heat flow, low to intermediate topography, average Vp/Vs and high late Quaternary fault density, and forms seismogenic zones that extend through the brittle crust. The location of late Quaternary faults, often described as zones of weakness, influences the geographical distribution of seismicity more than any other parameter. Although above or below average crustal properties such as high heat flow, thin crust or very low Vp/Vs values exist, these properties only influence the spatial distribution of seismicity in a minor way. As an example, the Salton Trough area of low topography, high heat flow, high Vp/Vs, high shear strain rate and thin crust has distributed seismicity within a thin seismogenic zone. Also, somewhat surprisingly, areas of high topography, low heat flow, low Vp/Vs, low shear strain rate and thick crust have low seismicity rates but a thin seismogenic zone. 
We determine an empirical relationship between heat flow and crustal thickness to show how the ∼400 °C temperature isotherm gradually deepens with crustal thickness and forms the base of the seismogenic zone for crustal thicknesses from 22 to 37 km. For crustal thickness ranging from 37 to 43 km, the ∼250 °C isotherm forms the base of the seismogenic zone, suggesting that seismic faulting in these regions is confined to the top of the upper crust (12-14 km), and thus does not accommodate plate motion. © 2011 The Authors. Geophysical Journal International © 2011 RAS.
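The control of heat flow on the depth of a given isotherm, and hence on seismogenic thickness, can be illustrated with a steady-state, purely conductive geotherm T(z) = T0 + (q/k) z that ignores radiogenic heat production. The conductivity, surface temperature and heat-flow values below are generic assumptions for illustration, not the paper's calibrated relationship:

```python
# Depth of an isotherm for a steady-state conductive geotherm
# T(z) = T0 + (q/k) z, neglecting radiogenic heat production.
def isotherm_depth_km(t_iso_c, q_mw_m2, k_w_mk=2.5, t_surface_c=10.0):
    """Depth (km) at which temperature t_iso_c (deg C) is reached for
    surface heat flow q_mw_m2 (mW/m^2) and conductivity k_w_mk (W/m/K)."""
    q = q_mw_m2 * 1e-3                       # convert mW/m^2 -> W/m^2
    return k_w_mk * (t_iso_c - t_surface_c) / q / 1000.0

# ~400 C isotherm for an average continental heat flow of ~70 mW/m^2:
print(isotherm_depth_km(400.0, 70.0))  # roughly 14 km
```

Higher heat flow pulls the isotherm upward (shallower seismogenic base), lower heat flow pushes it down, consistent with the pattern the abstract describes.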


Beck J.L.,California Institute of Technology
Structural Control and Health Monitoring | Year: 2010

Probability logic with Bayesian updating provides a rigorous framework to quantify modeling uncertainty and perform system identification. It uses probability as a multi-valued propositional logic for plausible reasoning where the probability of a model is a measure of its relative plausibility within a set of models. System identification is thus viewed as inference about plausible system models and not as a quixotic quest for the true model. Instead of using system data to estimate the model parameters, Bayes' Theorem is used to update the relative plausibility of each model in a model class, which is a set of input-output probability models for the system and a probability distribution over this set that expresses the initial plausibility of each model. Robust predictive analyses informed by the system data use the entire model class with the probabilistic predictions of each model being weighted by its posterior probability. Additional robustness to modeling uncertainty comes from combining the robust predictions of each model class in a set of candidates for the system, where each contribution is weighted by the posterior probability of the model class. This application of Bayes' Theorem automatically applies a quantitative Ockham's razor that penalizes the data-fit of more complex model classes that extract more information from the data. Robust analyses involve integrals over parameter spaces that usually must be evaluated numerically by Laplace's method of asymptotic approximation or by Markov Chain Monte Carlo methods. An illustrative application is given using synthetic data corresponding to a structural health monitoring benchmark structure. Copyright © 2010 John Wiley & Sons, Ltd.
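The quantitative Ockham's razor can be made concrete in a toy setting: comparing a zero-parameter model class against a one-parameter class for Gaussian data, where the free parameter's evidence integral automatically pays a complexity penalty. Everything below is synthetic and invented for illustration; it is not the paper's benchmark structure:

```python
import math
import random

# Two model classes for data y_i ~ N(theta, 1):
#   M1: theta fixed at 0 (no free parameters)
#   M2: theta free, with prior N(0, sigma0^2)
# The evidence of M2 marginalizes the likelihood over the prior, which
# penalizes the extra flexibility (Ockham's razor).

def log_likelihood(data, theta):
    return sum(-0.5 * (y - theta) ** 2 - 0.5 * math.log(2 * math.pi)
               for y in data)

def evidence_m2(data, sigma0=2.0, grid=2001, half_width=8.0):
    """Evidence of M2 by simple grid quadrature over the prior."""
    step = 2 * half_width / (grid - 1)
    total = 0.0
    for i in range(grid):
        theta = -half_width + i * step
        prior = (math.exp(-0.5 * (theta / sigma0) ** 2)
                 / (sigma0 * math.sqrt(2 * math.pi)))
        total += math.exp(log_likelihood(data, theta)) * prior * step
    return total

random.seed(0)
data = [random.gauss(0.1, 1.0) for _ in range(20)]
ev1 = math.exp(log_likelihood(data, 0.0))  # M1 evidence: likelihood at theta=0
ev2 = evidence_m2(data)                    # M2 evidence: Ockham-penalized
p2 = ev2 / (ev1 + ev2)                     # posterior P(M2 | data), equal priors
print(p2)
```

In realistic model classes this one-dimensional quadrature is replaced by Laplace's asymptotic approximation or Markov Chain Monte Carlo, as the abstract notes.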


Cable M.L.,Jet Propulsion Laboratory | Kirby J.P.,Planetary Science Institute | Gray H.B.,California Institute of Technology | Ponce A.,Jet Propulsion Laboratory
Accounts of Chemical Research | Year: 2013

In the design of molecular sensors, researchers exploit binding interactions that are usually defined in terms of topology and charge complementarity. The formation of complementary arrays of highly cooperative, noncovalent bonding networks facilitates protein-ligand binding, leading to motifs such as the "lock-and-key". Synthetic molecular sensors often employ metal complexes as key design elements as a way to construct a binding site with the desired shape and charge to achieve target selectivity. In transition metal complexes, coordination number, structure and ligand dynamics are governed primarily by a combination of inner-sphere covalent and outer-sphere noncovalent interactions. These interactions provide a rich variable space that researchers can use to tune structure, stability, and dynamics. In contrast, lanthanide(III)-ligand complex formation and ligand-exchange dynamics are dominated by reversible electrostatic and steric interactions, because the unfilled f shell is shielded by the larger, filled d shell. Luminescent lanthanides such as terbium, europium, dysprosium, and samarium display many photophysical properties that make them excellent candidates for molecular sensor applications. Complexes of lanthanide ions act as receptors that exhibit a detectable change in metal-based luminescence upon binding of an anion. In our work on sensors for detection of dipicolinate, the unique biomarker of bacterial spores, we discovered that the incorporation of an ancillary ligand (AL) can enhance binding constants of target anions to lanthanide ions by as much as two orders of magnitude. In this Account, we show that selected ALs in lanthanide/anion systems greatly improve sensor performance for medical, planetary science, and biodefense applications. We suggest that the observed anion binding enhancement could result from an AL-induced increase in positive charge at the lanthanide ion binding site.
This effect depends on lanthanide polarizability, which can be established from the ionization energy of Ln3+ → Ln4+. These results account for the order Tb3+ > Dy3+ > Eu3+ ≈ Sm3+. As with many lanthanide properties, ranging from hydration enthalpy to vaporization energy, this AL-induced enhancement shows a large discrepancy between Tb3+ and Eu3+ despite their similarity in size, a phenomenon known as the "gadolinium break". This discrepancy, based on the unusual stabilities of the Eu2+ and Tb4+ oxidation states, results from the half-shell effect, as both of these ions have half-filled 4f-shells. The high polarizability of Tb3+ explains the extraordinarily large increase in the binding affinity of anions for terbium compared to other lanthanides. We recommend that researchers consider this AL-induced enhancement when designing lanthanide-macrocycle optical sensors. Ancillary ligands also can reduce the impact of interfering species such as phosphate commonly found in environmental and physiological samples. © 2013 American Chemical Society.


Fultz B.,California Institute of Technology
Progress in Materials Science | Year: 2010

The literature on vibrational thermodynamics of materials is reviewed. The emphasis is on metals and alloys, especially on the progress over the last decade in understanding differences in the vibrational entropy of different alloy phases and phase transformations. Some results on carbides, nitrides, oxides, hydrides and lithium-storage materials are also covered. Principles of harmonic phonons in alloys are organized into thermodynamic models for unmixing and ordering transformations on an Ising lattice, and extended for non-harmonic potentials. Owing to the high accuracy required for the phonon frequencies, quantitative predictions of vibrational entropy with analytical models prove elusive. Accurate tools for such calculations and measurements were out of reach for many years, but are more accessible today. Ab initio methods for calculating phonons in solids are summarized. The experimental techniques of calorimetry, inelastic neutron scattering, and inelastic X-ray scattering are explained with enough detail to show the issues of using these methods for investigations of vibrational thermodynamics. The explanations extend to methods of data analysis that affect the accuracy of thermodynamic information. It is sometimes possible to identify the structural and chemical origins of the differences in vibrational entropy of materials, and the number of these assessments is growing. There has been considerable progress in our understanding of the vibrational entropy of mixing in solid solutions, compound formation from pure elements, chemical unmixing of alloys, order-disorder transformations, and martensitic transformations. Systematic trends are available for some of these phase transformations, although more examples are needed, and many results are less reliable at high temperatures. Nanostructures in materials can alter the vibrational dynamics sufficiently to affect thermodynamic stability.
Internal stresses in polycrystals of anisotropic materials also contribute to the heat capacity. Lanthanides and actinides show a complex interplay of vibrational, electronic, and magnetic entropy, even at low temperatures. A "quasiharmonic model" is often used to extend the systematics of harmonic phonons to high temperatures by accounting for the effects of thermal expansion against a bulk modulus. Non-harmonic effects beyond the quasiharmonic approximation originate from the interactions of thermally-excited phonons with other phonons, or with the interactions of phonons with electronic excitations. In the classical high temperature limit, the adiabatic electron-phonon coupling can have a surprisingly large effect in metals when temperature causes significant changes in the electron density near the Fermi level. There are useful similarities in how temperature, pressure, and composition alter the conduction electron screening and the interatomic force constants. Phonon-phonon "anharmonic" interactions arise from those non-harmonic parts of the interatomic potential that cannot be accounted for by the quasiharmonic model. Anharmonic shifts in phonon frequency with temperature can be substantial, but trends are not well understood. Anharmonic phonon damping does show systematic trends, however, at least for fcc metals. Trends of vibrational entropy are often justified with atomic properties such as atomic size, electronegativity, electron-to-atom ratio, and mass. Since vibrational entropy originates at the level of electrons in solids, such rules of thumb prove no better than similar rules devised for trends in bonding and structure, and tend to be worse. Fortunately, the required tools for accurate experimental investigations of vibrational entropy have improved dramatically over the past few years, and the required ab initio methods have become more accessible. 
Steady progress is expected for understanding the phenomena reviewed here, as investigations are performed with the new tools of experiment and theory, sometimes in integrated ways. © 2009 Elsevier Ltd. All rights reserved.
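The "quasiharmonic model" invoked above is conventionally written as a harmonic free energy whose phonon frequencies depend on volume, so thermal expansion enters through the volume dependence alone. A standard form, in our notation rather than necessarily the review's, is:

```latex
F(V,T) \;=\; E_0(V)
\;+\; \sum_{\mathbf{k},s}\frac{\hbar\,\omega_{\mathbf{k},s}(V)}{2}
\;+\; k_B T \sum_{\mathbf{k},s}\ln\!\left(1 - e^{-\hbar\,\omega_{\mathbf{k},s}(V)/k_B T}\right)
```

Here $E_0(V)$ is the static lattice energy, the second term is the zero-point energy, and the sum runs over phonon branches $s$ and wavevectors $\mathbf{k}$; equilibrium at each $T$ follows from minimizing $F(V,T)$ over $V$ against the bulk modulus. Anharmonic effects, as the review describes, are precisely those not captured by letting $\omega_{\mathbf{k},s}$ depend on volume alone.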


Keller C.A.,California Institute of Technology
Journal of High Energy Physics | Year: 2011

Since many thermodynamic properties of black holes are universal, the thermodynamics of their holographic duals should be universal too. We show how this universality is exhibited in the example of symmetric orbifolds of general two dimensional CFTs. We discuss the free energies and phase diagrams of such theories and show that they are indeed universal in the large N limit. We also comment on the implications of our results for the classification of CFTs that can have an interpretation as holographic duals to gravity theories on AdS3. © SISSA 2011.


Capponi A.,California Institute of Technology
Automatica | Year: 2010

We introduce a new methodology to construct a Gaussian mixture approximation to the true filter density in hybrid Markovian switching systems. We relax the assumption that the mode transition process is a Markov chain and allow it to depend on the actual and unobservable state of the system. The main feature of the method is that the Gaussian densities used in the approximation are selected as the solution of a convex programming problem which trades off sparsity of the solution with goodness of fit. A meaningful example shows that the proposed method can outperform the widely used interacting multiple model (IMM) filter and GPB2 in terms of accuracy at the expense of an increase in computational time. © 2009 Elsevier Ltd. All rights reserved.
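The abstract does not spell out the convex program, but the sparsity-versus-fit trade-off it describes can be sketched with a common surrogate: choose nonnegative weights over a fixed dictionary of candidate Gaussians by l1-penalized least squares, solved with projected proximal-gradient (ISTA). The target density, dictionary and penalty below are invented for illustration and are not the paper's formulation:

```python
import math

def gauss(x, mu, s):
    """1-D normal density."""
    return math.exp(-0.5 * ((x - mu) / s) ** 2) / (s * math.sqrt(2 * math.pi))

# Target density sampled on a grid, and a small dictionary of candidates.
xs = [i * 0.1 - 5.0 for i in range(101)]
target = [0.7 * gauss(x, -1.0, 0.8) + 0.3 * gauss(x, 2.0, 0.5) for x in xs]
candidates = [(-2.0, 0.8), (-1.0, 0.8), (0.0, 0.8), (1.0, 0.5), (2.0, 0.5)]
A = [[gauss(x, mu, s) for (mu, s) in candidates] for x in xs]

def ista(A, p, lam=1e-3, step=0.02, iters=2000):
    """Minimize ||A w - p||^2 + lam * sum(w) subject to w >= 0."""
    m, n = len(p), len(A[0])
    w = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * w[j] for j in range(n)) - p[i] for i in range(m)]
        g = [2.0 * sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        # gradient step followed by the nonnegative soft-threshold prox
        w = [max(0.0, w[j] - step * (g[j] + lam)) for j in range(n)]
    return w

w = ista(A, target)
print(w)  # sparse weights concentrating on the candidates near the truth
```

The l1 term drives unneeded components to exactly zero, which is the sparsity the abstract trades against goodness of fit; the paper's actual program, and its use inside a switching-system filter, is richer than this sketch.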


Nakayama Y.,California Institute of Technology
Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics | Year: 2013

Null energy condition plays a crucial role in holographic renormalization group flow, leading to the holographic c-theorem. Unfortunately, the null energy condition is quantum mechanically violated. Even the averaged version can be violated. We discuss how the anomalous violation of the null energy condition affects the holographic renormalization group flow in (1+3)-dimensional bulk gravity. We show that despite the violation of the null energy condition, a suitably modified holographic c-function with a peculiar log correction is still monotonically decreasing in so far as we add the counterterm that removes a ghost mode of gravity. © 2013 Elsevier B.V. Source


Tsao D.,California Institute of Technology
Cold Spring Harbor Symposia on Quantitative Biology | Year: 2014

The macaque brain contains a set of regions that show stronger fMRI activation to faces than other classes of object. This "face patch system" has provided a unique opportunity to gain insight into the organizing principles of IT cortex and to dissect the neural mechanisms underlying form perception, because the system is specialized to process one class of complex forms, and because its computational components are spatially segregated. Over the past 5 years, we have set out to exploit this system to clarify the nature of object representation in the brain through a multilevel approach combining electrophysiology, anatomy, and behavior. These experiments reveal (1) a remarkably precise connectivity of face patches to each other, (2) a functional hierarchy for representation of view-invariant identity comprising at least three distinct stages along the face patch system, and (3) the computational mechanisms used by cells in face patches to detect and recognize faces, including measurement of diagnostic local contrast features for detection and measurement of face feature values for recognition. © 2014 Cold Spring Harbor Laboratory Press. Source


Censi A.,California Institute of Technology
IEEE Transactions on Automatic Control | Year: 2011

This technical note shows that the stationary distribution for the covariance of Kalman filtering with intermittent observations exists under mild conditions for a very general class of packet dropping models (semi-Markov chain). These results are proved using the geometric properties of Riccati recursions with respect to a particular Riemannian distance. Moreover, the Riemannian mean induced by that distance is always bounded; it can therefore be used for characterizing the performance of the system in regimes where the moments of the covariance do not exist. Other interesting properties of that mean include the symmetry between covariance and information matrices (averaging covariances or their inverses gives the same result), and its interpretation in information geometry as the "natural" mean for the manifold of Gaussian distributions. © 2006 IEEE. Source
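The covariance/information symmetry mentioned in the note can be checked concretely in the two-matrix case, where the Riemannian mean reduces to the well-known geometric mean of symmetric positive-definite (SPD) matrices. The sketch below uses arbitrary example matrices; the paper itself works with means over Riccati iterates:

```python
import numpy as np

def spd_sqrt(M):
    """SPD matrix square root via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(w)) @ V.T

def geometric_mean(A, B):
    """Riemannian (geometric) mean of two SPD matrices:
    A # B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}."""
    As = spd_sqrt(A)
    Ais = np.linalg.inv(As)
    return As @ spd_sqrt(Ais @ B @ Ais) @ As

A = np.array([[2.0, 0.3], [0.3, 1.0]])
B = np.array([[1.0, -0.2], [-0.2, 3.0]])

M = geometric_mean(A, B)
# Covariance/information symmetry: the mean of the inverses
# equals the inverse of the mean.
M_of_inverses = geometric_mean(np.linalg.inv(A), np.linalg.inv(B))
print(np.allclose(M_of_inverses, np.linalg.inv(M)))  # → True
```

An arithmetic mean of covariances does not share this property, which is one reason the Riemannian mean is the "natural" average on the SPD manifold.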


Mackay M.,California Institute of Technology
Journal of vision | Year: 2012

Various models have been proposed to explain the interplay between bottom-up and top-down mechanisms in driving saccades rapidly to one or a few isolated targets. We investigate this relationship using eye-tracking data from subjects viewing natural scenes to test attentional allocation to high-level objects within a mathematical decision-making framework. We show the existence of two distinct types of bottom-up saliency to objects within a visual scene, which disappear within a few fixations, and modification of this saliency by top-down influences. Our analysis reveals a subpopulation of early saccades, which are capable of accurately fixating salient targets after prior fixation within the same image. These data can be described quantitatively in terms of bottom-up saliency, including an explicit face channel, weighted by top-down influences, determining the mean rate of rise of a decision-making model to a threshold that triggers a saccade. These results are compatible with a rapid subcortical pathway generating accurate saccades to salient targets after analysis by cortical mechanisms. Source
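The decision-making framework described above, in which weighted saliency sets the mean rate of rise of a signal that triggers a saccade at threshold, can be sketched as a LATER-style simulation. This is an illustrative toy model, not the paper's fitted model; `salience`, `top_down`, `theta`, and `sigma` are hypothetical parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def saccade_latencies(salience, top_down=1.0, theta=1.0, sigma=0.1, n=10000):
    """Rise-to-threshold sketch: a decision signal climbs at rate
    r ~ N(salience * top_down, sigma); a saccade fires when the signal
    reaches threshold theta, so latency = theta / r."""
    r = rng.normal(salience * top_down, sigma, n)
    r = r[r > 0]            # discard non-rising trials
    return theta / r

fast = saccade_latencies(salience=2.0)   # a highly salient target
slow = saccade_latencies(salience=1.0)   # a weakly salient target
print(fast.mean() < slow.mean())         # more salient targets are fixated sooner
```

Top-down modulation enters the same way: scaling `top_down` shifts the whole latency distribution, which is how the bottom-up/top-down interplay is expressed in rate-of-rise models.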


Wernicke B.,California Institute of Technology
Bulletin of the Geological Society of America | Year: 2011

Recently published thermochronological and paleoelevation studies in the Grand Canyon region, combined with sedimentary provenance data in both the coastal and interior portions of the North American Cordillera, place important new constraints on the paleohydrological evolution of the southwestern United States. Review and synthesis of these data lead to an interpretation where incision of a large canyon from a plain of low elevation and relief to a canyon of roughly the length and depth of modern Grand Canyon occurred primarily in Campanian time (80-70 Ma). Incision was accomplished by a main-stem, NE-flowing antecedent river with headwaters on the NE slope of the North American Cordillera in California, referred to herein after its source region as the California River. At this time, the river had cut to within a few hundred meters of its modern erosion level in western Grand Canyon, and to the level of Lower Mesozoic strata in eastern Grand Canyon. Subsequent collapse of the headwaters region into a continental borderland and coeval uplift of the Cordilleran foreland during the Laramide orogeny reversed the river's course by Paleogene time. After reversal, its terminus lay near its former source regions in what is now the Western Transverse Ranges and Salinian terrane. Its headwaters lay in the ancient Mojave/Mogollon Highlands region of Arizona and eastern California, apparently reaching as far northeast as the eastern Grand Canyon region. This system is herein referred to after its source region as the Arizona River. 
From Paleogene through late Miocene time, the interior of the Colorado Plateau was a closed basin separated from the Arizona River drainage by an asymmetrical divide in the Lees Ferry-Glen Canyon area, with a steep SW flank and gently sloping NE flank that drained into large interior lakes, fed primarily by Cordilleran/Rocky Mountain sources to the north and west, and by recycled California River detritus shed from Laramide uplifts on the plateau. By Oligocene time, the lakes had largely dried up and were replaced by ergs. By mid-Miocene time, a pulse of unroofing had lowered the erosion level of eastern Grand Canyon to within a few hundred meters of its present level, and the Arizona River drainage below modern Grand Canyon was deranged by extensional tectonism, cutting off the supply of interior detritus to the coast. Increasing moisture in the Rocky Mountains in late Miocene time reinvigorated fluviolacustrine aggradation NE of the asymmetrical divide, which was finally overtopped between 6 and 5 Ma, lowering base level in the interior of the plateau by 1500 m. This event reintegrated the former Arizona drainage system through a cascade of spillover events through Basin and Range valleys, for the first time connecting sediment sources in Colorado with the coast. This event, combined with the intensification of summer rainfall as the Gulf of California opened, increased the sediment yield through Grand Canyon by perhaps two orders of magnitude from its Miocene nadir, giving birth to the modern subcontinental-scale Colorado River drainage system. The Colorado River has thus played a major role in unroofing the interior of the Colorado Plateau, but was not an important factor in the excavation of Grand Canyon. © 2011 Geological Society of America. Source


Greer J.R.,California Institute of Technology | De Hosson J.T.M.,Zernike Institute for Advanced Materials
Progress in Materials Science | Year: 2011

A material's strength depends on its microstructure, which in turn is controlled by an engineering process. Strengthening mechanisms such as work hardening, precipitate strengthening, and grain boundary strengthening can alter the strength of a material in a predictive, quantitative manner and are readily linked to the deformation mechanism. This quantification strongly depends on the characteristic length scale of a particular microstructure, thereby dictating the bulk material's strength as a function of, for example, grain or precipitate size, twin boundary spacing, or dislocation density. This microstructural, or intrinsic, size governs the mechanical properties and post-elastic material deformation at all sample dimensions, as the classical definition of "ultimate tensile strength" deems it to be "an intensive property, therefore its value does not depend on the size of the test specimen." Yet in the last 5 years, the vast majority of uniaxial deformation experiments and computations on small-scale metallic structures unambiguously demonstrated that at the micron and sub-micron scales, this definition no longer holds true. In fact, it has been shown that in single crystals the ultimate tensile strength and the yield strength scale with external sample size in a power law fashion, sometimes attaining a significant fraction of the material's theoretical strength, and exhibiting the now-commonly-known phenomenon "smaller is stronger." Understanding of this "extrinsic size effect" at small scales is not yet mature and is currently a topic of rigorous investigations. As both the intrinsic (i.e. microstructural) and extrinsic (i.e. sample size) dimensions play a non-trivial role in the mechanical properties and material deformation mechanisms, it is critical to develop an understanding of their interplay and mutual effects on the mechanical properties and material deformation, especially in small-scale structures. 
This review focuses on providing an overview of metal-based material classes whose properties as a function of external size have been investigated and provides a critical discussion on the combined effects of intrinsic and extrinsic sizes on the material deformation behavior. © 2011 Elsevier Ltd. All rights reserved. Source
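The power-law scaling of strength with external sample size can be recovered from data by fitting a straight line in log-log coordinates, since sigma = A * D**m implies log(sigma) = m*log(D) + log(A). A brief sketch on synthetic data; the exponent -0.6, the prefactor, and the noise level are illustrative, not values reported in the review:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "smaller is stronger" data: strength vs. pillar diameter.
D = np.array([0.2, 0.5, 1.0, 2.0, 5.0])   # sample diameter, microns
m_true, A_true = -0.6, 300.0              # illustrative exponent; MPa at D = 1
sigma = A_true * D**m_true * rng.lognormal(0.0, 0.05, D.size)  # 5% scatter

# The power-law exponent is the slope in log-log coordinates.
m_fit, logA_fit = np.polyfit(np.log(D), np.log(sigma), 1)
print(round(m_fit, 2))   # negative slope: strength rises as size shrinks
```

In practice the measured exponent varies with material class, crystal structure, and initial dislocation content, which is precisely the intrinsic/extrinsic interplay the review surveys.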


Dergachev V.,California Institute of Technology
Physical Review D - Particles, Fields, Gravitation and Cosmology | Year: 2012

We introduce a high-performance implementation of a loosely coherent statistic sensitive to signals spanning a finite-dimensional manifold in parameter space. Results from full-scale simulations on Gaussian noise are discussed, as well as implications for future searches for continuous gravitational waves and related searches for slowly modulated signals, such as emitted by radio pulsars. We demonstrate an improvement of more than an order of magnitude in analysis speed over previously available algorithms. As searches for continuous gravitational waves are computationally limited, the large speedup results in gain in sensitivity. © 2012 American Physical Society. Source


Bercaw J.E.,California Institute of Technology
Organometallics | Year: 2010

A novel PNP bis(secondary phosphine)pyridine pincer ligand, 2,6-bis(2-(phenylphosphino)phenyl)pyridine, has been prepared in high yield, and the properties of the doubly deprotonated form as a ligand in K 4(PNP)2(THF)6 and (PNP)Zr(NMe2) 2 have been investigated. The neutral PNP ligand has been isolated as a mixture of noninterconverting diastereomers, due to the presence of two chirogenic phosphorus atoms of the secondary phosphines, but coordination of the dianionic form to potassium and zirconium allows for isolation of a single diastereomer in near-quantitative yield. The structure of a bis(dimethylamido) zirconium(IV) derivative of the bis(phosphido)pyridine ligand and DFT calculations suggest that the phosphides do not π-bond to early transition metals, likely due to geometric strain and possibly orbital size mismatch between phosphorus and zirconium. As a result, the soft phosphides are prone to formation of insoluble oligomers with substantial bridging of the phosphido lone pairs to other zirconium centers. © 2010 American Chemical Society. Source


Boldin M.P.,Beckman Research Institute | Baltimore D.,California Institute of Technology
Immunological Reviews | Year: 2012

Since its discovery 25 years ago, nuclear factor-κB (NF-κB) has emerged as a transcription factor that controls diverse biological functions, ranging from inflammation to learning and memory. Activation of NF-κB initiates an elaborate genetic program. Some of the NF-κB-driven genes do not encode proteins but rather are precursors to microRNAs. These microRNAs play important roles in the regulation of the inflammatory process, some being inhibitory and others activating. Here, we discuss both the regulation of their expression and the function of some of these non-coding RNA genes. We also include a personal discussion of how NF-κB was first discovered. © 2012 John Wiley & Sons A/S. Source


Bellan P.M.,California Institute of Technology
Journal of Geophysical Research: Space Physics | Year: 2012

It is shown that the low frequency plasma wave equation can be obtained much more directly than by the previously used method of solving for the determinant of a matrix involving the three components of the electric field vector. The more direct method uses a two-dimensional current density vector space that is precisely equivalent to the previously used three-dimensional electric field vector space. Unlike the electric field, the current density is restricted by the quasi-neutrality condition to a two-dimensional vector space. Comparison with previously obtained dispersion relations is provided and a method is presented for obtaining exact analytic solutions for the three roots of the cubic dispersion relation. The commonly used kinetic Alfvén dispersion relation is shown to be valid only for near-perpendicular propagation in a low beta plasma. It is shown that at a cross-over point where the perpendicular wave phase velocity equals the ion acoustic velocity, the coupling between Alfvén and fast modes vanishes and the Alfvén mode reverts to its cold form even in situations where the Alfvén velocity is smaller than the electron thermal velocity. A method is prescribed by which measurement of wave electric current density completely eliminates the space-time ambiguity previously believed to be an unavoidable shortcoming of single-spacecraft frequency measurements. © 2012. American Geophysical Union. All Rights Reserved. Source
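Exact analytic roots of a cubic dispersion relation, as the abstract describes, can be obtained with the trigonometric (Viète) form of Cardano's method, which is valid when all three roots are real. The sketch below verifies the method on a cubic with known roots; the paper's actual dispersion coefficients are not reproduced here:

```python
import numpy as np

def cubic_roots_trig(a2, a1, a0):
    """Exact roots of x^3 + a2 x^2 + a1 x + a0 = 0 via the trigonometric
    (Viete) method, valid when all three roots are real -- the relevant
    case for three coupled low-frequency wave modes."""
    # Depress the cubic: x = t - a2/3 gives t^3 + p t + q = 0.
    p = a1 - a2**2 / 3.0
    q = 2.0 * a2**3 / 27.0 - a2 * a1 / 3.0 + a0
    m = 2.0 * np.sqrt(-p / 3.0)
    phi = np.arccos(3.0 * q / (p * m)) / 3.0
    return np.array([m * np.cos(phi - 2.0 * np.pi * k / 3.0) - a2 / 3.0
                     for k in range(3)])

# Verification on x^3 - 6x^2 + 11x - 6 = 0, whose roots are 1, 2, and 3.
roots = np.sort(cubic_roots_trig(-6.0, 11.0, -6.0))
print(roots)
```

Applied to a dispersion relation in omega^2, the three branches recovered this way correspond to the Alfvén, fast, and slow (ion acoustic) modes.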


In this lecture, several examples are considered that illustrate the interplay of experiment, theory, and computations. The examples include on-water catalysis of organic reactions, enzymatic catalysis, single molecule fluctuations, and some much earlier work on electron transfer and atom or group transfer reactions. Computations have made a major impact on our understanding and in the comparisons with experiments. There are also major advantages of analytical theories that may capture in a single equation an entire field and relate experiments of one type to those of another. Such a theory has a generic quality. These topics are explored in the present lecture. © 2010 The Royal Society of Chemistry. Source


Schmandt B.,California Institute of Technology
Earth and Planetary Science Letters | Year: 2012

Broadband P-to-s scattering isolated by teleseismic receiver function analysis is used to investigate shear velocity (VS) gradients in the mantle transition zone beneath USArray. Receiver functions from 2244 stations were filtered in multiple frequency bands and migrated to depth through P and S tomography models. The depth-migrated receiver functions were stacked along their local 410 and 660km discontinuity depths to reduce stack incoherence and more accurately recover the frequency-dependent amplitudes of P410s and P660s. The stacked waveforms were inverted for one-dimensional VS between 320 and 840km depth. First, a gradient-based inversion was used to find a least-squares solution and a subsequent Monte Carlo search about that solution constrained the range of VS profiles that provide an acceptable fit to the receiver function stacks. Relative to standard reference models, all the acceptable models have diminished VS gradients surrounding the 410, a local VS gradient maximum at 490-500km depth, and an enhanced VS gradient above the 660. The total 410 VS increase of 6.3% is greater than in reference models, and it occurs over a thickness of 20km. However, 60% of this VS increase occurs over only 6km. The 20km total thickness of the 410 and diminished VS gradients surrounding the 410 are potential indications of high water content in the regional transition zone. An enhanced VS gradient overlying the 660 likely results from remnants of subduction lingering at the base of the transition zone. Cool temperatures from slabs subducted since the late Cretaceous and longer-term accumulation of former ocean crust both may contribute to the high gradient above the 660. The shallow depth of the 520km discontinuity, 490-500km, implies that the regional mean temperature in the transition zone is 110-160K cooler than the global mean. 
A concentrated VS gradient maximum centered near 660km depth and a low VS gradient below 675km confirm that the ringwoodite to perovskite+magnesiowüstite reaction is the dominant cause of the 660 in the region. © 2012 Elsevier B.V. Source
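The two-stage inversion described above, a gradient-based least-squares solution followed by a Monte Carlo search about it, can be sketched on a toy linear problem. Everything below is illustrative: a random design matrix stands in for the receiver-function forward model, and the acceptance tolerance is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy linear problem standing in for the VS inversion: d = G m + noise.
G = rng.normal(size=(40, 5))
m_true = np.array([1.0, -0.5, 0.3, 0.8, -0.2])
d = G @ m_true + rng.normal(0.0, 0.05, 40)

# Stage 1: least-squares solution (the gradient-based stage).
m_ls, *_ = np.linalg.lstsq(G, d, rcond=None)
chi2_ls = np.sum((G @ m_ls - d) ** 2)

# Stage 2: Monte Carlo search about that solution -- keep perturbed
# models whose misfit stays within a tolerance of the best fit.
accepted = []
for _ in range(2000):
    m_try = m_ls + rng.normal(0.0, 0.01, m_ls.size)
    if np.sum((G @ m_try - d) ** 2) < 1.5 * chi2_ls:
        accepted.append(m_try)
accepted = np.array(accepted)

print(len(accepted), "acceptable models bracket the least-squares solution")
```

The spread of the accepted models, rather than the single best-fit profile, is what constrains statements like the 6.3% VS increase across the 410 in the abstract.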


Bossaerts P.,California Institute of Technology
Brain structure & function | Year: 2010

Most accounts of the function of anterior insula in the human brain refer to concepts that are difficult to formalize, such as feelings and awareness. The discovery of signals that reflect risk assessment and risk learning, however, opens the door to formal analysis. Hitherto, activations have been correlated with objective versions of risk and risk prediction error, but subjective versions (influenced by pessimism/optimism or risk aversion/tolerance) exist. Activation in closely related cortical structures has been found to be both objective (anterior cingulate cortex) and subjective (inferior frontal gyrus). For this quantitative analysis of uncertainty-induced neuronal activation to further understanding of insula's role in feelings and awareness, however, formalization and documentation of the relation between uncertainty and feelings/awareness will be needed. One obvious starting point is the link with failure anxiety and error awareness. Source


Schmidt F.,California Institute of Technology | Jeong D.,Johns Hopkins University
Physical Review D - Particles, Fields, Gravitation and Cosmology | Year: 2012

The B (curl) mode of the correlation of galaxy ellipticities (shear) can be used to detect a stochastic gravitational wave background, such as that predicted by inflation. In this paper, we derive the tensor mode contributions to shear from both gravitational lensing and intrinsic alignments, using the gauge-invariant, full-sky results of Schmidt and Jeong {arXiv:1204.3625 [Phys. Rev. D (to be published)]}. We find that the intrinsic alignment contribution, calculated using the linear alignment model, is larger than the lensing contribution by an order of magnitude or more, if the alignment strength for tensor modes is of the same order as for scalar modes. This contribution also extends to higher multipoles. These results make the prospects for probing tensor modes using galaxy surveys less pessimistic than previously thought, though still very challenging. © 2012 American Physical Society. Source


Schmidt F.,California Institute of Technology | Jeong D.,Johns Hopkins University
Physical Review D - Particles, Fields, Gravitation and Cosmology | Year: 2012

We derive general covariant expressions for the six independent observable modes of distortion of ideal standard rulers in a perturbed Friedmann-Robertson-Walker spacetime. Our expressions are gauge invariant and valid on the full sky. These six modes are most naturally classified in terms of their rotational properties on the sphere, yielding two scalars, two vector (spin-1), and two tensor (spin-2) components. One scalar corresponds to the magnification, while the spin-2 components correspond to the shear. The vector components allow for a polar/axial decomposition analogous to the E/B decomposition for the shear. Scalar modes do not contribute to the axial (B-)vector, opening a new avenue to probing tensor modes. Our results apply, but are not limited to, the distortion of correlation functions (of the cosmic microwave background, 21-cm emission, or galaxies) as well as to weak lensing shear and magnification, all of which can be seen as methods relying on "standard rulers." © 2012 American Physical Society. Source


Sato Y.,California Institute of Technology
PloS one | Year: 2010

One of the least understood and most central questions confronting biologists is how initially simple clusters or sheet-like cell collectives can assemble into highly complex three-dimensional functional tissues and organs. Due to the limits of oxygen diffusion, blood vessels are an essential and ubiquitous presence in all amniote tissues and organs. Vasculogenesis, the de novo self-assembly of endothelial cell (EC) precursors into endothelial tubes, is the first step in blood vessel formation. Static imaging and in vitro models are wholly inadequate to capture many aspects of vascular pattern formation in vivo, because vasculogenesis involves dynamic changes of the endothelial cells and of the forming blood vessels, in an embryo that is changing size and shape. We have generated Tie1 transgenic quail lines Tg(tie1:H2B-eYFP) that express H2B-eYFP in all of their endothelial cells, which permit investigations into early embryonic vascular morphogenesis with unprecedented clarity and insight. By combining the power of molecular genetics with the elegance of dynamic imaging, we follow the precise patterning of endothelial cells in space and time. We show that during vasculogenesis within the vascular plexus, ECs move independently to form the rudiments of blood vessels, all while collectively moving with gastrulating tissues that flow toward the embryo midline. The aortae are a composite of somatic derived ECs forming its dorsal regions and the splanchnic derived ECs forming its ventral region. The ECs in the dorsal regions of the forming aortae exhibit variable mediolateral motions as they move rostrally; those in more ventral regions show significant lateral-to-medial movement as they course rostrally. The present results offer a powerful approach to the major challenge of studying the relative role(s) of the mechanical, molecular, and cellular mechanisms of vascular development. 
In past studies, the advantages of the molecular genetic tools available in mouse were counterbalanced by the limited experimental accessibility needed for imaging and perturbation studies. Avian embryos provide the needed accessibility, but few genetic resources. The creation of transgenic quail with labeled endothelia builds upon the important roles that avian embryos have played in previous studies of vascular development. Source


Hong W.,Howard Hughes Medical Institute | Kim D.W.,California Institute of Technology | Anderson D.J.,Howard Hughes Medical Institute
Cell | Year: 2014

Animals display a range of innate social behaviors that play essential roles in survival and reproduction. While the medial amygdala (MeA) has been implicated in prototypic social behaviors such as aggression, the circuit-level mechanisms controlling such behaviors are not well understood. Using cell-type-specific functional manipulations, we find that distinct neuronal populations in the MeA control different social and asocial behaviors. A GABAergic subpopulation promotes aggression and two other social behaviors, while neighboring glutamatergic neurons promote repetitive self-grooming, an asocial behavior. Moreover, this glutamatergic subpopulation inhibits social interactions independently of its effect to promote self-grooming, while the GABAergic subpopulation inhibits self-grooming, even in a nonsocial context. These data suggest that social versus repetitive asocial behaviors are controlled in an antagonistic manner by inhibitory versus excitatory amygdala subpopulations, respectively. These findings provide a framework for understanding circuit-level mechanisms underlying opponency between innate behaviors, with implications for their perturbation in psychiatric disorders. Copyright © 2014 Elsevier Inc. All rights reserved. Source


Duerr M.,Max Planck Institute for Nuclear Physics | Fileviez Perez P.,Max Planck Institute for Nuclear Physics | Wise M.B.,California Institute of Technology
Physical Review Letters | Year: 2013

Models where the baryon (B) and lepton (L) numbers are local gauge symmetries that are spontaneously broken at a low scale are revisited. We find new extensions of the standard model which predict the existence of fermions that carry both baryon and lepton numbers (i.e., leptoquarks). The local baryonic and leptonic symmetries can be broken at a scale close to the electroweak scale and we do not need to postulate the existence of a large desert to satisfy the experimental constraints on baryon number violating processes like proton decay. © 2013 American Physical Society. Source


Davis M.E.,California Institute of Technology
MRS Bulletin | Year: 2012

Papyrus writings from 1600-1500 BC describe cancer and the attempts at treatment. Centuries later, cancer remains a devastating disease. Given the long history of difficulties in developing cancer therapies, why is there excitement about nanoparticle medicine (nanomedicines) for fighting cancer? This article describes the current understanding of why these engineered, nano-sized medicines, which are highly multifunctional chemical systems, have the potential to provide revolutionary ways to treat cancer. This point is illustrated by physical insights at the nanoscale that allow for the development of nanoparticles that can function in both animals and humans. The human data show how we have translated two independent nanoparticle cancer therapeutics from laboratory curiosities to experimental therapeutics in human clinical trials. © 2012 Materials Research Society. Source


Hsiao E.Y.,California Institute of Technology
International Review of Neurobiology | Year: 2013

Autism spectrum disorder (ASD) is a highly heterogeneous disorder diagnosed based on the presence and severity of core abnormalities in social communication and repetitive behavior, yet several studies converge on immune dysregulation as a feature of ASD. Widespread alterations in immune molecules and responses are seen in the brains and periphery of ASD individuals, and early life immune disruptions are associated with ASD. This chapter discusses immune-related environmental and genetic risk factors for ASD, emphasizing population-wide studies and animal research that reveal potential mechanistic pathways involved in the development of ASD-related symptoms. It further reviews immunologic pathologies seen in ASD individuals and how such abnormalities can impact neurodevelopment and behavior. Finally, it evaluates emerging evidence for an immune contribution to the pathogenesis of ASD and a potential role for immunomodulatory effects in current treatments for ASD. © 2013 Elsevier Inc. Source


Brown A.S.,Columbia University | Patterson P.H.,California Institute of Technology
Schizophrenia Bulletin | Year: 2011

Accumulating evidence suggests that maternal infection is a risk factor for schizophrenia. Prospective epidemiological studies indicate that maternal influenza, toxoplasmosis, and genital/reproductive infection are associated with this disorder in offspring. Preclinical models of maternal immune activation have supported the neurobiological plausibility of these microbes in schizophrenia. Previous studies suggest that treatment or prophylactic efforts targeting these and other infections could have significant effects on reducing the incidence of schizophrenia, given that they are common in the population and the effect sizes derived from epidemiological studies of these and other microbial pathogens and schizophrenia, to date, are not small. Fortunately, the occurrence of many of these infections can be reduced with relatively practical and inexpensive interventions that are scalable to large populations given adequate resources. Hence, in the present article, we focus on the potential for prevention of schizophrenia by control of infection, using these 3 categories of infection as examples. Lessons learned from previous successful public health efforts targeting these infections, including the relative advantages and disadvantages of these measures, are reviewed. © The Author 2011. Published by Oxford University Press on behalf of the Maryland Psychiatric Research Center. All rights reserved. Source


Dionne J.A.,Stanford University | Atwater H.A.,California Institute of Technology
MRS Bulletin | Year: 2012

Electrons and photons can coexist as a single entity called a surface plasmon-an elementary excitation found at the interface between a conductor and an insulator. Because of their hybrid electric and photonic nature, plasmons allow photons to be precisely controlled on the nanoscale. Plasmons are evident in the vivid hues of rose windows, which derive their color from small metallic nanoparticles embedded in the glass. They also provide the basis for color-changing biosensors (such as home pregnancy tests), photothermal cancer treatments, improved photovoltaic cell efficiencies, and nanoscale lasers. While surface plasmons were first identified nearly 55 years ago, many of their exciting applications are yet to come. This issue of MRS Bulletin reviews the progress and promise of plasmonics-from the characterization tools that have allowed nanometer-scale probing of plasmons to the new materials that may enable low-loss, active, and quantum plasmonics. Within reach are applications ranging from integrated plasmonic circuits for nanophotonic computation to plasmonic optical tweezers for manipulation of nano-sized particles and proteins. © 2012 Materials Research Society. Source


Mayer E.A.,University of California at Los Angeles | Knight R.,Howard Hughes Medical Institute | Mazmanian S.K.,California Institute of Technology | Cryan J.F.,Alimentary Pharmabiotic Center | Tillisch K.,University of California at Los Angeles
Journal of Neuroscience | Year: 2014

The discovery of the size and complexity of the human microbiome has resulted in an ongoing reevaluation of many concepts of health and disease, including diseases affecting the CNS. A growing body of preclinical literature has demonstrated bidirectional signaling between the brain and the gut microbiome, involving multiple neurocrine and endocrine signaling mechanisms. While psychological and physical stressors can affect the composition and metabolic activity of the gut microbiota, experimental changes to the gut microbiome can affect emotional behavior and related brain systems. These findings have resulted in speculation that alterations in the gut microbiome may play a pathophysiological role in human brain diseases, including autism spectrum disorder, anxiety, depression, and chronic pain. Ongoing large-scale population-based studies of the gut microbiome and brain imaging studies looking at the effect of gut microbiome modulation on brain responses to emotion-related stimuli are seeking to validate these speculations. This article is a summary of emerging topics covered in a symposium and is not meant to be a comprehensive review of the subject. © 2014 the authors. Source


Contreras J.,University of California at Los Angeles | Rao D.S.,University of California at Los Angeles | Rao D.S.,California Institute of Technology
Leukemia | Year: 2012

MicroRNAs (miRNAs) are important regulators of gene expression in the immune system. In a few short years, their mechanism of action has been described in various cell lineages within the immune system, targets have been defined and their unique contributions to immune cell function have been examined. Certain miRNAs serve in important negative feedback loops in the immune system, whereas others serve to amplify the response of the immune system by repressing inhibitors of the response. Here, we review some of the better understood mechanisms as well as some emerging concepts of miRNA function. Future work will likely involve defining the function of specific miRNAs in specific immune cell lineages and utilizing them in the design of therapeutic strategies for diseases involving the immune system. © 2012 Macmillan Publishers Limited. All rights reserved. Source


Benson A.J.,California Institute of Technology
Physics Reports | Year: 2010

We review the current theory of how galaxies form within the cosmological framework provided by the cold dark matter paradigm for structure formation. Beginning with the pre-galactic evolution of baryonic material we describe the analytical and numerical understanding of how baryons condense into galaxies, what determines the structure of those galaxies and how internal and external processes (including star formation, merging, active galactic nuclei, etc.) determine their gross properties and evolution. Throughout, we highlight successes and failures of current galaxy formation theory. We include a review of computational implementations of galaxy formation theory and assess their ability to provide reliable modeling of this complex phenomenon. We finish with a discussion of several "hot topics" in contemporary galaxy formation theory and assess future directions for this field. © 2010 Elsevier B.V. Source


Zuckerman J.E.,University of California at Los Angeles | Davis M.E.,California Institute of Technology
Nature Reviews Drug Discovery | Year: 2015

Small interfering RNA (siRNA)-based therapies are emerging as a promising new anticancer approach, and a small number of Phase I clinical trials involving patients with solid tumours have now been completed. Encouraging results from these pioneering clinical studies show that these new therapeutics can successfully and safely inhibit targeted gene products in patients with cancer, and have taught us important lessons regarding appropriate dosages and schedules. In this Review, we critically assess these Phase I studies and discuss their implications for future clinical trial design. Key challenges and future directions in the development of siRNA-containing anticancer therapeutics are also considered. © 2015 Macmillan Publishers Limited. Source


Candes E.J.,Stanford University | Plan Y.,California Institute of Technology
IEEE Transactions on Information Theory | Year: 2011

This paper introduces a simple and very general theory of compressive sensing. In this theory, the sensing mechanism simply selects sensing vectors independently at random from a probability distribution F; it includes all standard models (e.g., Gaussian, frequency measurements) discussed in the literature, but also provides a framework for new measurement strategies as well. We prove that if the probability distribution F obeys a simple incoherence property and an isotropy property, one can faithfully recover approximately sparse signals from a minimal number of noisy measurements. The novelty is that our recovery results do not require the restricted isometry property (RIP) to hold near the sparsity level in question, nor a random model for the signal. As an example, the paper shows that a signal with s nonzero entries can be faithfully recovered from about s log n Fourier coefficients that are contaminated with noise. © 2011 IEEE. Source
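The abstract's theory covers sensing vectors drawn from a general distribution F. As a minimal numerical companion, the sketch below uses a random Gaussian sensing matrix and greedy orthogonal matching pursuit (a standard sparse-recovery algorithm, not the paper's incoherence/isotropy analysis) to recover a 4-sparse signal from 40 of 128 samples; the dimensions and seed are illustrative choices, not values from the paper.

```python
import numpy as np

def omp(A, y, s):
    """Greedy orthogonal matching pursuit: recover an s-sparse x from y = A x."""
    support = []
    residual = y.copy()
    coef = np.zeros(0)
    for _ in range(s):
        # pick the column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares fit on the selected support, then update the residual
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
n, m, s = 128, 40, 4                            # ambient dim, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)    # random Gaussian sensing matrix
x = np.zeros(n)
idx = rng.choice(n, size=s, replace=False)
x[idx] = rng.uniform(1.0, 2.0, size=s) * rng.choice([-1.0, 1.0], size=s)
x_hat = omp(A, A @ x, s)                        # noiseless recovery
```

In the noiseless, well-conditioned regime shown here, greedy recovery returns the sparse vector essentially exactly, which is the phenomenon the paper's measurement bounds quantify for ℓ1-based recovery.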


Kim I.H.,California Institute of Technology
Physical Review B - Condensed Matter and Materials Physics | Year: 2013

We study the ground state of a gapped quantum many-body system whose entanglement entropy SA can be expressed as SA = a|∂A| - γ, where a and γ are constants and |∂A| is the area of the subsystem A. By using a recently proved operator extension of strong subadditivity of entropy, we show that a certain linear combination of the real-space entanglement spectrum has a small correlation with almost any local operator. Our result implies that there exists a structure relating the real-space entanglement spectrum over different subsystems. Further, this structure is inherited from the generic property of the ground state alone, suggesting that the locality of the entanglement spectrum may be attributed to the area law of entanglement entropy. © 2013 American Physical Society. Source
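The entanglement entropy SA discussed above is computed from the Schmidt spectrum of a bipartition of the ground state. The sketch below illustrates the computation on an assumed example system (a small open transverse-field Ising chain, not the paper's general setting): exact diagonalization gives the ground state, and the half-chain entropy follows from an SVD of the reshaped state vector.

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])

def op_chain(ops):
    """Kronecker product of single-site operators over the whole chain."""
    return reduce(np.kron, ops)

def tfi_ground_state(N, g):
    """Ground state of H = -sum_i sz_i sz_{i+1} - g sum_i sx_i (open chain)."""
    H = np.zeros((2**N, 2**N))
    for i in range(N - 1):
        ops = [I2] * N; ops[i] = sz; ops[i + 1] = sz
        H -= op_chain(ops)
    for i in range(N):
        ops = [I2] * N; ops[i] = sx
        H -= g * op_chain(ops)
    w, V = np.linalg.eigh(H)
    return V[:, 0]

def half_chain_entropy(psi, N):
    """Von Neumann entropy of the left half from the Schmidt spectrum."""
    M = psi.reshape(2**(N // 2), -1)
    lam = np.linalg.svd(M, compute_uv=False)**2
    lam = lam[lam > 1e-12]
    return float(-(lam * np.log(lam)).sum())

N = 8
S_crit = half_chain_entropy(tfi_ground_state(N, 1.0), N)  # near-critical coupling
S_para = half_chain_entropy(tfi_ground_state(N, 3.0), N)  # deep paramagnet
```

As expected for a 1D system, the deep-paramagnet ground state is weakly entangled, while the near-critical one carries noticeably more half-chain entropy; both obey the trivial bound S ≤ (N/2) ln 2.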


Ishiwata K.,California Institute of Technology
Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics | Year: 2012

We consider a model where two new scalars are introduced in the standard model, assuming classical scale invariance. In this model the scale invariance is broken by quantum corrections and one of the new scalars acquires non-zero vacuum expectation value (VEV), which induces the electroweak symmetry breaking in the standard model, and the other scalar becomes dark matter. It is shown that TeV scale dark matter is realized, independent of the value of the other scalar's VEV. The impact of the new scalars on the Higgs potential is also discussed. The Higgs potential is stabilized when the Higgs mass is over ~120 GeV. © 2012 Elsevier B.V. Source


Evenbly G.,California Institute of Technology | Vidal G.,Perimeter Institute for Theoretical Physics
Physical Review Letters | Year: 2014

We propose a real-space renormalization group method to explicitly decouple into independent components a many-body system that, as in the phenomenon of spin-charge separation, exhibits separation of degrees of freedom at low energies. Our approach produces a branching holographic description of such systems that opens the path to the efficient simulation of the most entangled phases of quantum matter, such as those whose ground state violates a boundary law for entanglement entropy. As in the coarse-graining transformation of Vidal [Phys. Rev. Lett. 99, 220405 (2007)], the key ingredient of this decoupling transformation is the concept of entanglement renormalization, or removal of short-range entanglement. We demonstrate the feasibility of the approach, both analytically and numerically, by decoupling in real space the ground state of a critical quantum spin chain into two. Generalized notions of renormalization group flow and of scale invariance are also put forward. © 2014 American Physical Society. Source


Evenbly G.,California Institute of Technology | Vidal G.,Perimeter Institute for Theoretical Physics
Physical Review Letters | Year: 2014

We describe a quantum circuit that produces a highly entangled state of N qubits from which one can efficiently compute expectation values of local observables. This construction yields a variational ansatz for quantum many-body states that can be regarded as a generalization of the multiscale entanglement renormalization ansatz (MERA), which we refer to as the branching MERA. In a lattice system in D dimensions, the scaling of entanglement of a region of size L^D in the branching MERA is not subject to restrictions such as a boundary law L^(D-1), but can be proportional to the size of the region, as we demonstrate numerically. © 2014 American Physical Society. Source


Batygin K.,California Institute of Technology | Batygin K.,Harvard - Smithsonian Center for Astrophysics | Morbidelli A.,University of Nice Sophia Antipolis
Astronomical Journal | Year: 2013

A considerable fraction of multi-planet systems discovered by the observational surveys of extrasolar planets reside in mild proximity to first-order mean-motion resonances. However, the relative remoteness of such systems from nominal resonant period ratios (e.g., 2:1, 3:2, and 4:3) has been interpreted as evidence for lack of resonant interactions. Here, we show that a slow divergence away from exact commensurability is a natural outcome of dissipative evolution and demonstrate that libration of critical angles can be maintained tens of percent away from nominal resonance. We construct an analytical theory for the long-term dynamical evolution of dissipated resonant planetary pairs and confirm our calculations numerically. Collectively, our results suggest that a significant fraction of the near-commensurate extrasolar planets are in fact resonant and have undergone significant dissipative evolution. © 2013. The American Astronomical Society. All rights reserved. Source


Adlakha S.,California Institute of Technology | Johari R.,Stanford University
Operations Research | Year: 2013

We study a class of stochastic dynamic games that exhibit strategic complementarities between players; formally, in the games we consider, the payoff of a player has increasing differences between her own state and the empirical distribution of the states of other players. Such games can be used to model a diverse set of applications, including network security models, recommender systems, and dynamic search in markets. Stochastic games are generally difficult to analyze, and these difficulties are only exacerbated when the number of players is large (as might be the case in the preceding examples). We consider an approximation methodology called mean field equilibrium to study these games. In such an equilibrium, each player reacts to only the long-run average state of other players. We find necessary conditions for the existence of a mean field equilibrium in such games. Furthermore, as a simple consequence of this existence theorem, we obtain several natural monotonicity properties. We show that there exist a "largest" and a "smallest" equilibrium among all those where the equilibrium strategy used by a player is nondecreasing, and we also show that players converge to each of these equilibria via natural myopic learning dynamics; as we argue, these dynamics are more reasonable than the standard best-response dynamics. We also provide sensitivity results, where we quantify how the equilibria of such games move in response to changes in parameters of the game (for example, the introduction of incentives to players). © 2013 INFORMS. Source
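The mean field equilibrium and myopic learning dynamics described above can be illustrated with a deliberately tiny toy model (the best-response function below is made up for illustration, not taken from the paper): each player reacts only to the long-run average state m, the best response is increasing in m (strategic complementarity), and iterating from the smallest and largest initial states converges to the extremal equilibria, which here coincide because the map is a contraction.

```python
def best_response(m, a=0.2, b=0.5):
    """Toy best response to the population's average state m.
    b in (0, 1) makes it increasing in m (strategic complementarity)
    and a contraction, with fixed point a / (1 - b)."""
    return min(1.0, a + b * m)

def myopic_learning(m0, steps=200):
    """Each round, every player reacts only to the current average state."""
    m = m0
    for _ in range(steps):
        m = best_response(m)
    return m

lo = myopic_learning(0.0)   # start from the smallest population state
hi = myopic_learning(1.0)   # start from the largest population state
```

With a = 0.2 and b = 0.5 both sequences converge monotonically to the unique fixed point 0.4; in the general monotone case the two limits bracket all equilibria, mirroring the "largest" and "smallest" equilibria in the abstract.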


Perley D.A.,California Institute of Technology | Perley R.A.,U.S. National Radio Astronomy Observatory
Astrophysical Journal | Year: 2013

Highly dust-obscured starbursting galaxies (submillimeter galaxies and their ilk) represent the most extreme sites of star formation in the distant universe and contribute significantly to overall cosmic star formation beyond z > 1.5. Some stars formed in these environments may also explode as gamma-ray bursts (GRBs) and contribute to the population of "dark" bursts. Here we present Very Large Array wideband radio-continuum observations of 15 heavily dust-obscured Swift GRBs to search for radio synchrotron emission associated with intense star formation in their host galaxies. Most of these targets (11) are not detected. Of the remaining four objects, one detection is marginal, and for two others we cannot yet rule out the contribution of a long-lived radio afterglow. The final detection is secure, but indicates a star formation rate (SFR) roughly consistent with the dust-corrected UV-inferred value. Most galaxies hosting obscured GRBs are therefore not forming stars at extreme rates, and the amount of optical extinction seen along a GRB afterglow sightline does not clearly correlate with the likelihood that the host has a sufficiently high SFR to be radio-detectable. While some submillimeter galaxies do readily produce GRBs, these GRBs are often not heavily obscured - suggesting that the outer (modestly obscured) parts of these galaxies overproduce GRBs and the inner (heavily obscured) parts underproduce GRBs relative to their respective contributions to star formation, hinting at strong chemical or initial mass function gradients within these systems. © 2013. The American Astronomical Society. All rights reserved. Source


Scherler D.,California Institute of Technology
Geology | Year: 2014

Ice-free bedrock headwalls are widespread landforms of many glacial landscapes, but their formation and evolution are not well understood. Here, I present observations from the Khumbu Himalaya, eastern Nepal, of a distinct elevation zone that aligns the base of many steep headwalls with the highest predicted frost-cracking intensity (FCI). Below this zone, median ice-free hillslope angles are ≤40° and similar to those of other areas in the Himalaya, where threshold hillslopes that are close to the critical angle of stability have been inferred. At higher elevations, ice-free hillslopes of different rock types have median slope angles of ~50°-55°, suggesting threshold hillslopes with higher rock mass strength, possibly related to the presence of deep-reaching permafrost. High-altitude meteorological data combined with FCI models support frost cracking as a mechanism for headwall retreat by undercutting of threshold headwalls, while glacial transport inhibits the accumulation of scree deposits at their base. This mechanism could account for continued headwall retreat as long as climatic conditions enable frost cracking near the base of headwalls that are high enough to sustain glaciers at their base, even if subglacial erosion is minor. © 2014 Geological Society of America. Source


Li J.,TU Munich | Li J.,California Institute of Technology | Richter K.,TU Munich | Reinstein J.,Max Planck Institute for Medical Research | Buchner J.,TU Munich
Nature Structural and Molecular Biology | Year: 2013

Heat-shock protein 90 (Hsp90) is an ATP-dependent molecular chaperone that associates dynamically with various co-chaperones during its chaperone cycle. Here we analyzed the role of the activating co-chaperone Aha1 in the progression of the yeast Hsp90 chaperone cycle and identified a critical ternary Hsp90 complex containing the co-chaperones Aha1 and Cpr6. Aha1 accelerates the intrinsically slow conformational transitions of Hsp90 to an N-terminally associated state but does not fully close the nucleotide-binding pocket yet. Cpr6 increases the affinity between Aha1 and Hsp90 and further stimulates the Hsp90 ATPase activity. Synergistically, Aha1 and Cpr6 displace the inhibitory co-chaperone Sti1 from Hsp90. To complete the cycle, Aha1 is released by the co-chaperone p23. Thus, at distinct steps during the Hsp90 chaperone cycle, co-chaperones selectively trap statistically distributed Hsp90 conformers and thus turn Hsp90 into a deterministic machine. Copyright © 2013 Nature America, Inc. Source


Parker H.J.,Stowers Institute for Medical Research | Bronner M.E.,California Institute of Technology | Krumlauf R.,Stowers Institute for Medical Research | Krumlauf R.,University of Kansas
Nature | Year: 2014

A defining feature governing head patterning of jawed vertebrates is a highly conserved gene regulatory network that integrates hindbrain segmentation with segmentally restricted domains of Hox gene expression. Although non-vertebrate chordates display nested domains of axial Hox expression, they lack hindbrain segmentation. The sea lamprey, a jawless fish, can provide unique insights into vertebrate origins owing to its phylogenetic position at the base of the vertebrate tree [1-3]. It has been suggested that lamprey may represent an intermediate state where nested Hox expression has not been coupled to the process of hindbrain segmentation [4-6]. However, little is known about the regulatory network underlying Hox expression in lamprey or its relationship to hindbrain segmentation. Here, using a novel tool that allows cross-species comparisons of regulatory elements between jawed and jawless vertebrates, we report deep conservation of both upstream regulators and segmental activity of enhancer elements across these distant species. Regulatory regions from diverse gnathostomes drive segmental reporter expression in the lamprey hindbrain and require the same transcriptional inputs (for example, Kreisler (also known as Mafba) and Krox20 (also known as Egr2a)) in both lamprey and zebrafish. We find that lamprey hox genes display dynamic segmentally restricted domains of expression; we also isolated a conserved exonic hox2 enhancer from lamprey that drives segmental expression in rhombomeres 2 and 4. Our results show that coupling of Hox gene expression to segmentation of the hindbrain is an ancient trait with origin at the base of vertebrates that probably led to the formation of rhombomeric compartments with an underlying Hox code. © 2014 Macmillan Publishers Limited. All rights reserved. Source


Zewail A.H.,California Institute of Technology
Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences | Year: 2010

In this paper, the evolutionary and revolutionary developments of microscopic imaging are overviewed with a perspective on origins. From Alhazen's camera obscura, to Hooke and van Leeuwenhoek's two-dimensional optical micrography, and on to three- and four-dimensional (4D) electron microscopy, these developments over a millennium have transformed humans' scope of visualization. The changes in the length and time scales involved are unimaginable, beginning with the visible shadows of candles at the centimetre and second scales, and ending with invisible atoms with space and time dimensions of sub-nanometre and femtosecond. With these advances it has become possible to determine the structures of matter and to observe their elementary dynamics as they unfold in real time. Such observations provide the means for visualizing materials behaviour and biological function, with the aim of understanding emergent phenomena in complex systems. © 2010 The Royal Society. Source


Dougherty D.A.,California Institute of Technology
Accounts of Chemical Research | Year: 2013

The chemistry community now recognizes the cation-π interaction as a major force for molecular recognition, joining the hydrophobic effect, the hydrogen bond, and the ion pair in determining macromolecular structure and drug-receptor interactions. This Account provides the author's perspective on the intellectual origins and fundamental nature of the cation-π interaction. Early studies on cyclophanes established that water-soluble, cationic molecules would forego aqueous solvation to enter a hydrophobic cavity if that cavity was lined with π systems. Important gas phase studies established the fundamental nature of the cation-π interaction. The strength of the cation-π interaction (Li+ binds to benzene with 38 kcal/mol of binding energy; NH4+ with 19 kcal/mol) distinguishes it from the weaker polar-π interactions observed in the benzene dimer or water-benzene complexes. In addition to the substantial intrinsic strength of the cation-π interaction in gas phase studies, the cation-π interaction remains energetically significant in aqueous media and under biological conditions. Many studies have shown that cation-π interactions can enhance binding energies by 2-5 kcal/mol, making them competitive with hydrogen bonds and ion pairs in drug-receptor and protein-protein interactions. As with other noncovalent interactions involving aromatic systems, the cation-π interaction includes a substantial electrostatic component. The six (four) Cδ−-Hδ+ bond dipoles of a molecule like benzene (ethylene) combine to produce a region of negative electrostatic potential on the face of the π system. Simple electrostatics facilitate a natural attraction of cations to the surface. The trend for (gas phase) binding energies is Li+ > Na+ > K+ > Rb+: as the ion gets larger, the charge is dispersed over a larger sphere and binding interactions weaken, a classical electrostatic effect. On the other hand, polarizability does not define these interactions. Cyclohexane is more polarizable than benzene but a decidedly poorer cation binder. Many studies have documented cation-π interactions in protein structures, where lysine or arginine side chains interact with phenylalanine, tyrosine, or tryptophan. In addition, countless studies have established the importance of the cation-π interaction in a range of biological processes. Our work has focused on molecular neurobiology, and we have shown that neurotransmitters generally use a cation-π interaction to bind to their receptors. We have also shown that many drug-receptor interactions involve cation-π interactions. A cation-π interaction plays a critical role in the binding of nicotine to ACh receptors in the brain, an especially significant case. Other researchers have established important cation-π interactions in the recognition of the "histone code," in terpene biosynthesis, in chemical catalysis, and in many other systems. © 2012 American Chemical Society. Source


Park S.T.,California Institute of Technology
Physical Review A - Atomic, Molecular, and Optical Physics | Year: 2012

Solving the Dirac equation is a formidable task due to the high frequency and the degrees of freedom involved. However, this high frequency allows one to obtain an approximation to the equation. Here, we directly solve the Dirac equation using an envelope method and derive analytical solutions of Dirac wave packets to first order in the small momentum spread. We apply the insight gained from this solution to the Zitterbewegung behavior in a Dirac-like system, where we show that it is crucial to include the first-order term in our solution to correctly describe a Dirac packet. © 2012 American Physical Society. Source
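The Zitterbewegung mentioned above can be demonstrated in the simplest possible setting: a single momentum mode of the 1D Dirac Hamiltonian H = p σx + m σz (ħ = c = 1), a standard toy reduction rather than the paper's envelope method. A spinor prepared as (1, 0) mixes the positive- and negative-energy branches, so the velocity expectation ⟨σx⟩ oscillates at frequency 2E around the drift value mp/E², E = sqrt(p² + m²).

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def evolve(psi0, H, t):
    """Exact time evolution exp(-i H t) psi0 via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return V @ (np.exp(-1j * w * t) * (V.conj().T @ psi0))

p, m = 1.0, 1.0
H = p * sx + m * sz
E = np.hypot(p, m)                       # relativistic dispersion sqrt(p^2 + m^2)
psi0 = np.array([1, 0], dtype=complex)   # superposes both energy branches

ts = np.linspace(0.0, 3 * np.pi / E, 400)
vx = np.array([(evolve(psi0, H, t).conj() @ (sx @ evolve(psi0, H, t))).real
               for t in ts])             # velocity expectation <sigma_x>(t)
```

Analytically ⟨σx⟩(t) = 2 sin²(Et) mp/E², so with p = m = 1 the trace oscillates between 0 and 1 with period π/E, the trembling motion of the packet centroid.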


Varshavsky A.,California Institute of Technology
Protein Science | Year: 2012

Despite extensive understanding of sleep regulation, the molecular-level cause and function of sleep are unknown. I suggest that they originate in individual neurons and stem from increased production of protein fragments during wakefulness. These fragments are transient parts of protein complexes in which the fragments were generated. Neuronal Ca2+ fluxes are higher during wakefulness than during sleep. Subunits of transmembrane channels and other proteins are cleaved by Ca2+-activated calpains and by other nonprocessive proteases, including caspases and secretases. In the proposed concept, termed the fragment generation (FG) hypothesis, sleep is a state during which the production of fragments is decreased (owing to lower Ca2+ transients) while fragment-destroying pathways are upregulated. These changes facilitate the elimination of fragments and the remodeling of protein complexes in which the fragments resided. The FG hypothesis posits that a proteolytic cleavage, which produces two fragments, can have both deleterious effects and fitness-increasing functions. This (previously not considered) dichotomy can explain both the conservation of cleavage sites in proteins and the evolutionary persistence of sleep, because sleep would counteract deleterious aspects of protein fragments. The FG hypothesis leads to new explanations of sleep phenomena, including a longer sleep after sleep deprivation. Studies in the 1970s showed that ethanol-induced sleep in mice can be strikingly prolonged by intracerebroventricular injections of either Ca2+ alone or Ca2+ and its ionophore (Erickson et al., Science 1978;199:1219-1221; Harris, Pharmacol Biochem Behav 1979;10:527-534; Erickson et al., Pharmacol Biochem Behav 1980;12:651-656). These results, which were never interpreted in connection to protein fragments or the function of sleep, may be accounted for by the FG hypothesis about molecular causation of sleep. © 2012 The Protein Society. Source


Evenbly G.,California Institute of Technology | Vidal G.,Perimeter Institute for Theoretical Physics
Physical Review Letters | Year: 2015

We introduce a coarse-graining transformation for tensor networks that can be applied to study both the partition function of a classical statistical system and the Euclidean path integral of a quantum many-body system. The scheme is based upon the insertion of optimized unitary and isometric tensors (disentanglers and isometries) into the tensor network and has, as its key feature, the ability to remove short-range entanglement or correlations at each coarse-graining step. Removal of short-range entanglement results in scale invariance being explicitly recovered at criticality. In this way we obtain a proper renormalization group flow (in the space of tensors), one that in particular (i) is computationally sustainable, even for critical systems, and (ii) has the correct structure of fixed points, both at criticality and away from it. We demonstrate the proposed approach in the context of the 2D classical Ising model. © 2015 American Physical Society. Source
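The starting point of such coarse-graining schemes is the rewriting of the classical partition function as a contraction of local tensors. The sketch below builds the standard vertex tensor for the 2D Ising model and verifies the tensor-network form exactly on a 2x2 periodic lattice against a brute-force spin sum; the disentangling and coarse-graining steps of the method itself are beyond this minimal example.

```python
import numpy as np

def ising_vertex_tensor(beta):
    """Vertex tensor T[u,r,d,l] for the 2D Ising model at inverse temperature beta."""
    # bond matrix M[s,s'] = exp(beta*s*s'), symmetric positive definite for beta > 0
    M = np.array([[np.exp(beta), np.exp(-beta)],
                  [np.exp(-beta), np.exp(beta)]])
    w, U = np.linalg.eigh(M)
    W = U * np.sqrt(w)                     # M = W @ W.T
    # summing the spin index s reproduces one factor of M per bond
    return np.einsum('su,sr,sd,sl->urdl', W, W, W, W)

def z_tensor_network_2x2(beta):
    """Partition function as a full contraction of four vertex tensors (periodic 2x2)."""
    T = ising_vertex_tensor(beta)
    # index order (up, right, down, left); shared letters are contracted bonds
    return np.einsum('pqrs,tsuq,rvpw,uwtv->', T, T, T, T)

def z_brute_force_2x2(beta):
    """Direct sum over all 16 spin configurations of the 2x2 periodic lattice."""
    Z = 0.0
    for bits in range(16):
        s = np.array([1 if (bits >> k) & 1 else -1 for k in range(4)]).reshape(2, 2)
        E = 0.0
        for i in range(2):
            for j in range(2):
                E += s[i, j] * s[i, (j + 1) % 2] + s[i, j] * s[(i + 1) % 2, j]
        Z += np.exp(beta * E)
    return Z
```

Because each contracted bond index reproduces exactly one Boltzmann factor M[s,s'], the two computations agree to machine precision at any temperature; coarse-graining methods then contract such networks approximately for large lattices.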


Evenbly G.,California Institute of Technology | Vidal G.,Perimeter Institute for Theoretical Physics
Physical Review B - Condensed Matter and Materials Physics | Year: 2014

We investigate the scaling of entanglement entropy in both the multiscale entanglement renormalization ansatz (MERA) and in its generalization, the branching MERA. We provide analytical upper bounds for this scaling, which take the general form of a boundary law with various types of multiplicative corrections, including power-law corrections all the way to a bulk law. For several cases of interest, we also provide numerical results that indicate that these upper bounds are saturated to leading order. In particular, we establish that, by a suitable choice of holographic tree, the branching MERA can reproduce the logarithmic multiplicative correction of the boundary law observed in Fermi liquids and spin-Bose metals in D≥2 dimensions. © 2014 American Physical Society. Source


Li L.,California Institute of Technology
Blood | Year: 2013

Bcl11b is a T-cell specific gene in hematopoiesis that begins expression during T-lineage commitment and is required for this process. Aberrant expression of BCL11B or proto-oncogene translocation to the vicinity of BCL11B can be a contributing factor in human T-ALL. To identify the mechanism that controls its distinctive T-lineage expression, we corrected the identified Bcl11b transcription start site and mapped a cell-type-specific differentially methylated region bracketing the Bcl11b promoter. We identified a 1.9-kb region 850 kb downstream of Bcl11b, "Major Peak," distinguished by its dynamic histone marking pattern in development that mirrors the pattern at the Bcl11b promoter. Looping interactions between promoter-proximal elements including the differentially methylated region and downstream elements in the Major Peak are required to recapitulate the T-cell specific expression of Bcl11b in stable reporter assays. Functional dissection of the Major Peak sequence showed distinct subregions, in which TCF-1 sites and a conserved element were required for T-lineage-specific activation and silencing in non-T cells. A bacterial artificial chromosome encompassing the full Bcl11b gene still required the addition of the Major Peak to exhibit T-cell specific expression. Thus, promoter-proximal and Major Peak sequences are cis-regulatory elements that interact over 850 kb to control expression of Bcl11b in hematopoietic cells. Source


Patterson P.H.,California Institute of Technology
Trends in Molecular Medicine | Year: 2011

Recent studies have highlighted a connection between infection during pregnancy and the increased risk of autism in the offspring. Parallel studies of cerebral spinal fluid, blood and postmortem brains reveal an ongoing, hyper-responsive inflammatory-like state in many young as well as adult autism subjects. There are also indications of gastrointestinal problems in at least a subset of autistic children. Work on the maternal infection risk factor using animal models indicates that aspects of brain and peripheral immune dysregulation can begin during fetal development and continue through adulthood. The offspring of infected or immune-activated dams also display cardinal behavioral features of autism, as well as neuropathology consistent with that seen in human autism. These rodent models are proving useful for the study of pathogenesis and gene-environment interactions as well as for the exploration of potential therapeutic strategies. © 2011 Elsevier Ltd. Source


Arbelaez P.,University of California at Berkeley | Maire M.,California Institute of Technology | Fowlkes C.,University of California at Irvine | Malik J.,University of California at Berkeley
IEEE Transactions on Pattern Analysis and Machine Intelligence | Year: 2011

This paper investigates two fundamental problems in computer vision: contour detection and image segmentation. We present state-of-the-art algorithms for both of these tasks. Our contour detector combines multiple local cues into a globalization framework based on spectral clustering. Our segmentation algorithm consists of generic machinery for transforming the output of any contour detector into a hierarchical region tree. In this manner, we reduce the problem of image segmentation to that of contour detection. Extensive experimental evaluation demonstrates that both our contour detection and segmentation methods significantly outperform competing algorithms. The automatically generated hierarchical segmentations can be interactively refined by user-specified annotations. Computation at multiple image resolutions provides a means of coupling our system to recognition applications. © 2006 IEEE. Source
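The reduction of segmentation to contour detection can be sketched with a toy version of the idea (not the authors' actual gPb-owt-ucm pipeline; the grid and contour strengths below are made up): treat the contour strength between adjacent pixels as an edge weight and merge regions whose separating strength falls below an increasing threshold. Because merges only accumulate as the threshold grows, the partitions are nested, which is exactly what makes a hierarchical region tree possible.

```python
def find(parent, x):
    # union-find with path compression
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def segment(n_pixels, edges, tau):
    """Merge pixels whose separating contour strength is below tau."""
    parent = list(range(n_pixels))
    for strength, a, b in edges:
        if strength < tau:
            ra, rb = find(parent, a), find(parent, b)
            if ra != rb:
                parent[ra] = rb
    return [find(parent, i) for i in range(n_pixels)]

# toy 2x6 pixel grid with two vertical contours of strength 0.3 and 0.7
H, W = 2, 6
def pid(i, j):
    return i * W + j

edges = []
for i in range(H):
    for j in range(W):
        if j + 1 < W:   # contours sit between columns 1|2 and 3|4
            s = 0.3 if j == 1 else 0.7 if j == 3 else 0.0
            edges.append((s, pid(i, j), pid(i, j + 1)))
        if i + 1 < H:   # vertical neighbors carry no contour
            edges.append((0.0, pid(i, j), pid(i + 1, j)))

# sweeping the threshold produces coarser and coarser nested partitions
levels = [segment(H * W, edges, tau) for tau in (0.1, 0.5, 0.9)]
```

At thresholds 0.1, 0.5, and 0.9 the toy image splits into 3, 2, and 1 regions respectively, and every fine region lies entirely inside one coarse region.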


Palmer J.H.,California Institute of Technology
Structure and Bonding | Year: 2012

The transition metal complexes of the non-innocent, electron-rich corrole macrocycle are discussed. A detailed summary of the investigations to determine the physical oxidation states of formally iron(IV) and cobalt(IV) corroles as well as formally copper(III) corroles is presented. Electronic structures and reactivity of other metallocorroles are also discussed, and comparisons between corrole and porphyrin complexes are made where data are available. The growing assortment of second-row corrole complexes is discussed and compared to first-row analogs, and work describing the synthesis and characterization of third-row corroles is summarized. Emphasis is placed on the role of spectroscopic and computational studies in elucidating oxidation states and electronic configurations. © 2011 Springer-Verlag Berlin Heidelberg. Source


Feinberg E.H.,Harvard University | Meister M.,Harvard University | Meister M.,California Institute of Technology
Nature | Year: 2015

More than twenty types of retinal ganglion cells conduct visual information from the eye to the rest of the brain. Each retinal ganglion cell type tessellates the retina in a regular mosaic, so that every point in visual space is processed for visual primitives such as contrast and motion. This information flows to two principal brain centres: the visual cortex and the superior colliculus. The superior colliculus plays an evolutionarily conserved role in visual behaviours, but its functional architecture is poorly understood. Here we report on population recordings of visual responses from neurons in the mouse superior colliculus. Many neurons respond preferentially to lines of a certain orientation or movement axis. We show that cells with similar orientation preferences form large patches that span the vertical thickness of the retinorecipient layers. This organization is strikingly different from the randomly interspersed orientation preferences in the mouse's visual cortex; instead, it resembles the orientation columns observed in the visual cortices of large mammals. Notably, adjacent superior colliculus orientation columns have only limited receptive field overlap. This is in contrast to the organization of visual cortex, where each point in the visual field activates neurons with all preferred orientations. Instead, the superior colliculus favours specific contour orientations within ∼30° regions of the visual field, a finding with implications for behavioural responses mediated by this brain centre. © 2015, Macmillan Publishers Limited. All rights reserved. Source


Hess S.,California Institute of Technology
Proteomics - Clinical Applications | Year: 2013

The emerging field of chemo- and pharmacoproteomics studies the mechanisms of action of bioactive molecules in a systems pharmacology context. In contrast to traditional drug discovery, pharmacoproteomics integrates the mechanism of a drug's action, its side effects including toxicity, and the discovery of new drug targets in a single approach. Thus, it determines early favorable (e.g. multiple kinase targets in cancer drugs) and unfavorable (e.g. side effects) polypharmacology. Target profiling is accomplished using either active site-labeling probes or immobilized drugs. This strategy identifies direct targets and has in fact enabled even the determination of binding curves and half maximum inhibitory concentrations of these targets. In addition, the enrichment greatly reduces the complexity of the proteome to be analyzed by quantitative MS. Complementary to these approaches, global proteomics profiling, which studies drug treatment-induced changes in protein expression levels and/or post-translational modification status, has started to become possible, mostly due to significant improvements in instrumentation. Particularly when using multidimensional separations, a considerable proteome depth of up to 10 000 proteins can be achieved with current state-of-the-art mass spectrometers and bioinformatics tools. In summary, chemo- and pharmacoproteomics has already contributed significantly to the identification of novel drug targets and their mechanisms of action. Aided by further technological advancements, this interdisciplinary approach will likely be used more broadly in the future. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim. Source


Landau Z.,University of California at Berkeley | Vazirani U.,University of California at Berkeley | Vidick T.,California Institute of Technology
Nature Physics | Year: 2015

The density matrix renormalization group method has been extensively used to study the ground state of 1D many-body systems since its introduction two decades ago. In spite of its wide use, this heuristic method is known to fail in certain cases and no certifiably correct implementation is known, leaving researchers faced with an ever-growing toolbox of heuristics, none of which is guaranteed to succeed. Here we develop a polynomial time algorithm that provably finds the ground state of any 1D quantum system described by a gapped local Hamiltonian with constant ground-state energy. The algorithm is based on a framework that combines recently discovered structural features of gapped 1D systems with an efficient construction of a class of operators called approximate ground-state projections (AGSPs). The combination of these tools yields a method that is guaranteed to succeed in all 1D gapped systems. An AGSP-centric approach may help guide the search for algorithms for more general quantum systems, including for the central challenge of 2D systems, where even heuristic methods have had more limited success. © 2015 Macmillan Publishers Limited. All rights reserved. Source


Nakayama Y.,California Institute of Technology
Nuclear Physics B | Year: 2015

When a globally supersymmetric theory is scale invariant, it must possess a Virial supercurrent supermultiplet. The multiplet structure is analogous to the R-current supermultiplet in globally R-symmetric theories, but we put extra "i"s in various formulae. We construct a novel type of supergravity from gauging the Virial supercurrent supermultiplet in d = 1 + 3 dimensions. We give the full non-linear superspace action with the help of a covariantly linear unitary superconformal compensator. The resulting supergravity is peculiar: (1) no Einstein-like second order kinetic term is allowed without matter, (2) there exists a dynamical non-geometrical connection, (3) the metric is unimodular in the Wess-Zumino gauge and can be coupled only to scale invariant matter. Examples of matter couplings and higher derivative kinetic terms are studied from the superconformal approach. Our work completes the classification of the irreducible N = 1 supergravities in d = 1 + 3 dimensions. © 2015. Source


Rothenberg E.V.,California Institute of Technology
Trends in Immunology | Year: 2014

T cell development from multipotent progenitors to specialized effector subsets of mature T cells is guided by the iterative action of transcription factors. At each stage, transcription factors interact not only with an existing landscape of histone modifications and nucleosome packing, but also with other bound factors, while they modify the landscape for later-arriving factors in ways that fundamentally affect the control of gene expression. This review covers insights from genome-wide analyses of transcription factor binding and resulting chromatin conformation changes that reveal roles of cytokine signaling in effector T cell programming, the ways in which one factor can completely transform the impacts of previously bound factors, and the ways in which the baseline chromatin landscape is established during early T cell lineage commitment. © 2014 Elsevier Ltd. Source


Roman-Leshkov Y.,Massachusetts Institute of Technology | Davis M.E.,California Institute of Technology
ACS Catalysis | Year: 2011

Current interest in reacting carbonyl-containing molecules in aqueous media is primarily due to the growing emphasis on conversion of biomass to fuels and chemicals. Recently, solid Lewis acids have been shown to perform catalytic reactions with carbonyl-containing molecules such as sugars in aqueous media. Here, catalysis mediated by Lewis acids is briefly discussed, Lewis acid solids that perform catalysis in aqueous media are then described, and the review is concluded with a few comments on the outlook for the future. © 2011 American Chemical Society. Source


An H.,California Institute of Technology | Wang L.-T.,University of Chicago
Physical Review Letters | Year: 2015

Light top superpartners play a key role in stabilizing the electroweak scale in supersymmetric theories. For R-parity-conserving supersymmetric models, traditional searches are not sensitive to the compressed region. In this Letter, we propose a new method targeting this region, with top squark and neutralino mass splittings ranging from roughly the top quark mass down to about 20 GeV. In particular, we focus on the signal process in which a pair of top squarks is produced in association with a hard jet, and we define a new observable R_M whose distribution has a peak in this compressed region. The position of the peak is closely correlated with the top squark mass. We show that for the 13 TeV LHC with a luminosity of 3000 fb⁻¹, this analysis can extend the reach for the top squark in the compressed region to masses around 800 GeV. © 2015 American Physical Society. Source


Eisenberg R.,University of Rochester | Gray H.B.,California Institute of Technology
Inorganic Chemistry | Year: 2011

Noninnocence in inorganic chemistry traces its roots back half a century to work that was done on metal complexes containing unsaturated dithiolate ligands. In a flurry of activity in the early 1960s by three different research groups, homoleptic bis and tris complexes of these ligands, which came to be known as dithiolenes, were synthesized, and their structural, electrochemical, spectroscopic, and magnetic properties were investigated. The complexes were notable for facile one-electron transfers and intense colors in solution, and conventional oxidation-state descriptions could not account for their electronic structures. The bis complexes were, in general, found to be square-planar, including the first examples of this geometry for paramagnetic complexes and different formal dⁿ configurations. Several of the neutral and monoanionic tris complexes were found to have trigonal-prismatic coordination, the first time that this geometry had been observed in molecular metal complexes. Electronic structural calculations employing extended Hückel and other semiempirical computational methods revealed extensive ligand-metal mixing in the frontier orbitals of these systems, including the observation of structures in which filled metal-based orbitals were more stable than ligand-based orbitals of the same type, suggesting that the one-electron changes upon oxidation or reduction were occurring on the ligand rather than on the metal center. A summary of this early work is followed with a brief section on the current interpretations of these systems based on more advanced spectroscopic and computational methods. The take-home message is that the early work did indeed provide a solid foundation for what was to follow in investigations of metal complexes containing redox-active ligands. © 2011 American Chemical Society. Source


Agapie T.,California Institute of Technology
Coordination Chemistry Reviews | Year: 2011

Selective production of linear α-olefins is of significant commercial interest. Recently discovered catalytic systems based on titanium, tantalum, and chromium show remarkable selectivity and productivity for the oligomerization of ethylene to 1-hexene or 1-octene. Chromium-based catalysts are the most selective and active and show the highest structural diversity. This paper discusses the most recent advances in chromium chemistry related to selective olefin oligomerization. Aspects regarding ligand design, catalyst generation, selectivity for different products, and reaction mechanism are presented. Isotopic labeling protocols designed to distinguish between various mechanisms of catalysis are reviewed. © 2010 Elsevier B.V. Source


Rutledge D.,California Institute of Technology
International Journal of Coal Geology | Year: 2011

An estimate for world coal production in the long run would be helpful for developing policies for alternative energy sources and for climate change. This production has often been estimated from reserves that are calculated from measurements of coal seams. We show that where the estimates based on reserves can be tested in mature coal regions, they have been too high, and that more accurate estimates can be made by curve fits to the production history. These curve fits indicate that total world production, including past and future production, will be 680 Gt. The historical range for these fits, made on an annual basis from 1995 to 2009, is 653 Gt to 749 Gt, or 14% in percentage terms. The curve fits also indicate that 90% of the total production will have taken place by 2070. This gives the time scale for considering alternatives. This estimate for total production is somewhat less than the current reserves plus cumulative production, 1163 Gt, and very much less than the amount of coal that the UN Intergovernmental Panel on Climate Change, or IPCC, assumes is available for its scenarios. The maximum cumulative coal production through 2100 in an IPCC scenario is 3500 Gt. © 2010 Elsevier B.V. Source
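The curve-fitting approach described above can be sketched numerically. The abstract does not specify the fitting function, so the example below assumes a simple logistic (Hubbert-style) model for cumulative production and fits it to synthetic data; all function names and parameter values are illustrative only.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, q_total, k, t0):
    """Cumulative production following a logistic curve.
    q_total: ultimate total (Gt), k: growth rate, t0: inflection year."""
    return q_total / (1.0 + np.exp(-k * (t - t0)))

# Synthetic "history": a logistic with an ultimate total of 680 Gt plus noise.
rng = np.random.default_rng(0)
years = np.arange(1900, 2010, dtype=float)
observed = logistic(years, 680.0, 0.05, 1990.0) + rng.normal(0.0, 5.0, years.size)

# Fit the three parameters to the noisy cumulative series.
(q_hat, k_hat, t0_hat), _ = curve_fit(
    logistic, years, observed, p0=(1000.0, 0.03, 2000.0))
print(f"estimated ultimate production: {q_hat:.0f} Gt")
```

With real annual production data one would fit the cumulative series in the same way and read the ultimate total off the fitted asymptote; repeating the fit with data truncated at successive years gives a historical range analogous to the 653-749 Gt spread quoted above.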


Haah J.,California Institute of Technology
Communications in Mathematical Physics | Year: 2013

We study unfrustrated spin Hamiltonians that consist of commuting tensor products of Pauli matrices. Assuming translation-invariance, a family of Hamiltonians that belong to the same phase of matter is described by a map between modules over the translation-group algebra, so homological methods are applicable. In any dimension every point-like charge appears as a vertex of a fractal operator, and can be isolated with energy barrier at most logarithmic in the separation distance. For a topologically ordered system in three dimensions, there must exist a point-like nontrivial charge. A connection between the ground state degeneracy and the number of points on an algebraic set is discussed. Tools to handle local Clifford unitary transformations are given. © 2013 Springer-Verlag Berlin Heidelberg. Source
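The Hamiltonians considered above are built from commuting tensor products of Pauli matrices. Whether two Pauli strings commute can be checked with the standard symplectic (binary X/Z) representation; a minimal sketch with hypothetical helper names:

```python
import numpy as np

def pauli_to_xz(s):
    """Map a Pauli string like 'XZIY' to binary X- and Z-indicator vectors."""
    x = np.array([c in 'XY' for c in s], dtype=int)
    z = np.array([c in 'ZY' for c in s], dtype=int)
    return x, z

def commute(a, b):
    """Two Pauli strings commute iff their symplectic inner product
    x_a . z_b + z_a . x_b is even (mod 2)."""
    xa, za = pauli_to_xz(a)
    xb, zb = pauli_to_xz(b)
    return (xa @ zb + za @ xb) % 2 == 0

print(commute('XX', 'ZZ'))  # True: the two single-site anticommutations cancel
print(commute('XI', 'ZI'))  # False: X and Z anticommute on the first site
```

A translation-invariant commuting Pauli Hamiltonian is then a set of such strings (and their lattice translates) that pairwise satisfy this check, which is the starting point for the algebraic treatment in the paper.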


Preskill J.,California Institute of Technology
Quantum Information and Computation | Year: 2013

I study the effectiveness of fault-tolerant quantum computation against correlated Hamiltonian noise, and derive a sufficient condition for scalability. Arbitrarily long quantum computations can be executed reliably provided that noise terms acting collectively on k system qubits are sufficiently weak, and decay sufficiently rapidly with increasing k and with increasing spatial separation of the qubits. © Rinton Press. Source


Wu T.Y.,California Institute of Technology
Annual Review of Fluid Mechanics | Year: 2011

This expository review is devoted to fish swimming and bird/insect flight. (a) The simple waving motion of an elongated flexible ribbon plate of constant width, propagating a wave distally down the plate to swim forward in a fluid initially at rest, is first considered to provide a fundamental concept of energy conservation. It is generalized to include variations in body width and thickness, with appended dorsal, ventral and caudal fins shedding vortices to closely simulate fish swimming, for which a nonlinear theory is presented for large-amplitude propulsion. (b) For bird flight, the pioneering studies on oscillatory rigid wings are discussed, delineating a fully nonlinear unsteady theory for a two-dimensional flexible wing with arbitrary variations in shape and trajectory to provide a comparative study with experiments. (c) For insect flight, recent advances in aerodynamic theory and modeling, computational methods, and experiments are reviewed for forward and hovering flights, in which production of a leading-edge vortex yields unsteady high lift. (d) Prospects are explored for extracting prevailing intrinsic flow energy by fish and birds to enhance thrust for propulsion. (e) The mechanical and biological principles are drawn together for unified studies on the energetics in deriving metabolic power for animal locomotion, leading to the surprising discovery that the hydrodynamic viscous drag on swimming fish is largely associated with laminar boundary layers, thus providing sound evidence for a resolution of the long-standing fish-swim paradox proclaimed by Gray (1936, 1968). © 2011 by Annual Reviews. All rights reserved. Source


Bravyi S.,IBM | Haah J.,California Institute of Technology
Physical Review A - Atomic, Molecular, and Optical Physics | Year: 2012

We propose a family of error-detecting stabilizer codes with an encoding rate of 1/3 that permit a transversal implementation of the gate T = exp(-iπZ/8) on all logical qubits. These codes are used to construct protocols for distilling high-quality "magic" states |T⟩ by Clifford group gates and Pauli measurements. The distillation overhead scales as O(log^γ(1/ε)), where ε is the output accuracy and γ = log₂3 ≈ 1.6. To construct the desired family of codes, we introduce the notion of a triorthogonal matrix, a binary matrix in which any pair and any triple of rows have even overlap. Any triorthogonal matrix gives rise to a stabilizer code with a transversal T gate on all logical qubits, possibly augmented by Clifford gates. A powerful numerical method for generating triorthogonal matrices is proposed. Our techniques lead to a twofold overhead reduction for distilling magic states with accuracy ε ∼ 10⁻¹² compared with previously known protocols. © 2012 American Physical Society. Source
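The triorthogonality condition quoted above (every pair and every triple of rows has even overlap) is straightforward to check directly; a small sketch with an illustrative matrix that is not one of the paper's codes:

```python
import numpy as np
from itertools import combinations

def is_triorthogonal(G):
    """Check that every pair and every triple of distinct rows of the
    binary matrix G has even overlap (sum of entrywise products mod 2)."""
    G = np.asarray(G) % 2
    rows = range(G.shape[0])
    pairs_ok = all((G[i] * G[j]).sum() % 2 == 0
                   for i, j in combinations(rows, 2))
    triples_ok = all((G[i] * G[j] * G[k]).sum() % 2 == 0
                     for i, j, k in combinations(rows, 3))
    return pairs_ok and triples_ok

# Toy example: pairwise overlaps are all 2 and no column is shared by
# all three rows, so both conditions hold.
G = np.array([[1, 1, 1, 1, 0, 0],
              [0, 0, 1, 1, 1, 1],
              [1, 1, 0, 0, 1, 1]])
print(is_triorthogonal(G))  # True
```

A search for triorthogonal matrices, as proposed in the paper, would use a check like this as the acceptance test inside a numerical generation loop.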


Zwiebel B.I.,California Institute of Technology
Journal of High Energy Physics | Year: 2012

The superconformal index is an important invariant of superconformal field theories. In this note we refine the superconformal index by inserting the charge conjugation operator C. We construct a matrix integral for this charged index for N = 4 SYM with SU(N) gauge group. The key ingredient for the construction is a "charged character", which reduces to Tr(C) for singlet representations of the gauge group. For each irreducible real SU(N) representation, we conjecture that this charged character is equal to the standard character for a corresponding representation of SO(N + 1) or SP(N - 1), for N even or odd respectively. The matrix integral for the charged index passes tests for small N and for N → ∞. Like the ordinary superconformal index, for N = 4 SYM the charged index is independent of N in the large-N limit. © SISSA 2012. Source


Chueh W.C.,Sandia National Laboratories | Haile S.M.,California Institute of Technology
Annual Review of Chemical and Biomolecular Engineering | Year: 2012

Mixed ion and electron conductors (MIECs) have garnered increased attention as active components in the electrodes of solid oxide electrolyzers (for electricity to fuel conversion) and especially of solid oxide fuel cells (for fuel to electricity conversion). Although much of the work in the literature is directed toward the understanding of oxygen electroreduction on the surfaces of MIECs, more recent studies also explore the role of these materials in fuel electrooxidation. In both cases, the rich chemical and electronic behaviors of MIECs imply a broad range of possible reaction pathways. We highlight the significant progress that has been made in elucidating these pathways through well-designed experimental and computational studies. At the macroscopic level, patterned electrode studies enable identification of active sites, whereas at the microscopic level, surface-sensitive techniques in combination with atomistic-level simulations are beginning to reveal the nature of the rate-determining step(s) and enable rational design of materials with enhanced activity. Copyright © 2012 by Annual Reviews. All rights reserved. Source


Ooguri H.,California Institute of Technology | Ooguri H.,University of Tokyo | Park C.-S.,University of California at Santa Cruz
Physical Review Letters | Year: 2011

We present a string theory construction of a gravity dual of a spatially modulated phase. Our earlier work shows that the Chern-Simons term in the five-dimensional Maxwell theory destabilizes the Reissner-Nordström black holes in anti-de Sitter space if the Chern-Simons coupling is sufficiently high. In this Letter, we show that a similar instability is realized on the world volume of 8-branes in the Sakai-Sugimoto model in the quark-gluon plasma phase. Our result suggests a new spatially modulated phase in quark-gluon plasma when the baryon density is above 0.8 N_f fm⁻³ at a temperature of 150 MeV. © 2011 American Physical Society. Source


Nakayama Y.,California Institute of Technology
Journal of High Energy Physics | Year: 2015

The bulk locality in the constructive holographic renormalization group requires miraculous cancellations among various local renormalization group functions. The cancellation is not only from the properties of the spectrum but from more detailed aspects of operator product expansions in relation to conformal anomaly. It is remarkable that one-loop computation of the universal local renormalization group functions in the weakly coupled limit of the N = 4 super Yang-Mills theory fulfils the necessary condition for the cancellation in the strongly coupled limit in its SL(2, Z) duality invariant form. From the consistency between the quantum renormalization group and the holographic renormalization group, we determine some unexplored local renormalization group functions (e.g. the diffusive term in the beta function for the gauge coupling constant) in the strongly coupled limit of the planar N = 4 super Yang-Mills theory. © 2015, The Author(s). Source


Robertson B.E.,California Institute of Technology
Astrophysical Journal | Year: 2010

The installation of the Wide Field Camera 3 (WFC3) on the Hubble Space Telescope (HST) will revolutionize the study of high-redshift galaxy populations. Initial observations of the HST Ultra Deep Field (UDF) have yielded multiple z ≳ 7 dropout candidates. Supplemented by the GOODS Early Release Science (ERS) and further UDF pointings, these data will provide crucial information about the most distant known galaxies. However, achieving tight constraints on the z ∼ 7 galaxy luminosity function (LF) will require even more ambitious photometric surveys. Using a Fisher matrix approach to fully account for Poisson and cosmic sample variance, as well as covariances in the data, we estimate the uncertainties on LF parameters achieved by surveys of a given area and depth. Applying this method to WFC3 z ∼ 7 dropout galaxy samples, we forecast the LF parameter uncertainties for a variety of model surveys. We demonstrate that performing a wide-area (∼1 deg²) survey to H_AB ∼ 27 depth or increasing the UDF depth to H_AB ∼ 30 provides excellent constraints on the z ∼ 7 LF when combined with the existing Ultradeep Field Guest Observation and GOODS ERS data. We also show that the shape of the matter power spectrum may limit the possible gain of splitting wide-area (≳0.5 deg²) high-redshift surveys into multiple fields to probe statistically independent regions; the increased rms density fluctuations in smaller volumes mostly offset the improved variance gained from independent samples. © 2010. The American Astronomical Society. All rights reserved. Source
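The Fisher-matrix machinery invoked above can be illustrated in miniature. The sketch below accounts only for Poisson variance (no cosmic variance or data covariances, which the paper does include) for binned galaxy counts drawn from a Schechter-like luminosity function; the bins, fiducial parameters, and function names are all hypothetical.

```python
import numpy as np

L = np.linspace(0.1, 3.0, 30)  # luminosity bins, in units of a fixed L*
dL = L[1] - L[0]

def expected_counts(theta):
    """Mean counts per bin for a Schechter-like form with fixed L*.
    theta = (log10 of the normalization, faint-end slope alpha)."""
    log_phi, alpha = theta
    return 10.0**log_phi * L**alpha * np.exp(-L) * dL

def fisher_poisson(theta, eps=1e-5):
    """F_ij = sum_k (1/mu_k) (dmu_k/dtheta_i)(dmu_k/dtheta_j),
    the Fisher matrix for independent Poisson bins, via central differences."""
    mu = expected_counts(theta)
    grad = []
    for i in range(len(theta)):
        tp, tm = np.array(theta, float), np.array(theta, float)
        tp[i] += eps
        tm[i] -= eps
        grad.append((expected_counts(tp) - expected_counts(tm)) / (2 * eps))
    g = np.array(grad)
    return (g / mu) @ g.T

F = fisher_poisson((2.0, -1.3))
sigma = np.sqrt(np.diag(np.linalg.inv(F)))
print("marginalized 1-sigma errors:", sigma)
```

A larger survey area multiplies the expected counts and shrinks these errors roughly as one over the square root of the area; folding in cosmic variance and covariances, as the paper does, adds further terms to F.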


Ellis R.S.,California Institute of Technology
Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences | Year: 2010

I review the development of gravitational lensing as a powerful tool of the observational cosmologist. After the historic eclipse expedition organized by Arthur Eddington and Frank Dyson, the subject lay observationally dormant for 60 years. However, subsequent progress has been astonishingly rapid, especially in the past decade, so that gravitational lensing now holds the key to unravelling the two most profound mysteries of our Universe - the nature and distribution of dark matter, and the origin of the puzzling cosmic acceleration first identified in the late 1990s. In this non-specialist review, I focus on the unusual history and achievements of gravitational lensing and its future observational prospects. © 2010 The Royal Society. Source


This review provides an overview of the involvement of the corpus callosum (CC) in a variety of developmental disorders that are currently defined exclusively by genetics, developmental insult, and/or behavior. I begin with a general review of CC development, connectivity, and function, followed by discussion of the research methods typically utilized to study the callosum. The bulk of the review concentrates on specific developmental disorders, beginning with agenesis of the corpus callosum (AgCC), the only condition diagnosed exclusively by callosal anatomy. This is followed by a review of several genetic disorders that commonly result in social impairments and/or psychopathology similar to AgCC (neurofibromatosis-1, Turner syndrome, 22q11.2 deletion syndrome, Williams syndrome, and fragile X) and two forms of prenatal injury (premature birth, fetal alcohol syndrome) known to impact callosal development. Finally, I examine callosal involvement in several common developmental disorders defined exclusively by behavioral patterns (developmental language delay, dyslexia, attention-deficit hyperactivity disorder, autism spectrum disorders, and Tourette syndrome). © 2010 Springer Science+Business Media, LLC. Source


Strobl-Mazzulla P.H.,CONICET | Bronner M.E.,California Institute of Technology
Seminars in Cancer Biology | Year: 2012

The epithelial-to-mesenchymal transition (EMT) is an important event converting compact and ordered epithelial cells into migratory mesenchymal cells. Given the molecular and cellular similarities between pathological and developmental EMTs, studying this event during neural crest development offers an excellent in vivo model for understanding the mechanisms underlying this process. Here, we review new and old insight into neural crest EMT in search of commonalities with cancer progression that might aid in the design of specific preventive therapies. © 2012 Elsevier Ltd. Source


Varshavsky A.,California Institute of Technology
Methods in Molecular Biology | Year: 2012

Many intracellular proteins are metabolically unstable or can become unstable during their lifetime in a cell. The in vivo half-lives of specific proteins range from less than a minute to many days. Among the functions of intracellular proteolysis are the elimination of misfolded or otherwise abnormal proteins; maintenance of amino acid pools in cells affected by stresses such as starvation; and generation of protein fragments that act as hormones, antigens, or other effectors. One major function of proteolytic pathways is the selective destruction of proteins whose concentrations must vary with time and alterations in the state of a cell. Short in vivo half-lives of such proteins provide a way to generate their spatial gradients and to rapidly adjust their concentration or subunit composition through changes in the rate of their degradation. The regulated (and processive) degradation of intracellular proteins is carried out largely by the ubiquitin-proteasome system (Ub system), in conjunction with autophagy-lysosome pathways. Other contributors to intracellular proteolysis include cytosolic and nuclear proteases, such as caspases, calpains, and separases. They often function as "upstream" components of the Ub system, which destroys protein fragments that had been produced by these (nonprocessive) proteases. Ub, a 76-residue protein, mediates selective proteolysis through its enzymatic conjugation to proteins that contain primary degradation signals (degrons (1)), thereby marking such proteins for degradation by the 26S proteasome, an ATP-dependent multisubunit protease. Ub conjugation involves the formation of a poly-Ub chain that is linked (in most cases) to the ε-amino group of an internal Lys residue in a substrate protein. Ub is a "secondary" degron, in that Ub is conjugated to proteins that contain primary degradation signals. © 2012 Springer Science+Business Media, LLC. Source


Tropp J.A.,California Institute of Technology
Advances in Adaptive Data Analysis | Year: 2011

This paper presents an improved analysis of a structured dimension-reduction map called the subsampled randomized Hadamard transform. This argument demonstrates that the map preserves the Euclidean geometry of an entire subspace of vectors. The new proof is much simpler than previous approaches, and it offers for the first time optimal constants in the estimate on the number of dimensions required for the embedding. © 2011 World Scientific Publishing Company. Source
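The map analyzed above is simple to state: randomly flip signs (D), apply the normalized Walsh-Hadamard transform (H), then keep a random subset of coordinates (R), rescaled so that squared norms are preserved in expectation. A sketch, assuming the input length is a power of two; the helper names are illustrative:

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform (unnormalized), O(n log n)."""
    x = x.copy()
    h, n = 1, len(x)
    while h < n:
        for i in range(0, n, 2 * h):
            a = x[i:i + h].copy()
            b = x[i + h:i + 2 * h].copy()
            x[i:i + h] = a + b
            x[i + h:i + 2 * h] = a - b
        h *= 2
    return x

def srht(x, ell, rng):
    """Subsampled randomized Hadamard transform: sqrt(n/ell) * R H D x,
    where D flips signs, H is the normalized Hadamard transform, and
    R keeps ell coordinates sampled without replacement."""
    n = len(x)
    signs = rng.choice([-1.0, 1.0], size=n)        # D
    y = fwht(signs * x) / np.sqrt(n)               # normalized H D x
    rows = rng.choice(n, size=ell, replace=False)  # R
    return np.sqrt(n / ell) * y[rows]

rng = np.random.default_rng(1)
x = rng.normal(size=1024)
y = srht(x, 256, rng)
print(np.linalg.norm(y) / np.linalg.norm(x))  # close to 1 in expectation
```

The randomized sign flip followed by H "flattens" the vector so that a small random sample of coordinates already captures its Euclidean norm, which is the mechanism behind the subspace embedding guarantee the paper sharpens.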


Oreg Y.,Weizmann Institute of Science | Refael G.,California Institute of Technology | Von Oppen F.,Free University of Berlin
Physical Review Letters | Year: 2010

We show that the combination of spin-orbit coupling with a Zeeman field or strong interactions may lead to the formation of a helical electron liquid in single-channel quantum wires, with spin and velocity perfectly correlated. We argue that zero-energy Majorana bound states are formed in various situations when such wires are situated in proximity to a conventional s-wave superconductor. This occurs when the external magnetic field, the superconducting gap, or, most simply, the chemical potential vary along the wire. These Majorana states do not require the presence of a vortex in the system. Experimental consequences of the helical liquid and the Majorana states are also discussed. © 2010 The American Physical Society. Source
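As a concrete companion to the abstract, the sketch below diagonalizes the Bogoliubov-de Gennes (BdG) Hamiltonian of a Kitaev chain, the minimal lattice model exhibiting the same unpaired Majorana end states; the paper's actual system is a spin-orbit-coupled wire proximitized by an s-wave superconductor, which reduces to this model in the appropriate limit. Parameter values are illustrative.

```python
import numpy as np

def kitaev_bdg(n, mu, t=1.0, delta=1.0):
    """BdG matrix of an open Kitaev chain in the (c, c†) basis:
    chemical potential mu, hopping t, p-wave pairing delta."""
    H = np.zeros((2 * n, 2 * n))
    for i in range(n):
        H[i, i] = -mu
        H[n + i, n + i] = mu
    for i in range(n - 1):
        H[i, i + 1] = H[i + 1, i] = -t          # hopping
        H[n + i, n + i + 1] = H[n + i + 1, n + i] = t
        H[i, n + i + 1] = H[n + i + 1, i] = delta   # antisymmetric pairing
        H[i + 1, n + i] = H[n + i, i + 1] = -delta
    return H

# |mu| < 2t: topological phase with two near-zero BdG eigenvalues,
# the Majorana end modes. |mu| > 2t: trivial, fully gapped.
for mu in (0.5, 3.0):
    E = np.sort(np.abs(np.linalg.eigvalsh(kitaev_bdg(60, mu))))
    print(f"mu = {mu}: smallest |E| = {E[0]:.2e}")
```

Varying mu along the chain, in the spirit of the chemical-potential modulation discussed above, creates or removes topological segments, and a Majorana mode appears at each boundary between a topological and a trivial region.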


Kapustin A.,California Institute of Technology
Physical Review B - Condensed Matter and Materials Physics | Year: 2014

Gapped phases with long-range entanglement may admit gapped boundaries. If the boundary is gapped, the ground-state degeneracy is well defined and can be computed using methods of topological quantum field theory. We derive a general formula for the ground-state degeneracy for Abelian fractional quantum Hall phases, including the cases when connected components of the boundary are subdivided into an arbitrary number of segments, with a different boundary condition on each segment, and in the presence of an arbitrary number of boundary domain walls. © 2014 American Physical Society. Source


Aguirre A.,University of California at Santa Cruz | Johnson M.C.,California Institute of Technology
Reports on Progress in Physics | Year: 2011

In the picture of eternal inflation as driven by a scalar potential with multiple minima, our observable universe resides inside one of many bubbles formed from transitions out of a false vacuum. These bubbles necessarily collide, upsetting the homogeneity and isotropy of our bubble interior, and possibly leading to detectable signatures in the observable portion of our bubble, potentially in the cosmic microwave background or other precision cosmological probes. This constitutes a direct experimental test of eternal inflation and the landscape of string theory vacua. Assessing this possibility roughly splits into answering three questions: What happens in a generic bubble collision? What observational effects might be expected? How likely are we to observe a collision? In this review we report the current progress on each of these questions, improve upon a few of the existing results and attempt to lay out directions for future work. © 2011 IOP Publishing Ltd. Source


Bronner M.E.,California Institute of Technology
Histochemistry and Cell Biology | Year: 2012

The neural crest is a stem cell population, unique to vertebrates, that gives rise to a vast array of derivatives, ranging from peripheral ganglia to the facial skeleton. This population is induced in the early embryo at the border of the neural plate, which will form the central nervous system (CNS). After neural tube closure, neural crest cells depart from the dorsal CNS via an epithelial to mesenchymal transition (EMT), forming a migratory mesenchymal cell type that migrates extensively to diverse locations in the embryo. Using in vivo loss-of-function approaches and cis-regulatory analysis coupled with live imaging, we have investigated the gene regulatory network that mediates formation of this fascinating cell type. The results show that a combination of transcriptional inputs and epigenetic modifiers controls the timing of onset of neural crest gene expression. This in turn leads to the EMT process that produces this migratory cell population. © Springer-Verlag 2012. Source


Frantsuzov P.A.,University of Notre Dame | Frantsuzov P.A.,Novosibirsk State University | Volkan-Kacso S.,California Institute of Technology | Janko B.,University of Notre Dame
Nano Letters | Year: 2013

A variety of optically active nanoscale objects show extremely long correlations in the fluctuations of fluorescence intensity (blinking). Here we performed a systematic study to quantitatively estimate the power spectral density (PSD) of the fluorescence trajectories of colloidal and self-assembled quantum dots (QDs), nanorods (NRs), nanowires (NWs), and organic molecules. We report for the first time a statistically correct method of PSD estimation suitable for these systems. Our method includes a detailed analysis of the confidence intervals. The striking similarity in the spectra of these nanoscale systems, including even a "nonblinking" quantum dot investigated by Wang and collaborators (Nature 2009, 459, 685-689), is powerful evidence for the existence of a universal physical mechanism underlying the blinking phenomenon in all of these fluorophores (Frantsuzov et al. Nat. Phys. 2008, 4, 519-522). In this paper we show that the features of this universal mechanism can be captured phenomenologically by the multiple recombination center (MRC) model we suggested recently for explaining single colloidal QD intermittency. Within the framework of the MRC model we qualitatively explain all of the important features of fluorescence intensity fluctuations for a broad spectrum of nanoscale emitters. © 2012 American Chemical Society. Source
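The general shape of such a PSD estimate can be reproduced with the standard averaged-periodogram (Welch) method, whose segment averaging is what makes well-defined confidence intervals tractable. The toy trace below is a two-state random telegraph signal, not real blinking data, and all parameters are illustrative.

```python
import numpy as np
from scipy import signal

# Toy intensity trace: a two-state telegraph process sampled at 10 Hz.
rng = np.random.default_rng(2)
fs, n = 10.0, 100_000
dwell = np.maximum(1, rng.exponential(50.0, size=5000).astype(int))
trace = np.resize(np.repeat(np.arange(5000) % 2, dwell), n).astype(float)

# Welch estimator: split into overlapping segments, window each one,
# and average the periodograms. Averaging M segments shrinks the
# estimator variance by roughly 1/M, which is what makes confidence
# intervals on the PSD meaningful.
freq, psd = signal.welch(trace, fs=fs, nperseg=4096)
print(freq.shape, psd.shape)  # one-sided spectrum: nperseg // 2 + 1 points
```

For blinking trajectories with power-law dwell times, the key statistical subtlety the paper addresses is that naive periodogram averages can be misleading at low frequencies, so the segment length and confidence analysis must be chosen with care.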


Marston J.B.,California Institute of Technology | Marston J.B.,Brown University
Annual Review of Condensed Matter Physics | Year: 2012

Planetary atmospheres, and models of them, are discussed from the viewpoint of condensed matter physics. Atmospheres are a form of condensed matter, and many of the most interesting concepts of condensed matter systems are realized by them. The essential physics of the general circulation is illustrated with idealized two-layer and one-layer models of the atmosphere. Equilibrium and nonequilibrium statistical mechanics are used to directly ascertain the statistics of these models. Copyright © 2012 by Annual Reviews. All rights reserved.


Zmuidzinas J.,California Institute of Technology
Annual Review of Condensed Matter Physics | Year: 2012

Interest in superconducting microresonators has grown dramatically over the past decade. Resonator performance has improved substantially through the use of improved geometries and materials as well as a better understanding of the underlying physics. These advances have led to the adoption of superconducting microresonators in a large number of low-temperature experiments and applications. This review outlines these developments, with particular attention given to the use of superconducting microresonators as detectors. Copyright © 2012 by Annual Reviews. All rights reserved.


Schwarz J.H.,California Institute of Technology
Journal of High Energy Physics | Year: 2015

Abstract: Previous studies of the type IIB superstring in an AdS5 × S5 background are based on a description of the superspace geometry as the quotient space PSU(2, 2|4)/SO(4, 1) × SO(5). This paper develops an alternative approach in which the Grassmann coordinates provide a nonlinear realization of PSU(2, 2|4) based on the quotient space PSU(2, 2|4)/SU(2, 2) × SU(4), and the bosonic coordinates are described as a submanifold of SU(2, 2) × SU(4). This formulation keeps all bosonic symmetries manifest, and it provides the complete dependence on the Grassmann coordinates in terms of simple analytic expressions. It is used to construct the superstring world-sheet action in a form in which the PSU(2, 2|4) symmetry is manifest and kappa symmetry can be established. This formulation might have some advantages compared to previous ones, but this remains to be demonstrated. © 2015, The Author(s).


Davidson E.H.,California Institute of Technology
Developmental Biology | Year: 2011

At present several entirely different explanatory approaches compete to illuminate the mechanisms by which animal body plans have evolved. Their respective relevance is briefly considered here in the light of modern knowledge of genomes and the regulatory processes by which development is controlled. Just as development is a system property of the regulatory genome, causal explanation of evolutionary change in developmental process must be considered at a system level. Here I enumerate some mechanistic consequences that follow from the conclusion that evolution of the body plan has occurred by alteration of the structure of developmental gene regulatory networks. The hierarchy and multiple additional design features of these networks act to produce Boolean regulatory state specification functions at upstream phases of development of the body plan. These are created by the logic outputs of network subcircuits, and in modern animals these outputs are impervious to continuous adaptive variation, unlike genes operating more peripherally in the network. © 2011 Elsevier Inc.


Bullard E.C.,California Institute of Technology
Nature Nanotechnology | Year: 2016

Frequency stability is key to the performance of nanoresonators. This stability is thought to reach a limit with the resonator's ability to resolve thermally induced vibrations. Although measurements and predictions of resonator stability usually disregard fluctuations in the mechanical frequency response, these fluctuations have recently attracted considerable theoretical interest. However, their existence is very difficult to demonstrate experimentally. Here, through a literature review, we show that all studies of frequency stability report values several orders of magnitude larger than the limit imposed by thermomechanical noise. We studied a monocrystalline silicon nanoresonator at room temperature and found a similar discrepancy. We propose a new method to show that this discrepancy was due to the presence of frequency fluctuations of unexpected magnitude. The fluctuations were not due to the instrumentation system, nor to any of the other known sources investigated. These results challenge our current understanding of frequency fluctuations and call for a change in practices. © 2016 Nature Publishing Group
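A standard metric for reporting the kind of frequency stability discussed above is the Allan deviation of fractional-frequency data. The minimal sketch below illustrates that metric under a white-noise assumption (the input trace and averaging times are hypothetical); it is not the paper's specific measurement protocol.

```python
import numpy as np

def allan_deviation(freqs, tau_samples):
    """Non-overlapping Allan deviation of fractional-frequency data,
    evaluated at an averaging time of tau_samples points."""
    f = np.asarray(freqs, dtype=float)
    m = len(f) // tau_samples
    # Average the data in blocks of tau_samples, then difference
    # adjacent block means.
    means = f[:m * tau_samples].reshape(m, tau_samples).mean(axis=1)
    diffs = np.diff(means)
    return np.sqrt(0.5 * np.mean(diffs ** 2))

rng = np.random.default_rng(1)
y = 1e-6 * rng.normal(size=10000)   # white fractional-frequency noise
# For white frequency noise the Allan deviation falls as 1/sqrt(tau),
# so averaging longer should improve the apparent stability.
adev_1 = allan_deviation(y, 1)
adev_100 = allan_deviation(y, 100)
```

Excess frequency fluctuations of the kind reported in the paper would show up as an Allan deviation that stops improving (or worsens) with longer averaging times, instead of following the thermomechanical 1/sqrt(tau) trend.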


Minnich A.J.,California Institute of Technology
Applied Physics Letters | Year: 2015

Boundary scattering of thermal phonons in thin solid films is typically analyzed using Fuchs-Sondheimer theory, which provides a simple equation to calculate the reduction of thermal conductivity as a function of the film thickness. However, this widely used equation is not applicable to highly anisotropic solids like graphite because it assumes the phonon dispersion is isotropic. Here, we derive a generalization of the Fuchs-Sondheimer equation for solids with arbitrary dispersion relations and examine its predictions for graphite. We find that the isotropic equation vastly overestimates the boundary scattering that occurs in thin graphite films due to the highly anisotropic group velocity, and that graphite can maintain its high in-plane thermal conductivity even in thin films with thicknesses as small as 10 nm. © 2015 AIP Publishing LLC.
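The isotropic Fuchs-Sondheimer reduction that the paper generalizes can be evaluated numerically. The sketch below assumes fully diffuse boundaries (specularity p = 0) and the standard single-mean-free-path form, with kappa the ratio of film thickness to phonon mean free path; substituting u = 1/t maps the semi-infinite integral onto (0, 1]. The function name and step count are illustrative.

```python
import math

def fuchs_sondheimer(kappa, steps=20000):
    """Isotropic Fuchs-Sondheimer ratio k_film / k_bulk for fully
    diffuse boundaries, with kappa = thickness / mean free path:
        ratio = 1 - (3 / (2 kappa)) * I,
        I = int_1^inf (t**-3 - t**-5) (1 - exp(-kappa t)) dt
          = int_0^1  (u - u**3) (1 - exp(-kappa / u)) du   (u = 1/t)
    """
    h = 1.0 / steps
    total = 0.0
    for i in range(1, steps + 1):          # midpoint rule on (0, 1]
        u = (i - 0.5) * h
        total += (u - u**3) * (1.0 - math.exp(-kappa / u))
    return 1.0 - 1.5 / kappa * (total * h)

# Thicker films (larger kappa) recover more of the bulk conductivity:
thin = fuchs_sondheimer(0.1)    # thickness = 0.1 mean free paths
thick = fuchs_sondheimer(10.0)  # thickness = 10 mean free paths
```

The ratio climbs from near zero for films much thinner than the mean free path toward one for thick films, which is exactly the isotropic suppression the paper shows is far too strong for graphite.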


Dergachev V.,California Institute of Technology
Classical and Quantum Gravity | Year: 2010

We introduce a 'loosely coherent' method for detection of continuous gravitational waves that bridges the gap between semi-coherent and purely coherent methods. Explicit control over accepted families of signals is used to increase the sensitivity of a power-based statistic while avoiding the high computational costs of conventional matched filters. Several examples as well as a prototype implementation are discussed. © 2010 IOP Publishing Ltd.


Vaidyanathan P.P.,California Institute of Technology
IEEE Transactions on Signal Processing | Year: 2014

The famous mathematician S. Ramanujan introduced a summation in 1918, now known as the Ramanujan sum cq(n). For any fixed integer q, this is a sequence in n with periodicity q. Ramanujan showed that many standard arithmetic functions in the theory of numbers, such as Euler's totient function φ(n) and the Möbius function μ(n), can be expressed as linear combinations of cq(n), 1 ≤ q ≤ ∞. In the last ten years, Ramanujan sums have aroused some interest in signal processing. There is evidence that these sums can be used to extract periodic components in discrete-time signals. The purpose of this paper and the companion paper (Part II) is to develop this theory in detail. After a brief review of the properties of Ramanujan sums, the paper introduces a subspace called the Ramanujan subspace Sq and studies its properties in detail. For fixed q, the subspace Sq includes an entire family of signals with properties similar to cq(n). These subspaces have a simple integer basis defined in terms of the Ramanujan sum cq(n) and its circular shifts. The projection of arbitrary signals onto these subspaces can be calculated using only integer operations. Linear combinations of signals belonging to two or more such subspaces follow certain specific periodicity patterns, which makes it easy to identify periods. In the companion paper (Part II), it is shown that arbitrary finite duration signals can be decomposed into a finite sum of orthogonal projections onto Ramanujan subspaces. © 1991-2012 IEEE.
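A minimal sketch of the Ramanujan sum itself, using only its defining exponential-sum formula; the identities checked at the end, cq(0) = φ(q) and cq(1) = μ(q), are classical and match the arithmetic-function connection described above.

```python
from math import gcd, cos, pi

def ramanujan_sum(q, n):
    """c_q(n) = sum of exp(2*pi*i*a*n/q) over 1 <= a <= q with
    gcd(a, q) = 1.  The imaginary parts cancel in conjugate pairs,
    so summing the real parts suffices; the result is an integer."""
    s = sum(cos(2 * pi * a * n / q)
            for a in range(1, q + 1) if gcd(a, q) == 1)
    return round(s)

def totient(q):
    """Euler's totient: count of 1 <= a <= q coprime to q."""
    return sum(1 for a in range(1, q + 1) if gcd(a, q) == 1)

# Classical identities: c_q(0) = phi(q) and c_q(1) = mu(q).
assert ramanujan_sum(12, 0) == totient(12)   # phi(12) = 4
assert ramanujan_sum(12, 1) == 0             # mu(12) = 0 (square factor)
assert ramanujan_sum(10, 1) == 1             # mu(10) = 1
```

The sequence c_q(n) is periodic in n with period q, which is the property the subspace construction in the paper builds on.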


Vaidyanathan P.P.,California Institute of Technology
IEEE Transactions on Signal Processing | Year: 2014

The mathematician Ramanujan introduced a summation in 1918, now known as the Ramanujan sum cq(n). In a companion paper (Part I), properties of Ramanujan sums were reviewed, and Ramanujan subspaces Sq introduced, of which the Ramanujan sum is a member. In this paper, the problem of representing finite duration (FIR) signals based on Ramanujan sums and spaces is considered. First, it is shown that the traditional way to solve for the expansion coefficients in the Ramanujan-sum expansion does not work in the FIR case. Two solutions are then developed. The first one is based on a linear combination of the first N Ramanujan sums (with N being the length of the signal). The second solution is based on Ramanujan subspaces. With q1, q2, ..., qK denoting the divisors of N, it is shown that x(n) can be written as a sum of K signals xqi(n) ∈ Sqi. Furthermore, the ith signal xqi(n) has period qi, and any pair of these periodic components is orthogonal. The components xqi(n) can be calculated as orthogonal projections of x(n) onto Ramanujan spaces Sqi. Then, the Ramanujan Periodic Transform (RPT) is defined based on this, and is useful to identify hidden periodicities. It is shown that the projection matrices (which compute xqi(n) from x(n)) are integer matrices except for an overall scale factor. The calculation of projections is therefore rendered easy. To estimate internal periods Nx < N of x(n), one only needs to know which projection energies are nonzero. © 1991-2012 IEEE.
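Since each Ramanujan subspace Sq (for q dividing N) is spanned by the DFT exponentials at bins k with N/gcd(k, N) = q, the orthogonal decomposition can be sketched with an FFT mask instead of explicit projection matrices. This is a minimal NumPy illustration of the idea, not the paper's integer-matrix implementation; the period-3 test signal is a hypothetical example.

```python
import numpy as np
from math import gcd

def ramanujan_projections(x):
    """Split a length-N real signal into its orthogonal projections onto
    the Ramanujan subspaces S_q, one per divisor q of N.  Bin k of the
    DFT belongs to S_q exactly when N / gcd(k, N) == q, so each
    projection is an inverse FFT restricted to that bin set."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    X = np.fft.fft(x)
    parts = {}
    for q in (d for d in range(1, N + 1) if N % d == 0):
        mask = np.array([N // gcd(k, N) == q for k in range(N)])
        parts[q] = np.real(np.fft.ifft(np.where(mask, X, 0)))
    return parts

# A period-3 signal of length 12 should live entirely in S_1 and S_3,
# with zero energy in every other subspace.
x = np.tile([4.0, 1.0, 1.0], 4)
parts = ramanujan_projections(x)
recon = sum(parts.values())   # the projections sum back to x
```

Reading off which projection energies are nonzero identifies the hidden periods, which is precisely how the RPT is used above.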


Perona P.,California Institute of Technology
Proceedings of the IEEE | Year: 2010

The web is not perfect: while text is easily searched and organized, pictures (the vast majority of the bits that one can find online) are not. In order to see how one could improve the web and make pictures first-class citizens of the web, I explore the idea of Visipedia, a visual interface for Wikipedia that is able to answer visual queries and enables experts to contribute and organize visual knowledge. Five distinct groups of humans would interact through Visipedia: users, experts, editors, visual workers, and machine vision scientists. The latter would gradually build automata able to interpret images. I discuss some of the technical challenges involved in making Visipedia happen. I argue that Visipedia will likely grow organically, combining state-of-the-art machine vision with human labor. © 2006 IEEE.


Coskun A.F.,University of California at Los Angeles | Coskun A.F.,California Institute of Technology | Ozcan A.,University of California at Los Angeles
Current Opinion in Biotechnology | Year: 2014
