The University of Texas at Austin is a state research university and the flagship institution of The University of Texas System. Founded in 1883 as "The University of Texas," its campus is located in Austin, approximately 1 mile from the Texas State Capitol. The institution has the fifth-largest single-campus enrollment in the nation, with over 50,000 undergraduate and graduate students and over 24,000 faculty and staff. The university has been labeled one of the "Public Ivies": a publicly funded university considered to provide a quality of education comparable to that of the Ivy League.

UT Austin was inducted into the Association of American Universities in 1929, becoming only the third university in the American South to be elected. It is a major center for academic research, with research expenditures exceeding $640 million for the 2009–2010 school year. The university houses seven museums and seventeen libraries, including the Lyndon Baines Johnson Library and Museum and the Blanton Museum of Art, and operates various auxiliary research facilities, such as the J. J. Pickle Research Campus and the McDonald Observatory. Among the university's faculty are recipients of the Nobel Prize, the Pulitzer Prize, the Wolf Prize, and the National Medal of Science, as well as many other awards.

UT Austin student athletes compete as the Texas Longhorns and are members of the Big 12 Conference. Its Longhorn Network is unique in that it is the only sports network devoted to the college sports of a single university. The Longhorns have won four NCAA Division I National Football Championships and six NCAA Division I National Baseball Championships, and have claimed more titles in men's and women's sports than any other school in the Big 12 since the league was founded in 1996. Current and former UT Austin athletes have won 130 Olympic medals, including 14 in Beijing in 2008 and 13 in London in 2012. The university was recognized by Sports Illustrated as "America's Best Sports College" in 2002.
Wikipedia.
Maciejko J.,University of Alberta |
Maciejko J.,Canadian Institute for Advanced Research |
Fiete G.A.,University of Texas at Austin
Nature Physics | Year: 2015
Topological insulators have emerged as a major topic of condensed matter physics research, with several novel applications proposed. Although there are now a number of established experimental examples of materials in this class, all of them can be described by theories based on electronic band structure, which implies that they do not possess electronic correlations strong enough to fundamentally change this theoretical description. Here, we review recent theoretical progress in the description of a class of strongly correlated topological insulators - fractionalized topological insulators - where band theory fails owing to the fractionalization of the electron into other degrees of freedom.
Moore A.L.,IBM |
Shi L.,University of Texas at Austin
Materials Today | Year: 2014
The rapid development of faster, cheaper, and more powerful computing has led to some of the most important technological and societal advances in modern history. However, the physical means associated with enhancing computing capabilities at the device and die levels have also created a very challenging set of circumstances for keeping electronic devices cool, a critical factor in determining their speed, efficiency, and reliability. With advances in nanoelectronics and the emergence of new application areas such as three-dimensional chip stack architectures and flexible electronics, now more than ever there are both needs and opportunities for novel materials to help address some of these pressing thermal management challenges. In this paper a number of cubic crystals, two-dimensional layered materials, nanostructure networks and composites, molecular layers and surface functionalization, and aligned polymer structures are examined for potential applications as heat spreading layers and substrates, thermal interface materials, and underfill materials in future-generation electronics. © 2014 Elsevier Ltd.
Atkinson L.,University of Texas at Austin
International Journal of Consumer Studies | Year: 2013
Smartphone-based mobile advertising is a promising marketing vehicle, especially in retail and point-of-purchase environments, yet we still do not know enough about what motivates consumers to turn to mobile advertising. This is particularly true of new forms of mobile advertising, like quick response (QR) codes, which rely on pull-based approaches, rather than push-style advertising. The study clarifies this omission by exploring various antecedents that encourage consumer use of mobile advertising. Relying on the uses and gratifications approach as a theoretical framework, this study tests three antecedents of new media adoption: institutional trust, involvement and market mavenism. Data are drawn from a representative sample of 401 American adults and their willingness to use QR codes in shopping environments within the context of purchasing sustainable products. The data were analysed using ordinary least squares regression. Results indicate government trust, buycotting and market mavenism are positively related to consumers' willingness to use mobile phone-based QR code advertising whereas corporate trust is negatively related. The results offer important theoretical and managerial implications: first, there is a need to include broader institutional trust variables in the uses and gratifications framework of new media use; second, QR code content should be carefully crafted to provide meaningful, usable information for involved consumers; and third, market mavens' tendency to share information should be harnessed by providing QR code content that is relevant, interesting and easily passed on to other consumers. © 2013 John Wiley & Sons Ltd.
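As a rough illustration of the ordinary least squares step named above, the sketch below regresses a synthetic "willingness" outcome on two stand-in predictors. The variable names, coefficients, and data are hypothetical inventions, not the study's survey measures.

```python
import random

random.seed(1)

# Hypothetical stand-ins for the survey measures: regress willingness to use
# QR codes on two illustrative predictors. True coefficients are invented.
n = 401
gov_trust = [random.gauss(0, 1) for _ in range(n)]
mavenism = [random.gauss(0, 1) for _ in range(n)]
willingness = [0.5 + 0.3 * g + 0.4 * m + random.gauss(0, 0.5)
               for g, m in zip(gov_trust, mavenism)]

def ols(y, predictors):
    """Ordinary least squares via the normal equations (X'X)b = X'y."""
    X = [[1.0] + list(row) for row in zip(*predictors)]  # intercept column
    k = len(X[0])
    XtX = [[sum(r[a] * r[b] for r in X) for b in range(k)] for a in range(k)]
    Xty = [sum(r[a] * yi for r, yi in zip(X, y)) for a in range(k)]
    # Solve by Gauss-Jordan elimination (no pivoting; fine for this toy).
    for c in range(k):
        piv = XtX[c][c]
        XtX[c] = [v / piv for v in XtX[c]]
        Xty[c] /= piv
        for r in range(k):
            if r != c:
                f = XtX[r][c]
                XtX[r] = [rv - f * cv for rv, cv in zip(XtX[r], XtX[c])]
                Xty[r] -= f * Xty[c]
    return Xty  # [intercept, b_trust, b_mavenism]

beta = ols(willingness, [gov_trust, mavenism])
print([round(b, 2) for b in beta])  # estimates near the true 0.5, 0.3, 0.4
```

With n = 401 the coefficient standard errors here are about 0.025, so the recovered estimates land close to the generating values.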
Miller J.A.,University of Texas at Austin
Progress in Physical Geography | Year: 2014
Species distribution models (SDMs) have become a dominant paradigm for quantifying species-environment relationships, and both the models and their outcomes have seen widespread use in conservation studies, particularly in the context of climate change research. With the growing interest in SDMs, extensive comparative studies have been undertaken. However, few generalizations and recommendations have resulted from these empirical studies, largely due to the confounding effects of differences in and interactions among the statistical methods, species traits, data characteristics, and accuracy metrics considered. This progress report addresses 'virtual species distribution models': the use of spatially explicit simulated data to represent a 'true' species distribution in order to evaluate aspects of model conceptualization and implementation. Simulating a 'true' species distribution, or a virtual species distribution, and systematically testing how these aspects affect SDMs, can provide an important baseline and generate new insights into how these issues affect model outcomes. © The Author(s) 2014.
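The virtual-species idea described above can be sketched in a few lines: generate presence/absence data from a known "true" response to an environmental gradient, fit a simple logistic SDM, and check recovery against the truth. All numbers below are illustrative assumptions.

```python
import math, random

random.seed(0)

# A one-variable "virtual species": the true probability of presence is a
# known logistic function of an environmental gradient, so model error can
# be measured against the truth rather than against noisy field data.
def true_prob(env):
    return 1.0 / (1.0 + math.exp(-(2.0 * env - 1.0)))  # true b0 = -1, b1 = 2

env = [random.uniform(-3, 3) for _ in range(2000)]
presence = [1 if random.random() < true_prob(e) else 0 for e in env]

# Fit a logistic SDM to the simulated survey by gradient ascent.
b0 = b1 = 0.0
for _ in range(800):
    g0 = g1 = 0.0
    for e, y in zip(env, presence):
        p = 1.0 / (1.0 + math.exp(-(b0 + b1 * e)))
        g0 += y - p
        g1 += (y - p) * e
    b0 += 0.5 * g0 / len(env)
    b1 += 0.5 * g1 / len(env)

# Because the generating model is known, recovery can be checked directly.
print(round(b0, 1), round(b1, 1))  # should approach the true (-1.0, 2.0)
```

The same scheme extends to the questions the report raises: vary sample size, prevalence, or accuracy metric while the "true" distribution stays fixed.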
Matouschek A.,University of Texas at Austin |
Finley D.,Harvard University
Science | Year: 2012
The discovery in archaea of an alternative proteasome based on Cdc48 provides insights into the evolution of protein degradation machines.
Spikes K.T.,University of Texas at Austin
Geophysics | Year: 2011
A statistical rock-physics technique based on well data is presented that provides estimates, with associated uncertainty, of fracture density in the Middle Bakken Siltstone. Geologic and hydrocarbon-charging history of the Middle Bakken indicate multiple sets of fractures that justify treating this unit as elastically isotropic. The generalized n-phase self-consistent model relates the elastic properties to composition, matrix porosity, and fracture porosity, where an assigned aspect ratio and volumetric fraction correspond to each input. The modeling of bulk density as a function of total porosity supplies deterministic estimates of the composition. Analysis of in situ stress and pore-stiffness calculations provide a range of fracture aspect ratios corresponding to open fractures. Stochastic simulation of fracture porosity initiates the statistical nature of the technique. This treatment of fracture porosity enables the rock-physics model to be treated statistically through multiple realizations. Modeling results explain the measured bulk and shear moduli, with the bulk modulus more accurately described, and the results also provide statistical estimates of fracture porosity. Calculations using these estimates of fracture porosity, along with fracture aspect ratios, result in statistical estimates of fracture density for each depth value in the Middle Bakken unit. Values of fracture density fall within imposed limits (< 0.10). The results and technique demonstrated here could be applied to field seismic data to identify locations of increased fracture density. These locations might indicate areas of increased permeability in the Middle Bakken Siltstone. © 2011 Society of Exploration Geophysicists.
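The stochastic step alone can be sketched as a Monte Carlo draw over fracture porosity and aspect ratio, mapped to fracture density via the standard penny-shaped-crack relation (a textbook rock-physics relation, not the paper's full self-consistent model). The input ranges below are illustrative assumptions.

```python
import math, random, statistics

random.seed(42)

# Standard penny-shaped-crack relation: phi_f = (4/3) * pi * alpha * eps,
# so fracture density eps = 3 * phi_f / (4 * pi * alpha).
def fracture_density(phi_f, alpha):
    return 3.0 * phi_f / (4.0 * math.pi * alpha)

# Illustrative input ranges (not values from the paper).
densities = [fracture_density(random.uniform(1e-4, 1e-3),  # fracture porosity
                              random.uniform(3e-3, 1e-2))  # open-crack aspect ratio
             for _ in range(10000)]

# Multiple realizations yield a distribution of fracture density rather than
# a single deterministic value, mirroring the statistical treatment above.
print("median eps:", round(statistics.median(densities), 4))
print("all below 0.10:", all(d < 0.10 for d in densities))
```

With these assumed ranges every realization stays below the 0.10 limit quoted in the abstract, which is the kind of consistency check such a simulation supports.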
Carlson W.D.,University of Texas at Austin
Journal of Metamorphic Geology | Year: 2010
Quantitative constraints on the accelerative effects of H2O on the kinetics of metamorphic reactions arise from a comparison of rates of intergranular diffusion of Al in natural systems that are fluid-saturated, hydrous but fluid-undersaturated, and nearly anhydrous. Widths of symplectitic reaction coronas around partially resorbed garnet crystals in the contact aureole of the Makhavinekh Lake Pluton, northern Labrador, combined with time-temperature histories from conductive thermal models, yield intergranular diffusivities for Al from ∼700-900 °C under nearly anhydrous conditions. Those rates, when extrapolated down temperature, are approximately three orders of magnitude slower than rates derived from re-analysis of garnet resorption coronas formed under hydrous but fluid-undersaturated conditions near 575 °C in rocks of the Llano Uplift of central Texas, which are in turn approximately four orders of magnitude slower than rates at comparable temperatures derived from numerical simulations of prograde garnet growth in fluid-saturated conditions in rocks from the Picuris Range of north-central New Mexico. Thus, even at constant temperature, rates of intergranular diffusion of Al - and corresponding length scales and timescales of metamorphic reaction and equilibration - may vary by as much as seven orders of magnitude across the range of H2O activities found in nature. © 2010 Blackwell Publishing Ltd.
Breecker D.O.,University of Texas at Austin
Geochemistry, Geophysics, Geosystems | Year: 2013
A computer program (PBUQ) that uses Monte Carlo simulations to propagate uncertainty through regression equations and the equation for the paleosol carbonate CO2 paleobarometer is presented. PBUQ includes options for all of the common approaches to determining values for input variables and incorporates several recent advancements relevant to determining values for soil-respired CO2 concentrations, δ13C values of respired CO2, δ13C values of atmospheric CO2, and temperatures of soil carbonate formation. PBUQ is intended to improve confidence in paleoatmospheric CO2 research by helping researchers draw statistically significant conclusions. PBUQ can also be used to attribute and partition error among various sources and thereby advance this technique. Sensitivity analysis indicates that S(z) is the largest source of uncertainty for most paleosols and that uncertainty is minimized for soils in which CO2 is an evenly balanced mixture between soil-derived and atmospheric components. Evenly balanced mixtures are most likely for paleosols formed in deserts and for weakly developed paleosols. Development of proxies for soil-respired CO2 concentrations and δ13C values of soil-respired CO2 specifically for such soils is perhaps the most crucial next step for improving this technique. Currently, calcic paleosols are best used to test the significance of trends and/or differences among time slices in paleoatmospheric CO2 concentration. Application to quantifying Earth System Sensitivity will require large-scale averaging of determinations from individual paleosols and/or reduced uncertainty associated with input variables. © 2013. American Geophysical Union. All Rights Reserved.
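The Monte Carlo idea can be sketched with the widely used Cerling-type form of the barometer. The input distributions below are invented for illustration and are not PBUQ's defaults; note how the spread of S(z) dominates the output uncertainty, as the sensitivity analysis above reports.

```python
import random, statistics

random.seed(7)

# Cerling-type paleosol CO2 barometer:
#   Ca = S(z) * (d13Cs - 1.0044*d13Cr - 4.4) / (d13Ca - d13Cs)
# Uncertainty is propagated by sampling each input from a distribution.
def barometer(S_z, d13Cs, d13Cr, d13Ca):
    return S_z * (d13Cs - 1.0044 * d13Cr - 4.4) / (d13Ca - d13Cs)

samples = []
while len(samples) < 20000:
    S_z = random.gauss(2500.0, 1000.0)  # soil-respired CO2 (ppm); typically
                                        # the dominant source of uncertainty
    if S_z <= 0:
        continue
    samples.append(barometer(S_z,
                             d13Cs=random.gauss(-18.0, 0.5),  # soil CO2 (per mil)
                             d13Cr=random.gauss(-25.0, 1.0),  # respired CO2
                             d13Ca=random.gauss(-6.5, 0.5)))  # atmospheric CO2

q = statistics.quantiles(samples, n=20)
print("median", round(statistics.median(samples)),
      "ppm; 5th-95th percentile", round(q[0]), "-", round(q[-1]))
```

Reporting a percentile interval rather than a single number is exactly the shift from deterministic to probabilistic estimates that PBUQ is built around.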
Smith D.,University of Texas at Austin
Geochemistry, Geophysics, Geosystems | Year: 2013
The evolution of the mantle wedge below the Colorado Plateau during low-angle subduction has been investigated by analysis of fragments from the Navajo Volcanic Field (NVF), most from serpentinized ultramafic microbreccia (SUM) diatremes. In most SUM-hosted olivine, concentrations of Al and V are < 1 ppm, and Cr, Ca, and Na concentrations also are unusually low: most temperatures from three olivine thermometers (Al, Cr, and V-based) are in the range 530°C to 650°C. The temperatures are consistent with the presence of chlorite in some inclusions, and they support the utility of olivine thermometry for diverse mineral assemblages in cool peridotite. Most pressures calculated for discrete diopside grains in SUM correspond to depths in the range 80 to 120 km. Diopside is relatively high in Li (∼3.5 ppm), and two of five diopside grains have high Sr/Nd. SUM diatreme sources are inferred to be serpentine-rich mélange tectonically eroded from the forearc, transported above the Farallon slab, and incorporated into the lithosphere of the Plateau. Garnet peridotite xenoliths from minettes of the NVF record greater depths, in the range 120 to 150 km. These garnet peridotites also may be from forearc mantle emplaced during Farallon subduction. Calculated temperatures preclude the possibility that asthenosphere was in contact with that lithosphere at or near 150 km depth for tens of m.y. before NVF formation. Structures observed in seismic images of mantle to depths of 150 km below the central Colorado Plateau may be related to Farallon subduction, not inherited from Proterozoic lithosphere. ©2013. American Geophysical Union. All Rights Reserved.
Dingwell J.B.,University of Texas at Austin
PLoS computational biology | Year: 2010
It is widely accepted that humans and animals minimize energetic cost while walking. While such principles predict average behavior, they do not explain the variability observed in walking. For robust performance, walking movements must adapt at each step, not just on average. Here, we propose an analytical framework that reconciles issues of optimality, redundancy, and stochasticity. For human treadmill walking, we defined a goal function to formulate a precise mathematical definition of one possible control strategy: maintain constant speed at each stride. We recorded stride times and stride lengths from healthy subjects walking at five speeds. The specified goal function yielded a decomposition of stride-to-stride variations into new gait variables explicitly related to achieving the hypothesized strategy. Subjects exhibited greatly decreased variability for goal-relevant gait fluctuations directly related to achieving this strategy, but far greater variability for goal-irrelevant fluctuations. More importantly, humans immediately corrected goal-relevant deviations at each successive stride, while allowing goal-irrelevant deviations to persist across multiple strides. To demonstrate that this was not the only strategy people could have used to successfully accomplish the task, we created three surrogate data sets. Each tested a specific alternative hypothesis that subjects used a different strategy that made no reference to the hypothesized goal function. Humans did not adopt any of these viable alternative strategies. Finally, we developed a sequence of stochastic control models of stride-to-stride variability for walking, based on the Minimum Intervention Principle. We demonstrate that healthy humans are not precisely "optimal," but instead consistently slightly over-correct small deviations in walking speed at each stride. 
Our results reveal a new governing principle for regulating stride-to-stride fluctuations in human walking that acts independently of, but in parallel with, minimizing energetic cost. Thus, humans exploit task redundancies to achieve robust control while minimizing effort and allowing potentially beneficial motor variability.
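The goal-function decomposition described above can be illustrated with synthetic stride data: deviations in the (stride time, stride length) plane are projected onto directions along and perpendicular to the constant-speed goal manifold. All numbers below are invented to mimic the paper's qualitative finding, not measured gait data.

```python
import math, random

random.seed(3)

# Goal: constant stride speed v0 = L / T. Pairs (T, L) with L = v0*T form a
# goal-equivalent manifold (GEM); deviations along it leave speed unchanged.
v0, T0 = 1.2, 1.1          # illustrative speed (m/s) and stride time (s)
L0 = v0 * T0

norm = math.hypot(1.0, v0)
e_irr = (1.0 / norm, v0 / norm)    # along the GEM (goal-irrelevant)
e_rel = (-v0 / norm, 1.0 / norm)   # perpendicular to it (goal-relevant)

# Synthetic strides: fluctuations along the GEM are tolerated (large),
# fluctuations across it are tightly corrected (small).
strides = []
for _ in range(5000):
    along = random.gauss(0, 0.05)
    across = random.gauss(0, 0.005)
    strides.append((T0 + along * e_irr[0] + across * e_rel[0],
                    L0 + along * e_irr[1] + across * e_rel[1]))

# Project observed deviations back onto the two directions.
p_irr = [(T - T0) * e_irr[0] + (L - L0) * e_irr[1] for T, L in strides]
p_rel = [(T - T0) * e_rel[0] + (L - L0) * e_rel[1] for T, L in strides]
var = lambda xs: sum(x * x for x in xs) / len(xs)
print("variance ratio (irrelevant/relevant):", round(var(p_irr) / var(p_rel)))
```

A large variance ratio is the signature of goal-directed control: the system spends its corrective effort only on deviations that matter for the hypothesized goal.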
Wang K.,Beijing Normal University |
Dickinson R.E.,University of Texas at Austin
Reviews of Geophysics | Year: 2012
This review surveys the basic theories, observational methods, satellite algorithms, and land surface models for terrestrial evapotranspiration, E (or λE, i.e., latent heat flux), including a long-term variability and trends perspective. The basic theories used to estimate E are the Monin-Obukhov similarity theory (MOST), the Bowen ratio method, and the Penman-Monteith equation. The latter two theoretical expressions combine MOST with surface energy balance. Estimates of E can differ substantially between these three approaches because of their use of different input data. Surface and satellite-based measurement systems can provide accurate estimates of diurnal, daily, and annual variability of E. But their estimation of longer-term variability is largely unestablished. A reasonable estimate of E as a global mean can be obtained from a surface water budget method, but its regional distribution is still rather uncertain. Current land surface models provide widely different ratios of the transpiration by vegetation to total E. This source of uncertainty therefore limits the capability of models to provide the sensitivities of E to precipitation deficits and land cover change. © 2012 by the American Geophysical Union.
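Of the three theories named above, the Penman-Monteith combination equation is the most compact to write down. The sketch below uses the standard FAO-style formulation; the input values are illustrative mid-latitude daytime numbers, not data from the review.

```python
# Penman-Monteith combination equation for latent heat flux (W m^-2):
#   lambda_E = (delta*(Rn - G) + rho_a*cp*(es - ea)/ra) / (delta + gamma*(1 + rs/ra))
def penman_monteith(Rn, G, delta, rho_a, cp, es, ea, ra, gamma, rs):
    return (delta * (Rn - G) + rho_a * cp * (es - ea) / ra) / \
           (delta + gamma * (1.0 + rs / ra))

lam_E = penman_monteith(
    Rn=400.0,     # net radiation (W m^-2)
    G=40.0,       # ground heat flux (W m^-2)
    delta=0.145,  # slope of saturation vapour pressure curve (kPa K^-1, ~20 C)
    rho_a=1.2,    # air density (kg m^-3)
    cp=1004.0,    # specific heat of air (J kg^-1 K^-1)
    es=2.34,      # saturation vapour pressure (kPa, ~20 C)
    ea=1.40,      # actual vapour pressure (kPa)
    ra=50.0,      # aerodynamic resistance (s m^-1)
    gamma=0.066,  # psychrometric constant (kPa K^-1)
    rs=70.0,      # bulk surface resistance (s m^-1)
)
print(round(lam_E), "W m^-2")  # a plausible daytime latent heat flux
```

The equation's sensitivity to the resistances ra and rs is one reason, noted above, that different input data can yield substantially different E estimates.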
Zeidouni M.,University of Texas at Austin
Water Resources Research | Year: 2012
A fault is generally composed of a fault core surrounded by damage zones and can accommodate both lateral and vertical flow. In this paper we develop an analytical model to evaluate the leakage rate through a fault and corresponding pressure changes in the injection zone and a shallower permeable interval. The leaky fault connects the upper interval and the target zone, which are otherwise separated by a confining layer. We account for both across-fault and up-fault flow to honor the general architecture of the fault. We extend the two-formation analytical solution to consider multiple overlying formations with alternating confining layers offset by the fault. The solution methodology involves writing and transforming the coupled governing flow equations successively into the Laplace and Fourier domains and solving the resulting ordinary differential equations. The solution is verified through a comparison with existing analytical solutions for bounding cases. Two examples are presented to demonstrate the behavior and potential applications of our analytical model. © 2012. American Geophysical Union. All Rights Reserved.
Angel R.J.,University of Texas at Austin
Journal of Cross-Cultural Gerontology | Year: 2013
The rapid growth in comparative survey research carried out in multiple countries, or among different language communities within a single nation, has given rise to a renewed concern with problems of translation. The fundamental problem facing the comparative survey researcher relates to the complexity and subjectivity of language, and the fact that complete equivalence of concepts in different linguistic, cultural, and social class contexts may in principle be impossible to achieve. Yet language remains the only medium through which information concerning subjective states, values, and beliefs can be collected. Language and the subjective constructs to which it refers are influenced by a wide range of cultural and social factors. This fact has particular relevance for comparative aging research since older individuals are often monolingual in their native languages and more tied to traditional cultures than younger individuals. This paper consists of a review of basic issues related to the nature of language and communication, and discusses the possibility of a truly scientific translation process. It outlines current best practices, and also raises questions related to the common practice of using information collected with translated survey instruments in ways that assume it reflects a comparable and quantifiable latent construct. © 2013 Springer Science+Business Media New York.
Fomel S.,University of Texas at Austin
Geophysics | Year: 2010
Predictive painting is a numerical algorithm that spreads information in 3D volumes according to the local structure of seismic events. The algorithm consists of two steps. First, local spatially variable inline and crossline slopes of seismic events are estimated by the plane-wave-destruction method. Next, a seed trace is inserted in the volume, and the information contained in that trace is spread inside the volume, thus automatically "painting" the data space. Immediate applications of this technique include automatic horizon picking and flattening in applications to both prestack and poststack seismic data analysis. Synthetic and field data tests demonstrate the effectiveness of predictive painting. © 2010 Society of Exploration Geophysicists.
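The two-step recipe above can be reduced to a toy: take local slopes as given (in practice they come from plane-wave destruction; here they are simply prescribed) and spread a pick planted on a seed trace across the section by recursive prediction. The numbers are illustrative.

```python
# Toy stand-in for the "painting" step in 2D. slopes[i] is the local event
# slope (samples per trace) between trace i and trace i+1; in the real
# algorithm these are estimated by plane-wave destruction.
slopes = [0.5, 0.5, -0.2, -0.2, 0.3, 0.3, 0.0]

seed_sample = 10.0            # horizon pick on the seed trace (trace 0)
horizon = [seed_sample]
for s in slopes:
    horizon.append(horizon[-1] + s)   # predict each trace from its neighbor

# The spread picks form an automatically tracked horizon across 8 traces.
print([round(h, 1) for h in horizon])
```

Repeating this for every sample of the seed trace "paints" the whole section, which is why automatic horizon picking and flattening fall out of the same machinery.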
O'Dea E.B.,University of Texas at Austin
PLoS computational biology | Year: 2010
Lethal mutagenesis is a promising new antiviral therapy that kills a virus by raising its mutation rate. One potential shortcoming of lethal mutagenesis is that viruses may resist the treatment by evolving genomes with increased robustness to mutations. Here, we investigate to what extent mutational robustness can inhibit extinction by lethal mutagenesis in viruses, using both simple toy models and more biophysically realistic models based on RNA secondary-structure folding. We show that although the evolution of greater robustness may be promoted by increasing the mutation rate of a viral population, such evolution is unlikely to greatly increase the mutation rate required for certain extinction. Using an analytic multi-type branching process model, we investigate whether the evolution of robustness can be relevant on the time scales on which extinction takes place. We find that the evolution of robustness matters only when initial viral population sizes are small and deleterious mutation rates are only slightly above the level at which extinction can occur. The stochastic calculations are in good agreement with simulations of self-replicating RNA sequences that have to fold into a specific secondary structure to reproduce. We conclude that the evolution of mutational robustness is in most cases unlikely to prevent the extinction of viruses by lethal mutagenesis.
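The extinction threshold at the heart of lethal mutagenesis can be illustrated with a deliberately simple branching process (much cruder than the paper's multi-type model): each virion leaves a Poisson number of offspring, and every mutation is lethal. All parameter values below are invented.

```python
import math, random

random.seed(11)

# Each virion leaves Poisson(b) offspring; each offspring is viable with
# probability exp(-U), where U is the genomic mutation rate (all mutations
# lethal in this toy). Extinction is certain when b*exp(-U) <= 1, i.e. once
# U exceeds ln(b).
def poisson(lam):
    # Knuth's method; fine for small lam
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def goes_extinct(b, U, n0=5, max_gen=200, cap=5000):
    surv = math.exp(-U)
    n = n0
    for _ in range(max_gen):
        if n == 0:
            return True
        if n > cap:                 # clearly supercritical: call it survived
            return False
        n = sum(1 for _ in range(n) for _ in range(poisson(b))
                if random.random() < surv)
    return n == 0

b, trials = 3.0, 20
frac_high = sum(goes_extinct(b, U=1.5) for _ in range(trials)) / trials
frac_low = sum(goes_extinct(b, U=0.5) for _ in range(trials)) / trials
print(frac_high, frac_low)   # U above ln(3) drives reliable extinction
```

The paper's question is, in effect, how much evolving robustness can shift this threshold, and the answer above is: usually not by much.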
Haus E.L.,University of Minnesota |
Smolensky M.H.,University of Texas at Austin
Sleep Medicine Reviews | Year: 2013
Shift work that includes a nighttime rotation has become an unavoidable attribute of today's 24-h society. The related disruption of the human circadian time organization leads in the short-term to an array of jet-lag-like symptoms, and in the long-run it may contribute to weight gain/obesity, metabolic syndrome/type II diabetes, and cardiovascular disease. Epidemiologic studies also suggest increased cancer risk, especially for breast cancer, in night and rotating female shift workers. If confirmed in more controlled and detailed studies, the carcinogenic effect of night and shift work will constitute additional serious medical, economic, and social problems for a substantial proportion of the working population. Here, we examine the possible multiple and interconnected cancer-promoting mechanisms as a consequence of shift work, i.e., repeated disruption of the circadian system, pineal hormone melatonin suppression by exposure to light at night, sleep-deprivation-caused impairment of the immune system, plus metabolic changes favoring obesity and generation of proinflammatory reactive oxygen species. © 2012 Elsevier Ltd.
Ganesan V.,University of Texas at Austin |
Jayaraman A.,University of Colorado at Boulder
Soft Matter | Year: 2014
Polymer nanocomposites are a class of materials that consist of a polymer matrix filled with inorganic/organic nanoscale additives that enhance the inherent macroscopic (mechanical, optical and electronic) properties of the polymer matrix. Over the past few decades such materials have received tremendous attention from experimentalists, theoreticians, and computational scientists. These studies have revealed that the macroscopic properties of polymer nanocomposites depend strongly on the (microscopic) morphology of the constituent nanoscale additives in the polymer matrix. As a consequence, intense research efforts have been directed to understand the relationships between interactions, morphology, and the phase behavior of polymer nanocomposites. Theory and simulations have proven to be useful tools in this regard due to their ability to link molecular level features of the polymer and nanoparticle additives to the resulting morphology within the composite. In this article we review recent theory and simulation studies, presenting briefly the methodological developments underlying PRISM theories, density functional theory, self-consistent field theory approaches, and atomistic and coarse-grained molecular simulations. We first discuss the studies on polymer nanocomposites with bare or un-functionalized nanoparticles as additives, followed by a review of recent work on composites containing polymer grafted or functionalized nanoparticles as additives. We conclude each section with a brief outlook on some potential future directions. This journal is © The Royal Society of Chemistry.
Narayanaswamy V.,North Carolina State University |
Clemens N.T.,University of Texas at Austin
Proceedings of the Combustion Institute | Year: 2013
An experimental study was performed to investigate the soot-turbulence interaction in the soot-formation region of turbulent non-premixed co-flowing ethylene/N2 jet flames. Simultaneous velocity and soot volume-fraction (fv) fields were obtained using two-component particle image velocimetry and laser-induced incandescence, respectively. Measurements were made for jet exit Reynolds numbers between 8500 and 12,300, and the measurement location was 10 jet diameters downstream, near the beginning of the yellow luminous region where soot is first formed. In agreement with previous studies, the peak mean fv in the production region is inversely related to the bulk strain rate. The simultaneous data show that soot is formed to the inside of the stoichiometric surface (inferred from stoichiometric velocity), but the formation region moves outside to regions of lower velocity and strain rate as the bulk strain rate is increased. Soot structures form in low strain rate regions, but their upstream edge is seen to become stretched out and aligned at a preferred angle (near 45 degrees) owing to alignment with the instantaneous principal extensive strain rate. Statistical analysis shows that the soot exists, on average, in fluid with axial velocity of about 3 m/s and strain rate of 700 s⁻¹, regardless of the jet exit velocity. The radial profiles of the covariance between fv and radial velocity are consistent with a model where the soot is formed at a preferred radial location (near the reaction zone) and then is transported by turbulent fluctuations to regions of lower fv. © 2012 The Combustion Institute.
Thomeer M.B.,University of Texas at Austin
American Journal of Public Health | Year: 2013
Objectives: I examined how sexual minority status, as indicated by sex of sexual partners, is associated with self-rated health and how socioeconomic status suppresses and age and sex moderate this association. Methods: I used multinomial logistic regression to analyze aggregated data from the 1991 to 2010 General Social Survey, a population-based data set (n = 13 480). Results: Respondents with only different-sex partners or with any same-sex partners reported similar levels of health. With socioeconomic status added to the model, respondents with any same-sex partners reported worse health than those with only different-sex partners, but only if sexual intercourse with same-sex partners occurred in the previous 5 years. Age and sex moderated this relationship: having any same-sex partners was associated with worse health for women but not men and among younger adults only. Conclusions: The relationship between sexual minority status and self-rated health varies across sociodemographic groups. Future research should use population-level data to examine other health outcomes and continue to explore how the intersection of sexual minority status and other sociodemographic indicators shapes health.
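The multinomial-logistic setup named in the Methods can be sketched on synthetic data (not the GSS): a 3-category health outcome modeled as a softmax function of one binary predictor, with generating probabilities invented so that the predictor is associated with worse health.

```python
import math, random

random.seed(5)

# Invented category probabilities for outcome (0=poor, 1=good, 2=excellent)
# by binary predictor x; x=1 shifts mass from "excellent" toward "poor".
probs = {0: [0.2, 0.5, 0.3], 1: [0.3, 0.5, 0.2]}

def draw(p):
    r, acc = random.random(), 0.0
    for k, pk in enumerate(p):
        acc += pk
        if r < acc:
            return k
    return len(p) - 1

data = [(x, draw(probs[x]))
        for x in (random.randrange(2) for _ in range(1500))]

def softmax(zs):
    m = max(zs)
    es = [math.exp(z - m) for z in zs]
    s = sum(es)
    return [e / s for e in es]

# Fit by gradient ascent on the log-likelihood; category 0 is the reference.
b = [[0.0, 0.0], [0.0, 0.0]]          # (intercept, slope) for categories 1, 2
for _ in range(300):
    g = [[0.0, 0.0], [0.0, 0.0]]
    for x, y in data:
        p = softmax([0.0, b[0][0] + b[0][1] * x, b[1][0] + b[1][1] * x])
        for k in (1, 2):
            err = (1.0 if y == k else 0.0) - p[k]
            g[k - 1][0] += err
            g[k - 1][1] += err * x
    for k in (0, 1):
        b[k][0] += 0.5 * g[k][0] / len(data)
        b[k][1] += 0.5 * g[k][1] / len(data)

# Slope for "excellent" vs the "poor" reference; the generating
# probabilities imply a true value of ln(0.2/0.3) - ln(0.3/0.2) ~= -0.81.
print(round(b[1][1], 2))
```

A negative fitted slope for the top category relative to the reference is the multinomial analogue of the "worse health" association reported in the Results.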
Weinberg S.,University of Texas at Austin
Physical Review Letters | Year: 2010
An effective field theory of quarks, gluons, and pions, with the number N of colors treated as large, is proposed as a basis for calculations of hadronic phenomena at moderate energies. The qualitative consequences of the large N limit are similar though not identical to those in pure quantum chromodynamics, but because constituent quark masses appear in the effective Lagrangian, the 't Hooft coupling in the effective theory need not be strong at moderate energies. To leading order in 1/N the effective theory is renormalizable, with only a finite number of terms in the Lagrangian. © 2010 The American Physical Society.
MacDonald A.H.,University of Texas at Austin
Physical Review Letters | Year: 2010
We show that the low-temperature sash features in lowest Landau-level (LLL) tunneling spectra recently discovered by Dial and Ashoori are intimately related to the discrete Haldane-pseudopotential interaction energy scales that govern fractional quantum-Hall physics. Our analysis is based on expressions for the tunneling density of states which become exact at filling factors close to ν=0 and ν=1, where the sash structure is most prominent. We comment on other aspects of LLL correlation physics that can be revealed by accurate temperature-dependent tunneling data. © 2010 The American Physical Society.
Henneghan A.,University of Texas at Austin
Supportive Care in Cancer | Year: 2016
Purpose: It is unknown why some breast cancer survivors experience cancer-related cognitive impairments (CRCI) after cancer treatment, and modifiable risk factors for CRCI remain to be explicated. This mixed-method systematic review synthesizes quantitative and qualitative evidence for relationships between modifiable factors and CRCI in breast cancer survivors who receive chemotherapy as part of their treatment. Methods: Keyword searches of PubMed/Medline, PsychINFO, and CINAHL were performed for January 2005 through June 2015. Studies that provided data on associations between modifiable biological, behavioral, environmental, and psychosocial factors and cognition were included. Results: Twenty-two quantitative studies and five qualitative studies were identified after applying inclusion and exclusion criteria, yielding evidence for significant relationships among modifiable biological (inflammatory cytokines), behavioral (sleep quality, physical activity), and psychosocial (stress, distress, affect) factors and CRCI. Conclusion: Many women unfortunately experience CRCI after breast cancer chemotherapy, with limited treatment options available to improve cognitive function. This review synthesizes current evidence to support the associations between modifiable factors and CRCI and can inform research to evaluate these factors prospectively. The clinical implications of these findings suggest that lifestyle factors such as physical activity, stress management, and sleep quality may be appropriate targets for behavioral interventions to improve cognitive function following breast cancer chemotherapy; however, further research is necessary. © 2015, Springer-Verlag Berlin Heidelberg.
Alu A.,University of Texas at Austin |
Engheta N.,University of Pennsylvania
Physical Review Letters | Year: 2010
Near-field imaging is a well-established technique in biomedical measurements, since probing closer to the detail of interest makes it possible to resolve subwavelength details that regular lenses cannot. A near-field scanning optical microscope (NSOM) tip may indeed overcome the resolution limits of far-field optics, but its proximity inherently perturbs the measurement. Here, we apply the recent concept of a "cloaked sensor" to an NSOM device in collection mode, showing theoretically how a proper plasmonic cover applied to an NSOM tip may drastically improve its overall measurement capabilities. © 2010 The American Physical Society.
Ghosh B.,University of Texas at Austin |
Akram M.W.,Indian Institute of Technology Kanpur
IEEE Electron Device Letters | Year: 2013
In this letter, a double-gate junctionless tunnel field effect transistor (JL-TFET) is proposed and investigated. The JL-TFET is a Si-channel, heavily n-type-doped junctionless field effect transistor (JLFET) that uses two isolated gates (Control-Gate, P-Gate) with two different metal work functions to behave like a tunnel field effect transistor (TFET). In this structure, the advantages of the JLFET and the TFET are combined. The simulation results for a JL-TFET with high-k dielectric material (TiO2) and a 20-nm gate length show excellent characteristics: a high ION/IOFF ratio (∼6 × 10^8), a point subthreshold slope (SS) of ∼38 mV/decade, and an average SS of ∼70 mV/decade at room temperature, which indicates that the JL-TFET is a promising candidate for switching applications. © 1980-2012 IEEE.
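The subthreshold slopes quoted above follow the standard definition SS = dVg/d(log10 Id). As a minimal illustration (the bias/current points below are synthetic, not the simulated JL-TFET data), point and average SS can be extracted from an I-V sweep like this:

```python
# Point and average subthreshold slope from (Vg, Id) pairs.
# SS = dVg / d(log10 Id), reported in mV/decade.
import math

# Hypothetical (Vg [V], Id [A]) points in the subthreshold region
iv = [(0.0, 1e-13), (0.1, 1e-11), (0.2, 5e-10), (0.3, 1e-8)]

def point_ss(p1, p2):
    """Subthreshold slope in mV/decade between two adjacent bias points."""
    (v1, i1), (v2, i2) = p1, p2
    return 1000 * (v2 - v1) / (math.log10(i2) - math.log10(i1))

point_slopes = [point_ss(iv[k], iv[k + 1]) for k in range(len(iv) - 1)]

# Average SS over the full swing, using only the endpoints
avg_ss = 1000 * (iv[-1][0] - iv[0][0]) / (math.log10(iv[-1][1]) - math.log10(iv[0][1]))
```

With these synthetic points the average works out to 60 mV/decade, the room-temperature thermionic limit that tunnel FETs are designed to beat.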
Maxwell J.C.,University of Texas at Austin
Drug and Alcohol Review | Year: 2011
Introduction and Aims. Abuse of prescription analgesics in the USA is increasing. The epidemic has been driven by many factors, including marketing strategies, incorrect prescribing practices, a variety of legal and illegal drug sources, belated governmental responses and increases in the number of prescriptions written. Design and Methods. Data sources including surveys, emergency room visits, treatment admissions, overdose deaths, toxicology laboratory findings and journal articles were examined to identify trends. Results. The surveys and emergency department visits show use lowest among young teenagers and highest among older teenagers and young adults, with significant increases among those aged 55 and older. The length of time between initial use of an opioid other than heroin and admission to treatment is shortening. Mortality data and toxicology exhibits confirm the increases and show the variation in the prevalence of various drugs across the USA. Discussion and Conclusions. Abuse is increasing, with varying patterns of use by high-risk groups and different geographic preferences. Prescription drug monitoring programs are being developed in each of the US states to deter 'doctor shopping'; the Food and Drug Administration has increased authority over manufacturers; and options for proper disposal of leftover medications exist. There is increased emphasis on responsible prescribing including risk assessments, prescribing agreements, treatment plans, and training for clinicians, as well as monitoring the interactions with benzodiazepines. However, unless these efforts decrease diversion, abuse and addiction, clinicians may lose the ability to use some of these opioids for effective pain management or so many barriers will be raised that pain will go undertreated or untreated. [Maxwell JC. The prescription drug epidemic in the United States: A perfect storm. Drug Alcohol Rev 2011;30:264-270] © 2011 Australasian Professional Society on Alcohol and other Drugs.
Karlsson T.,University of Sydney |
Karlsson T.,University of Oxford |
Bromm V.,University of Texas at Austin |
Bland-Hawthorn J.,University of Sydney
Reviews of Modern Physics | Year: 2013
The emergence of the first sources of light at redshifts of z∼10-30 signaled the transition from the simple initial state of the Universe to one of increasing complexity. Recent progress in our understanding of the formation of the first stars and galaxies, starting with cosmological initial conditions, primordial gas cooling, and subsequent collapse and fragmentation, is reviewed. The important open question of how the pristine gas was enriched with heavy chemical elements in the wake of the first supernovae is emphasized. The review concludes by discussing how the chemical abundance patterns conceivably allow us to probe the properties of the first stars and to test models of early metal enrichment. © 2013 American Physical Society.
Bersuker I.B.,University of Texas at Austin
Physical Review Letters | Year: 2012
The conditions of multiferroicity in d^n perovskites are derived from the pseudo Jahn-Teller effect, due to which ferroelectric displacements are triggered by vibronic coupling between ground and excited electronic states of opposite parity but same spin multiplicity; it takes place for some specific d^n configurations and spin states only. In combination with the high-spin-low-spin crossover effect this leads to a novel phenomenon, the magnetic-ferroelectric (multiferroics) crossover, which predicts magnetoelectric effects with exciting functionalities including electric magnetization and demagnetization. © 2012 American Physical Society.
Abanin D.A.,Harvard University |
Pesin D.A.,University of Texas at Austin
Physical Review Letters | Year: 2012
The electronic properties of graphene can be manipulated via mechanical deformations, which opens prospects for both studying the Dirac fermions in new regimes and for new device applications. Certain natural configurations of strain generate large nearly uniform pseudomagnetic fields, which have opposite signs in the two valleys, and give rise to flat spin- and valley-degenerate pseudo-Landau levels (PLLs). Here we consider the effect of the Coulomb interactions in strained graphene with a uniform pseudomagnetic field. We show that the spin or valley degeneracies of the PLLs get lifted by the interactions, giving rise to topological insulator states. In particular, when a nonzero PLL is quarter or three-quarter filled, an anomalous quantum Hall state spontaneously breaking time-reversal symmetry emerges. At half-filled PLLs, a weak spin-orbit interaction stabilizes the time-reversal-symmetric quantum spin-Hall state. These many-body states are characterized by the quantized conductance and persist to a high temperature scale set by the Coulomb interactions, which we estimate to be a few hundred kelvin at moderate strain values. At fractional fillings, fractional quantum Hall states breaking valley symmetry emerge. These results suggest a new route to realizing robust topological states in mesoscopic graphene. © 2012 American Physical Society.
Brown H.S. III,University of Texas at Austin
Preventing chronic disease | Year: 2012
The objective of our study was to estimate the long-term cost-effectiveness of a lifestyle modification program led by community health workers (CHWs) for low-income Hispanic adults with type 2 diabetes. We forecasted disease outcomes, quality-adjusted life years (QALYs) gained, and lifetime costs associated with attaining different hemoglobin A1c (A1c) levels. Outcomes were projected 20 years into the future and discounted at a 3.0% rate. Sensitivity analyses were conducted to assess the extent to which our results were dependent on assumptions related to program effectiveness, projected years, discount rates, and costs. The incremental cost-effectiveness ratio of the intervention ranged from $10,995 to $33,319 per QALY gained when compared with usual care. The intervention was particularly cost-effective for adults with high glycemic levels (A1c > 9%). The results are robust to changes in multiple parameters. The CHW program was cost-effective. This study adds to the evidence that culturally sensitive lifestyle modification programs to control diabetes can be a cost-effective way to improve health among Hispanics with diabetes, particularly among those with high A1c levels.
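The study's headline numbers rest on two standard pieces of arithmetic: discounting future costs and QALYs at 3%, and forming the incremental cost-effectiveness ratio (ICER). A minimal sketch of that arithmetic follows; all dollar and quality-weight values are hypothetical placeholders, not figures from the study.

```python
# Discounting a 20-year stream of costs/QALYs and computing an ICER.
# All inputs are illustrative, not the study's data.

def discounted_total(annual_values, rate=0.03):
    """Sum a stream of annual values discounted to present value at `rate`."""
    return sum(v / (1 + rate) ** t for t, v in enumerate(annual_values))

years = 20
# Hypothetical annual costs (program + care) and quality weights
cost_intervention = discounted_total([1200.0] * years)
cost_usual_care   = discounted_total([900.0] * years)
qaly_intervention = discounted_total([0.80] * years)
qaly_usual_care   = discounted_total([0.77] * years)

# Incremental cost-effectiveness ratio: extra dollars per extra QALY gained
icer = (cost_intervention - cost_usual_care) / (qaly_intervention - qaly_usual_care)
```

Because both streams are constant here, the discount factors cancel and the ICER reduces to $300/0.03 = $10,000 per QALY; with realistic year-by-year projections the discounting matters.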
Bourell D.L.,University of Texas at Austin
Annual Review of Materials Research | Year: 2016
Additive manufacturing (AM) has skyrocketed in visibility commercially and in the public sector. This article describes the development of this field from early layered manufacturing approaches of photosculpture, topography, and material deposition. Certain precursors to modern AM processes are also briefly described. The growth of the field over the last 30 years is presented. Included is the standard delineation of AM technologies into seven broad categories. The economics of AM part generation is considered, and the impacts of the economics on application sectors are described. On the basis of current trends, the future outlook will include a convergence of AM fabricators, mass-produced AM fabricators, enabling of topology optimization designs, and specialization in the AM legal arena. Long-term developments with huge impact are organ printing and volume-based printing. Copyright © 2016 by Annual Reviews. All rights reserved.
Lee H.,University of Texas at Austin
Optics Letters | Year: 2014
The shape of a focus-modulated point spread function (PSF) is used as a quick visual assessment tool for aberration modes in the PSF. Further analysis in terms of shape moments can permit quantifying the modal coefficients with an accuracy comparable to that of typical wavefront sensors. In this Letter, the error of the moment-based wavefront sensing is analytically described in terms of the pixelation and photon/readout noise. All components depend strongly on the (unknown) PSF shape but can be estimated from the measured PSF sampled at a reasonable spatial resolution and photon count. Numerical simulations verified that the models consistently predicted the behavior of the modal estimation error of the moment-based wavefront sensing. © 2014 Optical Society of America.
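The shape moments underlying this approach are ordinary image moments of the sampled PSF intensity. The following sketch computes the centroid and central second moments of a synthetic Gaussian spot; the grid size, spot width, and Gaussian model are illustrative stand-ins, not the Letter's PSF model.

```python
# Centroid and central second moments of a sampled intensity image,
# the raw ingredients of moment-based PSF shape analysis.
import numpy as np

def psf_moments(img):
    """Return centroid (xbar, ybar) and central second moments (mxx, myy, mxy)."""
    img = np.asarray(img, dtype=float)
    total = img.sum()
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    xbar = (xx * img).sum() / total
    ybar = (yy * img).sum() / total
    mxx = ((xx - xbar) ** 2 * img).sum() / total
    myy = ((yy - ybar) ** 2 * img).sum() / total
    mxy = ((xx - xbar) * (yy - ybar) * img).sum() / total
    return xbar, ybar, mxx, myy, mxy

# Synthetic symmetric Gaussian spot (sigma = 3 px) centered at (16, 16)
y, x = np.mgrid[0:33, 0:33]
spot = np.exp(-((x - 16) ** 2 + (y - 16) ** 2) / (2 * 3.0 ** 2))
xbar, ybar, mxx, myy, mxy = psf_moments(spot)
```

For this symmetric spot the second moments recover sigma^2 = 9 px^2 and the cross moment vanishes; defocus and aberrations show up as departures of these moments from their diffraction-limited values.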
Krogh-Jespersen S.,University of Chicago |
Echols C.H.,University of Texas at Austin
Child Development | Year: 2012
Children's confidence in their own knowledge may influence their willingness to learn novel information from others. Twenty-four-month-old children's (N=160) willingness to learn novel labels for either familiar or novel objects from an adult speaker was tested in 1 of 5 conditions: accurate, inaccurate, knowledgeable, ignorant, or uninformative. Children were willing to learn a second label for an object from a reliable informant in the accurate, knowledgeable, and uninformative conditions; children were less willing to apply a novel label to a familiar object if the speaker previously was inaccurate or had expressed ignorance. However, when the objects were novel, children were willing to learn the label regardless of the speaker's knowledge level. © 2012 The Authors. Child Development © 2012 Society for Research in Child Development, Inc.
Miller J.A.,University of Texas at Austin
Progress in Physical Geography | Year: 2012
The main goal of species distribution modeling is to identify important underlying factors related to broad-scale ecological patterns in order to make meaningful explanations or accurate predictions. When standard statistical methods such as regression are used to formulate these models, assumptions about the spatial structure of the data and the model parameters are often violated. Autocorrelation and non-stationarity are characteristics of spatial data and models, respectively, and if present and unaccounted for in model development, they can result in poorly specified models as well as inappropriate spatial inference and prediction. While these spatial issues are addressed here in an ecological context using species distribution models, they are broadly relevant to any statistical modeling applications using spatial data. © The Author(s) 2012.
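The spatial autocorrelation discussed above is conventionally diagnosed with Moran's I (a statistic not named in the abstract, added here as an assumption about standard practice). A toy computation on a 1-D transect with binary adjacency weights, purely for illustration:

```python
# Global Moran's I: a standard diagnostic for spatial autocorrelation.
# The transect values and adjacency weights below are a toy example.
import numpy as np

def morans_i(values, weights):
    """Global Moran's I for `values` with a symmetric spatial weight matrix."""
    x = np.asarray(values, dtype=float)
    w = np.asarray(weights, dtype=float)
    z = x - x.mean()
    num = (w * np.outer(z, z)).sum()          # sum_ij w_ij * z_i * z_j
    return len(x) / w.sum() * num / (z ** 2).sum()

# Smoothly increasing values along a transect -> strong positive autocorrelation
vals = [1, 2, 3, 4, 5, 6]
n = len(vals)
w = np.zeros((n, n))
for i in range(n - 1):                        # neighbors = adjacent sites
    w[i, i + 1] = w[i + 1, i] = 1
I = morans_i(vals, w)
```

Values of I near +1 indicate clustered (autocorrelated) data, near 0 spatial randomness; regression residuals with large I violate the independence assumption the review warns about.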
Legare C.H.,University of Texas at Austin
Child Development | Year: 2012
Explaining inconsistency may serve as an important mechanism for driving the process of causal learning. But how might this process generate amended beliefs? One way that explaining inconsistency may promote discovery is by guiding exploratory, hypothesis-testing behavior. In order to investigate this, a study with young children ranging in age from 2 to 6years (N=80) examined the relation between explanation and exploratory behavior following consistent versus inconsistent outcomes. Results indicated that for inconsistent outcomes only, the kind of explanation children provided informed the kind of exploratory behavior they engaged in and the extent to which children modified and generated new hypotheses. In sum, the data provide insight into a mechanism by which explaining inconsistent evidence guides causal cognition. © 2011 The Author. Child Development © 2011 Society for Research in Child Development, Inc.
Dunn A.K.,University of Texas at Austin
Annals of Biomedical Engineering | Year: 2012
Laser speckle contrast imaging (LSCI) has emerged over the past decade as a powerful, yet simple, method for imaging of blood flow dynamics in real time. The rapid adoption of LSCI for physiological studies is due to the relative ease and low cost of building an instrument as well as the ability to quantify blood flow changes with excellent spatial and temporal resolution. Although measurements are limited to superficial tissues with no depth resolution, LSCI has been instrumental in pre-clinical studies of neurological disorders as well as clinical applications including dermatological, neurosurgical and endoscopic studies. Recently a number of technical advances have been developed to improve the quantitative accuracy and temporal resolution of speckle imaging. This article reviews some of these recent advances and describes several applications of speckle imaging. © 2011 Biomedical Engineering Society.
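The core LSCI quantity is the local speckle contrast K = sigma/mean computed over a small sliding window of the raw speckle image. The sketch below uses a 7x7 window and a synthetic exponentially distributed image as stand-ins; real instruments image laser speckle from tissue.

```python
# Local speckle contrast K = std/mean over sliding windows of a speckle image.
# Window size and the synthetic test image are illustrative choices.
import numpy as np

def speckle_contrast(img, win=7):
    """Local contrast K = std/mean in win x win neighborhoods (valid region)."""
    img = np.asarray(img, dtype=float)
    out = np.empty((img.shape[0] - win + 1, img.shape[1] - win + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = img[i:i + win, j:j + win]
            out[i, j] = patch.std() / patch.mean()
    return out

# Fully developed static speckle has negative-exponential intensity statistics,
# for which K approaches 1; motion blur from blood flow drives K toward 0.
rng = np.random.default_rng(0)
static = rng.exponential(1.0, size=(64, 64))
K = speckle_contrast(static)
```

In flow imaging, lower K in a region indicates faster motion (more blurring within the camera exposure), which is how LSCI maps relative blood flow.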
Martin S.F.,University of Texas at Austin
Advances in Heterocyclic Chemistry | Year: 2013
Designing strategies for the efficient synthesis of frameworks found in alkaloids and other biologically active nitrogen heterocycles is an important goal in contemporary organic chemistry. This account presents an overview of our discovery and development of some useful approaches to prepare nitrogen heterocycles that are common to a number of alkaloid families. Our interest in the broad arena of alkaloid synthesis has helped lead to the emergence of the vinylogous Mannich reaction and ring-closing metathesis as strategically powerful bond-forming reactions that can be exploited as key steps in the synthesis of alkaloids and other nitrogen heterocycles of biological interest. Our discovery of the vinylogous Mannich reaction also led to the development of a novel strategy for diversity oriented synthesis that features the creation of highly substituted arylmethyl amine derivatives by multicomponent assembly processes related to the Mannich reaction. These adducts may be rapidly elaborated by ring-closing metathesis and other transition metal catalyzed cyclizations, cycloadditions, and a variety of standard ring-forming reactions to provide a variety of nitrogen heterocyclic scaffolds that may be further derivatized to generate collections of small molecules for biological screening. © 2013 Elsevier Inc.
Wang K.,Beijing Normal University |
Dickinson R.E.,University of Texas at Austin
Reviews of Geophysics | Year: 2013
Atmospheric downward longwave radiation at the surface (Ld) varies with increasing CO2 and other greenhouse gases. This study quantifies the uncertainties of current estimates of global Ld at monthly to decadal timescales and its global climatology and trends during the past decades by a synthesis of the existing observations, reanalyses, and satellite products. We find that current Ld observations have a standard deviation error of ~3.5 W m-2 on a monthly scale. Observations of Ld by different pyrgeometers may differ substantially for lack of a standard reference. The calibration of a pyrgeometer significantly affects its quantification of annual variability. Compared with observations collected at 169 global land sites from 1992 to 2010, the Ld derived from state-of-the-art satellite cloud observations and reanalysis temperature and humidity profiles at a grid scale of ~1° has a bias of ±9 W m-2 and a standard deviation of 7 W m-2, with a nearly zero overall bias. The standard deviations are reduced to 4 W m-2 over tropical oceans when compared to Ld observations collected by 24 buoy sites from 2002 to 2011. The -4 W m-2 bias of satellite Ld retrievals over tropical oceans is likely because of the overestimation of Ld observations resulting from solar heating of the pyrgeometer. Our best estimates of global mean Ld from 2003 to 2010 are 342 ± 3 W m-2 (global), 307 ± 3 W m-2 (land), and 356 ± 3 W m-2 (ocean). Estimates of Ld trends are seriously compromised by changes in satellite sensors, which alter the retrieved water vapor profiles. Key points: Ld observations have a standard deviation error of ~3.5 W m-2 at a monthly scale; satellite and reanalysis Ld has a standard deviation of 7 W m-2; global mean Ld is 342 W m-2 (global), 307 W m-2 (land), and 356 W m-2 (ocean). ©2013. American Geophysical Union. All Rights Reserved.
Korzdorfer T.,Georgia Institute of Technology |
Marom N.,University of Texas at Austin
Physical Review B - Condensed Matter and Materials Physics | Year: 2012
Many-body perturbation theory in the G0W0 approximation is an increasingly popular tool for calculating electron removal energies and fundamental gaps for molecules and solids. However, the predictive power of G0W0 is limited by its sensitivity to the density functional theory (DFT) starting point. We introduce a nonempirical scheme, which allows us to find a reliable DFT starting point for G0W0 calculations. This is achieved by adapting the amount of Hartree-Fock exchange in a hybrid DFT functional. The G0W0 spectra resulting from this starting point reliably predict experimental photoelectron spectra for a test set of 13 typical organic semiconductor molecules. © 2012 American Physical Society.
De La Torre J.C.,University of Texas at Austin
American Journal of Alzheimer's Disease and other Dementias | Year: 2013
Evidence is growing that vascular risk factors (VRFs) for Alzheimer's disease (AD) affect cerebral hemodynamics to launch a cascade of cellular and molecular changes that initiate cognitive deficits and eventual progression of AD. Neuroimaging studies have reported VRFs for AD to be accurate predictors of cognitive decline and dementia. In regions that participate in higher cognitive function (the middle temporal, posterior cingulate, inferior parietal, and precuneus regions), neuroimaging studies indicate an association involving VRFs, cerebral hypoperfusion, and cognitive decline in elderly individuals who develop AD. VRFs can be present in cognitively intact individuals for decades before mild cognitive deficits or neuropathological signs are manifested. In that sense, they may be "ticking time bombs" before cognitive function is demolished. Preventive intervention targeting modifiable VRFs may delay or block progression of AD. Intervention could target cerebral blood flow (CBF), since most VRFs act to lower CBF in aging individuals by promoting cerebrovascular dysfunction. © 2013 The Author(s).
Ross C.E.,University of Texas at Austin
Journal of Health and Social Behavior | Year: 2011
A threatening and dangerous neighborhood may produce distressing emotions of anxiety, anger, and depression among the individuals who live there because residents find these neighborhoods subjectively alienating. The author introduces the idea that neighborhood disorder indicates collective threat, which is alienating, shaping perceptions of powerlessness and mistrust. The author presents a theory of trust that posits that mistrust develops in places where resources are scarce and threat is common and among individuals who have few resources and feel powerless to avoid or manage the threat. Perceived powerlessness develops with exposure to uncontrollable, negative conditions such as crime, danger, and threat in one's neighborhood. Thus, neighborhood disorder, common in disadvantaged neighborhoods, influences mistrust directly and indirectly by increasing perceptions of powerlessness among residents, which amplify disorder's effect on mistrust. The very thing needed to protect disadvantaged residents from the negative effects of their environment, a sense of personal control, is eroded by that environment in a process that the author calls structural amplification. Powerlessness and mistrust in turn are distressing, increasing levels of anxiety, anger, and depression. © American Sociological Association 2011.
Alu A.,University of Texas at Austin |
Engheta N.,University of Pennsylvania
Physical Review Letters | Year: 2010
Optical waveguide interconnects are a major component of chip-scale data processing and computational systems. Here, we propose an alternative mechanism based on optical wireless broadcasting links using nanoantennas, which may overcome some of the limitations of nanoscale waveguide interconnects. By properly loading and matching nanoantenna pairs with optical nanocircuits, we theoretically demonstrate a complete optical wireless link that, in spite of some radiation loss and mismatch factors, may exhibit much less absorption loss, largely outperforming regular plasmonic waveguide links. © 2010 The American Physical Society.
Yu E.T.,University of Texas at Austin |
Van De Lagemaat J.,National Renewable Energy Laboratory
MRS Bulletin | Year: 2011
Photovoltaics are expected to play an important role in the future energy infrastructure. However, achieving simultaneously high efficiency in both light absorption and carrier collection remains a challenging tradeoff. Photon management, which refers to the engineering of materials and device structures to control the spatial distribution of optical energy, offers a number of promising routes to optimizing this tradeoff. Progress in fabrication of nanostructured materials combined with advances in the understanding of nanophotonic devices has enabled new strategies for photon management in a range of photovoltaic devices. Prominent among these are structures with pronounced surface topography or graded refractive-index profiles that reduce surface reflectivity; materials processing that increases optical absorption in materials such as silicon; incorporation of semiconductor nanostructures that enables simultaneous improvements in optical absorption and photogenerated carrier collection; and coherent light trapping in optical waveguide modes via plasmonic or optical scattering effects. The articles in this issue review some of these emerging directions. © Copyright Materials Research Society 2011.
Li X.,University of Texas at Austin
Proceedings of the Annual IEEE Conference on Computational Complexity | Year: 2011
We study the problem of constructing affine extractors over GF(2). Previously the only known construction that can handle sources with arbitrarily linear entropy is due to Bourgain (and a slight modification by Yehudayoff), which makes extensive use of complicated inequality manipulations and relies on a careful choice of a polynomial. In this paper we give a new and conceptually much cleaner construction of affine extractors for linear entropy sources that outputs a constant fraction of the entropy with exponentially small error. This matches the previous best result of Bourgain. The extractor can be pushed to handle affine sources with entropy n/√(log n log log n). This slightly improves Bourgain's result and matches the recent result of Yehudayoff. We also give a zero-error disperser for affine sources with entropy n/√(log n) that outputs n^Ω(1) bits. This improves previous constructions of affine dispersers that output more than 1 bit. In contrast to Bourgain's construction, our construction mainly uses extractor machinery and basic properties of polynomials. Some of our techniques may be of independent interest. © 2011 IEEE.
Gillespie J.H.,University of Texas at Austin
PLoS ONE | Year: 2013
Background: Understanding dietary trends for endangered species may be essential to assessing the effects of ecological disturbances such as habitat modification, species introductions or global climate change. Documenting temporal variation in prey selection may also be crucial for understanding population dynamics. However, the rarity, secretive behaviours and obscure microhabitats of some endangered species can make direct foraging observations difficult or impossible. Furthermore, the lethality or invasiveness of some traditional methods of dietary analysis (e.g. gut contents analysis, gastric lavage) makes them inappropriate for such species. Stable isotope analysis facilitates non-lethal, indirect analysis of animal diet that has unrealized potential in the conservation of endangered organisms, particularly amphibians. Methodology/findings: I determined proportional contributions of aquatic macroinvertebrate prey to the diet of an endangered aquatic salamander Eurycea sosorum over a two-year period using stable isotope analysis of 13C/12C and 15N/14N ratios and the Bayesian stable isotope mixing model SIAR. I calculated Strauss' dietary electivity indices by comparing these proportions with changing relative abundance of potential prey species through time. Stable isotope analyses revealed that a previously unknown prey item (soft-bodied planarian flatworms in the genus Dugesia) made up the majority of E. sosorum diet. Results also demonstrate that E. sosorum is an opportunistic forager capable of diet switching to include a greater proportion of alternative prey when Dugesia populations decline. There is also evidence of intra-population dietary variation. Conclusions/significance: Effective application of stable isotope analysis can help circumvent two key limitations commonly experienced by researchers of endangered species: the inability to directly observe these species in nature and the invasiveness or lethality of traditional methods of dietary analysis.
This study illustrates the feasibility of stable isotope analysis in identifying preferred prey species that can be used to guide conservation management of both wild and captive food sources for endangered species. © 2013 J. Hayley Gillespie.
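Strauss' linear electivity index used in the study is L_i = r_i - p_i, where r_i is the proportion of prey taxon i in the diet and p_i its relative abundance in the environment. A minimal sketch follows; the prey names and proportions are illustrative, not the study's data.

```python
# Strauss' linear electivity index: L_i = r_i - p_i, range -1..+1.
# Positive L indicates preference, negative L avoidance.

def strauss_electivity(diet_props, env_props):
    """Return L_i = r_i - p_i for each prey taxon present in the diet data."""
    return {prey: diet_props[prey] - env_props[prey] for prey in diet_props}

# Hypothetical proportions (diet from mixing-model output, environment from counts)
diet = {"Dugesia": 0.55, "amphipods": 0.30, "chironomids": 0.15}
env  = {"Dugesia": 0.20, "amphipods": 0.45, "chironomids": 0.35}
L = strauss_electivity(diet, env)
```

Here the flatworms would score L = +0.35 (strong preference) while the other taxa score negative (taken less than their availability), which is the kind of pattern the abstract reports for Dugesia.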
Macneilage P.F.,University of Texas at Austin
Developmental Psychobiology | Year: 2013
As part of a vertebrate-wide trend toward left brain/right side asymmetries in routine whole-body actions, marine mammals show signs of rightward appendage-use biases and short- and long-term turning asymmetries, most of which are unique among non-humans in being just as strong as right-handedness, and even stronger than human handedness-related turning biases. Short-term marine mammal turning asymmetries and human about-turning asymmetries share a leading right side, suggesting a commonality in left hemisphere intentional control. The long-term leftward turning bias that both groups share may be an indirect result of both sensory and motor influences on the right side in dolphins, but be induced by a right-hemisphere-controlled spatial/attentional bias to the left in humans. Marine mammals may share, with humans and other higher primates, a left hemisphere specialization for action dynamics, although evidence is currently lacking for human-like right hemisphere specializations relevant to action in other vertebrates. © 2013 Wiley Periodicals, Inc.
Snedden J.W.,University of Texas at Austin
Marine and Petroleum Geology | Year: 2013
Scouring at the base of channel bodies plays a significant role in enhancing connectivity of sandstone reservoirs. Investigation of high-resolution 3D seismic data from a fluvial and deep-water channel system illuminates the location and spacing of channel-body basal scours and possible controls. Scours in both East Breaks upper fan (Quaternary) and Iron River fluvial channel-bodies (Cretaceous) are comparable in scale, with a deviation of up to 10 m scour depth relative to the average channel-body basal depth. Substrate lithology, as documented by draping well-calibrated seismic response onto 3D channel body basal surfaces, does not appear to be a major influence on scour location or depth. In the datasets examined, channel-body basal scour locations appear to be most influenced by changes in channel orientation, with outer bends being particularly prone to scouring, even in channels that are not highly sinuous. Scaling relationships between scour spacing and channel width observed in modern fluvial systems are further tested against these high-resolution datasets and published Miocene subsurface deep-water reservoir maps, but results are mixed, with a consistent over-prediction of scour spacing. This may reflect difficulties in accurately determining channel bankfull width in confined deep-water channel complex systems. Observations from an ancillary 4D seismic dataset show that over 30% of the identified scour areas exhibit attribute anomalies calibrated with water saturation changes between the baseline and monitor surveys (about 3 years) and thus indicate production-induced fluid movement through these features. This underlines the importance of scours as connection points between fluid compartments and the significance of observations of scour location and spacing made here from high-resolution 3D and ancillary 4D seismic data. © 2012 Elsevier Ltd.
Fomel S.,University of Texas at Austin
Geophysics | Year: 2013
Seismic data can be decomposed into nonstationary spectral components with smoothly variable frequencies and smoothly variable amplitudes. To estimate local frequencies, I use a nonstationary version of Prony's spectral analysis method defined with the help of regularized nonstationary autoregression. To estimate local amplitudes of different components, I fit their sum to the data using regularized nonstationary regression. Shaping regularization ensures stability of the estimation process and provides controls on smoothness of the estimated parameters. Potential applications of the proposed technique include noise attenuation, seismic data compression, and seismic data regularization. © 2013 Society of Exploration Geophysicists.
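The regularized nonstationary autoregression above generalizes classical Prony analysis, in which one autoregressive filter is fit to the whole trace and component frequencies are read off the roots of the prediction polynomial. The toy stationary version below (not the paper's regularized, time-varying method) illustrates that core idea on a synthetic two-tone trace:

```python
# Classical (stationary) Prony spectral analysis via linear prediction.
# The paper's method lets the filter coefficients vary smoothly in time;
# this sketch fits a single filter to the whole signal.
import numpy as np

def prony_frequencies(signal, order, dt=1.0):
    """Estimate component frequencies (Hz) via linear prediction + root finding."""
    x = np.asarray(signal, dtype=float)
    n = len(x)
    # Linear prediction: x[t] ~ sum_k a[k] * x[t-1-k]
    rows = np.array([x[i:i + order][::-1] for i in range(n - order)])
    rhs = x[order:]
    a, *_ = np.linalg.lstsq(rows, rhs, rcond=None)
    # Roots of the prediction polynomial are complex exponentials z = e^{i*omega*dt}
    roots = np.roots(np.concatenate(([1.0], -a)))
    freqs = np.abs(np.angle(roots)) / (2 * np.pi * dt)
    return np.unique(np.round(freqs, 6))

# Two-tone test trace: 5 Hz and 12 Hz sampled at 100 Hz
dt = 0.01
t = np.arange(0, 1, dt)
sig = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)
est = prony_frequencies(sig, order=4, dt=dt)
```

An order-4 filter is exact for two real sinusoids (four complex exponentials), so the estimated frequencies recover 5 and 12 Hz; noise and nonstationarity are what motivate the paper's shaping regularization.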
Vaze R.,Tata Institute of Fundamental Research |
Heath R.W.,University of Texas at Austin
IEEE Transactions on Information Theory | Year: 2011
In a two-way relay channel, two sources use one or more relay nodes to exchange data with each other. This paper considers a multiple input multiple output (MIMO) two-way relay channel, where each relay node has one or more antennas. Optimal relay transmission strategies for the two-way relay channel are derived to maximize the achievable rate with amplify and forward (AF) at each relay and to achieve the optimal diversity-multiplexing tradeoff (DM-tradeoff). To maximize the achievable rate with AF, an iterative algorithm is proposed which solves a power minimization problem subject to minimum signal-to-interference-and-noise ratio constraints at every step. The power minimization problem is nonconvex. The Karush-Kuhn-Tucker conditions, however, are shown to be sufficient for optimality. Capacity scaling law of the two-way relay channel with increasing number of relays is also established by deriving a lower and upper bound on the capacity region of the two-way relay channel. To achieve the optimal DM-tradeoff, a compress and forward strategy is proposed and its DM-tradeoff is derived. For the full-duplex two-way relay channel, the proposed strategy achieves the optimal DM-tradeoff, while for the half-duplex case the proposed strategy is shown to achieve the optimal DM-tradeoff under some conditions. © 2011 IEEE.
Ruegg A.,University of California at Berkeley |
Lin C.,University of Texas at Austin
Physical Review Letters | Year: 2013
We investigate the electronic structure induced by wedge disclinations (conical singularities) in a honeycomb lattice model realizing Chern numbers γ=±1. We establish a correspondence between the bound state of (i) an isolated Φ0/2 flux, (ii) an isolated pentagon (n=1) or heptagon (n=-1) defect with an external flux of magnitude nγΦ0/4 through the center, and (iii) an isolated square or octagon defect without external flux, where Φ0=h/e is the flux quantum. Because of the above correspondence, the existence of isolated electronic states bound to disclinations is robust against various perturbations. Hence, measuring these defect states offers an interesting probe of graphene-based topological insulators which is complementary to measurements of the quantized edge currents. © 2013 American Physical Society.
Stachowiak J.C.,University of Texas at Austin |
Brodsky F.M.,University of California at San Francisco |
Miller E.A.,Columbia University
Nature Cell Biology | Year: 2013
Many cellular membrane-bound structures exhibit distinct curvature that is driven by the physical properties of their lipid and protein constituents. Here we review how cells manipulate and control this curvature in the context of dynamic events such as vesicle-mediated membrane traffic. Lipids and cargo proteins each contribute energy barriers that must be overcome during vesicle formation. In contrast, protein coats and their associated accessory proteins drive membrane bending using a variety of interdependent physical mechanisms. We survey the energy costs and drivers involved in membrane curvature, and draw a contrast between the stochastic contributions of molecular crowding and the deterministic assembly of protein coats. These basic principles also apply to other cellular examples of membrane bending events, including important disease-related problems such as viral egress. © 2013 Macmillan Publishers Limited. All rights reserved.
Boas D.A.,Harvard University |
Dunn A.K.,University of Texas at Austin
Journal of Biomedical Optics | Year: 2010
First introduced in the 1980s, laser speckle contrast imaging is a powerful tool for full-field imaging of blood flow. Recently laser speckle contrast imaging has gained increased attention, in part due to its rapid adoption for blood flow studies in the brain. We review the underlying physics of speckle contrast imaging and discuss recent developments to improve the quantitative accuracy of blood flow measures. We also review applications of laser speckle contrast imaging in neuroscience, dermatology and ophthalmology. © 2010 Society of Photo-Optical Instrumentation Engineers.
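The quantity at the heart of the technique is the local speckle contrast, the ratio of the standard deviation to the mean intensity within a small window: flow blurs the speckle pattern during the camera exposure and lowers this ratio. A minimal Python sketch of the spatial-domain estimator follows; the 7×7 window and the 1/K² flow index mentioned in the comment are common conventions, not prescriptions from this review.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def speckle_contrast(img, win=7):
    """Local speckle contrast K = sigma/mu over a sliding win x win
    window. Faster flow blurs the speckle during the exposure,
    lowering K; 1/K^2 is often used as a relative flow index."""
    patches = sliding_window_view(img.astype(float), (win, win))
    mu = patches.mean(axis=(-1, -2))
    sigma = patches.std(axis=(-1, -2), ddof=1)
    return sigma / mu

# Fully developed static speckle follows negative-exponential intensity
# statistics, for which K approaches 1; a uniform (fully blurred) field
# gives K = 0 exactly.
rng = np.random.default_rng(0)
static = rng.exponential(scale=1.0, size=(64, 64))
blurred = np.full((64, 64), 5.0)
print(round(float(speckle_contrast(static).mean()), 2))   # close to 1
print(float(speckle_contrast(blurred).mean()))            # → 0.0
```

Quantitative blood-flow estimation additionally requires a model relating K to the exposure time and the speckle decorrelation time, which is the subject of the developments reviewed in the article.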
Ricles L.M.,University of Texas at Austin
International journal of nanomedicine | Year: 2011
Stem cells can differentiate into multiple cell types, and therefore can be used for cellular therapies, including tissue repair. However, the participation of stem cells in tissue repair and neovascularization is not well understood. Therefore, implementing a noninvasive, long-term imaging technique to track stem cells in vivo is needed to obtain a better understanding of the wound healing response. More broadly, we are interested in developing an imaging approach to track mesenchymal stem cells (MSCs) in vivo after delivery via a polyethylene glycol modified fibrin matrix (PEGylated fibrin matrix) using MSCs loaded with gold nanoparticles as nanotracers. The objective of the current study was to assess the effects of loading MSCs with gold nanoparticles on cellular function. In this study, we utilized various gold nanoparticle formulations by varying size and surface coatings and assessed the efficiency of cell labeling using darkfield microscopy. We hypothesized that loading cells with gold nanotracers would not significantly alter cell function due to the inert and biocompatible characteristics of gold. The effect of nanoparticle loading on cell viability and cytotoxicity was analyzed using a LIVE/DEAD stain and an MTT assay. The ability of MSCs to differentiate into adipocytes and osteocytes after nanoparticle loading was also examined. In addition, nanoparticle loading and retention over time were assessed using inductively coupled plasma mass spectrometry (ICP-MS). Our results demonstrate that loading MSCs with gold nanotracers does not alter cell function and, based on the ICP-MS results, long-term imaging and tracking of MSCs is feasible. These findings strengthen the possibility of imaging MSCs in vivo, such as with optical or photoacoustic imaging, to better understand the participation and role of MSCs in neovascularization.
Lee H.,University of Texas at Austin
Optics Express | Year: 2010
A collimation method for misaligned optical systems is proposed. The method is based on selectively nullifying the main alignment-driven aberration components. This selective compensation is achieved by the optimal adjustment of chosen alignment parameters. It is shown that this optimal adjustment can be obtained by solving a linear matrix equation in the low-order alignment-driven terms of the primary field aberrations. A significant result of the adjustment is that it places the centers of the primary field aberrations, initially scattered over the field due to misalignment, at a desired common field location. This aberration concentering naturally results in recovery of image quality across the field of view. Error analyses and robustness tests show the method's feasibility in efficiently removing alignment-driven aberrations in the face of measurement and model uncertainties. The extension of the method to the collimation of a misaligned system with higher-order alignment-driven aberrations is also shown. © 2010 Optical Society of America.
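The computational core of such a procedure reduces to linear algebra: stack the sensitivities of the chosen low-order aberration terms to each alignment parameter into a matrix, then solve for the adjustment that nullifies the measured terms in a least-squares sense. The following is a hypothetical numerical illustration of that step only; the matrix entries, aberration labels, and parameter names below are invented for the example, not taken from the paper.

```python
import numpy as np

# Hypothetical sensitivity matrix: change in each low-order aberration
# coefficient per unit motion of each chosen alignment parameter.
A = np.array([[0.8, 0.1],    # coma-x per (tilt, decenter)
              [0.1, 0.9],    # coma-y per (tilt, decenter)
              [0.3, 0.2]])   # astigmatism per (tilt, decenter)
# Hypothetical measured alignment-driven aberration coefficients.
b = np.array([0.40, -0.27, 0.06])

# The compensating adjustment x solves A x = -b in least squares.
x, *_ = np.linalg.lstsq(A, -b, rcond=None)
residual = b + A @ x     # aberrations remaining after the adjustment
print(np.round(x, 3))
print(np.linalg.norm(residual) < np.linalg.norm(b))  # → True
```

With more aberration terms than adjustable parameters, the system is overdetermined and the residual is the part of the aberration field that the chosen parameters cannot reach; choosing which components to nullify is exactly the "selective compensation" the abstract describes.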
Weinberg S.,University of Texas at Austin
Physical Review Letters | Year: 2013
It is argued that exotic mesons consisting of two quarks and two antiquarks are not ruled out in quantum chromodynamics with a large number N of colors, as generally thought. Tetraquarks of one class are typically long-lived, with decay rates proportional to 1/N. © 2013 American Physical Society.
Weinberg S.,University of Texas at Austin
Physical Review Letters | Year: 2013
It is suggested that Goldstone bosons may be masquerading as fractional cosmic neutrinos, contributing about 0.39 to what is reported as the effective number of neutrino types in the era before recombination. The broken symmetry associated with these Goldstone bosons is further speculated to be the conservation of the particles of dark matter. © 2013 American Physical Society.
Matz M.V.,University of Texas at Austin
Physiological Reviews | Year: 2010
Green fluorescent protein (GFP) from the jellyfish Aequorea victoria and its homologs from diverse marine animals are widely used as universal genetically encoded fluorescent labels. Many laboratories have focused their efforts on identification and development of fluorescent proteins with novel characteristics and enhanced properties, resulting in a powerful toolkit for visualization of structural organization and dynamic processes in living cells and organisms. The diversity of currently available fluorescent proteins covers nearly the entire visible spectrum, providing numerous alternative possibilities for multicolor labeling and studies of protein interactions. Photoactivatable fluorescent proteins enable tracking of photolabeled molecules and cells in space and time and can also be used for super-resolution imaging. Genetically encoded sensors make it possible to monitor the activity of enzymes and the concentrations of various analytes. Fast-maturing fluorescent proteins, cell clocks, and timers further expand the options for real time studies in living tissues. Here we focus on the structure, evolution, and function of GFP-like proteins and their numerous applications for in vivo imaging, with particular attention to recent techniques. Copyright © 2010 the American Physiological Society.
Panse V.G.,ETH Zurich |
Johnson A.W.,University of Texas at Austin
Trends in Biochemical Sciences | Year: 2010
In eukaryotic cells, ribosomes are pre-assembled in the nucleus and exported to the cytoplasm where they undergo final maturation. This involves the release of trans-acting shuttling factors, transport factors, incorporation of the remaining ribosomal proteins, and final rRNA processing steps. Recent work, particularly on the large (60S) ribosomal subunit, has confirmed that the 60S subunit is exported from the nucleus in a functionally inactive state. Its arrival in the cytoplasm triggers events that render it translationally competent. Here we focus on these cytoplasmic maturation events and speculate why eukaryotic cells have evolved such an elaborate maturation pathway. © 2010.
Whitfield C.,University of Guelph |
Stephen Trent M.,University of Texas at Austin
Annual Review of Biochemistry | Year: 2014
Lipopolysaccharide molecules represent a unique family of glycolipids based on a highly conserved lipid moiety known as lipid A. These molecules are produced by most gram-negative bacteria, in which they play important roles in the integrity of the outer-membrane permeability barrier and participate extensively in host-pathogen interplay. Few bacteria contain lipopolysaccharide molecules composed only of lipid A. In most forms, lipid A is glycosylated by addition of the core oligosaccharide that, in some bacteria, provides an attachment site for a long-chain O-antigenic polysaccharide. The complexity of lipopolysaccharide structures is reflected in the processes used for their biosynthesis and export. Rapid growth and cell division depend on the bacterial cell's capacity to synthesize and export lipopolysaccharide efficiently and in large amounts. We review recent advances in those processes, emphasizing the reactions that are essential for viability. Copyright © 2014 by Annual Reviews.
Kwasinski A.,University of Texas at Austin
IEEE Transactions on Power Electronics | Year: 2012
In this paper, a switching strategy for multiple-input converters (MICs) is presented and analyzed. MICs have been identified to provide a cost-effective approach for energy harvesting in hybrid systems, and for power distribution in micro- and nanogrids. The basic principle of the proposed switching strategy is that the effective duty ratio of each switch is an integer multiple of a common duty ratio (CDR), the CDR being the duty ratio of a common switching function that is generated at a higher frequency by frequency division. The proposed strategy enables switching functions for MICs that have a greater number of input legs to be generated with relative ease. Another benefit of this scheme is that it allows an MIC's output voltage to be regulated by employing the CDR as the only control variable, irrespective of the number of input legs present. Essentially, the strategy transforms an MIC into an equivalent single-input single-output system for analysis, which simplifies controller design and implementation. Without loss of generality, this technique is demonstrated by analyzing a multiple-input buck-boost converter. A PI controller is shown to regulate the MIC's operating point. The analysis is verified by simulations and experiments. © 2012 IEEE.
Guo L.,University of Texas at Austin
Journal of Broadcasting and Electronic Media | Year: 2012
This article presents an innovative application of social network analysis to agenda setting research. It suggests that the network analysis approach enables researchers to map out the interrelationships among objects and attributes in both the media agenda and the public agenda. Further, by conducting statistical analysis, researchers are able to compare media agenda networks and public agenda networks in order to explore a third level of agenda setting effects. Concrete procedures for applying network analysis in agenda setting research are presented, and a set of hypotheses is suggested in this article. © 2012 Copyright Taylor and Francis Group, LLC.
Baldick R.,University of Texas at Austin
IEEE Systems Journal | Year: 2012
Many jurisdictions worldwide are greatly increasing the amount of wind production, with the expectation that increasing renewables will cost-effectively reduce greenhouse gas emissions. This paper discusses the interaction of increasing wind, transmission constraints, renewable credits, wind and demand correlation, intermittency, carbon prices, and electricity market prices using the particular example of the Electric Reliability Council of Texas market. An estimate is made of the cost of using wind to mitigate greenhouse gas emissions. © 2011 IEEE.
Rezzolla L.,Institute for Theoretical Physics |
Kumar P.,University of Texas at Austin
Astrophysical Journal | Year: 2015
The merger of a binary of neutron stars provides natural explanations for many of the features of short gamma-ray bursts (SGRBs), such as the generation of a hot torus orbiting a rapidly rotating black hole, which can then build a magnetic jet and provide the energy reservoir to launch a relativistic outflow. However, this scenario has problems explaining the recently discovered long-term and sustained X-ray emission associated with the afterglows of a subclass of SGRBs. We propose a new model that explains how an X-ray afterglow can be sustained by the product of the merger and how the X-ray emission is produced before the corresponding emission in the gamma-band, though it is observed to follow it. Overall, our paradigm combines in a novel manner a number of well-established features of the emission in SGRBs and results from simulations. Because it involves the propagation of an ultra-relativistic outflow and its interaction with a confining medium, the paradigm also highlights a unifying phenomenology between short and long GRBs. © 2015. The American Astronomical Society. All rights reserved.
Darnell M.Z.,University of Texas at Austin
Journal of Experimental Marine Biology and Ecology | Year: 2012
Color change can serve a number of functions, and can be a singular aperiodic event or a rhythmic process driven by responses to environmental cues or endogenous rhythms. Endogenous pigmentation rhythms have been identified in a number of taxa, with fiddler crabs being the subject of much of this research. Despite a long history of experimental studies of chromatophore-level rhythmicity in fiddler crabs, few studies have considered the entrainment cues for these rhythms or extended findings to whole-organism coloration, information important for understanding the biological properties of circadian systems and understanding the functional significance of these rhythms. This study examined the circadian pigmentation rhythm in the subtropical fiddler crab Uca panacea at both the cellular (melanophore) and organismal levels, including expression in artificial light/dark cycles and constant darkness, entrainment by light/dark and temperature cycles, and relationships between melanophore rhythms and the spectral reflectance of the carapace. On the melanophore level, crabs exhibited a circadian rhythm in pigment dispersion, with maximum dispersion occurring during the day and maximum concentration occurring during the night. This rhythm persisted under ambient or reversed light/dark cycles, with maximum pigment dispersion occurring during the light phase, or under constant darkness. Both light/dark and temperature cycles entrained the rhythm, although light/dark cycles resulted in greater phase shift. The circadian rhythm in melanin dispersion within melanophores is associated with a circadian rhythm in organismal coloration, with carapace reflectance low during the day and high at night. Because of the high absorption of UV radiation by melanin, the functional significance of this rhythm may be as a mechanism of UV-protection during the day when crabs are exposed to high levels of UV radiation while foraging on open sand flats of the intertidal zone. © 2012 Elsevier B.V.
Patricola C.M.,Cornell University |
Cook K.H.,University of Texas at Austin
Climate Dynamics | Year: 2010
A method for simulating future climate on regional space scales is developed and applied to northern Africa. Simulation with a regional model allows for the horizontal resolution needed to resolve the region's strong meridional gradients and the optimization of parameterizations and the land-surface model. The control simulation is constrained by reanalysis data, and realistically represents the present-day climate. Atmosphere-ocean general circulation model (AOGCM) output provides SST and lateral boundary condition anomalies for 2081-2100 under a business-as-usual emissions scenario, and the atmospheric CO2 concentration is increased to 757 ppmv. A nine-member ensemble of future climate projections is generated by using output from nine AOGCMs. The consistency of precipitation projections for the end of the twenty-first century is much greater for the regional model ensemble than among the AOGCMs. More than 77% of ensemble members produce the same-sign rainfall anomaly over much of northern Africa. For West Africa, the regional model projects wetter conditions in spring, but a mid-summer drought develops during June and July, and the heat stroke risk increases across the Sahel. Wetter conditions resume in late summer, and the likelihood of flooding increases. The regional model generally projects wetter conditions over eastern Central Africa in June and drying during August through September. Severe drought impacts parts of East Africa in late summer. Conditions become wetter in October, but the enhanced rainfall does not compensate for the summertime deficit. The risk of heat stroke increases over this region, although the threat is not projected to be as great as in the Sahel. © 2009 The Author(s).
Lambowitz A.M.,University of Texas at Austin
Cold Spring Harbor perspectives in biology | Year: 2011
Group II introns are mobile ribozymes that self-splice from precursor RNAs to yield excised intron lariat RNAs, which then invade new genomic DNA sites by reverse splicing. The introns encode a reverse transcriptase that stabilizes the catalytically active RNA structure for forward and reverse splicing, and afterwards converts the integrated intron RNA back into DNA. The characteristics of group II introns suggest that they or their close relatives were evolutionary ancestors of spliceosomal introns, the spliceosome, and retrotransposons in eukaryotes. Further, their ribozyme-based DNA integration mechanism enabled the development of group II introns into gene targeting vectors ("targetrons"), which have the unique feature of readily programmable DNA target specificity.
Fiori G.,University of Pisa |
Bonaccorso F.,Italian Institute of Technology |
Iannaccone G.,University of Pisa |
Palacios T.,Massachusetts Institute of Technology |
And 4 more authors.
Nature Nanotechnology | Year: 2014
The compelling demand for higher performance and lower power consumption in electronic systems is the main driving force of the electronics industry's quest for devices and/or architectures based on new materials. Here, we provide a review of electronic devices based on two-dimensional materials, outlining their potential as a technological option beyond scaled complementary metal-oxide-semiconductor switches. We focus on the performance limits and advantages of these materials and associated technologies, when exploited for both digital and analog applications, focusing on the main figures of merit needed to meet industry requirements. We also discuss the use of two-dimensional materials as an enabling factor for flexible electronics and provide our perspectives on future developments. © 2014 Macmillan Publishers Limited.
Harris K.M.,University of Texas at Austin
Cold Spring Harbor perspectives in biology | Year: 2012
The morphology and molecular composition of synapses provide the structural basis for synaptic function. This article reviews the electron microscopy of excitatory synapses on dendritic spines, using data from rodent hippocampus, cerebral cortex, and cerebellar cortex. Excitatory synapses have a prominent postsynaptic density, in contrast with inhibitory synapses, which have less dense presynaptic or postsynaptic specializations and are usually found on the cell body or proximal dendritic shaft. Immunogold labeling shows that the presynaptic active zone provides a scaffold for key molecules involved in the release of neurotransmitter, whereas the postsynaptic density contains ligand-gated ionic channels, other receptors, and a complex network of signaling molecules. Delineating the structure and molecular organization of these axospinous synapses represents a crucial step toward understanding the mechanisms that underlie synaptic transmission and the dynamic modulation of neurotransmission associated with short- and long-term synaptic plasticity.
Kulis B.,Ohio State University |
Grauman K.,University of Texas at Austin
IEEE Transactions on Pattern Analysis and Machine Intelligence | Year: 2012
Fast retrieval methods are critical for many large-scale and data-driven vision applications. Recent work has explored ways to embed high-dimensional features or complex distance functions into a low-dimensional Hamming space where items can be efficiently searched. However, existing methods do not apply for high-dimensional kernelized data when the underlying feature embedding for the kernel is unknown. We show how to generalize locality-sensitive hashing to accommodate arbitrary kernel functions, making it possible to preserve the algorithm's sublinear time similarity search guarantees for a wide class of useful similarity functions. Since a number of successful image-based kernels have unknown or incomputable embeddings, this is especially valuable for image retrieval tasks. We validate our technique on several data sets, and show that it enables accurate and fast performance for several vision problems, including example-based object classification, local feature matching, and content-based retrieval. © 2012 IEEE.
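The construction can be sketched compactly: each hash bit corresponds to a hyperplane in the implicit kernel feature space, built purely from kernel evaluations against a small sample of database items (roughly, a weight vector of the form K^{-1/2} e_S for a random subset S, in the spirit of the paper). The Python sketch below is a simplified illustration, not the authors' implementation, and omits details such as proper query-side centering.

```python
import numpy as np

def klsh_hash_functions(kernel, sample, nbits=8, t=5, seed=0):
    """Kernelized LSH sketch: hash hyperplanes in the implicit feature
    space are built from kernel values on a sample of p database items,
    so no explicit feature embedding is ever needed."""
    rng = np.random.default_rng(seed)
    p = len(sample)
    K = np.array([[kernel(a, b) for b in sample] for a in sample])
    # center the kernel matrix (zero-mean implicit features)
    H = np.eye(p) - np.ones((p, p)) / p
    Kc = H @ K @ H
    # K^{-1/2} via eigendecomposition, clipping non-positive eigenvalues
    vals, vecs = np.linalg.eigh(Kc)
    inv_sqrt = vecs @ np.diag(np.where(vals > 1e-8, vals, np.inf) ** -0.5) @ vecs.T
    # each hash bit: weights w = K^{-1/2} e_S for a random t-subset S
    W = np.zeros((nbits, p))
    for i in range(nbits):
        e = np.zeros(p)
        e[rng.choice(p, size=t, replace=False)] = 1.0
        W[i] = inv_sqrt @ e
    def hash_bits(x):
        kx = np.array([kernel(x, s) for s in sample])
        return (W @ kx > 0).astype(int)
    return hash_bits

# demo with a linear kernel on random vectors
rng = np.random.default_rng(1)
data = rng.normal(size=(20, 5))
linear = lambda a, b: float(a @ b)
h = klsh_hash_functions(linear, list(data), nbits=8, t=5)
q = rng.normal(size=5)
print(h(q))   # 8 hash bits; identical for q and 2*q under a linear kernel
```

Because only `kernel(·, ·)` is called, the same code works with a χ² or pyramid-match kernel whose feature embedding is unknown, which is the setting the paper targets.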
Chen W.,University of Texas at Austin
Social Science Computer Review | Year: 2013
Drawing on nationally representative survey data, this article examines the implications of Internet use and online communication for strong and weak ties in Americans' social networks. In line with the existing literature, this research shows that frequent Internet use and online communication are associated with a larger core discussion network and a more extensive position-generated network. More importantly, this research provides a finer-tuned analysis by disaggregating the overall network into strong and weak ties. First, Internet use, but not online communication, is positively related to weak-tie-based network extensity in the position-generated networks. Second, Internet use and online communication are positively related to the number and the proportion of strong ties in Americans' core discussion networks. These results help reconcile some of the conflicting findings and interpretations based on different network measures in the existing literature. © The Author(s) 2013.
Goff J.A.,University of Texas at Austin
Journal of Geophysical Research: Solid Earth | Year: 2010
Abyssal hills, which are pervasive landforms on the seafloor of the Earth's oceans, represent a potential tectonic record of the history of mid-ocean ridge spreading. However, the most detailed global maps of the seafloor, derived from the satellite altimetry-based gravity field, cannot be used to deterministically characterize such small-scale (<10 km) morphology. Nevertheless, the small-scale variability of the gravity field can be related to the statistical properties of abyssal hill morphology using the upward continuation formulation. In this paper, I construct a global prediction of abyssal hill root-mean-square (rms) heights from the small-scale variability of the altimetric gravity field. The abyssal hill-related component of the gravity field is derived by first masking distinct features, such as seamounts, mid-ocean ridges, and continental margins, and then applying a newly designed adaptive directional filter algorithm to remove fracture zone/discontinuity fabric. A noise field is derived empirically by correlating the rms variability of the small-scale gravity field to the altimetric noise field in regions of very low relief, and the noise variance is subtracted from the small-scale gravity variance. Suites of synthetically derived, abyssal hill formed gravity fields are generated as a function of water depth, basement rms heights, and sediment thickness and used to predict abyssal hill seafloor rms heights from corrected small-scale gravity rms height. The resulting global prediction of abyssal hill rms heights is validated qualitatively by comparing against expected variations in abyssal hill morphology and quantitatively by comparing against actual measurements of rms heights. Although there is scatter, the prediction appears unbiased. Copyright 2010 by the American Geophysical Union.
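The physical link between seafloor relief and the sea-surface gravity field exploited here is upward continuation: in the wavenumber domain the field is attenuated by exp(-2π|k|d) over a continuation distance d, so short wavelengths are suppressed most, and the small-scale gravity variance must therefore be interpreted as a function of water depth. A generic 1-D Python sketch of the operator follows; the numbers are illustrative, not from the paper.

```python
import numpy as np

def upward_continue(field, dx, height):
    """Upward-continue a profile by `height` using the wavenumber-domain
    attenuation exp(-2*pi*|k|*height). Short wavelengths decay fastest,
    which is why abyssal hill statistics inferred from sea-surface
    gravity must account for water depth."""
    k = np.fft.fftfreq(len(field), d=dx)          # cycles per km
    atten = np.exp(-2 * np.pi * np.abs(k) * height)
    return np.fft.ifft(np.fft.fft(field) * atten).real

# an 8 km wavelength undulation continued up through 2.5 km of water
dx, lam, depth = 0.5, 8.0, 2.5                    # km
x = np.arange(256) * dx
g = np.cos(2 * np.pi * x / lam)
g_up = upward_continue(g, dx, depth)
print(round(g_up.max() / g.max(), 2))             # → 0.14 = exp(-2*pi*depth/lam)
```

Inverting this attenuation is what lets the paper turn the corrected small-scale gravity rms into a prediction of abyssal hill rms height, once sediment thickness and noise are accounted for.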
Chen W.,University of Texas at Austin
Information Society | Year: 2013
The existing literature is oriented toward examining how Internet access and use may affect social capital. The role of social capital in narrowing the digital divides has been frequently mentioned, but few studies have empirically examined how various types of social capital may affect people's access to and use of the Internet. Drawing on a two-wave national panel data set, this article aims to fill this gap. Results demonstrate that social capital facilitates Internet access and use. In particular, resource-rich bonding social capital helps overcome the digital divides in access, general use, and online communication. Before the Internet can revitalize social capital, the right social capital must be in place to close the digital divides. Highlighting the relationship between social connectivity and digital connectivity, the findings have important implications for policymakers and practitioners. © 2013 Copyright Taylor and Francis Group, LLC.
Navarrete C.C.,University of Texas at Austin
Computers and Education | Year: 2013
In a case study of a middle-school educational game creation course in a south central US state, the students' creative thinking process is investigated in order to understand their perceptions of the digital design and programming involved in the game creation learning approach. Twelve students with three different levels of game design experience were interviewed: students in grades 6, 7, and 8, with 1, 2, and 3 years of game design experience, respectively. Findings suggest that students found the learning approach satisfying and engaging, yet technologically challenging. The students experienced positive opportunities to engage the creative thinking process, synthesizing information on social issues to construct their understanding through the creation of interactive, educational digital games. Findings suggest that the creative thinking process in a student-centered game creation learning approach may provide learners with a rich and enjoyable learning experience through authentic technology use, as well as provide for deep, insightful learning. © 2013 Elsevier Ltd. All rights reserved.
Shaikh S.R.,East Carolina University |
Jolly C.A.,University of Texas at Austin |
Chapkin R.S.,Texas A&M University
Molecular Aspects of Medicine | Year: 2012
Fish oil, enriched in bioactive n-3 polyunsaturated fatty acids (PUFA), has therapeutic value for the treatment of inflammation-associated disorders. The effects of n-3 PUFAs are pleiotropic and complex; hence, an understanding of their cellular targets and molecular mechanisms of action remains incomplete. Here we focus on recent data indicating n-3 PUFAs exert immunosuppressive effects on the function of effector and regulatory CD4 + T cells. In addition, we also present emerging evidence that n-3 PUFAs have immunomodulatory effects on B cells. We then focus on one multifaceted mechanism of n-3 PUFAs, which is the alteration of the biophysical and biochemical organization of the plasma membrane. This mechanism is central for downstream signaling, eicosanoid production, transcriptional regulation and cytokine secretion. We highlight recent work demonstrating n-3 PUFA acyl chains in the plasma membrane target the lateral organization of membrane signaling assemblies (i.e. lipid rafts or signaling networks) and de novo phospholipid biosynthesis. We conclude by proposing new functional and mechanistic questions in this area of research that will aid in the development of fish oil as adjuvant therapy for treating unresolved chronic inflammation. © 2012 Elsevier Inc. All rights reserved.
Zhang Y.,University of Texas at Austin
Library and Information Science Research | Year: 2013
This study investigated the dynamic process by which people construct mental models of MedlinePlus, a medical information web space, during their interaction with the system. Thirty-eight people participated in the study. Their mental models of MedlinePlus were measured by a concept listing protocol and an interview method at three time points: after they freely explored the system for 5 min (T1), after the first search session (T2), and after the second search session (T3). The analysis revealed that participants constructed their mental models of MedlinePlus based on the schemas that they have of information-rich web spaces. The model construction process involved changes and development in three parallel dimensions: cognition, emotion, and behavior. The development is enabled and coordinated by three mental activities: assimilating new concepts, modifying existing concepts, and phasing out previously perceived concepts. Furthermore, mental model construction is not only a function of users' internal cognition but is also affected by external cognitive structures, including the system, system feedback, and tasks. Mental model construction is also a process distributed over time. The results suggested that mental models could serve as a framework for guiding user research and system design. The dynamic nature of the mental models indicated that an iterative approach needs to be adopted. © 2012 Elsevier Inc.
Sun N.,University of Texas at Austin
IEEE Transactions on Circuits and Systems I: Regular Papers | Year: 2012
This paper presents a digital background calibration technique that intentionally exploits process variation and noise in comparators to correct conversion errors caused by interstage gain error, gain nonlinearity, and capacitor mismatch in pipelined ADCs. The merits of this technique lie in its design simplicity, fast convergence speed, and low power. Simulation results are presented for a 12-bit pipelined ADC, similar to those described by Murmann and Boser and by Keane, using low-gain amplifiers. With calibration, the SNDR and SFDR are improved from 47 and 49 dB to 72 and 92 dB, respectively. The number of conversions required for convergence is 10^6, which is about 4 times faster than that of Keane and 40 times faster than that of Murmann and Boser. © 2004-2012 IEEE.
Castelli D.M.,University of Texas at Austin
Monographs of the Society for Research in Child Development | Year: 2014
Research on physical activity, physical fitness, and academic performance is reviewed from a historical perspective, providing an overview of existing publications focused on children and adolescents. Using rigorous inclusion criteria, the studies were quantified and qualified using both meta-analytic and descriptive analyses, first by time period and then as an overall summary, with particular focus on secular trends and future directions. This review is timely because the body of literature is growing exponentially, resulting in the emergence of new terminology and methodologies and the identification of mediating and moderating factors. Implications and recommendations for future research are summarized. © 2014 The Society for Research in Child Development, Inc.
Andrews J.G.,University of Texas at Austin
IEEE Communications Magazine | Year: 2013
Imagine a world with more base stations than cell phones: this is where cellular technology is headed in 10-20 years. This mega-trend requires many fundamental differences in visualizing, modeling, analyzing, simulating, and designing cellular networks vs. the current textbook approach. In this article, the most important shifts are distilled down to seven key factors, with the implications described and new models and techniques proposed for some, while others are ripe areas for future exploration. © 1979-2012 IEEE.
Waters B.,University of Texas at Austin
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2011
We present a new methodology for realizing Ciphertext-Policy Attribute-Based Encryption (CP-ABE) under concrete and noninteractive cryptographic assumptions in the standard model. Our solutions allow any encryptor to specify access control in terms of any access formula over the attributes in the system. In our most efficient system, ciphertext size and encryption and decryption times scale linearly with the complexity of the access formula. The only previous work to achieve these parameters was limited to a proof in the generic group model. We present three constructions within our framework. Our first system is proven selectively secure under an assumption that we call the decisional Parallel Bilinear Diffie-Hellman Exponent (PBDHE) assumption, which can be viewed as a generalization of the BDHE assumption. Our next two constructions provide performance tradeoffs to achieve provable security under the (weaker) decisional Bilinear Diffie-Hellman Exponent and decisional Bilinear Diffie-Hellman assumptions, respectively. © 2011 International Association for Cryptologic Research.
Meschiari S.,University of Texas at Austin
Astrophysical Journal Letters | Year: 2012
The existence of planets born in environments highly perturbed by a stellar companion represents a major challenge to the paradigm of planet formation. In numerical simulations, the presence of a close binary companion stirs up the relative velocity between planetesimals, which is fundamental in determining the balance between accretion and erosion. However, the recent discovery of circumbinary planets by Kepler establishes that planet formation in binary systems is clearly viable. We perform N-body simulations of planetesimals embedded in a protoplanetary disk, where planetesimal phasing is frustrated by the presence of stochastic torques, modeling the expected perturbations of turbulence driven by the magnetorotational instability. We examine perturbation amplitudes relevant to dead zones in the midplane (conducive to planet formation in single stars), and find that planetesimal accretion can be inhibited even in the outer disk (4-10 AU) far from the central binary, a location previously thought to be a plausible starting point for the formation of circumbinary planets. © 2012. The American Astronomical Society. All rights reserved.
Abanin D.A.,Princeton University |
Pesin D.A.,University of Texas at Austin
Physical Review Letters | Year: 2011
We study collective behavior of magnetic adatoms randomly distributed on the surface of a topological insulator. Interactions of an ensemble of adatoms are frustrated, as the RKKY-type interactions of two adatom spins depend on the directions of spins relative to the vector connecting them. We show that at low temperatures the frustrated RKKY interactions give rise to two phases: an ordered ferromagnetic phase with spins pointing perpendicular to the surface, and a disordered spin-glass-like phase. The two phases are separated by a quantum phase transition driven by the magnetic exchange anisotropy. The ordered phase breaks time-reversal symmetry spontaneously, driving the surface states into a gapped state, which exhibits an anomalous quantum Hall effect and provides a realization of the parity anomaly. We find that the magnetic ordering is suppressed by potential scattering. © 2011 American Physical Society.
Dodson-Robinson S.E.,University of Texas at Austin
Astrophysical Journal | Year: 2012
Previous studies of the interior structure of transiting exoplanets have shown that the heavy-element content of gas giants increases with host star metallicity. Since metal-poor planets are less dense and have larger radii than metal-rich planets of the same mass, one might expect that metal-poor stars host a higher proportion of gas giants with large radii than metal-rich stars. Here I present evidence for a negative correlation at the 2.3σ level between eclipse depth and stellar metallicity in the Kepler gas giant candidates. Based on Kendall's τ statistics, the probability that eclipse depth depends on stellar metallicity is 0.981. The correlation is consistent with planets orbiting low-metallicity stars being, on average, larger in comparison with their host stars than planets orbiting metal-rich stars. Furthermore, since metal-rich stars have smaller radii than metal-poor stars of the same mass and age, a uniform population of planets should show a rise in median eclipse depth with [M/H]. The fact that I find the opposite trend indicates that substantial changes in the gas giant interior structure must accompany increasing [M/H]. I investigate whether the known scarcity of giant planets orbiting low-mass stars could masquerade as an eclipse depth-metallicity correlation, given the degeneracy between metallicity and temperature for cool stars in the Kepler Input Catalog. While the eclipse depth-metallicity correlation is not yet on firm statistical footing and will require spectroscopic [Fe/H] measurements for validation, it is an intriguing window into how the interior structure of planets and even the planet formation mechanism may be changing with Galactic chemical evolution. © 2012. The American Astronomical Society. All rights reserved.
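The Kendall's τ rank test underlying the quoted 0.981 dependence probability can be sketched in a few lines; the data below are synthetic stand-ins (not the Kepler sample), with a deliberately injected weak negative trend:

```python
import math
import random

def kendall_tau(x, y):
    """Kendall's tau-a with a large-sample z statistic.

    Minimal sketch: no tie correction, O(n^2) pair counting.
    """
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    tau = (concordant - discordant) / (n * (n - 1) / 2)
    # Normal approximation under the null hypothesis of no association
    z = 3 * tau * math.sqrt(n * (n - 1)) / math.sqrt(2 * (2 * n + 5))
    return tau, z

# Synthetic example: eclipse depth declining weakly with [M/H], plus scatter
random.seed(1)
mh = [random.uniform(-0.5, 0.5) for _ in range(120)]
depth = [0.010 - 0.004 * m + random.gauss(0.0, 0.002) for m in mh]
tau, z = kendall_tau(mh, depth)   # expect tau < 0 with |z| well above 2
```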
Liu C.,University of Texas at Austin
Journal of the Acoustical Society of America | Year: 2013
Just noticeable differences in tone pitch contour discrimination were measured for young English-native and Mandarin Chinese-native listeners to examine categorical features of tone perception for the two groups. Three types of stimuli were used: a Mandarin Chinese vowel, an English vowel, and tonal glides. Level, rising, and falling tones within or across tone boundaries served as the standard stimuli to measure thresholds of tone pitch discrimination. Performance was equivalent between English- and Chinese-native listeners for level tones, but differed significantly for rising and falling tones, regardless of the type of stimuli. English listeners showed significantly lower thresholds at the offset of F0 shifts than Chinese listeners, while Chinese listeners discriminated tone pitch changes at the onset with significantly lower thresholds than their English peers. These psychophysical results, combined with tone perception reported in other studies, indicated that Mandarin-native listeners perceived lexical tones in a categorical manner, based on their lexical tone experience, whereas English-native listeners perceived tones on a psychophysical basis. © 2013 Acoustical Society of America.
Zhang Y.,University of Texas at Austin
Journal of the American Society for Information Science and Technology | Year: 2013
Designing effective consumer health information systems requires deep understanding of the context in which the systems are being used. However, due to the elusive nature of the concept of context, few studies have made it a focus of examination. To fill this gap, we studied the context of consumer health information searching by analyzing questions posted on a social question and answer site: Yahoo! Answers. Based on the analysis, a model of context was developed. The model consists of 5 layers: demographic, cognitive, affective, situational, and social and environmental. The demographic layer contains demographic factors of the person of concern; the cognitive layer contains factors related to the current search task (specifically, topics of interest and information goals) and users' cognitive ability to articulate their needs. The affective layer contains different affective motivations and intentions behind the search. The situational layer contains users' perceptions of the current health condition and where the person is in the illness trajectory. The social and environmental layer contains users' social roles, social norms, and various information channels. Several novel system functions, including faceted search and layered presentation of results, are proposed based on the model to help contextualize and improve users' interactions with health information systems. © 2013 ASIS&T.
Frebel A.,Massachusetts Institute of Technology |
Bromm V.,University of Texas at Austin
Astrophysical Journal | Year: 2012
We utilize metal-poor stars in the local, ultra-faint dwarf galaxies (UFDs; L_tot ≤ 10^5 L_⊙) to empirically constrain the formation process of the first galaxies. Since UFDs have much simpler star formation histories than the halo of the Milky Way, their stellar populations should preserve the fossil record of the first supernova (SN) explosions in their long-lived, low-mass stars. Guided by recent hydrodynamical simulations of first galaxy formation, we develop a set of stellar abundance signatures that characterize the nucleosynthetic history of such an early system if it was observed in the present-day universe. Specifically, we argue that the first galaxies are the product of chemical "one-shot" events, where only one (long-lived) stellar generation forms after the first, Population III, SN explosions. Our abundance criteria thus constrain the strength of negative feedback effects inside the first galaxies. We compare the stellar content of UFDs with these one-shot criteria. Several systems (Ursa Major II, and also Coma Berenices, Bootes I, Leo IV, Segue 1) largely fulfill the requirements, indicating that their high-redshift predecessors did experience strong feedback effects that shut off star formation. We term the study of the entire stellar population of a dwarf galaxy for the purpose of inferring details about the nature and origin of the first galaxies "dwarf galaxy archaeology." This will provide clues to the connection of the first galaxies, the surviving, metal-poor dwarf galaxies, and the building blocks of the Milky Way. © 2012. The American Astronomical Society. All rights reserved.
Waring B.G.,University of Texas at Austin
Soil Biology and Biochemistry | Year: 2013
The theory of ecological stoichiometry predicts that the microbial biomass should regulate production of extracellular enzymes to target the resource in shortest supply. Therefore, microbial communities on decomposing leaf litter should optimize allocation to C-, N-, and P-degrading enzymes according to the stoichiometry of the foliar substrate. Because extracellular enzymes are the proximate agents of leaf litter decay, shifts in microbial enzyme allocation may influence overall rates of litter mass loss. To test these hypotheses, I measured fungal growth and the activities of acid phosphatase (AP), beta-glucosidase (BG), cellobiohydrolase (CB) and glycine aminopeptidase (GAP) on decaying leaf litter of five plant species over the course of a 394-day decomposition experiment. I used regression and correlation analyses to link interspecific variation in mass loss rates with enzyme activities and foliar nutrient content. Enzymes explained 35% of the variance in foliar decay rates across plant species, yet fungal abundance and enzyme activities were unrelated to foliar concentrations of N, P, K, or 9 other nutrients. Furthermore, relative activities of C-, N-, and P-acquiring enzymes did not vary across litter types despite wide variance in foliar C:N and C:P ratios. This weak relationship between litter stoichiometry and decomposition rates suggests that nutrients are not the primary control on microbial growth or enzyme allocation in this tropical forest. However, substantial interspecific differences in fungal abundance and enzyme activities imply that differences in litter composition strongly influence microbial communities and the ecosystem processes they mediate. © 2013 Elsevier Ltd.
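The "explained 35% of the variance" statement corresponds to an R² from regression. A minimal sketch of that calculation on hypothetical numbers (the variables and values are invented for illustration, not taken from the study):

```python
import numpy as np

def r_squared(x, y):
    """Fraction of variance in y explained by an ordinary least-squares line on x."""
    slope, intercept = np.polyfit(x, y, 1)
    resid = np.asarray(y) - (slope * np.asarray(x) + intercept)
    return 1.0 - np.var(resid) / np.var(y)

# Hypothetical data: litter decay rate vs. a composite enzyme-activity
# index (e.g. summed BG + CB + AP activity), with substantial scatter.
rng = np.random.default_rng(7)
enzyme_index = rng.uniform(0.0, 1.0, 25)
decay_rate = 0.2 + 0.3 * enzyme_index + rng.normal(0.0, 0.05, 25)
r2 = r_squared(enzyme_index, decay_rate)   # a partial, not complete, fit
```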
Allen D.T.,University of Texas at Austin
Annual Review of Chemical and Biomolecular Engineering | Year: 2014
The US Energy Information Administration projects that hydraulic fracturing of shale formations will become a dominant source of domestic natural gas supply over the next several decades, transforming the energy landscape in the United States. However, the environmental impacts associated with fracking for shale gas have made it controversial. This review examines emissions and impacts of air pollutants associated with shale gas production and use. Emissions and impacts of greenhouse gases, photochemically active air pollutants, and toxic air pollutants are described. In addition to the direct atmospheric impacts of expanded natural gas production, indirect effects are also described. Widespread availability of shale gas can drive down natural gas prices, which, in turn, can impact the use patterns for natural gas. Natural gas production and use in electricity generation are used as a case study for examining these indirect consequences of expanded natural gas availability. © 2014 by Annual Reviews.
Goodenough J.B.,University of Texas at Austin
Chemistry of Materials | Year: 2014
Engineering transition-metal oxides depends on understanding a few general concepts. Three of these are discussed: (1) orbital mixing and the roles of cation-d/O-2p covalent bonding as distinct from on-site cation-orbital hybridization; (2) cooperativity in ordering (a) localized orbitals to remove an orbital degeneracy, (b) ferroic atomic displacements, and (c) bond lengths in a charge-density wave; and (3) cation-site expansion at the crossover from itinerant to localized electron behavior. The latter can stabilize a first-order transition to a ferromagnetic metallic phase on the approach to crossover from the itinerant-electron side or, in a single-valent compound, an intermediate charge-density-wave phase on the approach to crossover from either the localized- or itinerant-electron side. In a mixed-valent compound, a two-phase segregation at a first-order crossover may be static or mobile, and a mobile second phase may become ordered at low temperature to stabilize high-temperature superconductivity. © 2013 American Chemical Society.
Stanton J.F.,University of Texas at Austin
Journal of Chemical Physics | Year: 2010
The vibronic coupling model of Köppel, Domcke, and Cederbaum in one dimension is introduced as a means to estimate the effects of electronic nonadiabaticity on the vibrational energy levels of molecules that exhibit vibronic coupling. For the BNB molecule, the nonadiabatic contribution to the nominal fundamental vibrational energy of the antisymmetric stretching mode is approximately -80 cm⁻¹. The surprisingly large effect for this mode, which corresponds to an adiabatic potential that is essentially flat near the minimum due to the vibronic interaction, is contrasted with another model system that also exhibits a flat potential (precisely, a vanishing quadratic force constant) but has a significantly larger gap between interacting electronic states. For the latter case, the nonadiabatic contribution to the level energies is about two orders of magnitude smaller even though the effect on the potential is qualitatively identical. A simple analysis shows that significant nonadiabatic corrections to energy levels should occur only when the affected vibrational frequency is large enough to be of comparable magnitude to the energy gap involved in the coupling. The results provide evidence that nonadiabatic corrections should be given as much weight as issues such as high-level electron correlation, relativistic corrections, etc., in quantum chemical calculations of energy levels for radicals with close-lying and strongly coupled electronic states, even in cases where conical intersections are not obviously involved. The same can be said for high-accuracy thermochemical studies, as the zero-point vibrational energy of the BNB example contains a nonadiabatic contribution of approximately -70 cm⁻¹ (-0.9 kJ mol⁻¹). © 2010 American Institute of Physics.
Fenn T.D.,Stanford University |
Schnieders M.J.,University of Texas at Austin
Acta Crystallographica Section D: Biological Crystallography | Year: 2011
In the past, weighting between the sum of chemical and data-based targets in macromolecular crystallographic refinement was based on comparing the gradients or Hessian diagonal terms of the two potential functions. Here, limitations of this scheme are demonstrated, especially in the context of a maximum-likelihood target that is inherently weighted by the model and data errors. In fact, the congruence between the maximum-likelihood target and a chemical potential based on polarizable atomic multipole electrostatics evaluated with Ewald summation has opened the door to a transferable static weight. An optimal static weight is derived from first principles and is demonstrated to be transferable across a broad range of data resolutions in the context of a recent implementation of X-ray crystallographic refinement using the polarizable AMOEBA force field, and it is shown that the resulting models are balanced with respect to optimizing both R_free and MolProbity scores. Conversely, the classical automatic weighting scheme is shown to lead to underfitting or overfitting of the data and poor model geometry. The benefits of this approach for low-resolution diffraction data, where the need for prior chemical information is of particular importance, are also highlighted. It is demonstrated that this method is transferable between low- and high-resolution maximum-likelihood-based crystallographic refinement, which proves for the first time that resolution-dependent parameterization of either the weight or the chemical potential is unnecessary. © 2011 International Union of Crystallography. Printed in Singapore - all rights reserved.
Legare C.H.,University of Texas at Austin
Child Development Perspectives | Year: 2014
Both explanation and exploration can foster causal learning, yet the mechanisms underlying these effects are largely unknown, especially in early childhood. In this article, I provide an overview of research on the relation among explanation, exploration, and the development of scientific reasoning. I propose that explaining and exploring operate in tandem as hypothesis-generating and hypothesis-testing mechanisms. I review evidence that supports this claim by demonstrating that inconsistency with prior knowledge selectively motivates children to construct explanations, guides discovery-oriented behavior, and drives the early-developing capacity to reason scientifically. I conclude with a discussion of the educational applications of research on the development of children's scientific reasoning in informal learning settings. © 2014 The Society for Research in Child Development.
Stacy A.,NASA |
Bromm V.,University of Texas at Austin
Monthly Notices of the Royal Astronomical Society | Year: 2013
We perform a cosmological simulation in order to model the growth and evolution of Population III (Pop III) stellar systems in a range of host minihalo environments. A Pop III multiple system forms in each of the 10 minihaloes, and the overall mass function is top-heavy compared to the currently observed initial mass function in the Milky Way. Using a sink particle to represent each growing protostar, we examine the binary characteristics of the multiple systems, resolving orbits on scales as small as 20 au. We find a binary fraction of ~35 per cent, with semi-major axes as large as 3000 au. The distribution of orbital periods is slightly peaked at ≲ 900 yr, while the distribution of mass ratios is relatively flat. Of all sink particles formed within the 10 minihaloes, ~50 per cent are lost to mergers with larger sinks, and ~50 per cent of the remaining sinks are ejected from their star-forming discs. The large binary fraction may have important implications for Pop III evolution and nucleosynthesis, as well as the final fate of the first stars. © 2013 The Authors. Published by Oxford University Press on behalf of the Royal Astronomical Society.
Kwon S.J.,Konkuk University |
Bard A.J.,University of Texas at Austin
Journal of the American Chemical Society | Year: 2012
This study demonstrates a highly sensitive sensing scheme for the detection of low concentrations of DNA, in principle down to the single biomolecule level. The previously developed technique of electrochemical current amplification for detection of single nanoparticle (NP) collisions at an ultramicroelectrode (UME) has been employed to determine DNA. The Pt NP/Au UME/hydrazine oxidation reaction was employed, and individual NP collision events were monitored. The Pt NP was modified with a 20-base oligonucleotide with a C6 spacer thiol (detection probe), and the Au UME was modified with a 16-base oligonucleotide with a C6 spacer thiol (capture probe). The presence of a target oligonucleotide (31 bases) that hybridized with both capture and detection probes brought a Pt NP to the electrode surface, where the resulting electrochemical oxidation of hydrazine produced a current response. © 2012 American Chemical Society.
Jha S.,University of Texas at Austin |
Kremen C.,University of California at Berkeley
Molecular Ecology | Year: 2013
Potential declines in native pollinator communities and increased reliance on pollinator-dependent crops have raised concerns about native pollinator conservation and dispersal across human-altered landscapes. Bumble bees are one of the most effective native pollinators and are often the first to be extirpated in human-altered habitats, yet little is known about how bumble bees move across fine spatial scales and what landscapes promote or limit their gene flow. In this study, we examine regional genetic differentiation and fine-scale relatedness patterns of the yellow-faced bumble bee, Bombus vosnesenskii, to investigate how current and historic habitat composition impact gene flow. We conducted our study across a landscape mosaic of natural, agricultural and urban/suburban habitats, and we show that B. vosnesenskii exhibits low but significant levels of differentiation across the study system (FST = 0.019, Dest = 0.049). Most importantly, we reveal significant relationships between pairwise FST and resistance models created from contemporary land use maps. Specifically, B. vosnesenskii gene flow is most limited by commercial, industrial and transportation-related impervious cover. Finally, our fine-scale analysis reveals significant but declining relatedness between individuals at the 1-9 km spatial scale, most likely due to local queen dispersal. Overall, our results indicate that B. vosnesenskii exhibits considerable local dispersal and that regional gene flow is significantly limited by impervious cover associated with urbanization. © 2013 Blackwell Publishing Ltd.
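The "relationships between pairwise FST and resistance models" described above are commonly assessed with Mantel-style matrix permutation tests. A minimal numpy sketch on synthetic matrices (illustrative only; the study's actual data and model-selection procedure are not reproduced here):

```python
import numpy as np

def mantel(d1, d2, n_perm=999, seed=0):
    """Simple Mantel test: Pearson correlation between two symmetric
    distance matrices, with significance assessed by matrix permutation."""
    rng = np.random.default_rng(seed)
    iu = np.triu_indices(d1.shape[0], k=1)   # off-diagonal population pairs
    r_obs = np.corrcoef(d1[iu], d2[iu])[0, 1]
    hits = 0
    for _ in range(n_perm):
        p = rng.permutation(d1.shape[0])     # relabel populations
        if np.corrcoef(d1[iu], d2[p][:, p][iu])[0, 1] >= r_obs:
            hits += 1
    return r_obs, (hits + 1) / (n_perm + 1)

# Synthetic example: 12 sites whose pairwise "FST" tracks a
# landscape-resistance distance plus symmetric noise.
rng = np.random.default_rng(3)
pts = rng.uniform(0, 10, (12, 2))
resist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
e = rng.normal(0, 0.001, resist.shape)
fst = 0.002 * resist + np.abs(e + e.T)
np.fill_diagonal(fst, 0.0)
r, p = mantel(fst, resist)   # expect a strong positive r with small p
```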
Milliken K.,University of Texas at Austin
Journal of Sedimentary Research | Year: 2014
A tripartite compositional classification is proposed for sediments and sedimentary rocks whose grain assemblages contain greater than 50 percent by weight or volume of particles smaller than 62.5 μm (4 φ). Tarl (terrigenous-argillaceous) contains a grain assemblage dominated by more than 75 percent of particles of extrabasinal derivation, including grains derived from continental weathering as well as volcanogenic debris. Carl (calcareous-argillaceous) contains less than 75 percent of particles of extrabasinal derivation and, among its intrabasinal grains, contains a preponderance of biogenic carbonate particles, including carbonate aggregates. Sarl (siliceous-argillaceous) contains less than 75 percent of particles of extrabasinal derivation and contains a preponderance of biogenic siliceous particles over carbonate grains. These three classes of fine-grained particulate sediments and rocks effectively separate materials that have distinct depositional settings and systematic contrasts in organic-matter content and minor grain types. In the subsurface, the grain assemblages that define these classes follow contrasting and predictable diagenetic pathways with significant implications for the evolution of bulk rock properties; assigning a fine-grained rock to one of these classes is thus an important first step in predicting its economic and engineering qualities. For purposes of description, these three class names can be joined to modifier terms denoting rock texture, more precise compositional divisions, specific grain types of notable importance, and diagenetic features. Copyright © 2014, SEPM (Society for Sedimentary Geology).
Zhang Y.,University of Texas at Austin
Journal of the American Society for Information Science and Technology | Year: 2014
Searches for specific factual health information constitute a significant part of consumer health information requests, but little is known about how users search for such information. This study attempts to fill this gap by observing users' behavior while using MedlinePlus to search for specific health information. Nineteen students participated in the study, and each performed 12 specific tasks. During the search process, they submitted short queries or complete questions, and they examined less than 1 result per search. Participants rarely reformulated queries; when they did, they tended to make a query more specific or more general, or iterate in different ways. Participants also browsed, primarily relying on the alphabetical list and the anatomical classification, to navigate to specific health topics. Participants overall had a positive experience with MedlinePlus, and the experience was significantly correlated with task difficulty and participants' spatial abilities. The results suggest that, to better support specific item search in the health domain, systems could provide a more "natural" interface to encourage users to ask questions; effective conceptual hierarchies could be implemented to help users reformulate queries; and the search results page should be reconceptualized as a place for accessing answers rather than documents. Moreover, multiple schemas should be provided to help users navigate to a health topic. The results also suggest that users' experience with information systems in general and health-related systems in particular should be evaluated in relation to contextual factors, such as task features and individual differences. © 2013 ASIS&T.
Allen D.T.,University of Texas at Austin
Current Opinion in Chemical Engineering | Year: 2014
Methane emissions from the natural gas supply chain are a key factor in determining the greenhouse gas footprint of natural gas production and use. Recent estimates of these emissions have varied widely, because of the large population of sources, because of different measurement and estimation approaches, and because of extreme values of emission rates from individual sources that are much larger than population average values of emission rates from sources in the same category (a 'fat-tail' distribution). Reconciling differences between ambient methane concentration measurements (top-down methods) and direct measurement of emissions from individual sources (bottom-up methods) is critical to understanding methane emissions from the natural gas supply chain. A combination of top-down and bottom-up approaches is recommended. © 2014 Published by Elsevier Ltd.
Mirowsky J.,University of Texas at Austin
The journals of gerontology. Series B, Psychological sciences and social sciences | Year: 2011
Upward trends in IQ, education, and mental work suggest that cognitive function among seniors should be rising strongly across cohorts. There is little sign of such improvement in recent decades, and some analyses find poorer function in the newer cohorts. This essay explores possible explanations of the anomaly. Major long-term trends that might increase cognitive impairment are reviewed, and their implications are considered. Physical activity is declining, food is increasingly manufactured, body fat is increasing, diabetes and metabolic syndrome are on the rise, the number of prescription drugs per person is increasing, and the proportion of the population either old or obese is growing. Technological and economic development may lower the cognitive function needed for survival. They also lower physical activity in daily life. Sedentary work, transportation, and leisure undermine the aerobic and metabolic fitness required for the brain to perform well. Some prescription drugs impair cognitive function, and others do so when taken for many years or in combination with others. The growing fraction of the population that is either old or obese may further lower physical activity norms and requirements and substitute medical intervention for health, accelerating a trend toward cognitive impairment.
Makarov D.E.,University of Texas at Austin
Journal of Chemical Physics | Year: 2010
There has been considerable effort to understand the inherent time scale for conformational reconfiguration of denatured proteins. Even a simple homopolymer, however, exhibits a spectrum of fluctuation time scales rather than a unique characteristic time. Consequently, different time scales may be probed by different measurements. Motivated by recent single-molecule fluorescence resonance energy transfer experiments, here I have studied theoretically how the characteristic time scale exhibited by fluctuations of the distance between two residues within an unfolded polypeptide depends on the choice of the residue pair. This time scale was generally found to become shorter as the sequence separation between the residues is reduced. The maximum reconfiguration time, however, corresponds not to the residues being located at the ends of the chain but rather to each residue residing a short length apart from the ends. Comparison of these findings with recent single-molecule measurements suggests that the latter may bear signatures of transient residual structure. © 2010 American Institute of Physics.
Brodbelt J.S.,University of Texas at Austin
Chemical Society Reviews | Year: 2014
Photodissociation mass spectrometry combines the ability to activate and fragment ions using photons with the sensitive detection of the resulting product ions by mass spectrometry. This combination affords a versatile tool for characterization of biological molecules. The scope and breadth of photodissociation mass spectrometry have increased substantially over the past decade as new research groups have entered the field and developed a number of innovative applications that illustrate the ability of photodissociation to produce rich fragmentation patterns, to cleave bonds selectively, and to target specific molecules based on incorporation of chromophores. This review focuses on many of the key developments in photodissociation mass spectrometry over the past decade with a particular emphasis on its applications to biological molecules. This journal is © the Partner Organisations 2014.
Goodenough J.B.,University of Texas at Austin
Energy and Environmental Science | Year: 2014
The storage of electrical energy in a rechargeable battery is subject to the limitations of reversible chemical reactions in an electrochemical cell. The limiting constraints on the design of a rechargeable battery also depend on the application of the battery. Of particular interest for a sustainable modern society are (1) powering electric vehicles that can compete with cars powered by the internal combustion engine and (2) stationary storage of electrical energy from renewable energy sources that can compete with energy stored in fossil fuels. Existing design strategies for the rechargeable battery have enabled the wireless revolution and the plug-in hybrid electric car, but they show little promise of providing safe, adequate capacity with an acceptable shelf and cycle life to compete in cost and convenience with the chemical energy stored in fossil fuels. Electric vehicles that are charged overnight (plug-in vehicles) offer a distributed energy storage, but larger battery packs are needed for stationary storage of electrical energy generated from wind or solar farms and for stand-by power. This paper outlines the limitations of existing commercial strategies and some developing strategies that may overcome these limitations. © 2014 The Royal Society of Chemistry.
Cisneros G.A.,Wayne State University |
Karttunen M.,University of Waterloo |
Ren P.,University of Texas at Austin |
Sagui C.,North Carolina State University
Chemical Reviews | Year: 2014
Electrostatic interactions are crucial for biomolecular simulations, as their calculation is the most time-consuming part of computing the total classical forces, and their representation has profound consequences for the accuracy of classical force fields. Long-range electrostatic interactions are crucial for the stability of proteins, nucleic acids, glycomolecules, lipids, and other macromolecules, and their interactions with solvent, ions, and other molecules. Traditionally, electrostatic interactions have been modeled using a set of fixed atom-centered point charges or partial charges. The most popular methods for extracting charges from molecular wave functions are based on a fitting of the atomic charges to the molecular electrostatic potential (MEP) computed with ab initio or semiempirical methods outside the van der Waals surface. Computationally, the electrostatic potential for a system with explicit solvent is calculated by either solving Poisson's equation or explicitly adding the individual charge potentials.
Willets K.A.,University of Texas at Austin
Chemical Society Reviews | Year: 2014
Surface-enhanced Raman scattering (SERS) hot spots occur when molecules are positioned near regions of strongly enhanced electromagnetic fields on the surface of nano-featured plasmonic substrates. The emission from the molecule is coupled out into the far field by the plasmon modes of the substrate, but due to the diffraction limit of light, the properties of this coupled molecule-plasmon emitter cannot be resolved using typical far-field optical microscopy techniques. However, by fitting the emission to a model function such as a 2-dimensional Gaussian, the relative position of the emitter can be determined with precision better than 5 nm in a process known as super-resolution imaging. This tutorial review describes the basic principles of super-resolution imaging of SERS hot spots using single molecules to probe local electromagnetic field enhancements. New advances using dipole-based fitting functions and spectrally- and spatially-resolved measurements are described, providing new insight into SERS hot spots and the important roles of both the molecule and the substrate in defining their properties. © 2014 the Partner Organisations.
Echeverria I.,University of Maryland University College |
Makarov D.E.,University of Texas at Austin |
Papoian G.A.,University of Maryland University College
Journal of the American Chemical Society | Year: 2014
Protein chains undergo conformational diffusion during folding and dynamics, experiencing both thermal kicks and viscous drag. Recent experiments have shown that the corresponding friction can be separated into wet friction, which is determined by the solvent viscosity, and dry friction, where frictional effects arise due to the interactions within the protein chain. Despite important advances, the molecular origins underlying dry friction in proteins have remained unclear. To address this problem, we studied the dynamics of the unfolded cold-shock protein at different solvent viscosities and denaturant concentrations. Using extensive all-atom molecular dynamics simulations we estimated the internal friction time scales and found them to agree well with the corresponding experimental measurements (Soranno et al. Proc. Natl. Acad. Sci. U.S.A. 2012, 109, 17800-17806). Analysis of the reconfiguration dynamics of the unfolded chain further revealed that hops in the dihedral space provide the dominant mechanism of internal friction. Furthermore, the increased number of concerted dihedral moves under physiological conditions suggests that, in such conditions, the concerted motions result in higher frictional forces. These findings have important implications for understanding the folding kinetics of proteins as well as the dynamics of intrinsically disordered proteins. © 2014 American Chemical Society.
Hatlestad G.J.,University of Texas at Austin
Nature genetics | Year: 2015
Nearly all flowering plants produce red/violet anthocyanin pigments. Caryophyllales is the only order containing families that replace anthocyanins with unrelated red and yellow betalain pigments. Close biological correlation of pigmentation patterns suggested that betalains might be regulated by a conserved anthocyanin-regulating transcription factor complex consisting of a MYB, a bHLH and a WD repeat-containing protein (the MBW complex). Here we show that a previously uncharacterized anthocyanin MYB-like protein, Beta vulgaris MYB1 (BvMYB1), regulates the betalain pathway in beets. Silencing BvMYB1 downregulates betalain biosynthetic genes and pigmentation, and overexpressing BvMYB1 upregulates them. However, unlike anthocyanin MYBs, BvMYB1 will not interact with bHLH members of heterologous anthocyanin MBW complexes because of identified nonconserved residues. BvMYB1 resides at the historic beet pigment-patterning locus, Y, required for red-fleshed beets. We show that Y and y express different levels of BvMYB1 transcripts. The co-option of a transcription factor regulating anthocyanin biosynthesis would be an important evolutionary event allowing betalains to largely functionally replace anthocyanins.
Hausmann N.T.,University of California at Berkeley |
Hawkes C.V.,University of Texas at Austin
Ecology | Year: 2010
The causes of local diversity and composition remain a central question in community ecology. Numerous studies have attempted to understand community assembly, both within and across trophic levels. However, little is known about how community assembly aboveground influences soil microbial communities belowground. We hypothesized that plant establishment order can affect the community of arbuscular mycorrhizal fungi (AMF) in roots, with the strength of this effect dependent on both host plant identity and neighboring plant identity. Such priority effects of plants on AMF may act through host-specific filters of the initial species pool that limit the available pool for plants that established second. In a greenhouse experiment with four plant hosts, we found that the strength of the priority effect on AMF communities reflected both host plant characteristics and interactions between host and neighbor plant species, consistent with differential host specificity among plants. These patterns were independent of plant biomass and root colonization. Functional studies of AMF associated with a wide array of host plants will be required to further understand this potential driver of community dynamics. © 2010 by the Ecological Society of America.
Benner A.D.,University of Texas at Austin |
Graham S.,University of California at Los Angeles
Child Development | Year: 2011
Changes in perceptions of discrimination were examined with 668 Latino students (62% Mexican American; 56% female; Mage = 14.6 years). Adolescents' reports of discrimination increased across the first 2 years of high school. Perceptions of discrimination were higher for boys and for primary language brokers, as well as for adolescents in schools with more ethnically diverse student bodies but a less diverse teaching staff. Path analysis revealed that higher levels of discrimination and increases in discrimination across time influenced Latino adolescents' academic outcomes (i.e., grades, absences) indirectly via their influences on perceptions of school climate. Findings highlight previously understudied individual and school contextual factors that shape experiences of discrimination and the mechanisms by which discrimination indirectly influences Latino adolescents' outcomes. © 2011 The Authors. Child Development © 2011 Society for Research in Child Development, Inc.
Hines K.E.,University of Texas at Austin
Journal of General Physiology | Year: 2013
Single molecule photobleaching is a powerful tool for determining the stoichiometry of protein complexes. By attaching fluorophores to proteins of interest, the number of associated subunits in a complex can be deduced by imaging single molecules and counting fluorophore photobleaching steps. Because some bleaching steps might be unobserved, the ensemble of steps will be binomially distributed. In this work, it is shown that inferring the true composition of a complex from such data is nontrivial because binomially distributed observations present an ill-posed inference problem. That is, a unique and optimal estimate of the relevant parameters cannot be extracted from the observations. Because of this, a method has not been firmly established to quantify confidence when using this technique. This paper presents a general inference model for interpreting such data and provides methods for accurately estimating parameter confidence. The formalization and methods presented here provide a rigorous analytical basis for this pervasive experimental tool. © 2013 Hines.
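The ill-posedness described here can be illustrated numerically: if each of n fluorophores is detected with probability p, the observed step counts follow a Binomial(n, p) distribution, and different (n, p) pairs can fit the same histogram almost equally well. Below is a minimal sketch with made-up step-count data (not from the paper); the profile likelihood over n is nearly flat, which is why a unique optimal estimate cannot be extracted without the kind of confidence analysis the paper develops.

```python
import math

def binom_pmf(k, n, p):
    """Probability of observing k bleaching steps from a complex of n
    fluorophores, each detected independently with probability p."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def log_likelihood(hist, n, p):
    """Log-likelihood of a histogram {observed steps: frequency}."""
    return sum(f * math.log(binom_pmf(k, n, p)) for k, f in hist.items())

# Hypothetical histogram from 100 single-molecule photobleaching traces.
hist = {0: 5, 1: 19, 2: 31, 3: 28, 4: 14, 5: 3}

# Profile likelihood: for each candidate subunit number n, find the best
# detection probability p on a coarse grid.
profile = {n: max(log_likelihood(hist, n, p / 100) for p in range(5, 96))
           for n in (5, 6, 7, 8)}
```

With this toy histogram, n = 6 (p near 0.4) and n = 8 (p near 0.3) differ by only roughly one log-likelihood unit, so the data alone barely distinguish them; the likelihood surface forms a ridge rather than a single peak.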
Loehlin J.C.,University of Texas at Austin
Behavior Genetics | Year: 2012
An earlier twin analysis by Loehlin and Nichols was repeated with a considerably larger twin sample (2,600 adult Australian twin pairs). Like the previous analysis, it was based on item clusters, although from different questionnaires. Unlike the previous study, it found consistent differences in MZ-DZ discrepancy in correlation among the clusters. A more elaborate analysis confirmed this for the clusters, but not for broad Extraversion and Neuroticism composites based upon them. © Springer Science+Business Media, LLC 2011.
Umberson D.,University of Texas at Austin
Journal of health and social behavior | Year: 2010
Social relationships--both quantity and quality--affect mental health, health behavior, physical health, and mortality risk. Sociologists have played a central role in establishing the link between social relationships and health outcomes, identifying explanations for this link, and discovering social variation (e.g., by gender and race) at the population level. Studies show that social relationships have short- and long-term effects on health, for better and for worse, and that these effects emerge in childhood and cascade throughout life to foster cumulative advantage or disadvantage in health. This article describes key research themes in the study of social relationships and health, and it highlights policy implications suggested by this research.
Matz M.V.,University of Texas at Austin
PloS one | Year: 2013
Model-based analysis of data from quantitative reverse-transcription PCR (qRT-PCR) is potentially more powerful and versatile than traditional methods. Yet existing model-based approaches cannot properly deal with the higher sampling variances associated with low-abundant targets, nor do they provide a natural way to incorporate assumptions about the stability of control genes directly into the model-fitting process. In our method, raw qPCR data are represented as molecule counts, and described using generalized linear mixed models under Poisson-lognormal error. A Markov Chain Monte Carlo (MCMC) algorithm is used to sample from the joint posterior distribution over all model parameters, thereby estimating the effects of all experimental factors on the expression of every gene. The Poisson-based model allows for the correct specification of the mean-variance relationship of the PCR amplification process, and can also glean information from instances of no amplification (zero counts). Our method is very flexible with respect to control genes: any prior knowledge about the expected degree of their stability can be directly incorporated into the model. Yet the method provides sensible answers without such assumptions, or even in the complete absence of control genes. We also present a natural Bayesian analogue of the "classic" analysis, which uses standard data pre-processing steps (logarithmic transformation and multi-gene normalization) but estimates all gene expression changes jointly within a single model. The new methods are considerably more flexible and powerful than the standard delta-delta Ct analysis based on pairwise t-tests. Our methodology expands the applicability of the relative-quantification analysis protocol all the way to the lowest-abundance targets, and provides a novel opportunity to analyze qRT-PCR data without making any assumptions concerning target stability. These procedures have been implemented as the MCMC.qpcr package in R.
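The count representation at the heart of this approach can be sketched simply: under an amplification efficiency E, a target's quantification cycle Cq maps to an estimated number of starting molecules as E^(Cq1 - Cq), where Cq1 is the cycle at which a single molecule would cross the detection threshold. The actual conversion and MCMC fitting live in the MCMC.qpcr R package; the Python snippet below is only an illustration, with assumed values for E and Cq1.

```python
def cq_to_count(cq, e=2.0, cq1=37.0):
    """Estimated number of starting target molecules for a qPCR reaction.

    e   -- amplification efficiency (2.0 = perfect doubling per cycle)
    cq1 -- cycle at which a single starting molecule crosses the threshold
    Both defaults are illustrative assumptions, not package defaults.
    """
    return round(e ** (cq1 - cq))

# A target crossing the threshold 10 cycles earlier than a single molecule
# implies roughly 2**10 = 1024 starting copies.
print(cq_to_count(27.0))
```

Zero counts (no amplification) then enter the Poisson-lognormal model directly instead of being discarded, which is what lets the method reach the lowest-abundance targets.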
Sage W.M.,University of Texas at Austin
Health Affairs | Year: 2014
As hospital, physician, and health insurance markets consolidate and change in response to health care reform, some commentators have called for vigorous enforcement of the federal antitrust laws to prevent the acquisition and exercise of market power. In health care, however, stricter antitrust enforcement will benefit consumers only if it accounts for the competitive distortions caused by the sector's long history of government regulation. This article directs policy makers to a neglected dimension of health care competition that has been altered by regulation: the product. Competition may have failed to significantly lower costs, increase access, or improve quality in health care because we have been buying and selling the wrong things. Competition policy makers, meaning both antitrust enforcers and regulators, should force the health care industry to define and market products that can be assembled and warranted to consumers while keeping emerging sectors such as mHealth free from overregulation, wasteful subsidy, and appropriation by established insurer and provider interests. © 2014 Project HOPE-The People-to-People Health Foundation, Inc.
King C.W.,University of Texas at Austin
Environmental Research Letters | Year: 2010
In this letter I compare two measures of energy quality, energy return on energy invested (EROI) and energy intensity ratio (EIR) for the fossil fuel consumption and production of the United States. All other characteristics being equal, a fuel or energy system with a higher EROI or EIR is of better quality because more energy is provided to society. I define and calculate the EIR for oil, natural gas, coal, and electricity as measures of the energy intensity (units of energy divided by money) of the energy resource relative to the energy intensity of the overall economy. EIR measures based upon various unit prices for energy (e.g. $/Btu of a barrel of oil) as well as total expenditures on energy supplies (e.g. total dollars spent on petroleum) indicate net energy at different points in the supply chain of the overall energy system. The results indicate that EIR is an easily calculated and effective proxy for EROI for US oil, gas, coal, and electricity. The EIR correlates well with previous EROI calculations, but adds additional information on energy resource quality within the supply chain. Furthermore, the EIR and EROI of oil and gas as well as coal were all in decline for two time periods within the last 40 years, and both time periods preceded economic recessions. © 2010 IOP Publishing Ltd.
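The EIR defined above reduces to simple arithmetic once units are fixed: divide the energy intensity of the fuel (energy bought per dollar spent on it) by the energy intensity of the economy (primary energy consumed per dollar of GDP). A sketch with round, purely illustrative numbers, not the letter's data:

```python
def eir(fuel_mj_per_dollar, economy_mj_per_dollar):
    """Energy intensity ratio: energy intensity of a fuel relative to the
    energy intensity of the overall economy."""
    return fuel_mj_per_dollar / economy_mj_per_dollar

# Illustrative numbers: a barrel of oil holds about 6100 MJ, so at $100/bbl
# a dollar buys 61 MJ of oil. An economy consuming 100 EJ of primary energy
# on a $14 trillion GDP runs at about 7.1 MJ per dollar.
oil_mj_per_dollar = 6100.0 / 100.0            # 61 MJ per dollar of oil
economy_mj_per_dollar = (100 * 1e12) / 14e12  # 100 EJ = 1e14 MJ -> ~7.14 MJ/$
print(round(eir(oil_mj_per_dollar, economy_mj_per_dollar), 1))
```

By this measure, cheap energy (many MJ per dollar) yields a high EIR; as fuel prices rise relative to the economy's energy intensity, the EIR falls, mirroring the pre-recession declines the letter reports.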
Weinmaster G.,University of California at Los Angeles |
Fischer J.,University of Texas at Austin
Developmental Cell | Year: 2011
In the first volume of Developmental Cell, it was reported that the classic Drosophila neurogenic gene neuralized encodes a ubiquitin ligase that monoubiquitylates the Notch ligand Delta, thus promoting Delta endocytosis. A requirement for ligand internalization by the signal-sending cell, although counterintuitive, remains to date a feature unique to Notch signaling. Ten years and many ubiquitin ligases later, we discuss sequels to these three papers with an eye toward reviewing the development of ideas for how ligand ubiquitylation and endocytosis propel Notch signaling. © 2011 Elsevier Inc.
Rindova V.,University of Texas at Austin
Journal of Supply Chain Management | Year: 2011
The topic of theory development remains an ongoing area of interest among management scholars. Much of the discussion on the topic has focused on the question of what constitutes a theoretical contribution. In contrast, this article emphasizes the process through which ideas develop into a value-added theoretical contribution. The article highlights the paradoxical nature of the demands that the theory development process poses on scholars. © 2011 Institute for Supply Management, Inc.™.
Wallingford J.B.,University of Texas at Austin |
Mitchell B.,Northwestern University
Genes and Development | Year: 2011
Cilia are important cellular structures that have been implicated in a variety of signaling cascades. In this review, we discuss the current evidence for and against a link between cilia and both the canonical Wnt/β-catenin pathway and the noncanonical Wnt/planar cell polarity (PCP) pathway. Furthermore, we address the evidence implicating a role for PCP components in ciliogenesis. Given the lack of consensus in the field, we use new data on the control of ciliary protein localization as a basis for proposing new models by which cell type-specific regulation of ciliary components via differential transport, regulated entry and exit, or diffusion barriers might generate context-dependent functions for cilia. © 2011 by Cold Spring Harbor Laboratory Press.
Macdonald P.M.,University of Texas at Austin
Current Opinion in Genetics and Development | Year: 2011
Localization of mRNAs to subcellular domains can enrich proteins at sites where they function. Coordination with translational control can ensure that the encoded proteins will not appear elsewhere, an important property for factors that control cell fate or body patterning. Here I focus on two aspects of mRNA localization. One is the question of how mRNAs that undergo directed transport by a shared mechanism are bound to the transport machinery, and why localization signals from these mRNAs have very diverse sequences. The second topic concerns the role of particles, in which localized mRNAs often appear. Recent evidence highlights the importance of such assemblies, and the possibility that close association of mRNAs confers community effects and a novel form of regulation. © 2011 Elsevier Ltd.
Choi S.,University of Texas at Austin
Scientometrics | Year: 2012
Assuming the OECD member states as 'advanced' nations equipped with basic scientific capacities, the present research addresses the network configuration of these countries in international scientific collaboration and the transformation of this network along with globalization. The result suggests that geographical, linguistic, and economic affinities did not have a meaningful impact on the formation of the co-authorship network among 'advanced' nations, in contrast to previous research that claimed their importance for international cooperation. Globalization, facilitated by the development of information and transportation technologies, was found to influence co-authorship links between countries, but not to accelerate centralization of the network over the past 15 years. Though the core-periphery pattern still persists, two new rising stars, Korea and Turkey, have emerged in the co-authorship network among 'advanced' nations. These two countries, each with a rapid increase in its share of degree centrality from 1995 to 2010, had strategic financial support from the government, which stimulated collaboration between universities and industries and emphasized the development of science and engineering fields. © 2011 Akadémiai Kiadó, Budapest, Hungary.
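The degree centrality shares tracked here for Korea and Turkey are straightforward to compute from a co-authorship edge list. A toy sketch follows; the countries and links are invented for illustration, not the study's data:

```python
from collections import defaultdict

def degree_centrality_share(edges):
    """Fraction of total degree held by each node in an undirected
    collaboration network given as a list of (country, country) links."""
    degree = defaultdict(int)
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    total = sum(degree.values())
    return {node: d / total for node, d in degree.items()}

edges = [("US", "UK"), ("US", "DE"), ("US", "KR"), ("KR", "TR"), ("UK", "DE")]
shares = degree_centrality_share(edges)
print(shares["US"], shares["KR"])  # -> 0.3 0.2
```

Comparing such shares between two network snapshots (e.g., 1995 vs. 2010) is one simple way to quantify whether a country is rising within the network.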
Benedek N.A.,University of Texas at Austin |
Fennie C.J.,Cornell University
Journal of Physical Chemistry C | Year: 2013
We use a combination of symmetry arguments and first-principles calculations to explore the connection between structural distortions and ferroelectricity in the perovskite family of materials. We explain the role of octahedral rotations in suppressing ferroelectricity in these materials and show that, as the tolerance factor decreases, rotations alone cannot fully suppress ferroelectricity. Our results show that it is cation displacements ("hidden" in Glazer notation) that accompany the rotations, rather than the rotations themselves, that play the decisive role in suppressing ferroelectricity in these cases. We use the knowledge gained in our analysis of this problem to explain the origin of ferroelectricity in R3c materials such as FeTiO3 and ZnSnO3 and to suggest strategies for the design and synthesis of new perovskite ferroelectrics. Our results have implications not only for the fundamental crystal chemistry of the perovskites but also for the discovery of new functional materials. © 2013 American Chemical Society.
Warren D.L.,University of Texas at Austin
Trends in Ecology and Evolution | Year: 2012
There is a growing awareness of problems with the estimation of the ecological tolerances of species through correlative modeling approaches. These problems have led some investigators to argue for abandoning terms such as 'ecological niche model' and 'environmental niche model' in favor of the ostensibly more value-neutral 'species distribution model', as the models are thought to frequently be poor estimators of the niche. Here, I argue that most applications to which these models are put require the assumption that they do estimate the niche, however imperfectly, and that obscuring this inescapable and potentially flawed assumption in the terminology may only serve to hinder the development of the field. © 2012 Elsevier Ltd.
Averill C.,University of Texas at Austin
Ecology Letters | Year: 2014
Allocation trade-offs shape ecological and biogeochemical phenomena at local to global scales. Plant allocation strategies drive major changes in ecosystem carbon cycling. Microbial allocation to enzymes that decompose carbon vs. organic nutrients may similarly affect ecosystem carbon cycling. Current solutions to this allocation problem prioritise stoichiometric trade-offs implemented in plant ecology. These solutions may not maximise microbial growth and fitness under all conditions, because organic nutrients are also a significant carbon resource for microbes. I created multiple allocation frameworks and simulated microbial growth using a microbial-explicit biogeochemical model. I demonstrate that prioritising stoichiometric trade-offs does not optimise microbial allocation, while exploiting organic nutrients as carbon resources does. Analysis of continental-scale enzyme data supports the allocation patterns predicted by this framework, and modelling suggests large deviations in soil C loss based on which strategy is implemented. Therefore, understanding microbial allocation strategies will likely improve our understanding of carbon cycling and climate. © 2014 John Wiley & Sons Ltd/CNRS.
Hopkins M.E.,University of Texas at Austin
International Journal of Primatology | Year: 2013
Many animals generate long-distance vocalizations ("loud calls") that have been hypothesized to mediate spatial interactions. Howls by mantled howlers (Alouatta palliata), specifically, are hypothesized to function in group spacing, but the mechanism by which this is accomplished remains unclear. In this study, I evaluated the role of naturally occurring howls in mediating mantled howler intergroup spatial interactions on Barro Colorado Island, Panama. Multivariate analyses examined relationships between social and environmental contextual variables and the likelihood that a group would approach or avoid a neighboring group's howl. Mantled howlers were more likely to approach howling neighbors when fruit and flower availability was limited. Relative dominance was also a highly significant predictor of approach, though the relationship ran contrary to expectations. Groups were more likely to approach howling neighbors higher in the established dominance hierarchy than those lower down, possibly reflecting priority access of more dominant groups to higher quality resources. Other variables tested, including numeric odds, vocalization intensity, infanticide risk, and location of the howling group, were not significant predictors of spatial response. These findings are consistent with the assertion that howls contain identifying information and highlight the importance of including contextual variables in multivariate analyses of spatial responses to loud calls. These results also indicate potential contrasts between observational and playback studies by suggesting that howl location may play a greater role in responses to simulated intruders than to known neighbor groups, and that relative dominance may provide a more salient measure of competitive ability than numeric odds among groups with interaction histories. © 2013 Springer Science+Business Media New York.
Zhang Y.,University of Texas at Austin
Journal of medical Internet research | Year: 2013
Preference for information is a personality trait that affects people's tendency to seek information in health-related situations. Prior studies have focused primarily on investigating its impact on patient-provider communication and on the implications for designing information interventions that prepare patients for medical procedures. Few studies have examined its impact on general consumers' interactions with Web-based search engines for health information or the implications for designing more effective health information search systems. This study intends to fill this gap by investigating the impact of preference for information on the search behavior of general consumers seeking health information, their perceptions of search tasks (representing information needs), and user experience with search systems. Forty general consumers who had previously searched for health information online participated in the study in our usability lab. Preference for information was measured using Miller's Monitor-Blunter Style Scale (MBSS) and the Krantz Health Opinion Survey-Information Scale (KHOS-I). Each participant completed four simulated health information search tasks: two look-up (fact-finding) and two exploratory. Their behaviors while interacting with the search systems were automatically logged and ratings of their perceptions of tasks and user experience with the systems were collected using Likert-scale questionnaires. The MBSS showed low reliability with the participants (Monitoring subscale: Cronbach alpha=.53; Blunting subscale: Cronbach alpha=.35). Thus, no further analyses were performed based on the scale. KHOS-I had sufficient reliability (Cronbach alpha=.77). Participants were classified into low- and high-preference groups based on their KHOS-I scores. The high-preference group submitted significantly shorter queries when completing the look-up tasks (P=.02). 
The high-preference group made a significantly higher percentage of parallel movements in query reformulation than did the low-preference group (P=.04), whereas the low-preference group made a significantly higher percentage of new concept movements than the high-preference group when completing the exploratory tasks (P=.01). The high-preference group found the exploratory tasks to be significantly more difficult (P=.05) and the systems to be less useful (P=.04) than did the low-preference group. Preference for information has an impact on the search behavior of general consumers seeking health information. Those with a high preference were more likely to use more general queries when searching for specific factual information and to develop more complex mental representations of health concerns of an exploratory nature and try different combinations of concepts to explore these concerns. High-preference users were also more demanding on the system. Health information search systems should be tailored to fit individuals' information preferences.
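The query-reformulation categories mentioned above (parallel movements vs. new concept movements) can be operationalized by comparing term sets of consecutive queries. The rules below are a simplified, assumed coding scheme for illustration only, not the study's actual protocol:

```python
def classify_reformulation(prev_query, next_query):
    """Rough classifier for consecutive search queries based on term overlap.

    The category names follow the abstract; the overlap rules here are an
    assumed simplification, not the study's coding scheme.
    """
    a = set(prev_query.lower().split())
    b = set(next_query.lower().split())
    if not a & b:
        return "new concept"        # no shared terms at all
    if a < b:
        return "specialization"     # all old terms kept, new ones added
    if b < a:
        return "generalization"     # terms dropped, none added
    return "parallel movement"      # shared core, sibling terms swapped

print(classify_reformulation("diabetes diet", "diabetes exercise"))
```

Under this scheme, swapping "diet" for "exercise" while keeping "diabetes" counts as a parallel movement, while jumping from "diabetes" to "heart attack" counts as a new concept movement.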
Ge S.-F.,Tsinghua University |
Dicus D.A.,University of Texas at Austin |
Repko W.W.,Michigan State University
Physical Review Letters | Year: 2012
The residual Z2s(k) and Z̄2s(k) symmetries induce a direct and unique phenomenological relation with θx (θ13) expressed in terms of the other two mixing angles θs (θ12) and θa (θ23) and the Dirac CP phase δD. Z2s(k) predicts a θx probability distribution centered around 3°-6° with an uncertainty of 2°-4°, while those from Z̄2s(k) are approximately a factor of 2 larger. Either result fits the T2K, MINOS, and Double Chooz measurements. Alternately, a prediction for the Dirac CP phase δD results in a peak at ±74° (±106°) for Z2s(k) or ±123° (±57°) for Z̄2s(k), which is consistent with the latest global fit. We also give a distribution for the leptonic Jarlskog invariant Jν which can provide further tests from measurements at T2K and NOνA. © 2012 American Physical Society.
Sun A.Y.,University of Texas at Austin
Water Resources Research | Year: 2013
The purpose of this work is to investigate the feasibility of downscaling Gravity Recovery and Climate Experiment (GRACE) satellite data for predicting groundwater level changes and, thus, enhancing current capability for sustainable water resources management. In many parts of the world, water management decisions are traditionally informed by in situ observation networks which, unfortunately, have seen a decline in coverage in recent years. Since its launch, GRACE has provided terrestrial water storage change (ΔTWS) data at global and regional scales. The application of GRACE data for local-scale groundwater resources management has been limited because of uncertainties inherent in GRACE data and difficulties in disaggregating various TWS components. In this work, artificial neural network (ANN) models are developed to predict groundwater level changes directly by using a gridded GRACE product and other publicly available hydrometeorological data sets. As a feasibility study, ensemble ANN models are used to predict monthly and seasonal water level changes for several wells located in different regions across the US. Results indicate that GRACE data play a modest but significant role in the performance of ANN ensembles, especially when the cyclic pattern of a groundwater hydrograph is disrupted by extreme climate events, such as the recent Midwest droughts. The statistical downscaling approach taken here may be readily integrated into local water resources planning activities. Key Points: GRACE data are downscaled to predict in situ water level changes; a neural network statistical downscaling approach was taken; a modest gain in prediction accuracy was observed. ©2013. American Geophysical Union. All Rights Reserved.
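The ensemble idea in the paper, many models trained on resampled data whose predictions are averaged, can be sketched with a deliberately simple stand-in: bootstrap least-squares fits of well level change against a single GRACE predictor, instead of full neural networks with multiple hydrometeorological inputs. Everything below (the data, the member count) is invented for illustration.

```python
import random

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for one predictor."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def ensemble_predict(xs, ys, x_new, n_members=50, seed=0):
    """Average the predictions of models fit to bootstrap resamples of the
    training data (a stand-in for the paper's ANN ensemble)."""
    rng = random.Random(seed)
    preds = []
    while len(preds) < n_members:
        idx = [rng.randrange(len(xs)) for _ in xs]
        sample_x = [xs[i] for i in idx]
        if len(set(sample_x)) < 2:   # skip degenerate resamples
            continue
        s, b = fit_line(sample_x, [ys[i] for i in idx])
        preds.append(s * x_new + b)
    return sum(preds) / len(preds)

# Toy data: monthly GRACE storage anomaly (cm) vs. well level change (m).
grace = [-3.0, -1.5, 0.0, 1.0, 2.5, 4.0]
level = [-0.6, -0.3, 0.1, 0.2, 0.5, 0.8]
print(round(ensemble_predict(grace, level, 2.0), 2))
```

The spread of the individual member predictions (not shown) gives a crude uncertainty band, which is one reason ensembles are preferred over a single model in this kind of feasibility study.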
Norton I.,University of Texas at Austin
Geosphere | Year: 2011
Extension in Death Valley is usually interpreted as a combination of low-angle Basin and Range-style extension and strike slip associated with the developing Pacific-North America plate boundary in western North America, with these two tectonic regimes operating synchronously in Death Valley. Examination of structural, stratigraphic, and timing relationships in the region suggests that this interpretation needs revision. Evolution of Death Valley is best described as a two-stage process. In the first stage, lasting from ca. 18 to 5 Ma, low-angle Basin and Range extension transported allochthons consisting of Late Proterozoic through Early Paleozoic miogeoclinal section along detachment surfaces that, as extension continued, were exhumed from mid-lower crustal levels to the surface. Near the end of this extensional phase and lasting until ca. 3 Ma, deposition of a thick sequence of volcanics, clastics, and some lacustrine carbonates signaled a period of relative tectonic quiescence, with sediments in some areas covering the exhumed detachment surfaces. At ca. 3 Ma, initiation of the East California Shear Zone started development of the present-day topographic depression of Death Valley, formed as a pull-apart basin associated with this strike slip. Faulting broke the older, inactive, Basin and Range detachment surfaces, with high-angle transtensional faulting along the Black Mountains front. The Black Mountains were elevated as a result of footwall uplift, with the well-known turtleback structures being megamullions along these bounding faults. These megamullions are similar to those seen at oceanic spreading centers. The Panamint Range has previously been interpreted as an extensional allochthon, with the entire range transported from on top of or east of the Black Mountains. A new interpretation presented here is that the range is a large core complex similar to the core complex at Tucki Mountain, at the northern end of the range. 
The Basin and Range extensional detachment tracks over the top of the range, with extensional allochthons perched on the eastern flanks of the range. This modified model for evolution of Death Valley suggests a strong link between timing and style of deformation in the basin with the developing Pacific-North America plate boundary, particularly eastward propagation of this boundary. © 2011 Geological Society of America.
Chen X.,University of Texas at Austin
Journal of the American Chemical Society | Year: 2012
Toehold-mediated strand displacement has proven extremely powerful in programming enzyme-free DNA circuits and DNA nanomachines. To achieve multistep, autonomous, and complex behaviors, toeholds must be initially inactivated by hybridizing to inhibitor strands or domains and then relieved from inactivation in a programmed, timed manner. Although powerful and reasonably robust, this strategy has several drawbacks that limit the architecture of DNA circuits. For example, the combination of toeholds and branch migration (BM) domains is 'hard wired' during DNA synthesis and thus cannot be created or changed during the execution of DNA circuits. To solve this problem, I propose a strategy called 'associative toehold activation', where the toeholds and BM domains are connected via hybridization of auxiliary domains during the execution of DNA circuits. Bulged thymidines that stabilize DNA three-way junctions substantially accelerate strand displacement reactions in this scheme, allowing fast strand displacement initiated by reversible toehold binding. To demonstrate the versatility of the scheme, I show (1) run-time combination of toeholds and BM domains, (2) run-time recombination of toeholds and BM domains, which results in a novel operation 'toehold switching', and (3) design of a simple conformational self-replicator. © 2011 American Chemical Society.
Dicarlo D.A.,University of Texas at Austin
Water Resources Research | Year: 2013
Gravity-driven multiphase flow in porous media is ubiquitous in the geophysical world; the classic case in hydrology is vertical infiltration of precipitation into a soil. For homogeneous porous media, infiltrations are sometimes observed to be stable and laterally uniform, but other times are observed to be unstable and produce preferential flow paths. Since Saffman and Taylor (1958), researchers have attempted to define criteria that determine instability. Saffman and Taylor's analysis consisted of two regions of single phase flow, while Parlange and Hill (1976) integrated this analysis with the multiphase flow equations to provide testable predictions. In the subsequent 40 years, great advances have been made determining the complex interactions between multiphase flow and instability. Theoretically, the stability of the standard multiphase flow equations has been verified, showing the necessity of extensions to the multiphase flow equations to describe the observed unstable flow. Experimentally, it has been shown that the instability is related to a phenomenon in 1-D infiltrations called saturation or pressure overshoot. In this review, the connection between overshoot and instability is detailed, and it is described how models of overshoot can simplify the analysis of current and future models of instability and multiphase flow. ©2013. American Geophysical Union. All Rights Reserved.
Johnson K.A.,University of Texas at Austin
FEBS Letters | Year: 2013
This review traces the history and logical progression of methods for quantitative analysis of enzyme kinetics from the 1913 Michaelis and Menten paper to the application of modern computational methods today. Following a brief review of methods for fitting steady state kinetic data, modern methods are highlighted for fitting full progress curve kinetics based upon numerical integration of rate equations, including a re-analysis of the original Michaelis-Menten full time course kinetic data. Finally, several illustrations of modern transient state kinetic methods of analysis are shown which enable the elucidation of reactions occurring at the active sites of enzymes in order to relate structure and function. © 2013 Federation of European Biochemical Societies. Published by Elsevier B.V. All rights reserved.
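The progress-curve approach highlighted above, fitting the full time course by numerically integrating the rate equations rather than fitting initial rates, can be sketched in a few lines. This is a minimal illustration assuming the irreversible Michaelis-Menten rate law, with made-up parameter values rather than the 1913 data:

```python
# Sketch: generate a full progress curve by numerically integrating the
# Michaelis-Menten rate law dS/dt = -Vmax*S/(Km + S) with a classical
# 4th-order Runge-Kutta step. Parameter values are illustrative only.

def mm_rate(s, vmax, km):
    """Instantaneous rate of substrate consumption."""
    return -vmax * s / (km + s)

def progress_curve(s0, vmax, km, t_end, dt=0.01):
    """Integrate S(t) from S(0) = s0 forward to t_end."""
    ts, ss = [0.0], [s0]
    t, s = 0.0, s0
    while t < t_end:
        k1 = mm_rate(s, vmax, km)
        k2 = mm_rate(s + 0.5 * dt * k1, vmax, km)
        k3 = mm_rate(s + 0.5 * dt * k2, vmax, km)
        k4 = mm_rate(s + dt * k3, vmax, km)
        s += (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
        ts.append(t)
        ss.append(s)
    return ts, ss

ts, ss = progress_curve(s0=10.0, vmax=1.0, km=2.0, t_end=5.0)
print(f"S(5) = {ss[-1]:.3f}")  # substrate remaining after 5 time units
```

In practice the integrated curve would be compared against measured S(t) data inside a least-squares fitter that adjusts Vmax and Km, which is the role played by the numerical-integration software the review discusses.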
Aleynikov P.,ITER Organization |
Breizman B.N.,University of Texas at Austin
Physical Review Letters | Year: 2015
This Letter presents a rigorous kinetic theory for relativistic runaway electrons in the near critical electric field in tokamaks. The theory provides a distribution function of the runaway electrons, reveals the presence of two different threshold electric fields, and describes a mechanism for hysteresis in the runaway electron avalanche. Two different threshold electric fields characterize a minimal field required for sustainment of the existing runaway population and a higher field required for the avalanche onset. The near-threshold regime for runaway electrons determines the time scale of toroidal current decay during runaway mitigation in tokamaks. © 2015 American Physical Society.
Nagaosa N.,University of Tokyo |
Nagaosa N.,RIKEN |
Sinova J.,Texas A&M University |
Sinova J.,ASCR Institute of Physics Prague |
And 3 more authors.
Reviews of Modern Physics | Year: 2010
The anomalous Hall effect (AHE) occurs in solids with broken time-reversal symmetry, typically in a ferromagnetic phase, as a consequence of spin-orbit coupling. Experimental and theoretical studies of the AHE are reviewed, focusing on recent developments that have provided a more complete framework for understanding this subtle phenomenon and have, in many instances, replaced controversy by clarity. Synergy between experimental and theoretical works, both playing a crucial role, has been at the heart of these advances. On the theoretical front, the adoption of the Berry-phase concepts has established a link between the AHE and the topological nature of the Hall currents. On the experimental front, new experimental studies of the AHE in transition metals, transition-metal oxides, spinels, pyrochlores, and metallic dilute magnetic semiconductors have established systematic trends. These two developments, in concert with first-principles electronic structure calculations, strongly favor the dominance of an intrinsic Berry-phase-related AHE mechanism in metallic ferromagnets with moderate conductivity. The intrinsic AHE can be expressed in terms of the Berry-phase curvatures and it is therefore an intrinsic quantum-mechanical property of a perfect crystal. An extrinsic mechanism, skew scattering from disorder, tends to dominate the AHE in highly conductive ferromagnets. The full modern semiclassical treatment of the AHE is reviewed, which incorporates an anomalous contribution to wave-packet group velocity due to momentum-space Berry curvatures and correctly combines the roles of intrinsic and extrinsic (skew-scattering and side-jump) scattering-related mechanisms. In addition, more rigorous quantum-mechanical treatments based on the Kubo and Keldysh formalisms are reviewed; these take multiband effects into account and demonstrate the equivalence of all three linear response theories in the metallic regime. 
Building on results from recent experiment and theory, a tentative global view of the AHE is proposed which summarizes the roles played by intrinsic and extrinsic contributions in the disorder strength versus temperature plane. Finally, outstanding issues and avenues for future investigation are discussed. © 2010 The American Physical Society.
Zakhour R.,University of Texas at Austin |
Hanly S.V.,Macquarie University
IEEE Transactions on Information Theory | Year: 2012
This paper considers maximizing the network-wide minimum supported rate in the downlink of a two-cell system, where each base station (BS) is endowed with multiple antennas. This is done for different levels of cell cooperation. At one extreme, we consider single cell processing where the BS is oblivious to the interference it is creating at the other cell. At the other extreme, we consider full cooperative macroscopic beamforming. In between, we consider coordinated beamforming, which takes account of inter-cell interference, but does not require full cooperation between the BSs. We combine elements of Lagrangian duality and large system analysis to obtain limiting SINRs and bit-rates, allowing comparison between the considered schemes. The main contributions of the paper are theorems which provide concise formulas for optimal transmit power, beamforming vectors, and achieved signal to interference and noise ratio (SINR) for the considered schemes. The formulas obtained are valid for the limit in which the number of users per cell, K, and the number of antennas per base station, N, tend to infinity, with fixed ratio β = K/N. These theorems also provide expressions for the effective bandwidths occupied by users, and the effective interference caused in the adjacent cell, which allow direct comparisons between the considered schemes. © 2006 IEEE.
Veletsianos G.,University of Texas at Austin
Journal of Computer Assisted Learning | Year: 2012
Scholars participate in online social networks for professional purposes. In such networks, learning takes the form of participation and identity formation through engagement in and contribution to networked practices. While current literature describes the possible benefits of online participation, empirical research on scholars' use of online social networks in the educational technology literature is negligible. The purpose of this paper is to understand scholars' naturalistic practices in social networks in general, and on Twitter in particular. Tweets from 45 scholars were analysed qualitatively to arrive at dominant themes describing online social network practice. Findings indicate that scholars participating on Twitter (1) shared information, resources, and media relating to their professional practice; (2) shared information about their classroom and their students; (3) requested assistance from and offered suggestions to others; (4) engaged in social commentary; (5) engaged in digital identity and impression management; (6) sought to network and make connections with others; and (7) highlighted their participation in online networks other than Twitter. These findings assist the field in understanding the emerging practice of scholarly participation in online networks. © 2011 Blackwell Publishing Ltd.
Harden K.P.,University of Texas at Austin
Journal of Child Psychology and Psychiatry and Allied Disciplines | Year: 2010
Background: Adolescent involvement in religious organizations has been hypothesized to protect against early age at first drink. However, the correlation between adolescent religiosity and later age at first drink may be confounded by environmental or genetic differences between families. This study tests whether, after controlling for shared environmental and genetic confounds using a behavior genetic design, the association between individual levels of religiosity and earlier age at first drink is still evident. Method: Twin and sibling pairs were drawn from the National Longitudinal Study of Adolescent Health, a nationally-representative sample of US adolescents. Age at first drink was measured as how old adolescents were when they first had a drink of beer, wine, or liquor. Religiosity was measured using four items concerning frequency of religious activities and importance of religious beliefs. Using twins and siblings who were discordant for religiosity, analyses tested whether religious adolescents had a later age at first drink than their non-religious co-twins/co-siblings. Results: Religious adolescents did not differ from their non-religious siblings in their mean age at first drink. Results from survival models indicate that environmental differences between families completely account for the correlation between religiosity and later age at first drink. Conclusions: Results suggest that individual religious involvement is a proxy variable for family or cultural environments that are salient for when adolescents initiate alcohol use. Future research is needed to identify specific protective environments in religious families. These results have implications for both public policy and etiological theory. © 2010 Association for Child and Adolescent Mental Health.
Ren K.,University of Texas at Austin
Communications in Computational Physics | Year: 2010
The objective of this paper is to review recent developments in numerical reconstruction methods for inverse transport problems in imaging applications, mainly optical tomography, fluorescence tomography and bioluminescence tomography. In those inverse problems, one aims at reconstructing physical parameters, such as the absorption coefficient, the scattering coefficient and the fluorescence light source, inside heterogeneous media, from partial knowledge of transport solutions on the boundaries of the media. The physical parameters recovered can be used for diagnostic purposes. Numerical reconstruction techniques for those inverse transport problems can be roughly classified into two categories: linear reconstruction methods and nonlinear reconstruction methods. In the first type of method, the inverse problems are linearized around some known background to obtain linear inverse problems. Classical regularization techniques are then applied to solve those inverse problems. The second type of method is either based on regularized nonlinear least-squares techniques or on gradient-driven iterative methods for nonlinear operator equations. In either case, the unknown parameters are iteratively updated until the solutions of the transport equations with those parameters match the measurements to a certain extent. We review linear and nonlinear reconstruction methods for inverse transport problems in medical imaging with stationary, frequency-domain and time-dependent data. The materials presented include both existing and new results. Meanwhile, we attempt to present similar algorithms for different problems in the same framework to make it more straightforward to generalize those algorithms to other inverse (transport) problems. © 2010 Global-Science Press.
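The first category above, linearize about a known background and then apply classical regularization, reduces in the simplest discrete setting to Tikhonov regularization of a linear system. This is a minimal sketch with a synthetic, nearly rank-deficient 2x2 forward operator standing in for a real (and much larger) discretized transport model:

```python
# Sketch: Tikhonov-regularized solution of a linearized inverse problem,
#   min_x ||A x - b||^2 + lam ||x||^2   <=>   (A^T A + lam I) x = A^T b.
# The operator A and data b are synthetic; A is nearly rank-deficient, so
# unregularized inversion would amplify data noise along the weak direction.

def transpose(A):
    return [list(col) for col in zip(*A)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def matvec(A, x):
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def solve(M, y):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(M)
    M = [row[:] + [y[i]] for i, row in enumerate(M)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

def tikhonov(A, b, lam):
    """Solve the regularized normal equations (A^T A + lam I) x = A^T b."""
    At = transpose(A)
    AtA = matmul(At, A)
    for i in range(len(AtA)):
        AtA[i][i] += lam
    return solve(AtA, matvec(At, b))

A = [[1.0, 1.0], [1.0, 1.0001]]  # nearly rank-deficient forward operator
b = [2.0, 2.0001]                # data consistent with x = (1, 1)
x = tikhonov(A, b, lam=1e-6)
print(x)
```

The small penalty lam damps the unstable direction while leaving the well-determined component essentially untouched, which is the essence of the classical regularization step the review refers to.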
Koyluoglu O.O.,University of Texas at Austin |
El Gamal H.,Ohio State University
IEEE Transactions on Information Theory | Year: 2011
This paper investigates the fundamental performance limits of the two-user interference channel in the presence of an external eavesdropper. In this setting, we construct an inner bound, to the secrecy capacity region, based on the idea of cooperative encoding in which the two users cooperatively design their randomized codebooks and jointly optimize their channel prefixing distributions. Our achievability scheme also utilizes message-splitting in order to allow for partial decoding of the interference at the nonintended receiver. Outer bounds are then derived and used to establish the optimality of the proposed scheme in certain cases. In the Gaussian case, the previously proposed cooperative jamming and noise-forwarding techniques are shown to be special cases of our proposed approach. Overall, our results provide structural insights on how the interference can be exploited to increase the secrecy capacity of wireless networks. © 2011 IEEE.
Shahmoradi A.,University of Texas at Austin
Astrophysical Journal | Year: 2013
It is proposed that the luminosity function, the rest-frame spectral correlations, and distributions of cosmological long-duration (Type-II) gamma-ray bursts (LGRBs) may be very well described by a multivariate log-normal distribution. This result is based on careful selection, analysis, and modeling of LGRBs' temporal and spectral variables in the largest catalog of GRBs available to date: 2130 BATSE GRBs, while taking into account the detection threshold and possible selection effects. Constraints on the joint rest-frame distribution of the isotropic peak luminosity (Liso), total isotropic emission (Eiso), the time-integrated spectral peak energy (Ep,z), and duration (T90,z) of LGRBs are derived. The presented analysis provides evidence for a relatively large fraction of LGRBs that have been missed by the BATSE detector, with Eiso extending down to ∼10^49 erg and observed spectral peak energies (Ep) as low as ∼5 keV. LGRBs with rest-frame duration T90,z ≲ 1 s or observer-frame duration T90 ≲ 2 s appear to be rare events (≲ 0.1% chance of occurrence). The model predicts a fairly strong and highly significant correlation (ρ = 0.58 ± 0.04) between Eiso and Ep,z of LGRBs. Also predicted are strong correlations of Liso and Eiso with T90,z and a moderate correlation between Liso and Ep,z. The strength and significance of the correlations found encourage the search for underlying mechanisms, though they undermine the correlations' capabilities as probes of dark energy's equation of state at high redshifts. The presented analysis favors - but does not necessitate - a cosmic rate for BATSE LGRBs tracing metallicity evolution consistent with a cutoff Z/Z⊙ ∼ 0.2-0.5, assuming no luminosity-redshift evolution. © 2013. The American Astronomical Society. All rights reserved.
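A multivariate log-normal model of the kind proposed here amounts to exponentiating correlated Gaussian variables. This is a minimal sketch assuming illustrative means, widths, and correlation, not the paper's fitted BATSE values:

```python
import math
import random

# Sketch: draw (log Eiso, log Ep,z)-like pairs from a bivariate log-normal
# model, i.e. correlated Gaussians in log space. rho, mu, and sig are
# illustrative placeholders, not the fitted parameters from the paper.

random.seed(1)
rho = 0.58                      # target correlation in log space
mu = (52.0, math.log10(200.0))  # log10 Eiso [erg], log10 Ep,z [keV]
sig = (0.8, 0.4)                # log10 widths

def draw_pair():
    """One correlated Gaussian pair via the standard conditional construction."""
    z1 = random.gauss(0.0, 1.0)
    z2 = rho * z1 + math.sqrt(1.0 - rho ** 2) * random.gauss(0.0, 1.0)
    return mu[0] + sig[0] * z1, mu[1] + sig[1] * z2

logE, logEp = zip(*(draw_pair() for _ in range(20000)))

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    return cov / math.sqrt(vx * vy)

print(f"sample correlation in log space: {pearson(logE, logEp):.2f}")
```

The actual analysis fits a four-dimensional version of this distribution (Liso, Eiso, Ep,z, T90,z) to the detected sample while modeling the detection threshold, but the sampling construction is the same.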
Fast W.,University of Texas at Austin |
Tipton P.A.,University of Missouri
Trends in Biochemical Sciences | Year: 2012
N-Acyl-L-homoserine lactones (AHLs) are a major class of quorum-sensing signals used by Gram-negative bacteria to regulate gene expression in a population-dependent manner, thereby enabling group behavior. Enzymes capable of generating and catabolizing AHL signals are of significant interest for the study of microbial ecology and quorum-sensing pathways, for understanding the systems that bacteria have evolved to interact with small-molecule signals, and for their possible use in therapeutic and industrial applications. The recent structural and functional studies reviewed here provide a detailed insight into the chemistry and enzymology of bacterial communication. © 2011 Elsevier Ltd.
Ballard M.S.,University of Texas at Austin
Journal of the Acoustical Society of America | Year: 2012
An acoustic propagation model is applied to predict measurements of three-dimensional (3-D) effects recorded off the southeast coast of Florida. The measured signal is produced by a low frequency source that is towed north parallel to the shelf from a fixed receiving array. The acoustic data show the direct path arrival at the bearing of the tow ship and a second refracted path arrival as much as 30° inshore of the direct arrival. Notably, the refracted arrival has a received level more than 25 dB greater than that of the direct arrival. A geoacoustic model of the environment is created to explain the data. It is shown that the topography of the seafloor plays the largest role in controlling horizontal refraction effects, whereas the range-dependent sediment properties have the most influence on the received level. The modeling approach is based on a 3-D adiabatic mode technique in which the horizontal refraction equation is solved using a parabolic equation in Cartesian coordinates. A modal decomposition of the field provides insight into the variability in the arrival angle and received level of the measured signal. © 2012 Acoustical Society of America.
Pearce J.A.,University of Texas at Austin
International Journal of Hyperthermia | Year: 2013
The standard method for assessing hyperthermia treatment has been calculation of cumulative equivalent minutes at 43 °C, CEM43, and its variations. This parameter normalises treatment thermal histories rather than predicts treatment results. Arrhenius models have been widely used in analysing higher temperature thermal treatments and successfully employed to predict irreversible thermal alterations in structural proteins. Unfortunately, in many, but not all, cases they fail to represent thermally induced damage or cell death at hyperthermic temperatures, 43-50 °C, exhibiting significant over-prediction of the initial 'shoulder' region. The failure arises from the simplifying assumptions used to derive the irreversible reaction format that has been used in thermal damage studies. Several successful multi-parameter fit methods have been employed to model cell survival data. The two-state statistical thermodynamic model was derived from basic thermodynamic principles. The three-state model results from relaxing the assumptions under the Arrhenius formulation that result in an irreversible reaction. In other cell processes studied in vitro the irreversible Arrhenius model holds, and is sufficient to provide an accurate and useful estimate of thermal damage and cell death. It is essential in numerical model work to include multiple thermal damage processes operating in parallel to obtain a clear image of the likely outcome in tissues. Arrhenius and other C(t) models have that capability, while a single value of CEM43 does not. © 2013 Informa UK Ltd. All rights reserved.
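The two dose metrics compared in this review can both be computed from a piecewise-constant thermal history: CEM43 = Σ t_i R^(43−T_i) with the conventional breakpoint R = 0.5 above 43 °C and 0.25 below, and the irreversible Arrhenius damage integral Ω = Σ A exp(−Ea/(R T_i)) t_i with surviving fraction exp(−Ω). This is a minimal sketch in which A and Ea are illustrative placeholders, not fitted coefficients for any particular cell line:

```python
import math

# Sketch: the two thermal dose metrics discussed above, applied to a
# synthetic thermal history. A and Ea below are illustrative placeholders.

def cem43(history):
    """Cumulative equivalent minutes at 43 C.
    history: list of (temperature_C, duration_min) segments.
    R = 0.5 at or above the 43 C breakpoint, 0.25 below (usual convention)."""
    total = 0.0
    for temp, dt in history:
        r = 0.5 if temp >= 43.0 else 0.25
        total += dt * r ** (43.0 - temp)
    return total

def arrhenius_omega(history, A=1.0e98, Ea=6.2e5):
    """Irreversible Arrhenius damage integral, Omega = sum A*exp(-Ea/(R*T))*dt.
    Temperatures converted to kelvin, durations to seconds; A in 1/s, Ea in J/mol."""
    Rgas = 8.314
    omega = 0.0
    for temp, dt in history:
        omega += A * math.exp(-Ea / (Rgas * (temp + 273.15))) * dt * 60.0
    return omega

history = [(43.0, 30.0), (45.0, 10.0)]  # (deg C, minutes)
print(f"CEM43 = {cem43(history):.1f} min")
print(f"Omega = {arrhenius_omega(history):.2f}, "
      f"surviving fraction = {math.exp(-arrhenius_omega(history)):.2f}")
```

The contrast the review draws is visible in the code: CEM43 collapses the history to a single normalised time, while the Arrhenius form (and its two- and three-state extensions) predicts an outcome, a surviving fraction, and can be evaluated for several damage processes in parallel.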
Xhemalce B.,University of Texas at Austin
Briefings in Functional Genomics | Year: 2013
Cancer results from abnormal gene expression that transforms cellular identity. A rising consensus is that genetic mutations and epigenetic alterations act in concert to achieve tumorigenesis. On one hand, cancer cells harbor classic genetic mutations that activate oncogenes and inhibit tumor suppressors. On the other hand, they also display broad alterations of their epigenomes, as defined by modifications of DNA, histones and coding/noncoding RNAs. In particular, methylation is a ubiquitous modification that affects several residues/sites in these molecules. In this review, I will discuss the central role of this modification in the regulation of gene expression, its alterations in cancer as well as its possible targeting for cancer therapies. © The Author 2013. Published by Oxford University Press.
Lingam M.,University of Texas at Austin
Physics of Plasmas | Year: 2015
A Hamiltonian and Action Principle formulation of spin-1/2 magnetohydrodynamics is presented via a first-principles derivation of the underlying Lagrangian, and the associated Hamiltonian. The derivation invokes the notion of "frozen-in" constraints, symmetry breaking, and similarities with Ginzburg-Landau theory to arrive at the relevant terms in the Hamiltonian. The model thus obtained includes the effects of spin and other quantum corrections and is shown to be in full agreement with existent models in the literature. It is also indicated how two-fluid effects, gyroviscosity, and anisotropic pressure can be included in the model, in addition to incorporating higher-order (nonlinear) quantum spin corrections. An interesting analogy with the theory of liquid crystals is also highlighted. © 2015 AIP Publishing LLC.
Seshadrinathan K.,Intel Corporation |
Bovik A.C.,University of Texas at Austin
IEEE Transactions on Image Processing | Year: 2010
There has recently been a great deal of interest in the development of algorithms that objectively measure the integrity of video signals. Since video signals are being delivered to human end users in an increasingly wide array of applications and products, it is important that automatic methods of video quality assessment (VQA) be available that can assist in controlling the quality of video being delivered to this critical audience. Naturally, the quality of motion representation in videos plays an important role in the perception of video quality, yet existing VQA algorithms make little direct use of motion information, thus limiting their effectiveness. We seek to ameliorate this by developing a general, spatio-spectrally localized multiscale framework for evaluating dynamic video fidelity that integrates both spatial and temporal (and spatio-temporal) aspects of distortion assessment. Video quality is evaluated not only in space and time, but also in space-time, by evaluating motion quality along computed motion trajectories. Using this framework, we develop a full reference VQA algorithm for which we coin the term the MOtion-based Video Integrity Evaluation index, or MOVIE index. It is found that the MOVIE index delivers VQA scores that correlate quite closely with human subjective judgment, using the Video Quality Expert Group (VQEG) FRTV Phase 1 database as a test bed. Indeed, the MOVIE index is found to be quite competitive with, and even outperform, algorithms developed and submitted to the VQEG FRTV Phase 1 study, as well as more recent VQA algorithms tested on this database. © 2010 IEEE.
Yoder L.H.,University of Texas at Austin
The American journal of nursing | Year: 2014
To determine to what extent RNs in an acute care multihospital system used research findings in their practice; what types of knowledge they used in their practice; and what personal, professional, and organizational factors enhanced or hindered their research utilization. A cross-sectional, descriptive, online survey design was used. The survey, which asked about use of research findings in practice and evidence-based practice (EBP) participation, was placed on the hospital system intranet. Of the 2,900 RNs invited to participate, 1,112 nurses completed usable surveys, for a response rate of 38%. This article reports findings for 794 of the staff nurses who responded to the survey. The forms of knowledge that staff nurses reported relying on most were their personal experience with patients, conferences, hospital policies and procedures, physician colleagues, and nursing peers. Although a variety of resources were available for nurses to use in locating research and implementing EBP, respondents reported many of the same barriers that have been reported in other studies over the last two decades: lack of time, lack of resources, and lack of knowledge. Although their attitudes about research utilization and EBP were positive overall, respondents expected unit-based educators and clinical nurse specialists to collect and synthesize the research for them. These findings are similar to those of other recent studies regarding nurses' research utilization and EBP. A great deal of work remains to be done if we are to inform, educate, and assist staff nurses in using research and implementing EBP. It may be unrealistic to expect bedside nurses to add these activities to their duties unless they are compensated for the time and have the support of master's or doctorally prepared nurses to serve as EBP coaches and champions.
Heller A.,University of Texas at Austin |
Feldman B.,Abbott Laboratories
Accounts of Chemical Research | Year: 2010
Diabetes devastates lives and burdens society. Hypoglycemic (low glucose) episodes cause blackouts, and severe ones are life-threatening. Periods of hyperglycemia (high glucose) cause circulatory disease, stroke, amputations, blindness, kidney failure and nerve degeneration. In this Account, we describe the founding of TheraSense, now a major part of Abbott Diabetes Care, and the development of two products that have improved the lives of people with diabetes. The first, a virtually painless microcoulometer (300 nL volume), the FreeStyle blood glucose monitoring system, was approved by the FDA and became available in 2000. In 2009, this system was used in more than one billion blood assays. The second, the enzyme-wiring based, subcutaneously-implanted FreeStyle Navigator continuous glucose monitoring system, was approved by the FDA and became available in the United States in 2008. The strips of the FreeStyle blood glucose monitoring system comprise a printed parallel plate coulometer, with a 50 μm gap between two facing printed electrodes, a carbon electrode and a Ag/AgCl electrode. The volume of blood between the facing plates is accurately controlled. The glucose is electrooxidized through catalysis by a glucose dehydrogenase (GDH) and an Os2+/3+ redox mediator, which is reduced by the glucose-reduced enzyme and is electrooxidized on the carbon electrode. Initially the system used pyrroloquinoline quinone (PQQ)-dependent GDH but now uses flavin adenine dinucleotide (FAD)-dependent GDH. Because the facing electrodes are separated by such a small distance, shuttling of electrons by the redox couple could interfere with the coulometric assay. However, the Os2+/3+ redox mediator is selected to have a substantially negative formal potential, between 0.0 and −0.2 V, versus that of the facing Ag/AgCl electrode. 
This makes the flow of a shuttling current between the two electrodes virtually impossible because the oxidized Os3+ complex cannot be appreciably reduced at the more positively poised Ag/AgCl electrode. The FreeStyle Navigator continuous glucose monitoring system uses a subcutaneously implanted miniature plastic sensor connected to a transmitter to measure glycemia amperometrically and sends the information to a PDA-like device every minute. The sensor consists of a narrow (0.6 mm wide) plastic substrate on which carbon-working, Ag/AgCl reference, and carbon counter electrodes are printed in a stacked geometry. The active wired enzyme sensing layer covers only about 0.1 mm² of the working electrode and is overlaid by a flux-limiting membrane. It resides at about 5 mm depth in the subcutaneous adipose tissue and monitors glucose concentrations over the range 20–500 mg/dL. Its core component, a miniature, disposable, amperometric glucose sensor, has an electrooxidation catalyst made from a crosslinked adduct of glucose oxidase (GOx) and a GOx wiring redox hydrogel containing a polymer-bound Os2+/3+ complex. Because of the selectivity of the catalyst for glucose, very little current flows in the absence of glucose. That feature, either alone or in combination with other features of the sensor, facilitates the one-point calibration of the system. The sensor is implanted subcutaneously and replaced by the patient after 5 days of use with minimal pain. The wearer does not feel its presence under the skin. © 2010 American Chemical Society.
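Because the strip described in this Account is coulometric, the glucose reading follows from Faraday's law: the total charge Q passed in exhausting the sample equals nF times the moles of glucose, with n = 2 electrons transferred per glucose in the GDH reaction. This is a minimal sketch of that conversion, using the 300 nL chamber volume from the Account and an illustrative (not measured) charge value:

```python
# Sketch: converting measured electrooxidation charge to a glucose
# concentration for a coulometric strip, via Faraday's law Q = n*F*N_glucose.
# The charge value below is illustrative, not device data.

F = 96485.0        # Faraday constant, C/mol
N_ELECTRONS = 2    # electrons transferred per glucose in the GDH reaction
VOLUME_L = 300e-9  # 300 nL sample chamber

def glucose_mg_per_dl(charge_coulombs):
    """Glucose concentration implied by the total charge passed."""
    moles = charge_coulombs / (N_ELECTRONS * F)
    molar = moles / VOLUME_L                # mol/L in the chamber
    return molar * 180.16 * 1000.0 / 10.0   # mol/L -> mg/dL (M = 180.16 g/mol)

q = 3.2e-4  # 320 microcoulombs total charge (illustrative)
print(f"{glucose_mg_per_dl(q):.0f} mg/dL")
```

Because the assay counts all of the glucose in an accurately controlled volume, the result depends on Q and V alone, not on electrode kinetics, which is why the parallel-plate geometry with a controlled gap matters.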
Verslues P.E.,Academia Sinica, Taiwan |
Juenger T.E.,University of Texas at Austin
Current Opinion in Plant Biology | Year: 2011
Drought elicits substantial changes in plant metabolism and it remains a challenge to determine which of these changes represent adaptive responses and which of them are merely neutral effects or even symptoms of damage. Arabidopsis primarily uses low water potential/dehydration avoidance strategies to respond to water limitation. The large variation in evolved stress responses among accessions can be a powerful tool to identify ecologically important and adaptive traits; however, collection of relevant phenotype data under controlled water stress is often a limiting factor. Quantitative genetics of Arabidopsis has great potential to find the genes underlying variation in drought-affected metabolic traits, for example proline metabolism, as well as overall adaptation. © 2011 Elsevier Ltd.
Chen Z.J.,University of Texas at Austin |
Chen Z.J.,Nanjing Agricultural University
Nature Reviews Genetics | Year: 2013
Heterosis, also known as hybrid vigour, is widespread in plants and animals, but the molecular bases for this phenomenon remain elusive. Recent studies in hybrids and allopolyploids using transcriptomic, proteomic, metabolomic, epigenomic and systems biology approaches have provided new insights. Emerging genomic and epigenetic perspectives suggest that heterosis arises from allelic interactions between parental genomes, leading to altered programming of genes that promote the growth, stress tolerance and fitness of hybrids. For example, epigenetic modifications of key regulatory genes in hybrids and allopolyploids can alter complex regulatory networks of physiology and metabolism, thus modulating biomass and leading to heterosis. The conceptual advances could help to improve plant and animal productivity through the manipulation of heterosis. © 2013 Macmillan Publishers Limited. All rights reserved.
Kim J.W.,University of Texas at Austin
Computers in Human Behavior | Year: 2014
An online survey was used to examine college students' uses and gratifications of social recommendation systems such as "like" on Facebook. By presenting numerical values, these systems display the volume (i.e., how many users) and valence (i.e., like/dislike) of collective opinions within a specific e-community that users are involved with. Consequently, these systems offer users the chance to scan the climate of opinion within a specific online community. More importantly, these systems have introduced a new mode of speaking out online. Because of social recommendation systems' significance in the public opinion domain, the study attempted to understand why and how people use them to express opinions. Results showed that social recommendations were used for expression, information, socialization, and entertainment purposes. Above all, users primarily viewed them as potential expressive tools and actually used them to express their opinions. While Internet use had no significant relationship with either motives or system uses, social media use was closely related to both motives and actual uses. The findings suggest that social recommendation systems play an important role in the public opinion domain by offering an additional way to scan collective opinions and express one's own. © 2014 Elsevier Ltd. All rights reserved.
Gilbert O.M.,University of Texas at Austin
American Naturalist | Year: 2015
Multicellular tissue compatibility, or histocompatibility, restricts fusion to close kin. Histocompatibility depends on hypervariable cue genes, which often have more than 100 alleles in a population. To explain the evolution of histocompatibility, I here take a historical approach. I focus on the specific example of marine invertebrate histocompatibility. I use simple game-theoretical models to show that histocompatibility can evolve through five steps. These steps include the evolution of indiscriminate fusion, the evolution of discriminatory within-organism conflict, the evolution of minor histocompatibility, the evolution of major histocompatibility, and the evolution of major histocompatibility cue polymorphism. Allowing for gradual evolution reveals discriminatory within-organism conflict as a selective pressure for histocompatibility and associated cue polymorphism. Existing data from marine invertebrates and other organisms are consistent with this hypothesis. © 2015 by The University of Chicago.
Daigle H.,University of Texas at Austin
Marine Geology | Year: 2014
I characterized microporosity by performing low-pressure nitrogen adsorption measurements on 13 shallow marine mudstone samples from the Nankai Trough offshore Japan. The samples were from two reference Sites on the incoming Philippine Sea Plate and one Site above the accretionary prism. I determined pore size distributions using the Barrett-Joyner-Halenda (BJH) model and merged these with existing mercury injection capillary pressure (MICP) measurements to construct a full distribution covering micro- to macropores. I found that overall pore sizes decrease with consolidation and that microporosity content (pores < 2 nm in diameter) is influenced mainly by mineralogy, with some influence of diagenetic processes. A small amount of microporosity (~0.25% of bulk sediment volume) is present in these sediments at the time of burial, presumably contained mainly in clays. Additional microporosity may develop as a result of alteration of volcanic ash at the reference Sites and may be related to diagenetic processes that create zones of anomalously high porosity. Comparisons with porewater chemistry (K+, Ca2+, Sr, Si) show inconsistent relationships with microporosity development and can neither confirm nor rule out the role of ash alteration in this process. The strongest correlation observed at the three Sites was between microporosity volume and clay mineral fraction. This suggests that microporosity content is determined mainly by detrital clay abundance and development of clay as an ash alteration product, with some contribution from amorphous silica cement precipitated in the zones of anomalously high porosity. © 2014 Elsevier B.V.
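The splicing of a BJH (small-pore) distribution onto an MICP (large-pore) distribution can be sketched as below. This is a schematic of the general idea only; the crossover diameter, function names, and all numbers are illustrative assumptions, not the paper's actual merging procedure:

```python
# Schematic splice of two pore-size distributions: BJH (from nitrogen
# adsorption, resolves small pores) and MICP (mercury intrusion,
# resolves large pores), joined at an assumed crossover diameter.

def merge_distributions(bjh, micp, crossover_nm=50.0):
    """Each input: list of (pore_diameter_nm, incremental_volume) pairs.
    Keep BJH entries below the crossover, MICP entries above it."""
    merged = [(d, v) for d, v in bjh if d < crossover_nm]
    merged += [(d, v) for d, v in micp if d >= crossover_nm]
    return sorted(merged)

def micropore_fraction(dist, cutoff_nm=2.0):
    """Fraction of total pore volume held in micropores (< cutoff)."""
    total = sum(v for _, v in dist)
    micro = sum(v for d, v in dist if d < cutoff_nm)
    return micro / total
```

A real workflow would also reconcile the overlap region where both techniques report volume; the sketch simply truncates each dataset at the crossover.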
Dalziel I.W.D.,University of Texas at Austin
Geology | Year: 2014
The geologically abrupt appearance in the fossil record of almost all animal phyla is referred to as the Cambrian radiation or "explosion" of life on Earth. Also known as "Darwin's dilemma," because it seemingly posed a major problem for his theory of gradual evolution, it coincided with the initiation of the first of the two principal global marine transgressions of the Phanerozoic. Although now seen as more protracted, it is still one of the most striking and critical events in the history of the biosphere. Almost all paleogeographic reconstructions for the early Cambrian feature a previously isolated Laurentia, the core of ancestral North America. Yet geological evidence from five continents, integrated here for the first time, indicates that the present-day "southern cone" of Laurentia was still attached to the newly amalgamated supercontinent of Gondwanaland into Cambrian times. Laurentia was then isolated by the development of a major deep oceanic connection between the opening Iapetus Ocean basin and the already well-developed paleo-Pacific. As the marine transgression advanced, major changes in ocean chemistry occurred, upwelling generated phosphorite deposits, and the number of fossilized metazoan phyla "exploded" with morphologic disparity between Laurentia and Gondwanaland already established. The development of this deep oceanic gateway, and of an ocean floor-consuming and arc-generating subduction zone along virtually the entire margin of Gondwanaland shortly thereafter, need to be taken into account in consideration of the global environmental and biotic changes associated with the Neoproterozoic-Phanerozoic transition. © 2014 Geological Society of America.
Willets K.A.,University of Texas at Austin
ChemPhysChem | Year: 2013
Surface-enhanced Raman scattering (SERS) is quickly growing as an analytical technique because it offers both molecular specificity and excellent sensitivity. For select substrates, SERS can even be observed from single molecules, the ultimate limit of detection. This review describes recent developments in the field of single-molecule SERS (SM-SERS), with a focus on new tools for characterizing SM-SERS-active substrates and how they interact with single molecules on their surface. In particular, techniques that combine optical spectroscopy and microscopy with electron microscopy are described, including correlated optical and transmission electron microscopy, correlated super-resolution imaging and scanning electron microscopy, and correlated optical microscopy and electron energy loss spectroscopy. Correlated optical and electron microscopy allow multiple length scales to be resolved for studying hot spots in SM-SERS, using both electromagnetic and electronic excitation of plasmon modes in metallic nanostructures. How both the substrate and the molecule define the hot spot is also considered. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Rossky P.J.,University of Texas at Austin
Faraday Discussions | Year: 2010
In this lecture, aspects of the hydration of hydrophobic interfaces that are emergent nanoscale properties of the interface chemical structure are discussed. General results inferred from systematic computational studies are emphasized, with a central theme focusing on the separate roles of surface topography and surface chemistry. The roles of surface curvature, polarity, and chemical heterogeneity, as well as the important role of solvent thermodynamic state are considered. The potential importance of understanding evolved natural biological interfaces on the same basis as model synthetic surfaces is pointed out, and progress in this direction is discussed. © 2010 The Royal Society of Chemistry.
Wick T.,University of Texas at Austin
Computational Mechanics | Year: 2013
We present a specific application of the fluid-solid interface-tracking/interface-capturing technique (FSITICT) for solving fluid-structure interaction. Specifically, in the FSITICT, we choose as interface-tracking technique the arbitrary Lagrangian-Eulerian method and as interface-capturing technique the fully Eulerian approach, leading to the Eulerian-arbitrary Lagrangian-Eulerian (EALE) technique. Using this approach, the domain is partitioned into two sub-domains in which the different methods are used for the numerical solution. The discretization is based on a monolithic solver in which finite differences are used for temporal integration and a Galerkin finite element method for spatial discretization. The nonlinear problem is treated with Newton's method. The method combines advantages of both sub-frameworks, which is demonstrated with the help of some benchmarks. © Springer-Verlag Berlin Heidelberg 2013.
Preston A.R.,University of Texas at Austin |
Eichenbaum H.,Boston University
Current Biology | Year: 2013
Recent studies on the hippocampus and the prefrontal cortex have considerably advanced our understanding of the distinct roles of these brain areas in the encoding and retrieval of memories, and of how they interact in the prolonged process by which new memories are consolidated into our permanent storehouse of knowledge. These studies have led to a new model of how the hippocampus forms and replays memories and how the prefrontal cortex engages representations of the meaningful contexts in which related memories occur, as well as how these areas interact during memory retrieval. Furthermore, they have provided new insights into how interactions between the hippocampus and prefrontal cortex support the assimilation of new memories into pre-existing networks of knowledge, called schemas, and how schemas are modified in this process as the foundation of memory consolidation. © 2013 Elsevier Ltd.
Xiao D.,Oak Ridge National Laboratory |
Chang M.-C.,National Taiwan Normal University |
Niu Q.,University of Texas at Austin
Reviews of Modern Physics | Year: 2010
Ever since its discovery, the notion of the Berry phase has permeated all branches of physics. Over the past three decades it was gradually realized that the Berry phase of the electronic wave function can have a profound effect on material properties and is responsible for a spectrum of phenomena, such as polarization, orbital magnetism, various (quantum, anomalous, or spin) Hall effects, and quantum charge pumping. This progress is summarized in a pedagogical manner in this review. A brief summary of the necessary background is given, followed by a detailed discussion of the Berry phase effect in a variety of solid-state applications. A common thread of the review is the semiclassical formulation of electron dynamics, which is a versatile tool in the study of electron dynamics in the presence of electromagnetic fields and more general perturbations. Finally, a requantization method is demonstrated that converts a semiclassical theory to an effective quantum theory. It is clear that the Berry phase should be added as an essential ingredient to our understanding of basic material properties. © 2010 The American Physical Society.
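For reference, the central objects the review builds on take the standard textbook form (our notation, for a band n with cell-periodic Bloch functions u_n(k)): the Berry phase around a closed loop C in k-space and the associated Berry curvature are

```latex
\gamma_n = i \oint_{\mathcal{C}} \left\langle u_n(\mathbf{k}) \,\middle|\, \nabla_{\mathbf{k}} u_n(\mathbf{k}) \right\rangle \cdot d\mathbf{k},
\qquad
\boldsymbol{\Omega}_n(\mathbf{k}) = i\, \nabla_{\mathbf{k}} \times \left\langle u_n(\mathbf{k}) \,\middle|\, \nabla_{\mathbf{k}} u_n(\mathbf{k}) \right\rangle,
```

and the curvature enters the semiclassical equations of motion as an anomalous velocity term:

```latex
\dot{\mathbf{r}} = \frac{1}{\hbar} \frac{\partial \varepsilon_n(\mathbf{k})}{\partial \mathbf{k}} - \dot{\mathbf{k}} \times \boldsymbol{\Omega}_n(\mathbf{k}),
\qquad
\hbar \dot{\mathbf{k}} = -e\,\mathbf{E} - e\, \dot{\mathbf{r}} \times \mathbf{B}.
```

The anomalous velocity term is what links the curvature to the Hall-type responses mentioned in the abstract.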
Keatinge-Clay A.T.,University of Texas at Austin
Natural Product Reports | Year: 2012
Covering: up to 2012. With the recent structural characterization of each of the component enzymes of type I polyketide synthases, scientists are coming tantalizingly close to elucidating the overall architectures and mechanisms of these enormous molecular factories. This review highlights not only what has been revealed about the structures and activities of each of the domains but also the mysteries that remain to be solved. This journal is © The Royal Society of Chemistry 2012.
Narevicius E.,Weizmann Institute of Science |
Raizen M.G.,University of Texas at Austin
Chemical Reviews | Year: 2012
Researchers conducted studies to explore the potential of cold chemistry with magnetically decelerated supersonic beams. They demonstrated that magnetic deceleration offered many possibilities in reactive scattering at low (∼1 K) temperatures. Atoms such as hydrogen, nitrogen, oxygen, and fluorine were decelerated magnetically, allowing investigations of canonical atom-diatomic molecule reactions. These processes were also tracked using quantum mechanical calculations, which were necessary to correctly describe dynamics at low temperatures. Reaction complexity was gradually increased by moving to a larger number of involved atoms. An effective moving magnetic trap potential was created using a series of spatially overlapping traps, activated in a temporally overlapping pulse sequence, to conduct the investigations.
Pudrovska T.,University of Texas at Austin
Journal of Aging and Health | Year: 2015
Objective: Internal health locus of control (HLOC) reflects individuals' beliefs that their own behavior influences their health. This study explores the gender difference in internal HLOC among middle-aged and older adults. Method: Using data from two waves of the National Survey of Midlife Development in the United States (MIDUS; N = 1,748), I estimate two-level random-intercept models predicting internal HLOC. Results: Women report higher levels of health control beliefs than men, especially in older cohorts born in the 1920s and 1930s. Adjustment for health, socioeconomic status, generalized control, and masculinity increases this gender gap, whereas adjustment for femininity and religiosity significantly reduces this difference. Women's higher religiosity and more feminine traits, such as warmth, nurturance, and care, partly explain their higher internal HLOC relative to men. Discussion: Because femininity and religiosity are positively associated with other-orientation, interventions to increase communal orientation may enhance beliefs in proactive responsibility for one's health among older adults. © The Author(s) 2014.
Moran N.A.,University of Texas at Austin
Current Opinion in Insect Science | Year: 2015
The guts of honey bee workers contain a distinctive community of bacterial species. These bacteria are microaerophilic or anaerobic and were not clearly delineated by earlier studies relying on laboratory culture of isolates under atmospheric oxygen levels. Recently, a more complete picture of the potential metabolism and functions of these bacteria has become possible, using genomic approaches based on metagenomic samples as well as cultured isolates. Most of these species are host-restricted and are generally absent outside adult guts. They include both Gram-negative groups, such as Gilliamella apicola and Snodgrassella alvi, and Gram-positive groups, such as certain Lactobacillus and Bifidobacterium species. These gut bacterial species appear to have undergone long-term coevolution with honey bee and, in some cases, bumble bee hosts. Prediction of gene functions from genome sequences suggests roles in nutrition, digestion, and potentially in defense against pathogens. In particular, genes for sugar utilization and carbohydrate breakdown are enriched in G. apicola and the Lactobacillus species.
Kanapathipillai M.,Wyss Institute for Biologically Inspired Engineering |
Brock A.,University of Texas at Austin |
Ingber D.E.,Wyss Institute for Biologically Inspired Engineering |
Ingber D.E.,Harvard University
Advanced Drug Delivery Reviews | Year: 2014
Nanoparticle-based therapeutics are poised to become a leading delivery strategy for cancer treatment because they potentially offer higher selectivity, reduced toxicity, longer clearance times, and increased efficacy compared to conventional systemic therapeutic approaches. This article reviews existing nanoparticle technologies and methods that are used to target drugs to treat cancer by altering signal transduction or modulating the tumor microenvironment. We also consider the implications of recent advances in the nanotherapeutics field for the future of cancer therapy. © 2014 Elsevier B.V.
Burge J.,University of Texas at Austin
Journal of Vision | Year: 2014
A great challenge of systems neuroscience is to understand the computations that underlie perceptual constancies, the ability to represent behaviorally relevant stimulus properties as constant even when irrelevant stimulus properties vary. As signals proceed through the visual system, neural states become more selective for properties of the environment, and more invariant to irrelevant features of the retinal images. Here, we describe a method for determining the computations that perform these transformations optimally, and apply it to the specific computational task of estimating a powerful depth cue: binocular disparity. We simultaneously determine the optimal receptive field population for encoding natural stereo images of locally planar surfaces and the optimal nonlinear units for decoding the population responses into estimates of disparity. The optimal processing predicts well-established properties of neurons in cortex. Estimation performance parallels important aspects of human performance. Thus, by analyzing the photoreceptor responses to natural images, we provide a normative account of the neurophysiology and psychophysics of absolute disparity processing. Critically, the optimal processing rules are not arbitrarily chosen to match the properties of neurophysiological processing, nor are they fit to match behavioral performance. Rather, they are dictated by the task-relevant statistical properties of complex natural stimuli. Our approach reveals how selective, invariant tuning, especially for properties not trivially available in the retinal images, could be implemented in neural systems to maximize performance in particular tasks.
Ruoff R.S.,University of Texas at Austin
MRS Bulletin | Year: 2012
In this article, I describe my early interest in graphene and contributions that I and my co-authors, in particular, have made to the field, along with a brief history of the experimental discovery of graphene. I then turn to new carbon materials whose experimental syntheses might be on the horizon. One example involves using graphene as a template to generate large-area ultrathin sp3-bonded carbon sheets that could also be substitutionally doped with, for example, nitrogen atoms, as one approach to making materials of interest for quantum computing. Such large-area sp3-bonded carbon sheets hold tremendous promise for use in thermal management; as a new material for electronics and photonics; and as ultrahigh-strength components in various structures including those used in aerospace, among other applications. Another example is the class of negative-curvature carbons (NCCs) that have atom-thick walls and carbon atoms trivalently bonded to other carbon atoms. Such NCCs have a nanoscale pore structure, atom-thick walls, and exceptionally high specific surface areas, and they fill three-dimensional space in ways that suggest their use as electrode materials for ultracapacitors and batteries, as adsorbents, as support material for catalysts, and for other applications. © Copyright 2012 Materials Research Society.
Willets K.A.,University of Texas at Austin
Progress in Surface Science | Year: 2012
Noble metal nanoparticles have attracted significant research interest due to their ability to support localized surface plasmons. Plasmons not only give the nanoparticles a characteristic color, but they also enhance electromagnetic fields at the nanoparticle surface, often by many orders of magnitude. The enhanced electromagnetic fields are the basis for a host of surface-enhanced spectroscopies, such as surface-enhanced Raman scattering (SERS), but characterizing how the enhanced electromagnetic fields are distributed on the surface of the nanoparticles is an experimental challenge due to the small size of the nanoparticles (∼20-200 nm) relative to the diffraction limit of light. This Progress Highlight will discuss methods for characterizing local electromagnetic field enhancements with < 5 nm resolution, including electron energy loss spectroscopy, cathodoluminescence, and super-resolution optical imaging. © 2012 Elsevier Ltd. All rights reserved.
Pavel M.D.,University of Texas at Austin
Astronomical Journal | Year: 2014
A new method for measuring the large-scale structure of the Galactic magnetic field is presented. The Galactic magnetic field has been probed through the Galactic disk with near-infrared starlight polarimetry; however, the distance to each background star is unknown. Using red clump stars as near-infrared standard candles, this work presents the first attempt to decompose the line-of-sight structure of the sky-projected Galactic magnetic field. Two example lines of sight are decomposed: toward a field with many red clump stars and toward a field with few red clump stars. A continuous estimate of magnetic field orientation over several kiloparsecs of distance is possible in the field with many red clump stars, while only discrete estimates are possible in the sparse example. Toward the outer Galaxy, there is a continuous field orientation with distance that shows evidence of perturbation by the Galactic warp. Toward the inner Galaxy, evidence for a large-scale change in the magnetic field geometry is consistent with models of magnetic field reversals, independently derived from Faraday rotation studies. A photo-polarimetric method for identifying candidate intrinsically polarized stars is also presented. The future application of this method to large regions of the sky will begin the process of mapping the Galactic magnetic field in a way never before possible. © 2014. The American Astronomical Society. All rights reserved.
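The standard-candle logic behind using red clump stars is the distance modulus, m − M = 5 log10(d / 10 pc). A minimal sketch, with a simple extinction correction term; the function name and values are our own illustrations, not the paper's pipeline:

```python
# Distance to a standard candle from its apparent magnitude m, known
# absolute magnitude M, and (optionally) line-of-sight extinction A,
# via the distance modulus m - A - M = 5 log10(d / 10 pc).

def distance_pc(apparent_mag, absolute_mag, extinction=0.0):
    """Distance in parsecs from the extinction-corrected distance modulus."""
    mu = apparent_mag - extinction - absolute_mag
    return 10 ** (mu / 5.0 + 1.0)
```

With distances to many red clump stars along one sight line, each star's polarization angle can then be placed at its estimated distance, which is the decomposition idea the abstract describes.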
Ghosh J.,University of Texas at Austin
Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery | Year: 2011
Cluster ensembles combine multiple clusterings of a set of objects into a single consolidated clustering, often referred to as the consensus solution. Consensus clustering can be used to generate more robust and stable clustering results compared to a single clustering approach, perform distributed computing under privacy or sharing constraints, or reuse existing knowledge. This paper describes a variety of algorithms that have been proposed to address the cluster ensemble problem, organizing them in conceptual categories that bring out the common threads and lessons learnt while simultaneously highlighting unique features of individual approaches. © 2011 John Wiley & Sons, Inc.
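The co-association (evidence accumulation) idea behind many cluster ensemble methods can be sketched in a few lines. This is a generic illustration of one conceptual category, not any specific algorithm from the review; all names are our own:

```python
# Consensus clustering via a co-association matrix: two objects belong
# together in the consensus if enough base clusterings co-cluster them.
# Note the approach never compares cluster labels across clusterings,
# so relabeled (permuted) partitions are handled automatically.

def co_association(labelings, n):
    """n x n matrix: fraction of base clusterings co-clustering i and j."""
    m = [[0.0] * n for _ in range(n)]
    for labels in labelings:
        for i in range(n):
            for j in range(n):
                if labels[i] == labels[j]:
                    m[i][j] += 1.0 / len(labelings)
    return m

def consensus(labelings, n, threshold=0.5):
    """Single-link grouping: union objects whose co-association exceeds
    the threshold, then return the connected components as labels."""
    m = co_association(labelings, n)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    for i in range(n):
        for j in range(i + 1, n):
            if m[i][j] > threshold:
                parent[find(i)] = find(j)
    roots = {}
    return [roots.setdefault(find(i), len(roots)) for i in range(n)]
```

For example, three base clusterings of four objects, one with flipped labels, still yield the same consensus partition, since only co-membership matters.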
Crews D.,University of Texas at Austin
Hormones and Behavior | Year: 2011
Evolutionary change is a product of selection. Selection operates on the phenotype, and its consequences are manifest in representation of the genotype in successive generations. Of particular interest to both evolutionary and behavioral biologists is the newly emerging field of epigenetics and behavior. Two broad categories of epigenetic modifications must be distinguished. Context-dependent epigenetic change can be observed if the environmental factors that bring about the epigenetic modification persist (e.g., the frequency and quality of maternal care modifying the brain and future behavior of the offspring each generation). Because the environment induces epiallelic change, removing the causative factor can reverse a context-dependent epigenetic state. Germline-dependent epigenetic change occurs when the epigenetic imprint is mediated through the germline. Such effects are independent of the causative agent, and there is no evidence at present that a germline-dependent epigenetic state can be reversed. Finally, only germline-dependent epigenetic modifications can be truly transgenerational. Although an individual's life history is progressive and continuous, it might usefully be viewed as the cumulation of divisions: each period emerging from what has gone before and, at the same time, setting the stage for what follows. These life history stages are somewhat arbitrary, with many traits spanning conventional divisions, but each period tends to have its own characteristic ethologies and particular contribution to neural and behavioral phenotypes. To understand how these episodes 'fit' together, it is necessary to deconstruct early life events and study each period both in its own right and in how it interacts with the preceding and subsequent stages.
Lastly, it seems intuitive that germline- and context-dependent epigenetic modifications interact, resulting in the individual variation observed in behaviors, but until now this hypothesis has never been tested experimentally. © 2010 Elsevier Inc.
Bard A.J.,University of Texas at Austin
Journal of the American Chemical Society | Year: 2010
This paper offers a perspective on inner-sphere heterogeneous electron-transfer reactions and electrocatalysis as it is applied to electrochemical energy conversion systems. Fundamental concepts and an overview of past approaches to studies of these types of reactions are discussed. A method for the discovery of new electrocatalysts (for example, ones for the oxygen reduction reaction) and photocatalysts (for solar energy conversion to fuels) based on scanning electrochemical microscopy is briefly described, as well as new surface interrogation techniques for quantifying intermediates. © 2010 American Chemical Society.
Fuiman L.A.,University of Texas at Austin
Biology Letters | Year: 2013
Fatty acid composition of eggs affects development, growth and ecological performance of fish embryos and larvae, with potential consequences for recruitment success. Essential fatty acids in eggs derive from the maternal diet, and the time between ingestion and deposition in eggs is ecologically important but unknown. We examined the dynamics of diet-egg transfer of arachidonic acid (ARA) in the batch-spawning fish, red drum (Sciaenops ocellatus), by measuring ARA concentrations in eggs after a single diet shift and during a period of irregular variations in diet. ARA concentrations in eggs changed within 2-16 days of a diet shift. The rate of change was proportional to the magnitude of the shift, with no evidence of equilibration. These results are not consistent with eggs being assembled entirely from accumulated body stores. The immediate source of ARA in eggs appears to be the recent diet. We propose that batch spawning produces rapid diet-egg transfer of ARA because it removes large amounts of fatty acids from the body and prevents equilibration. The immediacy of the diet-egg connection suggests that spawning migration combined with short-interval batch spawning may have evolved to take advantage of nutrients critical for offspring survival that are available at the spawning site.
Leger N.,University of Texas at Austin
Archive for Rational Mechanics and Analysis | Year: 2011
We consider scalar nonviscous conservation laws with strictly convex flux in one spatial dimension, and we investigate the behavior of bounded L2 perturbations of shock wave solutions to the Riemann problem using the relative entropy method. We show that up to a time-dependent translation of the shock, the L2 norm of a perturbed solution relative to the shock wave is bounded above by the L2 norm of the initial perturbation. © 2010 Springer-Verlag.
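In symbols, the contraction property summarized above can be written as follows (our notation; S denotes the shock wave solution, u0 the perturbed initial data, and X(t) the time-dependent shift of the shock):

```latex
\left\| u(t, \cdot + X(t)) - S \right\|_{L^2(\mathbb{R})}
\;\le\;
\left\| u_0 - S \right\|_{L^2(\mathbb{R})}
\qquad \text{for all } t \ge 0 .
```

The shift X(t) is essential: without translating the shock, even a small initial perturbation can move the discontinuity and make the raw L2 distance large.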
Caffarelli L.,University of Texas at Austin |
Silvestre L.,University of Chicago
Archive for Rational Mechanics and Analysis | Year: 2011
We obtain C1,α regularity estimates for nonlocal elliptic equations that are not necessarily translation-invariant using compactness and perturbative methods and our previous regularity results for the translation-invariant case. © 2010 Springer-Verlag.
Hall A.K.,University of Washington |
Cole-Lewis H.,Columbia University |
Cole-Lewis H.,ICF International |
Bernhardt J.M.,University of Texas at Austin
Annual Review of Public Health | Year: 2015
The aim of this systematic review of reviews is to identify mobile text-messaging interventions designed for health improvement and behavior change and to derive recommendations for practice. We have compiled and reviewed existing systematic research reviews and meta-analyses to organize and summarize the text-messaging intervention evidence base, identify best-practice recommendations based on findings from multiple reviews, and explore implications for future research. Our review found that the majority of published text-messaging interventions were effective when addressing diabetes self-management, weight loss, physical activity, smoking cessation, and medication adherence for antiretroviral therapy. However, we found limited evidence across the population of studies and reviews to inform recommended intervention characteristics. Although strong evidence supports the value of integrating text-messaging interventions into public health practice, additional research is needed to establish longer-term intervention effects, identify recommended intervention characteristics, and explore issues of cost-effectiveness. Copyright © 2015 by Annual Reviews. All rights reserved.
Iyer V.R.,University of Texas at Austin
Trends in Cell Biology | Year: 2012
Nucleosomes are an essential component of eukaryotic chromosomes. The impact of nucleosomes is seen not just on processes that directly access the genome, such as transcription, but also on an evolutionary timescale. Recent studies in various organisms have provided high-resolution maps of nucleosomes throughout the genome. Computational analysis, in conjunction with many other kinds of data, has shed light on several aspects of nucleosome biology. Nucleosomes are positioned by several means, including intrinsic sequence biases, stacking against a fixed barrier, DNA-binding proteins, and chromatin remodelers. These studies underscore the important organizational role of nucleosomes in all eukaryotic genomes. This paper reviews recent genomic studies that have shed light on the determinants of nucleosome positioning and their impact on the genome. © 2012 Elsevier Ltd.
Shi L.,University of Texas at Austin
Nanoscale and Microscale Thermophysical Engineering | Year: 2012
Significant progress has been made in recent studies of thermal and thermoelectric transport phenomena in nanostructures and low-dimensional systems. This article reviews several intriguing quantum and classical size effects on thermal and thermoelectric properties that have been predicted by theoretical calculations or observed in experiments. Attention is focused on the Casimir limit in phonon boundary scattering and the effect of phonon confinement on the lattice thermal conductivity of semiconductor nanowires (NWs) and nanomeshes; the effects of thickness, lateral size, and interface interaction on the lattice thermal conductivity of carbon nanotubes (CNTs) and graphene; and the phonon-drag thermopower and quantum size effects on the thermoelectric power factor in semiconductor NWs. Further experimental and theoretical investigations are suggested for better understanding of some of these nanoscale transport phenomena. © Taylor & Francis Group, LLC.
Barrick J.E.,University of Texas at Austin |
Barrick J.E.,Michigan State University |
Lenski R.E.,Michigan State University
Nature Reviews Genetics | Year: 2013
Evolutionary changes in organismal traits may occur either gradually or suddenly. However, until recently, there has been little direct information about how phenotypic changes are related to the rate and the nature of the underlying genotypic changes. Technological advances that facilitate whole-genome and whole-population sequencing, coupled with experiments that 'watch' evolution in action, have brought new precision to and insights into studies of mutation rates and genome evolution. In this Review, we discuss the evolutionary forces and ecological processes that govern genome dynamics in various laboratory systems in the context of relevant population genetic theory, and we relate these findings to evolution in natural populations.
Fitzpatrick R.,University of Texas at Austin
Nuclear Fusion | Year: 2011
A semi-analytic sharp-boundary model of a nonaxisymmetric vertical displacement event (VDE) in a large aspect-ratio, high-beta (i.e. β ∼ ε), vertically elongated tokamak plasma is developed. The model is used to simulate nonaxisymmetric VDEs with a wide range of different plasma equilibrium and vacuum vessel parameters. These simulations yield poloidal halo current fractions and toroidal peaking factors whose magnitudes are similar to those seen in experiments, and also reproduce the characteristic inverse scaling between the halo current fraction and the toroidal peaking factor. Moreover, the peak poloidal halo current density in the vacuum vessel is found to correlate strongly with the reciprocal of the minimum edge safety factor attained during the VDE. In addition, under certain circumstances, the ratio of the net sideways force acting on the vacuum vessel to the net vertical force is observed to approach unity. Finally, the peak vertical force per unit area acting on the vessel is found to have a strong correlation with the equilibrium toroidal plasma current at the start of the VDE, but is also found to increase with increasing vacuum vessel resistivity relative to the scrape-off layer plasma. © 2011 IAEA, Vienna.
Meyer A.G.,University of Texas at Austin
Philosophical transactions of the Royal Society of London. Series B, Biological sciences | Year: 2013
We investigate the causes of site-specific evolutionary-rate variation in influenza haemagglutinin (HA) between human and avian influenza, for subtypes H1, H3, and H5. By calculating the evolutionary-rate ratio, ω = dN/dS, as a function of a residue's solvent accessibility in the three-dimensional protein structure, we show that solvent accessibility has a significant but relatively modest effect on site-specific rate variation. By comparing rates within HA subtypes among host species, we derive an upper limit to the amount of variation that can be explained by structural constraints of any kind. Protein structure explains only 20-40% of the variation in ω. Finally, by comparing ω at sites near the sialic-acid-binding region to ω at other sites, we show that ω near the sialic-acid-binding region is significantly elevated in both human and avian influenza, with the exception of avian H5. We conclude that protein structure, HA subtype, and host biology all impose distinct selection pressures on sites in influenza HA.
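The ratio ω = dN/dS described above can be illustrated with a toy calculation; the substitution and site counts below are invented for illustration and are not data from the study.

```python
# Toy sketch of the evolutionary-rate ratio omega = dN/dS.
# All counts are illustrative, not values from the HA analysis.

def omega(nonsyn_subs, syn_subs, nonsyn_sites, syn_sites):
    """Return dN/dS: nonsynonymous vs. synonymous substitution rates."""
    dN = nonsyn_subs / nonsyn_sites  # substitutions per nonsynonymous site
    dS = syn_subs / syn_sites        # substitutions per synonymous site
    return dN / dS

# omega < 1 suggests purifying selection; omega > 1, positive selection.
print(omega(6, 10, 300, 100))  # ≈ 0.2, i.e. strong purifying selection
```

In practice ω is estimated per site with maximum-likelihood codon models rather than raw counts; this sketch only shows the quantity being compared across sites.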
Sun A.,University of Texas at Austin
Environmental Modelling and Software | Year: 2013
Watershed management is, by its very nature, a participatory process, requiring horizontal and vertical collaboration among multiple institutions and stakeholders. For watershed-scale management to be effective, a social-learning infrastructure needs to be in place to allow for the integration of diverse knowledge and interests related to watershed protection and restoration. Environmental decision support systems (EDSS) have long been used to support co-learning processes during watershed management. However, in-house implementation and maintenance of EDSS often pose a significant burden to local watershed partnerships because of budgetary and technological constraints. Recent advances in service-oriented computing can help shift the technical burden of EDSS implementation to service providers and enable watershed partnerships to focus primarily on decision-making activities. In this paper, I describe the migration of an EDSS module from the traditional client-server architecture to a client of cloud-computing services. Google Drive, which is behind the new version of the EDSS module, provides a number of basic visual analytics features that can be used to enrich the collaborative decision-making experience while drastically reducing the cost of small-scale EDSS. More sophisticated EDSS may be implemented by leveraging the strengths of both client-server architectures and cloud-computing services. © 2012 Elsevier Ltd.
Harden K.P.,University of Texas at Austin
Perspectives on Psychological Science | Year: 2014
In this article, I propose a sex-positive framework for research on adolescent sexuality in which I consider consensual sexual activities in adolescence as developmentally normative and potentially healthy. The sex-positive framework is contrasted with the predominant “risk” perspective that presumes that abstinence from sexual activity is the ideal behavioral outcome for teenagers. Evidence from longitudinal and behavioral genetic studies indicates that engaging in sexual intercourse in adolescence does not typically cause worse psychological functioning. The relationship context of sexual experience may be a critical moderator of its psychological impact. Moreover, cross-cultural data on adolescents’ contraception usage, unintended pregnancy, and sexually transmitted infections suggest that, despite the unacceptably high rate of negative health consequences among U.S. teenagers, adolescents can have the developmental capacity to regulate the health risks inherent in sexual activity. Understanding adolescent sexuality can be fostered by considering sexual well-being, a multidimensional construct that incorporates an adolescent’s sexual self-efficacy, sexual self-esteem, feelings of sexual pleasure and satisfaction, and freedom from pain and negative affect regarding sexuality. New research is necessary to understand the development of adolescent sexual well-being, including its normative age trends, its reciprocal links with sexual behavior, and its impact on psychological and physical health. © The Author(s) 2014.
Goodenough J.B.,University of Texas at Austin
Journal of Solid State Electrochemistry | Year: 2012
The challenges for rechargeable batteries are cost, safety, energy density, life, and rate. Traditional rechargeable batteries based on aqueous electrolytes have good rate capabilities but limited energy density because the voltage for a long shelf-life is restricted to 1.5 V. The discovery of fast Na ion conductivity in β-alumina in 1967 introduced the novel concept of a solid oxide electrolyte and molten electrodes: the sodium-sulfur battery operates at 350 °C. Interest in rechargeable batteries with aprotic electrolytes was further stimulated by the first energy crisis in the early 1970s. Since protons are not mobile in aprotic electrolytes, the Li+ ion was the logical choice for the working ion, and on-going work on reversible Li intercalation into layered sulfides suggested the TiS2//Li cell, which was shown in 1976 to have a voltage of V ≃ 2.2 V and good rate capability. However, the organic liquid carbonates used as electrolytes are flammable, and dendrites growing across the electrolyte from the lithium anode on repeated charge/discharge cycles short-circuited the cells with disastrous consequences. Safety concerns caused this effort to be dropped. However, substitution of the layered oxides LiMO2 for the layered sulfides MS2 and reversible intercalation of Li into graphitic carbon without dendrite formation at slow charging rates gave a safe rechargeable lithium ion battery (LIB) of large-enough energy density to enable the wireless revolution. Although carbon-buffered alloys now provide anodes that allow a fast charge and have a higher capacity, nevertheless a passivation layer permeable to Li+ forms on the anode surface, and the Li+ in the passivation layer is taken irreversibly from the cathode on the initial charge.
Since the specific capacity of a cell with an insertion-compound cathode is limited by that of the cathode, strategies to increase the specific capacity of a LIB powering an electric vehicle or storing electricity from wind or solar farms include a return to consideration of a solid electrolyte.
Vaze R.,Tata Institute of Fundamental Research |
Heath Jr. R.W.,University of Texas at Austin
IEEE Transactions on Information Theory | Year: 2012
The transmission capacity of an ad-hoc network is the maximum density of active transmitters per unit area, given an outage constraint at each receiver for a fixed rate of transmission. Assuming that the transmitter locations are distributed as a Poisson point process, this paper derives upper and lower bounds on the transmission capacity of an ad-hoc network when each node is equipped with multiple antennas. The transmitter either uses eigen multi-mode beamforming or a subset of its antennas without channel information to transmit multiple data streams, while the receiver uses partial zero forcing to cancel certain interferers using some of its spatial receive degrees of freedom (SRDOF). The receiver either cancels the nearest interferers or those interferers that maximize the post-cancellation signal-to-interference ratio. Using the obtained bounds, the optimal number of data streams to transmit, and the optimal SRDOF to use for interference cancellation are derived that provide the best scaling of the transmission capacity with the number of antennas. With beamforming, single data stream transmission together with using all but one SRDOF for interference cancellation is optimal, while without beamforming, single data stream transmission together with using a fraction of the total SRDOF for interference cancellation is optimal. © 2011 IEEE.
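The outage constraint underlying transmission capacity can be illustrated with a small Monte Carlo sketch: interferer locations are drawn from a Poisson point process and an outage is declared when the signal-to-interference ratio (SIR) at a typical receiver falls below a threshold. This is only an illustration of the setting, not the paper's analytical bounds or multi-antenna cancellation schemes; the density, path-loss exponent, and disc radius are all invented.

```python
import math
import random

def poisson_sample(lam):
    """Knuth's algorithm for a Poisson random variate with mean lam."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def outage_probability(density, alpha=4.0, sir_threshold=1.0,
                       radius=30.0, trials=2000, seed=1):
    """Fraction of trials with SIR below threshold at the typical receiver.

    Interferers form a Poisson point process of the given density on a
    disc around the receiver; the desired link has unit length.
    """
    random.seed(seed)
    signal = 1.0  # received power of the desired link: 1**-alpha
    outages = 0
    for _ in range(trials):
        n = poisson_sample(density * math.pi * radius ** 2)
        interference = 0.0
        for _ in range(n):
            r = radius * math.sqrt(random.random())  # uniform in the disc
            interference += max(r, 1e-3) ** -alpha   # path-loss law r**-alpha
        if interference > 0 and signal / interference < sir_threshold:
            outages += 1
    return outages / trials

print(outage_probability(0.01))  # outage probability at density 0.01
```

Transmission capacity is then the largest density for which this outage probability stays below a target; the paper's contribution is bounding that quantity analytically for multi-antenna transmission and interference cancellation.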
Inobe T.,University of Toyama |
Matouschek A.,University of Texas at Austin
Current Opinion in Structural Biology | Year: 2014
The proteasome is the main proteolytic machine in the cytosol and nucleus of eukaryotic cells where it degrades hundreds of regulatory proteins, removes damaged proteins, and produces peptides that are presented by MHC complexes. New structures of the proteasome particle show how its subunits are arranged and provide insights into how the proteasome is regulated. Proteins are targeted to the proteasome by tags composed of several ubiquitin moieties. The structure of the tags tunes the order in which proteins are degraded. The proteasome itself edits the ubiquitin tags and drugs that interfere in this process can enhance the clearance of toxic proteins from cells. Finally, the proteasome initiates degradation at unstructured regions within its substrates and this step contributes to substrate selection. © 2014 Elsevier Ltd.
Jha S.,University of Texas at Austin
Molecular Ecology | Year: 2015
Many of the world's terrestrial landscapes are being altered by humans in the form of agriculture, urbanization and pastoral systems, with major implications for biodiversity. Bumble bees are one of the most effective pollinators in both natural and cultivated landscapes, but are often the first to be extirpated in human-altered habitats. Yet, little is known about the role of natural and human-altered habitats in promoting or limiting bumble bee gene flow. In this study, I closely examine the genetic structure of the yellow-faced bumble bee, Bombus vosnesenskii, across the southwestern US coast and find strong evidence that natural oceanic barriers, as well as contemporary human-altered habitats, limit bee gene flow. Heterozygosity and allelic richness were lower in island populations, while private allelic richness was higher in island populations compared to mainland populations. Genetic differentiation, measured for three indices across the 1000 km study region, was significantly greater than the null expectation (FST = 0.041, F'ST = 0.044 and Dest = 0.155) and correlated with geographic distance. Furthermore, genetic differentiation patterns were most strongly correlated with contemporary (2011) not past (2006, 2001) resistance maps calibrated for high dispersal limitation over oceans, impervious habitat and croplands. Despite the incorporation of dramatic elevation gradients, the analyses reveal that oceans and contemporary human land use, not mountains, are the primary dispersal barriers for B. vosnesenskii gene flow. These findings reinforce the importance of maintaining corridors of suitable habitat across the distribution range of native pollinators to promote their persistence and safeguard their ability to provide essential pollination services. © 2015 John Wiley & Sons Ltd.
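One of the differentiation indices reported above, Wright's FST, can be sketched for a single biallelic locus as the proportional reduction of heterozygosity within subpopulations relative to the total. The allele frequencies below are illustrative, not data from the study, and real analyses use multilocus estimators across many individuals.

```python
# Minimal sketch of Wright's F_ST for one biallelic locus.
# Frequencies are illustrative; equal subpopulation sizes are assumed.

def fst(freqs):
    """F_ST = (H_T - H_S) / H_T.

    freqs: allele frequencies of one allele, one entry per subpopulation.
    """
    p_bar = sum(freqs) / len(freqs)
    h_t = 2 * p_bar * (1 - p_bar)  # expected heterozygosity, pooled population
    h_s = sum(2 * p * (1 - p) for p in freqs) / len(freqs)  # mean within-pop
    return (h_t - h_s) / h_t

print(round(fst([0.2, 0.8]), 3))  # 0.36: strongly diverged subpopulations
print(fst([0.5, 0.5]))            # 0.0: identical frequencies, no structure
```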
Hardy J.G.,CNRS Institute of Science and Supramolecular Engineering |
Hardy J.G.,University of Texas at Austin
Chemical Society Reviews | Year: 2013
Metallosupramolecular grid complexes (hereafter referred to as metallogrids) are well-defined oligonuclear metal ion complexes involving essentially planar arrays of the metal ions sited at the points of intersection of square or rectangular metallogrids and possess a variety of interesting optical, electronic, magnetic and supramolecular properties. Herein I aim to give the reader an overview of the synthesis, properties and potential for a variety of high-tech applications of metallogrids. This journal is © The Royal Society of Chemistry.
Pena E.D.,University of Texas at Austin
Child Development Perspectives | Year: 2011
This article explores linguistic, cultural, and contextual factors that may present significant challenges to the accurate assessment of capabilities of preschool dual language learners (DLLs). This discussion includes an examination of (a) differences between "cultures" of home and school and their implications for assessing DLL children's development, (b) content and language(s) used for assessments of children's development, and (c) the ability of extant standardized assessments to provide accurate portrayals of DLL children's abilities. The article concludes with recommendations for improving the accurate assessment of DLL preschoolers, and for the development of an expanded research agenda. © 2011 The Authors. Child Development Perspectives © 2011 The Society for Research in Child Development.
de la Torre J.C.,University of Texas at Austin
BioFactors | Year: 2012
Despite an archive of over 73,000 research papers published in the last two decades on the subject of Alzheimer's disease (AD), little clinical progress has been made relative to how people get sporadic AD and what can be done to help them avoid it. This review spotlights strategic steps that could be a turning point in the dramatic lowering of Alzheimer prevalence. The main strategy includes application of four pillars of prevention: 1) early identification of AD vascular risk factors; 2) early detection of AD vascular risk factors; 3) early intervention of AD vascular risk factors based on evidence-based medical decisions; 4) patient follow-up to assess and modify interventions as needed. Tandem to these four pillars of prevention, a proactive lifestyle consisting of a healthy diet coupled to physical and mental activity should be applied as part of any therapeutic intervention. We are persuaded by mounting and compelling evidence that AD is a multifactorial disorder kindled by vascular risk factors that generate chronic brain hypoperfusion (CBH) during advanced aging. A pathobiological cascade of biochemical events in the presence of CBH that leads to oxidative stress and neurodegeneration appears to involve multiple biofactors including micronutrients, trace metals, lipids, and pro-oxidants, as reviewed in this special issue of BioFactors. Modulation of these biofactors may help prevent or control incipient AD. © 2012 International Union of Biochemistry and Molecular Biology, Inc.
Deng X.,Ohio State University |
Srinivasan R.,University of Texas at Austin
Journal of Marketing | Year: 2013
Transparent packages are pervasive in food consumption environments. Yet prior research has not systematically examined whether and how transparent packaging affects food consumption. The authors propose that transparent packaging has two opposing effects on food consumption: it enhances food salience, which increases consumption (salience effect), and it facilitates consumption monitoring, which decreases consumption (monitoring effect). They argue that the net effect of transparent packaging on food consumption is moderated by food characteristics (e.g., unit size, appearance). For small, visually attractive foods, the monitoring effect is low, so the salience effect dominates, and people eat more from a transparent package than from an opaque package. For large foods, the monitoring effect dominates the salience effect, decreasing consumption. For vegetables, which are primarily consumed for their health benefits, consumption monitoring is not activated, so the salience effect dominates, which ironically decreases consumption. The authors' findings suggest that marketers should offer small foods in transparent packages and large foods and vegetables in opaque packages to increase postpurchase consumption (and sales). © 2013, American Marketing Association.
Bhagavatula R.,ASSIA Inc |
Heath Jr. R.W.,University of Texas at Austin
IEEE Transactions on Signal Processing | Year: 2011
Base station cooperation can exploit knowledge of the users' channel state information (CSI) at the transmitters to manage co-channel interference. Users have to feedback CSI of the desired and interfering channels using finite-bandwidth backhaul links. Existing codebook designs for single-cell limited feedback can be used for multicell cooperation by partitioning the available feedback resources between the multiple channels. In this paper, a new feedback-bit allocation strategy is proposed, as a function of the delays in the communication links and received signal strengths in the downlink. Channel temporal correlation is modeled as a function of delay using the Gauss-Markov model. Closed-form expressions for bit partitions are derived to allocate more bits to quantize the stronger channels with smaller delays and fewer bits to weaker channels with larger delays, assuming random vector quantization. Cellular network simulations are used to show that the algorithm presented in the paper yields higher sum-rates than the equal-bit allocation technique. © 2011 IEEE.
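The Gauss-Markov model referred to above makes channel aging concrete: the channel at each step is a weighted mix of its previous value and a fresh innovation, with the correlation coefficient η shrinking as feedback delay grows. The scalar recursion below is a sketch under that model; η and the step count are illustrative, and the paper works with vector channels and quantized feedback.

```python
import math
import random

def evolve_channel(h0, eta, steps, seed=0):
    """Evolve a scalar channel by the Gauss-Markov (first-order AR) model:
    h[t] = eta * h[t-1] + sqrt(1 - eta**2) * w[t], with unit-variance
    Gaussian innovations w[t]. eta is the temporal correlation coefficient,
    a decreasing function of the feedback delay.
    """
    random.seed(seed)
    h = h0
    for _ in range(steps):
        w = random.gauss(0.0, 1.0)                 # innovation
        h = eta * h + math.sqrt(1.0 - eta ** 2) * w
    return h
```

With η = 1 (zero delay) the channel never decorrelates, while with η = 0 the fed-back value is useless after one step; the bit-allocation result above formalizes the intuition that bits are better spent on strong, slowly aging channels.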
Kiguchi K.,University of Texas at Austin
Journal of Hepato-Biliary-Pancreatic Sciences | Year: 2014
Novel targets for therapeutic or chemopreventive approaches against cholangiocarcinoma (CCA) are urgently needed. In this review article, we discuss the molecular aspects of CCA including the role of erbB receptor tyrosine kinases (RTKs), downstream signaling pathways of these erbB RTKs, inflammatory mediators during gallbladder carcinogenesis and bile acids based on our study using a mouse model for human CCA (BK5.erbB2 mice) as well as additional information in the literature. © 2014 Japanese Society of Hepato-Biliary-Pancreatic Surgery.
Bell M.R.,University of Texas at Austin
Current Opinion in Pharmacology | Year: 2014
Polychlorinated biphenyls (PCBs) are among the most well-studied endocrine-disrupting chemicals (EDCs) for their neurobehavioral effects, especially on neurodevelopment and cognitive performance. In addition, past research has demonstrated effects of PCBs on circulating hormones and associated changes in reproductive behaviors. This article will focus on recent advances that have been made in characterizing developmental PCB effects on reproductive function, broader social and affective behaviors, and the neuroendocrine mechanisms behind such changes. In general, PCBs seem to inhibit reproductive function by suppressing multiple aspects of the associated hypothalamic circuitry. Additionally, PCBs may also reduce motivation for social behaviors and induce depressive-like symptoms via overall reductions in dopaminergic and glutamatergic functions in the limbic system. However, more work with human-relevant exposure paradigms is needed to fully support these conclusions. © 2014 Acta Materialia Inc.
Johnson A.W.,University of Texas at Austin
Current Biology | Year: 2014
A recent study shows that nuclear export of the large ribosomal subunit is regulated by a GTPase that blocks recruitment of the nuclear export factor Nmd3 until remodeling of the pre-ribosome by the AAA-ATPase Rea1 (Midasin) is complete. © 2014 Elsevier Ltd.
Bull J.J.,University of Texas at Austin
Nature | Year: 2015
Wild populations of an Australian lizard have sex chromosomes and also exhibit temperature-controlled sexual development, providing insight into how these two sex-determining mechanisms may evolve back and forth. © 2015 Macmillan Publishers Limited. All rights reserved
Mallam A.L.,University of Cambridge |
Mallam A.L.,University of Texas at Austin |
Jackson S.E.,University of Cambridge
Nature Chemical Biology | Year: 2012
Topological knots are found in a considerable number of protein structures, but it is not clear how they knot and fold within the cellular environment. We investigated the behavior of knotted protein molecules as they are first synthesized by the ribosome using a cell-free translation system. We found that newly translated knotted proteins can spontaneously self-tie and do not require the assistance of molecular chaperones to fold correctly to their trefoil-knotted structures. This process is slow but efficient, and we found no evidence of misfolded species. A kinetic analysis indicates that the knotting process is rate limiting, occurs post-translationally, and is specifically and significantly (P < 0.001) accelerated by the GroEL-GroES chaperonin complex. This demonstrates a new active mechanism for this molecular chaperone and suggests that chaperonin-catalyzed knotting probably dominates in vivo. These results explain how knotted protein structures have withstood evolutionary pressures despite their topological complexity. © 2012 Nature America, Inc. All rights reserved.
Kirk D.S.,University of Texas at Austin |
Matsuda M.,University of Maryland College Park
Criminology | Year: 2011
Ethnographic evidence reveals that many crimes in poor minority neighborhoods evade criminal justice sanctioning, thus leading to a negative association between the proportion of minority residents in a neighborhood and the arrest rate. To explain this finding, we extend recent theoretical explications of the concept of legal cynicism. Legal cynicism refers to a cultural orientation in which the law and the agents of its enforcement are viewed as illegitimate, unresponsive, and ill equipped to ensure public safety. Crime might flourish in neighborhoods characterized by legal cynicism because individuals who view the law as illegitimate are less likely to comply with it; yet because of legal cynicism, these crimes might go unreported and therefore unsanctioned. This study draws on data from the Project on Human Development in Chicago Neighborhoods to test the importance of legal cynicism for understanding geographic variation in the probability of arrest. We find that, in neighborhoods characterized by high levels of legal cynicism, crimes are much less likely to lead to an arrest than in neighborhoods where citizens view the police more favorably. Findings also reveal that residents of highly cynical neighborhoods are less likely to engage in collective efficacy and that collective efficacy mediates the association between legal cynicism and the probability of arrest. © 2011 American Society of Criminology.
Willets K.A.,University of Texas at Austin
Physical Chemistry Chemical Physics | Year: 2013
Super-resolution far-field imaging has recently emerged as a novel strategy for imaging interactions between plasmonic nanostructures and single molecules with spatial resolution <5 nm. In these experiments, the emission centroid of a diffraction-limited spot is modeled as a two-dimensional Gaussian, allowing the position of an emitter to be determined with nanoscale precision. In this perspective, we describe the principles of super-resolution far-field imaging and then highlight its application to several different problems in plasmonics, including surface-enhanced fluorescence of ligands bound to nanoparticle surfaces, nanoparticle-mediated catalysis, and mapping electromagnetic hot spots. In all cases, the complex coupling between molecular emission and plasmon modes of the underlying nanostructure must be considered. While this complicates the interpretation of super-resolution images of plasmonic systems, the coupling also opens new doors for understanding the fundamental interactions between molecules and plasmonic nanostructures. © 2013 the Owner Societies.
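The localization step described above extracts a sub-diffraction position from a diffraction-limited spot; in the experiments this is done by fitting a two-dimensional Gaussian to the emission. As a minimal stand-in on synthetic data, the sketch below uses an intensity-weighted centroid, which coincides with the Gaussian fit for a clean, symmetric, noise-free spot; the grid and spot parameters are invented.

```python
import math

def centroid(image):
    """Intensity-weighted centroid (row, col) of a 2D list of pixel values.

    A simple stand-in for 2D Gaussian fitting of a diffraction-limited spot.
    """
    total = wr = wc = 0.0
    for r, row in enumerate(image):
        for c, val in enumerate(row):
            total += val
            wr += r * val
            wc += c * val
    return wr / total, wc / total

# Synthetic noise-free spot centred at (2.0, 3.0) on a 5 x 7 pixel grid
spot = [[math.exp(-((r - 2.0) ** 2 + (c - 3.0) ** 2) / 2.0) for c in range(7)]
        for r in range(5)]
print(centroid(spot))  # recovers (2.0, 3.0) to within rounding
```

The perspective's central caveat applies here: near a plasmonic nanostructure the apparent emission centroid need not coincide with the molecule's true position, because the emission couples into the plasmon modes.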
Hodges B.R.,University of Texas at Austin
Environmental Modelling and Software | Year: 2013
Continental River Dynamics (CRD) is herein defined as modelling the flow dynamics in all channels of a continental-scale river basin using the physics-based Saint-Venant equations. At the boundary of hydraulics and hydrology, CRD requires significant collaborative efforts to make new progress. Six constraints and seven challenges are identified in the areas of dynamics, dimensionality, resolution, uncertainty, model coupling, and data availability. Three key short-term needs for CRD are identified as (1) scaling up Saint-Venant river models to continental scales, (2) standards for integrating river and hydrology models, and (3) methods for effective use of lidar data and synthetic methods for approximating geometry for 1D dynamic models. An over-arching need for comprehensive data collection programs for river geometry is discussed. © 2013 Elsevier Ltd.
Kwasinski A.,University of Texas at Austin
IEEE Transactions on Power Electronics | Year: 2011
This paper presents a quantitative method to evaluate dc microgrid availability by identifying minimum cut sets and calculating their occurrence probability for different microgrid architectures and converter topologies. Hence, it provides planners with an essential tool to evaluate downtime costs and decide technology deployments based on quantitative risk assessments, by allowing them to compare the effect that converter topologies and microgrid architecture choices have on availability. Conventional architectures with single-input converters and alternative configurations with multiple-input converters (MICs) are considered. Calculations yield that all microgrid configurations except those utilizing center converters achieve a similar availability of six nines. Three converter topologies are used as representatives of many other circuits. These three benchmark circuits are the boost, the isolated SEPIC (ISEPIC), and the current-source half-bridge. Marginal availability differences are observed for different circuit topology choices, although architectures with MICs are more sensitive to this choice. MICs, and in particular the ISEPIC, are identified as good compromise options for dc microgrid source interfaces. The analysis also models the influence of local energy storage, both in batteries and in generator fuel, on availability. These models provide a quantitative way of comparing dc microgrids with conventional backup energy systems. Calculations based on widely accepted industry data support the analysis. © 2010 IEEE.
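The minimum-cut-set calculation described above can be sketched in a few lines: the system is unavailable when every component in some minimal cut set has failed, and for rare failures the system unavailability is well approximated by summing the cut-set probabilities. The component names and unavailability figures below are illustrative, not the paper's data.

```python
# First-order (rare-event) unavailability bound from minimal cut sets.
# Component names and unavailabilities are illustrative.

def system_unavailability(cut_sets, unavail):
    """Upper-bound system unavailability from minimal cut sets.

    cut_sets: iterable of tuples of component names (each a minimal cut set)
    unavail:  dict mapping component name -> component unavailability
    """
    total = 0.0
    for cs in cut_sets:
        p = 1.0
        for comp in cs:      # the cut set fails only if all members fail
            p *= unavail[comp]
        total += p           # rare-event approximation: sum cut-set probabilities
    return total

u = {"conv_a": 1e-3, "conv_b": 1e-3, "bus": 1e-6}
cuts = [("conv_a", "conv_b"), ("bus",)]  # redundant converters + a single bus
print(system_unavailability(cuts, u))    # ≈ 2e-6, roughly six nines of availability
```

The sketch makes the architectural point visible: redundant parallel converters contribute a second-order term, while any single-point component (here the hypothetical bus) dominates the total.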
Economo E.P.,University of Texas at Austin |
Economo E.P.,University of Michigan
American Naturalist | Year: 2011
A central goal of conservation science is to identify the most important habitat patches for maintaining biodiversity on a landscape. Spatial biodiversity patterns are often used for such assessments, and patches that harbor unique diversity are generally prioritized over those with high community similarity to other areas. This places an emphasis on biodiversity representation, but removing a patch can have cascading effects on biodiversity persistence in the remaining ecological communities. Metacommunity theory provides a mechanistic route to the linking of biodiversity patterns on a landscape with the subsequent dynamics of diversity loss after habitat is degraded. Using spatially explicit neutral theory, I focus on the situation where spatial patterns of diversity and similarity are generated by the structure of dispersal networks and not environmental gradients. I find that gains in biodiversity representation are nullified by losses in persistence, and as a result the effects of removing a patch on metacommunity diversity are essentially independent of complementarity or other biodiversity patterns. In this scenario, maximizing protected area and not biodiversity representation is the key to maintaining diversity in the long term. These results highlight the need for a broader understanding of how conservation paradigms perform under different models of metacommunity dynamics. © 2011 by The University of Chicago.
Kennicutt Jr. R.C.,University of Cambridge |
Evans N.J.,University of Texas at Austin |
Evans N.J.,European Southern Observatory
Annual Review of Astronomy and Astrophysics | Year: 2012
We review progress over the past decade in observations of large-scale star formation, with a focus on the interface between extragalactic and Galactic studies. Methods of measuring gas contents and star-formation rates are discussed, and updated prescriptions for calculating star-formation rates are provided. We review relations between star formation and gas on scales ranging from entire galaxies to individual molecular clouds. Copyright © 2012 by Annual Reviews.
Hwang H.,University of Texas at Austin
Asian Pacific Journal of Cancer Prevention | Year: 2013
Objectives: Colorectal cancer (CRC) is the most commonly diagnosed cancer for all US populations, including Asian Americans. CRC screening has considerable benefits in preventing CRC and reducing mortality. The purpose of this article was to review the published literature on rates of colorectal cancer screening and factors associated with colorectal cancer screening practice among Asian Americans. Methods: Through searching electronic reference databases from 2000 to 2013, 30 articles were found on Chinese, Filipino, Japanese, Korean, and Vietnamese Americans. Findings: Asian Americans had significantly low rates of CRC screening; Korean Americans reported the lowest rates, while higher screening rates were found among Japanese Americans. Older age, longer length of stay in the US, and having a physician's recommendation were the most common facilitators to receiving screening. The common inhibiting factors were financial issues, employment status, and worries/fears about the procedure. Conclusions: Although many Asian Americans are vulnerable to CRC, individual Asian subgroups remain underserved by CRC screening and intervention. Further studies should focus on each individual Asian subgroup, and culturally proficient CRC screening intervention programs should be developed for each.
Bersuker I.B.,University of Texas at Austin
Chemical Reviews | Year: 2013
The rigorous theoretical formulation is to include the nuclear-displacement terms in the Hamiltonian and to solve the resulting nonadiabatic problem exactly, but this approach is impractical. Fortunately, important solutions can be obtained by first solving the electronic-structure problem with fixed nuclei in the adiabatic approximation and then including the interaction of the electronic states with the nuclear displacements as a perturbation, thus reaching beyond the adiabatic approximation. The similarity between the Jahn-Teller effect (JTE) and the pseudo-Jahn-Teller effect (PJTE) ends with the statement that both effects may distort the system, producing several equivalent minima of the adiabatic potential energy surface (APES) at which the system has lower symmetry. For a better understanding of the chemical implications of the PJTE, it is important to follow up on how this effect is related to intramolecular interactions.
Goff J.A.,University of Texas at Austin
Continental Shelf Research | Year: 2014
A high-resolution seismic reflection and vibracore survey was conducted on the inner shelf offshore of Panama City, Florida, in April 2011, to provide seabed characterization for Office of Naval Research acoustic reverberation experiments. The seafloor in this region is part of the MAFLA sand sheet: marine sands up to 5.5 m thick extending from Mississippi to the Florida panhandle. The survey area spanned a transition in sand ridge morphology: northwest of the St. Andrews Bay inlet channel, sand ridges are up to ~4 m tall, but die off southeast of the inlet channel. Analysis of reflection and core data documents a basal reflector beneath the ridges interpreted to be the shoreface ravinement separating estuarine and Pleistocene sediments below from the marine sands above, scavenged from the eroded sediments. In the northwest, this surface truncates buried river channels filled with estuarine sediment. In the middle of the survey area, where sand ridges are ~2 m high, the basal reflector is stratigraphically above the channel tops, and the presence of woody fragments in the intervening sand implies that it is relict barrier/estuarine material that has not been reworked into marine sand. In the southeast, where no sand ridges are present, the basal reflector is not observed, but the shoreface ravinement is identified in cores just ~0.4 m below the seafloor; unreworked barrier/estuarine sands are up to ~4 m thick above the channel tops. The stratigraphic progression implies a reduced depth of shoreface erosion to the southeast, correlating with the reduction in sand ridge height. It is hypothesized here that sand ridge migration provides an addition to wave erosion in the formation of the shoreface ravinement along the lower shoreface and inner shelf, eroding through net negative bedform climb and transferring sand to the sand ridges comprising the modern sand sheet. © 2014 Elsevier Ltd.
Hill W.G.,University of Edinburgh |
Kirkpatrick M.,University of Texas at Austin
Annual Review of Ecology, Evolution, and Systematics | Year: 2010
We review the impact of developments from animal breeding on our understanding of evolution and on the methodology used in evolutionary biology. The theory developed for improvement of polygenic traits, in particular the breeders' equation and the effects of finite population size, has had a significant influence. The effectiveness of within population selection is exemplified by the continued rapid genetic change, often with concomitant effects on fitness, produced by breeders. Many of the models and methods for estimation of quantitative genetic parameters, notably the animal model, have been motivated by animal breeding problems. Results from selection programs and quantitative trait loci (QTL) experiments show quantitative traits are often highly polygenic and can be adequately modeled by the infinitesimal model. Copyright © 2010 by Annual Reviews. All rights reserved.
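The breeders' equation referenced above, R = h²S (response to selection equals heritability times the selection differential), can be sketched with a small simulation; the additive trait model and all parameter values below are illustrative, not taken from the review.

```python
import numpy as np

def breeders_response(h2, selection_differential):
    """Predicted per-generation response to selection: R = h^2 * S."""
    return h2 * selection_differential

def simulate_truncation_selection(h2=0.4, n=100_000, top_frac=0.2, seed=0):
    """Simulate one generation of truncation selection on an additive trait.

    Phenotype = breeding value + environmental deviation, with
    V_A = h2 and V_E = 1 - h2 (so V_P = 1).
    Returns the realized response (mean breeding value of selected parents,
    i.e., the expected offspring deviation) and the breeders'-equation prediction.
    """
    rng = np.random.default_rng(seed)
    breeding_values = rng.normal(0.0, np.sqrt(h2), n)
    phenotypes = breeding_values + rng.normal(0.0, np.sqrt(1.0 - h2), n)

    cutoff = np.quantile(phenotypes, 1.0 - top_frac)
    selected = phenotypes >= cutoff
    S = phenotypes[selected].mean() - phenotypes.mean()  # selection differential
    R = breeding_values[selected].mean()                 # realized response
    return R, breeders_response(h2, S)
```

With heritability 0.4 and the top 20% of individuals selected, the realized response closely tracks the breeders'-equation prediction, which is the sense in which the equation has guided both breeding programs and evolutionary theory.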
Maxwell J.C.,University of Texas at Austin
Journal of Addictive Diseases | Year: 2015
Aims: The abuse of prescription pain relievers and/or heroin is a significant public health concern. This article identifies the changes in this epidemic and estimates the extent of the gap between treatment availability and need. Method: This article uses the national data on opioid and heroin abuse to identify at-risk groups. Result: Findings include shifting from narcotic analgesics to heroin and demographic changes as well as changes in supply and new medication-assisted therapies. Conclusions: The number of persons dependent on pain relievers or heroin and new initiates is more than three times greater than the current capacity to deliver treatment. © 2015 Taylor & Francis Group, LLC.
Kountouris M.,Supelec |
Andrews J.G.,University of Texas at Austin
IEEE Transactions on Wireless Communications | Year: 2012
The tremendous capacity gains promised by space division multiple access (SDMA) depend critically on the accuracy of the transmit channel state information. In the broadcast channel, even without any network interference, it is known that such gains collapse due to interstream interference if the feedback is delayed or low rate. In this paper, we investigate SDMA in the presence of interference from many other simultaneously active transmitters distributed randomly over the network. In particular, we consider zero-forcing beamforming in a decentralized (ad hoc) network where each receiver provides feedback to its respective transmitter. We derive closed-form expressions for the outage probability, network throughput, transmission capacity, and average achievable rate and go on to quantify the degradation in network performance due to residual self-interference as a function of key system parameters. One particular finding is that, as in the classical broadcast channel, the per-user feedback rate must increase linearly with the number of transmit antennas and SINR (in dB) for the full multiplexing gains to be preserved with limited feedback. We derive the throughput-maximizing number of streams, establishing that single-stream transmission is optimal in most practically relevant settings. In short, SDMA does not appear to be a prudent design choice for interference-limited wireless networks. © 2012 IEEE.
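The linear feedback-scaling conclusion echoes a standard rule of thumb from the limited-feedback broadcast-channel literature, roughly B ≈ (Nt − 1)/3 bits per dB of SNR for zero-forcing with quantized CSI. The sketch below uses that rule for illustration; it is not the exact expression derived in this paper.

```python
def feedback_bits_per_user(n_tx, snr_db):
    """Rule-of-thumb per-user feedback load for zero-forcing beamforming
    with random vector quantization in a broadcast channel:
        B = (Nt - 1) / 3 * SNR_dB  (bits),
    the scaling needed to keep the full multiplexing gain as SNR grows.
    Illustrative only; the paper's network setting adds further terms.
    """
    return (n_tx - 1) * snr_db / 3.0

# Example: doubling the antenna count roughly doubles the feedback load.
b_4ant = feedback_bits_per_user(4, 10)  # 4 antennas at 10 dB
b_8ant = feedback_bits_per_user(8, 10)  # 8 antennas at 10 dB
```

The linear growth in both Nt and SNR (in dB) is what makes high-rate CSI feedback expensive, and is one reason the paper finds single-stream transmission optimal in interference-limited regimes.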
Weinberg S.,University of Texas at Austin
Physical Review D - Particles, Fields, Gravitation and Cosmology | Year: 2012
It is shown that in a scale-invariant relativistic field theory, any field ψ n belonging to the (j,0) or (0,j) representations of the Lorentz group and with dimensionality d=j+1 is a free field. For other field types there is no value of the dimensionality that guarantees that the field is free. Conformal invariance is not used in the proof of these results, but it gives them a special interest; as already known and as shown here in the appendix, the only fields in a conformal field theory that can describe massless particles belong to the (j,0) or (0,j) representations of the Lorentz group and have dimensionality d=j+1. Hence in conformal field theories massless particles are free. © 2012 American Physical Society.
Weinberg S.,University of Texas at Austin
Physical Review D - Particles, Fields, Gravitation and Cosmology | Year: 2012
This note supplements an earlier paper on conformal field theories. There it was shown how to construct tensor, spinor, and spinor-tensor primary fields in four dimensions from their counterparts in six dimensions, where conformal transformations act simply as SO(4,2) Lorentz transformations. Here we show how to constrain fields in six dimensions so that the corresponding primary fields in four dimensions transform according to irreducible representations of the four-dimensional Lorentz group, even when the irreducibility conditions on these representations involve the four-index Levi-Civita tensor ε_μνρσ. © 2012 American Physical Society.
Howard R.J.,Skidmore College |
Trudell J.R.,Stanford University |
Adron Harris R.,University of Texas at Austin
Pharmacological Reviews | Year: 2014
Alcohols and other anesthetic agents dramatically alter neurologic function in a wide range of organisms, yet their molecular sites of action remain poorly characterized. Pentameric ligand-gated ion channels, long implicated in important direct effects of alcohol and anesthetic binding, have recently been illuminated in renewed detail thanks to the determination of atomic-resolution structures of several family members from lower organisms. These structures provide valuable models for understanding and developing anesthetic agents and for allosteric modulation in general. This review surveys progress in this field from function to structure and back again, outlining early evidence for relevant modulation of pentameric ligand-gated ion channels and the development of early structural models for ion channel function and modulation. We highlight insights and challenges provided by recent crystal structures and resulting simulations, as well as opportunities for translation of these newly detailed models back to behavior and therapy. © 2014 by The American Society for Pharmacology and Experimental Therapeutics.
Gardner J.E.,University of Texas at Austin
Geochimica et Cosmochimica Acta | Year: 2012
Explosive Plinian eruptions tap a wide range of magma compositions, including highly alkaline phonolite magma. Such eruptions are driven by volatiles exsolving from those magmas, and so determining how gas bubbles form is important to understanding those eruptions. Nucleation of bubbles in silicate melts is dictated strongly by the surface tension (σ) of the melt, and so this study focuses on determining σ for phonolite melts. Cylinders cored from a sodium-rich phonolite were hydrated with 5.37 ± 0.09 wt.% dissolved water at 150 MPa and 1150 °C, and then decompressed at either 1150 °C or 875 °C. Bubbles nucleated at 1150 °C only when pressure dropped below 95 MPa, in number densities of 10^4-10^5 cm^-3. At 875 °C bubbles nucleated only when pressure dropped below 53 MPa, in number densities of 10^4-10^7 cm^-3. Depending on whether the pressure within critical bubble nuclei equals the saturation pressure or a variable lower pressure, the observed number densities and nucleation rates imply that surface tension for Na-rich phonolite is 0.061 or 0.068 N m^-1 at 875 °C and 0.052 or 0.066 N m^-1 at 1150 °C. Importantly, temperature has little or slightly negative impact on σ, in contrast to the thermal impact on σ of rhyolite melts. Regardless of pressure assumption, the inferred surface tensions indicate that Na-rich phonolite can become highly super-saturated with water before bubbles nucleate, which could cause them to explosively degas and erupt, similar to highly viscous rhyolites. © 2011 Elsevier Ltd.
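The link between surface tension and supersaturation comes from classical nucleation theory, where the work of forming a critical bubble is W* = 16πσ³/(3ΔP²) and the nucleation rate scales as exp(−W*/kT). The sketch below uses σ values of the order reported here; the rate prefactor is a generic order-of-magnitude placeholder, not the study's fitted value.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def critical_work(sigma, delta_p):
    """Work of forming a critical bubble nucleus (classical nucleation theory):
    W* = 16*pi*sigma^3 / (3*delta_p^2), with sigma in N/m and delta_p in Pa."""
    return 16.0 * math.pi * sigma**3 / (3.0 * delta_p**2)

def nucleation_rate(sigma, delta_p, temp_k, prefactor=1e30):
    """Homogeneous nucleation rate J = J0 * exp(-W*/kT), nuclei m^-3 s^-1.
    The prefactor J0 is an illustrative placeholder, not a fitted value."""
    return prefactor * math.exp(-critical_work(sigma, delta_p) / (K_B * temp_k))
```

With σ ≈ 0.061 N m^-1 at ~1148 K (875 °C), raising the supersaturation ΔP from 55 to 97 MPa swings the predicted rate by tens of orders of magnitude, which is why nucleation appears only beyond a sharp pressure-drop threshold.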
Nishiyama H.,University of Texas at Austin
International Review of Neurobiology | Year: 2014
Activity-dependent changes in synaptic properties are considered key neural mechanisms of learning and memory. Most studies focus on changes in synaptic function such as long-term potentiation (LTP) and long-term depression (LTD), while changes in synaptic structure have been largely ignored. However, structural synaptic changes are also important. In fact, LTP and LTD are often associated with structural alterations of dendritic spines. Furthermore, experimental evidence indicates that behavioral learning often induces structural rewiring of synaptic circuitry through the formation of new synapses and elimination of existing ones. To discuss the role of structural plasticity in cerebellar learning, this chapter mainly focuses on parallel fiber-Purkinje cell synapses in the cerebellar cortex and reviews their structural changes under several different forms of motor learning. © 2014 Elsevier Inc.
Kormendy J.,University of Texas at Austin |
Ho L.C.,Carnegie Institution for Science
Annual Review of Astronomy and Astrophysics | Year: 2013
Supermassive black holes (BHs) have been found in 85 galaxies by dynamical modeling of spatially resolved kinematics. The Hubble Space Telescope revolutionized BH research by advancing the subject from its proof-of-concept phase into quantitative studies of BH demographics. Most influential was the discovery of a tight correlation between BH mass and the velocity dispersion σ of the bulge component of the host galaxy. Together with similar correlations with bulge luminosity and mass, this led to the widespread belief that BHs and bulges coevolve by regulating each other's growth. Conclusions based on one set of correlations, spanning from the brightest cluster ellipticals to the smallest galaxies, dominated BH work for more than a decade. New results are now replacing this simple story with a richer and more plausible picture in which BHs correlate differently with different galaxy components. A reasonable aim is to use this progress to refine our understanding of BH-galaxy coevolution. BHs with masses of 10^5-10^6 M☉ are found in many bulgeless galaxies. Therefore, classical (elliptical-galaxy-like) bulges are not necessary for BH formation. On the other hand, although they live in galaxy disks, BHs do not correlate with galaxy disks. Also, any correlations with the properties of disk-grown pseudobulges and dark matter halos are weak enough to imply no close coevolution. The above and other correlations of host-galaxy parameters with each other and with BH mass suggest that there are four regimes of BH feedback. (1) Local, secular, episodic, and stochastic feeding of small BHs in largely bulgeless galaxies involves too little energy to result in coevolution. (2) Global feeding in major, wet galaxy mergers rapidly grows giant BHs in short-duration, quasar-like events whose energy feedback does affect galaxy evolution. The resulting hosts are classical bulges and coreless-rotating-disky ellipticals. 
(3) After these AGN phases and at the highest galaxy masses, maintenance-mode BH feedback into X-ray-emitting gas has the primarily negative effect of helping to keep baryons locked up in hot gas and thereby keeping galaxy formation from going to completion. This happens in giant, core-nonrotating-boxy ellipticals. Their properties, including their tight correlations between BH mass and core parameters, support the conclusion that core ellipticals form by dissipationless major mergers. They inherit coevolution effects from smaller progenitor galaxies. Also, (4) independent of any feedback physics, in BH growth modes 2 and 3, the averaging that results from successive mergers plays a major role in decreasing the scatter in correlations from the large values observed in bulgeless and pseudobulge galaxies to the small values observed in giant elliptical galaxies. Copyright © 2013 by Annual Reviews. All rights reserved.
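The tight BH-mass-velocity-dispersion correlation at the center of this review is a power law with exponent near 4-5; the sketch below uses coefficients close to the fit reported for classical bulges and ellipticals, but they should be treated as illustrative approximations rather than the review's definitive calibration.

```python
def mbh_from_sigma(sigma_kms, norm=0.309e9, slope=4.38):
    """Approximate M-sigma relation for classical bulges and ellipticals:
        M_BH ~ norm * (sigma / 200 km/s)^slope   (solar masses).
    Coefficients are close to published fits but are illustrative here."""
    return norm * (sigma_kms / 200.0) ** slope

# A bulge with sigma = 300 km/s hosts a BH several times more massive
# than one with sigma = 200 km/s, because of the steep exponent.
m200 = mbh_from_sigma(200.0)
m300 = mbh_from_sigma(300.0)
```

The steep slope is why modest differences in bulge dispersion map onto order-of-magnitude differences in BH mass across the demographic samples discussed above.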
Colgin L.L.,University of Texas at Austin
Current Opinion in Neurobiology | Year: 2015
For decades, theta rhythms (~5-10 Hz) have been thought to play a critical role in memory processing in the entorhinal-hippocampal network. However, recent evidence suggests that successful memory performance also requires coupling of ~30-100 Hz gamma rhythms to particular phases of the theta cycle. Recent insights imply ways in which theta-gamma coupling may facilitate transfer of information throughout the entorhinal-hippocampal network. Activating gamma-modulated cell assemblies at a particular theta phase may allow the network to produce a more powerful output by ensuring that distributed cells fire closely in time. I hypothesize that such a mechanism would serve to facilitate either memory encoding or memory retrieval, depending on which type of gamma rhythms are recruited. © 2014 Elsevier Ltd.
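Theta-gamma coupling of the kind described here is commonly quantified as phase-amplitude coupling. A minimal sketch using a mean-vector-length metric on a synthetic signal (no filtering step, since the phase and amplitude components are constructed directly; all frequencies and coupling strengths are illustrative):

```python
import numpy as np

def modulation_index(theta_phase, gamma_amplitude):
    """Mean-vector-length measure of phase-amplitude coupling:
    |mean(A * exp(i*phi))| / mean(A). Zero for uncoupled signals."""
    z = gamma_amplitude * np.exp(1j * theta_phase)
    return np.abs(z.mean()) / gamma_amplitude.mean()

# Synthetic example: an 8 Hz theta phase, with gamma amplitude that either
# peaks at the theta trough (coupled) or stays constant (uncoupled).
fs, dur = 1000, 10.0
t = np.arange(0.0, dur, 1.0 / fs)
theta_phase = 2 * np.pi * 8 * t
coupled_amp = 1.0 + 0.8 * np.cos(theta_phase + np.pi)  # peaks at theta trough
uncoupled_amp = np.ones_like(t)
```

In real recordings the phase and amplitude traces would first be extracted by band-pass filtering and a Hilbert transform; here they are constructed directly so the coupled case yields a large index and the uncoupled case an index near zero.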
Kirk D.S.,University of Texas at Austin
Criminology | Year: 2012
Many former prisoners return home to the same residential environment, with the same criminal opportunities and criminal peers, where they resided before incarceration. If the path to desistance from crime largely requires knifing off from past situations and establishing a new set of routine activities, then returning to one's old environment and routines may drastically limit an ex-prisoner's already dismal chances of desisting from crime. This study tests these ideas by examining how forced residential migration caused by Hurricane Katrina affected the likelihood of reincarceration among a sample of ex-prisoners originally from New Orleans, LA. Property damage from the hurricane induced some ex-prisoners who otherwise would have moved back to their former neighborhoods to move to new neighborhoods. Findings from an instrumental variables survival analysis reveal that those parolees who moved to a new parish following release were substantially less likely to be reincarcerated during the first 3 years after release than those ex-offenders who moved back to the parish where they were originally convicted. Moreover, at no point in the 3-year time period was the hazard of reincarceration greater for those parolees who moved than for those who returned to the same parish. © 2012 American Society of Criminology.
Grauman K.,University of Texas at Austin
Communications of the ACM | Year: 2010
Siegel J.J.,University of Texas at Austin
Journal of Neurophysiology | Year: 2014
Persistent spiking in response to a discrete stimulus is considered to reflect the active maintenance of a memory for that stimulus until a behavioral response is made. This response pattern has been reported in learning paradigms that impose a temporal gap between stimulus presentation and behavioral response, including trace eyeblink conditioning. However, it is unknown whether persistent responses are acquired as a function of learning or simply represent an already existing category of response type. This fundamental question was addressed by recording single-unit activity in the medial prefrontal cortex (mPFC) of rabbits during the initial learning phase of trace eyeblink conditioning. Persistent responses to the tone conditioned stimulus were observed in the mPFC during the very first training sessions. Further analysis revealed that most cells with persistent responses showed this pattern during the very first training trial, before animals had experienced paired training. However, persistent cells showed reliable decreases in response magnitude over the first training session, which were not observed on the second day of training or for sessions in which learning criterion was met. This modification of response magnitude was specific to persistent responses and was not observed for cells showing phasic tone-evoked responses. The data suggest that persistent responses to discrete stimuli do not require learning but that the ongoing robustness of such responses over the course of training is modified as a result of experience. Putative mechanisms for this modification are discussed, including changes in cellular or network properties, neuromodulatory tone, and/or the synaptic efficacy of tone-associated inputs. © 2014, J Neurophysiol. All rights reserved.
Tanriverdi H.,University of Texas at Austin |
Uysal V.B.,University of Oklahoma
Information Systems Research | Year: 2011
This study develops and tests the idea that the cross-business information technology integration (CBITI) capability of an acquirer creates significant value for shareholders of the acquirer in mergers and acquisitions (M&A). In M&A, integrating the IT systems and IT management processes of acquirer and target could generate benefits such as (a) the consolidation of IT resources and the reduction of overall IT costs of the combined firm, (b) the development of an IT-based coordination mechanism and the realization of cross-firm business synergies, (c) the minimization of potential disruptions to business operations, and (d) greater ability to comply with relevant laws and regulations and the reduction of regulatory compliance costs. We test these ideas in a sample of 141 acquisitions conducted by 86 Fortune 1000 firms. In the short run, acquirers that have high levels of CBITI capabilities receive positive and significant cumulative abnormal returns to their M&A announcements. Announcement period returns indicate that the capital markets value CBITI similarly in same-industry and different-industry acquisitions. In the long run, acquirers with high levels of CBITI capabilities obtain significantly higher abnormal operating performance. They create significantly greater value in complementary acquisitions from different industries than in related acquisitions from the same industry. The findings have important implications for M&A research and practice. © 2011 INFORMS.
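The announcement-period cumulative abnormal returns used in event studies of this kind are computed against a market model fitted over a pre-event estimation window. A generic sketch on synthetic data (the return series, beta, and announcement effect below are hypothetical, not the paper's sample):

```python
import numpy as np

def cumulative_abnormal_return(stock_ret, market_ret, est_window, event_window):
    """Market-model event study: fit R_i = alpha + beta * R_m over the
    estimation window, then sum abnormal returns over the event window."""
    beta, alpha = np.polyfit(market_ret[est_window], stock_ret[est_window], 1)
    abnormal = stock_ret[event_window] - (alpha + beta * market_ret[event_window])
    return abnormal.sum()

# Hypothetical acquirer: 100 estimation days, then a 3-day announcement
# window carrying a +2% daily abnormal return.
rng = np.random.default_rng(1)
market = rng.normal(0.0, 0.01, 120)
stock = 0.001 + 1.2 * market          # market model holds exactly pre-event
stock[100:103] += 0.02                # hypothetical announcement effect
car = cumulative_abnormal_return(stock, market, slice(0, 100), slice(100, 103))
```

In the actual study the announcement-window CARs are then related to measured CBITI capability across the 141-acquisition sample; the sketch only shows the return-construction step.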
Levin D.A.,University of Texas at Austin
Journal of Ecology | Year: 2010
1. Pollen-mediated gene flow from ecological core populations may hamper the ability of populations to invade new, closely adjacent habitats. The potential for the invasion of a new habitat is enhanced when selection against immigrants is augmented by the elevation of self-fertilization. This shift imparts a measure of reproductive isolation to the invader. 2. Whereas genetically mediated changes in selfing often accompany niche shifts in plants, the elevation of self-fertilization by plastic responses to novel, stressful, environments has not been explored. Here I will show that invaders of novel habitats may have elevated levels of self-fertility mediated by alterations in floral form and self-compatibility. 3. Heightened selfing facilitates the evolution of local adaptation by increasing the additive genetic variance for tolerance to the new habitat. Heightened selfing also facilitates local adaptation by increasing reproductive assurance. 4. Elevated levels of selfing based on developmental responses are immediate, as opposed to that obtained by multigenerational selection. 5. Synthesis. The likelihood of a niche shift will be substantially enhanced if the invader is protected from the swamping effect of gene flow from a nearby core population. This protection may be afforded by an environment-induced elevation of self-fertility. The likelihood of a niche shift also is enhanced by an increase in reproductive assurance which may be vital to the survival of an invading population whose reproductive success is density-dependent. © 2010 The Author. Journal compilation © 2010 British Ecological Society.
Garg R.,University of Texas at Austin |
Telang R.,Carnegie Mellon University
MIS Quarterly: Management Information Systems | Year: 2013
With an abundance of products available online, many online retailers provide sales rankings to make it easier for consumers to find the best-selling products. Product rankings were successfully implemented online a decade ago by Amazon, and more recently by Apple's App Store. However, neither market provides actual download data, a very useful statistic for both practitioners and researchers. In the past, researchers developed various strategies that allowed them to infer demand from rank data. Almost all of that work is based on experiments that shift sales, or on collaboration with a vendor to obtain actual sales data. In this research, we present an innovative method to use public data to infer the rank-demand relationship for the paid apps on Apple's iTunes App Store. We find that the top-ranked paid app for iPhone generates 150 times more downloads compared to the paid app ranked at 200. Similarly, the top paid app on iPad generates 120 times more downloads compared to the paid app ranked at 200. We conclude with a discussion on an extension of this framework to the Android platform, in-app purchases, and free apps.
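Rank-demand relationships of this kind are conventionally modeled as a power law, downloads ∝ rank^(−θ). Taking the headline figures at face value pins down θ as a back-of-the-envelope check; this functional form is a standard assumption in the literature, not necessarily the exact specification estimated in the paper.

```python
import math

def power_law_exponent(ratio, rank):
    """Exponent theta for downloads ~ rank^(-theta), given that the
    rank-1 product generates `ratio` times the downloads of `rank`."""
    return math.log(ratio) / math.log(rank)

# Back-of-the-envelope from the figures quoted in the abstract:
theta_iphone = power_law_exponent(150, 200)  # rank 1 vs rank 200, iPhone
theta_ipad = power_law_exponent(120, 200)    # rank 1 vs rank 200, iPad
```

The two platforms imply similar exponents (roughly 0.95 and 0.90), consistent with a steep but comparable rank-demand curve on both devices.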
Malcom J.W.,University of Texas at Austin
PLoS ONE | Year: 2011
One of the goals of biology is to bridge levels of organization. Recent technological advances are enabling us to span from genetic sequence to traits, and then from traits to ecological dynamics. The quantitative genetics parameter heritability describes how quickly a trait can evolve, and in turn describes how quickly a population can recover from an environmental change. Here I propose that we can link the details of the genetic architecture of a quantitative trait (i.e., the number of underlying genes and their relationships in a network) to population recovery rates by way of heritability. I test this hypothesis using a set of agent-based models in which individuals possess one of two network topologies or a linear genotype-phenotype map, 16-256 genes underlying the trait, and a variety of mutation and recombination rates and degrees of environmental change. I find that the network architectures introduce extensive directional epistasis that systematically hides and reveals additive genetic variance and affects heritability: network size, topology, and recombination explain 81% of the variance in average heritability in a stable environment. Network size and topology, the width of the fitness function, pre-change additive variance, and certain interactions account for ~75% of the variance in population recovery times after a sudden environmental change. These results suggest that not only the amount of additive variance, but importantly the number of loci across which it is distributed, is important in regulating the rate at which a trait can evolve and populations can recover. Taken in conjunction with previous research focused on differences in degree of network connectivity, these results provide a set of theoretical expectations and testable hypotheses for biologists working to span levels of organization from the genotype to the phenotype, and from the phenotype to the environment. © 2011 Jacob W. Malcom.
Ren D.,University of Texas at Austin
Journal of Renewable and Sustainable Energy | Year: 2010
The use of wind energy reduces our greenhouse gas emissions into the atmosphere. In this study, we proposed a generic power-law relationship between global warming and the usable wind energy (as bounded by Betz's law). The power law index (~4, region dependent) is then determined using simulated atmospheric parameters from eight global coupled ocean-atmosphere climate models (CGCMs). It is found that the power-law relationship holds across all eight climate models and also is time scale independent. Reduction of wind power scales with the degree of warming according to a generic power-law relationship. Thus, the earlier we switch to clean energy, and thereby decrease the global climate warming trend, the more cost-effective will be the harnessing of wind energy. This relationship is an area-averaged consequence of the reduced poleward temperature gradient as the climate warms during the 21st Century; it does not imply spatial uniformity over a region of interest. © 2010 American Institute of Physics.
Poldrack R.A.,University of Texas at Austin
Developmental Cognitive Neuroscience | Year: 2015
It is common in the cognitive neuroscience literature to explain differences in activation in terms of differences in the "efficiency" of neural function. I argue here that this usage of the concept of efficiency is empty and simply redescribes activation differences rather than providing a useful explanation of them. I examine a number of possible explanations for differential activation in terms of task performance, neuronal computation, neuronal energetics, and network organization. While the concept of "efficiency" is vacuous as it is commonly employed in the neuroimaging literature, an examination of brain development in the context of neural coding, neuroenergetics, and network structure provides a roadmap for future investigation, which is fundamental to an improved understanding of developmental effects and group differences in neuroimaging signals. © 2014 The Author. Published by Elsevier Ltd.
Dey A.,University of Texas at Austin
Journal of High Energy Physics | Year: 2012
Mirror Symmetry for a large class of three-dimensional N = 4 supersymmetric gauge theories has a natural explanation in terms of M-theory compactified on a product of ALE spaces. A pair of such mirror duals can be described as two different deformations of the eleven-dimensional supergravity background M = R^(2,1) × ALE_1 × ALE_2, to which they flow in the deep IR. Using the A-D-E classification of ALE spaces, we present a neat way to catalogue dual quiver gauge theories that arise in this fashion. In addition to the well-known examples studied in [1, 2], this procedure leads to new sets of dual theories. For a certain subset of dual theories which arise from the aforementioned M-theory background with an A-type ALE_1 and a D-type ALE_2, we verify the duality explicitly by a computation of partition functions of the theories on S^3, using localization techniques. We derive the relevant mirror map and discuss its agreement with predictions from the Type IIB brane construction for these theories. © 2012 SISSA.
Ravikumar I.,Indian Association for The Cultivation of Science |
Ravikumar I.,University of Texas at Austin |
Ghosh P.,Indian Association for The Cultivation of Science
Chemical Society Reviews | Year: 2012
This tutorial review focuses on some recent aspects in the development of synthetic receptors for selective sulfate anion recognition and separation, with special emphasis on: (i) receptors for selective recognition of sulfate in organic and aqueous media and (ii) receptors for separation of sulfate from water via liquid-liquid extraction and crystallization. © 2012 The Royal Society of Chemistry.
Weinberg S.,University of Texas at Austin
Physical Review A - Atomic, Molecular, and Optical Physics | Year: 2012
Modifications of quantum mechanics are considered, in which the state vector of any system, large or small, undergoes a stochastic evolution. The general class of theories is described, in which the probability distribution of the state vector collapses to a sum of δ functions, one for each possible final state, with coefficients given by the Born rule. © 2012 American Physical Society.
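The structure of such theories can be caricatured with a toy model: if the squared amplitude p of one outcome evolves as an unbiased (martingale) random walk absorbed at 0 and 1, then by the optional stopping theorem the probability of collapsing to that outcome equals its initial value p, i.e., the Born rule emerges automatically. This is a cartoon of the general mechanism, not Weinberg's equations.

```python
import random

def collapse_once(p, step_frac=0.5, tol=1e-6, rng=random):
    """Evolve the squared amplitude p of one outcome by an unbiased
    multiplicative random walk (a martingale bounded in [0, 1]) until
    it is absorbed near 0 or 1. Returns 1 if the state collapsed to
    the first outcome, else 0."""
    while tol < p < 1.0 - tol:
        step = step_frac * min(p, 1.0 - p)   # keeps p strictly inside [0, 1]
        p += step if rng.random() < 0.5 else -step
    return 1 if p >= 0.5 else 0

def born_frequency(p0, trials=10_000, seed=7):
    """Empirical collapse frequency over many trials; the martingale
    property implies this converges to p0 (the Born rule)."""
    rng = random.Random(seed)
    return sum(collapse_once(p0, rng=rng) for _ in range(trials)) / trials
```

Running many trials from an initial squared amplitude of 0.3 yields a collapse frequency close to 0.3, illustrating how Born-rule coefficients can arise from stochastic dynamics without being put in by hand.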
Giovambattista N.,Brooklyn College |
Rossky P.J.,University of Texas at Austin |
Debenedetti P.G.,Princeton University
Annual Review of Physical Chemistry | Year: 2012
The behavior of water confined on nanometer length scales is important in a diverse set of technical and scientific contexts, ranging from the performance of fuel cells and biological molecular machines to the design of self-assembling nanoscale materials. Here, we review recent insights into the structure and thermodynamics of confined water that have been elucidated primarily by computer simulation studies. We emphasize investigations in which interfacial chemistry and molecular topography are varied systematically and in which a wide range of thermodynamic conditions of temperature and pressure are explored. We consider homogeneous interfaces ranging from the simplest hard wall to chemically realistic, but structurally ideal, hydrophobic and hydrophilic surfaces, and the continuous scale of surface polarity is investigated. Features associated with interface heterogeneities arising from chemical patterning or from the natural characteristics of protein surfaces are discussed. Finally, we provide our thoughts on important directions for further studies. © Copyright ©2012 by Annual Reviews. All rights reserved.
Mueller U.G.,University of Texas at Austin
Current Opinion in Microbiology | Year: 2012
The symbiosis between fungus-farming ants (Attini, Formicidae), their cultivated fungi, garden-infecting Escovopsis pathogens, and Pseudonocardia bacteria on the ant integument has been popularized as an example of ant-Escovopsis-Pseudonocardia co-evolution. Recent research could not verify earlier conclusions regarding antibiotic-secreting, integumental Pseudonocardia that co-evolve to specifically suppress Escovopsis disease in an ancient co-evolutionary arms-race. Rather than long-term association with a single, co-evolving Pseudonocardia strain, attine ants accumulate complex, dynamic biofilms on their integument and in their gardens. Emerging views are that the integumental biofilms protect the ants primarily against ant diseases, whereas garden biofilms protect primarily against garden diseases; attine ants selectively recruit ('screen in') microbes into their biofilms; and the biofilms of ants and gardens serve diverse functions beyond disease-suppression. © 2012 Elsevier Ltd.
Singh A.,Cornell University |
Peppas N.A.,University of Texas at Austin
Advanced Materials | Year: 2014
For over two decades, immunologists and biomaterials scientists have co-existed in parallel worlds with the rationale of understanding the molecular profile of immune responses to vaccination, implantation, and treating incurable diseases. Much of the field of biomaterial-based immunotherapy has relied on evaluating model antigens such as chicken egg ovalbumin in mouse models, but their relevance to humans has been a point of much discussion. Nevertheless, such model antigens have provided important insights into the mechanisms of immune regulation and served as a proof-of-concept for a plethora of biomaterial-based vaccines. After years of extensive development of numerous biomaterials for immunomodulation, it is only recently that an experimental scaffold vaccine implanted beneath the skin has begun to use the human model to study the immune responses to cancer vaccination by co-delivering patient-derived tumor lysates and immunomodulatory proteins. If successful, this scaffold vaccine will change the way we approach untreatable cancers and, more importantly, will allow a faster and more rational translation of therapeutic regimes to other cancers, chronic infections, and autoimmune diseases. Most materials reviews have focused on immunomodulatory adjuvants and micro- and nanoparticles. Here we provide insight into emerging hydrogel- and scaffold-based immunomodulatory approaches that continue to demonstrate efficacy against immune-associated diseases. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Rowe T.B.,University of Texas at Austin
Proceedings. Biological sciences / The Royal Society | Year: 2011
Sauropodomorph dinosaurs originated in the Southern Hemisphere in the Middle or Late Triassic and are commonly portrayed as spreading rapidly to all corners of Pangaea as part of a uniform Late Triassic to Early Jurassic cosmopolitan dinosaur fauna. Under this model, dispersal allegedly inhibited dinosaurian diversification, while vicariance and local extinction enhanced it. However, apomorphy-based analyses of the known fossil record indicate that sauropodomorphs were absent in North America until the Early Jurassic, reframing the temporal context of their arrival. We describe a new taxon from the Kayenta Formation of Arizona that comprises the third diagnosable sauropodomorph from the Early Jurassic of North America. We analysed its relationships to test whether sauropodomorphs reached North America in a single sweepstakes event or in separate dispersals. Our finding of separate arrivals by all three taxa suggests dispersal as a chief factor in dinosaurian diversification during at least the early Mesozoic. It questions whether a 'cosmopolitan' dinosaur fauna ever existed, and corroborates that vicariance, extinction and dispersal did not operate uniformly in time or under uniform conditions during the Mesozoic. Their relative importance is best measured in narrow time slices and circumscribed geographical regions.
Golding N.L.,University of Texas at Austin |
Oertel D.,University of Wisconsin - Madison
Journal of Physiology | Year: 2012
Some neurons in the mammalian auditory system are able to detect and report the coincident firing of inputs with remarkable temporal precision. A strong, low-voltage-activated potassium conductance (gKL) at the cell body and dendrites gives these neurons sensitivity to the rate of depolarization by EPSPs, allowing neurons to assess the coincidence of the rising slopes of unitary EPSPs. Two groups of neurons in the brain stem, octopus cells in the posteroventral cochlear nucleus and principal cells of the medial superior olive (MSO), extract acoustic information by assessing coincident firing of their inputs over a submillisecond timescale and convey that information at rates of up to 1000 spikes s⁻¹. Octopus cells detect the coincident activation of groups of auditory nerve fibres by broadband transient sounds, compensating for the travelling wave delay by dendritic filtering, while MSO neurons detect coincident activation of similarly tuned neurons from each of the two ears through separate dendritic tufts. Each makes use of filtering that is introduced by the spatial distribution of inputs on dendrites. © 2012 The Authors. The Journal of Physiology © 2012 The Physiological Society.
Clemens N.T.,University of Texas at Austin |
Narayanaswamy V.,North Carolina State University
Annual Review of Fluid Mechanics | Year: 2014
Shock wave/boundary layer interactions occur in a wide range of supersonic internal and external flows, and often these interactions are associated with turbulent boundary layer separation. The resulting separated flow is associated with large-scale, low-frequency unsteadiness whose cause has been the subject of much attention and debate. In particular, some researchers have concluded that the source of low-frequency motions is in the upstream boundary layer, whereas others have argued for a downstream instability as the driving mechanism. Owing to substantial recent activity, we are close to developing a comprehensive understanding, albeit only in simplified flow configurations. A plausible model is that the interaction responds as a dynamical system that is forced by external disturbances. The low-frequency dynamics seem to be adequately described by a recently proposed shear layer entrainment-recharge mechanism. Upstream boundary layer fluctuations seem to be an important source of disturbances, but the evidence suggests that their impact is reduced with increasing size of the separated flow. Copyright © 2014 by Annual Reviews. All rights reserved.
Freeland-Graves J.H.,University of Texas at Austin |
Nitzke S.,University of Wisconsin - Madison
Journal of the Academy of Nutrition and Dietetics | Year: 2013
It is the position of the Academy of Nutrition and Dietetics that the total diet or overall pattern of food eaten is the most important focus of healthy eating. All foods can fit within this pattern if consumed in moderation with appropriate portion size and combined with physical activity. The Academy strives to communicate healthy eating messages that emphasize a balance of food and beverages within energy needs, rather than any one food or meal. Public policies and dietary patterns that support the total diet approach include the 2010 Dietary Guidelines for Americans, DASH (Dietary Approaches to Stop Hypertension) Diet, MyPlate, Let's Move, Nutrition Facts labels, Healthy People 2020, and the Dietary Reference Intakes. In contrast to the total diet approach, classification of specific foods as good or bad is overly simplistic and can foster unhealthy eating behaviors. Alternative approaches are necessary in some situations. Eating practices are dynamic and influenced by many factors, including taste and food preferences, weight concerns, physiology, time and convenience, environment, abundance of foods, economics, media/marketing, perceived product safety, culture, and attitudes/beliefs. To increase the effectiveness of nutrition education in promoting sensible food choices, skilled food and nutrition practitioners utilize appropriate behavioral theory and evidence-based strategies. Focusing on variety, moderation, and proportionality in the context of a healthy lifestyle, rather than targeting specific nutrients or foods, can help reduce consumer confusion and prevent unnecessary reliance on supplements. Proactive, empowering, and practical messages that emphasize the total diet approach promote positive lifestyle changes. © 2013 Academy of Nutrition and Dietetics.
Brodbelt J.S.,University of Texas at Austin
Journal of the American Society for Mass Spectrometry | Year: 2011
The development of new ion activation/dissociation methods is motivated by the need for more versatile ways to characterize structures of ions, especially in the growing arena of biological mass spectrometry in which better tools for determining sequences, modifications, interactions, and conformations of biopolymers are essential. Although most agree that collision-induced dissociation (CID) remains the gold standard for ion activation/dissociation, recent inroads in electron- and photon-based activation methods have cemented their role as outstanding alternatives. This article will focus on the impact of photodissociation, including its strengths and drawbacks as an analytical tool, and its potential for further development in the next decade. Moreover, the discussion will emphasize photodissociation in quadrupole ion traps, because that platform has been used for one of the greatest arrays of new applications over the past decade. © American Society for Mass Spectrometry, 2011.
Zeidouni M.,University of Texas at Austin
Advances in Water Resources | Year: 2014
Deep saline aquifers are commonly used for disposal and storage of various surface fluids. The target injection zone must be hydraulically isolated from overlying zones in order to ensure containment of the injected fluids. Improperly plugged nonoperational abandoned wells that penetrate the injection zone are the main potential leakage pathways. Leakage through such wells may cause an observable pressure signal in a zone overlying the injection zone; such a signal can be used to detect the leakage. In this paper we develop an analytical model to evaluate the pressure change induced by leakage through a well in a multilayer system. Unlike previous analytical models on the topic, our model uses a closed system, which may significantly affect the strength and behavior of the pressure signal induced by leakage. The analytical model is first presented for a two-layer system centered at the leaky well location. We evaluate the leakage-induced pressure change using the Laplace transform of Duhamel's superposition integral, yielding the solution in the Laplace domain. We then derive a late-time asymptotic solution using the final value theorem, which suggests that the leakage rate becomes constant after sufficient time. We then obtain the multilayer solution by extending the two-layer solution and presenting it in matrix form in the Laplace domain. We apply the solution to three examples. In the first example, we apply the analytical model to a two-layer system, investigating its behavior and comparing the results with a numerical solution. In order to demonstrate behavior and potential applications of the multilayer analytical model, we present two multilayer examples: one with identical layers and another, replicating a CO2 storage site, with dissimilar layers. The leakage-induced pressure change does not necessarily decrease as the distance increases from the injection zone toward the surface. © 2014 Elsevier Ltd.
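Laplace-domain solutions like the one described in this abstract are typically brought back to the time domain numerically. As an illustrative sketch (not the paper's actual pressure solution), the standard Gaver–Stehfest algorithm inverts a transform F(s) at a chosen time t; here it is checked against a known transform pair:

```python
from math import factorial, log

def stehfest_weights(n=12):
    # Gaver-Stehfest coefficients; n must be even
    half = n // 2
    V = []
    for k in range(1, n + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, half) + 1):
            s += (j ** half * factorial(2 * j)) / (
                factorial(half - j) * factorial(j) * factorial(j - 1)
                * factorial(k - j) * factorial(2 * j - k))
        V.append((-1) ** (k + half) * s)
    return V

def invert(F, t, n=12):
    """Approximate f(t) from its Laplace transform F(s) (Gaver-Stehfest)."""
    V = stehfest_weights(n)
    a = log(2.0) / t
    return a * sum(V[k - 1] * F(k * a) for k in range(1, n + 1))

# Sanity check on a known pair: F(s) = 1/(s + 1)  <->  f(t) = exp(-t)
approx = invert(lambda s: 1.0 / (s + 1.0), t=1.0)
# approx is close to exp(-1) ≈ 0.3679
```

In practice one would supply the paper's matrix-form Laplace-domain solution in place of the toy transform; Stehfest works well for the smooth, monotone pressure responses typical of well hydraulics.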
Moon I.K.,Sungkyunkwan University |
Lee J.,Sungkyunkwan University |
Ruoff R.S.,University of Texas at Austin |
Lee H.,Sungkyunkwan University
Nature Communications | Year: 2010
Reduced graphene oxides (RG-Os) have attracted considerable interest, given their potential applications in electronic and optoelectronic devices and circuits. However, very little is known regarding the chemically induced reduction method of graphene oxide (G-O) in both solution and gas phases, with the exception of the hydrazine-reducing agent, even though it is essential to use the vapour phase for the patterning of hydrophilic G-Os on prepatterned substrates and in situ reduction to hydrophobic RG-Os. In this paper, we report a novel reducing agent system (hydriodic acid with acetic acid, HI–AcOH) that allows for an efficient, one-pot reduction of solution-phased RG-O powder and vapour-phased RG-O (VRG-O) paper and thin films. The reducing agent system produced high-quality RG-Os at scale, resulting in highly conducting RG-O(HI–AcOH). Moreover, VRG-O(HI–AcOH) paper and thin films were prepared at low temperatures (40 °C) and were found to be applicable to flexible devices. This one-pot method is expected to advance research on highly conducting graphene platelets. © 2010 Macmillan Publishers Limited. All rights reserved.
Weinberg S.,University of Texas at Austin
Physical Review D - Particles, Fields, Gravitation and Cosmology | Year: 2010
Inflation is studied in the context of asymptotically safe theories of gravitation. Conditions are explored under which it is possible to have a long period of nearly exponential expansion that eventually comes to an end. © 2010 The American Physical Society.
Kaufman N.,University of Texas at Austin
Climatic Change | Year: 2012
Climate scientists currently predict there is a small but real possibility that climate change will lead to civilization threatening catastrophic events. Martin Weitzman has used this evidence along with his controversial "Dismal Theorem" to argue that integrated assessment models of climate change cannot be used to determine an optimal price for carbon dioxide. In this paper, I provide additional support for Weitzman's conclusions by running numerical simulations to estimate risk premiums toward climate catastrophes. Compared to the assumptions found in most integrated assessment models, I incorporate into the model a more realistic range of uncertainty for both climate catastrophes and societal risk aversion. The resulting range of risk premiums indicates that the conclusions drawn from integrated assessment models that do not incorporate the potential for climate catastrophes are too imprecise to support any particular policy recommendation. The analysis of this paper is more straightforward and less technical than Weitzman's, and therefore the conclusions should be accessible to a wider audience. © 2011 Springer Science+Business Media B.V.
Bakolas E.,University of Texas at Austin
Automatica | Year: 2013
We consider the problem of characterizing a generalized Voronoi diagram that is relevant to a special class of area assignment problems for multi-vehicle systems. It is assumed that the motion of each vehicle is described by a second order mechanical system with time-varying linear or affine dynamics. The proposed generalized Voronoi diagram encodes information regarding the proximity relations between the vehicles and arbitrary target points in the plane. These proximity relations are induced by an anisotropic (generalized) distance function that incorporates the vehicle dynamics. In particular, the generalized distance is taken to be the minimum control effort required for the transition of a vehicle to an arbitrary target point with a small terminal speed at a fixed final time. The space we wish to partition corresponds to the union of all the terminal positions that can be attained by each vehicle using finite control effort. Consequently, the partition space has lower dimension than the state space of each vehicle. We show that, in the general case, the solution to the proposed partitioning problem can be associated with a power Voronoi diagram generated by a set of spheres in a five-dimensional Euclidean space for the computation of which efficient techniques exist in the relevant literature.© 2013 Elsevier Ltd. All rights reserved.
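The power-diagram assignment rule underlying the result above can be sketched generically: each generator is a sphere (center, weight), and a point belongs to the generator minimizing the power distance. In the paper the generators live in a five-dimensional space derived from control effort; this toy uses arbitrary planar centers and weights for illustration:

```python
def power_distance(x, center, weight):
    """Power distance from point x to a sphere (center, weight = squared radius)."""
    return sum((xi - ci) ** 2 for xi, ci in zip(x, center)) - weight

def assign(point, generators):
    """Index of the generator whose power cell contains the point."""
    return min(range(len(generators)),
               key=lambda i: power_distance(point, *generators[i]))

# Two generators in the plane; the larger weight enlarges that generator's cell
gens = [((0.0, 0.0), 0.0), ((2.0, 0.0), 1.0)]
owner_mid = assign((1.0, 0.0), gens)   # midpoint: distances 1.0 vs 0.0
owner_near = assign((0.2, 0.0), gens)  # close to generator 0
```

Because the weights shift cell boundaries off the Euclidean bisectors, a power diagram can encode anisotropy such as the minimum-control-effort distances used for the vehicles.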
Wilke C.O.,University of Texas at Austin
PLoS Computational Biology | Year: 2012
Much molecular-evolution research is concerned with sequence analysis. Yet these sequences represent real, three-dimensional molecules with complex structure and function. Here I highlight a growing trend in the field to incorporate molecular structure and function into computational molecular-evolution work. I consider three focus areas: reconstruction and analysis of past evolutionary events, such as phylogenetic inference or methods to infer selection pressures; development of toy models and simulations to identify fundamental principles of molecular evolution; and atom-level, highly realistic computational modeling of molecular structure and function aimed at making predictions about possible future evolutionary events. © 2012 Claus O. Wilke.
Manthiram A.,University of Texas at Austin
Journal of Physical Chemistry Letters | Year: 2011
Lithium ion batteries have revolutionized the portable electronics market, and they are being intensively pursued now for transportation and stationary storage of renewable energies like solar and wind. The success of lithium ion technology for the latter applications will depend largely on the cost, safety, cycle life, energy, and power, which are in turn controlled by the component materials used. Accordingly, this Perspective focuses on the challenges and prospects associated with the electrode materials. Specifically, the issues associated with high-voltage and high-capacity cathodes as well as high-capacity anodes and the approaches to overcome them are presented. © 2011 American Chemical Society.
Winget M.A.,University of Texas at Austin
Journal of the American Society for Information Science and Technology | Year: 2011
Videogames are important cultural and economic artifacts. They also present challenges that anticipate the problems inherent in any complex digital interactive system. Not only are they digital and hence very difficult to preserve but they also are software systems that have significant hardware, peripheral, and network dependencies, which are difficult to collect and formally represent. This article reviews the literature related to videogame preservation. In addition to covering the traditional technology-related issues inherent in all digital preservation endeavors, this review also attempts to describe the complexities and relationships between the traditional acts of technology preservation, representation, and collection development. Future work should include the identification of important user groups, an examination of games' context of use, and the development of representational models to describe interaction of players with the game and the interactions between players playing the game. © 2011 ASIS&T.
Carlson W.D.,University of Texas at Austin
International Geology Review | Year: 2011
Analysis of the processes, kinetics, and microstructures that characterize porphyroblast crystallization identifies the primary factors that govern the progress of metamorphic reactions and highlights the importance of feedbacks among those factors. Commonly, the kinetics of nucleation and the kinetics of intergranular diffusion are rate-limiting in porphyroblast crystallization. That finding should inspire petrologic vigilance, as it implies strong potential for significant thermal overstepping of reactions, crystallization at high levels of chemical affinity, reactions that span protracted intervals of time and temperature, and limited length scales for chemical equilibration. © 2011 Taylor & Francis.
Kumar A.,University of Texas at Austin
International Journal of Nanomedicine | Year: 2011
The present study was designed to evaluate the extent to which pretreatment with microneedles can enhance skin permeation of nanoparticles in vitro and in vivo. Permeation of live bacteria, which are physically nanoparticles or microparticles, through mouse skin pretreated with microneedles was also studied to evaluate the potential risk of microbial infection. It was found that pretreatment of mouse skin with microneedles allowed permeation of solid lipid nanoparticles, size 230 nm, with ovalbumin conjugated on their surface. Transcutaneous immunization in a mouse skin area pretreated with microneedles with ovalbumin nanoparticles induced a stronger antiovalbumin antibody response than using ovalbumin alone. The dose of ovalbumin antigen determined whether microneedle-mediated transcutaneous immunization with ovalbumin nanoparticles induced a stronger immune response than subcutaneous injection of the same ovalbumin nanoparticles. Microneedle treatment permitted skin permeation of live Escherichia coli, but the extent of the permeation was not greater than that enabled by hypodermic injection. Transcutaneous immunization on a microneedle-treated skin area with antigens carried by nanoparticles can potentially induce a strong immune response, and the risk of bacterial infection associated with microneedle treatment is no greater than that with a hypodermic injection.
Yin M.,University of Texas at Austin
Journal of Statistical Physics | Year: 2013
The exponential family of random graphs is one of the most promising classes of network models. Dependence between the random edges is defined through certain finite subgraphs, analogous to the use of potential energy to provide dependence between particle states in a grand canonical ensemble of statistical physics. By adjusting the specific values of these subgraph densities, one can analyze the influence of various local features on the global structure of the network. Loosely put, a phase transition occurs when a singularity arises in the limiting free energy density, as it is the generating function for the limiting expectations of all thermodynamic observables. We derive the full phase diagram for a large family of 3-parameter exponential random graph models with attraction and show that they all consist of a first order surface phase transition bordered by a second order critical curve. © 2013 Springer Science+Business Media New York.
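In such models a graph G on n nodes carries an unnormalized weight of the form exp(n² Σᵢ βᵢ tᵢ(G)), where the tᵢ are subgraph homomorphism densities. A minimal two-parameter (edge and triangle) sketch of that weight, not the paper's three-parameter family:

```python
import math
from itertools import combinations

def densities(n, edges):
    """Edge and triangle homomorphism densities of a simple graph on n nodes."""
    E = {frozenset(e) for e in edges}
    edge_d = 2 * len(E) / (n * n)          # ordered edge maps / n^2
    triangles = sum(
        1 for a, b, c in combinations(range(n), 3)
        if {frozenset((a, b)), frozenset((b, c)), frozenset((a, c))} <= E)
    tri_d = 6 * triangles / n ** 3          # ordered triangle maps / n^3
    return edge_d, tri_d

def ergm_weight(n, edges, beta_edge, beta_tri):
    """Unnormalized weight exp(n^2 * (b1 * edge density + b2 * triangle density))."""
    e, t = densities(n, edges)
    return math.exp(n * n * (beta_edge * e + beta_tri * t))

# Complete graph on 3 nodes: edge density 2/3, triangle density 2/9
e, t = densities(3, [(0, 1), (1, 2), (0, 2)])
```

The free energy density in the abstract is (1/n²) times the log of the sum of these weights over all graphs; singularities of its n → ∞ limit in the β parameters mark the phase transitions.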
Bovik A.C.,University of Texas at Austin
Proceedings of the IEEE | Year: 2013
Finding ways to monitor and control the perceptual quality of digital visual media has become a pressing concern as the volume being transported and viewed continues to increase exponentially. This paper discusses the principles and methods of modern algorithms for automatically predicting the quality of visual signals. By casting the problem as analogous to assessing the efficacy of a visual communication system, it is possible to divide the quality assessment problem into understandable modeling subproblems. Along the way, we will visit models of natural images and videos, of visual perception, and a broad spectrum of applications. © 1963-2012 IEEE.
McDonald G.D.,University of Texas at Austin
Astrobiology | Year: 2010
It has been hypothesized in this journal and elsewhere, based on surveys of published data from prebiotic synthesis experiments and carbonaceous meteorite analyses, that basic amino acids such as lysine and arginine were not abundant on prebiotic Earth. If the basic amino acids were incorporated only rarely into the first peptides formed in that environment, it is important to understand what protobiotic chemistry is possible in their absence. As an initial test of the hypothesis that basic amino acid negative [BAA(-)] proteins could have performed at least a subset of protobiotic chemistry, the current work reports on a survey of 13 archaeal and 13 bacterial genomes that has identified 61 modern gene sequences coding for known or putative proteins not containing arginine or lysine. Eleven of the sequences found code for proteins whose functions are well known and important in the biochemistry of modern microbial life: lysine biosynthesis protein LysW, arginine cluster proteins, copper ion binding proteins, bacterial flagellar proteins, and PE or PPE family proteins. These data indicate that the lack of basic amino acids does not prevent peptides or proteins from serving useful structural and biochemical functions. However, as would be predicted from fundamental physicochemical principles, we see no fossil evidence of prebiotic BAA(-) peptide sequences capable of interacting directly with nucleic acids.
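Computationally, the genome screen described above amounts to filtering protein sequences for the absence of lysine (K) and arginine (R). A toy sketch of that filter, using hypothetical sequences rather than entries from the surveyed genomes:

```python
def lacks_basic_residues(seq):
    """True if a protein sequence contains neither lysine (K) nor arginine (R)."""
    return not set(seq.upper()) & {"K", "R"}

# Hypothetical toy sequences (NOT drawn from the 26 surveyed genomes)
proteome = {
    "protA": "MGDTTAGEMLNSS",
    "protB": "MKTAYIAQR",
}
baa_negative = [name for name, seq in proteome.items()
                if lacks_basic_residues(seq)]
# baa_negative == ["protA"]
```

Applied to full predicted proteomes, the same membership test yields the BAA(-) candidate set that the study then annotated by function.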
Miller J.,University of Texas at Austin
Geography Compass | Year: 2010
The use of species distribution models (SDM) to map and monitor animal and plant distributions has become increasingly important in the context of awareness of environmental change and its ecological consequences. From their original inception as resource inventory and conservation mapping tools, SDM have evolved along with the increasing variety and availability of statistical methods and digital biological and environmental data with which they are built in a geographic information system. Beyond predicting species distributions, these models have become an important and widely used decision-making tool for a variety of biogeographical applications, such as studying the effects of climate change, identifying potential protected areas, determining locations potentially susceptible to invasion, and mapping vector-borne disease spread and risk. This article outlines the steps involved in formulating an SDM, focuses on the conceptual and theoretical foundations on which it is based, and identifies issues that have merited recent, and will merit future, research attention. © 2010 The Author. Journal Compilation © 2010 Blackwell Publishing Ltd.
Smith T.G.,University of Texas at Austin
Journal of Peace Research | Year: 2014
While both academics and politicians have long acknowledged the connection between food price shocks and so-called ‘food riots’, this article asks whether rising domestic consumer food prices are a contributing cause of sociopolitical unrest, more broadly defined, in urban areas of Africa. In order to unravel the complex and circular relationship between rising food prices and unrest, an instrumental approach with country fixed effects is used to isolate causality at the country-month unit of analysis for the period 1990 through 2012. Two instrumental variables, changes in international grain commodity prices and local rainfall scarcity, are evaluated and used individually and jointly as instruments for changes in domestic food prices. The main finding is that a sudden increase in domestic food prices in a given month significantly increases the probability of urban unrest, especially spontaneous events and riots, in that month. Undeniably, more fundamental economic and political grievances are also drivers of such events and are likely to determine how the unrest ultimately manifests, even when triggered by rising food prices. Although more research is necessary to determine why people choose particular protest methods and targets, the findings of this research provide evidence that sociopolitical unrest of different types is driven, or at least triggered, by a consumer response to economic pressure from increasing food prices regardless of the cause of the increase. © The Author(s) 2014.
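The instrumental-variables strategy in this abstract can be sketched as two-stage least squares: regress the endogenous regressor (food prices) on the instrument (e.g., rainfall), then regress the outcome (unrest) on the fitted values. The synthetic data and coefficients below are illustrative assumptions, not the article's estimates:

```python
import numpy as np

def two_stage_least_squares(y, X, Z):
    """2SLS: columns of X are regressors (incl. a constant), Z are instruments."""
    # First stage: fitted values of X from a regression on the instruments
    X_hat = Z @ np.linalg.solve(Z.T @ Z, Z.T @ X)
    # Second stage: regress y on the fitted values
    return np.linalg.solve(X_hat.T @ X, X_hat.T @ y)

# Synthetic data: price drives unrest, but both share an unobserved shock;
# rainfall shifts price without directly affecting unrest (exclusion restriction)
rng = np.random.default_rng(0)
n = 5000
rain = rng.normal(size=n)
shock = rng.normal(size=n)
price = rain + shock + rng.normal(size=n)
unrest = 2.0 * price + 3.0 * shock + rng.normal(size=n)
X = np.column_stack([np.ones(n), price])
Z = np.column_stack([np.ones(n), rain])
beta = two_stage_least_squares(unrest, X, Z)
# beta[1] recovers the true effect of roughly 2.0; naive OLS would be
# biased upward because price and unrest share the confounding shock
```

The article's specification additionally includes country fixed effects and a second instrument (international grain prices), but the identifying logic is the same: the instrument must move food prices while affecting unrest only through them.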
Levin D.A.,University of Texas at Austin
Annals of Botany | Year: 2012
Background: The trailing edges of species ranges are becoming a subject of increasing interest as the environment changes due to global warming. Trailing edge populations are likely to face extinction because of a decline in numbers and an inability to evolve new adaptations with sufficient speed. Discussions of character change in the trailing edge have focused on physiological, exomorphic and phenological traits. The mating pattern within populations has not been part of the discourse, in spite of the fact that the mating pattern may affect the ability of populations to respond to environmental change and to maintain their sizes. In this paper, the case is made that a substantial increase in self-fertilization rates may occur via plastic responses to stress. Scope and Conclusions: Small populations on the trailing edge are especially vulnerable to environmental change because of inadequate levels of cross-fertilization. Evidence is presented that a deficiency of cross-seed production is due to inadequate pollinator services and a paucity of self-incompatibility alleles within populations. Evidence also is presented that if plants are self-compatible, self-fertilization may compensate in part for this deficiency through a stress-induced increase in levels of self-compatibility and stress-induced alterations in floral morphology that elevate self-pollination. Whereas increased self-fertility may afford populations the time to adapt to their changing environments, it can be concluded that increased selfing is not a panacea for the ills of environmental change, because it will lead to substantial reductions in genetic diversity, which may render adaptation unlikely. © The Author 2011.
Wick T.,University of Heidelberg | Wick T.,University of Texas at Austin