Austin, TX, United States

The University of Texas at Austin is a public research university and the flagship institution of the University of Texas System. Founded in 1883 as "The University of Texas," its campus is located in Austin, approximately 1 mile from the Texas State Capitol. The institution has the fifth-largest single-campus enrollment in the nation, with over 50,000 undergraduate and graduate students and over 24,000 faculty and staff. The university has been labeled one of the "Public Ivies," a publicly funded university considered to provide a quality of education comparable to that of the Ivy League.

UT Austin was inducted into the Association of American Universities in 1929, becoming only the third university in the American South to be elected. It is a major center for academic research, with research expenditures exceeding $640 million for the 2009–2010 school year. The university houses seven museums and seventeen libraries, including the Lyndon Baines Johnson Library and Museum and the Blanton Museum of Art, and operates various auxiliary research facilities, such as the J. J. Pickle Research Campus and the McDonald Observatory. University faculty include recipients of the Nobel Prize, the Pulitzer Prize, the Wolf Prize, and the National Medal of Science, among many other awards.

UT Austin student athletes compete as the Texas Longhorns and are members of the Big 12 Conference. Its Longhorn Network is unique in that it is the only sports network featuring the college sports of a single university. The Longhorns have won four NCAA Division I National Football Championships and six NCAA Division I National Baseball Championships, and have claimed more titles in men's and women's sports than any other school in the Big 12 since the league was founded in 1996. Current and former UT Austin athletes have won 130 Olympic medals, including 14 in Beijing in 2008 and 13 in London in 2012. The university was recognized by Sports Illustrated as "America's Best Sports College" in 2002.
Wikipedia.


Miller K., University of Texas at Austin
Maternal and Child Health Journal | Year: 2014

Children with special health care needs (CSHCN) often require services from multiple health care providers. This study's objective is to evaluate whether CSHCN, enrolled in Medicaid or the Children's Health Insurance Program (CHIP) and receiving care coordination services, experience improved access to mental and specialty health care services. Using data from the 2009-2010 National Survey of Children with Special Health Care Needs, two separate outcomes are used to evaluate children's access to care: receipt of needed mental and specialty care and timely access to services. Using propensity score matching, CSHCN's propensity for receiving care coordination services is derived and an assessment is made of care coordination's impact on the receipt of health care and whether care is delayed. Results demonstrate that care coordination is positively associated with whether a child receives the mental and specialty care that they need, regardless of whether or not that coordination is perceived to be adequate by parents. However, receiving care coordination services that parents perceive to be adequate has a larger impact on the timeliness with which care is received. This study indicates that care coordination is associated with an increased ability for CSHCN to access needed mental and specialty care. States should consider offering care coordination services that support provider communication and fulfill families' coordination needs to the CSHCN enrolled in their Medicaid and CHIP programs. © 2013 Springer Science+Business Media New York.
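The propensity-score matching step described in this abstract can be sketched in a few lines. The numbers below are hypothetical, not the survey's data: each treated child is paired with the untreated child whose estimated propensity score is closest, and the matched outcome differences are averaged.

```python
# Toy sketch of nearest-neighbour propensity-score matching
# (illustrative only, not the study's code or data).

def match_and_estimate(treated, untreated):
    """treated/untreated: lists of (propensity_score, outcome) tuples.
    Returns the average treated-minus-matched-control outcome."""
    diffs = []
    for score, outcome in treated:
        # match each treated unit to the closest untreated propensity score
        _, ctrl_outcome = min(untreated, key=lambda u: abs(u[0] - score))
        diffs.append(outcome - ctrl_outcome)
    return sum(diffs) / len(diffs)

# hypothetical (score, received-needed-care) pairs
treated = [(0.8, 1), (0.6, 1), (0.4, 0)]
untreated = [(0.79, 0), (0.61, 1), (0.35, 0), (0.2, 0)]
effect = match_and_estimate(treated, untreated)
```

In practice the scores themselves come from a model (e.g., logistic regression of treatment on covariates); here they are taken as given.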


Maciejko J., University of Alberta | Maciejko J., Canadian Institute for Advanced Research | Fiete G.A., University of Texas at Austin
Nature Physics | Year: 2015

Topological insulators have emerged as a major topic of condensed matter physics research, with several novel applications proposed. Although there are now a number of established experimental examples of materials in this class, all of them can be described by theories based on electronic band structure, which implies that they do not possess electronic correlations strong enough to fundamentally change this theoretical description. Here, we review recent theoretical progress in the description of a class of strongly correlated topological insulators - fractionalized topological insulators - where band theory fails owing to the fractionalization of the electron into other degrees of freedom.


Moore A.L., IBM | Shi L., University of Texas at Austin
Materials Today | Year: 2014

The rapid development of faster, cheaper, and more powerful computing has led to some of the most important technological and societal advances in modern history. However, the physical means associated with enhancing computing capabilities at the device and die levels have also created a very challenging set of circumstances for keeping electronic devices cool, a critical factor in determining their speed, efficiency, and reliability. With advances in nanoelectronics and the emergence of new application areas such as three-dimensional chip stack architectures and flexible electronics, now more than ever there are both needs and opportunities for novel materials to help address some of these pressing thermal management challenges. In this paper a number of cubic crystals, two-dimensional layered materials, nanostructure networks and composites, molecular layers and surface functionalization, and aligned polymer structures are examined for potential applications as heat spreading layers and substrates, thermal interface materials, and underfill materials in future-generation electronics. © 2014 Elsevier Ltd.


Atkinson L., University of Texas at Austin
International Journal of Consumer Studies | Year: 2013

Smartphone-based mobile advertising is a promising marketing vehicle, especially in retail and point-of-purchase environments, yet we still do not know enough about what motivates consumers to turn to mobile advertising. This is particularly true of new forms of mobile advertising, like quick response (QR) codes, which rely on pull-based approaches, rather than push-style advertising. This study addresses that gap by exploring various antecedents that encourage consumer use of mobile advertising. Relying on the uses and gratifications approach as a theoretical framework, this study tests three antecedents of new media adoption: institutional trust, involvement and market mavenism. Data are drawn from a representative sample of 401 American adults and their willingness to use QR codes in shopping environments within the context of purchasing sustainable products. The data were analysed using ordinary least squares regression. Results indicate government trust, buycotting and market mavenism are positively related to consumers' willingness to use mobile phone-based QR code advertising whereas corporate trust is negatively related. The results offer important theoretical and managerial implications: first, there is a need to include broader institutional trust variables in the uses and gratifications framework of new media use; second, QR code content should be carefully crafted to provide meaningful, usable information for involved consumers; and third, market mavens' tendency to share information should be harnessed by providing QR code content that is relevant, interesting and easily passed on to other consumers. © 2013 John Wiley & Sons Ltd.
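Ordinary least squares, the analysis method named above, can be illustrated with a minimal sketch on made-up data (the predictor and outcome values below are invented, not the study's measures):

```python
# Minimal OLS sketch: fit willingness-to-use on two hypothetical
# predictors by adding an intercept column and solving least squares.
import numpy as np

# invented data: columns might stand in for, e.g., trust and mavenism scores
X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 4.0], [4.0, 3.0], [5.0, 5.0]])
y = np.array([1.0, 1.5, 3.0, 3.5, 5.0])   # invented outcome

X1 = np.column_stack([np.ones(len(X)), X])   # prepend intercept
beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
predicted = X1 @ beta
```

The fitted coefficients in `beta` are the intercept followed by one slope per predictor; sign and significance of such slopes are what the study interprets.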


Miller J.A., University of Texas at Austin
Progress in Physical Geography | Year: 2014

Species distribution models (SDMs) have become a dominant paradigm for quantifying species-environment relationships, and both the models and their outcomes have seen widespread use in conservation studies, particularly in the context of climate change research. With the growing interest in SDMs, extensive comparative studies have been undertaken. However, few generalizations and recommendations have resulted from these empirical studies, largely due to the confounding effects of differences in and interactions among the statistical methods, species traits, data characteristics, and accuracy metrics considered. This progress report addresses 'virtual species distribution models': the use of spatially explicit simulated data to represent a 'true' species distribution in order to evaluate aspects of model conceptualization and implementation. Simulating a 'true' species distribution, or a virtual species distribution, and systematically testing how these aspects affect SDMs, can provide an important baseline and generate new insights into how these issues affect model outcomes. © The Author(s) 2014.


Matouschek A., University of Texas at Austin | Finley D., Harvard University
Science | Year: 2012

The discovery in archaea of an alternative proteasome based on Cdc48 provides insights into the evolution of protein degradation machines.


A statistical rock-physics technique, based on well data, is presented that provides estimates of fracture density in the Middle Bakken Siltstone and their associated uncertainty. Geologic and hydrocarbon-charging history of the Middle Bakken indicate multiple sets of fractures, which justifies treating this unit as elastically isotropic. The generalized n-phase self-consistent model relates the elastic properties to composition, matrix porosity, and fracture porosity, with an assigned aspect ratio and volumetric fraction corresponding to each input. The modeling of bulk density as a function of total porosity supplies deterministic estimates of the composition. Analysis of in situ stress and pore-stiffness calculations provides a range of fracture aspect ratios corresponding to open fractures. Stochastic simulation of fracture porosity introduces the statistical element of the technique: this treatment of fracture porosity enables the rock-physics model to be run over multiple realizations. Modeling results explain the measured bulk and shear moduli, with the bulk modulus more accurately described, and the results also provide statistical estimates of fracture porosity. Calculations using these estimates of fracture porosity, along with fracture aspect ratios, result in statistical estimates of fracture density for each depth value in the Middle Bakken unit. Values of fracture density fall within imposed limits (<0.10). The results and technique demonstrated here could be applied to field seismic data to identify locations of increased fracture density. These locations might indicate areas of increased permeability in the Middle Bakken Siltstone. © 2011 Society of Exploration Geophysicists.


Lake E.T., University of Texas at Austin
Earth and Planetary Science Letters | Year: 2013

The cooling and crystallization style of silicic magma bodies in the upper crust falls on a continuum between whole-chamber processes of convection, crystal settling, and cumulate formation and interface-driven processes of conduction and crystallization front migration. In the end-member case of vigorous convection and crystal settling, volatile saturation advances downward from the roof and upward from the floor throughout the chamber. In the end-member case of stagnant magma bodies, volatile saturation occurs along an inward propagating front from all sides of the chamber. The ambient thermal gradient primarily controls the propagation rate; warm (≥40°C/km) geothermal gradients lead to thick (>1200 m) crystal mush zones and slow crystallization front propagation, whereas cold (<40°C/km) geothermal gradients lead to rapid crystallization front propagation and thin (<1000 m) mush zones. Magma chamber geometry also exerts a first-order control on propagation rates; bodies with a high surface-to-magma-volume ratio and large Earth-surface-parallel faces exhibit more rapid propagation and thinner mush zones. Crystallization front propagation occurs at speeds of greater than 10 cm/yr (rhyolitic magma; 1 km thick sill geometry in a 20°C/km geotherm), far faster than diffusion of volatiles in magma and faster than bubbles can nucleate, grow, and ascend through the chamber. Numerical simulations indicate saturation front propagation is determined primarily by pressure and magma crystallization rate; above certain initial water contents (4.4 wt.% in a dacite) the mobile magma above 10 km depth is volatile-rich enough to always contain a saturation front. Saturation fronts propagate down from the magma chamber roof at lower water contents (3.3 wt.% in a dacite at 5 km depth), creating an upper saturated interface for most common (4-6 wt.%) magma water contents. This upper interface promotes the production of a fluid pocket underneath the apex of the magma chamber. If the fluid pocket grows faster than the fluid can escape into the wall rock, fluid accumulation and hydro-fracturing could trigger an eruption. © 2013 Elsevier B.V.


Carlson W.D., University of Texas at Austin
Journal of Metamorphic Geology | Year: 2010

Quantitative constraints on the accelerative effects of H2O on the kinetics of metamorphic reactions arise from a comparison of rates of intergranular diffusion of Al in natural systems that are fluid-saturated, hydrous but fluid-undersaturated, and nearly anhydrous. Widths of symplectitic reaction coronas around partially resorbed garnet crystals in the contact aureole of the Makhavinekh Lake Pluton, northern Labrador, combined with time-temperature histories from conductive thermal models, yield intergranular diffusivities for Al from ∼700-900 °C under nearly anhydrous conditions. Those rates, when extrapolated down temperature, are approximately three orders of magnitude slower than rates derived from re-analysis of garnet resorption coronas formed under hydrous but fluid-undersaturated conditions near 575 °C in rocks of the Llano Uplift of central Texas, which are in turn approximately four orders of magnitude slower than rates at comparable temperatures derived from numerical simulations of prograde garnet growth in fluid-saturated conditions in rocks from the Picuris Range of north-central New Mexico. Thus, even at constant temperature, rates of intergranular diffusion of Al - and corresponding length scales and timescales of metamorphic reaction and equilibration - may vary by as much as seven orders of magnitude across the range of H2O activities found in nature. © 2010 Blackwell Publishing Ltd.


Breecker D.O., University of Texas at Austin
Geochemistry, Geophysics, Geosystems | Year: 2013

A computer program (PBUQ) that uses Monte Carlo simulations to propagate uncertainty through regression equations and the equation for the paleosol carbonate CO2 paleobarometer is presented. PBUQ includes options for all of the common approaches to determining values for input variables and incorporates several recent advancements relevant to determining values for soil-respired CO2 concentrations, δ13C values of respired CO2, δ13C values of atmospheric CO2, and temperatures of soil carbonate formation. PBUQ is intended to improve confidence in paleoatmospheric CO2 research by helping researchers draw statistically significant conclusions. PBUQ can also be used to attribute and partition error among various sources and thereby advance this technique. Sensitivity analysis indicates that S(z) is the largest source of uncertainty for most paleosols and that uncertainty is minimized for soils in which CO2 is an evenly balanced mixture between soil-derived and atmospheric components. Evenly balanced mixtures are most likely for paleosols formed in deserts and for weakly developed paleosols. Development of proxies for soil-respired CO2 concentrations and δ13C values of soil-respired CO2 specifically for such soils is perhaps the most crucial next step for improving this technique. Currently, calcic paleosols are best used to test the significance of trends and/or differences among time slices in paleoatmospheric CO2 concentration. Application to quantifying Earth System Sensitivity will require large scale averaging of determinations from individual paleosols and/or reduced uncertainty associated with input variables. © 2013. American Geophysical Union. All Rights Reserved.
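The idea behind this kind of Monte Carlo uncertainty propagation can be sketched as follows. This is not PBUQ itself: the input distributions below are invented for illustration, and the barometer is written in the standard Cerling form Ca = S(z)·(δ13Cs − 1.0044·δ13Cr − 4.4)/(δ13Ca − δ13Cs).

```python
# Sketch of Monte Carlo propagation through the paleosol-carbonate CO2
# barometer (illustrative inputs, not PBUQ's options or data).
import random

random.seed(0)

def sample_ca():
    # draw each input from an assumed distribution (all values invented)
    Sz = random.gauss(2500.0, 1000.0)   # soil-respired CO2, ppm (largest uncertainty)
    d13Cs = random.gauss(-10.0, 0.5)    # soil CO2 (from carbonate), permil
    d13Cr = random.gauss(-24.0, 1.0)    # soil-respired CO2, permil
    d13Ca = random.gauss(-6.5, 0.5)     # atmospheric CO2, permil
    return Sz * (d13Cs - 1.0044 * d13Cr - 4.4) / (d13Ca - d13Cs)

draws = sorted(sample_ca() for _ in range(10000))
median = draws[len(draws) // 2]
lo68, hi68 = draws[1600], draws[8400]   # roughly a 68% interval
```

Repeating the draw many times turns point estimates into a distribution, from which a median and interval are read off, which is the essence of attributing and partitioning the error.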


Wallace L.M., University of Texas at Austin | Eberhart-Phillips D., Institute of Geological & Nuclear Sciences
Geophysical Research Letters | Year: 2013

We present previously undocumented deep (25-45 km depth), moderate-duration (2-3 months) slow slip events (SSEs) directly downdip of known shallow (<15 km depth), short-term (2-3 weeks) SSEs at the central Hikurangi margin, New Zealand. The moderate-duration SSEs produce 2-6 mm of horizontal displacement at continuous Global Positioning System stations in 2006 and 2008, similar to Cascadia SSEs. They involve 2-5 cm of slip over ∼5000 km2 of the subduction interface. Tremor occurs just downdip of the deep SSE source area. We also present results from a large, shallow (<12 km) SSE in 2013 just updip of the 2006/2008 SSEs. The location of deep SSEs between shallow, short-term SSEs and tremor indicates strong downdip variation of subduction interface slip behavior at central Hikurangi. The deep SSEs occur where Qp and Vp/Vs data suggest an accumulation of fluid-rich underplated sediment at the interface, consistent with the idea that SSEs occur under high fluid pressure. Key points: slow slip is highly variable in the downdip direction; slow slip occurs over a broad range of depths; and slow slip occurs in a region of inferred high fluid pressure. ©2013. American Geophysical Union. All Rights Reserved.


Bannar-Martin K.H., University of Texas at Austin
International Journal of Primatology | Year: 2014

At least three community assembly processes shape communities: 1) environmental niche-based processes, 2) spatial effects through dispersal limitation, and 3) biogeographic dispersal barriers. Previous studies suggested primate communities were dispersal limited, except in Madagascar, where environmental effects shaped communities. However, previous work did not investigate the role of biogeographic barriers. Further, the relative roles of these processes are potentially specific to taxonomic and/or functional groups owing to a group’s ecological preferences. I aimed to identify to what extent environmental factors, spatial effects, and biogeographic barriers shape patterns of primate and nonprimate community composition, in comparison to terrestrial and arboreal mammal communities in Madagascar. I analyzed occurrence data of nonvolant mammals and site-specific environmental and biogeographic data for 34 sites in Madagascar using principal coordinates of neighbor matrices and variation partitioning to test the relative contribution of environmental, spatial, and biogeographic effects to the patterns of community composition. Environmental and spatial effects almost equally explained nonvolant mammal communities. However, for primate and arboreal mammal communities, especially at broad spatial scales, spatial effects explained more of the variation than environmental effects. By contrast, only environmental effects explained nonprimate and terrestrial mammal distributions. Biogeographic effects were not significant for any community type. The difference between arboreal and terrestrial mammals is perhaps due to functional differences in dispersal ability, which habitat modification and a large impassable agricultural matrix in Madagascar may compound. Future research should consider the influence of functional diversity on patterns of community assembly. © 2014, Springer Science+Business Media New York.


The evolution of the mantle wedge below the Colorado Plateau during low-angle subduction has been investigated by analysis of fragments from the Navajo Volcanic Field (NVF), most from serpentinized ultramafic microbreccia (SUM) diatremes. In most SUM-hosted olivine, concentrations of Al and V are < 1 ppm, and Cr, Ca, and Na concentrations also are unusually low: most temperatures from three olivine thermometers (Al, Cr, and V-based) are in the range 530°C to 650°C. The temperatures are consistent with the presence of chlorite in some inclusions, and they support the utility of olivine thermometry for diverse mineral assemblages in cool peridotite. Most pressures calculated for discrete diopside grains in SUM correspond to depths in the range 80 to 120 km. Diopside is relatively high in Li (∼3.5 ppm), and two of five diopside grains have high Sr/Nd. SUM diatreme sources are inferred to be serpentine-rich mélange tectonically eroded from the forearc, transported above the Farallon slab, and incorporated into the lithosphere of the Plateau. Garnet peridotite xenoliths from minettes of the NVF record greater depths in the range 120 to 150 km. These garnet peridotites also may be from forearc mantle emplaced during Farallon subduction. Calculated temperatures preclude the possibility that asthenosphere was in contact with that lithosphere at or near 150 km depth for tens of m.y. before NVF formation. Structures observed in seismic images of mantle to depths of 150 km below the central Colorado Plateau may be related to Farallon subduction, not inherited from Proterozoic lithosphere. ©2013. American Geophysical Union. All Rights Reserved.


Dingwell J.B., University of Texas at Austin
PLoS computational biology | Year: 2010

It is widely accepted that humans and animals minimize energetic cost while walking. While such principles predict average behavior, they do not explain the variability observed in walking. For robust performance, walking movements must adapt at each step, not just on average. Here, we propose an analytical framework that reconciles issues of optimality, redundancy, and stochasticity. For human treadmill walking, we defined a goal function to formulate a precise mathematical definition of one possible control strategy: maintain constant speed at each stride. We recorded stride times and stride lengths from healthy subjects walking at five speeds. The specified goal function yielded a decomposition of stride-to-stride variations into new gait variables explicitly related to achieving the hypothesized strategy. Subjects exhibited greatly decreased variability for goal-relevant gait fluctuations directly related to achieving this strategy, but far greater variability for goal-irrelevant fluctuations. More importantly, humans immediately corrected goal-relevant deviations at each successive stride, while allowing goal-irrelevant deviations to persist across multiple strides. To demonstrate that this was not the only strategy people could have used to successfully accomplish the task, we created three surrogate data sets. Each tested a specific alternative hypothesis that subjects used a different strategy that made no reference to the hypothesized goal function. Humans did not adopt any of these viable alternative strategies. Finally, we developed a sequence of stochastic control models of stride-to-stride variability for walking, based on the Minimum Intervention Principle. We demonstrate that healthy humans are not precisely "optimal," but instead consistently slightly over-correct small deviations in walking speed at each stride. 
Our results reveal a new governing principle for regulating stride-to-stride fluctuations in human walking that acts independently of, but in parallel with, minimizing energetic cost. Thus, humans exploit task redundancies to achieve robust control while minimizing effort and allowing potentially beneficial motor variability.


Wang K., Beijing Normal University | Dickinson R.E., University of Texas at Austin
Reviews of Geophysics | Year: 2012

This review surveys the basic theories, observational methods, satellite algorithms, and land surface models for terrestrial evapotranspiration, E (or λE, i.e., latent heat flux), including a long-term variability and trends perspective. The basic theories used to estimate E are the Monin-Obukhov similarity theory (MOST), the Bowen ratio method, and the Penman-Monteith equation. The latter two theoretical expressions combine MOST with surface energy balance. Estimates of E can differ substantially between these three approaches because of their use of different input data. Surface and satellite-based measurement systems can provide accurate estimates of diurnal, daily, and annual variability of E. But their estimation of longer time variability is largely not established. A reasonable estimate of E as a global mean can be obtained from a surface water budget method, but its regional distribution is still rather uncertain. Current land surface models provide widely different ratios of the transpiration by vegetation to total E. This source of uncertainty therefore limits the capability of models to provide the sensitivities of E to precipitation deficits and land cover change. © 2012 by the American Geophysical Union.
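For reference, the Penman-Monteith combination equation mentioned above has the standard textbook form (common notation, not a formula specific to this review):

```latex
\lambda E = \frac{\Delta \,(R_n - G) + \rho_a c_p \,(e_s - e_a)/r_a}{\Delta + \gamma \left(1 + r_s/r_a\right)}
```

where Δ is the slope of the saturation vapour pressure curve, Rn − G the available energy, (es − ea) the vapour pressure deficit, ra and rs the aerodynamic and surface resistances, ρa and cp the air density and specific heat, and γ the psychrometric constant.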


Zeidouni M., University of Texas at Austin
Water Resources Research | Year: 2012

A fault is generally composed of a fault core surrounded by damage zones and can accommodate both lateral and vertical flow. In this paper we develop an analytical model to evaluate the leakage rate through a fault and corresponding pressure changes in the injection zone and a shallower permeable interval. The leaky fault connects the upper interval and the target zone, which are otherwise separated by a confining layer. We account for both across-fault and up-fault flow to honor the general architecture of the fault. We extend the two-formation analytical solution to consider multiple overlying formations with alternating confining layers offset by the fault. The solution methodology involves writing and transforming the coupled governing flow equations successively into the Laplace and Fourier domains and solving the resulting ordinary differential equations. The solution is verified through a comparison with existing analytical solutions for bounding cases. Two examples are presented to demonstrate the behavior and potential applications of our analytical model. © 2012. American Geophysical Union. All Rights Reserved.
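Solutions of this kind, derived in the Laplace domain, are commonly evaluated by numerical inversion. As an illustration (not the authors' code), a Gaver-Stehfest inversion recovers f(t) = e^(-t) from its transform F(s) = 1/(s + 1):

```python
# Gaver-Stehfest numerical inverse Laplace transform (standard algorithm),
# demonstrated on a transform pair with a known inverse.
import math

def stehfest_coeffs(N):
    """Stehfest weights V_1..V_N; N must be even."""
    half = N // 2
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, half) + 1):
            s += (j ** half * math.factorial(2 * j)
                  / (math.factorial(half - j) * math.factorial(j)
                     * math.factorial(j - 1) * math.factorial(k - j)
                     * math.factorial(2 * j - k)))
        V.append((-1) ** (k + half) * s)
    return V

def invert_laplace(F, t, N=12):
    """Approximate f(t) from its Laplace transform F(s)."""
    V = stehfest_coeffs(N)
    ln2 = math.log(2.0)
    return ln2 / t * sum(V[k - 1] * F(k * ln2 / t) for k in range(1, N + 1))

approx = invert_laplace(lambda s: 1.0 / (s + 1.0), t=1.0)
exact = math.exp(-1.0)
```

The same routine, applied to a pressure solution written in Laplace space, yields pressure versus time without an analytical inversion.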


Pearce J., University of Texas at Austin
International Journal of Hyperthermia | Year: 2011

Laser sources are under increasing study for in vivo tumour ablation. Photo-thermal ablation in tissues varies tremendously in governing physical phenomena, depending on wavelength, owing to wide variation in the optical properties of tissues, specifically the dominant chromophore and degree and type of scattering. Once converted into local tissue heating, however, the governing thermodynamic principles remain the same. Observed irreversible thermal alterations range from substantial structural disruption due to steam evolution in high-temperature, short-term activations, to the longer-term, low-temperature-rise initiation of the complex protein cascades that result in apoptosis and/or necroptosis. The usual mathematical model in hyperthermia studies, the thermal isoeffect dose, arising from the relative reaction rate formulation, is not an effective description of the higher temperature effects because multiple processes occur in parallel. The Arrhenius formulation based on the theory of absolute reaction rates is much more useful and descriptive in laser heating since the multiple thermodynamically independent processes may be studied separately. © 2011 Informa UK Ltd. All rights reserved.
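The Arrhenius formulation mentioned above accumulates damage as Ω(t) = ∫ A·exp(−Ea/RT) dt, with Ω = 1 conventionally taken as the threshold for irreversible injury. A minimal sketch for constant-temperature exposures, using illustrative rate parameters of the order of Henriques' classic skin values rather than anything from this paper:

```python
# Arrhenius thermal damage sketch (illustrative parameters, not this
# paper's): Omega = A * exp(-Ea / (R*T)) * t for a constant temperature.
import math

R = 8.314      # gas constant, J/(mol K)
A = 3.1e98     # frequency factor, 1/s (illustrative, Henriques-like)
Ea = 6.27e5    # activation energy, J/mol (illustrative, Henriques-like)

def damage(T_celsius, seconds):
    """Accumulated Arrhenius damage for a constant-temperature exposure."""
    T = T_celsius + 273.15
    rate = A * math.exp(-Ea / (R * T))
    return rate * seconds   # at constant T the integral is rate * time

omega_43 = damage(43.0, 3600.0)   # an hour at mild hyperthermia
omega_60 = damage(60.0, 1.0)      # one second at ablative temperature
```

The strong temperature sensitivity of the exponential is what lets one second near 60°C outdo an hour near 43°C, and studying each process with its own (A, Ea) pair is the separability advantage the abstract describes.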


Angel R.J., University of Texas at Austin
Journal of Cross-Cultural Gerontology | Year: 2013

The rapid growth in comparative survey research carried out in multiple countries, or among different language communities within a single nation, has given rise to a renewed concern with problems of translation. The fundamental problem facing the comparative survey researcher relates to the complexity and subjectivity of language, and the fact that complete equivalence of concepts in different linguistic, cultural, and social class contexts may be in principle impossible to achieve. Yet language remains the only medium through which information concerning subjective states, values, and beliefs can be collected. Language and the subjective constructs to which it refers are influenced by a wide range of cultural and social factors. This fact has particular relevance for comparative aging research since older individuals are often monolingual in their native languages and more tied to traditional cultures than younger individuals. This paper consists of a review of basic issues related to the nature of language and communication, and discusses the possibility of a truly scientific translation process. It outlines current best practices, and also raises questions related to the common practice of using information collected with translated survey instruments in ways that assume it reflects a comparable and quantifiable latent construct. © 2013 Springer Science+Business Media New York.


Fomel S., University of Texas at Austin
Geophysics | Year: 2010

Predictive painting is a numerical algorithm that spreads information in 3D volumes according to the local structure of seismic events. The algorithm consists of two steps. First, local spatially variable inline and crossline slopes of seismic events are estimated by the plane-wave-destruction method. Next, a seed trace is inserted in the volume, and the information contained in that trace is spread inside the volume, thus automatically "painting" the data space. Immediate applications of this technique include automatic horizon picking and flattening in applications to both prestack and poststack seismic data analysis. Synthetic and field data tests demonstrate the effectiveness of predictive painting. © 2010 Society of Exploration Geophysicists.
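A toy version of the painting step can make the idea concrete. Assuming precomputed local slopes (time shift per trace between neighboring traces; the values below are invented, and real predictive painting uses plane-wave destruction and recursive prediction rather than this simple walk), a seed time is spread left and right along the slopes, which amounts to automatic horizon picking:

```python
# Toy sketch of spreading a seed value along local event slopes
# (illustrative only; not Fomel's algorithm).

def paint_horizon(slopes, seed_trace, seed_time):
    """slopes[i]: local event slope (time shift per trace) between
    trace i and trace i+1. Returns the painted horizon time per trace."""
    n = len(slopes) + 1
    times = [0.0] * n
    times[seed_trace] = seed_time
    for i in range(seed_trace, n - 1):        # spread to the right of the seed
        times[i + 1] = times[i] + slopes[i]
    for i in range(seed_trace - 1, -1, -1):   # spread to the left of the seed
        times[i] = times[i + 1] - slopes[i]
    return times

slopes = [0.5, 0.5, -0.25, 0.0]               # hypothetical local slopes
horizon = paint_horizon(slopes, seed_trace=2, seed_time=10.0)
```

Painting every time sample of the seed trace this way, in 3D with inline and crossline slopes, "paints" the whole volume.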


Bazilevs Y., University of California at San Diego | Hsu M.-C., University of California at San Diego | Scott M.A., University of Texas at Austin
Computer Methods in Applied Mechanics and Engineering | Year: 2012

In this paper we develop a framework for fluid-structure interaction (FSI) modeling and simulation with emphasis on isogeometric analysis (IGA) and non-matching fluid-structure interface discretizations. We take the augmented Lagrangian approach to FSI as a point of departure. Here the Lagrange multiplier field is defined on the fluid-structure interface and is responsible for coupling of the two subsystems. Thus the FSI formulation does not rely on the continuity of the underlying function spaces across the fluid-structure interface in order to produce the correct coupling conditions between the fluid and structural subdomains. However, in deriving the final FSI formulation the interface Lagrange multiplier is formally eliminated and the formulation is written purely in terms of primal variables. Avoiding the use of Lagrange multipliers adds efficiency to the proposed formulation. As an added benefit, the ability to employ non-matching grids for multi-physics simulations leads to significantly relaxed requirements that are placed on the geometry modeling and meshing tools for IGA. We show an application of the proposed FSI formulation to the simulation of the NREL 5 MW offshore wind turbine rotor, where the aerodynamics domain is modeled using volumetric quadratic NURBS, while the rotor structure is modeled using a cubic T-spline-based discretization of a rotation-free Kirchhoff-Love shell. We conclude the article by showing FSI coupling of a T-spline shell with a low-order finite element method (FEM) discretization of the aerodynamics equations. This combined use of IGA and FEM is felt to be a good balance between speed, robustness, and accuracy of FSI simulations for this class of problems. © 2012 Elsevier B.V.


O'Dea E.B., University of Texas at Austin
PLoS computational biology | Year: 2010

Lethal mutagenesis is a promising new antiviral therapy that kills a virus by raising its mutation rate. One potential shortcoming of lethal mutagenesis is that viruses may resist the treatment by evolving genomes with increased robustness to mutations. Here, we investigate to what extent mutational robustness can inhibit extinction by lethal mutagenesis in viruses, using both simple toy models and more biophysically realistic models based on RNA secondary-structure folding. We show that although the evolution of greater robustness may be promoted by increasing the mutation rate of a viral population, such evolution is unlikely to greatly increase the mutation rate required for certain extinction. Using an analytic multi-type branching process model, we investigate whether the evolution of robustness can be relevant on the time scales on which extinction takes place. We find that the evolution of robustness matters only when initial viral population sizes are small and deleterious mutation rates are only slightly above the level at which extinction can occur. The stochastic calculations are in good agreement with simulations of self-replicating RNA sequences that have to fold into a specific secondary structure to reproduce. We conclude that the evolution of mutational robustness is in most cases unlikely to prevent the extinction of viruses by lethal mutagenesis. Source
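The extinction threshold invoked in this abstract can be illustrated with a minimal single-type sketch (the paper uses a multi-type branching process; the Poisson offspring distribution and the function name below are illustrative assumptions, not the authors' model):

```python
import math

def extinction_probability(R, iters=500):
    """Extinction probability of a Galton-Watson branching process with
    Poisson(R) offspring: the smallest root of q = exp(R * (q - 1)).
    Iterating the probability generating function from q = 0 converges
    to that root."""
    q = 0.0
    for _ in range(iters):
        q = math.exp(R * (q - 1.0))
    return q
```

When mutagenesis pushes the mean number of viable offspring R to 1 or below, the only fixed point is q = 1 and extinction is certain; in this picture, evolving mutational robustness matters only insofar as it raises R back above that threshold.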


Haus E.L.,University of Minnesota | Smolensky M.H.,University of Texas at Austin
Sleep Medicine Reviews | Year: 2013

Shift work that includes a nighttime rotation has become an unavoidable attribute of today's 24-h society. The related disruption of the human circadian time organization leads in the short term to an array of jet-lag-like symptoms, and in the long run it may contribute to weight gain/obesity, metabolic syndrome/type II diabetes, and cardiovascular disease. Epidemiologic studies also suggest increased cancer risk, especially for breast cancer, in night and rotating female shift workers. If confirmed in more controlled and detailed studies, the carcinogenic effect of night and shift work will constitute an additional serious medical, economic, and social problem for a substantial proportion of the working population. Here, we examine the possible multiple and interconnected cancer-promoting mechanisms operating as a consequence of shift work, i.e., repeated disruption of the circadian system, suppression of the pineal hormone melatonin by exposure to light at night, sleep-deprivation-caused impairment of the immune system, plus metabolic changes favoring obesity and generation of proinflammatory reactive oxygen species. © 2012 Elsevier Ltd. Source


Ganesan V.,University of Texas at Austin | Jayaraman A.,University of Colorado at Boulder
Soft Matter | Year: 2014

Polymer nanocomposites are a class of materials that consist of a polymer matrix filled with inorganic/organic nanoscale additives that enhance the inherent macroscopic (mechanical, optical and electronic) properties of the polymer matrix. Over the past few decades such materials have received tremendous attention from experimentalists, theoreticians, and computational scientists. These studies have revealed that the macroscopic properties of polymer nanocomposites depend strongly on the (microscopic) morphology of the constituent nanoscale additives in the polymer matrix. As a consequence, intense research efforts have been directed to understand the relationships between interactions, morphology, and the phase behavior of polymer nanocomposites. Theory and simulations have proven to be useful tools in this regard due to their ability to link molecular level features of the polymer and nanoparticle additives to the resulting morphology within the composite. In this article we review recent theory and simulation studies, presenting briefly the methodological developments underlying PRISM theories, density functional theory, self-consistent field theory approaches, and atomistic and coarse-grained molecular simulations. We first discuss the studies on polymer nanocomposites with bare or un-functionalized nanoparticles as additives, followed by a review of recent work on composites containing polymer grafted or functionalized nanoparticles as additives. We conclude each section with a brief outlook on some potential future directions. This journal is © The Royal Society of Chemistry. Source


Narayanaswamy V.,North Carolina State University | Clemens N.T.,University of Texas at Austin
Proceedings of the Combustion Institute | Year: 2013

An experimental study was performed to investigate the soot-turbulence interaction in the soot-formation region of turbulent non-premixed co-flowing ethylene/N2 jet flames. Simultaneous velocity and soot volume-fraction (fv) fields were obtained using two-component particle image velocimetry and laser-induced incandescence, respectively. Measurements were made for jet exit Reynolds numbers between 8500 and 12,300, and the measurement location was 10 jet diameters downstream, near the beginning of the yellow luminous region where soot is first formed. In agreement with previous studies, the peak mean fv in the production region is inversely related to the bulk strain rate. The simultaneous data show that soot is formed to the inside of the stoichiometric surface (inferred from stoichiometric velocity), but the formation region moves outside to regions of lower velocity and strain rate as the bulk strain rate is increased. Soot structures form in low strain rate regions, but their upstream edge is seen to become stretched out and aligned at a preferred angle (near 45 degrees) owing to alignment with the instantaneous principal extensive strain rate. Statistical analysis shows that the soot exists, on average, in fluid with axial velocity of about 3 m/s and strain rate of 700 s⁻¹, regardless of the jet exit velocity. The radial profiles of the covariance between fv and radial velocity are consistent with a model where the soot is formed at a preferred radial location (near the reaction zone) and then is transported by turbulent fluctuations to regions of lower fv. © 2012 The Combustion Institute. Source


Thomeer M.B.,University of Texas at Austin
American Journal of Public Health | Year: 2013

Objectives: I examined how sexual minority status, as indicated by sex of sexual partners, is associated with self-rated health and how socioeconomic status suppresses and age and sex moderate this association. Methods: I used multinomial logistic regression to analyze aggregated data from the 1991 to 2010 General Social Survey, a population-based data set (n = 13 480). Results: Respondents with only different-sex partners or with any same-sex partners reported similar levels of health. With socioeconomic status added to the model, respondents with any same-sex partners reported worse health than those with only different-sex partners, but only if sexual intercourse with same-sex partners occurred in the previous 5 years. Age and sex moderated this relationship: having any same-sex partners was associated with worse health for women but not men and among younger adults only. Conclusions: The relationship between sexual minority status and self-rated health varies across sociodemographic groups. Future research should use population-level data to examine other health outcomes and continue to explore how the intersection of sexual minority status and other sociodemographic indicators shapes health. Source


Weinberg S.,University of Texas at Austin
Physical Review Letters | Year: 2010

An effective field theory of quarks, gluons, and pions, with the number N of colors treated as large, is proposed as a basis for calculations of hadronic phenomena at moderate energies. The qualitative consequences of the large N limit are similar though not identical to those in pure quantum chromodynamics, but because constituent quark masses appear in the effective Lagrangian, the 't Hooft coupling in the effective theory need not be strong at moderate energies. To leading order in 1/N the effective theory is renormalizable, with only a finite number of terms in the Lagrangian. © 2010 The American Physical Society. Source


Balatti S.,Polytechnic of Milan | Larentis S.,University of Texas at Austin | Gilmer D.C.,SEMATECH | Ielmini D.,Polytechnic of Milan
Advanced Materials | Year: 2013

Multilevel operation in resistive switching memory (RRAM) based on HfOx is demonstrated through variable sizes and orientations of the conductive filament. Memory states with the same resistance, but opposite orientation of defects, display a different response to an applied read voltage, therefore allowing an improvement of the information stored in each physical cell. The multilevel scheme allows a 50% increase (from 2 to 3 bits) of the stored information. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim. Source


MacDonald A.H.,University of Texas at Austin
Physical Review Letters | Year: 2010

We show that the low-temperature sash features in lowest Landau-level (LLL) tunneling spectra recently discovered by Dial and Ashoori are intimately related to the discrete Haldane-pseudopotential interaction energy scales that govern fractional quantum-Hall physics. Our analysis is based on expressions for the tunneling density of states which become exact at filling factors close to ν=0 and ν=1, where the sash structure is most prominent. We comment on other aspects of LLL correlation physics that can be revealed by accurate temperature-dependent tunneling data. © 2010 The American Physical Society. Source


Henneghan A.,University of Texas at Austin
Supportive Care in Cancer | Year: 2016

Purpose: It is unknown why some breast cancer survivors experience cancer-related cognitive impairments (CRCI) after cancer treatment, and modifiable risk factors for CRCI remain to be explicated. This mixed-method systematic review synthesizes quantitative and qualitative evidence for relationships between modifiable factors and CRCI in breast cancer survivors who receive chemotherapy as part of their treatment. Methods: Keyword searches of PubMed/Medline, PsycINFO, and CINAHL were performed for January 2005 through June 2015. Studies that provided data on associations between modifiable biological, behavioral, environmental, and psychosocial factors and cognition were included. Results: Twenty-two quantitative studies and five qualitative studies were identified after applying inclusion and exclusion criteria, yielding evidence for significant relationships among modifiable biological (inflammatory cytokines), behavioral (sleep quality, physical activity), and psychosocial (stress, distress, affect) factors and CRCI. Conclusion: Many women unfortunately experience CRCI after breast cancer chemotherapy, with limited treatment options available to improve cognitive function. This review synthesizes current evidence to support the associations between modifiable factors and CRCI and can inform research to evaluate these factors prospectively. The clinical implications of these findings suggest that lifestyle factors such as physical activity, stress management, and sleep quality may be appropriate targets for behavioral interventions to improve cognitive function following breast cancer chemotherapy; however, further research is necessary. © 2015, Springer-Verlag Berlin Heidelberg. Source


Alu A.,University of Texas at Austin | Engheta N.,University of Pennsylvania
Physical Review Letters | Year: 2010

Near-field imaging is a well-established technique in biomedical measurements, since probing closer to the detail of interest makes it possible to resolve subwavelength features that regular lenses leave unresolved. A near-field scanning optical microscope (NSOM) tip may indeed overcome the resolution limits of far-field optics, but its proximity inherently perturbs the measurement. Here, we apply the recent concept of a "cloaked sensor" to an NSOM device in collection mode, showing theoretically how a proper plasmonic cover applied to an NSOM tip may drastically improve its overall measurement capabilities. © 2010 The American Physical Society. Source


Rai V.,University of Texas at Austin | Henry A.D.,University of Arizona
Nature Climate Change | Year: 2016

Strategies to mitigate global climate change should be grounded in a rigorous understanding of energy systems, particularly the factors that drive energy demand. Agent-based modelling (ABM) is a powerful tool for representing the complexities of energy demand, such as social interactions and spatial constraints. Unlike other approaches for modelling energy demand, ABM is not limited to studying perfectly rational agents or to abstracting micro details into system-level equations. Instead, ABM provides the ability to represent behaviours of energy consumers, such as individual households, using a range of theories, and to examine how the interaction of heterogeneous agents at the micro-level produces macro outcomes of importance to the global climate, such as the adoption of low-carbon behaviours and technologies over space and time. We provide an overview of ABM work in the area of consumer energy choices, with a focus on identifying specific ways in which ABM can improve understanding of both fundamental scientific and applied aspects of the demand side of energy to aid the design of better policies and programmes. Future research needs for improving the practice of ABM to better understand energy demand are also discussed. © 2016 Macmillan Publishers Limited. Source


Zhang X.,University of Texas at Austin | Haenggi M.,University of Notre Dame
IEEE Transactions on Wireless Communications | Year: 2014

Inter-cell interference coordination (ICIC) and intra-cell diversity (ICD) play important roles in improving cellular downlink coverage. By modeling cellular base stations (BSs) as a homogeneous Poisson point process (PPP), this paper provides explicit finite-integral expressions for the coverage probability with ICIC and ICD, taking into account the temporal/spectral correlation of the signal and interference. In addition, we show that, in the high-reliability regime, where the user outage probability goes to zero, ICIC and ICD affect the network coverage in drastically different ways: ICD can provide order gain, whereas ICIC only offers linear gain. In the high-spectral efficiency regime where the SIR threshold goes to infinity, the order difference in the coverage probability does not exist; however, a linear difference makes ICIC a better scheme than ICD for realistic path loss exponents. Consequently, depending on the SIR requirements, different combinations of ICIC and ICD optimize the coverage probability. © 2002-2012 IEEE. Source


Ghosh B.,University of Texas at Austin | Akram M.W.,Indian Institute of Technology Kanpur
IEEE Electron Device Letters | Year: 2013

In this letter, a double-gate junctionless tunnel field effect transistor (JL-TFET) is proposed and investigated. The JL-TFET is a Si-channel heavily n-type-doped junctionless field effect transistor (JLFET), which uses two isolated gates (Control-Gate, P-Gate) with two different metal work-functions to behave like a tunnel field effect transistor (TFET). In this structure, the advantages of the JLFET and the TFET are combined. The simulation results for a JL-TFET with high-k dielectric material (TiO2) and a 20-nm gate length show excellent characteristics: a high ION/IOFF ratio (∼6 × 10⁸), a point subthreshold slope (SS) of ∼38 mV/decade, and an average SS of ∼70 mV/decade at room temperature, which indicates that the JL-TFET is a promising candidate for switching applications. © 1980-2012 IEEE. Source


Maxwell J.C.,University of Texas at Austin
Drug and Alcohol Review | Year: 2011

Introduction and Aims. Abuse of prescription analgesics in the USA is increasing. The epidemic has been driven by many factors, including marketing strategies, incorrect prescribing practices, a variety of legal and illegal drug sources, belated governmental responses and increases in the number of prescriptions written. Design and Methods. Data sources including surveys, emergency room visits, treatment admissions, overdose deaths, toxicology laboratory findings and journal articles were examined to identify trends. Results. The surveys and emergency department visits show use lowest among young teenagers and highest among older teenagers and young adults, with significant increases among those aged 55 and older. The length of time between initial use of an opioid other than heroin and admission to treatment is shortening. Mortality data and toxicology exhibits confirm the increases and show the variation in the prevalence of various drugs across the USA. Discussion and Conclusions. Abuse is increasing, with varying patterns of use by high-risk groups and different geographic preferences. Prescription drug monitoring programs are being developed in each of the US states to deter 'doctor shopping'; the Food and Drug Administration has increased authority over manufacturers; and options for proper disposal of leftover medications exist. There is increased emphasis on responsible prescribing including risk assessments, prescribing agreements, treatment plans, and training for clinicians, as well as monitoring the interactions with benzodiazepines. However, unless these efforts decrease diversion, abuse and addiction, clinicians may lose the ability to use some of these opioids for effective pain management, or so many barriers will be raised that pain will go undertreated or untreated. [Maxwell JC. The prescription drug epidemic in the United States: A perfect storm. Drug Alcohol Rev 2011;30:264-270] © 2011 Australasian Professional Society on Alcohol and other Drugs. Source


Karlsson T.,University of Sydney | Karlsson T.,University of Oxford | Bromm V.,University of Texas at Austin | Bland-Hawthorn J.,University of Sydney
Reviews of Modern Physics | Year: 2013

The emergence of the first sources of light at redshifts of z∼10-30 signaled the transition from the simple initial state of the Universe to one of increasing complexity. Recent progress in our understanding of the formation of the first stars and galaxies, starting with cosmological initial conditions, primordial gas cooling, and subsequent collapse and fragmentation, is reviewed. The important open question of how the pristine gas was enriched with heavy chemical elements in the wake of the first supernovae is emphasized. The review concludes by discussing how the chemical abundance patterns conceivably allow us to probe the properties of the first stars and to test models of early metal enrichment. © 2013 American Physical Society. Source


Ekdale E.G.,University of Texas at Austin
Journal of Morphology | Year: 2011

A large sample of isolated elephantimorph petrosal bones was recovered from Pleistocene deposits in Friesenhahn Cave, Bexar County, Texas. Morphology of the middle and inner ear of the elephantimorphs is described and variation within the sample is identified. Observed variations occur in the stapedial ratio, morphology of the aquaeductus Fallopii, and connection of the crista interfenestralis to the tympanohyal on the posterior portion of the petrosal to form a foramen for passage of the stapedius muscle. The morphology of the aquaeductus Fallopii supports an ontogenetic explanation for some variation, and a sequence of ossification surrounding the aquaeductus Fallopii, from the anterior end of the canal to the posterior, is hypothesized. The stapedial ratio varies to a high degree across the sample, and such variation should be considered when the ratio is used in phylogenetic analyses. Within the inner ear, the absence of the secondary lamina suggests evolution of low-frequency hearing in extinct proboscideans, which is known for extant elephants. The morphology of the petrosals from Friesenhahn Cave is compared to published descriptions of the ear regions of other extinct proboscideans, and the distribution and evolution of morphologic characters are discussed. © 2011 Wiley-Liss, Inc. Source


The conditions of multiferroicity in dⁿ perovskites are derived from the pseudo Jahn-Teller effect, due to which ferroelectric displacements are triggered by vibronic coupling between ground and excited electronic states of opposite parity but the same spin multiplicity; this takes place only for some specific dⁿ configurations and spin states. In combination with the high-spin-low-spin crossover effect, this leads to a novel phenomenon, the magnetic-ferroelectric (multiferroic) crossover, which predicts magnetoelectric effects with exciting functionalities including electric magnetization and demagnetization. © 2012 American Physical Society. Source


Abanin D.A.,Harvard University | Pesin D.A.,University of Texas at Austin
Physical Review Letters | Year: 2012

The electronic properties of graphene can be manipulated via mechanical deformations, which opens prospects both for studying the Dirac fermions in new regimes and for new device applications. Certain natural configurations of strain generate large nearly uniform pseudomagnetic fields, which have opposite signs in the two valleys, and give rise to flat spin- and valley-degenerate pseudo-Landau levels (PLLs). Here we consider the effect of the Coulomb interactions in strained graphene with a uniform pseudomagnetic field. We show that the spin or valley degeneracies of the PLLs get lifted by the interactions, giving rise to topological insulator states. In particular, when a nonzero PLL is quarter or three-quarter filled, an anomalous quantum Hall state spontaneously breaking time-reversal symmetry emerges. At half-filled PLLs, a weak spin-orbit interaction stabilizes the time-reversal-symmetric quantum spin-Hall state. These many-body states are characterized by the quantized conductance and persist to a high temperature scale set by the Coulomb interactions, which we estimate to be a few hundred kelvin at moderate strain values. At fractional fillings, fractional quantum Hall states breaking valley symmetry emerge. These results suggest a new route to realizing robust topological states in mesoscopic graphene. © 2012 American Physical Society. Source


Brown H.S. III,University of Texas at Austin
Preventing chronic disease | Year: 2012

The objective of our study was to estimate the long-term cost-effectiveness of a lifestyle modification program led by community health workers (CHWs) for low-income Hispanic adults with type 2 diabetes. We forecasted disease outcomes, quality-adjusted life years (QALYs) gained, and lifetime costs associated with attaining different hemoglobin A1c (A1c) levels. Outcomes were projected 20 years into the future and discounted at a 3.0% rate. Sensitivity analyses were conducted to assess the extent to which our results were dependent on assumptions related to program effectiveness, projected years, discount rates, and costs. The incremental cost-effectiveness ratio of the intervention ranged from $10,995 to $33,319 per QALY gained when compared with usual care. The intervention was particularly cost-effective for adults with high glycemic levels (A1c > 9%). The results are robust to changes in multiple parameters. The CHW program was cost-effective. This study adds to the evidence that culturally sensitive lifestyle modification programs to control diabetes can be a cost-effective way to improve health among Hispanics with diabetes, particularly among those with high A1c levels. Source
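The cost-effectiveness figures quoted above follow the standard incremental cost-effectiveness ratio (ICER) with 3% discounting; a minimal sketch with made-up numbers (the function names and example values are illustrative, not the study's data):

```python
def discounted(yearly_values, rate=0.03):
    """Present value of a yearly stream of costs or QALYs,
    discounted at `rate` (3.0% in the study)."""
    return sum(v / (1.0 + rate) ** t for t, v in enumerate(yearly_values))

def icer(cost_new, cost_usual, qaly_new, qaly_usual):
    """Incremental cost-effectiveness ratio:
    extra dollars spent per extra quality-adjusted life year gained."""
    return (cost_new - cost_usual) / (qaly_new - qaly_usual)
```

For example, an intervention with a hypothetical discounted lifetime cost of $32,000 versus $21,005 under usual care, gaining one extra QALY, would yield the study's lower-bound figure of $10,995 per QALY.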


Bourell D.L.,University of Texas at Austin
Annual Review of Materials Research | Year: 2016

Additive manufacturing (AM) has skyrocketed in visibility commercially and in the public sector. This article describes the development of this field from early layered manufacturing approaches of photosculpture, topography, and material deposition. Certain precursors to modern AM processes are also briefly described. The growth of the field over the last 30 years is presented. Included is the standard delineation of AM technologies into seven broad categories. The economics of AM part generation is considered, and the impacts of the economics on application sectors are described. On the basis of current trends, the future outlook will include a convergence of AM fabricators, mass-produced AM fabricators, enabling of topology optimization designs, and specialization in the AM legal arena. Long-term developments with huge impact are organ printing and volume-based printing. Copyright © 2016 by Annual Reviews. All rights reserved. Source


Lee H.,University of Texas at Austin
Optics Letters | Year: 2014

The shape of a focus-modulated point spread function (PSF) is used as a quick visual assessment tool of aberration modes in the PSF. Further analysis in terms of shape moments can permit quantifying the modal coefficients with an accuracy comparable to that of typical wavefront sensors. In this Letter, the error of the moment-based wavefront sensing is analytically described in terms of the pixelation and photon/readout noise. All components highly depend on the (unknown) PSF shape but can be estimated from the measured PSF sampled at a reasonable spatial resolution and photon count. Numerical simulations verified that the models consistently predicted the behavior of the modal estimation error of the moment-based wavefront sensing. © 2014 Optical Society of America. Source
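The shape moments referred to here are ordinary image moments of the PSF intensity; a minimal sketch of the centroid and second central moments (the Letter's specific mapping from moments to modal coefficients is not reproduced, and the function name is an assumption):

```python
def psf_moments(img):
    """Centroid (cx, cy) and second central moments (mxx, myy, mxy)
    of a 2-D intensity array given as a list of rows."""
    total = sum(sum(row) for row in img)
    ny, nx = len(img), len(img[0])
    cells = [(x, y) for y in range(ny) for x in range(nx)]
    cx = sum(x * img[y][x] for x, y in cells) / total
    cy = sum(y * img[y][x] for x, y in cells) / total
    mxx = sum((x - cx) ** 2 * img[y][x] for x, y in cells) / total
    myy = sum((y - cy) ** 2 * img[y][x] for x, y in cells) / total
    mxy = sum((x - cx) * (y - cy) * img[y][x] for x, y in cells) / total
    return cx, cy, mxx, myy, mxy
```

Qualitatively, tilt shifts the centroid, defocus broadens the spot symmetrically (mxx and myy grow together), and astigmatism shows up as an asymmetry between mxx, myy, and mxy; pixelation and photon noise perturb each of these sums, which is the error budget the Letter models.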


Malcom J.W.,University of Texas at Austin
PLoS ONE | Year: 2011

Dispersal is an important mechanism contributing to both ecological and evolutionary dynamics. In metapopulation and metacommunity ecology, dispersal enables new patches to be colonized; in evolution, dispersal counteracts local selection, leading to regional homogenization. Here, I consider a three-patch metacommunity in which two species, each with a limiting quantitative trait underlain by gene networks of 16 to 256 genes, compete with one another and disperse among patches. Incorporating dispersal among heterogeneous patches introduces a tradeoff not observed in single-patch simulations: if the difference between the gene network sizes of the two species is greater than the difference in dispersal ability (e.g., if the ratio of network sizes is larger than the ratio of dispersal abilities), then genetic architecture drives community outcome. However, if the difference in dispersal abilities is greater than gene network differences, then any adaptive advantages afforded by genetic architecture are overridden by dispersal. Thus, in addition to the selective pressures imposed by competition that shape the genetic architecture of quantitative traits, dispersal among patches creates an escape that may further alter the effects of different genetic architectures. These results provide a theoretical expectation for what we may observe as the field of ecological genomics develops. © 2011 Jacob W. Malcom. Source


Krogh-Jespersen S.,University of Chicago | Echols C.H.,University of Texas at Austin
Child Development | Year: 2012

Children's confidence in their own knowledge may influence their willingness to learn novel information from others. Twenty-four-month-old children's (N=160) willingness to learn novel labels for either familiar or novel objects from an adult speaker was tested in 1 of 5 conditions: accurate, inaccurate, knowledgeable, ignorant, or uninformative. Children were willing to learn a second label for an object from a reliable informant in the accurate, knowledgeable, and uninformative conditions; children were less willing to apply a novel label to a familiar object if the speaker previously was inaccurate or had expressed ignorance. However, when the objects were novel, children were willing to learn the label regardless of the speaker's knowledge level. © 2012 The Authors. Child Development © 2012 Society for Research in Child Development, Inc. Source


Based on ethnographic work on police-linked death squads and with black women's organizations, this article analyzes current urban governance policies and the spatial politics of resistance embraced by communities under siege in Brazil. Space matters not only in terms of defining one's access to the polis, but also as a deadly tool through which police killings, economic marginalization, and mass incarceration produce the very geographies (here referred to as "the black necropolis") that the state aims to counteract in its war against the black urban poor. Yet, within the context of necropolitical governance, blackness appears as a spatially grounded praxis that enables victims of state terror to reclaim their placeless location as a political resource for redefining themselves and the polis. © 2013 Antipode Foundation Ltd. Source


Miller J.A.,University of Texas at Austin
Progress in Physical Geography | Year: 2012

The main goal of species distribution modeling is to identify important underlying factors related to broad-scale ecological patterns in order to make meaningful explanations or accurate predictions. When standard statistical methods such as regression are used to formulate these models, assumptions about the spatial structure of the data and the model parameters are often violated. Autocorrelation and non-stationarity are characteristics of spatial data and models, respectively, and if present and unaccounted for in model development, they can result in poorly specified models as well as inappropriate spatial inference and prediction. While these spatial issues are addressed here in an ecological context using species distribution models, they are broadly relevant to any statistical modeling applications using spatial data. © The Author(s) 2012. Source


Legare C.H.,University of Texas at Austin | Lombrozo T.,University of California at Berkeley
Journal of Experimental Child Psychology | Year: 2014

Two studies examined the specificity of effects of explanation on learning by prompting 3- to 6-year-old children to explain a mechanical toy and comparing what they learned about the toy's causal and non-causal properties with children who only observed the toy, both with and without accompanying verbalization. In Study 1, children were experimentally assigned to either explain or observe the mechanical toy. In Study 2, children were classified according to whether the content of their response to an undirected prompt involved explanation. Dependent measures included whether children understood the toy's functional-mechanical relationships, remembered perceptual features of the toy, effectively reconstructed the toy, and (for Study 2) generalized the function of the toy when constructing a new one. Results demonstrate that across age groups, explanation promotes causal learning and generalization but does not improve (and in younger children can even impair) memory for causally irrelevant perceptual details. © 2014 The Authors. Source


Explaining inconsistency may serve as an important mechanism for driving the process of causal learning. But how might this process generate amended beliefs? One way that explaining inconsistency may promote discovery is by guiding exploratory, hypothesis-testing behavior. In order to investigate this, a study with young children ranging in age from 2 to 6years (N=80) examined the relation between explanation and exploratory behavior following consistent versus inconsistent outcomes. Results indicated that for inconsistent outcomes only, the kind of explanation children provided informed the kind of exploratory behavior they engaged in and the extent to which children modified and generated new hypotheses. In sum, the data provide insight into a mechanism by which explaining inconsistent evidence guides causal cognition. © 2011 The Author. Child Development © 2011 Society for Research in Child Development, Inc. Source


Dunn A.K.,University of Texas at Austin
Annals of Biomedical Engineering | Year: 2012

Laser speckle contrast imaging (LSCI) has emerged over the past decade as a powerful, yet simple, method for imaging of blood flow dynamics in real time. The rapid adoption of LSCI for physiological studies is due to the relative ease and low cost of building an instrument as well as the ability to quantify blood flow changes with excellent spatial and temporal resolution. Although measurements are limited to superficial tissues with no depth resolution, LSCI has been instrumental in pre-clinical studies of neurological disorders as well as clinical applications including dermatological, neurosurgical and endoscopic studies. Recently a number of technical advances have been developed to improve the quantitative accuracy and temporal resolution of speckle imaging. This article reviews some of these recent advances and describes several applications of speckle imaging. © 2011 Biomedical Engineering Society. Source


Von Hippel P.T.,University of Texas at Austin
BMC Medical Research Methodology | Year: 2015

Background: Estimated effects vary across studies, partly because of random sampling error and partly because of heterogeneity. In meta-analysis, the fraction of variance that is due to heterogeneity is estimated by the statistic I². We calculate the bias of I², focusing on the situation where the number of studies in the meta-analysis is small. Small meta-analyses are common; in the Cochrane Library, the median meta-analysis contains 7 or fewer studies. Methods: We use Mathematica software to calculate the expectation and bias of I². Results: I² has a substantial bias when the number of studies is small. The bias is positive when the true fraction of heterogeneity is small, but the bias is typically negative when the true fraction of heterogeneity is large. For example, with 7 studies and no true heterogeneity, I² will overestimate heterogeneity by an average of 12 percentage points, but with 7 studies and 80 percent true heterogeneity, I² can underestimate heterogeneity by an average of 28 percentage points. Biases of 12-28 percentage points are not trivial when one considers that, in the Cochrane Library, the median I² estimate is 21 percent. Conclusions: The point estimate I² should be interpreted cautiously when a meta-analysis has few studies. In small meta-analyses, confidence intervals should supplement or replace the biased point estimate I². © 2015 von Hippel. Source
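For context, I² is conventionally computed from Cochran's Q as I² = max(0, (Q − df)/Q); the truncation at zero is one source of the positive bias at low true heterogeneity. A small sketch with hypothetical effect sizes (seven studies, a typical Cochrane size, all drawn around the same true effect):

```python
import numpy as np

def i_squared(effects, variances):
    """I^2 from Cochran's Q: fraction of between-study variance attributed to heterogeneity."""
    w = 1.0 / np.asarray(variances, float)       # inverse-variance weights
    y = np.asarray(effects, float)
    y_fixed = np.sum(w * y) / np.sum(w)          # fixed-effect pooled estimate
    Q = np.sum(w * (y - y_fixed) ** 2)           # Cochran's Q statistic
    df = len(y) - 1
    return max(0.0, (Q - df) / Q) if Q > 0 else 0.0

# Seven hypothetical studies with essentially identical effects (values are illustrative):
effects = [0.10, 0.12, 0.08, 0.11, 0.09, 0.10, 0.12]
variances = [0.04] * 7
print(i_squared(effects, variances))  # 0.0 here: Q < df, so nothing is attributed to heterogeneity
```

With noisier data Q can exceed df by chance alone, which is exactly the small-sample overestimation the paper quantifies.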


Martin S.F.,University of Texas at Austin
Advances in Heterocyclic Chemistry | Year: 2013

Designing strategies for the efficient synthesis of frameworks found in alkaloids and other biologically active nitrogen heterocycles is an important goal in contemporary organic chemistry. This account presents an overview of our discovery and development of some useful approaches to prepare nitrogen heterocycles that are common to a number of alkaloid families. Our interest in the broad arena of alkaloid synthesis helped lead to the emergence of the vinylogous Mannich reaction and ring-closing metathesis as strategically powerful bond-forming reactions that can be exploited as key steps in the synthesis of alkaloids and other nitrogen heterocycles of biological interest. Our discovery of the vinylogous Mannich reaction also led to the development of a novel strategy for diversity-oriented synthesis that features the creation of highly substituted arylmethyl amine derivatives by multicomponent assembly processes related to the Mannich reaction. These adducts may be rapidly elaborated by ring-closing metathesis and other transition-metal-catalyzed cyclizations, cycloadditions, and a variety of standard ring-forming reactions to provide diverse nitrogen heterocyclic scaffolds that may be further derivatized to generate collections of small molecules for biological screening. © 2013 Elsevier Inc. Source


Wang K.,Beijing Normal University | Dickinson R.E.,University of Texas at Austin
Reviews of Geophysics | Year: 2013

Atmospheric downward longwave radiation at the surface (Ld) varies with increasing CO2 and other greenhouse gases. This study quantifies the uncertainties of current estimates of global Ld at monthly to decadal timescales, along with its global climatology and trends during the past decades, by a synthesis of the existing observations, reanalyses, and satellite products. We find that current Ld observations have a standard deviation error of ~3.5 W m-2 on a monthly scale. Observations of Ld by different pyrgeometers may differ substantially for lack of a standard reference, and the calibration of a pyrgeometer significantly affects its quantification of annual variability. Compared with observations collected at 169 global land sites from 1992 to 2010, the Ld derived from state-of-the-art satellite cloud observations and reanalysis temperature and humidity profiles at a grid scale of ~1° has a bias of ±9 W m-2 and a standard deviation of 7 W m-2, with a nearly zero overall bias. The standard deviations are reduced to 4 W m-2 over tropical oceans when compared to Ld observations collected by 24 buoy sites from 2002 to 2011. The -4 W m-2 bias of satellite Ld retrievals over tropical oceans is likely because of the overestimation of Ld observations resulting from solar heating of the pyrgeometer. Our best estimates of global mean Ld from 2003 to 2010 are 342 ± 3 W m-2 (global), 307 ± 3 W m-2 (land), and 356 ± 3 W m-2 (ocean). Estimates of Ld trends are seriously compromised by changes in satellite sensors, which alter the retrieved water vapor profiles. Key Points: Ld observations have a standard deviation error of ~3.5 W m-2 at a monthly scale; satellite and reanalysis Ld has a standard deviation of 7 W m-2; global mean Ld is 342 W m-2 (global), 307 W m-2 (land), and 356 W m-2 (ocean). ©2013. American Geophysical Union. All Rights Reserved. Source
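As a quick plausibility check, the three reported means are mutually consistent under a simple area weighting, assuming land covers roughly 29% of Earth's surface (the land fraction is an assumed round figure, not a number taken from the paper):

```python
# Consistency check on the reported global-mean Ld values (W m^-2),
# assuming land covers ~29% of Earth's surface and ocean the remaining ~71%.
LAND_FRACTION = 0.29  # assumed, not from the paper
ld_land, ld_ocean = 307.0, 356.0
ld_global = LAND_FRACTION * ld_land + (1 - LAND_FRACTION) * ld_ocean
print(round(ld_global, 1))  # 341.8, within the stated 342 +/- 3 W m^-2
```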


Korzdorfer T.,Georgia Institute of Technology | Marom N.,University of Texas at Austin
Physical Review B - Condensed Matter and Materials Physics | Year: 2012

Many-body perturbation theory in the G0W0 approximation is an increasingly popular tool for calculating electron removal energies and fundamental gaps for molecules and solids. However, the predictive power of G0W0 is limited by its sensitivity to the density functional theory (DFT) starting point. We introduce a nonempirical scheme, which allows us to find a reliable DFT starting point for G0W0 calculations. This is achieved by adapting the amount of Hartree-Fock exchange in a hybrid DFT functional. The G0W0 spectra resulting from this starting point reliably predict experimental photoelectron spectra for a test set of 13 typical organic semiconductor molecules. © 2012 American Physical Society. Source


De La Torre J.C.,University of Texas at Austin
American Journal of Alzheimer's Disease and other Dementias | Year: 2013

Evidence is growing that vascular risk factors (VRFs) for Alzheimer's disease (AD) affect cerebral hemodynamics to launch a cascade of cellular and molecular changes that initiate cognitive deficits and eventual progression of AD. Neuroimaging studies have reported VRFs for AD to be accurate predictors of cognitive decline and dementia. In regions that participate in higher cognitive function (middle temporal, posterior cingulate, inferior parietal, and precuneus regions), neuroimaging studies indicate an association involving VRFs, cerebral hypoperfusion, and cognitive decline in elderly individuals who develop AD. VRFs can be present in cognitively intact individuals for decades before mild cognitive deficits or neuropathological signs are manifested. In that sense, they may be "ticking time bombs" before cognitive function is demolished. Preventive intervention against modifiable VRFs may delay or block progression of AD. Intervention could target cerebral blood flow (CBF), since most VRFs act to lower CBF in aging individuals by promoting cerebrovascular dysfunction. © 2013 The Author(s). Source


Ross C.E.,University of Texas at Austin
Journal of Health and Social Behavior | Year: 2011

A threatening and dangerous neighborhood may produce distressing emotions of anxiety, anger, and depression among the individuals who live there because residents find these neighborhoods subjectively alienating. The author introduces the idea that neighborhood disorder indicates collective threat, which is alienating, shaping perceptions of powerlessness and mistrust. The author presents a theory of trust that posits that mistrust develops in places where resources are scarce and threat is common, and among individuals who have few resources and who feel powerless to avoid or manage the threat. Perceived powerlessness develops with exposure to uncontrollable, negative conditions such as crime, danger, and threat in one's neighborhood. Thus, neighborhood disorder, common in disadvantaged neighborhoods, influences mistrust directly and indirectly by increasing perceptions of powerlessness among residents, which amplify disorder's effect on mistrust. The very thing needed to protect disadvantaged residents from the negative effects of their environment, a sense of personal control, is eroded by that environment in a process that the author calls structural amplification. Powerlessness and mistrust in turn are distressing, increasing levels of anxiety, anger, and depression. © American Sociological Association 2011. Source


Alu A.,University of Texas at Austin | Engheta N.,University of Pennsylvania
Physical Review Letters | Year: 2010

Optical waveguide interconnects are a major component of chip-scale data processing and computational systems. Here, we propose an alternative mechanism based on optical wireless broadcasting links using nanoantennas, which may overcome some of the limitations of nanoscale waveguide interconnects. By properly loading and matching nanoantenna pairs with optical nanocircuits, we theoretically demonstrate a complete optical wireless link that, in spite of some radiation loss and mismatch factors, may exhibit much less absorption loss, largely outperforming regular plasmonic waveguide links. © 2010 The American Physical Society. Source


Alu A.,University of Texas at Austin | Engheta N.,University of Pennsylvania
New Journal of Physics | Year: 2010

In this paper, we formulate an analytical theory that quantifies the first-order effect of a small random uncontrollable disorder that is due to limitations in the realization of periodic arrays of plasmonic nanoparticles. In particular, we show how the effect of a small disorder may be quantitatively taken into account when evaluating the guidance properties of these otherwise periodic chains, and how the main effect of the small disorder consists of additional radiation losses for the guided mode. Similar quantitative analyses may be extended to the general class of periodic metamaterials, providing an idea of how disorder affects their electromagnetic response, and which types of disorder have the most effect. © IOP Publishing Ltd and Deutsche Physikalische Gesellschaft. Source


Bard J.F.,University of Texas at Austin | Nananukul N.,Optimize
Computers and Operations Research | Year: 2010

With globalization, the need to better integrate production and distribution decisions has become ever more pressing for manufacturers trying to streamline their supply chain. This paper investigates a previously developed mixed-integer programming (MIP) model aimed at minimizing production, inventory, and delivery costs across the various stages of the system. The problem being modeled includes a single production facility, a set of customers with time varying demand, a finite planning horizon, and a fleet of homogeneous vehicles. Demand can be satisfied from either inventory held at a customer site or from daily product distribution. Whether a customer is visited on a particular day is determined by an implicit tradeoff between inventory and distribution costs. Once the decision is made, a vehicle routing problem must be solved for those customers who are scheduled for a delivery. A hybrid methodology that combines exact and heuristic procedures within a branch-and-price framework is developed to solve the underlying MIP. The approach takes advantage of the efficiency of heuristics and the precision of branch and price. Implementation required devising a new branching strategy to accommodate the unique degeneracy characteristics of the master problem, and a new procedure for handling symmetry. A novel column generation heuristic and a rounding heuristic were also implemented to improve algorithmic efficiency. Computational testing on standard data sets showed that the hybrid scheme can solve instances with up to 50 customers and 8 time periods within 1 h. This level of performance could not be matched by either CPLEX or standard branch and price alone. © 2010 Elsevier Ltd. All rights reserved. Source
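The inventory-versus-delivery tradeoff at the heart of the model can be illustrated, for a single customer and ignoring routing, by the classical Wagner-Whitin lot-sizing dynamic program. This is a deliberately simplified sketch of the tradeoff only, not the paper's branch-and-price algorithm:

```python
def lot_size(demand, fixed_cost, hold_cost):
    """Wagner-Whitin DP for one customer: choose delivery periods so that
    fixed delivery costs plus per-unit-per-period holding costs are minimal."""
    T = len(demand)
    best = [0.0] + [float("inf")] * T          # best[t] = min cost covering periods 1..t
    for t in range(1, T + 1):
        for j in range(1, t + 1):              # last delivery made in period j, covering j..t
            carry = sum(hold_cost * (s - j) * demand[s - 1] for s in range(j, t + 1))
            best[t] = min(best[t], best[j - 1] + fixed_cost + carry)
    return best[T]

# Cheap deliveries: visit every period and hold nothing.
print(lot_size([10, 10, 10], fixed_cost=5, hold_cost=1.0))   # 15.0
# Expensive deliveries: deliver once up front and hold stock instead.
print(lot_size([10, 10, 10], fixed_cost=50, hold_cost=1.0))  # 80.0
```

In the paper's setting the "fixed cost" of a visit is itself the outcome of a vehicle routing problem over all customers scheduled that day, which is what makes the integrated problem hard.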


Yu E.T.,University of Texas at Austin | Van De Lagemaat J.,National Renewable Energy Laboratory
MRS Bulletin | Year: 2011

Photovoltaics are expected to play an important role in the future energy infrastructure. However, achieving simultaneously high efficiency in both light absorption and carrier collection remains a challenging tradeoff. Photon management, which refers to the engineering of materials and device structures to control the spatial distribution of optical energy, offers a number of promising routes to optimizing this tradeoff. Progress in fabrication of nanostructured materials combined with advances in the understanding of nanophotonic devices has enabled new strategies for photon management in a range of photovoltaic devices. Prominent among these are structures with pronounced surface topography or graded refractive-index profiles that reduce surface reflectivity; materials processing that increases optical absorption in materials such as silicon; incorporation of semiconductor nanostructures that enables simultaneous improvements in optical absorption and photogenerated carrier collection; and coherent light trapping in optical waveguide modes via plasmonic or optical scattering effects. The articles in this issue review some of these emerging directions. © Copyright Materials Research Society 2011. Source


Li X.,University of Texas at Austin
Proceedings of the Annual IEEE Conference on Computational Complexity | Year: 2011

We study the problem of constructing affine extractors over GF(2). Previously the only known construction that can handle sources with arbitrary linear entropy is due to Bourgain (with a slight modification by Yehudayoff), which makes extensive use of complicated inequality manipulations and relies on a careful choice of a polynomial. In this paper we give a new and conceptually much cleaner construction of affine extractors for linear entropy sources that outputs a constant fraction of the entropy with exponentially small error. This matches the previous best result of Bourgain. The extractor can be pushed to handle affine sources with entropy n/√(log log n). This slightly improves Bourgain's result and matches the recent result of Yehudayoff. We also give a zero-error disperser for affine sources with entropy n/√(log n) that outputs n^Ω(1) bits. This improves previous constructions of affine dispersers that output more than 1 bit. In contrast to Bourgain's construction, our construction mainly uses extractor machinery and basic properties of polynomials. Some of our techniques may be of independent interest. © 2011 IEEE. Source


Background: Understanding dietary trends for endangered species may be essential to assessing the effects of ecological disturbances such as habitat modification, species introductions or global climate change. Documenting temporal variation in prey selection may also be crucial for understanding population dynamics. However, the rarity, secretive behaviours and obscure microhabitats of some endangered species can make direct foraging observations difficult or impossible. Furthermore, the lethality or invasiveness of some traditional methods of dietary analysis (e.g. gut contents analysis, gastric lavage) makes them inappropriate for such species. Stable isotope analysis facilitates non-lethal, indirect analysis of animal diet that has unrealized potential in the conservation of endangered organisms, particularly amphibians. Methodology/findings: I determined proportional contributions of aquatic macroinvertebrate prey to the diet of an endangered aquatic salamander Eurycea sosorum over a two-year period using stable isotope analysis of ¹³C/¹²C and ¹⁵N/¹⁴N and the Bayesian stable isotope mixing model SIAR. I calculated Strauss' dietary electivity indices by comparing these proportions with changing relative abundance of potential prey species through time. Stable isotope analyses revealed that a previously unknown prey item (soft-bodied planarian flatworms in the genus Dugesia) made up the majority of E. sosorum diet. Results also demonstrate that E. sosorum is an opportunistic forager capable of diet switching to include a greater proportion of alternative prey when Dugesia populations decline. There is also evidence of intra-population dietary variation. Conclusions/significance: Effective application of stable isotope analysis can help circumvent two key limitations commonly experienced by researchers of endangered species: the inability to directly observe these species in nature and the invasiveness or lethality of traditional methods of dietary analysis.
This study illustrates the feasibility of stable isotope analysis in identifying preferred prey species that can be used to guide conservation management of both wild and captive food sources for endangered species. © 2013 J. Hayley Gillespie. Source
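The simplest instance of this approach is a two-source, single-isotope linear mixing model; SIAR generalizes it to many sources and isotopes with Bayesian uncertainty. The δ values below are hypothetical illustrations, not measurements from the study, and trophic fractionation is ignored:

```python
def source_proportion(d_mix, d_a, d_b):
    """Two-source linear mixing model: fraction of source A in the consumer's
    diet, inferred from a single isotope signature (d = delta value)."""
    return (d_mix - d_b) / (d_a - d_b)

# Hypothetical delta-15N values (per mil) for two candidate prey sources:
d_flatworm, d_amphipod = 6.0, 10.0
d_salamander = 7.0  # consumer tissue signature
p = source_proportion(d_salamander, d_flatworm, d_amphipod)
print(p)  # 0.75: three-quarters of the diet attributed to the flatworm source
```

With n sources and n-1 isotopes the same idea becomes a linear system; beyond that the problem is underdetermined, which is why Bayesian mixing models such as SIAR report posterior distributions of proportions rather than point estimates.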


Macneilage P.F.,University of Texas at Austin
Developmental Psychobiology | Year: 2013

As part of a vertebrate-wide trend toward left-brain/right-side asymmetries in routine whole-body actions, marine mammals show signs of rightward appendage-use biases as well as short- and long-term turning asymmetries, most of which are unique among non-humans in being as strong as human right-handedness, and even stronger than human handedness-related turning biases. Short-term marine mammal turning asymmetries and human about-turning asymmetries share a leading right side, suggesting a commonality in left hemisphere intentional control. The long-term leftward turning bias that both groups share may be an indirect result of both sensory and motor influences on the right side in dolphins, but be induced by a right-hemisphere-controlled spatial/attentional bias to the left in humans. Marine mammals may share, with humans and other higher primates, a left hemisphere specialization for action dynamics, although evidence is currently lacking for human-like right hemisphere specializations relevant to action in other vertebrates. © 2013 Wiley Periodicals, Inc. Source


Xu H.,National University of Singapore | Caramanis C.,University of Texas at Austin | Mannor S.,Technion - Israel Institute of Technology
IEEE Transactions on Pattern Analysis and Machine Intelligence | Year: 2012

We consider two desired properties of learning algorithms: sparsity and algorithmic stability. Both properties are believed to lead to good generalization ability. We show that these two properties are fundamentally at odds with each other: A sparse algorithm cannot be stable and vice versa. Thus, one has to trade off sparsity and stability in designing a learning algorithm. In particular, our general result implies that l1-regularized regression (Lasso) cannot be stable, while l2-regularized regression is known to have strong stability properties and is therefore not sparse. © 2012 IEEE. Source


Iwen M.A.,Duke University | Tewfik A.H.,University of Texas at Austin
IEEE Transactions on Signal Processing | Year: 2012

This paper studies the problem of recovering a signal with a sparse representation in a given orthonormal basis using as few noisy observations as possible. Herein, observations are subject to the type of background clutter noise encountered in radar applications. Given this model, this paper proves for the first time that highly sparse signals contaminated with Gaussian background noise can be recovered by adaptive methods using fewer noisy linear measurements than required by any possible recovery method based on nonadaptive Gaussian measurement ensembles. © 1991-2012 IEEE. Source


Zhang R.,University of Texas at Austin | Castagna J.,University of Houston
Geophysics | Year: 2011

A basis pursuit inversion of seismic reflection data for reflection coefficients is introduced as an alternative method of incorporating a priori information in the seismic inversion process. The inversion is accomplished by building a dictionary of functions representing reflectivity patterns and constituting the seismic trace as a superposition of these patterns. Basis pursuit decomposition finds a sparse number of reflection responses that sum to form the seismic trace. When the dictionary of functions is chosen to be a wedge-model of reflection coefficient pairs convolved with the seismic wavelet, the resulting reflectivity inversion is a sparse-layer inversion, rather than a sparse-spike inversion. Synthetic tests suggest that a sparse-layer inversion using basis pursuit can better resolve thin beds than a comparable sparse-spike inversion. Application to field data indicates that sparse-layer inversion results in the potentially improved detectability and resolution of some thin layers and reveals apparent stratigraphic features that are not readily seen on conventional seismic sections. © 2011 Society of Exploration Geophysicists. Source
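Basis pursuit recovers a sparse coefficient vector whose dictionary synthesis reproduces the trace. One standard solver for the L1-regularized formulation is iterative soft thresholding (ISTA); the sketch below uses a random matrix as a stand-in dictionary and is illustrative only, not the authors' wedge-dictionary implementation:

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding operator, the proximal map of the L1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, lam, n_iter=1500):
    """ISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft(x + A.T @ (b - A @ x) / L, lam / L)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((60, 120)) / np.sqrt(60)   # overcomplete stand-in "dictionary"
x_true = np.zeros(120)
x_true[[5, 40, 97]] = [2.0, -1.5, 1.0]             # a sparse "reflectivity" vector
b = A @ x_true                                      # synthetic noiseless "trace"
x_hat = ista(A, b, lam=0.01)
print(sorted(np.argsort(-np.abs(x_hat))[:3]))       # the three active indices are recovered
```

In the seismic setting, replacing the random columns with wedge-model reflection-pair responses convolved with the wavelet turns the same machinery into the sparse-layer inversion described above.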


Frohlich C.,University of Texas at Austin | Brunt M.,Eagle Pass High School
Earth and Planetary Science Letters | Year: 2013

Between November 2009 and September 2011 the EarthScope USArray program deployed ~25 temporary seismograph stations on a 70-km grid in south-central Texas between 27°N-31°N and 96°W-101°W. This area includes the Eagle Ford Shale. For decades this geographic region has produced gas and oil from other strata using conventional methods, but recent developments in hydrofracturing technology have allowed extensive development of natural gas resources from within the Eagle Ford. Our study surveys small-magnitude seismic events and evaluates their correlation with fluid extraction and injection in the Eagle Ford, identifying and locating 62 probable earthquakes, including 58 not reported by the U.S. Geological Survey. The 62 probable earthquakes occur singly or in clusters at 14 foci; of these foci, two were situated near wells injecting recently increased volumes of water; eight were situated near wells extracting recently increased volumes of oil and/or water; and four were not situated near wells reporting significant injection/extraction increases. Thus in this region, while the majority of small earthquakes may be triggered or induced by human activity, they are more often associated with fluid extraction than with injection. We also investigated the MW 4.8 Fashing earthquake of 20 October 2011, the largest historically reported earthquake in south-central Texas, which occurred two weeks after the removal of the temporary USArray stations. A field study indicated that the highest-intensity (MMI VI) region was about 10 km south of 2010-2011 foreshock activity, and that there were no high-volume injection wells within 20 km of the MMI V-VI region or the foreshocks. However, the 20 October 2011 earthquake did coincide with a significant increase in oil/water extraction volumes at wells within the MMI V-VI region, and this was also true for previous earthquakes felt at Fashing in 1973 and 1983. 
In contrast, our study found significant increases in injection prior to an mbLg 3.6 earthquake on 20 July 1991 near Falls City, Texas. Thus the Eagle Ford geographic region, with seismic activity associated with both extraction and injection, appears to be more complex than the Barnett Shale of northeast Texas, where a similar survey found possible correlations only with fluid injection. © 2013 The Authors. Source


Jun G.,University of Michigan | Ghosh J.,University of Texas at Austin
IEEE Transactions on Geoscience and Remote Sensing | Year: 2011

This paper proposes a novel framework called Gaussian process maximum likelihood for spatially adaptive classification of hyperspectral data. In hyperspectral images, spectral responses of land covers vary over space, and conventional classification algorithms that result in spatially invariant solutions are fundamentally limited. In the proposed framework, each band of a given class is modeled by a Gaussian random process indexed by spatial coordinates. These models are then used to characterize each land cover class at a given location by a multivariate Gaussian distribution with parameters adapted for that location. Experimental results show that the proposed method effectively captures the spatial variations of hyperspectral data, significantly outperforming a variety of other classification algorithms on three different hyperspectral data sets. © 2011 IEEE. Source
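Stripped of its spatial adaptation, the classification step reduces to maximum-likelihood assignment under per-class multivariate Gaussians. The toy sketch below uses two synthetic two-band classes and omits the Gaussian-process indexing of parameters by location, which is the paper's actual contribution:

```python
import numpy as np

def fit_gaussian(X):
    """Fit a multivariate Gaussian (mean, covariance) to class samples."""
    return X.mean(axis=0), np.cov(X, rowvar=False)

def log_lik(x, mu, cov):
    """Log-likelihood of pixel x under N(mu, cov), up to a constant."""
    d = x - mu
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (logdet + d @ np.linalg.solve(cov, d))

rng = np.random.default_rng(2)
# Toy 2-band "spectra" for two land-cover classes (synthetic, not real hyperspectral data):
grass = rng.normal([0.2, 0.6], 0.05, (200, 2))
water = rng.normal([0.1, 0.1], 0.05, (200, 2))
models = [fit_gaussian(grass), fit_gaussian(water)]
pixel = np.array([0.19, 0.55])
label = int(np.argmax([log_lik(pixel, m, c) for m, c in models]))
print(label)  # 0: assigned to the grass class
```

In the proposed framework the per-class mean for each band is not a constant but a Gaussian-process prediction at the pixel's spatial coordinates, which is what lets the decision boundary drift across the scene.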


Snedden J.W.,University of Texas at Austin
Marine and Petroleum Geology | Year: 2013

Scouring at the base of channel bodies plays a significant role in enhancing connectivity of sandstone reservoirs. Investigation of high-resolution 3D seismic data from a fluvial and deep-water channel system illuminates the location and spacing of channel-body basal scours and possible controls. Scours in both East Breaks upper fan (Quaternary) and Iron River fluvial channel-bodies (Cretaceous) are comparable in scale, with a deviation of up to 10 m scour depth relative to the average channel-body basal depth. Substrate lithology, as documented by draping well-calibrated seismic response onto 3D channel-body basal surfaces, does not appear to be a major influence on scour location or depth. In the datasets examined, channel-body basal scour locations appear to be most influenced by changes in channel orientation, with outer bends being particularly prone to scouring, even in channels that are not highly sinuous. Scaling relationships between scour spacing and channel width observed in modern fluvial systems are further tested against these high-resolution datasets and published Miocene subsurface deep-water reservoir maps, but results are mixed, with a consistent over-prediction of scour spacing. This may reflect difficulties in accurately determining channel bankfull width in confined deep-water channel complex systems. Observations from an ancillary 4D seismic dataset show that over 30% of the identified scour areas exhibit attribute anomalies calibrated with water saturation changes between the baseline and monitor surveys (about 3 years) and thus indicate production-induced fluid movement through these features. This underlines the importance of scours as connection points between fluid compartments and the significance of observations of scour location and spacing made here from high-resolution 3D and ancillary 4D seismic data. © 2012 Elsevier Ltd. Source


Seismic data can be decomposed into nonstationary spectral components with smoothly variable frequencies and smoothly variable amplitudes. To estimate local frequencies, I use a nonstationary version of Prony's spectral analysis method defined with the help of regularized nonstationary autoregression. To estimate local amplitudes of different components, I fit their sum to the data using regularized nonstationary regression. Shaping regularization ensures stability of the estimation process and provides controls on smoothness of the estimated parameters. Potential applications of the proposed technique include noise attenuation, seismic data compression, and seismic data regularization. © 2013 Society of Exploration Geophysicists. Source
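The stationary version of Prony's method estimates frequencies by fitting an autoregression to the trace and reading frequencies off its characteristic roots. The sketch below handles a single noiseless sinusoid; the paper's method is a regularized nonstationary generalization in which the AR coefficients vary smoothly along the trace:

```python
import numpy as np

def prony_freq(x, dt):
    """Estimate the frequency of a single real sinusoid via Prony's method:
    fit x[n] = a1*x[n-1] + a2*x[n-2] by least squares, then take the
    frequency from the angle of the AR characteristic roots exp(+-i*w)."""
    A = np.column_stack([x[1:-1], x[:-2]])
    a1, a2 = np.linalg.lstsq(A, x[2:], rcond=None)[0]
    z = np.roots([1.0, -a1, -a2])
    return abs(np.angle(z[0])) / (2 * np.pi * dt)

dt = 0.004                             # 4 ms sampling, typical for seismic
t = np.arange(200) * dt
x = np.cos(2 * np.pi * 30.0 * t)       # synthetic 30 Hz "trace"
print(round(prony_freq(x, dt), 2))     # 30.0
```

A noiseless cosine satisfies the two-term recursion exactly (a1 = 2cos(w*dt), a2 = -1), so the least-squares fit recovers the frequency to machine precision; with noise or more components, the AR order grows and regularization becomes essential, which motivates the shaping-regularized formulation above.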


Dhillon H.S.,University of Southern California | Andrews J.G.,University of Texas at Austin
IEEE Wireless Communications Letters | Year: 2014

Considering both small-scale fading and long-term shadowing, we characterize the downlink rate distribution at a typical user equipment (UE) in a heterogeneous cellular network (HetNet), where shadowing, following any general distribution, impacts cell selection while fading does not. Most prior work either ignores the impact of channel randomness on cell selection or lumps all the sources of randomness into a single variable, with cell selection based on the instantaneous signal strength, which is unrealistic. As an application of the results, we study the impact of shadowing on load balancing in terms of the optimal per-tier selection bias needed for rate maximization. © 2012 IEEE. Source


Karch A.,University of Texas at Austin
Policy Studies Journal | Year: 2010

The early childhood education policy community has been described as a "divided constituency" in which groups with the same underlying goals sometimes work at cross purposes. This article examines how this internal division affects the contemporary funding of preschool education. It finds that states with a relatively large Head Start community are significantly more likely not to fund preschool education and significantly less likely to dedicate preschool funding exclusively to a freestanding state program. These results suggest that the creation and political solidification of Head Start generated policy feedback. They contributed to an ongoing tension within the early education community as Head Start beneficiaries viewed the creation of a freestanding preschool program as a political threat. This political dynamic illustrates the more general way in which the existence of a public policy can alter the dynamics of future political action. © 2010 Policy Studies Organization. Source


Russo G.A.,University of Texas at Austin
American Journal of Physical Anthropology | Year: 2010

Two contrasting patterns of lumbar vertebral morphology generally characterize anthropoids. "Long-backed" monkeys are distinguished from "short-backed" apes [Benton: The baboon in medical research, Vol. 2 (1967:201)] with respect to several vertebral features thought to afford greater spinal flexibility in the former and spinal rigidity in the latter. Yet, discussions of spinal mobility are lacking important functional insight that can be gained by analysis of the zygapophyses, the spine's synovial joints responsible for allowing and resisting intervertebral movements. Here, prezygapophyseal articular facet (PAF) shape in the thoracolumbar spine of Papio, Hylobates, Pongo, Gorilla, and Pan is evaluated in the context of the "long-backed" versus "short-backed" model. A three-dimensional geometric morphometric approach is used to examine how PAF shape changes along the thoracolumbar vertebral column of each taxon and how PAF shape varies across taxa at corresponding vertebral levels. The thoracolumbar transition in PAF shape differs between Papio and the hominoids, between Hylobates and the great apes, and to a lesser extent, among great apes. At the level of the first lumbar vertebra, the PAF shape of Papio is distinguished from that of hominoids. At the level of the second lumbar vertebra, there is variation to some extent among all taxa. These findings suggest that morphological and functional distinctions in primate vertebral anatomy may be more complex than suggested by a "long-backed" versus "short-backed" dichotomy. © 2010 Wiley-Liss, Inc. Source


Vaze R.,Tata Institute of Fundamental Research | Heath R.W.,University of Texas at Austin
IEEE Transactions on Information Theory | Year: 2011

In a two-way relay channel, two sources use one or more relay nodes to exchange data with each other. This paper considers a multiple input multiple output (MIMO) two-way relay channel, where each relay node has one or more antennas. Optimal relay transmission strategies for the two-way relay channel are derived to maximize the achievable rate with amplify and forward (AF) at each relay and to achieve the optimal diversity-multiplexing tradeoff (DM-tradeoff). To maximize the achievable rate with AF, an iterative algorithm is proposed which solves a power minimization problem subject to minimum signal-to-interference-and-noise ratio constraints at every step. The power minimization problem is nonconvex; the Karush-Kuhn-Tucker conditions, however, are shown to be sufficient for optimality. The capacity scaling law of the two-way relay channel with an increasing number of relays is also established by deriving lower and upper bounds on the capacity region of the two-way relay channel. To achieve the optimal DM-tradeoff, a compress and forward strategy is proposed and its DM-tradeoff is derived. For the full-duplex two-way relay channel, the proposed strategy achieves the optimal DM-tradeoff, while for the half-duplex case the proposed strategy is shown to achieve the optimal DM-tradeoff under some conditions. © 2011 IEEE. Source


Ruegg A.,University of California at Berkeley | Lin C.,University of Texas at Austin
Physical Review Letters | Year: 2013

We investigate the electronic structure induced by wedge disclinations (conical singularities) in a honeycomb lattice model realizing Chern numbers γ=±1. We establish a correspondence between the bound state of (i) an isolated Φ0/2 flux, (ii) an isolated pentagon (n=1) or heptagon (n=-1) defect with an external flux of magnitude nγΦ0/4 through the center, and (iii) an isolated square or octagon defect without external flux, where Φ0=h/e is the flux quantum. Because of the above correspondence, the existence of isolated electronic states bound to disclinations is robust against various perturbations. Hence, measuring these defect states offers an interesting probe of graphene-based topological insulators which is complementary to measurements of the quantized edge currents. © 2013 American Physical Society. Source


Stachowiak J.C.,University of Texas at Austin | Brodsky F.M.,University of California at San Francisco | Miller E.A.,Columbia University
Nature Cell Biology | Year: 2013

Many cellular membrane-bound structures exhibit distinct curvature that is driven by the physical properties of their lipid and protein constituents. Here we review how cells manipulate and control this curvature in the context of dynamic events such as vesicle-mediated membrane traffic. Lipids and cargo proteins each contribute energy barriers that must be overcome during vesicle formation. In contrast, protein coats and their associated accessory proteins drive membrane bending using a variety of interdependent physical mechanisms. We survey the energy costs and drivers involved in membrane curvature, and draw a contrast between the stochastic contributions of molecular crowding and the deterministic assembly of protein coats. These basic principles also apply to other cellular examples of membrane bending events, including important disease-related problems such as viral egress. © 2013 Macmillan Publishers Limited. All rights reserved. Source


Boas D.A.,Harvard University | Dunn A.K.,University of Texas at Austin
Journal of Biomedical Optics | Year: 2010

First introduced in the 1980s, laser speckle contrast imaging is a powerful tool for full-field imaging of blood flow. Recently, laser speckle contrast imaging has gained increased attention, in part due to its rapid adoption for blood flow studies in the brain. We review the underlying physics of speckle contrast imaging and discuss recent developments to improve the quantitative accuracy of blood flow measures. We also review applications of laser speckle contrast imaging in neuroscience, dermatology and ophthalmology. © 2010 Society of Photo-Optical Instrumentation Engineers. Source
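The quantity underlying the technique is the local speckle contrast K = σ/⟨I⟩, computed over a small sliding window of the raw speckle image; faster flow blurs the speckle pattern and lowers K. A minimal numpy sketch (the 7×7 window is an arbitrary illustrative choice, not a value from the review):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def speckle_contrast(intensity, window=7):
    # K = (local standard deviation) / (local mean) of raw speckle intensity
    # over a sliding window; lower K implies more blurring, i.e., faster flow.
    patches = sliding_window_view(intensity, (window, window))
    return patches.std(axis=(-1, -2)) / patches.mean(axis=(-1, -2))
```

Converting K into quantitative flow values requires the speckle-decorrelation models the review discusses; the contrast map itself is only the first step.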


Ricles L.M.,University of Texas at Austin
International journal of nanomedicine | Year: 2011

Stem cells can differentiate into multiple cell types, and therefore can be used for cellular therapies, including tissue repair. However, the participation of stem cells in tissue repair and neovascularization is not well understood. Therefore, implementing a noninvasive, long-term imaging technique to track stem cells in vivo is needed to obtain a better understanding of the wound healing response. Generally, we are interested in developing an imaging approach to track mesenchymal stem cells (MSCs) in vivo after delivery via a polyethylene glycol modified fibrin matrix (PEGylated fibrin matrix) using MSCs loaded with gold nanoparticles as nanotracers. The objective of the current study was to assess the effects of loading MSCs with gold nanoparticles on cellular function. In this study, we utilized various gold nanoparticle formulations by varying size and surface coatings and assessed the efficiency of cell labeling using darkfield microscopy. We hypothesized that loading cells with gold nanotracers would not significantly alter cell function due to the inert and biocompatible characteristics of gold. The effect of nanoparticle loading on cell viability and cytotoxicity was analyzed using a LIVE/DEAD stain and an MTT assay. The ability of MSCs to differentiate into adipocytes and osteocytes after nanoparticle loading was also examined. In addition, nanoparticle loading and retention over time was assessed using inductively coupled plasma mass spectrometry (ICP-MS). Our results demonstrate that loading MSCs with gold nanotracers does not alter cell function and, based on the ICP-MS results, long-term imaging and tracking of MSCs is feasible. These findings strengthen the possibility of imaging MSCs in vivo, such as with optical or photoacoustic imaging, to understand better the participation and role of MSCs in neovascularization. Source


Lee H.,University of Texas at Austin
Optics Express | Year: 2010

A collimation method for misaligned optical systems is proposed. The method is based on selectively nullifying main alignment-driven aberration components. This selective compensation is achieved by the optimal adjustment of chosen alignment parameters. It is shown that this optimal adjustment can be obtained by solving a linear matrix equation of the low-order alignment-driven terms of primary field aberrations. A significant result from the adjustment is to place the centers of the primary field aberrations, initially scattered over the field due to misalignment, at a desired common field location. This aberration concentering naturally results in recovery of image quality across the field of view. Error analyses and robustness tests show the method's feasibility in efficient removal of alignment-driven aberrations in the face of measurement and model uncertainties. The extension of the method to the collimation of a misaligned system with higher-order alignment-driven aberrations is also shown. © 2010 Optical Society of America. Source


Kovetz E.D.,University of Texas at Austin | Kamionkowski M.,Johns Hopkins University
Physical Review Letters | Year: 2013

An extremely large void and a cosmic texture are two possible explanations for the cold spot seen in the cosmic microwave background. We investigate how well these two hypotheses can be tested with weak lensing of 21-cm fluctuations from the epoch of reionization measured with the Square Kilometer Array. While the void explanation for the cold spot can be tested with the Square Kilometer Array, given enough observation time, the texture scenario requires significantly prolonged observations, at the highest frequencies that correspond to the epoch of reionization, over the field of view containing the cold spot. © 2013 American Physical Society. Source


Weinberg S.,University of Texas at Austin
Physical Review Letters | Year: 2013

It is argued that exotic mesons consisting of two quarks and two antiquarks are not ruled out in quantum chromodynamics with a large number N of colors, as generally thought. Tetraquarks of one class are typically long-lived, with decay rates proportional to 1/N. © 2013 American Physical Society. Source


Weinberg S.,University of Texas at Austin
Physical Review Letters | Year: 2013

It is suggested that Goldstone bosons may be masquerading as fractional cosmic neutrinos, contributing about 0.39 to what is reported as the effective number of neutrino types in the era before recombination. The broken symmetry associated with these Goldstone bosons is further speculated to be the conservation of the particles of dark matter. © 2013 American Physical Society. Source


Matz M.V.,University of Texas at Austin
Physiological Reviews | Year: 2010

Green fluorescent protein (GFP) from the jellyfish Aequorea victoria and its homologs from diverse marine animals are widely used as universal genetically encoded fluorescent labels. Many laboratories have focused their efforts on identification and development of fluorescent proteins with novel characteristics and enhanced properties, resulting in a powerful toolkit for visualization of structural organization and dynamic processes in living cells and organisms. The diversity of currently available fluorescent proteins covers nearly the entire visible spectrum, providing numerous alternative possibilities for multicolor labeling and studies of protein interactions. Photoactivatable fluorescent proteins enable tracking of photolabeled molecules and cells in space and time and can also be used for super-resolution imaging. Genetically encoded sensors make it possible to monitor the activity of enzymes and the concentrations of various analytes. Fast-maturing fluorescent proteins, cell clocks, and timers further expand the options for real time studies in living tissues. Here we focus on the structure, evolution, and function of GFP-like proteins and their numerous applications for in vivo imaging, with particular attention to recent techniques. Copyright © 2010 the American Physiological Society. Source


Demkowicz L.,University of Texas at Austin | Gopalakrishnan J.,University of Florida
Computer Methods in Applied Mechanics and Engineering | Year: 2010

Considering a simple model transport problem, we present a new finite element method. While the new method fits in the class of discontinuous Galerkin (DG) methods, it differs from standard DG and streamline diffusion methods, in that it uses a space of discontinuous trial functions tailored for stability. The new method, unlike the older approaches, yields optimal estimates for the primal variable in both the element size h and polynomial degree p, and outperforms the standard upwind DG method. © 2010 Elsevier B.V. Source


Ksepka D.T.,North Carolina State University | Clarke J.A.,University of Texas at Austin
Bulletin of the American Museum of Natural History | Year: 2010

We present the first detailed description of Perudyptes devriesi, a basal penguin from the middle Eocene (∼42 Ma) Paracas Formation of Peru, and a new analysis of all published extinct penguin species as well as controversial fragmentary specimens. The Perudyptes devriesi holotype includes key regions of the skull and significant postcranial material, thus helping to fill a major phylogenetic and stratigraphic (∼20 million year) gap between the earliest fossil penguins (Waimanu manneringi and Waimanu tuatahi, ∼58–61.6 Ma) and the next oldest partial skeletons. Perudyptes devriesi is diagnosable by five autapomorphies: (1) an anteroventrally directed postorbital process, (2) marked anterior expansion of the parasphenoid rostrum, (3) posterior trochlear ridge of the humerus projecting distal to the middle trochlear ridge and conformed as a large, broadly curved surface, (4) convex articular surface for the antitrochanter of the femur, and (5) extremely weak anterior projection of the lateral condyle of the tibiotarsus. The skull of Perudyptes is characterized by deep temporal fossae and an elongate, narrow beak that differs from other reported stem penguins in its short mandibular symphysis. The wing skeleton of Perudyptes preserves a combination of plesiomorphic features also observed in the basal penguin Waimanu and derived features shared with more crownward penguins. Features of the wing optimized as primitive for Sphenisciformes include retention of a discrete dorsal supracondylar tubercle on the humerus and presence of a modestly projected pisiform process on the carpometacarpus. Derived features present in Perudyptes and all more crownward penguins, but absent in Waimanu, include a more flattened humerus, development of a trochlea for the tendon of m. scapulotriceps at the distal end of the humerus, and bowing of the anterior face of the carpometacarpus. 
A combined molecular and morphological dataset for Sphenisciformes was expanded by adding 25 osteological and soft tissue characters as well as 11 taxa. In agreement with previous results, Perudyptes devriesi is identified as one of the most basal members of Sphenisciformes. This analysis also confirms the placement of the middle/late Miocene (∼11–13 Ma) fossil Spheniscus muizoni as a member of the Spheniscus clade and places the late Miocene (∼10 Ma) Madrynornis mirandus as sister taxon to extant Eudyptes. These two species, known from relatively complete partial skeletons, are the oldest crown clade penguin fossils and represent well-corroborated temporal calibration points for the Spheniscus-Eudyptula divergence and Megadyptes-Eudyptes divergence, respectively. Our results reaffirm that the Miocene penguin taxon Palaeospheniscus, recently proposed to represent a member of the crown radiation, belongs outside of the crown clade Spheniscidae. The phylogenetic positions of small Eocene Antarctic penguin taxa (Delphinornis, Marambiornis, and Mesetaornis) recently proposed as possible direct ancestors to crown Spheniscidae were further evaluated using alternate coding strategies for incorporating scorings from isolated elements that preserve critical morphologies and are thought to represent these taxa, although they cannot yet be reliably assigned to individual species. Under all scoring regimes, Delphinornis, Marambiornis, and Mesetaornis were recovered as distantly related to Spheniscidae. Using synapomorphies identified in the primary analysis, we evaluated the phylogenetic position of fragmentary specimens, including the holotypes of valid but poorly known species, specimens currently unassignable to the species level, and morphologically distinct specimens that have not yet been named. 
All pre-Miocene specimens can be excluded from Spheniscidae based on presence of plesiomorphies lost in all crown penguins, consistent with a recent radiation for the penguin crown clade. This study provides additional support for a scenario of penguin evolution characterized by an origin of flightlessness near the K-T boundary, dispersal throughout the Southern Hemisphere during the early Paleogene, and a late Cenozoic origin for the crown clade Spheniscidae. Stratigraphic distribution and phylogenetic relationships of fossil penguins are consistent with distinct radiations during the Eocene, Oligocene, and Miocene. While the Eocene and Oligocene penguin faunas are similar in many respects, the Miocene fauna is characterized by smaller average size and novel cranial morphologies, suggesting that an ecological shift in diet occurred close to the origin of crown Spheniscidae. © American Museum of Natural History 2010. Source


Qu Y.,Accenture | Bard J.F.,University of Texas at Austin
Computers and Operations Research | Year: 2012

The pickup and delivery problem (PDP) has been studied extensively for applications ranging from courier, cargo and postal services, to public transportation. The work presented here was inspired by a daily route planning problem at a regional air carrier that was trying to determine the benefits of transshipment. Accordingly, a primary goal of this paper is to identify the circumstances under which measurable cost savings can be achieved when one aircraft transports a request from its origin to an intermediate point and a second aircraft picks it up and delivers it to its final destination. In structuring the analysis, we describe a unique way to model this transshipment option on a directed graph and introduce a specialized two-route insertion heuristic that considers when to exploit this option. Based on the new representation, most existing heuristics for the PDP can be readily extended to handle transshipments. To find solutions, we developed a greedy randomized adaptive search procedure (GRASP) with several novel features. In the construction phase, shipment requests are inserted into routes until all demand is satisfied or no feasible insertion exists. In the improvement phase, an adaptive large neighborhood search algorithm is used to modify portions of the feasible routes. Specialized removal and insertion heuristics were designed for this purpose. In the absence of test cases in the literature, we also developed a procedure for randomly generating problem instances. Testing was done on 56 existing PDP instances which have 50 requests each, and on 50 new data sets with 25 requests each and one transshipment location. For the former, the performance and solution quality of the GRASP were comparable to the best known heuristics. For the latter, GRASP found solutions within 1% of optimality on 88% of the instances. © 2011 Elsevier Ltd. All rights reserved. Source
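GRASP's two phases, greedy-randomized construction followed by local-search improvement, can be sketched generically. The toy below applies a restricted-candidate-list construction and 2-opt improvement to a plain TSP rather than the paper's PDP with transshipment; all names and parameter values are illustrative, not drawn from the paper:

```python
import random

def grasp_tsp(dist, iters=50, alpha=0.3, seed=0):
    # GRASP skeleton: each iteration builds a tour with a greedy-randomized
    # rule (alpha controls the restricted candidate list), improves it with
    # 2-opt, and keeps the best tour found.
    rng = random.Random(seed)
    n = len(dist)

    def tour_len(t):
        return sum(dist[t[i]][t[(i + 1) % n]] for i in range(n))

    def construct():
        tour, rest = [0], set(range(1, n))
        while rest:
            cand = sorted(rest, key=lambda j: dist[tour[-1]][j])
            rcl = cand[:max(1, int(alpha * len(cand)))]  # restricted candidate list
            tour.append(rng.choice(rcl))
            rest.remove(tour[-1])
        return tour

    def two_opt(t):
        improved = True
        while improved:
            improved = False
            for i in range(1, n - 1):
                for j in range(i + 1, n):
                    cand = t[:i] + t[i:j + 1][::-1] + t[j + 1:]
                    if tour_len(cand) < tour_len(t):
                        t, improved = cand, True
        return t

    best = None
    for _ in range(iters):
        t = two_opt(construct())
        if best is None or tour_len(t) < tour_len(best):
            best = t
    return best, tour_len(best)
```

The paper's version replaces the construction step with request insertion into aircraft routes and the improvement step with adaptive large neighborhood search, but the control flow is the same.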


Panse V.G.,ETH Zurich | Johnson A.W.,University of Texas at Austin
Trends in Biochemical Sciences | Year: 2010

In eukaryotic cells, ribosomes are pre-assembled in the nucleus and exported to the cytoplasm where they undergo final maturation. This involves the release of trans-acting shuttling factors, transport factors, incorporation of the remaining ribosomal proteins, and final rRNA processing steps. Recent work, particularly on the large (60S) ribosomal subunit, has confirmed that the 60S subunit is exported from the nucleus in a functionally inactive state. Its arrival in the cytoplasm triggers events that render it translationally competent. Here we focus on these cytoplasmic maturation events and speculate why eukaryotic cells have evolved such an elaborate maturation pathway. © 2010. Source


Whitfield C.,University of Guelph | Stephen Trent M.,University of Texas at Austin
Annual Review of Biochemistry | Year: 2014

Lipopolysaccharide molecules represent a unique family of glycolipids based on a highly conserved lipid moiety known as lipid A. These molecules are produced by most gram-negative bacteria, in which they play important roles in the integrity of the outer-membrane permeability barrier and participate extensively in host-pathogen interplay. Few bacteria contain lipopolysaccharide molecules composed only of lipid A. In most forms, lipid A is glycosylated by addition of the core oligosaccharide that, in some bacteria, provides an attachment site for a long-chain O-antigenic polysaccharide. The complexity of lipopolysaccharide structures is reflected in the processes used for their biosynthesis and export. Rapid growth and cell division depend on the bacterial cell's capacity to synthesize and export lipopolysaccharide efficiently and in large amounts. We review recent advances in those processes, emphasizing the reactions that are essential for viability. Copyright © 2014 by Annual Reviews. Source


Kwasinski A.,University of Texas at Austin
IEEE Transactions on Power Electronics | Year: 2012

In this paper, a switching strategy for multiple-input converters (MICs) is presented and analyzed. MICs have been identified to provide a cost-effective approach for energy harvesting in hybrid systems, and for power distribution in micro- and nanogrids. The basic principle of the proposed switching strategy is that the effective duty ratio of each switch is an integer multiple of a common duty ratio (CDR), the CDR being the duty ratio of a common switching function that is generated at a higher frequency by frequency division. The proposed strategy enables switching functions for MICs that have a greater number of input legs to be generated with relative ease. Another benefit of this scheme is that it allows an MIC's output voltage to be regulated by employing the CDR as the only control variable, irrespective of the number of input legs present. Essentially, the strategy transforms an MIC into an equivalent single-input single-output system for analysis, which simplifies controller design and implementation. Without loss of generality, this technique is demonstrated by analyzing a multiple-input buck-boost converter. A PI controller is shown to regulate the MIC's operating point. The analysis is verified by simulations and experiments. © 2012 IEEE. Source
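One way to picture the frequency-division idea is to build a common high-frequency pulse train and gate each input leg on for an integer number of subperiods. The sketch below is our reading of that principle, not the paper's exact circuit or normalization; the sampling resolution and parameter names are arbitrary:

```python
import numpy as np

def leg_waveform(k, N, D, samples_per_sub=100):
    # Common switching function: duty ratio D at N times the leg frequency.
    # Leg is gated on during k of the N subperiods, so its on-fraction over
    # one full period is k * D / N (under this normalization).
    sub = (np.arange(samples_per_sub) < D * samples_per_sub).astype(float)
    gate = np.repeat([1.0 if i < k else 0.0 for i in range(N)], samples_per_sub)
    return np.tile(sub, N) * gate

w = leg_waveform(k=3, N=4, D=0.5)
effective_on_fraction = w.mean()  # = k * D / N under this construction
```

Because every leg's waveform is derived from the one common function, adjusting D alone moves all effective duty ratios together, which is the single-control-variable property the abstract highlights.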


Guo L.,University of Texas at Austin
Journal of Broadcasting and Electronic Media | Year: 2012

This article presents the innovative application of social network analysis to agenda setting research. It suggests that the approach of network analysis enables researchers to map out the interrelationships among objects and attributes both in the media agenda and the public agenda. Further, by conducting statistical analysis, researchers are able to compare the media agenda networks and public agenda networks in order to explore a third level of agenda setting effects. Concrete procedures for applying network analysis in agenda setting research are presented, and a set of hypotheses are suggested in this article. © 2012 Copyright Taylor and Francis Group, LLC. Source
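Comparing a media agenda network with a public agenda network comes down to correlating two weighted adjacency matrices over the same set of objects or attributes. The sketch below uses a plain upper-triangle correlation with a simple node-permutation test, a simplified stand-in for the QAP-style procedures common in network analysis; it is an illustration of the idea, not the article's prescribed procedure:

```python
import numpy as np

def network_correlation(A, B, n_perm=500, seed=0):
    # Pearson correlation between the upper triangles of two agenda
    # networks, with a node-permutation null distribution for significance.
    rng = np.random.default_rng(seed)
    iu = np.triu_indices_from(A, k=1)
    r = np.corrcoef(A[iu], B[iu])[0, 1]
    null = [np.corrcoef(A[np.ix_(p, p)][iu], B[iu])[0, 1]
            for p in (rng.permutation(A.shape[0]) for _ in range(n_perm))]
    pval = (np.sum(np.abs(null) >= abs(r)) + 1) / (n_perm + 1)
    return r, pval
```

A high, significant correlation between the media and public networks would be evidence for the third-level (network) agenda setting effect the article proposes.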


Baldick R.,University of Texas at Austin
IEEE Systems Journal | Year: 2012

Many jurisdictions worldwide are greatly increasing the amount of wind production, with the expectation that increasing renewables will cost-effectively reduce greenhouse emissions. This paper discusses the interaction of increasing wind, transmission constraints, renewable credits, wind and demand correlation, intermittency, carbon prices, and electricity market prices using the particular example of the Electric Reliability Council of Texas market. An estimate is made of the cost of using wind to mitigate greenhouse gas emissions. © 2011 IEEE. Source


Rezzolla L.,Institute for Theoretical Physics | Kumar P.,University of Texas at Austin
Astrophysical Journal | Year: 2015

The merger of a binary of neutron stars provides natural explanations for many of the features of short gamma-ray bursts (SGRBs), such as the generation of a hot torus orbiting a rapidly rotating black hole, which can then build a magnetic jet and provide the energy reservoir to launch a relativistic outflow. However, this scenario has problems explaining the recently discovered long-term and sustained X-ray emission associated with the afterglows of a subclass of SGRBs. We propose a new model that explains how an X-ray afterglow can be sustained by the product of the merger and how the X-ray emission is produced before the corresponding emission in the gamma-band, though it is observed to follow it. Overall, our paradigm combines in a novel manner a number of well-established features of the emission in SGRBs and results from simulations. Because it involves the propagation of an ultra-relativistic outflow and its interaction with a confining medium, the paradigm also highlights a unifying phenomenology between short and long GRBs. © 2015. The American Astronomical Society. All rights reserved. Source


Darnell M.Z.,University of Texas at Austin
Journal of Experimental Marine Biology and Ecology | Year: 2012

Color change can serve a number of functions, and can be a singular aperiodic event or a rhythmic process driven by responses to environmental cues or endogenous rhythms. Endogenous pigmentation rhythms have been identified in a number of taxa, with fiddler crabs being the subject of much of this research. Despite a long history of experimental studies of chromatophore-level rhythmicity in fiddler crabs, few studies have considered the entrainment cues for these rhythms or extended findings to whole-organism coloration, information important for understanding the biological properties of circadian systems and understanding the functional significance of these rhythms. This study examined the circadian pigmentation rhythm in the subtropical fiddler crab Uca panacea at both the cellular (melanophore) and organismal levels, including expression in artificial light/dark cycles and constant darkness, entrainment by light/dark and temperature cycles, and relationships between melanophore rhythms and the spectral reflectance of the carapace. On the melanophore level, crabs exhibited a circadian rhythm in pigment dispersion, with maximum dispersion occurring during the day and maximum concentration occurring during the night. This rhythm persisted under ambient or reversed light/dark cycles, with maximum pigment dispersion occurring during the light phase, or under constant darkness. Both light/dark and temperature cycles entrained the rhythm, although light/dark cycles resulted in greater phase shift. The circadian rhythm in melanin dispersion within melanophores is associated with a circadian rhythm in organismal coloration, with carapace reflectance low during the day and high at night. Because of the high absorption of UV radiation by melanin, the functional significance of this rhythm may be as a mechanism of UV-protection during the day when crabs are exposed to high levels of UV radiation while foraging on open sand flats of the intertidal zone. © 2012 Elsevier B.V. 
Source


Zhang B.,Peking University | Zhang B.,University of Nevada, Las Vegas | Kumar P.,University of Texas at Austin
Physical Review Letters | Year: 2013

The IceCube Collaboration recently reported a stringent upper limit on the high energy neutrino flux from gamma-ray bursts (GRBs), which provides a meaningful constraint on the standard internal shock model. Recent broadband electromagnetic observations of GRBs also challenge the internal shock paradigm for GRBs, and some competing models for γ-ray prompt emission have been proposed. We describe a general scheme for calculating the GRB neutrino flux, and compare the predicted neutrino flux levels for different models. We point out that the current neutrino flux upper limit starts to constrain the standard internal shock model. The dissipative photosphere models are also challenged if the cosmic ray luminosity from GRBs is at least 10 times larger than the γ-ray luminosity. If the neutrino flux upper limit continues to go down in the next few years, then it would suggest the following possibilities: (i) the photon-to-proton luminosity ratio in GRBs is anomalously high for shocks, which may be achieved in some dissipative photosphere models and magnetic dissipation models; or (ii) the GRB emission site is at a larger radius than the internal shock radius, as expected in some magnetic dissipation models such as the internal collision-induced magnetic reconnection and turbulence model. © 2013 American Physical Society. Source


Patricola C.M.,Cornell University | Cook K.H.,University of Texas at Austin
Climate Dynamics | Year: 2010

A method for simulating future climate on regional space scales is developed and applied to northern Africa. Simulation with a regional model allows for the horizontal resolution needed to resolve the region's strong meridional gradients and the optimization of parameterizations and land-surface model. The control simulation is constrained by reanalysis data, and realistically represents the present day climate. Atmosphere-ocean general circulation model (AOGCM) output provides SST and lateral boundary condition anomalies for 2081-2100 under a business-as-usual emissions scenario, and the atmospheric CO2 concentration is increased to 757 ppmv. A nine-member ensemble of future climate projections is generated by using output from nine AOGCMs. The consistency of precipitation projections for the end of the twenty-first century is much greater for the regional model ensemble than among the AOGCMs. More than 77% of ensemble members produce the same sign rainfall anomaly over much of northern Africa. For West Africa, the regional model projects wetter conditions in spring, but a mid-summer drought develops during June and July, and the heat stroke risk increases across the Sahel. Wetter conditions resume in late summer, and the likelihood of flooding increases. The regional model generally projects wetter conditions over eastern Central Africa in June and drying during August through September. Severe drought impacts parts of East Africa in late summer. Conditions become wetter in October, but the enhanced rainfall does not compensate for the summertime deficit. The risk of heat stroke increases over this region, although the threat is not projected to be as great as in the Sahel. © 2009 The Author(s). Source


Lambowitz A.M.,University of Texas at Austin
Cold Spring Harbor perspectives in biology | Year: 2011

Group II introns are mobile ribozymes that self-splice from precursor RNAs to yield excised intron lariat RNAs, which then invade new genomic DNA sites by reverse splicing. The introns encode a reverse transcriptase that stabilizes the catalytically active RNA structure for forward and reverse splicing, and afterwards converts the integrated intron RNA back into DNA. The characteristics of group II introns suggest that they or their close relatives were evolutionary ancestors of spliceosomal introns, the spliceosome, and retrotransposons in eukaryotes. Further, their ribozyme-based DNA integration mechanism enabled the development of group II introns into gene targeting vectors ("targetrons"), which have the unique feature of readily programmable DNA target specificity. Source


Fiori G.,University of Pisa | Bonaccorso F.,Italian Institute of Technology | Iannaccone G.,University of Pisa | Palacios T.,Massachusetts Institute of Technology | And 4 more authors.
Nature Nanotechnology | Year: 2014

The compelling demand for higher performance and lower power consumption in electronic systems is the main driving force of the electronics industry's quest for devices and/or architectures based on new materials. Here, we provide a review of electronic devices based on two-dimensional materials, outlining their potential as a technological option beyond scaled complementary metal-oxide-semiconductor switches. We focus on the performance limits and advantages of these materials and associated technologies, when exploited for both digital and analog applications, focusing on the main figures of merit needed to meet industry requirements. We also discuss the use of two-dimensional materials as an enabling factor for flexible electronics and provide our perspectives on future developments. © 2014 Macmillan Publishers Limited. Source


Harris K.M.,University of Texas at Austin
Cold Spring Harbor perspectives in biology | Year: 2012

The morphology and molecular composition of synapses provide the structural basis for synaptic function. This article reviews the electron microscopy of excitatory synapses on dendritic spines, using data from rodent hippocampus, cerebral cortex, and cerebellar cortex. Excitatory synapses have a prominent postsynaptic density, in contrast with inhibitory synapses, which have less dense presynaptic or postsynaptic specializations and are usually found on the cell body or proximal dendritic shaft. Immunogold labeling shows that the presynaptic active zone provides a scaffold for key molecules involved in the release of neurotransmitter, whereas the postsynaptic density contains ligand-gated ionic channels, other receptors, and a complex network of signaling molecules. Delineating the structure and molecular organization of these axospinous synapses represents a crucial step toward understanding the mechanisms that underlie synaptic transmission and the dynamic modulation of neurotransmission associated with short- and long-term synaptic plasticity. Source


Kulis B.,Ohio State University | Grauman K.,University of Texas at Austin
IEEE Transactions on Pattern Analysis and Machine Intelligence | Year: 2012

Fast retrieval methods are critical for many large-scale and data-driven vision applications. Recent work has explored ways to embed high-dimensional features or complex distance functions into a low-dimensional Hamming space where items can be efficiently searched. However, existing methods do not apply for high-dimensional kernelized data when the underlying feature embedding for the kernel is unknown. We show how to generalize locality-sensitive hashing to accommodate arbitrary kernel functions, making it possible to preserve the algorithm's sublinear time similarity search guarantees for a wide class of useful similarity functions. Since a number of successful image-based kernels have unknown or incomputable embeddings, this is especially valuable for image retrieval tasks. We validate our technique on several data sets, and show that it enables accurate and fast performance for several vision problems, including example-based object classification, local feature matching, and content-based retrieval. © 2012 IEEE. Source
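The Hamming embedding at the heart of this line of work is easiest to see in the classical random-hyperplane scheme, where each bit is the sign of a random projection. The sketch below shows only that explicit-feature baseline; Kulis and Grauman's contribution is constructing such bits from kernel evaluations alone, which this sketch does not implement:

```python
import numpy as np

def hamming_codes(X, n_bits=16, seed=0):
    # Random-hyperplane LSH: bit j of item x is sign(w_j . x). Items with
    # small angular distance agree on most bits with high probability.
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_bits))
    return (X @ W > 0).astype(np.uint8)

def hamming_dist(a, b):
    # Number of differing bits between two binary codes.
    return int((a != b).sum())
```

Search then proceeds in the binary code space, which is what yields the sublinear-time guarantees the abstract refers to.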


Chen W.,University of Texas at Austin
Social Science Computer Review | Year: 2013

Drawing on nationally representative survey data, this article examines the implications of Internet use and online communication for strong and weak ties in Americans' social networks. In line with the existing literature, this research shows that frequent Internet use and online communication are associated with a larger core discussion network and a more extensive position-generated network. More importantly, this research provides a finer-grained analysis by disaggregating the overall network into strong and weak ties. First, Internet use, but not online communication, is positively related to weak-tie-based network extensity in the position-generated networks. Second, Internet use and online communication are positively related to the number and the proportion of strong ties in Americans' core discussion networks. These results help reconcile some of the conflicting findings and interpretations based on different network measures in the existing literature. © The Author(s) 2013.


Goff J.A.,University of Texas at Austin
Journal of Geophysical Research: Solid Earth | Year: 2010

Abyssal hills, which are pervasive landforms on the seafloor of the Earth's oceans, represent a potential tectonic record of the history of mid-ocean ridge spreading. However, the most detailed global maps of the seafloor, derived from the satellite altimetry-based gravity field, cannot be used to deterministically characterize such small-scale (<10 km) morphology. Nevertheless, the small-scale variability of the gravity field can be related to the statistical properties of abyssal hill morphology using the upward continuation formulation. In this paper, I construct a global prediction of abyssal hill root-mean-square (rms) heights from the small-scale variability of the altimetric gravity field. The abyssal hill-related component of the gravity field is derived by first masking distinct features, such as seamounts, mid-ocean ridges, and continental margins, and then applying a newly designed adaptive directional filter algorithm to remove fracture zone/discontinuity fabric. A noise field is derived empirically by correlating the rms variability of the small-scale gravity field to the altimetric noise field in regions of very low relief, and the noise variance is subtracted from the small-scale gravity variance. Suites of synthetically derived, abyssal hill-formed gravity fields are generated as a function of water depth, basement rms heights, and sediment thickness and used to predict abyssal hill seafloor rms heights from the corrected small-scale gravity rms height. The resulting global prediction of abyssal hill rms heights is validated qualitatively by comparing against expected variations in abyssal hill morphology and quantitatively by comparing against actual measurements of rms heights. Although there is scatter, the prediction appears unbiased. Copyright 2010 by the American Geophysical Union.
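The noise-correction step described above amounts to a subtraction of variances: for independent signal and noise, variances add, so the corrected rms follows from subtracting the noise variance and taking the square root. A minimal sketch (illustrative only; the function name and numerical values are made up):

```python
import numpy as np

def corrected_rms(grav_rms, noise_rms):
    # Variances of independent components add, so subtract the empirically
    # derived noise variance from the small-scale gravity variance; clamp
    # at zero so regions dominated by noise return zero rather than NaN
    var = np.maximum(np.asarray(grav_rms) ** 2 - np.asarray(noise_rms) ** 2, 0.0)
    return np.sqrt(var)

print(corrected_rms(5.0, 3.0))  # 4.0
```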


Chen W.,University of Texas at Austin
Information Society | Year: 2013

The existing literature is oriented toward examining how Internet access and use may affect social capital. The role of social capital in narrowing the digital divides has been frequently mentioned, but few studies have empirically examined how various types of social capital may affect people's access to and use of the Internet. Drawing on a two-wave national panel data set, this article aims to fill this gap. Results demonstrate that social capital facilitates Internet access and use. In particular, resource-rich bonding social capital helps overcome the digital divides in access, general use, and online communication. Before the Internet can revitalize social capital, there must be the right social capital in place to close the digital divides. Highlighting the relationship between social connectivity and digital connectivity, the findings have important implications for policymakers and practitioners. © 2013 Copyright Taylor and Francis Group, LLC.


Navarrete C.C.,University of Texas at Austin
Computers and Education | Year: 2013

In a case study of a middle-school educational game creation course in a south-central US state, the students' creative thinking process is investigated in order to understand their perceptions of the digital design and programming involved in the game creation learning approach. Twelve students with three different levels of game design experience were interviewed: students in grades 6, 7, and 8, with 1, 2, and 3 years of game design experience, respectively. Findings suggest that students found the learning approach satisfying and engaging, yet technologically challenging. The students experienced positive opportunities for engaging the creative thinking process in synthesizing social issue information and constructing their understanding through the creation of interactive, educational digital games. Findings suggest that the creative thinking process in a student-centered game creation learning approach may provide learners a rich and enjoyable learning experience with authentic technology use, as well as provide for deep, insightful learning. © 2013 Elsevier Ltd. All rights reserved.


Shaikh S.R.,East Carolina University | Jolly C.A.,University of Texas at Austin | Chapkin R.S.,Texas A&M University
Molecular Aspects of Medicine | Year: 2012

Fish oil, enriched in bioactive n-3 polyunsaturated fatty acids (PUFA), has therapeutic value for the treatment of inflammation-associated disorders. The effects of n-3 PUFAs are pleiotropic and complex; hence, an understanding of their cellular targets and molecular mechanisms of action remains incomplete. Here we focus on recent data indicating n-3 PUFAs exert immunosuppressive effects on the function of effector and regulatory CD4+ T cells. In addition, we also present emerging evidence that n-3 PUFAs have immunomodulatory effects on B cells. We then focus on one multifaceted mechanism of n-3 PUFAs, which is the alteration of the biophysical and biochemical organization of the plasma membrane. This mechanism is central for downstream signaling, eicosanoid production, transcriptional regulation and cytokine secretion. We highlight recent work demonstrating n-3 PUFA acyl chains in the plasma membrane target the lateral organization of membrane signaling assemblies (i.e. lipid rafts or signaling networks) and de novo phospholipid biosynthesis. We conclude by proposing new functional and mechanistic questions in this area of research that will aid in the development of fish oil as adjuvant therapy for treating unresolved chronic inflammation. © 2012 Elsevier Inc. All rights reserved.


Zhang Y.,University of Texas at Austin
Library and Information Science Research | Year: 2013

This study investigated the dynamic process of people constructing mental models of MedlinePlus, a medical information web space, during their interaction with the system. Thirty-eight people participated in the study. Their mental models of MedlinePlus were measured by a concept listing protocol and an interview method at three time points: after they freely explored the system for 5 min (T1), after the first search session (T2), and after the second search session (T3). The analysis revealed that participants constructed their mental models of MedlinePlus based on the schemas that they have of information-rich web spaces. The model construction process involved changes and development in three parallel dimensions: cognition, emotion, and behavior. The development is enabled and coordinated by three mental activities: assimilating new concepts, modifying existing concepts, and phasing out previously perceived concepts. Furthermore, mental model construction is not only a function of users' internal cognition, but also affected by external cognitive structures, including the system, system feedback, and tasks. Mental model construction is also a process distributed over time. The results suggested that mental models could serve as a framework for guiding user research and system design. The dynamic nature of the mental models indicated that an iterative approach needs to be adopted. © 2012 Elsevier Inc.


Sun N.,University of Texas at Austin
IEEE Transactions on Circuits and Systems I: Regular Papers | Year: 2012

This paper presents a digital background calibration technique that intentionally exploits process variation and noise in comparators to correct conversion errors caused by interstage gain error, gain nonlinearity, and capacitor mismatch in pipelined ADCs. The merits of this technique lie in its design simplicity, fast convergence speed, and low power. Simulation results are presented for a 12-bit pipelined ADC, similar to that described by Murmann and Boser [28] and Keane et al. [29], using low-gain amplifiers. With calibration, the SNDR and SFDR are improved from 47 and 49 dB to 72 and 92 dB, respectively. The number of conversions required for convergence is 10⁶, which is about 4 times faster than that of Keane et al. and 40 times faster than that of Murmann and Boser. © 2004-2012 IEEE.


Chen Y.,University of Texas at Austin
IEEE Geoscience and Remote Sensing Letters | Year: 2015

It has been previously shown that blended simultaneous-source data can be successfully separated using an iterative seislet thresholding algorithm. In this letter, I combine iterative seislet thresholding with a local orthogonalization technique via a shaping regularization framework. During the iterations, the deblended data and its blending noise section are not orthogonal to each other, indicating that the noise section contains significant coherent useful energy. Although the leakage of useful energy can be retrieved by updating the deblended data from the data misfit over many iterations, I propose to accelerate the retrieval of the leakage energy using iterative orthogonalization. This is the first time that multiple constraints have been applied in an underdetermined deblending problem, and the newly proposed framework can overcome the drawback of a low-dimensionality constraint in a traditional 2-D deblending problem. Simulated synthetic and field data examples show the superior performance of the proposed approach. © 2004-2012 IEEE.
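The orthogonalization idea can be illustrated with a global (single-weight) version: project the noise section onto the estimated signal and transfer the coherent part back to the signal. This is a simplified sketch of the principle, not the letter's local, shaping-regularized implementation, and the toy vectors below are made up:

```python
import numpy as np

def orthogonalize(signal, noise):
    # Global orthogonalization weight: least-squares projection of the
    # noise section onto the signal; the coherent leakage w*signal is
    # moved back into the signal, leaving a noise residual orthogonal to it
    w = np.dot(noise, signal) / np.dot(signal, signal)
    return signal + w * signal, noise - w * signal

s = np.array([1.0, 2.0, 3.0])                  # estimated (deblended) signal
n = 0.5 * s + np.array([0.1, -0.2, 0.1])       # noise still holds coherent energy
s2, n2 = orthogonalize(s, n)
print(np.dot(s2, n2))  # ~0: the residual noise is orthogonal to the signal
```

In the letter's local variant the single scalar w becomes a smoothly varying weight estimated under shaping regularization, but the projection step is the same in spirit.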


Castelli D.M.,University of Texas at Austin
Monographs of the Society for Research in Child Development | Year: 2014

Research on physical activity, physical fitness, and academic performance is reviewed from a historical perspective, providing an overview of existing publications focused on children and adolescents. Using rigorous inclusion criteria, the studies were quantified and qualified using both meta-analytic and descriptive analyses, first by time period and then as an overall summary, with a particular focus on secular trends and future directions. This review is timely because the body of literature is growing exponentially, resulting in the emergence of new terminology, methodologies, and identification of mediating and moderating factors. Implications and recommendations for future research are summarized. © 2014 The Society for Research in Child Development, Inc.


Aschoff J.,Colorado School of Mines | Steel R.,University of Texas at Austin
Bulletin of the Geological Society of America | Year: 2011

New, high-resolution, regional correlation and isopach maps provide evidence that (1) Laramide-style deformation began as early as ca. 77 Ma in central Utah, and (2) rapid (208 km m.y.⁻¹) and extensive (400 km) progradation of a clastic wedge was facilitated by reduced subsidence during the transition from Sevier- to Laramide-style deformation. This study defines three Campanian, alluvial-to-marine clastic wedges that traversed 200-400 km eastward across the Utah-Colorado segment of the Cordilleran foreland basin. Wedges A and C are thicker successions with rising-trajectory shoreline stacking patterns (Blackhawk Formation and Lower Castlegate Sandstone, Bluecastle Tongue and Rollins Sandstone) that reflect relatively slow overall progradation (50-81 km m.y.⁻¹) of narrow (10-20 km wide), wave-dominated shorelines. In contrast, wedge B consists of lower-volume successions with a flat to falling shoreline stacking pattern (Middle Castlegate Sandstone, Sego Sandstone, Neslen Formation, Corcoran and Cozzette Members of the Iles Formation) that suggests rapid progradation (~208 km m.y.⁻¹) of embayed (60-80 km wide), mixed-energy (wave- and tide-influenced) shorelines. Wedges A and C prograded 200-250 km in more than ~3 m.y., whereas wedge B prograded 340-400 km in ~2 m.y. (~170 km m.y.⁻¹). The anomalously extensive wedge B is unique in the Utah-Colorado segment because of its long extent, rapid progradation rate, dominance of tidally influenced facies, long-transit transgressions/regressions, and low-accommodation, nested sequence architecture. Stratigraphic relationships indicate development of wedge B coeval with both Sevier- and Laramide-style deformation in Utah.
Assuming a constant sediment supply, the extensive (>300-400 km) "sheet-like" amalgamated wedge (wedge B) may have been caused by (1) reduced subsidence driven by a flexural interference pattern, whereby two short-wavelength (±200 km) flexural profiles are superimposed during the uplift of both basement-cored and thin-skinned thrust belts, (2) an increase in, or eastward migration of, dynamic subsidence during tectonic transitions associated with slab flattening or rollback (i.e., long-wavelength flexure), or (3) reduced subsidence due to short-wavelength flexural interference augmented by the reduction, or migration, of long-wavelength flexure. © 2011 Geological Society of America.


Andrews J.G.,University of Texas at Austin
IEEE Communications Magazine | Year: 2013

Imagine a world with more base stations than cell phones: this is where cellular technology is headed in 10-20 years. This mega-trend requires many fundamental differences in visualizing, modeling, analyzing, simulating, and designing cellular networks vs. the current textbook approach. In this article, the most important shifts are distilled down to seven key factors, with the implications described and new models and techniques proposed for some, while others are ripe areas for future exploration. © 1979-2012 IEEE.


Waters B.,University of Texas at Austin
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2011

We present a new methodology for realizing Ciphertext-Policy Attribute-Based Encryption (CP-ABE) under concrete and noninteractive cryptographic assumptions in the standard model. Our solutions allow any encryptor to specify access control in terms of any access formula over the attributes in the system. In our most efficient system, ciphertext size, encryption time, and decryption time scale linearly with the complexity of the access formula. The only previous work to achieve these parameters was limited to a proof in the generic group model. We present three constructions within our framework. Our first system is proven selectively secure under an assumption that we call the decisional Parallel Bilinear Diffie-Hellman Exponent (PBDHE) assumption, which can be viewed as a generalization of the BDHE assumption. Our next two constructions provide performance tradeoffs to achieve provable security respectively under the (weaker) decisional Bilinear Diffie-Hellman Exponent and decisional Bilinear Diffie-Hellman assumptions. © 2011 International Association for Cryptologic Research.


Meschiari S.,University of Texas at Austin
Astrophysical Journal Letters | Year: 2012

The existence of planets born in environments highly perturbed by a stellar companion represents a major challenge to the paradigm of planet formation. In numerical simulations, the presence of a close binary companion stirs up the relative velocity between planetesimals, which is fundamental in determining the balance between accretion and erosion. However, the recent discovery of circumbinary planets by Kepler establishes that planet formation in binary systems is clearly viable. We perform N-body simulations of planetesimals embedded in a protoplanetary disk, where planetesimal phasing is frustrated by the presence of stochastic torques, modeling the expected perturbations of turbulence driven by the magnetorotational instability. We examine perturbation amplitudes relevant to dead zones in the midplane (conducive to planet formation in single stars), and find that planetesimal accretion can be inhibited even in the outer disk (4-10 AU) far from the central binary, a location previously thought to be a plausible starting point for the formation of circumbinary planets. © 2012. The American Astronomical Society. All rights reserved.


Abanin D.A.,Princeton University | Pesin D.A.,University of Texas at Austin
Physical Review Letters | Year: 2011

We study collective behavior of magnetic adatoms randomly distributed on the surface of a topological insulator. Interactions of an ensemble of adatoms are frustrated, as the RKKY-type interactions of two adatom spins depend on the directions of spins relative to the vector connecting them. We show that at low temperatures the frustrated RKKY interactions give rise to two phases: an ordered ferromagnetic phase with spins pointing perpendicular to the surface, and a disordered spin-glass-like phase. The two phases are separated by a quantum phase transition driven by the magnetic exchange anisotropy. The ordered phase breaks time-reversal symmetry spontaneously, driving the surface states into a gapped state, which exhibits an anomalous quantum Hall effect and provides a realization of the parity anomaly. We find that the magnetic ordering is suppressed by potential scattering. © 2011 American Physical Society.


Dodson-Robinson S.E.,University of Texas at Austin
Astrophysical Journal | Year: 2012

Previous studies of the interior structure of transiting exoplanets have shown that the heavy-element content of gas giants increases with host star metallicity. Since metal-poor planets are less dense and have larger radii than metal-rich planets of the same mass, one might expect that metal-poor stars host a higher proportion of gas giants with large radii than metal-rich stars. Here I present evidence for a negative correlation at the 2.3σ level between eclipse depth and stellar metallicity in the Kepler gas giant candidates. Based on Kendall's τ statistics, the probability that eclipse depth depends on star metallicity is 0.981. The correlation is consistent with planets orbiting low-metallicity stars being, on average, larger in comparison with their host stars than planets orbiting metal-rich stars. Furthermore, since metal-rich stars have smaller radii than metal-poor stars of the same mass and age, a uniform population of planets should show a rise in median eclipse depth with [M/H]. The fact that I find the opposite trend indicates that substantial changes in the gas giant interior structure must accompany increasing [M/H]. I investigate whether the known scarcity of giant planets orbiting low-mass stars could masquerade as an eclipse depth-metallicity correlation, given the degeneracy between metallicity and temperature for cool stars in the Kepler Input Catalog. While the eclipse depth-metallicity correlation is not yet on firm statistical footing and will require spectroscopic [Fe/H] measurements for validation, it is an intriguing window into how the interior structure of planets and even the planet formation mechanism may be changing with Galactic chemical evolution. © 2012. The American Astronomical Society. All rights reserved.
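Kendall's τ, the rank-correlation statistic used here, simply counts concordant versus discordant pairs and normalizes by the total number of pairs. A minimal implementation with made-up numbers (not the paper's data) illustrating a negative eclipse depth-metallicity trend:

```python
import numpy as np

def kendall_tau(x, y):
    # tau = (concordant pairs - discordant pairs) / total pairs;
    # this simple version assumes no ties in either variable
    n = len(x)
    s = 0
    for i in range(n):
        for j in range(i + 1, n):
            s += np.sign((x[i] - x[j]) * (y[i] - y[j]))
    return s / (n * (n - 1) / 2)

metallicity = [-0.4, -0.2, 0.0, 0.1, 0.3]              # hypothetical [M/H]
eclipse_depth = [0.012, 0.011, 0.009, 0.010, 0.007]    # hypothetical depths
print(kendall_tau(metallicity, eclipse_depth))  # -0.8
```

A significance test on τ (as used for the 0.981 probability in the abstract) would additionally compare the statistic against its null distribution; library routines handle ties and p-values.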


Liu C.,University of Texas at Austin
Journal of the Acoustical Society of America | Year: 2013

Just noticeable differences of tone pitch contour discrimination were examined for young English- and Mandarin Chinese-native listeners to examine categorical features of tone perception for the two groups of listeners. Three types of stimuli were used: a Mandarin Chinese vowel, an English vowel, and tonal glides. Level, rising, and falling tones within or across tone boundaries served as the standard stimuli to measure thresholds of tone pitch discrimination. Performance was equivalent between English- and Chinese-native listeners for level tones, but significantly differed for rising and falling tones, regardless of the type of stimuli. English listeners showed significantly lower thresholds at the offset of F0 shifts than Chinese listeners, while Chinese listeners discriminated tone pitch changes at the onset with significantly lower thresholds than their English peers. These psychophysical results, combined with tone perception reported in other studies, indicated that Mandarin-native listeners perceived lexical tones in a categorical manner, based on their lexical tone experience, whereas English-native listeners perceived tones on a psychophysical basis. © 2013 Acoustical Society of America.


Zhang Y.,University of Texas at Austin
Journal of the American Society for Information Science and Technology | Year: 2013

Designing effective consumer health information systems requires deep understanding of the context in which the systems are being used. However, due to the elusive nature of the concept of context, few studies have made it a focus of examination. To fill this gap, we studied the context of consumer health information searching by analyzing questions posted on a social question and answer site: Yahoo! Answers. Based on the analysis, a model of context was developed. The model consists of 5 layers: demographic, cognitive, affective, situational, and social and environmental. The demographic layer contains demographic factors of the person of concern; the cognitive layer contains factors related to the current search task (specifically, topics of interest and information goals) and users' cognitive ability to articulate their needs. The affective layer contains different affective motivations and intentions behind the search. The situational layer contains users' perceptions of the current health condition and where the person is in the illness trajectory. The social and environmental layer contains users' social roles, social norms, and various information channels. Several novel system functions, including faceted search and layered presentation of results, are proposed based on the model to help contextualize and improve users' interactions with health information systems. © 2013 ASIS&T.


Frebel A.,Massachusetts Institute of Technology | Bromm V.,University of Texas at Austin
Astrophysical Journal | Year: 2012

We utilize metal-poor stars in the local, ultra-faint dwarf galaxies (UFDs; L_tot ≤ 10⁵ L_⊙) to empirically constrain the formation process of the first galaxies. Since UFDs have much simpler star formation histories than the halo of the Milky Way, their stellar populations should preserve the fossil record of the first supernova (SN) explosions in their long-lived, low-mass stars. Guided by recent hydrodynamical simulations of first galaxy formation, we develop a set of stellar abundance signatures that characterize the nucleosynthetic history of such an early system if it was observed in the present-day universe. Specifically, we argue that the first galaxies are the product of chemical "one-shot" events, where only one (long-lived) stellar generation forms after the first, Population III, SN explosions. Our abundance criteria thus constrain the strength of negative feedback effects inside the first galaxies. We compare the stellar content of UFDs with these one-shot criteria. Several systems (Ursa Major II, and also Coma Berenices, Bootes I, Leo IV, Segue 1) largely fulfill the requirements, indicating that their high-redshift predecessors did experience strong feedback effects that shut off star formation. We term the study of the entire stellar population of a dwarf galaxy for the purpose of inferring details about the nature and origin of the first galaxies "dwarf galaxy archaeology." This will provide clues to the connection of the first galaxies, the surviving, metal-poor dwarf galaxies, and the building blocks of the Milky Way. © 2012. The American Astronomical Society. All rights reserved.


Waring B.G.,University of Texas at Austin
Soil Biology and Biochemistry | Year: 2013

The theory of ecological stoichiometry predicts that the microbial biomass should regulate production of extracellular enzymes to target the resource in shortest supply. Therefore, microbial communities on decomposing leaf litter should optimize allocation to C-, N-, and P-degrading enzymes according to the stoichiometry of the foliar substrate. Because extracellular enzymes are the proximate agents of leaf litter decay, shifts in microbial enzyme allocation may influence overall rates of litter mass loss. To test these hypotheses, I measured fungal growth and the activities of acid phosphatase (AP), beta-glucosidase (BG), cellobiohydrolase (CB) and glycine aminopeptidase (GAP) on decaying leaf litter of five plant species over the course of a 394-day decomposition experiment. I used regression and correlation analyses to link interspecific variation in mass loss rates with enzyme activities and foliar nutrient content. Enzymes explained 35% of the variance in foliar decay rates across plant species, yet fungal abundance and enzyme activities were unrelated to foliar concentrations of N, P, K, or 9 other nutrients. Furthermore, relative activities of C-, N-, and P-acquiring enzymes did not vary across litter types despite wide variance in foliar C:N and C:P ratios. This weak relationship between litter stoichiometry and decomposition rates suggests that nutrients are not the primary control on microbial growth or enzyme allocation in this tropical forest. However, substantial interspecific differences in fungal abundance and enzyme activities imply that differences in litter composition strongly influence microbial communities and the ecosystem processes they mediate. © 2013 Elsevier Ltd.


Lin L.-C.,University of Texas at Austin
Journal of Immigrant and Minority Health | Year: 2014

Filipina nurses represent the majority of all internationally educated nurses recruited to work in the US. Although the hiring of Filipina nurses is not a new practice in US hospitals, very few studies have detailed how these nurses have adjusted to US nursing practices. This study examines how Filipina nurses transition into their role as nurses and adapt to nursing practice in the US. Following a qualitative approach, 31 Filipina nurses were interviewed. The interviews were audio-taped, transcribed verbatim, and analyzed using the method of constant comparison. Analysis revealed that transitioning from Philippine to US nursing practice formed the basis of the social adaptation process, which occurred in three stages: pre-arrival, early adaptation, and late adaptation. Participant-recommended strategies to address adaptation needs experienced at each of these stages are shared. All participants experienced challenges while adjusting to the US healthcare system and cultural landscape. © 2013 Springer Science+Business Media New York.


Allen D.T.,University of Texas at Austin
Annual Review of Chemical and Biomolecular Engineering | Year: 2014

The US Energy Information Administration projects that hydraulic fracturing of shale formations will become a dominant source of domestic natural gas supply over the next several decades, transforming the energy landscape in the United States. However, the environmental impacts associated with fracking for shale gas have made it controversial. This review examines emissions and impacts of air pollutants associated with shale gas production and use. Emissions and impacts of greenhouse gases, photochemically active air pollutants, and toxic air pollutants are described. In addition to the direct atmospheric impacts of expanded natural gas production, indirect effects are also described. Widespread availability of shale gas can drive down natural gas prices, which, in turn, can impact the use patterns for natural gas. Natural gas production and use in electricity generation are used as a case study for examining these indirect consequences of expanded natural gas availability. © 2014 by Annual Reviews.


Goodenough J.B.,University of Texas at Austin
Chemistry of Materials | Year: 2014

Engineering transition-metal oxides depends on understanding a few general concepts. Three of these are discussed: (1) orbital mixing and the roles of cation-d/O-2p covalent bonding as distinct from on-site cation-orbital hybridization; (2) cooperativity in ordering (a) localized orbitals to remove an orbital degeneracy, (b) ferroic atomic displacements, and (c) bond lengths in a charge-density wave; and (3) cation-site expansion at the crossover from itinerant to localized electron behavior. The latter can stabilize a first-order transition to a ferromagnetic metallic phase on the approach to crossover from the itinerant-electron side or, in a single-valent compound, an intermediate charge-density-wave phase on the approach to crossover from either the localized- or itinerant-electron side. In a mixed-valent compound, a two-phase segregation at a first-order crossover may be static or mobile, and a mobile second phase may become ordered at low temperature to stabilize high-temperature superconductivity. © 2013 American Chemical Society.


Stanton J.F.,University of Texas at Austin
Journal of Chemical Physics | Year: 2010

The vibronic coupling model of Köppel, Domcke, and Cederbaum in one dimension is introduced as a means to estimate the effects of electronic nonadiabaticity on the vibrational energy levels of molecules that exhibit vibronic coupling. For the BNB molecule, the nonadiabatic contribution to the nominal fundamental vibrational energy of the antisymmetric stretching mode is approximately -80 cm⁻¹. The surprisingly large effect for this mode, which corresponds to an adiabatic potential that is essentially flat near the minimum due to the vibronic interaction, is contrasted with another model system that also exhibits a flat potential (precisely, a vanishing quadratic force constant) but has a significantly larger gap between interacting electronic states. For the latter case, the nonadiabatic contribution to the level energies is about two orders of magnitude smaller even though the effect on the potential is qualitatively identical. A simple analysis shows that significant nonadiabatic corrections to energy levels should occur only when the affected vibrational frequency is large enough to be of comparable magnitude to the energy gap involved in the coupling. The results provide evidence that nonadiabatic corrections should be given as much weight as issues such as high-level electron correlation, relativistic corrections, etc., in quantum chemical calculations of energy levels for radicals with close-lying and strongly coupled electronic states even in cases where conical intersections are not obviously involved. The same can be said for high-accuracy thermochemical studies, as the zero-point vibrational energy of the BNB example contains a nonadiabatic contribution of approximately -70 cm⁻¹ (-0.9 kJ mol⁻¹). © 2010 American Institute of Physics.
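For reference, a standard one-dimensional, two-state linear vibronic coupling Hamiltonian of the kind referred to here can be written (a generic textbook form, not necessarily the paper's exact parameterization; Q and P are the dimensionless normal coordinate and momentum, ω the harmonic frequency, Δ the diabatic energy gap, and λ the linear coupling constant):

```latex
H(Q) = \frac{\omega}{2}\left(P^{2}+Q^{2}\right)\mathbf{1}
+ \begin{pmatrix} -\Delta/2 & \lambda Q \\ \lambda Q & \Delta/2 \end{pmatrix},
\qquad
V_{\mp}(Q) = \frac{\omega}{2}\,Q^{2} \mp \sqrt{\Delta^{2}/4 + \lambda^{2}Q^{2}}
```

Expanding the lower adiabatic surface V₋ near Q = 0 gives an effective quadratic force constant ω - 2λ²/Δ, which vanishes when λ² = ωΔ/2; this is the flat-potential regime discussed above, reached either with strong coupling and a small gap (as for BNB) or with weaker coupling and a large gap.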


Fenn T.D.,Stanford University | Schnieders M.J.,University of Texas at Austin
Acta Crystallographica Section D: Biological Crystallography | Year: 2011

In the past, weighting between the sum of chemical and data-based targets in macromolecular crystallographic refinement was based on comparing the gradients or Hessian diagonal terms of the two potential functions. Here, limitations of this scheme are demonstrated, especially in the context of a maximum-likelihood target that is inherently weighted by the model and data errors. In fact, the congruence between the maximum-likelihood target and a chemical potential based on polarizable atomic multipole electrostatics evaluated with Ewald summation has opened the door to a transferable static weight. An optimal static weight is derived from first principles and is demonstrated to be transferable across a broad range of data resolutions in the context of a recent implementation of X-ray crystallographic refinement using the polarizable AMOEBA force field, and it is shown that the resulting models are balanced with respect to optimizing both R_free and MolProbity scores. Conversely, the classical automatic weighting scheme is shown to lead to underfitting or overfitting of the data and poor model geometry. The benefits of this approach for low-resolution diffraction data, where the need for prior chemical information is of particular importance, are also highlighted. It is demonstrated that this method is transferable between low- and high-resolution maximum-likelihood-based crystallographic refinement, which proves for the first time that resolution-dependent parameterization of either the weight or the chemical potential is unnecessary. © 2011 International Union of Crystallography. Printed in Singapore - all rights reserved.


Legare C.H.,University of Texas at Austin
Child Development Perspectives | Year: 2014

Both explanation and exploration can foster causal learning, yet the mechanisms underlying these effects are largely unknown, especially in early childhood. In this article, I provide an overview of research on the relation among explanation, exploration, and the development of scientific reasoning. I propose that explaining and exploring operate in tandem as hypothesis-generating and hypothesis-testing mechanisms. I review evidence that supports this claim by demonstrating that inconsistency with prior knowledge selectively motivates children to construct explanations, guides discovery-oriented behavior, and drives the early-developing capacity to reason scientifically. I conclude with a discussion of the educational applications of research on the development of children's scientific reasoning in informal learning settings. © 2014 The Society for Research in Child Development. Source


Perry E.L.,University of Texas at Austin
Journal for Specialists in Pediatric Nursing | Year: 2014

Purpose: To determine the state of the science of health literacy instruments and interventions for use in adolescents. Conclusions: There is a scarcity of literature regarding health literacy instruments and interventions for adolescents. Ten studies were included in the final table for analysis. One instrument shows high validity and reliability in English. Most interventions are exploratory, with two studies showing significant outcomes. Practice Implications: More research is needed to establish the appropriate evaluation of and effective interventions for health literacy in adolescents. © 2014, Wiley Periodicals, Inc. Source


Martell S.L.,Australian Astronomical Observatory | Shetrone M.D.,University of Texas at Austin
Monthly Notices of the Royal Astronomical Society | Year: 2013

We present a search for post-main-sequence field stars in the Galaxy with atypically large lithium abundances. Using moderate-resolution spectra taken as part of the Sloan Digital Sky Survey, along with high-resolution followup spectroscopy from the Hobby-Eberly Telescope, we identify 23 post-turnoff stars with log ε(Li) greater than 1.95, including 14 with log ε(Li) ≥ 2.3 and 8 with log ε(Li) ≥ 3.0, well above the low level expected for evolved stars. Comparison with theoretical isochrones indicates that some of our Li-rich stars are affiliated with the upper red giant branch, the asymptotic giant branch and the red clump rather than the RGB bump, which is a challenge to existing models of Li production in evolved stars. © 2013 The Authors. Published by Oxford University Press on behalf of the Royal Astronomical Society. Source
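For readers outside stellar spectroscopy, log ε(Li) is the conventional logarithmic abundance scale, defined relative to hydrogen:

```latex
\log \varepsilon(\mathrm{Li}) = \log_{10}\!\left(\frac{N_{\mathrm{Li}}}{N_{\mathrm{H}}}\right) + 12
```

On this scale each unit is a factor of ten in number density, so the thresholds 1.95, 2.3, and 3.0 quoted above span more than an order of magnitude in lithium abundance.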


Stacy A.,NASA | Bromm V.,University of Texas at Austin
Monthly Notices of the Royal Astronomical Society | Year: 2013

We perform a cosmological simulation in order to model the growth and evolution of Population III (Pop III) stellar systems in a range of host minihalo environments. A Pop III multiple system forms in each of the 10 minihaloes, and the overall mass function is top-heavy compared to the currently observed initial mass function in the Milky Way. Using a sink particle to represent each growing protostar, we examine the binary characteristics of the multiple systems, resolving orbits on scales as small as 20 au. We find a binary fraction of ~35 per cent, with semi-major axes as large as 3000 au. The distribution of orbital periods is slightly peaked at ≲ 900 yr, while the distribution of mass ratios is relatively flat. Of all sink particles formed within the 10 minihaloes, ~50 per cent are lost to mergers with larger sinks, and ~50 per cent of the remaining sinks are ejected from their star-forming discs. The large binary fraction may have important implications for Pop III evolution and nucleosynthesis, as well as the final fate of the first stars. © 2013 The Authors Published by Oxford University Press on behalf of the Royal Astronomical Society. Source


Kwon S.J.,Konkuk University | Bard A.J.,University of Texas at Austin
Journal of the American Chemical Society | Year: 2012

This study demonstrates a highly sensitive sensing scheme for the detection of low concentrations of DNA, in principle down to the single biomolecule level. The previously developed technique of electrochemical current amplification for detection of single nanoparticle (NP) collisions at an ultramicroelectrode (UME) has been employed to determine DNA. The Pt NP/Au UME/hydrazine oxidation reaction was employed, and individual NP collision events were monitored. The Pt NP was modified with a 20-base oligonucleotide with a C6 spacer thiol (detection probe), and the Au UME was modified with a 16-base oligonucleotide with a C6 spacer thiol (capture probe). The presence of a target oligonucleotide (31 base) that hybridized with both capture and detection probes brought a Pt NP to the electrode surface, where the ensuing electrochemical oxidation of hydrazine produced a current response. © 2012 American Chemical Society. Source


Jha S.,University of Texas at Austin | Kremen C.,University of California at Berkeley
Molecular Ecology | Year: 2013

Potential declines in native pollinator communities and increased reliance on pollinator-dependent crops have raised concerns about native pollinator conservation and dispersal across human-altered landscapes. Bumble bees are one of the most effective native pollinators and are often the first to be extirpated in human-altered habitats, yet little is known about how bumble bees move across fine spatial scales and what landscapes promote or limit their gene flow. In this study, we examine regional genetic differentiation and fine-scale relatedness patterns of the yellow-faced bumble bee, Bombus vosnesenskii, to investigate how current and historic habitat composition impact gene flow. We conducted our study across a landscape mosaic of natural, agricultural and urban/suburban habitats, and we show that B. vosnesenskii exhibits low but significant levels of differentiation across the study system (FST = 0.019, Dest = 0.049). Most importantly, we reveal significant relationships between pairwise FST and resistance models created from contemporary land use maps. Specifically, B. vosnesenskii gene flow is most limited by commercial, industrial and transportation-related impervious cover. Finally, our fine-scale analysis reveals significant but declining relatedness between individuals at the 1-9 km spatial scale, most likely due to local queen dispersal. Overall, our results indicate that B. vosnesenskii exhibits considerable local dispersal and that regional gene flow is significantly limited by impervious cover associated with urbanization. © 2013 Blackwell Publishing Ltd. Source
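The FST value quoted above (0.019) measures between-population genetic differentiation. A minimal sketch of Wright's FST for a single biallelic locus in two populations, with invented allele frequencies (not data from the study):

```python
# Wright's FST = (HT - HS) / HT for one biallelic locus, two populations.
# Allele frequencies p1, p2 are illustrative, not from the B. vosnesenskii data.

def fst_two_pops(p1, p2):
    """Return FST for two populations with allele frequencies p1 and p2."""
    p_bar = (p1 + p2) / 2
    h_t = 2 * p_bar * (1 - p_bar)              # expected heterozygosity, pooled
    h_s = (2*p1*(1-p1) + 2*p2*(1-p2)) / 2      # mean within-population heterozygosity
    return (h_t - h_s) / h_t

print(round(fst_two_pops(0.55, 0.45), 4))  # weak differentiation, like the study's low FST
print(round(fst_two_pops(0.90, 0.10), 4))  # strong differentiation, for contrast
```

Values near zero, as in the study, indicate that most genetic variation is shared between populations rather than partitioned among them.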


Milliken K.,University of Texas at Austin
Journal of Sedimentary Research | Year: 2014

A tripartite compositional classification is proposed for sediments and sedimentary rocks that have grain assemblages with greater than 50 percent by weight or volume of particles smaller than 62.5 μm (4 Phi). Tarl (terrigenous-argillaceous) contains a grain assemblage dominated by more than 75 percent of particles of extrabasinal derivation, including grains derived from continental weathering and also volcanogenic debris. Carl (calcareous-argillaceous) contains less than 75 percent of particles of extrabasinal derivation and among its intrabasinal grains contains a preponderance of biogenic carbonate particles including carbonate aggregates. Sarl (siliceous-argillaceous) contains less than 75 percent of particles of extrabasinal derivation and contains a preponderance of biogenic siliceous particles over carbonate grains. These three classes of fine-grained particulate sediments and rocks effectively separate materials that have distinct depositional settings and systematic contrasts in organic-matter content and minor grain types. In the subsurface the grain assemblages that define these classes follow contrasting and predictable diagenetic pathways that have significant implications for the evolution of bulk rock properties, and thus assigning a fine-grained rock to one of these classes is an important first step for predicting its economic and engineering qualities. For purposes of description these three class names can be joined to modifier terms denoting rock texture, more precise compositional divisions, specific grain types of notable importance, and diagenetic features. Copyright © 2014, SEPM (Society for Sedimentary Geology). Source
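The three-way decision rule described above can be sketched as a toy classifier; the function name and percentage inputs are illustrative conveniences, not part of the published scheme:

```python
# Toy sketch of the tarl/carl/sarl decision rule described above:
# >75% extrabasinal grains -> tarl; otherwise the dominant intrabasinal
# biogenic component (carbonate vs. silica) decides carl vs. sarl.

def classify_fine_grained(extrabasinal_pct, biogenic_carbonate_pct, biogenic_silica_pct):
    """Return 'tarl', 'carl', or 'sarl' for a fine-grained grain assemblage."""
    if extrabasinal_pct > 75.0:
        return "tarl"   # terrigenous-argillaceous
    if biogenic_carbonate_pct >= biogenic_silica_pct:
        return "carl"   # calcareous-argillaceous
    return "sarl"       # siliceous-argillaceous

print(classify_fine_grained(80, 5, 5))    # tarl
print(classify_fine_grained(40, 35, 10))  # carl
print(classify_fine_grained(30, 10, 40))  # sarl
```

As the abstract notes, the value of the scheme is that each class maps onto distinct depositional settings and diagenetic pathways, so this first-pass label carries predictive weight.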


Zhang Y.,University of Texas at Austin
Journal of the American Society for Information Science and Technology | Year: 2014

Searches for specific factual health information constitute a significant part of consumer health information requests, but little is known about how users search for such information. This study attempts to fill this gap by observing users' behavior while using MedlinePlus to search for specific health information. Nineteen students participated in the study, and each performed 12 specific tasks. During the search process, they submitted short queries or complete questions, and they examined fewer than one result per search on average. Participants rarely reformulated queries; when they did, they tended to make a query more specific or more general, or iterate in different ways. Participants also browsed, primarily relying on the alphabetical list and the anatomical classification, to navigate to specific health topics. Participants overall had a positive experience with MedlinePlus, and the experience was significantly correlated with task difficulty and participants' spatial abilities. The results suggest that, to better support specific item search in the health domain, systems could provide a more "natural" interface to encourage users to ask questions; effective conceptual hierarchies could be implemented to help users reformulate queries; and the search results page should be reconceptualized as a place for accessing answers rather than documents. Moreover, multiple schemas should be provided to help users navigate to a health topic. The results also suggest that users' experience with information systems in general and health-related systems in particular should be evaluated in relation to contextual factors, such as task features and individual differences. © 2013 ASIS&T. Source


Allen D.T.,University of Texas at Austin
Current Opinion in Chemical Engineering | Year: 2014

Methane emissions from the natural gas supply chain are a key factor in determining the greenhouse gas footprint of natural gas production and use. Recent estimates of these emissions have varied widely, because of the large population of sources, because of different measurement and estimation approaches, and because of extreme values of emission rates from individual sources that are much larger than population average values of emission rates from sources in the same category (a 'fat-tail' distribution). Reconciling differences between ambient methane concentration measurements (top-down methods) and direct measurement of emissions from individual sources (bottom-up methods) is critical to understanding methane emissions from the natural gas supply chain. A combination of top-down and bottom-up approaches is recommended. © 2014 Published by Elsevier Ltd. Source


Mirowsky J.,University of Texas at Austin
The journals of gerontology. Series B, Psychological sciences and social sciences | Year: 2011

Upward trends in IQ, education, and mental work suggest that cognitive function among seniors should be rising strongly across cohorts. There is little sign of such improvement in recent decades, and some analyses find poorer function in the newer cohorts. This essay explores possible explanations of the anomaly. Major long-term trends that might increase cognitive impairment are reviewed, and their implications are considered. Physical activity is declining, food is increasingly manufactured, body fat is increasing, diabetes and metabolic syndrome are on the rise, the number of prescription drugs per person is increasing, and the proportion of the population either old or obese is growing. Technological and economic development may lower the cognitive function needed for survival. They also lower physical activity in daily life. Sedentary work, transportation, and leisure undermine the aerobic and metabolic fitness required for the brain to perform well. Some prescription drugs impair cognitive function, and others do so when taken for many years or in combination with others. The growing fraction of the population that is either old or obese may further lower physical activity norms and requirements and substitute medical intervention for health, accelerating a trend toward cognitive impairment. Source


Beal C.C.,University of Texas at Austin
Journal of Neuroscience Nursing | Year: 2010

Ischemic stroke accounts for 87% of the 780,000 strokes occurring annually in the United States and is a leading cause of death and functional limitations worldwide. The prompt recognition of stroke symptoms and timely arrival at the emergency room are important to stroke outcomes. Recent literature was suggestive that women may have different stroke symptoms compared with men. If women have nontraditional symptoms at stroke onset, this may contribute to delay in diagnosis and treatment. The purpose of this manuscript was to review the current literature regarding gender differences in stroke symptoms. Ten articles met the inclusion criteria for the review. Overall, the extant research was not indicative of robust gender differences in the classic symptoms of stroke or a greater frequency of nontraditional symptoms in women. One study found that women were significantly more likely than men to report nonspecific "somatic" symptoms, and in another study women were significantly more likely to have a change in mental status compared with men. Implications for nursing practice and suggestions for future research are discussed. Copyright © 2010 American Association of Neuroscience Nurses. Source


There has been considerable effort to understand the inherent time scale for conformational reconfiguration of denatured proteins. Even a simple homopolymer, however, exhibits a spectrum of fluctuation time scales rather than a unique characteristic time. Consequently, different time scales may be probed by different measurements. Motivated by recent single-molecule fluorescence resonance energy transfer experiments, here I have studied theoretically how the characteristic time scale exhibited by fluctuations of the distance between two residues within an unfolded polypeptide depends on the choice of the residue pair. This time scale was generally found to become shorter as the sequence separation between the residues is reduced. The maximum reconfiguration time, however, corresponds not to the residues being located at the ends of the chain but rather to each residue residing a short length apart from the ends. Comparison of these findings with recent single-molecule measurements suggests that the latter may bear signatures of transient residual structure. © 2010 American Institute of Physics. Source


Brodbelt J.S.,University of Texas at Austin
Chemical Society Reviews | Year: 2014

Photodissociation mass spectrometry combines the ability to activate and fragment ions using photons with the sensitive detection of the resulting product ions by mass spectrometry. This combination affords a versatile tool for characterization of biological molecules. The scope and breadth of photodissociation mass spectrometry have increased substantially over the past decade as new research groups have entered the field and developed a number of innovative applications that illustrate the ability of photodissociation to produce rich fragmentation patterns, to cleave bonds selectively, and to target specific molecules based on incorporation of chromophores. This review focuses on many of the key developments in photodissociation mass spectrometry over the past decade with a particular emphasis on its applications to biological molecules. This journal is © the Partner Organisations 2014. Source


Goodenough J.B.,University of Texas at Austin
Energy and Environmental Science | Year: 2014

The storage of electrical energy in a rechargeable battery is subject to the limitations of reversible chemical reactions in an electrochemical cell. The limiting constraints on the design of a rechargeable battery also depend on the application of the battery. Of particular interest for a sustainable modern society are (1) powering electric vehicles that can compete with cars powered by the internal combustion engine and (2) stationary storage of electrical energy from renewable energy sources that can compete with energy stored in fossil fuels. Existing design strategies for the rechargeable battery have enabled the wireless revolution and the plug-in hybrid electric car, but they show little promise of providing safe, adequate capacity with an acceptable shelf and cycle life to compete in cost and convenience with the chemical energy stored in fossil fuels. Electric vehicles that are charged overnight (plug-in vehicles) offer a distributed energy storage, but larger battery packs are needed for stationary storage of electrical energy generated from wind or solar farms and for stand-by power. This paper outlines the limitations of existing commercial strategies and some developing strategies that may overcome these limitations. © 2014 The Royal Society of Chemistry. Source


Cisneros G.A.,Wayne State University | Karttunen M.,University of Waterloo | Ren P.,University of Texas at Austin | Sagui C.,North Carolina State University
Chemical Reviews | Year: 2014

Electrostatic interactions are crucial for biomolecular simulations, as their calculation is the most time-consuming when computing the total classical forces, and their representation has profound consequences for the accuracy of classical force fields. Long-range electrostatic interactions are crucial for the stability of proteins, nucleic acids, glycomolecules, lipids, and other macromolecules, and their interactions with solvent, ions, and other molecules. Traditionally, electrostatic interactions have been modeled using a set of fixed atom-centered point charges or partial charges. The most popular methods for extracting charges from molecular wave functions are based on a fitting of the atomic charges to the molecular electrostatic potential (MEP) computed with ab initio or semiempirical methods outside the van der Waals surface. Computationally, the electrostatic potential for a system with explicit solvent is calculated by either solving Poisson's equation or explicitly adding the individual charge potentials. Source
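The last sentence above names two routes to the electrostatic potential; the direct route, explicitly summing the individual point-charge potentials, can be sketched in a few lines (charges, geometry, and atomic units are invented for illustration):

```python
import math

# Direct Coulomb sum: the electrostatic potential at a point due to fixed
# atom-centred point charges, V(r) = sum_i q_i / |r - r_i| (atomic units).
# This is the "explicitly adding the individual charge potentials" route;
# the alternative is solving Poisson's equation on a grid.

def potential_at(point, charges):
    """Return the potential at `point` from a list of (q, (x, y, z)) charges."""
    v = 0.0
    for q, pos in charges:
        v += q / math.dist(point, pos)
    return v

# A +1/-1 pair (a crude dipole), probed on its axis
charges = [(+1.0, (0.0, 0.0, 0.0)), (-1.0, (1.0, 0.0, 0.0))]
print(potential_at((3.0, 0.0, 0.0), charges))  # 1/3 - 1/2, slightly negative
```

Fixed point charges like these are exactly the approximation the review contrasts with polarizable multipole models.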


Baldea M.,University of Texas at Austin | Harjunkoski I.,ABB
Computers and Chemical Engineering | Year: 2014

Production scheduling and process control have the common aim of identifying (economically) optimal operational decisions, and it is reasonable to expect that significant economic benefits can be derived from their integration. Yet, the scheduling and control fields have evolved quite independently of each other, and efforts aimed at integrating these two decision-making activities are quite recent. In this paper, we review progress made thus far in this direction. We identify key elements of control and scheduling, and carry out a systematic investigation of their use as building blocks for the formulation and solution of the integrated scheduling/control problem. On the basis of our review, we define several necessary directions for future development as well as a complement of promising applications. © 2014 Elsevier Ltd. Source


Willets K.A.,University of Texas at Austin
Chemical Society Reviews | Year: 2014

Surface-enhanced Raman scattering (SERS) hot spots occur when molecules are positioned near regions of strongly enhanced electromagnetic fields on the surface of nano-featured plasmonic substrates. The emission from the molecule is coupled out into the far field by the plasmon modes of the substrate, but due to the diffraction limit of light, the properties of this coupled molecule-plasmon emitter cannot be resolved using typical far-field optical microscopy techniques. However, by fitting the emission to a model function such as a 2-dimensional Gaussian, the relative position of the emitter can be determined with precision better than 5 nm in a process known as super-resolution imaging. This tutorial review describes the basic principles of super-resolution imaging of SERS hot spots using single molecules to probe local electromagnetic field enhancements. New advances using dipole-based fitting functions and spectrally- and spatially-resolved measurements are described, providing new insight into SERS hot spots and the important roles of both the molecule and the substrate in defining their properties. © 2014 the Partner Organisations. Source
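The localization step described above, fitting an emitter's diffraction-limited image to a 2-dimensional Gaussian to recover a sub-pixel position, can be sketched as follows. The synthetic image, pixel grid, and noise level are invented for illustration; real SERS data would replace the simulated frame:

```python
import numpy as np
from scipy.optimize import curve_fit

# Super-resolution localization sketch: fit a 2-D Gaussian to a
# diffraction-limited spot and read off the centre with sub-pixel precision.

def gauss2d(coords, amp, x0, y0, sigma, offset):
    x, y = coords
    return (amp * np.exp(-((x - x0)**2 + (y - y0)**2) / (2 * sigma**2)) + offset).ravel()

# Synthetic 21x21-pixel frame of an emitter at (10.3, 9.7) with camera noise
x, y = np.meshgrid(np.arange(21), np.arange(21))
rng = np.random.default_rng(0)
image = gauss2d((x, y), 100.0, 10.3, 9.7, 2.0, 5.0).reshape(21, 21)
image += rng.normal(0.0, 1.0, image.shape)

popt, _ = curve_fit(gauss2d, (x, y), image.ravel(),
                    p0=(image.max(), 10.0, 10.0, 2.0, 0.0))
print(popt[1], popt[2])  # fitted centre: recovered to well under a pixel
```

With a good signal-to-noise ratio the fitted centre is recovered far more precisely than the width of the spot itself, which is the essence of the sub-5-nm precision quoted in the review.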


Echeverria I.,University of Maryland University College | Makarov D.E.,University of Texas at Austin | Papoian G.A.,University of Maryland University College
Journal of the American Chemical Society | Year: 2014

Protein chains undergo conformational diffusion during folding and dynamics, experiencing both thermal kicks and viscous drag. Recent experiments have shown that the corresponding friction can be separated into wet friction, which is determined by the solvent viscosity, and dry friction, where frictional effects arise due to the interactions within the protein chain. Despite important advances, the molecular origins underlying dry friction in proteins have remained unclear. To address this problem, we studied the dynamics of the unfolded cold-shock protein at different solvent viscosities and denaturant concentrations. Using extensive all-atom molecular dynamics simulations we estimated the internal friction time scales and found them to agree well with the corresponding experimental measurements (Soranno et al. Proc. Natl. Acad. Sci. U.S.A. 2012, 109, 17800-17806). Analysis of the reconfiguration dynamics of the unfolded chain further revealed that hops in the dihedral space provide the dominant mechanism of internal friction. Furthermore, the increased number of concerted dihedral moves at physiological conditions suggests that, in such conditions, the concerted motions result in higher frictional forces. These findings have important implications for understanding the folding kinetics of proteins as well as the dynamics of intrinsically disordered proteins. © 2014 American Chemical Society. Source


Hatlestad G.J.,University of Texas at Austin
Nature genetics | Year: 2015

Nearly all flowering plants produce red/violet anthocyanin pigments. Caryophyllales is the only order containing families that replace anthocyanins with unrelated red and yellow betalain pigments. Close biological correlation of pigmentation patterns suggested that betalains might be regulated by a conserved anthocyanin-regulating transcription factor complex consisting of a MYB, a bHLH and a WD repeat-containing protein (the MBW complex). Here we show that a previously uncharacterized anthocyanin MYB-like protein, Beta vulgaris MYB1 (BvMYB1), regulates the betalain pathway in beets. Silencing BvMYB1 downregulates betalain biosynthetic genes and pigmentation, and overexpressing BvMYB1 upregulates them. However, unlike anthocyanin MYBs, BvMYB1 will not interact with bHLH members of heterologous anthocyanin MBW complexes because of identified nonconserved residues. BvMYB1 resides at the historic beet pigment-patterning locus, Y, required for red-fleshed beets. We show that Y and y express different levels of BvMYB1 transcripts. The co-option of a transcription factor regulating anthocyanin biosynthesis would be an important evolutionary event allowing betalains to largely functionally replace anthocyanins. Source


Hausmann N.T.,University of California at Berkeley | Hawkes C.V.,University of Texas at Austin
Ecology | Year: 2010

The causes of local diversity and composition remain a central question in community ecology. Numerous studies have attempted to understand community assembly, both within and across trophic levels. However, little is known about how community assembly aboveground influences soil microbial communities belowground. We hypothesized that plant establishment order can affect the community of arbuscular mycorrhizal fungi (AMF) in roots, with the strength of this effect dependent on both host plant identity and neighboring plant identity. Such priority effects of plants on AMF may act through host-specific filters of the initial species pool that limit the available pool for plants that established second. In a greenhouse experiment with four plant hosts, we found that the strength of the priority effect on AMF communities reflected both host plant characteristics and interactions between host and neighbor plant species, consistent with differential host specificity among plants. These patterns were independent of plant biomass and root colonization. Functional studies of AMF associated with a wide array of host plants will be required to further understand this potential driver of community dynamics. © 2010 by the Ecological Society of America. Source


Benner A.D.,University of Texas at Austin | Graham S.,University of California at Los Angeles
Child Development | Year: 2011

Changes in perceptions of discrimination were examined with 668 Latino students (62% Mexican American; 56% female; Mage=14.6years). Adolescents' reports of discrimination increased across the first 2years of high school. Perceptions of discrimination were higher for boys and for primary language brokers, as well as for adolescents in schools with more ethnically diverse student bodies but a less diverse teaching staff. Path analysis revealed that higher levels of discrimination and increases in discrimination across time influenced Latino adolescents' academic outcomes (i.e., grades, absences) indirectly via their influences on perceptions of school climate. Findings highlight previously understudied individual and school contextual factors that shape experiences of discrimination and the mechanisms by which discrimination indirectly influences Latino adolescents' outcomes. © 2011 The Authors. Child Development © 2011 Society for Research in Child Development, Inc.. Source


Heinrich C.J.,University of Texas at Austin
Future of Children | Year: 2014

Since modern welfare reform began in the 1980s, we have seen low-income parents leave the welfare rolls and join the workforce in large numbers. At the same time, the Earned Income Tax Credit has offered a monetary incentive for low-income parents to work. Thus, unlike some of the other two-generation mechanisms discussed in this issue of Future of Children, policies that encourage low-income parents to work are both widespread and well-entrenched in the United States. But parents' (and especially mothers') work, writes Carolyn Heinrich, is not unambiguously beneficial for their children. On the one hand, working parents can be positive role models for their children, and, of course, the income they earn can improve their children's lives in many ways. On the other hand, work can impair the developing bond between parents and young children, especially when the parents work long hours or evening and night shifts. The stress that parents bring home from their jobs can detract from their parenting skills, undermine the atmosphere in the home, and thereby introduce stress into children's lives. Unfortunately, it is low-income parents who are most likely to work in stressful, low-quality jobs that feature low pay, little autonomy, inflexible hours, and few or no benefits. And low-income children whose parents are working are more likely to be placed in inadequate child care or to go unsupervised. Two-generation approaches, Heinrich writes, could maximize the benefits and minimize the detriments of parents' work by expanding workplace flexibility, and especially by mandating enough paid leave so that mothers can breastfeed and form close bonds with their infants; by helping parents place their children in high-quality child care; and by helping low-income parents train for, find, and keep a well-paying job with benefits. Source


Hines K.E.,University of Texas at Austin
Journal of General Physiology | Year: 2013

Single molecule photobleaching is a powerful tool for determining the stoichiometry of protein complexes. By attaching fluorophores to proteins of interest, the number of associated subunits in a complex can be deduced by imaging single molecules and counting fluorophore photobleaching steps. Because some bleaching steps might be unobserved, the ensemble of steps will be binomially distributed. In this work, it is shown that inferring the true composition of a complex from such data is nontrivial because binomially distributed observations present an ill-posed inference problem. That is, a unique and optimal estimate of the relevant parameters cannot be extracted from the observations. Because of this, a method has not been firmly established to quantify confidence when using this technique. This paper presents a general inference model for interpreting such data and provides methods for accurately estimating parameter confidence. The formalization and methods presented here provide a rigorous analytical basis for this pervasive experimental tool. © 2013 Hines. Source
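A quick illustration of the ill-posedness the paper addresses: because unobserved steps make the observed counts binomial, different combinations of subunit number and detection probability can produce overlapping data. The numbers below are invented; here two very different stoichiometries even share the same mean step count:

```python
import math

# P(observe k bleaching steps | n fluorophores, each detected with prob p)
# is Binomial(n, p); missed steps are what make the inference ill-posed.

def step_count_pmf(n, p):
    """Return [P(k steps) for k = 0..n] for n fluorophores, detection prob p."""
    return [math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

pmf_tetramer = step_count_pmf(4, 0.5)   # 4 subunits, 50% detection
pmf_octamer = step_count_pmf(8, 0.25)   # 8 subunits, 25% detection

print(pmf_tetramer)
print(pmf_octamer)
# Both distributions have mean 2 observed steps (n*p = 2 in each case),
# so the average alone cannot distinguish the two stoichiometries.
```

Resolving such degeneracies is exactly why the paper argues for an explicit inference model with quantified parameter confidence rather than simple step counting.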


Loehlin J.C.,University of Texas at Austin
Behavior Genetics | Year: 2012

An earlier twin analysis by Loehlin and Nichols was repeated with a considerably larger twin sample (2,600 adult Australian twin pairs). Like the previous analysis, it was based on item clusters, although from different questionnaires. Unlike the previous study, it found consistent differences in MZ-DZ discrepancy in correlation among the clusters. A more elaborate analysis confirmed this for the clusters, but not for broad Extraversion and Neuroticism composites based upon them. © Springer Science+Business Media, LLC 2011. Source


Manthiram A.,University of Texas at Austin
Electrochimica Acta | Year: 2013

Investigation of Cu2S as cathode in lithium cells with ether-based electrolytes shows discharge voltage plateaus at 1.7 and 2.25 V in the first cycle which slowly evolve, respectively, to 1.74 and 1.94 V in the following cycles. This transition process has been analyzed by ex situ X-ray diffraction, scanning electron microscopy (SEM), and electrochemical impedance spectroscopy (EIS). The discharged products are identified as Li2S and elemental copper, and the charged product is Cu1.96S, all of which are crystalline during cycling. SEM reveals a pulverization process of particles during the phase transition and the EIS shows a significant reduction in charge transfer resistance after charges. The Cu2S electrodes exhibit a relatively stable discharge capacity of around 250 mAh g⁻¹ at rates of C/10, C/5, and C/2 after a capacity decline in the first cycle. Coulombic efficiency starts to decrease in early cycles as the rate increases. The poor cyclability and low Coulombic efficiency at high rates like 1 C rate indicate the transition process is limited at high rates due to the large particle size. © 2013 Elsevier Ltd. All rights reserved. Source


Hopkins R.,University of Texas at Austin | Rausher M.D.,Duke University
American Naturalist | Year: 2014

Reinforcement is the process by which increased reproductive isolation between incipient species evolves due to selection against maladaptive hybrids or costly hybrid mating. Reinforcement is predicted to create a pattern of greater prezygotic reproductive isolation in regions where the two species co-occur, sympatry, than in allopatry. Although most research on reinforcement focuses on understanding the evolutionary forces acting in sympatry, here we consider what prevents the alleles conferring greater reproductive isolation from spreading into allopatry. We investigate flower color divergence in the wildflower Phlox drummondii, which is caused by reinforcement in the regions sympatric with its congener Phlox cuspidata. Specifically, we performed common garden field experiments and pollinator observations to estimate selection acting on flower color variation in allopatry. We combine our estimates of maternal and paternal fitness using simulations and predict how flower color alleles migrating from sympatry will evolve in allopatry. Our results suggest that strong pollinator preference for the ancestral flower color in allopatry can maintain divergence between allopatric and sympatric populations. © 2014 by The University of Chicago. Source


Umberson D.,University of Texas at Austin
Journal of health and social behavior | Year: 2010

Social relationships--both quantity and quality--affect mental health, health behavior, physical health, and mortality risk. Sociologists have played a central role in establishing the link between social relationships and health outcomes, identifying explanations for this link, and discovering social variation (e.g., by gender and race) at the population level. Studies show that social relationships have short- and long-term effects on health, for better and for worse, and that these effects emerge in childhood and cascade throughout life to foster cumulative advantage or disadvantage in health. This article describes key research themes in the study of social relationships and health, and it highlights policy implications suggested by this research. Source


Matz M.V.,University of Texas at Austin
PloS one | Year: 2013

Model-based analysis of data from quantitative reverse-transcription PCR (qRT-PCR) is potentially more powerful and versatile than traditional methods. Yet existing model-based approaches cannot properly deal with the higher sampling variances associated with low-abundant targets, nor do they provide a natural way to incorporate assumptions about the stability of control genes directly into the model-fitting process. In our method, raw qPCR data are represented as molecule counts, and described using generalized linear mixed models under Poisson-lognormal error. A Markov Chain Monte Carlo (MCMC) algorithm is used to sample from the joint posterior distribution over all model parameters, thereby estimating the effects of all experimental factors on the expression of every gene. The Poisson-based model allows for the correct specification of the mean-variance relationship of the PCR amplification process, and can also glean information from instances of no amplification (zero counts). Our method is very flexible with respect to control genes: any prior knowledge about the expected degree of their stability can be directly incorporated into the model. Yet the method provides sensible answers without such assumptions, or even in the complete absence of control genes. We also present a natural Bayesian analogue of the "classic" analysis, which uses standard data pre-processing steps (logarithmic transformation and multi-gene normalization) but estimates all gene expression changes jointly within a single model. The new methods are considerably more flexible and powerful than the standard delta-delta Ct analysis based on pairwise t-tests. Our methodology expands the applicability of the relative-quantification analysis protocol all the way to the lowest-abundance targets, and provides a novel opportunity to analyze qRT-PCR data without making any assumptions concerning target stability. These procedures have been implemented as the MCMC.qpcr package in R. Source
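The Poisson-lognormal error structure described above explains why counts of low-abundance targets are overdispersed: a lognormal latent expression level is observed through Poisson counting noise, so the variance exceeds the mean. The following is an illustrative Python simulation of that mean-variance relationship, not the MCMC.qpcr implementation, and all parameter values are made up:

```python
import math
import random

def poisson_lognormal_counts(log_mean, sigma, n, seed=0):
    """Simulate qPCR-style molecule counts under Poisson-lognormal error:
    a lognormal latent rate (variation on the log scale) observed through
    Poisson counting noise."""
    rng = random.Random(seed)
    counts = []
    for _ in range(n):
        rate = math.exp(rng.gauss(log_mean, sigma))  # latent expression level
        # Poisson draw via Knuth's multiplication algorithm
        threshold, k, p = math.exp(-rate), 0, 1.0
        while True:
            p *= rng.random()
            if p <= threshold:
                break
            k += 1
        counts.append(k)
    return counts

low = poisson_lognormal_counts(math.log(2), 0.5, 1000)    # low-abundance target
high = poisson_lognormal_counts(math.log(200), 0.5, 1000)  # abundant target

mean_high = sum(high) / len(high)
var_high = sum((c - mean_high) ** 2 for c in high) / (len(high) - 1)
```

For a plain Poisson process the variance would equal the mean; here `var_high` is far larger, which is the overdispersion the model must accommodate.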


Sage W.M.,University of Texas at Austin
Health Affairs | Year: 2014

As hospital, physician, and health insurance markets consolidate and change in response to health care reform, some commentators have called for vigorous enforcement of the federal antitrust laws to prevent the acquisition and exercise of market power. In health care, however, stricter antitrust enforcement will benefit consumers only if it accounts for the competitive distortions caused by the sector's long history of government regulation. This article directs policy makers to a neglected dimension of health care competition that has been altered by regulation: the product. Competition may have failed to significantly lower costs, increase access, or improve quality in health care because we have been buying and selling the wrong things. Competition policy makers, meaning both antitrust enforcers and regulators, should force the health care industry to define and market products that can be assembled and warranted to consumers while keeping emerging sectors such as mHealth free from overregulation, wasteful subsidy, and appropriation by established insurer and provider interests. © 2014 Project HOPE-The People-to-People Health Foundation, Inc. Source


King C.W.,University of Texas at Austin
Environmental Research Letters | Year: 2010

In this letter I compare two measures of energy quality, energy return on energy invested (EROI) and energy intensity ratio (EIR) for the fossil fuel consumption and production of the United States. All other characteristics being equal, a fuel or energy system with a higher EROI or EIR is of better quality because more energy is provided to society. I define and calculate the EIR for oil, natural gas, coal, and electricity as measures of the energy intensity (units of energy divided by money) of the energy resource relative to the energy intensity of the overall economy. EIR measures based upon various unit prices for energy (e.g. $/Btu of a barrel of oil) as well as total expenditures on energy supplies (e.g. total dollars spent on petroleum) indicate net energy at different points in the supply chain of the overall energy system. The results indicate that EIR is an easily calculated and effective proxy for EROI for US oil, gas, coal, and electricity. The EIR correlates well with previous EROI calculations, but adds additional information on energy resource quality within the supply chain. Furthermore, the EIR and EROI of oil and gas as well as coal were all in decline for two time periods within the last 40 years, and both time periods preceded economic recessions. © 2010 IOP Publishing Ltd. Source
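The EIR described above reduces to a simple ratio: the energy intensity of a fuel (Btu obtained per dollar spent on it, i.e. the inverse of its unit price) divided by the energy intensity of the overall economy (Btu consumed per dollar of GDP). A minimal sketch, with illustrative numbers that are not taken from the paper:

```python
def energy_intensity_ratio(fuel_price_per_btu, economy_energy_use_btu, gdp_dollars):
    """Energy intensity ratio (EIR) as described above: Btu bought per
    dollar of fuel, relative to Btu used per dollar of GDP.
    All argument names and values are illustrative."""
    fuel_intensity = 1.0 / fuel_price_per_btu                  # Btu per dollar spent on the fuel
    economy_intensity = economy_energy_use_btu / gdp_dollars   # Btu per dollar of GDP
    return fuel_intensity / economy_intensity

# e.g. a fuel priced at $20 per million Btu, in an economy using
# 100 quadrillion Btu to produce $20 trillion of GDP
eir = energy_intensity_ratio(20 / 1e6, 100e15, 20e12)
```

An EIR above 1 means a dollar spent on the fuel buys more energy than a dollar of GDP embodies, consistent with the interpretation that a higher EIR indicates a higher-quality energy source.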


Weinmaster G.,University of California at Los Angeles | Fischer J.,University of Texas at Austin
Developmental Cell | Year: 2011

In the first volume of Developmental Cell, it was reported that the classic Drosophila neurogenic gene neuralized encodes a ubiquitin ligase that monoubiquitylates the Notch ligand Delta, thus promoting Delta endocytosis. A requirement for ligand internalization by the signal-sending cell, although counterintuitive, remains to date a feature unique to Notch signaling. Ten years and many ubiquitin ligases later, we discuss sequels to these three papers with an eye toward reviewing the development of ideas for how ligand ubiquitylation and endocytosis propel Notch signaling. © 2011 Elsevier Inc. Source


Patzek T.W.,University of Texas at Austin | Croft G.D.,University of California at Berkeley
Energy | Year: 2010

Based on economic and policy considerations that appear to be unconstrained by geophysics, the Intergovernmental Panel on Climate Change (IPCC) generated forty carbon production and emissions scenarios. In this paper, we develop a base-case scenario for global coal production based on the physical multi-cycle Hubbert analysis of historical production data. Areas with large resources but little production history, such as Alaska and the Russian Far East, are treated as sensitivities on top of this base-case, producing an additional 125 Gt of coal. The value of this approach is that it provides a reality check on the magnitude of carbon emissions in a business-as-usual (BAU) scenario. The resulting base-case is significantly below 36 of the 40 carbon emission scenarios from the IPCC. The global peak of coal production from existing coalfields is predicted to occur close to the year 2011. The peak coal production rate is 160 EJ/y, and the peak carbon emissions from coal burning are 4.0 GtC (15 Gt CO2) per year. After 2011, the production rates of coal and CO2 decline, reaching 1990 levels by the year 2037, and reaching 50% of the peak value in the year 2047. It is unlikely that future mines will reverse the trend predicted in this BAU scenario. © 2010 Elsevier Ltd. Source
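The multi-cycle Hubbert analysis mentioned above models total production as a sum of bell-shaped (logistic-derivative) cycles. A minimal sketch with fabricated parameters, not the paper's fitted values:

```python
import math

def hubbert_cycle(t, peak_year, peak_rate, width):
    """One logistic-derivative Hubbert cycle: production rate at time t,
    peaking at peak_rate in peak_year; width sets how fast the cycle
    rises and falls. Parameterization is illustrative."""
    x = (t - peak_year) / width
    return 4.0 * peak_rate * math.exp(x) / (1.0 + math.exp(x)) ** 2

def multi_cycle_rate(t, cycles):
    """Multi-cycle Hubbert model: total rate is the sum of the cycles."""
    return sum(hubbert_cycle(t, *c) for c in cycles)

# illustrative: a dominant cycle peaking in 2011 at 160 EJ/y plus a
# smaller, earlier cycle (made-up numbers for demonstration only)
cycles = [(2011, 160.0, 20.0), (1960, 40.0, 15.0)]
rate_2011 = multi_cycle_rate(2011, cycles)
```

Each cycle is symmetric about its peak, which is what produces the predicted decline after the peak year in a business-as-usual extrapolation.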


Rindova V.,University of Texas at Austin
Journal of Supply Chain Management | Year: 2011

The topic of theory development remains an ongoing area of interest among management scholars. Much of the discussion on the topic has focused on the question of what constitutes a theoretical contribution. In contrast, this article emphasizes the process through which ideas develop into a value-added theoretical contribution. The article highlights the paradoxical nature of the demands that the theory development process poses on scholars. © 2011 Institute for Supply Management, Inc.™. Source


Wallingford J.B.,University of Texas at Austin | Mitchell B.,Northwestern University
Genes and Development | Year: 2011

Cilia are important cellular structures that have been implicated in a variety of signaling cascades. In this review, we discuss the current evidence for and against a link between cilia and both the canonical Wnt/β-catenin pathway and the noncanonical Wnt/planar cell polarity (PCP) pathway. Furthermore, we address the evidence implicating a role for PCP components in ciliogenesis. Given the lack of consensus in the field, we use new data on the control of ciliary protein localization as a basis for proposing new models by which cell type-specific regulation of ciliary components via differential transport, regulated entry and exit, or diffusion barriers might generate context-dependent functions for cilia. © 2011 by Cold Spring Harbor Laboratory Press. Source


Macdonald P.M.,University of Texas at Austin
Current Opinion in Genetics and Development | Year: 2011

Localization of mRNAs to subcellular domains can enrich proteins at sites where they function. Coordination with translational control can ensure that the encoded proteins will not appear elsewhere, an important property for factors that control cell fate or body patterning. Here I focus on two aspects of mRNA localization. One is the question of how mRNAs that undergo directed transport by a shared mechanism are bound to the transport machinery, and why localization signals from these mRNAs have very diverse sequences. The second topic concerns the role of particles, in which localized mRNAs often appear. Recent evidence highlights the importance of such assemblies, and the possibility that close association of mRNAs confers community effects and a novel form of regulation. © 2011 Elsevier Ltd. Source


Platt J.P.,University of Southern California | Behr W.M.,University of Texas at Austin | Johanesen K.,University of Southern California | Williams J.R.,University of Southern California
Annual Review of Earth and Planetary Sciences | Year: 2013

The Betic-Rif arc is one of the smallest and tightest orogenic arcs on Earth, and together with its extensional hinterland, the Alborán Domain, it formed between two colliding continents. The region provides examples of a range of tectonic processes that are not predictable from the rules of rigid-plate tectonics. The Alborán Domain reveals two stages of subduction and accretion, with different thermal histories and mechanisms of exhumation. The external Betic-Rif thrust belt illustrates four processes that create an arcuate orogen and a strongly divergent pattern of slip vectors: (a) the interaction between the westward moving Alborán Domain and the converging African and Iberian margins, (b) divergence in relative motion due to extension within the Alborán Domain, (c) slip partitioning onto strike-slip faults within the arc, and (d) vertical-axis rotations resulting from oblique convergence on the limbs of the arc. © 2013 by Annual Reviews. All rights reserved. Source


Assuming the OECD member states to be 'advanced' nations equipped with basic scientific capacities, the present research addresses the network configuration of these countries in international scientific collaboration and the transformation of this network under globalization. The results suggest that geographical, linguistic, and economic affinities did not have a meaningful impact on the formation of the co-authorship network between 'advanced' nations, in contrast to previous research that claimed their importance for international cooperation. Globalization, facilitated by the development of information and transportation technologies, was found to influence the co-authorship links between countries, but not to accelerate centralization of the network over the past 15 years. Though the core-periphery pattern still persists, new rising stars, Korea and Turkey, have emerged in the co-authorship network among 'advanced' nations. These two countries, which rapidly increased their share of degree centrality from 1995 to 2010, had strategic financial support from their governments that stimulated collaboration between universities and industries and emphasized the development of science and engineering fields. © 2011 Akadémiai Kiadó, Budapest, Hungary. Source
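The degree-centrality share used above to identify the 'rising stars' can be computed directly from a co-authorship edge list: each country's degree divided by the total degree in the network. A toy sketch (the country links are fabricated for illustration):

```python
def degree_centrality_share(edges):
    """Share of total degree held by each node in an undirected
    co-authorship network given as a list of (country, country) links."""
    degree = {}
    for a, b in edges:
        degree[a] = degree.get(a, 0) + 1
        degree[b] = degree.get(b, 0) + 1
    total = sum(degree.values())
    return {country: d / total for country, d in degree.items()}

# toy network: each pair is one co-authorship link (illustrative only)
links = [("US", "UK"), ("US", "KR"), ("KR", "TR"), ("US", "DE"), ("KR", "JP")]
shares = degree_centrality_share(links)
```

Comparing such shares at two time points (e.g. 1995 vs. 2010) is the kind of calculation behind the claim that Korea and Turkey rapidly increased their centrality.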


Benedek N.A.,University of Texas at Austin | Fennie C.J.,Cornell University
Journal of Physical Chemistry C | Year: 2013

We use a combination of symmetry arguments and first-principles calculations to explore the connection between structural distortions and ferroelectricity in the perovskite family of materials. We explain the role of octahedral rotations in suppressing ferroelectricity in these materials and show that, as the tolerance factor decreases, rotations alone cannot fully suppress ferroelectricity. Our results show that it is cation displacements ("hidden" in Glazer notation) that accompany the rotations, rather than the rotations themselves, that play the decisive role in suppressing ferroelectricity in these cases. We use the knowledge gained in our analysis of this problem to explain the origin of ferroelectricity in R3c materials such as FeTiO3 and ZnSnO3 and to suggest strategies for the design and synthesis of new perovskite ferroelectrics. Our results have implications not only for the fundamental crystal chemistry of the perovskites but also for the discovery of new functional materials. © 2013 American Chemical Society. Source


Warren D.L.,University of Texas at Austin
Trends in Ecology and Evolution | Year: 2012

There is a growing awareness of problems with the estimation of the ecological tolerances of species through correlative modeling approaches. These problems have led some investigators to argue for abandoning terms such as 'ecological niche model' and 'environmental niche model' in favor of the ostensibly more value-neutral 'species distribution model', as the models are thought to frequently be poor estimators of the niche. Here, I argue that most applications to which these models are put require the assumption that they do estimate the niche, however imperfectly, and that obscuring this inescapable and potentially flawed assumption in the terminology may only serve to hinder the development of the field. © 2012 Elsevier Ltd. Source


Mehta P.H.,Erasmus University Rotterdam | Josephs R.A.,University of Texas at Austin
Hormones and Behavior | Year: 2010

Traditional theories propose that testosterone should increase dominance and other status-seeking behaviors, but empirical support has been inconsistent. The present research tested the hypothesis that testosterone's effect on dominance depends on cortisol, a glucocorticoid hormone implicated in psychological stress and social avoidance. In the domains of leadership (Study 1, mixed-sex sample) and competition (Study 2, male-only sample), testosterone was positively related to dominance, but only in individuals with low cortisol. In individuals with high cortisol, the relation between testosterone and dominance was blocked (Study 1) or reversed (Study 2). Study 2 further showed that these hormonal effects on dominance were especially likely to occur after social threat (social defeat). The present studies provide the first empirical support for the claim that the neuroendocrine reproductive (HPG) and stress (HPA) axes interact to regulate dominance. Because dominance is related to gaining and maintaining high status positions in social hierarchies, the findings suggest that only when cortisol is low should higher testosterone encourage higher status. When cortisol is high, higher testosterone may actually decrease dominance and in turn motivate lower status. © 2010 Elsevier Inc. Source


Allocation trade-offs shape ecological and biogeochemical phenomena at local to global scales. Plant allocation strategies drive major changes in ecosystem carbon cycling. Microbial allocation to enzymes that decompose carbon vs. organic nutrients may similarly affect ecosystem carbon cycling. Current solutions to this allocation problem prioritise the stoichiometric trade-offs implemented in plant ecology. These solutions may not maximise microbial growth and fitness under all conditions, because organic nutrients are also a significant carbon resource for microbes. I created multiple allocation frameworks and simulated microbial growth using a microbial-explicit biogeochemical model. I demonstrate that prioritising stoichiometric trade-offs does not optimise microbial allocation, while exploiting organic nutrients as carbon resources does. Analysis of continental-scale enzyme data supports the allocation patterns predicted by this framework, and modelling suggests large deviations in soil C loss depending on which strategy is implemented. Therefore, understanding microbial allocation strategies will likely improve our understanding of carbon cycling and climate. © 2014 John Wiley & Sons Ltd/CNRS. Source


Hopkins M.E.,University of Texas at Austin
International Journal of Primatology | Year: 2013

Many animals generate long-distance vocalizations ("loud calls") that have been hypothesized to mediate spatial interactions. Howls by mantled howlers (Alouatta palliata), specifically, are hypothesized to function in group spacing, but the mechanism by which this is accomplished remains unclear. In this study, I evaluated the role of naturally occurring howls in mediating mantled howler intergroup spatial interactions on Barro Colorado Island, Panama. Multivariate analyses examined relationships between social and environmental contextual variables and the likelihood that a group would approach or avoid a neighboring group's howl. Mantled howlers were more likely to approach howling neighbors when fruit and flower availability was limited. Relative dominance was also a highly significant predictor of approach, though the relationship ran contrary to expectations. Groups were more likely to approach howling neighbors higher in the established dominance hierarchy than those lower down, possibly reflecting priority access of more dominant groups to higher quality resources. Other variables tested, including numeric odds, vocalization intensity, infanticide risk, and location of the howling group, were not significant predictors of spatial response. These findings are consistent with the assertion that howls contain identifying information and highlight the importance of including contextual variables in multivariate analyses of spatial responses to loud calls. These results also indicate potential contrasts between observational and playback studies by suggesting that howl location may play a greater role in responses to simulated intruders than to known neighbor groups, and that relative dominance may provide a more salient measure of competitive ability than numeric odds among groups with interaction histories. © 2013 Springer Science+Business Media New York. Source


Zhang Y.,University of Texas at Austin
Journal of medical Internet research | Year: 2013

Preference for information is a personality trait that affects people's tendency to seek information in health-related situations. Prior studies have focused primarily on investigating its impact on patient-provider communication and on the implications for designing information interventions that prepare patients for medical procedures. Few studies have examined its impact on general consumers' interactions with Web-based search engines for health information or the implications for designing more effective health information search systems. This study intends to fill this gap by investigating the impact of preference for information on the search behavior of general consumers seeking health information, their perceptions of search tasks (representing information needs), and user experience with search systems. Forty general consumers who had previously searched for health information online participated in the study in our usability lab. Preference for information was measured using Miller's Monitor-Blunter Style Scale (MBSS) and the Krantz Health Opinion Survey-Information Scale (KHOS-I). Each participant completed four simulated health information search tasks: two look-up (fact-finding) and two exploratory. Their behaviors while interacting with the search systems were automatically logged and ratings of their perceptions of tasks and user experience with the systems were collected using Likert-scale questionnaires. The MBSS showed low reliability with the participants (Monitoring subscale: Cronbach alpha=.53; Blunting subscale: Cronbach alpha=.35). Thus, no further analyses were performed based on the scale. KHOS-I had sufficient reliability (Cronbach alpha=.77). Participants were classified into low- and high-preference groups based on their KHOS-I scores. The high-preference group submitted significantly shorter queries when completing the look-up tasks (P=.02). 
The high-preference group made a significantly higher percentage of parallel movements in query reformulation than did the low-preference group (P=.04), whereas the low-preference group made a significantly higher percentage of new concept movements than the high-preference group when completing the exploratory tasks (P=.01). The high-preference group found the exploratory tasks to be significantly more difficult (P=.05) and the systems to be less useful (P=.04) than did the low-preference group. Preference for information has an impact on the search behavior of general consumers seeking health information. Those with a high preference were more likely to use more general queries when searching for specific factual information and to develop more complex mental representations of health concerns of an exploratory nature and try different combinations of concepts to explore these concerns. High-preference users were also more demanding on the system. Health information search systems should be tailored to fit individuals' information preferences. Source


Ge S.-F.,Tsinghua University | Dicus D.A.,University of Texas at Austin | Repko W.W.,Michigan State University
Physical Review Letters | Year: 2012

The residual Z2s(k) and Z̄2s(k) symmetries induce a direct and unique phenomenological relation for θx (θ13), expressed in terms of the other two mixing angles θs (θ12) and θa (θ23) and the Dirac CP phase δD. Z2s(k) predicts a θx probability distribution centered around 3°-6° with an uncertainty of 2°-4°, while the predictions from Z̄2s(k) are approximately a factor of 2 larger. Either result fits the T2K, MINOS, and Double Chooz measurements. Alternately, the prediction for the Dirac CP phase δD peaks at ±74° (±106°) for Z2s(k) or ±123° (±57°) for Z̄2s(k), which is consistent with the latest global fit. We also give a distribution for the leptonic Jarlskog invariant Jν which can provide further tests from measurements at T2K and NOνA. © 2012 American Physical Society. Source


Sun A.Y.,University of Texas at Austin
Water Resources Research | Year: 2013

The purpose of this work is to investigate the feasibility of downscaling Gravity Recovery and Climate Experiment (GRACE) satellite data for predicting groundwater level changes and, thus, enhancing current capability for sustainable water resources management. In many parts of the world, water management decisions are traditionally informed by in situ observation networks which, unfortunately, have seen a decline in coverage in recent years. Since its launch, GRACE has provided terrestrial water storage change (ΔTWS) data at global and regional scales. The application of GRACE data for local-scale groundwater resources management has been limited because of uncertainties inherent in GRACE data and difficulties in disaggregating various TWS components. In this work, artificial neural network (ANN) models are developed to predict groundwater level changes directly by using a gridded GRACE product and other publicly available hydrometeorological data sets. As a feasibility study, ensemble ANN models are used to predict monthly and seasonal water level changes for several wells located in different regions across the US. Results indicate that GRACE data play a modest but significant role in the performance of ANN ensembles, especially when the cyclic pattern of the groundwater hydrograph is disrupted by extreme climate events, such as the recent Midwest droughts. The statistical downscaling approach taken here may be readily integrated into local water resources planning activities. Key Points: GRACE data are downscaled to predict in situ water level changes; a neural network statistical downscaling approach was taken; a modest gain in prediction accuracy was observed. ©2013. American Geophysical Union. All Rights Reserved. Source
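The ensemble idea described above can be caricatured in a few lines: train several members on resampled data and average their predictions. The sketch below substitutes bootstrap-fitted linear regressors for the paper's ANNs, and the "GRACE ΔTWS plus hydrometeorological predictors" data are entirely synthetic stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: 240 months of three predictors (think GRACE dTWS
# anomaly, precipitation, temperature) and a water-level change that
# depends linearly on them plus noise. All values are fabricated.
n = 240
X = rng.normal(size=(n, 3))
y = 0.6 * X[:, 0] + 0.3 * X[:, 1] - 0.2 * X[:, 2] + rng.normal(0, 0.1, n)

def fit_member(X, y, rng):
    """One ensemble member: least-squares fit on a bootstrap resample
    (a cheap surrogate for retraining an ANN from random initial weights)."""
    idx = rng.integers(0, len(y), len(y))
    design = np.c_[X[idx], np.ones(len(idx))]  # add intercept column
    coef, *_ = np.linalg.lstsq(design, y[idx], rcond=None)
    return coef

members = [fit_member(X[:200], y[:200], rng) for _ in range(20)]

def ensemble_predict(X):
    """Average the members' predictions, as an ensemble does."""
    preds = [np.c_[X, np.ones(len(X))] @ c for c in members]
    return np.mean(preds, axis=0)

pred = ensemble_predict(X[200:])
rmse = float(np.sqrt(np.mean((pred - y[200:]) ** 2)))
```

Averaging over members reduces the variance contributed by any single fit, which is the same motivation for using ANN ensembles rather than a single network.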


Norton I.,University of Texas at Austin
Geosphere | Year: 2011

Extension in Death Valley is usually interpreted as a combination of low-angle Basin and Range-style extension and strike slip associated with the developing Pacific-North America plate boundary in western North America, with these two tectonic regimes operating synchronously in Death Valley. Examination of structural, stratigraphic, and timing relationships in the region suggests that this interpretation needs revision. Evolution of Death Valley is best described as a two-stage process. In the first stage, lasting from ca. 18 to 5 Ma, low-angle Basin and Range extension transported allochthons consisting of Late Proterozoic through Early Paleozoic miogeoclinal section along detachment surfaces that, as extension continued, were exhumed from mid-lower crustal levels to the surface. Near the end of this extensional phase and lasting until ca. 3 Ma, deposition of a thick sequence of volcanics, clastics, and some lacustrine carbonates signaled a period of relative tectonic quiescence, with sediments in some areas covering the exhumed detachment surfaces. At ca. 3 Ma, initiation of the East California Shear Zone started development of the present-day topographic depression of Death Valley, formed as a pull-apart basin associated with this strike slip. Faulting broke the older, inactive, Basin and Range detachment surfaces, with high-angle transtensional faulting along the Black Mountains front. The Black Mountains were elevated as a result of footwall uplift, with the well-known turtleback structures being megamullions along these bounding faults. These megamullions are similar to those seen at oceanic spreading centers. The Panamint Range has previously been interpreted as an extensional allochthon, with the entire range transported from on top of or east of the Black Mountains. A new interpretation presented here is that the range is a large core complex similar to the core complex at Tucki Mountain, at the northern end of the range. 
The Basin and Range extensional detachment tracks over the top of the range, with extensional allochthons perched on the eastern flanks of the range. This modified model for evolution of Death Valley suggests a strong link between timing and style of deformation in the basin with the developing Pacific-North America plate boundary, particularly eastward propagation of this boundary. © 2011 Geological Society of America. Source


Chen X.,University of Texas at Austin
Journal of the American Chemical Society | Year: 2012

Toehold-mediated strand displacement has proven extremely powerful in programming enzyme-free DNA circuits and DNA nanomachines. To achieve multistep, autonomous, and complex behaviors, toeholds must be initially inactivated by hybridizing to inhibitor strands or domains and then relieved from inactivation in a programmed, timed manner. Although powerful and reasonably robust, this strategy has several drawbacks that limit the architecture of DNA circuits. For example, the combination between toeholds and branch migration (BM) domains is 'hard wired' during DNA synthesis thus cannot be created or changed during the execution of DNA circuits. To solve this problem, I propose a strategy called 'associative toehold activation', where the toeholds and BM domains are connected via hybridization of auxiliary domains during the execution of DNA circuits. Bulged thymidines that stabilize DNA three-way junctions substantially accelerate strand displacement reactions in this scheme, allowing fast strand displacement initiated by reversible toehold binding. To demonstrate the versatility of the scheme, I show (1) run-time combination of toeholds and BM domains, (2) run-time recombination of toeholds and BM domains, which results in a novel operation 'toehold switching', and (3) design of a simple conformational self-replicator. © 2011 American Chemical Society. Source


Dicarlo D.A.,University of Texas at Austin
Water Resources Research | Year: 2013

Gravity-driven multiphase flow in porous media is ubiquitous in the geophysical world; the classic case in hydrology is vertical infiltration of precipitation into a soil. For homogeneous porous media, infiltrations are sometimes observed to be stable and laterally uniform, but other times are observed to be unstable and produce preferential flow paths. Since Saffman and Taylor (1958), researchers have attempted to define criteria that determine instability. Saffman and Taylor's analysis consisted of two regions of single phase flow, while Parlange and Hill (1976) integrated this analysis with the multiphase flow equations to provide testable predictions. In the subsequent 40 years, great advances have been made in determining the complex interactions between multiphase flow and instability. Theoretically, the stability of the standard multiphase flow equations has been verified, showing the necessity of extensions to the multiphase flow equations to describe the observed unstable flow. Experimentally, it has been shown that the instability is related to a phenomenon in 1-D infiltrations called saturation or pressure overshoot. In this review, the connection between overshoot and instability is detailed, and it is described how models of overshoot can simplify the analysis of current and future models of instability and multiphase flow. ©2013. American Geophysical Union. All Rights Reserved. Source


Johnson K.A.,University of Texas at Austin
FEBS Letters | Year: 2013

This review traces the history and logical progression of methods for quantitative analysis of enzyme kinetics from the 1913 Michaelis and Menten paper to the application of modern computational methods today. Following a brief review of methods for fitting steady state kinetic data, modern methods are highlighted for fitting full progress curve kinetics based upon numerical integration of rate equations, including a re-analysis of the original Michaelis-Menten full time course kinetic data. Finally, several illustrations of modern transient state kinetic methods of analysis are shown which enable the elucidation of reactions occurring at the active sites of enzymes in order to relate structure and function. © 2013 Federation of European Biochemical Societies. Published by Elsevier B.V. All rights reserved. Source
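Fitting full progress curves by numerical integration of rate equations, as described above, starts from integrating the Michaelis-Menten rate law dS/dt = -Vmax·S/(Km + S) to obtain S(t). A minimal sketch using a fixed-step 4th-order Runge-Kutta integrator (the parameter values are illustrative, not from any fitted data set):

```python
def mm_progress_curve(s0, vmax, km, dt=0.01, t_end=50.0):
    """Integrate dS/dt = -Vmax*S/(Km + S) with classical RK4 and return
    the full progress curve as a list of (t, S) pairs."""
    def rate(s):
        return -vmax * s / (km + s)

    t, s = 0.0, s0
    curve = [(t, s)]
    for _ in range(int(round(t_end / dt))):
        k1 = rate(s)
        k2 = rate(s + 0.5 * dt * k1)
        k3 = rate(s + 0.5 * dt * k2)
        k4 = rate(s + dt * k3)
        s += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        t += dt
        curve.append((t, s))
    return curve

# substrate depletion for S0 = 10, Vmax = 1, Km = 2 (arbitrary units)
curve = mm_progress_curve(s0=10.0, vmax=1.0, km=2.0)
final_s = curve[-1][1]
```

In a fitting workflow, such an integrated curve would be compared against measured progress-curve data while Vmax and Km are adjusted by an optimizer; the sketch shows only the forward integration step.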


Aleynikov P.,ITER Organization | Breizman B.N.,University of Texas at Austin
Physical Review Letters | Year: 2015

This Letter presents a rigorous kinetic theory for relativistic runaway electrons in the near critical electric field in tokamaks. The theory provides a distribution function of the runaway electrons, reveals the presence of two different threshold electric fields, and describes a mechanism for hysteresis in the runaway electron avalanche. Two different threshold electric fields characterize a minimal field required for sustainment of the existing runaway population and a higher field required for the avalanche onset. The near-threshold regime for runaway electrons determines the time scale of toroidal current decay during runaway mitigation in tokamaks. © 2015 American Physical Society. Source


Zhang Y.,University of Texas at Austin
Information Processing and Management | Year: 2012

This study explored the impact of task complexity on people's mental models of MedlinePlus, an information-rich web space providing consumer health information to the general public. Thirty-eight subjects were randomly assigned to two groups, performing either simple or complex search tasks. After the search session, subjects' perceptions of MedlinePlus were elicited using a concept listing protocol where subjects listed concepts concerning MedlinePlus in the order each concept occurred to them. The analysis of the concepts suggested that task complexity impacted subjects' mental models by influencing the objects in the system that they perceived and represented, the specificity of the representations, their evaluations of and emotions about the objects, and the heuristics that they developed for what works and what does not in the system. The pragmatic aspect of mental models was represented by subjects' descriptions of the steps that they would follow to tackle a hypothetical task. The analysis showed that task complexity affected the strategies that subjects perceived themselves using to solve new tasks using the system. © 2011 Elsevier Ltd. All rights reserved. Source


Krahmer F.,University of Gottingen | Ward R.,University of Texas at Austin
IEEE Transactions on Image Processing | Year: 2014

In many signal processing applications, one wishes to acquire images that are sparse in transform domains such as spatial finite differences or wavelets using frequency domain samples. For such applications, overwhelming empirical evidence suggests that superior image reconstruction can be obtained through variable density sampling strategies that concentrate on lower frequencies. The wavelet and Fourier transform domains are not incoherent because low-order wavelets and low-order frequencies are correlated, so compressive sensing theory does not immediately imply sampling strategies and reconstruction guarantees. In this paper, we turn to a more refined notion of coherence - the so-called local coherence - measuring for each sensing vector separately how correlated it is to the sparsity basis. For Fourier measurements and Haar wavelet sparsity, the local coherence can be controlled and bounded explicitly, so for matrices comprised of frequencies sampled from a suitable inverse square power-law density, we can prove the restricted isometry property with near-optimal embedding dimensions. Consequently, the variable-density sampling strategy we provide allows for image reconstructions that are stable to sparsity defects and robust to measurement noise. Our results cover both reconstruction by ℓ1-minimization and total variation minimization. The local coherence framework developed in this paper should be of independent interest, as it implies that for optimal sparse recovery results, it suffices to have bounded average coherence from sensing basis to sparsity basis - as opposed to bounded maximal coherence - as long as the sampling strategy is adapted accordingly. © 1992-2012 IEEE. Source
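The inverse square power-law density invoked above can be sketched as a weighted draw over frequency indices; the grid size, sample count, and seed here are illustrative assumptions, not values from the paper.

```python
import random

def sample_frequencies(n, m, seed=0):
    """Draw m frequency indices from {-n/2, ..., n/2 - 1} with probability
    proportional to 1/max(1, |k|)^2, an inverse square power-law density
    that concentrates measurements at low frequencies."""
    rng = random.Random(seed)
    freqs = list(range(-n // 2, n // 2))
    weights = [1.0 / max(1, abs(k)) ** 2 for k in freqs]
    return rng.choices(freqs, weights=weights, k=m)

samples = sample_frequencies(n=256, m=2000)
low_band = sum(1 for k in samples if abs(k) <= 16)  # bulk of the draws
```

Because the weights decay quadratically, the vast majority of draws land in the low-frequency band, matching the empirical strategy the abstract describes.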


Nagaosa N.,University of Tokyo | Nagaosa N.,RIKEN | Sinova J.,Texas A&M University | Sinova J.,ASCR Institute of Physics Prague | And 3 more authors.
Reviews of Modern Physics | Year: 2010

The anomalous Hall effect (AHE) occurs in solids with broken time-reversal symmetry, typically in a ferromagnetic phase, as a consequence of spin-orbit coupling. Experimental and theoretical studies of the AHE are reviewed, focusing on recent developments that have provided a more complete framework for understanding this subtle phenomenon and have, in many instances, replaced controversy by clarity. Synergy between experimental and theoretical works, both playing a crucial role, has been at the heart of these advances. On the theoretical front, the adoption of the Berry-phase concepts has established a link between the AHE and the topological nature of the Hall currents. On the experimental front, new experimental studies of the AHE in transition metals, transition-metal oxides, spinels, pyrochlores, and metallic dilute magnetic semiconductors have established systematic trends. These two developments, in concert with first-principles electronic structure calculations, strongly favor the dominance of an intrinsic Berry-phase-related AHE mechanism in metallic ferromagnets with moderate conductivity. The intrinsic AHE can be expressed in terms of the Berry-phase curvatures and it is therefore an intrinsic quantum-mechanical property of a perfect crystal. An extrinsic mechanism, skew scattering from disorder, tends to dominate the AHE in highly conductive ferromagnets. The full modern semiclassical treatment of the AHE is reviewed which incorporates an anomalous contribution to wave-packet group velocity due to momentum-space Berry curvatures and correctly combines the roles of intrinsic and extrinsic (skew-scattering and side-jump) scattering-related mechanisms. In addition, more rigorous quantum-mechanical treatments based on the Kubo and Keldysh formalisms are reviewed, taking into account multiband effects, and demonstrate the equivalence of all three linear response theories in the metallic regime. 
Building on results from recent experiment and theory, a tentative global view of the AHE is proposed which summarizes the roles played by intrinsic and extrinsic contributions in the disorder strength versus temperature plane. Finally, outstanding issues and avenues for future investigation are discussed. © 2010 The American Physical Society. Source


Zakhour R.,University of Texas at Austin | Hanly S.V.,Macquarie University
IEEE Transactions on Information Theory | Year: 2012

This paper considers maximizing the network-wide minimum supported rate in the downlink of a two-cell system, where each base station (BS) is endowed with multiple antennas. This is done for different levels of cell cooperation. At one extreme, we consider single cell processing where the BS is oblivious to the interference it is creating at the other cell. At the other extreme, we consider full cooperative macroscopic beamforming. In between, we consider coordinated beamforming, which takes account of inter-cell interference, but does not require full cooperation between the BSs. We combine elements of Lagrangian duality and large system analysis to obtain limiting SINRs and bit-rates, allowing comparison between the considered schemes. The main contributions of the paper are theorems which provide concise formulas for optimal transmit power, beamforming vectors, and achieved signal to interference and noise ratio (SINR) for the considered schemes. The formulas obtained are valid for the limit in which the number of users per cell, K, and the number of antennas per base station, N, tend to infinity, with fixed ratio β = K/N. These theorems also provide expressions for the effective bandwidths occupied by users, and the effective interference caused in the adjacent cell, which allow direct comparisons between the considered schemes. © 2006 IEEE. Source


Veletsianos G.,University of Texas at Austin
Journal of Computer Assisted Learning | Year: 2012

Scholars participate in online social networks for professional purposes. In such networks, learning takes the form of participation and identity formation through engagement in and contribution to networked practices. While current literature describes the possible benefits of online participation, empirical research on scholars' use of online social networks in the educational technology literature is negligible. The purpose of this paper is to understand scholars' naturalistic practices in social networks in general, and on Twitter in particular. Tweets from 45 scholars were analysed qualitatively to arrive at dominant themes describing online social network practice. Findings indicate that scholars participating on Twitter (1) shared information, resources, and media relating to their professional practice; (2) shared information about their classroom and their students; (3) requested assistance from and offered suggestions to others; (4) engaged in social commentary; (5) engaged in digital identity and impression management; (6) sought to network and make connections with others; and (7) highlighted their participation in online networks other than Twitter. These findings assist the field in understanding the emerging practice of scholarly participation in online networks. © 2011 Blackwell Publishing Ltd. Source


Harden K.P.,University of Texas at Austin
Journal of Child Psychology and Psychiatry and Allied Disciplines | Year: 2010

Background: Adolescent involvement in religious organizations has been hypothesized to protect against early age at first drink. However, the correlation between adolescent religiosity and later age at first drink may be confounded by environmental or genetic differences between families. This study tests whether, after controlling for shared environmental and genetic confounds using a behavior genetic design, the association between individual levels of religiosity and earlier age at first drink is still evident. Method: Twin and sibling pairs were drawn from the National Longitudinal Study of Adolescent Health, a nationally-representative sample of US adolescents. Age at first drink was measured as how old adolescents were when they first had a drink of beer, wine, or liquor. Religiosity was measured using four items concerning frequency of religious activities and importance of religious beliefs. Using twins and siblings who were discordant for religiosity, analyses tested whether religious adolescents had a later age at first drink than their non-religious co-twins/co-siblings. Results: Religious adolescents did not differ from their non-religious siblings in their mean age at first drink. Results from survival models indicate that environmental differences between families completely account for the correlation between religiosity and later age at first drink. Conclusions: Results suggest that individual religious involvement is a proxy variable for family or cultural environments that are salient for when adolescents initiate alcohol use. Future research is needed to identify specific protective environments in religious families. These results have implications for both public policy and etiological theory. © 2010 Association for Child and Adolescent Mental Health. Source


Ren K.,University of Texas at Austin
Communications in Computational Physics | Year: 2010

The objective of this paper is to review recent developments in numerical reconstruction methods for inverse transport problems in imaging applications, mainly optical tomography, fluorescence tomography and bioluminescence tomography. In those inverse problems, one aims at reconstructing physical parameters, such as the absorption coefficient, the scattering coefficient and the fluorescence light source, inside heterogeneous media, from partial knowledge of transport solutions on the boundaries of the media. The physical parameters recovered can be used for diagnostic purposes. Numerical reconstruction techniques for those inverse transport problems can be roughly classified into two categories: linear reconstruction methods and nonlinear reconstruction methods. In the first type of methods, the inverse problems are linearized around some known background to obtain linear inverse problems. Classical regularization techniques are then applied to solve those inverse problems. The second type of methods is either based on regularized nonlinear least-square techniques or based on gradient-driven iterative methods for nonlinear operator equations. In either case, the unknown parameters are iteratively updated until the solutions of the transport equations with those parameters match the measurements to a certain extent. We review linear and nonlinear reconstruction methods for inverse transport problems in medical imaging with stationary, frequency-domain and time-dependent data. The materials presented include both existing and new results. Meanwhile, we attempt to present similar algorithms for different problems in the same framework to make it more straightforward to generalize those algorithms to other inverse (transport) problems. © 2010 Global-Science Press. Source
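The linear route described above (linearize around a background, then apply classical regularization) can be sketched with a Tikhonov-regularized least-squares solve. The toy smoothing operator, noise level, and regularization weight below are illustrative assumptions standing in for a linearized transport operator, not anything from the review.

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Classical Tikhonov regularization for a linearized inverse problem
    A x ≈ b: solve the normal equations (A^T A + lam*I) x = A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Toy forward operator: a Gaussian smoothing kernel; the unknown is a
# localized "anomaly" (e.g. a region of elevated absorption).
rng = np.random.default_rng(0)
n = 50
x_true = np.zeros(n)
x_true[20:30] = 1.0
idx = np.arange(n)
A = np.exp(-0.1 * (idx[:, None] - idx[None, :]) ** 2)
b = A @ x_true + 1e-3 * rng.standard_normal(n)

x_rec = tikhonov_solve(A, b, lam=1e-2)
```

The penalty weight trades data fit against stability, which is exactly the role the classical regularization techniques play once the transport problem has been linearized.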


Koyluoglu O.O.,University of Texas at Austin | El Gamal H.,Ohio State University
IEEE Transactions on Information Theory | Year: 2011

This paper investigates the fundamental performance limits of the two-user interference channel in the presence of an external eavesdropper. In this setting, we construct an inner bound, to the secrecy capacity region, based on the idea of cooperative encoding in which the two users cooperatively design their randomized codebooks and jointly optimize their channel prefixing distributions. Our achievability scheme also utilizes message-splitting in order to allow for partial decoding of the interference at the nonintended receiver. Outer bounds are then derived and used to establish the optimality of the proposed scheme in certain cases. In the Gaussian case, the previously proposed cooperative jamming and noise-forwarding techniques are shown to be special cases of our proposed approach. Overall, our results provide structural insights on how the interference can be exploited to increase the secrecy capacity of wireless networks. © 2011 IEEE. Source


Shahmoradi A.,University of Texas at Austin
Astrophysical Journal | Year: 2013

It is proposed that the luminosity function, the rest-frame spectral correlations, and distributions of cosmological long-duration (Type-II) gamma-ray bursts (LGRBs) may be very well described as a multivariate log-normal distribution. This result is based on careful selection, analysis, and modeling of LGRBs' temporal and spectral variables in the largest catalog of GRBs available to date: 2130 BATSE GRBs, while taking into account the detection threshold and possible selection effects. Constraints on the joint rest-frame distribution of the isotropic peak luminosity (Liso), total isotropic emission (Eiso), the time-integrated spectral peak energy (Ep,z), and duration (T90,z) of LGRBs are derived. The presented analysis provides evidence for a relatively large fraction of LGRBs that have been missed by the BATSE detector, with Eiso extending down to ∼10^49 erg and observed spectral peak energies (Ep) as low as ∼5 keV. LGRBs with rest-frame duration T90,z ≲ 1 s or observer-frame duration T90 ≲ 2 s appear to be rare events (≲ 0.1% chance of occurrence). The model predicts a fairly strong and highly significant correlation (ρ = 0.58 ± 0.04) between Eiso and Ep,z of LGRBs. Also predicted are strong correlations of Liso and Eiso with T90,z and a moderate correlation between Liso and Ep,z. The strength and significance of the correlations found encourage the search for underlying mechanisms, though they undermine the correlations' capabilities as probes of dark energy's equation of state at high redshifts. The presented analysis favors - but does not necessitate - a cosmic rate for BATSE LGRBs tracing metallicity evolution consistent with a cutoff Z/Z⊙ ∼ 0.2-0.5, assuming no luminosity-redshift evolution. © 2013. The American Astronomical Society. All rights reserved. Source
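The multivariate log-normal description above amounts to a Gaussian model in log space. A bivariate sketch that imposes the quoted correlation is shown below; the means and widths are hypothetical placeholders, not the paper's fitted values, and only ρ = 0.58 is taken from the abstract.

```python
import numpy as np

rho = 0.58                       # correlation quoted in the abstract
mu = np.array([52.0, 2.0])       # hypothetical log10 means
sig = np.array([0.8, 0.4])       # hypothetical log10 standard deviations
cov = np.array([[sig[0] ** 2, rho * sig[0] * sig[1]],
                [rho * sig[0] * sig[1], sig[1] ** 2]])

rng = np.random.default_rng(1)
logs = rng.multivariate_normal(mu, cov, size=200_000)
samples = 10.0 ** logs           # log-normal variates in linear space

# The imposed correlation is recovered in log space, where the model lives.
rho_hat = np.corrcoef(logs[:, 0], logs[:, 1])[0, 1]
```

Working in log space is what makes the model tractable: correlations, marginals, and detection-threshold truncation can all be handled with standard multivariate-normal machinery before exponentiating.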


Fast W.,University of Texas at Austin | Tipton P.A.,University of Missouri
Trends in Biochemical Sciences | Year: 2012

N-Acyl-L-homoserine lactones (AHLs) are a major class of quorum-sensing signals used by Gram-negative bacteria to regulate gene expression in a population-dependent manner, thereby enabling group behavior. Enzymes capable of generating and catabolizing AHL signals are of significant interest for the study of microbial ecology and quorum-sensing pathways, for understanding the systems that bacteria have evolved to interact with small-molecule signals, and for their possible use in therapeutic and industrial applications. The recent structural and functional studies reviewed here provide a detailed insight into the chemistry and enzymology of bacterial communication. © 2011 Elsevier Ltd. Source


Ballard M.S.,University of Texas at Austin
Journal of the Acoustical Society of America | Year: 2012

An acoustic propagation model is applied to predict measurements of three-dimensional (3-D) effects recorded off the southeast coast of Florida. The measured signal is produced by a low frequency source that is towed north parallel to the shelf from a fixed receiving array. The acoustic data show the direct path arrival at the bearing of the tow ship and a second refracted path arrival as much as 30° inshore of the direct arrival. Notably, the refracted arrival has a received level more than 25 dB greater than that of the direct arrival. A geoacoustic model of the environment is created to explain the data. It is shown that the topography of the seafloor plays the largest role in controlling horizontal refraction effects, whereas the range-dependent sediment properties have the most influence on the received level. The modeling approach is based on a 3-D adiabatic mode technique in which the horizontal refraction equation is solved using a parabolic equation in Cartesian coordinates. A modal decomposition of the field provides insight into the variability in the arrival angle and received level of the measured signal. © 2012 Acoustical Society of America. Source


Pearce J.A.,University of Texas at Austin
International Journal of Hyperthermia | Year: 2013

The standard method for assessing hyperthermia treatment has been calculation of cumulative equivalent minutes at 43 °C, CEM43, and its variations. This parameter normalises treatment thermal histories rather than predicts treatment results. Arrhenius models have been widely used in analysing higher temperature thermal treatments and successfully employed to predict irreversible thermal alterations in structural proteins. Unfortunately, in many, but not all, cases they fail to represent thermally induced damage or cell death at hyperthermic temperatures, 43-50 °C, exhibiting significant over-prediction of the initial 'shoulder' region. The failure arises from the simplifying assumptions used to derive the irreversible reaction format that has been used in thermal damage studies. Several successful multi-parameter fit methods have been employed to model cell survival data. The two-state statistical thermodynamic model was derived from basic thermodynamic principles. The three-state model results from relaxing the assumptions under the Arrhenius formulation that result in an irreversible reaction. In other cell processes studied in vitro the irreversible Arrhenius model holds, and is sufficient to provide an accurate and useful estimate of thermal damage and cell death. It is essential in numerical model work to include multiple thermal damage processes operating in parallel to obtain a clear image of the likely outcome in tissues. Arrhenius and other C(t) models have that capability, while a single value for CEM43 does not. © 2013 Informa UK Ltd. All rights reserved. Source
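The two dose metrics contrasted above can each be written in a few lines. The R values (0.5 at or above 43 °C, 0.25 below) are the conventional choices for CEM43; the Arrhenius frequency factor and activation energy are left as caller-supplied parameters, since they are tissue- and process-specific.

```python
import math

def cem43(temps_c, dt_min):
    """Cumulative equivalent minutes at 43 °C: sum of dt * R**(43 - T),
    with the conventional R = 0.5 at or above 43 °C and R = 0.25 below."""
    total = 0.0
    for t in temps_c:
        r = 0.5 if t >= 43.0 else 0.25
        total += dt_min * r ** (43.0 - t)
    return total

def arrhenius_omega(temps_c, dt_s, a, ea):
    """Irreversible first-order Arrhenius damage integral:
    Omega = sum of A * exp(-Ea / (R * T)) * dt, with T in kelvin."""
    gas_r = 8.314  # J/(mol K)
    return sum(a * math.exp(-ea / (gas_r * (t + 273.15))) * dt_s
               for t in temps_c)
```

One minute at 45 °C yields CEM43 = 4 equivalent minutes, reflecting the doubling per degree above the breakpoint; the Arrhenius integral, by contrast, is a prediction of damage fraction rather than a normalisation of the thermal history.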


Xhemalce B.,University of Texas at Austin
Briefings in Functional Genomics | Year: 2013

Cancer results from abnormal gene expression that transforms cellular identity. A rising consensus is that genetic mutations and epigenetic alterations act in concert to achieve tumorigenesis. On one hand, cancer cells harbor classic genetic mutations that activate oncogenes and inhibit tumor suppressors. On the other hand, they also display broad alterations of their epigenomes, as defined by modifications of DNA, histones and coding/noncoding RNAs. In particular, methylation is a ubiquitous modification that affects several residues/sites in these molecules. In this review, I will discuss the central role of this modification in the regulation of gene expression, its alterations in cancer as well as its possible targeting for cancer therapies. © The Author 2013. Published by Oxford University Press. Source


Lingam M.,University of Texas at Austin
Physics of Plasmas | Year: 2015

A Hamiltonian and Action Principle formulation of spin-1/2 magnetohydrodynamics is presented via a first-principles derivation of the underlying Lagrangian, and the associated Hamiltonian. The derivation invokes the notion of "frozen-in" constraints, symmetry breaking, and similarities with Ginzburg-Landau theory to arrive at the relevant terms in the Hamiltonian. The model thus obtained includes the effects of spin and other quantum corrections and is shown to be in full agreement with existent models in the literature. It is also indicated how two-fluid effects, gyroviscosity, and anisotropic pressure can be included in the model, in addition to incorporating higher-order (nonlinear) quantum spin corrections. An interesting analogy with the theory of liquid crystals is also highlighted. © 2015 AIP Publishing LLC. Source


Seshadrinathan K.,Intel Corporation | Bovik A.C.,University of Texas at Austin
IEEE Transactions on Image Processing | Year: 2010

There has recently been a great deal of interest in the development of algorithms that objectively measure the integrity of video signals. Since video signals are being delivered to human end users in an increasingly wide array of applications and products, it is important that automatic methods of video quality assessment (VQA) be available that can assist in controlling the quality of video being delivered to this critical audience. Naturally, the quality of motion representation in videos plays an important role in the perception of video quality, yet existing VQA algorithms make little direct use of motion information, thus limiting their effectiveness. We seek to ameliorate this by developing a general, spatio-spectrally localized multiscale framework for evaluating dynamic video fidelity that integrates both spatial and temporal (and spatio-temporal) aspects of distortion assessment. Video quality is evaluated not only in space and time, but also in space-time, by evaluating motion quality along computed motion trajectories. Using this framework, we develop a full reference VQA algorithm for which we coin the term the MOtion-based Video Integrity Evaluation index, or MOVIE index. It is found that the MOVIE index delivers VQA scores that correlate quite closely with human subjective judgment, using the Video Quality Expert Group (VQEG) FRTV Phase 1 database as a test bed. Indeed, the MOVIE index is found to be quite competitive with, and even outperform, algorithms developed and submitted to the VQEG FRTV Phase 1 study, as well as more recent VQA algorithms tested on this database. © 2010 IEEE. Source


Ghorashian N.,University of Texas at Austin
PloS one | Year: 2013

Automated biosorter platforms, including recently developed microfluidic devices, enable and accelerate high-throughput and/or high-resolution bioassays on small animal models. However, time-consuming delivery of different organism populations to these systems introduces a major bottleneck to executing large-scale screens. Current population delivery strategies rely on suction from conventional well plates through tubing periodically exposed to air, leading to certain disadvantages: 1) bubble introduction to the sample, interfering with analysis in the downstream system, 2) substantial time drain from added bubble-cleaning steps, and 3) the need for complex mechanical systems to manipulate well plate position. To address these concerns, we developed a multiwell-format microfluidic platform that can deliver multiple distinct animal populations from on-chip wells using multiplexed valve control. This Population Delivery Chip could operate autonomously as part of a relatively simple setup that did not require any of the major mechanical moving parts typical of plate-handling systems to address a given well. We demonstrated automatic serial delivery of 16 distinct C. elegans worm populations to a single outlet without introducing any bubbles to the samples, causing cross-contamination, or damaging the animals. The device achieved delivery of more than 90% of the population preloaded into a given well in 4.7 seconds; an order of magnitude faster than delivery modalities in current use. This platform could potentially handle other similarly sized model organisms, such as zebrafish and drosophila larvae or cellular micro-colonies. The device's architecture and microchannel dimensions allow simple expansion for processing larger numbers of populations. Source


Yoder L.H.,University of Texas at Austin
The American journal of nursing | Year: 2014

To determine to what extent RNs in an acute care multihospital system used research findings in their practice; what types of knowledge they used in their practice; and what personal, professional, and organizational factors enhanced or hindered their research utilization. A cross-sectional, descriptive, online survey design was used. The survey, which asked about use of research findings in practice and evidence-based practice (EBP) participation, was placed on the hospital system intranet. Of the 2,900 RNs invited to participate, 1,112 nurses completed usable surveys, for a response rate of 38%. This article reports findings for 794 of the staff nurses who responded to the survey. The forms of knowledge that staff nurses reported relying on most were their personal experience with patients, conferences, hospital policies and procedures, physician colleagues, and nursing peers. Although a variety of resources were available for nurses to use in locating research and implementing EBP, respondents reported many of the same barriers that have been reported in other studies over the last two decades: lack of time, lack of resources, and lack of knowledge. Although their attitudes about research utilization and EBP were positive overall, respondents expected unit-based educators and clinical nurse specialists to collect and synthesize the research for them. These findings are similar to those of other recent studies regarding nurses' research utilization and EBP. A great deal of work remains to be done if we are to inform, educate, and assist staff nurses in using research and implementing EBP. It may be unrealistic to expect bedside nurses to add these activities to their duties unless they are compensated for the time and have the support of master's or doctorally prepared nurses to serve as EBP coaches and champions. Source


Lacy J.H.,University of Texas at Austin
Astrophysical Journal | Year: 2013

Infrared vibration-rotation lines can be valuable probes of interstellar and circumstellar molecules, especially symmetric molecules, which have no pure rotational transitions. But most such observations have been interpreted with an isothermal absorbing slab model, which leaves out important radiative transfer and molecular excitation effects. A more realistic non-LTE and non-isothermal radiative transfer model has been constructed. The results of this model are in much better agreement with the observations, including cases where lines in one branch of a vibration-rotation band are in absorption and those in another are in emission. In general, conclusions based on the isothermal absorbing slab model can be very misleading, but the assumption of LTE may not lead to such large errors, particularly if the radiation field temperature is close to the gas temperature. © 2013. The American Astronomical Society. All rights reserved. Source


Heller A.,University of Texas at Austin | Feldman B.,Abbott Laboratories
Accounts of Chemical Research | Year: 2010

Diabetes devastates lives and burdens society. Hypoglycemic (low glucose) episodes cause blackouts, and severe ones are life-threatening. Periods of hyperglycemia (high glucose) cause circulatory disease, stroke, amputations, blindness, kidney failure and nerve degeneration. In this Account, we describe the founding of TheraSense, now a major part of Abbott Diabetes Care, and the development of two products that have improved the lives of people with diabetes. The first, a virtually painless microcoulometer (300 nL volume), the FreeStyle blood glucose monitoring system, was approved by the FDA and became available in 2000. In 2009, this system was used in more than one billion blood assays. The second, the enzyme-wiring based, subcutaneously-implanted FreeStyle Navigator continuous glucose monitoring system, was approved by the FDA and became available in the United States in 2008. The strips of the FreeStyle blood glucose monitoring system comprise a printed parallel plate coulometer, with a 50 μm gap between two facing printed electrodes, a carbon electrode and a Ag/AgCl electrode. The volume of blood between the facing plates is accurately controlled. The glucose is electrooxidized through catalysis by a glucose dehydrogenase (GDH) and an Os2+/3+ redox mediator, which is reduced by the glucose-reduced enzyme and is electrooxidized on the carbon electrode. Initially the system used pyrroloquinoline quinone (PQQ)-dependent GDH but now uses flavin adenine dinucleotide (FAD)-dependent GDH. Because the facing electrodes are separated by such a small distance, shuttling of electrons by the redox couple could interfere with the coulometric assay. However, the Os2+/3+ redox mediator is selected to have a substantially negative formal potential, between 0.0 and −0.2 V, versus that of the facing Ag/AgCl electrode. 
This makes the flow of a shuttling current between the two electrodes virtually impossible because the oxidized Os3+ complex cannot be appreciably reduced at the more positively poised Ag/AgCl electrode. The FreeStyle Navigator continuous glucose monitoring system uses a subcutaneously implanted miniature plastic sensor connected to a transmitter to measure glycemia amperometrically and sends the information to a PDA-like device every minute. The sensor consists of a narrow (0.6 mm wide) plastic substrate on which carbon-working, Ag/AgCl reference, and carbon counter electrodes are printed in a stacked geometry. The active wired enzyme sensing layer covers only about 0.1 mm2 of the working electrode and is overlaid by a flux-limiting membrane. It resides at about 5 mm depth in the subcutaneous adipose tissue and monitors glucose concentrations over the range 20–500 mg/dL. Its core component, a miniature, disposable, amperometric glucose sensor, has an electrooxidation catalyst made from a crosslinked adduct of glucose oxidase (GOx) and a GOx wiring redox hydrogel containing a polymer-bound Os2+/3+ complex. Because of the selectivity of the catalyst for glucose, very little current flows in the absence of glucose. That feature, either alone or in combination with other features of the sensor, facilitates the one-point calibration of the system. The sensor is implanted subcutaneously and replaced by the patient after 5 days' use with minimal pain. The wearer does not feel its presence under the skin. © 2010 American Chemical Society. Source
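The coulometric principle behind the fixed-volume strip follows from Faraday's law, Q = n·F·moles. The sketch below assumes a two-electron oxidation per glucose molecule and uses the 300 nL cell volume from the abstract; the conversion routine and its numbers are illustrative, not device specifications.

```python
def glucose_mg_dl_from_charge(q_coulombs, volume_l=300e-9, n_electrons=2):
    """Invert Faraday's law Q = n*F*moles for a fixed-volume coulometric
    cell: concentration (mol/L) = Q / (n*F*V), then convert to mg/dL
    using the molar mass of glucose (180.16 g/mol)."""
    faraday = 96485.0  # C/mol
    mol_per_l = q_coulombs / (n_electrons * faraday * volume_l)
    return mol_per_l * 180.16 * 100.0  # g/L -> mg/dL

# Charge expected from a 300 nL sample at 100 mg/dL (round-trip check).
q_100 = 2 * 96485.0 * 300e-9 / 180.16
```

Because the assay counts total charge from an exactly metered volume, the reading depends only on Q, n, and V, which is why the tightly controlled gap between the facing plates matters.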


Verslues P.E.,Academia Sinica, Taiwan | Juenger T.E.,University of Texas at Austin
Current Opinion in Plant Biology | Year: 2011

Drought elicits substantial changes in plant metabolism and it remains a challenge to determine which of these changes represent adaptive responses and which of them are merely neutral effects or even symptoms of damage. Arabidopsis primarily uses low water potential/dehydration avoidance strategies to respond to water limitation. The large variation in evolved stress responses among accessions can be a powerful tool to identify ecologically important and adaptive traits; however, collection of relevant phenotype data under controlled water stress is often a limiting factor. Quantitative genetics of Arabidopsis has great potential to find the genes underlying variation in drought-affected metabolic traits, for example proline metabolism, as well as overall adaptation. © 2011 Elsevier Ltd. Source


Chen Z.J.,University of Texas at Austin | Chen Z.J.,Nanjing Agricultural University
Nature Reviews Genetics | Year: 2013

Heterosis, also known as hybrid vigour, is widespread in plants and animals, but the molecular bases for this phenomenon remain elusive. Recent studies in hybrids and allopolyploids using transcriptomic, proteomic, metabolomic, epigenomic and systems biology approaches have provided new insights. Emerging genomic and epigenetic perspectives suggest that heterosis arises from allelic interactions between parental genomes, leading to altered programming of genes that promote the growth, stress tolerance and fitness of hybrids. For example, epigenetic modifications of key regulatory genes in hybrids and allopolyploids can alter complex regulatory networks of physiology and metabolism, thus modulating biomass and leading to heterosis. The conceptual advances could help to improve plant and animal productivity through the manipulation of heterosis. © 2013 Macmillan Publishers Limited. All rights reserved. Source


Kim J.W.,University of Texas at Austin
Computers in Human Behavior | Year: 2014

An online survey was used to examine college students' uses and gratifications of social recommendation systems such as "like" on Facebook. By presenting numerical values, these systems display the volume (i.e., how many users) and valence (i.e., like/dislike) of collective opinions within a specific e-community that users are involved with. Consequently, these systems offer the chance for users to scan the climate of opinions within a specific online community. More importantly, these systems have introduced a new mode of speaking out online. Because of social recommendation systems' significance in the public opinion domain, the study attempted to understand why and how people use them to express opinions. Results showed that social recommendations were used for expression, information, socialization, and entertainment purposes. Above all, users primarily viewed them as potential expressive tools and actually used them to express their opinions. While Internet use had no significant relationships with either motives or system uses, social media use was closely related to both motives and actual uses. The findings suggest that social recommendation systems play an important role in the public opinion domain by offering an additional way to scan collective opinions and express opinions. © 2014 Elsevier Ltd. All rights reserved. Source


Gilbert O.M.,University of Texas at Austin
American Naturalist | Year: 2015

Multicellular tissue compatibility, or histocompatibility, restricts fusion to close kin. Histocompatibility depends on hypervariable cue genes, which often have more than 100 alleles in a population. To explain the evolution of histocompatibility, I here take a historical approach. I focus on the specific example of marine invertebrate histocompatibility. I use simple game-theoretical models to show that histocompatibility can evolve through five steps. These steps include the evolution of indiscriminate fusion, the evolution of discriminatory within-organism conflict, the evolution of minor histocompatibility, the evolution of major histocompatibility, and the evolution of major histocompatibility cue polymorphism. Allowing for gradual evolution reveals discriminatory within-organism conflict as a selective pressure for histocompatibility and associated cue polymorphism. Existing data from marine invertebrates and other organisms are consistent with this hypothesis. © 2015 by The University of Chicago. Source


Cummings M.E.,University of Texas at Austin
Animal Behaviour | Year: 2015

Male phenotypes vary across mating systems, but what about the female mind? In poeciliid fishes, we observe that female response towards males varies at both inter- and intraspecific levels. By using the natural variation in poeciliid male reproductive phenotypes, we can probe female responses in different social contexts to isolate the neuromolecular components associated with mate choice. Female Xiphophorus nigrensis exhibit differential nonapeptide (isotocin, vasotocin) gene expression between social contexts that differ by social affiliation, while differential expression of synaptic plasticity genes (neuroserpin, neuroligin-3, NMDAR) is more strongly predictive of social contexts that differ by mate preference discrimination. Gene expression and pharmacological manipulation with X. nigrensis females suggest that the social cognition task of mate choice involves neuromolecular processes linked to learning at the cellular (synaptic plasticity genes) and regional (amygdala and hippocampus) levels. Comparative assays between two poeciliid species with different mating systems provide the first hint of a shared neuromolecular pathway underlying female mate preference response. Yet it is still unclear how specific components of this neuromolecular pathway (nonapeptides and synaptic plasticity processes) interact to coordinate behavioural tasks associated with mate choice encounters (vigilance, affiliation, evaluation and discrimination). Nevertheless, this collection of studies, coupled with the natural diversity of poeciliid mating systems, paves the way for exciting new insights into the social cognition of mate choice. © 2015 The Association for the Study of Animal Behaviour. Source


De La Torre J.C.,University of Texas at Austin
Journal of Alzheimer's Disease | Year: 2014

The incidence of Alzheimer's disease (AD) is expected to quadruple worldwide by 2050. To limit this impending socio-medical calamity, a fulcrum change from how AD is presently managed is crucial. The present approach has neither averted the stress of AD on medical resources nor reduced the burden on already cost-strained government health care programs. Since substantial evidence indicates that sporadic AD is directly associated with vascular risk factors, a strategic plan is proposed to target this association and markedly reduce the onset of AD. This plan would establish in-house heart-brain clinics devoted to identifying, detecting, and preventing the progression of vascular risk factors that predispose to cognitive impairment and development of AD. The heart-brain clinics would be staffed with a multidisciplinary group of neurologists, psychologists, neuroradiologists, cardiovascular specialists, and technical personnel. Their goal would be to apply and interpret non-invasive, cost-effective multidiagnostic testing of heart and brain function in asymptomatic and symptomatic outpatients at risk of dementia. Multidiagnostic testing would permit better risk stratification, medical decision-making, and tailored intervention for patients at risk of dementia than the present monotherapeutic approach. Personalized intervention, moreover, should achieve better patient compliance and outcomes through periodic follow-up visits to the clinics, where the medical plan of action could be monitored and modified as needed. Multidisciplinary heart-brain clinics will be costly at first but should eventually become cost-effective while providing an invaluable medical service to an aging population and possibly extending years of full health lived in those at risk of dementia. © 2014 - IOS Press and the authors. All rights reserved. Source


Wheeler J.C.,University of Texas at Austin
Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences | Year: 2012

Astrophysical explosions result from the release of magnetic, gravitational or thermonuclear energy on dynamical time scales, typically the sound-crossing time for the system. These explosions include solar and stellar flares, eruptive phenomena in accretion discs, thermonuclear combustion on the surfaces of white dwarfs and neutron stars, violent magnetic reconnection in neutron stars, thermonuclear and gravitational-collapse supernovae, and cosmic gamma-ray bursts, each representing a different type and amount of energy release. This paper summarizes the properties of these explosions and describes new research on thermonuclear explosions and explosions in extended circumstellar media. Parallels are drawn between studies of terrestrial and astrophysical explosions, especially the physics of the deflagration-to-detonation transition. © 2012 The Royal Society. Source


Daigle H.,University of Texas at Austin
Marine Geology | Year: 2014

I characterized microporosity by performing low-pressure nitrogen adsorption measurements on 13 shallow marine mudstone samples from the Nankai Trough offshore Japan. The samples were from two reference Sites on the incoming Philippine Sea Plate and one Site above the accretionary prism. I determined pore size distributions using the Barrett-Joyner-Halenda (BJH) model and merged these with existing mercury injection capillary pressure (MICP) measurements to construct a full distribution covering micro- to macropores. I found that overall pore sizes decrease with consolidation, and that microporosity content (pores < 2 nm in diameter) is influenced mainly by mineralogy, with some influence of diagenetic processes. A small amount of microporosity (~0.25% of bulk sediment volume) is present in these sediments at the time of burial, presumably contained mainly in clays. Additional microporosity may develop as a result of alteration of volcanic ash at the reference Sites, and may be related to diagenetic processes that create zones of anomalously high porosity. Comparisons with porewater chemistry (K+, Ca2+, Sr, Si) show inconsistent relationships with microporosity development and can neither confirm nor rule out the role of ash alteration in this process. The strongest correlation observed at the three Sites was between microporosity volume and clay mineral fraction. This suggests that microporosity content is determined mainly by detrital clay abundance and development of clay as an ash alteration product, with some contribution from amorphous silica cement precipitated in the zones of anomalously high porosity. © 2014 Elsevier B.V. Source
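The BJH construction behind these pore size distributions starts from the Kelvin equation, which maps the relative pressure at which nitrogen condenses to a pore core radius. A minimal sketch of that step (the constants are typical literature values for nitrogen at 77 K, not values taken from the paper, and the full BJH scheme additionally corrects for adsorbed film thickness):

```python
import math

def kelvin_radius(p_rel, gamma=8.85e-3, v_m=34.7e-6, temp=77.0, r_gas=8.314):
    """Kelvin core radius (m) at relative pressure p_rel = p/p0.

    gamma: surface tension of liquid N2 (N/m)
    v_m:   molar volume of liquid N2 (m^3/mol)
    temp:  adsorption temperature (K)
    For p_rel < 1, log(p_rel) < 0, so the radius comes out positive.
    """
    return -2.0 * gamma * v_m / (r_gas * temp * math.log(p_rel))
```

For p/p0 = 0.5 this gives a core radius of roughly 1.4 nm, i.e. right at the micro/mesopore boundary discussed in the abstract; larger relative pressures probe larger pores.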


Dalziel I.W.D.,University of Texas at Austin
Geology | Year: 2014

The geologically abrupt appearance in the fossil record of almost all animal phyla is referred to as the Cambrian radiation or "explosion" of life on Earth. Also known as "Darwin's dilemma," because it seemingly posed a major problem for his theory of gradual evolution, it coincided with the initiation of the first of the two principal global marine transgressions of the Phanerozoic. Although now seen as more protracted, it is still one of the most striking and critical events in the history of the biosphere. Almost all paleogeographic reconstructions for the early Cambrian feature a previously isolated Laurentia, the core of ancestral North America. Yet geological evidence from five continents, integrated here for the first time, indicates that the present-day "southern cone" of Laurentia was still attached to the newly amalgamated supercontinent of Gondwanaland into Cambrian times. Laurentia was then isolated by the development of a major deep oceanic connection between the opening Iapetus Ocean basin and the already well-developed paleo-Pacific. As the marine transgression advanced, major changes in ocean chemistry occurred, upwelling generated phosphorite deposits, and the number of fossilized metazoan phyla "exploded" with morphologic disparity between Laurentia and Gondwanaland already established. The development of this deep oceanic gateway, and of an ocean floor-consuming and arc-generating subduction zone along virtually the entire margin of Gondwanaland shortly thereafter, need to be taken into account in consideration of the global environmental and biotic changes associated with the Neoproterozoic-Phanerozoic transition. © 2014 Geological Society of America. Source


Willets K.A.,University of Texas at Austin
ChemPhysChem | Year: 2013

Surface-enhanced Raman scattering (SERS) is quickly growing as an analytical technique, because it offers both molecular specificity and excellent sensitivity. For select substrates, SERS can even be observed from single molecules, which is the ultimate limit of detection. This review describes recent developments in the field of single-molecule SERS (SM-SERS) with a focus on new tools for characterizing SM-SERS-active substrates and how they interact with single molecules on their surface. In particular, techniques that combine optical spectroscopy and microscopy with electron microscopy are described, including correlated optical and transmission electron microscopy, correlated super-resolution imaging and scanning electron microscopy, and correlated optical microscopy and electron energy loss spectroscopy. Hot stuff! Correlated optical and electron microscopy allow multiple length scales to be resolved for studying hot spots in single-molecule surface-enhanced Raman scattering (SERS). This review describes techniques that improve the understanding of SERS hot spots by using both electromagnetic and electronic excitation of plasmon modes in metallic nanostructures. How both the substrate and the molecule define the hot spot is also considered. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim. Source


Rossky P.J.,University of Texas at Austin
Faraday Discussions | Year: 2010

In this lecture, aspects of the hydration of hydrophobic interfaces that are emergent nanoscale properties of the interface chemical structure are discussed. General results inferred from systematic computational studies are emphasized, with a central theme focusing on the separate roles of surface topography and surface chemistry. The roles of surface curvature, polarity, and chemical heterogeneity, as well as the important role of solvent thermodynamic state are considered. The potential importance of understanding evolved natural biological interfaces on the same basis as model synthetic surfaces is pointed out, and progress in this direction is discussed. © 2010 The Royal Society of Chemistry. Source


Wick T.,University of Texas at Austin
Computational Mechanics | Year: 2013

We present a specific application of the fluid-solid interface-tracking/interface-capturing technique (FSITICT) for solving fluid-structure interaction. Specifically, in the FSITICT we choose the arbitrary Lagrangian-Eulerian method as the interface-tracking technique and the fully Eulerian approach as the interface-capturing technique, leading to the Eulerian-arbitrary Lagrangian-Eulerian (EALE) technique. Using this approach, the domain is partitioned into two sub-domains in which the different methods are used for the numerical solution. The discretization is based on a monolithic solver in which finite differences are used for temporal integration and a Galerkin finite element method for spatial discretization. The nonlinear problem is treated with Newton's method. The method combines advantages of both sub-frameworks, which is demonstrated with the help of some benchmarks. © Springer-Verlag Berlin Heidelberg 2013. Source
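"Monolithic" here means the coupled fluid and structure unknowns are assembled into a single nonlinear residual F(U) = 0 that Newton's method linearizes each time step. A generic sketch of that outer iteration, with a finite-difference Jacobian standing in for the exact linearization used in the paper:

```python
def solve(a, b):
    """Solve the dense linear system a x = b by Gaussian elimination with pivoting."""
    n = len(b)
    m = [row[:] + [bi] for row, bi in zip(a, b)]  # augmented matrix
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(m[r][c]))  # partial pivot
        m[c], m[p] = m[p], m[c]
        for r in range(c + 1, n):
            f = m[r][c] / m[c][c]
            for k in range(c, n + 1):
                m[r][k] -= f * m[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):  # back-substitution
        x[r] = (m[r][n] - sum(m[r][k] * x[k] for k in range(r + 1, n))) / m[r][r]
    return x

def newton(res, x0, tol=1e-10, max_iter=50, h=1e-7):
    """Newton iteration on a nonlinear residual res(x) -> list of floats."""
    x = list(x0)
    for _ in range(max_iter):
        f = res(x)
        if max(abs(v) for v in f) < tol:
            break
        jac = [[0.0] * len(x) for _ in x]  # finite-difference Jacobian
        for j in range(len(x)):
            xp = list(x)
            xp[j] += h
            fp = res(xp)
            for i in range(len(x)):
                jac[i][j] = (fp[i] - f[i]) / h
        dx = solve(jac, [-v for v in f])  # Newton update: J dx = -F
        x = [xi + di for xi, di in zip(x, dx)]
    return x
```

Applied to a toy residual such as F(x, y) = (x² + y² − 1, x − y), the iteration converges in a handful of steps; in the EALE setting the residual instead collects the discretized fluid, solid, and coupling equations.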


Preston A.R.,University of Texas at Austin | Eichenbaum H.,Boston University
Current Biology | Year: 2013

Recent studies on the hippocampus and the prefrontal cortex have considerably advanced our understanding of the distinct roles of these brain areas in the encoding and retrieval of memories, and of how they interact in the prolonged process by which new memories are consolidated into our permanent storehouse of knowledge. These studies have led to a new model of how the hippocampus forms and replays memories and how the prefrontal cortex engages representations of the meaningful contexts in which related memories occur, as well as how these areas interact during memory retrieval. Furthermore, they have provided new insights into how interactions between the hippocampus and prefrontal cortex support the assimilation of new memories into pre-existing networks of knowledge, called schemas, and how schemas are modified in this process as the foundation of memory consolidation. © 2013 Elsevier Ltd. Source


Xiao D.,Oak Ridge National Laboratory | Chang M.-C.,National Taiwan Normal University | Niu Q.,University of Texas at Austin
Reviews of Modern Physics | Year: 2010

Ever since its discovery, the notion of the Berry phase has permeated all branches of physics. Over the past three decades it was gradually realized that the Berry phase of the electronic wave function can have a profound effect on material properties and is responsible for a spectrum of phenomena, such as polarization, orbital magnetism, various (quantum, anomalous, or spin) Hall effects, and quantum charge pumping. This progress is summarized in a pedagogical manner in this review. A brief summary of the necessary background is given, followed by a detailed discussion of the Berry phase effect in a variety of solid-state applications. A common thread of the review is the semiclassical formulation of electron dynamics, which is a versatile tool in the study of electron dynamics in the presence of electromagnetic fields and more general perturbations. Finally, a requantization method is demonstrated that converts a semiclassical theory to an effective quantum theory. It is clear that the Berry phase should be added as an essential ingredient to our understanding of basic material properties. © 2010 The American Physical Society. Source
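The semiclassical formulation referred to here couples the band velocity to an anomalous term driven by the Berry curvature Ω_n(k) of band n; in standard notation the equations of motion read:

```latex
\dot{\mathbf{x}} = \frac{1}{\hbar}\,\frac{\partial \varepsilon_n(\mathbf{k})}{\partial \mathbf{k}}
  \;-\; \dot{\mathbf{k}} \times \boldsymbol{\Omega}_n(\mathbf{k}),
\qquad
\hbar\,\dot{\mathbf{k}} = -e\,\mathbf{E} \;-\; e\,\dot{\mathbf{x}} \times \mathbf{B}.
```

The second term in the velocity, the anomalous velocity, is what underlies the anomalous and spin Hall responses mentioned above; it vanishes when the Berry curvature is zero, recovering ordinary Bloch dynamics.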


Keatinge-Clay A.T.,University of Texas at Austin
Natural Product Reports | Year: 2012

Covering: up to 2012 With the recent structural characterization of each of the component enzymes of type I polyketide synthases, scientists are coming tantalizingly close to elucidating the overall architectures and mechanisms of these enormous molecular factories. This review highlights not only what has been revealed about the structures and activities of each of the domains but also the mysteries that remain to be solved. This journal is © The Royal Society of Chemistry 2012. Source


Narevicius E.,Weizmann Institute of Science | Raizen M.G.,University of Texas at Austin
Chemical Reviews | Year: 2012

Researchers conducted studies to explore the potential of cold chemistry with magnetically decelerated supersonic beams. The researchers demonstrated that magnetic deceleration offered many possibilities in reactive scattering at low (~1 K) temperatures. Atoms such as hydrogen, nitrogen, oxygen, and fluorine were decelerated through magnetic deceleration, allowing investigations of canonical atom-diatomic molecule reactions. These processes were also tracked using quantum mechanical calculations that were necessary to correctly describe dynamics at low temperatures. Reaction complexity was gradually increased by moving to a larger number of involved atoms. An effective moving magnetic trap potential was created using a series of spatially overlapping traps and activating them in a temporally overlapping pulse sequence to conduct the investigations. Source


Pudrovska T.,University of Texas at Austin
Journal of Aging and Health | Year: 2015

Objective: Internal health locus of control (HLOC) reflects individuals' beliefs that their own behavior influences their health. This study explores the gender difference in internal HLOC among middle-aged and older adults. Method: Using data from two waves of the National Survey of Midlife Development in the United States (MIDUS; N = 1,748), I estimate two-level random-intercept models predicting internal HLOC. Results: Women report higher levels of health control beliefs than men, especially in older cohorts born in the 1920s and 1930s. Adjustment for health, socioeconomic status, generalized control, and masculinity increases this gender gap, whereas adjustment for femininity and religiosity significantly reduces this difference. Women's higher religiosity and more feminine traits, such as warmth, nurturance, and care, partly explain their higher internal HLOC relative to men. Discussion: Because femininity and religiosity are positively associated with other-orientation, interventions to increase communal orientation may enhance beliefs in proactive responsibility for one's health among older adults. © The Author(s) 2014. Source


Moran N.A.,University of Texas at Austin
Current Opinion in Insect Science | Year: 2015

The guts of honey bee workers contain a distinctive community of bacterial species. These bacteria are microaerophilic or anaerobic and were not clearly delineated by earlier studies relying on laboratory culture of isolates under atmospheric oxygen levels. Recently, a more complete picture of the potential metabolism and functions of these bacteria has become possible, using genomic approaches based on metagenomic samples as well as cultured isolates. Most of these species are host-restricted and are generally absent outside adult guts. They include both Gram-negative groups, such as Gilliamella apicola and Snodgrassella alvi, and Gram-positive groups, such as certain Lactobacillus and Bifidobacterium species. These gut bacterial species appear to have undergone long-term coevolution with honey bee and, in some cases, bumble bee hosts. Prediction of gene functions from genome sequences suggests roles in nutrition, digestion, and potentially in defense against pathogens. In particular, genes for sugar utilization and carbohydrate breakdown are enriched in G. apicola and the Lactobacillus species. Source


Kanapathipillai M.,Wyss Institute for Biologically Inspired Engineering | Brock A.,University of Texas at Austin | Ingber D.E.,Wyss Institute for Biologically Inspired Engineering | Ingber D.E.,Harvard University
Advanced Drug Delivery Reviews | Year: 2014

Nanoparticle-based therapeutics are poised to become a leading delivery strategy for cancer treatment because they potentially offer higher selectivity, reduced toxicity, longer clearance times, and increased efficacy compared to conventional systemic therapeutic approaches. This article reviews existing nanoparticle technologies and methods that are used to target drugs to treat cancer by altering signal transduction or modulating the tumor microenvironment. We also consider the implications of recent advances in the nanotherapeutics field for the future of cancer therapy. © 2014 Elsevier B.V. Source


Burge J.,University of Texas at Austin
Journal of Vision | Year: 2014

A great challenge of systems neuroscience is to understand the computations that underlie perceptual constancies, the ability to represent behaviorally relevant stimulus properties as constant even when irrelevant stimulus properties vary. As signals proceed through the visual system, neural states become more selective for properties of the environment, and more invariant to irrelevant features of the retinal images. Here, we describe a method for determining the computations that perform these transformations optimally, and apply it to the specific computational task of estimating a powerful depth cue: binocular disparity. We simultaneously determine the optimal receptive field population for encoding natural stereo images of locally planar surfaces and the optimal nonlinear units for decoding the population responses into estimates of disparity. The optimal processing predicts well-established properties of neurons in cortex. Estimation performance parallels important aspects of human performance. Thus, by analyzing the photoreceptor responses to natural images, we provide a normative account of the neurophysiology and psychophysics of absolute disparity processing. Critically, the optimal processing rules are not arbitrarily chosen to match the properties of neurophysiological processing, nor are they fit to match behavioral performance. Rather, they are dictated by the task-relevant statistical properties of complex natural stimuli. Our approach reveals how selective invariant tuning-especially for properties not trivially available in the retinal images-could be implemented in neural systems to maximize performance in particular tasks. Source


Ruoff R.S.,University of Texas at Austin
MRS Bulletin | Year: 2012

In this article, I describe my early interest in graphene and contributions that I and my co-authors, in particular, have made to the field, along with a brief history of the experimental discovery of graphene. I then turn to new carbon materials whose experimental syntheses might be on the horizon. One example involves using graphene as a template to generate large-area ultrathin sp3-bonded carbon sheets that could also be substitutionally doped with, for example, nitrogen atoms, as one approach to making materials of interest for quantum computing. Such large-area sp3-bonded carbon sheets hold tremendous promise for use in thermal management; as a new material for electronics and photonics; and as ultrahigh-strength components in various structures including those used in aerospace, among other applications. Another example is the class of negative-curvature carbons (NCCs) that have atom-thick walls and carbon atoms trivalently bonded to other carbon atoms. Such NCCs have a nanoscale pore structure, atom-thick walls, and exceptionally high specific surface areas, and they fill three-dimensional space in ways that suggest their use as electrode materials for ultracapacitors and batteries, as adsorbents, as support material for catalysts, and for other applications. © Copyright 2012 Materials Research Society. Source


Willets K.A.,University of Texas at Austin
Progress in Surface Science | Year: 2012

Noble metal nanoparticles have attracted significant research interest due to their ability to support localized surface plasmons. Plasmons not only give the nanoparticles a characteristic color, but they also enhance electromagnetic fields at the nanoparticle surface, often by many orders of magnitude. The enhanced electromagnetic fields are the basis for a host of surface-enhanced spectroscopies, such as surface-enhanced Raman scattering (SERS), but characterizing how the enhanced electromagnetic fields are distributed on the surface of the nanoparticles is an experimental challenge due to the small size of the nanoparticles (∼20-200 nm) relative to the diffraction limit of light. This Progress Highlight will discuss methods for characterizing local electromagnetic field enhancements with < 5 nm resolution, including electron energy loss spectroscopy, cathodoluminescence, and super-resolution optical imaging. © 2012 Elsevier Ltd. All rights reserved. Source


Sugiyama N.S.,Tohoku University | Komatsu E.,University of Texas at Austin | Futamase T.,Tohoku University
Physical Review Letters | Year: 2011

While detection of the "local form" bispectrum of primordial perturbations would rule out all single-field inflation models, multifield models would still be allowed. We show that multifield models described by the δN formalism obey an inequality between fNL and one of the local-form trispectrum amplitudes, τNL, such that τNL > (1/2)(6fNL/5)^2 with a possible logarithmic scale dependence, provided that 2-loop terms are small. Detection of a violation of this inequality would rule out most multifield models, challenging inflation as a mechanism for generating the primordial perturbations. © 2011 American Physical Society. Source
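In display form, the multifield bound discussed here reads:

```latex
\tau_{\mathrm{NL}} \;>\; \frac{1}{2}\left(\frac{6}{5}\,f_{\mathrm{NL}}\right)^{2}
```

so a measurement placing τNL below this curve, for a given detected fNL, would be the observational violation the abstract refers to.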


Pavel M.D.,University of Texas at Austin
Astronomical Journal | Year: 2014

A new method for measuring the large-scale structure of the Galactic magnetic field is presented. The Galactic magnetic field has been probed through the Galactic disk with near-infrared starlight polarimetry; however, the distance to each background star is unknown. Using red clump stars as near-infrared standard candles, this work presents the first attempt to decompose the line-of-sight structure of the sky-projected Galactic magnetic field. Two example lines of sight are decomposed: toward a field with many red clump stars and toward a field with few red clump stars. A continuous estimate of magnetic field orientation over several kiloparsecs of distance is possible in the field with many red clump stars, while only discrete estimates are possible in the sparse example. Toward the outer Galaxy, there is a continuous field orientation with distance that shows evidence of perturbation by the Galactic warp. Toward the inner Galaxy, evidence for a large-scale change in the magnetic field geometry is consistent with models of magnetic field reversals, independently derived from Faraday rotation studies. A photo-polarimetric method for identifying candidate intrinsically polarized stars is also presented. The future application of this method to large regions of the sky will begin the process of mapping the Galactic magnetic field in a way never before possible. © 2014. The American Astronomical Society. All rights reserved. Source
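The standard-candle step behind the line-of-sight decomposition is the distance modulus applied with an assumed absolute magnitude for the red clump. A sketch (the K-band value M_K ≈ −1.61 is a common literature choice, not necessarily the calibration used in the paper):

```python
M_K_RED_CLUMP = -1.61  # assumed absolute K-band magnitude of the red clump

def red_clump_distance_pc(m_k, a_k=0.0):
    """Distance in parsecs from apparent K magnitude m_k and extinction a_k,
    via the distance modulus m - M = 5 log10(d / 10 pc)."""
    return 10.0 ** ((m_k - a_k - M_K_RED_CLUMP + 5.0) / 5.0)
```

Under this assumed calibration, a dereddened red clump star observed at m_K = 13.39 lands at 10 kpc; ranking the red clump stars along a sightline by these distances is what lets the field orientation be read off as a function of depth.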


Wissler E.H.,University of Texas at Austin
Journal of Heat Transfer | Year: 2012

Analysis of heat transfer in the human body involves different scales, ranging from countercurrent heat transfer between small arteries and veins to the whole body, which is the principal subject of this paper. Important applications of whole-body human thermal models are predicting comfort under various conditions, such as riding in an air-conditioned automobile on a hot day, and predicting the probability of survival under life-threatening conditions, such as accidental immersion in cold water. This paper is arranged in three parts. In the first part, the evolution of human thermal models is discussed. Then, aspects of human physiology fundamental to thermoregulation are discussed, and finally we discuss a practical application derived from an Arctic survival project with which the author is involved. © 2012 American Society of Mechanical Engineers. Source


Ghosh J.,University of Texas at Austin
Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery | Year: 2011

Cluster ensembles combine multiple clusterings of a set of objects into a single consolidated clustering, often referred to as the consensus solution. Consensus clustering can be used to generate more robust and stable clustering results compared to a single clustering approach, to perform distributed computing under privacy or sharing constraints, or to reuse existing knowledge. This paper describes a variety of algorithms that have been proposed to address the cluster ensemble problem, organizing them into conceptual categories that bring out the common threads and lessons learnt while simultaneously highlighting unique features of individual approaches. © 2011 John Wiley & Sons, Inc. Source
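One simple family of consensus methods is evidence accumulation: build a co-association matrix recording how often each pair of objects is co-clustered across the ensemble, then cluster that matrix. A minimal sketch (single-link grouping via union-find; illustrative of the idea, not any specific published algorithm):

```python
def coassociation(labelings):
    """Fraction of base clusterings that place each pair of objects together."""
    n = len(labelings[0])
    m = [[0.0] * n for _ in range(n)]
    for labels in labelings:
        for i in range(n):
            for j in range(n):
                if labels[i] == labels[j]:
                    m[i][j] += 1.0 / len(labelings)
    return m

def consensus_labels(labelings, threshold=0.5):
    """Link pairs co-clustered in more than `threshold` of the base
    clusterings, then read off connected components as consensus clusters."""
    n = len(labelings[0])
    m = coassociation(labelings)
    parent = list(range(n))  # union-find forest
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if m[i][j] > threshold:
                parent[find(i)] = find(j)
    seen = {}
    return [seen.setdefault(find(i), len(seen)) for i in range(n)]
```

For example, three base clusterings of four objects, [[0, 0, 1, 1], [1, 1, 0, 0], [0, 0, 0, 1]], yield the consensus [0, 0, 1, 1]: the label permutation in the second clustering is irrelevant because only co-membership counts, which is the key appeal of this representation.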


Crews D.,University of Texas at Austin
Hormones and Behavior | Year: 2011

Evolutionary change is a product of selection. Selection operates on the phenotype, and its consequences are manifest in representation of the genotype in successive generations. Of particular interest to both evolutionary and behavioral biologists is the newly emerging field of epigenetics and behavior. Two broad categories of epigenetic modifications must be distinguished. Context-dependent epigenetic change can be observed if the environmental factors that bring about the epigenetic modification persist (e.g., the frequency and quality of maternal care modifying the brain and future behavior of the offspring each generation). Because the environment induces epiallelic change, removing the causative factor can reverse a context-dependent epigenetic state. Germline-dependent epigenetic change occurs when the epigenetic imprint is mediated through the germline. Such effects are independent of the causative agent, and there is no evidence at present that a germline-dependent epigenetic state can be reversed. Finally, only germline-dependent epigenetic modifications can be truly transgenerational. Although an individual's life history is progressive and continuous, it might usefully be viewed as the cumulation of divisions: each period emerging from what has gone before and, at the same time, setting the stage for what follows. These life history stages are somewhat arbitrary, with many traits spanning conventional divisions, but each period tends to have its own characteristic ethologies and particular contribution to neural and behavioral phenotypes. To understand how these episodes 'fit' together, it is necessary to deconstruct early life events and study each period both in its own right and in how it interacts with the preceding and subsequent stages. 
Lastly, it seems intuitive that germline- and context-dependent epigenetic modifications interact, resulting in the individual variation observed in behaviors, but until now this hypothesis has never been tested experimentally. © 2010 Elsevier Inc. Source


Peres-Neto P.R.,University of Quebec at Montreal | Leibold M.A.,University of Texas at Austin | Dray S.,CNRS Biometry and Evolutionary Biology Laboratory
Ecology | Year: 2012

Patterns in biodiversity and species coexistence are the result of multiple interacting processes including evolutionary history, trait variation, species interactions, dispersal, environmental variation, and landscape heterogeneity. Exploring patterns of biodiversity across space is perhaps the best integrative method (in contrast to the scarcity of temporal data) to interpret the influence of these multiple and interactive effects in determining community assembly, but it is still underdeveloped. Two emerging fields, metacommunity ecology and community phylogenetics, have been making relevant, though rather independent, progress toward understanding how communities are assembled in space. Our main goals were twofold. First, we described a heuristic framework to merge these two fields into "metacommunity phylogenetics." The main goal of this framework is to provide a way to think about how niche properties of species arranged across the environment and different spatial scales influence the process of community assembly. Second, we developed an analytical framework to link niche properties based on trait and phylogenetics to environmental and spatial variation. In order to assess the performance of the framework, we used extensive computer simulations of community assembly to show that the procedure is robust under a variety of scenarios. © 2012 by the Ecological Society of America. Source


Bard A.J.,University of Texas at Austin
Journal of the American Chemical Society | Year: 2010

This paper offers a perspective on inner-sphere heterogeneous electron-transfer reactions and electrocatalysis as it is applied to electrochemical energy conversion systems. Fundamental concepts and an overview of past approaches to studies of these types of reactions are discussed. A method for the discovery of new electrocatalysts (for example, ones for the oxygen reduction reaction) and photocatalysts (for solar energy conversion to fuels) based on scanning electrochemical microscopy is briefly described, as well as new surface interrogation techniques for quantifying intermediates. © 2010 American Chemical Society. Source


Fuiman L.A.,University of Texas at Austin
Biology letters | Year: 2013

Fatty acid composition of eggs affects development, growth and ecological performance of fish embryos and larvae, with potential consequences for recruitment success. Essential fatty acids in eggs derive from the maternal diet, and the time between ingestion and deposition in eggs is ecologically important but unknown. We examined the dynamics of diet-egg transfer of arachidonic acid (ARA) in the batch-spawning fish, red drum (Sciaenops ocellatus), by measuring ARA concentrations in eggs after a single diet shift and during a period of irregular variations in diet. ARA concentrations in eggs changed within 2-16 days of a diet shift. The rate of change was proportional to the magnitude of the shift, with no evidence of equilibration. These results are not consistent with eggs being assembled entirely from accumulated body stores. The immediate source of ARA in eggs appears to be the recent diet. We propose that batch spawning produces rapid diet-egg transfer of ARA because it removes large amounts of fatty acids from the body and prevents equilibration. The immediacy of the diet-egg connection suggests that spawning migration combined with short-interval batch spawning may have evolved to take advantage of nutrients critical for offspring survival that are available at the spawning site. Source


Leger N.,University of Texas at Austin
Archive for Rational Mechanics and Analysis | Year: 2011

We consider scalar nonviscous conservation laws with strictly convex flux in one spatial dimension, and we investigate the behavior of bounded L2 perturbations of shock wave solutions to the Riemann problem using the relative entropy method. We show that up to a time-dependent translation of the shock, the L2 norm of a perturbed solution relative to the shock wave is bounded above by the L2 norm of the initial perturbation. © 2010 Springer-Verlag. Source
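The stability result summarized above can be written schematically as follows. Notation is assumed for the illustration: u is the perturbed entropy solution, S the shock-wave profile, and X(t) the time-dependent translation of the shock.

```latex
\left\| u(t,\, \cdot + X(t)) - S \right\|_{L^2(\mathbb{R})}
\;\le\;
\left\| u(0,\, \cdot\,) - S \right\|_{L^2(\mathbb{R})},
\qquad t \ge 0.
```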


Caffarelli L.,University of Texas at Austin | Silvestre L.,University of Chicago
Archive for Rational Mechanics and Analysis | Year: 2011

We obtain C1,α regularity estimates for nonlocal elliptic equations that are not necessarily translation-invariant using compactness and perturbative methods and our previous regularity results for the translation-invariant case. © 2010 Springer-Verlag. Source


Ashby F.G.,University of California at Santa Barbara | Maddox W.T.,University of Texas at Austin
Annals of the New York Academy of Sciences | Year: 2011

During the 1990s and early 2000s, cognitive neuroscience investigations of human category learning focused on the primary goal of showing that humans have multiple category-learning systems and on the secondary goals of identifying key qualitative properties of each system and of roughly mapping out the neural networks that mediate each system. Many researchers now accept the strength of the evidence supporting multiple systems, and as a result, during the past few years, work has begun on the second generation of research questions, that is, questions that begin with the assumption that humans have multiple category-learning systems. This article reviews much of this second generation of research. Topics covered include (1) How do the various systems interact? (2) Are there different neural systems for categorization and category representation? (3) How does automaticity develop in each system? and (4) Exactly how does each system learn? © 2010 New York Academy of Sciences. Source


Hall A.K.,University of Washington | Cole-Lewis H.,Columbia University | Cole-Lewis H.,ICF International | Bernhardt J.M.,University of Texas at Austin
Annual Review of Public Health | Year: 2015

The aim of this systematic review of reviews is to identify mobile text-messaging interventions designed for health improvement and behavior change and to derive recommendations for practice. We have compiled and reviewed existing systematic research reviews and meta-analyses to organize and summarize the text-messaging intervention evidence base, identify best-practice recommendations based on findings from multiple reviews, and explore implications for future research. Our review found that the majority of published text-messaging interventions were effective when addressing diabetes self-management, weight loss, physical activity, smoking cessation, and medication adherence for antiretroviral therapy. However, we found limited evidence across the population of studies and reviews to inform recommended intervention characteristics. Although strong evidence supports the value of integrating text-messaging interventions into public health practice, additional research is needed to establish longer-term intervention effects, identify recommended intervention characteristics, and explore issues of cost-effectiveness. Copyright © 2015 by Annual Reviews. All rights reserved. Source


Iyer V.R.,University of Texas at Austin
Trends in Cell Biology | Year: 2012

Nucleosomes are an essential component of eukaryotic chromosomes. The impact of nucleosomes is seen not just on processes that directly access the genome, such as transcription, but also on an evolutionary timescale. Recent studies in various organisms have provided high-resolution maps of nucleosomes throughout the genome. Computational analysis, in conjunction with many other kinds of data, has shed light on several aspects of nucleosome biology. Nucleosomes are positioned by several means, including intrinsic sequence biases, by stacking against a fixed barrier, by DNA-binding proteins and by chromatin remodelers. These studies underscore the important organizational role of nucleosomes in all eukaryotic genomes. This paper reviews recent genomic studies that have shed light on the determinants of nucleosome positioning and their impact on the genome. © 2012 Elsevier Ltd. Source


Shi L.,University of Texas at Austin
Nanoscale and Microscale Thermophysical Engineering | Year: 2012

Significant progress has been made in recent studies of thermal and thermoelectric transport phenomena in nanostructures and low-dimensional systems. This article reviews several intriguing quantum and classical size effects on thermal and thermoelectric properties that have been predicted by theoretical calculations or observed in experiments. Attention is focused on the Casimir limit in phonon boundary scattering and the effect of phonon confinement on the lattice thermal conductivity of semiconductor nanowires (NWs) and nanomeshes; the effects of thickness, lateral size, and interface interaction on the lattice thermal conductivity of carbon nanotubes (CNTs) and graphene; and the phonon-drag thermopower and quantum size effects on the thermoelectric power factor in semiconductor NWs. Further experimental and theoretical investigations are suggested for better understanding of some of these nanoscale transport phenomena. © Taylor & Francis Group, LLC. Source


Barrick J.E.,University of Texas at Austin | Barrick J.E.,Michigan State University | Lenski R.E.,Michigan State University
Nature Reviews Genetics | Year: 2013

Evolutionary changes in organismal traits may occur either gradually or suddenly. However, until recently, there has been little direct information about how phenotypic changes are related to the rate and the nature of the underlying genotypic changes. Technological advances that facilitate whole-genome and whole-population sequencing, coupled with experiments that 'watch' evolution in action, have brought new precision to and insights into studies of mutation rates and genome evolution. In this Review, we discuss the evolutionary forces and ecological processes that govern genome dynamics in various laboratory systems in the context of relevant population genetic theory, and we relate these findings to evolution in natural populations. Source


Fitzpatrick R.,University of Texas at Austin
Nuclear Fusion | Year: 2011

A semi-analytic sharp-boundary model of a nonaxisymmetric vertical displacement event (VDE) in a large aspect-ratio, high-beta (i.e. β ∼ ε), vertically elongated tokamak plasma is developed. The model is used to simulate nonaxisymmetric VDEs with a wide range of different plasma equilibrium and vacuum vessel parameters. These simulations yield poloidal halo current fractions and toroidal peaking factors whose magnitudes are similar to those seen in experiments, and also reproduce the characteristic inverse scaling between the halo current fraction and the toroidal peaking factor. Moreover, the peak poloidal halo current density in the vacuum vessel is found to correlate strongly with the reciprocal of the minimum edge safety factor attained during the VDE. In addition, under certain circumstances, the ratio of the net sideways force acting on the vacuum vessel to the net vertical force is observed to approach unity. Finally, the peak vertical force per unit area acting on the vessel is found to have a strong correlation with the equilibrium toroidal plasma current at the start of the VDE, but is also found to increase with increasing vacuum vessel resistivity relative to the scrape-off layer plasma. © 2011 IAEA, Vienna. Source


Meyer A.G.,University of Texas at Austin
Philosophical transactions of the Royal Society of London. Series B, Biological sciences | Year: 2013

We investigate the causes of site-specific evolutionary-rate variation in influenza haemagglutinin (HA) between human and avian influenza, for subtypes H1, H3, and H5. By calculating the evolutionary-rate ratio, ω = dN/dS as a function of a residue's solvent accessibility in the three-dimensional protein structure, we show that solvent accessibility has a significant but relatively modest effect on site-specific rate variation. By comparing rates within HA subtypes among host species, we derive an upper limit to the amount of variation that can be explained by structural constraints of any kind. Protein structure explains only 20-40% of the variation in ω. Finally, by comparing ω at sites near the sialic-acid-binding region to ω at other sites, we show that ω near the sialic-acid-binding region is significantly elevated in both human and avian influenza, with the exception of avian H5. We conclude that protein structure, HA subtype, and host biology all impose distinct selection pressures on sites in influenza HA. Source
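The ratio ω = dN/dS compares the rate of nonsynonymous (amino-acid-changing) substitutions to the rate of synonymous ones. As a purely illustrative sketch, the raw counting step behind this quantity can be written as below; this is a toy count over aligned, gap-free coding sequences, not the maximum-likelihood ω estimator used in studies like this one.

```python
# Standard genetic code, codons enumerated in TCAG order.
BASES = "TCAG"
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON_TABLE = {
    b1 + b2 + b3: AA[16 * i + 4 * j + k]
    for i, b1 in enumerate(BASES)
    for j, b2 in enumerate(BASES)
    for k, b3 in enumerate(BASES)
}

def count_substitutions(seq1, seq2):
    """Count synonymous and nonsynonymous codon differences between two
    aligned coding sequences (equal length, a multiple of 3, no gaps)."""
    syn = nonsyn = 0
    for pos in range(0, len(seq1), 3):
        c1, c2 = seq1[pos:pos + 3], seq2[pos:pos + 3]
        if c1 != c2:
            if CODON_TABLE[c1] == CODON_TABLE[c2]:
                syn += 1     # same amino acid: synonymous difference
            else:
                nonsyn += 1  # amino acid changed: nonsynonymous difference
    return syn, nonsyn
```

For example, TTT versus TTC both encode phenylalanine (a synonymous difference), while TTT versus TTA changes phenylalanine to leucine (a nonsynonymous one). A full ω estimate would further normalize these counts by the expected numbers of synonymous and nonsynonymous sites.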


Sun A.,University of Texas at Austin
Environmental Modelling and Software | Year: 2013

Watershed management, in its very nature, represents a participatory process, requiring horizontal and vertical collaborations among multiple institutions and stakeholders. For watershed-scale management to be effective, a social-learning infrastructure needs to be in place to allow for the integration of diverse knowledge and interests related to watershed protection and restoration. Environmental decision support systems (EDSS) have long been used to support co-learning processes during watershed management. However, in-house implementation and maintenance of EDSS often pose a significant burden to local watershed partnerships because of budgetary and technological constraints. Recent advances in service-oriented computing can help shift the technical burden of EDSS implementation to service providers and enable watershed partnerships to focus primarily on decision-making activities. In this paper, I describe the migration of an EDSS module from the traditional client-server-based architecture to a client of cloud-computing services. Google Drive, which is behind the new version of the EDSS module, provides a number of basic visual analytics features that can be used to increase the collaborative decision-making experience while drastically reducing the cost of small-scale EDSS. More sophisticated EDSS may be implemented by leveraging the strengths of both client-server architectures and cloud-computing services. © 2012 Elsevier Ltd. Source


Harden K.P.,University of Texas at Austin
Perspectives on Psychological Science | Year: 2014

In this article, I propose a sex-positive framework for research on adolescent sexuality in which I consider consensual sexual activities in adolescence as developmentally normative and potentially healthy. The sex-positive framework is contrasted with the predominant “risk” perspective that presumes that abstinence from sexual activity is the ideal behavioral outcome for teenagers. Evidence from longitudinal and behavioral genetic studies indicates that engaging in sexual intercourse in adolescence does not typically cause worse psychological functioning. The relationship context of sexual experience may be a critical moderator of its psychological impact. Moreover, cross-cultural data on adolescents’ contraception usage, unintended pregnancy, and sexually transmitted infections suggest that, despite the unacceptably high rate of negative health consequences among U.S. teenagers, adolescents can have the developmental capacity to regulate the health risks inherent in sexual activity. Understanding adolescent sexuality can be fostered by considering sexual well-being, a multidimensional construct that incorporates an adolescent’s sexual self-efficacy, sexual self-esteem, feelings of sexual pleasure and satisfaction, and freedom from pain and negative affect regarding sexuality. New research is necessary to understand the development of adolescent sexual well-being, including its normative age trends, its reciprocal links with sexual behavior, and its impact on psychological and physical health. © The Author(s) 2014. Source


Goodenough J.B.,University of Texas at Austin
Journal of Solid State Electrochemistry | Year: 2012

The challenges for rechargeable batteries are co