Urbana, IL, United States

The University of Illinois at Urbana–Champaign is a public research-intensive university in the U.S. state of Illinois. A land-grant university, it is the flagship campus of the University of Illinois system. It is the second-oldest public university in the state and a founding member of the Big Ten Conference. It is a member of the Association of American Universities and is designated an RU/VH (very high research activity) Research University. The campus library system is the second-largest university library in the United States, after Harvard University's. The university comprises 17 colleges that offer more than 150 programs of study, and it operates an extension that serves 2.7 million registrants per year across the state of Illinois and beyond. The campus holds 647 buildings on 4,552 acres in the twin cities of Champaign and Urbana; its annual operating budget in 2011 was over $1.7 billion. (Wikipedia)


The Regents Of The University Of Colorado and University of Illinois at Urbana - Champaign | Date: 2016-05-18

The methods and apparatus of the present invention allow the evaluation of inflammation of the esophagus. Measurements may be utilized, for example, to diagnose a disease of the esophagus, to monitor inflammation of the esophagus, or to assess the treatment of a disease of the esophagus. In one embodiment, the invention comprises a method for measuring esophageal inflammation comprising deploying a device into the esophagus of a subject, removing the device after a predetermined period of time, analyzing the device for a diagnostic indicator of esophageal inflammation, and evaluating the diagnostic indicator to diagnose esophageal inflammation.

University of Illinois at Urbana - Champaign, Vanquish Oncology and Johns Hopkins University | Date: 2016-08-22

The invention provides compositions and methods for the induction of cell death, for example, cancer cell death. Combinations of compounds and related methods of use are disclosed, including the use of compounds in therapy for the treatment of cancer and selective induction of apoptosis in cells. The disclosed drug combinations can have lower neurotoxicity effects than other compounds and combinations of compounds.

H. Lee Moffitt Cancer Center, Research Institute and University of Illinois at Urbana - Champaign | Date: 2016-08-08

Disclosed are selective histone deacetylase inhibitors (HDACi) having Formula (I). Methods of making and using these inhibitors for the treatment of cancer, in particular melanoma, are also disclosed.

University of Illinois at Urbana - Champaign | Date: 2016-08-29

A chamber for a cell culture and a chamber holder system are disclosed. A representative chamber embodiment includes a first layer; and a second layer coupled to the first layer, the second layer further comprising a well having at least one side wall, the well extending through the second layer, wherein a predetermined portion of the first layer is substantially optically transmissive and is exposed in and forms a lower side of the well. The well may have a laminar flow shape, and may also include a plurality of recesses to accommodate the tips of inflow and outflow devices, such as for superfusion applications. The second layer may be comprised of a hydrophobic material or comprised of a material having a lower density than the density of culture medium to provide a buoyant chamber for sandwich cell cultures, along with submersible chambers.

University of Illinois at Urbana - Champaign | Date: 2015-05-15

Disclosed is a derivative of amphotericin B (AmB), denoted C2epiAmB, with an improved therapeutic index over amphotericin B, pharmaceutical compositions comprising the AmB derivative, methods of making the AmB derivative and the pharmaceutical composition, and their use in methods of inhibiting growth of a yeast or fungus and treating a yeast or fungal infection. C2epiAmB is an epimer of the parent compound. Specifically, C2epiAmB differs from the parent compound at the C2′ stereogenic center on mycosamine. This difference in structure results in (i) retained capacity to bind ergosterol and inhibit growth of yeast, (ii) greatly reduced capacity to bind cholesterol, and (iii) essentially no toxicity to human cells.

University of Illinois at Urbana - Champaign | Date: 2016-09-23

Autonomic cooling of a substrate is achieved using a porous thermal protective layer to provide evaporative cooling combined with capillary pumping. The porous thermal protective layer is manufactured onto the substrate. A vascular network is integrated between the substrate and the protective layer. Applied heat causes fluid contained in the protective layer to evaporate, removing heat. The fluid lost to evaporation is replaced by capillary pressure, pulling fluid from a fluid-containing reservoir through the vascular network. Cooling occurs as liquid evaporates from the protective layer.
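A first-order feasibility estimate for such a cooling layer follows from two textbook relations: the Young-Laplace capillary pressure that a pore can sustain, and the evaporating mass flux needed to absorb a given heat flux. The sketch below uses illustrative property values for water and an assumed 1-micron pore radius; the numbers are not drawn from the patent itself.

```python
import math

def capillary_pressure(gamma, theta_deg, r):
    """Young-Laplace pressure (Pa) sustained by a pore of radius r (m),
    for surface tension gamma (N/m) and contact angle theta (degrees)."""
    return 2.0 * gamma * math.cos(math.radians(theta_deg)) / r

def evaporation_flux(q, h_fg):
    """Evaporating mass flux (kg/m^2/s) required to carry away a
    heat flux q (W/m^2), given latent heat of vaporization h_fg (J/kg)."""
    return q / h_fg

# Illustrative values: water near room temperature, fully wetting pore
gamma, h_fg = 0.0728, 2.45e6                # N/m, J/kg
dp = capillary_pressure(gamma, 0.0, 1e-6)   # pressure for a 1 um pore
m_dot = evaporation_flux(5e4, h_fg)         # flux for 50 kW/m^2 heating
```

Smaller pores pump harder (the pressure scales as 1/r), which is the mechanism by which the porous layer replenishes evaporated fluid from the reservoir through the vascular network.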

University of Illinois at Urbana - Champaign, Rohm, Haas Electronic Materials LLC and Dow Chemical Company | Date: 2016-03-10

In one aspect, structures are provided that comprise (a) a one-dimensional periodic plurality of layers, wherein at least two of the layers have a refractive index differential sufficient to provide effective contrast; and (b) one or more light-emitting nanostructure materials effectively positioned with respect to the refractive index differential interface, wherein the structure provides a polarized output emission.

University of Illinois at Urbana - Champaign | Date: 2016-09-28

A system and method include nano opto-mechanical-fluidic resonators (nano-resonators) for the identification of particles, such as single viruses and/or cells.

University of Illinois at Urbana - Champaign | Date: 2016-09-28

A system and method include a resonator device to detect cells or other particles through light and/or vibration sensing.

University of Illinois at Urbana - Champaign | Date: 2016-11-11

A magnetically driven micropump for handling small fluid volumes. The micropump includes a first chamber and a second chamber, with a flexible membrane disposed between them. The membrane is magnetically coupled to an actuator that displaces it.

University of Illinois at Urbana - Champaign | Date: 2015-11-17

Methods and apparatus for storing information or energy are disclosed. An array of nano-capacitors is provided, where each nano-capacitor has a plurality of cathodic regions and an anode separated from each of the cathodic regions by one or more intervening dielectrics. Each nano-capacitor acts as a quantum resonator, thereby suppressing electron emission. The thickness of the intervening dielectric is in the range between 0.1 nanometers and 1000 nanometers and is shorter than an electron mean free path within the dielectric. Each cathodic region is at least 100 times larger than the thickness of the intervening dielectric in every direction transverse to that thickness. An excess of electrons is stored on the cathodic regions. The dielectric may be a metal oxide, particularly a native oxide of the cathode material.

University of Illinois at Urbana - Champaign | Date: 2016-08-23

Compositions are provided comprising water-stable semiconductor nanoplatelets (NPLs) encapsulated in a hydrophilic coating further comprising lipids and lipoproteins. Uses include biomolecular imaging and sensing, and methods of making comprise: colloidal synthesis of CdSe core NPLs; layer-by-layer growth of a CdS shell; and encapsulation of the CdSe/CdS core/shell NPLs in lipid and lipoprotein components through evaporation-encapsulation using zwitterionic phospholipids, detergents, and amphipathic membrane scaffold proteins.

Kotov V.N.,University of Vermont | Uchoa B.,University of Illinois at Urbana - Champaign | Pereira V.M.,National University of Singapore | Guinea F.,CSIC - Institute of Materials Science | And 2 more authors.
Reviews of Modern Physics | Year: 2012

The problem of electron-electron interactions in graphene is reviewed. Starting from the screening of long-range interactions in these systems, the existence of an emerging Dirac liquid of Lorentz invariant quasiparticles in the weak-coupling regime is discussed, as well as the formation of strongly correlated electronic states in the strong-coupling regime. The analogy and connections between the many-body problem and the Coulomb impurity problem are also analyzed. The problem of the magnetic instability and Kondo effect of impurities and/or adatoms in graphene is also discussed in analogy with classical models of many-body effects in ordinary metals. Lorentz invariance is shown to play a fundamental role and leads to effects that span the whole spectrum, from the ultraviolet to the infrared. The effect of an emerging Lorentz invariance is also discussed in the context of finite size and edge effects as well as mesoscopic physics. The effects of strong magnetic fields in single layers and some of the main aspects of the many-body problem in graphene bilayers are briefly reviewed. In addition to reviewing the fully understood aspects of the many-body problem in graphene, a plethora of interesting issues are shown to remain open, both theoretically and experimentally, and the field of graphene research is still exciting and vibrant. © 2012 American Physical Society.

Beijing Apollo Ding Rong Solar Technology Co. and University of Illinois at Urbana - Champaign | Date: 2016-05-12

A method of manufacturing a photovoltaic structure includes forming a p-type semiconductor absorber layer containing a copper indium gallium selenide based material over a first electrode, forming an n-type cadmium sulfide layer over the p-type semiconductor absorber layer by sputtering in an ambient including hydrogen gas and oxygen gas, and forming a second electrode over the cadmium sulfide layer.

University of Illinois at Urbana - Champaign | Date: 2015-04-28

The present invention relates to the reduction of a symptom of an alcohol withdrawal state comprising administering a modulator of histone acetylation.

Chang H.-H.,University of Illinois at Urbana - Champaign
Psychometrika | Year: 2015

The paper provides a survey of 18 years' progress that my colleagues, students (both former and current), and I made in a prominent research area in Psychometrics: Computerized Adaptive Testing (CAT). We start with a historical review of the establishment of a large-sample foundation for CAT. It is worth noting that the asymptotic results were derived under the framework of Martingale Theory, a very theoretical perspective of Probability Theory, which may seem unrelated to educational and psychological testing. In addition, we address a number of issues that emerged from large-scale implementation and show how theoretical work can help solve these problems. Finally, we propose that CAT technology can be very useful in supporting individualized instruction on a mass scale. We show that even paper-and-pencil-based tests can be made adaptive to support classroom teaching.
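The adaptive item-selection step at the heart of CAT can be sketched in a few lines. The snippet below is a minimal illustration, assuming a two-parameter logistic (2PL) IRT model and an invented item bank; it picks the unadministered item with maximum Fisher information at the current ability estimate.

```python
import math

def p_correct(theta, a, b):
    """2PL IRT: probability of a correct response at ability theta,
    for an item with discrimination a and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def fisher_info(theta, a, b):
    """Fisher information contributed by a 2PL item at ability theta."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def select_item(theta, items, administered):
    """Maximum-information rule: choose the unadministered item that
    is most informative at the current ability estimate."""
    return max((i for i in range(len(items)) if i not in administered),
               key=lambda i: fisher_info(theta, *items[i]))

# Hypothetical item bank: (discrimination a, difficulty b) pairs
bank = [(1.2, -1.0), (0.8, 0.0), (1.5, 0.5), (1.0, 1.5)]
theta_hat = 0.4                       # current ability estimate
item = select_item(theta_hat, bank, administered={0})
```

In an operational CAT, the ability estimate would be updated (e.g., by maximum likelihood) after each response, and exposure-control constraints, one of the large-scale implementation issues the paper addresses, would temper the pure maximum-information rule.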

Lopez-Pamies O.,University of Illinois at Urbana - Champaign
Journal of the Mechanics and Physics of Solids | Year: 2014

A microscopic field theory is developed with the aim of describing, explaining, and predicting the macroscopic response of elastic dielectric composites with two-phase particulate (periodic or random) microstructures under arbitrarily large deformations and electric fields. The central idea rests on the construction, via an iterated homogenization technique in finite electroelastostatics, of a specific yet fairly general class of particulate microstructures which allows one to compute exactly the homogenized response of the resulting composite materials. The theory is applicable to any choice of elastic dielectric behaviors (with possibly even or odd electroelastic coupling) for the underlying matrix and particles, and any choice of the one- and two-point correlation functions describing the microstructure. In spite of accounting for fine microscopic information, the required calculations amount to solving tractable first-order nonlinear (Hamilton-Jacobi-type) partial differential equations. As a first application of the theory, explicit results are worked out for the basic case of ideal elastic dielectrics filled with initially spherical particles that are distributed either isotropically or in chain-like formations and that are ideal elastic dielectrics themselves. The effects that the permittivity, stiffness, volume fraction, and spatial distribution of the particles have on the overall electrostrictive deformation (induced by the application of a uniaxial electric field) of the composite are discussed in detail. © 2013 Elsevier Ltd.

Braun P.V.,University of Illinois at Urbana - Champaign
Chemistry of Materials | Year: 2014

This Perspective overviews many of the developments in templated porous three-dimensional photonics, with a particular focus on functional architectures, and provides suggestions for future opportunities for research. A significant diversity of 3D structures is available today with characteristic dimensions appropriate for providing strong light-matter interactions, in no small part due to recent advances in 3D patterning techniques. However, the optical functionality of these structures has generally remained limited. Advances in materials chemistry have the opportunity to dramatically increase the function of templated 3D photonics, and a few examples of highly functional templated 3D photonics for sensing, solar energy harvesting, optical metamaterials, and light emission are presented as first examples of success. © 2013 American Chemical Society.

Hirata S.,University of Illinois at Urbana - Champaign
Theoretical Chemistry Accounts | Year: 2011

This article aims to dispel confusions about the definition of size consistency as well as some incompatibility that exists between different criteria for judging whether an electronic structure theory is size consistent and thus yields energies and other quantities having correct asymptotic size dependence. It introduces extensive and intensive diagram theorems, which provide unambiguous sufficient conditions for size consistency for extensive and intensive quantities, respectively, stipulated in terms of diagrammatic topology and vertex makeup. The underlying algebraic size-consistency criterion is described, which relies on the polynomial dependence of terms in the formalism on the number of wave vector sampling points in Brillouin-zone integrations. The physical meanings of two types of normalization of excitation amplitudes in electron-correlation theories, namely, the intermediate and standard normalization, are revealed. The amplitudes of the operator that describes an extensive quantity (the extensive operator) are subject to the intermediate normalization, while those of the operator that corresponds to an intensive quantity (the intensive operator) must be normalized. The article also introduces the extensive-intensive consistency theorem which specifies the relationship between the spaces of determinants reached by the extensive and intensive operators in a size-consistent method for intensive quantities. Furthermore, a more fundamental question is addressed as to what makes energies extensive and thus an application of thermodynamics to chemistry valid. It is shown that the energy of an electrically neutral, periodic, non-metallic solid is extensive. On this basis, a strictly size-consistent redefinition of the Hartree-Fock theory is proposed. © 2011 Springer-Verlag.

Biswas R.R.,University of Illinois at Urbana - Champaign
Physical Review Letters | Year: 2013

We consider Majorana fermions tunneling among an array of vortices in a 2D chiral p-wave superconductor or equivalent material. The amplitude for Majorana fermions to tunnel between a pair of vortices is found to necessarily depend on the background superconducting phase profile; it is found to be proportional to the sine of half the difference between the phases at the two vortices. Using this result we study tight-binding models of Majorana fermions in vortices arranged in triangular or square lattices. In both cases we find that the aforementioned phase-tunneling relationship leads to the creation of superlattices where the Majorana fermions form macroscopically degenerate localizable flat bands at zero energy, in addition to other dispersive bands. This finding suggests that tunneling processes in these vortex arrays do not change the energies of a finite fraction of Majorana fermions, contrary to previous expectation. The presence of flat Majorana bands, and hence less-than-expected decoherence in these vortex arrays, bodes well for the prospects of topological quantum computation with large numbers of Majorana states. © 2013 American Physical Society.
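The central relation of the paper, that the tunneling amplitude between two vortices is proportional to the sine of half the superconducting phase difference, is easy to encode. The sketch below (with invented phases and an arbitrary overall scale t0) builds the resulting hopping matrix and relies on the fact that this form is automatically antisymmetric under exchange of the two vortices.

```python
import math

def majorana_hopping(phi_a, phi_b, t0=1.0):
    """Tunneling amplitude between Majorana modes at two vortices:
    proportional to the sine of half the background superconducting
    phase difference, as stated in the abstract."""
    return t0 * math.sin((phi_a - phi_b) / 2.0)

# Tiny tight-binding matrix for three vortices with illustrative
# background phases; the Hamiltonian is H = i*A with A real
# antisymmetric, which guarantees a (+E, -E) symmetric spectrum.
phases = [0.0, math.pi / 2, math.pi]
n = len(phases)
A = [[majorana_hopping(phases[i], phases[j]) if i != j else 0.0
      for j in range(n)] for i in range(n)]
```

Because A is real and antisymmetric, the eigenvalues of H = iA come in +E/-E pairs; diagonalizing such matrices for triangular or square vortex lattices with the appropriate phase profiles is how the flat zero-energy Majorana bands discussed above would be exhibited numerically.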

Layfield J.P.,University of Illinois at Urbana - Champaign | Hammes-Schiffer S.,University of Illinois at Urbana - Champaign
Chemical Reviews | Year: 2014

A study is conducted to demonstrate the theoretical treatments and simulation methods developed to study hydrogen tunneling processes and to present examples of hydrogen tunneling in specific enzymatic and biomimetic systems. The ideas and concepts in this study originated with many authors across several fields. The investigations discuss the theoretical concepts and fundamental physical principles underlying hydrogen tunneling processes. They characterize proton and hydride transfer, HAT, and EPT reactions in terms of electronic and vibrational nonadiabaticity and explain how to differentiate these types of reactions using electronic structure and semiclassical methods. The study also presents rate constant expressions for each type of reaction and discusses the approximations involved in their derivation and the regimes in which they are valid.

Teo J.C.Y.,University of Illinois at Urbana - Champaign | Hughes T.L.,University of Illinois at Urbana - Champaign
Physical Review Letters | Year: 2013

We prove a topological criterion for the existence of a zero-energy Majorana bound state on a disclination, a rotation symmetry breaking point defect, in fourfold symmetric topological crystalline superconductors (TCS) in two dimensions. We first establish a complete topological classification of TCS using the Chern invariant and three integral rotation invariants. By analytically and numerically studying disclinations, we algebraically deduce a Z2 index that identifies the parity of the number of Majorana zero modes at a disclination. Surprisingly, we also find weakly protected Majorana fermions bound at the corners of superconductors with trivial Chern and weak invariants. © 2013 American Physical Society.

Xiang Y.,University of Illinois at Urbana - Champaign | Lu Y.,University of Illinois at Urbana - Champaign
Nature Chemistry | Year: 2011

Portable, low-cost and quantitative detection of a broad range of targets at home and in the field has the potential to revolutionize medical diagnostics and environmental monitoring. Despite many years of research, very few such devices are commercially available. Taking advantage of the wide availability and low cost of the pocket-sized personal glucose meter - used worldwide by diabetes sufferers - we demonstrate a method to use such meters to quantify non-glucose targets, ranging from a recreational drug (cocaine, 3.4 μM detection limit) to an important biological cofactor (adenosine, 18 μM detection limit), to a disease marker (interferon-gamma of tuberculosis, 2.6 nM detection limit) and a toxic metal ion (uranium, 9.1 nM detection limit). The method is based on the target-induced release of invertase from a functional-DNA - invertase conjugate. The released invertase converts sucrose into glucose, which is detectable using the meter. The approach should be easily applicable to the detection of many other targets through the use of suitable functional-DNA partners (aptamers, DNAzymes or aptazymes). © 2011 Macmillan Publishers Limited. All rights reserved.

Dlott D.D.,University of Illinois at Urbana - Champaign
Annual Review of Physical Chemistry | Year: 2011

This review discusses new developments in shock compression science with a focus on molecular media. Some basic features of shock and detonation waves, nonlinear excitations that can produce extreme states of high temperature and high pressure, are described. Methods of generating and detecting shock waves are reviewed, especially those using tabletop lasers that can be interfaced with advanced molecular diagnostics. Newer compression methods such as shockless compression and precompression shock that generate states of cold dense molecular matter are discussed. Shock compression creates a metallic form of hydrogen, melts diamond, and makes water a superionic liquid with unique catalytic properties. Our understanding of detonations at the molecular level has improved a great deal as a result of advanced nonequilibrium molecular simulations. Experimental measurements of detailed molecular behavior behind a detonation front might be available soon using femtosecond lasers to produce nanoscale simulated detonation fronts. © 2011 by Annual Reviews. All rights reserved.

Motl R.W.,University of Illinois at Urbana - Champaign
Multiple Sclerosis Journal | Year: 2014

Supervised exercise training has substantial benefits for persons with multiple sclerosis (MS), yet 80% of those with MS do not meet recommended levels of moderate-to-vigorous physical activity (MVPA). This same problem persisted for decades in the general population of adults and prompted a paradigm shift away from "exercise training for fitness" toward "physical activity for health." The paradigm shift reflects a public health approach of promoting lifestyle physical activity through behavioral interventions that teach people the skills, techniques, and strategies based on established theories for modifying and self-regulating health behaviors. This paper describes: (a) the definitions of and difference between structured exercise training and lifestyle physical activity; (b) the importance and potential impact of the paradigm shift; (c) consequences of lifestyle physical activity in MS; and (d) behavioral interventions for changing lifestyle physical activity in MS. The paper introduces the "new kid on the MS block" with the hope that lifestyle physical activity might become an accepted partner alongside exercise training for inclusion in comprehensive MS care. © The Author(s) 2014.

Goh J.O.,University of Illinois at Urbana - Champaign
Aging and Disease | Year: 2011

Aging is associated with myriad changes in behavioral performance and brain structure and function. Given this complex interplay of brain and behavior, two streams of findings are reviewed here that show that aging is generally associated with dedifferentiated neural processes, and also changes in functional connectivity. This article considers an integrated view of how such age-related dedifferentiation of neural function and changes in functional connectivity are related, highlighting some recent findings on differences in small-world architecture in the functional connectivity of young and older adults. These findings suggest that both neural connectivity and the organization of these connections are important determinants of processing efficiency with aging that may be the underlying mechanisms for dedifferentiation. Thus, the evaluation of the neurocognitive effects of aging on functional connectivity provides an alternative framework that captures the behavioral and brain changes that are observed in cognitive aging.
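The small-world comparison mentioned above rests on two graph metrics: the mean local clustering coefficient and the characteristic path length. A minimal pure-Python sketch, using a regular ring lattice as the reference graph (as in the Watts-Strogatz construction); the graph is illustrative, not connectivity data from the review:

```python
from collections import deque

def ring_lattice(n, k):
    """Regular ring: each node linked to its k nearest neighbors."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for d in range(1, k // 2 + 1):
            adj[i].add((i + d) % n)
            adj[i].add((i - d) % n)
    return adj

def clustering(adj):
    """Mean local clustering coefficient over all nodes."""
    total = 0.0
    for v, nbrs in adj.items():
        nbrs = list(nbrs)
        pairs = len(nbrs) * (len(nbrs) - 1) / 2
        links = sum(1 for i in range(len(nbrs))
                    for j in range(i + 1, len(nbrs))
                    if nbrs[j] in adj[nbrs[i]])
        total += links / pairs if pairs else 0.0
    return total / len(adj)

def avg_path_length(adj):
    """Mean shortest-path length over all node pairs, via BFS."""
    n, total = len(adj), 0
    for s in adj:
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
    return total / (n * (n - 1))

g = ring_lattice(20, 4)
```

A functional-connectivity network is called small-world when its clustering stays close to the lattice value while its path length drops toward that of a random graph; age differences in these two numbers are the kind of finding the review highlights.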

Sreenivas R.S.,University of Illinois at Urbana - Champaign
IEEE Transactions on Automatic Control | Year: 2012

We first show that the existence, or nonexistence, of a supervisory policy that enforces liveness in an arbitrary Petri net (PN) is not semidecidable. Following this, we show that this is not the case if we restrict our attention to an arbitrary, partially controlled, free-choice Petri net (FCPN). Starting from the observation that the set of initial markings for which there is a supervisory policy that enforces liveness in a free-choice structure is right-closed, we present a string of observations that eventually lead to the conclusion that the existence of a supervisory policy that enforces liveness in an arbitrary FCPN is decidable. The paper concludes with some suggested directions for future research. © 2011 IEEE.
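For intuition about what deciding liveness involves, the sketch below checks liveness of a small bounded Petri net by brute force over its reachability graph: from every reachable marking, every transition must be fireable again somewhere downstream. This exhaustive check is only feasible for bounded toy nets (the example net is invented); the point of the paper is that for arbitrary PNs the corresponding policy-existence question is not even semidecidable, while for FCPNs it is decidable.

```python
from collections import deque

def enabled(m, pre):
    """Indices of transitions whose input places hold enough tokens."""
    return [t for t in range(len(pre))
            if all(m[p] >= pre[t][p] for p in range(len(m)))]

def fire(m, t, pre, post):
    """Fire transition t: consume pre[t], produce post[t]."""
    return tuple(m[p] - pre[t][p] + post[t][p] for p in range(len(m)))

def reachability_graph(m0, pre, post, limit=10_000):
    """BFS over the reachability set; raises if the net looks unbounded."""
    succ = {m0: []}
    queue = deque([m0])
    while queue:
        m = queue.popleft()
        for t in enabled(m, pre):
            m2 = fire(m, t, pre, post)
            succ[m].append(m2)
            if m2 not in succ:
                if len(succ) >= limit:
                    raise RuntimeError("net may be unbounded")
                succ[m2] = []
                queue.append(m2)
    return succ

def is_live(m0, pre, post):
    """Bounded-net liveness: from every reachable marking, every
    transition is enabled at some marking reachable from it."""
    succ = reachability_graph(m0, pre, post)
    for start in succ:
        seen, queue, can_fire = {start}, deque([start]), set()
        while queue:
            m = queue.popleft()
            can_fire.update(enabled(m, pre))
            for m2 in succ[m]:
                if m2 not in seen:
                    seen.add(m2)
                    queue.append(m2)
        if len(can_fire) < len(pre):
            return False
    return True

# Toy net: t0 moves a token p0 -> p1, t1 moves it back (live cycle).
pre  = [[1, 0], [0, 1]]   # pre[t][p]: tokens consumed by t from p
post = [[0, 1], [1, 0]]   # post[t][p]: tokens produced by t into p
```

A supervisory policy would sit on top of such a model, disabling controllable transitions so that the closed-loop behavior stays within the live markings.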

Strambeanu I.I.,University of Illinois at Urbana - Champaign | White M.C.,University of Illinois at Urbana - Champaign
Journal of the American Chemical Society | Year: 2013

The divergent synthesis of syn-1,2-aminoalcohol or syn-1,2-diamine precursors from a common terminal olefin has been accomplished using a combination of palladium(II) catalysis with Lewis acid cocatalysis. Palladium(II)/bis-sulfoxide catalysis with a silver triflate cocatalyst leads for the first time to anti-2-aminooxazolines (C-O) in good to excellent yields. Simple removal of the bis-sulfoxide ligand from this reaction results in a complete switch in reactivity to afford anti-imidazolidinone products (C-N) in good yields and excellent diastereoselectivities. Mechanistic studies suggest the divergent C-O versus C-N reactivity from a common ambident nucleophile arises due to a switch in mechanism from allylic C-H cleavage/functionalization to olefin isomerization/oxidative amination. © 2013 American Chemical Society.

Simple DNA repeats (trinucleotide repeats, micro- and minisatellites) are prone to expansion/contraction via formation of secondary structures during DNA synthesis. Such structures both inhibit replication forks and create opportunities for template-primer slippage, making these repeats unstable. Certain aspects of simple repeat instability, however, suggest additional mechanisms of replication inhibition dependent on the primary DNA sequence, rather than on secondary structure formation. I argue that expanded simple repeats, due to their lower DNA complexity, should transiently inhibit DNA synthesis by locally depleting specific DNA precursors. Such transient inhibition would promote formation of secondary structures and would stabilize these structures, facilitating strand slippage. Thus, replication problems at simple repeats could be explained by potentiated toxicity, where the secondary structure-driven repeat instability is enhanced by DNA polymerase stalling at the low complexity template DNA. © 2013 WILEY Periodicals, Inc.

Tang W.,University of Illinois at Urbana - Champaign | Van Der Donk W.A.,University of Illinois at Urbana - Champaign
Nature Chemical Biology | Year: 2013

The enterococcal cytolysin is a two-component lantibiotic of unknown structure with hemolytic activity that is important for virulence. We prepared cytolysin by coexpression of each precursor peptide with the synthetase CylM in Escherichia coli and characterized its structure. Unexpectedly, cytolysin is to our knowledge the first example of a lantibiotic containing lanthionine and methyllanthionine structures with different stereochemistries in the same peptide. The stereochemistry is determined by the sequence of the substrate peptide. © 2013 Nature America, Inc. All rights reserved.

Hammes-Schiffer S.,University of Illinois at Urbana - Champaign
Biochemistry | Year: 2013

This brief review analyzes the underlying physical principles of enzyme catalysis, with an emphasis on the role of equilibrium enzyme motions and conformational sampling. The concepts are developed in the context of three representative systems, namely, dihydrofolate reductase, ketosteroid isomerase, and soybean lipoxygenase. All of these reactions involve hydrogen transfer, but many of the concepts discussed are more generally applicable. The factors that are analyzed in this review include hydrogen tunneling, proton donor-acceptor motion, hydrogen bonding, pKa shifting, electrostatics, preorganization, reorganization, and conformational motions. The rate constant for the chemical step is determined primarily by the free energy barrier, which is related to the probability of sampling configurations conducive to the chemical reaction. According to this perspective, stochastic thermal motions lead to equilibrium conformational changes in the enzyme and ligands that result in configurations favorable for the breaking and forming of chemical bonds. For proton, hydride, and proton-coupled electron transfer reactions, typically the donor and acceptor become closer to facilitate the transfer. The impact of mutations on the catalytic rate constants can be explained in terms of the factors enumerated above. In particular, distal mutations can alter the conformational motions of the enzyme and therefore the probability of sampling configurations conducive to the chemical reaction. Methods such as vibrational Stark spectroscopy, in which environmentally sensitive probes are introduced site-specifically into the enzyme, provide further insight into these aspects of enzyme catalysis through a combination of experiments and theoretical calculations. © 2012 American Chemical Society.

Wang S.,University of Illinois at Urbana - Champaign
Annals of the Association of American Geographers | Year: 2010

Cyberinfrastructure (CI) integrates distributed information and communication technologies for coordinated knowledge discovery. The purpose of this article is to develop a CyberGIS framework for the synthesis of CI, geographic information systems (GIS), and spatial analysis (broadly including spatial modeling). This framework focuses on enabling computationally intensive and collaborative geographic problem solving. The article describes new trends in the development and use of CyberGIS while illustrating particular CyberGIS components. Spatial middleware glues CyberGIS components and corresponding services while managing the complexity of generic CI middleware. Spatial middleware, tailored to GIS and spatial analysis, is developed to capture important spatial characteristics of problems through the spatially explicit representation of computing, data, and communication intensity (collectively termed computational intensity), which enables GIS and spatial analysis to locate, allocate, and use CI resources effectively and efficiently. A CyberGIS implementation, GISolve, is developed to systematically integrate CI capabilities, including high-performance and distributed computing, data management and visualization, and virtual organization support. Currently, GISolve is deployed on the National Science Foundation TeraGrid, a key element of the U.S. and worldwide CI. A case study, motivated by an application in which geographic patterns of the impact of global climate change on large-scale crop yields are examined in the United States, focuses on assessing the computational performance of GISolve on resolving the computational intensity of a widely used spatial interpolation analysis that is conducted in a collaborative fashion. Computational experiments demonstrate that GISolve achieves a high-performance, distributed, and collaborative CyberGIS implementation. © 2010 by Association of American Geographers.
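The "widely used spatial interpolation analysis" in the case study is not fully specified in this summary; inverse-distance weighting (IDW), sketched below, is one common example of the kind of pointwise interpolation whose computational intensity maps naturally onto distributed CI resources, since each output location can be computed independently. Sample coordinates and values are invented.

```python
import math

def idw(x, y, samples, power=2.0):
    """Inverse-distance-weighted interpolation at (x, y) from
    (xi, yi, zi) samples; returns the sample value exactly when
    (x, y) coincides with a sample point."""
    num = den = 0.0
    for xi, yi, zi in samples:
        d = math.hypot(x - xi, y - yi)
        if d == 0.0:
            return zi
        w = 1.0 / d ** power
        num += w * zi
        den += w
    return num / den

# Hypothetical sample points (x, y, value)
pts = [(0.0, 0.0, 10.0), (1.0, 0.0, 20.0), (0.0, 1.0, 30.0)]
z = idw(0.5, 0.5, pts)
```

In a CyberGIS setting, the output grid would be partitioned spatially, each partition's idw calls dispatched to separate compute resources, and the spatially explicit computational-intensity representation used to balance those partitions.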

Huang R.H.,University of Illinois at Urbana - Champaign
Biochemistry | Year: 2012

In an RNA transcript, the 2′-OH group at the 3′-terminal nucleotide is unique as it is the only 2′-OH group that is adjacent to a 3′-OH group instead of a phosphate backbone. The 2′-OH group at the 3′-terminal nucleotide of certain RNAs is methylated in vivo, which is achieved by a methyltransferase named Hen1 that is mechanistically distinct from other known RNA 2′-O-methyltransferases. In eukaryotic organisms, 3′-terminal 2′-O-methylation of small RNAs stabilizes these small RNAs for RNA interference (RNAi). In bacteria, the same methylation during RNA repair results in repaired RNA resisting future damage at the site of repair. Although the chemistry performed by the eukaryotic and bacterial Hen1 is the same, the mechanisms of how RNA is stabilized as a result of the 3′-terminal 2′-O-methylation are different between the eukaryotic RNAi and the bacterial RNA repair. In this review, I will discuss the distribution of Hen1 in living organisms, the classification of Hen1 into four subfamilies, the structure and mechanism of Hen1 that allows it to conduct RNA 3′-terminal 2′-O-methylation, and the possible evolutionary origin of Hen1 present in bacterial and eukaryotic organisms. © 2012 American Chemical Society.

Oman T.J.,University of Illinois at Urbana - Champaign | Van Der Donk W.A.,University of Illinois at Urbana - Champaign
Nature Chemical Biology | Year: 2010

The avalanche of genomic information in the past decade has revealed that natural product biosynthesis using the ribosomal machinery is much more widespread than originally anticipated. Nearly all of these compounds are crafted through post-translational modifications of a larger precursor peptide that often contains the marching orders for the biosynthetic enzymes. We review here the available information for how the peptide sequences in the precursors govern the post-translational tailoring processes for several classes of natural products. In addition, we highlight the great potential these leader peptide-directed biosynthetic systems offer for engineering conformationally restrained and pharmacophore-rich products with structural diversity that greatly expands the proteinogenic repertoire. © 2010 Nature America, Inc. All rights reserved.

Hwang H.,University of Illinois at Urbana - Champaign | Myong S.,University of Illinois at Urbana - Champaign
Chemical Society Reviews | Year: 2014

Single molecule studies of protein-nucleic acid interactions shed light on molecular mechanisms and kinetics involved in protein binding, translocation, and unwinding of DNA and RNA substrates. In this review, we provide an overview of a single molecule fluorescence method, termed "protein induced fluorescence enhancement" (PIFE). Unlike FRET where two dyes are required, PIFE employs a single dye attached to DNA or RNA to which an unlabeled protein is applied. We discuss both ensemble and single molecule studies in which PIFE was utilized. © 2014 The Royal Society of Chemistry.

Blehm B.H.,University of Illinois at Urbana - Champaign | Selvin P.R.,University of Illinois at Urbana - Champaign
Chemical Reviews | Year: 2014

The review describes experimental systems at multiple levels of complexity, including single-motor-type in vitro assays, multimotor in vitro assays, purified-organelle in vitro assays, and finally in vivo cellular assays. The simplest level of complexity is a single motor with a cargo or label attached and a microtubule track in an in vitro environment. This has been the predominant type of experiment in the study of molecular motors. Adding in accessory proteins and parts of the transport complex, such as dynactin, is the next level of complexity. The wide variety of in vivo optical trapping results shows that the different kinesin-dynein transport systems present considerable complexity. Dynein apparently is dragged behind kinesin during some plus-end-directed transport, whereas kinesin routinely releases to allow unhindered minus-end-directed transport. More complex systems, such as those with additional motor types or other specialized forms of transport, promise to involve still more elaborate regulatory mechanisms.

Orlean P.,University of Illinois at Urbana - Champaign
Genetics | Year: 2012

The wall gives a Saccharomyces cerevisiae cell its osmotic integrity; defines cell shape during budding growth, mating, sporulation, and pseudohypha formation; and presents adhesive glycoproteins to other yeast cells. The wall consists of β1,3- and β1,6-glucans, a small amount of chitin, and many different proteins that may bear N- and O-linked glycans and a glycolipid anchor. These components become cross-linked in various ways to form higher-order complexes. Wall composition and degree of cross-linking vary during growth and development and change in response to cell wall stress. This article reviews wall biogenesis in vegetative cells, covering the structure of wall components and how they are cross-linked; the biosynthesis of N- and O-linked glycans, glycosylphosphatidylinositol membrane anchors, β1,3- and β1,6-linked glucans, and chitin; the reactions that cross-link wall components; and the possible functions of enzymatic and nonenzymatic cell wall proteins. © 2012 by the Genetics Society of America.

Knerr P.J.,University of Illinois at Urbana - Champaign | Van Der Donk W.A.,University of Illinois at Urbana - Champaign
Annual Review of Biochemistry | Year: 2012

Aided by genome-mining strategies, knowledge of the prevalence and diversity of ribosomally synthesized natural products (RNPs) is rapidly increasing. Among these are the lantipeptides, posttranslationally modified peptides containing characteristic thioether cross-links imperative for bioactivity and stability. Though this family was once thought to be a limited class of antimicrobial compounds produced by gram-positive bacteria, new insights have revealed a much larger diversity of activity, structure, biosynthetic machinery, and producing organisms than previously appreciated. Detailed investigation of the enzymes responsible for installing the posttranslational modifications has resulted in improved in vivo and in vitro engineering systems focusing on enhancement of the therapeutic potential of these compounds. Although dozens of new lantipeptides have been isolated in recent years, bioinformatic analyses indicate that many hundreds more await discovery owing to the widespread frequency of lantipeptide biosynthetic machinery in bacterial genomes. © 2012 by Annual Reviews. All rights reserved.

Van Der Donk W.A.,University of Illinois at Urbana - Champaign
Journal of the American Chemical Society | Year: 2012

Methylphosphonate synthase is a non-heme iron-dependent oxygenase that converts 2-hydroxyethylphosphonate (2-HEP) to methylphosphonate. On the basis of experiments with two enantiomers of a substrate analog, 2-hydroxypropylphosphonate, catalysis is proposed to commence with stereospecific abstraction of the pro-S hydrogen on C2 of the substrate. Experiments with isotopologues of 2-HEP indicate stereospecific transfer of the pro-R hydrogen at C2 of the substrate to the methyl group of methylphosphonate. Kinetic studies with these substrate isotopologues reveal that neither hydrogen transfer is rate limiting under saturating substrate conditions. A mechanism is proposed that is consistent with the available data. © 2012 American Chemical Society.

Wong C.-H.,University of Illinois at Urbana - Champaign | Zimmerman S.C.,University of Illinois at Urbana - Champaign
Chemical Communications | Year: 2013

The concept of orthogonality has been applied to many areas of chemistry, ranging from wave functions to chromatography. But it was Barany and Merrifield's orthogonal protecting group strategy that paved the way for solid phase peptide syntheses, other important classes of biomaterials such as oligosaccharides and oligonucleotides, and ultimately to a term in widespread usage that is focused on chemical reactivity and binding selectivity. The orthogonal protection strategy has been extended to the development of orthogonal activation, and recently the click reaction, for streamlining organic synthesis. The click reaction and its variants are considered orthogonal as the components react together in high yield and in the presence of many other functional groups. Likewise, supramolecular building blocks can also be orthogonal, thereby enabling programmed self-assembly, a superb strategy to create complex architectures. Overall, orthogonal reactions and supramolecular interactions have dramatically improved the syntheses, the preparation of functional materials, and the self-assembly of nanoscale structures. This journal is © The Royal Society of Chemistry 2013.

Bhargava R.,University of Illinois at Urbana - Champaign
Applied Spectroscopy | Year: 2012

Infrared (IR) spectroscopic imaging seemingly matured as a technology in the mid-2000s, with commercially successful instrumentation and reports in numerous applications. Recent developments, however, have transformed our understanding of the recorded data, provided capability for new instrumentation, and greatly enhanced the ability to extract more useful information in less time. These developments are summarized here in three broad areas (data recording, interpretation of recorded data, and information extraction), and their critical review is employed to project emerging trends. Overall, the convergence of selected components from hardware, theory, algorithms, and applications is one trend. Instead of similar, general-purpose instrumentation, another trend is likely to be diverse and application-targeted designs of instrumentation driven by emerging component technologies. The recent renaissance in both fundamental science and instrumentation will likely spur investigations at the confluence of conventional spectroscopic analyses and optical physics for improved data interpretation. While chemometrics has dominated data processing, a trend will likely lie in the development of signal processing algorithms to optimally extract spectral and spatial information prior to conventional chemometric analyses. Finally, the sum of these recent advances is likely to provide unprecedented capability in measurement and scientific insight, which will present new opportunities for the applied spectroscopist.

Oldfield E.,University of Illinois at Urbana - Champaign | Lin F.-Y.,University of Illinois at Urbana - Champaign
Angewandte Chemie - International Edition | Year: 2012

Terpenes are the largest class of small-molecule natural products on earth, and the most abundant by mass. Here, we summarize recent developments in elucidating the structure and function of the proteins involved in their biosynthesis. There are six main building blocks or modules (α, β, γ, δ, ε, and ζ) that make up the structures of these enzymes: the αα and αδ head-to-tail trans-prenyl transferases that produce trans-isoprenoid diphosphates from C5 precursors; the ε head-to-head prenyl transferases that convert these diphosphates into the tri- and tetraterpene precursors of sterols, hopanoids, and carotenoids; the βγ di- and triterpene synthases; the ζ head-to-tail cis-prenyl transferases that produce the cis-isoprenoid diphosphates involved in bacterial cell wall biosynthesis; and finally the α, αβ, and αβγ terpene synthases that produce plant terpenes, with many of these modular enzymes having originated from ancestral α and β domain proteins. We also review progress in determining the structure and function of the two 4Fe-4S reductases involved in formation of the C5 diphosphates in many bacteria, where again, highly modular structures are found. Natural building blocks: Recent progress has been achieved in determining the structure, function, and inhibition of the enzymes responsible for the formation of terpenes and isoprenoids. Most of these enzymes contain combinations of α-, β-, γ-, δ-, ε-, and/or ζ-domain structures that in many cases are fused to form modular proteins. Gene fusion, exon loss, and recombination are thought to play major roles in the genesis of these enzymes. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

Saini S.,University of Illinois at Urbana - Champaign
PLoS pathogens | Year: 2010

Salmonella enterica serovar Typhimurium is a common food-borne pathogen that induces inflammatory diarrhea and invades intestinal epithelial cells using a type three secretion system (T3SS) encoded within Salmonella pathogenicity island 1 (SPI1). The genes encoding the SPI1 T3SS are tightly regulated by a network of interacting transcriptional regulators involving three coupled positive feedback loops. While the core architecture of the SPI1 gene circuit has been determined, the relative roles of these interacting regulators and associated feedback loops are still unknown. To determine the function of this circuit, we measured gene expression dynamics at both population and single-cell resolution in a number of SPI1 regulatory mutants. Using these data, we constructed a mathematical model of the SPI1 gene circuit. Analysis of the model predicted that the circuit serves two functions. The first is to place a threshold on SPI1 activation, ensuring that the genes encoding the T3SS are expressed only in response to the appropriate combination of environmental and cellular cues. The second is to amplify SPI1 gene expression. To experimentally test these predictions, we rewired the SPI1 genetic circuit by changing its regulatory architecture. This enabled us to directly test our predictions regarding the function of the circuit by varying the strength and dynamics of the activating signal. Collectively, our experimental and computational results enable us to deconstruct this complex circuit and determine the role of its individual components in regulating SPI1 gene expression dynamics.
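The two predicted functions of the circuit (a threshold on activation plus amplification) are generic features of cooperative positive feedback, which can be illustrated with a toy model; the Hill-type equation and every parameter below are hypothetical and are not taken from the SPI1 model itself:

```python
def steady_state(signal, n=4, K=1.0, vmax=4.0, gamma=1.0,
                 dt=0.01, steps=20000):
    """Toy gene circuit: constant signal-driven production plus
    cooperative (Hill) positive feedback, minus linear degradation.
    Integrated by forward Euler until it settles."""
    x = 0.0
    for _ in range(steps):
        dx = signal + vmax * x**n / (K**n + x**n) - gamma * x
        x += dt * dx
    return x

# Below the activation threshold the loop stays near the basal level;
# above it, the feedback amplifies expression well beyond the input.
print(steady_state(signal=0.05))   # stays low (~0.05)
print(steady_state(signal=0.8))    # switches high (~4.8)
```

The cooperativity exponent n is what produces the sharp, switch-like threshold; with n = 1 the response would be graded rather than all-or-none.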

Nedic A.,University of Illinois at Urbana - Champaign
IEEE Transactions on Automatic Control | Year: 2011

We consider a distributed multi-agent network system where each agent has its own convex objective function, which can be evaluated with stochastic errors. The problem consists of minimizing the sum of the agent functions over a commonly known constraint set, but without a central coordinator and without agents sharing the explicit form of their objectives. We propose an asynchronous broadcast-based algorithm where the communications over the network are subject to random link failures. We investigate the convergence properties of the algorithm for a diminishing (random) stepsize and a constant stepsize, where each agent chooses its own stepsize independently of the other agents. Under some standard conditions on the gradient errors, we establish almost sure convergence of the method to an optimal point for diminishing stepsize. For constant stepsize, we establish some error bounds on the expected distance from the optimal point and the expected function value. We also provide numerical results. © 2006 IEEE.
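A minimal numerical sketch of this style of algorithm: agents hold private quadratic objectives, a randomly chosen agent broadcasts its iterate, and the receivers average with it and then take a projected, noisy gradient step using their own diminishing stepsizes. The objectives, network size, noise level, and stepsizes are all invented for illustration, and this toy omits the paper's random link-failure model:

```python
import random

# Agent i privately holds f_i(x) = (x - a_i)**2; the network minimizes
# sum_i f_i over the constraint set [0, 10] with no central coordinator.
targets = [1.0, 3.0, 5.0, 7.0]          # private minimizers (optimum: 4.0)
x = [0.0] * len(targets)                # agents' local estimates
random.seed(0)

def project(v, lo=0.0, hi=10.0):        # projection onto the constraint set
    return min(max(v, lo), hi)

for t in range(1, 20001):
    i = random.randrange(len(targets))  # a random agent broadcasts x[i]
    for j in range(len(targets)):
        if j == i:
            continue                    # the broadcaster does not update
        mixed = 0.5 * (x[j] + x[i])     # receivers mix with the broadcast value
        grad = 2.0 * (mixed - targets[j]) + random.gauss(0.0, 0.1)  # noisy gradient
        x[j] = project(mixed - (1.0 / t) * grad)   # own diminishing stepsize 1/t

print(x)  # every local estimate approaches the network optimum, 4.0
```

The consensus (mixing) term pulls the estimates together while the diminishing gradient steps steer the common value toward the minimizer of the sum, mirroring the convergence behavior the paper establishes for diminishing stepsizes.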

Zayed A.,York University | Robinson G.E.,University of Illinois at Urbana - Champaign
Annual Review of Genetics | Year: 2012

Behavior is a complex phenotype that is plastic and evolutionarily labile. The advent of genomics has revolutionized the field of behavioral genetics by providing tools to quantify the dynamic nature of brain gene expression in relation to behavioral output. The honey bee Apis mellifera provides an excellent platform for investigating the relationship between brain gene expression and behavior given both the remarkable behavioral repertoire expressed by members of its intricate society and the degree to which behavior is influenced by heredity and the social environment. Here, we review a linked series of studies that assayed changes in honey bee brain transcriptomes associated with natural and experimentally induced changes in behavioral state. These experiments demonstrate that brain gene expression is closely linked with behavior, that changes in brain gene expression mediate changes in behavior, and that the association between specific genes and behavior exists over multiple timescales, from physiological to evolutionary. © 2012 by Annual Reviews.

Kuzminov A.,University of Illinois at Urbana - Champaign
Molecular Microbiology | Year: 2013

Summary: In both eukaryotes and prokaryotes, chromosomal DNA undergoes replication, condensation-decondensation and segregation, sequentially, in some fixed order. Other conditions, like sister-chromatid cohesion (SCC), may span several chromosomal events. One set of these chromosomal transactions within a single cell cycle constitutes the 'chromosome cycle'. For many years it was generally assumed that the prokaryotic chromosome cycle follows major phases of the eukaryotic one: -replication-condensation-segregation-(cell division)-decondensation-, with SCC of unspecified length. Eventually it became evident that, in contrast to the strictly consecutive chromosome cycle of eukaryotes, all stages of the prokaryotic chromosome cycle run concurrently. Thus, prokaryotes practice 'progressive' chromosome segregation separated from replication by a brief SCC, and all three transactions move along the chromosome at the same fast rate. In other words, in addition to replication forks, there are 'segregation forks' in prokaryotic chromosomes. Moreover, the bulk of prokaryotic DNA outside the replication-segregation transition stays compacted. I consider possible origins of this concurrent replication-segregation and outline the 'nucleoid administration' system that organizes the dynamic part of the prokaryotic chromosome cycle. © 2013 John Wiley & Sons Ltd.

Sun J.,University of Illinois at Urbana - Champaign
Nature communications | Year: 2013

Protein functions are largely affected by their conformations. This is exemplified in proteins containing modular domains. However, the evolutionary dynamics that define and adapt the conformation of such modular proteins remain elusive. Here we show that cis-interactions between the C-terminal phosphotyrosines and SH2 domain within the protein tyrosine phosphatase Shp2 can be tuned by an adaptor protein, Grb2. The competitiveness of two phosphotyrosines, namely pY542 and pY580, for cis-interaction with the same SH2 domain is governed by an antagonistic combination of contextual amino acid sequence and position of the phosphotyrosines. Specifically, pY580 with the combination of a favourable position and an adverse sequence has an overall advantage over pY542. Swapping the sequences of pY542 and pY580 results in one dominant form of cis-interaction and subsequently inhibits the trans-regulation by Grb2. Thus, the antagonistic combination of sequence and position may serve as a basic design principle for proteins with tunable conformations.

Tse E.C.M.,University of Illinois at Urbana - Champaign
Nature Materials | Year: 2016

Many chemical and biological processes involve the transfer of both protons and electrons. The complex mechanistic details of these proton-coupled electron transfer (PCET) reactions require independent control of both electron and proton transfer. In this report, we make use of lipid-modified electrodes to modulate proton transport to a Cu-based catalyst that facilitates the O2 reduction reaction (ORR), a PCET process important in fuel cells and O2 reduction enzymes. By quantitatively controlling the kinetics of proton transport to the catalyst, we demonstrate that undesired side products such as H2O2 and O2− arise from a mismatch between proton and electron transfer rates. Whereas fast proton kinetics induce H2O2 formation and sluggish proton flux produces O2−, proton transfer rates commensurate with O–O bond breaking rates ensure that only the desired H2O product forms. This fundamental insight aids in the development of a comprehensive framework for understanding the ORR and PCET processes in general. © 2016 Nature Publishing Group

Mcmillen D.P.,University of Illinois at Urbana - Champaign
Journal of Regional Science | Year: 2012

Though standard spatial econometric models may be useful for specification testing, they rely heavily on a parametric structure that is highly sensitive to model misspecification. The commonly used spatial AR model is a form of spatial smoothing with a structure that closely resembles a semiparametric model. Nonparametric and semiparametric models are generally a preferable approach for more descriptive spatial analysis. Estimated population density functions illustrate the differences between the spatial AR model and nonparametric approaches to data smoothing. A series of Monte Carlo experiments demonstrates that nonparametric predicted values and marginal effect estimates are much more accurate than spatial AR models when the contiguity matrix is misspecified. © 2012, Wiley Periodicals, Inc.
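As an illustration of the nonparametric alternative, the sketch below fits a Nadaraya-Watson kernel regression (a locally weighted average, the basic building block of such smoothers) to synthetic data; the data-generating process and bandwidth are invented for illustration and are not from the article's experiments:

```python
import numpy as np

def kernel_smooth(x_obs, y_obs, x_eval, bandwidth):
    """Nadaraya-Watson kernel regression with a Gaussian kernel:
    each fitted value is a locally weighted average of the observations."""
    u = (x_eval[:, None] - x_obs[None, :]) / bandwidth
    w = np.exp(-0.5 * u**2)              # Gaussian kernel weights
    return (w @ y_obs) / w.sum(axis=1)

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 10.0, 200))
y = np.sin(x) + rng.normal(0.0, 0.3, 200)   # noisy nonlinear "surface"
fit = kernel_smooth(x, y, x, bandwidth=0.5)
rmse = np.sqrt(np.mean((fit - np.sin(x)) ** 2))
print(rmse)   # recovers sin(x) with error below the 0.3 noise level
```

No functional form was specified in advance, which is why such smoothers remain accurate when a parametric specification, such as a misspecified spatial AR contiguity matrix, is wrong.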

Silverman S.K.,University of Illinois at Urbana - Champaign
Accounts of Chemical Research | Year: 2015

Conspectus: Catalysis is a fundamental chemical concept, and many kinds of catalysts have considerable practical value. Developing entirely new catalysts is an exciting challenge. Rational design and screening have provided many new small-molecule catalysts, and directed evolution has been used to optimize or redefine the function of many protein enzymes. However, these approaches have inherent limitations that prompt the pursuit of different kinds of catalysts using other experimental methods. Nature evolved RNA enzymes, or ribozymes, for key catalytic roles that in modern biology are limited to phosphodiester cleavage/ligation and amide bond formation. Artificial DNA enzymes, or deoxyribozymes, have great promise for a broad range of catalytic activities. They can be identified from unbiased (random) sequence populations as long as the appropriate in vitro selection strategies can be implemented for their identification. Notably, in vitro selection is different in key conceptual and practical ways from rational design, screening, and directed evolution. This Account describes the development by in vitro selection of DNA catalysts for many different kinds of covalent modification reactions of peptide and protein substrates, inspired in part by our earlier work with DNA-catalyzed RNA ligation reactions. In one set of studies, we have sought DNA-catalyzed peptide backbone cleavage, with the long-term goal of artificial DNA-based proteases. We originally anticipated that amide hydrolysis should be readily achieved, but in vitro selection instead surprisingly led to deoxyribozymes for DNA phosphodiester hydrolysis; this was unexpected because uncatalyzed amide bond hydrolysis is 10^5-fold faster. After developing a suitable selection approach that actively avoids DNA hydrolysis, we were able to identify deoxyribozymes for hydrolysis of esters and aromatic amides (anilides). 
Aliphatic amide cleavage remains an ongoing focus, including via inclusion of chemically modified DNA nucleotides in the catalyst, which we have recently found to enable this cleavage reaction. In numerous other efforts, we have investigated DNA-catalyzed peptide side chain modification reactions. Key successes include nucleopeptide formation (attachment of oligonucleotides to peptide side chains) and phosphatase and kinase activities (removal and attachment of phosphoryl groups to side chains). Through all of these efforts, we have learned the importance of careful selection design, including the frequent need to develop specific "capture" reactions that enable the selection process to provide only those DNA sequences that have the desired catalytic functions. We have established strategies for identifying deoxyribozymes that accept discrete peptide and protein substrates, and we have obtained data to inform the key choice of random region length at the outset of selection experiments. Finally, we have demonstrated the viability of modular deoxyribozymes that include a small-molecule-binding aptamer domain, although the value of such modularity is found to be minimal, with implications for many selection endeavors. Advances such as those summarized in this Account reveal that DNA has considerable catalytic abilities for biochemically relevant reactions, specifically including covalent protein modifications. Moreover, DNA has substantially different, and in many ways better, characteristics than do small molecules or proteins for a catalyst that is obtained "from scratch" without demanding any existing information on catalyst structure or mechanism. Therefore, prospects are very strong for continued development and eventual practical applications of deoxyribozymes for peptide and protein modification. © 2015 American Chemical Society.

Ozawa T.,University of Illinois at Urbana - Champaign | Baym G.,University of Illinois at Urbana - Champaign
Physical Review Letters | Year: 2012

We study the stability of Bose condensates with Rashba-Dresselhaus spin-orbit coupling in three dimensions against quantum and thermal fluctuations. The ground state depletion of the plane-wave condensate due to quantum fluctuations is, as we show, finite, and therefore the condensate is stable. We also calculate the corresponding shift of the ground state energy. Although the system cannot condense in the absence of interparticle interactions, by estimating the number of excited particles we show that interactions stabilize the condensate even at nonzero temperature. Unlike in the usual Bose gas, the normal phase is not kinematically forbidden at any temperature; calculating the free energy of the normal phase at finite temperature, and comparing with the free energy of the condensed state, we infer that generally the system is condensed at zero temperature, and undergoes a transition to normal at nonzero temperature. © 2012 American Physical Society.

Bergamaschi A.,University of Illinois at Urbana - Champaign | Katzenellenbogen B.S.,University of Illinois at Urbana - Champaign
Oncogene | Year: 2012

Many estrogen receptor (ER)-positive breast cancers respond well initially to endocrine therapies, but often develop resistance during treatment with selective ER modulators (SERMs) such as tamoxifen. We have reported that the 14-3-3 family member and conserved protein, 14-3-3ζ, is upregulated by tamoxifen and that high expression correlated with an early time to disease recurrence. However, the mechanism by which tamoxifen upregulates 14-3-3ζ and may promote the development of endocrine resistance is not known. Our findings herein reveal that the tamoxifen upregulation of 14-3-3ζ results from its ability to rapidly downregulate microRNA (miR)-451 that specifically targets 14-3-3ζ. The levels of 14-3-3ζ and miR-451 were inversely correlated, with 14-3-3ζ being elevated and miR-451 being at a greatly reduced level in tamoxifen-resistant breast cancer cells. Of note, downregulation of miR-451 was selectively elicited by tamoxifen but not by other SERMs, such as raloxifene or ICI182,780 (Fulvestrant). Increasing the level of miR-451 by overexpression, which decreased 14-3-3ζ, suppressed cell proliferation and colony formation, markedly reduced activation of HER2, EGFR and MAPK signaling, increased apoptosis, and, importantly, restored the growth-inhibitory effectiveness of SERMs in endocrine-resistant cells. Opposite effects were elicited by miR-451 knockdown. Thus, we identify tamoxifen downregulation of miR-451, and consequent elevation of the key survival factor 14-3-3ζ, as a mechanistic basis of tamoxifen-associated development of endocrine resistance. These findings suggest that therapeutic approaches to increase expression of this tumor suppressor-like miR should be considered to downregulate 14-3-3ζ and enhance the effectiveness of endocrine therapies. 
Furthermore, the selective ability of the SERM tamoxifen but not raloxifene to regulate miR-451 and 14-3-3ζ may assist in understanding differences in their activities, as seen in the STAR (Study of Tamoxifen and Raloxifene) breast cancer prevention trial and in other clinical trials. © 2012 Macmillan Publishers Limited All rights reserved.

Boppart S.A.,University of Illinois at Urbana - Champaign | Richards-Kortum R.,Rice University
Science Translational Medicine | Year: 2014

Leveraging advances in consumer electronics and wireless telecommunications, low-cost, portable optical imaging devices have the potential to improve screening and detection of disease at the point of care in primary health care settings in both low- and high-resource countries. Similarly, real-time optical imaging technologies can improve diagnosis and treatment at the point of procedure by circumventing the need for biopsy and analysis by expert pathologists, who are scarce in developing countries. Although many optical imaging technologies have been translated from bench to bedside, industry support is needed to commercialize and broadly disseminate these from the patient level to the population level to transform the standard of care. This review provides an overview of promising optical imaging technologies, the infrastructure needed to integrate them into widespread clinical use, and the challenges that must be addressed to harness the potential of these technologies to improve health care systems around the world. © 2014, American Association for the Advancement of Science. All rights reserved.

Zong X.,University of Illinois at Urbana - Champaign
RNA biology | Year: 2011

The mammalian genome harbors a large number of long non-coding RNAs (lncRNAs) that do not code for proteins, but rather they exert their function directly as RNA molecules. LncRNAs are involved in executing several vital cellular functions. They facilitate the recruitment of proteins to specific chromatin sites, ultimately regulating processes like dosage compensation and genome imprinting. LncRNAs are also known to regulate nucleocytoplasmic transport of macromolecules. A large number of the regulatory lncRNAs are retained within the cell nucleus and constitute a subclass termed nuclear-retained RNAs (nrRNAs). NrRNAs are speculated to be involved in crucial gene regulatory networks, acting as structural scaffolds of subnuclear domains. NrRNAs modulate gene expression by influencing chromatin modification, transcription and post-transcriptional gene processing. The cancer-associated metastasis-associated lung adenocarcinoma transcript 1 (MALAT1) is one such long nrRNA that regulates pre-mRNA processing in mammalian cells. Thus far, our understanding about the roles played by nrRNAs and their relevance in disease pathways is only 'a tip of an iceberg'. It will therefore be crucial to unravel the functions for the vast number of long nrRNAs, buried within the complex mine of the human genome.

Xiang Y.K.,University of Illinois at Urbana - Champaign
Circulation Research | Year: 2011

Activation of adrenergic receptors (AR) represents the primary mechanism to increase cardiac performance under stress. Activated βAR couple to Gs protein, leading to adenylyl cyclase-dependent increases in the second messenger cyclic adenosine monophosphate (cAMP) to activate protein kinase A. The increased protein kinase A activities promote phosphorylation of diversified substrates, ranging from the receptor and its associated partners to proteins involved in increases in contractility and heart rate. Recent progress with live-cell imaging has drastically advanced our understanding of the βAR-induced cAMP and protein kinase A activities that are precisely regulated in a spatiotemporal fashion in highly differentiated myocytes. Several features stand out: membrane location of βAR and its associated complexes dictates the cellular compartmentalization of signaling; βAR agonist dose-dependent equilibrium between cAMP production and cAMP degradation shapes persistent increases in cAMP signals for sustained cardiac contraction response; and arrestin acts as an agonist dose-dependent master switch to promote cAMP diffusion and propagation into intracellular compartments by sequestrating phosphodiesterase isoforms associated with the βAR signaling cascades. These features and the underlying molecular mechanisms of dynamic regulation of βAR complexes with adenylyl cyclase and phosphodiesterase enzymes and the implication in heart failure are discussed. © 2011 American Heart Association, Inc.

Venkatesan B.M.,University of Illinois at Urbana - Champaign | Bashir R.,University of Illinois at Urbana - Champaign
Nature Nanotechnology | Year: 2011

Nanopore analysis is an emerging technique that involves using a voltage to drive molecules through a nanoscale pore in a membrane between two electrolytes, and monitoring how the ionic current through the nanopore changes as single molecules pass through it. This approach allows charged polymers (including single-stranded DNA, double-stranded DNA and RNA) to be analysed with subnanometre resolution and without the need for labels or amplification. Recent advances suggest that nanopore-based sensors could be competitive with other third-generation DNA sequencing technologies, and may be able to rapidly and reliably sequence the human genome for under $1,000. In this article we review the use of nanopore technology in DNA sequencing, genetics and medical diagnostics. © 2011 Macmillan Publishers Limited. All rights reserved.
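The core readout, monitoring drops in ionic current as a molecule transits the pore, can be sketched with a toy threshold detector (the synthetic trace and the 30% blockade depth are illustrative assumptions, not values from the review):

```python
# Hypothetical illustration: find contiguous runs where a (synthetic) nanopore
# current trace drops below a fraction of the open-pore baseline. Each run is
# one candidate translocation event.
def detect_blockades(current, baseline, drop_fraction=0.3):
    """Return (start, end) index pairs of sub-threshold runs."""
    threshold = baseline * (1.0 - drop_fraction)
    events, start = [], None
    for i, sample in enumerate(current):
        if sample < threshold and start is None:
            start = i                      # event begins
        elif sample >= threshold and start is not None:
            events.append((start, i))      # event ends
            start = None
    if start is not None:                  # trace ended mid-event
        events.append((start, len(current)))
    return events

# Synthetic trace: 1.0 (arbitrary units) open-pore current with two 0.5 blockades.
trace = [1.0] * 1000
trace[200:250] = [0.5] * 50
trace[600:640] = [0.5] * 40
print(detect_blockades(trace, baseline=1.0))  # → [(200, 250), (600, 640)]
```

Real analyses add low-pass filtering, adaptive baselines, and per-event dwell-time and amplitude statistics; this only shows the thresholding idea behind the label-free readout.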

Abbas A.E.,University of Illinois at Urbana - Champaign
Operations Research | Year: 2013

The construction of a multiattribute utility function is an important step in decision analysis and can be a challenging task unless some decomposition of the utility function is performed. When every attribute is utility independent of its complement, the utility elicitation task is significantly simplified because the functional form of the utility function requires only one conditional utility function for each attribute, and some normalizing constants. When utility independence conditions do not hold, the conditional utility function of an attribute may vary across the domain of the complement attributes, and therefore a single conditional utility assessment for each attribute may not be sufficient to capture the decision maker's preferences. This paper proposes a method to construct utility functions that have the flexibility to match the variations in the conditional utility function, across the domain of the attributes, using univariate utility assessments at the boundary values. The approach incorporates the boundary assessments into a new function, which we call the double-sided utility copula. This formulation provides a wealth of new functional forms that the analyst may use to incorporate utility dependence in multiattribute decision problems. The utility copula function also allows for the flexibility to incorporate a wide range of trade-off assessments among the attributes, while keeping the utility assessments at the boundary values fixed. It is also useful in determining the order of approximation provided by using certain independence assumptions in a multiattribute decision problem when the attributes are utility dependent. © 2013 INFORMS.
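For contrast with the copula construction, the simplified case the paper starts from (mutual utility independence) can be written down directly; this two-attribute multilinear sketch uses hypothetical normalizing constants, not assessments from the paper:

```python
# Two-attribute multilinear utility under mutual utility independence:
#   U(x, y) = kx*Ux(x) + ky*Uy(y) + (1 - kx - ky)*Ux(x)*Uy(y)
# where Ux, Uy are single-attribute conditional utilities normalized to [0, 1].
# The corner constants kx, ky below are hypothetical.
def multilinear_utility(ux, uy, kx=0.5, ky=0.25):
    kxy = 1.0 - kx - ky          # interaction constant fixed by normalization
    return kx * ux + ky * uy + kxy * ux * uy

# U is 0 at the worst corner and 1 at the best corner by construction.
print(multilinear_utility(0.0, 0.0))   # → 0.0
print(multilinear_utility(1.0, 1.0))   # → 1.0
print(multilinear_utility(0.5, 0.8))
```

When utility independence fails, a single pair (Ux, Uy) no longer suffices; the double-sided utility copula instead stitches together univariate assessments taken at the boundary values of the complement attributes.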

Wang X.,University of Illinois at Urbana - Champaign | Lemmon M.D.,University of Notre Dame
IEEE Transactions on Automatic Control | Year: 2011

This paper examines event-triggered data transmission in distributed networked control systems with packet loss and transmission delays. We propose a distributed event-triggering scheme, where a subsystem broadcasts its state information to its neighbors only when the subsystem's local state error exceeds a specified threshold. In this scheme, a subsystem is able to make broadcast decisions using its locally sampled data. It can also locally predict the maximal allowable number of successive data dropouts (MANSD) and the state-based deadlines for transmission delays. Moreover, the designer's selection of the local event for a subsystem only requires information on that individual subsystem. Our analysis applies to both linear and nonlinear subsystems. Designing local events for a nonlinear subsystem requires us to find a controller that ensures that the subsystem is input-to-state stable. For linear subsystems, the design problem becomes a linear matrix inequality feasibility problem. With the assumption that the number of each subsystem's successive data dropouts is less than its MANSD, we show that if the transmission delays are zero, the resulting system is finite-gain ℒp stable. If the delays are bounded by given deadlines, the system is asymptotically stable. We also show that those state-based deadlines for transmission delays are always greater than a positive constant. © 2006 IEEE.
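The local triggering rule, broadcast only when the error between the current state and the last broadcast value exceeds a threshold, is easy to sketch (the scalar state and drifting trajectory below are illustrative assumptions, not the paper's model):

```python
# Hypothetical sketch of distributed event-triggered broadcasting: a subsystem
# tracks the error between its current state and the value its neighbors last
# received, and rebroadcasts only when that error crosses a threshold.
def broadcast_times(states, threshold):
    """Return sample indices at which a broadcast is triggered."""
    times = [0]                      # the initial state is always sent
    last_sent = states[0]
    for i in range(1, len(states)):
        if abs(states[i] - last_sent) > threshold:   # local error test
            times.append(i)
            last_sent = states[i]    # neighbors now hold this value
    return times

# A slowly drifting state: broadcasts fire only at threshold crossings,
# so 7 samples cost 4 transmissions instead of 7.
xs = [0.0, 0.05, 0.12, 0.13, 0.30, 0.31, 0.55]
print(broadcast_times(xs, threshold=0.1))  # → [0, 2, 4, 6]
```

The paper's contribution lies in proving stability (finite-gain ℒp or asymptotic) under rules of this kind in the presence of packet drops and delays; the sketch only shows the communication-saving mechanism.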

Kwan M.-P.,University of Illinois at Urbana - Champaign
Annals of the Association of American Geographers | Year: 2013

Many fundamental notions in geographic and social science research still tend to be conceptualized largely in static spatial terms, ignoring how our understanding of the issues we study can be greatly enriched through the lenses of time and human mobility. This article revisits three such notions: racial segregation, environmental exposure, and accessibility. It argues for the need to expand our analytical focus from static residential spaces to other relevant places and times in people's everyday lives. Mobility is an essential element of people's spatiotemporal experiences, and these complex experiences cannot be fully understood by just looking at where people live. As many social scientists are interested in studying segregation, environmental exposure, and accessibility, geographers can contribute to advancing temporally integrated analysis of these issues through careful examination of people's everyday experiences as their lives unfold in space and time. Interdisciplinary research along this line could have a broad impact on many disciplines beyond geography. © 2013 Copyright Taylor and Francis Group, LLC.

Walker S.B.,University of Illinois at Urbana - Champaign | Lewis J.A.,University of Illinois at Urbana - Champaign
Journal of the American Chemical Society | Year: 2012

Reactive silver inks for printing highly conductive features (>10⁴ S/cm) at room temperature have been created. These inks are stable, particle-free, and suitable for a wide range of patterning techniques. Upon annealing at 90 °C, the printed electrodes exhibit an electrical conductivity equivalent to that of bulk silver. © 2012 American Chemical Society.

Wu J.,University of Illinois at Urbana - Champaign | Yang H.,University of Illinois at Urbana - Champaign
Accounts of Chemical Research | Year: 2013

An efficient oxygen reduction reaction (ORR) offers the potential for clean energy generation in low-temperature, proton-exchange membrane fuel cells running on hydrogen fuel and air. In the past several years, researchers have developed high-performance electrocatalysts for the ORR to address the obstacles of the high cost of the Pt catalyst per kilowatt of output power and of declining catalyst activity over time. Current efforts are focused on new catalyst structures that add a secondary metal to change the d-band center and the surface atomic arrangement of the catalyst, altering the chemisorption of those oxygen-containing species that have the largest impact on the ORR kinetics and improving the catalyst activity and cost effectiveness. This Account reviews recent progress in the design of Pt-based ORR electrocatalysts, including improved understanding of the reaction mechanisms and the development of synthetic methods for producing catalysts with high activity and stability. Researchers have made several types of highly active catalysts, including extended single-crystal surfaces of Pt and its alloys, bimetallic nanoparticles, and self-supported, low-dimensional nanostructures. We focus on the design and synthetic strategies for ORR catalysts, including controlling the shape (or facet) and size of Pt and its bimetallic alloys, and controlling the surface composition and structure of core-shell, monolayer, and hollow porous structures. The strong dependence of ORR performance on facet and size suggests that synthesizing nanocrystals with large, highly reactive {111} facets could be as important, if not more important, to increasing their activity as simply making smaller nanoparticles. A newly developed carbon monoxide (CO)-assisted reduction method produces Pt bimetallic nanoparticles with controlled facets. This CO-based approach works well to control shapes because of the selective binding of CO on different low-indexed metal surfaces. Post-treatment under different gas environments is also important in controlling the elemental distribution, especially the surface composition, in core-shell and bimetallic alloy nanostructures. Besides surface composition and facet, surface strain plays an important role in determining the ORR activity. The surface strain depends on the crystal size and on the presence of an interface (a lattice mismatch or twinned boundary), both in nanocrystals and in extended single-crystal surfaces, all of which may be factors in metal alloys. Since the common, effective reaction pathway for the ORR is a four-electron process and the surface binding of oxygen-containing species is typically the limiting step, density functional theory (DFT) calculations are useful for predicting ORR performance over bimetallic catalysts. Finally, we have noticed variations among the published values for the activity and durability of ORR catalysts in recent papers. The differences are often due to the data quality and the protocols used for carrying out the analysis using a rotating disk electrode (RDE). Thus, we briefly discuss some practices used in such half-cell measurements, such as sample preparation and measurement, data reliability (in both kinetic current density and durability measurements), and iR correction, that could lead to more consistency in measured values and in evaluating catalyst performance. © 2013 American Chemical Society.

Holley J.L.,University of Illinois at Urbana - Champaign
Clinical Journal of the American Society of Nephrology | Year: 2012

Advance care planning was historically considered to be simply the completion of a proxy (health care surrogate designation) or instruction (living will) directive that resulted from a conversation between a patient and his or her physician. We now know that advance care planning is a much more comprehensive and dynamic patient-centered process used by patients and families to strengthen relationships, achieve control over medical care, prepare for death, and clarify goals of care. Some advance directives, notably designated health care proxy documents, remain appropriate expressions of advance care planning. Moreover, although physician orders, such as do-not-resuscitate orders and Physician Orders for Life-Sustaining Treatment, may not be strictly defined as advance directives, their completion, when appropriate, is an integral component of advance care planning. The changing health circumstances and illness trajectory characteristic of ESRD mandate that advance care planning discussions adapt to a patient's situation and therefore must be readdressed at appropriate times and intervals. The options of withholding and withdrawing dialysis add ESRD-specific issues to advance care planning in this population and are events each nephrologist will at some time confront. Advance care planning is important throughout the spectrum of ESRD and is a part of nephrology practice that can be rewarding to nephrologists and beneficial to patients and their families. © 2012 by the American Society of Nephrology.

Viswanathan M.,University of Illinois at Urbana - Champaign
Journal of Product Innovation Management | Year: 2012

In recent years, market-based approaches have been proposed for the base of the pyramid (BoP). However, the literature offers little theoretical or practical guidelines for innovative product development for what are radically new market contexts for most businesses in advanced economies. Considering that product development is a fundamental activity in a market economy, and that much BoP consumer welfare potentially arises from innovative and affordable goods and services that can solve critical life needs, this is a substantial gap in knowledge. This paper attempts to address this gap by using an analysis of 13 year-long university projects on BoP-focused concept and prototype development conducted between 2006 and 2010. An inventory of research propositions is developed that identifies factors necessary for effective product development for BoP markets. Implications for new product development research and practice are discussed. © 2011 Product Development & Management Association.

Ha T.,University of Illinois at Urbana - Champaign
Cell | Year: 2013

Enormous mechanistic insight has been gained by studying the behavior of single molecules. The same approaches used to study proteins in isolation are now being leveraged to examine the changes in functional behavior that emerge when single molecules have company. © 2013 Elsevier Inc.

Leslie S.-J.,Princeton University | Cimpian A.,University of Illinois at Urbana - Champaign | Meyer M.,Otterbein University | Freeland E.,Princeton University
Science | Year: 2015

The gender imbalance in STEM subjects dominates current debates about women's underrepresentation in academia. However, women are well represented at the Ph.D. level in some sciences and poorly represented in some humanities (e.g., in 2011, 54% of U.S. Ph.D.'s in molecular biology were women versus only 31% in philosophy). We hypothesize that, across the academic spectrum, women are underrepresented in fields whose practitioners believe that raw, innate talent is the main requirement for success, because women are stereotyped as not possessing such talent. This hypothesis extends to African Americans' underrepresentation as well, as this group is subject to similar stereotypes. Results from a nationwide survey of academics support our hypothesis (termed the field-specific ability beliefs hypothesis) over three competing hypotheses. © 2015, American Association for the Advancement of Science. All rights reserved.

Nienhaus K.,Karlsruhe Institute of Technology | Ulrich Nienhaus G.,Karlsruhe Institute of Technology | Ulrich Nienhaus G.,University of Illinois at Urbana - Champaign
Chemical Society Reviews | Year: 2014

Fluorescent proteins (FPs) from the GFP family have become indispensable as marker tools for imaging live cells, tissues and entire organisms. A wide variety of these proteins have been isolated from natural sources and engineered to optimize their properties as genetically encoded markers. Here we review recent developments in this field. A special focus is placed on photoactivatable FPs, for which the fluorescence emission can be controlled by light irradiation at specific wavelengths. They enable regional optical marking in pulse-chase experiments on live cells and tissues, and they are essential marker tools for live-cell optical imaging with super-resolution. Photoconvertible FPs, which can be activated irreversibly via a photo-induced chemical reaction that either turns on their emission or changes their emission wavelength, are excellent markers for localization-based super-resolution microscopy (e.g., PALM). Patterned illumination microscopy (e.g., RESOLFT), however, requires markers that can be reversibly photoactivated many times. Photoswitchable FPs can be toggled repeatedly between a fluorescent and a non-fluorescent state by means of a light-induced chromophore isomerization coupled to a protonation reaction. We discuss the mechanistic origins of the effect and illustrate how photoswitchable FPs are employed in RESOLFT imaging. For this purpose, special FP variants with low switching fatigue have been introduced in recent years. Despite nearly two decades of FP engineering by many laboratories, there is still room for further improvement of these important markers for live cell imaging. © 2014 The Royal Society of Chemistry.

Deryugina T.,University of Illinois at Urbana - Champaign
Climatic Change | Year: 2013

Global warming has become a controversial public policy issue in spite of broad scientific consensus that it is real and that human activity is a contributing factor. It is likely that public consensus is also needed to support policies that might counteract it. It is therefore important to understand how people form and update their beliefs about climate change. Using unique survey data on beliefs about the occurrence of the effects of global warming, I estimate how local temperature fluctuations influence what individuals believe about these effects. I find that some features of the updating process are consistent with rational updating. I also test explicitly for the presence of several heuristics known to affect belief formation and find strong evidence for representativeness, some evidence for availability, and no evidence for spreading activation. I find that very short-run temperature fluctuations (1 day-2 weeks) have no effect on beliefs about the occurrence of global warming, but that longer-run fluctuations (1 month-1 year) are significant predictors of beliefs. Only respondents with a conservative political ideology are affected by temperature abnormalities. © 2012 Springer Science+Business Media Dordrecht.
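The "rational updating" benchmark the paper tests against can be illustrated with a one-line application of Bayes' rule (the probabilities below are invented for illustration and are not estimates from the survey data):

```python
# Toy Bayesian update: the posterior belief that warming is occurring after
# observing, say, an abnormally warm month. All numbers are hypothetical.
def bayes_update(prior, p_obs_if_warming, p_obs_if_not):
    numerator = prior * p_obs_if_warming
    return numerator / (numerator + (1.0 - prior) * p_obs_if_not)

# A 50/50 prior plus an observation 1.5x more likely under warming nudges
# the belief upward, consistent with longer-run fluctuations shifting beliefs.
print(bayes_update(0.5, 0.6, 0.4))  # → 0.6
```

The paper's point is that actual updating deviates from this benchmark in heuristic-specific ways (representativeness in particular), and only for some respondents.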

Sturm R.,RAND Corporation | An R.,University of Illinois at Urbana - Champaign
CA Cancer Journal for Clinicians | Year: 2014

This review summarizes current understanding of economic factors during the obesity epidemic and dispels some widely held, but incorrect, beliefs. Rising obesity rates coincided with increases in leisure time (rather than increased work hours), increased fruit and vegetable availability (rather than a decline in healthier foods), and increased exercise uptake. As a share of disposable income, Americans now have the cheapest food available in history, which fueled the obesity epidemic. Weight gain was surprisingly similar across sociodemographic groups and geographic areas, rather than specific to some groups (at any given point in time, however, there are clear disparities). This suggests that if one wants to understand the role of the environment in the obesity epidemic, one needs to understand changes over time affecting all groups, not differences between subgroups at a given time. Although economic and technological changes in the environment drove the obesity epidemic, the evidence for effective economic policies to prevent obesity remains limited. Taxes on foods with low nutritional value could nudge behavior toward healthier diets, as could subsidies/discounts for healthier foods. However, even a large price change for healthy foods could close only part of the gap between dietary guidelines and actual food consumption. Political support has been lacking for even moderate price interventions in the United States, and this may continue until the role of environmental factors is accepted more widely. As opinion leaders, clinicians play an important role in shaping the understanding of the causes of obesity. © 2014 American Cancer Society.

Pop E.,University of Illinois at Urbana - Champaign
Nano Research | Year: 2010

Understanding energy dissipation and transport in nanoscale structures is of great importance for the design of energy-efficient circuits and energy-conversion systems. This is also a rich domain for fundamental discoveries at the intersection of electron, lattice (phonon), and optical (photon) interactions. This review presents recent progress in understanding and manipulation of energy dissipation and transport in nanoscale solid-state structures. First, the landscape of power usage from nanoscale transistors (~10⁻⁸ W) to massive data centers (~10⁹ W) is surveyed. Then, focus is given to energy dissipation in nanoscale circuits, silicon transistors, carbon nanostructures, and semiconductor nanowires. Concepts of steady-state and transient thermal transport are also reviewed in the context of nanoscale devices with sub-nanosecond switching times. Finally, recent directions regarding energy transport are reviewed, including electrical and thermal conductivity of nanostructures, thermal rectification, and the role of ubiquitous material interfaces. © 2010 Tsinghua University Press and Springer-Verlag Berlin Heidelberg.

Burkhardt Jr. R.W.,University of Illinois at Urbana - Champaign
Genetics | Year: 2013

Scientists are not always remembered for the ideas they cherished most. In the case of the French biologist Jean-Baptiste Lamarck, his name since the end of the nineteenth century has been tightly linked to the idea of the inheritance of acquired characters. This was indeed an idea that he endorsed, but he did not claim it as his own nor did he give it much thought. He took pride instead in advancing the ideas that (1) nature produced successively all the different forms of life on earth, and (2) environmentally induced behavioral changes lead the way in species change. This article surveys Lamarck's ideas about organic change, identifies several ironies with respect to how his name is commonly remembered, and suggests that some historical justice might be done by using the adjective "Lamarckian" to denote something more (or other) than a belief in the inheritance of acquired characters. © 2013 by the Genetics Society of America.

Gilbert I.,University of Illinois at Urbana - Champaign
Nature Physics | Year: 2014

Artificial spin ice comprises a class of frustrated arrays of interacting single-domain ferromagnetic nanostructures. Previous studies of artificial spin ice have focused on simple lattices based on natural frustrated materials. Here we experimentally examine artificial spin ice created on the shakti lattice, a structure that does not directly correspond to any known natural magnetic material. On the shakti lattice, none of the near-neighbour interactions is locally frustrated, but instead the lattice topology frustrates the interactions leading to a high degree of degeneracy. We demonstrate that the shakti system achieves a physical realization of the classic six-vertex model ground state. Furthermore, we observe that the mixed coordination of the shakti lattice leads to crystallization of effective magnetic charges and the screening of magnetic excitations, underscoring the importance of magnetic charge as the relevant degree of freedom in artificial spin ice and opening new possibilities for studies of its dynamics.

Kemper J.K.,University of Illinois at Urbana - Champaign
Biochimica et Biophysica Acta - Molecular Basis of Disease | Year: 2011

Abnormally elevated lipid and glucose levels due to the disruption of metabolic homeostasis play causative roles in the development of metabolic diseases. A cluster of metabolic conditions, including dyslipidemia, abdominal obesity, and insulin resistance, is referred to as metabolic syndrome, which has been increasing globally at an alarming rate. The primary nuclear bile acid receptor, Farnesoid X Receptor (FXR, NR1H4), plays important roles in controlling lipid and glucose levels by regulating expression of target genes in response to bile acid signaling in enterohepatic tissues. In this review, I discuss how signal-dependent FXR transcriptional activity is dynamically regulated under normal physiological conditions and how it is dysregulated in metabolic disease states. I focus on the emerging roles of post-translational modifications (PTMs) and transcriptional cofactors in modulating FXR transcriptional activity and pathways. Dysregulation of nuclear receptor transcriptional signaling due to aberrant PTMs and cofactor interactions are key determinants in the development of metabolic diseases. Therefore, targeting such abnormal PTMs and transcriptional cofactors of FXR in disease states may provide a new molecular strategy for development of pharmacological agents to treat metabolic syndrome. This article is part of a Special Issue entitled: Translating nuclear receptors from health to disease. © 2010 Elsevier B.V.

This paper performed a comparative analysis of organic Rankine cycle (ORC) using different working fluids, in order to recover waste heat from a solid oxide fuel cell-gas turbine hybrid power cycle. Depending on operating parameters, criteria for the choice of the working fluid were identified. Results reveal that due to a significant temperature glide of the exhaust gas, the actual ORC cycle thermal efficiency strongly depends on the turbine inlet temperature, exhaust gas temperature, and fluid's critical point temperature. When exhaust gas temperature varies in the range of 500K to 600K, R123 is preferred among the nine dry typical organic fluids because of the highest and most stabilized mean thermal efficiency under wide operating conditions and its reasonable condensing pressure and turbine outlet specific volume, which in turn results in a feasible ORC cycle for practical concerns. © 2012 John Wiley & Sons, Ltd.
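As a quick sanity check on the efficiency discussion (my own arithmetic, not the paper's cycle model), the Carnot limit for the stated exhaust-temperature range bounds what any ORC could achieve, assuming a roughly 300 K condensing temperature:

```python
# Carnot upper bound on thermal efficiency for heat recovery from exhaust gas
# at 500-600 K, rejecting heat at an assumed 300 K condensing temperature.
# Real ORC efficiencies (with R123 and similar fluids) fall well below this.
def carnot_efficiency(t_hot_k, t_cold_k=300.0):
    return 1.0 - t_cold_k / t_hot_k

for t_hot in (500.0, 550.0, 600.0):
    print(f"{t_hot:.0f} K exhaust: Carnot limit {carnot_efficiency(t_hot):.1%}")
```

This also shows why the sensitivity to turbine inlet and exhaust temperatures is expected: the available efficiency window moves directly with the hot-side temperature.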

McMahon J.M.,University of Illinois at Urbana - Champaign | Morales M.A.,Lawrence Livermore National Laboratory | Pierleoni C.,University of L'Aquila | Ceperley D.M.,University of Illinois at Urbana - Champaign
Reviews of Modern Physics | Year: 2012

Hydrogen and helium are the most abundant elements in the Universe. They are also, in principle, the most simple. Nonetheless, they display remarkable properties under extreme conditions of pressure and temperature that have fascinated theoreticians and experimentalists for over a century. Advances in computational methods have made it possible to elucidate ever more of their properties. Some of these methods that have been applied in recent years, in particular, those that perform simulations directly from the physical picture of electrons and ions, such as density functional theory and quantum Monte Carlo are reviewed. The predictions from such methods as applied to the phase diagram of hydrogen, with particular focus on the solid phases and the liquid-liquid transition are discussed. The predictions of ordered quantum states, including the possibilities of a low- or zero-temperature quantum fluid and high-temperature superconductivity are also considered. Finally, pure helium and hydrogen-helium mixtures, the latter which has particular relevance to planetary physics, are discussed. © 2012 American Physical Society.

Wong G.C.L.,University of Illinois at Urbana - Champaign | Pollack L.,Cornell University
Annual Review of Physical Chemistry | Year: 2010

Charges on biological polymers in physiologically relevant solution conditions are strongly screened by water and salt solutions containing counter-ions. However, the entropy of these counterions can result in surprisingly strong interactions between charged objects in water despite short screening lengths, via coupling between osmotic and electrostatic interactions. Widespread work in theory, experiment, and computation has been carried out to gain a fundamental understanding of the rich, yet sometimes counterintuitive, behavior of these polyelectrolyte systems. Examples of polyelectrolyte association in biology include DNA packaging and RNA folding, as well as aggregation and self-organization phenomena in different disease states. Copyright © 2010 by Annual Reviews. All rights reserved.

Erickson J.,University of Illinois at Urbana - Champaign
Proceedings of the Annual ACM-SIAM Symposium on Discrete Algorithms | Year: 2010

We observe that the classical maximum flow problem in any directed planar graph G can be reformulated as a parametric shortest path problem in the oriented dual graph G*. This reformulation immediately suggests an algorithm to compute maximum flows, which runs in O(n log n) time. As we continuously increase the parameter, each change in the shortest path tree can be effected in O(log n) time using standard dynamic tree data structures, and the special structure of the parametrization implies that each directed edge enters the evolving shortest path tree at most once. The resulting maximum-flow algorithm is identical to the recent algorithm of Borradaile and Klein [J. ACM 2009], but our new formulation allows a simpler presentation and analysis. On the other hand, we demonstrate that for a similarly structured parametric shortest path problem on the torus, the shortest path tree can change Ω(n²) times in the worst case, suggesting that a different method may be required to efficiently compute maximum flows in higher-genus graphs. Copyright © by SIAM.
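The quantity being computed can be seen on a tiny instance with a generic augmenting-path routine; this Edmonds-Karp sketch is not the paper's O(n log n) parametric-shortest-path algorithm, just a baseline showing what a planar max-flow computation returns (the graph is a made-up example):

```python
from collections import deque

def max_flow(cap, s, t):
    """cap: dict {u: {v: capacity}}; returns the maximum s-t flow value."""
    # Build residual capacities, adding zero-capacity reverse edges.
    res = {u: dict(nbrs) for u, nbrs in cap.items()}
    for u in list(cap):
        for v in cap[u]:
            res.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for a shortest augmenting path (Edmonds-Karp).
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow                     # no augmenting path remains
        # Recover the path, find its bottleneck, and push flow along it.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= bottleneck
            res[v][u] += bottleneck
        flow += bottleneck

# Small planar example: two s-t routes of capacity 3 and 2.
g = {'s': {'a': 3, 'b': 2}, 'a': {'t': 3}, 'b': {'t': 2}, 't': {}}
print(max_flow(g, 's', 't'))  # → 5
```

Generic augmentation ignores planarity entirely; the dual reformulation above is what lets the planar case be solved in near-linear time.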

Oldfield E.,University of Illinois at Urbana - Champaign
Accounts of Chemical Research | Year: 2010

The isoprenoid biosynthesis pathways produce the largest class of small molecules in Nature: isoprenoids (also called terpenoids). Not surprisingly, then, isoprenoid biosynthesis is a target for drug discovery, and many drugs, such as Lipitor (used to lower cholesterol), Fosamax (used to treat osteoporosis), and many anti-infectives, target isoprenoid biosynthesis. However, drug resistance in malaria, tuberculosis, and staph infections is rising, cheap and effective drugs for the neglected tropical diseases are lacking, and progress in the development of anticancer drugs is relatively slow. Isoprenoid biosynthesis is thus an attractive target, and in this Account, I describe developments in four areas, using in each case knowledge derived from one area of chemistry to guide the development of inhibitors (or drug leads) in another, seemingly unrelated, area. First, I describe mechanistic studies of the enzyme IspH, which is present in malaria parasites and most pathogenic bacteria, but not in humans. IspH is a 4Fe-4S protein and produces the five-carbon (C5) isoprenoids IPP (isopentenyl diphosphate) and DMAPP (dimethylallyl diphosphate) from HMBPP (E-1-hydroxy-2-methyl-but-2-enyl-4-diphosphate) via a 2H⁺/2e⁻ reduction (of an allyl alcohol to an alkene). The mechanism is unusual in that it involves organometallic species: "metallacycles" (η²-alkenes) and η¹/η³-allyls. These observations lead to novel alkyne inhibitors, which also form metallacycles. Second, I describe structure-function-inhibition studies of FPP synthase, the macromolecule that condenses IPP and DMAPP to the sesquiterpene farnesyl diphosphate (FPP) in a "head-to-tail" manner. This enzyme uses a carbocation mechanism and is potently inhibited by bone resorption drugs (bisphosphonates), which I show are also antiparasitic agents that block sterol biosynthesis in protozoa.
Moreover, "lipophilic" bisphosphonates inhibit protein prenylation and invasiveness in tumor cells, in addition to activating γδ T cells to kill tumor cells, and are important new leads in oncology. Third, I describe structural and inhibition studies of a "head-to-head" triterpene synthase, dehydrosqualene synthase (CrtM), from Staphylococcus aureus. CrtM catalyzes the first committed step in the biosynthesis of the carotenoid virulence factor staphyloxanthin: the condensation of two FPP molecules to produce a cyclopropane (presqualene diphosphate). The structure of CrtM is similar to that of human squalene synthase (SQS), and some SQS inhibitors (originally developed as cholesterol-lowering drugs) block staphyloxanthin biosynthesis. Treated bacteria are white and nonvirulent (because they lack the carotenoid shield that protects them from reactive oxygen species produced by neutrophils), rendering them susceptible to innate immune system clearance, a new therapeutic approach. And finally, I show that the heart drug amiodarone, also known to have antifungal activity, blocks ergosterol biosynthesis at the level of oxidosqualene cyclase in Trypanosoma cruzi, work that has led to its use in the clinic as a novel antiparasitic agent. In each of these four examples, I use information from one area (organometallic chemistry, bone resorption drugs, cholesterol-lowering agents, heart disease) to develop drug leads in an unrelated area: a "knowledge-based" approach that represents an important advance in the search for new drugs. © 2010 American Chemical Society.

Bang J.H.,University of Illinois at Urbana - Champaign | Suslick K.S.,University of Illinois at Urbana - Champaign
Advanced Materials | Year: 2010

Recent advances in nanostructured materials have been led by the development of new synthetic methods that provide control over size, morphology, and nano/microstructure. The utilization of high-intensity ultrasound offers a facile, versatile synthetic tool for nanostructured materials that are often unavailable by conventional methods. The primary physical phenomena associated with ultrasound that are relevant to materials synthesis are cavitation and nebulization. Acoustic cavitation (the formation, growth, and implosive collapse of bubbles in a liquid) creates extreme conditions inside the collapsing bubble and serves as the origin of most sonochemical phenomena in liquids or liquid-solid slurries. Nebulization (the creation of mist from ultrasound passing through a liquid and impinging on a liquid-gas interface) is the basis for ultrasonic spray pyrolysis (USP), with subsequent reactions occurring in the heated droplets of the mist. In both cases, we have examples of phase-separated attoliter microreactors: for sonochemistry, it is a hot gas inside bubbles isolated from one another in a liquid, while for USP it is hot droplets isolated from one another in a gas. Cavitation-induced sonochemistry provides a unique interaction between energy and matter, with hot spots inside the bubbles of ∼5000 K, pressures of ∼1000 bar, and heating and cooling rates of >10¹⁰ K s⁻¹; these extraordinary conditions permit access to a range of chemical reaction space normally not accessible, which allows for the synthesis of a wide variety of unusual nanostructured materials. Complementary to cavitational chemistry, the microdroplet reactors created by USP facilitate the formation of a wide range of nanocomposites. In this review, we summarize the fundamental principles of both synthetic methods and recent developments in the applications of ultrasound in nanostructured materials synthesis. © 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

Chen L.-F.,University of Illinois at Urbana - Champaign
Journal of Cellular Biochemistry | Year: 2012

Emerging evidence indicates that RUNX3 is a tumor suppressor in breast cancer. RUNX3 is frequently inactivated in human breast cancer cell lines and cancer samples by hemizygous deletion of the Runx3 gene, hypermethylation of the Runx3 promoter, or cytoplasmic sequestration of RUNX3 protein. Inactivation of RUNX3 is associated with the initiation and progression of breast cancer. Female Runx3 +/- mice spontaneously develop ductal carcinoma, and overexpression of RUNX3 inhibits the proliferation, tumorigenic potential, and invasiveness of breast cancer cells. This review is intended to summarize these findings and discuss the tumor suppressor function of RUNX3 in breast cancer. © 2012 Wiley Periodicals, Inc.

Forsyth D.A.,University of Illinois at Urbana - Champaign
International Journal of Computer Vision | Year: 2011

The shading on curved surfaces is a cue to shape. Current computer vision methods for analyzing shading use physically unrealistic models, have serious mathematical problems, cannot exploit geometric information if it is available, and are not reliable in practice. We introduce a novel method of accounting for variations in irradiance resulting from interreflections, complex sources and the like. Our approach uses a spatially varying source model with a local shading model. Fast spatial variation in the source is penalised, consistent with the rendering community's insight that interreflections are spatially slow. This yields a physically plausible shading model. Because modern cameras can make accurate reports of observed radiance, our method compels the reconstructed surface to have shading exactly consistent with that of the image. For inference, we use a variational formulation, with a selection of regularization terms which guarantee that a solution exists. Our method is evaluated on physically accurate renderings of virtual objects, and on images of real scenes, for a variety of different kinds of boundary condition. Reconstructions for single sources compare well with photometric stereo reconstructions and with ground truth. © 2010 Springer Science+Business Media, LLC.

Sun N.,University of Illinois at Urbana - Champaign
Biotechnology and Bioengineering | Year: 2013

Transcription activator-like effector (TALE) nucleases (TALENs) have recently emerged as a revolutionary genome editing tool in many different organisms and cell types. The site-specific chromosomal double-strand breaks introduced by TALENs significantly increase the efficiency of genomic modification. The modular nature of the TALE central repeat domains enables researchers to tailor DNA recognition specificity with ease and target essentially any desired DNA sequence. Here, we comprehensively review the development of TALEN technology in terms of scaffold optimization, DNA recognition, and repeat array assembly. In addition, we provide some perspectives on the future development of this technology. Copyright © 2013 Wiley Periodicals, Inc.
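The modular DNA recognition described above follows the well-established repeat-variable diresidue (RVD) code, in which each TALE repeat reads one base. A minimal sketch of decoding a repeat array into its predicted DNA target (the helper name is ours; real targets also carry a 5' T requirement and degenerate RVDs not modeled here):

```python
# Canonical TALE RVD -> base associations (NN also tolerates A in practice).
RVD_CODE = {"NI": "A", "HD": "C", "NG": "T", "NN": "G"}

def target_of(repeat_array):
    """Predict the DNA sequence read by an ordered list of TALE RVDs."""
    return "".join(RVD_CODE[rvd] for rvd in repeat_array)

# Hypothetical 5-repeat array:
print(target_of(["NI", "HD", "NG", "NN", "NG"]))  # ACTGT
```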

Efron M.,University of Illinois at Urbana - Champaign
Journal of the American Society for Information Science and Technology | Year: 2011

Modern information retrieval (IR) has come to terms with numerous new media in efforts to help people find information in increasingly diverse settings. Among these new media are so-called microblogs. A microblog is a stream of text that is written by an author over time. It comprises many very brief updates that are presented to the microblog's readers in reverse-chronological order. Today, the service called Twitter is the most popular microblogging platform. Although microblogging is increasingly popular, methods for organizing and providing access to microblog data are still new. This review offers an introduction to the problems that face researchers and developers of IR systems in microblog settings. After an overview of microblogs and the behavior surrounding them, the review describes established problems in microblog retrieval, such as entity search and sentiment analysis, and modeling abstractions, such as authority and quality. The review also treats user-created metadata that often appear in microblogs. Because the problem of microblog search is so new, the review concludes with a discussion of particularly pressing research issues yet to be studied in the field. © 2011 ASIS&T.

Hammes-Schiffer S.,University of Illinois at Urbana - Champaign
Journal of the American Chemical Society | Year: 2015

Proton-coupled electron transfer (PCET) is ubiquitous throughout chemistry and biology. This Perspective discusses recent advances and current challenges in the field of PCET, with an emphasis on the role of theory and computation. The fundamental theoretical concepts are summarized, and expressions for rate constants and kinetic isotope effects are provided. Computational methods for calculating reduction potentials and pKa's for molecular electrocatalysts, as well as insights into linear correlations and non-innocent ligands, are also described. In addition, computational methods for simulating the nonadiabatic dynamics of photoexcited PCET are discussed. Representative applications to PCET in solution, proteins, electrochemistry, and photoinduced processes are presented, highlighting the interplay between theoretical and experimental studies. The current challenges and suggested future directions are outlined for each type of application, concluding with an overall view to the future. © 2015 American Chemical Society.
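The rate-constant expressions alluded to above typically take a golden-rule, Marcus-like form summed over reactant and product vibronic states. A commonly quoted nonadiabatic PCET rate constant, sketched here from the general literature rather than transcribed from this Perspective, is:

```latex
k \;=\; \sum_{\mu} P_{\mu} \sum_{\nu}
\frac{\left| V^{\mathrm{el}} S_{\mu\nu} \right|^{2}}{\hbar}
\sqrt{\frac{\pi}{\lambda\, k_{B} T}}
\exp\!\left[ -\frac{\left( \Delta G^{\circ}_{\mu\nu} + \lambda \right)^{2}}{4 \lambda k_{B} T} \right]
```

where P_μ is the Boltzmann population of reactant vibronic state μ, S_μν the overlap of the reactant and product proton vibrational wavefunctions, λ the reorganization energy, and ΔG°_μν the reaction free energy for the μ → ν pair. Kinetic isotope effects arise chiefly from the strong H/D dependence of S_μν.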

Eichenbaum H.,Boston University | Cohen N.J.,University of Illinois at Urbana - Champaign
Neuron | Year: 2014

Some argue that hippocampus supports declarative memory, our capacity to recall facts and events, whereas others view the hippocampus as part of a system dedicated to calculating routes through space, and these two contrasting views are pursued largely independently in current research. Here we offer a perspective on where these views can and cannot be reconciled and update a bridging framework that will improve our understanding of hippocampal function. Currently, separate lines of research focus on hippocampal function in spatial navigation or in declarative memory. Here, Eichenbaum and Cohen update their relational memory account to reconcile these approaches and offer new insights into the fundamental mechanisms of memory. © 2014 Elsevier Inc.

Silverman S.K.,University of Illinois at Urbana - Champaign
Angewandte Chemie - International Edition | Year: 2010

Beyond biological DNA : Chemists are exploiting DNA for interesting applications as a catalyst, encoding component, and stereocontrol element. Each of these chemical applications takes advantage of a distinct subset of DNA's properties in ways not known in nature. © 2010 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

Wilson B.A.,University of Illinois at Urbana - Champaign | Ho M.,University of Illinois at Urbana - Champaign
Clinical Microbiology Reviews | Year: 2013

In a world where most emerging and reemerging infectious diseases are zoonotic in nature and our contacts with both domestic and wild animals abound, there is growing awareness of the potential for human acquisition of animal diseases. Like other Pasteurellaceae, Pasteurella species are highly prevalent among animal populations, where they are often found as part of the normal microbiota of the oral, nasopharyngeal, and upper respiratory tracts. Many Pasteurella species are opportunistic pathogens that can cause endemic disease and are associated increasingly with epizootic outbreaks. Zoonotic transmission to humans usually occurs through animal bites or contact with nasal secretions, with P. multocida being the most prevalent isolate observed in human infections. Here we review recent comparative genomics and molecular pathogenesis studies that have advanced our understanding of the multiple virulence mechanisms employed by Pasteurella species to establish acute and chronic infections. We also summarize efforts being explored to enhance our ability to rapidly and accurately identify and distinguish among clinical isolates and to control pasteurellosis by improved development of new vaccines and treatment regimens. © 2013, American Society for Microbiology. All Rights Reserved.

Tappenden K.A.,University of Illinois at Urbana - Champaign
JPEN. Journal of Parenteral and Enteral Nutrition | Year: 2014

The human small intestine is organized with a proximal-to-distal gradient of mucosal structure and nutrient processing capacity. However, certain nutrients undergo site-specific digestion and absorption, such as iron and folate in the duodenum/jejunum vs vitamin B12 and bile salts in the ileum. Intestinal resection can result in short bowel syndrome (SBS) due to reduction of total and/or site-specific nutrient processing areas. Depending on the segment(s) of intestine resected, malabsorption can be nutrient specific (eg, vitamin B12 or fat) or sweeping, with deficiencies in energy, protein, and various micronutrients. Jejunal resections are generally better tolerated than ileal resections because the ileum has greater postresection adaptive capacity than the jejunum. Following intestinal resection, energy scavenging and fluid absorption become particularly important in the colon owing to loss of digestive and absorptive surface area in the resected portion. Resection-induced alterations in enteroendocrine cell abundance can further disrupt intestinal function. For example, patients with end jejunostomy have depressed circulating peptide YY and glucagon-like peptide 2 concentrations, which likely contribute to the rapid intestinal transit and blunted intestinal adaptation observed in this population. SBS-associated pathophysiology often extends beyond the gastrointestinal tract, with hepatobiliary disease, metabolic bone disease, D-lactic acidosis, and kidney stone formation being chronic complications. Clinical management of SBS must be individualized to account for the specific nutrient processing deficit within the remnant bowel and to mitigate potential complications, both inside and outside the gastrointestinal tract.

Nisoli C.,Los Alamos National Laboratory | Moessner R.,Max Planck Institute for the Physics of Complex Systems | Schiffer P.,University of Illinois at Urbana - Champaign
Reviews of Modern Physics | Year: 2013

Frustration, the presence of competing interactions, is ubiquitous in the physical sciences and is a source of degeneracy and disorder, which in turn gives rise to new and interesting physical phenomena. Perhaps nowhere does it occur more simply than in correlated spin systems, where it has been studied in the most detail. In disordered magnetic materials, frustration leads to spin-glass phenomena, with analogies to the behavior of structural glasses and neural networks. In structurally ordered magnetic materials, it has also been the topic of extensive theoretical and experimental studies over the past two decades. Such geometrical frustration has opened a window to a wide range of fundamentally new exotic behavior. This includes spin liquids in which the spins continue to fluctuate down to the lowest temperatures, and spin ice, which appears to retain macroscopic entropy even in the low-temperature limit where it enters a topological Coulomb phase. In the past seven years a new perspective has opened in the study of frustration through the creation of artificial frustrated magnetic systems. These materials consist of arrays of lithographically fabricated single-domain ferromagnetic nanostructures that behave like giant Ising spins. The nanostructures' interactions can be controlled through appropriate choices of their geometric properties and arrangement on a (frustrated) lattice. The degrees of freedom of the material can not only be directly tuned, but also individually observed. Experimental studies have unearthed intriguing connections to the out-of-equilibrium physics of disordered systems and nonthermal "granular" materials, while revealing strong analogies to spin ice materials and their fractionalized magnetic monopole excitations, lending the enterprise a distinctly interdisciplinary flavor. 
The experimental results have also been closely coupled to theoretical and computational analyses, facilitated by connections to classic models of frustrated magnetism, whose hitherto unobserved aspects have here found an experimental realization. Considerable experimental and theoretical progress in this field is reviewed here, including connections to other frustrated phenomena, and future vistas for progress in this rapidly expanding field are outlined. © 2013 American Physical Society.
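Geometric frustration as described above can be demonstrated with the textbook minimal case: three antiferromagnetically coupled Ising spins on a triangle, where no configuration satisfies all three bonds. This is an illustrative toy model, not the lithographic nanomagnet arrays studied in the review.

```python
from itertools import product

J = 1.0  # antiferromagnetic coupling: a bond is "satisfied" when s_i * s_j = -1

def energy(s):
    """E = J * (s0*s1 + s1*s2 + s2*s0) for Ising spins s_i = +/-1 on a triangle."""
    return J * (s[0] * s[1] + s[1] * s[2] + s[2] * s[0])

energies = {s: energy(s) for s in product((-1, 1), repeat=3)}
e_min = min(energies.values())
# A fully satisfied triangle would have E = -3J, but the best achievable is -J:
# every ground state leaves exactly one of the three bonds frustrated.
print(e_min)  # -1.0
```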

Leckband D.,University of Illinois at Urbana - Champaign | Sivasankar S.,Iowa State University
Current Opinion in Cell Biology | Year: 2012

Classical cadherins are the principal adhesive proteins at cohesive intercellular junctions, and are essential proteins for morphogenesis and tissue homeostasis. Because subtype-dependent differences in cadherin adhesion are at the heart of cadherin functions, several structural and biophysical approaches have been used to elucidate relationships between cadherin structures, biophysical properties of cadherin bonds, and cadherin-dependent cell functions. Some experimental approaches appeared to provide conflicting views of the cadherin binding mechanism. However, recent structural and biophysical data, as well as computer simulations, have generated new insights into classical cadherin binding that increasingly reconcile diverse experimental findings. This review summarizes these recent findings, and highlights both the consistencies and remaining challenges needed to generate a comprehensive model of cadherin interactions that is consistent with all available experimental data. © 2012 Elsevier Ltd.

Fabiani M.,University of Illinois at Urbana - Champaign
Psychophysiology | Year: 2012

This paper reviews research on age-related changes in working memory and attention control. This work is interpreted within a framework labeled "GOLDEN aging" (growing of lifelong differences explains normal aging), which is based on the idea that normal aging (as opposed to pathological aging) represents maturational processes causing progressive shifts in the distributions of mental abilities over the lifespan. As such, brain phenomena observed in normal aging are already apparent, under appropriate conditions, in younger adults. Among the phenomena that can be interpreted according to the GOLDEN aging framework are reductions in working memory capacity, impairments of inhibitory processes, increases in frontal lobe activation, and lack of suppression of responses as a function of repetition. © 2012 Society for Psychophysiological Research.

Liberzon D.,University of Illinois at Urbana - Champaign
Automatica | Year: 2014

We study the problem of asymptotically stabilizing a switched linear control system using sampled and quantized measurements of its state. The switching is assumed to be slow enough in the sense of combined dwell time and average dwell time, each individual mode is assumed to be stabilizable, and the available data rate is assumed to be large enough but finite. Our encoding and control strategy is rooted in the one proposed in our earlier work on non-switched systems, and in particular the data-rate bound used here is the data-rate bound from that earlier work maximized over the individual modes. The main technical step that enables the extension to switched systems concerns propagating over-approximations of reachable sets through sampling intervals, during which the switching signal is not known; a novel algorithm is developed for this purpose. Our primary focus is on systems with time-dependent switching (switched systems) but the setting of state-dependent switching (hybrid systems) is also discussed. © 2013 Elsevier Ltd. All rights reserved.
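For context, the minimum data rate needed to stabilize a single continuous-time linear mode is governed by its unstable eigenvalues. A standard statement from the general data-rate literature (the per-mode quantity that the strategy above maximizes over the individual modes) is:

```latex
R \;>\; \frac{1}{\ln 2} \sum_{i:\ \operatorname{Re}\lambda_{i}(A) > 0} \operatorname{Re}\lambda_{i}(A)
\qquad \text{bits per unit time,}
```

where the λ_i(A) are the eigenvalues of the system matrix A of the mode ẋ = Ax + Bu. Intuitively, the encoder must supply information at least as fast as the unstable dynamics expand state uncertainty.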

Oldfield E.,University of Illinois at Urbana - Champaign | Feng X.,University of Illinois at Urbana - Champaign
Trends in Pharmacological Sciences | Year: 2014

New antibiotics are needed because drug resistance is increasing while the introduction of new antibiotics is decreasing. We discuss here six possible approaches to develop 'resistance-resistant' antibiotics. First, multitarget inhibitors in which a single compound inhibits more than one target may be easier to develop than conventional combination therapies with two new drugs. Second, inhibiting multiple targets in the same metabolic pathway is expected to be an effective strategy owing to synergy. Third, discovering multiple-target inhibitors should be possible by using sequential virtual screening. Fourth, repurposing existing drugs can lead to combinations of multitarget therapeutics. Fifth, targets need not be proteins. Sixth, inhibiting virulence factor formation and boosting innate immunity may also lead to decreased susceptibility to resistance. Although it is not possible to eliminate resistance, the approaches reviewed here offer several possibilities for reducing the effects of mutations and, in some cases, suggest that sensitivity to existing antibiotics may be restored in otherwise drug-resistant organisms. © 2014 Elsevier Ltd. All rights reserved.

Bobrovskyy M.,University of Illinois at Urbana - Champaign | Vanderpool C.K.,University of Illinois at Urbana - Champaign
Annual Review of Genetics | Year: 2013

Bacteria live in many dynamic environments with alternating cycles of feast or famine that have resulted in the evolution of mechanisms to quickly alter their metabolic capabilities. Such alterations often involve complex regulatory networks that modulate expression of genes involved in nutrient uptake and metabolism. A great number of protein regulators of metabolism have been characterized in depth. However, our ever-increasing understanding of the roles played by RNA regulators has revealed far greater complexity to regulation of metabolism in bacteria. Here, we review the mechanisms and functions of selected bacterial RNA regulators and discuss their importance in modulating nutrient uptake as well as primary and secondary metabolic pathways. © 2013 by Annual Reviews. All rights reserved.

Cahill D.G.,University of Illinois at Urbana - Champaign
MRS Bulletin | Year: 2012

Thermal conductivity is a familiar property of materials: silver conducts heat well, and plastic does not. In recent years, an interdisciplinary group of materials scientists, engineers, physicists, and chemists have succeeded in pushing back long-established limits in the thermal conductivity of materials. Carbon nanotubes and graphene are at the high end of the thermal conductivity spectrum due to their high sound velocities and relative lack of processes that scatter phonons. Unfortunately, the superlative thermal properties of carbon nanotubes have not found immediate application in composites or interface materials because of difficulties in making good thermal contact with the nanotubes. At the low end of the thermal conductivity spectrum, solids that combine order and disorder in the random stacking of two-dimensional crystalline sheets, so-called disordered layered crystals, show a thermal conductivity that is only a factor of 2 larger than air. The cause of this low thermal conductivity may be explained by the large anisotropy in elastic constants that suppresses the density of phonon modes that propagate along the soft direction. Low-dimensional quantum magnets demonstrate that electrons and phonons are not the only significant carriers of heat. Near room temperature, the spin thermal conductivity of spin-ladders is comparable to the electronic thermal conductivities of metals. Our measurements of nanoscale thermal transport properties employ a variety of ultrafast optical pump-probe metrology tools that we have developed over the past several years. We are currently working to extend these techniques to high pressures (60 GPa), high magnetic fields (5 T), and high temperatures (1000 K). © 2012 Materials Research Society.

Beutner G.L.,Bristol Myers Squibb | Denmark S.E.,University of Illinois at Urbana - Champaign
Angewandte Chemie - International Edition | Year: 2013

Since the landmark publications of the first directed aldol addition reaction in 1973, the site, diastereo-, and enantioselective aldol reaction has been elevated to the rarefied status of being both a named and a strategy-level reaction (the Mukaiyama directed aldol reaction). The importance of this reaction in the stereoselective synthesis of untold numbers of organic compounds, both natural and unnatural, cannot be overstated. However, its impact on the field extends beyond the impressive applications in synthesis. The directed aldol reaction has served as a fertile proving ground for new concepts and new methods for stereocontrol and catalysis. This Minireview provides a case history of how the challenges of merging site selectivity, diastereoselectivity, enantioselectivity, and catalysis into a unified reaction manifold stimulated the development of Lewis base catalyzed aldol addition reactions. The evolution of this process is chronicled from the authors' laboratories as well as in those of Professor Teruaki Mukaiyama. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

Olson K.R.,University of Illinois at Urbana - Champaign
Geoderma | Year: 2013

The increase in atmospheric levels of carbon dioxide (CO2) has been due largely to the burning of fossil fuels, deforestation, cultivation of the grasslands, drainage of the land, and land use changes. This increase in greenhouse gases has created concerns about the potential for long-term climate change and interest in developing methods to sequester some of this atmospheric carbon. In agricultural land areas, no-tillage (NT) systems have been proposed to replace moldboard plow and chisel systems as a way to sequester soil organic carbon (SOC). Numerous estimates of total SOC sequestration and sequestration rates resulting from a switch to NT systems have been published. Other researchers have proposed the use of cover crops, synthetic fertilizers, organic fertilizer, manure, liming, agricultural systems and management, agroforestry, forages, compost, crop rotations, and reduced row crop use as ways to sequester SOC. For SOC sequestration to occur as a result of a treatment applied to a land unit, all of the SOC sequestered must have come from the atmosphere and be transferred into the soil humus through the unit's plants, plant residues, and other organic solids. The amount of SOC present in the soil humus at the end of the study has to be greater than the pre-treatment SOC levels in the same land unit, and there needs to be a net depletion of atmospheric CO2 as a result. The objectives of this paper are to: (1) determine long-term study SOC levels and trends in agricultural lands, (2) apply the SOC sequestration concept to a specific site, (3) identify appropriate experimental designs for plot areas used in determining SOC sequestration, (4) develop a procedure, such as pre-treatment measurement of SOC levels in plots before treatments are applied, to verify SOC sequestration at a site, (5) describe the equivalent soil mass sampling method, (6) compare laboratory methods for quantifying SOC content, and (7) account for the loading of C-rich amendments.
To unequivocally demonstrate SOC sequestration at a specific site has occurred, a temporal increase must be documented relative to pre-treatment SOC level and linked to a net depletion of atmospheric CO2. © 2012.
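Verifying sequestration as prescribed above requires converting measured SOC concentrations into stocks, ideally compared on an equivalent soil-mass basis. A minimal sketch of the standard layer-stock computation (the numbers are hypothetical; in practice the equivalent-mass method adjusts sampling depth so pre- and post-treatment samples compare equal soil masses):

```python
def soc_stock_mg_ha(bulk_density_g_cm3, depth_cm, soc_fraction):
    """SOC stock in Mg C per hectare for one soil layer.
    Mass per area = BD * depth (g/cm^2); 1 g/cm^2 = 100 Mg/ha."""
    return bulk_density_g_cm3 * depth_cm * soc_fraction * 100.0

# Hypothetical plot: 30 cm layer, BD = 1.3 g/cm^3, 2.0% -> 2.2% organic carbon.
pre = soc_stock_mg_ha(1.3, 30.0, 0.020)   # pre-treatment stock, ~78 Mg C/ha
post = soc_stock_mg_ha(1.3, 30.0, 0.022)  # post-treatment stock
print(pre, post - pre)  # a sequestration claim requires post > pre
```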

Gormisky P.E.,University of Illinois at Urbana - Champaign | White M.C.,University of Illinois at Urbana - Champaign
Journal of the American Chemical Society | Year: 2013

Selective aliphatic C-H bond oxidations may have a profound impact on synthesis because these bonds exist across all classes of organic molecules. Central to this goal are catalysts with broad substrate scope (small-molecule-like) that predictably enhance or overturn the substrate's inherent reactivity preference for oxidation (enzyme-like). We report a simple small-molecule, non-heme iron catalyst that achieves predictable catalyst-controlled site-selectivity in preparative yields over a range of topologically diverse substrates. A catalyst reactivity model quantitatively correlates the innate physical properties of the substrate to the site-selectivities observed as a function of the catalyst. © 2013 American Chemical Society.

Zhang H.,University of Illinois at Urbana - Champaign | Braun P.V.,University of Illinois at Urbana - Champaign
Nano Letters | Year: 2012

Silicon-based lithium ion battery anodes are attracting significant attention because of silicon's exceptionally high lithium capacity. However, silicon's large volume change during cycling generally leads to anode pulverization unless the silicon is dispersed throughout a matrix in nanoparticulate form. Because pulverization results in a loss of electric connectivity, the reversible capacity of most silicon anodes dramatically decays within a few cycles. Here we report a three-dimensional (3D) bicontinuous silicon anode formed by depositing a layer of silicon on the surface of a colloidal crystal templated porous nickel metal scaffold, which maintains electrical connectivity during cycling due to the scaffold. The porous metal framework serves to both impart electrical conductivity to the anode and accommodate the large volume change of silicon upon lithiation and delithiation. The initial capacity of the bicontinuous silicon anode is 3568 mAh g^-1 (silicon basis) and 1450 mAh g^-1 (including the metal framework) at 0.05C. After 100 cycles at 0.3C, 85% of the capacity remains. Compared to a foil-supported silicon film, the 3D bicontinuous silicon anode exhibits significantly improved mechanical stability and cycleability. © 2012 American Chemical Society.
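The cycling figures above imply a very small per-cycle loss. Assuming, for illustration only, a uniform geometric fade, the average per-cycle retention consistent with 85% capacity after 100 cycles is:

```python
# 85% of capacity retained after 100 cycles; assume uniform geometric fade.
retention_100 = 0.85
per_cycle = retention_100 ** (1 / 100)   # average retention per cycle
fade_pct = (1 - per_cycle) * 100         # average loss per cycle, in percent
print(f"{per_cycle:.5f} retention/cycle, ~{fade_pct:.3f}% loss per cycle")
```

Real fade is rarely uniform (early cycles often lose more to SEI formation), so this is a back-of-the-envelope average, not a degradation model.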

Flannigan D.J.,University of Illinois at Urbana - Champaign | Suslick K.S.,University of Illinois at Urbana - Champaign
Nature Physics | Year: 2010

Models of spherical supersonic bubble implosion in cavitating liquids predict that it could generate temperatures and densities sufficient to drive thermonuclear fusion [1,2]. Convincing evidence for fusion is yet to be shown, but the transient conditions generated by acoustic cavitation are certainly extreme [3-5]. There is, however, a remarkable lack of observable data on the conditions created during bubble collapse. Only recently has strong evidence of plasma formation been obtained [6]. Here we determine the plasma electron density, ion-broadening parameter and degree of ionization during single-bubble sonoluminescence as a function of acoustic driving pressure. We find that the electron density can be controlled over four orders of magnitude and exceed 10^21 cm^-3 (comparable to the densities produced in laser-driven fusion experiments [7]) with effective plasma temperatures ranging from 7,000 to more than 16,000 K. At the highest acoustic driving force, we find that neutral Ar emission lines no longer provide an accurate measure of the conditions in the plasma. By accounting for the temporal profile of the sonoluminescence pulse and the potential optical opacity of the plasma, our results suggest that the ultimate conditions generated inside a collapsing bubble may far exceed those determined from emission from the transparent outer region of the light-emitting volume. © 2010 Macmillan Publishers Limited. All rights reserved.

Bell A.M.,University of Illinois at Urbana - Champaign | Robinson G.E.,University of Illinois at Urbana - Champaign
Science | Year: 2011

Does behavior evolve through gene expression changes in the brain in response to the environment?

Winkler S.,University of Illinois at Urbana - Champaign
IEEE Journal on Selected Topics in Signal Processing | Year: 2012

Databases of images or videos annotated with subjective ratings constitute essential ground truth for training, testing, and benchmarking algorithms for objective quality assessment. More than two dozen such databases are now available in the public domain; they are presented and analyzed in this paper. We propose several criteria for quantitative comparisons of source content, test conditions, and subjective ratings, which are used as the basis for the ensuing analyses and discussion. This information will allow researchers to make more well-informed decisions about databases, and may also guide the creation of additional test material and the design of future experiments. © 2007-2012 IEEE.

Klinkenberg J.L.,University of Illinois at Urbana - Champaign | Hartwig J.F.,University of Illinois at Urbana - Champaign
Angewandte Chemie - International Edition | Year: 2011

Get a move on ammonia: Reactions of ammonia catalyzed by transition-metal complexes allow direct access to primary amines and other nitrogen-containing functional groups. This Minireview presents recent advances in catalyst development that have led to the reaction of a variety of substrates with ammonia. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

Aksimentiev A.,University of Illinois at Urbana - Champaign
Nanoscale | Year: 2010

Within just a decade from the pioneering work demonstrating the utility of nanopores for molecular sensing, nanopores have emerged as versatile systems for single-molecule manipulation and analysis. In a typical setup, a gradient of the electrostatic potential captures charged solutes from the solution and forces them to move through a single nanopore, across an otherwise impermeable membrane. The ionic current blockades resulting from the presence of a solute in a nanopore can reveal the type of the solute, for example, the nucleotide makeup of a DNA strand. Despite great success, the microscopic mechanisms underlying the functionality of such stochastic sensors remain largely unknown, as it is not currently possible to characterize the microscopic conformations of single biomolecules directly in a nanopore and thereby unequivocally establish the causal relationship between the observables and the microscopic events. Such a relationship can be determined using molecular dynamics - a computational method that can accurately predict the time evolution of a molecular system starting from a given microscopic state. This article describes recent applications of this method to the process of DNA transport through biological and synthetic nanopores. © 2010 The Royal Society of Chemistry.

Chipot C.,University of Lorraine | Chipot C.,University of Illinois at Urbana - Champaign
Wiley Interdisciplinary Reviews: Computational Molecular Science | Year: 2014

In a matter of three decades, free-energy calculations have emerged as an indispensable tool to tackle deep biological questions that experiment alone has left unresolved. In spite of recent advances on the hardware front that have pushed back the limitations of brute-force molecular dynamics simulations, opening the way to time and size scales hitherto never attained, they represent a cogent alternative to access with unparalleled accuracy the thermodynamics and possibly the kinetics that underlie the complex processes of the cell machinery. From a pragmatic perspective, the present review draws a picture of how the field has been shaped and invigorated by milestone developments, application, and sometimes rediscovery of foundational principles laid down years ago to reach new frontiers in the exploration of intricate biological phenomena. Through a series of illustrative examples, distinguishing between alchemical and geometrical transformations, it discusses how far free-energy calculations have come, what are the current hurdles they have to overcome, and the challenges they are facing for tomorrow. © 2013 John Wiley & Sons, Ltd.
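The alchemical transformations discussed above rest on foundational identities such as Zwanzig's free-energy perturbation formula, one of the "principles laid down years ago" that the review revisits. It relates the free-energy difference between states 0 and 1 (potential energies U_0, U_1) to an ensemble average sampled in state 0:

```latex
\Delta A \;=\; A_{1} - A_{0}
\;=\; -\,k_{B} T \,\ln \left\langle e^{-\left( U_{1} - U_{0} \right)/k_{B} T} \right\rangle_{0}
```

In practice the transformation is broken into many intermediate λ-windows, because the exponential average converges only when the two states' configurational distributions overlap appreciably.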

Heeres J.T.,University of Illinois at Urbana - Champaign | Hergenrother P.J.,University of Illinois at Urbana - Champaign
Chemical Society Reviews | Year: 2011

High-throughput screening (HTS) has played an integral role in the development of small molecule modulators of biological processes. These screens are typically developed for enzymes (such as kinases or proteases) or extracellular receptors, two classes of targets with well-established colorimetric or fluorimetric activity assays. In contrast, methods for detection of protein-protein interactions lack the simplicity inherent to enzyme and receptor assays. Technologies that facilitate the discovery of small molecule modulators of protein-protein interactions are essential to the exploitation of this important class of drug targets. As described in this critical review, photonic crystal (PC) biosensors and other emerging technologies can now be utilized in high-throughput screens for the identification of compounds that disrupt or enhance protein-protein interactions (167 references). © The Royal Society of Chemistry 2011.

Burkle L.A.,University of Washington | Burkle L.A.,Montana State University | Marlin J.C.,University of Illinois at Urbana - Champaign | Knight T.M.,University of Washington
Science | Year: 2013

Using historic data sets, we quantified the degree to which global change over 120 years disrupted plant-pollinator interactions in a temperate forest understory community in Illinois, USA. We found degradation of interaction network structure and function and extirpation of 50% of bee species. Network changes can be attributed to shifts in forb and bee phenologies resulting in temporal mismatches, nonrandom species extinctions, and loss of spatial co-occurrences between extant species in modified landscapes. Quantity and quality of pollination services have declined through time. The historic network showed flexibility in response to disturbance; however, our data suggest that networks will be less resilient to future changes.

Tuo H.,University of Illinois at Urbana - Champaign
International Journal of Energy Research | Year: 2013

A thermal-economic analysis of a transcritical Rankine power cycle with reheat enhancement using low-grade industrial waste heat is presented. Under identical operating conditions, the reheat cycle is compared to the non-reheat baseline cycle with respect to the specific net power output, the thermal efficiency, the heat exchanger area, and the total capital costs of the systems. Detailed parametric effects are investigated in order to maximize the cycle performance and minimize the system unit cost per net work output. The main results show that the optimum reheat pressure maximizing the specific net work output is approximately the one that yields the same expansion ratio across each turbine stage. The relative performance improvement of the reheat cycle over the baseline grows with increasing high pressure but shrinks with increasing turbine inlet temperature. The enhancement of the specific net work output is more significant than that of the thermal efficiency under each condition, because the reheat process increases the total heat input of the cycle. The economic analysis reveals that the optimal high pressures minimizing the unit heat exchanger area and the system cost are much lower than the one maximizing the energy performance. The comparative analysis identifies the range of operating conditions over which the proposed reheat cycle is more cost effective than the baseline. Copyright © 2012 John Wiley & Sons, Ltd.
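The stated rule of thumb, that the optimum reheat pressure roughly equalizes the expansion ratio across the two turbine stages, can be sketched numerically. The pressures below are illustrative placeholders, not the paper's operating conditions:

```python
import math


def equal_ratio_reheat_pressure(p_high: float, p_low: float) -> float:
    """Reheat pressure giving the same expansion ratio across both
    turbine stages, i.e. p_high / p_reheat == p_reheat / p_low."""
    return math.sqrt(p_high * p_low)


# Illustrative pressures (Pa): 10 MPa turbine inlet, 0.1 MPa condenser.
p_reheat = equal_ratio_reheat_pressure(10e6, 0.1e6)
print(p_reheat)                 # 1000000.0 (1 MPa)
print(10e6 / p_reheat)          # stage-1 expansion ratio: 10.0
print(p_reheat / 0.1e6)         # stage-2 expansion ratio: 10.0
```

The geometric-mean form follows directly from requiring the two pressure ratios to be equal; the paper's result is that this pressure also approximately maximizes the specific net work output.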

Freund J.B.,University of Illinois at Urbana - Champaign
Annual Review of Fluid Mechanics | Year: 2014

The cellular detail of blood is an essential factor in its flow, especially in vessels or devices with size comparable to that of its suspended cells. This article motivates and reviews numerical simulation techniques that provide a realistic description of cell-scale blood flow by explicitly representing its coupled fluid and solid mechanics. Red blood cells are the principal focus because of their importance and because of their remarkable deformability, which presents particular simulation challenges. Such simulations must couple discretizations of the large-deformation elasticity of the cells with the viscous flow mechanics of the suspension. The Reynolds numbers are low, so the effectively linear fluid mechanics is amenable to a wide range of simulation methods, although the constitutive models and geometric factors of the coupled system introduce challenging nonlinearity. Particular emphasis is given to the relative merits of several fundamentally different simulation methods. The detailed description provided by such simulations is invaluable for advancing our scientific understanding of blood flow, and their ultimate impact will be in the design of biomedical tools and interventions. Copyright © 2014 by Annual Reviews. All rights reserved.

Tan Y.,University of Illinois at Urbana - Champaign | Hartwig J.F.,University of Illinois at Urbana - Champaign
Journal of the American Chemical Society | Year: 2010

"Chemical equation presented" We report a conceptually new approach to the direct amination of aromatic C-H bonds. In this process, an oxime ester function reacts with an aromatic C-H bond under redox-neutral conditions to form, in the case studied, an indole product. These reactions occur with relatively low catalyst loading (1 mol %) by a mechanism that appears to involve an unusual initial oxidative addition of an N-O bond to a Pd(0) species. The Pd(II) complex from oxidative addition of the N-X bond has been isolated for the first time, and evidence for the intermediacy of such oxidative addition products in the catalytic reaction has been gained. © 2010 American Chemical Society.

Tang L.,University of Illinois at Urbana - Champaign | Cheng J.,University of Illinois at Urbana - Champaign
Nano Today | Year: 2013

Nanomedicine, the use of nanotechnology for biomedical applications, has the potential to change the landscape of the diagnosis and therapy of many diseases. In the past several decades, advances in nanotechnology and materials science have resulted in a large number of organic and inorganic nanomedicine platforms. Silica nanoparticles (NPs), which exhibit many unique properties, offer a promising drug delivery platform for realizing the potential of nanomedicine. Mesoporous silica NPs have been extensively reviewed previously. Here we review the current state of the development and application of nonporous silica NPs for drug delivery and molecular imaging.

Davis B.J.,University of Illinois at Urbana - Champaign
Analytical chemistry | Year: 2010

Mid-infrared (IR) microspectroscopy is widely employed for spatially localized spectral analyses. A comprehensive theoretical model for the technique, however, has not been previously proposed. In this paper, rigorous theory is presented for IR absorption microspectroscopy by using Maxwell's equations to model beam propagation. Focusing effects, material dispersion, and the geometry of the sample are accounted for to predict the spectral response of homogeneous samples. Predictions are validated experimentally using Fourier transform IR (FT-IR) microspectroscopic examination of a photoresist. The results emphasize that meaningful interpretation of IR microspectroscopic data must involve an understanding of the coupled optical effects associated with the sample, substrate properties, and microscopy configuration. Simulations provide guidance for developing experimental methods and future instrument design by quantifying distortions in the recorded data. Distortions are especially severe for transflection mode and for samples mounted on certain substrates. Last, the model generalizes to rigorously consider the effects of focusing. While spectral analyses range from examining gross spectral features to assessing subtle features using advanced chemometrics, the limitations that these data-acquisition effects impose on the available information have been less clear. The distorting effects are shown to be larger than the noise levels of modern spectrometers. Hence, the model provides a framework to quantify spectral distortions that may limit the accuracy of information or present confounding effects in microspectroscopy.

McKay D.C.,University of Illinois at Urbana - Champaign | DeMarco B.,University of Illinois at Urbana - Champaign
Reports on Progress in Physics | Year: 2011

Optical lattices have emerged as ideal simulators for Hubbard models of strongly correlated materials, such as the high-temperature superconducting cuprates. In optical lattice experiments, microscopic parameters such as the interaction strength between particles are well known and easily tunable. Unfortunately, this benefit of using optical lattices to study Hubbard models comes with one clear disadvantage: the energy scales in atomic systems are typically nanokelvin compared with kelvin in solids, with a correspondingly minuscule temperature scale required to observe exotic phases such as d-wave superconductivity. The ultra-low temperatures necessary to reach the regime in which optical lattice simulation can have an impact - the domain in which our theoretical understanding fails - have been a barrier to progress in this field. To move forward, a concerted effort is required to develop new techniques for cooling and, by extension, techniques to measure even lower temperatures. This paper is devoted to discussing the concepts of cooling and thermometry, fundamental sources of heat in optical lattice experiments, and a review of proposed and implemented thermometry and cooling techniques. © 2011 IOP Publishing Ltd.

Lankau R.A.,University of Illinois at Urbana - Champaign
New Phytologist | Year: 2011

Invaders can gain ecological advantages because of their evolutionary novelty, but little is known about how these novel advantages will change over time as the invader and invaded community evolve in response to each other. Invasive plants often gain such an advantage through alteration of soil microbial communities. In soil communities sampled from sites along a gradient of invasion history with Alliaria petiolata, microbial richness tended to decline, but the community's resistance to A. petiolata's effects generally increased with increasing history of invasion. However, sensitive microbial taxa appeared to recover in the two oldest sites, leading to an increase in richness, but consequent decrease in resistance. This may be because of evolutionary changes in the A. petiolata populations, which tend to reduce their investment to allelopathic compounds over time. These results show that, over time, microbial communities can develop resistance to an invasive plant but at the cost of lower richness. However, over longer time-scales evolution in the invasive species may allow for the recovery of soil microbial communities. © The Author (2010). Journal compilation © New Phytologist Trust (2010).

Zhang H.,University of Illinois at Urbana - Champaign | Yu X.,University of Illinois at Urbana - Champaign | Braun P.V.,University of Illinois at Urbana - Champaign
Nature Nanotechnology | Year: 2011

Rapid charge and discharge rates have become an important feature of electrical energy storage devices, but cause dramatic reductions in the energy that can be stored or delivered by most rechargeable batteries (their energy capacity). Supercapacitors do not suffer from this problem, but are restricted to much lower stored energy per mass (energy density) than batteries. A storage technology that combines the rate performance of supercapacitors with the energy density of batteries would significantly advance portable and distributed power technology. Here, we demonstrate very large battery charge and discharge rates with minimal capacity loss by using cathodes made from a self-assembled three-dimensional bicontinuous nanoarchitecture consisting of an electrolytically active material sandwiched between rapid ion and electron transport pathways. Rates of up to 400C and 1,000C for lithium-ion and nickel-metal hydride chemistries, respectively, are achieved (where a 1C rate represents a one-hour complete charge or discharge), enabling fabrication of a lithium-ion battery that can be 90% charged in 2 minutes. © 2011 Macmillan Publishers. All rights reserved.
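The C-rate convention quoted above (a 1C rate completes a full charge or discharge in one hour) translates directly into charge times; a minimal sketch, with rates taken from the abstract:

```python
def charge_time_seconds(c_rate: float, fraction: float = 1.0) -> float:
    """Time to transfer `fraction` of full capacity at a given C-rate,
    where 1C means a complete charge or discharge in one hour."""
    return 3600.0 * fraction / c_rate


print(charge_time_seconds(400))    # full charge at 400C: 9.0 s
print(charge_time_seconds(1000))   # full charge at 1000C: 3.6 s
```

Note that the assembled lithium-ion battery quoted as reaching 90% charge in 2 minutes corresponds to a lower effective rate for the full cell (3600 × 0.9 / 120 = 27C) than the 400C demonstrated for the cathode alone.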

Kim D.H.,University of Illinois at Urbana - Champaign
Nature materials | Year: 2010

Electronics that are capable of intimate, non-invasive integration with the soft, curvilinear surfaces of biological tissues offer important opportunities for diagnosing and treating disease and for improving brain/machine interfaces. This article describes a material strategy for a type of bio-interfaced system that relies on ultrathin electronics supported by bioresorbable substrates of silk fibroin. Mounting such devices on tissue and then allowing the silk to dissolve and resorb initiates a spontaneous, conformal wrapping process driven by capillary forces at the biotic/abiotic interface. Specialized mesh designs and ultrathin forms for the electronics ensure minimal stresses on the tissue and highly conformal coverage, even for complex curvilinear surfaces, as confirmed by experimental and theoretical studies. In vivo, neural mapping experiments on feline animal models illustrate one mode of use for this class of technology. These concepts provide new capabilities for implantable and surgical devices.

Single-cell mass spectrometry (MS) empowers metabolomic investigations by decreasing analytical dimensions to the size of individual cells and subcellular structures. We describe a protocol for investigating and quantifying metabolites in individual isolated neurons using single-cell capillary electrophoresis (CE) coupled to electrospray ionization (ESI) time-of-flight (TOF) MS. The protocol requires ∼2 h for sample preparation, neuron isolation and metabolite extraction, and 1 h for metabolic measurement. We used the approach to detect more than 300 distinct compounds in the mass range of typical metabolites in various individual neurons (25-500 μm in diameter) isolated from the sea slug (Aplysia californica) central and rat (Rattus norvegicus) peripheral nervous systems. We found that a subset of identified compounds was sufficient to reveal metabolic differences among freshly isolated neurons of different types and changes in the metabolite profiles of cultured neurons. The protocol can be applied to the characterization of the metabolome in a variety of smaller cells and/or subcellular domains.

Belmont A.S.,University of Illinois at Urbana - Champaign
Current Opinion in Cell Biology | Year: 2014

Traditionally, large-scale chromatin structure has been studied by microscopic approaches, providing direct spatial information but limited sequence context. In contrast, newer 3C (chromosome conformation capture) methods provide rich sequence context but uncertain spatial context. Recent demonstration of large, topologically linked DNA domains, hundreds to thousands of kb in size, may now link 3C data to actual chromosome physical structures, as visualized directly by microscopic methods. Yet new data suggesting that 3C may measure cytological rather than molecular proximity prompt a renewed focus on understanding the origin of 3C interactions and dissecting the biological significance of long-range genomic interactions. © 2013 Elsevier Ltd.

Fradkin E.,University of Illinois at Urbana - Champaign | Kivelson S.A.,Stanford University | Tranquada J.M.,Brookhaven National Laboratory
Reviews of Modern Physics | Year: 2015

The electronic phase diagrams of many highly correlated systems, and, in particular, the cuprate high temperature superconductors, are complex, with many different phases appearing with similar (sometimes identical) ordering temperatures even as material properties, such as dopant concentration, are varied over wide ranges. This complexity is sometimes referred to as "competing orders." However, since the relation is intimate, and can even lead to the existence of new phases of matter such as the putative "pair-density wave," the general relation is better thought of in terms of "intertwined orders." Some of the experiments in the cuprates which suggest that essential aspects of the physics are reflected in the intertwining of multiple orders, not just in the nature of each order by itself, are selectively analyzed. Several theoretical ideas concerning the origin and implications of this complexity are also summarized and critiqued. ©2015 American Physical Society.

Valero M.C.,University of Illinois at Urbana - Champaign
Journal of Molecular Diagnostics | Year: 2011

Neurofibromatosis type 1 (NF1) is a hereditary disorder caused by mutations in the NF1 gene. Detecting mutation in NF1 is hindered by the gene's large size, the lack of mutation hotspots, the presence of pseudogenes, and the wide variety of possible lesions. We developed a method for detecting germline mutations by combining an original RNA-based cDNA-PCR mutation detection method and denaturing high-performance liquid chromatography (DHPLC) with multiplex ligation-dependent probe amplification (MLPA). The protocol was validated in a cohort of 56 blood samples from NF1 patients who fulfilled NIH diagnostic criteria, identifying the germline mutation in 53 cases (95% sensitivity). The efficiency and reliability of this approach facilitated detection of different types of mutations, including single-base substitutions, deletions or insertions of one to several nucleotides, microdeletions, and changes in intragenic copy number. Because mutational screening for minor lesions was performed using cDNA and the characterization of mutated alleles was performed at both the RNA and genomic DNA level, the analysis provided insight into the nature of the different mutations and their effect on NF1 mRNA splicing. After validation, we implemented the protocol as a routine test. Here we present the overall unbiased spectrum of NF1 mutations identified in 93 patients in a cohort of 105. The results indicate that this protocol is a powerful new tool for the molecular diagnosis of NF1. Copyright © 2011 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.

Raine L.B.,University of Illinois at Urbana - Champaign
PloS one | Year: 2013

There is a growing trend of inactivity among children, which may result not only in poorer physical health but also poorer cognitive health. Previous research has shown that lower fitness is related to decreased cognitive function on tasks requiring perception, memory, and cognitive control, as well as to lower academic achievement. The present study investigated the relationship between aerobic fitness, learning, and memory on a task that involved remembering names and locations on a fictitious map. Different learning strategies and recall procedures were employed to better understand fitness effects on learning novel material. Forty-eight 9-10 year old children (n = 24 higher fit, HF; n = 24 lower fit, LF) performed a task requiring them to learn the names of specific regions on a map under two learning conditions: one in which they only studied (SO) and one in which they were tested during study (TS). The retention session occurred one day after initial learning and involved two recall conditions: free recall and cued recall. There were no differences in performance at initial learning between higher fit and lower fit participants. However, during the retention session higher fit children outperformed lower fit children, particularly when the initial learning strategy yielded relatively poor recall performance (i.e., study only versus test-study strategy). We interpret these novel data to suggest that fitness can boost children's learning and memory, and that these fitness-associated performance benefits are largest in conditions in which initial learning is most challenging. Such data have important implications for both educational practice and policy.

Kumar P.,University of Illinois at Urbana - Champaign
Water Resources Research | Year: 2011

Prediction problems broadly deal with ascertaining the fate of fluctuations or instabilities through the dynamical system being modeled. Predictability is a measure of our ability to provide knowledge about events that have not yet transpired or phenomena that may be hitherto unobserved or unrecognized. The challenges associated with these two problems, that is, forecasting a future event and identifying a novel phenomenon, are distinctly different. Whereas the prediction of novel phenomena seeks to explore all possible logical space of a model's behavioral response, the prediction of future events seeks to constrain the model response to a specific trajectory of the known history to achieve the least uncertainty for the forecast. Predictability challenges have been categorized as initial value, boundary value, and parameter estimation problems. Here I discuss two additional types of challenges arising from the dynamic changes in the spatial complexity driven by evolving connectivity patterns during an event and cross-scale interactions in time and space. These latter two are critical elements in the context of human and climate-driven changes in the hydrologic cycle as they lead to structural change-induced new connectivity and cross-scale interaction patterns that have no historical precedence. To advance the science of prediction under environmental and human-induced changes, the critical issues lie in developing models that address these challenges and that are supported by suitable observational systems and diagnostic tools to enable adequate detection and attribution of model errors. Copyright 2011 by the American Geophysical Union.

McMahon J.M.,University of Illinois at Urbana - Champaign
Physical Review B - Condensed Matter and Materials Physics | Year: 2011

Ab initio random structure searching based on density functional theory is used to determine the ground-state structures of ice at high pressures. Including estimates of lattice zero-point energies, ice is predicted to adopt at least three crystal phases beyond Pbcm. The underlying sublattice of O atoms remains similar among them, and the transitions can be characterized by reorganizations of the hydrogen bonds. The symmetric hydrogen bonds of ice X and Pbcm are initially lost as ice transforms to structures with symmetries Pmc2₁ (800-950 GPa) and P2₁ (1.17 TPa), but they are eventually regained at 5.62 TPa in a layered structure C2/m. The P2₁ → C2/m transformation also marks the insulator-to-metal transition in ice, which occurs at a significantly higher pressure than recently predicted. © 2011 American Physical Society.

Salje E.K.H.,University of Cambridge | Dahmen K.A.,University of Illinois at Urbana - Champaign
Annual Review of Condensed Matter Physics | Year: 2014

Recent experimental and theoretical progress on the study of crackling noise in the plastic deformation of crystals, ferroelastics, and porous materials is reviewed. We specifically point out opportunities and potential pitfalls in this approach to the study of the nonequilibrium dynamics of disordered materials. Direct optical observation of domain boundary movement under stress and experimental results from acoustic emission and heat-flux measurements lead to power-law scaling of the jerk distribution with energy exponents between 1.3 and 2.3. The collapse of porous materials under stress leads to exceptionally large intervals of power-law scaling (seven decades). Applications in geology and materials sciences are discussed. © Copyright 2014 by Annual Reviews. All rights reserved.
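Power-law energy exponents like those quoted (1.3 to 2.3) are commonly extracted with the continuous maximum-likelihood (Hill-type) estimator rather than a log-log regression. A self-contained sketch on synthetic data; the estimator is a standard statistical tool, not a method attributed to this review:

```python
import math
import random


def powerlaw_mle_exponent(samples, e_min):
    """Continuous MLE for P(E) ~ E^-eps above e_min:
    eps_hat = 1 + n / sum(ln(E_i / e_min))."""
    logs = [math.log(e / e_min) for e in samples if e >= e_min]
    return 1.0 + len(logs) / sum(logs)


# Synthetic "jerk energies" drawn from P(E) ~ E^-1.8 by inverse transform:
# for a power law with exponent eps on [e_min, inf),
# E = e_min * (1 - u)^(-1 / (eps - 1)) with u uniform on [0, 1).
rng = random.Random(0)
eps_true, e_min = 1.8, 1.0
energies = [e_min * (1.0 - rng.random()) ** (-1.0 / (eps_true - 1.0))
            for _ in range(50_000)]

print(powerlaw_mle_exponent(energies, e_min))  # close to 1.8
```

The MLE avoids the well-known bias of least-squares fits to binned log-log histograms, which matters when claiming scaling over many decades as in the porous-collapse experiments.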

Dorgan V.E.,University of Illinois at Urbana - Champaign
Nano letters | Year: 2013

We study the intrinsic transport properties of suspended graphene devices at high fields (≥1 V/μm) and high temperatures (≥1000 K). Across 15 samples, we find a peak (average) saturation velocity of 3.6 × 10⁷ cm/s (1.7 × 10⁷ cm/s) and a peak (average) thermal conductivity of 530 W m⁻¹ K⁻¹ (310 W m⁻¹ K⁻¹) at 1000 K. The saturation velocity is 2-4 times and the thermal conductivity 10-17 times greater than in silicon at such elevated temperatures. However, the thermal conductivity shows a steeper decrease at high temperature than in graphite, consistent with stronger effects of second-order three-phonon scattering. Our analysis of sample-to-sample variation suggests the behavior of "cleaner" devices most closely approaches the intrinsic high-field properties of graphene. This study reveals key features of charge and heat flow in graphene up to device breakdown at ~2230 K in vacuum, highlighting remaining unknowns under extreme operating conditions.

Orthopedic tissue engineering requires biomaterials with robust mechanics as well as adequate porosity and permeability to support cell motility, proliferation, and new extracellular matrix (ECM) synthesis. While collagen-glycosaminoglycan (CG) scaffolds have been developed for a range of tissue engineering applications, they exhibit poor mechanical properties. Building on previous work in our lab that described composite CG biomaterials containing a porous scaffold core and nonporous CG membrane shell inspired by mechanically efficient core-shell composites in nature, this study explores an approach to improve cellular infiltration and metabolic health within these core-shell composites. We use indentation analyses to demonstrate that CG membranes, while less permeable than porous CG scaffolds, show similar permeability to dense materials such as small intestine submucosa (SIS). We also describe a simple method to fabricate CG membranes with organized arrays of microscale perforations. We demonstrate that perforated membranes support improved tenocyte migration into CG scaffolds, and that migration is enhanced by platelet-derived growth factor BB-mediated chemotaxis. CG core-shell composites fabricated with perforated membranes display scaffold-membrane integration with significantly improved tensile properties compared to scaffolds without membrane shells. Finally, we show that perforated membrane-scaffold composites support sustained tenocyte metabolic activity as well as improved cell infiltration and reduced expression of hypoxia-inducible factor 1α compared to composites with nonperforated membranes. These results will guide the design of improved biomaterials for tendon repair that are mechanically competent while also supporting infiltration of exogenous cells and other extrinsic mediators of wound healing. Copyright © 2013 Wiley Periodicals, Inc.

Freund J.B.,University of Illinois at Urbana - Champaign
Physics of Fluids | Year: 2013

Small slits between endothelial cells in the spleen are perhaps the smallest blood passages in the body, and red blood cells must deform significantly to pass through them. These slits have been posited to participate in the removal of senescent blood cells from the circulation, a key function of the spleen. One of the effects of red blood cell aging is an increased cytosol viscosity; relaxation time measurements suggest their interior viscosity can increase by up to a factor of 10 toward the end of their normal 120 day circulating lifetime. We employ a boundary integral model to simulate red blood cells as they deform sufficiently to flow through such a small passage, whether in the spleen or in a microfluidic device. Different flow rates and cytosol viscosities show three distinct behaviors. (1) For sufficiently slow flow, the pressure gradient is insufficient to overcome elastic resistance and the cell becomes jammed. (2) For faster flow, the cell passes the slit, though more slowly for higher cytosol viscosity. This can be hypothesized to facilitate recognition of senescent cells. (3) A surprising behavior is observed for high elastic capillary numbers, due either to high velocity or high cytosol viscosity. In this case, the cells infold within the slit, with a finger of low-viscosity plasma pushing deeply into the cell from its upstream side. Such infolding might provide an additional mechanism for jamming, and the sharpness of the resulting features would be expected to promote cell degradation. Linear analysis of a model system shows a similar instability, which is analyzed in regard to the cell flow. This linear analysis also suggests a similar instability for unphysiologically low cytosol viscosity. Simulations confirm that a similar infolding also occurs in this case, which intriguingly suggests that normal cytosol viscosities are in a range that is protective against such deformations. © 2013 AIP Publishing LLC.
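For orientation, the elastic capillary number invoked here compares viscous flow stresses to the membrane's elastic resistance. One common nondimensionalization for red blood cells (a standard textbook form, not necessarily the paper's exact definition) is

```latex
Ca \;=\; \frac{\mu \, \dot{\gamma} \, a}{E_s}
```

where μ is the suspending-plasma viscosity, γ̇ a characteristic shear rate, a the cell radius, and E_s the membrane shear elastic modulus. High Ca, whether from fast flow or high interior viscosity slowing shape relaxation, means flow stresses dominate elastic restoring forces, which is the regime in which the infolding instability is reported.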

Lankau R.A.,University of Illinois at Urbana - Champaign
Oecologia | Year: 2011

Invasive species can benefit from altered species interactions in their new range, and by interfering with species interactions among native competitors. Since exotic invasions are generally studied at the species level, relatively little is known about intraspecific variation in the traits that determine an invader's effect on native species. Alliaria petiolata is a widespread and aggressive invader of forest understories that succeeds in part by interfering with mutualistic interactions between native plants and soil fungi. Here, I show that the impact of A. petiolata on soil microbial communities varied among individuals due to variation in their allelochemical concentrations. The differential impacts translated into varied effects on native tree growth, partly because A. petiolata's allelochemicals preferentially affected the most mutualistic fungal taxa. These results highlight the importance of considering the spatial and temporal variation in an invasive species' impacts for understanding and managing the invasion process. © 2010 Springer-Verlag.

Liu L.,University of Illinois at Urbana - Champaign
Reviews of Geophysics | Year: 2015

The driving force for transient vertical motions of Earth's surface remains an outstanding question. A main difficulty lies in the uncertain role of the underlying mantle, especially during the geological past. Here I review previous studies on both observational constraints and physical mechanisms of North American topographic evolution since the Mesozoic. I first summarize the North American vertical motion history using proxies from structural geology, geochronology, sedimentary stratigraphy, and geomorphology, based on which I then discuss the published physical models. Overall, there is a progressive consensus on the contribution of mantle dynamic topography due to buoyancy structures associated with the past subduction. At the continental scale, a largely west-to-east migrating deformation pattern suggests an eastward translation of mantle dynamic effects, consistent with models involving an eastward subduction and sinking of former Farallon slabs since the Cretaceous. Among the existing models, the inverse model based on an adjoint algorithm and time-dependent data constraints provides the most extensive explanations for the temporal changes of North American topography since the Mesozoic. At regional scales, debates still exist on the predicted surface subsidence and uplift within both the western and eastern United States, where discrepancies are likely due to differences in model setup (e.g., mantle dynamic properties and boundary conditions) and the amount of time-dependent observational constraints. Toward the development of the next-generation predictive geodynamic models, new research directions may include (1) development of enhanced data assimilation capabilities, (2) exploration of multiscale and multiphysics processes, and (3) cross-disciplinary code coupling. 
Key Points: North America experienced a west-to-east migrating, long-wavelength dynamic subsidence since 100 Ma. The main control on this dynamic topography is subduction of the Farallon plate since the Mesozoic. Predictive models with data assimilation are promising in deciphering continental evolution. ©2015. American Geophysical Union. All Rights Reserved.

Tappenden K.A.,University of Illinois at Urbana - Champaign
JPEN. Journal of Parenteral and Enteral Nutrition | Year: 2014

Intestinal adaptation is a natural compensatory process that occurs following extensive intestinal resection, whereby structural and functional changes in the intestine improve nutrient and fluid absorption in the remnant bowel. In animal studies, postresection structural adaptations include bowel lengthening and thickening and increases in villus height and crypt depth. Functional changes include increased nutrient transporter expression, accelerated crypt cell differentiation, and slowed transit time. In adult humans, data regarding adaptive changes are sparse, and the mechanisms underlying intestinal adaptation remain to be fully elucidated. Several factors influence the degree of intestinal adaptation that occurs post resection, including site and extent of resection, luminal stimulation with enteral nutrients, and intestinotrophic factors. Two intestinotrophic growth factors, the glucagon-like peptide 2 analog teduglutide and recombinant growth hormone (somatropin), are now approved for clinical use in patients with short bowel syndrome (SBS). Both agents enhance fluid absorption and decrease requirements for parenteral nutrition (PN) and/or intravenous fluid. Intestinal adaptation has been thought to be limited to the first 1-2 years following resection in humans. However, recent data suggest that a significant proportion of adult patients with SBS can achieve enteral autonomy, even after many years of PN dependence, particularly with trophic stimulation.

Dell G.S.,University of Illinois at Urbana - Champaign | Chang F.,University of Liverpool
Philosophical Transactions of the Royal Society B: Biological Sciences | Year: 2014

This article introduces the P-chain, an emerging framework for theory in psycholinguistics that unifies research on comprehension, production, and acquisition. The framework proposes that language processing involves incremental prediction, which is carried out by the production system. Prediction necessarily leads to prediction error, which drives learning, including both adaptive adjustment of the mature language processing system and language acquisition. To illustrate the P-chain, we review the Dual-path model of sentence production, a connectionist model that explains structural priming in production and a number of facts about language acquisition. The potential of this and related models for explaining acquired and developmental disorders of sentence production is discussed. © 2013 The Author(s) Published by the Royal Society. All rights reserved.

Suslick K.S.,University of Illinois at Urbana - Champaign
Current Opinion in Chemical Biology | Year: 2012

Much of our science and technology relies on the visualization of complex data, and chemical biology, more than most fields, often deals with complex datasets. There are, however, other ways of making information available to our senses beyond the visual. Rare individuals naturally have sensory crossover: their synesthesia permits them, for example, to see colors or shapes when hearing sounds, or to sense a specific taste with a specific word. Many scientists, technologists, and inventors, however, make a conscious attempt to convert one type of sensory-like input into a different sensory output. A laser light show, for example, converts sound to sight; infrared imaging converts heat to sight. Two recent examples of such intentional synesthesia are discussed in this context: sight-tasting and smell-seeing. © 2012 Elsevier Ltd.

Lohse S.E.,University of Illinois at Urbana - Champaign | Murphy C.J.,University of Illinois at Urbana - Champaign
Journal of the American Chemical Society | Year: 2012

The synthesis of well-defined inorganic nanoparticles in colloidal solution, which evolved gradually from the 1950s onward, has now reached the point where applications in both the research world and the wider world can be realized. This Perspective explores some of the successes and still-remaining challenges in nanoparticle synthesis and ligand analysis, highlights selected work in the areas of biomedicine and energy conversion that are enabled by colloidal nanomaterials, and discusses technical barriers that need to be overcome by chemists and other scientists in order for nanotechnology to achieve its promise. © 2012 American Chemical Society.

Simons D.J.,University of Illinois at Urbana - Champaign
Perspectives on Psychological Science | Year: 2014

Trying to remember something now typically improves your ability to remember it later. However, after watching a video of a simulated bank robbery, participants who verbally described the robber were 25% worse at identifying the robber in a lineup than were participants who instead listed U.S. states and capitals—this has been termed the “verbal overshadowing” effect (Schooler & Engstler-Schooler, 1990). More recent studies suggested that this effect might be substantially smaller than first reported. Given uncertainty about the effect size, the influence of this finding in the memory literature, and its practical importance for police procedures, we conducted two collections of preregistered direct replications (RRR1 and RRR2) that differed only in the order of the description task and a filler task. In RRR1, when the description task immediately followed the robbery, participants who provided a description were 4% less likely to select the robber than were those in the control condition. In RRR2, when the description was delayed by 20 min, they were 16% less likely to select the robber. These findings reveal a robust verbal overshadowing effect that is strongly influenced by the relative timing of the tasks. The discussion considers further implications of these replications for our understanding of verbal overshadowing. © The Author(s) 2014

University of Illinois at Urbana - Champaign | Date: 2015-03-02

Disclosed are methods of reducing pain and/or preventing or reducing opioid tolerance in a subject by administering to the subject mesenchymal stem cells.

University of Illinois at Urbana - Champaign | Date: 2014-07-29

Methods and apparatus for generating ultrashort optical pulses. Pulses of an infrared source are launched into an optical fiber characterized by a zero-dispersion wavelength (ZDW), where the wavelength of the infrared source exceeds the ZDW of the optical fiber by at least 100 nm. A resonant dispersion wave (RDW) is generated in the optical fiber that has a central wavelength blue-shifted by more than 500 nm relative to the pump wavelength, and, in some cases, by more than 700 nm. The optical fiber has a core of a diameter exceeding the central wavelength of the RDW by at least a factor of five. In a preferred embodiment, the infrared source includes a master-oscillator-power-amplifier, embodied entirely in optical fiber, and may include an Erbium:fiber oscillator, in particular.

University of Illinois at Urbana - Champaign and Semprius | Date: 2015-01-16

Multi-junction photovoltaic devices and methods for making multi-junction photovoltaic devices are disclosed. The multi-junction photovoltaic devices comprise a first photovoltaic p-n junction structure having a first interface surface, a second photovoltaic p-n junction structure having a second interface surface, and an optional interface layer provided between the first interface surface and the second interface surface, where the photovoltaic p-n junction structures and optional layers are provided in a stacked multilayer geometry. In an embodiment, the optional interface layer comprises a chalcogenide dielectric layer.

Agency: NSF | Branch: Standard Grant | Program: | Phase: INFORMATION TECHNOLOGY RESEARC | Award Amount: 15.00K | Year: 2016

The Privacy Enhancing Technology Symposium (PETS) is a leading venue for privacy technology research. Since starting in 2000, PETS has brought together researchers in privacy-related areas including Internet and other data systems and communication networks. Student attendance at PETS is an important part of graduate education in privacy. The 2016 Symposium will be held in Darmstadt, Germany.

This grant provides travel support to encourage participation at PETS by students who would otherwise find it difficult to attend. Criteria for selection include evidence of a serious interest in the field, as demonstrated by research areas. Preference will be given to students presenting papers who lack adequate travel funding from their organizations. Organizers will make particular efforts to encourage participation by women and under-represented minorities.

Agency: NSF | Branch: Standard Grant | Program: | Phase: SOFTWARE & HARDWARE FOUNDATION | Award Amount: 462.00K | Year: 2014

Regression testing is important as it checks that changes to software do not break previously working functionality. However, regression testing is expensive as it requires executing a large number of tests and inspecting their failed runs. To speed up regression testing, researchers have proposed many techniques, including test selection (which, given a set of tests and software changes, selects a subset of tests that are affected by the changes) and test-suite reduction (which identifies what tests can be removed from a test suite without substantially reducing its fault-detection capability). While some of those techniques have been successful in practice, there is a lot of opportunity to further improve regression testing by alleviating the assumptions upon which the existing techniques are built.

Specifically, this project improves regression testing by revisiting six common assumptions: (1) that tests are deterministic (in reality they may depend on timing, environment, or concurrency), (2) that code histories are linear (in reality they are convoluted graphs of branches and merges), (3) that test selection is relevant only for large projects (in reality developers manually select tests even for small projects), (4) that test-suite reduction decreases fault-detection capability only in the current version (in reality it can decrease it even more in future versions), (5) that tests depend only on code (in reality they also depend on non-code artifacts), and (6) that tests depend only on manually written artifacts (in reality they also depend on automatically generated artifacts). The broader impacts of improving regression testing are to increase the speed of software development and improve the quality of developed software.
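The core of test selection described above can be sketched with a toy dependency map. This is an illustrative sketch only, not the project's technique: the test names, module names, and the `select_tests` helper are all hypothetical, and real tools compute the dependency map by program analysis rather than by hand.

```python
# Hypothetical dependency map: each test is associated with the set of
# source modules it exercises. Real tools derive this via static or
# dynamic analysis; here it is hand-written for illustration.
DEPENDENCIES = {
    "test_login": {"auth.py", "session.py"},
    "test_checkout": {"cart.py", "payment.py"},
    "test_profile": {"auth.py", "profile.py"},
}

def select_tests(changed_files, dependencies):
    """Return only the tests whose dependency set intersects the change."""
    changed = set(changed_files)
    return sorted(
        test for test, deps in dependencies.items()
        if deps & changed
    )

print(select_tests(["auth.py"], DEPENDENCIES))  # ['test_login', 'test_profile']
```

Changing `auth.py` selects only the two tests that touch it, so the unaffected checkout test need not be rerun.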

Agency: NSF | Branch: Standard Grant | Program: | Phase: Macromolec/Supramolec/Nano | Award Amount: 435.00K | Year: 2013

The Macromolecular, Supramolecular and Nanochemistry Program of the Division of Chemistry supports Professor Catherine J. Murphy of the University of Illinois at Urbana-Champaign on a project focusing on the nanoparticle-biology interface. Gold nanoparticles of various sizes and shapes are being developed as gold standards for the field. The thermodynamics, kinetics, geometry and cooperativity of protein-nanoparticle binding are revealed by numerous spectroscopic and mass spectrometric methods. The major hypothesis to be tested is that the underlying surface chemistry of the nanoparticles dictates the final biological behavior of the entire ensemble. Le Chatelier's Principle, familiar from general chemistry, is being tested on the nanoscale in the context of gold nanoparticle-protein interactions, since light absorption by the nanoparticles leads to heating, which in turn can lead to either stronger or weaker binding depending on the enthalpies of the interactions.

This fundamental work at the interface of nanoparticles and biological molecules will enable future applications and best practices in the area of nanobiotechnology. The applications include photothermal destruction of pathogenic or cancerous cells; improved drug delivery vehicles based on nanoparticles; and improved sensors for molecular diagnostics. Graduate students and undergraduate students trained in the laboratory gain transdisciplinary experience at the frontiers of science, including opportunities to explain their science to nonscientists. The PI is developing educational modules about nanotechnology and interfacial chemistry for massive open online courses, freely available to the world.

Agency: NSF | Branch: Continuing grant | Program: | Phase: Combinatorics | Award Amount: 107.32K | Year: 2016

A coloring of the vertices of a graph G is a partition of the vertex set of G into sets (called color classes) such that the ends of every edge of G lie in different classes. The basic coloring problem is to find such a partition with the fewest color classes. Coloring addresses the fundamental problem of partitioning a set of objects into classes that avoid certain conflicts. This model has many applications, for example in timetabling, scheduling, frequency assignment, and sequencing problems. The theory of graph coloring is among the central topics in discrete mathematics. It relates to other important areas of combinatorics, such as Ramsey theory, graph minors, independence number, orientations of graphs, and packing of graphs. Coloring properties of graphs depend heavily on the cycle structure of those graphs. The goal of this project is to study a series of extremal problems related to colorings of graphs and hypergraphs, where the answers depend on the cycle structure. The plan is to make significant advances in developing the theory of graph and hypergraph coloring and in studying their cycle structure. The project involves a number of graduate students and young researchers.
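The coloring definition above can be made concrete with the simplest heuristic, greedy coloring: scan the vertices and give each one the smallest color not already used by a colored neighbor. This is a textbook sketch, not part of the project; greedy coloring produces a proper coloring but not, in general, one with the fewest classes.

```python
def greedy_coloring(adj):
    """Greedy proper coloring. adj maps each vertex to its neighbor set."""
    color = {}
    for v in adj:
        # Smallest nonnegative color absent among already-colored neighbors.
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

# A 5-cycle: its chromatic number is 3, and greedy happens to use 3 here.
cycle5 = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 0}}
coloring = greedy_coloring(cycle5)
# Proper: no edge joins two vertices of the same color class.
assert all(coloring[u] != coloring[v] for u in cycle5 for v in cycle5[u])
```

The example also illustrates the dependence on cycle structure noted in the abstract: odd cycles force a third color, while even cycles need only two.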

The main directions of study are planned to be color-critical graphs with small average degree, list coloring, improper colorings, equitable coloring, bounds on the independence number, hypergraph coloring, the existence of cycles of specified length in graphs with high chromatic number, Turan-type problems on cycles in graphs and hypergraphs, the existence of many disjoint cycles in dense graphs, packing, and list packing. Work in these directions will exploit, and possibly extend, recent advances in the field, including results of the investigator and his collaborators, in particular the graduate students working with him. Promising tools include the language of potentials and the notion of list packing. Expected results include enhancements of classical results on disjoint cycles and on longest cycles in graphs with restrictions on the vertex degrees.

Agency: NSF | Branch: Continuing grant | Program: | Phase: THERMAL TRANSPORT PROCESSES | Award Amount: 266.77K | Year: 2014


This project is part of a jointly funded effort by the National Science Foundation and the Electric Power Research Institute to develop new electric power generation systems that use much less water than do current power generating plants. It is hard to overstate the importance of electric power generation and distribution to modern societies, and it is clear that growing power demands must be met within the constraints of limited resources and with reduced environmental impact. Water is a key national resource, and the water used in thermal-electric power plants represents 40% of the total annual draw from fresh-water supplies in the United States. Almost all of this water is used for cooling steam after it passes through the steam-turbine generators in the power plant; this approach usually relies on a cooling tower. An approach that cools the steam without using water in a cooling tower exists; it relies on so-called air-cooled condensers. Unfortunately, this technology increases the cost of the power plant by up to five times, compared to a cooling-tower system. The cost of electricity would rise sharply were power plants required to use current technology to save the cooling water. A new approach is needed to save water and hold down the price of electricity, and this project is aimed at providing the needed breakthrough.

The main idea is to use small wing-like structures (called vortex generators) on a heat exchanger where steam is cooled by air. The vortex generators create a strong swirling flow in the air used to cool the steam. Through this project the potential of vortex generators to enhance the performance of air-cooled condensers will be evaluated, and the fundamental knowledge needed to use them effectively will be acquired. This new knowledge will lead to a new technology, an electric power generation system which provides reliable, inexpensive electricity and eliminates the need for drawing precious fresh water from our limited supplies to do so.

Agency: NSF | Branch: Standard Grant | Program: | Phase: CLIMATE & LARGE-SCALE DYNAMICS | Award Amount: 574.12K | Year: 2015

This research is an effort to understand the influence of land cover change on the hydroclimatology of the La Plata River Basin (LPRB). The LPRB is the second largest hydrological basin in South America after the Amazon, at about a third the size of the continental US. The weather and climate of the region are dominated by the South American monsoon system (SAMS), in which summer (December to February) rains are fed by moisture that enters the continent in the equatorial trade winds, crosses the Amazon, and is transported to the LPRB by the South American low-level jet. Over the course of this route the moisture can fall as rain and be returned to the atmosphere through evapotranspiration multiple times, a process referred to as moisture recycling. Previous work by the PI and others suggests that 60 to 70 percent of the mean annual precipitation over the LPRB comes from evapotranspiration from the South American landmass, with about 24 percent coming from local evapotranspiration over the LPRB. Since the land surface is an important source of moisture for precipitation, dry soil moisture anomalies can become persistent, as they lead to further reductions in precipitation. In addition, land surface conditions can affect precipitation through their influence on the thermodynamic structure of the lower atmosphere, which can modulate atmospheric circulation and stability. The key role of land-atmosphere coupling in the LPRB suggests that changes in land cover could have significant impacts on the local hydroclimate. The PI notes that there has been intensive deforestation in the region, with perhaps the highest deforestation rate in the world for the period 2000 to 2012, thus it is of interest to understand how deforestation is affecting the monsoon system in the region. More specifically, the PI seeks to understand the relative impacts of moisture recycling and thermodynamic/dynamic land-atmosphere feedbacks on South American precipitation patterns.
One hypothesis to be pursued is that the strongest land-atmosphere interactions occur during the late spring and early summer season in association with the SAMS, with a spatial pattern that highlights the transition regions linking the tropics and subtropics. Another is that the thermodynamic/dynamic effects are more important than changes in precipitation recycling in determining the interannual variability of LPRB precipitation. But the PI also hypothesizes that projected future deforestation in South America would change the relative importance of moisture recycling and thermodynamic/dynamic effects, and the shift would be accompanied by significant reductions in LPRB precipitation. The research will also examine the extent to which deforestation changes the frequency and intensity of precipitation. In addition, the project will consider how these feedbacks may change under projected future changes in land surface conditions.

The research agenda is built around three tools, a statistical approach known as the generalized equilibrium feedback assessment (GEFA) and two enhanced versions of the Weather Research and Forecasting (WRF) model. The statistical method is a multivariate lagged regression technique developed to quantify feedback strength, which can be used to determine the strength of moisture recycling on a spatially varying basis. One version of the WRF model (WRF-WVT, previously developed by the PI) has been augmented to include tracers used to track water vapor transport, and this configuration can be used to determine the source regions of water vapor and hence the contributions of transport and local evaporation to precipitation. The model has also been modified to use ecosystem functional types (EFTs) which can be adjusted to represent forested and agricultural land, and the model is able to simulate the mesoscale convective systems which account for much of the rainfall in the LPRB. The other version of WRF is WRF-hydro, a community model which represents the subsurface component of the hydrological cycle, which the PI claims is particularly important over South America, as groundwater dynamics can completely change the atmospheric fluxes.
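The idea behind a lagged-regression feedback estimate like GEFA can be illustrated in a drastically simplified, univariate form. This sketch is not the GEFA method itself: it fits a single land-driver coefficient b in x_t = a*x_{t-1} + b*y_{t-1} + noise by ordinary least squares on synthetic data, whereas GEFA is multivariate and applied to observed climate fields.

```python
import random

def fit_lagged(x, y):
    """Least-squares fit of x[t] on (x[t-1], y[t-1]) via 2x2 normal equations."""
    sxx = sxy = syy = sxt = syt = 0.0
    for t in range(1, len(x)):
        p, q, r = x[t - 1], y[t - 1], x[t]
        sxx += p * p; sxy += p * q; syy += q * q
        sxt += p * r; syt += q * r
    det = sxx * syy - sxy * sxy
    a = (syy * sxt - sxy * syt) / det  # persistence coefficient
    b = (sxx * syt - sxy * sxt) / det  # feedback coefficient
    return a, b

# Synthetic "atmosphere" x forced by a "land" driver y with known
# persistence a = 0.5 and feedback b = 0.3, plus small noise.
random.seed(0)
n = 5000
y = [random.gauss(0, 1) for _ in range(n)]
x = [0.0]
for t in range(1, n):
    x.append(0.5 * x[t - 1] + 0.3 * y[t - 1] + random.gauss(0, 0.1))

a_hat, b_hat = fit_lagged(x, y)  # both recovered close to (0.5, 0.3)
```

With enough samples the regression recovers the prescribed coefficients, which is the sense in which lagged regression "quantifies feedback strength" in the abstract.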

The research is accompanied by an education effort in which three new courses would be developed to give students a thorough grounding in hydroclimatology. One course focuses on hydrometeorological observations and will give selected students an opportunity to perform fieldwork during a 2-week period in the summer. The fieldwork program is developed in partnership with the Center for Western Weather and Water Extremes (CW3E) at the Scripps Institution of Oceanography. For the period 2015-2018, students will participate in the CalWater2 field campaign to observe atmospheric rivers, taking measurements of precipitation, stream flow, and soil moisture. A second course introduces students to numerical modeling using the WRF-hydro model; in this class, selected students travel to the National Center for Atmospheric Research during the summer to participate in model development. A third course teaches basic hydrology to students with an atmospheric science background, approaching the subject from an atmospheric science perspective; it is intended as an alternative to typical hydrology courses, which focus on issues relating to the construction and operation of waterworks. The course attempts to develop a unified language for the description of surface and subsurface processes, analogous to the conceptual framework used for atmospheric processes, and helps students work with models that incorporate atmosphere, surface, and subsurface components.

Agency: NSF | Branch: Standard Grant | Program: | Phase: Software Institutes | Award Amount: 499.99K | Year: 2017

The voluminous growth in data, together with the burgeoning field of data science, has greatly increased the demand for machine learning, a field of computer science that focuses on the development of programs that can learn and change in response to new data. With increasing access to large volumes of data, practitioners often resort to machine learning to construct more precise models of nature or to learn fundamentally new concepts. For example, machine learning can help improve the accuracy of weather and climate predictions, model the efficacy of drugs and their interactions, and identify specific features, such as a face, in a large set of images or videos. But as data volumes grow, the speed with which machines can learn from data is decreasing. As a result, new techniques are required to accelerate learning algorithms so that they can be applied to larger and more complex data sets. This work will develop new approaches to improve the performance of a wide class of machine learning algorithms. Specifically, it will leverage the C++ programming language and recent research into fundamental bit operations to build fast tree-like data structures that underlie many of the most commonly used implementations of machine learning algorithms. In particular, algorithms in scikit-learn, the most widely used machine learning library written in the Python programming language, will be accelerated by using these new tree data structures. Given the widespread adoption of scikit-learn, this work will impact diverse fields from astronomy to biology to geoscience to physics. The scikit-learn library is also one of the more popular libraries for teaching (and understanding) machine learning.
With an explosion of books, blogs, and tutorials that use scikit-learn algorithms and pipelines to demonstrate specific types of machine learning such as classification, regression, clustering, and feature extraction, this project will immediately impact a wide range of people, from seasoned practitioners to engineers gaining additional training to students at universities and colleges across the nation. In addition, these tree data structures will be submitted for inclusion in the C++ standard, which would impact millions of developers worldwide. Finally, the algorithms will be implemented under an open-source license in a public forum.

The STAMLA project aims at developing efficient and scalable tree algorithms for machine learning, inspired by high-performance simulation codes. Over the last few years, machine learning has become a popular technique in data mining to extract information from data sets, build models, and make predictions across a wide range of application areas. However, current tools have been built in high-level languages with more focus on functionality than on raw performance. As scientific experiments accumulate more and more data, and as complex models require larger and larger training sets, scalability issues are emerging. At the same time, in high-performance computing, petascale simulations have shown that fundamental data-structure optimizations can have a significant impact on overall code performance. In particular, by replacing straightforward tree implementations with implicit trees based on hash tables, simulation codes are able to make the most of modern architectures, leveraging caches and vectorization. This research project will apply this knowledge to machine learning algorithms in order to overcome the limitations of existing libraries and make analyses of extremely large data sets possible. The proposed research includes the development of three library layers, built on top of each other. The first layer is a library of fast bit-manipulation tools for tree indexing. It extends existing research that has already demonstrated two to three orders of magnitude improvement over standard solutions provided by compilers. The second layer is a tree building-blocks library developed using generative programming techniques. This layer will provide developers with generic tools to build efficient implicit trees for specific domains, optimized at compile time to make the most of the targeted architecture.
Finally, the third layer consists of a contribution package to the scikit-learn library that leverages the data structures introduced in the second layer. Together, these three layers form a consistent stack that propagates low-level optimizations drawn from high-performance computing practice into one of the most widely used high-level machine learning libraries. Because machine learning is domain independent, the results of this project have the potential to impact all data-intensive applications relying on machine learning algorithms based on tree data structures. Moreover, in addition to being developed in an open-source framework via a public repository, the three library layers will be released through different channels: 1) the bit-manipulation tools will aim at standardization in the C++ language through a collaboration with the ISO C++ Standards Committee, 2) the tree building blocks will be proposed for inclusion in the Boost C++ libraries, and 3) the machine learning algorithms will be published as a contribution package of the scikit-learn library. These channels will ensure broad adoption of the tools developed throughout this project, and their long-term support by well-established communities.
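The notion of an "implicit tree" with bit-based indexing, as described above, can be sketched in a few lines. This is a generic illustration in Python (the project itself targets C++): nodes are stored in a hash table keyed by a heap-style index, and parent, children, and depth are recovered with shifts instead of pointers; all names here are illustrative.

```python
# Heap-style implicit binary tree indexing: node i has children 2i+1 and
# 2i+2, parent floor((i-1)/2), and its depth follows from the bit length
# of i+1. No pointers are stored at all.
def parent(i):
    return (i - 1) >> 1          # undefined for the root (i == 0)

def left(i):
    return (i << 1) + 1

def right(i):
    return (i << 1) + 2

def level(i):
    return (i + 1).bit_length() - 1  # root is level 0

# Only existing nodes are stored, in a hash table keyed by implicit index,
# so sparse trees cost nothing for absent subtrees.
tree = {0: "root", 1: "L", 2: "R", 4: "LR"}

assert parent(4) == 1 and right(1) == 4
assert level(4) == 2
assert left(1) not in tree  # node 3 was never materialized
```

Because navigation is pure integer arithmetic, traversals touch contiguous, predictable memory, which is the cache and vectorization benefit the abstract attributes to implicit trees.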

Agency: NSF | Branch: Standard Grant | Program: | Phase: STATISTICS | Award Amount: 220.00K | Year: 2014

This project studies novel inference procedures and models for seasonal time series. The results of this research will have direct impact on the diagnostics of seasonal adjustment procedures that are currently implemented at the U.S. Census Bureau and other domestic or foreign agencies where seasonal adjustments are routinely published. The Visual Significance method used at the Census Bureau lacks a rigorous statistical justification and the new spectral peak detection methods will help to quantify type I and II errors in a disciplined fashion for a wide class of processes. Although motivated by research problems at Census, the new methodology and models are expected to be useful in the analysis of time series from various disciplines, including economics, astronomy, environmental science, and atmospheric sciences, among others.

Specifically, the project consists of three interrelated parts. In the first part, the PI will develop two new methods of spectral peak detection, which are intended to provide more principled approaches to the Visual Significance method used at the U.S. Census Bureau. In the second part, the PI will address the band-limited goodness-of-fit testing using the integral of the square of the normalized periodogram. Instead of assuming the strong Gaussian-like assumption as done in the literature, the PI will use a new Studentizer, so that the limiting distribution of the self-normalization-based test statistic is pivotal under less stringent assumptions. In the third part, the PI will study a new parametric class of spectral density, which can be used in model-based seasonal adjustment to improve the quality of model fitting and seasonal adjustment. The new parametric models and related model-based seasonal adjustment, if successfully developed, may offer a more effective means of modeling and adjusting time series. The research will promote teaching and training through mentoring of undergraduate and graduate students and through the development of related lecture notes.
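The spectral peak detection discussed above rests on the periodogram: a seasonal series should produce a dominant ordinate at the seasonal frequency. The following pure-Python sketch is illustrative only; it locates the peak of a raw periodogram for a monthly-style series, whereas the project's methods add the formal error control that simple peak-picking (and the Visual Significance method) lacks.

```python
import math

def periodogram(x):
    """Raw periodogram I(f_k) at Fourier frequencies f_k = k/n, k = 1..n//2."""
    n = len(x)
    mean = sum(x) / n
    ords = []
    for k in range(1, n // 2 + 1):
        c = sum((x[t] - mean) * math.cos(2 * math.pi * k * t / n) for t in range(n))
        s = sum((x[t] - mean) * math.sin(2 * math.pi * k * t / n) for t in range(n))
        ords.append((k / n, (c * c + s * s) / n))
    return ords

# Ten years of a monthly seasonal signal: period 12, so the peak should
# appear at frequency 1/12.
x = [math.sin(2 * math.pi * t / 12) for t in range(120)]
freq, power = max(periodogram(x), key=lambda fp: fp[1])
assert abs(freq - 1 / 12) < 1e-9
```

Deciding whether such a peak is statistically significant, with controlled Type I and Type II error, is exactly the gap the proposed detection methods address.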

Agency: NSF | Branch: Standard Grant | Program: | Phase: APPLIED MATHEMATICS | Award Amount: 241.89K | Year: 2016

This award supports a research collaboration on mathematical and numerical modeling and analysis of failure in soft materials. This research project concerns the derivation and numerical implementation of a mathematical theory capable of describing, explaining, and predicting the initiation and propagation of fracture in soft organic solids---namely, solids made up of networks of long carbon-based macromolecules such as elastomers, gels, and biological tissues---when subjected to arbitrarily large mechanical forces. Soft organic solids are known to fracture in a very different manner than standard hard solids (such as metals and ceramics). The defining difference is that internal fracture in soft organic solids initiates through the sudden growth of inherent defects into large enclosed cavities/cracks (a phenomenon popularly referred to as cavitation). With the ever-increasing use of soft materials in new technologies, a fundamental and quantitative understanding of when and how organic solids fracture is of utmost importance for their advancement. Likewise, such a fundamental and quantitative understanding is critical in advancing medical treatments involving soft biological tissues, such as shock-wave lithotripsy, or treatments dealing with aneurysms.

This project centers on a novel variational theory of fracture for finitely deformable solids that is consistent with the principle of conservation of mass (a highly non-trivial feature that has been overlooked in the literature by related formulations) and wherein the newly created surfaces (by fracture) are not restricted to be hypersurfaces (as in classical brittle fracture) but can also be the boundaries of N-dimensional cavities, N being the spatial dimension. The main objectives of the project are: (1) to develop a formulation in terms of variational evolutions for the initiation and propagation of fracture in soft organic solids under arbitrarily large quasi-static deformations, and (2) to implement this formulation numerically and confront its predictions with emerging experimental evidence of high spatio-temporal resolution. Objective (1) entails rigorous existence results, while objective (2) entails the construction of appropriate approximate functionals (of the phase-field type) and their stable and convergent numerical implementation in the non-convex context of finite deformations with constraints (in particular, incompressibility).
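As background for the functional described above, the classical variational fracture energy of Francfort and Marigo can be written as follows; this is the standard textbook form, not the project's mass-conserving extension, which generalizes it to finite deformations and cavity boundaries.

```latex
% Classical brittle-fracture energy (Francfort-Marigo form), minimized
% jointly over the deformation u and the crack set \Gamma:
E(u,\Gamma) \;=\; \int_{\Omega\setminus\Gamma} W(\nabla u)\,\mathrm{d}x
\;+\; G_c\,\mathcal{H}^{N-1}(\Gamma)
```

Here W is the stored elastic energy density, G_c the critical energy release rate, and \mathcal{H}^{N-1} the surface measure of the crack set \Gamma; in the project's setting \Gamma is not restricted to hypersurfaces but may also bound N-dimensional cavities.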

Agency: NSF | Branch: Continuing grant | Program: | Phase: Combinatorics | Award Amount: 190.00K | Year: 2015

The extremal and probabilistic theory of combinatorial structures impacts several areas of mathematics, including number theory, combinatorics, and logic, as well as other fields such as information theory, coding theory, and theoretical computer science. The study of random structures and randomized algorithms has gained particular importance in recent years since they have proved to be useful tools in dealing with the many large real-world networks that have emerged and are being actively investigated. Developing new techniques to study these complex systems is a major task that will likely continue for many years, and the theory of sparse combinatorial structures may form a theoretical foundation for understanding their large-scale behavior. Various new methods in this theory will be applied in this project. One direction is to prove analogues of classical theorems in the sparse environment. Another direction is to apply the methods to various enumeration problems. At a high level, most of the problems that will be investigated seek to understand the quantitative relationship between the local and global behavior of a large system. Additionally, the methods apply readily to percolation, which is connected to statistical physics. Much of this work will be done with graduate students, and some of the work may be integrated into courses to help bring students into this exciting area of research.

One of the most important trends in combinatorics over the past twenty years has been the introduction and proof of various random analogues of well-known theorems in extremal graph theory, Ramsey theory, and additive combinatorics. Recently, powerful general transference theorems, which the PI has helped to develop, have been used to attack such questions. Even though these tools have proved useful in resolving several central conjectures, many of their potential applications have not been fully explored. For example, these methods seem to be applicable to many enumeration problems. The investigators will address several problems of this type, and they also expect that this project will lead to exciting new questions and directions. In particular, the investigators will focus on the following related areas: (i) Extremal questions in sparse structures; (ii) Embedding in subgraphs of sparse random and pseudo-random graphs; (iii) Ramsey-Turán questions; (iv) Applications of flag algebras; and (v) Problems in bootstrap percolation.
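The bootstrap percolation processes mentioned in (v) are easy to state concretely. As a minimal illustration (a standard textbook process, not code or results from this project), the classical 2-neighbour bootstrap process on a finite grid can be simulated directly:

```python
def bootstrap_percolate(n, infected, r=2):
    """r-neighbour bootstrap percolation on an n x n grid.

    Starting from the initially infected set, repeatedly infect any
    healthy site with at least r infected (orthogonal) neighbours,
    until no site changes state. Returns the final infected set.
    """
    infected = set(infected)
    changed = True
    while changed:
        changed = False
        for x in range(n):
            for y in range(n):
                if (x, y) in infected:
                    continue
                nbrs = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
                if sum(p in infected for p in nbrs) >= r:
                    infected.add((x, y))
                    changed = True
    return infected

# The diagonal of an n x n grid is a classical percolating seed for r=2:
full = bootstrap_percolate(4, [(i, i) for i in range(4)])
print(len(full))  # 16: the diagonal infects the whole grid
```

The local-rule-to-global-outcome character of this process is exactly the kind of local/global relationship the abstract describes.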

Agency: NSF | Branch: Standard Grant | Program: | Phase: ENERGY,POWER,ADAPTIVE SYS | Award Amount: 358.65K | Year: 2016

Networked multi-agent systems consist of a group of participants, referred to as agents, that interact over a network to collectively perform collaborative tasks. Networked multi-agent systems are useful in many application domains, including distributed robotics, sensor networks, and smart grids. Due to their many potential applications, networked multi-agent systems have been a focus of intense research activity over the past several decades. Much of the past work on networked multi-agent systems assumes that the agents, and the network links over which they communicate, are both reliable. In practical multi-agent systems, some of the system components may fail or may be compromised by an adversary. Faulty agents may behave incorrectly or in an adversarial manner, and similarly, faulty or compromised network links may deliver messages incorrectly. This project addresses the design and analysis of distributed algorithms for multi-agent systems that are robust to adversarial behavior of agents and links, which may result from failures or attacks. The project focuses on two important classes of problems in multi-agent systems, namely, distributed optimization and distributed hypothesis testing. Robust solutions to these problems may be used to obtain robust solutions to other related problems in multi-agent systems. Thus, the project has the potential to yield solutions that improve the robustness of practical multi-agent systems. The project scope includes the design of robust algorithms, their theoretical analysis, and the development of a software tool to evaluate these algorithms. The educational component of the project includes participation of undergraduate and graduate students in project activities, and incorporation of project research outcomes into a related graduate course.

The project aims to develop multi-agent algorithms that can tolerate Byzantine failures. The Byzantine fault model captures arbitrary behavior that may be exhibited by faulty or compromised agents or links. A Byzantine faulty agent may be adversarial in nature, and may behave arbitrarily. Possible misbehaviors of a faulty agent include performing computations incorrectly, and sending incorrect or inconsistent messages to other agents. Similarly, a Byzantine faulty link can result in tampering of messages sent over the link. Multi-agent algorithms that can tolerate Byzantine failures are also robust in the presence of the wide range of faulty behaviors possible in a practical system. In the context of multi-agent optimization and multi-agent hypothesis testing, the project explores many research challenges, including the following: (i) identifying network properties that are necessary and sufficient to tolerate Byzantine agent or link failures, while achieving desirable properties for the distributed computation, (ii) evaluating the impact of multi-hop forwarding of messages on the multi-agent computation, (iii) mechanisms for network adaptation to improve performance, and (iv) analysis of algorithm behavior in large-scale networks. Through the work on these issues, the project aims to develop fundamental principles that can guide the design of robust fault-tolerant algorithms for different types of distributed computations. The tools used for evaluating the algorithms include mathematical analysis as well as simulation-based experimentation.
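As a concrete, much-simplified illustration of how a correct agent can survive Byzantine inputs in computations of this kind (this is the standard trimmed-mean idea from the fault-tolerance literature, not the project's own algorithm), an agent can discard the most extreme received values before averaging:

```python
def trimmed_mean_step(values, f):
    """One f-trimmed-mean update for scalar agent states.

    The agent sorts the values it receives, discards the f largest and
    f smallest, and averages the rest. With enough redundancy this
    keeps the update inside the range spanned by correct agents'
    values, even if up to f received values are Byzantine.
    """
    s = sorted(values)
    kept = s[f:len(s) - f]
    return sum(kept) / len(kept)

# Correct agents report values in [1, 3]; one Byzantine agent reports 1000.
received = [1.0, 2.0, 3.0, 2.5, 1000.0]
print(trimmed_mean_step(received, f=1))  # 2.5 — the outlier is discarded
```

A plain average of the same inputs would be pulled to roughly 201.7, which is why untrimmed consensus updates are fragile under the Byzantine model.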

Agency: NSF | Branch: Continuing grant | Program: | Phase: FOUNDATIONS | Award Amount: 85.52K | Year: 2015

This is a research project at the interface of the mathematical topics of set theory, combinatorics, and analysis. The research contains three projects involving two main areas of mathematics: descriptive set theory and ergodic Ramsey theory. The first two of the research projects lie in the theory of definable equivalence relations, which provides a general framework for understanding the nature of classification of mathematical objects up to some notion of equivalence; due to its broad scope, it has natural interactions with many areas of mathematics. These two projects are devoted to studying an important subclass of definable equivalence relations and whether slight extensions of the members of this subclass still belong to it. The third project features a new method for obtaining statements in arithmetic combinatorics similar in nature to a celebrated theorem of Szemeredi, which roughly states that any non-negligible subset of integers retains much of the additive structure of the entire set of integers.

In the theory of definable equivalence relations on Polish spaces, a central place is occupied by countable Borel equivalence relations, an important subclass of which is that of treeable equivalence relations. The first two projects investigate the question of closure of this subclass under finite index extensions in two different contexts: Borel and measure-theoretic. The former involves Borel combinatorics and possibly Borel games, whereas the latter is tightly connected with ergodic theory and the theory of cost of equivalence relations, and may require nontrivial machinery from geometric group theory. The third project lies in ergodic Ramsey theory and its goal is to obtain multiple recurrence results for amenable groups via a correspondence principle provided by nonstandard analysis. This is done by transferring recurrence statements from a given amenable group to the more convenient setting of probability groups by taking the ultrapower of the original group and equipping it with the Loeb measure. The latter, being countably additive, presents the main advantage of the probability group over the original amenable group equipped with only a finitely additive density function, enabling integration over the group and the use of Fubini's theorem.

Agency: NSF | Branch: Continuing grant | Program: | Phase: ROBUST INTELLIGENCE | Award Amount: 179.27K | Year: 2016

This project develops new technologies at the interface of computer vision and natural language processing to understand text-to-image relationships. For example, given a captioned image, the project develops techniques which determine which words (e.g., "woman talking on phone", "the farther vehicle") correspond to which image parts. From robotics to human-computer interaction, there are numerous real-world tasks that benefit from practical systems to identify objects in scenes based on language and understand language based on visual context. In particular, the project develops the first language-based image authoring tool which allows users to edit or synthesize realistic imagery using only natural language (e.g., "delete the garbage truck from this photo" or "make an image with three boys chasing a shaggy dog"). Beyond the immediate impact of creating new ways for users to access and author digital images, the broader impacts of this work include three focus areas: the development of new benchmarks for the vision and language communities, outreach and undergraduate research, and leadership in promoting diversity.

At the core of the project are new techniques for large-scale text-to-image reference resolution (TIRR) that enable systems to automatically identify the image regions that depict entities described in natural language sentences or commands. These techniques advance image interpretation by enabling systems to perform partial matching between images and sentences, referring expression understanding, and image-based question answering. They also advance image manipulation by enabling systems that can synthesize images starting from a textual description, or modify images based on natural language commands. The main technical contributions of the project are: (1) benchmark datasets for TIRR with comprehensive large-scale gold standard annotations that will make TIRR a standard task for recognition; (2) principled new representations for text-to-image annotations that expose the compositional nature of language using the formalism of the denotation graph; (3) new models for TIRR that perform an explicit alignment (grounding) of words and phrases to image regions guided by the structure of the denotation graph; (4) applications of TIRR methods to referring expression understanding and visual question answering; and (5) applications of TIRR to image creation and manipulation based on natural language input.
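The alignment (grounding) step at the heart of TIRR can be sketched in miniature. The following toy example (with stand-in embedding vectors, not the project's learned denotation-graph models) grounds a phrase by picking the image region whose embedding is most similar to the phrase embedding:

```python
import numpy as np

def ground_phrase(phrase_vec, region_vecs):
    """Toy grounding step: return the index of the image region whose
    embedding has the highest cosine similarity with the phrase
    embedding, along with all similarity scores.

    Real TIRR systems learn a joint text-image embedding space; the
    vectors here are illustrative stand-ins.
    """
    p = phrase_vec / np.linalg.norm(phrase_vec)
    R = region_vecs / np.linalg.norm(region_vecs, axis=1, keepdims=True)
    sims = R @ p  # cosine similarity of each region with the phrase
    return int(np.argmax(sims)), sims

# Three hypothetical region embeddings and one phrase embedding:
regions = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
idx, sims = ground_phrase(np.array([0.1, 0.9]), regions)
print(idx)  # 1: the second region is the closest match
```

A full system would score every (phrase, region) pair this way and resolve the alignment jointly over the sentence rather than phrase by phrase.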

Agency: NSF | Branch: Standard Grant | Program: | Phase: ENERGY FOR SUSTAINABILITY | Award Amount: 329.39K | Year: 2014

Principal Investigator: André Schleife
Number: 1437230

Nontechnical Description

The sun represents the most abundant potential source of pollution-free energy on earth. Solar cells for producing electricity require materials that absorb the sun's energy and convert its photons to electrons, a process called photovoltaics. The search for the best photovoltaic material is an active area of solar cell research. Recently, an exciting new class of photovoltaic materials called organo-metal perovskites has emerged. These materials are promising because they have low production cost, use elements and materials abundant in the earth's crust, and currently possess solar energy conversion efficiencies of over 15%. Further research is needed to tune the optical properties of these materials to improve the light absorption needed to push efficiencies over 20% to be competitive with silicon solar cells, and to remove lead, which is toxic, from the material matrix. The goal of this project is to gain a fundamental understanding of the beneficial electronic and optical properties of organo-metal perovskites to address these two issues. Advanced computational approaches will be used to predict electronic and optical properties of organo-metal perovskites at the atomic level to study how quantum mechanics plays a role in the performance of these materials for solar photovoltaic applications. Model simulations will be performed to identify replacements for lead that maintain desirable solar energy absorption characteristics. Scientific results will be made available to the public in blog-style updates, and a database will be provided to the scientific community to confirm the findings and make use of this information. In addition, this project will provide training of materials science graduate and undergraduate students in advanced supercomputing techniques needed to provide a pipeline of trained scientists in this critical area of national workforce need.

Technical Description

Recently, organo-metal perovskites, especially those based on lead, have emerged as an exciting new class of earth-abundant photovoltaic materials with solar energy conversion efficiencies exceeding 15% and potential for low manufacturing cost. The goal of this project is to gain a fundamental understanding of the excitonic and optical properties of organo-metal perovskites through advanced computational models, and then use this approach to identify replacements for lead in the material matrix. The perovskite material system offers a large phase space for improvement by replacing different constituent atoms. However, for targeted development, a fundamental understanding of electronic correlations, particularly the physics that governs the excitonic and charge-transport properties, is needed. In this regard, the interplay between free carriers and the optical and excitonic properties of systems with perovskite crystal structures is unknown. Theoretical spectroscopy techniques based on many-body perturbation theory will be used to achieve a fundamental understanding of light-matter interaction in perovskite-halide materials with different constituent ions. First-principles techniques based on Hedin's approximation and the Bethe-Salpeter equation, which accurately treat quantum-mechanical electron-electron and electron-hole interactions, will be used to predict band gaps, effective carrier masses, and optical absorption spectra. By taking into account quasi-particle energies, excitonic effects, and the interplay with free carriers, this approach will provide mechanistic insights into the optical absorption process in perovskites. This project will also investigate the influence of different constituents, particularly systems with and without lead ions, on these optical absorption processes.
The research outcomes will be made available to the public in blog-style updates, and the computational results will be provided to the scientific community for continued validation. In addition, this project will provide training of materials science graduate and undergraduate students in advanced atomistic modeling techniques on supercomputing platforms.

Agency: Department of Defense | Branch: Air Force | Program: STTR | Phase: Phase I | Award Amount: 150.00K | Year: 2014

ABSTRACT: Living systems rely on pervasive vascular networks to enable a plurality of biological functions, exemplified by natural composite structures that are lightweight, high-strength, and capable of mass and energy transport. In contrast, synthetic composites possess high strength-to-weight ratios but lack the dynamic functionality of their natural counterparts. CU Aerospace, with team partners the University of Illinois at Urbana-Champaign (UIUC), North Carolina State University (NCSU), and Lockheed Martin, proposes to use a revolutionary microvascular technology developed at UIUC to build a composite counter-flow heat exchanger. This technology relies on 3D weaving of sacrificial fibers into a polymeric matrix; the fibers are subsequently vaporized to obtain a uniform array of capillaries. By weaving these sacrificial fibers with a perpendicular array of carbon fibers and using computational modeling to optimize the design, this device can achieve good lateral thermal conductance while retaining very low axial conductance. Most Joule-Thomson heat exchangers are either metal finned-tube devices with limited surface area between the solid and gas streams, or etched-glass/silicon devices that allow relatively limited gas flow and cooling power. A micro-capillary array based heat exchanger offers the potential for both large surface area and large gas flow, with a manufacturing process that offers low-cost mass production. BENEFIT: Development of the sacrificial fibers to allow incorporation of microvascular networks in polymeric composites has tremendous potential. Multiple functionalities are achieved by distributing different fluids throughout the microvascular network, which can be seamlessly integrated into both rigid and flexible materials.
By circulating fluids with unique physical properties, there is the capability to create a new generation of biphasic composite materials in which the solid phase provides strength and form while the fluid phase provides interchangeable functionality. Applications that have been examined include self-healing, thermal management, electromagnetic signature, electrical conductivity tuning, and chemical reactivity. The impact of this technology is extremely broad and far-reaching; while our initial efforts are focused on military and aerospace applications, fertile research opportunities exist across a broad cross-section of industries. Long-term strategic plans are to leverage our development efforts to foster spin-off technologies in related industries.
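For context on the counter-flow configuration the abstract proposes, its thermal performance is commonly summarized with the textbook effectiveness-NTU relation. The sketch below implements only that standard formula (illustrative parameter values, not project data):

```python
import math

def counterflow_effectiveness(ntu, cr):
    """Standard effectiveness-NTU relation for a counter-flow heat
    exchanger, where cr = C_min / C_max is the ratio of the two
    streams' heat-capacity rates and NTU = UA / C_min.
    """
    if abs(cr - 1.0) < 1e-12:
        # Balanced-stream limit of the general formula.
        return ntu / (1.0 + ntu)
    e = math.exp(-ntu * (1.0 - cr))
    return (1.0 - e) / (1.0 - cr * e)

# A larger transfer area (higher NTU) drives effectiveness toward 1,
# which is why a micro-capillary array's large surface area matters:
for ntu in (1.0, 3.0, 10.0):
    print(round(counterflow_effectiveness(ntu, 0.8), 3))
```

The counter-flow arrangement is notable because, unlike parallel flow, its effectiveness approaches 1 as NTU grows, so surface area gains translate directly into performance.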

Agency: NSF | Branch: Standard Grant | Program: | Phase: Dynamics, Control and System D | Award Amount: 300.91K | Year: 2015

Recent advances in automation and robotics have created a pressing need for new protocols, that is, for algorithms or control laws that allow teams of multiple autonomous agents to cooperate and accomplish complex tasks. Unfortunately, many of the best protocols for multi-agent coordination problems suffer from scalability issues: while they perform well when the number of agents is small or moderate, their performance degrades sharply as the number of agents in the network grows. This project will develop new control laws for a range of multi-agent problems whose performance is maintained even as network size becomes very large. A number of tasks with broad practical importance will be considered, including optimal distribution of limited resources among agents, cooperative tracking and estimation, and adaptive positioning for optimal sensing. With these new protocols, large groups of autonomous agents (such as mobile robots or unpiloted aerial vehicles) will be able to quickly accomplish a number of useful and important tasks. These advances are needed to allow emerging technologies for autonomous vehicles and other networked autonomous systems to realize their potential economic and societal benefits.

The main technical contribution will be to speed up a widely used class of nearest-neighbor interactions. It is common to optimize a global objective in multi-agent control by means of local updates that interleave the maximization of local objectives with consensus terms that effectively couple these objectives. This project will develop techniques to speed up such consensus-like updates. By a judicious combination of weight selection and extrapolation by each agent, the convergence time of consensus updates will be improved by one or several orders of magnitude. These speedups further imply quick convergence times for a number of multi-agent problems relying on consensus-like updates. The techniques applied mix recent advances from algebraic graph theory, optimization, switched dynamical systems, and the joint spectral radius.
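One simple instance of the weight-selection-plus-extrapolation idea is a heavy-ball-style consensus iteration, in which each agent mixes the weighted average of its neighbors' states with its own previous state. The sketch below is illustrative (a generic accelerated-consensus scheme with made-up weights and parameters, not the project's protocols):

```python
import numpy as np

def accelerated_consensus(W, x0, beta, steps):
    """Consensus iterations with a heavy-ball extrapolation term:

        x_{k+1} = (1 + beta) * W @ x_k - beta * x_{k-1}

    For a doubly stochastic W this preserves the average of the
    initial states while the disagreement decays geometrically.
    """
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(steps):
        x_prev, x = x, (1 + beta) * (W @ x) - beta * x_prev
    return x

# Path graph of 3 agents with a doubly stochastic weight matrix:
W = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.5]])
x0 = np.array([0.0, 1.0, 5.0])
x = accelerated_consensus(W, x0, beta=0.3, steps=50)
print(np.round(x, 3))  # all entries converge to the average, 2.0
```

Tuning beta to the spectrum of W (the "judicious weight selection" of the abstract) is what yields the order-of-magnitude speedups over plain averaging, which corresponds to beta = 0.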

Agency: NSF | Branch: Continuing grant | Program: | Phase: STUDIES OF THE EARTHS DEEP INT | Award Amount: 66.45K | Year: 2016

The physics behind several fundamental aspects of South American tectonics remains elusive. For example, various geological records suggest that the Andes Mountains began shortening significantly at least 40 million years ago. However, paleo-altimetry proxy data reveal that most of the surface elevation of the Andes remained relatively low until as late as 20 or 15 million years ago, a time frame that is inconsistent with the shortening history. Furthermore, the early Cenozoic arc volcanism along the central Andes suddenly shifted more than 500 km inland around 30 million years ago and formed widespread silicic volcanic activity that continues to the present. Consequently, a clear picture of the relationship among these tectonic events is lacking. An important reason is that both the mantle structures beneath South America and their dynamic evolution remain poorly understood. In this proposal, the team plans to carry out a multidisciplinary research project to improve our knowledge of the observational records of surface tectonics, deep mantle seismic properties, and their geodynamic relationship with the temporal evolution of subduction beneath the continent. By collecting more data related to the tectonic history of South America and to the present-day mantle interior, they will build a sophisticated 4-dimensional geodynamic model using data assimilation, in order to quantitatively reproduce the past subduction history. Ultimately, the team hopes to better understand the Cenozoic evolution of the South American continent. The team plans to achieve this goal through a collaborative research effort that involves seismology (led by PI Beck), geology (led by PI DeCelles), and geodynamics (led by PI Liu). They propose to reproduce the Cenozoic subduction history beneath South America and associated continental deformation using geodynamic models constrained by geophysical and geological observations.
This project includes interdisciplinary training of several graduate students at two institutions, and also involves undergraduates at both institutions in the research; all students will participate in a summer field trip to the western U.S. Cordillera as an analog for the South American Cordillera. This multi-disciplinary combination will provide a unique opportunity for the students to understand orogenic systems at plate scale.

Multiple fundamental questions exist about the Cenozoic tectonic history of South America, including the asynchronous crustal shortening and surface uplift/subsidence history of the Andes, and the enigmatic Central Andean flare-up magmatism occurring during the late Cenozoic. None of the earlier proposed geodynamic models for the Cenozoic evolution of South America could simultaneously explain all these tectonic records, and it is unclear whether these different physical processes could co-exist and interact in reality. An important reason for the existence of these alternative models is the uncertain subduction and mantle dynamics due to our imperfect knowledge of deep mantle structures beneath South America. Key observational constraints on the geodynamic and tectonic evolution include an improved present-day mantle seismic structure, especially in the lower mantle; a better-constrained relationship among structural deformation, surface uplift, and magmatic history within western South America; and time-dependent geodynamic models that are consistent with these observational records.

Agency: NSF | Branch: Standard Grant | Program: | Phase: INDUSTRY/UNIV COOP RES CENTERS | Award Amount: 16.26K | Year: 2015

The accepted engineering design methodology requires that mass scale manufacturing of a new product not commence until a prototype of the product is tested and found to meet its performance specifications. It is not unusual for a product to go through multiple design iterations before it can satisfy all the design requirements. Modern electronic products, which range from a single integrated circuit to a smart phone to an aircraft instrumentation system, are so complex and contain so many components - billions in the case of an integrated circuit - that it is infeasible to construct hardware prototypes for each design iteration, from the points of view of both cost and time. Instead, a mathematical representation of the product must be developed, i.e. a virtual prototype, and its behavior then simulated. Each of the components that constitute the product would be represented by a model. Behavioral models of the components are most desirable; a behavioral model represents the terminal response of a component in response to an outside stimulus or signal, without concern to the inner workings of the component. Behavioral models are computationally efficient and have the benefit of obscuring intellectual property. However, despite many years of significant effort by the electronic design automation community, there is not a general, systematic method to generate accurate and comprehensive behavioral models, in part because of the non-linear, complex and multi-port nature of the components being modeled. The proposing team will utilize the planning grant to establish a research center that will overcome these modeling challenges through the development and application of novel machine-learning methods and algorithms.

Machine-learning algorithms are used to extract a model of a component or system from input-output data, despite the presence of uncertainty and noise. In this center, the input-output data are obtained either from measurements of a component or by running detailed simulations of a component. The emphasis is on models that balance good predictive ability against computational complexity. The center will pioneer the application of machine learning to electronics modeling. It will develop a methodology to use prior knowledge, i.e., physical constraints and domain knowledge provided by designers, to speed up the learning process. Novel methods of incorporating component variability, including that due to semiconductor process variations, will be developed.
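The model-extraction step described above can be illustrated in miniature. The sketch below fits a memoryless polynomial behavioral model to synthetic input-output data by least squares; real electronic components would require dynamic, multi-port models, so this is a toy stand-in for the machine-learning methods the center would develop:

```python
import numpy as np

def fit_behavioral_model(u, y, degree=3):
    """Fit a memoryless polynomial behavioral model y ~ f(u) from
    input-output samples by linear least squares.

    The component's inner workings are never used: only its terminal
    input-output behavior is observed, which is the defining property
    of a behavioral model.
    """
    A = np.vander(u, degree + 1)  # features [u^d, ..., u, 1]
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs

# Synthetic "device" with a soft cubic nonlinearity plus measurement noise:
rng = np.random.default_rng(0)
u = np.linspace(-1, 1, 200)
y = 2.0 * u - 0.5 * u**3 + 0.01 * rng.standard_normal(u.size)
c = fit_behavioral_model(u, y)
print(np.round(c, 2))  # close to the true coefficients [-0.5, 0, 2, 0]
```

Incorporating prior physical knowledge, as the center proposes, would amount to constraining the model class or the coefficients rather than fitting freely as done here.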

Agency: NSF | Branch: Standard Grant | Program: | Phase: CHEMICAL & BIOLOGICAL SEPAR | Award Amount: 359.85K | Year: 2015

University of Illinois at Urbana-Champaign

Cell separation is especially necessary to advance personalized cancer medicine, because tumors contain a diverse mix of cells with different biomarkers that makes them difficult to treat. If different tumor cell types can be separated and analyzed, the promise of tailoring drugs to the unique characteristics of a patient's tumor may be realized. The proposed research aims to engineer a fast and gentle cell separation system on a microscale.

The ability to capture cells onto a surface and release them by targeting a secondary moiety within the anchor would enable gentler cell separation. As such, the proposed research aims to combine spiral micro-mixing with cell anchoring and secondary-moiety-targeted cell release to enable cell separation from blood that is also receptor-preserving. The specific research goals are as follows: (1) identify the optimum conditions for detachable cell-anchoring; (2) determine the spiral micro-mixer design parameters enabling detachable cell-anchoring; (3) develop a microfluidic platform that will separate circulating endothelial cells, immune cells, and tumor cells from blood samples while preserving surface receptors on isolated cells. The proposed work may develop innovative technology that advances science while championing a high-impact educational mission.

Agency: NSF | Branch: Standard Grant | Program: | Phase: BIOTECH, BIOCHEM & BIOMASS ENG | Award Amount: 500.00K | Year: 2015

Kilian, Kristopher

Somatic cells were recently shown to revert to a primitive embryonic-like pluripotent state upon forced expression of only four genes (2012 Nobel Prize in Physiology or Medicine). This technique could revolutionize medicine by enabling a patient's own cells to be reverted and modified to correct mutations and regenerate injured tissues; however, the process by which cells reprogram is not well defined or understood, is very inefficient, and takes considerable time. In this CAREER proposal, designer cell culture materials will be used to study how cells revert to pluripotency in order to develop a system that can quickly and efficiently reprogram a patient's cells. The combination of approaches employed is expected to dramatically reduce reprogramming time, increase efficiency, and reduce the need for exogenous factors (e.g., lentivirus), which will prove transformative to commercial and clinical ventures that need to reproducibly generate induced pluripotent stem cells (iPSCs) for regenerative therapies. The emerging view of how materials influence cellular reprogramming that is supported by this research will be integrated into education and outreach activities by establishing a Stem Cell Engineering Training Institute (SCETI). In this institute, laboratory videos will be developed with senior undergraduates and high school educators for use in an annual summer camp for high school girls - Girls Adventures in Mathematics, Engineering and Science (GAMES) - and to augment the curriculum of a senior undergraduate course: Design and Use of Biomaterials.

Most somatic cell reprogramming methods are performed on rigid plastic, which leads to heterogeneity in cellular organization and proliferation. These conditions foster a slow stochastic process with rare reprogramming events initiated by a mesenchymal-to-epithelial transition (MET). The process of MET in vivo is regulated by microenvironments with defined biochemical and biophysical properties. Motivated by natural MET processes that occur during development, and by the architecture of the early embryo where pluripotency is first established, the proposed work aims to control matrix composition, substrate mechanics, and tissue geometry on the cell culture substrate, to more closely recapitulate in vivo architectures, promote MET, and accelerate somatic cell reprogramming. The success of this work will provide mechanistic insight into the de-differentiation process and yield a suite of commercially viable cell culture materials and reagents for somatic cell reprogramming. In addition to reprogramming to iPSCs, other explorations of reprogramming will benefit from the tools and methods developed in this work, such as directed differentiation, trans-differentiation, and intermediate de-differentiation events.

This CAREER Award by the Biotechnology and Biochemical Engineering Program in the Chemical, Bioengineering, Environmental, and Transport Systems Division is co-funded by the Biomaterials Program of the Division of Materials Research.

Agency: NSF | Branch: Standard Grant | Program: | Phase: FLUID DYNAMICS | Award Amount: 174.72K | Year: 2016

PI: McKenna, Gregory B. / Schroeder, Charles / Anderson, Rae
Proposal Number: 1603943/ 1604038 / 1603925

The goal of this proposal is to explore the behavior of polymer molecules that form large rings, instead of the usual linear chains. Such polymers, an example of which is circular DNA, behave differently from linear molecules when processed or when they flow in solution, because the chains have no ends. Results of this work can lead to improved polymer materials, to a detailed understanding of the behavior of bio-molecules, and to new technologies for DNA sequencing.

Circular polymers are fascinating materials that have inspired polymer theorists and experimentalists for decades. The dynamics of circular chains differ fundamentally from those of their linear counterparts due to the absence of chain ends. Despite recent progress, however, the effects of circular topology on polymer dynamics remain a key unresolved problem in the field. In this proposal, the PIs are poised to make major progress in our understanding by preparing circular and linear DNA molecules that are monodisperse and of high topological purity. The assembled team has the expertise to synthesize and characterize circular and linear DNA, and will study the rheological behavior of these materials over a wider range of concentrations and molecular weights than previously achieved. A comprehensive approach is proposed that will include macroscopic and micro-rheology, single molecule polymer dynamics, and DNA synthesis, to provide new information regarding the dynamics of linear and circular DNA. Beyond providing a point of departure for understanding their circular counterparts, the parallel study of linear entangled DNA will provide unprecedented data using perfectly monodisperse DNA samples to directly test predictions from reptation theory, such as the cross-over to reptative behavior at extremely high entanglement densities. In addition to graduate student participation, educational activities are proposed in all three collaborating institutions, ranging from underrepresented minority student involvement at Texas Tech, to high school teacher engagement at Illinois, and undergraduate student participation at the University of San Diego, a primarily undergraduate institution.
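For reference, the reptation-theory predictions that perfectly monodisperse linear DNA could test directly are the standard tube-model scalings (textbook results from de Gennes and Doi-Edwards, not findings of this project), for an entangled linear chain of $N$ monomers with entanglement spacing $N_e$:

$$
\tau_{\mathrm{rep}} \sim \tau_e \left(\frac{N}{N_e}\right)^{3},
\qquad
D \sim N^{-2},
\qquad
\eta_0 \sim N^{3},
$$

where $\tau_{\mathrm{rep}}$ is the longest relaxation time, $D$ the center-of-mass diffusivity, and $\eta_0$ the zero-shear viscosity. Experiments on polydisperse melts typically find $\eta_0 \sim N^{3.4}$, and probing whether monodisperse DNA recovers the pure reptation exponent at very high entanglement densities is exactly the kind of cross-over question the abstract mentions. No comparable closed-form scaling is settled for rings, which is part of what makes the circular case an open problem.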

Agency: NSF | Branch: Standard Grant | Program: | Phase: CYBER-PHYSICAL SYSTEMS (CPS) | Award Amount: 500.00K | Year: 2016

Since 2000, surgical robots have been used in over 1.75 million minimally invasive procedures in the U.S. across various surgical specialties, including gynecological, urological, general, cardiothoracic, and head and neck surgery. Robotic surgical procedures promise decreased complication rates and morbidity, due to the minimally invasive nature of the procedures. A detailed analysis of the adverse events associated with surgical robots (also reported to the FDA) indicates that, despite the growing number and utilization of robotic procedures, the rate of adverse events has remained relatively steady over the last 14 years. Even though current surgical robots are designed with safety mechanisms in mind, in practice several significant challenges exist in enabling timely and accurate detection and mitigation of adverse incidents during surgery. To address these challenges, the project will pursue (i) an in-depth analysis of incident causes, which takes into account the interactions among the system components, human operators, and patients; (ii) resiliency assessment of the robotic systems in the presence of realistic safety hazards, reliability failures, and malicious tampering; and (iii) continuous monitoring for detection of safety, reliability, and security violations to ensure patient safety.

The intellectual merit of this work lies in: (i) a systems-theoretic approach, driven by real data on safety hazards and medical equipment recalls, to identify causes leading to violation of safety constraints at different layers of the cyber and physical system-control structure; (ii) creation of a unique safety hazard simulation engine to perform injections into robot control software and emulate realistic safety hazard scenarios in a virtual environment; (iii) an adaptive method for rapid detection of events that lead to safety violations, based on continuous monitoring of human operator actions, robot state, and patient status, in conjunction with a probabilistic graph model that captures dependencies between the causal factors leading to safety hazards; and (iv) experimental validation using the real robot to assess monitoring and protection mechanisms in the presence of realistic safety hazards, reliability faults, and security exploits (recreated using the safety hazard simulation engine). The broader impact of the project is a methodology for design and resiliency assessment of a larger class of control cyber-physical systems that involve humans in the on-line decision-making loop. Application of the methodology to robot-assisted surgery demonstrates the strength and practicality of the approach and is likely to attract interest from areas of academia and industry in which cyber-physical systems are either a subject of study or the basis for delivering a service (e.g., transportation or electric power grids). This project's educational outreach encompasses strategies for broadening participation in multi-disciplinary projects spanning medicine and engineering.

Agency: NSF | Branch: Continuing grant | Program: | Phase: ANTARCTIC ORGANISMS & ECOSYST | Award Amount: 362.70K | Year: 2014

This work will broaden our knowledge and insights into genetic trait loss or change accompanying species evolution in general, as well as within the uniquely isolated and frigid Southern Ocean. The system of oxygen-carrying and related proteins being studied is very important to human health, and the two proteins being specifically studied in this work (haptoglobin and hemopexin) have crucial roles in preventing excess iron loading in the kidneys. As such, the project has the potential to contribute novel insights that could be valuable to medical science. The project will also further the NSF goals of training new generations of scientists and of making scientific discoveries available to the general public. The lead principal investigator on the project is an early career scientist whose career development will be enhanced by this project. It will also support the training of several undergraduate students in molecular biology, protein biochemistry, and appreciation of the unique Antarctic fish fauna and environment. The project will contribute to a content-rich web site that will bring to the public the history of biological discoveries and sciences on fishes of the Southern Ocean, and through this project the investigators will contribute to an annual polar event at a children's science museum.

The Antarctic icefishes have thrived despite the striking evolutionary loss of the normally indispensable respiratory protein hemoglobin in all species and myoglobin in some. Studies over the past decades have predominantly focused on the mechanisms behind hemoprotein losses and the resulting compensatory adaptations in these fish, while the evolutionary impact of such losses on the supporting protein genes and functions has remained unaddressed. This project investigates the evolutionary fate of two important partner proteins, the hemoglobin scavenger haptoglobin and the heme scavenger hemopexin (heme groups are the iron-containing functional group of proteins such as hemoglobin and myoglobin). With the permanent hemoglobin-null state in Antarctic icefishes, and particularly in dual hemoglobin- and myoglobin-null species, the preservation of a functional haptoglobin would seem unessential and the role of hemopexin likely diminished. This project seeks to resolve whether co-evolutionary loss or reduction of these supporting proteins occurred with the extinction of the hemoglobin trait in the icefishes, and the molecular mechanisms underlying such changes. The investigators envisage the cold and oxygen rich marine environment as the start of a cascade of relaxation of selection pressures. Initially this would have obviated the need for maintaining functional oxygen carrying proteins, ultimately leading to their permanent loss. These events in turn would have relaxed the maintenance of the network of supporting systems, leading to additional trait loss or change.

Agency: NSF | Branch: Continuing grant | Program: | Phase: FOUNDATIONS | Award Amount: 86.74K | Year: 2017

At the heart of logic and model theory lies the observation that within mathematics there are certain objects that have to be considered tame, and others that have to be considered wild. Many celebrated results in logic from the first half of the 20th century concerned the existence of objects considered wild. Gödel's proof of the undecidability of arithmetic established that this structure is complicated (or wild) from a logical viewpoint. Such results are negative in spirit, as they point to the limitations of mathematical reasoning. However, in the second half of the last century the focus changed. Model theorists found a vast number of tame mathematical structures that exhibit no such wildness and, for often very different reasons, are amenable to model-theoretic methods. This program of identifying and analyzing tame classes of structures whose model theory can be understood came to be known as the geography of tame mathematics, and in its various forms has dominated model theory throughout the last thirty years. Although arising as a program of foundational importance, it has led to striking applications of model theory to other areas of mathematics, most recently to the André-Oort conjecture in number theory. Since this program explores areas of tame mathematics, these connections are no coincidence. The development of such interactions has proven again and again to have far-reaching applications outside of logic that could not have been envisioned beforehand.

This project continues this line of research in the setting of expansions of the real line. It aims to settle important open questions within model theory, but also naturally develops new connections between model theory and other areas in logic such as neostability and descriptive set theory, and outside of logic such as geometric measure theory and geometric group theory. The investigator will lead a large scale investigation of dividing lines between tame and wild behavior arising in the study of the geometry of definable sets in ordered structures. Building on early advances, the investigator will determine far-reaching consequences of various logical tameness conditions on the topological and metric tameness of definable sets. Furthermore, the project comprises new challenges in the classification of classes of structures considered tame. The educational component of this CAREER grant ties together the investigator's research with his teaching and outreach efforts. This project involves undergraduate and graduate students and young researchers in the investigator's research program, strengthens the ties between model theory and other branches of mathematics, and continues the investigator's outreach efforts in the Urbana-Champaign area.

Agency: NSF | Branch: Standard Grant | Program: | Phase: National Robotics Initiative | Award Amount: 1.50M | Year: 2014

Bat flight, perhaps the most advanced and efficient form of animal flight, has long been a source of inspiration for roboticists and biologists alike. This National Robotics Initiative (NRI) collaborative research award supports research aimed at understanding and reproducing the unparalleled agility and resilience of bat flight. Biological studies of bats (their structure, muscle movement, and flight dynamics) will drive the engineering development of mathematical models of robotic flight and the eventual design and implementation of a prototype 30–80 cm bat-like robot. The physical flight capabilities of the robot will be augmented with perception and reasoning abilities, with the aim of providing support for construction site activities such as site monitoring, inspection, and general surveillance of the work site to provide image data to enhance situational awareness of human workers. The research involves several disciplines, including biology, aerodynamics, robotics, control systems engineering, and construction engineering.

Aerial robots have nowhere near the agility and efficiency of animal flight, especially in complex, constrained environments. This is not surprising since even the simplest winged robots have complex flight dynamics that pose significant challenges for modeling, design, and control. In the case of bat-inspired robots, these difficulties are exacerbated by the use of under-actuated mechanisms driving wings constructed from flexible membranes. This project will combine biological and engineering research to address these problems. Biological research on the kinematics of bats and their flight will provide a basis for mechanical designs. To control the robot, agile motion planning and flight control algorithms will employ motion primitives that are derived from biological investigation of the dynamics of bat flight. Conversely, models obtained from biological studies will be validated by experimental investigations using the prototype robot, enabling iterative refinement of reduced-order models and control algorithms. Ultimately, the robots will be equipped with sensing systems and planning algorithms, to facilitate localization, mapping, inspection and surveillance at construction sites.

Agency: NSF | Branch: Standard Grant | Program: | Phase: PETASCALE - TRACK 1 | Award Amount: 15.00K | Year: 2015

This project will use state-of-the-art computational resources, namely Blue Waters, and cutting-edge modeling software in advancing the study of climate change and its potential impacts on our planet over this century. It is expected that results from our studies will be an integral part of the scientific analysis of climate change for the next major international climate assessment of the Intergovernmental Panel on Climate Change (IPCC) and the next U.S. National Climate Assessment. These assessments are an important input to the national and international policy development process. These analyses will be a significant contribution to the coordinated international special computational studies and model intercomparison to analyze past and projected future changes in the Earth's climate system. In addition, the results of these studies will be fully available to the scientific community for further analyses and resulting insights into the processes, mechanisms, and consequences of climate variability and climate change. On top of this, these studies will be a major input to studies of the potential impacts of climate change on human society across many different sectors (e.g., health, food, water, energy, transportation) and on ecosystems. No previous global modeling study has provided such high resolution information at the regional scales over the time periods needed to fully analyze these issues. The simulations and analyses are designed to use the Blue Waters petascale computing resources and could not be completed without a computational facility like Blue Waters.

This project has two purposes. The first is aimed at better quantifying future regional climate change by running a high-resolution (0.25 degree atmosphere, 1 degree ocean) global coupled climate model, namely an advanced version of the Community Earth System Model (CESM), in the framework of CMIP6 (Coupled Model Intercomparison Project, phase 6) to meet the needs for the next generation assessments of climate change. CMIP6 will be the next phase in the international coordination of special computational studies to analyze the past and projected future changes in the Earth's climate system. CESM, one of the world's best models of the Earth's climate system, has been developed by the National Center for Atmospheric Research (NCAR) in coordination with a community of scientists at universities and national laboratories. The second purpose is to conduct specific studies that build upon our existing NSF PRAC project to further clarify the understanding of the effects of small-scale regional features and interactions across spatial scales in climate through even higher-resolution CESM modeling studies on Blue Waters that will push the state-of-the-art for such analyses. For this project, we have assembled a team from the University of Illinois and the National Center for Atmospheric Research comprising both highly recognized experts in global climate modeling and analysis and experts in computer science and information technology. This team already has extensive experience with Blue Waters (as well as with other high performance computers) based on the existing PRAC project.

Agency: NSF | Branch: Continuing grant | Program: | Phase: ANALYSIS PROGRAM | Award Amount: 351.60K | Year: 2014

The PI will continue his study of analysis and geometry as it relates to complex numbers. Complex numbers and functions of complex variables have become very useful tools in the study of other areas of pure mathematics and in the application of mathematics. For instance, the study of airflow over a wing uses complex numbers and complex functions to model this flow. The subject continues to be a vital and important topic in the mathematical sciences. Realizing this, the PI has recently begun teaching what is called Advanced Engineering Mathematics. Teaching this material shows in a precise sense how the proposed research develops higher dimensional analogues of fundamental mathematics currently used throughout physics and engineering. In addition to the mathematical research, the PI will also work with the Engineering College at the University of Illinois at Urbana-Champaign on fine-tuning the curriculum in Advanced Engineering Mathematics.

Hermitian analysis has its roots in 19th century mathematics. The subject began two centuries ago with the work of Fourier on heat diffusion. It has developed through the theory of Hilbert spaces, through its applications to quantum mechanics, and via its role in signal processing. Modern Hermitian analysis both relies upon and informs complex analysis and CR geometry, the PI's primary research areas. The modern point of view emphasizes the tools of orthonormal expansion and orthogonal projection. The PI has introduced an algebraic technique called orthogonal homogenization which connects these ideas to the geometry of proper mappings between balls. The PI has reformulated one of his older results, on volumes of proper images of balls, as a variational problem, thereby extending the results to considerably more general situations. The PI's work on proper mappings between balls in different dimensional complex spaces has also uncovered unexpected connections to representation theory and algebraic combinatorics. The primary purposes of this research are to extend the ideas of orthogonal homogenization to the rational case, to establish additional variational inequalities, to study homotopy for proper holomorphic mappings of positive codimension, and to further develop the subject of CR complexity. In the process the PI will continue to mentor young mathematicians in these topics and to organize and attend conferences.
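To make the orthonormal-expansion and orthogonal-projection viewpoint concrete, the classical Fourier-series example reads (standard background material, not a result of this project):

```latex
f(x) = \sum_{n=-\infty}^{\infty} \hat f(n)\, e^{inx},
\qquad
\hat f(n) = \frac{1}{2\pi} \int_0^{2\pi} f(x)\, e^{-inx}\, dx,
```

and the partial sum $S_N f = \sum_{|n| \le N} \hat f(n) e^{inx}$ is precisely the orthogonal projection of $f$ in $L^2$ onto the span of $\{e^{inx} : |n| \le N\}$.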

Agency: NSF | Branch: Continuing grant | Program: | Phase: Chemical Catalysis | Award Amount: 666.00K | Year: 2014

In this project funded by the Chemical Catalysis program of the Chemistry Division, Prof. Alison Fout of the University of Illinois at Urbana-Champaign is developing new earth-abundant cobalt catalysts capable of mediating various types of bond formation chemistry. This project targets sustainable, biocompatible earth-abundant catalysts to generate products currently made using the more expensive and toxic noble metals, such as palladium, platinum, and iridium. Simple replacement of the noble metal catalysts with their earth-abundant counterparts is not straightforward, so methods are being developed to change the chemical environment about the metal center in order to promote the desired catalysis. The broader impacts of the research involve developing catalysts for important fine chemical synthesis based upon earth-abundant metals, training research scientists in a technologically important area of research, and building foundational activities for primary school students to gain exposure to science experiments.

Due to the difficult nature of preventing single-electron transfer or radical chemistry on first-row transition metals, an oxidatively-robust ligand system is being developed to stabilize low-valent late first-row transition metal complexes. This study features a fundamentally important group of transition metal complexes for mediating organic catalysis, with activities ranging from chemical synthesis and spectroscopic analysis to catalysis, providing an excellent vehicle to train future scientists from various educational backgrounds. An outreach program developed by Professor Fout is building the foundation for economically disadvantaged elementary and junior high students to gain exposure to science experiments and to assess how interest in science changes as a function of age. The broader impacts of this work include the benefits of using sustainable, earth-abundant catalysts for the synthesis of important organic complexes.

Agency: NSF | Branch: Standard Grant | Program: | Phase: CDS&E | Award Amount: 350.00K | Year: 2014

Inspired by the circulatory systems present in a wide range of living organisms, microvascular composites are a new class of fiber-reinforced polymers that possess a network of embedded microchannels. This work focuses on circulating a coolant through these microchannels to allow for the use of composite components in high temperature conditions, well beyond their traditional use. The design of these materials naturally leads to a multidisciplinary optimization problem: from a thermal point of view, more efficient active cooling can be achieved by increasing the number of microchannels through which the coolant flows. But from a structural point of view, every channel represents a small void that may lead to stress concentration and negatively affect the stiffness and strength of the composite. Although the multidisciplinary computational design method to be developed as part of this project focuses on high temperature applications, microvascular composites are being considered for a wide range of multidisciplinary applications, including new electrical, electromagnetic or sensing properties. The combined computational and experimental research project will provide a unique multidisciplinary training experience for two graduate students and one summer undergraduate research assistant.

The hierarchical computational design method to be developed in this project will lead to important advances in the efficient and accurate modeling of heterogeneous materials, and in the formulation of a robust gradient-based shape optimization approach that avoids the mesh-distortion issues often associated with conventional finite element methods. At the heart of the modeling effort is an isogeometric interface-enriched generalized finite element method (IIGFEM) that allows for the thermal and structural modeling of microvascular composites with finite element discretizations that do not conform to the embedded microchannel network and the composite microstructure. The IIGFEM is then combined with a hierarchical, gradient-based shape optimization scheme to compute the optimal shape of the microchannel network based on a set of objective functions and constraints associated with the thermal performance and flow efficiency of the embedded network, and its impact on the structural properties of the microvascular composite. Building on state-of-the-art computational and experimental tools, this top-down design method, which relies on a hierarchy of thermo-mechanical models and length scales, offers an efficient approach to tackle the size and complexity of a design process characterized by multiple, conflicting, multi-disciplinary objective functions and constraints.

Agency: NSF | Branch: Standard Grant | Program: | Phase: ALGEBRA,NUMBER THEORY,AND COM | Award Amount: 181.00K | Year: 2014

The proposal uses insights gained from models in mathematical physics to understand algebraic structures found in pure mathematics. The models come from a special class called integrable models. This means that, due to their high degree of symmetry, they have a sufficient number of conservation laws -- generalizing conservation of energy -- that they can be solved exactly. Such models can be found in statistical mechanics (discrete, possibly finite systems) and in quantum field theory (continuous, infinite-dimensional systems). There are many structures in mathematics that can be studied using techniques and results from the solutions of such systems. They appear in combinatorics, number theory, representations of non-commutative algebras, and geometry, to name a few. The results of this project will advance understanding in all these areas.

Frequently, integrable systems such as quantum spin chains in statistical mechanics, and conformal field theories and their massive deformations, can be described in representation-theoretic terms. For example, the transfer matrix of the Heisenberg spin chains can be given a meaning as a q-character of a finite-dimensional module of quantum affine algebras. The Hilbert space of integrable quantum field theories can be expressed as infinite-dimensional modules of extensions of the (deformed) Virasoro algebra. Transfer matrices, and the characters of the Virasoro modules, satisfy difference equations that can be shown to be discrete integrable equations. The transfer matrices satisfy a discrete Hirota-type equation which has an interpretation in terms of cluster algebra mutations. The following projects are proposed here: (1) The study of the difference equations satisfied by the non-commutative generating functions of graded conformal blocks of WZW theories (generalizations of Demazure modules). 
These generating functions are expressed in terms of the cluster variables in the quantum cluster algebra corresponding to Q-systems for characters of Kirillov-Reshetikhin modules; (2) The explicit solutions of these equations as fermionic character formulas; (3) The integrable structure of the difference equations; (4) The stabilized limits of these functions, which give characters of affine algebra modules or Virasoro modules; (5) Solutions of discrete integrable equations known as T-systems and quantum T-systems (in the sense of quantum cluster algebras) and their relation to Nakajima's t,q-characters; (6) Higher-dimensional integrable difference equations, generalizing the T-systems viewed as Plücker relations, expressing the discrete structure of the higher-dimensional analogs of the pentagram maps in algebraic terms; and (7) Statistical path models which give explicit solutions for Whittaker vectors, functions, and quantum Toda Hamiltonians for finite, affine and quantum algebras.
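For orientation, the simplest instance of the Q-systems mentioned in item (1) is the A_1 case, Q_{n+1} Q_{n-1} = Q_n^2 + 1, each step of which is a cluster-algebra exchange relation (mutation); the Laurent phenomenon guarantees integer values from integer seeds. A minimal numerical sketch of this well-known recursion (illustrative only, not code from the proposal):

```python
# Toy illustration of the A_1 Q-system: Q_{n+1} Q_{n-1} = Q_n^2 + 1,
# seeded with Q_0 = Q_1 = 1. Each step is a cluster mutation; the
# Laurent phenomenon guarantees every Q_n is a positive integer
# (here they turn out to be the odd-index Fibonacci numbers).
from fractions import Fraction

def a1_q_system(terms: int) -> list[int]:
    q = [Fraction(1), Fraction(1)]       # seeds Q_0, Q_1
    while len(q) < terms:
        q.append((q[-1] ** 2 + 1) / q[-2])   # mutation step
    # integrality is a consequence of the Laurent phenomenon
    assert all(x.denominator == 1 for x in q)
    return [int(x) for x in q]

print(a1_q_system(8))  # [1, 1, 2, 5, 13, 34, 89, 233]
```

With these seeds the recursion generates 1, 1, 2, 5, 13, 34, ..., a standard sanity check on the cluster-algebra integrality claim.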

Agency: NSF | Branch: Continuing grant | Program: | Phase: Theory, Models, Comput. Method | Award Amount: 550.00K | Year: 2014

Nancy Makri of the University of Illinois Urbana-Champaign is supported by an award from the Chemical Theory, Models and Computational Methods program in the Chemistry Division and the Computational and Data-Enabled Science and Engineering Program (CDS&E) to develop theoretical and computational methods to describe electron and proton transfer processes in molecular systems. Electron and proton transfer play a vital role in many chemical and biological reactions. These small particles are subject to the laws of quantum mechanics, and thus standard simulations based on classical mechanics cannot correctly predict their behavior. Unfortunately, the exact solution of the quantum mechanical equations of motion for processes in liquid or biological environments requires enormous computing power that does not exist. Makri is developing accurate algorithms for simulating these processes, treating the electron or proton at the full quantum mechanical level while capturing the dynamics of the rest of the system via inexpensive classical trajectories. These algorithms are being implemented in computer code which will be freely available to the research community.

The central theme of this work is the further development and application of the quantum-classical path integral (QCPI) methodology. The spatially localized, trajectory-like nature of quantum paths makes the path integral framework ideal for quantum-classical dynamics. The proposed methodology exploits the memory-free nature of system-independent solvent trajectories to account for all classical decoherence effects on the dynamics of the quantum system. Inclusion of the residual, less prominent quantum decoherence is accomplished by evaluating the full path integral, which is amenable to large time steps and iterative decompositions. Application to electron and proton transfer in solution and in biological molecules will shed light on important mechanisms with unprecedented accuracy. The PI's core computer code, QCPI, is being interfaced with the powerful molecular dynamics packages NAMD and LAMMPS. The goal is to produce a computer package, QCPI-MD, suitable for the simulation of proton, charge and energy transfer processes with unprecedented accuracy.

Agency: NSF | Branch: Standard Grant | Program: | Phase: MATERIALS PROCESSING AND MANFG | Award Amount: 398.41K | Year: 2013

The research objective of this Grant Opportunity for Academic Liaison with Industry (GOALI) award is to investigate the mechanics of stress, defects, and failure in thin silicon photovoltaic wafers using a combination of infrared photoelasticity and polarized photoluminescence inspection techniques. Silicon wafers are the most common base material in solar cells, but due to stress and defects that arise during the crystal growth process, material losses during manufacturing remain a significant problem. Under support from this award, the relationship between stress and defects in photovoltaic wafers will be studied directly using lock-in infrared photoelasticity. Also, a new class of inspection techniques based on a lock-in polarized photoluminescence method, in which both intensity and polarization of emitted light is measured, will be developed and used to study the interaction of stress and defects with light absorbed or emitted by the material. Finally, the relationship between stress and failure in these materials will be investigated using a series of imaging and fracture experiments.

The award will support a collaborative effort between university and industry researchers. The research could have significant impact in the solar energy industry, as it will help to explain the role of stress and defects in photovoltaic wafer failure, and could enable a pathway to rapid inspection and sorting of silicon photovoltaic wafers based on the probability of mechanical failure. The results will also add to the fundamental understanding of the interaction of light with defects in crystalline materials. In addition to the research objectives, the award will support graduate and undergraduate education and training. The academic/industry partnership will be highlighted by the annual summer placement of a graduate research assistant in the industry facility to learn characterization and manufacturing methods. Undergraduate students will be involved through an engineering outreach event at the University of Illinois, and through research in the laboratory of the Principal Investigator.

Agency: NSF | Branch: Standard Grant | Program: | Phase: CIVIL INFRASTRUCTURE SYSTEMS | Award Amount: 400.00K | Year: 2014

The objective of this Faculty Early Career Development (CAREER) program award is to investigate the dynamics of complex traffic. Complex traffic is characterized by heterogeneous vehicle types (e.g. bikes and cars) that vary in size and performance characteristics but share the same infrastructure, and is often controlled by humans. These features are increasingly common in the US during extreme congestion generated by special events, and are pervasive in emerging economies worldwide. This research postulates that advances in mathematical models, informed by and validated with large volumes of traffic data, are key elements to unlock the full understanding of complex traffic. This research focuses on (i) the development of mathematical models of heterogeneous traffic, (ii) modeling and analysis of human-directed traffic and (iii) the development of fast and accurate estimation algorithms to integrate data into city-scale models. Data to validate the models and estimation algorithms are obtained through a newly developed traffic sensing technology.
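As an example of the kind of macroscopic model such work typically builds on, the classical Lighthill–Whitham–Richards conservation law is (standard background; the project's own heterogeneous-traffic models are more general):

```latex
\partial_t \rho + \partial_x \big( \rho\, v(\rho) \big) = 0,
```

where $\rho(x, t)$ is vehicle density and $v(\rho)$ a density-dependent speed. Heterogeneous traffic is commonly modeled by extending this to several vehicle classes $\rho_i$ with coupled, class-specific velocity functions.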

If successful, this work will support the development of next generation traffic monitoring and management systems. Ultimately, this will help reduce the multibillion-dollar annual cost of congestion during special events in the US. Educational and outreach activities are executed to prepare students with the computing competencies needed to engineer the next generation of computer-enhanced infrastructure. This is achieved through new educational initiatives for undergraduate and graduate students that emphasize programming and computational skills applied to problems in civil engineering. Reproducible computational research initiatives within the transportation community will help maximize the potential impact of the research and increase the likelihood of adoption by practitioners. Engagement of the broader community on applications of computing in transportation is achieved through outreach and open courseware activities.

Agency: NSF | Branch: Standard Grant | Program: | Phase: | Award Amount: 2.00M | Year: 2014

This project is using transformational learning theory and an Immunity-to-Change model as justification for forming Communities of Practice (CoPs) to change the teaching culture in gateway STEM courses at the University of Illinois. The Immunity-to-Change model focuses on the discomfort caused by the many teaching reforms that take the instructor out of the role of routinely providing expertise to the students as the dominant form of interaction with them. When these same instructors are conducting or presenting their research, it is important for them to present themselves in ways that demonstrate their expertise. This dissonance is a barrier to changing the teaching practices of research-active faculty in many instances, because the teaching practices require them to take on the role of facilitator rather than expert.

Hence, this project is based on the core idea that teaching in gateway courses should be jointly owned and created by the faculty, rather than being the sole province of individual, independent instructors. This process is being initiated through the formation of Communities of Practice (CoPs) around each undergraduate STEM discipline in ten departments located in two colleges: the College of Liberal Arts and Sciences and the College of Engineering. Through the CoP process, the teaching culture is changing, as faculty are adopting evidence-based reforms and changing their instruction in gateway courses. These gateway courses in ten departments in two colleges enroll over 17,000 students annually, and several of the gateway courses are required for nearly all STEM majors on campus. The Community of Practice approach is operating both within each department and also at the aggregate level across all ten departments. Each CoP is collaboratively exploring a domain of knowledge to support the development of improved practice by connecting faculty who need to adapt their teaching to evidence-based pedagogies with faculty whose beliefs already support those pedagogies.

CoPs are providing an organizational structure that promotes long-term situated learning that is exposing and challenging instructors' tacit beliefs that impede change. CoPs also depend on high levels of collaboration for success and effectively spread tacit knowledge, which decreases the learning curve for novices, reduces creation of redundant resources or reenactments of failures, and promotes creativity. The emphasis on CoPs will further engender common ownership of the reforms, countering the current individualistic teaching culture, thereby institutionalizing the reforms so that they are used in the gateway courses as new faculty are assigned to teach them.

CoPs are engaging in an innovate-and-evaluate development cycle, facilitated by a large evaluation team and an instructional support team. The evaluation team is providing both formative and summative feedback to CoPs using both qualitative (e.g., student attitudes) and quantitative (e.g., performance outcomes) measures that are in turn being used to improve teaching and learning. In addition, the evaluation team is also studying the functioning of the CoPs, including the extent to which teams are operating as CoPs, and providing detailed descriptions of features of effective and less effective CoPs. The instructional support team is providing just-in-time training to the CoPs by attending their weekly meetings and organizing monthly gatherings for all CoPs. The remaining PIs are working to maintain the top-down administrative support from deans and department heads to sustain the bottom-up reform efforts of the CoPs.

Agency: NSF | Branch: Continuing grant | Program: | Phase: SPECIAL PROJECTS - CISE | Award Amount: 500.00K | Year: 2015

In the United States, there is still a great disparity in medical care, most profoundly for emergency care, where limited facilities and remote location play a central role. Based on the Wessels Living History Farm report, the doctor-to-patient ratio in the United States is 30 per 10,000 in large metropolitan areas but only 5 per 10,000 in most rural areas, and the highest death rates are often found in the most rural counties. For emergency patient care, time to definitive treatment is critical. However, deciding the most effective care for an acute patient requires knowledge and experience. Though medical best practice guidelines exist and appear in hospital handbooks, they are often lengthy and difficult to apply clinically. The challenges are exaggerated for doctors in rural areas and emergency medical technicians (EMTs) during patient transport.

This project's solution to transform emergency care at rural hospitals is to use innovative CPS technologies to help hospitals improve their adherence to medical best practice. The key to assisting medical staff with different levels of experience and skills to adhere to medical best practice is to transform required processes described in medical texts into an executable, adaptive, and distributed medical best practice guidance (EMBG) system. Compared to the computerized sepsis best practice protocol, the EMBG system faces a much bigger challenge, as it has to adapt the best practice across rural hospitals, ambulances, and center hospitals with different levels of staff expertise and equipment capabilities. To use a Global Positioning System (GPS) analogy: just as a GPS leads drivers with different route familiarity to their destination through an optimal route based on the driver's preferences, the EMBG system leads medical personnel to follow the best medical guideline path to provide emergency care and minimize the time to definitive treatment for acute patients. The project makes the following contributions: 1) the codification of complex medical knowledge is an important advancement in knowledge capture and representation; 2) pathophysiological-model-driven communication in high-speed ambulances advances life-critical communication technology; and 3) reduced-complexity software architectures designed for formal verification bridge the gap between formal methods research and system engineering.
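One way to picture the adaptation the EMBG system must perform across care settings is to tag each guideline step with the minimum site capability it requires. The sketch below is a hypothetical illustration only; the step names, capability levels, and data layout are invented and are not drawn from the project.

```python
# Hypothetical sketch: a best-practice guideline encoded as ordered
# steps, each tagged with the least-capable site that can perform it.
# Steps and capability tiers are invented for illustration.

GUIDELINE = [
    ("assess airway",                         "ambulance"),
    ("obtain blood culture",                  "rural_hospital"),
    ("administer broad-spectrum antibiotics", "rural_hospital"),
    ("advanced imaging",                      "center_hospital"),
]

CAPABILITY_RANK = {"ambulance": 0, "rural_hospital": 1, "center_hospital": 2}

def executable_steps(site):
    """Return the guideline steps this site can perform now; the rest
    are deferred to a more capable site, mirroring how the guidance
    must adapt across ambulances, rural hospitals, and center hospitals."""
    rank = CAPABILITY_RANK[site]
    return [step for step, need in GUIDELINE
            if CAPABILITY_RANK[need] <= rank]
```

A real guidance system would of course also track patient state, timing constraints, and handoffs between sites, not just static capability tiers.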

Agency: NSF | Branch: Standard Grant | Program: | Phase: Secure &Trustworthy Cyberspace | Award Amount: 225.00K | Year: 2014

This project studies the security of representative personalized services, such as search engines, news aggregators, and on-line targeted advertising, and identifies vulnerabilities in service components that can be exploited by pollution attacks to deliver contents intended by attackers.

This project also develops defense-in-depth countermeasures against pollution attacks. These include new server-side mechanisms to prevent the various cross-site request forgery (CSRF) schemes that allow an attacker to insert actions. The defense mechanisms also include a distributed data collection, measurement and analysis framework to detect anomalies in browsing behaviors and information contents that are indicative of pollution of user profiles or population preferences. The new information analysis techniques use machine learning and natural language processing to identify differences (e.g., missing information) that are significant or important to a user. The project also develops tools to alert users and guide them to understand and repair profiles, and studies regulatory models to incentivize the industry to adopt a more transparent practice.
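As a concrete illustration of the class of server-side mechanisms mentioned above, the sketch below shows a generic anti-CSRF token check. This is textbook practice, not the project's implementation; the function names are invented.

```python
# Generic anti-CSRF sketch: a token bound to the user's session is
# embedded in legitimate forms, and any profile-changing request is
# rejected unless it carries a matching token. Illustrative only.

import hashlib
import hmac
import secrets

SECRET = secrets.token_bytes(32)   # per-deployment server-side key

def issue_token(session_id: str) -> str:
    """Derive a token bound to the session, for embedding in forms."""
    return hmac.new(SECRET, session_id.encode(), hashlib.sha256).hexdigest()

def accept_action(session_id: str, submitted_token: str) -> bool:
    """Reject requests (e.g. forged submissions that would pollute a
    user profile) unless the token matches; comparison is constant-time."""
    return hmac.compare_digest(issue_token(session_id), submitted_token)
```

A forged cross-site request cannot read the page containing the token, so it fails this check even though it arrives with the victim's cookies.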

This project develops an evaluation framework to facilitate the development and adoption of technologies. The evaluation plan includes user studies involving real, diverse user groups on the Internet.

To transition technologies to practice, this project makes the tools freely available, and deploys data collection and measurement systems on the Internet. This project also educates users about pollution attacks and engages with users to improve the usability of the tools.

Agency: NSF | Branch: Standard Grant | Program: | Phase: Dynamics, Control and System D | Award Amount: 346.55K | Year: 2015

Human Activity Recognition is the process of inferring human activity from motion sensors such as accelerometers and gyroscopes worn on the human body. These motion sensors are embedded in physical activity tracking devices and smartwatches. Accurate inference of human activity offers many benefits for health monitoring and for promoting overall wellness of an individual. There are many machine learning-based approaches to human activity recognition from sensor data. However, in practice, the current approaches often suffer from poor accuracy on account of uncertainty due to the wide range of human body types, sensor locations on the body, and real-time sources of uncertainty such as changes in activity speed and intensity, sensor noise, packet drops, sensor failure, etc. This award supports fundamental research to provide needed knowledge for the development of robust activity recognition methodology and algorithms. It is projected that there will be a trillion embedded sensors in connected people and devices by 2020. This project's algorithmic and software tools can potentially be directly applied to fitness monitoring, eldercare support, long-term preventive and chronic care, and rehabilitation. Therefore, results from this research will benefit the US economy and society. To promote transitions, several educational initiatives are planned that seek to engage undergraduate students in entrepreneurship.

A major objective of the research concerns development of methods and algorithms to mitigate uncertainty in machine learning problems, such as the activity recognition problem, involving dynamic data sets. A control-theoretic framework is planned to not only address the robustness issues due to uncertainty, but also enable certain unified architectures for learning patterns from sensor data. If successful, the work can lead to novel algorithmic approaches to represent, learn and recognize hidden low-order patterns in unstructured dynamic data sets. Besides methodological developments, this project will produce more tangible outcomes, such as the development of algorithms and software for the feedback particle filter, the enunciation of control architectures and algorithms for representation of complex patterns in data, and the development of software tools for the human activity recognition system.
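For context, even a toy activity classifier makes the robustness gap visible. The sketch below labels a window of accelerometer magnitudes by its variance, with invented thresholds; fixed thresholds like these break down across body types, sensor placements, and activity intensities, which is exactly the uncertainty this project's control-theoretic framework targets.

```python
# Toy baseline for illustration only: classify a window of accelerometer
# magnitude samples (in g) by its variance. Thresholds are invented and
# deliberately brittle; this is not the project's method.

def classify_window(magnitudes):
    """Label one window of accelerometer magnitude samples."""
    n = len(magnitudes)
    mean = sum(magnitudes) / n
    var = sum((m - mean) ** 2 for m in magnitudes) / n
    if var < 0.01:          # nearly constant signal
        return "resting"
    elif var < 0.5:         # moderate motion energy
        return "walking"
    return "running"        # high motion energy
```

A shifted sensor location or a slower walker changes the variance the sensor sees, silently moving samples across these thresholds, hence the need for the robust methods described above.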

Agency: NSF | Branch: Continuing grant | Program: | Phase: Chemical Synthesis | Award Amount: 390.00K | Year: 2014

With this award from the Chemical Synthesis (SYN) Program of the Chemistry Division of NSF, Professor Gregory S. Girolami of the University of Illinois at Urbana-Champaign will develop new and better molecular precursors for the chemical vapor deposition (CVD) and atomic layer deposition (ALD) of thin films. The current work will develop new classes of transition metal compounds, especially as they relate to the use of such molecules as precursors for the deposition of metal diborides, metal nitrides, and metal oxides. The work will enhance already existing interactions between Professor Girolami's group and leading companies in microelectronics development. The research will lead to a deeper understanding of the chemistry of volatile transition metal compounds, and could lead to better ways to deposit the wires and insulators that are key components of modern-day computer chips. In addition, the award will contribute to the education of a diverse group of undergraduate and graduate students, and allow Professor Girolami to maintain his involvement in various services and outreach activities, such as library exhibits of rare books illustrating the history and advancement of science.

The proposed activity is aimed toward the synthesis of new volatile compounds (especially of the transition metals), investigations of their chemical reactivities, and studies of their volatilities and utilities as thin film precursors. Some of the specific objectives of the current project are the synthesis of new two-coordinate complexes of the late transition elements, and the chemical tuning of ligands to change the volatility, reducing power, and deposition activation energies of their complexes in a systematic way. In parallel, investigations of the use of these new precursors for CVD and ALD will be carried out collaboratively with experts in these techniques.

Agency: NSF | Branch: Standard Grant | Program: | Phase: COMMS, CIRCUITS & SENS SYS | Award Amount: 200.00K | Year: 2016

Time-critical instantiations of wireless networks such as vehicular networks, robotic systems, real-time surveillance, and networked control require stringent deadline requirements on packet transmissions. These emerging networks demand reliable and predictable transmissions over an unreliable, time-varying wireless medium to support applications such as warning messages, voice calls and video streaming. Despite remarkable progress in the design of wireless networks with maximum throughput and low delays over the last two decades, communication with hard deadlines in wireless networks remains a challenging open problem. Most existing works on delay performance in wireless networks focus on reducing average delays rather than guaranteeing hard deadlines. The goal of this project is to develop fundamental theories and distributed algorithms for transmitting packets with hard deadlines in wireless networks for time-critical applications. Technical advances in this project will contribute to the improvement of wireless systems for time-critical communications, including the so-called Internet of Things, cyber-physical systems, emergency response wireless networks, and sensor networks.

The problem of supporting communications with hard deadlines in ad hoc wireless networks is very challenging because the capacity region of the network is arrival-dependent, and packet deadlines induce special types of spatial and temporal correlation. This project focuses on a comprehensive resource allocation solution for time-critical communications in wireless networks by utilizing the following four novel techniques: (1) a virtual link approach to achieve spatial-temporal resource allocation to guarantee end-to-end hard deadlines, (2) a deadline-aware random access algorithm, (3) a virtual frame approach to deal with the infinite temporal correlation among packets, and (4) deadline-aware PHY/MAC resource allocation. By exploiting these transformative technical approaches, the project will (1) design distributed algorithms for mission-critical wireless networks, (2) design resource allocation algorithms for supporting hard end-to-end deadlines for physical layer models beyond the collision model and (3) design joint routing/scheduling algorithms for multihop traffic flows with end-to-end deadline constraints. Education is a core component of this project. During the course of this project, research and education will be integrated by including new theories and algorithms developed in this project into graduate-level courses. Every effort will be made to involve undergraduates and students from under-represented groups in this project.
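One classical building block behind deadline-aware scheduling is Earliest-Deadline-First (EDF) selection. The sketch below is a minimal single-link illustration, not the project's multihop algorithms: packets are served in deadline order, and a packet is dropped once its deadline can no longer be met.

```python
# Minimal EDF sketch for a single link with unit-time transmission
# slots. Illustrative only; real deadline-aware MAC/PHY scheduling
# must handle channel errors, contention, and multihop routes.

import heapq

def edf_transmit_order(packets, now=0, slot=1):
    """packets: list of (deadline, name) pairs.
    Returns the names transmitted, in EDF order; packets whose
    deadlines cannot be met are silently dropped."""
    heap = list(packets)
    heapq.heapify(heap)                 # min-heap ordered by deadline
    t, sent = now, []
    while heap:
        deadline, name = heapq.heappop(heap)
        if deadline < t + slot:         # cannot finish before deadline
            continue                    # packet is dropped
        sent.append(name)
        t += slot
    return sent
```

EDF is optimal for a single preemptible resource, which is precisely why the multihop, interference-limited setting studied here is hard: no such simple ordering suffices once deadlines couple across links.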

Agency: NSF | Branch: Standard Grant | Program: | Phase: NANOMANUFACTURING | Award Amount: 300.00K | Year: 2014

The emergence of a variety of high quality nanoscale materials in the last few decades has brought forth novel tunable properties with which new and improved technologies can be envisioned. Prospects range from low-cost solar cells with unprecedented power conversion efficiencies to high-contrast biomedical imaging agents. While high performance in laboratory-scale devices incorporating nanoscale materials has been demonstrated, for many applications such as photovoltaics and displays, scalable manufacturing of high efficiency devices and device arrays is an absolute necessity. Currently, there are no clear pathways to achieving such practical large area, large arrays of complex, high performing structures. This award supports fundamental research to build the necessary foundation for developing a scalable roll-to-roll printing approach to assembling large arrays of nanoscale materials within functioning device architectures. Such an ability to integrate large arrays of nanoscale materials should enable manufacturing of a wide variety of electronic and optoelectronic devices from high-performance solar cells to energy efficient displays and lighting technologies. This multidisciplinary research involving materials, mechanical and manufacturing sciences as well as optoelectronics will provide research opportunities for students from underrepresented groups and promote/enhance engineering education.

Heterogeneous integration of various 0D, 1D, and 2D nanomaterials such as quantum dots, carbon nanotubes, and graphene into vertically stacked multilayer structures allows for effective methods of utilizing the unique electrical and optical properties of these nanomaterials in a monolithic architecture. However, well-established fabrication processes such as photolithography are often incompatible with nanomaterials. Dry transfer printing using elastomeric stamps is a potential solution to this incompatibility problem but the nature of kinetically switchable adhesion of elastomers requires different peeling speeds between retrieval and printing steps. This requirement in turn introduces a grand challenge in employing elastomeric stamps to scalable roll-to-roll printing since retrieval and printing need to be carried out with the same roller, thus the same peeling speed. This research aims to explore shape memory polymers as stamp materials to replace kinetic control by stiffness control of dry adhesion and, furthermore, to enhance the adhesion force and switchability. The research team will examine the mechanics of shape memory polymers for transfer printing, investigate the fundamental properties of nanomaterials and interfaces assembled using shape memory polymer and apply the knowledge gained to assemble multilayer stacks consisting of nanomaterials and other relevant device components in a continuous roll-to-roll fashion.

Agency: NSF | Branch: Standard Grant | Program: | Phase: DECISION RISK & MANAGEMENT SCI | Award Amount: 402.54K | Year: 2013

This GSS/DRMS project investigates how the theory of efficient financial portfolios can be modified, enriched, and harnessed to manage the nature-conservation risk that climate uncertainty creates. Agencies and conservation groups cannot know exactly which areas will prove to be the best conservation sites in a future warmer climate; this creates the risk that land areas protected today for conservation purposes might not end up being good habitat or hot spots of biodiversity in the future. Scholars of finance have long used portfolio theory to manage financial risk faced by an investor through diversification. Recent research has shown that financial tools could be adapted to the problem of allocating conservation investments between different parts of a conservation planning area to reduce future conservation risk from climate change by spreading conservation lands strategically between multiple areas. However, adapting the science of portfolio theory from finance to conservation is a non-trivial endeavor. To advance that science, this project will develop new spatial conservation-outcome forecasts for the Prairie Pothole Region and construct spatial data sets suitable for spatial portfolio analysis for two other previously studied conservation problems (Eastern birds, Appalachian salamanders). Using the data on those three diverse conservation problems, the researchers will: identify the kinds of problems for which this spatial conservation portfolio diversification is most useful; figure out how to develop a spatial conservation portfolio when information regarding conservation outcomes is only available for a small number of climate scenarios; develop a method of portfolio analysis to guide division of investment between land purchases and management activities that buffer protected areas against warming; and explore whether portfolio analysis can be used by a decision maker who is concerned with several different conservation goals.
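The core portfolio calculation is simple in the two-site case. The sketch below shows the textbook closed-form minimum-variance split of a budget between two conservation sites, clipped to rule out short positions (conservation spending cannot be negative); the inputs, outcome variances and the covariance across climate scenarios, would come from spatial forecasts such as those described above. This is the generic finance formula, not the project's method.

```python
# Textbook two-asset minimum-variance portfolio, recast for two
# conservation sites. Inputs are variances/covariance of conservation
# outcomes across climate scenarios; numbers in the test are invented.

def min_variance_weight(var_a, var_b, cov_ab):
    """Return the budget share for site A (the remainder goes to
    site B) that minimizes outcome variance, clipped to [0, 1]."""
    denom = var_a + var_b - 2 * cov_ab
    if denom == 0:
        return 0.5          # degenerate case: any split is equivalent
    w = (var_b - cov_ab) / denom
    return max(0.0, min(1.0, w))
```

Diversification helps most when the two sites' outcomes are weakly or negatively correlated across climate scenarios, which is why spreading protected lands strategically between areas can reduce conservation risk.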

This research will yield valuable guidance to government agencies and private conservation groups regarding spatial conservation strategies in the Prairie Pothole Region - sometimes referred to as the Duck Factory - that reduce the risk climate change poses to future waterfowl conservation success. More importantly, this research will develop conservation tools that decision makers wrestling with a wide range of conservation problems can use themselves with climate and ecological forecast data they deem to be sufficiently accurate and that measure the conservation outcomes with which they are most concerned; the tools will help them to allocate conservation investments across space in ways that minimize uncertainty in the outcomes of those investments, increasing the social well-being produced by their expenditures. The grant will train two graduate students at the University of Illinois in spatial analysis and the application of portfolio theory to conservation, and support four female faculty researchers in fields where women are under-represented (two of them early in their academic careers). The project will engage and train a diverse group of undergraduate students that represent academic backgrounds across economics, ecology, and geography in a meaningful multidisciplinary research experience. The results will be distributed through journal articles and conference presentations; in addition, the researchers will communicate the results to stakeholders directly through professional contacts in federal agencies and at major land conservation groups such as The Nature Conservancy, Conservation International, and the Environmental Defense Fund.

Agency: NSF | Branch: Standard Grant | Program: | Phase: RES IN NETWORKING TECH & SYS | Award Amount: 256.00K | Year: 2016

Data communication has been studied over a wide range of modalities, including radio frequency (RF), acoustic and visible light. The possibility of communicating over the vibratory channel has been relatively unexplored. This project explores the fundamental limits and new algorithms associated with digital communication via vibrations in the context of vibration motors and accelerometers embedded inside all modern smartphones. The corresponding speed of data transfer may not match established communication modalities such as Near-Field Communication (NFC) or Bluetooth, but we propose that vibratory communication offers inherent advantages whenever security is vital: for instance, any two devices may be able to spontaneously exchange security keys by tapping each other; a user wearing a smart watch may enter her passwords into her laptop by merely bringing the watch in contact with her laptop; clandestine communications that should not leave a trackable trace could perhaps also be an application of vibratory communication.

This project initiates a first-principles study of reliable and efficient communication via vibration and proposes to develop a concrete communication/networking stack on vibratory actuators and sensors. The system's capacity and viability will be tested across different platforms (smartphone, smartwatch, finger rings), with a focus on real-world use-cases and applications. The core research in this project is divided into 4 main threads: (1) modeling the vibratory transmitters and receivers and understanding the theoretical capacity of such systems; (2) designing a vibratory radio using a combination of new and existing techniques; (3) developing a Medium Access Control layer that controls channel access and fault recovery; (4) understanding the information leakage (and side channels) due to vibrations and developing techniques to mitigate them.
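The simplest point in the design space of thread (2), a vibratory radio, is on-off keying: each bit period either vibrates or stays still. The sketch below is a hypothetical illustration with invented sample rates and thresholds; a real stack must also handle noise, synchronization, and the vibration motor's slow mechanical response.

```python
# Hypothetical on-off-keying sketch for a vibratory channel: the
# transmitter drives the motor (1.0 = vibrate, 0.0 = still) and the
# receiver averages accelerometer energy per bit period. Parameters
# are invented for illustration.

def encode(bits, samples_per_bit=4):
    """Map a bit string to a vibration-intensity sample stream."""
    stream = []
    for b in bits:
        stream.extend([1.0 if b == "1" else 0.0] * samples_per_bit)
    return stream

def decode(stream, samples_per_bit=4, threshold=0.5):
    """Recover bits by thresholding the mean energy of each period."""
    bits = []
    for i in range(0, len(stream), samples_per_bit):
        chunk = stream[i:i + samples_per_bit]
        bits.append("1" if sum(chunk) / len(chunk) > threshold else "0")
    return "".join(bits)
```

In practice the motor's spin-up and spin-down smear energy across adjacent bit periods, which is one reason modeling the transmitter (thread 1) matters before choosing a modulation scheme.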

Agency: NSF | Branch: Standard Grant | Program: | Phase: Combinatorics | Award Amount: 33.98K | Year: 2015

This award supports participation in the Midwest Algebra Geometry Combinatorics (ALGECOM) meetings, a series of biannual one-day conferences; the Fall 2015 edition will be held on October 24, 2015, at the University of Michigan in Ann Arbor. The goal of these conferences is to provide an informal and collaborative atmosphere to promote interaction among Midwest mathematicians. Travel support is for graduate students and speakers. The conference will include a poster session that will permit students and postdocs to present their work. This will assist in the development of STEM research and training at a variety of Midwest institutions. Venues and dates are chosen several months in advance. Past/future locations also include DePaul University, Loyola University, Indiana University-Purdue University Indianapolis, and the University of Illinois at Urbana-Champaign. The conference website is https://sites.google.com/site/algecomday/

The conference series grew from a desire to bring together mathematicians with common interests (combinatorics and its interplay with commutative algebra, algebraic geometry and/or representation theory). In particular, it deepens the links between hosting Chicago universities (DePaul and Loyola) with the University of Illinois at Urbana-Champaign, Purdue University, and Indiana University-Purdue University Indianapolis. These institutions have significant geographic separation; ALGECOM provides a regular meeting to stimulate interaction and synergies. Continuing support allows extension from Illinois and Indiana towards Michigan. This award continues this trend, which will in particular benefit graduate students both from the Midwest and elsewhere.

Agency: NSF | Branch: Standard Grant | Program: | Phase: UNDISTRIBUTED PANEL/IPA FUNDS | Award Amount: 58.01K | Year: 2016

Since 1997, the National Science Foundation (NSF) has required that all proposals submitted for funding explicitly address two review criteria: intellectual merit and broader impacts. Advances in both intellectual merit and broader impacts are essential if NSF is to achieve its mission to advance the national health, prosperity, and welfare; to secure the national defense; and for other purposes. The two criteria are often considered separately in developing and reviewing projects. A conceptual framework for understanding the various dimensions of the two criteria and their potential synergy may lead to projects with greater societal value.

This workshop brings together thought-leaders from the Civil, Mechanical, and Industrial Engineering academic research communities to define a roadmap for Broader Impacts innovations. Participants are academic research leaders who have demonstrated innovation in broader impact activities -- creating measurable economic, human capital, and societal value through their research. A publicly available workshop report will define innovative approaches to enhancing the broader impact from NSF-funded research. Subsequent presentations and publications by workshop participants will make these ideas widely available to the community.

Agency: NSF | Branch: Standard Grant | Program: | Phase: I-Corps | Award Amount: 50.00K | Year: 2016

Automation can make repetitive or boring physical tasks (like moving boxes onto a conveyor belt) a trivial component of life; however, small businesses and consumers that want to use automation in their workspace and/or home face insurmountable challenges to enter a highly technical field. Typically, external companies called robotic integrators are employed for designing and implementing such systems. While the hardware used in this process is getting cheaper and cheaper, the physical labor of highly trained technical engineers needed to build and program these systems is not. This I-Corps team is developing easy-to-use software that will allow nontechnical users to design and implement their own automated systems that are customized to their own needs without having to hire costly robotic integrators. The new, proposed technology is lowering the barrier to entry for small business owners and home consumers through a web-based software platform where users can build their own programs for automation through a nontechnical, intuitive user interface. This initial prototype leverages a way of thinking about automation in terms of action and sensing primitives that derives from a university robotics research lab. The proposed technology has the potential to dramatically transform the population of people to whom automation, and its well-documented benefits, is available. The approach taken in this technology could bring simple, moving machines into many more environments.

Expanding the use cases of industrial automation to smaller batch sizes and narrower profit margins is a key challenge toward realizing the vision of the Internet of Things. This challenge is due in large part to the high cost of technical expertise required for the integration of available, cost-effective sensors, actuators, and algorithms. However, relatively inexpensive software can be developed to help users adapt existing technology to their environment. The proposed technology utilizes two key technical areas. One area is the computational architecture of the product and the development of a scalable code architecture and support for a larger number of so-called sensing and action primitives. Another area is in human factors, where the ability of an untrained user to move from a contextualized use-case to specific logic design is key. Here, a relatively complex process must be distilled to essential steps that an untrained user can relate to a familiar process in a particular environment. Participation in the Innovation Corps will allow the team to identify and research the appropriate market in which to develop and launch a first product. In particular, the team will explore which customer base (ranging from at-home hobbyists to small-to-medium sized industrial companies) is a good fit, in terms of feasible application of the technology as well as market demand and pricing, for the proposed technology. The team's participation will influence parallel development of technology and inform a business plan that will allow for commercialization of the technology.
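The action-and-sensing-primitives idea can be pictured as a list of sense-condition-act rules evaluated in order, which is the kind of program a nontechnical user might assemble through an intuitive interface. The sketch below is a hypothetical illustration; the primitive names and rule format are invented and are not the team's design.

```python
# Hypothetical rule engine over sensing and action primitives: each
# rule reads one sensor, tests a condition, and fires one action.
# Primitive names and structure are invented for illustration.

def run_rules(rules, sensors, actions):
    """rules:   list of (sensor_name, predicate, action_name) triples
    sensors: dict mapping sensor names to zero-argument read functions
    actions: dict mapping action names to zero-argument callables
    Fires each action whose sensed value satisfies its predicate and
    returns the names of the actions fired, in order."""
    fired = []
    for sensor_name, predicate, action_name in rules:
        if predicate(sensors[sensor_name]()):
            actions[action_name]()
            fired.append(action_name)
    return fired
```

A visual editor could present each triple as "when [sensor] [condition], do [action]", hiding the underlying code entirely from the user.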

Agency: NSF | Branch: Standard Grant | Program: | Phase: COMPUTER ARCHITECTURE | Award Amount: 450.00K | Year: 2013

Moore's law continues to provide abundant devices on chip, but they are increasingly subject to failures from many sources. The hardware reliability problem is expected to be pervasive, affecting markets from embedded systems to high performance computing. There is an urgent need for research to address this problem with extremely low overheads in area, performance, and power (precluding traditional redundancy based solutions). Recently, researchers have proposed a software-driven hardware reliability solution that handles only the device faults that become visible to software and cause anomalous software behavior. This line of work has been quite successful in detecting most faults at extremely low cost. Unfortunately, some hardware faults escape detection by the proposed anomaly monitors, resulting in silent data corruption or SDC. These remaining few SDCs have been the Achilles heel of the software-driven hardware resiliency approach and a hindrance to widespread adoption. The proposed research seeks to overcome this obstacle.

The research includes methodological innovations that can determine application sites vulnerable to SDCs within a practical workflow, and a resiliency solution that uses this information to develop low cost detection and recovery techniques to mitigate the impact of SDCs. It builds on a recent resiliency analysis tool developed by the Principal Investigator's group called Relyzer. The key insight is that instead of trying to determine the outcome of each fault site, Relyzer can seek to determine which application sites will produce equivalent outcomes. This enables pruning a large number of sites and focusing on fault injections for just one site per equivalence class, resulting in significant reduction in resiliency evaluation time. In addition to providing a list of SDC-vulnerable instructions, Relyzer also provides a wealth of information on why they are vulnerable. This motivates the use of inexpensive application-specific detectors that exploit this information. However, Relyzer has several limitations in speed, accuracy, and generality, precluding its use in a practical workflow. This research will first develop new techniques to address these limitations and to implement them in a tool. Second, this research will explore systematic techniques to develop practical resiliency solutions that exploit the wealth of fault-propagation information exposed by Relyzer. It will develop systematic low-cost detection and recovery techniques, with quantifiable tradeoffs between resiliency and performance overheads, that can be incorporated in a practical workflow for real applications. If successful, this work will address a key challenge in meeting the expectations of Moore's law performance for a wide variety of societal advances. Besides the research benefits, it will provide a concrete tool for practical full application resiliency analysis and will also train graduate students.
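The equivalence-class pruning idea can be illustrated schematically. The sketch below is not Relyzer itself: it groups fault sites by a caller-supplied signature that is assumed to predict equivalent fault outcomes, then keeps one representative per class for actual fault injection.

```python
# Schematic illustration of equivalence-class pruning for fault
# injection campaigns. The signature function is an assumption: it
# stands in for Relyzer-style analysis that predicts which fault
# sites will produce equivalent outcomes.

from collections import defaultdict

def prune_fault_sites(sites, signature):
    """sites:     iterable of fault-site descriptors
    signature: function mapping a site to its equivalence-class key
    Returns (representatives, reduction_factor): one site per class,
    and the ratio of total sites to injections actually needed."""
    classes = defaultdict(list)
    for s in sites:
        classes[signature(s)].append(s)
    reps = [members[0] for members in classes.values()]
    return reps, len(sites) / len(reps)
```

For example, grouping sites that share an instruction opcode (a deliberately crude signature) already halves the injection count in the toy case below; the real tool's equivalence analysis is far more discriminating, which is where its large evaluation-time reductions come from.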

Agency: NSF | Branch: Standard Grant | Program: | Phase: EFRI RESEARCH PROJECTS | Award Amount: 100.00K | Year: 2016

This proposal from the University of Illinois aims to develop a learning framework and associated platforms that will increase the probability of germinating transformative research ideas that will open new opportunities to address important societal challenges. Although the framework is designed for any research domain, for evaluation purposes this project focuses on the interdisciplinary challenge of resilience to global climate change. The target audiences are late stage graduate students, early stage faculty, and postdoctoral scholars who show potential for engaging in cross-institutional interdisciplinary research linking scholars from American Indian Tribal Colleges, Historically Black Colleges and Universities, Hispanic-Serving Colleges and Universities, and Land Grant Universities.

Resilience to climate change is a fundamentally important issue and this project has the opportunity to generate new approaches to this challenge. The inclusion of minority serving institutions in a meaningful way that gives them true voice breaks new ground in this area of inquiry as well. Procedures that facilitate including multiple voices when addressing fundamental scientific challenges will be a contribution to building a commonwealth among the many groups in the US who have stakes in important scientific issues. Expected project outcomes will be: (1) Cross-institutional, interdisciplinary proposals to funding agencies; (2) Sustainable cross-institutional and interdisciplinary networks to promote future research; (3) A procedure for facilitating the first two outcomes and evidence for its efficacy, scalability, and adaptability beyond the institutions involved in the study. A team composed of research and STEM directors and other personnel from the American Indian Higher Education Council (AIHEC), the Hispanic Association of Colleges and Universities (HACU), and the National Association for Equal Opportunity in Higher Education (NAFEO), and directors and research scientists from the National Center for Supercomputing Applications (NCSA), as well as other scholars and process experts will contribute to the development and implementation of the framework.

Agency: NSF | Branch: Standard Grant | Program: | Phase: SOCIAL PSYCHOLOGY | Award Amount: 1.31M | Year: 2015

This project examines the development of a key factor leading to women's underrepresentation in science and technology. Specifically, it examines the development of the cultural stereotype that links males, but not females, with intellectual brilliance and genius. Previous research has found that academic disciplines believed to require a "spark of genius" tend to have the largest gender gaps. Because many science fields are portrayed in such terms, the "brilliance = males" stereotype may be an important factor in explaining the persistent gender gap in these disciplines. The primary goal of this research is understanding how this stereotype is acquired over the course of development. Investigating the development of this stereotype will inform how the stereotype might steer capable young women away from pursuing careers in science and technology and may also inform the optimal timing of potential interventions to block its adverse effects.

This project consists of three studies to examine three crucial developmental issues. First, it investigates the development of children's knowledge of the cultural stereotype that males are more likely to be brilliant than females. Second, it investigates the development of gender differences in children's motivation to engage in activities portrayed as requiring high levels of intellectual aptitude. Finally, it investigates longitudinally whether internalizing the stereotype against females' intellectual abilities undermines young girls' subsequent motivation to engage in activities that are said to require brilliance and giftedness. This research explores the development of a set of processes that ultimately limit opportunities for women. As such, these studies will improve parents', educators', and policy-makers' ability to intervene at the root of the problem to promote greater gender equity in those domains of academia and industry in which women have traditionally been underrepresented.

This proposal is being co-funded by Developmental and Learning Sciences, Social Psychology, and the Science of Broadening Participation within the Social, Behavioral, and Economic Sciences Directorate, and by the Education and Human Resources Directorate's Core Research program and the Research on Gender in Science and Engineering program.

Agency: NSF | Branch: Standard Grant | Program: | Phase: STATISTICS | Award Amount: 185.00K | Year: 2016

Due to the rapid development of information technologies and their applications in many scientific fields such as climate science, medical imaging, and finance, statistical analysis of high-dimensional data and infinite-dimensional functional data has become increasingly important. A key challenge associated with the analysis of such big data is how to measure and infer complex dependence structure, which is a fundamental step in statistics and becomes more difficult owing to the data's high dimensionality and huge size. The main goal of this research project is to develop new dependence measures for quantifying the dependence of large-scale data sets such as temporally dependent functional data and high dimensional data, and to utilize these new measures to develop novel statistical tools for conducting sparse principal component analysis, dimension reduction, and simultaneous hypothesis testing. Building on the new dependence metrics, which can capture nonlinear and non-monotonic dependence, the methodologies under development are expected to lead to more accurate prediction and inference, as well as more effective dimension reduction in the analysis of functional and high dimensional data.

The research consists of three projects addressing different challenges in the analysis of functional and high dimensional data. In Project 1, the investigators introduce a new operator-valued quantity to characterize the conditional mean (in)dependence of one function-valued random element given another, and apply the newly developed dependence metrics to perform dimension reduction for functional time series under a new framework of finite dimensional functional data. In Project 2, the investigators explore a new dimension reduction framework for regression models with high dimensional response, which requires less stringent linear model assumptions and is more flexible in terms of capturing possible nonlinear dependence between the response and the covariates. In Project 3, the investigators develop new tests for the mutual independence of high dimensional data via distance covariance and rank distance covariance, using both sum-of-squares and maximum-type test statistics. Overall, the three lines of research are all related to big data, and they touch upon various aspects of modern statistics; the project aims to push the current frontiers in areas including sparse principal component analysis, inference for dependent functional data, and high dimensional multivariate analysis to another level.
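Distance covariance, the dependence measure named in Project 3, can be estimated from double-centered pairwise distance matrices. Below is a minimal univariate sketch of the V-statistic form, written from the standard published definition rather than from the project's own code; the sample sizes and the y = x² example are illustrative:

```python
import numpy as np

def dist_cov_sq(x, y):
    """Squared sample distance covariance (V-statistic) of two
    univariate samples; its population version is zero iff the
    variables are independent."""
    a = np.abs(x[:, None] - x[None, :])   # pairwise distances in x
    b = np.abs(y[:, None] - y[None, :])   # pairwise distances in y
    # Double-center each distance matrix
    A = a - a.mean(axis=0) - a.mean(axis=1)[:, None] + a.mean()
    B = b - b.mean(axis=0) - b.mean(axis=1)[:, None] + b.mean()
    return (A * B).mean()

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = x**2 + 0.1 * rng.normal(size=500)     # dependent but uncorrelated
```

For this pair the ordinary Pearson correlation is near zero even though y is essentially a deterministic function of x; the clearly positive distance covariance is what a mutual-independence test based on it would pick up.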

Agency: NSF | Branch: Continuing grant | Program: | Phase: Science of Organizations | Award Amount: 220.69K | Year: 2015

Nontechnical Description
Catastrophic events such as Fukushima and Katrina have made it clear that integrating physical and social causes of failure into a cohesive modeling framework is critical in order to prevent complex technological accidents and to maintain public safety and health. In this research, experts in the Probabilistic Risk Assessment (PRA), Organizational Behavior, and Information Science and Data Analytics disciplines will collaborate to answer the following key questions: (a) what social and organizational factors affect technical system risk? (b) how and why do these factors influence risk? and (c) how much do they contribute to risk? Existing PRA models do not include a complete range of organizational factors. This research investigates organizational root causes of failure and models their paths of influence on technical system performance, resulting in more comprehensive incorporation of underlying organizational failure mechanisms into PRA. The field of PRA has advanced the quantification of equipment failure and human error for modeling the risk of complex systems; however, current organizational risk contributors lack reliable data analytics that go beyond safety climate and safety culture surveys. This research fills that gap by developing predictive causal modeling and big-data theoretic technologies for PRA, expanding the classic approach of data management for risk analysis by utilizing techniques such as text mining, data mining, and data analytics. In addition to scientific contributions to organizational science, PRA, and data analytics, this research provides regulatory and industry decision-makers with important organizational factors that contribute to risk and leads to optimized decision making. Other applications include real-time monitoring of organizational safety indicators, efficient safety auditing, in-depth root cause analysis, and risk-informed emergency preparedness, planning, and response.
The multidisciplinary approach of this project can serve as an educational model, empowering students to pursue research across disciplinary boundaries. Finally, the proposed research represents a successful model of industry-academia collaboration. A nuclear power plant has committed to this project and provides unique access to data and information necessary to complete the research. The proposed methodology is generic and applicable for any high-risk industry (e.g., aviation, healthcare, oil and gas), and will be used for the improvement of organizational safety performance in order to protect workers, the public and the environment.

Technical Description
Organizations produce, process and store a large volume of wide-ranging, unstructured data as a result of business activities and compliance requirements (i.e., corrective action programs, root cause analysis reports, oversight and inspection data, etc.). This research leverages those data resources for the quantification of organizational failure mechanisms and their integration with the technical system risk scenarios generated by PRA. The research is based on a socio-technical risk theory to prevent misleading results from solely data-informed approaches. Combining socio-technical risk theory, systematic modeling and semantic data analytics strategies will greatly enhance risk analysis of complex systems. We will conduct our research based on following steps: (1) Expand factors, sub-factors, and causal relationships in the Socio-Technical Risk Analysis (SoTeRiA) framework, (2) Develop measurement techniques for factors, sub-factors and their causal relationships in SoTeRiA (e.g., integrating text mining with the Bayesian Belief Network; conducting scientific reduction to identify important factors; measuring of important factors), (3) Establish a dynamic, predictive socio-technical causal modeling technique, (4) Perform uncertainty analysis, (5) Conduct verification and validation, (6) Integrate the quantitative socio-technical causal model with PRA, and (7) Conduct sensitivity and importance measure analyses. As the pioneer study on the integration of big data with PRA, this research addresses and quantifies risk emerging from the interface of social and technical systems.
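Step (2) above couples text mining with a Bayesian Belief Network; the basic inference such a network performs can be sketched with a toy two-node example. The structure and all probabilities here are illustrative placeholders, not values from the SoTeRiA framework:

```python
# Toy two-node Bayesian Belief Network: safety culture (SC) -> incident (I).
p_sc_poor = 0.2                              # prior P(SC = poor)
p_i_given_sc = {"poor": 0.10, "good": 0.01}  # P(I = yes | SC)

# Marginalize over SC to get the overall incident probability
p_incident = (p_sc_poor * p_i_given_sc["poor"]
              + (1 - p_sc_poor) * p_i_given_sc["good"])

# Bayes' rule: posterior belief that safety culture is poor, given an incident
p_sc_poor_given_i = p_sc_poor * p_i_given_sc["poor"] / p_incident
```

In the project's setting the network would be far larger, with node probabilities estimated from mined corrective-action and inspection text rather than set by hand.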

Agency: NSF | Branch: Continuing grant | Program: | Phase: PHYSICAL & DYNAMIC METEOROLOGY | Award Amount: 942.59K | Year: 2013

Processes governing the spatial and temporal variability of precipitation within the comma head sector of extratropical cyclones remain poorly understood. This sector of baroclinic storm systems is often the focus of hazardous winter weather including heavy snowfall, blizzards and ice storms that markedly impact transportation and other human activities. This investigative team seeks to improve our understanding of precipitation substructures within this zone by addressing outstanding questions arising from the Profiling of Winter Storms (PLOWS) field campaign, which was carried out during the winters of 2008-09 and 2009-10. Observations collected during PLOWS included extensive in-situ microphysical data gathered by the NSF/NCAR C-130 aircraft, high-resolution remote sampling of precipitation structures by the University of Wyoming Cloud Radar and Lidar carried aboard the C-130, complementary views from ground-based radars and profiling systems, and special serial rawinsonde launches. Additional insights will be gained through high-resolution simulations suitable for comparison with a wide range of observed precipitation substructures.

Planned investigations will center on: (1) the nature and source of instability creating cloud-top generating cells, including determination of updraft magnitudes, origins, and the role of supercooled water in the generation and growth of ice particles near cloud top, ice particle concentrations within generating cells and associated precipitation plumes, and processes leading to the rather ubiquitous generating-cell structures observed during PLOWS; (2) the means by which potential instability is generated within zones characterized by deep upright elevated convection on the warmer side of comma-head regions, the relationship between the dry-slot upper-tropospheric airstream moving over warm-frontal surfaces, and determination of the role of synoptic-scale vertical motions accompanying frontogenesis in triggering release of this instability; (3) the origins of linear precipitation bands and their potential creation by synoptic-scale deformation acting upon descending ice particle plumes issued by elevated generating cells, as well as differing ice particle characteristics within and outside these bands; (4) the relationship of polarization radar signatures to measured in situ microphysical properties, and in particular determination of whether supercooled water or its effects (e.g., rimed particles) can be dependably detected via remote sensing; and (5) the nature of stratiform- vs. convective-cloud region flows, with attention to fine-scale wave features, frontal interfaces, their effects on microphysical processes, and their relationship to isentropic surfaces, shearing instability, and low-level fronts in the production of locally enhanced precipitation rates.

The intellectual merit of this effort rests on full exploitation of the unique and extensive PLOWS project dataset to address longstanding questions concerning the nature and origins of precipitation within winter cyclones in more complete and definitive ways than has heretofore been possible. Broader impacts of this research will include a carefully designed mix of undergraduate and graduate education and testing of improved operational strategies for remote sensing of winter weather systems, hinging on emerging observational assets including NOAA's newly upgraded dual-polarization network of WSR-88D radars. Anticipated longer-term impacts include improved ability of numerical models to simulate precipitation substructures in winter storms, with direct application to forecasts benefiting the public, as well as recruitment of new scientists qualified to undertake atmospheric field research.

Agency: NSF | Branch: Standard Grant | Program: | Phase: Cyber-Human Systems (CHS) | Award Amount: 845.58K | Year: 2016

This research will study human and machine co-curation of online news feeds and develop new software to compare the user's intent with respect to idealized feed curation when algorithms filter what appears in online news feeds. This will be combined with research that investigates the user's perception of the algorithm's actions and the user's perception of feed settings in control panels. Not only are these algorithmic processes opaque to most users, but many users don't even know that algorithms are making decisions on their behalf. People who might be aware of algorithmic processes at work have no way to verify their existence; changes in a user's search results, for example, might have resulted from a change in the algorithm, or from a change in the user's activity. This work has the potential to increase public algorithm awareness, which may lead to more in-depth exploration and possibly to learning about algorithms, with implications for the public understanding of science and technology.

This research will generate evidence-based knowledge in five main areas: (a) the level of social media algorithm awareness across a general population, (b) how awareness levels and interaction behavior change when exposed to facets of social feeds via an interactive social feed visualization tool, (c) whether the use of feed settings results in better feed perception, (d) what people want to see in their feeds, and (e) how we can communicate algorithmic process through design of feed content and feed interfaces. The outcomes of this work will help researchers and practitioners in various fields (Human Computer Interaction, Social Science, Design, Engineering, Law, Ethics) critically rethink current computation and design practice and lead to interfaces that help people understand the algorithms that shape their lives. Code templates to program unique algorithms will be provided to encourage algorithm creation, and public repositories will be made available to encourage sharing.

Agency: NSF | Branch: Standard Grant | Program: | Phase: SPRF-IBSS | Award Amount: 221.50K | Year: 2015

The Directorate of Social, Behavioral and Economic Sciences offers postdoctoral research fellowships to provide opportunities for recent doctoral graduates to obtain additional training, to gain research experience under the sponsorship of established scientists, and to broaden their scientific horizons beyond their undergraduate and graduate training. Postdoctoral fellowships are further designed to assist new scientists to direct their research efforts across traditional disciplinary lines and to avail themselves of unique research resources, sites, and facilities, including at foreign locations. This postdoctoral fellowship award supports a rising interdisciplinary scholar studying ochre pigments from Stone Age archaeological sites. Understanding the potential significance and meaning of ancient symbolism has been enhanced by novel geochemical techniques for provenance studies of ochre, particularly minimally destructive trace element fingerprinting. This study builds on these methods by adapting iron, strontium, and lead stable isotope analysis to geologic sources used by traditional societies in Kenya today, and to ochre artifacts and rock art paint samples. Multi-isotope analysis can improve the accuracy of provenance identification, and also contribute to our understanding of the geologic mechanisms of ochre deposit formation. Finally, information provided by indigenous people who currently use ochre will allow us to learn more about their pigment source preferences, criteria for raw material selection, techniques of pigment preparation, and symbolic meanings associated with geologic sources, rock art images, and pigmented artifacts. Enhanced understanding of modern ochre use and its meanings will shed new light on the evolution of human symbolism from the Stone Age to present day.

Communicating information and identity with symbols is an essential attribute of our species. Humans have used red and yellow ochre pigments for symbolic expression for hundreds of thousands of years. However, rock art and other practices involving these iron-based pigments are understudied in the modern era. Research on this rapidly vanishing form of cultural heritage is thus critical to understanding the origins of symbolism. This project bridges archaeology, ethnography, and geochemistry to investigate ochre use. First, recent rock art sites and the ochre deposits used for pigments will be identified through collaboration with modern rock art painters. Geochemical techniques will then be used to characterize the ochre deposits, and to match them to paint samples from rock art sites and ochre artifacts from archaeological sites. This approach will facilitate verification of the sources of rock art pigments identified from ethnographic information. This project will refine minimally destructive analytical methods that can be used for ochre pigments and other iron-containing materials. Improving methods to determine geologic sources of ochre pigments has significant implications for identifying looted heritage items, forgeries, and for stemming the illicit antiquities trade that endangers tangible culture and fuels conflict around the world.

Agency: NSF | Branch: Continuing grant | Program: | Phase: ADVANCES IN BIO INFORMATICS | Award Amount: 507.18K | Year: 2014

In this project, the investigator will develop rigorous mathematical methods for analyzing the recent high-throughput nucleosome mapping data in yeasts and provide novel computational tools for studying the relations among DNA sequence, nucleosome stability, and nucleosome positioning. Eukaryotic DNA has a complicated three-dimensional structure called chromatin, consisting of millions of nucleosomes, the protein-DNA complexes that contain 146 base pairs of DNA wrapping around eight histones. Nucleosomes play a critical role in regulating gene expression by controlling the accessibility of DNA and modulating transcription factor binding activities. Nucleosomes are thus of paramount importance, but there currently do not exist rigorous computational tools for studying the biological signals that regulate their positioning and stability. This research will develop spectral decomposition methods for analyzing nucleosomal and linker DNA, including their physical properties derived from molecular measurements of DNA flexibility. Dr. Song will apply a statistical theory for analyzing the maximal frequency spectrum of categorical time series in order to answer a long-standing question of whether certain periodic DNA properties preferentially exist in individual nucleosomes. Furthermore, wrapping DNA around histones introduces superhelical stress, which is shown to be often countered by increased sequence-dependent stabilization of DNA. The developed computational tools will help unravel new connections between DNA bendability and stabilization energy and, thus, facilitate the discovery of novel nucleosome positioning signals at important regulatory sites. We currently do not understand how chromatin structure is faithfully inherited upon cell division, and the genetic information contained in DNA sequences may significantly influence the process of epigenetic inheritance. A better understanding of the mathematical and physical properties of DNA will deepen our knowledge of chromatin structure.
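The maximal-frequency-spectrum idea, scanning an indicator sequence for a dominant periodicity such as the roughly 10 bp helical repeat, can be sketched as follows. The motif choice, search band, and synthetic sequence are illustrative assumptions, not the project's actual analysis:

```python
import numpy as np

def dominant_period(seq, motif="AA", pmin=8.0, pmax=15.0):
    """Return the period (in bases) of the strongest periodogram peak
    of the 0/1 indicator of where a dinucleotide motif starts."""
    x = np.array([float(seq[i:i + 2] == motif) for i in range(len(seq) - 1)])
    x -= x.mean()
    spec = np.abs(np.fft.rfft(x)) ** 2        # periodogram
    freqs = np.fft.rfftfreq(x.size)
    # Restrict the search to a plausible band of periods
    band = (freqs > 0) & (freqs >= 1.0 / pmax) & (freqs <= 1.0 / pmin)
    k = np.flatnonzero(band)[np.argmax(spec[band])]
    return 1.0 / freqs[k]

# Synthetic sequence with the motif planted every 10 bases
seq = "AACGCGTGCT" * 30 + "A"
period = dominant_period(seq)
```

On the synthetic sequence the detected period is 10 bases, matching the planted spacing; on real nucleosomal DNA the analogous question is whether such a peak exists at all, which is what the categorical-time-series theory tests rigorously.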

This project is uniquely positioned to integrate Dr. Song's research in computational epigenomics with innovative educational programs that will significantly advance the training of biology students in mathematics and statistics. The project will help improve the infrastructure for education and research by synergizing different graduate groups. The project will support vertical integration of research and education, whereby the proposed research will constitute an important theme for teaching epigenetics to quantitative scientists and for teaching mathematics. The applicant will continue to teach a statistics course that he has designed, drawing examples from genomics. Topical mini-courses on high-throughput sequencing technology and systems biology will also be taught in order to disseminate the applicant's current research activities and to help prepare students and postdocs to conduct their own research in genomics. Undergraduate students will also have opportunities to participate in the research, and Dr. Song will involve local high school students. The project will exemplify the rich possibility of applying mathematics and statistics to solving biologically important questions. Computational tools will be implemented in open source software that will be accessible to other researchers studying chromatin structure. Further information about the project may be obtained from the PI's lab website at http://song.igb.illinois.edu.

Agency: NSF | Branch: Standard Grant | Program: | Phase: EXTRAGALACTIC ASTRON & COSMOLO | Award Amount: 171.47K | Year: 2016

A great challenge in astrophysics is to understand in detail how the initially smooth distribution of matter in the early Universe formed the first galaxies. Complementing observations of real galaxies, researchers use computational simulations to model the early Universe and study the results. This process allows one to learn how these first galaxies might have formed. However, the sheer size and complexity of such galaxy simulations present their own challenge, as a single research group lacks the capacity to explore them fully. As a result, maximizing the scientific value of simulations demands new tools and services designed to foster the growth of a collaborative, multi-group research community. This project aims to develop and use a new virtual laboratory to enable transformative scientific inquiry on new and existing galaxy simulations, some of which were produced with prior NSF support. Enabling public access and unrestricted analysis and fostering a collaborative environment for sharing technology and results will ensure that galaxy simulations continue to be valuable within and beyond the research group that originally conducted them. This project addresses the national imperative to develop US cyberinfrastructure and US leadership in scientific research in astrophysics.

More technically, with prior NSF support the investigators used the Blue Waters supercomputer at the National Center for Supercomputing Applications (NCSA) to perform the Renaissance Simulations: among the largest, most detailed simulations of the formation and evolution of the first galaxies to date. The Renaissance Simulations yielded a sample of over 3,000 high redshift (20 > z > 7) galaxies and were specifically designed to simulate the dominant sources responsible for preheating the intergalactic medium and reionization at an unprecedented level of detail. This project plans two related activities: (1) further analysis of the Renaissance Simulations; and (2) the creation of an open-data access portal, the Renaissance Simulation Laboratory (RSL), which will provide open access to this unique data corpus along with associated data analysis tools for the astronomical research community. Driving its design and utility, the investigators will use the RSL to carry out their own research investigations. The investigators' research processes will become part of the RSL in the form of executable Jupyter notebooks. The Jupyter Notebook, an evolution of the IPython Notebook, is a fundamental part of the RSL and is ushering in the era of open science. The investigators' use of Docker containers is equally compelling. By containerizing Jupyter notebooks that execute pre-programmed data analysis workflows, the investigators are able to bring computation to the data and share how they obtain their scientific results, a key step toward reproducible science. Users will have the option of downloading notebooks and associated data to their own platforms or executing them on San Diego Supercomputer Center and NCSA high performance platforms.

Agency: NSF | Branch: Standard Grant | Program: | Phase: GRANT OPP FOR ACAD LIA W/INDUS | Award Amount: 350.00K | Year: 2014

The objective of this Grant Opportunity for Academic Liaison with Industry (GOALI) award is to develop an optimization framework for inventory management in Assemble-to-Order manufacturing systems with general structures and parameter values. The basic approach is to employ a stochastic program to transform an intractable control problem into a much simpler solvable optimization problem. The optimal solution of the stochastic program, which sets a provable lower bound on the long-run average expected total inventory cost, will be used to inspire a family of inventory control policies for managing dynamic systems. These policies will be tested by the criterion of asymptotic optimality on the diffusion scale. In other words, when component lead times increase, thus making the system more costly, the proposed policies will drive the percentage difference between the inventory cost and its lower bound to zero. Performance of these policies in various parameter regions will also be evaluated empirically by numerical experiments, simulations, and case studies, including comparisons with other approaches and stress-tests under the least favorable conditions. Algorithms involving approximations will be developed to allow the approach to be implementable on an industrial scale.
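While the proposed policies target full Assemble-to-Order systems, the kind of empirical policy evaluation described above can be illustrated on a toy single-component model. The base-stock (order-up-to) policy, demand process, and cost parameters below are illustrative assumptions, not the policies under development:

```python
import random

def average_cost(S, lead_time, p, h, b, periods=20000, seed=1):
    """Simulate an order-up-to-S (base-stock) policy for one component.
    Demand per period is Binomial(10, p) (sum of 10 Bernoulli draws);
    each unit demanded triggers a replenishment that arrives lead_time
    periods later.  Returns the average holding (h) plus backorder (b)
    cost per period."""
    rng = random.Random(seed)
    pipeline = [0] * lead_time      # outstanding orders, by arrival period
    inventory = S                   # net inventory; negative = backorders
    total_cost = 0.0
    for _ in range(periods):
        inventory += pipeline.pop(0)              # receive due orders
        demand = sum(rng.random() < p for _ in range(10))
        inventory -= demand
        pipeline.append(demand)                   # reorder what was consumed
        total_cost += h * max(inventory, 0) + b * max(-inventory, 0)
    return total_cost / periods

# Cost behaves convexly in S: too little stock pays backorder penalties,
# too much pays holding costs.
costs = {S: average_cost(S, lead_time=2, p=0.3, h=1.0, b=9.0)
         for S in (0, 12, 40)}
```

A moderate base-stock level (S = 12, roughly the 90th percentile of lead-time demand) beats both extremes here; the proposed research replaces such brute-force tuning with stochastic-program lower bounds and asymptotically optimal policies for multi-component, multi-product systems.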

If successful, the results of this research will help manufacturing companies achieve significant savings in inventory cost, thus improving the efficiency of the widely adopted Assemble-to-Order manufacturing strategy. Because the policies are particularly effective for systems with long lead times, they will mitigate the negative impacts on the inventory system of long transportation delays between manufacturers and their overseas suppliers, and thus lead to changes in the cost-benefit calculation that favor insourcing, which will contribute to the US manufacturing sector's revitalization. The combined use of stochastic programs and asymptotic analysis also points to a promising new approach for inventory management optimization, which involves many notoriously hard problems.

Agency: NSF | Branch: Continuing grant | Program: | Phase: PHYSICS OF LIVING SYSTEMS | Award Amount: 199.02K | Year: 2015

This collaborative research project, consisting of four institutions (Rice, Yale, UIUC and Princeton), aims to continue the Physics of Living Systems Student Research Network (PoLS SRN). This network has been in existence for four years and has had a dramatic impact on many graduate students, both in the US and abroad, working on the application of physical science techniques to living systems. These students now can participate in a global community that can help deal with the many complex issues involved in conducting research in such a new and inherently multidisciplinary field. These issues range from proper training, to gaining a broad perspective, to accessing technical expertise that may not be available at their home institution. In addition to the obvious broader impacts related to training of a research workforce, there are other broad impacts of this plan. Via the interaction of one of the PoLS nodes (Rice) with the biomedical community in Houston, students and faculty will be exposed to possible avenues whereby physics can contribute to human health issues. Funds to attract students from under-represented groups to network meetings will be available through the newly proposed network coordinator. Also, ideas vetted by the PoLS SRN will be adapted to create student networks in other areas of science and engineering.

There is by now little disagreement with the general notion that concepts and methods from physics have been a critical contributor to the increased understanding of the living world, and that their importance will grow as the scientific world moves toward an ever more quantitative and predictive form of biology. Thus, the physics community clearly needs to train a new generation of scientists who can lead this effort, scientists who have the right mix of physics and mathematics rigor and broad knowledge of living systems from molecular scales on up. The PoLS SRN aims at creating a community of graduate students who can collectively help themselves and their mentors accelerate and enhance this training process. This is being done by a mix of in-person and virtual modes of communication, and this proposal is a plan to continue and expand these efforts; it will reach more students, improve the social networking portals, and make use of the complementary research agendas of the different network nodes to provide broad technical expertise. Doing all of this will boost the intellectual level of the entire research field and convince the best students that the Physics of Living Systems is truly the most exciting research frontier in 21st century science.

This project is being jointly supported by the Physics of Living Systems program in the Division of Physics, the Molecular Biophysics Cluster in the Division of Molecular and Cellular Biosciences, the Chemistry of Life Processes program in the Division of Chemistry, and the Cellular Dynamics and Function Program in the Division of Integrative Organismal Systems.

Agency: NSF | Branch: Standard Grant | Program: | Phase: ELECTRONIC/PHOTONIC MATERIALS | Award Amount: 390.00K | Year: 2015

Nontechnical Description: Reducing the sizes of semiconductor crystals to nanometer length scales can lead to useful optical and electrical properties that do not exist in large crystals. These tiny semiconductor nanocrystals are finding innovative applications in displays, lighting, and biomedical imaging. Important to developing such applications is the understanding of how electrical charges affect optical and electrical characteristics, as well as the ability to control charging effects in nanocrystals. This project elucidates the effects of electrical charges in an emerging class of solution-processed semiconductor nanocrystals with shape, optical, and compositional anisotropy, which offers novel means of manipulating electrical charges. The students participating in this project tackle multi-faceted challenges with ample educational and training opportunities necessary for becoming leaders in interdisciplinary science and engineering fields. The PI is committed to utilizing the results of this research project to enhance teaching and to promote the benefits of nanoscience to the general public.

Technical Description: Recently achieved multi-composition semiconductor nanocrystals with complex yet well-defined epitaxial interfaces, especially those with rod shapes (or nanorods), represent an emerging class of colloidal semiconductor materials. Active heterojunctions and structural/optical anisotropy designed into these nanorods provide novel means of controlling separation, injection, extraction, and recombination of charge carriers. However, surrounding induced charges, whether intentional or not, can lead to behavior entirely different from that often assumed for an intrinsic or charge-neutral material. This project examines systematically how work functions of contacting charge transport layers and electrodes affect charging behavior, and how pre-existing charges in these multi-heterojunction nanorods influence carrier injection, photoluminescence, and electroluminescence. The project also investigates how structural/optical anisotropy and active heterojunctions can be utilized to control charging behavior. The knowledge gained through this project helps to identify critical parameters and means for achieving enhanced materials properties and device performance.

Agency: NSF | Branch: Continuing grant | Program: | Phase: COMM & INFORMATION FOUNDATIONS | Award Amount: 669.21K | Year: 2013

This project deals with theory and efficient algorithms for statistical decision problems that are radically different from those that have been studied to date in two key aspects: First, the decision-maker may choose among a large class of observation channels (features) of varying complexity and quality; and second, the total cost of computational resources that can be used prior to arriving at a decision is limited. Computer vision is a paradigmatic source of such feature-rich decision problems, requiring the use of multiple heterogeneous feature types, integration of diverse sources of contextual information, and possibly even human interaction.

This project entails the development of a rigorous mathematical framework for feature-rich decision problems in accordance with three specific aims: (1) structural characterization of features as stochastic belief-refining filters; (2) universal cost-sensitive criteria for numerical comparison of features in terms of expected information gains; and (3) optimal value-of-information criteria for sequential feature selection that take into account both feature extraction costs and terminal decision losses. As corollaries, this research investigates connections to asymptotic information-theoretic characterizations of optimal feature selection rules and decisions. The fourth specific aim of the project is the development of practical algorithms for two challenging computer vision problems: active visual search and fine-grained categorization. This component of the project leverages theoretical aims (1) and (2) to develop practical cost- and loss-sensitive feature compression techniques. Theoretical aim (3) targets algorithms that function as autonomous decision-making agents. Faced with an inference task on an image, they apply cost-sensitive non-myopic value-of-information criteria to decide at each time step whether to extract a new feature from the image or to stop and declare an answer.
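The extract-or-stop logic described above can be illustrated with a toy Bayesian sketch. Everything here is an assumption made for illustration, not the project's actual criteria: a two-class problem, three noisy binary features with invented accuracies and costs, and a greedy one-step value-of-information rule.

```python
import numpy as np

# Toy cost-sensitive sequential feature selection (illustrative only).
rng = np.random.default_rng(0)

PRIOR = np.array([0.5, 0.5])            # belief over two hypotheses
ACCURACY = np.array([0.9, 0.75, 0.6])   # P(feature report = true class)
COST = np.array([0.30, 0.10, 0.05])     # extraction cost per feature
LOSS = 1.0                              # loss for a wrong terminal answer

def bayes_risk(belief):
    """Expected loss of stopping now and declaring the MAP class."""
    return LOSS * (1.0 - belief.max())

def posterior(belief, f, obs):
    """Refine the belief after feature f reports `obs` (0 or 1)."""
    acc = ACCURACY[f]
    like = np.array([acc if obs == c else 1.0 - acc for c in (0, 1)])
    b = belief * like
    return b / b.sum()

def value_of_information(belief, f):
    """Expected one-step risk reduction from feature f, net of its cost."""
    acc = ACCURACY[f]
    expected_risk = 0.0
    for obs in (0, 1):
        p_obs = sum(belief[c] * (acc if obs == c else 1.0 - acc)
                    for c in (0, 1))
        expected_risk += p_obs * bayes_risk(posterior(belief, f, obs))
    return bayes_risk(belief) - expected_risk - COST[f]

def decide(true_class):
    """Extract features greedily while some feature is worth its cost."""
    belief, unused = PRIOR.copy(), set(range(len(COST)))
    while unused:
        f = max(unused, key=lambda j: value_of_information(belief, j))
        if value_of_information(belief, f) <= 0.0:
            break                       # stop and declare an answer
        obs = true_class if rng.random() < ACCURACY[f] else 1 - true_class
        belief = posterior(belief, f, obs)
        unused.remove(f)
    return int(belief.argmax())
```

The greedy one-step rule is myopic; the abstract's aim (3) concerns non-myopic criteria, which would look ahead over sequences of extractions rather than a single step.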

Agency: NSF | Branch: Standard Grant | Program: | Phase: Secure &Trustworthy Cyberspace | Award Amount: 500.00K | Year: 2016

It is becoming widespread practice for software applications to be shipped in the form of a virtual instruction set (i.e., as virtual code) and translated to the instruction set of a physical computer (machine code) after shipping, e.g., when downloading an app to an iPhone, or just before execution for code embedded in many Web pages. Increasingly, the LLVM virtual instruction set (developed by the principal investigator under prior NSF funding) is being used by various industries. A serious problem with this model is that application developers are unable to test the final computer code that is generated for their applications: they are only able to test the virtual code.

This project is developing new techniques that allow application developers to have much higher confidence in the final code for applications that ship as virtual code. The basic approach, called translation validation, allows the translation process from virtual to machine code to also generate a formal proof that the machine code preserves all the correct behaviors of the virtual code, and does not introduce any unexpected incorrect behaviors. Unlike existing work on translation validation, this work can generate proofs for translation between two very different languages, as well as formal guarantees in the presence of possible incorrect behaviors in the virtual code. Besides these reliability benefits, the strategy also improves the security of software because virtual code is widely used today as a means to enforce security requirements of important systems, e.g., Web browsers, operating systems, and database systems. Such systems rely on correct translations of virtual code to machine code to enforce security, and formally verified translations will prevent security vulnerabilities that might otherwise be introduced by bugs in complex (and so, inherently bug-prone) translators.
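The idea of checking a virtual-to-machine translation after the fact can be illustrated with a deliberately tiny sketch. A real translation validator emits a formal proof of behavior preservation; the toy below merely checks agreement on a finite input domain, and both instruction sets are invented for illustration.

```python
# Toy "translation validation" sketch: virtual code is a tiny stack
# machine, "machine code" is a register machine, and the validator
# checks that both compute the same function on a finite test domain.

def run_virtual(prog, x):
    """Stack machine: PUSH const, LOAD (push input x), ADD, MUL."""
    stack = []
    for op, *args in prog:
        if op == "PUSH":
            stack.append(args[0])
        elif op == "LOAD":
            stack.append(x)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

def run_machine(prog, x):
    """Register machine: MOV/ADD/MUL dst, src; src may be 'x', a
    register name, or an integer constant."""
    regs = {}
    for op, dst, src in prog:
        val = x if src == "x" else regs.get(src, src)
        if op == "MOV":
            regs[dst] = val
        elif op == "ADD":
            regs[dst] += val
        elif op == "MUL":
            regs[dst] *= val
    return regs["r0"]

def validate(virtual, machine, domain=range(-10, 11)):
    """Accept the translation only if the two programs agree on every
    input in the (finite) test domain."""
    return all(run_virtual(virtual, x) == run_machine(machine, x)
               for x in domain)

# 2*x + 3 in both representations
virtual = [("LOAD",), ("PUSH", 2), ("MUL",), ("PUSH", 3), ("ADD",)]
machine = [("MOV", "r0", "x"), ("MUL", "r0", 2), ("ADD", "r0", 3)]
```

The project's approach is strictly stronger than this: rather than sampling inputs, it produces a proof that covers all inputs, including the handling of incorrect behaviors already present in the virtual code.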

Agency: NSF | Branch: Continuing grant | Program: | Phase: Biological Anthropology | Award Amount: 179.18K | Year: 2017

Reconstructing activity patterns and locomotion based on fossil skeletal evidence is an important goal for understanding hominin evolution. However, the relationship between physical activity and bone structure is not fully understood, and such reconstructions require reference data from modern species. This project will investigate relationships between physical activity and bone structure, taking into consideration complicating factors such as age, sex, diet, and individual differences. The project will result in a mammalian reference dataset that will be relevant not only in biological anthropology but also in other research areas such as bone biology and sports medicine. The project will also provide valuable hands-on STEM research and training opportunities for undergraduate, graduate, and postdoctoral students, as well as public outreach activities involving local schools.

This project will investigate differences in habitual posture and range of motion in major limb joints, using micro-CT data and finite-element models of subchondral and trabecular structures and whole epiphyses in an ovine model. The investigators will define parameters of the hip, knee, and ankle joints that are most indicative of exercise-induced changes in joint loading patterns, and evaluate how bony response changes with distance from the joint surface. They will assess age-related differences and also test whether short-term exercise leads to significant changes in subchondral and trabecular stiffness and force transmission. The goal is to be able to reconstruct individual and group differences in joint and whole-limb posture and range of motion, which will provide the experimental basis for making functional inferences about joint loading conditions and movement patterns used by extinct hominins and other primates.

Agency: NSF | Branch: Standard Grant | Program: | Phase: Chemical Catalysis | Award Amount: 230.00K | Year: 2016

Some of the major objectives of chemical catalysis are to lower the energy input required to effect important chemical transformations and to reduce the amount of materials needed (solvents, reagents, etc.) to generate the desired end product. In this project, funded by the Chemical Catalysis Program of the Chemistry Division, Professor Scott E. Denmark is conducting research to improve an industrially relevant chemical process, the Water Gas Shift Reaction (WGSR), by using it to form carbon-carbon bonds in a variety of fundamental chemical reactions. The goal of this research project is to replace metal-containing reagents that generate large amounts of waste products with less polluting reagents. The WGSR is used on an enormous scale to generate hydrogen for high-volume chemical processes. The reaction is generally carried out by combining carbon monoxide and water over a solid catalyst to produce hydrogen and carbon dioxide. The research seeks to form new carbon-carbon bonds with fewer by-products and solvents. Professor Denmark's students identify the most efficient catalysts to enable the desired chemical reaction under mild reaction conditions. The project is designed to reduce waste in fine chemical manufacturing.
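For reference, the shift reaction described above is, in its standard textbook form:

```latex
\mathrm{CO} + \mathrm{H_2O} \;\rightleftharpoons\; \mathrm{CO_2} + \mathrm{H_2}
```

The project's aim, as described here, is to let this CO/H2O couple serve as the terminal reductant in carbon-carbon bond forming reactions, in place of a stoichiometric metal-containing reducing agent.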

The landmark report Sustainability in the Chemical Industry, published by the National Research Council in 2005, identified eight Grand Challenges, the first of which is Green and Sustainable Chemistry and Engineering. The report called for research to "identify appropriate solvents, control thermal conditions, and purify, recover, and formulate products that prevent waste and that are environmentally benign, economically viable, and generally support a better societal quality of life." Despite all of the research into improving the efficiency of the WGSR, the process is used exclusively for the generation of hydrogen using heterogeneous catalysis. However, outside of hydroformylation, no other carbon-carbon bond forming reactions have been developed that harness the reducing potential of the WGSR. This project identifies important carbon-carbon bond forming reactions that require stoichiometric amounts of a reducing agent and replaces that agent with the combination of water and carbon monoxide in the presence of a suitable catalyst or catalyst combination. This research is a good platform for training new scientists to think creatively.

Agency: NSF | Branch: Standard Grant | Program: | Phase: NSF Public Access Initiative | Award Amount: 59.99K | Year: 2016

This EAGER project addresses the urgent need to better understand the research community's Data Management Plan (DMP) requirements and, based on this understanding, provides an open software tool that helps investigators generate structured and machine-readable Data Management Plans that fulfill both the researcher's need to easily deliver a standardized set of information to the funder and the funder's need to analyze the information contained in DMPs. This allows funders to identify trends in data and software submission and repository use patterns, and to carry out other analyses that can assist in understanding community use patterns and needs. The Principal Investigators (PIs) will leverage an existing DMP Tool built for the geosciences community, initially assessing the DMPs not only of the geosciences but also of the biological and the social, behavioral, and economic sciences, and upgrading the DMP Tool to serve those communities as well. Ultimately, the team will work to determine whether this upgraded DMP Tool is extendable and scalable to all science, engineering, and educational research funded by the National Science Foundation. If successful, this will ultimately enhance the reproducibility and reuse of scientific research and help improve public access to supplementary scholarly products from federally funded research.

The National Science Foundation has required Data Management Plans (DMPs) for all grant proposals submitted for review since 2011. The DMPs submitted thus far are mostly free text and do not follow any specified format or structure; because of this, current DMPs are not easy to compare or analyze. Consistent, comprehensive, structured, and machine-readable DMPs may substantially advance understanding of the data management landscape and address gaps that will improve data access, which will lead to enhanced re-use and more reproducible science. The PIs propose to modify and upgrade the DMP Tool that has been developed and operated by IEDA (Interdisciplinary Earth Data Alliance) to serve the broadest research communities possible. This DMP Tool gathers relevant data management planning information from investigators in a structured manner into a relational database that can be mined and analyzed. As part of this project, they will analyze the information from the more than 1,350 DMPs that have already been generated with the IEDA DMP Tool to understand gaps, successes, and patterns of use. They will initially focus on the DMP requirements of science communities funded by the National Science Foundation's GEO, BIO, and SBE directorates, and use the results of this research to guide the development and prototyping of the extended version of the IEDA DMP Tool. They will subsequently focus on other directorates.
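A structured, machine-readable DMP of the kind described can be sketched as a small JSON record plus a validator. The field names below are invented for illustration; they are not the IEDA DMP Tool's actual schema.

```python
import json

# Sketch of a machine-readable DMP record with a minimal validator
# (field names are hypothetical, chosen only to illustrate the idea).
REQUIRED_FIELDS = {
    "award_id": str,
    "data_types": list,       # e.g. ["imagery", "tabular"]
    "repositories": list,     # where data and software will be deposited
    "sharing_timeline": str,  # e.g. "within 2 years of collection"
}

def validate_dmp(record):
    """Return a list of problems; an empty list means the record is
    structured enough to support cross-award analysis."""
    problems = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in record:
            problems.append(f"missing field: {field}")
        elif not isinstance(record[field], expected_type):
            problems.append(f"wrong type for {field}")
    return problems

dmp = json.loads("""{
    "award_id": "NSF-0000000",
    "data_types": ["imagery"],
    "repositories": ["IEDA"],
    "sharing_timeline": "within 2 years of collection"
}""")
```

Because every record shares the same fields, trend questions ("which repositories are most used?") become simple database queries rather than free-text mining, which is the analytical benefit the abstract describes.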

Agency: NSF | Branch: Standard Grant | Program: | Phase: GEOMETRIC ANALYSIS | Award Amount: 95.73K | Year: 2014

This project studies connections between different areas of geometry that are inspired by ideas from theoretical physics. An important idea in theoretical physics is the notion of a duality between physical theories. A consequence of such a duality is a deep connection between the mathematical models that describe those theories. These connections allow us to look at a mathematical question in one model from another perspective, and thus derive new results. For this project, the mathematical models come from algebraic geometry (the geometry of sets defined by polynomial equations) and representation theory (the study of symmetry) on the one hand, and symplectic geometry (the geometry of the phase spaces of classical mechanics) on the other. The duality relating them is known as homological mirror symmetry. The focus of this project is to mine this connection for new insights into structures arising on each side of the duality. One part of the project studies how symmetries arise in symplectic geometry, and how this new perspective can give insights into representation theory. Another part studies the relationship between dynamics in symplectic geometry and one of the most basic objects in algebraic geometry, namely functions. The project also supports the training of graduate and early-career mathematicians in this rapidly-developing area of research. More broadly, this project fits into the ongoing interaction between mathematics and physics that has, over the centuries, led to theoretical advances that have enabled the transformative technologies of our time.

The organizing principle for this project is homological mirror symmetry for log Calabi-Yau varieties (varieties arising as the complement of an anticanonical divisor in a compactification). We consider such varieties on both the algebraic and the symplectic sides of the correspondence. On the symplectic side, the main structure we consider is symplectic cohomology, an algebraic structure that is built out of periodic orbits of certain Hamiltonian flows on a symplectic manifold (hence the connection to dynamics). The heart of the project is to relate this structure to functions and vector fields on the mirror algebraic variety. The connection to representation theory appears by considering the flag variety of a semisimple algebraic group G on the algebraic side. Flag varieties are not log Calabi-Yau, but they contain open subsets which are, and part of the project is to understand better how to pass between the two situations (this involves considering a potential function on the symplectic side). The symmetries of the flag variety, namely the group G and its Lie algebra, should appear on the symplectic side as well. The natural home for the Lie algebra is symplectic cohomology, and the group action itself is manifested in the action of this Lie algebra on the Floer cohomology of equivariant Lagrangian submanifolds, which are the counterpart of equivariant vector bundles in algebraic geometry. Ultimately, one expects to obtain representations of G in the Lagrangian Floer cohomology groups. The pay-off for this effort is that these Floer cohomology groups come with a distinguished basis, and this project seeks to understand how that basis is related to the various known canonical bases in representation theory due to Lusztig, Mirkovic-Vilonen, and others.
In approaching these problems, the project uses ideas from the Strominger-Yau-Zaslow approach to mirror symmetry, as developed by Gross-Siebert and Gross-Hacking-Keel, as well as techniques developed by the PI in previous work on the case of log Calabi-Yau surfaces (complex dimension two).
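For readers outside the area, the definition invoked above can be stated compactly: a log Calabi-Yau variety is

```latex
U = X \setminus D, \qquad D \in \lvert -K_X \rvert,
```

where $X$ is a (projective) compactification and $D$ an anticanonical divisor. In the flag-variety discussion above, the flag variety of the group $G$ is not itself of this form, but it contains open subsets $U$ that are.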

Agency: NSF | Branch: Standard Grant | Program: | Phase: CONDENSED MATTER & MAT THEORY | Award Amount: 413.45K | Year: 2014

So Hirata of the University of Illinois at Urbana-Champaign is supported by an award from the Chemical Theory, Models and Computational Methods program in the Chemistry Division, the Condensed Matter and Materials Theory program in the Division of Materials Research, and the Computational and Data-enabled Science and Engineering Program (CDS&E) to develop computational approaches and software for the study of molecular crystals. Molecular crystals are a large, important class of solids that consist of well-defined molecular units bound by weak interactions. They include nature's most abundant and important solids, such as the ices of the atmospheric species of Earth and other planets. Synthetic chemists can fashion molecules that aggregate into superstructures, which, if crystalline, are also molecular crystals. Some explosives and many drugs fall into this category. Some molecular crystals display optical and electronic properties that make them suitable for optoelectronic devices such as solar cells. The goal of this project is to develop a general computational method for molecular crystals and related ionic crystals as well as organic molecular superconductors. The principal investigator and his coworkers develop software to predict the structure, optical and thermal properties, and phase behavior of organic crystalline solids with unprecedented accuracy and applications in high-pressure chemistry, geochemistry, planetary science, and materials science. This research activity involves innovative education in physical chemistry. A series of physical chemistry lectures is recorded and made available online with a matching set of problems, freeing all face-to-face classroom hours for problem solving, students' explanations of solutions, and discussions.

The energy of a molecular crystal is approximated as a sum of the energies of its constituent fragments embedded in the self-consistently determined electrostatic environment of the crystal. The fragment energies are, in turn, evaluated by sophisticated molecular ab initio electronic structure methods. This allows an accurate calculation of a variety of properties of solids at finite temperature and pressure (structure, equation of state, infrared, Raman, and inelastic neutron scattering spectra, heat capacity, enthalpy, Gibbs energy) at such high levels of fidelity as second- and higher-order perturbation theory or coupled-cluster theory. This project implements this method in robust and well-documented software that exploits the method's natural parallelism and makes it available to the broader scientific community. Furthermore, the project extends this method to energy bands and ionic crystals as well as organic molecular (super)conductors. The underlying idea that enables these calculations is the linear-combination-of-molecular-orbital (LCMO) crystal-orbital theory, a coarse-grained extension of the LCAO molecular-orbital concept, which has dominated computational quantum chemistry since its inception. By expanding the wave function of an organic molecular crystal, for instance, as a linear combination of its charge configurations, which, in turn, are treated by the aforementioned embedded-fragmentation scheme, this method describes charge transfer between constituent molecular units in these solids and thus charge density waves, spin density waves, and metallic as well as possibly superconducting states.
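The embedded-fragment idea can be sketched as a many-body expansion truncated at two-body terms. In the sketch below, `energy` is a stand-in pair-potential "calculator" rather than an ab initio method, and fragments are 1-D coordinates; all numbers are purely illustrative. For a pair-additive model the two-body truncation happens to be exact, which makes the sketch easy to check.

```python
import itertools

# Many-body expansion truncated at two-body terms (illustrative).
def energy(fragments):
    """Placeholder cluster energy: fake monomer terms plus a fake
    distance-dependent pair interaction (stands in for an embedded
    ab initio fragment calculation)."""
    e = -1.0 * len(fragments)
    for a, b in itertools.combinations(fragments, 2):
        e += -0.1 / (1.0 + (a - b) ** 2)
    return e

def fragment_energy(fragments):
    """Sum of monomer energies plus pairwise corrections:
    E ~ sum_i E_i + sum_{i<j} (E_ij - E_i - E_j)."""
    one = [energy([f]) for f in fragments]
    total = sum(one)
    for (i, a), (j, b) in itertools.combinations(enumerate(fragments), 2):
        total += energy([a, b]) - one[i] - one[j]
    return total

crystal = [0.0, 1.0, 2.0, 3.0]   # toy 1-D lattice of molecular units
```

In the real method each fragment evaluation is a full electronic structure calculation in the crystal's self-consistent electrostatic field, higher-order terms can matter, and the per-fragment independence of the sums is the source of the "natural parallelism" mentioned above.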

Agency: NSF | Branch: Standard Grant | Program: | Phase: SBIR Outreach & Tech. Assist | Award Amount: 100.00K | Year: 2015

This project proposes to increase, within the NSF's broadening participation initiative, the number and quality of SBIR/STTR proposals submitted to the NSF by entrepreneurs from the Central Illinois Region, with a specific emphasis on women scientists, who tend to be underrepresented in many STEM fields (National Science Foundation, 2013). The proposal also seeks to leverage regional diversity knowledge, experience, efforts, and programs to attract a strong cadre of participants and support their successful completion of the program. Members of other underrepresented groups will be a secondary, but no less important, focus of the targeted efforts. The ultimate, long-term goal is to foster broad-based, sustainable economic development in the region. Currently, as with most engineering and entrepreneurial programs, women-led teams comprise a smaller-than-desired percentage of participants. Through the programs outlined in this proposal, the team plans to develop a model that can be used in central Illinois and elsewhere to build a pipeline of future women entrepreneurs and entrepreneurs from other underrepresented groups, to train current entrepreneurs, and to support existing entrepreneurs.

The proposed project takes a new approach to encouraging the participation of female entrepreneurs and entrepreneurs from other underrepresented groups: providing resources that draw those from underrepresented groups into the available entrepreneurial ecosystem and then surrounding them with support. Such resources include making available an entrepreneur-in-residence familiar with the needs of those from underrepresented groups and providing small proof-of-concept grants to qualified teams. Once teams are in the system, they will be supported through training and a curriculum that introduces the skills needed not only to submit a high-quality SBIR proposal but also to launch and execute a successful STEM-related new venture. The skills that teams will be introduced to during the courses include product development, evaluating opportunities, and putting together a successful application.

Agency: NSF | Branch: Standard Grant | Program: | Phase: ELECT, PHOTONICS, & MAG DEVICE | Award Amount: 369.74K | Year: 2015

Non-Technical: Semiconductor devices are ubiquitous. They represent a multi-trillion dollar industry. Ordinary objects such as electrical outlets, thermostats, and blood pressure/heart rate monitors are being embedded with increasingly complex control electronics, sensors, and network connectivity to enable greater functionality, value, and service. Although foundry services exist for large-volume manufacturing of microelectronics for "smart" objects, there are no cheap (<$100/run), rapid-turnaround (<1 hr) options for garage inventors to prototype new ideas. The biggest hurdles are that conventional microfabrication requires a cleanroom and expensive (>$1M) equipment. This NSF project seeks to democratize semiconductor manufacturing by investigating a new fabrication paradigm in which pulses of light of specific colors catalyze electrochemical reactions that dope, etch, and metallize designated circuit patterns onto a semiconductor wafer with high resolution. The project offers rich opportunities for high school and community college teachers to participate in research and develop teaching modules for hands-on labs through Research Experiences for Teachers projects. Research and teaching will be integrated through the PI's "Principles of Experimental Research" course. Graduate and undergraduate students will be trained in semiconductor micro- and nano-fabrication, photonics, optical system design, fluid mechanics, and bio-sensors through the proposed research activities. Recruitment, retention, and participation of students from underrepresented groups will be addressed through Research Experiences for Undergraduates internships and engineering summer camps for 9th-12th grade girls. Results from both research and teaching will be widely disseminated in journals and conferences to enhance the current understanding of photoelectrochemical processing and of engineering education/outreach methodologies.
Technical: Photochemical etching uses light to generate minority carriers that catalyze semiconductor wet etching. Recently, the PI's team implemented photochemical etching using a projector. The local etch rate was controlled using color images drawn in PowerPoint. Here, the team seeks to drastically improve the etch resolution and anisotropy and to expand the method to enable new types of light-controlled processes, e.g., patterned doping and metallization, so that new classes of unconventional photonic devices and multifunctional integrated circuits can be fabricated in a single system. In the proposed system, a super-continuum laser, tunable filter, and spatial light modulator will generate high-intensity, spectrally engineered dynamic image pulses, and a synchronized electrical pulse generator will temporally gate the chemical reactions. If successful, this project is potentially transformative because it could create a new semiconductor fabrication paradigm, for several reasons. First, multiple processing steps, e.g., doping, etching, and metallization, can be performed sequentially in the same system. Second, these processes can be easily aligned to features made through conventional cleanroom processing, since the illumination pattern can be adjusted in software. Moreover, this dynamic illumination capability enables new designs to be rapidly prototyped. Next, the processing rate for different bandgap materials can be individually adjusted. Finally, the limitations imposed by conventional planar fabrication technology can be removed, and complex 3D devices can be fabricated with precisely controlled dimensions. The overall research goals of this project are to:
1. Understand how spectral and temporal gating affects the resolution, anisotropy, photo-induced selectivity (e.g. light on vs. off), and material selectivity (e.g. GaAs vs. AlGaAs) of the etch;
2. Develop photo-induced electroplating and doping techniques; and
3. Fabricate unconventional devices with complex topography.
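As a cartoon of the "projected image as process recipe" idea behind the goals above, the sketch below maps a light-intensity pattern to an etch-depth map, folding the synchronized electrical pulse into a duty cycle. The rate constant and duty cycle are invented numbers; real photochemical etch kinetics are far more complex.

```python
import numpy as np

# Cartoon of light-patterned etching: local etch rate proportional to
# projected intensity (minority-carrier generation), gated by the duty
# cycle of a synchronized electrical pulse. All constants are invented.
RATE_PER_INTENSITY = 0.5   # um/s etch rate at unit intensity (assumed)
DUTY_CYCLE = 0.25          # fraction of time the electrical gate is on

def etch_depth(intensity_image, seconds):
    """Etch-depth map (um) after projecting `intensity_image` (values
    clipped to [0, 1]) onto the wafer for `seconds` with temporal gating."""
    intensity = np.clip(np.asarray(intensity_image, dtype=float), 0.0, 1.0)
    return RATE_PER_INTENSITY * DUTY_CYCLE * seconds * intensity

pattern = np.zeros((4, 4))
pattern[1:3, 1:3] = 1.0          # bright square: region to be etched
depth = etch_depth(pattern, seconds=60.0)
```

Because the "mask" is just an array, a new design is a software change, which is the rapid-prototyping advantage the abstract emphasizes; wavelength-dependent material selectivity (e.g., GaAs vs. AlGaAs) would enter as a per-material rate constant.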

University of Illinois at Urbana - Champaign | Date: 2015-07-01

The present invention provides stretchable, and optionally printable, semiconductors and electronic circuits capable of providing good performance when stretched, compressed, flexed or otherwise deformed. Stretchable semiconductors and electronic circuits of the present invention preferred for some applications are flexible, in addition to being stretchable, and thus are capable of significant elongation, flexing, bending or other deformation along one or more axes. Further, stretchable semiconductors and electronic circuits of the present invention may be adapted to a wide range of device configurations to provide fully flexible electronic and optoelectronic devices.

University of Illinois at Urbana - Champaign | Date: 2015-04-14

A method of performing a chemical reaction includes reacting a compound selected from the group consisting of an organohalide and an organo-pseudohalide, and a protected organoboronic acid represented by formula (I) in a reaction mixture: R^(1)-B-T (I); where R^(1) represents an organic group, T represents a conformationally rigid protecting group, and B represents boron having sp^(3) hybridization. When unprotected, the corresponding organoboronic acid is unstable by the boronic acid neat stability test. The reaction mixture further includes a base having a pK_(B) of at least 1 and a palladium catalyst. The method further includes forming a cross-coupled product in the reaction mixture.

University of Illinois at Urbana - Champaign | Date: 2014-02-05

The present invention provides a method for preparing a nanoassembly that includes the step of reacting the assembly template with at least one nanomaterial to form the nanoassembly using a bifunctional linker.

University of Illinois at Urbana - Champaign | Date: 2015-04-03

A system executes efficient computational methods for high quality image reconstructions from a relatively small number of noisy (or degraded) sensor imaging measurements or scans. The system includes a processing device and instructions. The processing device executes the instructions to employ transform learning as a regularizer for solving inverse problems when reconstructing an image from the imaging measurements, the instructions executable to: adapt a transform model to a first set of image patches of a first set of images containing at least a first image, to model the first set of image patches as sparse in a transform domain while allowing deviation from perfect sparsity; reconstruct a second image by minimizing an optimization objective comprising a transform-based regularizer that employs the transform model, and a data fidelity term formed using the imaging measurements; and store the second image in the computer-readable medium, the second image displayable on a display device.
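The alternation at the heart of the claim can be sketched in one dimension. This sketch departs from the patent in two deliberate ways: it uses a fixed orthonormal DCT rather than a transform adapted to image patches, and it treats denoising (identity measurement operator) rather than a general inverse problem, so it is only a structural illustration.

```python
import numpy as np

# 1-D sketch of transform-sparsity regularized reconstruction.
def dct_matrix(n):
    """Orthonormal DCT-II matrix, used here as the sparsifying transform."""
    k = np.arange(n)
    W = np.cos(np.pi * k[:, None] * (2 * k[None, :] + 1) / (2 * n))
    W[0] /= np.sqrt(2)
    return W * np.sqrt(2.0 / n)

def reconstruct(y, lam=0.5, iters=20, thresh=0.3):
    """Alternate (a) hard-thresholded sparse coding in the transform
    domain with (b) a closed-form data-fidelity update toward y."""
    W = dct_matrix(len(y))
    x = y.copy()
    for _ in range(iters):
        z = W @ x
        z[np.abs(z) < thresh] = 0.0   # sparse, with deviation allowed
        # minimizer of ||x - y||^2 + lam * ||W x - z||^2  (W^T W = I)
        x = (y + lam * (W.T @ z)) / (1.0 + lam)
    return x

rng = np.random.default_rng(1)
clean = np.sin(np.linspace(0.0, 2.0 * np.pi, 64))
noisy = clean + 0.1 * rng.standard_normal(64)
denoised = reconstruct(noisy)
```

In the patented system the data-fidelity term is built from the actual sensor measurements and the transform model itself is adapted to the patches; the alternating minimization structure is the same.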

University of Illinois at Urbana - Champaign | Date: 2014-02-28

MdMYB3 nucleic acids and polypeptides are provided for use in modulating the accumulation of anthocyanins and flavonols as well as the length of styles and peduncles of transgenic flowers.

Northwestern University and University of Illinois at Urbana - Champaign | Date: 2015-04-02

Provided herein are kits, compositions, and methods for diagnosing and treating interstitial cystitis (IC) and/or interstitial cystitis/bladder pain syndrome (IC/BPS) based on finding lower levels of certain bacteria in a subject's stool sample (e.g., O. splanchnicus, F. prausnitzii, C. aerofaciens, E. sinensis, L. longoviformis, and R. intestinalis). In certain embodiments, the present invention provides probiotic formulations containing live bacteria (e.g., from O. splanchnicus, F. prausnitzii, C. aerofaciens, E. sinensis, L. longoviformis, and R. intestinalis).

University of Illinois at Urbana - Champaign | Date: 2014-02-11

A leadpipe for a musical instrument comprising: a sidewall defining an enclosed channel, the sidewall having an upstream end and a downstream end; wherein the sidewall is comprised of a plurality of sections including a first tapered section and at least one subsequent tapered section; wherein each of the sections extends longitudinally between the upstream end and the downstream end of the sidewall; wherein the first tapered section is disposed upstream of the at least one subsequent tapered sections; wherein the first tapered section and at least one subsequent tapered section of the sidewall each define a generally frustoconical cavity having an upstream inner diameter Df and a downstream inner diameter Ds; and wherein the downstream diameter of each of the tapered sections that defines a generally frustoconical cavity is larger than the upstream diameter. The leadpipe may produce greater intensity of overtones and improved sound quality.
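The claimed geometry, consecutive frustoconical sections each expanding downstream, can be sketched as a simple bore-profile calculation; the section lengths and diameters below are invented for illustration.

```python
# Geometry sketch of the multi-section leadpipe bore described in the
# claim: each section is a frustum whose downstream inner diameter
# exceeds its upstream inner diameter. Dimensions (mm) are invented.

def bore_profile(sections, samples_per_section=4):
    """Return (axial position, inner diameter) pairs along the leadpipe.
    `sections` is a list of (length, d_upstream, d_downstream) tuples."""
    points, x0 = [], 0.0
    for length, d_up, d_down in sections:
        assert d_down > d_up, "each tapered section must expand downstream"
        for i in range(samples_per_section + 1):
            t = i / samples_per_section
            points.append((x0 + t * length, d_up + t * (d_down - d_up)))
        x0 += length
    return points

# two tapered sections sharing a diameter at their joint
profile = bore_profile([(60.0, 9.0, 10.0), (120.0, 10.0, 12.0)])
```

The check that the diameter never decreases along the axis is exactly the property the claim language encodes (each section's downstream diameter larger than its upstream diameter, with sections meeting at shared diameters).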

University of Illinois at Urbana - Champaign | Date: 2014-01-16

This invention provides processing steps, methods and materials strategies for making patterns of structures for integrated electronic devices and systems. Processing methods of the present invention are capable of making micro- and nano-scale structures, such as Dual Damascene profiles, recessed features and interconnect structures, having non-uniform cross-sectional geometries useful for establishing electrical contact between device components of an electronic device. The present invention provides device fabrication methods and processing strategies using sub pixel-voting lithographic patterning of a single layer of photoresist useful for fabricating and integrating multilevel interconnect structures for high performance electronic or opto-electronic devices, particularly useful for Very Large Scale Integrated (VLSI) and Ultra Large Scale Integrated (ULSI) devices. Processing methods of the present invention are complementary to conventional microfabrication and nanofabrication methods for making integrated electronics, and can be effectively integrated into existing photolithographic, etching, and thin film deposition patterning systems, processes and infrastructure.

University of Illinois at Urbana - Champaign | Date: 2015-04-07

Vaccines and compositions containing an attenuated L. monocytogenes prsA2 htrA deletion mutant for use in the presentation of foreign or exogenous antigens and in the treatment or prevention of diseases such as cancer or infectious disease are provided.

University of Illinois at Urbana - Champaign | Date: 2016-04-07

A tubular resonant filter comprises a multilayer sheet in a rolled configuration comprising multiple turns about a longitudinal axis, where the multilayer sheet includes a strain-relieved layer, a patterned first conductive layer on the strain-relieved layer, an insulating layer on the patterned first conductive layer, and a patterned second conductive layer on the insulating layer and the patterned first conductive layer. The patterned first and second conductive layers and the insulating layer are interrelated to form a rolled-up inductor connected to a rolled-up capacitor on the strain-relieved layer.

Agency: NSF | Branch: Continuing grant | Program: | Phase: MODULATION | Award Amount: 460.00K | Year: 2013

A key challenge in biology is to understand the link between brain function and behavior. Knowledge of brain functional dynamics is especially important for understanding how experience shapes the brain. This project couples state-of-the-art techniques to measure and mathematically model brain metabolic activity, using the honey bee as a model system and aggression as the focal behavior. Experiments will determine whether experimentally induced changes in brain metabolism cause changes in aggression, which is what is expected based on previously collected correlative data. Additional experiments will determine in which parts of the brain the aggression-related changes in metabolic activity occur. This new information will then be used to build a comprehensive model of the bee brain's metabolic network, which will be validated with experimental work.

This collaborative project will provide integrative training for undergraduate students, graduate students, and post-doctoral associates, and members of the research team will give public presentations, in venues ranging from programs for retired people to K-12 settings, on the new insights into brain function gained by this project. The Institute for Genomic Biology (IGB) at the University of Illinois at Urbana-Champaign will be the facility to store and back up project data, which include mainly RNA sequence-based bioinformatic and metabolomic datasets.

Agency: NSF | Branch: Continuing grant | Program: | Phase: OFFICE OF MULTIDISCIPLINARY AC | Award Amount: 2.38M | Year: 2013

Computer simulation plays a central role in helping us understand, predict, and engineer the physical and chemical properties of technological materials systems such as semiconductor devices, photovoltaic systems, chemical reactions and catalytic behavior. Despite significant progress in performing realistic simulations in full microscopic detail, some problems are currently out of reach: two examples are the modeling of electronic devices with multiple functional parts based on new materials such as novel low power computer switches that would revolutionize the Information Technology industry, and the photovoltaic activity of complex interfaces between polymers and inorganic nanostructures that would enhance US energy self-reliance. The research program of this collaborative software institute aims to create an open and effective scientific software package that can make efficient use of cutting-edge high performance computers (HPC) to solve challenging problems involving the physics and chemistry of materials. By having such software available, this software initiative will have multiple broad impacts. First, the community of materials scientists will be able to study next-generation problems in materials physics and chemistry, and computer science advances that enable the software will be demonstrated and made accessible for both communities, which will help cross-fertilize further such collaborative efforts. Second, the capability of simulating and engineering more complex materials systems and technological devices could play a role in helping the US continue its competitive edge in science, technology, and education.
Third, through training of young scientists, direct outreach to the broader scientific community through workshops and conferences, and educational programs ranging from secondary to graduate levels, the power, importance, and capabilities of computational modeling, materials science, and computer science methodologies that enable the science will be communicated to a broad audience. Finally, by enabling the refinement of existing materials systems as well as discovery of new materials systems, the resulting scientific advances can help broadly impact society via technological improvements: in terms of the two examples provided above, (a) the successful design of new electronic device paradigms helps significantly advance the digital revolution by permitting the introduction of smaller, more efficient, and more capable electronic circuits and information processing systems, and (b) successful creation of inexpensive, easy-to-fabricate, and durable photovoltaic materials and devices can lead to cleaner forms of energy production while reducing reliance on fossil fuels.

The technical goal is to greatly enhance the open software tool OPENATOM to advance discovery in nanoscience and technology. OPENATOM will be delivered as an open, robust and validated software package capable of utilizing HPC architectures efficiently to describe the electronic structure of complex materials systems from first principles. In terms of describing electronic ground-states, OPENATOM will be enhanced by features such as improved configurational sampling methods, hybrid density functionals, and incorporation of fast super-soft pseudopotential techniques. In addition, the team will incorporate the many-body GW-BSE approach for electronic excitations that permits accurate computation of electronic energy levels, optical absorption and emission, and luminescence. Ultimately, such an extensible software framework will permit accurate electronic structure computations to effectively employ future HPC platforms with 10,000,000 cores.

Agency: NSF | Branch: Standard Grant | Program: | Phase: ELECT, PHOTONICS, & MAG DEVICE | Award Amount: 350.00K | Year: 2015

Title: Coupling Multiple Semiconductor Lasers in a Single Chip for Applications of Fast Optical Data Transmission or High Power Operation

Semiconductor lasers have an increasingly significant impact on many aspects of our daily lives, as well as the economy and security of our nation. The internet relies upon tiny lasers to transmit information originating from our smart phones and computers through optical fiber to destinations around the world, while the low-cost manufacture of automobiles, planes, and consumer goods requires lasers for precision cutting and welding. To continue the expansion and improve the performance of the internet, or to further reduce manufacturing costs, improvements in the properties of semiconductor lasers are necessary. Conventional approaches to further improvement are not obvious: for both high-speed data transmission and high-power laser light output, the laser manufacturing industry is near the limits of performance, in technology areas that are accustomed to an order-of-magnitude increase every few years or less. Therefore new ideas and paradigms are needed to break through these performance barriers. The research of this project seeks to develop new semiconductor laser chips that will address faster optical transmission of data as well as increased laser output power. The ability to distribute greater amounts of digital data over optical fiber with orders of magnitude less electricity and at faster rates would be a key enabler for data centers, a need identified in a recent National Academy of Engineering report as a critical U.S. challenge. High power lasers will also lead to more reliable manufacturing processes and lower cost production, or more compact laser sources for display applications. Finally, the development of high brightness lasers could also enable a new generation of directed energy weapons for enhanced U.S. security.
This research program incorporates design, simulation, fabrication, and characterization of semiconductor lasers to challenge and educate a diverse group of undergraduate and graduate students in electrical engineering in both classroom and laboratory experiences.

The approach of this research is to develop multiple lasers within a single semiconductor chip to act together in a coherently coupled manner. Specifically, the light wavelength and phase of each of the lasers in the array will be controlled in such a manner that all of the laser beams are combined together coherently. The coherent combination of the lasers does not simply result in the addition of the beams, but in fact the overall output light intensity increases as the square of the number of lasers in the array. Moreover, the control of the phase of each of the lasers in an array with all of the beams coherently combined can produce a significant increase of the modulation rate (the rate of turning the laser light brighter and dimmer) for digital transmission applications. The key aspect to control the light wavelength and phase of each laser element of the array is to independently electrically contact each laser diode. The intellectual merit of this research originates from the control and manipulation of multiple quantum mechanical optical oscillators. The fabrication approaches that are used in this research project are the same as those presently employed to manufacture individual semiconductor lasers, and thus can be transferred to the laser manufacturing industry in the United States. Furthermore and perhaps most importantly, the students involved in this research will be prepared for future scientific and engineering careers in the U. S. photonics industry.
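
The quadratic scaling claimed above is easy to verify numerically; the equal-amplitude, perfectly in-phase fields below are an idealization of the coherently coupled array:

```python
import numpy as np

N = 8                                   # number of lasers in the array
E = np.ones(N, dtype=complex)           # equal-amplitude fields, all in phase

I_single = abs(E[0])**2                 # intensity of one laser
I_coherent = abs(E.sum())**2            # fields add, then square: N^2 scaling
I_incoherent = (abs(E)**2).sum()        # intensities add: N scaling

print(I_coherent / I_single)            # 64.0 -> N^2
print(I_incoherent / I_single)          # 8.0  -> N
```

Any phase error across the array degrades the coherent sum toward the incoherent N scaling, which is why the per-element electrical contacts for phase control are central to the approach.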

Agency: NSF | Branch: Standard Grant | Program: | Phase: Secure &Trustworthy Cyberspace | Award Amount: 495.19K | Year: 2015

Differential Privacy has emerged as a well-grounded approach to balancing personal privacy with societal and commercial use of data. The basic idea is to add random noise to analysis results sufficient to obscure the impact of any single individual's data on the analysis, thus protecting individual privacy. While general approaches to providing differential privacy exist, in many cases the bounds are not tight; more noise is added than needed. This project uses information theoretic techniques to explore the fundamental privacy/accuracy tradeoffs in differential privacy. The success of the proposed research will make progress towards a safer and more secure nation where respect for individuals' privacy is not compromised. The proposed research is strongly integrated with an education plan that aims to develop a new graduate level course on algorithmic foundations of privacy.
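
The noise-adding idea can be made concrete with the textbook Laplace mechanism (a standard construction, not this project's contribution); the query, sensitivity, and privacy parameter epsilon below are arbitrary illustrative choices:

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """Release true_value + Laplace noise with scale sensitivity/epsilon,
    which achieves epsilon-differential privacy for a single query."""
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

# Example: privately release a count over a database of individuals.
# A counting query has sensitivity 1: adding or removing one person
# changes the count by at most 1.
rng = np.random.default_rng(0)
exact_count = 1000
private_count = laplace_mechanism(exact_count, sensitivity=1.0, epsilon=0.5, rng=rng)
```

Larger epsilon means less noise and weaker privacy; the tradeoff the project studies is precisely how little noise (utility loss) suffices for a given privacy guarantee.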

This project will investigate several topics: (1) characterizing the fundamental tradeoffs between the privacy guarantee and the utility of the released data, by applying information theoretic tools and methods to identify tight bounds on achieving differential privacy; (2) designing data privatization mechanisms for individuals that achieve both computational efficiency and the optimal tradeoffs between utility and privacy; and (3) providing a privacy calculus for macroscopic analyses of complex data processing systems, consisting of various components each with its own privacy guarantees. The privacy calculus aims to provide new representations and computational tools for characterizing how privacy components interact in a large system, analogous to how network calculus allows researchers to characterize complex non-linear communication systems using familiar tools from linear systems.

Agency: NSF | Branch: Standard Grant | Program: | Phase: GRANT OPP FOR ACAD LIA W/INDUS | Award Amount: 300.00K | Year: 2014

Historically the microelectronics industry has been driving integrated circuit technology development, shrinking silicon transistor sizes, reducing their power consumption and lowering their cost. For strategic electronic applications like satellite technologies, advanced materials development is essential. Single Wall Carbon Nanotubes have emerged as an important material candidate for these applications. The industry believes that electronic devices fabricated using nanotubes may eventually replace silicon-based devices for logic applications; but it is also well known that nanotube-based radiofrequency transistors are important as high-performance analog components in wireless systems. Key to the advance of high performance is the availability of aligned films of semiconducting nanotubes with high purity (>99.999 percent semiconducting nanotubes) at the wafer scale. Scaling-up electronic purification of nanotubes is the key challenge. Recently the Illinois team developed a path to purification for as-grown aligned nanotubes at levels that meet these daunting requirements. This Grant Opportunity for Academic Liaison with Industry (GOALI) award will support the work needed to scale this purification process to the wafer-scale, and integrate it into the process of record for nanotube field effect transistors in collaboration with Northrop Grumman Corporation. This work will have a great societal benefit, impacting power consumption and performance in wireless technologies, and will serve as one of the first scalable nanomanufacturing development efforts in the area. The work will provide a training platform for future engineers in this new manufacturing paradigm. Moreover, this interdisciplinary effort will broaden participation of underrepresented groups in the research through use of the Summer Research Programs at Illinois and Northrop Grumman Corporation.

Aligned sub-monolayer films of single walled carbon nanotubes grown by chemical vapor deposition represent a promising materials platform for high-performance electronic applications. However, film electronic purity is a critical issue. Recently the Illinois team developed a path to purification for as-grown aligned materials based on utilization of a thermal resist which serves as an etch barrier for semiconducting nanotubes during processing. The process uses nanoscale thermocapillary flows in thin organic films as a processing strategy for complete, selective removal of metallic nanotubes from aligned arrays of single walled nanotubes. Compatibility with current microelectronics fabrication tools suggests it can serve as a scalable technique for nanotube substrate purification. Microwave excitation leads to efficient, selective removal of metallic tubes via this process, enabling 100 percent purity. The research team will scale up thermocapillary purification methods for wafer-scale manufacture and will introduce them into the transistor processing workflow at Northrop Grumman. A high volume microwave reactor will be designed, assembled, and developed. Ultimately high-performance, low-power, low noise linear carbon nanotube amplifiers will be fabricated at the wafer-scale.

Agency: NSF | Branch: Standard Grant | Program: | Phase: PETROLOGY AND GEOCHEMISTRY | Award Amount: 28.00K | Year: 2016

The Carlin-type gold deposits (CTGDs) of northern Nevada are the source of most of the gold production in the United States and are thus of major economic importance; however, their geologic origin is uncertain. Various models for deposit formation relate the gold mineralization to different geologic events which likely occurred at different times, including magmatic activity, or circulation of groundwater-derived fluids through deeper gold-bearing units in the crust. Because of the lack of easily dateable minerals in CTGDs, it has historically been challenging to determine when gold mineralization occurred. The result is that effective exploration is hampered by the lack of knowledge about how and when these deposits formed. In this study, a new method of dating these deposits will be refined and tested. If the ages of CTGD formation could be more definitively known, gold mineralization could be correlated to broader geologic processes that were occurring at the same time. This knowledge would enable scientists and explorers to more easily discover new deposits.

Available geochronological data suggest that Carlin-type mineralization broadly overlapped in space and time with the southwest sweep of Eocene magmatism through Nevada. Geo- and thermochronologic datasets are critical to identify direct evidence that magmatism and ore formation overlapped at individual deposits. Indeed, new chronometric tools could greatly expand knowledge of the genetic processes that lead to CTGD formation. No straightforward method currently exists to date CTGD mineralization. Recent studies have explored the use of the (U-Th)/He chronometer for Fe-oxides, but this chronometer has not been systematically tested in ore deposits. The proposed research has two major goals: to test feasibility of the Fe-oxide (U-Th)/He chronometer to date hydrothermal fluid flow, and to use this method to determine the age of hydrothermal activity in a CTGD in Nevada.
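
The (U-Th)/He system mentioned above rests on the standard helium ingrowth equation; the sketch below does a forward calculation and then solves it back for age by bisection. The decay constants are the standard values; the parent abundances are arbitrary, and real Fe-oxide work would add alpha-ejection and diffusive-loss corrections omitted here.

```python
import numpy as np

# Standard decay constants (1/yr) for the (U-Th)/He system.
L238, L235, L232 = 1.55125e-10, 9.8485e-10, 4.9475e-11
U235_PER_U238 = 1.0 / 137.88                 # natural isotopic ratio

def he_produced(t, u238, th232):
    """Alpha particles (He atoms) accumulated over t years from the
    238U, 235U, and 232Th decay chains (8, 7, and 6 alphas per chain)."""
    u235 = u238 * U235_PER_U238
    return (8 * u238 * (np.exp(L238 * t) - 1)
            + 7 * u235 * (np.exp(L235 * t) - 1)
            + 6 * th232 * (np.exp(L232 * t) - 1))

def he_age(he, u238, th232, lo=0.0, hi=4.5e9):
    """Solve the age equation for t by bisection (He is monotonic in t)."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if he_produced(mid, u238, th232) < he:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Round-trip check: a 40 Ma sample should yield a 40 Ma age.
he = he_produced(40e6, u238=1.0, th232=0.5)
age = he_age(he, u238=1.0, th232=0.5)
```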

Agency: NSF | Branch: Continuing grant | Program: | Phase: Molecular Biophysics | Award Amount: 655.23K | Year: 2014

The broader goal of this research is to understand how crowding and the weak association of proteins in the live cell are used by the cell to control its proteins. This project makes connections with many areas in physics, chemistry and cell biology. Thus, the undergraduate and graduate students supported by this research project come from many scientific backgrounds, and will acquire the knowledge and hands-on skills to advance US chemical technology and teaching in the area of molecular and cellular biophysics. Special efforts will also be made to disseminate the instrumentation technology resulting from this project, and to create strong ties to researchers in the US and abroad. The project will thus have impacts in the biotechnology and instrumentation industries.

The objective of this research project is to look inside single cells to see how their crowded environment affects the proteins and nucleic acids that keep the cell alive. Specific tasks are to find out how protein folding and movement inside cells interact, how the cell cycle controls protein folding, and how components of the spliceosome form inside cells from proteins and nucleic acids. The spliceosome cuts and rejoins RNA transcripts so mature messenger RNA can be manufactured for making proteins. Humans could not live without it, yet we know virtually nothing about the biophysics of how its many protein and RNA pieces are assembled to do the job. As a final important task, the PI will look at an important industrial method that borrows the crowding idea from cells. So-called PEGylation enhances protein longevity by decorating proteins with stabilizing polymers, but the biophysical mechanism of stabilization remains unknown.

Agency: NSF | Branch: Continuing grant | Program: | Phase: INDUSTRY/UNIV COOP RES CENTERS | Award Amount: 188.00K | Year: 2014

The I/UCRC for Novel High-Voltage/Temperature Materials and Structures plans to work jointly with electric utility, aerospace, nuclear, military, environmental, automotive, health, and other industries with needs of novel HV/T materials and structures. The objectives of the Center are: (1) Design of novel and evaluation of existing HV/T energy transmission/transfer multifunctional materials for next generation composite conductors, insulators, underground cables, towers, and other electric power transmission structures; (2) Design and development of novel advanced high energy transfer materials for aerospace, oil/gas, automotive, and other industrial applications; (3) Failure prediction and prevention of HV/T materials and structures under in-service conditions through state-of-the-art multi-scale modeling and material performance evaluations; (4) Development of new failure monitoring techniques and material repair methods in HV/T materials under laboratory conditions and their subsequent transfer to in-service inspection and repair.

The proposed I/UCRC will seek to create a diverse and interdisciplinary educational, research and business environment for (1) undergraduate and graduate students, including those from underrepresented groups, funded by the research projects of the Center; (2) faculty members from a variety of disciplines, including junior faculty starting their academic careers; (3) utility, aerospace and national lab engineers and designers developing various types of HV/T materials and structures; and (4) utility managers supervising HV transmission lines across the country. The Center intends to enhance the reputation of the U.S. HV/T manufacturing around the world and, in particular, improve the level of confidence among the potential users of novel HV/T structures. The center targets long-term benefit to infrastructure, manufacturing, energy transport and efficiency of the electric grid, and the durability of other HV/T and high energy transfer structures.

Agency: NSF | Branch: Cooperative Agreement | Program: | Phase: FLUID DYNAMICS | Award Amount: 125.63M | Year: 2011

XSEDE: Enabling New Digital Science
The eXtreme Science and Engineering Discovery Environment (XSEDE) partnership will develop an unprecedented, comprehensive advanced digital services cyberinfrastructure (CI) to enable transformative open science and engineering research and innovative training and educational programs. The goal of XSEDE is to offer users tremendous capabilities with maximum productivity, enabling them to advance and share knowledge across domains. The XSEDE architecture, engineering, operations, support, and education activities are co-designed by an unparalleled team to achieve this goal, far surpassing TeraGrid in usability, reliability, capability, performance, and security, and ultimately in user productivity and science impact. XSEDE will enable scientists, engineers, and educators to exploit powerful digital services and social networking environments to support knowledge exchange and advance understanding across domains. Just a few examples of the advances to science and society include: accurately predicting earthquake damage to urban structures; modeling of protein and nucleic acid folding and structure prediction to understand how drugs interact with target macromolecules to improve health care; developing novel designs for nanoscale microprocessors; advancing scientific understanding of plants to provide a safe and sustainable food supply, as well as benefits in renewable energy; and simulating pandemic spread to create a virtual laboratory where policy decisions such as school closure, vaccine deployment, and quarantine can be explored. The XSEDE partnership will fulfill this vision by creating the most advanced, capable, and robust advanced digital cyberinfrastructure in the world, and supporting it with the most expert and experienced team of CI professionals.
XSEDE will accelerate open scientific discovery and enable researchers, educators, and students across disciplines and across campuses to conduct transformational research efforts and innovative education programs. XSEDE will create strong ties with campus personnel spanning technology, workforce development, and policy issues to enhance CI for research and education. Researchers will use XSEDE directly, from campus and personal systems, from other high-end centers and cyberinfrastructure resources, and via science gateways and discovery environments. XSEDE users will be backed by an integrated national user support program offering an array of services from experts in the application of technology to advance science and engineering, including extensive training and advanced user support and collaboration. XSEDE's governance model will include participation by these users as stakeholders, while providing centralized management to ensure robustness and to facilitate rapid responses to new issues and opportunities. XSEDE will carry out a multifaceted Training, Education, and Outreach Services (TEOS) program to raise the competency of the present and future scientific community. XSEDE will work proactively with the nation's educational institutions to create a significantly larger and more diverse STEM workforce. TEOS will broaden participation by working with under-represented faculty and students to engage larger numbers of under-represented individuals from among minority-serving and EPSCoR institutions, women, and people with disabilities. TEOS will disseminate best practices, lessons learned, and quality materials and will leverage external partnerships to scale-up successful practices. XSEDE will leverage the XD Technology Insertion Service (TIS) activities, already awarded to the XSEDE team, into continuously advancing CI and will work closely with the Technology Audit Service (TAS) team to ensure that XSEDE can be effectively evaluated and improved.
These activities will ensure that XSEDE is robust, easy to use, performing as designed, and evolving constantly to meet the growing demands of scientific research and researchers. Advancing science with the most powerful, diverse, and integrated set of advanced digital services ever, and linking that to other CI projects and to campuses and local research infrastructure, is unprecedented. No engineering and technology plan can anticipate all contingencies and future opportunities. The successful realization of NSF's vision for XD will require deep expertise and vast experience, as well as focused and passionate effort. The XSEDE team is uniquely experienced and qualified for this incredible opportunity.

Agency: NSF | Branch: Continuing grant | Program: | Phase: ROBUST INTELLIGENCE | Award Amount: 435.36K | Year: 2013

The intellectual merit of this proposal lies in the bio-inspired approaches to novel control strategies, vision-based sensing solutions, and strategies for cooperative pursuit and herding that will be gleaned from the literature and observations of the behavior of birds. This study will result in both fundamental scientific knowledge and a practical application revolving around the development of highly maneuverable unmanned aerial vehicles based on flapping-wing robots. The intelligence of birds manifests itself in the evasive talents of flocks of birds, and this proposal attempts to recreate that intelligence through the challenge of applying a bird-like robot to the problem of preventing aircraft/bird collisions near airports. According to surveys by the International Bird Strike Committee, none of the existing systems to prevent bird strikes on airfields are adequate. The reasons include habituation by birds to these systems; movement of birds to other parts of the area, or scattering of them all over the airfield; and the tendency of birds to come back when a threat has gone. The only proven lasting way of removing birds is by using live birds of prey, but real birds are too difficult to control and train. An alternative is to study and extract the behaviors and dynamics of real birds in order to develop and deploy a robotic lookalike. The objective of this proposal, motivated by the problem of keeping airfields clear of disruptive avian flocks, is to develop control and sensing strategies for bird-like flapping robots that can be deployed in swarms to fend off antagonists. This work will build upon the PI's previous work on the control of flapping-wing aircraft using limit-cycle-based central pattern generators (CPGs), and on the dynamics and control of flexible, articulated-wing aircraft.
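
Limit-cycle CPGs of the kind cited are often modeled as Hopf oscillators, whose trajectories converge to a stable circular orbit from almost any initial condition, yielding a robust rhythmic command (e.g., a flapping angle). The sketch below is a generic textbook oscillator with illustrative parameters, not the PI's controller:

```python
import numpy as np

def hopf_cpg(mu=1.0, omega=2 * np.pi, dt=1e-3, steps=5000):
    """Euler-integrate a Hopf oscillator: the (mu - r^2) term pulls
    trajectories onto a limit cycle of radius sqrt(mu), while omega
    sets the oscillation frequency (here 1 Hz)."""
    x, y = 0.1, 0.0                      # start near the unstable fixed point
    xs = []
    for _ in range(steps):
        r2 = x * x + y * y
        dx = (mu - r2) * x - omega * y   # radial convergence + rotation
        dy = (mu - r2) * y + omega * x
        x, y = x + dt * dx, y + dt * dy
        xs.append(x)
    return np.array(xs)

signal = hopf_cpg()
radius = abs(signal[-1000:]).max()       # amplitude settles near sqrt(mu) = 1
```

The attractiveness of the limit cycle is what makes such CPGs forgiving of disturbances: a perturbed wing stroke simply relaxes back to the nominal rhythm.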

Broader Impacts: Society as a whole stands to benefit immensely from robotic birds that can effectively prevent bird strikes, which cause airplane crashes and millions of dollars annually in damage. Furthermore, because of their high aerodynamic efficiency in forward flight, articulated-wing flapping aerial robots equipped with sensors could have tremendous value by being able to inspect hazardous areas. For the field of robotics, the proposed transformative research will make far-reaching contributions, advancing the state of the art in aerial robotics, cooperative control theory, and control of flexible robot structures. Additionally, this project's education and outreach activities will help introduce a new generation of young people, including those from underrepresented populations, to the excitement of science and engineering careers. Multiple avenues of educational outreach will be pursued, taking advantage of the powerful appeal of the topics of robotics and bio-inspired flight.

Agency: NSF | Branch: Continuing grant | Program: | Phase: INDUSTRY/UNIV COOP RES CENTERS | Award Amount: 300.00K | Year: 2016

The semiconductor industry is perennially one of America's top exporters. Worldwide semiconductor sales for 2014 reached $335.8 billion, and the number of U.S. jobs in this sector was estimated to be around 250,000 in 2013. More broadly, the U.S. tech industry, which depends on semiconductor innovation to spur new products and applications, is itself estimated to represent no less than 5.7% of the entire U.S. private sector workforce (at nearly 6.5 million jobs), and with a tech industry payroll of $654 billion in 2014, it accounted for over 11% of all U.S. private sector payroll. Yet despite its success, the industry must continue to innovate if the U.S. is to retain global leadership in this highly competitive area. The complexity of modern microelectronic products necessitates the use of computer tools to formulate and verify product designs prior to manufacturing. When a product doesn't operate as intended or suffers early failures, this can often be attributed to inadequacy of the models used during the design process. In fact, the shortcomings of existing approaches for system component modeling have become a serious impediment to continued innovation.

The Center for Advanced Electronics through Machine Learning (CAEML) proposes to create machine-learning algorithms to derive models used for electronic design automation with the objective of enabling fast, accurate design of microelectronic circuits and systems. Success will make it much easier and cheaper to optimize a system design, allowing the industry to produce lower-power and lower-cost electronic systems without sacrificing functionality. The eventual result will be significant growth in capabilities that will drive innovation throughout the electronics industry, leading to new devices and applications, continued entrepreneurial leadership, and economic growth.

While achieving those goals, CAEML will also focus on diversifying the undergraduate engineering student body and improving the undergraduate experience. Students from groups traditionally underrepresented in engineering will be targeted for recruitment as undergraduate research assistants. Member companies will provide internships and mentors for participating students, and the diverse graduate and undergraduate student researchers in CAEML will receive hands-on multidisciplinary education. CAEML will also participate in all three site universities' existing avenues for student and faculty engagement with local youth. In particular, university-based summer camps are a tried and tested method of making high-school students familiar with and comfortable on our campuses. The Girls Adventures in Mathematics, Engineering, and Science (GAMES) summer camp program at the University of Illinois at Urbana-Champaign (Illinois) brings high-school girls to campus for a week of hands-on engineering activities and camaraderie. The engineering content for many of the GAMES camps, including the one on electrical engineering, is developed by engineering faculty. CAEML undergraduate and graduate students can serve as counselors or instructors for camps; the CAEML team proposes to develop new activities and workshops for high-school campers on all three sites' campuses. In addition, the Beginning Teacher STEM Conference at Illinois brings 150 teachers who have just completed their first year in the classroom to the Urbana-Champaign campus for 2 days to deepen their knowledge of STEM fields and try out activities for use in their classrooms; several of the sessions are taught by College of Engineering faculty, including those affiliated with CAEML.

The Center for Advanced Electronics through Machine Learning (CAEML) will create machine-learning algorithms to derive models used for electronic design automation, with the objective of enabling fast, accurate design of microelectronic circuits and systems. The electronics industry's continued ability to innovate requires the creation of optimization methodologies that result in low-power integrated systems that meet performance specifications, despite being composed of components whose characteristics exhibit variability and that operate in different physical or signal domains. Today, shortcomings in the accuracy and comprehensiveness of component-level behavioral models impede the advancement of computer-aided electronic system design optimization. Model accuracy also impacts system verification. Ultimately, the proper functionality of an electronic system is verified through testing of a representative sample. However, modern electronic systems are so complex that it is unthinkable to bring one to the manufacturing stage without first verifying its operation using simulation. Today, simulation generally does not ensure that an integrated circuit or electronic system will pass qualification testing the first time, and failures are often attributed to insufficiency of the simulation models. With an improved modeling capability, one could achieve better design efficiency and also perform design optimization. For system simulation, behavioral models of the components' terminal responses are desired for both computational tractability and protection of intellectual property. Despite many years of significant effort by the electronic design automation community, there is no general, systematic method to generate accurate and comprehensive behavioral models, in part because of the nonlinear, complex, and multi-port nature of the components being modeled.

CAEML will pioneer the use of machine-learning methods to extract behavioral models of electronic components and subsystems from simulation waveforms and/or measurement data. The Center will make two primary contributions to the field of machine learning: it will demonstrate the application of machine learning to electronics modeling, and it will develop the entire machine-learning pipeline. Historically, machine-learning theorists have focused on the model learning and evaluation tasks, but CAEML will focus on end-to-end performance of the pipeline, including data acquisition, selection, and filtering, as well as cost function specification. CAEML will develop a methodology to use prior knowledge, i.e., physical constraints and the domain knowledge provided by designers, to speed up the learning process. Novel methods of incorporating component variability, including that due to semiconductor process variations, will be developed. The intended end-users are electronic design automation (EDA) tool developers, IC design houses, and system design and manufacturing companies.
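The end-to-end pipeline described above (data acquisition, selection and filtering, model learning with physics-based prior knowledge, and evaluation) can be sketched in miniature. The example below is purely illustrative: it learns a behavioral model of a hypothetical diode-like two-terminal component from synthetic, noisy I-V samples, using the prior knowledge that the terminal response is roughly exponential in voltage. CAEML's actual algorithms, data, and components are not specified in this abstract, so every name and value here is an assumption.

```python
import numpy as np

# Hypothetical end-to-end pipeline sketch (not CAEML's actual method):
# learn a behavioral model of a nonlinear two-terminal component from
# synthetic (voltage, current) samples.

rng = np.random.default_rng(0)

# 1. Data acquisition: synthetic diode-like I-V samples with measurement noise.
v = rng.uniform(0.0, 0.8, 400)
i = 1e-9 * (np.exp(v / 0.05) - 1.0) + rng.normal(0.0, 1e-7, v.size)

# 2. Selection / filtering: keep only the operating range of interest.
mask = v >= 0.4
v_f, i_f = v[mask], i[mask]

# 3. Model learning with a physics-informed prior: the diode equation is
#    exponential, so log(current) is fit as a linear function of voltage.
pos = i_f > 0
slope, intercept = np.polyfit(v_f[pos], np.log(i_f[pos]), 1)

def behavioral_model(voltage):
    """Learned terminal response: current as a function of voltage."""
    return np.exp(intercept + slope * voltage)

# 4. Evaluation: median relative error on fresh samples from the same range.
v_test = rng.uniform(0.5, 0.8, 100)
i_true = 1e-9 * (np.exp(v_test / 0.05) - 1.0)
rel_err = np.median(np.abs(behavioral_model(v_test) - i_true) / i_true)
```

Because the prior restricts the model family to the physically plausible exponential form, a handful of noisy samples suffices, illustrating how domain knowledge can speed up learning as the abstract describes.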

CAEML consists of three sites: Illinois, Georgia Tech, and NC State. The scope of research at each site encompasses both algorithm development and the application of the derived models to a variety of IC and system design tasks. Investigators at all three university sites have unique skills and expertise while sharing interests in electronic design automation, IC design, system-level signal integrity, and power distribution. To leverage the cross-campus expertise, many of the Center's proposed projects involve investigators from more than one site. The Illinois investigators have special expertise in computational electromagnetics, electrostatic discharge (ESD), and optimization; they bring capabilities in areas such as circuit design for ESD-induced error detection, computationally efficient stochastic electromagnetic field simulation, reduced-order modeling and behavioral modeling of electrical/electromagnetic circuits and systems, and multi-domain physics modeling in the presence of uncertainty and variability. All three sites have strong research records in the fields of signal integrity analysis and electronic design automation. Excellent computational resources are available at Illinois for the proposed work; the necessary test and measurement equipment is also available, including a system-level ESD test-bed.

Agency: NSF | Branch: Standard Grant | Program: | Phase: Materials Eng. & Processing | Award Amount: 300.00K | Year: 2015

Concrete is second only to water as the most used material by humans. Its use continues to grow, both to build new structures and to meet an increasing need for repair of existing structures. Use of ordinary Portland cement, the main component responsible for the binding capacity of concrete, is projected to reach three times the 1990 level by 2020. As every ton of ordinary Portland cement is known to produce 0.8 tons of carbon dioxide, reducing cement consumption by use of supplementary cementitious materials is extremely important for lowering the greenhouse gas emissions associated with the construction industry. However, supplementary cementitious materials can be slow to react, and greater use of such materials in concrete requires external activation. External activation of supplementary cementitious materials can produce binders with similar or superior mechanical properties, and such binders have been used in actual construction. However, many factors still limit their wider use, including their high early-age deformation due to moisture loss and the limited understanding of their long-term time-dependent deformation. This project, for the first time, will study the processing of such sustainable alternative binders and its relationship with time-dependent deformation, with the ultimate aim of controlling it. The proposed work plan also aims to a) advance the integration of research and education by training civil engineering graduate students in materials science, b) encourage the study of sustainable infrastructure materials among students from middle school through the undergraduate level, and c) increase participation of women and underrepresented students in research.

The research objective of this proposal is to provide fundamental understanding of how the reaction mechanisms, and the molecular and nanostructural arrangements of the reaction products in alkali-activated sustainable alternative binders made from supplementary cementitious materials, are related to the time-dependent deformation of the binder. This proposal hypothesizes that the above-mentioned factors can be controlled through the addition of nanocrystalline seeding agents. In this project, the effects of the addition of nanocrystalline seeding agents on the reaction mechanism of alkali-activated binders will be studied through the use of high-resolution electron microscopy and X-ray scattering. Precise information on the growth mechanism will be transformative, as it will permit modification and possibly improvement of the predictive capability of existing models for the reaction kinetics of such binders. The fundamental understanding achieved through this proposal will be equally important for improving the resistance of alkali-activated binders against leaching, efflorescence, and other chemical degradation, as these also depend on the molecular structure and nanostructure of the binder.

Agency: NSF | Branch: Continuing grant | Program: | Phase: Nuclear & Hadron Quantum Chrom | Award Amount: 7.55M | Year: 2012

Understanding the structure of hadrons and nuclei remains a compelling thrust in nuclear physics. Although it is a relatively straightforward generalization of quantum electrodynamics, the underlying theory of strong interactions, quantum chromodynamics (QCD), is difficult to solve, especially in the context of structure. Although lattice QCD has made important progress, observations are clearly still important to make the theory's phenomena come to life. The University of Illinois at Urbana-Champaign program focuses on the spin and flavor structure of the nucleon in a variety of experiments at high energies where key elements of the microscopic role of quarks, anti-quarks, and gluons are poorly understood. The PHENIX polarized pp program, while continuing to measure the contribution of gluons to the nucleon spin, will enter a new phase of measuring the contributions of antiquarks, taking advantage of an Illinois-led muon trigger upgrade. The SEAQUEST experiment at Fermilab will follow on the very successful E866 measurement of the intriguing asymmetry of up and down anti-quarks in the nucleon with more precise data over a broader kinematic range. The Illinois group has recently joined the fixed-target COMPASS experiment at CERN, where they will make a major contribution to an apparatus upgrade that will take the next step in characterizing the transverse spin structure of the nucleon. Together, these efforts - with substantial Illinois contributions - promise significant advances in the understanding of QCD. Beyond the current standard model of particle physics must lie new phenomena, only the shadows of which are currently visible. The Illinois group contributes to these investigations through low-energy experiments designed to follow the leads provided by certain non-standard-model phenomena.
In the neutrino sector, where even the existence of non-zero masses was a surprise not contemplated in the standard model, the Illinois group is working on the Daya Bay experiment. Currently taking its first data, Daya Bay will make the world's most sensitive determination of the mixing of the first and third neutrino generations, theta_13. In the standard model, the violation of the combined charge-conjugation and parity (CP) symmetry is added by hand and is not well understood. A new measurement of the neutron's electric dipole moment (EDM), which violates CP, aims at a 100-fold improvement compared with the present limit. These measurements are important on their own merits, but may also shed light on the observed asymmetry of matter and anti-matter in the universe. The existence of this asymmetry is thought to require more CP violation than is currently known, which may arise from interactions that would generate an EDM in the neutron, and/or in the neutrino sector, where it would manifest in CP-violating mixing between the first and third generations.

The Illinois group has reached beyond its laboratories to make significant contributions to education and outreach and will continue to do so. Research projects contribute substantially to the education of a large number of postdoctoral research associates, graduate students, and undergraduate students. The faculty are also involved in innovative curricular and outreach efforts. Examples include the introduction of research equipment and measurement protocols into advanced modern physics laboratories, establishment of a graduate option in energy and sustainability engineering, invention of new interdisciplinary math/engineering teaching protocols targeted directly at retention, continued support of a Saturday lecture program aimed primarily at high school students, and a major contribution to the Physics of Baseball, a topic having wide public interest. Our group and department expect to continue to strongly support the participation of underrepresented groups, for example by having hosted the Conference for Undergraduate Women in Physics in 2008.

Agency: NSF | Branch: Standard Grant | Program: | Phase: I-Corps | Award Amount: 50.00K | Year: 2017

The broader impact/commercial potential of this I-Corps project will be a transformative approach to fabricating novel smart multi-functional nanostructures that meet the needs of next-generation advanced materials by providing a process that is green, cheap, fast, and versatile. This approach to nanostructure synthesis has applications in biotechnology, energy, and tooling, among others. For example, in the healthcare field, each year nearly one million people in the United States suffer from an infection related to medical implants. Additionally, implants in bone can loosen over their lifetime due to poor tissue integration, resulting in inflammation and pain, and possibly requiring additional surgeries. These issues can potentially be solved through the synthesis of inherently anti-bacterial biomaterial surfaces that enhance biological tissue integration. The commercialization of this technology has the potential to redefine industrial material design paradigms.

This I-Corps project is based around a form of plasma processing of materials called Directed Irradiation Synthesis (DIS) and Directed Plasma Nanosynthesis (DPNS). This technology is able to change the inherent properties of a material surface by creating customized nanoscale topographies (pores, rods, cones, ripples, etc.) and chemistries (stoichiometry, oxidation state, etc.) by exposing the surface to a controlled flux of ions, electrons, and neutral particles with controlled mass, momentum, and fluence, among other conditions. This allows a new level of fidelity with atom-by-atom control using self-organized arrangement in irradiated surfaces that is dominated by ion-induced erosion and surface diffusion. This technology will transform the synthesis and design of nano-structured systems by leveraging the composition-dependent mechanisms that drive self-organization on micro- and nano-structures to enable tunability and control of their biological properties.

Agency: NSF | Branch: Continuing grant | Program: | Phase: Systems and Synthetic Biology | Award Amount: 137.74K | Year: 2016

Bacteria can form diverse community structures, called biofilms, that exhibit distinctive features such as spiral swirls, both on the surface of laboratory agar and in natural environments. Understanding how these microbial structures form is of fundamental importance, as bacteria predominantly function in the form of communities to impact the environment, agriculture, and human health. Understanding the design principles underlying the multi-scale organization of communities is important, especially as competition between community members is one of the major driving forces that influence how microbial communities organize themselves within a biofilm. An integrated understanding of bacterial competition and its roles in controlling spatial community structure will advance our basic knowledge of microbial ecology and facilitate new developments in the area of synthetic microbial biology. The research will also provide educational and training opportunities for girls in regional high schools and undergraduate institutions, particularly those who are underrepresented; simultaneously, it will engage larger research communities and the general public.

Bacterial competition is a major driving force that directs the emergence of diverse spatial community structures. Despite significant advances over decades of research, our understanding and predictive capability for the organization of competing communities remains limited, largely due to the multiscale coupling of the underlying molecular, cellular and ecological events. It is important to determine how the key molecular traits of competition, including its scale, variability and cost, determine the spatiotemporal structures of communities. To achieve this goal, synthetic E. coli consortia will be used as model systems for quantitative exploration; in parallel, a biophysical modeling platform for systematic survey and analysis will be developed. This project will yield a systems-level understanding of bacterial competition and community organization, thus providing critical implications for understanding microbial ecology and engineering artificial communities. More broadly, because of the rich nonlinear dynamics and emergent properties associated with bacterial communities, the planned work will further advance the fundamental knowledge of complex systems in general.
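As a purely illustrative companion to the biophysical modeling goal above, the classic two-species Lotka-Volterra competition model captures the qualitative dichotomy the project studies: whether competing populations coexist or one excludes the other. This is a generic textbook sketch, not the project's biophysical platform, and every parameter value below is invented.

```python
# Generic Lotka-Volterra competition sketch (not the project's model):
# dn1/dt = r*n1*(1 - (n1 + a12*n2)/K),  dn2/dt = r*n2*(1 - (n2 + a21*n1)/K)

def simulate(a12, a21, r=1.0, K=1.0, n1=0.1, n2=0.2, dt=0.01, steps=20000):
    """Forward-Euler integration; returns the final densities (n1, n2)."""
    for _ in range(steps):
        dn1 = r * n1 * (1.0 - (n1 + a12 * n2) / K)
        dn2 = r * n2 * (1.0 - (n2 + a21 * n1) / K)
        n1 += dt * dn1
        n2 += dt * dn2
    return n1, n2

# Weak, symmetric competition (a12 = a21 = 0.5 < 1): stable coexistence at
# n* = K*(1 - a12)/(1 - a12*a21) = 2/3 for each species.
coexist = simulate(0.5, 0.5)

# Strong asymmetric competition (a12 > 1 > a21): species 2 excludes species 1.
exclude = simulate(1.5, 0.6)
```

Even this minimal model shows how a single competition trait (the strength of cross-inhibition) flips the community between qualitatively different structures, which is the kind of trait-to-structure mapping the project aims to establish at far greater mechanistic and spatial detail.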

Agency: NSF | Branch: Continuing grant | Program: | Phase: Chemical Catalysis | Award Amount: 405.00K | Year: 2016

The Chemical Catalysis Program of the Chemistry Division supports the research of Professor Kami L. Hull of the University of Illinois, Urbana-Champaign to develop new approaches for the synthesis of carbonyl-containing compounds. Such species are extremely important in the commodity chemical industry as well as in pharmaceutical and specialty chemical syntheses. Their preparations, however, are complicated and often involve the production of undesirable by-products. One of the primary goals of this project is to maximize the economy of the synthesis of these species. New methodologies that minimize waste are targeted for the synthesis of amides, esters, thioesters, and ketones, all of which are important starting materials or intermediates in large-scale chemical processes. The broader impacts of this work include the potential to minimize the waste associated with the synthesis of carbonyl compounds, thereby making better use of our natural resources in the synthesis of fine and commodity chemicals. Additionally, there is an outreach component that aims to increase the scientific literacy of rural children and encourage these students to pursue higher education in STEM disciplines.

New rhodium-based metal catalysts are developed that allow for the direct coupling of alcohols and nucleophiles, through dehydrogenation or transfer hydrogenation, for the synthesis of amides, esters, thioesters, and ketones. Further, the synthesis of alpha- and beta-chiral carbonyl compounds is explored. The catalysts and organic products are fully characterized by NMR spectroscopy, IR spectroscopy, and, when needed, X-ray crystallography. To increase the scientific understanding of the greater community, an outreach program entitled Next Generation of Illinois Scientists (NGIS) is being developed. The goal of NGIS is to bring scientific outreach to rural communities. Working with fourth-grade students at rural elementary schools within a 50-mile radius of the University of Illinois, Urbana-Champaign, a series of experiments is being developed that builds on the Common Core Curriculum.

Agency: NSF | Branch: Standard Grant | Program: | Phase: MANFG ENTERPRISE SYSTEMS | Award Amount: 300.00K | Year: 2014

The objective of this award is to develop novel dynamic pricing models that take into account pertinent, empirically validated consumer behaviors. Building upon empirical data from industrial partners and open databases, we plan to construct accurate and tractable demand models that are amenable to pricing optimization. Specifically, we will explore demand models that incorporate consumers' memory-based reference prices under dynamic pricing. We will then use these demand models to build pricing optimization models that take into account a variety of complex practical operational constraints. The project's demand models and decision models range from single products to multiple products, from durable products to perishable products, and from deterministic settings to stochastic settings. Developing advanced analytical techniques and efficient algorithms by exploiting the special structures of these models will be essential to this project.

If successful, this research will lead both to novel and comprehensive analytical models and to advanced methodologies and efficient algorithms that may be used to attack the resulting challenging non-smooth, non-convex optimization problems. These models, methodologies, and algorithms will be critical for the development of much-needed decision support systems to improve companies' competitive advantages. Preliminary research demonstrates that memory-based reference price models result in complex price dynamics and raise a host of challenging yet practical research questions that the existing literature does not yet address. The novel theory and techniques we expect to develop will not only successfully address these questions, but will also be useful in other research areas such as dynamic systems.
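A concrete instance of the memory-based reference-price idea is the exponential-smoothing formulation common in the pricing literature: the reference price is a weighted average of past posted prices, and demand gains or loses relative to the classical linear model according to whether the current price sits below or above that reference. The sketch below is illustrative only; the project's actual demand and decision models are not given in this abstract, and all parameter values here are assumptions.

```python
# Illustrative memory-based reference-price demand model (parameters invented):
#   r_{t+1} = alpha * r_t + (1 - alpha) * p_t    (exponential-smoothing memory)
#   d_t     = a - b * p_t + gamma * (r_t - p_t)  (gain/loss vs. the reference)

def simulate_demand(prices, r0=10.0, alpha=0.8, a=100.0, b=2.0, gamma=1.5):
    """Return the per-period demands and the final reference price."""
    r, demands = r0, []
    for p in prices:
        demands.append(a - b * p + gamma * (r - p))
        r = alpha * r + (1 - alpha) * p
    return demands, r

# Holding the price constant at 12, the reference price converges to 12 and
# demand settles at the classical linear level a - b*p = 76; the first period
# shows a "loss" effect because the price sits above the initial reference of 10.
demands, r_final = simulate_demand([12.0] * 200)
```

The nonlinearity that makes the resulting pricing problems non-smooth and non-convex arises when gains and losses are weighted asymmetrically; the symmetric case above is the simplest starting point.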

Agency: NSF | Branch: Standard Grant | Program: | Phase: POLYMERS | Award Amount: 245.13K | Year: 2016


Over the past half century, electronic materials have changed the way we live, and in the coming decades they may revolutionize the way we harvest energy as well. However, electronic products are presently manufactured by processes of high energy and environmental cost. In comparison, polymer-based electronics (i.e., using ultra-thin films of specialty plastic materials) can be processed from solutions at low temperatures by low-cost, high-throughput methods such as roll-to-roll printing. The controlled assembly of materials, and the way their morphological features evolve during processing, plays a central role in a broad range of areas including electronics, pharmaceuticals, food, fine chemicals, and energy materials. The approach in this project represents a new methodology for controlling the assembly of functional materials by designing the fluid flow used in their processing. Using a hypothesis-driven approach, it is aimed at providing new fundamental insights and design rules on fluid-directed assembly that could have broad implications across numerous areas.

The planned work will integrate research efforts with outreach and educational activities. These activities will include outreach to high-school students, aiming particularly to increase the interest and participation of girls in science, engineering, and technology. Undergraduate students will be mentored and research projects pertinent to directed assembly and polymer crystallization and aggregation will be designed for their engagement. Also, this project will impact new course development on fundamental principles of directed assembly of molecular solids. The course will be designed for graduate students and will include modules on organic semiconductors and alternative energy.


A key challenge in realizing high-performance conjugated-polymer-based semiconductors is to direct their assembly from the molecular scale through the mesoscale to the macroscale during the solution printing process. To address this challenge, the PI proposes a new method of fluid-directed assembly to control the morphology of printed conjugated-polymer thin films across multiple length scales. She will implement this methodology by designing the fluid flow on the platform of microfluidic slot-die printing. Specifically, at the molecular scale the aim is to induce local ordering in molecular aggregates by introducing extensional flow to promote polymer nucleation and pre-aggregation. At the mesoscale, the objective is to control the orientation and alignment of polymer pre-aggregates by designing the shear flow. The PI will further investigate the role of molecular rigidity in flow-induced polymer crystallization and establish the relationship between morphology and charge transport in semiconducting polymers.

The proposed approach will be implemented by combining experiments on polymer crystallization and aggregation, simulations of fluid flow, and theory on fluid-polymer interactions. This approach can enable directed molecular assembly across multiple length scales. Attaining multiscale assembly at once is highly challenging but critical to controlling solid-state properties, such as in the case of printed electronics. Furthermore, the proposed approach will help to unravel the morphology-charge transport relationships for semiconducting polymers. Establishing this relationship has been challenging due to the lack of methodologies for systematically tuning the thin-film morphology across multiple length scales.

Agency: NSF | Branch: Standard Grant | Program: | Phase: TOPOLOGY | Award Amount: 76.95K | Year: 2015

The thirteenth annual Graduate Student Topology and Geometry Conference (GSTGC) is to be held March 29-30, 2015, at the University of Illinois at Urbana-Champaign. This will be the largest GSTGC ever, with over 150 graduate students from across the country, at various stages of their careers, expected to attend. The schedule includes a broad range of talks: 24 to 32 thirty-minute graduate student talks on expository and original research topics in geometry/topology; plenary speakers Kathryn Hess (EPFL), Misha Kapovich (UC Davis), and Daniel Wise (McGill); and six invited young faculty speakers: Anna Marie Bohmann (Northwestern), Jeff Danciger (UT Austin), Jo Nelson (IAS), Vivek Shende (UC Berkeley), Hiro Lee Tanaka (Harvard), and Jing Tao (Oklahoma). This conference will include talks in many subfields of topology and geometry, including hyperbolic geometry, geometry of positive curvature, global analysis, 3-manifolds, homotopy theory, symplectic geometry, dynamics, knot theory, cobordism theory, category theory, Teichmuller theory, 4-manifolds, differential topology, geometric group theory, algebraic K-theory, and more.

This conference provides a venue for communication among young mathematicians from different geographic regions. Participants include graduate students at all stages of their careers. This is one of the few conferences in topology and geometry that is dedicated to graduate students. Student participants at this conference have the opportunity to learn about cutting edge research, to refine their communication and networking skills, and to establish collaborations with peers. Geometry and topology are fundamental mathematical fields with deep connections to many other areas of research, such as dynamical systems, physics, computer science, and mathematical biology. This conference serves to foster research and communication about research in these areas, and to enable and encourage graduate students in these fields.

Conference URL: http://www.math.illinois.edu/gstgc2015/

Agency: NSF | Branch: Standard Grant | Program: | Phase: ADVANCES IN BIO INFORMATICS | Award Amount: 421.20K | Year: 2014

This research aims to help scientists develop and use relatively simple tools to describe species in a way that makes those descriptions easier to share with other scientists and easier for computers to process and analyze. Taxonomists are scientists who describe the world's biodiversity. Taxonomists' descriptions of millions of species allow scientists to do many different kinds of research, including basic biology, environmental science, climate research, agriculture, and medicine. The problem is that describing any one species is not easy. The language used by taxonomists to describe their data is complex, and typically not easily understandable by computers nor even other scientists. This situation makes it difficult to search for patterns across the millions of species that have been documented by thousands of different researchers over many decades of work worldwide. Innovation from this project is applicable to the long-term development of open-source software initiatives serving laboratories throughout the world, and the research facilitates the production of open, shared data, as mandated by various federal agencies. As a result of this project, these data will become more accessible and informative to the general public. The project provides rich, real-world training for graduate students in library and information sciences, training them to be cross-disciplinary researchers in a field that is in need of new experts. Collaborating experts studying bees, wasps, and ants will receive training on the cutting-edge theories and methods from the bioinformatics toolbox developed as a consequence of this project. In return, their contributions of data will act as the basis for computational benchmarks needed in areas of logical inference and data modeling.

This research addresses the problem of how to produce and utilize semantic data, specifically semantic phenotypes, within the taxonomic context of describing the Earth's biological diversity. The approach to be taken is bottom-up and iterative, involving the rapid prototyping of tools, the combining of existing tools, and the tailoring of applications developed for one purpose but now being reused for this scientific activity. Scientists are busy innovating partial solutions by tinkering with and combining available computer programs and datasets. Their efforts comprise an incredibly productive source of innovation, since it is often much easier and faster to combine computer resources that already exist than to build something from scratch. However, such cobbling together of resources to meet a need can benefit from analysis and active support. In particular, a more principled set of approaches can make innovations easier to share and to maintain. With a focus on the Hymenoptera, the researchers plan an innovative approach for biodiversity informatics based on work in the field of Computer Supported Cooperative Work (CSCW). Using a combination of ethnography to define work practice, user-centered design, and iterative agile software development, the collaboration between information scientists, biologists, and application developers aims to produce a suite of concrete deliverables, a rapid prototype portfolio comprising interface and workflow tools, and end-user requirements for semantic phenotype production. The project will explore and document examples of innovative prototyping of solutions by scientists to understand how it occurs, what it is that scientists most need, and how these efforts can be most effectively supported. These components may be generalized to allow broader scientific use.

Agency: NSF | Branch: Standard Grant | Program: | Phase: GEOGRAPHY AND SPATIAL SCIENCES | Award Amount: 1.50M | Year: 2014

This project will develop a set of tools for spatial data synthesis through scalable data aggregation and integration based on cloud computing, CyberGIS, and other existing tools. Many scientific problems require the aggregation and integration of large and varied spatial data from a multitude of sources, yet existing approaches and software cannot effectively synthesize the enormous amounts of spatial data that often are available. This project will resolve problems associated with the use of massive spatial data, thus facilitating work dependent on this type of data for scientific problem solving, such as research on population dynamics and urban sustainability. Learning materials derived from the research activities will be openly accessible through the CyberGIS Science Gateway. Targeted massive open online course development will provide inexpensive and efficient ways to teaching students about the capabilities and underlying scientific principles of spatial data synthesis. A summer school will be offered during the second half of the project to provide a more focused and in-depth training event.

This research project will create scalable capabilities for spatial data synthesis enabled by cloud computing and CyberGIS. The project will begin by developing the capabilities for solving specific scientific problems and then move on to engage a broader community for validating and improving the core capabilities. The research will incorporate two interrelated themes: (1) measuring urban sustainability based on a number of social, environmental, and physical factors and processes; and (2) examining population dynamics by synthesizing multiple population data sources with social media data. Spatial data synthesis capabilities that the project will provide include extracting metadata and dealing with problems of spatial references and units. The project also will develop a fundamental capability to characterize uncertainty in data and its propagation.
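As a toy illustration of the aggregation capability described above (hypothetical code; the project's actual CyberGIS tooling is far more general), point observations such as geotagged social media posts can be binned into grid cells:

```python
from collections import defaultdict

def aggregate_to_grid(points, cell_size):
    """Count (x, y) point observations per square grid cell."""
    counts = defaultdict(int)
    for x, y in points:
        cell = (int(x // cell_size), int(y // cell_size))
        counts[cell] += 1
    return dict(counts)

# four observations fall into two unit cells
obs = [(0.2, 0.7), (0.9, 0.1), (1.4, 0.3), (1.6, 0.9)]
grid = aggregate_to_grid(obs, cell_size=1.0)
# grid == {(0, 0): 2, (1, 0): 2}
```

Real synthesis must also reconcile spatial reference systems and units and track uncertainty, exactly the problems the project targets; this sketch ignores all of that.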

Agency: NSF | Branch: Standard Grant | Program: | Phase: Service, Manufacturing, and Op | Award Amount: 245.49K | Year: 2016

Many information processing, service and manufacturing systems can be viewed as networks of interacting resources. Examples include data centers for cloud computing, online advertising systems, electronic chip manufacturing lines, the Internet, and health care service systems. Requests for services from such systems are processed by an interconnected set of resources such as computers, manufacturing stations, and human servers. In such applications, a common objective is to identify service policies (routing, service order, and server control algorithms) that minimize delays for customers using the system. Except in a few special cases, currently there are no mathematical tools available to compute performance metrics such as the delay experienced by the customers, especially when the system size is large. The goal of this project is to advance the mathematical tools needed to compute performance metrics of large systems of networked resources. These mathematical techniques will enable the design of good service policies for the myriad of applications mentioned earlier. The results from the project will be incorporated into courses. Outreach efforts will be made to include students from underrepresented groups and minorities in the project.

Often the problem of optimal control of networks of interacting resources can be modeled as a Markov Decision Problem (MDP), but the state-space is prohibitively large to obtain optimal solutions. Therefore, it is common to study such problems (under some appropriate scaling) either as fluid control problems, or Brownian control problems, or large-deviations problems. The objective of this project is to enable an alternative approach, which involves studying the drift of appropriately-chosen Lyapunov functions, in transient and steady-state modes. The specific challenge involves developing lower-dimensional models for high-dimensional systems, and using the lower-dimensional models to study the optimality, or lack thereof, of specific architectures and algorithms. If successful, this project will result in (i) new analysis tools based on the drift-based arguments, which provide tight bounds on the steady-state performance of control and decision algorithms in large networks, and (ii) design of optimal or near-optimal algorithms that perform well at all traffic loads.
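The drift-based argument the project builds on can be sketched in its textbook form (a generic statement, not the project's new results): if a Lyapunov function $V$ has uniformly negative drift proportional to some performance quantity $f$, the steady-state mean of $f$ is bounded.

```latex
% Drift condition: for all states x, with \epsilon > 0 and b < \infty,
\mathbb{E}\bigl[\, V(X(t+1)) - V(X(t)) \mid X(t) = x \,\bigr]
    \;\le\; -\epsilon\, f(x) + b .
% In steady state the expected drift is zero, so taking expectations gives
\mathbb{E}\bigl[\, f(X) \,\bigr] \;\le\; \frac{b}{\epsilon}.
```

For queueing networks, choosing $V(x) = \|x\|^2$ and letting $f(x)$ be the total queue length yields familiar bounds of this type on steady-state delay.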

Agency: NSF | Branch: Standard Grant | Program: | Phase: I-Corps | Award Amount: 50.00K | Year: 2017

The broader impact/commercial potential of this I-Corps project will be to provide a fully automated and versatile biological foundry (biofoundry) technology to accelerate biotechnology research and development (R&D). The biotechnology industry spends over $35 billion per year on R&D worldwide. However, the current R&D process relies mainly on the manual work of highly experienced research scientists. Such manual operation is slow, expensive, and prone to human error, which has constrained the growth of this industry. The application of such a fully automated and highly versatile biofoundry may help biotech companies greatly expedite their R&D and reduce its cost. Thus, the risk of R&D can be further reduced, enabling the development of more bio-based specialty chemicals, fuels, pollution remediation, as well as healthcare products for the market. Initial work suggests the biofoundry technology may have an initial opportunity in the strain-development market for fermentation companies.

This I-Corps project proposes to study the specific markets for the advanced biomanufacturing technologies. The first prototype of such a robotic platform has demonstrated its throughput and versatility in proof-of-concept studies. It is capable of conducting most molecular and cellular biology experiments with minimal human intervention. With its modular hardware and software systems, this platform can be rapidly reconfigured for a variety of tasks in engineering biological systems. The modular system architecture also allows programming of long and complex workflows that were very difficult to implement in previous lab automation systems. When used for assembling complex DNA molecules, it achieved a throughput of over 400 constructs per day with close to a 100% success rate. Such throughput and consistency were unattainable by conventional manual operations. Such capacity opens the door to fast prototyping of biological systems by in silico design, such as proteins, cells, and plants.

Agency: NSF | Branch: Standard Grant | Program: | Phase: DEVELOP& LEARNING SCIENCES/CRI | Award Amount: 190.21K | Year: 2014

Understanding how children learn language--how they gather data from language experience and use it to uncover linguistic structure--is a key challenge for cognitive science, in part because native mastery of a language is one of the foundations of both formal and informal education. Infant language learners encounter sentences paired with world situations; at the start, these sentences are made up of unknown words, combined by the rules of an unknown grammar. Based on such data, toddlers begin to understand sentences early in the second year of life, and ultimately build a lexicon and grammar that support nearly unlimited generalization to new sentences. Accounts of how children begin to understand sentences necessarily begin with the non-linguistic world. The infant, not yet knowing the words or the grammar, must figure out what words and sentences mean in large part by observing the world situations in which they occur. But aspects of the meanings of verbs challenge the assumption that learners can straightforwardly recover verb (and thus sentence) meanings from situations. This problem inspired the syntactic bootstrapping theory, which proposes that children use their growing knowledge of syntax itself to learn verbs and interpret sentences.

This research will provide important insight into normal language acquisition, and it may, in the future, also contribute to the diagnosis and treatment of developmental language disorders. In this project, Dr. Fisher and Dr. Roth explore how syntactic bootstrapping works, and how it begins, extending the structure-mapping account of the origins of syntactic bootstrapping. On this account, infants approach language armed with an innate bias toward one-to-one mapping between nouns in sentences and participant-roles in events. Given this bias, children find the number of nouns in a sentence inherently meaningful: For example, as soon as children can identify some nouns, they can assign different interpretations to transitive and intransitive verbs, essentially by counting the nouns. A corollary of this account is that children identify words as verbs by learning their syntactic combinatorial properties.
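As a deliberately crude illustration of the noun-counting idea (a toy invented here, not the investigators' model; the tag labels and function name are assumptions):

```python
def guess_frame(tagged_sentence):
    """Guess a verb's frame from the number of known nouns.

    tagged_sentence: list of (word, tag) pairs, where "N" marks a
    word the child already knows to be a noun.
    """
    nouns = sum(1 for _, tag in tagged_sentence if tag == "N")
    # one-to-one mapping bias: each noun signals one participant role
    return "transitive" if nouns >= 2 else "intransitive"

# two known nouns -> two participant roles -> a transitive guess,
# even though the verb "gorped" is novel
frame = guess_frame([("the", "D"), ("girl", "N"), ("gorped", "V"),
                     ("the", "D"), ("ball", "N")])
# frame == "transitive"
```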

This project asks how syntactic bootstrapping scales up to the complexity of verbs' predicate-argument structures and the ambiguity of sentences. The project addresses two linked proposals by combining verb-learning experiments with children and experiments with a computational model based on systems for Semantic Role Labeling (SRL). The first proposal is that distributional learning creates detailed syntactic-semantic combinatorial knowledge about verbs. This knowledge plays two roles: (a) it permits syntactic bootstrapping, as children use verbs' combinatorial behavior to identify them as verbs and to compute their semantic structure; and (b) it supports online sentence processing by reducing ambiguity and improving children's sentence representations (this is known as verb bias). The second proposal is that an expectation of discourse continuity facilitates verb learning by letting learners gather evidence for verb argument structure across nearby sentences. Combinatorial learning about verbs guides this process by cuing children to seek referents for missing arguments in the discourse context.

Agency: NSF | Branch: Continuing grant | Program: | Phase: EVOLUTIONARY GENETICS | Award Amount: 507.00K | Year: 2014

No organism, including humans, can live without the vast microbial menagerie that lives in and around us. Critical to understanding the vital functions of these communities is knowledge about the processes that generate microbial species. How microbial species are generated and maintained over time, and whether they are even independent units, is not known. Whitaker and colleagues combine genetic and genomic tools to define the rules of speciation in the model microorganism Sulfolobus. Their goal is to better understand the evolutionary and ecological parameters that govern how genetic material moves between individual cells, resulting in the generation and maintenance of the microbial diversity on which we depend.

A more robust understanding of microbial biodiversity will help propel discovery and innovation in a host of scientific fields, ranging from emerging infectious diseases to climate change modeling, that are being transformed by the microbial revolution. Whitaker will extend understanding of the microbial world and its impact on daily human life through Project Microbe, a curriculum designed to integrate microorganisms into national K-12 education, in the college setting at the University of Illinois, and outside of the classroom through the Osher Lifelong Learning Institute at Illinois.

Agency: NSF | Branch: Continuing grant | Program: | Phase: OFFICE OF MULTIDISCIPLINARY AC | Award Amount: 375.29K | Year: 2013

A (proper) coloring of vertices of a graph or hypergraph G is a partition of the vertex set of G into sets (called color classes) such that no edge of G is fully contained in any of the classes. The basic coloring problem is to find such a partition with the fewest color classes. The aim of this project is to study extremal problems related to coloring of (hyper)graphs involving degrees of vertices, a number of which arose during the work of the PI and co-PI with students at the University of Illinois. It is expected that the joint work and combinations of the ideas and approaches of the PI and the co-PI will allow them to obtain results that constitute an essential step toward understanding these problems. Among important topics are color-critical graphs with small average degree, list colorings, improper colorings, coloring of graphs embedded into surfaces, equitable coloring, bounds on the independence number, and hypergraph coloring. Among promising tools is the language of potentials.
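To make the coloring definition concrete (an elementary sketch, unrelated to the extremal results pursued here), a proper coloring of a graph can always be produced greedily, using at most one more color than the maximum degree:

```python
def greedy_coloring(adj):
    """Properly color a graph given as {vertex: [neighbors]}."""
    color = {}
    for v in adj:  # the visiting order is a heuristic choice
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:  # smallest color unused by any colored neighbor
            c += 1
        color[v] = c
    return color

# a 4-cycle is 2-colorable, and greedy finds that here
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
coloring = greedy_coloring(cycle)
# no edge is monochromatic, and only colors 0 and 1 are used
```

Greedy is far from optimal in general; the extremal questions above concern how degree conditions constrain the number of colors that are truly necessary.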

Coloring deals with the fundamental problem of partitioning a set of objects into classes that avoid certain conflicts. This model has many applications, for example, in time tabling, scheduling, frequency assignment and sequencing problems. The theory of graph and hypergraph coloring has a central position in discrete mathematics. It relates to other important areas of combinatorics, such as Ramsey theory, graph minors, independence number of graphs and hypergraphs, orientations of graphs, and packing of graphs. The PI and the co-PI plan to make significant advances in developing the theory of coloring of sparse graphs and hypergraphs. They will involve in this work a number of graduate students and very recent graduates of the University of Illinois. The research guidance and involvement of graduate students will contribute to their professional development and reputation. The results will be published in leading international journals in the field and presented at international scientific conferences. This will support the reputation of the University of Illinois. The results will be used in graduate courses and discussed at research seminars at the University of Illinois.

Agency: NSF | Branch: Continuing grant | Program: | Phase: STELLAR ASTRONOMY & ASTROPHYSC | Award Amount: 357.71K | Year: 2015

The investigators will address important questions concerning the explosion of supernovae (SNe), which can occur at the end of the life of a star. How do these stars die? Astronomers use SNe as standard candles because they are shown to explode with nearly identical energies. If the energy of an explosion is constant, then the fainter a SN appears, the more distant it must be. Astronomers use these SNe to measure the distance to galaxies. Because of their great brightness, SNe are astronomers' best tool for studying dark energy at great distances. However, there are some serious concerns: do stars with different element compositions or spins explode with different brightness? How do the details of star systems and explosion physics influence our understanding of dark energy?

SNe have been one of the driving forces behind some of the most interesting physics of the past century: black holes, neutron stars, dark energy, neutrinos, and cosmic rays. Newly discovered classes of transients have already brought about discussions of gravitational waves, pair-instability explosions, and r-process element creation. Discoveries resulting from research funded by this proposal should address some similarly interesting physics.

This program will train two University of Illinois graduate students in observational stellar astrophysics and perform an integrated education program with undergraduate astronomy classes across the country. The graduate students will work with data from multiple telescopes and surveys. The main scientific goals are to (1) discover and characterize new and recently discovered classes of astrophysical transients, (2) detail the progenitors and explosions of those exotic transients, and (3) determine how Type Ia supernova (SN Ia) observables depend on the progenitor system.

Agency: NSF | Branch: Continuing grant | Program: | Phase: CONDENSED MATTER & MAT THEORY | Award Amount: 172.84K | Year: 2017

This CAREER award supports theoretical and computational research and education to elucidate rules for designing polymeric materials that mimic biology. Polymers are long chain-like molecules that are made of joined molecular units called monomers. Materials made from polymers are used in a wide range of common applications, from rubber bands to plastic components of automobiles to packaging materials and more. The PI is inspired by the sophisticated precision of biological systems, which are made from large molecules that specifically and exclusively interact using information encoded in patterns of electrostatic charge. The PI will investigate whether polymers that self-organize can be made to behave in a similar way. The PI's group will determine how patterns of electrostatically charged monomers along a polymer molecular chain can be designed to guide the self-organization of molecular structures at the nanometer length scale.

The monomer sequence of a polymer will be a tool to fit molecules together like puzzle pieces. To do this, the PI will consider polymer systems that strongly attract because they consist of chains of positive charge called polycations and chains of negative charge called polyanions. In solution, large numbers of these polycations and polyanions stick together in a dynamic, gel-like material known as a complex coacervate. This sticking is highly dependent on the sequence of charges along the polymer backbone, and the PI's group will establish how different charge patterns determine which polycations interact with which polyanions. This assembly motif will enable advances in a broad class of materials that demand structural precision at the nano-level, such as fuel cell membranes, functional coatings and sensors, and drug delivery vehicles.

The integrated education and outreach component of this project supports broader outreach to underrepresented minority groups, along with graduate and undergraduate research training and mentorship. Outreach efforts consist of placing the computer simulation advances of the PI's group into the context of polymer sustainability and experimental collaboration. Interactive computer simulation is the centerpiece of a PI-designed activity within the St. Elmo Brady STEM Academy at the University of Illinois. This activity will introduce the lifecycle of plastics and sustainability to elementary-age students from underrepresented minorities.

This CAREER award supports theoretical and computational research and education that seeks to use sequence-designed polymers to emulate biological macromolecules. Sequence control is key to addressing a grand challenge in polymer science: design soft materials that respond to stimuli, encode information, and form complex structures. The PI's group will take cues from biopolymers that undergo specific binding due to information encoded in charged monomer sequence, and establish the design rules needed to harness charge patterning for polymer self-assembly.

In this work, the PI will systematically explore how charge sequence dictates the interaction strength and specificity between oppositely-charged polyelectrolytes. Solutions of these polymers undergo associative phase separation into complex coacervates, which serve as an ideal model system for connecting charge patterning to macroscopic phase behavior and nanoscale assembly.

Monte Carlo simulation and hybrid particle/field simulation methods will be used to probe: 1) local monomer placement and patterns that will control interaction strength, and 2) contour-length charge variation that can promote interaction specificity via complementary sequences. Both sequence length scales will provide the basis for using charge sequence to encode self-assembly. This research will elucidate principles of using sequence-defined polymers to drive polymer design, using simulation methods uniquely suited to addressing the disparate length scales connecting monomer-level sequence to morphological or macroscopic phenomena.
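As a minimal caricature of the Monte Carlo side (an invented toy, not the PI's hybrid particle/field method): treat two aligned chains as sequences of ±1 charges whose contact energy is lowered by opposite charges, and anneal one chain against a fixed template with the standard Metropolis rule.

```python
import math
import random

def pair_energy(s, t):
    """Aligned contact energy: opposite charges (product -1) attract."""
    return sum(si * ti for si, ti in zip(s, t))

def metropolis(s, t, beta=5.0, steps=2000, seed=0):
    """Metropolis sampling of single-site charge flips on chain s."""
    rng = random.Random(seed)
    energy = pair_energy(s, t)
    for _ in range(steps):
        i = rng.randrange(len(s))
        delta = -2 * s[i] * t[i]  # energy change if site i flips sign
        if delta <= 0 or rng.random() < math.exp(-beta * delta):
            s[i] = -s[i]
            energy += delta
    return s, energy

template = [1, -1, -1, 1, 1, -1]
chain, energy = metropolis([1, 1, 1, 1, 1, 1], template)
# at this low temperature the chain anneals toward the template's complement
```

The sampled chain ends up (near-)complementary to the template, a cartoon of the sequence-specific pairing that the contour-length charge variation in the project is meant to encode.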


Agency: NSF | Branch: Standard Grant | Program: | Phase: COMMS, CIRCUITS & SENS SYS | Award Amount: 361.77K | Year: 2015

PI: Gaurav Bahl, Mechanical Science and Engineering
University of Illinois at Urbana-Champaign

1- Proposal Title: Towards label-free single virus identification with nano-optomechanofluidics

2- Brief description of project goals:
We aim to experimentally demonstrate simultaneous optical and mechanical sensing of single virus nanoparticles that can permit rapid label-free identification.

3- Abstract:

3a Nontechnical abstract

Throughout history, viral diseases have inflicted great damage to human populations. Swift identification of a viral pathogen can enable a rapid healthcare response for arresting major outbreaks, and even for speedy drug development. This pressing need is highlighted by outbreaks of H1N1, H5N1, SARS, and Ebolavirus over the last decade. Simultaneous sensing of optical and mechanical properties of viruses could permit the rapid identification of individual virus particles without any chemical tests. This is a new perspective in comparison to existing optical-only or mechanical-only methods that provide limited information. This proposal addresses the associated fundamental problems of measurement throughput, sensitivity, and particle identification by means of a novel nanofluidic opto-mechanical resonator. Such devices could some day be deployed in the field for the label-free identification of viral pathogens, and for generating a swift response by healthcare authorities. In pharmacological studies, these devices could assist in drug discovery.

The proposed work is fundamentally interdisciplinary and of high value from an educational perspective. This project provides rich opportunities for the training of students at all levels (graduate, undergraduate, high school) at the intersection of optical physics, solid mechanics, and fluid mechanics, using advanced experimental tools. The STEM education impact of this work will be broadened through the development and distribution of educational activities on the optical measurement of Brownian motion of microparticles. These activities will be targeted towards K-12 students at local schools, with wider distribution through existing on-campus partners. An undergraduate research assistant will also be recruited for the research and educational efforts, with preference towards underrepresented groups.

3b Technical abstract

Currently, fast label-free techniques for detecting viral nanoparticles rely on either photonic sensing or on vibrational mass sensing, but not both, and can only provide limited one-dimensional information. For instance, mechanical methods primarily operate on the principle of mass-loading of a resonator and the associated frequency shift. In this manner, the mass of a particle can be estimated with extremely high resolution, but size and density are not obtainable without additional assumptions. Photonic methods, in contrast, rely on the shift of optical resonance frequency or optical mode splitting. This provides information on the polarizability and approximate size of a nanoparticle, but does not permit further identification. As a result, there remains an ambiguity in the label-free identification of a pathogen (as opposed to mere detection) without the use of specific antibody binding or chemical processes. Having both optical and mechanical properties can shed much-needed light on a single virion's size, mass density, and optical density (or polarizability), and could help narrow down the protein folding and virus structural properties.
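The mass-loading principle mentioned above reduces, for a small added mass, to the standard first-order relation Δf ≈ -f0 · Δm / (2 · m_eff); the device parameters below are purely hypothetical:

```python
def frequency_shift(f0_hz, m_eff_kg, dm_kg):
    """First-order resonance shift from a small added mass dm."""
    return -f0_hz * dm_kg / (2.0 * m_eff_kg)

# hypothetical device: a 10 MHz mode with 1 ng effective mass,
# loaded by a 10 fg virus-scale particle
df = frequency_shift(10e6, 1e-12, 1e-17)
# roughly a 50 Hz downshift
```

Note that the shift depends only on mass: two particles of equal mass but different size or density are indistinguishable, which is exactly the degeneracy the simultaneous optical channel is meant to break.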

This project has multiple objectives:
(1) Elucidate the fundamental limits of sensing single virions with simultaneous optomechanical measurements using a nano-optomechanofluidic device.
(2) Develop models of optical as well as mechanical noise sources, and incorporate the effects of radiation pressure and optomechanical back-action.
(3) Develop a method of throughput enhancement in mechanical resonance sensing, by using simultaneous optical information to spatiotemporally locate the nanoparticles.
(4) Improve the ability to detect and identify single virus particles, not only based on their optical properties but also their mass, through the use of nano-optomechanofluidic resonators.

Agency: NSF | Branch: Standard Grant | Program: | Phase: CAREER: FACULTY EARLY CAR DEV | Award Amount: 500.00K | Year: 2016

This Faculty Early Career Development (CAREER) project will develop a new experimental technique to characterize the nonlinear thermodynamic and kinetic properties of gels. Defying the classical definitions of solid and fluid, gels are both solid-like and fluid-like. They are both ubiquitous components of natural organisms and important engineering materials. Despite their wide applications, the design of these materials at this stage remains mostly trial-and-error due to a lack of fundamental understanding of the complex thermodynamic and kinetic behaviors of gels. The success of this work will lead to a robust and high-throughput technique capable of measuring the thermodynamic and kinetic properties of soft gels under a wide range of conditions and provide a standard toolbox for engineers to realize quantitative designs based on these materials. The PI will also expand a Soft Squishy Lab to combine visual, tactile and hands-on modules to connect human perception of macroscopic properties to the underlying microstructures of soft materials at appropriate levels for K-12 students.

Gels are composed of a crosslinked polymer network and solvent molecules. The crosslinks prevent the long polymers from dissolving in the solvent; rather, the gel swells and shrinks as the small molecules migrate in and out. The solvent uptake is a two-way street: as the solvent diffuses into the network, the network deforms, leading to size and shape changes, while the deformation of the network also affects the rate and amount of solvent diffusing into or out of the network. Both the concepts and the behaviors of gels are sufficiently complex such that ample room exists for additional work to connect principles of mechanics, thermodynamics, and kinetics to experiments. The proposed study will develop a technique based on an indentation method for characterizing the nonlinear thermodynamic and kinetic properties of gels. The new technique will allow for systematic characterization of various types of stimuli-responsive gels under different environmental conditions. Based on a complete set of data from the systematic measurements, an in-depth understanding of the structure-property relations of gels can be achieved. Consequently, a physics-based constitutive model will be built.
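One reason indentation is a natural probe of gel kinetics is the poroelastic scaling of the relaxation time with contact size, t ~ a²/D, where D is an effective solvent diffusivity (a standard scaling argument; the numbers below are hypothetical):

```python
def poroelastic_time(contact_radius_m, diffusivity_m2_s):
    """Characteristic solvent-migration time for contact radius a: a**2 / D."""
    return contact_radius_m ** 2 / diffusivity_m2_s

# hypothetical values: 100-micron contact radius, D = 1e-10 m^2/s
t = poroelastic_time(100e-6, 1e-10)
# on the order of 100 seconds
```

Because the timescale grows with a², repeating an indentation test at several contact radii is one standard way to separate solvent migration from intrinsic viscoelastic relaxation of the network.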

Agency: NSF | Branch: Standard Grant | Program: | Phase: INFORMATION TECHNOLOGY RESEARC | Award Amount: 1.50M | Year: 2015

Catalyzed by the NSF Big Data Hub program, the Universities of Illinois, Indiana, Michigan, North Dakota, and Iowa State University have created a flexible regional Midwest Big Data Hub (MBDH), with a network of diverse and committed regional supporting partners (including colleges, universities, and libraries; non-profit organizations; industry; and city, state and federal government organizations) that bring data projects from multiple private, public, and government sources and funding agencies. The NSF-funded SEEDCorn project will be the foundational project to energize the activities of MBDH, leveraging partner activities and resources, coordinating existing projects, initiating 20-30 new public-private partnerships, sharing best practices and data policies, starting pilots, and helping to acquire funding. The result of SEEDCorn will be a sustainable hub of Big Data activities across the region and across the nation that enables research communities to better tackle complex science, engineering, and societal challenges, that supports the competitiveness of US industry, and that enables decision makers to make more informed decisions on topics ranging from public policy to economic development.

The MBDH is focusing on specific strengths and themes of importance to the Midwest across three sectors: Society (including smart cities and communities, network science, business analytics), Natural & Built World (including food, energy, water, digital agriculture, transportation, advanced manufacturing), and Healthcare and Biomedical Research (which spans patient care to genomics). Integrative rings connect all spokes and will be organized around themes of specific MBDH strengths, including (a) Data Science, where computational and statistical approaches can be developed and integrated with domain knowledge and societal considerations that support the underlying needs of data to knowledge, (b) services, infrastructure, and tools needed to collect, store, link, serve, and analyze complex data collections, to support pilot projects, and ultimately provide production-level data services across the hub, and (c) educational activities needed to advance the knowledge base and train a new generation of data science-enabled specialists and a more general workforce in the practice and use of data science and services.

Further information on the project can be found at http://midwestbigdatahub.org.

Agency: NSF | Branch: Standard Grant | Program: | Phase: CERAMICS | Award Amount: 357.97K | Year: 2014

NON-TECHNICAL DESCRIPTION: The determination of phase diagrams, the thermodynamic underpinning for designing new materials, is a long and arduous process, but it is necessary to provide the basic scientific knowledge upon which new electronic and structural ceramics are based. This research is focused on how to accelerate and improve the gathering of such information, thereby introducing new methodology which can be applied to technologically relevant systems. The approach taken in this research is to make in situ measurements at temperatures up to 4,000 F in air, using high-intensity, rapid X-ray synchrotron measurements coupled with accurate data analysis. It is estimated that more scientific information can be obtained in one-fifth of the time currently taken to determine a phase diagram, with significantly more crystallographic information than what is usually obtained. A doctoral student and several undergraduate students are engaged in the research. As well, the PI participates in a variety of activities to promote science and engineering to middle and high school students.

TECHNICAL DETAILS: This research aims to demonstrate a new, efficient and highly accurate method to determine ternary phase diagrams, using rapid, in situ, in-air synchrotron instrumentation, coupled with accurate, quantitative data analysis by the Rietveld method. This work aims to revolutionize the slow, ex situ methods of collecting data and reduce the process five-fold. In addition, thermal expansion coefficients and crystal structures will be analyzed for any new phases discovered in the ternary phase diagram. The hafnia-titania-tantala ternary was selected as the model system. The ternary system is largely unexplored, but even the elements are technologically interesting. For example, tantala is a promising candidate next-generation material for application in a wide range of microelectronics and integrated micro-technologies. Its dielectric constant is six times that of silica. Tantala compounds have applications as dielectric layers for storage capacitors in dynamic random access memories (DRAMs) in computers, gate oxides in field effect transistors, insulating layers in thin film electroluminescent devices, sensor layers in biological and chemical sensors, anti-reflection coatings for silicon solar cells, charge-coupled devices and corrosion-resistant materials. Hafnium is a good absorber of neutrons and is used in the control rods of nuclear reactors, and hafnium tantalate is of interest for structural nuclear applications. Hafnia is also used as an ultra-high temperature structural and thermally insulating material. This work is timely because current research into the next generation of electronic devices is based on doped, amorphous tantala, and the metastable and stable phases in crystalline tantala remain largely uncharacterized, even though its crystalline dielectric properties are significantly superior to its amorphous properties. An additional strategic benefit is the training of a doctoral student in this cutting-edge research technique.
As well, undergraduate students are engaged in the research by helping to make samples for the synchrotron experiments and assisting at the around-the-clock beam line experiments. The PI participates in a variety of activities to promote science and engineering to middle and high school students (e.g., Project Lead the Way for grade school and high school teachers, and GAMES, engineering experiences for girls).

Agency: NSF | Branch: Standard Grant | Program: | Phase: Data Infrastructure | Award Amount: 133.88K | Year: 2014

An important focus of scientific research is understanding the complex interactions between human societies and the climatic, physical, and biological environments on which they depend, and which they, in turn, influence. Past environments, of course, were often quite different from those we experience today. Furthermore, important processes of social and environmental change operate slowly and are sometimes visible only when viewed over decades or centuries. In order to study social and natural processes operating over anything other than short periods in recent decades, long-term environmental knowledge specific to particular locations and times is essential.

Unfortunately, state-of-the-art data on past environments are difficult to find and even more difficult to integrate and interpret. Under the direction of Dr. Kintigh and his colleagues, the project will develop plans for an online tool, SKOPE (Synthesized Knowledge of Past Environments), that will provide state-of-the-art information about the environment experienced by humans at a given place and time, past or present. Using explicit scientific procedures, it will process the data to yield a cutting-edge synthesis of environmental information specifically tailored to the user's request. Initial development is planned for the Southwest US, but SKOPE will be designed to be easily extended to other places and times. Once implemented, SKOPE will be freely accessible on the Internet. It will be applicable in such fields as sustainability, archaeology, sociology, economics, anthropology, and ecology, as well as in resource management and planning. For example, it will directly benefit archaeologists comparing the social consequences of long-term climate change across different civilizations and ecologists investigating long-term changes in biodiversity. Planners could use its long-term environmental reconstructions to investigate vulnerabilities in existing infrastructure that lie outside the experience provided by the historic record. Students and members of the general public could learn how ancient environments differed from contemporary ones in places they study, inhabit, or visit.

SKOPE will access online databases of modern and historic observational data (for example, on rainfall, temperature, and plant and animal distributions) as well as databases of indicators for past environments (for example, rainfall reconstructed from tree-ring widths, and plant remains and animal bones found in dated archaeological sites). Scientific experts in the interpretation of different classes of data will develop procedures that transform these diverse environmental observations and indicators into a thoroughly documented scientific synthesis of the environment for the time and place of interest. The design of SKOPE will be guided by the needs stated by potential users in meetings with academic, public-, and private-sector professionals. The project will identify key sources of data on current and past environments and will generalize the analytical procedures required to achieve useful synthesis. Subsequent development could extend the tool to additional areas and times and incorporate more classes of environmental data.

Agency: NSF | Branch: Standard Grant | Program: | Phase: EXTRAGALACTIC ASTRON & COSMOLO | Award Amount: 427.00K | Year: 2014

Computational modeling of astrophysical phenomena has grown in sophistication and realism, leading to a diversity of complex simulation platforms, each utilizing its own mechanism and format for representing particles and fluids. Similarly, most of the data analysis is conducted with tools developed in isolation and targeted to a specific simulation platform or research domain; very little systematic and direct technology transfer between astrophysical researchers exists. The yt project is a parallel analysis and visualization toolkit designed to support a collaborative community of researchers as they focus on answering physical questions, rather than the technical mechanics of reading, processing and visualizing data formats. This project will enable the development of advanced, physics-based modules that apply universally across simulation codes, advancing scientific inquiry and enabling more efficient utilization of computational and human resources. In doing so, it will help advance a myriad of research goals from the study of black hole binaries to the growth of cosmic structure. In addition, the project will serve as a touchstone for collaboration and cross-code utilization between many groups studying diverse phenomena. Moreover, the project will be developed through a community-oriented process, engaging a wide range of participants.

The infrastructure development in this research will enable these capabilities by broadening the applicable simulation platforms within yt, enabling cross-code utilization of microphysical solvers and physics modules and in situ analysis, and developing collaborative platforms for the exploration of astrophysical datasets. In particular, it will develop the capabilities of yt in three primary mechanisms. The first is to enable support for additional, fundamentally different simulation platforms such as smoothed particle hydrodynamics, unstructured mesh, and non-Cartesian coordinate systems. The second is to provide simulation instrumentation components to ease the process of developing simulation codes, interfacing and exchanging technology between those simulation codes, and to enable deeper, on-the-fly integration of astrophysical simulation codes with yt and other analysis toolkits. The final focus is on developing interface components to enable collaborative and interactive exploration of data utilizing web-based platforms. An explicit goal of this SI2-SSE project is the development of collaborative relationships between scientists, furthering the development of the field as a whole. By conducting all business in the open with a focus on developing and encouraging collaborative, welcoming environments for contributors and researchers, this SSE will help to foster a level playing field that is more accessible to all parties, particularly women and underrepresented minorities. An explicit milestone of this project is to streamline the process of conducting direct outreach through scientific visualization, greatly expanding the domains and individuals engaged in STEM-based public outreach.

Agency: NSF | Branch: Standard Grant | Program: | Phase: COMM & INFORMATION FOUNDATIONS | Award Amount: 306.11K | Year: 2016

Today's data centers rely on advanced methods to efficiently store increasing volumes of data. To protect against loss of data, data is stored redundantly on multiple disks. At the large scale of clusters of thousands of disks, simple replication of data is inefficient and not an option. To address the challenge of efficient, reliable and secure storage of data at a large scale, the project uses various combinations of algebraic and combinatorial methods. New constructions are given for the efficient recovery of data in case of disk failure. New methods are introduced to optimize bounds for storage capacity. New secure schemes are developed to ensure that information can only be obtained from the combined data of multiple disks. The research uses a novel algebraic approach to fundamental aspects of data storage and data access. Undergraduate and graduate students will be involved, working on projects with both a theoretical and a computational component.

Algorithms for data storage encode and divide data over several disks. The encoding challenge is to optimize the allocation of storage space between primary data and repair data. The optimization is analyzed for the general case in the setting of entropy inequalities and for the linear case in the setting of rank inequalities for matroids. The main focus is on three aspects. 1) Outer bounds: Optimize the use of available storage space under various combinations of constraints. 2) Multiple-access: Use coordinated encoding of data from different sources to add error-correction without sacrificing storage capacity. 3) Small alphabets: Using a novel approach, analyze nontraditional coset schemes that are defined over smaller fields.
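The trade-off described above between primary data and repair data can be made concrete with the simplest possible erasure code: a single XOR parity disk. The Python sketch below is illustrative only and is not drawn from the project; it shows that three data disks plus one parity disk cost 33% extra storage yet survive any single disk failure, whereas triple replication would cost 200% extra.

```python
from functools import reduce

def xor_blocks(blocks):
    """Bytewise XOR of equal-length byte blocks."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

def encode(data_blocks):
    """Append one parity block computed as the XOR of all data blocks."""
    return data_blocks + [xor_blocks(data_blocks)]

def recover(blocks, lost_index):
    """Rebuild the block at lost_index by XOR-ing all surviving blocks."""
    survivors = [b for i, b in enumerate(blocks) if i != lost_index]
    return xor_blocks(survivors)

# Three data "disks" plus one parity disk: 33% storage overhead, and any
# single lost disk is recoverable from the survivors.
disks = encode([b"AAAA", b"BBBB", b"CCCC"])
assert recover(disks, 1) == b"BBBB"   # lost data disk rebuilt
assert recover(disks, 3) == disks[3]  # lost parity disk rebuilt
```

Real large-scale codes generalize this idea to tolerate multiple failures and to minimize the data read during repair, which is where the entropy and matroid rank inequalities mentioned above come into play.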

Agency: NSF | Branch: Standard Grant | Program: | Phase: I-Corps | Award Amount: 50.00K | Year: 2017

The broader impact/commercial potential of this I-Corps project is to provide a filter to businesses, government agencies, and individuals to eliminate nitrate from water. Nitrate in drinking water has significant impacts on society because it causes serious human health problems. It also promotes the growth of algae in rivers, lakes, and coastal areas, which interferes with tourism, recreation as well as other commercial uses of these areas. The two principal sources of nitrate entering water supplies are the runoff of nitrate fertilizers from agricultural fields and the discharge from municipal and industrial waste-water treatment plants. Intercepting these waters and eliminating the nitrate from them before it enters the environment constitutes a major segment of the potential for commercialization of the technology. Drinking water treatment plants are a potential commercial user, as are homeowners with water purification systems.

This I-Corps project will explore the potential commercialization of a filter to eliminate nitrate from water. It consists of a bed of sand grains coated with an iron-bearing clay mineral. Before coating, the clay is engineered to react with nitrate. Once coated, the sand/clay mineral attracts and reacts with nitrate in the flowing water, changing it to harmless nitrogen gas or ammonium. Iron in the clay is the key for changing the nitrate. The intellectual merit of the project is that three independent research findings were combined to make the filter. These include processes to reverse the charge on the clay surfaces, changing the iron charge to react with nitrate, and finally engineering a system with suitable flow characteristics.

Agency: NSF | Branch: Continuing grant | Program: | Phase: PLANT GENOME RESEARCH PROJECT | Award Amount: 4.78M | Year: 2013

PI: Elizabeth Ainsworth (University of Illinois, Urbana-Champaign/USDA-ARS)

CoPIs: Andrew Leakey and Patrick Brown (University of Illinois, Urbana-Champaign) and Lauren McIntyre (University of Florida)

Key Collaborator: Thomas Brutnell (Donald Danforth Plant Science Center)

Tropospheric ozone is the most damaging air pollutant to crops. Today, oxidative stress arising from ozone exposure is reducing potential maize yields by up to 10%, which in 2011 would have been valued at $7,646 million. This project couples the unique capabilities of Free Air Concentration Enrichment (FACE) technology, which provides controlled elevation of ozone in open-air at field scale, with the power of the vast genetic resources in maize and transcriptome profiling. It will provide a foundation for crop improvement by quantifying genetic variation in response to elevated ozone among 200 inbred and 100 hybrid maize lines in the field. The project will use high-throughput phenotyping of ozone impacts on maize growth, senescence, leaf metabolism and reproductive processes to identify traits that correlate with yield loss. It will identify the genes and gene networks underpinning the ozone response in the most extreme tolerant and susceptible lines, and their hybrids, by integrating transcriptome analysis and detailed physiological analysis in inbred and hybrid maize. The project will develop or select existing biparental populations derived from tolerant and sensitive parents to identify QTL and eQTL for ozone tolerance. Finally, the project will assess crosstalk between ozone and biotic stress response gene networks in maize.

This work will address key mechanistic hypotheses about how oxidative stress leads to transcriptional reprogramming of antioxidant and carbon metabolism, as well as hormone, senescence and defense pathways. This multifaceted approach is essential because multiple physiological drivers of yield are sensitive to oxidative stress from ozone exposure. Consequently, oxidative stress tolerance is undoubtedly a complex, polygenic trait. The broad application of quantitative genetic tools coupled to gene expression profiling and biochemical and physiological analyses of diverse germplasm makes the challenge of discovering the foundation for ozone tolerance tractable for the first time. With regard to outreach and training, this Mid-Career Investigator Award in Plant Genome Research (MCA-PGR) will help re-tool two mid-career plant physiologists who study plant physiological and agronomic responses to environmental change with training in genomics and quantitative genetics. This new expertise will allow them and their post-docs and graduate students to address major challenges in agriculture and ecology by leveraging the full power of genomics through bioinformatics, quantitative genetics and expression profiling using next-generation sequencing technologies. In addition, the project will provide outreach through an after-school program on plant biology at a local middle school and a summer science camp for high school girls. Pollen images and pollen viability data, sequencing and proteomics datasets developed in this project will be publicly available at public repositories such as the iPlant Collaborative, NCBI GEO (www.ncbi.nlm.nih.gov/geo/), and EMBL-EBI PRIDE (http://www.ebi.ac.uk/pride/). Germplasm developed in this project will be available through the Maize Genetics Cooperation Stock Center (http://maizecoop.cropsci.uiuc.edu/).

Agency: NSF | Branch: Standard Grant | Program: | Phase: Dynamics, Control and System D | Award Amount: 295.08K | Year: 2015

Since its development in the early 1980s, atomic force microscopy (AFM) has been one of the most useful imaging tools in the fields of nano- and biosciences. This award supports theoretical and experimental studies of a new type of AFM realized through constructive use of intentional nonlinear resonance, enabling the utilization of high-frequency measurements for sensing. Apart from proving the efficacy of the concept of constructively utilizing intentional nonlinearity in nano/micro designs to achieve performance not otherwise attainable, in a broader sense this project's approach can act as a testbed for assessing how strong nonlinearity incorporated into a complex mechanical system can lead to drastic performance gains. This work can be potentially transformative, since it can provide a new paradigm of intentionally nonlinear AFM technology based on higher-frequency sensing, with demonstrated capacity for drastically enhanced sensitivity and performance. The resulting AFM sensing capability will be an invaluable tool in fields such as the nano- and biosciences.

Detailed analytical, computational and experimental studies will be performed of a new microcantilever beam design enabling higher-frequency nonlinear AFM. Under dynamic-mode operation, an intentionally designed 1:n internal resonance between the two leading bending modes of the AFM microcantilever, which incorporates an inner silicon paddle, leads to magnification of high-frequency harmonics in the paddle response, which is the basis for AFM of improved sensitivity. The research team will develop the significantly enhanced AFM measurements of sample material properties and topography, achieved through sensing of higher harmonics in the response. The research team will systematically study, optimize, extend and validate this promising concept through theoretical studies to characterize the paddle's response to different types of interaction forces, and will perform an extended series of experimental tests to assess the sensitivity of high-order internal resonance designs to changes in topology and material properties. Moreover, multi-paddle AFM designs incorporating multiple simultaneous internal resonances will be analyzed for quantitative characterization, and related microfabrication issues will also be addressed.

Agency: NSF | Branch: Standard Grant | Program: | Phase: ENERGY,POWER,ADAPTIVE SYS | Award Amount: 376.33K | Year: 2015

Energy usage of data centers is becoming an increasingly important concern, as the energy-related operating costs of data centers have become a dominant part of the total cost of ownership, and their power demands represent some of the fastest growing loads on the electric grid. Consequently, data centers today are a significant contributor to global carbon emissions, making the design of data centers with improved efficiency an important societal need. The goal of this project is to reduce the power conversion losses in data centers through innovations in power electronics and system control, with the aim of extreme efficiency. This research program is expected to have far-reaching consequences for the economic and environmental impact of data centers. If successful, the large-scale adoption of the proposed power delivery architecture could save 4.44 billion kWh of energy in US data centers, based on conservative 2010 data and preliminary experimental results from an early proof-of-concept demonstration. These energy savings would in turn reduce harmful emissions equivalent to removing 850,000 cars from U.S. roads. Research will be conducted by undergraduate and graduate students who will be provided with opportunities to develop broad research and education skills. The research will complement existing educational efforts, and will help enhance community outreach programs. Additionally, educational activities include the development of power and energy focused educational modules for middle school teachers.

This research will explore a radically different power delivery architecture that is designed specifically for multi-machine environments such as racks of servers, and provides significant improvement both in terms of volume savings and power efficiency. The proposed power conversion architecture exploits the large number of servers available in today's data centers, and achieves extreme power conversion efficiency by electrically connecting servers in series, similar to solutions developed for solar PV and battery applications. Stringent voltage regulation and operational requirements will be met through the combination of a) isolated, high-efficiency, high power density differential power converters that maintain each server voltage within specifications, b) hardware circuitry that enables safe and reliable isolation and hot-swapping of malfunctioning servers, and c) system control that achieves high efficiency, and which handles start-up and shutdown of individual servers and racks of servers. The research plan includes major research components in the areas of power converter topologies; design, fabrication, and testing of high density switched-capacitor power converters; development of isolation and hot-swapping circuitry for safe and reliable operation; bidirectional hysteresis control for improved light-load efficiency; and evaluation and benchmarking against existing solutions.
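The efficiency argument behind the series-stacked architecture can be illustrated with a back-of-the-envelope sketch (all numbers below are hypothetical, not taken from the award): every server in a series stack carries the same stack current, so each differential converter processes only its server's deviation from the average load rather than the full load.

```python
def stack_power(voltage, currents):
    """Total load power and the power the differential converters process.

    In a series stack, every server carries the common stack current; each
    differential converter sources or sinks only its server's deviation
    from the average current, so processed power scales with imbalance.
    """
    avg = sum(currents) / len(currents)
    total = voltage * sum(currents)
    processed = voltage * sum(abs(i - avg) for i in currents)
    return total, processed

# Four hypothetical 12 V servers drawing slightly unequal currents (amps).
total, processed = stack_power(12.0, [10.0, 11.0, 9.0, 10.0])
# The converters handle only the 24 W of imbalance out of a 480 W load
# (5%), whereas a conventional per-server converter chain would process
# essentially the full load.
```

Because converter losses apply only to the processed power, well-balanced racks see conversion losses shrink roughly in proportion to the load imbalance, which is the source of the "extreme efficiency" claim in the abstract.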

Agency: NSF | Branch: Continuing grant | Program: | Phase: ELECT, PHOTONICS, & MAG DEVICE | Award Amount: 47.49K | Year: 2015

The objective of this proposal is to establish the methodology to perform back-gated scanning tunneling microscopy on topological insulators, study these materials on a local scale with spectroscopy and gating, and to explore candidate materials for two-dimensional topological insulators. While gated STM has been highly successful in studying graphene, it has yet to be achieved in topological insulators.
The intellectual merit is the following. The ability to gate while performing STM will provide a tremendously powerful control knob for manipulating the properties of topological insulators and exploring the role of interactions. The discovery of ultra-thin films of topological insulators will be important to realizing the quantum spin Hall effect.
The broader impacts are the following. Gated devices will provide the ability to position the Fermi energy near the Dirac point or away from the bulk bands and into the most interesting regimes for future applications. The edge modes at domain wall boundaries in 2D topological insulators are potentially useful for realizing one-dimensional dissipationless spin transport. Undergraduates, graduate students and post-docs will be trained on materials and instruments at the forefront of today's research. The PI's integrated outreach and education activities will expose talented high school students to cutting edge research. The program to effectively mentor post-doctoral scholars in the department will impact the training of future scientists. The PI's ongoing collaboration with the Lynch School of Education will have a real, measurable impact on the training and numbers of science teachers in middle and high schools in urban areas.

Agency: NSF | Branch: Standard Grant | Program: | Phase: Big Data Science &Engineering | Award Amount: 185.00K | Year: 2015

Electricity is the lifeblood of our society; therefore, providing a reliable and efficient electricity supply is vital for ensuring human welfare and sustainable economic growth. A pivotal need in ensuring reliable operation of the US power grid is the development of sophisticated and robust tools for monitoring and anomaly detection. To this end, this research project aims to develop robust and scalable data-driven inference algorithms for detecting and isolating the occurrence of undesirable events that could threaten the integrity of the grid. In this regard, the combination of tools and methods on which the project will rely, namely (i) power system reliability modeling and analysis and (ii) statistical signal processing and detection and estimation theory, will result in a unique interdisciplinary collaboration program.

The proposed framework relies on large datasets obtained with phasor measurement units (PMUs) located across the system. By exploiting the statistical properties of voltage phase angle measurements obtained from the aforementioned PMUs, algorithms will be developed to detect and identify undesirable events in power grids, e.g., outages in transmission lines and other assets, in near real-time. Specifically, the ultimate objective of this research is to develop a data-driven framework for real-time detection of undesirable events in power systems that is robust and highly scalable. The framework builds on existing powerful tools from the theory of quickest change detection (QCD), and will provide techniques for partitioning the graph describing the connectivity of a power system, and PMU placement to allow these QCD-based tools to be exploited in large scale systems such as the US power grid. Additionally, the research will explore the challenging problem of explicitly incorporating the sparsity structure of the undesirable events in our QCD-based algorithms to make them scalable to multiple events.
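Quickest change detection is classically built on statistics such as Page's CUSUM, which accumulates evidence that the data distribution has shifted. The Python sketch below is a generic illustration under an assumed Gaussian mean-shift model; the model, data, and threshold are hypothetical and not the project's actual PMU framework.

```python
def cusum_alarm(samples, mu0, mu1, sigma, threshold):
    """Page's CUSUM test for a shift in mean from mu0 to mu1.

    Accumulates the Gaussian log-likelihood ratio of each sample, clamps
    the running statistic at zero, and raises an alarm once it crosses
    the threshold. Returns the 1-based alarm index, or None if no alarm.
    """
    stat = 0.0
    for n, x in enumerate(samples, start=1):
        # Log-likelihood ratio of N(mu1, sigma^2) versus N(mu0, sigma^2).
        llr = ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        stat = max(0.0, stat + llr)
        if stat > threshold:
            return n
    return None

# Simulated phase-angle residuals: near 0.0 until an "outage" shifts the
# mean to 1.0 at sample 6; the alarm fires shortly after the change.
pre = [0.1, -0.2, 0.05, 0.0, -0.1]
post = [0.9, 1.1, 1.0, 0.95, 1.05]
alarm = cusum_alarm(pre + post, mu0=0.0, mu1=1.0, sigma=0.3, threshold=5.0)
```

Raising the threshold lowers the false-alarm rate at the cost of detection delay; the research described above extends this kind of single-stream test to sparse, multi-event settings across a grid-scale sensor graph.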

The Regents Of The University Of California and University of Illinois at Urbana - Champaign | Date: 2015-10-14

A hemostatic composition is provided. The hemostatic composition includes a hemostatically effective amount of a hemostatic agent that includes a nanoparticle and a polyphosphate polymer attached to the nanoparticle. Also provided are medical devices and methods of use to promote blood clotting.

University of Illinois at Urbana - Champaign | Date: 2014-01-30

A system and methods for analyzing land use and productivity. The invention relates to land-use analysis through detecting, monitoring and evaluating changes in particular land regions of interest, to the analysis of changes in such land use, and to the forecasting of in-season productivity of vegetation in the region of interest. The system and methods facilitate the automatic preparation of reports for a selected parcel of land, evaluating changes in land use and creating a quantitative report for one or more land regions of interest. The system and methods are useful for assessing compliance with government regulations or standards regarding land use, as well as for providing a predictive land-use productivity model for use in commodity trading.

University of Illinois at Urbana - Champaign | Date: 2016-01-22

A method of making silicone microspheres comprises nebulizing a silicone precursor solution comprising one or more oligomeric dimethylsiloxanes, a catalyst and a solvent into an aerosol comprising a plurality of droplets. Each droplet comprises the silicone precursor solution. The droplets are entrained in a gas which is flowed through a reaction zone comprising light energy and/or heat energy. Upon exposure of the droplets to the light energy and/or the heat energy, the solvent evaporates and the one or more oligomeric dimethylsiloxanes are polymerized. Thus, silicone microspheres are formed from the droplets of the aerosol.

University of Illinois at Urbana - Champaign | Date: 2014-09-09

The invention provides simple small-molecule, non-heme iron catalyst systems with broad substrate scope that can predictably enhance or overturn a substrate's inherent reactivity preference for sp3-hybridized C-H bond oxidation. The invention also provides methods for selective aliphatic C-H bond oxidation. Furthermore, a structure-based catalyst reactivity model is disclosed that quantitatively correlates the innate physical properties of the substrate with the site-selectivities observed as a function of the catalyst. The catalyst systems can be used in combination with oxidants such as hydrogen peroxide to effect highly selective oxidations of unactivated sp3 C-H bonds over a broad range of substrates.

University of Illinois at Urbana - Champaign | Date: 2014-01-31

J-resolved LASER and semi-LASER sequences for localized two-dimensional magnetic resonance spectroscopy are disclosed. After a delay time τ1 from application of an excitation RF pulse, a first pair of slice-selective adiabatic full-passage (AFP) pulses, separated by an inter-pulse interval τ2, is applied. At the end of the sequence, a final pair of slice-selective AFP pulses, separated by a time of τ2/2 + τ1 + t1/2, is applied, where t1 is the duration of an incremental evolution period corresponding to the indirect dimension of a 2D J-resolved spectrum. In the case of J-resolved LASER, an additional intermediate pair of slice-selective AFP pulses, separated by an inter-pulse interval τ2, is applied prior to the final pair of AFP pulses. An echo signal is acquired at time t1/2 after the application of the last AFP pulse. The method suppresses chemical-shift artifacts, J-refocused artifactual peaks, and sensitivity to RF field inhomogeneity, each being caused, at least in part, by the use of a 3 T or higher main magnetic field.

University of Illinois at Urbana - Champaign | Date: 2015-08-20

A composite dry adhesive includes (a) an adhesive layer comprising a shape memory polymer and (b) a resistive heating layer comprising a shape memory polymer composite on the adhesive layer. The shape memory polymer composite includes conductive particles dispersed in a shape memory polymer matrix, where the conductive particles have a concentration sufficient to form a conductive path through the resistive heating layer.

University of Illinois at Urbana - Champaign and Duke University | Date: 2014-09-25

The present disclosure provides tetra-substituted cyclobutane inhibitors of Androgen Receptor Action, and methods of using such inhibitors, for the treatment of hormone-refractory cancers.

University of Illinois at Urbana - Champaign | Date: 2015-12-18

A zinc titanate reactive adsorbent comprising multiphase, polycrystalline nanofibers comprising ZnTiO3, ZnO, TiO2, and Zn2TiO4.

University of Illinois at Urbana - Champaign | Date: 2014-04-11

The invention provides multifunctional supramolecular self-assembled nanoparticles (SSNPs) comprising a set of rationally designed components that collectively facilitate efficient intestinal absorption of siRNA. The nanoparticles can induce potent TNF-α silencing in macrophages. A single gavage of SSNPs in mice depleted systemic TNF-α production at an siRNA dose as low as 50 μg/kg, and protected the mice from lipopolysaccharide-induced hepatic injury.

University of Illinois at Urbana - Champaign | Date: 2014-06-25

Methods for sensitizing a solid tumor cell to an agent that activates the extrinsic apoptotic pathway and treating a solid tumor using a combination of 2-deoxy-D-glucose and an agent that activates the extrinsic apoptotic pathway are described.

University of Illinois at Urbana - Champaign | Date: 2015-10-13

A crutch includes a pole, a cuff, a handle, a forearm support member, and a second forearm support member. The cuff and the handle are attached to the pole. The forearm support member, comprising an actuator, is disposed between the cuff and the handle and is moveable between a constricted position and a non-constricted position. The second forearm support member is disposed within the forearm support member. The actuator is configured to move the forearm support member between the constricted position and the non-constricted position.

University of Illinois at Urbana - Champaign and New York University | Date: 2015-12-15

Methods and systems of applying physical stimuli to tissue are disclosed. The methods can include reducing or suppressing pancreatitis in a subject by administering a low magnitude, high frequency mechanical signal on a periodic basis and for a time sufficient to reduce or suppress pancreatitis. The methods can include enhancing healing of damaged tissue in a subject by administering to the subject a low magnitude, high frequency mechanical signal on a periodic basis and for a time sufficient to treat the damaged tissue. The systems can include a device for generating a low magnitude, high frequency physical signal and a platform for applying the low magnitude, high frequency physical signal to the subject for a predetermined time.

University of Illinois at Urbana - Champaign | Date: 2015-12-14

The invention generally relates to the fields of drug delivery and cell capture. In particular, the invention relates to amphiphilic dendron-coils, micelles thereof, and their use as drug delivery vehicles and/or for cell capture.

Rush University Medical Center and University of Illinois at Urbana - Champaign | Date: 2015-01-14

A method for monitoring a treatment of a subject having a musculoskeletal disorder is provided. The method includes measuring a first expression level of at least two biomarkers at a treatment site prior to the treatment and measuring a second expression level of the at least two biomarkers at the treatment site after the treatment begins. The method further includes comparing the first expression level of the at least two biomarkers prior to the treatment to the second expression level of the at least two biomarkers post-treatment and continuing the treatment, altering the treatment or stopping the treatment based on the comparison. A method of treating a musculoskeletal disorder in a subject is also provided. The method includes removing an aggrecan-hyaluronan matrix from a treatment site in the subject.

University of Illinois at Urbana - Champaign | Date: 2015-05-19

A method of creating crumples in a monolayer entails contacting a monolayer comprising a two-dimensional material with a thermally contractible polymer, and heating the thermally contractible polymer to contract the polymer and induce buckling of the monolayer, where a plurality of crumples are formed in the monolayer due to the buckling. A device having a crumpled microstructure includes a monolayer comprising a two-dimensional material and including a plurality of crumples.

Semprius and University of Illinois at Urbana - Champaign | Date: 2014-03-13

Provided are optical devices and systems fabricated, at least in part, via printing-based assembly and integration of device components. In specific embodiments the present invention provides light emitting systems, light collecting systems, light sensing systems and photovoltaic systems comprising printable semiconductor elements, including large area, high performance macroelectronic devices. Optical systems of the present invention comprise semiconductor elements assembled, organized and/or integrated with other device components via printing techniques that exhibit performance characteristics and functionality comparable to single crystalline semiconductor based devices fabricated using conventional high temperature processing methods. Optical systems of the present invention have device geometries and configurations, such as form factors, component densities, and component positions, accessed by printing that provide a range of useful device functionalities. Optical systems of the present invention include devices and device arrays exhibiting a range of useful physical and mechanical properties including flexibility, shapeability, conformability and stretchability.

University of Illinois at Urbana - Champaign | Date: 2014-03-20

In an aspect, the present invention provides stretchable, and optionally printable, components such as semiconductors and electronic circuits capable of providing good performance when stretched, compressed, flexed or otherwise deformed, and related methods of making or tuning such stretchable components. Stretchable semiconductors and electronic circuits preferred for some applications are flexible, in addition to being stretchable, and thus are capable of significant elongation, flexing, bending or other deformation along one or more axes. Further, stretchable semiconductors and electronic circuits of the present invention are adapted to a wide range of device configurations to provide fully flexible electronic and optoelectronic devices.

University of Illinois at Urbana - Champaign | Date: 2015-09-22

Provided herein are embodiments relating to porcine reproductive and respiratory syndrome (PRRS) virus, compositions comprising the virus, and methods of using the virus. The virus may be used to immunize a mammal, including swine. Methods for generating an immune response against PRRS virus in swine by administering a composition comprising the virus are provided.

University of Illinois at Urbana - Champaign | Date: 2015-03-13

An apparatus for depositing a coating on a substrate at atmospheric pressure comprises (a) a plasma torch comprising a microwave source coupled to an antenna disposed within a chamber having an open end, the chamber comprising a gas inlet for flow of a gas over the antenna to generate a plasma jet; (b) a substrate positioned outside the open end of the chamber a predetermined distance away from a tip of the antenna; and (c) a target material to be coated on the substrate disposed at the tip of the antenna.

University of Illinois at Urbana - Champaign | Date: 2015-04-13

Methods and kits for diagnosing and treating type I diabetes based upon the expression of macrophage-specific Chymotrypsin-Like Elastase Family, Member 3B, either alone or in combination with sialic acid-binding immunoglobulin-like lectin-1, are provided.

Surf Canyon Incorporated and University of Illinois at Urbana - Champaign | Date: 2014-10-10

A method and apparatus for utilizing user behavior to immediately modify sets of search results so that the most relevant documents are moved to the top. In one embodiment of the invention, behavior data, which can come from virtually any activity, is used to infer the user's intent. The updated inferred implicit user model is then exploited immediately by re-ranking the set of matched documents and advertisements to best reflect the information need of the user. The system updates the user model and immediately re-ranks documents and advertisements at every opportunity in order to constantly provide optimal results. In another embodiment, the system determines, based on the similarity of result sets, if the current query belongs in the same information session as one or more previous queries. If so, the current query is expanded with additional keywords in order to improve the targeting of the results.
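The two ideas in this abstract (detecting that consecutive queries belong to one information session by comparing their result sets, and re-ranking results against an inferred user model) can be sketched roughly as follows. The Jaccard threshold, the click-term boost, and all function names are illustrative assumptions, not details from the patent:

```python
def same_session(prev_results, curr_results, threshold=0.3):
    """Infer whether two queries belong to the same information session
    by the Jaccard similarity of their result sets (illustrative threshold)."""
    prev, curr = set(prev_results), set(curr_results)
    if not prev or not curr:
        return False
    return len(prev & curr) / len(prev | curr) >= threshold

def rerank(results, base_scores, clicked_terms):
    """Re-rank results by boosting documents that match terms drawn from
    the user's recent clicks (a stand-in for the implicit user model)."""
    def score(doc):
        boost = sum(1 for t in clicked_terms if t in doc.lower())
        return base_scores[doc] + boost
    return sorted(results, key=score, reverse=True)
```

In the patented system the model is updated and the re-ranking applied at every user action; the sketch only shows a single pass.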

Agency: NSF | Branch: Continuing grant | Program: | Phase: Theory, Models, Comput. Method | Award Amount: 636.99K | Year: 2014

Sharon Hammes-Schiffer of the University of Illinois at Urbana-Champaign is supported by an award from the Chemical Theory, Models and Computational Methods program in the Chemistry Division and the Division of Advanced Cyberinfrastructure to develop efficient and accurate computational methods that describe the coupled motion of electrons and protons in chemical and biological systems. The coupling between electrons and protons plays a vital role in a wide range of biological and chemical processes, including photosynthesis, respiration, and energy production in solar cells. It is difficult to develop computational approaches to accurately describe this coupling because electrons and protons are so light that they must be treated quantum mechanically. After extensive assessment and validation, these computational methods will be incorporated into a computer program that will be available to the public. Moreover, these computational methods will be applied to specific processes of biological and chemical relevance to elucidate the underlying fundamental principles that determine the processes. Professor Hammes-Schiffer and her research group maintain a web site that contains software and educational tools related to this topic. The computer programs, tools, demonstrations, and tutorials available on this web site enable scientists in a broad range of fields to learn about the coupled motion of electrons and protons. In addition, this project will facilitate technological and biomedical advances through a better understanding of this important research area. An important example is the design of more effective solar cells and other alternative, renewable energy sources. Another example is the design of more effective drugs through modification of enzymes that rely on the coupling between electrons and protons.

The objective of this project is to develop new theoretical and computational approaches to provide insight into the underlying fundamental principles of proton-coupled electron transfer (PCET) reactions. These reactions play a vital role in a broad range of biological and chemical processes. The specific issues to be examined include the roles of nuclear quantum effects, hydrogen tunneling, and non-Born-Oppenheimer effects, which are thought to be significant in PCET. These issues will be explored using the nuclear-electronic orbital (NEO) approach, in which all electrons and the transferring proton(s) are treated quantum mechanically on the same level with molecular orbital methods or density functional theory. This approach enables the calculation of key quantities in PCET theories for determining rates and mechanisms. It is also applicable to a wide range of other chemical and biological systems. A major goal of this project is to develop algorithms to enhance the computational efficiency of this approach, assess and validate the methodology, and port the NEO code to GAMESS, a general quantum chemistry package available to the public. The NEO method benefits from the optimized components of other parts of the general electronic structure package, and portions of the NEO code will be useful to other scientists. This software development enables calculations that are not currently possible with existing codes. In addition, a web site on PCET is maintained and enhanced to convey useful information to the general community.

Agency: NSF | Branch: Standard Grant | Program: | Phase: ETF | Award Amount: 300.00K | Year: 2015

The project will identify, document, and analyze effective practices in establishing public-private partnerships between High Performance Computing (HPC) centers and industry. With the market analysis firm IDC, the project will conduct a worldwide in-depth survey of 70-80 example partnerships, in the US and elsewhere, between HPC centers of various sizes and private-sector firms.

The project is important to our national economic competitiveness because prior studies have shown that the use of HPC can boost industrial innovation and competitiveness, benefiting the firms in question and their economies. Additionally, it directly supports the Presidential executive order "Creating a National Strategic Computing Initiative" by addressing one of its key guiding principles: to foster public-private collaboration. In particular, the project outcomes will help HPC centers, which serve as an important nexus of our nation's HPC hardware, software, and human investments, improve their innovation and effectiveness in engaging industry for knowledge transfer and workforce development.

In summary, the project implementation will have the following scope and goals:

* Evaluate the nature and status of existing public-private, HPC-centered partnership programs.

* Collect and analyze practices that have worked well in individual partnerships and in the aggregate, along with practices that have not worked well and issues impeding greater success, in order to capture the state of the art and associated best practices in these partnerships.

* Produce and disseminate a quantitative-qualitative report that can serve as a reference guide and compendium of effective practices and lessons learned.

Agency: NSF | Branch: Cooperative Agreement | Program: | Phase: INSTRUMENTATION & FACILITIES | Award Amount: 9.80M | Year: 2012

COMPRES is a community-based consortium, composed of and governed by 57 US member institutions, that supports research into the properties of materials that comprise the interiors of Earth and other planetary bodies, with particular emphasis on high-pressure science and technology and related fields. It is charged with the oversight and guidance of specialized high-pressure laboratories at several national synchrotron facilities. COMPRES supports the operation of beam lines and the development of new technologies for high-pressure research, and advocates for science and educational programs to the various funding agencies. In addition to the US membership, 39 foreign institutions are non-voting affiliates of COMPRES.

The member institutions guide the direction of the consortium and define its missions. The intellectual and scientific goals of COMPRES are described in the report "Understanding the Building Blocks of the Planet: The Materials Science of Earth Processes," which came out of a 2009 planning workshop (www.compres.us). Through its support of advanced tools for Earth science experimentation, COMPRES enables research on the compositional and thermal structure of Earth's interior, its evolution through time, and the dynamical behavior of the crust, mantle and core. There is an intimate relationship between the scientific goals of COMPRES and other geophysical disciplines. The results of mineral physics measurements carried out at COMPRES-supported facilities are critical for interpreting seismological and other geophysical observations in terms of the physical, chemical, and dynamical state of Earth's interior. Many of the scientific objectives of the seismological community require investigations of the material properties of Earth materials at extreme pressures and temperatures, as pursued by the COMPRES community. Research done at COMPRES-supported facilities also addresses questions of direct societal impact, such as the causes of earthquakes, carbon sequestration, natural resources, and basic research into new advanced materials.

COMPRES supports the operations and equipment needs of three synchrotron beamlines at the National Synchrotron Light Source (NSLS, Brookhaven National Laboratory), and one at the Advanced Light Source (ALS, Lawrence Berkeley National Laboratory). X-ray beam line X17B2 at NSLS is a center for rheology and equations of state measurements on materials at high pressures and temperatures, using the Large-Volume Multi-Anvil Press (LVP) and a Drickamer deformation apparatus. Beam lines X17B3 and X17C at NSLS are for high-energy X-ray studies of materials using the diamond-anvil cell (DAC) high-pressure apparatus, with resistance and laser heating. Beam line U2A is the world-leading facility for high-pressure infrared spectroscopy. Beam line 12.2.2 at the ALS is the COMPRES West-Coast facility for high-pressure high-temperature research using the diamond cell, and is developing into a center for resistance heating for DAC applications, as well as for laser-heating experiments. COMPRES also supports nuclear resonant inelastic X-ray scattering and synchrotron Mössbauer research at Sector 3 of the Advanced Photon Source (APS). Under the new 5-year cooperative agreement, COMPRES will establish a COMPRES Technology Center (COMPTECH) at the APS to develop high-pressure capabilities utilizing other advanced synchrotron techniques, such as single-crystal micro-diffraction and momentum-resolved inelastic scattering. With the planned completion of the NSLS-II, the newest US synchrotron, in 2015, COMPRES will support high-pressure experimentation at the Project beam line XPD. In addition to providing facilities for large volume and diamond cell experiments upon startup of NSLS-II, a COMPTECH office at NSLS-II will help to exploit the unique capabilities of this new world-leading facility for a new generation of experiments at extreme pressure-temperature conditions.

Agency: NSF | Branch: Standard Grant | Program: | Phase: Cyber-Human Systems (CHS) | Award Amount: 23.20K | Year: 2015

Creativity is the cornerstone and the fundamental motive of both the aesthetic and engineering disciplines. It is a critical element of our economic and social prosperity, as a precursor to scientific discoveries, technological advances, and new forms of cultural and aesthetic experiences. Held every other year in an international location since 1993, the ACM Creativity & Cognition (C&C) conference serves as a gathering place for the diverse communities of researchers, designers, engineers, and artists who provide an innovative and cross-disciplinary perspective on creativity and cognition as well as technological innovation. C&C is the only ACM-sponsored conference in which human creativity is the central focus, and as such it provides pathways for substantially different kinds of work including interactive art pieces, user studies of creativity support tools, and new sensor technologies for creative practice. It serves as a premier forum for presenting the world's best new research investigating computing's impact on and ability to promote creativity in all forms of human experience. The conference particularly values research that explores new, synergistic roles for computing and people in creative processes, or that addresses situations where computing, as contextualized in sociotechnical systems, may sometimes have an undesirable impact. Work presented at the conference is archived in the ACM Digital Library. More information about the conference may be found online at http://cc15.cityofglasgowcollege.ac.uk/.

This is funding to support participation by students and faculty based in U.S. educational institutions in a Graduate Student Symposium (workshop) to be organized in conjunction with C&C 2015, which will take place June 22-25 in Glasgow, UK. The GSS will bring together up to 12 promising doctoral students (not all of whom will be eligible for funding) and 4 distinguished researchers from academia and industry as mentors, in a day-long event that will be held on June 26, 2015, immediately following the main C&C conference. During the morning, each student will briefly present his or her work in a short formal talk. Mentors will lead a brief discussion of the work, including the research question, method of addressing the question, possible results and findings, significance of the work, and pointers to related work and research areas. The mentors will also offer a critique of each student's written and oral presentation. These discussions will continue through a working lunch. In the afternoon, following additional individual presentations and discussions, the mentors will lead a broad discussion that relates the presentations to one another and to the other work in the field. They will attempt to highlight common themes that emerged across the individual student works, and note differences and similarities in research methods both across individual research projects as well as across diverse intellectual threads within the creativity and cognition field as a whole. In the last portion of the day, the mentors will lead a discussion on pursuing a career in this interdisciplinary field and what the opportunities are from different disciplinary perspectives (e.g., arts and design vs. technology, academia vs. industry, etc.). Through this process, the student participants will gain experience and skills in communicating their own work and critiquing the work of peers.
They will come away with new research insights and possible directions, with better understanding of prior work and of the field overall, and with new awareness of potentially useful methods that draw from different disciplines. And they will have the opportunity to build professional and social connections that transcend the event, and to gain awareness of potential career paths in both academia and industry. The student submissions will appear in the C&C Proceedings, and they will be indexed in the ACM Digital Library. The students will also be invited to present their work at the main conference during the Poster Session, which will give their work wider visibility in the community while also affording an opportunity for them to talk one-on-one with peers and other senior researchers, in addition to the focused mentoring of the GSS itself. To promote diversity, no more than two graduate students will be accepted from any one institution, and if there are two then at least one of them must be female.