Atlanta, GA, United States

The Georgia Institute of Technology is a public research university in Atlanta, Georgia, in the United States. It is a part of the University System of Georgia and has satellite campuses in Savannah, Georgia; Metz, France; Athlone, Ireland; Shanghai, China; and Singapore. The educational institution was founded in 1885 as the Georgia School of Technology as part of Reconstruction plans to build an industrial economy in the post-Civil War Southern United States. Initially, it offered only a degree in mechanical engineering. By 1901, its curriculum had expanded to include electrical, civil, and chemical engineering. In 1948, the school changed its name to reflect its evolution from a trade school to a larger and more capable technical institute and research university. Today, Georgia Tech is organized into six colleges and contains about 31 departments and units, with emphasis on science and technology. It is well recognized for its degree programs in engineering, computing, business administration, the sciences, architecture, and liberal arts. Georgia Tech's main campus occupies part of Midtown Atlanta, bordered by 10th Street to the north and by North Avenue to the south, placing it well in sight of the Atlanta skyline. In 1996, the campus was the site of the athletes' village and a venue for a number of athletic events for the 1996 Summer Olympics. The construction of the Olympic village, along with subsequent gentrification of the surrounding areas, enhanced the campus. Student athletics, both organized and intramural, are a part of student and alumni life. The school's intercollegiate competitive sports teams, the four-time football national champion Yellow Jackets, and the nationally recognized fight song "Ramblin' Wreck from Georgia Tech" have helped keep Georgia Tech in the national spotlight. Georgia Tech fields eight men's and seven women's teams that compete in NCAA Division I athletics and the Football Bowl Subdivision.
Georgia Tech is a member of the Coastal Division in the Atlantic Coast Conference. Wikipedia.

Safavynia S.A.,Emory University | Ting L.H.,Georgia Institute of Technology
Journal of Neurophysiology | Year: 2013

In both the upper and lower limbs, evidence suggests that short-latency electromyographic (EMG) responses to mechanical perturbations are modulated based on muscle stretch or joint motion, whereas long-latency responses are modulated based on attainment of task-level goals, e.g., desired direction of limb movement. We hypothesized that long-latency responses are modulated continuously by task-level error feedback. Previously, we identified an error-based sensorimotor feedback transformation that describes the time course of EMG responses to ramp-and-hold perturbations during standing balance (Safavynia and Ting 2013; Welch and Ting 2008, 2009). Here, our goals were 1) to test the robustness of the sensorimotor transformation over a richer set of perturbation conditions and postural states; and 2) to explicitly test whether the sensorimotor transformation is based on task-level vs. joint-level error. We developed novel perturbation trains of acceleration pulses such that perturbations were applied when the body deviated from the desired, upright state while recovering from preceding perturbations. The entire time course of EMG responses (~4 s) in an antagonistic muscle pair was reconstructed using a weighted sum of center of mass (CoM) kinematics preceding EMGs at long-latency delays (~100 ms). Furthermore, CoM and joint kinematic trajectories became decorrelated during perturbation trains, allowing us to explicitly compare task-level vs. joint feedback in the same experimental condition. Reconstruction of EMGs was poorer using joint kinematics compared with CoM kinematics and required unphysiologically short (~10 ms) delays. Thus continuous, long-latency feedback of task-level variables may be a common mechanism regulating long-latency responses in the upper and lower limbs. © 2013 the American Physiological Society.
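The sensorimotor transformation described above, reconstructing EMG as a weighted sum of CoM displacement, velocity, and acceleration at a long-latency delay, can be sketched as a simple least-squares fit. This is an illustrative reconstruction under assumed variable and function names, not the authors' code:

```python
import numpy as np

def fit_delayed_feedback(emg, com_kin, delay_samples):
    """Fit EMG(t) ~ k_d*d(t-L) + k_v*v(t-L) + k_a*a(t-L) by least squares.
    com_kin is an (N, 3) array of CoM displacement, velocity, and
    acceleration; delay_samples is the feedback delay L in samples."""
    X = com_kin[:-delay_samples]            # kinematics preceding the EMG
    y = emg[delay_samples:]                 # EMG responds delay_samples later
    gains, *_ = np.linalg.lstsq(X, y, rcond=None)
    pred = X @ gains
    ss_res = np.sum((y - pred) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return gains, 1.0 - ss_res / ss_tot     # feedback gains and R^2
```

A high R^2 at a physiologically plausible delay (roughly 100 ms) is the signature the paper reports for task-level CoM feedback, whereas joint kinematics required implausibly short delays.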

Gilbert E.,Georgia Institute of Technology
Proceedings of the ACM Conference on Computer Supported Cooperative Work, CSCW | Year: 2012

We have friends we consider very close and acquaintances we barely know. The social sciences use the term tie strength to denote this differential closeness with the people in our lives. In this paper, we explore how well a tie strength model developed for one social medium adapts to another. Specifically, we present a Twitter application called We Meddle which puts a Facebook tie strength model at the core of its design. We Meddle estimated tie strengths for more than 200,000 online relationships from people in 52 countries. We focus on the mapping of Facebook relational features to relational features in Twitter. By examining We Meddle's mistakes, we find that the Facebook tie strength model largely generalizes to Twitter. This is early evidence that important relational properties may manifest similarly across different social media, a finding that would allow new social media sites to build around relational findings from old ones. © 2012 ACM.

Hudson A.E.,Georgia Institute of Technology
PLoS computational biology | Year: 2010

Recent experimental evidence suggests that coordinated expression of ion channels plays a role in constraining neuronal electrical activity. In particular, each neuronal cell type of the crustacean stomatogastric ganglion exhibits a unique set of positive linear correlations between ionic membrane conductances. These data suggest a causal relationship between expressed conductance correlations and features of cellular identity, namely electrical activity type. To test this idea, we used an existing database of conductance-based model neurons. We partitioned this database based on various measures of intrinsic activity, to approximate distinctions between biological cell types. We then tested individual conductance pairs for linear dependence to identify correlations. Contrary to experimental evidence, in which all conductance correlations are positive, 32% of correlations seen in this database were negative relationships. In addition, 80% of correlations seen here involved at least one calcium conductance, which have been difficult to measure experimentally. Similar to experimental results, each activity type investigated had a unique combination of correlated conductances. Finally, we found that populations of models that conform to a specific conductance correlation have a higher likelihood of exhibiting a particular feature of electrical activity. We conclude that regulating conductance ratios can support proper electrical activity of a wide range of cell types, particularly when the identity of the cell is well-defined by one or two features of its activity. Furthermore, we predict that previously unseen negative correlations and correlations involving calcium conductances are biologically plausible.
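The pairwise correlation screen described above can be sketched in a few lines. This is a hedged illustration, not the study's code; the conductance names and the correlation threshold are assumptions:

```python
import numpy as np
from itertools import combinations

def correlated_pairs(conductances, names, r_thresh=0.7):
    """Test every pair of maximal conductances for linear dependence and
    return pairs whose Pearson r exceeds the threshold in magnitude,
    keeping the sign so negative relationships remain visible."""
    found = []
    for i, j in combinations(range(conductances.shape[1]), 2):
        r = np.corrcoef(conductances[:, i], conductances[:, j])[0, 1]
        if abs(r) >= r_thresh:
            found.append((names[i], names[j], float(r)))
    return found
```

Applied per activity-type partition of the model database, the signed r values are what distinguish the positive correlations seen experimentally from the negative ones the study predicts.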

Safavynia S.A.,Emory University | Ting L.H.,Georgia Institute of Technology
Journal of Neurophysiology | Year: 2013

We hypothesized that motor outputs are hierarchically organized such that descending temporal commands based on desired task-level goals flexibly recruit muscle synergies that specify the spatial patterns of muscle coordination that allow the task to be achieved. According to this hypothesis, it should be possible to predict the patterns of muscle synergy recruitment based on task-level goals. We demonstrated that the temporal recruitment of muscle synergies during standing balance control was robustly predicted across multiple perturbation directions based on delayed sensorimotor feedback of center of mass (CoM) kinematics (displacement, velocity, and acceleration). The modulation of a muscle synergy's recruitment amplitude across perturbation directions was predicted by the projection of CoM kinematic variables along the preferred tuning direction(s), generating cosine tuning functions. Moreover, these findings were robust in biphasic perturbations that initially imposed a perturbation in the sagittal plane and then, before sagittal balance was recovered, perturbed the body in multiple directions. Therefore, biphasic perturbations caused the initial state of the CoM to differ from the desired state, and muscle synergy recruitment was predicted based on the error between the actual and desired upright state of the CoM. These results demonstrate that temporal motor commands to muscle synergies reflect task-relevant error as opposed to sensory inflow. The proposed hierarchical framework may represent a common principle of motor control across motor tasks and levels of the nervous system, allowing motor intentions to be transformed into motor actions. © 2013 the American Physiological Society.

Erturk A.,Georgia Institute of Technology
Journal of Intelligent Material Systems and Structures | Year: 2011

This article formulates the problem of vibration-based energy harvesting using piezoelectric transduction for civil infrastructure system applications with a focus on moving load excitations and surface strain fluctuations. Two approaches of piezoelectric power generation from moving loads are formulated. The first one is based on using a bimorph cantilever located at an arbitrary position on a simply supported slender bridge. The fundamental moving load problem is reviewed and the input to the cantilevered energy harvester is obtained to couple with the generalized electromechanical equations for transient excitation. The second approach considers using a thin piezoceramic patch covering a region on the bridge. The transient electrical response of the surface patch to moving load excitation is derived in the presence of a resistive electrical load. The local way of formulating piezoelectric energy harvesting from two-dimensional surface strain fluctuations of large structures is also discussed. For a thin piezoceramic patch attached onto the surface of a large structure, analytical expressions of the electrical power output are presented for generalized, harmonic, and white noise-type two-dimensional strain fluctuations. Finally, a case study is given to analyze a small piezoceramic patch for energy harvesting from surface strain fluctuations along with measured bridge strain data. © SAGE Publications 2011.

Garoufalidis S.,Georgia Institute of Technology
Electronic Journal of Combinatorics | Year: 2011

A sequence of rational functions in a variable q is q-holonomic if it satisfies a linear recursion with coefficients that are polynomials in q and q^n. We prove that the degree of a q-holonomic sequence is eventually a quadratic quasi-polynomial, and that the leading term satisfies a linear recursion relation with constant coefficients. Our proof uses differential Galois theory (adapting proofs regarding holonomic D-modules to the case of q-holonomic D-modules) combined with the Lech-Mahler-Skolem theorem from number theory. En route, we use the Newton polygon of a linear q-difference equation, and introduce the notion of a regular-singular q-difference equation and a WKB basis of solutions of a linear q-difference equation at q = 0. We then use the Skolem-Mahler-Lech theorem to study the vanishing of their leading term. Unlike the case of q = 1, there are no analytic problems regarding convergence of the WKB solutions. Our proofs are constructive, and they are illustrated by an explicit example.

Lu Z.,Simon Fraser University | Monteiro R.D.C.,Georgia Institute of Technology
Mathematical Programming | Year: 2011

In this paper we consider the general cone programming problem, and propose primal-dual convex (smooth and/or nonsmooth) minimization reformulations for it. We then discuss first-order methods suitable for solving these reformulations, namely, Nesterov's optimal method (Nesterov in Doklady AN SSSR 269:543-547, 1983; Math Program 103:127-152, 2005), Nesterov's smooth approximation scheme (Nesterov in Math Program 103:127-152, 2005), and Nemirovski's prox-method (Nemirovski in SIAM J Opt 15:229-251, 2005), and propose a variant of Nesterov's optimal method which has outperformed the latter one in our computational experiments. We also derive iteration-complexity bounds for these first-order methods applied to the proposed primal-dual reformulations of the cone programming problem. The performance of these methods is then compared using a set of randomly generated linear programming and semidefinite programming instances. We also compare the approach based on the variant of Nesterov's optimal method with the low-rank method proposed by Burer and Monteiro (Math Program Ser B 95:329-357, 2003; Math Program 103:427-444, 2005) for solving a set of randomly generated SDP instances. © 2009 Springer-Verlag.
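For reference, a minimal sketch of Nesterov's optimal method in its common textbook form (constant step 1/L with (k-1)/(k+2) momentum) is shown below; the paper's variant differs in details not reproduced here:

```python
import numpy as np

def nesterov_optimal(grad, x0, L, iters=2000):
    """Nesterov's optimal first-order method for minimizing a smooth convex
    function with L-Lipschitz gradient: achieves the O(L / k^2) rate,
    versus O(L / k) for plain gradient descent."""
    x_prev = x = np.asarray(x0, dtype=float)
    for k in range(1, iters + 1):
        y = x + (k - 1) / (k + 2) * (x - x_prev)   # momentum extrapolation
        x_prev, x = x, y - grad(y) / L              # gradient step from y
    return x
```

On the primal-dual reformulations discussed in the paper, `grad` would be the (smoothed) gradient oracle of the nonsmooth objective; here it is just a placeholder argument.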

Juditsky A.,Joseph Fourier University | Nemirovski A.,Georgia Institute of Technology
Mathematical Programming | Year: 2011

We discuss necessary and sufficient conditions for a sensing matrix to be "s-good", that is, to allow for exact ℓ1-recovery of sparse signals with s nonzero entries when no measurement noise is present. Then we express the error bounds for imperfect ℓ1-recovery (nonzero measurement noise, nearly s-sparse signal, near-optimal solution of the optimization problem yielding the ℓ1-recovery) in terms of the characteristics underlying these conditions. Further, we demonstrate (and this is the principal result of the paper) that these characteristics, although difficult to evaluate, lead to verifiable sufficient conditions for exact sparse ℓ1-recovery and to efficiently computable upper bounds on those s for which a given sensing matrix is s-good. We also establish instructive links between our approach and basic concepts of compressed sensing theory, such as the Restricted Isometry and Restricted Eigenvalue properties. © 2010 Springer and Mathematical Optimization Society.
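The exact ℓ1-recovery the paper analyzes is typically computed by basis pursuit, min ||x||_1 subject to Ax = y. A minimal sketch via the standard linear-programming split x = u - v (illustrative only; the paper's verifiable goodness conditions are separate machinery):

```python
import numpy as np
from scipy.optimize import linprog

def l1_recover(A, y):
    """Noiseless l1-recovery (basis pursuit): min ||x||_1 s.t. Ax = y,
    cast as a linear program via x = u - v with u, v >= 0, so the
    objective sum(u) + sum(v) equals ||x||_1 at the optimum."""
    m, n = A.shape
    c = np.ones(2 * n)                 # minimize sum(u) + sum(v)
    A_eq = np.hstack([A, -A])          # A u - A v = y
    res = linprog(c, A_eq=A_eq, b_eq=y,
                  bounds=[(0, None)] * (2 * n), method="highs")
    u, v = res.x[:n], res.x[n:]
    return u - v
```

When A is s-good in the paper's sense, this program returns the sparse signal exactly.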

Yang S.K.,Georgia Institute of Technology | Yang S.K.,New York University | Ambade A.V.,CSIR - National Chemical Laboratory | Weck M.,New York University
Chemical Society Reviews | Year: 2011

Block copolymers are key building blocks for a variety of applications ranging from electronic devices to drug delivery. The material properties of block copolymers can be tuned and potentially improved by introducing noncovalent interactions in place of covalent linkages between polymeric blocks resulting in the formation of supramolecular block copolymers. Such materials combine the microphase separation behavior inherent to block copolymers with the responsiveness of supramolecular materials thereby affording dynamic and reversible materials. This tutorial review covers recent advances in main-chain supramolecular block copolymers and describes the design principles, synthetic approaches, advantages, and potential applications. © 2011 The Royal Society of Chemistry.

The impact of the two adaptation-induced mutations in an improved xylose-fermenting Zymomonas mobilis strain was investigated. The chromosomal mutation at the xylose reductase gene was critical to xylose metabolism by reducing xylitol formation. Together with the plasmid-borne mutation impacting xylose isomerase activity, these two mutations accounted for 80 % of the improvement achieved by adaptation. To generate a strain fermenting xylose in the presence of high acetic acid concentrations, we transferred the two mutations to an acetic acid-tolerant strain. The resulting strain fermented glucose + xylose (each at 5 % w/v) with 1 % (w/v) acetic acid at pH 5.8 to completion with an ethanol yield of 93.4 %, outperforming other reported strains. This work demonstrated the power of applying molecular understanding in strain improvement.

Pan Y.,Georgia Institute of Technology | Wang J.,Chinese University of Hong Kong
IEEE Transactions on Industrial Electronics | Year: 2012

In this paper, we present a neurodynamic approach to model predictive control (MPC) of unknown nonlinear dynamical systems based on two recurrent neural networks (RNNs). The echo state network (ESN) and simplified dual network (SDN) are adopted for system identification and dynamic optimization, respectively. First, the unknown nonlinear system is identified based on the ESN with input-output training and testing samples. Then, the resulting nonconvex optimization problem associated with nonlinear MPC is decomposed via Taylor expansion. To estimate the higher-order unknown term resulting from the decomposition, an online supervised learning algorithm is developed. Next, the SDN is applied for solving the relaxed convex optimization problem to compute the optimal control actions over the prediction horizon. Simulation results are provided to demonstrate the effectiveness and characteristics of the proposed approach. The proposed RNN-based approach has many desirable properties such as global convergence and low complexity. It is shown that the RNN-based nonlinear MPC scheme is effective and potentially suitable for real-time MPC implementation in many applications. © 2012 IEEE.

Blekherman G.,Georgia Institute of Technology
Foundations of Computational Mathematics | Year: 2015

We prove a conjecture of Comon and Ottaviani that typical real Waring ranks of bivariate forms of degree d take all integer values between (Formula presented.) and d. That is, we show that for all d and all (Formula presented.) there exists a bivariate form f such that f can be written as a linear combination of m dth powers of real linear forms and no fewer, and additionally all forms in an open neighborhood of f also possess this property. Equivalently we show that for all d and any (Formula presented.) there exists a symmetric real bivariate tensor t of order d such that t can be written as a linear combination of m symmetric real tensors of rank 1 and no fewer, and additionally all tensors in an open neighborhood of t also possess this property. © 2013, SFoCM.

The effect of ion orbit loss on the poloidal distribution of ion, energy and momentum fluxes from the plasma edge into the tokamak scrape-off layer (SOL) is analysed for a representative DIII-D (Luxon 2002 Nucl. Fusion 42 614) high-mode discharge. Ion orbit loss is found to produce a significant concentration of the particle, energy and momentum fluxes into the outboard SOL. An intrinsic co-current rotation in the edge pedestal due to the preferential loss of counter-current ions is also found. © 2013 IAEA, Vienna.

Ballantyne D.R.,Georgia Institute of Technology
Astrophysical Journal Letters | Year: 2010

The spin of a supermassive black hole (SMBH) is directly related to the radiative efficiency of accretion onto the hole, and therefore impacts the amount of fuel required for the black hole to reach a certain mass. Thus, knowledge of the SMBH spin distribution and evolution is necessary to develop a comprehensive theory of the growth of SMBHs and their impact on galaxy formation. Currently, the only direct measurement of SMBH spin is through fitting the broad Fe Kα line in active galactic nuclei (AGNs). The evolution of spins could be determined by fitting the broad line in the integrated spectra of AGNs over different redshift intervals. The accuracy of these measurements will depend on the observed integrated line strength. Here, we present theoretical predictions of the integrated relativistic Fe Kα line strength as a function of redshift and AGN luminosity. The equivalent widths of the integrated lines are much less than 300 eV. Searches for the integrated line will be easiest for unobscured AGNs with 2-10 keV luminosities in the range 44 < log L_X ≤ 45. The total integrated line makes up less than 4% of the X-ray background, but its shape is sensitive to the average SMBH spin. By following these recommendations, future International X-ray Observatory surveys of broad Fe Kα lines should be able to determine the spin evolution of SMBHs. © 2010. The American Astronomical Society. All rights reserved.

Zhang F.,Georgia Institute of Technology | Leonard N.E.,Princeton University
IEEE Transactions on Automatic Control | Year: 2010

Autonomous mobile sensor networks are employed to measure large-scale environmental fields. Yet an optimal strategy for mission design addressing both the cooperative motion control and the cooperative sensing is still an open problem. We develop strategies for multiple sensor platforms to explore a noisy scalar field in the plane. Our method consists of three parts. First, we design provably convergent cooperative Kalman filters that apply to general cooperative exploration missions. Second, we present a novel method to determine the shape of the platform formation to minimize error in the estimates and design a cooperative formation control law to asymptotically achieve the optimal formation shape. Third, we use the cooperative filter estimates in a provably convergent motion control law that drives the center of the platform formation to move along level curves of the field. This control law can be replaced by control laws enabling other cooperative exploration motion, such as gradient climbing, without changing the cooperative filters and the cooperative formation control laws. Performance is demonstrated on simulated underwater platforms in simulated ocean fields. © 2010 IEEE.
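One building block of such cooperative exploration, estimating the local field value and gradient from the formation's point measurements, can be sketched as a least-squares plane fit. This is a deliberate simplification of the paper's cooperative Kalman filter, with hypothetical names:

```python
import numpy as np

def estimate_field_gradient(positions, readings):
    """Least-squares plane fit through the platforms' scalar-field readings:
    solves readings ~ f0 + g . (p - p_center) for the field value f0 and
    gradient g at the formation center."""
    center = positions.mean(axis=0)
    A = np.hstack([np.ones((len(positions), 1)), positions - center])
    coef, *_ = np.linalg.lstsq(A, readings, rcond=None)
    return coef[0], coef[1:]   # field value at center, gradient estimate
```

The formation shape enters through the conditioning of A: well-spread platforms give a better-conditioned fit, which is the intuition behind optimizing the formation to minimize estimation error.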

Shapiro A.,Georgia Institute of Technology
Operations Research Letters | Year: 2011

In this paper we consider the adjustable robust approach to multistage optimization, for which we derive dynamic programming equations. We also discuss this from the point of view of risk-averse stochastic programming. As an example, we consider a robust formulation of the classical inventory model and show that, as in the risk-neutral case, a base-stock policy is optimal. © 2011 Elsevier B.V. All rights reserved.
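For context, a base-stock policy orders up to a fixed level S whenever the inventory position falls below it; a one-line sketch (illustrative, not the paper's robust formulation):

```python
def base_stock_order(inventory_position, S):
    """Base-stock policy: order exactly enough to raise the inventory
    position up to the base-stock level S, and nothing when already at
    or above that level."""
    return max(0.0, S - inventory_position)
```

The paper's result is that a policy of this simple form remains optimal under the adjustable robust formulation, just as in the classical risk-neutral model.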

Cao X.-Y.,Xiamen University | Zhao Q.,Xiamen University | Lin Z.,Xiamen University | Lin Z.,Georgia Institute of Technology | Xia H.,Xiamen University
Accounts of Chemical Research | Year: 2014

Aromatic compounds, such as benzene and its derivatives, porphyrins, fullerenes, carbon nanotubes, and graphene, have numerous applications in biomedicine, materials science, energy science, and environmental science. Metalla-aromatics are analogues of conventional organic aromatic molecules in which one of the (hydro)carbon segments is formally replaced by an isolobal transition-metal fragment. Researchers have studied these transition-metal-containing aromatic molecules for the past three decades, particularly the synthesis and reactivity of metallabenzenes. Another focus has been the preparation and characterization of other metalla-aromatics such as metallafurans, metallapyridines, metallabenzynes, and more. Despite significant advances, remaining challenges in this field include the limited number of convenient and versatile synthetic methods to construct stable and fully characterized metalla-aromatics, and the relative shortage of new topologies. To address these challenges, we have developed new methods for preparing metalla-aromatics, especially those possessing new topologies. Our synthetic efforts have led to a large family of closely related metalla-aromatics known as aromatic osmacycles. This Account summarizes the synthesis and reactivity of these compounds, with a focus on features that are different from those of compounds developed by other groups. These osmacycles can be synthesized from simple precursors under mild conditions. Using these efficient methods, we have synthesized aromatic osmacycles such as osmabenzene, osmabenzyne, isoosmabenzene, osmafuran, and osmanaphthalene. Furthermore, these methods have also created a series of new topologies, such as osmabenzothiazole and osmapyridyne.
Our studies of the reactivity of these osma-aromatics revealed unprecedented reaction patterns, and we demonstrated the interconversion of several osmacycles. Like other metalla-aromatics, osma-aromatics have spectroscopic features of aromaticity, such as ring planarity and characteristic bond lengths between those of a single and a double bond, but the osma-aromatics we have prepared also exhibit good stability towards air, water, and heat. Indeed, some seemingly unstable species proved stable, and their stability made it possible to study their optical, electrochemical, and magnetic properties. The stability of these compounds results from their aromaticity and the phosphonium substituents on the aromatic plane: most of our osma-aromatics carry at least one phosphonium group. The phosphonium group offers stability via both electronic and steric mechanisms. The phosphonium acts as an electron reservoir, allowing the circulation of electron pairs along metallacycles and lowering the electron density of the aromatic rings. Meanwhile, the bulky phosphonium groups surrounding the aromatic metallacycle prevent most reactions that could decompose the skeleton. © 2013 American Chemical Society.

Matisoff D.C.,Georgia Institute of Technology
Energy Policy | Year: 2013

This study assesses the effectiveness of two types of information disclosure programs: state-based mandatory carbon reporting programs and the voluntary Carbon Disclosure Project, which uses investor pressure to push firms to disclose carbon emissions and carbon management strategies. I match firms in each program to control groups of firms that have not participated in that program. Using panel data methods and a difference-in-differences specification, I measure the impact of each program on plant-level carbon emissions, plant-level carbon intensity, and plant-level output. I find that neither program has generated an impact on plant-level carbon emissions, emissions intensity, or output. Placing this study in contrast with others that demonstrate improvements from mandatory information disclosure, these results suggest that how information is reported to stakeholders has important implications for program effectiveness. © 2012 Elsevier Ltd.
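The difference-in-differences estimator used above compares the treated group's change over time against the matched control group's change, netting out the common time trend; a minimal sketch with hypothetical inputs:

```python
import numpy as np

def diff_in_differences(treated_pre, treated_post, control_pre, control_post):
    """Difference-in-differences estimate of a program's effect: the change
    in the treated group's mean outcome minus the change in the matched
    control group's mean outcome."""
    return ((np.mean(treated_post) - np.mean(treated_pre))
            - (np.mean(control_post) - np.mean(control_pre)))
```

A null finding of the kind the study reports corresponds to this estimate being statistically indistinguishable from zero.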

Ballantyne D.R.,Georgia Institute of Technology
Monthly Notices of the Royal Astronomical Society | Year: 2014

The broad-band X-ray spectra of active galactic nuclei (AGNs) contain information about the nuclear environment from Schwarzschild-radius scales (where the primary power law is generated in a corona) to distances of ~1 pc (where the distant reflector may be located). In addition, the average shape of the X-ray spectrum is an important input into X-ray background synthesis models. Here, local (z ≈ 0) AGN luminosity functions (LFs) in five energy bands are used as a low-resolution, luminosity-dependent X-ray spectrometer in order to constrain the average AGN X-ray spectrum between 0.5 and 200 keV. The 15-55 keV LF measured by Swift-BAT is assumed to be the best determination of the local LF, and then a spectral model is varied to determine the best fit to the 0.5-2 keV, 2-10 keV, 3-20 keV and 14-195 keV LFs. The spectral model consists of a Gaussian distribution of power laws with a mean photon index and cutoff energy E_cut, as well as contributions from distant and disc reflection. The reflection strength is parametrized by varying the Fe abundance relative to solar, A_Fe, and requiring a specific Fe Kα equivalent width (EW). In this way, the presence of the X-ray Baldwin effect can be tested. The spectral model that best fits the four LFs has a mean photon index of 1.85 ± 0.15, E_cut = 270 +170/-80 keV and A_Fe = 0.3 +0.3/-0.15. The sub-solar A_Fe is unlikely to be a true measure of the gas-phase metallicity, but indicates the presence of strong reflection given the assumed Fe Kα EW. Indeed, parametrizing the reflection strength with the R parameter gives R = 1.7 +1.7/-0.85. There is moderate evidence for no X-ray Baldwin effect. Accretion disc reflection is included in the best-fitting model, but it is relatively weak (broad iron Kα EW < 100 eV) and does not significantly affect any of the conclusions. A critical result of our procedure is that the shape of the local 2-10 keV LF measured by HEAO-1 and MAXI is incompatible with the LFs measured in the hard X-rays by Swift-BAT and RXTE. 
We therefore present a new determination of the local 2-10 keV LF that is consistent with all other energy bands, as well as with the de-evolved 2-10 keV LF estimated from the XMM-Newton Hard Bright Survey. This new LF should be used to revise current measurements of the evolving AGN LF in the 2-10 keV band. Finally, the suggested absence of the X-ray Baldwin effect points to a possible origin for the distant reflector in dusty gas not associated with the AGN obscuring medium. This may be the same material that produces the compact 12 μm source in local AGNs. © 2013 The Authors. Published by Oxford University Press on behalf of the Royal Astronomical Society.

Ballantyne D.R.,Georgia Institute of Technology
Astrophysical Journal Letters | Year: 2010

X-ray observations of several active galactic nuclei (AGNs) show prominent iron K-shell fluorescence lines that are sculpted due to special and general relativistic effects. These observations are important because they probe the spacetime geometry close to distant black holes. However, the intrinsic distribution of Fe line strengths in the cosmos has never been determined. This uncertainty has contributed to the controversy surrounding the relativistic interpretation of the emission feature. Now, by making use of the latest multi-wavelength data, we show theoretical predictions of the cosmic density of relativistic Fe lines as a function of their equivalent width (EW) and line flux. We are able to show unequivocally that the most common relativistic iron lines in the universe will be produced by neutral iron fluorescence in Seyfert galaxies and have EWs <100 eV. Thus, the small number of very intense lines that have been discovered are just the bright end of a distribution of line strengths. In addition to validating the current observations, the predicted distributions can be used for planning future surveys of relativistic Fe lines. Finally, the predicted sky density of EWs indicates that the X-ray source in AGNs cannot, on average, lie on the axis of the black hole. © 2010 The American Astronomical Society.

Shin H.,University of Ulsan | Santamarina J.C.,Georgia Institute of Technology
Geotechnique | Year: 2011

The formation of desiccation cracks in soils is often interpreted in terms of tensile strength. However, this mechanistic model disregards the cohesionless, effective-stress-dependent frictional behaviour of fine-grained soils. An alternative theory is explored using analyses, numerical simulations based on an effective-stress formulation, and experiments monitored using high-resolution time-lapse photography. Results show that desiccation cracks in fine-grained sediments initiate as the air-water interface invades the saturated medium, driven by the increase in suction. Thereafter, the interfacial membrane causes an increase in the local void ratio at the tip, the air-entry value decreases, the air-water interface advances into the tip and the crack grows. The effective stress remains in compression everywhere in the soil mass, including at the tip of the desiccation crack. This crack-growing mechanism can explain various observations related to desiccation crack formation in fine-grained soils, including the effects of pore fluid salt concentration, slower crack propagation velocity and right-angle realignment while approaching a pre-existing crack, and the apparent strength and failure mode observed in fine-grained soils subjected to tension. Additional research is required to develop a complementary phenomenological model for desiccation crack formation in coarse-grained sediments.

Mei Y.,Georgia Institute of Technology
Biometrika | Year: 2010

The sequential changepoint detection problem is studied in the context of global online monitoring of a large number of independent data streams. We are interested in detecting an occurring event as soon as possible, but we do not know when the event will occur, nor do we know which subset of data streams will be affected by the event. A family of scalable schemes is proposed based on the sum of the local cumulative sum, cusum, statistics from each individual data stream, and is shown to asymptotically minimize the detection delays for each and every possible combination of affected data streams, subject to the global false alarm constraint. The usefulness and limitations of our asymptotic optimality results are illustrated by numerical simulations and heuristic arguments. The Appendices contain a probabilistic result on the first epoch to simultaneous record values for multiple independent random walks. © 2010 Biometrika Trust.
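The scheme described, summing the local cusum statistics and raising a global alarm when the sum crosses a threshold, can be sketched as follows. This is an illustrative Gaussian mean-shift version with assumed parameter names, not the paper's exact formulation:

```python
import numpy as np

def local_cusum(stream, mu0=0.0, mu1=1.0, sigma=1.0):
    """Per-stream CUSUM recursion W_t = max(0, W_{t-1} + llr_t) for a
    Gaussian mean shift from mu0 to mu1 (log-likelihood ratio increments)."""
    llr = (mu1 - mu0) * (stream - (mu0 + mu1) / 2.0) / sigma**2
    w = np.zeros(len(stream))
    running = 0.0
    for t, inc in enumerate(llr):
        running = max(0.0, running + inc)
        w[t] = running
    return w

def global_sum_detector(streams, threshold, **kw):
    """Sum the local CUSUM statistics across all data streams and raise an
    alarm the first time the sum exceeds the threshold; returns the alarm
    time (sample index) or None if no alarm is raised."""
    total = sum(local_cusum(s, **kw) for s in streams)
    hits = np.where(total > threshold)[0]
    return int(hits[0]) if hits.size else None
```

Because the sum aggregates evidence from every stream, the detector needs no knowledge of which subset of streams is affected, which is the scalability property the paper establishes.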

Konstantinidis K.T.,Georgia Institute of Technology
Environmental Microbiology | Year: 2014

The role of airborne microbial cells in the chemistry of the atmosphere and cloud formation remains essentially speculative. Recent studies have indicated that microbes might be more important than previously anticipated for atmospheric processes. However, more work and direct communication between microbiologists and atmospheric scientists and modellers are necessary to better understand and model bioaerosol-cloud-precipitation-climate interactions. © 2014 Society for Applied Microbiology and John Wiley & Sons Ltd.

Laband D.N.,Georgia Institute of Technology
Forest Policy and Economics | Year: 2013

Humans sometimes, perhaps often, form deeply personal and important attachments to 'things' such as trees. Such attachments reflect the value these 'things' have in our lives and often are based on cultural (shared) or spiritual (individualistic) meanings. However, there has been relatively little discussion of environmental services that are symbolic, cultural or spiritual. One potential problem created as a by-product of this lack of discussion is that there may be a tendency to inadvertently trivialize the importance of cultural and/or spiritual environmental services. A second problem is that it is essentially inconceivable, at the present time, to consider functioning markets in which such services are traded. In this paper, I explore both of these themes. In terms of addressing the importance of cultural/aesthetic/spiritual values of trees, specifically, occasionally an event occurs that reminds us forcefully that these values are not trivial - indeed, they may be quite sizable - and command our attention as scientists and in policy discussions. The recent poisoning of Auburn University's beloved Toomer's oaks provides a compelling case study. With respect to the absence of markets for cultural/spiritual/aesthetic services, I seek to better understand why markets have emerged for certain environmental services but not for others. © 2013 Elsevier B.V.

Jain P.K.,University of California at Berkeley | El-Sayed M.A.,Georgia Institute of Technology
Chemical Physics Letters | Year: 2010

Noble metal nanostructures display unique and strongly enhanced optical properties due to the phenomenon of localized surface plasmon resonance (LSPR). In assemblies or complex noble metal nanostructures, individual plasmon oscillations on proximal particles can couple via their near-field interaction, resulting in coupled plasmon resonance modes, quite akin to excitonic coupling in molecular aggregates or orbital hybridization in molecules. In this frontier Letter we discuss how the coupling of plasmon modes in certain nanostructure geometries (such as nanoparticle dimers and nanoshells) allows systematic tuning of the optical resonance, and also the confinement and enhancement of the near-field, making possible improved refractive-index sensing and field-enhanced spectroscopy and photochemistry. We discuss the polarization, orientation, and distance-dependence of this near-field coupling especially the universal size-scaling of the plasmon coupling interaction. In addition to radiative properties, we also discuss the effect of inter-particle coupling on the non-radiative electron relaxation in noble metal nanostructures. © 2010 Elsevier B.V. All rights reserved.
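The "universal size-scaling" mentioned above is commonly summarized as a near-exponential decay of the fractional plasmon shift with the gap-to-diameter ratio (the "plasmon ruler" form). A toy sketch; the amplitude `a` and decay constant `tau` below are illustrative placeholders, not values from the Letter:

```python
import math

def fractional_plasmon_shift(gap_nm, diameter_nm, a=0.1, tau=0.2):
    """Fractional LSPR shift of a coupled particle pair,
    (dlambda / lambda0) ~ a * exp(-(s/D) / tau).
    The shift depends on the gap s only through the ratio s/D,
    which is the universal size-scaling property."""
    return a * math.exp(-(gap_nm / diameter_nm) / tau)
```

Note that halving both the gap and the diameter leaves the predicted fractional shift unchanged, which is exactly what "size-scaling" asserts.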

Pollock A.,Georgia Institute of Technology
Social Studies of Science | Year: 2014

This article draws on ethnographic research at iThemba Pharmaceuticals, a small South African startup pharmaceutical company with an elite international scientific board. The word ‘iThemba’ is Zulu for ‘hope’, and so far drug discovery at the company has been essentially aspirational rather than actual. Yet this particular place provides an entry point for exploring how the location of the scientific knowledge component of pharmaceuticals – rather than their production, licensing, or distribution – matters. The article explores why it matters for those interested in global health and postcolonial science, and why it matters for the scientists themselves. Consideration of this case illuminates limitations of global health frameworks that implicitly posit rich countries as the unique site of knowledge production, and thus as the source of unidirectional knowledge flows. It also provides a concrete example for consideration of the contexts and practices of postcolonial science, its constraints, and its promise. Although the world is not easily bifurcated, it still matters who makes knowledge and where. © The Author(s) 2014.

Sherrill C.D.,Georgia Institute of Technology
Journal of Chemical Physics | Year: 2010

Current and emerging research areas in electronic structure theory promise to greatly extend the scope and quality of quantum chemical computations. Two particularly challenging problems are the accurate description of electronic near-degeneracies (as occur in bond-breaking reactions, first-row transition elements, etc.) and the description of long-range dispersion interactions in density functional theory. Additionally, even with the emergence of reduced-scaling electronic structure methods and basis set extrapolation techniques, quantum chemical computations remain very time-consuming for large molecules or large basis sets. A variety of techniques, including density fitting and explicit correlation methods, are making rapid progress toward solving these challenges. © 2010 American Institute of Physics.

Alben S.,Georgia Institute of Technology
Journal of Fluid Mechanics | Year: 2010

We model the swimming of a finite body in a vortex street using vortex sheets distributed along the body and in a wake emanating from its trailing edge. We determine the magnitudes and distributions of vorticity and pressure loading on the body as functions of the strengths and spacings of the vortices. We then consider the motion of a flexible body clamped at its leading edge in the vortex street as a model for a flag in a vortex street and find alternating bands of thrust and drag for varying wavenumber. We consider a flexible body driven at its leading edge as a model for tail-fin swimming and determine optimal motions with respect to the phase between the body's trailing edge and the vortex street. For short bodies maximizing thrust or efficiency, we find maximum deflections shifted in phase by 90° from oncoming vortices. For long bodies, leading-edge driving should reach maximum amplitude when the vortices are phase-shifted from the trailing edge by 45° (to maximize thrust) and by 135° (to maximize efficiency). Optimal phases for intermediate lengths show smooth transitions between these values. The optimal motion of a body driven along its entire length is similar to that of the model tail fin driven only at its leading edge, but with an additional outward curvature near the leading edge. The similarity between optimal motions forced at the leading edge and all along the body supports the high performance attributed to fin-based motions. © 2010 Cambridge University Press.

Kardomateas G.A.,Georgia Institute of Technology
Journal of Applied Mechanics, Transactions ASME | Year: 2010

There exist several formulas for the global buckling of sandwich plates, each based on a specific set of assumptions and a specific plate or beam model. It is not easy to determine the accuracy and range of validity of these rather simple formulas unless an elasticity solution exists. In this paper, we present an elasticity solution to the problem of global buckling of wide sandwich panels (equivalent to sandwich columns) subjected to axially compressive loading (along the short side). The emphasis of this study is on the global (single-wave) rather than the wrinkling (multiwave) mode. The sandwich section is symmetric, and all constituent phases, i.e., the facings and the core, are assumed to be orthotropic. The buckling problem is formulated as an eigenboundary-value problem for differential equations, with the axial load being the eigenvalue. The complication in the sandwich construction arises due to the existence of additional "internal" conditions at the face-sheet/core interfaces. Results are produced for a range of geometric configurations, and these are compared with the different global buckling formulas in the literature. © 2010 by ASME.
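The "axial load as eigenvalue" formulation can be illustrated with a far simpler model than the orthotropic sandwich elasticity solution above: a pinned-pinned Euler column discretized by finite differences, whose critical load emerges as the smallest eigenvalue. This is a sketch of the formulation only, not of the paper's solution.

```python
import numpy as np

def euler_buckling_load(EI=1.0, L=1.0, n=200):
    """Smallest buckling load of a pinned-pinned Euler column.
    The beam equation EI*y'' + P*y = 0 with y(0) = y(L) = 0 becomes
    the matrix eigenproblem (-EI*D2) y = P y, where D2 is the
    finite-difference second-derivative operator on interior nodes;
    the critical load P_cr is the smallest eigenvalue (exact: pi^2*EI/L^2)."""
    h = L / (n + 1)
    main = -2.0 * np.ones(n)
    off = np.ones(n - 1)
    D2 = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / h**2
    return np.linalg.eigvalsh(-EI * D2).min()
```

For EI = L = 1 the result converges to pi^2 ≈ 9.87 as the grid is refined; the sandwich problem replaces this scalar ODE with coupled elasticity equations plus the face-sheet/core interface conditions, but the eigenvalue structure is the same.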

The increasing application of engineered nanomaterials has elevated the potential for human exposure through various routes, including inhalation, skin penetration and ingestion. To date there is scarce quantitative information on the interactions between nanoparticles (NPs) and cell surfaces and the detrimental effects of such exposure. The purpose of this work was to study in vitro exposure of Caco-2 cells to hematite (α-Fe2O3) NPs and to determine particle size effects on adsorption behavior. Cellular impairment was also investigated and compared. Hematite NPs were synthesized as part of this study with a discrete size distribution and uniform morphology, examined by dynamic light scattering (DLS) and confirmed by transmission electron microscopy (TEM). Caco-2 cells were cultured as a model epithelium to mirror human intestinal cells and used to evaluate the impacts of exposure to NPs by measuring transepithelial electrical resistance (TEER). Cell surface disruption, localization and translocation of NPs through the cells were analyzed with immunocytochemical staining and confocal microscopy. Results showed that hematite NPs had mean diameters of 26, 53, 76 and 98 nm and were positively charged with minor aggregation in the buffer solution. Adsorption of the four sizes of NPs on cells reached equilibrium within approximately 5 min, but adsorption kinetics were found to be size-dependent. The adsorption rates expressed as mg m^-2 min^-1 were greater for large NPs (76 and 98 nm) than for small NPs (26 and 53 nm). However, adsorption rates expressed in units of particles m^-2 min^-1 were much greater for small NPs than large ones. After adsorption equilibrium was reached, the adsorbed mass of NPs on a unit area of cells was calculated and showed no significant size dependence.
Longer exposure times (>3 h) induced adverse cellular effects, as indicated by a drop in TEER relative to control cells not exposed to NPs. NPs initially triggered a dynamic reorganization and detachment of microvilli structures on Caco-2 cell surfaces. Following this impact, the drop in TEER became more pronounced, particularly for exposure to 26 nm NPs, consistent with confocal microscopy observations that the junctions were more severely disrupted by 26 nm NPs than by other sizes. In conclusion, this paper demonstrates the interactions at the ultrastructural level, from initial surface adsorption of NPs on cells to the subsequent microvilli reorganization, membrane penetration and disruption of adherens junctions, and provides fundamental information on size effects on NP behavior, which are often poorly addressed in in vitro cytotoxicity studies of NPs.
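The unit-dependence of the size comparison above (mass-based vs number-based rates) comes down to dividing by the mass of one particle, which scales as d^3, so a modest mass-rate advantage for large particles is swamped when counting particles. A sketch, assuming spherical particles and a bulk hematite density of 5.26 g cm^-3 (an assumed handbook value, not from the study):

```python
import math

RHO_HEMATITE = 5.26e3  # kg m^-3, assumed bulk density of alpha-Fe2O3

def particle_mass_kg(d_nm):
    """Mass of a single spherical particle of diameter d_nm (nanometres)."""
    d = d_nm * 1e-9
    return RHO_HEMATITE * math.pi / 6.0 * d**3

def number_rate(mass_rate_mg_m2_min, d_nm):
    """Convert a mass-based adsorption rate (mg m^-2 min^-1) to a
    number-based rate (particles m^-2 min^-1) by dividing by the
    mass of one particle."""
    return (mass_rate_mg_m2_min * 1e-6) / particle_mass_kg(d_nm)
```

Even if the 26 nm and 98 nm particles had identical mass-based rates, the 26 nm particles would adsorb (98/26)^3 ≈ 54 times more particles per unit area per minute, which is the sense in which the number-based rates favor small NPs.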

Mahmoud M.A.,Georgia Institute of Technology
Journal of Physical Chemistry C | Year: 2015

The proper assembly of nanoparticles can enhance their properties and improve their applicability. Likewise, imprudent assembly can damage the unique properties of the nanomaterials. Accordingly, finding robust techniques for making ordered assemblies of nanoparticles is a hot topic in materials science research. In this work, the Langmuir-Blodgett (LB) technique was used to assemble polyethylene glycol (PEG)-functionalized gold nanocubes (AuNCs) into highly packed two-dimensional (2D) arrays with different structures. This technique is based on creating polymeric micelles within the AuNC monolayer, which drives the nanocubes to assemble into a highly packed structure even at low LB surface pressures. Interestingly, the micelles could be made more diffuse by changing the LB trough surface pressure, which allowed for tuning the width and the structure of the AuNC 2D arrays. The areas occupied by the micelles appeared as voids that separated the AuNC arrays and prevented the formation of a uniform monolayer of AuNCs. The polymer micelles were therefore able to act as dynamic soft templates, and the separation distances between individual nanocubes as well as the 2D array structure were controlled by changing the chain length of the PEG functionalization on the surface of the nanocubes. Theoretical calculations of the attractive and repulsive forces and the balance between them presented a good prediction for the optimum separation distance between the AuNCs inside the 2D arrays. © 2014 American Chemical Society.

Körzdörfer T.,Georgia Institute of Technology | Marom N.,University of Texas at Austin
Physical Review B - Condensed Matter and Materials Physics | Year: 2012

Many-body perturbation theory in the G0W0 approximation is an increasingly popular tool for calculating electron removal energies and fundamental gaps for molecules and solids. However, the predictive power of G0W0 is limited by its sensitivity to the density functional theory (DFT) starting point. We introduce a nonempirical scheme, which allows us to find a reliable DFT starting point for G0W0 calculations. This is achieved by adapting the amount of Hartree-Fock exchange in a hybrid DFT functional. The G0W0 spectra resulting from this starting point reliably predict experimental photoelectron spectra for a test set of 13 typical organic semiconductor molecules. © 2012 American Physical Society.

Schneider T.M.,Harvard University | Gibson J.F.,Georgia Institute of Technology | Burke J.,Boston University
Physical Review Letters | Year: 2010

We demonstrate the existence of a large number of exact solutions of plane Couette flow, which share the topology of known periodic solutions but are localized in one spatial dimension. Solutions of different size are organized in a snakes-and-ladders structure strikingly similar to that observed for simpler pattern-forming partial differential equations. These new solutions are a step towards extending the dynamical systems view of transitional turbulence to spatially extended flows. © 2010 The American Physical Society.

Cressler J.D.,Georgia Institute of Technology
IEEE Transactions on Nuclear Science | Year: 2013

Silicon-Germanium (SiGe) technology effectively merges the desirable attributes of conventional silicon-based CMOS manufacturing (high integration levels, at high yield and low cost) with the extreme levels of transistor performance attainable in classical III-V heterojunction bipolar transistors (HBTs). SiGe technology joins together on-die high-speed bandgap-engineered SiGe HBTs with conventional Si CMOS to form SiGe BiCMOS technology, including all the requisite RF passive elements and multi-level thick-Al metallization required for high-speed circuit design. Such a silicon-based integrated circuit technology platform presents designers with an ideal division of labor for realizing optimal solutions to many performance-constrained mixed-signal (analog + digital + RF) systems. The unique bandgap-engineered features of SiGe HBTs enable several key merits with respect to operation across a wide variety of so-called 'extreme environments', potentially with little or no process modification, ultimately providing compelling advantages at the circuit and system level, across a wide class of envisioned commercial and defense applications. Here we give an overview of this interesting field, focusing primarily on the intersection of SiGe HBTs, and circuits built from them, with radiation-intense environments such as space. © 1963-2012 IEEE.

McDowell D.L.,Georgia Institute of Technology | Dunne F.P.E.,University of Oxford
International Journal of Fatigue | Year: 2010

Recent trends towards simulation of the cyclic slip behavior of polycrystalline and polyphase microstructures of advanced engineering alloys subjected to cyclic loading are facilitating understanding of the relative roles of intrinsic and extrinsic attributes of microstructure in fatigue crack formation, comprised of nucleation and growth of cracks at the scale of individual grains or phases. Modeling of processes of early stages of fatigue crack nucleation and growth at these microstructure scales is an important emerging frontier in several respects. First, it facilitates analysis of the influence of local microstructure attributes on the distribution of driving forces for fatigue crack formation as a function of the applied stress state. This can support microstructure-sensitive estimates of minimum life, as well as characterization of competing failure modes. Second, it can inform modification of process route and its manifestations (e.g., residual stress, texture) to alter microstructure in ways that promote enhanced resistance to formation of fatigue cracks. Third, microstructure-sensitive modeling, even conducted at the mesoscopic scale of individual grains/phases, can facilitate parametric design exploration in searching for microstructure morphologies and/or compositions that modify fatigue resistance. Fourth, such technologies offer promise for integration with advanced nondestructive evaluation methods for prognosis and structural health monitoring. Finally, as a longer term prospect in view of uncertainties in modeling mechanisms of cyclic slip, crack nucleation and growth, such modeling can serve to support more quantitative predictions of fatigue lifetime as a function of microstructure.
We first discuss computationally based microstructure-sensitive fatigue modeling in the context of recent initiatives in accelerated insertion of materials and integration of computational mechanics, materials science, and systems engineering in design of materials and structures. We then highlight recent application of such strategies to Ni-base superalloys, gear steels, and α-β Ti alloys, with focus on the individual grain scale as the minimum length scale of heterogeneity. Finally, we close by outlining opportunities to advance microstructure-sensitive fatigue modeling in the next decade. © 2010 Elsevier Ltd. All rights reserved.

Mahmoud M.A.,Georgia Institute of Technology
Crystal Growth and Design | Year: 2015

Thermodynamically unfavorable metallic nanocrystals can be prepared only by the growth of the nanocrystals under kinetically controlled experimental conditions. The common technique to drive the growth of metallic nanocrystals under kinetic control is to adjust the rate of the generation of metal atoms to be slower than the rate of deposition of such atoms onto the surface of nanocrystal nuclei, which form in the first step of the nanoparticle synthesis. The kinetically controlled growth leads to the formation of seeds with crystal defects, which are needed for the growth of anisotropic nanocrystals such as silver nanodisks (AgNDs). The simultaneous multiple asymmetric reduction technique (SMART) is introduced here to successfully prepare AgNDs of controllable sizes and on a large scale within a few seconds. The SMART is simply based on the simultaneous reduction of silver ions with a strong reducing agent such as borohydride (redox potential of 1.24 V) and a weak reducing agent such as l-ascorbic acid (redox potential of 0.35 V) in the presence of a polyvinylpyrrolidone capping agent. The random formation and deposition of silver atoms by the two different reducing agents generated stacking faults in the growing nanocrystal. The hexagonal close-packed {111} layers of silver atoms were then deposited on the surface of the growing nanocrystal containing stacking faults along the [111] plane. This initiated asymmetric growth necessary for the formation of platelike seeds with planar twin defects, which is required for the formation of anisotropic AgNDs. © 2015 American Chemical Society.

Stoesser T.,Georgia Institute of Technology
Journal of Hydraulic Engineering | Year: 2010

A physically realistic roughness closure method for the simulation of turbulent open-channel flow over natural beds within the framework of large-eddy simulation (LES) is proposed. The description of bed roughness in LES is accomplished through a roughness geometry function together with forcing terms in the momentum equations. The major benefit of this method is that the roughness is generated from one physically measurable parameter, i.e., the mean grain diameter of the bed material. A series of flows over rough beds, for which mean flow and turbulence statistics are available from experiments, is simulated. Measured and computed values are compared to validate the proposed roughness closure approach. It is found that predicted streamwise velocity profiles, turbulence intensities, and turbulent shear stress profiles match the measured values fairly well. Furthermore, the effect of roughness on the overall flow resistance is predicted in reasonable agreement with experimental values. © 2010 ASCE.

Haggerty C.M.,Georgia Institute of Technology
Annals of biomedical engineering | Year: 2012

Virtual modeling of cardiothoracic surgery is a new paradigm that allows for systematic exploration of various operative strategies and uses engineering principles to predict the optimal patient-specific plan. This study investigates the predictive accuracy of such methods for the surgical palliation of single ventricle heart defects. Computational fluid dynamics (CFD)-based surgical planning was used to model the Fontan procedure for four patients prior to surgery. The objective for each was to identify the operative strategy that best distributed hepatic blood flow to the pulmonary arteries. Post-operative magnetic resonance data were acquired to compare (via CFD) the post-operative hemodynamics with predictions. Despite variations in physiologic boundary conditions (e.g., cardiac output, venous flows) and the exact geometry of the surgical baffle, sufficient agreement was observed with respect to hepatic flow distribution (90% confidence interval-14 ± 4.3% difference). There was also good agreement of flow-normalized energetic efficiency predictions (19 ± 4.8% error). The hemodynamic outcomes of prospective patient-specific surgical planning of the Fontan procedure are described for the first time with good quantitative comparisons between preoperatively predicted and postoperative simulations. These results demonstrate that surgical planning can be a useful tool for single ventricle cardiothoracic surgery with the ability to deliver significant clinical impact.

Ingall E.D.,Georgia Institute of Technology
Nature communications | Year: 2013

Iron has a key role in controlling biological production in the Southern Ocean, yet the mechanisms regulating iron availability in this and other ocean regions are not completely understood. Here, based on analysis of living phytoplankton in the coastal seas of West Antarctica, we present a new pathway for iron removal from marine systems involving structural incorporation of reduced, organic iron into biogenic silica. Export of iron incorporated into biogenic silica may represent a substantial unaccounted loss of iron from marine systems. For example, in the Ross Sea, burial of iron incorporated into biogenic silica is conservatively estimated as 11 μmol m^-2 per year, which is in the same range as the major bioavailable iron inputs to this region. As a major sink of bioavailable iron, incorporation of iron into biogenic silica may shift microbial population structure towards taxa with relatively lower iron requirements, and may reduce ecosystem productivity and associated carbon sequestration.

Sauermann H.,Georgia Institute of Technology | Franzoni C.,Polytechnic of Milan
Proceedings of the National Academy of Sciences of the United States of America | Year: 2015

Scientific research performed with the involvement of the broader public (the crowd) attracts increasing attention from scientists and policy makers. A key premise is that project organizers may be able to draw on underused human resources to advance research at relatively low cost. Despite a growing number of examples, systematic research on the effort contributions volunteers are willing to make to crowd science projects is lacking. Analyzing data on seven different projects, we quantify the financial value volunteers can bring by comparing their unpaid contributions with counterfactual costs in traditional or online labor markets. The volume of total contributions is substantial, although some projects are much more successful in attracting effort than others. Moreover, contributions received by projects are very uneven across time: a tendency toward declining activity is interrupted by spikes typically resulting from outreach efforts or media attention. Analyzing user-level data, we find that most contributors participate only once and with little effort, leaving a relatively small share of users who return responsible for most of the work. Although top contributor status is earned primarily through higher levels of effort, top contributors also tend to work faster. This speed advantage develops over multiple sessions, suggesting that it reflects learning rather than inherent differences in skills. Our findings inform recent discussions about potential benefits from crowd science, suggest that involving the crowd may be more effective for some kinds of projects than others, provide guidance for project managers, and raise important questions for future research.
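The paper's two accounting ideas, pricing unpaid effort at a counterfactual labor-market wage and measuring how top-heavy the contribution distribution is, can be mimicked with a small sketch. The wage rate and session data here are hypothetical, not figures from the study:

```python
def effort_summary(sessions, wage_per_hour=12.0):
    """Summarize volunteer effort from (user_id, hours) session records.
    Returns (counterfactual_value, top_share):
    - counterfactual_value: total hours priced at a hypothetical
      online-labor-market wage;
    - top_share: fraction of all hours contributed by the top 10% of users
      (illustrating the heavy-tailed contribution distribution)."""
    by_user = {}
    for uid, hours in sessions:
        by_user[uid] = by_user.get(uid, 0.0) + hours
    hours_sorted = sorted(by_user.values(), reverse=True)
    total = sum(hours_sorted)
    k = max(1, len(hours_sorted) // 10)
    top_share = sum(hours_sorted[:k]) / total
    return total * wage_per_hour, top_share
```

With most contributors appearing once for a few minutes and a small core returning repeatedly, `top_share` close to 1 reproduces the pattern the paper describes.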

Zhang H.,Zhejiang University | Jin M.,Xi'an University of Science and Technology | Xiong Y.,Anhui University of Science and Technology | Lim B.,Sungkyunkwan University | Xia Y.,Georgia Institute of Technology
Accounts of Chemical Research | Year: 2013

Palladium is a marvelous catalyst for a rich variety of reactions in industrial processes and commercial devices. Most Pd-catalyzed reactions exhibit structure sensitivity, meaning that the activity or selectivity depends on the arrangement of atoms on the surface. Previously, such reactions could only be studied in ultrahigh vacuum using Pd single crystals cut with a specific crystallographic plane. However, these model catalysts are far different from real catalytic systems owing to the absence of atoms at corners and edges and the extremely small specific surface areas for the model systems. Indeed, enhancing the performance of a Pd-based catalyst, in part to reduce the amount needed of this precious and rare metal for a given reaction, requires the use of Pd with the highest possible specific surface area. Recent advances in nanocrystal synthesis are offering a great opportunity to investigate and quantify the structural sensitivity of catalysts based on Pd and other metals. For a structure-sensitive reaction, the catalytic properties of Pd nanocrystals are strongly dependent on both the size and shape. The shape plays a more significant role in controlling activity and selectivity, because the shape controls not only the facets but also the proportions of surface atoms at corners, edges, and planes, which affect the outcomes of possible reactions. We expect catalysts based on Pd nanocrystals with optimized shapes to meet the increasing demands of industrial applications at reduced loadings and costs. In this Account, we discuss recent advances in the synthesis of Pd nanocrystals with controlled shapes and their resulting performance as catalysts for a large number of reactions. First, we review various synthetic strategies based on oxidative etching, surface capping, and kinetic control that have been used to direct the shapes of nanocrystals.
When crystal growth is under thermodynamic control, the capping agent plays a pivotal role in determining the shape of a product by altering the order of surface energies for different facets through selective adsorption; the resulting product has the lowest possible total surface energy. In contrast, the product of a kinetically controlled synthesis often deviates from the thermodynamically favored structure, with notable examples including nanocrystals enclosed by high-index facets or concave surfaces. We then discuss the key parameters that control the nucleation and growth of Pd nanocrystals to decipher potential growth mechanisms and build a connection between the experimental conditions and the pathways to different shapes. Finally, we present a number of examples to highlight the use of these Pd nanocrystals as catalysts or electrocatalysts for various applications with structure-sensitive properties. We believe that a deep understanding of the shape-dependent catalytic properties, together with an ability to experimentally maneuver the shape of metal nanocrystals, will eventually lead to rational design of advanced catalysts with substantially enhanced performance. © 2012 American Chemical Society.

Wyatt M.G.,University of Colorado at Boulder | Curry J.A.,Georgia Institute of Technology
Climate Dynamics | Year: 2014

A hypothesized low-frequency climate signal propagating across the Northern Hemisphere through a network of synchronized climate indices was identified in previous analyses of instrumental and proxy data. The tempo of signal propagation is rationalized in terms of the multidecadal component of Atlantic Ocean variability: the Atlantic Multidecadal Oscillation. Through multivariate statistical analysis of an expanded database, we further investigate this hypothesized signal to elucidate propagation dynamics. The Eurasian Arctic Shelf-Sea Region, where sea ice is uniquely exposed to open ocean in the Northern Hemisphere, emerges as a strong contender for generating and sustaining propagation of the hemispheric signal. Ocean-ice-atmosphere coupling spawns a sequence of positive and negative feedbacks that convey persistence and quasi-oscillatory features to the signal. Further stabilizing the system are anomalies of co-varying Pacific-centered atmospheric circulations. Indirectly related to dynamics in the Eurasian Arctic, these anomalies appear to negatively feed back onto the Atlantic's freshwater balance. Earth's rotational rate and other proxies encode traces of this signal as it makes its way across the Northern Hemisphere. © 2013 Springer-Verlag Berlin Heidelberg.

Baer P.,Georgia Institute of Technology
Wiley Interdisciplinary Reviews: Climate Change | Year: 2013

The Greenhouse Development Rights (GDRs) Framework is a proposal for a global climate agreement in which the obligations assigned to nations are based on a combination of responsibility (contribution to the problem) and capacity (ability to pay). A key feature of the GDRs framework is that it is modeled on the assignment of a 'right to development' to individuals, such that individuals with incomes below a 'development threshold' are nominally exempted from obligations to pay for mitigation and adaptation. Obligations for those 'over the threshold' are calculated in the same way for rich persons in poor countries and rich persons in rich countries. As income distribution within countries is taken into account and all countries have some wealthy people, all countries have a positive obligation to contribute to global mitigation and adaptation requirements, eliminating the sharp distinction between Annex I and non-Annex I countries. In the last few years, GDRs has become one of the most widely known of the many so-called burden-sharing frameworks that have been proposed. In this essay, one of the co-authors of the GDRs framework presents the framework's fundamental principles, describes its place in the larger discussion of burden-sharing and climate justice, and reflects on its prospects in the next phase of the global climate negotiations. Hopefully it will be helpful both to readers new to GDRs and to our existing supporters and critics. © 2012 John Wiley & Sons, Ltd.
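The threshold-and-shares logic described above can be sketched in code. The development threshold, the equal weighting of capacity and responsibility, and all incomes below are illustrative placeholders; the actual GDRs calculations rest on a calibrated Responsibility Capacity Index, not this toy:

```python
def capacity(income, threshold=7500.0):
    """Capacity (ability to pay) of one individual: income above the
    development threshold (USD/yr). Individuals below the threshold are
    exempt. The $7,500 figure is an illustrative placeholder."""
    return max(0.0, income - threshold)

def national_obligation_share(incomes_by_country, responsibility_by_country,
                              alpha=0.5):
    """Each country's share of the global burden as a weighted mix of its
    share of global capacity and its share of global responsibility
    (contribution to the problem). Because capacity is summed over
    individuals, rich people in poor countries count the same as rich
    people in rich countries."""
    cap = {c: sum(capacity(i) for i in people)
           for c, people in incomes_by_country.items()}
    cap_total = sum(cap.values()) or 1.0
    resp_total = sum(responsibility_by_country.values()) or 1.0
    return {c: alpha * cap[c] / cap_total
               + (1 - alpha) * responsibility_by_country[c] / resp_total
            for c in cap}
```

Any country with at least one person over the threshold receives a positive share, which is the mechanism that dissolves the sharp Annex I / non-Annex I distinction.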

Inflammation and altered glutamate metabolism are two pathways implicated in the pathophysiology of depression. Interestingly, these pathways may be linked given that administration of inflammatory cytokines such as interferon-α to otherwise non-depressed controls increased glutamate in the basal ganglia and dorsal anterior cingulate cortex (dACC) as measured by magnetic resonance spectroscopy (MRS). Whether increased inflammation is associated with increased glutamate among patients with major depression is unknown. Accordingly, we conducted a cross-sectional study of 50 medication-free, depressed outpatients using single-voxel MRS, to measure absolute glutamate concentrations in basal ganglia and dACC. Multivoxel chemical shift imaging (CSI) was used to explore creatine-normalized measures of other metabolites in basal ganglia. Plasma and cerebrospinal fluid (CSF) inflammatory markers were assessed along with anhedonia and psychomotor speed. Increased log plasma C-reactive protein (CRP) was significantly associated with increased log left basal ganglia glutamate controlling for age, sex, race, body mass index, smoking status and depression severity. In turn, log left basal ganglia glutamate was associated with anhedonia and psychomotor slowing measured by the finger-tapping test, simple reaction time task and the Digit Symbol Substitution Task. Plasma CRP was not associated with dACC glutamate. Plasma and CSF CRP were also associated with CSI measures of basal ganglia glutamate and the glial marker myoinositol. These data indicate that increased inflammation in major depression may lead to increased glutamate in the basal ganglia in association with glial dysfunction and suggest that therapeutic strategies targeting glutamate may be preferentially effective in depressed patients with increased inflammation as measured by CRP.Molecular Psychiatry advance online publication, 12 January 2016; doi:10.1038/mp.2015.206. © 2016 Macmillan Publishers Limited

Wray J.J.,Georgia Institute of Technology
International Journal of Astrobiology | Year: 2013

Gale crater formed from an impact on Mars ∼3.6 billion years ago. It hosts a central mound nearly 100 km wide and ∼5 km high, consisting of layered rocks with a variety of textures and spectral properties. The oldest exposed layers contain variably hydrated sulphates and smectite clay minerals, implying an aqueous origin, whereas the younger layers higher on the mound are covered by a mantle of dust. Fluvial channels carved into the crater walls and the lower mound indicate that surface liquids were present during and after deposition of the mound material. Numerous hypotheses have been advocated for the origin of some or all minerals and layers in the mound, ranging from deep lakes to playas to mostly dry dune fields to airfall dust or ash subjected to only minor alteration driven by snowmelt. The complexity of the mound suggests that multiple depositional and diagenetic processes are represented in the materials exposed today. Beginning in August 2012, the Mars Science Laboratory rover Curiosity will explore Gale crater by ascending the mound's northwestern flank, providing unprecedented new detail on the evolution of environmental conditions and habitability over many millions of years during which the mound strata accumulated. © 2012 Cambridge University Press.

Feng S.S.J.,Emory University | Sechopoulos I.,Georgia Institute of Technology
Radiology | Year: 2012

Purpose: To comprehensively characterize the dosimetric properties of a clinical digital breast tomosynthesis (DBT) system for the acquisition of mammographic and tomosynthesis images. Materials and Methods: Compressible water-oil mixture phantoms were created and imaged by using the automatic exposure control (AEC) of the Selenia Dimensions system (Hologic, Bedford, Mass) in both DBT and full-field digital mammography (FFDM) mode. Empirical measurements of the x-ray tube output were performed with a dosimeter to measure the air kerma for the range of tube current-exposure time product settings and to develop models of the automatically selected x-ray spectra. A Monte Carlo simulation of the system was developed and used in conjunction with the AEC-chosen settings and spectra models to compute and compare the mean glandular dose (MGD) resulting from both imaging modalities for breasts of varying sizes and glandular compositions. Results: Acquisition of a single craniocaudal view resulted in an MGD ranging from 0.309 to 5.26 mGy in FFDM mode and from 0.657 to 3.52 mGy in DBT mode. For a breast with a compressed thickness of 5.0 cm and a 50% glandular fraction, a DBT acquisition resulted in an only 8% higher MGD than an FFDM acquisition (1.30 and 1.20 mGy, respectively). For a breast with a compressed thickness of 6.0 cm and a 14.3% glandular fraction, a DBT acquisition resulted in an 83% higher MGD than an FFDM acquisition (2.12 and 1.16 mGy, respectively). Conclusion: For two-dimensional/three-dimensional fusion imaging with the Selenia Dimensions system, the MGD for a 5-cm-thick 50% glandular breast is 2.50 mGy, which is less than the Mammography Quality Standards Act limit for a two-view screening mammography study. © RSNA, 2012.
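The dose comparisons in this abstract reduce to simple percentage arithmetic and can be sanity-checked directly. The sketch below uses only the MGD values quoted above and assumes, as the abstract's numbers imply, that the fusion-imaging dose is the sum of the DBT and FFDM acquisitions:

```python
# Mean glandular dose (MGD) values from the abstract, in mGy.
dbt_5cm, ffdm_5cm = 1.30, 1.20    # 5.0 cm thick, 50% glandular breast
dbt_6cm, ffdm_6cm = 2.12, 1.16    # 6.0 cm thick, 14.3% glandular breast

# Relative increase of DBT over FFDM for each breast model.
increase_5cm = 100 * (dbt_5cm - ffdm_5cm) / ffdm_5cm   # about 8%
increase_6cm = 100 * (dbt_6cm - ffdm_6cm) / ffdm_6cm   # about 83%

# 2D/3D fusion imaging delivers both acquisitions, so the doses add.
fusion_dose = dbt_5cm + ffdm_5cm                       # 2.50 mGy

print(round(increase_5cm), round(increase_6cm), round(fusion_dose, 2))
```

Running this reproduces the 8%, 83%, and 2.50 mGy figures reported in the Results and Conclusion.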

Rhee W.J.,Georgia Institute of Technology
Nucleic acids research | Year: 2010

Molecular beacons (MBs) have the potential to provide a powerful tool for rapid RNA detection in living cells, as well as for monitoring the dynamics of RNA expression in response to external stimuli. To exploit this potential, it is necessary to distinguish true signal from background signal due to non-specific interactions. Here, we show that, when cyanine-dye labeled 2'-deoxy and 2'-O-methyl oligonucleotide probes are inside living cells for >5 h, most of their signals co-localize with mitochondrial staining. These probes include a random-sequence MB, a dye-labeled single-stranded linear oligonucleotide and a dye-labeled double-stranded oligonucleotide. Using carbonyl cyanide m-chlorophenyl hydrazone treatment, we found that the non-specific accumulation of oligonucleotide probes at mitochondria was driven by mitochondrial membrane potential. We further demonstrated that the dye-labeled oligonucleotide probes were likely on/near the surface of mitochondria but not inside the mitochondrial inner membrane. Interestingly, oligonucleotide probes labeled respectively with Alexa Fluor 488 and Alexa Fluor 546 did not accumulate at mitochondria, suggesting that the non-specific interaction between dye-labeled ODN probes and mitochondria is dye-specific. These results may help design and optimize fluorescence imaging probes for long-term RNA detection and monitoring in living cells.

Liebman S.W.,University of Nevada, Reno | Chernoff Y.O.,Georgia Institute of Technology
Genetics | Year: 2012

The concept of a prion as an infectious self-propagating protein isoform was initially proposed to explain certain mammalian diseases. It is now clear that yeast also has heritable elements transmitted via protein. Indeed, the "protein only" model of prion transmission was first proven using a yeast prion. Typically, known prions are ordered cross-b aggregates (amyloids). Recently, there has been an explosion in the number of recognized prions in yeast. Yeast continues to lead the way in understanding cellular control of prion propagation, prion structure, mechanisms of de novo prion formation, specificity of prion transmission, and the biological roles of prions. This review summarizes what has been learned from yeast prions. © 2012 by the Genetics Society of America.

Friscourt F.,University of Georgia | Fahrni C.J.,Georgia Institute of Technology | Boons G.-J.,University of Georgia
Journal of the American Chemical Society | Year: 2012

Fluorogenic reactions in which non- or weakly fluorescent reagents produce highly fluorescent products can be exploited to detect a broad range of compounds including biomolecules and materials. We describe a modified dibenzocyclooctyne that under catalyst-free conditions undergoes fast strain-promoted cycloadditions with azides to yield strongly fluorescent triazoles. The cycloaddition products are more than 1000-fold brighter than the starting cyclooctyne, exhibit a large Stokes shift, and can be excited above 350 nm, which is required for many applications. Quantum mechanical calculations indicate that the fluorescence increase upon triazole formation is due to large differences in oscillator strengths of the S0 → S1 transitions in the planar C2v-symmetric starting material compared to the symmetry-broken and nonplanar cycloaddition products. The new fluorogenic probe was successfully employed for labeling of proteins modified by an azide moiety. © 2012 American Chemical Society.

Wang Y.,Georgia Institute of Technology
Journal of Mechanical Design, Transactions of the ASME | Year: 2011

Variability is the inherent randomness in systems, whereas incertitude is due to lack of knowledge. In this paper, a generalized hidden Markov model (GHMM) is proposed to quantify aleatory and epistemic uncertainties simultaneously in multiscale system analysis. The GHMM is based on a new imprecise probability theory that has the form of generalized interval. The new interval probability resembles the precise probability and has a similar calculus structure. The proposed GHMM allows us to quantify cross-scale dependency and information loss between scales. Based on a generalized interval Bayes' rule, three cross-scale information assimilation approaches that incorporate uncertainty propagation are also developed. © 2011 American Society of Mechanical Engineers.
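The paper's generalized interval Bayes' rule is not reproduced here, but the flavor of interval-valued ("imprecise") probability can be illustrated with the standard interval extension of Bayes' rule for two exhaustive hypotheses. All numbers below are invented for illustration, not taken from the paper:

```python
# Interval priors and likelihoods for two exhaustive hypotheses H1, H2.
# (Illustrative numbers only.)
p1 = (0.3, 0.5)   # prior interval for H1
p2 = (0.5, 0.7)   # prior interval for H2
l1 = (0.6, 0.8)   # likelihood interval P(data | H1)
l2 = (0.1, 0.3)   # likelihood interval P(data | H2)

def posterior_interval(p, l, p_other, l_other):
    """Bounds on P(H1 | data) under the standard interval extension
    of Bayes' rule: the lower bound pairs the smallest numerator
    with the largest competing probability mass, and vice versa."""
    lo = p[0] * l[0] / (p[0] * l[0] + p_other[1] * l_other[1])
    hi = p[1] * l[1] / (p[1] * l[1] + p_other[0] * l_other[0])
    return lo, hi

lo, hi = posterior_interval(p1, l1, p2, l2)
print(round(lo, 3), round(hi, 3))
```

The posterior is again an interval, here roughly [0.46, 0.89]: the width of the interval carries the epistemic (lack-of-knowledge) component of the uncertainty, while the probabilities themselves carry the aleatory component.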

Hora M.,Georgia Institute of Technology | Klassen R.D.,University of Western Ontario
Journal of Operations Management | Year: 2013

Risks arising from operations are increasingly being highlighted by managers, customers, and the popular press, particularly related to large-scale (and usually low-frequency) losses. If poorly managed, the resulting disruptions in customer service and environmental problems incur enormous recovery costs, prompt large legal liabilities, and damage customer goodwill and brand equity. Yet, despite conventional wisdom that firms should improve their own operations by observing problems that occur in others' processes, significant operational risks appear to be ignored and similar losses recur. Using a randomized vignette-based field experiment, we tested the influence of organization-level factors on knowledge acquisition. Two organization-level factors, namely perceived operational similarity, and to a lesser extent, market leadership, significantly influenced the risk manager's likelihood of acquiring knowledge about possible causes that triggered another firm's operational loss. These findings suggest that senior managers need to develop organizational systems and training to expand the screening by risk managers to enhance knowledge acquisition. Moreover, industry and trade organizations may have a role in fostering the transfer of knowledge and potential learning from operational losses of firms. © 2012 Elsevier B.V. All rights reserved.

Balog E.M.,Georgia Institute of Technology
Exercise and Sport Sciences Reviews | Year: 2010

Low-frequency fatigue (LFF) is characterized by a proportionally greater loss of force at low compared with high activation frequencies and a prolonged recovery. Recent work suggests a calcium-induced uncoupling of excitation-contraction coupling underlies LFF. Here, newly characterized triadic proteins are described, and possible mechanisms by which they may contribute to LFF are suggested. Copyright © 2010 by the American College of Sports Medicine.

McDowell D.L.,Georgia Institute of Technology
International Journal of Plasticity | Year: 2010

Research trends in metal plasticity over the past 25 years are briefly reviewed. The myriad length scales at which phenomena involving microstructure rearrangement occur during plastic flow are discussed, along with key challenges. Contributions of the author's group over the past 30 years are summarized in this context, focusing on the statistical nature of microstructure evolution and emergent multiscale behavior associated with metal plasticity, current trends and models for length scale effects, multiscale kinematics, the role of grain boundaries, and the distinction of the roles of concurrent and hierarchical multiscale modeling in the context of materials design. © 2010 Elsevier Ltd.

Kent Barefield E.,Georgia Institute of Technology
Coordination Chemistry Reviews | Year: 2010

N-alkylation of macrocyclic amines has a significant impact on their properties as ligands for metal ions. This article examines the development of the coordination chemistry of N-alkylated cyclam ligands from its inception in 1973 with the first report of tetramethylcyclam. Emphasis is on: (1) the stereochemistry of metal complexation, including the effects of inclusion of functional groups in one or two of the N-alkyl groups; (2) the effect of N-alkylation on the metal-donor interaction; (3) the ability of tertiary amine ligands to stabilize complexes of metal ions in unusual oxidation states. © 2010 Elsevier B.V. All rights reserved.

Chakravarty U.K.,Georgia Institute of Technology
Composite Structures | Year: 2010

Mechanical characterization of foams at varying strain rates is indispensable for selecting a foam as the core material in efficient sandwich structure design for dynamic loading applications. Servo-hydraulically controlled Material Testing System (MTS) and Instron machines are generally used for quasi-static testing at strain rates on the order of 10⁻³ s⁻¹. A split Hopkinson pressure bar (SHPB) with steel bars is typically utilized for characterizing metallic foams at high strain rates; however, modified SHPBs with polycarbonate or other soft-material bars are used for characterizing polymeric and biomaterial foams at high strain rates on the order of 10³ s⁻¹, to match impedance between the foam specimens and the bars. This paper reviews the effect of strain rate of loading, density, environmental temperature, and microstructure on the compressive strength and energy absorption capacity of various closed-cell polymeric, metallic, and biomaterial foams. Compressive strength and energy absorption capacity increase with both strain rate of loading and foam density, but decrease with increasing surrounding temperature. Foams of the same density can have different strengths and can absorb unequal amounts of energy at the same strain rate of loading owing to variations in microstructure. © 2010 Elsevier Ltd.

Spadoni A.,Ecole Polytechnique Federale de Lausanne | Ruzzene M.,Georgia Institute of Technology
Journal of the Mechanics and Physics of Solids | Year: 2012

Auxetic materials expand when stretched, and shrink when compressed. This is the result of a negative Poisson's ratio ν. Isotropic configurations with ν≈-1 have been designed and are expected to provide increased shear stiffness G. This assumes that Young's modulus and ν can be engineered independently. In this article, a micropolar-continuum model is employed to describe the behavior of a representative auxetic structural network, the chiral lattice, in an attempt to remove the indeterminacy in its constitutive law resulting from ν=-1. While this indeterminacy is successfully removed, it is found that the shear modulus is an independent parameter and, for certain configurations, it is equal to that of the triangular lattice. This is remarkable as the chiral lattice is subject to bending deformation of its internal members, and thus is more compliant than the triangular lattice which is stretch dominated. The derived micropolar model also indicates that this unique lattice has the highest characteristic length scale lc of all known lattice topologies, as well as a negative first Lamé constant without violating bounds required for thermodynamic stability. We also find that hexagonal arrangements of deformable rings have a coupling number N=1. This is the first lattice reported in the literature for which couple-stress or Mindlin theory is necessary rather than being adopted a priori. © 2011 Elsevier Ltd. All rights reserved.

Rosenberger R.,Georgia Institute of Technology
Science Technology and Human Values | Year: 2011

Thinkers from a variety of fields analyze the roles of imaging technologies in science and consider their implications for many issues, from our conception of selfhood to the authority of science. In what follows, I encourage scholars to develop an applied philosophy of imaging, that is, to collect these analyses of scientific imaging and to reflect on how they can be made useful for ongoing scientific work. As an example of this effort, I review concepts developed in Don Ihde's phenomenology of technology and refigure them for use in the analysis of scientific practice. These concepts are useful for drawing out the details of the interpretive frameworks scientists bring to laboratory images. Next, I apply these ideas to a contemporary debate in neurobiology over the interpretation of images of neurons which have been frozen at the moment of transmitter release. This reveals directions for further thought for the study of neurotransmission. © The Author(s) 2011.

Ju Y.,Princeton University | Sun W.,Georgia Institute of Technology
Progress in Energy and Combustion Science | Year: 2015

Plasma assisted combustion is a promising technology to improve engine performance, increase lean burn flame stability, reduce emissions, and enhance low temperature fuel oxidation and processing. Over the last decade, significant progress has been made towards the applications of plasma in engines and the understanding of the fundamental chemistry and dynamic processes in plasma assisted combustion via the synergetic efforts in advanced diagnostics, combustion chemistry, flame theory, and kinetic modeling. New observations of plasma assisted ignition enhancement, ultra-lean combustion, cool flames, flameless combustion, and controllability of plasma discharge have been reported. Advances are made in the understanding of non-thermal and thermal enhancement effects, kinetic pathways of atomic O production, diagnostics of electronically and vibrationally excited species, plasma assisted combustion kinetics of sub-explosion limit ignition, plasma assisted low temperature combustion, flame regime transition of the classical ignition S-curve, dynamics of the minimum ignition energy, and the transport effect by non-equilibrium plasma discharge. These findings and advances have provided new opportunities in the development of efficient plasma discharges for practical applications and predictive, validated kinetic models and modeling tools for plasma assisted combustion at low temperature and high pressure conditions. This article is to provide a comprehensive overview of the progress and the gap in the knowledge of plasma assisted combustion in applications, chemistry, ignition and flame dynamics, experimental methods, diagnostics, kinetic modeling, and discharge control. © 2014 Elsevier Ltd. All rights reserved.

O'Connor J.,Combustion Research Facility | Lieuwen T.,Georgia Institute of Technology
Physics of Fluids | Year: 2012

This work investigates the response of the vortex breakdown region of a swirling, annular jet to transverse acoustic excitation for both non-reacting and reacting flows. This swirling flow field consists of a central vortex breakdown region, two shear layers, and an annular fluid jet. The vortex breakdown bubble, a region of highly turbulent recirculating flow in the center of the flow field, is the result of a global instability of the swirling jet. Additionally, the two shear layers originating from the inner and outer edge of the annular nozzle are convectively unstable and roll up due to the Kelvin-Helmholtz instability. Unlike the convectively unstable shear layers that respond in a monotonic manner to acoustic forcing, the recirculation zone exhibits a range of response characteristics, ranging from minimal response to exhibiting abrupt bifurcations at large forcing amplitudes. In this study, the response of the time-average and fluctuating recirculation zone is measured as a function of forcing frequency, amplitude, and symmetry. The time-average flow field is shown to exhibit both monotonically varying and abrupt bifurcation features as acoustic forcing amplitude is increased. The unsteady motion in the recirculation zone is dominated by the low frequency precession of the vortex breakdown bubble. In the unforced flow, the azimuthal m = -2 and m = -1 modes (i.e., disturbances rotating in the same direction as the swirl flow) dominate the velocity disturbance field. These modes correspond to large scale deformation of the jet column and two small-scale precessing vortical structures in the recirculation zone, respectively. The presence of high amplitude acoustic forcing changes the relative amplitude of these two modes, as well as the character of the self-excited motion. For the reacting flow problem, we argue that the direct effect of these recirculation zone fluctuations on the flame response to flow forcing is not significant. 
Rather, flame wrinkling in response to flow forcing is dominated by shear layer disturbances. Recirculation zone dynamics primarily influence the time-average flame features (such as spreading angle). These influences on the flame response are indirect, as they control the transfer function relating shear layer fluctuations and the resulting flame response. © 2012 American Institute of Physics.

Garimella S.,Georgia Institute of Technology
Applied Thermal Engineering | Year: 2012

An investigation of heat recovery from industrial processes with large exhaust gas flow rates, but at very low temperatures, was conducted. Heat recovered from a gas stream at 120 °C was supplied to an absorption cycle to simultaneously generate chilled water and hot water to be used for space conditioning and/or process heating. With the steep increase in energy costs faced by industry, it may be possible to use previously unviable techniques. At nominal conditions, 2.26 MW of heat recovered from the waste heat stream yields a chilled hydronic fluid stream at 7 °C with a cooling capacity of 1.28 MW. Simultaneously, a second hydronic fluid stream can be heated from 43 °C to 54 °C for a heating capacity of 3.57 MW. Based on the cost of electricity to generate this cooling without the waste heat recovery system, and the cost of natural gas for heating, savings of $186/hr of operation may be realized. When extrapolated to annual operation with a 75% capacity factor, savings of up to $1.2 million can be achieved. The system requires large components to enable heat exchange over very small temperature differences, with the largest component being the waste heat driven desorber. Minor increases in heat source temperature result in substantial reductions in heat exchanger size. © 2011 Elsevier Ltd. All rights reserved.
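The quoted annual savings follow directly from the hourly savings and the capacity factor, assuming 8760 hours in a year:

```python
hourly_savings = 186      # $/hr of operation, from the abstract
capacity_factor = 0.75    # fraction of the year the system operates
hours_per_year = 8760

annual_savings = hourly_savings * capacity_factor * hours_per_year
print(f"${annual_savings:,.0f}")   # about $1.2 million per year
```

This reproduces the abstract's figure of up to $1.2 million in annual savings.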

Stewart F.J.,Georgia Institute of Technology
Methods in Enzymology | Year: 2013

High-throughput sequencing and analysis of microbial community cDNA (metatranscriptomics) are providing valuable insight into in situ microbial activity and metabolism in the oceans. A critical first step in metatranscriptomic studies is the preparation of high-quality cDNA. At the minimum, preparing cDNA for sequencing involves steps of biomass collection, RNA preservation, total RNA extraction, and cDNA synthesis. Each of these steps may present unique challenges for marine microbial samples, particularly for deep-sea samples whose transcriptional profiles may change between water collection and RNA preservation. Because bacterioplankton community RNA yields may be relatively low (< 500 ng), it is often necessary to amplify total RNA to obtain sufficient cDNA for downstream sequencing. Additionally, depending on the nature of the samples, budgetary considerations, and the choice of sequencing technology, steps may be required to deplete the amount of ribosomal RNA (rRNA) transcripts in a sample in order to maximize mRNA recovery. cDNA preparation may also involve the addition of internal RNA standards to biomass samples, thereby allowing for absolute quantification of transcript abundance following sequencing. This chapter describes a general protocol for cDNA preparation from planktonic microbial communities, from RNA preservation to final cDNA synthesis, with specific emphasis placed on topics of sampling bias and rRNA depletion. Consideration of these topics is critical for helping standardize metatranscriptomics methods as they become widespread in marine microbiology research. © 2013 Elsevier Inc. All rights reserved.

Braun G.,University of Leipzig | Pokutta S.,Georgia Institute of Technology
Proceedings - Annual IEEE Symposium on Foundations of Computer Science, FOCS | Year: 2013

We provide a new framework for establishing strong lower bounds on the nonnegative rank of matrices by means of common information, a notion previously introduced in [1]. Common information is a natural lower bound for the nonnegative rank of a matrix and by combining it with Hellinger distance estimations we can compute the (almost) exact common information of UDISJ partial matrix. We also establish robustness of this estimation under various perturbations of the UDISJ partial matrix, where rows and columns are randomly or adversarially removed or where entries are randomly or adversarially altered. This robustness translates, via a variant of Yannakakis' Factorization Theorem, to lower bounds on the average case and adversarial approximate extension complexity. We present the first family of polytopes, the hard pair introduced in [2] related to the CLIQUE problem, with high average case and adversarial approximate extension complexity. We also provide an information theoretic variant of the fooling set method that allows us to extend fooling set lower bounds from extension complexity to approximate extension complexity. Copyright © 2013 by The Institute of Electrical and Electronics Engineers, Inc.

Grijalva S.,Georgia Institute of Technology
IEEE Transactions on Power Systems | Year: 2012

This paper explores the relation between saddle-node bifurcation voltage collapse and complex flow limits of individual transmission elements. Two necessary conditions for power system voltage collapse are presented. First, when a transfer of power takes place in a power system, at least one line must reach its static transfer stability limit (STSL) at or before the point of voltage collapse. Second, for a point-to-point transfer, a path from the source to the sink, formed by lines all of which have reached their STSL limits, must be formed in the network before the point of collapse is reached. We present numerical examples confirming these two necessary conditions. © 2006 IEEE.

Sovacool B.K.,National University of Singapore | Brown M.A.,Georgia Institute of Technology
Energy Policy | Year: 2010

A dearth of available data on carbon emissions and comparative analysis between metropolitan areas makes it difficult to confirm or refute best practices and policies. To help provide benchmarks and expand our understanding of urban centers and climate change, this article offers a preliminary comparison of the carbon footprints of 12 metropolitan areas. It does this by examining emissions related to vehicles, energy used in buildings, industry, agriculture, and waste. The carbon emissions from these sources, discussed here as the metro area's partial carbon footprint, provide a foundation for identifying the pricing, land-use, and other policies that can help metropolitan areas throughout the world respond to climate change. The article begins by exploring a sample of the existing literature on urban morphology and climate change and explaining the methodology used to calculate each area's carbon footprint. The article then depicts the specific carbon footprints for Beijing, Jakarta, London, Los Angeles, Manila, Mexico City, New Delhi, New York, São Paulo, Seoul, Singapore, and Tokyo and compares these to respective national averages. It concludes by offering suggestions for how city planners and policymakers can reduce the carbon footprint of these and possibly other large urban areas. © 2009 Elsevier Ltd.

A generalized framework is presented for the electromechanical modeling of base-excited piezoelectric energy harvesters with symmetric and unsymmetric laminates. The electromechanical derivations are given using the assumed-modes method under the Euler-Bernoulli, Rayleigh, and Timoshenko beam assumptions in three sections. The formulations account for an independent axial displacement variable and its electromechanical coupling in all cases. Comparisons are provided against the analytical solution for symmetric laminates and convergence of the assumed-modes solution to the analytical solution with increasing number of modes is shown. Model validations are also presented by comparing the electromechanical frequency response functions derived herein with the experimentally obtained ones in the absence and presence of a tip mass attachment. A discussion is provided for combination of the assumed-modes solution with nonlinear energy harvesting and storage circuitry. The electromechanical assumed-modes formulations can be used for modeling of piezoelectric energy harvesters with moderate thickness as well as those with unsymmetric laminates and varying geometry in the axial direction. © 2012 Elsevier Ltd. All rights reserved.
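As background for the modeling approach described above, the assumed-modes method expands the displacement field in a finite series of admissible trial functions, yielding coupled electromechanical ordinary differential equations. The generic form below is standard in the piezoelectric energy harvesting literature and is a sketch, not a transcription of this paper's derivation:

```latex
% Assumed-modes expansion of the transverse displacement:
%   w(x,t) = \sum_{r=1}^{N} \phi_r(x)\,\eta_r(t),
% with admissible trial functions \phi_r(x) and modal coordinates
% \eta_r(t); an analogous series is used for the axial displacement.
%
% Discretized electromechanical equations under base excitation f(t),
% with electromechanical coupling vector \theta, effective piezoelectric
% capacitance C_p, and load resistance R_l:
\begin{align}
  \mathbf{M}\,\ddot{\boldsymbol\eta} + \mathbf{C}\,\dot{\boldsymbol\eta}
    + \mathbf{K}\,\boldsymbol\eta - \boldsymbol\theta\, v(t)
    &= \mathbf{f}(t) \\
  C_p\,\dot{v}(t) + \frac{v(t)}{R_l}
    + \boldsymbol\theta^{\mathsf T}\dot{\boldsymbol\eta}
    &= 0
\end{align}
```

The first equation is the mechanical balance with backward coupling from the voltage; the second is the circuit equation with forward coupling from the modal velocities. Convergence to the analytical solution is obtained by increasing the number of modes N retained in the series.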

Niculescu M.F.,Georgia Institute of Technology | Whang S.,Stanford University
Information Systems Research | Year: 2012

Wireless telecommunications have become over time a ubiquitous tool that not only sustains our increasing need for flexibility and efficiency, but also provides new ways to access and experience both utilitarian and hedonic information goods and services. This paper explores the parallel market evolution of the two main categories of wireless services-voice and data-in leading technology markets, inspecting the differences and complex interactions between the associated adoption processes. We propose a model that addresses specific individual characteristics of these two services and the stand-alone/add-on relationship between them. In particular, we acknowledge the distinction between the nonoverlapping classes of basic consumers, who only subscribe to voice plans, and sophisticated consumers, who adopt both services. We also account for the fact that, unlike voice services, data services rapidly evolved over time due to factors such as interface improvement, gradual technological advances in data transmission speed and security, and the increase in volume and diversity of the content and services ported to mobile Internet. Moreover, we consider the time gap between the market introduction of these services and allow for different corresponding consumer learning curves. We test our model on the Japanese wireless market. The empirical analysis reveals several interesting results. In addition to an expected one-way effect of voice on data adoption at the market potential level, we do find two-way codiffusion effects at the speed of adoption level. We also observe that basic consumers impact the adoption of wireless voice services in a stronger way compared to sophisticated consumers. This, in turn, leads to a decreasing average marginal network effect of voice subscribers on the adoption of wireless voice services. 
Furthermore, we find that the willingness of voice consumers to consider adopting data services is positively related to both time and penetration of 3G-capable handsets among voice subscribers. © 2012 INFORMS.

Wang Z.L.,Georgia Institute of Technology | Wang Z.L.,CAS Beijing Institute of Nanoenergy and Nanosystems
Faraday Discussions | Year: 2014

Triboelectrification is one of the most common effects in our daily life, but it is usually taken as a negative effect with very limited positive applications. Here, we invented a triboelectric nanogenerator (TENG) based on organic materials that is used to convert mechanical energy into electricity. The TENG is based on the conjunction of triboelectrification and electrostatic induction, and it utilizes the most common materials available in our daily life, such as papers, fabrics, PTFE, PDMS, Al, PVC etc. In this short review, we first introduce the four most fundamental modes of TENG, based on which a range of applications have been demonstrated. The area power density reaches 1200 W m-2, volume density reaches 490 kW m-3, and an energy conversion efficiency of ∼50-85% has been demonstrated. The TENG can be applied to harvest all kinds of mechanical energy that is available in our daily life, such as human motion, walking, vibration, mechanical triggering, rotation energy, wind, a moving automobile, flowing water, rain drops, tide and ocean waves. Therefore, it is a new paradigm for energy harvesting. Furthermore, TENG can be a sensor that directly converts a mechanical triggering into a self-generated electric signal for detection of motion, vibration, mechanical stimuli, physical touching, and biological movement. After a summary of TENG for micro-scale energy harvesting, mega-scale energy harvesting, and self-powered systems, we will present a set of questions that need to be discussed and explored for applications of the TENG. Lastly, since the energy conversion efficiencies for each mode can be different although the materials are the same, depending on the triggering conditions and design geometry. But one common factor that determines the performance of all the TENGs is the charge density on the two surfaces, the saturation value of which may independent of the triggering configurations of the TENG. 
Therefore, the triboelectric charge density, or the relative charge density in reference to a standard material (such as polytetrafluoroethylene (PTFE)), can be taken as a metric for characterizing the performance of a material for the TENG. © The Royal Society of Chemistry 2014.

Gilbert E.,Georgia Institute of Technology
Proceedings of the ACM Conference on Computer Supported Cooperative Work, CSCW | Year: 2012

Hierarchy fundamentally shapes how we act at work. In this paper, we explore the relationship between the words people write in workplace email and the rank of the email's recipient. Using the Enron corpus as a dataset, we perform a close study of the words and phrases people send to those above them in the corporate hierarchy versus those at the same level or lower. We find that certain words and phrases are strong predictors. For example, "thought you would" strongly suggests that the recipient outranks the sender, while "let's discuss" implies the opposite. We also find that the phrases people write to their bosses do not demonstrate cognitive processes as often as the ones they write to others. We conclude this paper by interpreting our results and announcing the release of the predictive phrases as a public dataset, perhaps enabling a new class of status-aware applications. © 2012 ACM.
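The kind of phrase-level association described above can be sketched with simple smoothed log-odds scores. The toy messages, phrases, and smoothing constant below are illustrative, not the Enron data or the paper's actual method:

```python
from collections import Counter
from math import log

# Toy corpus: (message text, True if the recipient outranks the sender).
messages = [
    ("thought you would want to see this", True),
    ("thought you would like the update", True),
    ("let's discuss this tomorrow", False),
    ("let's discuss the budget", False),
    ("please review the attached draft", True),
    ("have you been following up on this", False),
]

def bigrams(text):
    words = text.split()
    return [" ".join(words[i:i + 2]) for i in range(len(words) - 1)]

up, down = Counter(), Counter()
for text, upward in messages:
    (up if upward else down).update(bigrams(text))

def log_odds(phrase, smoothing=1.0):
    """Positive => phrase suggests the recipient outranks the sender."""
    n_up = sum(up.values())
    n_down = sum(down.values())
    p_up = (up[phrase] + smoothing) / (n_up + smoothing)
    p_down = (down[phrase] + smoothing) / (n_down + smoothing)
    return log(p_up / p_down)

print(log_odds("thought you"))    # positive: an "upward" marker in this toy data
print(log_odds("let's discuss"))  # negative: a peer/downward marker
```

A full analysis would of course use many more messages and control for sender and thread effects; this only shows the shape of the phrase-to-rank association.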

Goel A.K.,Georgia Institute of Technology
Artificial Intelligence for Engineering Design, Analysis and Manufacturing: AIEDAM | Year: 2013

Research on design and analysis of complex systems has led to many functional representations with several meanings of function. This work on conceptual design uses a family of representations called structure-behavior- function (SBF) models. The SBF family ranges from behavior-function models of abstract design patterns to drawing-shape-SBF models that couple SBF models with visuospatial knowledge of technological systems. Development of SBF modeling is an instance of cognitively oriented artificial intelligence research that seeks to understand human cognition and build intelligent agents for addressing complex tasks such as design. This paper first traces the development of SBF modeling as our perspective on design evolved from that of problem solving to that of memory and learning. Next, the development of SBF modeling as a case study is used to abstract some of the core principles of an artificial intelligence methodology for functional modeling. Finally, some implications of the artificial intelligence methodology for different meanings of function are examined. Copyright © Cambridge University Press 2013.

Jornet J.M.,State University of New York at Buffalo | Akyildiz I.F.,Georgia Institute of Technology
IEEE Journal on Selected Areas in Communications | Year: 2013

Nanonetworks, i.e., networks of nano-sized devices, are the enabling technology of long-awaited applications in the biological, industrial and military fields. For the time being, the size and power constraints of nano-devices limit the applicability of classical wireless communication in nanonetworks. Alternatively, nanomaterials can be used to enable electromagnetic (EM) communication among nano-devices. In this paper, a novel graphene-based nano-antenna, which exploits the behavior of Surface Plasmon Polariton (SPP) waves in semi-finite size Graphene Nanoribbons (GNRs), is proposed, modeled and analyzed. First, the conductivity of GNRs is analytically and numerically studied by starting from the Kubo formalism to capture the impact of the electron lateral confinement in GNRs. Second, the propagation of SPP waves in GNRs is analytically and numerically investigated, and the SPP wave vector and propagation length are computed. Finally, the nano-antenna is modeled as a resonant plasmonic cavity, and its frequency response is determined. The results show that, by exploiting the high mode compression factor of SPP waves in GNRs, graphene-based plasmonic nano-antennas are able to operate at much lower frequencies than their metallic counterparts, e.g., the Terahertz Band for a one-micrometer-long ten-nanometers-wide antenna. This result has the potential to enable EM communication in nanonetworks. © 1983-2012 IEEE.

Knox-Hayes J.,Georgia Institute of Technology
Annals of the Association of American Geographers | Year: 2010

Climate change represents a new era in the development of capitalism, whereby humanity has become such a force of nature so as to destabilize its own environment and ultimately threaten its survival-neo-modernity. This article explores the creation of markets to control greenhouse gas emissions. Carbon markets are an important infrastructure to enable humanity to integrate nature into its sociopolitical and economic organization. The carbon markets are the embodiment of a process designed to reorganize human activities but also to organize and assimilate the natural environment. As with other eras, the key to success in neo-modernity is organizing complex and divergent human activities across space and time. Using an institutional approach, built on case studies and close dialogue with market participants and policymakers in the United States and Europe, this article analyzes the construction of carbon market infrastructure including how the markets organize environmental impacts in space and time. Particular attention is paid to the compressions of the spacetime of carbon commodities through the establishment of platforms, exchanges, and verifiers. The article concludes that markets are coordinating networks-the epitome of neo-modernity infrastructure and the beginning of a process through which the natural environment will become valued only in the context of further capitalist expansion. © 2010 by Association of American Geographers.

Bair S.,Georgia Institute of Technology
Tribology Transactions | Year: 2012

Until very recently, viscometers have not had a significant role in the development of the field of elastohydrodynamic lubrication (EHL). Viscosity has generally been treated as an adjustable parameter that assumed the values at elevated pressure necessary to validate whatever model was proposed. The description of the pressure dependence of viscosity in EHL requires the choice of a model that reproduces the way that viscosity changes with pressure and requires values for the parameters of the chosen model. Viscometers are necessary for both requirements if a quantitative understanding of the mechanisms of film generation and friction is sought. Four types of pressurized viscometers, two based on Poiseuille flow and two on Couette flow, are investigated here. They yield remarkably similar pressure responses. There is therefore no ambiguity in the measurement of the pressure dependence of viscosity in pressurized viscometers. A significant challenge for engineers wishing to apply the results archived in the EHL literature is the assessment of the extent to which the results would be altered if an accurate viscosity had been employed. An important step toward establishing EHL as a quantitative science would be the designation of reference liquids for the pressure-viscosity effect and the use of these liquids in experimental and analytical investigations of EHL. © 2012 Copyright Taylor and Francis Group, LLC.

Joshi Y.,Georgia Institute of Technology
Journal of Heat Transfer | Year: 2012

Thermal systems often involve multiple spatial and temporal scales, where transport information from one scale is relevant at others. Optimized thermal design of such systems and their control require approaches for their rapid simulation. These activities are of increasing significance due to the need for energy efficiency in the operation of these systems. Traditional full-field simulation methodologies are typically unable to resolve these scales in a computationally efficient manner. We summarize recent work on simulations of conjugate transport processes over multiple length scales via reduced order modeling through approaches such as compact finite elements and proper orthogonal decomposition. In order to incorporate the influence of length scales beyond those explicitly considered, lumped models are invoked, with appropriate handshaking between the two frameworks. We illustrate the methodology through selected examples, with a focus on information technology systems. © 2012 American Society of Mechanical Engineers.
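The proper orthogonal decomposition step mentioned above can be sketched via the SVD of a snapshot matrix. The synthetic "temperature field," sizes, and 99% energy threshold below are illustrative assumptions, not the paper's models:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
times = np.linspace(0.0, 1.0, 40)

# Synthetic snapshots: two dominant spatial structures plus small noise.
snapshots = np.column_stack([
    np.sin(np.pi * x) * np.cos(2.0 * t)
    + 0.3 * np.sin(3 * np.pi * x) * np.sin(5.0 * t)
    + 0.01 * rng.standard_normal(x.size)
    for t in times
])

# POD modes are the left singular vectors; singular values rank their energy.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.99)) + 1   # modes capturing 99% of the energy
print("modes retained:", r)

# Reduced-order reconstruction: project onto the first r modes.
basis = U[:, :r]
reconstruction = basis @ (basis.T @ snapshots)
rel_err = np.linalg.norm(snapshots - reconstruction) / np.linalg.norm(snapshots)
print("relative reconstruction error:", rel_err)
```

In a conjugate-transport setting the retained modes would feed a small ODE system for the modal coefficients, which is what makes the reduced model fast enough for design and control loops.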

Harvey S.C.,Georgia Institute of Technology
Biophysical Journal | Year: 2014

The conformational entropic penalty associated with packaging double-stranded DNA into viral capsids remains an issue of contention. So far, models based on a continuum approximation for DNA have either left the question unexamined, or they have assumed that the entropic penalty is negligible, following an early analysis by Riemer and Bloomfield. In contrast, molecular-dynamics (MD) simulations using bead-and-spring models consistently show a large penalty. A recent letter from Ben-Shaul attempts to reconcile the differences. While the letter makes some valid points, the issue of how to include conformational entropy in the continuum models remains unresolved. In this Comment, I show that the free energy decomposition from continuum models could be brought into line with the decomposition from the MD simulations with two adjustments. First, the entropy from Flory-Huggins theory should be replaced by the estimate of the entropic penalty given in Ben-Shaul's letter, which corresponds closely to that from the MD simulations. Second, the DNA-DNA repulsions are well described by the empirical relationship given by the Cal Tech group, but the strength of these should be reduced by about half, using parameters based on the Rau-Parsegian experiments, rather than treating them as "fitting parameters (tuned) to fit the data from (single molecule pulling) experiments." © 2014 Biophysical Society.

Singh J.S.,Georgia Institute of Technology
Science Technology and Human Values | Year: 2015

This article provides empirical evidence of the social context and moral reasoning embedded within a parents’ decision to participate in autism genetics research. Based on in-depth interviews of parents who donated their family’s blood and medical information to an autism genetic database, three narratives of participation are analyzed, including the altruistic parent, the obligated parent, and the diagnostic parent. Although parents in this study were not generally concerned with bioethical principles such as autonomy and the issues of informed consent and/or privacy and confidentiality of genetic information, a critical analysis reveals contextual bioethics embedded within these different narratives. These include the negotiations of responsibility that parents confront in biomedical research, the misguided hope and expectations parents place in genomic science, and the structural barriers of obtaining an autism diagnosis and educational services. Based on these findings, this article demonstrates the limits of a principle-based approach to bioethics and the emergent forms of biological citizenship that takes into account the social situations of people’s lives and the moral reasoning they negotiate when participating in autism genetic research. © The Author(s) 2014.

This article contributes to the debate about the role of the region in the placement and coordination of research centers linking technology-led economic development and science, technology, and innovation policy. Through a comparison of how a "conscious geography" has informed the organization of innovation + development (I + D) research centers in the US and Canada, this analysis focuses on the variation in the models of multi-scalar policy coordination deployed through the I + D research center frameworks in the US and Canada. This article begins with a discussion of the theoretical arguments behind territorial innovation systems. It continues by describing the different models of I + D research centers in the US and Canada and the role of the region in each set of policy frameworks. The third section discusses ways policy outcomes are influenced by the initial consideration of the spatial distribution of production and innovation. The article concludes with the case for a policy model which prioritizes a role for the region as a site of economic and geographic analysis and a partner in the design of a multi-scalar innovation policy. © 2009 Springer Science+Business Media, LLC.

Zhu L.,Georgia Institute of Technology | Zhang W.,University of California at San Francisco | Elnatan D.,University of California at San Francisco | Huang B.,University of California at San Francisco
Nature Methods | Year: 2012

In super-resolution microscopy methods based on single-molecule switching, the rate of accumulating single-molecule activation events often limits the time resolution. Here we developed a sparse-signal recovery technique using compressed sensing to analyze images with highly overlapping fluorescent spots. This method allows an activated fluorophore density an order of magnitude higher than what conventional single-molecule fitting methods can handle. Using this method, we demonstrated imaging microtubule dynamics in living cells with a time resolution of 3 s. © 2012 Nature America, Inc. All rights reserved.
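The compressed-sensing idea, recovering a sparse set of emitters from measurements with heavily overlapping responses, can be sketched on a toy 1-D problem with iterative soft thresholding (ISTA). The sensing matrix, sizes, and penalty below are illustrative, not the paper's imaging model:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 100, 40, 4            # fine grid size, measurements, true "spots"
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.uniform(1.0, 2.0, size=k)
y = A @ x_true                  # overlapping, underdetermined measurements

# ISTA for min_x ||y - A x||^2 / 2 + lam * ||x||_1
lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(n)
for _ in range(2000):
    grad = A.T @ (A @ x - y)
    x = x - step * grad
    x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)  # soft threshold

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print("relative recovery error:", err)
```

The point of the sketch is only that far fewer measurements than unknowns suffice when the signal is sparse, which is what lets the method tolerate an order-of-magnitude higher activated-fluorophore density than per-spot fitting.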

Yuan M.,Georgia Institute of Technology
Journal of Machine Learning Research | Year: 2010

This paper considers the problem of estimating a high dimensional inverse covariance matrix that can be well approximated by "sparse" matrices. Taking advantage of the connection between multivariate linear regression and entries of the inverse covariance matrix, we propose an estimating procedure that can effectively exploit such "sparsity". The proposed method can be computed using linear programming and therefore has the potential to be used in very high dimensional problems. Oracle inequalities are established for the estimation error in terms of several operator norms, showing that the method is adaptive to different types of sparsity of the problem. © 2010 Ming Yuan.
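A linear-programming estimator in this spirit can be sketched column by column (a CLIME-style program, related to but not identical with the paper's procedure; the dimensions and tuning parameter lam below are illustrative):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
p, n = 8, 400
# Sparse tridiagonal precision matrix as ground truth.
omega = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))
sigma = np.linalg.inv(omega)
X = rng.multivariate_normal(np.zeros(p), sigma, size=n)
S = np.cov(X, rowvar=False)

lam = 0.1
omega_hat = np.zeros((p, p))
for j in range(p):
    e = np.zeros(p)
    e[j] = 1.0
    # Split beta = beta_plus - beta_minus, both nonnegative, and solve
    # min ||beta||_1  subject to  ||S beta - e_j||_inf <= lam  as an LP.
    c = np.ones(2 * p)
    A_ub = np.block([[S, -S], [-S, S]])
    b_ub = np.concatenate([lam + e, lam - e])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs")
    omega_hat[:, j] = res.x[:p] - res.x[p:]

# Symmetrize by keeping the smaller-magnitude entry of each pair.
omega_hat = np.where(np.abs(omega_hat) <= np.abs(omega_hat.T),
                     omega_hat, omega_hat.T)
err = np.linalg.norm(omega_hat - omega) / np.linalg.norm(omega)
print("relative estimation error:", err)
```

Each column is an independent small LP, which is what makes this family of estimators attractive for very high-dimensional problems: the p programs can be solved in parallel and never require inverting the sample covariance.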

Best M.L.,Georgia Institute of Technology
Communications of the ACM | Year: 2014

The article discusses how to encourage opportunities for digital innovation and invention to flourish in a variety of social environments. Approximately 750,000 reports were analyzed through the SMTC system during the three-week election period. Social media activity peaked during the April 16, 2011 Presidential election. When violence erupted in the north of the country, Aggie received nearly 50 reports a second. The system has been replicated for elections in Liberia, Ghana, and Kenya, with results that are only now being analyzed robustly, though initial results show great promise. When issues of sustainability arise in computing initiatives in the Global South, they tend to focus on financial self-sustainability tethered to market forces and neoliberal economic theory. However, there are other forms of sustainability that demand our attention: environmental, technological, social and cultural, and political and institutional.

Jones C.W.,Georgia Institute of Technology
Annual Review of Chemical and Biomolecular Engineering | Year: 2011

The growing atmospheric CO2 concentration and its impact on climate have motivated widespread research and development aimed at slowing or stemming anthropogenic carbon emissions. Technologies for carbon capture and sequestration (CCS) employing mass separating agents that extract and purify CO2 from flue gas emanating from large point sources such as fossil fuel-fired electricity-generating power plants are under development. Recent advances in solvents, adsorbents, and membranes for postcombustion CO2 capture are described here. Specifically, room-temperature ionic liquids, supported amine materials, mixed matrix and facilitated transport membranes, and metal-organic framework materials are highlighted. In addition, the concept of extracting CO2 directly from ambient air (air capture) as a means of reducing the global atmospheric CO2 concentration is reviewed. For both conventional CCS from large point sources and air capture, critical research needs are identified and discussed. © Copyright 2011 by Annual Reviews. All rights reserved.

Current AASHTO specifications provide engineers with a temperature gradient across the depth of the cross section to predict the vertical thermal behavior of bridges. This gradient is based on one-dimensional heat flow and does not account for change in the cross section, as found in prestressed concrete girders, nor does it account for thermal effects on the sides of the girder. Furthermore, the current specifications do not provide the transverse temperature gradient that is needed to predict the lateral thermal behavior of the girders, especially during construction, before the placement of the bridge decks. To determine the transverse and vertical temperature gradients in prestressed concrete girders, experimental and analytical studies were conducted on a prestressed BT-63 concrete girder segment. The analytical results were found to be in good agreement with experimental measurements. The analytical model was then used to determine the seasonal temperature gradients in four standard PCI girder sections at selected cities in the United States. On the basis of these findings, vertical and transverse temperature gradients were developed to aid engineers in predicting the thermal behavior of prestressed girders during construction. © 2012 American Society of Civil Engineers.

Kinney M.A.,Georgia Institute of Technology
Integrative biology : quantitative biosciences from nano to macro | Year: 2012

The sensitivity of stem cells to environmental perturbations has prompted many studies which aim to characterize the influence of mechanical factors on stem cell morphogenesis and differentiation. Hydrodynamic cultures, often employed for large scale bioprocessing applications, impart complex fluid shear and transport profiles, and influence cell fate as a result of changes in media mixing conditions. However, previous studies of hydrodynamic cultures have been limited in their ability to distinguish confounding factors that may affect differentiation, including modulation of embryoid body size in response to changes in the hydrodynamic environment. In this study, we demonstrate the ability to control and maintain embryoid body (EB) size using a combination of forced aggregation formation and rotary orbital suspension culture, in order to assess the impact of hydrodynamic cultures on ESC differentiation, independent of EB size. Size-controlled EBs maintained at different rotary orbital speeds exhibited similar morphological features and gene expression profiles, consistent with ESC differentiation. The similar differentiation of ESCs across a range of hydrodynamic conditions suggests that controlling EB formation and resultant size may be important for scalable bioprocessing applications, in order to standardize EB morphogenesis. However, perturbations in the hydrodynamic environment also led to subtle changes in differentiation toward certain lineages, including temporal modulation of gene expression, as well as changes in the relative efficiencies of differentiated phenotypes, thereby highlighting important tissue engineering principles that should be considered for implementation in bioreactor design, as well as for directed ESC differentiation.

Dufek J.,Georgia Institute of Technology | Bachmann O.,University of Washington
Geology | Year: 2010

Compositional gaps are common in volcanic series worldwide. The pervasive generation of compositional gaps influences the mechanical and thermal properties of the crust, and holds clues on how our planet differentiates. We have explored potential mechanisms to generate these gaps using numerical simulations coupling crystallization kinetics and multiphase fluid dynamics of magma reservoirs. We show that gaps are inherent to crystal fractionation for all compositions, as crystal-liquid separation takes place most efficiently within a crystallinity window of ~50-70 vol% crystals. The probability of melt extraction from a crystal residue in a cooling magma chamber is highest in this crystallinity window due to (1) enhanced melt segregation in the absence of chamber-wide convection, (2) buffering by latent heat of crystallization, and (3) diminished chamber-wall thermal gradients. This mechanical control of igneous distillation is likely to have played a dominant role in the formation of the compositionally layered Earth's crust by allowing multiple and overlapping intrusive episodes of relatively discrete or quantized composition that become more silicic upward. © 2010 Geological Society of America.

Kang I.-S.,Seoul National University | Kim H.-M.,Georgia Institute of Technology
Journal of Climate | Year: 2010

The predictability of intraseasonal variation in the tropics is assessed in the present study by using various statistical and dynamical models with rigorous and fair measurements. For a fair comparison, the real-time multivariate Madden-Julian oscillation (MJO) (RMM) index, proposed by Wheeler and Hendon, is used as a predictand for all models. The statistical models include the models based on a multilinear regression, a wavelet analysis, and a singular spectrum analysis (SSA). The prediction limits (correlation skill of 0.5) of statistical models for RMM1 (RMM2) index are at days 16-17 (14-15) for the multiregression model, whereas they are at days 8-10 (9-12) for the wavelet- and SSA-based models. The poor predictability of the wavelet and SSA models is related to the tapering problem for a half-length of the time window before the initial condition. To assess the dynamical predictability, long-term serial prediction experiments with a prediction interval of every 5 days are carried out with Seoul National University (SNU) AGCM and coupled general circulation model (CGCM) for 26 (1980-2005) boreal winters. The prediction limits of RMM1 and RMM2 occur at around 20 days for both AGCM and CGCM. These results demonstrate that the skills of dynamical models used in this study are better than those of the three statistical predictions. The dynamical and statistical predictions are combined using a multimodel ensemble method. The combination provides a superior skill to any of the statistical and dynamical predictions, with a prediction limit of 22-24 days. The dependencies of prediction skill on the initial phase and amplitude of the MJO are also investigated. © 2010 American Meteorological Society.

Balcan M.-F.,Georgia Institute of Technology | Blum A.,Carnegie Mellon University
Journal of the ACM | Year: 2010

Supervised learning, that is, learning from labeled examples, is an area of Machine Learning that has reached substantial maturity. It has generated general-purpose and practically successful algorithms, and the foundations are quite well understood and captured by theoretical frameworks such as the PAC-learning model and the Statistical Learning theory framework. However, for many contemporary practical problems such as classifying web pages or detecting spam, there is often additional information available in the form of unlabeled data, which is often much cheaper and more plentiful than labeled data. As a consequence, there has recently been substantial interest in semi-supervised learning (using unlabeled data together with labeled data), since any useful information that reduces the amount of labeled data needed can be a significant benefit. Several techniques have been developed for doing this, along with experimental results on a variety of different learning problems. Unfortunately, the standard learning frameworks for reasoning about supervised learning do not capture the key aspects and the assumptions underlying these semi-supervised learning methods. In this article, we describe an augmented version of the PAC model designed for semi-supervised learning, which can be used to reason about many of the different approaches taken over the past decade in the Machine Learning community. This model provides a unified framework for analyzing when and why unlabeled data can help, in which one can analyze both sample-complexity and algorithmic issues. The model can be viewed as an extension of the standard PAC model where, in addition to a concept class C, one also proposes a compatibility notion: a type of compatibility that one believes the target concept should have with the underlying distribution of data. 
Unlabeled data is then potentially helpful in this setting because it allows one to estimate compatibility over the space of hypotheses, and to reduce the size of the search space from the whole set of hypotheses C down to those that, according to one's assumptions, are a priori reasonable with respect to the distribution. As we show, many of the assumptions underlying existing semi-supervised learning algorithms can be formulated in this framework. After proposing the model, we then analyze sample-complexity issues in this setting: that is, how much of each type of data one should expect to need in order to learn well, and what the key quantities are that these numbers depend on. We also consider the algorithmic question of how to efficiently optimize for natural classes and compatibility notions, and provide several algorithmic results including an improved bound for Co-Training with linear separators when the distribution satisfies independence given the label. © 2010 ACM.
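The compatibility idea can be made concrete with a toy example: unlabeled data prunes hypotheses that violate a margin-style assumption, and a handful of labels then selects among the survivors. The threshold class, margin, and data below are all illustrative, not the article's formal setting:

```python
import numpy as np

rng = np.random.default_rng(3)

# Two well-separated unlabeled clusters; the true boundary lies in the gap.
unlabeled = np.concatenate([rng.normal(-2.0, 0.5, 300),
                            rng.normal(+2.0, 0.5, 300)])
labeled_x = np.array([-2.1, -1.8, 1.9, 2.2])
labeled_y = np.array([0, 0, 1, 1])

thresholds = np.linspace(-4.0, 4.0, 161)   # hypothesis class: classify x > t

# Compatibility: almost no unlabeled mass within margin 0.5 of the cut.
margin = 0.5
density = np.array([(np.abs(unlabeled - t) < margin).mean() for t in thresholds])
compatible = thresholds[density < 0.01]
print("hypotheses kept:", compatible.size, "of", thresholds.size)

# The few labeled points then select among the compatible hypotheses.
def error(t):
    return np.mean((labeled_x > t).astype(int) != labeled_y)

best = compatible[np.argmin([error(t) for t in compatible])]
print("chosen threshold:", best)
```

Pruning shrinks the effective hypothesis space before any labels are spent, which is exactly how unlabeled data reduces sample complexity in the augmented PAC model.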

Bredas J.-L.,Georgia Institute of Technology | Bredas J.-L.,King Abdulaziz University
Materials Horizons | Year: 2014

The energy gap between the highest occupied and lowest unoccupied electronic levels is a critical parameter determining the electronic, optical, redox, and transport (electrical) properties of a material. However, the energy gap comes in many flavors, such as the band gap, HOMO-LUMO gap, fundamental gap, optical gap, or transport gap, with each of these terms carrying a specific meaning. Failure to appreciate the distinctions among these different energy gaps has caused much confusion in the literature, which is manifested by the frequent use of improper terminology, in particular, in the case of organic molecular or macromolecular materials. Thus, it is our goal here to clarify the meaning of the various energy gaps that can be measured experimentally or evaluated computationally, with a focus on π-conjugated materials of interest for organic electronics and photonics applications. © The Royal Society of Chemistry.
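For reference, the distinctions drawn above can be summarized by the standard relations among these gaps (conventional symbols added here for clarity; they are not notation from the abstract itself):

```latex
% IP = ionization potential, EA = electron affinity,
% E_B = exciton (electron-hole) binding energy
E_{\text{fund}} = IP - EA,
\qquad
E_{\text{opt}} = E_{\text{fund}} - E_B
```

In organic molecular materials the exciton binding energy E_B is often several tenths of an electronvolt, so the optical gap can sit well below the fundamental (transport) gap; band gaps and single-determinant HOMO-LUMO gaps are yet other quantities and should not be used interchangeably with either.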

Dong J.,Scripps Research Institute | Krasnova L.,Scripps Research Institute | Finn M.G.,Georgia Institute of Technology | Barry Sharpless K.,Scripps Research Institute
Angewandte Chemie - International Edition | Year: 2014

Aryl sulfonyl chlorides (e.g. Ts-Cl) are beloved of organic chemists as the most commonly used S(VI) electrophiles, and the parent sulfuryl chloride, O2S(VI)Cl2, has also been relied on to create sulfates and sulfamides. However, the desired halide substitution event is often defeated by destruction of the sulfur electrophile because the S(VI)-Cl bond is exceedingly sensitive to reductive collapse yielding S(IV) species and Cl-. Fortunately, the use of sulfur(VI) fluorides (e.g., R-SO2-F and SO2F2) leaves only the substitution pathway open. As with most of click chemistry, many essential features of sulfur(VI) fluoride reactivity were discovered long ago in Germany. Surprisingly, this extraordinary work faded from view rather abruptly in the mid-20th century. Here we seek to revive it, along with John Hyatt's unnoticed 1979 full paper exposition on CH2=CH-SO2-F, the most perfect Michael acceptor ever found. To this history we add several new observations, including that the otherwise very stable gas SO2F2 has excellent reactivity under the right circumstances. We also show that proton or silicon centers can activate the exchange of S-F bonds for S-O bonds to make functional products, and that the sulfate connector is surprisingly stable toward hydrolysis. Applications of this controllable ligation chemistry to small molecules, polymers, and biomolecules are discussed. Old chemistry in new glory: Sulfonyl fluoride exchange (SuFEx) forges rugged inorganic links between carbon centers. Like most click reactions, it is an old process now improved to allow the underappreciated sulfate connection to be made for a variety of purposes. The various exchange events uniquely enabled by the use of fluoride are highlighted here in orange. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

Baden J.M.,Georgia Institute of Technology
IEEE Transactions on Information Theory | Year: 2011

Autocorrelation sidelobes are a form of self-noise that reduce the effectiveness of phase coding in radar and communication systems. The merit factor is a well-known measure related to the autocorrelation sidelobe energy of a sequence. In this paper an equation is derived that allows the change in the merit factor of a binary sequence to be computed for all single-element changes with O(N log N) operations, where N is the sequence length. The approach is then extended to multiple-element changes, allowing the merit factor change to be calculated with an additional O(Ns^3) operations, where Ns is the number of changed elements, irrespective of sequence length. The multiple-element calculations can be used to update the single-element calculations so that in iterative use only O(N) operations are required per element change to keep the single-element calculations current. A steep descent algorithm (a variation on the steepest descent method) employing these two techniques was developed and applied to quarter-rotated, periodically extended Legendre sequences, producing optimized sequences with an apparent asymptotic merit factor of approximately 6.3758, modestly higher than the best known prior result of approximately 6.3421. Modified Jacobi sequences improve after steep descent to an approximate asymptotic merit factor of 6.4382. Three-prime and four-prime Jacobi sequences converge very slowly making a determination difficult but appear to have a higher post-steep-descent asymptotic merit factor than Legendre or modified Jacobi sequences. © 2006 IEEE.
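For concreteness, the merit factor and a naive single-flip descent can be sketched as follows. This direct computation is only illustrative; it does not implement the paper's O(N log N) incremental update:

```python
import numpy as np

# Merit factor of a +/-1 sequence: F = N^2 / (2E), where E is the one-sided
# aperiodic autocorrelation sidelobe energy.
def merit_factor(seq):
    seq = np.asarray(seq, dtype=float)
    n = seq.size
    full = np.correlate(seq, seq, mode="full")          # length 2n-1, peak at center
    sidelobe_energy = np.sum(full**2) - full[n - 1]**2  # both sides, i.e. 2E
    return n * n / sidelobe_energy

def flip_descent(seq):
    """Greedy single-element descent: keep any sign flip that raises F."""
    seq = np.array(seq, dtype=float)
    best = merit_factor(seq)
    improved = True
    while improved:
        improved = False
        for i in range(seq.size):
            seq[i] = -seq[i]
            f = merit_factor(seq)
            if f > best:
                best, improved = f, True
            else:
                seq[i] = -seq[i]                        # revert the flip
    return seq, best

barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])
print(merit_factor(barker13))          # 169/12 ≈ 14.083, the highest known value

rng = np.random.default_rng(0)
start = rng.choice([-1.0, 1.0], size=64)
_, f = flip_descent(start)
print(merit_factor(start), "->", f)    # descent never decreases F
```

Recomputing the full autocorrelation per flip costs far more than the paper's update formula; the scheme described above reduces the per-flip cost to O(N) in iterative use, which is what makes long-sequence searches feasible.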

Degruyter W.,Georgia Institute of Technology | Bonadonna C.,University of Geneva
Earth and Planetary Science Letters | Year: 2013

The collapse of volcanic plumes has significant implications for eruption dynamics and associated hazards. We show how eruptive columns can collapse and generate pyroclastic density currents as a result of not only the source conditions, but also of the atmospheric environment. The ratio of the potential energy and the kinetic energy at the source quantified by the Richardson number, and the entrainment efficiency quantified by the radial entrainment coefficient have already been identified as key parameters in controlling the transition between a buoyant and collapsing plume. Here we quantify how this transition is affected by wind using scaling arguments in combination with a one-dimensional plume model. Air entrainment due to wind causes a volcanic plume to lower its density at a faster rate and therefore to favor buoyancy. We identify the conditions when wind entrainment becomes dominant over radial entrainment and quantify the effect of wind on column collapse. These findings are framed into a generalized regime diagram that also describes previous regime diagrams for the specific case of choked flows. Many observations confirm how bent-over plumes typically do not generate significant collapses. A quantitative comparison with the 1996 Ruapehu and the 2010 Eyjafjallajökull eruptions shows that the likelihood of collapse is reduced even at moderate wind speeds relative to the exit velocity at the vent. © 2013 Elsevier B.V.

Immergluck D.,Georgia Institute of Technology
Housing Policy Debate | Year: 2013

The primary federal policy responses to the foreclosure crisis, thus far, include programs to reduce foreclosures and efforts to mitigate the impacts of foreclosures on communities. This paper reviews policy responses between 2007 and 2012. While there is less information at this point on the outcomes of mitigation policies, the overall federal response is thus far lacking. The programs pale in comparison with the challenges they are intended to address and suffer from other program design and implementation problems. Foreclosure prevention efforts, in particular, are faulted for being too reliant on marginal incentive payments, for failing to include a key policy, bankruptcy modification, that would have encouraged lenders to modify loans more aggressively, and for not sanctioning servicers more aggressively for poor performance and/or noncompliance. The overall federal response is also characterized as moving too slowly in some cases and being too captive to the policy preferences of the financial services industry. © 2013 Copyright Virginia Polytechnic Institute and State University.

Tan S.,Georgia Institute of Technology
Physical Review Letters | Year: 2011

Alhassid conjectured that the total energy of a harmonically trapped two-component Fermi gas with a short range interaction is a linear functional of the occupation probabilities of single-particle energy eigenstates. We confirm his conjecture and derive the functional explicitly. We show that the functional applies to all smooth (namely, differentiable) potentials having a minimum, not just harmonic traps. We also calculate the occupation probabilities of high energy states. Published by the American Physical Society.

Ny J.,University of Pennsylvania | Feron E.,Georgia Institute of Technology | Frazzoli E.,Massachusetts Institute of Technology
IEEE Transactions on Automatic Control | Year: 2012

We study the traveling salesman problem for a Dubins vehicle. We prove that this problem is NP-hard, and provide lower bounds on the approximation ratio achievable by some recently proposed heuristics. We also describe new algorithms for this problem based on heading discretization, and evaluate their performance numerically. © 2006 IEEE.

Traynor P.,Georgia Institute of Technology
IEEE Transactions on Mobile Computing | Year: 2012

Cellular text messaging services are increasingly being relied upon to disseminate critical information during emergencies. Accordingly, a wide range of organizations, including colleges and universities, now partner with third-party providers that promise to improve physical security by rapidly delivering such messages. Unfortunately, these products do not work as advertised due to limitations of cellular infrastructure and therefore provide a false sense of security to their users. In this paper, we perform the first extensive investigation and characterization of the limitations of an Emergency Alert System (EAS) using text messages as a security incident response mechanism. We show that emergency alert systems built on text messaging not only cannot meet the 10-minute delivery requirement mandated by the WARN Act but can also cause other voice and SMS traffic to be blocked at rates upward of 80 percent. We then show that our results are representative of reality by comparing them to a number of documented but not previously understood failures. Finally, we analyze a targeted messaging mechanism as a means of efficiently using currently deployed infrastructure and third-party EAS. In so doing, we demonstrate that this increasingly deployed security infrastructure does not achieve its stated requirements for large populations. © 2012 IEEE.
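The blocking behavior described here is, at its core, a capacity problem: control channels form a fixed pool of servers, and an alert flood raises the offered load. A hedged illustration using the standard Erlang-B formula (the paper's own traffic models are more detailed; the channel count and load values below are invented for illustration):

```python
def erlang_b(offered_load, channels):
    """Erlang-B blocking probability, computed with the standard
    numerically stable recursion B_k = E*B_{k-1} / (k + E*B_{k-1})."""
    b = 1.0
    for k in range(1, channels + 1):
        b = offered_load * b / (k + offered_load * b)
    return b

# Illustrative only: a sector with 8 control channels; as an alert
# campaign multiplies the offered load, blocking climbs steeply.
for load in (4, 8, 16, 32):  # offered load in Erlangs (hypothetical)
    print(load, round(erlang_b(load, 8), 3))
```

The qualitative point matches the abstract: once load far exceeds channel capacity, the blocking probability approaches one, so bulk alert traffic can crowd out ordinary voice and SMS traffic.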

Vazirani V.V.,Georgia Institute of Technology
Journal of the ACM | Year: 2012

We introduce the notion of a rational convex program (RCP) and we classify the known RCPs into two classes: quadratic and logarithmic. The importance of rationality is that it opens up the possibility of computing an optimal solution to the program via an algorithm that is either combinatorial or uses an LP-oracle. Next, we define a new Nash bargaining game, called ADNB, which is derived from the linear case of the Arrow-Debreu market model. We show that the convex program for ADNB is a logarithmic RCP, but unlike other known members of this class, it is nontotal. Our main result is a combinatorial, polynomial-time algorithm for ADNB. It turns out that the reason for infeasibility of logarithmic RCPs is quite different from that for LPs and quadratic RCPs. We believe that our ideas for surmounting the new difficulties will be useful for dealing with other nontotal RCPs as well. We give an application of our combinatorial algorithm for ADNB to an important "fair" throughput allocation problem on a wireless channel. Finally, we present a number of interesting questions that the new notion of RCP raises. © 2012 ACM.

Wang Z.L.,Georgia Institute of Technology | Wang Z.L.,Japan National Institute of Materials Science
Advanced Materials | Year: 2012

Sensor networks are a key technological and economic driver for global industries in the near future, with applications in health care, environmental monitoring, infrastructure monitoring, national security, and more. Developing technologies for self-powered nanosensors is therefore vitally important. This paper gives a brief summary of recent progress in the area, describing nanogenerators that are capable of providing sustainable, self-sufficient micro/nanopower sources for future sensor networks. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

LaRowe D.E.,Georgia Institute of Technology | Van Cappellen P.,University Utrecht
Geochimica et Cosmochimica Acta | Year: 2011

The oxidative degradation of organic matter is a key process in the biogeochemical functioning of the earth system. Quantitative models of organic matter degradation are therefore essential for understanding the chemical state and evolution of the Earth's near-surface environment, and to forecast the biogeochemical consequences of ongoing regional and global change. The complex nature of biologically produced organic matter represents a major obstacle to the development of such models, however. Here, we compare the energetics of the oxidative degradation of a large number of naturally occurring organic compounds. By relating the Gibbs energies of half reactions describing the complete mineralization of the compounds to their average nominal carbon oxidation state, it becomes possible to estimate the energetic potential of the compounds based on major element (C, H, N, O, P, S) ratios. The new energetic description of organic matter can be combined with bioenergetic theory to rationalize observed patterns in the decomposition of natural organic matter. For example, the persistence of cell membrane derived compounds and complex organics in anoxic settings is consistent with their limited catabolic potential under these environmental conditions. The proposed approach opens the way to include the thermodynamic properties of organic compounds in kinetic models of organic matter degradation. © 2011 Elsevier Ltd.
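The central quantity in this element-ratio approach, the average nominal oxidation state of carbon (NOSC), can be computed directly from a compound's formula. A minimal sketch for a molecule C_a H_h N_n O_o P_p S_s with net charge z, using the standard bookkeeping formula (function name mine; the Gibbs energy correlation itself is the paper's contribution and is not reproduced):

```python
def nosc(a, h=0, n=0, o=0, p=0, s=0, z=0):
    """Average nominal oxidation state of carbon for a compound
    C_a H_h N_n O_o P_p S_s carrying net charge z."""
    return 4.0 - (-z + 4 * a + h - 3 * n - 2 * o + 5 * p - 2 * s) / a

print(nosc(a=1, h=4))        # methane CH4    -> -4.0 (most reduced)
print(nosc(a=6, h=12, o=6))  # glucose C6H12O6 -> 0.0
print(nosc(a=1, o=2))        # CO2            -> 4.0 (fully oxidized)
```

More-reduced compounds (lower NOSC) release more energy per carbon on complete oxidation, which is what lets major-element ratios stand in for detailed molecular structure in degradation models.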

De Heer W.A.,Georgia Institute of Technology
MRS Bulletin | Year: 2011

Graphene has been known for a long time, but only recently has its potential for electronics been recognized. Its history is recalled starting from early graphene studies. A critical insight in June 2001 brought to light that graphene could be used for electronics. This was followed by a series of proposals and measurements culminating in a comprehensive patent for graphene-based electronics filed in 2003. The Georgia Institute of Technology (GIT) graphene electronics research project group selected epitaxial graphene as the most viable route for graphene-based electronics, as described in their 2004 paper on transport and structural measurements of epitaxial graphene. Subsequently, the field developed rapidly, and multilayer graphene was discovered at GIT. This material consists of many graphene layers, but it is not graphite; in contrast to graphite, the layers are rotated with respect to each other, causing electronic decoupling so that each layer has the electronic structure of graphene. Currently, the field has developed to the point where epitaxial graphene-based electronics may be realized in the not too distant future. © 2011 Materials Research Society.

Starner T.,Georgia Institute of Technology
IEEE Pervasive Computing | Year: 2013

The article discusses how wearable computers such as Google Glass and head-up displays (HUDs) have revolutionized the way people interact. People can run a Google search on Glass in a wide variety of contexts, both personal and professional. Google's search displays excerpts of the most relevant content for each of its hits, which means that the result of the search itself often provides people with the information they need without even clicking on a link. When the time between intention and action becomes small enough, the interface becomes an extension of the self. An example is a man riding a bicycle. If a car cuts him off, the rider does not think, 'If I squeeze this lever on the handlebar, it will pivot about its axle, pulling a cable. Using a sheave, that cable will direct the force under the bicycle and pull on a pair of calipers that will then squeeze against the sides of the back tire.'

Lorne F.T.,Georgia Institute of Technology
Environmental Economics and Policy Studies | Year: 2014

This article restates and enhances the methodology described by Yu et al. in this journal (3:291–309, 2000) by projecting its implications in light of recent interdisciplinary research, suggesting several new directions of inquiry. The difference between a static neoclassical approach and a dynamic multidimensional approach, stated in terms of macro-entrepreneurship setting the stage for macro-sustainability, is further highlighted. Existing case studies can be reinterpreted by this new approach. © 2009, Springer Japan.

Wang Z.L.,Georgia Institute of Technology
Advanced Materials | Year: 2012

The fundamental principles of piezotronics and piezo-phototronics were introduced by Wang in 2007 and 2010, respectively. Due to the polarization of ions in a crystal that has non-central symmetry, such as the wurtzite-structured ZnO, GaN, and InN, a piezoelectric potential (piezopotential) is created in the crystal by applying a stress. Owing to the simultaneous possession of piezoelectric and semiconductor properties, the piezopotential created in the crystal has a strong effect on carrier transport at the interface/junction. Piezotronics concerns devices fabricated using the piezopotential as a "gate" voltage to tune/control charge carrier transport at a contact or junction. The piezo-phototronic effect uses the piezopotential to control carrier generation, transport, separation, and/or recombination to improve the performance of optoelectronic devices, such as photon detectors, solar cells, and LEDs. This manuscript reviews recent progress in these two new fields. A perspective is given about their potential applications in sensors, human-silicon technology interfacing, MEMS, nanorobotics, and energy sciences. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

Yu J.,Industrial Light and Magic | Turk G.,Georgia Institute of Technology
ACM Transactions on Graphics | Year: 2013

In this article we present a novel surface reconstruction method for particle-based fluid simulators such as Smoothed Particle Hydrodynamics. In particle-based simulations, fluid surfaces are usually defined as a level set of an implicit function. We formulate the implicit function as a sum of anisotropic smoothing kernels, and the direction of anisotropy at a particle is determined by performing Principal Component Analysis (PCA) over the neighboring particles. In addition, we perform a smoothing step that repositions the centers of these smoothing kernels. Since these anisotropic smoothing kernels capture the local particle distributions more accurately, our method has advantages over existing methods in representing smooth surfaces, thin streams, and sharp features of fluids. Our method is fast, easy to implement, and our results demonstrate a significant improvement in the quality of reconstructed surfaces as compared to existing methods. © 2013 ACM.
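The anisotropy-estimation step can be sketched as a weighted PCA over each particle's neighborhood. The weight kernel, search radius, and function name below are illustrative assumptions, not the paper's exact parameters:

```python
import numpy as np

def neighborhood_anisotropy(points, i, radius):
    """Estimate the anisotropy of particle i's neighborhood via PCA:
    eigendecomposition of the weighted covariance of nearby particles."""
    d = np.linalg.norm(points - points[i], axis=1)
    mask = d < radius
    w = 1.0 - (d[mask] / radius) ** 3          # illustrative smooth weight
    nbrs = points[mask]
    mean = (w[:, None] * nbrs).sum(0) / w.sum()
    diff = nbrs - mean
    cov = (w[:, None, None] * diff[:, :, None] * diff[:, None, :]).sum(0) / w.sum()
    evals, evecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
    return evals, evecs

# Particles sampled along a thin stream: the dominant principal axis
# should far outweigh the others, calling for a stretched kernel.
rng = np.random.default_rng(0)
pts = np.column_stack([np.linspace(0, 1, 200),
                       0.01 * rng.standard_normal(200),
                       0.01 * rng.standard_normal(200)])
evals, _ = neighborhood_anisotropy(pts, 100, radius=0.2)
print(evals[-1] / evals[0] > 10)  # -> True: strongly anisotropic neighborhood
```

In the method described above, the eigenvectors and (scaled) eigenvalues would then define the stretch of each particle's smoothing kernel, so thin features are not blurred isotropically.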

Bair S.,Georgia Institute of Technology
Fuel | Year: 2014

The relative volumes and the viscosities of four diesel fuels have been measured experimentally at pressures up to 350 MPa and temperatures up to 160 °C. The experimental liquids were an extra-low-viscosity reference fuel, an ethanol-based blend, neat biodiesel, and a 20% biodiesel blend. The Tait and Murnaghan equations of state represented the pressure-volume-temperature response equally well. The improved Yasutomi model for supercooled liquids, which accurately represents the temperature and pressure dependence of the viscosity of lubricating oils, did not fit the data well except for neat biodiesel. Surprisingly, the Doolittle free-volume equation was accurate only for the low-viscosity reference fuel. The reason for the correlation difficulties may be illuminated by the behavior of the thermodynamic scaling of the viscosities. The Stickel analysis of the normalized Ashurst-Hoover parameter indicates that, for all liquids except the reference fuel, the measured viscosities lie across the transition in the response of viscosity to temperature and pressure. Consequently, only the comprehensive normalized Ashurst-Hoover scaling model successfully fits all data. © 2014 Elsevier Ltd. All rights reserved.
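Both equations of state named above have simple closed forms for relative volume as a function of pressure. A sketch with invented parameters (B, C, K0, and K' below are placeholders, not the fitted values from the paper):

```python
import math

def tait_volume(p, v0, b, c):
    """One common form of the Tait equation of state:
    V/V0 = 1 - C * ln(1 + p/B)."""
    return v0 * (1.0 - c * math.log(1.0 + p / b))

def murnaghan_volume(p, v0, k0, kp):
    """Murnaghan equation of state:
    V/V0 = (1 + K'*p/K0) ** (-1/K')."""
    return v0 * (1.0 + kp * p / k0) ** (-1.0 / kp)

# Hypothetical parameters, pressures in MPa, for illustration only.
for p in (0, 100, 200, 350):
    print(p,
          round(tait_volume(p, 1.0, 200.0, 0.09), 4),
          round(murnaghan_volume(p, 1.0, 1500.0, 10.0), 4))
```

With suitable fitted constants, both curves compress smoothly with pressure over the 0-350 MPa range, which is why the abstract reports them performing equally well.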

Franzoni C.,Polytechnic of Milan | Sauermann H.,Georgia Institute of Technology
Research Policy | Year: 2014

A growing amount of scientific research is done in an open collaborative fashion, in projects sometimes referred to as "crowd science", "citizen science", or "networked science". This paper seeks to gain a more systematic understanding of crowd science and to provide scholars with a conceptual framework and an agenda for future research. First, we briefly present three case examples that span different fields of science and illustrate the heterogeneity concerning what crowd science projects do and how they are organized. Second, we identify two fundamental elements that characterize crowd science projects - open participation and open sharing of intermediate inputs - and distinguish crowd science from other knowledge production regimes such as innovation contests or traditional "Mertonian" science. Third, we explore potential knowledge-related and motivational benefits that crowd science offers over alternative organizational modes, and potential challenges it is likely to face. Drawing on prior research on the organization of problem solving, we also consider for what kinds of tasks particular benefits or challenges are likely to be most pronounced. We conclude by outlining an agenda for future research and by discussing implications for funding agencies and policy makers. © 2013 Elsevier B.V. All rights reserved.

Engelhart A.E.,Georgia Institute of Technology
Cold Spring Harbor perspectives in biology | Year: 2010

Since the structure of DNA was elucidated more than 50 years ago, Watson-Crick base pairing has been widely speculated to be the likely mode of both information storage and transfer in the earliest genetic polymers. The discovery of catalytic RNA molecules subsequently provided support for the hypothesis that RNA was perhaps even the first polymer of life. However, the de novo synthesis of RNA using only plausible prebiotic chemistry has proven difficult, to say the least. Experimental investigations, made possible by the application of synthetic and physical organic chemistry, have now provided evidence that the nucleobases (A, G, C, and T/U), the trifunctional moiety ([deoxy]ribose), and the linkage chemistry (phosphate esters) of contemporary nucleic acids may be optimally suited for their present roles, a situation that suggests refinement by evolution. Here, we consider studies of variations in these three distinct components of nucleic acids with regard to the question: Is RNA, as is generally acknowledged of DNA, the product of evolution? If so, what chemical and structural features might have been more likely and advantageous for a proto-RNA?

Zangwill A.,Georgia Institute of Technology | Vvedensky D.D.,Imperial College London
Nano Letters | Year: 2011

Graphene, a hexagonal sheet of sp2-bonded carbon atoms, has extraordinary properties which hold immense promise for nanoelectronic applications. Unfortunately, the popular preparation methods of micromechanical cleavage and chemical exfoliation of graphite do not easily scale up for application purposes. Epitaxial graphene provides an attractive alternative, though there are many challenges, not least of which is the absence of any understanding of the complex atomistic assembly kinetics of graphene layers. Here, we present a simple rate theory of epitaxial graphene growth on close-packed metal surfaces. On the basis of recent low-energy electron-diffraction microscopy experiments, our theory supposes that graphene islands grow predominantly by the attachment of five-atom clusters. With optimized kinetic parameters, our theory produces a quantitative account of the measured time-dependent carbon adatom density. The temperature dependence of this density at the onset of nucleation leads us to predict that the smallest stable precursor to graphene growth is an immobile island composed of six five-atom clusters. This conclusion is supported by a recent study based on temperature-programmed growth of epitaxial graphene, which provides direct evidence of nanoclusters whose coarsening leads to the formation of graphene layers. Our findings should motivate additional high-resolution imaging experiments and more detailed simulations which will yield important input to developing strategies for the large-scale production of epitaxial graphene. © 2011 American Chemical Society.

Gaucher E.A.,Georgia Institute of Technology
Cold Spring Harbor perspectives in biology | Year: 2010

The Darwinian concept of biological evolution assumes that life on Earth shares a common ancestor. The diversification of this common ancestor through speciation events and vertical transmission of genetic material implies that the classification of life can be illustrated in a tree-like manner, commonly referred to as the Tree of Life. This article describes features of the Tree of Life, such as how the tree has been both pruned and become bushier throughout the past century as our knowledge of biology has expanded. We present current views that the classification of life may be best illustrated as a ring or even a coral with tree-like characteristics. This article also discusses how the organization of the Tree of Life offers clues about ancient life on Earth. In particular, we focus on the environmental conditions and temperature history of Precambrian life and show how chemical, biological, and geological data can converge to better understand this history. "You know, a tree is a tree. How many more do you need to look at?"--Ronald Reagan (Governor of California), quoted in the Sacramento Bee, opposing expansion of Redwood National Park, March 3, 1966.

Mahmoud M.A.,Georgia Institute of Technology
Journal of Physical Chemistry C | Year: 2014

Gold nanorattles (AuNRTs), hollow gold nanospheres containing small solid gold nanospheres (AuNSs), were prepared in different sizes. The presence of an AuNS inside the hollow gold nanosphere, giving the nanorattle shape, was found to improve their sensing efficiency. The sensitivity factor of the nanorattles is in the range of 450 nm/RIU, while the individual hollow nanosphere's efficiency is ∼300 nm/RIU. This improvement is due to the strong plasmon field in the cavity and around the inner gold nanosphere, as shown by discrete dipole approximation (DDA) calculations. Interestingly, this nanoparticle produces a strong enhancement of the interaction with light at 850 nm due to the excitation of both the inner sphere and the outer nanoshell, despite the fact that NIR radiation (850 nm) has very low energy to excite the inner gold nanosphere when present alone. Comparing the experimental and simulated scattering spectra for a single colloidal nanorattle suggests that the interior gold nanosphere moves freely inside the gold nanoshell. When the rattle is dried, the nanosphere adheres to the inner surface, as shown by the experimental and theoretical results. Unlike nanospheres and nanoshells, the nanorattles have three plasmon peaks in addition to a shoulder. This allows the AuNRTs to be useful in applications across the visible and near-IR spectral regions. © 2014 American Chemical Society.

Bunz U.H.F.,Georgia Institute of Technology | Bunz U.H.F.,University of Heidelberg
Angewandte Chemie - International Edition | Year: 2010

Be planar! The planar oligofurans (see picture, lower; C gray, H white, O red) are, despite a lack of solubilizing alkyl groups, quite soluble. They are also highly fluorescent and surprisingly stable, and might give the oligothiophenes (upper; S yellow) a run for their money when seeking applications in organic electronics. © 2010 Wiley-VCH Verlag GmbH & Co. KGaA.

Zhu W.,Georgia Institute of Technology
Nucleic acids research | Year: 2010

We describe an algorithm for gene identification in DNA sequences derived from shotgun sequencing of microbial communities. Accurate ab initio gene prediction in a short nucleotide sequence of anonymous origin is hampered by uncertainty in model parameters. While several machine learning approaches could be proposed to bypass this difficulty, one effective method is to estimate parameters from dependencies, formed in evolution, between frequencies of oligonucleotides in protein-coding regions and genome nucleotide composition. The original version of the method was proposed in 1999 and has been used since for (i) reconstructing the codon frequency vector needed for gene finding in viral genomes and (ii) initializing parameters of self-training gene finding algorithms. With the advent of new prokaryotic genomes en masse, it became possible to enhance the original approach by using direct polynomial and logistic approximations of oligonucleotide frequencies, as well as by separating models for bacteria and archaea. These advances have increased the accuracy of model reconstruction and, subsequently, gene prediction. We describe the refined method and assess its accuracy on known prokaryotic genomes split into short sequences. Also, we show that as a result of application of the new method, several thousand new genes could be added to existing annotations of several human and mouse gut metagenomes.

Ammar M.,Georgia Institute of Technology
Proceedings - IEEE INFOCOM | Year: 2010

In opportunistic networks, end-to-end paths between two communicating nodes are rarely available. In such situations, the nodes might still copy and forward messages to nodes that are more likely to meet the destination. The question is which forwarding algorithm offers the best trade-off between cost (number of message replicas) and rate of successful message delivery. We address this challenge by developing the PeopleRank approach, in which nodes are ranked using tunable, weighted social information. Similar to the PageRank idea, PeopleRank gives higher weight to nodes if they are socially connected to other important nodes of the network. We develop centralized and distributed variants for the computation of PeopleRank. We present an evaluation using real mobility traces of nodes and their social interactions to show that PeopleRank manages to deliver messages with near-optimal success rate (i.e., close to Epidemic Routing) while reducing the number of message retransmissions by 50% compared to Epidemic Routing. © 2010 IEEE.
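The centralized ranking step can be sketched in the same fixed-point style as PageRank. The damping value and toy graph below are illustrative assumptions; the paper's distributed variants and its tunable social weighting are not reproduced:

```python
def people_rank(social, d=0.85, iters=100):
    """PageRank-style ranking over an undirected social graph:
    PR(i) = (1 - d) + d * sum_{j in F(i)} PR(j) / |F(j)|,
    where F(i) is the set of i's social contacts."""
    pr = {u: 1.0 for u in social}
    for _ in range(iters):
        pr = {u: (1 - d) + d * sum(pr[v] / len(social[v]) for v in social[u])
              for u in social}
    return pr

# Toy star graph: node 'a' is socially connected to everyone else.
social = {'a': ['b', 'c', 'd'], 'b': ['a'], 'c': ['a'], 'd': ['a']}
pr = people_rank(social)
print(pr['a'] > pr['b'])  # -> True: well-connected nodes rank higher
```

In the forwarding protocol, a node would then hand a message replica to an encountered neighbor only if that neighbor's rank (or the destination match) is higher, which is how the replica count stays low.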

Wang Y.,Georgia Institute of Technology
Mechanical Systems and Signal Processing | Year: 2013

The Fokker-Planck equation is widely used to describe the time evolution of stochastic systems in drift-diffusion processes. Yet, it does not differentiate between two types of uncertainty: aleatory uncertainty, which is inherent randomness, and epistemic uncertainty, which is due to lack of perfect knowledge. In this paper, a generalized differential Chapman-Kolmogorov equation based on a new generalized interval probability theory is derived, where epistemic uncertainty is modeled by the generalized interval while the aleatory one is modeled by the probability measure. A generalized Fokker-Planck equation is proposed to describe drift-diffusion processes under both uncertainties. A path integral approach is developed to numerically solve the generalized Fokker-Planck equation. The resulting interval-valued probability density functions rigorously bound the real-valued ones computed from the classical path integral method. The method is demonstrated by numerical examples. © 2012 Elsevier Ltd.
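For the classical (real-valued) case that the interval method bounds, the path integral approach amounts to repeatedly applying a short-time Gaussian propagator on a grid. A minimal sketch for an Ornstein-Uhlenbeck drift, chosen because its stationary variance is known in closed form (grid sizes and the test problem are mine; the paper's interval-valued generalization is not reproduced):

```python
import numpy as np

# Classical path-integral solution of the Fokker-Planck equation
# dp/dt = d/dx (x p) + D d^2p/dx^2  (OU process: drift a(x) = -x),
# whose stationary density is Gaussian with variance D.
D, dt = 0.5, 0.01
x = np.arange(-5, 5, 0.05)
dx = x[1] - x[0]
drift = -x
# Short-time transition kernel K[i, j] ~ P(x_i at t+dt | x_j at t):
# Gaussian with mean x_j + a(x_j)*dt and variance 2*D*dt.
K = np.exp(-(x[:, None] - (x[None, :] + drift[None, :] * dt)) ** 2
           / (4 * D * dt)) * dx / np.sqrt(4 * np.pi * D * dt)
p = np.exp(-x ** 2 / 0.02)   # narrow initial density near the origin
p /= p.sum() * dx
for _ in range(1000):        # evolve to t = 10, well past relaxation
    p = K @ p
    p /= p.sum() * dx        # renormalize against grid truncation
var = (p * x ** 2).sum() * dx - ((p * x).sum() * dx) ** 2
print(abs(var - D) < 0.02)   # -> True: matches the stationary variance
```

The interval-valued version described in the abstract would carry a lower and an upper kernel through the same iteration, producing bounds that bracket this real-valued density.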

Mayeur J.R.,Los Alamos National Laboratory | McDowell D.L.,Georgia Institute of Technology
International Journal of Plasticity | Year: 2014

We compare and contrast the governing equations and numerical predictions of two higher-order theories of extended single crystal plasticity, specifically, Gurtin-type and micropolar models. The models are presented within a continuum thermodynamic setting, which facilitates identification of equivalent terms and the roles they play in the respective models. Finite element simulations of constrained thin films are used to elucidate the various scale-dependent strengthening mechanisms and their effect on material response. Our analysis shows that the two theories contain many analogous features and qualitatively predict the same trends in mechanical behavior, although they have substantially different points of departure. This is significant since the micropolar theory affords a simpler numerical implementation that is less computationally expensive and potentially more stable. © 2014 Elsevier Ltd. All rights reserved.

King G.M.,Louisiana State University | Kostka J.E.,Georgia Institute of Technology | Hazen T.C.,University of Tennessee at Knoxville | Sobecky P.A.,University of Alabama
Annual Review of Marine Science | Year: 2015

The Deepwater Horizon oil spill in the northern Gulf of Mexico represents the largest marine accidental oil spill in history. It is distinguished from past spills in that it occurred at the greatest depth (1,500 m), the amount of hydrocarbon gas (mostly methane) lost was equivalent to the mass of crude oil released, and dispersants were used for the first time in the deep sea in an attempt to remediate the spill. The spill is also unique in that it has been characterized with an unprecedented level of resolution using next-generation sequencing technologies, especially for the ubiquitous hydrocarbon-degrading microbial communities that appeared largely to consume the gases and to degrade a significant fraction of the petroleum. Results have shown an unexpectedly rapid response of deep-sea Gammaproteobacteria to oil and gas and documented a distinct succession correlated with the control of the oil flow and well shut-in. Similar successional events, also involving Gammaproteobacteria, have been observed in nearshore systems as well. Copyright © 2015 by Annual Reviews. All rights reserved.

Tezcan T.,University of Illinois at Urbana - Champaign | Dai J.G.,Georgia Institute of Technology
Operations Research | Year: 2010

We consider a class of parallel server systems known as N-systems. In an N-system, there are two customer classes that are served by servers in two pools. Servers in one of the pools are cross-trained and can serve customers from both classes, whereas all of the servers in the other pool can serve only one of the customer classes. A customer reneges from his queue if his waiting time in the queue exceeds his patience. Our objective is to minimize the total cost, which includes a linear holding cost and a reneging cost. We prove that, when the service speed is pool dependent, but not class dependent, a cμ-type greedy policy is asymptotically optimal in many-server heavy traffic. © 2010 INFORMS.
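The greedy rule itself is simple to state: when a cross-trained server becomes free, it serves the nonempty queue with the largest holding-cost-rate product. A toy sketch of the classic cμ index (the asymptotic-optimality analysis, the reneging dynamics, and the pool-dependent rates are the paper's contribution and are not modeled here):

```python
def c_mu_pick(queues, cost, mu):
    """Greedy c*mu rule: among nonempty queues, pick the class with
    the largest holding-cost x service-rate product."""
    candidates = [k for k, q in queues.items() if q > 0]
    if not candidates:
        return None
    return max(candidates, key=lambda k: cost[k] * mu[k])

queues = {'class1': 5, 'class2': 3}
cost = {'class1': 1.0, 'class2': 2.0}   # holding cost per customer per unit time
mu = {'class1': 1.5, 'class2': 1.0}     # service rates
print(c_mu_pick(queues, cost, mu))       # -> 'class2' (2.0*1.0 > 1.0*1.5)
```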

Glezer A.,Georgia Institute of Technology
Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences | Year: 2011

Aerodynamic flow control effected by interactions of surface-mounted synthetic (zero net mass flux) jet actuators with a local cross flow is reviewed. These jets are formed by the advection and interactions of trains of discrete vortical structures that are formed entirely from the fluid of the embedding flow system, and thus transfer momentum to the cross flow without net mass injection across the flow boundary. Traditional approaches to active flow control have focused, to a large extent, on control of separation on stalled aerofoils by means of quasi-steady actuation within two distinct regimes that are characterized by the actuation time scales. When the characteristic actuation period is commensurate with the time scale of the inherent instabilities of the base flow, the jets can effect significant quasi-steady global modifications on spatial scales that are one to two orders of magnitude larger than the scale of the jets. However, when the actuation frequency is sufficiently high to be decoupled from global instabilities of the base flow, changes in the aerodynamic forces are attained by leveraging the generation and regulation of 'trapped' vorticity concentrations near the surface to alter its aerodynamic shape. Some examples of the utility of this approach for aerodynamic flow control of separated flows on bluff bodies and fully attached flows on lifting surfaces are also discussed. © 2011 The Royal Society.

Qureshi M.K.,Georgia Institute of Technology
Proceedings of the Annual International Symposium on Microarchitecture, MICRO | Year: 2011

Phase Change Memory (PCM) suffers from the problem of limited write endurance. This problem is exacerbated by the high variability in lifetime across PCM cells, which results in weaker cells failing much earlier than nominal cells. Ensuring long lifetimes under high variability requires that the design can correct a large number of errors for any given memory line. Unfortunately, supporting high levels of error correction for all lines incurs significant overhead, often exceeding 10% of overall memory capacity. Such an overhead may be too high for wide-scale adoption of PCM, given that the memory market is typically very cost-sensitive. This paper reduces the storage required for error correction by making the key observation that only a few lines require high levels of hard-error correction. Therefore, prior approaches that uniformly allocate a large number of error correction entries for all lines are inefficient, as most (> 90%) of these entries remain unused. We propose Pay-As-You-Go (PAYG), an efficient hard-error-resilient architecture that allocates error correction entries in proportion to the number of hard faults in the line. We describe a storage-efficient and low-latency organization for PAYG. Compared to the previously proposed ECP-6 technique, PAYG requires 3X lower storage overhead and yet provides 13% more lifetime, while incurring a latency overhead of < 0.4% for the first five years of system lifetime. We also show that PAYG is more effective than the recent FREE-p proposal. © 2011 ACM.
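The storage argument can be illustrated with a toy model: under a skewed per-line fault distribution, provisioning every line for the worst case leaves most entries unused, while a pool sized by the total fault count does not. The fault distribution, line count, and cap below are invented for illustration and are not the paper's evaluation setup:

```python
import random

random.seed(1)
NUM_LINES = 10_000
# Skewed lifetime variability: most lines accumulate few hard faults,
# while a small tail accumulates many (illustrative distribution).
faults = [min(int(random.expovariate(1.0)), 12) for _ in range(NUM_LINES)]

worst = max(faults)
uniform_entries = NUM_LINES * worst   # every line provisioned for the worst case
payg_entries = sum(faults)            # entries allocated in proportion to faults

print(worst, uniform_entries, payg_entries)
print(payg_entries < uniform_entries / 3)  # -> True
```

This is only the counting argument; the paper's contribution is an organization that serves such a shared pool with low lookup latency.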

Fahrni C.J.,Georgia Institute of Technology
Current Opinion in Chemical Biology | Year: 2013

Fluorescent probes are powerful and cost-effective tools for the detection of metal ions in biological systems. Compared with probes for non-redox-active metal ions, the design of fluorescent probes for biological copper is particularly challenging. Within the reducing cellular environment, copper is predominantly present in its monovalent oxidation state; therefore, the design of fluorescent probes for biological copper must take into account the rich redox and coordination chemistry of Cu(I). Recent progress in understanding the underlying solution chemistry and photophysical pathways has led to the development of new probes that offer high fluorescence contrast and excellent selectivity towards monovalent copper. © 2013 Elsevier Ltd.

Vazirani V.V.,Georgia Institute of Technology
Mathematics of Operations Research | Year: 2010

The notion of a "market" has undergone a paradigm shift with the Internet. Totally new and highly successful markets have been defined and launched by Internet companies, which already form an important part of today's economy and are projected to grow considerably in the future. Another major change is the availability of massive computational power for running these markets in a centralized or distributed manner. In view of these new realities, the study of market equilibria, an important, though essentially nonalgorithmic, theory within mathematical economics, needs to be revived and rejuvenated via an inherently algorithmic approach. Such a theory should not only address traditional market models, but also define new models for some of the new markets. We present a new, natural class of utility functions that allow buyers to explicitly provide information on their relative preferences as a function of the amount of money spent on each good. These utility functions offer considerable expressivity, especially in Google's Adwords market. In addition, they lend themselves to efficient computation, while retaining some of the nice properties of traditional models. The latter include weak gross substitutability and the uniqueness and continuity of equilibrium prices and utilities. © 2010 INFORMS.

Buchan A.,University of Tennessee at Knoxville | LeCleir G.R.,University of Tennessee at Knoxville | Gulvik C.A.,Georgia Institute of Technology | Gonzalez J.M.,University of La Laguna
Nature reviews. Microbiology | Year: 2014

Marine phytoplankton blooms are annual spring events that sustain active and diverse bloom-associated bacterial populations. Blooms vary considerably in terms of eukaryotic species composition and environmental conditions, but a limited number of heterotrophic bacterial lineages - primarily members of the Flavobacteriia, Alphaproteobacteria and Gammaproteobacteria - dominate these communities. In this Review, we discuss the central role that these bacteria have in transforming phytoplankton-derived organic matter and thus in biogeochemical nutrient cycling. On the basis of selected field and laboratory-based studies of flavobacteria and roseobacters, distinct metabolic strategies are emerging for these archetypal phytoplankton-associated taxa, which provide insights into the underlying mechanisms that dictate their behaviours during blooms.

Neu R.W.,Georgia Institute of Technology
Tribology International | Year: 2011

This paper reviews the current ASTM, ISO, and other standards that pertain in part to fretting fatigue and fretting wear testing. A historical perspective gives some background on why there still are relatively few standards for fretting fatigue and fretting wear testing. Current standards on the books tend to be application specific. In the past few years, there have been some new activities in standardization. These developments along with future needs in standardization are discussed. © 2010 Elsevier Ltd. All rights reserved.

Merrill A.H.,Georgia Institute of Technology
Chemical Reviews | Year: 2011

There are several nomenclature systems for glycosphingolipids, and many compounds are still referred to by their historically assigned names. Some of the breakthroughs in understanding the functions of sphingolipids, especially with respect to cell signaling, have come from the capacity to measure more than one bioactive subspecies, so that the correct signaling pathways can be sorted out, especially when the metabolites have opposite effects, such as ceramide versus S1P. These relationships are being explored for cancer detection and for targeting using Shiga toxin. The story for iGb3 is less clear because, although it stimulates NKT cells and has been hypothesized to be a natural modulator of them, recent studies have found that the human iGb3 synthase gene contains several mutations that render its product nonfunctional. A number of physiological factors have also been found to modulate DES.

Enhancements of the Raman signal by the newly prepared gold-palladium and gold-platinum double-shell hollow nanoparticles were examined and compared with those using gold nanocages (AuNCs). The surface-enhanced Raman spectra (SERS) of thiophenol adsorbed on the surface of AuNCs assembled into a Langmuir-Blodgett monolayer were 10-fold stronger than those of AuNCs with an inner Pt or Pd shell. The chemical and electromagnetic enhancement mechanisms for these hollow nanoparticles were further examined by comparing the Raman enhancement of nitrothiophenol and nitrotoluene. Nitrothiophenol binds to the surface of the nanoparticles by covalent interaction, so Raman enhancement by both mechanisms is possible, while nitrotoluene does not form any chemical bond with the surface of the nanoparticles, and hence no chemical enhancement is expected. Based on discrete dipole approximation (DDA) calculations and the experimental SERS results, AuNCs introduced a high electromagnetic enhancement, while the nanocages with an inner Pt or Pd shell have a strong chemical enhancement. The optical measurements of the localized surface plasmon resonance (LSPR) of the nanocages with an outer Au shell and an inner Pt or Pd shell were found, experimentally and theoretically, to be broad compared with AuNCs. A possible reason is the decrease of the coherence time of the oscillating free electrons in the Au and the fast damping of the plasmon energy. This agrees well with the fact that a Pt or Pd inner nanoshell decreases the electromagnetic field of the outer Au nanoshell while increasing the SERS chemical enhancement. © 2013 American Chemical Society.

Nerem R.M.,Georgia Institute of Technology
Journal of the Royal Society Interface | Year: 2010

Over the last quarter of a century there has been an emergence of a tissue engineering industry, one that has now evolved into the broader area of regenerative medicine. There have been 'ups and downs' in this industry; however, it now appears to be on a track that may be described as 'back to the future'. The latest data indicate that for 2007 the private sector activity in the world for this industry is approaching $2.5 billion, with 167 companies/business units and more than 6000 full-time-equivalent employees. Although small compared with the medical device and also the pharmaceutical industries, these numbers are not insignificant. Thus, there is the indication that this industry, and the related technology, may still achieve its potential and address the needs of millions of patients worldwide, in particular those with needs that currently are unmet. © 2010 The Royal Society.

Wang Z.L.,Georgia Institute of Technology
Nano Today | Year: 2010

Due to the polarization of ions in a crystal that has non-central symmetry, a piezoelectric potential (piezopotential) is created in the crystal by applying a stress. For materials such as ZnO, GaN, and InN in the wurtzite structure family, the effect of piezopotential on the transport behavior of charge carriers is significant due to their multiple functionalities of piezoelectricity, semiconductor behavior, and photon excitation. By utilizing the advantages offered by these properties, a few new fields have been created. Electronics fabricated by using inner-crystal piezopotential as a "gate" voltage to tune/control the charge transport behavior is named piezotronics, with applications in strain/force/pressure triggered/controlled electronic devices, sensors, and logic units. The piezo-phototronic effect is a result of three-way coupling among piezoelectricity, photonic excitation, and semiconductor transport, which allows tuning and controlling of electro-optical processes by strain-induced piezopotential. The objective of this review article is to introduce the fundamentals of piezotronics and piezo-phototronics and to give updated progress on their applications in energy science and sensors. © 2010 Elsevier Ltd. All rights reserved.

Hoffman-Kim D.,Brown University | Mitchel J.A.,Brown University | Bellamkonda R.V.,Georgia Institute of Technology
Annual Review of Biomedical Engineering | Year: 2010

In the body, cells encounter a complex milieu of signals, including topographical cues, in the form of the physical features of their surrounding environment. Imposed topography can affect cells on surfaces by promoting adhesion, spreading, alignment, morphological changes, and changes in gene expression. Neural response to topography is complex, and it depends on the dimensions and shapes of physical features. Looking toward repair of nerve injuries, strategies are being explored to engineer guidance conduits with precise surface topographies. How neurons and other cell types sense and interpret topography remains to be fully elucidated. Studies reviewed here include those of topography on cellular organization and function as well as potential cellular mechanisms of response. © 2010 by Annual Reviews. All rights reserved.

Guzdial M.,Georgia Institute of Technology
ICER 2013 - Proceedings of the 2013 ACM Conference on International Computing Education Research | Year: 2013

Research in computing education has been criticized as "Marco Polo": the researchers tried something and reported what happened. Our developing field needs more hypothesis-driven and theory-driven research. We will get there by making clear our goals and hypotheses, testing those goals and hypotheses explicitly, and critically reconsidering our results. My colleagues and I designed and evaluated a media-centric introductory computing approach ("Media Computation") over the last ten years. We started from a "Marco Polo" style and an explicit set of hypotheses. We have worked to test those hypotheses and to understand the outcomes. Our iterative effort has led us to explore deeper theory around motivation and learning. This paper serves as an example of a ten-year research program that resulted in more hypotheses, a more elaborated theory, and a better understanding of the potential impacts of a computer science curriculum change. Copyright © 2013 ACM.

Antolovich S.D.,Georgia Institute of Technology | Antolovich S.D.,Washington State University | Armstrong R.W.,University of Maryland University College
Progress in Materials Science | Year: 2014

This article focuses on the mechanisms and consequences of plastic strain localizations exhibited in tensile stress-strain behaviors, fracture and fatigue. A broad overview is first presented, including important practical considerations and historical background; then dislocation mechanics based details are developed in subsequent sections. Material characterizations are portrayed beginning from the macroscopic and extending down to the critical nanoscale. Controlling influences of temperature, strain rate, grain size and deformation mode on strain localization are evaluated. Relations are established between otherwise apparently disparate variations in phenomena, materials, applied conditions, and size scales. Strengths and weaknesses of various model descriptions of material behaviors are discussed in light of experimental evidence and suggestions are put forward for further research into promising model approaches and for areas where new approaches appear to be needed. A paradigm is suggested for the development of improvements in understanding based on the needed evolution of ever-increasingly precise experimental results, accurate theoretical model descriptions and incorporation of this information into the rapidly progressing development of numerical/computer modelling of real material behaviors. © 2013 Elsevier Ltd. All rights reserved.

Asif M.S.,Rice University | Romberg J.,Georgia Institute of Technology
IEEE Transactions on Signal Processing | Year: 2014

Most of the existing sparse-recovery methods assume a static system: the signal is a finite-length vector for which a fixed set of measurements and a sparse representation are available, and an l1 problem is solved for the reconstruction. However, the same representation and reconstruction framework is not readily applicable in a streaming system: the signal changes over time, and it is measured and reconstructed sequentially over small intervals. This is particularly desired when dividing signals into disjoint blocks and processing each block separately is infeasible or inefficient. In this paper, we discuss two streaming systems and a new homotopy algorithm for quickly solving the associated l1 problems: 1) recovery of smooth, time-varying signals for which, instead of using block transforms, we use lapped orthogonal transforms for sparse representation and 2) recovery of sparse, time-varying signals that follow a linear dynamic model. For both systems, we iteratively process measurements over a sliding interval and solve a weighted l1-norm minimization problem for estimating sparse coefficients. Since we estimate overlapping portions of the signal while adding and removing measurements, instead of solving a new l1 program as the system changes, we use available signal estimates as a starting point in a homotopy formulation and update the solution in a few simple steps. We demonstrate with numerical experiments that our proposed streaming recovery framework provides better reconstruction compared to the methods that represent and reconstruct signals as independent, disjoint blocks, and that our proposed homotopy algorithm updates the solution faster than the current state-of-the-art solvers. © 1991-2012 IEEE.
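The warm-starting idea — reusing the previous window's estimate rather than solving each l1 program from scratch — can be sketched with a simple proximal-gradient (ISTA) solver standing in for the paper's homotopy algorithm. All problem sizes, signals, and parameters below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam, x0, iters=500):
    """Minimize 0.5*||Ax - y||^2 + lam*||x||_1 by proximal gradient (ISTA)."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the smooth part
    x = x0.copy()
    for _ in range(iters):
        x = soft(x - A.T @ (A @ x - y) / L, lam / L)
    return x

rng = np.random.default_rng(0)
n, m = 100, 40
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[3, 17, 60]] = [2.0, -1.5, 1.0]

# Window 1: cold start from zero.
x1 = ista(A, A @ x_true, lam=0.01, x0=np.zeros(n))

# Window 2: the signal drifts slightly; warm-starting from the previous
# estimate needs far fewer iterations than solving from scratch.
x_true2 = x_true.copy()
x_true2[60] = 1.2
x2 = ista(A, A @ x_true2, lam=0.01, x0=x1, iters=50)
print(np.round(x2[[3, 17, 60]], 2))
```

The homotopy algorithm in the paper exploits the same locality far more aggressively, updating the exact solution path in a few steps as measurements enter and leave the sliding window.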

Bauchau O.A.,Georgia Institute of Technology
Journal of the Franklin Institute | Year: 2010

Finite element based formulations for flexible multibody systems are becoming increasingly popular and as the complexity of the configurations to be treated increases, so does the computational cost. It seems natural to investigate the applicability of parallel processing to this type of problems; domain decomposition techniques have been used extensively for this purpose. In this approach, the computational domain is divided into non-overlapping sub-domains, and the continuity of the displacement field across sub-domain boundaries is enforced via the Lagrange multiplier technique. In the finite element literature, this approach is presented as a mathematical algorithm that enables parallel processing. In this paper, the divided system is viewed as a flexible multibody system, and the sub-domains are connected by kinematic constraints. Consequently, all the techniques applicable to the enforcement of constraints in multibody systems become applicable to the present problem. In particular, it is shown that a combination of the localized Lagrange multiplier technique with the augmented Lagrange formulation leads to interesting solution strategies. © 2009 The Franklin Institute.
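The constraint-based coupling can be demonstrated on the smallest possible example: a 1D bar split into two sub-domains whose interface continuity is enforced with a single Lagrange multiplier. This is a minimal sketch with unit spring stiffnesses and invented loads, not the paper's localized/augmented Lagrangian formulation:

```python
import numpy as np

# A 1D bar of four unit springs, fixed at the left end, unit load at the tip,
# split into two sub-domains with a duplicated interface node.
KA = np.array([[2., -1.],
               [-1., 1.]])                 # sub-domain A dofs: u1, u2 (u0 fixed)
KB = np.array([[1., -1., 0.],
               [-1., 2., -1.],
               [0., -1., 1.]])             # sub-domain B dofs: u2', u3, u4
fA = np.zeros(2)
fB = np.array([0., 0., 1.])                # unit tip load

# Continuity constraint u2 - u2' = 0, enforced by one Lagrange multiplier.
C = np.array([[0., 1., -1., 0., 0.]])

K = np.block([[KA, np.zeros((2, 3))],
              [np.zeros((3, 2)), KB]])
KKT = np.block([[K, C.T],
                [C, np.zeros((1, 1))]])
sol = np.linalg.solve(KKT, np.concatenate([fA, fB, [0.]]))

u, lam = sol[:5], sol[5]
print(np.round(u, 6), round(float(lam), 6))  # matches the monolithic solution
```

The multiplier recovered alongside the displacements is the interface traction, which is exactly why, as the abstract notes, the divided system can be viewed as a multibody system joined by kinematic constraints.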

Sarvepalli P.,Georgia Institute of Technology
Physical Review A - Atomic, Molecular, and Optical Physics | Year: 2012

In a recent work, Markham and Sanders proposed a framework to study quantum secret-sharing (QSS) schemes using graph states. This framework unified three classes of QSS protocols, namely, sharing classical secrets over private and public channels, and sharing quantum secrets. However, previous work on graph-state secret sharing mostly focused on threshold schemes. In this paper, we focus on general access structures. We show how to realize a large class of arbitrary access structures using the graph-state formalism. We show an equivalence between [[n,1]] binary quantum codes and graph-state secret-sharing schemes sharing one bit. We also establish a similar (but restricted) equivalence between a class of [[n,1]] Calderbank-Shor-Steane codes and graph-state QSS schemes sharing one qubit. With these results we are able to construct a large class of graph-state quantum secret-sharing schemes with arbitrary access structures. © 2012 American Physical Society.

Koltchinskii V.,Georgia Institute of Technology
Journal of Machine Learning Research | Year: 2010

Sequential algorithms of active learning based on the estimation of the level sets of the empirical risk are discussed in the paper. Localized Rademacher complexities are used in the algorithms to estimate the sample sizes needed to achieve the required accuracy of learning in an adaptive way. Probabilistic bounds on the number of active examples have been proved and several applications to binary classification problems are considered. © 2010 Vladimir Koltchinskii.

Zhang Y.,Georgia Institute of Technology
PLoS genetics | Year: 2012

Pluripotent embryonic stem cells (ESCs) are known to possess a relatively open chromatin structure; yet, despite efforts to characterize the chromatin signatures of ESCs, the role of chromatin compaction in stem cell fate and function remains elusive. Linker histone H1 is important for higher-order chromatin folding and is essential for mammalian embryogenesis. To investigate the role of H1 and chromatin compaction in stem cell pluripotency and differentiation, we examine the differentiation of embryonic stem cells that are depleted of multiple H1 subtypes. H1c/H1d/H1e triple null ESCs are more resistant to spontaneous differentiation in adherent monolayer culture upon removal of leukemia inhibitory factor. Similarly, the majority of the triple-H1 null embryoid bodies (EBs) lack morphological structures representing the three germ layers and retain gene expression signatures characteristic of undifferentiated ESCs. Furthermore, upon neural differentiation of EBs, triple-H1 null cell cultures are deficient in neurite outgrowth and lack efficient activation of neural markers. Finally, we discover that triple-H1 null embryos and EBs fail to fully repress the expression of the pluripotency genes in comparison with wild-type controls and that H1 depletion impairs DNA methylation and changes of histone marks at promoter regions necessary for efficiently silencing pluripotency gene Oct4 during stem cell differentiation and embryogenesis. In summary, we demonstrate that H1 plays a critical role in pluripotent stem cell differentiation, and our results suggest that H1 and chromatin compaction may mediate pluripotent stem cell differentiation through epigenetic repression of the pluripotency genes.

Nejati H.,University of Michigan | Beirami A.,Georgia Institute of Technology
Optics Letters | Year: 2012

We propose a closed-form formulation for the impedance of metal-insulator-metal (MIM) plasmonic transmission lines by solving Maxwell's equations. We provide approximations for thin and thick insulator layers sandwiched between metallic layers. In the case of a very thin dielectric layer, the surface waves on both interfaces are strongly coupled, resulting in an almost linear dependence of the impedance of the plasmonic transmission line on the thickness of the insulator layer. On the other hand, for a very thick insulator layer, the impedance does not vary with the insulator layer thickness due to the weak coupling or decoupling of the surface waves on each metal-insulator interface. We demonstrate the effectiveness of our proposed formulation in two test scenarios, namely, almost zero reflection in a T-junction and reflection from a line discontinuity in the design of Bragg reflectors, where we compare our formulation against previously published results. © 2012 Optical Society of America.

Wang Z.L.,Georgia Institute of Technology | Wang Z.L.,CAS Beijing Institute of Nanoenergy and Nanosystems
ACS Nano | Year: 2013

Triboelectrification is an effect familiar to almost everyone, probably since ancient Greek times, but it is usually regarded as negative and is avoided in many technologies. We have recently invented a triboelectric nanogenerator (TENG) that converts mechanical energy into electricity through a conjunction of triboelectrification and electrostatic induction. In this power-generation unit, in the inner circuit, a potential is created by the triboelectric effect due to charge transfer between two thin organic/inorganic films that exhibit opposite tribo-polarity; in the outer circuit, electrons are driven to flow between two electrodes attached to the back sides of the films in order to balance the potential. Since the most useful materials for the TENG are organic, it is also called an organic nanogenerator, the first to use organic materials for harvesting mechanical energy. In this paper, we review the fundamentals of the TENG in its three basic operation modes: vertical contact-separation mode, in-plane sliding mode, and single-electrode mode. Since the first report of the TENG in January 2012, its output power density has been improved by five orders of magnitude within 12 months. The area power density reaches 313 W/m2, the volume density reaches 490 kW/m3, and a conversion efficiency of ∼60% has been demonstrated. The TENG can be applied to harvest all kinds of mechanical energy that is available but wasted in our daily life, such as human motion, walking, vibration, mechanical triggering, rotating tires, wind, flowing water, and more. Alternatively, the TENG can also be used as a self-powered sensor for actively detecting the static and dynamic processes arising from mechanical agitation, using the voltage and current output signals of the TENG, respectively, with potential applications in touch-pad and smart-skin technologies.
To enhance the performance of the TENG, besides the vast choice of materials in the triboelectric series, from polymers to metals and fabrics, the morphologies of their surfaces can be modified by physical techniques to create pyramid-, square-, or hemisphere-based micro- or nanopatterns, which are effective for enhancing the contact area and possibly the triboelectrification. The surfaces of the materials can be functionalized chemically using various molecules, nanotubes, nanowires, or nanoparticles in order to enhance the triboelectric effect. The contact materials can be composites, such as nanoparticles embedded in a polymer matrix, which may change not only the surface electrification but also the permittivity of the materials so that they can be effective for electrostatic induction. Therefore, there are numerous ways to enhance the performance of the TENG from the materials point of view. This gives an excellent opportunity for chemists and materials scientists to carry out extensive studies both in basic science and in practical applications. We anticipate that further enhancement of the output power density will be achieved in the next few years. The TENG is suitable not only for self-powered portable electronics but also as a new energy technology with the potential to contribute to the world's energy supply in the near future. © 2013 American Chemical Society.

Yu J.,Georgia Institute of Technology | Zhou X.,Nanyang Technological University
IEEE Communications Magazine | Year: 2010

We review and summarize several 100G-per-channel high-capacity transmission systems enabled by advanced technologies such as multilevel modulation formats, new low-loss and large-effective-area fiber, hybrid EDFA/Raman amplification, and digital coherent detection. We show that high-speed QPSK, 8PSK, 8QAM, and 16QAM can all be generated with commercially available optical modulators driven only by binary electrical signals, through novel synthesis methods, and that all of these modulation formats can be detected using digital coherent detection. We also show our latest research results on 400 Gb/s and 1 Tb/s per channel using orthogonal DWDM transmission technologies. © 2010 IEEE.
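As a rough illustration of how a multilevel format can be synthesized from binary drive signals, a square 16QAM constellation can be expressed as the weighted sum of two Gray-mapped QPSK signals. This is a sketch of the general principle only; the optical synthesis methods in the paper are more involved, and the bit mapping below is an assumption:

```python
import numpy as np

def qpsk(bits):
    """Gray-mapped QPSK: each pair of bits selects one of four unit-energy phases."""
    b = np.asarray(bits).reshape(-1, 2)
    return ((1 - 2 * b[:, 0]) + 1j * (1 - 2 * b[:, 1])) / np.sqrt(2)

def qam16(bits):
    """Square 16QAM as a full-amplitude plus half-amplitude QPSK superposition --
    one way to build a multilevel format from purely binary drive signals."""
    b = np.asarray(bits).reshape(-1, 4)
    return (2 * qpsk(b[:, :2].ravel()) + qpsk(b[:, 2:].ravel())) / np.sqrt(5)

# Enumerate all 16 four-bit words: a proper 16QAM constellation results.
bits = np.array([[(i >> k) & 1 for k in (3, 2, 1, 0)] for i in range(16)]).ravel()
syms = qam16(bits)
print(len(set(np.round(syms, 6))))  # 16 distinct, unit-average-energy points
```

The superposition view mirrors the hardware trick: two binary-driven modulator stages at different amplitudes, rather than one multilevel electrical drive.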

Lachance J.,University of Pennsylvania | Lachance J.,Georgia Institute of Technology | Tishkoff S.A.,University of Pennsylvania
American Journal of Human Genetics | Year: 2014

Gene conversion results in the nonreciprocal transfer of genetic information between two recombining sequences, and there is evidence that this process is biased toward G and C alleles. However, the strength of GC-biased gene conversion (gBGC) in human populations and its effects on hereditary disease have yet to be assessed on a genomic scale. Using high-coverage whole-genome sequences of African hunter-gatherers, agricultural populations, and primate outgroups, we quantified the effects of GC-biased gene conversion on population genomic data sets. We find that genetic distances (FST and population branch statistics) are modified by gBGC. In addition, the site frequency spectrum is left-shifted when ancestral alleles are favored by gBGC and right-shifted when derived alleles are favored by gBGC. Allele frequency shifts due to gBGC mimic the effects of natural selection. As expected, these effects are strongest in high-recombination regions of the human genome. By comparing the relative rates of fixation of unbiased and biased sites, the strength of gene conversion was estimated to be on the order of Nb ≈ 0.05 to 0.09. We also find that derived alleles favored by gBGC are much more likely to be homozygous than derived alleles at unbiased SNPs (42.2% to 62.8%). This results in a "curse of the converted," whereby gBGC causes substantial increases in hereditary disease risks. Taken together, our findings reveal that GC-biased gene conversion has important population genetic and public health implications. © 2014 by The American Society of Human Genetics. All rights reserved.
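The analogy between gBGC and selection can be made concrete with Kimura's diffusion result: treating conversion bias b like additive selection, a new biased mutation fixes at a rate of roughly 4Nb/(1 − e^(−4Nb)) relative to neutral. A quick check at the estimated strengths (a sketch only; conventions for the scaling factor between b and the effective selection coefficient vary):

```python
import math

def fixation_rate_ratio(Nb):
    """Fixation rate of a new biased mutation relative to neutral, treating
    conversion bias b as additive selection s = b (Kimura diffusion result)."""
    B = 4 * Nb                     # scaled bias, analogous to 4Ns
    return B / (1 - math.exp(-B))

for Nb in (0.05, 0.09):
    print(Nb, round(fixation_rate_ratio(Nb), 3), round(fixation_rate_ratio(-Nb), 3))
```

At Nb ≈ 0.05 to 0.09, favored alleles fix roughly 10-19% faster than neutral ones and disfavored alleles correspondingly slower, which is the kind of fixation-rate asymmetry the estimate is based on.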

Peikert C.,Georgia Institute of Technology | Waters B.,University of Texas at Austin
SIAM Journal on Computing | Year: 2011

We propose a general cryptographic primitive called lossy trapdoor functions (lossy TDFs), and we use it to develop new approaches for constructing several important cryptographic tools, including (injective) trapdoor functions, collision-resistant hash functions, oblivious transfer, and chosen ciphertext-secure cryptosystems (in the standard model). All of these constructions are simple, efficient, and black-box. We realize lossy TDFs based on a variety of cryptographic assumptions, including the hardness of the decisional Diffie-Hellman (DDH) problem and the hardness of the "learning with errors" problem (which is implied by the worst-case hardness of various lattice problems). Taken together, our results resolve some long-standing open problems in cryptography. They give the first injective TDFs based on problems not directly related to integer factorization and provide the first chosen ciphertext-secure cryptosystem based solely on worst-case complexity assumptions. © 2011 Society for Industrial and Applied Mathematics.

Vazirani V.V.,Georgia Institute of Technology | Yannakakis M.,Columbia University
Journal of the ACM | Year: 2011

We consider Fisher and Arrow-Debreu markets under additively separable, piecewise-linear, concave utility functions and obtain the following results. For both market models, if an equilibrium exists, there is one that is rational and can be written using polynomially many bits. There is no simple necessary and sufficient condition for the existence of an equilibrium: The problem of checking for existence of an equilibrium is NP-complete for both market models; the same holds for existence of an ε-approximate equilibrium, for ε = O(n^-5). Under standard (mild) sufficient conditions, the problem of finding an exact equilibrium is in PPAD for both market models. Finally, building on the techniques of Chen et al. [2009a] we prove that under these sufficient conditions, finding an equilibrium for Fisher markets is PPAD-hard. © 2011 ACM.

Duffy M.A.,Georgia Institute of Technology
Freshwater Biology | Year: 2010

Although populations harbour considerable diversity, most ecological studies still assume they are homogeneous. However, mounting evidence suggests that intraspecific diversity is not only common, but also important for interactions with community members. Here, intraspecific variation in haemoglobin content in Daphnia dentifera is shown to be a marker of hypolimnion use. Hypolimnion use differed substantially within and among D. dentifera populations. Daphnia dentifera with haemoglobin resided primarily in the hypolimnion, while D. dentifera lacking haemoglobin migrated vertically. These 'deep' and 'migratory' D. dentifera had different seasonal phenologies and dynamics. Deep and migratory D. dentifera had qualitatively different relationships with an important competitor, Daphnia pulicaria. Deep D. dentifera density was negatively correlated with D. pulicaria density, whereas migratory density was not correlated with D. pulicaria density. Given that D. pulicaria tends to reside in the hypolimnion, this negative correlation probably reflects competition between D. pulicaria and the deep D. dentifera. This pattern would have been missed if only the relationship between the overall lake populations of D. dentifera and D. pulicaria had been studied. Abundances of deep D. dentifera and D. pulicaria were both correlated with the size of the hypolimnetic refuge from fish predation, but in opposite directions. Lakes with large refuges generally had high D. pulicaria and low deep D. dentifera densities. © 2009 Blackwell Publishing Ltd.

Zhu C.,Georgia Institute of Technology
Annals of Biomedical Engineering | Year: 2014

Molecular biomechanics includes two themes: the study of mechanical aspects of biomolecules and the study of molecular biology of the cell using mechanical tools. The two themes are interconnected for obvious reasons. The present review focuses on one of the interconnected areas-the mechanical regulation of molecular interaction and conformational change. Recent conceptual developments are summarized, including catch bonds, regulation of molecular interaction by the history of force application, and cyclic mechanical reinforcement. These studies elucidate the mechanochemistry of some of the candidate mechanosensing molecules, thereby providing a natural connection to mechanobiology. © 2013 Biomedical Engineering Society.

Garcia A.J.,Georgia Institute of Technology
Annals of Biomedical Engineering | Year: 2014

Protein- and cell-based therapies represent highly promising strategies for regenerative medicine, immunotherapy, and oncology. However, these therapies are significantly limited by delivery considerations, particularly in terms of protein stability and dosing kinetics as well as cell survival, engraftment, and function. Hydrogels represent versatile and robust delivery vehicles for proteins and cells due to their high water content that retains protein biological activity, high cytocompatibility and minimal adverse host reactions, flexibility and tunability in terms of chemistry, structure, and polymerization format, ability to incorporate various biomolecules to convey biofunctionality, and opportunity for minimally invasive delivery as injectable carriers. This review highlights recent progress in the engineering of poly(ethylene glycol) hydrogels cross-linked using maleimide reactive groups for protein and cell delivery. © 2013 Biomedical Engineering Society.

Howard A.,Georgia Institute of Technology
Science | Year: 2014

I was shielded from stereotypes during my young and impressionable years. I didn't realize they existed until maybe middle school, and by then, I'd already decided I wanted to build the Bionic Woman. I was always drawn to 'techy' stuff, but I also liked what people would consider typical girly things. I would just as quickly ask for a RadioShack kit as a Betty Crocker oven, and get both. I learned to solder at the same time I was playing with dolls (not necessarily Barbie, although I did collect them for a while and have some that are quite valuable). In the third grade, I started programming in BASIC on a Commodore 64 computer in the basement.

Wang Z.L.,Georgia Institute of Technology
MRS Bulletin | Year: 2012

Developing wireless nanodevices and nanosystems is critical for sensing, medical science, environmental/infrastructure monitoring, defense technology, and even personal electronics. It is highly desirable for wireless devices to be self-powered without using a battery. We have developed piezoelectric nanogenerators that can serve as self-sufficient power sources for micro-/nanosystems. For wurtzite structures that have non-central symmetry, such as ZnO, GaN, and InN, a piezoelectric potential (piezopotential) is created by applying a strain. The nanogenerator uses the piezopotential as the driving force, responding to dynamic straining of piezoelectric nanowires. A gentle strain can produce an output voltage of up to 20-40 V from an integrated nanogenerator. Furthermore, piezopotential in the wurtzite structure can serve as a gate voltage that can effectively tune/control charge transport across an interface/junction; electronics based on such a mechanism are referred to as piezotronics, with applications such as electronic devices that are triggered or controlled by force or pressure, sensors, logic units, and memory. By using the piezotronic effect, we show that optoelectronic devices fabricated using wurtzite materials can provide superior performance for solar cells, photon detectors, and light-emitting diodes. Piezotronic devices are likely to serve as mediators for directly interfacing biomechanical action with silicon-based technology. This article reviews our study of ZnO nanostructures over the last 12 years, with a focus on nanogenerators and piezotronics. © 2012 Materials Research Society.

Kim J.-Y.,Georgia Institute of Technology
International Journal of Engineering Science | Year: 2011

An exact matrix method, originally proposed for evaluating effective elastic constants of generally anisotropic multilayer composites, is further developed for a micromechanical analysis of multilayers with various coupled physical effects including piezoelectricity, piezomagnetism, thermoelasticity (in consideration of entropy), and Biot's poroelasticity. The results for a BaTiO3-CoFe2O4 magneto-electro-thermo-elastic (METE) multilayer coincide with those calculated using other micromechanical models based on the Mori-Tanaka method and the asymptotic homogenization method. It is shown that the present method can efficiently handle the most general type of multilayers with an arbitrary number of general anisotropic layers. Analytical expressions for effective material properties of a transversely isotropic METE multilayer composite are derived, from which those for functionally graded METE multilayers can be directly obtained. The effects of crystallographic orientations and volume fractions of constituting layers on the magnetoelectric coefficients are investigated for BaTiO3-CoFe2O4 and LiNbO3-CoFe2O4 multilayer composites. It is thus demonstrated that the present model can be used for the layout/material optimization of these METE multilayers to obtain a maximum product property such as the magnetoelectric, pyroelectric, and pyromagnetic coefficients. It is also shown that the same method can be used to predict the effective properties of poroelastic multilayers. © 2011 Elsevier Ltd. All rights reserved.

Stewart F.J.,Georgia Institute of Technology
Biochemical Society Transactions | Year: 2011

Biological diversity in marine OMZs (oxygen minimum zones) is dominated by a complex community of bacteria and archaea whose anaerobic metabolisms mediate key steps in global nitrogen and carbon cycles. Molecular and physiological studies now confirm that OMZs also support diverse micro-organisms capable of utilizing inorganic sulfur compounds for energy metabolism. The present review focuses specifically on recent metagenomic data that have helped to identify the molecular basis for autotrophic sulfur oxidation with nitrate in the OMZ water column, as well as a cryptic role for heterotrophic sulfate reduction. Interpreted alongside marker gene surveys and process rate measurements, these data suggest an active sulfur cycle with potentially substantial roles in organic carbon input and mineralization and critical links to the OMZ nitrogen cycle. Furthermore, these studies have created a framework for comparing the genomic diversity and ecology of pelagic sulfur-metabolizing communities from diverse low-oxygen regions. ©The Authors Journal compilation ©2011 Biochemical Society.

Bunz U.H.F.,Georgia Institute of Technology
Pure and Applied Chemistry | Year: 2010

The history and development of pyrazine- and pyridine-type heteroacenes and their use in solid-state organic electronics are discussed and reviewed. The larger N-heteroacenes are potential electron- or hole-transporting materials and should therefore complement acenes in organic electronics. As they feature electronegative nitrogen ring atoms in their molecular skeleton, issues with oxidation should be less problematic when comparing them to the larger acenes such as pentacene. This paper covers the synthesis and the solid-state packing of larger (tetracene/pentacene-based) N,N-heterocyclic acenes as well as the question of the interplay of aromaticity and antiaromaticity in the known larger N-heteroacenes and their N,N-dihydro-derivatives; also illuminated are their optical properties. A literature overview is provided. © 2010 IUPAC.

Kazoe Y.,University of Tokyo | Yoda M.,Georgia Institute of Technology
Applied Physics Letters | Year: 2011

Understanding near-wall diffusion of small particles and biomolecules is important in colloid science and many microfluidic devices. Our particle-tracking measurements of the diffusion of suspended particles with radii of 110-460 nm, in the presence of electric fields up to 3.1 kV/m, are in agreement with theoretical predictions for diffusion hindered by the presence of a solid surface. The results suggest that the external electric field has little, if any, effect upon the hindered diffusion of colloidal particles, even when the electrophoretic force exceeds the Stokes drag. © 2011 American Institute of Physics.
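The hindrance effect being tested can be sketched numerically. Below is a minimal illustration, not the paper's exact theory: the Stokes-Einstein bulk diffusivity combined with the leading-order method-of-reflections correction for diffusion parallel to a plane wall. The temperature, viscosity, particle radius, and wall gap are illustrative assumptions.

```python
import math

def bulk_diffusivity(radius_m, temp_K=298.0, viscosity_Pa_s=8.9e-4):
    """Stokes-Einstein diffusivity of a sphere in an unbounded fluid."""
    kB = 1.380649e-23  # Boltzmann constant, J/K
    return kB * temp_K / (6.0 * math.pi * viscosity_Pa_s * radius_m)

def hindered_parallel_diffusivity(radius_m, gap_m, **kw):
    """Leading-order method-of-reflections correction for diffusion
    parallel to a plane wall (valid when the gap exceeds the radius)."""
    D0 = bulk_diffusivity(radius_m, **kw)
    beta = radius_m / gap_m
    return D0 * (1.0 - (9.0 / 16.0) * beta + (1.0 / 8.0) * beta ** 3)

# A 110 nm radius particle whose center sits 500 nm from the wall:
D0 = bulk_diffusivity(110e-9)
Dp = hindered_parallel_diffusivity(110e-9, 500e-9)
print(D0, Dp / D0)  # hindrance factor below 1 near the wall
```

The hindrance factor Dp/D0 drops below unity as the particle approaches the wall, which is the qualitative effect the measurements probe.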

Kennedy G.J.,Georgia Institute of Technology
Structural and Multidisciplinary Optimization | Year: 2015

Structural engineers are often constrained by cost or manufacturing considerations to select member thicknesses from a discrete set of values. Conventional, gradient-free techniques to solve these discrete problems cannot handle large problem sizes, while discrete material optimization (DMO) techniques may encounter difficulties, especially for bending-dominated problems. To resolve these issues, we propose an efficient gradient-based technique to obtain engineering solutions to the discrete thickness selection problem. The proposed technique uses a series of constraints to enforce an effective stiffness-to-mass and strength-to-mass penalty on intermediate designs. In conjunction with these constraints, we apply an exact penalty function which drives the solution towards a discrete design. We utilize a continuation approach to obtain approximate solutions to the discrete thickness selection problem by solving a sequence of relaxed continuous problems with increasing penalization. We also show how this approach can be applied to combined discrete thickness selection and topology optimization design problems. To demonstrate the effectiveness of the proposed technique, we present both compliance and stress-constrained results for in-plane and bending-dominated problems. © 2014, Springer-Verlag Berlin Heidelberg.
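The exact-penalty continuation idea can be illustrated on a toy scalar problem, purely as a sketch of the concept rather than the paper's formulation: a smooth "compliance-like" objective is augmented with a nonsmooth penalty that vanishes only at the allowed discrete thicknesses, and the penalty weight is increased over a sweep. All numbers and function forms here are hypothetical.

```python
def nearest_discrete(t, levels):
    """Closest allowed thickness to the continuous value t."""
    return min(levels, key=lambda v: abs(v - t))

def penalized_objective(t, w, levels):
    """Toy smooth objective favoring t = 0.37, plus an exact (absolute-value)
    penalty that is zero only at the allowed discrete thicknesses."""
    return (t - 0.37) ** 2 + w * abs(t - nearest_discrete(t, levels))

def minimize_1d(f, lo, hi, n=2001):
    """Brute-force grid minimizer, adequate for this 1D illustration."""
    ts = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    return min(ts, key=f)

levels = [0.2, 0.4, 0.6]
for w in [0.0, 0.1, 0.5, 2.0]:  # sweep of increasing penalty weights,
    t = minimize_1d(lambda x: penalized_objective(x, w, levels), 0.2, 0.6)
    # a stand-in for the continuation strategy in the paper
print(t)  # the relaxed optimum 0.37 is driven to the discrete value 0.4
```

With no penalty the minimizer sits at the continuous optimum; as the weight grows, the exact penalty pulls the solution onto the nearest discrete level, mirroring the continuation behavior described above.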

Heyde K.,Ghent University | Wood J.L.,Georgia Institute of Technology
Reviews of Modern Physics | Year: 2011

Shape coexistence in nuclei appears to be unique in the realm of finite many-body quantum systems. It differs from the various geometrical arrangements that sometimes occur in a molecule in that in a molecule the various arrangements are of the widely separated atomic nuclei. In nuclei the various "arrangements" of nucleons involve (sets of) energy eigenstates with different electric quadrupole properties such as moments and transition rates, and different distributions of proton pairs and neutron pairs with respect to their Fermi energies. Sometimes two such structures will "invert" as a function of the nucleon number, resulting in a sudden and dramatic change in ground-state properties in neighboring isotopes and isotones. In the first part of this review the theoretical status of coexistence in nuclei is summarized. Two approaches, namely, microscopic shell-model descriptions and mean-field descriptions, are emphasized. The second part of this review presents systematic data, for both even- and odd-mass nuclei, selected to illustrate the various ways in which coexistence is observed in nuclei. The last part of this review looks to future developments and the issue of the universality of coexistence in nuclei. Surprises continue to be discovered. With the major advances in reaching to extremes of proton-neutron number, and the anticipated new "rare isotope beam" facilities, guidelines for search and discovery are discussed. © 2011 American Physical Society.

Vempala S.S.,Georgia Institute of Technology
Journal of the ACM | Year: 2010

We give an algorithm to learn an intersection of k halfspaces in R^n whose normals span an l-dimensional subspace. For any input distribution with a logconcave density such that the bounding hyperplanes of the k halfspaces pass through its mean, the algorithm (ε, δ)-learns with time and sample complexity bounded by (nkl/ε)^{O(l)} log(1/(εδ)). The hypothesis found is an intersection of O(k log(1/ε)) halfspaces. This improves on Blum and Kannan's algorithm for the uniform distribution over a ball, in the time and sample complexity (previously doubly exponential) and in the generality of the input distribution. © 2010 ACM.
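A minimal sketch of the learning setup (not the paper's algorithm): membership in an intersection of halfspaces whose hyperplanes pass through the mean of a standard Gaussian, which is log-concave as the result requires, with the disagreement between a target and a candidate hypothesis estimated by sampling, in the spirit of (ε, δ)-learning. All numeric values are illustrative.

```python
import random

def in_intersection(x, halfspaces):
    """x lies in the intersection iff w.x <= b for every halfspace (w, b)."""
    return all(sum(wi * xi for wi, xi in zip(w, x)) <= b for w, b in halfspaces)

def estimate_error(target, hypothesis, dim, n_samples=20000, seed=0):
    """Monte Carlo estimate of the disagreement probability between two
    halfspace intersections under a standard Gaussian input distribution."""
    rng = random.Random(seed)
    disagree = 0
    for _ in range(n_samples):
        x = [rng.gauss(0.0, 1.0) for _ in range(dim)]
        if in_intersection(x, target) != in_intersection(x, hypothesis):
            disagree += 1
    return disagree / n_samples

# Target: two halfspaces through the origin (the mean of the Gaussian).
target = [([1.0, 0.0, 0.0], 0.0), ([0.0, 1.0, 0.0], 0.0)]
# A hypothesis with slightly perturbed normals:
hypothesis = [([1.0, 0.05, 0.0], 0.0), ([0.05, 1.0, 0.0], 0.0)]
print(estimate_error(target, hypothesis, dim=3))  # small disagreement rate
```

The hard part solved by the paper is of course finding such a hypothesis efficiently; the sketch only shows how its quality is measured.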

Pepe P.,University of L'Aquila | Verriest E.I.,Georgia Institute of Technology
Automatica | Year: 2012

In this paper, we present a Lyapunov-Krasovskii methodology for checking the global asymptotic stability and the input-to-state stability of systems described by retarded functional differential equations coupled with continuous-time difference equations, often referred to as special neutral systems, with unmatched and piecewise continuous initial conditions. The methodology provides results in terms of the Lp norm for the non-differentiated variable and does not require a preliminary check of input-to-state stability for the difference part of the system. The Lyapunov conditions are given without involving the solution, even formally. © 2011 Elsevier Ltd. All rights reserved.

A growing body of evidence suggests that DNA methylation is functionally divergent among different taxa. The recently discovered functional methylation system in the honeybee Apis mellifera presents an attractive invertebrate model system to study the evolution and function of DNA methylation. In the honeybee, DNA methylation is mostly targeted toward transcription units (gene bodies) of a subset of genes. Here, we report an intriguing covariation of length and epigenetic status of honeybee genes. Hypermethylated and hypomethylated genes in the honeybee are dramatically different in their lengths, for both exons and introns. By analyzing orthologs in Drosophila melanogaster, Acyrthosiphon pisum, and Ciona intestinalis, we show that genes that were short or long in the past are now preferentially situated in the hyper- and hypomethylated classes, respectively, in the honeybee. Moreover, we demonstrate that a subset of high-CpG genes are conspicuously longer than expected from the evolutionary relationship alone and that they are enriched in specific functional categories. We suggest that gene length evolution in the honeybee is partially driven by evolutionary forces related to the regulation of gene expression, which in turn is associated with DNA methylation. However, lineage-specific patterns of gene length evolution suggest that there may exist additional forces underlying the observed interaction between DNA methylation and gene lengths in the honeybee.

Skums P.,Centers for Disease Control and Prevention | Bunimovich L.,Georgia Institute of Technology | Khudyakov Y.,Centers for Disease Control and Prevention
Proceedings of the National Academy of Sciences of the United States of America | Year: 2015

Hepatitis C virus (HCV) has the propensity to cause chronic infection. Continuous immune escape has been proposed as a mechanism of intrahost viral evolution contributing to HCV persistence. Although the pronounced genetic diversity of intrahost HCV populations supports this hypothesis, recent observations of long-term persistence of individual HCV variants, negative selection increase, and complex dynamics of viral subpopulations during infection as well as broad cross-immunoreactivity (CR) among variants are inconsistent with the immune-escape hypothesis. Here, we present a mathematical model of intrahost viral population dynamics under the condition of a complex CR network (CRN) of viral variants and examine the contribution of CR to establishing persistent HCV infection. The model suggests a mechanism of viral adaptation by antigenic cooperation (AC), with immune responses against one variant protecting other variants. AC reduces the capacity of the host's immune system to neutralize certain viral variants. CRN structure determines specific roles for each viral variant in host adaptation, with variants eliciting broad-CR antibodies facilitating persistence of other variants immunoreacting with these antibodies. The proposed mechanism is supported by empirical observations of intrahost HCV evolution. Interference with AC is a potential strategy for interruption and prevention of chronic HCV infection. © 2015, National Academy of Sciences. All rights reserved.

Sherrill C.D.,Georgia Institute of Technology
Accounts of Chemical Research | Year: 2013

Fundamental features of biomolecules, such as their structure, solvation, and crystal packing and even the docking of drugs, rely on noncovalent interactions. Theory can help elucidate the nature of these interactions, and energy component analysis reveals the contributions from the various intermolecular forces: electrostatics, London dispersion terms, induction (polarization), and short-range exchange-repulsion. Symmetry-adapted perturbation theory (SAPT) provides one method for this type of analysis. In this Account, we show several examples of how SAPT provides insight into the nature of noncovalent π-interactions. In cation-π interactions, the cation strongly polarizes electrons in π-orbitals, leading to substantially attractive induction terms. This polarization is so important that a cation and a benzene attract each other when placed in the same plane, even though a consideration of the electrostatic interactions alone would suggest otherwise. SAPT analysis can also support an understanding of substituent effects in π-π interactions. Trends in face-to-face sandwich benzene dimers cannot be understood solely in terms of electrostatic effects, especially for multiply substituted dimers, but SAPT analysis demonstrates the importance of London dispersion forces. Moreover, detailed SAPT studies also reveal the critical importance of charge penetration effects in π-stacking interactions. These effects arise in cases with substantial orbital overlap, such as in π-stacking in DNA or in crystal structures of π-conjugated materials. These charge penetration effects lead to attractive electrostatic terms where a simpler analysis based on atom-centered charges, electrostatic potential plots, or even distributed multipole analysis would incorrectly predict repulsive electrostatics.
SAPT analysis of sandwich benzene, benzene-pyridine, and pyridine dimers indicates that dipole/induced-dipole terms present in benzene-pyridine but not in benzene dimer are relatively unimportant. In general, a nitrogen heteroatom contracts the electron density, reducing the magnitude of both the London dispersion and the exchange-repulsion terms, but with an overall net increase in attraction. Finally, using recent advances in SAPT algorithms, researchers can now perform SAPT computations on systems with 200 atoms or more. We discuss a recent study of the intercalation complex of proflavine with a trinucleotide duplex of DNA. Here, London dispersion forces are the strongest contributors to binding, as is typical for π-π interactions. However, the electrostatic terms are larger than usual on a fractional basis, which likely results from the positive charge on the intercalator and its location between two electron-rich base pairs. These cation-π interactions also increase the induction term beyond those of typical noncovalent π-interactions. © 2012 American Chemical Society.
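In a common notation (the specific truncation level is left unspecified here), the SAPT decomposition invoked throughout separates the interaction energy into the four physically motivated components named above:

```latex
E_{\mathrm{int}}^{\mathrm{SAPT}}
  = E_{\mathrm{elst}} + E_{\mathrm{exch}} + E_{\mathrm{ind}} + E_{\mathrm{disp}}
```

where electrostatics and dispersion are typically attractive in π-stacking, exchange-repulsion is repulsive, and induction captures polarization effects such as the cation-π attraction discussed above.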

Zhu T.,Georgia Institute of Technology | Gao H.,Brown University
Scripta Materialia | Year: 2012

An overview is given of the deformation mechanisms in nanotwinned copper, as studied by recent molecular dynamics, dislocation mechanics and crystal plasticity modeling. We highlight the unique role of nanoscale twin lamellae in producing the hard and soft modes of dislocation glide, as well as how the coherent twin boundaries affect slip transfer, dislocation nucleation, twinning and detwinning. These twin boundary-mediated deformation mechanisms have been mechanistically linked to the mechanical properties of strength, ductility, strain hardening, activation volume, rate sensitivity, size-dependent strengthening and softening in nanotwinned metals. Finally, discussions are dedicated to identifying important unresolved issues for future research. © 2012 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.

Eisenstein J.,Georgia Institute of Technology
NAACL HLT 2013 - 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Proceedings of the Main Conference | Year: 2013

The rise of social media has brought computational linguistics in ever-closer contact with bad language: text that defies our expectations about vocabulary, spelling, and syntax. This paper surveys the landscape of bad language, and offers a critical review of the NLP community's response, which has largely followed two paths: normalization and domain adaptation. Each approach is evaluated in the context of theoretical and empirical work on computer-mediated communication. In addition, the paper presents a quantitative analysis of the lexical diversity of social media text, and its relationship to other corpora. © 2013 Association for Computational Linguistics.
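One simple, standard measure behind such lexical-diversity analyses is the type-token ratio. The sketch below illustrates the measure itself, not the paper's corpus pipeline; the tokenizer is deliberately crude and the example sentences are invented.

```python
import re

def tokens(text):
    """Lowercased word tokens; a crude tokenizer for illustration only."""
    return re.findall(r"[a-z0-9']+", text.lower())

def type_token_ratio(toks):
    """Fraction of distinct word types among all tokens: a simple
    (length-sensitive) measure of lexical diversity."""
    return len(set(toks)) / len(toks)

standard = tokens("the cat sat on the mat and the dog sat on the rug")
social = tokens("omg thx lol gr8 b4 u go plz dm me k thx bye")
print(type_token_ratio(standard), type_token_ratio(social))
```

Nonstandard spellings tend to inflate the count of distinct types, which is one reason social media text can register higher lexical diversity than edited text of the same length.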

Snell T.W.,Georgia Institute of Technology
International Review of Hydrobiology | Year: 2014

It has been two decades since 1993 when research on the biology of rotifer aging was last reviewed by Enesco. Much has transpired during this time as rotifer biologists have adapted to the "omics" revolution and incorporated these techniques into the experimental analysis of rotifers. Rotifers are amenable to many of these approaches and getting adequate quantities of DNA, RNA, and protein from rotifers is not difficult. Analysis of rotifer genomes, transcriptomes, and proteomes is rapidly yielding candidate genes that likely regulate a variety of features of rotifer biology. Parallel developments in aging biology have recognized the limitations of standard animal models like worms and flies and that comparative aging research has essentially ignored a large fraction of animal phylogeny in the lophotrochozoans. As experimentally tractable members of this group, rotifers have attracted interest as models of aging. In this paper, I review advances over the past 20 years in the biology of aging in rotifers, with emphasis on the unique contributions of rotifer models for understanding aging. The majority of experimental work has manipulated rotifer diet and followed changes in survival and reproductive dynamics like mean lifespan, maximum lifespan, reproductive lifespan, and mortality rate doubling time. The main dietary manipulation has been some form of caloric restriction, withholding food for some period or feeding continuously at low levels. There have been comparative studies of several rotifer species, with some species responding to caloric restriction with life extension, but others not, at least under the tested food regimens. Other aspects of diet are less explored, like nutritional properties of different algae species and their capacity to extend rotifer lifespan. Several descriptive studies have reported many genes involved in rotifer aging by comparing gene expression in young and old individuals. 
Classes of genes up- or down-regulated during aging have become prime targets for rotifer aging investigations. Alterations of gene expression by exposure to specific inhibitors or RNA interference (RNAi) knockdown will probably yield valuable insights into the cellular mechanisms of rotifer life extension. In this paper, I highlight major experimental contributions and indicate opportunities where I believe additional investigation is likely to be profitable. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

Yavari A.,Georgia Institute of Technology
Archive for Rational Mechanics and Analysis | Year: 2013

Compatibility equations of elasticity are almost 150 years old. Interestingly, they do not seem to have been rigorously studied, to date, for non-simply-connected bodies. In this paper we derive necessary and sufficient compatibility equations of nonlinear elasticity for arbitrary non-simply-connected bodies when the ambient space is Euclidean. For a non-simply-connected body, a measure of strain may not be compatible, even if the standard compatibility equations ("bulk" compatibility equations) are satisfied. It turns out that there may be topological obstructions to compatibility; this paper aims to understand them for both deformation gradient F and the right Cauchy-Green strain C = F^T F. We show that the necessary and sufficient conditions for compatibility of deformation gradient F are the vanishing of its exterior derivative and all its periods, that is, its integral over generators of the first homology group of the material manifold. We will show that not every non-null-homotopic path requires supplementary compatibility equations for F and linearized strain e. We then find both necessary and sufficient compatibility conditions for the right Cauchy-Green strain tensor C for arbitrary non-simply-connected bodies when the material and ambient space manifolds have the same dimensions. We discuss the well-known necessary compatibility equations in the linearized setting and the Cesàro-Volterra path integral. We then obtain the sufficient conditions of compatibility for the linearized strain when the body is not simply-connected. To summarize, the question of compatibility reduces to two issues: i) an integrability condition, which is d(F dX) = 0 for the deformation gradient and a curvature vanishing condition for C, and ii) a topological condition. For F dX this is a homological condition because the equation one is trying to solve takes the form dφ = F dX.
For C, however, parallel transport is involved, which means that one needs to solve an equation of the form dR/ds = RK, where R takes values in the orthogonal group. This is, therefore, a question about an orthogonal representation of the fundamental group, which, as the orthogonal group is not commutative, cannot, in general, be reduced to a homological question. © 2013 Springer-Verlag Berlin Heidelberg.
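The two conditions for F summarized above can be restated compactly (β1 denotes the first Betti number of the body B, a notation introduced only for this sketch):

```latex
\mathrm{d}\!\left(\mathbf{F}\,\mathrm{d}\mathbf{X}\right) = \mathbf{0},
\qquad
\oint_{c_i} \mathbf{F}\,\mathrm{d}\mathbf{X} = \mathbf{0},
\quad i = 1, \dots, \beta_1(\mathcal{B}),
```

where the loops c_i generate the first homology group of the material manifold; for a simply-connected body the second set of conditions is vacuous and only the bulk condition remains.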

Verriest E.I.,Georgia Institute of Technology
IMA Journal of Mathematical Control and Information | Year: 2011

Problems with causality, minimality and inconsistency arise in delay systems when the derivative of the time delay exceeds one. We seek to resolve such problems by proper analysis of the structural properties of a system with time-variant delay and propose new interpretations for such systems, via 'causalization': lossless and forgetful. The results are applicable to systems with piecewise constant delays, of interest in control over a network. © The author 2011.

Lucki N.C.,Georgia Institute of Technology | Sewer M.B.,University of California at San Diego
Annual Review of Physiology | Year: 2012

Nuclear lipid metabolism is implicated in various processes, including transcription, splicing, and DNA repair. Sphingolipids play roles in numerous cellular functions, and an emerging body of literature has identified roles for these lipid mediators in distinct nuclear processes. Different sphingolipid species are localized in various subnuclear domains, including chromatin, the nuclear matrix, and the nuclear envelope, where sphingolipids exert specific regulatory and structural functions. Sphingomyelin, the most abundant nuclear sphingolipid, plays both structural and regulatory roles in chromatin assembly and dynamics in addition to being an integral component of the nuclear matrix. Sphingosine-1-phosphate modulates histone acetylation, sphingosine is a ligand for steroidogenic factor 1, and nuclear accumulation of ceramide has been implicated in apoptosis. Finally, nuclear membrane-associated ganglioside GM1 plays a pivotal role in Ca2+ homeostasis. This review highlights research on the factors that control nuclear sphingolipid metabolism and summarizes the roles of these lipids in various nuclear processes. Copyright © 2012 by Annual Reviews. All rights reserved.

Kalidindi S.R.,Georgia Institute of Technology | De Graef M.,Carnegie Mellon University
Annual Review of Materials Research | Year: 2015

The field of materials science and engineering is on the cusp of a digital data revolution. After reviewing the nature of data science and Big Data, we discuss the features of materials data that distinguish them from data in other fields. We introduce the concept of process-structure-property (PSP) linkages and illustrate how the determination of PSPs is one of the main objectives of materials data science. Then we review a selection of materials databases, as well as important aspects of materials data management, such as storage hardware, archiving strategies, and data access strategies. We introduce the emerging field of materials data analytics, which focuses on data-driven approaches to extract and curate materials knowledge from available data sets. The critical need for materials e-collaboration platforms is highlighted, and we conclude the article with a number of suggestions regarding the near-term future of the materials data science field. Copyright © 2015 by Annual Reviews. All rights reserved.

Lukasik S.J.,Georgia Institute of Technology
Communications of the ACM | Year: 2011

Establish a global cyber "neighborhood watch" enabling users to take defensive action to protect their operations. © 2011 ACM.

Guzdial M.,Georgia Institute of Technology
Communications of the ACM | Year: 2011

Exploring the dual nature of computing education research.

Since the discovery of its anticancer activity in the 1970s, cisplatin and its analogs have become widely used in clinical practice, being administered to 40-80% of patients undergoing chemotherapy for solid tumors. The fascinating story of this drug continues to evolve, and now includes advances in our understanding of the complexity of the molecular mechanisms involved in its anticancer activity and drug toxicity. While genomic DNA has been generally recognized as the most critical pharmacological target of cisplatin, results reported across multiple disciplines suggest that other targets and molecular interactions are likely involved in the anticancer mode of action, drug toxicity, and resistance of cancer cells to this remarkable anticancer drug. This article reviews interactions of cisplatin with non-DNA targets, including RNAs, proteins, phospholipids, and carbohydrates, in the context of its pharmacological activity and drug toxicity. Some of these non-DNA targets and associated mechanisms likely act in a highly concerted manner towards the biological outcome in cisplatin-treated tumors; therefore, understanding the complexity of the cisplatin interactome may open new avenues for modulating its clinical efficacy or for designing more efficient platinum-based anticancer drugs that extend the success of cisplatin in the treatment of highly curable testicular germ cell tumors to other cancers. © 2014 Bentham Science Publishers.

Garcia R.,CSIC - Institute of Materials Science | Knoll A.W.,IBM | Riedo E.,Georgia Institute of Technology
Nature Nanotechnology | Year: 2014

The nanoscale control afforded by scanning probe microscopes has prompted the development of a wide variety of scanning-probe-based patterning methods. Some of these methods have demonstrated a high degree of robustness and patterning capabilities that are unmatched by other lithographic techniques. However, the limited throughput of scanning probe lithography has prevented its exploitation in technological applications. Here, we review the fundamentals of scanning probe lithography and its use in materials science and nanotechnology. We focus on robust methods, such as those based on thermal effects, chemical reactions and voltage-induced processes, that demonstrate a potential for applications. © 2014 Macmillan Publishers Limited.

Kohl P.A.,Georgia Institute of Technology
Annual Review of Chemical and Biomolecular Engineering | Year: 2011

Future integrated circuits and packages will require extraordinary dielectric materials for interconnects to allow transistor advances to be translated into system-level advances. Exceedingly low-permittivity and low-loss materials are required at every level of the electronic system, from chip-level insulators to packages and printed wiring boards. In this review, the requirements and goals for future insulators are discussed followed by a summary of current state-of-the-art materials and technical approaches. Much work needs to be done for insulating materials and structures to meet future needs. © Copyright 2011 by Annual Reviews. All rights reserved.

Sovacool B.K.,National University of Singapore | Brown M.A.,Georgia Institute of Technology
Annual Review of Environment and Resources | Year: 2010

How well are industrialized nations doing in terms of their energy security? Without a standardized set of metrics, it is difficult to determine the extent to which countries are properly responding to the emerging energy security challenges related to climate change: a growing dependency on fossil fuels, population growth, and economic development. In response, this article first surveys the academic literature on energy security and concludes that it is composed of availability, affordability, efficiency, and environmental stewardship. It then analyzes the relative energy security performance, based on these four dimensions, of the United States and 21 other member countries of the Organisation for Economic Co-operation and Development (OECD) from 1970 to 2007. Four countries are examined in greater detail: one of the strongest (Denmark), one of the most improved in terms of energy security (Japan), one with weak and stagnant energy security (United States), and one with deteriorating energy security (Spain). The article concludes by offering implications for public policy. Copyright © 2010 by Annual Reviews. All rights reserved.

Leamy M.J.,Georgia Institute of Technology
Journal of Sound and Vibration | Year: 2012

This paper presents an exact, wave-based approach for determining Bloch waves in two-dimensional periodic lattices. This is in contrast to existing methods which employ approximate approaches (e.g., finite difference, Ritz, finite element, or plane wave expansion methods) to compute Bloch waves in general two-dimensional lattices. The analysis combines the recently introduced wave-based vibration analysis technique with specialized Bloch boundary conditions developed herein. Timoshenko beams with axial extension are used in modeling the lattice members. The Bloch boundary conditions incorporate a propagation constant capturing Bloch wave propagation in a single direction, but applied to all wave directions propagating in the lattice members. This results in a unique and properly posed Bloch analysis. Results are generated for the simple problem of a periodic bi-material beam, and then for the more complex examples of square, diamond, and hexagonal honeycomb lattices. The bi-material beam clearly introduces the concepts, but also allows the Bloch wave mode to be explored using insight from the technique. The square, diamond, and hexagonal honeycomb lattices illustrate application of the developed technique to two-dimensional periodic lattices, and allow comparison to a finite element approach. Differences are noted in the predicted dispersion curves, and therefore band gaps, which are attributed to the exact procedure more-faithfully modeling the finite nature of lattice connection points. The exact method also differs from approximate methods in that the same number of solution degrees of freedom is needed to resolve low frequency, and arbitrarily high frequency, dispersion branches. These advantageous features may make the method attractive to researchers studying dispersion characteristics, band gap behavior, and energy propagation in two-dimensional periodic lattices. © 2011 Elsevier Ltd. All rights reserved.
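A minimal 1D analogue conveys the central Bloch-analysis ideas, dispersion branches and band gaps, without the paper's Timoshenko-beam machinery: the classical diatomic spring-mass chain, whose dispersion is known in closed form. The masses, stiffness, and lattice constant below are illustrative choices.

```python
import math

def diatomic_dispersion(q, a, k, m1, m2):
    """Bloch dispersion of a 1D diatomic spring-mass chain: returns the
    acoustic and optical branch frequencies at wavenumber q."""
    S = 1.0 / m1 + 1.0 / m2
    root = math.sqrt(S * S - 4.0 * math.sin(q * a / 2.0) ** 2 / (m1 * m2))
    w_acoustic = math.sqrt(k * (S - root))
    w_optical = math.sqrt(k * (S + root))
    return w_acoustic, w_optical

# Sweep the first Brillouin zone and locate the band gap at the zone edge.
a, k, m1, m2 = 1.0, 1.0, 2.0, 1.0
qs = [math.pi * i / (50 * a) for i in range(51)]
branches = [diatomic_dispersion(q, a, k, m1, m2) for q in qs]
w_ac_max = max(w for w, _ in branches)   # top of the acoustic branch
w_op_min = min(w for _, w in branches)   # bottom of the optical branch
print(w_ac_max, w_op_min)  # gap between sqrt(2k/m1) and sqrt(2k/m2)
```

The frequency window between the two printed values is the band gap; in the paper the same structure emerges from the exact wave-based analysis of two-dimensional lattices.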

Alben S.,Georgia Institute of Technology
Journal of Fluid Mechanics | Year: 2012

We determine the inviscid dynamics of a point vortex in the vicinity of a flexible filament. For a wide range of filament bending rigidities, the filament is attracted to the point vortex, which generally moves tangentially to it. We find evidence that the point vortex collides with the filament at a critical time, with the separation distance tending to zero like the square root of the temporal distance from the critical time. Concurrent with the collision, we find divergences of pressure loading on the filament, filament vortex sheet strength, filament curvature and velocity. We derive the corresponding power laws using the governing equations. © 2012 Cambridge University Press.
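A square-root approach to collision of this kind is typically verified by fitting a power law to separation-versus-time data on log-log axes; a minimal sketch, with synthetic data standing in for the computed filament-vortex separation (the coefficient and critical time are invented):

```python
import numpy as np

# Hypothetical separation data obeying d(t) = C * sqrt(t_c - t)
t_c, coeff = 1.0, 0.3
t = np.linspace(0.5, 0.99, 50)
d = coeff * np.sqrt(t_c - t)

# Fit the exponent p in d ~ (t_c - t)^p by linear regression in log-log space;
# a square-root approach should recover p = 0.5
p, log_c = np.polyfit(np.log(t_c - t), np.log(d), 1)
```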

Alben S.,Georgia Institute of Technology
Journal of Fluid Mechanics | Year: 2012

We calculate optimal driving motions for a fin ray in a two-dimensional inviscid fluid, which is a model for caudal fin locomotion. The driving is sinusoidal in time, and consists of heaving, pitching and a less-studied motion called 'shifting'. The optimal phases of shifting relative to heaving and pitching for maximum thrust power and efficiency are calculated. The optimal phases undergo jumps at resonant combinations of fin ray bending and shear moduli, and are nearly constant in regions between resonances. In two examples, pitching- and heaving-based motions converge with the addition of optimal shifting. Shifting provides an order-one increase in output power and efficiency. © 2011 Cambridge University Press.

Kindermann M.,Georgia Institute of Technology
Physical Review Letters | Year: 2015

While experimental progress on three-dimensional topological insulators is rapid, the development of their 2D counterparts has been comparatively slow, despite their technological promise. The main reason is the materials challenges facing the only realizations of 2D topological insulators to date, in semiconductor quantum wells. Here we identify a 2D topological insulator in a material that does not face similar challenges and that is by now widely available and well characterized: graphene. For certain commensurate interlayer twists, graphene multilayers are insulators with sizable band gaps. We show that they are moreover in a topological phase protected by crystal symmetry. As its fundamental signature, this topological state supports one-dimensional boundary modes. They form low-dissipation quantum wires that can be defined purely electrostatically. © 2015 American Physical Society.

Radu I.,Georgia Institute of Technology
Personal and Ubiquitous Computing | Year: 2014

Augmented reality (AR) is an educational medium increasingly accessible to young users such as elementary school and high school students. Although previous research has shown that AR systems have the potential to improve student learning, the educational community remains unclear regarding the educational usefulness of AR and regarding contexts in which this technology is more effective than other educational mediums. This paper addresses these topics by analyzing 26 publications that have previously compared student learning in AR versus non-AR applications. It identifies a list of positive and negative impacts of AR experiences on student learning and highlights factors that are potentially underlying these effects. This set of factors is argued to cause differences in educational effectiveness between AR and other media. Furthermore, based on the analysis, the paper presents a heuristic questionnaire generated for judging the educational potential of AR experiences. © 2014 Springer-Verlag London.

Ke Y.,Georgia Institute of Technology
Current Opinion in Structural Biology | Year: 2014

The capability to de novo design molecular structures with precise weights, geometries, and functions provides an important avenue not only for scientific explorations, but also for technological applications. Owing largely to its rationalizable design strategies, supramolecular self-assembly with DNA has emerged as a powerful approach to assemble custom-shaped intricate three-dimensional nanostructures with molecular weights up to several megadaltons. Here, we summarize and discuss landmark achievements and important methodologies in three-dimensional DNA nanostructures. © 2014 Elsevier Ltd.

Zhu T.,Georgia Institute of Technology | Li J.,University of Pennsylvania
Progress in Materials Science | Year: 2010

Recent experiments on nanostructured materials, such as nanoparticles, nanowires, nanotubes, nanopillars, thin films, and nanocrystals, have revealed a host of "ultra-strength" phenomena, defined by stresses in a material component generally rising up to a significant fraction (>1/10) of its ideal strength - the highest achievable stress of a defect-free crystal at zero temperature. While conventional materials deform or fracture at sample-wide stresses far below the ideal strength, rapid development of nanotechnology has brought about a need to understand ultra-strength phenomena, as nanoscale materials apparently have a larger dynamic range of sustainable stress ("strength") than conventional materials. Ultra-strength phenomena not only have to do with the shape stability and deformation kinetics of a component, but also the tuning of its physical and chemical properties by stress. Reaching ultra-strength enables "elastic strain engineering", where by controlling the elastic strain field one achieves desired electronic, magnetic, optical, phononic, catalytic, etc. properties in the component, imparting a new meaning to Feynman's statement "there's plenty of room at the bottom". This article presents an overview of the principal deformation mechanisms of ultra-strength materials. The fundamental defect processes that initiate and sustain plastic flow and fracture are described, and the mechanics and physics of both displacive and diffusive mechanisms are reviewed. The effects of temperature, strain rate and sample size are discussed. Important unresolved issues are identified. © 2010 Elsevier Ltd.

Calculation models are presented for treating ion orbit loss effects in interpretive fluid transport calculations for the tokamak edge pedestal. Both standard ion orbit loss of particles following trapped or passing orbits across the separatrix and the X-loss of particles that are poloidally trapped in a narrow null-Bθ region extending inward from the X-point, where their grad-B and curvature drifts carry them outward, are considered. Calculations are presented for a representative DIII-D [J. Luxon, Nucl. Fusion 42, 614 (2002)] shot which indicate that ion orbit loss effects are significant and should be taken into account in calculations of present and future experiments. © 2011 American Institute of Physics.

Wang Y.,Georgia Institute of Technology
Structural Control and Health Monitoring | Year: 2011

For the control of large-scale complex systems, it has been widely recognized that decentralized approaches may offer a number of advantages compared with centralized approaches. Primarily, these advantages include less feedback latency and lower demand on communication range, which may result in better control performance and lower system cost. This paper presents a decentralized approach for the control of large-scale civil structures. The approach provides decentralized dynamic output feedback controllers that minimize the H∞ norm of the closed-loop system. The effect of feedback time delay is considered in the problem formulation, and therefore, compensated during the controller design. The control decentralization is achieved using a homotopy method that gradually transforms a typical centralized controller into multiple uncoupled decentralized controllers. At each homotopy step, linear matrix inequality (LMI) constraints are satisfied to guarantee the performance requirement for the closed-loop H∞ norm. The proposed algorithm is validated through numerical simulations with a five-story example structure. Performance of the proposed algorithm is compared with a time-delayed decentralized control algorithm that is based upon the linear quadratic regulator (LQR) criteria. © 2009 John Wiley & Sons, Ltd.
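The H∞ homotopy design above is involved; as a self-contained sketch of the kind of LQR baseline the paper compares against (here a plain centralized, undelayed discrete-time LQR on a hypothetical double-integrator plant, not the five-story structure), the steady-state gain can be found by backward Riccati iteration:

```python
import numpy as np

dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])   # hypothetical double-integrator plant
B = np.array([[0.0], [dt]])
Q = np.eye(2)                            # state weight
R = np.array([[1.0]])                    # control weight

# Backward Riccati iteration to the steady-state cost-to-go matrix P
P = Q.copy()
for _ in range(500):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ A - A.T @ P @ B @ K

K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # steady-state LQR gain
closed_loop = A - B @ K                            # u = -K x
```

The closed-loop matrix A − BK should have all eigenvalues strictly inside the unit circle, i.e. the regulator stabilizes the plant.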

Zhang Z.,Zhejiang University | Wang J.,Huaqiao University | Zha H.,Georgia Institute of Technology
IEEE Transactions on Pattern Analysis and Machine Intelligence | Year: 2012

Manifold learning algorithms seek to find a low-dimensional parameterization of high-dimensional data. They heavily rely on the notion of what can be considered as local, how accurately the manifold can be approximated locally, and, last but not least, how the local structures can be patched together to produce the global parameterization. In this paper, we develop algorithms that address two key issues in manifold learning: 1) the adaptive selection of the local neighborhood sizes when imposing a connectivity structure on the given set of high-dimensional data points and 2) the adaptive bias reduction in the local low-dimensional embedding by accounting for the variations in the curvature of the manifold as well as its interplay with the sampling density of the data set. We demonstrate the effectiveness of our methods for improving the performance of manifold learning algorithms using both synthetic and real-world data sets. © 2012 IEEE.
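A toy version of adaptive neighborhood-size selection (not the authors' algorithm) grows k until the k nearest neighbors stop being well approximated by a local affine subspace, as measured by the PCA (SVD) residual; the tolerance and bounds below are arbitrary:

```python
import numpy as np

def adaptive_k(points, i, d=1, k_min=4, k_max=20, tol=0.05):
    """Largest k for which the k nearest neighbors of point i fit a
    d-dimensional affine subspace with relative PCA residual below tol."""
    dists = np.linalg.norm(points - points[i], axis=1)
    order = np.argsort(dists)            # point i itself comes first
    best = k_min
    for k in range(k_min, k_max + 1):
        nbrs = points[order[:k]]
        centered = nbrs - nbrs.mean(axis=0)
        s = np.linalg.svd(centered, compute_uv=False)
        if s[d:].sum() <= tol * s[:d].sum():   # residual small => still "flat"
            best = k
    return best
```

On data lying on a straight line the residual is zero for every k, so the largest allowed neighborhood is chosen; on a tightly curved manifold (e.g. a circle) the residual test forces a smaller neighborhood.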

Ebert-Uphoff I.,Colorado State University | Deng Y.,Georgia Institute of Technology
Journal of Climate | Year: 2012

Causal discovery seeks to recover cause-effect relationships from statistical data using graphical models. One goal of this paper is to provide an accessible introduction to causal discovery methods for climate scientists, with a focus on constraint-based structure learning. Second, in a detailed case study constraint-based structure learning is applied to derive hypotheses of causal relationships between four prominent modes of atmospheric low-frequency variability in boreal winter, including the Western Pacific Oscillation (WPO), Eastern Pacific Oscillation (EPO), Pacific-North America (PNA) pattern, and North Atlantic Oscillation (NAO). The results are shown in the form of static and temporal independence graphs, also known as Bayesian networks. It is found that WPO and EPO are nearly indistinguishable from the cause-effect perspective as strong simultaneous coupling is identified between the two. In addition, changes in the state of EPO (NAO) may cause changes in the state of NAO (PNA) approximately 18 (3-6) days later. These results are not only consistent with previous findings on dynamical processes connecting different low-frequency modes (e.g., interaction between synoptic and low-frequency eddies) but also provide the basis for formulating new hypotheses regarding the time scale and temporal sequencing of dynamical processes responsible for these connections. Last, the authors propose to use structure learning for climate networks, which are currently based primarily on correlation analysis. While correlation-based climate networks focus on similarity between nodes, independence graphs would provide an alternative viewpoint by focusing on information flow in the network. © 2012 American Meteorological Society.
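The core primitive of constraint-based structure learning is a conditional-independence test. A minimal sketch on a synthetic chain X → Y → Z (coefficients invented), using partial correlation via regression residuals as the test statistic: X and Z are marginally correlated, but become independent given Y, so the X–Z edge would be removed:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.standard_normal(n)                   # hypothetical driving mode
y = 0.8 * x + 0.6 * rng.standard_normal(n)   # x -> y
z = 0.8 * y + 0.6 * rng.standard_normal(n)   # y -> z

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

def partial_corr(a, b, given):
    """Correlation of a and b after regressing out `given` (OLS residuals)."""
    g = np.column_stack([np.ones_like(given), given])
    ra = a - g @ np.linalg.lstsq(g, a, rcond=None)[0]
    rb = b - g @ np.linalg.lstsq(g, b, rcond=None)[0]
    return corr(ra, rb)
```

Here corr(x, z) is large (≈ 0.64 in expectation) while partial_corr(x, z, y) is near zero, which is exactly the pattern a constraint-based learner uses to drop the spurious direct edge.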

Hao S.,Georgia Institute of Technology
Applied Physics Letters | Year: 2010

The transport properties of hydrogen in metal additives are very important for understanding the enhanced kinetic processes of (de)hydrogenation in metal hydrides. Based on first-principles calculations, we found that H2 dissociation on TiAl surfaces is very facile and that the dissociated H diffuses much faster in the TiAl lattice than in the host material MgH2. We propose that the "catalytic" effect of the additives Ti and Al is to provide an H transport channel within the Mg and MgH2 host materials, enhancing the reaction kinetics. © 2010 American Institute of Physics.

Luo J.,Shanghai JiaoTong University | Ba S.,University of Connecticut | Zhang H.,Georgia Institute of Technology
MIS Quarterly: Management Information Systems | Year: 2012

Electronic commerce has grown rapidly in recent years. However, surveys of online customers continue to indicate that many remain unsatisfied with their online purchase experiences. Clearly, more research is needed to better understand what affects customers' evaluations of their online experiences. Through a large dataset gathered from two online websites, this study investigates the importance of product uncertainty and retailer visibility in customers' online purchase decisions, as well as the mitigating effects of retailer characteristics. We find that high product uncertainty and low retailer visibility have a negative impact on customer satisfaction. However, a retailer's service quality, website design, and pricing play important roles in mitigating the negative impact of high product uncertainty and low retailer visibility. Specifically, service quality can mitigate the negative impacts of low retailer visibility and high product uncertainty in online markets. Website design, on the other hand, helps to reduce the impact of product uncertainty when experience goods are involved.

Rasher D.B.,Georgia Institute of Technology
Proceedings. Biological sciences / The Royal Society | Year: 2014

Many seaweeds and terrestrial plants induce chemical defences in response to herbivory, but whether they induce chemical defences against competitors (allelopathy) remains poorly understood. We evaluated whether two tropical seaweeds induce allelopathy in response to competition with a reef-building coral. We also assessed the effects of competition on seaweed growth and seaweed chemical defence against herbivores. Following 8 days of competition with the coral Porites cylindrica, the chemically rich seaweed Galaxaura filamentosa induced increased allelochemicals and became nearly twice as damaging to the coral. However, it also experienced significantly reduced growth and increased palatability to herbivores (because of reduced chemical defences). Under the same conditions, the seaweed Sargassum polycystum did not induce allelopathy and did not experience a change in growth or palatability. This is the first demonstration of induced allelopathy in a seaweed, or of competitors reducing seaweed chemical defences against herbivores. Our results suggest that the chemical ecology of coral-seaweed-herbivore interactions can be complex and nuanced, highlighting the need to incorporate greater ecological complexity into the study of chemical defence.

As glacial retreat changes the hydrology of Peru's Rio Santa, water demand is growing, pollution is worsening, and competition for water among economic sectors, political jurisdictions and upstream and downstream water users is intensifying. The vulnerability of highland communities, food producers, and poor urban neighborhoods in the Santa watershed in the face of these changes is magnified by inequities in water governance, giving rise to water conflict. Peru's new water regime defines water as an economic good and seeks to centralize control over water. This article analyzes implications of this regime for ensuring equity and managing conflict. It concludes that Peru's water regime is more likely to address equity issues when faced with concerted citizen action. © 2012 Elsevier Ltd.

Monteiro R.D.C.,Georgia Institute of Technology | Svaiter B.F.,IMPA
SIAM Journal on Optimization | Year: 2013

In this paper, we consider the monotone inclusion problem consisting of the sum of a continuous monotone map and a point-to-set maximal monotone operator with a separable two-block structure and introduce a framework of block-decomposition prox-type algorithms for solving it which allows for each one of the single-block proximal subproblems to be solved in an approximate sense. Moreover, by showing that any method in this framework is also a special instance of the hybrid proximal extragradient (HPE) method introduced by Solodov and Svaiter, we derive corresponding convergence rate results. We also describe some instances of the framework based on specific and inexpensive schemes for solving the single-block proximal subproblems. Finally, we consider some applications of our methodology to establish for the first time (i) the iteration-complexity of an algorithm for finding a zero of the sum of two arbitrary maximal monotone operators and, as a consequence, the ergodic iteration-complexity of the Douglas-Rachford splitting method and (ii) the ergodic iteration-complexity of the classical alternating direction method of multipliers for a class of linearly constrained convex programming problems with proper closed convex objective functions. © 2013 Society for Industrial and Applied Mathematics.
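The alternating direction method of multipliers whose ergodic complexity is analyzed above can be illustrated on a toy problem with a known closed-form answer; this sketch (splitting x = z for the l1-regularized proximal problem, whose solution is soft-thresholding) is not the paper's block-decomposition framework:

```python
import numpy as np

def admm_l1_prox(a, lam, rho=1.0, iters=200):
    """Solve min_x 0.5*||x - a||^2 + lam*||x||_1 by ADMM on the splitting
    x = z; the known closed form is soft-thresholding of a by lam."""
    a = np.asarray(a, dtype=float)
    x = np.zeros_like(a); z = np.zeros_like(a); u = np.zeros_like(a)
    for _ in range(iters):
        x = (a + rho * (z - u)) / (1.0 + rho)                     # quadratic block
        w = x + u
        z = np.sign(w) * np.maximum(np.abs(w) - lam / rho, 0.0)   # l1 block (prox)
        u = u + x - z                                             # dual update
    return z
```

For a = [3, 0.5, -2] and lam = 1 the iterates converge to the soft-threshold [2, 0, -1], matching the analytic solution.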

Black W.Z.,Georgia Institute of Technology
Fire Safety Journal | Year: 2010

A differential computer model specifically designed to quantify smoke movement during a fire in a high-rise structure is described. The basic conservation equations are transformed into a computer code which can be used to determine the paths that smoke will take during a fire. The program is a tool for fire protection engineers to design a smoke management plan with the ultimate goal of improving occupant safety in the event of a fire. The computer code is based on a modified and improved differential smoke control model for the conditions in the floor spaces, stairwells and elevator shafts and it considers a complete set of variables that influence the motion of smoke throughout the building. Program output suggests ways to alter the pressure distribution within the building by using air handling equipment, so that occupants will have smoke-free areas on the floors and inside of the fire escape stairwells. Results for several example cases are provided, and the results are used to illustrate how smoke movement can be managed in order to mitigate dangerous conditions within the building. © 2010 Elsevier Ltd. All rights reserved.
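One of the pressure drivers any building smoke-transport model must capture is stack effect. As a back-of-the-envelope sketch (not from the paper), the standard ideal-gas estimate of the shaft-to-outside pressure difference across a height h uses the coefficient g·P_atm·M_air/R ≈ 3460 Pa·K/m, which assumes standard atmospheric pressure:

```python
def stack_effect_dp(t_out_k, t_in_k, height_m):
    """Approximate stack-effect pressure difference (Pa) from the ideal-gas
    density difference; 3460 ~ g * P_atm * M_air / R in Pa*K/m."""
    return 3460.0 * (1.0 / t_out_k - 1.0 / t_in_k) * height_m
```

For a heated building (293 K inside) on a cold day (263 K outside), 50 m up a shaft this gives roughly 67 Pa, the order of magnitude a smoke-management design must overcome with air-handling equipment.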

Minsker S.,Georgia Institute of Technology
Journal of Machine Learning Research | Year: 2012

We present a new active learning algorithm based on nonparametric estimators of the regression function. Our investigation provides probabilistic bounds for the rates of convergence of the generalization error achievable by the proposed method over a broad class of underlying distributions. We also prove minimax lower bounds which show that the obtained rates are almost tight. © 2012 Stanislav Minsker.
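The general shape of pool-based active learning can be sketched with a simple geometric uncertainty proxy (greedy farthest-point querying); this is a generic stand-in for querying where estimator uncertainty is largest, not the paper's criterion:

```python
import numpy as np

def greedy_queries(pool, n_queries):
    """Repeatedly query the pool point farthest from all labeled points,
    a crude stand-in for querying where estimator uncertainty is largest."""
    labeled = [0]                           # arbitrary seed point
    for _ in range(n_queries):
        dists = np.min(
            np.linalg.norm(pool[:, None, :] - pool[labeled][None, :, :], axis=2),
            axis=1)                         # distance of each point to labeled set
        labeled.append(int(np.argmax(dists)))
    return labeled

rng = np.random.default_rng(1)
pool = rng.uniform(size=(300, 2))
picked = greedy_queries(pool, 20)
```

Each query shrinks the covering radius of the labeled set over the pool, which is the geometric analogue of driving down worst-case uncertainty.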

Matisoff D.C.,Georgia Institute of Technology
Environmental and Resource Economics | Year: 2012

The Chicago Climate Exchange (CCX) and the Carbon Disclosure Project (CDP) are two private voluntary initiatives aimed at reducing carbon emissions and improving carbon management by firms. I sample power plants from firms participating in each of these programs, and match these to plants belonging to non-participating firms, to control for differences between participating and non-participating plants. Using a difference-in-differences model to control for unobservable differences between participants and non-participants, and to control for the trajectory of emissions prior to program participation, I find that the CCX is associated with a decrease in total carbon dioxide emissions for participating plants when non-publicly traded firms are included in the sample. Effects are produced largely by decreases in output. CCX participation is associated with increases in carbon dioxide intensity. The CDP is not associated with a decrease of carbon dioxide emissions or electricity generation, and program participation is associated with an increase in carbon dioxide intensity. I explore these results within the context of voluntary environmental programs to address carbon emissions. © 2012 Springer Science+Business Media B.V.
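The difference-in-differences design used above can be sketched in its canonical 2×2 form: regress the outcome on treatment, period, and their interaction; the interaction coefficient is the DiD estimate. All numbers below are synthetic (an assumed true effect of −2.0), not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000
treat = rng.integers(0, 2, n)           # program participant?
post = rng.integers(0, 2, n)            # after joining?
effect = -2.0                           # assumed true treatment effect
y = (1.0 + 0.5 * treat + 0.3 * post + effect * treat * post
     + 0.2 * rng.standard_normal(n))    # hypothetical outcome, e.g. log emissions

# OLS on [1, treat, post, treat*post]; the interaction term is the DiD estimate
X = np.column_stack([np.ones(n), treat, post, treat * post])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
did_estimate = beta[3]
```

The group and period main effects absorb level differences between participants and non-participants and the common time trend, so the interaction isolates the treatment effect.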

Stacey W.M.,Georgia Institute of Technology
Physics of Plasmas | Year: 2011

A calculation model for X-transport due to the radially outward grad-B and curvature drift of ions trapped poloidally in the null-B X-region just inside the X-point in diverted tokamaks is presented. Calculations are presented for two representative DIII-D [J. Luxon, Nucl. Fusion 42, 614 (2002)] shots which indicate that X-transport effects are significant and should be taken into account in calculations of present and future experiments. © 2011 American Institute of Physics.

At academic and policy levels, universities are finding themselves in heated debate about their role in fostering entrepreneurship and local economic growth. Theories that encourage university involvement in the region perceive a straightforward positive correlation between the level of the university contribution and industrial growth in the region. Accordingly, the adaptation of a successful model will have positive results on local economic growth. Utilizing a case study of the University of Cambridge, this paper contends that the impact on regional economies depends on universities' resources, policies, and organization, as well as on industry's response to the knowledge and innovation generated. © 2011 Regional Studies Association.

Young A.R.,Georgia Institute of Technology
Journal of European Public Policy | Year: 2014

The European Union (EU) is often depicted as a global regulatory power. This contribution contends that this depiction, while not unfounded, is misleading. It aims to clarify under what conditions the EU converts its regulatory capability into influence. Specifically, it seeks to resolve the puzzle of the EU's poor performance in the setting of global food safety standards within the Codex Alimentarius Commission. The argument is deceptively simple. The EU's limited influence is due to it being a preference outlier. In a context where standards can be agreed by voting, the stringency of the EU's regulations, rather than being a source of influence, is a liability. This extreme case demonstrates that the EU's ability to exercise international influence is affected by the constellation of preferences and the distribution of power. This contribution, therefore, contributes to the emerging literature that contends that the EU's international effectiveness can be understood only with explicit reference to the international context within which it is operating. © 2014 Taylor & Francis.

Peikert C.,Georgia Institute of Technology
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2010

At the heart of many recent lattice-based cryptographic schemes is a polynomial-time algorithm that, given a 'high-quality' basis, generates a lattice point according to a Gaussian-like distribution. Unlike most other operations in lattice-based cryptography, however, the known algorithm for this task (due to Gentry, Peikert, and Vaikuntanathan; STOC 2008) is rather inefficient, and is inherently sequential. We present a new Gaussian sampling algorithm for lattices that is efficient and highly parallelizable. We also show that in most cryptographic applications, the algorithm's efficiency comes at almost no cost in asymptotic security. At a high level, our algorithm resembles the "perturbation" heuristic proposed as part of NTRUSign (Hoffstein et al., CT-RSA 2003), though the details are quite different. To our knowledge, this is the first algorithm and rigorous analysis demonstrating the security of a perturbation-like technique. © 2010 Springer-Verlag Berlin Heidelberg.
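The basic building block behind such samplers, a one-dimensional discrete Gaussian over the integers with the GPV-style weight ρ_s(x) = exp(−π x²/s²), can be sketched by simple rejection sampling (this is the naive inefficient baseline, not the paper's parallel algorithm; the tail cut at 6s is an assumption):

```python
import math
import random

def sample_discrete_gaussian(s, c=0.0, tail=6, rng=None):
    """Rejection-sample an integer x with probability proportional to
    exp(-pi * (x - c)**2 / s**2); proposes uniformly on [c - tail*s, c + tail*s]."""
    rng = rng or random
    lo, hi = math.floor(c - tail * s), math.ceil(c + tail * s)
    while True:
        x = rng.randint(lo, hi)
        if rng.random() < math.exp(-math.pi * (x - c) ** 2 / s ** 2):
            return x
```

With this normalization the distribution behaves like a Gaussian of standard deviation s/√(2π), which empirical moments of a large sample should reproduce.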

Dufek J.,Georgia Institute of Technology
Annual Review of Fluid Mechanics | Year: 2016

Pyroclastic density currents are generated in explosive volcanic eruptions when gas and particle mixtures remain denser than the surrounding atmosphere. These mobile currents have a diversity of flow regimes, from energetic granular flows to turbulent suspensions. Given their hazardous nature, much of our understanding of the internal dynamics of these currents has been explored through mathematical and computational models. This review discusses the anatomy of these currents and their phenomenology and places these observations in the context of forces driving the currents. All aspects of the current dynamics are influenced by multiphase interactions, and the study of these currents offers insight into a high-energy end-member of multiphase flow. At low concentration, momentum transfer is dominated by particle-gas drag. At higher concentration, particle collisions, friction, and gas pore pressure act to redistribute momentum. This review examines end-member theoretical models for dilute and concentrated flow and then considers insight gained from multiphase simulations of pyroclastic density currents. © Copyright 2016 by Annual Reviews. All rights reserved.
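In the dilute end-member, momentum exchange is dominated by particle-gas drag; the simplest caricature is a single particle relaxing to terminal velocity under linear (Stokes-like) drag with an assumed response time τ (both values below are invented for illustration):

```python
g = 9.81        # gravity, m/s^2
tau = 0.5       # assumed particle response (relaxation) time, s
dt, t_end = 1e-3, 5.0

# dv/dt = g - v / tau  ->  v relaxes to the terminal velocity g * tau
v = 0.0
for _ in range(int(t_end / dt)):
    v += dt * (g - v / tau)
```

After many relaxation times the particle settles at v ≈ g·τ; the ratio of τ to the flow time scale (the Stokes number) controls how tightly particles couple to the gas.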

Smith G.S.,Georgia Institute of Technology
European Journal of Physics | Year: 2014

The skin effect in a round wire is an important electromagnetic phenomenon with practical consequences; however, it is usually not presented in any detail at the undergraduate level but reserved for graduate study. The purpose of this paper is to remedy this situation by providing a simple derivation for the skin effect in a round wire that only requires background usually familiar to these students: Maxwell's equations in integral form, integral calculus (specifically integration of a power) and some elementary properties of series. Graphical results are used to clearly show the current concentrating near the surface as the frequency increases and the accompanying increase in the resistance and decrease in the inductance of the wire. A brief review of the history of the subject shows that several of the scientists familiar to students made contributions to our understanding of the skin effect in a round wire; they include J C Maxwell, Lord Rayleigh, Lord Kelvin, O Heaviside and J J Thomson. The validity of the theory is demonstrated by comparing results from the theory with resistances and inductances measured by some of the early pioneers of wireless communication. © 2014 IOP Publishing Ltd.
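The closed-form result the paper's series derivation leads to can be evaluated directly: the AC/DC resistance ratio of a round wire in terms of Kelvin functions, using the identity ber(q) + i·bei(q) = J0(q·e^{3iπ/4}) with q = √2·(radius/skin depth). A sketch with a hand-rolled Bessel series (this is the standard textbook formula, not code from the paper):

```python
import math

def _bessel_j(n, z, terms=40):
    """Power series for the Bessel function J_n at complex argument z."""
    total = 0j
    for k in range(terms):
        total += (-1) ** k * (z / 2) ** (2 * k + n) / (
            math.factorial(k) * math.factorial(k + n))
    return total

def ac_dc_resistance_ratio(q):
    """R_ac/R_dc of a round wire; q = sqrt(2) * radius / skin_depth.
    Uses ber(q) + i*bei(q) = J_0(q * e^{3i*pi/4}) (Kelvin functions)."""
    e34 = complex(math.cos(3 * math.pi / 4), math.sin(3 * math.pi / 4))
    z = q * e34
    f = _bessel_j(0, z)             # ber(q) + i bei(q)
    fp = -e34 * _bessel_j(1, z)     # ber'(q) + i bei'(q)
    ber, bei = f.real, f.imag
    berp, beip = fp.real, fp.imag
    return (q / 2) * (ber * beip - bei * berp) / (berp ** 2 + beip ** 2)
```

At low frequency (q → 0) the ratio tends to 1, and at high frequency it grows roughly like q/(2√2), reproducing the current crowding toward the surface described above.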

Wang Z.L.,Georgia Institute of Technology
Journal of Physical Chemistry Letters | Year: 2010

Owing to the polarization of ions in a crystal that has noncentral symmetry, a piezoelectric potential (piezopotential) is created in the material by applying a stress. The creation of piezopotential together with the presence of Schottky contacts are the fundamental physics responsible for a few important nanotechnologies. The nanogenerator is based on the piezopotential-driven transient flow of electrons in the external load. On the basis of nanomaterials in the wurtzite semiconductors, such as ZnO and GaN, electronics fabricated by using a piezopotential as a gate voltage are called piezotronics, with applications in strain/force/pressure-triggered/controlled electronic devices, sensors, and logic gates. The piezophototronic effect is a result of three-way coupling among piezoelectricity, photonic excitation, and semiconductor transport, which allows tuning and controlling of electro-optical processes by a strain-induced piezopotential. © 2010 American Chemical Society.

Stewart F.J.,Georgia Institute of Technology | Ulloa O.,University of Concepcion | Delong E.F.,Massachusetts Institute of Technology
Environmental Microbiology | Year: 2012

Simultaneous characterization of taxonomic composition, metabolic gene content and gene expression in marine oxygen minimum zones (OMZs) has potential to broaden perspectives on the microbial and biogeochemical dynamics in these environments. Here, we present a metatranscriptomic survey of microbial community metabolism in the Eastern Tropical South Pacific OMZ off northern Chile. Community RNA was sampled in late austral autumn from four depths (50, 85, 110, and 200 m) extending across the oxycline and into the upper OMZ. Shotgun pyrosequencing of cDNA yielded 180,000 to 550,000 transcript sequences per depth. Based on functional gene representation, transcriptome samples clustered apart from corresponding metagenome samples from the same depth, highlighting the discrepancies between metabolic potential and actual transcription. BLAST-based characterizations of non-ribosomal RNA sequences revealed a dominance of genes involved with both oxidative (nitrification) and reductive (anammox, denitrification) components of the marine nitrogen cycle. Using annotations of protein-coding genes as proxies for taxonomic affiliation, we observed depth-specific changes in gene expression by key functional taxonomic groups. Notably, transcripts most closely matching the genome of the ammonia-oxidizing archaeon Nitrosopumilus maritimus dominated the transcriptome in the upper three depths, representing one in five protein-coding transcripts at 85 m. In contrast, transcripts matching the anammox bacterium Kuenenia stuttgartiensis dominated at the core of the OMZ (200 m; 1 in 12 protein-coding transcripts). The distribution of N. maritimus-like transcripts paralleled that of transcripts matching ammonia monooxygenase genes, which, despite being represented by both bacterial and archaeal sequences in the community DNA, were dominated (>99%) by archaeal sequences in the RNA, suggesting a substantial role for archaeal nitrification in the upper OMZ.
These data, as well as those describing other key OMZ metabolic processes (e.g. sulfur oxidation), highlight gene-specific expression patterns in the context of the entire community transcriptome, as well as identify key functional groups for taxon-specific genomic profiling. © 2011 Society for Applied Microbiology and Blackwell Publishing Ltd.

Taboada I.,Georgia Institute of Technology
Physical Review D - Particles, Fields, Gravitation and Cosmology | Year: 2010

It has been proposed that the gamma-ray burst-supernova connection may manifest itself in a significant fraction of core collapse supernovae possessing mildly relativistic jets with wide opening angles that do not break out of the stellar envelope. Neutrinos would provide proof of the existence of these jets. In the present paper we calculate the event rate of 100 GeV neutrino-induced cascades in km³ detectors. We also calculate the event rate for 10 GeV neutrinos of all flavors with the DeepCore low energy extension of IceCube. The added event rate significantly improves the ability of km³ detectors to search for these gamma-ray dark bursts. For a core collapse supernova at 10 Mpc we find ∼4 events expected in DeepCore and ∼6 neutrino-induced cascades in IceCube/KM3Net. Observations at 10 GeV are mostly sensitive to the pion component of the neutrino production in the choked jet, while the 100 GeV signal depends on the kaon component. Finally we discuss extensions of the ongoing optical follow-up programs by IceCube and Antares to include neutrinos of all flavors at 10 GeV and neutrino-induced cascades at 100 GeV energies. © 2010 The American Physical Society.

Xu S.,Georgia Institute of Technology
Nature communications | Year: 2010

Harvesting energy from irregular/random mechanical actions in variable and uncontrollable environments is an effective approach for powering wireless mobile electronics to meet a wide range of applications in our daily life. Piezoelectric nanowires are robust and can be stimulated by tiny physical motions/disturbances over a range of frequencies. Here, we demonstrate the first chemical epitaxial growth of PbZr(x)Ti(1-x)O(3) (PZT) nanowire arrays at 230 °C and their application as high-output energy converters. The nanogenerators fabricated using a single array of PZT nanowires produce a peak output voltage of ~0.7 V, current density of 4 μA cm⁻² and an average power density of 2.8 mW cm⁻³. The alternating current output of the nanogenerator is rectified, and the harvested energy is stored and later used to light up a commercial laser diode. This work demonstrates the feasibility of using nanogenerators for powering mobile and even personal microelectronics.

Appleton A.L.,Georgia Institute of Technology
Nature communications | Year: 2010

Large acenes, particularly pentacenes, are important in organic electronics applications such as thin-film transistors. Derivatives where CH units are substituted by sp(2) nitrogen atoms are rare but of potential interest as charge-transport materials. In this article, we show that pyrazine units embedded in tetracenes and pentacenes allow for additional electronegative substituents to induce unexpected redshifts in the optical transitions of diazaacenes. The presence of the pyrazine group is critical for this effect. The decrease in transition energy in the halogenated diazaacenes is due to a disproportionate lack of stabilization of the HOMO on halogen substitution. The effect results from the unsymmetrical distribution of the HOMO, which shows decreased orbital coefficients on the ring bearing chlorine substituents. The more strongly electron-accepting cyano group is predicted to shift the transitions of diazaacenes even further to the red. Electronegative substitution impacts the electronic properties of diazaacenes to a much greater degree than expected.

Cortez M.H.,Georgia Institute of Technology
Journal of Mathematical Biology | Year: 2013

Pathogen evolution towards the largest basic reproductive number, R0, has been observed in many theoretical models, but this conclusion does not hold universally. Previous studies of host-pathogen systems have defined general conditions under which R0 maximization occurs in terms of R0 itself. However, it is unclear what constraints these conditions impose on the functional forms of pathogen-related processes (e.g., transmission, recovery, or mortality) and how those constraints relate to the characteristics of natural systems. Here we focus on well-mixed SIR-type host-pathogen systems and, via a synthesis of results from the literature, we present a set of sufficient mathematical conditions under which evolution maximizes R0. Our conditions are in terms of the functional responses of the system and yield three general biological constraints on when R0 maximization will occur. First, there are no genotype-by-environment interactions. Second, the pathogen utilizes a single transmission pathway (i.e., either horizontal, vertical, or vector transmission). Third, when mortality is density dependent: (i) there is a single infectious class that individuals cannot recover from, (ii) mortality in the infectious class is entirely density dependent, and (iii) the rates of recovery, infection progression, and mortality in the exposed classes are independent of the pathogen trait. We discuss how this approach identifies the biological mechanisms that increase the dimension of the environmental feedback and prevent R0 maximization. © 2012 Springer-Verlag Berlin Heidelberg.
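For readers unfamiliar with the quantity being maximized: in the simplest well-mixed SIR model, R0 has a closed form, and a mutant strain invades a resident population exactly when its R0 is larger. A minimal sketch of that idea follows; the parameter values are purely illustrative, and the paper's analysis covers far more general functional forms than this constant-rate special case.

```python
# Minimal well-mixed SIR sketch (illustrative only): with transmission
# rate beta, recovery rate gamma, and per-capita mortality mu, the basic
# reproductive number at a susceptible density s_star is
#     R0 = beta * s_star / (gamma + mu).
# Under the sufficient conditions discussed in the abstract, evolution
# favors the strain with the larger R0.

def r0(beta, gamma, mu, s_star=1.0):
    """Basic reproductive number of an SIR-type pathogen."""
    return beta * s_star / (gamma + mu)

resident = r0(beta=0.5, gamma=0.10, mu=0.1)  # R0 = 2.5
mutant = r0(beta=0.5, gamma=0.05, mu=0.1)    # slower recovery, larger R0
print(mutant > resident)  # True: the mutant invades the resident
```

The interesting cases in the paper are precisely those where this simple picture breaks down, e.g. when rates depend on both genotype and environment.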

Sauermann H.,Georgia Institute of Technology | Roach M.,University of North Carolina at Chapel Hill
Research Policy | Year: 2013

Web surveys have become increasingly central to innovation research but often suffer from low response rates. Based on a cost-benefit framework and the explicit consideration of heterogeneity across respondents, we consider the effects of key contact design features such as personalization, incentives, and the exact timing of survey contacts on web survey response rates. We also consider the benefits of a "dynamic strategy", i.e., changing features of survey contacts over the survey life cycle. We explore these effects experimentally using a career survey sent to over 24,000 junior scientists and engineers. The results show that personalization increases the odds of responding by as much as 48%, while lottery incentives with a high payoff and a low chance of winning increase the odds of responding by 30%. Furthermore, changing the wording of reminders over the survey life cycle increases the odds of a response by over 30%, while changes in contact timing (day of the week or hour of the day) did not have significant benefits. Improvements in response rates did not come at the expense of lower data quality. Our results provide novel insights into web survey response behavior and suggest useful tools for innovation researchers seeking to increase survey participation. © 2012 Elsevier B.V.
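Note that the effects above are stated as odds ratios, not percentage-point changes in the response rate. A short sketch of the conversion (the 20% baseline below is a hypothetical figure, not taken from the paper):

```python
def odds(p):
    """Convert a response probability to odds, p / (1 - p)."""
    return p / (1.0 - p)

def apply_odds_ratio(p, ratio):
    """Response probability after multiplying the odds by `ratio`."""
    o = odds(p) * ratio
    return o / (1.0 + o)

# Hypothetical baseline: a 20% response rate, odds of 0.25.
baseline = 0.20
# A 48% increase in the odds of responding (the personalization effect)
with_personalization = apply_odds_ratio(baseline, 1.48)
print(round(with_personalization, 3))  # ~0.27, i.e. roughly +7 points
```

The example shows why a 48% increase in odds is a smaller lift in the raw response rate than the headline number might suggest.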

Trogadas P.,TU Berlin | Fuller T.F.,Georgia Institute of Technology | Strasser P.,TU Berlin
Carbon | Year: 2014

Carbon has unique characteristics that make it an ideal material for use in a wide variety of electrochemical applications ranging from metal refining to electrocatalysis and fuel cells. In polymer electrolyte fuel cells (PEFCs), carbon is used as a gas diffusion layer, electrocatalyst support and oxygen reduction reaction (ORR) electrocatalyst. When used as electrocatalyst support, amorphous carbonaceous materials suffer from enhanced oxidation rates at high potentials over time. This drawback has prompted an extensive effort to improve the properties of amorphous carbon and to identify alternate carbon-based materials to replace carbon blacks. Alternate support materials are classified into carbon nanotubes and fibers, mesoporous carbon, multi-layer graphene (undoped and doped with metal nanoparticles) and reduced graphene oxide. A comparative review of all these supports is provided. Work on catalytically active carbon hybrids is focused on the development of non-precious metal electrocatalysts that will significantly reduce the cost without sacrificing catalytic activity. Of the newer electrocatalysts, nitrogen/metal-functionalized carbons and composites are emerging as possible contenders for commercial PEFCs. Nitrogen-doped carbon hybrids with transition metals and their polymer composites exhibit high ORR activity and selectivity and these catalytic properties are presented in detail in this review. © 2014 Elsevier Ltd. All rights reserved.

Korzdorfer T.,University of Potsdam | Bredas J.-L.,Georgia Institute of Technology | Bredas J.-L.,King Abdullah University of Science and Technology
Accounts of Chemical Research | Year: 2014

Density functional theory (DFT) and its time-dependent extension (TD-DFT) are powerful tools enabling the theoretical prediction of the ground- and excited-state properties of organic electronic materials with reasonable accuracy at affordable computational costs. Due to their excellent accuracy-to-numerical-costs ratio, semilocal and global hybrid functionals such as B3LYP have become the workhorse for geometry optimizations and the prediction of vibrational spectra in modern theoretical organic chemistry. Despite the overwhelming success of these out-of-the-box functionals for such applications, the computational treatment of electronic and structural properties that are of particular interest in organic electronic materials sometimes reveals severe and qualitative failures of such functionals. Important examples include the overestimation of conjugation, torsional barriers, and electronic coupling as well as the underestimation of bond-length alternations or excited-state energies in low-band-gap polymers. In this Account, we highlight how these failures can be traced back to the delocalization error inherent to semilocal and global hybrid functionals, which leads to the spurious delocalization of electron densities and an overestimation of conjugation. The delocalization error for systems and functionals of interest can be quantified by allowing for fractional occupation of the highest occupied molecular orbital. It can be minimized by using long-range corrected hybrid functionals and a nonempirical tuning procedure for the range-separation parameter.We then review the benefits and drawbacks of using tuned long-range corrected hybrid functionals for the description of the ground and excited states of π-conjugated systems.
In particular, we show that this approach provides robust and efficient means of characterizing the electronic couplings in organic mixed-valence systems, for the calculation of accurate torsional barriers at the polymer limit, and for the reliable prediction of the optical absorption spectrum of low-band-gap polymers. We also explain why the use of standard, out-of-the-box range-separation parameters is not recommended for the DFT and/or TD-DFT description of the ground and excited states of extended, π-conjugated systems. Finally, we highlight a severe drawback of tuned range-separated hybrid functionals by discussing the example of the calculation of bond-length alternation in polyacetylene, which leads us to point out the challenges for future developments in this field. © 2014 American Chemical Society.

Huang X.,Georgia Institute of Technology
Methods in molecular biology (Clifton, N.J.) | Year: 2010

This chapter describes the application of gold nanorods in biomedical imaging and photothermal therapy. The photothermal properties of gold nanorods are summarized and the synthesis as well as antibody conjugation of gold nanorods is outlined. Biomedical applications of gold nanorods include cancer imaging using their enhanced scattering property and photothermal therapy using their enhanced nonradiative photothermal property.

Curry J.A.,Georgia Institute of Technology
Climate Dynamics | Year: 2015

Energy budget estimates of equilibrium climate sensitivity (ECS) and transient climate response (TCR) are derived using the comprehensive 1750–2011 time series and the uncertainty ranges for forcing components provided in the Intergovernmental Panel on Climate Change Fifth Assessment Working Group I Report, along with its estimates of heat accumulation in the climate system. The resulting estimates are less dependent on global climate models and allow more realistically for forcing uncertainties than similar estimates based on forcings diagnosed from simulations by such models. Base and final periods are selected that have well matched volcanic activity and influence from internal variability. Using 1859–1882 for the base period and 1995–2011 for the final period, thus avoiding major volcanic activity, median estimates are derived for ECS of 1.64 K and for TCR of 1.33 K. ECS 17–83 and 5–95 % uncertainty ranges are 1.25–2.45 and 1.05–4.05 K; the corresponding TCR ranges are 1.05–1.80 and 0.90–2.50 K. Results using alternative well-matched base and final periods provide similar best estimates but give wider uncertainty ranges, principally reflecting smaller changes in average forcing. Uncertainty in aerosol forcing is the dominant contribution to the ECS and TCR uncertainty ranges. © 2014, Springer-Verlag Berlin Heidelberg.
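The energy-budget approach rests on two standard formulas that the abstract does not spell out: ECS = F2x * dT / (dF - dQ) and TCR = F2x * dT / dF, where dT, dF, and dQ are the changes in global temperature, forcing, and heat uptake between the base and final periods, and F2x is the forcing from doubled CO2. The sketch below uses hypothetical inputs chosen only to land near the abstract's median estimates; they are not the paper's actual data.

```python
F2X = 3.71  # forcing from doubled CO2, W m^-2 (commonly used value)

def ecs(d_temp, d_forcing, d_heat_uptake):
    """Equilibrium climate sensitivity from the energy-budget formula."""
    return F2X * d_temp / (d_forcing - d_heat_uptake)

def tcr(d_temp, d_forcing):
    """Transient climate response (ocean heat uptake term omitted)."""
    return F2X * d_temp / d_forcing

# Hypothetical changes between base (1859-1882) and final (1995-2011) periods:
d_temp = 0.71         # K
d_forcing = 1.98      # W m^-2
d_heat_uptake = 0.36  # W m^-2

print(round(ecs(d_temp, d_forcing, d_heat_uptake), 2))  # ~1.63 K
print(round(tcr(d_temp, d_forcing), 2))                 # ~1.33 K
```

Because dQ appears only in the ECS denominator, ECS always exceeds TCR, and uncertainty in the aerosol contribution to dF propagates directly into both estimates.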

Madiman M.,Yale University | Tetali P.,Georgia Institute of Technology
IEEE Transactions on Information Theory | Year: 2010

Upper and lower bounds are obtained for the joint entropy of a collection of random variables in terms of an arbitrary collection of subset joint entropies. These inequalities generalize Shannon's chain rule for entropy as well as inequalities of Han, Fujishige, and Shearer. A duality between the upper and lower bounds for joint entropy is developed. All of these results are shown to be special cases of general, new results for submodular functions; thus, the inequalities presented constitute a richly structured class of Shannon-type inequalities. The new inequalities are applied to obtain new results in combinatorics, such as bounds on the number of independent sets in an arbitrary graph and the number of zero-error source-channel codes, as well as determinantal inequalities in matrix theory. A general inequality for relative entropies is also developed. Finally, connections of the results to the literature in economics, computer science, and physics are explored. © 2010 IEEE.
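The prototype of the subset-entropy bounds generalized here is Shearer's lemma: if every index is covered at least t times by a collection of subsets, the joint entropy is at most 1/t times the sum of the subset entropies. A small numerical sketch (the three-variable distribution below is randomly generated for illustration):

```python
import itertools
import math
import random

def entropy(p):
    """Shannon entropy (bits) of a distribution given as {outcome: prob}."""
    return -sum(q * math.log2(q) for q in p.values() if q > 0)

def marginal(joint, idx):
    """Marginal distribution of the coordinates listed in `idx`."""
    m = {}
    for outcome, q in joint.items():
        key = tuple(outcome[i] for i in idx)
        m[key] = m.get(key, 0.0) + q
    return m

# A random joint distribution of three binary variables (X0, X1, X2).
random.seed(0)
w = [random.random() for _ in range(8)]
total = sum(w)
joint = {o: wi / total
         for o, wi in zip(itertools.product([0, 1], repeat=3), w)}

# Shearer's lemma with the cover {0,1}, {1,2}, {0,2}: each index is
# covered t = 2 times, so H(X0,X1,X2) <= (1/2) * sum of pairwise entropies.
h_full = entropy(joint)
cover = [(0, 1), (1, 2), (0, 2)]
bound = sum(entropy(marginal(joint, s)) for s in cover) / 2
print(h_full <= bound + 1e-12)  # True
```

The paper's contribution is to place such inequalities, and matching lower bounds, in a common submodularity framework rather than verifying them case by case.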

Atasu A.,Georgia Institute of Technology | Van Wassenhove L.N.,Social Innovation Center
Production and Operations Management | Year: 2012

A growing stream of environmental legislation enforces collection and recycling of used electrical and electronics products. Based on our experiences with producers coping with e-waste legislation, we find that there is a strong need for research on the implications of such legislation from an operations perspective. In particular, as a discipline at the interface of systems design and economic modeling, operations-focused research can be extremely useful in identifying appropriate e-waste take-back implementations for different business environments and how producers should react to them. © 2011 Production and Operations Management Society.

Yavari A.,Georgia Institute of Technology | Goriely A.,University of Oxford
Archive for Rational Mechanics and Analysis | Year: 2012

We present a geometric theory of nonlinear solids with distributed dislocations. In this theory the material manifold-where the body is stress free-is a Weitzenböck manifold, that is, a manifold with a flat affine connection with torsion but vanishing non-metricity. Torsion of the material manifold is identified with the dislocation density tensor of nonlinear dislocation mechanics. Using Cartan's moving frames we construct the material manifold for several examples of bodies with distributed dislocations. We also present non-trivial examples of zero-stress dislocation distributions. More importantly, in this geometric framework we are able to calculate the residual stress fields, assuming that the nonlinear elastic body is incompressible. We derive the governing equations of nonlinear dislocation mechanics covariantly using balance of energy and its covariance. © 2012 Springer-Verlag.

Eley S.,University of Illinois at Urbana - Champaign | Gopalakrishnan S.,University of Illinois at Urbana - Champaign | Goldbart P.M.,Georgia Institute of Technology | Mason N.,University of Illinois at Urbana - Champaign
Nature Physics | Year: 2012

Systems of superconducting islands placed on normal metal films offer tunable realizations of two-dimensional (2D) superconductivity [1,2]; they can thus elucidate open questions regarding the nature of 2D superconductors and competing states. In particular, island systems have been predicted to exhibit zero-temperature metallic states [3-5]. Although evidence exists for such metallic states in some 2D systems [6,7], their character is not well understood: the conventional theory of metals cannot explain them [8], and their properties are difficult to tune [7,9]. Here, we characterize the superconducting transitions in mesoscopic island-array systems as a function of island thickness and spacing. We observe two transitions in the progression to superconductivity. Both transition temperatures exhibit unexpectedly strong depression for widely spaced islands, consistent with the system approaching zero-temperature (T=0) metallic states. In particular, the first transition temperature seems to linearly approach T=0 for finite island spacing. The nature of the transitions is explained using a phenomenological model involving the stabilization of superconductivity on each island via a coupling to its neighbours.

Dufek J.,Georgia Institute of Technology | Manga M.,University of California at Berkeley | Patel A.,University of California at Berkeley
Nature Geoscience | Year: 2012

Explosive volcanic eruptions are among the most energetic events on Earth. The hazard to surrounding populations and aviation is controlled by the concentration and size of particles that exit the volcanic vent. The size distribution of volcanic particles is thought to be determined by the initial fragmentation process [1-4], where bubbly magmatic mixtures transition to gas-particle flows. Here we show that collisional processes in the volcanic conduit after initial fragmentation can change the grain-size distribution of particles that leave the volcanic vent. We use experimental analysis of the breakup of natural volcanic rocks during collisions, as well as numerical simulations, to estimate the probability that particles pass through the volcanic conduit and survive intact. We find that breakup in the conduit is strongly controlled by the initial particle size and the location of the initial fragmentation: particles that measure more than 1 cm in diameter and those fragmented at great depths break up most frequently. Abundant large pumice clasts in volcanic deposits therefore imply shallow fragmentation that may be transient. In contrast, fragmentation events at depth will lead to enhanced ash production and greater atmospheric loading of long-residence, fine-grained ash. © 2012 Macmillan Publishers Limited. All rights reserved.

An investigation of the effect of ion orbit loss of thermal ions and the compensating return ion current directly on the radial ion flux flowing in the plasma, and thereby indirectly on the toroidal and poloidal rotation velocity profiles, the radial electric field, density, and temperature profiles, and the interpretation of diffusive and non-diffusive transport coefficients in the plasma edge, is described. Illustrative calculations for a high-confinement H-mode DIII-D [J. Luxon, Nucl. Fusion 42, 614 (2002)] plasma are presented and compared with experimental results. Taking into account ion orbit loss of thermal ions and the compensating return ion current is found to have a significant effect on the structure of the radial profiles of these quantities in the edge plasma, indicating the necessity of taking ion orbit loss effects into account in interpreting or predicting these quantities. © 2013 AIP Publishing LLC.

Gaalema D.E.,Georgia Institute of Technology
Journal of Comparative Psychology | Year: 2011

Reptile learning has been studied with a variety of methods and has included numerous species. However, research on learning in lizards has generally focused on spatial memory and has been studied in only a few species. This study explored visual discrimination in two rough-necked monitors (Varanus rudicollis). Subjects were trained to discriminate between black and white stimuli. Both subjects learned an initial discrimination task as well as two reversals, with the second reversal requiring fewer sessions than the first. This reduction in trials required for reversal acquisition provides evidence for behavioral flexibility in the monitor lizard genus. © 2011 American Psychological Association.

Bloch M.R.,Georgia Institute of Technology | Bloch M.R.,French National Center for Scientific Research | Laneman J.N.,University of Notre Dame
IEEE Transactions on Information Theory | Year: 2013

We analyze physical-layer security based on the premise that the coding mechanism for secrecy over noisy channels is tied to the notion of channel resolvability. Instead of considering capacity-based constructions, which associate to each message a subcode that operates just below the capacity of the eavesdropper's channel, we consider channel-resolvability-based constructions, which associate to each message a subcode that operates just above the resolvability of the eavesdropper's channel. Building upon the work of Csiszár and Hayashi, we provide further evidence that channel resolvability is a powerful and versatile coding mechanism for secrecy by developing results that hold for strong secrecy metrics and arbitrary channels. Specifically, we show that at least for symmetric wiretap channels, random capacity-based constructions fail to achieve the strong secrecy capacity, while channel-resolvability-based constructions achieve it. We then leverage channel resolvability to establish the secrecy-capacity region of arbitrary broadcast channels with confidential messages and a cost constraint for strong secrecy metrics. Finally, we specialize our results to study the secrecy capacity of wireless channels with perfect channel state information (CSI), mixed channels, and compound channels with receiver CSI, as well as the secret-key capacity of source models for secret-key agreement. By tying secrecy to channel resolvability, we obtain achievable rates for strong secrecy metrics with simple proofs. © 1963-2012 IEEE.

Chakrapani V.,University of Notre Dame | Chakrapani V.,Georgia Institute of Technology | Baker D.,University of Notre Dame | Kamat P.V.,University of Notre Dame
Journal of the American Chemical Society | Year: 2011

The presence of the sulfide/polysulfide redox couple is crucial in achieving stability of metal chalcogenide (e.g., CdS and CdSe)-based quantum dot-sensitized solar cells (QDSC). However, the interfacial charge transfer processes play a pivotal role in dictating the net photoconversion efficiency. We present here kinetics of hole transfer, characterization of the intermediates involved in the hole oxidation of sulfide ion, and the back electron transfer between sulfide radical and electrons injected into TiO2 nanoparticles. The kinetic rate constant (10^7-10^9 s^-1) for the hole transfer obtained from the emission lifetime measurements suggests slow hole scavenging from CdSe by S^2- is one of the limiting factors in attaining high overall efficiency. The presence of the oxidized couple, by addition of S or Se to the electrolyte, increases the photocurrent, but it also enhances the rate of back electron transfer. © 2011 American Chemical Society.

Fedele F.,Georgia Institute of Technology
Journal of Physical Oceanography | Year: 2012

This study develops a stochastic approach to model short-crested stormy seas as random fields both in space and time. Defining a space-time extreme as the largest surface displacement over a given sea surface area during a storm, associated statistical properties are derived by means of the theory of Euler characteristics of random excursion sets in combination with the Equivalent Power Storm model. As a result, an analytical solution for the return period of space-time extremes is given. Subsequently, the relative validity of the new model and its predictions are explored by analyzing wave data retrieved from NOAA buoy 42003, located in the eastern part of the Gulf of Mexico, offshore Naples, Florida. The results indicate that, as the storm area increases under short-crested wave conditions, space-time extremes noticeably exceed the significant wave height of the most probable sea state in which they likely occur and that they also do not violate Stokes-Miche-type upper limits on wave heights. © 2012 American Meteorological Society.

Kroes J.R.,University of Rhode Island | Ghosh S.,Georgia Institute of Technology
Journal of Operations Management | Year: 2010

The growth of outsourcing has led outsourcing strategies to become an increasingly important component of firm success (Gottfredson et al., 2005). While the purported goal of outsourcing in supply chains is to derive a competitive advantage, it is not clear whether the outsourcing decisions of firms are always strategically aligned with their overall competitive strategy. In this paper we evaluate the degree of congruence (fit or alignment) between a firm's outsourcing drivers and its competitive priorities and assess the impact of congruence on both supply chain performance and business performance, using empirical data collected from manufacturing business units operating in the United States. We find outsourcing congruence across all five competitive priorities to be positively and significantly related to supply chain performance. We also find the level of supply chain performance in a firm to be positively and significantly associated with the firm's business performance. © 2009 Elsevier B.V. All rights reserved.

Melodia T.,State University of New York at Buffalo | Akyildiz I.F.,Georgia Institute of Technology
IEEE Journal on Selected Areas in Communications | Year: 2010

Wireless Multimedia Sensor Networks (WMSNs) are distributed systems of wirelessly networked devices that allow retrieving video and audio streams, still images, and scalar sensor data. WMSNs will be a crucial component of mission-critical networks to protect the operation of strategic national infrastructure, provide support to counteract emergencies and threats, and enhance infrastructure for tactical military operations. To enable these applications, WMSNs require the sensor network paradigm to be re-thought in view of the need for mechanisms to deliver multimedia content with a pre-defined level of quality of service (QoS). In this paper, a new cross-layer communication architecture based on the time-hopping impulse radio ultra wide band technology is described, whose objective is to reliably and flexibly deliver QoS to heterogeneous applications in WMSNs, by leveraging and controlling interactions among different layers of the protocol stack according to application requirements. Simulations show that the proposed system achieves the performance objectives of WMSNs without sacrificing the modularity of the overall design. © 2010 IEEE.

Jacobs B.W.,Michigan State University | Subramanian R.,Georgia Institute of Technology
Production and Operations Management | Year: 2012

Extended producer responsibility (EPR) programs typically hold the producer - a single actor defined by the regulator - responsible for the environmental impacts of end-of-life products. This is despite emphasis on the need to involve all actors in the supply chain in order to best achieve the aims of EPR. In this paper, we examine the economic and environmental implications of product recovery mandates and shared responsibility within a supply chain. We use a two-echelon model consisting of a supplier and a manufacturer to determine the impacts of product collection and recycling mandates on the incentive to recycle and resulting profits in the integrated and decentralized supply chains. For the decentralized supply chain, we demonstrate how the sharing of responsibility for product recovery between the echelons can improve total supply chain profit and suggest a contract menu that can Pareto-improve profits. To examine both the economic and environmental performance associated with responsibility sharing, we propose a social welfare construct that includes supply chain profit, consumer surplus, and the externalities associated with virgin material extraction, product consumption, and disposal of nonrecycled products. Using a numerical example, we discuss how responsibility sharing may or may not improve social welfare. The results of this paper are of value to firms either anticipating or subject to product recovery legislation, and to social planners that attempt to balance economic and environmental impacts and ensure fairness of such legislation. © 2011 Production and Operations Management Society.

Voit E.O.,Georgia Institute of Technology
Biochimica et Biophysica Acta - Proteins and Proteomics | Year: 2014

Probably the most prominent expectation associated with systems biology is the computational support of personalized medicine and predictive health. At least some of this anticipated support is envisioned in the form of disease simulators that will take hundreds of personalized biomarker data as input and allow the physician to explore and optimize possible treatment regimens on a computer before the best treatment is applied to the actual patient in a custom-tailored manner. The key prerequisites for such simulators are mathematical and computational models that not only manage the input data and implement the general physiological and pathological principles of organ systems but also integrate the myriads of details that affect their functionality to a significant degree. Obviously, the construction of such models is an overwhelming task that suggests the long-term development of hierarchical or telescopic approaches representing the physiology of organs and their diseases, first coarsely and over time with increased granularity. This article illustrates the rudiments of such a strategy in the context of cystic fibrosis (CF) of the lung. The starting point is a very simplistic, generic model of inflammation, which has been shown to capture the principles of infection, trauma, and sepsis surprisingly well. The adaptation of this model to CF contains as variables healthy and damaged cells, as well as different classes of interacting cytokines and infectious microbes that are affected by mucus formation, which is the hallmark symptom of the disease (Perez-Vilar and Boucher, 2004) [1]. The simple model represents the overall dynamics of the disease progression, including so-called acute pulmonary exacerbations, quite well, but of course does not provide much detail regarding the specific processes underlying the disease. 
In order to launch the next level of modeling with finer granularity, it is desirable to determine which components of the coarse model contribute most to the disease dynamics. The article introduces for this purpose the concept of module gains or ModGains, which quantify the sensitivity of key disease variables in the higher-level system. In reality, these variables represent complex modules at the next level of granularity, and the computation of ModGains therefore allows an importance ranking of variables that should be replaced with more detailed models. The "hot-swapping" of such detailed modules for former variables is greatly facilitated by the architecture and implementation of the overarching, coarse model structure, which is here formulated with methods of biochemical systems theory (BST). This article is part of a Special Issue entitled: Computational Proteomics, Systems Biology & Clinical Implications. Guest Editor: Yudong Cai. © 2013 Elsevier B.V.
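The ModGains concept introduced here builds on the standard notion of a logarithmic gain in biochemical systems theory, d(ln X)/d(ln p), which ranks how strongly a steady-state output responds to a relative change in an input. A toy numerical illustration of that underlying idea (the one-variable model below is invented for the example and is not the CF model):

```python
import math

def steady_state(a, b):
    """Steady state X* = a/b of the toy equation dX/dt = a - b*X."""
    return a / b

def log_gain(f, p, rel=1e-6):
    """Logarithmic gain d(ln X*)/d(ln p), by central finite differences.
    `f` maps a parameter value to a steady-state output."""
    up, down = f(p * (1 + rel)), f(p * (1 - rel))
    return (math.log(up) - math.log(down)) / (math.log(1 + rel) - math.log(1 - rel))

# For X* = a/b, the gains are exactly +1 (w.r.t. a) and -1 (w.r.t. b);
# a ranking by |gain| would flag both parameters as equally influential.
g_a = log_gain(lambda a: steady_state(a, 2.0), 1.5)
g_b = log_gain(lambda b: steady_state(1.5, b), 2.0)
print(round(g_a, 3), round(g_b, 3))  # 1.0 -1.0
```

In the hierarchical strategy described above, variables with the largest such gains would be the first candidates for replacement by more detailed modules.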

Liang Guo,Georgia Institute of Technology
IEEE transactions on biomedical circuits and systems | Year: 2013

Numerous applications in neuroscience research and neural prosthetics, such as electrocorticogram (ECoG) recording and retinal prosthesis, involve electrical interactions with soft excitable tissues using a surface recording and/or stimulation approach. These applications require an interface that is capable of setting up high-throughput communications between the electrical circuit and the excitable tissue and that can dynamically conform to the shape of the soft tissue. Being a compliant material with mechanical impedance close to that of soft tissues, polydimethylsiloxane (PDMS) offers excellent potential as a substrate material for such neural interfaces. This paper describes an integrated technology for fabrication of PDMS-based stretchable microelectrode arrays (MEAs). Specifically, as an integral part of the fabrication process, a stretchable MEA is directly fabricated with a rigid substrate, such as a thin printed circuit board (PCB), through an innovative bonding technology, via-bonding, for integrated packaging. This integrated strategy overcomes the conventional challenge of high-density packaging for this type of stretchable electronics. Combined with a high-density interconnect technology developed previously, this stretchable MEA technology facilitates a high-resolution, high-density integrated system solution for neural and muscular surface interfacing. In this paper, this PDMS-based integrated stretchable MEA (isMEA) technology is demonstrated by an example design that packages a stretchable MEA with a small PCB. The resulting isMEA is assessed for its biocompatibility, surface conformability, electrode impedance spectrum, and capability to record muscle fiber activity when applied epimysially.

Choi S.,Seoul National University | Dickson R.M.,Georgia Institute of Technology | Yu J.,Seoul National University
Chemical Society Reviews | Year: 2012

Though creation and characterization of water soluble luminescent silver nanodots were achieved only in the past decade, a large variety of emitters in diverse scaffolds have been reported. Photophysical properties approach those of semiconductor quantum dots, but relatively small sizes are retained. Because of these properties, silver nanodots are finding ever-expanding roles as probes and biolabels. In this critical review we revisit the studies on silver nanodots in inert environments and in aqueous solutions. The recent advances detailing their chemical and physical properties of silver nanodots are highlighted with an effort to decipher the relations between their chemical/photophysical properties and their structures. The primary results about their biological applications are discussed here as well, especially relating to their chemical and photophysical behaviours in biological environments (216 references). © 2012 The Royal Society of Chemistry.

Zajic A.G.,Georgia Institute of Technology
IEEE Transactions on Vehicular Technology | Year: 2011

This paper proposes a geometry-based statistical model for wideband multiple-input multiple-output (MIMO) mobile-to-mobile (M-to-M) shallow-water acoustic (SWA) multipath fading channels. Based on the reference model, the corresponding MIMO time-frequency correlation function, Doppler power spectral density, and delay cross-power spectral density are derived. These statistics are important tools for the design and performance analysis of MIMO M-to-M SWA communication and sonar systems. Finally, the derived statistics are compared with the experimentally obtained channel statistics, and close agreement is observed. © 2011 IEEE.
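
As a loose illustration of such temporal correlation statistics, the classical result for a narrowband RF mobile-to-mobile Rayleigh channel under isotropic scattering (the Akki-Haber model) is the product of two Bessel functions, one per terminal's maximum Doppler frequency. This is a sketch of the general double-mobility idea only, not the paper's shallow-water acoustic model:

```python
import numpy as np

def bessel_j0(x):
    """Zeroth-order Bessel function via its integral representation,
    J0(x) = (1/pi) * int_0^pi cos(x sin(theta)) dtheta."""
    theta = np.linspace(0.0, np.pi, 4001)
    w = np.full(theta.size, theta[1] - theta[0])
    w[0] *= 0.5
    w[-1] *= 0.5                      # trapezoidal weights
    vals = np.cos(np.outer(np.atleast_1d(x), np.sin(theta)))
    return (vals @ w) / np.pi

def mtm_time_correlation(tau, f_tx, f_rx):
    """Temporal autocorrelation of a narrowband mobile-to-mobile
    Rayleigh channel with isotropic scattering at both ends
    (Akki-Haber model): one J0 factor per mobile's max Doppler."""
    return bessel_j0(2*np.pi*f_tx*tau) * bessel_j0(2*np.pi*f_rx*tau)

tau = np.linspace(0.0, 0.05, 501)     # correlation lags in seconds
r = mtm_time_correlation(tau, f_tx=30.0, f_rx=20.0)
```

Fixing one Doppler frequency to zero recovers the single-mobility Jakes correlation, which shows how double mobility decorrelates the channel faster.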

Zhang F.,Georgia Institute of Technology
IEEE Transactions on Automatic Control | Year: 2010

We present a geometric approach for formation control that explicitly decouples the translation dynamics from the orientation and shape dynamics. The formation dynamics are modeled as controlled Lagrangian systems on Jacobi shape space, and measurements of shape variables are used as feedback to control the entire formation. This geometric approach allows each member of the formation, modeled as a Newtonian particle, the freedom to choose different coordinate frames and shape variables to describe the observed orientation and shape of the formation. We derive a class of cooperative control laws and shape consensus algorithms with provable convergence. They can be implemented in a distributed fashion thanks to the gauge covariance and coordinate independence associated with the geometric approach. © 2010 IEEE.
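
The shape consensus idea can be illustrated with the standard linear consensus protocol, in which each agent repeatedly averages toward its neighbors' values; this is a generic distributed-consensus sketch, not the paper's specific control law on Jacobi shape space:

```python
import numpy as np

def consensus_step(x, adjacency, eps=0.1):
    """One synchronous step of the linear consensus protocol:
    x <- x - eps * L x, where L is the graph Laplacian."""
    lap = np.diag(adjacency.sum(axis=1)) - adjacency
    return x - eps * lap @ x

# Four agents on a ring, each holding a scalar shape variable.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
x = np.array([1.0, 3.0, 5.0, 7.0])
for _ in range(200):
    x = consensus_step(x, A)
# x approaches the average of the initial values (4.0)
```

Each update uses only neighbor information, which is what makes a distributed implementation possible; convergence holds whenever the step size is small relative to the largest Laplacian eigenvalue.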

Vuran M.C.,University of Nebraska - Lincoln | Akyildiz I.F.,Georgia Institute of Technology
IEEE Transactions on Mobile Computing | Year: 2010

Severe energy constraints of battery-powered sensor nodes necessitate energy-efficient communication in Wireless Sensor Networks (WSNs). However, the vast majority of the existing solutions are based on the classical layered protocol approach, which leads to significant overhead. It is much more efficient to have a unified scheme, which blends common protocol layer functionalities into a cross-layer module. In this paper, a cross-layer protocol (XLP) is introduced, which achieves congestion control, routing, and medium access control in a cross-layer fashion. The design principle of XLP is based on the cross-layer concept of initiative determination, which enables receiver-based contention, initiative-based forwarding, local congestion control, and distributed duty cycle operation to realize efficient and reliable communication in WSNs. The initiative determination requires simple comparisons against thresholds, and thus, is very simple to implement, even on computationally constrained devices. To the best of our knowledge, XLP is the first protocol that integrates functionalities of all layers from PHY to transport into a cross-layer protocol. A cross-layer analytical framework is developed to investigate the performance of the XLP. Moreover, in a cross-layer simulation platform, the state-of-the-art layered and cross-layer protocols have been implemented along with XLP for performance evaluations. XLP significantly improves the communication performance and outperforms the traditional layered protocol architectures in terms of both network performance and implementation complexity. © 2006 IEEE.
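
The initiative determination described above reduces to a conjunction of simple threshold tests. The sketch below uses illustrative condition names and threshold values, not the protocol's exact quantities:

```python
def initiative(snr_db, relay_rate, buffer_occupancy, residual_energy,
               snr_th=10.0, rate_max=5.0, buf_max=0.8, e_min=0.2):
    """Sketch of receiver-based initiative determination: a node
    takes the initiative to participate in forwarding only if all
    threshold conditions hold (good link, spare relaying capacity,
    no local congestion, enough remaining energy). Names and
    thresholds here are illustrative, not XLP's exact parameters."""
    return (snr_db >= snr_th and
            relay_rate <= rate_max and
            buffer_occupancy <= buf_max and
            residual_energy >= e_min)
```

Because the decision is just a few comparisons, it is cheap enough to run on a computationally constrained sensor node, which is the point the abstract makes.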

Badreddine Assouar M.,Georgia Institute of Technology | Oudich M.,CNRS Jean Lamour Institute
Applied Physics Letters | Year: 2011

Reliable numerical simulations of the band structure of surface acoustic waves propagating in a two-dimensional phononic crystal are reported. Through an efficient finite element method and specific boundary conditions, a theoretical approach allowing direct computation of the surface acoustic wave band structure of a phononic crystal is proposed. Three types of phononic structures are investigated: fluid/solid, solid/solid, and air-connected stubbed substrate. Using the sound cone limitation, the calculated results show the propagation of surface acoustic waves in the nonradiative region of the substrate. In addition, the modal displacements of the original guided surface modes supported by the studied structures are computed, showing their original characteristics. © 2011 American Institute of Physics.

Hicks D.,Georgia Institute of Technology
Research Policy | Year: 2012

The university research environment has been undergoing profound change in recent decades, and performance-based research funding systems (PRFSs) are one of the many novelties introduced. This paper seeks to find general lessons in the accumulated experience with PRFSs that can serve to enrich our understanding of how research policy and innovation systems are evolving. The paper also links the PRFS experience with the public management literature, particularly new public management, and the understanding of public sector performance evaluation systems. PRFSs were found to be complex, dynamic systems, balancing peer review and metrics, accommodating differences between fields, and involving lengthy consultation with the academic community and transparency in data and results. Although the importance of PRFSs seems based on their distribution of universities' research funding, this is something of an illusion, and the literature agrees that it is the competition for prestige created by a PRFS that creates powerful incentives within university systems. The literature suggests that under the right circumstances a PRFS will enhance control by professional elites. PRFSs, since they aim for excellence, may compromise other important values such as equity or diversity. They will not serve the goal of enhancing the economic relevance of research. © 2011 Elsevier B.V.

Chakravarty U.K.,Georgia Institute of Technology
Composite Structures | Year: 2010

SectionBuilder is a finite element based tool for analysis and design of composite rotor blade cross-sections. The tool can create the cross-sections with parametric shapes and arbitrary configurations. It has the ability to generate single- and multi-cell cross-sections with arbitrary lay-ups where the material properties for each layer can be defined on the basis of the design requirements. It can create the variation of thickness of skin and D-spars for rotor blades by considering ply drops. Cross-sections are often reinforced by core material for constructing realistic rotor blade cross-sections. The tool has the ability to integrate core materials into the cross-sections. After meshing the cross-section, the tool determines the sectional properties using finite element analysis. This tool computes sectional properties including stiffness matrix, compliance matrix, mass matrix, and principal axes. A visualization environment is integrated with the tool for visualizing the stress and strain distributions over the cross-section. Details about the development steps and applications of SectionBuilder are presented in this paper. © 2009 Elsevier Ltd. All rights reserved.
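
The kind of sectional properties such a tool reports can be sketched for the simplest case, a homogeneous cross-section meshed into triangles: area, centroid, and centroidal second moments of area. This is a minimal illustration, not SectionBuilder's finite element formulation for composite lay-ups:

```python
def section_properties(tris):
    """Area, centroid, and second moments of area about the centroid
    for a cross-section given as a list of triangles, each a tuple of
    three (x, y) vertices."""
    A = cx = cy = Ix0 = Iy0 = 0.0
    for (x1, y1), (x2, y2), (x3, y3) in tris:
        a = 0.5 * abs((x2-x1)*(y3-y1) - (x3-x1)*(y2-y1))
        A += a
        cx += a * (x1 + x2 + x3) / 3.0
        cy += a * (y1 + y2 + y3) / 3.0
        # second moments about the origin (standard triangle formulas)
        Ix0 += a/6.0 * (y1*y1 + y2*y2 + y3*y3 + y1*y2 + y2*y3 + y3*y1)
        Iy0 += a/6.0 * (x1*x1 + x2*x2 + x3*x3 + x1*x2 + x2*x3 + x3*x1)
    cx, cy = cx / A, cy / A
    # parallel-axis shift from the origin to the centroid
    return A, (cx, cy), Ix0 - A*cy*cy, Iy0 - A*cx*cx

# unit square meshed as two triangles
square = [((0, 0), (1, 0), (1, 1)), ((0, 0), (1, 1), (0, 1))]
A, c, Ix, Iy = section_properties(square)
# A = 1, centroid (0.5, 0.5), Ix = Iy = 1/12
```

A real sectional analysis additionally weights each element by its layer's material stiffness, which is where the composite lay-up information enters.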

Cressler J.D.,Georgia Institute of Technology
IEEE Transactions on Device and Materials Reliability | Year: 2010

"Extreme environment" electronics represent an important niche market in the trillion dollar global electronics industry and span the operation of electronic circuits and systems in surroundings lying outside the domain of conventional commercial or military specifications. Such extreme environments might include, for instance, the following: 1) operation down to very low temperatures (e.g., to 77 K or even 4.2 K or below); 2) operation up to very high temperatures (e.g., to 200 °C or even 300 °C; 3) operation across very wide and/or cyclic temperature swings (e.g., °-230 °C + 120 °C night to day, as found on the lunar surface); 4) operation in a radiation environment (e.g., in space while orbiting the Earth); or 5) at worst case even with all four simultaneously. The unique bandgap-engineered features of silicon-germanium (SiGe) heterojunction bipolar transistors and the electronic circuits built from them offer a considerable potential for simultaneously coping with all four of these extreme environments, potentially with no process modifications, ultimately providing compelling advantages at the integrated circuit and system level across a wide class of envisioned commercial and defense applications. Here, we detail the nuances associated with using SiGe technology for extreme environment electronics, paying particular attention to recent developments in the field. © 2010 IEEE.

Voit E.O.,Georgia Institute of Technology
Advances in Experimental Medicine and Biology | Year: 2010

Sphingolipid metabolism constitutes a complex pathway system that includes biosynthesis of different types of sphingosines and ceramides, the formation and recycling of complex sphingolipids and the supply of materials for remodeling. Many of the metabolites have several roles, for instance, as substrates and as modulators of reactions in other parts of the system. The large number of sphingolipid compounds and the different types of nonlinear interactions among them render it difficult to predict responses of the sphingolipid pathway system to perturbations, unless one utilizes mathematical models. The sphingolipid pathway system not only invites modeling as a useful tool, it is also a very suitable test bed for developing detailed modeling techniques and analyses, due to several features. First, the reaction network is relatively well understood and many of the steps have been characterized, at least in vitro. Second, sphingolipid metabolism constitutes a relatively closed system, such that most reactions occur within the system rather than between the system and other pathways. Third, the basic structure of the pathway is conserved throughout evolution, but some of the details vary among different species. This degree of similarity permits comparative analyses and may one day elucidate the gradual evolution toward superior system designs. We discuss here some reasons that make sphingolipid modeling an appealing companion to experimental research and sketch out applications of sphingolipid models that are different from typical model uses. © 2010 Landes Bioscience and Springer Science+Business Media.
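
A minimal sketch of the power-law (S-system) format commonly used in Biochemical Systems Theory to model such pathway systems: each net rate is a difference of two products of power-law terms. The two-variable system and its coefficients below are illustrative, not fitted to sphingolipid data:

```python
import numpy as np

def s_system_step(x, dt=0.01):
    """One Euler step of a toy two-variable S-system: each rate is
    a production power-law minus a degradation power-law.
    Coefficients are illustrative only."""
    x1, x2 = x
    dx1 = 2.0 * x2**0.5 - 1.0 * x1   # production driven by X2, linear loss
    dx2 = 1.0 - 0.5 * x2             # constant influx, first-order loss
    return np.array([x1 + dt * dx1, x2 + dt * dx2])

x = np.array([1.0, 1.0])
for _ in range(5000):                # integrate to t = 50
    x = s_system_step(x)
# steady state: X2 -> 2, X1 -> 2*sqrt(2)
```

The appeal of the format is that steady states can be found by setting each production term equal to its degradation term, which becomes a linear problem in the logarithms of the variables.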

Agency: NSF | Branch: Standard Grant | Program: | Phase: CDS&E | Award Amount: 690.71K | Year: 2016

Quantum chemistry methods and software give scientists a computational tool to predict and study the properties of molecular systems. These tools are used widely, ranging from fundamental chemistry and biochemistry to pharmaceutical and materials design. Improvements in these methods and software give scientists an even more powerful tool for discovery. This project will develop new methods and software for quantum chemistry aimed at fully exploiting the parallel capabilities of today's computer hardware. Parallel capabilities within individual processor cores will be specifically exploited, as well as the ability to use multiple compute nodes concurrently for a single calculation. Success in this project will promote the progress of science, specifically for efficiently studying very large molecular systems with quantum chemistry tools, by decreasing the computer time needed for studies with the same accuracy and increasing the accuracy of studies that use the same computational time.

This project will first target the computation of electron repulsion integrals (ERIs) used in essentially all quantum chemistry software. The calculation of ERIs using the Obara-Saika (OS) method will be reorganized to exploit the single instruction multiple data (SIMD) capabilities of modern computer processors. The calculation of the Boys function used in the OS method will also be addressed. An optimized, open source software library for ERI calculation will be released, which will include functionality for one-electron integrals and integral derivatives. This project will also further develop the GTFock quantum chemistry framework to make it easier to use by adding several interfaces. GTFock, which has efficient distributed parallel capabilities, will be extended to include symmetry-adapted perturbation theory. GTFock will be used to study large protein-ligand systems. In addition, this project will involve undergraduate students and will be used to motivate research in the classroom.
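
The Boys function mentioned above, F_m(x) = ∫₀¹ t^(2m) exp(−x t²) dt, can be evaluated for small and moderate x by its Taylor series combined with the standard downward recursion. This is a sketch only; production integral codes add asymptotic branches for large x and vectorize over shell quartets:

```python
import math

def boys(m, x, terms=60):
    """Boys function F_m(x) = integral_0^1 t^(2m) exp(-x t^2) dt,
    evaluated by its Taylor series: sum_k (-x)^k / (k! (2m+2k+1)).
    Adequate for small/moderate x."""
    total, term = 0.0, 1.0            # term tracks (-x)^k / k!
    for k in range(terms):
        total += term / (2*m + 2*k + 1)
        term *= -x / (k + 1)
    return total

def boys_down(m_max, x):
    """Downward recursion F_m = (2x F_{m+1} + e^{-x}) / (2m + 1),
    seeded at the highest order, which is numerically stable."""
    vals = [0.0] * (m_max + 1)
    vals[m_max] = boys(m_max, x)
    for m in range(m_max - 1, -1, -1):
        vals[m] = (2*x*vals[m + 1] + math.exp(-x)) / (2*m + 1)
    return vals
```

For SIMD-oriented codes of the kind the project targets, the win comes from evaluating many such Boys values for batches of integrals with the same instruction stream.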

Agency: GTR | Branch: EPSRC | Program: | Phase: Research Grant | Award Amount: 2.00M | Year: 2013

Globalization and ever-changing customer demands, resulting in product customization, variety, and time-to-market pressures, have intensified competition in automotive and aerospace manufacturing worldwide. Manufacturers are under tremendous pressure to meet changing customer needs quickly and cost effectively without sacrificing quality. Responding to these challenges, manufacturers have offered flexible and reconfigurable assembly systems. However, a major challenge is how to obtain production volume flexibility for a product family with low investment and the capability to yield high product quality and throughput while allowing quick production ramp-up. Overcoming these challenges involves three requirements which are the focus of this proposal: (1) Model reconfigurable assembly system architecture. The system architecture should purposefully take into account future uncertainties triggered by product family mix and product demands. This will require minimizing system changeability while maximizing system reusability to keep cost down; (2) Develop novel methodologies that can predict process capability and manage product quality for given system changeability requirements; and (3) Take advantage of emerging technologies and rapidly integrate them into existing production systems, e.g., new joining processes (Remote Laser Welding) and new materials. This project will address these factors by developing a self-resilient reconfigurable assembly system with in-process quality improvement that is able to self-recover from (i) 6-sigma quality faults; and (ii) changes in design and manufacturing.
In doing so, it will go beyond the state-of-the-art and practice in the following ways: (1) Since current system architectures face significant challenges in responding to changing requirements, this initiative will incorporate cost, time, and risks involving necessary changes by integrating uncertainty models; decision models for needed changes; and system change modelling; and (2) Current in-process quality monitoring systems use point-based measurements with limited 6-sigma failure root cause identification. They seldom correct operational defects quickly and do not provide in-depth information to understand and model manufacturing defects related to part and subassembly deformation. Usually, existing surface-based scanners are used for parts inspection, not for in-process quality control. This project will integrate in-line surface-based measurement with automatic Root Cause Analysis and feedforward/feedback process adjustment and control to enhance system response to faults or quality/productivity degradation. The research will be conducted for a reconfigurable assembly system with multi-sector applications. It will involve system changeability/adaptation and in-process quality improvement for: (i) automotive door assembly, for implementing an emerging joining technology, e.g., Remote Laser Welding (RLW), for precise closed-loop surface quality control; and (ii) airframe assembly, for predicting process capability, also for precise closed-loop surface quality control. Results will yield significant benefits to the UK's high value manufacturing sector. It will further enhance the sector by accelerating introduction of new emerging eco-friendly processes, e.g., RLW. It will foster interdisciplinary collaboration across a range of disciplines such as data mining and process mining, advanced metrology, manufacturing, and complexity sciences.
The integration of reconfigurable assembly systems (RAS) with in-process quality improvement (IPQI) is an emerging field, and this initiative will help develop it into an internationally important area of research. The results of the research will inform engineering curriculum components, especially as these relate to training future engineers to lead the high value manufacturing sector and digital economy.

Agency: NSF | Branch: Standard Grant | Program: | Phase: Service, Manufacturing, and Op | Award Amount: 289.57K | Year: 2017

Each year about 200,000 women are diagnosed with and more than 40,000 die from breast cancer, the most common female cancer in the US. Late detection significantly reduces survival; while 5-year survival is about 97 percent for early stage breast cancers, it is only about 20 percent for advanced stage cancers. Numerous clinical trials and community setting analyses have shown that repeat mammography use can significantly reduce breast cancer mortality. The reduction in breast cancer mortality due to screening, however, is contingent upon adhering to screening recommendations and having consecutive on-schedule mammograms. Therefore, women who do not adhere to receiving repeat mammograms are at risk for developing advanced stage or incurable breast cancers. Indeed, adherence to cancer screening has been identified as a national top priority to reduce cancer mortality. In line with this initiative, the research objective of this project is to optimize the design and allocation of adaptive adherence-enhancing intervention (AEI) strategies to improve overall adherence to mammography screening, while reducing unnecessary costs. From a societal perspective, this research has the potential to significantly improve the efficiency of adherence-enhancing intervention strategies for more effective breast cancer prevention. Results from this research can inform breast cancer prevention policies at the level of the individual health plan, a state's comprehensive cancer control plan, and also at the national level in terms of guideline development. This project will also have an immediate impact on integration of research and learning, and enhancing diversity. Under this project, a PhD student will be trained to apply systems modeling methodologies to the healthcare area. In addition, the investigators will engage several minority students in these research activities, and aim to attract them to engineering with a focus on healthcare.

This research will apply machine learning and adaptive stochastic dynamic control methodologies to learn patients' responses to adherence-enhancing interventions and to optimize the use of intervention strategies accordingly. If successful, this project will make several intellectual contributions. First, this will be the first study to optimize the design and allocation of adaptive AEI strategies for sustained mammography use. The team will develop flexible adaptive stochastic control models that capture key disease and intervention dynamics, conduct in-depth structural analysis of the analytical models, and develop tailored solution algorithms. In parameterizing such models, the team will use large national datasets. Further, the team will test policies derived from the analytical models against actual policies through a detailed simulation model to evaluate possible solutions and estimate their impact. The project's approaches are general and could be applied to other chronic diseases with historically low adherence rates to screening.
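
A tiny example of the kind of stochastic control formulation involved: value iteration on a two-state Markov decision process in which an intervention raises the chance of returning to adherence at a cost. All transition probabilities and rewards below are illustrative, not estimated from data:

```python
import numpy as np

def value_iteration(P, r, gamma=0.97, tol=1e-8):
    """Value iteration for a finite MDP. P[a, s, s'] are transition
    probabilities under action a; r[a, s] are immediate rewards.
    Returns the optimal values and the optimal action per state."""
    v = np.zeros(r.shape[1])
    while True:
        q = r + gamma * P @ v            # Q-values, shape (actions, states)
        v_new = q.max(axis=0)
        if np.max(np.abs(v_new - v)) < tol:
            return v_new, q.argmax(axis=0)
        v = v_new

# states: 0 = adherent, 1 = lapsed; actions: 0 = no contact, 1 = intervene
P = np.array([[[0.8, 0.2], [0.1, 0.9]],   # no contact
              [[0.9, 0.1], [0.5, 0.5]]])  # intervention boosts re-adherence
r = np.array([[1.0, 0.0],                 # health benefit while adherent
              [0.7, -0.3]])               # same, minus intervention cost
v, policy = value_iteration(P, r)
# optimal policy here: intervene only in the lapsed state
```

The adaptive element in the actual research replaces the fixed transition matrix with one learned from each patient's observed responses.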

Agency: NSF | Branch: Standard Grant | Program: | Phase: CHEMICAL & BIOLOGICAL SEPAR | Award Amount: 230.00K | Year: 2015

Approximately 15% of domestic energy in the United States is consumed in separation processes. Composite membrane separators, comprised of particles of an advanced molecular sieve material embedded in a polymer film, are a promising class of membranes with the potential to make a meaningful impact on the energy efficiency of separation processes. The proposed research will establish an experimental framework for resolving the microscopic transport, sorption, and structural properties of such composite membranes and finding relationships between these properties and the corresponding macroscopic properties relevant for applications in separations. The fundamental understanding of the microscopic properties of the composite membranes will lay the foundation for rationally designed composite membranes optimized for gas separations and pervaporation.

Zeolitic imidazolate frameworks (ZIFs), a class of metal-organic frameworks (MOFs) that have zeolite-like structures, can be embedded into a glassy polymer matrix to form a mixed matrix membrane (MMM). Such membranes capture both the molecular sieving properties of the ZIF and the ease of processing associated with polymers. The main goal of this work is to develop a fundamental understanding of the relationship between the transport, sorption and structural properties of the MMM constituents and those of the corresponding neat materials. The PIs propose to use a synergetic strategy based on the application of several state-of-the-art and even unique NMR and sorption characterization techniques to achieve such resolution. The potentially transformative outcome of the proposed work is to enable the design of ZIF-polymer MMMs based on the understanding of changes of the transport, sorption and structural properties of the MMM constituents in comparison to those of the neat materials.
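
A standard first-order way to relate a mixed matrix membrane's effective permeability to its constituents is the Maxwell model, valid for dilute filler loadings. This is a textbook sketch for context, not a result of this project:

```python
def maxwell_mmm_permeability(p_cont, p_disp, phi):
    """Maxwell model for the effective permeability of a mixed
    matrix membrane: p_cont is the continuous (polymer) phase
    permeability, p_disp the dispersed (e.g., ZIF filler)
    permeability, and phi the filler volume fraction. Best suited
    to dilute, well-dispersed loadings."""
    num = p_disp + 2.0*p_cont - 2.0*phi*(p_cont - p_disp)
    den = p_disp + 2.0*p_cont + phi*(p_cont - p_disp)
    return p_cont * num / den
```

A filler more permeable than the polymer raises the effective permeability, and vice versa; deviations of measured values from this baseline are one way interfacial defects or rigidified polymer layers reveal themselves.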

The primary focus of the educational and outreach project plan is to establish a new inter-campus research mentoring program for K-12 and undergraduate students from underrepresented groups. This program will offer students participation in well-defined engineering projects, development of engaging animations and educational demonstrations related to membrane separations, presentation of these animations and demonstrations to younger students and participation in a student exchange between the UF and Georgia Tech groups.

Agency: NSF | Branch: Standard Grant | Program: | Phase: | Award Amount: 2.91M | Year: 2015

NRT: Accessibility, Rehabilitation and Movement Science (ARMS): An Interdisciplinary Traineeship Program in Health-Centered Robotics

The world's demographics are changing. People continue to live longer, and the U.S. population is becoming older and more racially and ethnically diverse. There is also an increase in younger individuals living with a life-long disability, such as veterans who sustain catastrophic injuries, persons suffering from neurodegenerative diseases, and children growing up with developmental disorders or delays. With this changing population profile comes an increasing demand for advanced healthcare technologies and a need to train a new generation of engineers able to develop these new technologies. This National Science Foundation Research Traineeship award to the Georgia Institute of Technology will address this demand by training graduate master's and doctoral students in the interdisciplinary field of healthcare robotics. The traineeship anticipates providing a unique and comprehensive training opportunity for one hundred and fifty-five (155) students, including thirty (30) funded trainees, by combining disciplines in robotics, studies in health sciences, interactions with clinical partners, hands-on rehabilitation research, and a culture of innovation and translational research.

Trainees will have unique exposure to a variety of approaches developed in Robotics, Physiology, Neuroscience, Rehabilitation, and Psychology. The traineeship will bridge the gap between healthcare and robotics by addressing two major barriers: a) the lack of a formalized framework to enable interdisciplinary collaborations between robotics engineers and health professionals; b) the tendency for students in robotics to be unprepared to address problems in healthcare, including a lack of appreciation for the challenges encountered by clinicians, caregivers, and people with disabilities. Through close interactions with various partners, the traineeship will expand student horizons beyond a technology-first mentality to consider challenges in developing robotic solutions that address the needs of clinicians, caregivers, and people with disabilities. The goal is to develop an interdisciplinary curriculum based upon the concept of participatory design, problem-based learning, and an immersive research experience that blends techniques from multiple disciplines to solve problems posed in healthcare. A second major goal of the traineeship is to increase the participation of women, underrepresented minorities, and students with disabilities in robotics and related engineering fields. The project will develop a new M.S. degree program in healthcare robotics and a new PhD concentration area in healthcare robotics as well as curricular materials and best-practices to allow other institutions to develop similar programs.

The NSF Research Traineeship (NRT) Program is designed to encourage the development and implementation of bold, new, potentially transformative, and scalable models for STEM graduate education training. The Traineeship Track is dedicated to effective training of STEM graduate students in high priority interdisciplinary research areas, through the comprehensive traineeship model that is innovative, evidence-based, and aligned with changing workforce and research needs.

This award is supported, in part, by the EHR Core Research (ECR) program, specifically the ECR Research in Disabilities Education (RDE) area of special interest. ECR emphasizes fundamental STEM education research that generates foundational knowledge in the field. Investments are made in critical areas that are essential, broad and enduring: STEM learning and STEM learning environments, broadening participation in STEM, and STEM workforce development.

Agency: NSF | Branch: Continuing grant | Program: | Phase: GRAPHICS & VISUALIZATION | Award Amount: 95.30K | Year: 2016

CAREER: Understanding, Representing, and Enhancing Scenes at the Internet-scale

Photography has an enormous impact on society -- it is our primary visual history and a medium for storytelling, entertainment, and art. But our visual world is extraordinarily complex which makes it difficult for computer vision to understand photos and for computer graphics to synthesize visual content. However, the emergence of Internet-scale photo collections in recent years enables new research directions. We use scene-based representations to leverage Internet-scale data. Scenes (places or environments) are the context in which all other visual phenomena exist and it seems possible to brute-force the space of scenes -- with millions of scenes, we find qualitatively similar scenes and create massively data-driven algorithms with capabilities that are complementary to typical bottom-up graphics and vision pipelines. The underlying principle of this study is that joint investigations of scene representations and large image databases will advance the state-of-the-art in graphics and vision.

First, we are investigating detail synthesis tasks which alleviate camera shake, motion blur, defocus, atmospheric scattering, or low resolution. Scene representations are robust enough to find matching scenes in Internet-scale photo collections even in the presence of dramatic blurring. These matching scenes provide a context-specific statistical model which can be used to insert convincing texture and object detail. Second, we are studying attribute-based representations of scenes. We use crowdsourcing to discover attributes and build large databases for the community. Attributes are a powerful intermediate representation for the next generation of big data imaging research, which can have broad societal impact through applications such as robotics, security, assistance to the vision-impaired, and vehicle safety. The investigators are also developing a new introductory course for Brown students to explore big data computing across scientific disciplines and are creating an online community for visual computing education to benefit students interested in photography and programming.
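
Scene matching of the kind described can be sketched as nearest-neighbor retrieval over global scene descriptors (e.g., GIST-like feature vectors) by cosine similarity. Descriptor dimensions and data below are synthetic:

```python
import numpy as np

def nearest_scenes(query, database, k=3):
    """Return the indices of the k database scenes most similar to
    the query under cosine similarity of their global descriptors."""
    db = database / np.linalg.norm(database, axis=1, keepdims=True)
    q = query / np.linalg.norm(query)
    sims = db @ q
    return np.argsort(-sims)[:k], sims

rng = np.random.default_rng(0)
database = rng.normal(size=(1000, 128))    # 1000 scenes, 128-D descriptors
query = database[42] + 0.01 * rng.normal(size=128)  # near-duplicate of scene 42
idx, _ = nearest_scenes(query, database)
# idx[0] == 42: the near-duplicate is retrieved first
```

At Internet scale the same idea is served by approximate nearest-neighbor indexes rather than a dense matrix product, but the matching criterion is unchanged.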

Agency: NSF | Branch: Standard Grant | Program: | Phase: I-Corps | Award Amount: 50.00K | Year: 2015

Biopharmaceuticals have revolutionized the treatment of cancer, immune, inflammatory, and neurological diseases. Because these drugs are administered to patients intravenously and then distributed by the vasculature to target tissues, their testing in vitro has not been predictive enough. In addition, because these drugs are humanized, predicting safety hazards from animal studies has been challenging, and trial volunteers have suffered severe reactions. For these reasons, and because of the economics and timelines associated with drug development, the biopharmaceutical industry seeks human organ-on-a-chip platforms for more predictive drug testing. Unfortunately, there has not yet been a system that is simple enough for researchers to use, offers adequate throughput, and is standardized enough to fit into drug screening workflows. The goal of this project is to help solve this problem by setting a foothold for the development of a human organ-on-a-chip platform that is easy to use, high-throughput, and in a standard format that readily integrates into researchers' testing routines.

The proposed innovation is a simple, medium-throughput tool that any researcher can use to answer all the questions that complex commercial perfusion systems do (if all the hardware that services them could be put in one lab and researchers trained on how to use it all). The proposed vascularized organ-on-a-chip platform (PerfusionPal) will be in a standard format, high throughput, easy to use, and able to fit into routine pharmaceutical workflows. It will be both diagnostic and prognostic: diagnostic, to assess target specificity and cross-reactivity with off-target tissues and to identify biomarkers; and prognostic, capable of predicting severe reactions in clinical trials such as cytokine storms, infusion reactions, immune suppression, and off-target organ liabilities. PerfusionPal commercialization will triage dangerous drug leads and enable faster and cheaper development of safe drugs from bench to bedside for the benefit of patients, the healthcare industry, and society.

Agency: NSF | Branch: Standard Grant | Program: | Phase: Service, Manufacturing, and Op | Award Amount: 300.00K | Year: 2015

For sensor network design, an optimization method is often used to determine how many sensors to install and where to physically install them. One stream of research assumes a known functional relationship (usually a regression line) among sensor measurements with normally distributed noise. This assumption does not hold for systems with complex dynamics such as transportation systems or environmental monitoring. Even when a process simulation is used to capture complex dynamics, it is often assumed that estimated performance measures from stochastic simulation are accurate and there is no false alarm (i.e., no sensor measurement error). A sensor network found under these assumptions may produce unacceptably high false alarm rates, which eventually makes the decision maker abandon the sensor network. This project develops statistical monitoring and control methods for raising an alarm when a sensor network of a complicated system detects abnormal behaviors such as a contaminant spill in a water quality monitoring network. It then develops efficient simulation-based optimization algorithms to determine the number of sensors and their locations when stochastic simulation is used to estimate multiple performance measures. Finally, this project combines the statistical methods and the simulation-based optimization to design the optimal sensor network while controlling false alarm rates. This research is interdisciplinary across manufacturing, quality control, simulation, and environmental engineering; and the composition of the research team broadens the participation of underrepresented groups in research and teaching. The results from this research are applicable to many application areas and thus will benefit the U.S. economy and society.

The objective of this project is to develop methods that will be useful in identifying an optimal sensor network quickly and accurately in the presence of sensor measurement error, for a complicated system whose in-control and out-of-control observations are obtained through stochastic process simulation. This project considers a potentially large-scale process with general marginal and general correlation structure, which can broaden the application fields of statistical process control (SPC) methods, and develops SPC techniques whose control limits require neither modeling of an underlying process nor trial-and-error calibration. It also develops a combined framework of SPC and discrete optimization via simulation when multiple performance measures exist. Finally, it facilitates knowledge transfer from IE/OR to non-traditional IE/OR fields by applying the resulting combined algorithms to the water quality monitoring problem.
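The workflow the abstract describes — estimating both false alarm rate and detection performance from stochastic simulation, then choosing sensor locations under a false-alarm constraint — can be sketched in miniature. This is a minimal illustration, not the project's actual algorithm: the site count, signal model, threshold, and constraint value are all hypothetical, and a brute-force search stands in for the efficient simulation-based optimization the project develops.

```python
import itertools
import random

random.seed(0)

N_LOCATIONS = 6    # candidate sensor sites along a waterway (hypothetical)
BUDGET = 2         # number of sensors to install
NOISE_SD = 0.5     # sensor measurement error (standard deviation)
THRESHOLD = 1.5    # raise an alarm if any sensor reading exceeds this
N_RUNS = 2000      # stochastic simulation replications per estimate

def alarm(sensors, spill_site):
    """One simulated observation: spill signal decays with distance to sensor."""
    readings = []
    for s in sensors:
        signal = 0.0 if spill_site is None else 3.0 / (1 + abs(s - spill_site))
        readings.append(signal + random.gauss(0, NOISE_SD))
    return max(readings) > THRESHOLD

def estimate(sensors):
    """Estimate false alarm rate (no spill) and detection rate (random spill)."""
    fa = sum(alarm(sensors, None) for _ in range(N_RUNS)) / N_RUNS
    det = sum(alarm(sensors, random.randrange(N_LOCATIONS))
              for _ in range(N_RUNS)) / N_RUNS
    return fa, det

# Maximize detection subject to a false-alarm-rate constraint (here 5%).
best = None
for sensors in itertools.combinations(range(N_LOCATIONS), BUDGET):
    fa, det = estimate(sensors)
    if fa <= 0.05 and (best is None or det > best[1]):
        best = (sensors, det)

print("best placement:", best)
```

Because the estimates are themselves noisy, a real design procedure must account for simulation error when comparing candidate placements — which is precisely the gap between this sketch and the optimization-via-simulation methods the project targets.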

Norton B.G., Georgia Institute of Technology
Journal of Agricultural and Environmental Ethics | Year: 2012

The revelatory paper, "Dilemmas in the General Theory of Planning," by Rittel and Webber (Policy Sci 4:155-169, 1973) has had great impact because it provides one example of an emergent consensus across many disciplines. Many "problems," as addressed in real-world situations, involve elements that exceed the complexity of any known or hoped-for model, or are "wicked." Many who encounter this work for the first time find that its concept of wicked problems aptly describes many environmental disputes. For those frustrated with the lack of progress in many areas of environmental protection, Rittel and Webber's work suggested a powerful explanatory hypothesis: complex environmental problems cannot be comprehended within any of the accepted disciplinary models available in the academy or in discourses on public interest and policy. What should we conclude about the future of social improvements, and about the possibilities for rational discourse leading to cooperative action, with respect to this huge number of pressing public, environmental problems? Can we find ways to address environmental problems that improve the ability of communities to respond to them creatively and rationally? I will argue that, while the Rittel-Webber critique requires us to abandon many of the assumptions associated with a positivistic view of science and its applications to policy analysis, it also points to a more productive direction for the future of policy analysis. I will introduce "boundary critique," developed within Critical Systems Theory (CST), an approach that offers some reason for optimism in dealing with some aspects of wickedness. © 2011 Springer Science+Business Media B.V.

Agency: NSF | Branch: Standard Grant | Program: | Phase: Manufacturing Machines & Equip | Award Amount: 397.15K | Year: 2014

This Faculty Early Career Development (CAREER) award supports a project with the goal of elucidating the interdependence of deformation history, microstructure and thermomechanical properties in surface generation by novel impression and burnishing-based processes. To accomplish this, an experimental study will utilize controlled deformation platforms and a suite of advanced characterization methods to resolve local material conditions in the modified subsurface. Predictive models of microstructure and thermomechanical properties will be calibrated using these results to enable multi-role surface modification. The primary educational objective is the creation of a series of immersive experiences for students to elicit interest in manufacturing research. This includes deep dives in coupled design/manufacturing problems for minority high school students and a series of hands-on manufacturing design challenges for undergraduates. Graduate students will also engage in mentorship of high school students on semester-long design projects. These educational experiences are complemented by the integration of research outcomes in curricula and development of an interactive manufacturing speaker series.

If successful, the research will result in new manufacturing processes that enable direct control of surface properties for a broad range of engineering components. This capability will benefit the nation's economy by enabling domestic manufacturers to be globally competitive in automotive, aerospace, and biomedical systems applications where component surfaces strongly influence performance, service life, and operating cost. Societal benefits of the research include the rapid development of a new class of high-performance, surface-engineered components (e.g., turbine blades, bearing seals, hard-tissue implants) that will improve quality of life through enhanced performance. The integrated educational activities will help to develop the future domestic manufacturing workforce by promoting interest in manufacturing-related studies among high school students and undergraduates through novel experiential learning opportunities. Students will also gain globally aware perspectives in problem-solving activities as they will be connected across academic levels and cultural backgrounds in formal mentor-mentee relationships. Doctoral students will benefit from collaborative research opportunities in the form of international research exchanges and industrial internships.

Agency: NSF | Branch: Standard Grant | Program: | Phase: I-Corps | Award Amount: 50.00K | Year: 2015

Mobile applications are prevalent and are increasingly being used for business activities. Because the mobile user environment is not directly under the control of the companies developing such applications, it is difficult for them to observe, reproduce, investigate, and fix failures that occur in the field. Further, mobile environments can be heterogeneous and may lead to different kinds of issues in application behavior, making it harder to investigate these issues and the possible failures associated with them. Delays in addressing these failures lead to high costs in terms of customer support and loss of reputation on the application store due to bad reviews from dissatisfied customers. This I-Corps team proposes a novel approach that captures the runtime behavior of a mobile application in the field and makes it available to the developer.

The goal of this project is to complete customer validation and develop a proof-of-concept technique and tool for capturing field data relevant to the mobile application. The technique will collect this field data, which consists of the application's runtime states on specific mobile environments and the sequences of actions leading to those states. This information would be reported in a visual format to allow developers to identify commonalities and differences between the runtime behavior obtained from different environments. Although the exact details of this technique might change based on interaction with potential customers, it has the essential ingredients to support testing and maintenance activities for the mobile application.
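The core idea — recording runtime states and the action sequences that led to them, tagged with the device environment, then shipping that trace to the developer — can be sketched as follows. This is a hypothetical illustration only: the `FieldRecorder` class, its methods, and the event fields are invented for this sketch and are not the team's actual tool or API.

```python
import json
import time

class FieldRecorder:
    """Hypothetical sketch: capture user actions and app states in the field."""

    def __init__(self, device_info):
        self.device_info = device_info  # environment metadata (OS, model, ...)
        self.events = []                # ordered trace of (action, state) pairs

    def record(self, action, state):
        """Append one user action and the resulting runtime state."""
        self.events.append({"t": time.time(), "action": action, "state": state})

    def report(self):
        """Serialize the trace so a developer-side tool can aggregate traces
        from many environments and visualize their commonalities/differences."""
        return json.dumps({"device": self.device_info, "trace": self.events})

# Example field session on one (hypothetical) device:
rec = FieldRecorder({"os": "Android 11", "model": "example-device"})
rec.record("tap:login", {"screen": "LoginActivity"})
rec.record("tap:submit", {"screen": "HomeActivity"})
rec.record("failure", {"screen": "HomeActivity", "error": "NullPointerException"})
payload = rec.report()
print(payload)
```

Comparing such traces across devices is what lets a developer see, for instance, that the same action sequence reaches a failing state only on one OS version.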

Agency: NSF | Branch: Standard Grant | Program: | Phase: Secure & Trustworthy Cyberspace | Award Amount: 197.72K | Year: 2015

Existing research on Internet routing security concentrates on technical solutions (new standards and protocols). This project is based on the premise that organizational and institutional factors - known as governance structures in institutional economics - are as important to Internet routing security as technological design. Internet routing involves decentralized decision making among tens of thousands of autonomous network operators. In this environment, an operator's decisions regarding the implementation, organization, and monitoring of routing policies powerfully affect the adoption and performance of security technologies.

The research bridges a gap