Pomeroy C.,University of California at Santa Cruz |
Hall-Arber M.,77 Massachusetts Avenue
Marine Policy | Year: 2015
Marine renewable energy (MRE), though a relative newcomer to the ocean and coastal commons, has become a significant driver of marine spatial planning in the US, posing particular challenges to commercial fisheries and fishing communities. State and federal agencies with primary oversight for MRE development have focused on the identification of places where MRE might proceed unhindered by other uses, most notably coastal fisheries. These agencies and MRE developers have focused on potential space-use conflict and standard mitigation measures for loss of access to that space. However, discussions with fishery participants and other community members, as well as observations of processes on the US West and East Coasts, reveal a complex, multi-faceted social-ecological system not easily parsed out among users, nor amenable to classic mitigation formulas. Recent ethnographic research on potential space-use conflicts and mitigation for MRE demonstrates that marine space use is dynamic and multi-dimensional, with important linkages among fisheries, communities and other interests. Although experiences vary within and across regions and fishing communities, this research illustrates the weak position of fishing communities in marine spatial planning in the context of MRE development. This paper considers the implications of MRE for US East and West Coast fisheries and fishing communities situated within the larger context of neoliberalism and commodification of the ocean commons. © 2014 Elsevier Ltd.
Fenning D.P.,77 Massachusetts Avenue |
Fenning D.P.,University of California at San Diego |
Hofstetter J.,77 Massachusetts Avenue |
Morishige A.E.,77 Massachusetts Avenue |
And 4 more authors.
Advanced Energy Materials | Year: 2014
Material defects govern the performance of a wide range of energy conversion and storage devices, including photovoltaics, thermoelectrics, and batteries. The success of large-scale, cost-effective manufacturing hinges upon rigorous material optimization to mitigate deleterious defects. Material processing simulations have the potential to accelerate novel energy technology development by modeling defect-evolution thermodynamics and kinetics during processing of raw materials into devices. Here, a predictive process optimization framework is presented for rapid material and process development. A solar cell simulation tool that models defect kinetics during processing is coupled with a genetic algorithm to optimize processing conditions in silico. Experimental samples processed according to conditions suggested by the optimization show significant improvements in material performance, indicated by minority carrier lifetime gains, and confirm the simulated directions for process improvement. This material optimization framework demonstrates the potential for process simulation to leverage fundamental defect characterization and high-throughput computing to accelerate the pace of learning in materials processing for energy applications. © 2014 Wiley-VCH Verlag GmbH & Co. KGaA.
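The simulator-plus-genetic-algorithm coupling described in this abstract can be sketched in miniature. The surrogate "lifetime" function and all GA parameters below are hypothetical stand-ins for the paper's defect-kinetics solar cell simulator, shown only to illustrate how processing conditions (here, an anneal temperature and time) can be optimized in silico against a simulated figure of merit:

```python
import random

# Toy surrogate for a defect-kinetics process simulation (hypothetical;
# the paper couples a real solar cell simulator with the GA). Peak
# lifetime is placed arbitrarily at 820 C and 30 min for illustration.
def simulated_lifetime(temp_c, time_min):
    return 100.0 - 0.01 * (temp_c - 820.0) ** 2 - 0.5 * (time_min - 30.0) ** 2

def genetic_optimize(fitness, bounds, pop_size=30, generations=40, seed=0):
    rng = random.Random(seed)
    # Initial population: random process recipes within the allowed bounds.
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]          # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = [(x + y) / 2.0 for x, y in zip(a, b)]   # averaging crossover
            for i, (lo, hi) in enumerate(bounds):           # bounded Gaussian mutation
                if rng.random() < 0.2:
                    child[i] = min(max(child[i] + rng.gauss(0, 0.05 * (hi - lo)), lo), hi)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = genetic_optimize(lambda p: simulated_lifetime(*p),
                        bounds=[(700.0, 950.0), (5.0, 60.0)])
```

In the paper's workflow the fitness evaluation would be a full process simulation rather than a closed-form surrogate; the GA machinery around it is unchanged.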
Aimon N.M.,77 Massachusetts Avenue |
Choi H.K.,77 Massachusetts Avenue |
Sun X.Y.,77 Massachusetts Avenue |
Sun X.Y.,Harbin Institute of Technology |
And 2 more authors.
Advanced Materials | Year: 2014
In perovskite/spinel self-assembled oxide nanocomposites, the substrate surface plays a dominant role in determining the final morphology. Topographic features, such as pits and trenches, are written in the substrate using either focused ion beam milling or wet etching through a block copolymer mask. These features are effective at templating the self-assembly, resulting in a wide range of attainable nano-assemblies. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Slocum Jr. A.H.,77 Massachusetts Avenue |
Culpepper M.L.,77 Massachusetts Avenue
Precision Engineering | Year: 2012
The creation of new technology for high-throughput nanomanufacturing is necessary to realize the full potential of some nano-technological products. Here, we present the preliminary design and manufacture of a precision machine for enabling high-throughput nanomanufacturing processes in a laboratory environment. An error analysis and rate analysis for implementing Dip Pen Nanolithography (DPN), a scanning-probe-based nanomanufacturing process, are used to generate detailed machine functional requirements. A deterministic process is then used to design or select each machine element; standard machine elements and easily manufactured components are used when possible to achieve a low-cost design. The machine is capable of operating with an accuracy and repeatability in the range of hundreds of nanometers, with a thermal stability in the tens of nanometers, thus exceeding the performance requirements for DPN as well as the capabilities of current technology. In a manufacturing environment, the machine could implement DPN at a rate which is almost two orders of magnitude faster than current technology. Multiple machines could also be used for parallel processing and increased production rate to make a nanomanufacturing process economically viable. © 2011 Elsevier Inc. All Rights Reserved.
Baxamusa S.H.,77 Massachusetts Avenue |
Montero L.,University of Barcelona |
Borros S.,University of Barcelona |
Gleason K.K.,77 Massachusetts Avenue
Macromolecular Rapid Communications | Year: 2010
Bifunctional surfaces are micropatterned using a self-aligned, dual-purpose lithographic mask and pairs of conformally deposited iCVD polymers. A first layer is deposited, then physically masked and etched in oxygen plasma. A second layer is deposited with the mask still in place. Lift-off reveals the micropatterned surface. The thicknesses of the two layers are independently controlled so that the resultant surface displays both chemical and topographical contrast. The patterning scheme is independent of the polymers used and the order of deposition. We use this scheme to create surfaces that spatially confine microcondensation as well as chemical functionality. We also demonstrate microwells whose depth can be altered in response to a water stimulus. © 2010 WILEY-VCH Verlag GmbH & Co. KGaA.
Hadjiconstantinou N.G.,77 Massachusetts Avenue
AIP Conference Proceedings | Year: 2011
We present variance reduction methods for drastically reducing the statistical uncertainty associated with Monte Carlo methods for solving the Boltzmann transport equation. The variance reduction, achieved by simulating the deviation from equilibrium, provides a speedup compared to traditional methods such as direct simulation Monte Carlo (DSMC) which increases quadratically as the deviation from equilibrium goes to zero, thus enabling the simulation of arbitrarily small deviations from equilibrium. We show that in addition to reducing the computational cost associated with simulations, the control variate approach can be used to remove the stiffness associated with reaching the continuum limit. In other words, simulating the deviation from equilibrium endows particle methods with the ability to seamlessly capture both the molecular and the continuum regimes at comparable computational cost. © 2011 American Institute of Physics.
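The control-variate principle in this abstract, simulating only the deviation from a known equilibrium, can be illustrated at a much smaller scale on a one-dimensional integral rather than the Boltzmann equation. The integrand and the control variate below are illustrative choices, not from the paper:

```python
import math
import random

rng = random.Random(1)
n = 20000
xs = [rng.random() for _ in range(n)]

# Target: I = integral of e^x over [0, 1] = e - 1, estimated two ways.
# The "equilibrium" analogue is g(x) = 1 + x, whose integral (3/2) is
# known exactly; we simulate only the deviation f - g.
direct = [math.exp(x) for x in xs]                  # plain Monte Carlo samples
deviation = [math.exp(x) - (1.0 + x) for x in xs]   # deviation-from-g samples

est_direct = sum(direct) / n
est_cv = 1.5 + sum(deviation) / n   # add back the known integral of g

def variance(sample):
    m = sum(sample) / len(sample)
    return sum((s - m) ** 2 for s in sample) / (len(sample) - 1)
```

Both estimators target the same integral, but the deviation samples have a much smaller variance, so `est_cv` converges faster. In the kinetic setting the gain grows quadratically as the deviation from equilibrium shrinks, which is precisely the regime where plain DSMC becomes prohibitively noisy.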
Bianchi G.,University of Rome Tor Vergata |
Bracciale L.,University of Rome Tor Vergata |
Censor-Hillel K.,Technion - Israel Institute of Technology |
Lincoln A.,160 Sunrise Dr |
Medard M.,77 Massachusetts Avenue
Advances in Mathematics of Communications | Year: 2016
In this paper we show how linear network coding can reduce the number of queries needed to retrieve one specific message among k distinct ones replicated across a large number of randomly accessed nodes storing one message each. Without network coding, this would require k queries on average. After proving that no scheme can perform better than a straightforward lower bound of 0.5k average queries, we propose and asymptotically evaluate, using mean field arguments, a few example practical schemes, the best of which attains 0.794k queries on average. The paper opens two complementary challenges: a systematic analysis of practical schemes so as to identify the best performing ones and design guideline strategies, as well as the need to identify tighter, nontrivial lower bounds. © 2016 AIMS.
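The uncoded baseline of k expected queries is easy to verify by simulation: each queried node holds one of the k messages uniformly at random, so the number of queries until the wanted message appears is geometric with mean k. The parameters below are arbitrary, and the coded schemes that approach 0.794k are not reproduced here:

```python
import random

def queries_to_find(k, rng):
    # Query random nodes until the specific target message shows up;
    # each node stores one of the k messages uniformly at random.
    queries = 0
    while True:
        queries += 1
        if rng.randrange(k) == 0:   # message 0 is the one we want
            return queries

rng = random.Random(42)
k = 8
trials = 20000
avg = sum(queries_to_find(k, rng) for _ in range(trials)) / trials
# avg is close to k, the uncoded baseline the paper improves upon
```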
PubMed | 77 Massachusetts Avenue, University of California at Irvine, CNRS Laboratory of Civil and Environmental Engineering, 7 Massachusetts Avenue and Texas A&M University
Type: Journal Article | Journal: Journal of the Royal Society, Interface | Year: 2016
More than 44% of building energy consumption in the USA is used for space heating and cooling, and this accounts for 20% of national CO2 emissions. This prompts the need to identify, among the 130 million households in the USA, those with the greatest energy-saving potential and the associated costs of the path to reach that goal. Whereas current solutions address this problem by analysing each building in detail, we herein reduce the dimensionality of the problem by simplifying the calculations of energy losses in buildings. We present a novel inference method that can be used via a ranking algorithm that allows us to estimate the potential energy saving for heating purposes. To that end, we only need consumption records from gas bills integrated with a building's footprint. The method entails a statistical screening of the intricate interplay between weather, infrastructural and residents' choice variables to determine building gas consumption and potential savings at a city scale. We derive a general statistical pattern of consumption in an urban settlement, reducing it to a set of the most influential building parameters that operate locally. By way of example, the implications are explored using records of a set of (N = 6200) buildings in Cambridge, MA, USA, which indicate that retrofitting only 16% of buildings entails a 40% reduction in gas consumption of the whole building stock. We find that the inferred heat loss rate of buildings exhibits a power-law distribution akin to Zipf's law, which provides a means to map an optimum path for gas savings per retrofit at a city scale. These findings have implications for improving the thermal efficiency of cities' building stock, as outlined by current policy efforts seeking to reduce home heating and cooling energy consumption and lower associated greenhouse gas emissions.
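The retrofit-ranking logic can be sketched with synthetic data: when heat-loss rates follow a power law, retrofitting the leakiest small fraction of buildings captures a disproportionate share of total loss. The Pareto shape parameter below is a hypothetical illustration chosen for the sketch, not fitted to the paper's Cambridge data:

```python
import random

rng = random.Random(7)
n = 6200          # number of buildings, matching the Cambridge sample size

# Hypothetical heat-loss rates drawn from a Pareto (power-law)
# distribution, mimicking the Zipf-like pattern the paper reports.
alpha = 2.1
losses = sorted((rng.paretovariate(alpha) for _ in range(n)), reverse=True)

top = int(0.16 * n)                      # retrofit the leakiest 16% of buildings
share = sum(losses[:top]) / sum(losses)  # fraction of total heat loss they hold
```

Under a uniform distribution the top 16% of buildings would hold roughly 16% of the loss; the power-law tail concentrates far more of the total in that top slice, which is what makes a ranked retrofit path effective at city scale.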