Santa Barbara, CA, United States

The University of California, Santa Barbara is a public research university and one of the 10 general campuses of the University of California system. The main campus is located on a 1,022-acre site near Goleta, California, United States, 8 miles from Santa Barbara and 100 miles northwest of Los Angeles. Tracing its roots back to 1891 as an independent teachers' college, UCSB joined the University of California system in 1944 and is the third-oldest general-education campus in the system.

UCSB is one of America's Public Ivy universities, a designation that recognizes top public research universities in the United States. The university is a comprehensive doctoral university and is organized into five colleges and schools offering 87 undergraduate degrees and 55 graduate degrees. UCSB was ranked 40th among "National Universities", 10th among U.S. public universities and 28th among Best Global Universities by U.S. News & World Report's 2015 rankings. The university was also ranked 37th worldwide by the Times Higher Education World University Rankings and 41st worldwide by the Academic Ranking of World Universities in 2014.

UC Santa Barbara is a "very high research activity" university and spent $233.9 million on research in the 2012 fiscal year, the 91st-largest research expenditure in the United States. UCSB houses twelve national research centers, including the renowned Kavli Institute for Theoretical Physics. Current UCSB faculty include six Nobel Prize laureates, one Fields Medalist, 29 members of the National Academy of Sciences, 27 members of the National Academy of Engineering, and 31 members of the American Academy of Arts and Sciences. UCSB was the third host on the ARPANET and was elected to the Association of American Universities in 1995.

The UC Santa Barbara Gauchos compete in the Big West Conference of NCAA Division I. The Gauchos have won NCAA national championships in men's soccer and men's water polo. (Wikipedia)


Barta K., University of Groningen | Ford P.C., University of California at Santa Barbara
Accounts of Chemical Research | Year: 2014

Conspectus: This Account outlines recent efforts in our laboratories addressing a fundamental challenge of sustainability chemistry, the effective utilization of biomass for the production of chemicals and fuels. Efficient methods for converting renewable biomass solids to chemicals and liquid fuels would reduce society's dependence on nonrenewable petroleum resources while easing the atmospheric carbon dioxide burden. The major nonfood component of biomass is lignocellulose, a matrix of the biopolymers cellulose, hemicellulose, and lignin. New approaches are needed to effect facile conversion of lignocellulose solids to liquid fuels and to other chemical precursors without the formation of intractable side products and with sufficient specificity to give economically sustainable product streams. We have devised a novel catalytic system whereby the renewable feedstocks cellulose, organosolv lignin, and even lignocellulose composites such as sawdust are transformed into organic liquids. The reaction medium is supercritical methanol (sc-MeOH), while the catalyst is a copper-doped porous metal oxide (PMO) prepared from inexpensive, Earth-abundant starting materials. This transformation occurs in a single-stage reactor operating at 300-320°C and 160-220 bar. The reducing equivalents for these transformations are derived from the reforming of MeOH (to H2 and CO), which thereby serves as a "liquid syngas" in the present case. Water generated by deoxygenation processes is quickly removed by the water-gas shift reaction. The Cu-doped PMO serves multiple purposes, catalyzing substrate hydrogenolysis and hydrogenation as well as the methanol reforming and shift reactions. This one-pot "UCSB process" is quantitative, giving little or no biochar residual. Provided is an overview of these catalysis studies, beginning with reactions of the model compound dihydrobenzofuran that help define the key processes occurring. The initial step is phenyl-ether bond hydrogenolysis, and this is followed by aromatic ring hydrogenation. The complete catalytic disassembly of the more complex organosolv lignin to monomeric units, largely propyl-cyclohexanol derivatives, is then described. Operational indices based on 1H NMR analysis are also presented that facilitate holistic evaluation of these product streams, which within several hours come to consist largely of propyl-cyclohexanol derivatives. Lastly, we describe the application of this methodology to several types of wood (pine sawdust, etc.) and to cellulose fibers. The product distribution, albeit still complex, displays unprecedented selectivity toward the production of aliphatic alcohols and methylated derivatives thereof. These observations clearly indicate that the Cu-doped solid metal oxide catalyst combined with sc-MeOH is capable of breaking down complex biomass-derived substrates to markedly deoxygenated monomeric units with increased hydrogen content. Possible implementations of this promising system on a larger scale are discussed. © 2014 American Chemical Society.
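The two methanol-driven steps invoked in the abstract (reforming and the water-gas shift) have standard stoichiometry, sketched here for clarity; these equations are textbook chemistry, not reproduced from the Account itself:

```latex
% Methanol reforming: MeOH supplies the reducing equivalents ("liquid syngas")
\mathrm{CH_3OH} \longrightarrow \mathrm{CO} + 2\,\mathrm{H_2}
% Water-gas shift: water from deoxygenation is consumed, yielding more H2
\mathrm{CO} + \mathrm{H_2O} \longrightarrow \mathrm{CO_2} + \mathrm{H_2}
```

The net effect is that each equivalent of methanol, together with a water molecule scavenged from the deoxygenation steps, can deliver up to three equivalents of H2 to the hydrogenolysis and hydrogenation reactions.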

Noffke N., Old Dominion University | Awramik S.M., University of California at Santa Barbara
GSA Today | Year: 2013

Benthic microorganisms form highly organized communities called "biofilms." A biofilm consists of the individual cells plus their extracellular polymeric substances (EPS). In marine and non-marine environments, benthic microbial communities interact with the physical sediment dynamics and other factors in the environment in order to survive. This interaction can produce distinctive sedimentary structures called microbialites. Binding, biostabilization, baffling, and trapping of sediment particles by microorganisms result in the formation of microbially induced sedimentary structures (MISS); however, if carbonate precipitation occurs in EPS, and these processes happen in a repetitive manner, a multilayered build-up can form: stromatolites. Stromatolites and MISS are first found in the early Archean, recording highly evolved microbial activity early in Earth's history. Whereas the stromatolites show enormous morphologic and taxonomic variation, MISS seem not to have changed in morphology since their first appearance. MISS might be the older relative, but due to the lack of well-preserved sedimentary rocks older than 3.5 billion years, the origin of both stromatolites and MISS remains uncertain.

Cutler S.R., University of California at Riverside | Rodriguez P.L., Instituto de Biología Molecular y Celular de Plantas | Finkelstein R.R., University of California at Santa Barbara | Abrams S.R., National Research Council Canada
Annual Review of Plant Biology | Year: 2010

Abscisic acid (ABA) regulates numerous developmental processes and adaptive stress responses in plants. Many ABA signaling components have been identified, but their interconnections and a consensus on the structure of the ABA signaling network have eluded researchers. Recently, several advances have led to the identification of ABA receptors and their three-dimensional structures, and an understanding of how key regulatory phosphatase and kinase activities are controlled by ABA. A new model for ABA action has been proposed and validated, in which the soluble PYR/PYL/RCAR receptors function at the apex of a negative regulatory pathway to directly regulate PP2C phosphatases, which in turn directly regulate SnRK2 kinases. This model unifies many previously defined signaling components and highlights the importance of future work focused on defining the direct targets of SnRK2s and PP2Cs, dissecting the mechanisms of hormone interactions (i.e., cross talk) and defining connections between this new negative regulatory pathway and other factors implicated in ABA signaling. Copyright © 2010 by Annual Reviews. All rights reserved.

Donnelly W., University of California at Santa Barbara | Wall A.C., Institute for Advanced Study
Physical Review Letters | Year: 2015

The vacuum entanglement entropy of Maxwell theory, when evaluated by standard methods, contains an unexpected term with no known statistical interpretation. We resolve this two-decade-old puzzle by showing that this term is the entanglement entropy of edge modes: classical solutions determined by the electric field normal to the entangling surface. We explain how the heat kernel regularization applied to this term leads to the negative divergent expression found by Kabat. This calculation also resolves a recent puzzle concerning the logarithmic divergences of gauge fields in 3+1 dimensions. © 2015 American Physical Society.

Lucini B., University of Swansea | Panero M., Helsinki Institute of Physics | Panero M., University of California at Santa Barbara
Physics Reports | Year: 2013

We review the theoretical developments and conceptual advances that stemmed from the generalization of QCD to the limit of a large number of color charges, originally proposed by 't Hooft. Then, after introducing the gauge-invariant non-perturbative formulation of non-Abelian gauge theories on a spacetime lattice, we present a selection of results from recent lattice studies of theories with different numbers of colors, and the findings obtained from their extrapolation to the 't Hooft limit. We conclude with a brief discussion and a summary. © 2013 Elsevier B.V.

Kids are frequently taught that seven continents exist: Africa, Asia, Antarctica, Australia, Europe, North America, and South America. Geologists, who look at the rocks (and tend to ignore the humans), group Europe and Asia into a supercontinent — Eurasia — making for a total of six geologic continents. But according to a new study of Earth's crust, there's a seventh geologic continent called "Zealandia," and it has been hiding under our figurative noses for millennia. The 11 researchers behind the study say that New Zealand and New Caledonia aren't merely island chains. Instead, they're both part of a single, 4.9 million-square-kilometer (1.89 million-square-mile) slab of continental crust that's distinct from Australia. "This is not a sudden discovery, but a gradual realization; as recently as 10 years ago we would not have had the accumulated data or confidence in interpretation to write this paper," the researchers wrote in GSA Today, a journal of the Geological Society of America. Ten of the researchers work for institutions within the new continent; one works for a university in Australia. But other geologists are almost certain to accept the team's continent-sized conclusions, says Bruce Luyendyk, a geophysicist at the University of California at Santa Barbara, who wasn't involved in the study. "These people here are A-list earth scientists," Luyendyk told Business Insider. "I think they've put together a solid collection of evidence that's really thorough. I don't see that there's going to be a lot of pushback, except maybe around the edges." The concept of Zealandia isn't new. In fact, Luyendyk coined the term in 1995. But Luyendyk says it was never intended to be a new continent. Rather, the name was used to describe New Zealand, New Caledonia, and a collection of submerged pieces and slices of crust that broke off a region of Gondwana, a 200 million-year-old supercontinent. "The reason I came up with this term is out of convenience," Luyendyk said. 
"They're pieces of the same thing when you look at Gondwana. So I thought, 'Why do you keep naming this collection of pieces as different things?'" Researchers behind the new study advanced Luyendyk's idea a huge step further: They took decades' worth of newer evidence and examined it with four criteria that geologists use to deem a slab of rock a continent: Geologists had determined that New Zealand and New Caledonia fit the bill for items one, two, and three. After all, they're large islands that poke up from the sea floor, are geologically diverse, and are made of thicker, less dense crust. This eventually led to Luyendyk's coining of Zealandia and the description of the region as "continental," since it was considered a collection of microcontinents, or bits and pieces of former continents. The authors say the last item on the list — a question of "Is it big enough and unified enough to be its own thing?" — is one that other researchers had skipped over, though by no fault of their own. At a glance, Zealandia seemed broken up. However, the new study used recent and detailed satellite-based elevation and gravity maps of the ancient seafloor to show that Zealandia is indeed part of a unified region. The data also suggests Zealandia spans "approximately the area of greater India" — larger than Madagascar, New Guinea, Greenland, or other pieces of crust. "If the elevation of Earth's solid surface had first been mapped in the same way as those of Mars and Venus (which lack [...] opaque liquid oceans)," they wrote, "we contend that Zealandia would, much earlier, have been investigated and identified as one of Earth's continents." The geologic devil's in the details The authors point out that while India is big enough to be a continent — and probably used to be — it's now part of Eurasia because it collided and stuck to that continent millions of years ago. Zealandia, meanwhile, has not yet smashed into Australia. 
A piece of seafloor called the Cato Trough still separates the continents by 25 kilometers (15.5 miles). Another wrinkle for Zealandia is its division into northern and southern segments by two tectonic plates: the Australian Plate and the Pacific Plate. This split makes the region seem more like a bunch of continental fragments than a unified slab. But the researchers say that Arabia, India, and parts of Central America have similar divisions yet are still considered parts of larger continents. "I'm from California, and it has a plate boundary going through it," Luyendyk said. "In millions of years, the western part will be up near Alaska. Does that make it not part of North America? No." What's more, the researchers wrote, rock samples suggest Zealandia is made of the same continental crust that used to be part of Gondwana and that it migrated in ways similar to Antarctica and Australia. The samples and data also show that Zealandia is not broken up. Instead, plate tectonics have thinned, stretched, and submerged Zealandia over millions of years. Today, only about 5% of it is visible — which is part of the reason it took so long to discover. "The scientific value of classifying Zealandia as a continent is much more than just an extra name on a list," the scientists wrote. "That a continent can be so submerged yet unfragmented makes it a useful and thought-provoking geodynamic end member in exploring the cohesion and breakup of continental crust." Luyendyk said he believes the distinction won't end up as merely a scientific curiosity. He thinks it could have larger, real-world consequences. "The economic implications are clear and come into play: What's part of New Zealand, and what's not part of New Zealand?" he said. United Nations agreements use continental margins to determine which nations can extract off-shore resources — and New Zealand may have tens of billions of dollars' worth of fossil fuels and minerals lurking off its shores.
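As a quick sanity check on the figures quoted in the article, the unit conversion and the visible-area fraction work out with a few lines of arithmetic (a sketch; the conversion factor 1 km² ≈ 0.386102 mi² is mine, not the study's):

```python
# Sanity-check the area figures quoted for Zealandia.
AREA_KM2 = 4.9e6        # continental crust area quoted in the study, km^2
KM2_TO_MI2 = 0.386102   # conversion factor: 1 km^2 ~= 0.386102 mi^2

area_mi2 = AREA_KM2 * KM2_TO_MI2     # matches the quoted 1.89 million mi^2
visible_km2 = 0.05 * AREA_KM2        # ~5% of Zealandia sits above sea level

print(f"{area_mi2 / 1e6:.2f} million mi^2")   # 1.89 million mi^2
print(f"{visible_km2:,.0f} km^2 visible")     # 245,000 km^2 visible
```

The ~245,000 km² above sea level corresponds, as the article notes, mainly to the islands of New Zealand and New Caledonia.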

News Article | February 15, 2017

Shinichiro Michizono from KEK has been appointed as associate director for the International Linear Collider (ILC), taking over from Mike Harrison, while Jim Brau of the University of Oregon has replaced Hitoshi Yamamoto as associate director for physics and detectors. The Linear Collider collaboration, which encompasses the ILC and CLIC, has recently been granted a further three-year mandate by the International Committee for Future Accelerators. The council of the European Southern Observatory (ESO), which builds and operates some of the world’s most powerful ground-based telescopes, has appointed Xavier Barcons as its next director general. The 57-year-old astronomer will take up his new position on 1 September 2017, when the current director general Tim de Zeeuw completes his mandate. Barcons began his career as a physicist, completing a PhD on hot plasmas. In October 2016, Jianwei Qiu joined the Thomas Jefferson National Accelerator Facility as its new associate director for theoretical and computational physics. Qiu, whose research focus is QCD and its applications in both high-energy particle and nuclear physics, will oversee a broad programme of theoretical research in support of the physics studied with the Continuous Electron Beam Accelerator Facility (CEBAF). Rende Steerenberg has been appointed head of operations in CERN’s Beams Department, effective from 1 January 2017. He takes over from Mike Lamont, who has been in the role since 2009 and oversaw operations from the LHC’s rollercoaster start-up to its latest record performance. Lamont remains deputy group leader of the Beams Department. Former CERN Director-General Rolf-Dieter Heuer has been appointed Chevalier de la Légion d’Honneur (Knight of the Legion of Honour), one of the highest recognitions of achievement in France.
Heuer, who is currently president of the German Physical Society (DPG) and president-elect of the SESAME Council, among other roles, was presented with the medal on 22 November at the residence of the French permanent representative in Geneva. The 2017 Breakthrough Prize in Fundamental Physics has been awarded to Joseph Polchinski, University of California at Santa Barbara, and Andrew Strominger and Cumrun Vafa of Harvard University. The three winners, who received the $3 million award at a glitzy ceremony in San Francisco on 4 December, have made important contributions to fundamental physics including quantum gravity and string theory. Polchinski was recognised in particular for his discovery of D-branes, while the citation for Strominger and Vafa included their derivation of the Bekenstein–Hawking area-entropy relation, which unified the laws of thermodynamics and black-hole dynamics. Recipients of the previously announced Special Prize in Fundamental Physics – Ronald Drever and Kip Thorne of Caltech and Rainer Weiss of MIT, who were recognised in May along with the entire LIGO team for the discovery of gravitational waves – were also present. A further prize, the $100,000 New Horizons in Physics Prize, went to six early-career physicists: Asimina Arvanitaki (Perimeter Institute), Peter Graham (Stanford University) and Surjeet Rajendran (University of California, Berkeley); Simone Giombi (Princeton University) and Xi Yin (Harvard University); and Frans Pretorius (Princeton). This year’s Breakthrough Prize, which was founded in 2012 by Sergey Brin, Anne Wojcicki, Yuri and Julia Milner, Mark Zuckerberg and Priscilla Chan, saw $25 million in prizes awarded for achievements in the life sciences, fundamental physics and mathematics. On 30 November, the Alexander von Humboldt Foundation in Bonn, Germany, granted a Humboldt Research Award to Raju Venugopalan, a senior physicist at Brookhaven National Laboratory and Stony Brook University. 
The €60,000 award recognises Venugopalan’s achievements in theoretical nuclear physics, and comes with the opportunity to collaborate with German researchers at Heidelberg University and elsewhere. US physicist and science policy adviser Richard Garwin was awarded the Presidential Medal of Freedom at a White House ceremony on 22 November. The award is the highest honour that the US government can confer on civilians. Garwin was recognised for his long career in research and invention, which saw him play a leading role in the development of the hydrogen bomb, and for his advice to policy makers. Introducing Garwin, President Obama remarked: “Dick’s not only an architect of the atomic age. Reconnaissance satellites, the MRI, GPS technology, the touchscreen all bear his fingerprints – he even patented a mussel washer for shellfish. Dick has advised nearly every president since Eisenhower, often rather bluntly. Enrico Fermi, also a pretty smart guy, is said to have called Dick the only true genius he ever met.” Fumihiko Suekane of Tohoku University, Japan, has been awarded a 2016 Blaise Pascal Chair to further his research into neutrinos. Established in 1996, and named after the 17th-century French polymath Blaise Pascal, the €200,000 grant allows researchers from abroad to work on a scientific project in an institution in the Ile-de-France region. Suekane will spend a year working at the Astroparticle and Cosmology Laboratory in Paris, where he will focus on R&D for novel neutrino detectors and measurements of reactor neutrinos. In late 2016, theorists Mikhail Danilov, from the Lebedev Institute in Moscow, Sergio Ferrara from CERN and David Gross from the Kavli Institute for Theoretical Physics and the University of California in Santa Barbara were elected as members of the Russian Academy of Sciences. Established in 1724, the body has more than 2000 members.
President of the Republic of Poland, Andrzej Duda, visited CERN on 15 November and toured the CERN Control Centre. SLAC Director Chi-Chang Kao signed the guestbook with CERN Director-General Fabiola Gianotti on 23 November. From 28 November to 2 December, more than 200 flavour physicists gathered at the Tata Institute of Fundamental Research in Mumbai for the 9th International Workshop on the Cabibbo–Kobayashi–Maskawa Unitarity Triangle (CKM2016). The workshop focuses on weak transitions of quarks from one flavour to another, as described by the CKM matrix, and on the charge–parity (CP) violation present in these transitions, as visualised by the unitarity triangle (UT). Input from theory, particularly lattice QCD, is vital to fully leverage the power of such measurements. It is an exciting time for flavour physics. The mass scales potentially involved in such weak processes are much higher than those that can be directly probed at the LHC, due to the presence of quantum loops that mediate many of the processes of interest, such as B0–B̄0 mixing. In contrast to the absence of new particles so far at the energy frontier, LHCb and other B factories already have significant hints of deviations between measurements and Standard Model (SM) predictions. An example is the persistent discrepancy in the measured differential distributions of the decay products of the rare flavour-changing neutral-current process B0 → K*0 μ+ μ–, first reported by the LHCb collaboration in 2015. A highlight of CKM2016 was the presentation of first results of the same distributions from the Belle experiment in Japan, which also included the related but previously unmeasured process B0 → K*0 e+ e–. The Belle results are more compatible with those of LHCb than with the SM, further supporting the idea that new physics may be manifesting itself, via interference effects, in these observables.
Progress on measuring CP violation in B decays was also reported, with LHCb presenting the first evidence for time-dependent CP violation in the decay of B0 mesons in two separate final states, D+ K– and K+ K–. The latter involves loop diagrams allowing a new-physics-sensitive determination of a UT angle (γ) that can be compared to a tree-level SM determination in the decay B– → D0 K–. For the first time, LHCb also presented results with data from LHC Run 2, which is ultimately expected to increase the size of the LHCb data samples by approximately a factor of four. Longer term, the Belle II experiment based at the SuperKEKB collider recently enjoyed its first beam, and will begin its full physics programme in 2018. By 2024, Belle II should have collected 50 times more data than Belle, allowing unprecedented tests of rare B-meson decays and precision CP-violation measurements. On the same timescale, the LHCb upgrade will also be in full swing, with the goal of increasing the data size by at least a factor of 10 compared to Run 1 and Run 2. Plans for a second LHCb upgrade presented at the meeting would allow LHCb, given the long-term future of the LHC, to run at much higher instantaneous luminosities and yield an enormous data set by 2035. With more data, the puzzles of flavour physics will be resolved thanks to the ongoing programme of LHCb, imminent results from rare-kaon-decay experiments (KOTO and NA62), and the Belle II/LHCb upgrade projects. No doubt there will be more revealing results by the time of the next CKM workshop, to be held in Heidelberg in September 2018. While there are many conferences focusing on physics at the high-energy frontier, the triennial PSI workshop at the Paul Scherrer Institute (PSI) in Switzerland concerns searches for new phenomena at non-collider experiments. These are complementary to direct searches at the LHC and often cover a parameter space that is beyond the reach of the LHC or even future colliders.
The fourth workshop in this series, PSI2016, took place from 16–21 October and attracted more than 170 physicists. Theoretical overviews covered: precision QED calculations; beyond-the-Standard-Model implications of electric-dipole-moment (EDM) searches; axions and other light exotic particles; flavour symmetries; the muon g-2 problem; NLO calculations of the rare muon decay μ → eeeνν; and possible models to explain the exciting flavour anomalies presently seen in B decays. On the experimental side, several new results were presented. Fundamental neutron physics featured prominently, ranging from cold-neutron-beam experiments to those with stored ultracold neutrons at facilities such as ILL, PSI, LANL, TRIUMF and Mainz. Key experiments are measurements of the neutron lifetime, searches for a permanent EDM, measurements of beta-decay correlations and searches for exotic interactions. The future European Spallation Source in Sweden will also allow a new and much improved search for neutron–antineutron oscillations. Atomic physics and related methods offer unprecedented sensitivity to fundamental-physics aspects ranging from QED tests, parity violation in weak interactions, EDM and exotic physics to dark-matter (DM) and dark-energy searches. With the absence of signals from direct DM searches so far, light and ultralight DM is a focus of several upcoming experiments. Atomic physics also comprises precision spectroscopy of exotic atoms, and several highlight talks included the ongoing efforts at CERN’s Antiproton Decelerator with antihydrogen and with light muonic atoms at J-PARC and at PSI. For antiprotons and nuclei, impressive results from recent Penning-trap mass and g-factor measurements were presented with impacts on CPT tests, bound-state QED tests and more. 
Major international efforts devoted to muons and their lepton-flavour-violating decays are under way at PSI (μ → eγ, μ → eee) and at FNAL and J-PARC (μ → e conversion), and the upcoming muon g-2 experiments at FNAL and J-PARC have reported impressive progress. Last but not least, rare kaon decays (at CERN and J-PARC), new long-baseline neutrino oscillation results, developments towards direct neutrino-mass measurements, and CP and CPT tests with B mesons were reported. The field of low-energy precision physics has grown fast over the past few years, and participants plan to meet again at PSI in 2019. The fields of nanomaterials and nanotechnology are quickly evolving, with discoveries frequently reported across a wide range of applications including nanoelectronics, sensor technologies, drug delivery and robotics, in addition to the energy and healthcare sectors. At an academia–industry event on 20–21 October at GSI in Darmstadt, Germany, co-organised by the technology-transfer network HEPTech, delegates explored novel connections between nanotechnology and high-energy physics (HEP). The forum included an overview of the recent experiments at DESY’s hard X-ray source PETRA III, which allows the investigation of physical and chemical processes in situ and under working conditions and serves a large user community in many fields including nanotechnology. Thermal-scanning probe lithography, an increasingly reliable method for rapid and low-cost prototyping of 2D and quasi-3D structures, was also discussed. Much attention was paid to the production and application of nanostructures, where the achievements of the Ion Beam Center at Helmholtz-Zentrum Dresden-Rossendorf in surface nanostructuring and nanopatterning were introduced. UK firm Hardide Coatings Ltd presented its advanced surface-coating technology, at the core of which are nano-structured tungsten-carbide-based coatings that have promising applications in HEP and vacuum engineering.
Industry also presented ion-track technology, which is being used to synthesise 3D interconnected nanowire networks in micro-batteries or gas sensors, among other applications. Neutron-research infrastructures and large-scale synchrotrons are emerging as highly suitable platforms for the advanced characterisation of micro- and nano-electronic devices, and the audience heard the latest developments from the IRT Nanoelec Platform for Advanced Characterisation of Grenoble. The meeting addressed how collaboration between academia and industry in the nanotechnology arena can best serve the needs of HEP, with CERN presenting applications in gaseous detectors using the charge-transfer properties of graphene. The technology-transfer office at DESY also shared its experience in developing a marketing strategy for promoting the services of the DESY NanoLab to companies. Both academia and industry representatives left the event with a set of contacts and collaboration arrangements. On 24–25 November, academics and leading companies in the field of superconductivity met in Madrid, Spain, to explore the technical challenges of applying new accelerator technology to medicine. Organised by CIEMAT in collaboration with HEPTech, EUCARD2, CDTI, GSI and the Enterprise Europe Network, the event brought together 120 participants from 19 countries to focus on radioisotope production, particle therapy and gantries. Superconductivity has a range of applications in energy, medicine, fusion and high-energy physics (HEP). The latter are illustrated by CERN’s high-luminosity LHC (HL-LHC), now approaching construction with superconducting magnets made from advanced Nb3Sn technology capable of 12 T fields. The HL-LHC demands greatly advanced superconducting cavities with more efficient and higher-gradient RF systems, plus the development of new devices such as crab cavities that can deflect or rotate single bunches of protons.
On the industry side, new superconducting technology is ready to go into production for medical applications. A dedicated session presented novel developments in cyclotron production, illustrated by the AMIT project of CIEMAT (based on a cyclotron with a compact superconducting design that will be able to produce low-to-moderate rates of dose-on-demand 11C and 18F) and the French industry–academia LOTUS project system, which features a compact 12 MeV superconducting helium-free magnet cyclotron suitable for the production of these isotopes in addition to 68Ga. Antaya Science and Technology, meanwhile, reported on the development of a portable high-field superconducting cyclotron for the production of ammonia-13N in near proximity to the PET cameras. The meeting also heard from MEDICIS, the new facility under construction at CERN that will extend the capabilities of the ISOLDE radioactive ion-beam facility for production of radiopharmaceuticals and develop new accelerator technologies for medical applications (CERN Courier October 2016 p28). Concerning particle therapy, industry presented medical accelerators such as the MEVION S250 – a proton-therapy system based on a gantry-mounted 250 MeV superconducting synchrocyclotron that weighs less than 15 tonnes and generates magnetic fields in excess of 10 T. Global medical-technology company IBA described its two main superconducting cyclotrons for particle therapy: the Cyclone 400 for proton/carbon therapy and the S2C2 dedicated to proton therapy, with a particular emphasis on their superconducting coil systems. IBA also introduced the latest developments concerning ProteusONE – a single-room system that delivers the most clinically advanced form of proton-radiation therapy. 
Researchers from MIT in the US presented a novel compact superconducting synchrocyclotron based on an ironless magnet with a much reduced weight, while the TERA Foundation in Italy is developing superconducting technology for “cyclinacs” – accelerators that combine a cyclotron injector and a linac booster. Finally, the session on gantries covered developments such as a superconducting bending-magnet section for future compact isocentric gantries by researchers at the Paul Scherrer Institute, and a superconducting rotating gantry for carbon radiotherapy designed by the Japanese National Institute of Radiological Sciences. With demand for medical isotopes and advanced cancer therapy rising, we can look forward to rich collaborations between accelerator physics and the medical community in the coming years.

The fifth in the series of Higgs Couplings workshops, which began just after the Higgs-boson discovery in 2012 to bring together theorists and experimentalists, was held at SLAC on 9–12 November and drew 148 participants from five continents. Discussions focused on lessons from the current round of LHC analyses that could be applied to future data. Modelling of signal and background is already limiting for some measurements, and new theoretical results and strategies were presented. Other key issues were the use of vector-boson fusion production as a tool, and the power and complementarity of diverse searches for heavy Higgs bosons. Two new themes emerged at the meeting. The first was the possibility of exotic decays of the 125 GeV Higgs boson. These include not only Higgs decays to invisible particles but also decays to lighter Higgs particles, light quarks and leptons (possibly with flavour violation) and new, long-lived particles. First results from a number of ATLAS and CMS searches were reported. The second, debated at the workshop, was the application of effective field theory as a framework for parametrising precise Higgs measurements.
The 6th Higgs Couplings meeting will be held in Heidelberg on 6–10 November 2017. We look forward to new ideas for the creative use of the large data samples of Higgs bosons that will become available as the LHC programme continues.

The 8th International Conference on Hard and Electromagnetic Probes of High-energy Nuclear Collisions (Hard Probes 2016) was held in Wuhan, China, on 23–27 September. Hard and electromagnetic probes are powerful tools for studying the novel properties of the hot and dense QCD matter created in high-energy nucleus–nucleus collisions, and have provided much important evidence for the formation of quark–gluon plasma (QGP) in heavy-ion collisions at RHIC and the LHC. Hard Probes 2016 attracted close to 300 participants from 28 countries. The main topics discussed were: jet production and modification in QCD matter; high transverse-momentum hadron spectra and correlations; jet-induced medium excitations; jet properties in small systems; heavy-flavour hadrons and quarkonia; photons and dileptons; and initial states and related topics. The most recent experimental progress on hard and electromagnetic probes from the ALICE, ATLAS, CMS, LHCb, PHENIX and STAR collaborations was discussed, together with many new and exciting theoretical and phenomenological developments. The next Hard Probes conference will be held in Aix-les-Bains, France, in 2018.

The International Symposium on EXOtic Nuclei (EXON-2016) took place from 5–9 September in Kazan, Russia, attracting around 170 nuclear experts from 20 countries. The scientific programme focused on recent experiments on the synthesis and study of new super-heavy elements, the discovery of which demonstrates the efficiency of international co-operation. Interesting results were obtained in joint experiments on the chemical identification of elements 112 and 114 performed at JINR (Russia), GSI (Germany) and the Paul Scherrer Institute (Switzerland).
A vivid example of co-operation with US scientists is an experiment on the synthesis of element 117 performed at the JINR cyclotron. Recently, the International Union of Pure and Applied Chemistry approved the discovery of the new elements with atomic numbers 113 (“nihonium”), 115 (“moscovium”), 117 (“tennessine”) and 118 (“oganesson”). Five laboratories, which are the co-founders of the symposium, are now creating a new generation of accelerators for the synthesis and study of new exotic nuclei. Projects such as SPIRAL2, the RIKEN RI Beam Factory, FAIR, DRIBs, NICA and FRIB will allow us to probe further towards the upper limits of the periodic table.

The CERN Accelerator School (CAS) and the Wigner Research Centre for Physics jointly organised an introduction-to-accelerator-physics course in Budapest, Hungary, from 2–14 October, attended by more than 120 participants of 28 nationalities. This year, CAS will organise a specialised course on beam injection, extraction and transfer (to be held in Erice, Sicily, from 10–19 March) and a second specialised course on vacuum for particle accelerators (near Lund, Sweden, from 6–16 June). The next course on advanced accelerator physics will be held in the UK in early September, and a Joint International Accelerator School on RF technology will be held in Hayama, Japan, from 16–26 October.

Helie S.,Purdue University | Ell S.W.,University of Maine, United States | Ashby F.G.,University of California at Santa Barbara
Cortex | Year: 2015

This article focuses on the interaction between the basal ganglia (BG) and prefrontal cortex (PFC). The BG are a group of nuclei at the base of the forebrain that are highly connected with cortex. A century of research suggests that the role of the BG is not exclusively motor, and that the BG also play an important role in learning and memory. In this review article, we argue that one important role of the BG is to train connections between posterior cortical areas and frontal cortical regions that are responsible for automatic behavior after extensive training. According to this view, one effect of BG trial-and-error learning is to activate the correct frontal areas shortly after posterior associative cortex activation, thus allowing for Hebbian learning of robust, fast, and efficient cortico-cortical processing. This hypothesized process is general, and the content of the learned associations depends on the specific areas involved (e.g., associations involving premotor areas would be more closely related to behavior than associations involving the PFC). We review experiments aimed at pinpointing the function of the BG and the frontal cortex and show that these results are consistent with the view that the BG are a general-purpose trainer for cortico-cortical connections. We conclude with a discussion of some implications of the integrative framework and how it can help us better understand the role of the BG in many different tasks. © 2014 Elsevier Ltd.
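The hypothesized mechanism can be caricatured in a few lines of NumPy (a hypothetical toy, not the authors' computational model: the network sizes, stimulus patterns and learning rate are all made up here). The BG loop is reduced to an oracle that activates the correct frontal unit on each trial; Hebbian co-activation then builds a direct posterior-to-frontal weight matrix that eventually reproduces the mapping on its own:

```python
import numpy as np

rng = np.random.default_rng(0)

n_posterior, n_frontal = 8, 4
W = np.zeros((n_frontal, n_posterior))   # direct cortico-cortical weights, initially absent
eta = 0.1                                # Hebbian learning rate

# Four stimulus categories, each a distinct posterior activation pattern,
# and the frontal unit the BG learn (by trial and error) to gate on.
stimuli = np.eye(n_posterior)[:4] + 0.1 * rng.random((4, n_posterior))
correct_frontal = np.arange(4)

for _ in range(100):                     # extensive training
    for x, k in zip(stimuli, correct_frontal):
        y = np.zeros(n_frontal)
        y[k] = 1.0                       # frontal activation supplied via the BG loop
        W += eta * np.outer(y, x)        # Hebbian co-activation rule

# After training, the direct cortical route alone selects the right frontal unit.
for x, k in zip(stimuli, correct_frontal):
    assert np.argmax(W @ x) == k
print("direct cortico-cortical route reproduces the BG-trained mapping")
```

Once the direct route carries the mapping by itself, behavior no longer depends on the slow BG loop, mimicking the shift to fast, automatic cortico-cortical processing described above.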

Hacker B.R.,University of California at Santa Barbara | Kelemen P.B.,Lamont Doherty Earth Observatory | Behn M.D.,Woods Hole Oceanographic Institution
Earth and Planetary Science Letters | Year: 2011

Crust extracted from the mantle in arcs is refined into continental crust in subduction zones. During sediment subduction, subduction erosion, arc subduction, and continent subduction, mafic rocks become eclogite and may sink into the mantle, whereas more silica-rich rocks are transformed into felsic gneisses that are less dense than peridotite but more dense than the upper crust. These more felsic rocks rise buoyantly, undergo decompression melting and melt extraction, and are relaminated to the base of the crust. As a result of this process, such felsic rocks could form much of the lower crust. The lower crust need not be mafic and the bulk continental crust may be more silica rich than generally considered. © 2011 Elsevier B.V.

Lentz S.J.,Woods Hole Oceanographic Institution | Fewings M.R.,University of California at Santa Barbara
Annual Review of Marine Science | Year: 2012

The inner continental shelf, which spans water depths of a few meters to tens of meters, is a dynamically defined region that lies between the surf zone (where waves break) and the middle continental shelf (where the along-shelf circulation is usually in geostrophic balance). Many types of forcing that are often neglected over the deeper shelf-such as tides, buoyant plumes, surface gravity waves, and cross-shelf wind stress-drive substantial circulations over the inner shelf. Cross-shelf circulation over the inner shelf has ecological and geophysical consequences: It connects the shore to the open ocean by transporting pollutants, larvae, phytoplankton, nutrients, and sediment. This review of circulation and momentum balances over the inner continental shelf contrasts prior studies, which focused mainly on the roles of along-shelf wind and pressure gradients, with recent understanding of the dominant roles of cross-shelf wind and surface gravity waves. Copyright © 2012 by Annual Reviews. All rights reserved.
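For context, the middle-shelf balance against which the inner shelf is defined can be stated with the standard surface Ekman transport relation (a textbook formula, not taken from the abstract):

$$ U_{Ek} = \frac{\tau^{s}}{\rho_0 f}, $$

where $\tau^{s}$ is the along-shelf component of the wind stress, $\rho_0$ a reference water density and $f$ the Coriolis parameter. Over the inner shelf the surface and bottom boundary layers overlap in the shallow water column, the realized cross-shelf transport drops to a small fraction of $U_{Ek}$, and otherwise-secondary forcings such as cross-shelf wind stress and surface gravity waves take over.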

News Article | February 21, 2017

Debra Fiakas is the Managing Director of Crystal Equity Research, an alternative research resource on small-capitalization companies in selected industries. Last month BioSolar (OTC/PK: BSRC) reported positive test results for its proprietary energy-storage technology. The company is developing an alternative anode material for lithium-ion batteries using silicon–carbon materials. BioSolar's engineers are targeting dramatic improvements in anode performance and equally impressive reductions in cost. If they are successful, it could mean longer lithium-ion battery life, greater capacity and shorter charging time – the dreams of every manufacturer with an electronic product. Most lithium-ion batteries rely on graphite for the battery anode. However, silicon anodes could offer as much as ten times the capacity of anodes made with graphite. Unfortunately, silicon has a few downsides that make it unreliable as well as unaffordable. BioSolar is working to overcome those downsides and make silicon anodes an affordable alternative by using a silicon alloy. BioSolar is also working on the other important battery component – the cathode. Existing lithium-ion batteries are limited by the capacity of the cathode. The company has developed a new cathode made from a conductive polymer that can withstand more charge–discharge cycles. This would extend the life of the lithium-ion battery and lower the overall cost of operation. In June 2016, the company filed an application for patent protection of its proprietary process and material for high-capacity cathodes. The company is not alone in the quest for a better lithium-ion battery. There are others experimenting with polymers and silicon-based materials for lithium-ion energy storage.
For example, researchers at the University of Leeds in the United Kingdom, Lawrence Berkeley National Laboratory in California, Wuhan University of Technology in China, and Pacific Northwest National Laboratory in Washington are just four of several research and development groups publishing papers on their experiments with conductive polymers. The activity could be a source of competition, support or distraction for BioSolar. For example, the University of Leeds has licensed its technology to privately held Polystor Energy Corporation in the US, which planned to commercialize the Leeds polymer gel for use as the electrolyte in a lithium-ion battery. While Polystor would not have competed against BioSolar's anode or cathode materials, its progress or lack of progress could have an impact on investors' view of polymer technology in the energy-storage sector. BioSolar's research and development efforts are led by its chief technology officer, Dr. Stanley Levy. With a dozen patents in his own name, Levy has been recognized by his peers for technical work on plastics and film development. His prior experience includes stints at DuPont, Global Solar and Solar Integrated Technologies. Besides Levy's mechanical engineering background, BioSolar's chief executive officer, David Lee, brings electrical engineering education and experience to the team. Lee founded BioSolar after working in various engineering positions at the electronics, space and defense units of TRW as well as management roles at RF-Link Technology, Inc. and Applied Reasoning, Inc. A plus for BioSolar is Lee's time in the trenches in marketing and sales, which will be needed to turn the company's technology into marketable products. As a development-stage company, BioSolar has no revenue. Its operations are limited to managing sponsored research activities. BioSolar has received support for its research activities from the University of California at Santa Barbara.
For the rest of its work the company relies on cash resources to support its development plans. Operating expenses have been near $600,000 per quarter. One of the company's most significant expenses is a research arrangement with North Carolina Agricultural and Technical State University, which is conducting tests on BioSolar's polymer and silicon-alloy materials. With only $244,776 in its bank account at the end of September 2016, any investor considering a position in BSRC might take pause. Since the close of the September quarter, the company has entered into an arrangement for an unsecured convertible promissory note for up to $500,000. Nonetheless, expect more capital-raising efforts involving dilutive securities of some kind or another. There is really no other way to entice investors to an early-stage company than to offer a piece of the pie. The company has not yet found sponsorship by a large investor or strategic partner, so capital-raising activities appear to be limited to individual investors. For investors with no interest in a private placement, there are shares quoted on the over-the-counter market. At a nickel, the shares are priced like options on management's ability to reach its development milestones before running out of money. It is a high-risk proposition, but one that could yield exceptional returns if BioSolar succeeds in getting its materials into a working prototype battery and the market recognizes some value in the accomplishment. Neither the author of the Small Cap Strategist web log, Crystal Equity Research, nor its affiliates have a beneficial interest in the companies mentioned herein.

Miao M.-S.,Beijing Computational Science Research Center | Miao M.-S.,University of California at Santa Barbara | Hoffmann R.,Cornell University
Accounts of Chemical Research | Year: 2014

Conspectus: Electrides, in which electrons occupy interstitial regions in the crystal and behave as anions, appear as new phases for many elements (and compounds) under high pressure. We propose a unified theory of high-pressure electrides (HPEs) by treating electrons in the interstitial sites as filling the quantized orbitals of the interstitial space enclosed by the surrounding atom cores, generating what we call an interstitial quasi-atom, ISQ. With increasing pressure, the energies of the valence orbitals of atoms increase more significantly than the ISQ levels, due to repulsion and exclusion by the atom cores, which effectively give the valence electrons less room in which to move. At a high enough pressure, which depends on the element and its orbitals, the frontier atomic electron may become higher in energy than the ISQ, resulting in electron transfer to the interstitial space and the formation of an HPE. By using a He lattice model to compress atoms and an interstitial space (with minimal orbital interaction at moderate pressures between the surrounding He and the contained atoms or molecules), we are able to semiquantitatively explain and predict the propensity of various elements to form HPEs. The slopes in energy of the various orbitals with pressure (s > p > d) are essential for identifying trends across the entire periodic table. We predict that the elements forming HPEs under 500 GPa will be Li, Na (both already known to do so), Al and, near the high end of this pressure range, Mg, Si, Tl, In and Pb. Ferromagnetic electrides for the heavier alkali metals, suggested by Pickard and Needs, potentially compete with transformation to d-group metals. © 2014 American Chemical Society.

Miao M.-S.,University of California at Santa Barbara | Miao M.-S.,Beijing Computational Science Research Center
Nature Chemistry | Year: 2013

The periodicity of the elements and the non-reactivity of the inner-shell electrons are two related principles of chemistry, rooted in the atomic shell structure. Within compounds, Group I elements, for example, invariably assume the +1 oxidation state, and their chemical properties differ completely from those of the p-block elements. These general rules govern our understanding of chemical structures and reactions. Here, first-principles calculations show that, under pressure, caesium atoms can share their 5p electrons to become formally oxidized beyond the +1 state. In the presence of fluorine and under pressure, the formation of CsFn (n > 1) compounds containing neutral or ionic molecules is predicted. Their geometry and bonding resemble those of isoelectronic XeFn molecules, showing a caesium atom that behaves chemically like a p-block element under these conditions. The calculated stability of the CsFn compounds shows that the inner-shell electrons can become the main components of chemical bonds. © 2013 Macmillan Publishers Limited. All rights reserved.

Mitragotri S.,University of California at Santa Barbara | Lahann J.,University of Michigan | Lahann J.,Karlsruhe Institute of Technology
Advanced Materials | Year: 2012

Innovative materials that address the complex biological challenges of drug delivery are presented. Biologically responsive nanoparticles are being developed in which the drug is encapsulated within a carrier particle that continues to shield it from the human body, even after administration. There has been an increasing interest in organic/inorganic hybrid particles that feature dual functionality for imaging and therapy. Researchers have also developed strategies to optimize surface concentrations of PEG and targeting ligands to strike a balance between prolonged circulation and effective tissue accumulation. In vitro studies using tumor spheroids, supplemented by mathematical models, have shown that nanoparticles of 20 and 40 nm in diameter are able to accumulate in the interior of the spheroid after treatment with collagenase.

Agency: National Aeronautics and Space Administration | Branch: | Program: STTR | Phase: Phase II | Award Amount: 749.92K | Year: 2015

Silicon carbide-based ceramic matrix composites (CMCs) offer the potential to fundamentally change the design and manufacture of aeronautical and space propulsion systems to significantly increase performance and fuel efficiency over current metal-based designs. Physical Sciences Inc. (PSI) and our team members at the University of California Santa Barbara (UCSB) are developing, designing and fabricating enhanced SiC-based matrices capable of long-term operation at 2750 °F to 3000 °F in the combustion environment. Our approach successfully builds upon PSI's and UCSB's previous work incorporating refractory and rare-earth species into the SiC matrix to increase the CMC use temperatures and lifetime capabilities by improving the protective oxide passivation layer that forms during use. As part of this work we are creating physics-based materials and process models that qualitatively define methods of improving matrix properties and the interaction of the fibers, interphases and matrix with each other. In the Phase I program the PSI team developed and experimentally demonstrated CMCs capable of withstanding hundreds of hours of oxidation at 2700 °F with no degradation. We have focused on predicting the effect of phase distribution, grain size, chemical composition, matrix density, and surface flaws on the oxidation behavior of the CMC matrix. During the Phase II program we will iteratively improve the CMC performance by optimizing the composition and characteristics of the additives based on oxidation and mechanical test results and burner-rig exposure testing.

Agency: National Aeronautics and Space Administration | Branch: | Program: STTR | Phase: Phase I | Award Amount: 124.93K | Year: 2014

Silicon carbide-based ceramic matrix composites (CMCs) offer the potential to fundamentally change the design and manufacture of aeronautical and space propulsion systems to significantly increase performance and fuel efficiency over current metal-based designs. Physical Sciences Inc. (PSI) and our team members at the University of California Santa Barbara (UCSB) will develop, design and fabricate enhanced SiC-based matrices capable of long-term operation at 2750 °F to 3000 °F in the combustion environment. Our approach will build upon PSI's and UCSB's previously successful work incorporating refractory and rare-earth species into the SiC matrix to increase the CMC use temperatures and lifetime capabilities by improving the protective oxide passivation layer that forms during use. As part of this work we will create physics-based materials and process models that qualitatively define methods of improving matrix properties and the interaction of the fibers, interphases and matrix with each other. In the Phase I program the PSI team will focus on performing experiments and developing models predicting the effect of phase distribution, grain size, chemical composition, matrix density, and surface flaws on the oxidation behavior of the CMC matrix. During the Phase II program we will iteratively improve the CMC performance by optimizing the composition and characteristics of the additives based on oxidation and mechanical test results.

Shieh and University of California at Santa Barbara | Date: 2013-01-02

A full-color AM OLED includes a transparent substrate, a color filter positioned on an upper surface of the substrate, and a metal oxide thin-film transistor backpanel positioned in overlying relationship on the color filter and defining an array of pixels. An array of OLEDs is formed on the backpanel and positioned to emit light downwardly through the backpanel, the color filter, and the substrate in a full-color display. Light emitted by each OLED includes a first emission band with wavelengths extending across the range of two of the primary colors and a second emission band with wavelengths extending across the range of the remaining primary color. The color filter includes, for each pixel, two zones that separate the first emission band into its two primary colors and a third zone that passes the second emission band.

News Article | March 2, 2017

Welcome back to the Mind Over Money podcast. I’m Kevin Cook, your field guide and storyteller for the fascinating arena of behavioral economics. I’m excited about today’s topics because we are going to talk about how our brains use stories to make decisions. In fact, after I tell you a few stories about brains and stories, you’ll start to wonder which came first – brains or stories! Obviously a brain has to exist first for there to be a story to experience, remember, and retell. But you should begin to see how stories help form neural pathways, perception, and other cognitive functions – and thus our decision-making. I begin with the current world capital of storytelling, Hollywood. Film producer and founder of PolyGram, Peter Guber, known for blockbusters from Batman to Rain Man, wrote an article in the March 2011 edition of Psychology Today titled "The Inside Story." His goal was to explain the deep human need for stories from two vantage points: first, from his deep experience helping people craft and tell theirs, and second, from the brain scientists who have discovered their importance in our daily lives. Guber achieves his goal masterfully. After relating how Missing, the Oscar-winning film starring Jack Lemmon, almost didn’t get made, he introduces the work of some giants of neuroscience that you should know about... Stories, it turns out, are not optional. They are essential. Our need for them reflects the very nature of perceptual experience, and storytelling is embedded in the brain itself. While we all feel ourselves to be unified creatures, that is not the reality of our experience or our brains. There is no central command post in the brain, says neuroscientist Michael Gazzaniga, professor of psychology at the University of California at Santa Barbara.
Rather, there are millions of highly specialized local processors—circuits for vision, for other sensory data, for motor control, for specific emotions, for cognitive representations, just to name a few modules—distributed throughout the brain carrying out the neural processes of experience. What's more, Washington University neuroscientist Jeffrey Zacks told me, such modules monitor external experience not continuously but in a kind of punctuated way, a process he calls event sampling. "The mind/brain segments ongoing activity into meaningful events," he says. How is it, then, that they function as an integrated whole and we experience ourselves that way? Because we tell ourselves stories, Gazzaniga says. There is in fact a processor in our left hemisphere that is driven to explain events to make sense out of the scattered facts. The explanations are all rationalizations based on the minuscule portion of mental actions that make it into our consciousness. There is much more in his lengthy piece from both Gazzaniga and Zacks. But in three short sentences, he sums it all up... “We literally create ourselves through narrative. Narrative is more than a literary device—it's a brain device. Small wonder that stories can be so powerful.” In today’s edition of the Mind Over Money podcast, I also share the research and insights of neuroeconomist Paul Zak who runs the Center for Neuroeconomics Studies at Claremont Graduate University. There, he and his team investigate the neurophysiology of economic decisions drawing on research from economic theory, experimental economics, neuroscience, endocrinology, and psychology to develop a comprehensive understanding of human decisions. Zak is also the author of The Moral Molecule and Trust Factor. What’s interesting about Zak is that he is not a doctor of the life sciences, but he has become a trail-blazing researcher of brain biology because of his intense interests in the economics and the morality of human decision making. 
Early in his career, Zak wanted to know what made us moral, or not. He learned about the biological effects of the hormone oxytocin, produced in both women and men, and he decided to focus on the aspect of human relationships centered around trust. If you watch his 2011 TED Talk, you can get the full story, but in my podcast I actually read a few paragraphs of the transcript to give you his first-hand account (or would that be a second-hand story?). As excited as I was by his story, I was nearly equally demoralized to hear part of it “debunked” by other experts in neuroscience. But the bottom line remains, our brains are a “bag of hormones” that compel our behavior as much as we “think” our mind is driving the bus.

News Article | February 21, 2017

SUNNYVALE, CA--(Marketwired - Feb 21, 2017) - AMD today announced the appointment of John W. Marren, 53, to its board of directors, coinciding with Martin Edelman's decision to step down as a member of the company's board of directors, a position he has held since 2013. Marren's 30-year career spans both the financial and technology industries, with a deep focus on semiconductors. He retired from Texas Pacific Group (TPG) Capital in 2015 after spending 16 years at the firm as senior partner and head of technology investments. Prior to TPG, he was managing director and co-head of the Technology Investment Banking Group at Morgan Stanley, and before that was a managing director at Alex. Brown & Sons. Before shifting his focus to finance, Marren spent seven years in various technical and business roles at VLSI Technology and Vitesse Semiconductor. He currently serves on a number of private company boards, including Avaya Inc., Infinidat, Inc., and Isola Group. "John brings substantial board, financial, and technology industry experience as well as strong semiconductor knowledge that make him a valuable addition to AMD as the company enters an exciting growth phase driven by a strengthened and expanded portfolio of new products," said John Caldwell, AMD's chairman of the board. "On behalf of the AMD Board, I would also like to express our thanks to Marty for his four years of service as a director. We are grateful for his counsel and insight that has helped AMD transform and build a solid foundation for growth." Marren holds a Bachelor of Science degree in electrical engineering from the University of California at Santa Barbara. He is also a Trustee of the University of California, Santa Barbara and a member of the US Olympic and Paralympic Foundation Board. He previously served on the boards of MEMC Electronic Materials, ON Semiconductor, Freescale Semiconductor, SunGard Data Systems, and Vertafore Software.
About AMD
For more than 45 years AMD has driven innovation in high-performance computing, graphics, and visualization technologies -- the building blocks for gaming, immersive platforms, and the datacenter. Hundreds of millions of consumers, leading Fortune 500 businesses, and cutting-edge scientific research facilities around the world rely on AMD technology daily to improve how they live, work, and play. AMD employees around the world are focused on building great products that push the boundaries of what is possible. For more information about how AMD is enabling today and inspiring tomorrow, visit the AMD website, blog, Facebook and Twitter pages. AMD, the AMD Arrow logo and combinations thereof are trademarks of Advanced Micro Devices, Inc. Other names are for informational purposes only and may be trademarks of their respective owners.

Banuls M.C.,Max Planck Institute of Quantum Optics | Cirac J.I.,Max Planck Institute of Quantum Optics | Hastings M.B.,University of California at Santa Barbara
Physical Review Letters | Year: 2011

When a nonintegrable system evolves out of equilibrium for a long time, local observables are in general expected to attain stationary expectation values, independent of the details of the initial state. But the thermalization of a closed quantum system is not yet well understood. Here we show that it indeed presents a much richer phenomenology than its classical counterpart. Using a new numerical technique, we identify two distinct regimes, strong and weak, occurring for different initial states. Strong thermalization, intrinsically quantum, happens when instantaneous local expectation values converge to the thermal ones. Weak thermalization, well known in classical systems, shows convergence to thermal values only after time averaging. Remarkably, we find a third group of states showing no thermalization, neither strong nor weak, up to the time scales one can reliably simulate. © 2011 American Physical Society.
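The distinction between instantaneous and time-averaged convergence can be illustrated with a small exact-diagonalization toy (an illustrative sketch only: the chain size, couplings and initial state below are our choices and do not reproduce the paper's quenches or its numerical technique):

```python
import numpy as np

# Toy nonintegrable Ising chain (transverse + longitudinal field), L sites.
L, g, h = 8, 0.9, 0.5
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def op(single, site):
    """Embed a single-site operator at `site` in the L-site Hilbert space."""
    mats = [np.eye(2, dtype=complex)] * L
    mats[site] = single
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

H = sum(-op(sz, i) @ op(sz, (i + 1) % L) - g * op(sx, i) - h * op(sz, i)
        for i in range(L))
evals, evecs = np.linalg.eigh(H)

# Quench from the all-up product state and watch <sigma_z> on site 0.
psi0 = np.zeros(2 ** L, dtype=complex)
psi0[0] = 1.0
c = evecs.conj().T @ psi0                 # initial state in the eigenbasis
O = evecs.conj().T @ op(sz, 0) @ evecs    # observable in the eigenbasis

times = np.linspace(0, 40, 400)
inst = np.array([np.real((c * np.exp(-1j * evals * t)).conj()
                         @ O @ (c * np.exp(-1j * evals * t))) for t in times])
running = np.cumsum(inst) / np.arange(1, len(inst) + 1)

# The instantaneous signal keeps fluctuating at finite size, while the
# running time average settles toward the diagonal-ensemble value.
late = slice(len(times) // 2, None)
print(inst[late].std(), running[late].std())
```

Strong thermalization would correspond to `inst` itself converging to the thermal value; weak thermalization to convergence only of `running`, the time-averaged signal.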

Adler P.B.,Utah State University | Ellner S.P.,Cornell University | Levine J.M.,University of California at Santa Barbara
Ecology Letters | Year: 2010

Despite decades of research documenting niche differences between species, we lack a quantitative understanding of their effect on coexistence in natural communities. We perturbed an empirical sagebrush steppe community model to remove the demographic effect of niche differences and quantify their impact on coexistence. With stabilizing mechanisms operating, all species showed positive growth rates when rare, generating stable coexistence. Fluctuation-independent mechanisms contributed more than temporal variability to coexistence and operated more strongly on recruitment than growth or survival. As expected, removal of stabilizing niche differences led to extinction of all inferior competitors. However, complete exclusion required 300-400 years, indicating small fitness differences among species. Our results show an 'excess' of niche differences: stabilizing mechanisms were not only strong enough to maintain diversity but were much stronger than necessary given the small fitness differences. The diversity of this community cannot be understood without consideration of niche differences. © 2010 Blackwell Publishing Ltd/CNRS.
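The "growth rate when rare" logic above can be sketched with a textbook Beverton-Holt annual-plant competition model (a stand-in for the paper's empirically parametrized sagebrush-steppe model; all parameter values here are hypothetical):

```python
import numpy as np

def invasion_growth_rate(lam_inv, lam_res, a_inter, a_res):
    """Low-density growth rate of an invader against a resident at its
    single-species equilibrium, N* = (lambda - 1) / alpha."""
    N_res = (lam_res - 1.0) / a_res
    return lam_inv / (1.0 + a_inter * N_res)

lam1, lam2 = 2.0, 1.9          # small fitness difference between species
a_intra, a_inter = 1.0, 0.5    # niche difference: intraspecific > interspecific

# With stabilizing niche differences, both species grow when rare.
r1 = invasion_growth_rate(lam1, lam2, a_inter, a_intra)
r2 = invasion_growth_rate(lam2, lam1, a_inter, a_intra)
print(r1 > 1 and r2 > 1)       # prints True: stable coexistence

# Remove the niche difference (inter = intra): the slightly inferior
# competitor can no longer invade and is slowly excluded.
r2_flat = invasion_growth_rate(lam2, lam1, a_intra, a_intra)
print(r2_flat < 1)             # prints True: competitive exclusion
```

Because `r2_flat` is only just below 1 (0.95 here), exclusion proceeds very slowly, echoing the centuries-long exclusion times the perturbed community model produced.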

Begley M.R.,University of California at Santa Barbara | Wadley H.N.G.,University of Virginia
Acta Materialia | Year: 2012

Micromechanical models are developed to explore the effect of embedded metal layers upon thermal cycling delamination failure of thermal barrier coatings (TBCs) driven by thickening of a thermally grown oxide (TGO). The effects of reductions in the steady-state (i.e. maximum) energy release rate (ERR) controlling debonding from large interface flaws and decreases in the thickening kinetics of TGO are investigated. The models are used to quantify the dependence of the ERR and delamination lifetime upon the geometry and constitutive properties of metal/TBC/TGO multilayers. Combinations of multilayer properties are identified which maximize the increase in delamination lifetime. It is found that even in the absence of TGO growth rate effects, the delamination lifetime of TBC systems with weak TGO/bond coat interfaces can be more than doubled by replacing 10-20% of the ceramic TBC layer with a metal whose ambient temperature yield stress is in the ∼100-200 MPa range. © 2011 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.

Meiburg E.,University of California at Santa Barbara | Kneller B.,University of Aberdeen
Annual Review of Fluid Mechanics | Year: 2010

The article surveys the current state of our understanding of turbidity currents, with an emphasis on their fluid mechanics. It highlights the significant role these currents play within the global sediment cycle, and their importance in environmental processes and in the formation of hydrocarbon reservoirs. Events and mechanisms governing the initiation of turbidity currents are reviewed, along with experimental observations and findings from field studies regarding their internal velocity and density structure. As turbidity currents propagate over the seafloor, they can trigger the evolution of a host of topographical features through the processes of deposition and erosion, such as channels, levees, and sediment waves. Potential linear instability mechanisms are discussed that may determine the spatial scales of these features. Finally, the hierarchy of available theoretical models for analyzing the dynamics of turbidity currents is outlined, ranging from dimensional analysis and integral models to both depth-averaged and depth-resolving simulation approaches. Copyright © 2010 by Annual Reviews. All rights reserved.

Squires T.M.,University of California at Santa Barbara | Mason T.G.,University of California at Los Angeles
Annual Review of Fluid Mechanics | Year: 2010

In microrheology, the local and bulk mechanical properties of a complex fluid are extracted from the motion of probe particles embedded within it. In passive microrheology, particles are forced by thermal fluctuations and probe linear viscoelasticity, whereas active microrheology involves forcing probes externally and can be extended out of equilibrium to the nonlinear regime. Here we review the development, present state, and future directions of this field. We organize our review around the generalized Stokes-Einstein relation (GSER), which plays a central role in the interpretation of microrheology. By discussing the Stokes and Einstein components of the GSER individually, we identify the key assumptions that underpin each, and the consequences that occur when they are violated. We conclude with a discussion of two techniques-multiple particle-tracking and nonlinear microrheology-that have arisen to handle systems in which the GSER breaks down. Copyright © 2010 by Annual Reviews. All rights reserved.
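
The GSER discussion lends itself to a small numerical sketch. The following illustration (all parameter values are assumed, not taken from the review) shows the Einstein and Stokes pieces in the simplest, purely viscous limit, where passive particle tracking recovers the fluid viscosity.

```python
import math

# Minimal sketch of the GSER ingredients in the purely viscous limit, with
# invented parameter values -- not a procedure from the review.
# Einstein piece: thermal probe motion gives a diffusion coefficient D from
# the 2-D mean-squared displacement, <dr^2(t)> = 4 D t.
# Stokes piece: sphere drag converts D into viscosity via Stokes-Einstein,
# D = kT / (6 pi eta a).

kB = 1.380649e-23        # Boltzmann constant, J/K
T = 298.0                # temperature, K
a = 0.5e-6               # probe radius, m (a 1-um-diameter bead)
eta_true = 1.0e-3        # Pa*s (water), used to synthesize the "data"

D = kB * T / (6 * math.pi * eta_true * a)    # Stokes-Einstein diffusivity

# A passive-microrheology "measurement": the slope of MSD versus lag time.
msd_slope = 4.0 * D                          # m^2/s for 2-D particle tracking

# Inverting the GSER in the viscous limit recovers the fluid viscosity.
eta_measured = kB * T / (6 * math.pi * a * (msd_slope / 4.0))
print(abs(eta_measured - eta_true) < 1e-12)  # → True
```

For a viscoelastic fluid the same logic generalizes to the frequency domain, which is where the assumptions the review dissects (continuum Stokes drag, equilibrium fluctuations) start to matter.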

Jambeck J.R.,University of Georgia | Geyer R.,University of California at Santa Barbara | Wilcox C.,Commonwealth Scientific and Industrial Research Organization | Siegler T.R.,DSM Environmental Services | And 4 more authors.
Science | Year: 2015

Plastic debris in the marine environment is widely documented, but the quantity of plastic entering the ocean from waste generated on land is unknown. By linking worldwide data on solid waste, population density, and economic status, we estimated the mass of land-based plastic waste entering the ocean. We calculate that 275 million metric tons (MT) of plastic waste was generated in 192 coastal countries in 2010, with 4.8 to 12.7 million MT entering the ocean. Population size and the quality of waste management systems largely determine which countries contribute the greatest mass of uncaptured waste available to become plastic marine debris. Without waste management infrastructure improvements, the cumulative quantity of plastic waste available to enter the ocean from land is predicted to increase by an order of magnitude by 2025. © 2015 American Association for the Advancement of Science. All rights reserved.
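
The figures quoted above imply that only a small fraction of generated waste reached the ocean; the arithmetic can be checked directly (a sketch of the abstract's own numbers, nothing more):

```python
# Quick check of the arithmetic implied by the abstract: of 275 million
# metric tons (MT) of plastic waste generated in 192 coastal countries in
# 2010, an estimated 4.8 to 12.7 million MT entered the ocean.
generated = 275.0                      # million MT generated, 2010
entered_low, entered_high = 4.8, 12.7  # million MT entering the ocean

low_pct = 100.0 * entered_low / generated
high_pct = 100.0 * entered_high / generated
print(f"{low_pct:.1f}%-{high_pct:.1f}%")  # → 1.7%-4.6%
```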

Agency: National Science Foundation | Branch: | Program: STTR | Phase: Phase I | Award Amount: 225.00K | Year: 2016

The broader impact/commercial potential of this Small Business Innovation Research (SBIR) Phase I project is to deliver energy and electricity savings in the high-power lighting market, by creating an energy-efficient, high color-quality, and cost-effective alternative to conventional light sources using laser technology and materials design. Commercialization of this innovation could lead to the next generation of energy-efficient light sources, surpassing the limitations of current lighting technologies and drastically increasing the availability and uptake of energy-efficient light sources in the high-power market. As lighting is a major source of electricity use in the commercial and industrial markets, this would in turn aid in reducing global energy consumption and help to preserve our environment. The intellectual knowledge gained from these studies will inform future materials research in developing robust materials with optimal properties to advance solid-state lighting, as well as other energy-related technologies, including solar energy technologies. This Small Business Innovation Research (SBIR) Phase I project aims to advance research in the field of solid-state lighting towards the goal of ultra-efficient and smart lighting by exploring laser-stimulated phosphor emission. In particular, the proposed innovation focuses on energy savings in the high-power lighting market, where high-power light-emitting diode (LED) technology does not attain the energy efficiency seen in low-power LED technology, due to LED droop. The use of laser technology can simultaneously overcome the negative effects of droop while also leveraging the directional nature of a laser to create a focused light source that can be better controlled and delivered to the illumination area with lower losses and higher overall efficiency.
This project will address device designs using optical modeling to maximize lighting performance metrics and will develop materials systems to mitigate the thermal effects introduced when using an intense light source such as a laser or high-power LED, which can damage and degrade materials within the device.

Agency: Department of Defense | Branch: Air Force | Program: STTR | Phase: Phase I | Award Amount: 150.00K | Year: 2013

ABSTRACT: Toyon Research Corporation and the University of California, Santa Barbara (UCSB) are proposing development and feasibility demonstration of advanced algorithms for the detection of vibration signatures in scattered light. The algorithms are being developed for applications including space-based electro-optical (EO)/infrared (IR) sensing at high frame rates and extremely low signal-to-noise ratios (SNRs). Toyon and UCSB are proposing both detailed modeling and simulation of the signals measured by the space-based EO/IR sensors, as well as implementation, demonstration, and evaluation of advanced noise-reduction and detection algorithms. The proposed algorithms are based on principles from Track-before-Detect (TrbD), implemented via nonlinear particle filtering techniques, and are designed to near-optimally integrate information from high-frame-rate EO/IR sensors to provide improved effective SNRs at the detection stage, enabling high-confidence detection decisions with reduced latency. In this novel application of particle filters for low-level signal detection, the filter is designed to exploit the particular physical signature (temporal vibration) of the object of interest. Phase I R&D will include signal and sensor modeling, signal processing algorithm development and evaluation, and development of a recommended Phase II prototype architecture. BENEFIT: The successful completion of this R&D will ultimately result in advanced algorithms and a real-time software implementation for detection of extremely dim modulated signals from space-based EO/IR sensor platforms. Additional applications include airborne- and ground/surface-based sensing onboard manned and unmanned platforms. The proposed technology has the potential to enable detection of target signals that were previously not exploited because of their low level, caused by signal scattering and/or long sensor-target standoff ranges. Thus, the proposed technology has applications in counter-terrorism, law enforcement, and a variety of civilian applications, in addition to many military applications.

Agency: Department of Defense | Branch: Navy | Program: STTR | Phase: Phase I | Award Amount: 150.00K | Year: 2014

Toyon Research Corporation, together with the University of California, Santa Barbara (UCSB), propose to develop a Local Carrier-based Precision Approach and Landing System (LC-PALS) that provides a full navigation solution (3-D position and altitude) for platforms within range of an aircraft carrier equipped with at least one beacon. Preliminary analysis indicates that the system will achieve 10-cm z-axis (altitude) accuracy, thereby enabling autonomous carrier landing capability under GPS-denied conditions. Moreover, the system has a low probability of detection and intercept (LPD/LPI), significant built-in anti-jam, anti-spoof, and multipath-mitigation capabilities, and is not prone to integer ambiguity and the cycle-slip phenomenon that is common with GPS-based carrier-phase tracking systems. Furthermore, because the system makes use of the same hardware that is required for GPS processing, the LC-PALS receiver can be fully integrated with GPS, thereby minimizing redundant hardware and enabling simultaneous operation with GPS, when available. While an inertial measurement unit (IMU) is not required for precision approach and landing, an onboard IMU can be used as an additional and complementary measurement source for improved attitude performance. During the Phase I program the LC-PALS feasibility will be fully verified and a hardware design will be completed in preparation for a Phase II demonstration.

Agency: Department of Defense | Branch: Navy | Program: STTR | Phase: Phase II | Award Amount: 757.74K | Year: 2011

The Navy needs to assess the river environment, including bathymetry, flow velocity profile, and navigational obstructions. While the use of multiple drifters has improved measurement fidelity and reduced cost, measurement quality is lost due to convergent drifter trajectories, and cost/risk remains high due to the personnel effort required for deployment. An autonomous river measurement system capable of self-deployment, as well as of detecting and evading collisions with floating debris, is desired. Toyon proposes to develop and demonstrate a prototype Wapter (water-helicopter) autonomous riverine system: a combination of a unique UAV coupled with a multi-functional hull well-suited to river measurement. The Wapter platform features a structure that protects the propellers in hard landings or crashes, and sufficient control authority to take off vertically from the water and recover should it be flipped onto its back. The Wapter platform is foldable for easy transport and is designed for a 24+ hour mission.

Agency: Department of Defense | Branch: Navy | Program: STTR | Phase: Phase I | Award Amount: 79.98K | Year: 2013

The low weight, excellent durability, and heat resistance of ceramic matrix composites (CMCs) make them attractive materials for use in aircraft engine hot sections, where improved overall engine efficiency can be realized. Nevertheless, the interlaminar properties of CMCs must be well understood before CMCs can realistically replace metallic superalloys in engine hot sections, and no standardized test methods for determining the interlaminar fracture toughness of CMCs currently exist. During this Phase I effort, Aurora Flight Sciences (AFS) will work closely with experts in CMC fabrication, testing, design, and analysis at the University of California, Santa Barbara (UCSB) and the United Technologies Research Center (UTRC) to develop analytical concept models of interlaminar fracture toughness test methods. A finite element code developed at UCSB, LayerSlayer, will be used in conjunction with the commercially available finite element analysis software Abaqus to ascertain stresses, energy release rates, and mode mixities in CMCs while conducting parametric studies of various test configurations and geometries feasible for measuring Mode I and Mode II interlaminar fracture toughnesses. Preliminary experiments on CMC specimens will be conducted to evaluate the feasibility of the concept models developed, in preparation for comprehensive testing of several CMC coupons during Phase II.

Scherler D.,University of Potsdam | Bookhagen B.,University of California at Santa Barbara | Strecker M.R.,University of Potsdam
Nature Geoscience | Year: 2011

Controversy about the current state and future evolution of Himalayan glaciers has been stirred up by erroneous statements in the fourth report by the Intergovernmental Panel on Climate Change. Variable retreat rates and a paucity of glacial mass-balance data make it difficult to develop a coherent picture of regional climate-change impacts in the region. Here, we report remotely sensed frontal changes and surface velocities from glaciers in the greater Himalaya between 2000 and 2008 that provide evidence for strong spatial variations in glacier behaviour which are linked to topography and climate. More than 65% of the monsoon-influenced glaciers that we observed are retreating, but heavily debris-covered glaciers with stagnant low-gradient terminus regions typically have stable fronts. Debris-covered glaciers are common in the rugged central Himalaya, but they are almost absent in subdued landscapes on the Tibetan Plateau, where retreat rates are higher. In contrast, more than 50% of observed glaciers in the westerlies-influenced Karakoram region in the northwestern Himalaya are advancing or stable. Our study shows that there is no uniform response of Himalayan glaciers to climate change and highlights the importance of debris cover for understanding glacier retreat, an effect that has so far been neglected in predictions of future water availability or global sea level. © 2011 Macmillan Publishers Limited. All rights reserved.

Guarrotxena N.,CSIC - Institute of Polymer Science and Technology | Bazan G.C.,University of California at Santa Barbara
Advanced Materials | Year: 2014

Simultaneous detection of multiple proteins on a single spot can be efficiently achieved by using multiplexed surface-enhanced Raman spectroscopy (SERS)-encoded nanoparticle 'antitags' consisting of poly(ethylene glycol) (PEG)-protected silver dimers (and higher aggregates) and antibody-tagging entities. The SERS-based multivariate deconvolution approach enables accurate, unambiguous identification of single and multiple proteins in complex samples. Potential applications in multiplexed SERS bioimaging technology can be readily envisaged. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

Lipshutz B.H.,University of California at Santa Barbara | Ghorai S.,Sigma-Aldrich
Green Chemistry | Year: 2014

Traditional organic chemistry, and organic synthesis in particular, relies heavily on organic solvents, as most reactions involve organic substrates and catalysts that tend to be water-insoluble. Unfortunately, organic solvents make up most of the organic waste created by the chemical enterprise, whether from academic, industrial, or governmental labs. One alternative to organic solvents follows the lead of Nature: water. To circumvent the solubility issues, newly engineered "designer" surfactants offer an opportunity to efficiently enable many of the commonly used transition metal-catalyzed and related reactions in organic synthesis to be run in water, and usually at ambient temperatures. This review focuses on recent progress in this area, where such amphiphiles spontaneously self-aggregate in water. The resulting micellar arrays serve as nanoreactors, obviating organic solvents as the reaction medium, while maximizing environmental benefits. This journal is © the Partner Organisations 2014.

Haddock S.H.D.,Monterey Bay Aquarium Research Institute | Moline M.A.,California Polytechnic State University, San Luis Obispo | Case J.F.,University of California at Santa Barbara
Annual Review of Marine Science | Year: 2010

Bioluminescence spans all oceanic dimensions and has evolved many times from bacteria to fish to powerfully influence behavioral and ecosystem dynamics. New methods and technology have brought great advances in understanding of the molecular basis of bioluminescence, its physiological control, and its significance in marine communities. Novel tools derived from understanding the chemistry of natural light-producing molecules have led to countless valuable applications, culminating recently in a related Nobel Prize. Marine organisms utilize bioluminescence for vital functions ranging from defense to reproduction. To understand these interactions and the distributions of luminous organisms, new instruments and platforms allow observations on individual to oceanographic scales. This review explores recent advances, including the chemical and molecular, phylogenetic and functional, community and oceanographic aspects of bioluminescence. © 2010 by Annual Reviews.

Turner T.L.,University of California at Santa Barbara | Hahn M.W.,Indiana University Bloomington
Molecular Ecology | Year: 2010

Populations of the malaria mosquito, Anopheles gambiae, comprise at least two reproductively isolated, sympatric populations. In this issue, White et al. (2010) use extensive sampling, high-density tiling microarrays, and an updated reference genome to clarify and expand our knowledge of genomic differentiation between these populations. It is now clear that the regions of DNA near the centromeres of all three chromosomes are in near-perfect disequilibrium with each other. This is in stark contrast to the remaining 97% of the assembled genome, where fixed differences between populations have not been found, and many polymorphisms are shared. This pattern, coupled with direct evidence of hybridization in nature, supports models of "mosaic" speciation, where ongoing hybridization homogenizes variation in most of the genome while loci under strong selection remain in disequilibrium with each other. However, unambiguously demonstrating that selection maintains the association of these pericentric "speciation islands" in the face of gene flow is difficult. Low recombination at all three loci complicates the issue and increases the probability that selection unrelated to the speciation process alters patterns of variation in these loci. Here, we discuss these different scenarios in light of these new data. © 2010 Blackwell Publishing Ltd.

Yee C.-H.,Rutgers University | Balents L.,University of California at Santa Barbara
Physical Review X | Year: 2015

Motivated by the commonplace observation of Mott insulators away from integer filling, we construct a simple thermodynamic argument for phase separation in first-order doping-driven Mott transitions. We show how to compute the critical dopings required to drive the Mott transition using electronic structure calculations for the titanate family of perovskites, finding good agreement with experiment. The theory predicts that the transition is percolative and should exhibit Coulomb frustration.

Prezioso M.,University of California at Santa Barbara | Merrikh-Bayat F.,University of California at Santa Barbara | Hoskins B.D.,University of California at Santa Barbara | Adam G.C.,University of California at Santa Barbara | And 2 more authors.
Nature | Year: 2015

Despite much progress in semiconductor integrated circuit technology, the extreme complexity of the human cerebral cortex1, with its approximately 10^14 synapses, makes the hardware implementation of neuromorphic networks with a comparable number of devices exceptionally challenging. To provide comparable complexity while operating much faster and with manageable power dissipation, networks2 based on circuits3,4 combining complementary metal-oxide-semiconductors (CMOSs) and adjustable two-terminal resistive devices (memristors) have been developed. In such circuits, the usual CMOS stack is augmented with one3 or several4 crossbar layers, with memristors at each crosspoint. There have recently been notable improvements in the fabrication of such memristive crossbars and their integration with CMOS circuits5-12, including first demonstrations5,6,12 of their vertical integration. Separately, discrete memristors have been used as artificial synapses in neuromorphic networks13-18. Very recently, such experiments have been extended19 to crossbar arrays of phase-change memristive devices. The adjustment of such devices, however, requires an additional transistor at each crosspoint, and hence these devices are much harder to scale than metal-oxide memristors11,20,21, whose nonlinear current-voltage curves enable transistor-free operation. Here we report the experimental implementation of transistor-free metal-oxide memristor crossbars, with device variability sufficiently low to allow operation of integrated neural networks, in a simple network: a single-layer perceptron (an algorithm for linear classification). The network can be taught in situ using a coarse-grain variety of the delta rule algorithm22 to perform the perfect classification of 3 × 3-pixel black/white images into three classes (representing letters). This demonstration is an important step towards much larger and more complex memristive neuromorphic networks. © 2015 Macmillan Publishers Limited. All rights reserved.
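
The network the paper demonstrates, a single-layer perceptron trained with a delta-rule variant on 3 × 3 binary images, can be sketched in software. The sketch below is a hypothetical software analogue only: the patterns, learning rate, and plain linear readout with the classic (Widrow-Hoff) delta rule are assumptions of this illustration, not details of the hardware implementation.

```python
import numpy as np

# Hypothetical software analogue of the paper's demonstration: a single-layer
# perceptron classifying 3x3 black/white images into three classes, trained
# with the classic delta rule. Patterns and hyperparameters are invented.
rng = np.random.default_rng(0)

patterns = np.array([
    [1, 1, 1, 1, 0, 0, 1, 1, 1],   # class 0 (a letter-like shape)
    [1, 0, 1, 1, 1, 1, 1, 0, 1],   # class 1
    [0, 1, 0, 0, 1, 0, 0, 1, 0],   # class 2
], dtype=float)
targets = np.eye(3)                          # one-hot class labels
X = np.hstack([patterns, np.ones((3, 1))])   # append a bias input

W = rng.normal(scale=0.1, size=(3, 10))      # 3 outputs x (9 pixels + bias)

eta = 0.05
for _ in range(500):                         # delta rule: dW = eta (t - y) x
    for x, t in zip(X, targets):
        y = W @ x
        W += eta * np.outer(t - y, x)

preds = [int(np.argmax(W @ x)) for x in X]
print(preds)  # → [0, 1, 2]
```

In the paper's hardware, the weights live in memristor conductances and the update is applied as a coarse-grained (fixed-magnitude) variant of this rule, which tolerates device variability.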

McCauley D.J.,University of California at Santa Barbara | Pinsky M.L.,Rutgers University | Palumbi S.R.,Stanford University | Estes J.A.,University of California at Santa Cruz | And 2 more authors.
Science | Year: 2015

Marine defaunation, or human-caused animal loss in the oceans, emerged forcefully only hundreds of years ago, whereas terrestrial defaunation has been occurring far longer. Though humans have caused few global marine extinctions, we have profoundly affected marine wildlife, altering the functioning and provisioning of services in every ocean. Current ocean trends, coupled with terrestrial defaunation lessons, suggest that marine defaunation rates will rapidly intensify as human use of the oceans industrializes. Though protected areas are a powerful tool to harness ocean productivity, especially when designed with future climate in mind, additional management strategies will be required. Overall, habitat degradation is likely to intensify as a major driver of marine wildlife loss. Proactive intervention can avert a marine defaunation disaster of the magnitude observed on land. © 2015, American Association for the Advancement of Science. All rights reserved.

Wyttenbach T.,University of California at Santa Barbara | Pierson N.A.,Indiana University Bloomington | Clemmer D.E.,Indiana University Bloomington | Bowers M.T.,University of California at Santa Barbara
Annual Review of Physical Chemistry | Year: 2014

The combination of mass spectrometry and ion mobility spectrometry (IMS) employing a temperature-variable drift cell or a drift tube divided into sections to make IMS-IMS experiments possible allows information to be obtained about the molecular dynamics of polyatomic ions in the absence of a solvent. The experiments allow the investigation of structural changes of both activated and native ion populations on a timescale of 1-100 ms. Five different systems representing small and large, polar and nonpolar molecules, as well as noncovalent assemblies, are discussed in detail: a dinucleotide, a sodiated polyethylene glycol chain, the peptide bradykinin, the protein ubiquitin, and two types of peptide oligomers. Barriers to conformational interconversion can be obtained in favorable cases. In other cases, solution-like native structures can be observed, but care must be taken in the experimental protocols. The power of theoretical modeling is demonstrated. Copyright © 2014 by Annual Reviews.

Bookhagen B.,University of California at Santa Barbara | Strecker M.R.,University of Potsdam
Earth and Planetary Science Letters | Year: 2012

The tectonic and climatic boundary conditions of the broken foreland and the orogen interior of the southern Central Andes of northwestern Argentina cause strong contrasts in elevation, rainfall, and surface-process regimes. The climatic gradient in this region ranges from the wet, windward eastern flanks (~2 m/yr rainfall) to progressively drier western basins and ranges (~0.1 m/yr) bordering the arid Altiplano-Puna Plateau. In this study, we analyze the impact of spatiotemporal climatic gradients on surface erosion: First, we present 41 new catchment-mean erosion rates derived from cosmogenic nuclide inventories to document spatial erosion patterns. Second, we re-evaluate paleoclimatic records from the Calchaquíes basin (66°W, 26°S), a large intermontane basin bordered by high (>4.5 km) mountain ranges, to demonstrate temporal variations in erosion rates associated with changing climatic boundary conditions during the late Pleistocene and Holocene. Three key observations in this region emphasize the importance of climatic parameters on the efficiency of surface processes in space and time: (1) First-order spatial patterns of erosion rates can be explained by a simple specific stream power (SSP) approach. We explicitly account for discharge by routing high-resolution, satellite-derived rainfall. This is important as the steep climatic gradient results in a highly nonlinear relation between drainage area and discharge. This relation indicates that erosion rates (ER) scale with ER ~ SSP^1.4 on cosmogenic-nuclide time scales. (2) We identify an intrinsic channel-slope behavior in different climatic compartments. Channel slopes in dry areas (<0.25 m/yr rainfall) are slightly steeper than in wet areas (>0.75 m/yr) with equal drainage areas, thus compensating lower amounts of discharge with steeper slopes. (3) Erosion rates can vary by an order of magnitude between presently dry (~0.05 mm/yr) and well-defined late Pleistocene humid (~0.5 mm/yr) conditions within an intermontane basin. Overall, we document a strong climatic impact on erosion rates and channel slopes. We suggest that rainfall reaching areas with steeper channel slopes in the orogen interior during wetter climate periods results in intensified sediment mass transport, which is primarily responsible for maintaining the balance between surface uplift, erosion, sediment routing and transient storage in the orogen. © 2012 Elsevier B.V.
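
The scaling reported in observation (1) can be illustrated numerically. In the sketch below the prefactor k is arbitrary (an assumption of the illustration, not a fitted value from the study); only the exponent 1.4 comes from the abstract.

```python
# Illustrative sketch of the reported scaling ER ~ SSP^1.4 between erosion
# rate (ER) and specific stream power (SSP). The prefactor k is arbitrary
# and NOT a fitted value from the study; only the exponent is from the text.

def erosion_rate(ssp, k=1.0e-3, exponent=1.4):
    """Erosion rate (arbitrary units) from specific stream power."""
    return k * ssp ** exponent

# The nonlinearity means discharge matters disproportionately:
# doubling SSP raises the predicted erosion rate by 2**1.4, about 2.64x.
ratio = erosion_rate(2.0) / erosion_rate(1.0)
print(round(ratio, 2))  # → 2.64
```

This superlinear response is why routing actual rainfall-derived discharge, rather than using drainage area as a proxy, matters so much across the region's steep climatic gradient.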

Lipshutz B.H.,University of California at Santa Barbara | Ghorai S.,Sigma-Aldrich
Aldrichimica Acta | Year: 2012

New methodologies are discussed that allow for several commonly used transition-metal-catalyzed coupling reactions to be conducted within aqueous micellar nanoparticles at ambient temperatures. © 2012 Sigma-Aldrich Co. LLC.

Burkov A.A.,University of Waterloo | Burkov A.A.,University of California at Santa Barbara | Balents L.,University of California at Santa Barbara
Physical Review Letters | Year: 2011

We propose a simple realization of the three-dimensional (3D) Weyl semimetal phase, utilizing a multilayer structure composed of identical thin films of a magnetically doped 3D topological insulator, separated by ordinary-insulator spacer layers. We show that the phase diagram of this system contains a Weyl semimetal phase of the simplest possible kind, with only two Dirac nodes of opposite chirality, separated in momentum space, in its band structure. This Weyl semimetal has a finite anomalous Hall conductivity and chiral edge states and occurs as an intermediate phase between an ordinary insulator and a 3D quantum anomalous Hall insulator. We find that the Weyl semimetal has a nonzero dc conductivity at zero temperature, but with a Drude weight vanishing as T^2, and is thus an unusual metallic phase, characterized by a finite anomalous Hall conductivity and topologically protected edge states. © 2011 American Physical Society.

Yang J.J.,Hewlett - Packard | Strukov D.B.,University of California at Santa Barbara | Stewart D.R.,National Research Council Canada
Nature Nanotechnology | Year: 2013

Memristive devices are electrical resistance switches that can retain a state of internal resistance based on the history of applied voltage and current. These devices can store and process information, and offer several key performance characteristics that exceed conventional integrated circuit technology. An important class of memristive devices are two-terminal resistance switches based on ionic motion, which are built from a simple conductor/insulator/conductor thin-film stack. These devices were originally conceived in the late 1960s and recent progress has led to fast, low-energy, high-endurance devices that can be scaled down to less than 10 nm and stacked in three dimensions. However, the underlying device mechanisms remain unclear, which is a significant barrier to their widespread application. Here, we review recent progress in the development and understanding of memristive devices. We also examine the performance requirements for computing with memristive devices and detail how the outstanding challenges could be met. © 2013 Macmillan Publishers Limited. All rights reserved.

Childs A.M.,University of Waterloo | Van Dam W.,University of California at Santa Barbara
Reviews of Modern Physics | Year: 2010

Quantum computers can execute algorithms that dramatically outperform classical computation. As the best-known example, Shor discovered an efficient quantum algorithm for factoring integers, whereas factoring appears to be difficult for classical computers. Understanding what other computational problems can be solved significantly faster using quantum algorithms is one of the major challenges in the theory of quantum computation, and such algorithms motivate the formidable task of building a large-scale quantum computer. This article reviews the current state of quantum algorithms, focusing on algorithms with superpolynomial speedup over classical computation and, in particular, on problems with an algebraic flavor. © 2010 The American Physical Society.

Isakov S.V.,ETH Zurich | Hastings M.B.,Duke University | Hastings M.B.,University of California at Santa Barbara | Melko R.G.,University of Waterloo
Nature Physics | Year: 2011

The Landau paradigm of classifying phases by broken symmetries was shown to be incomplete when it was realized that different quantum-Hall states can only be distinguished by more subtle, topological properties1. The role of topology as an underlying description of order has since branched out to include topological band insulators and certain featureless gapped Mott insulators with a topological degeneracy in the ground-state wavefunction. Despite intense work, very few candidates for such topologically ordered 'spin liquids' exist. The main difficulty in finding systems that harbour spin-liquid states is the very fact that they violate the Landau paradigm, making conventional order parameters non-existent. Here, we describe a spin-liquid phase in a Bose-Hubbard model on the kagome lattice, and determine its topological order directly by means of a measure known as topological entanglement entropy. We thus identify a non-trivial spin liquid through its entanglement entropy as a gapped ground state with emergent Z2 gauge symmetry. © 2011 Macmillan Publishers Limited. All rights reserved.

Levin M.,University of Maryland University College | Gu Z.-C.,University of California at Santa Barbara
Physical Review B - Condensed Matter and Materials Physics | Year: 2012

We construct a two-dimensional (2D) quantum spin model that realizes an Ising paramagnet with gapless edge modes protected by Ising symmetry. This model provides an example of a "symmetry-protected topological phase." We describe a simple physical construction that distinguishes this system from a conventional paramagnet: We couple the system to a Z2 gauge field and then show that the π-flux excitations have different braiding statistics from those of a usual paramagnet. In addition, we show that these braiding statistics directly imply the existence of protected edge modes. Finally, we analyze a particular microscopic model for the edge and derive a field theoretic description of the low energy excitations. We believe that the braiding statistics approach outlined in this paper can be generalized to a large class of symmetry-protected topological phases. © 2012 American Physical Society.

Donnelly W.,University of Maryland University College | Wall A.C.,University of California at Santa Barbara
Physical Review D - Particles, Fields, Gravitation and Cosmology | Year: 2012

Quantum fluctuations of matter fields contribute to the thermal entropy of black holes. For free minimally coupled scalar and spinor fields, this contribution is precisely the entanglement entropy. For gauge fields, Kabat found an extra negative divergent "contact term" with no known statistical interpretation. We compare this contact term to a similar term which arises for nonminimally coupled scalar fields. Although both divergences may be interpreted as terms in the Wald entropy, we point out that the contact term for gauge fields comes from a gauge-dependent ambiguity in Wald's formula. Revisiting Kabat's derivation of the contact term, we show that it is sensitive to the treatment of infrared modes. To explore these infrared issues, we consider two-dimensional compact manifolds, such as Euclidean de Sitter space, and show that the contact term arises from an incorrect treatment of zero modes. In a manifestly gauge-invariant reduced phase space quantization, the gauge field contribution to the entropy is positive, finite, and equal to the entanglement entropy. © 2012 American Physical Society.

Kilpatrick A.M.,University of California at Santa Cruz | Briggs C.J.,University of California at Santa Barbara | Daszak P.,Wildlife Trust
Trends in Ecology and Evolution | Year: 2010

Emerging infectious diseases are increasingly recognized as key threats to wildlife. Batrachochytrium dendrobatidis (Bd), the causative agent of chytridiomycosis, has been implicated in widespread amphibian declines and is currently the largest infectious disease threat to biodiversity. Here, we review the causes of Bd emergence, its impact on amphibian populations and the ecology of Bd transmission. We describe studies to answer outstanding issues, including the origin of the pathogen, the effect of Bd relative to other causes of population declines, the modes of Bd dispersal, and factors influencing the intensity of its transmission. Chytridiomycosis is an archetypal emerging disease, with a broad host range and significant impacts on host populations and, as such, poses a crucial challenge for wildlife managers and an urgent conservation concern. © 2009 Elsevier Ltd. All rights reserved.

Dorfler F.,ETH Zurich | Bullo F.,University of California at Santa Barbara
Automatica | Year: 2014

The emergence of synchronization in a network of coupled oscillators is a fascinating subject of multidisciplinary research. This survey reviews the vast literature on the theory and the applications of complex oscillator networks. We focus on phase oscillator models that are widespread in real-world synchronization phenomena, that generalize the celebrated Kuramoto model, and that feature a rich phenomenology. We review the history and the countless applications of this model throughout science and engineering. We justify the importance of the widespread coupled oscillator model as a locally canonical model and describe some selected applications relevant to control scientists, including vehicle coordination, electric power networks, and clock synchronization. We introduce the reader to several synchronization notions and performance estimates. We propose analysis approaches to phase and frequency synchronization, phase balancing, pattern formation, and partial synchronization. We present the sharpest known results about synchronization in networks of homogeneous and heterogeneous oscillators, with complete or sparse interconnection topologies, and in finite-dimensional and infinite-dimensional settings. We conclude by summarizing the limitations of existing analysis methods and by highlighting some directions for future research. © 2014 Elsevier Ltd. All rights reserved.
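As a concrete illustration of the phase-oscillator setting the survey studies, here is a minimal NumPy sketch (ours, not the paper's) of the Kuramoto model on a complete interconnection topology, tracking the standard order parameter r that measures phase synchrony:

```python
import numpy as np

def kuramoto_step(theta, omega, K, A, dt):
    """One Euler step of the Kuramoto model on coupling graph A:
    dtheta_i/dt = omega_i + (K/n) * sum_j A_ij * sin(theta_j - theta_i)."""
    n = len(theta)
    coupling = (K / n) * np.sum(A * np.sin(theta[None, :] - theta[:, None]), axis=1)
    return theta + dt * (omega + coupling)

def order_parameter(theta):
    """Kuramoto order parameter r in [0, 1]; r -> 1 means full phase synchrony."""
    return abs(np.exp(1j * theta).mean())

rng = np.random.default_rng(0)
n = 50
theta = rng.uniform(0, 2 * np.pi, n)   # random initial phases
omega = rng.normal(0, 0.1, n)          # nearly homogeneous natural frequencies
A = np.ones((n, n))                    # complete graph (all-to-all coupling)
for _ in range(2000):
    theta = kuramoto_step(theta, omega, K=2.0, A=A, dt=0.05)
print(round(order_parameter(theta), 3))
```

With coupling K well above the critical value for this frequency spread, the oscillators phase-lock and r approaches 1; shrinking K or widening the frequency distribution produces the incoherent regimes the survey analyzes.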

Sun K.,University of Maryland University College | Gu Z.,University of California at Santa Barbara | Katsura H.,Gakushuin University | Das Sarma S.,University of Maryland University College
Physical Review Letters | Year: 2011

We report the theoretical discovery of a class of 2D tight-binding models containing nearly flatbands with nonzero Chern numbers. In contrast with previous studies, where nonlocal hoppings are usually required, the Hamiltonians of our models only require short-range hopping and have the potential to be realized in cold atomic gases. Because of the similarity with 2D continuum Landau levels, these topologically nontrivial nearly flatbands may lead to the realization of fractional anomalous quantum Hall states and fractional topological insulators in real materials. Among the models we discover, the most interesting and practical one is a square-lattice three-band model which has only nearest-neighbor hopping. To understand better the physics underlying the topological flatband aspects, we also present the studies of a minimal two-band model on the checkerboard lattice. © 2011 American Physical Society.

Kraft N.J.B.,University of Maryland University College | Godoy O.,University of California at Santa Barbara | Godoy O.,CSIC - Institute of Natural Resources and Agriculture Biology of Seville | Levine J.M.,ETH Zurich
Proceedings of the National Academy of Sciences of the United States of America | Year: 2015

Understanding the processes maintaining species diversity is a central problem in ecology, with implications for the conservation and management of ecosystems. Although biologists often assume that trait differences between competitors promote diversity, empirical evidence connecting functional traits to the niche differences that stabilize species coexistence is rare. Obtaining such evidence is critical because traits also underlie the average fitness differences driving competitive exclusion, and this complicates efforts to infer community dynamics from phenotypic patterns. We coupled field-parameterized mathematical models of competition between 102 pairs of annual plants with detailed sampling of leaf, seed, root, and whole-plant functional traits to relate phenotypic differences to stabilizing niche and average fitness differences. Single functional traits were often well correlated with average fitness differences between species, indicating that competitive dominance was associated with late phenology, deep rooting, and several other traits. In contrast, single functional traits were poorly correlated with the stabilizing niche differences that promote coexistence. Niche differences could only be described by combinations of traits, corresponding to differentiation between species in multiple ecological dimensions. In addition, several traits were associated with both fitness differences and stabilizing niche differences. These complex relationships between phenotypic differences and the dynamics of competing species argue against the simple use of single functional traits to infer community assembly processes but lay the groundwork for a theoretically justified trait-based community ecology.
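The niche-difference versus fitness-difference bookkeeping behind this study comes from modern coexistence theory; a toy Python sketch of that framework is below (the coefficient values and function names are illustrative, not from the paper's field-parameterized models):

```python
import math

def niche_overlap(a11, a12, a21, a22):
    """Niche overlap rho from pairwise competition coefficients a_ij
    (per-capita effect of species j on species i).
    The stabilizing niche difference is 1 - rho."""
    return math.sqrt((a12 * a21) / (a11 * a22))

def can_coexist(rho, fitness_ratio):
    """Mutual-invasibility condition of modern coexistence theory:
    two competitors coexist when rho < fitness_ratio < 1/rho, i.e.
    when the niche difference exceeds the average fitness difference."""
    return rho < fitness_ratio < 1.0 / rho

# Strong intraspecific, weak interspecific competition -> rho = 0.5.
rho = niche_overlap(1.0, 0.5, 0.5, 1.0)
print(rho, can_coexist(rho, 1.2), can_coexist(rho, 2.5))  # 0.5 True False
```

A modest fitness advantage (ratio 1.2) is stabilized by this niche difference, while a large one (ratio 2.5) drives competitive exclusion, which is why the paper's point that traits predict fitness differences but not niche differences matters for inferring community dynamics.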

Lutchyn R.M.,Microsoft | Fisher M.P.A.,University of California at Santa Barbara
Physical Review B - Condensed Matter and Materials Physics | Year: 2011

We show that semiconductor nanowires coupled to an s-wave superconductor provide a playground to study effects of interactions between different topological superconducting phases supporting Majorana zero-energy modes. We consider a quasi-one-dimensional system where the topological phases emerge from different transverse subbands in the nanowire. In a certain parameter space, we show that there is a multicritical point in the phase diagram where the low-energy theory is equivalent to the one describing two coupled Majorana chains. We study the effect of interactions as well as symmetry-breaking perturbations on the topological phase diagram in the vicinity of this multicritical point. Our results shed light on the stability of the topological phase around the multicritical point and have important implications for the experiments on Majorana nanowires. © 2011 American Physical Society.

News Article | February 21, 2017

Many shark populations around the world are known to have declined over the past several decades, yet marine scientists lack important baseline information about what a healthy shark population looks like. A clearer picture is now coming into focus -- thanks to a team of scientists who investigated the size of an unfished community of reef sharks.

Researchers from UC Santa Barbara and colleagues conducted an eight-year study of a healthy shark population on Palmyra, a remote, uninhabited atoll in the central Pacific Ocean. This pristine ecosystem is part of a marine refuge that extends 50 nautical miles from its shores. No fishing is allowed within these borders, which protect a diverse array of species, including grey reef sharks. The investigators were surprised to find far fewer sharks than expected. The study results appear in the journal Scientific Reports.

"We estimated a population size of between 6,000 to 8,000 grey reef sharks at Palmyra, which works out to a density of about 20 sharks per square kilometer," said lead author Darcy Bradley, a postdoctoral researcher in UCSB's Sustainable Fisheries Group, a collaboration of the campus's Marine Science Institute and Bren School of Environmental Science & Management. "Previous research that used underwater visual survey methods estimated a density of between 200 to 1,000 sharks per square kilometer," Bradley continued. "So while it's not totally clear how those density estimates would scale up to a population estimate, it is clear that it would end up a lot bigger than our estimate."

From 2006 to 2014, the research team captured reef sharks across Palmyra and fitted them with numbered ID tags. They also tracked the movement of some of these animals using acoustic telemetry tags, which emit a sound that is then recorded by acoustic receivers located underwater. Of the 1,300 tagged reef sharks, 350 individuals were recaptured, making this effort the largest reef shark tag recapture program in the world.

In addition to the tag data, the investigators recorded information on the sex and size of each animal and the location of its capture. They plugged all the data into an algorithm that estimated the total population size. According to Bradley, the fact that the shark population was smaller than anticipated is not all bad news.

"If a healthy shark population is smaller than we assumed, that means other shark populations are more precarious than previously suggested," she said. "However, it also means that the recovery goal for shark populations is lower, which makes recovering shark populations somewhat easier. Given that the way we manage fisheries and ecosystem health depends on having decent estimates of abundance, we need to continue to improve the way we count things in the ocean."

Additional UCSB co-authors on the study are Douglas McCauley, Bruce Kendall, Steven Gaines and Jennifer Caselle. Other co-authors include Eric Conklin and Kydd Pollock of The Nature Conservancy, Yannis Papastamatiou of Florida International University and Amanda Pollock of the U.S. Fish and Wildlife Service.
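The study's population estimate comes from a mark-recapture algorithm; the classic two-sample version of that idea is the Lincoln-Petersen estimator, sketched below in Chapman's bias-corrected form (the numbers are hypothetical for illustration, not the study's eight-year multi-sample data, and the model assumes a closed population with equal catchability):

```python
def chapman_estimate(marked, caught, recaptured):
    """Chapman's bias-corrected Lincoln-Petersen estimator of population size:
    N_hat = (M + 1)(C + 1) / (R + 1) - 1, where M animals were marked in the
    first sample, C were caught in the second, and R of those carried marks."""
    return (marked + 1) * (caught + 1) / (recaptured + 1) - 1

# Hypothetical two-sample survey: 100 sharks tagged, 80 caught later,
# 20 of those already carrying tags.
print(round(chapman_estimate(100, 80, 20)))  # 389
```

The intuition is that the fraction of marked animals in the second sample estimates the fraction of the whole population that was marked; the real analysis generalizes this logic across many capture occasions.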

Pasqualetti F.,University of California at Santa Barbara | Bicchi A.,University of Pisa | Bullo F.,University of California at Santa Barbara
IEEE Transactions on Automatic Control | Year: 2012

This paper addresses the problem of ensuring trustworthy computation in a linear consensus network. A solution to this problem is relevant for several tasks in multi-agent systems including motion coordination, clock synchronization, and cooperative estimation. In a linear consensus network, we allow for the presence of misbehaving agents, whose behavior deviates from the nominal consensus evolution. We model misbehaviors as unknown and unmeasurable inputs affecting the network, and we cast the misbehavior detection and identification problem into an unknown-input system theoretic framework. We consider two extreme cases of misbehaving agents, namely faulty (non-colluding) and malicious (Byzantine) agents. First, we characterize the set of inputs that allow misbehaving agents to affect the consensus network while remaining undetected and/or unidentified from certain observing agents. Second, we provide worst-case bounds for the number of concurrent faulty or malicious agents that can be detected and identified. Precisely, the consensus network needs to be $2k+1$ (resp. $k+1$) connected for $k$ malicious (resp. faulty) agents to be generically detectable and identifiable by every well-behaving agent. Third, we quantify the effect of undetectable inputs on the final consensus value. Fourth, we design three algorithms to detect and identify misbehaving agents. The first and second algorithms apply fault detection techniques and afford complete detection and identification, at a high computational cost, if global knowledge of the network is available to each agent. The third algorithm is designed to exploit the presence in the network of weakly interconnected subparts, and provides local detection and identification of misbehaving agents whose behavior deviates more than a threshold, which is quantified in terms of the interconnection structure. © 2012 IEEE.
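A minimal simulation of the setting (an assumed sketch, not the paper's detection algorithms): discrete-time Laplacian consensus on a four-agent path graph, where one misbehaving agent injects an unknown constant input and silently shifts the value the network agrees on:

```python
import numpy as np

def consensus_run(x0, L, eps, steps, faulty=None, u=0.0):
    """Iterate x <- x - eps * L x; agent `faulty` (if any) additionally
    injects the unmeasurable constant input u at every step."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x - eps * (L @ x)
        if faulty is not None:
            x[faulty] += u
    return x

# Graph Laplacian of the 4-agent path graph 0 - 1 - 2 - 3.
L = np.array([[ 1, -1,  0,  0],
              [-1,  2, -1,  0],
              [ 0, -1,  2, -1],
              [ 0,  0, -1,  1]], dtype=float)
x0 = [1.0, 2.0, 3.0, 4.0]

nominal  = consensus_run(x0, L, eps=0.2, steps=500)               # -> average 2.5
attacked = consensus_run(x0, L, eps=0.2, steps=500, faulty=0, u=0.01)
print(nominal.round(3), attacked.round(3))
```

In the nominal run all states converge to the average of the initial values; with the misbehaving agent the states stay close to one another (so nothing looks locally wrong) while the common value drifts upward, which is why detection requires the unknown-input observer machinery and connectivity conditions developed in the paper.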

Saleh A.A.M.,University of California at Santa Barbara | Simmons J.M.,Monarch Network Architects
Proceedings of the IEEE | Year: 2012

While all-optical networking had its origins in the research community a quarter of a century ago, the realization of the vision has not had a straight trajectory. The original goal of the all-optical network was based on keeping the data signals entirely in the optical domain from source to destination to eliminate the so-called electronic bottleneck, and to allow arbitrary signal formats, bitrates, and protocols to be transported. The latter property is referred to as transparency. When all-optical networks were finally commercialized around the turn of the century, however, a modified reality emerged; the quest for transparency was replaced by the more pragmatic objective of reducing the network cost and energy consumption. Moreover, especially for networks of large geographical extent, electronics were still present at some (relatively few) points along the data path, for signal regeneration and traffic grooming. This modified vision captures the state of today's networks, though terms like all-optical and transparent are still used to describe this technology. However, continued advancements are bringing back some aspects of the original transparency vision. In this paper, we review the evolution of all-optical networking, from the early vision to its present vibrant state, which was made possible by great advances in optical transmission and all-optical switching technologies. We describe the numerous benefits afforded by the technology, and its relative merits and drawbacks compared to competing technologies, sometimes referred to as opaque. We also discuss the remaining challenges and future directions of all-optical networking. While all-optical solutions permeate today's access, metro, and core networks, this paper focuses on the core. © 2012 IEEE.
