Honolulu, HI, United States

The University of Hawai'i at Mānoa is a public co-educational research university and the flagship campus of the University of Hawai'i system. The school is located in Mānoa, an affluent neighborhood of Honolulu, Honolulu County, Hawai'i, United States, approximately three miles east and inland from downtown Honolulu and one mile from Ala Moana and Waikiki. The campus occupies the eastern half of the mouth of Mānoa Valley. It is accredited by the Western Association of Schools and Colleges and is governed by the Hawai'i State Legislature and a semi-autonomous board of regents, which in turn appoints a president as administrator. The university campus houses the main offices of the University of Hawai'i System. (Source: Wikipedia)



Patent
Cephalon Inc., University of Hawaii at Manoa and University of Utah | Date: 2016-10-14

Compounds of formula II are described, wherein D, n, R_a, R_b, and R_c are as herein defined, along with pharmaceutical compositions and methods of using compounds of formula II for treating or reducing the risk of peritoneal carcinomatosis in a patient.


Patent
Kineticor, University of Hawaii at Manoa and Albert Ludwigs University of Freiburg | Date: 2016-07-28

The systems, methods, and devices described herein generally relate to achieving accurate and robust motion correction by detecting and accounting for false movements in motion correction systems used in conjunction with medical imaging and/or therapeutic systems. In other words, some embodiments of the systems, methods, and devices described herein can be configured to detect false movements during a medical imaging scan and/or therapeutic procedure, and thereby ensure that such false movements are not accounted for in the motion correction process. Upon detection of false movements, the imaging or therapeutic system can be configured to transiently suppress and/or subsequently repeat acquisitions.
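The core idea in the abstract can be illustrated with a minimal sketch: tracked motion that implies a physically implausible speed is flagged as a likely tracking glitch ("false movement") so that motion correction can be suppressed for those samples. The threshold, function names, and sample data below are illustrative assumptions, not details from the patent.

```python
# Minimal sketch of false-movement detection for motion correction.
# Assumption: real head motion between consecutive samples is slow, so a
# sample-to-sample speed above a threshold indicates a tracking artifact.

def flag_false_movements(positions_mm, dt_s, max_speed_mm_s=50.0):
    """Return a per-sample list of booleans: True where the implied speed
    between consecutive samples is implausibly high (likely a false movement)."""
    flags = [False]  # the first sample has no predecessor to compare against
    for prev, cur in zip(positions_mm, positions_mm[1:]):
        speed = abs(cur - prev) / dt_s  # mm/s implied by this sample pair
        flags.append(speed > max_speed_mm_s)
    return flags

# A tracking glitch: position jumps ~40 mm between consecutive 0.1 s samples.
trace = [0.0, 0.2, 0.4, 40.0, 0.5]
print(flag_false_movements(trace, dt_s=0.1))  # [False, False, False, True, True]
```

An imaging pipeline built on this idea would skip (or later repeat) the acquisitions whose samples are flagged, rather than applying a spurious correction.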


Evans T.A.,National University of Singapore | Forschler B.T.,University of Georgia | Kenneth Grace J.,University of Hawaii at Manoa
Annual Review of Entomology | Year: 2013

The number of recognized invasive termite species has increased from 17 in 1969 to 28 today. Fourteen species have been added to the list in the past 44 years; 10 have larger distributions and 4 have no reported change in distribution, and 3 species are no longer considered invasive. Although most research has focused on invasive termites in urban areas, molecular identification methods have answered questions about certain species and found that at least six species have invaded natural forest habitats. All invasive species share three characteristics that together increase the probability of creating viable propagules: they eat wood, nest in food, and easily generate secondary reproductives. These characteristics are most common in two families, the Kalotermitidae and Rhinotermitidae (which make up 21 species on the invasive termite list), particularly in three genera, Cryptotermes, Heterotermes, and Coptotermes (which together make up 16 species). Although it is the largest termite family, the Termitidae (comprising 70% of all termite species) have only two invasive species, because relatively few species have these characteristics. Islands have double the number of invasive species that continents do, with islands in the South Pacific the most invaded geographical region. Most invasive species originate from Southeast Asia. The standard control methods normally used against native pest termites are also employed against invasive termites; only two eradication attempts, in South Africa and New Zealand, appear to have been successful, both against Coptotermes species. © 2013 by Annual Reviews. All rights reserved.


Canalizo G.,University of California at Riverside | Stockton A.,University of Hawaii at Manoa
Astrophysical Journal | Year: 2013

Although mergers and starbursts are often invoked in the discussion of quasi-stellar object (QSO) activity in the context of galaxy evolution, several studies have questioned their importance or even their presence in QSO host galaxies. Accordingly, we are conducting a study of z ∼ 0.2 QSO host galaxies previously classified as passively evolving elliptical galaxies. We present deep Keck/LRIS spectroscopy of a sample of 15 hosts and model their stellar absorption spectra using stellar synthesis models. The high signal-to-noise ratio of our spectra allows us to break various degeneracies that arise from different combinations of models, varying metallicities, and contamination from QSO light. We find that none of the host spectra can be modeled by purely old stellar populations and that the majority of the hosts (14/15) have a substantial contribution from intermediate-age populations with ages ranging from 0.7 to 2.4 Gyr. An average host spectrum is strikingly well fit by a combination of an old population and a 2.1 (+0.5, -0.7) Gyr population. The morphologies of the host galaxies suggest that these aging starbursts were induced during the early stages of the mergers that resulted in the elliptical-shaped galaxies that we observe. The current active galactic nucleus activity likely corresponds to the late episodes of accretion predicted by numerical simulations, which occur near the end of the mergers, whereas earlier episodes may be more difficult to observe due to obscuration. Our off-axis observations prevent us from detecting any current star formation or young stellar populations that may be present in the central few kiloparsecs. © 2013. The American Astronomical Society. All rights reserved.


Rhodes R.E.,University of Victoria | Nigg C.R.,University of Hawaii at Manoa
Exercise and Sport Sciences Reviews | Year: 2011

As behavioral physical activity (PA) research matures, the adaptation and augmentation of theories with PA-specific concepts are required to improve explanatory power and to justify the uniqueness of the discipline. This review details the advances of three prominent theories applied to understand PA. We conclude by presenting a framework for researchers to test whether a particular behavioral theory holds use in the PA domain. Copyright © 2011 by the American College of Sports Medicine.


Zeebe R.E.,University of Hawaii at Manoa
Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences | Year: 2013

Over the next few centuries, with unabated emissions of anthropogenic carbon dioxide (CO2), a total of 5000 Pg C may enter the atmosphere, causing CO2 concentrations to rise to approximately 2000 ppmv, global temperature to warm by more than 8°C, and surface ocean pH to decline by approximately 0.7 units. A carbon release of this magnitude is unprecedented during the past 56 million years, and the outcome is accordingly difficult to predict. In this regard, the geological record may provide foresight into how the Earth system will respond in the future. Here, we discuss the long-term legacy of massive carbon release into the Earth's surface reservoirs, comparing the Anthropocene with a past analogue, the Palaeocene-Eocene Thermal Maximum (PETM, approx. 56 Ma). We examine the natural processes and time scales of CO2 neutralization that determine the atmospheric lifetime of CO2 in response to carbon release. We compare the duration of carbon release during the Anthropocene versus the PETM and the ensuing effects on ocean acidification and marine calcifying organisms. We also discuss the conundrum that the observed duration of the PETM appears to be much longer than predicted by models that use first-order assumptions. Finally, we comment on past and future mass extinctions and recovery times of biotic diversity.
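The abstract's headline numbers can be roughly checked with the standard conversion that 1 ppmv of atmospheric CO2 corresponds to about 2.13 Pg of carbon. The airborne fraction used below (~0.73 at peak) is an illustrative assumption to make the arithmetic work out, not a figure taken from the paper:

```python
# Back-of-the-envelope check: does releasing 5000 Pg C give ~2000 ppmv CO2?
# Known conversion: ~2.13 Pg C per ppmv of atmospheric CO2.
# Assumed: preindustrial 280 ppmv; ~73% of emissions airborne at peak.

PG_C_PER_PPMV = 2.13
PREINDUSTRIAL_PPMV = 280.0

def peak_co2_ppmv(released_pg_c, airborne_fraction):
    """Peak atmospheric CO2 if a given fraction of emitted carbon stays airborne."""
    return PREINDUSTRIAL_PPMV + released_pg_c * airborne_fraction / PG_C_PER_PPMV

print(round(peak_co2_ppmv(5000.0, 0.73)))  # ~1994 ppmv, close to the ~2000 quoted
```

On longer time scales the airborne fraction declines as ocean uptake and carbonate/silicate weathering neutralize the CO2, which is exactly the slow-removal process the paper examines.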


Liu Z.,University of Wisconsin - Madison | Liu Z.,Peking University | Lu Z.,Peking University | Wen X.,U.S. National Center for Atmospheric Research | And 3 more authors.
Nature | Year: 2014

The El Niño Southern Oscillation (ENSO) is Earth's dominant source of interannual climate variability, but its response to global warming remains highly uncertain [1]. To improve our understanding of ENSO's sensitivity to external climate forcing, it is paramount to determine its past behaviour by using palaeoclimate data and model simulations. Palaeoclimate records show that ENSO has varied considerably since the Last Glacial Maximum (21,000 years ago) [2-9], and some data sets suggest a gradual intensification of ENSO over the past ~6,000 years [2,5,7,8]. Previous attempts to simulate the transient evolution of ENSO have relied on simplified models [10] or snapshot [11-13] experiments. Here we analyse a series of transient Coupled General Circulation Model simulations forced by changes in greenhouse gases, orbital forcing, meltwater discharge and ice-sheet history throughout the past 21,000 years. Consistent with most palaeo-ENSO reconstructions, our model simulates an orbitally induced strengthening of ENSO during the Holocene epoch, which is caused by increasing positive ocean-atmosphere feedbacks. During the early deglaciation, ENSO characteristics change drastically in response to meltwater discharges and the resulting changes in the Atlantic Meridional Overturning Circulation and equatorial annual cycle. Increasing deglacial atmospheric CO2 concentrations tend to weaken ENSO, whereas retreating glacial ice sheets intensify ENSO. The complex evolution of forcings and ENSO feedbacks and the uncertainties in the reconstruction further highlight the challenge and opportunity for constraining future ENSO responses. © 2014 Macmillan Publishers Limited. All rights reserved.


Grant
Agency: GTR | Branch: NERC | Program: | Phase: Research Grant | Award Amount: 401.39K | Year: 2011

Future climate change is one of the most challenging issues facing humankind and an enormous research effort is directed at attempting to construct realistic projections of 21st century climate based on underlying assumptions about greenhouse gas emissions. Climate models now include many of the components of the earth system that influence climate over a range of timescales. Understanding and quantifying earth system processes is vital to projections of future climate change because many processes provide feedbacks to climate change, either reinforcing upward trends in greenhouse gas concentrations and temperature (positive feedbacks) or sometimes damping them (negative feedbacks). One key feedback loop is formed by the global carbon cycle, part of which is the terrestrial carbon cycle. As carbon dioxide concentrations and temperatures rise, carbon sequestration by plants increases but at the same time, increasing temperatures lead to increased decay of dead plant material in soils. Carbon cycle models suggest that the balance between these two effects will lead to a strong positive feedback, but there is a very large uncertainty associated with this finding and this process represents one of the biggest unknowns in future climate change projections. In order to reduce these uncertainties, models need to be validated against data such as records for the past millennium. Furthermore, it is extremely important to make sure that the models are providing a realistic representation of the global carbon cycle and include all its major component parts. Current models exclude any consideration of the reaction of peatlands to climate change, even though these ecosystems contain almost as much carbon as the global atmosphere and are potentially sensitive to climate variability. 
On the one hand, increased warmth may increase respiration and decay of peat; on the other hand, even quite small increases in productivity may compensate for this or even exceed it in high-latitude peatlands. A further complication is that peatlands emit quite large quantities of methane, another powerful greenhouse gas. Our proposed project aims to assess the contribution of peatlands to the global carbon cycle over the past 1000 years by linking together climate data and climate model output with models that simulate the distribution and growth of peatlands on a global scale. The models will also estimate changes in methane emissions from peatlands. In particular, we will test the hypotheses that warmth leads to lower rates of carbon accumulation and that this means that globally, peatlands will sequester less carbon in future than they do now. We will also test whether future climate changes lead to a positive or negative feedback from peatland methane emissions. To determine how well our models can simulate the peatland-climate links, we will test the model output for the last millennium against fossil data of peat growth rates and hydrological changes (related to methane emissions). To do this, we will assemble a large database of both published information and new data acquired in collaboration with partners from other research organisations around the world who are involved in collecting information and samples that we can make use of once we undertake some additional dating and analyses. Once the model has been evaluated against the last-millennium data, we will make projections of the future changes in the global carbon cycle that may occur as a result of future climate change. This will provide a strong basis for making a decision on the need to incorporate peatland dynamics into the next generation of climate models. Ultimately we expect this to reduce uncertainty in future climate change predictions.


News Article | November 11, 2015
Site: www.nature.com

When Fiona Ingleby took to Twitter last April to vent about a journal’s peer-review process, she didn’t expect much of a response. With only around 100 followers on the social-media network, Ingleby — an evolutionary geneticist at the University of Sussex near Brighton, UK — guessed that she might receive a few messages of support or commiseration from close colleagues. What she got was an overwhelming wave of reaction. In four pointed tweets, Ingleby detailed her frustration with a PLoS ONE reviewer who tried to explain away her findings on gender disparities in the transition from PhD to postdoc. He suggested that men had “marginally better health and stamina”, and that adding “one or two male biologists” as co-authors would improve the analysis. The response was a full-fledged ‘Twitterstorm’ that spawned more than 5,000 retweets, a popular hashtag — #addmaleauthorgate — and a public apology from the journal. “Things went really mental,” Ingleby says. “I had to turn off the Twitter notifications on my e-mail.” Yet her experience is not as unusual as it may seem. Social media has enabled an increasingly public discussion about the persistent problem of sexism in science. When a male scientist with the European Space Agency (ESA) wore a shirt patterned with half-naked women to a major media event in November 2014, Twitter blazed with criticism. The site was where the first reports surfaced in June of Nobel Prize-winning biologist Tim Hunt’s self-confessed “trouble with girls” in laboratories. And in mid-October, many astronomers took to Twitter to register their anger and disappointment when the news broke that Geoffrey Marcy, an exoplanet hunter at the University of California, Berkeley, was found to have sexually harassed female subordinates for at least a decade. “I have been in [the] field for 15 years,” wrote Sarah Hörst, a planetary scientist at Johns Hopkins University in Baltimore, Maryland. 
“It is my field now too & we are not going to do things this way anymore if I have anything to do w/ it.” Scientists studying the rise of social media are still trying to understand the factors that can whip an online debate into a raging Twitterstorm. Such events often have far-reaching and unpredictable consequences — for participants as well as targets. Sometimes this continuing public discussion prompts action: PLoS ONE is re-reviewing Ingleby’s paper, and its original editor and reviewer no longer work for the journal, for example. But women who speak out about sexism often face a vicious backlash, ranging from insults to threats of physical violence. Although it is not yet clear whether the social-media conversation about sexism in science will help to create lasting change, some scientists think that it may provide a sense of solidarity for women across disciplines. “You may not be changing minds, but you may be finding people who have your back,” says Brooke Foucault Welles, a communications scientist at Northeastern University in Boston, Massachusetts. “And that’s powerful.” On 12 November 2014, the ESA Rosetta mission landed a spacecraft on a comet — a milestone for space exploration. But in certain corners of the Internet, Rosetta’s landing day may be best remembered for the scantily clad women on Matt Taylor’s shirt. Taylor, a Rosetta project scientist, sported the Hawaiian-style garment as he gave interviews to reporters at mission headquarters in Darmstadt, Germany, and answered questions on an ESA webcast. (His comments were also suggestive: Rosetta “is sexy, but I never said she was easy”, he told viewers.) It wasn’t long before people following the historic comet landing took notice — and took to Twitter. “What a lost opportunity to encourage girls into science,” tweeted Fernanda Foertter, a computer programmer at Oak Ridge National Laboratory in Tennessee. 
Others approached it with a bit more snark: “No no women are toooootally welcome in our community, just ask the dude in the shirt,” wrote New York-based science journalist Rose Eveleth, who linked to a Nature video interview with Taylor. What started as a trickle of tweets soon became a flood. By 14 November, the day that Taylor gave a tearful public apology on another ESA webcast, Twitter users had posted more than 3,100 messages using the #shirtstorm hashtag (see ‘Anatomy of a Twitterstorm’). In many ways, #shirtstorm and other Twitter conversations about sexism are not new. It is only the venue that has changed, says Hope Jahren, a geobiologist at the University of Hawaii at Manoa who is active on Twitter. “Guys have been wearing girly shirts forever,” she says. “The women around them have been rolling their eyes and going home and saying, ‘What a buffoon. I’m so sick of this crap.’ They’ve been doing it in the women’s room and doing it in the coffee room.” But now, Jahren says, “Twitter is that thought under your breath.” The social-media service is also an enormous megaphone that claims to have 320 million active users each month. Research suggests that hashtag-driven Twitter conversations can help to amplify the voices of people who are not powerful by conventional measures. One example comes from Foucault Welles and her colleagues’ analysis of a hashtag that arose after police in Ferguson, Missouri, shot an unarmed African American teenager in August 2014. The killing quickly became a national news story, and the #ferguson hashtag became part of a broader US debate over police violence. Yet more than a year later, one of the most retweeted #ferguson contributors was a teenager from the Ferguson area. “People who don’t have power really can have their voices heard,” Foucault Welles says. 
“They can reframe the story.” And that can make Twitter an important outlet for younger scientists, who often don’t know how to respond to instances of sexism or sexual harassment. One 2014 survey of 666 scientists — including 516 women — found that 64% had experienced sexual harassment, and only 20% of that group said that they knew how to report such behaviour1. Most were students or postdoctoral researchers at the time they were harassed. When scientists talk about sexism and harassment on Twitter, it presents younger researchers with a model for confronting such issues. “This way, they can see other people are going through it, and there is a positive effect to speaking out,” says Zuleyka Zevallos, a sociologist who manages the Science Australia Gender Equity Project at the Australian Academy of Science in Canberra. For Ingleby, venting about her sexist journal review on Twitter paid unexpected dividends. She and her co-author, both postdocs, had waited three weeks for PLoS ONE to decide whether to grant their appeal and re-examine their paper. By making their plight public, Ingleby drew public support from other scientists — and, privately, invaluable advice from more-experienced researchers about how to deal with the journal. “I did get some messages that called me a feminazi and all that stuff,” Ingleby says, “but that was by far the minority.” She has one crucial piece of advice for those who may follow in her footsteps: “Be a bit more prepared for things going viral. Maybe pick a few quiet days in your calendar.” Determining which factors can fan a handful of messages into an Internet firestorm, or what gives a hashtag staying power, is tricky. One study2, published in 2012 by researchers at the University of Pennsylvania in Philadelphia, suggests that Internet content goes viral when it elicits a strong emotional reaction. 
Marketing researcher Jonah Berger and decision scientist Katherine Milkman analysed the popularity of 6,956 news stories posted to the New York Times homepage between 30 August and 30 November 2008. The pair found that stories that inspired intense positive emotions, such as awe or amusement, were the most likely to go viral; anger, anxiety and other strong negative feelings also propelled articles to wide readership, but sadness seemed to reduce the chance that a reader would share a story with others. The recent science Twitterstorms, which are often fuelled by a combination of frustration, anger and black humour, fit with those ideas. Yet an element of randomness is also at play. Joseph Reagle, a communications researcher at Northeastern University, sees this in the story of Cecil, a lion killed by an American tourist in Hwange National Park in Zimbabwe in July. The animal’s death became an international cause célèbre, inspiring a hashtag (#CeciltheLion) that racked up 1.2 million tweets in one month — despite the fact that hunters kill dozens of lions in Zimbabwe each year3. To Reagle, Cecil’s tale also suggests that ‘hashtag activism’ is here to stay. “We are seeing the emergence of a genre,” he says. “And we will see it repeated.” The conversations sparked by popular hashtags can shift the focus of media coverage and broader public discussion. The #YesAllWomen hashtag began in May 2014, in response to a shooting spree in California in which the killer said that his motivation was a hatred of women. Women used the hashtag to connect this violent misogyny to examples of everyday sexism and harassment — giving rise to a new wave of media coverage4. “That’s one of the really interesting things that starts to happen with some hashtags — they become news in their own right,” says Samantha Thrift, a feminist media scholar at the University of Calgary in Canada. Hunt learned about the amplifying power of social media the hard way on 8 June. 
“You fall in love with them, they fall in love with you, and when you criticize them, they cry,” he said in a speech at the World Conference of Science Journalists in Seoul. His comments were tweeted by audience members, creating an Internet furore that quickly hit mainstream news outlets. On 10 June, Hunt told BBC Radio 4 that he was “really sorry”. But in later comments to The Observer, he said that he had been “hung out to dry” and forced to resign an honorary post at University College London. “It has done me lasting damage,” he said. “What they did was unacceptable.” For those in positions of power, such as Hunt, finding themselves at the centre of a Twitterstorm can be deeply unsettling, given social media’s ability to upend traditional hierarchies. But many women who talk about sexism, feminism and gender issues online face a harsher reception, from abusive comments to threats of physical harm. When Eveleth tweeted her criticism of Taylor’s shirt, she received death threats. When others joined the fray, such as Jacquelyn Gill, a palaeoecologist at the University of Maine in Orono, they became targets, too. “I stand with @roseveleth and others who are calling out sexism despite online harassment,” she tweeted. “I’m reporting abusive tweets as I’m able.” She added: “Free-speech apparently only applies to dudes threatening violence to women with an opinion — not the women with an opinion. #shirtstorm”. The reaction to her commentary was swift and punishing. “For the next 72 hours I got death and rape threats,” Gill says. “It was a non-stop barrage of people trolling those hashtags.” As the stream of vitriol became overwhelming, some of Gill’s colleagues wrote a computer program to scan Twitter for threatening messages that mentioned her username. That spared Gill from constantly monitoring her account for serious threats. 
But no program could spare her from the awkward conversations that she had with University of Maine officials after realizing that some of her harassers on Twitter were discussing how to get her fired in retaliation for her ‘Shirtgate’ activism. “I’ve run up against the real-world consequences of speaking as a woman on the Internet,” she says. This problem is not limited to science: in a study of 2,849 Internet users, the Pew Research Center in Washington DC reported that 40% had been harassed online. Although men are more likely to be called offensive names or purposefully embarrassed, women are more likely to be stalked or sexually harassed as a result of their Internet use. The survey also found that social media is the place where women are most vulnerable to harassment of all types, ranging from stalking to physical threats. Faced with such attacks, some scientists have begun to rethink how they participate in online discussions about sexism. Some retreat entirely; others, wary of being silenced by abuse, try to find safer ways to engage online. One female researcher who has suffered Internet harassment now tweets about feminist issues under a pseudonym while also maintaining an active Twitter account under her real name. “It makes me feel safer,” says the researcher, who asked not to be named. “Although, in a lot of these cases, if someone wants to find you, they will.” Researchers tracking the rise of social media are trying to understand whether intense discussions online translate into real-world change. The difficulty lies in deciding how to measure such effects. One approach draws on network analysis. A team of computer scientists at Carnegie Mellon University in Pittsburgh, Pennsylvania, tracked Twitter users’ interactions before, during and after 20 Twitterstorms between 2011 and 2014 — most centred on targets of broad interest, such as US late-night television host Stephen Colbert and fast-food chain McDonald’s5. 
The researchers found that these events did not create lasting links between participants, as measured by who these users follow or message on Twitter. This suggests that Internet dust-ups do not usually lead to sustained discussion or greater awareness of a given issue. But other studies show that intense Twitter discussions may affect contributors in ways that are harder to quantify. Mindi Foster, a social psychologist at Wilfrid Laurier University in Waterloo, Canada, decided to investigate the psychological effects of tweeting on the basis of her own experience using social media. After hearing an anti-Semitic remark on a television programme one night, Foster joined Twitter to vent her anger — and it felt good. Foster’s research seems to confirm her hunch: that when women tweet about sexism, it improves their sense of well-being6. The study involved 93 female university students who were presented with information about sexism in academia, politics and the media. One group of students was asked to tweet publicly about what they had learned, another to tweet privately and a third to tweet about the weather. (A fourth group was told to do nothing.) During the three-day study, each participant filled out a daily questionnaire on her emotional state. Those who were assigned to tweet publicly reported a greater sense of well-being, on average, by the end of the experiment; those in the other groups showed no change. These results, although preliminary, are in line with earlier research that shows that expressive writing — such as writing in a diary — can provide similar benefits. But Foster speculates that public tweeting may confer an extra boost because it spurs writers to think more deeply about what they are saying. Twitter can also help to build a sense of community among scientists in different disciplines who are confronting sexism and sexual harassment. 
Sometimes these bonds grow out of dark humour, such as the #distractinglysexy hashtag birthed in reaction to Hunt’s comments. Thousands of female researchers posted pictures of themselves in labs and at field sites, up to their knees in mud or swathed in shapeless biosafety suits. “Filter mask protects me from hazardous chemicals and muffles my woman cries,” wrote Amelia Cervera, a biochemist at the University of Valencia in Spain, who shared a photo of herself wearing the face-obscuring gear. Gill, a palaeoecologist, says that she has begun to connect with researchers in astronomy, anthropology, engineering and computer science, among other fields. Such links can help researchers to learn from each other’s experiences of confronting sexism. “Some of our disciplines have been better at gender equality than others,” she notes. “Some of us have been having these discussions for a long time.” But the ongoing Twitter conversation about sexism is also limited in some important ways. It often ignores the concerns of women whose experiences with sexism are exacerbated by discrimination on the basis of race, sexual orientation or disability. For example, a US survey of 557 female scientists from ethnic minority groups found that two-thirds felt pressure to prove themselves over and over again — beyond what was asked of white colleagues. And 48% of African American respondents said that they had been mistaken for janitors (caretakers) or administrative staff in their workplaces. “If you are a minority within a minority, you are actually dealing with multiple problems,” says Zevallos. That is just as true on Twitter as it is in the lab or office. And this can make women who are dealing with the effects of multiple forms of discrimination feel excluded from conversations that focus on sexism or sexual harassment. Such concerns surfaced recently in the wake of the Marcy sexual-harassment case, which had prompted a vigorous online debate under the #astroSH hashtag. 
“If you are not talking about and confronting racism with same vigilance as sexism, might as well hang ‘no Blacks’ signs,” tweeted Chanda Prescod-Weinstein, an astrophysicist at the Massachusetts Institute of Technology (MIT) in Cambridge. “And I say that as a victim of both sexual assault and sexual harassment.” Sarah Ballard, also an MIT astrophysicist, echoed the sentiment: “We can’t rely on crowdsourcing meting out of justice- (Mostly white) crowds will stand up for white women, *crickets* otherwise.” And although social media can help to create a community discussion about sexism and other forms of discrimination, fighting for equality requires the real-world cooperation of universities, governments and other institutions. Some of these have taken action in response to sexist incidents that online discussions helped to bring to wider attention. But although Twitter may be hard to ignore, it does not have the authority to set and enforce expectations for fair treatment. Despite those caveats, Thrift finds great value in the ongoing social-media conversations among scientists, which she sees as a form of public education — and the first step towards concrete change. “That’s hugely important,” she says. “If we don’t name something as sexist, as harassment, as misogyny, it will continue unchecked.”


News Article | November 19, 2016
Site: www.prweb.com

Leading higher education information and resource site AffordableCollegesOnline.org has released its list of the Best Online Accelerated Nursing Degrees & Programs in the nation for 2016-2017. Using a variety of cost and educational outcome data to compare accredited programs side by side, the list rates the University of Saint Mary, University of Wyoming, The Sage Colleges, University of Indianapolis and Samford University as the top-scoring schools for accelerated nursing students. "As the demand for qualified nurses grows, more colleges are implementing fast-track education options for nursing students," said Dan Schuessler, CEO and Founder of AffordableCollegesOnline.org. "These programs are rigorous, but the schools on our list stand out for providing the best quality education and support to help students learn quickly and ultimately start their careers in nursing sooner." AffordableCollegesOnline.org required schools to meet several basic requirements to be considered for the Best Online Accelerated Nursing Degrees & Programs list. Each college must be accredited by the Commission on Collegiate Nursing Education (CCNE) or the Accreditation Commission for Education in Nursing (ACEN) and be a public or private not-for-profit institution to be included. They must also offer students job placement and academic counseling services to qualify. Individual school scores are then determined by weighing a variety of qualitative and quantitative data points, such as nursing certification pass rates, tuition costs and more. 
The Top 50 list of schools, as well as specific details on the data and methodology used to determine ranking and scoring, can be found at the link below. An alphabetical list of schools on the Best Online Accelerated Nursing Programs list for 2016-2017: Adelphi University; Albany State University; Ball State University; Barry University; Baylor University; Clemson University; Creighton University; DeSales University; Drexel University; Duquesne University; East Carolina University; Georgia Southwestern State University; Indiana Wesleyan University; Jacksonville University; Lewis University; Loyola University Chicago; Lynchburg College; New Mexico State University - Main Campus; New York University; Northern Arizona University; Ohio University - Main Campus; Olivet Nazarene University; Quinnipiac University; Robert Morris University; Rutgers University - Newark; Saint Xavier University; Samford University; Seton Hall University; Shenandoah University; Simmons College; Southern Nazarene University; Texas Christian University; The College of Saint Scholastica; The Sage Colleges; University of Alabama at Birmingham; University of Arizona; University of Delaware; University of Hawaii at Manoa; University of Indianapolis; University of Memphis; University of Miami; University of North Florida; University of Northern Colorado; University of Saint Joseph; University of Saint Mary; University of Wyoming; Utica College; Valparaiso University; West Virginia University; Wilkes University. AffordableCollegesOnline.org began in 2011 to provide quality data and information about pursuing an affordable higher education. Our free community resource materials and tools span topics such as financial aid and college savings, opportunities for veterans and people with disabilities, and online learning resources. 
We feature higher education institutions that have developed online learning environments that include highly trained faculty, new technology and resources, and online support services to help students achieve educational and career success. We have been featured by nearly 1,100 postsecondary institutions and nearly 120 government organizations.


News Article | December 31, 2015
Site: www.techtimes.com

The itsy-bitsy spider is not so itsy-bitsy anymore. Or at least, its counterpart under the sea isn't. With a bright orange body, eight gangling legs, and an elongated proboscis, the sea spider prefers to lurk in the cold waters of both the Antarctic and Arctic oceans. In these dark waters, the sea spider grows massive, with its eight legs spanning the width of a person's face. Known as pycnogonids, these creatures belong to a group of primitive marine arthropods, some of which are now extinct. Modern marine arthropods include crabs, shrimp, and lobsters. Technically, pycnogonids are not true spiders or arachnids, but their classification as chelicerates places them relatively close to arachnids. Are Pycnogonids Becoming Mutant Spiders? No One Knows - Yet Pycnogonids are usually small and cryptic. Because of their thin legs and diminutive size, these sea spiders need no respiratory system. Their proboscis allows them to suck nutrients out of invertebrates. However, the once-tiny sea spiders of Antarctica have somehow gotten bigger, growing to leg spans of up to 25 centimeters (about 10 inches). Scientists attribute the growth to a phenomenon called "polar gigantism." Strangely enough, pycnogonids are not the only ones that grow to unusual sizes. Echinoderms, copepods, and certain mollusks have all grown larger than their equatorial relatives. The occurrence is a mystery for scientists. Multiple hypotheses have attempted to explain why and how polar gigantism happens, but none has been proven yet. A team of researchers from the National Science Foundation, the University of Montana, the University of Hawaii at Manoa, and the United States Antarctic Program believes that by studying and examining Antarctic pycnogonids, the answer to this strange phenomenon may be unraveled. To collect some sea spiders, scientists drilled a hole in the thick Antarctic sea ice, and two dry-suited and insulated SCUBA divers went into the water. 
Art Woods, one of the members of the team, said the atmosphere surrounding the hole was pleasant, except that they were also freezing to death. The temperature of the sea water ranged from about -1.5 degrees Celsius (29.3 degrees Fahrenheit) to -1.8 degrees Celsius (28.8 degrees Fahrenheit), the freezing point of seawater. These extremely cold temperatures may actually play a role in polar gigantism, scientists said. Colder water can hold more dissolved oxygen than warm water, and the oxygen content of the coastal Antarctic sea is significantly high. Colder temperatures also slow the metabolisms of cold-blooded animals, and a reduced metabolism means less oxygen consumption. Combine these factors and you may get supersized sea spiders. Lacking a respiratory system, these marine arthropods depend on simple diffusion to get oxygen into their bodies. Woods explained that this will not work for organisms with large bodies unless the amount of oxygen available is vast. The team of scientists tested how differences in temperature and dissolved oxygen content in seawater affect the physiology of Antarctic pycnogonids. So far in their study, Woods and his colleagues have found that larger sea spiders have difficulty getting by in seawater with low oxygen content. This evidence supports their hypothesis that abundant oxygen enables the growth of these creatures. Scientists have yet to find the root cause of polar gigantism. Nevertheless, if seawater with copious levels of oxygen is essential for the survival of large marine animals, the implications of warmer oceans and decreasing oxygen levels may be devastating. The demise of giant sea spiders might go unnoticed because they live in the polar regions, but experts believe that king crabs, jumbo lobsters and other marine creatures are in danger of meeting their ultimate end. Watch the video of the drilling expedition below.
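The diffusion argument above can be made concrete with a rough scaling sketch (an illustrative simplification, not a calculation from the researchers' study; the symbols are our own): oxygen enters through the body surface but is consumed throughout the body volume, so supply and demand grow at different rates with body size L:

```latex
% Illustrative diffusion-limit scaling (symbols are assumptions, not from the study)
\[
  \text{O}_2\ \text{supply} \propto C_{\mathrm{O_2}}\, L^{2},
  \qquad
  \text{O}_2\ \text{demand} \propto \dot{m}\, L^{3}
  \quad\Longrightarrow\quad
  L_{\max} \propto \frac{C_{\mathrm{O_2}}}{\dot{m}}
\]
```

Here C_O2 is the dissolved oxygen concentration and ṁ the metabolic rate per unit of body volume. Higher dissolved oxygen (larger C_O2) and a slower cold-water metabolism (smaller ṁ) both raise the maximum body size, which is exactly the combination of factors the article describes for polar waters.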


News Article | December 1, 2016
Site: www.prweb.com

Leading higher education information and resource provider AffordableCollegesOnline.org has released its ranking of the Best Online Registered Nursing (RN) Programs in the U.S. for 2016-2017. Analyzing more than a dozen unique data points on colleges and universities that offer online RN programs, the site honored 65 schools for providing the best overall value and quality for students. East Carolina University, Allen College, Seton Hall University, University of Alabama in Huntsville and West Virginia University were among the highest scoring four-year schools, while New Mexico Junior College, Amarillo College, West Kentucky Community and Technical College, Hopkinsville Community College and Kansas City Kansas Community College were among the highest scoring two-year schools. "There is a growing demand for health care workers, and quality registered nursing programs are growing more and more competitive," said Dan Schuessler, CEO and Founder of AffordableCollegesOnline.org. "Our list of schools gives registered nursing students a better idea of which programs offer the best combination of cost, quality curriculum and online learning flexibility." AffordableCollegesOnline.org requires schools to meet several minimum requirements to be eligible for placement on its rankings. Colleges must be accredited, public or private not-for-profit institutions and must offer in-state tuition rates below $5,000 annually at two-year schools or below $25,000 annually at four-year schools. Qualifying schools are scored and ranked based on a comparison of more than a dozen qualitative and quantitative statistics, including financial aid offerings and graduation rates by school. 
More details on the data and methodology used to rank each online registered nursing program, along with a complete list of schools and scores, are available at: Two-year schools with the Best Online Registered Nurse Programs for 2016-2017: Amarillo College; Ashland Community and Technical College; Bluegrass Community and Technical College; Columbus State Community College; Community College of Philadelphia; Henderson Community College; Hopkinsville Community College; Jefferson Community and Technical College; Kansas City Kansas Community College; Madisonville Community College; Minnesota West Community and Technical College; New Mexico Junior College; San Antonio College; Somerset Community College; Southeast Kentucky Community and Technical College; West Kentucky Community and Technical College. Four-year schools with the Best Online Registered Nurse Programs for 2016-2017: Allen College; Aurora University; Ball State University; Barry University; Clayton State University; Columbus State University; Concordia University - Wisconsin; Drexel University; East Carolina University; East Tennessee State University; Fitchburg State University; Gannon University; Gardner-Webb University; Georgia College and State University; Graceland University - Lamoni; Indiana State University; La Salle University; Loyola University Chicago; Minot State University; Missouri State University-Springfield; New Mexico State University - Main Campus; North Carolina Central University; Northern Arizona University; Olivet Nazarene University; Sacred Heart University; Seton Hall University; South Dakota State University; The College of Saint Scholastica; University of Alabama in Huntsville; University of Arkansas; University of Central Florida; University of Cincinnati-Main Campus; University of Colorado, Colorado Springs; University of Delaware; University of Hawaii at Manoa; University of Massachusetts - Amherst; University of Massachusetts - Boston; University of Massachusetts - Lowell; University of Memphis; University of North Alabama; University of North Dakota; University of North Florida; University of Northern Colorado; University of Southern Indiana; University of the Incarnate Word; University of Toledo; Villanova University; Wayland Baptist University; West Virginia University; Western Kentucky University. AffordableCollegesOnline.org began in 2011 to provide quality data and information about pursuing an affordable higher education. Our free community resource materials and tools span topics such as financial aid and college savings, opportunities for veterans and people with disabilities, and online learning resources. We feature higher education institutions that have developed online learning environments that include highly trained faculty, new technology and resources, and online support services to help students achieve educational and career success. We have been featured by nearly 1,100 postsecondary institutions and nearly 120 government organizations.


News Article | September 19, 2016
Site: www.chromatographytechniques.com

In the waters off the coast of Hawaii, a tall buoy bobs and sways in the water, using the rise and fall of the waves to generate electricity. The current travels through an undersea cable for a mile to a military base, where it is fed into Oahu's power grid — the first wave-produced electricity to go online in the U.S. By some estimates, the ocean's endless motion packs enough power to meet a quarter of America's energy needs and dramatically reduce the nation's reliance on oil, gas and coal. But wave energy technology lags well behind wind and solar power, with important technical hurdles still to be overcome. To that end, the Navy has established a test site in Hawaii, with hopes the technology can someday be used to produce clean, renewable power for offshore fueling stations for the fleet and provide electricity to coastal communities in fuel-starved places around the world. "More power from more places translates to a more agile, more flexible, more capable force," Joseph Bryan, deputy assistant secretary of the Navy, said during an event at the site. "So we're always looking for new ways to power the mission." Hawaii would seem a natural site for such technology. As any surfer can tell you, it is blessed with powerful waves. The island state also has the highest electricity costs in the nation — largely because of its heavy reliance on oil delivered by sea — and has a legislative mandate to get 100 percent of its energy from renewables by 2045. Still, it could be five to 10 years before wave energy technology can provide an affordable alternative to fossil fuels, experts say. For one thing, developers are still working to come up with the best design. Some buoys capture the up-and-down motion of the waves, while others exploit the side-to-side movement. Industry experts say a machine that uses all the ocean's movements is most likely to succeed. 
Also, the machinery has to be able to withstand powerful storms, the constant pounding of the seas and the corrosive effects of saltwater. "The ocean is a really hard place to work," said Patrick Cross, specialist at the Hawaii Natural Energy Institute at the University of Hawaii at Manoa, which helps run the Hawaii test site. "You've got to design something that can stay in the water for a long time but be able to survive." The U.S. has set a goal of reducing carbon emissions by one-third from 2005 levels by 2030, and many states are seeking to develop more renewable energy in the coming decades. Jose Zayas, a director of the Wind and Water Power Technologies Office at the U.S. Department of Energy, which helps fund the Hawaii site, said the U.S. could get 20 to 28 percent of its energy needs from waves off the U.S. coasts without encroaching on sensitive waters such as marine preserves. "When you think about all of the states that have water along their coasts ... there's quite a bit of wave energy potential," he said. Wave energy technology is at about the same stage as the solar and wind industries were in the 1980s. Both received substantial government investment and tax credits that helped them become energy sources cheap enough to compete with fossil fuels. But while the U.S. government and military have put about $334 million into marine energy research over the last decade, Britain and the rest of Europe have invested more than $1 billion, according to the Marine Energy Council, a trade group. "We're about, I'd say, a decade behind the Europeans," said Alexandra De Visser, the Navy's Hawaii test site project manager. The European Marine Energy Centre in Scotland, for example, has 14 grid-connected berths that have housed dozens of wave and tidal energy devices from around the world over the past 13 years, and Wave Hub in England has several such berths. China, too, has been building and testing dozens of units at sea. 
Though small in scale, the test project near Kaneohe Bay represents the vanguard of U.S. wave energy development. It consists of two buoys anchored a half-mile to a mile off the coast. One of them, the Azura, which stands 12 feet above the surface and extends 50 feet below, converts the waves' vertical and horizontal movements into up to 18 kilowatts of electricity, enough for about a dozen homes. The company working with the Navy, Northwest Energy Innovations of Portland, Oregon, plans a version that can generate at least 500 kilowatts, or enough to power hundreds of homes. The other buoy, a 50-foot-wide, doughnut-shaped device called the Lifesaver, was developed by a Norwegian company. The 3-foot-tall ring is anchored to the ocean floor with cables; when the buoy is moved by the sea, the cables move, turning the wheels of a generator. It is producing on average of just 4 kilowatts but is capable of generating more. Test sites run by other researchers are being planned or expanded in Oregon and California to take advantage of the powerful waves that pound the West Coast. One of those projects, Cal Wave, run by California Polytechnic State University, hopes to provide utility-scale power to Vandenberg Air Force Base. The Hawaii buoys are barely noticeable from shore without binoculars, but developers envision dozens of machines working all at once, an idea that could run into the same kind of opposition that wind turbines have faced from environmentalists, fishermen and tourist groups. "Putting 200 machines on the North Shore of Oahu within a mile or two off the coast might be difficult, said Steve Kopf, CEO of Northwest Energy Innovations. "Nobody wants to look out and see wind turbines or wave machines off the coast."
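For a sense of why waves are such an attractive resource, the power carried by ocean swell can be estimated with the standard deep-water linear-wave formula for energy flux per metre of wave crest. The sketch below is illustrative only: the wave height and period are assumed example values, not measurements from the Hawaii test site.

```python
import math

def wave_power_kw_per_m(height_m: float, period_s: float,
                        rho: float = 1025.0, g: float = 9.81) -> float:
    """Deep-water wave energy flux, in kW per metre of wave crest.

    Uses the standard linear-wave approximation
    P = rho * g^2 * H^2 * T / (64 * pi),
    where H is the significant wave height (m) and T the energy period (s).
    """
    watts = rho * g**2 * height_m**2 * period_s / (64 * math.pi)
    return watts / 1000.0

# A moderate 2 m swell with an 8 s period already carries roughly
# 16 kW per metre of crest length:
print(round(wave_power_kw_per_m(2.0, 8.0), 1))
```

Because the flux grows with the square of wave height, the energetic winter swells off Hawaii and the U.S. West Coast carry several times this figure, which is why developers site test buoys there.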


News Article | November 30, 2016
Site: www.prweb.com

Leading higher education information and resource provider AffordableCollegesOnline.org has released its list of the Best Schools with Online Nurse Practitioner Programs in the U.S. for 2016-2017. The ranking cites the top 50 colleges and universities for online nurse practitioner students based on an in-depth cost and quality comparison. The highest scores were awarded to Stony Brook University, University of Cincinnati, Ball State University, University of St. Francis and Northern Arizona University. "The U.S. Department of Labor predicts nurse practitioners will be among the most in-demand nursing positions in the nation through 2024," said Dan Schuessler, CEO and Founder of AffordableCollegesOnline.org. "Aspiring students will find the schools on our list offer the flexibility of an online education with exceptional overall quality and value compared to other nursing programs around the country." To qualify for a spot on AffordableCollegesOnline.org's rankings, schools must meet several minimum requirements. Each college cited is institutionally accredited and holds public or private not-for-profit standing. To maintain affordability standards, AffordableCollegesOnline.org requires schools to offer in-state tuition rates below $25,000 per year. Each qualifying school is scored based on a comparison of more than a dozen qualitative and quantitative statistics, including financial aid offerings and graduation rates by school. All eligible school scores are compared to determine the final top 50 list. 
For complete details on the data and methodology used to score each school and a full list of ranking colleges, visit: Top 50 Online Nurse Practitioner Programs in the Nation for 2016-2017: Ball State University; Clarkson College; Columbus State University; Concordia University - Wisconsin; Duquesne University; East Tennessee State University; Fitchburg State University; Gardner-Webb University; Georgia College and State University; Graceland University - Lamoni; Indiana State University; Indiana University-Purdue University - Indianapolis; Indiana Wesleyan University; Loyola University New Orleans; Maryville University of Saint Louis; McNeese State University; Michigan State University; New Mexico State University - Main Campus; Northern Arizona University; Saint Joseph's College of Maine; Samford University; Seton Hall University; Southern Adventist University; Southern Illinois University - Edwardsville; Stony Brook University; The University of Alabama; The University of Texas Medical Branch; University of Alabama in Huntsville; University of Arizona; University of Arkansas; University of Central Florida; University of Central Missouri; University of Cincinnati - Main Campus; University of Colorado, Colorado Springs; University of Detroit Mercy; University of Hawaii at Manoa; University of Indianapolis; University of Louisiana at Lafayette; University of Massachusetts - Amherst; University of Memphis; University of North Dakota; University of Northern Colorado; University of South Alabama; University of Southern Indiana; University of St. Francis; West Virginia University; Western Carolina University; Western Kentucky University; Winona State University; Wright State University - Main Campus. AffordableCollegesOnline.org began in 2011 to provide quality data and information about pursuing an affordable higher education. Our free community resource materials and tools span topics such as financial aid and college savings, opportunities for veterans and people with disabilities, and online learning resources. 
We feature higher education institutions that have developed online learning environments that include highly trained faculty, new technology and resources, and online support services to help students achieve educational and career success. We have been featured by nearly 1,100 postsecondary institutions and nearly 120 government organizations.


Conrad C.P.,University of Hawaii at Manoa | Behn M.D.,Woods Hole Oceanographic Institution
Geochemistry, Geophysics, Geosystems | Year: 2010

Although an average westward rotation of the Earth's lithosphere is indicated by global analyses of surface features tied to the deep mantle (e.g., hot spot tracks), the rate of lithospheric drift is uncertain despite its importance to global geodynamics. We use a global viscous flow model to predict asthenospheric anisotropy computed from linear combinations of mantle flow fields driven by relative plate motions, mantle density heterogeneity, and westward lithosphere rotation. By comparing predictions of lattice preferred orientation to asthenospheric anisotropy in oceanic regions inferred from SKS splitting observations and surface wave tomography, we constrain absolute upper mantle viscosity (to 0.5-1.0 × 10^21 Pa s, consistent with other constraints) simultaneously with net rotation rate and the decrease in the viscosity of the asthenosphere relative to that of the upper mantle. For an asthenosphere 10 times less viscous than the upper mantle, we find that global net rotation must be <0.26°/Myr (<60% of net rotation in the HS3 (Pacific hot spot) reference frame); larger viscosity drops amplify asthenospheric shear associated with net rotation and thus require slower net rotation to fit observed anisotropy. The magnitude of westward net rotation is consistent with lithospheric drift relative to Indo-Atlantic hot spots but is slower than drift in the Pacific hot spot frame (HS3 ≈ 0.44°/Myr). The latter may instead express net rotation relative to the deep mantle beneath the Pacific plate, which is moving rapidly eastward in our models. Copyright 2010 by the American Geophysical Union.


Schorghofer N.,University of Hawaii at Manoa | Forget F.,University Paris - Sud
Icarus | Year: 2012

Ice buried beneath a thin layer of soil has been revealed by neutron spectroscopy and explored by the Phoenix Mars Lander. It has also been exposed by recent impacts. This subsurface ice is thought to lose and gain volume in response to orbital variations (Milankovitch cycles). We use a powerful numerical model to follow the growth and retreat of near-surface ice as a result of regolith-atmosphere exchange continuously over millions of years. If a thick layer of almost pure ice has been deposited recently, it has not yet reached equilibrium with the atmospheric water vapor and may still remain as far equatorward as 43°N, where ice has been revealed by recent impacts. A potentially observable consequence is present-day humidity output from the still retreating ice. We also demonstrate that in a sublimation environment, subsurface pore ice can accumulate in two ways. The first mode, widely known, is the progressive filling of pores by ice over a range of depths. The second mode occurs on top of an already impermeable ice layer; subsequent ice accumulates in the form of pasted-on horizontal layers such that beneath the ice table, the pores are completely full of ice. Most or all of the pore ice on Mars today may be of the second type. At the Phoenix landing site, where such a layer is also expected to exist above an underlying ice sheet, it may be extremely thin, due to exceptionally small variations in ice stability over time. © 2012 Elsevier Inc.


Dinezio P.N.,University of Hawaii at Manoa | Tierney J.E.,Woods Hole Oceanographic Institution
Nature Geoscience | Year: 2013

The Indo-Pacific warm pool - the main source of heat and moisture to the global atmosphere - plays a prominent role in tropical and global climate variability. During the Last Glacial Maximum, temperatures within the warm pool were cooler than today and precipitation patterns were altered, but the mechanism responsible for these shifts remains unclear. Here we use a synthesis of proxy reconstructions of warm pool hydrology and a multi-model ensemble of climate simulations to assess the drivers of these changes. The proxy data suggest drier conditions throughout the centre of the warm pool and wetter conditions in the western Indian and Pacific oceans. Only one model out of twelve simulates a pattern of hydroclimate change similar to our reconstructions, as measured by the Cohen's κ statistic. Exposure of the Sunda Shelf by lower glacial sea level plays a key role in the hydrologic pattern simulated by this model, which results from changes in the Walker circulation driven by weakened convection over the warm pool. We therefore conclude that on glacial-interglacial timescales, the growth and decay of ice sheets exert a first-order influence on tropical climate through the associated changes in global sea level. © 2013 Macmillan Publishers Limited. All rights reserved.
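For reference, Cohen's κ used in the abstract above is the standard chance-corrected agreement statistic for two categorical classifications (here presumably whether model and proxy reconstructions agree on wetter versus drier conditions at each site; that site-level reading is our gloss, not spelled out in the abstract):

```latex
% Cohen's kappa: chance-corrected agreement between two classifications
\[
  \kappa = \frac{p_o - p_e}{1 - p_e}
\]
% p_o = observed fraction of agreeing classifications;
% p_e = fraction of agreement expected by chance alone.
```

A value of κ = 1 indicates perfect agreement, while κ ≤ 0 indicates agreement no better than chance, which is why only the one model whose hydroclimate pattern matched the proxies scored well.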


Mora C.,Dalhousie University | Mora C.,University of Hawaii at Manoa | Sale P.F.,Environment Canada
Marine Ecology Progress Series | Year: 2011

A leading strategy in international efforts to reverse ongoing losses in biodiversity is the use of protected areas. We use a broad range of data and a review of the literature to show that neither the effectiveness of existing protected areas nor the current pace of establishing new ones will be sufficient to overcome current trends of loss of marine and terrestrial biodiversity. Although well-designed and well-managed protected areas have proved locally effective in stemming biodiversity loss, significant shortcomings in the usual process of implementing protected areas preclude relying on them as a global solution to this problem. The shortcomings include technical problems associated with large gaps in the coverage of critical ecological processes related to individual home ranges and propagule dispersal, and the overall failure of such areas to protect against the broad range of threats affecting ecosystems. Practical issues include budget constraints, conflicts with human development, and a growing human population that will increase not only the extent of anthropogenic stressors but also the difficulty of successfully enforcing protected areas. While efforts towards improving and increasing the number and/or size of protected areas must continue, there is a clear and urgent need for the development of additional solutions for biodiversity loss, particularly ones that stabilize the size of the world's human population and our ecological demands on biodiversity. © Inter-Research 2011.


Andreeva V.A.,University Paris - Sud | Pokhrel P.,University of Hawaii at Manoa
Psycho-Oncology | Year: 2013

Objective: Many countries host growing Eastern European immigrant communities whose breast cancer preventive behaviors are largely unknown. We therefore aimed to synthesize current evidence regarding secondary prevention via breast cancer screening utilized by that population. Methods: We identified all observational, general-population studies on breast cancer screening with Eastern European immigrant women, without any country, language, or age restrictions. Screening modalities included breast self-examination, clinical breast examination, and mammography. Results: The 30 selected studies were published between 1996 and 2013 and came from Australia, Canada, Denmark, Germany, Israel, the Netherlands, Spain, Switzerland, the UK, and the USA. The reported prevalence of monthly breast self-examination was 0-48%; of yearly clinical breast examination, 27-54%; and of biennial mammography, 0-71%. Substantial methodological heterogeneity prevented a meta-analysis. Nonetheless, irrespective of host country, healthcare access, or educational level, the findings consistently indicated that Eastern European immigrant women underutilize breast cancer screening, largely because of insufficient knowledge about early detection and an external locus of control regarding decision making in health matters. Conclusions: This is a vulnerable population for whom the implementation of culturally tailored breast cancer screening programs is needed. As with other underscreened immigrant/minority groups, Eastern European women's inadequate engagement in prevention is troubling, as it points to susceptibility not only to cancer but also to other serious conditions for which personal action and responsibility are critical. Copyright © 2013 John Wiley & Sons, Ltd.


Lobell D.B.,Stanford University | Roberts M.J.,University of Hawaii at Manoa | Schlenker W.,Columbia University | Braun N.,North Carolina State University | And 3 more authors.
Science | Year: 2014

A key question for climate change adaptation is whether existing cropping systems can become less sensitive to climate variations. We use a field-level data set on maize and soybean yields in the central United States for 1995 through 2012 to examine changes in drought sensitivity. Although yields have increased in absolute value under all levels of stress for both crops, the sensitivity of maize yields to drought stress associated with high vapor pressure deficits has increased. The greater sensitivity has occurred despite cultivar improvements and increased carbon dioxide and reflects the agronomic trend toward higher sowing densities. The results suggest that agronomic changes tend to translate improved drought tolerance of plants to higher average yields but not to decreasing drought sensitivity of yields at the field scale.
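The drought-stress metric invoked above, vapor pressure deficit (VPD), can be estimated from temperature and relative humidity. A sketch using the Tetens approximation for saturation vapor pressure (the coefficients are standard textbook values, not taken from the paper):

```python
import math

def saturation_vp(temp_c):
    """Saturation vapor pressure (kPa), Tetens approximation over water."""
    return 0.6108 * math.exp(17.27 * temp_c / (temp_c + 237.3))

def vpd(temp_c, rel_humidity):
    """Vapor pressure deficit (kPa) from temperature (C) and RH (0-1)."""
    return saturation_vp(temp_c) * (1.0 - rel_humidity)

# A hot, dry afternoon imposes far more atmospheric demand than a humid one
print(f"{vpd(35, 0.30):.2f} kPa")  # hot and dry
print(f"{vpd(25, 0.80):.2f} kPa")  # mild and humid
```

Because saturation vapor pressure grows roughly exponentially with temperature, warming raises VPD even at constant relative humidity, which is why VPD is a useful index of drought stress on crops.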


Joint I.,Plymouth Marine Laboratory | Doney S.C.,Woods Hole Oceanographic Institution | Karl D.M.,University of Hawaii at Manoa
ISME Journal | Year: 2011

The pH of the surface ocean is changing as a result of increases in atmospheric carbon dioxide (CO2), and there are concerns about potential impacts of lower pH and associated alterations in seawater carbonate chemistry on the biogeochemical processes in the ocean. However, it is important to place these changes within the context of pH in the present-day ocean, which is not constant; it varies systematically with season, depth, and along productivity gradients. Yet this natural variability in pH has rarely been considered in assessments of the effect of ocean acidification on marine microbes. Surface pH can change as a consequence of microbial utilization and production of carbon dioxide, and to a lesser extent through other microbially mediated processes such as nitrification. Useful comparisons can be made with microbes in other aquatic environments that readily accommodate very large and rapid pH change. For example, in many freshwater lakes, pH changes that are orders of magnitude greater than those projected for the twenty-second-century oceans can occur over periods of hours. Marine and freshwater assemblages have always experienced variable pH conditions. Therefore, an appropriate null hypothesis may be, until evidence is obtained to the contrary, that major biogeochemical processes in the oceans other than calcification will not be fundamentally different under future higher-CO2/lower-pH conditions.
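Because pH is a logarithmic scale, seemingly small pH shifts imply large relative changes in hydrogen-ion concentration, since [H+] = 10^-pH. A quick illustration (the pH values are representative of surface-ocean conditions, not taken from the paper):

```python
def hplus_increase(ph_before, ph_after):
    """Fractional increase in [H+] when pH drops from ph_before to ph_after."""
    return 10 ** (ph_before - ph_after) - 1

# A drop of 0.1 pH units, the order of the observed surface-ocean change
print(f"{hplus_increase(8.2, 8.1):.0%}")
```

A 0.1-unit pH drop is thus roughly a quarter more hydrogen ions, which is why a change that looks minor on the pH scale can still matter chemically, and also why the hour-scale pH swings in productive lakes dwarf the projected open-ocean trend.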


News Article | December 9, 2015
Site: www.nature.com

When the underwater robot Nereus imploded at sea more than a year ago, oceanographers were left without a vehicle that can reach the deepest parts of the ocean. Now Nereus’s operator has told Nature that it will not replace the submersible. The Woods Hole Oceanographic Institution (WHOI) in Massachusetts says that it will instead spread the insurance money for Nereus across multiple, lower-risk projects. Some oceanographers say that they will miss Nereus’s unique exploration capabilities, but other efforts to build similar robots that can reach the very bottom of the sea are afoot in the United States and China. WHOI originally built Nereus at a cost of around US$8 million — which includes its design, development and testing — with funding from the US National Science Foundation, the Office of Naval Research and the National Oceanic and Atmospheric Administration. Among other things, the institute hoped to use the robot, which was a ‘hybrid’ capable of being controlled remotely and operating autonomously, to investigate the ocean’s hadal zone. This area, in deep-sea trenches between 6,000 and 11,000 metres down, is one of the least explored regions on Earth. Exactly which organisms live down there, how they survive and how they might be altered by pressures such as climate change and pollution are still only poorly understood. Any research vehicles operating at such depths must withstand intense pressure from the weight of the water above, so they are expensive to make and prone to accidents. WHOI lost contact with Nereus during a dive in the Pacific Ocean in May 2014 — probably because a failure in one of its sealed buoyancy spheres, or in the housing around a piece of equipment, set off a catastrophic implosion at a depth of some 10,000 metres (see Nature 509, 408–409; 2014). At first, researchers hoped that the institute would build a replacement vehicle. 
But Andy Bowen, an engineer and director of WHOI’s National Deep Submergence Facility, told Nature that after weighing up the risks and benefits, the institute decided that the money would be better spent on less risky projects. The $3 million insurance payout will go towards a Nereus legacy fund to support activities “in keeping with the spirit of Nereus”, he says. This includes developing technology to improve WHOI’s undersea vehicles that do not go as deep as Nereus — as well as deep-sea ‘landers’, which go to full depth but are unable to move around. They simply sink to the bottom with various pieces of equipment on board, and are later recovered. Such landers are the only tools currently available to explore the hadal zone — and they are no substitute for submersibles, says Jeffrey Drazen, a deep-sea researcher at the University of Hawaii at Manoa. “You put a lander down and hope for the best,” he says. By contrast, Nereus could travel around under water, relaying a real-time video feed, and be moved in response to observations. “We need both,” says Drazen. Only having landers restricts exploration opportunities, agrees Alan Jamieson, a hadal-zone researcher at the University of Aberdeen, UK. Last year, his team used landers to collect hundreds of hours of footage from the Mariana Trench, the ocean’s deepest point, but could not take transects, a common fieldwork technique in which data is collected at multiple points along a set path. Neither could the team pick up samples, although some landers can do this. However, Jamieson understands the choice not to recreate Nereus, describing the construction of a single, expensive robot as “a very good example of putting all your eggs in one basket”. “We need to get clever in how we access the hadal zone,” he says. A more conservative approach has delayed — but not stopped — an effort to build a full-depth vehicle at the Schmidt Ocean Institute, a private foundation in Palo Alto, California.
Rather than having this vehicle completely ready by 2016 as originally planned, the institute now aims to create a series of deep-sea submersibles. One will be delivered each year starting in 2016, with gradually increasing depth capabilities, says the institute’s director of research, Victor Zykov. This could lead to a full-ocean-depth vehicle by 2019, if all the precursors prove successful. “When I have discussed our plans with the deep-ocean scientists, most of them understand and appreciate this approach,” says Zykov. “They appreciate how difficult it is to operate in the ocean’s deepest trenches.” The plans in China are even bigger. In a paper published in 2014, Weicheng Cui of the Hadal Science and Technology Research Center at Shanghai Ocean University described a rough plan to build and deploy three landers, one robotic submersible and one human-occupied vehicle that can all operate in the hadal zone (W. Cui et al. Meth. Oceanogr. 10, 178–193; 2014). Cui told Nature that the first lander and the robotic submersible are currently undergoing sea trials, and that a mother ship that would control them is under construction. Around August or September 2016, he hopes to do trials in the Mariana Trench, sending the landers and the submersible to 11,000 metres. He plans eventually to use the robot to scout areas, then deploy the landers and the crewed submersible to conduct more detailed research. The project is backed by a mixture of government funding and private investment. As to working out exactly what destroyed Nereus, says Bowen, “we’ll never know — short of going and recovering the debris, which isn’t financially viable. And of course there’s nothing that can get there at the moment.”


Tully R.B.,University of Hawaii at Manoa | Courtois H.M.,University of Hawaii at Manoa | Courtois H.M.,University Claude Bernard Lyon 1
Astrophysical Journal | Year: 2012

In order to measure distances with minimal systematics using the correlation between galaxy luminosities and rotation rates, it is necessary to adhere to a strict and tested recipe. We now derive a measure of rotation from a new characterization of the width of a neutral hydrogen line profile. Additionally, new photometry and zero-point calibration data are available. In particular, the introduction of a new linewidth parameter necessitates the reconstruction and absolute calibration of the luminosity-linewidth template. The slope of the new template is set by 267 galaxies in 13 clusters. The zero point is set by 36 galaxies with Cepheid or tip of the red giant branch distances. Tentatively, we determine H0 ∼ 75 km s^-1 Mpc^-1. Distances determined using the luminosity-linewidth calibration will contribute to the distance compendium Cosmicflows-2. © 2012. The American Astronomical Society. All rights reserved.
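The template construction described above amounts to fitting a straight line of absolute magnitude against the logarithm of linewidth, with the slope constrained by cluster galaxies and the zero point by Cepheid/TRGB calibrators. A minimal ordinary-least-squares sketch (the calibrator values below are invented for illustration, not the paper's data):

```python
def least_squares(xs, ys):
    """Ordinary least-squares slope and intercept for y = slope*x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical calibrators: log10 of the HI linewidth vs absolute magnitude
logw = [2.2, 2.4, 2.5, 2.6, 2.8]
absmag = [-18.4, -19.9, -20.7, -21.4, -22.9]
slope, zero_point = least_squares(logw, absmag)
print(round(slope, 2), round(zero_point, 2))
```

With the template in hand, a measured linewidth yields an absolute magnitude, and comparison with the apparent magnitude gives the distance modulus.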


Courtois H.M.,University of Hawaii at Manoa | Tully R.B.,University Claude Bernard Lyon 1
Astrophysical Journal | Year: 2012

The construction of the Cosmicflows-2 compendium of distances involves the merging of distance measures contributed by the following methods: (Cepheid) period-luminosity, tip of the red giant branch (TRGB), surface brightness fluctuation (SBF), luminosity-linewidth (TF), fundamental plane (FP), and Type Ia supernova (SNIa). The method involving SNIa is at the top of an interconnected ladder, providing accurate distances to well beyond the expected range of distortions to Hubble flow from peculiar motions. In this paper, the SNIa scale is anchored by 36 TF spirals with Cepheid or TRGB distances, 56 SNIa hosts with TF distances, and 61 groups or clusters hosting SNIa with Cepheid, SBF, TF, or FP distances. With the SNIa scale zero-point set, a value of the Hubble constant is evaluated over a range of redshifts 0.03 < z < 0.5, assuming a cosmological model with Ωm = 0.27 and ΩΛ = 0.73. The value determined for the Hubble constant is H0 = 75.9 ± 3.8 km s^-1 Mpc^-1. © 2012. The American Astronomical Society. All rights reserved.
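At the redshifts quoted above, extracting H0 from a measured distance requires the assumed flat cosmology (Ωm = 0.27, ΩΛ = 0.73), via the luminosity-distance relation D_L = (c/H0)(1+z)∫dz'/E(z'). A sketch of that inversion for a single hypothetical supernova (the redshift and distance below are invented for illustration; the distance was chosen so the implied H0 lands near the paper's value):

```python
import math

C = 299792.458                  # speed of light, km/s
OMEGA_M, OMEGA_L = 0.27, 0.73   # flat LCDM parameters quoted in the abstract

def comoving_integral(z, steps=1000):
    """Trapezoidal estimate of the integral of dz'/E(z') from 0 to z,
    with E(z) = sqrt(Om*(1+z)^3 + OL) for a flat universe."""
    def inv_e(zp):
        return 1.0 / math.sqrt(OMEGA_M * (1.0 + zp) ** 3 + OMEGA_L)
    h = z / steps
    total = 0.5 * (inv_e(0.0) + inv_e(z))
    total += sum(inv_e(i * h) for i in range(1, steps))
    return total * h

def hubble_constant(z, d_lum_mpc):
    """H0 (km/s/Mpc) implied by one luminosity distance at redshift z,
    from D_L = (c/H0) * (1+z) * integral(dz'/E)."""
    return C * (1.0 + z) * comoving_integral(z) / d_lum_mpc

# Hypothetical supernova: z = 0.3 with a luminosity distance of 1442 Mpc
print(round(hubble_constant(0.3, 1442.0), 1))
```

Averaging such per-object estimates over many SNIa across 0.03 < z < 0.5 is, in outline, how a single H0 value with an error bar emerges.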


Murakami H.,Japan Advanced Institute of Science and Technology | Murakami H.,Japan Agency for Marine - Earth Science and Technology | Wang B.,University of Hawaii at Manoa
Journal of Climate | Year: 2010

Possible future change in tropical cyclone (TC) activity over the North Atlantic (NA) was investigated by comparison of 25-yr simulations of the present-day climate and future change under the A1B emission scenario using a 20-km-mesh Meteorological Research Institute (MRI) and Japan Meteorological Agency (JMA) atmospheric general circulation model. The present-day simulation reproduces many essential features of observed climatology and interannual variability in TC frequency of occurrence and tracks over the NA. For the future projection, the model is driven by the sea surface temperature (SST) that includes a trend projected by the most recent Intergovernmental Panel on Climate Change (IPCC) multimodel ensemble and a year-to-year variation derived from the present-day climate. A major finding is that the future change of total TC counts in the NA is statistically insignificant, but the frequency of TC occurrence will decrease in the tropical western NA (WNA) and increase in the tropical eastern NA (ENA) and northwestern NA (NWNA). The projected change in TC tracks suggests a reduced probability of TC landfall over the southeastern United States, and an increased influence of TCs on the northeastern United States. The track changes are not due to changes of large-scale steering flows; instead, they are due to changes in TC genesis locations. The increase in TC genesis in the ENA arises from increasing background ascending motion and convective available potential energy. In contrast, the reduced TC genesis in the WNA is attributed to decreases in midtropospheric relative humidity and ascending motion caused by remotely forced anomalous descent. This finding indicates that the impact of remote dynamical forcing is greater than that of local thermodynamical forcing in the WNA. The increased frequency of TC occurrence in the NWNA is attributed to reduced vertical wind shear and the pronounced local warming of the ocean surface. 
These TC changes appear to be most sensitive to future change in the spatial distribution of rising SST. Given that most IPCC models project a larger increase in SST in the ENA than in the WNA, the projected eastward shift in TC genesis is likely to be robust. © 2010 American Meteorological Society.


Rupke D.S.N.,Rhodes College | Rupke D.S.N.,University of Hawaii at Manoa | Veilleux S.,University of Maryland University College
Astrophysical Journal | Year: 2013

Massive, galaxy-scale outflows are known to be ubiquitous in major mergers of disk galaxies in the local universe. In this paper, we explore the multiphase structure and power sources of galactic winds in six ultraluminous infrared galaxies (ULIRGs) at z < 0.06 using deep integral field spectroscopy with the Gemini Multi-Object Spectrograph (GMOS) on Gemini North. We probe the neutral, ionized, and dusty gas phases using Na I D, strong emission lines ([O I], Hα, and [N II]), and continuum colors, respectively. We separate outflow motions from those due to rotation and tidal perturbations, and find that all of the galaxies in our sample host high-velocity flows on kiloparsec scales. The properties of these outflows are consistent with multiphase (ionized, neutral, and dusty) collimated bipolar winds emerging along the minor axis of the nuclear disk to scales of 1-2 kpc. In two cases, these collimated winds take the form of bipolar superbubbles, identified by clear kinematic signatures. Less collimated (but still high-velocity) flows are also present on scales up to 5 kpc in most systems. The three galaxies in our sample with obscured QSOs host higher velocity outflows than those in the three galaxies with no evidence for an active galactic nucleus. The peak outflow velocity in each of the QSOs is in the range 1450-3350 km s^-1, and the highest velocities (2000-3000 km s^-1) are seen only in ionized gas. The outflow energy and momentum in the QSOs are difficult to produce from a starburst alone, but are consistent with the QSO contributing significantly to the driving of the flow. Finally, when all gas phases are accounted for, the outflows are massive enough to provide negative feedback to star formation. © 2013. The American Astronomical Society. All rights reserved.


Sanyal A.,New Mexico State University | Nordkvist N.,University of Hawaii at Manoa | Chyba M.,University of Hawaii at Manoa
IEEE Transactions on Automatic Control | Year: 2011

This technical note treats the challenging control problem of tracking a desired continuous trajectory for a maneuverable autonomous vehicle in the presence of gravity, buoyancy, and fluid dynamic forces and moments. A realistic dynamics model that applies to maneuverable vehicles moving in 3-D Euclidean space is used for obtaining this control scheme. While applications of this control scheme include autonomous aerial and underwater vehicles, we focus on an autonomous underwater vehicle (AUV) application because of its richer, more nonlinearly coupled dynamics. The desired trajectory and trajectory tracking errors are globally characterized in the nonlinear state space. Almost global asymptotic stability to the desired trajectory in the nonlinear state space is demonstrated both analytically and through numerical simulations. © 2006 IEEE.


King S.F.,University of Southampton | Muhlleitner M.,Karlsruhe Institute of Technology | Nevzorov R.,University of Hawaii at Manoa
Nuclear Physics B | Year: 2012

The recent LHC indications of a SM-like Higgs boson near 125 GeV are consistent not only with the Standard Model (SM) but also with Supersymmetry (SUSY). However, naturalness arguments disfavour the Minimal Supersymmetric Standard Model (MSSM). We consider the Next-to-Minimal Supersymmetric Standard Model (NMSSM) with a SM-like Higgs boson near 125 GeV involving relatively light stops and gluinos below 1 TeV in order to satisfy naturalness requirements. We are careful to ensure that the chosen values of couplings do not become non-perturbative below the grand unification (GUT) scale, although we also examine how these limits may be extended by the addition of extra matter to the NMSSM at the two-loop level. We then propose four sets of benchmark points corresponding to the SM-like Higgs boson being the lightest or the second lightest Higgs state in the NMSSM or the NMSSM-with-extra-matter. With the aid of these benchmark points we discuss how the NMSSM Higgs boson near 125 GeV may be distinguished from the SM Higgs boson in future LHC searches. © 2012.


Mondragon Chaparro D.,Centro Interdisciplinario Of Investigacion Para El Desarrollo Integral Regional Ciidir | Ticktin T.,University of Hawaii at Manoa
Conservation Biology | Year: 2011

Hundreds of epiphytic bromeliad species are harvested from the wild for trade and for cultural uses, but little is known about the effects of this harvest. We assessed the potential demographic effects of harvesting from the wild on 2 epiphytic bromeliads: Tillandsia macdougallii, an atmospheric bromeliad (which absorbs water and nutrients directly from the atmosphere), and T. violaceae, a tank bromeliad (which accumulates water and organic material between its leaves). We also examined an alternative to harvesting bromeliads from trees: the collection of fallen bromeliads from the forest floor. We censused populations of T. macdougallii each year from 2005 to 2010 and of T. violaceae from 2005 to 2008, in Oaxaca, Mexico. We also measured monthly fall rates of bromeliads over 1 year and monitored the survival of fallen bromeliads on the forest floor. The tank bromeliad had significantly higher rates of survival, reproduction, and stochastic population growth (λs) than the atmospheric bromeliad, but λs for both species was <1, which suggests that the populations will decline even without harvest. Elasticity patterns differed between the species, but in both, survival of large individuals had high elasticity values. No fallen bromeliads survived more than 1.5 years on the forest floor, and the rate of bromeliad fall was comparable to current harvest rates. The low rates of population growth recorded for the species we studied and other epiphytic bromeliads, together with the high elasticity values for the vital rates most affected by harvest, suggest that commercial harvesting of these species in the wild is not sustainable. We propose the collection of fallen bromeliads as an ecologically and, potentially, economically viable alternative. © 2011 Society for Conservation Biology.
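The population growth rates (λ) discussed above come from stage-structured projection matrices, whose dominant eigenvalue gives the asymptotic growth rate (the stochastic rate λs additionally averages over randomly varying yearly matrices). A minimal deterministic sketch using power iteration, with an invented three-stage matrix (all entries are illustrative, not the study's estimates):

```python
def dominant_eigenvalue(matrix, iterations=200):
    """Power iteration for the dominant eigenvalue (lambda) of a
    non-negative stage-structured population projection matrix."""
    n = len(matrix)
    v = [1.0] * n
    lam = 1.0
    for _ in range(iterations):
        w = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = sum(w)          # L1 norm works since entries are non-negative
        v = [x / lam for x in w]
    return lam

# Hypothetical 3-stage epiphyte matrix: seedling, juvenile, adult.
# Rows give next year's stages; the top-right entry is adult fecundity.
A = [
    [0.10, 0.00, 0.40],   # seedlings: low survival, recruits from adults
    [0.05, 0.60, 0.00],   # juveniles: growth from seedlings, stasis
    [0.00, 0.15, 0.90],   # adults: high stasis (the high-elasticity term)
]
print(round(dominant_eigenvalue(A), 3))
```

Here λ < 1, so the modeled population declines even without harvest, matching the qualitative result reported above; harvest lowers the adult-survival entry, to which λ is most sensitive.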


Patent
The Queens Medical Center, University of Hawaii at Manoa, Medical College of Wisconsin and INC Research | Date: 2015-04-28

This invention relates to a system that adaptively compensates for subject motion in real-time in an imaging system. An object orientation marker (30), preferably a retro-grate reflector (RGR), is placed on the head or other body organ of interest of a patient (P) during a scan, such as an MRI scan. The marker (30) makes it possible to measure the six degrees of freedom (x-, y-, and z-translations, and pitch, yaw, and roll), or pose, required to track motion of the organ of interest. A detector, preferably a camera (40), observes the marker (30) and continuously extracts its pose. The pose from the camera (40) is sent to the scanner (120) via an RGR processing computer (50) and a scanner control and processing computer (100), allowing for continuous correction of scan planes and position (in real time) for motion of the patient (P). This invention also provides for internal calibration and for co-registration over time of the scanner's and tracking system's reference frames to compensate for drift and other inaccuracies that may arise over time.
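The six degrees of freedom named in the abstract (three translations plus pitch, yaw, and roll) are conventionally combined into a single rigid-body transform, which is then applied to re-pose the scan planes. A minimal sketch of such a transform (the z-y-x rotation convention and all values are illustrative assumptions, not taken from the patent):

```python
import math

def rotation_matrix(pitch, yaw, roll):
    """3x3 rotation from pitch (about x), yaw (about y), roll (about z),
    in radians, composed in z-y-x order (one common convention)."""
    cx, sx = math.cos(pitch), math.sin(pitch)
    cy, sy = math.cos(yaw), math.sin(yaw)
    cz, sz = math.cos(roll), math.sin(roll)
    rx = [[1, 0, 0], [0, cx, -sx], [0, sx, cx]]
    ry = [[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]]
    rz = [[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]]
    matmul = lambda a, b: [[sum(a[i][k] * b[k][j] for k in range(3))
                            for j in range(3)] for i in range(3)]
    return matmul(rz, matmul(ry, rx))

def apply_pose(point, translation, pitch, yaw, roll):
    """Transform a 3-D point by the 6-DOF pose (rotate, then translate)."""
    r = rotation_matrix(pitch, yaw, roll)
    rotated = [sum(r[i][j] * point[j] for j in range(3)) for i in range(3)]
    return [rotated[i] + translation[i] for i in range(3)]

# A 5 mm x-translation combined with a 90-degree yaw
moved = apply_pose([1.0, 0.0, 0.0], [5.0, 0.0, 0.0], 0.0, math.pi / 2, 0.0)
print([round(c, 6) for c in moved])
```

In a motion-correction loop, the scanner would apply the inverse of the measured head pose to each scan plane, so that the imaging geometry follows the organ of interest.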


Patent
The Queens Medical Center, INC Research, Medical College of Wisconsin and University of Hawaii at Manoa | Date: 2013-09-23

This invention relates to a system that adaptively compensates for subject motion in real-time in an imaging system. An object orientation marker (30), preferably a retro-grate reflector (RGR), is placed on the head or other body organ of interest of a patient (P) during a scan, such as an MRI scan. The marker (30) makes it possible to measure the six degrees of freedom (x-, y-, and z-translations, and pitch, yaw, and roll), or pose, required to track motion of the organ of interest. A detector, preferably a camera (40), observes the marker (30) and continuously extracts its pose. The pose from the camera (40) is sent to the scanner (120) via an RGR processing computer (50) and a scanner control and processing computer (100), allowing for continuous correction of scan planes and position (in real time) for motion of the patient (P). This invention also provides for internal calibration and for co-registration over time of the scanner's and tracking system's reference frames to compensate for drift and other inaccuracies that may arise over time.


Patent
The Queens Medical Center, University of Hawaii at Manoa, Medical College of Wisconsin and INC Research | Date: 2015-08-17

This invention relates to a system that adaptively compensates for subject motion in real-time in an imaging system. An object orientation marker (30), preferably a retro-grate reflector (RGR), is placed on the head or other body organ of interest of a patient (P) during a scan, such as an MRI scan. The marker (30) makes it possible to measure the six degrees of freedom (x-, y-, and z-translations, and pitch, yaw, and roll), or pose, required to track motion of the organ of interest. A detector, preferably a camera (40), observes the marker (30) and continuously extracts its pose. The pose from the camera (40) is sent to the scanner (120) via an RGR processing computer (50) and a scanner control and processing computer (100), allowing for continuous correction of scan planes and position (in real time) for motion of the patient (P). This invention also provides for internal calibration and for co-registration over time of the scanner's and tracking system's reference frames to compensate for drift and other inaccuracies that may arise over time.


Patent
The Queens Medical Center, INC Research, Medical College of Wisconsin and University of Hawaii at Manoa | Date: 2013-01-07

This invention relates to a system that adaptively compensates for subject motion in real-time in an imaging system. An object orientation marker (30), preferably a retro-grate reflector (RGR), is placed on the head or other body organ of interest of a patient (P) during a scan, such as an MRI scan. The marker (30) makes it possible to measure the six degrees of freedom (x-, y-, and z-translations, and pitch, yaw, and roll), or pose, required to track motion of the organ of interest. A detector, preferably a camera (40), observes the marker (30) and continuously extracts its pose. The pose from the camera (40) is sent to the scanner (120) via an RGR processing computer (50) and a scanner control and processing computer (100), allowing for continuous correction of scan planes and position (in real time) for motion of the patient (P). This invention also provides for internal calibration and for co-registration over time of the scanner's and tracking system's reference frames to compensate for drift and other inaccuracies that may arise over time.


News Article | February 15, 2017
Site: www.nature.com

Every few days, Alan Flint plucks a gardenia from outside the building where his wife, Lorraine, works in Sacramento, California. He replaces the fading flower on her desk and refills the glass with fresh water. It's an easy romantic gesture, given that Alan's office is right down the hall. The Flints are both research hydrologists with the US Geological Survey California Water Science Center. It's the farthest apart their offices have been for years: they met during secondary school, married in 1975 and have been next door to each other throughout much of their careers in soil science. There are many couples in science similar to the Flints — and this Valentine's Day, Alan might not be the only one delivering flowers to an office down the hall. For researcher couples, obvious advantages can range from reviewing each other's writing to carpooling. Yet there are potential downsides, from navigating the challenge of finding and holding dual jobs to concerns about potential or existing conflicts of interest — such as when one partner sits on a promotions committee that discusses the other — or what might happen if the romance collapses (see 'Science soldiers on'). According to a 2008 report by the Clayman Institute for Gender Research at Stanford University in California, which collected data on around 9,000 faculty members at 13 universities, 36% of US faculty members were part of an academic couple (ref. 1). Of those, 38% worked in the same department as their partner. Professors in the natural sciences were particularly likely to work in similar fields or the same department: 83% of female scientists and 54% of male scientists in academic couples had another scientist as a partner. A report released by the US National Science Foundation in 2015 found that 73% of scientists were married, and 24% of their employed spouses worked in engineering, computing, mathematics or the natural sciences (ref. 2).
“I know so many of my colleagues here who are married to another scientist on this campus,” says Alexis Templeton, a geologist at the University of Colorado Boulder. In Europe, too, it's common for scientists to marry another scientist, says Phil Stanier, a geneticist at University College London — although it's less common for them to work as closely as he does with his wife, geneticist Gudrun Moore, with whom he's co-authored dozens of papers. A 2016 report by the European Commission (EC) similarly found that 72% of surveyed researchers were in a relationship, and, of those, 54% were partnered with a person who was also pursuing a demanding career, although not necessarily in science (ref. 3). Some couples deliberately keep their careers separate and don't talk much shop on evenings and weekends. Others, such as the Flints, are driven by a shared goal, and seamlessly integrate their work and home lives. Ultimately, navigating a relationship and career as a member of a scientist couple requires mutual respect, effort to carve out two distinct niches and a hearty dose of cooperation. Married wildlife biologists Paula MacKay and Robert Long laugh at the idea of setting boundaries between personal and professional activities. The pair was once halfway up a mountain, carrying odorous bear-scent lures, when MacKay realized that it was their wedding anniversary. Long, a senior conservation scientist at Woodland Park Zoo in Seattle, Washington, and MacKay, a contract field biologist whose clients include the zoo, had been so involved in planning their trip that they had both forgotten the date. “I feel like I'm always out there with my best friend,” MacKay says. “When we approach a remote camera site or a place where we had set out a station before, it's really exciting to be there with Rob.” That shared joy is one of the myriad benefits of dual employment as researchers.
Those might be as simple as grabbing lunch for one's partner on a busy day, as Frances Rena Bahjat and Keith Bahjat of Bristol-Myers Squibb in Redwood City, California, frequently do for each other. Frances Rena is senior director of in vivo studies and Keith directs cellular immunology at the company. To keep up with the literature, they also play a 'Who can find the best papers?' game each week, and they recommend potential collaborators to each other. “The two of us have much more reach than a single scientist, that's for sure,” says Frances Rena. One partner's enthusiasm for science, or for a particular field, can be contagious. Frances Rena says that she probably wouldn't have become a scientist if she hadn't met her husband (and now, colleague) when both were undergraduates. She didn't understand how science could be a career until she met Keith, whose father was a geophysicist. Similarly, Alan Flint started his career in soil science before Lorraine followed, and their couple status has even helped in a job search. As Alan was finishing his PhD and Lorraine her master's, their adviser heard about a lab that was looking for two soil scientists — one at the PhD level and one at the master's level. They got the jobs. This situation is not uncommon: in the Clayman Institute report (ref. 1), 10% of faculty members were hired as a couple, and as of 2008, that rate was on the rise. Usually, one partner was hired first and negotiated for the other. Men were more often the first hire at that time, and the second hire was more likely to be in a junior faculty position. For many couples, such as geneticists Moore and Stanier of University College London, working together enhances both the relationship and research. The pair met during the 1980s at St Mary's Hospital in London. After a series of lecturer and postdoc positions, both worked at Imperial College London for a time, sharing equipment, working on each other's grants and co-authoring papers.
They tried working apart, but didn't like it. “We're stronger together,” says Moore. For example, the pair was able to productively combine Moore's background in protein chemistry and Stanier's in molecular cloning when they searched for a gene associated with X-linked cleft palate.

Working together and being able to continue the discussion at home is a big advantage for the research, agrees Shin-ichi Horike, a geneticist at Kanazawa University in Japan whose wife, Makiko Meguro, works in his lab. When a grant deadline is coming up, science is a major item on the conversation agenda at their house, albeit after the children have gone to bed. They discuss results of their experiments on those evenings.

For partners who collaborate closely, division of expertise is crucial. “You have to develop complementary skills so that you're not in competition with each other,” says Lorraine Flint. And it's important, for the relationship, to take a bit of time away from science, say the Flints. They've set aside a daily cocktail hour.

Ethologists Rick D'Eath and Susan Jarvis of Scotland's Rural College in Easter Bush, UK, don't work on exactly the same science, but they use each other as a sounding board to practise major presentations. Both can approach other colleagues for feedback, but are fully — even brutally — honest with each other. Jarvis feels perfectly comfortable telling her husband that his points are “a bit rubbish”.

Whether scientist couples work closely or just share an employer, many say that they appreciate the ability to provide mutual support through tough times at work. Allison Mattheis, an educational researcher at California State University (Cal State) in Los Angeles, met her partner, Valerie Wong, when they were both at the University of Minnesota — Mattheis at the Minneapolis campus and Wong in St Paul. Now, Wong is an adjunct faculty member at Cal State. “You get frustrated by all the same bureaucratic hurdles of the institution,” says Mattheis.
Who better to commiserate with over Mattheis's struggles to add her partner to her health insurance than Wong? The two talk about how best to design lessons, address students' misconceptions or advise students. Wong also refers biology students with an interest in teaching to Mattheis. The two have started a project to connect secondary-school teachers with university instructors to improve early science education.

These relationships are of value to scientists still in training, too. Erin Zimmerman of London, Canada, misses this kind of connection now that she and her husband, Eric Chevalier, no longer work in science. Although they met as graduate students in the Plant Biology Research Institute at the University of Montreal, Canada, she's now a freelance science writer; he, an optometrist at Old South Optometry in London. When they began dating, it was easy to keep in contact. Chevalier once placed a picture of a hand-drawn flower into a beaker on Zimmerman's desk, because he knew she hated how real cut flowers die. They co-authored a review, and related to each other's dealings with academic culture, funding woes and other frustrations. “It was nice being able to have someone at home who really understood that,” says Zimmerman. “Now,” she jokes, “we bore each other.”

There are potential pitfalls to such a relationship. For one, those determined to work together might limit their options. One-fifth of researchers in a relationship surveyed by the EC3 had refused or left a job owing to the challenge of maintaining both careers. Moore advises: “You have to be seen as one, so when they want you, they want both of you.”

Scientist couples who work together need to be aware of how they present themselves, and must always maintain an image of two distinct professionals. “Your relationship is living in a fishbowl,” says MacKay. And they must take care to avoid even the possible appearance of favouritism.
Intern architect Donna Marion and her husband, Mike Grosskopf, a statistics graduate student at Simon Fraser University in Vancouver, Canada, met as undergraduates in an astrophysics lab at the University of Michigan in Ann Arbor. Both joined the lab as employees once they graduated, and, for a time, Grosskopf was Marion's supervisor. But when romance blossomed, he warned his boss, who changed Marion's supervisor.

Similarly, mathematician Piper Harron, a temporary faculty member at the University of Hawaii at Manoa, avoided selecting her husband, Robert Harron, as an academic mentor when she was applying for grant support. “If we weren't related, I would be the natural choice,” says her husband, a maths faculty member at the university, but he knew that any reports or letters of recommendation that he might write about her would be suspect. Nonetheless, they contribute to each other's work, reading and editing their writing. Piper excels at bits that sell the projects, and Robert is good at converting text into more maths-oriented language.

Sharing a last name might also raise eyebrows, adds biochemist Edith Sim of Oxford, UK, who met her husband, Bob Sim, when they were undergraduate laboratory partners. They worked in each other's labs at times. Once, a grant application that she had submitted came back with the comment, “Was this hers or was this her husband's?” From then on, she left her husband's name off any papers that she produced.

By contrast, colleagues of Moore and Stanier didn't always catch on that they were married. “We didn't hide it, but we didn't particularly flaunt it,” explains Stanier. One visiting student spent a few months in Moore's lab while Stanier was a postdoc there, and thought the two were engaged in a scandalous affair. (His adviser set him straight.)

Another issue that couples may want to consider, points out Keith Bahjat, is that when a couple works for the same employer, both members depend on that employer for their wages.
That's a particular concern in industry, he says, where companies might impose layoffs at any time. D'Eath and Jarvis had the same concern, which they've mitigated in part by Jarvis taking a second position as director of a master's programme at the University of Edinburgh, UK, in addition to her work at Scotland's Rural College. Now they feel safer, because it's unlikely that both institutions would falter at the same time.

Despite these challenges, scientist couples know that they enjoy significant good fortune. “Finding a situation where you both have great opportunity is really rare,” says Frances Rena Bahjat.


News Article | November 17, 2015
Site: www.sciencenews.org

Faced with a shortage of the essential nutrient selenium, the brain and the testes duke it out. In selenium-depleted male mice, testes hog the trace element, leaving the brain in the lurch, scientists report in the Nov. 18 Journal of Neuroscience. The results are some of the first to show competition between two organs for trace nutrients, says analytical neurochemist Dominic Hare of the University of Technology Sydney and the Florey Institute of Neuroscience and Mental Health in Melbourne. In addition to uncovering this brain-testes scuffle, the study “highlights that selenium in the brain is something we can’t continue to ignore,” he says.

About two dozen proteins in the body contain selenium, a nonmetallic chemical element. Some of these proteins are antioxidants that keep harmful molecules called free radicals from causing trouble.

Male mice without enough selenium have brain abnormalities that lead to movement problems and seizures, neuroscientist Matthew Pitts of the University of Hawaii at Manoa and colleagues found. In some experiments, Pitts and his colleagues depleted selenium by interfering with genes. Male mice engineered to lack two genes that produce proteins required for the body to properly use selenium had trouble balancing on a rotating rod and moving in an open field. In their brains, a particular group of nerve cells called parvalbumin interneurons didn’t mature normally. But removing the selenium-hungry testes via castration before puberty improved these symptoms, leaving more selenium for the brain, the team found. Selenium levels in the brains of these castrated mice were higher than those in uncastrated mice (though not as high as in females). The results “really suggest that there is some competition going on” in the males, Pitts says.

Because selenium is known to be important for both fertility and the brain, the results make sense, says biochemist Lutz Schomburg of Charité-University Medical School Berlin.
“Taking out the brain or the testes will likely benefit the other organ,” he says. “The former experiment is impossible to do but the latter has now nicely been conducted.”

Schomburg cautions that the results aren’t necessarily relevant for people, who aren’t likely to be as selenium-deprived as the mice in the experiment. “Under normal conditions, the competition between testes and brain is not existent,” he says. That’s in part because most people’s diets contain plenty of selenium. The nutrient is found in crops grown in soil with plentiful selenium, such as in the Great Plains in the United States. Brazil nuts are packed with selenium, as are tuna, halibut and sardines.

Yet some people in parts of China, New Zealand and Europe have low selenium intake, Pitts says. Differences in selenium levels in the body, either due to diet or genetic traits, may play a role in psychiatric disorders such as schizophrenia, he speculates. While that idea is unconfirmed, a hint comes from an earlier study that found that people with schizophrenia had reduced activity of the gene that encodes a protein that helps deliver selenium to where it is needed. Early-onset schizophrenia is also more prevalent in males. “In this way, males could be more at risk, because they have an additional organ sucking up resources that could be going to the brain,” Pitts says.


News Article | March 21, 2016
Site: www.washingtonpost.com

This story has been updated.

If you dig deep enough into the Earth’s climate change archives, you hear about the Palaeocene-Eocene Thermal Maximum, or PETM. And then you get scared. This is a time period, about 56 million years ago, when something mysterious happened — there are many ideas as to what — that suddenly caused concentrations of carbon dioxide in the atmosphere to spike, far higher than they are right now. The planet proceeded to warm rapidly, at least in geologic terms, and major die-offs of some marine organisms followed due to strong acidification of the oceans.

The cause of the PETM has been widely debated. Some think it was an explosion of carbon from thawing Arctic permafrost. Some think there was a huge release of subsea methane that somehow made its way to the atmosphere — and that the series of events might have been kickstarted by major volcanic eruptions.

In any case, the result was a hothouse world from pole to pole, some 5 degrees Celsius warmer overall. But now, new research suggests, even the drama of the PETM falls short of our current period, in at least one key respect: We’re putting carbon into the atmosphere at an even faster rate than happened back then.

Such is the result of a new study in Nature Geoscience, led by Richard Zeebe of the University of Hawaii at Manoa, and colleagues from the University of Bristol in the UK and the University of California-Riverside. “If you look over the entire Cenozoic, the last 66 million years, the only event that we know of at the moment, that has a massive carbon release, and happens over a relatively short period of time, is the PETM,” says Zeebe. “We actually have to go back to relatively old periods, because in the more recent past, we don’t see anything comparable to what humans are currently doing.” That’s why this time period is so crucial to study — as a possible window on our own.
There’s no doubt that a lot of carbon — about as much as is contained in the fossil fuel reserves that humans have either already burned, or could still burn, combined — made its way into the atmosphere during the PETM. The result was a major warming event that lasted over 100,000 years. But precisely how rapidly the emissions occurred is another matter. “If anthropogenic emissions rates have no analogue in Earth’s recent history, then unforeseeable future responses of the climate system are possible,” the authors write.

To examine what happened in the PETM, the researchers used a deep ocean core of sediment from off the coast of New Jersey. The goal was to determine the ratios between different isotopes, or slightly different elemental forms, of carbon and oxygen, in the sediments during the PETM. The relationship between the two lets researchers determine how atmospheric carbon dioxide levels, as reflected in the ratio of carbon 12 to carbon 13, in turn influenced temperatures (which can be inferred based on oxygen isotopes in the ocean). “In terms of these two systems, the first shows us when the carbon went into the system, and the second tells us when the climate responded,” says Zeebe.

It turns out that there is a lag time between massive pulses of carbon in the atmosphere and subsequent warming, because the oceans have a large thermal inertia. Therefore, a large lag would indicate a more rapid carbon release, whereas the lack of one means that carbon dioxide came out more slowly.

The geologic evidence from the new core did not show a lag, the new study reports. That means, the authors estimate, that while a gigantic volume of carbon entered the atmosphere during the PETM — between 2,000 and 4,500 billion tons — it played out over some 4,000 years. So only about 1 billion tons of carbon were emitted per year. In contrast, humans are now emitting about 10 billion tons annually — changing the planet much more rapidly.
“The anthropogenic release outpaces carbon release during the most extreme global warming event of the past 66 million years, by at least an order of magnitude,” writes Peter Stassen, an Earth and environmental scientist at KU Leuven, in Belgium, in an accompanying commentary on the new study.

The analogy between the PETM and the present, then, is less than perfect — and our own era may be worse in key ways. “The two main conclusions is that ocean acidification will be more severe, ecosystems may be hit harder because of the rate” of carbon release, says Zeebe.

And not only have we only begun to see the changes that will result from current warming, but there may be other changes that lack any ancient parallel, because of the current rate of change. “Given that the current rate of carbon release is unprecedented throughout the Cenozoic, we have effectively entered an era of a no-analogue state, which represents a fundamental challenge to constraining future climate projections,” the study concludes.
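The order-of-magnitude claim above is easy to check directly from the figures quoted in the article (a back-of-the-envelope sketch; no numbers beyond those cited are used):

```python
# Compare the PETM carbon release rate with the modern anthropogenic rate,
# using only the article's figures: 2,000-4,500 Gt C over ~4,000 years for
# the PETM, versus ~10 Gt C per year today.
PETM_CARBON_GT = (2000, 4500)   # estimated total PETM release, gigatonnes of carbon
PETM_DURATION_YR = 4000         # estimated duration of the release, years
MODERN_RATE_GT_PER_YR = 10      # present-day anthropogenic emissions

petm_rates = [total / PETM_DURATION_YR for total in PETM_CARBON_GT]
print(f"PETM release rate: {petm_rates[0]:.2f}-{petm_rates[1]:.2f} Gt C/yr")
print(f"Modern emissions run {MODERN_RATE_GT_PER_YR / petm_rates[1]:.0f}-"
      f"{MODERN_RATE_GT_PER_YR / petm_rates[0]:.0f}x faster")
```

Even at the high end of the PETM estimate, today's rate is roughly an order of magnitude faster, which is the study's headline comparison.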


Yang B.,University of Hawaii at Manoa | Jewitt D.,University of California at Los Angeles
Astronomical Journal | Year: 2010

Spectrally blue (B-type) asteroids are rare, with the second discovered asteroid, Pallas, being the largest and most famous example. We conducted a focused, infrared spectroscopic survey of B-type asteroids to search for water-related features in these objects. Our results show that the negative optical spectral slope of some B-type asteroids is due to the presence of a broad absorption band centered near 1.0 μm. The 1 μm band can be matched in position and shape using magnetite (Fe3O4), which is an important indicator of past aqueous alteration in the parent body. Furthermore, our observations of B-type asteroid (335) Roberta in the 3 μm region reveal an absorption feature centered at 2.9 μm, which is consistent with the absorption due to phyllosilicates (another hydration product) observed in CI chondrites. The new observations suggest that at least some B-type asteroids are likely to have incorporated significant amounts of water ice and to have experienced intensive aqueous alteration. © 2010. The American Astronomical Society. All rights reserved.


Chuang Y.-C.,National Taiwan University | Chang S.-C.,National Taiwan University | Wang W.-K.,University of Hawaii at Manoa
Critical Care Medicine | Year: 2012

OBJECTIVE: Bacteremia caused by Acinetobacter baumannii is becoming more frequent among critically ill patients, and has been associated with high mortality and prolonged hospital stay. Multidrug resistance and delay in blood culture have been shown to be significant barriers to appropriate antibiotic treatment. Quantitative polymerase chain reaction assays were recently used to monitor bacterial loads; we hypothesized that the rate of bacterial clearance determined by quantitative polymerase chain reaction can be used as a timely surrogate marker to evaluate the appropriateness of antibiotic usage. DESIGN: Prospective observational study. SETTING: University hospital and research laboratory. PATIENTS: Patients with culture-proven A. baumannii bacteremia in the intensive care units were prospectively enrolled from April 2008 to February 2009. INTERVENTIONS: Plasmid Oxa-51/pCRII-TOPO, which contained a 431-bp fragment of the A. baumannii-specific Oxa-51 gene in a pCRII-TOPO vector, was used as the standard. Sequential bacterial DNA loads in the blood were measured by a quantitative polymerase chain reaction assay. MEASUREMENTS AND MAIN RESULTS: We enrolled 51 patients with A. baumannii bacteremia, and examined 318 sequential whole blood samples. The initial mean bacterial load was 2.15 log copies/mL, and the rate of bacterial clearance was 0.088 log copies/mL/day. Multivariate linear regression using the generalized estimation equation approach revealed that the use of immunosuppressants was an independent predictor for slower bacterial clearance (coefficient, 1.116; p < .001), and appropriate antibiotic usage was an independent predictor for more rapid bacterial clearance (coefficient, -0.995; p < .001). Patients with a slower rate of bacterial clearance experienced higher in-hospital mortality (odds ratio, 2.323; p = .04). CONCLUSIONS: Immunosuppression and appropriate antibiotic usage were independent factors affecting the rate of clearance of A. baumannii bacteremia in critically ill patients. These findings highlight the importance of appropriate antibiotic usage and development of effective antibiotics against A. baumannii in an era of emerging antibiotic resistance. The rate of bacterial clearance could serve as a timely surrogate marker for evaluating the appropriateness of antibiotics. Copyright © 2012 by the Society of Critical Care Medicine.
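A clearance rate expressed in log copies/mL/day is simply the slope of log-transformed bacterial load against time. The sketch below illustrates that idea with invented numbers; it is not the study's generalized-estimating-equation model, and the data are hypothetical:

```python
# Estimate a bacterial clearance rate as the least-squares slope of
# log10 DNA load versus time (negative slope = the load is clearing).
# All values below are illustrative, not taken from the paper.
def clearance_rate(days, log_copies):
    """Least-squares slope of log10 copies/mL per day."""
    n = len(days)
    mean_x = sum(days) / n
    mean_y = sum(log_copies) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(days, log_copies))
    den = sum((x - mean_x) ** 2 for x in days)
    return num / den

days = [0, 2, 4, 7, 10]                # sampling days (hypothetical)
log_load = [2.2, 2.0, 1.9, 1.6, 1.4]   # log10 copies/mL (hypothetical)
rate = clearance_rate(days, log_load)
print(f"clearance rate: {rate:.3f} log copies/mL/day")
```

A patient whose slope is less negative than the cohort's would be flagged as clearing slowly, the marker the authors propose for re-evaluating antibiotic choice.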


Yang B.,University of Hawaii at Manoa | Jewitt D.,University of California at Los Angeles
Astronomical Journal | Year: 2011

We obtained near-infrared (NIR; 0.8-2.5μm) spectra of seven Jovian Trojan asteroids that have previously been reported to show silicate-like absorption features near 1μm. Our sample includes the Trojan (1172) Aneas, which is one of the three Trojans known to possess a comet-like 10μm emission feature, indicative of fine-grained silicates. Our observations show that all seven Trojans appear featureless in high signal-to-noise ratio spectra. The simultaneous absence of the 1μm band and the presence of the 10μm emission can be understood if the silicates on (1172) Aneas are iron-poor. In addition, we present NIR observations of five optically gray Trojans, including three objects from the collisionally produced Eurybates family. The five gray Trojans appear featureless in the NIR with no diagnostic absorption features. The NIR spectrum of Eurybates can be best fitted with the spectrum of a CM2 carbonaceous chondrite, which hints that the C-type Eurybates family members may have experienced aqueous alteration. © 2011. The American Astronomical Society. All rights reserved.


Lugaz N.,University of Hawaii at Manoa | Kintner P.,University of Rochester
Solar Physics | Year: 2013

The Fixed-Φ (FΦ) and Harmonic Mean (HM) fitting methods are two methods to determine the "average" direction and velocity of coronal mass ejections (CMEs) from time-elongation tracks produced by Heliospheric Imagers (HIs), such as the HIs onboard the STEREO spacecraft. Both methods assume a constant velocity in their descriptions of the time-elongation profiles of CMEs, which are used to fit the observed time-elongation data. Here, we analyze the effect of aerodynamic drag on CMEs propagating through interplanetary space, and how this drag affects the result of the FΦ and HM fitting methods. A simple drag model is used to analytically construct time-elongation profiles which are then fitted with the two methods. It is found that higher angles and velocities give rise to greater error in both methods, reaching errors in the direction of propagation of up to 15° and 30° for the FΦ and HM fitting methods, respectively. This is due to the physical accelerations of the CMEs being interpreted as geometrical accelerations by the fitting methods. Because of the geometrical definition of the HM fitting method, it is more affected by the acceleration than the FΦ fitting method. Overall, we find that both techniques overestimate the initial (and final) velocity and direction for fast CMEs propagating beyond 90° from the Sun-spacecraft line, meaning that arrival times at 1 AU would be predicted early (by up to 12 hours). We also find that the direction and arrival time of a wide and decelerating CME can be better reproduced by the FΦ method due to the cancellation of two errors: neglecting the CME width and neglecting the CME deceleration. Overall, the inaccuracies of the two fitting methods are expected to play an important role in the prediction of CME hit and arrival times as we head towards solar maximum and the STEREO spacecraft move further behind the Sun. © 2012 Springer Science+Business Media B.V.
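For context, the two conversions from an observed elongation angle ε to a heliocentric distance have simple closed forms in the heliospheric-imaging literature: FΦ treats the CME as a point propagating at a fixed angle φ from the Sun-observer line, while HM treats it as an expanding circle anchored at the Sun. The sketch below shows the standard expressions (this is not the paper's own fitting code, and the sample angles are arbitrary):

```python
import math

def fixed_phi_distance(eps, phi, d=1.0):
    """Fixed-Phi: point-source CME at angle phi; d is the observer's
    heliocentric distance (AU); eps, phi in radians."""
    return d * math.sin(eps) / math.sin(eps + phi)

def harmonic_mean_distance(eps, phi, d=1.0):
    """Harmonic Mean: CME modeled as a circle attached to the Sun."""
    return 2.0 * d * math.sin(eps) / (1.0 + math.sin(eps + phi))

# Same observation, two geometric assumptions, two different distances:
eps = math.radians(30.0)
phi = math.radians(30.0)
print(f"FPhi: {fixed_phi_distance(eps, phi):.3f} AU")
print(f"HM:   {harmonic_mean_distance(eps, phi):.3f} AU")
```

Because the two geometries map the same elongation track to different kinematics, a physical acceleration (such as drag) is absorbed differently by each, which is the error source the paper quantifies.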


Gonnermann H.M.,Rice University | Houghton B.F.,University of Hawaii at Manoa
Geochemistry, Geophysics, Geosystems | Year: 2012

We have modeled the nucleation and isothermal growth of bubbles in dacite from the 1912 Plinian eruption of Novarupta, Alaska. Bubble growth calculations account for the exsolution of H 2O and CO 2, beginning with bubble nucleation and ending when bubble sizes reproduced the observed size distribution of vesicles in Novarupta pumice clasts. Assuming classical nucleation theory, bubbles nucleated with a diameter of the order of 10 -8 m and grew to sizes ranging from 10 -6 m to greater than 10 -3 m, the typical range of vesicle sizes found in Novarupta pumice. The smallest vesicles in Novarupta pumices are also the most abundant and bubbles with radii of 10 -6 m to 10 -5 m comprise almost 90% of the entire bubble population. We find that these bubbles must have nucleated and grown to their final size within a few 100 milliseconds. Despite these extremely fast growth rates, the pressures of exsolved volatiles contained within the bubbles remained high, up to about 10 7 Pa in excess of ambient pressure. Assuming a closed-system, the potential energy of these compressed volatiles was sufficient to cause magma fragmentation, even though only a fraction of the pre-eruptive volatiles had exsolved. Unless the matrix glasses of Novarupta pyroclasts retains a large fraction of pre-eruptive volatiles, the majority of magmatic volatiles (80-90%) was likely lost by open-system degassing between magma fragmentation and quenching. © 2012. American Geophysical Union. All Rights Reserved.


Miller F.D.,University of Hawaii at Manoa | Abu-Raddad L.J.,Cornell College | Abu-Raddad L.J.,Cornell University | Abu-Raddad L.J.,Fred Hutchinson Cancer Research Center
Proceedings of the National Academy of Sciences of the United States of America | Year: 2010

Egypt has the highest prevalence of antibodies to hepatitis C virus (HCV) in the world, estimated nationally at 14.7%. An estimated 9.8% are chronically infected. Numerous HCV prevalence studies in Egypt have published various estimates from different Egyptian communities, suggesting that Egypt, relative to the other nations of the world, might be experiencing intense ongoing HCV transmission. More importantly, a new national study provided an opportunity to apply established epidemiologic models to estimate incidence. Validated mathematical models for estimating incidence from age-specific prevalence were used. All previous prevalence studies of HCV in Egypt were reviewed and used to estimate incidence, provided that they contained the age-specific data required by the models. All reported anti-HCV antibody prevalences were much higher than any other single national estimate. Age was the factor most strongly and most consistently associated with HCV prevalence and HCV RNA positivity. It was not possible to establish a prior reference point for HCV prevalence or incidence to compare with the 2009 incidence estimates. The modeled incidence was 6.9/1,000 [95% confidence interval (CI), 5.5-7.4] per person per year from the national study, and 6.6/1,000 (95% CI, 5.1-7.0) per person per year collectively from the previous community studies. Projected to the age structure of the Egyptian population, more than 500,000 new HCV infections per year were estimated. Iatrogenic transmission is the most likely underlying exposure driving the ongoing transmission. The study demonstrates the urgency of reducing HCV transmission in Egypt.
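The general approach of inferring incidence from age-specific prevalence can be sketched with the simplest catalytic model: under a constant force of infection λ, the probability of having been infected by age a is P(a) = 1 − exp(−λa), which can be inverted for λ. This is only an illustration of the modeling idea; the study's actual models and data are not reproduced, and the numbers below are hypothetical:

```python
import math

def force_of_infection(age, prevalence):
    """Invert the constant-hazard catalytic model P(a) = 1 - exp(-lam*a)
    to recover the incidence lam (per person per year)."""
    return -math.log(1.0 - prevalence) / age

# Hypothetical data point: 25% seroprevalence by age 40.
lam = force_of_infection(40, 0.25)
print(f"estimated incidence: {lam * 1000:.1f} per 1,000 per year")
```

In practice such models are fitted across all age groups at once, but the single-point inversion shows why age-stratified prevalence is the key input: older cohorts have had longer cumulative exposure, so the prevalence-by-age curve encodes the infection rate.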


Kaltenegger L.,MPIA | Haghighipour N.,University of Hawaii at Manoa | Haghighipour N.,University of Tübingen
Astrophysical Journal | Year: 2013

We have developed a comprehensive methodology for calculating the boundaries of the habitable zone (HZ) of planet-hosting S-type binary star systems. Our approach is general and takes into account the contribution of both stars to the location and extent of the binary HZ with different stellar spectral types. We have studied how the binary eccentricity and stellar energy distribution affect the extent of the HZ. Results indicate that in binaries where the combination of mass-ratio and orbital eccentricity allows planet formation around a star of the system to proceed successfully, the effect of a less luminous secondary on the location of the primary's HZ is generally negligible. However, when the secondary is more luminous, it can influence the extent of the HZ. We present the details of the derivations of our methodology and discuss its application to the binary HZ around the primary and secondary main-sequence stars of an FF, MM, and FM binary, as well as two known planet-hosting binaries, α Cen AB and HD 196885. © 2013. The American Astronomical Society. All rights reserved.


Waite M.,University of Hawaii at Manoa | Sack L.,University of California at Los Angeles
New Phytologist | Year: 2010

Mosses are an understudied group of plants that can potentially confirm or expand principles of plant function described for tracheophytes, from which they diverge strongly in structure. We quantified 35 physiological and morphological traits from cell-, leaf- and canopy-level, for 10 ground-, trunk- and branch-dwelling Hawaiian species. We hypothesized that trait values would reflect the distinctive growth form and slow growth of mosses, but also that trait correlations would be analogous to those of tracheophytes. The moss species had low leaf mass per area and low gas exchange rate. Unlike for tracheophytes, light-saturated photosynthetic rate per mass (Amass) did not correlate with habitat irradiance. Other photosynthetic parameters and structural traits were aligned with microhabitat irradiance, driving an inter-correlation of traits including leaf area, cell size, cell wall thickness, and canopy density. In addition, we found a coordination of traits linked with structural allocation, including costa size, canopy height and Amass. Across species, Amass and nitrogen concentration correlated negatively with canopy mass per area, analogous to linkages found for the 'leaf economic spectrum', with canopy mass per area replacing leaf mass per area. Despite divergence of mosses and tracheophytes in leaf size and function, analogous trait coordination has arisen during ecological differentiation. © 2009 New Phytologist.


Haghighipour N.,University of Hawaii at Manoa | Haghighipour N.,University of Tübingen | Kaltenegger L.,MPIA
Astrophysical Journal | Year: 2013

We have developed a comprehensive methodology for calculating the circumbinary habitable zone (HZ) in planet-hosting P-type binary star systems. We present a general formalism for determining the contribution of each star of the binary to the total flux received at the top of the atmosphere of an Earth-like planet and use the Sun's HZ to calculate the inner and outer boundaries of the HZ around a binary star system. We apply our calculations to Kepler's currently known circumbinary planetary systems and show the combined stellar flux that determines the boundaries of their HZs. We also show that the HZ in P-type systems is dynamic and, depending on the luminosity of the binary stars, their spectral types, and the binary eccentricity, its boundaries vary as the stars of the binary undergo their orbital motion. We present the details of our calculations and discuss the implications of the results. © 2013. The American Astronomical Society. All rights reserved.
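The core flux-summing idea in the two habitable-zone abstracts above can be sketched very simply: the planet sits in the HZ when the combined (spectrally weighted) flux from both stars falls between the outer and inner effective-flux limits. The weights and flux limits below are placeholder values for illustration, not the papers' derived coefficients:

```python
# Minimal sketch of a two-star habitable-zone test. Fluxes are in units
# of the solar constant; r1, r2 are the planet's distances (AU) to each
# star; lum1, lum2 are stellar luminosities in solar units. The spectral
# weights w1, w2 and the flux limits s_inner, s_outer are assumptions.
def combined_flux(r1, r2, lum1, lum2, w1=1.0, w2=1.0):
    """Weighted sum of inverse-square stellar fluxes at the planet."""
    return w1 * lum1 / r1**2 + w2 * lum2 / r2**2

def inside_hz(r1, r2, lum1, lum2, s_inner=1.1, s_outer=0.35):
    """True if the combined flux lies between the HZ flux limits."""
    s = combined_flux(r1, r2, lum1, lum2)
    return s_outer <= s <= s_inner

# A planet ~1.5 AU from the barycentre of two 0.5 L_sun stars:
print(inside_hz(1.4, 1.6, 0.5, 0.5))
```

Because the planet-to-star distances r1 and r2 change continuously as the binary orbits, the combined flux (and hence the HZ boundary) oscillates in time, which is the "dynamic HZ" behaviour the abstract describes.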


Jin F.-F.,University of Hawaii at Manoa | Jin F.-F.,Chinese Meteorological Agency | Boucharel J.,University of Hawaii at Manoa | Lin I.-I.,National Taiwan University
Nature | Year: 2014

The El Niño Southern Oscillation (ENSO) creates strong variations in sea surface temperature in the eastern equatorial Pacific, leading to major climatic and societal impacts1,2. In particular, ENSO influences the yearly variations of tropical cyclone (TC) activities in both the Pacific and Atlantic basins through atmospheric dynamical factors such as vertical wind shear and stability3-6. Until recently, however, the direct ocean thermal control of ENSO on TCs has not been taken into consideration because of an apparent mismatch in both timing and location: ENSO peaks in winter and its surface warming occurs mostly along the Equator, a region without TC activity. Here we show that El Niño, the warm phase of an ENSO cycle, effectively discharges heat into the eastern North Pacific basin two to three seasons after its wintertime peak, leading to intensified TCs. This basin is characterized by abundant TC activity and is the second most active TC region in the world5-7. As a result of the time involved in ocean transport, El Niño's equatorial subsurface 'heat reservoir', built up in boreal winter, appears in the eastern North Pacific several months later during peak TC season (boreal summer and autumn). By means of this delayed ocean transport mechanism, ENSO provides an additional heat supply favourable for the formation of strong hurricanes. This thermal control on intense TC variability has significant implications for seasonal predictions and long-term projections of TC activity over the eastern North Pacific. © 2014 Macmillan Publishers Limited. All rights reserved.


Cooke R.J.,University of California at Santa Cruz | Pettini M.,Institute of Astronomy | Jorgenson R.A.,University of Hawaii at Manoa
Astrophysical Journal | Year: 2015

In this paper we analyze the kinematics, chemistry, and physical properties of a sample of the most metal-poor damped Lyα systems (DLAs), to uncover their links to modern-day galaxies. We present evidence that the DLA population as a whole exhibits a "knee" in the relative abundances of the α-capture and Fe-peak elements when the metallicity is [Fe/H] ≃ -2.0, assuming that Zn traces the buildup of Fe-peak elements. In this respect, the chemical evolution of DLAs is clearly different from that experienced by Milky Way halo stars, but resembles that of dwarf spheroidal galaxies in the Local Group. We also find a close correspondence between the kinematics of Local Group dwarf galaxies and of high-redshift metal-poor DLAs, which further strengthens this connection. On the basis of such similarities, we propose that the most metal-poor DLAs provide us with a unique opportunity to directly study the dwarf galaxy population more than ten billion years in the past, at a time when many dwarf galaxies were forming the bulk of their stars. To this end, we have measured some of the key physical properties of the DLA gas, including their neutral gas mass, size, kinetic temperature, density, and turbulence. We find that metal-poor DLAs contain a warm neutral medium with Tgas ≃ 9600 K predominantly held up by thermal pressure. Furthermore, all of the DLAs in our sample exhibit a subsonic turbulent Mach number, implying that the gas distribution is largely smooth. These results are among the first empirical descriptions of the environments where the first few generations of stars may have formed in the universe. © 2015. The American Astronomical Society. All rights reserved.
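The "subsonic turbulent Mach number" statement above can be sanity-checked against the ideal-gas sound speed at the quoted gas temperature. The mean molecular mass used below is an assumption for predominantly neutral atomic gas, not a value taken from the paper:

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
M_H = 1.6735575e-27  # mass of hydrogen atom, kg

def sound_speed_kms(temp_k, mu=1.22, gamma=5.0 / 3.0):
    """Adiabatic sound speed in km/s for an ideal gas at temp_k;
    mu ~ 1.22 (neutral atomic gas with helium) is an assumption."""
    return math.sqrt(gamma * K_B * temp_k / (mu * M_H)) / 1000.0

c_s = sound_speed_kms(9600.0)
print(f"c_s ~ {c_s:.1f} km/s")
```

Turbulent velocity dispersions below this roughly 10 km/s sound speed give Mach numbers under unity, i.e. the subsonic, smoothly distributed gas the authors infer.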


Bellorado J.,Link A Media Devices | Kavcic A.,University of Hawaii at Manoa
IEEE Transactions on Information Theory | Year: 2010

In this paper, we present an algebraic methodology for implementing low-complexity, Chase-type decoding of Reed-Solomon (RS) codes of length n. First, a set of 2^η test-vectors that are equivalent on all except η ≪ n coordinate positions is produced. The similarity of the test-vectors is utilized to reduce the complexity of interpolation, the process of constructing a set of polynomials that obey constraints imposed by each test-vector. By first considering the equivalent indices, a polynomial common to all test-vectors is constructed. The required set of polynomials is then produced by interpolating the final η dissimilar indices utilizing a binary-tree structure. In the second decoding step (factorization), a candidate message is extracted from each interpolation polynomial so that one may be chosen as the decoded message. Although an expression for the direct evaluation of each candidate message is provided, carrying out this computation for each polynomial is extremely complex; thus, a novel, reduced-complexity methodology is also given. Although suboptimal, simulation results affirm that the loss in performance incurred by this procedure decreases with increasing code length n, and is negligible for long (n > 100) codes. Significant coding gains are shown to be achievable over traditional hard-in hard-out decoding procedures (e.g., Berlekamp-Massey) at an equivalent (and, in some cases, lower) computational complexity. Furthermore, these gains are shown to be similar to those of the recently proposed soft-in hard-out algebraic techniques (e.g., Sudan, Kötter-Vardy), which bear significantly more complex implementations than the proposed algorithm. © 2006 IEEE.
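The construction of the 2^η test-vectors can be sketched as follows. This is an illustrative simplification over bits rather than RS symbols in GF(q), and the function name and inputs are hypothetical, not from the paper; it only demonstrates the key property that all test-vectors agree on the n − η most reliable positions:

```python
from itertools import product

def chase_test_vectors(hard_decisions, reliabilities, eta):
    """Generate the 2**eta Chase test vectors for a hard-decision word.

    hard_decisions : list of hard-decision bits (0/1)
    reliabilities  : per-position reliability (larger = more reliable)
    eta            : number of least-reliable positions to vary

    All vectors agree on the n - eta reliable positions and differ only
    on the eta least-reliable ones; sharing those equivalent positions
    is what lets interpolation work be reused across test-vectors.
    """
    n = len(hard_decisions)
    # Indices of the eta least-reliable coordinates.
    weak = sorted(range(n), key=lambda i: reliabilities[i])[:eta]
    vectors = []
    for flips in product([0, 1], repeat=eta):
        v = list(hard_decisions)
        for idx, bit in zip(weak, flips):
            v[idx] = hard_decisions[idx] ^ bit  # flip where bit == 1
        vectors.append(v)
    return vectors

tvs = chase_test_vectors([1, 0, 1, 1, 0], [0.9, 0.1, 0.8, 0.2, 0.95], eta=2)
print(len(tvs))  # 2**eta = 4 test vectors
```

The first vector returned is the unmodified hard-decision word; the remaining 2^η − 1 flip subsets of the η weakest positions.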


Launer L.J.,U.S. National Institute on Aging | Hughes T.M.,U.S. National Institute on Aging | Hughes T.M.,University of Pittsburgh | White L.R.,University of Hawaii at Manoa
Annals of Neurology | Year: 2011

Objective: This study was undertaken to investigate the association of micro brain infarcts (MBIs) with antemortem global cognitive function (CF), and whether brain weight (BW) and Alzheimer lesions (neurofibrillary tangles [NFTs] or neuritic plaques [NPs]) mediate the association. Methods: Subjects were 436 well-characterized male decedents from the Honolulu-Asia Aging Autopsy Study. Brain pathology was ascertained with standardized methods, CF was measured by the Cognitive Abilities Screening Instrument, and data were analyzed using formal mediation analyses, adjusted for age at death, time between last CF measure and death, education, and head size. Based on antemortem diagnoses, demented and nondemented subjects were examined together and separately. Results: In those with no dementia, MBIs were strongly associated with the last antemortem CF score; this was significantly mediated by BW, and not NFTs or NPs. In contrast, among those with an antemortem diagnosis of dementia, NFTs had the strongest associations with BW and with CF, and MBIs were modestly associated with CF. Interpretation: This suggests that microinfarct pathology is a significant and independent factor contributing to brain atrophy and cognitive impairment, particularly before dementia is clinically evident. The role of vascular damage as initiator, stimulator, or additive contributor to neurodegeneration may differ depending on when in the trajectory toward dementia the lesions develop. ANN NEUROL 2011 Copyright © 2011 American Neurological Association.


Grant
Agency: GTR | Branch: NERC | Program: | Phase: Research Grant | Award Amount: 209.00K | Year: 2012

The breaking apart of a continent to form extended continental margins and ultimately ocean basins is a process that can last for tens of millions of years. The start of this process of rifting is thought to contribute significantly to the structure and sedimentary layering of the continental margins that have formed by its end. Often the details of how rifting initiates and develops in the first few million years are lost in the complexities of deformation and thick sediment layers beneath the continents' edges. To understand the early phases, we have to study areas where rifting has only recently started, and the Gulf of Corinth, Greece, is a key example, still in its first few million years of history. Across the Gulf, the two sides of the rift are moving apart at up to 20 mm every year, and this high rate of extension results in numerous earthquakes which historically have been very destructive. The rapid extension also results in a rapidly developing rift basin which is partially submerged beneath the sea and filling with sediments. Within the Gulf, a large volume of marine geophysical data has been collected, including detailed maps of the seabed, as well as seismic data that use sound sources to give cross-sections of material beneath the seabed. The seismic data allow us to directly image the accumulated sediment layers and to identify faults that offset the layers and create the basin. This project will integrate these data to make a very detailed interpretation of the sediment layers (and their likely age) and fault planes. Imaging and assigning ages to the layers, by comparing with models of climate and sea level change, allows us to determine how the basin has developed through time. The fault planes imaged by the data generate the extension and subsidence of the rift, and their history of activity controls how the basin develops.
The results will be used to generate the first high-resolution model of rift development over the initial few million years of a rift's history and will help to address some of the unanswered questions of how continents break apart. The model will be used by a range of scientists, including those trying to understand how tectonics, landscape morphology and climate all interact to cause sediments to move from one place to another: rift basins are one of the main sinks for sediments and we will calculate how the volume of sediment delivered to the Corinth basin has changed with time, as faults move and as climate changes. The majority of the world's petroleum resources are found in old rifts, but often details of how the rift developed and the detailed geometry of the rock units in which the oil is now found are masked by other geological processes and by shallower sediment layers. Understanding the early rift processes is important for determining where and what kind of sediments will be deposited in different parts of the basin with time. We will also analyse details of how individual faults grow and interact with other faults in the rift: this process affects where sediments enter a rift basin and is therefore also important for identifying petroleum reservoirs. The rift faults are responsible for the destructive earthquakes in central Greece, so this project's analysis of fault location and rate of slip will also help us to better understand the potential hazard, increasing the potential for reduction of associated risk. Ultimately, the project will be used to select sites for drilling and sampling the sediments of the rift zone, through the Integrated Ocean Drilling Program.
These samples would provide: the actual age of sediment layers, and hence well-resolved slip rates for each active fault and a test for the rift models generated here; and the types of sediments, which will tell us more about the regional climate of the last few million years and where sediments that typically form hydrocarbon reservoirs are located in this analogue for older rift systems such as the North Sea.


News Article | December 16, 2016
Site: cen.acs.org

The dwarf planet Ceres, which orbits in the asteroid belt between Mars and Jupiter, harbors large amounts of water ice just below its surface, scientists report (Science 2016, DOI: 10.1126/science.aah6765). The ice, which is likely filling pores in subsurface rock, has been there for billions of years, confirming predictions made by astronomers 30 years ago. Scientists reported these and other results gleaned from the National Aeronautics & Space Administration’s Dawn spacecraft, which is orbiting Ceres, at a Dec. 15 press conference at the American Geophysical Union meeting in San Francisco. Dawn’s gamma ray and neutron spectrometer detected hydrogen, a proxy for water, just one meter below the dwarf planet’s surface. The ice is most concentrated at Ceres’s poles, said Thomas H. Prettyman of the Planetary Science Institute at the meeting. Norbert Schörghofer, a Dawn mission scientist and astronomer at the University of Hawaii at Manoa, also reported at the conference that Dawn has found patches of water ice in permanently shadowed craters at Ceres’s poles—a phenomenon that also exists on Mercury and the moon (Nat. Astron. 2016, DOI: 10.1038/s41550-016-0007). Collectively, many lines of evidence taken from Dawn indicate that Ceres once had a liquid subsurface ocean, some of which likely still remains, said Carol Raymond, deputy principal investigator for the Dawn mission, at the conference. Scientists are particularly interested in solar system bodies that may contain liquid water, as they could be environments that would sustain life. Ceres, Raymond said, is likely similar to Europa, Jupiter’s moon, or Enceladus, Saturn’s moon, in terms of its potential for habitability. Other research at the AGU meeting focused on a solar system body closer to home: Mars. Scientists from Los Alamos National Laboratory announced the first-ever discovery of boron on the surface of the red planet. NASA’s Curiosity rover identified the element in mineral veins of calcium sulfate. 
If this mineral is comparable to that found on Earth, the scientists said it would indicate that the planet’s surface temperatures were once 0–60 °C, and the soil had a neutral to alkaline pH—in other words, a very habitable environment.


Mann A.W.,University of Hawaii at Manoa | Gaidos E.,University of Hawaii at Manoa | Lepine S.,American Museum of Natural History | Hilton E.J.,University of Hawaii at Manoa
Astrophysical Journal | Year: 2012

We estimate the stellar parameters of late K- and early M-type Kepler target stars. We obtain medium-resolution visible spectra of 382 stars with Kp − J > 2 (≃K5 and later spectral type). We determine luminosity class by comparing the strength of gravity-sensitive indices (CaH, K I, Ca II, and Na I) to their strength in a sample of stars of known luminosity class. We find that giants constitute 96% ± 1% of the bright (Kp < 14) Kepler target stars, and 7% ± 3% of dim (Kp > 14) stars, significantly higher than fractions based on the stellar parameters quoted in the Kepler Input Catalog (KIC). The KIC effective temperatures are systematically (110 +15/−35 K) higher than temperatures we determine from fitting our spectra to PHOENIX stellar models. Through Monte Carlo simulations of the Kepler exoplanet candidate population, we find a planet occurrence of 0.36 ± 0.08 when giant stars are properly removed, somewhat higher than when a KIC log g > 4 criterion is used (0.27 ± 0.05). Last, we show that there is no significant difference in g − r color (a probe of metallicity) between late-type Kepler stars with transiting Earth-to-Neptune-size exoplanet candidates and dwarf stars with no detected transits. We show that a previously claimed offset between these two populations is most likely an artifact of including a large number of misidentified giants. © 2012 The American Astronomical Society. All rights reserved.
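The occurrence estimate above corrects the detected-candidate count for stars that cannot host detectable planets. That correction step can be caricatured with a toy Monte Carlo; the star count, injected rate, and detection probability below are illustrative assumptions, not the paper's pipeline:

```python
import random

def occurrence_mc(n_stars, true_rate, p_detect, n_trials=2000, seed=1):
    """Toy Monte Carlo: inject planets at a given per-star rate, count
    'detections' given a detection probability, then recover the rate
    by dividing out that probability. A cartoon of the correction step.
    """
    random.seed(seed)
    estimates = []
    for _ in range(n_trials):
        detected = sum(
            1 for _ in range(n_stars)
            if random.random() < true_rate and random.random() < p_detect
        )
        estimates.append(detected / (n_stars * p_detect))
    return sum(estimates) / n_trials

# With 382 stars, a true rate of 0.36, and a 10% detection probability,
# the estimator recovers ~0.36 on average.
print(round(occurrence_mc(382, 0.36, 0.1), 2))
```

The point of the caricature is that misclassified giants inflate n_stars without contributing detections, biasing the recovered rate low, which is why removing them raises the occurrence from 0.27 to 0.36.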


Lepine S.,American Museum of Natural History | Gaidos E.,University of Hawaii at Manoa
Astronomical Journal | Year: 2011

We present an all-sky catalog of M dwarf stars with apparent infrared magnitude J < 10. The 8889 stars are selected from the ongoing SUPERBLINK survey of stars with proper motion μ > 40 mas yr−1, supplemented on the bright end with the Tycho-2 catalog. Completeness tests which account for kinematic (proper motion) bias suggest that our catalog represents ≈75% of the estimated 11,900 M dwarfs with J < 10 expected to populate the entire sky. Our catalog is, however, significantly more complete for the northern sky (≈90%) than it is for the south (≈60%). Stars are identified as cool, red M dwarfs from a combination of optical and infrared color cuts, and are distinguished from background M giants and highly reddened stars using either existing parallax measurements or, if such measurements are lacking, their location in an optical-to-infrared reduced proper motion diagram. These bright M dwarfs are all prime targets for exoplanet surveys using the Doppler radial velocity or transit methods; the combination of low mass and bright apparent magnitude should make possible the detection of Earth-size planets on short-period orbits using currently available techniques. Parallax measurements, when available, and photometric distance estimates are provided for all stars, and these place most systems within 60 pc of the Sun. Spectral type estimated from V − J color shows that most of the stars range from K7 to M4, with only a few late M dwarfs, all within 20 pc. Proximity to the Sun also makes these stars good targets for high-resolution exoplanet imaging searches, especially if younger objects can be identified on the basis of X-ray or UV excess. For that purpose, we include X-ray flux from ROSAT and FUV/NUV ultraviolet magnitudes from GALEX for all stars for which a counterpart can be identified in those catalogs. Additional photometric data include optical magnitudes from Digitized Sky Survey plates and infrared magnitudes from the Two Micron All Sky Survey.
© 2011 The American Astronomical Society. All rights reserved.
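The reduced proper motion diagram used above to separate dwarfs from giants combines apparent magnitude and proper motion into a distance-free proxy for absolute magnitude, H = m + 5 log10(μ) + 5 with μ in arcsec/yr. A minimal sketch with illustrative numbers (the example star is hypothetical):

```python
import math

def reduced_proper_motion(mag, mu_mas_yr):
    """Reduced proper motion H = m + 5*log10(mu) + 5, mu in arcsec/yr.

    Written here for mu in mas/yr, which is equivalent to
    m + 5*log10(mu_mas) - 10. Because nearby dwarfs tend to have large
    proper motions, H separates them from distant M giants of similar
    color even when no parallax is available.
    """
    return mag + 5.0 * math.log10(mu_mas_yr / 1000.0) + 5.0

# A V = 12 star moving at 400 mas/yr has H_V ≈ 15.0, dwarf-like;
# the same magnitude at 5 mas/yr gives H_V ≈ 5.5, giant-like.
print(round(reduced_proper_motion(12.0, 400.0), 1))
print(round(reduced_proper_motion(12.0, 5.0), 1))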


Pakvasa S.,University of Hawaii at Manoa | Joshipura A.,Physical Research Laboratory | Mohanty S.,Physical Research Laboratory
Physical Review Letters | Year: 2013

There has been some concern about the unexpected paucity of cosmic high-energy muon neutrinos in detectors probing the energy region beyond 1 PeV. As a possible solution we consider the possibility that some exotic neutrino property is responsible for reducing the muon neutrino flux at high energies from distant sources; specifically, we consider (i) neutrino decay and (ii) neutrinos being pseudo-Dirac-particles. This would provide a mechanism for the reduction of high-energy muon events in the IceCube detector, for example. © 2013 American Physical Society.


Shaya E.J.,University of Maryland University College | Tully R.B.,University of Hawaii at Manoa
Monthly Notices of the Royal Astronomical Society | Year: 2013

The confinement of most satellite galaxies in the Local Group to thin planes presents a challenge to the theory of hierarchical galaxy clustering. The Pan-Andromeda Archaeological Survey (PAndAS) collaboration has identified a particularly thin configuration with kinematic coherence among companions of M31, and there have been long-standing claims that the dwarf companions to the Milky Way lie in a plane roughly orthogonal to the disc of our galaxy. This discussion investigates the possible origins of four Local Group planes: the plane similar, but not identical, to that identified by the PAndAS collaboration, an adjacent slightly tilted plane, and two planes in the vicinity of the Milky Way: one with very nearby galaxies and the other with more distant ones. Plausible orbits are found by using a combination of Numerical Action methods and a backward-in-time integration procedure. This investigation assumes that the companion galaxies formed at an early time in accordance with the standard cosmological model. For M31, M33, IC10 and Leo I, solutions are found that are consistent with measurements of their proper motions. For galaxies in planes, there must be commonalities in their proper motions, and this constraint greatly limits the number of physically plausible solutions. Key to the formation of the planar structures has been the evacuation of the Local Void and consequent build-up of the Local Sheet, a wall of this void. Most of the M31 companion galaxies were born in early-forming filamentary or sheet-like substrata that chased M31 out of the void. M31 is a moving target because of its attraction towards the Milky Way, and the result has been alignments stretched towards our galaxy. In the case of the configuration around the Milky Way, it appears that our galaxy was in a three-way competition for companions with M31 and Centaurus A. Only those within a modest band fell our way.
The Milky Way's attraction towards the Virgo Cluster resulted in alignment along the Milky Way-Virgo Cluster line. © 2013 The Authors. Published by Oxford University Press on behalf of the Royal Astronomical Society.


Bartoli G.,ETH Zurich | Honisch B.,Lamont Doherty Earth Observatory | Zeebe R.E.,University of Hawaii at Manoa
Paleoceanography | Year: 2011

Several hypotheses have been put forward to explain the onset of intensive glaciations on Greenland, Scandinavia, and North America during the Pliocene epoch between 3.6 and 2.7 million years ago (Ma). A decrease in atmospheric CO2 may have played a role during the onset of glaciations, but other tectonic and oceanic events occurring at the same time may have played a part as well. Here we present detailed atmospheric CO2 estimates from boron isotopes in planktic foraminifer shells spanning 4.6-2.0 Ma. Maximal Pliocene atmospheric CO2 estimates gradually declined from values around 410 μatm to early Pleistocene values of 300 μatm at 2.0 Ma. After the onset of large-scale ice sheets in the Northern Hemisphere, maximal pCO2 estimates at 2.5 Ma were still 90 μatm higher than values characteristic of the early Pleistocene interglacials. By contrast, Pliocene minimal atmospheric CO2 gradually decreased from 310 to 245 μatm at 3.2 Ma, coinciding with the start of transient glaciations on Greenland. Values characteristic of early Pleistocene glacial atmospheric CO2 of 200 μatm were abruptly reached after 2.7 Ma during the late Pliocene transition. This trend is consistent with the suggestion that ocean stratification and iron fertilization increased after 2.7 Ma in the North Pacific and Southern Ocean, and may have led to increased glacial CO2 storage in the oceanic abyss from 2.7 Ma onward. Copyright 2011 by the American Geophysical Union.


Mann R.K.,National Research Council Canada | Williams J.P.,University of Hawaii at Manoa
Astrophysical Journal | Year: 2010

We present the full results of our three-year-long Submillimeter Array (SMA) survey of protoplanetary disks in the Orion Nebula Cluster. We imaged 23 fields at 880 μm and 2 fields at 1330 μm, covering an area of ∼6.5 arcmin² and containing 67 disks. We detected 42 disks with fluxes between 6 and 135 mJy and at rms noise levels between 0.6 and 5.3 mJy beam−1. Thermal dust emission above any free-free component was measured in 40 of the 42 detections, and the inferred disk masses range from 0.003 to 0.07 M⊙. We find that disks located within 0.3 pc of θ1 Ori C have a truncated mass distribution, while disks located beyond 0.3 pc have masses more comparable to those found in low-mass star-forming regions. The disk mass distribution in Orion has a distance dependence, with a derived relationship max(Mdisk) = 0.046 M⊙ (d/0.3 pc)^0.33 for the maximum disk masses. We found evidence of grain growth in disk 197-427, the only disk detected at both 880 μm and 1330 μm with the SMA. Despite the rapid erosion of the outer parts of the Orion disks by photoevaporation, the potential for planet formation remains high in this massive star-forming region, with ≈18% of the surveyed disks having masses ≥0.01 M⊙ within 60 AU. © 2010 The American Astronomical Society.
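The maximum-mass relation quoted in the abstract can be evaluated directly. This is a minimal sketch of that scaling as stated, with illustrative distances and no claim about its validity outside the surveyed range:

```python
def max_disk_mass(d_pc):
    """Maximum disk mass (solar masses) vs. projected distance d (pc)
    from theta^1 Ori C, using the scaling quoted in the abstract:
        max(M_disk) = 0.046 Msun * (d / 0.3 pc)**0.33
    """
    return 0.046 * (d_pc / 0.3) ** 0.33

for d in (0.1, 0.3, 1.0):
    print(f"d = {d:.1f} pc -> max M_disk ~ {max_disk_mass(d):.3f} Msun")
```

By construction the relation passes through 0.046 M⊙ at 0.3 pc, with the maximum mass rising slowly outward, which matches the truncated distribution found close to θ1 Ori C.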


Guisan A.,University of Lausanne | Petitpierre B.,University of Lausanne | Broennimann O.,University of Lausanne | Daehler C.,University of Hawaii at Manoa | Kueffer C.,ETH Zurich
Trends in Ecology and Evolution | Year: 2014

Assessing whether the climatic niche of a species may change between different geographic areas or time periods has become increasingly important in the context of ongoing global change. However, approaches and findings have remained largely controversial so far, calling for a unification of methods. Here, we build on a review of empirical studies of invasion to formalize a unifying framework that decomposes niche change into unfilling, stability, and expansion situations, taking both a pooled range and range-specific perspective on the niche, while accounting for climatic availability and climatic analogy. This framework provides new insights into the nature of climate niche shifts and our ability to anticipate invasions, and may help in guiding the design of experiments for assessing causes of niche changes. © 2014 Elsevier Ltd.


Deng X.,University of Hawaii at Manoa | Chi L.,Blue Slate Solutions
Journal of Management Information Systems | Year: 2012

For an organization to gain maximum benefits from a new information system (IS), individual users in the organization must use it effectively and extensively. To do so, users need to overcome many problems associated with their system use in order to integrate the new IS into their work routines. Much remains to be learned about the types of problems that users encounter in using the new system, in particular, the root causes of system use problems and how they relate to and co-evolve with the problems over time. In this study, we seek to develop a comprehensive and dynamic view of system use problems in organizations. Using a combined method of revealed causal mapping and in-depth network analysis, we analyze nine-month archival data on user-reported problems with a new business intelligence application in a large organization. Our data analysis revealed seven emergent constructs of system use problems and causes, including reporting, data, workflow, role authorization, users' lack of knowledge, system error, and user-system interaction. The seven constructs were found to interact differentially across two usage phases (initial versus continued) and between two types of users (regular versus power user). This study contributes to advancing our theoretical understanding of postadoptive IS use by focusing on its problematic aspect. This study also suggests useful methods for organizations to effectively monitor users' system use problems over time and thus guides organizations to effectively target mechanisms to promote the use of new technologies. © 2013 M.E. Sharpe, Inc. All rights reserved.


News Article | July 8, 2016
Site: www.techtimes.com

NASA's Dawn space probe has identified several shadowed regions and craters on dwarf planet Ceres considered as "cold traps" or areas where ice can likely accumulate over time. Most of these permanently shadowed areas are cold enough to trap water ice for billions of years, scientists say. This raises the possibility that ice deposits currently exist in these cold traps. Guest investigator Norbert Schorghofer of the University of Hawaii at Manoa says the conditions on the dwarf planet are just enough for the accumulation of water ice deposits. Ceres has the right mass to hold on to water molecules, Schorghofer says. Plus, the dwarf planet's extremely cold shadowed regions are even more frigid than regions on Mercury or on the moon. Cold traps have long been predicted for Ceres, but have not been detected until now. Schorghofer and his colleagues studied the northern hemisphere of Ceres. Images from the Dawn probe's cameras were combined to determine the shape of the dwarf planet, showing plains, craters and other features. A sophisticated computer model at NASA's Goddard Space Flight Center helped pinpoint which areas receive direct sunlight, how conditions on the dwarf planet change over the course of a year and how much solar radiation reaches its surface. It's important to note that direct sunlight does not reach Ceres' permanently shadowed regions, which are usually located along a section of the crater or on the crater floor. Still, these regions do receive indirect sunlight. However, if the temperature plummets below negative 240 degrees Fahrenheit (negative 151 degrees Celsius), the shadowed areas then become cold traps, scientists say. Schorghofer and his team discovered plenty of large permanently shadowed regions all across Ceres' northern hemisphere. The largest region is inside a crater that is 16 kilometers (10 miles) wide situated less than 65 kilometers (40 miles) from Ceres' north pole. 
When added together, the permanently shadowed regions on Ceres occupy about 1,800 square kilometers (695 square miles) of land. This is only a small fraction of its landscape -- about 1 percent of the northern hemisphere's surface area. Another guest investigator from Goddard, Erwan Mazarico, says because Ceres is far from the sun and the shadowed areas receive little radiation, the dwarf planet's shadowed regions are colder than those on Mercury and the moon. "On Ceres, these regions act as cold traps down to relatively low latitudes," says Mazarico. On the other hand, on Mercury and the Earth's moon, only the permanently shadowed regions close to the poles become cold enough for ice to stabilize on the surface. Furthermore, the situation on Ceres is more similar to the situation on Mercury, scientists say. The shadowed regions on Mercury span roughly the same fraction of the northern hemisphere. Mercury's efficiency at trapping water ice is also comparable to that of Ceres. Chris Russell, the Dawn mission's principal investigator, adds that Ceres may have been formed with a greater reservoir of water compared to the moon and Mercury. Some studies suggest that Ceres may be a volatile-rich world that does not rely on current-day external sources. Details of the study are featured in the journal Geophysical Research Letters. © 2016 Tech Times, All rights reserved. Do not reproduce without permission.


News Article | March 28, 2016
Site: news.yahoo.com

A bronze bell from a sunken World War II-era Japanese submarine was recently recovered off the coast of Oahu, in Hawaii. The bell was retrieved from the underwater remains of the I-400, an Imperial Japanese Navy mega submarine that was captured and intentionally sunk by U.S. forces in 1946. The massive vessel was one of the Japanese Navy's Sen Toku-class submarines. At the time, they were the largest submarines ever built. These mega submarines measured more than 400 feet (122 meters) long — longer than a football field — and were designed to function as underwater aircraft carriers, according to the University of Hawaii at Manoa. The subs could carry up to three float-plane bombers and were capable of rising quickly to the surface, launching aircraft and diving back underwater without being detected by enemies. [7 Technologies That Transformed Warfare] The I-400's bronze bell was recovered earlier this month during a test dive by researchers at the Hawaii Undersea Research Laboratory (HURL), part of the University of Hawaii at Manoa. The researchers used two manned submersibles, the Pisces IV and the Pisces V, to retrieve the historic bell from the sub's watery resting place. "It was an exciting day for the submersible operations crew of Pisces IV and Pisces V," Terry Kerby, HURL operations director and chief submarine pilot, said in a statement. "Just prior to our test dive, Dr. Georgia Fox [an archaeologist at the California State University-Chico] had received the underwater archaeological research permit from the Naval History and Heritage Command. We had only one chance to relocate and recover the bell." HURL researchers have been using manned submersibles to hunt for sunken submarines and other historical artifacts since 1992. The I-400 was first discovered in August 2013, sitting more than 2,300 feet (700 m) below sea level off the southwest coast of Oahu. 
The Japanese Navy intended to build an entire fleet of Sen Toku-class submarines, but only three vessels were ultimately completed by the end of World War II. At the end of the war, the U.S. Navy transferred five captured Japanese subs, including the massive I-400, to Pearl Harbor. The submarines were eventually scuttled off the coast of Oahu in 1946, after the former Soviet Union demanded access to the warships under the terms of the treaty that ended the war, according to HURL researchers. The U.S. Navy decided to intentionally sink the submarines, rather than have the advanced technology fall into Soviet hands in the buildup to the Cold War. Four of the five sunken submarines in this region have since been found, HURL researchers said. "These historic properties in the Hawaiian Islands recall the events and innovations of World War II, a period which greatly affected both Japan and the United States and reshaped the Pacific region," Hans Van Tilburg, maritime heritage coordinator for the National Oceanic and Atmospheric Administration (NOAA) in the Pacific Islands region, said in a statement. "Wreck sites like the I-400 are reminders of a different time, and markers of our progress from animosity to reconciliation." The I-400's bronze bell will undergo conservation treatments over the next year, and will subsequently be displayed at the USS Bowfin Submarine Museum & Park in Honolulu. Copyright 2016 LiveScience, a Purch company. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.
