News Article | April 20, 2017
Site: www.sciencemag.org

There will be more than a little angst on display in Washington, D.C., over the next week. Science marchers will rally Saturday to express their concerns about perceived attacks on evidence and research, and climate marchers worried about U.S. policy are set to jam the streets of the nation’s capital 7 days later. But there’s also some optimism on tap over the next 3 days: The first Earth Optimism Summit kicks off today at the Ronald Reagan Building and International Trade Center, just blocks from where the marchers will be gathering. It will feature some 240 talks on what is working in conservation, energy efficiency, innovation, and other fields.

“There is a lot of attention being focused on the science march, but it isn't all anger out there,” says coral biologist Nancy Knowlton of the Smithsonian National Museum of Natural History in Washington, D.C., a leader of the event, which was planned long before the science march materialized earlier this year. “We organized this because it was clear that bad news gets most of the oxygen and we wanted to share successes to inspire others.” Presenters and attendees at the summit “will talk about what’s working, why it’s working, and how to scale it up,” she says. “It’s not Pollyanna, where people forget about the problems, but it is focusing on what is working.”

In panel discussions and 12-minute, TED-like talks, the participants will cover more than two dozen success stories, including the conservation of orchids, Mongolian horses, and maned wolves, as well as efforts to develop stronger ocean policies, productive kelp farms, and more constructive human behavior. A few titles hint at some unusual perspectives as well: “Poachers as protectors,” and “Penguins and Pipelines”—that one presented by an energy company. Various organizations expect to announce new funding programs and fellowships. And meeting participants will be encouraged to compete in an X Prize–like “Make for the Planet” competition, to develop new hardware or software that might help solve conservation problems. The four finalists will make their pitches Sunday afternoon and one will receive a cash prize.

The idea for the summit dates back more than a decade, to when Knowlton was a professor at the Scripps Institution of Oceanography in San Diego, California, teaching marine biodiversity. “What we were doing was [holding] medical school for the planet,” she says, but most of the lessons were about dead or dying species and ecosystems. But “when you teach medical students, you don’t teach them how to write obituaries,” Knowlton says. So in a bid to find some hope, she started a Twitter account, @OceanOptimism, that garnered millions of responses, so many that eventually enthusiasts created a website with the same name. Usually, “we are such complainers,” Knowlton says. “I hope to really change the conversation and make people realize we have the ability [to make conservation work].”


News Article | April 27, 2017
Site: scienceblogs.com

Here are the highlights from the final day of the meeting:

Carbon monoxide (CO) is not all that bad: Michael Tift, a graduate student at the Scripps Institution of Oceanography, described how the body naturally produces CO when red blood cells are broken down, and how CO can actually be protective against inflammation at low doses. His research focused on measuring whether species that have more hemoglobin (from living in hypoxic environments) also have more CO. As it turns out, people native to high-altitude Peru do have higher CO levels than those living at lower elevations. Likewise, elephant seals and beluga whales have higher hemoglobin content and produce more CO than marine mammals with lower hemoglobin levels.

Hypoxia can increase lifespan: Dr. Kendra Greenlee, from North Dakota State University, presented research from her lab showing that bees (Megachile rotundata) raised in hypoxic conditions live twice as long as bees raised in normal conditions. Her research is geared towards figuring out how hypoxia can increase longevity.

Lastly, this year’s Nobel speaker was Dr. Louis Ignarro, from the Department of Pharmacology at the UCLA School of Medicine. He was one of three awardees of the 1998 Nobel Prize in Physiology or Medicine for their discoveries “concerning nitric oxide as a signalling molecule in the cardiovascular system.” Dr. Ferid Murad was credited with figuring out how nitric oxide and nitroglycerin cause blood vessels to dilate. Dr. Ignarro and Dr. Robert Furchgott were both credited with their independent discoveries that nitric oxide is also released from the inner lining of blood vessels to cause the vessels to dilate. Together, their independent discoveries have led to the development of several medications that are used to treat cardiovascular disease as well as impotence.


News Article | December 1, 2016
Site: www.npr.org

In California, Squid Is Big Business. But Good Luck Eating Local Calamari

Calamari is a favorite on American dinner tables. But while the U.S. has a thriving squid industry, chances are the calamari you are eating made a 12,000-mile round trip before ending up on your dinner plate. That, or it wasn't caught in the U.S. at all. More than 80 percent of U.S. squid landings are exported — most of it to China. The small percentage of the catch that stays in the country goes to Asian fresh fish markets or is used as bait. Ironically, the lion's share of the squid consumed in the United States is imported.

"Squid is a labor-intensive product," says Emily Tripp, founder of Marine Science Today, a website on the latest ocean-based research. "It's cheaper in some situations to ship it to China to be processed and ship it back." Tripp, who recently graduated with a master's degree from the Scripps Institution of Oceanography, did her thesis project on California market squid, which, during non-El Niño years, is California's most valuable fishery.

In California, squid is an economic driver of the seafood industry – it's the fifth-largest fishery in the United States by weight. Yet most of this squid is frozen and exported overseas to China to be processed and distributed to over 42 countries across the globe. It's an export market that, according to 2011 figures, is valued at $107 million. Only 1.4 percent of it, on average, makes it back to the U.S. In 2015, that figure was 0.46 percent.

"It has to do with the American desire for a larger squid," explains Diane Pleschner-Steele, executive director of the California Wetfish Producers Association. "A lot of squid that is shipped overseas stays overseas because they prefer it. They eat it over there. Our consumers typically prefer a larger squid, and so there's just a ton of squid imported into this country that comes in at a far lower price." In the U.S., the squid that ends up on our dinner table is typically Patagonian squid from the Falkland Islands or Humboldt squid — a jumbo cephalopod fished predominantly in Mexico and Peru. California market squid isn't usually desired because of its smaller size. "Our squid is a learning curve," Pleschner-Steele says. "If you overcook it, it can taste like a rubber band. But in my opinion, if you do it right, it tastes more like abalone than any other squid. It's nutty, sweet and delicate."

The cost of labor is another, perhaps more significant, factor. Squid cleaning and processing is an extremely time-consuming practice. The eyes, cartilage, skin and guts need to be removed ahead of time, and it's cheaper to have this done overseas than domestically. A round-trip freight cost to China is $0.10 per pound and labor is just $7 a day there. By contrast, California wages — with tax and health insurance — amount to $12 an hour, according to Pleschner-Steele.

Also, supply chains and markets are incredibly opaque. Pleschner-Steele suspects that as China's middle class has grown, a lot of the squid processing facilities have moved to Thailand. Tripp says during her research, it was nearly impossible to track down where exactly the squid was being processed abroad. "The biggest challenge was trying to find out where the squid goes when it leaves the United States," she says. "No one wants to say where they partner. It's a bit of a challenge. In the United States we keep such good records of all of our fish and seafood. There's no comparable system in China. I couldn't follow the chain backwards."
Regardless, the narrative is the same: Californians aren't eating Californian squid. And if they are, it likely wasn't processed in California. At Mitch's Seafood, a restaurant in San Diego committed to local fish, the owners spent three years looking for a California-based squid processor for their calamari. They eventually found a company in San Pedro called Tri-Marine. "We have to pay twice as much for it, but it's worth it so that we can say we offer California-caught and processed squid," owner Mitch Conniff says. "Squid that's caught two to three miles away takes a 10,000-mile round-trip journey before I can get it back into my restaurant."

All Californian fish processors are capable of dealing with squid, Pleschner-Steele says. However, it's not a money-making operation because people aren't willing to pay for it. "It has to be on request," she says. "We simply can't compete with the cost of other imported squid."

Supporting the local squid industry is about much more than just helping the local economy – it's helpful from a sustainability angle as well. Even with squid being sent on a round-trip journey across the world, the California market squid fishery has one of the lowest carbon footprints in the industry. "California squid fishing fleets are one of the most energy efficient in the world because [they're] so close to port," Pleschner-Steele says. "Our boats can produce a ton of protein for about six gallons of diesel fuel. ... Efficiency is key."

Further efficiency, she says, could be achieved if consumers were willing to fork over $1.50 a pound more for California-caught and processed squid. But the "truth is that Americans aren't willing to pay for it," she says. "If people were willing to pay the price, we can definitely feed the demand."

Clarissa Wei is a freelance journalist based in Los Angeles and Taipei. She writes about sustainability and food.


Our public health data is being ingested into Silicon Valley's gaping, proprietary maw

In a lead editorial in the current Nature, John Wilbanks (formerly head of Science Commons, now "Chief Commons Officer" for Sage Bionetworks) and Eric Topol (professor of genomics at the Scripps Research Institute) decry the mass privatization of health data by tech startups, who're using a combination of side-deals with health authorities/insurers and technological lockups to amass huge databases of vital health information that is not copyrighted or copyrightable, but is nevertheless walled off from open research, investigation and replication. The key to their critique isn't just this enclosure of something that rightfully belongs to all of us: it's that this data and its analysis will be used to make decisions that profoundly affect the lives of billions of people; without public access to this, it could be used to magnify existing inequities and injustice (see also Weapons of Math Destruction).

Even when corporations do give customers access to their own aggregate data, built-in blocks on sharing make it hard for users to donate them to science. 23andMe, holder of the largest repository of human genomic data in the world, allows users to view and download their own single-letter DNA variants and share their data with certain listed institutions. But for such data to truly empower patients, customers must be able to easily send the information to their health provider, genetic counsellor or any analyst they want.

Pharmaceutical firms have long sequestered limited types of hard-to-obtain data, for instance on how specific chemicals affect certain blood measurements in clinical trials. But they generally lack longitudinal health data about individuals outside the studies that they run, and often cannot connect a participant in one trial to the same participant in another. Many of the new entrants to health, unbound by fragmented electronic health-record platforms, are poised to amass war chests of data and enter them into systems that are already optimized (primarily for advertising) to make predictions about individuals.

The companies jostling to get into health face some major obstacles, not least the difficulties of gaining regulatory approval for returning actionable information to patients. Yet the market value of Internet-enabled devices that collect and analyse health and fitness data, connect medical devices and streamline patient care and medical research is estimated to exceed US$163 billion by 2020, as a January report from eMarketer notes (see 'The digital health rush' and go.nature.com/29fbvch). Such a tsunami of growth does not lend itself to ethically minded decision-making focused on maximizing the long-term benefits to citizens.

It is already clear that proprietary algorithms can replicate and exacerbate societal biases and structural problems. Despite the best efforts of Google's coders, the job postings that its advertising algorithm serves to female users are less well-paying than are those displayed to male users. A ProPublica investigation in May demonstrated that algorithms being used by US law-enforcement agencies are likely to wrongly predict that black defendants will commit a crime (see go.nature.com/29aznyw). And thanks to 'demographically blind' algorithms, in several US cities, black people are about half as likely as white people to live in neighbourhoods that have access to Amazon's one-day delivery service (see go.nature.com/29kskg3).
Stop the privatization of health data [John T. Wilbanks and Eric J. Topol/Nature]


News Article | October 26, 2016
Site: www.eurekalert.org

WASHINGTON, DC -- Low-frequency vibrations of the Ross Ice Shelf are likely causing ripples and undulations in the air above Antarctica, a new study finds. Using mathematical models of the ice shelf, the study's authors show how vibrations in the ice match those seen in the atmosphere, and are likely causing these mysterious atmospheric waves.

Scientists at McMurdo Station detected unusual atmospheric waves at altitudes of 30 to 115 kilometers (20 to 70 miles) above Antarctica in 2011. The waves, which have a long period and take hours to cycle, were observed for several years. Scientists routinely observe atmospheric waves around the world, but the persistence of these waves made them unusual, and scientists didn't know what was causing them.

The new research solves this mystery by connecting the atmospheric waves to vibrations of the Ross Ice Shelf - the largest ice shelf in the world, with an area of almost half a million square kilometers (188,000 square miles), roughly the size of France. Imperceptible vibrations of the ice shelf, caused by ocean waves and other forces, are transferred and amplified in the atmosphere, according to the new study. If the study's predictions are correct, scientists could use these atmospheric waves to measure properties of the ice shelf that are normally difficult to track, such as the amount of stress the ice shelf is under from ocean waves.

"If atmospheric waves are generated by ice vibrations, by rhythmic vibrations of ice--then that carries a lot of information of the ice shelf itself," said Oleg Godin, a professor at the Naval Postgraduate School in Monterey, California, and lead author of the new study, published in the Journal of Geophysical Research: Space Physics, a journal of the American Geophysical Union.

This information could help scientists better understand the status and stability of ice shelves, which are permanently floating sheets of ice connected to landmasses. Scientists closely monitor the size and movement of ice shelves because when they break up, they indirectly contribute to sea level rise through their impact on land ice. "Ice shelves buffer or restrain land ice from reaching the ocean," said Peter Bromirski, a research oceanographer at the Scripps Institution of Oceanography in La Jolla, California, who was not involved in the new study. "The long term evolution of an ice shelf--whether or not it breaks up and disintegrates--is an important factor in how fast sea level will rise."

In the new study, Godin and his co-author, Nikolay Zabotin, used two theoretical models of the Ross Ice Shelf to show vibrations within the ice could create the atmospheric waves. One model approximates the ice shelf as a smooth rectangular slab of ice, while the other approximates the ice as a layered fluid. The authors incorporated known properties of the ice shelf, such as elasticity, density, and thickness, into each model to calculate the time it would take vibrations in the ice to complete one cycle. They found both models predict that the ice shelf produces vibrations with periods of 3 to 10 hours, which matches the periods of the waves seen in the atmosphere. These ice shelf vibrations would likely also produce atmospheric waves with a vertical wavelength of 20 to 30 kilometers (12 to 18 miles), another feature of the observed waves. "Even in this simplified description [of the ice], it readily explains the most prominent features of the observations," Godin said. "That's why it goes beyond hypothesis. I would say it's now a theory."
The vibrations are transferred from the ice shelf to the atmosphere through direct contact with the air above the ice shelf, according to the new study. While the vibrations of the ice shelf are small, the atmospheric disturbances they create can be large because of reduced air pressure high in the atmosphere. For example, an ice shelf vibration one centimeter in amplitude pushes on the air directly above it. As the vibration cascades upward, it can grow in amplitude to move air hundreds of meters up and down when the wave reaches the less-dense air high in the atmosphere, Godin said.

The size of these atmospheric waves makes them easy to observe with radar and lidar, a radar-like system that uses laser light to scan the atmosphere. Godin and Zabotin plan to use an advanced research radar to study the atmospheric waves in more detail to better understand the behavior of the Ross Ice Shelf. "There are suggestions in the literature that accelerated breakup of ice shelves will lead to rise of sea level by several meters by the end of the century," Godin said. "Anything we can do to quantify what is going on with these large ice shelves is of huge importance."

The research was primarily funded by the Basic Research Challenge program of the Office of Naval Research and utilized the research computing facility at the University of Colorado Boulder, which is supported by the National Science Foundation.


News Article | November 15, 2016
Site: www.theenergycollective.com

Biofuels are usually regarded as inherently carbon-neutral, but once all emissions associated with growing feedstock crops and manufacturing biofuel are factored in, they actually increase CO2 emissions rather than reducing them, writes John DeCicco of the University of Michigan. According to DeCicco, biofuels are actually more harmful to the climate than gasoline. Courtesy of The Conversation.

Ever since the 1973 oil embargo, U.S. energy policy has sought to replace petroleum-based transportation fuels with alternatives. One prominent option is using biofuels, such as ethanol in place of gasoline and biodiesel instead of ordinary diesel. Transportation generates one-fourth of U.S. greenhouse gas emissions, so addressing this sector’s impact is crucial for climate protection.

Many scientists view biofuels as inherently carbon-neutral: they assume the carbon dioxide (CO2) plants absorb from the air as they grow completely offsets, or “neutralizes,” the CO2 emitted when fuels made from plants burn. Many years of computer modeling based on this assumption, including work supported by the U.S. Department of Energy, concluded that using biofuels to replace gasoline significantly reduced CO2 emissions from transportation.

Our new study takes a fresh look at this question. We examined crop data to evaluate whether enough CO2 was absorbed on farmland to balance out the CO2 emitted when biofuels are burned. It turns out that once all the emissions associated with growing feedstock crops and manufacturing biofuel are factored in, biofuels actually increase CO2 emissions rather than reducing them.

Federal and state policies have subsidized corn ethanol since the 1970s, but biofuels gained support as a tool for promoting energy independence and reducing oil imports after the September 11, 2001 attacks. In 2005 Congress enacted the Renewable Fuel Standard (RFS), which required fuel refiners to blend 7.5 billion gallons of ethanol into gasoline by 2012. (For comparison, in that year Americans used 133 billion gallons of gasoline.) In 2007 Congress dramatically expanded the RFS program with support from some major environmental groups. The new standard more than tripled annual U.S. renewable fuel consumption, which rose from 4.1 billion gallons in 2005 to 15.4 billion gallons in 2015.

(Chart: Biomass energy consumption in the United States grew more than 60 percent from 2002 through 2013, almost entirely due to increased production of biofuels. Source: Energy Information Administration)

Our study examined data from 2005-2013, during this sharp increase in renewable fuel use. Rather than assuming that producing and using biofuels was carbon-neutral, we explicitly compared the amount of CO2 absorbed on cropland to the quantity emitted during biofuel production and consumption. Existing crop growth already takes large amounts of CO2 out of the atmosphere. The empirical question is whether biofuel production increases the rate of CO2 uptake enough to fully offset CO2 emissions produced when corn is fermented into ethanol and when biofuels are burned.

Most of the crops that went into biofuels during this period were already being cultivated; the main change was that farmers sold more of their harvest to biofuel makers and less for food and animal feed. Some farmers expanded corn and soybean production or switched to these commodities from less profitable crops. But as long as growing conditions remain constant, corn plants take CO2 out of the atmosphere at the same rate regardless of how the corn is used.
Therefore, to properly evaluate biofuels, one must evaluate CO2 uptake on all cropland. After all, crop growth is the CO2 “sponge” that takes carbon out of the atmosphere. When we performed such an evaluation, we found that from 2005 through 2013, cumulative carbon uptake on U.S. farmland increased by 49 teragrams (a teragram is one million metric tons). Planted areas of most other field crops declined during this period, so this increased CO2 uptake can be largely attributed to crops grown for biofuels. Over the same period, however, CO2 emissions from fermenting and burning biofuels increased by 132 teragrams. Therefore, the greater carbon uptake associated with crop growth offset only 37 percent of biofuel-related CO2 emissions from 2005 through 2013 (a short arithmetic check of this figure appears after the article). In other words, biofuels are far from inherently carbon-neutral.

This result contradicts most established work on biofuels. To understand why, it is helpful to think of the atmosphere as a bathtub that is filled with CO2 instead of water. Many activities on Earth add CO2 to the atmosphere, like water flowing from a faucet into the tub. The largest source is respiration: Carbon is the fuel of life, and all living things “burn carbs” to power their metabolisms. Burning ethanol, gasoline or any other carbon-based fuel opens up the CO2 “faucet” further and adds carbon to the atmosphere faster than natural metabolic processes. Other activities remove CO2 from the atmosphere, like water flowing out of a tub. Before the industrial era, plant growth absorbed more than enough CO2 to offset the CO2 that plants and animals respired into the atmosphere. Today, however, largely through fossil fuel use, we are adding CO2 to the atmosphere far more rapidly than nature removes it. As a result, the CO2 “water level” is rapidly rising in the climate bathtub.

(Chart: Atmospheric carbon dioxide concentrations recorded by the Mauna Loa Observatory in Hawaii. The line is jagged because CO2 levels rise and fall slightly each year in response to plant growth cycles. Source: Scripps Institution of Oceanography)

When biofuels are burned, they emit roughly the same amount of CO2 per unit of energy as petroleum fuels. Therefore, using biofuels instead of fossil fuels does not change how quickly CO2 flows into the climate bathtub. To reduce the buildup of atmospheric CO2 levels, biofuel production must open up the CO2 drain – that is, it must speed up the net rate at which carbon is removed from the atmosphere.

Growing more corn and soybeans has opened the CO2 uptake “drain” a bit more, mostly by displacing other crops. That’s especially true for corn, whose high yields remove carbon from the atmosphere at a rate of two tons per acre, faster than most other crops. Nevertheless, expanding production of corn and soybeans for biofuels increased CO2 uptake only enough to offset 37 percent of the CO2 directly tied to biofuel use. Moreover, it was far from enough to offset other GHG emissions during biofuel production from sources including fertilizer use, farm operations and fuel refining. Additionally, when farmers convert grasslands, wetlands and other habitats that store large quantities of carbon into cropland, very large CO2 releases occur.

Our new study has sparked controversy because it contradicts many prior analyses. These studies used an approach called lifecycle analysis, or LCA, in which analysts add up all of the GHG emissions associated with producing and using a product.
The result is popularly called the product’s “carbon footprint.” The LCA studies used to justify and administer renewable fuel policies evaluate only emissions – that is, the CO2 flowing into the air – and fail to assess whether biofuel production increases the rate at which croplands remove CO2 from the atmosphere. Instead, LCA simply assumes that because energy crops such as corn and soybeans can be regrown from one year to the next, they automatically remove as much carbon from the atmosphere as they release during biofuel combustion. This significant assumption is hard-coded into LCA computer models.

Unfortunately, LCA is the basis for the RFS as well as California’s Low-Carbon Fuel Standard, a key element of that state’s ambitious climate action plan. It is also used by other agencies, research institutions and businesses with an interest in transportation fuels.

I once accepted the view that biofuels were inherently carbon-neutral. Twenty years ago I was lead author of the first paper proposing use of LCA for fuel policy. Many such studies were done, and a widely cited meta-analysis published in Science in 2006 found that using corn ethanol significantly reduced GHG emissions compared to petroleum gasoline. However, other scholars raised concerns about how planting vast areas with energy crops could alter land use. In early 2008 Science published two notable articles. One described how biofuel crops directly displaced carbon-rich habitats, such as grasslands. The other showed that growing crops for biofuel triggered damaging indirect effects, such as deforestation, as farmers competed for productive land.

LCA adherents made their models more complex to account for these consequences of fuel production. But the resulting uncertainties grew so large that it became impossible to determine whether or not biofuels were helping the climate. In 2011 a National Research Council report on the RFS concluded that crop-based biofuels such as corn ethanol “have not been conclusively shown to reduce GHG emissions and might actually increase them.”

These uncertainties spurred me to start deconstructing LCA. In 2013, I published a paper in Climatic Change showing that the conditions under which biofuel production could offset CO2 were much more limited than commonly assumed. In a subsequent review paper I detailed the mistakes made when using LCA to evaluate biofuels. These studies paved the way for our new finding that in the United States, to date, renewable fuels actually are more harmful to the climate than gasoline.

It is still urgent to mitigate CO2 from oil, which is the largest source of anthropogenic CO2 emissions in the United States and the second-largest globally after coal. But our analysis affirms that, as a cure for climate change, biofuels are “worse than the disease.” Science points the way to climate protection mechanisms that are more effective and less costly than biofuels. There are two broad strategies for mitigating CO2 emissions from transportation fuels. First, we can reduce emissions by improving vehicle efficiency, limiting miles traveled or substituting truly carbon-free fuels such as electricity or hydrogen. Second, we can remove CO2 from the atmosphere more rapidly than ecosystems are absorbing it now. Strategies for “recarbonizing the biosphere” include reforestation and afforestation, rebuilding soil carbon and restoring other carbon-rich ecosystems such as wetlands and grasslands.
(Photo: Protecting ecosystems that store carbon can increase CO2 removal from the atmosphere. Source: U.S. Geological Survey)

These approaches will help to protect biodiversity – another global sustainability challenge – instead of threatening it as biofuel production does. Our analysis also offers another insight: Once carbon has been removed from the air, it rarely makes sense to expend energy and emissions to process it into biofuels only to burn the carbon and re-release it into the atmosphere.

John DeCicco (@JMDeCicco) is Research Professor at the University of Michigan. This article was first published on The Conversation and is republished here with permission.
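DeCicco's 37 percent figure follows directly from the two cumulative totals reported in the article: 49 teragrams of additional CO2 uptake on U.S. cropland versus 132 teragrams of additional CO2 from fermenting and burning biofuels over 2005-2013. A minimal back-of-the-envelope check in Python, using only those two numbers and nothing else:

```python
# Back-of-the-envelope check of the offset figure quoted in the article.
# Both inputs are the cumulative 2005-2013 totals reported by DeCicco (teragrams of CO2).
extra_uptake_tg = 49.0      # additional CO2 uptake on U.S. cropland
extra_emissions_tg = 132.0  # additional CO2 from fermenting and burning biofuels

offset_fraction = extra_uptake_tg / extra_emissions_tg
net_addition_tg = extra_emissions_tg - extra_uptake_tg

print(f"offset fraction: {offset_fraction:.0%}")     # ~37%
print(f"CO2 not offset:  {net_addition_tg:.0f} Tg")  # ~83 Tg over 2005-2013
```

The difference, roughly 83 teragrams, is the portion of biofuel-related CO2 emissions that the increased crop growth did not offset.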


News Article | April 11, 2016
Site: www.technologyreview.com

Clues to novel treatments could be gleaned from people who aren’t sick, but should be.

The hunt is on for people who are healthy—even though their genes say they shouldn’t be. A massive search through genetic databases has found evidence for more than a dozen “genetic superheroes,” people whose genomes contain serious DNA errors that cause devastating childhood illnesses but who say they aren’t sick. The new study is part of a trend toward studying the DNA of unusually healthy people to determine if there’s something about them that can be discovered and bottled up as a treatment for everyone else. There’s already evidence from large families afflicted by genetic disease that some members are affected differently—or not at all. The current study took a different approach, scouring DNA data collected on 589,306 mostly unrelated individuals, and is “the largest genome study to date,” according to Mount Sinai’s Icahn School of Medicine in New York.

“There hasn’t been nearly enough attention paid to looking at healthy people’s genomes,” says Eric Topol, a cardiologist and gene scientist at the Scripps Research Institute. “This confirms that there are many people out there that should be manifesting disease but aren’t. It’s a lesson from nature.”

The researchers, led by Stephen Friend, president of Sage Bionetworks, a nonprofit based in Seattle, and genome scientist Eric Schadt of Mount Sinai, reported today in Nature Biotechnology how they looked for people with mutations in any of 874 genes that should doom them to a childhood of pain or misery, but whose medical records or self-reports didn’t indicate any problem. In the end, they found 13 people who qualify as genetic “superheroes” but, under medical privacy agreements, the researchers were unable to contact them. That meant the scientists weren’t able to learn what’s actually different about them. “It’s like you got the box and couldn’t take the wrapping off,” Friend said during a media teleconference last week.

The team consulted DNA data from nearly 400,000 people provided by 23andMe, the direct-to-consumer testing company. The team also used more detailed genome information contributed by BGI, a large genome center in China, and the Ontario Institute for Cancer Research. “The best approach to discovering large numbers of resilient individuals will involve data sharing on a global scale, involving many sequencing projects,” says Daniel MacArthur, who developed a pooled DNA database at the Broad Institute in Cambridge, Massachusetts, which he says also holds evidence of resilient individuals. Some companies, including the biotechnology company Regeneron (see “The Search for Exceptional Genomes”), have already started large searches for people whose genes seem to protect them against disease. Regeneron's focus is on common illnesses like heart disease and diabetes.

Mayana Zatz, a geneticist in Sao Paulo, Brazil, who studies large families affected by inherited disease, says she’s found instances where people seem to dodge genetic destiny. For example, she located two Brazilian half-brothers with the same mutation that causes muscular dystrophy, but while one was in a wheelchair at age nine, the other is 16 and has no symptoms. Zatz says the reason could be some other gene that “rescues” the patient, or perhaps environmental factors. She says women are more often found to be resilient than men, though the reason isn’t clear.
Friend says his “extraordinarily large pilot” study is meant to determine if the same sort of discoveries made by looking at affected families could be made by dredging large DNA databases. “The purpose was to see if the technology is ready, and worth the effort, and we think the answer is yes,” he says.


News Article | September 28, 2016
Site: www.theguardian.com

In the centuries to come, history books will likely look back on September 2016 as a major milestone for the world’s climate. At a time when atmospheric carbon dioxide is usually at its minimum, the monthly value failed to drop below 400 parts per million (ppm). That all but ensures that 2016 will be the year that carbon dioxide officially passed the symbolic 400 ppm mark, never to return below it in our lifetimes, according to scientists. Because carbon pollution has been increasing since the start of the industrial revolution and has shown no signs of abating, it was more a question of “when” rather than “if” we would cross this threshold. The inevitability doesn’t make it any less significant, though.

September is usually the month when carbon dioxide is at its lowest after a summer of plants growing and sucking it up in the northern hemisphere. As fall wears on, those plants lose their leaves, which in turn decompose, releasing the stored carbon dioxide back into the atmosphere. At Mauna Loa Observatory, the world’s marquee site for monitoring carbon dioxide, there are signs that the process has begun, but levels have remained above 400 ppm. Since the industrial revolution, humans have been altering this process by adding more carbon dioxide to the atmosphere than plants can take up. That’s driven carbon dioxide levels higher and with it, global temperatures, along with a host of other climate change impacts.

“Is it possible that October 2016 will yield a lower monthly value than September and dip below 400 ppm? Almost impossible,” Ralph Keeling, the scientist who runs the Scripps Institution of Oceanography’s carbon dioxide monitoring program, wrote in a blog post. “Brief excursions toward lower values are still possible, but it already seems safe to conclude that we won’t be seeing a monthly value below 400 ppm this year – or ever again for the indefinite future.” (A rough projection illustrating why the September minimum is unlikely to dip back below 400 ppm appears after this article.)

We may get a day or two reprieve in the next month, similar to August when Tropical Storm Madeline blew by Hawaii and knocked carbon dioxide below 400 ppm for a day. But otherwise, we’re living in a 400 ppm world. Even if the world stopped emitting carbon dioxide tomorrow, what has already been put into the atmosphere will linger for many decades to come. “At best (in that scenario), one might expect a balance in the near term and so CO2 levels probably wouldn’t change much – but would start to fall off in a decade or so,” Gavin Schmidt, Nasa’s chief climate scientist, said in an email. “In my opinion, we won’t ever see a month below 400 ppm.”

The carbon dioxide we’ve already committed to the atmosphere has warmed the world about 1.8F since the start of the industrial revolution. This year, in addition to marking the start of our new 400 ppm world, is also set to be the hottest year on record. The planet has edged right up against the 1.5C (2.7F) warming threshold, a key metric in last year’s Paris climate agreement. Even though there are some hopeful signs that world leaders will take actions to reduce emissions, those actions will have to happen on an accelerating timetable in order to avoid 2C of warming. That’s the level outlined by policymakers as a safe threshold for climate change. And even if the world limits warming to that benchmark, it will still likely spell doom for low-lying small island states and have serious repercussions around the world, from more extreme heat waves to droughts, coastal flooding and the extinction of many coral reefs.
It’s against this backdrop that the measurements on top of Mauna Loa take on added importance. They’re a reminder that with each passing day, we’re moving further from the climate humans have known and thrived in and closer to a more unstable future.
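Keeling's point can be roughly quantified with two approximate values that are not from the article: the September 2016 monthly mean at Mauna Loa was about 401 ppm, and the underlying growth rate has recently been roughly 2 to 3 ppm per year. Since each year's seasonal minimum starts from a higher baseline, the short sketch below (a rough projection, not a climate model) shows why a monthly value below 400 ppm is not expected again:

```python
# Rough projection of future September (seasonal-minimum) monthly means at Mauna Loa.
# Both inputs are approximate, widely reported values, not figures from the article.
sept_2016_min_ppm = 401.0   # approximate September 2016 monthly mean CO2
growth_ppm_per_year = 2.5   # approximate recent annual growth rate

for year in range(2016, 2022):
    projected_min = sept_2016_min_ppm + growth_ppm_per_year * (year - 2016)
    print(f"{year}: projected September minimum ~{projected_min:.0f} ppm")
```

Because the underlying trend adds a few ppm each year, each successive seasonal minimum lands further above the 400 ppm line.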


News Article | February 4, 2016
Site: www.rdmag.com

Found idling near deep-sea hydrothermal vents and whale carcasses, the creatures look more like elongated and deflated hot water balloons than worms. For decades, scientists sought to understand the genus Xenoturbella. A single species found off the coast of Sweden in 1950 started the scientific journey. With one body opening — a mouth — and no brain, gills, eyes, kidneys, or anus, the creatures appear primordial, harkening back to some of the earliest forms of life.

Now, scientists from the Scripps Institution of Oceanography, the Western Australian Museum, and the Monterey Bay Aquarium Research Institute have a better grasp on the genus. Publishing in Nature, they described four new species of Xenoturbella, bringing the total number of species to five and allowing the scientists to properly place them on the animal tree of life.

When the genus was first discovered in 1950, scientists initially classified it as a flatworm. But in the 1990s, the classification was changed to a simplified mollusk. Recently, it’s been postulated that they may be closely related to vertebrates or echinoderms. “By placing Xenoturbella properly in the tree of life, we can better understand early animal evolution,” said study lead author Greg Rouse.

In the study, the researchers described Xenoturbella churro (named for its similarity to the fried-dough pastry), Xenoturbella monstrosa, Xenoturbella profunda, and Xenoturbella hollandorum. Collecting the specimens for the study took 12 years. After analyzing nearly 1,200 genes, the researchers determined the animals were “evolutionary simple members near the base of the evolutionary tree of bilaterally symmetrical animals,” according to the Scripps Institution of Oceanography. “Phylogenomic analyses of transcriptomic sequences support placement of Xenacoelomorpha as sister to Nephrozoa or Protostomia,” the researchers wrote in Nature.


News Article | February 16, 2017
Site: phys.org

One of the main hazards of sailing in freezing temperatures is topside icing, in which water blown from the ocean freezes once it contacts a ship, potentially accumulating enough ice to put the vessel at risk of capsizing. No tools have existed for ships to accurately monitor topside icing, but now Cornell engineers have demonstrated a novel method to do so using a combination of applied mathematics and computational mechanics. The results are published in the February edition of the journal Applied Ocean Research.

"If you know something about the excitations occurring in a seaway that load a ship, and we can measure some response of the ship to those excitations, we may then be able to infer the current condition of the vessel," said Christopher Earls, professor of civil and environmental engineering and co-lead author of the paper. Engineers refer to this as inverse problem solving – using data from an effect to infer something about the cause. In topside icing, an effect is that the motion of the ship is changed due to the weight of the ice. "So we solve an inverse problem by using the inertial motion unit of the ship and a computer vision sensor that looks at the near wave field around the ship, and then use a model that turns that into an excitation," Earls explained. "So we have an excitation and a response to infer how much ice must be on the ship."

To demonstrate the inversion framework in the real world, Earls and his team applied it to the R/V Melville – a 279-foot Navy research vessel operated by the Scripps Institution of Oceanography prior to its retirement in 2015. The goal was to accurately determine the "roll gyradius" of the ship and its smaller 1:23-scale model, essentially predicting each ship's weight distribution about its axis of rolling. And while certain mass properties of a ship may be estimated based on design assumptions, those estimated properties are uncertain once a ship is seafaring due to factors such as varying fuel and hydraulic fluid levels, and how heavy equipment is stowed. Because of this, the inversion framework could be put to the test without the use of ice.

Prior to the full-scale demonstration, the research team began exercising the inversion framework using the small model of the R/V Melville, and the data began to roll in. "That was exciting, but not a guarantee of meaningful results yet. Then, incrementally, there were results that showed promise and also showed new things to consider," said Yolanda Lin, a doctoral student in structural mechanics and the study's co-lead author. After some revisions, the team was able to accurately predict the roll gyradius of the ship within the standard deviation of error.

After validating the inversion framework at the model scale, the team ran the framework on the full-scale ship, using the R/V Melville's onboard inertial motion unit. "Using telemetry, we were collecting data as the ship made some pretty severe maneuvers. It made the crew a little sick and the captain mused that onlookers must wonder if he was drinking and driving," said Earls of the testing conducted near the coast of San Diego. The full-scale results were more difficult to validate since the massive ship couldn't be lifted from the ocean and placed on a pendulum to measure the vessel's precise roll gyradius, "but it was within the expectations that we would have in our minds," Earls reported.
Earls and Lin are now taking the successful proof of concept a step further by conducting new experiments using ships with more sophisticated equipment. "It's essentially a plug-and-play framework so you can put any seakeeping modeling tools into it," said Earls, who added that the new tests include applying topside icing to a scale-model ship. "The ultimate goal is to develop a full framework that can help detect when the surface of a ship has accumulated so much ice that the ship is in danger," Lin said. The data produced could also help captains determine the capabilities of a ship, such as what maneuvers it can safely make at any given time.

Earls and his research group are also using inverse problem-solving to attack other maritime challenges, such as using the magnetic signature of the ocean to infer the internal wave structure deep under the surface, or to infer the condition of a ship's propeller as it interacts with the wake field it generates.

More information: Yolanda C. Lin et al. Stochastic inversion for the roll gyradius second moment mass property in ships at full-scale and model-scale, Applied Ocean Research (2017). DOI: 10.1016/j.apor.2016.12.010
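The inverse-problem idea described above can be illustrated with a deliberately simplified sketch: simulate the roll of a single-degree-of-freedom "ship" whose gyradius is known, add sensor noise, and then recover that gyradius by fitting the model to the "measured" response. This is not the authors' stochastic inversion framework, and every parameter below is invented for illustration; it only shows the basic structure of running a forward model inside an optimization loop over the unknown mass property.

```python
# Toy illustration of inverting a roll response for the roll gyradius.
# Not the Cornell framework; all parameters are hypothetical.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize_scalar

mass = 2.0e6           # ship mass, kg (hypothetical)
damping = 5.0e6        # linear roll damping, N*m*s/rad (hypothetical)
stiffness = 4.0e7      # hydrostatic restoring moment per radian, N*m/rad (hypothetical)
true_gyradius = 6.0    # roll gyradius to be recovered, m

t = np.linspace(0.0, 200.0, 2001)

def wave_moment(ti):
    """Known roll excitation from the seaway (hypothetical 12-second swell), N*m."""
    return 1.0e6 * np.sin(2.0 * np.pi * ti / 12.0)

def roll_response(gyradius):
    """Forward model: integrate I*theta'' + c*theta' + k*theta = M(t)."""
    inertia = mass * gyradius**2
    def rhs(ti, y):
        theta, omega = y
        return [omega, (wave_moment(ti) - damping * omega - stiffness * theta) / inertia]
    sol = solve_ivp(rhs, (t[0], t[-1]), [0.0, 0.0], t_eval=t, max_step=0.1)
    return sol.y[0]

# Synthetic "measurement": the true ship's roll plus sensor noise.
rng = np.random.default_rng(0)
measured = roll_response(true_gyradius) + rng.normal(0.0, 1e-4, t.size)

# Inversion: choose the gyradius whose predicted roll best matches the measurement.
def misfit(gyradius):
    return float(np.sum((roll_response(gyradius) - measured) ** 2))

estimate = minimize_scalar(misfit, bounds=(4.0, 8.0), method="bounded").x
print(f"true roll gyradius: {true_gyradius:.2f} m, estimated: {estimate:.2f} m")
```

The published framework replaces this toy oscillator with full seakeeping models and treats the estimate statistically, but the overall pattern is the same: a forward model of the vessel, a measured response, and an optimization over the uncertain mass property.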
