News Article | May 29, 2017
Overheated cities face climate change costs at least twice as high as those faced by the rest of the world because of the 'urban heat island' effect, new research shows. The study, by an international team of economists, covers all the world's major cities and is the first to quantify the potentially devastating combined impact of global and local climate change on urban economies.

The analysis of 1,692 cities, published today (Monday 29 May 2017) in the journal Nature Climate Change, shows that the total economic costs of climate change for cities this century could be 2.6 times higher when heat island effects are taken into account than when they are not. For the worst-off city, losses could reach 10.9 per cent of GDP by the end of the century, compared with a global average of 5.6 per cent.

The urban heat island occurs when natural surfaces, such as vegetation and water, are replaced by heat-trapping concrete and asphalt, and is exacerbated by heat from cars, air conditioners and so on. This effect is expected to add a further two degrees to global warming estimates for the most populated cities by 2050. Higher temperatures damage the economy in a number of ways: more energy is used for cooling, air is more polluted, water quality decreases and workers are less productive, to name a few.

The authors - from the University of Sussex in the UK, Universidad Nacional Autónoma de México and Vrije University Amsterdam - say their new research is significant because so much emphasis is placed on tackling global climate change, while they show that local interventions are as, if not more, important.

Professor Richard S.J. Tol MAE, Professor of Economics at the University of Sussex, said: "Any hard-won victories over climate change on a global scale could be wiped out by the effects of uncontrolled urban heat islands. We show that city-level adaptation strategies to limit local warming have important economic net benefits for almost all cities around the world."
Although cities cover only around one per cent of the Earth's surface, they produce about 80 per cent of Gross World Product, consume about 78 per cent of the world's energy and are home to over half of the world's population. Measures that could limit the high economic and health costs of rising urban temperatures are therefore a major priority for policy makers.

The research team carried out a cost-benefit analysis of different local policies for combating the urban heat island, such as cool pavements - designed to reflect more sunlight and absorb less heat - cool and green roofs, and expanding vegetation in cities. The cheapest measure, according to this modelling, is a moderate-scale installation of cool pavements and roofs. Changing 20 per cent of a city's roofs and half of its pavements to 'cool' forms could save up to 12 times what they cost to install and maintain, and reduce air temperatures by about 0.8 degrees. Doing this on a larger scale would produce even bigger benefits, but the vastly increased costs mean that the cost-benefit ratio is smaller.

The research has important implications for future climate policy decisions - the positive impacts of such local interventions are amplified when global efforts are also having an effect, the study shows. Professor Tol said: "It is clear that we have until now underestimated the dramatic impact that local policies could make in reducing urban warming. However, this doesn't have to be an either/or scenario. In fact, the largest benefits for reducing the impacts of climate change are attained when both global and local measures are implemented together. And even when global efforts fail, we show that local policies can still have a positive impact, making them at least a useful insurance for bad climate outcomes on the international stage."
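The kind of cost-benefit comparison described above boils down to a discounted benefit-cost ratio. The sketch below illustrates the calculation; every number in it is a hypothetical placeholder, not a value from the study, and the 3% discount rate is an arbitrary conventional choice.

```python
# Illustrative benefit-cost ratio for a cool-roof/pavement programme.
# All cash-flow numbers are hypothetical; the study reports ratios of
# up to ~12 for converting 20% of roofs and half of pavements.

def npv(cash_flows, rate):
    """Net present value of a list of annual cash flows (year 0 first)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

years = 30
discount_rate = 0.03
install_cost = 100.0       # one-off cost in year 0 (arbitrary units)
annual_maintenance = 2.0   # recurring cost in later years
annual_savings = 45.0      # avoided cooling-energy and productivity losses

costs = [install_cost] + [annual_maintenance] * (years - 1)
benefits = [0.0] + [annual_savings] * (years - 1)

bcr = npv(benefits, discount_rate) / npv(costs, discount_rate)
print(f"benefit-cost ratio: {bcr:.1f}")
```

The "larger scale, smaller ratio" point in the article corresponds to the installation cost growing faster than the extra savings, which drags this ratio down even as absolute net benefits rise.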
News Article | May 27, 2017
A new study finds 52 genes that are related to intelligence - a rousing success in a field that has often struggled to find correlations between smarts and genes. The 52 genes, though, account for only about 5 percent of the variation in intelligence scores among different people. That's because intelligence is a complex trait, said study author Danielle Posthuma, a statistical geneticist at Vrije University in Amsterdam. These genes "are basically a tip of the iceberg," Posthuma told Live Science. "But there are still a lot more genes that are important for intelligence."

Precisely because the genetic underpinnings of intelligence are so complex, previous studies on the topic turned out to be underpowered - most did not include enough people to detect the correlations between any given gene and people's scores on intelligence tests. Those earlier studies were too small because, prior to them, researchers "didn't know what the genetic architecture of intelligence would be," Posthuma said. She added, "If it had been one or two genes, we would have been able to detect them" with the sample sizes that those studies included. Instead, those early findings suggested that intelligence probably involves thousands of genes.

Various studies show that intelligence is highly heritable: Between 40 percent and 80 percent of the variations in intelligence among people are attributable to genes. In the new study, the researchers put the heritability factor at 54 percent. The researchers pulled together data from 78,308 people, all of European descent, and scanned their DNA for single-nucleotide polymorphisms, or SNPs. SNPs are variations in the nucleotides that make up the genome. Most, according to the National Library of Medicine, have no effect, but some are crucial to health.
Twelve of the 52 genes the researchers ended up pinpointing had been previously associated with intelligence, the researchers reported May 22 in the journal Nature Genetics. One set of genes involved with intelligence, which is also involved in cell development, included three genes already known to be involved with building or maintaining neurons: SHANK3, which is involved in the formation of synapses, the junctions through which neurons communicate; DCC, which is involved in guiding the growth of axons, the spindly projections that neurons use as communication wires; and ZFHX3, which regulates the differentiation of neurons from other cell types during development.

To avoid stumbling on false correlations in the giant data set - there are at least 3 million SNPs in a human genome, Posthuma said - the researchers set their standards high in running their analysis. The result of this was that for each gene they identified, the chance that it is not truly linked to intelligence is about 1 in a million, Posthuma said.

The researchers also replicated their findings on another data set that measured the highest level of education attained instead of looking at general intelligence. IQ is highly correlated with educational attainment, so genes that drive IQ should also be linked to education, they reasoned. The researchers found that almost all of the variations they uncovered were also associated with the participants' education levels.

"This is really important stuff," said Douglas Detterman, a psychologist at Case Western Reserve University and a prominent intelligence researcher. "What is interesting about this particular article is, it suggests what we have to do to really understand intelligence. It's not going to be easy," Detterman said. "They suggest that the things they are finding are mostly implicated in neural development, so we'll have to understand neural development and what it is about the brain that makes people smart."
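The "set their standards high" step reflects the multiple-testing problem: with millions of SNPs tested, an ordinary significance cutoff would produce a flood of false hits. The study's exact procedure isn't described here; a standard Bonferroni-style correction illustrates the idea. The 3-million-SNP figure comes from the article, while the 0.05 level is the conventional choice, not a value from the study.

```python
# Why genome-wide studies need stringent per-SNP p-value thresholds.
# If every SNP were truly unrelated to the trait, a naive 0.05 cutoff
# would still flag alpha * n_snps of them by chance alone. Dividing
# alpha by the number of tests (Bonferroni correction) controls the
# chance of even one false positive across the whole scan.

alpha = 0.05
n_snps = 3_000_000

expected_false_positives_naive = alpha * n_snps   # under a naive cutoff
bonferroni_threshold = alpha / n_snps             # corrected per-SNP cutoff

print(f"naive 0.05 cutoff -> ~{expected_false_positives_naive:,.0f} false hits")
print(f"Bonferroni per-SNP threshold: {bonferroni_threshold:.2e}")
```

The resulting threshold is on the order of 10^-8, the same order as the conventional genome-wide significance level, which is consistent with the "1 in a million" per-gene false-positive chance quoted by Posthuma.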
Not everyone agrees that studies like these can shed much light on what makes people smart, though. "The basic premise is that each gene operates to do something in particular, independent of the environment and all the other genes," said Wendy Johnson, a psychologist at The University of Edinburgh. "There is so much evidence that there are many, many problems with this, that I'm not even sure where to start." Focusing on development and the modeling of the dynamics of the gene-environment system would be more enlightening, Johnson said.

Detterman said the natural next step in this line of work is to push the sample sizes of genome-wide studies into the millions. "That is what it's going to take to get really good information," he said.

Posthuma and her colleagues are already planning to include more people in their next studies, hoping to find genes with even smaller contributions to general intelligence. They also plan to look more closely at the genes they've already uncovered, to see what they do, and if they really are involved with intelligence - and, hopefully, to discover what makes someone intelligent in the first place. "The genes have a certain function, so it will help us get an idea of the underlying biological mechanism," Posthuma said. "Why do people with different intelligence differ from one another? Are the cells behaving different, or is the information processing faster?"
News Article | December 15, 2016
Flash Physics is our daily pick of the latest need-to-know developments from the global physics community, selected by Physics World's team of editors and reporters.

The Spanish astronomer Xavier Barcons will take over as director general (DG) of the European Southern Observatory (ESO) in September 2017, replacing the current DG Tim de Zeeuw, who completes his mandate. Barcons is a professor at the Spanish Council for Scientific Research in Madrid and is an expert in the field of X-ray astronomy. He served as ESO council president in 2012–2014 and is currently chair of the organization’s Observing Programmes Committee. Based in Garching, Germany, the ESO has three observing sites in Chile. “I look forward to seeing the European Extremely Large Telescope (E-ELT) come to fruition and overseeing the further development of the Very Large Telescope, Atacama Large Millimeter/submillimeter Array (ALMA) and many other projects at ESO,” said Barcons.

A molecular fountain has been created that allows molecules to be observed for very long times as they free fall. Created by Hendrick Bethlem and colleagues at Vrije University in the Netherlands, the technique involves cooling ammonia molecules to millikelvin temperatures and then launching them upwards at about 1.6 m/s. The molecules can then be studied in free fall for as long as 266 ms. This set-up is similar to atomic fountains, which allow very precise measurements to be made of atomic energy levels and form the basis for atomic clocks. A molecular fountain has proven much more difficult to create because molecules can vibrate and rotate – and this makes it very difficult to cool and manipulate them using conventional laser techniques. Bethlem and colleagues overcame this problem by using electric field gradients to exert forces on ammonia, which is a polar molecule.
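The fountain numbers above can be sanity-checked with simple projectile kinematics. This is only a back-of-envelope sketch: a molecule launched at 1.6 m/s returns to its launch height after roughly 2v/g, and the 266 ms interrogation time reported is of that order (somewhat shorter, presumably because detection happens over a finite region of the apparatus).

```python
# Back-of-envelope check of the molecular fountain's free-fall time,
# assuming pure ballistic flight with no residual forces.

g = 9.81   # gravitational acceleration, m/s^2
v = 1.6    # launch speed from the article, m/s

apex_height = v**2 / (2 * g)   # how high the molecules rise
round_trip = 2 * v / g         # time to return to launch height

print(f"apex height: {apex_height * 100:.1f} cm")   # ~13 cm
print(f"round trip: {round_trip * 1000:.0f} ms")    # ~326 ms
```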
The team says that its new molecular fountain could be used to look for tiny deviations from the Standard Model of particle physics – which could be revealed by tiny shifts in molecular energy levels. Tests of the equivalence principle of Einstein’s general theory of relativity could also be done by measuring the acceleration due to gravity experienced by different types of molecule. The fountain is described in Physical Review Letters.

An X-ray imaging technique that could previously only be done at large synchrotron facilities has been adapted for widespread use by Sandro Olivo at University College London and colleagues. Called X-ray phase-contrast imaging (XPCI), the method involves measuring changes in the phase of an X-ray beam as it travels through a sample. This is unlike conventional X-ray imaging, which measures the attenuation of the X-ray beam. The technique is better able to distinguish structures in living tissue, making it ideal for medical imaging. XPCI is also better at finding tiny cracks and defects in materials and could be used to detect the presence of weapons and explosives in baggage. Until now, however, XPCI could only be done using the laser-like X-ray beams produced by synchrotrons – which are huge electron accelerators.

Now, Olivo and colleagues have developed a technique that allows XPCI to be performed using X-rays generated by conventional medical sources. It involves first passing the X-rays through a “mask” containing an array of apertures to create a number of beams. These then interact with the sample before passing through a second mask to a detector. This configuration converts differences in phase to differences in measured intensity. “We've now advanced this embryonic technology to make it viable for day-to-day use in medicine, security applications, industrial production lines, materials science, non-destructive testing, the archaeology and heritage sector, and a whole range of other fields,” says Olivo.
The technology has already been licensed to Nikon Metrology UK for use in a security scanner and UCL and Nikon are currently developing a medical scanner.
News Article | September 19, 2016
“The heat made people crazy. They woke from their damp bed sheets and went in search of a glass of water, surprised to find that when their vision cleared, they were holding instead the gun they kept hidden in the bookcase.”

This passage, from Summer Island, a romance novel by Kristin Hannah, is how researchers introduce a potentially important new study they believe could alter people’s attitudes about the impact of unrelenting heat on violence, and why some parts of the world experience strikingly higher rates of violence than others. It’s not what people think.

The new research goes beyond existing ideas about how hot summer nights cause tempers to flare and prompt sporadic acts of violence. Their model explores long-term cultural changes resulting from persistently high temperatures and a lack of seasonal variability, among them a loss of self-control and future-oriented goals. This combination can lead to more aggression and violence, they say.

“People think about weather when they think about global warming, but don’t realize that climate change can increase aggression and violence,” says Brad Bushman, professor of communication and psychology at The Ohio State University and one of the study’s authors. “But climate change affects how we relate to other people.” Moreover, he predicts that unmitigated global warming could increase violence levels in the United States, something he believes deserves immediate attention.

Bushman, with colleagues Paul Van Lange, professor of psychology at Vrije University in Amsterdam (VU), and research assistant Maria Rinderu, also of VU, say their model, which they call CLASH (for CLimate Aggression and Self-Control in Humans), recently published in the journal Behavioral and Brain Sciences, could explain why violence is greater in countries closer to the Equator and in the Southern regions of the United States, and less so in the American North and in areas farther away from the Equator.
People living in such climates are more tuned into the present - the here and now - and are less likely to plan for the future, they theorize. They are less strict about time, less stringent about birth control, and have children earlier and more often, Bushman says.

“If you live farther away from the Equator, you have to exercise more self-control,” Bushman says. “You can’t just eat all your crops, because you then won’t have anything left to eat in the winter. But if you live closer to the Equator, those mango trees will grow mangoes year-round.” This scenario encourages a state of mind and lack of self-control that affects how people treat each other, according to Bushman. “Climate shapes how people live, and affects the culture in ways that we don’t think about in our daily lives.” Such a faster life strategy “can lead people to react more quickly with aggression and sometimes violence,” Bushman adds.

Until recently only two models helped explain why violence and aggression are higher in hotter climates. The first, the General Aggression Model - which Bushman helped develop - holds that hot temperatures make people feel uncomfortable and irritated, causing them to become more aggressive. The second, known as the Routine Activity Theory, suggests that people go outside and interact with each other more when the weather is warm, thus providing more opportunities for conflict. But that doesn’t explain why there is more violence when the temperature is 95 degrees F (35 degrees C) than when it is 75 degrees F (24 degrees C), even though people are more likely to go outside under both conditions.

To be sure, “our ability to cope with irritation and frustration may be less strong on hot days,” says Van Lange, the study’s lead author. “But this would be only part of the story. We thought it is not only average temperature that might matter, but also seasonal variation in temperature. The latter is predictable and may lead cultures that are facing seasonal variation to develop stronger norms and habits, and adopt longer-time planning and self-control - that is, to forgo immediate benefit for longer-term benefit.”

These two factors, average temperature and predictable seasonal variation, may help experts better understand aggression, as “the psych literature has revealed that self-control is one of the strongest predictors of aggression and violence,” Van Lange adds. It also may explain why crime is higher in the American South, Bushman says. “Violent crime rates have always been higher in the South,” he says. “You see different life strategies in the North and the South. People seem to plan more for the future in the North. But we predict that if climate change continues, with less seasonal variability in the North, you will see violent crime rates increase there, too.”

What about climate’s influence on war? “War is usually less impulsive, less the result of lack of self-control, and more planned and premeditated,” Bushman says. “However, the model could be applicable to a leader inclined to respond impulsively,” he says.

The scientists have called for more research, and note that they are not suggesting people in hotter climates can’t help themselves when it comes to violence. However, they stress that it is important to recognize that culture is strongly affected by climate. “Climate doesn’t make a person, but it is one part of what influences each of us,” Van Lange says.
News Article | September 12, 2016
The team will present their work during the Frontiers in Optics (FiO) / Laser Science (LS) conference in Rochester, New York, USA on 17-21 October 2016.

"Our target is the best tested theory there is: quantum electrodynamics," said Kjeld Eikema, a physicist at Vrije University, The Netherlands, who led the team that built the laser. Quantum electrodynamics, or QED, was developed in the 1940s to make sense of small unexplained deviations in the measured structure of atomic hydrogen. The theory describes how light and matter interact, including the effect of ghostly 'virtual particles.' Its predictions have been rigorously tested and are remarkably accurate, but like extremely dedicated quality control officers, physicists keep ordering new tests, hoping to find new insights lurking in the experimentally hard-to-reach regions where the theory may yet break down.

A promising tool for the next generation of tests is the new high-intensity laser. It produces pulses of deep ultraviolet light with energies large enough to bump electrons in some of the simplest atoms and molecules into a higher energy level. "For increased precision, you have to do these QED tests in the most simple atoms and molecules," Eikema explained. The team has already tested the laser on molecular hydrogen. They measured the frequency of light required to excite a certain electron transition with a preliminary uncertainty of less than one part per 100 billion, more than 100 times better than previous measurements.

The Challenge of Ultra-Precise Measurements in the UV

The key challenge for the team wasn't really producing the deep UV light - a feat that has been accomplished before - but in finding a way to keep the measurements precise. Short pulses, which are easier to produce for UV light, make inherently uncertain measurements, due to the Heisenberg uncertainty principle.
One way around this is to use a technique called Ramsey interferometry, which requires two pulses of light separated by an incredibly precise period of time. What Eikema and his colleagues did that had never been done before was to get the two pulses by extracting them from a device, called a frequency comb laser, uniquely suited to create precisely timed pulses. "People normally think that if you take just two pulses out of a frequency comb then you destroy the beauty of a frequency comb, but we do it in a special way," Eikema said.

Extracting and amplifying the pulses introduced uncertainties, but the team found that if they hit an atom or molecule with differently spaced pulse pairs and then analyzed the results simultaneously, the uncertainties in effect canceled out. Even better, it also canceled out an unwanted effect called the AC-Stark effect, which arises when the high-intensity light used for measurement actually changes the structure of an atom or molecule. "Using this method we actually restore all the properties of the frequency comb, and we also get exciting new properties," Eikema said. "This was our eureka moment."

The team's next goal is to use their laser to measure the first electron transition energy of a positively charged helium atom, called He+. He+ is one of the "holy grails" for testing QED, Eikema said, because the properties of the nucleus have been extensively studied, it can be trapped with electromagnetic fields and observed for a very long time, and the QED effects are larger in helium than in hydrogen. "If it's possible to measure this transition in He+, people will immediately do it, because it's a very nice, clean transition," he said.

A test of QED in He+ might also help resolve the proton radius problem, a new puzzle gripping the physics community after complementary tests turned up conflicting measurements of the proton's size.
The discrepancy could be due to a problem with QED theory, and so a better test would help scientists see whether or not QED theory still holds at this unprecedented new level of precision. Going from molecular hydrogen to He+ is still an enormous jump, Eikema said, since the wavelength of light required is almost ten times shorter. If all goes according to plan, he estimates the team may have results to report in about two years. "I went to a conference about the proton size problem and explained how we want to measure this transition of He+. Everyone was asking 'When? When? When?' They really want to know," Eikema said.

Sandrine Galtier, a postdoctoral researcher at Vrije University who will present the team's findings at the FiO meeting, says it's exciting how well their new laser system can test the extreme limits of theoretical physics. "We don't need huge accelerators. With just a tabletop experiment, we can test the Standard Model of physics," she said.

More information: The presentation (FTu5C.6), "Testing QED with Ramsey-Comb spectroscopy in the deep-UV range," by Sandrine Galtier will take place from 04:00 - 06:00, Tuesday, 18 October 2016, at the Radisson Hotel, Grand Ballroom C, Rochester, New York, USA.
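The role of the "differently spaced pulse pairs" mentioned in the article can be illustrated with a toy model. In a two-pulse Ramsey-type measurement with pulse separation T, the accumulated phase determines the transition frequency only modulo 1/T; repeating the measurement with a second spacing removes that ambiguity. This sketch is only an illustration of the principle with arbitrary numbers, not the group's actual analysis.

```python
# Toy model: frequency ambiguity in a two-pulse (Ramsey-type)
# measurement and its resolution with a second pulse spacing.
# With separation T, only the alias f mod (1/T) is observable.

def candidates(alias, spacing, f_lo, f_hi):
    """All frequencies in [f_lo, f_hi] consistent with one spacing."""
    step = 1.0 / spacing
    out, f = [], alias
    while f < f_lo:
        f += step
    while f <= f_hi:
        out.append(round(f, 6))
        f += step
    return out

true_f = 123.4567        # the 'unknown' transition frequency (arbitrary units)
T1, T2 = 0.1, 0.013      # two different pulse-pair separations

c1 = candidates(true_f % (1 / T1), T1, 100.0, 150.0)
c2 = candidates(true_f % (1 / T2), T2, 100.0, 150.0)
common = sorted(set(c1) & set(c2))   # only the true frequency survives
print(common)
```

Each spacing alone leaves a comb of candidate frequencies; only the true value is consistent with both, which is the sense in which combining pulse pairs "restores all the properties of the frequency comb."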
News Article | December 12, 2016
In his 49 years, Zablon Katende had never thought of leaving his hometown of Kipini in coastal Kenya. But now, looking at his dwindling mango trees, the farmer worries the harvest will not be enough to provide for his five children. “Every year there is less water,” he says, pointing at the murky Tana river which washes the shores of his village.

Despite being Kenya’s longest river, the Tana is struggling to keep up with the country’s ever-growing demand for water and electricity. It is the backbone of the country’s economy, providing up to 80% of Nairobi’s water and half the country’s electricity through hydroelectric plants. Its water also irrigates thousands of hectares of cash crops such as tea, coffee and rice. However, erosion, pollution and excessive water capture are threatening the livelihoods of many who, like Katende, depend on the river. The government is currently planning to divert even more of the Tana’s water for irrigation and power, but a study (pdf) by Wetlands International and the Vrije University in Amsterdam warns this management model is not ecologically sustainable.

Despite concerns, Kenya’s government wants to use more of the Tana river’s resources to ensure economic prosperity for the country’s fast-growing population. Known as Vision 2030, the plan includes 1m acres of monocultures, a 3km-long dam and a £28bn transportation corridor including a new port city in Lamu, near the Tana delta. Experts, however, warn the river’s resources are not unlimited. “Ignoring nature has a price,” says Julie Mulonga, programme manager of Wetlands International in Kenya. According to Mulonga, the government’s water management style focuses on the short-term benefit of industries around the capital, such as flower farms and breweries, and disregards the needs of people and animals downstream. The consequences are already being felt, especially in the Tana’s delta, where most locals live off fishing, raising cattle and growing subsistence crops.
Without enough water, fish cannot breed, crops fail and animals are too emaciated to sell. “Without the river, nothing lives,” says Katende, who worries that the construction of another dam will mean even less water for his mango trees.

Tourism is suffering, too. The Tana’s delta is a wildlife refuge for hundreds of species, from hippos to monkeys. But water scarcity increases deforestation and animal poaching. What’s more, local authorities worry that competition over water will lead to violent clashes between pastoralist and farming tribes, which in 2012 resulted in 50 deaths and forced several hotels to close.

The Kenyan government rejects the suggestion that its plans are putting strain on the environment, communities and businesses that rely on the river. “There is no need to compete over water because all economic activities on the river are complementary,” says Robinson Gaita, director of irrigation and water storage at the Ministry of Water and Irrigation. Gaita is overseeing the development of a new 10,000-acre maize farm near the middle section of the Tana, which he says is already improving food security. The government recently donated 62,000 bags of maize from this plantation to communities suffering from drought in the river’s delta. As for the colossal dam, Gaita says it will actually help downstream farmers like Katende because it will give the state the ability to prevent excessive flooding and increase the availability of water in case of drought - both of which are happening more frequently because of climate change.

Private businesses could have a big role to play in the Tana’s conservation. Some of the country’s largest companies, including Coca-Cola and East African Breweries, have joined the Nairobi Water Fund, a scheme which aims to raise £8m to help preserve the Tana’s ecosystems by planting trees or teaching farmers better soil-management practices.
Nushin Ghassmi, communications manager for Frigoken, Kenya’s largest vegetable processing company, says working with the fund is important because “preserving our natural resources is crucial for our business survival”. Coca-Cola estimates the annual water treatment and filtration costs for its Nairobi bottling plant at more than $1m.

Yet even with increased corporate responsibility, the Tana will continue to deteriorate if the government does not scale down its ambitious infrastructure projects, warns Pieter van Beukering, director of the Institute for Environmental Studies at Vrije University. If the economic benefits are not shared equally along the river, this could also increase upstream migration. “Money follows water. And people follow money,” says Van Beukering.

Many of Katende’s neighbours have already left Kipini looking for greener pastures for their cattle or cleaner waters for their nets. “But I’m a farmer,” says Katende. “I can’t abandon my land.” Instead he has joined a local conservation group to help raise awareness about the importance of preserving the Tana. Despite this year’s failing crop, he is hopeful. “We will find a way to give water to everybody,” he says. “We have to.”
Jager T., Vrije University |
Philosophical Transactions of the Royal Society B: Biological Sciences | Year: 2010
The interest of environmental management is in the long-term health of populations and ecosystems. However, toxicity is usually assessed in short-term experiments with individuals. Modelling based on dynamic energy budget (DEB) theory aids the extraction of mechanistic information from the data, which in turn supports educated extrapolation to the population level. To illustrate the use of DEB models in this extrapolation, we analyse a dataset for life cycle toxicity of copper in the earthworm Dendrobaena octaedra. We compare four approaches for the analysis of the toxicity data: no model; a simple DEB model without reserves and maturation (the Kooijman-Metz formulation); a more complex one with static reserves and simplified maturation (as used in the DEBtox software); and a full-scale DEB model (DEB3) with explicit calculation of reserves and maturation. For the population prediction, we compare two simple demographic approaches (discrete-time matrix model and continuous-time Euler-Lotka equation). In our case, the difference between DEB approaches and population models turned out to be small. However, differences between DEB models increased when extrapolating to more field-relevant conditions. The DEB3 model allows for a completely consistent assessment of toxic effects and therefore greater confidence in extrapolating, but poses greater demands on the available data. © 2010 The Royal Society.
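The continuous-time Euler-Lotka equation mentioned in the abstract finds the intrinsic population growth rate r satisfying sum over ages x of exp(-r x) l(x) m(x) = 1, where l(x) is survival to age x and m(x) is fecundity at age x. A minimal sketch, using a made-up survival/fecundity schedule rather than the paper's earthworm data:

```python
# Minimal discretized Euler-Lotka solver: find r such that
#     sum_x exp(-r * x) * l(x) * m(x) = 1
# The left-hand side decreases monotonically in r, so bisection works.
import math

def euler_lotka_r(ages, survival, fecundity, lo=-5.0, hi=5.0, tol=1e-10):
    """Solve the discretized Euler-Lotka equation for r by bisection."""
    def f(r):
        return sum(math.exp(-r * x) * l * m
                   for x, l, m in zip(ages, survival, fecundity)) - 1.0

    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:      # lifetime reproduction still exceeds 1: raise r
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

ages      = [1, 2, 3, 4]          # age classes (hypothetical units)
survival  = [0.9, 0.7, 0.5, 0.3]  # l(x): probability of surviving to age x
fecundity = [0.0, 1.0, 2.0, 2.0]  # m(x): offspring produced at age x

r = euler_lotka_r(ages, survival, fecundity)
print(f"intrinsic rate of increase r = {r:.4f}")
```

Toxicant effects enter such a calculation through the survival and fecundity schedules, which is how the individual-level DEB output feeds the population-level prediction.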
News Article | September 20, 2016
An experiment with fake frogs shows how certain bats adjust their hunting technique to compensate for unnatural noises. Humankind is loud, and research already suggests that birds alter their singing in urban noise. Now tests show that bats listening for the frogs they hunt switch from mostly quiet eavesdropping to active echolocation when artificial sounds mask the frog calls. That way, the bats can detect the motion of the frogs’ vocal sac poofing out with each call, researchers report in the Sept. 16 Science.

That switch in sensory tactics could make bats the only animals besides people shown to react this way to interfering din in the classic cocktail party scenario, says study coauthor Wouter Halfwerk of Vrije University Amsterdam. People straining to hear each other over the cacophony of a party can get a boost in communication by paying attention to each other’s mouth movements. He points out that watching someone’s lips lets people tolerate about an extra 20 decibels of tipsy shrieking and shouting.

Conservation biologists worry about the effects of human racket on other residents of the planet. Other researchers, for instance, found that noise interfered with pallid bats’ success in hunting insects on the wing. The fringe-lipped bats (Trachops cirrhosus) in the new study, however, specialize in frogs instead of insects. Hungry bats listen to frog choruses and swoop out of the darkness to carry off a male chirping his advertisements for a mate. “A talking pickle” is what Halfwerk calls the frog.

Researchers tested 12 wild-caught bats in outdoor flight cages in Panama. Bats perching (upside down, of course) in cages were perfectly willing to make a grab at robotic frogs deployed in the cages. The robofrogs, modeled by an artist on the túngara species the bats naturally hunt, sit motionless but can on command start inflating a specially constructed balloon in time with broadcast calls. In this setup, interfering noise changed normal hunting.
When researchers broadcast sounds that partially masked the main frequency of telltale frog calls, bats waited longer than normal to strike and also strongly preferred pouncing on a robofrog that was inflating his sac instead of an identical frog squatting nearby with a deflated sac. Recordings of bat noises from the perch showed that the hunters were pinging fast echolocation sounds instead of mostly listening for the pickle to betray its location. Even with the strategy switch, the bats aren’t completely making up for the noise nuisance, Jinhong Luo at Johns Hopkins University points out. A sensory biologist, he has tested noise effects on other bats but was not involved in this project. Looking at the new data, he notes that frog-eating bats in echolocating mode are slower to leave their perches and swoop than bats in eavesdropping mode. He also cautions about generalizing to the other 1,300-plus bat species. Many of them are already using echolocation to hunt insects and may not have a backup prey-finder method when noise complicates their foraging.
News Article | November 21, 2016
On top of that, there are even plenty of volunteers who are prepared to make a one-way journey to Mars, and people advocating that we turn it into a second home. All of these proposals have focused attention on the peculiar hazards that come with sending human beings to Mars. Aside from its cold, dry environment, lack of air, and huge sandstorms, there's also the matter of its radiation. Mars has no protective magnetosphere, as Earth does. Scientists believe that at one time, Mars also experienced convection currents in its core, creating a dynamo effect that powered a planetary magnetic field. However, roughly 4.2 billion years ago – either due to a massive impact from a large object, or rapid cooling in its core – this dynamo effect ceased. As a result, over the course of the next 500 million years, Mars' atmosphere was slowly stripped away by the solar wind. Between the loss of its magnetic field and its atmosphere, the surface of Mars is exposed to much higher levels of radiation than Earth's. And in addition to regular exposure to cosmic rays and the solar wind, it receives occasional lethal blasts from strong solar flares. NASA's 2001 Mars Odyssey spacecraft was equipped with a special instrument called the Martian Radiation Experiment (or MARIE), which was designed to measure the radiation environment around Mars. Since Mars has such a thin atmosphere, the radiation detected by Mars Odyssey in orbit would be roughly the same as on the surface. Over the course of about 18 months, the Mars Odyssey probe detected ongoing radiation levels 2.5 times higher than what astronauts experience on the International Space Station – 22 millirads per day, which works out to about 8,000 millirads (8 rads) per year. The spacecraft also detected two solar proton events, where radiation levels peaked at about 2,000 millirads in a day, and a few other events that reached about 100 millirads. For comparison, human beings in developed nations are exposed to (on average) 0.62 rads per year.
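Those dose figures are easy to sanity-check. Here is a minimal back-of-envelope sketch (illustrative only; the constants are simply the figures quoted above, not raw MARIE data):

```python
# Back-of-envelope check of the Mars Odyssey (MARIE) dose figures quoted above.
MARS_DAILY_DOSE_MRAD = 22        # millirads per day, measured in Mars orbit
MARS_TO_ISS_RATIO = 2.5          # Mars-orbit dose vs. ISS dose
EARTH_ANNUAL_DOSE_RADS = 0.62    # average annual exposure, developed nations

annual_dose_rads = MARS_DAILY_DOSE_MRAD * 365 / 1000            # ~8 rads per year
iss_daily_dose_mrad = MARS_DAILY_DOSE_MRAD / MARS_TO_ISS_RATIO  # ~8.8 millirads per day on the ISS
times_earth_average = annual_dose_rads / EARTH_ANNUAL_DOSE_RADS # ~13x a typical person's exposure

print(f"Mars annual dose: {annual_dose_rads:.1f} rads "
      f"({times_earth_average:.0f}x the average terrestrial exposure)")
```

In other words, a year in Mars orbit delivers roughly thirteen times the radiation an average person in a developed nation receives on Earth.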
And while studies have shown that the human body can withstand a dose of up to 200 rads without permanent damage, prolonged exposure to the kinds of levels detected on Mars could lead to all kinds of health problems – like acute radiation sickness, increased risk of cancer, genetic damage, and even death. And given that exposure to any amount of radiation carries with it some degree of risk, NASA and other space agencies maintain a strict policy of ALARA (As Low As Reasonably Achievable) when planning missions. Human explorers to Mars will definitely need to deal with the increased radiation levels on the surface. What's more, any attempts to colonize the Red Planet will also require measures to ensure that exposure to radiation is minimized. Already, several solutions – both short-term and long-term – have been proposed to address this problem. For example, NASA maintains multiple satellites that study the sun and the space environment throughout the solar system, and monitor for galactic cosmic rays (GCRs), in the hopes of gaining a better understanding of solar and cosmic radiation. They've also been looking for ways to develop better shielding for astronauts and electronics. In 2014, NASA launched the Reducing Galactic Cosmic Rays Challenge, an incentive-based competition that awarded a total of $12,000 for ideas on how to reduce astronauts' exposure to galactic cosmic rays. After the initial challenge in April of 2014, a follow-up challenge took place in July that awarded a prize of $30,000 for ideas involving active and passive protection. When it comes to long-term stays and colonization, several more ideas have been floated in the past. For instance, as Robert Zubrin and David Baker explained in their proposal for a low-cost "Mars Direct" mission, habitats built directly into the ground would be naturally shielded against radiation. Zubrin expanded on this in his 1996 book The Case for Mars: The Plan to Settle the Red Planet and Why We Must.
Proposals have also been made to build habitats above ground using inflatable modules encased in ceramics created from Martian soil. Similar to what has been proposed by both NASA and the ESA for a settlement on the Moon, this plan would rely heavily on robots using a 3-D printing technique known as "sintering", in which loose soil is fused into a solid ceramic by heating it without fully melting it. Mars One, the non-profit organization dedicated to colonizing Mars in the coming decades, also has proposals for how to shield Martian settlers. Addressing the issue of radiation, the organization has proposed building shielding into the mission's spacecraft, transit vehicle, and habitation module. For solar flares, when this protection is insufficient, it advocates creating a dedicated radiation shelter (located in a hollow water tank) inside the Mars Transit Habitat. But perhaps the most radical proposal for reducing Mars' exposure to harmful radiation involves jump-starting the planet's core to restore its magnetosphere. To do this, we would need to liquefy the planet's outer core so that it can convect around the inner core once again. Combined with the planet's own rotation, that convection would create a dynamo effect and generate a magnetic field. According to Sam Factor, a graduate student with the Department of Astronomy at the University of Texas, there are two ways to do this. The first would be to detonate a series of thermonuclear warheads near the planet's core, while the second involves running an enormous electric current through the planet, producing resistance at the core that would heat it up. In addition, a 2008 study conducted by researchers from the National Institute for Fusion Science (NIFS) in Japan addressed the possibility of creating an artificial magnetic field around Earth.
After noting that continuous measurements show a drop of about 10 per cent in the intensity of Earth's magnetic field over the past 150 years, they proposed that a series of planet-encircling superconducting rings could compensate for future losses. With some adjustments, such a system could be adapted for Mars, creating an artificial magnetic field that could help shield the surface from some of the harmful radiation it regularly receives. In the event that terraformers attempt to create an atmosphere for Mars, this system could also ensure that it is protected from the solar wind. Lastly, a 2007 study by researchers from the Institute for Mineralogy and Petrology in Switzerland and the Faculty of Earth and Life Sciences at Vrije University in Amsterdam managed to replicate the conditions of Mars' core. Using a diamond anvil cell, the team reproduced the pressures expected at the center of Mars in iron-sulfur and iron-nickel-sulfur systems. What they found was that at the temperatures expected in the Martian core (~1500 K, or 1227 °C; 2240 °F), the inner core would be liquid, but some solidification would occur in the outer core. This is quite different from Earth's core, where the solidification of the inner core releases heat that keeps the outer core molten, thus creating the dynamo effect that powers our magnetic field. The absence of a solid inner core on Mars means that the once-liquid outer core must have had a different energy source; presumably that heat source failed, allowing the outer core to solidify and arresting any dynamo effect. However, their research also showed that planetary cooling could lead to core solidification in the future, either through iron-rich solids sinking towards the center or iron-sulfides crystallizing in the core. In other words, part of Mars' core might crystallize someday, and – as on Earth – the heat released by that solidification could keep the surrounding liquid molten and convecting.
Combined with the planet's own rotation, this would generate the dynamo effect that would once again fire up the planet's magnetic field. If this is true, then colonizing Mars and living there safely could be a simple matter of waiting for the core to crystallize. There's no way around it. At present, the radiation on the surface of Mars is pretty hazardous! Therefore, any crewed missions to the planet in the future will need to take into account radiation shielding and counter-measures. And any long-term stays there – at least for the foreseeable future – are going to have to be built into the ground, or hardened against solar and cosmic rays. But you know what they say about necessity being the mother of invention, right? And with such luminaries as Stephen Hawking saying that we need to start colonizing other worlds in order to survive as a species, and people like Elon Musk and Bas Lansdorp looking to make it happen, we're sure to see some very inventive solutions in the coming generations!
News Article | November 28, 2016
Water is not just vital to life on Earth – it turns out that it may have been a crucial ingredient of the primordial body that split apart 4.5 billion years ago to become Earth and the moon. The latest evidence for this, from lab simulations of how minerals formed in the early moon, may settle a long-running debate about whether the early moon and Earth contained water from the outset, or whether it arrived later through collisions with water-bearing comets or asteroids. “Our study shows that water was there at the time the moon formed, and because that happened soon after the formation of Earth, it shows water was present well before any later addition via comets or asteroids,” says Wim van Westrenen at Vrije University in Amsterdam, the Netherlands, who co-led the team. “We show that the moon, in its initial hot stage, contained a lot of water – at least as much as, and likely more than, the amount we have on Earth today.” Water has been detected in samples from the moon before, but only in young rock from the surface, which does not tell us whether it was there from the beginning or brought by asteroids. To investigate the role of water in the early moon’s formation, van Westrenen and his colleagues made small-scale lab mixtures weighing just 10 milligrams, but containing all the basic ingredients from which the moon originated. Specifically, this mimics the components that gave rise to the lunar magma ocean, the initial liquefied mass that gradually cooled and solidified to form the moon. “The main constituents are silicon and oxygen, with a sprinkling of magnesium, calcium, iron, titanium and aluminium,” says van Westrenen. The recipe reflects that revealed by seismic data collected from the moon’s surface by instruments left there by Apollo astronauts. 
Next, van Westrenen's team simulated the moon's evolving geology by subjecting the mixture to temperatures and pressures that matched those on the early moon, taking advantage of laboratory apparatus also used to create synthetic diamonds. They did this both with and without water to see whether this affected the type and amount of rocks formed. The team found that only when water was included in the mix, at levels of just 0.5 to 1 per cent by weight, did the types and amounts of rock formed match those that have been detected or measured on the moon. Most importantly, the water-bearing mixture generated a layer of plagioclase – the dominant component of the lunar crust – that, when extrapolated to the moon, would be around 34 to 43 kilometres thick. This tallies with the average thickness reported in 2013 based on data from satellites orbiting the moon. When the mixture was dry, the plagioclase layer ended up twice as deep, at 68 kilometres. This suggests that the moon's existing geological make-up could only have evolved if water was there at the outset. The latest research adds weight to arguments that Earth and the moon had water from the outset. Others have argued that water arrived later on asteroids or comets that smashed into the primordial planet and moon. Measurements sent back in 2014 from the Rosetta space probe when it visited a comet dealt that theory a blow by showing that the water on the comet had a combination of isotopes that did not match Earth's. "This is yet another indication that the moon may have initially been water-rich, with important implications both for our models of lunar origin, and for the possibility there are still water-rich reservoirs on the moon today," says Robin Canup, who studies the origins of planetary bodies at the Southwest Research Institute in Boulder, Colorado.
“This work is going to force us to think about how the material that formed the moon managed to take some of Earth’s water along with it,” says Steve Jacobsen of Northwestern University in Evanston, Illinois. This month, he reported evidence for the deepest water yet discovered on Earth, at 1000 kilometres down, or a third of the way to the edge of the core.