Institute of Biometeorology

Bologna, Italy

Salinger M.J.,Institute of Biometeorology | Baldi M.,Institute of Biometeorology | Grifoni D.,CNR Institute of Neuroscience | Jones G.,Southern Oregon University | And 7 more authors.
International Journal of Biometeorology | Year: 2015

Climatic factors and weather type frequencies affecting Tuscany are examined to discriminate between Chianti wine vintages ranked in the upper and lower quartiles, based on a consensus of six rating sources, for the period 1980 to 2011. These consensus rankings represent a considerable improvement on any individual publisher ranking, displaying an overall good agreement on the best and worst vintage years. Climate variables are calculated and weather type frequencies are matched between the eight highest- and the eight lowest-ranked vintages in the main phenological phases of the Sangiovese grapevine. Results show that higher heat units; mean, maximum and minimum temperature; and more days with temperature above 35 °C were the most important discriminators between good- and poor-quality vintages in the spring and summer growth phases, with heat units important during ripening. Precipitation influences vintage quality significantly only during veraison, where low precipitation amounts and few precipitation days favour better-quality vintages. In agreement with these findings, weather type analysis shows that good vintages are favoured by weather type 4 (more anticyclones over central Mediterranean Europe (CME)), which gives warm, dry growing-season conditions. Poor vintages all relate to higher frequencies of weather type 3, which, by steering perturbations across CME, favours cooler and wetter conditions, and/or of weather type 7, which brings cold, dry continental air masses from the east and north-east over CME. This approach shows that there are important weather type frequency differences between good- and poor-quality vintages. Trend analysis shows that changes in weather type frequencies are more important than any changes due to global warming. © 2015 ISB
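To make the vintage-discrimination idea concrete, here is a minimal Python sketch of one of the climate variables mentioned above, heat units, compared between two groups of vintages. The 10 °C base temperature, the Mann-Whitney test, and all of the numbers are illustrative assumptions of this sketch, not values from the study.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def growing_degree_days(daily_tmean, base=10.0):
    """Heat units: sum of daily mean temperature excess above the base (degrees C)."""
    return float(np.sum(np.clip(np.asarray(daily_tmean, dtype=float) - base, 0, None)))

# One illustrative growing season: daily mean temperatures over ~April-October.
season_tmean = 18.0 + 6.0 * np.sin(np.linspace(0, np.pi, 214))
print(f"example seasonal heat units: {growing_degree_days(season_tmean):.0f} degree-days")

# Hypothetical seasonal heat-unit totals for the eight best and eight worst vintages.
gdd_best  = [1850, 1900, 1795, 1930, 1880, 1840, 1910, 1870]
gdd_worst = [1610, 1650, 1580, 1700, 1630, 1665, 1590, 1620]
stat, p = mannwhitneyu(gdd_best, gdd_worst, alternative="greater")
print(f"Mann-Whitney U = {stat:.1f}, one-sided p = {p:.4f}")
```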


Xia J.,University of Oklahoma | Niu S.,CAS Beijing Institute of Geographic Sciences and Nature Resources Research | Ciais P.,French Climate and Environment Sciences Laboratory | Janssens I.A.,University of Antwerp | And 43 more authors.
Proceedings of the National Academy of Sciences of the United States of America | Year: 2015

Terrestrial gross primary productivity (GPP) varies greatly over time and space. A better understanding of this variability is necessary for more accurate predictions of the future climate-carbon cycle feedback. Recent studies have suggested that variability in GPP is driven by a broad range of biotic and abiotic factors operating mainly through changes in vegetation phenology and physiological processes. However, it is still unclear how plant phenology and physiology can be integrated to explain the spatiotemporal variability of terrestrial GPP. Based on analyses of eddy-covariance and satellite-derived data, we decomposed annual terrestrial GPP into the length of the CO2 uptake period (CUP) and the seasonal maximal capacity of CO2 uptake (GPPmax). The product of CUP and GPPmax explained >90% of the temporal GPP variability in most areas of North America during 2000-2010 and the spatial GPP variation among globally distributed eddy flux tower sites. It also explained GPP response to the European heatwave in 2003 (r2 = 0.90) and GPP recovery after a fire disturbance in South Dakota (r2 = 0.88). Additional analysis of the eddy-covariance flux data shows that the interbiome variation in annual GPP is better explained by that in GPPmax than CUP. These findings indicate that terrestrial GPP is jointly controlled by ecosystem-level plant phenology and photosynthetic capacity, and greater understanding of GPPmax and CUP responses to environmental and biological variations will, thus, improve predictions of GPP over time and space.
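Because the abstract treats annual GPP as essentially the product CUP x GPPmax, the decomposition is easy to sketch in code. The Python sketch below is only illustrative: the 95th-percentile definition of GPPmax, the 10% uptake threshold for CUP, and the synthetic seasonal curve are assumptions of this sketch, not the authors' exact procedure.

```python
import numpy as np

def decompose_annual_gpp(daily_gpp, uptake_fraction=0.1, window=15):
    """daily_gpp: 1-D array of daily GPP (g C m-2 d-1) for one site-year."""
    daily_gpp = np.asarray(daily_gpp, dtype=float)
    # Smooth with a simple running mean to suppress day-to-day noise.
    kernel = np.ones(window) / window
    smooth = np.convolve(daily_gpp, kernel, mode="same")
    gpp_max = np.percentile(smooth, 95)                      # seasonal maximal uptake capacity
    cup = int(np.sum(smooth > uptake_fraction * gpp_max))    # CO2 uptake period (days)
    annual_gpp = float(np.sum(daily_gpp))                    # observed annual total
    return annual_gpp, cup, gpp_max

# Example: a synthetic growing-season curve, just to exercise the function.
days = np.arange(365)
synthetic = np.clip(10.0 * np.sin((days - 90) / 180 * np.pi), 0, None)
annual, cup, gpp_max = decompose_annual_gpp(synthetic)
print(f"annual GPP = {annual:.0f}, CUP = {cup} days, GPPmax = {gpp_max:.1f}")
print(f"CUP x GPPmax = {cup * gpp_max:.0f}  (should track annual GPP)")
```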


Roberti R.,University of Bologna | Osti F.,Institute of Biometeorology | Innocenti G.,University of Bologna | Rombola A.D.,University of Bologna | Di Marco S.,Institute of Biometeorology
European Journal of Plant Pathology | Year: 2016

Among the fungi associated with kiwi wood diseases, the vascular pathogen Phaeoacremonium minimum can infect plants as early as the nursery stage, without any external symptoms. At present, no effective control strategies are available. The effect of soil treatments applicable in organic agriculture was evaluated in two-year experiments on potted kiwi plants artificially inoculated with P. minimum. The soil treatments were based on commercial formulations of iron chelate, silicon, neem paste, Trichoderma afroharzianum strain T22, and cover cropping with selected perennial Poaceae. Cover cropping and iron chelate treatments enhanced iron availability for the plants and significantly reduced the wood necrosis caused by the pathogen. Both treatments also increased hairy root proliferation, enabling the plants to better cope with stress conditions. Laboratory assays demonstrated the role of iron in pathogen growth and in its pathogenesis-related enzyme activities. © 2016 Koninklijke Nederlandse Planteziektenkundige Vereniging


Osti F.,Institute of Biometeorology | Di Marco S.,Institute of Biometeorology
Acta Horticulturae | Year: 2011

Wood decay is a recently reported chronic wood disease caused by different fungi, with a complex aetiology that is not yet clear. This harmful and widespread disease is characterized by a correlation between yield losses and foliar symptoms, which in turn show an erratic expression, as in esca of grapevine. A survey is being carried out in 10-year-old vineyards in which wood decay appeared for the first time in 2003. Each year and in each vineyard, the incidence of the disease was assessed by recording the percentage of vines with foliar symptoms. Moreover, in each vineyard the content of exchangeable sodium in the soil was evaluated.
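The survey reduces to two quantities per vineyard: disease incidence (the percentage of vines with foliar symptoms) and exchangeable soil sodium. A minimal Python sketch of how such data might be tabulated and correlated is given below; the vineyard values and the choice of a Spearman correlation are hypothetical, since the abstract reports neither.

```python
from scipy.stats import spearmanr

# Hypothetical survey data, one entry per vineyard.
symptomatic = [12, 30, 7, 22, 15]            # vines with foliar symptoms
total_vines = [200, 210, 195, 205, 200]      # vines assessed
exch_na_cmol_kg = [0.4, 1.1, 0.2, 0.9, 0.6]  # exchangeable Na in the soil (cmol(+)/kg)

# Incidence = percentage of vines showing foliar symptoms.
incidence = [100.0 * s / t for s, t in zip(symptomatic, total_vines)]
rho, p = spearmanr(incidence, exch_na_cmol_kg)
print(f"incidence (%) = {[round(i, 1) for i in incidence]}")
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```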


Dalu G.A.,Institute of Biometeorology | Baldi M.,Institute of Biometeorology
Journal of the Atmospheric Sciences | Year: 2010

The authors study the nonlinear dynamics of a density current generated by a diabatic source in a rotating and a nonrotating system, both in the presence and in the absence of frictional losses, using a steady-state hydrostatic shallow-water model and producing solutions as a function of the Coriolis parameter and of the Rayleigh friction coefficient. Results are presented in the range of parameter values that are relevant for shallow atmospheric flows such as sea-land breezes and cold pool outflows. In the shallow-water approximation, single-layer flows and two-layer flows with a lid have three degrees of freedom, and their steady-state dynamics are governed by three ordinary differential equations (ODEs), whereas two-layer flows bounded by a free surface have six degrees of freedom, and their dynamics are governed by six ODEs. It is shown that in the limiting case of frictionless flow, the problem has an explicit analytical solution, and in the presence of friction, the system for a one-layer flow and for a two-layer flow bounded by a lid can be reduced to two algebraic equations, plus one second-order ordinary differential equation, which can be integrated numerically. Results show that the maximum runout length of the current occurs when the Rayleigh friction coefficient in the lower layer is on the order of the Coriolis parameter. This length is larger when the upper layer is deeper than the lower layer, but it shortens when the friction coefficient of the upper layer is smaller than that in the lower layer. In addition, the relative error of the solution to the linearized equations is computed. This error, which is enhanced when the width of the forcing is smaller than the Rossby radius, is sizable when the friction coefficient is smaller than the Coriolis parameter. Furthermore, by comparing the nonlinear solution with a lid (three degrees of freedom) to the nonlinear solution with a free surface as an upper boundary (six degrees of freedom), it is shown that the solution with the lid overestimates the geopotential for low values of the friction coefficient and underestimates it for large values of this coefficient. The error, sizable when the two layers have a comparable depth, rapidly decreases when the upper layer becomes deeper than the lower layer; accordingly, a rigid lid can be safely adopted only when the depth of the upper layer is twice the depth of the lower layer, or deeper. © 2010 American Meteorological Society.
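Two nondimensional quantities frame the results above: the ratio of the Rayleigh friction coefficient to the Coriolis parameter (runout is longest when it is of order one) and the width of the forcing relative to the Rossby radius. The short Python sketch below uses illustrative sea-breeze-like values, not numbers taken from the paper, to show how these are computed.

```python
import math

g = 9.81               # gravity (m s-2)
theta0 = 300.0         # reference potential temperature (K)
dtheta = 3.0           # temperature deficit of the density current (K)
h = 500.0              # depth of the lower layer (m)
f = 1.0e-4             # Coriolis parameter at mid-latitudes (s-1)
r = 8.0e-5             # Rayleigh friction coefficient (s-1)
forcing_width = 5.0e4  # width of the diabatic forcing (m)

g_reduced = g * dtheta / theta0               # reduced gravity g'
rossby_radius = math.sqrt(g_reduced * h) / f  # L_R = sqrt(g' h) / f

print(f"reduced gravity g' = {g_reduced:.3f} m s^-2")
print(f"Rossby radius = {rossby_radius / 1e3:.0f} km "
      f"(forcing width = {forcing_width / 1e3:.0f} km)")
print(f"friction/Coriolis ratio r/f = {r / f:.2f}")
```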


Pezzoli A.,Polytechnic University of Turin | Pezzoli A.,University of Turin | Cristofori E.,Polytechnic University of Turin | Cristofori E.,University of Turin | And 4 more authors.
Procedia Engineering | Year: 2012

This research presents a detailed analysis of thermal comfort in road cycling athletes. The data were collected during experimental road tests in preparation for the UCI Road World Championship 2013 (Florence-Tuscany, Italy), considering the different technical situations and the environmental conditions expected to be most probable for the race period. The analysis presented in this work is based on in-situ measurements of both environmental and physiological parameters (i.e., air temperature, relative humidity, true wind velocity, apparent wind velocity, skin temperature, clothing temperature, heat transfer resistance of the clothing, internal heat production) made on different athletes in different race conditions. The recorded data were used as input for the model "RayMan" [1],[2] for the assessment of thermal comfort using thermal indices such as the Predicted Mean Vote (PMV) and the Physiological Equivalent Temperature (PET). It should be noted that the apparent wind velocity, a fundamental parameter in this kind of analysis but one often disregarded, is evaluated in relation to the movement and effort of the cyclist. The results obtained by comparing the PET and PMV indices with the measured skin temperature confirm the importance of considering the variation of environmental parameters in both training and strategy assessment, and provide a working method believed to be innovative for applied sport research. © 2012 Published by Elsevier Ltd.
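The apparent wind the abstract insists on is simply the vector combination of the true wind and the rider's own motion. A minimal Python sketch follows; the wind-angle convention (angle of the true wind off the rider's heading) and the example numbers are assumptions for illustration, not the study's protocol.

```python
import math

def apparent_wind(rider_speed, true_wind_speed, wind_angle_deg):
    """All speeds in m/s; wind_angle_deg = 0 means a pure headwind."""
    theta = math.radians(wind_angle_deg)
    # Components in the rider's frame: along-track (x) and cross-track (y).
    ax = rider_speed + true_wind_speed * math.cos(theta)
    ay = true_wind_speed * math.sin(theta)
    speed = math.hypot(ax, ay)
    angle = math.degrees(math.atan2(ay, ax))
    return speed, angle

# Example: 11 m/s rider speed (~40 km/h) with a 3 m/s true wind at 60 degrees off the nose.
speed, angle = apparent_wind(11.0, 3.0, 60.0)
print(f"apparent wind: {speed:.1f} m/s at {angle:.0f} degrees off the heading")
```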


News Article | September 26, 2016
Site: www.technologyreview.com

The formation and collapse of tiny bubbles dramatically changes the chemistry, engineering, and cost of beer-making. When it comes to beer, many readers will know what a magnificent product the amber nectar can be and why the forces of scientific progress should be focused on its constant improvement. Over the years, there have been many advances in our understanding of the biochemistry of fermentation. But the basic beer-making process has not changed for hundreds, if not thousands, of years. Clearly, earth-shattering breakthroughs in brewery science are few and far between. Which is why the work of Lorenzo Albanese at the Institute of Biometeorology in Florence, Italy, and a few pals, is so significant. These dedicated individuals have invented an entirely new beer-brewing process that dramatically changes the chemistry, the engineering, and the environmental footprint of the process that produces the heavenly brew. So what have they done? The secret sauce in their new method is cavitation, the formation of small bubbles of vapor within a liquid and their subsequent collapse. This is usually done by reducing the pressure within a liquid so that it boils and then increasing it again so that the vapor condenses. Cavitation can be produced by a rotating impeller which generates low pressures at its fast-moving tips. Indeed, cavitation is often an unwanted by-product of ship and submarine propellers, not least because the bubble trail, and the noise it makes, can give away a submarine’s position. Cavitation is an extraordinary process. The rapid collapse of one of these tiny bubbles can create temperatures of more than 1,000 Kelvin and produce pressures some 5,000 times greater than atmospheric pressure. These conditions dramatically change the physical and chemical environment in water. Albanese and pals have conducted extensive experiments to find out how this influences the brewing process. This process involves the basic ingredients of malt, hops, yeast, and water and has always been relatively simple. It takes place in four steps. The first is to create a sugary liquid called wort in which the starch from malted barley is converted to simpler sugars that are fermentable. In the second step, the wort is drained and the malted barley washed to extract as much of the fermentable sugars as possible, a process known as sparging. The wort is then boiled for an hour or so to remove water and concentrate the sugars. The boiling also kills off any enzymes involved in converting starch to sugar and boils away volatile chemicals that can ruin the mix, particularly the unpleasant-tasting dimethyl sulphide. Adding the hops at this stage gives the mixture its characteristic flavors. Finally, the mixture is cooled and the yeast added to start the fermentation process, which typically takes several days. This converts the sugars into alcohol, creating beer, which can then be bottled. Of course, the devil is in the detail. The malted barley must be milled in advance to increase its surface area. The wort must be kept at a certain temperature—usually between 50 and 78 °C—to help the enzymes break down starch. Combined with the boiling, this is an energy intensive process. It takes about 32 kilowatt-hours to make 100 liters of beer. (For comparison, a television rated at 100 watts left on for 10 hours uses a single kilowatt-hour.) So how does cavitation change all this? To find out, Albanese and co have built an entirely new kind of brewing facility that produces cavitation within the wort. 
They then conducted a range of beer-making experiments under different conditions to explore the potential advantages and disadvantages that cavitation introduces. The results make for interesting reading. The first advantage is that cavitation pulps malted barley and so removes the necessity for it to be milled in advance. “Dry milling of malts becomes irrelevant with the new installation, since malts are pulverized by the cavitational processes down to less than 100 µm in size within a few minutes,” say Albanese and co. This also increases the biodegradability of the spent malt, which is a waste product of the beer-making process. Cavitation also increases the rate at which starch passes from the pulverized malted barley into the wort. This process is so efficient that little if any starch is left in the malt at the end of the process. That has significant implications. It means that the process of sparging—the washing of the malt to remove trapped sugar and starch—becomes entirely unnecessary. With starch released more efficiently, the transformation of starch into simpler sugars can take place at lower temperatures. “The activation temperature of enzymes aimed at transforming starch into simple sugars and amino acids drops by about 35 °C, shortening the time needed for saccharification,” say the team. Cavitation also helps improve the efficiency of the chemical processes that usually occur during the conventional boiling of the wort and hop mix. Cavitation causes unpleasant volatile gases to degas quickly, denatures the enzymes in the wort, and allows hop flavors to mix in easily. That makes boiling entirely unnecessary. Indeed, this whole process can take place at around 78 °C, say Albanese and co. All that translates into significant energy savings. The team says its new brewing process used just 24 kilowatt-hours per 100 liters, some 30 percent less than a control experiment they also ran. And that’s before they optimize the process to prevent unnecessary heat dissipation. One thing the team does not consider, however, is the cost of the equipment necessary to create cavitation and the maintenance associated with it. Cavitation is famously damaging. The pressures and temperatures it produces eat away at the hardest steel. Just how this would influence costs isn’t clear but must surely be factored in somehow. Of course, the ultimate test is the product itself. In a series of tests, this brave team says that the resulting beer is just as good as the conventionally produced stuff. That’s something that will have to be independently verified by selfless individuals willing to put their own interests aside in the name of science. If objective observers agree that the resulting beer is good, cavitation looks set to have a major impact on the brewing industry. Indeed, it may turn out to be one of the biggest changes in brewing technology in decades, if not centuries.
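For readers who want the arithmetic spelled out, here is a small Python check of the energy figures quoted in the article. Note that the 25 percent computed against the generic 32 kWh benchmark differs from the roughly 30 percent the team reports against its own control run, whose exact value the article does not give.

```python
# Energy figures per 100 litres of beer, as quoted in the article.
conventional_kwh_per_100l = 32.0   # typical conventional brew
cavitation_kwh_per_100l = 24.0     # cavitation-assisted brew

saving_vs_benchmark = 1.0 - cavitation_kwh_per_100l / conventional_kwh_per_100l
print(f"saving vs the 32 kWh benchmark: {saving_vs_benchmark:.0%}")

# The article's television comparison: a 100 W set left on for 10 hours.
tv_kwh = 0.100 * 10  # kW x hours
print(f"100 W TV for 10 h = {tv_kwh:.1f} kWh")
```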

