Canberra, Australia

CSIRO PUBLISHING is an Australian science and technology publisher and the publishing branch of the Commonwealth Scientific and Industrial Research Organisation. It covers a range of scientific disciplines including agriculture, chemistry, plant and animal science, natural history and environmental management, and publishes research journals, books, magazines and email newsletters, as well as videos and multimedia content for students and training.



Patent
Csiro | Date: 2016-10-13

The present invention relates to processes for extracting lipid from vegetative plant parts such as leaves, stems, roots and tubers, and for producing industrial products such as hydrocarbon products from the lipids. Preferred industrial products include alkyl esters which may be blended with petroleum based fuels.


Patent
Csiro | Date: 2016-10-24

The present invention relates to processes for producing industrial products such as hydrocarbon products from non-polar lipids in a vegetative plant part. Preferred industrial products include alkyl esters which may be blended with petroleum based fuels.


Patent
Dow AgroSciences and Csiro | Date: 2016-09-20

This invention relates to methods for identifying wheat plants that have increased fructan/arabinoxylan. The methods use molecular markers to identify and select plants with increased fructan/arabinoxylan or to identify and deselect plants with decreased fructan/arabinoxylan. Wheat plants generated by the methods of the invention are also a feature of the invention.


Patent
Csiro | Date: 2016-06-29

The present invention provides silk proteins, as well as nucleic acids encoding these proteins. The present invention also provides recombinant cells and/or organisms which synthesize silk proteins. Silk proteins of the invention can be used for a variety of purposes such as in the manufacture of personal care products, plastics, textiles, and biomedical products.


Patent
Csiro | Date: 2014-11-28

The present invention relates to a method of identifying a subject with a Plasmodium infection. The present invention also relates to a method for monitoring a subject with a Plasmodium infection, for example, following treatment with an anti-malaria compound. Also provided are methods of identifying a compound to treat a Plasmodium infection.


The present invention relates to methods of synthesizing long-chain polyunsaturated fatty acids, especially eicosapentaenoic acid, docosapentaenoic acid and docosahexaenoic acid, in recombinant cells such as yeast or plant cells. Also provided are recombinant cells or plants which produce long-chain polyunsaturated fatty acids. Furthermore, the present invention relates to a group of new enzymes which possess desaturase or elongase activity that can be used in methods of synthesizing long-chain polyunsaturated fatty acids.


The present invention relates to methods of synthesizing long-chain polyunsaturated fatty acids, especially eicosapentaenoic acid, docosapentaenoic acid and docosahexaenoic acid, in recombinant cells such as yeast or plant cells. Also provided are recombinant cells or plants which produce long-chain polyunsaturated fatty acids. Furthermore, the present invention relates to a group of new enzymes which possess desaturase or elongase activity that can be used in methods of synthesizing long-chain polyunsaturated fatty acids.


Patent
Csiro | Date: 2015-05-19

A light-emitting display panel sub-pixel circuit, comprising: at least three switches; at least one capacitive element; at least one light-emitting element; a power line serving as a wire for connecting the at least three switches, the at least one capacitive element, and the at least one light-emitting element; an earth line; a scan line for selecting a sub-pixel for light emission; a data line for supplying data to the light-emitting element; and a sense line for detecting degradation of the light-emitting element, wherein the data corresponding to a predetermined luminous intensity are programmed on the basis of a voltage.


Patent
Csiro | Date: 2016-10-25

A discrete element method for modelling granular or particulate material, the method including a multiple grid search method wherein the multiple grid search method is a hierarchical grid search method, and wherein entities, such as particles and boundary elements, are allocated to cells of respective grids based on size. The search method further includes: (a) performing a search of cells in a first of the grid levels to determine pairs of entities which satisfy predetermined criteria to be included in a neighbour list for which both entities belong to the first grid level; (b) mapping each nonempty cell in the first grid level to each of the other grid levels, determining neighbouring cells in each of the other grid levels and determining all pairs of entities belonging to pair of levels that satisfy the predetermined criteria for inclusion in the neighbour list; and (c) repeating (a) and (b) for all grid levels.
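
To make the multi-level search concrete, the sketch below implements steps (a) to (c) in Python for circular particles in 2D. It is an illustrative reading of the abstract rather than the patented implementation: the level-assignment rule (finest grid whose cell can hold the particle), the cell sizes and the overlap-with-margin contact criterion are assumptions introduced for the example.

```python
# Minimal sketch of a hierarchical (multi-level) grid neighbour search for a
# discrete element method, loosely following steps (a)-(c) above. The level
# assignment rule, cell sizes and contact criterion are illustrative
# assumptions, not the patented algorithm.
import itertools
import math

class Entity:
    def __init__(self, eid, x, y, radius):
        self.eid, self.x, self.y, self.radius = eid, x, y, radius

def build_grids(entities, level_cell_sizes):
    """Allocate each entity to the finest level whose cell can contain it
    (level_cell_sizes ordered fine to coarse)."""
    grids = [dict() for _ in level_cell_sizes]          # one hash grid per level
    for e in entities:
        for lvl, cell in enumerate(level_cell_sizes):
            if 2 * e.radius <= cell:
                key = (int(e.x // cell), int(e.y // cell))
                grids[lvl].setdefault(key, []).append(e)
                break
    return grids

def close_pair(a, b, margin=0.0):
    """Predetermined criterion: bounding circles overlap within a margin."""
    gap = math.hypot(a.x - b.x, a.y - b.y) - (a.radius + b.radius)
    return gap <= margin

def neighbour_list(entities, level_cell_sizes, margin=0.0):
    grids = build_grids(entities, level_cell_sizes)
    pairs = set()
    for lvl, cell in enumerate(level_cell_sizes):
        # (a) search neighbouring cells within the same grid level
        for (cx, cy), members in grids[lvl].items():
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    others = grids[lvl].get((cx + dx, cy + dy), [])
                    for a, b in itertools.product(members, others):
                        if a.eid < b.eid and close_pair(a, b, margin):
                            pairs.add((a.eid, b.eid))
        # (b) map each non-empty cell onto every coarser level and test
        #     cross-level pairs against the neighbouring coarse cells
        for (cx, cy), members in grids[lvl].items():
            for coarser in range(lvl + 1, len(level_cell_sizes)):
                ccell = level_cell_sizes[coarser]
                ccx, ccy = int(cx * cell // ccell), int(cy * cell // ccell)
                for dx in (-1, 0, 1):
                    for dy in (-1, 0, 1):
                        others = grids[coarser].get((ccx + dx, ccy + dy), [])
                        for a, b in itertools.product(members, others):
                            if close_pair(a, b, margin):
                                pairs.add((min(a.eid, b.eid), max(a.eid, b.eid)))
    return pairs  # (c): the loops above repeat (a) and (b) for all grid levels
```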


A process of forming a thin film photoactive layer of a perovskite photoactive device comprising: applying at least one coating of a perovskite precursor solution and a polymer additive to a substrate, wherein the at least one perovskite precursor solution comprises at least one reaction constituent for forming at least one perovskite compound having the formula AMX3 dissolved in a coating solvent selected from at least one polar aprotic solvent, the polymer additive being soluble in said coating solvent, and in which A comprises an ammonium group or other nitrogen containing organic cation, M is selected from Pb, Sn, Ge, Ca, Sr, Cd, Cu, Ni, Mn, Co, Zn, Fe, Mg, Ba, Si, Ti, Bi, or In, and X is selected from at least one of F, Cl, Br or I.


Patent
Csiro | Date: 2017-01-25

The present invention relates to a compound selected from the group consisting of:


We consider the problem of finding a sparse solution for an underdetermined linear system of equations when the known parameters on both sides of the system are subject to perturbation. This problem is particularly relevant to reconstruction in fully-perturbed compressive-sensing setups where both the projected measurements of an unknown sparse vector and the knowledge of the associated projection matrix are perturbed due to noise, error, mismatch, etc. We propose a new iterative algorithm for tackling this problem. The proposed algorithm utilizes the proximal-gradient method to find a sparse total least-squares solution by minimizing an l1-regularized Rayleigh-quotient cost function. We determine the step-size of the algorithm at each iteration using an adaptive rule accompanied by backtracking line search to improve the algorithm's convergence speed and preserve its stability. The proposed algorithm is considerably faster than a popular previously-proposed algorithm, which employs the alternating-direction method and coordinate-descent iterations, as it requires significantly fewer computations to deliver the same accuracy. We demonstrate the effectiveness of the proposed algorithm via simulation results. © 2016 Elsevier B.V.
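
As a rough illustration of the approach described above, the sketch below runs proximal-gradient iterations with soft-thresholding on the l1-regularized Rayleigh-quotient cost f(x) = ||Ax - y||^2 / (1 + ||x||^2) + lam*||x||_1, which is the standard total least-squares quotient plus an l1 penalty. The backtracking rule shown is a generic sufficient-decrease test, not necessarily the adaptive step-size rule proposed in the paper, and all problem sizes and parameters are made up for the demo.

```python
# Minimal sketch of a proximal-gradient iteration for sparse total least
# squares: minimise f(x) = ||Ax - y||^2 / (1 + ||x||^2) + lam * ||x||_1.
# The backtracking below is a generic sufficient-decrease shrink, offered as
# an illustration rather than the adaptive step-size rule of the paper.
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (element-wise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def rayleigh_quotient(A, y, x):
    r = A @ x - y
    return float(r @ r) / (1.0 + float(x @ x))

def rayleigh_gradient(A, y, x):
    r = A @ x - y
    denom = 1.0 + float(x @ x)
    return (2.0 * (A.T @ r)) / denom - (2.0 * float(r @ r) / denom**2) * x

def sparse_tls_proximal_gradient(A, y, lam=0.05, step=1.0, beta=0.5,
                                 max_iter=500, tol=1e-8):
    x = np.zeros(A.shape[1])
    for _ in range(max_iter):
        g = rayleigh_gradient(A, y, x)
        f = rayleigh_quotient(A, y, x)
        t = step
        while True:                      # backtracking line search on the smooth part
            x_new = soft_threshold(x - t * g, t * lam)
            d = x_new - x
            if rayleigh_quotient(A, y, x_new) <= f + g @ d + (d @ d) / (2.0 * t):
                break
            t *= beta
        if np.linalg.norm(x_new - x) <= tol * max(1.0, np.linalg.norm(x)):
            return x_new
        x = x_new
    return x

# Tiny usage example on a fully perturbed compressive-sensing style system.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 100))
    x_true = np.zeros(100)
    x_true[[3, 17, 58]] = [1.0, -2.0, 1.5]
    y = (A + 0.01 * rng.standard_normal(A.shape)) @ x_true \
        + 0.01 * rng.standard_normal(40)
    x_hat = sparse_tls_proximal_gradient(A, y, lam=0.05)
    print("largest recovered entries:", np.argsort(-np.abs(x_hat))[:3])
```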


The sedimentary succession of the central Australian Amadeus Basin consists of Neoproterozoic to Carboniferous sedimentary rocks and contains shallow marine, subtidal carbonates of Middle to Late (Series 2 to Furongian) Cambrian age. A combination of sequence stratigraphy, geochemistry and mineralogy shows a transgressive 2nd order cyclicity deposited between ~511–490 Ma and a change from arid, low energy to humid, high energy depositional environments. This is reflected in an initially evaporitic sequence with upward decreasing halite and anhydrite abundance and a transition from oxygenated to anoxic conditions, reflected by the change in Fe mineral species from hematite to pyrite during transgression. Sequence boundaries of several 3rd order cycles, consisting of highstand systems tract (HST) carbonate rocks and lowstand systems tract (LST) siltstones, correlate with globally recognised sequence boundaries linked to the inferred eustatic sea level record for the upper two series of the Cambrian System. The carbon isotope record for this ~1400 m thick succession, in combination with biostratigraphic age correlation, allowed the identification of the globally recognised Steptoean Positive Carbon Isotope Excursion (SPICE), Drumian Carbon Isotope Excursion (DICE) and Redlichiid-Olenellid Extinction Carbon Isotope Excursion (ROECE). © 2017 Elsevier B.V.


Elevated atmospheric CO2 concentrations ([CO2]) cause direct changes in crop physiological processes (e.g. photosynthesis and stomatal conductance). To represent these CO2 responses, commonly used crop simulation models have been amended using simple and semi-complex representations of the processes involved. Yet, there is no standard approach to, and often poor documentation of, these developments. This study used a bottom-up approach (starting with the APSIM framework as a case study) to evaluate modelled responses in a consortium of commonly used crop models and to assess whether variation in responses reflects true uncertainty in our understanding or merely arbitrary choices by model developers. Diversity in simulated CO2 responses and limited validation were common among models, both within the APSIM framework and more generally. Whereas production responses show some consistency up to moderately high [CO2] (around 700 ppm), transpiration and stomatal responses vary more widely in nature and magnitude (e.g. a decrease in stomatal conductance varying between 35% and 90% among models was found for [CO2] doubling to 700 ppm). Most notably, nitrogen responses were found to be included in few crop models despite being commonly observed and critical for the simulation of photosynthetic acclimation, crop nutritional quality and carbon allocation. We suggest harmonization and consideration of more mechanistic concepts in particular subroutines, for example for the simulation of N dynamics, as a way to improve our predictive understanding of CO2 responses and capture secondary processes. Intercomparison studies could assist in this aim, provided that they go beyond simple output comparison and explicitly identify the representations and assumptions that are causal for intermodel differences. Additionally, validation and proper documentation of the representation of CO2 responses within models should be prioritized. © 2017 John Wiley & Sons Ltd.


Gras L.J., CSIRO | Keywood M., CSIRO
Atmospheric Chemistry and Physics | Year: 2017

Multi-decadal observations of aerosol microphysical properties from regionally representative sites can be used to challenge regional or global numerical models that simulate atmospheric aerosol. Presented here is an analysis of multi-decadal observations at Cape Grim (Australia) that characterise production and removal of the background marine aerosol in the Southern Ocean marine boundary layer (MBL) on both short-term weather-related and underlying seasonal scales. A trimodal aerosol distribution comprises Aitken nuclei (<100nm), cloud condensation nuclei (CCN)/accumulation (100-350nm) and coarse-particle (>350nm) modes, with the Aitken mode dominating number concentration. Whilst the integrated particle number in the MBL over the clean Southern Ocean is only weakly dependent on wind speed, the different modes in the aerosol size distribution vary in their relationship with wind speed. The balance between a positive wind dependence in the coarse mode and negative dependence in the accumulation/CCN mode leads to a relatively flat wind dependence in summer and moderately strong positive wind dependence in winter. The changeover in wind dependence of these two modes occurs in a very small size range at the mode intersection, indicative of differences in the balance of production and removal in the coarse and accumulation/CCN modes. Whilst a marine biological source of reduced sulfur appears to dominate CCN concentration over the summer months (December to February), other components contribute to CCN over the full annual cycle. Wind-generated coarse-mode sea salt is an important CCN component year round and is the second-most-important contributor to CCN from autumn through to mid-spring (March to November). A portion of the non-seasonally dependent contributor to CCN can clearly be attributed to wind-generated sea salt, with the remaining part potentially being attributed to long-range-transported material. Under conditions of greater supersaturation, as expected in more convective cyclonic systems and their associated fronts, Aitken mode particles become increasingly important as CCN. © 2017 The Author(s).
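
For readers unfamiliar with the mode decomposition used above, the short sketch below builds a number size distribution as a sum of three lognormal modes (Aitken, accumulation/CCN and coarse). The mode totals, median diameters and geometric standard deviations are placeholder values chosen only to illustrate the trimodal shape; they are not the Cape Grim measurements.

```python
# Illustrative sketch of a trimodal marine-aerosol number size distribution
# represented as a sum of lognormal modes (Aitken, accumulation/CCN, coarse).
# The mode parameters below are placeholders, not observed values.
import numpy as np

def lognormal_mode(d, n_total, d_median, sigma_g):
    """dN/dlog10(D) for a single lognormal mode (d and d_median in nm)."""
    return (n_total / (np.sqrt(2.0 * np.pi) * np.log10(sigma_g))
            * np.exp(-(np.log10(d / d_median) ** 2)
                     / (2.0 * np.log10(sigma_g) ** 2)))

diameters = np.logspace(1, 4, 300)          # 10 nm to 10 um
modes = [                                    # (N [cm^-3], median D [nm], sigma_g)
    (300.0, 40.0, 1.5),                      # Aitken (< ~100 nm)
    (100.0, 160.0, 1.6),                     # accumulation / CCN (~100-350 nm)
    (5.0, 600.0, 2.0),                       # coarse, largely sea salt (> ~350 nm)
]
dNdlogD = sum(lognormal_mode(diameters, *m) for m in modes)
print("total number concentration proxy:",
      round(float(np.trapz(dNdlogD, np.log10(diameters))), 1), "cm^-3")
```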


Hunt B.G., CSIRO | Dix M.R., CSIRO
Climate Dynamics | Year: 2017

Rainfall prediction for a year in advance would be immensely valuable for numerous activities, if it were achievable. It is shown that in any one year the chance of making a correct prediction is about 50%, but there is no way of determining a priori whether such a prediction is correct. This results primarily because annual mean time series of rainfall over most of the globe consist of white noise, i.e. they are random/stochastic. This outcome is shown to exist for both observations and output from a coupled global climatic model, based on autoregressive analysis. The major forcing mechanism for rainfall anomalies over much of the globe is the El Niño/Southern Oscillation, but it explains only a modest part of the variance in the rainfall. Much of the remaining variance is attributed to internal climatic variability, and it is shown that this imposes a major limitation on rainfall predictability. © 2017 Springer-Verlag Berlin Heidelberg
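
The white-noise conclusion rests on autoregressive analysis of annual rainfall series. A minimal version of that kind of check is sketched below: estimate the lag-1 autocorrelation and compare it with the approximate 95% bound for a white-noise series of the same length. The rainfall series here is synthetic; station or model output would be substituted in practice.

```python
# Minimal sketch of the autoregressive check behind the "white noise" claim:
# estimate the lag-1 autocorrelation of an annual rainfall series and compare
# it with the approximate 95% bound for white noise (+/- 1.96/sqrt(N)).
# The series below is synthetic; real station or model data would replace it.
import numpy as np

def lag1_autocorrelation(series):
    x = np.asarray(series, dtype=float)
    x = x - x.mean()
    return float((x[:-1] @ x[1:]) / (x @ x))

rng = np.random.default_rng(1)
annual_rainfall = 600.0 + 120.0 * rng.standard_normal(100)   # 100 synthetic years, mm

r1 = lag1_autocorrelation(annual_rainfall)
bound = 1.96 / np.sqrt(len(annual_rainfall))
print(f"lag-1 autocorrelation r1 = {r1:+.3f}, white-noise bound = +/-{bound:.3f}")
print("consistent with white noise" if abs(r1) < bound else "significant persistence")
```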


Liu D., CSIRO
IEEE International Conference on Cloud Computing, CLOUD | Year: 2017

Encrypted data stored in clouds usually cannot be processed. To address this limitation, we design a practically efficient fully homomorphic encryption (FHE) scheme, which allows encrypted data to be directly processed by the clouds. Our scheme assumes that the clouds are curious to derive information from encrypted data, but not performing any nondeterministic brute-force attacks. We implemented a prototype of our scheme and evaluated its concrete performance by evaluating high-degree polynomials over encrypted data and calculating inner product of high-dimensional encrypted vectors. © 2016 IEEE.
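
The abstract's central idea is that ciphertexts can be operated on without decryption. The toy below illustrates that idea with a textbook Paillier-style additively homomorphic scheme; it is not the fully homomorphic scheme of the paper, and the tiny hard-coded primes make it insecure, but it shows the basic pattern of combining encrypted values without ever seeing the plaintexts.

```python
# Toy additively homomorphic (Paillier-style) demonstration of the general
# idea that ciphertexts can be combined without decryption. This is NOT the
# FHE scheme of the paper, and the small hard-coded primes are insecure; it
# only illustrates "processing encrypted data" in the additive case.
import math
import random

p, q = 1_000_003, 1_000_033          # toy primes; a real scheme uses ~1024-bit primes
n = p * q
n_sq = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow((pow(g, lam, n_sq) - 1) // n, -1, n)   # inverse of L(g^lam mod n^2) mod n

def encrypt(m):
    while True:
        r = random.randrange(1, n)
        if math.gcd(r, n) == 1:
            break
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c):
    return ((pow(c, lam, n_sq) - 1) // n * mu) % n

def add_encrypted(c1, c2):
    """Homomorphic addition: multiplying ciphertexts adds the plaintexts."""
    return (c1 * c2) % n_sq

a, b = 1234, 5678
c_sum = add_encrypted(encrypt(a), encrypt(b))
assert decrypt(c_sum) == a + b
print("decrypted sum of two ciphertexts:", decrypt(c_sum))
```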


Riveret R., CSIRO
Proceedings - 2016 IEEE 28th International Conference on Tools with Artificial Intelligence, ICTAI 2016 | Year: 2016

In this paper, we investigate the problem of finding argumentation graphs consistent with some observed statement labellings. We consider a general abstract framework, where the structure of arguments is left unspecified, and we focus on particular grounded argument labellings where arguments can be omitted. The specification of such grounded labellings, the Principle of Multiple Explanations and the Principle of Parsimony lead us to a simple and efficient anytime algorithm to induce 'on the fly' all the 'argument-parsimonious' argumentation graphs consistent with an input stream of statement labellings. © 2016 IEEE.


Held A., CSIRO
Proceedings of the International Astronautical Congress, IAC | Year: 2016

The Committee on Earth Observation Satellites (CEOS) was established in September 1984 in response to a recommendation from a Panel of Experts on Remote Sensing from Space that was set up under the aegis of the G7 Economic Summit of Industrial Nations Working Group on Growth, Technology and Employment. This panel recognized the multidisciplinary nature of space-based Earth observations and the value of coordinating international Earth observation efforts to benefit society. Accordingly, the original function of CEOS was to coordinate and harmonize Earth observations to make it easier for the user community to access and utilize data. CEOS initially focused on interoperability, common data formats, the intercalibration of instruments, and common validation and intercomparison of products. Since the inception of CEOS, the circumstances surrounding the collection and use of space-based Earth observations have changed. The number of Earth-observing satellites has vastly increased. As of January 2016, CEOS Agencies operate over 130 Earth observing space missions. Onboard instruments are more complex, and are capable of collecting new types of data in ever-growing volumes. The user community has expanded and become more diverse, as different data types become available and new applications for Earth observations are developed. Users have become more organized, forming several international bodies that coordinate and levy Earth observation requirements. In response to this changing environment, CEOS has also evolved, becoming more complex, and expanding the number and scope of its activities. In addition to its original charge, CEOS now focuses on validated requirements levied by external organizations, works closely with other satellite coordinating bodies (e.g., the Coordination Group for Meteorological Satellites [CGMS]), and continues its role as the primary forum for international coordination of space-based Earth observations. CEOS has played an influential role in the establishment and ongoing development of the Group on Earth Observations (GEO) and the Global Earth Observation System of Systems (GEOSS). Indeed, CEOS coordinates the GEOSS space segment. In 2016, CEOS will have a particular focus on coordinating space agencies to support implementation of the three big global agendas from 2016: the Global Agenda for Sustainable Development, the Sendai Framework for Disaster Risk Reduction and the Paris Climate Agreement. The current chair of CEOS, or his representative, will provide an update on key 2016 CEOS activities.


Patent
Csiro | Date: 2017-02-22

A process for producing a preform by cold spray deposition, the process comprising: providing a starter substrate about a preform axis of rotation, the starter substrate having at least one axial end having a substantially flat deposition surface; rotating the starter substrate about the preform axis of rotation; depositing material onto the deposition surface of the starter substrate using cold spray deposition to form a product deposition surface, the cold spray deposition process including a cold spray applicator through which the material is sprayed onto the deposition surface; successively depositing material onto a respective top product deposition surface using cold spray deposition to form successive deposition layers of the material; and moving at least one of: the cold spray applicator; or the starter substrate and preform product, relative to the other in an axial direction along the preform axis of rotation to maintain a constant distance between the cold spray applicator and the top product deposition surface, thereby forming a preform product of a selected length, wherein the cold spray applicator is moved in a plane perpendicular to the preform axis of rotation so as to deposit material as a substantially flat surface on each respective deposition surface of the starter substrate or product deposition surface of the preform product.


News Article | April 17, 2017
Site: www.greencarcongress.com

Peloton Technology, a connected and automated vehicle technology company dedicated to improving the safety and efficiency of freight transportation, closed a $60-million Series B funding round. Omnitracs, a global pioneer of fleet management solutions, led the round, which also included existing investors Intel Capital, DENSO International America, BP Ventures, Lockheed Martin, Nokia Growth Partners, UPS Strategic Enterprise Fund, Volvo Group, Sand Hill Angels, Band of Angels and Birchmere Ventures along with new investors B37 Ventures, Mitsui USA, Okaya, Schlumberger, US Venture and Breakthrough Fuel. Peloton has raised a total of $78 million since inception. Series B funds will fuel Peloton’s growth plans, including the rollout of the first commercial two-truck driver-assistive platooning system later in 2017, as well as development of more advanced automation solutions. Several US-based Fortune 500 fleets plan to trial the system within the next year. At the same time, Peloton and Series B lead investor Omnitracs will expand cross-fleet platooning opportunities by integrating the system with the Omnitracs Intelligent Vehicle Gateway telematics platform and developing new joint telematics solutions. Boosted by the new investment, Peloton is accelerating vehicle integration projects with several truck OEMs, including Volvo Trucks North America, a part of investor Volvo Group, as well as Tier 1 brake system and connected-vehicle suppliers. The Peloton investor mix includes a variety of leading global companies, enabling Peloton to collaborate extensively to bring its solutions to international markets. Intel, a co-leader on Peloton’s 2015 Series A round, announced in November 2016 that it will invest $250 million in automated driving solutions.


News Article | May 4, 2017
Site: www.theguardian.com

People don’t talk about how global warming has stopped, paused or slowed down all that much any more – three consecutive hottest years on record will tend to do that to a flaky meme. But there was a time a few years ago when you couldn’t open your news feed without being told global warming had stopped by some conservative columnist, climate science denier or one of those people who spend their waking hours writing comments on stories like this. The issue at hand was one of the multiple measurements used by scientists to monitor the state of the planet – the globally averaged temperature. Depending on which particular set of data you looked at, and how you calculated trends, there was an argument that temperature rises had slowed over a period of about 15 years. When deniers and contrarians talked about this “slowdown” the implication was that somehow, the laws of physics had suddenly changed and loading the atmosphere with CO2 might not be a problem any more. As I argued three years ago, this global warming pause was never really a thing. Despite all the other indicators of global warming showing business as usual – sea-level rise, temperature extremes, glacier melt, species movements, ocean heating, permafrost melt – the unhealthy fixation on one aspect, the average temperature of the globe, stuck firm. But scientists reacted to the public commentary in the only way they know how. They started to study this “pause” to find out what might be going on. They published scores and scores of papers in academic journals. This, in turn, fed a narrative in the public eye that the fundamentals of human-caused climate change were in doubt when, in fact, none of the credible studies found this to be the case. Some argued the pause did not exist at all; others looked at the role of the oceans, the trade winds, greenhouse gases, volcanic eruptions or even the way ship thermometers recorded the water temperatures (and then how scientists accounted for the different methods). But many scientists agreed too that the wobble in the temperature was well within the bounds of what’s called “decadal variability” – the natural ups and downs in the climate system that are superimposed on top of the warming caused by burning fossil fuels. As the contrarian talking point went, the existence of different studies coming to different conclusions was proof enough that policy makers should wait rather than act. In one paper that appeared in the journal Bulletin of the American Meteorological Society, three researchers argued that the scientific community had unwittingly been distracted by the claims of global warming contrarians. Now a new study in the leading journal Nature has tried to reconcile the differences between the various pause studies and make suggestions about what went wrong. There was not a clear and agreed definition of what a pause was and if it was consequential. Scientists didn’t always communicate nuances clearly. “In a time coinciding with high-level political negotiations on preventing climate change,” write the authors from Switzerland’s Institute for Atmospheric and Climate Science, “sceptical media and politicians were using the apparent lack of warming to downplay the importance of climate change. It is easy to paint a controversial picture, but as often the devil is in the detail.” Just to be clear, this was never about whether or not the threat from global warming caused by burning fossil fuels was in doubt for a while a few years ago. It wasn’t.
Indeed, the Nature paper concludes that out of all the studies, the community is “more confident than ever” that human activity is now dominating the warming of the planet. But I’ve asked several leading climate scientists for their take. Dr James Risbey, a senior research scientist at CSIRO who has co-written an accompanying commentary in Nature, told me: “It never hurts to go back and see how we did.” But he said: “A short-term trend was too blunt an instrument to speak directly to our confidence in climate change anyway, but its overall relevance is that it helped us to explain the bumps along the way.” The Penn State University climate scientist Prof Michael Mann (he of the hockey stick graph) expected the Nature paper would gain attention because of the high profile of the journal and that it was talking about the “faux pause”. But in an email he wrote there were no real “bombshell” findings in the Nature paper. “The work of many groups, including our own, has shown that [climate] models and observations are consistent in terms of long-term warming, and that this warming – and recent extreme warmth – can only be explained by human activity, namely the burning of fossil fuels,” he said. Prof Stefan Rahmstorf, of the Potsdam Institute for Climate Impact Research, said: “I think the main lesson to be learnt from this discussion, by scientists, the media and the public alike, is to be highly sceptical of narratives pushed by so-called climate sceptics.” Rahmstorf was a co-author on a paper in the journal Environmental Research Letters in April which found neither the claimed “pause” nor the recent spikes in global temperature were outside the bounds of how the climate should be expected to react when it is loaded with extra greenhouse gases. He added: “Global temperature is a noisy data set due to natural short-term variability, and the debate was all about the noise and not about any meaningful change in the global warming signal. Let me add that understanding the precise nature of this short-term variability is of course a very interesting science question, and work done on the so-called ‘hiatus’ has certainly improved our understanding of that a lot. “Incidentally, when in the journal Science in 2007 we pointed to the exceptionally large warming trend of the preceding 16 years, which was at the upper end of the [climate] model range, nobody cared, because there is no powerful lobby trying to exaggerate global warming. “And of course in our paper we named natural intrinsic variability as the most likely reason. But when a trend at the lower end of the model range occurs it suddenly became a big issue of public debate, because that was pushed by the fossil fuel climate sceptics’ lobby. There is an interesting double standard there.” Prof Matt England, of the University of New South Wales Climate Change Research Centre, is another scientist to have carried out research in response to the “hiatus” and found that a change in the strength of trade winds was also a factor in holding temperatures down. “Yes, the post-2000 slowdown was totally real,” he said. “Just like the acceleration in surface warming between 1980 and 2000 was totally real. It’s called decadal variability, and it’s superimposed on the long-term warming trends. Studying the physical mechanisms giving rise to decadal variability is an important component of the work we do, and will continue regardless of definitions of surface warming slowdowns and accelerations.” So what to make of it all?
The short version is that global warming didn’t stop, scientists knew global temperatures would wobble around and climate scientists aren’t always the best communicators. But also, to paraphrase Stefan Rahmstorf, climate sceptics are not really sceptics at all.
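
To illustrate the "noise superimposed on a trend" point that runs through the article, the synthetic sketch below generates a series with a fixed long-term warming trend plus year-to-year variability and then fits trends to every 15-year window; some windows show much weaker warming than the underlying trend even though that trend never changes. The trend and noise amplitudes are arbitrary and are not fitted to any observational dataset.

```python
# Minimal synthetic illustration of decadal variability on top of a trend:
# with steady underlying warming plus year-to-year noise, short windows can
# show trends well below (or above) the long-term rate. The numbers here are
# arbitrary assumptions, not observations.
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(1970, 2018)
true_trend = 0.017                                     # degC per year, assumed
temps = true_trend * (years - years[0]) + 0.15 * rng.standard_normal(years.size)

window = 15
trends = [np.polyfit(years[i:i + window], temps[i:i + window], 1)[0]
          for i in range(years.size - window + 1)]
print(f"assumed long-term trend: {true_trend:.3f} degC/yr")
print(f"15-year window trends range from {min(trends):.3f} "
      f"to {max(trends):.3f} degC/yr")
```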


News Article | April 17, 2017
Site: www.greencarcongress.com

As the most abundant gas in Earth’s atmosphere, nitrogen has been an attractive option as a source of renewable energy. But nitrogen gas—which consists of two nitrogen atoms held together by a strong, triple covalent bond—doesn’t break apart under normal conditions, presenting a challenge to scientists who want to transfer the chemical energy of the bond into electricity. Now, researchers in China have developed a rechargeable lithium-nitrogen (Li-N2) battery with the proposed reversible reaction of 6Li + N2 ⇌ 2Li3N. The assembled N2 fixation battery system, consisting of a Li anode, ether-based electrolyte, and a carbon cloth cathode, shows a promising electrochemical faradaic efficiency (59%). The “proof-of-concept” design, described in an open-access paper in the journal Chem, works by reversing the chemical reaction that powers existing lithium-nitrogen batteries. Instead of generating energy from the breakdown of lithium nitride (2Li3N) into lithium and nitrogen gas, the researchers’ battery prototype runs on atmospheric nitrogen in ambient conditions and reacts with lithium to form lithium nitride. Its energy output is brief but comparable to that of other lithium-metal batteries. Although it constitutes about 78% of Earth’s atmosphere, N2 in its molecular form is unusable in most organisms because of its strong nonpolar N≡N covalent triple-bond energy, negative electron affinity, high ionization energy, and so on. In terms of energy efficiency, the honorable Haber-Bosch process, which was put forward more than 100 years ago, is the most efficient process for producing the needed N fertilizers from atmospheric N2 in industrial processes. However, the energy-intensive Haber-Bosch process is inevitably associated with major environmental concerns under high temperature and pressure, leaving almost no room for further improvement by industry optimization. … Inspired by rechargeable metal-gas batteries such as Li-O2, Li-CO2, Li-SO2, Al-CO2, and Na-CO2 (which have attracted much attention because of their high specific energy density and ability to reduce gas constituents), research on Li-N2 batteries has not seen any major breakthroughs yet. Although Li-N2 batteries have never been demonstrated in rechargeable conditions, the chemical process is similar to that of the previously mentioned Li-gas systems. During discharging reactions, the injected N2 molecules accept electrons from the cathode surface, and the activated N2 molecules subsequently combine with Li ions to form Li-containing solid discharge products. From the results of theoretical calculations, the proposed Li-N2 batteries show an energy density of 1,248 Wh kg−1, which is comparable to that of rechargeable Li-SO2 and Li-CO2 batteries. The research team demonstrated that a rechargeable Li-N2 battery is possible under room temperature and atmospheric pressure with the reversible battery reaction 6Li + N2 ⇌ 2Li3N. The team investigated the use of Ru-CC and ZrO2-CC composite cathodes to improve the N2 fixation efficiency. Li-N2 batteries with catalyst cathodes showed higher fixation efficiency than pristine CC cathodes.
This promising research on a nitrogen fixation battery system not only provides fundamental and technological progress in the energy storage system but also creates an advanced N2/Li3N (nitrogen gas/lithium nitride) cycle for a reversible nitrogen fixation process. The work is still at the initial stage. More intensive efforts should be devoted to developing the battery systems. —senior author Xin-Bo Zhang, of the Changchun Institute of Applied Chemistry, part of the Chinese Academy of Sciences. This work was financially supported by the Ministry of Science and Technology of China and the National Natural Science Foundation of China.


News Article | April 26, 2017
Site: www.theguardian.com

In 2015 a US survey found that LGBTIQ scientists felt more accepted in their workplaces than their peers in other professions did. The Queer in Stem survey, published in the Journal of Homosexuality, surveyed 1,400 LGBTIQ workers in science, technology, engineering and mathematics. They found respondents in scientific fields that had a high proportion of women were more likely to be out to their colleagues than those who worked in male-intensive disciplines. This is heartening news as it’s not necessarily that way in most Australian workplaces. Last year the Australian Workplace Equality Index found that nearly half of LGBTIQ Australians hide their sexual identity at work. The report also found many LGBTIQ people have experienced verbal or physical homophobic abuse in the workplace. Discrimination still takes place in Stem workplaces around the world. The 2016 survey LGBTIQ Climate in Physics, published by the American Physical Society, found that more than one in five physicists from sexual and gender minorities in the United States reported having been excluded, intimidated or harassed at work because of their sexual identity. Transgender physicists and physics students faced the most hostile environments, while women experienced harassment, intimidation and exclusion at three times the rate of men. Despite apparently progressive attitudes, being gay in Stem fields can be difficult. For me, it was a bit of a miracle I made it as an astrophysicist at all. When I went to enrol in high school in the UK, I was told I would be forced to wear a skirt. My reaction was akin to that of an 11-year-old boy. Dressing up in a skirt wasn’t an option for me, so I didn’t attend another day of school after the age of 11. My parents were accommodating and I taught myself at home, but not everyone is fortunate to have that amount of flexibility and autonomy in their education. I came out as gay aged 17. It was 1997 – the start of more enlightened times in the UK. Despite this, my girlfriend’s mother wanted us to keep our relationship a secret in case she got sacked from her job as a college lecturer. I was working at the time as a part-time nanny, but my employer (who was a lovely person) asked me to keep my relationship a secret in case she lost custody of her children. At university, I volunteered my time as the student union’s lesbian, gay, bisexual and transgender officer, providing frontline support to students. It was an uplifting experience to listen and provide a social support network to fellow students coming out. Through this work I made many lifelong friends but I also copped plenty of abuse. One day I came into the office and discovered a chilling personal death threat on my voicemail. I have been intimidated and belittled many times owing to my sexuality and gender but, frankly, insults from individuals are the easiest type of flak to take. It’s the campaigns by well-funded groups to curb human rights that scar deeply. Campaigns to fight an unequal age of consent, a ban from the armed forces, a ban on blood donation, a ban on adoption and, more recently, a ban on same-sex marriage in the UK were successful, but not without a measure of blood, sweat and tears. And there’s more work to be done in Australia. So, what does this have to do with workplaces?
Given that so many LGBTIQ people are likely to face barriers to inclusion in their jobs, this affects their mental health (LGBTIQ people are three times more likely to experience depression compared with the broader population) and it also has a detrimental effect on engagement, which is closely linked with productivity. Having witnessed this problem in several Stem working environments around the world, I wanted to help tackle it. In 2014 I led a team of Astronomical Society of Australia members to launch the Pleiades awards – a framework for astronomy organisations to engage in self-reflection and improvement of their workplace culture and practices. The project has led to many positive actions improving our professional community, for instance every workplace in Australia for astronomers now has a diversity and inclusion committee. And these have introduced initiatives including mentoring programs for students, guidelines for the prevention and reporting of harassment at conferences and scholarships to enable researchers who have returned from parental leave to host a conference and “get back into the game”. This has emboldened astronomers to open up at the Astronomical Society of Australia’s annual conferences on inclusion and diversity to discuss hard issues including race, cultural and linguistic background, ability, health, sexual and gender expression and how these play out in our professional community. Whereas 10 years ago I was speaking at meetings about women in Stem, I’m now as likely to be talking about intersectional issues faced by LGBTIQ scientists and, happily, I am also more likely to share the stage with people from different cultural backgrounds speaking their truths. Astronomy is slowly improving its listening skills. At my workplace, the CSIRO, we are working hard to improve the way things are done. In the astronomy and space science business unit, we’re undergoing a deep period of self-scrutiny and improvement called the Culture Project. In parallel, our diversity and inclusion group is working alongside the human resources team on positive changes in the areas of hiring, training, flexible working arrangements and facilities to improve the workplace for all. There are improvements across the organisation, too. Staff run an active LGBTIQ network and are setting up an LGBTIQ Ally network, to equip people with the skills and knowledge required to effectively support LGBTIQ employees. We are taking part in a pilot of the national Science and Gender Equity program, which employs an evidence-based approach to identify areas for improvement in equity and inclusion. The organisational will is there but complacency has absolutely no place in the process. We must push on. In our global economy and with our truly global workforce within Stem, we must work together to make things better. Playing a positive and active role in these activities shows our fellow humans that we take time to listen to their experiences, and respect and acknowledge the different path they have taken in life from us. In our fractured world it’s time we listened more and judged less. Take the time to stand together and support each other. It’s basic human kinship. And I’m happy to do my part.


US-based biotech startup Amfora and CSIRO (Commonwealth Scientific and Industrial Research Organisation, the federal government agency for scientific research in Australia) signed an agreement to advance development and commercialization of technology to produce oil in the leaves and stems of plants as well as the seeds. Innovation Leader with CSIRO Agriculture and Food, Allan Green, said that this was the first of many applications of the technology, which can be used to produce energy-rich feed for livestock as well as for human food, biofuels and industrial uses. Previously it has only been possible to extract oil from the oil-rich seeds and fruits of some specialized plants, such as canola, soybean, sunflower, coconut and oil palm. What we have been able to do is switch on this high-level oil production in vegetative tissue, such as in stems and leaves, as well. In some plants, the research team has been able to get around 35% oil content into vegetative tissue—the same amount as in many oil seed crops. Amfora will use the technology to develop oil content in the vegetative tissue of corn and sorghum, meaning they can market a feed for dairy farmers that does not require them to purchase additional oils, such as tallow or cotton seed, to supplement feeds. Dairy cattle require around 7% fat in their diet to produce milk. If their feed already contains this fat in the form of oil then this means less agricultural land is needed to produce feed and fewer greenhouse gas emissions are produced from feed production. The agreement with Amfora is the first major application for the high oil technology. It provides a direct path to market as the oil does not need to be extracted from the leaves before it is fed to cattle. Future applications, such as the production of industrial oils and bio-based diesel, will require further industrial supply chain development to customise techniques for extracting the oil and converting it to suitable products. CSIRO granted Amfora a worldwide, exclusive license to its technology for use in the development of specified forage crops. CSIRO also participated in Amfora’s Series A financing along with Spruce Capital Partners, a venture capital firm based in San Francisco, California and co-manager of the MLS Fund II.


News Article | May 8, 2017
Site: phys.org

Their experiment was the first to use a newly installed x-ray detector, called Maia, mounted at NSLS-II's Submicron Resolution X-Ray Spectroscopy (SRX) beamline. Scientists from around the world come to SRX to create high-definition images of mineral deposits, aerosols, algae—just about anything they need to examine with millionth-of-a-meter resolution. Maia, developed by a collaboration between NSLS-II, Brookhaven's Instrumentation Division and Australia's Commonwealth Scientific and Industrial Research Organization (CSIRO), can scan centimeter-scale sample areas at micron scale resolution in just a few hours—a process that used to take weeks. "The Maia detector is a game-changer," said Juergen Thieme, lead scientist at the SRX beamline. "Milliseconds per image pixel instead of seconds is a huge difference." SRX beamline users now have time to gather detailed data about larger areas, rather than choosing a few zones to focus on. This greatly enhances the chance to capture rare "needle in a haystack" clues to ore forming processes, for example. "This is important when you are trying to publish a paper," said Thieme. "Editors want to make sure that your claim is based on many examples and not one random event." "We've already gathered enough data for one, if not two papers," said Margaux Le Vaillant, one of the visiting users from CSIRO and principal investigator for this experiment. Collaborator Giada Iacono Marziano of the French National Center for Scientific Research added, "Because we can now look at a larger image in detail, we might see things—like certain elemental associations—that we didn't predict." These kinds of surprises pose unexpected questions to scientists, pushing their research in new directions. Siddons and his collaborators at Brookhaven Lab and CSIRO have provided Maia detectors to synchrotron light sources around the world—CHESS at Cornell University in New York, PETRA-III at the DESY laboratory in Hamburg, Germany, and the Australian Synchrotron in Melbourne. The detector at SRX offers the advantage of using beams from NSLS-II, the brightest light source of its kind in the world. When scientists shine the x-ray beams at samples, they excite the material's atoms. As the atoms relax back to their original state they fluoresce, emitting x-ray light that the detector picks up. Different chemical elements will emit different characteristic wavelengths of light, so this x-ray fluorescence mapping is a kind of chemical fingerprinting, allowing the detector to create images of the sample's chemical makeup. The Maia detector has several features that help it map samples at high speeds and in fine detail. "Maia doesn't 'stop and measure' like other detectors," said physicist Pete Siddons, who led Brookhaven's half of the project. Most detectors work in steps, analyzing each spot on a sample one at a time, he explained, but the Maia detector scans continuously. Siddons' team has programmed Maia with a process called dynamic analysis to pick apart the x-ray spectral data collected and resolve where different elements are present. Maia's analysis systems also make it possible for scientists to watch images of their samples appear on the computer screen in real-time as Maia scans. If samples are very similar, Maia will recycle the dynamic analysis algorithms it used to create multi-element images from the first sample's fluorescence signals to build the subsequent sample's images in real-time, without computational lag. 
Part of Maia's speed is also attributable to the 384 tiny photon-sensing detector elements that make up the large detector. This large grid of sensors can pick up more re-emitted x-rays than standard detectors, which typically use less than 10 elements. Siddons' instrumentation team designed special readout chips to deal with the large number of sensors and allow for efficient detection. The 20-by-20 grid of detectors has a hole in the middle, but that's intentional, Siddons explained. "The hole lets us put the detector much closer to the sample," Siddons said. Rather than placing the sample in front of the x-ray beam and the detector off to the side, SRX beamline scientists have aligned the beam, sample, and detector so that the x-ray beam shines through the hole to reach the sample. With this arrangement, the detector covers a wide angle and captures a large fraction of fluoresced x-rays. That sensitivity allows researchers to scan faster, which can be used either to save time or to cut back on the intensity of x-rays striking the sample, reducing any damage the rays might cause. Siddons noted that the team is currently developing new readout chips for the detector, and incorporating a new type of sensor, called a silicon drift detector array. Together these will heighten the detector's ability to distinguish between photons of similar energy, unfolding detail in complex spectra and making for even more accurate chemical maps.


News Article | March 30, 2017
Site: www.techtimes.com

Scientists have discovered that a diet with high amounts of the short-chain fatty acids acetate and butyrate provides positive effects on the immune system, protecting subjects against juvenile (type 1) diabetes. The specialized diet was developed by CSIRO and Monash University researchers who discovered that starches found in numerous types of food including fruits and vegetables can resist digestion. Instead of being digested, some of them pass through to the colon, where they are broken down by microbiota (gut bacteria). This process of starch fragmentation creates acetate and butyrate which, working together, can completely protect against type 1 diabetes. The findings were presented and received with a lot of interest at the International Congress of Immunology in Melbourne in 2016. The results of the study were published, March 27, in the journal Nature Immunology. According to the researchers, the study underlines how natural approaches, starting with special diets and the regulation of gut bacteria, could treat or prevent a series of autoimmune diseases. "Each diet provided a high degree of protection from diabetes, even when administered after breakdown of immune-tolerance. Feeding mice a combined acetate- and butyrate-yielding diet provided complete protection, which suggested that acetate and butyrate might operate through distinct mechanisms," noted the study. As part of the research, the scientists employed materials that people can digest — all of which were composed of natural products, as starches that resist human digestion are perfectly natural in a person's diet. As a result of this starch resistance, the body releases beneficial metabolites, which the researchers described as "superfood". Professor Charles Mackay, the lead researcher, noted that the diet was not solely about eating foods rich in fiber or just vegetables, but consisted of special foods which follow an equally special process. This type of diet would have to be prescribed by a nutritionist, a clinician or a dietitian, and it should not be self-administered. After the positive results of the research, the scientists wish to get the necessary funding to start a clinical trial based on the conclusions of this study. Both teams who worked on the study and other specialists in Australia are making efforts to expand their research and better understand the effects of this diet on obesity, as well as other inflammatory diseases. Among these, type 2 diabetes, cardiovascular diseases, food allergies, asthma and Inflammatory Bowel Disease are a priority. Currently, more than 29 million people in the United States suffer from either type 1 or type 2 diabetes. Of these, only 21 million people are diagnosed, which means that 27.8 percent of the people who suffer from either type of diabetes are currently undiagnosed. Of the total number of people who suffer from diabetes, between 5 and 10 percent have type 1 of the disease. "However, because type 1 diabetes accounts for approximately 5 percent of all diagnosed cases of diabetes among adults, trends documented in the surveillance system may not be reflective of trends in type 1 diabetes," the CDC notes.


In the 2015 Paris climate agreement, 195 nations committed to limit global warming to two degrees above pre-industrial levels. But some, like Eelco Rohling, professor of ocean and climate change at the Australian National University’s research school of earth sciences, now argue that this target cannot be achieved unless ways to remove huge amounts of carbon dioxide from the atmosphere are found, and emissions are slashed. This is where negative emissions technologies come in. The term covers everything from reforestation projects to seeding the stratosphere with sulphates or fertilising the ocean with iron filings. It’s controversial – not least because of the chequered history of geoengineering-type projects, but also because of concerns it will grant governments and industry a licence to continue with business as usual. But many argue we no longer have a choice. “Most things are not applied yet on larger scales but we have a pretty good feeling of things that will work and we can quantify roughly how much carbon we should be able to remove from the atmosphere with them,” says Rohling. The scale of the task is staggering, says Dr Pep Canadell, from the global carbon project at CSIRO. “The models are basically asking for removing carbon dioxide from the atmosphere which will be equivalent of one-quarter of all carbon emissions at present,” he says. This amounts to about 10 billion tonnes of carbon dioxide removed from the atmosphere and disposed of each year. The least controversial method of doing this is deceptively simple: plant more trees. “We have lost a lot of density of carbon in the landscapes because of deforestation and degradation. We have depleted carbon in the soils in all the problem areas of the world,” Canadell says. “What are the opportunities to bring some of this carbon back?” Again, the scale of reforestation efforts needed to make a dent in atmospheric carbon dioxide is substantial. “We would need as many as three Indias worth of land globally – and good quality land, not marginal land,” Canadell says. Reforestation also needs enough water, and needs to be done in such a way that it enriches the soil and ecosystems rather than depleting them. The fact that so many soils are carbon-depleted by intensive agriculture offers a way to tackle two environmental challenges at the same time. Biochar is a form of charcoal produced by heating plant material in the absence of oxygen. Agricultural waste, which would otherwise be a major source of greenhouse gas emissions if burnt, could instead be turned into a biochar – a process that produces more energy than it consumes – and the biochar could then be used to enrich agricultural soils with carbon. Research suggests that biochars not only boost crop yields, but could lock away carbon for several thousand years. Another approach designed to lock away carbon while also helping depleted soils is enhanced weathering. Olivine refers to a group of silicate minerals that react with carbon dioxide to form other compounds. Enhanced weathering aims to amplify this chemical interaction by mining huge quantities of olivine – which is widespread and relatively abundant – and pulverising it to maximise its exposure to the air, then spreading it over areas such as agricultural fields to add carbon to the soils. Rohling believes enhanced weathering is very promising, but it does have some significant downsides. “It’s not one of the most expensive approaches but it does require large-scale mining, which we do for everything else anyway,” he says.
The mining would also consume significant amounts of energy, which reduces the efficiency of the process by up to one-third. The oceans are of particular interest for negative emissions because of their enormous capacity for carbon dioxide. One proposal is to fertilise the oceans with powdered iron or olivine. This boost in important nutrients leads to an increase in phytoplankton which, when it dies, decomposes and sinks to the seafloor, taking the carbon with it. This phenomenon occurred naturally during recent ice ages, Rohling says, when the Southern Ocean was fertilised with dust from South America and Australia. But any project that attempted to alter the biochemistry and ecology of the oceans would very quickly run foul of international conventions, and rightly so. “The law of the sea would forbid you from dumping things that will affect the environmental chemistry or ecology, and that’s exactly what you want to do,” he says. As atmospheric carbon dioxide rises above 400 parts per million (ppm) for the first time in human history, there’s even talk of direct capture of carbon dioxide, using huge versions of the atmospheric scrubbers that remove carbon dioxide from the air on board spacecraft. Canadell’s strongest bet is on carbon capture and storage, but instead of sucking it out of the air, he wants to see every facility that produces carbon dioxide equipped with technology to capture it at the release point. “Anything that can be attached to any plants that are emitting carbon, either it’s a full power plant, a bioenergy burning biomass to produce electricity or carbon capture storage that is associated to industrial processes which release carbon,” he says. The captured carbon can then be disposed of deep underground in abandoned oil and gas wells, saline aquifers, or in the kind of geology that locks it away chemically. While not strictly a negative emissions technology, he argues that as long as we continue to emit carbon dioxide, we cannot hope to remain below two degrees of warming unless we find a way to capture it. Whatever the choice of negative emissions technology, Rohling says we are running out of time to study and implement them responsibly. He’s worried that at the first big global climate change disaster, governments will respond with a knee-jerk embracing of whatever negative emissions technologies they can, regardless of whether scientists have adequately explored the consequences. “We need to start preparing so we know what we’re talking about when we need it,” he says.


News Article | April 20, 2017
Site: www.eurekalert.org

A new study on UHT milk is helping scientists to better understand Alzheimer's, Parkinson's and type 2 diabetes, opening the door to improved treatments for these age-related diseases. About 500 million people worldwide suffer from these diseases, which cause millions of deaths each year. Co-lead researcher, ANU Professor John Carver, said that two unrelated proteins aggregate in UHT milk over a period of months to form clusters called amyloid fibrils, which cause the milk to transform from a liquid into a gel. He said the same type of protein clusters are found in plaque deposits in cases of Alzheimer's and Parkinson's. "Parkinson's, dementia and type 2 diabetes are big problems for the ageing population in Australia and many other countries around the world," said Professor Carver from the ANU Research School of Chemistry. "Our interest in milk proteins led to a discovery of the reason for this gelling phenomenon occurring in aged UHT milk." "The research does not suggest UHT milk can cause these age-related diseases." Professor Carver said milk proteins changed structurally when heated briefly to around 140 degrees Celsius to produce UHT milk, causing the gelling phenomenon with long-term storage. He said normal pasteurised milk did not form amyloid fibrils. ANU worked with CSIRO, University of Wollongong and international researchers on the study, which is published in the journal Small.


Apparently, even the Arctic Ocean is not safe from the unsightly bits of plastic, bottles, and other garbage defacing most of our oceans today. That is the finding of a new study published in Science Advances. A team of researchers from the University of Cádiz in Spain and a number of other organizations have made the sad discovery that a major ocean current is bringing countless plastic bits all the way from the North Atlantic to the Greenland and Barents seas. Experts predict that plastic pollution could rapidly find its way into the pristine Arctic waters in the years to come. At least 275 million tons of plastic waste are produced every year by 192 nations across the globe - almost 8 million tons of this plastic pollution is washed up straight into the ocean. China has been notoriously identified as the biggest contributor of plastic waste at 1.32 to 3.52 million tons, with its neighboring countries Indonesia, the Philippines, Sri Lanka, and Vietnam following closely behind. Plastic pollution is pervasive in the open ocean, but the largest concentrations can be spotted in the five major ocean gyres, or circulating ocean currents, such as in the Pacific, Atlantic, and Indian Oceans. The Great Pacific Garbage Patch, also known as the Pacific trash vortex, is one example of this. This dense part of the sea filled with marine debris or microplastics was discovered between 1985 and 1988. In 2014, scientists also found evidence of microplastics in deep-sea sediments from the Atlantic Ocean, the Mediterranean Sea, and the Indian Ocean. Unfortunately, plastics are really built to last forever. They're made from strong, durable materials that are difficult to break down naturally. There are certain types of plastics - say, a dense monofilament fishing line - that can persist for up to 600 years. A thin plastic bag in harsh surf zones, on the other hand, may last only a few months. "But even if that bag breaks down over the course of six months or a year, it might well have had a lot of environmental impact before that," Chris Wilcox of CSIRO's Oceans and Atmosphere Flagship said. Research shows that many marine animals get caught in plastic fishing lines and end up being strangled. There's also the risk of animals mistaking colorful plastic for food, which is severely toxic to them.


News Article | April 27, 2017
Site: phys.org

The plan for this pilot project is to pipe the Waikatakata Hot Spring at Vusasivo, whose water comes out of the ground at 70°C, into an absorption chilling facility at Natewa Bay. This centralised cold storage facility will then be available for villagers to preserve their goods. "So, for example, if they slaughter a cow or catch some fish they can freeze and store them there," says Professor Regenauer-Lieb, Head of UNSW Petroleum Engineering. "This will be an important facility for the villages where people still regularly die from food poisoning." The geothermal freezer facility plans to service the traditional landowners of Vusasivo Village and adjoining Natewa Village in coordination with the Nambu Conservation Trust of Natewa. Although the villages have basic infrastructure, including a post office, church and small clinic, they are off-grid and find it difficult to keep food fresh. They also have no sewerage installations, morgue or clean water, all of which significantly increase the risk of food poisoning and disease. Having dedicated much of his professional life to researching geothermal energy, Regenauer-Lieb believes that deriving energy from natural hot springs represents a state-of-the-art and reliable renewable approach to cooling in villages and urban areas. He says the geothermally driven absorption chiller technology performs the same job as traditional electrically driven vapour compression chillers, but uses heat-driven absorption or adsorption and evaporation for refrigeration and freezing instead. The project, which is part of the long-running Geothermal Cities initiative set up by Regenauer-Lieb in 2008 with the CSIRO and other universities, is being funded and championed by the University of the South Pacific in collaboration with UNSW. Regenauer-Lieb says the technology is already proven and is technically identical to that used at the Chena Hot Springs in Alaska, which provides year-round ice for an ice museum and sculpture gallery. "The original idea came about when I was invited to scout geothermal potential for a gold-mining operation on the island," he says. "But when I got there I decided I wanted to do something much more lasting and beneficial for the people of Fiji." Much of Regenauer-Lieb's previous research has explored the much more difficult, risky and expensive non-volcanic geothermal energy opportunities that require deep-well drilling in places such as Perth and the Cooper Basin. He says that with geothermal energy so close to the surface, the islands of Fiji (which are in the Pacific Ring of Fire) are a perfect low-risk, high-benefit place to install this type of technology. In addition, he says the remoteness of the village and the fact it is off-grid provides a marvellous opportunity to train people in the philosophy of geothermal energy, which is about embracing heat as a commodity that can be used again and again for a wide variety of applications. "If I simply go there, for example, to build a geothermal power station, I can guarantee that what happens is a repeat of the mistakes made in current developed nations," he says. "Rather than using an energy-efficient centralised air conditioner and cool store, the natural tendency is to individually purchase an electricity-hungry reverse-cycle air conditioner or refrigerator. This will in turn drive up electricity consumption and the size and investment needed for a geothermal power plant. 
"In a remote setting, the financial and technical hurdle of properly maintaining the individual air conditioners and refrigerators may lead to early break down and we're back at square one. "In terms of re-educating the community to embrace heat as a commodity, what we have in Fiji is a welcome blank slate where we can introduce the attractive concept of cascading heat usage. "This ability to refrigerate their food is just the start; the long-term aim is to integrate multiple uses of naturally available geothermal heat for electric power, cooling solutions and providing fresh water and other direct heat uses to local communities." The project team has enough funding to undertake the reconnaissance work on the hot springs but still needs to raise the capital to build the cooling facility. The next stage is running the Clean Power for the South Pacific Conference 2017 in Fiji to seek investment or sponsorship from the geothermal community as well as the World Bank, which has identified Fiji as a possible target for promoting geothermal energy in the South Pacific. Although geothermal energy is not quite as well-known as its sustainable-energy cousins – wind and solar power – Regenauer-Lieb says it's the only one with realistic hopes of providing sustainable base-load energy for the future energy mix, particularly in volcanic regions like Fiji. "The long-term aim for Vanua Levu is to power the entire island by geothermal energy," he says. "In the town of Savusavu, for example, there is a 98°C hot spring, with an estimated 160°C at 500m depth. This energy source may be sufficient to power the local grid and replace its industrial diesel generators. "With deeper drilling, geothermal energy could power the entire island and neighbouring islands as well. It wouldn't be too much of a stretch to say that Fiji could be entirely run on electricity won from geothermal energy in the future." Explore further: Geothermal power potential seen in Iceland drilling project


News Article | April 28, 2017
Site: www.eurekalert.org

Special 'nugget-producing' bacteria may hold the key to more efficient processing of gold ore, mine tailings and recycled electronics, as well as aid in exploration for new deposits, University of Adelaide research has shown. For more than 10 years, University of Adelaide researchers have been investigating the role of microorganisms in gold transformation. At the Earth's surface, gold can be dissolved, dispersed and reconcentrated into nuggets. This epic 'journey' is called the biogeochemical cycle of gold. Now they have shown for the first time just how long this biogeochemical cycle takes, and they hope to make it even faster in the future. "Primary gold is produced under high pressures and temperatures deep below the Earth's surface and is mined, nowadays, from very large primary deposits, such as at the Superpit in Kalgoorlie," says Dr Frank Reith, Australian Research Council Future Fellow in the University of Adelaide's School of Biological Sciences, and Visiting Fellow at CSIRO Land and Water at Waite. "In the natural environment, primary gold makes its way into soils, sediments and waterways through biogeochemical weathering and eventually ends up in the ocean. On the way bacteria can dissolve and re-concentrate gold - this process removes most of the silver and forms gold nuggets. "We've known that this process takes place, but for the first time we've been able to show that this transformation takes place in just years to decades -- that's a blink of an eye in terms of geological time. "These results have surprised us, and lead the way for many interesting applications such as optimising the processes for gold extraction from ore and re-processing old tailings or recycled electronics, which isn't currently economically viable." Working with John and Johno Parsons (Prophet Gold Mine, Queensland), Professor Gordon Southam (University of Queensland) and Dr Geert Cornelis (formerly of the CSIRO), Dr Reith and postdoctoral researcher Dr Jeremiah Shuster analysed numerous gold grains collected from West Coast Creek using high-resolution electron-microscopy. In findings published in the journal Chemical Geology, they showed that five 'episodes' of gold biogeochemical cycling had occurred on each gold grain. Each episode was estimated to take between 3.5 and 11.7 years -- a total of under 18 to almost 60 years to form the secondary gold. "Understanding this gold biogeochemical cycle could help mineral exploration by finding undiscovered gold deposits or developing innovative processing techniques," says Dr Shuster, University of Adelaide. "If we can make this process faster, then the potential for re-processing tailings and improving ore-processing would be game-changing. Initial attempts to speed up these reactions are looking promising." The researchers say that this new understanding of the gold biogeochemical process and transformation may also help verify the authenticity of archaeological gold artefacts and distinguish them from fraudulent copies.
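The timeline quoted above is simple arithmetic over the per-episode estimates; a minimal sketch using only the figures reported in the article:

    # Five biogeochemical cycling episodes per grain, each estimated at 3.5-11.7
    # years (figures as quoted in the article).
    episodes = 5
    years_per_episode = (3.5, 11.7)
    total_years = tuple(episodes * y for y in years_per_episode)
    print(total_years)  # (17.5, 58.5) -> "under 18 to almost 60 years" to form the secondary gold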


News Article | May 1, 2017
Site: www.chromatographytechniques.com

Special 'nugget-producing' bacteria may hold the key to more efficient processing of gold ore, mine tailings and recycled electronics, as well as aid in exploration for new deposits, University of Adelaide research has shown. For more than 10 years, University of Adelaide researchers have been investigating the role of microorganisms in gold transformation. At the Earth's surface, gold can be dissolved, dispersed and reconcentrated into nuggets. This epic 'journey' is called the biogeochemical cycle of gold. Now they have shown for the first time just how long this biogeochemical cycle takes, and they hope to make it even faster in the future. "Primary gold is produced under high pressures and temperatures deep below the Earth's surface and is mined, nowadays, from very large primary deposits, such as at the Superpit in Kalgoorlie," says Frank Reith, Australian Research Council Future Fellow in the University of Adelaide's School of Biological Sciences, and Visiting Fellow at CSIRO Land and Water at Waite. "In the natural environment, primary gold makes its way into soils, sediments and waterways through biogeochemical weathering and eventually ends up in the ocean. On the way bacteria can dissolve and re-concentrate gold - this process removes most of the silver and forms gold nuggets. "We've known that this process takes place, but for the first time we've been able to show that this transformation takes place in just years to decades -- that's a blink of an eye in terms of geological time. "These results have surprised us, and lead the way for many interesting applications such as optimizing the processes for gold extraction from ore and re-processing old tailings or recycled electronics, which isn't currently economically viable." Working with John and Johno Parsons (Prophet Gold Mine, Queensland), Gordon Southam (University of Queensland) and Geert Cornelis (formerly of the CSIRO), Reith and postdoctoral researcher Jeremiah Shuster analyzed numerous gold grains collected from West Coast Creek using high-resolution electron-microscopy. In findings published in the journal Chemical Geology, they showed that five 'episodes' of gold biogeochemical cycling had occurred on each gold grain. Each episode was estimated to take between 3.5 and 11.7 years -- a total of under 18 to almost 60 years to form the secondary gold. "Understanding this gold biogeochemical cycle could help mineral exploration by finding undiscovered gold deposits or developing innovative processing techniques," says Shuster. "If we can make this process faster, then the potential for re-processing tailings and improving ore-processing would be game-changing. Initial attempts to speed up these reactions are looking promising." The researchers say that this new understanding of the gold biogeochemical process and transformation may also help verify the authenticity of archaeological gold artifacts and distinguish them from fraudulent copies.


News Article | April 21, 2017
Site: news.yahoo.com

A modified genuine Boeing 777 flaperon was tested in waters near Hobart, the capital of Tasmania, to help determine a possible resting place of missing Malaysia Airlines jet MH370 (AFP Photo/Handout) Missing flight MH370 "most likely" lies north of a former search zone in the remote Indian Ocean, Australian authorities said Friday, in a new report that offers hope the plane may one day be found. A vast underwater hunt for the Malaysia Airlines jet off Australia's west coast was halted in January when no trace was found of the plane, which disappeared en route from Kuala Lumpur to Beijing three years ago carrying 239 people. The Australian-led undersea search -- the most expensive ever of its kind -- operated on the assumption that MH370 went down somewhere in the southern Indian Ocean, based on satellite data. But relatives pleaded for the search to be extended following analysis by Australian and international experts released in December that concluded the aircraft was not in the search zone but may be further north. Three fragments were also recovered from the plane outside the official search zone on western Indian Ocean shores, including a two-metre wing part known as a flaperon found on La Reunion island. The new report by Australia's national science body CSIRO supported the northern theory using data and analysis from ocean testing of an actual Boeing 777 flaperon. As part of the test, the wing part was cut down to match photographs of MH370's flaperon and then placed in waters near Hobart, the capital of Tasmania, an island state south of Australia's mainland. "The arrival of MH370's flaperon at La Reunion in July 2015 now makes perfect sense," said CSIRO scientist David Griffin, adding that how the flaperon responded to wind, waves and ocean currents was crucial. He said testing with an actual flaperon "added an extra level of assurance" to the findings from earlier drift modelling work. "We add both (wind and waves) together in our model to simulate the drift across the ocean, then compare the results with observations of where debris was and wasn't found, in order to deduce the location of the aircraft. "We cannot be absolutely certain, but that is where all the evidence we have points us, and this new work leaves us more confident in our findings." The report was welcomed by Transport Minister Darren Chester who said it was provided to Malaysia. "The CSIRO report has been provided to Malaysia for consideration in its ongoing investigation into the disappearance of MH370," he said in a statement. "Malaysia is the lead investigator and any future requests in relation to searching for MH370 would be considered by Australia, at that time." But he added it was "important to note that it does not provide new evidence leading to a specific location of MH370". Australia said in January when the full area was scoured that the hunt would not be restarted without "credible new evidence". The Australian Transport Safety Bureau (ATSB), which led the original search mission, said the report "further confirms the most likely location of MH370 is in the new search area". The old search zone -- a 120,000 square kilometre (46,000 square mile) area off western Australia -- was largely defined from scant clues available from satellite "pings" and calculations of how much fuel was on board MH370. The analysis released in December had identified an area of approximately 25,000 square kilometres with the highest probability of containing wreckage. 


A Boeing 777 flaperon cut down to match the one from flight MH370 found on Reunion island off the coast of Africa in 2015, is lowered into water to discover its drift characteristics by Commonwealth Scientific and Industrial Research Organisation researchers in Tasmania, Australia, in this handout image taken March 23, 2017. CSIRO/Handout via REUTERS SYDNEY (Reuters) - A new ocean debris drift analysis shows missing Malaysia Airlines MH370 is most likely within a proposed expanded search area rejected by Australia and Malaysia in January, the Australian government's scientific agency said on Friday. A A$200 million ($150.54 million) search for the aircraft, which went missing in 2014 with 239 people onboard, was suspended when the two nations rejected a recommendation to search north of the 120,000 sq km (46,000 sq mile) area already canvassed, saying the new area was too imprecise. The new debris drift analysis suggests the missing Boeing 777 may be located in a much smaller 25,000 sq km (9,652 sq mile) zone within that proposed northern search area. “This new work leaves us more confident in our findings,” Dr David Griffin, a principal research scientist at the Commonwealth Scientific and Industrial Research Organisation (CSIRO) said in a statement. The CSIRO report featured data and analysis from ocean testing of an actual Boeing 777 flaperon cut down to match the one from MH370 found on Reunion island off the coast of Africa in 2015, rather than the wood and steel models used in a previous test. "We’ve found that an actual flaperon goes (drifts) about 20 degrees to the left, and faster than the replicas, as we thought it might," said Griffin. "The arrival of MH370’s flaperon at La Reunion in July 2015 now makes perfect sense." The location of MH370, which went missing on a flight to Beijing from the Malaysian capital of Kuala Lumpur, has become one of the world's greatest aviation mysteries. Australian Minister for Infrastructure and Transport Darren Chester said he welcomed the new CSIRO report but said it was important to note it did not provide new evidence leading to a specific location of MH370. He said a copy of the report had been provided to Malaysia for consideration in its ongoing investigation into the disappearance of the aircraft. “Malaysia is the lead investigator and any future requests in relation to searching for MH370 would be considered by Australia, at that time," Chester said.
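For readers curious how such drift modelling works in outline, the sketch below shows the kind of per-particle update a Lagrangian debris model performs: each virtual flaperon moves with the ocean current plus a wind-driven "leeway" component, here rotated 20 degrees to the left of downwind as reported for the real flaperon. The 2% leeway factor, the velocities and the single-step integration are illustrative assumptions, not CSIRO's calibrated model.

    import math

    def drift_step(lon, lat, current_uv, wind_uv, dt_s,
                   leeway_frac=0.02, deflection_deg=20.0):
        """One time step of a toy debris-drift model (positions in degrees)."""
        cu, cv = current_uv                  # ocean current, m/s (east, north)
        wu, wv = wind_uv                     # 10 m wind, m/s
        a = math.radians(deflection_deg)     # rotate leeway to the left of downwind
        lu = leeway_frac * (wu * math.cos(a) - wv * math.sin(a))
        lv = leeway_frac * (wu * math.sin(a) + wv * math.cos(a))
        u, v = cu + lu, cv + lv              # total drift velocity, m/s
        m_per_deg = 111_000.0
        return (lon + u * dt_s / (m_per_deg * math.cos(math.radians(lat))),
                lat + v * dt_s / m_per_deg)

    # Assumed current, wind and starting position; step one hour forward.
    print(drift_step(94.0, -36.0, current_uv=(0.10, 0.05), wind_uv=(8.0, -2.0), dt_s=3600))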


News Article | April 30, 2017
Site: www.theguardian.com

The flecks of white speckled across the parched brown landscape of the Murray-Darling basin appear dramatically out of place – some kind of wintertime miracle in the Australian bush. On closer inspection it is not snow, but something equally alien to this harsh environment: fluffy wads of cotton. The major river system of the world’s driest inhabited continent somehow sustains this thirsty cash crop – the WWF estimates that 2,700 litres of water can be used to produce a single cotton T-shirt. Australian conditions have pushed local farmers to become the most efficient in the game, using hi-tech innovations to improve water productivity by more than 40% in a decade. Yet critics note that saved water is simply reinvested in producing ever-more cotton, rather than released back into a once-mighty river network crippled by increasingly erratic rainfall since the turn of the millennium. Both sides have turned to science to support their position – indeed both to the CSIRO, Australia’s national science agency, which simultaneously serves as both saviour and prophet of doom for the cotton industry. Lionel Henderson, the business development director for CSIRO agriculture and food, says the agency has developed varieties specially adapted to Australia’s climate, disease threats and nutrient availability. “When I first got involved in cotton industry during the early 80s, two bales to an acre was standard – now five to an acre is the target,” he says. “The breeding program has helped industry expand, particularly into southern New South Wales and northern Victoria – there is generally going to be water available in one of the different rivers, so by broadening the base you minimise the impact [of low rainfall in a particular region].” The challenging nature of Australia’s conditions has led to CSIRO-bred varieties being used in similar dry climates around the world. The CSIRO has worked with companies including Monsanto to roll out genetically modified varieties over the past two decades, with GM cotton today making up more than 99% of the crop. “Monsanto develop the traits, we then work with Monsanto to incorporate those traits into varieties we are breeding,” Henderson says. He says the main rivals to Australian growers are not foreign cotton producers but manufacturers of other fibres. In terms of water use, cotton’s rivals are certainly more efficient, from natural fibres including hemp to synthetics such as polyester, which represents less than 0.1% of cotton’s water footprint, according to a 1999 AUTEX Research Journal study by Eija Kalliala and Pertti Nousiainen. An Australian Conservation Foundation campaigner, Jonathan La Nauze, is more interested in another area of CSIRO work – the agency’s climate change research, which forecasts a dramatic rise in extreme weather events such as droughts and heatwaves, and a sharp drop in winter and spring rainfall across southern Australia. “We’re already the driest part of the world and water use is a key concern – cotton uses a hell of a lot of it,” he says. “Growers are aggressively trying to increase amount they can take rather than accept the current amount as the upper limit. 
We saw the Darling river stop flowing for months this year – extraordinary and avoidable.” “The impacts on native fish and water birds have been severe, and significant opportunities to improve downstream communities have been missed – and that’s before factoring in the CSIRO’s global warming scenarios of a reduction of water availability in the northern basin.” La Nauze welcomes Cotton Australia’s measures to improve water efficiency but says it isn’t much help to the environment if the saved water doesn’t get shared around. “The dividend should be a long-term sustainable river system – if you kill that system, you won’t have an industry,” he says. Cotton Australia’s chief executive, Adam Kay, says asking growers to pass the dividends of improved efficiencies on to the environment is “a ridiculous thing to say” given it is farmers making the investments in the first place. “We’ve got to help the public understand about this perception that cotton is somehow thirsty – it is a normal plant like soybeans or corn, uses about the same amount of water,” he says. “The issue is people with the best access to water choose to grow cotton as it offers the best return – that water would still be used to grow other crops if cotton wasn’t there.” But even Cotton Australia’s own promotional material acknowledges that the crop’s irrigation requirement of eight megalitres a hectare is the second-most water intensive in Australia, behind rice (12ML per hectare), but ahead of alternatives such as nurseries or cut flowers (5ML). Analysis of Australian Bureau of Statistics data reveals both the dramatic ebbs and flows of cotton production in response to water supply, and the continuing intensity of water use despite the progress made. During the water-scarce season of 2014-15, cotton sales represented 1.7% of Australia’s agricultural commodity value but used 12.2% of its water. In the more favourable conditions of 2013-14, cotton generated 3.9% of agriculture profits but in the process devoured 24% of the water diverted to agriculture. Kay says the industry has left no stone unturned in its quest for water savings and improved yield. Innovations include electromagnetic meters and soil moisture probes to monitor the need for irrigation, the laser-levelling of fields to ensure water drains evenly, weather forecasting software to know how much crop can be sustained before planting, thermal imaging to identify leaks, lining channels with non-porous materials to minimise seepage, autonomous spray rigs, and tailwater recycling programs. “We are on the cusp of incredible things with IoT technologies and digital agriculture,” he says. “We are using individual pieces [of data gathering] right now – it is commonplace to use drones to monitor crops and look for weed outbreaks, but the time is coming to link data from drones to data from the cotton picker to data from soil tests in the field and the canopy sensors – [but] once you link it all up, you can drive incredible decision making.” The reliance on new technology has thrown up new challenges for farmers: Kay notes that regional internet coverage is inadequate, and also that growers need to develop new tech-savvy skill sets. He says Cotton Australia’s investment of $20m a year into research and development can also help deal with the biggest new challenge of all: climate change.
“We have research and development projects going on looking at impacts – tents out in the field to see what higher CO2 does to the crop, work on water use efficiency for potential scarcity in the future, and managing increased temperature,” he says. Cotton Australia is encouraging farmers to become accredited with the global Better Cotton Initiative, a framework founded by the WWF that requires members to meet stringent sustainability criteria – not to mention marketing rules. They must promote their cotton using a selection of pre-approved phrases, including: “The Better Cotton Initiative exists to make global cotton production better for the people who produce it, better for the environment it grows in and better for the sector’s future.”
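One way to read the ABS figures cited above is as a water-to-value intensity ratio; the small sketch below uses only the percentages quoted in the article (and glosses over the value/profit wording difference between the two seasons):

    # Cotton's share of agricultural water use divided by its share of agricultural
    # value, per season, using the percentages quoted above.
    seasons = {
        "2014-15": {"value_share": 1.7, "water_share": 12.2},
        "2013-14": {"value_share": 3.9, "water_share": 24.0},
    }
    for season, s in seasons.items():
        ratio = s["water_share"] / s["value_share"]
        print(f"{season}: cotton's share of water was about {ratio:.1f} times its share of value")
    # 2014-15: ~7.2x; 2013-14: ~6.2x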


News Article | May 4, 2017
Site: www.undercurrentnews.com

A genetic improvement program to breed oysters genetically predisposed to resist Pacific oyster mortality syndrome (POMS) -- a disease that is harmless to humans but so lethal to oysters that it can kill more than 90% of a crop of millions of animals within days -- is being developed in Australia. In 2013, POMS spread to the Hawkesbury River, a prime oyster growing region, where the disease killed more than 10 million oysters in three days, according to the Commonwealth Scientific and Industrial Research Organisation (CSIRO) blog. Then, in January 2016, it turned up in Tasmanian waters, considered by some to be the least likely destination for POMS in Australia, due to the disease’s preference for water temperatures above 21-22°C. The Tasmanian industry lost 50 employees, and 60% of the state’s growing areas were affected after the disease hit, according to Scott Parkinson, selective breeding manager at Tasmanian-based Shellfish Culture, Australia’s main Pacific oyster hatchery. Australian Seafood Industries, the sole supplier of selectively bred Pacific oyster broodstock to the Australian industry, had been working closely on a genetic improvement program for Pacific oysters. Following the POMS outbreak, it refocused its research to breed oysters genetically predisposed to resist the disease. “POMS resistance is what we call a polygenic trait, which means there are perhaps thousands, or tens of thousands, of genes involved. The breeding program is about accumulating or increasing the frequency of those genes with each new generation," said Peter Kube, senior geneticist at CSIRO. Kube pointed out that it will require up to two years for hatcheries to be able to produce commercial quantities of seed from resistant genetic lines for growers. Shellfish Culture’s Scott Parkinson said that a strategy of ‘farming around’ the disease will be a key factor in getting the most out of the new genetic lines. Shellfish Culture takes ASI broodstock and produces large quantities of seed or spat -- baby oysters three to nine months old -- that it sends to oyster farms for growing out. “Managing POMS is not just about genetics, although that will underpin the recovery of the Australian oyster industry, but about management, site selection, when to stock,” said Parkinson. “Different growers are experimenting with different strategies.” Oyster age is a risk factor for POMS, with younger oysters at a higher risk of death than mature individuals. For Shellfish Culture, POMS has brought both challenge and opportunity, according to Parkinson. Overnight, the company lost not just a significant percentage of its stock of around 100 million spat, but was no longer able to supply South Australia or most of New South Wales -- which represented 50% of its market -- due to interstate biosecurity protocols. Its response was to set up a new facility in South Australia, Eyre Shellfish. At the same time, the company invested in making its main hatchery operation near Hobart biosecure, which after two independent biosecurity audits has been declared disease-free and is now back to supplying oyster spat to the entire Tasmanian industry. “We will keep breeding stock for resistance. We still have a percentage of animals that die from the disease, so we have to get more and more resistance into that stock.”
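As an illustration of how a polygenic trait can be shifted generation by generation, the sketch below applies the standard breeder's equation (response equals heritability times the selection differential). The heritability, selection differential and starting survival are invented for illustration and are not figures from the ASI or CSIRO program.

    def simulate_selection(mean_survival_pct, h2, selection_diff_pct, generations):
        """Breeder's equation applied repeatedly: R = h^2 * S per generation."""
        history = [mean_survival_pct]
        for _ in range(generations):
            mean_survival_pct = min(mean_survival_pct + h2 * selection_diff_pct, 100.0)
            history.append(mean_survival_pct)
        return history

    # Assumed: 10% baseline survival under POMS challenge, heritability 0.3,
    # selected parents 30 percentage points above the population mean.
    print(simulate_selection(10.0, 0.3, 30.0, 5))  # [10.0, 19.0, 28.0, 37.0, 46.0, 55.0]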


Tucker M.R.,University of Adelaide | Koltunow A.M.G.,CSIRO
Current Opinion in Plant Biology | Year: 2014

The formation of female gametes in plants occurs within the ovule, a floral organ that is also the precursor of the seed. Unlike animals, plants lack a typical germline separated from the soma early in development and rely on positional signals, including phytohormones, mobile mRNAs and sRNAs, to direct diploid somatic precursor cells onto a reproductive program. In addition, signals moving between plant cells must overcome the architectural limitations of a cell wall which surrounds the plasma membrane. Recent studies have addressed the molecular and histological signatures of young ovule cells and indicate that dynamic cell wall changes occur over a short developmental window. These changes in cell wall properties impact signal flow and ovule cell identity, thereby aiding the establishment of boundaries between reproductive and somatic ovule domains. © 2013 Elsevier Ltd.


Furbank R.T.,CSIRO | Tester M.,University of Adelaide
Trends in Plant Science | Year: 2011

Global agriculture is facing major challenges to ensure global food security, such as the need to breed high-yielding crops adapted to future climates and the identification of dedicated feedstock crops for biofuel production (biofuel feedstocks). Plant phenomics offers a suite of new technologies to accelerate progress in understanding gene function and environmental responses. This will enable breeders to develop new agricultural germplasm to support future agricultural production. In this review we present plant physiology in an 'omics' perspective, review some of the new high-throughput and high-resolution phenotyping tools and discuss their application to plant biology, functional genomics and crop breeding. © 2011.


Guo H.,Shanghai University | Barnard A.S.,CSIRO
Journal of Materials Chemistry A | Year: 2013

Widespread nanostructures of iron oxides and oxyhydroxides are important reagents in biogeochemical processes throughout our planet and ecosystem. Their functions are closely related to their shapes, sizes and thermodynamic surroundings, and there is much that we can learn from these natural relationships. This review covers these subjects for several phases (ferrihydrite, goethite, hematite, magnetite, maghemite, lepidocrocite, akaganéite and schwertmannite) commonly found in water, soils and sediments. Due to surface passivation by ubiquitous water in aquatic and most terrestrial environments, the difference in formation energies of bulk phases can decrease substantially or change signs at the nanoscale because of the disproportionate surface effects. Phase transformations and relative abundances are sensitive to changes in environmental conditions. Each of these phases (except maghemite) displays characteristic morphologies, while maghemite appears frequently to inherit the precursor's morphology. We will see how an understanding of naturally occurring iron oxide nanostructures can provide useful insight for the production of synthetic iron oxide nanoparticles in technological settings. © 2013 The Royal Society of Chemistry.


Pettolino F.A.,CSIRO | Walsh C.,University of Melbourne | Fincher G.B.,University of Adelaide | Bacic A.,University of Melbourne
Nature Protocols | Year: 2012

The plant cell wall is a chemically complex structure composed mostly of polysaccharides. Detailed analyses of these cell wall polysaccharides are essential for our understanding of plant development and for our use of plant biomass (largely wall material) in the food, agriculture, fabric, timber, biofuel and biocomposite industries. We present analytical techniques not only to define the fine chemical structures of individual cell wall polysaccharides but also to estimate the overall polysaccharide composition of cell wall preparations. The procedure covers the preparation of cell walls, together with gas chromatography-mass spectrometry (GC-MS)-based methods, for both the analysis of monosaccharides as their volatile alditol acetate derivatives and for methylation analysis to determine linkage positions between monosaccharide residues as their volatile partially methylated alditol acetate derivatives. Analysis time will vary depending on both the method used and the tissue type, and ranges from 2 d for a simple neutral sugar composition to 2 weeks for a carboxyl reduction/methylation linkage analysis. © 2012 Nature America, Inc. All rights reserved.
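A hedged sketch of one downstream calculation in this kind of protocol: converting GC-MS peak areas for alditol acetate derivatives into a mole-percent monosaccharide composition. The peak areas and per-mole response factors below are invented; real analyses derive response factors from authentic standards run alongside the samples.

    # Convert raw peak areas to mole percent using assumed per-mole response factors.
    peak_areas = {"arabinose": 120.0, "xylose": 310.0, "glucose": 520.0, "galactose": 90.0}
    response_factors = {"arabinose": 1.00, "xylose": 1.00, "glucose": 1.10, "galactose": 1.05}

    relative_moles = {s: peak_areas[s] / response_factors[s] for s in peak_areas}
    total = sum(relative_moles.values())
    mol_percent = {s: round(100.0 * m / total, 1) for s, m in relative_moles.items()}
    print(mol_percent)  # {'arabinose': 12.1, 'xylose': 31.4, 'glucose': 47.8, 'galactose': 8.7}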


Lu X.,Huazhong University of Science and Technology | Naidis G.V.,RAS Joint Institute for High Temperatures | Laroussi M.,Old Dominion University | Ostrikov K.,CSIRO | Ostrikov K.,University of Sydney
Physics Reports | Year: 2014

This review focuses on one of the fundamental phenomena that occur upon application of sufficiently strong electric fields to gases, namely the formation and propagation of ionization waves, known as streamers. The dynamics of streamers is controlled by strongly nonlinear coupling, in localized streamer tip regions, between enhanced (due to charge separation) electric field and ionization and transport of charged species in the enhanced field. Streamers appear in nature (as initial stages of sparks and lightning, and as huge structures (sprites) above thunderclouds), and are also found in numerous technological applications of electrical discharges. Here we discuss the fundamental physics of the guided streamer-like structures known as plasma bullets, which are produced in cold atmospheric-pressure plasma jets. Plasma bullets are guided ionization waves moving in a thin column of a jet of plasma-forming gases (e.g., He or Ar) expanding into ambient air. In contrast to streamers in a free (unbounded) space that propagate in a stochastic manner and often branch, guided ionization waves are repetitive and highly-reproducible and propagate along the same path, the jet axis. This property of guided streamers, in comparison with streamers in a free space, enables many advanced time-resolved experimental studies of ionization waves with nanosecond precision. In particular, experimental studies on manipulation of streamers by external electric fields and streamer interactions are critically examined. This review also introduces the basic theories and recent advances on the experimental and computational studies of guided streamers, in particular related to the propagation dynamics of ionization waves and the various parameters of relevance to plasma streamers. This knowledge is very useful to optimize the efficacy of applications of plasma streamer discharges in various fields ranging from health care and medicine to materials science and nanotechnology. © 2014 Elsevier B.V.


Schwartz M.W.,University of California at Davis | Martin T.G.,CSIRO
Annals of the New York Academy of Sciences | Year: 2013

Conservation translocation of species varies from restoring historic populations to managing the relocation of imperiled species to new locations. We review the literature in three areas (translocation, managed relocation, and conservation decision making) to inform conservation translocation under changing climates. First, climate change increases the potential for conflict over both the efficacy and the acceptability of conservation translocation. The emerging literature on managed relocation highlights this discourse. Second, conservation translocation works in concert with other strategies. The emerging literature in structured decision making provides a framework for prioritizing conservation actions, considering many possible alternatives that are evaluated based on expected benefit, risk, and social-political feasibility. Finally, the translocation literature has historically been primarily concerned with risks associated with the target species. In contrast, the managed relocation literature raises concerns about the ecological risk to the recipient ecosystem. Engaging in a structured decision process that explicitly focuses on stakeholder engagement, problem definition and specification of goals from the outset will allow creative solutions to be developed and evaluated based on their expected effectiveness. © 2013 The New York Academy of Sciences.


Batley G.E.,CSIRO | Kirby J.K.,CSIRO | McLaughlin M.J.,CSIRO | McLaughlin M.J.,University of Adelaide
Accounts of Chemical Research | Year: 2013

Over the last decade, nanoparticles have been used more frequently in industrial applications and in consumer and medical products, and these applications of nanoparticles will likely continue to increase. Concerns about the environmental fate and effects of these materials have stimulated studies to predict environmental concentrations in air, water, and soils and to determine threshold concentrations for their ecotoxicological effects on aquatic or terrestrial biota. Nanoparticles can be added to soils directly in fertilizers or plant protection products or indirectly through application to land or wastewater treatment products such as sludges or biosolids. Nanoparticles may enter aquatic systems directly through industrial discharges or from disposal of wastewater treatment effluents or indirectly through surface runoff from soils. Researchers have used laboratory experiments to begin to understand the effects of nanoparticles on waters and soils, and this Account reviews that research and the translation of those results to natural conditions. In the environment, nanoparticles can undergo a number of potential transformations that depend on the properties both of the nanoparticle and of the receiving medium. These transformations largely involve chemical and physical processes, but they can involve biodegradation of surface coatings used to stabilize many nanomaterial formulations. The toxicity of nanomaterials to algae involves adsorption to cell surfaces and disruption to membrane transport. Higher organisms can directly ingest nanoparticles, and within the food web, both aquatic and terrestrial organisms can accumulate nanoparticles. The dissolution of nanoparticles may release potentially toxic components into the environment. Aggregation with other nanoparticles (homoaggregation) or with natural mineral and organic colloids (heteroaggregation) will dramatically change their fate and potential toxicity in the environment. Soluble natural organic matter may interact with nanoparticles to change surface charge and mobility and affect the interactions of those nanoparticles with biota. Ultimately, aquatic nanomaterials accumulate in bottom sediments, facilitated in natural systems by heteroaggregation. Homoaggregates of nanoparticles sediment more slowly. Nanomaterials from urban, medical, and industrial sources may undergo significant transformations during wastewater treatment processes. For example, sulfidation of silver nanoparticles in wastewater treatment systems converts most of the nanoparticles to silver sulfides (Ag2S). Aggregation of the nanomaterials with other mineral and organic components of the wastewater often results in most of the nanomaterial being associated with other solids rather than remaining as dispersed nanosized suspensions. Risk assessments for nanomaterial releases to the environment are still in their infancy, and reliable measurements of nanomaterials at environmental concentrations remain challenging. Predicted environmental concentrations based on current usage are low but are expected to increase as use increases. At this early stage, comparisons of estimated exposure data with known toxicity data indicate that the predicted environmental concentrations are orders of magnitude below those known to have environmental effects on biota. As more toxicity data are generated under environmentally relevant conditions, risk assessments for nanomaterials will improve to produce accurate assessments that assure environmental safety. © 2012 American Chemical Society.


Paterson S.,CSIRO | Bryan B.A.,University of Adelaide
Ecology and Society | Year: 2012

Understanding the effects of payments on the adoption of reforestation in agricultural areas and the associated food-carbon trade-offs is necessary to inform climate change policy. Economic viability of reforestation under payment per hectare and payment per tonne schemes for carbon sequestration was assessed in a region in southern Australia supporting 6.1 Mha of rain-fed agriculture. The results show that under the median scenario, a carbon price of 27 A$/tCO2-e could make one-third of the study area (nearly 2 Mha) more profitable for reforestation than agriculture, and at 58 A$/tCO2-e all of the study area could become more profitable. The results were sensitive to variation in carbon risk factor, establishment costs, and discount rates. Pareto-optimal land allocation could realize one-third of the potential carbon sequestration from reforestation (16.35 MtCO2-e/yr at a carbon risk factor of 0.8) with a loss of less than one-tenth (107.89 A$M/yr) of the agricultural production. Both payment schemes resulted in efficiencies within 1% of the Pareto-optimum. Understanding food-carbon trade-offs and policy efficiencies can inform carbon policy design. © 2012 by the author(s).
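The adoption threshold in such payment schemes is, at heart, a per-hectare comparison: reforestation is taken up where annualised carbon revenue exceeds the agricultural profit forgone. The sketch below illustrates that comparison only; the sequestration rate, agricultural profit and cost figures are invented and are not the paper's inputs (only the carbon prices and the 0.8 risk factor echo numbers quoted above).

    def reforestation_more_profitable(carbon_price, seq_t_per_ha_yr, risk_factor,
                                      ag_profit_per_ha_yr, annualised_costs_per_ha_yr):
        """True if annualised carbon revenue, net of costs, beats agricultural profit."""
        carbon_revenue = carbon_price * seq_t_per_ha_yr * risk_factor
        return carbon_revenue - annualised_costs_per_ha_yr > ag_profit_per_ha_yr

    # Assumed hectare: 8 tCO2-e/ha/yr sequestration, A$250/ha/yr agricultural profit,
    # A$100/ha/yr annualised establishment cost, carbon risk factor 0.8.
    for price in (10, 27, 58):
        print(price, reforestation_more_profitable(price, 8.0, 0.8, 250.0, 100.0))
    # This particular hectare only switches above roughly A$55/t; cheaper hectares switch
    # at lower prices, which is why the converted area grows with the carbon price.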


Gan Z.,Swinburne University of Technology | Cao Y.,Swinburne University of Technology | Evans R.A.,CSIRO | Gu M.,Swinburne University of Technology
Nature Communications | Year: 2013

The current nanofabrication techniques including electron beam lithography provide fabrication resolution in the nanometre range. The major limitation of these techniques is their incapability of arbitrary three-dimensional nanofabrication. This has stimulated the rapid development of far-field three-dimensional optical beam lithography where a laser beam is focused for maskless direct writing. However, the diffraction nature of light is a barrier for achieving nanometre feature and resolution in optical beam lithography. Here we report on three-dimensional optical beam lithography with 9 nm feature size and 52 nm two-line resolution in a newly developed two-photon absorption resin with high mechanical strength. The revealed dependence of the feature size and the two-line resolution confirms that they can reach deep sub-diffraction scale but are limited by the mechanical strength of the new resin. Our result has paved the way towards portable three-dimensional maskless laser direct writing with resolution fully comparable to electron beam lithography. © 2013 Macmillan Publishers Limited. All rights reserved.


Bloch W.M.,University of Adelaide | Babarao R.,CSIRO | Hill M.R.,CSIRO | Doonan C.J.,University of Adelaide | Sumby C.J.,University of Adelaide
Journal of the American Chemical Society | Year: 2013

Here we report the synthesis and ceramic-like processing of a new metal-organic framework (MOF) material, [Cu(bcppm)H2O], that shows exceptionally selective separation for CO2 over N2 (ideal adsorbed solution theory, Sads = 590). [Cu(bcppm)H2O]·xS was synthesized in 82% yield by reaction of Cu(NO3)2·2.5H2O with the link bis(4-(4-carboxyphenyl)-1H-pyrazolyl)methane (H2bcppm) and shown to have a two-dimensional 4^4-connected structure with an eclipsed arrangement of the layers. Activation of [Cu(bcppm)H2O] generates a pore-constricted version of the material through concomitant trellis-type pore narrowing (b-axis expansion and c-axis contraction) and a 2D-to-3D transformation (a-axis contraction) to give the adsorbing form, [Cu(bcppm)H2O]-ac. The pore contraction process and 2D-to-3D transformation were probed by single-crystal and powder X-ray diffraction experiments. The 3D network and shorter hydrogen-bonding contacts do not allow [Cu(bcppm)H2O]-ac to expand under gas loading across the pressure ranges examined or following re-solvation. This exceptional separation performance is associated with a moderate adsorption enthalpy and therefore an expected low energy cost for regeneration. © 2013 American Chemical Society.
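The selectivity figure quoted above comes from an ideal adsorbed solution theory (IAST) calculation, which fits single-component CO2 and N2 isotherms and solves for the composition of the mixed adsorbed phase. The sketch below shows only the simpler selectivity definition such a calculation feeds into, with invented loadings rather than the paper's isotherm data.

    def adsorption_selectivity(q_co2, q_n2, y_co2, y_n2):
        """Adsorbed-phase mole ratio divided by gas-phase mole ratio."""
        return (q_co2 / q_n2) / (y_co2 / y_n2)

    # Assumed equilibrium loadings (mmol/g) for a 15:85 CO2:N2 mixture at 1 bar.
    print(round(adsorption_selectivity(q_co2=2.4, q_n2=0.02, y_co2=0.15, y_n2=0.85)))  # 680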


Sathyan T.,University of Adelaide | Hedley M.,CSIRO
IEEE Transactions on Mobile Computing | Year: 2013

The utility of wireless networks for many applications is increased if the locations of the nodes in the network can be tracked based on the measurements between communicating nodes. Many applications, such as tracking firefighters in large buildings, require the deployment of mobile ad hoc networks. Real-time tracking in such environments is a challenging task, particularly combined with restrictions on computational and communication resources in mobile devices. In this paper, we present a new algorithm using the Bayesian framework for cooperative tracking of nodes, which allows accurate tracking over large areas using only a small number of anchor nodes. The proposed algorithm requires lower computational and communication resources than existing algorithms. Simulation results show that the algorithm performs well with the tracking error being close to the posterior Cramér-Rao lower bound that we derive for cooperative tracking. Experimental results for a network deployed in an indoor office environment with external global positioning system (GPS)-referenced anchor nodes are presented. A computationally simple indoor range error model for measurements at the 5.8-GHz ISM band that yields positioning accuracy close to that obtained when using the actual range error distribution is also presented. © 2002-2012 IEEE.
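The paper's algorithm is Bayesian and cooperative (nodes help localise each other), but its basic non-cooperative building block is ordinary range-based positioning: estimating a node's position from noisy ranges to known anchors. A minimal Gauss-Newton sketch with invented anchor positions and ranges, not the paper's method or data:

    import numpy as np

    def locate(anchors, ranges, guess, iters=10):
        """Least-squares position fix from ranges to known anchors (Gauss-Newton)."""
        x = np.asarray(guess, dtype=float)
        for _ in range(iters):
            diffs = x - anchors                        # (n_anchors, 2)
            dists = np.linalg.norm(diffs, axis=1)      # predicted ranges
            J = diffs / dists[:, None]                 # d(range)/d(position)
            residuals = ranges - dists
            x += np.linalg.lstsq(J, residuals, rcond=None)[0]
        return x

    anchors = np.array([[0.0, 0.0], [30.0, 0.0], [0.0, 30.0], [30.0, 30.0]])
    true_pos = np.array([12.0, 7.0])
    ranges = np.linalg.norm(anchors - true_pos, axis=1) + np.random.normal(0.0, 0.3, 4)
    print(locate(anchors, ranges, guess=[15.0, 15.0]))  # close to (12, 7)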


Burke-Spolaor S.,Swinburne University of Technology | Burke-Spolaor S.,CSIRO
Monthly Notices of the Royal Astronomical Society | Year: 2011

Using archival Very Long Baseline Interferometry (VLBI) data for 3114 radio-luminous active galactic nuclei, we searched for binary supermassive black holes using a radio spectral index mapping technique which targets spatially resolved, double radio-emitting nuclei. Only one source was detected as a double nucleus. This result is compared with a cosmological merger rate model and interpreted in terms of (1) implications for post-merger time-scales for centralization of the two black holes, (2) implications for the possibility of 'stalled' systems and (3) the relationship of radio activity in nuclei to mergers. Our analysis suggests that binary pair evolution of supermassive black holes (both of masses ≥10^8 M⊙) spends less than 500 Myr in progression from the merging of galactic stellar cores to within the purported stalling radius for supermassive black hole pairs. The data show no evidence for an excess of stalled binary systems at small separations. We see circumstantial evidence that the relative state of radio emission between paired supermassive black holes is correlated within orbital separations of 2.5 kpc. © 2010 The Author. Journal compilation © 2010 RAS.
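Spectral index mapping rests on a simple per-pixel quantity: with the convention S ∝ ν^α, images of the same source at two frequencies give α = log(S1/S2) / log(ν1/ν2). The tiny arrays and frequencies below are invented, not the archival VLBI data used in the study; the point is only that flat-spectrum pixels (α near 0) mark compact nuclei, and two such peaks in one map would flag a candidate double nucleus.

    import numpy as np

    def spectral_index(s1, s2, nu1, nu2):
        """Per-pixel spectral index from flux maps at two frequencies."""
        return np.log(s1 / s2) / np.log(nu1 / nu2)

    s_low = np.array([[5.0, 1.2], [0.8, 4.0]])    # assumed mJy/beam at 2.3 GHz
    s_high = np.array([[4.2, 0.4], [0.3, 3.6]])   # assumed mJy/beam at 8.4 GHz
    print(spectral_index(s_low, s_high, 2.3e9, 8.4e9))
    # flat (~-0.1) pixels suggest compact cores; steep (~-0.8) pixels suggest jet/lobe emission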


Patent
Csiro and Clinical Genomics Pty. Ltd. | Date: 2012-08-24

The present invention relates generally to nucleic acid molecules in respect of which changes to DNA methylation levels are indicative of the onset or predisposition to the onset of a neoplasm. More particularly, the present invention is directed to nucleic acid molecules in respect of which changes to DNA methylation levels are indicative of the onset and/or progression of a large intestine or breast neoplasm, such as an adenoma or adenocarcinoma. The DNA methylation status of the present invention is useful in a range of applications including, but not limited to, those relating to the diagnosis and/or monitoring of colorectal or breast neoplasms, such as colorectal or breast adenocarcinomas. Accordingly, in a related aspect the present invention is directed to a method of screening for the onset, predisposition to the onset and/or progression of a neoplasm by screening for modulation in DNA methylation of one or more nucleic acid molecules. The nucleic acid molecules used for diagnostics in the present invention are sequences from LOC 100526820, subsequently named CAHM (colorectal adenocarcinoma hypermethylated).


Patent
Clinical Genomics Pty Ltd and Csiro | Date: 2014-08-20

The present invention relates generally to an array of nucleic acid molecules, the expression profiles of which characterise the anatomical origin of a cell or population of cells within the large intestine. More particularly, the present invention relates to an array of nucleic acid molecules, the expression profiles of which characterise the proximal or distal origin of a cell or population of cells within the large intestine. The expression profiles of the present invention are useful in a range of applications including, but not limited to determining the anatomical origin of a cell or population of cells which have been derived from the large intestine. Still further, since the progression of a normal cell towards a neoplastic state is often characterised by phenotypic de-differentiation, the method of the present invention also provides a means of identifying a cellular abnormality based on the expression of an incorrect expression profile relative to that which should be expressed by the subject cells when considered in light of their anatomical location within the colon. Accordingly, this aspect of the invention provides a valuable means of identifying the existence of large intestine colon cells, these being indicative of an abnormality within the large intestine such as the onset or predisposition to the onset of a condition such as colorectal neoplasm.


Patent
Korea Institute of Geoscience, Mineral Resources and Csiro | Date: 2011-06-30

A droplet generation system includes a first nozzle configuration structured to receive a liquid and a gas under pressure in a controllable feed ratio, and to merge the liquid and gas to form an intermediate stream that is a mixture of the gas and of a dispersed phase of the liquid. A second nozzle configuration is connected to receive the intermediate stream from the first nozzle configuration and has a valve mechanism with one or more controllable operating parameters to emit a stream of droplets of the liquid. The mean size of the droplets is dependent on the controllable feed ratio of the liquid and gas and the flow rate of the stream of droplets is dependent on the controllable operating parameter(s) of the valve mechanism. A corresponding method is disclosed, as is the application of the system and method to the production of nanoparticles in a thermochemical reactor.


Patent
Csiro and Clinical Genomics Pty. Ltd. | Date: 2013-10-18

The present invention relates generally to nucleic acid molecules in respect of which changes to the DNA or to the RNA or protein expression profiles are indicative of the onset, predisposition to the onset and/or progression of a neoplasm. More particularly, the present invention is directed to nucleic acid molecules in respect of which changes to the DNA or to the RNA or protein expression profiles are indicative of the onset and/or progression of a large intestine neoplasm, such as an adenoma or an adenocarcinoma. The DNA or the expression profiles of the present invention are useful in a range of applications including, but not limited to, those relating to the diagnosis and/or monitoring of colorectal neoplasms, such as colorectal adenocarcinomas. Accordingly, in a related aspect the present invention is directed to a method of screening a subject for the onset, predisposition to the onset and/or progression of a neoplasm by screening for modulation in the DNA or the RNA or protein expression profile of one or more nucleic acid molecule markers.


Grant
Agency: GTR | Branch: EPSRC | Program: | Phase: Research Grant | Award Amount: 5.21M | Year: 2013

The UK is committed to a target of reducing greenhouse gas emissions by 80% before 2050. With over 40% of fossil fuels used for low temperature heating and 16% of electricity used for cooling these are key areas that must be addressed. The vision of our interdisciplinary centre is to develop a portfolio of technologies that will deliver heat and cold cost-effectively and with such high efficiency as to enable the target to be met, and to create well planned and robust Business, Infrastructure and Technology Roadmaps to implementation. Features of our approach to meeting the challenge are: a) Integration of economic, behavioural, policy and capability/skills factors together with the science/technology research to produce solutions that are technically excellent, compatible with and appealing to business, end-users, manufacturers and installers. b) Managing our research efforts in Delivery Temperature Work Packages (DTWPs) (freezing/cooling, space heating, process heat) so that exemplar study solutions will be applicable in more than one sector (e.g. Commercial/Residential, Commercial/Industrial). c) The sub-tasks (projects) of the DTWPs will be assigned to distinct phases: 1st Wave technologies or products will become operational in a 5-10 year timescale, 2nd Wave ideas and concepts for application in the longer term and an important part of the 2050 energy landscape. 1st Wave projects will lead to a demonstration or field trial with an end user and 2nd Wave projects will lead to a proof-of-concept (PoC) assessment. d) Being market and emission-target driven, research will focus on needs and high volume markets that offer large emission reduction potential to maximise impact. Phase 1 (near term) activities must promise high impact in terms of CO2 emissions reduction and technologies that have short turnaround times/high rates of churn will be prioritised. e) A major dissemination network that engages with core industry stakeholders, end users, contractors and SMEs in regular workshops and also works towards a Skills Capability Development Programme to identify the new skills needed by the installers and operators of the future. The SIRACH (Sustainable Innovation in Refrigeration Air Conditioning and Heating) Network will operate at national and international levels to maximise impact and findings will be included in teaching material aimed at the development of tomorrows engineering professionals. f) To allow the balance and timing of projects to evolve as results are delivered/analysed and to maximise overall value for money and impact of the centre only 50% of requested resources are earmarked in advance. g) Each DTWP will generally involve the complete multidisciplinary team in screening different solutions, then pursuing one or two chosen options to realisation and test. Our consortium brings together four partners: Warwick, Loughborough, Ulster and London South Bank Universities with proven track records in electric and gas heat pumps, refrigeration technology, heat storage as well as policy / regulation, end-user behaviour and business modelling. Industrial, commercial, NGO and regulatory resources and advice will come from major stakeholders such as DECC, Energy Technologies Institute, National Grid, British Gas, Asda, Co-operative Group, Hewlett Packard, Institute of Refrigeration, Northern Ireland Housing Executive. 
An Advisory Board with representatives from Industry, Government, Commerce, and Energy Providers, as well as international representation from centres of excellence in Germany, Italy and Australia, will provide guidance. Collaboration (staff/student exchange, sharing of results, etc.) with government-funded thermal energy centres in Germany (at Fraunhofer ISE), Italy (PoliMi, Milan) and Australia (CSIRO) clearly demonstrates the international relevance and importance of the topic and will enhance the effectiveness of the international effort to combat climate change.


Grant
Agency: European Commission | Branch: FP7 | Program: CP-FP-SICA | Phase: KBBE-2009-1-1-03 | Award Amount: 3.76M | Year: 2010

NEXTGEN proposes the bold step of using whole genome data to develop and optimise conservation genetic management of livestock diversity for the foreseeable future. The rationale for choosing whole genome data is to future-proof DNA-based analysis in livestock conservation against upcoming changes in technology and analysis. Thus, in the context of whole genome data availability, our global objective is to develop cost-effective optimised methodologies for preserving farm-animal biodiversity, using cattle, sheep, and goats as model species. More specifically, NEXTGEN will: - produce whole genome data in selected populations; - develop the necessary bioinformatics approaches, taking advantage of the 1000 human genomes project, and focusing on the identification of genomic regions under recent selection (adaptive / neutral variation); - develop the methods for optimising breeding and biobanking, taking into account both neutral and adaptive variation; - develop innovative biobanking methods based on freeze-dried nuclei; - provide guidelines for studying disease resistance and genome/environment relationships in a spatial context; - assess the value of wild ancestors and breeds from domestication centres as genetic resources; - assess the performance of a surrogate marker system compared with whole genome sequence data for preserving biodiversity; - provide efficient training and wide dissemination of the improved methodologies. The consortium has been designed specifically to reach these objectives, and encompasses skills in conservation genetics, bioinformatics, biobanking and breeding technologies, and GIScience. The work plan has been established with great care. The sequencing task has been postponed to year 3 to take advantage of cost dynamics, while the first two years are dedicated to bioinformatics and to an innovative sampling strategy that fully integrates the spatial aspect and offers more value at the data analysis stage.


Grant
Agency: European Commission | Branch: FP7 | Program: CP-CSA-Infra | Phase: INFRA-2007-2.2-01 | Award Amount: 35.54M | Year: 2008

The Square Kilometre Array (SKA) will be one of the largest scientific projects ever undertaken. It is a machine designed to answer some of the big questions of our time: what is Dark Energy? Was Einstein right about gravity? What is the nature of dark matter? Can we detect gravitational waves? When and how did the first stars and galaxies form? What was the origin of cosmic magnetism? How do Earth-like planets form? Is there life, intelligent or otherwise, elsewhere in the Universe? There are several issues that need to be addressed before construction of the SKA can begin: 1. What is the design for the SKA? 2. Where will the SKA be located? 3. What is the legal framework and governance structure under which SKA will operate? 4. What is the most cost-effective mechanism for the procurement of the various components of the SKA? 5. How will the SKA be funded? The purpose of this proposal is to address all of these points. We seek funding to integrate the R&D work from around the globe in order to develop the fully-costed design for Phase 1 of the SKA, and a deployment plan for the full instrument. With active collaboration between funding agencies and scientists, we will investigate all of the options for the policy-related questions. The principal deliverable will be an implementation plan that will form the basis of a funding proposal to governments to start the construction of the SKA.


Grant
Agency: GTR | Branch: NERC | Program: | Phase: Research Grant | Award Amount: 583.88K | Year: 2015

Terrestrial biodiversity is declining globally because of human impacts, of which land-use change has so far been the most important. When people change how land is used, many of the species originally present decline or disappear from the area, while others previously absent become established. Although some species are affected immediately, others might only respond later as the consequences of the land-use change ripple through the ecosystem. Such delayed or protracted responses, which we term biotic lag, have largely been ignored in large-scale models so far. Another shortcoming of much previous work is that it has focused on numbers of species, rather than what they do. Because winners from the change are likely to be ecologically different from losers, the land-use change impacts how the assemblage functions, as well as how many species it contains. Understanding how - and how quickly - land-use change affects local assemblages is crucial for supporting better land-use decisions in the decades to come, as people try to strike the balance between short-term needs for products from ecosystems and the longer-term need for sustainability. The most obvious way to assess the global effects of land-use change on local ecological communities would be to have monitored how land use and the community have changed over a large, representative set of sites over many decades. The sites have to be representative to avoid a biased result, and the long time scale is needed because the responses can unfold over many years. Because there is no such set of sites, less direct approaches are needed. We are planning to scour the ecological literature for comparisons of communities before and after land-use change. We can correct for bias because we have estimates of how common different changes in land use have been; and we will model how responses change over time after a land-use change so that we can use longer-term and shorter-term studies alike. There are many hundreds of suitable studies, and we will ask the researchers who produced them to share their data with us; we will then make them available to everyone at the end of the project. We will combine data on species abundances before and after the land-use change with information about their ecological roles, to reveal how - and how quickly - changing land use affects the relative abundances of the various species and the ecological structure and function of the community. Does conversion of natural habitats to agriculture tend to favour smaller species over large ones, for instance, and if so how quickly? Is metabolism faster in more human-dominated land uses? These analyses will require new compilations of trait data for several ecologically important and highly diverse arthropod groups; to produce these, we will make use of the expertise, collections and library of the Natural History Museum. In an earlier NERC-funded project (PREDICTS: www.predicts.org.uk), we have already compiled over 500 data sets - provided by over 300 different researchers - that compared otherwise-matched sites where land use differed. The PREDICTS database has amassed over 2,000,000 records, from over 18,000 sites in 88 countries. The database includes more than 1% of all formally described species. Our analyses of this unprecedentedly large and representative data set indicate that land-use change has had a marked global impact on average local diversity.
However, because PREDICTS data sets are spatial rather than temporal comparisons, they are not well-suited to analysing the dynamics of how assemblages respond to land-use change. More fundamentally, PREDICTS' assumption that spatial comparisons are an adequate substitute for temporal data now needs testing. This proposal will deliver the necessary tests, as well as producing the most comprehensive picture of how land-use change reshapes ecological assemblages through time.


Grant
Agency: European Commission | Branch: H2020 | Program: RIA | Phase: INFRAIA-1-2014-2015 | Award Amount: 12.17M | Year: 2015

The overall objective will be to create and mobilise an international network of high-calibre centres around a strong European group of institutes selected for their appropriate expertise, to collect, amplify, characterise, standardise, authenticate, distribute and track mammalian and other exotic viruses. The network of EVAg laboratories, comprising 25 institutions, represents an extensive range of virological disciplines. The architecture of the consortium is based on the association of capacities accessible to the partners but also to any end-users through the EVAg web-based catalogue. This concept has been elaborated and tested for its efficiency during the successful EVA project (FP7). The project will integrate more facilities dedicated to high-risk pathogen (HRP) manipulation (1 in EVA, 13 in EVAg). Access to products derived from those HRP will be enhanced and, for instance, the production of diagnostic reagents will be facilitated. The new project will also provide access to high containment biosafety facilities to carry out in vivo studies of infectious disease using natural or model hosts, to look at prophylactic or therapeutic control measures and to develop materials for the evaluation of diagnostic tests, providing extensive capacity for service and training. EVAg will also link up with other network-based virus-associated programmes that exist globally. However, looking further ahead, EVAg is conceived ultimately to be an open entity aiming at developing synergies and complementary capabilities in such a way as to offer improved access to researchers. This project will generate the largest collection of mammalian viruses in the world and move beyond the current state-of-the-art to provide an increasingly valuable resource and service to the world's scientific community, including government health departments, higher education institutes, industry and, through information systems, the general public.


Grant
Agency: European Commission | Branch: FP7 | Program: CPCSA | Phase: INFRA-2008-1.2.2 | Award Amount: 3.70M | Year: 2009

A coherent classification and species checklist of the world's plants, animals, fungi and microbes is fundamental for accessing information about biodiversity. The Catalogue of Life provides the world with a unique service: a dynamically updated global index of validated scientific names, synonyms and common names integrated within a single taxonomic hierarchy. The Catalogue of Life was initiated as a European Scientific Infrastructure under FP5 and has a distributed knowledge architecture. Its federated e-compendium of the world's organisms grows rapidly (now covering well over one million species), and has established a formidable user base, including major global biodiversity portals as well as national biodiversity resources and individual users worldwide. Joint Research Activities in this 4D4Life Project will establish the Catalogue of Life as a state-of-the-art e-science facility based on an enhanced service-based distributed architecture. This will make it available for integration into analytical and synthetic distributed networks such as those developing in conservation, climate change, invasive species, molecular biodiversity and regulatory domains. User-driven enhancements in the presentation of distribution data and bio-data will be made. In its Networking Activities 4D4Life will strengthen the development of Global Species Databases that provide the core of the service, and extend the geographical reach of the programme beyond Europe by realizing a Multi-Hub Network integrating data from China, New Zealand, Australia, N. America and Brazil. Service Activities, the largest part of 4D4Life, will create new electronic taxonomy services, including a synonymy server, taxon name-change and download services, plus new educational and popular services, for instance for hand-held devices.


Grant
Agency: European Commission | Branch: FP7 | Program: CP | Phase: ENERGY.2009.5.1.1 | Award Amount: 6.06M | Year: 2010

In post-combustion CO2 capture, a main bottleneck causing significant reduction in power plant efficiency and preventing cost effectiveness is the low flue gas CO2 partial pressure, limiting membrane flux, solvent selection and capacity. In pre-combustion CO2 capture, key bottlenecks are the number of processing steps, possible low hydrogen pressure, and high hydrogen fraction in the fuel. Global deployment of CO2 capture is restrained by a general need for prior removal of SO2. iCap seeks to remove these barriers by developing new technologies with potential for reducing the current energy penalty to 4-5% points in power plant efficiency, to combine SO2 and CO2 removal, and to reduce the avoidance cost to 15 €/tonne CO2. iCap will: Develop solvents forming CO2 hydrates or two liquid phases enabling drastically increased liquid phase CO2 capacity, radically decreasing solvent circulation rates, introducing a new regime in desorption energy requirement, and allowing CO2 desorption at elevated pressures; Develop combined SO2 and CO2 capture systems increasing dramatically the potential for large-scale deployment of CCS in BRIC countries and for retrofit in Europe. Develop high-permeability/high-selectivity low-temperature polymer membranes, by designing ultra-thin composite membranes from a polymeric matrix containing ceramic nanoparticles. Develop mixed proton-electron conducting dense ceramic-based H2 membranes offering the combined advantages of theoretically infinite selectivity, high mechanical strength and good stability. Develop and evaluate novel coal and gas-based power cycles that allow post-combustion CO2 capture at elevated pressures, thus reducing the separation costs radically. Integrate the improved separation technologies in brownfield and greenfield power plants, and in novel power cycles, in order to meet the performance and cost targets of the project.


Grant
Agency: European Commission | Branch: FP7 | Program: CP-TP | Phase: KBBE.2011.1.2-04 | Award Amount: 4.89M | Year: 2012

ADAPTAWHEAT will show how flowering time variation can be exploited for the genetic improvement of the European wheat crop to optimise adaptation and performance in the light of predicted climate change. It will test current hypotheses that postulate that specific changes in ear emergence, and in the timing and duration of developmental phases (which are thought of as components of ear emergence), will improve wheat productivity. Precise genetic stocks varying in specific flowering time elements and subjected to genotyping and characterisation with diagnostic markers for key flowering time genes will be used to test these hypotheses. They will be phenotyped at the molecular (transcript abundance), physiological (growth stage dissection) and agronomic (yield components) levels in multiple field trials located at sites in Europe that represent regional agricultural diversity and at non-European locations that have mega-environments of relevance. Controlled environment experiments will investigate specific environmental interactions including day length, ambient temperature, and heat stress. Data analysis will aid the construction of new wheat flowering models that can be used to refine existing hypotheses. They will allow standing genetic variation for flowering time in European germplasm to be deployed more efficiently in wheat breeding programmes. This knowledge will be used to inform searches for specific phenotypic and molecular variants in diverse and non-adapted wheat germplasm panels provided by consortium members. Vital novel genetic variation will be efficiently imported into the germplasm of European wheat breeders. The project will deliver new diagnostic markers for genotyping, molecular reporters for novel breeding selection strategies and the tools and knowledge necessary for a combined physiology- and genomics-led predictive wheat breeding programme. A conduit for these outcomes will be three SMEs, who will exploit the tools developed to deliver these outcomes.


Patent
Csiro and Clinical Genomics Pty. Ltd. | Date: 2012-05-04

The present invention relates generally to a method for screening a subject for the onset, predisposition to the onset and/or progression of a colorectal neoplasm by screening for modulation in the level of expression of one or more nucleic acid markers. More particularly, the present invention provides a method for screening a subject for the onset, predisposition to the onset and/or progression of a colorectal neoplasm by screening for modulation in the level of expression of one or more gene markers in membranous microvesicles. The expression profiles of the present invention are useful in a range of applications including, but not limited to, those relating to the diagnosis and/or monitoring of colorectal neoplasms, such as colorectal adenomas and adenocarcinomas.


Patent
Clinical Genomics Pty Ltd and Csiro | Date: 2013-10-30

The present invention relates generally to nucleic acid molecules in respect of which changes to the DNA or to the RNA or protein expression profiles are indicative of the onset, predisposition to the onset and/or progression of a neoplasm. More particularly, the present invention is directed to nucleic acid molecules in respect of which changes to the DNA or to the RNA or protein expression profiles are indicative of the onset and/or progression of a large intestine neoplasm, such as an adenoma or an adenocarcinoma. The DNA or the expression profiles of the present invention are useful in a range of applications including, but not limited to, those relating to the diagnosis and/or monitoring of colorectal neoplasms, such as colorectal adenocarcinomas. Accordingly, in a related aspect the present invention is directed to a method of screening a subject for the onset, predisposition to the onset and/or progression of a neoplasm by screening for modulation in the DNA or the RNA or protein expression profile of one or more nucleic acid molecule markers.


Patent
Clinical Genomics Pty Ltd and Csiro | Date: 2016-06-22

A method of screening for the onset or predisposition to the onset of a large intestine neoplasm or monitoring the progress of a neoplasm in an individual, said method comprising assessing the methylation status of a DNA region selected from: (i) the region defined by Hg19 coordinates chr7:50344378..50472799 and 2kb upstream of the transcription start site; or (ii) the gene region, including 2kb upstream, of IKZF1; in a biological sample from said individual, wherein a higher level of methylation of the DNA regions of group (i) and/or (ii) relative to control levels is indicative of a large intestine neoplasm or a predisposition to the onset of a large intestine neoplastic state.
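The sketch below illustrates, in schematic form only, the kind of comparison the claim describes: average the methylation signal over the cited region (including 2 kb upstream) and ask whether it sits clearly above a control distribution. The input layout, helper names and z-score cut-off are illustrative assumptions, not the claimed method or any particular assay.

# Minimal illustrative sketch (Python), assuming CpG calls are available as
# (chromosome, position, beta-value) tuples; the threshold logic is hypothetical.
from statistics import mean, stdev

REGION = ("chr7", 50344378 - 2000, 50472799)  # Hg19 IKZF1 region cited above, plus 2 kb upstream

def region_mean_beta(cpg_calls, region):
    """Average beta value of CpG sites that fall inside `region`."""
    chrom, start, end = region
    betas = [b for c, p, b in cpg_calls if c == chrom and start <= p <= end]
    if not betas:
        raise ValueError("no CpG sites overlap the target region")
    return mean(betas)

def is_hypermethylated(sample_calls, control_region_means, z_cutoff=3.0):
    """Flag a sample whose region-average methylation sits well above controls."""
    mu, sigma = mean(control_region_means), stdev(control_region_means)
    z = (region_mean_beta(sample_calls, REGION) - mu) / sigma
    return z > z_cutoff

# Toy usage with made-up numbers only:
controls = [0.04, 0.05, 0.03, 0.06, 0.04, 0.05]               # region means from control samples
sample = [("chr7", 50350000, 0.62), ("chr7", 50400000, 0.58)]  # CpG calls from one test sample
print("screen positive:", is_hypermethylated(sample, controls))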


Patent
University of Melbourne, Grains Research & Development Corporation, University of Adelaide and Csiro | Date: 2013-11-27

The present invention relates generally to polysaccharide synthases. More particularly, the present invention relates to (1,3;1,4)-β-D-glucan synthases. The present invention provides, among other things, methods for influencing the level of (1,3;1,4)-β-D-glucan produced by a cell and nucleic acid and amino acid sequences which encode (1,3;1,4)-β-D-glucan synthases.


Patent
Clinical Genomics Pty Ltd and Csiro | Date: 2013-10-02

The present invention relates generally to nucleic acid molecules in respect of which changes to the DNA or to the RNA or protein expression profiles are indicative of the onset, predisposition to the onset and/or progression of a neoplasm. More particularly, the present invention is directed to nucleic acid molecules in respect of which changes to the DNA or to the RNA or protein expression profiles are indicative of the onset and/or progression of a large intestine neoplasm, such as an adenoma or an adenocarcinoma. The DNA or the expression profiles of the present invention are useful in a range of applications including, but not limited to, those relating to the diagnosis and/or monitoring of colorectal neoplasms, such as colorectal adenocarcinomas. Accordingly, in a related aspect the present invention is directed to a method of screening a subject for the onset, predisposition to the onset and/or progression of a neoplasm by screening for modulation in the DNA or the RNA or protein expression profile of one or more nucleic acid molecule markers.


Patent
Clinical Genomics Pty Ltd and Csiro | Date: 2013-10-02

The present invention relates generally to nucleic acid molecules in respect of which changes to the DNA or to the RNA or protein expression profiles are indicative of the onset, predisposition to the onset and/or progression of a neoplasm. More particularly, the present invention is directed to nucleic acid molecules in respect of which changes to the DNA or to the RNA or protein expression profiles are indicative of the onset and/or progression of a large intestine neoplasm, such as an adenoma or an adenocarcinoma. The DNA or the expression profiles of the present invention are useful in a range of applications including, but not limited to, those relating to the diagnosis and/or monitoring of colorectal neoplasms, such as colorectal adenocarcinomas. Accordingly, in a related aspect the present invention is directed to a method of screening a subject for the onset, predisposition to the onset and/or progression of a neoplasm by screening for modulation in the DNA or the RNA or protein expression profile of one or more nucleic acid molecule markers.


Patent
Clinical Genomics Pty. Ltd. and Csiro | Date: 2014-11-13

The present invention relates generally to nucleic acid molecules, the RNA and protein expression profiles of which are indicative of the onset, predisposition to the onset and/or progression of a neoplasm. More particularly, the present invention is directed to nucleic acid molecules, the expression profiles of which are indicative of the onset and/or progression of a large intestine neoplasm, such as an adenoma or an adenocarcinoma. The expression profiles of the present invention are useful in a range of applications including, but not limited to, those relating to the diagnosis and/or monitoring of colorectal neoplasms, such as colorectal adenocarcinomas. Accordingly, in a related aspect the present invention is directed to a method of screening a subject for the onset, predisposition to the onset and/or progression of a neoplasm by screening for modulation in the expression profile of one or more nucleic acid molecule markers.


Patent
Clinical Genomics Pty. Ltd. and Csiro | Date: 2015-08-12

The present invention relates generally to a nucleic acid molecule, the RNA and protein expression profiles of which are indicative of the onset, predisposition to the onset and/or progression of a large intestine neoplasm. More particularly, the present invention is directed to a nucleic acid molecule, the expression profiles of which are indicative of the onset and/or progression of a colorectal neoplasm, such as an adenoma or an adenocarcinoma. The expression profiles of the present invention are useful in a range of applications including, but not limited to, those relating to the diagnosis and/or monitoring of colorectal neoplasms, such as colorectal adenomas and adenocarcinomas. Accordingly, in a related aspect the present invention is directed to a method of screening a subject for the onset, predisposition to the onset and/or progression of a large intestine neoplasm by screening for modulation in the expression profile of said nucleic acid molecule markers.


Patent
Clinical Genomics Pty. Ltd and Csiro | Date: 2013-05-10

The present invention relates generally to a method of screening for the onset, predisposition to the onset and/or progression of a neoplasm. More particularly, the present invention relates to a method of screening for the onset, predisposition to the onset and/or progression of a neoplasm by screening for changes to the methylation levels of a panel of gene markers including BCAT1, IKZF1, IRF4, GRASP and/or CAHM. The method of the present invention is useful in a range of applications including, but not limited to, those relating to the diagnosis and/or monitoring of colorectal neoplasms, such as colorectal adenocarcinomas.
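As a rough illustration of how a multi-gene panel like this can be reduced to a single screen result, the sketch below applies a per-marker methylation threshold and calls the sample positive if at least one marker exceeds it. The thresholds and the decision rule are hypothetical assumptions for illustration, not the method claimed in the patent.

# Illustrative sketch (Python); panel members come from the abstract,
# thresholds and the "any marker positive" rule are assumptions.
PANEL = ("BCAT1", "IKZF1", "IRF4", "GRASP", "CAHM")
THRESHOLDS = {gene: 0.10 for gene in PANEL}  # hypothetical methylated-fraction cut-offs

def screen_sample(methylation_levels, min_positive=1):
    """Return (is_positive, positive_markers) for one sample.

    `methylation_levels` maps gene name -> measured methylation fraction.
    """
    positive = [g for g in PANEL
                if methylation_levels.get(g, 0.0) >= THRESHOLDS[g]]
    return len(positive) >= min_positive, positive

# Toy usage with made-up numbers only:
flagged, markers = screen_sample(
    {"BCAT1": 0.32, "IKZF1": 0.02, "IRF4": 0.01, "GRASP": 0.00, "CAHM": 0.04})
print("screen positive:", flagged, "| positive markers:", markers)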


Patent
Clinical Genomics Pty Ltd and Csiro | Date: 2016-06-22

A method of screening for the onset or predisposition to the onset of a large intestine neoplasm or monitoring the progress of a neoplasm in an individual, said method comprising assessing the methylation status of a DNA region selected from: (i) the region defined by Hg19 coordinates chr12:24964278..25102308 and 2kb upstream of the transcription start site, or (ii) the gene region, including 2kb upstream, of BCAT1, in a biological sample from said individual, wherein a higher level of methylation of the DNA regions of group (i) and/or (ii) relative to control levels is indicative of a large intestine neoplasm or a predisposition to the onset of a large intestine neoplastic state.


Patent
Clinical Genomics Pty Ltd and Csiro | Date: 2016-08-17

The present invention relates generally to nucleic acid molecules, the RNA and protein expression profiles of which are indicative of the onset, predisposition to the onset and/or progression of a neoplasm. More particularly, the present invention is directed to nucleic acid molecules, the expression profiles of which are indicative of the onset and/or progression of a large intestine neoplasm, such as an adenoma or an adenocarcinoma. The expression profiles of the present invention are useful in a range of applications including, but not limited to, those relating to the diagnosis and/or monitoring of colorectal neoplasms, such as colorectal adenocarcinomas. Accordingly, in a related aspect the present invention is directed to a method of screening a subject for the onset, predisposition to the onset and/or progression of a neoplasm by screening for modulation in the expression profile of one or more nucleic acid molecule markers.


Patent
Clinical Genomics Pty Ltd and Csiro | Date: 2013-10-02

The present invention relates generally to nucleic acid molecules in respect of which changes to the DNA or to the RNA or protein expression profiles are indicative of the onset, predisposition to the onset and/or progression of a neoplasm. More particularly, the present invention is directed to nucleic acid molecules in respect of which changes to the DNA or to the RNA or protein expression profiles are indicative of the onset and/or progression of a large intestine neoplasm, such as an adenoma or an adenocarcinoma. The DNA or the expression profiles of the present invention are useful in a range of applications including, but not limited to, those relating to the diagnosis and/or monitoring of colorectal neoplasms, such as colorectal adenocarcinomas. Accordingly, in a related aspect the present invention is directed to a method of screening a subject for the onset, predisposition to the onset and/or progression of a neoplasm by screening for modulation in the DNA or the RNA or protein expression profile of one or more nucleic acid molecule markers.


Grant
Agency: European Commission | Branch: FP7 | Program: CP-TP | Phase: KBBE.2013.1.2-08 | Award Amount: 7.75M | Year: 2014

MareFrame seeks to remove barriers preventing a more widespread use of an Ecosystem-based Approach to Fisheries Management (EAFM). It will develop assessment methods and a Decision Support Framework (DSF) for management of marine resources and thereby enhance the capacity to provide integrated assessment, advice and decision support for an EAFM. Enabling comparisons between relevant what-if scenarios and their likely consequences, the DSF will support the implementation of the new Common Fisheries Policy (CFP) and the Marine Strategy Framework Directive (MSFD). The project SMEs, together with RTD institutions and stakeholders, will develop and demonstrate the use of innovative decision support tools through training actions, role-play and workshops. Indicators of Good Environmental Status (GES) will be developed along with models for ecosystem-based management. The models will take multi-species approaches into account and be developed and compared through seven datasets from six European regional seas. The models will draw on historical data sets and data from new analytical methods. Model performance will be compared and evaluated using a simulated ecosystem as an operating model. Learning from the experience of previous and on-going research, MareFrame integrates stakeholders at its core using a co-creation approach that combines analytical and participatory processes to provide knowledge that can be applied to policy-making, improving management plans and implementation of EAFM. The project dissemination will use innovative ways to ensure effective usage of project outcomes. The work packages and the allocation of roles have been designed to ensure effective collaboration through the project's lifetime. MareFrame liaises with other national and international research projects and is of high relevance to the future management of living marine resources in Europe in a changing environment, taking a holistic view incorporating socio-economic and legislative issues.


Munns R., University of Western Australia | Munns R., CSIRO | Gilliham M., University of Adelaide
New Phytologist | Year: 2015

Soil salinity reduces crop yield. The extent and severity of salt-affected agricultural land is predicted to worsen as a result of inadequate drainage of irrigated land, rising water tables and global warming. The growth and yield of most plant species are adversely affected by soil salinity, but varied adaptations can allow some crop cultivars to continue to grow and produce a harvestable yield under moderate soil salinity. Significant costs are associated with saline soils: the economic costs to the farming community and the energy costs of plant adaptations. We briefly consider mechanisms of adaptation and highlight recent research examples through a lens of their applicability to improving the energy efficiency of crops under saline field conditions. © 2015 New Phytologist Trust.


Grant
Agency: European Commission | Branch: FP7 | Program: CP-IP | Phase: KBBE.2011.2.2-02 | Award Amount: 7.84M | Year: 2012

NutriTech will build on the foundations of traditional human nutrition research using cutting-edge analytical technologies and methods to comprehensively evaluate the diet-health relationship and critically assess their usefulness for the future of nutrition research and human well-being. Technologies include genomics, transcriptomics, proteomics, metabolomics, laser scanning cytometry, NMR-based lipoprotein profiling and advanced imaging by MRI/MRS. All methods will be applied in an integrated manner to quantify the effect of diet on phenotypic flexibility, based on metabolic flexibility (the capacity for the organism to adapt fuel oxidation to fuel availability). However, NutriTech will move beyond the state-of-the-art by applying these integrated methods to assess the underlying and related cell biological and genetic mechanisms and multiple physiological processes of adaptation when homeostasis is challenged. Methods will in the first instance be evaluated within a human intervention study, and the resulting optimal methods will be validated in a number of existing cohorts against established endpoints. NutriTech will disseminate the harmonised and integrated technologies on a global scale through a large academic network including 6 non-EU partners and by providing an integrated and standardised data storage and evaluation platform. The impact of NutriTech will be manifold and exploitation is crucial as major breakthroughs from our technology and research are expected. This will be achieved by collaboration with a consortium of 8 major food industries and by exploitation of specific technologies by our 6 SME partners. Overall, NutriTech will lay the foundations for successful integration of emerging technologies into nutrition research.


Grant
Agency: European Commission | Branch: FP7 | Program: CP-CSA-Infra | Phase: INFRA-2011-1.1.8. | Award Amount: 7.16M | Year: 2012

The ability to quantitatively analyze plant phenotypic traits (from single cells to plant and stand level) and their dynamic responses to the environment is an essential requirement for genetic and physiological research, and the cornerstone for enabling applications of scientific findings to the bioeconomy. Whereas molecular profiling technologies today allow the generation of large amounts of data at decreasing cost, largely due to automation and robotics, the understanding of the link between genotype and phenotype has progressed more slowly. Insufficient technical and conceptual capacity of the plant scientific community to probe existing genetic resources and unravel environmental effects limits faster progress in this field. The development of robust and standardized phenotyping applications depends on the availability of specialised infrastructure, technologies and protocols. Europe has become a key driver in defining innovative solutions in academic and industrial settings. However, the existing initiatives at the local or member-state level represent a fragmented research landscape with similar goals. The aim of this project is to create synergies between the leading plant phenotyping institutions in Europe as a nucleus for the development of a strong European Plant Phenotyping Network (EPPN). The project fosters the development of an effective European infrastructure including human resources, expertise and communication needed to support transnational access to user communities. Joint research activities will adapt and develop novel sensors and methods for application in plant phenotyping. Innovative phenotyping concepts integrating mechanistic, medium- and high-throughput as well as field phenotyping will be developed and made available to the community. This project will strengthen Europe's leading role in plant phenotyping research and application through the creation of a community of research institutes, universities, industry and SMEs.


Grant
Agency: European Commission | Branch: FP7 | Program: CP-IP | Phase: KBBE.2013.2.2-02 | Award Amount: 13.01M | Year: 2013

Emerging evidence indicates that the gut microbiome contributes to our ability to extract energy from the diet and influences development and function of the immune, endocrine and nervous systems, which regulate energy balance and behaviour. This has led to the hypothesis that developing microbiome-based dietary interventions can be a cost-effective measure to prevent diet-related and behavioural disorders. Yet this approach is restricted in practice by a lack of understanding of the specific species that contribute to these disorders and their interactions with host and lifestyle determinants. To progress beyond the state of the art, the MyNewGut proposal aims to: (1) shed light on the contribution of the human microbiome to nutrient metabolism and energy expenditure; (2) identify microbiome-related features that contribute to or predict obesity and associated disorders in human epidemiological studies; (3) understand how the microbiome is influenced by environmental factors and its role in brain and immune development and function in humans; and (4) provide proof-of-concept of the disease risk-reduction potential of dietary intervention with new foods/ingredients targeting the gut microbiome in humans. To this end, a translational multidisciplinary research strategy will be developed, combining experts in omic-technologies and all other scientific disciplines required. Consequently, the MyNewGut proposal will contribute to developing new approaches to prevent diet-related diseases (metabolic syndrome and obesity) and behavioural disorders through lifestyle changes, intake of pro- and prebiotics and semi-personalised and innovative food products. This will ultimately contribute to increasing the competitiveness of the European food industry and provide consumers with reliable claims on foods. Results will also help inform new strategies on public health, support EU legislation and improve the position of the EU in the field of food-related disease prevention.


Grant
Agency: European Commission | Branch: FP7 | Program: NoE | Phase: KBBE-2007-2-3-06 | Award Amount: 6.81M | Year: 2009

The Network of Excellence HighTech Europe facilitates the implementation of high-tech processing at industrial scale via strongly connected regional knowledge transfer chains consisting of basic-applied science centres (12 in total), industry federations (4), Innovation Relay Centre (1) and SMEs (5). Its main aim is to identify, develop and demonstrate potential cost-efficient innovations to be used by SMEs via a Science Cube approach and availability of high-tech pilot facilities. This approach links innovation sources (bio-, nano- and information and communication technologies), scientific principles and food engineering operations. In addition, industrial needs are mapped and linked with the results from the Science Cube approach, using a novel Lighthouse Watcher concept. New routes for implementation are developed, such as a Knowledge Auction and an Implementation Award honouring full-chain innovations. Next, knowledge transfer schemes elucidate feasibility studies and business cases based on unique patent portfolios, convincing entrepreneurs and early adopters to acquire lead market positions. Here, ethical, legal and social aspects and consumer perception regarding high-tech food processing are taken into account in order to set up a first, well-balanced Agenda of the White Book on high-tech food processing for policy and regulatory bodies. Dissemination of European findings and training & career development, especially for young scientists sharing spirit and enthusiasm with their senior colleagues in the Network of Excellence, help attract new stakeholders and the next generations to a durable network that will finally merge into the future European Institute for Food Processing. Herein, critical mass and joint initiatives at the interface of the Food Manufacturers and Technology Providers substantially increase the innovation rate and competitiveness of the European Agro-Food sector.


Patent
Csiro and Clinical Genomics Pty. Ltd. | Date: 2011-09-13

The present invention relates generally to nucleic acid molecules in respect of which changes to DNA methylation levels are indicative of the onset or predisposition to the onset of a neoplasm. More particularly, the present invention is directed to nucleic acid molecules in respect of which changes to DNA methylation levels are indicative of the onset and/or progression of a large intestine neoplasm, such as an adenoma or adenocarcinoma. The DNA methylation status of the present invention is useful in a range of applications including, but not limited to, those relating to the diagnosis and/or monitoring of colorectal neoplasms, such as colorectal adenocarcinomas. Accordingly, in a related aspect the present invention is directed to a method of screening for the onset, predisposition to the onset and/or progression of a neoplasm by screening for modulation in DNA methylation of one or more nucleic acid molecules.


News Article | November 8, 2016
Site: www.techradar.com

In July 2015, Russian internet entrepreneur Yuri Milner announced that he would be giving $100 million to the 'Breakthrough Listen' initiative - the biggest hunt for aliens that the world has ever seen. The project began on the Green Bank Telescope in West Virginia, and at Lick Observatory in California, but now they've been joined by the Parkes Radio Telescope in New South Wales, Australia. “The addition of Parkes is an important milestone,” Milner said. “These major instruments are the ears of planet Earth, and now they are listening for signs of other civilizations.” After 14 days of commissioning and test observations, the hunt began on 7 November with observations of Proxima b - a newly-discovered Earth-sized planet orbiting Proxima Centauri. The planet is known to be in the habitable zone of its red dwarf star, which is about 4.2 light years from Earth, meaning that it's possible that there is liquid water on the surface. “The chances of any particular planet hosting intelligent life-forms are probably minuscule,” said Andrew Siemion, director of UC Berkeley SETI Research Center. “But once we knew there was a planet right next door, we had to ask the question, and it was a fitting first observation for Parkes. To find a civilization just 4.2 light years away would change everything.” That's just the start, however. The Parkes observations will cover all 43 stars within five parsecs of Earth, listening at frequencies between one and 15 gigahertz, as well as 1000 stars within 50 parsecs at 1-4GHz and a million others at the same frequencies for just a minute each. After that, it will scan the galactic plane and center, the centers of 100 other nearby galaxies, and a bunch of other, more exotic sources - like white dwarfs, neutron stars, and black holes - all at 1-4GHz. All the data gathered will be freely available to the public online, and the Breakthrough Institute has invited scientists, programmers, students, and anyone else interested to help sift through the observations and see if there's anything interesting in them. If that sounds a bit much, you can just install the associated app, allowing your computer to comb through the data itself in idle moments. "The Parkes Radio Telescope is a superb instrument, with a rich history," said Pete Worden, Chairman of Breakthrough Prize Foundation and Executive Director of the Breakthrough Initiatives. "We’re very pleased to be collaborating with CSIRO to take Listen to the next level."


News Article | December 6, 2016
Site: www.theguardian.com

The energy and environment minister, Josh Frydenberg, has folded in the face of internal pressure, declaring the Turnbull government will not pursue emissions trading as part of adjusting its climate policy to meet Australia’s international emissions reduction targets. In media interviews on Monday morning Frydenberg explicitly said a looming review of the government’s Direct Action climate change policy would canvass the desirability of a trading scheme for the electricity sector. But after backbenchers and one cabinet minister – Christopher Pyne – dissented vociferously from that view over the course of Monday, Frydenberg told 3AW on Tuesday night he had not flagged an emissions intensity trading scheme for the electricity sector. “It’s always been our policy to have a review. I didn’t mention an emissions intensity scheme – that’s not in any document the Coalition has put out,” the minister said on Wednesday night. “The Turnbull government is not contemplating such a scheme. We are not advocating such a scheme”. “What we are focused on is driving down electricity prices and … energy security.” He said an emissions trading scheme was “not the policy of the Turnbull government”. The recanting followed a day of canvassing government dissenters. Coalition sources have told Guardian Australia Frydenberg spoke to a number of opponents of carbon pricing over the course of Tuesday. On Tuesday night, the former prime minister Tony Abbott also weighed into the poisonous internal debate. He told Sky News: “I’m sure the last thing ministers want to do is reopen questions that were settled for our side back in 2009.” “We’re against a carbon tax. We’re against an emissions trading scheme. We’re against anything that’s a carbon tax or an ETS by stealth,” Abbott said. “We are the party of lower power prices and should let Labor be the party that artificially increases [electricity] prices under Greens pressure.” The terms of reference for the Direct Action review Frydenberg outlined at the beginning of the week include considering policy mechanisms to reduce emissions on a “sector-by-sector basis” – which was interpreted by analysts as a green light for the review to consider an emissions trading scheme in the electricity sector. On Monday Frydenberg went a step further, telling the ABC the government would look at an emissions intensity scheme for the electricity sector as part of the review of Direct Action. “Now, as you know, the electricity sector is the one that produces the most emissions; around a third of Australia’s emissions come from that sector,” the minister told the ABC’s AM program. “We know that there’s been a large number of bodies that have recommended an emissions intensity scheme, which is effectively a baseline and credit scheme.” Frydenberg’s initially positive comments on Monday are in line with advice to the government from the Climate Change Authority, from the energy industry and the CSIRO. On Tuesday Australia’s electricity and gas transmission industry called on the Turnbull government to implement a form of carbon trading in the national electricity market by 2022 and review the scope for economy-wide carbon pricing by 2027. While Abbott once characterised carbon pricing as a wrecking ball through the Australian economy, the new report, backed by the CSIRO, says adopting an emissions intensity scheme is the least costly way of reducing emissions, and could actually save customers $200 a year by 2030.
Some stakeholders believe the Finkel review into energy security and Australia’s climate commitments may also float the desirability of an emissions intensity scheme for the electricity sector when it presents its preliminary findings to Friday’s COAG meeting of the prime minister and the premiers. Business and Australia’s energy sector have been calling on the government to deliver policy certainty in order to allow an orderly transition in the electricity sector from emissions-intensive sources of power generation to low-emissions technologies.


News Article | September 13, 2016
Site: phys.org

This profound observation displaced Earth from its position at the centre of the universe to just one planet among many. It also sparked a new golden era of optical astronomy, which continues to this day. In September 2015 the Advanced Laser Interferometer Gravitational-Wave Observatory (aLIGO) detected the gravitational waves emitted by two coalescing black holes. This remarkable discovery opened up a new window on the universe, using gravitational waves rather than electromagnetic waves to peer into the far reaches of the cosmos. A little before aLIGO's successful detection, I was invited to put together a team to bid for an Australian Research Council Centre of Excellence for Gravitational Wave Discovery, to be known as "OzGRav". Centres of Excellence are a scientist's idea of funding nirvana because they provide guaranteed funding for seven years. So instead of writing annual grant applications with a slim chance of success of getting a fraction of what you asked for, you can plan and execute a serious scientific agenda with critical mass. But the competition is fierce, and the chances of success are small, and funding rounds are only held every three years or so. To be successful, Centres need bold visions and ambitious objectives. Our main problem when we submitted our pitch was that no-one had detected gravitational waves yet, and we were relying on the promise of new instruments like aLIGO to deliver in an area that was still void of positive results. But unbeknown to any of us, the enormous burst of gravitational waves from GW150914 was en route to Earth and due to strike it just two months after our initial application was submitted. The gravitational waves were generated more than a billion years ago when two enormous black holes merged after a death spiral. And shortly after the aLIGO gravitational wave detector was turned on, it saw the characteristic "chirp" as space-time shook during its passage. Many of my OzGRav team had aided in the construction of aLIGO, and its precision is mind-blowing. When the gravitational waves from the first source ever detected (GW150914) were impacting the four-kilometre-long arms of the detector, the arms shook by the equivalent of less than the width of a human hair at the distance of the nearest star! So when our grant was being assessed, gravitational waves were still just a twinkle in the scientific community's eye. One of our assessors even made it very clear that physicists were always promising to detect gravitational waves but none had been found. With some luck we were selected to submit a full proposal, one of only 20 teams to do so. By this time, many of my collaborators were fully aware that the first gravitational waves had been discovered. But they were bound by the strict rules of the LIGO Scientific Consortium that prohibited them from telling me (the proposed Director of the Centre), or from putting this news in our proposal or the rejoinder. It must have been killing them. All we could say was the data were looking really exciting! Fortunately for us, the discovery of gravitational waves was announced just prior to the interviews of the final 20 Centre of Excellence teams, and many of my team were invited to parliament house to describe their role in the discovery. Last week we heard that we were one of the nine Centres fortunate enough to gain funding.
I'm certain this is at least partly attributable to the fact that a billion years ago in a galaxy far, far away, two black holes, some 30 times the mass of our sun, tore each other apart, releasing gravitational waves in the process. The impact of this discovery has been remarkable. In only six months the discovery paper has already gathered 641 citations. Another black hole merger event was published by the LIGO consortium in June, and after some tweaks to its hardware the (now) "telescope" is gearing up for its second major run, which seems certain to discover more events. OzGRav has three major themes that will be driving its research programmes: instrumentation, data and astrophysics. The instrumentation behind these gravitational wave detectors is truly remarkable. OzGRav scientists will aid in the enhancement of aLIGO so that it is even more sensitive, using amazing tricks such as quantum squeezing. We will also help design and ultimately construct the next-generation detectors that aim to detect thousands of events per year. To narrow down the possible locations of these events, it would also make a lot of sense to build one of these new detectors in Australia. But aLIGO isn't the only detector capable of discovering gravitational waves. Radio astronomers can use neutron stars (pulsars) that rotate many hundreds of times per second to sense "disturbances in the space-time continuum" induced by the gravitational waves coming from super-massive black holes. OzGRav engineers are currently designing the supercomputers that will monitor dozens of these stars using the Square Kilometre Array. The CSIRO's Parkes telescope is also having a powerful new receiver fitted to continue its leading role in this area of science. Swinburne University of Technology will host the Centre headquarters and design a supercomputer custom-built to process the data coming from the gravitational wave detectors. These data will be processed to look for not just merging black holes, but also neutron stars. And the closest neutron stars will be monitored to see if tiny "magnetic mountains" on their surfaces cause them to generate detectable gravitational wave emission. OzGRav's astronomers will also use a network of telescopes at traditional frequencies (optical and radio) to search for evidence of gravitational wave events at other wavelengths, to help identify the host galaxies (or lack thereof?) and to understand where the sources of gravitational waves come from. Finally, our astrophysicists will attempt to explain what our detectors see, and whether Einstein's theory of general relativity is correct or needs some tweaks. Fortunately Australian scientists can fully engage with this new window on the universe and participate in the first decade of this exciting new era of gravitational wave astrophysics thanks to the Australian Research Council's Centre of Excellence programme.


News Article | February 24, 2017
Site: phys.org

The researchers, from the ARC Centre of Excellence in Plant Energy Biology, in collaboration with CSIRO, carried out a long-term study, following the discovery of a mutation in the genetic makeup of a plant that alters its ability to recover from stressful factors. In order to survive being rooted to one spot, plants must adapt fast to stresses in their environment, which include pathogens and harsh changes in weather and temperature. The researchers chemically induced stress in the roots of plants, treating them with salicylic acid, to examine the signalling response inside the plants' cells. They observed key changes in a particular enzyme (called succinate dehydrogenase) that lead to the complete loss of stress signalling. The impact of this tiny change is an inability of the plant to fight off disease-causing pathogens. Lead researcher Ms Katharina Belt said the finding suggests that this enzyme plays an important role in plant resistance to pathogen-induced stress. "It is astonishing to realise that the part of the plant that we knew is responsible for energy production, is also involved in how plants cope with stress," Ms Belt said. Ms Belt said that a better understanding of how plants deal with stress could open up new opportunities to develop stronger plants for the future. "Much research is needed to address the dramatic impacts of a growing population and decreasing agricultural land," she said. "It is hoped that this research will contribute to the science community's thinking about how to create more efficient and robust plants. "This could help to combat food security issues we face in these times of climate change." The researchers plan to use these findings to drive further research into how to equip plants with a more efficient stress response, making them more resilient. This could become an important new step in improving agricultural yields. The research was recently published in the journal Plant Physiology. More information: Katharina Belt et al. Salicylic acid-dependent plant stress signalling via mitochondrial succinate dehydrogenase, Plant Physiology (2017). DOI: 10.1104/pp.16.00060


Grant
Agency: European Commission | Branch: FP7 | Program: CP-CSA | Phase: ENERGY.2013.10.1.10 | Award Amount: 21.20M | Year: 2014

Concentrating Solar Thermal Energy encompasses Solar Thermal Electricity (STE), Solar Fuels, Solar Process Heat and Solar Desalination that are called to play a major role in attaining energy sustainability in our modern societies due to their unique features: 1) Solar energy offers the highest renewable energy potential to our planet; 2) STE can provide dispatchable power in a technically and economically viable way, by means of thermal energy storage and/or hybridization, e.g. with biomass. However, significant research efforts are needed to achieve this goal. This Integrated Research Programme (IRP) engages all major European research institutes, with relevant and recognized activities on STE and related technologies, in an integrated research structure to successfully accomplish the following general objectives: a) Convert the consortium into a reference institution for concentrating solar energy research in Europe, creating a new entity with effective governance structure; b) Enhance the cooperation between EU research institutions participating in the IRP to create EU added value; c) Synchronize the different national research programs to avoid duplication and to achieve better and faster results; d) Accelerate the transfer of knowledge to industry in order to maintain and strengthen the existing European industrial leadership in STE; e) Expand joint activities among research centres by offering researchers and industry a comprehensive portfolio of research capabilities, bringing added value to innovation and industry-driven technology; f) Establish the European reference association for promoting and coordinating international cooperation in concentrating solar energy research. To that end, this IRP promotes Coordination and Support Actions (CSA) and, in parallel, performs Coordinated Projects (CP) covering the full spectrum of current concentrating solar energy research topics, selected to provide the highest EU added value and filling the gaps among national programs.


News Article | November 7, 2016
Site: www.eurekalert.org

A new comprehensive study of Australian natural hazards paints a picture of increasing heatwaves and extreme bushfires as this century progresses, but with much more uncertainty about the future of storms and rainfall. Published today (Tuesday 8 November) in a special issue of the international journal Climatic Change, the study documents the historical record and projected change of seven natural hazards in Australia: flood; storms (including wind and hail); coastal extremes; drought; heatwave; bushfire; and frost. "Temperature-related hazards, particularly heatwaves and bushfires, are increasing, and projections show a high level of agreement that we will continue to see these hazards become more extreme into the 21st century," says special issue editor Associate Professor Seth Westra, Head of the Intelligent Water Decisions group at the University of Adelaide. "Other hazards, particularly those related to storms and rainfall, are more ambiguous. Cyclones are projected to occur less frequently but when they do occur they may well be more intense. In terms of rainfall-induced floods we have conflicting lines of evidence with some analyses pointing to an increase into the future and others pointing to a decrease. "One thing that became very clear is how much all these hazards are interconnected. For example drought leads to drying out of the land surface, which in turn can lead to increased risk of heat waves and bushfires, while also potentially leading to a decreased risk of flooding." The importance of interlinkages between climate extremes was also noted in the coastal extremes paper: "On the open coast, rising sea levels are increasing the flooding and erosion of storm-induced high waves and storm surges," says CSIRO's Dr Kathleen McInnes, the lead author of the coastal extremes paper. "However, in estuaries where considerable infrastructure resides, rainfall runoff adds to the complexity of extremes." This special issue represents a major collaboration of 47 scientists and eleven universities through the Australian Water and Energy Exchange Research Initiative, an Australian research community program. The report's many authors were from the Centre of Excellence for Climate System Science, the CSIRO, Bureau of Meteorology, Australian National University, Curtin University, Monash University, University of Melbourne, University of Western Australia, University of Adelaide, University of Newcastle, University of New South Wales, University of Tasmania, University of Western Australia and University of Wollongong. The analyses aim to disentangle the effects of climate variability and change on hazards from other factors such as deforestation, increased urbanisation, people living in more vulnerable areas, and higher values of infrastructure. "The study documents our current understanding of the relationship between historical and possible future climatic change with the frequency and severity of Australian natural hazards," says Associate Professor Westra. "These hazards cause multiple impacts on humans and the environment and collectively account for 93% of Australian insured losses, and that does not even include drought losses. "We need robust decision-making that considers the whole range of future scenarios and how our environment may evolve. The biggest risk from climate change is if we continue to plan as though there will be no change. One thing is certain: our environment will continue to change." 
Contact: Associate Professor Seth Westra, Special Issue Editor, School of Civil, Environmental and Mining Engineering, University of Adelaide. Phone: 61-8-8313-1538. Mobile: 61-0-414-997-406. Email: seth.westra@adelaide.edu.au


News Article | November 7, 2016
Site: www.sciencedaily.com

A new comprehensive study of Australian natural hazards paints a picture of increasing heatwaves and extreme bushfires as this century progresses, but with much more uncertainty about the future of storms and rainfall. Published in a special issue of the international journal Climatic Change, the study documents the historical record and projected change of seven natural hazards in Australia: flood; storms (including wind and hail); coastal extremes; drought; heatwave; bushfire; and frost. "Temperature-related hazards, particularly heatwaves and bushfires, are increasing, and projections show a high level of agreement that we will continue to see these hazards become more extreme into the 21st century," says special issue editor Associate Professor Seth Westra, Head of the Intelligent Water Decisions group at the University of Adelaide. "Other hazards, particularly those related to storms and rainfall, are more ambiguous. Cyclones are projected to occur less frequently but when they do occur they may well be more intense. In terms of rainfall-induced floods we have conflicting lines of evidence with some analyses pointing to an increase into the future and others pointing to a decrease. "One thing that became very clear is how much all these hazards are interconnected. For example drought leads to drying out of the land surface, which in turn can lead to increased risk of heat waves and bushfires, while also potentially leading to a decreased risk of flooding." The importance of interlinkages between climate extremes was also noted in the coastal extremes paper: "On the open coast, rising sea levels are increasing the flooding and erosion of storm-induced high waves and storm surges," says CSIRO's Dr Kathleen McInnes, the lead author of the coastal extremes paper. "However, in estuaries where considerable infrastructure resides, rainfall runoff adds to the complexity of extremes." This special issue represents a major collaboration of 47 scientists and eleven universities through the Australian Water and Energy Exchange Research Initiative (www.ozewex.org), an Australian research community program. The report's many authors were from the Centre of Excellence for Climate System Science, the CSIRO, Bureau of Meteorology, Australian National University, Curtin University, Monash University, University of Melbourne, University of Western Australia, University of Adelaide, University of Newcastle, University of New South Wales, University of Tasmania, University of Western Australia and University of Wollongong. The analyses aim to disentangle the effects of climate variability and change on hazards from other factors such as deforestation, increased urbanisation, people living in more vulnerable areas, and higher values of infrastructure. "The study documents our current understanding of the relationship between historical and possible future climatic change with the frequency and severity of Australian natural hazards," says Associate Professor Westra. "These hazards cause multiple impacts on humans and the environment and collectively account for 93% of Australian insured losses, and that does not even include drought losses. "We need robust decision-making that considers the whole range of future scenarios and how our environment may evolve. The biggest risk from climate change is if we continue to plan as though there will be no change. One thing is certain: our environment will continue to change." 
Some of the key findings from the studies include:
• Historical information on the most extreme bushfires -- so-called "mega fires" -- suggests an increased occurrence in recent decades, with strong potential for them to increase in frequency in the future. Over the past decade, major bushfires at the margins of Sydney, Canberra, and Melbourne have burnt more than a million hectares of forests and woodlands and resulted in the loss of more than 200 lives and 4000 homes.
• Heatwaves are Australia's most deadly natural hazard, causing 55% of all natural disaster related deaths, and increasing trends in heatwave intensity, frequency and duration are projected to continue throughout the 21st century.
• The costs of flooding have increased significantly in recent decades, but factors behind this increase include changes in reporting mechanisms, population, land use and infrastructure as well as extreme rainfall events. The physical size of floods has either not changed at all, or has even decreased in many parts of the country.


News Article | March 12, 2016
Site: www.techtimes.com

When it comes to climate change issues, the spotlight is always on the regulation of carbon dioxide emissions from fossil fuels. However, focusing only on CO2 means overlooking other factors that drive the rise of global temperatures, a new study revealed. These include activities of the terrestrial biosphere: global food production, rice cultivation and animal farming, as well as waste disposal. Such activities produce two other main greenhouse gases: methane and nitrous oxide. In 2014, a study revealed that although increased farming provides us with food, it is also responsible for climate change. Now, a new study, featured in the journal Nature, points to global agriculture as another culprit that contributes to climate change in the same way that fossil fuels do. Every year, our terrestrial biosphere -- the surface where land animals, plants and microorganisms dwell -- absorbs about a quarter of the total CO2 emissions that humans produce. This helps moderate global temperatures. Humans produce a whopping 40 billion tons of CO2 emissions annually, which come from deforestation and the burning of fossil fuels. This has contributed to the 82 percent rise in warming due to greenhouse gases over the past decade. But there are other greenhouse gases that are far more potent than CO2. According to lead study author Hanqin Tian of Auburn University, the global warming potential of methane is 28 times that of CO2, and the global warming potential of nitrous oxide is 265 times that of CO2 over the course of a century. "These two gases are really important non-CO2 greenhouse gases," said Tian. Methane not only emerges from oil leaks and gas operations, it is also released by ruminants, landfills and wetlands. Nitrous oxide, on the other hand, comes from nitrogen fertilizers used in agriculture. Only 17 out of 100 units of nitrogen applied to the crop system end up in the food we eat, Business Insider reported. Tian and his colleagues calculated how much nitrous oxide, methane and CO2 the terrestrial biosphere is absorbing each year versus how much it is releasing. If the land absorbs more than it produces, it is called a "sink." If it produces more than it absorbs, it is a "source." The researchers found that although terrestrial living things absorb more CO2 every year than they produce, they are still a net source of nitrous oxide and methane. When the three gases are converted into a comparable unit based on their potential to warm Earth over a century (a rough worked conversion is sketched after this article), the biosphere becomes a source of greenhouse gases. It would cause warming equal to about 3.8 to 5.4 billion tons of CO2 emissions every year. "Human actions not only are emitting greenhouse gases based on our own activities, but also are causing plants and animals and microbes to be net emitters of greenhouse gases as well," said study co-author Anna Michalak. The findings of the study therefore overturn the assumption that the terrestrial biosphere is a net sink, especially because earlier studies did not take methane and nitrous oxide into account, the researchers said. The scientists call on the public to direct attention to the role of the food system in climate mitigation. Countries have shown little interest in the matter because one thing is at stake: feeding people. Pep Canadell from CSIRO told ABC News that we need to rethink the ways we feed the population, especially a population interested in meat-rich diets. "We really need to look at completely different ways to become much more efficient," said Canadell.
This would involve sustainable intensification of lands that would minimize the overall impact of other greenhouse gases such as nitrous oxide and methane, he added.
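The CO2-equivalent conversion referred to above is a simple weighted sum. The sketch below is a minimal illustration in Python, using the 100-year global warming potentials quoted in the article (28 for methane, 265 for nitrous oxide); the flux numbers are made-up placeholders, not the study's data, and the function and variable names are assumptions of the example.

```python
# Rough sketch: converting non-CO2 greenhouse gases into CO2 equivalents
# using the 100-year global warming potentials (GWP-100) quoted in the article.
# The emission figures below are illustrative placeholders, not the study's data.

GWP_100 = {"co2": 1, "ch4": 28, "n2o": 265}

def co2_equivalent(emissions_mt):
    """Convert a dict of {gas: megatonnes emitted} into total Mt CO2-equivalent."""
    return sum(mass * GWP_100[gas] for gas, mass in emissions_mt.items())

# Hypothetical net biospheric fluxes in megatonnes per year (positive = source).
example_fluxes = {"co2": -1500.0, "ch4": 150.0, "n2o": 10.0}

total = co2_equivalent(example_fluxes)
print(f"Net flux: {total:.0f} Mt CO2-eq per year "
      f"({'source' if total > 0 else 'sink'})")
```

With these made-up numbers the methane and nitrous oxide terms outweigh the CO2 uptake, which is the qualitative point the study makes: a land surface that absorbs CO2 can still be a net source of warming once all three gases are counted.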


News Article | December 15, 2016
Site: www.materialstoday.com

The porous crystals known as metal-organic frameworks (MOFs) consist of metallic intersections connected by organic molecules. Thanks to their high porosity, MOFs have an extremely large surface area: a teaspoonful of MOF has the same surface area as a football pitch. The large number of pores situated in an extremely small space offer room for ‘guests’, allowing MOFS to be used for gas storage or as a ‘molecular gate’ for separating chemicals. But MOFs have a much greater potential, and this is what Paolo Falcaro from the Institute of Physical and Theoretical Chemistry (PTC) at the Graz University of Technology (TU Graz) in Austria wants to unlock. “MOFs are prepared by self-organization,” Falcaro explains. “We don’t have to do anything other than mix the components, and the crystals will grow by themselves. However, crystals grow with random orientation and position, and thus their pores. Now, we can control this growth, and new properties of MOFs will be explored for multifunctional use in microelectronics, optics, sensors and biotechnology.” In a paper in Nature Materials, Falcaro and his team report a method for growing MOFs on a comparatively large surface area of 1cm2 that offers an unprecedented level of control over the orientation and alignment of the crystals. Other members of the team include Masahide Takahashi from Osaka Prefecture University in Japan and researchers from the University of Adelaide, Monash University and the Commonwealth Scientific and Industrial Research Organisation (CSIRO), all in Australia. Incorporating functional materials into these precisely-oriented crystals allows the creation of anisotropic materials, which are materials with directionally-dependent properties. In the paper, the research team describes incorporating fluorescent molecules into a precisely-oriented MOF. Just by rotating the film, the fluorescent signal can be turned ‘on’ or ‘off’, producing an optically-active switch. “This has many conceivable applications and we’re going to try many of them with a variety of different functionalities,” says Falcaro. “One and the same material can show different properties through different orientations and alignments. Intentional growth of MOFs on this scale opens up a whole range of promising applications which we’re going to explore step by step.” A major aim of Falcaro and his team at TU Graz is developing MOFs for biotechnological applications. “We are trying to encapsulate enzymes, proteins and even DNA in MOFs and to immunize their activity against fluctuations in temperature,” he says. “The crystalline structure surrounding the ‘guest’ in the pore has a protective effect, like a tough jacket. We want to check out the possibilities more accurately.” This story is adapted from material from TU Graz, with editorial changes made by Materials Today. The views expressed in this article do not necessarily represent those of Elsevier. Link to original source.


News Article | September 15, 2016
Site: cleantechnica.com

Australia’s federal scientific research organization CSIRO launched its Centre for Hybrid Energy Systems earlier this month, which is a centre designed to research and improve “cutting edge” renewable and hybrid energy technologies. The Centre for Hybrid Energy Systems (CHES) is intended to be a “hub for researchers and industry to identify, improve, and then tailor energy technologies to meet specific requirements.” The project has the backing of CSIRO’s long and successful history, which includes the invention of insect repellent, the development of the Parkes Observatory which broadcast the Apollo 11 pictures to the world, and which was integral to the development of WiFi. Now, CSIRO is turning its attention to the development of hybrid technologies — where two or more forms of energy generation, storage, or end-use technologies are combined, bringing with it overall cost and efficiency benefits (as compared to single-source energy systems) that will be integral to the widespread growth of renewable technology over the next years. The Centre will aim to create a collaborative space that will allow researchers, industry, and government to maximize the value of local energy sources. “These technologies are becoming cost competitive, but the key to greater use is to combine them in connected hybrid systems,” said CSIRO Fellow Dr Sukhvinder Badwal, referring to renewable and modular power generation, as well as solar technologies such as batteries, fuel cells, and household solar. “By doing this, we can offer substantial improvements in performance, reliability of power, flexibility and cost.” “The opening of the Centre for Hybrid Energy Systems will expand research in this area and marks a significant milestone to ensure the success of any industry cooperation,” added Allen Chao, Australian Director of Delta Energy Systems, a developer and manufacturer of solar electric vehicle fast-charging technologies, which is partnering with CSIRO in CHES.


News Article | December 17, 2016
Site: cleantechnica.com

Recently on Quora I was challenged by a researcher in the space to assess the likely capacity of soil carbon sequestration approaches (sometimes referred to as biological carbon capture and sequestration, or BCCS). The premise was that two thirds of the carbon which had been sequestered in the soil had been lost into the atmosphere as grasslands were converted to large-scale agriculture, and that changing agricultural practices would be sufficient to act as a sink for the majority of excess CO2 emitted. What exactly is the mechanism? How much potential does BCCS offer? How much effort would be required to implement a large-scale fix? Reasonable questions, so I went hunting for answers. There have been some interesting findings in plant biology in the past two decades, specifically concerning something called glomalin. Glomalin is a glycoprotein produced abundantly on hyphae and spores of arbuscular mycorrhizal (AM) fungi in soil and in roots. Glomalin was discovered in 1996 by Sara F. Wright, a scientist at the USDA Agricultural Research Service. The name comes from Glomales, an order of fungi. To summarize the premise behind modern BCCS: carbon has been lost from native soils as they became agriculturally productive, and the idea is to shift from agricultural approaches that reduce glomalin to approaches that support it, increasing the carbon uptake of soil. The various sources provided supported this theory (any sources not linked in the body of this article are provided below as additional reading). As a lot of the sources were Australian, I went to my go-to Australian source for good climate information, CSIRO. I found this very useful briefing paper on soil sequestration from 2010. The part that leapt out at me in the Executive Summary of the material on p. iv: Globally, this loss of SOC has resulted in the emission of at least 150 Petagrams (Pg) of carbon dioxide to the atmosphere (1 Petagram = 1 Gigatonne = 10^15 grams). Recapturing even a small fraction of these legacy emissions through improved land management would represent a significant greenhouse gas emissions reduction. With atmospheric CO2 now around 400 ppm, there is roughly 1,170 gigatonnes of excess CO2 in total, and annually we are contributing about 10 gigatonnes. Let's make the assumption that all agricultural land globally could be returned to a baseline of the same sequestration as native land over the course of the next 50 years. That means that we'd be at about 1,222 gigatonnes of extra CO2 and the soil would sequester about 150 gigatonnes out of that total, or about 12%. However, this 12% is dominantly a temporary biological sink. USDA research showed that glomalin accounts for 27 percent of the carbon in soil and is a major component of soil organic matter. Nichols, Wright, and E. Kudjo Dzantor, a soil scientist at the University of Maryland-College Park, found that glomalin weighs 2 to 24 times more than humic acid, a product of decaying plants that until now was thought to be the main contributor to soil carbon. Humic acid, however, contributes only about 8 percent of the carbon. Another team recently used carbon dating to estimate that glomalin lasts 7 to 42 years, depending on conditions. Glomalin, like all biological sinks, is temporary. Carbon moves into permanent sinks, but it also moves back into the atmosphere. Biological sinks become saturated, and then atmospheric levels of CO2 remain in balance with the sink.
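A minimal sketch of the back-of-envelope arithmetic above, using only the round numbers quoted in the text (the 150 Gt figure from the CSIRO briefing paper and the roughly 1,222 Gt of excess CO2 the article assumes by mid-century); it simply checks the ~12% figure and is not a climate model.

```python
# Back-of-envelope check of the ~12% figure quoted above, using the
# article's own round numbers (all in gigatonnes of CO2).
legacy_soil_carbon_loss = 150.0   # Gt CO2 potentially recoverable by soils (CSIRO briefing figure)
excess_co2_mid_century = 1222.0   # Gt excess atmospheric CO2 assumed in the article by mid-century

fraction = legacy_soil_carbon_loss / excess_co2_mid_century
print(f"Soil re-sequestration could offset about {fraction:.0%} of the excess")  # ~12%
```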
There has been recent bad news for soil sequestration via a radiocarbon-dating study of soil carbon published in Science by a University of California team in September 2016. A gloss on the study in the Guardian explains it in more accessible terms. Scientists from the University of California, Irvine (UCI) found that models used by the UN's Intergovernmental Panel on Climate Change (IPCC) assume a much faster cycling of carbon through soils than is actually the case. Data from 157 soil samples taken from around the world show that the average age of soil carbon is more than six times older than previously thought. This means it will take hundreds or even thousands of years for soils to soak up large amounts of the extra CO2 pumped into the atmosphere by human activity – far too long to be relied upon as a way to help the world avoid dangerous global warming this century. So the answer of 12% by 2050 is actually much slower, centuries slower in fact. That's too slow to be of use in any near-term attempt to deal with warming. Like a more virtuous corn ethanol, BCCS enables governments to give money to farmers, who are a large voting bloc. This is happening in jurisdictions around the world. The science says it's not a short-term climate solution, but that better tillage practices are a very good choice regardless. While sequestration might be a bit of a red herring, reduced soil erosion and better soil biology are strong net benefits. Global green biomass has been increasing for the past 15 years or so as the rural poor move to cities and leave semi-arable land to go wild. In a tightly related story, we're producing more food from less land under agriculture globally. The combination means that a great deal of land is returning to being a better carbon sink, and that the area of land amenable to better practices is both smaller and under organizations more likely to see it as a long-term asset: agricorporations, which are more likely to follow the science of better land management. To be clear, improving land management practices to make the soil healthier and more sustainable is an excellent thing to do. It will help with long-term ecosystem health and biosequestration. But it's not a fix for our climate change problem this century or next. That will take electrifying everything, decarbonizing electricity and then cleaning up around the edges. Many thanks to Scott Strough, whose Quora answers on soil carbon sequestration, and persistence in prompting me to look through the material, led me to make this assessment.


Research and Markets has announced the addition of the "Perovskite Photovoltaics 2016-2026: Technologies, Markets, Players" report to their offering. The report will also benchmark other photovoltaic technologies including crystalline silicon, GaAs, amorphous silicon, CdTe, CIGS, CZTS, DSSC, OPV and quantum dot PV. Cost analysis is provided for future perovskite solar cells. A 10-year market forecast is given based on different application segments. Possible fabrication methods and material choices are discussed as well. Despite these many improvements, perovskite solar cell technology is still in the early stages of commercialization compared with other, more mature solar technologies, as a number of concerns remain, such as stability, the toxicity of lead in the most popular perovskite materials, and scaling up. Crystalline silicon PV modules have fallen from $76.67/W in 1977 to $0.4-0.5/W with fair efficiency in early 2015. Named one of the top ten science breakthroughs of 2013, perovskite solar cells have shown potential both in rapid efficiency improvement (from 2.2% in 2006 to the latest record of 20.1% in 2014) and in cheap material and manufacturing costs. Perovskite solar cells have drawn tremendous attention away from the likes of DSSCs and OPVs, having shown greater potential. Many companies and research institutes that focused on DSSCs and OPVs are now transferring their attention to perovskites, with few research institutes remaining exclusively committed to OPVs and DSSCs. Perovskite solar cells are a breath of fresh air in the emerging photovoltaic technology landscape. They have amazed with an incredibly fast efficiency improvement, going from just 2% in 2006 to over 20.1% in 2015. These questions will be answered in this report:
- Will perovskite solar cells be able to compete with silicon solar cells, which dominate the PV market now?
- What is the status of the technology?
- What are the potential markets?
- Who is working on it?
The market forecast is provided based on the following applications:
- Smart glass
- BIPV
- Outdoor furniture
- Perovskites in tandem solar cells
- Utility
- Portable devices
- Third world/developing countries for off-grid applications
- Automotive
- Others
Key Topics Covered:
1. Overview
2. Technology Benchmarking of Different PV Technologies
3. Cost Analysis
4. Commercial Opportunities and Market Forecast
5. Background of Perovskite Solar Cells
6. Architecture and Fabrication
7. Material Options
8. Player Profiles
9. Companies Currently Working on Perovskites
10. Companies Working on Other Emerging PVs
11. Abbreviations
Companies Mentioned:
- Alta Devices
- Armor
- Belectric
- CSIRO
- CrayoNano AS
- Crystalsol GmbH
- DisaSolar
- Dyesol
- Eight19 Ltd
- Exeger
- Flexink
- Fraunhofer ISE
- FrontMaterials
- G24 Power Ltd
- Heliatek GmbH
- NanoGram Corp
- National Research Council Canada
- New Energy Technologies Inc
- Oxford Photovoltaics
- Polyera Corporation
- Raynergy Tek Incorporation
- Saule Technologies
- SolarPrint Ltd
- Solaronix
- Sumitomo Chemical and CDT
- Ubiquitous Energy Inc
- VTT Technical Research Centre of Finland
- Xiamen Weihua Solar Co., Ltd.
For more information about this report visit http://www.researchandmarkets.com/research/3lstml/perovskite Contact: Research and Markets, Laura Wood, Senior Manager, press@researchandmarkets.com. For E.S.T office hours call +1-917-300-0470. For U.S./CAN toll free call +1-800-526-8630. For GMT office hours call +353-1-416-8900. U.S. fax: 646-607-1907. Fax (outside U.S.): +353-1-481-1716


Grant
Agency: European Commission | Branch: FP7 | Program: CP-FP | Phase: KBBE-2009-1-1-02 | Award Amount: 5.15M | Year: 2010

The 3SR project (Sustainable Solutions for Small Ruminants) brings together a strong and unique international consortium of 14 partners that will mine genomic information of sheep and goats to deliver a step-change in our understanding of the genetic basis of traits underlying sustainable production and health. To do this we will build on existing research resources in the major sheep and goat producing member states to discover and verify (in commercial populations) selectable genetic markers (and causative mutations where possible) for traits critical to sustainable farming, particularly in marginal areas. The targeted sustainability traits are mastitis susceptibility, nematode resistance and ovulation rate. These are traits that would markedly benefit from genetic markers, and traits for which we have evidence that polymorphisms exist with large effects on trait variation. We will apply the latest high-throughput genomics technologies, comparative and functional genomics; together with targeted genome sequencing and extensive in silico analyses to dissect important genetic components controlling these traits. Concurrently, we will deliver significant improvements in available genomic information and technologies for these species, thus having a lasting impact on European research capacity. Our work on genome resources will be undertaken in close collaboration with the International Sheep Genome Consortium and will make use of complementary resources provided by major research projects in Europe, Australasia, USA, Argentina and China. 3SR will provide selectable genetic markers that can be affordably applied by sheep and goat breeders to make important contributions to improving animal health, welfare, sustainability and the long-term competitiveness of small ruminant production in the EU. In addition 3SR will generate a collaborative infrastructure that will enable these orphan species to keep pace with the rapid developments in livestock genomics.


Grant
Agency: European Commission | Branch: FP7 | Program: CP-IP | Phase: KBBE-2007-1-1-04 | Award Amount: 8.18M | Year: 2009

QUANTOMICS will deliver a step-change in the availability of cutting edge technologies and tools for the economic exploitation of livestock genomes. We will provide the tools to identify rapidly the causative DNA variation underpinning sustainability in livestock and for industry to exploit high-density genomic information. Our adaptable quantitative and genomic tools each based on cutting-edge technologies and valuable in itself, will together form a powerful integrated pipeline with wide application. To deliver these outcomes we will; i) use comparative genomics to annotate putatively functional features of the genomes of the EUs key farmed animal livestock species; ii) enhance existing molecular genetic tools (to include copy number variation, CNV); iii) deliver computationally optimised tools for genome-wide selection (GWS) to include CNV; iv) apply these tools to important health and welfare traits in commercial populations of dairy cattle and broiler chickens, determining the benefits and constraints; v) use hyper-parallel resequencing of DNA within identified genomic features underlying loci of large effect in significant numbers of animals to catalogue variation; vi) develop new visualisation tools to make this variation publicly available via the Ensembl genome-browser; vii) develop tools to prioritise the likely functionality of identified polymorphisms; viii) validate the utility of the putative causative haplotypes within commercial populations; ix) test the potential advances from combined GWS and gene assisted selection in breeding programmes; x) explore new methods to manage molecular biodiversity; xi) assess the implications of these new tools for breeding programme design, and xii) disseminate results of the project achieving major competitive, animal health and welfare impacts across the EU livestock industry and ultimately consumers. QUANTOMICS will have wide application in all farmed species and leave a legacy of resources for future research.


News Article | October 27, 2016
Site: phys.org

The Cryogenic Sapphire Oscillator, or Sapphire Clock, has been enhanced by researchers from the University of Adelaide in South Australia to achieve near attosecond capability. The oscillator is 10-1000 times more stable than competing technology and allows users to take ultra-high precision measurements to improve the performance of electronic systems. Increased time precision is an integral part of radar technology and quantum computing, which have previously relied on the stability of quartz oscillators as well as atomic clocks such as the Hydrogen Maser. Atomic clocks are the gold-standard in time keeping for long-term stability over months and years. However, electronic systems need short-term stability over a second to control today's devices. The new Sapphire Clock has a short-term stability of better than 1×10^-15, which is equivalent to only losing or gaining one second every 40 million years, 100 times better than commercial atomic clocks over a second. The original Sapphire Clock was developed by Professor Andre Luiten in 1989 in Western Australia before the team moved to South Australia to continue developing the device at the University of Adelaide. Lead researcher Martin O'Connor said the development group was in the process of modifying the device to meet the needs of various industries including defence, quantum computing and radio astronomy. The 100 cm × 40 cm × 40 cm clock uses the natural resonance frequency of a synthetic sapphire crystal to maintain a steady oscillator signal. Associate Professor O'Connor said the machine could be reduced to 60 per cent of its size without losing much of its capability. "Our technology is so far ahead of the game, it is now the time to transfer it into a commercial product," he said. "We can now tailor the oscillator to the application of our customers by reducing its size, weight and power consumption but it is still beyond current electronic systems." The Sapphire Clock, also known as a microwave oscillator, has a 5 cm cylinder-shaped crystal that is cooled to -269 °C. Microwave radiation is constantly propagating around the crystal with a natural resonance. The concept was first discovered by Lord Rayleigh in 1878 when he could hear someone whispering far away on the other side of the church dome at St Paul's Cathedral. The clock then uses small probes to pick up the faint resonance and amplifies it back to produce a pure frequency with near attosecond performance. "An atomic clock uses an electronic transition between two energy levels of an atom as a frequency standard," Associate Professor O'Connor said. "The atomic clock is what is commonly used in GPS satellites and in other quantum computing and astronomy applications but our clock is set to disrupt these current applications." The lab-based version already has an existing customer in the Defence Science and Technology Group (DST Group) in Adelaide, but Associate Professor O'Connor said the research group was also looking for more clients and was in discussion with a number of different industry groups. The research group is taking part in the Commonwealth Scientific and Industrial Research Organisation's (CSIRO's) On Prime pre-accelerator program, which helps teams identify customer segments and build business plans. Commercial versions of the Sapphire Clock will be made available in 2017. Explore further: A new EU project on ultra-precise atomic clocks
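As a rough, order-of-magnitude reading of the stability figure quoted above (and only that; real clock error growth depends on the noise type and averaging time), the sketch below converts a fractional frequency stability of 1×10^-15 into the time needed to accumulate one second of error. The variable names are assumptions of the example, and the result lands in the same tens-of-millions-of-years range as the article's figure.

```python
# Order-of-magnitude check of the "one second in tens of millions of years" claim.
# Naive reading: with fractional frequency stability f, the clock drifts by roughly
# f seconds for every elapsed second (real error growth depends on the noise type,
# so treat this only as a rough illustration).

fractional_stability = 1e-15          # quoted short-term stability of the Sapphire Clock
seconds_per_year = 365.25 * 24 * 3600

years_to_drift_one_second = (1.0 / fractional_stability) / seconds_per_year
print(f"~{years_to_drift_one_second / 1e6:.0f} million years to drift by one second")
```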


News Article | November 17, 2016
Site: phys.org

Though astronomers still do not know what kinds of events or objects produce FRBs, the discovery is a stepping stone for astronomers to understand the diffuse, faint web of material that exists between galaxies, called the cosmic web. The findings are described in a paper appearing in Science on November 17. "Because FRBs like the one we discovered occur billions of light-years away, they help us study the universe between us and them," says Ravi, who is the R A and G B Millikan Postdoctoral Scholar in Astronomy. "Nearly half of all visible matter is thought to be thinly spread throughout intergalactic space. Although this matter is not normally visible to telescopes, it can be studied using FRBs." When FRBs travel through space, they pass through intergalactic material and are distorted, similar to the apparent twinkling of a star because its light is distorted by Earth's atmosphere. By observing these bursts, astronomers can learn details about the regions of the universe through which the bursts traveled on their way to Earth. FRB 150807 appears to only be weakly distorted by material within its host galaxy, which shows that the intergalactic medium in this direction is no more turbulent than theorists originally predicted. This is the first direct insight into turbulence in intergalactic medium. The researchers observed FRB 150807 while monitoring a nearby pulsar—a rotating neutron star that emits a beam of radio waves and other electromagnetic radiation—in our galaxy using the Parkes radio telescope in Australia. "Thanks to a real-time detection system developed by the Swinburne University of Technology, we found that although the FRB is a million times further away than the pulsar, the magnetic fields in their directions appear identical," says Ryan Shannon, research fellow at Commonwealth Scientific and Industrial Research Organisation (CSIRO) Astronomy and Space Science and at Curtin University in Australia, and colead author of the study. This refutes some claims that FRBs are produced in dense environments with strong magnetic fields. The result provides a measure of the magnetism in the space between galaxies, an essential step in determining how cosmic magnetic fields are produced. Only 18 FRBs have been detected to date. Mysteriously, most give off only a single burst and do not flash repeatedly. Additionally, most FRBs have been detected with telescopes that observe large swaths of the sky but with poor resolution, making it difficult to pinpoint the exact location of a given burst. The unprecedented brightness of FRB 150807 allowed Ravi and his team to localize it much more accurately, making it the best-localized FRB to date. In February 2017, pinpointing the locations of FRBs will become much easier for astronomers with the commissioning of the Deep Synoptic Array prototype, an array of 10 radio dishes at Caltech's Owens Valley Radio Observatory in California. "We estimate that there are between 2,000 and 10,000 FRBs occurring in the sky every day," Ravi says. "One in 10 of these are as bright as FRB 150807, and the Deep Synoptic Array prototype will be able to pinpoint their locations to individual galaxies. Measuring the distances to these galaxies enables us to use FRBs to weigh the tenuous intergalactic material." 
Ravi is the project scientist for the Deep Synoptic Array prototype, which is being constructed by the Jet Propulsion Laboratory (JPL) and Caltech and funded by the National Aeronautics and Space Administration through the JPL President's and Director's Fund Program. The paper is titled "The magnetic field and turbulence of the cosmic web measured using a brilliant fast radio burst." More information: "The magnetic field and turbulence of the cosmic web measured using a brilliant fast radio burst," Science, science.sciencemag.org/lookup/doi/10.1126/science.aaf6807


SINGAPORE, December 8, 2016 /PRNewswire/ -- Carmentix Private Limited ("Carmentix") and the University of Melbourne are proud to announce the Preterm Birth Biomarker Discovery initiative. The goal of this joint clinical study is to validate, in a combined panel, novel biomarkers discovered by Carmentix alongside biomarkers previously discovered and validated at the University of Melbourne, and to estimate the risk of preterm birth from the 20th week of pregnancy. The retrospective study, conducted under the direction of Dr. Harry Georgiou, PhD, and Dr. Megan Di Quinzio, MD, at the University of Melbourne, will assess the statistical power of the novel biomarker panel. "Carmentix is excited about the start of this collaboration, as we aim to further develop the biomarkers discovered on our unique data-processing platform," said Dr. Nir Arbel, CEO of Carmentix. "If validated, the new biomarker panel could strengthen hopes of a substantial worldwide reduction in the number of preterm births." Clinical obstetrician and researcher Dr. Di Quinzio often sees mothers who ask, "Why was my baby born too early?" Often there is no satisfactory answer. "Preterm birth remains a worldwide health care problem, and sadly there is a lack of reliable diagnostic tools," said Dr. Georgiou, scientific lead at the University of Melbourne. "This collaborative initiative with a commercial partner will help pave the way for a novel approach to better diagnosis and, hopefully, contribute to preventing premature labour." Carmentix is a Singapore-based start-up backed by Esco Ventures. Carmentix is currently developing a novel prognostic biomarker panel intended to significantly reduce the number of preterm births. This is to be achieved by introducing biomolecular tools that alert clinicians to the risk of preterm birth weeks before symptoms appear. Carmentix's technology is based on extensive pathway analyses employing a unique panel of biomarkers. This panel of proprietary markers should allow preterm birth to be predicted between the 16th and 20th week of pregnancy, with a highly accurate predictive algorithm expected because it covers molecular bottleneck processes involved in preterm birth. Carmentix's goal is a cost-effective solution that is robust, accurate and adaptable to clinical settings worldwide. About the University of Melbourne and its commercialisation initiatives: The University of Melbourne is Australia's leading university and one of the world's top universities. As a centre for research and development with world-leading specialists in science, engineering and medicine, Melbourne conducts cutting-edge research to generate new ideas, new technologies and new knowledge for building a better future.
World-class research, real-world solutions: the University of Melbourne is committed to a culture of innovation, working with industry, government, non-governmental organisations and the community to tackle real-world challenges. Our commercial partnerships bring research to life through collaborations in bio-engineering, materials development, technical innovation in medicine, community capacity building and cultural enterprises. Groundbreaking technologies created and commercialised at the University of Melbourne include the cochlear implant, the Stentrode (a device that allows a computer, robotic limb or exoskeleton to be controlled by thought), and novel anti-fibrotic drug candidates for the treatment of fibrosis (common in chronic conditions such as chronic kidney disease, chronic heart failure, pulmonary fibrosis and arthritis). The University of Melbourne maintains close partnerships with the Peter Doherty Institute for Infection and Immunity, the Walter and Eliza Hall Institute, CSIRO, CSL, and The Royal Melbourne, Royal Children's and Royal Women's Hospitals. With more than 160 years of leadership in education and research, the university responds through scientific progress to the immediate and future challenges facing our society. The University of Melbourne is ranked number 1 in Australia and 31st in the world (Times Higher Education World University Rankings 2015-2016).


News Article | August 24, 2016
Site: www.fastcompany.com

One of the lesser understood aspects of what you can do with massive stockpiles of data is the ability to use data that would traditionally have been overlooked or in some cases even considered rubbish. This whole new category of data is known as "exhaust" data—data generated as a by-product of some other process. Much financial market data is a result of two parties agreeing on a price for the sale of an asset. The record of the price of the sale at that instant becomes a form of exhaust data. Not that long ago, this kind of data wasn’t of much interest, except to economic historians and regulators. A massive moment-by-moment archive of prices of shares and other securities sales prices is now key to many major banks and hedge funds as a "training ground" for their machine-learning algorithms. Their trading engines "learn" from that history and this learning now powers much of the world’s trading. Traditional transactions such as house price sales history or share trading archives are one form of time-series data, but many other less conventional measures are being collected and traded too. There are also other categories of unconventional data that are not time-series-based. For example, network data outlines relationships and other signals from social networks, geospatial data lends itself to mapping, and survey data concerns itself with people’s viewpoints. Time series or longitudinal data is, however, the most common form and the easiest to integrate with other time-series data. Consistent Longitudinal Unconventional Exhaust Data or CLUE data sets, as I’m calling them, are many, varied and growing. They include: Say, for example, you are interested in the seasonal profitability of supermarkets over time. Foot traffic data may not be the cause of profitability, as more store visitors doesn’t necessarily correlate directly to profit or even sales. But it may be statistically related to volume of sales and so may be one useful clue, just as body temperature is a good clue or one signal to a person’s overall well-being. And when combined with massive amounts of other signals using data analytics techniques, this can provide valuable new insights. Leading hedge fund BlackRock, for example, is using satellite images of China taken every five minutes to better understand industrial activity and to give it an independent reading on reported data. Traditionally, there have been two main types of actors in the financial world—traders (including high-frequency traders), who look to make money from massive volumes on many small transactions, and investors, who look to make money from a smaller number of larger bets over a longer time. Investors tend to care more about the underlying assets involved. In the case of company stocks, that usually means trying to understand the underlying or fundamental value of the company and future prospects based on its sales, costs, assets, and liabilities and so on. A new type of fund is emerging that combines the speed and computational power of computer-based quants with the fundamental analysis used by investors: Quantamental. These funds use advanced machine learning combined with a huge variety of conventional and unconventional data sources to predict the fundamental value of assets and mismatches in the market. Some of these new style of funds, including Two Sigma in New York and Winton Capital in London, have been spectacularly successful. Winton was founded by David Harding, a physics graduate from Cambridge University in 1997. 
After less than two decades it ranks in the top 10 hedge funds worldwide with US$33 billion in assets under advice and more than 400 people—many with PhDs in physics, math, and computer science. Not far behind and with US$30 billion in assets, Two Sigma also glistens with top tech talent. New ones are emerging too, including Taaffeite Capital Management, run by computational biology and University of Melbourne alumnus Professor Desmond Lun. Understanding the complex data dynamics of many areas of natural science, including biology and ecology, is turning out to be excellent training for understanding financial market dynamics. But it’s not only the world’s top hedge funds that can or are using alternative data. A number of startups are on a mission to democratize access to new sources. Michael Babineau, cofounder and CEO of Bay Area startup Second Measure, aims to offer a Bloomberg-terminal-like approach to consumer purchase data. This will transform massive amounts of inscrutable text in card statements into more structured data, thus making it accessible and useful to a wide business and investor audience. Others companies, like Mattermark in San Francisco and CB Insights in New York, are intelligence services that provide fascinating and valuable data insights into company "signals." These can be indicators and potential predictors of success—especially in the high-stakes game of technology venture capital investment. Akin to Adrian Holovaty's pioneering work a decade ago mapping crime and many other statistics in Chicago online, Microburbs in Sydney provides a granular array of detailed data points on residential locations around Australia. It allows potential residents and investors to compare schooling, restaurants, and many other amenities in very specific neighborhoods within suburbs. We Feel, designed by CSIRO researcher Cecile Paris, is an extraordinary data project that explores whether social media—specifically Twitter—can provide an accurate, real-time signal of the world’s emotional state. More than simply pop-economics, Freakonomics (2005) showed how unusual yet good-quality data sources can be valuable in creating insights. Assiduous record-keeping of the accounts of an honesty system cookie jar in an office place revealed that people stole most during certain holidays (perhaps due to increased financial and mental stress at these times); access to drug gangster bookkeeping accounts explained why many drug dealers live with their grandparents (they are too poor to move out); and massive public school records from Chicago showed parental attention to be a key factor in students' academic success. Many of the examples in Freakonomics were based on small quirky data samples. However, as many academics are aware, studies with small samples can present several problems. There’s the question of sampling—whether it’s large enough to represent a robust sample and whether it’s a random selection of the population the study aims to understand. Then there’s the problem of errors. While one could expect errors to be smaller with smaller sample sizes, a recent meta-study of academic psychology papers found half the papers tested showed significant data inconsistencies and errors. In a small number of cases this may be due to authors fudging the results, whereas others may be due to transcription or other simple mistakes. More and more large-scale unconventional data collections are becoming readily available. 
There are three blast furnaces driving its proliferation. While large data collections can't help with avoiding fabrication, they can sometimes help with sample size and representation issues. When combined with machine learning, they can go further still. We may see unexpected results and be surprised by the degree to which many factors, such as social and personal information, are highly predictable using unexpected data signals. Michael Kosinski and his colleagues showed the predictive power of social media data in the analysis they published in PNAS in 2013. They demonstrated that highly personal traits such as religion, politics, and even whether your parents were together when you were 21 were highly predictable using Facebook likes alone. We will see a plethora of applications emerge that take advantage of processing unconventional data sources. One rich area is biometrics. Australian tech startup Brain Gauge has shown that people's voices can be used as a signal of cognitive load, enabling real-time detection of stress levels and reducing absenteeism in call-center staff, for example. We can also expect to see a lot more meta-analysis of communities, populations and industries. Increasingly ambitious studies are now possible that combine and link massive, often disparate data sets together to yield new insights into economics, law, health, and many other areas of research. One example is the recent meta-study published in the Journal of the American Medical Association that combined nine other studies and found that walking speed in older adults is indeed a predictor of longevity. Abundant data combined with machine learning has the power to provide us with new personal insights into the way we work, live, and play. And there's enormous potential upside here as individuals and enterprises can leverage others' experience and combine it with our own data, thus equipping us to make better decisions. Many traditional businesses like banks, airlines, and supermarkets and the new online giants already know about the habits and statistical predilections of their customers. But with the advent of easier, lower-cost tools and new approaches to data, there's the potential for businesses and individuals to gain access to some of this insight themselves. By combining public, private, and historical statistical data with one's own data—both the typical data and the weird stuff—we may gain all kinds of insights hitherto only available to governments and a few large companies. Paul X. McCarthy is cofounder and CEO of League of Scholars, a specialist data-analytics-based global executive search and recruitment firm for researchers and academics. He is author of Online Gravity (Simon & Schuster), which explores the new dynamics of businesses in an era of data analytics. An earlier version of this post originally appeared at The Conversation.


News Article | April 21, 2016
Site: phys.org

A collaborative study between UQ and the CSIRO has shown that fish learn to avoid hooks that are a risk for their size – but they take the bait more frequently in quiet areas. UQ Centre for Marine Science Honours student Andrew Colefax has designed a sophisticated underwater stereo video system to understand the common problem of having many bites but few catches. "We simulated angling baits at fishing hotspots and less fished areas, and discovered that fish are smarter than we gave them credit for," Mr Colefax said. "In high-intensity fishing areas, smaller fish that could not engulf the hook fed first with larger hook-susceptible fish hanging back and observing. "By contrast, in nearby low-intensity fishing areas, the larger fish moved in quickly and attacked the bait. "A small change in where you fish might greatly increase your catch." UQ School of Biological Sciences' Associate Professor Ian Tibbetts said researchers were surprised to discover that smaller individuals of a species fed sooner than their larger relatives at fishing hotspots. "This kind of behaviour indicates that the fish observe and learn from their environment and from the mistakes of others," Associate Professor Tibbetts said. Each year more than 700,000 fish are caught in Queensland for recreation, with anglers taking home about 8500 tonnes of fin fish, crabs and prawns. CSIRO collaborator and marine ecologist Mick Haywood said the next step was to work out if fish had spatial awareness of risk. "It's possible that individual fish change their feeding behaviour between nearby lightly fished and heavily fished sites," Mr Haywood said. Future studies may be able to tell if 'green zones', or sanctioned low-intensity fishing areas, might create naïve fish that have not learned to avoid baits. The researchers believe their study, published in the international journal Marine Biology, has global implications because it could increase catches and save on bait bills for the world's 220 million fishers. Explore further: Fishing impacts on the Great Barrier Reef More information: Andrew P. Colefax et al. Effect of angling intensity on feeding behaviour and community structure of subtropical reef-associated fishes, Marine Biology (2016). DOI: 10.1007/s00227-016-2857-3


News Article | December 1, 2016
Site: www.24-7pressrelease.com

MOUNT WAVERLEY, AUSTRALIA, December 01, 2016-- Thomas Spurling, Professor of the Swinburne Business School at the Swinburne University of Technology, has been recognized by Worldwide Branding for showing dedication, leadership and excellence in his profession.With four decades of educational experience, Dr. Spurling demonstrates expertise in physical chemistry, industrial technology and molecular science. He has been at Swinburne for 11 years, now serving as a research professor. He recently had left the Faculty of Life and Social Sciences department to move to the business school. He was then dean of the Faculty of Engineering and Industrial Sciences from 2004-2005. In the future, Dr. Spurling plans to continue his areas of research and career around innovation enterprise. He says persistence has been the key to his sustained success.A former lecturer with the University of Tasmania, Dr. Spurling earned a Postdoctoral Fellow from the University of Maryland in 1967, which followed a Bachelor of Science, with first class honors, in 1962, and a Doctor of Philosophy in 1966, both in physical chemistry, from University of Western Australia.He has also worked outside the educational field. Dr. Spurling serves as a chairman of Advanced Molecular Technologies Pty., Ltd., and as a board member of the International Centre for Radio Astronomy Research (ICRAR). He is also a former scientist and research leader with CSIRO. He was the chief of CSIRO Chemicals and Polymers and then CSIRO Molecular Science from 1989 to 1998. He led the World Bank funded CSIRO-Indonesian Institute of Sciences Management Systems Strengthening Project in Jakarta from 1999 to 2001. He was the chief executive officer of the Cooperative Research Centre (CRC) Wood Innovations from 2005 to 2008.Dr. Spurling was appointed as a member of the Order of Australia in June 2008 for services to chemical science through contributions to national innovation policies, strategies and research, and to the development of professional scientific relationships within the Asian region.He is a Fellow with the Australian Academy of Technological Sciences and Engineering, the Federation of Asian Chemical Societies, and the Royal Australian Chemical Institute. He maintains affiliations with the Design Research Institute of RMIT University and the Australian Institute for Teaching and School Leadership (AITSL).In the past, Dr. Spurling was president of several groups: the Federation of Asian Chemical Societies (FACS), the Royal Australian Chemical Institute (RACI) and the Federation of Australian Scientific and Technological Societies (FASTS). He was also a member of the Prime Minister's Science, Engineering and Innovation Council from 2005 to 2007.Throughout his career, Dr. Spurling has received numerous awards, including the FACS Award for Distinguished Contribution to Economic Development (2003), the Centenary Medal from the Australian Government (2003), the CSIRO Award for Business Excellence (2000), and the Leighton Memorial Medal (1994) and the Rennie Memorial Medal (1971), both from the Royal Australian Chemical Institute.In recognition of all of his achievements, Dr. Spurling was recently inducted into Worldwide Branding.About Worldwide BrandingFor more than 15 years, Worldwide Branding has been the leading, one-stop-shop, personal branding company, in the United States and abroad. 
From writing professional biographies and press releases, to creating and driving Internet traffic to personal websites, our team of branding experts tailor each product specifically for our clients' needs. From health care to finance to education and law, our constituents represent every major industry and occupation, at all career levels.For more information, please visit http://www.worldwidebranding.com


News Article | April 15, 2016
Site: www.spie.org

The first detection of a gravitational wave depended on large surfaces with excellent flatness, combined with low microroughness and the ability to mitigate environmental noise. Albert Einstein's general theory of relativity predicted that massive, accelerating bodies in deep space, such as supernovae or orbiting black holes, emit huge amounts of energy that radiate throughout the universe as gravitational waves. Although these "ripples in spacetime" may travel billions of light years, Einstein never thought the technology would exist that would allow for their detection on Earth. But a century later, the technology does exist at the Laser Interferometer Gravitational-Wave Observatory (LIGO). Measurements from two interferometers, 3000km apart in Louisiana and Washington State, have provided the first direct evidence of Einstein's theory by recording gravitational-wave signal GW150914, determined to be produced by two black holes coalescing 1.2 billion light years away. At the heart of the discovery lies fused silica optics with figure quality and surface smoothness refined to enable measurement of these incredibly small perturbations. Their design is an important part of LIGO's story. [Figure: The black hole coalescence was detected as an upward-sweeping 'chirp' from 35 to 300Hz, which falls in the detectors' mid-frequency range that is plagued by noise from the optics. Left and right images show data from Hanford and Livingston observatories. (Caltech/MIT/LIGO Laboratory)] "Most impressive are [the optics'] size combined with surface figure, coating uniformity, monolithic suspensions, and low absorption," says Daniel Sigg, a LIGO lead scientist at Caltech. LIGO's optics system amplifies and splits a laser beam down two 4km-long orthogonal tubes. The two beams build power by resonating between reflective mirrors, or 'test masses,' suspended at either end of each arm. This creates an emitted wavelength of unprecedented precision. When the split beam recombines, any change in one arm's path length results in a fringe pattern at the photodetector. For GW150914, this change was just a few times 10^-18 meters. [Figure: Reducing noise sources at each frequency improves interferometer sensitivity. Green shows actual noise during initial LIGO science run. Red and blue (Hanford, WA and Livingston, LA) show noise during advanced LIGO's first observation run, during which GW150914 was detected. Advanced LIGO's sensitivity goal (gray) is a tenfold noise reduction from initial LIGO. (Caltech/MIT/LIGO Laboratory)] But the entire instrument is subject to environmental noise that reduces sensitivity. A noise plot shows the actual strain on the instruments at all frequencies, which must be distinguished from gravity wave signals. The optics themselves contribute to the noise, which most basically includes thermal noise and the quality factor, or 'Q,' of the substrate. "If you ping a wine glass, you want to hear 'ping' and not 'dink'. If it goes 'dink', the resonance line is broad and the entire noise increases. But if you contain all the energy in one frequency, you can filter it out," explains GariLynn Billingsley, LIGO optics manager at Caltech. That's the Q of the mirrors. Further, if the test mass surfaces did not allow identical wavelengths to resonate in both arms, it would result in imperfect cancellation when the beam recombines. And if non-resonating light is lost, so is the ability to reduce laser noise.
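The scale of that measurement is easier to appreciate as a dimensionless strain: the change in arm length divided by the arm length itself. The minimal sketch below uses only the figures quoted above, with the displacement taken as an assumed value within the "few times 10^-18 m" range rather than an official LIGO number.

```python
# Rough strain estimate for GW150914 from the figures quoted in the article.
# The displacement is an assumed value within "a few times 10^-18 m".

arm_length_m = 4_000        # each LIGO arm is 4 km long
path_change_m = 4e-18       # assumed change in one arm's path length

strain = path_change_m / arm_length_m
print(f"Dimensionless strain h ~ {strain:.0e}")   # prints ~1e-21
```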
Perhaps most problematic, the optics' coatings contribute to noise due to stochastic particle motion. Stringent design standards ameliorate these problems. In 1996, a program invited manufacturers to demonstrate their ability to meet the specifications required by initial LIGO's optics. Australia's Commonwealth Scientific and Industrial Research Organisation (CSIRO) won the contract. "It was a combination of our ability to generate large surfaces with excellent flatness, combined with very low microroughness," says Chris Walsh, now at the University of Sydney, who supervised the overall CSIRO project. "It requires enormous expertise to develop the polishing process to get the necessary microroughness (0.2-0.4nm RMS) and surface shape simultaneously." Master optician Achim Leistner led the work, with Bob Oreb in charge of metrology. Leistner pioneered the use of a Teflon lap, which provides a very stable surface that matches the desired shape of the optic during polishing and allows for controlled changes. "We built the optics to a specification that was different to anything we'd ever seen before," adds Walsh. Even with high-precision optics and a thermal compensation system that balances the minuscule heating of the mirror's center, initial LIGO was not expected to detect gravity waves. Advanced LIGO, begun in 2010 and completing its first observations when GW150914 was detected, offers a tenfold increase in design sensitivity due to upgrades that address the entire frequency range. "Very simply, we have better seismic isolation at low frequencies; better test masses and suspension at intermediate frequencies; and higher powered lasers at high frequencies," says Michael Landry, a lead scientist at the LIGO-Hanford observatory. At low frequencies, mechanical resonances are well understood. At high frequencies, radiation pressure and laser 'shot' noise dominate. But at intermediate frequencies (60-100 Hz), scattered light and beam jitter are difficult to control. "Our bucket is lowest here. And there are other things we just don't know," adds Landry. "The primary thermal noise, which is the component at intermediate frequency that will ultimately limit us, is the Brownian noise of the coatings." To improve signal-to-noise at intermediate frequencies, advanced LIGO needed larger test masses (340mm diameter). California-based Zygo Extreme Precision Optics won the contract to polish them. "We were chosen based on our ability to achieve very tight surface figure, roughness, radius of curvature, and surface defect specifications simultaneously," says John Kincade, Zygo's Extreme Precision Optics managing director. The test masses required a 1.9km radius of curvature, with figure requirements as stringent as 0.3nm RMS. After super-polishing to extremely high spatial frequency, ion beam figuring fine-tunes the curvature by etching the surface several molecules at a time. This allows reliable shape without compromising on ability to produce micro-roughness over large surfaces. [Figure: Advanced LIGO input test mass champion data. Zygo achieved figuring accuracy to 0.08nm RMS over the critical 160mm central clear aperture, and sub-nanometer accuracy on the full clear 300mm aperture of many other samples. (Zygo Extreme Precision Optics)] Dielectric coatings deposited on the high-precision surfaces determine their optical performance.
CSIRO and the University of Lyon Laboratoire des Materiaux Avances shared the contract to apply molecule-thin alternating layers of tantala (tantalum pentoxide) and silica via ion-beam sputtering. Katie Green, project leader in CSIRO's optics group, says "the thickness of the individual layers are monitored as they're deposited. Each coating consists of multiple layers of particular thicknesses, with the specific composition of the layers varying depending on how the optic needs to perform in the detector." Additionally, gold coatings around the edges provide thermal shielding and act as an electrostatic drive. LIGO's next observation run is scheduled to begin in September 2016. And after Advanced LIGO reaches its design sensitivity by fine-tuning current systems, further upgrades await in the years 2018-2020 and beyond. "One question is how you reduce the thermal noise of the optics, in particular their coatings. But coating technologies make it hard to get more than a factor of about three beyond Advanced LIGO's noise level," says Landry. One possibility is operating at cryogenic temperatures. But "fused silica becomes noisy at cold temperatures, and you need a different wavelength laser to do this," according to Billingsley. Another way of increasing the sensitivity at room temperature is to use 40km-arm-length interferometers. Other optics-related systems reduce noise. Advanced LIGO's test masses are suspended on fused silica fibers, creating monolithic suspension that reduces thermal noise and raises the system's resonant frequency compared with initial LIGO. "The Q of that system is higher so an entire band shrinks. That means opening up more space at lower frequencies, where binary black holes are," says Landry. In the 17th century, Galileo pointed a telescope to the sky and pioneered a novel way of observing the universe. Now, LIGO's detection of GW150914 marks another new era of astronomy. As advances in glass lenses enabled Galileo's discoveries, so have state-of-the-art optics made LIGO's discoveries possible. And with astronomy's track record of developing new generations of optical devices, both the astrophysical and precision optics communities are poised for an exciting future.


News Article | February 18, 2017
Site: cleantechnica.com

The first few weeks of the Trump administration have been extraordinary, and quite frightening – not just because of the incompetence of a president who appears to be little more than a self-obsessed idiot, but by the actions of the dangerous ideologues at the helm of the world’s biggest economy and military power. There have been shocks across the policy spectrum, but probably none more so than in climate and clean energy, where Trump has promised to throw the baby out with the bathwater, quit the Paris deal, disband or dismember environmental regulations, “re-invent” coal, stop renewables and build more gas pipelines. It might sound stone-cold crazy to many people in Australia, but it should be familiar: There is little that Trump and his regime are doing on climate and clean energy that has not already been achieved, or attempted, by the current Coalition government in Canberra. Remember that former prime minister Tony Abbott destroyed the carbon price, slashed the renewable energy target, disbanded the Climate Commission, absorbed the climate change department, and removed the words climate change and clean energy from the government lexicon. If he had had the executive power, or the numbers in the Senate, Abbott would also have demolished the Climate Change Authority, the Clean Energy Finance Corporation and the Australian Renewable Energy Agency, and abolished the RET altogether. Far from reversing those acts, current prime minister Malcolm Turnbull has extended them – slashing ARENA funding and now calling on the CEFC to subsidise new coal-fired power stations. With energy minister Josh Frydenberg, Turnbull has sought to demonise renewable energy at every possible turn. And just like the Trump regime, if the Australian government does not rely on lies, they certainly depend on “alternative facts” – particularly about the costs of power, the impact of renewables, and the efforts to reduce emissions. Let’s look at each of them in turn: The basic premise of the Coalition line is that new coal power is cheaper than renewable energy, a point repeated by George Christensen on ABC Radio on Tuesday morning. This is blatant nonsense. The Coalition has made much of the supposed $48 billion capital cost of a 50 per cent renewable energy target, but neither it nor its boosters in the media have reported the $62 billion cost of building new “ultra supercritical coal” instead, which doesn’t include the huge ongoing fuel cost, nor the environmental or climate impacts. The Melbourne Energy Institute puts the carbon emission savings from $62 billion invested in renewables at twice that if invested in “clean coal”. Put more simply, says Bloomberg New Energy Finance, if you are building new coal-fired power plants now, you will be paying nearly twice as much as if you were building new wind or solar. This graph from Bloomberg New Energy Finance illustrates the point. The industry itself does not dispute these figures. It is instructive that neither the major coal generation lobby, nor the major coal generation companies think new coal is either valid or a good idea. The Australian Industry Group has dismissed it, much to the horror of Murdoch commentators such as Judith Sloan. Energy experts point out that not only is new coal expensive and polluting, it is also relatively useless in a grid that will rely increasingly on flexible generation, particularly as more consumers turn to rooftop solar and storage to reduce their bills. 
Indeed, the only people pushing new coal, apart from the ideologues within the Coalition and the Murdoch and other media, are the coal miners, desperate for a market for their product. (It was interesting to see the front page “exclusive” in The Australian on Monday, quoting an “analysis” from the Minerals Council of Australia of the costs of the RET. Typically, it sought to add the costs without counting the benefits. Yet when they talk about coal, they prefer to add the benefits without counting the costs). Turnbull and Frydenberg continue to bang on about rising prices of electricity, fingering wind and solar as the culprit. A brief scan of the wholesale prices in individual states over summer proves the nonsense of that claim. The highest prices this year have come in the states with the least amount of large scale renewable energy, Queensland and NSW. We’ll have more on that in the next few days, with a particular focus on how the actions of certain retailers that own fossil fuel plants are pushing wholesale prices to stratospheric levels. But, as David Leitch pointed out in his column on Monday, the average pool price in Queensland last week was $319/MWh. Even more appallingly, the average pool price at 4.30pm in 2017 has been $886/MWh, and at 5pm it has been $1,332/MWh. As Leitch notes, “the Queensland State owned Generators are having a lend of consumers.” This is about competition, or the lack of it. Large scale renewables increase competition and reduce the pricing power of the big coal and gas generators. It was this lack of competition, exploited by the gas operators in South Australia when the interconnector was being repaired last year, that caused prices to jump. Turnbull and Frydenberg have been banging on all summer about the high prices in Victoria and South Australia, attacking their decision to focus on renewables and the resulting coal closures. Which states have had the cheapest wholesale prices in 2017? Victoria and South Australia. The most expensive has been Queensland, with virtually no large scale renewables. Over the first five weeks, it has averaged $229/MWh – for so called “cheap” coal and gas. It is insane. And what have we heard from the Coalition about Queensland’s price jumps? Absolutely nothing. No wonder so many companies, including major zinc refiner Sun Metals, are focusing on large scale solar – it is less than half the price. Turnbull, Frydenberg and most others in the Coalition tell us that targets such as Labor’s 50 per cent renewable energy target are a recipe for disaster, not just on costs but on reliability of supply. Again, this is nonsense. The Australian Energy Market Operator is making it clear that the South Australia blackout last year was a storm issue, not a technology one. Yes, there were problems with ride-through mechanisms on wind farms that were unknown, but these have now been addressed. Moreover, these ride-through mechanisms are not unique to wind farms; similar issues were found on thermal plants in Australia more than a decade ago. It is interesting to note that AEMO’s new CEO, Audrey Zibelman, has been the head of New York’s Reforming the Energy Vision, a groundbreaking program that aims to take New York state to 50 per cent renewables by 2030. But we don’t have to rely on imports to tell us what is possible. 
Chief scientist Alan Finkel says the technologies to incorporate large amounts of wind and solar are at hand, and the CSIRO and the network owners have made it clear that high levels of wind and solar are not just doable, but desirable, because they will cut emissions and be cheaper for consumers. There is really no evidence, apart from a few crack-pot commentators, to support the Coalition position. Then it comes down to how seriously the government takes climate science. In the case of Trump, it is clear that he does not. He has a Big Oil CEO in charge of diplomacy, and climate science deniers in charge of environment, energy and many other key portfolios. Turnbull claims he accepts the science and will honour the Paris climate deal. But that requires more than just paying lip-service to Australia’s down-payment of a 26-28 per cent cut in emissions by 2030. The Paris deal requires the world to keep average global warming “well below” 2C, and Australia’s fair share of that effort is at least a 45 per cent cut by 2030, and a long term plan to reach zero emissions by mid century, or in the 2040s according to the Climate Change Authority. Building new coal-fired power plants doesn’t allow that to happen, and it’s instructive to know that the loudest supporters of new coal fired power plants are among those who think we should shred our commitment to the Paris goal. It would be tempting to think that the defection of Cory Bernardi, and potentially other far right-ers, to form an Australian equivalent of the Tea Party would give Turnbull room to breathe, moderate his clearly unpopular stance on key issues and shift to the centre. Fat chance. Turnbull’s over-riding ambition is to last at least one day longer as prime minister than Abbott. That means that he will remain beholden to the right, who are ready to push the self-destruct button at any moment in the fervent belief that they can win power, if not immediately then after a single term of Labor.


Porous crystals called metal-organic frameworks (MOFs) consist of metallic intersections with organic molecules as connecting elements. Thanks to their high porosity, MOFs have an extremely large surface area. A teaspoonful of MOFs has the same surface area as a football pitch. These countless pores situated in an extremely small space offer room for "guests" and can, for example, be used for gas storage or as a "molecular gate" for the separation of chemicals. But MOFs have a much greater potential, and that is what Paolo Falcaro from TU Graz's Institute of Physical and Theoretical Chemistry (PTC) wants to unlock. "MOFs are prepared by self-organisation. We don't have to do anything other than mix the components, and the crystals will grow by themselves. However, crystals grow with random orientation and position, and thus their pores. Now, we can control this growth, and new properties of MOFs will be explored for multifunctional use in microelectronics, optics, sensors and biotechnology." In the current issue of Nature Materials, a research activity led by Paolo Falcaro and Masahide Takahashi (Osaka Prefecture University - Japan) together with Australian colleagues at the University of Adelaide, Monash University and The Commonwealth Scientific and Industrial Research Organisation (CSIRO) describes a method of growing MOFs on a comparatively large surface area of one square centimetre, rapidly achieving an unprecedented controlled orientation and alignment of the crystals. The big advantage of precisely oriented MOF crystals is one that excites every materials scientist: functional materials can be infiltrated into the pores of the crystals to generate anisotropic materials; in other words, materials with directionally dependent properties. In the journal Nature Materials, the research team shows how a MOF film grown with this controlled synthesis behaves in the presence of a fluorescent dye. Just by rotating the film, the fluorescent signal is turned "on" or "off", creating an optically active switch. Paolo Falcaro: "This has many conceivable applications and we're going to try many of them with a variety of different functionalities. One and the same material can show different properties through different orientations and alignments. Intentional growth of MOFs on this scale opens up a whole range of promising applications which we're going to explore step by step." A major aim of Paolo Falcaro and his team at TU Graz is the development of MOFs for biotechnological applications: "We are trying to encapsulate enzymes, proteins and even DNA in MOFs and to immunise their activity against fluctuations in temperature. The crystalline structure surrounding the "guest" in the pore has a protective effect, like a tough jacket. We want to check out the possibilities more accurately," explains Falcaro. More information: Paolo Falcaro et al. Centimetre-scale micropore alignment in oriented polycrystalline metal–organic framework films via heteroepitaxial growth, Nature Materials (2016). DOI: 10.1038/nmat4815
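The "teaspoon equals a football pitch" comparison can be sanity-checked with a rough calculation. The sketch below uses assumed, typical order-of-magnitude values for a porous MOF powder (bulk density and specific surface area); none of these numbers come from the Nature Materials study itself.

```python
# Order-of-magnitude check of the teaspoon-vs-football-pitch comparison.
# All inputs are assumed typical values, not figures from the paper.

teaspoon_volume_cm3 = 5.0        # one metric teaspoon
bulk_density_g_cm3 = 0.35        # assumed bulk density of a MOF powder
surface_area_m2_per_g = 4000.0   # assumed specific surface area of a high-porosity MOF

mof_mass_g = teaspoon_volume_cm3 * bulk_density_g_cm3
total_area_m2 = mof_mass_g * surface_area_m2_per_g
football_pitch_m2 = 105 * 68     # a full-size pitch, roughly

print(f"Teaspoon of MOF: ~{total_area_m2:,.0f} m^2")
print(f"Football pitch:  ~{football_pitch_m2:,.0f} m^2")
```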


News Article | November 21, 2016
Site: www.marketwired.com

CHARLOTTESVILLE, VA--(Marketwired - November 21, 2016) - The recent announcement that GE is acquiring a controlling stake in Arcam will have a profound impact on the additively manufactured (AM) titanium sector, SmarTech Publishing believes. Arcam owns Advanced Powders & Coatings (AP&C), which supplies over one third of the titanium powder used by the AM industry today. As a result of this development and other important trends, we believe it is an excellent time to reassess the market opportunities for 3D printed titanium. With this in mind, SmarTech Publishing has just published, Titanium Opportunities in Additive Manufacturing - 2017. According to this new 100-page report, revenues from titanium-based AM powder will reach $518 million in 2022 growing to $1,077 million by 2026. SmarTech Publishing is the leading industry analyst firm in the 3D printing (3DP)/additive manufacturing (AM) industry. For more details on this report go to: https://www.smartechpublishing.com/reports/titanium-opportunities-in-additive-manufacturing-2017-an-opportunity-analys The ability to effectively process titanium alloys is a leading driver in the development of titanium AM. Titanium is becoming one of the most popular materials for metal additive manufacturing systems due to its growing use in both the medical and aerospace industries. SmarTech Publishing was the first industry analysis firm to publish a report on the topic of additively manufactured titanium. In this report, we bring the story up to date with a full analysis of the markets for AM utilizing metal powders and other titanium feedstocks in modern commercial additive manufacturing systems. In fact, we believe that titanium printing is becoming the biggest opportunity for metal additive manufacturing materials, with revenues exceeding all other alloy groups used in metal AM over the next ten years. Titanium is sought after primarily for its high strength-to-weight ratio, biological inertness, and other desirable properties. In this report, we discuss how the printing of titanium is burgeoning in the medical, aerospace, automotive, dental, and consumer products industries. This report presents our latest, highly granular, 10-year market forecast data, with breakouts by application, type of titanium material used, and AM technology used. Forecasts are supplied in both volume and revenue terms. In addition, we examine the primary opportunity factors related to the broader supply chain, primary providers of AM titanium materials, and analysis of the print technologies and powder production processes in this sector. SmarTech Publishing believes that, in addition to the already quite large number of suppliers in this space, more firms will enter in 2017. Some providers have also begun developing application-specific or parameter-specific titanium alloys based on customer needs and offering them to the broader market. In addition, capacity expansions at existing leaders in the titanium powder supply chain are underway. Among the organizations that we examine in this report are: 3D Systems, Additive Works, Advanced Powders & Coatings, Airbus, Arcam, ATI Metals, Concept Laser, CSIRO, DiSanto, Divergent3D, EOS, Farsoon, Fonon Technologies, Fraunhofer Institute, Fripp Design, GKN Hoeganaes, H.C. 
Starck, i.materialise, K Home International, Linde Gases, Lockheed Martin, LPW Technology, Matsuura Machinery, Metalysis, Norsk Titanium, Osaka Titanium, Oventus, Oxford Performance Materials, Phenix Systems, Praxair Surface Technologies, PSA Group, Puris, Pyrogenesis, Realizer, Renishaw, Sciaky, Shapeways, Sigma Labs, Sisma, SLM Solutions, Tekna, TLS Technik, Wacker Chemie, Xi'an Brightlaser and Z3DLab. Revenues from additive manufacturing of titanium in aerospace are expected to reach around $110 million by 2022. Titanium alloys in the aerospace industry are in continued competition against other high strength-to-weight ratio materials. Nonetheless, there is already demand for specialty titanium alloys for aerospace other than the commonly utilized Ti64 -- titanium aluminides (TiAl), for example. In the aerospace market, printed titanium is currently being explored for the smaller structural entities in engines such as brackets and housings. The demand for titanium for 3D printing in medicine and dental applications is expected to grow significantly. In 2016 more than 150,000 kg of titanium will be consumed for these applications. However, this will have grown to almost 1.1 million kg by 2022. Much of this growth will come from a successful push from the orthopedic industry to achieve FDA and similar certifications for new types of titanium implants. Applications for AM titanium include spinal, knee, cranial, and other implants -- the entire industry is moving towards additive as a preferred production method for most titanium orthopedic devices due to the improved osseointegrative properties and ease of manufacturing related features using AM. Meanwhile, opportunities for printed titanium are beginning to emerge in the dental industry through a worldwide growth in dental implants as well as production of custom titanium devices to treat Obstructive Sleep Apnea. Thus, the supply chain for qualified titanium materials for AM is entering a highly transitional phase. Though this may provide a marginal boost to wire-based systems, the qualifications for use of wire-based AM versus powder-based AM only potentially cross in a few select applications. The evolution of the titanium powder supply chain is thus one of the most important issues currently facing the metal AM market in this rapid growth phase. Since 2013 SmarTech Publishing has published reports on all the important revenue opportunities in the 3D printing/additive manufacturing sector and is considered the leading industry analyst firm providing coverage of this sector. Our company has a client roster that includes the largest 3D printer firms, materials firms and investors. We have also published reports on most of the important revenue opportunities in the 3D printing sector including personal printers, low-volume manufacturing, 3D printing materials, medical/dental applications, aerospace, automotive, and other promising 3D market segments. To Purchase this Report: If interested in receiving a quote or to purchase this report, please email missy@smartechpublishing.com
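For a sense of the growth these forecasts imply, the compound annual growth rates can be read straight off the headline figures quoted above. The snippet below is a simple reading of those numbers, not SmarTech Publishing's own model.

```python
# Implied compound annual growth rates from the quoted forecast figures.

def cagr(start, end, years):
    """Compound annual growth rate between two values."""
    return (end / start) ** (1 / years) - 1

# Titanium AM powder revenue: $518M in 2022 to $1,077M in 2026
print(f"Powder revenue, 2022-2026: {cagr(518, 1077, 4):.1%} per year")           # ~20%

# Medical/dental titanium demand: ~150,000 kg in 2016 to ~1.1 million kg in 2022
print(f"Medical demand, 2016-2022: {cagr(150_000, 1_100_000, 6):.1%} per year")  # ~39%
```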


News Article | November 14, 2016
Site: www.eurekalert.org

Seaweed-eating fish are becoming increasingly voracious as the ocean warms due to climate change and are responsible for the recent destruction of kelp forests off the NSW north coast near Coffs Harbour, research shows. The study includes an analysis of underwater video covering a 10 year period between 2002 and 2012 during which the water warmed by 0.6 degrees. "Kelp forests provide vital habitat for hundreds of marine species, including fish, lobster and abalone" says study first author Dr Adriana Vergés of UNSW and the Sydney Institute of Marine Science. "As a result of climate change, warm-water fish species are shifting their range and invading temperate areas. Our results show that over-grazing by these fish can have a profound impact, leading to kelp deforestation and barren reefs. "This is the first study demonstrating that the effects of warming in kelp forests are two-fold: higher temperatures not only have a direct impact on seaweeds, they also have an indirect impact by increasing the appetite of fish consumers, which can devour these seaweeds to the point of completely denuding the ocean floor. "Increases in the number of plant-eating fish because of warming poses a significant threat to kelp-dependent ecosystems both in Australia and around the globe," she says. The study is published in the journal Proceedings of the National Academy of Sciences. The team recorded underwater video around August-time each year at 12 sites along a 25 kilometre stretch of coast adjacent to the Solitary Island Marine Park off northern NSW. During this period, kelp disappeared completely from all study sites where it was initially present. At the same time the proportion of tropical and sub-tropical seaweed-eating fish swimming in these areas more than tripled. Grazing also intensified, with the proportion of kelp with obvious feeding marks on it increasing by a factor of seven during the decade. "We also carried out an experiment where we transplanted kelp onto the sea floor. We found that two warm-water species - rabbitfish and drummer fish - were the most voracious, eating fronds within hours at an average rate of 300 bites per hour" says Dr Vergés. "The number of fish that consumed the smaller algae growing on rock surfaces also increased, and they cleared the algae faster when there was no kelp present. This suggests the fish may help prevent kelp regrowing as well, by removing the tiny new plants." In Australia, kelp forests support a range of commercial fisheries, tourism ventures, and recreation activities worth more than $10 billion per year. "The decline of kelp in temperate areas could have major economic and management impacts," says Dr Vergés. The video footage used in the study from 2002 onwards was originally collected for a very different research project - to measure fish populations inside and outside sanctuary zones in a marine park. But the team realised it could also be used to determine whether kelp was present in the background or not. This unplanned use of an historic dataset is a good example of the value of collecting long-term data in the field, especially if it includes video or photos for permanent records. 
The team behind the study includes Professor Peter Steinberg, director of the Sydney Institute of Marine Science (SIMS), Dr Ezequiel Marzinelli and Dr Alexandra Campbell, also from UNSW and SIMS, Dr Christopher Doropoulos from CSIRO, and other researchers from the University of Queensland, the University of Sydney, the NSW Department of Primary Industries, James Cook University, Centre for Advanced Studies in Blanes Spain, and Nanyang Technical University in Singapore.


News Article | November 8, 2016
Site: phys.org

Parkes joins the Green Bank Telescope (GBT) in West Virginia, USA, and the Automated Planet Finder (APF) at Lick Observatory in California, USA, in their ongoing surveys to determine whether civilizations elsewhere have developed technologies similar to our own. Parkes radio telescope is part of the Australia Telescope National Facility, owned and managed by Australia's Commonwealth Scientific and Industrial Research Organization (CSIRO). Drawing on over nine months of experience in operation of the dedicated Breakthrough Listen instrument at GBT, a team of scientists and engineers from the University of California, Berkeley's SETI Research Center (BSRC) deployed similar hardware at Parkes, bringing Breakthrough Listen's unprecedented search tools to a wide range of sky inaccessible from the GBT. The Southern Hemisphere sky is rich with targets, including the center of our own Milky Way galaxy, large swaths of the galactic plane, and numerous other galaxies in the nearby Universe. 'The Dish' at Parkes played an iconic role in receiving the first deliberate transmissions from the surface of another world, as the astronauts of Apollo 11 set foot on our Moon. Now, Parkes joins once again in expanding human horizons as we search for the answer to one of our oldest questions: Are we alone? "The Parkes Radio Telescope is a superb instrument, with a rich history," said Pete Worden, Chairman of Breakthrough Prize Foundation and Executive Director of the Breakthrough Initiatives. "We're very pleased to be collaborating with CSIRO to take Listen to the next level." With its new combined all-sky range, superb telescope sensitivity and computing capacity, Breakthrough Listen is the most powerful, comprehensive, and intensive scientific search ever undertaken for signs of intelligent life beyond Earth. Moreover, this expansion of Breakthrough Listen's range follows the announcement on October 12 that it will be joining forces with the new FAST telescope – the world's largest filled-aperture radio receiver – to coordinate their searches for artificial signals. The two programs will exchange observing plans, search methods and data, including the rapid sharing of promising new signals for additional observation and analysis. The partnership represents a major step toward establishing a fully connected, global search for intelligent life in the Universe. "The addition of Parkes is an important milestone," said Yuri Milner, founder of the Breakthrough Initiatives, which include Breakthrough Listen. "These major instruments are the ears of planet Earth, and now they are listening for signs of other civilizations." After 14 days of commissioning and test observations, first light for Breakthrough Listen at Parkes was achieved on November 7, with an observation of the newly-discovered Earth-size planet orbiting the nearest star to the Sun. Proxima Centauri, a red dwarf star 4.3 light years from Earth, is now known to have a planet ("Proxima b") within its habitable zone – the region where water could exist in liquid form on the planet's surface. Such "exo-Earths" (habitable zone exoplanets) are among the primary targets for Breakthrough Listen. "The chances of any particular planet hosting intelligent life-forms are probably minuscule," said Andrew Siemion, director of UC Berkeley SETI Research Center. "But once we knew there was a planet right next door, we had to ask the question, and it was a fitting first observation for Parkes. To find a civilization just 4.2 light years away would change everything." 
As the closest known exoplanet, Proxima b is also the current primary target for Breakthrough Listen's sister initiative, Breakthrough Starshot, which is developing the technology to send gram-scale spacecraft to the nearest stars. "Parkes is one of the most highly cited radio telescopes in the world, with a long list of achievements to its credit, including the discovery of the first 'fast radio burst'. Parkes' unique view of the southern sky, and cutting-edge instrumentation, means we have a great opportunity to contribute to the search for extra-terrestrial life," said Douglas Bock, Director of CSIRO Astronomy and Space Science. As with the other Breakthrough Listen telescopes, data from Parkes will be freely available to the public online. Scientists, programmers, students, and others are invited to access the Breakthrough Listen archive for scientific research purposes, including helping perfect algorithms to sift through petabytes of raw data from the telescopes, screening for interfering signals from earth-bound technology. Volunteers can also help analyze data from Parkes by donating their spare computing power as part of BSRC's legendary SETI@home project. Breakthrough Listen at Parkes will be the most comprehensive search of the southern sky for artificial signals, covering six key target samples.


News Article | November 17, 2016
Site: www.eurekalert.org

Fast radio bursts, or FRBs, are mysterious flashes of radio waves originating outside our Milky Way galaxy. A team of scientists, jointly led by Caltech postdoctoral scholar Vikram Ravi and Curtin University research fellow Ryan Shannon, has now observed the most luminous FRB to date, called FRB 150807. Though astronomers still do not know what kinds of events or objects produce FRBs, the discovery is a stepping stone for astronomers to understand the diffuse, faint web of material that exists between galaxies, called the cosmic web. The findings are described in a paper appearing in Science on November 17. "Because FRBs like the one we discovered occur billions of light-years away, they help us study the universe between us and them," says Ravi, who is the R A and G B Millikan Postdoctoral Scholar in Astronomy. "Nearly half of all visible matter is thought to be thinly spread throughout intergalactic space. Although this matter is not normally visible to telescopes, it can be studied using FRBs." When FRBs travel through space, they pass through intergalactic material and are distorted, similar to the apparent twinkling of a star because its light is distorted by Earth's atmosphere. By observing these bursts, astronomers can learn details about the regions of the universe through which the bursts traveled on their way to Earth. FRB 150807 appears to only be weakly distorted by material within its host galaxy, which shows that the intergalactic medium in this direction is no more turbulent than theorists originally predicted. This is the first direct insight into turbulence in intergalactic medium. The researchers observed FRB 150807 while monitoring a nearby pulsar--a rotating neutron star that emits a beam of radio waves and other electromagnetic radiation--in our galaxy using the Parkes radio telescope in Australia. "Thanks to a real-time detection system developed by the Swinburne University of Technology, we found that although the FRB is a million times further away than the pulsar, the magnetic fields in their directions appear identical," says Ryan Shannon, research fellow at Commonwealth Scientific and Industrial Research Organisation (CSIRO) Astronomy and Space Science and at Curtin University in Australia, and colead author of the study. This refutes some claims that FRBs are produced in dense environments with strong magnetic fields. The result provides a measure of the magnetism in the space between galaxies, an essential step in determining how cosmic magnetic fields are produced. Only 18 FRBs have been detected to date. Mysteriously, most give off only a single burst and do not flash repeatedly. Additionally, most FRBs have been detected with telescopes that observe large swaths of the sky but with poor resolution, making it difficult to pinpoint the exact location of a given burst. The unprecedented brightness of FRB 150807 allowed Ravi and his team to localize it much more accurately, making it the best-localized FRB to date. In February 2017, pinpointing the locations of FRBs will become much easier for astronomers with the commissioning of the Deep Synoptic Array prototype, an array of 10 radio dishes at Caltech's Owens Valley Radio Observatory in California. "We estimate that there are between 2,000 and 10,000 FRBs occurring in the sky every day," Ravi says. "One in 10 of these are as bright as FRB 150807, and the Deep Synoptic Array prototype will be able to pinpoint their locations to individual galaxies. 
Measuring the distances to these galaxies enables us to use FRBs to weigh the tenuous intergalactic material." Ravi is the project scientist for the Deep Synoptic Array prototype, which is being constructed by the Jet Propulsion Laboratory (JPL) and Caltech and funded by the National Aeronautics and Space Administration through the JPL President's and Director's Fund Program. The paper is titled "The magnetic field and turbulence of the cosmic web measured using a brilliant fast radio burst." The Parkes radio telescope is part of the Australia Telescope National Facility, which is funded by the Commonwealth of Australia for operation as a National Facility managed by CSIRO.
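The article does not spell out the mechanism, but the standard way an FRB "weighs" intervening material is through dispersion: free electrons along the path delay lower radio frequencies more than higher ones, and the size of that delay (the dispersion measure, DM) tracks the electron column density. A minimal sketch follows, using an assumed DM and observing band rather than the measured parameters of FRB 150807.

```python
# Dispersion delay sketch: why an FRB's frequency-dependent arrival time
# measures the column of free electrons it passed through.
# The DM and band below are assumed example values, not FRB 150807's.

K_MS = 4.149  # dispersion constant, ms GHz^2 per (pc cm^-3)

def delay_ms(dm_pc_cm3, freq_ghz):
    """Arrival delay in milliseconds relative to infinite frequency."""
    return K_MS * dm_pc_cm3 / freq_ghz ** 2

dm = 300.0             # assumed dispersion measure, pc cm^-3
f_lo, f_hi = 1.2, 1.5  # assumed observing band, GHz

sweep = delay_ms(dm, f_lo) - delay_ms(dm, f_hi)
print(f"Burst sweeps across the band over ~{sweep:.0f} ms")   # ~310 ms
```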


News Article | November 9, 2016
Site: www.csmonitor.com

[File photo: radio telescopes of the Allen Telescope Array in Hat Creek, Calif., October 9, 2007.] The latest tool in the hunt for extraterrestrial life is the Parkes Radio Telescope, located in New South Wales, Australia. The powerful observatory is expected to be a major asset for Breakthrough Listen, a $100-million project designed to comb the sky for transmissions from intelligent life. The decade-long project is backed by a number of notable scientists, including famed cosmologist Stephen Hawking, who are hailing the project as the most comprehensive survey in search of alien civilizations to date. The Parkes telescope has a long and storied history associated with scientific advancement in space-related matters. Built in 1961, the radio telescope was used to receive a live TV signal from the first moon landing on July 20, 1969. The 210-foot-wide (64-meter-wide) dish is operated by Australia's Commonwealth Scientific and Industrial Research Organization (CSIRO). "The Parkes Radio Telescope is a superb instrument, with a rich history," Pete Worden, the chairman of Breakthrough Prize Foundation and executive director of the Breakthrough Initiatives, said in a statement. "We're very pleased to be collaborating with CSIRO to take Listen to the next level." The telescope is already being put to good use. On Monday, the research team turned the dish towards Proxima Centauri, the closest star to our sun at only 4.2 light years away. Earlier this year, astronomers discovered a potentially habitable planet orbiting the nearby star, as Eva Botkin-Kowacki reported for The Christian Science Monitor: The Earth-like world is about 1.3 times the mass of our planet and orbits its parent star within a range that would make it not too hot and not too cold for liquid water to exist on its surface. Unlike Earth, Proxima b orbits remarkably close to its host star, Proxima Centauri. Only 4.4 million miles separate the two, making Proxima b more than 20 times closer to its star than Earth is to our sun. At such a distance Earth would be hot, hot, hot, but as a red dwarf star, Proxima Centauri is a much cooler and smaller star than our sun. And with such a tight orbit, Proxima b takes just 11.2 days to revolve around its star. Known exoplanets that could be in the habitable zone for life to develop are a prime target for Breakthrough Listen. "The chances of any particular planet hosting intelligent life-forms are probably minuscule," Andrew Siemion, the director of the University of California, Berkeley's SETI Research Center, said in the statement. "But once we knew there was a planet right next door, we had to ask the question, and it was a fitting first observation for Parkes. To find a civilization just 4.2 light years away would change everything." Breakthrough Listen is being funded by Yuri Milner, a Silicon Valley investor. The program will survey the million stars closest to Earth over the next ten years, and examine the 100 galaxies closest to the Milky Way. "We believe that life arose spontaneously on Earth, so in an infinite universe, there must be other occurrences of life," Dr. Hawking said at the Royal Society in London in July 2015, when Breakthrough Listen was first announced. "Somewhere in the cosmos, perhaps intelligent life might be watching these lights of ours, aware of what they mean. Or do our lights wander a lifeless cosmos, unseen beacons announcing that, here on one rock, the universe discovered its existence? Either way, there is no better question. 
It's time to commit to finding the answer, to search for life beyond Earth."
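The "more than 20 times closer" comparison quoted above follows directly from the two distances in the article; the quick check below takes the familiar 93 million miles for the Earth-Sun distance, a figure not stated in the piece itself.

```python
# Quick check of the "more than 20 times closer" comparison.

proxima_b_orbit_miles = 4.4e6  # Proxima b's separation from its star (from the article)
earth_sun_miles = 93e6         # roughly one astronomical unit (assumed here)

ratio = earth_sun_miles / proxima_b_orbit_miles
print(f"Proxima b orbits ~{ratio:.0f} times closer to its star than Earth does to the Sun")
```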


News Article | November 17, 2016
Site: www.eurekalert.org

A brief but brilliant burst of radiation that travelled at least a billion light years through Space to reach an Australian radio telescope last year has given scientists new insight into the fabric of the Universe. ICRAR-Curtin University's Dr Ryan Shannon, who co-led research into the sighting along with the California Institute of Technology's Dr Vikram Ravi, said the flash, known as a Fast Radio Burst (FRB), was one of the brightest seen since FRBs were first detected in 2001. The flash was captured by CSIRO's Parkes radio telescope in New South Wales. Dr Shannon, from the Curtin node of ICRAR (the International Centre for Radio Astronomy Research) and CSIRO, said all FRBs contained crucial information but this FRB, the 18th detected so far, was unique in the amount of information it contained about the cosmic web - the swirling gases and magnetic fields between galaxies. "FRBs are extremely short but intense pulses of radio waves, each only lasting about a millisecond. Some are discovered by accident and no two bursts look the same," Dr Shannon said. "This particular FRB is the first detected to date to contain detailed information about the cosmic web - regarded as the fabric of the Universe - but it is also unique because its travel path can be reconstructed to a precise line of sight and back to an area of space about a billion light years away that contains only a small number of possible home galaxies." Dr Shannon explained that the vast spaces between objects in the Universe contain nearly invisible gas and a plasma of ionised particles that used to be almost impossible to map, until this pulse was detected. "This FRB, like others detected, is thought to originate from outside of Earth's own Milky Way galaxy, which means their signal has travelled over many hundreds of millions of light years, through a medium that - while invisible to our eyes - can be turbulent and affected by magnetic fields," Dr Shannon said. "It is amazing how these very few milliseconds of data can tell how weak the magnetic field is along the travelled path and how the medium is as turbulent as predicted." This particular flash reached CSIRO's Parkes radio telescope mid-last year and was subsequently analysed by a mostly Australian team. A paper describing the FRB and the team's findings was published today in the journal Science. The Parkes telescope has been a prolific discoverer of FRBs, having detected the vast majority of the known population including the very first, the Lorimer burst, in 2001. FRBs remain one of the most mysterious processes in the Universe and likely one of the most energetic ones. To catch more FRBs, astronomers use new technology, such as Parkes' multibeam receiver, the Murchison Widefield Array (MWA) in Western Australia, and the upgraded Molonglo Observatory Synthesis Telescope near Canberra. This particular FRB was found and analysed by a system developed by the supercomputing group led by Professor Matthew Bailes at Swinburne University of Technology. Professor Bailes, who was a co-author on the Science paper, also heads The Dynamic Universe research theme in the ARC Centre of Excellence for All-sky Astrophysics (CAASTRO), which has seven Australian nodes including ICRAR-Curtin University. "Ultimately, FRBs that can be traced to their cosmic host galaxies offer a unique way to probe intergalactic space that allow us to count the bulk of the electrons that inhabit our Universe," Professor Bailes said. 
"To decode and further understand the information contained in this FRB is an exceptional opportunity to explore the physical forces and the extreme environment out in Space." "The magnetic field and turbulence of the cosmic web measured using a brilliant fast radio burst" published November 17th 2016 in Science. CAASTRO is a collaboration of The University of Sydney, The Australian National University, The University of Melbourne, Swinburne University of Technology, The University of Queensland, The University of Western Australia and Curtin University, the latter two participating together as the International Centre for Radio Astronomy Research (ICRAR). CAASTRO is funded under the Australian Research Council (ARC) Centre of Excellence program, with additional funding from the seven participating universities and from the NSW State Government's Science Leveraging Fund. ICRAR is a joint venture between Curtin University and The University of Western Australia with support and funding from the State Government of Western Australia.CONTACTS


News Article | January 25, 2016
Site: www.rdmag.com

Invisible structures shaped like noodles, lasagne sheets or hazelnuts could be floating around in our Galaxy radically challenging our understanding of gas conditions in the Milky Way. CSIRO astronomer and first author of a paper released in Science Dr Keith Bannister said the structures appear to be 'lumps' in the thin gas that lies between the stars in our Galaxy. "They could radically change ideas about this interstellar gas, which is the Galaxy's star recycling depot, housing material from old stars that will be refashioned into new ones," Dr Bannister said. Dr Bannister and his colleagues described breakthrough observations of one of these 'lumps' that have allowed them to make the first estimate of its shape. The observations were made possible by an innovative new technique the scientists employed using CSIRO's Compact Array telescope in eastern Australia. Astronomers got the first hints of the mysterious objects 30 years ago when they saw radio waves from a bright, distant galaxy called a quasar varying wildly in strength. They figured out this behaviour was the work of our Galaxy's invisible 'atmosphere', a thin gas of electrically charged particles which fills the space between the stars. "Lumps in this gas work like lenses, focusing and defocusing the radio waves, making them appear to strengthen and weaken over a period of days, weeks or months," Dr Bannister said. These episodes were so hard to find that researchers had given up looking for them. But Dr Bannister and his colleagues realised they could do it with CSIRO's Compact Array. Pointing the telescope at a quasar called PKS 1939-315 in the constellation of Sagittarius, they saw a lensing event that went on for a year. Astronomers think the lenses are about the size of the Earth's orbit around the Sun and lie approximately 3000 light-years away - 1000 times further than the nearest star, Proxima Centauri. Until now they knew nothing about their shape, however, the team has shown this lens could not be a solid lump or shaped like a bent sheet. "We could be looking at a flat sheet, edge on," CSIRO team member Dr Cormac Reynolds said. "Or we might be looking down the barrel of a hollow cylinder like a noodle, or at a spherical shell like a hazelnut." Getting more observations will "definitely sort out the geometry," he said. While the lensing event went on, Dr Bannister's team observed it with other radio and optical telescopes. The optical light from the quasar didn't vary while the radio lensing was taking place. This is important, Dr Bannister said, because it means earlier optical surveys that looked for dark lumps in space couldn't have found the one his team has detected. So what can these lenses be? One suggestion is cold clouds of gas that stay pulled together by the force of their own gravity. That model, worked through in detail, implies the clouds must make up a substantial fraction of the mass of our Galaxy. Nobody knows how the invisible lenses could form. "But these structures are real, and our observations are a big step forward in determining their size and shape," Dr Bannister said.
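To get a feel for how small such a 'lump' would appear from Earth, a back-of-the-envelope angular size can be computed from the two figures in the article: a structure roughly the diameter of Earth's orbit, about 3000 light-years away. The constants below are standard values, and the result, a couple of milliarcseconds, is only an illustration of scale, not a number from the study.

```python
# Rough apparent angular size of a lens the diameter of Earth's orbit
# at a distance of 3000 light-years (figures from the article above).

import math

AU_M = 1.496e11            # astronomical unit in metres
LY_M = 9.461e15            # light-year in metres

lens_size_m = 2 * AU_M     # diameter of Earth's orbit
distance_m = 3000 * LY_M

angle_rad = lens_size_m / distance_m
angle_mas = math.degrees(angle_rad) * 3600 * 1000   # convert to milliarcseconds

print(f"Apparent size: ~{angle_mas:.1f} milliarcseconds")   # about 2 mas
```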


News Article | December 9, 2016
Site: www.chromatographytechniques.com

A recently discovered galaxy is undergoing an extraordinary boom of stellar construction, revealed by a group of astronomers led by University of Florida graduate student Jingzhe Ma using NASA’s Chandra X-Ray Observatory. The galaxy known as SPT 0346‐52 is 12.7 billion light years from Earth, seen at a critical stage in the evolution of galaxies about a billion years after the Big Bang. Astronomers first discovered SPT 0346‐52 with the National Science Foundation’s South Pole Telescope, then observed it with space and ground-based telescopes. Data from the NSF/ESO Atacama Large Millimeter/submillimeter Array in Chile revealed extremely bright infrared emission, suggesting that the galaxy is undergoing a tremendous burst of star birth. However, an alternative explanation remained: Was much of the infrared emission instead caused by a rapidly growing supermassive black hole at the galaxy’s center? Gas falling towards the black hole would become much hotter and brighter, causing surrounding dust and gas to glow in infrared light. To explore this possibility, researchers used NASA’s Chandra X‐ray Observatory and CSIRO’s Australia Telescope Compact Array, a radio telescope. No X‐rays or radio waves were detected, so astronomers were able to rule out a black hole being responsible for most of the bright infrared light. “We now know that this galaxy doesn’t have a gorging black hole, but instead is shining brightly with the light from newborn stars,” Ma said. “This gives us information about how galaxies and the stars within them evolve during some of the earliest times in the universe.” Stars are forming at a rate of about 4,500 times the mass of the Sun every year in SPT0346-52, one of the highest rates seen in a galaxy. This is in contrast to a galaxy like the Milky Way that only forms about one solar mass of new stars per year. “Astronomers call galaxies with lots of star formation ‘starburst’ galaxies,” said UF astronomy professor Anthony Gonzalez, who co-authored the study. “That term doesn’t seem to do this galaxy justice, so we are calling it a ‘hyper-starburst’ galaxy.” The high rate of star formation implies that a large reservoir of cool gas in the galaxy is being converted into stars with unusually high efficiency. Astronomers hope that by studying more galaxies like SPT0346‐52 they will learn more about the formation and growth of massive galaxies and the supermassive black holes at their centers. “For decades, astronomers have known that supermassive black holes and the stars in their host galaxies grow together,” said co-author Joaquin Vieira of the University of Illinois at Urbana‐Champaign. “Exactly why they do this is still a mystery. SPT0346-52 is interesting because we have observed an incredible burst of stars forming, and yet found no evidence for a growing supermassive black hole. We would really like to study this galaxy in greater detail and understand what triggered the star formation and how that affects the growth of the black hole.” SPT0346‐52 is part of a population of strong gravitationally-lensed galaxies discovered with the SPT. It appears about six times brighter than it would without gravitational lensing, which enables astronomers to see more details than would otherwise be possible. A paper describing the results appears in a recent issue of The Astrophysical Journal and is available online. NASA’s Marshall Space Flight Center in Huntsville, Alabama, manages the Chandra program for NASA’s Science Mission Directorate in Washington. 
The Smithsonian Astrophysical Observatory in Cambridge, Massachusetts, controls Chandra’s science and flight operations.
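
The figures quoted above lend themselves to a quick back-of-the-envelope check. The short Python sketch below simply takes the star-formation rates and the lensing magnification from the article; the flux value passed to the helper is illustrative only, and the function name is ours, not the researchers'.

    # Minimal sketch of the arithmetic quoted in the article (illustrative values only).
    # The magnification factor and the Milky Way rate are taken from the text above;
    # the flux passed to intrinsic_flux() is a made-up example value.

    MILKY_WAY_SFR = 1.0          # solar masses of new stars per year (as stated above)
    SPT0346_52_SFR = 4500.0      # solar masses per year reported for SPT0346-52
    LENSING_MAGNIFICATION = 6.0  # apparent brightening due to the foreground lens

    # How many times faster SPT0346-52 forms stars than the Milky Way
    sfr_ratio = SPT0346_52_SFR / MILKY_WAY_SFR

    # An observed (lensed) flux must be divided by the magnification to recover
    # the galaxy's intrinsic brightness before physical quantities are derived.
    def intrinsic_flux(observed_flux, magnification=LENSING_MAGNIFICATION):
        return observed_flux / magnification

    print(f"SPT0346-52 forms stars ~{sfr_ratio:.0f}x faster than the Milky Way")
    print(f"A lensed flux of 6.0 mJy corresponds to ~{intrinsic_flux(6.0):.1f} mJy intrinsic")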


News Article | January 2, 2016
Site: www.scientificamerican.com

I figured that this event should be marked in some way that’s different from usual. Combine it with the fact that my annual reviews always tend to be over-long and that I’m over-committed and unable to find time for blog-writing this month anyway, and the result is… a whole month dedicated to 10 years of Tet Zoo. If this quantity of gratuitous introspection and self-congratulation is too retchingly egoistic for you to handle, go away and check back next month. It’s a big deal for me and I can’t ignore it. Here’s how the whole thing is going to pan out. To begin with, I’m going to review the events and adventures of 2015. I’m then going to review the blog’s taxonomic coverage during 2015. This is the part where I beat myself up for not writing enough about amphibians and turtles and berate myself for being too focused on charismatic megafauna. And then I’ll finish things by musing about the role of Tet Zoo in the blogosphere. Some of this might be interesting, but at least there’ll be lots of pictures. To work. Some things were due to happen in 2015. The writing of two… three… four books. More fieldwork in Romania. Four conferences in the UK, all of which I was (in some way) involved with, assorted technical research projects, continual progress at the Tet Zoo patreon project (THANK YOU PATRONS!) and the TetZoopodcats podcasts… During February I attended (and spoke at) the SRA (= Scholarly Research of the Anomalous) conference in Edinburgh, Scotland. As I’m sure you already know, there’s a lot of science to do on anomalous phenomena no matter whether there’s validity in them or not – human belief systems, our perceptive abilities, the way we interpret data, ideas, concepts and so on are all rewarding areas of study regardless of whether sea monsters, bigfoot or UFOs exist. Highlights included Roger Musson on the Bala earthquake – yes, this is the same event as the Berwyn Mountain UFO incident – Bettina Bildhauer on monster beliefs of the Middle Ages, Mike Dash’s ‘Our Artist Pictures What the Witness Saw’ and Charles Paxton on eyewitness reliability and re-enactments of the Patterson film. My own talk was on the evolution of ideas about sea monsters: I’d talk more about it, but now is not the time and I think it’s covered in my new cryptozoology book anyway (on which more later). Thanks to Gordon Rutter and Charles Paxton for the invitation and for organising a great conference. I took advantage of my time in Edinburgh to visit the zoo. Bad weather (driving rain) meant that it wasn’t the best of outdoor experiences I’ve had, but at least I had close-up views of such animals as giant panda, binturong (yeah, sleeping), one-horned rhino, drill, douroucouli and cassowary. March saw the publication of a collaborative paper on a small azhdarchid pterosaur vertebra from Romania (Vremir et al. 2015). This is a significant specimen since it provides evidence for a new, small, relatively short-necked azhdarchid taxon that lived alongside the also small but long-necked Eurazhdarcho and the gigantic Hatzegopteryx. Incidentally, the published version of this paper does not reflect the look or feel of previous drafts – I’m not happy with the very text-heavy look of the final product… pages and pages of text and nothing else. A few other interesting things happened in March. 
I started work on a new dinosaur-themed book (co-authored with Paul Barrett of The Natural History Museum, London), caught up (briefly) with Steve Backshall while attending a talk he gave for the Winchester College Natural History Society, and went to a talk by Christine Janis on kangaroo anatomy. And I was absolutely thrilled to receive a review copy of Gordon Grigg and David Kirshner's incredible Biology and Evolution of Crocodylians (Grigg & Kirshner 2015), one of the most significant zoological books of our time. My initial thoughts on the book are here and I still have to write a proper, long review for publication. By the end of March, I'd done enough work on The Big Book (aka The Vertebrate Fossil Record) to have another draft ready for sharing and viewing. I suppose at this stage that the book was over 50% complete. Already it was 553 pages long. April started with the pivotal Tet Zoo article on how the Chromatic Truthometer has shown the world that cetaceans are not the blacks, greys and browns so often assumed but, actually, multi-coloured beasts of many hues. I was in Romania at the time, doing fieldwork. I saw European bison Bison bonasus (captive, not wild), Syrian woodpecker Dendrocopos/Picoides syriacus, Valachian sheep (article here) and much else, and my colleagues and I succeeded in finding new Cretaceous dinosaur and pterosaur specimens.
The Hoser Issue Once More
Regular readers will be aware of the effort to minimise the vigorous taxonomic vandalism being practised by independent researcher and snake-keeper Raymond Hoser – I covered this issue at length back in June 2013. Hoser has named hundreds of new, or allegedly new, taxa (including 'higher' entities like subfamilies, families and so on) and wants his names to be accepted by the technical community. There is a whole list of reasons why this shouldn't happen: not only are the names etymological monstrosities, they're published in-house in a desktop magazine that can't be considered at all acceptable in terms of technical standards. There have been several efforts over the years to get Hoser's hundreds of proposed names stricken from the record. Unfortunately, the body that's supposed to police and monitor taxonomic problems and disputes (the ICZN, the International Commission on Zoological Nomenclature) simply won't make rulings on situations of this sort on its own. Instead, communities of workers are required to resolve messes for themselves before steering the ICZN toward the making of an appropriate final decision. And so it was that a large number of people interested in the Hoser problem ganged together in order to encourage the ICZN to have Hoser's primary taxonomic vehicle (his Australian Journal of Herpetology) listed as unavailable for the publication of new taxonomic names. The resulting paper – led by Anders Rhodin and involving Roger Bour, Frank Glaw, Colin Groves, Russell Mittermeier, Mark O'Shea, James Parham, Robert Sprackland, Laurie Vitt, E. O. Wilson, Hussam Zaher, myself and many others – was published in March (Rhodin et al. 2015). The aim of publishing an argument such as this is to solicit comments from other members of the community, the weight of consensus then affecting the ICZN's eventual ruling. It's Case 3601 and comments can be added here. Hoser is a perfectly sensible and normal individual. Just to prove this, he recently went through my entire Twitter feed to find – and respond to – all the mentions of his good self. I'll be covering the Hoser issue again at some point.
The multi-author article denouncing the pterosaurian nature of ‘Thalassodromeus sebesensis’ – it’s actually a partial plastron of the turtle Kallokibotion – saw print in April (Dyke et al. 2015). April also saw Tet Zoo articles on Brontosaurus, on the artisan modification of live monitor lizards (thanks to Memo Kosemen for all his help with that), and my article Some of the Things I Have Gotten Wrong. I mean to do follow-up articles to that one: there are, you see, an awful lot of Things I Have Gotten Wrong, but I haven’t yet found the time. Remember: it’s normal to get things wrong, and there’s absolutely nothing wrong with that so long as you aim to change your mind and admit your mistakes the more you learn. April 27th was World Tapir Day, and I hastily covered it on Tet Zoo. I like tapirs. Did I mention that there’s a new one? This wasn’t the only perissodactyl-themed day of 2015. The bizarre new membranous-winged maniraptoran dinosaur Yi qi was published late in April and the Tet Zoo take on it proved one of the year’s most popular articles. As I said back then, some of the most interesting things about Yi qi weren’t really covered by other writers or scientists. One is that the possible existence of membranous-winged scansoriopterygids was predicted and in fact even published (in All Your Yesterdays) years prior to 2015. Another is that the ‘screaming dragon of death’ images so prevalent online probably do not accurately reflect what we know about the life appearance of this animal. It probably looked more like a bat-winged parrot. As has been tradition for the past few years, I and colleagues attended the Lyme Regis Fossil Festival during late April and early May. We spoke to people about Mesozoic marine reptiles, dinosaurs and other beasts. I spent time at the coast, photographing pipits, wagtails, pigeons and gulls, as is tradition. I wrote about the pigeons. Significant progress was made on several technical projects during May: on a sauropod-themed manuscript, an azhdarchid one that’s kind of a big deal, and another that involved reinterpreting the Romanian maniraptoran Balaur. I’m sure I’ll mention again that 2015 was one of the most frustrating years I’ve yet endured in that an incredible number of technical papers got to a very advanced stage in the long and tedious process of publication, only to stall or be derailed for one reason or another. Of the projects just listed, only one has made it to completion so far. Such is the nature of the beast when it comes to scientific publishing, but it doesn’t ever make it any easier, especially not when you have to pour your own hard-won free time – or have to take time away from paid work – into getting things done. While staying in north Wales in May, I (and my family) visited both Chester Zoo and the Welsh Mountain Zoo. At both of these fine zoos I did see a great many animals, some of which you can see here. While caving, I (and my son, Will) accidentally discovered a Lesser horseshoe bat colony. Disturbing a bat roost is a criminal offence here in the UK, but of course this only counts when people know that there’s a bat colony there in the first place. Rest assured that we acted in proper fashion. June at Tet Zoo started with an article about turtles. Ah, turtles. I really need to blog about them a whole lot more. Sorry turtles. The final proofs for the paper version of Witton & Naish (2015) were dealt with – another azhdarchid paper... (a digital preprint has been around since 2013). 
Of course, June was also the month in which a movie called Jurassic World was released. As with everyone else who works on Mesozoic dinosaurs, journalists sought me out for my opinion (I penned an opinion piece for CNN). I don't rate Jurassic World at all. It's a dumb film with a lazy storyline, it makes a point of poking fun at you if you have any affection for Jurassic Park, and it deliberately gives us hideous monsters because modern audiences only like dinosaurs, apparently, when they look like saggy-skinned throwbacks from the 1950s. I agree with whoever it was that compared 'Indominus' to Rudy from Ice Age 3: Dawn of the Dinosaurs. During May I'd already been quoted in the Sunday Times, the Mirror and various other UK papers, my primary lament being that Jurassic World simply could have been so much better as regards an innovative portrayal of Mesozoic animals. But, no. Stick with what's safe. Cowards. The same sentiments were reflected via Brian Engh's Build A Better Fake Theropod project, mentioned on Tet Zoo during June. And that's where we'll end things for now. More thoughts coming soon. For the previous Tet Zoo birthday articles, see...
Dyke, G. J., Vremir, M., Brusatte, S., Bever, G., Buffetaut, E., Chapman, S., Csiki-Sava, Z., Kellner, A. W. A., Martin, E., Naish, D., Norell, M., Ősi, A., Pinheiro, F. L., Prondvai, E., Rabi, M., Rodrigues, T., Steel, L., Tong, H., Vila Nova, B. C. & Witton, M. 2014. Thalassodromeus sebesensis – a new name for an old turtle. Comment on "Thalassodromeus sebesensis, an out of place and out of time Gondwanan tapejarid pterosaur", Grellet-Tinner and Codrea. Gondwana Research 27, 1680-1682.
Grigg, G. & Kirshner, D. 2015. Biology and Evolution of Crocodylians. Comstock Publishing Associates and CSIRO Publications.
Rhodin, A. G. J., Kaiser, H., van Dijk, P. P., Wüster, W., O'Shea, M., Archer, M., Auliya, M., Boitani, L., Bour, R., Clausnitzer, V., Contreras-MacBeath, T., Crother, B. I., Daza, J. M., Driscoll, C. A., Flores-Villela, O., Frazier, J., Fritz, U., Gardner, A., Gascon, C., Georges, A., Glaw, F., Grazziotin, F. G., Groves, C. P., Haszprunar, G., Havaš, P., Hero, J. M., Hoffmann, M., Hoogmoed, M. S., Horne, B. D., Iverson, J. B., Jäch, M., Jenkins, C. L., Jenkins, R. K. B., Kiester, A. R., Keogh, J. S., Lacher Jr., T. E., Lovich, J. E., Luiselli, L., Mahler, D. L., Mallon, D., Mast, R., McDiarmid, R. W., Measey, J., Mittermeier, R. A., Molur, S., Mossbrugger, V., Murphy, R., Naish, D., Niekisch, M., Ota, J., Parham, J. F., Parr, M. J., Pilcher, N. J., Pine, R. H., Rylands, A. B., Sanderson, J. G., Savage, J., Schleip, W., Scrocchi, G. J., Shaffer, H. B., Smith, E. N., Sprackland, R., Stuart, S. N., Vetter, H., Vitt, L. J., Waller, T., Webb, G., Wilson, E. O., Zaher, H. & Thomson, S. 2015. Comment on Spracklandus Hoser, 2009 (Reptilia, Serpentes, ELAPIDAE): request for confirmation of the availability of the generic name and for the nomenclatural validation of the journal in which it was published. (Case 3601; see BZN 70: 234–237; 71: 30–38, 133–135, 181–182, 252–253). Bulletin of Zoological Nomenclature 72 (1): 65-78.
Vremir, M., Witton, M., Naish, D., Dyke, G., Brusatte, S. L., Norell, M. & Totoianu, R. 2015. A medium-sized robust-necked azhdarchid pterosaur (Pterodactyloidea: Azhdarchidae) from the Maastrichtian of Pui (Haţeg Basin, Transylvania, Romania). American Museum Novitates 3827, 1-16.


Grant
Agency: European Commission | Branch: FP7 | Program: CP-IP | Phase: KBBE-2007-3-1-03 | Award Amount: 11.21M | Year: 2008

Replacing fossil oil with renewable resources is perhaps the most urgent need and the most challenging task that human society faces today. Cracking fossil hydrocarbons and building the desired chemicals with advanced organic chemistry usually requires many times more energy than is contained in the final product. Thus, using plant material in the chemical industry not only replaces the fossil material contained in the final product but also saves substantial energy in the processing. Of particular interest are seed oils, which show great variation in composition between different plant species. Many of the oil qualities found in wild species would be very attractive for the chemical industry if they could be obtained at moderate cost, in bulk quantities and with a secure supply. Genetic engineering of vegetable oil qualities in high-yielding oil crops could yield such products within a relatively short time frame. This project aims at developing such added-value oils in dedicated industrial oil crops, mainly in the form of various wax esters particularly suited for lubrication. The project brings together the most prominent scientists in plant lipid biotechnology in an unprecedented worldwide effort to produce added-value oils in industrial oil crops within a time frame of four years, and to develop a toolbox of genes and an understanding of cellular lipid metabolism that will allow the rational design of a vast array of industrial oil qualities in oil crops. Since the GM technologies that will be used in the project are met with great scepticism in Europe, it is crucial that ideas, expectations and results are communicated to the public and that methods, ethics, risks and risk assessment are open for debate. The keywords of our communication strategy will be openness and an understanding of public concerns.


Grant
Agency: European Commission | Branch: H2020 | Program: RIA | Phase: INFRASUPP-03-2016 | Award Amount: 3.00M | Year: 2017

The objective of the AENEAS project is to develop a concept and design for a distributed, federated European Science Data Centre (ESDC) to support the astronomical community in achieving the scientific goals of the Square Kilometre Array (SKA). The scientific potential of the SKA radio telescope is unprecedented and represents one of the highest priorities for the international scientific community. By the same token, the large scale, rate, and complexity of the data the SKA will generate present challenges in data management, computing, and networking that are similarly world-leading. SKA Regional Centres (SRCs) like the ESDC will be a vital resource enabling the community to take advantage of the scientific potential of the SKA. Within the tiered SKA operational model, the SRCs will provide essential functionality which is not currently provisioned within the directly operated SKA facilities. AENEAS brings together all the European member states currently part of the SKA project as well as potential future EU SKA national partners, the SKA Organisation itself, and a larger group of international partners including the two host countries, Australia and South Africa.


Grant
Agency: European Commission | Branch: FP7 | Program: CP-CSA-Infra | Phase: INFRA-2011-1.1.21. | Award Amount: 11.58M | Year: 2012

RadioNet is an I3 that coordinates all of Europe's leading radio astronomy facilities in an integrated cooperation to achieve transformational improvement in the quality and quantity of the scientific research of European astronomers. RadioNet3 includes 27 partners operating world-class radio telescopes and/or performing cutting-edge R&D in a wide range of technology fields important for radio astronomy. RadioNet3 proposes a work plan that is structured into 6 NAs, 7 TNAs and 4 JRAs with the aim to integrate and optimise the use and development of European radio astronomy infrastructures. The general goals of RadioNet3 are to: - facilitate, for a growing community of European researchers, access to the complete range of Europe's world-leading radio-astronomical facilities, including the ALMA telescope; - secure a long-term perspective on scientific and technical developments in radio astronomy, pooling resources and expertise that exist among the partners; - stimulate new R&D activities for the existing radio infrastructures in synergy with ALMA and the SKA; - contribute to the implementation of the vision of the ASTRONET Strategic Plan for European Astronomy by building a sustainable and world-leading radio astronomical research community. RadioNet3 builds on the success of two preceding I3s under FP6 and FP7, but it also takes a leap forward as it includes facilitation of research with ALMA via a dedicated NA, and 4 pathfinders for the SKA in its TNA Program. It has a transparent and efficient management structure designed to optimally support the implementation of the project. RadioNet is now recognized by funding agencies and international project consortia as the European entity representing radio astronomy and facilitating the access to and exploitation of excellent facilities in this field. This is of paramount importance, as a dedicated, formal European radio astronomy organisation to coordinate and serve the needs of this community does not yet exist.


Grant
Agency: European Commission | Branch: H2020 | Program: CSA | Phase: SC5-16-2016-2017 | Award Amount: 1.16M | Year: 2016

Global demand for minerals is growing rapidly, driven by population growth, urbanisation and an increasingly diverse range of technical applications. Global material supply chains linking the extraction, transport and processing stages of raw materials have become increasingly complex and today involve multiple players and product components. An interactive platform that provides transparency about existing approaches and information gaps concerning global material flows is needed to understand these global supply chains; developing this capability is critical for maintaining competitiveness in the European economy. Against this backdrop, the proposed MinFuture project aims to identify, integrate, and develop expertise for global material flow analysis and scenario modelling. Specific activities include: the analysis of barriers and gateways for delivering more transparent and interoperable materials information; the assessment of existing model approaches for global material flow analysis, including demand-supply forecasting methods; the delivery of a common methodology which integrates mineral data, information and knowledge across national boundaries and between governmental and non-governmental organisations; the development of recommendations for a roadmap to implement the common methodology at international level; and the creation of a web-portal to provide a central access point for material flow information, including links to existing data sources, models, tools and analysis. MinFuture brings together 16 international partners from across universities, public organisations and companies, to deliver new insight, strategic intelligence and a clear roadmap for enabling effective access to global material information.
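
As a concrete illustration of what "material flow analysis" involves, the minimal Python sketch below applies the basic mass-balance bookkeeping (inputs = outputs + stock change) to a toy supply chain. The mineral stages and tonnages are invented for the example and are not MinFuture data.

    # Minimal sketch of the mass-balance bookkeeping at the core of material
    # flow analysis (MFA): for each stage, inflow = outflow + change in stock.
    # Stage names and kilotonne values are purely illustrative.

    flows = {
        # (from_stage, to_stage): kilotonnes per year of a hypothetical mineral
        ("mining", "processing"): 120.0,
        ("processing", "manufacturing"): 95.0,
        ("processing", "tailings"): 25.0,
        ("manufacturing", "use"): 90.0,
        ("manufacturing", "scrap"): 5.0,
    }

    def stock_change(stage):
        """Net addition to stock at a stage: everything flowing in minus everything flowing out."""
        inflow = sum(v for (src, dst), v in flows.items() if dst == stage)
        outflow = sum(v for (src, dst), v in flows.items() if src == stage)
        return inflow - outflow

    for stage in ("processing", "manufacturing", "use"):
        print(f"{stage}: net stock change {stock_change(stage):+.1f} kt/yr")

Scenario modelling then amounts to varying these flows over time and checking that every stage still balances.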


Grant
Agency: European Commission | Branch: FP7 | Program: CP | Phase: ENERGY.2013.5.1.2 | Award Amount: 7.73M | Year: 2014

This proposal aims to develop high-potential novel and environmentally benign technologies and processes for post-combustion CO2 capture leading to real breakthroughs. The proposal includes all main separation technologies for post-combustion CO2 capture: absorption, adsorption and membranes. Enzyme-based systems, bio-mimicking systems and other novel forms of CO2 binding will be explored. For each technology we will focus on a chosen set of promising concepts (four for absorption, two for adsorption and two for membranes). We aim to achieve a 25% reduction in efficiency penalty compared to the state-of-the-art capture process demonstrated in the EU project CESAR, and to deliver a proof of concept for each technology. The various technologies and associated process concepts will be assessed using a novel methodology for comparing new and emerging technologies, for which limited data are available and the maturity level varies substantially. Based on the relative performance against various performance indicators, a selection of two breakthrough technologies will be made. Those two technologies will be studied further in order to benchmark them more thoroughly against demonstrated state-of-the-art technologies. A technological roadmap for industrial demonstration of the two technologies, based on a thorough gap analysis, will finally be established. HiPerCap involves 15 partners, from both the public and private sectors (research, academia, and industry), from 6 different EU Member States and Associated States, and three International Cooperation Partner Countries (Russia, Canada, and Australia). The HiPerCap consortium includes all essential stakeholders in the technology supply chain for CCS: power companies, RTD providers, suppliers, manufacturers (of power plants, industrial systems, equipment, and materials), and engineering companies.
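
To make the headline target concrete, the short Python calculation below illustrates what a "25% reduction in efficiency penalty" would mean for a hypothetical power plant. The baseline efficiency and reference penalty are assumed round numbers, not figures from CESAR or HiPerCap.

    # Back-of-envelope illustration of a 25% reduction in capture efficiency penalty.
    # Both starting values below are assumptions chosen only for the example.

    plant_efficiency_no_capture = 45.0   # % net electric efficiency, assumed
    reference_capture_penalty = 10.0     # percentage points lost to a reference capture process, assumed
    penalty_reduction = 0.25             # the 25% improvement targeted by the project

    reference_efficiency = plant_efficiency_no_capture - reference_capture_penalty
    improved_penalty = reference_capture_penalty * (1.0 - penalty_reduction)
    improved_efficiency = plant_efficiency_no_capture - improved_penalty

    print(f"Reference plant with capture: {reference_efficiency:.1f}% net efficiency")
    print(f"Target plant with capture:    {improved_efficiency:.1f}% net efficiency "
          f"(penalty cut from {reference_capture_penalty:.1f} to {improved_penalty:.1f} points)")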


Grant
Agency: GTR | Branch: EPSRC | Program: | Phase: Training Grant | Award Amount: 3.68M | Year: 2014

The UK water sector is experiencing a period of profound change, with both public and private sector actors seeking evidence-based responses to a host of emerging global, regional and national challenges driven by demographic, climatic and land use changes as well as regulatory pressures for more efficient delivery of services. Although the UK Water Industry is keen to embrace the challenge and well placed to innovate, it lacks the financial resources to support longer-term skills and knowledge generation. A new cadre of engineers is required for the water industry not only to make our society more sustainable and profitable but to develop a new suite of goods and services for a rapidly urbanising world. EPSRC Centres for Doctoral Training provide an ideal mechanism with which to remediate the emerging shortfall in advanced engineering skills within the sector. In particular, the training of next-generation engineering leaders for the sector requires a subtle balance between industrial and academic contributions, calling for a funding mechanism which privileges industrial need but provides for significant academic inputs to training and research. The STREAM initiative draws together five of the UK's leading water research and training groups to secure the future supply of advanced engineering professionals in this area of vital importance to the UK. Led by the Centre for Water Science at Cranfield University, the consortium also draws on expertise from the Universities of Sheffield and Bradford, Imperial College London, Newcastle University, and the University of Exeter. STREAM offers Engineering Doctorate and PhD awards through a programme which incorporates: (i) acquisition of advanced technical skills through attendance at masters-level training courses, (ii) tuition in the competencies and abilities expected of senior engineers, and (iii) doctoral-level research projects. Our EngD students spend at least 75% of their time working in industry or on industry-specified research problems. Example research topics to be addressed by the scheme's students include: delivering drinking water quality and protecting public health; reducing carbon footprint; reducing water demand; improving service resilience and reliability; protecting natural water bodies; reducing sewer flooding; developing and implementing strategies for Integrated Water Management; and delivering new approaches to characterising, communicating and mitigating risk and uncertainty. Fifteen studentships per year for five years will be offered, with each position being sponsored by an industrial partner from the water sector. A series of common attendance events will underpin programme and group identity. These include: (i) an initial three-month taught programme based at Cranfield University, (ii) an open-invitation STREAM symposium and (iii) a Challenge Week taking place each summer, including transferable skills training and guest lectures from leading industrialists and scientists. Outreach activities will extend participation in the programme, pursue collaboration with associated initiatives, promote brand awareness of the EngD qualification, and engage with a wide range of stakeholder groups (including the public) to promote engagement with and understanding of STREAM activities. Strategic direction for the programme will be formulated through an Industry Advisory Board comprising representatives from professional bodies, employers, and regulators.
This body will provide strategic guidance informed by sector needs, review the operational aspects of the taught and research components as a quality control, and conduct foresight studies of relevant research areas. A small International Steering Committee will ensure global relevance for the programme. The total cost of the STREAM programme is £9m, £2.8m of which is being invested by industry and £1.8m by the five collaborating universities. Just under £4.4m is being requested from EPSRC.


Grant
Agency: European Commission | Branch: H2020 | Program: CSA | Phase: INT-01-2015 | Award Amount: 1.06M | Year: 2016

MESOPP: Mesopelagic Southern Ocean Prey and Predators. The underlying concept of MESOPP is the creation of a collaborative network and associated e-infrastructure (a marine ecosystem information system) between European and Australian research teams and institutes sharing similar interests in the Southern Ocean and Antarctica, its marine ecosystem functioning, and the rapid changes occurring with climate warming and the exploitation of marine resources. While MESOPP will focus on enhancing collaboration by removing obstacles to a common methodology and a connected network of acoustic databases for the estimation of micronekton biomass and the validation of models, it will also contribute to a better predictive understanding of the Southern Ocean by furthering the knowledge base on key functional groups of micronekton and on the processes which determine ecosystem dynamics, from physics to large oceanic predators. This first project and its associated implementation (a science network and the specification of an infrastructure) should constitute the nucleus of a larger international programme of acoustic monitoring and micronekton modelling, to be integrated into the general framework of ocean observation following a roadmap that will be prepared during the project.


Grant
Agency: European Commission | Branch: FP7 | Program: CPCSA | Phase: INFRA-2010-1.2.3 | Award Amount: 5.79M | Year: 2010

The objective of Novel EXplorations Pushing Robust e-VLBI Services (NEXPReS) is to offer enhanced scientific performance for all use of the European VLBI Network (EVN) and its partners. The proposed activities will allow the introduction of an e-VLBI component to every experiment, aiming for enhanced robustness, flexibility and sensitivity. This will boost the scientific capability of this distributed facility and offer better data quality and deeper images of the radio sky to a larger number of astronomers. In the past years, e-VLBI has been successfully introduced for real-time, high-resolution radio astronomy. Due to limitations in connectivity, bandwidth and processing capacity, this enhanced mode cannot yet be offered to all astronomers, in spite of its obvious advantages. By providing transparent buffering mechanisms at the telescope and the correlator, it will be possible to address all the current and future bottlenecks in e-VLBI, overcoming limited connectivity to essential stations and network failures, and all but eliminating the need for physical transport of magnetic media. Such a scheme will be far more efficient, and ultimately greener, than the current model, in which complex logistics and a large over-capacity of disks are needed to accommodate global observations. It will require high-speed recording hardware, as well as software systems that hide all complexity. Real-time grid computing and high bandwidth on demand will be addressed as well, to improve the continuous usage of the network and to prepare the EVN for the higher bandwidths that will ensure it remains the most sensitive VLBI array in the world. The proposed programme will strengthen the collaboration between the European radio-astronomical and ICT communities. This will be essential to maintain Europe's leading role in the global SKA project.
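
The store-and-forward buffering described above can be pictured with a minimal sketch: data blocks are always recorded locally at the station and are drained to the correlator whenever the link happens to be up, so an outage never interrupts the observation. The Python class and method names below are illustrative only and are not part of any NEXPReS software.

    # Minimal sketch of telescope-side store-and-forward buffering for e-VLBI.
    # Blocks are recorded regardless of link state and forwarded when possible.

    from collections import deque

    class StationBuffer:
        def __init__(self, capacity_blocks):
            self.buffer = deque()
            self.capacity = capacity_blocks
            self.dropped = 0

        def record(self, block):
            """Called once per integration period, regardless of link state."""
            if len(self.buffer) >= self.capacity:
                self.dropped += 1          # disk full: the oldest block is lost
                self.buffer.popleft()
            self.buffer.append(block)

        def drain(self, link_up, blocks_per_tick):
            """Send buffered blocks to the correlator while the link is available."""
            sent = []
            while link_up and self.buffer and len(sent) < blocks_per_tick:
                sent.append(self.buffer.popleft())
            return sent

    # Toy run: a 10-tick observation with the link down for ticks 3-6.
    station = StationBuffer(capacity_blocks=100)
    for tick in range(10):
        station.record(f"block-{tick}")
        delivered = station.drain(link_up=not (3 <= tick <= 6), blocks_per_tick=2)
        print(f"tick {tick}: delivered {delivered}, backlog {len(station.buffer)}")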


News Article | February 15, 2017
Site: www.eurekalert.org

A breakthrough by CSIRO-led scientists has made the world's strongest material more commercially viable, thanks to the humble soybean. Graphene is a carbon material that is one atom thick. Its thin composition and high conductivity mean it is used in applications ranging from miniaturised electronics to biomedical devices. These properties also enable thinner wire connections, providing extensive benefits for computers, solar panels, batteries, sensors and other devices. Until now, the high cost of graphene production has been the major roadblock to its commercialisation. Previously, graphene was grown in a highly controlled environment with explosive compressed gases, requiring long hours of operation at high temperatures and extensive vacuum processing. CSIRO scientists have developed a novel "GraphAir" technology which eliminates the need for such a highly controlled environment. The technology grows graphene film in ambient air with a natural precursor, making its production faster and simpler. "This ambient-air process for graphene fabrication is fast, simple, safe, potentially scalable, and integration-friendly," said CSIRO scientist Dr Zhao Jun Han, co-author of the paper published today in Nature Communications. "Our unique technology is expected to reduce the cost of graphene production and improve the uptake in new applications." "Our GraphAir technology results in good and transformable graphene properties, comparable to graphene made by conventional methods," said CSIRO scientist and study co-author Dr Dong Han Seo. With heat, soybean oil breaks down into a range of carbon building units that are essential for the synthesis of graphene. The team also transformed other types of renewable and even waste oil, such as oil left over from barbecues or cooking, into graphene films. "We can now recycle waste oils that would have otherwise been discarded and transform them into something useful," Dr Seo said. The potential applications of graphene include water filtration and purification, renewable energy, sensors, personalised healthcare and medicine, to name a few. Graphene has excellent electronic, mechanical, thermal and optical properties as well. Its uses range from improving battery performance in energy devices to cheaper solar panels. CSIRO is looking to partner with industry to find new uses for graphene. Researchers from The University of Sydney, University of Technology Sydney and The Queensland University of Technology also contributed to this work.


News Article | February 15, 2017
Site: phys.org

Graphene is a carbon material that is one atom thick. Its thin composition and high conductivity mean it is used in applications ranging from miniaturised electronics to biomedical devices. These properties also enable thinner wire connections, providing extensive benefits for computers, solar panels, batteries, sensors and other devices. Until now, the high cost of graphene production has been the major roadblock to its commercialisation. Previously, graphene was grown in a highly controlled environment with explosive compressed gases, requiring long hours of operation at high temperatures and extensive vacuum processing. CSIRO scientists have developed a novel "GraphAir" technology which eliminates the need for such a highly controlled environment. The technology grows graphene film in ambient air with a natural precursor, making its production faster and simpler. "This ambient-air process for graphene fabrication is fast, simple, safe, potentially scalable, and integration-friendly," said CSIRO scientist Dr Zhao Jun Han, co-author of the paper published today in Nature Communications. "Our unique technology is expected to reduce the cost of graphene production and improve the uptake in new applications." "Our GraphAir technology results in good and transformable graphene properties, comparable to graphene made by conventional methods," said CSIRO scientist and study co-author Dr Dong Han Seo. With heat, soybean oil breaks down into a range of carbon building units that are essential for the synthesis of graphene. The team also transformed other types of renewable and even waste oil, such as oil left over from barbecues or cooking, into graphene films. "We can now recycle waste oils that would have otherwise been discarded and transform them into something useful," Dr Seo said. The potential applications of graphene include water filtration and purification, renewable energy, sensors, personalised healthcare and medicine, to name a few. Graphene has excellent electronic, mechanical, thermal and optical properties as well. Its uses range from improving battery performance in energy devices to cheaper solar panels. CSIRO is looking to partner with industry to find new uses for graphene. Researchers from The University of Sydney, University of Technology Sydney and The Queensland University of Technology also contributed to this work. More information: Dong Han Seo et al, Single-step ambient-air synthesis of graphene from renewable precursors as electrochemical genosensor, Nature Communications (2017). DOI: 10.1038/ncomms14217


ELANORA-GOLD COAST, AUSTRALIA, November 22, 2016-- Professor Dr. Henry O Meissner (also known in earlier years of his career under full name Ostrowski-Meissner) has been included in Marquis Who's Who. As in all Marquis Who's Who biographical volumes, individuals profiled are selected on the basis of current reference value. Factors such as position, noteworthy accomplishments, visibility, and prominence in a field are all taken into account during the selection process.Professor Dr. Meissner is a three-time graduate of the Agricultural University of Kraków, from which he holds a Ph.D. in nutritional biochemistry, a Master of Science in environmental physiology, and a Bachelor of Science in agricultural sciences. For more than five decades, he has utilized his educational foundation in his career as a nutritional biochemist and educator. Professor Dr. Meissner has devoted his work to the prevention and intervention of metabolic and medical conditions through standardized bioavailable herbal therapeutic products, non-invasive therapies and functional health foods, with special consideration to environmental and ecological factors. He is noted for his research in all aspects of nutritional biochemistry and manufacturing technology of herbal extracts, as well as herbal extraction technology and application of standardized herbal extracts in dietary and therapeutic practice. His work has led to the development of production lines and a variety of unique proprietary therapeutic and functional products for different companies in Australia and internationally.Since 1986, Professor Dr. Meissner has served as the executive director of research and development for TTD International. The company, which is primarily concerned with natural health services and food technology, supports his efforts to create preventive and therapeutic programs for specific groups of people, such as diabetics, athletes, the overweight, those with celiac disease and women with pre- & post-menopausal symptoms. Professor Dr. Meissner has developed and introduced to the international market a variety of novel functional foods, therapeutic preparations, raw standardized active herbal ingredients, and ready-to-use dietary supplements and therapeutics derived through non-chemical extraction of freshly-harvested biomass of organically-cultivated medicinal plants.Passionate about environmental pollution caused by plastic waste, Professor Dr. Meissner spent about 10 years on applied research related to biodegradable polymers used as packaging material, including potable water, various liquids, and dried and fresh food. He also involved himself in the study and introduction of electro-chemically activated, non-toxic water sanitizer for use in public and commercial facilities as a non-toxic disinfectant for processing equipment, as well as a preservative for fresh foods and non-chemical sanitizer for pure and contaminant-free communal water supply. Further achievements include designing an environmentally friendly, solar-powered bio-sanitation system for the delivery of potable water through purification using non-chemical disinfecting sanitation of contaminated water sources. The water is delivered in various biodegradable flexible plastic packaging forms to communities in need of pure water and medical intervention in locations worldwide. Additionally, as an extension to his work in therapeutic research, Professor Dr. 
Meissner has designed and introduced to the market therapeutic devices for personal use, such as a hand-held multi-channel personal pulse magnetic device and personal dual photo-spectral device for dermal regeneration. Professor Dr. Meissner has parlayed his knowledge into a number of research and teaching positions over the years, including at Sydney University, CSIRO-Australia, Nagoya University in Japan, Hubei Agricultural College in China, the Chinese Academy of Science (both Agricultural and then Medical Sciences), Charles Sturt University in Bathurst, NSW, Australia and Research Institute of Medicinal Plants in Poland. He has also held leadership, international research coordinator and consultant roles with a multitude of organizations and institutions nationwide. Professor Dr. Meissner has authored 23 books and contributed more than 300 articles to professional journals. His many accomplishments were taken into consideration when he was chosen to be featured in the 2nd through 8th editions of Who's Who in Medicine and Healthcare, as well as several editions of Who's Who in the World and Who's Who in Science and Engineering.
About Marquis Who's Who: Since 1899, when A. N. Marquis printed the First Edition of Who's Who in America, Marquis Who's Who has chronicled the lives of the most accomplished individuals and innovators from every significant field of endeavor, including politics, business, medicine, law, education, art, religion and entertainment. Today, Who's Who in America remains an essential biographical source for thousands of researchers, journalists, librarians and executive search firms around the world. Marquis now publishes many Who's Who titles, including Who's Who in America, Who's Who in the World, Who's Who in American Law, Who's Who in Medicine and Healthcare, Who's Who in Science and Engineering, and Who's Who in Asia. Marquis publications may be visited at the official Marquis Who's Who website at www.marquiswhoswho.com


News Article | September 13, 2016
Site: www.cemag.us

An international team of more than 20 scientists has inadvertently discovered how to create a new type of crystal using light more than ten billion times brighter than the sun. The discovery, led by Associate Professor Brian Abbey at La Trobe in collaboration with Associate Professor Harry Quiney at the University of Melbourne, has been published in the journal Science Advances. Their findings reverse what has been accepted thinking in crystallography for more than 100 years. The team exposed a sample of crystals, known as Buckminsterfullerene or Buckyballs, to intense light emitted from the world's first hard X-ray free electron laser (XFEL), based at Stanford University. The molecules have a spherical shape forming a pattern that resembles panels on a soccer ball. Light from the XFEL is around one billion times brighter than light generated by any other X-ray equipment — even light from the Australian Synchrotron pales in comparison. Because other X-ray sources deliver their energy much more slowly than the XFEL, all previous observations had found that the X-rays randomly melt or destroy the crystal. Scientists had previously assumed that XFELs would do the same. The result from the XFEL experiments on Buckyballs, however, was not at all what scientists expected. When the XFEL intensity was cranked up past a critical point, the electrons in the Buckyballs spontaneously re-arranged their positions, changing the shape of the molecules completely. Every molecule in the crystal changed from being shaped like a soccer ball to being shaped like an AFL ball at the same time. This effect produces completely different images at the detector. It also altered the sample's optical and physical properties. "It was like smashing a walnut with a sledgehammer and instead of destroying it and shattering it into a million pieces, we instead created a different shape — an almond!" Abbey says. "We were stunned, this is the first time in the world that X-ray light has effectively created a new type of crystal phase," says Quiney, from the School of Physics, University of Melbourne. "Though it only remains stable for a tiny fraction of a second, we observed that the sample's physical, optical and chemical characteristics changed dramatically from its original form," he says. "This change means that when we use XFELs for crystallography experiments we will have to change the way we interpret the data. The results give the 100-year-old science of crystallography a new, exciting direction," Abbey says. "Currently, crystallography is the tool used by biologists and immunologists to probe the inner workings of proteins and molecules — the machines of life. Being able to see these structures in new ways will help us to understand interactions in the human body and may open new avenues for drug development." The study was conducted by researchers from the ARC Centre of Excellence in Advanced Molecular Imaging, La Trobe University, the University of Melbourne, Imperial College London, the CSIRO, the Australian Synchrotron, Swinburne Institute of Technology, the University of Oxford, Brookhaven National Laboratory, the Stanford Linear Accelerator (SLAC), the BioXFEL Science and Technology Centre, Uppsala University, and the Florey Institute of Neuroscience and Mental Health.


News Article | April 18, 2016
Site: phys.org

Oil spills at sea, on the land and in your own kitchen could one day easily be mopped up with a new multipurpose fabric covered with semi-conducting nanostructures, developed by a team of researchers from QUT, CSIRO and RMIT.


Flash Physics is our daily pick of the latest need-to-know developments from the global physics community, selected by Physics World's team of editors and reporters. The first optical clock to be operated in space has been launched by Matthias Lezius and colleagues at the Germany-based Menlo Systems. Based on a frequency-comb laser system, the optical clock operates at a frequency that is about 100,000 times higher than that of the microwave-based atomic clocks that are currently used on global-positioning-system (GPS) satellites. The optical clock is about 22 cm in size and weighs 22 kg. Its power consumption is about 70 W, which makes it suitable for satellite applications. Although this prototype optical clock can only operate at about one tenth the accuracy of today's GPS atomic clocks, Lezius' team is now working on a new version of the clock that promises to improve this accuracy by several orders of magnitude – which could boost the accuracy of GPS. The current clock was tested on board a research rocket that flew a 6-minute parabolic flight. The next version of the optical clock is scheduled for testing in 2017. The research is described in Optica. The physicist and advocate of strategic nuclear-arms reduction Richard Garwin will receive a Presidential Medal of Freedom from US president Barack Obama. Garwin, who is 88, was a PhD student of Enrico Fermi at the University of Chicago before designing the first hydrogen bomb in 1952 under Edward Teller at Los Alamos National Laboratory. He then moved to IBM's Thomas J Watson Research Center, where he is an IBM fellow emeritus. At IBM he worked on a broad range of topics including condensed matter, particle physics and gravitation. He also applied his skills to the development of touch screens, laser printers and intelligence-gathering technologies. Garwin served as a scientific adviser to presidents Kennedy, Johnson and Nixon, which is when he developed his long-standing interest in nuclear non-proliferation. The medal is the highest civilian honour in the US and it will be given to Garwin and 20 other winners at a ceremony at the White House on 22 November. A brilliant burst of radiation known as a fast radio burst (FRB) that has travelled over a billion light years has unexpectedly revealed information about the cosmic web – the large-scale structure of the universe. A team led by Ryan Shannon at the International Centre for Radio Astronomy Research (ICRAR) and Vikram Ravi of the California Institute of Technology says that the latest FRB – one of 18 to be detected to date – is one of the brightest seen. The flash was captured by CSIRO's Parkes radio telescope in New South Wales, Australia. FRBs are extremely rare, short but intense pulses of radio waves, each only lasting about a millisecond. "This particular FRB is the first detected to date to contain detailed information about the cosmic web – regarded as the fabric of the universe – but it is also unique because its travel path can be reconstructed to a precise line of sight and back to an area of space about a billion light-years away that contains only a small number of possible home galaxies," says Shannon. The cosmic web is very difficult to spot because most of the plasma and gas it contains is very faint. It is usually detected when large sections of it are lit up briefly, for example by a bright quasar or an FRB. This particular flash reached the Parkes radio telescope mid last year and is described in Science.
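
The reason an FRB probes the cosmic web is that free electrons along its path delay low radio frequencies more than high ones; the size of that delay measures the electron column, known as the dispersion measure (DM). The Python sketch below evaluates the standard cold-plasma delay formula; the DM value and frequency band used are illustrative, not the measured parameters of the burst described above.

    # Minimal sketch of radio dispersion: low frequencies arrive later than high
    # ones by an amount proportional to the dispersion measure (DM).

    K_DM_MS = 4.149  # ms GHz^2 / (pc cm^-3), standard cold-plasma dispersion constant

    def dispersion_delay_ms(dm_pc_cm3, freq_low_ghz, freq_high_ghz):
        """Arrival-time delay of the low frequency relative to the high frequency."""
        return K_DM_MS * dm_pc_cm3 * (freq_low_ghz**-2 - freq_high_ghz**-2)

    # Example: an assumed DM of 800 pc cm^-3 across a 1.2-1.5 GHz observing band.
    dm = 800.0
    delay = dispersion_delay_ms(dm, 1.2, 1.5)
    print(f"DM = {dm:.0f} pc cm^-3 -> {delay:.0f} ms delay between 1.5 and 1.2 GHz")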


News Article | November 28, 2016
Site: www.eurekalert.org

Australia's solar heliostat technology will be used for concentrating solar thermal (CST) electricity generation in China. In continued emphasis on mitigation and adaptation, CSIRO has partnered with Chinese company Thermal Focus, following China's announcement that it will produce 1.4 GW of CST by 2018, and 5 GW by 2020. This would double the world's installed CST capacity. The relationship enables Thermal Focus to manufacture, market, sell and install CSIRO's patented low-cost heliostats, field control software and design software in China, with a shared revenue stream back to Australia to fund further climate mitigation research. CSIRO Chief Executive Dr Larry Marshall said he was proud of CSIRO Energy's solar thermal technology team and their innovative science for the contribution it is making to support Australia's mitigation R&D. "Australia is a leader in clean energy technology and CSIRO's partnership with China's Thermal Focus takes our climate mitigation focus to a global stage," Dr Marshall said. "This is another great example of all four pillars of our Strategy 2020 in action; using excellent science to deliver breakthrough innovation, and through global collaboration, increasing renewable energy deliverables. "Through this collaboration and our continued solar research, we will be helping to generate cleaner energy, cost savings and technology export benefits for Australia; all lowering global greenhouse gas emissions." Solar thermal technology uses a field of computer-controlled mirrors (heliostats) that accurately reflect and concentrate sunlight onto a receiver on top of a tower. The concentrated sunlight may then be used to heat and store hot molten salt, which can generate superheated steam to drive a turbine for electricity generation. An advantage of this system is the very low cost of storing thermal energy, giving CST technology great potential for medium- to large-scale solar power, even when the sun isn't shining. A heliostat field can represent up to 40 per cent of the total plant cost, so low-cost, high-precision heliostats are a crucial component. CSIRO's unique design features heliostats that are smaller than conventional ones and uses an advanced control system to get high performance from a cost-effective design. CSIRO's software optimises the configuration of the heliostats prior to construction and manages each heliostat to ensure the optimum amount of reflected heat is focused on the receiver, maximising the amount of power that can be produced. The licensing agreement with Thermal Focus follows CSIRO's successful international solar thermal partnerships with Japan's Mitsubishi Hitachi Power Systems, the Cyprus Institute, and Heliostat SA in Australia. Mr Wei Zhu from Thermal Focus welcomed the collaboration and acknowledged CSIRO's reputation in R&D and its work in solar thermal research. "CSIRO's solar thermal technology combined with our manufacturing capability will help expedite and deliver solar thermal as an important source of renewable energy in China," Mr Zhu said. "This partnership will help us commercialise this emerging technology on a larger scale." The licensing agreement with China's Thermal Focus is being announced today at the Asia-Pacific Solar Research Conference at the Australian National University.
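
The aiming problem solved by heliostat field control software can be illustrated with basic reflection geometry: each mirror's surface normal must bisect the direction to the sun and the direction to the receiver so the reflected beam lands on the tower. The Python sketch below shows that calculation; it is generic geometry with made-up coordinates, not CSIRO's control software.

    # Minimal sketch of heliostat aiming: the mirror normal is the bisector of the
    # unit vector toward the sun and the unit vector toward the receiver.

    import numpy as np

    def unit(v):
        v = np.asarray(v, dtype=float)
        return v / np.linalg.norm(v)

    def heliostat_normal(sun_dir, heliostat_pos, receiver_pos):
        """Mirror normal that reflects sunlight from heliostat_pos onto receiver_pos.

        sun_dir: unit vector pointing from the field toward the sun.
        """
        to_receiver = unit(np.asarray(receiver_pos, float) - np.asarray(heliostat_pos, float))
        return unit(unit(sun_dir) + to_receiver)   # angle bisector of the two rays

    # Toy example: sun 45 degrees above the horizon due north, receiver on a 30 m
    # tower, heliostat 50 m south of the tower (x east, y north, z up).
    sun = unit([0.0, 1.0, 1.0])
    normal = heliostat_normal(sun, heliostat_pos=[0.0, -50.0, 0.0], receiver_pos=[0.0, 0.0, 30.0])
    print("mirror normal (east, north, up):", np.round(normal, 3))

The field control software then turns this target normal into drive commands for each mirror and updates it continuously as the sun moves.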


News Article | October 29, 2016
Site: www.techrepublic.com

When the National Aeronautics and Space Administration's (NASA) spacecraft New Horizons made its first close encounter with Pluto on 14 July 2015, it was the first time the world saw what the dwarf planet looked like. The Commonwealth Scientific and Industrial Research Organisation (CSIRO)'s Canberra Deep Space Communication Complex (CDSCC) played an integral role in that mission, as it was responsible for delivering the first close-up, high-resolution pictures of Pluto. The CDSCC, located in Tidbinbilla, just outside Australia's capital city, celebrates its 50th anniversary this year, and is part of NASA's Deep Space Network. It is one of three tracking stations in the world that provide continuous, two-way radio contact with spacecraft exploring the solar system. The complex's sister stations are located in Goldstone, California and near Madrid, Spain. CDSCC director Ed Kruzins said each station is approximately 120 degrees of longitude apart, but with CDSCC located on the southward-facing side of the Earth, it has a better view of the lower part of the solar system than the northern hemisphere, where the other two stations are located. Together, the three stations provide around-the-clock tracking of 40 spacecraft, including missions studying almost every planet of the solar system, comets, the moon, and the sun. Onsite at the CDSCC, there are currently four antennas in operation: a 70-metre antenna, the largest of the lot and the biggest in the southern hemisphere, and three 34-metre dishes, while a fourth 34-metre antenna is currently under construction. Kruzins said when the cost of running the 70-metre antenna becomes unmanageable one day, the 34-metre dishes will replace the aperture of the large antenna. "The important thing for us is to have a highly reliable system, and we have a program of planned and corrective maintenance to make sure the antennas are running perfectly," he said. "Having said that, it doesn't always run perfectly; antennas are very complex precision systems and sometimes [fail] electronically or mechanically, but we got it fixed and it's back online. We have a pretty enviable benchmark of 99.77 percent capture of signals." The antennas are used to send and receive data, and transmit scheduled commands to the spacecraft on deep space missions. Kruzins explained the complex's communication with each spacecraft works around a coordinated schedule set out by NASA's Jet Propulsion Laboratory (JPL) located in Pasadena, California. "We're a bit like being part of a switchboard, or an air traffic control in the sky but in space. We control the spacecraft based on information that is given to us by the Jet Propulsion Laboratory, we work out where they are, so we have a navigation routine, and we take the data down," he said. He went on to say the schedules developed by JPL are based on computer predictions of each spacecraft's location, and suggestions of how CDSCC should configure its antennas' radio frequencies accordingly. "We get our share of [the schedule] and JPL will request us to work through this, and they work it out based on the position of our station, the position of the satellite, and whether the mission is routine or not. If it's a non-routine mission then it gets a high priority, like a launch or encounter like New Horizons, which was supported by two CDSCC antennas for three days. "If it's a spacecraft emergency, and sometimes that happens, we'll swing the antennas to find out what's wrong and send the appropriate JPL commands to make sure it's fine." 
When the raw data is transmitted from the spacecraft, CDSCC collects it, processes it, and shares the information with JPL, before scientists deconvolve the data into images, Kruzins said. CDSCC went through this process during the initial close encounter of Pluto by New Horizons. Kruzins said New Horizons is expected to send back 50GB of high-resolution images over the next 18 months. "We're taking data at roughly a rate of 1.2 kb per second, which is about 10,000 times slower than your internet at home. Why is it so slow? Because it's so far away...[and] we don't want to generate errors when we do that. If we go faster, we generate data bit errors, and the pictures will look pixelated or erroneous," he said. But the Pluto mission is not the only thing keeping the CDSCC on constant monitoring duty. Voyager 1 and 2 were launched 38 years ago on a grand tour of the solar system, with Voyager 2 heading south and Voyager 1 heading north, and CDSCC is now the only station that has any contact with the Voyager 2 spacecraft. "It's because of where we're located and because we have a large antenna that is big enough to be able to hear it in the southern hemisphere. The data we have received over the last three years has confirmed that the spacecraft has passed out of the solar system and truly into interstellar space," he said. Tracking Voyager 2 has since enabled scientists to learn about the size of the solar system boundary, or heliopause, and the distance it takes to reach truly interstellar space from the Sun, Kruzins said. Other missions on the radar for CDSCC include tracking the Dawn spacecraft, which is orbiting the dwarf planet Ceres, and the Mars Odyssey and Reconnaissance Orbiter, which are currently searching for the markers for life on the red planet. Information is also being drawn directly from the two rovers on Mars, Opportunity and Curiosity. Not only is the CDSCC drawing information from planetary missions, Kruzins said the complex is also tracking Kepler, the space observatory on a mission to discover planets around other stars. He said so far nearly 2,000 of the 3,500 planet candidates indicated by Kepler to date have been confirmed. The team behind the 24-hour, seven-days-a-week operation at the complex is classed into three groups, Kruzins said. There are the operators who drive the antennas to do the work; the maintenance people who make sure the antennas function normally, ranging from electricians to antenna mechanics; and people in service roles who make sure things run on budget and ensure the complex runs well, almost like a village. "I think we've been involved in every major encounter that has ever happened around a planetary body in the solar system, and there's more to come," said Kruzins.
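Those figures also explain the long wait for the full Pluto data set. As a rough back-of-envelope check (assuming the quoted "50GB" refers to roughly 50 gigabits of stored science data and that the 1.2 kilobit-per-second downlink is sustained; the real rate varies, so this is only an order-of-magnitude estimate):

    # Rough downlink-time estimate from the figures quoted in the article.
    # Assumptions: "50GB" is read as ~50 gigabits of science data, and the
    # 1.2 kb/s rate is sustained throughout; both are simplifications.
    data_bits = 50e9                      # ~50 gigabits
    rate_bits_per_second = 1.2e3          # 1.2 kilobits per second
    seconds = data_bits / rate_bits_per_second
    months = seconds / (60 * 60 * 24 * 30)
    print(f"~{months:.0f} months")        # roughly 16 months, in line with the 18-month figure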


News Article | December 2, 2016
Site: www.theguardian.com

The Australian pioneer of the polymer bank note says it’s “stupid” that vegetarians and vegans are protesting in the UK about the five pound polymer note containing animal fat. Professor David Solomon says the polymer notes contain trivial amounts of tallow, an animal fat found in candles and soap, yet pressure is being placed on the Bank of England to find an alternative. “It’s stupid. It’s absolutely stupid,” Solomon told the Australian radio station 2GB. “There’s trivial amounts of it in there.” More than 120,000 people have supported an online petition urging the Bank of England to cease using animal fat in the production of five pound notes – the first polymer notes in circulation in the UK. “The new £5 notes contain animal fat in the form of tallow. This is unacceptable to millions of vegans, vegetarians, Hindus, Sikhs, Jains and others in the UK,” the petition states. “We demand that you cease to use animal products in the production of currency that we have to use.” Solomon said polymer notes were extremely hard to forge and had a lot more benefits for the consumer than previous paper notes. “It picks up less drugs than paper notes and you don’t chop down trees,” he said. “It’s more hygienic than a paper note by a long way.” The $10 note was the first polymer bank note in circulation in Australia in 1988. The note was developed by the country’s research and development body, CSIRO, led by a team under Solomon.


News Article | April 18, 2016
Site: www.cemag.us

Oil spills at sea, on the land, and in your own kitchen could one day easily be mopped up with a new multipurpose fabric covered with semiconducting nanostructures, developed by a team of researchers from Queensland University of Technology (QUT), CSIRO, and RMIT. “The fabric could also potentially degrade organic matter when exposed to light thanks to these semi-conducting properties,” says Associate Professor Anthony O’Mullane, from QUT’s School of Chemistry, Physics and Chemical Engineering, who collaborated with researchers from CSIRO and RMIT on this project. “This fabric repels water and attracts oil. We have tested it and found it effective at cleaning up crude oil, and separating organic solvents, ordinary olive and peanut oil from water,” he says. “We were able to mop up crude oil from the surface of fresh and salt water.” O’Mullane says the chemistry behind the creation of the new material was not complex. “All steps in its production are easy to carry out and, in principle, production of this fabric could be scaled up to be used on massive oil spills that threaten land and marine ecosystems,” he says. “On a large scale the material could mop up crude oil to saturation point and then be washed with a common organic solvent and reused. “We used nylon, but in principle any fabric could work. We took commercially available nylon that already had a seed layer of silver woven into it which makes it easier to carry out the next part of the process — addition of the copper. “We then dipped this fabric into a vat where a copper layer was electrochemically deposited onto it. “Now with a copper coating, we converted the fabric into a semiconducting material with the addition of another solution that causes nanostructures to grow on the fabric’s surface — the key to its enhanced properties. “The nanostructures are like tiny rods that cover the surface of the fabric. Water just runs straight off it but the rods attract and hold oil. “Also, when the fabric is saturated it allows the oil to permeate where it then acts like a sieve to separate oil and water.” O’Mullane said the fabric could have multiple uses. “What is particularly exciting is that it is multifunctional and can separate water from other liquids like a sieve, it is self-cleaning, antibacterial, and being a semiconductor opens up further applicability,” he says. “Its antibacterial properties arising from the presence of copper could be used to kill bugs while also separating water from industrial waste in waterways or decontaminate water in remote and poor communities where water contamination is an issue. “Because it is also a semi-conductor it can interact with visible light to degrade organic pollutants such as those found in waste water streams.” O’Mullane says the next step was to test the scalability of the approach and if the material was mechanically robust. “Our testing has shown the material is chemically robust but we need to investigate whether the nanostructures can withstand tough wear conditions.” The research was published in the journal ChemPlusChem. The team consisted of Dr. Faegheh Hoshyargar and Associate Professor O’Mullane (QUT), Dr. Louis Kyratzis and Dr. Anand Bhatt (CSIRO) and Manika Mahajan, Anuradha and Dr. Sheshanath Bhosale (RMIT). Source: Queensland University of Technology


News Article | October 25, 2016
Site: cleantechnica.com

The cost of wind and solar energy has fallen so dramatically that wind and solar plants can now be built in South Africa at nearly half the cost of new coal, according to the country’s principal research organisation. A presentation from the energy division of the Centre for Scientific and Industrial Research (CSIR, that country’s equivalent of Australia’s own CSIRO) illustrates the dramatic difference in costs, based on tenders held this year for wind, solar and coal and assumed costs for other technologies. The analysis by Dr Tobias Bischof-Niemz and Ruan Fourie shows that solar and wind are on par in pricing, and are more than 40 per cent cheaper than new baseload coal plants. Solar and wind are at 0.62 rand per kilowatt-hour ($A0.058/kWh), with coal at 1.03 rand/kWh ($A0.09/kWh). It’s a standout result for South Africa, which unlike developed economies has a shortage of power rather than a surplus, so needs to build new capacity to meet the demands of its growing population and economy. But the results also have implications for countries like Australia, which over the next two decades will need to replace much of its existing fossil fuel capacity. Solar and wind, which are following a similar if slower trajectory in Australia (thanks to its policy environment), will present similar price advantages. Indeed, the results will be seen as important for any review of the draft update to the Integrated Resource Plan for Electricity (Draft IRP), currently in progress by the Department of Energy, which will set the country’s new energy priorities. According to the Daily Maverick website in South Africa, that review was to have been delivered earlier this year, but possibly because of the falling cost of solar, in particular, and wind, the process has been delayed. That program has sought to build 17.3GW of renewable energy and 11.5GW of “non renewables”, including 5GW of coal and 4.7GW of gas-fired generation. A request for proposals for 9.6GW of nuclear power has been put off indefinitely – from its previous deadline of late March and a later deadline of late September – possibly as the result of an assessment of the technology costs. The push into nuclear has been a major controversy in South Africa because of the high costs and the nature of the contracts with the Russian builders. Solar prices have been falling dramatically around the world, with recent bids of $30/MWh or less in Abu Dhabi, Dubai, Chile and Mexico, and a 40 per cent slump in the price of modules in the US in just the last few months. South Africa has also brought down the cost of solar dramatically in the five years since it began competitive tenders for large-scale projects: the most recent tender, in November last year, was one sixth of the cost of its first tender in 2011. The cost of wind energy has also fallen by 60 per cent. EE Publishers says the solar PV, wind and coal IPP tariffs presented by the CSIR for South Africa are fully comparable, because they are all based on long-term take-or-pay contracts with the same off-taker (Eskom). Notably, the cost of coal does not include the proposed carbon tax of 120 rand per tonne of CO2, making coal even less competitive. The CSIR study suggests that the LCOE for new baseload coal has likely risen in any case to R1.10 to R1.20/kWh, making it twice the cost of wind or solar, while it puts the price of new baseload nuclear at R1.20 to R1.30/kWh, new mid-merit gas (CCGT) at R1 to R1.20/kWh, and new mid-merit coal at R1.50/kWh. 
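The headline comparison falls straight out of the quoted tariffs. A minimal sketch of the arithmetic, using the article's own numbers and the exchange rate implied by its Australian-dollar conversions (variable names are ours):

    # Tariffs quoted in the article (rand per kilowatt-hour)
    solar_wind = 0.62
    new_coal = 1.03

    saving = 1 - solar_wind / new_coal
    print(f"Wind/solar are {saving:.0%} cheaper than new baseload coal")  # ~40%

    # Exchange rate implied by the article's conversion of 0.62 R/kWh to ~A$0.058/kWh
    rand_per_aud = solar_wind / 0.058
    print(f"Coal in A$: {new_coal / rand_per_aud:.3f}/kWh")               # ~A$0.096/kWh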


News Article | September 14, 2016
Site: www.nanotech-now.com

Abstract: The discovery, led by Associate Professor Brian Abbey at La Trobe in collaboration with Associate Professor Harry Quiney at the University of Melbourne, has been published in the journal Science Advances. Their findings reverse what has been the accepted thinking in crystallography for more than 100 years. The team exposed a sample of crystals, known as Buckminsterfullerene or Buckyballs, to intense light emitted from the world's first hard X-ray free electron laser (XFEL), based at Stanford University in the United States. The molecules have a spherical shape forming a pattern that resembles panels on a soccer ball. Light from the XFEL is around one billion times brighter than light generated by any other X-ray equipment -- even light from the Australian Synchrotron pales in comparison. Because other X-ray sources deliver their energy much more slowly than the XFEL, all previous observations had found that the X-rays randomly melt or destroy the crystal. Scientists had previously assumed that XFELs would do the same. The result from the XFEL experiments on Buckyballs, however, was not at all what scientists expected. When the XFEL intensity was cranked up past a critical point, the electrons in the Buckyballs spontaneously re-arranged their positions, changing the shape of the molecules completely. Every molecule in the crystal changed from being shaped like a soccer ball to being shaped like an AFL ball at the same time. This effect produces completely different images at the detector. It also altered the sample's optical and physical properties. "It was like smashing a walnut with a sledgehammer and instead of destroying it and shattering it into a million pieces, we instead created a different shape - an almond!" Assoc. Prof. Abbey said. "We were stunned; this is the first time in the world that X-ray light has effectively created a new type of crystal phase," said Associate Professor Quiney, from the School of Physics, University of Melbourne. "Though it only remains stable for a tiny fraction of a second, we observed that the sample's physical, optical and chemical characteristics changed dramatically from its original form," he said. "This change means that when we use XFELs for crystallography experiments we will have to change the way we interpret the data. The results give the 100-year-old science of crystallography a new, exciting direction," Assoc. Prof. Abbey said. "Currently, crystallography is the tool used by biologists and immunologists to probe the inner workings of proteins and molecules -- the machines of life. Being able to see these structures in new ways will help us to understand interactions in the human body and may open new avenues for drug development." The study was conducted by researchers from the ARC Centre of Excellence in Advanced Molecular Imaging, La Trobe University, the University of Melbourne, Imperial College London, the CSIRO, the Australian Synchrotron, Swinburne Institute of Technology, the University of Oxford, Brookhaven National Laboratory, the Stanford Linear Accelerator (SLAC), the BioXFEL Science and Technology Centre, Uppsala University and the Florey Institute of Neuroscience and Mental Health.


News Article | December 2, 2016
Site: www.eurekalert.org

An international team of scientists, with IAC participation, has discovered that the biggest galaxies in the universe develop in cosmic clouds of cold gas. This finding, which was made possible using radio telescopes in Australia and the USA, is being published today in the journal Science. Galaxies are usually grouped into clusters, huge systems comprising up to thousands of millions of these objects, in whose interior are found the most massive galaxies in the universe. Until now scientists believed that these "supergalaxies" formed from smaller galaxies that grow closer and closer together until they merge, due to gravitational attraction. "In the local universe we see galaxies merging," says Bjorn Emonts, the first author of the article and a researcher at the Centro de Astrobiología (CSIC-INTA) in Madrid, "and we expected to observe that the formation of supergalaxies took place in the same way in the early (now distant) universe." To investigate this, telescopes were pointed towards an embryonic galaxy cluster 10 thousand million light years away, in whose interior the giant Spiderweb galaxy is forming, and discovered a cloud of very cold gas where the galaxies were merging. This enormous cloud, with some 100 thousand million times the mass of the Sun, is mainly composed of molecular hydrogen, the basic material from which stars and galaxies are formed. Previous studies had discovered the mysterious appearance of thousands of millions of young stars throughout the Spiderweb, and for this reason it is now thought that this supergalaxy condensed directly from the cold gas cloud. Instead of observing the hydrogen directly, they did so using carbon monoxide, a tracer gas which is much easier to detect. "It is surprising," comments Matthew Lehnert, second author of the article and researcher at the Astrophysics Institute of Paris, "how cold this gas is, at some 200 degrees below zero Celsius. We would have expected a lot of collapsing galaxies, which would have heated the gas, and for that reason we thought that the carbon monoxide would be much more difficult to detect." However, combining the interferometers VLA (Very Large Array) in New Mexico (USA) and the ATCA (Australia Telescope Compact Array) in Australia, they observed and found that the major fraction of the carbon monoxide was not in the small galaxies. "With the VLA," explained Helmut Dannerbauer, another of the authors of the article and a researcher at the IAC who contributed to the detection of the molecular gas, "we can see only the gas in the central galaxy, which is one third of all the carbon monoxide detected with the ATCA. This latter instrument, which is more sensitive for observing large structures, revealed an area of size 70 kiloparsecs (some 200,000 light years) with carbon monoxide distributed around the big galaxy, in the volume populated by its smaller neighbours. Thanks to the two interferometers, we discovered the cloud of cosmic gas entangled among them." Ray Norris, another of the authors of the study and a researcher at the CSIRO and Western Sydney University, underlined that "this finding shows just what we can manage to do from the ground with international collaboration". 
According to George Miley, a coauthor of the article, whose group at the University of Leiden (the Netherlands) discovered and studied this embryonic cluster with the Hubble Space Telescope at the end of the 90s: "Spiderweb is an astonishing laboratory, which lets us witness the birth of supergalaxies in the interiors of clusters, which are the 'cosmic cities' of the Universe." He concludes: "We are beginning to understand how these giant objects formed from the ocean of gas which surrounds them." Now it remains to understand the origin of the carbon monoxide. "It is a byproduct of stellar interiors, but we are not sure where it came from, or how it accumulated in the centre of this cluster of galaxies. To know this we will have to look even further back into the history of the universe," concludes Emonts.


News Article | December 3, 2016
Site: www.theguardian.com

A vegetarian restaurant owner’s decision not to accept the new £5 note because it contains traces of meat byproducts has come under fire from vegetarians and omnivores alike. Sharon Meijland, who has run the Rainbow cafe in Cambridge for three decades, said she would not allow customers to pay with the polymer note because the animal byproduct tallow is used during the production process. The businesswoman said she had been shocked and frightened by some of the online reaction to her decision, but that customers had supported her stance. “Our own customers who are actually in the restaurant in Cambridge have been very favourable, but it is people on Facebook – there’s been a good deal of charming comments such as ‘I hope this comes back to bite you in the ass’,” she said. On Twitter, Stephen Coltrane said he had eaten at the Rainbow cafe and enjoyed it, but that Meijland’s stance was an “over-reaction”. Robbie Weir tweeted: “Pretty hypocritical when the food on your menu … contains animal products.” Denise Venn tweeted: “I’m veggie and I find this so embarrassing. We’re not all this stupid.” Others accused Meijland, 66, of seeking publicity, but she rejected the claim and said some people were reacting in such a way “because I made a stand”. Meanwhile, former Smiths frontman Morrissey has hit out at the animal fat contained in the fiver. The Meat is Murder singer said on his True To You fan site: “If it had been revealed by the Bank of England that the new British five pound note contained slices of cat or dog, the country would be in an uproar. “But because we have been trained to accept the vicious slaughter of cows, sheep and pigs, the UK media can only make light of the use of tallow in the new British fiver because animal slaughter is thought to be outside of the human grasp and concern.” He suggested anyone who did not take issue with the revelation that traces of tallow are used in the production process should donate their own bodies for “decorative use in future £5 notes”. More than 125,000 people have signed a petition calling on the Bank of England to remove tallow from the new notes. After signing it, Meijland said she spoke with staff and they decided they could not justify handling the notes. “We all said we all felt very uneasy about handling it. We thought the only way round this is to just not accept them.” Vowing to stick with the decision, she added: “I am shocked and frightened at my age to get such hatred [online].” She said the cafe had been runner-up in the best ethical restaurant category in the Observer Food Monthly awards for the last five years. Polymer bank notes, which last far longer than their paper equivalents, were developed in Australia. Prof David Solomon, who led a CSIRO team that created the notes, said this week that the tallow controversy was “absolutely stupid”, adding: “There’s trivial amounts of it in there.” As well as being robust and difficult to forge, Solomon said polymer notes were also more hygienic. The Bank of England said this week that Innovia, which makes the polymer fiver, was considering “potential solutions” to the problem. “Innovia is now working intensively with its supply chain and will keep the Bank informed on progress towards potential solutions,” it said. The £5 note – the first to be printed on polymer by the Bank of England – was introduced in September and is likely to signal the beginning of the end for paper money.


SINGAPORE, Dec. 7, 2016 /PRNewswire/ -- Carmentix Private Limited ("Carmentix") and the University of Melbourne are proud to announce the "Preterm Birth Biomarker Discovery" initiative. The aim of this collaborative clinical study is to validate novel biomarkers discovered by Carmentix and biomarkers previously discovered and validated at the University of Melbourne in a combined panel, and to assess the risk for preterm birth as early as 20 weeks of gestation. The retrospective study led by Dr. Harry Georgiou, PhD and Dr. Megan Di Quinzio, MD at the University of Melbourne will validate the statistical strength of the novel biomarker panel. "Carmentix is excited to begin this collaboration, as we are keen to further develop the biomarkers discovered on our unique data mining platform," said Dr. Nir Arbel, CEO of Carmentix. "If validated, this new panel of biomarkers may shed hope to significantly reduce the number of preterm birth cases on a global scale." A clinical obstetrician and researcher, Dr. Di Quinzio frequently sees mothers asking "why was my baby born prematurely?" There is often no satisfactory answer. "Preterm birth continues to be a global health problem but sadly, reliable diagnostic tools are lacking," said Dr. Georgiou, scientific leader at the University of Melbourne. "This collaborative initiative with a strong commercial partner will help pave the way for a novel approach for better diagnosis and hopefully the prevention of preterm labour." Carmentix is an Esco Ventures-backed startup company based in Singapore. Carmentix is developing a novel biomarker prognostic panel to significantly reduce the number of preterm birth cases by establishing biomolecular tools that will alert clinicians to preterm birth risk weeks before symptoms occur. Carmentix's technology relies on a multiple-pathway analysis utilizing a unique panel of biomarkers. This panel of proprietary markers will allow the prediction of preterm birth at 16-20 weeks of gestation, anticipating a high-accuracy predictive algorithm due to its coverage of the bottleneck molecular processes involved in preterm birth. Carmentix's goal is to achieve a cost-effective solution that would be robust and accurate, and will accommodate clinical settings worldwide. About the University of Melbourne and its commercialisation initiatives The University of Melbourne is Australia's best and one of the world's leading universities. As an R&D hub with world-leading specialists in science, technology and medicine, Melbourne undertakes cutting-edge research to create new ways of thinking, new technology and new expertise to build a better future. World-class research, real-world solutions: The University of Melbourne embraces a culture of innovation -- working with industry, government, non-governmental organisations and the community to solve real-world challenges. Our commercial partnerships bring research to life through collaboration in areas of bio-engineering, materials development, medical technology innovation, community capacity development and cultural entrepreneurship. Some of the ground-breaking commercialised technology created at the University of Melbourne includes the cochlear implant, the stentrode (a device that delivers mind control over computers, robotic limbs or exoskeletons), and novel anti-fibrotic drug candidates for the treatment of fibrosis (prevalent in such chronic conditions as chronic kidney disease, chronic heart failure, pulmonary fibrosis and arthritis). 
The University of Melbourne is closely partnered with the Peter Doherty Institute for Infection and Immunity, Walter and Eliza Hall Institute, CSIRO, CSL, and The Royal Melbourne, Royal Children's and Royal Women's Hospitals. With over 160 years of leadership in education and research, the University responds to immediate and future challenges facing our society through innovation in research. The University of Melbourne is No. 1 in Australia and 31 in the world (Times Higher Education World University Rankings 2015-2016).


News Article | December 14, 2016
Site: www.eurekalert.org

Sydney, Australia: Australian researchers from the National Computational Infrastructure (NCI) and the ARC Centre of Excellence for Climate System Science have produced a remarkable high-resolution animation of the largest El Niño ever recorded. It is so detailed that it took 30,000 computer hours crunching ocean model data on Australia's most powerful supercomputer, Raijin, before it could be extracted by the NCI visualisation team to produce the animation. The animation looks beneath the ocean surface to reveal the oceanic processes that led to the 1997/98 El Niño - an event that caused billions of dollars of damage worldwide and was followed by consecutive strong La Niña events. "The animation shows how shifting pools of warmer or cooler than average water 300m below the surface of the ocean can trigger these powerful events," said Dr Alex Sen Gupta, a member of the visualisation team from the ARC Centre of Excellence for Climate System Science. "When these pools of water burst through to the surface and link up with the atmosphere they can set off a chain reaction that leads to El Niños or La Niñas." The ocean model that produced the animation used a 30km horizontal grid and split the vertical depth into 50 cells, which allowed the researchers to see the development of the El Niño and La Niñas at high resolution. "Raijin gives us the capacity to model complex global systems like El Niño that require a high resolution for better accuracy," said a member of the team from the Australian National University, Associate Prof Andy Hogg. "It was these huge volumes of data produced by the model that meant we needed the specialist visualisation expertise from NCI to reveal what happened in detail." The 97/98 El Niño was a particularly damaging event. It was linked to massive forest fires in Indonesia, catastrophic flooding in Peru and the first "global" coral bleaching event, which killed 16% of the world's corals in a single year. While it is impossible to prevent such events, researchers believe - and the model confirms - that better observation systems can help us forecast them earlier. "The animation shows us that a well-developed deep ocean observation system can give us advance warning of extreme El Niños and La Niñas," said team member Dr Shayne McGregor from Monash University. "Preserving and expanding the currently sparse observation system is critical to improving our seasonal prediction capability in the future." Research over the past few years led by CSIRO and the University of New South Wales has indicated that "super" El Niños like the 97/98 event are likely to become more frequent as the climate warms. A member of the visualisation team, Dr Agus Santoso, found in 2013 that as the climate warms, we are likely to see noticeable changes to El Niños. "As the planet warms it also appears that the swings between the two extremes, from El Niño to La Niña like the 1997 to 1999 sequence, will become more frequent," said Dr Santoso from the University of New South Wales. "For this reason and many others a reliable early warning of El Niño and La Niña will be vital for farmers, industry groups and societies to be better prepared for the extreme conditions they inevitably bring."
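The computational load follows from the resolution quoted above. A rough, purely illustrative estimate of the grid size (the actual model grid is not described beyond the 30 km spacing and 50 vertical cells, and an ocean model only covers the roughly 70 per cent of the globe that is ocean):

    import math

    # Rough grid-size estimate for a global 30 km ocean model with 50 levels.
    EARTH_RADIUS_KM = 6371
    spacing_km = 30
    levels = 50
    ocean_fraction = 0.7                            # oceans cover ~70% of the surface

    surface_km2 = 4 * math.pi * EARTH_RADIUS_KM ** 2
    columns = surface_km2 * ocean_fraction / spacing_km ** 2
    cells = columns * levels
    print(f"~{cells/1e6:.0f} million grid cells")   # on the order of 20 million

Stepping a grid of that size forward in time for years of simulation, while writing out the three-dimensional fields needed for visualisation, is what consumes the tens of thousands of computer hours described in the article.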


News Article | December 6, 2016
Site: www.greentechmedia.com

Washington Post: Al Gore Just Had ‘an Extremely Interesting Conversation’ With Trump on Climate Change As Donald Trump continues to indicate that he might be willing to change his position on climate change, which he has long called a “hoax,” the president-elect met Monday with former vice president Al Gore, who has become a prominent activist in the fight against global warming. Gore was originally scheduled to meet just with Trump’s oldest daughter, Ivanka, who is not registered with a political party and has pushed her father to adopt some positions usually promoted by Democrats. Gore told reporters that after that meeting, he had “an extremely interesting conversation” with the president-elect. “I had a lengthy and very productive session with the president-elect. It was a sincere search for areas of common ground,” Gore told reporters after spending about 90 minutes at Trump Tower in Manhattan during the lunch hour Monday. “I had a meeting beforehand with Ivanka Trump. The bulk of the time was with the president-elect, Donald Trump. I found it an extremely interesting conversation, and to be continued, and I'm just going to leave it at that.” Australia’s electricity and gas transmission industry is calling on the Turnbull government to implement a form of carbon trading in the national electricity market by 2022 and review the scope for economy-wide carbon pricing by 2027. Energy Networks Australia warns in a new report examining how to achieve zero net carbon emissions by 2050 that policy stability and regulatory certainty are the key to delivering lower power prices and reliable electricity supply. While Tony Abbott once characterized carbon pricing as a wrecking ball through the Australian economy, the new report, backed by CSIRO, says adopting an emissions intensity scheme is the least costly way of reducing emissions, and could actually save customers $200 a year by 2030. After selling about 70,000 "electrified" vehicles (that is, hybrids, plug-in hybrids and electric vehicles) in 2015 and being on track to do the same in 2016, Ford is crying uncle. Ford now wants to lead the charge to lower the federal fuel economy standards. That's according to Bloomberg, which talked to Ford CEO Mark Fields. Fields said that he wants to talk to Trump about many things, including lower CAFE standards. "We will be very clear in the things we'd like to see," Fields told Bloomberg. Fields said that the rules -- which Ford agreed to, of course, in 2011 -- mean that automakers have to build more hybrids and electric vehicles than they want to. And there's no demand, Fields said as he blamed car shoppers. "In 2008, there were 12 electrified vehicles offered in the US market and it represented 2.3 percent of the industry," he said. "Fast forward to 2016, there are 55 models, and year to date it's 2.8 percent." The switch to renewables in Germany is saving money and creating jobs, according to a new economic analysis by the international consulting firm PricewaterhouseCoopers (PwC). The report finds that the German government’s 2015-2020 climate action plan and energy-efficiency measures will save about 149 billion euros. Research that appeared last month in Earth Systems Science Data suggested that global carbon-dioxide emissions will be growing slowly, thanks in part to reduction moves by China and the United States. Several research projects have found that a downturn in the use of fossil fuels in the United States that would come from switches to renewable energy could save U.S. 
consumers money -- but coal’s not dead yet. President-elect Donald Trump insisted during the campaign season that supporting the U.S. coal industry will help the economy and create jobs. Meanwhile, India plans to double coal production by 2020. The controversy over utility ownership of electric-vehicle charging infrastructure could come to a head in California. Two of the state’s three dominant investor-owned utilities are already acting on plans to build networks of EV chargers approved by the California Public Utilities Commission. In one, the utility will own all the chargers; in the other, all will be owned by independent charger providers. The third pilot -- PG&E’s closely watched hybrid proposal -- will involve both utility- and third-party-owned EV chargers. Set to be decided this month, it could be especially important in helping regulators decide how to structure the large-scale charger buildout in the nation’s largest electric-vehicle market. Advocates for utility ownership say their ability to rate-base investments can help ensure charging infrastructure reaches all customers -- not just the higher-income ones who today account for the majority of EV ownership. But third-party providers say that could squeeze them out of the market, and consumer advocates have voiced concerns about the cost-effectiveness of utility EV investments.


News Article | December 8, 2016
Site: phys.org

After astronomers discovered the galaxy, known as SPT0346-52, with the National Science Foundation's South Pole Telescope (SPT), they observed it with several space and other ground-based telescopes. Data from the international Atacama Large Millimeter/submillimeter Array (ALMA) previously revealed extremely bright infrared emission, suggesting that the galaxy is undergoing a tremendous burst of star birth. However, an alternative explanation remained: Was much of the infrared emission instead caused by a rapidly growing supermassive black hole at the galaxy's center? Gas falling towards the black hole would become much hotter and brighter, causing surrounding dust and gas to glow in infrared light. To explore this possibility, researchers used NASA's Chandra X-ray Observatory and CSIRO's Australia Telescope Compact Array, a radio telescope. No X-rays or radio waves were detected, so astronomers were able to rule out a black hole being responsible for most of the bright infrared light. "We now know that this galaxy doesn't have a gorging black hole, but instead is shining brightly with the light from newborn stars," said Jingzhe Ma of the University of Florida in Gainesville, Florida, who led the new study. "This gives us information about how galaxies and the stars within them evolve during some of the earliest times in the Universe." Stars are forming at a rate of about 4,500 times the mass of the Sun every year in SPT0346-52, one of the highest rates seen in a galaxy. This is in contrast to a galaxy like the Milky Way that only forms about one solar mass of new stars per year. "Astronomers call galaxies with lots of star formation 'starburst' galaxies," said co-author Anthony Gonzalez, also of the University of Florida. "That term doesn't seem to do this galaxy justice, so we are calling it a 'hyper-starburst' galaxy." The high rate of star formation implies that a large reservoir of cool gas in the galaxy is being converted into stars with unusually high efficiency. Astronomers hope that by studying more galaxies like SPT0346-52 they will learn more about the formation and growth of massive galaxies and the supermassive black holes at their centers. "For decades, astronomers have known that supermassive black holes and the stars in their host galaxies grow together," said co-author Joaquin Vieira of the University of Illinois at Urbana-Champaign. "Exactly why they do this is still a mystery. SPT0346-52 is interesting because we have observed an incredible burst of stars forming, and yet found no evidence for a growing supermassive black hole. We would really like to study this galaxy in greater detail and understand what triggered the star formation and how that affects the growth of the black hole." SPT0346-52 is part of a population of strong gravitationally-lensed galaxies discovered with the SPT. SPT0346-52 appears about six times brighter than it would without gravitational lensing, which enables astronomers to see more details than would otherwise be possible. A paper describing these results appears in a recent issue of The Astrophysical Journal. More information: Jingzhe Ma et al., "SPT0346-52: Negligible AGN Activity in a Compact, Hyper-Starburst Galaxy at z = 5.7," The Astrophysical Journal (2016). DOI: 10.3847/0004-637X/832/2/114, https://arxiv.org/abs/1609.08553


News Article | December 9, 2016
Site: spaceref.com

Astronomers have used NASA's Chandra X-ray Observatory and other telescopes to show that a recently discovered galaxy is undergoing an extraordinary boom of stellar construction. The galaxy is 12.7 billion light-years from Earth, seen at a critical stage in the evolution of galaxies about a billion years after the Big Bang. After astronomers discovered the galaxy, known as SPT 0346-52, with the National Science Foundation's South Pole Telescope (SPT), they observed it with several space and other ground-based telescopes. Data from the NSF/ESO Atacama Large Millimeter/submillimeter Array (ALMA) previously revealed extremely bright infrared emission, suggesting that the galaxy is undergoing a tremendous burst of star birth. However, an alternative explanation remained: Was much of the infrared emission instead caused by a rapidly growing supermassive black hole at the galaxy's center? Gas falling towards the black hole would become much hotter and brighter, causing surrounding dust and gas to glow in infrared light. To explore this possibility, researchers used NASA's Chandra X-ray Observatory and CSIRO's Australia Telescope Compact Array, a radio telescope. No X-rays or radio waves were detected, so astronomers were able to rule out a black hole being responsible for most of the bright infrared light. "We now know that this galaxy doesn't have a gorging black hole, but instead is shining brightly with the light from newborn stars," said Jingzhe Ma of the University of Florida in Gainesville, Florida, who led the new study. "This gives us information about how galaxies and the stars within them evolve during some of the earliest times in the universe." Stars are forming at a rate of about 4,500 times the mass of the Sun every year in SPT0346-52, one of the highest rates seen in a galaxy. This is in contrast to a galaxy like the Milky Way that only forms about one solar mass of new stars per year. "Astronomers call galaxies with lots of star formation 'starburst' galaxies," said co-author Anthony Gonzalez, also of the University of Florida. "That term doesn't seem to do this galaxy justice, so we are calling it a 'hyper-starburst' galaxy." The high rate of star formation implies that a large reservoir of cool gas in the galaxy is being converted into stars with unusually high efficiency. Astronomers hope that by studying more galaxies like SPT0346-52 they will learn more about the formation and growth of massive galaxies and the supermassive black holes at their centers. "For decades, astronomers have known that supermassive black holes and the stars in their host galaxies grow together," said co-author Joaquin Vieira of the University of Illinois at Urbana-Champaign. "Exactly why they do this is still a mystery. SPT0346-52 is interesting because we have observed an incredible burst of stars forming, and yet found no evidence for a growing supermassive black hole. We would really like to study this galaxy in greater detail and understand what triggered the star formation and how that affects the growth of the black hole." SPT0346-52 is part of a population of strong gravitationally-lensed galaxies discovered with the SPT. SPT0346-52 appears about six times brighter than it would without gravitational lensing, which enables astronomers to see more details than would otherwise be possible. Reference: "SPT0346-52: Negligible AGN Activity in a Compact, Hyper-Starburst Galaxy at z = 5.7," Jingzhe Ma et al., 2016 Dec. 
1, Astrophysical Journal [http://iopscience.iop.org/article/10.3847/0004-637X/832/2/114 , preprint: https://arxiv.org/abs/1609.08553]. NASA's Marshall Space Flight Center in Huntsville, Alabama, manages the Chandra program for NASA's Science Mission Directorate in Washington. The Smithsonian Astrophysical Observatory in Cambridge, Massachusetts, controls Chandra's science and flight operations.


News Article | December 1, 2016
Site: www.eurekalert.org

They're flexible, cheap to produce and simple to make - which is why perovskites are the hottest new material in solar cell design. And now, engineers at Australia's University of New South Wales in Sydney have smashed the trendy new compound's world efficiency record. Speaking at the Asia-Pacific Solar Research Conference in Canberra on Friday 2 December, Anita Ho-Baillie, a Senior Research Fellow at the Australian Centre for Advanced Photovoltaics (ACAP), announced that her team at UNSW has achieved the highest efficiency rating with the largest perovskite solar cells to date. The 12.1% efficiency rating was for a 16 cm2 perovskite solar cell, the largest single perovskite photovoltaic cell certified with the highest energy conversion efficiency, and was independently confirmed by the international testing centre Newport Corp, in Bozeman, Montana. The new cell is at least 10 times bigger than the current certified high-efficiency perovskite solar cells on record. Her team has also achieved an 18% efficiency rating on a 1.2 cm2 single perovskite cell, and an 11.5% for a 16 cm2 four-cell perovskite mini-module, both independently certified by Newport. "This is a very hot area of research, with many teams competing to advance photovoltaic design," said Ho-Baillie. "Perovskites came out of nowhere in 2009, with an efficiency rating of 3.8%, and have since grown in leaps and bounds. These results place UNSW amongst the best groups in the world producing state-of-the-art high-performance perovskite solar cells. And I think we can get to 24% within a year or so." Perovskite is a structured compound, where a hybrid organic-inorganic lead or tin halide-based material acts as the light-harvesting active layer. Perovskite cells are the fastest-advancing solar technology to date, and are attractive because the compound is cheap to produce and simple to manufacture, and can even be sprayed onto surfaces. "The versatility of solution deposition of perovskite makes it possible to spray-coat, print or paint on solar cells," said Ho-Baillie. "The diversity of chemical compositions also allows cells to be transparent, or made of different colours. Imagine being able to cover every surface of buildings, devices and cars with solar cells." Most of the world's commercial solar cells are made from a refined, highly purified silicon crystal and, like the most efficient commercial silicon cells (known as PERC cells and invented at UNSW), need to be baked above 800°C in multiple high-temperature steps. Perovskites, on the other hand, are made at low temperatures and are 200 times thinner than silicon cells. But although perovskites hold much promise for cost-effective solar energy, they are currently prone to fluctuating temperatures and moisture, making them last only a few months without protection. Along with every other team in the world, Ho-Baillie's is trying to extend its durability. Thanks to what engineers learned from more than 40 years of work with layered silicon, they're confident they can extend this. Nevertheless, there are many existing applications where even disposable low-cost, high-efficiency solar cells could be attractive, such as use in disaster response, device charging and lighting in electricity-poor regions of the world. Perovskite solar cells also have the highest power-to-weight ratio amongst viable photovoltaic technologies. 
"We will capitalise on the advantages of perovskites and continue to tackle issues important for commercialisation, like scaling to larger areas and improving cell durability," said Martin Green, Director of the ACAP and Ho-Baillie's mentor. The project's goal is to lift perovskite solar cell efficiency to 26%. The research is part of a collaboration backed by $3.6 million in funding through the Australian Renewable Energy Agency's (ARENA) 'solar excellence' initiative. ARENA's CEO Ivor Frischknecht said the achievement demonstrated the importance of supporting early stage renewable energy technologies: "In the future, this world-leading R&D could deliver efficiency wins for households and businesses through rooftop solar as well as for big solar projects like those being advanced through ARENA's investment in large-scale solar." To make a perovskite solar cells, engineers grow crystals into a structure known as 'perovskite', named after Lev Perovski, the Russian mineralogist who discovered it. They first dissolve a selection of compounds in a liquid to make the 'ink', then deposit this on a specialised glass which can conduct electricity. When the ink dries, it leaves behind a thin film that crystallises on top of the glass when mild heat is applied, resulting in a thin layer of perovskite crystals. The tricky part is growing a thin film of perovskite crystals so the resulting solar cell absorbs a maximum amount of light. Worldwide, engineers are working to create smooth and regular layers of perovskite with large crystal grain sizes in order to increase photovoltaic yields. Ho-Baillie, who obtained her PhD at UNSW in 2004, is a former chief engineer for Solar Sailor, an Australian company which integrates solar cells into purpose-designed commercial marine ferries which currently ply waterways in Sydney, Shanghai and Hong Kong. The Australian Centre for Advanced Photovoltaics is a national research collaboration based at UNSW, whose partners are the University of Queensland, Monash University, the Australian National University, the University of Melbourne and the CSIRO Manufacturing Flagship. The collaboration is funded by an annual grant from ARENA, and partners include Arizona State University, Suntech Power and Trina Solar. UNSW's Faculty of Engineering is the powerhouse of engineering research in Australia, comprising of nine schools, 21 research centres and participating or leading 10 Cooperative Research Centres. It is ranked in the world's top 50 engineering faculties, and home to Australia's largest cohort of engineering undergraduate, postgraduate, domestic and international students. UNSW itself and is ranked #1 in Australian Research Council funding ($150 million in 2016); ranked #1 in Australia for producing millionaires (#33 globally) and ranked #1 in Australia for graduates who create technology start-ups.


News Article | October 29, 2016
Site: www.techrepublic.com

An abundance of fresh fruits, vegetables, meats, and seafood is always on display when we walk into our local supermarket. But how often do any of us really think about the efforts our local farmers go through to make sure this fresh produce is available to us? The pressure on farmers to continue to produce at the same rate — or even higher — is set to worsen as worldwide predictions have concluded that we will face a global food shortage crisis in less than 50 years. Findings in the latest report by the Global Harvest Initiative show that the world population will exceed 9 billion people in 2050, and, as a result, the demand for food is likely to outpace the amount that can be produced. This echoes a similar prediction made by the Australian government's Department of Agriculture, Fisheries, and Forestry, which highlighted that in order for Australia to maintain a stable food security level, there is a need to increase global agricultural output by 70 percent by 2050. Australia's agricultural sector accounts for 2.4 percent of the country's gross domestic product (GDP). However, in recent times, according to the National Farmers' Federation, agricultural productivity growth has slowed to 1 percent per annum, illustrating the need for the sector to look at new ways to ensure that the industry is able to keep up with growing population demands. Justin Goc and his team at Tasmania's Barilla Bay Oysters have been working closely with Sense-T — a collaboration between the Commonwealth Scientific and Industrial Research Organisation (CSIRO), the University of Tasmania, IBM, and the Tasmanian government — to identify how sensing technology can help Tasmania's agricultural community drive future stability. As part of the project, biological indicators were installed throughout the farm. For the last two years, the indicators have been measuring environmental factors such as the salinity in water, the temperature in and out of water, and wind levels. The data collected from these indicators is now being collated to create a catalogue. Goc, the general manager of Barilla Bay Oysters, said that ideally, he would eventually like to see the data play a role in assisting the farm to figure out the mystery of why oysters fatten up or don't fatten up. "It can be a massive waiting game; sometimes it could just occur, and other times it doesn't. Usually, it's called seasonality. Why that is, the data may be able to shed some light over it," he said. Goc also believes that the technology may potentially be useful in helping the farm to further understand the biology behind oysters. "If we've got elevated temperatures and a lack of wind, then we will know if we're going to get an extended period of warmer weather, then that means we may have to accelerate our growing programs, because the oysters will grow quicker. This is instead of six weeks, where you might have to do it in four weeks to get the same outcome that you did before," he said. "All of these things can then be cross-referenced and help build a catalogue. We hope we can draw parallels between seasons and understand the biology of the animal, how it works, and why it's happy and why it isn't happy." However, Goc said that using technology in oyster farming can never replace a farmer's experience; rather, it would act as an assistant, to help fine tune their existing knowledge. "With oyster farmers, you can never discount the raw experience of dealing with your lease areas and learning the information yourself. 
These concepts are only there to help and assist our experience." For crop growers, the CSIRO has been trialling the Phenonet system, a sensor network that collects information and monitors plants, soil conditions, irrigation levels, and other environmental conditions such as weather patterns to help farmers become better informed about which crops they should plant to get a better harvest. Arkady Zaslavsky, CSIRO digital productivity senior principal research scientist, said during a Gartner presentation in February that farmers are after information that will let them know what they need to do. "Not only from experience, but through the use of science, so they need to know the weather forecast, when to irrigate, how much fertilisation to put in, and so on." He highlighted, though, that only a small percentage of the information collected from the fields is considered valuable. "We collect the data from the field every five minutes, where there is a sensor reading of about 20 bytes. When you multiply that by the amount of sensors in the amount of plots, where there is potentially over 1 million plots, we are dealing with petabytes of agricultural data just coming from digital agricultural fields. "Now, if we look at this data in terms of what is useful for processing, it turns out that 99.95 percent of that data is useless. Only 0.05 percent of the data is called 'golden' data points, which we use for processing and visualising," he said. At the same time, researchers from the Queensland University of Technology (QUT) are now looking at the possibility of using robots to help crop farmers improve their productivity. QUT has designed a prototype AgBot II equipped with cameras, sensors, and software that can navigate, detect, and classify weeds and manage them either chemically or mechanically. It has also been designed to apply fertiliser for site-specific crop management. Trials of the technology are expected to commence in June 2015. QUT robotics professor Tristan Perez said there is enormous potential to give farmers access to data that will assist them in management decisions, particularly given that weed and pest management in crops is a serious problem. He added that the AgBots could potentially replace large, expensive tractors, and work 24 hours a day. "There is enormous potential for AgBots to be combined with sensor networks and drones to provide a farmer with large amounts of data, which ... can be combined with mathematical models and novel statistical techniques (big data analytics) to extract key information for management decisions — not only on when to apply herbicides, pesticides, and fertilisers, but how much to use," he said. Meanwhile, cows are being tagged with collars installed with GPS tracking to enable farmers to locate their herds, identify where they go and where they feed, and examine the health of each individual animal. The CSIRO is currently trialling the technology in its Smart Farm in New England, New South Wales, and Smart Homestead in Townsville, Queensland. Zaslavsky said one of the biggest challenges that the data collected from the cows — roughly 200MB of data per cow each year — helps farmers address is the early detection of when an animal becomes sick. This enables farmers to separate the individual animal from the herd and maintain the health of the others. 
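Zaslavsky's figures (a roughly 20-byte reading every five minutes, scaled across sensors and around a million plots) can be sanity-checked with the back-of-the-envelope Python sketch below; the number of sensors per plot is an assumption introduced purely for illustration and is not stated in the article.

    # Back-of-the-envelope estimate of raw field-sensor data volume using the
    # figures quoted above. SENSORS_PER_PLOT is an illustrative assumption only.
    BYTES_PER_READING = 20
    READINGS_PER_DAY = 24 * 60 // 5     # one reading every five minutes
    PLOTS = 1_000_000
    SENSORS_PER_PLOT = 50               # assumed for illustration

    bytes_per_day = BYTES_PER_READING * READINGS_PER_DAY * SENSORS_PER_PLOT * PLOTS
    bytes_per_year = bytes_per_day * 365
    print(f"~{bytes_per_day / 1e12:.2f} TB per day, ~{bytes_per_year / 1e15:.2f} PB per year of raw readings")
    # Only the small fraction of 'golden' data points is kept for processing.

Even with modest per-plot sensor counts the raw stream accumulates to terabytes and, over multiple seasons and denser deployments, toward the petabyte scale Zaslavsky describes, which is why so little of it is retained.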
Raja Jurdak, CSIRO autonomous systems principal research scientist, added that the vision the CSIRO has for the Smart Farm is for all the information collected from the cows to be fed back to farmers on a single dashboard, creating a support system to better manage farms. For example, it could help farmers decide when they should bring their animals in, when to irrigate, and when there are abnormalities in an animal's behaviour. "[Farmers] would often have infrequent access to information, and, if they do, it's often through manual inspection by having the animal on a scale. Typically, it's very labour intensive to get the animals there," he said. "With the new approach, you'd have a structure and an automated scale with data connectivity, and that would facilitate the process a lot more." Jurdak added that the organisation sees the potential of using the collars for sheep farming, too, as farmers are looking to trace where each animal goes, where it grazes, the quality of its milk, and the state of its wool. "We've tested some sensors on some pregnant sheep, because when a sheep is pregnant, there is a certain pattern of movement that characterises that they are pregnant. Our sensors can detect that movement, so we can closely monitor the sheep at that time to make sure that when it gives birth, it's a healthy lamb," he said.


News Article | September 14, 2016
Site: boingboing.net

“A NASA airborne mission designed to transform our understanding of Earth's valuable and ecologically sensitive coral reefs has set up shop in Australia for a two-month investigation of the Great Barrier Reef, the world's largest reef ecosystem,” reports NASA's Jet Propulsion Laboratory today. Below, a NASA/JPL photo of the Gulfstream III carrying NASA's PRISM instrument being readied for science flights from Cairns, Australia. At a media briefing today at Cairns Airport in North Queensland, Australia, scientists from NASA's COral Reef Airborne Laboratory (CORAL) mission and their Australian collaborators discussed the mission's objectives and the new insights they expect to glean into the present condition of the Great Barrier Reef and the function of reef systems worldwide. "CORAL offers the clearest, most extensive picture to date of the condition of a large portion of the world's coral reefs," said CORAL Principal Investigator Eric Hochberg of the Bermuda Institute of Ocean Sciences (BIOS), Ferry Reach, St. George's, Bermuda, prior to the briefing. "This new understanding of reef condition and function will allow scientists to better predict the future of this global ecosystem and provide policymakers with better information for decisions regarding resource management." CORAL's three-year mission combines aerial surveys using state-of-the-art airborne imaging spectrometer technology developed by NASA's Jet Propulsion Laboratory, Pasadena, California, with in-water validation activities. The mission will provide critical data and new models for analyzing reef ecosystems from a new perspective. CORAL will generate a uniform data set for a large sample of reefs across the Pacific Ocean. Scientists can use these data to search for trends between coral reef condition and the natural and human-produced biological and environmental factors that affect reefs. Over the next year, CORAL will survey portions of the Great Barrier Reef, along with reef systems in the main Hawaiian Islands, the Mariana Islands and Palau. In Australia, CORAL will survey six discrete sections across the length of the Great Barrier Reef, from the Capricorn-Bunker Group in the south to Torres Strait in the north. Two locations on the reef -- one north (Lizard Island Research Station) and one south (Heron Island Research Station) -- will serve as bases for in-water validation activities. Scientists from Australia's Commonwealth Scientific and Industrial Research Organisation (CSIRO) and the University of Queensland in Brisbane are collaborating with NASA and BIOS to conduct additional complementary in-water validation activities. Located in the Coral Sea off Queensland, the Great Barrier Reef encompasses more than 2,900 individual reefs and 900 islands. It is more than 1,400 miles (2,300 kilometers) long and covers an area of about 133,000 square miles (344,400 square kilometers). The largest single structure made by living organisms on Earth, the reef teems with biodiversity, including about 400 species of coral. It attracts about 2 million visitors a year; in turn, tourism and fishing generate billions annually and employ tens of thousands of people. However, the reef faces environmental pressures from various human and climate change impacts. 
"The Great Barrier Reef is Australia's national treasure, so having a broader understanding of its condition and what's threatening it will help us better understand how we can protect it," said Tim Malthus, research leader of CSIRO's Coastal Monitoring, Modeling and Informatics Group in Canberra, Australia. "Along with surveying several large sections of the reef, CORAL will also survey the health of corals in the Torres Strait, a complex high-tide area that has been historically less studied. It is also opportunistic for us to see if the reef is recovering after the recent bleaching event." Stuart Phinn, professor of geography at the University of Queensland (UQ), said CORAL will provide Australian coral reef science and management with unique new maps and mapping approaches. These will expand ongoing efforts to map and understand Great Barrier Reef dynamics. "Being able to support and collaborate on NASA's CORAL project will enable groups like ours to advance our capabilities and transfer them to Australian science and management agencies," Phinn said. "Part of this includes building a process for mapping the entire reef. UQ and CORAL will exchange field data, knowledge and experience to cross-validate mapping and monitoring approaches." An Urgent Need for Better Data Around the world, concerns among scientists, resource managers and the public that coral reef ecosystems are degrading at alarming rates due to human-induced factors and global change have motivated increased assessment and monitoring efforts. The urgency of the problem has forced estimates of global reef status to be synthesized from a variety of local surveys with disparate aims, methods and quality. The problem with current assessments of reef degradation, said Hochberg, is that the data supporting these predictions are not uniform and surprisingly sparse. "Virtually all reef assessments to date rely on in-water survey techniques that are laborious, expensive and limited in spatial scope," he said. "Very little of Earth's reef area has been directly surveyed. More importantly, there are no existing models that quantitatively relate reef conditions to the full range of biological and environmental factors that affect them -- models that can help scientists better understand how coral reefs will respond to expected environmental changes. CORAL addresses an urgent need in the face of ongoing worldwide reef degradation, and also serves as a pathfinder for a future satellite mission to globally survey the world's reefs." Natural, balanced coral reefs comprise mosaics of coral, algae and sand on the seafloor that, together, drive the structure and function of reef ecosystems. When corals die, algae rapidly take over their skeletons. A non-stressed, healthy reef will usually increase coral coverage as it recovers from disturbance. But when a stressed reef is disturbed, the carbonate structure of its coral erodes, and the reef ultimately becomes a flat-bottom community dominated by algae, shifting rubble and sand, with little to no coral recovery. Such ecosystem phase shifts, as they are called, represent a radical change in a reef's character, marked by a decline in the diversity of reef flora and fauna. CORAL will generate scientific data products describing coral reef condition, measuring three key components of reef health for which we currently have limited data: composition, primary productivity and calcification. 
Primary productivity is a measure of how much energy is available to drive biological activity in a reef system. Calcification measures the net gain in carbonates, which determine a reef's long-term growth. To accomplish its science objectives, CORAL will use JPL's Portable Remote Imaging Spectrometer (PRISM). PRISM will literally peer through the ocean's surface to generate high-resolution images of reflected light in the specific regions of the electromagnetic spectrum important to coral reef scientists. Mounted in the belly of a modified Tempus Solutions Gulfstream IV aircraft, PRISM will survey reefs from an altitude of 28,000 feet (8,500 meters) to generate calibrated scientific data products. "PRISM builds on an extensive legacy of JPL spectrometers that have successfully operated for NASA and non-NASA missions," said Michelle Gierach, NASA CORAL project scientist at JPL. "It provides the coral reef science community with high-quality oceanographic imagery at the accuracy, range, resolution, signal-to-noise ratio, sensitivity and uniformity needed to answer key questions about coral reef condition. PRISM data will be analyzed against data for 10 key biological and environmental factors affecting coral reef ecosystems, acquired from pre-existing data sources." NASA collects data from space, air, land and sea to increase our understanding of our home planet, improve lives and safeguard our future. NASA develops new ways to observe and study Earth's interconnected natural systems with long-term data records. The agency freely shares this unique knowledge and works with institutions around the world to gain new insights into how our planet is changing.


News Article | December 8, 2016
Site: cleantechnica.com

Australia’s top climate scientists have come out in support of their American counterparts, in response to news that the incoming Trump Administration will scrap climate research at the country’s top research facility, NASA. Trump’s senior advisor on NASA, Bob Walker, announced plans to strip NASA’s Earth science division of funding on Wednesday, in a crackdown on what his team refers to as “politicised science”. The policy – and the language used to frame it – would be all too familiar to Australian climate scientists, who faced a similar attack on funding and staff of the world-leading CSIRO climate department, and the dismantling of the Climate Commission. In defense of the CSIRO cuts, the Organisation’s ex-venture capitalist CEO Larry Marshall said the national climate change discussion was “more like religion than science.” Here’s what Australia’s scientists are saying about Trump and NASA… “Just as we have seen in Australia the attack on CSIRO climate science under the Coalition government, we now see the incoming Trump administration attacking NASA,” said Professor Ian Lowe, Emeritus Professor of Science, Technology and Society at Griffith University and a former President of the Australian Conservation Foundation. “They obviously hope that pressure for action will be eased if the science is muffled. “But with temperatures in the Arctic this week a startling 20 degrees above normal, no amount of waffle can disguise the need for urgent action to decarbonise our energy supply and immediately withdraw support for new coal mines,” Prof Lowe said. “Why a world leader in Earth observation should do this is beyond rational explanation,” said David Bowman, a “fire scientist” and Professor of Environmental Change Biology at The University of Tasmania. “Earth observation is a non-negotiable requirement for effective, sustainable fire management and it will be provided by other sources if the US proceeds with this path, such as Europe, Japan and China,” Prof Bowman said. “So, effectively the US would be ceding intellectual ‘real estate’ to other nations that could quickly become dominant providers of essential information on fire activity.” Dr Megan Saunders, a Research Fellow in the School of Geography Planning and Environmental Management & Centre for Biodiversity and Conservation Science at The University of Queensland, said scrapping funding to climate research in NASA would be devastating. “Climate change is already causing significant disruptions to the earth system on which humanity relies, and urgent action on climate change is required around the globe. Cutting funding to NASA compromises our ability to cope with climate change and sends a message that climate change is not being taken seriously,” Dr Saunders said. “In many instances symptoms of climate change are occurring faster than predicted by models. For instance, NASA’s temperature records have shown that September 2016 was the warmest in 136 years of modern record keeping. NASA’s research on sea-level rise demonstrated that sea-level rise in the 21st century was greater than previously understood. NASA research in West Antarctica identified the fastest rates of glacier retreat ever observed.” Dr Liz Hanna, fellow of the National Centre for Epidemiology & Population Health at the Australian National University, and National Convenor of the Climate Change Adaptation Research Network for Human Health, said that shutting down the science would not stop climate change. 
“All it will do is render people, communities and societies unprepared and at even greater risk. …If Trump does not care about people’s lives, perhaps he might consider the drop in productivity that inevitably tracks temperature increases,” she said. “My advice to president-elect Trump is to look beyond his advisor Bob Walker’s comments and see exactly the important work done by the NASA Earth science division,” said Dr Helen McGregor, an ARC Future Fellow in the School of Earth Sciences and Environmental Sciences at the University of Wollongong. “This is not ‘politically correct environmental monitoring’ as Walker asserts but is essential data to ensure society’s health and wellbeing. “As for climate change science, the division’s reports on global temperatures are solely based on robust data. What’s being politicised here is not the science but the story that the science tells: that the planet is warming. Let’s not shoot the messenger,” Dr McGregor said. “Will Mr Trump be taking his electorate with him once he’s finished with Earth?” asked Dr Paul Read, a Research Fellow in Natural Disasters at the University of Melbourne’s Sustainable Society Institute. “Mr Trump is about 10 years behind the public understanding of climate science, much less the scientific consensus. As the climate hits home here on Earth, his own support base could turn on him like a snake with whiplash.”


Halmos E.P.,Monash University | Christophersen C.T.,CSIRO | Bird A.R.,CSIRO | Shepherd S.J.,Monash University | And 2 more authors.
Gut | Year: 2014

Objective: A low FODMAP (Fermentable Oligosaccharides, Disaccharides, Monosaccharides And Polyols) diet reduces symptoms of IBS, but reduction of potential prebiotic and fermentative effects might adversely affect the colonic microenvironment. The effects of a low FODMAP diet with a typical Australian diet on biomarkers of colonic health were compared in a single-blinded, randomised, cross-over trial. Design: Twenty-seven IBS and six healthy subjects were randomly allocated to one of two 21-day provided diets, differing only in FODMAP content (mean (95% CI) low 3.05 (1.86 to 4.25) g/day vs Australian 23.7 (16.9 to 30.6) g/day), and then crossed over to the other diet with a ≥21-day washout period. Faeces passed over a 5-day run-in on their habitual diet and from day 17 to day 21 of the interventional diets were pooled, and pH, short-chain fatty acid concentrations and bacterial abundance and diversity were assessed. Results: Faecal indices were similar in IBS and healthy subjects during habitual diets. The low FODMAP diet was associated with higher faecal pH (7.37 (7.23 to 7.51) vs 7.16 (7.02 to 7.30); p=0.001), similar short-chain fatty acid concentrations, greater microbial diversity and reduced total bacterial abundance (9.63 (9.53 to 9.73) vs 9.83 (9.72 to 9.93) log10 copies/g; p<0.001) compared with the Australian diet. To indicate direction of change, in comparison with the habitual diet, the low FODMAP diet reduced total bacterial abundance, and the typical Australian diet increased relative abundance for butyrate-producing Clostridium cluster XIVa (median ratio 6.62; p<0.001) and mucus-associated Akkermansia muciniphila (19.3; p<0.001), and reduced Ruminococcus torques. Conclusions: Diets differing in FODMAP content have marked effects on gut microbiota composition. The implications of long-term reduction of intake of FODMAPs require elucidation.


Chang L.Y.,Monash University | Barnard A.S.,CSIRO | Gontard L.C.,Technical University of Denmark | Dunin-Borkowski R.E.,Technical University of Denmark
Nano Letters | Year: 2010

Accurate understanding of the structure of active sites is fundamentally important in predicting catalytic properties of heterogeneous nanocatalysts. We present an accurate determination of both experimental and theoretical atomic structures of surface monatomic steps on industrial platinum nanoparticles. This comparison reveals that the edges of nanoparticles can significantly alter the atomic positions of monatomic steps in their proximity, which can lead to substantial deviations in the catalytic properties compared with the extended surfaces. © 2010 American Chemical Society.


Barnard A.S.,CSIRO | Chang L.Y.,Monash University
ACS Catalysis | Year: 2011

The development of the next generation of nanosized heterogeneous catalysts requires precise control of the size, shape, and structure of individual components in a variety of chemical environments. Recent reports show that the density of catalytically active defects on Pt nanoparticles, such as edges, corners, steps, and kinks, is intrinsically linked to performance; these defects may be introduced postsynthesis. To optimize the synthesis of nanoparticles decorated by these defects and to understand the structural stability of the final product, multiscale thermodynamic modeling has been used to predict the size and temperature dependence of these steps and to show how this directly relates to catalytic reactivity. The results show that relatively modest annealing can promote the formation of surface steps and kinks and can more than double the reactivity of particles at industrially relevant sizes. © 2011 American Chemical Society.


Barrow S.J.,University of Melbourne | Funston A.M.,Monash University | Gomez D.E.,University of Melbourne | Davis T.J.,CSIRO | Mulvaney P.,University of Melbourne
Nano Letters | Year: 2011

We present experimental data on the light scattering properties of linear chains of gold nanoparticles with up to six nanoparticles and an interparticle spacing of 1 nm. A red shift of the surface plasmon resonance with increasing chain length is observed. An exponential model applied to the experimental data allows determination of an asymptotic maximum resonance at a chain length of 10-12 particles. The optical data are compared with analytical and numerical calculation methods (EEM and BEM). © 2011 American Chemical Society.
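The "exponential model" referred to in this abstract can be illustrated with a simple saturating-exponential fit; the functional form, the synthetic data points and the SciPy-based fitting below are illustrative assumptions, not the authors' measured values or code.

    # Illustrative fit of a saturating red shift versus chain length n:
    # lam(n) = lam_inf - A * exp(-n / tau). The data points below are synthetic.
    import numpy as np
    from scipy.optimize import curve_fit

    def resonance(n, lam_inf, amplitude, tau):
        """Resonance wavelength approaches lam_inf as the chain grows."""
        return lam_inf - amplitude * np.exp(-n / tau)

    n_particles = np.array([1, 2, 3, 4, 5, 6])                           # chain lengths
    wavelengths = np.array([530.0, 585.0, 620.0, 645.0, 660.0, 670.0])   # nm, synthetic

    (lam_inf, amplitude, tau), _ = curve_fit(resonance, n_particles, wavelengths, p0=(700.0, 250.0, 2.0))
    print(f"asymptotic resonance ~{lam_inf:.0f} nm, ~95% reached by chains of ~{3 * tau:.0f} particles")

With a fit of this kind, the chain length at which the red shift effectively saturates (reported in the paper as 10-12 particles) follows directly from the fitted decay constant.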


Louie R.H.Y.,University of Sydney | McKay M.R.,Hong Kong University of Science and Technology | Collings I.B.,CSIRO
IEEE Transactions on Information Theory | Year: 2011

This paper investigates the performance of open-loop multi-antenna point-to-point links in ad hoc networks with slotted ALOHA medium access control (MAC). We consider spatial multiplexing transmission with linear maximum ratio combining and zero forcing receivers, as well as orthogonal space time block coded transmission. New closed-form expressions are derived for the outage probability, throughput and transmission capacity. Our results demonstrate that both the best performing scheme and the optimum number of transmit antennas depend on different network parameters, such as the node intensity and the signal-to-interference-and-noise ratio operating value. We then compare the performance to a network consisting of single-antenna devices and an idealized fully centrally coordinated MAC. These results show that multi-antenna schemes with a simple decentralized slotted ALOHA MAC can outperform even idealized single-antenna networks in various practical scenarios. © 2006 IEEE.
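The closed-form results themselves are not reproduced here, but the quantity they describe can be illustrated with a Monte Carlo sketch: the outage probability of a single-antenna link surrounded by Poisson-distributed slotted-ALOHA interferers under Rayleigh fading. All parameter values are assumptions for illustration, and the sketch omits the multi-antenna (MRC, ZF and OSTBC) receivers analysed in the paper.

    # Monte Carlo estimate of outage probability for a point-to-point link in a
    # slotted ALOHA ad hoc network: interferers form a thinned Poisson point
    # process, all links see unit-mean Rayleigh (exponential power) fading, and
    # path loss follows r^(-alpha). Interference-limited (receiver noise ignored).
    import numpy as np

    rng = np.random.default_rng(0)

    def outage_probability(node_intensity, aloha_prob, link_dist, alpha,
                           sir_threshold, region_radius=200.0, trials=20_000):
        area = np.pi * region_radius ** 2
        outages = 0
        for _ in range(trials):
            n_int = rng.poisson(node_intensity * aloha_prob * area)
            radii = region_radius * np.sqrt(rng.random(n_int))   # uniform in a disc
            interference = np.sum(rng.exponential(1.0, n_int) * radii ** (-alpha))
            signal = rng.exponential(1.0) * link_dist ** (-alpha)
            sir = signal / interference if interference > 0 else np.inf
            outages += sir < sir_threshold
        return outages / trials

    # Assumed parameters: 1e-3 nodes/m^2, access probability 0.1, 10 m link,
    # path-loss exponent 4, SIR threshold of 3 dB.
    print(outage_probability(1e-3, 0.1, 10.0, 4.0, 10 ** 0.3))

Varying the node intensity or access probability in such a simulation shows the same trade-offs the paper captures analytically between contention, interference and link reliability.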


Von Caemmerer S.,Australian National University | Quick W.P.,International Rice Research Institute | Quick W.P.,University of Sheffield | Furbank R.T.,CSIRO
Science | Year: 2012

Another "green revolution" is needed for crop yields to meet demands for food. The international C4 Rice Consortium is working toward introducing a higher-capacity photosynthetic mechanism - the C 4 pathway - into rice to increase yield. The goal is to identify the genes necessary to install C4 photosynthesis in rice through different approaches, including genomic and transcriptional sequence comparisons and mutant screening.


Zhu Y.M.,Monash University | Morton A.J.,CSIRO | Nie J.F.,Monash University
Acta Materialia | Year: 2010

The 18R and 14H long-period stacking ordered structures formed in Mg-Y-Zn alloys are examined systematically using electron diffraction and high-angle annular dark-field scanning transmission electron microscopy. In contrast to that reported in previous studies, the 18R structure is demonstrated to have an ordered base-centred monoclinic lattice, with Y and Zn atoms having an ordered arrangement in the closely packed planes. Furthermore, the composition of 18R is suggested to be Mg10Y1Zn1, instead of the Mg12Y1Zn1 composition that is commonly accepted. The 14H structure is also ordered. It has a hexagonal unit cell; the ordered distribution of Y and Zn atoms in the unit cell is similar to that in the 18R and its composition is Mg12Y1Zn1. The 18R unit cell has three ABCA-type building blocks arranged in the same shear direction, while the 14H unit cell has two ABCA-type building blocks arranged in opposite shear directions. © 2010 Acta Materialia Inc.


Zhou X.,CSIRO | Chen L.,Hong Kong University of Science and Technology
VLDB Journal | Year: 2014

In recent years, microblogs have become an important source for reporting real-world events. A real-world occurrence reported in microblogs is also called a social event. Social events may hold critical materials that describe the situations during a crisis. In real applications, such as crisis management and decision making, monitoring the critical events over social streams will enable watch officers to analyze a whole situation that is a composite event, and make the right decision based on the detailed contexts such as what is happening, where an event is happening, and who are involved. Although there has been significant research effort on detecting a target event in social networks based on a single source, in crisis, we often want to analyze the composite events contributed by different social users. So far, the problem of integrating ambiguous views from different users is not well investigated. To address this issue, we propose a novel framework to detect composite social events over streams, which fully exploits the information of social data over multiple dimensions. Specifically, we first propose a graphical model called location-time constrained topic (LTT) to capture the content, time, and location of social messages. Using LTT, a social message is represented as a probability distribution over a set of topics by inference, and the similarity between two messages is measured by the distance between their distributions. Then, the events are identified by conducting efficient similarity joins over social media streams. To accelerate the similarity join, we also propose a variable dimensional extendible hash over social streams. We have conducted extensive experiments to prove the high effectiveness and efficiency of the proposed approach. © 2013 Springer-Verlag Berlin Heidelberg.
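A minimal sketch of the similarity step described above: each message is represented as a probability distribution over topics, and two messages are compared by a distance between their distributions. Jensen-Shannon divergence is used here as one common choice of distance; the LTT model's actual inference and distance measure may differ, and the topic vectors are invented for illustration.

    # Compare two social messages represented as topic distributions, as in the
    # similarity-join step described above. Jensen-Shannon divergence is one
    # common, symmetric, bounded choice; the paper's measure may differ.
    import numpy as np

    def jensen_shannon(p, q, eps=1e-12):
        p = np.asarray(p, dtype=float) + eps
        q = np.asarray(q, dtype=float) + eps
        p, q = p / p.sum(), q / q.sum()
        m = 0.5 * (p + q)
        kl = lambda a, b: float(np.sum(a * np.log(a / b)))
        return 0.5 * kl(p, m) + 0.5 * kl(q, m)

    # Hypothetical messages over five topics (e.g. flood, rescue, road, power, other).
    msg_a = [0.70, 0.20, 0.05, 0.03, 0.02]
    msg_b = [0.65, 0.25, 0.05, 0.03, 0.02]   # likely the same event as msg_a
    msg_c = [0.05, 0.05, 0.10, 0.40, 0.40]   # probably an unrelated event
    print(jensen_shannon(msg_a, msg_b), jensen_shannon(msg_a, msg_c))

Messages whose distributions fall within a small distance of each other would be grouped by the similarity join, while distant pairs such as msg_a and msg_c would be treated as separate events.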


Trautwein M.D.,North Carolina State University | Wiegmann B.M.,North Carolina State University | Beutel R.,Institute For Spezielle Zoologie Und Evolutionsbiologie Mit Phyletischem Museum | Kjer K.M.,Rutgers University | Yeates D.K.,CSIRO
Annual Review of Entomology | Year: 2012

Most species on Earth are insects and thus, understanding their evolutionary relationships is key to understanding the evolution of life. Insect relationships are increasingly well supported, due largely to technological advances in molecular sequencing and phylogenetic computational analysis. In this postgenomic era, insect systematics will be furthered best by integrative methods aimed at hypothesis corroboration from molecular, morphological, and paleontological evidence. This review of the current consensus of insect relationships provides a foundation for comparative study and offers a framework to evaluate incoming genomic evidence. Notable recent phylogenetic successes include the resolution of Holometabola, including the identification of the enigmatic Strepsiptera as a beetle relative and the early divergence of Hymenoptera; the recognition of hexapods as a crustacean lineage within Pancrustacea; and the elucidation of Dictyoptera orders, with termites placed as social cockroaches. Regions of the tree that require further investigation include the earliest winged insects (Palaeoptera) and Polyneoptera (orthopteroid lineages). © 2012 by Annual Reviews. All rights reserved.


Patent
Csiro and Monash University | Date: 2014-09-29

The present invention relates to a method for isolating proteins from a solution containing the proteins. The invention also relates to a method for the chromatographic separation of proteins. The present invention also relates to crosslinked hydroxylic polymer particles functionalized with temperature-responsive copolymer, and to methods of preparing such particles.


Grant
Agency: European Commission | Branch: H2020 | Program: RIA | Phase: ICT-30-2015 | Award Amount: 9.43M | Year: 2016

The Internet of Things (IoT) brings opportunities for creating new services and products, reducing costs for societies, increasing the service level for the citizens in a number of areas, and changing how services are sold and consumed. Despite these opportunities, current information system architectures create obstacles that must be addressed for leveraging the full potential of IoT. One of the most critical obstacles is the vertical silos that shape today's IoT because they constitute a serious impediment to the creation of cross-domain, cross-platform and cross-organisational applications and services. Those silos also hamper developers from producing new added value across multiple platforms due to the lack of interoperability and openness. bIoTope provides the necessary Standardized Open APIs for enabling horizontal interoperability between silos. Such horizontal interoperability makes it possible to develop Systems of Systems where cross-domain information from platforms, devices and other information sources can be accessed when and as needed. bIoTope-enabled Systems can seamlessly exploit all available information, which makes them smart in the sense that they can take or propose the most appropriate actions depending on the current User's or Object's Context/Situation, and even learn from experience. bIoTope capabilities lay the foundation for open innovation ecosystems where companies can innovate both by the creation of new software components for IoT ecosystems, as well as create new Platforms for Connected Smart Objects with minimal investment. Large-scale pilots implemented in smart cities will provide both social, technical and business proofs-of-concept for such IoT ecosystems. This is feasible because the bIoTope consortium combines unique IoT experience, commercial solution providers and end-users, thus ensuring the high quality and efficiency of the results and implementations.


Grant
Agency: European Commission | Branch: FP7 | Program: CSA-SA | Phase: INCO-2009-5.1 | Award Amount: 597.13K | Year: 2009

The overall objective of the proposed project is to increase S&T cooperation between the EU and Australia by identifying access opportunities for European researchers in Australian research capabilities and programmes. The work plan of AUS-ACCESS4EU puts into practice the objective of the FP7 Capacities Work Programme "Supporting the EU access to third countries programmes" (FP-INCO-2009-5): to help develop the reciprocity aspects of the S&T agreement by identifying programmes open to EU researchers and promoting their participation. It will enhance the information collection as regards programmes open for EU researchers as well as rules and obstacles for participation. The close and continuous dialogue with Australian programme owners and the wide outreach of the project results to European stakeholders, policy makers and European scientists are two of the major success factors of the project. The activities are grouped into 4 work packages (WP). WP 1 Inventory and Monitoring aims to map the opportunities for European researchers and research institutes to access Australian programmes. The objectives of WP 2 Awareness raising and profile building are to raise the awareness of Australian institutions and programme owners and to promote the principle of reciprocity of research programmes. WP 3 Information dissemination and outreach aims to increase the European research community's awareness of opportunities to access Australian support and capability in order to stimulate, encourage and support the participation of European organisations in Australian programmes. WP 4 Project coordination and management will ensure that the project is managed effectively.


Grant
Agency: European Commission | Branch: FP7 | Program: CP-FP-SICA | Phase: ENV.2011.3.1.1-2 | Award Amount: 4.78M | Year: 2011

Saph Pani addresses the improvement of natural water treatment systems such as river bank filtration (RBF), managed aquifer recharge (MAR) and wetlands in India, building on a combination of local and international expertise. The project aims at enhancing water resources and water supply particularly in water-stressed urban and peri-urban areas in different parts of the sub-continent. The objective is to strengthen the scientific understanding of the performance-determining processes occurring in the root, soil and aquifer zones of the relevant systems, considering the removal and fate of important water quality parameters such as pathogenic microorganisms and respective indicators, organic substances and metals. Moreover, the hydrologic characteristics (infiltration and storage capacity) and the eco-system function will be investigated, along with their integral importance in the local or regional water resources management concept (e.g. by providing underground buffering of seasonal variations in supply and demand). The socio-economic value of the enhanced utilisation of the attenuation and storage capacity will be evaluated taking into account long-term sustainability issues and a comprehensive risk management. The project focuses on a set of case study areas in India covering various regional, climatic, and hydrogeological conditions as well as different treatment technologies. The site investigations will include hydrological and geochemical characterisation and, depending on the degree of site development, water quality monitoring or pre-feasibility studies for new treatment schemes. Besides the actual natural treatment component, the investigation may encompass also appropriate pre- and post-treatment steps to potabilise the water or avoid clogging of the sub-surface structures. The experimental and conceptual studies will be complemented by modelling activities which help to support the transferability of results.


Grant
Agency: European Commission | Branch: FP7 | Program: CP-FP | Phase: KBBE-2007-2-5-01 | Award Amount: 3.42M | Year: 2008

The function of post market monitoring is to further assess possible nutritional and health effects of authorized GM foods on a mixed population of human and animal consumers. Currently, however, little is known about exposure levels, whether adverse effects are predictable, and the occurrence of any unexpected effects following market release of GM foods. Our objective is to identify a panel of anatomic, physiologic, biochemical, molecular, allergenic, and immunogenic biomarkers, which could be used to predict harmful GMO effects after product authorization. Using a prototype allergenic α-amylase inhibitor GM-pea, we will extrapolate multiple biomarker databases that correlate GMO effects during gestation, growth, and maturation in various animal models with humans. We will establish biomarkers in GMO-fed pigs, salmon, rats, and mice, in addition to indirect effects of GM feeding in the food chain and GMO influence during an underlying allergic disorder. These experiments will yield data on general health with a specific focus on allergy and immunology. To extrapolate our data to humans, we will establish a comparative database with antigenic epitopes and antibody cross-reactivity in legume-allergic patients and human-mouse chimera in which a human immune system is transplanted into a mouse lacking an immune system. Taken together, these results will yield databases from multiple biological systems that will be used in a mathematical modeling strategy for biomarker discovery and validation. Our consortium consists of partners from Austria, Turkey, Hungary, Ireland, Norway, and Australia and constitutes a diverse interdisciplinary team from veterinary medicine, nutrition, agriculture, immunology, and medicine that is dedicated to the development and validation of biomarkers to be used for post market monitoring of animals and humans consuming newly authorized GMOs.


Grant
Agency: European Commission | Branch: FP7 | Program: CP | Phase: ICT-2013.1.5 | Award Amount: 8.44M | Year: 2013

The aim of the AU2EU project is to implement and demonstrate in a real-life environment an integrated eAuthentication and eAuthorization framework to enable trusted collaborations and delivery of services across different organizational/governmental jurisdictions. Consequently, the project aims at fostering the adoption of security and privacy-by-design technologies in European and global markets. This objective will be achieved by:
1) designing a joint eAuthentication and eAuthorization framework for cross-domain and jurisdictional collaborations, supporting different identity/attribute providers and organizational policies, and guaranteeing privacy, security and trust;
2) advancing the state-of-the-art by extending the joint eAuthentication and eAuthorization framework with assurance of claims, trust indicators, policy enforcement mechanisms and processing under encryption techniques to address specific security and confidentiality requirements of large distributed infrastructures;
3) implementing the joint eAuthentication and eAuthorization framework as a part of the platform that supports collaborative secure distributed storage, secure data processing and management in the cloud and offline scenarios;
4) deploying the designed framework and platform in two pilots on bio-security incident management and collaborative services in Australia and on eHealth and Ambient Assisted Living in Europe; and
5) validating the practical aspects of the developed platform such as scalability, efficiency, maturity and usability.
The aforementioned activities will contribute to the increased trust, security and privacy, which in turn shall lead to the increased adoption of (cloud-based) critical infrastructures and collaborative delivery of services dealing with sensitive data. AU2EU strategically invests in two pilots deploying the existing research results as well as the novel techniques developed in the project to bridge the gap between research and market adoption.
The project builds on existing schemes and research results, particularly on the results of the ABC4Trust project as well as the Trust in Digital Life (TDL) initiative (www.trustindigitallife.eu), which initiated this project and will support its objectives by executing aligned activities defined in the TDL strategic research agenda. The project brings together a strong collaboration of leading industry (such as Philips, IBM, NEC, Thales), SMEs (such as Bicore) and research organizations of Europe (such as Eindhoven University of Technology) and Australia (such as CSIRO, Edith Cowan University, RMIT University, University of New South Wales & Macquarie University) as well as large voluntary welfare associations (such as the German Red Cross). The consortium is determined to make a sustained long-term impact through commercialization, open source & standardization of an open composable infrastructure for e-services where privacy and interoperability with existing technologies are guaranteed.


Grant
Agency: European Commission | Branch: FP7 | Program: CP | Phase: ENERGY.2013.6.1.1 | Award Amount: 4.10M | Year: 2013

The main objective of the proposed project is to develop a generic UCG-CCS site characterisation workflow, and the accompanying technologies, which would address the dilemma faced by the proponents of reactor zone CO2 storage, and offer technological solutions to source sink mismatch issues that are likely to be faced in many coalfields. This objective will be achieved through integrated research into the field based technology knowledge gaps, such as cavity progression and geomechanics, potential groundwater contamination and subsidence impacts, together with research into process engineering solutions in order to assess the role/impact of site specific factors (coal type, depth/pressure, thickness, roof and floor rock strata, hydrology) and selected reagents on the operability of a given CO2 emission mitigation option in a coalfield. CO2 storage capacity on site for European and international UCG resources will be assessed and CO2 mitigation technologies based on end use of produced synthetic gas will be evaluated. The technology options identified will be evaluated with respect to local and full chain Life Cycle environmental impacts and costs. The project takes a radical and holistic approach to coupled UCG-CCS, and thus the site selection criteria for the coupled process, considering different end-uses of the produced synthetic gas, covering other options beyond power generation, and will evaluate novel approaches to UCG reagent use in order to optimise the whole process. This approach aims at minimising the need for on-site CO2 storage capacity as well as maximising the economic yield of UCG through value added end products, as well as power generation, depending on the local coalfield and geological conditions.


Patent
Csiro and Monash University | Date: 2016-02-08

The present invention provides methods of designing molecularly imprinted polymers (MIPs) which have applications in extracting bioactive compounds from a range of bioprocessing feedstocks and wastes. The present invention is further directed to MIPs designed by the methods of the present invention.


Grant
Agency: GTR | Branch: NERC | Program: | Phase: Research Grant | Award Amount: 931.42K | Year: 2014

Sea levels around the world are currently rising, threatening populations living near the coast with flooding and increased coastal erosion. Evaluating the future threat requires a better understanding of the physical processes responsible for driving changes in the Earth's ice sheets. Recent observations show that in some key locations around the ice sheets' margins, rapid thinning is currently contributing 1.3 mm/yr to global sea level rise, and that that number has risen dramatically in recent years. Most of the attention has been focussed on the Greenland and West Antarctic ice sheets, where the thinning is most widespread and rapid. It is generally assumed that the culprit is a warming of the ocean waters that come into contact with the ice sheet. Increased melting of the floating ice shelves and tidewater glaciers has caused them to thin, forcing the grounding line or calving front to retreat and allowing the inland ice to flow faster towards the coast. Although thinning of the East Antarctic Ice Sheet (EAIS) is currently much less widespread and dramatic than that observed in West Antarctica, a large sector of the EAIS is grounded below sea level and is thus potentially vulnerable to the same process of ice shelf thinning, grounding line retreat and ice stream acceleration. In addition, analogous ocean forcing to that in West Antarctica could influence the marine-based sector of the EAIS. In both regions the Antarctic Circumpolar Current brings warm Circumpolar Deep Water (CDW) close to the continental slope. While CDW may already be influencing Totten Glacier, which now shows the strongest thinning signature over the entire EAIS, other glaciers in the region, most notably Mertz Glacier, may be protected by the formation of dense, cold Shelf Water in local polynyas. However, our knowledge of the oceanography of the continental shelf and of the waters that circulate beneath and interact with the floating ice shelves is presently insufficient to understand what processes are driving the change on Totten Glacier and how vulnerable its near neighbours such as Mertz Glacier might be. Our ability to project the future behaviour of these outlet glacier systems is severely limited as a result. To address this deficiency, this project will make observations of the critical processes that take place beneath the floating ice shelves, to determine how the topography beneath the ice and the oceanographic forcing from beyond the cavity control the rate at which the ice shelves melt. The key tool with which the necessary observations will be made is an Autonomous Underwater Vehicle (Autosub3), configured and run in a manner analogous to that used for an earlier, highly successful campaign in which it completed 500 km of along-track observations beneath the 60-km long floating tongue of Pine Island Glacier in West Antarctica. We will use these data to validate a numerical model of ocean circulation beneath the ice shelves and use the computed melt rates to force a numerical model of ice flow, in order to investigate the response of the glaciers to a range of climate forcing. A detailed understanding of ocean circulation and melting beneath Totten and Mertz glaciers will generate insight into ocean-ice interactions that will be relevant to many other sites in Greenland and Antarctica, and will advance our developing knowledge of ice sheet discharge and its future effect on sea-level rise. 
This work forms part of an intensive observational campaign focused on ocean-ice shelf interactions in East Antarctica. The collaborative, interdisciplinary effort consists of coordinated ocean and glacier studies conducted by groups at Australian, French, UK and US institutions.


Patent
Csiro and Monash University | Date: 2013-07-26

A process for the separation of a first gas species from a gas stream using a metal organic framework that is reversibly switchable between a first conformation that allows the first gas species to be captured in the metal organic framework, and a second conformation that allows the release of the captured first gas species, using light as the switching stimulus. The metal organic framework may comprise a metal and one or more ligands, in which the ligands contain an isomerisable group within the molecular chain that forms a link between adjacent metal atoms in the metal organic framework.


Patent
Monash University and Csiro | Date: 2013-10-28

The present invention generally relates to lithium based energy storage devices. According to the present invention there is provided a lithium energy storage device comprising: at least one positive electrode; at least one negative electrode; and an ionic liquid electrolyte comprising an anion, a cation counterion and lithium mobile ions, wherein the anion comprises a nitrogen, boron, phosphorous, arsenic or carbon anionic group having at least one nitrile group coordinated to the nitrogen, boron, phosphorous, arsenic or carbon atom of the anionic group.


Patent
Monash University and Csiro | Date: 2011-03-18

An organic cation for a battery, including a heteroatom-containing cyclic compound having at least two ring structures formed from rings that share at least one common atom, the cyclic compound having both a formal positive charge of at least +1 and a partial negative charge.


News Article | October 29, 2016
Site: www.techrepublic.com

Despite having the same fitness level as everybody else at school, Gary Barber was never able to run as far as the other kids could because, at the age of four, he was diagnosed with a heart defect. The impact the heart defect had on Barber progressively worsened in his adult life. At one point, Barber, who is now 48, was unable to walk from the lounge room to the bathroom without having to stop halfway down the hallway to take a breath. "Imagine having somebody's hands around your throat, or you've got a really bad flu where there's pressure on your chest and you can't breathe properly. You're taking a quarter of your breath and you're trying to walk," Barber said. Toward the end of October 2015, Barber underwent emergency heart failure surgery after being admitted to Ipswich Hospital in Queensland, Australia. Barber's need for surgery came after he frequently passed out and was told by his doctor the issue was his lungs and being overweight—not his heart. "My surgeon said I should have been on his [operating] table a minimum of four years ago," Barber said. Post-surgery, patients such as Barber are advised to undertake cardiac rehabilitation (CR) to reduce the risk of a second heart attack. As part of CR, patients are required to make regular visits to the hospital. But, according to Simon McBride, co-founder and CTO of Cardihab, the average cardiac rehab completion rate is only 30%. In hopes of increasing the completion rate, McBride introduced Cardihab, a smartphone application currently in pilot phase, designed to help patients recover from heart surgery remotely. Cardihab is a spin-off company from the Commonwealth Scientific and Industrial Research Organisation (CSIRO), and is also a participant of the HCF Catalyst accelerator program. He explained that a key problem behind why people do not complete their CR program is accessibility and convenience. "The way normal cardiac rehab works is it's usually a 6-8 week long program where the person has to go to a clinic once or twice a week and that can really be inconvenient, especially for patients who have returned to work, or for rural remote patients," McBride said. Cardihab has been designed to collect data about a patient including how many steps a patient has taken, and their blood pressure and sugar levels, via Bluetooth-enabled monitors. The information is then uploaded to the cloud and shared with the patient's clinician, who can access it through an online portal. Based on research by the CSIRO, and through initial trials with Queensland Health, Cardihab has been able to reduce clinical hospital visits by 89% and improve cardiac rehab completion rates by 70%. While Barber said Cardihab has raised his personal awareness, it was not something he was initially open to trying. His initial thought was that the program was a "damn waste of time," but after completing the six-week Cardihab program with encouragement from one of the nurses, he said anyone who does not do the program would be a fool. "It made me more aware about what I was doing...I had a machine to be accountable to, I had a set of scales I had to be accountable to, and I had a blood pressure machine I had to be accountable to," Barber said. 
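As a purely hypothetical sketch of the kind of daily reading described above (step count, blood pressure and blood sugar collected from Bluetooth-enabled monitors and shared with a clinician), the Python example below packages one reading as JSON; the field names, units and workflow are illustrative assumptions, not Cardihab's actual data model or API.

    # Hypothetical daily-reading record of the kind the article describes being
    # collected via Bluetooth monitors and uploaded for a clinician to review.
    # Field names and units are illustrative assumptions only.
    from dataclasses import dataclass, asdict
    from datetime import date
    import json

    @dataclass
    class DailyReading:
        patient_id: str
        day: str                        # ISO date string
        steps: int
        systolic_mmhg: int
        diastolic_mmhg: int
        blood_glucose_mmol_per_l: float

    reading = DailyReading(
        patient_id="patient-0042",      # hypothetical identifier
        day=date.today().isoformat(),
        steps=4200,
        systolic_mmhg=128,
        diastolic_mmhg=82,
        blood_glucose_mmol_per_l=5.6,
    )
    payload = json.dumps(asdict(reading))
    print(payload)  # in the app this would be sent on to the clinician-facing portal

A record like this, accumulated day by day, is what would let a clinician review progress remotely instead of requiring a clinic visit for each check.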
There were also conversational check-ups over the phone with the nurses, which would often involve discussions about why he was unable to take as many steps on certain days, Barber said, pointing out that he also suffers from gout, which restricted his movement. McBride said the application gives the opportunity to "empower" patients. "I think it's true to say getting patients more engaged is a big trend and something healthcare systems are trying to do. With [Cardihab], it gives people the ability to engage more with their care, and drive that feedback loop to the clinician and that's still the most important thing: The conversation between the patient and clinician is the heart of the cardiac rehab program; the technology just helps the clinician deliver that program in another way," McBride said. Although it has been a physical recovery, it has equally been an emotional one, Barber said. "When recovery is mentioned, what people don't understand is the emotional trauma. They don't understand the ongoing after-effects," Barber said. While Barber believes there's still a long road ahead to full recovery, the results are already showing. He said it now takes him only 10 minutes to feed the horses on his property, when it used to take 45 minutes.
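The remote-monitoring flow described above (readings captured from Bluetooth-enabled monitors, uploaded to the cloud, then surfaced to the clinician through an online portal) is a common telehealth pattern. The sketch below is a minimal, hypothetical illustration of that pattern in Python: the class, field and endpoint names are assumptions made for this example and are not Cardihab's actual data model or API.

```python
# Minimal sketch of a remote cardiac-rehab monitoring upload.
# All names here (PatientReading, UPLOAD_URL, the fields) are hypothetical,
# chosen only to illustrate the capture -> cloud -> clinician-portal flow
# described in the article; they are not Cardihab's real API.

import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from urllib import request

UPLOAD_URL = "https://example.invalid/api/readings"  # placeholder endpoint


@dataclass
class PatientReading:
    patient_id: str
    steps: int                    # daily step count from an activity monitor
    systolic_mmhg: int            # blood pressure reading
    diastolic_mmhg: int
    blood_glucose_mmol_l: float   # blood sugar reading
    recorded_at: str              # ISO 8601 timestamp


def capture_reading(patient_id: str) -> PatientReading:
    """Stand-in for values a Bluetooth monitor would report."""
    return PatientReading(
        patient_id=patient_id,
        steps=4200,
        systolic_mmhg=128,
        diastolic_mmhg=82,
        blood_glucose_mmol_l=5.6,
        recorded_at=datetime.now(timezone.utc).isoformat(),
    )


def upload(reading: PatientReading) -> int:
    """POST the reading as JSON so a clinician-facing portal can query it later.

    Not called in the demo below, since UPLOAD_URL is only a placeholder.
    """
    body = json.dumps(asdict(reading)).encode("utf-8")
    req = request.Request(
        UPLOAD_URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return resp.status


if __name__ == "__main__":
    # Show what one captured reading would look like before upload.
    print(asdict(capture_reading("patient-001")))
```

In a real deployment the upload would be authenticated and the clinician portal would query the same store; here the script simply prints one sample reading.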


News Article | March 9, 2016
Site: www.nature.com

Timothy Doran's 11-year-old daughter is allergic to eggs. And like about 2% of children worldwide who share the condition, she is unable to receive many routine vaccinations because they are produced using chicken eggs. Doran, a molecular biologist at the Commonwealth Scientific and Industrial Research Organisation (CSIRO) in Geelong, Australia, thinks that he could solve this problem using the powerful gene-editing tool CRISPR–Cas9. Most egg allergies are caused by one of just four proteins in the white, and when Doran's colleagues altered the gene that encodes one of these in bacteria, the resulting protein no longer triggered a reaction in blood serum from people who were known to be allergic to it1. Doran thinks that using CRISPR to edit the gene in chickens could result in hypoallergenic eggs. The group expects to hatch its first generation of chicks with gene modifications later this year as a proof of concept. Doran realizes that it could be some time before regulators would approve gene-edited eggs, and he hopes that his daughter will have grown out of her allergy by then. “If not, I've got someone ready and waiting to try the first egg,” he says. Chickens are just one of a menagerie of animals that could soon have their genomes reimagined. Until now, researchers had the tools to genetically manipulate only a small selection of animals, and the process was often inefficient and laborious. With the arrival of CRISPR, they can alter the genes of a wide range of organisms with relative precision and ease. In the past two years alone, the prospect of gene-edited monkeys, mammoths, mosquitoes and more have made headlines as scientists attempt to put CRISPR to use for applications as varied as agriculture, drug production and bringing back lost species. CRISPR-modified animals are even being marketed for sale as pets. “It's allowed us to consider a whole raft of projects we couldn't before,” says Bruce Whitelaw, an animal biotechnologist at the Roslin Institute in Edinburgh, UK. “The whole community has wholeheartedly moved towards genome editing.” But regulators are still working out how to deal with such creatures, particularly those intended for food or for release into the wild. Concerns abound about safety and ecological impacts. Even the US director of national intelligence has weighed in, saying that the easy access, low cost and speedy development of genome editing could increase the risk that someone will engineer harmful biological agents. Eleonore Pauwels, who studies biotechnology regulation at the Wilson Center in Washington DC, says that the burgeoning use of CRISPR in animals offers an opportunity for researchers and policymakers to engage the public in debate. She hopes that such discussions will help in determining which uses of CRISPR will be most helpful to humans, to other species and to science — and will highlight the limits of the technology. “I think there is a lot of value in humility about how much control we have,” she says. Disease resistance is one of the most popular applications for CRISPR in agriculture, and scientists are tinkering across a wide spectrum of animals. Biotechnology entrepreneur Brian Gillis in San Francisco is hoping that the tool can help to stem the dramatic loss of honeybees around the world, which is being caused by factors such as disease and parasites. Gillis has been studying the genomes of 'hygienic' bees, which obsessively clean their hives and remove sick and infested bee larvae. 
Their colonies are less likely to succumb to mites, fungi and other pathogens than are those of other strains, and Gillis thinks that if he can identify genes associated with the behaviour, he might be able to edit them in other breeds to bolster hive health. But the trait could be difficult to engineer. No hygiene-associated genes have been definitively identified, and the roots of the behaviour may prove complex, says BartJan Fernhout, chairman of Arista Bee Research in Boxmeer, the Netherlands, which studies mite resistance. Moreover, if genes are identified, he says, conventional breeding may be sufficient to confer resistance to new populations, and that might be preferable given the widespread opposition to genetic engineering. Such concerns don't seem to have slowed down others studying disease resistance. Whitelaw's group at the Roslin Institute is one of several using CRISPR and other gene-editing systems to create pigs that are resistant to viral diseases that cost the agricultural industry hundreds of millions of dollars each year. Whitelaw's team is using another gene-editing technique to alter immune genes in domestic pigs to match more closely those of warthogs that are naturally resistant to African swine fever, a major agricultural pest2. And Randall Prather at the University of Missouri in Columbia has created pigs with a mutated protein on the surface of their cells, which should make them impervious to a deadly respiratory virus3. Other researchers are making cattle that are resistant to the trypanosome parasites that are responsible for sleeping sickness. Whitelaw hopes that regulators — and sceptical consumers — will be more enthusiastic about animals that have had their genes edited to improve disease resistance than they have been for traits such as growth promotion because of the potential to reduce suffering. And some governments are considering whether CRISPR-modified animals should be regulated in the same way as other genetically modified organisms, because they do not contain DNA from other species. Doran's quest to modify allergens in chicken eggs requires delicate control. The trick is to finely adjust a genetic sequence in a way that will stop the protein from triggering an immune reaction in people, but still allow it to perform its normal role in embryonic development. CRISPR has made such precise edits possible for the first time. “CRISPR has been the saviour for trying to tackle allergens,” says Mark Tizard, a molecular biologist at CSIRO who works with Doran on chickens. Using the technique in birds still presents problems. Mammals can be induced to produce extra eggs, which can then be removed, edited, fertilized and replaced. But in birds, the fertilized egg binds closely to the yolk and removing it would destroy the embryo. And because eggs are difficult to access while still inside the hen, CRISPR components cannot be directly injected into the egg itself. By the time the egg is laid, development has proceeded too far for gene editing to affect the chick's future generations. To get around this, Tizard and Doran looked to primordial germ cells (PGCs) — immature cells that eventually turn into sperm or eggs. Unlike in many animals, chicken PGCs spend time in the bloodstream during development. Researchers can therefore remove PGCs, edit them in the lab and then return them to the developing bird. The CSIRO team has even developed a method to insert CRISPR components directly into the bloodstream so that they can edit PGCs there4. 
The researchers also plan to produce chickens with components required for CRISPR integrated directly into their genomes — what they call CRISPi chickens. This would make it even easier to edit chicken DNA, which could be a boon for 'farmaceuticals' — drugs created using domesticated animals. Regulators have shown a willingness to consider such drugs. In 2006, the European Union approved a goat that produces an anticlotting protein in its milk. It was subsequently approved by the US Food and Drug Administration, in 2009. And in 2015, both agencies approved a transgenic chicken whose eggs contain a drug for cholesterol diseases. About 4,000 years ago, hunting by humans helped to drive woolly mammoths (Mammuthus primigenius) to extinction. CRISPR pioneer George Church at Harvard Medical School in Boston, Massachusetts, has attracted attention for his ambitious plan to undo the damage by using CRISPR to transform endangered Indian elephants into woolly mammoths — or at least cold-resistant elephants. The goal, he says, would be to release them into a reserve in Siberia, where they would have space to roam. The plan sounds wild — but efforts to make mammals more mammoth-like have been going on for a while. Last year, geneticist Vincent Lynch at the University of Chicago in Illinois showed that cells with the mammoth version of a gene for heat-sensing and hair growth could grow in low temperatures5, and mice with similar versions prefer the colder parts of a temperature-regulated cage6. Church says that he has edited about 14 such genes in elephant embryos. But editing, birthing and then raising mammoth-like elephants is a huge undertaking. Church says that it would be unethical to implant gene-edited embryos into endangered elephants as part of an experiment. So his lab is looking into ways to build an artificial womb; so far, no such device has ever been shown to work. There are some de-extinction projects that could prove less challenging. Ben Novak at the University of California, Santa Cruz, for example, wants to resurrect the passenger pigeon (Ectopistes migratorius), a once-ubiquitous bird that was driven to extinction in the late nineteenth century by overhunting. His group is currently comparing DNA from museum specimens to that of modern pigeons. Using PGC methods similar to Doran's, he plans to edit the modern-pigeon genomes so that the birds more closely resemble their extinct counterparts. Novak says that the technology is not yet advanced enough to modify the hundreds of genes that differ between modern and historic pigeons. Still, he says that CRISPR has given him the best chance yet of realizing his lifelong dream of restoring an extinct species. “I think the project is 100% impossible without CRISPR,” he says. For decades, researchers have explored the idea of genetically modifying mosquitos to prevent the spread of diseases such as dengue or malaria. CRISPR has given them a new way to try. In November, molecular biologist Anthony James of the University of California, Irvine, revealed a line of mosquitoes with a synthetic system called a gene drive that passes a malaria-resistance gene on to the mosquitoes' offspring7. Gene drives ensure that almost all the insects' offspring inherit two copies of the edited gene, allowing it to spread rapidly through a population. Another type of gene drive, published last December8, propagates a gene that sterilizes all female mosquitoes, which could wipe out a population. 
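The rapid spread that gene drives enable can be made concrete with a toy allele-frequency calculation (a simplified sketch under assumed parameters, not the model used in the cited mosquito studies). Under ordinary Mendelian inheritance a heterozygote passes the edited allele to about half of its offspring, so a rare allele stays rare; a homing drive converts the wild-type copy in heterozygotes, so almost every gamete carries the edit and its frequency climbs within a few generations. The starting frequency and conversion efficiency below are assumptions chosen purely for illustration.

```python
# Toy deterministic model of gene-drive spread versus Mendelian inheritance.
# Simplifying assumptions: random mating, no fitness cost, non-overlapping
# generations. Illustrative only; not the model from the cited studies.

def next_frequency(p: float, conversion: float) -> float:
    """Allele frequency of the driven allele in the next generation.

    p          -- current frequency of the driven allele
    conversion -- probability the drive converts the wild-type allele in a
                  heterozygote (0 = ordinary Mendelian inheritance,
                  1 = a perfectly efficient homing drive)
    """
    # Heterozygotes transmit the driven allele with probability (1 + conversion) / 2.
    return p**2 + 2 * p * (1 - p) * (1 + conversion) / 2


def simulate(p0: float, conversion: float, generations: int) -> list[float]:
    freqs = [p0]
    for _ in range(generations):
        freqs.append(next_frequency(freqs[-1], conversion))
    return freqs


if __name__ == "__main__":
    start = 0.01  # the edited allele starts in 1% of the gene pool
    mendelian = simulate(start, conversion=0.0, generations=10)
    drive = simulate(start, conversion=0.95, generations=10)
    for gen, (m, d) in enumerate(zip(mendelian, drive)):
        print(f"gen {gen:2d}  Mendelian {m:.3f}  gene drive {d:.3f}")
```

With these numbers the Mendelian allele frequency stays at 1% while the drive allele approaches fixation in about ten generations, which is the behaviour the gene-drive work described here relies on.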
The outbreak of mosquito-borne Zika virus in Central and South America has increased interest in the technology, and several research labs have begun building gene drives that could eliminate the Zika-carrying species, Aedes aegypti. Many scientists are worried about unintended and unknown ecological consequences of releasing such a mosquito. For this reason, Church and his colleagues have developed 'reverse gene drives' — systems that would propagate through the population to cancel out the original mutations9, 10. But Jason Rasgon, who works on genetically modified insects at Pennsylvania State University in University Park, says that although ecology should always be a consideration, the extent and deadliness of some human diseases such as malaria may outweigh some costs. Mosquitoes are some of the easiest insects to work with, he says, but researchers are looking at numerous other ways to use gene drives, including making ticks that are unable to transmit the bacteria that cause Lyme disease. Last year, researchers identified a set of genes that could be modified to prevent aquatic snails (Biomphalaria glabrata) from transmitting the parasitic disease schistosomiasis11. Last November, after a lengthy review, the US Food and Drug Administration approved the first transgenic animals for human consumption: fast-growing salmon made by AquaBounty Technologies of Maynard, Massachusetts. Some still fear that if the salmon escape, they could breed with wild fish and upset the ecological balance. To address such concerns, fish geneticist Rex Dunham of Auburn University in Alabama has been using CRISPR to inactivate genes for three reproductive hormones — in this case, in catfish, the most intensively farmed fish in the United States. The changes should leave the fish sterile, so any fish that might escape from a farm, whether genetically modified or not, would stand little chance of polluting natural stocks. “If we're able to achieve 100% sterility, there is no way that they can make a genetic impact,” Dunham says. Administering hormones would allow the fish to reproduce for breeding purposes. And Dunham says that similar methods could be used in other fish species. CRISPR could also reduce the need for farmers to cull animals, an expensive and arguably inhumane practice. Biotechnologist Alison van Eenennaam at the University of California, Davis, is using the technique to ensure that beef cattle produce only male or male-like offspring, because females produce less meat and are often culled. She copies a Y-chromosome gene that is important for male sexual development onto the X chromosome in sperm. Offspring produced with the sperm would be either normal, XY males, or XX females with male traits such as more muscle. In the egg industry, male chicks from elite egg-laying chicken breeds have no use, and farmers generally cull them within a day of hatching. Tizard and his colleagues are adding a gene for green fluorescent protein to the chickens' sex chromosomes so that male embryos will glow under ultraviolet light. Egg producers could remove the male eggs before they hatch and potentially use them for vaccine production. There are other ways that CRISPR could make agriculture more humane. Packing cattle into trailers or other small spaces often causes injuries, especially when the animals have long horns. So cattle farmers generally burn, cut or remove them with chemicals — a process that can be painful for the animal and dangerous for the handler. 
There are cattle varieties that do not have horns — a condition called 'polled' — but crossing these breeds with 'elite' meat or dairy breeds reduces the quality of the offspring. Molecular geneticist Scott Fahrenkrug, founder of Recombinetics in Saint Paul, Minnesota, is using gene-editing techniques to transfer the gene that eliminates horns into elite breeds12. The company has produced only two polled calves so far — both male — which are being raised at the University of California, Davis, until they are old enough to breed. Last September, the genomics firm BGI wowed a conference in Shenzhen, China, with micropigs — animals that grow to only around 15 kilograms, about the size of a standard dachshund. BGI had originally intended to make the pigs for research, but has since decided to capitalize on creation of the animals by selling them as pets for US$1,600. The plan is to eventually allow buyers to request customized coat patterns. BGI is also using CRISPR to alter the size, colour and patterns of koi carp. Koi breeding is an ancient tradition in China, and Jian Wang, director of gene-editing platforms at BGI, says that even good breeders will usually produce only a few of the most beautifully coloured and proportioned, 'champion quality' fish out of millions of eggs. CRISPR, she says, will let them precisely control the fish's patterns, and could also be used to make the fish more suitable for home aquariums rather than the large pools where they are usually kept. Wang says that the company will begin selling koi in 2017 or 2018 and plans to eventually add other types of pet fish to its repertoire. Claire Wade, a geneticist at the University of Sydney in Australia, says that CRISPR could be used to enhance dogs. Her group has been cataloguing genetic differences between breeds and hopes to identify areas involved in behaviour and traits such as agility that could potentially be edited13. Sooam Biotech in Seoul, best-known for a service that will clone a deceased pet for $100,000, is also interested in using CRISPR. Sooam researcher David Kim says that the company wants to enhance the capabilities of working dogs — guide dogs or herding dogs, for example. Jeantine Lunshof, a bioethicist who works in Church's lab at Harvard, says that engineering animals just to change their appearance, “just to satisfy our idiosyncratic desires”, borders on frivolous and could harm animal well-being. But she concedes that the practice is not much different from the inbreeding that humans have been performing for centuries to enhance traits in domestic animals and pets. And CRISPR might even help to eliminate some undesirable characteristics: many dog breeds are prone to hip problems, for example. “If you could use genome editing to reverse the very bad effects we have achieved by this selective inbreeding over decades, then that would be good.” Ferrets have long been a useful model for influenza research because the virus replicates in their respiratory tracts and they sometimes sneeze when infected, allowing studies of virus transmission. But until the arrival of CRISPR, virologists lacked the tools to easily alter ferret genes. Xiaoqun Wang and his colleagues at the Chinese Academy of Sciences in Beijing have used CRISPR to tweak genes involved in ferret brain development14, and they are now using it to modify the animals' susceptibility to the flu virus. He says that he will make the model available to infectious-disease researchers. 
Behavioural researchers are particularly excited about the prospect of genetically manipulating marmosets and monkeys, which are more closely related to humans than are standard rodent models. The work is moving most quickly in China and Japan. In January, for instance, neuroscientist Zilong Qiu and his colleagues at the Chinese Academy of Sciences in Shanghai published a paper15 describing macaques with a CRISPR-induced mutation in MECP2, the gene associated with the neurodevelopmental disorder Rett syndrome. The animals showed symptoms of autism spectrum disorder, including repetitive behaviours and avoiding social contact. But Anthony Chan, a geneticist at Emory University in Atlanta, Georgia, cautions that researchers must think carefully about the ethics of creating such models and whether more-standard laboratory animals such as mice would suffice. "Not every disease needs a primate model," he says. Basic neuroscience could also benefit from the availability of new animal models. Neurobiologist Ed Boyden at the Massachusetts Institute of Technology is raising a colony of the world's tiniest mammal — the Etruscan shrew (Suncus etruscus). The shrews' brains are so small that the entire organ can be viewed under a microscope at once. Gene edits that cause neurons to flash when they fire, for instance, could allow researchers to study the animal's entire brain in real time. The CRISPR zoo is expanding fast — the question now is how to navigate the way forward. Pauwels says that the field could face the same kind of public backlash that bedevilled the previous generation of genetically modified plants and animals, and to avoid it, scientists need to communicate the advantages of their work. "If it's here and can have some benefit," she says, "let's think of it as something we can digest and we can own."


Abstract: A new technique using liquid metals to create integrated circuits that are just atoms thick could lead to the next big advance for electronics. The process opens the way for the production of large wafers around 1.5 nanometres in depth (a sheet of paper, by comparison, is 100,000nm thick). Other techniques have proven unreliable in terms of quality, difficult to scale up and function only at very high temperatures -- 550 degrees or more. Distinguished Professor Kourosh Kalantar-zadeh, from the School of Engineering at RMIT University in Melbourne, Australia, led the project, which also included colleagues from RMIT and researchers from CSIRO, Monash University, North Carolina State University and the University of California. He said the electronics industry had hit a barrier. "The fundamental technology of car engines has not progressed since 1920 and now the same is happening to electronics. Mobile phones and computers are no more powerful than five years ago. "That is why this new 2D printing technique is so important -- creating many layers of incredibly thin electronic chips on the same surface dramatically increases processing power and reduces costs. "It will allow for the next revolution in electronics." Benjamin Carey, a researcher with RMIT and the CSIRO, said creating electronic wafers just atoms thick could overcome the limitations of current chip production. It could also produce materials that were extremely bendable, paving the way for flexible electronics. "However, none of the current technologies are able to create homogenous surfaces of atomically thin semiconductors on large surface areas that are useful for the industrial scale fabrication of chips. "Our solution is to use the metals gallium and indium, which have a low melting point. "These metals produce an atomically thin layer of oxide on their surface that naturally protects them. It is this thin oxide which we use in our fabrication method. "By rolling the liquid metal, the oxide layer can be transferred on to an electronic wafer, which is then sulphurised. The surface of the wafer can be pre-treated to form individual transistors. "We have used this novel method to create transistors and photo-detectors of very high gain and very high fabrication reliability in large scale."


News Article | February 18, 2017
Site: phys.org

The process opens the way for the production of large wafers around 1.5 nanometres in depth (a sheet of paper, by comparison, is 100,000nm thick). Other techniques have proven unreliable in terms of quality, difficult to scale up and function only at very high temperatures—550 degrees or more. Distinguished Professor Kourosh Kalantar-zadeh, from the School of Engineering at RMIT University in Melbourne, Australia, led the project, which also included colleagues from RMIT and researchers from CSIRO, Monash University, North Carolina State University and the University of California. He said the electronics industry had hit a barrier. "The fundamental technology of car engines has not progressed since 1920 and now the same is happening to electronics. Mobile phones and computers are no more powerful than five years ago. "That is why this new 2D printing technique is so important—creating many layers of incredibly thin electronic chips on the same surface dramatically increases processing power and reduces costs. "It will allow for the next revolution in electronics." Benjamin Carey, a researcher with RMIT and the CSIRO, said creating electronic wafers just atoms thick could overcome the limitations of current chip production. It could also produce materials that were extremely bendable, paving the way for flexible electronics. "However, none of the current technologies are able to create homogenous surfaces of atomically thin semiconductors on large surface areas that are useful for the industrial scale fabrication of chips. "Our solution is to use the metals gallium and indium, which have a low melting point. "These metals produce an atomically thin layer of oxide on their surface that naturally protects them. It is this thin oxide which we use in our fabrication method. "By rolling the liquid metal, the oxide layer can be transferred on to an electronic wafer, which is then sulphurised. The surface of the wafer can be pre-treated to form individual transistors. "We have used this novel method to create transistors and photo-detectors of very high gain and very high fabrication reliability in large scale." More information: "Wafer Scale Two Dimensional Semiconductors from Printed Oxide Skin of Liquid Metals", Nature Communications, DOI: 10.1038/NCOMMS14482


News Article | February 18, 2017
Site: news.yahoo.com

A new technique using liquid metals to create integrated circuits that are atoms thick could shape the future of electronics, according to research led by RMIT. The technique, which can produce large electronic wafers around 1.5 nanometres in depth, was developed by researchers from RMIT, Monash University, North Carolina State University, and the Commonwealth Scientific and Industrial Research Organisation (CSIRO). Benjamin Carey, a researcher with RMIT and CSIRO, said the technique overcomes one of the biggest limitations of current chip production -- the inability to create homogeneous surfaces of atomically thin semiconductors on large surface areas. The new technique would make large-scale chip production more efficient and cost-effective, Carey said. "Our solution is to use the metals gallium and indium, which have a low melting point. These metals produce an atomically thin layer of oxide on their surface that naturally protects them. It is this thin oxide which we use in our fabrication method," Carey explained. "By rolling the liquid metal, the oxide layer can be transferred onto an electronic wafer, which is then sulphurised. The surface of the wafer can be pre-treated to form individual transistors." The technique also produces bendable materials that could pave the way for flexible electronics, Carey said. Professor Kourosh Kalantar-Zadeh from RMIT's School of Engineering, who led the research project, said the electronics industry has hit a barrier. "The fundamental technology of car engines has not progressed since 1920 and now the same is happening to electronics. Mobile phones and computers are no more powerful than five years ago," Kalantar-Zadeh said. "That is why this new 2D printing technique is so important -- creating many layers of incredibly thin electronic chips on the same surface dramatically increases processing power and reduces costs. "It will allow for the next revolution in electronics." Earlier in February, Intel announced plans to invest $7 billion in the construction of its R&D factory in Arizona where it will produce its new 7-nanometre semiconductors. Intel first announced plans for the Fab 42 factory in 2011 with construction slated to be completed by 2013. However, the plans were put on hold due to shrinking chip demand. Around mid-2016, Samsung said the Internet of Things will drive semiconductor sales. The world's semiconductor market is growing 7 percent per year, while IoT will account for 25 percent, the company said at the time, which is why Samsung decided to increase its competence in semiconductors.


News Article | February 17, 2017
Site: www.rdmag.com

A new technique using liquid metals to create integrated circuits that are just atoms thick could lead to the next big advance for electronics. The process opens the way for the production of large wafers around 1.5 nanometres in depth (a sheet of paper, by comparison, is 100,000nm thick). Other techniques have proven unreliable in terms of quality, difficult to scale up and function only at very high temperatures -- 550 degrees or more. Distinguished Professor Kourosh Kalantar-zadeh, from the School of Engineering at RMIT University in Melbourne, Australia, led the project, which also included colleagues from RMIT and researchers from CSIRO, Monash University, North Carolina State University and the University of California. He said the electronics industry had hit a barrier. "The fundamental technology of car engines has not progressed since 1920 and now the same is happening to electronics. Mobile phones and computers are no more powerful than five years ago. "That is why this new 2D printing technique is so important -- creating many layers of incredibly thin electronic chips on the same surface dramatically increases processing power and reduces costs. "It will allow for the next revolution in electronics." Benjamin Carey, a researcher with RMIT and the CSIRO, said creating electronic wafers just atoms thick could overcome the limitations of current chip production. It could also produce materials that were extremely bendable, paving the way for flexible electronics. "However, none of the current technologies are able to create homogenous surfaces of atomically thin semiconductors on large surface areas that are useful for the industrial scale fabrication of chips. "Our solution is to use the metals gallium and indium, which have a low melting point. "These metals produce an atomically thin layer of oxide on their surface that naturally protects them. It is this thin oxide which we use in our fabrication method. "By rolling the liquid metal, the oxide layer can be transferred on to an electronic wafer, which is then sulphurised. The surface of the wafer can be pre-treated to form individual transistors. "We have used this novel method to create transistors and photo-detectors of very high gain and very high fabrication reliability in large scale."


News Article | February 17, 2017
Site: www.eurekalert.org

A new technique using liquid metals to create integrated circuits that are just atoms thick could lead to the next big advance for electronics. The process opens the way for the production of large wafers around 1.5 nanometres in depth (a sheet of paper, by comparison, is 100,000nm thick). Other techniques have proven unreliable in terms of quality, difficult to scale up and function only at very high temperatures -- 550 degrees or more. Distinguished Professor Kourosh Kalantar-zadeh, from the School of Engineering at RMIT University in Melbourne, Australia, led the project, which also included colleagues from RMIT and researchers from CSIRO, Monash University, North Carolina State University and the University of California. He said the electronics industry had hit a barrier. "The fundamental technology of car engines has not progressed since 1920 and now the same is happening to electronics. Mobile phones and computers are no more powerful than five years ago. "That is why this new 2D printing technique is so important -- creating many layers of incredibly thin electronic chips on the same surface dramatically increases processing power and reduces costs. "It will allow for the next revolution in electronics." Benjamin Carey, a researcher with RMIT and the CSIRO, said creating electronic wafers just atoms thick could overcome the limitations of current chip production. It could also produce materials that were extremely bendable, paving the way for flexible electronics. "However, none of the current technologies are able to create homogenous surfaces of atomically thin semiconductors on large surface areas that are useful for the industrial scale fabrication of chips. "Our solution is to use the metals gallium and indium, which have a low melting point. "These metals produce an atomically thin layer of oxide on their surface that naturally protects them. It is this thin oxide which we use in our fabrication method. "By rolling the liquid metal, the oxide layer can be transferred on to an electronic wafer, which is then sulphurised. The surface of the wafer can be pre-treated to form individual transistors. "We have used this novel method to create transistors and photo-detectors of very high gain and very high fabrication reliability in large scale." The paper outlining the new technique, "Wafer Scale Two Dimensional Semiconductors from Printed Oxide Skin of Liquid Metals", has been published in the journal, Nature Communications. The DOI for this paper will be 10.1038/NCOMMS14482.


News Article | March 2, 2017
Site: marketersmedia.com

RENO, NV / ACCESSWIRE / March 2, 2017 / Scandium International Mining Corp. (TSX: SCY) ("Scandium International" or the "Company") is pleased to announce it has signed a Memorandum of Understanding ("MOU") with Weston Aluminium Pty Ltd ("Weston") of Chatswood, NSW, Australia. The MOU defines a cooperative commercial alliance ("Alliance") to jointly develop the capability to manufacture aluminum-scandium master alloy ("Master Alloy"). The intended outcome of this Alliance will be to develop the capability to offer Nyngan Project aluminum alloy customers scandium in the form of Al-Sc Master Alloy, should customers prefer that product form. Highlights of the MOU: it outlines the terms of a commercial alliance with Weston Aluminium; represents a key step towards a Master Alloy manufacturing agreement; allows SCY to quickly apply its know-how alongside an operating aluminum processor; supports a product offer more suitable to many downstream customers, thereby expanding the potential customer base; and allows SCY to bring added value to significant potential scandium markets. The MOU outlines steps to jointly establish the manufacturing parameters, metallurgical processes, and capital requirements to convert Nyngan Project scandium product into Master Alloy, on Weston's existing production site in NSW. The MOU does not include a binding contract with commercial terms at this stage, although the intent is to pursue the necessary technical elements to arrive at a commercial contract for conversion of scandium oxide to Master Alloy, and to do so prior to first mine production from the Nyngan Project. SCY has been developing an internal understanding and capability to convert scandium oxide to Master Alloy for some time. In 2014, the Company announced it applied for a US Patent on Master Alloy production, which is still in the application phase. That Patent Application addressed scandium Master Alloys with both aluminum-base and magnesium-base metals. The Company has investigated scandium Master Alloy manufacturing processes previously with the CSIRO, in Australia, and Kingston Process Metallurgy (KPM) in Ontario, Canada. Further, the technical team in SCY has previous relevant experience in this metallurgical area. Weston Aluminium is a privately-owned, established secondary aluminum and industrial services company with its production facilities located near Kurri Kurri, NSW, approximately 50km from Newcastle and 400km from the Nyngan Project. Using an environmentally attractive 'non-salt' processing technology from Asahi Seiren in Japan as its core technology, Weston has further developed its application to process all forms of aluminum-bearing dross and by-products from the aluminum industry, and to manufacture Aluminum Deoxidant for steel making for BlueScope Steel at Port Kembla, NSW, and Arrium at Whyalla, SA. Weston recycles industrial aluminum scrap, and dross, principally from the Tomago Aluminum Smelter located north of Newcastle, which is an independent joint venture between Rio Tinto Alcan, CSR and Hydro Aluminum. Weston today recycles 20,000 tonnes of aluminum materials annually, and has established capability in recovering valuable metals from metallurgical process dross materials, resulting in zero landfill. The process for making scandium Master Alloy is generally known, although efficiencies vary widely in practice. While numerous entities make and sell a variety of aluminum master alloys globally, very few have established the capability, or the access to scandium, to make Al-Sc Master Alloy product well.
The Company believes that it can establish world-class techniques for manufacture of scandium Master Alloy, and can offer an upgraded product to aluminum alloy customers directly. This approach enables the Company to better control product quality, by consolidating the upgrade step through a trusted downstream processor. The Company will also continue to offer product in scandium oxide form, for customers who prefer or require that product form. SCY intends to contribute the majority of the intellectual property ("IP") required to define manufacturing technique and metallurgy, and will retain the IP rights to that process, recipe and know-how, as defined in the MOU. "We are excited to team with an experienced aluminum products manufacturer to build a capability in scandium Master Alloy manufacturing. This Alliance allows SCY to apply and expand our know-how in this area immediately, while working with Weston, an innovative local operator. We believe this capability to offer scandium product as Master Alloy has the potential to broaden our customer base and deliver scandium's promise more effectively to aluminum alloy users." Willem Duyvesteyn, MSc, AIME, CIM, a Director and CTO of the Company, is a qualified person for the purposes of NI 43-101 and has reviewed and approved the technical content of this press release on behalf of the Company. The Company is focused on developing the Nyngan Scandium Project into the world's first scandium-only producing mine. The Company owns an 80% interest in both the Nyngan Scandium Project, and the adjacent Honeybugle Scandium Property, in New South Wales, Australia, and is manager of both projects. Our joint venture partner, Scandium Investments LLC, owns the remaining 20% in both projects, along with an option to convert those direct project interests into SCY common shares, based on market values, prior to construction. The Company filed a NI 43-101 technical report in May 2016, titled "Feasibility Study - Nyngan Scandium Project". That feasibility study delivered an expanded scandium resource, a first reserve figure, and an estimated 33.1% IRR on the project, supported by extensive metallurgical test work and an independent, 10-year global marketing outlook for scandium demand. For further information, please contact: This press release contains forward-looking statements about the Company and its business. Forward looking statements are statements that are not historical facts and include, but are not limited to: reserve and resource estimates, estimated NPV of the project, anticipated IRR, anticipated mining and processing methods for the Project, the estimated economics of the project, anticipated Scandium recoveries, production rates, scandium grades, estimated capital costs, operating cash costs and total production costs, planned additional processing work and environmental permitting. The forward-looking statements in this press release are subject to various risks, uncertainties and other factors that could cause the Company's actual results or achievements to differ materially from those expressed in or implied by forward looking statements. 
These risks, uncertainties and other factors include, without limitation risks related to uncertainty in the demand for Scandium and pricing assumptions; uncertainties related to raising sufficient financing to fund the project in a timely manner and on acceptable terms; changes in planned work resulting from logistical, technical or other factors; the possibility that results of work will not fulfill expectations and realize the perceived potential of the Company's properties; uncertainties involved in the estimation of Scandium reserves and resources; the possibility that required permits may not be obtained on a timely manner or at all; the possibility that capital and operating costs may be higher than currently estimated and may preclude commercial development or render operations uneconomic; the possibility that the estimated recovery rates may not be achieved; risk of accidents, equipment breakdowns and labor disputes or other unanticipated difficulties or interruptions; the possibility of cost overruns or unanticipated expenses in the work program; risks related to projected project economics, recovery rates, and estimated NPV and anticipated IRR and other factors identified in the Company's SEC filings and its filings with Canadian securities regulatory authorities. Forward-looking statements are based on the beliefs, opinions and expectations of the Company's management at the time they are made, and other than as required by applicable securities laws, the Company does not assume any obligation to update its forward-looking statements if those beliefs, opinions or expectations, or other circumstances, should change.


News Article | March 2, 2017
Site: www.accesswire.com

RENO, NV / ACCESSWIRE / March 2, 2017 / Scandium International Mining Corp. (TSX: SCY) ("Scandium International" or the "Company") is pleased to announce it has signed a Memorandum of Understanding ("MOU") with Weston Aluminium Pty Ltd ("Weston") of Chatswood, NSW, Australia. The MOU defines a cooperative commercial alliance ("Alliance") to jointly develop the capability to manufacture aluminum-scandium master alloy ("Master Alloy"). The intended outcome of this Alliance will be to develop the capability to offer Nyngan Project aluminum alloy customers scandium in the form of Al-Sc Master Alloy, should customers prefer that product form. The MOU outlines steps to jointly establish the manufacturing parameters, metallurgical processes, and capital requirements to convert Nyngan Project scandium product into Master Alloy, on Weston's existing production site in NSW. The MOU does not include a binding contract with commercial terms at this stage, although the intent is to pursue the necessary technical elements to arrive at a commercial contract for conversion of scandium oxide to Master Alloy, and to do so prior to first mine production from the Nyngan Project. SCY has been developing an internal understanding and capability to convert scandium oxide to Master Alloy for some time. In 2014, the Company announced it applied for a US Patent on Master Alloy production, which is still in the application phase. That Patent Application addressed scandium Master Alloys with both aluminum-base and magnesium-base metals. The Company has investigated scandium Master Alloy manufacturing processes previously with the CSIRO, in Australia, and Kingston Process Metallurgy (KPM) in Ontario, Canada. Further, the technical team in SCY has previous relevant experience in this metallurgical area. Weston Aluminium is a privately-owned, established secondary aluminum and industrial services company with its production facilities located near Kurri Kurri, NSW, approximately 50km from Newcastle and 400km from the Nyngan Project. Using an environmentally attractive 'non-salt' processing technology from Asahi Seiren in Japan as its core technology, Weston has further developed its application to process all forms of aluminum-bearing dross and by-products from the aluminum industry, and to manufacture Aluminum Deoxidant for steel making for BlueScope Steel at Port Kembla, NSW, and Arrium at Whyalla, SA. Weston recycles industrial aluminum scrap, and dross, principally from the Tomago Aluminum Smelter located north of Newcastle, which is an independent joint venture between Rio Tinto Alcan, CSR and Hydro Aluminum. Weston today recycles 20,000 tonnes of aluminum materials annually, and has established capability in recovering valuable metals from metallurgical process dross materials, resulting in zero landfill. The process for making scandium Master Alloy is generally known, although efficiencies vary widely in practice. While numerous entities make and sell a variety of aluminum master alloys globally, very few have established the capability, or the access to scandium, to make Al-Sc Master Alloy product well. The Company believes that it can establish world-class techniques for manufacture of scandium Master Alloy, and can offer an upgraded product to aluminum alloy customers directly. This approach enables the Company to better control product quality, by consolidating the upgrade step through a trusted downstream processor.
The Company will also continue to offer product in scandium oxide form, for customers who prefer or require that product form. SCY intends to contribute the majority of the intellectual property ("IP") required to define manufacturing technique and metallurgy, and will retain the IP rights to that process, recipe and know-how, as defined in the MOU. "We are excited to team with an experienced aluminum products manufacturer to build a capability in scandium Master Alloy manufacturing. This Alliance allows SCY to apply and expand our know-how in this area immediately, while working with Weston, an innovative local operator. We believe this capability to offer scandium product as Master Alloy has the potential to broaden our customer base and deliver scandium's promise more effectively to aluminum alloy users." Willem Duyvesteyn, MSc, AIME, CIM, a Director and CTO of the Company, is a qualified person for the purposes of NI 43-101 and has reviewed and approved the technical content of this press release on behalf of the Company. The Company is focused on developing the Nyngan Scandium Project into the world's first scandium-only producing mine. The Company owns an 80% interest in both the Nyngan Scandium Project, and the adjacent Honeybugle Scandium Property, in New South Wales, Australia, and is manager of both projects. Our joint venture partner, Scandium Investments LLC, owns the remaining 20% in both projects, along with an option to convert those direct project interests into SCY common shares, based on market values, prior to construction. The Company filed a NI 43-101 technical report in May 2016, titled "Feasibility Study - Nyngan Scandium Project". That feasibility study delivered an expanded scandium resource, a first reserve figure, and an estimated 33.1% IRR on the project, supported by extensive metallurgical test work and an independent, 10-year global marketing outlook for scandium demand. For further information, please contact: This press release contains forward-looking statements about the Company and its business. Forward looking statements are statements that are not historical facts and include, but are not limited to: reserve and resource estimates, estimated NPV of the project, anticipated IRR, anticipated mining and processing methods for the Project, the estimated economics of the project, anticipated Scandium recoveries, production rates, scandium grades, estimated capital costs, operating cash costs and total production costs, planned additional processing work and environmental permitting. The forward-looking statements in this press release are subject to various risks, uncertainties and other factors that could cause the Company's actual results or achievements to differ materially from those expressed in or implied by forward looking statements. 
These risks, uncertainties and other factors include, without limitation risks related to uncertainty in the demand for Scandium and pricing assumptions; uncertainties related to raising sufficient financing to fund the project in a timely manner and on acceptable terms; changes in planned work resulting from logistical, technical or other factors; the possibility that results of work will not fulfill expectations and realize the perceived potential of the Company's properties; uncertainties involved in the estimation of Scandium reserves and resources; the possibility that required permits may not be obtained on a timely manner or at all; the possibility that capital and operating costs may be higher than currently estimated and may preclude commercial development or render operations uneconomic; the possibility that the estimated recovery rates may not be achieved; risk of accidents, equipment breakdowns and labor disputes or other unanticipated difficulties or interruptions; the possibility of cost overruns or unanticipated expenses in the work program; risks related to projected project economics, recovery rates, and estimated NPV and anticipated IRR and other factors identified in the Company's SEC filings and its filings with Canadian securities regulatory authorities. Forward-looking statements are based on the beliefs, opinions and expectations of the Company's management at the time they are made, and other than as required by applicable securities laws, the Company does not assume any obligation to update its forward-looking statements if those beliefs, opinions or expectations, or other circumstances, should change.


News Article | January 12, 2016
Site: www.rdmag.com

Researchers have conducted the first ever trials of smart pills that can measure intestinal gases inside the body, with surprising results revealing some unexpected ways that fiber affects the gut. Intestinal gases have been linked to colon cancer, irritable bowel syndrome (IBS) and inflammatory bowel disease (IBD), but their role in health is poorly understood and there is currently no easy and reliable tool for detecting them inside the gut. The first animal trials of smart gas sensing pills developed at Australia's RMIT University - which can send data from inside the gut directly to a mobile phone - have examined the impact of low- and high-fiber diets on intestinal gases and offer new clues for the development of treatments for gut disorders. Lead investigator Professor Kourosh Kalantar-zadeh, from the Centre for Advanced Electronics and Sensors at RMIT, said the results reversed current assumptions about the effect of fiber on the gut. "We found a low-fiber diet produced four times more hydrogen in the small intestine than a high-fiber diet," Kalantar-zadeh said. "This was a complete surprise because hydrogen is produced through fermentation, so we naturally expected more fiber would equal more of this fermentation gas. "The smart pills allow us to identify precisely where the gases are produced and help us understand the microbial activity in these areas - it's the first step in demolishing the myths of food effects on our body and replacing those myths with hard facts. "We hope this technology will in future enable researchers to design personalized diets or drugs that can efficiently target problem areas in the gut, to help the millions of people worldwide that are affected by digestive disorders and diseases." The trials revealed different levels of fiber in a diet affected both how much gas was produced and where it was concentrated in the gut - in the stomach, small intestine or large intestine. The smart pills were trialed on two groups of pigs - whose digestive systems are similar to humans - fed high- and low-fiber diets. The results indicate the technology could help doctors differentiate gut disorders such as IBS. The research, jointly conducted with the Department of Gastroenterology at The Alfred Hospital, Monash University, the University of Melbourne and CSIRO, is published in the January edition of the high-impact journal, Gastroenterology.


Australia’s CSIRO has partnered with Chinese company Thermal Focus in an agreement which will see Australian concentrating solar thermal technology used in China. China has recently announced plans to deploy 1.4 GW of concentrating solar thermal (CST) capacity by 2018, and 5 GW by 2020 — effectively doubling the world’s CST capacity. The partnership will allow Thermal Focus to manufacture, market, sell, and install CSIRO’s patented low-cost heliostats, field control software, and design software throughout China, with a shared revenue stream back to Australia which will in turn fund further climate mitigation research. “Australia is a leader in clean energy technology and CSIRO’s partnership with China’s Thermal Focus takes our climate mitigation focus to a global stage,” explained CSIRO Chief Executive Dr Larry Marshall. “This is another great example of all four pillars of our Strategy 2020 in action; using excellent science to deliver breakthrough innovation, and through global collaboration, increasing renewable energy deliverables. “Through this collaboration and our continued solar research, we will be helping to generate cleaner energy, cost savings and technology export benefits for Australia; all lowering global greenhouse gas emissions.” For the uninitiated, CST technology uses a field of computer-controlled mirrors (or heliostats) that are programmed to always reflect and concentrate sunlight onto a receiver at the top of a tower. The receiver heats molten salt, which stores the thermal energy and is used to generate superheated steam that drives a turbine for electricity generation. That thermal storage is very low cost, making CST well suited to medium- and large-scale solar power, able to generate electricity when the sun is shining and even when it’s not. “CSIRO’s solar thermal technology combined with our manufacturing capability will help expedite and deliver solar thermal as an important source of renewable energy in China,” added Mr Wei Zhu from Thermal Focus, who welcomed the collaboration with one of the world’s leading CST experts. “This partnership will help us commercialise this emerging technology on a larger scale.”
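For readers curious about the control problem, the core aiming rule for a flat heliostat is simple vector geometry. The sketch below is illustrative only (invented positions, not CSIRO's field control software): it computes the mirror normal that reflects sunlight onto the receiver, using the fact that the normal bisects the incoming and reflected rays.

    import numpy as np

    def heliostat_normal(sun_dir, mirror_pos, receiver_pos):
        """Unit normal a flat heliostat must adopt so incoming sunlight reflects toward the receiver."""
        sun_dir = sun_dir / np.linalg.norm(sun_dir)               # direction from the mirror toward the sun
        to_receiver = receiver_pos - mirror_pos
        to_receiver = to_receiver / np.linalg.norm(to_receiver)   # direction from the mirror toward the receiver
        normal = sun_dir + to_receiver                            # law of reflection: the normal bisects the two rays
        return normal / np.linalg.norm(normal)

    # Invented layout: mirror 50 m east of a 30 m tower, sun about 45 degrees high toward the north-east.
    sun = np.array([0.5, 0.5, 0.707])
    mirror = np.array([50.0, 0.0, 0.0])
    receiver = np.array([0.0, 0.0, 30.0])
    print(heliostat_normal(sun, mirror, receiver))

In a real field this calculation is repeated for every heliostat as the sun moves, which is why the field control software matters as much as the mirrors themselves.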


News Article | December 8, 2016
Site: phys.org

The tech giants Google, Facebook, Apple, IBM and others are applying artificial intelligence to all sorts of data. Machine learning methods are being used in areas such as translating language almost in real time, and even identifying images of cats on the internet. So why haven't we seen artificial intelligence used to the same extent in healthcare? Radiologists still rely on visual inspection of magnetic resonance imaging (MRI) or X-ray scans – although IBM and others are working on this issue – and doctors have no access to AI for guiding and supporting their diagnoses. Machine learning technologies have been around for decades, and a relatively recent technique called deep learning keeps pushing the limit of what machines can do. Deep learning networks arrange neuron-like units into hierarchical layers, which can recognise patterns in data. This is done by iteratively presenting data along with the correct answer to the network until its internal parameters, the weights linking the artificial neurons, are optimised. If the training data capture the variability of the real world, the network is able to generalise well and provide the correct answer when presented with unseen data. So the learning stage requires very large data sets of cases along with the corresponding answers. Millions of records and billions of computations are needed to update the network parameters, often done on a supercomputer for days or weeks. Here lie the problems with healthcare: data sets are not yet big enough and the correct answers to be learned are often ambiguous or even unknown. We're going to need better and bigger data sets. The functions of the human body, its anatomy and variability, are very complex. The complexity is even greater because diseases are often triggered or modulated by genetic background, which is unique to each individual and so hard to train on. Adding to this, specific challenges to medical data exist. These include the difficulty of measuring any biological process precisely and accurately, which introduces unwanted variation. Other challenges include the presence of multiple diseases (co-morbidity) in a patient, which can often confound predictions. Lifestyle and environmental factors also play important roles but are seldom available. The result is that medical data sets need to be extremely large. This is being addressed across the world with increasingly large research initiatives. Examples include Biobank in the United Kingdom, which aims to scan 100,000 participants. Others include the Alzheimer's Disease Neuroimaging Initiative (ADNI) in the United States and the Australian Imaging, Biomarkers and Lifestyle Study of Ageing (AIBL), tracking more than a thousand subjects over a decade. Government initiatives are also emerging, such as the American Cancer Moonshot program. The aim is to "build a national cancer data ecosystem" so researchers, clinicians and patients can contribute data, with the aim to "facilitate efficient data analysis". Similarly, the Australian Genomics Health Alliance aims at pooling and sharing genomic information. Eventually the electronic medical record systems that are being deployed across the world should provide extensive high-quality data sets. Beyond the expected gain in efficiency, the potential to mine population-wide clinical data using machine learning is tremendous. Some companies such as Google are eagerly trying to access those data.
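To make the training loop described above concrete, here is a toy two-layer network trained on synthetic data in plain NumPy. It has nothing to do with the medical systems discussed; it is purely an illustration of iteratively presenting data and correct answers until the weights linking the artificial neurons are optimised.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 4))                     # 200 synthetic "patients", 4 measurements each
    y = (X[:, 0] - 0.5 * X[:, 1] > 0).astype(float)   # synthetic "diagnosis" used as the correct answer

    W1 = rng.normal(scale=0.5, size=(4, 8))           # weights linking inputs to a hidden layer
    b1 = np.zeros(8)
    W2 = rng.normal(scale=0.5, size=(8, 1))           # weights linking the hidden layer to the output
    b2 = np.zeros(1)
    lr = 0.1

    for step in range(2000):                          # iteratively present the data and the correct answers
        h = np.tanh(X @ W1 + b1)                      # hidden-layer activations
        p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))      # predicted probability of the label
        grad_logit = (p - y[:, None]) / len(X)        # gradient of the cross-entropy loss at the output
        grad_h = grad_logit @ W2.T * (1.0 - h ** 2)   # back-propagate through the tanh layer
        W2 -= lr * (h.T @ grad_logit)
        b2 -= lr * grad_logit.sum(axis=0)
        W1 -= lr * (X.T @ grad_h)
        b1 -= lr * grad_h.sum(axis=0)

    print("training accuracy:", ((p[:, 0] > 0.5) == y).mean())

Real medical models differ in scale and architecture, but the basic cycle of prediction, error measurement and weight update is the same.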
What a machine needs to learn is not obvious. Complex medical decisions are often made by a team of specialists reaching consensus rather than certainty. Radiologists might disagree slightly when interpreting a scan where blurring and only very subtle features can be observed. Inferring a diagnosis from measurements with errors, and when the disease is modulated by unknown genes, often relies on implicit know-how and experience rather than explicit facts. Sometimes the true answer cannot be obtained at all. For example, measurements of the size of a structure from a brain MRI cannot be validated, even at autopsy, since post-mortem tissues change in their composition and size after death. So a machine can learn that a photo contains a cat because users have labelled with certainty thousands of pictures through social media platforms, or told Google how to recognise doodles. It is a much more difficult task to measure the size of a brain structure from an MRI because no one knows the answer and only consensus from several experts can be assembled at best, and at a great cost. Several technologies are emerging to address this issue. Complex mathematical models that incorporate probabilities, such as Bayesian approaches, can learn under uncertainty. Unsupervised methods can recognise patterns in data without needing to know what the actual answers are, albeit with challenging interpretation of the results. Another approach is transfer learning, whereby a machine can learn from large, different, but relevant, data sets for which the training answers are known. Medical applications of deep learning have already been very successful. They often come first at competitions during scientific meetings where data sets are made available and the evaluation of submitted results is revealed during the conference. At CSIRO we have been developing CapAIBL (Computational Analysis of PET from AIBL) to analyse 3-D images of brain positron emission tomography (PET). Using a database with many scans from healthy individuals and patients with Alzheimer's disease, the method is able to learn pattern characteristics of the disease. It can then identify that signature in an unseen individual's scan. The clinical report generated allows doctors to diagnose the disease faster and with more confidence. In one example case, CapAIBL technology was applied to amyloid plaque imaging in a patient with Alzheimer's disease, where red indicates higher amyloid deposition in the brain, a sign of Alzheimer's. Probably the most challenging issue is understanding causation. Analysing retrospective data is prone to learning spurious correlations and missing the underlying cause of diseases or the effect of treatments. Traditionally, randomised clinical trials provide evidence on the superiority of different options, but they don't yet benefit from the potential of artificial intelligence. New designs such as platform clinical trials might address this in the future, and could pave the way for machine learning technologies to learn evidence rather than just association. So large medical data sets are being assembled. New technologies to overcome the lack of certainty are being developed. Novel ways to establish causation are emerging. This area is moving fast and tremendous potential exists for improving efficiency and health. Indeed, many ventures are trying to capitalise on this. Startups such as Enlitic, large firms such as IBM, or even small businesses such as Resonance Health, are promising to revolutionise health.
Impressive progress is being made but many challenges still exist.


News Article | December 14, 2016
Site: www.theguardian.com

No doubt nearly everyone is familiar with the story. In early 2014, Malaysia Airlines flight MH370 left Kuala Lumpur, Malaysia, on a flight to China. The flight disappeared from communication and was never found, despite great search efforts. It isn’t that there is no evidence of the crash. In July of last year, a portion of a wing was found near Madagascar and Reunion Island in the Indian Ocean. Since then, other debris has been found in the Western Indian Ocean. Using the locations where the wing debris was found, oceanographers from the University of Santiago de Compostela (Spain), the United States National Oceanic and Atmospheric Administration (NOAA), the University of Miami, University of Hawaii, and the Commonwealth Scientific and Industrial Research Organisation (CSIRO) in Australia have a lead. Their hypothesis is published in the Journal of Operational Oceanography and can be found here. The authors used two sets of data to help track the possible paths of the debris. First, they took advantage of observations from NOAA’s Global Drifter Array. These drifters have a surface float and an anchor or drogue that extends to 15 m deep, and a suite of sensors that communicate via satellite their location and parameters like ocean currents, surface ocean temperature, pressure, wind, and salinity. In the Indian Ocean alone, there are approximately 400 of these drifters at any time, providing continuous ocean measurement information. At some point the drifters lose their drogue, and these are the ones used in this study as they better simulate debris dynamics. The authors tracked drifters that were released or that traveled near the search area in the southeastern Indian Ocean. Several of these drifters traveled across the Indian Ocean to the final destination near Reunion Island, very near where the wing debris was found, and the duration it took the drifters to make their trek was similar to that of the debris. In addition, the authors used a computer model of ocean currents from the University of Hawaii. This model incorporated the surface ocean winds and provided a realistic simulation of ocean currents during and after the plane crash. Using these computer-derived currents, the scientists released thousands of replica drifters to see where they traveled. By combining the real trajectories from actual instruments with the simulated trajectories, scientists were able to identify the location where a crash was most likely. More recent debris discoveries confirm the general westward drift predictions from the computer program and analysis. While the assessments from this study are interesting in that they are related to the MH370 accident, the techniques that the researchers developed can be used for other ocean-debris scenarios and are useful both for basic research as well as more tangible applications for societal benefits, such as search and rescue efforts, oil spills, and fish larval transport. I contacted author Joaquin Trinanes to ask about the difficulties of this project and its importance. He told me: I think it is really great to solve a basic research problem but also to connect it to practical applications. Great work, folks.
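To give a flavour of the simulated-drifter technique, here is a toy sketch. The current field, release area and time-stepping are all invented for illustration; this is nothing like the University of Hawaii model or the paper's analysis, but it shows the basic idea of advecting many virtual drifters through a velocity field and seeing where the cloud ends up.

    import numpy as np

    def current(lon, lat):
        """Invented, steady surface current in degrees per day: mostly westward with a gentle northward drift."""
        u = -0.09 + 0.01 * np.sin(np.radians(3.0 * lat))
        v = 0.02 + 0.01 * np.cos(np.radians(3.0 * lon))
        return u, v

    rng = np.random.default_rng(1)
    lon = rng.uniform(95.0, 105.0, size=5000)    # release 5,000 virtual drifters over a candidate area
    lat = rng.uniform(-40.0, -30.0, size=5000)

    for day in range(500):                        # step forward one day at a time with small random "eddy" kicks
        u, v = current(lon, lat)
        lon += u + rng.normal(scale=0.05, size=lon.size)
        lat += v + rng.normal(scale=0.05, size=lat.size)

    print("median final position (lon, lat):", np.median(lon), np.median(lat))

Running the same idea backwards in time, or comparing the simulated cloud with real drifter tracks, is what lets researchers narrow down a likely origin region.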


News Article | November 1, 2016
Site: news.yahoo.com

The idea of artificial intelligence (AI) — autonomous computers that can learn independently — makes some people extremely uneasy, regardless of what the computers in question might be doing. Those individuals probably wouldn't find it reassuring to hear that a group of researchers is deliberately training computers to get better at scaring people witless. The project, appropriately enough, is named "Nightmare Machine." Digital innovators in the U.S. and Australia partnered to create an algorithm that would enable a computer to understand what makes certain images frightening, and then use that data to transform any photo, no matter how harmless-looking, into the stuff of nightmares. Images created by Nightmare Machine are unsettling, to say the least. Iconic buildings from around the world appear eroded and distorted within shadowy settings or amid charred and smoldering landscapes, glimpsed through what appears to be murky, polluted water or toxic gas clouds. Nightmare Machine's faces are equally disturbing. Some of the subjects are almost abstract, but subtle — creepy suggestions of hollow eyes, bloody shadows and decaying flesh still cause unease. Even the lovable Muppet Kermit the Frog emerges from the process as a zombie-like creature that would terrify toddlers — and adults, too. The primary reason for building Nightmare Machine was to explore the common fear inspired by intelligent computers, its trio of designers told Live Science. They wanted to playfully confront the anxiety inspired by AI, and simultaneously test if a computer is capable of understanding and visualizing what makes people afraid. "We know that AI terrifies us in the abstract sense," co-creator Pinar Yanardag, a postdoctoral researcher at MIT Media Lab in Massachusetts, wrote in an email. "But can AI scare us in the immediate, visceral sense?" The designers used a form of artificial intelligence called "deep learning" — a system of data structures and programs mimicking the neural connections in a human brain — to teach a computer what makes for a frightening visual, according to co-creator Manuel Cebrian, a principal research scientist at CSIRO Data61 in Australia. "Deep-learning algorithms perform remarkably well in several tasks considered difficult or impossible," Cebrian said. "Even though there is a lot of room for improvement, some of the faces already look remarkably creepy!" Once deep-learning algorithms understood the visual elements that were commonly perceived as spooky, they applied those styles to images of buildings and human faces — with chilling results. "Elon Musk said that with the development of AI, we are 'summoning the demon,'" co-creator Iyad Rahwan, an associate professor at MIT Media Lab, told Live Science. "We wanted to playfully explore whether and how AI can indeed become a demon, that can learn how to scare us, both by extracting features from scary images and subsequent refinement using crowd feedback," Rahwan said. He added that the timing of their spooky experiment — close to Halloween — was no accident. "Halloween has always been a time where people celebrate what scares them," he said, "so it seems like a perfect time for this particular hack." "Our research group's main goal is to understand the barriers between human and machine cooperation," Rahwan said. "Psychological perceptions of what makes humans tick and what makes machines tick are important barriers for such cooperation to emerge. 
This project tries to shed some light on that front — of course, in a goofy, hackerish Halloween manner!" And if you're brave enough, Nightmare Machine could use your help to learn how to become even scarier. The project's creators used deep-learning algorithms to generate frightening images of dozens of faces, tweaking the results to make them look even more disturbing. Nightmare Machine visitors can vote on these so-called "Haunted Faces," to help the algorithm "learn scariness," according to instructions on the website. Teaching a computer to be more terrifying — what could possibly go wrong?


News Article | December 18, 2016
Site: cleantechnica.com

The Australian Energy Market Commission is the nominally independent body that sets the rules for the country’s energy markets. You’d expect, given the importance of its role, that it would have some basic understanding about the costs of the technologies that it is dealing with. In the case of wind and solar, it is becoming increasingly obvious that it has no idea. Over the last few days, the AEMC has released important and influential reports that simply take the breath away for the depth of its ignorance. This is important. The AEMC, because it sets the rules, is pivotal in the industry and has enormous influence at state and federal government levels, and holds the principal levers over the course of energy rules and outcomes. As we reported on Tuesday, its report on the costs of the Emissions Intensity Scheme – which hit the headlines across most papers on Friday and the weekend – painted a favourable outcome for the EIS over an extended renewable energy target. But, as we pointed out, it was based on an heroically benign forecast of gas prices from Frontier Economics, and included estimates for large-scale solar costs, in particular, that were off the planet. Without these distortions, the renewable energy target came out as the cheapest. Even their own modelling shows that, although they chose not to highlight it. Now the AEMC has done it again, this time in its annual review of consumer electricity prices – which again hit the newspaper headlines, and, just like previous years, sought to paint renewable energy in a poor light. It’s a report that relies on some absurd costings for large-scale renewables – again using modelling from Frontier Economics, the consultancy headed by Danny Price, the architect of Direct Action, who teamed up with the AEMC two years ago to argue for the reduction of the RET (they won). The basis for the cost estimate of large-scale solar stands out as an example of their ignorance. “Our assumed capital cost of $2,305/kW and average capacity factor of 22% results in an LCOE for large-scale solar PV of approximately $135/MWh in 2016, reducing to $95/MWh by 2040,” the Frontier report says. We’re not sure how it is they assumed a capital cost above the ARENA costings that they quoted from. Other solar projects are already well below the ARENA costings – Sun Brilliance, for instance, has costed its 100MW solar project at around $1,600/kW. But let’s just let that one slide because it is not really that huge in the scheme of things. The biggie is on the “capacity factor” – the amount a solar plant can produce. Frontier assumes a capacity factor of 22 per cent. This is ancient history. Current solar projects will get at least 26 per cent, and most of the ARENA projects – because they are using single axis tracking – will get capacity factors of more than 30 per cent and up to 32 per cent. That is probably the principal reason why Frontier – and through it, the AEMC – thinks that the cost of large-scale solar is around 50 per cent more than it actually is, and in 2040 (in their projections) will still not fall to actual current levels. “Actually, I’m embarrassed for them,” said the head of one solar developer, who asked not to be named. “We really have to stop this nonsense.” We agree. The AEMC, using modelling from Frontier, makes errors on wind energy too, assuming that it costs $90/MWh, when the reality is that costs for the wind farms that will be developed are way below that. 
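For a sense of why the capacity-factor assumption dominates the result, here is a rough, simplified LCOE calculation. It uses a plain annuity formula with invented operating-cost and discount-rate inputs; it is not Frontier's or ARENA's methodology, just an illustration of the sensitivity.

    def simple_lcoe(capex_per_kw, capacity_factor, fixed_om_per_kw_yr=25.0,
                    discount_rate=0.07, lifetime_yrs=25):
        """Very simplified levelised cost of energy in $/MWh (no degradation, tax, fuel or financing detail)."""
        crf = (discount_rate * (1 + discount_rate) ** lifetime_yrs
               / ((1 + discount_rate) ** lifetime_yrs - 1))       # capital recovery factor (annuity)
        annual_cost = capex_per_kw * crf + fixed_om_per_kw_yr     # $ per kW per year
        annual_mwh_per_kw = capacity_factor * 8760 / 1000         # MWh generated per kW per year
        return annual_cost / annual_mwh_per_kw

    # Same $2,305/kW capital cost, two different capacity-factor assumptions (all other inputs invented)
    for cf in (0.22, 0.30):
        print(f"capacity factor {cf:.0%}: roughly ${simple_lcoe(2305, cf):.0f}/MWh")

With these made-up financing assumptions, moving from a 22 per cent to a 30 per cent capacity factor alone cuts the result by roughly a quarter, which is exactly the point the solar developers are making.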
The AEMC and Frontier, however, still assume that solar is nearly 50 per cent more expensive than wind – when anyone in the renewables industry will tell you that they are pretty close to parity, if not already there. The AEMC and Frontier, with their heads stuck firmly in their modelling, say “it is unlikely that further solar PV will be constructed to meet the LRET.” I imagine that the 200 people or so who turned up at a solar function in Sydney last week, and the countless developers, bankers, lawyers, solar suppliers and even utilities working on such projects, would see that as a statement of such breathtaking stupidity that it is hard to figure out how it is that the AEMC wishes to put its name to it, and put it forward to ministers as a serious piece of analysis. It’s also a surprise to Sun Brilliance, which plans to build its 100MW solar plant in WA early next year. “It’s remarkably out of touch,” says Sun Brilliance director Ray Wills. “We know some projects today are already below their 2040 predicted price!” “It’s important for the industry to look at the projects that are being contracted today,” adds Jack Curtis, from First Solar. “The unsubsidised solar projects in Australia are currently priced at around $80-$90/MWh. Assumptions that large-scale solar will only fall to $72-$80/MWh in 2040 don’t add up.” Wills suggests that with solar modules in 2016 already below 40c/W and with a current learning rate for solar at 26.3%, it’s likely solar modules will be below 20c/W in 2020. That could put the price of output of a new solar plant in 2020 below $A40/MWh. But it is also the tone of the AEMC report into electricity prices that is a problem. It seeks, on all occasions, to paint renewables in as bad a light as possible: they say it is the cause of the coal-fired stations retiring, it is the cause of retail prices going up, it is the cause of instability in the grid. There is little or no mention of the well documented surge in gas prices to record levels, or the bidding actions and price gouging by gas generators. Indeed, its gas price assumptions are seen by some as hopelessly optimistic, because they ignore supply constraints on the LNG gas projects in Queensland already struggling to source gas supplies, and the impact of a tight global LNG market, high net-back prices and the increasing costs of domestic production. The two reports produced by AEMC and Frontier are in such huge contrast to the review handed out in the same period by the CSIRO and the Energy Networks Australia and the separate one by the chief scientist Alan Finkel, that you actually wonder if they are talking about the same industry, or the same century. The CSIRO and ENA and the Finkel reports came to a similar conclusion: The world is changing, the technologies to deal with the variable output of wind and solar are available now, storage is here and will help change everything, and renewable-focused grids are going to be a much cheaper option than business as usual or focusing on coal and gas. (Interestingly, Frontier also did some modelling for investment bank CLSA, which claimed that – in direct contrast to what the network operators say can be done – local grids can only absorb 35-40 per cent renewables. The networks – and you’d think they would know – disagree: they laid out scenarios for near 100 per cent wind and solar.) 
(It also claims, hilariously, and also in direct contrast to the conclusions of the CSIRO, the networks, the chief scientist and just about everyone else, that “distributed energy” – solar and battery storage – is an expensive option, and twice the price of the grid. Recent analysis from Bruce Mountain shows solar and storage is already cheaper in some areas). The main theme of the CSIRO, ENA and Finkel reviews – apart from the extraordinary technology changes that are and will take place – is that it’s high time Australia got on with making the rule changes and policy changes that could allow this “unstoppable transition,” as Finkel described it, to happen as efficiently as possible. But that hasn’t happened because the AEMC has been part of the problem. It has refused to approve, or has delayed, critical rule changes that smarter and more informed people have been advocating for years. At nearly every turn, it has sided with the incumbents, and buried its conclusions in fossil fuel thinking. Little surprise, then, that at the recent COAG energy minister meetings, the focus has been on how to give the AEMC a kick up the backside and get it to move faster. Perhaps COAG should just give it a kick up the backside and out through the door, along with its modellers, and find people with a finer grip on reality, and with a focus on the future, not the past. Update: An ARENA spokesman later emailed RenewEconomy agreeing that the AEMC/Frontier estimates on large-scale solar were out of the ballpark. “The average levelised cost of energy (LCOE) for the 12 projects earmarked for funding through ARENA’s large-scale solar competitive round was $113/MWh in September 2016,” he said. “Panel prices have fallen by another 10% since September, which could result in further reductions in LCOE. “The most competitive Australian large-scale solar projects being built in 2017 are expected to be priced at around $95/MWh (LCOE), which is a significant drop from previously built projects such as Nyngan, Broken Hill, Moree and Barcaldine. “ARENA expects that pricing in Australia will follow global trajectories, which are forecast (by BNEF) to experience a further 35% drop in capital costs by 2025. Assuming a 35% reduction in capital costs of the best-in-class current projects, we predict large-scale solar costs of $65–$70/MWh (LCOE) by 2025.” That would be 30 per cent cheaper, and 15 years earlier, than AEMC/Frontier modelling.


News Article | December 19, 2016
Site: www.theguardian.com

As Australia settles in for another long hot summer, the demand for air-conditioning is set to surge. In fact, with the World Meteorological Organisation stating that 2016 is likely to be the hottest year on record, it’s no surprise an estimated 1.6bn new air conditioners are likely to be installed globally by 2050. Powering all these units will be a challenge, especially on summer’s hottest days. In Australia, peak demand days can drive electricity usage to almost double and upgrading infrastructure to meet the increased demand can cost more than four times what each additional air-conditioning unit costs. Yet an emerging sector of the solar industry is turning the searing heat of summer into cooling by using solar heat or electricity. For those developing the technology, the benefits of solar cooling are obvious: the days when cooling is needed the most are also the days when solar works best. When combined with a building’s hot water and heating systems – which together with cooling account for around half of the global energy consumption in buildings – solar cooling can drastically reduce reliance on grid energy and improve a building’s sustainability credentials. According to the International Energy Agency, solar could cover almost 17% of global cooling needs by 2050. Currently, such systems are still the exception. “It hasn’t got into the mainstream yet,” says Ken Guthrie, who chairs the International Energy Agency’s Solar Heating and Cooling Program. Nevertheless, several solar cooling technologies are making their way to market. While off-the-shelf systems for most are still years away, a handful of businesses have already opted for purpose-designed solar cooling systems, which experts hope will convince others to follow their lead. Echuca regional hospital in rural Victoria was one of the first to take the leap into solar cooling. In 2010, with support from Sustainability Victoria, the hospital designed and installed a solar heat–driven absorption chiller with engineering firm WSP consultants. A 300 sq m roof-mounted evacuated tube solar field feeds hot water to a 500 kW chiller that was set to save the hospital $60,000 on energy bills and reduce greenhouse gas emissions by around 1,400 tonnes of carbon dioxide equivalent per year. The system was not designed to run entirely off solar (a gas-fired boiler takes up the slack on hot days), but “we have had days where we run 100% solar” for both cooling and hot water, says Echuca regional health executive project manager Mark Hooper. The benefits of solar were clear enough that a larger 1,500 kW chiller, connected to a field of trough-shaped solar collectors that track the sun during the day, was installed during the hospital’s recent expansion and redevelopment. This second chiller started operating in November and an analysis of the resulting energy and emissions savings will be assessed in conjunction with CSIRO. Meanwhile, Stockland Wendouree shopping centre in Ballarat, Victoria, is trialling a CSIRO-designed solar cooling system with funding from the Australian Renewable Energy Agency (Arena). Trough-shaped metal collectors on the centre’s rooftop collect solar heat that is used to dry out a desiccant matrix (much like the silica gel sachets in your shoebox) that dehumidifies air brought in from outside. The hot, dry air is then directed to an indirect evaporative cooler, which delivers cool, dry air into the shopping centre. 
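For a feel of the numbers involved in the evaporative stage, a generic wet-bulb effectiveness model gives a rough estimate of the achievable supply temperature. This is a textbook simplification with invented figures, not the CSIRO design.

    def evap_stage_outlet(t_dry_in_c, t_wetbulb_in_c, effectiveness=0.8):
        """Generic wet-bulb effectiveness model for an evaporative cooling stage (textbook simplification)."""
        return t_dry_in_c - effectiveness * (t_dry_in_c - t_wetbulb_in_c)

    # Invented conditions: desiccant-dried air at 38 C with a 20 C wet-bulb temperature
    print(evap_stage_outlet(38.0, 20.0))   # about 23.6 C supply air

The drier the incoming air (the desiccant's job), the lower its wet-bulb temperature and the more cooling the evaporative stage can deliver, which is why the two steps work together.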
The yearlong trial is still under way and hasn’t yet seen a full summer to calculate energy savings, but “it’s going very well,” says CSIRO’s Stephen White. The system is 50% more efficient than an earlier iteration of the design – an important improvement given many buildings don’t have the sprawling rooftop spaces of a shopping centre to mount large solar collector arrays. With photovoltaic cells more affordable than ever, cooling systems that run off solar electricity are already commercially available. But solar thermal systems could still find a place in the market, according to Guthrie, especially for larger commercial buildings. “There’s no single solution,” he says. Like any solar technology, solar cooling doesn’t work 24/7. Storing the solar energy collected during the day for use overnight is possible. Stockland’s system uses thermal oil storage, for example, and Echuca regional hospital has insulated its firewater tanks to store chilled water. But there are also efforts to store heat or cooling from one season to the next using underground storage tanks. Whichever systems a building adopts, White says the benefits of solar cooling extend beyond electricity savings. “It’s not just about the cents per kilowatt hour avoided, but it’s also about the value of the asset itself,” he says. For Hooper, the motivation was even simpler: “We did it to ensure that our children have a future.”


News Article | January 13, 2016
Site: www.nature.com

In 2007 and 2008 we undertook three (T1–T3) 1 m × 2 m test excavations at Talepu Hill, where large numbers of stone artefacts were found scattered on the surface with loose gravel. The summit of Talepu Hill (4° 22′ 06.5′′ S; 119° 59′ 01.7′′ E) lies 36 m above sea level and 18 m above the floodplain of the Walanae River, which flows 600 m to the east (Extended Data Fig. 1). Geological outcrop conditions are very poor, and thick tropical soils cover the underlying geological formations. The three test excavations near the summit of Talepu Hill proved the occurrence of in situ stone artefacts down to a depth of at least 1.8 m, in heavily weathered conglomerate lenses and sandy silt layers. The same gravel unit occurs on other hilltops to the west and southwest. At Bulu Palece, 850 m west of Talepu Hill, which is the highest hilltop in the vicinity with an elevation of 51 m (see Extended Data Fig. 2), the gravel is at least 13 m thick, but at Talepu Hill only a basal interval of 4.3 m thickness remains. In October 2009, T2 was taken down to 7 m below surface (Extended Data Fig. 2b), at which depth the excavation area was reduced to a 1 m × 1 m square and taken down further to a maximum depth of 10 m. To ensure that this deep-trench operation was undertaken safely, we installed timber shoring as the work progressed (Extended Data Fig. 2c). A new east–west oriented, 1 m × 9 m trench (T4) was excavated at the base of the Talepu Hill, 40 m east of T2. This trench reached a maximum depth of 2 m, revealing the lateral development of the stratigraphy near the base of the hill (Fig. 2 and Extended Data Fig. 2d). Deposits were removed in 10 cm spits within stratigraphic units. Stone artefacts and fossils found by the excavators were bagged and labelled immediately; all other deposits were dry sieved with 5 mm mesh to separate out clasts, including stone artefacts. Pebbles from each spit were weighed; and composition analysis was undertaken on clasts from a representative sample from six spits: average maximum clast diameter was recorded by measuring the longest diameter of the ten largest clasts per spit (Extended Data Fig. 3). Bulk samples of stratigraphic units were taken for sediment and pollen analyses. In October 2010, the excavations at Talepu were continued. A 1 m × 2 m area at the east end of T4 was excavated to a depth of 6.20 m below the surface, thus providing an additional 6 m stratigraphically below the section covered by excavation 2 in 2009. The T4 deposits were removed in 20 cm spits within stratigraphic units. After the excavation of an in situ stone artefact (specimen S-TLP10-1, a flake from sub-unit E at a depth of 2.38 m below the surface) and fossils of Celebochoerus, it was decided to wet-sieve all the excavated sediments with 3 mm mesh to separate out stones and other clasts, including stone artefacts. Wet-sieving of the silty clay deposits from the interval between 2 and 2.4 m depth yielded one more stone artefact (S-TLP10-2; Fig. 3m) and two possible stone artefacts (S-TLP10-3 and S-TLP10-4). Magnetic susceptibility measurements were taken from the excavation profile at 1 cm intervals with a Bartington MS-2 device, to examine the presence of cryptic tephra layers suitable for dating. A sample for 40Ar/39Ar dating was taken at 2.5 m below the ground surface from an interval with elevated magnetic susceptibility values. In October 2012, backfill of T2 and T4 was removed. 
T4 was enlarged with a 1 m × 2 m extension (T4-B), and both T2 and T4/4-B were taken further down with an additional 2 m and 2.1 m, respectively, to allow for sampling for palaeomagnetic and optical dating methods. In T4-B two more stone artefacts (Fig. 3j–k), originating from spit 31 (depth 3.0–3.1 m depth below ground level), were recovered on the sieves. Stone artefacts were analysed following the definitions and methods in ref. 31. The analysis focused on stone-flaking techniques, sequences of reduction, and sizes of stone-flaking products and by-products (Supplementary Tables 1 and 2). The stone artefacts are stored at the Geology Museum in Bandung. The details for laser ablation uranium-series analysis of skeletal materials were recently summarized14. Uranium-series analyses provide insights into when uranium migrates into a bone or tooth. This may happen a short time after the burial of the skeletal element or some significant time span later. There may also be later uranium-overprints that are difficult to recognize. As such, apparent uranium-series results from faunal remains have generally to be regarded as minimum age estimates. It is very difficult or impossible to evaluate by how much the uranium-series results underestimate the correct age of the sample. Details of the instrumentation, analytical procedures and data evaluation have been modified from those described in detail elsewhere14, 32. All isotope ratios refer to activity ratios. Sequential laser spot analyses were undertaken on cross sections of eight Celebochoerus fossils from the T4 excavation at Talepu. They comprised fragments of six teeth and two bones from sub-unit E found 10–50 cm below the lowest stone artefacts in the same silt layer. Of one fossil (TLP10-1, a Celebochoerus lower canine), two subsamples were analysed (a and b). Each fossil specimen was cut transversely using a dentist drill with a diamond saw blade (Extended Data Fig. 4). Four or five samples were then mounted together into aluminium cups, aligning the cross-sections with the outer rim of the sample holder, which later positioned the samples on the focal plane of the laser. Uranium-series isotopes were measured using the laser ablation multicollector (MC)-ICP-MS system at The Australian National University’s (ANU) Research School of Earth Sciences. It consists of a Finnigan MAT Neptune MC-ICP-MS equipped with multiple Faraday cups. At the time of measurement, the mass spectrometer had only one ion counter. This necessitated two sequential sets of measurements along parallel tracks, one for 230Th and a second for 234U. The ion counter was set either to masses 230.1 or 234.1 while the Faraday cups measured the masses 232, 235 and 238. Samples were ablated with a Lambda Physik LPFPro ArF excimer (λ = 193 nm) laser coupled to the Neptune through an ANU-designed Helex ablation cell. The samples were initially cleaned for 10 s with the laser spot size set to 265 μm followed by a 50 s analysis run with a 205 μm spot size using a 5 Hz pulse rate. Analyses were performed at regular intervals along traverses, all starting from the exterior surface (Extended Data Fig. 4a–i). The data sets of each transect were bracketed between reference standard analyses to correct for instrument drift. 
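For readers unfamiliar with uranium-series dating, the sketch below shows how measured activity ratios map to an age in the simple textbook closed-system case. The authors used the iDAD diffusion model for the actual age estimates reported below; the decay constants here are standard published values and the ratios are purely illustrative.

    import numpy as np
    from scipy.optimize import brentq

    # Decay constants (1/yr) from the commonly used half-lives of 230Th (~75.6 kyr) and 234U (~245.6 kyr)
    LAM230 = np.log(2) / 75_584.0
    LAM234 = np.log(2) / 245_620.0

    def predicted_th230_u238(t_yr, u234_u238_measured):
        """Closed-system (230Th/238U) activity ratio expected after t_yr years of ingrowth."""
        grow_in = 1.0 - np.exp(-LAM230 * t_yr)
        excess = ((u234_u238_measured - 1.0) * LAM230 / (LAM230 - LAM234)
                  * (1.0 - np.exp(-(LAM230 - LAM234) * t_yr)))
        return grow_in + excess

    def closed_system_age(th230_u238_measured, u234_u238_measured):
        """Age (years) at which the predicted ratio matches the measured one."""
        return brentq(lambda t: predicted_th230_u238(t, u234_u238_measured) - th230_u238_measured,
                      1.0, 1_000_000.0)

    # Illustrative activity ratios only, not measured Talepu values
    print(closed_system_age(0.85, 1.10))

Because uranium can enter a bone or tooth well after burial, ages obtained this way (or via the diffusion model) are treated as minimum estimates, as the text emphasises.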
Semi-quantitative analyses of uranium and thorium concentrations were derived from repeated measurements of the SRM NIST-610 glass (uranium = 461.5 μg g−1; thorium = 457.2 μg g−1), and uranium-isotope ratios from repeated measurements of rhinoceros tooth dentine from Hexian (sample 1118)33. Age estimates combining all measurements on a specimen were calculated using the iDAD program15, assuming diffusion from both surfaces for the bones (TLP10-6 and 7) and roots of the teeth (Extended Data Fig. 4a–f, h, i) and directional diffusion from the central pulp cavity into the dentine and covering enamel for TLP10-9 (Extended Data Fig. 4g). The enamel data were omitted, as enamel has a different diffusion rate. Generally, results with elemental U/Th <300 are rejected, as these are associated with detrital contamination. However, this applied only to a single measurement. The finite ages are given with 2σ error bands; the infinite results only refer to the lower bound of the 2σ confidence interval (Supplementary Table 3). None of the samples showed any indication of uranium leaching, which is either expressed by sections with 230Th/234U >> 234U/238U or increasing 230Th/234U ratios towards the surface in conjunction with decreasing uranium concentrations. Five samples had infinite positive error bounds and it was thus only possible to calculate minimum ages. It can be seen that the uranium-series results may change over small distances within a sample. The first data set of TLP10-1 yielded a finite result of 161 ± 15 kyr while the second set yielded a minimum age of >255 kyr. As mentioned above, all uranium-series results, whether they are finite or infinite, have to be regarded as minimum age estimates. If the faunal elements represent a single population, the uranium-series results indicate that the Talepu samples are most probably older than ~350 kyr, but certainly older than ~200 kyr (Supplementary Table 3). The large errors do not allow us to further constrain the age. Optical dating provides an estimate of the time since grains of quartz or potassium-rich feldspar were last exposed to sunlight34, 35, 36, 37. The burial age is estimated by dividing the equivalent dose (De, a measure of the radiation energy absorbed by grains during their period of burial) by the environmental dose rate (the rate of supply of ionizing radiation to the grains over the same period). De is determined from the laboratory measurements of the optically stimulated luminescence (OSL) from quartz or the infrared stimulated luminescence (IRSL) from potassium (K)-feldspar, and the dose rate is estimated from laboratory and field measurements of the environmental radioactivity. K-feldspar has two advantages over quartz for optical dating: (1) the IRSL signal (per unit absorbed dose) is usually much brighter than the OSL signal from quartz; and (2) the IRSL traps saturate at a much higher dose than do the OSL traps, which makes it possible to date older samples using feldspars than is feasible using the OSL signal from quartz. However, the routine dating of K-feldspars using the IRSL signal has been hampered by the malign phenomenon of ‘anomalous fading’ (that is, the leakage of electrons from IRSL traps at a faster rate than expected from kinetic considerations38), which gives rise to substantial underestimates of age unless an appropriate correction is made39.
Recently, IRSL traps that are less prone to fading have been identified40, using either a post-infrared IRSL (pIRIR) approach41, 42 or a MET-pIRIR procedure16, 43. The progress, potential and remaining problems in using these pIRIR signals for dating have been reviewed recently17. Dating the samples from Talepu using quartz OSL is impractical because of the paucity of quartz. Furthermore, the quartz OSL traps are expected to be in saturation, owing to the ages of the samples (>100 kyr) and the high environmental dose rates of the deposits (4–5 Gy/kyr). In this study, we applied the MET-pIRIR procedure to K-feldspar extracts from Talepu to isolate the light-sensitive IRSL signal that is least prone to anomalous fading. We also allowed for any residual dose at the time of sediment deposition, to account for the fact that pIRIR traps are less easily bleached than the ‘fast’ component OSL traps in quartz. The resulting MET-pIRIR ages should, therefore, be reliable estimates of the time of sediment deposition at Talepu. The total environmental dose rate for K-feldspar grains consists of four components: the external gamma, beta and cosmic-ray dose rates, and the internal beta dose rate. The dosimetry data for all samples are summarized in Supplementary Table 4. The external gamma dose rates were measured using an Exploranium GR-320 portable gamma-ray spectrometer, equipped with a 3-inch diameter NaI(Tl) crystal calibrated for uranium, thorium and potassium concentrations using the CSIRO facility at North Ryde44. At each sample location, three or four measurements of 300 s duration were made of the gamma dose rate at field water content. The external beta dose rate was measured by low-level beta counting using a Risø GM-25-5 multicounter system45 and referenced to the Nussloch Loess (Nussi) standard46. The external beta dose rate was corrected for the effect of grain size and hydrofluoric acid etching on beta-dose attenuation. These external components of the total dose rate were adjusted for assumed long-term water contents of 20% for the Talepu Upper Trench (TUT = T2) samples and 30% for the Talepu Lower Trench (TLT = T4) sample (TUT and TLT sample numbers refer to the Centre for Archaeological Science laboratory numbers). These values are based on the measured field water contents (Supplementary Table 4), together with an assigned 1σ uncertainty of ±5% to capture the likely range of time-averaged mean values over the entire period of sample burial. To check the equilibrium status of the 238U and 232Th decay chains, each sample was dried, ground to a fine powder and then analysed by high-resolution gamma-ray spectrometry (HRGS). The measured activities of 238U, 226Ra and 210Pb in the 238U series, 228Ra and 228Th in the 232Th series, and 40K are listed in Supplementary Table 5. The activities of 228Ra and 228Th were close to equilibrium for all of the samples, as is commonly the case with the 232Th series. By contrast, the 238U chain of each sample, except TUT-OSL9, was in disequilibrium at the present day. Sample TUT-OSL2 had a 39–45% deficit of 226Ra and 210Pb relative to the parental 238U activity, whereas sample TUT-OSL3 had a 224–345% excess of the daughter nuclides. Samples TUT-OSL1 and TLT-OSL6 had 226Ra deficits of 50% and 26%, respectively, relative to their 238U activities, but the 210Pb activities of both samples were similar to their parental 238U activities. Sample TUT-OSL3 was the only sample with a present-day excess of 226Ra. 
This sample was from a sandy layer (unit B) through which ground water could percolate, so we attributed the observed 226Ra excess to the deposition of radium transported by ground water. Given the similar 238U activities of TUT-OSL3 and nearby TUT-OSL2, it is reasonable to assume that the parental uranium activity had not changed substantially during the period of burial of either sample, and that the 226Ra excess in TUT-OSL3 most probably occurred recently. The latter can be deduced from the fact that 226Ra has a half-life of ~1,600 years, which is short relative to the ages of our samples (>100 kyr), so any unsupported excess of 226Ra would have decayed back into equilibrium with 238U within ~8 kyr of deposition (that is, five half-lives of 226Ra). The alternative option—that groundwater has continuously supplied excess 226Ra to unit B—is not supported by the disequilibrium between 226Ra and 210Pb: the latter nuclide has a half-life of ~22 years, so it should remain in equilibrium with 226Ra if the latter is supplied continuously and no radon gas is lost to atmosphere. Moreover, as the return of 210Pb to equilibrium with 226Ra is governed by the half-life of the shorter-lived nuclide, it could be argued that the excess 226Ra was deposited within the past ~110 years (five half-lives of 210Pb). Fortunately, the calculated age of TUT-OSL3 is not especially sensitive to different assumptions about the timing or extent of disequilibria in the 238U series. The latter accounts for only 28% of the total dose rate estimated from the HRGS data in Supplementary Table 5; this assumes that the present-day nuclide activities have prevailed throughout the period of sample burial. If, instead, as we consider more likely, the observed excess in 226Ra was deposited recently and the 238U decay chain had been in equilibrium for almost all of the period of sample burial, then the 238U series accounted for only 12% of the total dose rate (that is, using activities of 37 ± 4 Bq kg−1 for 238U, 226Ra and 210Pb). The ages calculated under these two alternative scenarios, using only the HRGS data for estimating external beta and gamma dose rates, range from ~118 kyr to ~143 kyr (Supplementary Table 5). Sample TUT-OSL2 was from the more silty overlying layer (sub-unit A ) and had deficits of 226Ra and 210Pb relative to 238U, but these disequilibria were much smaller in magnitude than those of TUT-OSL3. If it were not continuously leached from the sample, 226Ra will return to secular equilibrium with 238U within ~8 kyr, so the existence of disequilibrium in TUT-OSL2 adds further weight to the argument for recent transport of 226Ra in ground water at Talepu. The alternative is that 226Ra has been leached continuously from this sample, so we performed the same sensitivity test on the dose rates and ages as that performed on TUT-OSL3. For TUT-OSL2, the ages determined using the present-day HRGS data or activities of 41 ± 3 Bq kg−1 for 238U, 226Ra and 210Pb are statistically indistinguishable (130 ± 12 and 125 ± 11 kyr, respectively; Supplementary Table 5), because the disequilibria are much less marked than in TUT-OSL3 and the 238U series makes only a small contribution (10–14%) to the total dose rate of TUT-OSL2. Samples TUT-OSL1 and TLT-OSL6 had deficits of 226Ra relative to 238U, but similar activities of 238U and 210Pb. 
The latter additionally strengthens our proposition that 226Ra was leached from these sediments recently, because 210Pb should return to a state of equilibrium with 226Ra within ~110 years (five half-lives of 210Pb). For both samples, the ages calculated using the present-day HRGS data were statistically concordant with those estimated by assuming that the 238U chain had been in secular equilibrium for almost the entire period of sample burial (Supplementary Table 5). The same applies to sample TUT-OSL9, since the measured activities of 238U, 226Ra and 210Pb were consistent at 1σ. To calculate the ages of the Talepu samples, we used the beta dose rates deduced from direct beta counting and the in situ gamma dose rates measured at each sample location. The external beta dose rates determined from beta counting and from the HRGS data (Supplementary Table 5) were statistically consistent (at 2σ) for all five samples; such agreement is expected, as both measure the present-day activities. The field gamma dose rates are also based on the nuclide activities prevailing at the time of measurement (214Bi, a short-lived nuclide between 226Ra and 210Pb, being used for the 238U series) and—importantly—take into account any spatial heterogeneity in dose rate from the ~30 cm of deposit surrounding each sample. The in situ gamma dose rates for samples TUT-OSL1 and TLT-OSL6 were consistent at 1σ with those estimated from the HRGS activities, whereas the field gamma dose rates for TUT-OSL2, -OSL3 and -OSL9 were either higher or lower than those calculated from the HRGS data. The lower in situ gamma dose rate of TUT-OSL3 can be explained by the location of this sample close to the boundary with the TUT-OSL2 sediments, which have a smaller beta dose rate (Supplementary Table 4), and vice versa for the elevated field gamma dose rate of the latter sample. This result also indicates that the 226Ra and 210Pb deficits (TUT-OSL2) and excesses (TUT-OSL3) were spatially localized and not pervasive in the 30 cm of deposit surrounding these samples. Under dim red laboratory illumination, the collected samples (see Methods) were treated with hydrochloric acid and hydrogen peroxide solutions to remove carbonates and organic matter, then dried. Grains of 90–180 or 180–212 μm in diameter were obtained by dry sieving. The K-feldspar grains were separated from quartz and heavy minerals using a sodium polytungstate solution of density 2.58 g cm−3, and etched in 10% hydrofluoric acid for 40 min to clean the surfaces of the grains and remove (or greatly reduce in volume) the external alpha-irradiated layer of each grain. For each sample, 8–14 aliquots were prepared by mounting grains as a 5-mm-diameter monolayer in the centre of a 9.8-mm-diameter stainless steel disc, using ‘Silkospray’ silicone oil as the adhesive. This resulted in each aliquot consisting of several hundred K-feldspar grains. The single-aliquot regenerative-dose (SAR) MET-pIRIR procedure introduced in ref. 16 was adapted for the Talepu samples in this study. We modified the original procedure by using a preheat at 320 °C (rather than 300 °C) for 60 s, to avoid significant influence from residual phosphorescence while recording the MET-pIRIR signal at 250 °C (Supplementary Table 6). In addition, following ref. 47, we used a 2 h solar simulator bleach before each regenerative dose cycle, instead of the high-temperature infrared bleaching step used originally, as this proved essential for recovering a given laboratory dose (see below). 
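To make the burial-age arithmetic concrete, here is a minimal sketch of the calculation described above: the equivalent dose, less any residual dose, divided by the summed dose-rate components. All numbers are illustrative magnitudes only, not values for any particular Talepu sample.

    def luminescence_age_kyr(de_gy, dose_rates_gy_per_kyr, residual_dose_gy=0.0):
        """Burial age (kyr): (equivalent dose minus residual dose) divided by the total environmental dose rate."""
        total_dose_rate = sum(dose_rates_gy_per_kyr)   # gamma + beta + cosmic + internal beta, in Gy/kyr
        return (de_gy - residual_dose_gy) / total_dose_rate

    # Illustrative magnitudes only (the text quotes total dose rates of 4-5 Gy/kyr and De values of hundreds of Gy)
    age = luminescence_age_kyr(550.0, [1.6, 2.3, 0.15, 0.7], residual_dose_gy=18.0)
    print(round(age, 1))   # about 112 kyr with these invented inputs

The sensitivity tests described above amount to re-running this division with alternative dose-rate estimates and checking how much the resulting age changes.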
Example IRSL (50 °C) and MET-pIRIR (100–250 °C) decay curves are shown in Extended Data Fig. 6a for an aliquot of sample TUT-OSL2. The decay curves observed at the different stimulation temperatures are similar in shape, with initial MET-pIRIR signal intensities of the order of a few thousand counts per second. Extended Data Fig. 6b shows the corresponding dose–response curves for the same aliquot. Each sensitivity-corrected (Lx/Tx) dose–response curve was fitted using a single saturating-exponential function of the form I = I0[1 − exp(−D/D0)], where I is the Lx/Tx value at regenerative dose D, I0 is the saturation value of the exponential curve and D0 is the characteristic saturation dose. The D0 values are shown next to each dose–response curve in Extended Data Fig. 6b. For a total of 38 aliquots drawn from all five samples, we calculated the D0 values for the 250 °C MET-pIRIR signal; these are plotted in Extended Data Fig. 6c. On a ‘radial plot’ such as this, the most precise estimates fall to the right and the least precise to the left. If these independent estimates are statistically consistent with a common value at 2σ, then 95% of the points should scatter within a band of width ±2 units projecting from the left-hand (‘standardized estimate’) axis to the common value on the right-hand, radial axis. The radial plot thus provides simultaneous information about the spread, precision and statistical consistency of experimental data48, 49, 50. The measured D0 values range from ~220 to ~600 Gy, with the vast majority consistent at 2σ with a common value of ~360 Gy. The average D0 value (calculated using the central age model49) is 358 ± 14 Gy, with the standard error taking the extent of overdispersion (16 ± 4%) into account. If we adopt the doses corresponding to 90% (2.3D0) and 95% (3D0) of the saturation level of the typical dose–response curve as the upper limits for reliable estimation of the equivalent dose, De (refs 43, 47, 51, 37), then the maximum reliable De values that we can determine using the 250 °C MET-pIRIR signal are ~820 Gy and ~1070 Gy, respectively, for these samples.

To validate whether the MET-pIRIR procedure is applicable to the Talepu samples, we conducted dose recovery, anomalous fading and residual dose tests. For the latter, four aliquots of each sample were bleached for 4–5 h using a Dr Hönle solar simulator (model UVACUBE 400). The residual doses were then estimated by measuring these bleached aliquots using the modified MET-pIRIR procedure (Supplementary Table 6). The residual doses obtained for each of the TUT samples are plotted against stimulation temperature in Extended Data Fig. 7a. The IRSL signal measured at 50 °C has a residual dose of a few grays, which increases as the stimulation temperature is raised, attaining values of 16–20 Gy at 250 °C. The size of the residual dose is only about 2–3% of the corresponding De values for the 250 °C signal, and these residual doses were subtracted from the De values of the respective samples before calculating their ages. It was noted in ref. 52 that a simple subtraction of the residual dose from the apparent De value could result in underestimation of the true De value if the residual signal is large relative to the bleachable signal. Accordingly, it advocated the use of an ‘intensity-subtraction’ procedure instead of the simple ‘dose-subtraction’ approach for samples with large residual doses.
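The dose–response fitting described above can be reproduced in a few lines of Python. In this sketch the regenerative doses and Lx/Tx values are invented for illustration, and scipy is used to estimate I0 and D0 and to report the 2.3D0 and 3D0 thresholds (roughly 90% and 95% of saturation).

    import numpy as np
    from scipy.optimize import curve_fit

    # Minimal sketch: fit I = I0 * (1 - exp(-D/D0)) to illustrative Lx/Tx data.
    def dose_response(D, I0, D0):
        return I0 * (1.0 - np.exp(-D / D0))

    doses = np.array([0.0, 100.0, 200.0, 400.0, 800.0, 1200.0])   # Gy (hypothetical)
    lx_tx = np.array([0.02, 0.95, 1.65, 2.55, 3.30, 3.55])        # hypothetical values

    (I0_fit, D0_fit), _ = curve_fit(dose_response, doses, lx_tx, p0=(4.0, 300.0))
    print(f"D0 = {D0_fit:.0f} Gy")
    print(f"2.3*D0 (~90% of saturation) = {2.3 * D0_fit:.0f} Gy")
    print(f"3.0*D0 (~95% of saturation) = {3.0 * D0_fit:.0f} Gy")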
For the Talepu samples, however, the simple dose-subtraction approach should be satisfactory, given the small size of the residual doses compared with the De values obtained from the MET-pIRIR 250 °C signal. A dose recovery test49 was conducted on sample TUT-OSL1. Eight aliquots were bleached by the solar simulator for 5 h, then given a ‘surrogate natural’ dose of 550 Gy. Four of these aliquots were measured using the original MET-pIRIR procedure16, with a ‘hot’ infrared bleach at 320 °C for 100 s applied at the end of each SAR cycle (step 15 in Supplementary Table 6). The other four aliquots were measured using the modified MET-pIRIR procedure (Supplementary Table 6), with a solar simulator bleach of 2 h used at step 15. The measured doses at each stimulation temperature were then corrected for the corresponding residual doses (Extended Data Fig. 7a), and the ratios of measured dose to given dose were calculated for the IRSL and MET-pIRIR signals. The dose recovery ratios are plotted in Extended Data Fig. 7b, which shows that a hot bleach at the end of each SAR cycle results in significant overestimation of the known (given) dose; for the MET-pIRIR 250 °C signal, an overestimation of 48% was observed. For these same four aliquots, we obtained a ‘recycling ratio’ (the ratio of the Lx/Tx signals for two duplicate regenerative doses) consistent with unity (1.00 ± 0.03), which indicates that the test-dose sensitivity correction worked successfully between regenerative-dose cycles. The overestimation in recovered dose, therefore, implies failure of the sensitivity correction for the surrogate natural dose: that is, the extent of sensitivity change between measurement of the surrogate natural dose and its corresponding test dose differs from the changes occurring in the subsequent regenerative-dose cycles. The surrogate natural and regenerative-dose cycles differ only with respect to the preceding bleaching treatment (that is, a solar simulator bleach was used for the former and a hot bleach for the latter), so we compared these results with those obtained for the four aliquots that were bleached at the end of each regenerative-dose cycle using the solar simulator. The dose recovery results improved significantly using this modified procedure (Extended Data Fig. 7): all of the measured/given dose ratios were consistent with unity (at 2σ) for the signals measured at different temperatures, with a ratio of 1.02 ± 0.03 obtained for the MET-pIRIR 250 °C signal. The results of the dose recovery test on sample TUT-OSL1 suggest that the MET-pIRIR procedure could successfully recover a known dose given to K-feldspars from Talepu, but only when a solar simulator bleach was applied at the end of each SAR cycle. We therefore adopted this procedure to measure the De values for all five Talepu samples.

Previous studies of pIRIR signals have shown that the anomalous fading rate (g value) depends on the stimulation temperature, with negligible fading of MET-pIRIR signals stimulated at temperatures of 200 °C and above16, 17. Accordingly, no fading correction is required for these high-temperature MET-pIRIR signals. To check that this finding also applied to the Talepu samples, fading tests were conducted on six aliquots of sample TUT-OSL3 that had already been used for De measurements. We adopted a single-aliquot procedure similar to that described in ref. 53, but based on the MET-pIRIR signals.
Doses of 110 Gy were administered using the laboratory beta source, and the irradiated aliquots were then preheated and stored for periods of up to 1 week at room temperature (~20 °C). For practical reasons, we used a hot bleach (320 °C for 100 s) instead of a solar simulator bleach at the end of each SAR cycle, but this choice should not have affected the outcome of the fading test, given the aforementioned recycling ratio of unity obtained using the hot bleach. Extended Data Fig. 7c shows the decay in the sensitivity-corrected MET-pIRIR signal as a function of storage time for these six aliquots, normalized to the time of prompt measurement (which ranged from 720 s for the 50 °C IRSL signal to 1,480 s for the 250 °C MET-pIRIR signal). The corresponding fading rates (g values) were calculated for the IRSL and MET-pIRIR signals (Extended Data Fig. 7d). The highest fading rate was observed for the 50 °C IRSL signal (5.5 ± 0.4% per decade); the rate decreases as the stimulation temperature is increased, falling to 0.94 ± 0.92 and 0.17 ± 1.13% per decade for the 200 and 250 °C signals, respectively. The latter g value is consistent with zero at 1σ, so we used the De value obtained from the 250 °C signal to date each of the samples. We note, however, that the g values for the 200 and 250 °C signals have large uncertainties, owing to the difficulty in obtaining precise estimates at low fading rates, so our data do not exclude the possibility that the high-temperature signals may fade slightly.

On the basis of the results of the performance tests described above, the MET-pIRIR procedure in Supplementary Table 6 was used to estimate the De values for all four TUT samples, as well as one sample (TLT-OSL6) collected from near the base of the stratigraphically underlying deposits in the TLT. The De estimates obtained for the TUT samples using the MET-pIRIR 250 °C signal are shown in Extended Data Fig. 8. Most of the estimates are distributed around a central value, although the spread is larger than can be explained by the measurement uncertainties alone. The overdispersion among these De values is ~20% for three of the TUT samples and almost twice this amount for TUT-OSL9, the latter arising from a pair of low De values measured with relatively high precision. To estimate the age of each of these samples, we determined the weighted mean De of the individual single-aliquot values using the central age model49, which takes account of the measured overdispersion in the associated standard error. As a further test of the reliability of our De estimates for the TUT samples, we have plotted the central age model estimates of De as a function of stimulation temperature in Extended Data Fig. 9a. These plots show that the De values increase with stimulation temperature until a ‘plateau’ is reached at higher temperatures for each of the TUT samples; the plateau region (marked by the dashed line) indicates that a non-fading component is present at these elevated temperatures. The existence of a plateau can be used, therefore, as an internal, diagnostic tool to confirm that a stable, non-fading component has been isolated for age determination. For all four TUT samples, a plateau is reached at temperatures of 200 °C and above, from which we infer negligible fading of the MET-pIRIR 250 °C signal. We calculated the sample ages, therefore, using the De values obtained from the 250 °C signal. The corresponding weighted mean De values, dose rate data and final ages are listed in Supplementary Table 4.
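The following short Python sketch shows how a final age and its uncertainty follow from a weighted-mean De and a total dose rate, with relative errors added in quadrature. The inputs are hypothetical round numbers of the right order of magnitude, not the values reported in Supplementary Table 4.

    import math

    # Minimal sketch: burial age = equivalent dose / total dose rate,
    # with a simple quadrature propagation of the two uncertainties.
    def burial_age(de_gy, de_err_gy, dose_rate_gy_per_kyr, dose_rate_err):
        age_kyr = de_gy / dose_rate_gy_per_kyr
        rel_err = math.sqrt((de_err_gy / de_gy) ** 2 +
                            (dose_rate_err / dose_rate_gy_per_kyr) ** 2)
        return age_kyr, age_kyr * rel_err

    age, err = burial_age(360.0, 14.0, 3.0, 0.2)   # hypothetical inputs
    print(f"age = {age:.0f} +/- {err:.0f} kyr")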
For sample TLT-OSL6 from the TLT, four of the eight aliquots measured emitted natural MET-pIRIR 250 °C signals consistent with the saturation levels of the corresponding dose–response curves (for example, Extended Data Fig. 9b). This implies that the IRSL traps were saturated in the natural sample, which further supports our conclusion that the MET-pIRIR 250 °C signal had a negligible fading rate. It would be hazardous to estimate the age of sample TLT-OSL6 from the De values of the four non-saturated aliquots, as these may represent only the low De values in the ‘tail’ of a truncated distribution. If we adopt the average 2.3D0 value for the MET-pIRIR 250 °C signal of all five Talepu samples (~820 Gy) as an upper limit for reliable De estimation, then this corresponds to a minimum age of ~195 kyr for sample TLT-OSL6 (Supplementary Table 4). The MET-pIRIR 250 °C ages for the four samples dated from the TUT (=T2) are in correct stratigraphic order, increasing from 103 ± 9 kyr (at ~3 m depth) to 156 ± 19 kyr (at ~10 m depth). They thus span the period from marine isotope stage 6 (the penultimate glacial) to marine isotope stage 5 (the last interglacial). This coherent sequence of ages also supports our contention that the Talepu samples were sufficiently bleached before deposition. The sample analysed from ~8 m depth in the TLT (=T4; sample TLT-OSL6) yielded a minimum age of ~195 kyr, corresponding to marine isotope stage 7 (the penultimate interglacial) or earlier. We have not yet dated the other sediments exposed in the TLT, but expect that the 6 m of deposit immediately overlying TLT-OSL6 will be older than 156 ± 19 kyr, as these sediments stratigraphically underlie sample TUT-OSL9 in the TUT. We interpret the ages for the TUT samples as true (finite) depositional ages, based on the existence of De plateaux (Extended Data Fig. 9a) and the increase in De with depth (that is, ordered stratigraphically). This is the most parsimonious reading of our data. The measured fading rate of 0.17 ± 1.13% per decade for sample TUT-OSL3 allows for the possibility, however, that the MET-pIRIR 250 °C signal may still fade slightly and that our samples had reached an equilibrium state of trap filling and emptying (so-called field saturation54). If so, then the increase in De with depth could, instead, be due to a systematic decline in fading rate with increasing depth. Any such trend cannot be verified or rejected from laboratory measurements of the g value, owing to the size of the associated uncertainties at low fading rates (Extended Data Fig. 7c, d). The ages for the TUT samples could, therefore, be viewed conservatively as minimum ages (as for sample TLT-OSL6), given the uncertainties in the measured fading rate of the 250 °C signal and the exact level at which the signal saturates. The measured age of the uppermost sample in the sequence, TUT-OSL1, would increase by about 15% and 40% after correcting39, 55, 56 for assumed fading rates of 0.5 and 1% per decade, respectively. Similarly, the measured ages of TUT-OSL2, -OSL3 and -OSL9 would increase by about 17, 23 and 28%, respectively, after correcting for an assumed fading rate of 0.5% per decade. Thus, whether viewed as true ages or as minimum ages, the TUT sediments were deposited more than ~100 kyr ago.

Samples for palaeomagnetic polarity assessment were taken from the baulks of excavations Talepu 2 (T2) and Talepu 4 (T4) (Fig. 2). Samples were taken at 20–30 cm intervals using non-magnetic tools, preferentially from non-bioturbated silty deposits.
The upper conglomeratic interval of T2 was omitted because of its coarser grain size and because it appeared heavily affected by soil formation and plant root bioturbation. From each sample level, five oriented sample specimens were retrieved by carving the sediment using non-magnetic tools and fitting them into 8 cm3 plastic cubes. The samples were labelled according to excavation, baulk and depth. In the laboratory, all specimens were treated with an alternating field demagnetizer; the mean magnetic directions for each sample are presented in Supplementary Table 7. Demagnetization was performed in steps of 2.5–5 mT to a peak of up to 80–1,000 mT. The magnetization vectors obtained from most samples showed no more than two separate components of natural remanent magnetization (NRM) on the orthogonal plots, indicating that the specimens had been affected by a secondary magnetization. However, the secondary magnetization was easily removed by demagnetization at up to 5–20 mT, while the characteristic remanent magnetizations (ChRMs) could be isolated through stepwise demagnetization of up to 20–40 mT, in some cases up to 50 mT. Above 40 mT, most samples were completely demagnetized (Extended Data Fig. 4j and Supplementary Table 8). The mean magnetization intensities and palaeomagnetic directions are plotted against stratigraphic depth in Extended Data Fig. 5. Between 90% and 98% of the NRM intensity was removed by demagnetization at 20–40 mT: intensities ranged from 1.30 × 10−4 to 3.81 × 10−3 A m−1 before demagnetization and from 8.52 × 10−6 to 1.49 × 10−4 A m−1 afterwards. The directions of the ChRMs were determined from the orthogonal plots using at least four or five successive measurement steps between 20 and 50 mT and principal component analysis57 (PuffinPlot58 and IAPD 2000 software59), with the maximum angular deviation set at <5°. Although no well-defined criteria for the acceptability of palaeomagnetic data are available, the k > 30 and α95 < 15° criteria of ref. 60 were used to accept the average remanence direction for each sampled level. On the basis of these tests, all the samples (n = 24) throughout the Talepu sequences yielded acceptable ChRM directions and showed normal polarity. The ChRM directions were relatively constant throughout the sequences, except for the samples taken in T2 at 6.5 and 7.5 m depth, which showed steep inclinations of 56–68°. Such steep inclinations are unusual for near-equatorial regions. One possible interpretation is that post-depositional mass-movement disturbances, such as creep or a landslide, resulted in rotational movements of this interval. The equal-area projections show that the within-site mean remanence directions group more closely together after demagnetization, and that no significant change in the major remanence direction occurs with depth. The major remanence direction corresponds closely with the present-day magnetization direction (Extended Data Fig. 5b).

Sample TAL-10-01 was taken from T4, sub-unit E, at a depth of 2.5 m below the surface. Euhedral sanidine crystals up to 250 μm in length were hand-picked following standard heavy liquid and magnetic separation techniques. Crystals were loaded into wells in aluminium sample discs (diameter 18 mm) for neutron irradiation, along with the 1.185 Myr Alder Creek sanidine61 as the neutron fluence monitor. Neutron irradiation was done in the cadmium-shielded CLICIT facility at the Oregon State University TRIGA reactor.
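For readers unfamiliar with the 40Ar/39Ar method, the co-irradiated fluence monitor is used to determine the neutron-irradiation parameter J. The sketch below shows that standard relation using the 1.185 Myr Alder Creek age mentioned above; the monitor ratio and decay constant are assumed, illustrative values, since the measured ratios themselves are not quoted here.

    import math

    # Minimal sketch: J parameter from a co-irradiated fluence monitor,
    #   J = (exp(lambda * t_monitor) - 1) / (40Ar*/39Ar)_monitor
    LAMBDA_40K = 5.543e-10        # total 40K decay constant, 1/yr (illustrative)
    T_MONITOR_YR = 1.185e6        # Alder Creek sanidine monitor age

    def j_value(monitor_age_yr, monitor_ar40star_over_ar39):
        return math.expm1(LAMBDA_40K * monitor_age_yr) / monitor_ar40star_over_ar39

    # Assumed monitor ratio of 1.31 (hypothetical), giving J of roughly 5e-4
    print(f"J = {j_value(T_MONITOR_YR, 1.31):.2e}")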
Argon isotopic analyses of gas released by CO2 laser fusion of single sanidine crystals (Supplementary Table 9) were made on a fully automated, high-resolution, Nu Instruments Noblesse multi-collector noble-gas mass spectrometer, using procedures documented previously1, 62. Sample gas clean-up was through an all-metal extraction line, equipped with a −130 °C cold trap (to remove H2O) and two water-cooled SAES GP-50 getter pumps (to absorb reactive gases). Argon isotopic analyses of unknowns, blanks and monitor minerals were performed in identical fashion. 40Ar and 39Ar were measured on the high-mass ion counter, 38Ar and 37Ar on the axial ion counter and 36Ar on the low-mass ion counter, with baselines measured no less than every third cycle. Measurement of the 40Ar, 38Ar and 36Ar ion beams was performed simultaneously, followed by sequential measurement of 39Ar and 37Ar. Beam switching was achieved by varying the field of the mass spectrometer magnet and with minor adjustment of the quad lenses. Data acquisition and reduction were performed using the program ‘Mass Spec’ (A. Deino, Berkeley Geochronology Center). Detector intercalibration and mass fractionation corrections were made using the weighted mean of a time series of measured atmospheric argon aliquots delivered from a calibrated air pipette. Decay and other constants, including correction factors for interference isotopes produced by nucleogenic reactions, are as reported in ref. 62. The resulting age probability diagram for single sanidine crystals (Extended Data Fig. 10) shows a wide range of ages, with a dominant population at around 9.4 million years ago (Late Miocene). This indicates that the sanidine crystals from the sample do not represent a single volcanic event, but were predominantly derived from erosion of the Miocene volcanic rocks west of the Walanae Depression and/or from Late Miocene marine sediments of the Walanae Formation.
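With J in hand, each single-crystal fusion age follows from the standard 40Ar/39Ar age equation. This sketch, again with an illustrative decay constant and an assumed radiogenic ratio, reproduces an age of roughly 9.4 Ma, the dominant population noted above.

    import math

    # Minimal sketch: t = (1/lambda) * ln(1 + J * (40Ar*/39ArK))
    LAMBDA_40K = 5.543e-10        # total 40K decay constant, 1/yr (illustrative)

    def ar_ar_age_ma(j, ar40star_over_ar39):
        t_yr = math.log1p(j * ar40star_over_ar39) / LAMBDA_40K
        return t_yr / 1.0e6

    # Assumed J ~5e-4 (see the monitor sketch above) and a hypothetical ratio of 10.4
    print(f"age = {ar_ar_age_ma(5.0e-4, 10.4):.2f} Ma")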


News Article | November 24, 2016
Site: cleantechnica.com

With each day that passes, we are treated to a clearer picture of what a Donald Trump White House will look like, and the impact it will have on the environment, climate, and energy usage — and to be perfectly honest, the picture is becoming more and more grim with each day. Donald Trump’s approach to the environment, climate, and energy is a big focus here at CleanTechnica — we’ve already looked at the initial attacks and plans of attack Donald Trump has made on the environment; calls from US business leaders to Trump urging him to focus on climate change; and the ensuing conflict between the Tennessee Valley Authority’s plans to spend $8 billion on clean power over the coming years compared with Donald Trump’s plans to increase coal jobs, among many other things. Maybe the biggest news from the week (which missed the deadline for my previous Trump-related piece) was the response to Donald Trump’s moves by the outgoing chief of the US Environmental Protection Agency (EPA), Gina McCarthy. “We’ve been very successful in the last five decades avoiding partisan politics,” McCarthy said at the National Press Club in Washington earlier this week. “It really doesn’t matter whether you’re Republican or Democrat: You still want your kids to be healthy and sound.” Speaking in defence of the country’s Clean Power Plan, McCarthy said, “But I truly believe, guided by President Obama’s deliberate vision, history will show that the Clean Power Plan marked a turning point in American climate leadership. A point where our country stepped up to the plate and delivered … and the world followed.” Unsurprisingly, however, the news continues to roll in, bringing with it more nuggets revealing the potential makeup of Donald Trump’s White House administration, and the policies they plan to enact. Despite a call from the Climate Mayors — a group of 37 mayors working together to tackle climate change city by city — for Donald Trump to partner in their “work to clean our air, strengthen our economy, and ensure that our children inherit a nation healthier and better prepared for the future than it is today,” Donald Trump has nevertheless taken aim at what he has called “politicised science” and is planning several moves which will severely cripple the United States’ contributions to climate targets — climate targets that Donald Trump likely dismisses as irrelevant. Added to Donald Trump’s State Department transition team on Monday was the Heritage Foundation’s Steven Groves, who has repeatedly called for the United States to abandon the Paris agreement — a move that is almost universally agreed to be a monumentally bad decision. This falls in line with the appointment of Myron Ebell, who will head up the Environmental Protection Agency transition team — which we reported on earlier this week. The transition team for the Department of the Interior is similarly shaping up to be a winner for the fossil fuel industry, with numerous contenders for positions in the team boasting terrible environmental records, including names such as former Alaskan Governor Sarah Palin, Oklahoma Governor Mary Fallin, and Representative Cynthia Lummis, a Republican with a lifetime environmental record score of 5% from the League of Conservation Voters.
To cap it all off, Donald Trump’s senior adviser on issues relating to NASA has confirmed that the incoming President is intending to eliminate all climate change research currently being conducted by the space agency, with the extra resources intended to be refocused towards deep space exploration. Bob Walker, Trump’s adviser, explained that there was no need for NASA to involve itself in what he has previously called “politically correct environmental monitoring,” and expects NASA to take the lead “in an exploration role, in deep space research,” adding: “Earth-centric science is better placed at other agencies where it is their primary mission. “My guess is that it would be difficult to stop all ongoing NASA programs but future programs should definitely be placed with other agencies. I believe that climate research is necessary but it has been heavily politicised, which has undermined a lot of the work that researchers have been doing. Mr Trump’s decisions will be based upon solid science, not politicised science.” Scientists from around the world have reacted bitterly and strongly in response to this news, including numerous scientists from Australia. “Just as we have seen in Australia the attack on CSIRO climate science under the Coalition government, we now see the incoming Trump administration attacking NASA,” said Professor Ian Lowe, Emeritus Professor of Science, Technology and Society at Griffith University and a former President of the Australian Conservation Foundation. “They obviously hope that pressure for action will be eased if the science is muffled. “But with temperatures in the Arctic this week a startling 20 degrees above normal, no amount of waffle can disguise the need for urgent action to decarbonise our energy supply and immediately withdraw support for new coal mines.” “Why a world leader in Earth observation should do this is beyond rational explanation,” added David Bowman, a Professor of Environmental Change Biology at The University of Tasmania.


News Article | November 4, 2016
Site: www.eurekalert.org

From deep within the Earth to the upper atmosphere, the organisms and ecosystems highlighted in the 37 projects selected for the 2017 Community Science Program (CSP) of the U.S. Department of Energy Joint Genome Institute (DOE JGI), a DOE Office of Science User Facility, reflect the breadth and depth of interests researchers are exploring to find solutions to energy and environmental challenges. "These new CSP projects, selected through our external review process, exploit DOE JGI's experimental and analytical "omics" capabilities and build our portfolio in key focus areas including sustainable bioenergy production, plant microbiomes and terrestrial biogeochemistry," said Susannah Tringe, DOE JGI User Programs Deputy. The CSP 2017 projects were selected from 98 full proposals received, resulting from 123 letters of intent submitted. The full list of projects may be found at http://jgi. . A number of the accepted proposals call for developing reference genomes for plants of relevance to bioenergy, either as potential feedstocks or because they possess biochemistries that could provide insights into better ways to derive biofuels from plant biomass. These will take advantage of powerful DOE JGI sequencing upgrades. Building upon the previously published reference genome for Sorghum bicolor generated by the DOE JGI, Todd Mockler of the Donald Danforth Plant Science Center has targeted the sequencing of a number of highly diverse sorghum lines. The project will explore and begin to assemble the pangenome to accelerate gene discovery and increase understanding of which variants are associated with different functional outcomes. This work is being conducted in close concert with DOE's Advanced Research Projects Agency-Energy's (ARPA-E) Transportation Energy Resources from Renewable Agriculture (TERRA) program. David Des Marais of Harvard University is similarly focused on Brachypodium, which has had two genomes - Brachypodium distachyon and B. sylvaticum - sequenced by the DOE JGI. Among his plans are estimating a pangenome for B. sylvaticum and sequencing four additional Brachypodium species to further functional genomic research in grasses. Karen Aitken of Australia's national science agency CSIRO is focused on producing the first genome assembly of a cultivated sugarcane plant, namely variety R570. Cultivated sugarcane produces 80 percent of the world's sugar and is already harvested in large volumes; along with its transportation system efficiencies, it is helping to guide optimized production strategies for other renewable and sustainable biofuel crops. Many projects focus on plants with demonstrated tolerance to stressors such as drought. The proposal from John Cushman of the University of Nevada seeks to establish the common or crystalline ice plant (Mesembryanthemum crystallinum L.) as a DOE JGI Flagship Genome species like poplar, soybean, and Brachypodium distachyon. Most plants use what is known as the C3 pathway to photosynthetically fix carbon (from CO2) in the presence of abundant nutrients, while plants in arid conditions rely on a different mechanism, the water-conserving CAM pathway. A rapidly growing annual native to the Namibian desert, the common ice plant is the first reported species that can switch from the C3 pathway to CAM when stressed by factors such as drought and salinity. Elizabeth Kellogg, also of the Danforth Center, requested a genome sequence for the big bluestem (Andropogon gerardii subsp. gerardii), the plant most associated with the Great Plains.
The big bluestem dominates the tall grass prairie, accounting for up to 70 percent of biomass in some areas, and is highly resistant to climate change. In a complementary project, Karolina Heyduk of the University of Georgia seeks genome sequences for two yuccas: the C3 species Yucca aloifolia and the CAM species Y. filamentosa. The comparison of the genomes for these two related species could illuminate the genes and pathways involved in these two forms of photosynthesis. Similarly, the gene atlas for the CAM plant Kalanchoe laxiflora proposed by Xiaohan Yang of Oak Ridge National Laboratory could allow researchers to understand how it responds to changes in environmental conditions, including temperature, water, and nitrogen source. Few conifers have had their genomes sequenced thus far, and Joerg Bohlmann of Canada's University of British Columbia wants to raise the number. He has targeted the genomes and transcriptomes of the common yew and western red cedar, as well as the transcriptome of the Jeffrey pine, all candidate bioenergy feedstocks. Two projects focus on the impact of fire on forests as full recovery can take decades. Daniel Cullen of Forest Products Laboratory is comparing how microbial communities in forests dominated by Lodgepole pine (Pinus contorta) fare in fire-disturbed and undisturbed forests, in part to define processes that underlie carbon cycling in coniferous forests. Similarly, Thomas Bruns of University of California, Berkeley seeks to learn more about the effects of pyrophilous fungi on post-fire soil carbon by studying the genomes and transcriptomes of 13 fungi as well as the metatranscriptomes of burned soils. Several projects focus on plant-microbe interactions, both on micro- and macroscales. Sharon Doty of the University of Washington is investigating two poplar endophytes: one that fixes nitrogen and one involved in phytoremediation. Plant root microbial consortia are known to be critically important to plant growth and resilience to changing soil conditions but much remains to be learned about community composition and function. Devin Coleman-Derr of the USDA-ARS, with his resource allocation, is seeking to learn more about the role of root-associated Actinobacteria in promoting host fitness in sorghum and rice under drought conditions. Paul Dijkstra of Northern Arizona University, with his allocation, will illuminate soil bacterial transcriptional regulatory networks in switchgrass fields. William Whitman of the University of Georgia plans to develop pangenomes of 100-200 species of soil or plant-associated prokaryotes. The pangenome concept is vital to understanding the repertoire of genes upon which microbial populations may call as local environments change. Jared LeBoldus of Oregon State University is targeting metabolites of Sphaerulina musiva, the cause of Septoria stem canker and leaf spot disease in poplar. Christine Smart of Cornell University is characterizing the mechanisms of willow rust (Melampsora americana), and conducting a comparative genomic study involving M. americana and other Melampsora genomes sequenced by the DOE JGI that are known plant pathogens of poplar and other candidate bioenergy feedstocks. Gregory Bonito of Michigan State University will identify the mechanisms of attraction, communication, and growth-promoting activity between Mortierella, a close relative of arbuscular mycorrhizal fungi, and the DOE JGI Flagship plants.
Nhu Nguyen of the University of California, Berkeley will be generating a genus-wide molecular phylogeny of Suillus, asking for 50 fungal genome sequences. Suillus fungi tolerate heavy metals, but the protection varies among hosts. Wayne Nicholson of the University of Florida will be generating a more comprehensive assessment of Carnobacterium, strains of which can be found in all pressure niches from the deep ocean to the upper atmosphere. Sean Crowe of Canada's University of British Columbia will characterize the methanome, comprised of genomic information distributed across organisms that either produce or consume methane, which is both a source of fuel and a greenhouse gas. Graeme Attwood of New Zealand's AgResearch Ltd and his team seek to define gene function in rumen microbes, in part to control the microbes' production of methane emissions from bovines. Soil emissions are a focus of Jennifer Pett-Ridge of Lawrence Livermore National Laboratory and Jonathan Raff of Indiana University. Pett-Ridge is determining the impact microbes in humid tropical forest soils have on carbon cycling, work that complements her DOE Early Career Research Program award. Raff intends to use samples from temperate hardwood forest sites to learn more about soil emissions of nitric oxide (NO) and more accurately represent NO sinks and sources in climate models. Alison Buchan of the University of Tennessee, Knoxville is generating data about lignin-related aromatic compounds in salt marshes that are removed between river mouths and open oceans, and the biochemical pathways employed in this process. A similar project comes from Christopher Francis of Stanford University and involves the San Francisco Bay Delta, the largest estuary on the west coast of North America. His team is investigating how environmental changes drive estuarine microbial community changes, and if certain pathways and organisms dominate under certain conditions, or if genes co-vary with specific functional gene ecotypes. Several projects are focused on algae for their roles in carbon fixation and for potential bioenergy applications. Chi Zhang of the University of Nebraska - Lincoln is focused on Zygnematales, the closest algal lineage to land plants, to learn more about how plant cell walls formed in the evolutionary transition from aquatic to land plants. The information could shed light on how to deconstruct plant cell walls for biofuel production without impacting plant viability. Matthew Johnson of Woods Hole Oceanographic Institute is interested in the bloom-forming Mesodinium rubrum and M. major complex. While these algae are major contributors to primary production, they acquire their ability to harness light energy for nutrients through predation so genome sequencing would help distinguish native metabolic pathways from those of the prey. Jeffry Dudycha of the University of South Carolina is pursuing a project based on cryptophytes, eukaryotic microalgae that are important primary producers in aquatic environments and are capable of capturing a broad spectrum of available light. By sequencing representatives from all major clades, the team hopes to maximize ways diverse algal communities could boost lipid yields for biofuels. Biological soil crusts, or biocrusts, are extremely sensitive to climate change.
As surviving extreme drought is a rare feature in plants, Elena Lopez Peredo of the Marine Biological Laboratory will be generating annotated genomes of Scenedesmus algae to learn more about the desiccation tolerance of green microalgae in biocrusts. In a complementary project, Steven Harris of the University of Nebraska - Lincoln is detailing the interactions of fungal and algal components of biocrusts to learn more about how they can tolerate environmental stress. Several projects build off previous efforts to sequence 1,000 microbial genomes (KMG), which have so far led to the selection of nearly 2,300 type strains. Markus Göker of Germany's DSMZ is spearheading KMG-4 to sequence environmentally relevant cultures. Ramunas Stepanauskas of Bigelow Laboratory for Ocean Sciences is using single-cell genomics to target taxonomic "blind spots" - clades systematically underrepresented or missed due to primer mismatches or intervening sequences in marker genes. Barbara MacGregor of the University of North Carolina is exploring how horizontal gene transfer has shaped the genomes of large sulfur bacteria, which are often found in dense microbial mats and play roles in carbon, nitrogen, sulfur and phosphorus cycles. Mary Ann Moran of the University of Georgia is examining organic sulfur biogeochemistry in a coastal ocean ecosystem. She and colleagues from the Monterey Bay Aquarium Research Institute collected a time series of coastal microbes and microbial communities that will be sequenced to explore how metabolism shifts as community composition changes. Mark Dopson of Sweden's Linnaeus University has a project that deploys DOE JGI's single cell genomics resources on samples sourced from the deep subsurface. Targeted microbiomes will come from deep bedrock waters, including the Mont Terri Underground Rock Lab in Switzerland and sites constructed to investigate the safety of deep geological storage of spent nuclear fuel. The data would contribute to informed decisions on the safety of such storage. Ashley Shade of Michigan State University plans to use a synthetic bacterial community to study the interactions among multiple community members, and then link community structure to activity and to exometabolite interactions. In sum, these 37 new projects will furnish a genomic basis for ongoing and new explorations into the genetic bases for plant, fungal, algal, microbial, and microbial community mechanisms focused on important processes of biofuel and bioenergy feedstock composition and metabolisms, as well as related environmental processes of importance for DOE missions. The U.S. Department of Energy Joint Genome Institute, a DOE Office of Science User Facility at Lawrence Berkeley National Laboratory, is committed to advancing genomics in support of DOE missions related to clean energy generation and environmental characterization and cleanup. DOE JGI, headquartered in Walnut Creek, Calif., provides integrated high-throughput sequencing and computational analysis that enable systems-based scientific approaches to these challenges. Follow @doe_jgi on Twitter. DOE's Office of Science is the largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.


News Article | November 29, 2016
Site: news.yahoo.com

If Trump makes good on his rants against wind and solar, it will be countries like China and India that muscle in, the International Energy Agency (IEA) predicted in November. And Australia? Australia can sell the 21st century world powers the technology to get them to a greener future. On Tuesday, Australia's peak science body announced it would be selling solar heliostat technology to China as part of a major new deal. Working with the Chinese company Thermal Focus, CSIRO will supply technology for concentrating solar thermal (CST) electricity generation. The CSIRO's heliostat system uses small mirrors to concentrate the sun's energy and help store energy at low cost. "In a commercial field, we have thousands of heliostats or mirrors that move across the day and keep the sun focused onto one point on a receiver tower," Wes Stein, CSIRO's chief solar energy research scientist, told Mashable. The resulting high temperatures are used to melt salt, which is stored in large tanks; steam raised from that stored heat can then drive a turbine for electricity generation, "whether it's cloudy, night time, it doesn't really matter." Stein said their technology helps tackle the "critical" issue of green power storage. "As our electricity system moves to renewables, and with the retirement of coal-fired [power] stations, we're going to need solar energy that's got storage," he said. Thermal Focus will commercialise CSIRO's heliostat design and the software that controls its positioning. "CSIRO's solar thermal technology combined with our manufacturing capability will help expedite and deliver solar thermal as an important source of renewable energy in China," Wei Zhu from Thermal Focus said in a statement. While still a heavy user of coal, China has been turning away from the fossil fuel in recent years and upping its clean energy investments. A September report from CoalSwarm found that there had been a 14 percent drop in the total amount of coal-fired power capacity in early planning stages globally in 2016, with China accounting for about three-fourths of the canceled capacity. In other words, it's a bad time to be in the coal business (hear that, Australian government and Adani?). Stein said China recognises renewable energy storage will become critical. "Everyone's insta