Sydney, Australia

The University of New South Wales is an Australian public research university located in the Sydney suburbs of Kensington and Paddington. Established in 1949, it is ranked among the top 50 universities in the world, according to the 2014 QS World University Rankings, and among the top 5 in Australia. It is also the first university in the world to be awarded the QS 5 Star Plus badge for excellence. UNSW currently has more than 50,000 students from over 120 countries.

The main UNSW campus is located on a modern 38-hectare site at Kensington, seven kilometres from the centre of Sydney and close to local beaches including Coogee and Bondi. Other campuses are UNSW Art & Design in Paddington, UNSW Canberra at the Australian Defence Force Academy, and sub-campuses at Randwick and Coogee in Sydney, as well as research stations around New South Wales.

UNSW is a founding member of the prestigious Group of Eight, a coalition of Australian research-intensive universities, and of Universitas 21, a leading global network of research universities. It has international exchange and research partnerships with over 200 universities around the world. UNSW graduates hold more chief executive positions of ASX 200 listed companies than those of any other university in Australia. (Source: Wikipedia)

Apte M.,University of New South Wales | Apte M.,Liverpool Hospital | Banks P.A.,Harvard University
The Lancet | Year: 2015

Acute pancreatitis, an inflammatory disorder of the pancreas, is the leading cause of admission to hospital for gastrointestinal disorders in the USA and many other countries. Gallstones and alcohol misuse are long-established risk factors, but several new causes have emerged that, together with new aspects of pathophysiology, improve understanding of the disorder. As incidence (and admission rates) of acute pancreatitis increase, so does the demand for effective management. We review how to manage patients with acute pancreatitis, paying attention to diagnosis, differential diagnosis, complications, prognostic factors, treatment, and prevention of second attacks, and the possible transition from acute to chronic pancreatitis. © 2015 Elsevier Ltd.

Westra S.,University of Adelaide | Alexander L.V.,Climate Change Research Center | Alexander L.V.,University of New South Wales | Zwiers F.W.,University of Victoria
Journal of Climate | Year: 2013

This study investigates the presence of trends in annual maximum daily precipitation time series obtained from a global dataset of 8326 high-quality land-based observing stations with more than 30 years of record over the period from 1900 to 2009. Two complementary statistical techniques were adopted to evaluate the possible nonstationary behavior of these precipitation data. The first was a Mann-Kendall nonparametric trend test, and it was used to evaluate the existence of monotonic trends. The second was a nonstationary generalized extreme value analysis, and it was used to determine the strength of association between the precipitation extremes and globally averaged near-surface temperature. The outcomes are that statistically significant increasing trends can be detected at the global scale, with close to two-thirds of stations showing increases. Furthermore, there is a statistically significant association with globally averaged near-surface temperature, with the median intensity of extreme precipitation changing in proportion with changes in global mean temperature at a rate of between 5.9% and 7.7% K-1, depending on the method of analysis. This ratio was robust irrespective of record length or time period considered and was not strongly biased by the uneven global coverage of precipitation data. Finally, there is a distinct meridional variation, with the greatest sensitivity occurring in the tropics and higher latitudes and the minima around 13°S and 11°N. The greatest uncertainty was near the equator because of the limited number of sufficiently long precipitation records, and there remains an urgent need to improve data collection in this region to better constrain future changes in tropical precipitation. © 2013 American Meteorological Society.
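The Mann-Kendall statistic used above is simple enough to sketch directly. This is an illustrative implementation (standard normal approximation, no correction for ties), not the study's actual analysis code:

```python
import math

def mann_kendall(x):
    """Mann-Kendall nonparametric trend test on a 1-D series x.

    Returns the S statistic (sum of pairwise signs) and the
    normal-approximation Z score; |Z| > 1.96 indicates a trend
    that is significant at the 5% level (two-sided).
    """
    n = len(x)
    # Sum sign(x[j] - x[i]) over all pairs with j > i.
    s = sum(
        (x[j] > x[i]) - (x[j] < x[i])
        for i in range(n - 1)
        for j in range(i + 1, n)
    )
    # Variance of S under the null hypothesis (no ties).
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z
```

For a strictly increasing series every pair contributes +1, so S equals the number of pairs and Z is positive; a real analysis at the scale described above would apply this test station by station with a tie correction.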

Murphy M.T.,Swinburne University of Technology | Berengut J.C.,University of New South Wales
Monthly Notices of the Royal Astronomical Society | Year: 2014

Quasar spectra reveal a rich array of important astrophysical information about galaxies which intersect the quasar line of sight. They also enable tests of the variability of fundamental constants over cosmological time- and distance-scales. Key to these endeavours are the laboratory frequencies, isotopic and hyperfine structures of various metal-ion transitions. Here, we review and synthesize the existing information about these quantities for 43 transitions which are important for measuring possible changes in the fine-structure constant, α, using optical quasar spectra, i.e. those of Na, Mg, Al, Si, Ca, Cr, Mn, Fe, Ni and Zn. We also summarize the information currently missing that precludes more transitions being used. We present an up-to-date set of coefficients, q, which define the sensitivity of these transitions to variations in α. New calculations of isotopic structures and q-coefficients are performed for Si II and Ti II, including Si II λ1808 and Ti II λλ1910.6/1910.9 for the first time. Finally, simulated absorption-line spectra are used to illustrate the systematic errors expected if the isotopic/hyperfine structures are omitted from profile fitting analyses. To ensure transparency, repeatability and currency of the data and calculations, we supply a comprehensive database as Supporting Information. This will be updated as new measurements and calculations are performed. © 2013 The Authors. Published by Oxford University Press on behalf of the Royal Astronomical Society.

Biro P.A.,University of New South Wales | Stamps J.A.,University of California at Davis
Trends in Ecology and Evolution | Year: 2010

Consistent individual differences (CIDs) in behavior are a widespread phenomenon in animals, but the proximate reasons for them are unresolved. We discuss evidence for the hypothesis that CIDs in energy metabolism, as reflected by resting metabolic rate (RMR), promote CIDs in behavior patterns that either provide net energy (e.g. foraging activity), and/or consume energy (e.g. courtship activity). In doing so, we provide a framework for linking together RMR, behavior, and life-history productivity. Empirical studies suggest that RMR is (a) related to the capacity to generate energy, (b) repeatable, and (c) correlated with behavioral output (e.g. aggressiveness) and productivity (e.g. growth). We conclude by discussing future research directions to clarify linkages between behavior and energy metabolism in this emerging research area. © 2010 Elsevier Ltd.

Denson T.F.,University of New South Wales | Mehta P.H.,University of Oregon | Ho Tan D.,University of New South Wales
Psychoneuroendocrinology | Year: 2013

The dual-hormone hypothesis posits that the effect of testosterone on social behavior is moderated by cortisol. The present study tested this hypothesis with a competitive reactive aggression paradigm in 53 healthy undergraduate women. Salivary cortisol and testosterone were assessed at baseline. Participants were personally insulted and subsequently given the opportunity to retaliate by administering blasts of white noise to the provocateur. Participants were randomly assigned to win or lose the aggressive competition. Basal testosterone positively predicted reactive aggression and state dominance, but only among participants with high concentrations of basal cortisol. The corresponding, reverse pattern was found for state submissiveness. Winners also had higher concentrations of testosterone than losers following the aggressive competition. We discuss the role of heightened reactivity to social provocation as a possible explanation for these effects. © 2012 Elsevier Ltd.

Sushkov O.P.,University of New South Wales | Castro Neto A.H.,National University of Singapore
Physical Review Letters | Year: 2013

We propose that ordinary semiconductors with large spin-orbit coupling, such as GaAs, can host stable, robust, and tunable topological states in the presence of quantum confinement and superimposed potentials with hexagonal symmetry. We show that the electronic gaps which support chiral spin edge states can be as large as the electronic bandwidth in the heterostructure miniband. The existing lithographic technology can produce a topological insulator operating at a temperature of 10-100 K. Improvement of lithographic techniques will open the way to a tunable room temperature topological insulator. © 2013 American Physical Society.

Evans J.P.,University of New South Wales | Westra S.,University of Adelaide
Journal of Climate | Year: 2012

This study investigates the ability of a regional climate model (RCM) to simulate the diurnal cycle of precipitation over southeast Australia, to provide a basis for understanding the mechanisms that drive diurnal variability. When compared with 195 observation gauges, the RCM tends to simulate too many occurrences and too little intensity for precipitation events at the 3-hourly time scale. However, the overall precipitation amounts are well simulated and the diurnal variability in occurrences and intensities are generally well reproduced, particularly in spring and summer. In terms of precipitation amounts, the RCM overestimated the diurnal cycle during the warmer months but was reasonably accurate during winter. The timing of the maxima and minima was found to match the observed timings well. The spatial pattern of diurnal variability in the Weather Research and Forecasting model outputs was remarkably similar to the observed record, capturing many features of regional variability. The RCM diurnal cycle was dominated by the convective (subgrid scale) precipitation. In the RCM the diurnal cycle of convective precipitation over land corresponds well to atmospheric instability and thermally triggered convection over large areas, and also to the large-scale moisture convergence at 700 hPa along the east coast, with the strongest diurnal cycles present where these three mechanisms are in phase. © 2012 American Meteorological Society.

Jiang Z.,National University of Singapore | Heng C.S.,National University of Singapore | Choi B.C.F.,University of New South Wales
Information Systems Research | Year: 2013

Privacy is of prime importance to many individuals when they attempt to develop online social relationships. Nonetheless, it has been observed that individuals' behavior is at times inconsistent with their privacy concerns, e.g., they disclose substantial private information in synchronous online social interactions, even though they are aware of the risks involved. Drawing on the hyperpersonal framework and the privacy calculus perspective, this paper elucidates the interesting roles of privacy concerns and social rewards in synchronous online social interactions by examining the causes and the behavioral strategies that individuals utilize to protect their privacy. An empirical study involving 251 respondents was conducted in online chat rooms. Our results indicate that individuals utilize both self-disclosure and misrepresentation to protect their privacy and that social rewards help explain why individuals may not behave in accordance with their privacy concerns. In addition, we find that perceived anonymity of others and perceived intrusiveness affect both privacy concerns and social rewards. Our findings also suggest that higher perceived anonymity of self decreases individuals' privacy concerns, and higher perceived media richness increases social rewards. Generally, this study contributes to the information systems literature by integrating the hyperpersonal framework and the privacy calculus perspective to identify antecedents of privacy trade-off and predict individuals' behavior in synchronous online social interactions. © 2013 Informs.

Flambaum V.V.,University of New South Wales | Demille D.,Yale University | Kozlov M.G.,RAS Petersburg Nuclear Physics Institute
Physical Review Letters | Year: 2014

Recent measurements in paramagnetic molecules improved the limit on the electron electric dipole moment (EDM) by an order of magnitude. Time-reversal (T) and parity (P) symmetry violation in molecules may also come from their nuclei. We point out that nuclear T, P-odd effects are amplified in paramagnetic molecules containing deformed nuclei, where the primary effects arise from the T, P-odd nuclear magnetic quadrupole moment (MQM). We perform calculations of T, P-odd effects in the molecules TaN, ThO, ThF+, HfF+, YbF, HgF, and BaF induced by MQMs. We compare our results with those for the diamagnetic TlF molecule, where the T, P-odd effects are produced by the nuclear Schiff moment. We argue that measurements in molecules with MQMs may provide improved limits on the strength of T, P-odd nuclear forces, on the proton, neutron, and quark EDMs, on quark chromo-EDMs, and on the QCD θ term and CP-violating quark interactions. © 2014 American Physical Society.

Crean A.J.,University of New South Wales | Kopps A.M.,University of New South Wales | Kopps A.M.,University of Groningen | Bonduriansky R.,University of New South Wales
Ecology Letters | Year: 2014

Newly discovered non-genetic mechanisms break the link between genes and inheritance, thereby also raising the possibility that previous mating partners could influence traits in offspring sired by subsequent males that mate with the same female ('telegony'). In the fly Telostylinus angusticollis, males transmit their environmentally acquired condition via paternal effects on offspring body size. We manipulated male condition, and mated females to two males in high or low condition in a fully crossed design. Although the second male sired a large majority of offspring, offspring body size was influenced by the condition of the first male. This effect was not observed when females were exposed to the first male without mating, implicating semen-mediated effects rather than female differential allocation based on pre-mating assessment of male quality. Our results reveal a novel type of transgenerational effect with potential implications for the evolution of reproductive strategies. © 2014 John Wiley & Sons Ltd/CNRS.

Catalan G.,Catalan Institution for Research and Advanced Studies | Catalan G.,Catalan Institute of Nanoscience and Nanotechnology | Seidel J.,Lawrence Berkeley National Laboratory | Seidel J.,University of California at Berkeley | And 4 more authors.
Reviews of Modern Physics | Year: 2012

Domains in ferroelectrics were considered to be well understood by the middle of the last century: They were generally rectilinear, and their walls were Ising-like. Their simplicity stood in stark contrast to the more complex Bloch walls or Néel walls in magnets. Only within the past decade and with the introduction of atomic-resolution studies via transmission electron microscopy, electron holography, and atomic force microscopy with polarization sensitivity has their real complexity been revealed. Additional phenomena appear in recent studies, especially of magnetoelectric materials, where functional properties inside domain walls are being directly measured. In this paper these studies are reviewed, focusing attention on ferroelectrics and multiferroics but making comparisons where possible with magnetic domains and domain walls. An important part of this review will concern device applications, with the spotlight on a new paradigm of ferroic devices where the domain walls, rather than the domains, are the active element. Here magnetic wall microelectronics is already in full swing, owing largely to the work of Cowburn and of Parkin and their colleagues. These devices exploit the high domain wall mobilities in magnets and their resulting high velocities, which can be supersonic, as shown by Kreines and co-workers 30 years ago. By comparison, nanoelectronic devices employing ferroelectric domain walls often have slower domain wall speeds, but may exploit their smaller size as well as their different functional properties. These include domain wall conductivity (metallic or even superconducting in bulk insulating or semiconducting oxides) and the fact that domain walls can be ferromagnetic while the surrounding domains are not. © 2012 American Physical Society.

Avis C.A.,University of Victoria | Weaver A.J.,University of Victoria | Meissner K.J.,University of New South Wales
Nature Geoscience | Year: 2011

Wetlands are vegetated regions that are inundated with water on a permanent, seasonal or intermittent basis [1]. These ecosystems play an important role in the carbon cycle: wetlands take up and store carbon, and release carbon dioxide and methane through the decomposition of organic matter [2]. More than 50% of wetlands are located in the high northern latitudes [3], where permafrost also prevails and exerts a strong control on wetland hydrology [4]. Permafrost degradation is linked to changes in Arctic lakes: between 1973 and 2004 the abundance of lakes increased in continuous permafrost zones, but decreased in other zones [5]. Here, we use a global climate model to examine the influence of permafrost thaw on the prevalence of high-latitude northern wetlands, under four emissions scenarios. We show that as permafrost degrades, the areal extent of wetlands declines; we found a net loss in wetland extent in the three highest emissions scenarios. We also note an initial increase in the number of days of the year conducive to wetland formation, owing to an increase in unfrozen surface moisture resulting from a lengthening of the thaw season. This is followed by a dramatic decline in the number of wetland-conducive days, owing to a deepening of the permafrost surface, and drainage of near-surface moisture to deeper soil layers. We suggest that a reduction in the areal extent and duration of wetlands will influence high-latitude carbon emissions. © 2011 Macmillan Publishers Limited. All rights reserved.

Woolf D.,University of Swansea | Amonette J.E.,Pacific Northwest National Laboratory | Street-Perrott F.A.,University of Swansea | Lehmann J.,Cornell University | Joseph S.,University of New South Wales
Nature Communications | Year: 2010

Production of biochar (the carbon (C)-rich solid formed by pyrolysis of biomass) and its storage in soils have been suggested as a means of abating climate change by sequestering carbon, while simultaneously providing energy and increasing crop yields. Substantial uncertainties exist, however, regarding the impact, capacity and sustainability of biochar at the global level. In this paper we estimate the maximum sustainable technical potential of biochar to mitigate climate change. Annual net emissions of carbon dioxide (CO2), methane and nitrous oxide could be reduced by a maximum of 1.8 Pg CO2-C equivalent (CO2-Ce) per year (12% of current anthropogenic CO2-Ce emissions; 1 Pg = 1 Gt), and total net emissions over the course of a century by 130 Pg CO2-Ce, without endangering food security, habitat or soil conservation. Biochar has a larger climate-change mitigation potential than combustion of the same sustainably procured biomass for bioenergy, except when fertile soils are amended while coal is the fuel being offset. © 2010 Macmillan Publishers Limited. All rights reserved.

Ahmed N.A.,University of New South Wales | Day T.R.,New Fluid Technology
Applied Mechanics and Materials | Year: 2014

This paper describes the design and manufacture of an innovative pump based on the concept of vortex diffusion. The authors believe the pump has the potential to achieve higher efficiencies than those currently available. A prototype was therefore manufactured for laboratory testing, with a view to eventual field trials and commercial manufacture. If the concept achieves its goals, it will pave the way towards significant savings in fuel expenditure through lower pumping costs. © (2014) Trans Tech Publications, Switzerland.

Parmenter B.J.,University of New South Wales | Dieberg G.,University of New England of Australia | Smart N.A.,University of New England of Australia
Sports Medicine | Year: 2015

Background: Peripheral arterial disease (PAD), a chronic condition with debilitating clinical sequelae, leads to reduced walking activity and increased mortality risk.
Objective: We sought to quantify the benefits elicited via exercise training in people with PAD and aimed to clarify which prescriptions were optimal.
Data sources: We conducted a systematic search (PubMed, CINAHL, Cochrane controlled trials registry; 1966–31 July 2013).
Study selection: We included randomized controlled trials (RCTs) of exercise training versus usual medical care in persons with PAD. Studies were assessed by two reviewers; 41 of 57 (72 %) RCTs met the selection criteria.
Data extraction and synthesis: Data extraction sheets were used to record data, and two reviewers cross-checked data. Authors of included studies were asked for missing data.
Main outcomes and measures: The primary outcome was change in aerobic capacity (peak VO2). Secondary outcomes were ankle-brachial index (ABI), flow-mediated dilatation (FMD), 6-minute walk claudication distances (initial and absolute) and graded treadmill (initial and absolute) distances. The primary hypothesis was that peak VO2 would increase with exercise training. Using sub-analyses, we also aimed to clarify which types of exercise prescription would provide patients with the most benefit; hypotheses were developed a priori.
Results: Exercise training produced significant improvements in peak VO2, with mean difference (MD) 0.62 ml·kg−1·min−1 (95 % CI 0.47–0.77; p < 0.00001); 6-minute walk initial claudication distance, MD 52.7 m (95 % CI 24.7–80.6 m; p = 0.0002); total walking distance, MD 34.9 m (95 % CI 25.6–44.1 m; p < 0.00001); graded treadmill initial claudication distance, MD 68.8 m (95 % CI 54.4–83.2 m; p < 0.00001); and absolute claudication distance, MD 41.0 m (95 % CI 28.8–53.2 m; p < 0.00001); but not ABI (p = 0.12) or FMD (p = 0.96). Sub-analysis of change in peak VO2 after arm cranking showed an MD of 1.91 ml·kg−1·min−1 (95 % CI 1.28–2.54; p < 0.00001). Sub-analysis of peak VO2 according to exercise training pain thresholds suggested that no-to-mild pain may be superior (MD 0.79 ml·kg−1·min−1 [95 % CI 0.45–1.14; p < 0.00001]) to moderate-to-maximum training pain (MD 0.49 ml·kg−1·min−1 [95 % CI 0.31–0.66; p < 0.00001]).
Conclusions and relevance: Exercise training improves cardio-respiratory fitness, pain-free and total flat-ground walking distances, as well as graded treadmill performance in PAD. Exercise prescriptions for PAD may consider arm cranking as well as lower limb exercise, possibly at short vigorous-intensity intervals, but only to a threshold of mild pain. © 2014, Springer International Publishing Switzerland.
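The pooled mean differences quoted above follow standard inverse-variance (fixed-effect) meta-analysis. A minimal sketch of that pooling step, using made-up study values rather than the review's data:

```python
import math

def pooled_mean_difference(mds, ses):
    """Fixed-effect (inverse-variance) pooled mean difference.

    mds: per-study mean differences; ses: their standard errors.
    Each study is weighted by 1/SE^2, so precise studies dominate.
    Returns the pooled MD and its 95% confidence interval.
    """
    weights = [1.0 / se ** 2 for se in ses]
    md = sum(w * m for w, m in zip(weights, mds)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return md, (md - 1.96 * pooled_se, md + 1.96 * pooled_se)

# Two hypothetical studies with equal precision: the pooled MD
# is simply their average, with a narrower CI than either alone.
md, (lo, hi) = pooled_mean_difference([0.5, 0.7], [0.1, 0.1])
```

A published review such as this one would typically also report heterogeneity and may use a random-effects model instead; the fixed-effect form above is the simplest illustration of the weighting.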

Oitmaa J.,University of New South Wales | Singh R.R.P.,University of California at Davis
Physical Review B - Condensed Matter and Materials Physics | Year: 2011

We study magnetically ordered phases and their phase boundaries in the J1-J2-J3 Heisenberg models on the honeycomb lattice using series expansions around Néel and different collinear and noncollinear magnetic states. An Ising anisotropy (λ = J⊥/Jz) is introduced, and the ground-state energy and magnetization order parameter are calculated as power series expansions in λ. Series extrapolation methods are used to study properties of the Heisenberg model (λ = 1). We find that at large J3 (≳ 0.6) there is a first-order transition between the Néel and columnar states, in agreement with the classical answer. For J3 = 0, we find that the Néel phase extends beyond the region of classical stability. We also find that spiral phases are stabilized over large parameter regions, although their spiral angles can be substantially renormalized with respect to the classical values. Our study also shows a magnetically disordered region at intermediate J2/J1 and J3/J1 values. © 2011 American Physical Society.

Singh R.R.P.,University of California at Davis | Oitmaa J.,University of New South Wales
Physical Review B - Condensed Matter and Materials Physics | Year: 2012

We study corrections to single-tetrahedron-based approximations for the entropy, specific heat, and uniform susceptibility of the pyrochlore lattice Ising antiferromagnet, by a numerical linked cluster (NLC) expansion. In a tetrahedron-based NLC, the first order gives the Pauling residual entropy of (1/2)ln(3/2) ≈ 0.20273. A 16th-order NLC calculation changes the residual entropy to 0.205507, a correction of 1.37% over the Pauling value. At high temperatures, the accuracy of the calculations is verified by a high-temperature series expansion. We find the corrections to the single-tetrahedron approximations to be at most a few percent for all the thermodynamic properties. © 2012 American Physical Society.
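The Pauling value and the quoted 1.37% correction can be checked with a few lines of arithmetic:

```python
import math

# Pauling's residual entropy per site for ice-rule systems: (1/2) ln(3/2)
pauling = 0.5 * math.log(1.5)     # ≈ 0.20273

# 16th-order NLC value quoted in the abstract
nlc_16th_order = 0.205507

# Relative correction over the Pauling estimate, in percent
correction_pct = (nlc_16th_order - pauling) / pauling * 100
# correction_pct ≈ 1.37, matching the quoted figure
```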

Oitmaa J.,University of New South Wales | Singh R.R.P.,University of California at Davis
Physical Review B - Condensed Matter and Materials Physics | Year: 2012

Motivated by the observation of a disordered spin ground state in the S = 3/2 material Bi3Mn4O12(NO3), we study the ground-state properties and excitation spectra of the S = 3/2 (and, for comparison, S = 1/2) bilayer Heisenberg model on the honeycomb lattice, with and without frustrating further-neighbor interactions. We use series expansions around the Néel state to calculate properties of the magnetically ordered phase. Furthermore, series expansions in 1/λ = J1/J, where J1 is an in-plane exchange constant and J is the exchange constant between the layers, are used to study properties of the spin singlet phase. For the unfrustrated case, our results for the phase transitions are in very good agreement with recent quantum Monte Carlo studies. We also obtain the excitation spectra in the disordered phase and study the change in the critical λ when frustrating exchange interactions are added to the S = 3/2 system, finding a rapid suppression of the ordered phase with frustration. Implications for the material Bi3Mn4O12(NO3) are discussed. © 2012 American Physical Society.

Agency: Cordis | Branch: H2020 | Program: RIA | Phase: SC5-16-2014 | Award Amount: 15.99M | Year: 2015

Terrestrial and marine ecosystems provide essential services to human societies. Anthropogenic pressures, however, pose serious threats to ecosystems, leading to habitat degradation, increased risk of collapse and loss of ecosystem services. Knowledge-based conservation, management and restoration policies are needed to improve ecosystem benefits in the face of increasing pressures. ECOPOTENTIAL makes significant progress beyond the state of the art and creates a unified framework for ecosystem studies and management of protected areas (PAs). ECOPOTENTIAL focuses on internationally recognized PAs in Europe and beyond in a wide range of biogeographic regions, and it includes UNESCO, Natura2000 and LTER sites and Large Marine Ecosystems. Best use of Earth Observation (EO) and monitoring data is enabled by new EO open-access ecosystem data services (ECOPERNICUS). Modelling approaches including information from EO data are devised, ecosystem services in current and future conditions are assessed and the requirements of future protected areas are defined. Conceptual approaches based on Essential Variables, Macrosystem Ecology and cross-scale interactions allow for a deeper understanding of the Earth's Critical Zone. Open and interoperable access to data and knowledge is assured by a GEO Ecosystem Virtual Laboratory Platform, fully integrated in GEOSS. Support to transparent and knowledge-based conservation and management policies, able to include information from EO data, is developed. Knowledge gained in the PAs is upscaled to pan-European conditions and used for planning and management of future PAs. A permanent stakeholder consultancy group (GEO Ecosystem Community of Practice) will be created. Capacity building is pursued at all levels. SMEs are involved to create expertise leading to new job opportunities, ensuring long-term continuation of services. In summary, ECOPOTENTIAL uses the most advanced technologies to improve future ecosystem benefits for humankind.

Agency: GTR | Branch: AHRC | Program: | Phase: Research Grant | Award Amount: 4.16M | Year: 2012

Over the last decade, the creative industries have been revolutionised by the Internet and the digital economy. The UK, already punching above its weight in the global cultural market, stands at a pivotal moment where it is well placed to build a cultural, business and regulatory infrastructure in which first movers as significant as Google, Facebook, Amazon or iTunes may emerge and flourish, driving new jobs and industry. However, for some creators and rightsholders the transition from analogue to digital has been as problematic as it has been promising. Cultural heritage institutions are also struggling to capitalise upon new revenue streams that digitisation appears to offer, while maintaining their traditional roles. Policymakers are hampered by a lack of consensus across stakeholders and confused by partisan evidence lacking robust foundations. Research in conjunction with industry is needed to address these problems and provide support for legislators. CREATe will tackle this regulatory and business crisis, helping the UK creative industry and arts sectors survive, grow and become global innovation pioneers, with an ambitious programme of research delivered by an interdisciplinary team (law, business, economics, technology, psychology and cultural analysis) across 7 universities. CREATe aims to act as an honest broker, using open and transparent methods throughout to provide robust evidence for policymakers and legislators which can benefit all stakeholders. 
CREATe will do this by:
- focussing on studying and collaborating with SMEs and individual creators as the incubators of innovation;
- identifying good, bad and emergent business models: which business models can survive the transition to digital, which cannot, and which new models can succeed and scale to drive growth and jobs in the creative economy, as well as supporting the public sector in times of recession;
- examining empirically how far copyright in its current form really does incentivise or reward creative work, especially at the SME/micro level, as well as how far innovation may come from open business models and the informal economy;
- monitoring copyright reform initiatives in Europe, at WIPO and in other international fora to assess how they impact on the UK and on our work;
- using technology as a solution, not a problem: creating pioneering platforms and tools to aid creators and users, built on open standards and released under open licences;
- examining how to increase and derive revenues from the user contribution to the creative economy in an era of social media, mash-ups, data mining and prosumers;
- assessing the role of online intermediaries such as ISPs, social networks and mobile operators, to see whether they encourage or discourage the production and distribution of cultural goods, and what role they should play in enforcing copyright (given the important governing role of these bodies, should they be subject to regulation like public bodies, and if so, how?);
- considering throughout this work how the public interest and human rights, such as freedom of expression, privacy, and access to knowledge for the socially or physically excluded, may be affected either positively or negatively by new business models and new ways to enforce copyright.
To investigate these issues our work will be arranged into seven themes: SMEs and good, bad and emergent business models; Open business models; Regulation and enforcement; Creators and creative practice; Online intermediaries and physical and virtual platforms; User creation, behaviour and norms; and, Human rights and the public interest. Our deliverables across these themes will be drawn together to inform a Research Blueprint for the UK Creative Economy to be launched in October 2016.

Agency: GTR | Branch: EPSRC | Program: | Phase: Research Grant | Award Amount: 800.50K | Year: 2015

Solar is an increasingly important source of power generation. World-wide installations of new solar modules will exceed 40GW in 2014, with over 1GW of that capacity installed in the UK. The cost of modules has decreased sharply over the past two years due to over-supply from manufacturers. The cost reduction is now stimulating demand because the cost of energy from solar is now at grid parity in some important regions of the world. An exciting new type of solar cell based on thin film perovskite light absorbers has been discovered in the UK which has the potential to lower costs still further. The discovery was made by a team of researchers at Oxford University. The progress they have made with these new devices has been unprecedented: in only two years the Oxford team has achieved conversion efficiencies exceeding 17%. Moreover, the technology has been protected by filing patent applications on the fundamental discoveries. The Supergen SuperSolar Hub comprises eight of the UK's leading university groups (including Oxford) engaged in the development of photovoltaic technologies. The Hub was quick to recognise the importance of the perovskite development and has already funded complementary research programmes in Hub member and Associate member laboratories through its flexible funding. This proposal for Supergen+ funding will increase the scope and ambition of the Hub's perovskite research in modelling, synthesis, process optimization and characterization to boost conversion efficiencies still further and help maintain the UK's leadership position. In addition to the proposed research, proposals are made to increase the Hub's involvement with industry and with leading international laboratories to accelerate progress and lay the foundations for timely exploitation.

Agency: Cordis | Branch: H2020 | Program: RIA | Phase: PHC-11-2015 | Award Amount: 7.42M | Year: 2016

Cancer is the second leading cause of mortality in EU member states, with ~90% of all cancer deaths caused by metastatic spread. Despite its significance, measuring metastatic potential, as well as potential indicators of therapy efficacy, remains an unmet clinical challenge. Recently, it has been demonstrated in vitro that aggressive metastatic cells pull on their surroundings, suggesting that metastatic potential could be gauged by measuring the forces exerted by tumours. Furthermore, many solid tumours show a significantly increased interstitial fluid pressure (IFP), which prevents the efficient uptake of therapeutic agents. As a result, a reduction in IFP is recognized as a hallmark of therapeutic efficacy. Currently, there is no non-invasive modality that can directly image these forces in vivo. Our objective is the non-invasive measurement of both the IFP within tumours and the forces they exert on their surrounding environment. This will be used to predict a tumour's metastatic potential and, importantly, changes in these forces will be used to predict the therapeutic efficacy of drug therapy. To attain this goal, the biomechanical properties of the tumour and its neighbouring tissue will be measured via MR elastography at various measured deformation states. The resulting images will be used to reconstruct images of the internal and external forces acting on the tumour. We call this novel imaging modality Magnetic Resonance Force (MRF) imaging. We will calibrate MRF via cell cultures and pre-clinical models, and then test the method in breast, liver, and brain cancer patients. Thereby, we will investigate whether MRF data can predict metastatic spread and measure IFP in patients. We will also investigate the potential to non-invasively modulate the force environment of cancer cells via externally applied shear forces, with the aim of impacting cell motility and proliferation. This could provide a novel mechanism for anticancer therapeutic agents via mechanotransduction.

Agency: Cordis | Branch: H2020 | Program: RIA | Phase: SC1-PM-21-2016 | Award Amount: 7.58M | Year: 2017

STRENGTHS aims to provide effective community-based health care implementation strategies to scale up the delivery and uptake of effective mental health interventions in different country contexts. The current refugee crisis across Europe and the Middle East affects individual refugees' psychological well-being, as they face extreme stressors in their flight from their home countries, but also places large demands on the healthcare systems of countries housing refugees. In response to this crisis, the STRENGTHS project aims to provide a framework for scaling up the delivery and uptake of effective community-based mental health strategies to address the specific needs of refugees within and outside Europe's borders. STRENGTHS will outline the steps needed to integrate evidence-based, low-intensity psychological interventions for common mental disorders into health systems in the countries surrounding Syria that are taking in the majority of refugees (Turkey, Lebanon and Jordan), an LMIC (Egypt) and European countries (Germany, Switzerland, the Netherlands and Sweden). The consortium is a unique partnership between academics, non-governmental organisations (NGOs), international agencies and local partners with the responsibility to provide and scale up evidence-based mental health and psychosocial support interventions for refugees. Key preparatory steps in the local political, regulatory and governance processes for uptake and scaling up of the intervention, and key contextual and system-related factors for integration, will be validated for their real-life impact on the responsiveness of the system. The low-intensity interventions and training materials will be adapted and implemented for Syrian refugees in the same countries.
STRENGTHS will disseminate and promote buy-in of a validated framework for large-scale implementation of the low intensity interventions to providers of health and social services, policy makers and funding agencies.

Agency: Cordis | Branch: FP7 | Program: CP | Phase: ENERGY.2008.10.1.2;NMP-2008-2.6-1 | Award Amount: 2.09M | Year: 2009

The objective of the SOLAMON project is to develop high-potential Plasmon Generating Nanocomposite Materials (PGNM) which will pave the way to generation III solar cells (high efficiency & low cost). The target is an increase in the external quantum efficiency yielding a 20% increase in the short-circuit current density of thin film solar cells. To achieve such an ambitious goal, the project will focus on the development of fully tailored building-block nanoparticles able to generate a plasmon effect for enhanced solar absorption in thin film solar cells. Such nanoparticles, designed for optimum absorption, will be integrated into the solar cell matrix using a recently developed room-temperature deposition process. This step will result in the specific design of PGNM for solar cells using a knowledge-based approach coupling modelling at two scales: nanoscopic (plasmonic structure) and macroscopic (solar cells). SOLAMON will address three different classes of solar cells: a-Si:H thin films, organics and dye-sensitised. Developing the PGNM on these three classes aims at maximizing the project's impact rather than comparing them, because the scientific background acquired on these technologies can be readily transferred to others. As a matter of fact, a-Si:H technology targets mainly the Building Integrated PV (BIPV) market (large surfaces), whereas the other two are most suitable for the consumer goods market (nomad applications). The project workprogramme, the critical path and the contingency plans are designed to maximize both social and economic impact. For this reason, the BIPV applications (i.e. the a-Si:H based technology) will be considered first when a strategic choice occurs, keeping in mind that, even if of large economic importance, the other two technologies do not have the same key environmental and social impact as BIPV.

Agency: GTR | Branch: EPSRC | Program: | Phase: Training Grant | Award Amount: 5.00M | Year: 2014

Quantum technologies promise a transformation of measurement, communication and computation by using ideas originating from quantum physics. The UK was the birthplace of many of the seminal ideas and techniques; the technologies are now ready to translate from the laboratory into industrial applications. Since international companies are already moving in this area, there is a critical need across the UK for highly skilled researchers who will be the future leaders in quantum technology. Our proposal is driven by the need to train this new generation of leaders. They will need to be equipped to function in a complex research and engineering landscape where quantum physics meets cryptography, complexity and information theory, devices, materials, software and hardware engineering. We propose to train a cohort of leaders to meet these challenges within the highly interdisciplinary research environment provided by UCL and its commercial and governmental laboratory partners. In their first year the students will obtain a background in devices, information and computational sciences through three concentrated modules organized around current research issues. They will complete a team project and a longer individual research project, preparing them for their choice of main doctoral research topic at the end of the year. Cross-cohort training in communication skills, technology transfer, enterprise, teamwork and career planning will continue throughout the four years. Peer-to-peer learning will be continually facilitated not only by organized cross-cohort activities, but also by the day-to-day social interaction among the members of the cohort, thanks to their co-location at UCL.

News Article | February 15, 2017

An ancient continent that was once sandwiched between India and Madagascar now lies scattered on the bottom of the Indian Ocean. The first clues to the continent’s existence came when some parts of the Indian Ocean were found to have stronger gravitational fields than others, indicating thicker crusts. One theory was that chunks of land had sunk and become attached to the ocean crust below. Mauritius was one place with a powerful gravitational pull. In 2013, Lewis Ashwal at the University of the Witwatersrand in South Africa and his colleagues proposed that the volcanic island was sitting on a piece of old, sunken continent. Although Mauritius is only 8 million years old, some zircon crystals on the island’s beaches are almost 2 billion years old. Volcanic eruptions may have ejected the zircon from ancient rock below. Now, Ashwal and his team have found zircon crystals in Mauritius that are up to 3 billion years old. Through detailed analyses they have reconstructed the geological history of the lost continent, which they named Mauritia. Until about 85 million years ago, Mauritia was a small continent — about a quarter of the size of Madagascar — nestled between India and Madagascar, which were much closer than they are today. Then, India and Madagascar began to move apart, and Mauritia started to stretch and break up. “It’s like plasticine: when continents are stretched they become thinner and split apart,” says Martin Van Kranendonk at the University of New South Wales in Australia. “It’s these thin pieces that sink below the ocean.” There is evidence that other volcanic islands in the Indian Ocean, including the Cargados Carajos, Laccadive and Chagos islands, also sit on fragments of Mauritia. More and more remnants of other old continents are being uncovered, says Alan Collins at the University of Adelaide in Australia. Several pieces have recently been found off Western Australia and underneath Iceland, he says. 
“It’s only now as we explore more of the deep oceans that we’re finding all these bits of ancient continents around the place.”

News Article | November 3, 2016

If scientists had sacred objects, this would be one of them: a single, closely guarded 137-year-old cylinder of metal, housed in a vault outside of Paris. It is a prototype that precisely defines a kilogram of mass everywhere in the universe. A kilogram of ground beef at the grocery store has the same mass as this one special hunk of metal, an alloy of platinum and iridium. A 60-kilogram woman has a mass 60 times as much. Even far-flung astronomical objects such as comets are measured relative to this all-important cylinder: Comet 67P/Churyumov–Gerasimenko, which was recently visited by the European Space Agency’s Rosetta spacecraft (SN: 2/21/15, p. 6), has a mass of about 10 trillion such cylinders. But there’s nothing special about that piece of metal, and its mass isn’t even perfectly constant — scratches or gunk collecting on its surface could change its size subtly (SN: 11/20/10, p. 12). And then a kilogram of beef would be slightly more or less meat than it was before. That difference would be too small to matter when flipping burgers, but for precise scientific measurements, a tiny shift in the definition of the kilogram could cause big problems. That issue nags at some researchers. They would prefer to define important units — including kilograms, meters and seconds — using immutable properties of nature, rather than arbitrary lengths, masses and other quantities dreamed up by scientists. If humans were to make contact with aliens and compare systems of units, says physicist Stephan Schlamminger, “we’d be the laughingstock of the galaxy.” To set things right, metrologists — a rare breed of scientist obsessed with precise measurements — are revamping the system. Soon, they will use fundamental constants of nature — unchanging numbers such as the speed of light, the charge of an electron and the quantum mechanical Planck constant — to calibrate their rulers, scales and thermometers. 
They’ve already gotten rid of an artificial standard that used to define the meter — an engraved platinum-iridium bar. In 2018, they plan to jettison the Parisian kilogram cylinder, too. Fundamental constants are among the most precisely measured quantities known and so seem ideal for defining the units. But the constants remain enigmatic. The fine-structure constant, in particular, has mystified physicists since it first emerged as an important quantity in scientists’ equations 100 years ago. Every time electrically charged particles attract or repel — anywhere in the universe — the fine-structure constant comes into play. Its value sets the strength of the charged particles’ push and pull. Nudge its value by a few percent and stars would produce much less carbon, the basis of all known life. Tweak it even more, and stars, molecules and atoms would not form. It seems as if its value were handpicked to allow life in the universe. The values of other fundamental constants are likewise unexplainable — all scientists can do is measure them. “Nobody has any idea why these quantities have the numerical values they do,” says theoretical physicist John Barrow of the University of Cambridge. This uncertainty swirling around the constants could cause headaches for metrologists. No law of physics prohibits the constants from changing ever so slightly in time or space — though scientists haven’t found conclusive evidence that they do. Some controversial measurements suggest, however, that the fine-structure constant may be different in different parts of the universe. That could mean that the constants used to define the units vary, too, which could muck up the orderly system that metrologists are preparing to adopt. The kilogram isn’t the only unit that gives scientists indigestion. A second culprit is the kelvin, the unit of temperature. “It’s bonkers,” says physicist Michael de Podesta of the National Physical Laboratory in Teddington, England. 
“Humanity’s standard for temperature is the level of molecular jiggling at a mystical point.” That point — which much like the sacrosanct kilogram prototype is an arbitrary quantity chosen by humans — is the triple point of water, a particular temperature and pressure at which liquid, gas and solid phases of water coexist. This temperature is set to 273.16 kelvins (0.01° Celsius). Then there’s the ampere, or amp, which quantifies the flow of electrical current that juices up laptops and lightbulbs. “We’ve agonized over the years on the definition of an ampere,” says Barry Inglis, president of the International Committee for Weights and Measures. The present definition is wonky: It is the current that, when flowing through two infinitely long, infinitely thin wires, placed one meter apart, would produce a certain amount of force between them. Such wires are impossible to produce, of course, so it is impractical to create a current that is precisely one amp in this way. This problem makes current-measuring equipment difficult to calibrate well. It’s not a problem for household wiring jobs, but it’s no good when the highest level of precision is needed. These examples explain the discomfort that surrounds the units that are so fundamental to science. “There’s this hairline crack in the foundation, and you cannot build your building of physics on that foundation,” says Schlamminger, of the National Institute of Standards and Technology in Gaithersburg, Md. To seal the crack, scientists are preparing to update the International System of Units, or SI, in 2018. The kilogram, kelvin, ampere and mole (the unit that quantifies an amount of material) will all be redefined using related constants. 
These include the Planck constant, which describes the scale of the quantum realm; the Boltzmann constant, which relates temperature and energy; the Avogadro constant, which sets the number of atoms or molecules that make up a mole; and the magnitude of the charge of an electron or proton, also known as the elementary charge. The new units will be based on the modern understanding of physics, including the laws of quantum mechanics and Einstein’s theory of special relativity. Scientists went through similar unit acrobatics when they redefined the meter in terms of a fundamental constant — the speed of light, a speed that is always the same. In 1983, the meter became the distance light travels in a vacuum in 1/299,792,458th of a second (SN: 10/22/83, p. 263). This long string of digits came from the increasingly precise measurements of the speed of light made over centuries. Scientists settled on a value of exactly 299,792,458 meters per second, which then defined the meter. The other units will now undergo similar redefinitions. The shakeup of so many units is a “once in a lifetime” change, says NIST physicist David Newell. But most people won’t notice. Definitions will flip, but the changes will be orchestrated so that the size of a kilogram or a kelvin won’t change — you won’t have to worry about overpaying at the salad bar. Although the changes are mostly under the hood, their advantages are more than philosophical. In the current system, masses much larger or smaller than a kilogram are difficult to measure precisely. Pharmaceutical companies, for example, need to measure tiny fractions of grams to dole out minute drug doses. Those masses can be a millionth the size of the kilogram prototype cylinder, increasing the uncertainty in the measurement. The new system will tie masses to the Planck constant instead, allowing for more precise measurements of masses both large and small. 
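The logic of defining a unit by fixing a constant can be sketched numerically. The snippet below is illustrative only: it uses the exact values later fixed in the revised SI (the article predates the final vote) and expresses one kilogram as the frequency of a photon of equivalent energy, via E = hν together with E = mc².

```python
# Illustrative sketch: once c and h are fixed by definition, a mass can be
# tied to a frequency via E = m*c**2 and E = h*nu. The values below are the
# exact defined SI values (an assumption here; the article predates the vote).
c = 299_792_458        # speed of light, m/s, exact by definition since 1983
h = 6.62607015e-34     # Planck constant, J*s, fixed in the revised SI

one_meter = c * (1 / 299_792_458)   # distance light travels in 1/299,792,458 s

def equivalent_frequency(mass_kg):
    """Frequency of a photon whose energy equals mass_kg * c**2."""
    return mass_kg * c**2 / h

nu = equivalent_frequency(1.0)      # frequency corresponding to 1 kg
print(f"1 kg corresponds to a photon frequency of {nu:.3e} Hz")
```

Running this gives roughly 1.36 × 10⁵⁰ Hz for one kilogram; in the revised SI the implication runs the other way: with the second and the meter already defined, fixing the value of h is what pins down the kilogram.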
In 2018, at a meeting of the General Conference on Weights and Measures, metrologists will vote on the SI revision and will likely put it into practice. The new system is expected to be a welcome change. “Obviously, a system where you take a lump of metal and say, ‘this is a kilogram,’ is not very fundamental,” says physicist Richard Davis of the International Bureau of Weights and Measures in Sèvres, France. “Why would anyone spend their life trying to measure an atom in terms of that?” To retire the Paris-based prototype, scientists must agree on a number for the Planck constant. Its value is about 6.62607 x 10−34 kilograms times meters squared per second. But scientists need to measure it with extreme precision — within 2 millionths of a percent, out to about seven decimal places — and several independent measurements need to agree. Once that’s done, scientists will fix the value of the Planck constant. Since the meter is already defined (by the speed of light) and the second is defined (using a cesium atomic clock), fixing the value of the Planck constant will define what a kilogram is. Several teams are using different techniques to zero in on the Planck constant (SN: 9/5/15, p. 15). The first compares electromagnetic forces to the force imposed by gravity, using a tool called a watt balance. Schlamminger’s group and others are in the final stages of refining such measurements. Thanks to precise quantum mechanical methods of producing voltages, an object’s mass can be directly related to the Planck constant. In a complementary effort, scientists are measuring the Planck constant using carefully formed and stunningly shiny spheres of pure silicon. “The principle is quite easy,” says metrologist Horst Bettin of the German national metrology institute, Physikalisch-Technische Bundesanstalt in Braunschweig. 
“We are just counting the number of atoms.” Atoms in the sphere are perfectly spaced in a 3-D crystal grid, so the number of atoms can be deduced from the volume of the sphere. The result, a measurement of the Avogadro constant, can then be used to calculate the Planck constant, using precise measurements of other fundamental constants — including the Rydberg constant, which is related to the energy needed to ionize a hydrogen atom. To make this measurement, the spheres must be impressively round so that the number of atoms inside can be calculated. “The Earth would be as round as our spheres if the highest mountain [were] a few meters high,” Bettin says. Imagine a universe where the speed of light changes drastically from day to day. If metrologists’ new system of units were in use there, “today’s meter would be different than tomorrow’s meter,” says Schlamminger — clearly not an ideal situation. In our universe, however, no one has found solid evidence of variation, so if variation exists, it would be too small to have a practical impact on the units. But if the constants weren’t constant, physics would be in trouble. The whole of physics is founded on laws that are assumed to be unchanging, says physicist Paul Davies of Arizona State University in Tempe. Physicists have found possible signs of fickleness in the fine-structure constant (SN: 11/16/13, p. 11). If true, such measurements suggest that charged particles tug differently in far-flung parts of the universe. The fine-structure constant is an amalgamation of several other constants, including the charge of the electron, the speed of light and the Planck constant, all smooshed together into one fraction, with a value of about 1/137. Its value is the same regardless of the system of measurement because the fraction is a pure number, with no units. Scientists keep track of the fine-structure constant using quasars — brilliant cosmic beacons produced by distant supermassive black holes. 
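Bettin's "counting the number of atoms" reduces to simple geometry: silicon crystallises in a cubic unit cell of known edge length containing eight atoms, so the atom count follows from the sphere's volume. The diameter and lattice constant below are rounded, illustrative figures, not the project's measured values.

```python
import math

# Rough sketch of the atom-counting idea behind the silicon-sphere approach.
# Both input numbers are illustrative round figures, not metrological values.
d = 0.0937             # sphere diameter in metres (~93.7 mm, a typical size)
a = 5.431e-10          # edge length of silicon's cubic unit cell, metres
atoms_per_cell = 8     # atoms per unit cell in the silicon crystal

volume = math.pi * d**3 / 6                  # volume of a sphere of diameter d
n_atoms = atoms_per_cell * volume / a**3     # unit cells in sphere x atoms each
print(f"approximately {n_atoms:.2e} atoms in the sphere")
```

This yields on the order of 2 × 10²⁵ atoms. Relating that count to the sphere's measured mass is what connects it to the Avogadro constant; the real measurement, of course, hinges on the extreme roundness and purity the article describes.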
On its way toward Earth, a quasar’s light passes through gas clouds, which absorb light of particular frequencies, producing gaps in the otherwise smooth spectrum of light. The locations of these gaps depend on the fine-structure constant. Variations in the spacing of the gaps in space or time could indicate that the fine-structure constant has changed. In 2011, scientists reported tantalizing hints that the fine-structure constant changes. In Physical Review Letters, astrophysicist John Webb of the University of New South Wales in Sydney and colleagues reported that the fine-structure constant increases in one direction on the sky, and decreases in the opposite direction, as if there were a special axis running through the universe (SN Online: 9/3/10). The claim is controversial and Webb counts himself as one of the skeptics. “This is radical, obviously, and when you come out with a discovery like that, of course you don’t believe it.” But, he says, despite his best efforts to disprove it, the variation remains. If confirmed, the result would have enormous consequences. “It’s a tiny effect,” says Davies, but “I think it would come as a great shock, because people really do want to think that the laws of physics are absolutely immutable. The idea that they could somehow vary makes most physicists feel seriously uneasy.” Some scientists have come up with a more comforting explanation for the hints of variation in the constant. Michael Murphy of Swinburne University of Technology in Melbourne, Australia, suggests that telescope calibration issues could be to blame for the changing fine-structure constant. (Murphy knows the ins and outs of the measurement — he was a coauthor on the 2011 paper reporting fine-structure constant changes.) Using measurements free from calibration issues, the fine-structure constant stays put, Murphy and colleagues reported in September in Monthly Notices of the Royal Astronomical Society. 
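As the article notes, the fine-structure constant is a pure number: every unit cancels in its defining combination. A quick, illustrative check with CODATA-style values (approximate, quoted here only for demonstration) recovers the famous ~1/137:

```python
import math

# Fine-structure constant: alpha = e^2 / (4*pi*eps0*hbar*c).
# CODATA-style values, quoted for illustration only.
e    = 1.602176634e-19    # elementary charge, C
hbar = 1.054571817e-34    # reduced Planck constant, J*s
c    = 299792458.0        # speed of light, m/s
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(f"alpha = {alpha:.10f}  (1/alpha = {1/alpha:.3f})")
```

Because the units cancel, the same ~1/137 comes out whatever measurement system is used, which is why any genuine variation in alpha would signal changing physics rather than changing conventions.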
The quasars studied for the September paper, however, don’t rule out variation in the part of the sky observed in 2011. Other puzzles in physics might be connected to possible variation in the constants. Scientists believe that most of the matter in the universe is of an unseen form known as dark matter. In a paper published in Physical Review Letters in 2015, physicists Victor Flambaum and Yevgeny Stadnik of the University of New South Wales showed that dark matter could cause the fundamental constants to change, via its interactions with normal matter. And a shifting speed of light could revise current views about the evolution of the infant universe. Scientists think that a period of inflation caused the newborn cosmos to expand extremely rapidly, creating a universe that is uniform across vast distances. That uniformity is in line with observations: The cosmic microwave background, light that emerged about 380,000 years after the Big Bang, is nearly the same temperature everywhere scientists look. But cosmologist João Magueijo of Imperial College London has a radical alternative to inflation: If light were speedier in the early universe, it could account for the universe’s homogeneity. “As soon as you raise the speed limit in the early universe,” Magueijo says, “you start being able to work on explanations for why the universe is the way it is.” To the consternation of many physicists, whose equations are riddled with fundamental constants, these quantities cannot be calculated directly from physical principles. Scientists don’t know why electrons pull charged particles with the strength they do and can only measure the strength of the force and plug that number into formulas. Such black boxes detract from the elegance of scientists’ theories, which attempt to explain the universe from the bottom up. Particularly troubling is the fact that the precise values of these constants are essential to the formation of stars and galaxies. 
If, during the birth of the universe, certain constants — in particular the fine-structure constant — had been just slightly different, they would set the cosmos on a path to being empty and barren. As a result, many scientists believe that there must be some deeper theory that constrains their values. But recent attempts to come up with such a theory have been stymied, says theoretical physicist Frank Wilczek of MIT. “Progress has been pretty limited in the last few decades.” Some scientists have begun turning to an alternative explanation: These constants may not be fine-tuned, but randomly chosen, with a roll of the dice that occurred many times in many universes, or different parts of the universe. “We’ve really changed our view of fundamental constants. They’re less rigidly defined and ultimate,” says Barrow. There may be other universes, or faraway pockets of our own universe, that have very different constants. In those places, life might not be able to survive. Just as diverse life sprung up on Earth, with its favorable climate, and not Mars, we may just live in a universe that has constants amenable to life because that’s the only place where life could gain a foothold. There’s also an increasing mismatch between what’s known experimentally and theoretically about the constants. Although scientists are measuring them to painstaking precision, with experimental errors measured in parts per billion, the origins of the constants remain completely unexplained. As metrologists attempt to build their system on stronger foundations by pinning the units to the constants, those very foundations may yet shift. Shifting constants would make the system of units less appealingly neat and tidy. The system of units will have to evolve as knowledge of physics advances, says Newell. 
“Then, you can turn around and use that measurement system to explore further the world around us.” This article appears in the November 12, 2016, issue of Science News with the headline, "Constant connections: New units based on fundamental properties of the universe will make measurements more precise."

News Article | December 27, 2016

As any punter will know, backing winners isn’t easy. There’s a little bit of science, a little bit of art and a whole lot of luck. Australia’s independent renewable energy agency came into being in 2012 through an act of parliament, with a $2.5bn, 10-year mission to improve the affordability of renewable energy and increase its supply in Australia. It hasn’t been smooth sailing; this year Arena was facing a $1.3bn budget cut, but this was reduced to a smaller, though still significant, $500m. Six years on it has spent nearly $100m wholly or partly funding more than 60 projects across the renewable energy portfolio, from solar to geothermal to wind to hybrid. Its chief executive, Ivor Frischknecht, says it’s not so much about picking winners as working to generate market interest in the area. But with a rapidly expanding range of technologies and innovations arising to meet the demand for a truly sustainable energy supply, the agency and its expert advisory boards have had to make a lot of tough decisions. Some have paid off handsomely. Others have not. How do you make solar photovoltaic cells cheaper without compromising their quality and efficiency? It’s the holy grail question of solar photovoltaic development, prompting researchers at the University of New South Wales and industry partner Suntech R&D Australia to investigate how to mass-produce high-quality solar cells that maintain their performance for as long as possible. “The silicon wafer is by far the biggest cost of making a solar cell so it’s important that the cost of the silicon wafer gets reduced,” says Prof Stuart Wenham, chief investigator on the project and director of the UNSW’s Centre of Excellence for Advanced Photovoltaics and Photonics.
“But the problem for reducing the cost of the silicon wafer is the fact that it reduces the quality.” With $4.4m from Arena, the researchers not only achieved a new world record for the efficiency of the most commonly used commercial silicon wafer, but have subsequently developed a method to further improve efficiency. Hydrogenation is a new technology that uses hydrogen atoms to chemically neutralise impurities within the silicon, thus improving its purity and performance. The technology can be incorporated into existing solar cell manufacturing processes and is now commercially available, with interest coming from around the world. Wind turbines and solar farms both require large amounts of space. A country the size of Australia has an abundance of land, wind and sun, and there is the potential to combine all three. Arena commissioned a report, which was published in October, to identify areas for brownfield development – retrofitting solar farms at existing windfarms – or greenfield projects, where a site for both wind and solar could be developed. The report’s authors, Aecom, found that co-location was not only viable across a range of sites but could achieve significant cost savings in connection equipment, installation, operation, maintenance and development. Since then, Arena has invested nearly $10m in Australia’s first large co-located wind and solar farm in the southern highlands of NSW. The plan is to install 10MW of solar photovoltaic power generation alongside the existing 165.5MW Gullen Range windfarm. Mining operations in Australia tend to be remote and therefore far from the electricity grid. As a result, many are powered by diesel, which is not only costly but produces significant greenhouse gas emissions. Rio Tinto’s bauxite mining operation in Weipa in remote Queensland has now built the first stage of a 6.7MW photovoltaic solar farm, with $11.3m of assistance from Arena.
The 18,000 photovoltaic panels are already generating up to 1.7MW of power, meeting 20% of the local township’s electricity demand and saving as much as 600,000 litres of diesel each year. A spokesman for Arena says the project has created a precedent for the mining industry by demonstrating solar photovoltaic is a viable option for powering off-grid locations such as mines. The next stage – worth an additional $7.8m from Arena – will see the solar farm’s potential output increase to 6.7MW, and the provision of batteries for storage. Australia is one of the least volcanically active continents on Earth so, when it comes to generating electricity from geothermal energy, it was always going to be a more challenging prospect than for countries such as Iceland or New Zealand. When Arena was formed, it inherited a $32m commitment to what was hoped to be Australia’s first demonstration of geothermal energy generation. Unfortunately the hot rocks of the Cooper Basin in South Australia proved harder to economically tap into and, after six years, the company formerly known as Geodynamics has sealed the wells and got out of geothermal altogether, rebranding as ReNu Energy and turning its focus to biogas, solar, battery solutions and hybrid. It’s not the end of geothermal in Australia, at least from Arena’s point of view. It has since commissioned an international report into the viability of geothermal energy generation in Australia and granted $450,000 to the University of Adelaide’s South Australian centre for geothermal energy research to develop a map that will help identify areas where drilling is more likely to hit geothermal paydirt. As coal-fired power plants are decommissioned, many are looking at ways to reuse some or all of the infrastructures left behind. In 2013 Arena provided $2.15m to a conglomerate interested in exploring a life after coal for the 180MW Collinsville power station in Queensland. 
The idea was to rebirth the plant as a 30MW hybrid solar thermal and gas-powered plant. But the feasibility study found that while there were no significant technical challenges to the conversion, construction and operating costs were high enough that it could not be made economically viable. Ratch-Australia, which owns the plant, has now revised its plans and is moving towards building a 42MW solar farm around the station, making as much use of the existing infrastructure as possible.
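One way to see whether the Weipa diesel figures reported above hang together is a back-of-the-envelope calculation. This is an illustrative sketch only: the capacity factor and the diesel generator's electrical output per litre are assumed typical values, not figures reported for the project.

```python
# Rough plausibility check on the Weipa figures; the capacity factor and
# diesel genset efficiency below are assumptions, not reported values.
capacity_mw = 1.7          # first-stage solar capacity, from the article
capacity_factor = 0.20     # assumed typical for fixed-tilt solar in the tropics
kwh_per_litre = 3.5        # assumed electrical output per litre of diesel

hours_per_year = 8760
annual_kwh = capacity_mw * 1000 * hours_per_year * capacity_factor
litres_displaced = annual_kwh / kwh_per_litre

print(round(annual_kwh))        # ~3 million kWh of solar generation a year
print(round(litres_displaced))  # roughly 850,000 litres: the same order of
                                # magnitude as the 600,000 litres quoted above
```

On these assumptions, the quoted saving of up to 600,000 litres a year is entirely plausible; a slightly lower capacity factor or a more efficient generator closes the gap.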

News Article | December 2, 2016

AI has seen a renaissance over the last year, with developments in driverless vehicle technology, voice recognition, and the mastery of the game "Go," revealing how much machines are capable of. But with all of the successes of AI, it's also important to pay attention to when, and how, it can go wrong, in order to prevent future errors. A recent paper by Roman Yampolskiy, director of the Cybersecurity Lab at the University of Louisville, outlines a history of AI failures which are "directly related to the mistakes produced by the intelligence such systems are designed to exhibit." According to Yampolskiy, these types of failures can be attributed to mistakes during the learning phase or mistakes in the performance phase of the AI system. Here is TechRepublic's top 10 AI failures from 2016, drawn from Yampolskiy's list as well as from the input of several other AI experts.

The company Northpointe built an AI system designed to predict the chances of an alleged offender committing a crime again. The algorithm, called "Minority Report-esque" by Gawker (a reference to the dystopian short story and movie based on the work by Philip K. Dick), was accused of engaging in racial bias, as black offenders were more likely to be marked as at a higher risk of committing a future crime than those of other races. Another media outlet, ProPublica, found that Northpointe's software wasn't an "effective predictor in general, regardless of race."

In June, an AI-fueled video game called Elite: Dangerous exhibited something the creators never intended: The AI had the ability to create superweapons that were beyond the scope of the game's design. According to one gaming website, "[p]layers would be pulled into fights against ships armed with ridiculous weapons that would cut them to pieces." The weapons were later pulled from the game by its developers.
A so-called "crime fighting robot," created by the platform Knightscope, crashed into a child in a Silicon Valley mall in July, injuring the 16-month-old boy. The Los Angeles Times quoted the company as saying the incident was a "freakish accident."

As previously reported by TechRepublic, Joshua Brown was driving a Tesla engaged in Autopilot mode when his vehicle collided with a tractor-trailer on a Florida highway, in the first-reported fatality of the feature. Since the accident, Tesla has announced major upgrades to its Autopilot software, which Elon Musk claimed would have prevented that collision. There have been other fatalities linked to Autopilot, including one in China, although none can be directly tied to a failure of the AI system.

In an attempt to form relationships with younger customers, Microsoft launched an AI-powered chatbot called "Tay" on Twitter last spring. Tay, modeled around a teenage girl, morphed into, well, a "Hitler-loving, feminist-bashing troll"—within just a day of her debut online. Microsoft yanked Tay off the social media platform and announced it planned to make "adjustments" to its algorithm. SEE: Big data can reveal inaccurate stereotypes on Twitter, according to UPenn study (TechRepublic)

In "The First International Beauty Contest Judged by Artificial Intelligence," a robot panel judged faces, based on "algorithms that can accurately evaluate the criteria linked to perception of human beauty and health," according to the contest's site. But because the organizers failed to supply the AI with a diverse training set, the contest winners were all white. As Yampolskiy said, "Beauty is in the pattern recognizer."

After the release of the massively popular Pokémon Go in July, several users noted that there were fewer Pokémon locations in primarily black neighborhoods.
According to Anu Tewary, chief data officer for Mint at Intuit, it's because the creators of the algorithms failed to provide a diverse training set, and didn't spend time in these neighborhoods.

In March 2016, Google's AI, AlphaGo, was beaten in game four of a five-round series of the game Go by Lee Sedol, an 18-time world champion of the game. And though the AI program won the series, Sedol's win proved AI's algorithms aren't flawless yet. "Lee Sedol found a weakness, it seems, in Monte Carlo tree search," said Toby Walsh, professor of AI at the University of New South Wales. But while this can be considered a failure of AI, Yampolskiy also makes the point that the loss "could be considered by some to be within normal operations specs."

Two researchers at China's Shanghai Jiao Tong University published a study entitled "Automated Inference on Criminality using Face Images." According to the Mirror, they "fed the faces of 1,856 people (half of which were convicted violent criminals) into a computer and set about analysing them." In the work, the researchers concluded that there are "some discriminating structural features for predicting criminality, such as lip curvature, eye inner corner distance, and the so-called nose-mouth angle." Many in the field questioned the results and the report's ethical underpinnings.

And, finally, this year England's largest vehicle insurer, Admiral Insurance, set out to use Facebook users' posts to see whether there was a correlation between their use of the social media site and whether they would make good first-time drivers. While this isn't a straight AI failure, it is a misuse of AI. Walsh said that "Facebook did a good job in blocking this one."
The endeavor, which was called "firstcarquote," never got off the ground because Facebook blocked the company from accessing data, citing its policy that states companies can't "use data obtained from Facebook to make decisions about eligibility, including whether to approve or reject an application or how much interest to charge on a loan." As evidenced by these examples, AI systems are deeply prone to bias—and it is critical that machine learning algorithms train on diverse sets of data in order to prevent it. As AI increases its capabilities, ensuring proper checks, diverse data, and ethical standards for research is of utmost importance.
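A recurring thread in these failures (the beauty contest, Pokémon Go, Northpointe) is a training set that under-represents part of the population. The toy sketch below uses entirely synthetic, invented data to show the mechanism: a naive model fitted to a skewed sample ends up treating the under-represented group as anomalous far more often.

```python
import random

random.seed(0)

# Two synthetic groups measured on one invented "feature"; the numbers
# are made up purely to illustrate skewed-training-set bias.
group_a = [random.gauss(0.0, 1.0) for _ in range(500)]
group_b = [random.gauss(3.0, 1.0) for _ in range(500)]

# Skewed training sample: 95% group A, 5% group B.
train = group_a[:475] + group_b[:25]

# A naive model learns a single "typical" value from its training data,
# which lands near group A's centre because A dominates the sample.
typical = sum(train) / len(train)

def looks_anomalous(x, centre=typical, threshold=2.0):
    """Flag anything far from the learned 'typical' value."""
    return abs(x - centre) > threshold

a_rate = sum(looks_anomalous(x) for x in group_a[475:]) / 25
b_rate = sum(looks_anomalous(x) for x in group_b[25:]) / 475

print(a_rate < b_rate)  # True: the under-represented group is flagged far more
```

Nothing about the model is malicious; the disparity falls straight out of what the training data did and did not contain, which is why diverse training sets matter.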

News Article | November 30, 2016

It's a Sunday afternoon in September, and the two co-founders of ionQ, a quantum computing startup, are meeting for a strategy session with their first hire: their new CEO. Sitting in comfy leather chairs in the Physical Sciences Complex at the University of Maryland (UMD) in College Park, the two founders are experiencing a touch of culture clash. Lifelong research scientists, UMD physicist Chris Monroe and Jungsang Kim, an electrical engineer at Duke University in Durham, North Carolina, are relaxed and talkative about their company's plans, even in the presence of a reporter. They tick off reasons why trapped ions, their specialty, will make for a great quantum computer—perfect reproducibility, long lifetimes, and good controllability with lasers. Their CEO, David Moehring, whom Monroe and Kim have just hired away from the U.S. Intelligence Advanced Research Projects Activity, is more guarded. He warns Monroe and Kim against divulging information that he thinks a startup should keep secret—including exactly how much money they received from the venture capital firm New Enterprise Associates. (He is willing to confirm that it's several million dollars.) Kim nods at Moehring and chuckles. "At some point this guy will implement a policy that we need to get his approval to talk." These unlikely partners share a common conviction: that quantum computing—which aims to harness quantum mechanics to vastly accelerate computation—is ready for prime time. They are not alone. Tech giants Intel, Microsoft, IBM, and Google are all plowing tens of millions of dollars into quantum computing. Yet the contenders are betting on different technological horses: No one yet knows what type of quantum logic bit, or qubit, will power a practical quantum computer. Google, often considered the field's leader, has signaled its choice: tiny, superconducting circuits. Its group has built a nine-qubit machine and hopes to scale up to 49 within a year—an important threshold. 
At about 50 qubits, many say a quantum computer could achieve "quantum supremacy," a term coined by John Preskill, a physicist at the California Institute of Technology in Pasadena, to denote a quantum computer that can do something beyond the ken of a classical computer, such as simulate molecular structures in chemistry and materials science, or tackle certain problems in cryptography or machine learning. IonQ's team isn't ruffled by Google's success. "I'm not worried that Google will declare next month that the game's over," Kim says. "Or maybe they can declare it, but it won't be over." Still, ionQ, which lacks a building or even a website, is a decided underdog. The startup is sticking with trapped ions—the technology behind the world's very first quantum logic gates, which Monroe himself helped create in 1995. With precisely tuned laser pulses, Monroe can knock ions into quantum states that last for seconds—far longer than Google's qubits. Kim has developed a modular scheme for connecting groups of ions together, which might allow ionQ to scale up faster than many expect. But so far, its leaders have joined just five qubits into a programmable machine. Trapped ions are "a bit of a black sheep right now," Monroe admits, "but I think in the coming years people will be flocking to it." One thing is certain: Building a quantum computer has gone from a far-off dream of a few university scientists to an immediate goal for some of the world's biggest companies. And Monroe and his colleagues are among many who hope to cash in. Although superconducting qubits may have taken a momentary lead among industry players, experts agree that it's far too early to declare a winner. "It's a good thing that these different technologies are being developed in parallel," says Preskill, an unofficial dean of quantum information science. "Because there could be surprises that really change the game." 
Qubits outmuscle classical computer bits thanks to two uniquely quantum effects: superposition and entanglement. Superposition allows a qubit to have a value of not just 0 or 1, but both states at the same time, enabling simultaneous computation. Entanglement enables one qubit to share its state with others separated in space, creating a sort of super-superposition, whereby processing capability doubles with every qubit. An algorithm using, say, five entangled qubits can effectively do 2⁵, or 32, computations at once, whereas a classical computer would have to do those 32 computations in succession. As few as 300 fully entangled qubits could, theoretically, sustain more parallel computations than there are atoms in the universe. This massive parallelism would not help with many tasks—nobody thinks quantum computers will revolutionize word processing or email. But it could dramatically speed up algorithms designed to explore vast numbers of different paths simultaneously, and solve problems that include searching through large data sets, discovering new chemical catalysts, and factoring large numbers used to encrypt data. Quantum computers may even find a role simulating black holes and other phenomena in physics.

There is a major catch, however. Quantum superpositions and entangled states are exquisitely fragile. They can be destroyed by slight perturbations from the environment—or by attempts to measure them. A quantum computer needs protection from what Robert Schoelkopf, a physicist at Yale University, calls "a sea of classical chaos." Though theoretical ideas started appearing in the early 1980s, experimental quantum computing got going only in 1995, after Peter Shor, a mathematician at Bell Labs in Murray Hill, New Jersey, showed that a quantum computer could quickly factor large numbers—a capability that would render much of modern cryptography obsolete.
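The doubling described above can be made concrete with a few lines of code. This is a toy illustration (not any research group's software): it builds the equal superposition that a Hadamard gate on every qubit would produce, and shows why simulating around 50 qubits strains any classical machine.

```python
import math

def uniform_superposition(n_qubits):
    """Amplitudes after a Hadamard on each of n qubits: an equal
    superposition over all 2**n computational basis states."""
    dim = 2 ** n_qubits
    return [1 / math.sqrt(dim)] * dim

state = uniform_superposition(5)
print(len(state))                            # 32 amplitudes: 2**5 paths at once
print(round(sum(a * a for a in state), 10))  # squared amplitudes sum to 1.0

# A classical simulator stores one complex amplitude (16 bytes) per basis
# state, so memory doubles with every qubit added:
for n in (5, 30, 49):
    print(n, "qubits:", 2 ** n * 16, "bytes")
```

At 30 qubits the state vector already needs about 17 gigabytes; at 49 qubits, roughly 9 petabytes, which is the arithmetic behind citing ~50 qubits as the "quantum supremacy" threshold.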
Shor and others also showed that it was theoretically possible to keep fragile qubits stable indefinitely by using neighboring qubits to correct their errors. Suddenly, physicists and their funders had both a concrete reason to build a quantum computer and a sign that the machine wouldn't dissolve into a pile of cascading errors. David Wineland, a Nobel Prize-winning physicist at a National Institute of Standards and Technology (NIST) laboratory in Boulder, Colorado, had already pioneered methods to use lasers to cool ions and control their internal quantum states. Within a year of Shor's discoveries, Wineland and Monroe, a NIST staff scientist at the time, built the first quantum mechanical logic gate, using lasers to manipulate electron states in a beryllium ion. Because of Wineland's experience with ions, the chance to seize the lead in early quantum computing experiments "fell in our laps," Monroe says. As millions of government research dollars began flowing to quantum physics groups around the world, other kinds of qubits began to appear. By the early 2010s, trapped ions faced a strong challenge from a new darling: circuit loops made out of superconductors—metallic materials that can carry an oscillating electric current without resistance when chilled nearly to absolute zero. The 0 and 1 of the qubit correspond to different current strengths. Adding to their appeal, the loops can be seen with the naked eye, controlled with simple microwave electronics rather than finicky lasers, and fabricated using techniques from conventional computer chip manufacturing. They also operate very quickly. At least at first, however, superconductors had a fatal weakness: Environmental noise, even from the electronics used to control them, can disrupt their quantum superpositions in a small fraction of a microsecond. 
But engineering refinements have improved the circuits' stability by more than a million times, so that they now can remain in a superposition state for tens of microseconds—though they still collapse far faster than ions. In 2007, D-Wave Systems, a startup company in Burnaby, Canada, surprised just about everybody by announcing that it had built a quantum computer, with 16 superconducting qubits. D-Wave's machine didn't entangle all the qubits, and it couldn't be programmed qubit by qubit. Instead, it relied on a technique called quantum annealing, in which qubits are entangled only with near neighbors and interact to produce not a set of parallel computations, but a single overall quantum state. D-Wave developers hoped to map complicated mathematical problems onto such states and use quantum effects to find minimum points, a promising technique for solving optimization problems such as efficiently routing air traffic. Almost instantly critics cried foul: D-Wave did not even attempt to do certain things that many thought essential to quantum computing, such as error correction. But several companies, including Google and Lockheed Martin, bought and tested D-Wave devices. A tentative consensus emerged: They did something quantum, and, for certain specialized tasks, they might perform faster than a conventional computer. Quantum or not, D-Wave jolted the private sector awake. "It was really eye-opening," Monroe says. "[D-Wave] showed that there's a market, there's a hunger for these devices." Within a few years, companies started lining up behind technologies that aligned with their in-house expertise. Intel made one of the biggest bets, announcing in 2015 that it would invest $50 million into research at QuTech, an offshoot of Delft University of Technology in the Netherlands. The company is focusing on silicon quantum dots, often called "artificial atoms." 
A quantum dot qubit is a small chunk of material in which, as in an atom, the quantum states of an electron can represent 0 and 1. Unlike ions or atoms, however, a quantum dot doesn't need lasers to trap it. Early quantum dots were made from near-perfect crystals of gallium arsenide, but researchers have turned to silicon, hoping to leverage the massive manufacturing infrastructure of the semiconductor industry. "I think [Intel's] heart is with silicon," says Leo Kouwenhoven, scientific director of QuTech. "That's what they're good at." But silicon-based qubits are well behind those based on ions or superconductors, with the first two-qubit logic gate reported only last year by a group at the University of New South Wales in Sydney, Australia.

Microsoft went for what many consider an even longer shot: topological qubits based on nonabelian anyons. These aren't objects at all—they're quasiparticles, traveling along the boundary between two different materials—and their quantum states are encoded in the different braiding paths they follow in time. Because the shapes of the braided paths lead to the qubit superpositions, they would be "topologically protected" from collapse, similar to how a shoelace stays tied even if nudged or bumped. This means that, theoretically, a topological quantum computer wouldn't need to devote so many qubits to error correction. As early as 2005, a Microsoft-led team proposed a way to build a topologically protected qubit in hybrid semiconductor-superconductor structures, and Microsoft has funded several groups to try to make one. Recent papers from these groups and from a separate effort at Bell Labs have shown hints of the crucial anyon in the patterns of electrical currents that flow in their specialized circuitry, and the scientists are close to demonstrating an actual qubit, Preskill says. "I think in a year or two, we can consider it to be nailed: Topological qubits exist."
Google, for its part, recruited John Martinis, a superconducting qubit expert at the University of California, Santa Barbara (UCSB), who had studied D-Wave's operation and shortcomings. In 2014, the company swallowed his UCSB research team whole, hiring about a dozen people. Soon afterward, Martinis's team announced they had built a nine-qubit machine at UCSB, one of the largest programmable quantum computers so far, and they are now trying to scale up. To avoid creating an unwieldy jumble of wires, they are rebuilding the system into a 2D array that will sit on top of a wafer with control wires etched into it. In July, Martinis's team—now up to about 30 scientists and engineers—used three superconducting qubits to simulate the ground state energy of a hydrogen molecule, demonstrating that quantum computers can simulate simple quantum systems as well as classical computers. The result points to the coming power of a machine with quantum supremacy, he says. Martinis calls the 1-year timetable for reaching a 49-qubit computer a "stretch goal," but he believes it may be possible. Meanwhile, Monroe is grappling with the challenges that come with trapped ions. As qubits, they can remain stable for seconds, thanks to vacuum chambers and electrodes that stabilize them even in the presence of external noise. Yet that isolation also means it is a challenge to get the qubits to interact. Monroe recently entangled 22 ytterbium ions in a linear chain, but so far he is not able to control or query all ion pairs in the chain, as a quantum computer will require. The complexity of controlling the ensemble rises with the number of ions squared, so adding many more is impractical. The way forward, Monroe believes, is to go modular and use fiber optics to link traps holding perhaps 20 ions each. 
In such a scheme, certain qubits within each module would act as hubs, reading out information from the rest of the qubits and sharing it with other modules; this way, most qubits could remain shielded from external interference. On a recent afternoon, Monroe toured his six lab spaces at UMD. In his three older labs, electrical wires and vacuum lines descend in tangles from above. On oversize tables, a bewildering array of lenses and mirrors shape and direct laser light toward portals in small steel vacuum chambers containing the all-important ions. Overhead heating, ventilation, and air conditioning (HVAC) equipment—necessary to keep dust down and stabilize the lab temperature—gives off a steady drone. "I'm passionate about HVAC," Monroe says. The three newer labs are, by contrast, tidy and eerily empty. Instead of Rube Goldberg optics tables, most of the lasers are integrated, plug-and-play units from companies like Honeywell—prototypes for the kinds of turnkey systems that ionQ needs to perfect if it is going to succeed. "The lasers we use now have only one knob, and it's 'on,'" Monroe says. He is antsy to get ionQ's labs up and running, so he can transition his highly paid research scientists onto ionQ's payroll and set them to perfecting technologies they've developed at UMD—which, thanks to an unusual agreement, ionQ can license exclusively and royalty-free. Next year he will take his first-ever sabbatical to focus on building ionQ. The private sector money flowing into quantum research, he says, "is the biggest deal in my career." Even as money has poured in, quantum computing is a long way from becoming a secretive commercial field. The major research groups—even those affiliated with big companies—are still publishing results and presenting at conferences. They say they have a mutual interest in publicizing their advances, not least so that potential customers can think about how they could use a quantum computer. "We all need a market," Monroe says. 
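Monroe's scaling problem, and the appeal of going modular, can be sketched with simple counting. The single-chain figure follows from the n-squared control complexity mentioned above; the modular counting rule (full connectivity inside each ~20-ion trap, one optical link per pair of traps) is a simplifying assumption for illustration, not ionQ's actual design.

```python
def pairwise_controls(n):
    """Ion pairs that must be individually addressable in a single
    chain of n ions: n choose 2, i.e. growth as n squared."""
    return n * (n - 1) // 2

def modular_controls(n, module_size=20):
    """Rough count for a modular layout: full pairwise control inside
    each module, plus one optical link per pair of modules.
    (A simplifying assumption, for illustration only.)"""
    modules = -(-n // module_size)  # ceiling division
    return modules * pairwise_controls(module_size) + pairwise_controls(modules)

for n in (20, 100, 400):
    print(n, pairwise_controls(n), modular_controls(n))
```

Under these assumptions, a single 400-ion chain would need nearly 80,000 individually controlled pairs, while the modular layout needs about 4,000; that gap is the intuition behind linking small traps with fiber optics rather than growing one long chain.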
What's more, nobody knows enough about quantum computing yet to go it alone with a single qubit type. Every approach needs refining before quantum computers can be scaled up. Superconductor- and silicon-based qubits need to be manufactured with more consistency, and the refrigerators that chill them need streamlining. Trapped ions need faster logic gates and more compact lasers and optics. Topological qubits still need to be invented. A future quantum computer could well be a hybrid, with ultrafast superconducting qubits running algorithms, then dumping output to more stable ion memory, while photons shuttle information among different parts of the machine or between nodes of a quantum internet. "One can imagine we'll have an environment in which several types of qubits exist and play different roles," says Krysta Svore, a Microsoft researcher in Redmond, Washington. A quantum computer is so new, and so strange, that even the world's top quantum physicists and computer engineers do not know what a commercial one will ultimately look like. Physicists will need to simply build the most complex computer possible with current technology, then confront the new challenges that are sure to crop up, Svore says. Build, study, and repeat. "We like to joke that once we have a quantum computer, we're going to use it to design the next quantum computer."

News Article | November 15, 2016

While increasing at some sites, kelp, a useful indicator of an ecosystem's health, is undergoing a worldwide decline. A school of sardines swims through a kelp forest at the Channel Islands National Marine Sanctuary off California's coast. Since the area was declared a marine sanctuary five years ago, scientists have observed more lobsters, fewer urchins, and healthier kelp forests. Kelp, with its incredible ability to overcome biological stressors and grow back quickly, is an important indicator of how healthy the ocean is. As such, two studies published recently in the Proceedings of the National Academy of Sciences attempted first to quantify the health of kelp forests around the globe, and then to determine what can cause this resilient species to enter a large-scale decline and how that impacts the ecosystem as a whole. “When kelps show dramatic changes, increases or decreases, that indicates that there has been a real change in the ecosystem,” Jarrett Byrnes, co-author of the first study and an assistant professor of biology at the University of Massachusetts, Boston, tells The Christian Science Monitor. “Kelps are very resilient, they respond very quickly after environmental predation. Anything that can drive a long-term change in a kelp population has got to be pretty big.” Kira Krumhansl, a postdoctoral fellow at Simon Fraser University in British Columbia who led the first study, looked at whether kelp populations were more impacted by local stressors, such as poor water quality and overfishing, or global stressors, such as ocean temperature warming caused by human activity. By analyzing kelp abundance at more than 1,000 sites across the globe, Dr. Krumhansl and her team determined that 38 percent of the world’s kelp forests have declined over the past 50 years, while 25 percent have increased.
The lack of a global trend in the health of kelp forests suggests that regional differences drive the success or failure of a kelp population more than global factors do. “Climate change is impacting kelp forests, but what really drives variation across regions is what is occurring at a local scale,” Krumhansl tells the Monitor. “Kelps’ ability to persist through sources of stress is attributed to their capacity to grow quickly and recover quickly, but this also indicates that where we do see decline in kelp we are really tipping the level of stress over a critical threshold beyond which these species can recover.” Picking up on that line of reasoning, the second study looked into a kelp population in Australia that may have tipped over that threshold because of an invasion of tropical herbivorous fish that have been eating the kelp in the sea off New South Wales. Using video footage from 12 kelp forest sites between 2002 and 2011, the research team from the University of New South Wales in Australia (UNSW) observed that while only 10 percent of kelp showed bite marks from fish in 2002, six years later 70 percent of kelp had been chewed. The dramatic kelp deforestation coincided with a 0.6 degree Celsius temperature rise in the sea off New South Wales, which ushered in a 20 percent increase in various species of tropical fish in the ecosystem – a phenomenon called tropicalization. “We call it a homogenization of community,” Adriana Vergés, a researcher in the School of Biological, Earth and Environmental Sciences at UNSW, told The Guardian. “It’s a bit like globalization – everything starts to become the same everywhere.” By eating the kelp, these non-native tropical fish disrupt the ecosystem.
Peter Steinberg, the director of the Sydney Institute of Marine Science, who contributed to the UNSW study, told The Guardian the damage that the fish – and by association the temperature change – caused to this kelp forest ecosystem was concerning because it showed that tropicalization could be triggered by a smaller increase in ocean temperature than expected. While kelp have proven they can survive in warmer waters, although they thrive at cooler temperatures, those warmer waters bring with them other environmental changes that can prove to be too much for the kelp. The temperature rise needed to initiate a massive kelp decline may therefore be smaller than previously thought, which is bad news for the species and processes that in turn depend on kelp. “Kelp forests are important ecosystems because they support diverse communities of organisms in coastal zones, including many fishery species, and they provide a mechanism of carbon storage in coastal zones, they shelter shorelines from incoming wave actions,” Krumhansl told the Monitor. “They provide a number of benefits to humans, they are really critical for our livelihood and survival.”

News Article | February 13, 2017

Scientists have discovered “extraordinary” levels of toxic pollution in the most remote and inaccessible place on the planet – the 10km deep Mariana trench in the Pacific Ocean. Small crustaceans that live in the pitch-black waters of the trench, captured by a robotic submarine, were contaminated with 50 times more toxic chemicals than crabs that survive in heavily polluted rivers in China. “We still think of the deep ocean as being this remote and pristine realm, safe from human impact, but our research shows that, sadly, this could not be further from the truth,” said Alan Jamieson of Newcastle University in the UK, who led the research. “The fact that we found such extraordinary levels of these pollutants really brings home the long-term, devastating impact that mankind is having on the planet,” he said. Jamieson’s team identified two key types of severely toxic industrial chemicals that were banned in the late 1970s, but do not break down in the environment, known as persistent organic pollutants (POPs). These chemicals have previously been found at high levels in Inuit people in the Canadian Arctic and in killer whales and dolphins in western Europe. The research, published in the journal Nature Ecology and Evolution, suggests that the POPs infiltrate the deepest parts of the oceans as dead animals and particles of plastic fall downwards. POPs accumulate in fat and are therefore concentrated in creatures up the food chain. They are also water-repellent and so stick to plastic waste. “The very bottom of the deep trenches like the Mariana are inhabited by incredibly efficient scavenging animals, like the 2cm-long amphipods we sampled, so any little bit of organic material that falls down, these guys turn up in huge numbers and devour it,” said Jamieson. He said it was not unexpected that some POPs would be found in the deepest parts of the oceans: “When it gets down into the trenches, there is nowhere else for it to go. 
The surprise was just how high the levels were – the contamination in the animals was sky high.” The level of one type of POP, called polychlorinated biphenyls (PCBs), was only equalled anywhere in the northwest Pacific in Suruga Bay in Japan, an infamous pollution blackspot. The researchers also found severe contamination in amphipods collected in the Kermadec trench, which is 7,000km from the Mariana trench. The pollution was ubiquitous, found “in all samples across all species at all depths in both trenches”, the scientists said. PCBs were manufactured from the 1930s to the 1970s, when their appalling impact on people and wildlife was realised. About a third of the 1.3m tonnes produced has already leaked into coastal sediments and the open oceans, with a steady stream still thought to be coming from poorly protected landfill sites. An expedition conducted by the US National Oceanic and Atmospheric Administration last year also found various manmade items on the slopes leading to the Sirena Deep, part of the Mariana trench, and the nearby Enigma Seamount. They included a tin of Spam, a can of Budweiser beer and several plastic bags. The results are both significant and disturbing, said the marine ecologist Katherine Dafforn of the University of New South Wales in Australia, who was not part of the research team: “The trenches are many miles away from any industrial source and suggests that the delivery of these pollutants occurs over long distances despite regulation since the 1970s. “We still know more about the surface of the moon than that of the ocean floor,” Dafforn said. She said the new research showed that the deep ocean trenches are not as isolated as people imagine. “Jamieson’s team has provided clear evidence that the deep ocean, rather than being remote, is highly connected to surface waters.
Their findings are crucial for future monitoring and management of these unique environments.” POPs cause a wide range of damage to life, particularly harming reproductive success. Jamieson is now assessing the impact on the hardy trench creatures, which survive water pressures equivalent to balancing a tonne weight on a fingertip and temperatures of just 1C. He is also examining the deep sea animals for evidence of plastic pollution, feared to be widespread in the oceans, which has been the focus of much recent attention, leading to bans on plastic microbeads in cosmetics in the UK and US. “I reckon it will be there,” he said. Jamieson said it had been positive that the dangers of POPs had been identified and their use ended but that plastic pollution presented a new concern for contamination of the oceans. “We’ve just done it again,” he said.

News Article | November 15, 2016

While increasing at some sites, kelp, a useful indicator of an ecosystem's health, is undergoing a worldwide decline. A school of sardines swims through a kelp forest at the Channel Islands National Marine Sanctuary off California's coast. Since the area was declared a marine sanctuary five years ago, scientists have observed more lobsters, fewer urchins, and healthier kelp forests. Kelp, with its incredible ability to overcome biological stressors and grow back quickly, is an important indicator of how healthy the ocean is. As such, two studies published recently in the Proceedings of the National Academy of Sciences attempted first to quantify the health of kelp forests around the globe, and then to determine what can cause this resilient species to enter a large-scale decline and how that impacts the ecosystem as a whole. “When kelps show dramatic changes, increases or decreases, that indicates that there has been a real change in the ecosystem,” Jarrett Byrnes, co-author of the first study and an assistant professor of biology at University of Massachusetts, Boston, tells The Christian Science Monitor. “Kelps are very resilient, they respond very quickly after environmental predation. Anything that can drive a long-term change in a kelp population has got to be pretty big.” Kira Krumhansl, a postdoctoral fellow at Simon Fraser University in British Columbia who led the first study, looked at whether kelp populations were more impacted by local stressors, such as poor water quality and overfishing, or global stressors, such as ocean warming caused by human activity. By analyzing kelp abundance at more than 1,000 sites across the globe, Dr. Krumhansl and her team determined that 38 percent of the world’s kelp forests have declined over the past 50 years, while 25 percent have increased. 
The lack of a global trend in the health of kelp forests suggests that it is the regional differences that drive the success or failure of a kelp population more than the global factors. “Climate change is impacting kelp forests, but what really drives variation across regions is what is occurring at a local scale,” Krumhansl tells the Monitor. “Kelps’ ability to persist through sources of stress is attributed to their capacity to grow quickly and recover quickly, but this also indicates that where we do see decline in kelp we are really tipping the level of stress over a critical threshold beyond which these species can recover.” Picking up on that line of reasoning, the second study looked into a kelp population in Australia that may have tipped over that threshold because of an invasion of tropical herbivorous fish into the waters off New South Wales that have been eating the kelp. Using video footage from 12 kelp forest sites between 2002 and 2011, the research team from the University of New South Wales in Australia (UNSW) observed that while only 10 percent of kelp showed bite marks from fish in 2002, six years later 70 percent of kelp had been chewed. The dramatic kelp deforestation coincided with a 0.6 degree Celsius temperature rise in the waters off New South Wales, which ushered in a 20 percent increase in various species of tropical fish in the ecosystem – in a phenomenon called tropicalization. “We call it a homogenization of community,” Adriana Vergés, a researcher in the School of Biological, Earth and Environmental Sciences at UNSW, told The Guardian. “It’s a bit like globalization – everything starts to become the same everywhere.” By eating the kelp, these non-native tropical fish disrupt the ecosystem. 
Peter Steinberg, the director of the Sydney Institute of Marine Science who contributed to the UNSW study, told The Guardian the damage that the fish – and by association the temperature change – caused to this kelp forest ecosystem was concerning because it showed that tropicalization could be triggered by a smaller increase in ocean temperature than expected. While kelp have proven they can survive in warmer waters, although they thrive at cooler temperatures, those warmer waters bring with them other environmental changes that can prove to be too much for the kelp. Therefore the temperature rise needed to initiate a massive kelp decline may be smaller than expected, which is bad news for the species and processes that in turn depend on kelp. “Kelp forests are important ecosystems because they support diverse communities of organisms in coastal zones, including many fishery species, and they provide a mechanism of carbon storage in coastal zones, they shelter shorelines from incoming wave actions,” Krumhansl told the Monitor. “They provide a number of benefits to humans, they are really critical for our livelihood and survival.”

Society as a whole has been trying to address the issue of gender equality in the workplace for decades, and the technology sector has been no exception. Women in the tech industry have long been outnumbered by men and by a wider percentage than the rest of the economy, as TechRepublic detailed in our award-winning 2015 cover story that also looked specifically at the challenges in venture capital. But the traditional stereotype of the tech industry is changing in Australia. It is one country where the industry has ramped up efforts to purposefully encourage more women to enter the industry, as well as stay and climb the corporate ladder. And, the numbers are starting to reflect it—if only modestly. Australia's IT industry is slightly better at appointing women to leadership roles, in comparison to the rest of the world. Eleven percent of Harvey Nash's respondents to its 2015 CIO Survey were female IT leaders, 2% higher than the global average of 9%. And, if we take a look at some of the major tech companies in Australia, women are starting to take top roles: Within the startup community there are women such as University of New South Wales physics professor Michelle Simmons, who is expected to make real headway in quantum computing, after picking up AU$46 million in funding from the Australian government, as well as the Commonwealth Bank of Australia and Telstra. The funds are expected to support Simmons' development of silicon quantum computing technology. Canva founder Melanie Perkins, Shoes of Prey co-founder Jodie Fox, Showpo founder Jane Lu, and OneShift founder Gen George are also among a handful of successful Australian tech entrepreneurs, who all happen to be women. Microsoft's Pip Marlow told TechRepublic that gender inequality as a topic in tech is more important than ever, given companies are designing solutions for a general population that is 50% female. 
Not only that, it's a huge emerging market, Marlow said, pointing to the US, for instance, where women account for $20 trillion in consumer spend every year. "More and more product designs, more and more solutions for customers has to not just be for men, but for women, and putting women at the centre of the technology that is being designed is so important," she said. Over the last few years, one of the first initiatives large organisations including Dell, Twitter, and Microsoft have introduced to combat gender inequality within their companies has been unconscious bias training. While unconscious bias cannot be completely eliminated, Psynapse founder Jennifer Whelan said bringing awareness into the equation helps people make more conscious decisions. During the training process, companies are encouraged to openly discuss the contributing factors of unconscious bias to ensure that from the top down there is a cultural awareness, and businesses are putting in place appropriate processes for recruitment, promotion, and performance evaluation. This can range from including at least one woman in the recruitment process on both the applicant and employer sides to consciously giving women the opportunity to rise to leadership roles. "Frankly, everybody took something away from [the unconscious bias training]. It's more than just the gender bias; it's that knowledge you're presenting and the lens you're putting on the world, and how that may influence your decisions or the opportunity you give people," Dell Australia and New Zealand managing director Angela Fox said. But the unconscious bias training doesn't stop at gender for Microsoft. Marlow's own personal philosophy at the firm is to "lead with inclusion" across gender, as well as skin colour, race, religion, and sexual orientation. Drawing from her own needs as a female leader, Marlow has long been an avid believer in promoting "work-life balance" for both men and women. 
She said it helps address a common concern for many women who question if they can successfully be a mother, wife, employee, and leader. During her interview with TechRepublic, Marlow admitted that despite having an arm injury that prevented her from physically going into the office, she was still able to work from home. With the rise of technology, the line between life and work has blurred, giving people the opportunity to work the hours they want while attending to family life. "We have the technology and the ability now to create creative workplaces, so that actually means that I can attend the swimming carnival or I can go to the school graduation or [a] play, and I can still do my job," Marlow said. "I don't believe work is a place you go; it's a thing you do, and if you take that into account, then you can tap into the minds of 100% of the population." Flexible working hours are also a priority at Dropbox, which recently partnered with Diversity City Careers in an effort to attract and retain women who may also be primary caregivers at home. The need for flexible working hours became apparent to Dropbox Australia managing director Charlie Wood during his days as the founder of SocialCRM. At the time, Wood hired a team of women, whom he described as having "amazing talent" but needing flexible working hours because they could only work between 10am and 2pm, and couldn't drive from Sydney's Northern Beaches to the city. The benefits of flexible work practices were also experienced firsthand by Accenture Technology's Livesey, who was promoted while on maternity leave to her current role. She acknowledged it was only possible because of the support of her team. "We talk about 'flexing for life', and that's for all of our staff. It is about helping our teams get what they want out of their career, and, if anything, we're seeing a lot of people feeling—particularly as they've been on maternity leave—really empowered to come back to the business and contribute. 
But, on their own terms." Pinterest, Twitter, Intel, and Accenture are some companies holding themselves accountable to improving the gender imbalance by publishing diversity data and setting target goals around overall female representation, female representation in management roles, and female representation in engineering—the hardest one to crack. In 2016, Twitter is striving to boost representation for women in the global workforce to 35%, and female representation in tech roles to 16%. The priority for Pinterest this year is to lift female hiring rates for full-time engineers to at least 30%, which in 2015 was only 19%. "You have to be transparent and start looking at the numbers. I think you have to take the attitude of 'stop making excuses'. If there are not enough women there, I think the question is why aren't there enough women there, and start asking very fundamental questions right back to your recruitment process," said Twitter's Stocks, who said it is not uncommon to see no females during a recruitment process. Regrettably, the lack of women in technology is not only a business issue, but also an education problem. According to the Australian Bureau of Statistics (ABS), women tend to be underrepresented in science, technology, engineering, and mathematics (STEM) education and training. In 2015, for example, there were only 3,900 females studying computer science, versus 14,600 males. A similar slew of statistics was also evident in information technology, engineering and related technologies, and electrical and electronic engineering and technology. Malcolm Turnbull, communications minister turned prime minister, admitted last May that the number of students taking up STEM learning had dropped significantly, saying that "the country has gone backwards". "Of our 600,000 workers in ICT, more than half work outside the traditional ICT sector. 
Seventy-five percent of the fastest-growing occupations require STEM skills, but only half of year 12 students are studying science; that's down from 94% 20 years ago. That is really a retrograde development, and we have to turn that around," Turnbull said. At the end of last year, under its National Innovation and Science Agenda, the Australian government pledged AU$51 million in a bid to help students in Australia embrace the digital age and prepare for the jobs of the future, along with AU$48 million to inspire STEM literacy and AU$13 million to encourage women to take up roles in STEM sectors. The AU$51 million will include the formation of IT summer schools for students in years 9 and 10, an annual Cracking the Code competition for those in years 4 through 12, and online computing challenges for years 5 and 7 students. Teachers will also receive assistance with access to online support for preparing digital technology-based curriculum activities. The Office of the Chief Scientist published the first national STEM Programme Index (SPI) at the beginning of 2016 to give schools access to a list of more than 250 STEM programs such as competitions, excursions, and online activities available to students. Some may argue kids these days are tech savvy enough they don't need to study STEM subjects given that they are born into the digital age. But Intel's Burleigh argued "it couldn't be further from the truth". "You actually get more removed from the technology ... so it's the right thing that we've dumbed it down, but ... you have to be prepared to literally get a screwdriver out, pull your phone or laptop apart, and understand how the technology works. It's that person who does that [that] I want to employ; not the person who knows how to use it," she said. But the onus isn't just on teachers but parents, too, ACS vice president Arnold Wong stressed. He believes parents are guilty of stereotyping tech as an industry for men only. 
Psynapse's Whelan said any unconscious bias formed at home carries over to how children interact with each other when they go to school, and ultimately impacts students' interests during the "crucial moments" of years 9 and 10 when they have to choose electives. Sera Prince McGill, former Girl Geek Sydney lead organiser, highlighted that since birth, we have all been exposed to the vicious cycle of gender stereotyping—everything from clothes to toys have always been marketed with a gender divide. Such marketing exacerbates the gender bias, she said, and therefore determines what activities are appropriate for each gender. "The result is that women have been systematically socialised away from technology. Marketing has created a societal bias, especially in the '80s and '90s, that computer science was for boys," she said. However, the gender bias hasn't always existed. Dating all the way back to the 1800s, Ada Lovelace was, of course, one of the earliest computer programmers. It wasn't until 1984—the same year Apple released its first Macintosh—that the uptake of computer science by women began to plunge, while studies in medicine, law, and physical sciences continued to surge, as reported by National Public Radio. McGill, drawing on research by Elizabeth Sweet, a sociologist and lecturer at the University of California, suggested from the early 20th century to the 1970s, there were far fewer gendered toys or clothing. Leading the charge of the social-gender divide in technology was the video game industry. "1983 had seen a huge crash in recession in what [the game industry] had been up to ... so one contributing factor was the lack of marketing; so enter the marketers. Their research found children who played games in the 1980s, 70% were boys and 30% were girls," she said. "Games companies wanted to ensure they could move product and create profit, so of course they went after the male demographic only. 
There is no denying it made them money; Atari and Nintendo became household names. The upshot of this approach was that it spilled into the toys market generally, because if the toy market is divided, there might be a greater chance of profit." Interestingly, many of the women who have risen to the top of technology companies have come into the industry through other fields such as marketing, sales, and accounting. Australia has traditionally been a hub where companies come to establish a local presence through a support centre, rather than a central location for product development. Harvey Nash managing director Bridget Gray believes this perception is a contributing factor to why the demand for engineering skills is relatively lower than for other skills in Australia. "It's critical that Australia is viewed as a desirable and supportive location for startups and product development if we want to seriously play in the global digital economy," she said. "Australia would be in a better position to offer more opportunities to engineers of all levels, and retain top talent if we were able to attract a greater volume of companies, large and small, to headquarter their product development and innovation centres locally." Intel's Burleigh is one example of the women who have found themselves working for a tech company after spending years in sales and marketing. Given the nature of the sales and marketing industry, where there are typically more females, Burleigh said she never noticed the gender imbalance. It was only when she started putting her hand up for enterprise sales roles that she felt she had to justify her job a little more. 
"It wasn't that it was harder, but I had to have that second or third conversation that I really was serious and I seriously wanted it, because were a few question marks over could you really do it, and I could never say it was because I was female, or was it because my skills were [such that] I was identified more strongly as a consumer marketer," she said. Due to the lack of females coming through the education pipeline, businesses such as Microsoft have begun broadening their searches—and it's starting to pay off. Women now make up at least 50% of all of Microsoft's new university graduates being hired. Despite decades of roadblocks for women in technology, the answer to the problem is nothing but a social evolution, according to Whelan. She believes that with the attitudes of newer generations changing as they go against "social norms", such as seeing more women work outside of the home, it's only a matter of time before there is an evolutionary breakthrough. There are "glimmers of hope" with female CEOs already leading the likes of Twitter, Intel, Dell, and Spotify in Australia, Marlow said. "This is a time for us to push harder, and drive the change we want to see."

News Article | February 27, 2017

The combined results of two ovarian cancer screening trials suggest that a personalized strategy involving frequent screening of high-risk women could improve the chance that tumors are detected at early stages when they are easier to treat. As reported in a paper published in Clinical Cancer Research, these trials imply that a protocol involving quarterly blood tests to identify significant increases above each patient's personal baseline in levels of the protein CA125, followed by ultrasound examination when such elevations are detected, could reduce the risk of diagnosis with advanced cancer in high-risk women who choose to delay recommended preventive surgery. "The standard advice for women at high risk of ovarian cancer, due to either family history or inherited gene mutations, is to have their ovaries and fallopian tubes removed once their families are complete. Some women choose to postpone this surgery," says Steven Skates, PhD, of the Massachusetts General Hospital (MGH) Cancer Center and the Biostatistics Unit, co-lead and corresponding author of the report. "Our screening protocol increased the proportion of tumors detected at early stages from 10 percent - which is typically seen in high-risk women who are not screened - to 50 percent." CA125 levels are known to be raised above 35 in the blood of most women with ovarian cancer. While screening for raised CA125 and/or transvaginal ultrasound may be considered for high-risk women who postpone surgery, that approach has not been shown to improve patient outcomes. The two trials reported in the current paper utilize the Risk of Ovarian Cancer Algorithm (ROCA) - co-developed by Skates and Ian Jacobs, MD, FRCOG, of the University of New South Wales in Australia and University College London - which tracks CA125 levels over time to identify significant elevations above each patient's baseline levels, even those that do not exceed the traditional threshold of 35. 
One trial conducted through the National Cancer Institute's Cancer Genetics Network (CGN) - with additional patients from two ovarian Specialized Programs of Research Excellence (SPORE) sites and two Early Detection Research Network (EDRN) sites - was led by Skates. The other, conducted through the Gynecologic Oncology Group (GOG), was led by Mark H. Greene, MD, of the Clinical Genetics Branch at the National Cancer Institute (NCI). Both trials followed similar protocols, enrolling women at elevated risk for ovarian cancer - based on either a strong family history of ovarian and/or breast cancer or the presence in the patient or in close blood relatives of risk-associated mutations in the BRCA1 or BRCA2 genes - who had not yet had risk-reducing surgery. Participants had CA125 blood tests utilizing ROCA every three months, compared with screening for raised CA125 values every 6 or 12 months as in previous screening studies. The investigators calculated a patient's ROCA risk by analyzing the results of each new CA125 test, combined with previous results, and factors such as participant's age and menopausal status. Women at intermediate ROCA risk were referred for an ultrasound examination, while those at an elevated ROCA risk received both ultrasound and clinical evaluation by either a gynecologic oncologist or the site principal investigator. While the results of those examinations were used to guide decisions about surgical treatment, study participants were free to choose to have their ovaries and fallopian tubes removed at any time during the clinical trials, as is standard practice for women with a BRCA1/2 mutation. Between 2001 and 2011, the CGN trial enrolled 2,359 women at 25 U.S. sites. The GOG trial enrolled 1,459 women at 112 sites in the U.S. and Australia between 2003 and 2006 and screened them for five years. Among the more than 3,800 participants in both studies, 19 malignant tumors of the ovaries or fallopian tubes were identified during the study periods. 
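The screening logic described in this protocol, tracking each woman's personal CA125 baseline and flagging significant rises above it, can be sketched in a few lines of code. The actual Risk of Ovarian Cancer Algorithm is a published Bayesian change-point model whose details are not reproduced here; the z-score thresholds, the two risk-tier labels, and the fallback cutoff of 35 in this sketch are illustrative assumptions only.

```python
# Hypothetical sketch of a ROCA-style longitudinal screening rule.
# The real ROCA is a Bayesian change-point model; the thresholds and
# tier names below are illustrative, not the published algorithm.
from statistics import mean, stdev

def screen(history, new_value, abs_threshold=35.0,
           z_intermediate=2.0, z_elevated=3.0):
    """Classify a new quarterly CA125 reading against a personal baseline.

    history: prior CA125 readings used to establish the baseline.
    Returns 'normal', 'intermediate' (refer for ultrasound), or
    'elevated' (ultrasound plus clinical evaluation).
    """
    if len(history) < 2:
        # No personal baseline yet: fall back on the fixed cutoff.
        return "elevated" if new_value > abs_threshold else "normal"
    baseline, spread = mean(history), stdev(history)
    z = (new_value - baseline) / spread if spread > 0 else 0.0
    if z >= z_elevated or new_value > abs_threshold:
        return "elevated"
    if z >= z_intermediate:
        return "intermediate"
    return "normal"
```

In this toy version, a reading of 13 against a stable baseline around 10 to 12 would trigger an ultrasound referral even though it sits well below the traditional cutoff of 35, which is the key idea behind tracking each patient's own trajectory rather than a single population-wide threshold.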
Ten cases were diagnosed during screening, and nine were diagnosed by preventive surgery. Of the ten cases, there was evidence that four were present at the outset of the trial, while six tumors were likely to have developed during the trial period after a CA125 baseline had been measured. While the algorithm can calculate risk without a baseline, ROCA works best when a baseline has been established. The results in these six cases reflect the benefits of a long-term ROCA screening program; all but one were diagnosed by ROCA, giving a sensitivity of over 80 percent, and 50 percent were detected at early stages. Another study - the UK Familial Ovarian Cancer Screening Study, led by ROCA co-developer Jacobs and published today in the Journal of Clinical Oncology - found that a similar protocol using ROCA-based testing every four months was also better than current practice at diagnosing early-stage tumors in high-risk women. Skates notes that a formal analysis of the data from all three trials could increase the statistical power of these studies and could lend stronger support to recommending frequent ROCA-based screening for high-risk women who choose to postpone surgery or while waiting for surgery. While the pattern of cancers detected in these studies supported the potential value of ROCA screening, these studies were not designed to assess whether screening reduced deaths due to ovarian cancer, the authors note. "It is important to note that removal of ovaries and fallopian tubes remains the standard of care when women at increased familial or genetic risk complete their families and reach an age when their risk exceeds that of the general population," stresses Skates, who is an associate professor of Medicine at Harvard Medical School. 
Adds Greene, who is a senior principal investigator at the NCI, "Surgery is the primary and best option for reducing the risk of ovarian cancer, and ROCA should only be considered as a promising but unproven option for patients who decide, against medical advice, to postpone their surgery." Both investigators note that further research to identify a greater range of ovarian cancer biomarkers and improved imaging technologies is needed to help detect even more tumors at even earlier stages. Skates is leading a program to discover new biomarkers for early detection of ovarian cancer as part of NCI's Early Detection Research Network. Skates and Greene are co-lead authors of the Clinical Cancer Research report; and Dianne Finkelstein, PhD, MGH Biostatistics, and Karen Lu, MD, M.D. Anderson Cancer Center, are co-senior authors. Support for the study includes multiple grants from the National Cancer Institute to the Cancer Genetics Network, the Specialized Programs of Research Excellence, and the Early Detection Research Network, and support from the NCI Intramural Research Program. Massachusetts General Hospital has co-licensed software implementing ROCA. Skates is on the scientific advisory board of SISCAPA Assay Technologies, a consultant for Abcodia, and has received a speaker honorarium from AstraZeneca. The NCI Division of Cancer Epidemiology and Genetics (DCEG), home to the Clinical Genetics Branch, is a global leader in cancer epidemiology and genetics research. With its cadre of renowned epidemiologists, geneticists and biostatisticians, DCEG conducts population and multidisciplinary research to discover the genetic and environmental determinants of cancer and new approaches to cancer prevention. The DCEG research portfolio informs biological concepts, clinical practice and public health policy. Massachusetts General Hospital, founded in 1811, is the original and largest teaching hospital of Harvard Medical School. 
The MGH Research Institute conducts the largest hospital-based research program in the nation, with an annual research budget of more than $800 million and major research centers in HIV/AIDS, cardiovascular research, cancer, computational and integrative biology, cutaneous biology, human genetics, medical imaging, neurodegenerative disorders, regenerative medicine, reproductive biology, systems biology, photomedicine and transplantation biology. The MGH topped the 2015 Nature Index list of health care organizations publishing in leading scientific journals and earned the prestigious 2015 Foster G. McGaw Prize for Excellence in Community Service. In August 2016 the MGH was once again named to the Honor Roll in the U.S. News & World Report list of "America's Best Hospitals."

News Article | November 7, 2016

The former governor general Quentin Bryce is part of a high-powered group of academics, business leaders, financiers and energy providers urging the Turnbull government to extend and expand the national renewable energy target and create a market mechanism to govern an orderly phase out of coal-fired power in Australia. Representatives of the group came to Canberra on Monday to meet the energy minister, Josh Frydenberg, armed with an eight-point plan to drive a sequenced decarbonisation of the national energy market. The intervention follows weeks of political attacks by the Turnbull government on state-based renewable energy schemes and the government’s continuing refusal to say whether it will use a planned review of its Direct Action policy next year to overhaul an emissions reduction framework experts say is woefully inadequate. The leadership forum on energy transition, led by the Australian Conservation Foundation and hosted by the University of New South Wales, has used the resumption of parliament to call on the Turnbull government to establish national rules, policies, regulations, markets and a basis for investment to drive an orderly transition away from coal-fired power. As a first step, the group says the government needs to update Australia’s national electricity objective “to include a clear goal to accelerate Australia’s energy transition towards net zero emissions before 2050, to fulfil our domestic policy objectives and international commitments”. It says the government should consider creating an emissions intensity standard to govern the regulated closure of highly emissions intensive power stations and legislate a market mechanism that would allow a rational sequence of closures, where the dirtiest power stations close first. It says an alternative to a market mechanism would be an age-based regulation that tightens over time, ensuring the oldest coal-fired power stations closed first. 
The group is also calling on the government to extend and expand the large-scale element of the renewable energy target scheme beyond 2020 and introduce economy-wide complementary laws, such as a carbon price, to make companies pay for their pollution. As well as calling for a rational policy framework to ensure Australia complies with its Paris emissions reduction targets, and investors are given the policy certainty they need to bring on low-emissions technologies, the eight-point plan also emphasises the importance of transitional assistance for coal communities and help for low-income earners by way of compensation for higher power prices. The group is chaired by Ian Jacobs, the vice chancellor of UNSW, and includes Geoff Cousins, the president of the ACF, Bryce, Jillian Broadbent, the chairwoman of the Clean Energy Finance Corporation, David Thodey, the chairman of the CSIRO, financier Mark Burrows, company director and sustainability adviser Sam Mostyn and Martijn Wilder, a lawyer and the chairman of the Australian Renewable Energy Agency. The intervention is part of positioning by a number of stakeholders ahead of the Direct Action review in 2017, and it runs in parallel to a review being led by the chief scientist, Alan Finkel, of the national energy market, for state and federal energy ministers. It also follows confirmation last Thursday that the Hazelwood coal-fired power station in Victoria’s Latrobe valley will close next March. The Finkel review was preceded by a pitched political battle between the Turnbull government, which launched a rhetorical assault on state-based renewable energy targets after a statewide power blackout in South Australia in September, and state governments, who argue the commonwealth will not be able to meet its international emissions reduction targets without the state RETs. 
The South Australian government has signalled it will use the framework of the Finkel review to argue for an emissions intensity scheme for the electricity sector – a form of carbon trading. A coalition of major business and energy users have already urged federal and state governments to work cooperatively to map out a strategic response to the decarbonisation challenge and have warned investment is at risk.

News Article | November 4, 2016

The Opening Ceremony was held at the Diaoyutai State Guesthouse. Addressing the Opening Ceremony, Peking University President Professor Lin Jianhua said, “For universities in the Chinese mainland, we have a commitment to take on the responsibility for the society, the nation and the world. Peking University has inherited and has been diligently following this tradition. On the one hand, we teach students to bear social responsibility during their studies and after graduation. On the other hand, the universities have to bear their responsibility for the community, the region, and the nation, during their course of development and advancement.” Professor Lin added, “This is a difficult mission, and it calls for concerted efforts of higher education leaders.” Speaking at the Opening Ceremony, PolyU President Professor Timothy W. Tong said, “Over the last few decades, global challenges such as economic development, environmental protection and technological innovation have driven universities worldwide to redefine their roles and responsibilities beyond traditional education and research in order to bolster their impact on society. Consequently, social responsibility has become a subject high on the agenda.” Professor Tong added, “The USR Network member universities share the same vision of making our world increasingly just, inclusive, peaceful and sustainable. With an emphasis on collaboration among members and with other networks and alliances, the Network has vigorously promoted USR by organizing a number of projects including this University Social Responsibility Summit.” This year, the Summit has brought together more than 50 speakers who are higher education leaders and scholars from over 10 countries and regions. 
They exchanged views at three Presidents’ Roundtable sessions respectively themed “Social Responsibility: A Core Mission of Universities in 21st Century?”, “USR: Translating Vision into Action and Impact”, and “USR in Asia: Challenges and Opportunities”. Plenary sessions to be held tomorrow (5 November) will include “Community Engagement in Higher Education: Policy and Practice”, “Nurturing Future Leaders through Service-Learning: Strategies and Learning Outcomes” and “Building Disaster Response Capacity – University Students as Community First Responders”. This is the first time that the Summit has had a separate Student Forum, held on 4 November at the Peking University campus. The Forum attracted more than 100 students, many of them delegates from the USR Network member universities. In addition, there will be a student presentation session tomorrow, in which four teams of students from PolyU, PekingU, Sichuan University and Beijing Normal University will share their views and practical experience of USR from the students’ perspective. Their presence and contribution at the Summit are evidence of the USR Network’s commitment to engaging the university community to address world challenges and shape a better future. The second Executive Committee meeting of the USR Network was held yesterday (3 November) to discuss the strategies and work for the coming year. With two new members, the University of Sao Paulo, Brazil, and the University of Pretoria, South Africa, the USR Network now includes the following 14 universities (in alphabetical order of their country):

Australia | University of New South Wales
Brazil | University of Sao Paulo
Hong Kong, P.R.C. | The Hong Kong Polytechnic University
Israel | University of Haifa
Japan | Kyoto University
Korea | Yonsei University
P.R.C. | Peking University
P.R.C. | Beijing Normal University
P.R.C. | Sichuan University
South Africa | University of Pretoria
U.K. | Clare Hall, University of Cambridge
U.K. | The University of Manchester
U.S.A. | Tufts University
U.S.A. | Washington University in St. Louis

For details of the USR Network, please visit

News Article | April 6, 2016

Type 'depression' into the Apple App Store and a list of at least a hundred programs will pop up on the screen. There are apps that diagnose depression (Depression Test), track moods (Optimism) and help people to “think more positive” (Affirmations!). There's Depression Cure Hypnosis (“The #1 Depression Cure Hypnosis App in the App Store”), Gratitude Journal (“the easiest and most effective way to rewire your brain in just five minutes a day”), and dozens more. And that's just for depression. There are apps pitched at people struggling with anxiety, schizophrenia, post-traumatic stress disorder (PTSD), eating disorders and addiction. This burgeoning industry may meet an important need. Estimates suggest that about 29% of people will experience a mental disorder in their lifetime1. Data from the World Health Organization (WHO) show that many of those people — up to 55% in developed countries and 85% in developing ones — are not getting the treatment they need. Mobile health apps could help to fill the gap (see 'Mobilizing mental health'). Given the ubiquity of smartphones, apps might serve as a digital lifeline — particularly in rural and low-income regions — putting a portable therapist in every pocket. “We can now reach people that up until recently were completely unreachable to us,” says Dror Ben-Zeev, who directs the mHealth for Mental Health Program at the Dartmouth Psychiatric Research Center in Lebanon, New Hampshire. Public-health organizations have been buying into the concept. In its Mental Health Action Plan 2013–2020, the WHO recommended “the promotion of self-care, for instance, through the use of electronic and mobile health technologies.” And the UK National Health Service (NHS) website NHS Choices carries a short list of online mental-health resources, including a few apps, that it has formally endorsed. But the technology is moving a lot faster than the science. 
Although there is some evidence that empirically based, well-designed mental-health apps can improve outcomes for patients, the vast majority remain unstudied. They may or may not be effective, and some may even be harmful. Scientists and health officials are now beginning to investigate their potential benefits and pitfalls more thoroughly, but there is still a lot left to learn and little guidance for consumers. “If you type in 'depression', it's hard to know if the apps that you get back are high quality, if they work, if they're even safe to use,” says John Torous, a psychiatrist at Harvard Medical School in Boston, Massachusetts, who chairs the American Psychiatric Association's Smartphone App Evaluation Task Force. “Right now it almost feels like the Wild West of health care.” Electronic interventions are not new to psychology; there is a robust literature showing that Internet-based cognitive behavioural therapy (CBT), a therapeutic approach that aims to change problematic thoughts and behaviours, can be effective for treating conditions such as depression, anxiety and eating disorders. But many of these online therapeutic programmes are designed to be completed in lengthy sessions in front of a conventional computer screen. Smartphone apps, on the other hand, can be used on the go. “It's a way of people getting access to treatment that's flexible and fits in with their lifestyle and also deals with the issues around stigma — if people are not quite ready to maybe go and see their doctor, then it might be a first step to seeking help,” says Jen Martin, the programme manager at MindTech, a national centre funded by the United Kingdom's National Institute for Health Research and devoted to developing and testing new mental-health technologies. One of the best-known publicly available apps was devised to meet that desire for flexibility. 
In 2010, US government psychologists conducting focus groups with military veterans who had PTSD learned that they wanted a tool they could use whenever their symptoms flared up. “They wanted something that they could use in the moment when the distress was rising — so when they were in line at the supermarket,” says Eric Kuhn, a clinical psychologist and the mobile apps lead at the US Department of Veterans Affairs' National Center for PTSD. The department joined up with the US Department of Defense to create PTSD Coach, a free smartphone app released in early 2011. Anyone who has experienced trauma can use the app to learn more about PTSD, track symptoms and set up a support network of friends and family members. The app also provides strategies for coping with overwhelming emotions; it might suggest that users distract themselves by finding a funny video on YouTube or lead users through visualization exercises. In its first three years in app stores, PTSD Coach was downloaded more than 150,000 times in 86 different countries. It has shown promise in several small studies; in a 2014 study of 45 veterans, more than 80% reported that the app helped them to track and manage their symptoms and provided practical solutions to their problems2. More results are expected soon. Kuhn and his colleagues recently completed a 120-person randomized trial of the app, and a Dutch team is currently analysing data from a 1,300-patient trial on a similar app called SUPPORT Coach. Smartphone apps can also interact with users proactively, pinging them to ask about their moods, thoughts and overall well-being. Ben-Zeev created one called FOCUS, which is geared towards patients with schizophrenia. 
Several times a day, the app prompts users to answer questions such as “How well did you sleep last night?” or “How has your mood been today?” If users report that they slept poorly, or have been feeling anxious, the app will suggest strategies for tackling that problem, such as limiting caffeine intake or doing some deep-breathing exercises. Some apps help people to stay connected to health-care professionals, too. ClinTouch, a psychiatric-symptom-assessment app designed by researchers at the University of Manchester, UK, analyses users' responses for signs that they may be experiencing a relapse; it can even notify a clinical-care team. Small feasibility studies — which are generally designed to determine whether an intervention is practical, but do not necessarily evaluate its efficacy — have shown that patients use and like both apps, and a 2014 study found that those who used FOCUS for a month experienced a reduction in psychotic symptoms and depression3. FOCUS and ClinTouch are both now being evaluated in randomized, controlled trials. Some researchers see opportunities in the data that smartphones collect about their users' movement patterns or communication activity, which could provide a potential window into mental health. “Your smartphone is really this interesting diary of your life,” says Anmol Madan, the co-founder and chief executive of, a digital mental-health company based in San Francisco, California. Studies have now shown that certain patterns of smartphone use can predict changes in mental-health symptoms4; a drop in the frequency of outgoing text messages, for instance, may suggest that a user's depression is worsening. The app, which is still in beta, monitors these sorts of patterns and alerts each user's assigned mental-health coach if it detects a worrying change. The evidence supporting the use of such apps is building5, 6, 7. But this is a science in its infancy. 
Much of the research has been limited to pilot studies, and randomized trials tend to be small and unreplicated. Many studies have been conducted by the apps' own developers, rather than by independent researchers. Placebo-controlled trials are rare, raising the possibility that a 'digital placebo effect' may explain some of the positive outcomes that researchers have documented, says Torous. “We know that people have very strong relationships with their smartphones,” and receiving messages and advice through a familiar, personal device may be enough to make some people feel better, he explains. But the bare fact is that most apps haven't been tested at all. A 2013 review8 identified more than 1,500 depression-related apps in commercial app stores but just 32 published research papers on the subject. In another study published that year9, Australian researchers applied even more stringent criteria, searching the scientific literature for papers that assessed how commercially available apps affected mental-health symptoms or disorders. They found eight papers on five different apps. The same year, the NHS launched a library of “safe and trusted” health apps that included 14 devoted to treating depression or anxiety. But when two researchers took a close look at these apps last year, they found that only 4 of the 14 provided any evidence to support their claims10. Simon Leigh, a health economist at Lifecode Solutions in Liverpool, UK, who conducted the analysis, says he wasn't shocked by the finding because efficacy research is costly and may mean that app developers have less to spend on marketing their products. A separate analysis11 found that 35 of the mobile health apps originally listed by the NHS transmitted identifying information — such as e-mail addresses, names and birthdates — about users over the Internet, and two-thirds of these did not encrypt the data. 
Last year, the NHS took this apps library offline and posted a smaller collection of recommended online mental-health services. The NHS did not respond to e-mailed questions or make an official available for interview, but it did provide this statement: “We are working to upgrade the Health Apps Library, which was launched as a pilot site in 2013 to review and recommend apps against a defined set of criteria which included data protection.” The regulation of mental-health apps is opaque. Some apps designed to be used in a medical context can be considered medical devices and therefore may be regulated by the UK Medicines and Healthcare Products Regulatory Agency, the US Food and Drug Administration (FDA) or equivalent bodies elsewhere. But the lines are fuzzy. In general, an app that claims to prevent, diagnose or treat a specific disease is likely to be considered a medical device and to attract regulatory scrutiny, whereas one that promises to 'boost mood' or provide 'coaching' might not. The FDA has said that it will regulate only those health apps that present the highest risks to patients if they work improperly; even mental-health apps that qualify as medical devices might not be regulated if the agency deems them to be relatively low risk. But the potential risks are not well understood. “At the low end, people might waste their money or waste their time,” says Martin, “and at the higher end, especially with mental health, they might be actively harmful or giving dangerous advice or preventing people from going and getting proper treatment.” When a team of Australian researchers reviewed 82 commercially available smartphone apps for people with bipolar disorder12, they found that some presented information that was “critically wrong”. One, called iBipolar, advised people in the middle of a manic episode to drink hard liquor to help them to sleep, and another, called What is Biopolar Disorder, suggested that bipolar disorder could be contagious. 
Neither app seems to be available any more. Martin says that in Europe, at least, apps tend to come in two varieties, those that are commercially developed and come with little supporting evidence or plans for evaluation, and those with academic or government backing that take a more rigorous approach. The problem is that the former are generally more engaging for users and the latter take so long to make it to the market — if they even do — that they look out of date. “This is a generalization,” Martin says, “but it's broadly true.” Even well-intentioned apps can produce unpredictable outcomes. Take Promillekoll, a smartphone app created by Sweden's government-owned liquor retailer, designed to help curb risky drinking. While out at a pub or a party, users enter each drink they consume and the app spits out an approximate blood-alcohol concentration. When Swedish researchers tested the app on college students, they found that men who were randomly assigned to use the app ended up drinking more frequently than before, although their total alcohol consumption did not increase. “We can only speculate that app users may have felt more confident that they could rely on the app to reduce negative effects of drinking and therefore felt able to drink more often,” the researchers wrote in their 2014 paper13. It's also possible, the scientists say, that the app spurred male students to turn drinking into a game. “I think that these apps are kind of playthings,” says Anne Berman, a clinical psychologist at the Karolinska Institute in Stockholm and one of the study's authors. There are other risks too. In early trials of ClinTouch, researchers found that the symptom-monitoring app actually exacerbated symptoms for a small number of patients with psychotic disorders, says John Ainsworth at the University of Manchester, who helped to develop the app. 
“We need to very carefully manage the initial phases of somebody using this kind of technology and make sure they're well monitored,” he says. In a pilot trial published earlier this year, ten US veterans with PTSD were randomly assigned to use PTSD Coach on their own for eight weeks, while another ten used the app with the support and guidance of primary-care providers. At the end of the trial, seven of the ten patients using the app with support showed a reduction in PTSD symptoms, compared with just three of the patients who used the app on their own14. But if apps require medical supervision, that undermines the idea that they will serve as an easy and low-cost way to provide care to the masses. “People think there's an app for everything,” says Helen Christensen, the director of the Black Dog Institute at the University of New South Wales in Sydney, Australia, who has developed and studied mental-health apps. “It's actually about how we build systems around apps, so that people have health care.” Distributing mental-health apps in the developing world presents further challenges. Although mobile technology is spreading rapidly, there are many people who do not have — or cannot afford — smartphones or mobile Internet access. And the content of apps needs to be delivered in local languages and reflect local cultures. “The notion that you can take an intervention and just plop it down in a region where people might not even use the same terms for mental health as you're using is a little unrealistic,” says Ben-Zeev. “What we might call 'hearing voices' in the United States might be something like 'communicating with your elders' in a different region, depending on what label people attach to that experience.” At this point, the notion that apps can deliver quality health care in low-income regions remains largely theoretical. “This is generally where the mHealth field is,” says Natalie Leon, a scientist at the South African Medical Research Council in Cape Town. 
“It's a promise of potential effectiveness.” To make good on that promise, apps will have to be tested. Between 2013 and 2015, the number of mobile-health trials registered on more than doubled, from 135 to 300. And the number of trials specifically focused on mental and behavioural health increased by 32%, according to a report by the IMS Institute for Health Informatics in Parsippany, New Jersey. One digital health company that has earned praise from experts is Big Health, co-founded by Colin Espie, a sleep scientist at the University of Oxford, UK, and entrepreneur Peter Hames. The London-based company's first product is Sleepio, a digital treatment for insomnia that can be accessed online or as a smartphone app. The app teaches users a variety of evidence-based strategies for tackling insomnia, including techniques for managing anxious and intrusive thoughts, boosting relaxation, and establishing a sleep-friendly environment and routine. Before putting Sleepio to the test, Espie insisted on creating a placebo version of the app, which had the same look and feel as the real app, but led users through a set of sham visualization exercises with no known clinical benefits. In a randomized trial, published in 2012, Espie and his colleagues found that insomniacs using Sleepio reported greater gains in sleep efficiency — the percentage of time someone is asleep, out of the total time he or she spends in bed — and slightly larger improvements in daytime functioning than those using the placebo app15. In a follow-up 2014 paper16, they reported that Sleepio also reduced the racing, intrusive thoughts that can often interfere with sleep. The Sleepio team is currently recruiting participants for a large, international trial and has provided vouchers for the app to several groups of independent researchers so that patients who enrol in their studies can access Sleepio for free. “We think this is the way forward for digital health,” says Espie. 
Mobile-phone-based treatments, he says, “should be tested and judged like any other intervention. We shouldn't treat people's health with any less respect because the treatment is coming through an app.”
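The sleep-efficiency measure reported in the Sleepio trials is a simple ratio: time asleep divided by total time in bed, expressed as a percentage. A minimal sketch of the calculation (the function name is illustrative, not taken from any of the studies above):

```python
def sleep_efficiency(minutes_asleep, minutes_in_bed):
    """Percentage of time in bed actually spent asleep."""
    if minutes_in_bed <= 0:
        raise ValueError("minutes_in_bed must be positive")
    return 100.0 * minutes_asleep / minutes_in_bed

# Seven hours asleep out of eight hours in bed:
print(sleep_efficiency(420, 480))  # 87.5
```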

News Article | February 15, 2017

University of Oregon-led examination answers why ink spilled on cards generated so many visual perceptions and may help advance the use of fractals for biomedical purposes EUGENE, Ore. -- Feb. 14, 2017 -- Researchers have unlocked the mystery of why people have seen so many different images in Rorschach inkblots. The image associations -- a bat, woman with a ponytail, many different animals, people, a jack-o'-lantern and more -- are induced by fractal characteristics at the edges of the blots and depend on the scaling parameters of the patterns, according to a nine-member team led by University of Oregon physicist Richard P. Taylor in a paper published in the journal PLOS ONE. Fractals are objects with irregular curves or shapes and are recognizable building blocks of nature. Trees, clouds, rivers, galaxies, lungs and neurons are fractals. The new discovery isn't about improving inkblots for psychological assessments -- their use became controversial and was mostly set aside over the last 20 years. It does, however, have implications for Taylor's efforts to design a fractal-based retinal implant and for potentially improving materials used for camouflage. "These optical illusions seen in inkblots and sometimes in art are important for understanding the human visual system," said Taylor, who is director of the UO Materials Science Institute. "You learn important things from when our eyes get fooled. Fractal patterns in the inkblots are confusing the visual system. Why do you detect a bat or a butterfly when they were never there?" Hermann Rorschach, a Swiss Freudian psychiatrist, published 10 inkblot patterns on separate cards, five done in black and white and five in color, in 1921. To make them, he poured different amounts and kinds of ink onto cards, folded them and pressed them tightly before opening them to display symmetrical patterns of varying complexities. 
Rorschach died in 1922 and never knew about the subsequent widespread use of the inkblots to help assess a person's personality and mental health. Up to 300 differently perceived images have been recorded for each of the 10 inkblots. Whether such perceptions speak to mental health, reflect a person's level of creativity or simply draw from a person's past experiences has been debatable. It is the complexity of the inkblots that Taylor's team, which included collaborators from five institutions, has dissected. The group analyzed the boundaries of the inkblots. The work involved scanning them and then extracting the boundaries between regions of ink absorption and unstained portions of the cards using an edge-detection computer analysis. That analysis quantified the visual complexity of the blot boundaries using a parameter called fractal dimension D. Combining these pattern analysis results with original psychology studies conducted on the blots in the 1930s and 1950s, the researchers uncorked "a very clear trend" between the dimensional values of the cards and their ability to induce images, Taylor said. "As you increase the D value, which makes for more visual complexity, the number of visual perceptions falls off," he said. "People see a lot more patterns in the simple ones." Inkblots with D values of 1.1 generate the highest numbers of perceived images, the team found. The team then put their findings to a human test, generating computerized fractal patterns with varying D values. When seen for 10 seconds by undergraduate psychology students at the University of New South Wales in Australia, the same trend between D values and imagery surfaced. Fractal patterns are also found in the artwork of Jackson Pollock, whose abstract expressionist paintings captured Taylor's lifelong interest in childhood. Pollock's paintings from 1943 to 1952, Taylor has found, are composed of fractals with D values that increased from 1.1 to 1.7. 
That change was deliberate, Taylor said, as Pollock sought ways to reduce imagery figures seen in his earlier work. The study is among many that Taylor and his colleagues have pursued to better understand the connection between vision and nature. In 2015, Taylor and the UO, along with Simon Brown and the University of Canterbury in New Zealand, obtained a U.S. patent for using artificial fractal-based implants to restore sight to the blind. The patent covers all fractal-designed electronic implants that link signaling activity with nerves for any purpose in animal and human biology. In January 2016, Taylor received a $900,000 grant from the W.M. Keck Foundation to pursue his work on fractal-based implants. "All of our studies are highlighting an effect called fractal fluency," Taylor said. "The eye has evolved to efficiently process the fractal patterns found in nature's scenery. This reduces the observer's stress by up to 60 percent. If you don't build fractal fluency into a bionic eye, not only have you lost the ability to navigate, you've also lost that symbiotic relationship with nature's fractal patterns." Co-authors with Taylor on the PLOS ONE paper were R.D. Montgomery, J.H. Smith, C. Boydston and B.C. Scannell of the UO Department of Physics; T.P. Martin of the U.S. Naval Research Laboratory in Washington, D.C.; M.S. Fairbanks of the California State University's Maritime Academy in Vallejo, California; and B. Spehar of the University of New South Wales. The Australian Research Council, Research Corporation for Science Advancement and W.M. Keck Foundation supported the research.
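The fractal dimension D at the heart of the analysis above is conventionally estimated by box counting: cover the boundary with grids of shrinking box size, count the occupied boxes at each scale, and take the slope of the log-log relationship. A minimal sketch of that standard procedure, not the team's actual pipeline (the function name and box sizes are illustrative):

```python
import numpy as np

def box_counting_dimension(img, box_sizes=(2, 4, 8, 16, 32)):
    """Estimate fractal dimension D of a binary image via box counting.

    For each box size s, count the s x s boxes containing at least one
    boundary pixel; D is the slope of log(count) versus log(1/s).
    """
    counts = []
    for s in box_sizes:
        h, w = img.shape
        # Trim so the image tiles evenly into s x s boxes
        trimmed = img[: h - h % s, : w - w % s]
        boxes = trimmed.reshape(trimmed.shape[0] // s, s, trimmed.shape[1] // s, s)
        counts.append(boxes.any(axis=(1, 3)).sum())
    sizes = np.array(box_sizes, dtype=float)
    slope, _ = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)
    return slope

# Sanity check: a straight line has D close to 1, a filled square close to 2.
grid = np.zeros((256, 256), dtype=bool)
grid[128, :] = True  # horizontal line
print(box_counting_dimension(grid))  # ~1.0
```

A smooth boundary yields D near 1 and a space-filling one near 2, which is why blots with D around 1.1 sit at the "simple" end of the range the researchers describe.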

SHANGHAI, Nov 2, 2016 /PRNewswire/ -- SPI Energy Co., Ltd. ("SPI Energy" or the "Company") (Nasdaq: SPI), a global provider of photovoltaic (PV) solutions for business, residential, government and utility customers and investors, today announced changes to the senior management team and board of directors of the Company (the "Board"), effective on October 29, 2016: Mr. Roger Dejun Ye has resigned as Executive Vice President in charge of the Company's solar business but will remain as a non-executive director of the Board; Mr. Minghua Zhao, who currently serves as Joint Chief Operating Officer ("COO") of the Company's China domestic business, has been appointed as a director to the Board; and Mr. Fei Yun, who previously served as General Manager of Xinghang PV Technology (Suzhou) Co., Ltd., has joined the Company as Senior Vice President in charge of R&D and Solar Technology Development. "I would like to thank Roger for his contributions to the development of Company's solar business during his tenure and we look forward to continuing to work closely with Roger in his role as a director of the Board. On behalf of the management team and the Board, I also would like to extend our warm welcome to Minghua in joining the Board and Fei in joining the Company," said Xiaofeng Peng, Chairman and Chief Executive Officer of SPI Energy. Mr. Minghua Zhao currently serves as Joint COO of the Company's China domestic business and previously served as Senior Vice President of the Company's finance service business between February 2015 and June 2016. Before he joined the Company in February 2015, Mr. Zhao served as general manager of Suzhou Industrial Park Chengcheng Enterprises Guarantee Co., Ltd., a financial services company, and from 2003 to 2009 as president of Suzhou Industrial Park Branch of Suzhou Bank. Prior to that, he worked at CITIC Bank for six years. Mr. 
Zhao graduated from Jiangsu Province Business School in 1997 with a degree in Business Administration and from Southwestern University of Finance and Economics in 2008 with a degree in Business Management. Mr. Fei Yun has more than 30 years of experience in the research and development of solar cells and PV systems and in senior management roles in the industry in Australia and China. Mr. Yun joined us from Xinghang PV Technology (Suzhou) Co., Ltd., where he has served as General Manager since July 2014. Previously, Mr. Yun held senior management positions at various solar companies, including as Vice President of Technology at LDK Solar Co., Ltd. from February 2010 to June 2013; Chief Technology Officer at Solar Enertech Corp. from December 2007 to January 2010; Vice President of Technology at SolarFun (now Hanwha SolarOne) from July 2006 to November 2007; and General Manager and Chief Engineer at Tera Solar Technologies from March 2004 to June 2006. Mr. Yun received his bachelor's degree in Physics from Jinan University and his master's degree in Solar Energy from the Asian Institute of Technology (AIT) in Bangkok, Thailand. He also has nearly 10 years of research and development experience in silicon-based solar cells, gained at the ARC Photovoltaics Centre of Excellence at the University of New South Wales in Sydney, Australia; his expertise is focused on high-efficiency silicon solar cells. About SPI Energy Co., Ltd. SPI Energy Co., Ltd. is a global provider of photovoltaic (PV) solutions for business, residential, government and utility customers and investors. SPI Energy focuses on the downstream PV market including the development, financing, installation, operation and sale of utility-scale and residential solar power projects in China, Japan, Europe and North America. 
The Company operates an innovative online energy e-commerce and investment platform,, which enables individual and institutional investors to purchase innovative PV-based investment and other products; as well as, a B2B e-commerce platform offering a range of PV products for both upstream and downstream suppliers and customers. The Company has its operating headquarters in Shanghai and maintains global operations in Asia, Europe, North America and Australia. For additional information, please visit:, or This release contains certain "forward-looking statements." These statements are forward-looking in nature and subject to risks and uncertainties that may cause actual results to differ materially. All forward-looking statements included in this release are based upon information available to the Company as of the date of this release, which may change, and the Company undertakes no obligation to update or revise any forward-looking statements, except as may be required under applicable securities law.

News Article | October 25, 2016

Increased muscle strength leads to improved brain function in adults with Mild Cognitive Impairment (MCI), new results from a recent trial led by the University of Sydney have revealed. With 135 million people forecast to suffer from dementia in 2050, the study's findings--published in the Journal of the American Geriatrics Society today--have implications for the type and intensity of exercise that is recommended for our growing ageing population. Mild Cognitive Impairment defines people who have noticeably reduced cognitive abilities, such as reduced memory, but are still able to live independently, and is a precursor to Alzheimer's disease. Findings from the Study of Mental and Resistance Training (SMART) trial show, for the first time, a positive causal link between muscle adaptations to progressive resistance training and the functioning of the brain among those over 55 with MCI. The trial was conducted in collaboration with the Centre for Healthy Brain Ageing (CHeBA) at the University of New South Wales and the University of Adelaide. "What we found in this follow-up study is that the improvement in cognitive function was related to their muscle strength gains," said lead author Dr Yorgi Mavros, from the Faculty of Health Sciences at the University of Sydney. "The stronger people became, the greater the benefit for their brain." SMART was a randomised, double-blind trial involving 100 community-dwelling adults with MCI, aged between 55 and 86, who were divided into four groups. Participants doing resistance exercise were prescribed weight lifting sessions twice a week for six months, working to at least 80 per cent of their peak strength. As they got stronger, the amount of weight they lifted on each machine was increased to maintain the intensity at 80 per cent of their peak strength. 
A paper published in 2014 reported the trial's primary outcomes: these participants' global cognition improved significantly after the resistance training, as measured by tests including the Alzheimer's Disease Assessment Scale-Cognitive subscale. The cognitive training and placebo activities did not have this benefit. The benefits persisted even 12 months after the supervised exercise sessions ended. "The more we can get people doing resistance training like weight lifting, the more likely we are to have a healthier ageing population," said Dr Mavros. "The key, however, is to make sure you are doing it frequently, at least twice a week, and at a high intensity so that you are maximising your strength gains. This will give you the maximum benefit for your brain." These new findings reinforce research from the SMART trial published earlier this year, in which MRI scans showed an increase in the size of specific areas of the brain among those who took part in the weight training program. These brain changes were linked to the cognitive improvements after weight lifting. "The next step now is to determine if the increases in muscle strength are also related to the increases in brain size that we saw," said senior author Professor Maria Fiatarone Singh, geriatrician at the University of Sydney. "In addition, we want to find the underlying messenger that links muscle strength, brain growth and cognitive performance, and determine the optimal way to prescribe exercise to maximise these effects."
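The progressive-overload rule described in the trial (hold the load at 80 per cent of current peak strength, raising the weight as the participant gets stronger) can be sketched numerically. This is an illustrative sketch only; the function name and the strength figures below are hypothetical, not taken from the SMART protocol.

```python
# Sketch of the progressive-overload rule: the prescribed weight is a fixed
# fraction (80%) of the participant's current peak (one-repetition-maximum)
# strength, so it rises as strength improves. All numbers are hypothetical.

def prescribed_load(one_rep_max: float, intensity: float = 0.80) -> float:
    """Weight to lift, as a fixed fraction of current peak strength (kg)."""
    return round(one_rep_max * intensity, 1)

# A hypothetical participant whose peak strength improves over six months:
for week, one_rm in [(1, 100.0), (12, 115.0), (24, 130.0)]:
    print(f"week {week}: 1RM {one_rm} kg -> prescribe {prescribed_load(one_rm)} kg")
```

The point of the rule is that the absolute load is re-anchored to measured strength, so training intensity stays constant even as capacity grows.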

News Article | June 20, 2016

The first time professor Sherri Mason cut open a Great Lakes fish, she was alarmed at what she found. Synthetic fibers were everywhere. Under a microscope, they seemed to be “weaving themselves into the gastrointestinal tract”. Though she had been studying aquatic pollution around the Great Lakes for several years, Mason, who works at the State University of New York at Fredonia, had never seen anything like it. New studies indicate that the fibers in our clothes could be poisoning our waterways and food chain on a massive scale. Microfibers – tiny threads shed from fabric – have been found in abundance on shorelines where waste water is released. Now researchers are trying to pinpoint where these plastic fibers are coming from. In an alarming study released Monday, researchers at the University of California at Santa Barbara found that, on average, synthetic fleece jackets release 1.7 grams of microfibers each wash. It also found that older jackets shed almost twice as many fibers as new jackets. The study was funded by outdoor clothing manufacturer Patagonia, a certified B Corp that also offers grants for environmental work. “These microfibers then travel to your local wastewater treatment plant, where up to 40% of them enter rivers, lakes and oceans,” according to findings published on the researchers’ website. Synthetic microfibers are particularly dangerous because they have the potential to poison the food chain. The fibers’ size also allows them to be readily consumed by fish and other wildlife. These plastic fibers have the potential to bioaccumulate, concentrating toxins in the bodies of larger animals, higher up the food chain. Microbeads, recently banned in the US, are a better-known variety of microplastic, but recent studies have found microfibers to be even more pervasive. 
In a groundbreaking 2011 paper, Mark Browne, now a senior research associate at the University of New South Wales, Australia, found that microfibers made up 85% of human-made debris on shorelines around the world. While Patagonia and other outdoor companies, like Polartec, use recycled plastic bottles as a way to conserve and reduce waste, this latest research indicates that the plastic might ultimately end up in the oceans anyway – and in a form that’s even more likely to cause problems. Breaking a plastic bottle into millions of fibrous bits of plastic might prove to be worse than doing nothing at all. While the UCSB study is sure to make waves, researchers are consistently finding more and more evidence that microfibers are in many marine environments and in large quantities. What’s more, the fibers are being found in fresh water as well. “This is not just a coastal or marine problem,” said Abigail Barrows, principal investigator of the Global Microplastics Initiative, part of the research group Adventurers and Scientists for Conservation. Of the almost 2,000 aquatic samples Barrows has processed, about 90% of the debris was microfibers – both in freshwater and the ocean. Microfibers are also the second most common type of debris in Lake Michigan, according to Sherri Mason’s research. Finishing up research into tributaries of the Great Lakes, she’s finding that microfibers are the most common type of debris in those smaller bodies of water. “The majority [71%] of what we’re finding in the tributaries are actually fibers,” Mason said by email. “They exceed fragments and pellets.” Mason is finding that the wildlife is indeed being affected. A study out of the University of Exeter, in which crabs were given food contaminated with microfibers, found that the fibers altered the animals’ behavior. The crabs ate less food overall, suggesting stunted growth over time. 
The polypropylene was also broken down and transformed into smaller pieces, creating a greater surface area for chemical transmission. (Plastics leach chemicals such as Bisphenol A – BPA – as they degrade.) Mason said her concern is not necessarily with the plastic fibers themselves, but with their ability to absorb persistent organic pollutants such as polychlorinated biphenyls (PCBs), and to concentrate them in animals’ tissues. Gregg Treinish, founder and executive director of Adventurers and Scientists for Conservation, which oversees Barrows’s microfibers work, said studies have led him to stop eating anything from the water. “I don’t want to have eaten fish for 50 years and then say, ‘Oh, whoops’,” Treinish said. His organization received $9,000 from Patagonia to research microfibers in 2016. “It absolutely has the potential to move up the food chain,” said Chelsea Rochman, a postdoctoral fellow in conservation biology at the University of California at Davis and the University of Toronto. She cautioned, however, against a rush to avoid fish: “I think no one’s really asked questions directly about that yet.” Rochman’s own recent study of seafood from California and Indonesia indicates that plastic fibers contaminate the food we eat. Testing fish and shellfish from markets in both locations, Rochman determined that “all [human-made] debris recovered from fish in Indonesia was plastic, whereas [human-made] debris recovered from fish in the US was primarily fibers”. Rochman said she can’t yet explain why fish in the US are filled with microfibers. She speculates that washing machines are less pervasive in Indonesia and synthetic, high performance fabrics, such as fleece, which are known to shed a lot of fibers, are not as common in Indonesia. Companies that have built their businesses on the environment have been some of the first to pay attention to the growing microfiber issue. 
Patagonia proposed the Bren School study in 2015, after polyester, the primary component of outdoor fabrics like fleece, showed up as a major ocean pollutant. Patagonia is part of a working group, as are Columbia Sportswear and 18 others, studying the issue through the Outdoor Industry Association (OIA), a trade group consisting of about 1,300 companies around the world. “We believe the outdoor industry is likely one of those [industries that contribute to the microfiber issue], but we just don’t know the breadth,” said Beth Jenson, OIA’s director of corporate responsibility. In an email, Patagonia spokesperson Tessa Byars wrote: “Patagonia is concerned about this issue and we’re taking concerted steps to figure out the impacts that our materials and products – at every step in their lifecycle – may have on the marine environment.” Miriam Diamond, an earth sciences professor who runs the University of Toronto lab where Rochman now works, said she believes so-called fast fashion could play a larger role than the comparatively smaller outdoor apparel industry. “What I suspect is that some of the cheaper fabrics will more easily shed fibers. It’s probably that the fibers aren’t as long or that they aren’t spun as well,” Diamond said. Inditex, which owns Zara and Massimo Dutti among others, said microfibers fall into the category of issues covered by its Global Water Strategy, which includes ongoing plans to evaluate and improve wastewater management at its mills. H&M declined to comment on the microfiber issue, as did Topshop, which responded by email: “we are not quite ready to make an official statement on this issue”. Mark Browne, the researcher responsible for first bringing microfibers to public attention, said that the grace period is over. “We know that these are the most abundant forms of debris – that they are in the environment,” Browne said. He added that government and industry must be asked to explain “what they are going to be doing about it”. 
The Amsterdam-based Plastic Soup Foundation, an ocean conservation project co-funded by the European Union, said better quality clothing or fabrics coated with an anti-shed treatment could help. The foundation’s director, Maria Westerbos, said a nanoball that could be thrown into a washing machine to attract and capture plastic fibers also seems promising. Another solution may lie with waterless washing machines, one of which is being developed by Colorado-based Tersus Solutions. Tersus, with funding from Patagonia, has developed a completely waterless washing machine in which textiles are washed in pressurized carbon dioxide. Others suggest a filter on home washing machines. More than 4,500 fibers can be released per gram of clothing per wash, according to preliminary data from the Plastic Soup Foundation. But the washing machine industry is not yet ready to act. Jill Notini, vice president of communications and marketing for the Association of Home Appliance Manufacturers, said the washing machine could very well be a source of microfiber debris, but that the proposed solutions are impractical. “How do you possibly retrofit all of the units that are in the market and then add a filter in and talk to consumers and say, ‘Here is a new thing that you’re going to have to do with your clothes washer?’” She added that the industry still has trouble getting people to clean lint from the filters in their dryers. For Plastic Soup’s Westerbos, the reluctance of the industries that operate in that crucial place between the consumer and the world’s waterways can no longer be tolerated. “It’s really insulting that they say it’s not their problem,” Westerbos said. “It’s their problem, too. It’s everybody’s problem.”
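The scale of the problem can be estimated from the figures quoted above: roughly 1.7 grams of microfibers shed per fleece wash (UCSB study), with up to 40% passing through wastewater treatment into rivers, lakes and oceans. A minimal back-of-the-envelope sketch, in which the washes-per-year figure is an assumption of ours, not from any of the studies cited:

```python
# Back-of-the-envelope estimate using figures quoted in the article:
# ~1.7 g of microfibers shed per synthetic fleece wash, with up to 40%
# of fibers passing through wastewater treatment plants.
GRAMS_PER_WASH = 1.7    # UCSB study: average shed per fleece jacket wash
PASS_THROUGH = 0.40     # upper bound reaching rivers, lakes and oceans
WASHES_PER_YEAR = 25    # assumed wash frequency; varies by household

per_wash = GRAMS_PER_WASH * PASS_THROUGH   # grams reaching waterways per wash
per_year = per_wash * WASHES_PER_YEAR      # grams per jacket per year

print(f"per wash: {per_wash:.2f} g, per year: {per_year:.1f} g")
```

Even under these rough assumptions, a single garment contributes on the order of grams of plastic fiber to waterways each year, which is why multiplying across millions of households alarms researchers.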


News Article | November 8, 2016

An analysis of the health of highly impacted coral reefs off Singapore during a 27-year period has shown that they are more resilient to the impacts of human activity and warming than expected. A UNSW-led international team found that shallower reefs rebounded rapidly from a major bleaching episode in 1998, despite experiencing such high levels of sedimentation that underwater visibility was typically less than 2 metres. "It is remarkable that diverse shallow coral communities can persist in such adverse conditions," says study first author Dr James Guest, formerly of UNSW and now at the Hawai'i Institute of Marine Biology. "Undoubtedly, Singapore's reefs have suffered as a result of human activities, but the recovery of corals at shallow sites is really surprising given how impacted this environment is. It really shows how tough corals can be." Study senior author UNSW Professor Peter Steinberg adds: "This is by no means a cause for complacency regarding the state of our reefs, but rather highlights that if we can reduce local stressors, reefs are more likely to be able to rebound from the effects of global stressors such as climate change." The study by the team, which includes researchers from UNSW and the Sydney Institute of Marine Science, the National University of Singapore, Nanyang Technological University, and Singapore's National Parks Board, is published in the journal Scientific Reports. In the past 200 years, Singapore has been transformed from a forest-covered island with about 150 people to a highly urbanised city-state of more than 5.4 million. Extensive coastal construction, dredge spillage and land reclamation have resulted in high sedimentation rates, turbidity and pollution, putting immense pressure on the surrounding coral reefs. Between 1986 and 2012, coral communities at 15 sites south of the main island were regularly surveyed, and the results have been analysed for the new study. 
Coral cover during this 27-year period declined at all sites - by about 12 per cent at the shallower depths of 3-4 metres and by about 30 per cent at the deeper depths of 6-7 metres. There was a particularly large decline in the first decade due in part, the authors suggest, to unmitigated dumping of dredge spoils in the late 1980s. In 1998, a major bleaching event occurred as the result of high water temperatures associated with an El Niño. However, corals at shallower reef sites were remarkably resilient to this event, showing signs of recovery within a decade. By 2008 coral cover had increased to about 1993 levels. Corals at deeper sites were less resilient, with coral cover at these sites continuing to decline. However, none of the sites were overtaken by large fleshy seaweeds, as has been seen on impacted reefs elsewhere in the world. The lack of recovery at deeper sites may be due to low light levels or a lack of suitable substratum for new corals to settle and survive. The researchers suspect the resilience at shallow sites is due to an abundance of coral species which have fast regrowth rates and can tolerate environmental stresses such as high levels of suspended sediments. It is also possible the turbidity of the water could offer some protection by reducing the light and the impact of heat stress, as well as slowing down the growth of fleshy seaweeds. Reefs in Singapore appear to have undergone substantial bleaching again this year, which is likely to test whether the resilience to bleaching observed in previous decades is still present on these reefs.

News Article | November 7, 2016

A new comprehensive study of Australian natural hazards paints a picture of increasing heatwaves and extreme bushfires as this century progresses, but with much more uncertainty about the future of storms and rainfall. Published in a special issue of the international journal Climatic Change, the study documents the historical record and projected change of seven natural hazards in Australia: flood; storms (including wind and hail); coastal extremes; drought; heatwave; bushfire; and frost. "Temperature-related hazards, particularly heatwaves and bushfires, are increasing, and projections show a high level of agreement that we will continue to see these hazards become more extreme into the 21st century," says special issue editor Associate Professor Seth Westra, Head of the Intelligent Water Decisions group at the University of Adelaide. "Other hazards, particularly those related to storms and rainfall, are more ambiguous. Cyclones are projected to occur less frequently but when they do occur they may well be more intense. In terms of rainfall-induced floods we have conflicting lines of evidence with some analyses pointing to an increase into the future and others pointing to a decrease. "One thing that became very clear is how much all these hazards are interconnected. For example drought leads to drying out of the land surface, which in turn can lead to increased risk of heat waves and bushfires, while also potentially leading to a decreased risk of flooding." The importance of interlinkages between climate extremes was also noted in the coastal extremes paper: "On the open coast, rising sea levels are increasing the flooding and erosion of storm-induced high waves and storm surges," says CSIRO's Dr Kathleen McInnes, the lead author of the coastal extremes paper. "However, in estuaries where considerable infrastructure resides, rainfall runoff adds to the complexity of extremes." 
This special issue represents a major collaboration of 47 scientists and eleven universities through the Australian Water and Energy Exchange Research Initiative, an Australian research community program. The report's many authors were from the Centre of Excellence for Climate System Science, the CSIRO, Bureau of Meteorology, Australian National University, Curtin University, Monash University, University of Melbourne, University of Adelaide, University of Newcastle, University of New South Wales, University of Tasmania, University of Western Australia and University of Wollongong. The analyses aim to disentangle the effects of climate variability and change on hazards from other factors such as deforestation, increased urbanisation, people living in more vulnerable areas, and higher values of infrastructure. "The study documents our current understanding of the relationship between historical and possible future climatic change with the frequency and severity of Australian natural hazards," says Associate Professor Westra. "These hazards cause multiple impacts on humans and the environment and collectively account for 93% of Australian insured losses, and that does not even include drought losses. "We need robust decision-making that considers the whole range of future scenarios and how our environment may evolve. The biggest risk from climate change is if we continue to plan as though there will be no change. One thing is certain: our environment will continue to change." Some of the key findings from the studies include: • Historical information on the most extreme bushfires -- so-called "mega fires" -- suggests an increased occurrence in recent decades with strong potential for them to increase in frequency in the future. 
Over the past decade major bushfires at the margins of Sydney, Canberra, and Melbourne have burnt more than a million hectares of forests and woodlands and resulted in the loss of more than 200 lives and 4000 homes. • Heatwaves are Australia's most deadly natural hazard, causing 55% of all natural disaster related deaths, and increasing trends in heatwave intensity, frequency and duration are projected to continue throughout the 21st century. • The costs of flooding have increased significantly in recent decades, but factors behind this increase include changes in reporting mechanisms, population, land use and infrastructure, as well as extreme rainfall events. The physical size of floods has either not changed or has even decreased in many parts of the country.

Petersen I.R.,University of New South Wales | Tempo R.,CNR Institute of Electronics, Computer and Telecommunication Engineering
Automatica | Year: 2014

This paper presents a survey of the most significant results on robust control theory. In particular, we study the modeling of uncertain systems, robust stability analysis for systems with unstructured uncertainty, robustness analysis for systems with structured uncertainty, and robust control system design including H∞ control methods. The paper also presents some more recent results on deterministic and probabilistic methods for systems with uncertainty. © 2014 Elsevier Ltd. All rights reserved.
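Among the topics the survey covers is H∞ control, where performance is measured by the H∞ norm: for a stable system, the peak gain of its frequency response. A minimal numerical sketch of that quantity, assuming an example transfer function of our own choosing (not one from the paper), can be written with SciPy:

```python
# For a stable SISO transfer function G(s), the H-infinity norm is the peak
# of |G(jw)| over all frequencies w. A crude numerical approximation simply
# evaluates the frequency response on a dense logarithmic grid. The example
# system G(s) = 1/(s + 1) is illustrative, not taken from the survey.
import numpy as np
from scipy import signal

def hinf_norm_approx(num, den, w_min=1e-3, w_max=1e3, n=20000):
    """Approximate the H-infinity norm by gridding the frequency response."""
    w = np.logspace(np.log10(w_min), np.log10(w_max), n)
    _, h = signal.freqresp(signal.TransferFunction(num, den), w=w)
    return float(np.max(np.abs(h)))

# G(s) = 1/(s+1): gain is largest at low frequency, approaching 1.
print(hinf_norm_approx([1.0], [1.0, 1.0]))
```

Robust H∞ design methods go much further (synthesising controllers that minimise this norm over all admissible uncertainties), but the norm itself is the basic performance measure they optimise.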

Wittenmyer A.,University of New South Wales | Horner J.,University of New South Wales | Marshall J.P.,Autonomous University of Madrid
Monthly Notices of the Royal Astronomical Society | Year: 2013

We present a detailed dynamical analysis of the orbital stability of the two circumbinary planets recently proposed to orbit the evolved eclipsing binary star system NSVS 14256825. As is the case for other recently proposed circumbinary planetary systems detected through the timing of mutual eclipses between the central binary stars, the proposed planets do not stand up to dynamical scrutiny. The proposed orbits for the two planets are extremely unstable on time-scales of less than a thousand years, regardless of the mutual inclination between the planetary orbits. For the scenario where the planetary orbits are coplanar, a small region of moderate stability was observed, featuring orbits that were somewhat protected from destabilization by the influence of mutual 2:1 mean-motion resonance between the orbits of the planets. Even in this stable region, however, the systems tested typically only survived on time-scales of the order of 1 Myr, far shorter than the age of the system. Our results suggest that if there are planets in the NSVS 14256825 system, they must move on orbits dramatically different to those proposed in the discovery work. More observations are clearly critically required in order to constrain the nature of the suggested orbital bodies. © 2013 The Authors. Published by Oxford University Press on behalf of the Royal Astronomical Society.
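The 2:1 mean-motion resonance invoked above means the outer planet completes one orbit for every two of the inner. By Kepler's third law, P² ∝ a³, so the period ratio follows directly from the semi-major axes. A minimal sketch, with illustrative axis values that are not taken from the NSVS 14256825 orbital fits:

```python
# A p:q mean-motion resonance means the orbital periods are in a p:q ratio.
# By Kepler's third law (P^2 proportional to a^3), the period ratio of two
# planets around the same central mass follows from their semi-major axes.
# The numbers below are illustrative only.

def period_ratio(a_inner: float, a_outer: float) -> float:
    """Orbital period ratio P_outer / P_inner from semi-major axes (same units)."""
    return (a_outer / a_inner) ** 1.5

# In a 2:1 resonance the semi-major axes differ by a factor of 2**(2/3) ≈ 1.587:
print(period_ratio(1.0, 2.0 ** (2.0 / 3.0)))  # ≈ 2.0
```

Stability analyses like the one in this paper integrate the full N-body problem numerically; the resonance condition above only identifies where in parameter space such protective configurations can occur.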

Aguey-Zinsou K.-F.,University of New South Wales | Ares-Fernandez J.-R.,Autonomous University of Madrid
Energy and Environmental Science | Year: 2010

Storing hydrogen in materials is based on the observation that metals can reversibly absorb hydrogen. However, the practical application of such a finding is found to be rather challenging, especially for vehicular applications. The ideal material should reversibly store a significant amount of hydrogen under moderate conditions of pressure and temperature. To date, such a material does not exist, and the high expectations of discovering a suitable material in step with engineering innovations remain out of reach. Of course, major breakthroughs have been achieved in the field, but the most promising materials still bind hydrogen too strongly and often suffer from poor hydrogen kinetics and/or lack of reversibility. Clearly, new approaches have to be explored, and the knowledge gained with high-energy ball milling needs to be exploited, i.e. size does matter! Herein, progress made towards the practical use of magnesium as a hydrogen store and the barriers still remaining are reviewed. In this context, the new approach of tailoring the properties of metal hydrides through size restriction at the nanoscale is discussed. Such an approach already shows great promise in leading to further breakthroughs because both thermodynamics and kinetics can be effectively controlled at molecular levels. © 2010 The Royal Society of Chemistry.

Wedemeyer H.,Hanover Center for Internal Medicine | Dore G.J.,University of New South Wales | Ward J.W.,Centers for Disease Control and Prevention
Journal of Viral Hepatitis | Year: 2015

Hepatitis C is caused by infection with the hepatitis C virus (HCV) and represents a major global health burden. Persistent HCV infection can lead to progressive liver disease with the development of liver cirrhosis and hepatocellular carcinoma, possibly accounting for up to 0.5 million deaths every year. Treatment of HCV infection is undergoing a profound and radical change. As new treatments are extremely safe and effective, there are virtually no medical reasons to withhold therapy. Yet, the new therapies are expensive. As resources are limited, solid data to estimate the disease burden caused by HCV are urgently needed. Epidemiology data and disease burden analyses for 16 countries are presented. For almost all countries, the peak of HCV-related cirrhosis, hepatocellular carcinoma and liver-related death is a decade or more away. However, a surprising heterogeneity in country-specific HCV-associated disease burden exists. Also, HCV diagnosis and treatment uptake varied markedly between countries. A consistent finding was that a reduction of HCV liver-related mortality is dependent on access to therapy. Increasing the efficacy of therapy alone, with a constant number of treatments, will not have a major impact on the HCV-related disease burden. The data presented here should inform public health policy and help drive advocacy for enhanced strategic investment and action. HCV kills patients, and the disease burden will continue to rise in most countries unless action is taken soon. Chronic HCV is a curable infection and a reversible liver disease. Fortunately, the tools to eliminate HCV are now available. © 2014 John Wiley & Sons Ltd.

Green M.A.,University of New South Wales | Emery K.,National Renewable Energy Laboratory | Hishikawa Y.,Japan National Institute of Advanced Industrial Science and Technology | Warta W.,Fraunhofer Institute for Solar Energy Systems | Dunlop E.D.,European Commission - Joint Research Center Ispra
Progress in Photovoltaics: Research and Applications | Year: 2014

Consolidated tables showing an extensive listing of the highest independently confirmed efficiencies for solar cells and modules are presented. Guidelines for inclusion of results into these tables are outlined and new entries since January 2014 are reviewed. Copyright © 2014 John Wiley & Sons, Ltd.

Green M.A.,University of New South Wales | Emery K.,National Renewable Energy Laboratory | Hishikawa Y.,Japan National Institute of Advanced Industrial Science and Technology | Warta W.,Fraunhofer Institute for Solar Energy Systems | Dunlop E.D.,European Commission - Joint Research Center Ispra
Progress in Photovoltaics: Research and Applications | Year: 2014

Consolidated tables showing an extensive listing of the highest independently confirmed efficiencies for solar cells and modules are presented. Guidelines for inclusion of results into these tables are outlined, and new entries since July 2013 are reviewed. Copyright © 2013 John Wiley & Sons, Ltd.

Green M.A.,University of New South Wales | Emery K.,National Renewable Energy Laboratory | Hishikawa Y.,Japan National Institute of Advanced Industrial Science and Technology | Warta W.,Fraunhofer Institute for Solar Energy Systems | Dunlop E.D.,European Commission - Joint Research Center Ispra
Progress in Photovoltaics: Research and Applications | Year: 2012

Consolidated tables showing an extensive listing of the highest independently confirmed efficiencies for solar cells and modules are presented. Guidelines for inclusion of results into these tables are outlined, and new entries since July 2011 are reviewed. Copyright © 2011 John Wiley & Sons, Ltd.

Vogt B.A.,Cingulum Neurosciences Institute | Vogt B.A.,Boston University | Paxinos G.,University of New South Wales
Brain Structure and Function | Year: 2014

A gulf exists between cingulate area designations in human neurocytology and those used in rodent brain atlases, a major underpinning of the former being the midcingulate cortex (MCC). The present study used images extracted from the Franklin and Paxinos mouse atlas and the Paxinos and Watson rat atlas to demonstrate the areas comprising MCC and modifications of the anterior cingulate (ACC) and retrosplenial cortices. The laminar architecture, not available in the atlases, is also provided for each cingulate area. Both mouse and rat have a MCC with neurons in all layers that are larger than those in ACC, and layer Va has particularly prominent neurons and reduced neuron densities. An undifferentiated ACC area 33 lies along the rostral callosal sulcus in rat but not in mouse, and area 32 has dorsal and ventral subdivisions, the former having particularly large pyramidal neurons in layer Vb. Both mouse and rat have anterior and posterior divisions of retrosplenial areas 29c and 30, although their cytology differs between rat and mouse. Maps of the rodent cingulate cortices permit direct comparisons with each region in the human, including MCC; significantly, rodents do not have a posterior cingulate region composed of areas 23 and 31 as the human does. It is concluded that rodents and primates, including humans, possess a MCC, and this homology, along with those in ACC and retrosplenial cortices, permits scientists inspired by human considerations to test hypotheses in rodent models of human diseases. © 2012 Springer-Verlag Berlin Heidelberg.

Hall W.,University of Queensland | Hall W.,King's College London | Hall W.,University of New South Wales
Addiction | Year: 2015

Aims: To examine changes in the evidence on the adverse health effects of cannabis since 1993. Methods: A comparison of the evidence in 1993 with the evidence and interpretation of the same health outcomes in 2013. Results: Research in the past 20 years has shown that driving while cannabis-impaired approximately doubles car crash risk and that around one in 10 regular cannabis users develop dependence. Regular cannabis use in adolescence approximately doubles the risks of early school-leaving and of cognitive impairment and psychoses in adulthood. Regular cannabis use in adolescence is also associated strongly with the use of other illicit drugs. These associations persist after controlling for plausible confounding variables in longitudinal studies. This suggests that cannabis use is a contributory cause of these outcomes but some researchers still argue that these relationships are explained by shared causes or risk factors. Cannabis smoking probably increases cardiovascular disease risk in middle-aged adults but its effects on respiratory function and respiratory cancer remain unclear, because most cannabis smokers have smoked or still smoke tobacco. Conclusions: The epidemiological literature in the past 20 years shows that cannabis use increases the risk of accidents and can produce dependence, and that there are consistent associations between regular cannabis use and poor psychosocial outcomes and mental health in adulthood. © 2014 Society for the Study of Addiction.

McHale R.,University of Warwick | Patterson J.P.,University of Warwick | Zetterlund P.B.,University of New South Wales | O'Reilly R.K.,University of Warwick
Nature Chemistry | Year: 2012

Segregation and templating approaches have been honed by billions of years of evolution to direct many complex biological processes. Nature uses segregation to improve biochemical control by organizing reactants into defined, well-regulated environments, and the transfer of genetic information is a primary function of templating. The ribosome, wherein messenger RNA is translated into polypeptides, combines both techniques to allow for ideal biopolymer syntheses. Herein is presented a biomimetic segregation/templating approach to synthetic radical polymerization. Polymerization of a nucleobase-containing vinyl monomer in the presence of a complementary block copolymer template of low molecular weight yields high molecular weight (Mw up to ∼400,000 g mol−1), extremely low polydispersity (≤1.08) daughter polymers. Control is attained by segregation of propagating radicals in discrete micelle cores (via cooperative assembly of dynamic template polymers). Significantly reduced bimolecular termination, combined with controlled propagation along a defined number of templates, ensures unprecedented control to afford well-defined high molecular weight polymers. © 2012 Macmillan Publishers Limited. All rights reserved.

Green M.A.,University of New South Wales | Emery K.,National Renewable Energy Laboratory | Hishikawa Y.,Japan National Institute of Advanced Industrial Science and Technology | Warta W.,Fraunhofer Institute for Solar Energy Systems | Dunlop E.D.,European Commission - Joint Research Center Ispra
Progress in Photovoltaics: Research and Applications | Year: 2013

Consolidated tables showing an extensive listing of the highest independently confirmed efficiencies for solar cells and modules are presented. Guidelines for inclusion of results into these tables are outlined, and new entries since January 2013 are reviewed. Copyright © 2013 John Wiley & Sons, Ltd.

Green M.A.,University of New South Wales | Emery K.,National Renewable Energy Laboratory | Hishikawa Y.,Japan National Institute of Advanced Industrial Science and Technology | Warta W.,Fraunhofer Institute for Solar Energy Systems | Dunlop E.D.,European Commission - Joint Research Center Ispra
Progress in Photovoltaics: Research and Applications | Year: 2015

Consolidated tables showing an extensive listing of the highest independently confirmed efficiencies for solar cells and modules are presented. Guidelines for inclusion of results into these tables are outlined, and new entries since July 2014 are reviewed. Copyright © 2014 John Wiley & Sons, Ltd.

Green M.A.,University of New South Wales | Emery K.,National Renewable Energy Laboratory | Hishikawa Y.,Japan National Institute of Advanced Industrial Science and Technology | Warta W.,Fraunhofer Institute for Solar Energy Systems | Dunlop E.D.,European Commission - Joint Research Center Ispra
Progress in Photovoltaics: Research and Applications | Year: 2013

Consolidated tables showing an extensive listing of the highest independently confirmed efficiencies for solar cells and modules are presented. Guidelines for inclusion of results into these tables are outlined, and new entries since June 2012 are reviewed. Copyright © 2012 John Wiley & Sons, Ltd.

Owen D.M.,King's College London | Gaus K.,University of New South Wales
Frontiers in Plant Science | Year: 2013

The lipid bilayer of model membranes, liposomes reconstituted from cell lipids, and plasma membrane vesicles and spheres can separate into two distinct liquid phases to yield lipid domains with liquid-ordered and liquid-disordered properties. These observations are the basis of the lipid raft hypothesis that postulates the existence of cholesterol-enriched ordered-phase lipid domains in cell membranes that could regulate protein mobility, localization and interaction. Here we review the evidence that nano-scaled lipid complexes and meso-scaled lipid domains exist in cell membranes and how new fluorescence microscopy techniques that overcome the diffraction limit provide new insights into lipid organization in cell membranes.

Agency: Cordis | Branch: FP7 | Program: CP | Phase: ICT-2011.5.4 | Award Amount: 5.79M | Year: 2011

A demographic transformation is underway, with a significant rise in the proportion of people aged 65 and older expected over the coming decades. It is therefore crucial to invest in research aimed at dealing with the health challenges of an ageing population.

Falls represent a major age-related health challenge facing our society, with about one third of older community-living people falling at least once a year. Falls in older people can markedly change one's health trajectory, have debilitating and isolating consequences, and can trigger a downward spiral of disability that can lead to institutionalization and premature death. Falls and fractures account for over half of all injury-related health care costs and have a major impact not only on older people, but also on their carers, health services and the community. This impact will grow substantially in the near future due to the increased proportion of older people in the population. The prevention of falls and mobility-related disability among older people is an urgent public health challenge in Europe and internationally.

Despite robust evidence and the availability of best-practice clinical guidelines to support interventions for preventing falls in older people, implementation of preventive measures remains low, mainly due to the high cost in both time and resources of the recommended individualized approach. Novel methods for the delivery of quality healthcare are required to increase the effectiveness of management while containing costs and using scarce human resources to maximum effect. Technology-based solutions have the potential to reduce costs while maintaining individualized, high-quality healthcare. Fall prediction and prevention is a field of research where technology can be used to facilitate healthy ageing, well-being and independent living, and similar paradigms could potentially be used in other areas of geriatric medicine.

The primary aim of the iStoppFalls project and its consortium partners is to develop and evaluate innovative home-based technologies to assist in preventing falls, and thus to improve the quality of life of older adults living at home. iStoppFalls will develop unobtrusive technological solutions for continuous monitoring and prevention of the fall risk factors that are required to coach people in tailored, individualized prevention programs, including exercise and education. The emphasis is not on laboratory research but on active implementation of successful fall prevention strategies in people's own homes.

A coordinated, active and multidisciplinary team is central to this project to face the challenges related to the development of these technology-based solutions. iStoppFalls will involve world-leading technology and research experts from both university and industry partners in Europe and Australia. The program will strengthen collaboration between research and technology, which will contribute to European excellence and competitiveness and will produce new insights.

Agency: Cordis | Branch: H2020 | Program: CSA | Phase: GALILEO-4-2014 | Award Amount: 1.83M | Year: 2015

The project implements a set of coordinated actions to promote EGNSS technology in South East Asia (SEA), while supporting the competitiveness of EU enterprises by facilitating their contacts with relevant GNSS stakeholders in SEA, a fast-growing market of more than 600 million people. Exploiting the facilities offered in Hanoi by the NAVIS Centre (a strategic asset for Europe, set up with FP7 EU funding), European enterprises can increase their competitiveness by testing their products in a challenging multi-GNSS environment, not available in Europe, with harsh ionospheric activity conditions. Ten workshops organised in the different ASEAN States promote EGNSS technology and offer EU companies and institutions contact opportunities with local stakeholders. Three editions of the Asia Oceania Regional Workshop on GNSS, an important event in SEA, are organized in collaboration with JAXA, MGA and other partners. To reach as many interested EU GNSS companies as possible, the project activities and the collaboration opportunities offered by the NAVIS Centre are carefully advertised through different channels, including clusters, social networks and websites, and three events are organized in Europe, possibly as side events of larger ones. In this way a larger network of long-lasting relations between European companies and SEA GNSS stakeholders is built, allowing the NAVIS Centre to keep serving as a focal point for cooperation with Europe and as a showcase of EGNSS technology. A competition to attract smart young researchers and entrepreneurs from ASEAN States to develop EGNSS-based business ideas and applications is also organized. The prize consists of a six-month incubation period in Europe to further implement the idea. Capacity-building actions complete the project: they target SEA researchers and technicians with the aim of preparing a generation of SEA experts familiar with European technology, habits and culture, and ready to support European industries willing to operate in SEA.

Agency: Cordis | Branch: H2020 | Program: RIA | Phase: PHC-09-2015 | Award Amount: 24.09M | Year: 2015

HIV-1 is responsible for a global pandemic affecting 35 million people, and continues to spread at a rate of more than 2 million new infections per year. It is widely acknowledged that a protective vaccine would be the most effective means to reduce HIV-1 spread and ultimately eliminate the pandemic, while a therapeutic vaccine may help mitigate the clinical course of disease and lead to strategies for viral eradication. However, despite 30 years of research, we do not have a vaccine capable of protecting from HIV-1 infection or impacting on disease progression. This in part reflects the challenge of identifying immunogens and vaccine modalities with a reduced risk of failure in late-stage development. To overcome this bottleneck, some of the most competitive research groups in vaccine discovery from European public institutions and biotechs from 9 EU countries, together with top Australian and Canadian groups and US collaborators, have agreed to join forces in EAVI, providing a pool of international expertise at the highest level. EAVI2020 will provide a platform for the discovery and selection of several new, diverse and novel preventive and/or therapeutic vaccine candidates for HIV/AIDS. Emphasis will be placed on early, rapid, iterative, small experimental medicine (EM) human vaccine studies to select and refine the best immunogens, adjuvants, vectors, and homologous and heterologous prime-boost schedules, and to determine the impact of host factors such as gender and genetics. Animal models will be used to complement human studies, and to select novel immunization technologies to be advanced to the clinic. To shift the risk curve in product development we will develop innovative risk-prediction methods, specifically designed to reduce the risk associated with late-stage preventive or therapeutic vaccine failure, increasing the chance of discovery of an effective vaccine.

Jia X.,University of New South Wales | Kuo B.-C.,National Taichung University of Education | Crawford M.M.,Purdue University
Proceedings of the IEEE | Year: 2013

Hyperspectral sensors record the reflectance from the Earth's surface over the full range of solar wavelengths with high spectral resolution. The resulting high-dimensional data contain rich information for a wide range of applications. However, for a specific application, not all the measurements are important and useful. The original feature space may not be the most effective space for representing the data. Feature mining, which includes feature generation, feature selection (FS), and feature extraction (FE), is a critical task for hyperspectral data classification. Significant research effort has focused on this issue since hyperspectral data became available in the late 1980s. The feature mining techniques which have been developed include supervised and unsupervised, parametric and nonparametric, linear and nonlinear methods, which all seek to identify the informative subspace. This paper provides an overview of both conventional and advanced feature reduction methods, with details on a few techniques that are commonly used for analysis of hyperspectral data. A general form that represents several linear and nonlinear FE methods is also presented. Experiments using two widely available hyperspectral data sets are included to illustrate selected FS and FE methods. © 1963-2012 IEEE.
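As an illustration of the linear feature-extraction methods the abstract surveys, the following is a minimal sketch (not the authors' code; the data cube and dimensions are synthetic assumptions) of projecting hyperspectral pixel spectra onto their top principal components, one of the most common linear FE baselines:

```python
import numpy as np

def pca_reduce(pixels, n_components):
    """Project spectral vectors onto their top principal components.

    pixels: (n_samples, n_bands) array of reflectance spectra.
    Returns an (n_samples, n_components) array of reduced features.
    """
    centered = pixels - pixels.mean(axis=0)
    # Covariance across spectral bands, then its eigen-decomposition
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]           # largest-variance axes first
    basis = eigvecs[:, order[:n_components]]
    return centered @ basis

# Synthetic "hyperspectral" data: 1000 pixels x 200 bands
rng = np.random.default_rng(0)
cube = rng.normal(size=(1000, 200))
features = pca_reduce(cube, 10)
print(features.shape)  # (1000, 10)
```

Supervised alternatives discussed in the paper (e.g. discriminant-analysis-based FE) replace the variance criterion above with class-separability criteria, but follow the same project-onto-a-learned-subspace pattern.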

Murofushi T.,Teikyo University | Akin F.W.,Medical Center | Colebatch J.G.,University of New South Wales
Clinical Neurophysiology | Year: 2014

Background: Cervical vestibular evoked myogenic potentials (cVEMPs) are electromyogram responses evoked by high-level acoustic stimuli recorded from the tonically contracting sternocleidomastoid (SCM) muscle, and have been accepted as a measure of saccular and inferior vestibular nerve function. As more laboratories are publishing cVEMP data, there is a wider range of recording methods and interpretation, which may be confusing and limit comparisons across laboratories. Objective: To recommend minimum requirements and guidelines for the recording and interpretation of cVEMPs in the clinic and for diagnostic purposes. Material and methods: We have avoided proposing a single methodology, as clinical use of cVEMPs is evolving and questions still exist about its underlying physiology and its measurement. The development of guidelines by a panel of international experts may provide direction for accurate recording and interpretation. Results: cVEMPs can be evoked using air-conducted (AC) sound or bone-conducted (BC) vibration. The technical demands of galvanic stimulation have limited its application. For AC stimulation, the most effective frequencies are between 400 and 800 Hz, below safe peak intensity levels (e.g. 140 dB peak SPL). The highpass filter should be between 5 and 30 Hz, the lowpass filter between 1000 and 3000 Hz, and the amplifier gain between 2500 and 5000. The number of sweeps averaged should be between 100 and 250 per run. Raw amplitude correction by the level of background SCM activity narrows the range of normal values. There are few publications in children with consistent results. Conclusion: The present recommendations outline basic terminology and standard methods. Because research is ongoing, new methodologies may be included in future guidelines. © 2014 International Federation of Clinical Neurophysiology.
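To make the numeric recommendations concrete, here is a small sketch (a hypothetical helper, not part of the guideline itself) that flags cVEMP recording parameters falling outside the ranges quoted in the abstract:

```python
# Recommended ranges quoted in the guideline abstract (AC stimulation).
GUIDELINE_RANGES = {
    "stimulus_hz": (400, 800),    # most effective AC stimulus frequencies
    "highpass_hz": (5, 30),       # highpass filter
    "lowpass_hz": (1000, 3000),   # lowpass filter
    "gain": (2500, 5000),         # amplifier gain
    "sweeps": (100, 250),         # sweeps averaged per run
}

def check_setup(setup):
    """Return the list of parameters outside the recommended ranges."""
    out_of_range = []
    for key, (lo, hi) in GUIDELINE_RANGES.items():
        if not lo <= setup[key] <= hi:
            out_of_range.append(key)
    return out_of_range

setup = {"stimulus_hz": 500, "highpass_hz": 10,
         "lowpass_hz": 2000, "gain": 3000, "sweeps": 150}
print(check_setup(setup))  # []  (all parameters within guideline ranges)
```

The dictionary keys and the helper itself are illustrative assumptions; only the numeric ranges come from the abstract.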

News Article | December 1, 2016

United Cannabis established the Advisory Board to help the Company advance and protect the development of its proprietary formulations, including managing the clinical trial process, patent protection, and new product research and development, as well as identifying and pursuing potential strategic relationships. Dr. Reynolds co-founded and was VP-Research of the world's first neural stem cell company, NeuroSpheres, Ltd., where he developed a patent portfolio and protocols related to the application of neural stem cells to treating disorders of the central nervous system. Dr. Reynolds has held multiple academic and industry posts, including Adjunct Professor at the University of New South Wales, Sydney, Australia; Honorary Professor at the Queensland Brain Institute, Australia; and Program Director for StepAhead, Australia. He is currently a Professor in the Department of Neurosurgery at the University of Florida College of Medicine, where his lab focuses on the application of natural products for treating diseases and dysfunction of the nervous system. Dr. Reynolds received his PhD from the University of Calgary, where he co-discovered the existence of stem cells in the adult mammalian brain. Commenting on the appointment, Dr. Reynolds stated, "Although cannabis has been used as medicine for centuries, its interaction with and effects on the endocannabinoid system are just beginning to be understood by the scientific community. Unique to United Cannabis is their robust scientific approach to developing cannabinoid medicines. I am pleased to be a part of their effort to discover and develop therapies for the treatment of a number of conditions and diseases." Tony Verzura, Chief Technology Officer of United Cannabis, added, "Dr. Reynolds was a mentor for us during the creation of our Advance Cannabinoid Therapy Program (A.C.T. Now) and our patent-pending phyto-therapeutic approach to creating our entire medicinal line of products. Dr. Reynolds has numerous peer-reviewed publications, issued patents, tremendous credibility, and global experience within the scientific community." The Company's Prana Bio Medicinal products provide patients with a way to mix and match cannabinoids for therapeutic purposes. These products, licensed to regulated marijuana dispensaries, are broken into five categories that are available in capsule, sublingual, and topical delivery methods. The Company uses a patent-pending infusion process utilizing select fatty acids, lipids, and specific combinations of cannabis-derived terpenes to increase bioavailability. For further information, please visit Certain statements in this news release may contain forward-looking information within the meaning of Rule 175 under the Securities Act of 1933, are subject to Rule 3b-6 under the Securities Exchange Act of 1934, and are subject to the safe harbors created by those rules. All statements, other than statements of fact, included in this release, including, without limitation, statements regarding potential future plans and objectives of the company, are forward-looking statements that involve risks and uncertainties. There can be no assurance that such statements will prove to be accurate, and other results and further events could differ materially from those anticipated in such statements. Future events and actual results could differ materially from those set forth in, contemplated by, or underlying the forward-looking statements.

News Article | November 10, 2016

There are not a lot of things that could bring together people as far apart on the US political spectrum as Republican Newt Gingrich and Democrat Bob Kerrey. But in 2007, after leading a three-year commission that looked into the costs of care for elderly people, the political rivals came to full agreement on a common enemy: dementia. At the time, there were fewer than 30 million people worldwide diagnosed with the condition, but it was clear that the numbers were set to explode. By 2050, current predictions suggest, it could reach more than 130 million, at which point the cost to US health care alone from diseases such as Alzheimer’s will probably hit US$1 trillion per year in today’s dollars. “We looked at each other and said, ‘You know, if we don’t get a grip on Alzheimer’s, we can’t get anything done because it’s going to drown the system,’” recalls Gingrich, the former speaker of the US House of Representatives. He still feels that sense of urgency, and for good reason. Funding has not kept pace with the scale of the problem; targets for treatments are thin on the ground and poorly understood; and more than 200 clinical trials for Alzheimer’s therapies have been terminated because the treatments were ineffective. Of the few treatments available, none addresses the underlying disease process. “We’re faced with a tsunami and we’re trying to deal with it with a bucket,” says Gingrich. But this message has begun to reverberate around the world, which gives hope to the clinicians and scientists. Experts say that the coming wave can be calmed with the help of just three things: more money for research, better diagnostics and drugs, and a victory — however small — that would boost morale. “What we really need is a success,” says Ronald Petersen, a neurologist at Mayo Clinic in Rochester, Minnesota. After so many failures, one clinical win “would galvanize people’s interest that this isn’t a hopeless disorder”. 
Dementia is the fifth-biggest cause of death in high-income countries, but it is the most expensive disease to manage because patients require constant, costly care for years. And yet, research funding for dementia pales in comparison with that for many other diseases. At the US National Institutes of Health (NIH), for example, annual funding for dementia in 2015 was only around $700 million, compared with some $2 billion for cardiovascular disease and more than $5 billion for cancer. One problem is visibility. Other disease communities — most notably, people affected by breast cancer and HIV/AIDS — have successfully advocated for large pots of dedicated research funding. But “there simply wasn’t any comparable upswell of attention to Alzheimer’s”, says George Vradenburg, chair and co-founder of UsAgainstAlzheimer’s, a non-profit organization in Chevy Chase, Maryland. The biggest reason, he says, is that “the victims of the disease hide out”. Dementia mostly affects elderly people and is often misconstrued as a normal part of ageing; there is a stigma attached to the condition, and family care-givers are often overworked and exhausted. Few are motivated enough to speak up. However, social and political awareness has increased in the past five years. “We all started to work together a lot more, and that helps,” says Susan Peschin, chief executive at the Alliance for Aging Research in Washington DC, one of more than 50 non-profit groups in the Accelerate Cure/Treatments for Alzheimer’s Disease coalition. The impact can be seen in government investments. France took action first, creating a national plan for Alzheimer’s in 2008 that included €200 million (US$220 million) over five years for research. In 2009, the German Centre for Neurodegenerative Diseases in Bonn was created with a €66-million annual budget. And UK spending on dementia research more than doubled between 2010 and 2015, to £66 million (US$82 million). 
The European Union has been dishing out tens of millions of euros each year for dementia studies through the Innovative Medicines Initiative and the Joint Programming process, and Australia is now about halfway through doling out its Aus$200-million (US$150-million), five-year dementia-research fund. “This is a global challenge, and no one country will be able to solve the problem,” says Philippe Amouyel, a neurologist and geneticist at the University Hospital of Lille in France. Yet it’s the United States that has been the biggest backer by far, thanks in part to efforts by Gingrich and Kerrey. The NIH’s annual budget for Alzheimer’s and other dementias jumped in the past year to around $1 billion, and there is support for a target to double that figure in the next few years — even in the fractious US political landscape. “Alzheimer’s doesn’t care what political party you’re in,” says Kerrey. Two billion dollars is “a reasonable number”, says Petersen, who chairs the federal advisory board that came up with the target in 2012. Now, he adds, the research community just needs to work out “what are we going to do with it if in fact we get it?”. The answer could depend in large part on the fate of a drug called solanezumab, developed by Eli Lilly of Indianapolis, Indiana. This antibody-based treatment removes the protein amyloid-β, which clumps together to form sticky plaques in the brains of people with Alzheimer’s. By the end of this year, Lilly is expected to announce the results of a 2,100-person clinical trial testing whether the drug can slow cognitive decline in people with mild Alzheimer’s. It showed preliminary signs of cognitive benefit in this patient population in earlier trials (R. S. Doody et al. N. Engl. J. Med. 370, 311–321; 2014), but the benefits could disappear in this final stage of testing, as has happened for practically every other promising compound. No one is expecting a cure. 
If solanezumab does delay brain degradation, at best it might help people to perform 30–40% better on cognitive tests than those on a placebo. But even such a marginal gain would be a triumph. It would show scientists and the drug industry that a disease-modifying therapy is at least possible. By contrast, another setback could bring recent momentum in therapeutic development to a halt. “This is a fork in the road,” says John Hardy, a neurogeneticist at University College London. “This is going to be a very important outcome, way beyond the importance for Lilly and this particular drug.” On a scientific level, success for solanezumab could lend credence to the much-debated amyloid hypothesis, which posits that the build-up of amyloid-β in the brain is one of the triggers of Alzheimer’s disease. The previous failure of amyloid-clearing agents led many to conclude that plaques were a consequence of a process in the disease, rather than the cause of it. But those in favour of the amyloid hypothesis say that the failed drugs were given too late, or to people with no amyloid build-up — possibly those with a different form of dementia. For its latest solanezumab trial, Lilly sought out participants with mild cognitive impairment, and used brain scans and spinal-fluid analyses to confirm the presence of amyloid-β in their brains. Another company, Biogen in Cambridge, Massachusetts, took the same approach to screening participants in a trial of its amyloid-targeting drug aducanumab. Earlier this year, a 165-person study reported early signs that successfully clearing amyloid-β with the Biogen therapy correlated with slower cognitive decline (J. Sevigny et al. Nature 537, 50–56; 2016). 
If those results hold up to further scrutiny, “that will at least tell us that amyloid is sufficiently upstream in the cascade that it deserves being targeted and tackled pharmacologically”, says Giovanni Frisoni, a clinical neuroscientist at the University of Geneva in Switzerland who is involved in the drug’s testing. Although debate over the amyloid hypothesis continues, interest is growing in earlier intervention with drugs that clear the protein. Reisa Sperling, a neurologist at Brigham and Women’s Hospital in Boston, Massachusetts, worries that even mild dementia is a sign of irreparable brain-cell death. “You can suck all the amyloid out of the brain or stop it from further accumulating, but you’re not going to grow those neurons back.” That is why she is leading Anti-Amyloid Treatment in Asymptomatic Alzheimer’s, or A4, a $140-million, placebo-controlled solanezumab study that aims to treat people with elevated amyloid levels before they show any signs of cognitive impairment. And A4 is not her only trial. In March, she and neurologist Paul Aisen of the University of Southern California’s Alzheimer’s Therapeutic Research Institute in San Diego launched a trial in 1,650 asymptomatic people with early signs of amyloid-β build-up. It will test a pill from Johnson & Johnson that blocks β-secretase, an enzyme responsible for producing the toxic protein. These interventions are known as secondary prevention because they target people who are already developing amyloid plaques. Sperling and Aisen also plan to test what’s called primary prevention. In August, they received NIH funding to start treating people who have normal brain levels of amyloid-β and no signs of cognitive decline, but who have a high risk of developing Alzheimer’s — because of a combination of factors such as age and genetics. “The biggest impact we can have is in delaying the onset of the diseases,” says David Holtzman, a neurologist at Washington University School of Medicine in St. Louis, Missouri, and an investigator in the Dominantly Inherited Alzheimer Network, which is testing the benefits of giving either solanezumab or another anti-amyloid therapy to people who inherit gene mutations that predispose them to develop Alzheimer’s at an early age. Secondary prevention could eventually mean screening everyone past middle age for signs of amyloid-β, although the current testing methods are either expensive ($3,000 brain scans) or invasive (spinal taps). Researchers have flagged a dozen possible blood-based biomarkers, but none has yet panned out, says Dennis Selkoe, a Brigham and Women’s Hospital neurologist. Yet a cheap and easy diagnostic test for amyloid-β could ultimately prove unnecessary. In the same way that some have suggested giving cholesterol-lowering drugs to anyone at risk of heart disease, clinicians might eventually give anti-amyloid drugs to a broad set of people prone to Alzheimer’s — even if they are not already amyloid positive, says Sperling. Just as cholesterol is not the sole cause of heart disease, amyloid-β is not the only driver of Alzheimer’s. There’s also tau, a protein that causes tangles in the brains of most people with Alzheimer’s. Several pharmaceutical companies are targeting tau, but few large drug-makers have clinical candidates directed at other types of target. “They know how to modulate a specific target and keep looking under that lamp post, rather than venturing away from their comfort zones,” says Bernard Munos, an industry consultant and former Eli Lilly executive. That’s a problem, says Howard Fillit, chief science officer of the Alzheimer’s Drug Discovery Foundation in New York City. “We really need to increase the diversity of targets we’re tackling.” After amyloid and tau, the only target receiving much attention from researchers is neuroinflammation — the “third leg of the stool” in treating Alzheimer’s, according to neurogeneticist Rudy Tanzi at Massachusetts General Hospital in Boston.
He likens Alzheimer’s disease to a wildfire in the brain. Plaques and tangles provide the initial brush fires, but it’s the accompanying neuroinflammation that fans the flames. Once the blaze is raging, Tanzi says, “putting out those brush fires that got you there isn’t good enough”. This could explain why anti-amyloid drugs failed when given to people with full-blown dementia. For these individuals, perhaps reducing the inflammatory activity of brain immune cells called microglia could help. Drug researchers are now focusing on two genes, CD33 and TREM2, that are involved in microglial function. But, says Tanzi, “there are two dozen other genes that deserve attention. Who knows if one of these new genes that no one is working on might lead to drug clues?” Many Alzheimer’s experts emphasize the need to develop better low-cost interventions that don’t require drug research. At the University of New South Wales in Sydney, Australia, for example, geriatric psychiatrist Henry Brodaty is testing whether an Internet coaching tool that focuses on diet, exercise, cognitive training and mood can postpone disease development. “We know that two-thirds of the world’s dementia is going to be in developing countries,” he says (see ‘The approaching wave’). Lifestyle interventions, he argues, could be more broadly scalable than expensive drugs. Researchers also need to look beyond Alzheimer’s, to the many other types of dementia. Injuries to the vessels that supply blood to the brain cause a form called vascular dementia. Clumps of a protein called α-synuclein underlie cognitive problems in people with Parkinson’s disease and also what’s called Lewy body dementia. Tau deposits are often behind the nerve-cell loss responsible for frontotemporal dementia. And there are many other, equally devastating, drivers of serious mental decline. 
“We should not be ignoring these other diseases,” says Nick Fox, a neurologist at University College London, especially given that many types of dementia share biological mechanisms. Tackling one disease could help inform treatment strategies for another. But perhaps the biggest hindrance to drug development today is more logistical than scientific, with clinical trials for dementia taking years to complete as investigators struggle to recruit sufficient numbers of study participants. “We need to get answers more quickly,” says Marilyn Albert, director of the Johns Hopkins Alzheimer’s Disease Research Center in Baltimore, Maryland. One solution is trial-ready registries. By enrolling people who are interested in taking part in a study before it actually exists, investigators can start a trial as soon as a drug comes along for testing. “We have to register humanity in the task of defeating this disease,” says Aisen. The 1,600-person COMPASS-ND registry is being funded through the Canadian Consortium on Neurodegeneration in Aging. Member Serge Gauthier, a neurologist at McGill University in Montreal, says that finding participants can be challenging. But he adds that around one-third of the people who come to memory clinics such as his have what’s known as subjective cognitive impairment — they might forget names or suffer from other ‘senior moments’, but they do not meet the clinical definition of dementia. They are perfect for trial-ready registries, says Gauthier: they are at an elevated risk of the disease, and they’ve demonstrated concern. Gauthier wants to find more people like them. He fits the profile himself, so he joined the Brain Health Registry, which has more than 40,000 participants so far and is led by researchers at the University of California, San Francisco. He takes regular cognitive tests, and could be asked to do more once potential diagnostic tools or therapies are ready for testing. “It’s a fun thing to do,” he says. 
Voluntarily or not, people will need to face up to dementia, because in just a few short decades, pretty much everyone is going to have a friend or loved one affected by the disease. It’s an alarming idea, and it should spur action, says Robert Egge, chief public policy officer of the Alzheimer’s Association in Chicago, Illinois. “We know where we’re heading,” he says. “The question is: are we going to get in front of it or not?”

News Article | October 28, 2016

At last week's White House Frontiers Conference, President Obama outlined a vision for the development of AI: a plan that involves collaboration with the US government. The event, co-hosted by the University of Pittsburgh and Carnegie Mellon University, was the first of its kind. Earlier that week, the White House had unveiled a report—Preparing for the Future of Artificial Intelligence—which outlined how the government can be involved in researching, developing, and regulating future technologies. So how well does the American president understand AI? And how powerful will these policies be? TechRepublic spoke to several AI experts to see what they think. While all five experts agreed that this is a big step in the right direction and welcomed the president's attention to supporting AI research, here are a few key areas they want to address. "I am not as optimistic as Obama that as many jobs will be created as destroyed by technology," said Toby Walsh, professor of AI at the University of New South Wales. "Just because this was true in the past does not mean it is necessarily true in the future. There is no fundamental law of economics that requires this." Susan Schneider, associate professor of philosophy at the University of Connecticut, emphasized the need for a national conversation on the unemployment that AI could cause. "Given that AI is projected to outmode so many people in the workplace," she said, "it's time for a national dialogue on universal basic income." One area that a few of the experts agreed needed clarification was "artificial general intelligence," or AGI. This should be distinguished from "narrow AI," in which computers are programmed to master a specific skill. In Wired, Obama wrote: "If you've got a computer that can play Go, a pretty complicated game with a lot of variations, then developing an algorithm that lets you maximize profits on the New York Stock Exchange is probably within sight." 
"I do not think this is true at all," said Vincent Conitzer, professor of computer science at Duke University. "Go is a game with very clearly defined rules that does not require general knowledge about the world," he said. In contrast, in the financial markets, there will always be a need for broad and integrated understanding of how the world works, he said. "And this is very hard to replicate in AI systems." And while AI is currently used in activities such as financial trading, Conitzer said, we still need humans. "This is a good example of a very common fallacy in thinking about AI systems," he said, "where we are overly impressed by a system's performance on a narrow, cleanly defined problem, and then incorrectly conclude that similar performance in open-ended ambiguous real-world domains is just a short step away." Why is clarification around AGI important? It has critical ramifications when it comes to safety. President Obama's plan to wait for benchmarks around AGI before taking action "falls short," said Schneider. "Obama jokes that we can 'pull the plug' on superintelligent AI," said Schneider. "Of course, as those working on AI safety have warned, a truly advanced AI will be able to anticipate such moves." "By the time we detect such benchmarks, it may be too late," Schneider said. Conitzer talked about the two camps approaching the future of AI: One that worries about the existential risk (such as the Nick Bostrom, Elon Musk, Bill Gates group), and another, the traditional AI research community, that he said "dismisses the idea that we need to worry about human-level general AI at this point." "It appears that the report (and the President) are generally more on the latter side," he said. But, Conitzer said, this is problematic, if we are to believe the study cited in the report, in which half of AI researchers say there is a 50% chance of achieving AGI by 2040. "Holy cow!" said Conitzer. "I am not among that half. 
But I do think that, if you really believe that that survey accurately captures the best prediction we can make about the future of AI...then the response of just 'monitoring progress' seems wholly inadequate." "You need to either argue that the survey did not do a good job in generating a good prediction and that you have a better prediction, or you need to respond in a significant way," he said. One way would be to fund research on keeping AI systems safe, Conitzer said, "or understanding the consequences and significance of human-level general AI." "I am worried that instead of addressing the question head-on, many people just avoid it because they do not want to be seen as loopy crackpot futurists," he added, "which is not a good reason." Conitzer also said that making "future superhuman general AI systems safe is a very different type of problem from making today's self-driving cars safe" and that it's "not responsible to just avoid making a judgment on whether superhuman general AI is somewhat likely to emerge in a few decades or not." Walsh agrees with President Obama that government shouldn't over-regulate. Still, he is "disappointed that he is not advocating more oversight from the US government in two areas where regulation is already too late, autonomous cars and autonomous weapons." "Tesla cannot be left to experiment on the public," he said. "And we need to curb the arms race that is starting to occur in the development of autonomous weapons that will undoubtedly destabilize global security." "If Obama wanted one act to do before he finished as president, he could make a bold move here that would be remembered for centuries: Have the US [support banning] lethal, autonomous weapons at the CCW conference in the UN this December," Walsh said. "Be the President that ended killer robots." Overall, the experts were excited about President Obama's embrace of AI research. "We cannot leave it up to the large tech companies," said Walsh. 
"Better algorithms to rank ads are not going to get us to thinking machines, greater prosperity and reduced inequality." Marie desJardins, AI professor at the University of Maryland, Baltimore County and former chair of the National Conference of the Association for the Advancement of Artificial Intelligence (AAAI), said she was excited about the plans, although "as always, one hopes it's going to be a sustained interest in the big/hard problems, not just temporary hype." Walsh echoed the point. "I would, of course, be even happier if he had backed this by putting actual dollars on the table for basic research." The president's main job is "not to cause panic," said Roman Yampolskiy, director of the Cybersecurity Lab at the University of Louisville. "And he does it well." "If the next President is anywhere near as able to learn, digest, and synthesize such complex material," said Conitzer, "I think we are in luck."

News Article | November 25, 2016

The size, surface charge, and composition of a nanoparticle determine how easily it can slip inside a cell. Now Australian researchers have demonstrated that the same holds true of nanoparticle shape [Hinde et al., Nature Nanotechnology (2016), doi: 10.1038/nnano.2016.160]. Nanoparticles made from biocompatible polymers could make effective delivery systems, taking anti-cancer drugs, nucleic acids used in gene therapy, and imaging agents right into the heart of cells – the nucleus. But particles carrying active agents must be able to gain access to cells quickly and easily. To find out whether the shape of a nanoparticle could make a difference, the researchers from the University of New South Wales created spherical, rod, vesicle, and S-shaped worm-like particles from the same polymeric material with identical surface chemistry. The particles were loaded with the same anticancer agent, doxorubicin, and tracked using microscopy. “We used a fluorescence microscopy method called pair correlation microscopy to show that polymeric nanoparticles with different shapes, but identical surface chemistries, move across the various cellular barriers at different rates,” explains first author of the study, Elizabeth Hinde. “Using this technology, we were able to pinpoint where drugs were being released and how they spread throughout the cell.” The results show marked differences between the various nanoparticles, according to J. Justin Gooding and Katharina Gaus, who led the study. The team’s observations indicate that all the nanoparticles find the outer plasma membrane of the cell an equal barrier, regardless of shape. Once inside the cell, however, spherical nanoparticles appear more mobile – showing shorter transit times – than higher-aspect-ratio rod- and worm-shaped particles. All the nanoparticles show a similar ability to escape from the endosomal system – which can limit the efficiency of drug delivery. 
But the crucial difference, found by the researchers, is that rod- and worm-shaped particles enter the cell nucleus via passive diffusion more readily than spherical particles and vesicles. As the nuclear membrane presents more of a barrier to the particles than the plasma membrane, particles that enter the nucleus tend to accumulate there. The routes taken by different nanoparticles inside the cell indicate where drug release ultimately takes place. So while all nanoparticles can deliver their cargoes to the cellular cytoplasm, significantly higher amounts of doxorubicin were deposited in the nucleus by higher-aspect-ratio rod- and worm-shaped particles. “We found that the cancer drug was most effectively delivered when the nanoparticle carrier could breach the envelope protecting the nucleus – the cell’s control center – and release the drug,” explains Hinde. The researchers also managed to improve on the inherent ability of rod- and worm-shaped particles to enter the cell nucleus and deliver their cargo by adding a nuclear localization signal (NLS) – an amino acid ‘tag’ that facilitates passage through the nuclear membrane. “Our results demonstrate that drug delivery across the major cellular barrier, the nuclear envelope, is important for doxorubicin efficiency and can be achieved with appropriately shaped nanoparticles,” says Hinde. Ultimately, if the dimensions of rod-shaped nanoparticles could be fine-tuned to facilitate preferential entry to cancer cells rather than healthy ones, drugs could be delivered more efficiently with reduced side effects. The researchers’ analysis of the role of nanoparticle shape in determining their ability to traverse intracellular transport barriers is an interesting one, believes Samir Mitragotri, director of the Center for Bioengineering at the University of California, Santa Barbara. “This analysis provides novel insights into and guidelines for the design of nanoparticles for drug delivery,” he says. 
This article was originally published in Nano Today (2016), doi: 10.1016/j.nantod.2016.10.012

News Article | December 7, 2016

Every 18 seconds someone dies from tuberculosis (TB). It is the world's most deadly infectious disease. Mycobacterium tuberculosis, the causative agent of TB, has infected over one-third of the entire human population with an annual death toll of approximately 1.5 million people. For the first time, an international team of scientists from Monash and Harvard Universities has seen how, at a molecular level, the human immune system recognises TB-infected cells and initiates an immune response. Their findings, published in Nature Communications, are the first step toward developing new diagnostic tools and novel immunotherapies. Lead author Professor Jamie Rossjohn says one of the main reasons for our current lack of knowledge comes down to the complexity of the bacterium itself. Working with Professor Branch Moody's team at Harvard, they have begun to gain key insight into how the immune system can recognise this bacterium. Crucial to the success of M. tuberculosis as a pathogen is its highly unusual cell wall, which not only serves as a barrier against therapeutic attack but also modulates the host immune system. Conversely, its cell wall may also be the "Achilles' heel" of mycobacteria, as it is essential for the growth and survival of these organisms. This unique cell wall is composed of multiple layers that form a rich waxy barrier, and many of these lipid components -- also known as fatty acids -- represent potential targets for T-cell surveillance. Specifically, using the Australian Synchrotron, the team of scientists has shown how the immune system recognises components of the waxy barrier from the M. tuberculosis cell wall. "With so many people dying from TB every year, any improvements in diagnosis, therapeutic design and vaccination will have major impacts," Professor Moody says. "Our research is focussed on gaining a basic mechanistic understanding of an important biomedical question. 
And may ultimately provide a platform for designing novel therapeutics for TB and treat this devastating disease," Professor Rossjohn concludes. Professor Jamie Rossjohn is a Chief Investigator on the Australian Research Council Centre of Excellence in Advanced Molecular Imaging. The $39 million ARC-funded Imaging CoE develops and uses innovative imaging technologies to visualise the molecular interactions that underpin the immune system. Featuring an internationally renowned team of lead scientists across five major Australian universities, along with academic and commercial partners globally, the Centre uses a truly multi-scale and programmatic approach to imaging to deliver maximum impact. The Imaging CoE is headquartered at Monash University with four collaborating organisations - La Trobe University, the University of Melbourne, University of New South Wales and the University of Queensland. Professor Rossjohn is also a researcher at the Monash Biomedicine Discovery Institute. Committed to making the discoveries that will relieve the future burden of disease, the newly established Monash Biomedicine Discovery Institute at Monash University brings together more than 120 internationally renowned research teams. Its researchers are supported by world-class technology and infrastructure, and partner with industry, clinicians and researchers internationally to enhance lives through discovery.

News Article | February 15, 2017

Scientists have discovered the presence of chemical pollutants in some of the ocean’s deepest trenches, previously thought to be nearly untouched by human influence. In fact, they’ve found levels of contamination in some marine organisms living there that rival those in some of the most polluted waterways on the planet. The findings, published Monday in the journal Nature Ecology & Evolution, underscore the idea that different parts of the ocean may be far more interconnected than previously thought — and that dangerous forms of pollution may be pervasive even in the most remote places. The researchers, from the University of Aberdeen and the James Hutton Institute in the UK, focused on two specific types of chemical pollutants: polychlorinated biphenyls, or PCBs, and polybrominated diphenyl ethers, or PBDEs, both of which may cause a variety of adverse health effects, including neurological, immune and reproductive issues and even cancer (in humans). PCBs were once commonly used in electrical equipment before being banned in the United States over health and environmental concerns in the 1970s. The manufacture and import of PBDEs, which are typically used as flame retardants, have also been restricted in the U.S., although at least one common type of the chemical is still permitted. Despite the reductions in their use, both PCBs and PBDEs can still be detected in marine organisms today. Both have the potential to remain intact for long periods of time, often binding to other particles in the water that can then carry them throughout the ocean. They also have a tendency to “bioaccumulate,” meaning they can build up in marine organisms over time. Just last year, a study conducted by researchers from the Scripps Institution of Oceanography suggested that certain organic pollutants, including PCBs and PBDEs, are widespread in fish throughout the world. 
For the new study, the researchers checked for the presence of these chemicals in two of the world’s deepest ocean trenches — the Mariana trench in the Western Pacific, near the Mariana islands, and the Kermadec trench north of New Zealand. To do so, the researchers deployed special devices called “deep-sea landers,” which are small vessels that are released from ships and drop to the bottom of the ocean before floating back up to the surface. Each lander was equipped with special traps designed to catch tiny shrimp-like crustaceans called amphipods known to inhabit some of the ocean’s deepest and most extreme environments. Afterwards, the researchers tested the amphipods for the presence of PCBs and PBDEs. They found that both PCBs and PBDEs were present in all species of amphipod in both trenches, and at all depths sampled — up to 10,000 meters deep in both locations. Concentrations tended to be somewhat higher in the Mariana trench, although in both locations they were generally higher than the baseline concentrations typically found in clean coastal areas. In fact, in the Mariana trench, the highest observed concentrations of PCBs were about 50 times greater than the levels that have been found in crabs living near China’s Liaohe River, one of that nation’s most polluted waterways. “The only Northwest Pacific location with values comparable to the Mariana Trench is Suruga Bay (Japan),” the researchers note in the paper, “a highly industrialized area with historically heavy usage of organochlorine chemicals.” These are bombshell results, given that the deep sea is often thought of as one of the world’s last pristine places, mostly out of the reach of human influence. 
In a comment also published Monday, marine ecologist Katherine Dafforn of the University of New South Wales highlighted the importance of the findings, noting that the authors “have provided clear evidence that the deep ocean, rather than being remote, is highly connected to surface waters and has been exposed to significant concentrations of human-made pollutants.” What remains unclear is exactly how the contaminants got into the trenches — and why their levels are so high in the Mariana. The authors suspect one of the more likely explanations is that the chemicals in the Mariana trench originated around the “great Pacific garbage patch,” a swirling mass of debris in the northern Pacific. Chemical pollutants in that region could easily cling to plastic waste as it drops through the water column toward the bottom of the ocean. More generally, the authors note that large-scale ocean currents can transport chemical-carrying particles over long distances. And previous research has suggested that even contaminants which start out on the surface of the ocean can sink to the deepest places, clinging to garbage or even the bodies of dead animals, within a few months. All of this is to say that “our proximity to these extreme locations is far from remote, which is why even the deepest chasms of the ocean are no longer pristine,” the authors note in the paper. Future research may shed more light on how these contaminants may move and magnify throughout the food chain, as well as whether they could be causing any measurable damage to the ecosystems they enter. But for now, the findings serve as a jarring reminder that human activities have consequences all over the planet — and there may be nowhere that remains out of our reach. 

News Article | February 15, 2017

The heatwave down under is unusual even for Australia – but it may not be so for much longer. The country is in the grip of one of the most ferocious heatwaves on record, and climate change is being blamed. The scorching heat began in South Australia and Victoria last week, before spreading to New South Wales and the Australian Capital Territory over the weekend. The heatwave is now moving north through Queensland. Parts of South Australia and Victoria reached 46 °C, while New South Wales and Queensland recorded temperatures above 47 °C. At least five towns in New South Wales and four in Queensland had their hottest day ever recorded over the weekend. Moree in northern New South Wales is closest to breaking the state heatwave record of 50 consecutive days over 35 °C. Today marks the 49th day in a row, and the forecast shows no signs of any cooling over the next week. The heatwave began when a high pressure system stalled over central Australia and caused a build-up of heat, says Karl Braganza at Australia’s Bureau of Meteorology. An approaching front then dragged the hot air towards the east coast. However, the heatwave would not have been this long and intense if it wasn’t for climate change, says Sarah Perkins-Kirkpatrick at the University of New South Wales in Sydney. “Usually, you would only get this kind of extreme heat if it was an El Niño summer,” she says. The most recent El Niño phenomenon ended in mid-2016. A study published by the Climate Council of Australia in 2014 found that heatwaves in Australia are becoming hotter, longer and more frequent. Heatwaves are defined as at least three days of unusually high temperatures. The average number of annual heatwave days increased from less than three to more than six between 1950 and 2010, the report found. More recent heatwaves have also been 1 to 2 °C hotter. “Based on past heatwave events, we can see that there’s a discernible contribution of global warming,” says Braganza. 
In Australia, the average temperature has increased by about 0.9 °C since 1950 and is expected to rise further. “Without a doubt, these heatwaves will become more regular,” says Perkins-Kirkpatrick. “In 30, 40, 50 years, they will be relatively normal. They might not occur every summer, but they will occur much more often than now.” The latest heatwave has led to people being admitted to hospital as well as blackouts due to excessive use of air-conditioning. It has also sparked dozens of bush fires in New South Wales that firefighters are still trying to contain.
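The definition quoted above — at least three consecutive days of unusually high temperatures — can be made concrete with a short sketch for counting heatwave days in a daily temperature record. This is purely illustrative and not from the Climate Council report: the 35 °C threshold (echoing the Moree record mentioned earlier) and the sample data are hypothetical.

```python
def heatwave_days(temps, threshold=35.0, min_run=3):
    """Count days that fall inside runs of at least `min_run`
    consecutive days with temperatures above `threshold` (deg C)."""
    count = 0
    run = 0  # length of the current above-threshold streak
    for t in temps:
        if t > threshold:
            run += 1
        else:
            if run >= min_run:  # streak just ended; count it if long enough
                count += run
            run = 0
    if run >= min_run:  # handle a streak that runs to the end of the record
        count += run
    return count

# A made-up week: one qualifying 4-day run, then one too-short 2-day run.
week = [36.1, 37.5, 38.0, 36.4, 33.0, 36.2, 36.8]
print(heatwave_days(week))  # 4
```

Under this definition, two hot days separated by a single cool day contribute nothing, which is why the Moree streak of 49 consecutive days is so striking.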

In recognizing the huge impact of artificial intelligence in our world today—with big data and machine learning powering a growing explosion of AI in every sphere of our lives, from self-driving cars to customer service chatbots—the world's biggest tech giants have formed an alliance to prepare for AI's challenges. On Wednesday, Facebook, Microsoft, IBM, Amazon, and Google announced the creation of a "Partnership on AI." According to its website, the group was formed to "study and formulate best practices on AI technologies, to advance the public's understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society." AI experts see this mostly as a good thing. "The Partnership on AI reflects the increasing presence of AI within all our lives, and the immense ethical, economical and societal impacts it will have on us," said Toby Walsh, AI professor at the University of New South Wales. Susan Schneider, associate professor at the University of Connecticut, agreed. "There is an increasing sentiment that AI will shape our future," she said. "One day, robots may fight our wars, take care of our homes, and even be our personal advisors. Technological unemployment is projected to increase dramatically from AI and robotics, and many worry that the development of smarter than human AI could threaten humanity." Roman Yampolskiy, head of the Cybersecurity Lab at the University of Louisville, believes it to be an important development, as well. "It is meant to provide a central point of guidance and oversight for the quickly progressing AI industry," he said. Yampolskiy is "particularly happy to see all the big players, at the edge of machine learning research, embrace this initiative. While AI research benefits from competition, AI safety is more likely to benefit from collaboration." 
Schneider views the partnership as a good start to addressing these challenges, "especially if it truly includes advisors from outside of the inner circle of AI leaders, such as ethicists and user activists." However, some AI experts wonder about the details of the plan. "I view this with cautious optimism," said Vincent Conitzer, professor of computer science at Duke University. "It is great that these companies, which have some of the leading minds in AI working at them, recognize the importance of AI research and its societal implications, and are able to join in such a partnership." "Of course, the proof is in the pudding," he said, and details must be worked out. Conitzer wonders what the partnership will do that current academic conferences can't provide. "Will they share information on their latest AI technology that they would not share publicly?" Conitzer asked. "Will there be conferences like the 'faculty summits' that some of these companies host? How will the initiative relate to several philanthropic efforts underway, such as Elon Musk's $11 million AI safety program for funding researchers, or Eric Horvitz's One Hundred Year Study on Artificial Intelligence at Stanford?" And there is concern that the partnership may be controlled by private interests. "The development of AI should not be controlled by business interests alone," said Schneider. "It requires a public dialogue about how AI can best benefit humanity." Walsh shared the concern. "Whilst I welcome the initiative's broad goals of building trust and tackling the ethical challenges of AI, I have several serious reservations," he said. "I would have been much happier if a nonpartisan body like ACM, AAAI or IEEE had been leading this partnership." "There is a very real concern that there is too large a concentration of strength in these tech giants," Walsh said. 
"And, as their actions in arranging their tax affairs, scanning books, lobbying Congress and elsewhere demonstrate, their success does not align completely with public good."

News Article | December 14, 2016

Sydney, Australia: Australian researchers from the National Computational Infrastructure (NCI) and the ARC Centre of Excellence for Climate System Science have produced a remarkable high-resolution animation of the largest El Niño ever recorded. It is so detailed that it took 30,000 computer hours crunching ocean model data on Australia's most powerful supercomputer, Raijin, before it could be extracted by the NCI visualisation team to produce the animation. The animation looks beneath the ocean surface to reveal the oceanic processes that led to the 1997/98 El Niño - an event that caused billions of dollars of damage worldwide and was followed by consecutive strong La Niña events. "The animation shows how shifting pools of warmer or cooler than average water 300m below the surface of the ocean can trigger these powerful events," said Dr Alex Sen Gupta, a member of the visualisation team from the ARC Centre of Excellence for Climate System Science. "When these pools of water burst through to the surface and link up with the atmosphere they can set off a chain reaction that leads to El Niños or La Niñas." The ocean model that produced the animation used a 30km horizontal grid and split the vertical depth into 50 cells, which allowed the researchers to see the development of the El Niño and La Niñas at a high resolution. "Raijin gives us the capacity to model complex global systems like El Niño that require a high resolution for better accuracy," said a member of the team from the Australian National University, Associate Prof Andy Hogg. "It was these huge volumes of data produced by the model that meant we needed the specialist visualisation expertise from NCI to reveal what happened in detail." The 1997/98 El Niño was a particularly damaging event. It was linked to massive forest fires in Indonesia, catastrophic flooding in Peru and the first "global" coral bleaching event that killed 16% of the world's corals in a single year. 
While it is impossible to prevent such events, researchers believe, and the model confirms, that better observation systems can help us forecast them earlier.

"The animation shows us that a well-developed deep ocean observation system can give us advance warning of extreme El Niños and La Niñas," said team member Dr Shayne McGregor from Monash University. "Preserving and expanding the currently sparse observation system is critical to improving our seasonal prediction capability in the future."

Research over the past few years led by CSIRO and the University of New South Wales has indicated that "super" El Niños like the 97/98 event are likely to become more frequent as the climate warms. Dr Agus Santoso, a member of the visualisation team, found in 2013 that as the climate warms we are likely to see noticeable changes to El Niños.

"As the planet warms it also appears that the swings between the two extremes, from El Niño to La Niña like the 1997 to 1999 sequence, will become more frequent," said Dr Santoso from the University of New South Wales. "For this reason and many others a reliable early warning of El Niño and La Niña will be vital for farmers, industry groups and societies to be better prepared for the extreme conditions they inevitably bring."
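The data volumes mentioned in this article follow directly from the stated model resolution. A back-of-envelope estimate (our illustration, assuming a uniform 30 km grid over the whole globe and ignoring land masking and grid curvature, which a real ocean model does not):

```python
import math

# Rough estimate of the ocean model's grid size: 30 km horizontal
# spacing over the globe, 50 vertical levels (illustrative only).
EARTH_RADIUS_KM = 6371.0
surface_area_km2 = 4 * math.pi * EARTH_RADIUS_KM ** 2  # ~5.1e8 km^2

cell_area_km2 = 30 * 30              # one 30 km x 30 km horizontal cell
horizontal_cells = surface_area_km2 / cell_area_km2
total_cells = horizontal_cells * 50  # 50 vertical levels

print(f"~{horizontal_cells:,.0f} horizontal cells")
print(f"~{total_cells / 1e6:.0f} million grid cells per model snapshot")
```

On the order of tens of millions of cells per snapshot, written out at every output time step over years of simulation, which is why the raw model output ran to huge data volumes and required a specialist visualisation team.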

- New and existing investors participated in the round
- Early stage biotechnology company targeting ophthalmology and other areas of unmet medical need

Exonate, an early stage biotechnology company, today announced that it has successfully closed its latest funding round. This is Exonate's third Angel round and is bolstered by further investment from the Angel Co Fund. The funding round also welcomed new investment from Australian venture fund Uniseed; the University of Bristol Enterprise Fund, managed by Parkwalk; Martlet of Cambridge; and Wren Capital, as well as further Angel investors and O2h Ventures. This brings the total amount raised by Exonate in this round to just under £1,500,000.

The investment will be used to accelerate the development of Exonate's first therapeutic area of interest: an eye drop for the treatment of wet Age-related Macular Degeneration, a much-needed alternative to the current treatment by eye injections.

Exonate was delighted with the level of interest in this round. The company's strategy is to maintain its focus on topical delivery of ophthalmology products for diseases of the back of the eye and to expand its science base to address other disease areas, including cancer. Exonate believes that its approach to wet AMD can provide significant improvements for patients, both in drug efficacy and in a reduction in unpleasant injections into the eye. By leveraging the truly global span of this investment, Exonate will become a successful company making scientific progress on three continents. Exonate has an experienced international management team with a wealth of clinical and start-up experience.

Commenting on the announcement, Dr Catherine Beech, CEO of Exonate, said: "I am very pleased at the successful close of this funding round. Exonate's early data is very promising and we have a clear aspiration to successfully deliver medicines in areas of unmet need.
The funding will enable us to progress our current programmes to develop drugs that can be easily administered as eye drops, improve adherence and benefit patients."

Peter Cowley at Martlet, the corporate angel fund of the £2bn Marshall of Cambridge aerospace and automotive engineering company, added: "Martlet is pleased to add Exonate to our 35+ early stage investments. The experienced management team and board, together with a Cambridge base, fit ideally within our investment criteria."

Dr Peter Devine, CEO of Uniseed, commented: "Uniseed is pleased to be investing in Exonate, which was formed on the basis of a medicinal chemistry programme at the University of New South Wales and cutting-edge translational research from the Universities of Bristol and Nottingham. Exonate's novel small molecule approach to treating age-related macular degeneration has the potential to have a significant clinical impact on patients who suffer from this disease, which has an increasing incidence with an aging population."

Alun Williams, Investment Director at Parkwalk Advisors, added: "We are delighted to support this innovative Bristol University spin-out. Their science is exciting, with a number of attractive applications potentially leading to big improvements in efficiency and patient experience. We see it as an excellent investment opportunity for the Fund."

Sunil Shah, Chairman of Exonate, further added: "We would like to thank our new and existing investors, including our highly supportive Business Angels, for their investment. Without their involvement Exonate's scientific innovations would not be able to benefit patients with wet AMD or, in the future, potential patients with cancer, fibrosis and diseases of the kidney."

Exonate is a privately held, early stage biotech company spun out of the University of Nottingham that is focused on Vascular Endothelial Growth Factor (VEGF).
The Company has grown significantly in the last year as it welcomes investment and input from an increasingly global market. Exonate undertakes medicinal chemistry in laboratories at the University of New South Wales, and in January 2017 will welcome John Kurek from Uniseed to its Board of Directors. Exonate continues to be funded by the University of Nottingham and also has links with the University of Bristol. Exonate continues to have strong links with Cambridge Angels and has offices based in Cambridge, laboratories in Nottingham and collaborations with laboratories in India.

It aspires to successfully deliver medicines in areas of unmet need, such as ophthalmology, pain, nephropathy and cancer, by targeting diseases through regulation of VEGF isoforms/variants that are both protective and disease promoting. Exonate's lead program is focused on wet Age-Related Macular Degeneration, known as wet AMD, which is the leading cause of vision loss in people aged 60 and older. The Company is founded on scientific excellence, with strong links to Prof. David Bates and his lab at Nottingham University, which specialises in the biology and biochemical pathways of VEGF splice variants.

Exonate is led by an experienced, international management team that has worked together previously, successfully raising capital for start-ups and early stage companies over many years. Management has cross-disciplinary experience in medicine, finance, drug development and the pharmaceutical and biotech industries. CEO Catherine Beech, OBE, has over 25 years of biotech/pharma experience, including 12 years in big pharma leading development programs in cardiovascular disease, Parkinson's disease and HIV. Exonate is her fourth role as CEO of an emerging biotech company and she also has extensive experience as a non-executive board director.

Today, wet AMD is a leading cause of vision loss in people aged 60 years or older and affects more than 30 million patients worldwide, over 200,000 of those in the UK alone.
If untreated, patients are likely to lose sight in the affected eye within 24 months of disease onset. The main currently available treatment options for wet AMD are:

Please contact Louise Shave at Exonate for further information: 01223 437042

News Article | August 31, 2016

Allen Nutman (L) of the University of Wollongong and Vickie Bennet of the Australian National University hold a specimen of 3.7 billion-year-old fossils found in Greenland, in Canberra, Australia, August 23, 2016. Picture taken August 23, 2016. Yuri Amelin/Australian National University/Handout via REUTERS

OSLO (Reuters) - The earliest fossil evidence of life on Earth has been found in rocks 3.7 billion years old in Greenland, raising the chances that there was life on Mars aeons ago, when both planets were similarly desolate, scientists said on Wednesday.

The experts found tiny humps, between 1 and 4 cm (0.4 and 1.6 inches) tall, in rocks at Isua in south-west Greenland that they said were fossilized groups of microbes similar to ones now found in seas from Bermuda to Australia. If confirmed as fossilized communities of bacteria known as stromatolites - rather than a freak natural formation - the lumps would pre-date the fossils found in Australia that were previously the earliest evidence of life on Earth by 220 million years.

"This indicates the Earth was no longer some sort of hell 3.7 billion years ago," lead author Allen Nutman, of the University of Wollongong, told Reuters of the findings, which were published in the journal Nature. "It was a place where life could flourish."

Earth formed about 4.6 billion years ago, and the relative sophistication of stromatolites indicated that life had evolved quickly after a bombardment by asteroids ended about 4 billion years ago. "Stromatolites contain billions of bacteria ... they're making the equivalent of apartment complexes," said Martin Van Kranendonk, a co-author at the University of New South Wales who identified the previously oldest fossils, dating from 3.48 billion years ago.

At the time stromatolites started growing in gooey masses on a forgotten seabed, the Earth was probably similar to Mars, with liquid water at the surface, orbiting a sun that was 30 percent dimmer than today, the scientists said.
Those parallels could be a new spur to study whether Mars once had life, the authors said. "Suddenly, Mars may look even more promising than before as a potential abode for past life," Abigail Allwood, of the California Institute of Technology, wrote in a commentary in Nature.

The Greenland find was made after a retreat of snow and ice exposed long-hidden rocks. Greenland's government hopes that a thaw linked to global warming will have positive spin-offs, such as exposing more minerals.

Nutman said the main controversy was likely to be that the fossils were in metamorphic rocks, reckoned to have formed under huge stress at temperatures up to 550 degrees Celsius (1,022°F) - usually too high to preserve any trace of life. Still, Van Kranendonk told Reuters that dried-out biological material could sometimes survive such a baking, adding that he was "absolutely convinced" by the Greenland fossils.

News Article | November 14, 2016

LA JOLLA, CA - November 14, 2016 - A new study led by scientists at The Scripps Research Institute (TSRI) reveals that a protein first discovered at TSRI in 2010 is directly responsible for sensing touch. Knowledge about this protein, called Piezo1, could be relevant for designing better pain medications and exploring future therapies for blood disorders, hypertension and more.

"This latest work provides definitive proof that Piezos are, by themselves, 'mechanosensitive,'" said senior author Ardem Patapoutian, a professor at TSRI and a Howard Hughes Medical Institute (HHMI) Investigator. The study was published recently in the journal Cell Reports.

Piezo1 is an "ion channel," or gateway through the cell membrane. When it senses mechanical force, it opens to allow ions to pass into the cell, starting a chain of events that sends a signal to the brain - in other words, Piezo proteins control the sensation of touch.

"Out of all our senses, the sense of touch is the least understood, so significant efforts are being made to acquire a more complete understanding," said Ruhma Syeda, a professional scientific collaborator at TSRI and first author of the new study. "We are only now unraveling the physiological roles of Piezo proteins."

The new study addressed a lingering question about Piezo1: does it sense touch directly, or is it influenced by nearby proteins and other cellular components? To study this, the researchers used a "reductionist" approach, which Syeda helped develop during her postdoctoral research. In this system, scientists extract a protein from its native environment in the cell and study it in a simpler membrane environment. This enables the scientists to see how the protein behaves on its own, without influence from other cellular players.

The researchers found that Piezo1 does appear to directly sense force by detecting tension in the cell membrane. "It seems like it has a built-in sensor," said Syeda.
The next step in this research is to better understand the molecular structure of Piezo1, which could lead to a "map" of the protein and further insights into its function. "I'm also very excited that this reductionist approach can be applied to other channels," said Syeda. "We'll be able to understand more channels and proteins and their physiological roles."

In addition to Patapoutian and Syeda, authors of the study, "Piezo1 Channels Are Inherently Mechanosensitive," were Maria N. Florendo and Jennifer M. Kefauver of HHMI and TSRI; Charles D. Cox of the Victor Chang Cardiac Research Institute; Jose S. Santos, previously at the University of California, San Diego, now at Dart NeuroScience; and Boris Martinac of the Victor Chang Cardiac Research Institute and the University of New South Wales. The study was supported by the National Institutes of Health (grant R01NS083174), the National Health and Medical Research Council of Australia and the Howard Hughes Medical Institute.

The Scripps Research Institute (TSRI) is one of the world's largest independent, not-for-profit organizations focusing on research in the biomedical sciences. TSRI is internationally recognized for its contributions to science and health, including its role in laying the foundation for new treatments for cancer, rheumatoid arthritis, hemophilia, and other diseases. An institution that evolved from the Scripps Metabolic Clinic, founded by philanthropist Ellen Browning Scripps in 1924, the institute now employs more than 2,500 people on its campuses in La Jolla, CA, and Jupiter, FL, where its renowned scientists - including two Nobel laureates and 20 members of the National Academies of Sciences, Engineering, and Medicine - work toward their next discoveries. The institute's graduate program, which awards PhD degrees in biology and chemistry, ranks among the top ten of its kind in the nation. For more information, see http://www. .

Lord M.S.,University of New South Wales | Foss M.,University of Aarhus | Besenbacher F.,University of Aarhus
Nano Today | Year: 2010

The nanoscale surface topography of artificial materials is known to play a significant role in interactions with biological systems such as proteins and cells. A detailed understanding, and ultimately control, of these interactions is therefore of utmost importance in areas such as cell culture technology, biosensing and medical device development. We briefly review different methods to synthesise and fabricate surfaces with well-defined nanoscale topography, and discuss how such nanostructured surfaces can be used as model systems in protein and cellular assays. Finally, an outlook of possible future in vitro and in vivo developments is provided. © 2010.

Donaldson A.,Monash University | Poulos R.G.,University of New South Wales
British Journal of Sports Medicine | Year: 2014

Background: This paper describes the development of a theory-informed and evidence-informed, context-specific diffusion plan for the Mayday Safety Procedure (MSP) among community rugby coaches in regional New South Wales, Australia. Methods: Step 5 of Intervention Mapping was used to plan strategies to enhance MSP adoption and implementation. Results: Coaches were identified as the primary MSP adopters and implementers within a system including administrators, players and referees. A local advisory group was established to ensure context relevance. Performance objectives (eg, attend MSP training for coaches) and determinants of adoption and implementation behaviour (eg, knowledge, beliefs, skills and environment) were identified, informed by Social Cognitive Theory. Adoption and implementation matrices were developed and change-objectives for coaches were identified (eg, skills to deliver MSP training to players). Finally, intervention methods and specific strategies (eg, coach education, social marketing and policy and by-law development) were identified based on advisory group member experience, evidence of effective coach safety behaviour-change interventions and Diffusion of Innovations theory. Conclusions: This is the first published example of a systematic approach to plan injury prevention programme diffusion in community sports. The key strengths of this approach were an effective researcher-practitioner partnership; actively engaging local sports administrators; targeting specific behaviour determinants, informed by theory and evidence; and taking context-related practical strengths and constraints into consideration. The major challenges were the time involved in using a systematic diffusion planning approach for the first time; and finding a planning language that was acceptable and meaningful to researchers and practitioners.

Mahmoudifar N.,University of New South Wales | Doran P.M.,Monash University
Biomaterials | Year: 2010

Chondrogenic differentiation of human adult adipose-derived stem cells was studied in vitro for the development of engineered cartilage tissue. Cells cultured under dynamic conditions in polyglycolic acid (PGA) scaffolds produced substantially higher glycosaminoglycan (GAG) and total collagen levels than cells in pellet cultures. This result reflects the importance of cell attachment and cell-scaffold interactions in stem cell differentiation and chondrogenesis. Although gene expression levels for both aggrecan and collagen type II were up-regulated significantly in PGA cultures treated with transforming growth factor β1 (TGF-β1), synthesis of GAG but not collagen type II was enhanced in tissue constructs when TGF-β1 was added to the medium. Bone morphogenetic protein-6 (BMP-6) in the presence of TGF-β1 was effective in improving GAG and total collagen production when the cells were pre-treated with fibroblast growth factor-2 (FGF-2) prior to scaffold seeding. Extending the culture duration from 2 to 5 weeks did not improve cartilage development in PGA scaffolds; loss of cells from the constructs suggested that the rate of scaffold degradation exceeded the rate of replacement by extracellular matrix (ECM) during the 5-week period. Stem cells in PGA scaffolds were cultured in perfusion-type recirculation bioreactors operated with periodic medium flow reversal. The highest levels of GAG and collagen type II accumulation were achieved in the bioreactor cultures after the seeding cell density was increased from 2 × 10^7 to 4 × 10^7 cells per scaffold. © 2010 Elsevier Ltd. All rights reserved.

Chen S.,The Clean Tech Center | Bao P.,University of New South Wales | Wang G.,The Clean Tech Center
Nano Energy | Year: 2013

Fe2O3-CNT-graphene nanosheet (Fe2O3-CNT-GNS) hybrid materials were synthesized using a chemical vapor deposition method. The as-prepared materials consist of Fe2O3 nanorings, bamboo-like carbon nanotubes and graphene nanosheets, which form an open three-dimensional architecture. For the first time, we observed the growth of bamboo-like carbon nanotubes with open tips, which were catalyzed by iron nanorings. When applied as anode materials in lithium ion batteries, the Fe2O3-CNT-GNS hybrid materials exhibited a high specific capacity of 984 mAh g^-1 with superior cycling stability and high rate capability. This can be ascribed to the short Li+ diffusion path of the bamboo-like CNTs, the additional active reaction sites provided by graphene layers inside the CNTs, the flexible and highly conductive graphene nanosheets, and the open three-dimensional structure. © 2012 Elsevier Ltd.
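As a point of reference for the 984 mAh g^-1 figure, the theoretical capacity of an Fe2O3 conversion anode (6 Li per formula unit) can be estimated from Faraday's law. This back-of-envelope check is our illustration, not a calculation from the paper:

```python
# Theoretical gravimetric capacity of an Fe2O3 conversion anode,
# from Faraday's law: C [mAh/g] = n * F / (3.6 * M).
# (Illustrative check; not taken from the paper itself.)
FARADAY = 96485.0                    # Faraday constant, C/mol
M_FE2O3 = 2 * 55.845 + 3 * 15.999    # molar mass of Fe2O3, g/mol
n_electrons = 6                      # Fe2O3 + 6 Li+ + 6 e- -> 2 Fe + 3 Li2O

capacity_mAh_g = n_electrons * FARADAY / (3.6 * M_FE2O3)
print(f"theoretical capacity ~{capacity_mAh_g:.0f} mAh/g")  # ~1007 mAh/g
```

The reported 984 mAh g^-1 thus sits just below the theoretical conversion capacity of the pure oxide.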

Walsh N.,Monash University | Maher L.,University of New South Wales
Current Opinion in HIV and AIDS | Year: 2012

PURPOSE OF REVIEW: The recent major shift toward oral direct acting hepatitis C virus (HCV) treatments has the potential to revolutionize the global response to HCV. People who inject drugs (PWID) are a large key affected population who stand to benefit from these new medications. RECENT FINDINGS: There is a large number of new drug classes and targets with activity against HCV. Although effective for HCV treatment in monoinfection and coinfection with HIV, most direct-acting antivirals (DAAs) remain within the research pipeline, with only two having achieved regulatory approval to date. Clinical trial data are not available regarding HCV treatment for PWID with DAAs. This article reviews clinical data on HCV treatment for a number of promising compounds in HCV monoinfection and coinfection with HIV and discusses the barriers facing PWID in scale-up and roll-out of DAAs in the coming years. SUMMARY: DAAs have the potential to revolutionize HCV treatment. There will be significant access barriers for people who inject drugs to these new medications. Copyright © Lippincott Williams & Wilkins.

Khanna A.,University of New South Wales | Pimanda J.E.,University of New South Wales | Westermarck J.,Åbo Akademi University | Westermarck J.,University of Turku
Cancer Research | Year: 2013

Protein phosphatase 2A (PP2A) complexes function as tumor suppressors by inhibiting the activity of several critical oncogenic signaling pathways. Consequently, inhibition of the PP2A phosphatase activity is one of many prerequisites for the transformation of normal human cells into cancerous cells. However, mechanisms for PP2A inactivation in human cancers are poorly understood. The aberrant expression of cancerous inhibitor of protein phosphatase 2A (CIP2A), a recently identified endogenous PP2A inhibitor in malignant cells, is one such mechanism. Various independent studies have validated CIP2A's role in promoting tumor growth and resistance to apoptosis and senescence-inducing therapies. Notably, high CIP2A expression predicts poor patient prognosis in several human cancer types. Among the oncogenic proteins dephosphorylated by PP2A, the MYC oncoprotein, which is phosphorylated at serine 62, has surfaced as a marker for the oncogenic activity of CIP2A. The positive-feedback loop between CIP2A and MYC augments the activity of MYC in cancer cells. In addition, CIP2A promotes the phosphorylation and activity of additional oncoproteins, including E2F1 and AKT. However, CIP2A is not essential for normal mouse growth and development. These findings indicate that CIP2A is a novel anticancer target based on PP2A reactivation and inhibition of the oncogenic activity of its downstream effectors. The potential approaches and feasibility of targeting CIP2A are discussed here. ©2013 AACR.

Mahmoudifar N.,University of New South Wales | Doran P.M.,Monash University
Trends in Biotechnology | Year: 2012

Joint injury and disease are painful and debilitating conditions affecting a substantial proportion of the population. The idea that damaged cartilage in articulating joints might be replaced seamlessly with tissue-engineered cartilage is of obvious commercial interest because the market for such treatments is large. Recently, a wealth of new information about the complex biology of chondrogenesis and cartilage has emerged from stem cell research, including increasing evidence of the role of physical stimuli in directing differentiation. The challenge for the next generation of tissue engineers is to identify the key elements in this new body of knowledge that can be applied to overcome current limitations affecting cartilage synthesis in vitro. Here we review the status of cartilage tissue engineering and examine the contribution of stem cell research to technology development for cartilage production. © 2011 Elsevier Ltd.

Beekman M.,University of New South Wales | Dowling D.K.,Monash University | Aanen D.K.,Wageningen University
Philosophical transactions of the Royal Society of London. Series B, Biological sciences | Year: 2014

Eukaryotic cells typically contain numerous mitochondria, each with multiple copies of their own genome, the mtDNA. Uniparental transmission of mitochondria, usually via the mother, prevents the mixing of mtDNA from different individuals. While on the one hand, this should resolve the potential for selection for fast-replicating mtDNA variants that reduce organismal fitness, maternal inheritance will, in theory, come with another set of problems that are specifically relevant to males. Maternal inheritance implies that the mitochondrial genome is never transmitted through males, and thus selection can target only the mtDNA sequence when carried by females. A consequence is that mtDNA mutations that confer male-biased phenotypic expression will be prone to evade selection, and accumulate. Here, we review the evidence from the ecological, evolutionary and medical literature for male specificity of mtDNA mutations affecting fertility, health and ageing. While such effects have been discovered experimentally in the laboratory, their relevance to natural populations--including the human population--remains unclear. We suggest that the existence of male expression-biased mtDNA mutations is likely to be a broad phenomenon, but that these mutations remain cryptic owing to the presence of counter-adapted nuclear compensatory modifier mutations, which offset their deleterious effects. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

Agency: Cordis | Branch: FP7 | Program: CP | Phase: ICT-2013.1.5 | Award Amount: 8.44M | Year: 2013

The aim of the AU2EU project is to implement and demonstrate in a real-life environment an integrated eAuthentication and eAuthorization framework to enable trusted collaborations and delivery of services across different organizational/governmental jurisdictions. Consequently, the project aims at fostering the adoption of security and privacy-by-design technologies in European and global markets. This objective will be achieved by:

1) designing a joint eAuthentication and eAuthorization framework for cross-domain and jurisdictional collaborations, supporting different identity/attribute providers and organizational policies, and guaranteeing privacy, security and trust;
2) advancing the state-of-the-art by extending the joint eAuthentication and eAuthorization framework with assurance of claims, trust indicators, policy enforcement mechanisms and processing-under-encryption techniques to address specific security and confidentiality requirements of large distributed infrastructures;
3) implementing the joint eAuthentication and eAuthorization framework as part of a platform that supports collaborative secure distributed storage, secure data processing and management in the cloud and in offline scenarios;
4) deploying the designed framework and platform in two pilots: one on bio-security incident management and collaborative services in Australia, and one on eHealth and Ambient Assisted Living in Europe; and
5) validating practical aspects of the developed platform such as scalability, efficiency, maturity and usability.

The aforementioned activities will contribute to increased trust, security and privacy, which in turn should lead to increased adoption of (cloud-based) critical infrastructures and collaborative delivery of services dealing with sensitive data.
AU2EU strategically invests in two pilots deploying existing research results as well as the novel techniques developed in the project, to bridge the gap between research and market adoption.

The project builds on existing schemes and research results, particularly on the results of the ABC4Trust project as well as the Trust in Digital Life (TDL) initiative, which initiated this project and will support its objectives by executing aligned activities defined in the TDL strategic research agenda. The project brings together a strong collaboration of leading industry players (such as Philips, IBM, NEC and Thales), SMEs (such as Bicore) and research organizations from Europe (such as Eindhoven University of Technology) and Australia (such as CSIRO, Edith Cowan University, RMIT University, the University of New South Wales and Macquarie University), as well as a large voluntary welfare association (the German Red Cross). The consortium is determined to make a sustained long-term impact through commercialization, open source and standardization of an open, composable infrastructure for e-services where privacy and interoperability with existing technologies are guaranteed.
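The framework objectives above combine authentication (who the subject is, asserted by an identity/attribute provider) with authorization (what the subject's attributes permit under a domain's policy). A deliberately minimal sketch of cross-domain attribute-based authorization, using entirely hypothetical domain, provider and attribute names (this illustrates the general pattern, not the AU2EU framework's actual API):

```python
# Each domain trusts a set of attribute providers and defines a
# policy mapping attribute names to sets of acceptable values.
# All names below are hypothetical illustrations.
POLICIES = {
    "ehealth-eu": {
        "role": {"clinician", "nurse"},
        "assurance_level": {"high"},
    },
    "biosecurity-au": {
        "role": {"inspector"},
        "assurance_level": {"high", "medium"},
    },
}

TRUSTED_PROVIDERS = {
    "ehealth-eu": {"idp.hospital.example"},
    "biosecurity-au": {"idp.gov.example"},
}

def authorize(domain, issuer, attributes):
    """Grant access only if the attribute issuer is trusted by the
    domain and every policy requirement is satisfied."""
    if issuer not in TRUSTED_PROVIDERS[domain]:
        return False
    policy = POLICIES[domain]
    return all(attributes.get(name) in allowed
               for name, allowed in policy.items())

# A clinician asserted by the trusted hospital IdP is admitted;
# the same attributes from an untrusted issuer are rejected.
ok = authorize("ehealth-eu", "idp.hospital.example",
               {"role": "clinician", "assurance_level": "high"})
denied = authorize("ehealth-eu", "idp.gov.example",
                   {"role": "clinician", "assurance_level": "high"})
print(ok, denied)  # True False
```

The point of the pattern is that the issuer check and the policy check are separate: cross-jurisdictional collaboration comes down to agreeing on which providers each domain trusts and how attribute vocabularies map between domains.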

Triantafilis J.,University of New South Wales | Monteiro Santos F.A.,University of Lisbon
Geoderma | Year: 2013

One of the most popular geophysical methods currently being used to provide information about the spatial variation of soil properties is electromagnetic (EM) induction. This is because EM instruments are easy to set up, use and mobilise. In addition, the measured apparent electrical conductivity (σa, mS m^-1) has been shown to be useful in mapping soil (e.g. clay) and hydrological (e.g. water holding capacity) properties. However, in most of these studies, the potential to model the stratigraphic nature of soil variation (i.e. horizons) at discrete depth intervals has not been fully realised. In this research, we describe how a single-frequency, multiple-coil EM instrument (the DUALEM-421) and an inversion algorithm (EM4Soil) can be used to generate a two-dimensional model of the electrical conductivity (σ, mS m^-1). The algorithm was evaluated using σa acquired across an irrigated agricultural field on an extensive Quaternary alluvial clay plain northwest of Moree, New South Wales (NSW, Australia). In addition, we conduct inversions of various combinations of σa data, to compare and contrast the results that can be achieved using the various combinations of commercially available instruments. We compare the inverted models of σ with measured soil properties such as clay content, cation exchange capacity, and the electrical conductivity of a one part soil to five parts water suspension (EC1:5, dS m^-1) and of a saturated soil paste (ECp, dS m^-1). In general, σ is shown to compare favourably with ECp. In terms of identifying an optimal set of σa data, we found that whilst the Akaike Information Criterion (AIC) indicated that a DUALEM-1 and a DUALEM-421 configuration (using only Hcon) provided the best balance of goodness of fit between σ and ECp against model complexity, the best reconstruction of σ, in terms of a 1:1 relationship with ECp, is achieved using all six possible σa measurements that can be obtained by a DUALEM-421. © 2013 Elsevier B.V.
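The AIC model selection mentioned in this abstract trades goodness of fit against model complexity; for a least-squares fit it reduces to AIC = n·ln(RSS/n) + 2k, where RSS is the residual sum of squares and k the number of fitted parameters. A small illustration with made-up residuals (the numbers are hypothetical, not the paper's data):

```python
import math

def aic_least_squares(residuals, n_params):
    """Akaike Information Criterion for a least-squares model:
    AIC = n * ln(RSS / n) + 2k. Lower is better."""
    n = len(residuals)
    rss = sum(r * r for r in residuals)
    return n * math.log(rss / n) + 2 * n_params

# Hypothetical sigma-vs-ECp misfits for two inversions: a simple
# model (few coil configurations) and a richer one (more parameters).
simple_resid = [0.30, -0.25, 0.28, -0.31, 0.27, -0.29]
rich_resid = [0.10, -0.08, 0.12, -0.09, 0.11, -0.10]

aic_simple = aic_least_squares(simple_resid, n_params=2)
aic_rich = aic_least_squares(rich_resid, n_params=6)

# Here the richer model's fit improvement outweighs the 2k penalty
# for its extra parameters, so it has the lower (better) AIC.
print(aic_simple, aic_rich)
```

When the extra coil configurations only marginally reduce the misfit, the 2k penalty tips the balance the other way, which is how AIC can prefer a simpler instrument configuration even though the full DUALEM-421 dataset reconstructs σ best.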

This has been a major week for open-source AI. On Monday, nonprofit research group OpenAI announced Universe, a software platform that includes thousands of games and acts as a virtual "curriculum" to train AI on, OpenAI's Jack Clark told TechRepublic. Following right on its heels came Google's announcement that DeepMind, its machine learning platform, has created a "lab." The DeepMind Lab offers a 3D training environment, and its entire code library is now available on the open source hosting service GitHub.

With Universe, OpenAI aimed to address a primary challenge for AI research, which is the lack of "a large variety of realistic environments [for AI agents to] learn progressively more complex skills, and where we can accurately benchmark their progress," the company stated. By making the platform open source, the organization said, it believed it could begin measuring and accelerating progress.

DeepMind was founded in 2010 and was bought by Google in 2014 for $625 million. As reported by TechRepublic, Demis Hassabis, the head of DeepMind, said the mission of his company is to "solve intelligence." So while DeepMind Lab is run by Google, a for-profit entity, it seems to have a goal similar to that of OpenAI's Universe: developing artificial general intelligence.

Greg Brockman, cofounder of OpenAI, told TechRepublic that these platforms offer "a new world to train reinforcement learning algorithms in, and new ways of evaluating their performance."

"DeepMind Lab, along with other virtual worlds like Microsoft's Project Malmo or Facebook's TorchCraft, points to a belief in the AI community that we're moving from simply classifying the world and towards building agents that act within it," Brockman said. "A truly useful agent will need to apply knowledge learned in one world to perform well in another world." And the race to create an intelligent agent is heating up.
"We are starting to see competition to lead development of artificial general intelligence intensify," said Roman Yampolskiy, head of the University of Louisville's Cybersecurity Lab. "Data availability is a major bottleneck for deep neural network researchers, which has now been addressed by three major players in that domain." (Facebook, Yampolskiy noted, also has code available on GitHub.) In its announcement, DeepMind noted that the "only known examples of general-purpose intelligence in the natural world arose from a combination of evolution, development, and learning, grounded in physics and the sensory apparatus of animals." Animal and human intelligence, according to DeepMind, is likely a result of our environment, and probably would not have evolved without specific circumstances. "Consider the alternative," the post said: "if you or I had grown up in a world that looked like Space Invaders or Pac-Man, it doesn't seem likely we would have achieved much general intelligence!" In November, DeepMind announced a partnership with game development studio Blizzard Entertainment to use the popular StarCraft II game as a testing platform for machine learning research. "StarCraft is an interesting testing environment for current AI research because it provides a useful bridge to the messiness of the real-world," DeepMind said in a post. "The skills required for an agent to progress through the environment and play StarCraft well could ultimately transfer to real-world tasks." Yampolskiy said he believes these types of environments may soon be turned into "a single testbed, perfect for testing cross-domain learning and decision making." But the real world, Yampolskiy said, doesn't reflect these artificial environments, in which you can control factors.
"The question is," Yampolskiy said, "do things learned in the simulated worlds translate safely to the real world?" Toby Walsh, professor of AI at the University of New South Wales, echoed this in his observation of Universe: "Many of the goals we need to address to get to general AI will not be solved," said Walsh, "even with this software platform." While Walsh said the move to make the library open source is "great for the AI community," and that DeepMind Lab "provides a richer play world than OpenAI's Universe," he doubts that it can fulfill some of its promises. Specifically, Walsh doesn't think that a 3D world will make it "fundamentally easier to develop intelligence," as DeepMind claims. Why? Because it is missing some of the important pieces that go into intelligence, Walsh said, such as natural language and "interaction with other intelligence, not just simple agents." "AI is about more than playing games intelligently," said Walsh. "Games have proved to be a useful starting point, but they are unlikely to be an end point." Children, for example, Walsh pointed out, move away from games to "richer learning environments" in the real world.
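The training platforms discussed above all expose variations of the same reinforcement-learning loop: an agent observes the environment, acts, and receives a reward. The sketch below assumes nothing about the real Universe or DeepMind Lab APIs; the GridEnv class and its reset/step methods are invented for illustration, echoing the common Gym-style convention.

```python
import random

class GridEnv:
    """Toy stand-in for a Universe/DeepMind Lab-style environment:
    the agent walks a 1-D corridor and is rewarded for reaching the end."""
    def __init__(self, length=10):
        self.length = length
        self.pos = 0

    def reset(self):
        self.pos = 0
        return self.pos  # the observation is just the position

    def step(self, action):
        # action: -1 (move left) or +1 (move right), clamped to the corridor
        self.pos = max(0, min(self.length, self.pos + action))
        done = self.pos == self.length
        reward = 1.0 if done else 0.0
        return self.pos, reward, done

# The canonical agent-environment loop: observe, act, collect reward, repeat.
env = GridEnv()
obs, total_reward = env.reset(), 0.0
random.seed(0)
for _ in range(200):
    action = random.choice([-1, 1])  # a "random agent" with no learning
    obs, reward, done = env.step(action)
    total_reward += reward
    if done:
        break
print("episode reward:", total_reward)
```

Benchmarking, in these platforms' terms, means swapping the random agent for a learning algorithm and comparing episode rewards across many such environments.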

News Article | February 21, 2017

Monash University (Australia) and Cardiff University (UK) researchers have come a step closer to understanding how the human immunodeficiency virus (HIV) evades the immune system. Declared a pandemic in 1987 by the World Health Organization, HIV infection has been responsible for 39 million deaths over the last 30 years. It remains one of the world's most significant public health challenges, and a greater understanding of how HIV functions is therefore urgently needed so that researchers can design better therapies to target this devastating pathogen. Published today in Nature Structural and Molecular Biology, the Monash-Cardiff team's work makes an important contribution to understanding how HIV-1 evades the immune system. They demonstrated, in molecular detail, how mutations within HIV can change the way key immune molecules, termed the Major Histocompatibility Complex (MHC), display fragments of the virus, and how this results in HIV remaining "hidden" from the immune system. Principal author of the study, Dr Julian Vivian, said the team was yet to develop a complete understanding of how HIV outmanoeuvred our immune system. "This work uncovers a novel mechanism for HIV immune escape, which will be important to incorporate into future vaccine development and may have broader implications for immune recognition of MHC molecules," he said. The recent finding is part of a much larger international alliance between the two universities: the Systems Immunity Research Institute (SIURI) at Cardiff University and the Monash Biomedicine Discovery Institute (BDI) have signed a Memorandum of Understanding. The five-year mutual agreement recognises a number of highly productive joint projects already being conducted around inflammation and immunity, and provides a mechanism for enabling additional innovative projects and student exchange in the areas of protective immunity, metabolism, autoimmunity and cancer.
A Chief Investigator on the ARC CoE for Advanced Molecular Imaging, based at Monash BDI, Professor Jamie Rossjohn, said the finding was exciting and unexpected. "These results were only possible because of the close collaborative ties between Monash and Cardiff researchers." Cardiff University Vice-Chancellor, Professor Colin Riordan, said the signing of the MoU called for a celebration. "Formalising this collaboration is another step forward in what will continue to be a highly successful exchange program and transfer of knowledge between the two countries for the benefit of all." Monash BDI Director, Professor John Carroll, said the research demonstrated the power of international collaboration. "We are bringing together excellence in molecular and systems-level immunity in this partnership, and I know it will lead to many more great discoveries." The $39 million ARC-funded Imaging CoE develops and uses innovative imaging technologies to visualise the molecular interactions that underpin the immune system. Featuring an internationally renowned team of lead scientists across five major Australian universities, with academic and commercial partners globally, the Centre uses a truly multi-scale and programmatic approach to imaging to deliver maximum impact. The Imaging CoE is headquartered at Monash University with four collaborating organisations: La Trobe University, the University of Melbourne, the University of New South Wales and the University of Queensland. Committed to making the discoveries that will relieve the future burden of disease, the newly established Monash Biomedicine Discovery Institute at Monash University brings together more than 120 internationally renowned research teams. Our researchers are supported by world-class technology and infrastructure, and partner with industry, clinicians and researchers internationally to enhance lives through discovery.

News Article | October 26, 2016

Let it rip. Eating slow-release carbs and cutting down on protein may prevent rotten-egg farts, according to a study of the gases emitted by human faeces samples. Farts are mostly composed of odourless gases. There is oxygen and nitrogen from swallowed air, while hydrogen, methane and carbon dioxide are produced when bacteria in the large intestine ferment the carbohydrates we eat. The distinctive rotten-egg whiff is caused by traces of hydrogen sulphide, which gut bacteria are known to produce from protein. In addition to causing red-faced moments, this gas can exacerbate inflammatory bowel disease and increase the risk of bowel cancer. This prompted Chu Yao at Monash University in Melbourne, Australia, and her colleagues to investigate how different foods affect the amount of hydrogen sulphide produced in the gut. Examining faeces from seven healthy people, the team found that mixing the samples with cysteine – a major sulphur-containing component of meat, eggs, dairy and other types of protein – caused hydrogen sulphide emissions from gut bacteria to increase more than seven-fold. “This explains why bodybuilders who consume lots of protein powder are known to have smelly farts,” says Yao. But hydrogen sulphide production declined substantially when the team mixed the faeces with four slowly absorbed carbohydrates. These pass through the small intestine without being fully digested, and are then fermented by bacteria in the large intestine. Two of these carbs – resistant starch, which is found in potatoes, bananas, legumes and cereals, and fructans, which are found in wheat, artichokes and asparagus – both reduced hydrogen sulphide production by about 75 per cent. These foods are highly fermentable, meaning they are preferentially broken down ahead of protein, Yao says. “The focus is taken away from the protein, so hydrogen sulphide is not produced.” The other two carbs that decreased hydrogen sulphide emissions were psyllium and sterculia, but only by 25 per cent.
These two forms of dietary fibre are poorly fermentable, but can mop up rotten-egg gas as they soak up water and swell in size. The team’s findings, presented at the annual scientific meeting of the Gastroenterological Society of Australia in Adelaide on Tuesday, overturn the conventional wisdom that people with stinky gas should eat less fibre. People with excessive flatulence are often advised to avoid fibre to reduce how much hydrogen, methane and carbon dioxide is produced by bacterial fermentation. However, this may increase hydrogen sulphide emissions, which are more likely to clear the room than the three odourless gases, Yao says. The team’s findings are consistent with research showing that high fibre intake increases people’s fart production but not the subjective smell. Rosemary Stanton at the University of New South Wales, Australia, who conducted that research, says there is no good reason for self-conscious farters to avoid fibre. “When we dug down, it wasn’t the act of farting that worried people, it was the smell,” she says. “But the concerning thing is that there are all these people walking around constipated because they are too scared to eat fibre in case they do a bad fart.” Yao’s team now plans to test whether getting people to eat only small amounts of protein and lots of slow-release carbs can prevent rotten-egg farts in both healthy volunteers and people with inflammatory bowel disease.

News Article | December 16, 2016

Many current fears around AI and automation center on the idea that a superintelligence could somehow "take over," turning streets around the globe into scenes from The Terminator. While there is much to be gained from discussing the safe development of AI, there's another, more imminent danger: autonomous weapons. On Friday, after three years of negotiations, the UN unanimously agreed to take action. At the Fifth Review Conference of the UN Convention on Certain Conventional Weapons, countries around the world agreed to begin formal discussions—which will take place for two weeks at the 2017 UN convention in Geneva—on a possible ban of lethal autonomous weapons. Talks will begin in April or August, and 88 countries have agreed to attend. This week, the number of countries that support a full ban on killer robots went from 14 to 19. "By moving to a group of governmental experts to formalize the official process, it takes it from being led by these kind of outside academics, and means that they have to find government experts to handle it," said Mary Wareham, coordinator for the Campaign to Stop Killer Robots. "It raises the expectation that they're going to do something about this," she said, although what will be done is not yet clear. "It is great to see universal recognition of dangers coming from weaponized artificial intelligence," said Roman Yampolskiy, director of the Cybersecurity Lab at the University of Louisville. "It is my hope that, in the future, general danger coming from malevolent AI or poorly designed superintelligent systems will likewise be universally understood." In an address to the UN—which included a briefing by the Campaign to Stop Killer Robots—Toby Walsh, professor of artificial intelligence at the University of New South Wales, highlighted the necessary steps involved in obtaining a ban on fully autonomous weapons.
Walsh referenced an initiative, announced on Tuesday by the IEEE—a group of half a million members in the tech space—that "defined ethical standards for those building autonomous systems." The IEEE report "contained a number of recommendations including: there must be meaningful human control over individual attacks, and the design, development, or engineering of autonomous weapons beyond meaningful human control to be used offensively or to kill humans is unethical." Last year, Walsh wrote an open letter, signed by thousands of leading researchers from the AI community, voicing concerns about an AI arms race, and what could happen if these lethal weapons—which can kill with superhuman speed—end up in the wrong hands. Earlier this week, nine members of the US Congress also wrote a letter to the secretaries of state and defense, supporting a ban on autonomous weapons. "This is a very important issue that has suddenly become urgent," said Vince Conitzer, computer science professor at Duke University. "Where it concerns AI, the border between science fiction and reality is getting blurry in places, and autonomous weapons are on the fast track to crossing over to the reality side. Now is the time to act on this." Bonnie Docherty, who represents Human Rights Watch and Harvard Law School's International Human Rights Clinic, co-authored a report this week highlighting the dangers of fully autonomous weapons. While Docherty is disappointed that the talks will be limited to two weeks—she'd been hoping for four—she is still encouraged by the decision. "This week is a key moment for international efforts to address the concerns raised by fully autonomous weapons," Docherty said.
"We are pleased that the countries at this major disarmament forum have agreed to formalize discussions on lethal autonomous weapons systems, which should be an important step on the road to a ban." Roughly a hundred countries were involved in the discussions at the UN. Most of them have been entirely on board. "They've been over and over saying 'yes, we need to go to the next level,'" said Wareham. Even China, she said, agreed that international law on the issue was critical. Only one country—which got on board on Friday—"expressed skepticism, hesitation, and said it's premature," said Wareham.

LA JOLLA, Calif., Feb. 27, 2017 (GLOBE NEWSWIRE) -- MediciNova, Inc., a biopharmaceutical company traded on the NASDAQ Global Market (NASDAQ:MNOV) and the JASDAQ Market of the Tokyo Stock Exchange (Code Number: 4875), today announced that Associate Professor Kerrie McDonald, Head of the Cure Brain Cancer Foundation Biomarkers and Translational Research Group at the Lowy Cancer Research Centre, University of New South Wales, will present positive results from an in vitro study that examined the potential clinical efficacy of MN-166 (ibudilast) in combination with temozolomide (TMZ) in patient-derived cell lines (PDCLs), together with an orthotopic glioblastoma (GBM) animal model study, during the 5th Quadrennial Meeting of the World Federation of Neuro-Oncology Societies (WFNOS), to be held May 4-7, 2017 in Zurich, Switzerland. The presentation, "Treatment of recurrent glioblastoma with the cytokine inhibitor ibudilast in combination with temozolomide," scheduled for May 5, 2017, is expected to extend understanding of the synergistic activity between MN-166 (ibudilast) and TMZ and the potential for MN-166 (ibudilast) to improve treatment for GBM patients. Based in part on these positive in vitro and animal model results, MediciNova is preparing to initiate a Phase 2 trial for the treatment of Grade IV glioblastoma with Dr Kerrie McDonald in Australia. Malignant primary brain tumors are the most frequent cause of cancer death in children and young adults, and account for more deaths than melanoma. According to the American Association of Neurological Surgeons, glioblastoma (GBM) is an aggressive, extremely lethal form of brain malignancy that develops from glial cells (astrocytes and oligodendrocytes), grows rapidly, and commonly spreads into nearby brain tissue. GBM is classified as Grade IV, the highest grade, in the World Health Organization (WHO) brain tumor grading system.
The American Brain Tumor Association reports that GBM represents 15% of all brain tumors and 55% of all gliomas, and has the highest number of cases of all malignant tumors, with an estimated 12,390 new cases predicted for 2017. Despite decades of advancements in neuroimaging, neurosurgery, chemotherapy, and radiation therapy, only modest improvements have been achieved and the prognosis has not improved for individuals diagnosed with GBM. Median survival is 14.6 months and only approximately 2% of GBM patients survive longer than 36 months. MN-166 (ibudilast) has been marketed in Japan and Korea since 1989 to treat post-stroke complications and bronchial asthma. MediciNova is developing MN-166 for progressive MS and other neurological conditions such as ALS and drug use disorders. MN-166 (ibudilast) is a first-in-class, orally bioavailable, small-molecule phosphodiesterase (PDE)-4 and -10 inhibitor and a macrophage migration inhibitory factor (MIF) inhibitor that suppresses pro-inflammatory cytokines and promotes neurotrophic factors. It attenuates activated glial cells, which play a major role in certain neurological conditions. Ibudilast's anti-neuroinflammatory and neuroprotective actions have been demonstrated in preclinical and clinical study results and provide the rationale for its therapeutic utility in neurodegenerative diseases (e.g., progressive MS and amyotrophic lateral sclerosis [ALS], also known as Lou Gehrig's disease), substance abuse/addiction and chronic neuropathic pain. MediciNova, Inc. is a publicly traded biopharmaceutical company founded upon acquiring and developing novel, small-molecule therapeutics for the treatment of diseases with unmet medical needs, with a primary commercial focus on the U.S. market.
MediciNova's current strategy is to focus on MN-166 (ibudilast) for neurological disorders such as progressive MS, ALS and substance dependence (e.g., methamphetamine dependence, opioid dependence) and MN-001 (tipelukast) for fibrotic diseases such as nonalcoholic steatohepatitis (NASH) and idiopathic pulmonary fibrosis (IPF). MediciNova's pipeline also includes MN-221 (bedoradrine) for the treatment of acute exacerbations of asthma and MN-029 (denibulin) for solid tumor cancers. MediciNova is engaged in strategic partnering and other potential funding discussions to support further development of its programs. For more information on MediciNova, Inc., please visit the company's website. Statements in this press release that are not historical in nature constitute forward-looking statements within the meaning of the safe harbor provisions of the Private Securities Litigation Reform Act of 1995. These forward-looking statements include, without limitation, statements regarding the future development and efficacy of MN-166, MN-221, MN-001, and MN-029. These forward-looking statements may be preceded by, followed by or otherwise include the words "believes," "expects," "anticipates," "intends," "estimates," "projects," "can," "could," "may," "will," "would," "considering," "planning" or similar expressions. These forward-looking statements involve a number of risks and uncertainties that may cause actual results or events to differ materially from those expressed or implied by such forward-looking statements.
Factors that may cause actual results or events to differ materially from those expressed or implied by these forward-looking statements include, but are not limited to, risks of obtaining future partner or grant funding for development of MN-166, MN-221, MN-001, and MN-029 and risks of raising sufficient capital when needed to fund MediciNova's operations and contribution to clinical development, risks and uncertainties inherent in clinical trials, including the potential cost, expected timing and risks associated with clinical trials designed to meet FDA guidance and the viability of further development considering these factors, product development and commercialization risks, the uncertainty of whether the results of clinical trials will be predictive of results in later stages of product development, the risk of delays or failure to obtain or maintain regulatory approval, risks associated with the reliance on third parties to sponsor and fund clinical trials, risks regarding intellectual property rights in product candidates and the ability to defend and enforce such intellectual property rights, the risk of failure of the third parties upon whom MediciNova relies to conduct its clinical trials and manufacture its product candidates to perform as expected, the risk of increased cost and delays due to delays in the commencement, enrollment, completion or analysis of clinical trials or significant issues regarding the adequacy of clinical trial designs or the execution of clinical trials, and the timing of expected filings with the regulatory authorities, MediciNova's collaborations with third parties, the availability of funds to complete product development plans and MediciNova's ability to obtain third party funding for programs and raise sufficient capital when needed, and the other risks and uncertainties described in MediciNova's filings with the Securities and Exchange Commission, including its annual report on Form 10-K for the year ended December 31, 
2016 and its subsequent periodic reports on Forms 10-Q and 8-K. Undue reliance should not be placed on these forward-looking statements, which speak only as of the date hereof. MediciNova disclaims any intent or obligation to revise or update these forward-looking statements.

News Article | September 15, 2016

Journey to the center of the cell: Nano-rods and worms wriggle best Abstract: When it comes to delivering drugs, nanoparticles shaped like rods and worms are the best bet for making the daunting journey to the centre of a cell, new Australian research suggests. A new study published in Nature Nanotechnology has answered a long-standing question that could lead to the design of better drug delivery vehicles: how nanoparticle shape affects the voyage through the cell. "We were able to show for the first time that nanoparticles shaped like rods and worms were more effective than spherical nanoparticles at traversing intracellular barriers, and this enabled them to get all the way into the nucleus of the cell," says lead author UNSW's Dr Elizabeth Hinde. The study was led by chemists, engineers, and medical researchers from UNSW in a collaboration between the Australian Research Council Centre of Excellence in Advanced Molecular Imaging and the Australian Research Council Centre of Excellence in Bio-Nano Science. The Centres are both headquartered at Monash University, with research nodes at UNSW in Sydney. The team applied a new microscopy method to drug delivery for the first time, which allowed them to track the movement of differently shaped nanoparticles through a single cultured cancer cell with very high temporal and spatial resolution. Using this method, the researchers were able to pinpoint where drugs were being released, and how they spread throughout the cell. They found that the cancer drug doxorubicin was most effective when it could breach the strong yet porous cellular barrier protecting the nucleus - the cell's control centre. Importantly, they discovered that a nanoparticle's shape influenced how well the drug breached the barrier.
Dr Hinde, an Associate Investigator on the Imaging CoE, says researchers could previously see the overall distribution of their nanoparticles throughout a cell, but didn't have the microscopy tools to understand how this localisation was set up - a key limitation in drug delivery research. "You need to know how things arrive at their final destination in order to target them there. Now we have a tool to track this incredible journey to the centre of the cell. It means other research groups can use this to assess their nanoparticles and drug delivery systems. "They'll be able to work out how to tailor their particles to reach the nucleus or other structures in the cell, and gauge where the cargo is being dropped off. This wasn't possible before." The shape of things to come: rod, worm or sphere? Polymeric nanoparticles will play a vital role in the future of medicine: these ultra-tiny particles can carry drugs to help attack and kill cancer cells, selectively deliver drugs just to where they are needed, and yield breakthroughs in disease diagnostics and imaging. UNSW engineers fabricated four types of nanoparticles: one shaped like a rod, one like a worm, and two that were spherical in shape. These were labelled with fluorescent tags, and incubated in cancer cells. By combining a novel fluorescence microscopy approach with some statistical analysis, the team was able to create a clear picture of how each particle passed through the cell. While the spherical particles got blocked by the nuclear envelope, the rod and worm-shaped particles were able to pass through. This provides a pathway for the development of particles that can selectively target and kill cancer cells, without hurting healthy ones. Dr Hinde explains: "Cancer cells have different internal architecture than healthy cells. 
If we can fine-tune the dimensions of these rod-shaped nanoparticles so they only pass through the cellular barriers in cancer cells and not healthy ones, we can reduce some of the side effects of chemotherapies." Opportunities for other research groups "The impact for the field is huge," says Scientia Professor Justin Gooding from UNSW and the ARC Centre of Excellence in Bio-Nano Science. "It gives us the ability to look inside the cell, see what the particles are doing, and design them to do exactly what we want them to do." "And this isn't just thanks to the microscope, but the information and data we can extract from the new analysis procedures we've developed. If other research groups can learn how to do this analysis, they can use the equipment already in their labs and get started tomorrow," says Professor Gooding. "People are going to see, suddenly, that they can get all sorts of new information about their particles." The researchers will soon be collaborating with Dr John McGhee from UNSW Art & Design, who combines scientific data, microscopy images, and computer generated animation to create virtual reality renderings of the inside of human cells and blood vessels. The artworks allow researchers to visualise and go on VR walking tours through the body, and could help speed up the drug development process. About the ARC Centre of Excellence in Advanced Molecular Imaging The $39 million ARC-funded Imaging CoE develops and uses innovative imaging technologies to visualise the molecular interactions that underpin the immune system. Featuring an internationally renowned team of lead scientists across five major Australian universities, with academic and commercial partners globally, the Centre uses a truly multi-scale and programmatic approach to imaging to deliver maximum impact.
The Imaging CoE is headquartered at Monash University with four collaborating organisations - La Trobe University, the University of Melbourne, University of New South Wales and the University of Queensland. About the ARC Centre of Excellence in Bio-Nano Science The ARC Centre of Excellence in Bio-Nano Science is a national innovator in bio-nano sciences and an incubator of the expertise and technological excellence required to develop next generation bio-responsive nanomaterials. Five Australian universities collaborate in the Centre, the Universities of Melbourne, Queensland, New South Wales and South Australia, and Monash University, where the Centre is headquartered.

News Article | February 15, 2017

Even the deepest, most remote parts of the ocean can't escape our foolishness. Scientists have found a large amount of banned chemicals in the fatty tissue of crustaceans that live in the Pacific Ocean's Mariana and Kermadec trenches. Though the locations are 10 kilometres (six miles) deep and far from land, researchers from Newcastle University in the U.K. identified traces of prohibited chemicals once used as fire retardants in the tiny amphipods. Using deep-sea landers to recover the samples, the scientists found chemicals including polychlorinated biphenyls (PCBs). PCBs were banned in the U.S. in 1979 and no longer imported into Australia from 1975, amid environmental and health fears. The report claims that around 1.3 million tonnes of PCB was created between the 1930s and the 1970s, with about 65 percent of that contained in landfills and electrical equipment. A further 35 percent may be stuck on the coast and in the ocean. "We still think of the deep ocean as being this remote and pristine realm, safe from human impact, but our research shows that, sadly, this could not be further from the truth," the report's lead author Alan Jamieson said in a statement. He found the shrimp-like creatures contained chemical levels comparable to some of our most degraded environments: "In fact, the amphipods we sampled contained levels of contamination similar to that found in Suruga Bay, one of the most polluted industrial zones of the northwest Pacific." Published in the journal Nature Ecology & Evolution, the report points out that our oceans often act as a "sink" for chemicals released by insecure landfills and industrial accidents. It's important to remember these are human-made chemicals that do not occur naturally, said Katherine Dafforn, a senior research associate at the University of New South Wales, who researches the effect of toxic chemicals on the marine environment. She was not involved in the report.
Dafforn described the study as "scary." "I would have thought of these trenches as far removed from any of the problems we're creating on land," she added. "I've always thought of them as very far removed from human activities." One reason such chemicals are the focus of marine research is that they don't break down quickly in any environment, persisting over decades while accumulating in living organisms in high concentrations. "This impacts on the hormonal endocrine system, which can affect development," Dafforn explained. This could result in problems with growth and the appearance of tumours on fish, for example, as well as potential damage to the reproductive system. How these chemicals are falling into the trenches is still unknown, although the researchers speculate they could be delivered in part by falling carrion, including dead fish and organic matter, as well as falling plastic debris. "These trenches are very deep — once something is in them, it's very hard for it to come back out," Dafforn pointed out. PCBs have a knack for finding their way into isolated environments, including the Arctic. In Dafforn's view, there are still many "black boxes" out there when it comes to the impact of man-made chemicals on the environment. Of particular concern to her are microplastics — tiny bits of plastic that occur thanks to the breakdown of plastic bags and bottles, as well as from cosmetic products like microbeads. In other words, we're far from done doing damage.

News Article | February 15, 2017

Industrial pollution has reached even the most remote corners of Earth: the deepest part of the sea. Scientists have discovered “extraordinary levels” of polychlorinated biphenyls, or PCBs, in the Mariana and Kermadec trenches, two of the deepest ocean chasms on the planet. “Trenches have been considered as pristine environments, but also (given their locations and topography) as likely sinks for contaminants that enter the marine environment,” Alan Jamieson, a marine ecologist at Newcastle University in England, wrote this week in the journal Nature Ecology & Evolution. Banned in the 1970s, PCBs were once widely used as dielectric and coolant fluids in transformers, capacitors and other electrical devices. About 1.3 million tons of the chemical was produced, and an estimated 35% of that is now residing in coastal sediments, according to the study. Known as persistent organic pollutants, these contaminants do not degrade naturally and can stick around in the environment for decades. The chemicals bond with bits of plastic and other kinds of organic debris, and are transported in air, soil and coastal waters. They've even been found in the Arctic, far from industrialized areas. While exploring life in the ocean’s hadal zone, a region 3.7 to 6.8 miles deep, Jamieson and colleagues measured contaminants in tiny shrimp-like scavengers called amphipods. These crustaceans feed at the bottom of ocean trenches, ingesting any contaminants found in the environment. The pollutants are then stored in their fatty tissue. The contaminants spread through the food chain when the amphipods are eaten by bigger animals, such as fish. Using deep-sea landing vehicles equipped with baited traps, the scientists collected samples of three species of amphipod from the Mariana Trench, in the western Pacific, and the Kermadec Trench, off New Zealand. Each of the 12 animals they collected was contaminated. 
“Regardless of depth, regardless of trench, regardless of species,” Jamieson said, pollutants “were present in all of them.” The study found elevated concentrations of contaminants, including flame retardants, in the upper portions of the trenches, about 4.5 miles deep. But amphipods with the most pollutants came from the Mariana Trench, possibly because the chasm is located closer to industrialized regions and beneath the Great Pacific Garbage Patch. Plastic debris and dead animals contaminated with industrial chemicals sink into the ocean, where they’re eaten by the amphipods. The PCBs and flame retardants found in the trenches can have “devastating effects on the hormonal, immune and reproductive systems,” wrote Katherine Dafforn, an ecotoxicologist at the University of New South Wales, in an article accompanying the study. In parts of the world considered relatively pollution-free, the concentration of PCBs is less than one nanogram per gram of material sampled. (There are 1 billion nanograms in a gram.) In polluted areas, that level can rise to several hundred nanograms per gram. PCB levels in the crustaceans from the Mariana Trench were up to 50 times greater than in crabs from the Liaohe River, one of the most polluted waterways in China, the study found. While not as common as PCB pollution, the level of flame retardants in the deep ocean was comparable to or higher than levels in coastal waters off New Zealand. The “disturbing” results show the deep ocean is much more connected to the surface than most people think, Dafforn wrote. Jamieson added that the deepest point in the ocean, Challenger Deep in the Mariana Trench, is only about seven miles deep — about half the length of Manhattan. That’s far from remote, he said. In future studies, Jamieson said he hopes to determine how these pollutants affect the physiology of deep-sea creatures and their surrounding ecosystem. 

News Article | October 26, 2016

Martin Tingley was coming undone. It was late autumn 2014, just over a year into his assistant-professor job at Pennsylvania State University in State College, and he was on an eight-hour drive home after visiting his wife in Boston. He was stressed, exhausted and close to tears. As the traffic zipped past in the dark hours of the early morning, the headlights gave him the surreal feeling that he was inside a video game. Usually, Tingley thought of himself as a “pretty stoic guy” — and on paper, his career was going well. He’d completed a master’s degree in statistics and a PhD in Earth science, both at Harvard University. With these, and four years of postdoctoral experience, he had landed a rare tenure-track faculty position. He thought he would soon be successfully combining statistics and climate science to produce the type of interdisciplinary research that funding agencies say they want. In fact, scientific life was proving tough. He found himself working 60–80 hours per week doing teaching and research. His start-up funding had run out, he had yet to secure a major grant and, according to a practice common in US academia, he would not be paid by his university for three summer months. His wife had not been able to move with him, so he was making tiring weekend commutes. It seemed that the pressures had reached unsustainable levels. Something had to give. Tingley is one of many young scientists who are deeply frustrated with life in research. In September, Nature put a post on Facebook asking scientists who were starting their first independent position to tell us about the challenges that they faced. What followed was a major outpouring of grief. Within a week, nearly 300 scientists from around the world had responded with a candid catalogue of concerns. “I see many colleagues divorcing, getting burnt out, moving out of science, and I am so tired now,” wrote one biomedical researcher from Belgium (see ‘Suffering in science’). 
Nature selected three young investigators who voiced the most common frustrations; here, we tell their stories. But are young scientists whining — or drowning? Our interviewees acknowledge that they are extremely fortunate to have an opportunity to direct their own creative, stimulating careers, and they are hardly the only professionals who are expected to work hard. It’s easy for each generation to imagine that things are more difficult for them than they were in the past. But some data and anecdotal evidence suggest that scientists do face more hurdles in starting research groups now than did many of their senior colleagues 20–30 years ago. Chief among those challenges is the unprecedented number of researchers competing for funding pools that have remained stagnant or shrunk in the past decade. “The number of people is at an all-time high, but the number of awards hasn’t changed,” says Jon Lorsch, director of the US National Institute of General Medical Sciences (NIGMS) in Bethesda, Maryland. “A lot of people with influence on the system recognize this is a serious problem and are trying to fix it.” Young scientists and senior scientists alike feel an acute pressure to publish and are weighed down by a growing bureaucratic burden, with little administrative support. They are largely judged on their record of publishing and of winning grants — but without clear targets, they find themselves endlessly churning out paper after paper. The crucial question is whether this is harming science and scientists. Bruce Alberts, a prominent biochemist at the University of California, San Francisco, and former president of the US National Academy of Sciences, says that it is. The current hyper-competitive atmosphere is stifling creativity and pushing scientists “to do mediocre science”, he says — work that is safe and uninteresting. “We’ve got to reward people who do something differently.” Our informal survey suggests that the situation is already making research an unwelcoming career. 
“Frankly, the job of being a principal investigator and running a lab just looks horrible,” wrote one neuroscientist from the United States. Tingley wouldn’t disagree. Tingley has always had broad interests. At university in Canada, he switched from art history to physics. For his graduate studies, he was drawn to the vibrant research environment at Harvard, in Cambridge, Massachusetts, where he built statistical methods that helped to make sense of data on past climate gathered from sources such as tree rings and ice cores. By the time he was searching for academic positions, he was already working 60-hour weeks, he says: he would be at work by 8 a.m., go home for dinner, and then pull out his laptop again at night. But by 2013, his research was hitting a high: he had published a statistical analysis in Nature [1] and, after applying for jobs worldwide, was offered a joint appointment in meteorology and statistics at Penn State. By this point, his wife, Gabrielle, ran the communications programme for Harvard’s Research Computing centre in Cambridge. Positions offered to her at Penn State fell far short of her qualifications, and she opted to stay where she was. They were facing the ‘two-body problem’ — a long-standing stress point for scientists. Like many first-year assistant professors, Tingley immediately felt pressure to publish in top journals, attract funding and students, and innovate in the classroom. He also knew that his roughly US$200,000 in start-up funding from the university — to cover his summer salary, computing access and more — wouldn’t last long, and he applied to the US National Science Foundation for grants. That process was “heartbreaking”, he says. In one instance, he put in a proposal with his collaborator, organic geochemist Jessica Tierney at the University of Arizona in Tucson, for work on proxies for past sea surface temperatures. 
On the first round of review, the application got two scores of “excellent” and two of “very good”, yet it still fell short of being funded. The two were encouraged to resubmit, which they did. On the next round, the proposal scored worse. “Part of it is on me, I was unsuccessful,” Tingley says — but the anecdote shows the frustration that young scientists face when trying to get a research programme off the ground. “The funding cycle is brutal.” In the meantime, the pair published the initial stages of the work [2] in an article that has been cited 40 times. The views of scientists who responded to Nature revealed a generational divide: many feel that today’s senior investigators experienced a more comfortable trajectory in science and now have a competitive advantage. The ‘baby boom’ scientists, who have longer track records and well-established labs, are in a stronger position to win funds. (In September, Nature asked on Twitter: “What are the challenges facing young scientists?” “Old scientists,” one respondent shot right back.) In December 2014, shortly after his low point in the car, Tingley and his wife took a month-long trip to Australia and Indonesia for some much-needed time together. The next month, Tingley returned to the winter chill at State College and walked across campus feeling as if his head was scraping against the low-hanging clouds. He knew that much of his time was about to be sucked up teaching two advanced courses, leaving little time for research, and he would be back to the tiring commute to see his wife at the weekends. If he didn’t get a grant soon, he would have no summer salary. “My wife and I knew this wasn’t a sustainable way for us to live our lives.” Tingley started googling around late at night, and in March, he spied the perfect job posting. Insurance Australia Group in Sydney was looking for someone with experience in meteorology, statistics and climate. 
He started there two months later, and his wife easily found a position in communications with the University of New South Wales. Now a senior research analyst, Tingley models and quantifies risks from bush fires, cyclones and other storms. The transcontinental move was not without its difficulties, of course — and as a young researcher moving to the private sector, he’s had to prove himself all over again. Tingley now advises others to recognize that there are various paths to a successful career. “It’s perfectly legitimate to use your training and skill set in the private sector.” He isn’t missing the stress and high expectations placed on young investigators’ shoulders, he says. On a sunny spring Saturday in September, he and his wife head out for a walk on their neighbourhood beach. “It turns out that weekends are fantastic,” he says. Sometimes, pressures come not from chasing funding or tenure, but from chasing an ideal of what makes a good scientist. Young researchers from all disciplines told Nature that they wrestle with the lack of clear expectations for success — and materials scientist Eddie López-Honorato is one. He grew up in Mexico City and studied chemistry there, at the National Autonomous University of Mexico, but for his PhD, he struck out for the University of Manchester, UK. He worked at night and at weekends to complete his experiments, he says, which became more difficult after his son was born. He found it stressful, but his time at Manchester gave him high working standards that he now tries to emulate. Next, he did a postdoctoral fellowship at the Institute for Transuranium Elements in Karlsruhe, Germany, where he worked on developing safer coatings for nuclear fuels used in reactors. At the end of his postdoc, he had the opportunity to return to the United Kingdom as a lecturer at the University of Sheffield, but he and his wife, Paola, yearned to go back to Mexico. They weighed up the pros and cons. 
López-Honorato knew that he would need to build up his professional reputation in Mexico and that the science infrastructure there was less developed than in Europe. But he thought that working in the United Kingdom would be harder for his family, because they faced constant changes in language and culture. The family chose Mexico. In March 2012, López-Honorato started at the Center for Research and Advanced Studies of the National Polytechnic Institute (CINVESTAV) in Ramos Arizpe. He felt an amazing sense of independence and potential on standing in front of his brand new empty lab space. “You know that you have to get some students and money fast, really fast, and that’s when the urge to work kicks in,” he says. Although the government paid his and his students’ salaries, he still needed to secure funds to support his research. He sent out a flurry of grant proposals for government funding, without success. López-Honorato spent 2012 travelling around Mexico and the United States to build collaborations. He cold e-mailed other scientists to explain his work. The grants started trickling in. By 2014, he had secured enough to cover most of his research expenses and had established a second arm to his lab’s work: developing adsorptive materials to remove arsenic from drinking water, a problem that affected nearly half of all wells in certain parts of Mexico [3]. Since starting at CINVESTAV, he has published 20 research papers and has built up a lab group of 15 people. Like many of those interviewed, he says that the work to sustain funding is as tough as winning the first grants. Even though his position is secure, he feels the pressure of maintaining his research projects and launching the careers of younger scientists. “It’s stressful when you don’t have money, and stressful when you do have money, because then you have to deliver. 
It’s my fault if anything goes wrong.” He points to a recent eight-month bureaucratic delay in purchasing a coating machine that is essential to his nuclear-fuel work; it put the project a year behind schedule, and he feels that he is to blame. Many scientists, like other professionals, say that there aren’t enough hours in the day. (“My cohort, we feel exhausted,” said one Generation X scientist, who asked to remain anonymous to protect his career.) In the past two months, López-Honorato says, he has averaged four hours of sleep per night. He and other early-career researchers are “in a stage where our kids and partners need us the most at home”, he says. His second son is now eight months old. He wrestles with whether he has valid reasons to complain, and knows the pressures are largely self-generated. “It’s a problem of saying, ‘That’s enough’,” he says. It’s an issue that many young investigators struggle with — when you’re the one setting the goals, when do you have enough money, students or publications? Philip Guo, a cognitive scientist at the University of California, San Diego, described in a 2014 blogpost how academics often feel as if they are on an accelerating treadmill. In his previous work as a software engineer at Google, Guo wrote, he had “tremendous clarity about what and how much I was expected to do”. Academics, however, have obligations to teach, advise, do research, write grants and support departments, universities and the academic community — and “none of these sources of work know of or care about one another”. Alberts highlights the young investigators who need two major grants, one to supply their salary and one for their research programme. “It’s horrible pressure on young people. How are they going to be excellent at anything? 
The incentives are all wrong.” This year, López-Honorato is trying to lower his own expectations, applying for only one industry grant — compared with the seven he applied for in 2012 — in the hope that he’ll get home in time to play with his boys. But that internal pressure is hardest to quell. “We want to be the best — that’s how we got to the job we have right now. It’s a personal pressure. But that’s even more difficult to get rid of.” Computing always attracted Felienne Hermans, who taught herself programming at age 10. She specialized in computer science at university and pursued a PhD at Delft University of Technology in the Netherlands. There, she applied methods of software engineering to spreadsheets, so that end users such as accountants or biologists would have better ways of maintaining and annotating their data [4]. The creative work won her top conference papers, which are key for advancement in this field. When a tenure-track position opened up in her research group of four professors, she asked whether she could apply. She beat internal and external candidates and started as an independent professor in March 2013, at the age of just 28. Two years into the position, Hermans was feeling overwhelmed. She was grappling with the responsibilities of managing her two graduate students and one postdoc, prepping for teaching courses, and what felt like endless ‘service’ requests to review papers for journals and colleagues. The spreadsheet work had in some ways run its course, and she wanted to pivot to a more stimulating research area. But the pressure to publish continuously and copiously dogged her. Her job is formally split between 40% teaching, 40% research and 20% academic service, but the message is that research should trump everything else. “Four papers are better than three. And five are better than four,” she says. Like Alberts, she says the idea that research output is now synonymous with publication quashes all creativity. 
“Papers are just one form of communicating ideas and experiments.” She yearns “for an afternoon of looking out the window and thinking, ‘What will I do next?’”. Another barrier has been constant throughout her career: being a woman in an overwhelmingly male-dominated field. In 2014, she attended the Code Generation hands-on programming conference in Cambridge, UK, and found herself one of only two women among roughly 100 attendees. She spent the three days speaking to colleagues about this sad statistic, rather than about her programming, as she would have preferred. “It drags you down and drains your energies,” she says. In the survey, Nature received roughly a dozen comments from young scientists who indicated that sexism, gender bias or lack of support for women held back their careers. Hermans eventually developed a fresh research focus through her Saturday volunteer work at a community centre, where she taught programming to inner-city kids. She and a colleague began thinking about how best to teach the children. Rather than just explaining how to make a robot move forward, say, they wanted to communicate how to maintain code quality through properly naming program features and avoiding ‘code smells’, or poorly designed program sections. The pivot wasn’t totally smooth — her first conference paper about a generic theory for code smells was rejected for not having enough supporting evidence, but now she is hitting her stride. Looking back, Hermans says that she probably should have ignored the pressure to publish, and ruminated more. “But I was new in the tenure track and super scared about not being able to pay my mortgage in two years.” Now, she keeps more careful track of her time. 
If a colleague knocks on her door for help with a student’s paper, she can turn them down: “I’ve already done my 20% to service.” She’s rearranged her week, cramming teaching, grant writing and service into Monday to Thursday so that she can spend Fridays with her lab group, which now comprises six people. There are more-organized moves to help young investigators — to win grants, for example. Alberts says that “there has to be a shift of resources to the younger people”. He points to the European Research Council grant programme that divides applicants into three career stages — Starter (2–7 years post-PhD), Consolidator (7–12 years post-PhD) and Advanced (more than 12 years post-PhD) — so that applicants from each career stage compete with their peers. In the same vein, this year the NIGMS piloted a grant called Maximizing Investigators’ Research Award, which separates early-stage investigators from established ones, and offers five years of guaranteed funding. That’s an innovation in the US funding system, says Lorsch, because it means no longer “comparing apples and oranges”. And Lorsch says that older investigators should be encouraged to move into alternative stages of their career — working in teaching, mentoring and science advocacy — that don’t require research funds. This could help younger researchers to break in. Other scientists vehemently oppose such ideas. And Alberts, like many senior scientists, doesn’t see the problem as solely based on age. “It’s not about fairness. It’s about how to get the best science for the dollar. We’ll get much better science by funding young or old people to do innovative things.” Hermans is acutely aware that the grumbles of young scientists can be brushed away. “If people are complaining about an injustice, it’s easy to say they are just moaning,” she says. “But these are not imaginary problems.” She feels it’s her duty to be vocal about the challenges facing young investigators. 
“Experienced researchers should be observing if a young scientist is failing and asking, ‘Are you overwhelmed? Why aren’t you inspired?’” Lorsch says that he knows first-hand that Generation X scientists are not whiners: “I do not hear complaining from the people who are trying to get their first grant or renew their first grant, the people trying to get a lab running,” he says. “It’s the really well-funded people who’ve lost one of their grants — that’s who call me and scream.”

News Article | October 26, 2016

Making a splash? Engineers have created metallic nanofish that are inspired by the swimming style of real fish, and could be used to carry drugs to specific sites of the body. The nanofish are 100 times smaller than grains of sand, and are constructed from gold and nickel segments linked by silver hinges. The two outer gold segments act as the head and tail fin, while the two inner nickel segments form the body. Each segment is around 800 nanometres long, a nanometre being one billionth of a metre. When an oscillating magnetic field is applied, the magnetic nickel parts move from side to side. This swings the head and the tail, creating an undulating motion that pushes the nanofish forward (see video). Speed and direction can be controlled by altering the strength and orientation of the magnetic field. The nanofish were developed by Jinxing Li at the University of California, San Diego, and his colleagues, who are now investigating potential medical applications. “We believe they could be useful for medicine delivery, non-invasive surgery, and single cell manipulation,” says Li. External magnets could be used to direct drug-loaded nanofish to particular areas of the body, he says. “This is an exciting new idea,” says Justin Gooding of the University of New South Wales, Australia. “A lot of work on drug delivery vehicles relies on the body’s transport systems to move the particles to the site of the pathology.” “However, active transport has recently begun to be explored and this work shows that active transport particles can be made smaller and faster,” he says. Several other research groups are also developing ‘nanoswimmers’ for drug transport. Most of these use helical propellers inspired by the corkscrew tails of bacteria, but experiments have shown that the propulsive mode of nanofish is more efficient. One major question that remains is how to remove the nanofish from the body after use. 
Li says the team is now working on developing biodegradable versions, so that the metal does not get stuck inside the body.

News Article | December 20, 2016

In 2016, the White House recognized the importance of AI at its Frontiers Conference. The concept of driverless cars became a reality, with Uber's self-driving fleet in Pittsburgh and Tesla's new models equipped with the hardware for full autonomy. Google DeepMind's AlphaGo program beat the world champion of Go, 10 years ahead of predictions. "Increasing use of machine learning and knowledge-based modeling methods" are major trends to watch in 2017, said Marie desJardins, associate dean and professor of computer science at the University of Maryland, Baltimore County. How will this play out? TechRepublic spoke to the experts to look at three overarching ways that AI will make an impact. "The interest from the general population in the societal impacts of AI will grow, especially where these impacts are already tangible," said Vince Conitzer, professor of computer science at Duke University. "Many of the general themes have been previously identified—technological unemployment, autonomous weapons, machine learning systems based on biased data, machine learning systems being used to identify and suppress dissent, AI systems making moral judgments—but specific new technological developments and their failures will further drive and inform the conversation." Conitzer sees the interest in AI reaching specific groups outside the industry as well. "Lawyers will grapple with how the law should deal with autonomous vehicles, economists will study AI-driven technological unemployment, sociologists will study the impact of ever more effective AI-based recommender systems and personal assistants, and so on," he said. Fabio Cardenas, CEO of Sundown AI, agrees. In a more direct sense, he said he sees AI impacting specific roles within organizations, "such as accounting, finance, supply chain, HR, or other fields where work was performed by specialists," he said. 
"This growing sphere will allow the AI to spread to several departments in multiple industries across the globe." The Terminator-esque scenario is a common trope of AI writing. But is there any truth to it? According to Cardenas, "AI going rogue" could be a reality of 2017. Cardenas envisions "AI created for nefarious purposes by a small group of cyber bandits to defraud institutions or individuals," he said. "This rogue AI will enable hacking into systems that were once thought un-hackable." Cardenas also sees this happening through corrupting AI that already exists. "If the training set of the AI is compromised, the hackers could introduce bias or exemptions in order to subvert the AI's predictive capabilities for their own gain," he said. One way this could happen, Cardenas said, is through "AI developed to make other AI smarter." AI can self-improve by checking for blind spots in training data, making adjustments, "or, if we are lucky, writing code to improve the other AI," he said. The result, he said, will help optimize AI. "We're still far from a super intelligent AI, but this trend will get us closer." Roman Yampolskiy, director of the Cybersecurity Lab at the University of Louisville, also sees AI "failures" as a trend of 2017. "The most interesting and important trend to watch in AI, and the one I am now closely tracking, is: AI failures will grow in frequency and severity proportionate to AI's capability," he said. In other words, as we make gains, we also increase the likelihood of "malevolent" AI. How do we prevent AI from going rogue? It's a question many AI researchers are concerned with. Across the board, experts worry about the ethical implications of AI. After all, it's easy to see ways that AI can make mistakes—from reinforcing biases to spouting racial slurs, to, in an extreme case, not preventing a fatal accident. 
"The traditional AI community within computer science will increasingly address societal and moral issues in their work," said Conitzer. "AI researchers are already interested in these topics, but we are only at the beginning of figuring out how to make concrete technical contributions along these lines." The issues, he said, will rely on "an ever growing number of computer scientists" to solve. Moshe Vardi, a computer science professor at Rice University, also said he sees this as a trend for 2017. "Ethical issues related to AI will continue to gather attention, around impact of automation on labor, lethal autonomous weapons, algorithmic fairness and transparency, and more," he said. And Toby Walsh, professor of AI at the University of New South Wales, agrees. We will see, he said, "an autonomous car accidentally killing an innocent pedestrian, cyclist or passenger of another vehicle." This will highlight the importance of these issues as we develop and regulate AI, he added. The value-alignment problem (how do we ensure that AI operates with the same values as humans?) "stops being seen as a problem for superintelligence, but one for algorithms and machine learning programs we make today," said Walsh.

News Article | November 23, 2016

It’s supposed to be the most fundamental constant in physics, but the speed of light may not always have been the same. This twist on a controversial idea could overturn our standard cosmological wisdom. In 1998, Joao Magueijo at Imperial College London proposed that the speed of light might vary, to solve what cosmologists call the horizon problem. This says that the universe reached a uniform temperature long before heat-carrying photons, which travel at the speed of light, had time to reach all corners of the universe. The standard way to explain this conundrum is an idea called inflation, which suggests that the universe went through a short period of rapid expansion early on – so the temperature evened out when the cosmos was smaller, then it suddenly grew. But we don’t know why inflation started, or stopped. So Magueijo has been looking for alternatives. Now, in a paper to be published on 28 November in Physical Review, he and Niayesh Afshordi at the Perimeter Institute in Canada have laid out a new version of the idea – and this one is testable. They suggest that in the early universe, light and gravity propagated at different speeds. If photons moved faster than gravity just after the big bang, that would have let them get far enough for the universe to reach an equilibrium temperature much more quickly, the team say. What really excites Magueijo about the idea is that it makes a specific prediction about the cosmic microwave background (CMB). This radiation, which fills the universe, was created shortly after the big bang and contains a “fossilised” imprint of the conditions of the universe. In Magueijo and Afshordi’s model, certain details about the CMB reflect the way the speed of light and the speed of gravity vary as the temperature of the universe changes. They found that there was an abrupt change at a certain point, when the ratio of the speeds of light and gravity rapidly went to infinity. 
This fixes a value called the spectral index, which describes the initial density ripples in the universe, at 0.96478 – a value that can be checked against future measurements. The latest figure, reported by the CMB-mapping Planck satellite in 2015, places the spectral index at about 0.968, which is tantalisingly close. If more data reveals a mismatch, the theory can be discarded. “That would be great – I won’t have to think about these theories again,” Magueijo says. “This whole class of theories in which the speed of light varies with respect to the speed of gravity will be ruled out.” But no measurement will rule out inflation entirely, because it doesn’t make specific predictions. “There is a huge space of possible inflationary theories, which makes testing the basic idea very difficult,” says Peter Coles at Cardiff University, UK. “It’s like nailing jelly to the wall.” That makes it all the more important to explore alternatives like varying light speeds, he adds. John Webb of the University of New South Wales in Sydney, Australia, has worked for many years on the idea that constants may vary, and is “very impressed” by Magueijo and Afshordi’s prediction. “A testable theory is a good theory,” he says. The implications could be profound. Physicists have long known there is a mismatch in the way the universe operates on its smallest scales and at its highest energies, and have sought a theory of quantum gravity to unite them. If there is a good fit between Magueijo’s theory and observations, it could bridge this gap, adding to our understanding of the universe’s first moments. “We have a model of the universe that embraces the idea there must be new physics at some point,” Magueijo says. “It’s complicated, obviously, but I think ultimately there will be a way of informing quantum gravity from this kind of cosmology.”
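For illustration, the consistency check the article describes amounts to a one-line comparison of the predicted and measured spectral index. The measurement uncertainty used below is an assumed figure for demonstration (the article quotes only the central value of about 0.968), and `within_uncertainty` is a hypothetical helper, not anything from the paper:

```python
# Illustrative check: does the model's fixed spectral index fall within
# the measured value's error bars? The sigma used here is an ASSUMED
# 1-sigma uncertainty for demonstration, not a figure from the article.
def within_uncertainty(predicted, measured, sigma, n_sigma=2):
    """Return True if |predicted - measured| <= n_sigma * sigma."""
    return abs(predicted - measured) <= n_sigma * sigma

predicted_ns = 0.96478   # value fixed by the varying-speed-of-light model
measured_ns = 0.968      # Planck 2015 figure quoted in the article
assumed_sigma = 0.006    # hypothetical measurement uncertainty

print(within_uncertainty(predicted_ns, measured_ns, assumed_sigma))  # True
```

Under this assumed uncertainty the prediction is consistent with the measurement; a tighter future measurement could still falsify it, which is the point the article makes.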

News Article | February 15, 2017

Wilmington Trust is proud to announce that Sharon L. Klein was elected as a Fellow by the American College of Trust and Estate Counsel (ACTEC). Klein is president of Wilmington Trust’s New York Metropolitan region. She leads a team of professionals who provide planning, trust, investment management, family governance and education, and family office services. ACTEC is a nonprofit, professional organization for trust and estate lawyers and law professors who are elected as Fellows by their peers based on significant contributions to the field. ACTEC has more than 2,600 members in the United States, Canada, and other countries. “We’re very happy for Sharon being elected as an ACTEC Fellow,” said Larry Gore, senior vice president and co-manager of the firm’s Wealth Advisory business. “Sharon has made numerous contributions to the field of trust and estate law, so we are pleased the ACTEC community has recognized her leadership and commitment to the industry.” Klein has over 25 years of experience in the wealth management field, and is nationally recognized in the industry as a speaker and author on trust and estate topics. She is regularly quoted in major publications including the New York Times, Wall Street Journal, New York Law Journal, and more. She received a Master of Laws from the Boalt Hall School of Law at the University of California, Berkeley, and received a Bachelor of Arts and a Bachelor of Laws from the University of New South Wales, Australia. Klein serves as chair of the Trusts and Estates Law Section Taxation Committee of the New York State Bar Association. She is a member of New York Bankers Association Trust & Investment Division Executive Committee, The Rockefeller University Committee on Trust and Estate Gift Plans, the Professional Advisory Council of the Anti-Defamation League, and the Estates, Gifts and Trusts Advisory Board for Bloomberg BNA. 
She joins Wilmington Trust colleagues Donna Barwick, Carol Kroch, Henry Gissel, Dick Nenno, and Tom Sweeney, who were previously elected as ACTEC Fellows. Klein is based in Wilmington Trust’s office at 350 Park Ave., 9th Floor in New York, and can be contacted at (212)415-0531. ABOUT WILMINGTON TRUST Wilmington Trust’s Wealth Advisory offers a wide array of personal trust, financial planning, fiduciary, asset management, private banking*, and family office services designed to help high-net-worth individuals and families grow, preserve, and transfer wealth. Wilmington Trust focuses on serving families with whom it can build long-term relationships, many of which span multiple generations. Wilmington Trust also provides Corporate and Institutional services for clients around the world. Wilmington Trust has clients in all 50 states and in more than 90 countries, with offices throughout the United States and internationally in London, Dublin, Amsterdam, Luxembourg, and Frankfurt. For more information, visit Wilmington Trust is a registered service mark. Wilmington Trust Corporation is a wholly owned subsidiary of M&T Bank Corporation. Wilmington Trust Company, operating in Delaware only, Wilmington Trust, N.A., M&T Bank and certain other affiliates, provide various fiduciary and non-fiduciary services, including trustee, custodial, agency, investment management and other services. International corporate and institutional services are offered through Wilmington Trust Corporation's international affiliates. Loans, credit cards, retail and business deposits, and other business and personal banking services and products are offered by M&T Bank, member FDIC. *Private Banking is the marketing name for an offering of M&T deposit and loan products and services.

News Article | October 28, 2016

When we think of AI, we often think of the tech that it powers—like Google Maps, Alexa, and driverless cars. But AI has implications far beyond the tools we use for convenience at home, or efficiency in the office: It can make predictions about human decision-making. A recent study, published in PeerJ Computer Science, shows how an AI algorithm developed by University College London, the University of Sheffield, and the University of Pennsylvania was remarkably accurate in predicting judicial decisions. The AI was tested on judicial cases from the European Court of Human Rights—and proved 79% accurate in predicting the outcomes. The algorithm, touted as the first of its kind, analyzes case text using machine learning. While the study's authors don't think the algorithm can replace human judges or lawyers, it could be "useful for rapidly identifying patterns in cases that lead to certain outcomes," Dr. Nikolaos Aletras, who led the study at University College London, wrote in a release. "It could also be a valuable tool for highlighting which cases are most likely to be violations of the European Convention on Human Rights." The research showed that court judgments "are highly correlated to non-legal facts rather than directly legal arguments, suggesting that judges of the Court are, in the jargon of legal theory, 'realists' rather than 'formalists,'" according to the release. According to the researchers, the language and topics of the cases were the most important predictors for determining the judgment. "The 'circumstances' section of the text includes information about the factual background to the case. By combining the information extracted from the abstract 'topics' that the cases cover and 'circumstances' across data for all three articles, an accuracy of 79% was achieved," the press release stated. The study, however, just looks at the official, text-based court statements—not the arguments in court.
Toby Walsh, AI professor at the University of New South Wales, said he is "unimpressed." The outcomes, he said, are going to be predicted based on the summary of the judgement. Furthermore, even if the judgment were ignored, "the summary is going to be inherently biased towards the decision, focusing on evidence and law that supports the decision." SEE: New research shows that Swarm AI makes more ethical decisions than individuals (TechRepublic) But beyond that, Walsh worries about the ethical issues around the work. "Only humans and not machines should be given the task to judge on humans," he said. "Work that suggests we might one day replace judges with machines is profoundly misguided. We have seen other instances of this already. The COMPAS program in Florida was biased against black people, predicting that they would be more likely to reoffend than they actually do, potentially leading judges to give less probation and stricter bail terms to black people." Roman Yampolskiy, head of the cybersecurity lab at the University of Louisville, was recently in South Korea, advising their Supreme Court on using AI. "My main message was to always have a human judge as the final deciding power, regardless of how well AI appears to perform in test cases," he said. "As the performance of AI systems improves beyond the reported 79% of human judges' level, it will be tempting to save costs by completely automating all components of the judiciary system, but it is a mistake to do so." "If designed properly and supplied with clean training data, AI may improve on human judges in regards to certain types of bias, such as racism or sexism," said Yampolskiy. "However, AI will always be worse with regards to 'human values bias,' which gives preferential treatment to human life and human preferences over everything else, a bias which I think is highly desirable from a system with a power to impact or even take away human lives."
To be fair, according to Vasileios Lampos, one of the study's authors, the purpose of this study is not to replace judges. "An improved version of the tool proposed in our paper may be used to prioritize the administrative processing of cases with a clear 'Violation' or 'Non-Violation' prior, in order to reduce more timely the long queue of cases that have not been tried," he said. Also, he pointed out that this is "just a proof-of-concept paper." More data is required, he said, before "this proof-of-concept tool can be translated to anything of an actual substance and usability. At the same time, law experts need to establish the right platform in order to make such technological advancements usable," said Lampos. Even so, "we need to be very, very careful in developing computer programs in areas like this," said Walsh. "And even more careful in interpreting their results and deciding how society uses them."
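As a rough sketch of how text-based outcome prediction of this kind works, here is a toy nearest-centroid classifier over word counts. This is a deliberate simplification, not the study's actual pipeline (which used n-gram and topic features with a support vector machine), and the training snippets below are invented examples:

```python
from collections import Counter

def tokenize(text):
    return text.lower().split()

def train_centroids(labeled_docs):
    """Average word counts per label -- a toy stand-in for the study's
    SVM over n-gram/topic features (hypothetical simplification)."""
    centroids = {}
    for label, docs in labeled_docs.items():
        total = Counter()
        for doc in docs:
            total.update(tokenize(doc))
        centroids[label] = {w: c / len(docs) for w, c in total.items()}
    return centroids

def predict(centroids, text):
    """Pick the label whose centroid overlaps most with the text."""
    words = Counter(tokenize(text))
    def overlap(label):
        centroid = centroids[label]
        return sum(words[w] * centroid.get(w, 0.0) for w in words)
    return max(centroids, key=overlap)

# Invented mini training set (not real case text)
train = {
    "violation": ["applicant detained without review hearing",
                  "detention continued without judicial review"],
    "no-violation": ["complaint dismissed as manifestly ill-founded",
                     "domestic remedies provided adequate redress"],
}
model = train_centroids(train)
print(predict(model, "prolonged detention with no review"))  # "violation"
```

The real system's reported 79% accuracy came from far richer features over thousands of judgment texts; the sketch only conveys the overall shape of the approach.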

Li G.,University of New South Wales | Mordukhovich B.S.,Wayne State University
SIAM Journal on Optimization | Year: 2012

This paper is mainly devoted to the study and applications of Hölder metric subregularity (or metric q-subregularity of order q ∈ (0, 1]) for general set-valued mappings between infinite-dimensional spaces. Employing advanced techniques of variational analysis and generalized differentiation, we derive neighborhood and point-based sufficient conditions as well as necessary conditions for q-metric subregularity with evaluating the exact subregularity bound, which are new even for the conventional (first-order) metric subregularity in both finite and infinite dimensions. In this way we also obtain new fractional error bound results for composite polynomial systems with explicit calculating fractional exponents. Finally, metric q-subregularity is applied to conduct a quantitative convergence analysis of the classical proximal point method (PPM) for finding zeros of maximal monotone operators on Hilbert spaces. © 2012 Society for Industrial and Applied Mathematics.
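For readers unfamiliar with the proximal point method mentioned in the abstract, a minimal one-dimensional sketch may help. Here the operator is T = ∂|·|, whose resolvent is the soft-thresholding operator and whose unique zero is 0; this generic textbook example is not the paper's construction:

```python
def prox_abs(x, lam):
    """Resolvent (I + lam * d|.|)^(-1) of the subdifferential of |x|:
    the soft-thresholding operator."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

def proximal_point(x0, lam=0.5, max_iter=100, tol=1e-10):
    """Classical PPM iteration x_{k+1} = J_lam(x_k) for T = d|.|,
    stopping when successive iterates agree to within tol."""
    x = x0
    for _ in range(max_iter):
        x_new = prox_abs(x, lam)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

print(proximal_point(3.7))  # converges to 0.0, the unique zero of d|.|
```

The paper's contribution is quantitative: metric q-subregularity of T at the solution yields rate estimates for exactly this kind of iteration on Hilbert spaces.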

Lenroot R.K.,University of New South Wales | Giedd J.N.,National Institute of Mental Health
Journal of Child Psychology and Psychiatry and Allied Disciplines | Year: 2011

Biological development is driven by a complex dance between nurture and nature, determined not only by the specific features of the interacting genetic and environmental influences but also by the timing of their rendezvous. The initiation of large-scale longitudinal studies, ever-expanding knowledge of genetics, and increasing availability of neuroimaging data to provide endophenotypic bridges between molecules and behavior are beginning to provide some insight into interactions of developmental stage, genes, and the environment, although daunting challenges remain. Prominent amongst these challenges are difficulties in identifying and quantifying relevant environmental factors, discerning the relative contributions to multiply determined outcomes, and the likelihood that brain development is a non-linear dynamic process in which small initial differences may yield large later effects. Age-sensitive mechanisms include developmental changes in gene expression, epigenetic modifications, synaptic arborization/pruning, and maturational improvements in our capacity to seek out environments of our choosing. Greater understanding of how genetic and environmental factors interact differently across ages is an important step toward elucidating the mechanisms by which phenotypes are created - and how they may differ in health and disease. This knowledge may also provide clues to guide the type and timing of interventions to maximize outcomes. © 2011 The Authors. Journal of Child Psychology and Psychiatry © 2011 Association for Child and Adolescent Mental Health. Published by Blackwell Publishing.

Speakman J.R.,CAS Institute of Genetics and Developmental Biology | Speakman J.R.,University of Aberdeen | Garratt M.,University of New South Wales
BioEssays | Year: 2014

The idea that oxidative stress may underpin life history trade-offs has become extremely popular. However, experimental support for the concept has proved equivocal. It has recently been suggested that this might be because of flaws in the design of existing studies. Here, we explore the background to the oxidative stress hypothesis and highlight some of the complexities in testing it. We conclude that the approach recently suggested to be least useful in this context (comparing reproducing to non-reproducing animals) may in fact be the most powerful. Moreover, suggested alternative approaches of limiting food supply or manipulating litter sizes have many complexities and problems. We suggest some useful alternative approaches that have not been previously advocated, particularly the study of individuals reproducing at greater parity later in life. Finally, the measures of oxidative stress and tissues that are analysed influence the experimental outcome. This suggests our conceptual model of the trade-off is currently too simplistic, and that studies based on single or limited numbers of assays, or restricted to single tissues, whether they support or refute the theory, should be interpreted with great caution. © 2014 WILEY Periodicals, Inc.

Sun B.,University of Technology, Sydney | Huang X.,University of Technology, Sydney | Chen S.,University of Technology, Sydney | Munroe P.,University of New South Wales | Wang G.,University of Technology, Sydney
Nano Letters | Year: 2014

The electrochemical performance of lithium-oxygen (Li-O2) batteries awaits dramatic improvement in the design of porous cathode electrodes with sufficient spaces to accommodate the discharge products and discovery of effective cathode catalysts to promote both oxygen reduction reactions and oxygen evolution reactions. Herein, we report the synthesis of porous graphene with different pore size architectures as cathode catalysts for Li-O2 batteries. Porous graphene materials exhibited significantly higher discharge capacities than that of nonporous graphene. Furthermore, porous graphene with pore diameter around 250 nm showed the highest discharge capacity among the porous graphene with the small pores (about 60 nm) and large pores (about 400 nm). Moreover, we discovered that addition of ruthenium (Ru) nanocrystals to porous graphene promotes the oxygen evolution reaction. The Ru nanocrystal-decorated porous graphene exhibited an excellent catalytic activity as cathodes in Li-O2 batteries with a high reversible capacity of 17,700 mA h g⁻¹, a low charge/discharge overpotential (about 0.355 V), and a long cycle life up to 200 cycles (with the capacity limited to 1000 mA h g⁻¹). The novel porous graphene architecture inspires the development of high-performance Li-O2 batteries. © 2014 American Chemical Society.

Halliday G.M.,University of New South Wales | Leverenz J.B.,Cleveland Clinic | Schneider J.S.,Thomas Jefferson University | Adler C.H.,Mayo Medical School
Movement Disorders | Year: 2014

The recent formalization of clinical criteria for Parkinson's disease with dementia (PDD) codifies many studies on this topic, including those assessing biological correlates. These studies show that the emergence of PDD occurs on the background of severe dopamine deficits, with the main pathological drivers of cognitive decline being a synergistic effect between alpha-synuclein and Alzheimer's disease pathology. The presence of these pathologies correlates with a marked loss of limbic and cortically projecting dopamine, noradrenaline, serotonin, and acetylcholine neurons, although the exact timing of these relationships remains to be determined. Genetic factors, such as triplications in the α-synuclein gene, lead to a clear increased risk of PDD, whereas others, such as parkin mutations, are associated with a reduced risk of PDD. The very recent formalization of clinical criteria for PD with mild cognitive impairment (PD-MCI) allows only speculation on its biological and genetic bases. Critical assessment of animal models shows that chronic low-dose MPTP treatment in primates recapitulates PD-MCI over time, enhancing the current biological concept of PD-MCI as having enhanced dopamine deficiency in frontostriatal pathways as well as involvement of other neurotransmitter systems. Data from other animal models support multiple transmitter involvement in cognitive impairment in PD. Whereas dopamine dysfunction has been highlighted because of its obvious role in PD, the role of the other neurotransmitter systems, neurodegenerative pathologies, and genetic factors in PD-MCI remains to be fully elucidated. © 2014 International Parkinson and Movement Disorder Society.

Moon K.,University of Canberra | Blackman D.,University of New South Wales
Conservation Biology | Year: 2014

Natural scientists are increasingly interested in social research because they recognize that conservation problems are commonly social problems. Interpreting social research, however, requires at least a basic understanding of the philosophical principles and theoretical assumptions of the discipline, which are embedded in the design of social research. Natural scientists who engage in social science but are unfamiliar with these principles and assumptions can misinterpret their results. We developed a guide to assist natural scientists in understanding the philosophical basis of social science to support the meaningful interpretation of social research outcomes. The 3 fundamental elements of research are ontology, what exists in the human world that researchers can acquire knowledge about; epistemology, how knowledge is created; and philosophical perspective, the philosophical orientation of the researcher that guides her or his action. Many elements of the guide also apply to the natural sciences. Natural scientists can use the guide to assist them in interpreting social science research to determine how the ontological position of the researcher can influence the nature of the research; how the epistemological position can be used to support the legitimacy of different types of knowledge; and how philosophical perspective can shape the researcher's choice of methods and affect interpretation, communication, and application of results. The use of this guide can also support and promote the effective integration of the natural and social sciences to generate more insightful and relevant conservation research outcomes. © 2014 Society for Conservation Biology.

Tulbure M.G.,University of New South Wales | Broich M.,University of Technology, Sydney
ISPRS Journal of Photogrammetry and Remote Sensing | Year: 2013

Detailed information on the spatiotemporal dynamic in surface water bodies is important for quantifying the effects of a drying climate, increased water abstraction and rapid urbanization on wetlands. The Swan Coastal Plain (SCP) with over 1500 wetlands is a global biodiversity hotspot located in the southwest of Western Australia, where more than 70% of the wetlands have been lost since European settlement. SCP is located in an area affected by recent climate change that also experiences rapid urban development and ground water abstraction. Landsat TM and ETM+ imagery from 1999 to 2011 has been used to automatically derive a spatially and temporally explicit time-series of surface water body extent on the SCP. A mapping method based on the Landsat data and a decision tree classification algorithm is described. Two generic classifiers were derived for the Landsat 5 and Landsat 7 data. Several landscape metrics were computed to summarize the intra and interannual patterns of surface water dynamic. Top of the atmosphere (TOA) reflectance of band 5 followed by TOA reflectance of bands 4 and 3 were the explanatory variables most important for mapping surface water bodies. Accuracy assessment yielded an overall classification accuracy of 96%, with 89% producer's accuracy and 93% user's accuracy of surface water bodies. The number, mean size, and total area of water bodies showed high seasonal variability with highest numbers in winter and lowest numbers in summer. The number of water bodies in winter increased until 2005 after which a decline can be noted. The lowest numbers occurred in 2010 which coincided with one of the years with the lowest rainfall in the area. Understanding the spatiotemporal dynamic of surface water bodies on the SCP constitutes the basis for understanding the effect of rainfall, water abstraction and urban development on water bodies in a spatially explicit way. © 2013 International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS).
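To illustrate the shape of such a classifier, here is a toy decision tree over TOA reflectance values, reflecting the abstract's finding that band 5 (SWIR) was the most important variable, followed by bands 4 and 3. The thresholds below are hypothetical stand-ins chosen for illustration, not the trained values from the paper:

```python
def classify_pixel(b3, b4, b5):
    """Toy decision tree for water mapping from Landsat TOA reflectance.
    Split order mirrors the reported variable importance (band 5, then
    bands 4 and 3); the thresholds are HYPOTHETICAL, not trained values."""
    if b5 < 0.05:            # water absorbs strongly in SWIR (band 5)
        if b4 < 0.10:        # low NIR reflectance reinforces a water call
            return "water"
        return "water" if b3 > b4 else "non-water"
    return "non-water"

# Example pixels as (band 3, band 4, band 5) TOA reflectance
print(classify_pixel(0.04, 0.03, 0.02))  # "water"
print(classify_pixel(0.12, 0.30, 0.25))  # "non-water"
```

The actual study trained separate generic trees for Landsat 5 and Landsat 7 on reference data, which is how it reached the reported 96% overall accuracy; the sketch only shows the decision structure.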

Taschetto A.S.,University of New South Wales | Ambrizzi T.,University of Sao Paulo
Climate Dynamics | Year: 2012

In this study we examine the impact of Indian Ocean sea surface temperature (SST) variability on South American circulation using observations and a suite of numerical experiments forced by a combination of Indian and Pacific SST anomalies. Previous studies have shown that the Indian Ocean Dipole (IOD) mode can affect climate over remote regions across the globe, including over South America. Here we show that such a link exists not only with the IOD, but also with the Indian Ocean basin-wide warming (IOBW). The IOBW, a response to El Niño events, tends to reinforce the South American anomalous circulation in March-to-May associated with the warm events in the Pacific. This leads to increased rainfall in the La Plata basin and decreased rainfall over the northern regions of the continent. In addition, the IOBW is suggested to be an important factor for modulating the persistence of dry conditions over northeastern South America during austral autumn. The link between the IOBW and South American climate occurs via alterations of the Walker circulation pattern and through a mid-latitude wave-train teleconnection. © 2011 Springer-Verlag.

Freeberg T.M.,University of Tennessee at Knoxville | Dunbar R.I.M.,University of Oxford | Ord T.J.,University of New South Wales
Philosophical Transactions of the Royal Society B: Biological Sciences | Year: 2012

The 'social complexity hypothesis' for communication posits that groups with complex social systems require more complex communicative systems to regulate interactions and relations among group members. Complex social systems, compared with simple social systems, are those in which individuals frequently interact in many different contexts with many different individuals, and often repeatedly interact with many of the same individuals in networks over time. Complex communicative systems, compared with simple communicative systems, are those that contain a large number of structurally and functionally distinct elements or possess a high amount of bits of information. Here, we describe some of the historical arguments that led to the social complexity hypothesis, and review evidence in support of the hypothesis. We discuss social complexity as a driver of communication and possible causal factor in human language origins. Finally, we discuss some of the key current limitations to the social complexity hypothesis-the lack of tests against alternative hypotheses for communicative complexity and evidence corroborating the hypothesis from modalities other than the vocal signalling channel. © 2012 The Royal Society.

Agency: Cordis | Branch: FP7 | Program: CP-FP | Phase: HEALTH-2009-2.4.4-1 | Award Amount: 9.25M | Year: 2010

Leukodystrophies (LDs) are rare inherited neurodegenerative diseases of the white matter and its main component, myelin, that predominantly affect children. Severity of the disease is related to the axonal dysfunction due to myelin deficiency or destruction. Despite the achievement of remarkable advances made in the past decade, there is no current curative therapy. The development of therapeutic approaches for myelin repair and neuroprotection constitutes the main objective of the LeukoTreat project. Indeed LDs constitute prototypic pathologies to tackle myelin formation/destruction issues as well as glial cells dysfunctions in neurodegeneration. The global aim is to promote the development of therapeutic strategies for the largest number of LD affected patients and further applications to more common white matter disorders and finally neurodegenerative diseases. For this purpose, the project will combine the expertise of (i) recognized European research teams in the field of White Matter diseases (COST Myelinet), (ii) high-technology SMEs, (iii) experts in medical ethics and (iv) LD patients and families associations. To develop efficient therapies, the LeukoTreat project is based on 5 complementary approaches consisting in: (i) collecting information on the epidemiology, the natural history, the genotype/phenotype correlation of LDs for at least 500 patients; (ii) validating/identifying biomarkers for therapeutic decisions/follow up to isolate new therapeutic targets; (iii) developing pharmacological strategies with the ultimate objective to launch at least 4 pharmacological clinical trials during 5 years following the project; (iv) developing innovative gene and cell therapies with the ultimate objective to launch at least 3 clinical trials during the next 5 years; (v) tackling ethical impacts of the proposed therapeutic challenges by integrating the participation of patients driven by a well-experienced research team strongly skilled in ethics.

News Article | February 28, 2017

REDWOOD CITY, Calif.--(BUSINESS WIRE)--Shutterfly, Inc. (NASDAQ:SFLY), the leading manufacturer and online retailer of high-quality personalized photo products and services, today announced it has appointed Michele Anderson as senior vice president, retail. In her role at Shutterfly, Anderson will guide the development and execution of Shutterfly’s consumer growth strategy across its many product categories, both in its core business and in new categories over time. “I am thrilled to welcome Michele Anderson to the Shutterfly team,” said Christopher North, president and chief executive officer of Shutterfly, Inc. “Michele’s experience delivering growth strategies for leading technology and media companies as well as her track record of developing strong teams makes her an ideal candidate for this role.” Anderson joins Shutterfly, Inc. from Activate Inc., a specialist advisory firm that develops and implements growth strategies and builds new digital-first businesses, where she served as chief operating officer and managing director. Prior to Activate Inc., she held the role of general manager North America at Australian Vintage Ltd., founded Wine Angel, Inc., and held senior roles at iVillage Inc. and Booz Allen & Hamilton. She is a board member of Find My Past Ltd. and was named “No. 11 of the 50 Women In Tech Dominating Silicon Valley.” Anderson has both a Bachelor of Commerce and Bachelor of Law from the University of New South Wales and an MBA from Wharton.
In connection with her appointment, the Compensation Committee of Shutterfly’s Board of Directors granted the following equity awards to Anderson: options to purchase 144,483 shares of Shutterfly’s common stock with a per share exercise price of $45.68 and vesting over four years, with 25% of the underlying shares vesting on the first anniversary of the grant date and 1/36th of the remaining shares vesting in equal monthly installments thereafter; and restricted stock unit awards representing the right to receive up to 48,161 shares of Shutterfly’s common stock, vesting over four years, with 25% of the underlying shares vesting on each of the first, second, third and fourth anniversaries of the vesting commencement date. Of the option shares, 74,339 were granted in accordance with NASDAQ Listing Rule 5635(c)(4). Shutterfly, Inc. is the leading manufacturer and online retailer of high-quality personalized products and services offered through lifestyle brands. Founded in 1999, the Shutterfly, Inc. brands include Shutterfly®, where your photos come to life in photo books, cards and gifts; Tiny Prints®, premium cards and stationery for all life’s occasions; Wedding Paper Divas®, wedding invitations and stationery for every step of the planning process; MyPublisher®, one of the pioneers in the photo book industry and creator of easy-to-use photo book-making software; BorrowLenses®, the premier online marketplace for photographic and video equipment rentals; and GrooveBook™, an iPhone and Android app and subscription service that prints up to 100 mobile phone photos in a GrooveBook and mails it to customers every month. For more information about Shutterfly, Inc. (SFLY), visit
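The option vesting schedule described above (a 25% cliff at the first anniversary, then 1/36 of the remainder vesting monthly over the following three years) reduces to a short calculation. This sketch leaves fractional shares unrounded for simplicity; actual grants typically round per the plan's terms:

```python
def options_vested(months_elapsed, total_shares):
    """Shares vested under a 4-year schedule: 25% cliff at month 12,
    then 1/36 of the remaining 75% each month thereafter.
    Fractional shares are left unrounded for simplicity."""
    if months_elapsed < 12:
        return 0.0                      # nothing vests before the cliff
    cliff = 0.25 * total_shares
    monthly = (0.75 * total_shares) / 36
    months_after_cliff = min(months_elapsed - 12, 36)
    return cliff + monthly * months_after_cliff

total = 144483  # option shares granted
print(options_vested(12, total))  # 25% cliff: 36120.75
print(options_vested(48, total))  # fully vested: 144483.0
```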

In Australia, a bold and potentially damaging experiment is playing out to see what happens when a former venture capitalist with no scientific experience takes over a top science research agency. After facing an intense domestic and international backlash, the Australian government is scaling back a plan to make deep cuts to the Commonwealth Scientific and Industrial Research Organization (CSIRO), the country’s premier science research organization, and instead save a sliver of its climate research capabilities. Those cuts, critics have argued, would have decimated its world-class climate science research units. SEE ALSO: Nearly 3,000 climate scientists condemn Australia's dramatic research cuts However, the new plan, which involves setting up a new national climate center in Hobart, Tasmania, is not earning much praise either. Larry Marshall, the CSIRO director who has previously worked as the managing director of Southern Cross Ventures, a Silicon Valley venture capital firm, is still planning to get rid of about 75 climate science-related positions as part of a restructuring aimed at turning the CSIRO into an agency geared toward conducting research that will have the potential to bring in outside money from the private sector. In other words, he is running the science organization more like a venture capital firm, rather than an organization that funds science that provides a public good, such as climate modeling that enables individuals to make decisions on where to buy a house based on sea level rise projections. Marshall, who took over CSIRO last year, has deemphasized science research for the public good, instead focusing on research that could have near-term commercial applications.
The CSIRO’s climate functions are the equivalent of those of the National Oceanic and Atmospheric Administration (NOAA) in the U.S., which monitors, researches and communicates how the Earth’s climate is changing, from the depths of the seas to the upper atmosphere and into space. In February, word came that Marshall was pushing a plan to cut as many as 350 CSIRO jobs, including 110 out of the 140 staff members in its oceans and atmosphere division, in order to bring in a workforce that would spin off profitable ventures. This plan sparked Parliamentary hearings that continued this week, as well as worldwide condemnation from climate scientists, who said it would damage international scientific research that is vital for understanding the rapidly warming world. Marshall and other CSIRO officials argued that the changes were necessary to position Australia for a new wave of innovation-driven economic growth. “Indeed, just like a startup, our nation needs to re-invent itself (pivot) in order to navigate a new and uncertain future,” Marshall stated in a memo to CSIRO staff on Feb. 4. “CSIRO pioneered climate research … But we cannot rest on our laurels, that is the path to mediocrity,” he wrote. The new plan, announced on April 25, would set up a climate research center in Hobart with 40 staff members. "Our Strategy 2020 is focussed on collaboration, global connection, excellent science and innovation – all four of these pillars are at work in this Centre," Marshall stated in an April 26 press release. The center would have guaranteed funding for the next decade. However, even under the new plan, about 75 climate scientists and climate-related support staff would still lose their jobs, with notices expected to go out during the first week of May.
According to two CSIRO staff members, one of whom requested anonymity due to a lack of authorization to speak to the press, morale at the organization is low, and the months of uncertainty and unflattering media reports are likely to contribute to a brain drain of climate researchers to institutions abroad.

One CSIRO employee told Mashable in an email that the new climate center is “nothing different to what we already have but smaller.” Currently there are two climate programs with more than 130 people, which will be replaced by one center with far fewer people, the employee stated. The ability to bring in outside funding for CSIRO research will still be a criterion in judging an employee's value, this person said, noting this is a change from before, when scientists were judged by the quality of their work and the number of scientific studies they contributed to. “My priority is not to do good quality science — it is to do 'sexy science' that politicians can write media sound bytes around so they will then keep funding me,” the CSIRO employee stated. “The new staff training programs for staff are about teaching us to set up spin off companies. This is CSIRO's metric now.”

John Church, a CSIRO senior fellow in oceanography, told Mashable via email that the new climate center will focus specifically on climate modeling, with some other climate-related activities being retained elsewhere within CSIRO, though details have not been revealed. In addition to climate modeling, CSIRO currently also observes the climate, contributing to monitoring networks in Antarctica and the Pacific Ocean, for example. It's not clear if any of these observation networks, which are key to understanding the role the Southern Ocean plays in setting the planet's thermostat, will be shut down or transferred to another agency or institution.
For example, according to Paul Durack, a researcher at Lawrence Livermore National Laboratory in California and a former CSIRO scientist, no mention has been made of a ship-based observing network that CSIRO helps operate, known as GO-SHIP. "The long story short of the new announcement is that they are continuing to progress with a significant cutback to the climate science capacity of CSIRO (and Australia)," Durack told Mashable. He called the new climate center a "rebranding" effort meant to fend off political pressure to reduce the climate program cuts.

The climate center announcement is not staving off the likelihood that some top climate researchers will leave Australia. “Yes, morale is low. Some of the retained staff may leave (I know of some applying for jobs), but people like living and working in Hobart,” Church said.

According to Kevin Trenberth, a senior scientist at the National Center for Atmospheric Research (NCAR) in Boulder, Colorado, the focus of the new center could be helpful since it “effectively prioritizes some of the topics to be dealt with.” But the New Zealand native told Mashable that 40 employees “is extremely small: not enough.” Trenberth said it's unclear exactly what the 40-employee figure actually means. “If they are all real scientists and in addition they have software engineers, and support scientists so the total staff is more like 100, plus administrative support, then this might be a positive development,” he said in an email. “I am interested to know these details though and what drops on the floor.”

Sarah Perkins-Kirkpatrick, who previously worked at CSIRO and is now at the University of New South Wales, said the biggest loss from any scientific brain drain and related layoffs will come in the form of gaps in Australians' knowledge of how to adapt to and mitigate global climate change.
“While a decade of promised funding is encouraging, it is still certain that the quality climate research undertaken at CSIRO will be seriously compromised,” she said in a statement.  “This goes deeper than people losing their jobs – the cutting-edge climate projection tools that underpin Australian adaptation and mitigation to climate change will almost certainly suffer, meaning that all Australians will suffer too.”

News Article | April 20, 2016

In May 2014, the female-centric subreddit r/TwoXChromosomes was promoted to default subreddit status, meaning its top posts were automatically displayed to users landing on the reddit homepage. That might sound like a positive change, adding female visibility on the wildly popular, male-dominated website. But it also meant the once quiet, safe space for women suddenly had a lot more attention from the average redditor. Quickly, the r/TwoXChromosomes space was flooded with new users, many of them male.

“Suddenly there are men saying immature shit in response to posts about periods, labia, etc. Then there were rape apologists posting in threads about rape culture,” one former r/TwoXChromosomes subscriber, who wished to remain anonymous to avoid harassment, said in an email to Motherboard. “I saw what was happening and left before I got myself into any arguments that would raise my blood pressure too much. I've kinda given up on trying to educate men and boys who already think that feminism is a bad word.”

This experience of male internet commenters infiltrating what is meant to be female or neutral online space isn't unique. You could call it virtual manspreading. Manspreading, the practice of sitting on public transportation with splayed legs in a way that encroaches on the space of others, was a popular topic of articles, Tumblrs, and ads in 2015, even earning an entry in the Oxford Dictionary. Though one might be tempted to suggest that men's bodies simply need more room, various writers have linked manspreading to privilege and the occupation of space. Wrote University of New South Wales researcher Emma Jane, “Manspreading is framed as a powerful—yet also ridiculous—symbol of what is argued to be men's tendency to take up more than their fair share of literal and metaphorical social space." Similarly, virtual manspreading is the practice of taking up online space in a way that encroaches on the space of others.
Though it's not always as extreme as what happened to r/TwoXChromosomes, research has shown that the way men take up and encroach on space in virtual comment sections is significantly different from women's experience of online commenting. In a study of online commenting on the 16 highest-trafficked news sites in the US, UK, and Australia, researcher Fiona Martin found that male-named commenters dominated the comments section on every single site included in the study. She determined the gender of commenters by examining a sample of top commenters and coding for gender based on the username, including gendered first names (for example, Mary or Robert) and gender cues (for example, Sister or Mr). Gender was also cross-checked against images used by commenters as well as any instances of self-reported gender.

Although the readership of the sites included in Martin's study was almost always made up of more male than female readers, the gender spread for commenters was much wider. At the more equitable end, 35 percent of commenters were identifiable as female at the Texas Tribune (with 56 percent male and 9 percent ambiguous). At the other end of the spectrum, 3 percent were identifiable as women at the Washington Post (21 percent male, 76 percent ambiguous). The author notes that it's possible commenters could use pseudonyms that misgender them; however, three of the sites included in the study use Facebook-powered comments with (mostly) real names, and all three echoed the gender differences found on the pseudonymous sites. Similarly, Emma Pierson's 2015 study of the New York Times online comment sections found that although 44 percent of NYT readers are female, only 27.7 percent of gender-identified commenters were female.
She used a similar method of gender inference, running an algorithm that compared commenter names to lists of traditional first names categorized by native speakers, and discarded comments where the gender of the author was ambiguous (46 percent of comments were discarded). She also found that women tended to voice different opinions when commenting on the same news stories as men. For example, female commenters were more likely to comment in support of employer-provided contraception on an article discussing religious exemptions to the Affordable Care Act.

Women were also less likely to use their full (first and last) names: 78 percent of male commenters used their full names versus 63 percent of female commenters. Gender was inferred through the first name when one was used, or, in the case of initials, via a model that determined gender based on the distribution of male/female names for the first-name initial. The author posited that the gender difference in anonymity may be due to the increased threat of harassment women face online.

Pierson's study shows that while male voices dominate the conversation in comment sections, female viewpoints are valued and desperately needed. It can be dangerous when a singular view is promulgated over and over, even when it's not widely held. Pierson wrote: “Since people are influenced by reading online comments, these discrepancies may propagate throughout society. Readers of comments about intimate partner violence, for example, were influenced by the views expressed in those comments. Worse, these discrepancies may be exacerbated because if people falsely believe their own views are unpopular, they may be persuaded not to express them at all, creating a 'spiral of silence.'” So how can female commenters take back space when faced with virtual manspreading in the comment section?
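The name-coding approach described in both studies can be sketched in a few lines. The name lists below are tiny illustrative stand-ins (the studies used large curated lists of traditional first names and cross-checked against photos and self-reported gender), and the function names are my own, not from either paper:

```python
# Minimal sketch of name-based gender coding, as described above.
# FEMALE_NAMES and MALE_NAMES are toy stand-ins for the curated lists
# the researchers actually used.
FEMALE_NAMES = {"mary", "sarah", "emma", "fiona"}
MALE_NAMES = {"robert", "john", "michael", "david"}

def infer_gender(username: str) -> str:
    """Return 'female', 'male', or 'ambiguous' for a commenter name."""
    tokens = username.strip().lower().split()
    first = tokens[0] if tokens else ""
    if first in FEMALE_NAMES:
        return "female"
    if first in MALE_NAMES:
        return "male"
    return "ambiguous"  # initials, pseudonyms, unknown names

def gender_breakdown(usernames):
    """Percentage breakdown across a sample of commenter names."""
    counts = {"female": 0, "male": 0, "ambiguous": 0}
    for name in usernames:
        counts[infer_gender(name)] += 1
    total = len(usernames) or 1
    return {k: round(100 * v / total, 1) for k, v in counts.items()}
```

Comments whose authors come out "ambiguous" are the ones Pierson's study discarded (46 percent of the sample).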
In the case of r/TwoXChromosomes, some female users fought back against the increased male presence in the best way—by posting and upvoting content that went into graphic and honest detail about uniquely female experiences (yes, period shits). Despite the challenges, the subreddit's founder and moderator HiFructoseCornFeces still sees default subreddit status as a positive. “Women comprise half the population. An online forum for women that is easily accessible makes sense to me,” she said, noting that the forum's greater visibility since achieving default status has meant higher numbers of women finding support in leaving abusive partners, talking about accessing contraception, and figuring out what it means to be a survivor of sexual assault. “How did we deal with the challenges? By continuing to moderate a space for women online.” Silicon Divide is a series about gender inequality in tech and science.

News Article | November 7, 2016

Nov. 7, 2016 - Protamine -- a natural protein with a proven safety record -- may be useful in developing new types of disinfectant solutions for contact lenses, according to a study published in the November issue of Optometry and Vision Science, official journal of the American Academy of Optometry. The journal is published by Wolters Kluwer.

In laboratory tests, protamine solutions performed at least as well as current disinfectant solutions in killing microbes that can cause contact lens-related infections, according to the report by Mahesh K. Bandara, PhD, and colleagues at the University of New South Wales, Sydney. But more research will be needed to assess the safety and effectiveness of protamine-containing solutions before they are adopted for broader use in contact lens care.

The researchers performed a series of experiments to evaluate the antimicrobial effects of protamine solutions, with and without additives commonly used in contact lens disinfectant products. Originally derived from salmon, protamine is already used for other medical purposes -- especially as an antidote to the anti-blood-clotting drug heparin. Following industry standards, the study tested protamine solutions for antimicrobial activity against a standard laboratory panel of microbes, as well as against organisms isolated from patient samples. Potential toxic effects on cultured cells were evaluated as well.

The results showed significant antimicrobial effects of protamine solutions. The effects were "dose-dependent" -- the higher the protamine concentration, the greater the microbe-killing effectiveness. Protamine killed a wide range of bacteria, yeasts, and fungi, including common causes of contact lens-related infections. Against most organisms, protamine was even more effective in solutions containing common additives. In standard tests, the antimicrobial effects of protamine were comparable to those of commercially available contact lens disinfectant solutions.
All solutions tested had minimal toxicity, indicating that they also had a good safety profile. Infections of the cornea -- the clear, outermost layer of the eye -- are a potentially serious complication of contact lens wear. Many different risk factors have been identified, including poor contact lens cleaning and disinfection routines. Millions of people worldwide use multipurpose disinfecting solutions to disinfect, clean, rinse, and store their contact lenses. However, these solutions may not always be effective against bacteria and other organisms. With its broad spectrum of antimicrobial activity and long history of medical use, protamine could be a useful alternative for use in contact lens care products.

"This study highlights the potential for protamine to be used for the development of effective multipurpose disinfection solutions," Dr. Bandara and colleagues conclude. At least in laboratory studies, protamine solutions have a "progressive dose-dependent killing effect" on bacteria and other organisms that can cause corneal infections. "Protamine represents a new strategy and a very different type of compound to help defend against ocular infections related to contact lens use," comments Michael Twa, OD, PhD, FAAO, Editor-in-Chief of Optometry and Vision Science. Further research will be needed to evaluate protamine's stability, compatibility with contact lens materials, and safety for use with contact lens care products and the eye.

Article: "Evaluation of Protamine as a Disinfectant for Contact Lenses" (doi: 10.1097/OPX.0000000000000964)

Optometry and Vision Science, official journal of the American Academy of Optometry, is the most authoritative source for current developments in optometry, physiological optics, and vision science.
This frequently cited monthly scientific journal has served primary eye care practitioners for more than 90 years, promoting vital interdisciplinary exchange among optometrists and vision scientists worldwide. Michael Twa, OD, PhD, FAAO, of the University of Alabama-Birmingham School of Optometry is Editor-in-Chief of Optometry and Vision Science.

About the American Academy of Optometry: Founded in 1922, the American Academy of Optometry is committed to promoting the art and science of vision care through lifelong learning. All members of the Academy are dedicated to the highest standards of optometric practice through clinical care, education or research.

Wolters Kluwer is a global leader in professional information services. Professionals in the areas of legal, business, tax, accounting, finance, audit, risk, compliance and healthcare rely on Wolters Kluwer's market-leading information-enabled tools and software solutions to manage their business efficiently, deliver results to their clients, and succeed in an ever more dynamic world. Wolters Kluwer reported 2015 annual revenues of €4.2 billion. The group serves customers in over 180 countries, and employs over 19,000 people worldwide. The company is headquartered in Alphen aan den Rijn, the Netherlands. Wolters Kluwer shares are listed on Euronext Amsterdam (WKL) and are included in the AEX and Euronext 100 indices. Wolters Kluwer has a sponsored Level 1 American Depositary Receipt program. The ADRs are traded on the over-the-counter market in the U.S. (WTKWY).

Wolters Kluwer Health is a leading global provider of information and point-of-care solutions for the healthcare industry. For more information about our products and organization, follow @WKHealth or @Wolters_Kluwer on Twitter, like us on Facebook, follow us on LinkedIn, or follow WoltersKluwerComms on YouTube.

News Article | February 15, 2017

Physicists have sketched a blueprint for a quantum computer built using existing technology that would be powerful enough to crack important and currently unsolvable problems, such as factoring enormous numbers. Such a machine would need to be larger than a football pitch and would cost at least £100 million (US$126 million) to make, its designers say. “Yes it will be big, yes it will be expensive — but it absolutely can be built right now,” says quantum physicist Winfried Hensinger of the University of Sussex in Brighton, UK, who leads the team that published the blueprint in Science Advances on 1 February.

The idea is not the first proposal to build a practical quantum computer, and it would involve tough engineering challenges, says Andrea Morello, a quantum physicist at the University of New South Wales in Sydney, Australia. But it is remarkable for its ambition and approach, he says. “I do think this is a landmark paper, and it will be very influential in the community for many years ahead.” “While this proposal is incredibly challenging, I wish more in the quantum community would think big like this,” agrees Christopher Monroe, a physicist at the University of Maryland in College Park.

Quantum computers promise to exploit the remarkable properties of quantum particles to carry out certain calculations exponentially faster than their classical counterparts. Teams around the world are competing to build them on the scale required for them to be useful, but most designs so far have targeted a few dozen quantum bits, or qubits. Many thousands are probably needed to do useful calculations, such as finding the prime factors of large numbers, a problem crucial to encryption. Hensinger's team suggests using ions trapped by magnetic fields to create its qubits — an approach that physicists have been working on for more than 20 years. Most of the components necessary to build a trapped-ion quantum computer have already been demonstrated, Monroe says.
“Our community needs a systems-engineering push to simply build it.” In Hensinger's blueprint, thousands of hand-sized square modules could be yoked together to produce — in theory — a quantum computer of any size. Key to the design is how it overcomes practical problems, such as the need to dissipate heat produced by the machine. “Such high-level issues are rarely considered by people in the field of quantum computing, either because they think it's goofy to think that big, or because in their own physical system, it is nearly impossible to fathom such a high-level view,” Monroe says.

In each module, around 2,500 trapped-ion qubits would be suspended in magnetic fields, protected from interference that would affect their delicate quantum states. To perform operations, ions interact with their neighbours by shuttling around an x-shaped grid, like Pac-Man characters. Rather than using individual lasers to control each trapped ion — which would require a huge engineering effort to build at scale — the team proposes to control the qubits with a field of microwave radiation applied across the entire computer. To tune individual qubits in and out of interaction with the wider field, they need only apply a local voltage. The scientists' scheme suggests using liquid nitrogen to keep the system cool. The ions themselves would hop from chip to chip to transmit information between the modules — a technique that produces inter-chip connection speeds 100,000 times those of systems that use light waves and optical fibres, says Hensinger. The individual modules would be replaceable, built on silicon bases that could be manufactured using techniques available in the conventional electronics industry, he adds.

To find the prime factors of a 2,048-bit (or 617-digit-long) number — something no classical computer can do today — the computer would need 2 billion qubit ions. They would take around 110 days to crack the problem, Hensinger says.
This would allow researchers to crack today's best encryption systems. In theory, only 4,096 qubits are required for this calculation, but 2 billion ions would be needed because of the error rates associated with the modest quality of current trapped-ion-qubit technology. Reducing the rate at which qubits make errors could drastically shrink the computer's dimensions, perhaps bringing it down to the size of a large room, says Hensinger. Huge technical challenges still stand in the way of any team aspiring to build the quantum computer — such as how to create the required strong magnetic-field gradients and how to engineer the precise control needed to manipulate qubits. But Hensinger and his colleagues are now constructing a prototype based on their design, in an effort to demonstrate that their plans really could work. “Building that thing will be an extraordinary engineering challenge, but one that's worth pursuing,” says Morello.
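The qubit figures quoted above fit together with back-of-envelope arithmetic: Shor's factoring algorithm needs on the order of 2n logical qubits for an n-bit number, and the gap between 4,096 logical qubits and 2 billion physical ions is the error-correction overhead. The per-logical-qubit factor below is simply my own division of the article's two numbers, not a figure from the paper:

```python
# Back-of-envelope arithmetic behind the article's qubit counts.
n_bits = 2048                   # size of the number to be factored
logical_qubits = 2 * n_bits     # ~2n logical qubits for Shor's algorithm
physical_ions = 2_000_000_000   # the blueprint's estimate

# Error correction accounts for the gap between logical and physical counts.
overhead = physical_ions // logical_qubits
print(f"{logical_qubits} logical qubits")
print(f"~{overhead:,} physical ions per logical qubit")
```

An overhead of roughly half a million physical ions per logical qubit is why lowering the error rate would shrink the machine so dramatically.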

News Article | August 9, 2016

Across Australia, climate science denialists are beside themselves with glee at the voting into office of one of their own. Late last week, the Australian Electoral Commission confirmed that Pauline Hanson's One Nation party had snagged Queensland's 12th and final Senate spot. Her candidate, Malcolm Roberts, is now a senator. Roberts' election is yet another demonstration of the quirkiness of Australia's electoral system. Only 77 people actually voted for Roberts as a first preference but, thanks to the popularity of Hanson, he's in for three years.

Roberts' own brand of climate denial – a heady mix of conspiracy theories and blind spots the size of the Antarctic ice sheet – is now in the national spotlight. Roberts has had wall-to-wall coverage across Australia's media – from Sky News, to Lateline, to Insiders, to flagship ABC radio. Even Triple J has joined in. News Corp's Andrew Bolt, a strong promoter of the kind of material produced by Roberts, told the senator there were now “five or six out and proud voices of climate scepticism in the Senate”. So how did Roberts respond to his newfound fame? Well, he didn't disappoint, telling every mainstream audience there was “no empirical evidence to show that carbon dioxide affects the climate in any way”.

I've written several stories over the years about Roberts and the Galileo Movement – the climate science denial group founded in 2011 with radio personality Alan “climate change is witchcraft” Jones as its patron. Three years ago I pointed out how One Nation was taking its cues on climate science from Roberts. Last month I suggested that, if elected, Roberts would bring an extreme form of climate science denial to the Senate. But for those paying close attention to climate science denial – such as the string of US senators who spent hours talking about it only last month – Roberts sounds like a broken record. In Roberts' case, the needle has been stuck for about six years.
What about his conspiracy theories (he says they're not conspiracies, just facts) that climate change is a scam pushed by global banks looking for cash and the UN on the hunt for global domination? Roberts didn't disappoint there either. On ABC Melbourne, host Rafael Epstein asked Roberts: “Do you think the UN's trying to impose some sort of global government through climate change policy?”

For years, Roberts has been writing to politicians, government agencies, universities and scientists making the same claim that there is “no empirical evidence” to show fossil fuel burning causes climate change. You can go and read all that material on his website – I'll see you in six months once you've read it all. So what does Roberts mean by “empirical evidence”? According to him, decisions should be based on “observations in the real world … it's measured, real world data” and nothing else counts.

There are two very obvious problems with Roberts' argument. First, the “real world data” is sending a clear message that the Earth is gaining heat at a rapid rate and that this is a long-term trend. Whether you look at global air temperatures measured in the real world by thermometers or derived from satellites, or the temperature of the oceans at multiple depths, or the increasing frequency of extreme temperatures, or the rising sea levels, the melting ice sheets, the disappearing Arctic sea ice, the increasing risk of bushfires … we could go on and on with a parade of “empirical evidence”. At the same time, humans are adding CO2 to the atmosphere and oceans at a rate that groups like the Geological Society say is unprecedented “even in comparison with the massive injections of carbon to the atmosphere at the Palaeocene-Eocene boundary, which led to a major thermal event 55m years ago”.

Second, Roberts' argument that science is only about “empirical evidence” might sound all sciencey to his interviewees and the lay audience. But it's bunk.
If all you rely on is “empirical evidence”, and you reject modelling and analysis that uses that data, then you throw out large swathes of modern scientific endeavour, as Prof Steven Sherwood, director of the Climate Change Research Centre at the University of New South Wales, told me.

Roberts has built a whole suite of well-rehearsed arguments to enable him to reject any assertions put to him. They go like this, and I'm paraphrasing here. Climate scientists only say it's warming because if they didn't their grants would dry up. The Bureau of Meteorology and CSIRO have corrupted climate science and are thus guilty of corruption. Government agencies are politicised, which means anything they produce cannot be trusted. You can't trust climate models, so anything that comes from them should be chucked out.

One of Australia's most famous and celebrated scientists is Prof Peter Doherty, who, in 1996, was jointly awarded a Nobel prize for his research into the immune system. Doherty told me he had sent Roberts “plenty of reports and material” but Roberts had ignored it. So Doherty has first-hand experience with Roberts and also knows a bit about the scientific method.

So how should journalists react when a newly minted senator makes claims that run against science academies across the planet while suggesting institutions and governments the world over – from the US military to the UN – are either part of, or have been hoodwinked by, a conspiracy that only he and a few other people on the internet are able to see? According to Doherty, “you have to respect the institutions of our democracy, including the Australian Senate, but that does not mean you have to respect the viewpoints held by individual senators”. There is another approach journalists could take. In 2013, Roberts sent one of his voluminous reports to Ben Cubby, then the environment editor at the Sydney Morning Herald. I'll leave you with Cubby's response.

News Article | March 2, 2017

New South Wales, which has just experienced its hottest summer on record, is 50 times more likely to experience another similarly hot summer and 10 times more likely to experience extremely hot days under climate change, according to a group of Australian climate scientists. The mean temperature in Sydney was 2.8C above average in December, January, and February, according to the Bureau of Meteorology, and the three-day heatwave from 9 to 11 February was the hottest on record from Sydney to Brisbane, breaking records set in 1939. It is the kind of weather event that would have been considered a one-in-500-year occurrence before 1910, before global warming had a significant impact on the climate system, but it has now become a one-in-50-year event, according to a new analysis released on Thursday. “In the future, a summer as hot as this past summer in NSW is likely to happen roughly once every five years,” the report said.

It could make Sydney a less liveable city, one of the report's authors, Dr Sarah Perkins-Kirkpatrick, said. Perkins-Kirkpatrick, a research fellow at the University of New South Wales' Climate Change Research Centre, said Sydney was unprepared for the knock-on effects of a significant increase in average summer temperatures. “I grew up in Sydney, so I'm kind of partial to it, and we are actually starting to think, well, can we live here?” she said. “To live here it's going to take a lot more preparation than what we are used to, looking at the building codes, including things like full insulation and double-glazing. Sydney should start having these discussions now because that sort of situation is just going to occur a lot more often.” Energy regulators warned last month that the electricity network was not equipped to supply enough power to cover the demand from air conditioners in Sydney during the 9 to 11 February heatwave.
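The return periods quoted here translate directly into annual probabilities: a "1-in-N-year" event has roughly a 1/N chance of occurring in any given year, which is how "times more likely" figures of this kind are derived. A minimal sketch (the helper function is illustrative, not taken from the report):

```python
# Convert return periods into annual probabilities and likelihood ratios.
# A "1-in-N-year" event occurs with probability ~1/N in any given year.
def annual_probability(return_period_years: float) -> float:
    return 1.0 / return_period_years

heatwave_pre_1910 = annual_probability(500)  # before significant warming
heatwave_today = annual_probability(50)      # the same event now

ratio = heatwave_today / heatwave_pre_1910
print(f"The heatwave is now {ratio:.0f}x more likely than before 1910")
```

The same arithmetic applied to the summer as a whole (a roughly once-in-five-years event in future projections) yields the report's larger likelihood ratios.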
At the same time, the prime minister, Malcolm Turnbull, doubled down on comments linking a large blackout following storms in South Australia last year to renewable energy, against the advice of his department, saying it was a matter of “energy security” and a “wake-up call” for state governments trying to hit “completely unrealistic” renewable energy targets. Perkins-Kirkpatrick said it was “extremely frustrating” to hear that debate in the face of the long-term climate data. “I think people expect something to happen to encourage them to make this change, and that's the problem – there hasn't been that encouragement from higher powers to say, ‘ok, we are going to do something now’,” she said. “It's very, very frustrating that people who can help the situation and can give us a cleaner and a better energy source are not talking about it.”

Melbourne University's Dr Andrew King, another author of the report, said that while Australia had experienced extremely hot days and extreme weather events in the past, the data showed the frequency and severity of those events had increased markedly in the past 20 years and would continue to increase unless drastic action was taken to reduce greenhouse gas emissions. “Yes, people would have experienced 40C days several decades ago around different parts of Australia and in Sydney, but we know that these incidences of very hot days are getting more frequent and we are setting more records for heat,” he said. Australia broke 12 times more records for hot weather than for cool weather between 2000 and 2014. “The purpose of the analysis in this report is to raise awareness that climate change is already impacting on weather in Australia,” King said. “Hopefully it motivates action on climate change, because we know what the solution to climate change is.”

News Article | November 15, 2016

We've heard a lot lately about the destruction of tropical coral reefs brought on by a warming climate. And that's a big deal — corals are the lynchpin of entire undersea ecosystems. When they go, the damage reverberates widely and, ultimately, even people pay the price. But something similar has been happening to corals' more temperate cousins in many locations: forests of kelp. These swaying seaweeds, too, anchor communities of diverse fish and other living organisms and, in turn, provide great value to humans through their contribution to fisheries.

And recently, there has been some very troubling news for kelp. One forest was wiped out off the southwestern coast of Australia by extremely warm temperatures in 2011, an event that scientists called a “rapid climate-driven regime shift.” More recently, there has been massive death of giant kelp around Tasmania as well. And now, new research in the Proceedings of the National Academy of Sciences demonstrates another way that a changing climate can devastate kelp.

Adriana Vergés of the University of New South Wales in Sydney and her colleagues at institutions in Australia, Spain, and Singapore used underwater cameras to study kelp forests around the Solitary Islands, along the eastern Australian coast between Sydney and Brisbane. This region is what Vergés calls “a famous place for being a tropical-temperate transition zone. We went there because that's where we thought, if there's something that is happening to the kelp, that is climate mediated, this is where it is going to be happening,” she said. And it was. Between 2002 and 2011, all of the kelp in the 25-kilometer study area died. And the cameras showed why: the kelp was being devoured by several species of fish, including tropical rabbitfish, which had been thriving as the temperate waters warmed up just slightly, by 0.6 degrees Celsius, during the time period.
That wasn’t much, but it was apparently enough to support herbivorous fish from tropical climes that could wipe out the kelp. “I think what’s interesting is this realization that the greatest impacts of climate change may not be the direct effects of warming on one species, but it’s more the effects of warming on the way species interact with each other,” said Vergés. “The effect of warming on how fish eat the kelp. And it’s kind of the same with coral reef systems. Warming doesn’t kill the coral itself, it actually breaks the relationship between the coral and the symbiotic algae. It’s a case of warming impacting the way species interact.” It’s not just the fish invading from tropical latitudes (which in the case of Australia, would be from the north). It’s also the temperature itself, which increases fish metabolism and may make them consume more, Vergés said. What’s particularly striking about this study is that, unlike in other cases involving coral reefs or kelp forests, there was no sudden surge of warm water, sometimes called an “underwater heat wave,” that proved deadly. Instead the warm-up was slow, steady, and not even all that large in magnitude. Nevertheless, it seems to have shifted the conditions in which herbivorous tropical fish and kelp forests interact, and the consequence was a total loss of kelp, at least in this particular region. That’s a big deal, because the study reports kelp are worth $10 billion to the Australian economy annually (in Australian dollars), due to their contributions to tourism and fisheries. And of course, it’s not just Australia — Japan has also seen kelp forests decimated by herbivorous fish, says Vergés, and the kelp forests off the coast of California saw major declines in abundance during the Pacific warm “blob.” Granted, there is also some better news for kelp out there. 
Another new study in the Proceedings of the National Academy of Sciences found a mere 1.8 percent decline overall per year in kelp abundance over the decade. That’s because while the researchers observed major declines in some areas, they also saw expansion in others. The study suggested one thing kelp have in their favor — they can bounce back fast. So the picture isn’t entirely grim — yet — but the individual stories of dramatic kelp wipeouts that have been dribbling in recently amid warming ocean temperatures certainly suggest reason for concern. “They’re like the trees of the underwater world in cool water places,” said Vergés of kelp. “They provide a habitat and food for literally hundreds of species. They support an entire ecological community.” “So when we lose the kelp, we’re losing all of that,” she continued. “They’re also one of the most productive plants on the planet, taking carbon out of the atmosphere. We lose a biological engine that controls or dominates temperate reefs.”

News Article | November 8, 2016

Protamine -- a natural protein with a proven safety record -- may be useful in developing new types of disinfectant solutions for contact lenses, according to a study published in the November issue of Optometry and Vision Science, official journal of the American Academy of Optometry. The journal is published by Wolters Kluwer. In laboratory tests, protamine solutions perform at least as well as current disinfectant solutions in killing microbes that can cause contact lens-related infections, according to the report by Mahesh K. Bandara, PhD, and colleagues of University of New South Wales, Sydney. But more research will be needed to assess the safety and effectiveness of protamine-containing solutions before they are adopted for broader use in contact lens care. The researchers performed a series of experiments to evaluate the antimicrobial effects of protamine solutions, with and without additives commonly used in contact lens disinfectant products. Originally derived from salmon, protamine is already used for other medical purposes -- especially as an antidote to the anti-blood clotting drug heparin. Following industry standards, the study tested protamine solutions for antimicrobial activity against a standard laboratory panel of microbes, as well as against organisms isolated from patient samples. Potential toxic effects on cultured cells were evaluated as well. The results showed significant antimicrobial effects of protamine solutions. The effects were "dose-dependent" -- the higher the protamine concentration, the greater the microbe-killing effectiveness. Protamine killed a wide range of bacteria, yeasts, and fungi, including common causes of contact lens-related infections. Against most organisms, protamine was even more effective in solutions containing common additives. In standard tests, the antimicrobial effects of protamine were comparable to those of commercially available contact lens disinfectant solutions. 
All solutions tested had minimal toxicity, indicating that they also had a good safety profile. Infections of the cornea -- the clear, outermost layer of the eye -- are a potentially serious complication of contact lens wear. Many different risk factors have been identified, including poor contact lens cleaning and disinfection routines. Millions of people worldwide use multipurpose disinfecting solutions to disinfect, clean, rinse, and store their contact lenses. However, these solutions may not always be effective against bacteria and other organisms. With its broad spectrum of antimicrobial activity and long history of medical use, protamine could be a useful alternative for use in contact lens care products. "This study highlights the potential for protamine to be used for the development of effective multipurpose disinfection solutions," Dr. Bandara and colleagues conclude. At least in laboratory studies, protamine solutions have a "progressive dose-dependent killing effect" on bacteria and other organisms that can cause corneal infections. "Protamine represents a new strategy and a very different type of compound to help defend against ocular infections related to contact lens use," comments Michael Twa, OD, PhD, FAAO, Editor-in-Chief of Optometry and Vision Science. Further research will be needed to evaluate protamine's stability, compatibility with contact lens materials, and safety for use with contact lens care products and the eye.

News Article | August 22, 2016

April 2016 marks 7 straight record-breaking months; puts year on track to be hottest on record. New data released by NASA's Goddard Institute for Space Studies reveals that April's land and sea temperatures were 1.11C warmer than the mean April temperatures between 1951 and 1980, the period the agency uses as a reference point to study climate change. “April 2016 was the hottest April on record globally,” reports The Guardian, noting of the grim distinction: It makes three months in a row that the monthly record has been broken by the largest margin ever, and seven months in a row that are at least 1C above the 1951-80 mean for that month. When the string of record-smashing months started in February, scientists began talking about a “climate emergency”. The consensus is that 2016 will likely be the hottest year on record. April 2016 temperature departures from average, in degrees Celsius, relative to the 1951-1980 average. Brown/blue contours correspond to temperatures most above/below April averages. (NASA/GISS) As you can see in the map, some areas in Alaska, Russia, Greenland and Africa had temperature increases of at least 4C above the April average. Much of Asia, eastern Europe, Australia, northern Africa, Brazil, the American northwest, and western Canada were 2C or more above April's mean temperature. This year’s April marks a steady climb up the graph of new high-temperature records, breaking the 2010 record for April (at 0.87C above the baseline average) by 0.24C. That record broke April 2007’s record of 0.75C above the baseline average. Things just keep moving on up. “The interesting thing is the scale at which we’re breaking records,” said Andy Pitman, director of the ARC Centre of Excellence for Climate System Science at the University of New South Wales in Australia. “It’s clearly all heading in the wrong direction." “Climate scientists have been warning about this since at least the 1980s," he adds. 
"And it’s been bloody obvious since the 2000s. So where’s the surprise?” The animated graphic below does a great job of giving a visual representation of what things are looking like. It certainly brings the phrase “spiraling out of control” to mind. Read more at The Guardian.

News Article | December 27, 2016

The Biophysical Society has announced the winners of its international travel grants to attend the Biophysical Society's 61st Annual Meeting in New Orleans, February 11-15, 2017. The purpose of these awards is to foster and initiate further interaction between American biophysicists and scientists working in countries experiencing financial difficulties. Recipients of this competitive award are chosen based on scientific merit and their proposed presentation at the meeting. They will be honored at a reception on Sunday, February 12 at the Ernest N. Morial Convention Center. The 2017 recipients of the International Travel Award, along with their institutional affiliation and abstract title, are listed below. Ana F. Guedes, Institute of Molecular Medicine, Portugal, ATOMIC FORCE MICROSCOPY AS A TOOL TO EVALUATE THE RISK OF CARDIOVASCULAR DISEASES IN PATIENTS. Karishma Bhasne, Indian Institute of Science Education and Research (IISER) Mohali, A TALE OF TWO AMYLOIDOGENIC INTRINSICALLY DISORDERED PROTEINS: INTERPLAY OF TAU AND α-SYNUCLEIN. Chan Cao, East China University of Science and Technology, DIRECT IDENTIFICATION OF ADENINE, THYMINE, CYTOSINE AND GUANINE USING AEROLYSIN NANOPORE. Venkata Reddy Chirasani, Indian Institute of Technology Madras, LIPID TRANSFER MECHANISM OF CETP BETWEEN HDL AND LDL: A COARSE-GRAINED SIMULATION STUDY. Assaf Elazar, Weizmann Institute of Science, Israel, DECIPHERING MEMBRANE PROTEIN ENERGETICS USING DEEP SEQUENCING; TOWARDS ROBUST DESIGN AND STRUCTURE PREDICTION OF MEMBRANE PROTEINS. Manuela Gabriel, University of Buenos Aires, Argentina, 3D ORBITAL TRACKING OF SINGLE GOLD NANOPARTICLES: A NEW APPROACH TO STUDY VESICLE TRAFFICKING IN CHROMAFFIN CELLS. Farah Haque, National Centre for Biological Sciences, India, A NEW HUMANIZED MOUSE MODEL FOR STUDYING INHERITED CARDIOMYOPATHIC MUTATIONS IN THE MYH7 GENE. 
Stephanie Heusser, Stockholm University, Sweden, STRUCTURAL AND FUNCTIONAL EVIDENCE FOR MULTI-SITE ALLOSTERY MEDIATED BY GENERAL ANESTHETICS IN A MODEL LIGAND-GATED ION CHANNEL. Amir Irani, Massey University, New Zealand, HOMOGALACTURONANS ILLUMINATE THE ROLE OF COUNTERION CONDENSATION IN POLYELECTROLYTE TRANSPORT. Olfat Malak, University of Nantes, France, HIV-TAT INDUCES A DECREASE IN IKR AND IKS VIA REDUCTION IN PHOSPHATIDYLINOSITOL-(4,5)-BISPHOSPHATE AVAILABILITY. CONFORMATIONAL TRANSITION AND ASSEMBLY OF E.COLI CYTOLYSIN A PORE FORMING TOXIN BY SINGLE MOLECULE FLUORESCENCE. Sabrina Sharmin, Shizuoka University, Japan, EFFECTS OF LIPID COMPOSITIONS ON THE ENTRY OF CELL PENETRATING PEPTIDE OLIGOARGININE INTO SINGLE VESICLES. Xin Shi, East China University of Science and Technology, DIRECT OBSERVATION OF SINGLE BIOPOLYMER FOLDING AND UNFOLDING PROCESS BY SOLID-STATE NANOPORE. Omar Alijevic, University of Lausanne, Switzerland, ANALYSIS OF GATING OF ACID-SENSING ION CHANNELS (ASICS) UNDER RAPID AND SLOW PH CHANGES. Swapna Bera, Bose Institute, India, BIOPHYSICAL INSIGHTS INTO THE MEMBRANE INTERACTION OF THE CORE AMYLOID-FORMING Aβ40 FRAGMENT K16-K28 AND ITS ROLE IN THE PATHOGENESIS OF ALZHEIMER'S DISEASE. Anais Cassaignau, University College London, United Kingdom, STRUCTURAL INVESTIGATION OF AN IMMUNOGLOBULIN DOMAIN ON THE RIBOSOME USING NMR SPECTROSCOPY. Bappaditya Chandra, Tata Institute of Fundamental Research, India, SECONDARY STRUCTURE FLIPPING CONNECTED TO SALT-BRIDGE FORMATION CONVERTS TOXIC AMYLOID-β40 OLIGOMERS TO FIBRILS. Gayathri Narasimhan, Cinvestav, Mexico, ANTIHYPERTROPHIC EFFECTS OF DIAZOXIDE INVOLVES CHANGES IN MIR-132 EXPRESSION IN ADULT RAT CARDIOMYOCYTES. Giulia Paci, European Molecular Biology Laboratory, Germany, FOLLOWING A GIANT'S FOOTSTEPS: SINGLE-PARTICLE AND SUPER-RESOLUTION APPROACHES TO DECIPHER THE NUCLEAR TRANSPORT OF HEPATITIS B VIRUS CAPSIDS. 
Bizhan Sharopov, Bogomoletz Institute of Physiology, National Academy of Sciences of Ukraine, DISSECTING LOCAL AND SYSTEMIC EFFECTS OF TRPV1 ON BLADDER CONTRACTILITY IN DIABETES. Chao Sun, East China Normal University, FUNCTION OF BACTERIORUBERIN IN ARCHAERHODOPSIN 4, FROM EXPRESSION TO CHARACTERIZATION. Matthew Batchelor, University of Leeds, United Kingdom, STRUCTURAL DYNAMICS IN THE MYOSIN 7A SINGLE α-HELIX DOMAIN. Daniel Havelka, Czech Academy of Sciences, MICROVOLUME DIELECTRIC SPECTROSCOPY AND MOLECULAR DYNAMICS OF AMINO ACIDS. Ivan Kadurin, University College London, United Kingdom, INVESTIGATION OF THE PROTEOLYTIC CLEAVAGE OF α2δ SUBUNITS: A MECHANISTIC SWITCH FROM INHIBITION TO ACTIVATION OF VOLTAGE-GATED CALCIUM CHANNELS? Linlin Ma, University of Queensland, Australia, NOVEL HUMAN EAG CHANNEL ANTAGONISTS FROM SPIDER VENOMS. Ivana Malvacio, University of Cagliari, Italy, MOLECULAR INSIGHTS ON THE RECOGNITION OF SUBSTRATES BY THE PROMISCUOUS EFFLUX PUMP ACRB. Cristina Moreno Vadillo, Cardiovascular Research Institute Maastricht, Netherlands, RESTORING DEFECTIVE CAMP-DEPENDENT UPREGULATION IN LONG-QT SYNDROME TYPE-1 THROUGH INTERVENTIONS THAT PROMOTE IKS CHANNEL OPENING. Melanie Paillard, Claude Bernard University Lyon 1, France, TISSUE-SPECIFIC MITOCHONDRIAL DECODING OF CYTOPLASMIC CA2+ SIGNALS IS CONTROLLED BY THE STOICHIOMETRY OF MICU1/2 AND MCU. Mohammed Mostafizur Rahman, Institute for Stem Cell Biology and Regenerative Medicine, India, STRESS-INDUCED DIFFERENTIAL REGULATION LEADS TO DECOUPLING OF THE ACTIVITY BETWEEN MPFC AND AMYGDALA. Marcin Wolny, University of Leeds, United Kingdom, DESIGN AND CHARACTERIZATION OF LONG AND STABLE DE NOVO SINGLE α-HELIX DOMAINS. Elvis Pandzic, University of New South Wales, Australia, VELOCITY LANDSCAPES RESOLVE MULTIPLE DYNAMICAL POPULATIONS FROM FLUORESCENCE IMAGE TIME SERIES. 
The Biophysical Society, founded in 1958, is a professional, scientific Society established to encourage development and dissemination of knowledge in biophysics. The Society promotes growth in this expanding field through its annual meeting, monthly journal, and committee and outreach activities. Its 9000 members are located throughout the U.S. and the world, where they teach and conduct research in colleges, universities, laboratories, government agencies, and industry. For more information on these awards, the Society, or the 2017 Annual Meeting, visit http://www.

News Article | November 11, 2016

BROOKLINE, MA, November 11, 2016-- Lois Swirnoff has been included in Marquis Who's Who. As in all Marquis Who's Who biographical volumes, individuals profiled are selected on the basis of current reference value. Factors such as position, noteworthy accomplishments, visibility, and prominence in a field are all taken into account during the selection process. Known for her creativity, Prof. Lois Swirnoff's work on color is recognized internationally. Her book "The Color of Cities: An International Perspective", published in 2000 by McGraw-Hill, has been translated into Mandarin in Beijing. It received Honorable Mention for the best book on Architecture and Urban Studies that year from the Association of American Publishers. "Dimensional Color", her first book, now in its third edition, published by W. W. Norton, is known throughout the world. Most recently, her essay "Colour Contrast" has been published online by Springer in "The Encyclopedia of Colour". Prof. Swirnoff prepared for her career in the arts by attending The Cooper Union School of Art in NY, graduating in 1951 with highest honors. Josef Albers, the former Bauhaus Master who widely influenced abstract art as well as art education, admitted her to his graduate program at Yale University, where she received the BFA and MFA Summa Cum Laude. She began her teaching career in 1954 as Instructor in Art at Wellesley College; was Lecturer on Visual & Environmental Studies at Harvard University from 1968 to 1975; Chair of the Art Department at Skidmore College from 1977 to 1981; has been Emerita Prof. of Art and Design at UCLA since 1990, after teaching there for thirteen years; and was Adjunct Prof. in the Art School in 1991, then held the Feltman Chair in Light at Cooper Union, retiring in 2003. 
Since then she has lectured in Bergen, Norway; Helsinki, Finland; Stockholm, Sweden; at meetings of the AIC in Granada, Spain; and at the University of New South Wales, Sydney, Australia. Throughout the past six decades Swirnoff has exhibited her creative work in solo shows at the Farnsworth Art Museum, the National Academy of Sciences, Swetzoff Gallery in Boston, Gallery BAI in NY, Hathorn Gallery at Skidmore College, and "The Color of Cities" at the New York School of Interior Design, and in many group shows in galleries and museums in the U.S. and abroad. In celebration of her achievements, Ms. Swirnoff has been awarded the Erskine Hewitt Award from Cooper Union in 1951, the Fulbright Fellowship to Italy for Painting in 1951-1952, the Mary I. Bunting Fellowship of Radcliffe in Painting in 1961-1962, Yaddo Fellowships in 1985 and 1987, and a Graham Foundation Grant for her book, Dimensional Color. Swirnoff has been included in Who's Who in America, Who's Who in American Education, Who's Who in the World and Who's Who of American Women. Her most recent achievement, a grant from the IIDA, enabled her to design and build a large scale installation of color as light, with students at Parsons School of Design; an aspect of sequential contrast of color in time, a phenomenon that continues to engage her. About Marquis Who's Who: Since 1899, when A. N. Marquis printed the First Edition of Who's Who in America, Marquis Who's Who has chronicled the lives of the most accomplished individuals and innovators from every significant field of endeavor, including politics, business, medicine, law, education, art, religion and entertainment. Today, Who's Who in America remains an essential biographical source for thousands of researchers, journalists, librarians and executive search firms around the world. 
Marquis now publishes many Who's Who titles, including Who's Who in America, Who's Who in the World, Who's Who in American Law, Who's Who in Medicine and Healthcare, Who's Who in Science and Engineering, and Who's Who in Asia. Marquis publications may be visited at the official Marquis Who's Who website at

News Article | November 14, 2016

The earthquake that struck New Zealand shortly after midnight local time on 14 November, killing two people, is a stark reminder that New Zealand's seismic activity "is a lot more complicated than we thought," says James Goff, a seismologist and tsunami expert at the University of New South Wales in Sydney, Australia. The ruptured fault is not along the tectonic plate boundaries where major quakes are expected. "We are finding out again that there is seismic activity that we didn't really know about," Goff says. The U.S. Geological Survey placed the epicenter of the magnitude-7.8 earthquake near Kaikoura, a coastal tourist town 92 kilometers northeast of Christchurch, at a depth of about 23 kilometers. The shallow quake caused extensive damage to infrastructure. Landslides blocked the main highway through the region and temporarily dammed the Clarence River. Aftershocks continued throughout the day Monday. The earthquake’s epicenter was not on a known major fault, says Kevin McCue, an engineering seismologist at Australia's Central Queensland University in Rockhampton. New Zealand straddles the collision zone between the Australian and Pacific tectonic plates. The boundary between the two plates runs off the east coast of the North Island and along the west coast of the South Island. McCue says New Zealand's earthquake hazard maps anticipate strong quakes emanating from the complex faults in those boundaries. Today’s quake, however, occurred on a little studied intraplate fault. The quake, as well as strong temblors that struck Christchurch in 2010 and 2011, indicates that the east coast of the South Island "is a far more risky place than was thought." New Zealand's earthquake hazard maps, which affect building codes, must be reconsidered, McCue says. He also worries that today's quake might increase stress on the plate boundaries, where a rupture could produce a magnitude-8 earthquake. 
The quake also triggered a 1-meter-or-so tsunami, which Goff says is unusual for a fault located beneath land. He suspects that about 1 meter of coastal ground uplift resulting from the fault movement disturbed the sea floor enough to trigger the tsunami. It's not clear yet whether that local phenomenon caused significant damage.

News Article | February 22, 2017

Learning how termites extract and synthesize a specific sound/vibration signal amidst a cacophony of other sounds could lead to advances in technologies based on acoustic signatures Termites and ants are mortal enemies, their predator-prey relationship dating back into the Miocene. Both armies, if they come into contact, will fight to the death. Yet termites, despite being sightless, carry out their work within millimetres of ant nests thanks to an incredible talent. University of New South Wales (Canberra) researchers Sebastian Oberst (now Senior Lecturer at the newly founded Centre for Audio, Acoustic & Vibration at the University of Technology, Sydney) and Professor Joseph Lai (UNSW Canberra), with colleagues Dr Glen Bann (Australian National University) and Associate Professor Theo Evans (University of Western Australia), have discovered termites' astonishing ability to detect predatory ants and dodge them based on substrate vibrations of their footsteps. Experiments in the lab and in the field showed that termites eavesdrop on the micro-vibrations made when ants move around, rather than on chemical signals. According to Dr Oberst, termites are up to 100 times 'quieter' than their predators, almost tiptoeing while trotting along. The signals of walking ants seem more like 'stomps'. "Termites eavesdrop in order to keep track of exactly where ants are and what they are doing," says Dr Oberst who, during the project, was a Research Associate at UNSW Canberra. "We knew they were spying on the ants' chemical or acoustic signals, but we didn't know which, until now." Dr Oberst conducted experiments in the labs of CSIRO Black Mountain in Canberra before designing a highly sensitive vibration setup within the anechoic chamber at UNSW Canberra to record sounds of ants and termites walking. "Our task was to measure minuscule vibrations caused by the footsteps of insects with a weight of only a few milligrams. 
I had to enter an anechoic room which absorbed all sound reflections and that was decoupled from the building, but still we picked up vibrations from people in the building," he explains. "We had to decouple the equipment within the room. The vibrations of some termite species are so tiny that you wouldn't feel them walking on your skin." "We discovered termites are quieter than ants, although there is one specific type of ant that hunts on its own, therefore has to be quiet. It is almost as silent as termites. Interestingly, another termite species that specialises in stealing the food of other termites is also much quieter than its host." Why is this knowledge important? Learning how termites extract and synthesise a specific sound/vibration signal amidst a cacophony of other sounds could lead to advances in technologies based on acoustic signatures, Dr Oberst says. Termites are highly cryptic masters of surveillance and concealment. Those in defence and counter-espionage could learn from their unique skills, the researchers say. The pest control industry could be turned on its head if this knowledge can be commercialised. Chemical-based, toxic pest control methods could become a thing of the past if ant and termite acoustic signatures could be understood and used to control infestations. Further research must be conducted to crack the secret code of termites. The research, published in Ecology Letters, was supported by the Australian Research Council.

On May 21, Department of Civil and Environmental Engineering head and McAfee Professor of Engineering Markus J. Buehler received the 2015 Foresight Institute Feynman Prize in Theoretical Molecular Nanotechnology. Buehler’s award was one of three prizes presented by the Foresight Institute, a leading think tank and public interest organization, at its annual conference in Palo Alto, California. He was acknowledged at the awards ceremony for his influential research enabling new multiscale models in hierarchical systems. The prizes are named in honor of pioneering physicist and Nobel laureate Richard Feynman '39, who earned his BS in physics from MIT. “The Foresight Institute Feynman Prizes in nanotechnology recognize progress toward the most visionary objectives in nanotechnology,” said Julia Bossmann, president of Foresight Institute. “These objectives, often described as molecular manufacturing or molecular nanotechnology, are to construct atomically-precise products through the use of molecular machine systems, therefore allowing the fabrication of nearly anything that can be designed.” “Foresight Institute established these prizes to encourage research in the development of molecular nanotechnology. The Foresight Institute Feynman Prizes are awarded to those making significant advances toward that end,” said Christine Peterson, co-founder of Foresight Institute. “Productive nanosystems will result in the ultimate manufacturing technology, replacing most current methods with clean and inexpensive alternatives. This capability will help us tackle fundamental problems that face humanity and lead to solutions that are good for people and good for the planet.” The Foresight Institute recognized Buehler for his important contributions to making nanotechnology scalable for large-scale materials applications, enabled by bottom-up multiscale computational methods, and linking new manufacturing and characterization methods. 
Focusing on mechanical properties — especially deformation and failure — and translation from biological materials and structures to bio-inspired synthetic materials, his work has led to the development and application of new modeling, design, and manufacturing approaches for advanced materials that offer greater resilience and a wide range of controllable properties from the nano- to the macroscale. Buehler's signature achievement, according to the Institute, is the application of molecular and chemical principles in the analysis of mechanical systems, with the aim to design devices and materials that provide a defined set of functions. “It’s an incredible honor to receive such an esteemed award. I owe this to the outstanding students and postdocs whom I had a pleasure to work with over the years, my colleagues, as well as my own mentors,” Buehler said. “Richard Feynman was a revolutionary scientist of his generation. It’s a privilege to share his goals of researching molecular technology at very small scale to create new, more efficient, and better lasting materials at much larger scale that will help transform lives and industries.” The two other award winners are Professor Michelle Y. Simmons of the University of New South Wales, who won the Feynman Prize for Experimental Molecular Nanotechnology, and Northwestern University graduate student Chuyang Cheng, who won the Distinguished Student Award. Foresight Institute is a think tank focused on transformative future technologies. Founded in 1986, its mission is to discover and promote the upsides, and help avoid the drawbacks, of nanotechnology, artificial intelligence, biotechnology, and similar life-changing developments.

Nelson B.,University of Melbourne | Whitford T.J.,University of New South Wales | Lavoie S.,University of Melbourne | Sass L.A.,Rutgers University
Schizophrenia Research | Year: 2014

Phenomenological research indicates that disturbance of the basic sense of self may be a core phenotypic marker of schizophrenia spectrum disorders. Basic self-disturbance refers to disruption of the sense of ownership of experience and agency of action and is associated with a variety of anomalous subjective experiences. Little is known about the neurocognitive underpinnings of basic self-disturbance. In these two theoretical papers (of which this is Part 1), we review some recent phenomenological and neurocognitive research and point to a convergence of these approaches around the concept of self-disturbance. Specifically, we propose that subjective anomalies associated with basic self-disturbance may be associated with: 1. source monitoring deficits, which may contribute particularly to disturbances of "ownership" and "mineness" (the phenomenological notion of presence or self-affection) and 2. aberrant salience, and associated disturbances of memory, prediction and attention processes, which may contribute to hyper-reflexivity, disturbed "grip" or "hold" on the perceptual and conceptual field, and disturbances of intuitive social understanding ("common sense"). In this paper (Part 1) we focus on source monitoring deficits. Part 2 (this issue) addresses aberrant salience. Empirical studies are required in a variety of populations in order to test these proposed associations between phenomenological and neurocognitive aspects of self-disturbance in schizophrenia. An integration of findings across the phenomenological and neurocognitive "levels" would represent a significant advance in the understanding of schizophrenia and possibly enhance early identification and intervention strategies. © 2013.

Rubinov M.,University of New South Wales | Rubinov M.,Queensland Institute of Medical Research | Rubinov M.,CSIRO | Sporns O.,Indiana University Bloomington
NeuroImage | Year: 2010

Brain connectivity datasets comprise networks of brain regions connected by anatomical tracts or by functional associations. Complex network analysis, a new multidisciplinary approach to the study of complex systems, aims to characterize these brain networks with a small number of neurobiologically meaningful and easily computable measures. In this article, we discuss construction of brain networks from connectivity data and describe the most commonly used network measures of structural and functional connectivity. We describe measures that variously detect functional integration and segregation, quantify centrality of individual brain regions or pathways, characterize patterns of local anatomical circuitry, and test resilience of networks to insult. We discuss the issues surrounding comparison of structural and functional network connectivity, as well as comparison of networks across subjects. Finally, we describe a Matlab toolbox accompanying this article and containing a collection of complex network measures and large-scale neuroanatomical connectivity datasets. © 2009 Elsevier Inc.
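As an illustrative sketch only (not the Matlab toolbox the abstract describes), two of the simplest measures of the kind discussed here, node degree and the clustering coefficient, can be computed directly from a binary undirected adjacency matrix. The 4-node network below is made-up toy data:

```python
import numpy as np

# Toy binary undirected adjacency matrix for a 4-node network
# (hypothetical data; real brain networks span tens to hundreds of regions).
A = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 1],
    [1, 1, 0, 0],
    [0, 1, 0, 0],
])

def degrees(A):
    """Node degree: the number of edges attached to each node."""
    return A.sum(axis=1)

def clustering_coefficients(A):
    """Fraction of each node's neighbour pairs that are themselves
    connected -- a simple measure of local circuitry (segregation)."""
    n = A.shape[0]
    C = np.zeros(n)
    for i in range(n):
        nbrs = np.flatnonzero(A[i])
        k = len(nbrs)
        if k < 2:
            continue  # undefined for degree < 2; conventionally set to 0
        links = A[np.ix_(nbrs, nbrs)].sum() / 2  # edges among neighbours
        C[i] = 2 * links / (k * (k - 1))
    return C

deg = degrees(A)                 # -> array([2, 3, 2, 1])
C = clustering_coefficients(A)   # -> array([1.0, 1/3, 1.0, 0.0])
```

Centrality, path-length, and resilience measures follow the same pattern: each reduces the full adjacency matrix to a small set of per-node or whole-network numbers that can then be compared across subjects.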

Goldblatt F.,Repatriation General Hospital | Goldblatt F.,Flinders Medical Center | O'Neill S.G.,University of New South Wales | O'Neill S.G.,Liverpool Hospital
The Lancet | Year: 2013

Multisystem autoimmune rheumatic diseases are heterogeneous rare disorders associated with substantial morbidity and mortality. Efforts to create international consensus within the past decade have resulted in the publication of new classification or nomenclature criteria for several autoimmune rheumatic diseases, specifically for systemic lupus erythematosus, Sjögren's syndrome, and the systemic vasculitides. Substantial progress has been made in the formulation of new criteria in systemic sclerosis and idiopathic inflammatory myositis. Although the autoimmune rheumatic diseases share many common features and clinical presentations, differentiation between the diseases is crucial because of important distinctions in clinical course, appropriate drugs, and prognoses. We review some of the dilemmas in the diagnosis of these autoimmune rheumatic diseases, and focus on the importance of new classification criteria, clinical assessment, and interpretation of autoimmune serology. In this era of improvement of mortality rates for patients with autoimmune rheumatic diseases, we pay particular attention to the effect of leading complications, specifically cardiovascular manifestations and cancer, and we update epidemiology and prognosis.

Coiera E.,University of New South Wales | Aarts J.,Erasmus University Rotterdam | Kulikowski C.,Rutgers University
Journal of the American Medical Informatics Association | Year: 2012

Over the next 10 years, more information and communication technology (ICT) will be deployed in the health system than in its entire previous history. Systems will be larger in scope, more complex, and move from regional to national and supranational scale. Yet we are at roughly the same place the aviation industry was in the 1950s with respect to system safety. Even if ICT harm rates do not increase, increased ICT use will increase the absolute number of ICT related harms. Factors that could diminish ICT harm include adoption of common standards, technology maturity, better system development, testing, implementation and end user training. Factors that will increase harm rates include complexity and heterogeneity of systems and their interfaces, rapid implementation and poor training of users. Mitigating these harms will not be easy, as organizational inertia is likely to generate a hysteresis-like lag, where the paths to increase and decrease harm are not identical.

Evans J.P.,University of New South Wales | Ekstrom M.,CSIRO | Ji F.,Office of Environment and Heritage
Climate Dynamics | Year: 2012

When using the Weather Research and Forecasting (WRF) modelling system it is necessary to choose between many parametrisations for each physics option. This study examines the performance of various physics scheme combinations on the simulation of a series of rainfall events near the south-east coast of Australia known as East Coast Lows. A thirty-six member multi-physics ensemble was created such that each member had a unique set of physics parametrisations. No single ensemble member was found to perform best for all events, variables and metrics. This is reflected in the fact that different climate variables are found to be sensitive to different physical parametrisations. While a standardised super-metric can be used to identify best performers, the step-wise decision approach described here allows explicit recognition of the "robustness" of choosing one parametrisation over another, allowing the identification of a group of "equally robustly" performing physics combinations. These results suggest that the Mellor-Yamada-Janjic planetary boundary layer scheme and the Betts-Miller-Janjic cumulus scheme can be chosen with some robustness. Possibly with greater confidence, the results also suggest that the Yonsei University planetary boundary layer scheme, Kain-Fritsch cumulus scheme and RRTMG radiation scheme should not be used in combination in this region. Results further indicate that the selection of physics scheme options has a larger impact on model performance during the more intense rainfall events. © 2011 Springer-Verlag.
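
The standardised super-metric idea can be sketched as follows; the RMSE values, member count, and variables here are invented placeholders, not results from the study.

```python
import numpy as np

# Invented RMSE scores: 4 ensemble members (rows) x 3 variables (columns);
# lower is better. A real study would span many events, variables and metrics.
rmse = np.array([
    [1.2, 0.8, 2.0],
    [1.0, 0.9, 1.8],
    [1.5, 0.7, 2.2],
    [1.1, 1.0, 1.9],
])

# Standardise each variable across members so no single variable dominates,
# then average into a single "super-metric" per member.
z = (rmse - rmse.mean(axis=0)) / rmse.std(axis=0)
super_metric = z.mean(axis=1)
best_member = int(np.argmin(super_metric))
print(best_member)  # member with the lowest (best) combined score
```

A step-wise robustness check would additionally ask whether the winning physics choice beats the alternatives across every event and metric, not just on average.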

Dittmann E.,University of Potsdam | Fewer D.P.,University of Helsinki | Neilan B.A.,University of New South Wales
FEMS Microbiology Reviews | Year: 2013

Cyanobacteria produce an unparalleled variety of toxins that can cause severe health problems or even death in humans and wild or domestic animals. In the last decade, biosynthetic pathways have been assigned to the majority of the known toxin families. This review summarizes current knowledge about the enzymatic basis for the production of the hepatotoxins microcystin and nodularin, the cytotoxin cylindrospermopsin, the neurotoxins anatoxin and saxitoxin, and the dermatotoxin lyngbyatoxin. Elucidation of the biosynthetic pathways of the toxins has paved the way for the development of molecular techniques for the detection and quantification of the producing cyanobacteria in different environments. Phylogenetic analyses of related clusters from a large number of strains have also allowed for the reconstruction of the evolutionary scenarios that have led to the emergence, diversification, and loss of such gene clusters in different strains and genera of cyanobacteria. Advances in the understanding of toxin biosynthesis and evolution have provided new methods for drinking-water quality control and may inspire the development of techniques for the management of bloom formation in the future. We report the biosynthetic pathways of cyanobacterial toxins and describe the evolutionary scenarios that have led to the emergence, diversification and loss of such gene clusters. © 2012 Federation of European Microbiological Societies.

Zhang W.,Aston University | Webb D.J.,Aston University | Peng G.-D.,University of New South Wales
Journal of Lightwave Technology | Year: 2012

In this work we experimentally investigate the response time of humidity sensors based on polymer optical fiber Bragg gratings. By etching with acetone we can reduce the diameter of the poly(methyl methacrylate) based fiber in order to reduce the diffusion time of water into the polymer and hence speed up the relative wavelength change caused by humidity variations. A much improved response time of 12 minutes for humidity decrease and 7 minutes for humidity increase has been achieved by using a polymer optical fiber Bragg grating with a reduced diameter of 135 microns. © 2011 IEEE.
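
Assuming simple diffusion, the response time scales roughly with the square of the fibre diameter, which is why etching speeds the sensor up. The 240 µm unetched diameter below is an assumed illustrative value, not a figure from the paper.

```python
# Back-of-envelope scaling: diffusion-limited response time t ∝ d².
def scaled_response_time(t_ref_min, d_ref_um, d_new_um):
    """Scale a reference response time to a new fibre diameter."""
    return t_ref_min * (d_new_um / d_ref_um) ** 2

# 7 min for humidity increase at the etched 135 µm diameter; predict the
# response of a hypothetical unetched 240 µm fibre under this scaling.
t_unetched = scaled_response_time(7.0, 135.0, 240.0)
print(round(t_unetched, 1))  # → 22.1 minutes
```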

Zwanenburg F.A.,MESA Institute for Nanotechnology | Zwanenburg F.A.,University of New South Wales | Dzurak A.S.,University of New South Wales | Morello A.,University of New South Wales | And 6 more authors.
Reviews of Modern Physics | Year: 2013

This review describes recent groundbreaking results in Si, Si/SiGe, and dopant-based quantum dots, and it highlights the remarkable advances in Si-based quantum physics that have occurred in the past few years. This progress has been possible thanks to materials development of Si quantum devices, and the physical understanding of quantum effects in silicon. Recent critical steps include the isolation of single electrons, the observation of spin blockade, and single-shot readout of individual electron spins in both dopants and gated quantum dots in Si. Each of these results has come with physics that was not anticipated from previous work in other material systems. These advances underline the significant progress toward the realization of spin quantum bits in a material with a long spin coherence time, crucial for quantum computation and spintronics. Published by the American Physical Society.

Alizon S.,IRD Montpellier | Luciani F.,University of New South Wales | Regoes R.R.,ETH Zurich
Trends in Microbiology | Year: 2011

Many viruses and bacteria are known to evolve rapidly over the course of an infection. However, epidemiological studies generally assume that within-host evolution is an instantaneous process. We argue that the dynamics of within-host evolution have implications at the within-host and at the between-host levels. We first show that epidemiologists should consider within-host evolution, notably because it affects the genotype of the pathogen that is transmitted. We then present studies that investigate evolution at the within-host level and examine the extent to which these studies can help to understand infection traits involved in the epidemiology (e.g. transmission rate, virulence, recovery rate). Finally, we discuss how new techniques for data acquisition can open new perspectives for empirical and theoretical research. © 2010 Elsevier Ltd.

News Article | May 17, 2016

Artificial intelligence systems are getting better at performing jobs that were once thought to be exclusive to humans. Researchers have created sophisticated AI systems that can write political speeches and even accurately predict who will win the Kentucky Derby. Now, scientists are placing themselves at risk of losing their own jobs by using AI to run a complex science experiment. A pair of researchers has taught an AI to replicate a physics experiment that won three physicists the 2001 Nobel Prize. The experiment involves trapping a Bose-Einstein condensate (BEC), a dilute gas of bosons cooled to extremely low temperatures and held in lasers. Bose-Einstein condensates are extremely sensitive to fluctuations in energy, which makes them useful in experiments and measurements. Sensors that are powered by these trapped gases can potentially detect tiny changes in the Earth's magnetic field or gravity, so researchers are working on ways to tap them as sensors for mineral exploration and navigational systems. Paul Wigley, from the Australian National University, and Michael Hush, from the University of New South Wales, developed an AI system that they trained to manage the gas-trapping laser device. They cooled the gas to 1 microkelvin and then handed over control of the laser system to the AI, which cooled the gas to the nanokelvin range by applying laser-manipulation techniques the physicists said humans would not likely have guessed. "Through repeated machine-controlled scientific experimentation and observations our 'learner' discovers an optimal evaporation ramp for BEC production," the researchers reported in the journal Scientific Reports on May 16. The researchers acknowledged the potential of using artificial intelligence to make experiments bigger and better.
The AI, which learned to conduct the experiment from scratch in less than an hour, may be capable of figuring out complex ways humans have not thought of to make measurements more accurate and get experiments colder. The AI system's ability to create condensates faster and in greater quantities under more conditions could also make it very useful in field measurements. "It's cheaper than taking a physicist everywhere with you," Hush said. "You could make a working device to measure gravity that you could take in the back of a car, and the artificial intelligence would recalibrate and fix itself no matter what." The researchers said that they plan to use the AI to build larger condensates faster than before. © 2016 Tech Times, All rights reserved. Do not reproduce without permission.
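
The closed-loop idea behind the result (run the experiment, score the outcome, keep the best ramp parameters) can be caricatured with a random-search learner; the quadratic score function and three-parameter ramp below are invented stand-ins for the real apparatus and its far more sophisticated online optimiser.

```python
import random

# Invented "experiment": scores a 3-parameter evaporation ramp; the hidden
# optimum is at (0.8, 0.5, 0.2), where the score is 0 (negative elsewhere).
def run_experiment(ramp):
    target = [0.8, 0.5, 0.2]
    return -sum((p - t) ** 2 for p, t in zip(ramp, target))

random.seed(1)
best_ramp, best_score = None, float("-inf")
for _ in range(200):  # 200 simulated experiment runs
    ramp = [random.uniform(0, 1) for _ in range(3)]
    score = run_experiment(ramp)
    if score > best_score:
        best_ramp, best_score = ramp, score

print([round(p, 2) for p in best_ramp])  # lands near the hidden optimum
```

The actual work used a far more sample-efficient machine learner, since each real BEC run is expensive.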

A pair of physicists with ETH Zurich has developed a way to use an artificial neural network to characterize the wave function of a quantum many-body system. In their paper published in the journal Science, Giuseppe Carleo and Matthias Troyer describe how they coaxed a neural network to simulate some aspects of a quantum many-body system. Michael Hush with the University of New South Wales offers a Perspectives piece on the work done by the pair in the same journal issue and also outlines the problems other researchers have faced when attempting to solve the same problem. One of the difficult challenges facing physicists today is coming up with a way to simulate quantum many-body systems, i.e., showing all the states that exist in a given system, such as a chunk of matter. Such systems grow complicated quickly: a group of just 100 quantum particles, for example, could have as many as 10^35 spin states. Even the most powerful modern computers very quickly become overwhelmed trying to depict such systems. In this new effort, the researchers took a different approach: instead of attempting to calculate every possible state, they used a neural network to generalize the entire system. The pair began by noting that the system used to defeat a Go world champion last year might be modified in a way that could simulate a many-body system. They created a simplified version of the same type of neural network and programmed it to simulate the wave function of a multi-body system (by using a set of weights and just one layer of hidden biases). They then followed up by getting the neural network to figure out the ground state of a system. To see how well their system worked, they ran comparisons with problems that have already been solved and report that their system was better than those that rely on a brute-force approach.
The system was a proof-of-concept rather than an actual tool for use by physicists, but it demonstrates what is possible: larger efforts, as Hush notes, that involve more hidden biases and weights could result in a tool with groundbreaking applications. More information: Solving the quantum many-body problem with artificial neural networks, Science, 10 Feb 2017, vol. 355, issue 6325, pp. 602-606. Abstract: The challenge posed by the many-body problem in quantum physics originates from the difficulty of describing the nontrivial correlations encoded in the exponential complexity of the many-body wave function. Here we demonstrate that systematic machine learning of the wave function can reduce this complexity to a tractable computational form for some notable cases of physical interest. We introduce a variational representation of quantum states based on artificial neural networks with a variable number of hidden neurons. A reinforcement-learning scheme we demonstrate is capable of both finding the ground state and describing the unitary time evolution of complex interacting quantum systems. Our approach achieves high accuracy in describing prototypical interacting spin models in one and two dimensions.
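
A toy version of the neural-network representation described in the abstract (a restricted Boltzmann machine with visible biases, hidden biases and weights) can be written down directly; the random weights below are illustration values, not a trained quantum state.

```python
import numpy as np

# RBM-style ansatz for 4 spins s_i = ±1 with 2 hidden units:
# psi(s) = exp(a·s) * prod_j 2*cosh(b_j + (s·W)_j)
rng = np.random.default_rng(0)
n_visible, n_hidden = 4, 2
a = rng.normal(scale=0.1, size=n_visible)              # visible biases
b = rng.normal(scale=0.1, size=n_hidden)               # hidden biases
W = rng.normal(scale=0.1, size=(n_visible, n_hidden))  # couplings

def amplitude(s):
    s = np.asarray(s, dtype=float)
    return np.exp(a @ s) * np.prod(2 * np.cosh(b + s @ W))

# Normalise |psi|^2 over all 2^4 spin configurations (only feasible for tiny
# systems; the point of the method is to avoid this exhaustive enumeration).
configs = [[int(bit) * 2 - 1 for bit in f"{k:04b}"] for k in range(16)]
probs = np.array([abs(amplitude(s)) ** 2 for s in configs])
probs /= probs.sum()
print(probs.argmax())  # most probable spin configuration under these weights
```

Training this representation (variational Monte Carlo on the energy) is what the paper adds on top.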

The International Association of HealthCare Professionals is pleased to welcome Dr. Yiotoula Sotiropoulos, MBBS, to their prestigious organization with an upcoming publication in The Leading Physicians of the World. Dr. Sotiropoulos holds 30 years of experience as a family practitioner, with areas of expertise in pediatrics, as well as family and general practice. Dr. Sotiropoulos is currently serving patients at her private practice, Yiotoula Sotiropoulos Family Practice, in Bexley, Australia. She also works as a supervisor at the University of Sydney and the University of New South Wales. Dr. Sotiropoulos was educated at the University of New South Wales in Sydney, Australia, graduating in 1984. To keep up to date with the latest advances in her field, Dr. Sotiropoulos maintains professional memberships with the American Medical Association and The Royal Australian College of General Practitioners. Dr. Sotiropoulos attributes her great success to being an astute listener and keen observer. She believes that much of what a patient expresses may be communicated indirectly, and she reads between the lines to achieve a more accurate diagnosis and treatment program. View Dr. Yiotoula Sotiropoulos’ Profile Here: Learn more about Dr. Yiotoula Sotiropoulos by reading her upcoming publication in The Leading Physicians of the World. FindaTopDoc is a hub for all things medicine, featuring detailed descriptions of medical professionals across all areas of expertise, and information on thousands of healthcare topics. Each month, millions of patients use FindaTopDoc to find a doctor nearby and instantly book an appointment online or create a review. FindaTopDoc features each doctor’s full professional biography highlighting their achievements, experience, patient reviews and areas of expertise. A leading provider of valuable health information that helps empower patient and doctor alike, FindaTopDoc enables readers to live a happier and healthier life.
For more information about FindaTopDoc, visit

News Article | March 7, 2016

Tourists from Italy walk in the Mesquite Dunes in Death Valley National Park, California in this June 29, 2013 file photo. REUTERS/Steve Marcus/Files OSLO (Reuters) - Extreme downpours are increasingly hitting both the wettest and driest regions of the world and global warming will raise the risks of bigger cloudbursts for the rest of the century, a study showed on Monday. "Extreme daily precipitation averaged over both dry and wet regimes shows robust increases" since 1950, a team of scientists led by Markus Donat of the University of New South Wales in Sydney wrote in the journal Nature Climate Change. Projected climate change until 2100 indicated a "continued intensification of daily precipitation extremes", according to the study, which examined trends in the driest 30 percent of the Earth's land area and the wettest 30 percent. Warm air can hold more moisture than cold. Past studies have suggested that wet areas will get wetter and dry areas drier overall with a warming trend, blamed by scientists on man-made emissions of greenhouse gases. But there have been big uncertainties about how the frequency of local extremes of rainfall may change, from tropical forests in the Amazon to deserts. Among recent downpours, Death Valley in California, the hottest and driest place in North America, has had spectacular flowers this spring after heavy rains last year. And in 2015, Chile's Atacama desert was carpeted in flowers after the heaviest rains in 20 years. Extreme rains also cause damaging floods. Severe rainfall is especially a threat in dry areas, the study said, because authorities have often invested little in flood defenses. The study's findings "do not tell us what will actually happen in any particular location, but rather how risks will change", said William Ingram, of Oxford University, who was not part of the study team.
In a commentary in Nature Climate Change he wrote that there were still big gaps in scientific understanding of rainfall and climate change. There are few rain gauges in the Sahara, for instance. Last December, 195 nations reached the Paris Agreement on climate change, aiming to shift from oil, coal and natural gas to low-emission renewable energies such as wind and solar power this century.

News Article | February 24, 2016

Two new and extinct kangaroo species were named recently. Scientists said these modern-day kangaroo ancestors didn't hop but instead moved on all fours. These extinct kangaroo species, Cookeroo bulwidarri and Cookeroo hortusensis, lived between 18 and 23 million years ago. The researchers named the new genus Cookeroo in honor of Dr. Bernard Cooke, a Queensland Museum researcher who spearheaded a research program on the evolution of Riversleigh's ancient kangaroos. In the Aboriginal Waanyi language, bulwidarri means "white." Cookeroo bulwidarri was named after the Riversleigh White Hunter Site where it was discovered. In Latin, hortusensis means "belonging to the garden"; Cookeroo hortusensis is named after the Neville's Garden Site. Dr. Kenny Travouillon, the Western Australian Museum's mammal curator, became interested in an unidentified skull at the Queensland Museum. The skull, along with other fossils, was unearthed in north-west Queensland at the Riversleigh World Heritage Area during the past 30 years. Finding no scientific papers describing it, Travouillon realized the skull belonged to a new kangaroo species. Kaylene Butler, one of Travouillon's Ph.D. students at the University of Queensland (UQ), analyzed and named the new species. During her analysis, she discovered another unnamed kangaroo species among the University of New South Wales' collections. Butler said the new species walked on all four legs and scurried across a thick, forested habitat, far from the dry outback of modern Queensland. Travouillon said that the new, extinct kangaroo species looked similar to modern-day ones; however, they were smaller, about the size of a wallaby. But unlike modern-day kangaroos, these extinct ancestors didn't hop; instead, they walked on all four legs. An analysis of the leg bones found no evidence that could support hopping.
"Hopping actually occurred much later in the evolution," said Travouillon, who is a former Robert Day Fellow at UQ. Gilbert Price from UQ and Riversleigh researchers Professor Suzanne Hand and Professor Michael Archer from the University of New South Wales were part of the research team. The discovery was published in the Journal of Vertebrate Paleontology on Feb. 17.

Cazorla C.,University of New South Wales | Errandonea D.,University of Valencia
Physical Review Letters | Year: 2014

We present a combined experimental and computational first-principles study of the superionic and structural properties of CaF2 at high P-T conditions. We observe an anomalous superionic behavior in the low-P fluorite phase that consists of a decrease of the normal → superionic critical temperature with compression. This unexpected effect can be explained in terms of a P-induced softening of a zone-boundary X phonon that involves exclusively fluorine displacements. Also we find that superionic conductivity is absent in the high-P cotunnite phase. Instead, superionicity develops in a new low-symmetry high-T phase that we identify as monoclinic (space group P21/c). We discuss the possibility of observing these intriguing phenomena in related isomorphic materials. © 2014 American Physical Society.

Kavallaris M.,Childrens Cancer Institute | Kavallaris M.,University of New South Wales
Nature Reviews Cancer | Year: 2010

Microtubules are dynamic structures composed of α/β-tubulin heterodimers that are essential in cell division and are important targets for cancer drugs. Mutations in β-tubulin that affect microtubule polymer mass and/or drug binding are associated with resistance to tubulin-binding agents such as paclitaxel. The aberrant expression of specific β-tubulin isotypes, in particular βIII-tubulin, or of microtubule-regulating proteins is important clinically in tumour aggressiveness and resistance to chemotherapy. In addition, changes in actin regulation can also mediate resistance to tubulin-binding agents. Understanding the molecular mechanisms that mediate resistance to tubulin-binding agents will be vital to improve the efficacy of these agents. © 2010 Macmillan Publishers Limited. All rights reserved.

Perkins S.E.,University of New South Wales | Fischer E.M.,ETH Zurich
Geophysical Research Letters | Year: 2013

The evaluation of a climate model's capability to simulate trends in extreme events is not straightforward. This is due to the role of internal climate variability, the simulated phases of which are unique to each individual model realization. We undertake an assessment of the 21-member Community Earth System Model (CESM) ensemble on the basis of its ability to simulate heat wave day frequency over Australia. We employ the extreme heat factor definition to measure heat waves and study events for all summers (November-March) between 1950 and 2005. The spatial pattern, magnitude, and significance of trends in CESM were found to be reasonable, since the corresponding observed trends were within the CESM ensemble range. There is a suggestion that the model produces higher interannual variability than what is observed. The trends between realizations of the same model differ strongly, which suggests that internal climate variability can strongly amplify or mask local trends in extreme events. Key Points: The CESM model has a sound capability in simulating trends in heat wave days. A small model ensemble may not represent a model's ability to simulate extremes. The forced signal in CESM is toward increasing heat wave days over Australia. ©2013. American Geophysical Union. All Rights Reserved.
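
The extreme heat factor mentioned in the abstract has a commonly used formulation combining a significance index (against a long-term 95th percentile) with an acclimatisation index (against the preceding 30 days); the temperatures and threshold in the sketch below are invented.

```python
import numpy as np

def ehf(temps, t95):
    """Extreme heat factor for the most recent 3-day window.

    temps: daily mean temperatures ending today (at least 33 days).
    t95:   long-term 95th percentile of daily mean temperature.
    """
    three_day = np.mean(temps[-3:])       # most recent 3-day mean
    accl = np.mean(temps[-33:-3])         # preceding 30-day mean
    ehi_sig = three_day - t95             # unusually hot vs climatology?
    ehi_accl = three_day - accl           # unusually hot vs recent weather?
    return ehi_sig * max(1.0, ehi_accl)   # positive => heat wave conditions

# 30 mild days then a 3-day spike, against an invented 30 °C threshold.
temps = np.concatenate([np.full(30, 24.0), [33.0, 34.0, 35.0]])
print(ehf(temps, t95=30.0))  # → 40.0
```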

Sherwood S.C.,University of New South Wales | Bony S.,University Pierre and Marie Curie | Dufresne J.-L.,University Pierre and Marie Curie
Nature | Year: 2014

Equilibrium climate sensitivity refers to the ultimate change in global mean temperature in response to a change in external forcing. Despite decades of research attempting to narrow uncertainties, equilibrium climate sensitivity estimates from climate models still span roughly 1.5 to 5 degrees Celsius for a doubling of atmospheric carbon dioxide concentration, precluding accurate projections of future climate. The spread arises largely from differences in the feedback from low clouds, for reasons not yet understood. Here we show that differences in the simulated strength of convective mixing between the lower and middle tropical troposphere explain about half of the variance in climate sensitivity estimated by 43 climate models. The apparent mechanism is that such mixing dehydrates the low-cloud layer at a rate that increases as the climate warms, and this rate of increase depends on the initial mixing strength, linking the mixing to cloud feedback. The mixing inferred from observations appears to be sufficiently strong to imply a climate sensitivity of more than 3 degrees for a doubling of carbon dioxide. This is significantly higher than the currently accepted lower bound of 1.5 degrees, thereby constraining model projections towards relatively severe future warming. © 2014 Macmillan Publishers Limited.

Agency: Cordis | Branch: FP7 | Program: ERC-AG | Phase: ERC-AG-SH2 | Award Amount: 2.17M | Year: 2012

What are the gender dynamics of institutional change? Changing institutions is a fundamental part of the task of lessening gender inequality and yet the gender dynamics of institutional change are still poorly understood. Feminist scholars have long been interested in how to achieve the social, economic and political changes that will lessen inequality. Huge changes in some women's social and economic status have occurred in many parts of the world in the last fifty years. Nonetheless multiple and intersecting unequal power relations as well as male domination remain commonplace in many institutional arenas including judicial and political systems - despite measures such as quotas and equality legislation. Improving our understanding of institutional change is therefore a key undertaking for feminist, if not all, social science as well as a public policy priority. Crucially this institutional analysis will provide an important meso level link between the (sometimes unhelpful) overarching analyses of macro structures such as patriarchy and the more micro-level analysis of the actions and strategies of individual actors and groups that have often predominated. Such an approach will allow scholars to develop better explanatory frameworks while at the same time maintaining historical and contextual specificity. The programme will open new research agendas that systematically investigate how institutional change is gendered, why some forms of change appear more successful than others and how and why informal institutions operate in gendered ways. The outcomes will be of use to both academics and practitioners who want to ensure that gender equity concerns can be more effectively embedded in institutions and processes of institutional design and reform.

News Article | October 6, 2016

It is often necessary to identify unfamiliar people by comparing face images: for example a CCTV image to a mugshot, or a passport photograph to a traveller. Now researchers have shown, in a new study published in Royal Society Open Science, that the accuracy of these decisions is impaired by poor sleep. However, the study also found that poor sleepers were just as confident in their decisions, highlighting possible implications for security and policing. The study, which was led by the University of New South Wales in collaboration with the University of Glasgow, set out to examine how sleep would affect the accuracy of facial identification. Participants were asked to decide whether two images, presented on a computer monitor at the same time, pictured the same person or two different people. The researchers set the task to differ from the face recognition tasks most of us encounter in our daily lives in two important ways: firstly, the people pictured in the images are unfamiliar; secondly, the task did not involve memory, because the images appear on the screen at the same time. The authors noted that while most people would typically expect to perform well on these tasks, many are surprised at how many errors they make. Previous studies have shown impaired memory for faces following restricted sleep. However, until now it was not known whether lack of sleep impairs performance on face identification tasks that do not rely on recognition memory. Louise Beattie, Ph.D., of the University of Glasgow's School of Psychology, said: "We found that poor sleep in the three days leading up to the test was associated with poorer performance on the face matching test. In a separate experiment, we also found that participants with insomnia were poorer on the task. "Sleep disruption is common in the general population, and especially so among night-shift workers.
Here we show for the first time that performance in a crucial "passport task" is affected by poor sleep, and our research has important implications for those working in security or forensic settings. "This adds to the literature showing poor sleep and shift work to be associated with a range of adverse health, cognitive and emotional effects." The authors noted that poor sleep was not only associated with poorer performance, but also with higher levels of confidence in errors. David White, from the University of New South Wales, added: "In modern society it is often necessary to identify unfamiliar people by comparing face images. In this study we show that the accuracy of these decisions is impaired in poor sleepers. "Worryingly, although poorer sleep was associated with reduced accuracy, poor sleepers were not less confident in their responses. This has important implications for security and policing, where shift work is common."

News Article | August 31, 2016

Scientists have found what they think is the oldest fossil on Earth, a remnant of life from 3.7 billion years ago when Earth’s skies were orange and its oceans green. In a newly melted part of Greenland, Australian scientists found the leftover structure from a community of microbes that lived on an ancient seafloor, according to a study in Wednesday’s journal Nature. The discovery shows life may have formed quicker and easier than once thought, about half a billion years after Earth formed. And that may also give hope for life forming elsewhere, such as Mars, said study co-author Martin VanKranendonk of the University of New South Wales and director of the Australian Center for Astrobiology. “It gives us an idea how our planet evolved and how life gained a foothold,” VanKranendonk said. Scientists had thought it would take at least half a billion years for life to form after the molten Earth started to cool a bit, but this shows it could have happened quicker, he said. That’s because the newly found fossil is far too complex to have developed soon after the planet’s first life forms, he said. In an outcrop of rocks that used to be covered with ice and snow, which melted after an exceptionally warm spring, the Australian team found stromatolites, intricately layered microscopic structures that are often produced by a community of microbes. The stromatolites were about 0.4 to 1.6 inches (1 to 4 centimeters) high. It “is like the house left behind made by the microbes,” VanKranendonk said. Scientists used the layers of ash from volcanoes and tiny zircon with uranium and lead to date this back 3.7 billion years ago, using a standard dating method, VanKranendonk said. “It would have been a very different world. It would have had black continents, a green ocean with orange skies,” he said. The land was likely black because the cooling lava had no plants, while large amounts of iron made the oceans green.
Because the atmosphere had very little oxygen and oxygen is what makes the sky blue, its predominant color would have been orange, he said. The dating seems about right, said Abigail Allwood , a NASA astrobiologist who found the previous oldest fossil, from 3.48 billion years ago, in Australia. But Allwood said she is not completely convinced that what VanKranendonk’s team found once was alive. She said the evidence wasn’t conclusive enough that it was life and not a geologic quirk. “It would be nice to have more evidence, but in these rocks that’s a lot to ask,” Allwood said in an email. Follow Seth Borenstein at and his work can be found at


News Article | October 27, 2016

Taj Ma-help. Pollution from open rubbish fires is turning India's iconic white marble monument brown, leading to calls for better waste management. Door-to-door waste collection in Agra, the city where the Taj Mahal is located, often bypasses poor neighbourhoods. As a result, many households burn food scraps, paper and other rubbish in the street. A recent study by Sachchida Tripathi at the Indian Institute of Technology Kanpur and his colleagues found that waste burning in Agra deposits 150 milligrams of fine pollution particles per square metre of the Taj Mahal annually. This may explain why other interventions to reduce discolouration have had little impact. In 1996, for example, India restricted vehicle access near the monument and took steps to reduce industrial emissions. Last year, the use of cow dung as a cooking fuel was banned. But still the marble continues to darken. The study found that the impact of open rubbish burning on the Taj Mahal is 12 times that of dung-cake burning, but didn't compare it with other pollution sources. Cleaners are using "mud packs", which draw out impurities, to remove the marble's brown stains. But the Archaeological Survey of India, the country's archaeological research body, recently warned that this could permanently alter the colour, texture and shine of the surface. Preventing staining in the first place is a better strategy, says Tripathi. Introducing low-emission household incinerators and organising regular waste pick-ups are possible solutions, he says. These interventions could also significantly improve public health. The study estimated that inhalation of fine particles from rubbish fires causes 713 premature deaths in Agra each year, based on the established link between air pollution and cardiovascular and respiratory disease.
“When you live in a developed country, you don’t think about air pollution too much because you look outside the window and it’s clear,” says Bin Jalaludin at the University of New South Wales in Australia. “But it’s increasingly becoming a problem in megacities in India and China where there is a constant haze, so governments are taking notice.”

News Article | December 15, 2016

VANCOUVER, BC--(Marketwired - December 15, 2016) - Skeena Resources Limited (TSX VENTURE: SKE) ("Skeena" or the "Company") is pleased to announce the appointment of Craig Parry to the Board of Directors of the Company as an independent director. "We are privileged to have Mr. Parry join the Board of Skeena. Mr. Parry has held leadership positions for a number of mining companies across a broad range of commodities. He brings a strong technical background and a proven business track record. His depth of experience in all aspects of junior mining includes exploration, development and mine finance. Mr. Parry will be an invaluable addition to Skeena's Board of Directors," commented Walter Coles Jr., Skeena's President & CEO. Mr. Parry is currently the President & CEO and co-founder of IsoEnergy Ltd. He is also a co-founder and director of NexGen Energy Ltd. and a Senior Advisor and a founding-shareholder of EMR Capital. Prior to that, Mr. Parry was a co-founder and Executive Director of the Tigers Realm Group of companies and is the former CEO of Tigers Realm Coal where he led the discovery and development of the Amaam North coking coal project. Mr. Parry began his career as a geologist with Rio Tinto. He holds an Honours Degree in Geology from the University of New South Wales and is a Member of the Australian Institute of Mining and Metallurgy. This appointment is subject to TSX Venture Exchange approval. Skeena Resources Limited is a junior Canadian mining exploration company focused on developing prospective base and precious metal properties in the Golden Triangle region of northwest British Columbia, Canada. The Company's primary activities are the evaluation and development of the Spectrum-GJ gold-copper project as well as exploration on the recently optioned past-producing Snip gold mine, acquired from Barrick Gold, and the past-producing Porter Idaho silver mine. 
Skeena's management includes a highly experienced team of mine-finders, including Ron Netolitzky, Chairman of the Board.

On behalf of the Board of Directors of Skeena Resources Limited,

Certain statements made and information contained herein may constitute "forward looking information" and "forward looking statements" within the meaning of applicable Canadian and United States securities legislation, including, among other things, information with respect to this presentation. These statements and information are based on facts currently available to the Company and there is no assurance that actual results will meet management's expectations. Forward-looking statements and information may be identified by such terms as "anticipates", "believes", "targets", "estimates", "plans", "expects", "may", "will", "could" or "would". Forward-looking statements and information contained herein are based on certain factors and assumptions regarding, among other things, the estimation of Mineral Resources and Reserves, the realization of Mineral Resource and Reserve estimates, metal prices, taxation, the estimation, timing and amount of future exploration and development, capital and operating costs, the availability of financing, the receipt of regulatory approvals, environmental risks, title disputes and other matters. While the Company considers its assumptions to be reasonable as of the date hereof, forward-looking statements and information are not guarantees of future performance and readers should not place undue importance on such statements as actual events and results may differ materially from those described herein. The Company does not undertake to update any forward-looking statements or information except as may be required by applicable securities laws. Neither TSX Venture Exchange nor the Investment Industry Regulatory Organization of Canada accepts responsibility for the adequacy or accuracy of this release.

News Article | December 21, 2016

Sometimes the solution to a problem is staring you in the face all along. Chip maker Intel is betting that will be true in the race to build quantum computers—machines that should offer immense processing power by exploiting the oddities of quantum mechanics. Competitors IBM, Microsoft, and Google are all developing quantum components that are different from the ones crunching data in today’s computers. But Intel is trying to adapt the workhorse of existing computers, the silicon transistor, for the task. Intel has a team of quantum hardware engineers in Portland, Oregon, who collaborate with researchers in the Netherlands, at TU Delft’s QuTech quantum research institute, under a $50 million grant established last year. Earlier this month Intel’s group reported that they can now layer the ultra-pure silicon needed for a quantum computer onto the standard wafers used in chip factories. This strategy makes Intel an outlier among industry and academic groups working on qubits, as the basic components needed for quantum computers are known. Other companies can run code on prototype chips with several qubits made from superconducting circuits (see “Google’s Quantum Dream Machine”). No one has yet advanced silicon qubits that far. A quantum computer would need to have thousands or millions of qubits to be broadly useful, though. And Jim Clarke, who leads Intel’s project as director of quantum hardware, argues that silicon qubits are more likely to get to that point (although Intel is also doing some research on superconducting qubits). One thing in silicon’s favor, he says: the expertise and equipment used to make conventional chips with billions of identical transistors should allow work on perfecting and scaling up silicon qubits to progress quickly. Intel’s silicon qubits represent data in a quantum property called the “spin” of a single electron trapped inside a modified version of the transistors in its existing commercial chips. 
“The hope is that if we make the best transistors, then with a few material and design changes we can make the best qubits,” says Clarke. Another reason to work on silicon qubits is that they should be more reliable than the superconducting equivalents. Still, all qubits are error prone because they work on data using very weak quantum effects (see “Google Researchers Make Quantum Components More Reliable”). The new process that helps Intel experiment with silicon qubits on standard chip wafers, developed with the materials companies Urenco and Air Liquide, should help speed up its research, says Andrew Dzurak, who works on silicon qubits at the University of New South Wales in Australia. “To get to hundreds of thousands of qubits, we will need incredible engineering reliability, and that is the hallmark of the semiconductor industry,” he says. Companies developing superconducting qubits also make them using existing chip fabrication methods. But the resulting devices are larger than transistors, and there is no template for how to manufacture and package them up in large numbers, says Dzurak. Chad Rigetti, founder and CEO of Rigetti Computing, a startup working on superconducting qubits similar to those Google and IBM are developing, agrees that this presents a challenge. But he argues that his chosen technology’s head start will afford ample time and resources to tackle the problem. Google and Rigetti have both said that in just a few years they could build a quantum chip with tens or hundreds of qubits that dramatically outperforms conventional computers on certain problems, even doing useful work on problems in chemistry or machine learning.
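Whatever the physical substrate, electron spin in silicon or a superconducting circuit, a single qubit is abstractly a two-level quantum system. The following minimal sketch is purely illustrative (it reflects the textbook abstraction, not Intel's hardware or software): a qubit state is a pair of complex amplitudes, and a gate rotates it into superposition.

```python
import math

# A qubit's state -- e.g. the spin of a trapped electron -- is a pair of
# complex amplitudes (alpha, beta) with |alpha|^2 + |beta|^2 = 1.
# |alpha|^2 is the probability of measuring outcome 0 (spin down),
# |beta|^2 the probability of outcome 1 (spin up).

def hadamard(state):
    """Apply a Hadamard gate, taking a basis state into an equal superposition."""
    alpha, beta = state
    s = 1 / math.sqrt(2)
    return (s * (alpha + beta), s * (alpha - beta))

def probabilities(state):
    """Measurement probabilities for outcomes 0 and 1."""
    alpha, beta = state
    return (abs(alpha) ** 2, abs(beta) ** 2)

# Start in the definite state |0> (spin down), then create a superposition.
qubit = (1 + 0j, 0 + 0j)
qubit = hadamard(qubit)
p0, p1 = probabilities(qubit)
print(round(p0, 3), round(p1, 3))  # 0.5 0.5 -- equal chance of either outcome
```

The fragility the article mentions lives in these amplitudes: any stray interaction with the environment perturbs them before measurement, which is why error rates dominate the engineering discussion.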

News Article | December 5, 2016

On Monday, OpenAI, a nonprofit AI research organization backed by Elon Musk, announced Universe, a software platform for "benchmarking the general intelligence of AI agents across the world's supply of games, websites and other applications," the organization said. One of the main technical challenges facing AI research is that "AI agents require access to a large variety of realistic environments where they can learn progressively more complex skills, and where we can accurately benchmark their progress," according to an OpenAI blog post. Currently, it said that the biggest and most popular assemblage of "reinforcement learning environments" is the Arcade Learning Environment, which is a collection of 55 Atari games. "Over the last few years, this benchmark has led to significant progress in developing more effective and robust reinforcement learning algorithms that allow agents to learn from experience," OpenAI said. With Universe, OpenAI said it wants to "measure and accelerate this progress." Right out of the box, according to OpenAI, the software contains thousands of games (such as several Flash-based ones and popular games like StarCraft), along with form-filling browser-based tasks and other applications. "Our eventual goal is to develop a single AI agent that can flexibly apply its past experience on Universe environments to quickly master new ones, which would be a major step towards general intelligence," OpenAI said. The organization noted that Universe is already available for research, and it is requesting assistance to scale the "environment ecosystem." OpenAI added that there are a number of ways other businesses and organizations can help get Universe off the ground. Among them, it is seeking companies that are willing to grant it permission to use their games, programs, websites or apps. "If your program would yield good training tasks for an AI then we'd love your permission to include it in Universe," it said in the post.
OpenAI also said that the "ideal environment candidates have an on-screen number which can be parsed as a reward, such as a computer game score, or a clear objective we can detect with a program and then send to an agent." The San Francisco-based nonprofit, backed by Silicon Valley heavyweights including Musk, Peter Thiel and Reid Hoffman, announced its launch in December 2015, with the goal to "advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return." Since then, OpenAI has made significant alliances. Last month, for instance, the nonprofit partnered with Microsoft and said it would make Microsoft Azure its primary cloud platform. As previously reported by TechRepublic, the chief reason OpenAI linked up with Microsoft was its "focus on deep learning, open source technologies, and the capabilities available in tools such as Azure Batch, Azure Machine Learning, and the Microsoft Cognitive Toolkit." With the announcement of Universe, OpenAI said the AI community can only be sure of progress by measuring performance. When it comes to developing AI agents that display general intelligence, OpenAI said its "goal is to build a single agent that can rapidly learn to perform well on a large variety of tasks." Universe will allow the organization to measure this type of performance—referred to as an "AI Score"—and to make real progress towards such a goal. OpenAI added that this task "may seem daunting right now," but the project is drawing "inspiration from the historical precedent set by the ImageNet dataset in the Computer Vision community." In keeping with its open-source ethos, OpenAI also said it is "actively seeking strong researchers, engineers, and anyone in between" to assist with Universe. AI experts widely praised the move. Fabio Cardenas, CEO of Sundown AI, called it a "clever idea."
"They are providing training sets for new AI platforms and benchmarking results," he said. "These are the sorts of efforts that will generate smarter systems over time." And the strategy has been proven effective already. "The Atari game suite has been a valuable benchmarking tool for AI researchers," said Marie desJardins, professor of computer science at the University of Maryland, Baltimore County, and former chair of AAAI (the National Conference of the Association for the Advancement of Artificial Intelligence). "The planned release of Universe, with a much broader and deeper collection of games and other application domains, will provide rich opportunities for researchers to evaluate and compare their systems in a wide variety of domains," desJardins said. Toby Walsh, professor of AI at the University of New South Wales, also said "the goal of developing an ImageNet for RL is a great one," and that OpenAI should be "congratulated." Still, Walsh believes this move is only "a partial step" towards that goal, and "an even smaller step to general AI." "There's much more to AI than playing artificial games," said Walsh. "There's little scope here for learning from limited data—games have proved popular as you can generate essentially unlimited amounts of training data and deep learning needs this." But real humans, said Walsh, are forced to learn from more limited data. "We can't afford to make many errors," he said. "There are many other issues that this data set will not greatly help—commonsense reasoning, planning, natural language processing. And then there's the whole topic of unsupervised learning." "Many of the goals we need to address to get to general AI will not be solved," said Walsh, "even with this software platform."
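Universe itself drives real programs over a remote-desktop connection, but the agent-environment contract it builds on is the familiar Gym-style reset/step loop: the environment returns an observation, the agent returns an action, and a score-like number serves as the reward. The toy environment and binary-search "agent" below are invented for illustration; they are not Universe code, only a self-contained sketch of that protocol.

```python
import random

# A toy number-guessing "game" exposing the standard RL interface:
# reset() -> initial observation; step(action) -> (obs, reward, done, info).
# The reward plays the role of the on-screen score Universe parses.

class GuessEnv:
    def __init__(self, low=0, high=9, max_steps=20, seed=None):
        self.low, self.high, self.max_steps = low, high, max_steps
        self.rng = random.Random(seed)

    def reset(self):
        self.target = self.rng.randint(self.low, self.high)
        self.steps = 0
        return 0  # initial observation: no feedback yet

    def step(self, action):
        self.steps += 1
        done = action == self.target or self.steps >= self.max_steps
        reward = 1.0 if action == self.target else 0.0
        # Observation: guess higher (+1), lower (-1), or correct (0).
        obs = 0 if action == self.target else (1 if action < self.target else -1)
        return obs, reward, done, {}

# A trivial agent interacting through the same loop any benchmark agent uses.
env = GuessEnv(seed=42)
obs, total_reward, done = env.reset(), 0.0, False
lo, hi = env.low, env.high
while not done:
    action = (lo + hi) // 2  # binary-search policy
    obs, reward, done, _ = env.step(action)
    if obs == 1:
        lo = action + 1
    elif obs == -1:
        hi = action - 1
    total_reward += reward
print(total_reward)  # 1.0 -- the agent always finds the target within 4 guesses
```

Because every environment speaks this one interface, the same agent code can be pointed at 55 Atari games or thousands of Universe tasks, which is exactly what makes cross-task benchmarks like an "AI Score" possible.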

News Article | November 29, 2016

Leaders in Malaysia, Indonesia and the Philippines take a pragmatic, work-with-me-please view toward China in easing Asia’s widest reaching sovereignty dispute. They all say the South China Sea, or some fragment of it, is theirs and their claims overlap. China has expanded quickly since 2010 to assert control over almost the entire 3.5 million-square-km (1.4 million-square-mile) sea, putting other countries on the defensive. But one other country is doing the same thing and China can neither ignore it nor neutralize it through economic and trade deals. (That’s what happens to other countries.) It’s almost China’s double. It’s Vietnam. Vietnam is smaller than China and its military, ranked 17th by the database, lags China’s at No. 3. But otherwise the two are chasing maritime control in the same ways. Here are five: 1. Both make historic claims. Beijing cites maps and documents going back to the Han Dynasty 2,200 years ago to substantiate its claim to the South China Sea. Vietnamese people were using the Spratly Islands, the sea’s biggest group of tiny land features, as long as 1,000 years ago, their story goes. Hanoi has also cited an 1887 Franco-Chinese Treaty as a basis for allocating claims, though U.N. law questions the concept behind it, according to one scholar. 2. China and Vietnam assert more than the customary continental shelf. Vietnam’s claim reaches past a conventional, internationally exclusive economic zone 200-nautical mile (370-km) from its Indochinese coastline into the Spratly archipelago, among other places. Its military units occupy the group's largest feature, Spratly Island. Claimants Brunei, Malaysia and the Philippines normally operate within 200 nautical miles of their shores. China claims nearly the whole resource-rich sea. 3. Both countries are reclaiming land for military use. China has landfilled about 3,200 acres (1,294 hectares) of land to beef up tiny, partly submerged islets. 
It's got surface-to-air missiles on Woody Island in the Paracel chain, to cite one example. Vietnam has landfilled 27 islets, more than any other claimant. It is investing now in the extension of Spratly Island's runway from 2,500 to 3,300 feet, ideal for landing air force maritime surveillance aircraft, and building hangars, the U.S. think tank Center for Strategic and International Studies says. "Vietnam is the only other country there that has overlapping claims with China," says Carl Thayer, emeritus professor of politics at the University of New South Wales in Australia. "They both claim everything essentially. It's also the only other country to engage in such a robust defense modernization." 4. Nationalist crowds at home. The Communist governments of China and Vietnam must please home audiences with a keen sense of nationalism, including a deep historic resentment in Vietnam against the Chinese. The two sides talk of cooperation to avoid a nationalistic spillover such as the anti-China riots in Vietnam in 2014. But each uses its state-controlled mass media to remind an eager public of its claims. Local media cover Vietnamese island reclamation work to stoke excitement, for example, says Mekong Economics chief economist Adam McCarty. 5. Neither side is afraid of a fight. Scores died in 1974, and again in 1988, when Chinese and Vietnamese vessels clashed in the South China Sea. Most of the dead were Vietnamese sailors. In 2014, vessels from both sides rammed each other after Beijing let the Chinese offshore oil driller position a rig in the disputed Gulf of Tonkin. Now Vietnam is preparing for any new fights by accepting maritime patrol boats from Japan and military aid of $18 million from the United States.

News Article | February 22, 2017

A new paper shows that the ancient ancestors of termites found in northern Australia crossed vast distances over oceans, and then followed an evolutionary path similar to humans, migrating from tree-tops to the ground. Mounds sometimes reaching as high as eight metres and housing millions of individual insects are seen in the Northern Territory, Western Australia and far north Queensland, built by cathedral termites. Relative to the animals' 3mm height and the average human height, the termite mounds are the equivalent of four of the world's tallest structure, the Burj Khalifa skyscraper in Dubai, stacked on top of each other. Little was known about the termites' origins until this research, said Associate Professor Nathan Lo, the co-lead author of the paper from the University of Sydney. DNA sequencing showed that today's cathedral termites descend from the first "nasute termites" to arrive in Australia up to 20m years ago from Asia or South America. "It's a strange result but we're very confident about it," said Lo. "The closest relatives of these mound-building termites in Australia are actually tree-nesting termites that live in Asia and South America." He believed that termites arrived in Australia after crossing long distances of ocean on plant matter following tsunamis or large storm events. The researchers found this colonisation had happened three times in the past 20m years. These first settlers also lived in trees in coastal areas but over time began to build mounds on the ground and feed on litter and grass as they adapted to the arid conditions of northern Australia. Lo said the termites' relocation was driven by change in climate and environment after Australia shifted from a forest-covered continent 20m years ago to a much drier landscape. As the forests succumbed to Australia's dry conditions about seven to 10m years ago, the tree-dwelling termites sought more moisture in the earth, he said.
“That’s why they started to build mounds.” Humans would later follow a similar evolutionary path, said Lo, with our ancestors living in trees as recently as the last 4m years. “These amazing mounds we see in the north of Australia, we didn’t know if they were 100m years old, 50m years old. Now we know it’s more likely that within the last 10m years that they’ve popped up. “They weren’t here when Australia separated from Gondwana some 100m years ago – they evolved here relatively recently due to ancient climate change.” The study, published on Wednesday in the Royal Society journal Biology Letters, was led by the University of Sydney in collaboration with Purdue University in the United States, the CSIRO National Research Collections Australia, the University of Western Australia and the University of New South Wales.

News Article | February 22, 2017

They build among the tallest non-human structures (proportionately speaking) in the world and now it's been discovered the termites that live in Australia's remote Top End originated from overseas - rafting vast distances and migrating from tree-tops to the ground, as humans later did. Referred to as "cathedral" termites, Nasutitermes triodiae build huge mounds up to eight metres high in the Northern Territory, Western Australia and Queensland - representing some of the tallest non-human animal structures in the world. DNA sequencing found their forebears, called nasute termites, colonised Australia three times in the past 20 million years or so and evolved from wood- to grass-feeding as they adapted to significant environmental changes, including increasingly arid conditions and the conversion of woodlands to grassland habitats in subtropical savannahs and central Australia. Now a prominent feature of the arid landscape "Down Under", the mounds house millions of termites; this study is the first comprehensive investigation of the evolution of nesting and feeding in this extended family of termites, through their Australian descendants. The findings of the international research are published today in the Royal Society journal Biology Letters. Co-lead author of the paper from the University of Sydney, Associate Professor Nathan Lo, said although much was known about the functions of termite mounds - which include protection from predators - little had been known about their evolutionary origins. "We found that the ancestors of Australia's fortress-building termites were coastal tree-dwellers, which arrived in Australia by rafting long distances over the oceans from either Asia or South America," Associate Professor Lo said.
"Once in Australia, they continued to build their nests in trees, but later descended and began building mounds on the ground instead, paralleling the evolution of the other great architects of the world - human beings, whose ancestors lived in the tree tops some millions of years ago." Associate Professor Lo, from the University of Sydney's School of Life and Environmental Sciences, said the mounds are an engineering feat when compared with the tallest structure on Earth - Dubai's skyscraper, the Burj Khalifa. "Given that a worker termite stands about 3mm in height, these mounds are in human terms the equivalent of four Burj Khalifas stacked on top of each other," he said. The paper, "Parallel evolution of mound-building and grass-feeding in Australian nasute termites," said ancestral wood feeders would likely have lost the ability to feed on wood as they transitioned to feeding on litter and grass. "This group is one of the most ecologically successful groups of termites in Australia," the paper reads. "We have shown that its capacity to disperse over oceans - and to repeatedly evolve the ability to build mounds and feed on novel substrates in the face of significant environmental change - appears to have been important in promoting this success." The research was led by the University of Sydney in collaboration with Purdue University in the United States, the CSIRO National Research Collections Australia, the University of Western Australia and the University of New South Wales.
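The "four Burj Khalifas" comparison is simple proportional scaling, and the exact multiple depends on the heights assumed. Taking a roughly 6 m mound (they range up to 8 m), a 3 mm worker and a 1.7 m human, all assumed figures for this back-of-envelope check, it does come out near four:

```python
# Back-of-envelope check of the mound-to-skyscraper scaling claim.
# Mound and human heights below are assumptions; only the 3 mm worker
# and the 828 m Burj Khalifa come from commonly cited figures.
mound_height = 6.0      # metres (assumed; mounds reach up to 8 m)
termite_height = 0.003  # metres (~3 mm worker)
human_height = 1.7      # metres (assumed average)
burj_khalifa = 828.0    # metres

ratio = mound_height / termite_height  # mound measured in termite body-lengths
human_scaled = ratio * human_height    # the same ratio at human scale
multiple = human_scaled / burj_khalifa
print(round(multiple, 1))  # 4.1 -- roughly four Burj Khalifas
```

With the full 8 m mound the multiple rises above five, so "four" is best read as the conservative end of the range.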

Heat waves by mid-century may be 3 to 5 degrees warmer than previously estimated because of the way plants react to increased carbon dioxide levels, with particularly intense heat expected across Northern Eurasia, according to a study by Australian researchers. Rising greenhouse gas concentrations are already expected to make heat waves longer, more intense and more frequent. But as carbon dioxide levels climb, the tiny pores on plant leaves, called stomata, do not need to open as wide to take in carbon dioxide, and so release less water vapor. "There's less water vapor being lost so you have a net warming effect," said Dr. Jatin Kala of Murdoch University. With plants releasing less water vapor, heat waves become more intense. The study drew on measurements of 314 plant species from 56 field sites and applied a newly implemented stomatal scheme derived from optimal stomatal theory. Previous climate models simply had plants trading water for carbon dioxide at a single fixed rate, without considering differences between plant types; as a result, they overestimated how much water is lost to the atmosphere in some regions. "We often underestimate the role of vegetation in extreme temperature events as it has not been included in enough detail in climate models up until this point," Dr. Kala added. The largest temperature changes are expected over tundra, needleleaf forests and agricultural land where crops are grown. The study is the first to use detailed field observations within a climate model to differentiate plants' water-use strategies.
"These world-first results will have significant impact on the development of climate models around the world," said Professor Andy Pitman, one of the authors of the study and director of the Australian Research Council's Centre of Excellence for Climate System Science at the University of New South Wales. Professor Belinda Medlyn, co-author and theoretical biologist at Western Sydney University, added that the study was originally intended only to learn how plants work; the team was not expecting results with such important implications for heat waves. The study was published in the journal Scientific Reports.
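Optimal stomatal theory is commonly expressed through the unified model of Medlyn and colleagues, in which stomatal conductance falls as atmospheric CO2 rises, so less water vapor is released. The sketch below uses that general form with illustrative parameter values (not the study's fitted values), and holds photosynthesis fixed for simplicity:

```python
import math

# Sketch of the Medlyn et al. (2011) unified optimal stomatal model:
#   g_s = g0 + 1.6 * (1 + g1 / sqrt(D)) * A / Ca
# Parameter values here are illustrative assumptions only.
def stomatal_conductance(A, Ca, D, g0=0.01, g1=4.0):
    """
    Stomatal conductance to water vapour (mol m-2 s-1).
    A  : net photosynthesis (umol m-2 s-1)
    Ca : atmospheric CO2 concentration (umol mol-1)
    D  : vapour pressure deficit (kPa)
    g1 : fitted slope -- the parameter that differs between plant types,
         which is what the new scheme lets climate models represent
    """
    return g0 + 1.6 * (1.0 + g1 / math.sqrt(D)) * A / Ca

# Higher CO2 -> lower conductance -> less transpirational cooling,
# the mechanism behind the amplified heat waves described above.
# (Photosynthesis A is held fixed here purely for illustration.)
g_now = stomatal_conductance(A=15.0, Ca=400.0, D=1.5)
g_future = stomatal_conductance(A=15.0, Ca=600.0, D=1.5)
print(g_future < g_now)  # True
```

Because g1 is plant-type specific, tundra shrubs, needleleaf trees and crops each respond differently, which is why those regions show the largest modelled temperature changes.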

Pasquier E.,University of New South Wales | Kavallaris M.,University of New South Wales | Andre N.,Service d'Oncologie Pédiatrique
Nature Reviews Clinical Oncology | Year: 2010

Tumor angiogenesis is recognized as a major therapeutic target in the fight against cancer. The key involvement of angiogenesis in tumor growth and metastasis has started to redefine chemotherapy and new protocols have emerged. Metronomic chemotherapy, which is intended to prevent tumor angiogenesis, is based on more frequent and low-dose drug administrations compared with conventional chemotherapy. The potential of metronomic chemotherapy was revealed in animal models a decade ago and the efficacy of this approach has been confirmed in the clinic. In the past 5 years, multiple clinical trials have investigated the safety and efficacy of metronomic chemotherapy in a variety of human cancers. While the results have been variable, clinical studies have shown that these new treatment protocols represent an interesting alternative for either primary systemic therapy or maintenance therapy. We review the latest clinical trials of metronomic chemotherapy in adult and pediatric cancer patients. Accumulating evidence suggests that the efficacy of such treatment may not only rely on anti-angiogenic activity. Potential new mechanisms of action, such as restoration of anticancer immune response and induction of tumor dormancy are discussed. Finally, we highlight the research efforts that need to be made to facilitate the optimal development of metronomic chemotherapy. © 2010 Macmillan Publishers Limited.

Mobbs R.J.,University of New South Wales | Raley D.A.,Prince of Wales Hospital
Journal of Spinal Disorders and Techniques | Year: 2014

Study design: Retrospective study. Objective: To determine the risk profile and complications associated with anterior vertebral body breach by Kirschner (K)-wire during percutaneous pedicle screw insertion. Summary of Background Data: Percutaneous techniques and indications are rapidly expanding, with numerous studies now supporting the use of percutaneous pedicle screw stabilization as an adjunct for multiple pathologies such as degenerative disease, tumor, and trauma. With regard to complication rates, little has been documented. Materials and Methods: A total of 525 consecutive percutaneous pedicle screws were retrospectively reviewed and the rate of anterior vertebral body breach was recorded, including any potential adverse clinical outcomes. Results: Of 525 percutaneous pedicle screw insertions, there were 7 anterior breaches recorded. We rated the breaches as minor (<5 mm; n = 3), moderate (5-25 mm; n = 2), and major (>25 mm; n = 2). Two patients had a postoperative ileus with a retroperitoneal hematoma on postoperative computed tomography scan. No patient required reoperation or blood transfusion. Conclusions: The indications for minimally invasive spinal fusion have expanded to include conditions such as degenerative disease, trauma, deformity, infection, and neoplasia. Although the rate of anterior K-wire breach is low, the technique requires the acquisition of a new set of skills, including the safe passage of a K-wire and knowledge of potential complications that may ensue. Copyright © 2012 by Lippincott Williams & Wilkins.
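The counts reported above can be turned into rates directly; a trivial check using only the figures stated in the abstract:

```python
# Figures reported in the study: 525 screws, 7 anterior breaches.
breaches = {"minor (<5 mm)": 3, "moderate (5-25 mm)": 2, "major (>25 mm)": 2}
total_screws = 525

total_breaches = sum(breaches.values())             # 7
overall_rate = 100 * total_breaches / total_screws  # about 1.33%

for label, n in breaches.items():
    print(f"{label}: {100 * n / total_screws:.2f}% of insertions")
print(f"overall anterior breach rate: {overall_rate:.2f}%")
```

So roughly 1.3% of insertions breached anteriorly, consistent with the paper's conclusion that the rate is low.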

Degenhardt L.,University of New South Wales | Degenhardt L.,Burnet Institute | Hall W.,University of Queensland
The Lancet | Year: 2012

This paper summarises data for the prevalence, correlates, and probable adverse health consequences of problem use of amphetamines, cannabis, cocaine, and opioids. We discuss findings from systematic reviews of the prevalence of illicit drug use and dependence, remission from dependence, and mortality in illicit drug users, and evidence for acute and chronic effects of illicit drug use. We outline the regional and global distribution of use and estimated health burden from illicit drugs. These distributions are likely to be underestimates because they have not included all adverse outcomes of drug use and exclude those of cannabis - the most widely used illicit drug. In high-income countries, illicit drug use contributes less to the burden of disease than does tobacco, but a substantial proportion of that due to alcohol. The major adverse health effects of cannabis use are dependence and probably psychotic disorders and other mental disorders. The health-related harms of cannabis use differ from those of amphetamine, cocaine, and opioid use, in that cannabis contributes little to mortality. Intelligent policy responses to drug problems need better data for the prevalence of different types of illicit drug use and the harms that their use causes globally. This need is especially urgent in high-income countries with substantial rates of illicit drug use and in low-income and middle-income countries close to illicit drug production areas. © 2012 Elsevier Ltd.

Friesen C.R.,University of New South Wales | Friesen C.R.,University of Massachusetts Amherst
Proceedings of the Royal Society B: Biological Sciences | Year: 2014

Sexual conflict over mating can result in sex-specific morphologies and behaviours that allow each sex to exert control over the outcome of reproduction. Genital traits, in particular, are often directly involved in conflict interactions. Via genital manipulation, we experimentally investigated whether genital traits in red-sided garter snakes influence copulation duration and formation of a copulatory plug. The hemipenes of male red-sided garter snakes have a large basal spine that inserts into the female cloaca during mating. We ablated the spine and found that males were still capable of copulation but copulation duration was much shorter and copulatory plugs were smaller than those produced by intact males. We also anaesthetized the female cloacal region and found that anaesthetized females copulated longer than control females, suggesting that female cloacal and vaginal contractions play a role in controlling copulation duration. Both results, combined with known aspects of the breeding biology of red-sided garter snakes, strongly support the idea that sexual conflict is involved in mating interactions in this species. Our results demonstrate the complex interactions among male and female traits generated by coevolutionary processes in a wild population. Such complexity highlights the importance of simultaneous examination of male and female traits.

Felmingham K.L.,University of Tasmania | Bryant R.A.,University of New South Wales
Journal of Consulting and Clinical Psychology | Year: 2012

Objective: To examine potential differential responses in men and women to cognitive behavior therapy for posttraumatic stress disorder (PTSD). Method: Fifty-two men and 56 women diagnosed with PTSD participated in randomized controlled trials of cognitive behavior therapy for PTSD. Participants were randomly allocated to either (a) exposure-only therapy (Ex) or (b) exposure-based treatment combined with cognitive restructuring (ExCR). Results: There were no significant differences between men and women in treatment response immediately after treatment in either Ex or ExCR. At 6-month follow-up, men displayed significantly more severe PTSD symptoms in the Ex group compared with women in the Ex group, and compared with men and women in the ExCR conditions. Conclusions: These findings suggest that men with PTSD have a reduced maintenance of treatment gains following exposure therapy compared with women, but display less relapse if exposure therapy is combined with cognitive therapy. These findings are consistent with evidence that women recall emotional memories and retain extinction memories more strongly than men, which may facilitate emotional processing and long-term treatment gains. © 2012 American Psychological Association.

Hegarty B.,Prince of Wales Hospital | Parker G.,University of New South Wales
Current Opinion in Psychiatry | Year: 2013

Purpose of Review: To provide an overview of the theoretical relevance of omega-3 and omega-6 fatty acids in the cause of mood disorders, and to evaluate the potential therapeutic role of omega-3 fatty acids. Recent Findings: Numerous studies have documented low omega-3 fatty acid levels in those with depressive disorders, and there are plausible biological explanations as to why reduced omega-3 status may predispose to mood disorders as well as to a range of other conditions. Although early studies evaluating the role of omega-3 preparations as treatments of depression were generally positive, the rate of negative or nondifferential studies has increased in recent years. Recent meta-analyses provide an explanation in suggesting that docosahexaenoic acid-weighted preparations may be ineffective while finding support for eicosapentaenoic acid (EPA)-weighted preparations. Summary: There is sufficient indicative data favouring EPA-weighted omega-3 supplementation for those with a depressive mood disorder, particularly as fish oil is viewed by patients as 'natural', has few side effects and is neuroprotective. Recent meta-analyses inform us that intervention studies should focus on EPA-weighted preparations. © 2012 Wolters Kluwer Health | Lippincott Williams & Wilkins.

Graham F.S.,University of Tasmania | McDougall T.J.,University of New South Wales
Journal of Physical Oceanography | Year: 2013

The evolution equation of potential temperature has to date been treated as an approximation to the oceanic version of the first law of thermodynamics. That is, oceanographers have regarded the advection and diffusion of potential temperature as the advection and diffusion of "heat." However, the nonconservative source terms that arise in the evolution equation for potential temperature are estimated to be two orders of magnitude larger than the corresponding source terms for Conservative Temperature. In this paper the nonconservative source terms of potential temperature, Conservative Temperature, and entropy are derived for a stratified turbulent fluid, then quantified using the output of a coarse-resolution ocean model and compared to the rate of dissipation of mechanical energy, ε. It is shown that the error incurred in ocean models by assuming that Conservative Temperature is 100% conservative is approximately 120 times smaller than the corresponding error for potential temperature and at least 1200 times smaller than the corresponding error for entropy. Furthermore, the error in assuming that Conservative Temperature is 100% conservative is approximately 6 times smaller than the error in ignoring ε. Hence Conservative Temperature can be quite accurately regarded as a conservative variable and can be treated as being proportional to the "heat content" per unit mass of seawater, and therefore it should now be used in place of potential temperature in physical oceanography, including as the prognostic temperature variable in ocean models. © 2013 American Meteorological Society.
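For context (this definition comes from the TEOS-10 standard rather than from the abstract itself), Conservative Temperature is potential enthalpy rescaled by a fixed heat capacity, which is why it can be read as heat content per unit mass:

```latex
% TEOS-10 definition (standard background, not stated in the abstract):
% \Theta is potential enthalpy h^0 (enthalpy referenced to surface
% pressure) divided by a fixed, defined heat capacity c_p^0.
\Theta(S_A, t, p) = \frac{h^{0}(S_A, t, p)}{c_p^{0}},
\qquad c_p^{0} \equiv 3991.867\,957\,119\ \mathrm{J\,kg^{-1}\,K^{-1}}
```

Because c_p^0 is a defined constant, any nonconservation of Θ arises only from the small source terms in the evolution equation for h^0, which is the origin of the error ratios quantified in the paper.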

Gaspers S.,University of New South Wales | Szeider S.,Vienna University of Technology
Proceedings - Annual IEEE Symposium on Foundations of Computer Science, FOCS | Year: 2013

There are various approaches to exploiting "hidden structure" in instances of hard combinatorial problems to allow faster algorithms than for general unstructured or random instances. For SAT and its counting version #SAT, hidden structure has been exploited in terms of decomposability and strong backdoor sets. Decomposability can be considered in terms of the treewidth of a graph that is associated with the given CNF formula, for instance by considering clauses and variables as vertices of the graph, and making a variable adjacent to all the clauses it appears in. On the other hand, a strong backdoor set of a CNF formula is a set of variables such that each assignment to this set moves the formula into a fixed class for which (#)SAT can be solved in polynomial time. In this paper we combine the two above approaches. In particular, we study the algorithmic question of finding a small strong backdoor set into the class W≤t of CNF formulas whose associated graphs have treewidth at most t. The main results are positive: (1) There is a cubic-time algorithm that, given a CNF formula F and two constants k, t ≥ 0, either finds a strong W≤t-backdoor set of size at most 2k, or concludes that F has no strong W≤t-backdoor set of size at most k. (2) There is a cubic-time algorithm that, given a CNF formula F, computes the number of satisfying assignments of F or concludes that sbt(F) > k, for any pair of constants k, t ≥ 0. Here, sbt(F) denotes the size of a smallest strong W≤t-backdoor set of F. We establish both results by distinguishing between two cases, depending on whether the treewidth of the given formula is small or large. For both results the case of small treewidth can be dealt with using relatively standard methods. The case of large treewidth is challenging and requires novel and sophisticated combinatorial arguments. The main tool is an auxiliary graph whose vertices represent subgraphs in F's associated graph. It captures various ways to assemble large-treewidth subgraphs in F's associated graph. This is used to show that every backdoor set of size k intersects a certain set of variables whose size is bounded by a function of k and t. For any other set of k variables, one can use the auxiliary graph to find an assignment τ to these variables such that the graph associated with F[τ] has treewidth at least t + 1. The significance of our results lies in the fact that they allow us to exploit algorithmically a hidden structure in formulas that is not accessible by either of the two approaches (decomposability, backdoors) alone. Already a backdoor of size 1 on top of treewidth 1 (i.e., sb1(F) = 1) entails formulas of arbitrarily large treewidth and arbitrarily large cycle cutsets (variables whose deletion makes the instance acyclic). Copyright © 2013 by The Institute of Electrical and Electronics Engineers, Inc.
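The strong-backdoor notion above can be made concrete with a short sketch. The helper `in_target_class` is a caller-supplied stand-in for membership in W≤t (which would require testing the treewidth of the associated graph and is not implemented here); the toy class in the example is purely illustrative:

```python
from itertools import product

def reduce_cnf(clauses, assignment):
    """Apply a partial assignment (a set of signed literals):
    drop satisfied clauses, delete falsified literals from the rest."""
    reduced = []
    for clause in clauses:
        if any(lit in assignment for lit in clause):
            continue  # clause satisfied, remove it
        reduced.append(tuple(l for l in clause if -l not in assignment))
    return reduced

def is_strong_backdoor(clauses, backdoor_vars, in_target_class):
    """B is a strong backdoor into a class C iff *every* assignment
    to B moves the formula into C (brute force over 2^|B| assignments).
    `in_target_class` is hypothetical here; the paper's class W<=t
    would test treewidth of the formula's associated graph."""
    for signs in product((1, -1), repeat=len(backdoor_vars)):
        tau = {s * v for s, v in zip(signs, backdoor_vars)}
        if not in_target_class(reduce_cnf(clauses, tau)):
            return False
    return True

# Example with a toy target class "at most two clauses remain":
# {x1} is a strong backdoor of F into that class.
F = [(1, 2, 3), (-1, 2), (1, -3)]
assert is_strong_backdoor(F, [1], lambda f: len(f) <= 2)
```

The brute force over assignments is exponential in |B|, which is why the paper treats k as a constant; the hard algorithmic content lies in *finding* a small backdoor set, not in verifying one.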

Agency: Cordis | Branch: FP7 | Program: CP | Phase: ICT-2011.9.6 | Award Amount: 2.49M | Year: 2012

MULTI replaces the familiar sequential model of computation that uses Boolean variables and combinational gates by logic operations that are executed in parallel on devices that have a built-in many-state memory and whose inputs and outputs are multivalued. MULTI seeks to design, simulate and experimentally implement proof-of-principle devices on the atomic and molecular scale.

MULTI refers to the unique novel characteristics of the proposed schemes. We use MULTI-level logic variables as opposed to two-valued Boolean variables. We use MULTI-variate circuits that act in parallel so that more than one logic function is evaluated at every stage of the computation, thereby aiming for parallel circuits. We use MULTI-state inbuilt memories. New functionalities and ground-breaking characteristics of information processing are provided by the radically post-Boolean MULTI approach. As a final thrust, MULTI will explore a post-Turing model of computation by using continuous variables.

In MULTI a single atom, molecule or a supra(bio)molecular assembly acts as a logic element. We aim for new foundational principles and for proof of concept of computing in hardware at the level of laboratory experiments. MULTI plans to take advantage of internal degrees of freedom of atoms or molecules to implement logic operations by electrical addressing in the solid state and/or by optical addressing in solution. The dynamics offers time-resolved response and thereby enables parallelism that can be massive because of the many resolvable states. Internal states of different molecules can be made to communicate, thereby providing for concatenation. The other route of MULTI is (bio)chemical recognition, which offers exquisite control for implementing many-state, optically readable finite-state machines.

Benefits of the MULTI approach are higher information rates for inputs and outputs, enhanced rates of processing due to parallelism and computing in memory, and exploration of continuous logic.
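As a purely software-level illustration of what multi-level logic variables mean (the project targets atomic and molecular hardware, not code), here is the standard min/max generalisation of Boolean gates to three levels; the value set {0, 1, 2} and the gate definitions are textbook many-valued logic, not the project's actual device behaviour:

```python
# Three-valued (ternary) logic: the simplest step beyond Boolean.
LEVELS = (0, 1, 2)  # ternary logic levels

def t_and(a, b):  # generalises Boolean AND
    return min(a, b)

def t_or(a, b):   # generalises Boolean OR
    return max(a, b)

def t_not(a):     # generalises Boolean NOT
    return 2 - a

# Restricted to the extreme levels {0, 2}, the gates reduce to
# ordinary Boolean behaviour:
assert t_and(0, 2) == 0 and t_or(0, 2) == 2 and t_not(0) == 2
```

Each ternary wire carries log2(3) ≈ 1.58 bits, which is one way to see the higher input/output information rates the project cites.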

Agency: Cordis | Branch: FP7 | Program: CP | Phase: ICT-2009.3.8 | Award Amount: 3.19M | Year: 2010

The LIMA project exploits cutting-edge photonic technologies to enhance silicon solar cell efficiencies with new concepts in nanostructured materials. It proposes nano-structured surface layers designed to increase light absorption in the solar cell while decreasing surface and interface recombination loss. Integration in a back contact design further reduces these interface losses and avoids shading.

The project improves light-matter interaction by the use of a surface plasmonic nanoparticle layer. This reduces reflection and efficiently couples incident radiation into the solar cell, where it is trapped by internal reflection.

Surface and interface recombination are minimised by using silicon quantum dot superlattices in a passivating matrix. The distance between quantum dots ensures wave-function overlap and good conductivity. An effective field at the superlattice - crystalline silicon interface ensures that the cell is insensitive to the recombination velocity at this heterojunction, and further increases the collection probability in the quantum dot layer.

The dots allow a fundamental efficiency enhancement due to experimentally confirmed multiple exciton generation. This mechanism increases photocurrent and can in theory raise the theoretical single-junction efficiency limit from 33% to 44%.

These surface plasmonic and quantum dot layers are integrated in a high-efficiency crystalline silicon back contact cell. This is designed such that the space charge region is separated from the superlattice - crystalline silicon heterojunction, minimising non-radiative space-charge recombination. The back contacts and dielectric electrical insulator are designed to maximise back surface reflection and enhance the light trapping of incident radiation without shading losses.

The project combines expertise between academic and industrial partners. The goal is a high-efficiency cell using novel concepts to enhance proven cell designs.

Agency: Cordis | Branch: FP7 | Program: CP | Phase: ICT-2011.9.8 | Award Amount: 3.32M | Year: 2012

TOLOP comprises investigations into three of the levels necessary for a paradigm shift in low-power electronics:
1. Fabrication and measurement of devices which are inherently low-power in switching operation at room temperature.
2. Theory of specific device implementations for each of those technologies, to explain and validate the principles behind their low-power capabilities.
3. Design of architectures to enable circuit operation of these technologies for overall low-power circuit operation.

There will be an investigation of the inherent losses at realistic switching rates, and estimations of the device-circuit-operation trade-offs for minimising energy consumption. Metrics such as the energy-delay product will be provided for each implementation and benchmarked against existing commercial and research technologies.

The consortium brings together leading European institutions in this field, and allows the overall optimisation of energy consumption in realistic conditions, including device fabrication, device operation and circuit operation at several levels.

Target outcomes:
1. Novel single-atom, single-electron and spintronic devices will be investigated experimentally. In the first two cases, the structures are inherently CMOS-compatible, while in the third a higher-risk approach will be taken that still addresses manufacturability.
2. Both non-Boolean implementations, such as multi-valued logic and parallel logic, and optimised Boolean logic implementations will be addressed. Non-Boolean logic has the potential for exponential improvement in power needs.
3. Architectures and operating protocols will be designed to optimise the total power consumption of a circuit-level implementation, taking into account measured and estimated losses at realistic operating rates and temperatures.

Device-level proof of concept will be achieved experimentally, driving the design of specific implementations, with theoretical investigation of viability in a realistic circuit.
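The energy-delay product mentioned above is simply energy per switching event multiplied by switching delay; lower is better, so it penalises designs that save energy only by running slower. A minimal sketch, with invented numbers rather than project measurements:

```python
# Energy-delay product (EDP): a standard benchmarking metric for
# low-power devices.  The device values below are made up purely
# for illustration.
def edp(energy_joules, delay_seconds):
    return energy_joules * delay_seconds

device_a = edp(1e-15, 1e-9)    # 1 fJ per switch, 1 ns delay
device_b = edp(0.5e-15, 4e-9)  # half the energy, but 4x slower

# device_b saves energy per switch yet has the larger (worse) EDP,
# which is exactly the trade-off the metric exposes.
assert device_b > device_a
```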

Agency: Cordis | Branch: FP7 | Program: ERC-AG | Phase: ERC-AG-LS3 | Award Amount: 2.27M | Year: 2014

The contribution of clathrin-independent endocytosis to the cellular entry of signaling receptors, cell adhesion factors, and other cell surface molecules is well documented. However, how this process is initiated is still unknown. We have recently found that a cellular lectin, galectin-3 (Gal3), entered cells via morphologically distinct tubular structures, so-called clathrin-independent carriers (CLICs); Gal3 endocytosis was required for the uptake of the CLIC cargo CD44, a cell adhesion/migration factor; CD44 and Gal3 uptake required glycosphingolipids (GSLs); Gal3 induced tubular membrane invaginations in a GSL-dependent manner. Based on these findings we propose the groundbreaking hypothesis that Gal3 is an adaptor that clusters cargo proteins carrying defined carbohydrate modifications together with specific GSLs into nanoscale membrane environments whose mechanical properties drive the clathrin-independent formation of endocytic pits. In this program, we will first establish the compositional topology of endocytic galectin processes (Aim 1). The adaptor hypothesis will then be tested using innovative chemical biology tools (Aim 2). Quantitative models of cargo clustering and membrane shape changes will be developed on the basis of biophysical measurements and coarse grain simulations (Aim 3). Intravital imaging of endocytosis and cell migration in mice will finally explore how the functional link between galectins and GSLs contributes to wound healing in the colon (Aim 4). The molecular functions of galectins and GSLs in endocytosis, major unresolved questions in cellular membrane biology, will thereby be established, providing details from atomic arrangements via multi-molecular complexes and meso-scaled membrane domains to in vivo physiology. I am confident that this program will lead to the discovery of a new clathrin-independent endocytic mechanism by which different types of cargo are sorted to and internalized from specific membrane domains.

News Article | March 1, 2017

Are we closing in on life’s cradle? What is claimed to be the oldest evidence of life on Earth yet found backs the idea that the first microbes originated around hydrothermal vents on the seafloor – but the work is already proving controversial. Explaining the origin of life is one of the biggest unclaimed prizes in biology, and one that many scientists – including Nobel prizewinners – are chasing. The only thing we know for certain is that life must have popped into existence sometime between Earth’s formation 4.5 billion years ago and the appearance of the first undisputed fossils, about 3.4 billion years ago. Most origin-of-life researchers base their theories on biochemical principles. But to seek direct evidence of life’s emergence, they must visit some of Earth’s last remaining wildernesses, including parts of Greenland, northern Canada and Antarctica. By chance these are the only places where Earth’s oldest rocks – and potentially fossils – can still be found. Matthew Dodd at University College London and his colleagues have just finished analysing rocks collected from a region called the Nuvvuagittuq belt, in northern Quebec, Canada. The rocks here, on the coast of Hudson Bay, are at least 3.75 billion years old, and some geologists argue they are about 4.29 billion years old, which would mean they are just slightly younger than the planet itself. Like all such ancient rocks, they have been heavily altered. At some point they spent time deep inside Earth, where temperatures above 500°C and extreme pressures baked and deformed them. But geologists can still read clues that they formed at the bottom of Earth’s very early oceans. Significantly, they seem to preserve evidence of ancient deep-sea hydrothermal vents – just the sort of environment that many see as the most likely birthplace for life. Dodd and his colleagues believe they have now found evidence of early life, in iron-rich rocks originally formed around relatively cool vents (less than 160°C). 
These rocks contain microscopic tubes and filaments made of iron oxide. Very similar structures, the researchers say, are formed by bacteria that live in mat-like colonies around modern deep-sea hydrothermal vents. What’s more, the material close to the filaments includes carbon with an isotopic balance characteristic of biological processes, says Dodd. Some of that carbon is inside crystals of phosphorus-rich minerals, which also hints at early biology. “Phosphorus is essential for all life on Earth,” says Dodd. “As organisms die and decay it is released and it can then be incorporated into minerals.” Taken together, the evidence points to one inescapable conclusion, he says: the very early Earth was home to microbes similar to those found around today’s low-temperature hydrothermal vents. If confirmed, the conclusion would be significant for several reasons. It would potentially push the record of life back to 4.29 billion years ago, suggesting our planet was inhabited astonishingly early. Little more than a decade ago, the consensus was that Earth was still a molten mass so early in its history. More than that, it would show that life got going around deep-sea vents where organisms have to derive their energy from geothermal processes, since there is little or no sunlight. This would help bring the geological evidence in line with findings from genetic and biochemical studies hinting that life emerged in deep hydrothermal areas – and not in shallow, sun-drenched environments where most early fossils have been found. “The strongest [fossil] evidence for early life currently comes from shallow water deposits such as beach sands,” says David Wacey at the University of Western Australia in Perth. “This study once again gets us thinking about a potential hydrothermal cradle of life.” Wacey thinks Dodd and his colleagues’ evidence of ancient life is reasonably solid. 
“The individual lines of chemical evidence are not particularly strong, but put these together with the evidence from the filaments and one comes up with a pretty convincing biological scenario,” he says. David Emerson at the Bigelow Laboratory for Ocean Sciences in East Boothbay, Maine, is more cautious. Emerson has spent many years investigating the biology of modern microbial mats. “You do get unique structures in these mat communities that are very easily identifiable as biological,” he says. “The problem is that when you go back in time it becomes harder and harder to interpret the evidence.” Others are unconvinced by the new work. “I am frankly dubious,” says Frances Westall at the Centre for Molecular Biophysics in Orléans, France. “All kinds of reactions take place at [high] pressures and temperatures.” The filaments might simply be a curious by-product of those reactions and have nothing to do with biological activity, she says. Martin Van Kranendonk at the University of New South Wales in Sydney, who co-authored a 2016 study on possible early life in 3.7-billion-year-old rocks from Greenland, is another firm sceptic. He says it “beggars belief” that fragile, microscopic structures could survive in rocks that have been subjected to high temperatures and pressures deep underground. Westall draws a comparison with a mid-1990s claim by NASA scientists of possible signs of life in a 4-billion-year-old Martian meteorite. Again the evidence was in the form of interesting microstructures plus isotopic signatures consistent with biological activity. The findings were controversial from the start, and most scientists ultimately rejected them. The claims by Dodd’s team cannot be summarily dismissed on first impressions, though. Wacey says many researchers will want to scrutinise the evidence in great detail before making up their minds. “It may be many years before a consensus is reached,” he says. 
When dealing with “fossils” that may be 4.29 billion years old, it makes little sense to rush things.

Ride-sharing giant Uber just unveiled an "AI lab," acquiring the New York startup Geometric Intelligence (GI) to "advance machine learning," it announced on Monday. The AI lab team, led by GI's cofounder Gary Marcus, will include 15 researchers—from New York University, Stanford, Cornell, and other universities—who will bring a "multidisciplinary approach" to Uber's AI development, drawing from science and AI backgrounds, according to Uber. Uber has already made significant moves in the AI space. As Danny Lange, Uber's director of machine learning, told TechRepublic, "machine learning is Uber's DNA." The entire platform depends on integrating information from GPS locations, traffic, and pickup locations—and learning from millions of previous rides to improve its accuracy. Uber has already set up shop in Pittsburgh after recruiting top researchers from Carnegie Mellon, and launched a driverless fleet there. It also purchased Otto, a self-driving trucking company, back in August. What may, perhaps, be most significant about the announcement is the fact that Uber appears to be entering the space of artificial general intelligence (AGI)—a goal that is reached when a machine can successfully perform a task the same way a human can. (This is different from "narrow AI," which was displayed when IBM's Deep Blue AI system mastered chess, beating world champion Garry Kasparov in 1997.) "With all of its complexity and uncertainty, negotiating the real world is a high-order intelligence problem," wrote Jeff Holden, Uber's CPO, in a post announcing the news. "Self-driving cars are one of the few very strong uses for artificial intelligence today and the demand for talent in this area has already led to numerous acquisitions," said Michael Ramsey, autonomous vehicle expert at Gartner. 
"Uber is trying to find a better way to train its vehicles to think like people and is hoping that Geometric Intelligence has a breakthrough in this area." According to Toby Walsh, AI professor at the University of New South Wales, the GI approach is inspired by cognitive science, and "based on richer representations and probabilistic reasoning." "It is good, nevertheless to see investment in technologies that go beyond deep learning. And it reflects how even technological mature organizations like Uber are having to reinvent themselves continually," Walsh said. "If your organization wants to succeed, you need to put AI close to the center of what you do." The acquisition "makes sense for Uber's physical as well as purely digital undertakings," said Bryant Walker Smith, professor at the University of South Carolina and expert in the autonomous vehicles space. AI researcher Roman Yampolskiy, head of the Cybersecurity Lab at the University of Louisville, agreed. "One can no longer dominate any industry without a robust AI infrastructure," he said. "All companies with necessary financial resources are trying to secure human resources in that space, and Uber is not an exception. Between self-driving cars and intelligent ride management, they need smart machines to remove humans from the loop in order to optimize their business model." "Not every company should outright buy an AI firm," Smith said, "but every company in just about any industry should be working with one."

News Article | April 29, 2016

In the world of engineering, you can usually trust nature to have done it first, and better. A newly developed fuel cell technology looks to the cactus to solve a problem involving this promising but complex form of clean energy, which generates power without emitting planet-warming greenhouse gases. The technology, a thin plastic membrane, could help improve the performance of fuel cells to make them more useful for everything from electric cars to desalination plants. The research, a collaboration between the CSIRO, Australia's peak science body, and Korea's Hanyang University, was published Thursday in the journal Nature. The new membrane aims to help keep fuel cells cool, Aaron Thornton, CSIRO researcher and the report's co-author, told Mashable Australia, solving a significant problem for the development of green technology. The problem the membrane is aimed at solving concerns a particular type of fuel cell, known as proton-exchange membrane fuel cells. Such fuel cells generate energy by mixing gases like hydrogen and oxygen. Most currently available membranes can heat up and dry out, meaning the fuel cell loses performance. A common solution is to include an extra component near the fuel cell — a humidifier — that helps keep the membrane moist. But this adds weight and more complexity to the fuel cell. The new plastic membrane can solve this problem by holding water and acting in a similar way to a cactus. "Cactus actually have a round hole, called a stomata, [on their surface]," he explained, "that's responsible for opening them up in the cool of the night to absorb water, and closing up in the dry hot day." The new membrane performs in much the same way, using two plastic, or polymer, skins. The internal skin, which actively draws in and holds water, is coated by a layer that repels water but also has tiny nano-cracks. 
"This skin cracks and opens up when there is moisture available — it basically self-humidifies," Thornton explained. It then closes back up in low-humidity environments to retain the water. This feature allows the performance of the fuel cell to be increased up to fourfold, he claimed. This could help fuel cells generate more electricity with less hydrogen, or reduce electric vehicle costs by getting rid of extra components such as the humidifier. Although it's only been tested in the lab so far, Thornton suggested the membrane could have a wide range of uses. It could be used to attempt to harness electricity from the sea, for example. "Because ions are actually in the sea, if you could separate [the ions] from water and run them through something like a fuel cell, you could generate electricity," he explained, acknowledging such research is just getting started. Kondo-Francois Aguey-Zinsou, the leader of the Materials Energy Research Laboratory in Nanoscale at the University of New South Wales, who was not involved in the new research, told Mashable Australia the study was valuable. "One of the problems [with fuel cells] is water management," he said. "This can potentially lead to better fuel cell technology." Nevertheless, the major hurdle to broadening the use of this type of fuel cell in clean technology is the cost, he suggested. "One of the main problems with fuel cells is the cost of platinum," Aguey-Zinsou said. "Membranes are a problem, but the highest cost of the fuel cell is the platinum catalyst that helps react the hydrogen and oxygen. "If this problem is not met, this technology won't have a significant impact." Maybe cacti have more cost-saving secrets to share?

News Article | October 26, 2016

A floor material that converts footsteps into energy may one day power lights in train stations and busy thoroughfares. Xudong Wang at the University of Wisconsin-Madison and his colleagues have invented a material that generates electricity when people step on it. Unlike previous attempts in this vein, which used relatively expensive materials like ceramics and metals, this one is low-cost, renewable and biodegradable. To make it, the researchers extracted cellulose nano-fibres from wood pulp and separated them into two layers, one of which they chemically treated to become positively charged. They then encased both layers in cardboard and compressed them to make a hard board. Under the pressure of a footstep, the two cellulose layers come into contact and swap electrons. When the foot is lifted, the electrons return but travel through an external circuit, producing electricity. One step on a floorboard produces between 10 and 30 volts, and is capable of lighting up 35 green LEDs. Wang’s team is now planning to test the prototype in a part of their university with heavy foot traffic. Eventually, they hope to introduce the floorboards into train stations and shopping centres, where they could power lights and sensors. However, Alistair Sproul at the University of New South Wales in Sydney, Australia, says that like existing electricity-producing floors, the new material has a tiny output compared with other green energy sources. “If you want to power society and do things efficiently, go for solar or wind energy,” says Sproul. “This is a novelty idea but probably not very practical.” But Wang says that energy from footsteps could complement solar energy. “The concept is to generate electricity from energy that would otherwise be wasted,” he says. “It also works inside buildings and underground, where solar may not be available or sufficient.” The team will now need to show that the floorboards are durable and cost-effective.
The production cost is similar to that of regular wooden floors, but additional expenses include ripping up existing floors and connecting the new flooring to energy storage units and devices. “It can’t replace solar energy harvesting,” Wang says. “But as a complement, we believe it will be the most cost-effective way compared to other technologies for harvesting energy from footsteps.”

News Article | December 7, 2016

Have you ever taken the back off your phone and seen that your battery has swollen up like an inflatable pillow? For the first time, scientists have been able to peer deep inside to see what is happening. Lithium-ion batteries have become ubiquitous in phones, laptops and hoverboards because they are lightweight and hold a lot of charge. However, a recent spate of explosions has raised safety concerns. One reason these batteries are damage-prone is their structure, says Toby Bond at Canadian Light Source, a synchrotron facility in Saskatoon, Canada. They usually contain electrode sheets wrapped in a spiralling “Swiss roll”, or “jelly roll”, configuration to fit in as many energy-producing layers as possible, but this causes problems if gas forms inside. Gas can build up if a battery becomes overheated or overcharged, or is left drained of charge for too long. Under these conditions, the liquid components in the battery can vaporise and produce gas. This causes the external casing to swell up, resulting in the characteristic pillow appearance. “The battery can either pillow a small amount and keep operating, pillow a lot and cease operation, or keep generating gas and rupture the cell, which can be accompanied by an explosion or fire,” says Bond. Pillowing can easily be identified by the battery’s outer appearance, but until now, no one knew what was going on inside. Bond and his colleagues used a high-powered X-ray technique called computed tomography to see the process. As gas built up, the electrode layers appeared to buckle and push apart. Subtle defects already present in the battery were found to be points of weakness. Such defects are probably introduced in the assembly process, suggesting that improvements in manufacturing could make lithium batteries more resistant, Bond says. It is unclear at this stage whether pillowing was involved in the recent battery failures in Samsung Galaxy Note 7 phones.
Another way to reduce risk is to develop lithium-ion batteries with no liquid components, says Neeraj Sharma at the University of New South Wales in Sydney, Australia. “We and other groups are working on these solid-state batteries,” he says. “They can be very safe, but there are a number of scientific and technical challenges that need to be overcome before they compete with conventional lithium-ion batteries.” In the meantime, the best way to protect your lithium battery from pillowing is to stop it from overheating, says Bond. “Avoid spending many uninterrupted hours playing 3D games on your phone, and consider giving your tablet a break if you’ve been binge-watching movies for a nine-hour flight,” he says.

Scientists who reported the discovery of remnants of an ancient continent under the Indian Ocean island Mauritius have called for broader studies of the supercontinent Gondwana's split to decipher its linkages with the undiscovered micro-continent Mauritia. Lewis Ashwal of the University of the Witwatersrand in South Africa and colleagues confirmed that the island nation Mauritius is sitting on top of a sunken continent that is billions of years old. According to Ashwal, pieces of Mauritia formed after the splintering of the Gondwana supercontinent are still spread over the Indian Ocean. The sunken continent was formed when India broke away from Africa millions of years ago. Gondwana disintegrated about 200 million years ago, with splinters later becoming Africa, Australia, South America, and Antarctica. According to the research, the presence of a continental crust is indicated by the old Mauritian zircons, which show the ancient crust's affinity with Madagascar, now 700 km west of Mauritius. The study says a broad reconstruction tracing the linkages of Mauritius and the continental fragments as an erstwhile part of the ancient nucleus of Madagascar and southern India needs to be attempted. "We are studying the break-up process of the continents, in order to understand the geological history of the planet," said Ashwal, also the study's lead author. Analysis of volcanic rocks in Mauritius allowed the scientists to establish the Mauritius connection firmly. The abundance of extremely old zircons on the island's beaches, spewed out by volcanic eruptions, led the team to clear conclusions. There was a huge age disparity between Mauritius itself, just 8 million years old, and zircon crystals at least 3 billion years old. "The fact that we have found zircons of this age proves that there are much older crustal materials under Mauritius that could only have originated from a continent," said Ashwal.
Finding such old minerals in the relatively young Mauritius led to the surmise that the island was sitting on top of a sunken continent. The discovery follows earlier observations of anomalously strong gravitational fields in some parts of the ocean, caused by unusually thick crust. Beneath the sea, huge chunks of land that sank became attached to the ocean's crust. Experts conclude that Mauritia must have been a small continent. When India and Madagascar started drifting apart, Mauritia was stretched by the strain and eventually broke up. The assumption is that Mauritia sank into the sea 84 million years ago. "It's like plasticine: when continents are stretched they become thinner and split apart," noted Martin Van Kranendonk at the University of New South Wales in Australia. © 2017 Tech Times, All rights reserved. Do not reproduce without permission.

News Article | March 8, 2016

The early emergence of flowers and leaves due to climate change amplifies springtime heat waves in Europe, new climate simulations suggest. While not as deadly as their summertime counterparts, spring heat waves can disrupt ecosystems and damage crops. Simulating a 30-day head start to spring’s natural beginning, researchers discovered that the shift alters water distribution in the environment and boosts the likelihood of spring heat waves by about 0.6 events per year. Since the 1970s, spring plant growth has shifted earlier in the year by around two to five days each decade in the Northern Hemisphere. The heat waves also become more intense, with the average one lasting a day longer and reaching 1 degree Celsius warmer, the researchers report in a paper to be published in Geophysical Research Letters. While early plant growth affected spring heat, the new research found no influence on summer heat waves. That result suggests that although plants impact temperature, the effect is short-lived, says study coauthor Andy Pitman, a climate scientist at the University of New South Wales in Sydney. Pitman and colleagues ran a climate simulation multiple times, each time shifting spring plant growth earlier and earlier. The simulations revealed that budding plants suck water from the soil that would otherwise evaporate and cool the air. The drier soil increased daytime temperatures and decreased low- and mid-lying clouds that block sunlight. By summer, the early spring had little effect. Pitman suggests that by the time summer rolls around, other factors such as rainfall begin to have a much larger effect than the remnant impacts of early spring greening. Essentially the effects of an early spring get “washed out,” he says. The researchers plan to look at the effects of early spring plant growth on other parts of the world, including North America. 
The results make sense, but “it’s just one model,” says Benjamin Zaitchik, a hydroclimate scientist at Johns Hopkins University. “It would be interesting to see if other modeling groups could show how the same effect plays out in the other models.”

Perkins S.E.,University of New South Wales | Alexander L.V.,University of New South Wales
Journal of Climate | Year: 2013

Despite their adverse impacts, definitions and measurements of heat waves are ambiguous and inconsistent, generally being endemic to only the group affected, or the respective study reporting the analysis. The present study addresses this issue by employing a set of three heat wave definitions, derived from surveying heat-related indices in the climate science literature. The definitions include three or more consecutive days above one of the following: the 90th percentile for maximum temperature, the 90th percentile for minimum temperature, and positive extreme heat factor (EHF) conditions. Additionally, each index is studied using a multiaspect framework measuring heat wave number, duration, participating days, and the peak and mean magnitudes. Observed climatologies and trends computed by Sen's Kendall slope estimator are presented for the Australian continent for two time periods (1951-2008 and 1971-2008). Trends in all aspects and definitions are smaller in magnitude but more significant for 1951-2008 than for 1971-2008. Considerable similarities exist in trends of the yearly number of days participating in a heat wave and yearly heat wave frequency, suggesting that the number of available heat wave days drives the number of events. Larger trends in the hottest part of a heat wave suggest that heat wave intensity is increasing faster than the mean magnitude. Although the direct results of this study cannot be inferred for other regions, the methodology has been designed so that it is widely applicable. Furthermore, it includes a range of definitions that may be useful for a wide range of systems impacted by heat waves. © 2013 American Meteorological Society.
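The first definition above (three or more consecutive days with maximum temperature above the 90th percentile) is straightforward to operationalize. A minimal sketch on synthetic data, using a single pooled percentile threshold for brevity rather than the calendar-day climatological percentiles a study like this would use:

```python
import numpy as np

def heat_wave_events(tmax, threshold, min_length=3):
    """Find runs of >= min_length consecutive days with tmax above threshold.

    Returns a list of (start_index, duration) tuples, one per heat wave,
    from which the aspects in the paper (number, duration, participating
    days) can be derived.
    """
    hot = tmax > threshold
    events = []
    run_start = None
    for i, flag in enumerate(np.append(hot, False)):  # sentinel closes a trailing run
        if flag and run_start is None:
            run_start = i
        elif not flag and run_start is not None:
            if i - run_start >= min_length:
                events.append((run_start, i - run_start))
            run_start = None
    return events

# Synthetic example: a year of daily maxima with one implanted 5-day event.
rng = np.random.default_rng(0)
tmax = rng.normal(30, 3, 365)
tmax[100:105] = 45  # obvious heat wave, days 100-104
threshold = np.percentile(tmax, 90)
events = heat_wave_events(tmax, threshold)
participating_days = sum(dur for _, dur in events)
```

Warm days in the random background can also chain into short qualifying runs, which is why the event count, not just the implanted spike, matters when computing yearly aspects.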

Ballard J.W.O.,University of New South Wales | Melvin R.G.,University of New South Wales
Molecular Ecology | Year: 2010

One of the grand challenges of the postgenomics era is to mechanistically link the genotype with the phenotype. Here, we consider the link between the mitochondrial genotype and the organismal phenotype that is provided by bioenergetic studies of the electron transport chain. That linkage is pertinent for the fields of molecular ecology and phylogeography as it tests if, and potentially how, natural selection can influence the evolutionary and demographic past of both populations and species. We introduce the mitochondrial genotype in terms of mitochondrial-encoded genes, nuclear-encoded genes that produce structural proteins imported into the mitochondria, and mitochondrial DNA-nuclear interactions. We then review the potential for quaternary structure modelling to predict the functional consequence of specific naturally occurring mutations. We discuss how the energy-producing reactions of oxidative phosphorylation can be used to provide a mechanistic biochemical link between genotype and phenotype. Experimental manipulations can then be used to test the functional consequences of specific mutations in multiple genetic backgrounds. Finally, we examine how mitochondria can influence the organismal mitochondrial phenotype using the examples of lifespan, fertility and starvation resistance and discuss how mitochondria may be involved in establishing both the upper and lower thermal limits of organisms. We conclude that mitochondrial DNA mutations can be important in determining aspects of organism life history. The question that remains to be resolved is how common are these adaptive mutations? © 2010 Blackwell Publishing Ltd.

Hoyle C.E.,University of Southern Mississippi | Lowe A.B.,University of New South Wales | Bowman C.N.,University of Colorado at Boulder
Chemical Society Reviews | Year: 2010

The merits of thiol-click chemistry and its potential for making new forays into chemical synthesis and materials applications are described. Since thiols react to high yields under benign conditions with a vast range of chemical species, their utility extends to a large number of applications in the chemical, biological, physical, materials and engineering fields. This critical review provides insight into emerging venues for application as well as new mechanistic understanding of this exceptional chemistry in its many forms (81 references). © 2010 The Royal Society of Chemistry.

Dance I.,University of New South Wales
Dalton Transactions | Year: 2010

In seeking to mimic the hydrogenation of N2 to NH3 as effected under mild conditions by the enzyme nitrogenase, three classes of known metal sulfide clusters that resemble the NFe7MoS9 core of FeMo-co, the active site of nitrogenase, have been assessed theoretically. The assessment has been made in the context of the previously proposed mechanism for nitrogenase, in which protons are relayed to FeMo-co where, as hydrogen atoms accumulate on Fe and S atoms, they transfer to bound N2 and subsequent intermediates in a critical sequence of intramolecular hydrogenations, probably accelerated by H atom tunneling. The three model systems possess the XcFe4S4 face which is the key active site of FeMo-co (X is most probably N in FeMo-co, and is S in the models). The most promising functional models are based on clusters M1, {(tpb)Mo(μ3-S)3Fe2(Fe-L)Sc(μ-S)2(Fe-L)Fe2(μ3-S)3Mo(tpb)} [tpb = tris(1-pyrazolyl)hydroborate], for which syntheses are well developed. The assessment is based on the ability of the models to mimic the intermediates in the FeMo-co mechanism, as determined by density functional simulations. The elaborations of M1 required to mimic the FeMo-co behaviour are described. These include modification of the tpb ligands to control the coordination at the Fe atoms, to provide for the proton relay functionality, and to prevent unwanted reactivity at other Fe and S atoms. Literature references with prescriptions for synthesis of the predicted homogeneous catalysts are provided. Further, in view of the similarities between the model systems and the P-cluster of nitrogenase, it is speculated that the P-cluster could be a relic catalytic site for N2 reduction. © 2010 The Royal Society of Chemistry.

van Holst Pellekaan S.,University of New South Wales
Quaternary International | Year: 2013

Mitochondrial DNA (mtDNA), Y-chromosome and, more recently, genome studies from living people have produced powerful evidence for the dispersal of modern human populations. The prevailing model of global dispersion assumes an African origin in which Australia and the American continents represent some of the extreme regions of human migration, though the relative timing of dispersal events remains debatable. Here, a focus on Australia and New Guinea discusses currently available genetic evidence from the two regions, compared with that from Asia. Mt haplotypes indicate ancient ancestry for both Australia and New Guinea peoples, with evidence of some shared genetic connection and other unshared haplogroups apparently specific to both places. Migration into Sahul from south-east Asia may have been by more complex routes than only along a 'southern coastal route', raising the question of possible common ancestry in central or northern Asia for some Australian and American peoples for which current genetic evidence is tenuous. Although current dating methods for genetic diversity rely heavily on several assumptions, best estimates provide support for archaeological dates, indicating that, relative to the colonization of America, Australia was inhabited very early. Genetic diversity of living descendants of Australia's founding populations is informative for dispersal within Australia and for understanding complex population histories of Asia. © 2011 Elsevier Ltd and INQUA.

Robertson R.,University of New South Wales
Journal of Geophysical Research: Oceans | Year: 2013

Tidal effects on the circulation under the ice shelves and ice shelf melting in the Amundsen Sea were investigated using a numerical model, through comparison of simulations with and without tides. In the Amundsen Sea, tidal impacts were dependent on the location of the ice shelf front with respect to the M2 effective critical latitude. The critical latitude is the latitude where the tidal frequency equals the inertial frequency. The effective critical latitude is where the tidal frequency equals the inertial frequency adjusted by relative vorticity, such as that associated with a wind-driven gyre. For ice shelves located equatorward of the M2 effective critical latitude, tides increased both mixing in front of and under the ice shelf and flow into the ice shelf cavities by as much as 50%, despite weak tides compared with the mean flows. Tides also increased melting for these ice shelves by 1-3.5 m yr-1, a 50% increase for Dotson Ice Shelf and 25% for Pine Island Ice Shelf. These enhancements were not a result of tidal residual flows, but instead originated from resonant effects, increases in the baroclinity of the velocities, and higher mixing, all of which are associated with critical latitude effects on internal tides. For ice shelves located poleward of the effective critical latitude, tides very slightly retarded flow into the cavity and slightly reduced melting. Key Points: tides can increase ice shelf melting by 1-3.5 m/yr, depending on latitude; tides can increase mixing and flow under ice shelves, depending on latitude; critical-latitude shifts caused by wind influence tidal impacts on ice shelves. ©2013. American Geophysical Union. All Rights Reserved.
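The critical latitude defined above follows directly from equating the inertial frequency, f = 2Ω sin φ, with the tidal frequency 2π/T. A quick sketch using standard constants (not values taken from the paper); the paper's "effective" critical latitude would further adjust f by the relative vorticity of the background flow, shifting the result:

```python
import math

OMEGA = 7.2921e-5          # Earth's rotation rate (rad/s)
M2_PERIOD_HOURS = 12.4206  # period of the principal lunar semidiurnal (M2) tide

def critical_latitude(period_hours):
    """Latitude (degrees) where the inertial frequency 2*Omega*sin(phi)
    equals the tidal frequency 2*pi/T."""
    omega_tide = 2 * math.pi / (period_hours * 3600.0)
    return math.degrees(math.asin(omega_tide / (2 * OMEGA)))

lat_m2 = critical_latitude(M2_PERIOD_HOURS)  # ~74.5 degrees
```

This places the M2 critical latitude near 74.5°, which is why ice shelf fronts on either side of that latitude in the Amundsen Sea embayment respond so differently to the tides.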

Stevens-Kalceff M.A.,University of New South Wales
Mineralogy and Petrology | Year: 2013

Cathodoluminescence (CL) techniques are used to investigate the defect structures of pure synthetic silicon dioxide (SiO2) polymorphs. Pure, synthetic Types I, II, III and IV amorphous SiO2 polymorphs, pure, synthetic crystal α-SiO2 and pure, synthetic amorphized crystal α-SiO2 have been investigated and their characteristic defects have been determined and compared. The CL emission from pure SiO2 polymorphs is generally related to local point defects in the tetrahedrally coordinated SiO2 host lattice. A range of CL emissions associated with non-bridging oxygen defects, oxygen-deficient defects and the radiative recombination of self-trapped excitons are observed from both the pure synthetic crystal and amorphous SiO2 polymorphs. In addition, CL emissions associated with residual concentrations of aluminium impurities are also observed from α-SiO2 (quartz) and Type I and II a-SiO2 (fused quartz). Localised amorphous micro-volumes may exist within natural α-SiO2 due to the presence of a high concentration of pre-existing or induced defects. Amorphization of α-SiO2 diminishes the difference between the defect structures and associated CL from α-SiO2 and a-SiO2. Thus CL investigation of the defect structure of a-SiO2 polymorphs provides useful insight into the microstructure of amorphized α-SiO2. © 2013 Springer-Verlag Wien.

Sun W.,University of New South Wales
Physical Chemistry Chemical Physics | Year: 2014

Because the Derjaguin approximation breaks down at the nanoscale, and because it neglects the discrete atomic structure, underestimates the number density of atoms, and ignores surface effects, the continuum Hamaker model cannot describe interactions between a spherical nanoparticle and a flat surface. In this work, the interaction forces, including van der Waals (vdW) attraction, Born repulsion and mechanical contact forces, between a spherical nanoparticle and a flat substrate have been studied using molecular dynamics (MD) simulations. The MD simulated results are compared with the Hamaker approach, and it is found that the force ratios for one nanosphere interacting with a flat surface differ from those for two interacting nanospheres, both qualitatively and quantitatively. Thus two separate formulas have been proposed to estimate the vdW attraction and Born repulsion forces between a nanosphere and a flat surface. In addition, it is shown that the mechanical contact forces between a spherical nanoparticle and a flat surface can still be described by the continuum Hertz model. © 2014 the Owner Societies.
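For context, the continuum prediction the abstract argues against is simple: in the sphere-flat geometry with separation D much smaller than the sphere radius R, the non-retarded Hamaker/Derjaguin vdW attraction has magnitude F = A·R/(6D²). A minimal sketch with illustrative numbers (the Hamaker constant and sizes below are arbitrary examples, not values from the paper):

```python
def hamaker_sphere_plate_force(A, R, D):
    """Continuum (non-retarded) vdW attraction between a sphere of radius R
    and a flat surface at separation D, valid for D << R:
    F = A * R / (6 * D**2), derived from the energy W = -A*R/(6*D)."""
    return A * R / (6.0 * D**2)

# Illustrative: A ~ 1e-19 J, a 5 nm radius particle near a flat substrate.
A, R = 1e-19, 5e-9
f_near = hamaker_sphere_plate_force(A, R, 0.5e-9)  # at 0.5 nm separation
f_far = hamaker_sphere_plate_force(A, R, 1.0e-9)   # at 1.0 nm separation
# Inverse-square dependence on separation: halving D quadruples the force.
```

It is exactly this smooth 1/D² continuum scaling that, per the abstract, the atomistic MD results depart from at nanoparticle sizes, motivating the separate fitted formulas.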

Khoshghalb A.,University of New South Wales | Khalili N.,University of New South Wales
International Journal for Numerical and Analytical Methods in Geomechanics | Year: 2013

SUMMARY: A meshfree algorithm is proposed for the fully coupled analysis of flow and deformation in unsaturated poro-elastic media. A brief description of the governing differential equations is presented followed by the spatial discretization of the governing equations using the Galerkin approach and the radial point interpolation method. Temporal discretization is achieved using a three-point approximation technique with second order accuracy. Particular attention is given to the determination of constitutive coefficients and effective stress parameters of the medium taking into account hydraulic hysteresis. Application of the model to several experimental data is demonstrated. Good agreement is obtained between the numerical and experimental results in all cases highlighting the ability of the proposed model in capturing the volume change and suction dependency of the model parameters as well as the coupled behaviour of unsaturated porous media subject to hydraulic hysteresis. © 2012 John Wiley & Sons, Ltd.

Lowe A.B.,University of New South Wales
Polymer Chemistry | Year: 2010

This review highlights examples of recent applications of both the radical-mediated and base/nucleophile-initiated thiol-ene reactions in polymer and materials synthesis. Initial discussion focuses on mechanistic aspects of these reactions and also notes some of the structural considerations, with respect to reactants, that should be considered when practising such chemistries. The review is not intended to be exhaustive but rather to serve as an illustration of the impressive versatility and clear potential of these thiol-based "click" reactions and to highlight examples demonstrating its broad utility. © 2010 The Royal Society of Chemistry.

Stening R.J.,University of New South Wales | Winch D.E.,University of Sydney
Journal of Geophysical Research: Space Physics | Year: 2013

An earlier comprehensive analysis of geomagnetic tides by Winch is revisited to display changes in the current systems with season and longitude. The data used come from the quiet Sun years 1964-1965. We choose to present some total equivalent current systems, being the sum of the external and internal parts derived in the spherical harmonic analysis. The latitudes and local times of the current system foci are generally in agreement with other workers. The amplitudes of the current system vortices follow an annual variation with maximum in summer. The longitude variations of the vortex amplitudes vary as the inverse of the magnetic field strength at both equinoxes but have different variations at the solstices. Other longitude maxima which have been reported in the equatorial electrojet intensity were not found. We examine in detail the "invasions" of the summer current systems across the equator, identifying these as signatures of field-aligned currents (FACs). The tilting of current contours with respect to the equator is interpreted as being due to midday FACs. As others have found, the identification of afternoon FACs is more difficult. The seasonal swap-over of FAC directions occurs not in September but in October-November. Key Points: spherical harmonic analysis of global geomagnetic data is revisited in detail; cross-equator invasions of current systems are identified as signatures of field-aligned currents; an annual variation of Sq vortex strength is found in both hemispheres. ©2013. American Geophysical Union. All Rights Reserved.

Considering liquid fuels from a variety of feedstocks and refinement processes, evaporating sprays of various fuels are investigated in a constant-volume chamber at ambient and injection conditions typical of a diesel engine. For diesel sprays, the liquid phase of the spray reaches a maximum penetration distance soon after the start of injection, while the vapor phase of the spray continues to penetrate downstream. This liquid-phase penetration distance or in short, the "liquid length", depends upon the ambient and fuel-injector conditions. To clarify fuel effects on liquid length and vapor penetration, in this study we performed simultaneous high-speed imaging of Mie-scattering and schlieren at approximately 50-kHz framing rates. Experiments were performed at specified ambient gas temperature and density in an environment with 0% oxygen. This non-reacting condition was selected intentionally to isolate the mixing and vaporization processes from more complex effects of combustion. Fuels with a wide range of boiling points and densities were examined, including a conventional No. 2 diesel, low-aromatics jet fuel, world-average jet fuel, Fischer-Tropsch synthetic fuel, coal-derived fuel, and a two-component surrogate fuel. Results show that the liquid length increases with increasing fuel density for fuels of similar boiling-point temperature. Considering a mixing-limited vaporization process, entrained ambient mass per fuel mass is inversely proportional to the fuel density, and thereby, high fuel density causes a decrease in ambient-to-fuel ratio, which increases liquid length. Consequently, both boiling point and density of fuel affect liquid length. In contrast to the liquid-length result, no correlation between fuel density and vapor penetration was found. This is because the momentum flux, which governs the jet penetration, is not a function of the fuel density. 
Although the liquid length varies, the similarity in vapor penetration for all fuels investigated shows that the fuel density and boiling point do not significantly affect the total entrainment and mixing into the spray. © 2011 Elsevier Ltd. All rights reserved.

Clark G.F.,University of New South Wales | Johnston E.L.,University of New South Wales
Ecology Letters | Year: 2011

Disturbance can affect both the diversity and invasibility of communities. Many field studies have found correlations between diversity and susceptibility to invasion, but if both factors independently respond to disturbance then spurious non-causal relationships may be observed. Here, we show that disturbance can cause a temporal shift in the diversity-invasibility relationship. In a field experiment using sessile marine communities, disturbance strongly affected both diversity and invasion such that they were highly correlated. Disturbance facilitated initial invasion, creating a negative diversity-invasibility relationship when the invader first arrived. Over time, disturbance hindered the persistence of invaders, creating a positive diversity-invasibility relationship. We suggest that temporal changes in the diversity-invasibility relationship may have contributed to the 'invasion paradox', a term for the contrasting patterns of experimental and observational studies of the diversity-invasibility relationship. © 2010 Blackwell Publishing Ltd/CNRS.

Yu K.,University of New South Wales | Dutkiewicz E.,Macquarie University
IEEE Transactions on Mobile Computing | Year: 2012

This paper presents positioning algorithms for cellular network-based vehicle tracking in severe non-line-of-sight (NLOS) propagation scenarios. The aim of the algorithms is to enhance the positional accuracy of network-based positioning systems when the GPS receiver does not perform well due to the complex propagation environment. A one-step position estimation method and a two-step method are proposed and developed. Constrained optimization is utilized to minimize a cost function that accounts for the NLOS error, so that the NLOS effect is significantly reduced. Vehicle velocity and heading direction measurements are exploited in the algorithm development; these may be obtained using a speedometer and a heading sensor, respectively. The algorithms are practical and suitable for real-world vehicle applications. It is observed through simulation that in severe NLOS propagation scenarios, the proposed positioning methods significantly outperform existing cellular network-based positioning algorithms. Further, when the distance measurement error is modeled as the sum of an exponential bias variable and a Gaussian noise variable, exact expressions for the CRLB are derived to benchmark the performance of the positioning algorithms. © 2012 IEEE.

Trainer T.,University of New South Wales
Energy Policy | Year: 2013

The possibility of 100% renewable supply of European energy is analysed in terms of the amount of redundant plant required to cope with the intermittency of wind and solar energy, and the limited biomass potential. It is concluded that the capital cost would be unaffordable, and that biomass resources are insufficient. © 2013 Elsevier Ltd.

Niven R.K.,University of New South Wales
Philosophical Transactions of the Royal Society B: Biological Sciences | Year: 2010

This study examines a new formulation of non-equilibrium thermodynamics, which gives a conditional derivation of the 'maximum entropy production' (MEP) principle for flow and/or chemical reaction systems at steady state. The analysis uses a dimensionless potential function φst for non-equilibrium systems, analogous to the free energy concept of equilibrium thermodynamics. Spontaneous reductions in φst arise from increases in the 'flux entropy' of the system - a measure of the variability of the fluxes - or in the local entropy production; conditionally, depending on the behaviour of the flux entropy, the formulation reduces to the MEP principle. The inferred steady state is also shown to exhibit high variability in its instantaneous fluxes and rates, consistent with the observed behaviour of turbulent fluid flow, heat convection and biological systems; one consequence is the coexistence of energy producers and consumers in ecological systems. The different paths for attaining steady state are also classified. © 2010 The Royal Society.

Chen V.M.,University of New South Wales | Hogg P.J.,University of New South Wales
Journal of Thrombosis and Haemostasis | Year: 2013

Tissue factor (TF) is a transmembrane cofactor that binds and promotes the catalytic activity of factor (F) VIIa. The TF/VIIa complex activates FX by limited proteolysis to initiate blood coagulation and helps provide the thrombin burst that is important for a stable thrombus. TF is present both in the extravascular compartment, where it functions as a hemostatic envelope, and the intravascular compartment, where it contributes to thrombus formation, particularly when endothelial disruption is minimal. The regulation of its cofactor function appears to differ in the two compartments. Intravascular TF derives predominately from leucocytes, with either monocytes or neutrophils implicated in different models of thrombosis. This TF exists mostly in a non-coagulant or cryptic form and acute events lead to local decryption of TF and FX activation. A variety of experimental observations imply that decryption of leucocyte surface TF involves both a dithiol/disulfide switch and exposure of phosphatidylserine. The dithiol/disulfide switch appears to involve the Cys186-Cys209 disulfide bond in the membrane-proximal domain of TF, although this has not been demonstrated in vivo. Activation of a purinergic receptor or complement has recently been observed to decrypt TF on myeloid cells and a dithiol/disulfide switch and the oxidoreductase, protein disulfide isomerase, have been implicated in both systems. The molecular mechanism of action of protein disulfide isomerase in TF encryption/decryption, though, remains to be determined. © 2013 International Society on Thrombosis and Haemostasis.

Callaghan B.L.,University of New South Wales | Richardson R.,University of New South Wales
Biology of Mood and Anxiety Disorders | Year: 2013

Research first reported nearly 50 years ago demonstrated that infant and young animals (including humans) exhibit profoundly faster rates of forgetting (i.e., infantile amnesia) than do adults. In addition to these differences in retention, more recent research has shown that inhibition of fear learning is also very different in infancy than in adulthood. Specifically, extinction of fear early in life is much more resistant to relapse than is extinction later in life. Both of these findings suggest that young animals should be especially resilient to the emergence of mental health disorders, which appears to be at odds with the view that early-life experiences are particularly important for the development of later psychopathologies (such as anxiety disorders) and with the finding that the majority of anxiety disorders first emerge in adolescence or childhood. This apparent paradox might be resolved, however, if exposure to chronic stress early in life affects the maturation of the fear retention and extinction systems, leading to a faster transition to the adult form of each (i.e., long-lasting fear memories and relapse-prone extinction). In several recent studies we have found exactly this pattern; that is, infant rats exposed to maternal-separation stress exhibit adult-like fear and extinction learning early in development. Further, we have demonstrated that some of these effects can be mimicked by exposing the mother to the stress hormone corticosterone in her drinking water (in lieu of the separation procedure). These findings suggest that early-life exposure to stress and stress hormones may act as a general signal that can alter the developmental trajectory of emotional systems and potentially place animals at greater risk for the development of anxiety.
The implications of these recent findings for our understanding of the developmental origins of health and disease, and for enhancing preventative and therapeutic treatments across the lifespan, are considered. © 2013 Callaghan and Richardson; licensee BioMed Central Ltd.

Van Sebille E.,University of New South Wales
Physics Today | Year: 2015

A piece of plastic discarded into the sea travels far and wide, carried by complex currents. Eventually, the material settles into one of five distinct garbage patches in the subtropical oceans.

Gopinath B.,University of New South Wales
Public health nutrition | Year: 2012

Objective: To establish associations of duration of breast-feeding with mean BMI and waist circumference, as well as the likelihood of being overweight/obese, during early childhood. Design: Cross-sectional, population-based study. Height, weight and waist circumference were measured and BMI calculated. An interviewer-administered questionnaire determined whether the child was ever breast-fed and the duration of breast-feeding. Setting: Sydney, Australia. Subjects: Infants and pre-school children (n 2092) aged 1-6 years examined in the Sydney Paediatric Eye Disease Study during 2007-2009. Results: Of the children aged 1-6 years, 1270 had been breast-fed compared with 822 who were never breast-fed. After multivariable adjustment, 1-6-year-old children who were ever breast-fed compared with those who were not had significantly lower BMI, 16·7 (se 0·1) kg/m2 v. 17·1 (se 0·2) kg/m2 (P = 0·01). Decreasing BMI was associated with increasing duration of breast-feeding (P trend = 0·002). After multivariable adjustment, each month increase in breast-feeding was associated with an average BMI decrease of 0·04 kg/m2 (P = 0·002) and 0·03 kg/m2 (P = 0·03) among children aged 1-2 years and 3-4 years, respectively. In 1-2-year-old children, each month increase in breast-feeding duration was associated with a 0·06 cm decrease in waist circumference (P = 0·04). Significant associations were not observed among 5-6-year-old children. Children who were ever breast-fed v. those never breast-fed were less likely to be overweight/obese (multivariable-adjusted OR = 0·54; 95 % CI 0·36, 0·83). Conclusions: We demonstrated a modest influence of breast-feeding on children's BMI during early childhood, particularly among those aged less than 5 years.

Martin A.C.,University of New South Wales
Nature reviews. Nephrology | Year: 2010

Interest has been renewed over the role of uric acid in the pathogenesis of hypertension, endothelial dysfunction and renal dysfunction, which are all features of pre-eclampsia. Uric acid is not a consistent predictive factor for the development of pre-eclampsia but its levels generally increase once the disease manifests, and plasma levels of uric acid approximately correlate with disease severity. Hyperuricemia in pre-eclampsia was once thought to result solely from reduced renal clearance, but levels of uric acid are now also thought to increase through increased uric acid production caused by trophoblast breakdown, cytokine release and ischemia. Uric acid can promote endothelial dysfunction, damage and inflammation, which leads to oxidation. Pre-eclampsia, which is characterized by widespread endothelial dysfunction and inflammation, might be propagated by uric acid through these known in vitro activities. Of note, however, uric acid can also act as a scavenger of oxygen free radicals. Plasma urate measurements are currently used to support the diagnosis of pre-eclampsia during pregnancy. As further studies define the role of uric acid in the development of pre-eclampsia, monitoring levels of this factor may again become essential to the future treatment of pre-eclampsia.

Brennan M.,University of New South Wales
Australian Family Physician | Year: 2011

Background: Lactational mastitis is common, affecting one in 5 breastfeeding women. As well as causing significant discomfort, it is a frequent reason for women to stop breastfeeding. Objective: This article outlines an evidence based approach to the diagnosis and management of lactational breast infections in general practice. Discussion: Lactational mastitis is usually bacterial in aetiology and can generally be effectively managed with oral antibiotics. Infections that do not improve rapidly require further investigation for breast abscess and nonlactational causes of inflammation, including the rare cause of inflammatory breast cancer. In addition to antibiotics, management of lactational breast infections includes symptomatic treatment, assessment of the infant's attachment to the breast, and reassurance, emotional support, education and support for ongoing breastfeeding.

Williams K.,University of New South Wales
Cochrane database of systematic reviews (Online) | Year: 2010

BACKGROUND: Autism spectrum disorders (ASD) are characterised by abnormalities in social interaction and communication skills, as well as stereotypic behaviours and restricted activities and interests. Selective serotonin reuptake inhibitors (SSRIs) are prescribed for the treatment of co-morbidity associated with ASD such as depression, anxiety and obsessive-compulsive behaviours. OBJECTIVES: To determine if treatment with an SSRI: 1. improves the core features of autism (social interaction, communication and behavioural problems); 2. improves other non-core aspects of behaviour or function such as self-injurious behaviour; 3. improves the quality of life of children and their carers; 4. has short and long term effects on outcome; 5. causes harms. SEARCH STRATEGY: We searched the Cochrane Central Register of Controlled Trials (The Cochrane Library 2009, Issue 4), MEDLINE (December 2009), EMBASE (December 2009), CINAHL (December 2009), PsycINFO (December 2009) and ERIC (December 2009), without language restrictions. SELECTION CRITERIA: Randomised controlled trials (RCTs) of any dose of oral SSRI compared with placebo, in participants with autism spectrum disorders. Trials must have included at least one standardised outcome measure. DATA COLLECTION AND ANALYSIS: Two authors independently selected and appraised studies for inclusion and risk of bias. All data were continuous. Meta-analysis, where possible, used a random-effects model. MAIN RESULTS: Seven RCTs with a total of 271 participants were included. Four SSRIs were evaluated: fluoxetine (two studies), fluvoxamine (two studies), fenfluramine (two studies) and citalopram (one study). Five studies included only children and two studies included only adults. Varying inclusion criteria were used with regard to diagnostic criteria and intelligence of participants. Seventeen different outcome measures were reported.
Although more than one study reported data for Clinical Global Impression (CGI) and obsessive-compulsive behaviour (OCB), different tool types or components of these outcomes were used in each study. As such, data were unsuitable for meta-analysis. One large, high quality study in children showed no evidence of positive effect of citalopram. Two small studies in adults showed positive outcomes for CGI and OCB; one study showed improvements in aggression and another in anxiety. AUTHORS' CONCLUSIONS: There is no evidence of effect of SSRIs in children and emerging evidence of harm. There is limited evidence of the effectiveness of SSRIs in adults from small studies in which risk of bias is unclear.
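The random-effects pooling the review mentions is conventionally done with the DerSimonian-Laird estimator, which can be sketched in a few lines. The effect sizes and variances below are invented for illustration, not taken from the included trials.

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate via the DerSimonian-Laird method.

    effects: per-study effect sizes (e.g. standardised mean differences)
    variances: per-study within-study variances
    Returns (pooled effect, standard error, between-study variance tau^2).
    """
    w = [1.0 / v for v in variances]                         # fixed-effect weights
    y_fe = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - y_fe) ** 2 for wi, yi in zip(w, effects))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)            # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]             # random-effects weights
    y_re = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    return y_re, math.sqrt(1.0 / sum(w_re)), tau2

# Hypothetical data: three studies with equal within-study variance.
effects = [0.20, 0.50, 0.35]
variances = [0.04, 0.04, 0.04]
pooled, se, tau2 = dersimonian_laird(effects, variances)
```

When observed heterogeneity (Q) does not exceed its degrees of freedom, tau² is truncated to zero and the estimate coincides with the fixed-effect pooled mean.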

Willcox M.D.P.,University of New South Wales
Experimental Eye Research | Year: 2013

The ocular surface is continually exposed to the environment and, as a consequence, to different types of microbes, but whether there is a normal microbiota of the ocular surface remains unresolved. Traditional microbial culture techniques show that <80% of swabs of the conjunctiva yield cultivable microbes. These usually belong to the bacterial types of the coagulase-negative staphylococci and Propionibacterium sp., with low frequency of isolation of bacteria such as Staphylococcus aureus, Micrococcus sp., Gram-negative bacteria or fungi. Even when these are grown, the number of colony forming units (cfu) per swab of the conjunctiva is usually much less than 100 cfu. Swabs of the lid more commonly result in microbial growth, of the same species as from the conjunctiva and at slightly higher cfu. Contact lenses have also been cultured, and they yield similar microbial types. Microbes can be isolated from the ocular surface almost immediately after birth. The advent of molecular techniques for microbial identification based on 16S rRNA sequencing has opened up the possibility of determining whether there are non-cultivable microbes that can colonise the ocular surface. Additionally, use of these techniques in cross-sectional and longitudinal studies may help to determine whether the ocular surface harbours its own unique microbiota, or whether the microbiota are only transiently present. © 2013 Elsevier Ltd.

Cheong S.K.,University of New South Wales
The Journal of neuroscience : the official journal of the Society for Neuroscience | Year: 2013

Most neurons in primary visual cortex (V1) exhibit high selectivity for the orientation of visual stimuli. In contrast, neurons in the main thalamic input to V1, the lateral geniculate nucleus (LGN), are considered to be only weakly orientation selective. Here we characterize a sparse population of cells in marmoset LGN that show orientation and spatial frequency selectivity as great as that of cells in V1. The recording position in LGN and histological reconstruction of these cells shows that they are part of the koniocellular (K) pathways. Accordingly we have named them K-o ("koniocellular-orientation") cells. Most K-o cells prefer vertically oriented gratings; their contrast sensitivity and temporal frequency tuning are similar to those of parvocellular cells, and they receive negligible functional input from short wavelength-sensitive ("blue") cone photoreceptors. Four K-o cells tested displayed binocular responses. Our results provide further evidence that in primates, as in nonprimate mammals, the cortical input streams include a diversity of visual representations. The presence of K-o cells increases functional homologies between K pathways in primates and "sluggish/W" pathways in nonprimate visual systems.

Arguments are discussed on how ion energy measurements from ultra-thin diamond irradiation with 45 fs laser pulses of 26 terawatt power may be related to the ultra-high acceleration of plasma blocks where the significance of the highly efficient direct conversion of laser radiation into mechanical motion of ions or plasma blocks is dominated by nonlinear (ponderomotive) forces in fundamental contrast to thermo-kinetic dominated interaction with ns laser pulses. © 2012 Cambridge University Press.

Abbasi A.,University of New South Wales
Scientometrics | Year: 2013

In this study, new centrality (collaborative) measures are proposed for a node in weighted networks, in three different categories. Concepts from bibliometric indicators (e.g., the h-index and g-index) are applied to network analysis measures in order to introduce the new centrality measures. The first category of measures (i.e., l-index, al-index and gl-index) considers only a node's neighbors' degrees. The second category (i.e., h-Degree, a-Degree and g-Degree) takes into account the weights of a node's links in a weighted network. The third category (i.e., Hw-Degree, Aw-Degree and Gw-Degree) combines both neighbors' degrees and their links' weights. Using a co-authorship network, the association of these new measures and the existing measures with scholars' performance is examined to show the applicability of the new centrality measures. The analysis shows that scholars' citation-based performance measures are significantly associated with all the proposed centrality measures, but the correlation coefficient is highest for the ones based on average indicators (i.e., a-Degree and Aw-Degree). © 2013 Akadémiai Kiadó, Budapest, Hungary.
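The flavour of the first two categories can be sketched by reusing the standard h-index computation: once over a node's neighbors' degrees (an l-index-style measure) and once over the node's own link weights (an h-Degree-style measure). The toy co-authorship graph and function names below are illustrative, and the paper's exact definitions may differ in detail.

```python
def h_index(values):
    """Largest h such that at least h of the values are >= h."""
    h = 0
    for i, v in enumerate(sorted(values, reverse=True), start=1):
        if v >= i:
            h = i
    return h

# Toy weighted co-authorship network: node -> {neighbor: link weight},
# where weight might be the number of co-authored papers.
graph = {
    "A": {"B": 3, "C": 1, "D": 2},
    "B": {"A": 3, "C": 2},
    "C": {"A": 1, "B": 2, "D": 1},
    "D": {"A": 2, "C": 1},
}

def l_index(graph, node):
    """h-index over the degrees of a node's neighbors (first category)."""
    return h_index([len(graph[nbr]) for nbr in graph[node]])

def h_degree(graph, node):
    """h-index over the weights of a node's own links (second category)."""
    return h_index(list(graph[node].values()))
```

The third category would combine the two, applying the same h-index machinery to neighbor degrees weighted by link strength.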

Barres R.,University of New South Wales | Zierath J.R.,Karolinska University Hospital
American Journal of Clinical Nutrition | Year: 2011

DNA methylation is a major epigenetic modification that controls gene expression in physiologic and pathologic states. Metabolic diseases such as diabetes and obesity are associated with profound alterations in gene expression that are caused by genetic and environmental factors. Recent reports have provided evidence that environmental factors at all ages could modify DNA methylation in somatic tissues, which suggests that DNA methylation is a more dynamic process than previously appreciated. Because of the importance of lifestyle factors in metabolic disorders, DNA methylation provides a mechanism by which environmental factors, including diet and exercise, can modify genetic predisposition to disease. This article considers the current evidence that defines a role for DNA methylation in metabolic disorders. © 2011 American Society for Nutrition.

Vucic S.,University of Sydney | Kiernan M.C.,University of New South Wales
Journal of Neurology, Neurosurgery and Psychiatry | Year: 2010

Background: Upregulation of persistent Na+ conductances has been linked to axonal degeneration in sporadic amyotrophic lateral sclerosis (ALS) and has also been reported in the transgenic superoxide dismutase-1 (SOD-1) mouse model. The mechanisms of ectopic activity (fasciculations and cramp) and axonal degeneration still require clarification in familial ALS (FALS) in humans, and specifically whether there are any differences to the processes identified in sporadic patients. Consequently, novel threshold tracking techniques were used to assess whether upregulation of persistent Na+ conductances was a feature linked to axonal degeneration in FALS. Methods: Axonal excitability studies were undertaken in six FALS patients, 13 asymptomatic SOD-1 mutation carriers and 45 sporadic ALS (SALS) patients. Results: Compound muscle action potential (CMAP) amplitude was significantly reduced in FALS (6.3±1.3 mV) and SALS (6.0±0.4 mV) compared with controls (10.0±0.4 mV, p<0.05). The mean strength-duration time constant (τSD) was significantly increased in FALS (0.55±0.10 ms, p<0.05) and SALS (0.52±0.02 ms, p<0.01) compared with controls (0.41±0.02 ms). There were no differences in τSD between asymptomatic SOD-1 mutation carriers and controls. The increase in τSD correlated with the CMAP amplitude (r=-0.4) and neurophysiological index (r=-0.4). In separate studies that assessed cortical processes, short interval intracortical inhibition (SICI) was significantly reduced (FALS, -2.7±1.3%; controls 13.7±1.3%, p<0.0001) and intracortical facilitation increased (FALS, -5.0±2.2%; controls -0.4±1.1%, p<0.05) in FALS. The reduction in SICI correlated with τSD (r=-0.8). Conclusions: Taken together, these studies suggest that persistent Na+ conductances are upregulated in FALS and that this upregulation is intrinsically associated with axonal degeneration.

Dzuba V.A.,University of New South Wales | Berengut J.C.,University of New South Wales | Flambaum V.V.,University of New South Wales | Roberts B.,University of New South Wales
Physical Review Letters | Year: 2012

We apply the sum-over-states approach to calculate partial contributions to parity nonconservation (PNC) in cesium [Porsev, Beloy, and Derevianko, Phys. Rev. Lett. 102, 181601 (2009)]. We find significant corrections to two nondominating terms coming from the contribution of the core and highly excited states (n>9, the so-called tail). When these differences are taken into account, the result of Porsev et al., E_PNC = 0.8906(24)×10^-11 i(-Q_W/N), changes to 0.8977(40), coming into good agreement with our previous calculations, 0.8980(45). The interpretation of the PNC measurements in cesium still indicates reasonable agreement with the standard model (1.5σ); however, it gives new constraints on physics beyond it. © 2012 American Physical Society.

Hawes D.J.,University of Sydney | Price M.J.,University of Sydney | Dadds M.R.,University of New South Wales
Clinical Child and Family Psychology Review | Year: 2014

The treatment of conduct problems among children and adolescents with callous-unemotional (CU) traits has been subject to much speculation; however, treatment outcome research has been surprisingly limited and findings have been mixed. This review examines the research to date in this field as it pertains to two key questions. First, are CU traits associated with clinical outcomes and processes in the family based treatment of child and adolescent conduct problems? Second, can family based intervention produce change in CU traits? Using a systematic search strategy, we identified 16 treatment outcomes studies that can be brought to bear on these questions. These studies provide strong evidence of unique associations between CU traits and risk for poor treatment outcomes, while at the same time indicating that social-learning-based parent training is capable of producing lasting improvement in CU traits, particularly when delivered early in childhood. We discuss the potential for this emerging evidence base to inform the planning and delivery of treatments for clinic-referred children with CU traits, and detail an ongoing program of translational research into the development of novel interventions for this high-risk subgroup. © 2014 Springer Science+Business Media New York.

Han C.S.,University of New South Wales | Brooks R.C.,University of New South Wales
American Naturalist | Year: 2014

Behavioral traits often change over an individual's lifetime. Experience, physiological senescence, and age-dependent differences in optimal behavior can, in theory, all cause longitudinal behavioral changes. Yet most studies of behavioral plasticity and selection on behavior focus on short-term population-level responses to social factors such as conspecific density or sex ratio. Longer-term effects of social interactions on individual behavior have rarely been tested. Here we tested these effects by exposing male water striders (Hemiptera: Gerridae) to two different social conditions throughout their lifetime; we call these the nonsocial and social treatments. We then measured each male's lifetime mating success and individual behavioral plasticity by observing four different behaviors (exploring a novel environment, dispersal ability, sex-recognition sensitivity, and tendency to remount a resistant female after being dislodged) every 2 weeks. The social environment influenced individual variation in behavioral plasticity as well as population-level behavioral plasticity. Moreover, when we calculated linear selection gradients of individual behavioral traits and their plasticities on lifetime mating success, male remounting tendency and individual plasticity in exploration ability were likely to be the most important factors explaining variation in male lifetime mating success. In conclusion, the variation in social interactions throughout an individual's lifetime contributes to the individual variation in behavioral plasticity, which can significantly affect a male's lifetime fitness. © 2014 by The University of Chicago.
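The linear selection gradients mentioned here are conventionally computed in the Lande-Arnold style: regress relative fitness (fitness divided by mean fitness) on a standardised trait. A minimal single-trait sketch, with made-up remounting-tendency scores and mating counts rather than the paper's data:

```python
def mean(xs):
    return sum(xs) / len(xs)

def selection_gradient(trait, fitness):
    """Slope of relative fitness regressed on a standardised trait
    (a single-trait, Lande-Arnold-style linear selection gradient)."""
    w_rel = [w / mean(fitness) for w in fitness]       # relative fitness
    mu, n = mean(trait), len(trait)
    sd = (sum((t - mu) ** 2 for t in trait) / (n - 1)) ** 0.5
    z = [(t - mu) / sd for t in trait]                 # standardised trait
    zbar, wbar = mean(z), mean(w_rel)
    num = sum((zi - zbar) * (wi - wbar) for zi, wi in zip(z, w_rel))
    den = sum((zi - zbar) ** 2 for zi in z)
    return num / den

# Hypothetical data: remounting-tendency scores and lifetime mating success.
remount = [1.0, 2.0, 3.0, 4.0]
matings = [1, 1, 3, 3]
beta = selection_gradient(remount, matings)   # positive: selection favours remounting
```

With multiple traits (and their plasticities, as in the study), the same idea generalises to a multiple regression, whose partial slopes separate direct from correlated selection.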

Howard R.W.,University of New South Wales
Acta Psychologica | Year: 2014