Edmonton, Canada

The University of Alberta is a public research university located in Edmonton, Alberta, Canada. It was founded in 1908 by Alexander Cameron Rutherford, the first premier of Alberta, and Henry Marshall Tory, its first president. Its enabling legislation is the Post-secondary Learning Act. The university comprises four campuses in Edmonton, the Augustana Campus in Camrose, and a staff centre in downtown Calgary. The original north campus consists of 150 buildings covering 50 city blocks on the south rim of the North Saskatchewan River valley, directly across from downtown Edmonton. More than 39,000 students from across Canada and 150 other countries participate in nearly 400 programs in 18 faculties.

The University of Alberta is a major economic driver in Alberta. The university's impact on the Alberta economy is an estimated $12.3 billion annually, or five per cent of the province's gross domestic product. With more than 15,000 employees, the university is Alberta's fourth-largest employer. The university has been recognized by the Academic Ranking of World Universities, the QS World University Rankings and the Times Higher Education World University Rankings as one of the top five universities in Canada and one of the top 100 universities worldwide. According to the 2014 QS World University Rankings, the top faculty area at the University of Alberta is Arts and Humanities, and the top-ranked subject is English Language and Literature.

The University of Alberta has graduated more than 260,000 alumni, including Governor General Roland Michener, Prime Minister Joe Clark, Chief Justice of Canada Beverley McLachlin, Alberta premiers Peter Lougheed, Dave Hancock and Jim Prentice, Edmonton Mayor Don Iveson and Nobel laureate Richard E. Taylor. The university is a member of the Alberta Rural Development Network, the Association for the Advancement of Sustainability in Higher Education and the Sustainability Tracking, Assessment & Rating System. (Wikipedia)



Patent
University of Alberta | Date: 2016-09-06

A method of converting lipids to useful olefins includes reacting a mixture of lipids and a reactant olefin with microwave irradiation in the presence of ruthenium metathesis catalysts. The lipids may be unsaturated triacylglycerols or alkyl esters of fatty acids. The lipids may be sourced from renewable sources such as vegetable oil, waste cooking oil, or waste animal products.


Patent
University of Alberta | Date: 2015-04-17

The invention includes methods, pharmaceutical compositions, and uses thereof for treating patients with Papillary Thyroid Carcinoma (PTC) using a Platelet Derived Growth Factor Receptor Alpha (PDGFRA) inhibitor. The PDGFRA inhibitor is preferably an antibody specific to PDGFRA and causes an increase in the sensitivity of PTC cells to radioiodine treatment. Moreover, the antibody can be used in combination with other PDGFRA inhibitors such as tyrosine kinase inhibitors and RNA interference molecules.


Patent
Massachusetts Institute of Technology and University of Alberta | Date: 2016-11-23

The invention, in some aspects, relates to compositions and methods for altering cell activity and function, and to the introduction and use of light-activated ion channels.


Patent
University of Alberta | Date: 2016-09-21

Disclosed are: a combined hydrothermal and activation process that uses hemp bast fiber as the precursor to achieve graphene-like carbon nanosheets; a carbon nanosheet including carbonized crystalline cellulose; a carbon nanosheet formed by carbonizing crystalline cellulose; a capacitive structure including interconnected carbon nanosheets of carbonized crystalline cellulose; and a method of forming a nanosheet by carbonizing crystalline cellulose to create carbonized crystalline cellulose. The interconnected two-dimensional carbon nanosheets also exhibit very high levels of mesoporosity.


Patent
University of Alberta | Date: 2014-11-07

A thermal emitter is provided, including a periodic structure operating as a metamaterial on an optically thick substrate. The periodic structure thermally emits at high temperatures within a specified narrow wavelength band at a predetermined resonance, the metamaterial comprising a composite medium of natural materials. The emitter may be part of a thermophotovoltaic device. The thermal emitter may include a plurality of layered films, wherein the distance between adjacent films is substantially less than the emission wavelength.


Patent
University of Alberta | Date: 2016-09-12

The invention provides a binding-induced DNA nanomachine that can be activated by proteins and nucleic acids. This new type of nanomachine harnesses specific target binding to trigger assembly of separate DNA components that are otherwise unable to spontaneously assemble. Three-dimensional DNA tracks of high density are constructed on gold nanoparticles (AuNPs) functionalized with hundreds of single-stranded oligonucleotides and tens of copies of an affinity ligand. A DNA swing arm, free in solution, can be linked to a second affinity ligand. Binding of a target molecule to the two ligands brings the swing arm to the AuNP and initiates autonomous, stepwise movement of the swing arm around the AuNP surface. The movement of the swing arm generates hundreds of oligonucleotides in response to a single binding event. The new nanomachines have several unique and advantageous features over DNA nanomachines that rely on DNA self-assembly.


An underwater camera system includes a projector operable to project a pattern of electromagnetic radiation toward a target object. The electromagnetic radiation includes at least three different wavelengths. A sensor directed toward the target object receives electromagnetic radiation reflected from the target object, and the system stores the corresponding image data. One or more processors process the image data to compute a refractive normal according to the wavelength dispersion represented by differences in the image data, and to compute an interface distance corresponding to the distance from the center point of the sensor to the first refractive interface nearest the sensor, according to the refractive normal. The processors generate a 3D representation of the target object by back-projecting each pixel of the image data at the three wavelengths to determine an object point location according to the refractive normal and interface distance.
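The geometry behind this kind of refractive back-projection can be illustrated with a short, self-contained sketch. This is not the patented method; it only shows the vector form of Snell's law and a ray-plane intersection, with a hypothetical pixel ray, refractive normal, interface distance, and refractive indices.

```python
import numpy as np

def refract(d, n, n1, n2):
    """Vector Snell's law: bend unit ray direction d crossing from index n1
    into index n2 at an interface with unit normal n. Returns the refracted
    unit direction, or None on total internal reflection."""
    d = d / np.linalg.norm(d)
    n = n / np.linalg.norm(n)
    cos_i = -np.dot(n, d)
    if cos_i < 0.0:                      # flip the normal to face the incoming ray
        n, cos_i = -n, -cos_i
    eta = n1 / n2
    k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)
    if k < 0.0:
        return None                      # total internal reflection
    return eta * d + (eta * cos_i - np.sqrt(k)) * n

# Hypothetical back-projection of one pixel ray through a flat air/water interface.
cam_centre   = np.zeros(3)
pixel_ray    = np.array([0.10, 0.05, 1.0])   # direction through a pixel (camera frame)
iface_normal = np.array([0.0, 0.0, 1.0])     # estimated refractive normal
iface_dist   = 0.05                          # estimated camera-to-interface distance (m)

pixel_ray = pixel_ray / np.linalg.norm(pixel_ray)
t_hit = iface_dist / np.dot(pixel_ray, iface_normal)   # ray-plane intersection
entry_point = cam_centre + t_hit * pixel_ray           # where the ray enters the water
under_water = refract(pixel_ray, iface_normal, 1.0003, 1.333)  # air -> water
print(entry_point, under_water)
```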


Jing Y., University of Alberta | Shahbazpanahi S., University of Ontario Institute of Technology
IEEE Transactions on Signal Processing | Year: 2012

This paper deals with optimal joint user power control and relay distributed beamforming for two-way relay networks, where two end-users exchange information through multiple relays, each of which is assumed to have its own power constraint. The problem includes the design of the distributed beamformer at the relays and the power control scheme for the two end-users to optimize the network performance. Considering the overall two-way network performance, we maximize the lower signal-to-noise ratio (SNR) of the two communication links. For single-relay networks, this maximization problem is solved analytically. For multi-relay networks, we propose an iterative numerical algorithm to find the optimal solution. Because the complexity of the optimal algorithm is too high for large networks, two low-complexity sub-optimal algorithms are also proposed, which are numerically shown to perform close to the optimal technique. It is also shown via simulation that for two-way networks with both a single relay and multiple relays, proper user power control and relay distributed beamforming can significantly improve the network performance, especially when the power constraints of the two end-users are unbalanced. Our approach also substantially improves the power efficiency of the network. © 1991-2012 IEEE.
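As a rough illustration of the max-min SNR objective described in this abstract (not the paper's analytical or iterative algorithms), the sketch below brute-force searches the user powers in a textbook single-relay, amplify-and-forward two-way network. The channel gains, noise powers, and power budgets are hypothetical.

```python
import numpy as np

# Brute-force illustration of max-min SNR power control for a single-relay,
# amplify-and-forward two-way network (hypothetical gains and budgets).
h1, h2 = 0.9, 0.4                      # user-to-relay channel gains
sigma_r2 = sigma_1 = sigma_2 = 1e-2    # noise powers at relay and users
P1_max, P2_max, P_relay = 1.0, 1.0, 1.0

def link_snrs(p1, p2):
    # The relay spends its full power budget; both link SNRs grow with its gain.
    g2 = P_relay / (h1**2 * p1 + h2**2 * p2 + sigma_r2)   # |amplification|^2
    snr_at_user1 = (h1**2 * g2 * h2**2 * p2) / (h1**2 * g2 * sigma_r2 + sigma_1)
    snr_at_user2 = (h2**2 * g2 * h1**2 * p1) / (h2**2 * g2 * sigma_r2 + sigma_2)
    return snr_at_user1, snr_at_user2

best = max((min(link_snrs(p1, p2)), p1, p2)
           for p1 in np.linspace(0.01, P1_max, 100)
           for p2 in np.linspace(0.01, P2_max, 100))
print("max-min SNR %.1f achieved at p1=%.2f, p2=%.2f" % best)
```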


Hebblewhite M., University of Montana | Merrill E.H., University of Alberta
Oikos | Year: 2011

Partial migration is widespread in ungulates, yet few studies have assessed demographic mechanisms for how these alternative strategies are maintained in populations. Over the past two decades the number of resident individuals of the Ya Ha Tinda elk herd near Banff National Park has been increasing proportionally despite an overall population decline. We compared demographic rates of migrant and resident elk to test for demographic mechanisms of partial migration. We determined adult female survival for 132 elk, pregnancy rates for 150 female elk, and calf survival for 79 calves. Population vital rates were combined in Leslie-matrix models to estimate demographic fitness, which we defined as the migration strategy-specific population growth rate. We also tested for differences in factors influencing risk of mortality between migratory strategies for adult females using Cox proportional hazards regression and time-varying covariates of exposure to forage biomass, wolf predation risk, and group size. Despite higher pregnancy rates and winter calf weights associated with higher forage quality, survival of migrant adult females and calves was lower than that of resident elk. Resident elk traded high-quality forage for reduced predation risk by selecting areas close to human activity and by living in groups 20% larger than those of migrants. Thus, residents experienced higher adult female survival and calf survival, but lower pregnancy rates and calf weights. Cause-specific mortality of migrants was dominated by wolf and grizzly bear predation, whereas resident mortality was dominated by human hunting. Demographic differences translated into a slightly higher (2-3%), but non-significant, resident population growth rate compared to migrant elk, suggesting demographic balancing between the two strategies during our study. Despite this statistical equivalence, our results are also consistent with slow long-term declines in migrants because of higher wolf-caused mortality. These results emphasize that the different tradeoffs migrants and residents make between forage and predation risk may affect the demographic balance of partially migratory populations, which may explain recent declines in migratory behavior in many ungulate populations around the world. © 2011 The Authors.
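To make the Leslie-matrix comparison concrete, here is a minimal sketch of how strategy-specific population growth rates can be computed as the dominant eigenvalue of a two-stage (calf, adult female) projection matrix. The vital rates below are invented for illustration and are not the study's estimates.

```python
import numpy as np

def growth_rate(pregnancy, calf_survival, adult_survival, female_ratio=0.5):
    """Dominant eigenvalue (lambda) of a simple two-stage projection matrix."""
    fecundity = pregnancy * female_ratio          # female calves per adult female per year
    A = np.array([[0.0,           fecundity],     # calf stage
                  [calf_survival, adult_survival]])  # adult female stage
    return max(np.linalg.eigvals(A).real)         # dominant eigenvalue = lambda

# Hypothetical vital rates for the two strategies (illustrative only).
print("migrant  lambda ~ %.3f" % growth_rate(0.90, 0.25, 0.80))
print("resident lambda ~ %.3f" % growth_rate(0.80, 0.35, 0.87))
```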


Ustin S.L., University of California at Davis | Gamon J.A., University of Alberta
New Phytologist | Year: 2010

Contents: Summary; I. Introduction; II. History of functional-type classifications of vegetation; III. History of remote sensing of vegetation; IV. New sensors and perspectives; V. Measuring detailed canopy structure; VI. The emerging hypothesis of 'optical types'; VII. Conclusions; Acknowledgements; References.

Summary: Conceptually, plant functional types represent a classification scheme between species and broad vegetation types. Historically, these were based on physiological, structural and/or phenological properties, whereas recently, they have reflected plant responses to resources or environmental conditions. Often, an underlying assumption, based on an economic analogy, is that the functional role of vegetation can be identified by linked sets of morphological and physiological traits constrained by resources, based on the hypothesis of functional convergence. Using these concepts, ecologists have defined a variety of functional traits that are often context dependent, and the diversity of proposed traits demonstrates the lack of agreement on universal categories. Historically, remotely sensed data have been interpreted in ways that parallel these observations, often focused on the categorization of vegetation into discrete types, often dependent on the sampling scale. At the same time, current thinking in both ecology and remote sensing has moved towards viewing vegetation as a continuum rather than as discrete classes. The capabilities of new remote sensing instruments have led us to propose a new concept of optically distinguishable functional types ('optical types') as a unique way to address the scale dependence of this problem. This would ensure more direct relationships between ecological information and remote sensing observations. © The Authors (2010). Journal compilation © New Phytologist Trust (2010).


He J., University of Alberta | Li Y.W., University of Alberta | Blaabjerg F., University of Aalborg
IEEE Transactions on Industrial Electronics | Year: 2014

To accomplish superior harmonic compensation performance using distributed generation (DG) unit power electronics interfaces, an adaptive hybrid voltage and current controlled method (HCM) is proposed in this paper. It is shown that the proposed adaptive HCM can reduce the number of low-pass/bandpass filters in the DG unit digital controller. Moreover, phase-locked loops are not necessary, as the microgrid frequency deviation can be automatically identified by the power control loop. Consequently, the proposed control method provides opportunities to reduce DG control complexity without affecting the harmonic compensation performance. Comprehensive simulation and experimental results from a single-phase microgrid are provided to verify the feasibility of the proposed adaptive HCM approach. © 1982-2012 IEEE.


Woodside M.T., University of Alberta | Woodside M.T., Canadian National Institute For Nanotechnology | Block S.M., Stanford University
Annual Review of Biophysics | Year: 2014

Folding may be described conceptually in terms of trajectories over a landscape of free energies corresponding to different molecular configurations. In practice, energy landscapes can be difficult to measure. Single-molecule force spectroscopy (SMFS), whereby structural changes are monitored in molecules subjected to controlled forces, has emerged as a powerful tool for probing energy landscapes. We summarize methods for reconstructing landscapes from force spectroscopy measurements under both equilibrium and nonequilibrium conditions. Other complementary, but technically less demanding, methods provide a model-dependent characterization of key features of the landscape. Once reconstructed, energy landscapes can be used to study critical folding parameters, such as the characteristic transition times required for structural changes and the effective diffusion coefficient setting the timescale for motions over the landscape. We also discuss issues that complicate measurement and interpretation, including the possibility of multiple states or pathways and the effects of projecting multiple dimensions onto a single coordinate. Copyright © 2014 by Annual Reviews. All rights reserved.
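One of the equilibrium reconstruction ideas summarized above can be sketched in a few lines: Boltzmann inversion of an equilibrium extension histogram, G(x) = -kBT ln P(x). The two-state toy data, bin count, and the window used to read off the apparent barrier are all assumptions chosen for illustration, not measured force-spectroscopy data or the authors' reconstruction procedure.

```python
import numpy as np

# Boltzmann inversion of a 1-D equilibrium extension distribution:
# G(x) = -kB*T * ln P(x), applied to synthetic two-state data.
kBT = 4.11e-21                         # J, thermal energy near room temperature
rng = np.random.default_rng(0)

# Synthetic "measured" extensions: a molecule hopping between two states.
folded   = rng.normal(12e-9, 3e-9, 50_000)    # metres
unfolded = rng.normal(28e-9, 3e-9, 50_000)
x = np.concatenate([folded, unfolded])

counts, edges = np.histogram(x, bins=150, density=True)
centres = 0.5 * (edges[:-1] + edges[1:])
keep = counts > 0                      # avoid log(0) in empty bins
G = -kBT * np.log(counts[keep])        # free energy up to an additive constant
G -= G.min()                           # set the deepest well to zero

# Apparent barrier: highest point between the two wells (near 12 nm and 28 nm).
between = (centres[keep] > 16e-9) & (centres[keep] < 24e-9)
print("apparent barrier ~ %.1f kBT" % (G[between].max() / kBT))
```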


Grant
Agency: Cordis | Branch: FP7 | Program: CP-FP | Phase: HEALTH.2010.4.2-1 | Award Amount: 7.36M | Year: 2010

Assuming an annual birth rate of 10.25 births/1,000 population, approximately 25,000 Extremely Low Gestational Age Newborns (ELGANs) are born every year in the EU. Conservative figures estimate that approximately half of all these babies will develop low blood pressure and require treatment. However, no uniform criteria exist to define hypotension, and the evidence to support our current management strategies is limited. Many of these interventions have been derived from the adult literature and have not been validated in the newborn. Dopamine remains the most commonly used inotrope despite little evidence that it improves outcome. Hypotension is not only associated with mortality of preterm infants but is also associated with brain injury and impaired neurosensory development in ELGAN survivors. Preterm brain injury has far-reaching implications for the child, parents, family, health service and society at large. It is therefore essential that we now design and perform the appropriate trials to determine whether the infusion of inotropic agents is associated with improved outcome. We have assembled a consortium with expertise in key areas of neonatal cardiology, neonatology, neurophysiology, basic science and pharmacology with the intention of answering these questions. The objectives of the group are as follows:
1. To perform a multinational, randomized controlled trial to evaluate whether a more restricted approach to the diagnosis and management of hypotension, compared to a standard approach with dopamine as a first-line inotrope, affects survival without significant brain injury at 36 weeks gestational age in infants born at less than 28 weeks gestation, and affects survival without neurodevelopmental disability at 2 years corrected age.
2. To perform pharmacokinetic and pharmacodynamic studies of dopamine.
3. To develop and adapt a formulation of dopamine suitable for newborns in order to apply for a Paediatric Use Marketing Authorization.


Grant
Agency: Cordis | Branch: FP7 | Program: CP | Phase: ICT-2009.1.1 | Award Amount: 7.45M | Year: 2010

The availability of position information plays an increasing role in wireless communications networks already today and will be an integral part of future systems, which can inherently offer stand-alone positioning, especially in situations where conventional satellite-based positioning systems such as GPS fail (e.g., indoors). In this framework, positioning information is an important enabler either for location- and context-aware services or even for improving the communications system itself.

The WHERE2 project is a successor of the WHERE project and addresses the combination of positioning and communications in order to exploit synergies and to enhance the efficiency of future wireless communications systems. The key objective of WHERE2 is to assess the fundamental synergies between the two worlds of heterogeneous cooperative positioning and communications in the real world under realistic constraints. The estimation of the position of mobile terminals (MTs) is the main goal in WHERE2. The positioning algorithms combine measurements from heterogeneous infrastructure and complement them with cooperative measurements between MTs, additional information from inertial sensors, and context information. Based on the performance of the geo-aided positioning strategies (in the sense of accuracy, complexity, signalling overhead, reliability of the provided information, etc.), the impact on coordinated, cooperative, and cognitive networks is assessed. This is done under realistic scenarios and system parameters following ongoing standardization processes. A joint and integrated demonstration using multiple hardware platforms provides a verification of the performance of dedicated cooperative algorithms. All the tasks in WHERE2 are covered by different work packages, which are in close interaction to ensure integrated research on cooperative positioning and communications.
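A minimal sketch of the kind of position estimation described here (not the WHERE2 algorithms themselves): Gauss-Newton least squares applied to noisy range measurements from a mobile terminal to a set of known anchors. The anchor layout, noise level, and starting guess are hypothetical.

```python
import numpy as np

# Estimate a mobile terminal's 2-D position from noisy ranges to known anchors.
anchors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
true_pos = np.array([62.0, 37.0])
rng = np.random.default_rng(1)
ranges = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(0, 1.0, len(anchors))

x = np.array([50.0, 50.0])                     # initial guess (e.g. cell centre)
for _ in range(20):
    diffs = x - anchors                        # (N, 2) offsets to each anchor
    dists = np.linalg.norm(diffs, axis=1)      # predicted ranges
    residual = dists - ranges
    J = diffs / dists[:, None]                 # Jacobian of ranges w.r.t. position
    step, *_ = np.linalg.lstsq(J, -residual, rcond=None)   # Gauss-Newton step
    x += step
    if np.linalg.norm(step) < 1e-6:
        break

print("estimated position:", x, " true:", true_pos)
```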


Grant
Agency: Cordis | Branch: FP7 | Program: CP-FP | Phase: HEALTH-2007-3.1-1 | Award Amount: 4.22M | Year: 2009

Facilitating Implementation of Research Evidence (FIRE) is a proposed four year programme of research to identify and validate key factors determining the successful implementation of research evidence in practice. The study is underpinned by a conceptual framework, the Promoting Action on Research Implementation in Health Services (PARiHS) framework, which proposes that the successful implementation of research evidence is dependent on the complex interplay of the evidence, the context of implementation and the way the process is facilitated. The planned research will focus on evaluating the feasibility and effectiveness of facilitation as an implementation strategy. A randomised, controlled trial with three intervention arms (standard dissemination and two different models of facilitation) and six units in each of five countries (four in Europe, plus Canada; n=30) is planned. The units will be asked to implement research based guidance on continence promotion and receive differing levels of facilitation support to do so. Detailed contextual, process and outcome data will be collected to fully explore the complex processes at work during implementation. With the combination of an international consortium and experienced research team, a theory-driven, multi-method evaluation study and detailed attention to stakeholder involvement and dissemination throughout the research, the study has the potential to make a significant contribution to the knowledge and practice of translating research evidence at a clinical, organisational and policy level, within Europe and internationally.


Grant
Agency: GTR | Branch: NERC | Program: | Phase: Research Grant | Award Amount: 1.47M | Year: 2015

Concerns are growing about how much melting occurs on the surface of the Greenland Ice Sheet (GrIS), and how much this melting will contribute to sea level rise (1). It seems that the amount of melting is accelerating and that the impact on sea level rise is over 1 mm each year (2). This information is of concern to governmental policy makers around the world because of the risk to the viability of populated coastal and low-lying areas. There is currently a great scientific need to predict the amount of melting that will occur on the surface of the GrIS over the coming decades (3), since the uncertainties are high. The current models used to predict the amount of melting in a warmer climate rely heavily on determining the albedo, the ratio of how reflective the snow cover and the ice surface are to incoming solar energy. Surfaces which are whiter are said to have higher albedo, reflect more sunlight and melt less. Surfaces which are darker absorb more sunlight and so melt more. Just how the albedo varies over time depends on a number of factors, including how wet the snow and ice is.

One important factor that has been missed to date is bio-albedo. Each drop of water in wet snow and ice contains thousands of tiny microorganisms, mostly algae and cyanobacteria, which are pigmented - they have a built-in sunblock - to protect them from sunlight. These algae and cyanobacteria have a large impact on the albedo, lowering it significantly. They also glue together dust particles that are swept out of the air by the falling snow. These dust particles also contain soot from industrial activity and forest fires, and so the mix of pigmented microbes and dark dust at the surface produces a darker ice sheet. We urgently need to know more about the factors that lead to and limit the growth of the pigmented microbes. Recent work by our group in the darkest zone of the ice sheet surface in the SW of Greenland shows that the darkest areas have the highest numbers of cells. Were these algae to grow equally well in other areas of the ice sheet surface, then the rate of melting of the whole ice sheet would increase very quickly. A major concern is that there will be more wet ice surfaces for these microorganisms to grow in, and for longer, during a period of climate warming, and so the microorganisms will grow in greater numbers and over a larger area, lowering the albedo and increasing the amount of melt that occurs each year. The nutrient - plant food - that the microorganisms need comes from the ice crystals and dust on the ice sheet surface, and there are fears that increased N levels in snow and ice may contribute to the growth of the microorganisms. This project aims to be the first to examine the growth and spread of the microorganisms in a warming climate, and to incorporate biological darkening into models that predict the future melting of the GrIS.

References: 1. Sasgen, I. and 8 others. Timing and origin of recent regional ice-mass loss in Greenland. Earth and Planetary Science Letters 333-334, 293-303 (2012). 2. Rignot, E., Velicogna, I., van den Broeke, M. R., Monaghan, A. & Lenaerts, J. Acceleration of the contribution of the Greenland and Antarctic ice sheets to sea level rise. Geophys. Res. Lett. 38, L05503, doi:10.1029/2011gl046583 (2011). 3. Milne, G. A., Gehrels, W. R., Hughes, C. W. & Tamisiea, M. E. Identifying the causes of sea-level change. Nature Geosci. 2, 471-478 (2009).
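As a back-of-the-envelope illustration of why the albedo term matters so much in these melt models, the sketch below converts absorbed shortwave energy into an ice-melt rate for a few albedo values. The insolation figure and the assumption that all absorbed energy goes into melting are simplifications chosen for illustration, not values or outputs from the project's models.

```python
# Absorbed shortwave energy = (1 - albedo) x insolation; melting ice takes
# ~334 kJ per kg (latent heat of fusion). All absorbed energy is assumed to
# go into melt, which ignores the other energy-balance terms.
insolation = 250.0               # W/m^2, assumed mean summer shortwave flux
seconds_per_day = 86_400
latent_heat_fusion = 334_000.0   # J/kg
ice_density = 917.0              # kg/m^3

for albedo in (0.8, 0.6, 0.4):   # clean snow -> bare ice -> darkened ice
    absorbed = (1.0 - albedo) * insolation * seconds_per_day      # J/m^2 per day
    melt_mm_per_day = absorbed / latent_heat_fusion / ice_density * 1000.0
    print(f"albedo {albedo:.1f}: ~{melt_mm_per_day:.0f} mm of ice melt per day")
```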


Grant
Agency: GTR | Branch: ESRC | Program: | Phase: Research Grant | Award Amount: 24.84K | Year: 2012

The World Health Organization (WHO) model of age-friendly cities emphasizes the theme of supportive urban environments for older citizens. These are defined as environments that encourage active ageing by optimizing opportunities for health, participation and security in order to enhance quality of life as people age (WHO, Global Age-friendly Cities, 2007). The goal of establishing age-friendly cities should be seen in the context of pressures arising from population ageing and urbanisation. By 2030, two-thirds of the world's population will reside in cities, with - for urban areas in high-income countries - at least one-quarter of their populations aged 60 and over. This development raises important issues for older people: To what extent will cities develop as age-friendly communities? Will so-called global cities integrate or segregate their ageing populations? What kind of variations might occur across different types of urban areas? How are different groups of older people affected by urban change? The age-friendly city perspective has been influential in raising awareness about the impact of population ageing. Against this, the value of this approach has yet to be assessed in the context of modern cities influenced by pressures associated with global social and economic change.

The IPNS has four main objectives: first, to build a collaborative research-based network focused on understanding population ageing in the context of urban environments; second, to develop a research proposal for a cross-national study examining different approaches to building age-friendly cities; third, to provide a systematic review of data sets and other resources of relevance to developing a research proposal on age-friendly cities; fourth, to develop training for early career researchers working on ageing and urban issues. The network represents the first attempt to facilitate comparative research on the issue of age-friendly cities. It builds upon two meetings held at the Universities of Keele and Manchester in 2011 that sought to establish the basis for cross-national work around the age-friendly theme. The IPNS brings together world-class research groups in Europe, Hong Kong and North America, professionals concerned with urban design and architecture, and leading NGOs working in the field of ageing.

A range of activities have been identified over the two-year funding period:
(1) Preparation of research proposals for a cross-national study of approaches to developing age-friendly urban environments.
(2) Two workshops to specify theoretical and methodological issues raised by demographic change and urbanisation.
(3) A Summer School exploring links between data resources of potential relevance to the ageing and urbanisation theme and which might underpin research proposals.
(4) Master classes for network members from key researchers in the field of urbanisation and ageing.
(5) A workshop with a user-based theme developing older people's participation in research on building age-friendly communities.
(6) Themed workshops (face-to-face and via video link) to identify research and policy gaps drawing on interdisciplinary perspectives.

The IPNS will be sustained in a variety of ways at the end of the funding period. A collaborative research proposal, as well as one to maintain the network, will be major outputs from the project, and work with potential funding bodies will continue after 2014. Dissemination activities will continue through professional networks, symposia at major international conferences, and involvement in expert meetings. The project will continue to be publicized through a website maintained by the host UK HEI, and will continue to make a contribution to policy development around the theme of age-friendly cities, notably with the main NGOs working in the field.


Grant
Agency: Cordis | Branch: FP7 | Program: CP | Phase: ICT-2007.1.1 | Award Amount: 5.55M | Year: 2008

To increase ubiquitous and mobile network access and data rates, scientific and technological development is increasingly focusing on the integration of radio access networks (RANs). For the efficient usage of RANs, knowledge of the position of mobile terminals (MTs) is valuable information for allocating resources or predicting the allocation within a heterogeneous RAN infrastructure.

The main objective of WHERE is to combine wireless communications and navigation for the benefit of ubiquitous access in a future mobile radio system. The impact will be manifold, such as real-time localization knowledge in B3G/4G systems that allows them to increase efficiency. Satellite navigation systems will be supplemented with techniques that improve the accuracy and availability of position information.

The WHERE project addresses the combination of positioning and communication in order to exploit synergies and to improve the efficiency of future wireless communication systems. Thus, the estimation of the position of MTs based on different RANs is the main goal in WHERE. Positioning algorithms, and algorithms for combining several positioning measurements, are used to estimate the position of MTs. Appropriate definitions of scenarios and system parameters, together with channel propagation measurements and derived models, will make it possible to assess the performance of RAN-based positioning. Based on the performance of RAN positioning, location-based strategies and protocols will be developed in order to optimise, as well as to cross-optimise, different OSI layers of communication systems and RAT selection policies. Performance assessment of the algorithms is provided by theoretical studies and simulations. Hardware signal processing will provide a verification of the performance of dedicated algorithms under realistic conditions.

All the tasks are covered by different work packages, which are in close interaction to ensure integrated research on positioning and communications.


Grant
Agency: Cordis | Branch: FP7 | Program: CP-FP | Phase: HEALTH.2010.4.2-3 | Award Amount: 3.75M | Year: 2010

The emergence of suicidality in patients receiving drug treatment is of concern because of the overall burden and the possible link with completed suicide. The lack of uniform requirements for defining, detecting and recording suicidality and the presence of disease-related confounders create major problems. It is possible that Medication-Related Suicidality (MRS) differs from Psychopathology-Related Suicidality (PRS) in terms of phenomenology, clinical expression and time course, and may vary between children and adults. Unlike PRS, the time course of MRS may be associated with possible differences in drug pharmacokinetics; abrupt onset; absence of suicidality prior to the start of medication; and emergence of suicidality-related co-morbidities after treatment. This proposal will focus on developing a web-based comprehensive methodology for the assessment and monitoring of suicidality and its mediators in children and adolescents using the HealthTracker™ (a paediatric web-based health outcome monitoring system), with the aim of developing a Suicidality Assessment and Monitoring Module, a Bio-psycho-social Mediators of Suicidality Assessment Module, and a Suicidality-Related Psychiatric and Physical Illness Module. The information obtained will be used to computer-generate classifications of suicidality using the Classification of Suicide-Related Thoughts and Behaviour (Silverman et al, 2007) and the Columbia Classification Algorithm of Suicidal Assessment (C-CASA) (Posner et al, 2007). The existing Medication Characteristics Module will be expanded to allow documentation of the pharmacological characteristics of medication, to explore whether they mediate MRS. The methodology will then be tested in 3 paediatric observational trials (risperidone in conduct disorder, fluoxetine in depression, and montelukast in bronchial asthma) and standardized, so that it can be used in pharmacovigilance and in epidemiological, observational, and registration trials.


Grant
Agency: Cordis | Branch: FP7 | Program: CP-IP | Phase: KBBE.2011.2.2-02 | Award Amount: 7.84M | Year: 2012

NutriTech will build on the foundations of traditional human nutrition research using cutting-edge analytical technologies and methods to comprehensively evaluate the diet-health relationship and critically assess their usefulness for the future of nutrition research and human well-being. Technologies include genomics, transcriptomics, proteomics, metabolomics, laser scanning cytometry, NMR-based lipoprotein profiling and advanced imaging by MRI/MRS. All methods will be applied in an integrated manner to quantify the effect of diet on phenotypic flexibility, based on metabolic flexibility (the capacity for the organism to adapt fuel oxidation to fuel availability). However, NutriTech will move beyond the state of the art by applying these integrated methods to assess the underlying and related cell biological and genetic mechanisms and the multiple physiological processes of adaptation when homeostasis is challenged. Methods will in the first instance be evaluated within a human intervention study, and the resulting optimal methods will be validated in a number of existing cohorts against established endpoints. NutriTech will disseminate the harmonised and integrated technologies on a global scale by a large academic network including 6 non-EU partners and by providing an integrated and standardised data storage and evaluation platform. The impact of NutriTech will be manifold, and exploitation is crucial as major breakthroughs from our technology and research are expected. This will be achieved by collaboration with a consortium of 8 major food industries and by exploitation of specific technologies by our 6 SME partners. Overall, NutriTech will lay the foundations for successful integration of emerging technologies into nutrition research.


The International Nurses Association is pleased to welcome MaryJane Johnson, RN, to their prestigious organization with her upcoming publication in the Worldwide Leaders in Healthcare. MaryJane Johnson is a Palliative Care Nurse currently serving patients at Bayshore Home Health in Ottawa, Ontario, Canada, and is also affiliated with the University of Alberta Hospital. MaryJane holds over 33 years of experience and an extensive expertise in all facets of nursing, especially palliative care and dementia care. MaryJane Johnson graduated with her Nursing Degree from the University of Alberta in Edmonton, Canada in 1983, and remains associated with the University’s hospital to this day. To keep up to date with the latest advances in her field, MaryJane maintains a professional membership with the Registered Nurses Association of Ontario, the College of Nurses of Ontario, and the Canadian Hospice Palliative Care Association. MaryJane says that her great success is due to her passion for palliative care, based upon her belief that people should leave the world with the same care and love that they enter it. Learn more about MaryJane Johnson here and read her upcoming publication in Worldwide Leaders in Healthcare.


CALGARY, AB --(Marketwired - December 08, 2016) - A collaborative research project involving four universities in Alberta and Atlantic Canada has received major funding to address the issue of pipeline corrosion caused by microbial activity. The federal government announcement was made today by Minister of Science, Kirsty Duncan in Montreal. The $7.8 m comes through the Genome Canada 2015 Large-Scale Applied Research Project Competition (LSARP). It will support Managing Microbial Corrosion in Canadian Offshore and Onshore Oil Production, a four-year research project set to begin in January with an aim to improve pipeline integrity. "This work will definitely help to pinpoint how microbial activity causes corrosion in carbon steel infrastructure and help in its early detection so we can minimize leaks," says Lisa Gieg, an associate professor in the Department of Biological Sciences at the University of Calgary. "It's not just about pipelines, this research will look at all points of contact between oil and steel in extraction, production and processing. This work can help make the industry safer." Gieg is one of three project leaders who include John Wolodko, an associate professor and Alberta Innovates Strategic Chair in Bio and Industrial Materials at the University of Alberta; and Faisal Khan, a professor and the Vale Research Chair of Process Safety and Risk Engineering at Memorial University in St. John's, NL. Also working on the project is Rob Beiko, an associate professor in computer science and Canada Research Chair in Bioinformatics at Dalhousie University in Halifax, N.S., and Dr. Tesfaalem Haile who is a senior corrosion specialist at InnoTech Alberta in Devon, AB. Beiko will be building a database to analyse the microbiology and chemistry lab results, while Haile's team will be working with the University of Alberta to simulate microbial corrosion in the lab and at the pilot-scale. "To some degree, [microbial degradation of pipelines] is akin to a cancer diagnosis and treatment in the medical field," says Wolodko. "While there is significant knowledge and best practices in diagnosing and treating cancer, it is still not completely understood, and significant research is still required to further eliminate its impact to society. "While this problem is complex, this pan-Canadian project brings together research groups from across Canada in different science disciplines to tackle this problem collectively. By bringing this multidisciplinary focus to this problem, it is hoped that this research will lead to a better understanding of the breadth of microbes responsible for microbial corrosion, and will help academia and industry develop improved solutions to rapidly identify and mitigate this form of corrosion globally." While researchers at Memorial University are involved in all stages of the project, Faisal Khan, Head, Department of Process Engineering, and Director, C-RISE, Memorial University, says the focus for Memorial is on how microbes cause corrosion. Khan leads Memorial's multidisciplinary team, which also includes Kelly Hawboldt, Department of Process Engineering, Faculty of Engineering and Applied Science; and Christina Bottaro, Department of Chemistry, Faculty of Science. "We know that microbes cause corrosion, but we are examining how they cause corrosion," said Khan. "We are identifying the chemical source and how it reacts to the surface of the metal to cause corrosion. The risk models that we're developing will link the corrosion process to the outcome. 
This will be very important for industry when evaluating their level of corrosion intervention and control, and where to focus their resources on corrosion mitigation." Corrosion of steel infrastructure is estimated to cost the oil and gas industry in the range of $3 billion to $7 billion each year in maintenance, repairs and replacement. Microbiologically influenced corrosion is responsible for at least 20 per cent of that cost. The research team will take samples from a wide range of environments including offshore platforms and both upstream pipelines and transmission pipelines, which are all associated with different fluid chemistries and physical characteristics. By using the latest in genomics techniques, the interdisciplinary team will be able to look for trends related to specific microbes and chemistries that lead to microbial corrosion. Ultimately, the project will lead to better predictions of whether microbial corrosion will occur in a given oil and gas operation. All three project leads say the key to success in this project is collaboration. Bringing the experience, skills and expertise from across a range of disciplines and from multiple universities provides the best opportunity to succeed in finding solutions to ensure the safety of pipelines and other oil and gas infrastructure. "Genome Alberta and Genome Atlantic are pleased to be supporting a major study that will develop technologies to proactively detect and pinpoint microbial corrosion in both offshore and onshore oil production," notes David Bailey, President and CEO, Genome Alberta. "These researchers will apply their combined expertise to help address the protection of our natural environment, as well as our growing energy needs," says John Reynolds, acting vice-president (research) at the University of Calgary. "We look forward to working with our research partners and funders who have joined together to support this important work through this Genome Canada award." This grant was one of 13 projects that received funding in an announcement made by the federal government Thursday. Combined with co-funding from the provinces, international organizations and the private sector, the total announcement is worth $110 million. This includes a second project involving a University of Calgary lead to research methods of bioremediation of potential oil spills in the arctic. All the funded projects involve emerging knowledge about genomics (e.g., the genetic makeup of living organisms) to help address challenges in the natural resource and environmental sectors. The project will be managed by Genome Alberta in conjunction with Genome Atlantic, and with an international collaboration of partners that are working together to ensure safer and more secure hydrocarbon energy production: Genome Canada, Alberta Economic Development & Trade, Research & Development Corporation of Newfoundland & Labrador, University of Newcastle upon Tyne, Natural Resources Canada, InnoTech Alberta, VIA University College, DNV-GL Canada, U of C Industrial Research Chair, and in-kind support from a variety of industry partners. About the University of Calgary The University of Calgary is making tremendous progress on its journey to become one of Canada's top five research universities, where research and innovative teaching go hand in hand, and where we fully engage the communities we both serve and lead. This strategy is called Eyes High, inspired by the university's Gaelic motto, which translates as 'I will lift up my eyes.' 
For more information, visit ucalgary.ca. Stay up to date with University of Calgary news headlines on Twitter @UCalgary. For details on faculties and how to reach experts go to our media centre at ucalgary.ca/news/media. About the University of Alberta The University of Alberta in Edmonton is one of Canada's top teaching and research universities, with an international reputation for excellence across the humanities, sciences, creative arts, business, engineering, and health sciences. Home to 39,000 students and 15,000 faculty and staff, the university has an annual budget of $1.84 billion and attracts nearly $450 million in sponsored research revenue. The U of A offers close to 400 rigorous undergraduate, graduate, and professional programs in 18 faculties on five campuses-including one rural and one francophone campus. The university has more than 275,000 alumni worldwide. The university and its people remain dedicated to the promise made in 1908 by founding president Henry Marshall Tory that knowledge shall be used for "uplifting the whole people." About Memorial University Memorial University is one of the largest universities in Atlantic Canada. As the province's only university, Memorial plays an integral role in the education and cultural life of Newfoundland and Labrador. Offering diverse undergraduate and graduate programs to almost 18,000 students, Memorial provides a distinctive and stimulating environment for learning in St. John's, a safe friendly city with great historic charm, a vibrant cultural life and easy access to a wide range of outdoor activities. About Genome Alberta Genome Alberta is a publicly funded not-for-profit genomics research funding organization based in Calgary, Alberta but leads projects at institutions around the province and participates in a variety of other projects across the country. In partnership with Genome Canada, Industry Canada, and the Province of Alberta, Genome Alberta was established in 2005 to focus on genomics as one of the central components of the Life Sciences Initiative in Alberta, and to help position genomics as a core research effort. For more information on the range of projects led and managed by Genome Alberta, visit http://GenomeAlberta.ca


News Article | March 22, 2016
Site: motherboard.vice.com

In 2008, Canada eliminated the position of national science adviser, angering scientists who saw the office as a key point of contact between the government and the scientific community. Over the next eight years, Canada went to war on science by preventing researchers from talking to the press (in a word, “muzzling”), and cutting billions in funding for research. Along the way, Canada gained a reputation for being flagrantly anti-science. Now, Canada is looking to revive the role of the national science adviser, and undo some of the damage done during the Harper years, with a “chief science officer.” Whoever is chosen for the position, and when—staff of science minister Kirsty Duncan would neither confirm nor deny that they will be named with the release of the federal budget on Tuesday—they will have one hell of a job ahead of them. Although much of the role of the chief science officer appears undefined at the moment, one theme overarches the entire discussion: transparency. When Prime Minister Justin Trudeau appointed Duncan, he said that the chief science officer would be mandated to “ensure that government science is fully available to the public, that scientists are able to speak freely about their work, and that scientific analyses are considered when the government makes decisions.” The previous Canadian government’s dubious record on science is a big ship to turn around, and it’s been on the same, dirge-like course for nearly a decade. With that in mind, here are some badass scientists that we think would be perfect for the job.

Katie Gibbs knows how to get people fired up about transparency (resist the urge to fall asleep after reading that word), which is pretty damn impressive. In 2012, the Harper government’s campaign to muzzle scientists was in full swing, and Gibbs was one of the chief organizers behind a protest that ended up swelling into thousands of angry researchers marching on Parliament Hill. A scientist by training, and a staunch advocate of government transparency by trade, Gibbs hasn’t let up on her cage-rattling since that day four years ago. In the intervening years, she’s helped to run Evidence for Democracy, an advocacy group that sprung up in the wake of the protest. Bringing her outlook and history of campaigning for transparency into the government itself would be a big move.

BRENDA PARLEE, Canada Research Chair in Social Responses to Ecological Change, University of Alberta

Parlee’s bread and butter is researching the impacts of climate change, but with a focus on aboriginal beliefs that is all too uncommon in Canadian science today. She and her team of students go out into the field to engage with indigenous communities about changes to their environment as a result of climate change—the declining populations of certain animals, for example. In 2013, she helped organize a permanent exhibit at the University of Alberta called “Elders as Scientists” to raise awareness about indigenous knowledge systems. Appointing Parlee would make aboriginal knowledge a part of the communication process between scientists, the government, and the public. Canada’s track record with our indigenous peoples has been pretty awful in nearly every regard for, well, ever, and including them and their knowledge into our science priorities would be a welcome gesture.

Bourque was once a climatologist for Environment Canada, but these days he mostly specializes in handing out scientific knowledge suplexes.
Who better to take on the role of bridging the gap between science, government, and the public? He’s served as the executive director of Ouranos, a Canadian climate change think tank, since 2013, so he knows how to run an organization. He’s also somewhat of a firebrand when it comes to keeping temperatures on an even keel, which is a plus, and doesn’t hesitate to lay out the scientific consensus about climate in no uncertain terms—even when faced by government ministers. Basically, he’s got the cred and isn’t afraid to flaunt it.


News Article | November 1, 2016
Site: www.marketwired.com

EDMONTON, AB--(Marketwired - November 01, 2016) - The PCL family of companies is pleased to announce that succession transition has officially taken place for its leadership position of President and Chief Executive Officer. Dave Filipchuk is appointed the eighth President and CEO of PCL in its 110-year history. Mr. Filipchuk previously held the position of Deputy CEO, and before that was President and COO, Canadian and Australian Operations, with responsibility for the performance of PCL's Buildings and Civil Infrastructure divisions. Mr. Filipchuk has a BSc degree in civil engineering from the University of Alberta and attended the Ivey Executive program at the University of Western Ontario. He is Gold Seal certified and a member of APEGA. Mr. Filipchuk has been with PCL for 32 years and possesses a wealth of knowledge of both the company and the construction industry, having lived and worked in both Canada and the United States in PCL's buildings and civil infrastructure sectors. "I am extremely proud and excited to assume the position of President and CEO at PCL," said Mr. Filipchuk. "Guiding a company with such a storied and successful history is an opportunity I look forward to enjoying well into the future. I would like to thank Paul Douglas for his tireless work and dedication in leading PCL for the past seven years, and I congratulate him on an amazing career in construction and on his new role with our company." Mr. Douglas assumes the role of Chairman with PCL Construction's Board of Directors. "Succession planning is all about having the right people in the right place at the right time," said Mr. Douglas in talking about handing over the reins to Mr. Filipchuk. "We take succession seriously at all levels of our organization and make sure an appropriate amount of time is provided for a seamless transition to preserve continuity in our business. Dave Filipchuk has my, and the entire board of directors', full support in officially becoming the eighth CEO of this great company." During his 31 years with PCL, and apart from leading PCL to some of its most successful years to date, Mr. Douglas has received numerous accolades. Among those are recognition as Alberta's 2015 Business Person of the Year and inclusion on the list of Alberta's 50 Most Influential People for the past two years. About PCL Construction PCL is a group of independent construction companies that carries out work across Canada, the United States, the Caribbean, and in Australia. These diverse operations in the civil infrastructure, heavy industrial, and buildings markets are supported by a strategic presence in 31 major centers. Together, these companies have an annual construction volume of $8.5 billion, making PCL the largest contracting organization in Canada and one of the largest in North America. Watch us build at PCL.com.


News Article | December 7, 2016
Site: www.nature.com

The sequencing of a 10,600-year-old genome has settled a lengthy legal dispute over who should own the oldest mummy in North America — and given scientists a rare insight into early inhabitants of the Americas. The controversy centred on the ‘Spirit Cave Mummy’, a human skeleton unearthed in 1940 in northwest Nevada. The Fallon Paiute-Shoshone Tribe has long argued that it should be given the remains for reburial, whereas the US government opposed repatriation. Now, genetic analysis has proved that the skeleton is more closely related to contemporary Native Americans than to other global populations. The mummy was handed over to the tribe on 22 November. The genome of the Spirit Cave Mummy is significant because it could help to reveal how ancient humans settled the Americas, says Jennifer Raff, an anthropological geneticist at the University of Kansas in Lawrence. “It’s been a quest for a lot of geneticists to understand what the earliest peoples here looked like,” she says. The case follows the US government’s decision this year that another controversial skeleton, an 8,500-year-old human known as Kennewick Man, is Native American and qualifies for repatriation on the basis of genome sequencing. Some researchers lament such decisions because the buried skeletons are then unavailable for scientific study. But others point out that science could benefit if Native American tribes use ancient DNA to secure the return of more remains, because this may deliver long-sought data on the peopling of the region. “At least we get the knowledge before the remains are put back in the ground,” says Steven Simms, an archaeologist at Utah State University in Logan, who has studied the Spirit Cave Mummy. “We’ve got a lot of material in this country that’s been repatriated and never will be available to science.” The Spirit Cave Mummy is one of a handful of skeletons from the Americas that are more than 10,000 years old (see ‘Sequencing North American skeletons’). Archaeologists Georgia and Sydney Wheeler discovered it in Nevada’s Spirit Cave in 1940. The skeleton, an adult male aged around 40 at the time of his death, was shrouded in a rabbit-skin blanket and reed mats and was wearing moccasins; he was found with the cremated or partial remains of three other individuals. The Wheelers concluded that the remains were 1,500–2,000 years old. But when radiocarbon dating in the 1990s determined that they were much older, the finds drew attention from both scientists and the Fallon Paiute-Shoshone Tribe. The tribe considers Spirit Cave to be part of its ancestral homeland and wanted the remains and artefacts. The US Native American Graves Protection and Repatriation Act (NAGPRA) mandates that remains be returned to affiliated tribes if they are deemed ‘Native American’ by biological or cultural connections. In 2000, the US government’s Bureau of Land Management (BLM), which oversees the land where the mummy was found, decided against repatriation. The tribe sued, and in 2006 a US District Court judge ordered the agency to reconsider the case, calling the BLM’s decision “arbitrary and capricious”. The mummy’s remains were stored out of view in a Nevada museum, and placed off-limits to most research, except for efforts to determine its ancestry. In a 2014 monograph based on earlier examination of the remains, US anthropologists Douglas Owsley and Richard Jantz noted that the mummy’s skull was shaped differently from those of contemporary Native Americans from the region (Kennewick Man, Texas A&M Univ. Press). 
That contributed to the BLM’s decision to seek DNA analysis, says Bryan Hockett, an archaeologist at the bureau’s Nevada office in Reno. The tribe was originally opposed to genetic analysis to prove the mummy’s ancestry, says Hockett, but eventually agreed. In October 2015, Eske Willerslev, an evolutionary geneticist who specializes in ancient DNA analysis at the Natural History Museum of Denmark in Copenhagen, travelled to Nevada to collect bone and tooth samples from the mummy and other remains for DNA sequencing, after meeting with tribe members several months earlier. Willerslev’s team concluded that the Spirit Cave remains are more closely related to indigenous groups in North and South America than to any other contemporary population. The BLM gave Nature a preliminary scientific report from the team, and a 31-page memo outlining its reasoning for repatriating the remains. Willerslev declined to comment because his team’s data have not yet been published in a journal. Hockett says the genome findings offered the only unequivocal evidence that the remains are Native American. No evidence links the remains to any specific group — not even the ancient DNA — but NAGPRA allows the return of human remains to tribes that have a geographical connection. Len George, chair of the Fallon Paiute-Shoshone Tribe, did not respond to requests for comment. The genome of a 12,600-year-old skeleton from Montana, called the Anzick Child, is the only other published ancient genome from the Americas that is older than 10,000 years. The Spirit Cave remains and the Anzick Child both seem genetically closer to South American groups than to some North American groups, and the migrations behind this pattern are not yet understood, says Raff. One possibility is that both individuals lived before their local populations began spreading across regions of the Americas, says population geneticist Pontus Skoglund at Harvard Medical School in Boston, Massachusetts. Sequencing ancient DNA, which has become easier and cheaper in recent years, could help to determine the origins of many other ancient bones. Remains as old as the Spirit Cave Mummy are rare, but there are many younger remains that are not clearly affiliated to any tribe, and which might now be deemed Native American through ancient DNA sequencing and thus repatriated, scientists say. The BLM announced its intentions to repatriate the Spirit Cave remains in October and received no formal objections, says Hockett. But Jantz, the anthropologist who co-led the Spirit Cave skull study and is based at the University of Tennessee in Knoxville, laments the decision. “It’s just a sad day for science. We will lose a lot of information about the history of human occupation in the Americas as a consequence,” he says. Further molecular study of the remains could identify details about the Spirit Cave individuals — from the foods they consumed to the diseases that afflicted them. “I think Willerslev is the last guy who is going to look at these things,” Jantz adds. Dennis O’Rourke, a biological anthropologist at the University of Kansas, says he would like to see more researchers follow Willerslev’s example and work with Native American groups to decide whether to sequence ancient human remains, rebury them, or both. And Kimberley TallBear, an anthropologist who studies the views of indigenous groups on genetics at the University of Alberta in Edmonton, Canada, says researchers with O’Rourke’s attitude to studying ancient remains are becoming more common. 
She thinks it is wrong for scientists opposed to repatriation to conclude that tribes are not open to research. “Tribes do not like having a scientific world view politically shoved down their throat,” she says, “but there is interest in the science.”


News Article | December 6, 2016
Site: www.sciencemag.org

It’s often said that the heavens run like clockwork. Astronomers can easily predict eclipses, and they can foretell to a fraction of a second when the moon passes in front of a distant star. They can also rewind the clock, and find out when and where these events happened in the past. But a new historical survey of hundreds of eclipses, some dating back to the 8th century B.C.E., finds that they aren’t as predictable as scientists thought. That’s because Earth’s spin is slowing down slightly. Not only that, the study also identifies short-term hiccups in the spin rate that have been missed by cruder models. “There have been about a million days since 720 B.C.,” says Leslie Morrison, an astronomer now retired from the Royal Observatory Greenwich in London. Over such a long time, even a gradual slowdown in Earth’s rotation becomes evident, he notes. To conduct the research, Morrison and his colleagues analyzed the timing and location of eclipses from ancient Greece, China, the Middle East, and other areas worldwide. The oldest event in the catalog, a total solar eclipse that occurred in 720 B.C.E., was observed by astronomers at a site in Babylon (now modern-day Iraq). But, working backward, today’s astronomers would have predicted that the eclipse should have been seen a quarter of a world away, somewhere in the western Atlantic Ocean. The discrepancy means Earth’s rotation has gradually slowed since the 8th century B.C.E. Overall, the accumulated effect of Earth’s slowing spin amounts to a timing offset of about 6 hours over the past 2740 years, the team reports today. That sounds like a lot, but it works out to the duration of a 24-hour day being lengthened by about 1.78 milliseconds over the course of a century. The interaction between ocean tides and Earth’s continents is the biggest factor in slowing Earth down, Morrison explains. As those landmasses get slammed by the seas, Earth loses some rotational momentum. But models considering only this phenomenon suggest that Earth’s rotation should be slowing down more than observed, by about 2.3 milliseconds per day every century. So other factors must be at work, the researchers say. One major influence is the slow rebound of crust that was weighed down by massive ice sheets during the last ice age that have since melted away. Whereas the crust is springing upward at high latitudes, at lower latitudes the planet is shrinking inward. Like an ice skater bringing her arms inward to spin faster, that overall shift of mass is speeding up Earth’s rotation, Morrison says. Superimposed on that long-term trend, though, are small decade-to-decade variations in spin rate. These glitches are apparent from astronomical observations of occultations of stars by the moon—miniature eclipses that occur when the moon passes in front of a distant star. The variations stem from momentum shifts between Earth’s liquid outer core and the solid mantle that overlies it, Morrison explains. Mathieu Dumberry, a planetary scientist at the University of Alberta in Edmonton, Canada, who was not involved in the new study, says these momentum exchanges are poorly understood. Nevertheless, he notes, the team’s findings are “a wonderful new piece of evidence that helps us measure the magnitude and direction of such interactions deep within the Earth.” The new data should help scientists better model the movement of liquid iron in the outer core, which gives rise to Earth’s magnetic field, says Duncan Agnew, a geophysicist at Scripps Institution of Oceanography in San Diego, California.
Although these small time shifts are important for scientists to consider over geologic timescales, eclipse predictions are still pretty good in the short term. The next total eclipse of the sun will darken a narrow path stretching across the United States on 21 August next year, give or take a millisecond.
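The arithmetic behind those figures is easy to check. The sketch below is a back-of-envelope calculation in Python that assumes nothing beyond the article's own numbers and a steady, linear lengthening of the day of about 1.78 milliseconds per century since 720 B.C.E.; it shows how millisecond-scale changes, summed over roughly a million days, add up to an offset of several hours and a shift of about a quarter turn in where an ancient eclipse appears to fall.

```python
# Back-of-envelope check of the eclipse argument, assuming a steady, linear
# lengthening of the day by ~1.78 ms per century (the figure quoted above).
RATE_MS_PER_CENTURY = 1.78      # growth in the length of the day
YEARS = 2740                    # 720 B.C.E. to the present
DAYS_PER_CENTURY = 36525        # 365.25 days x 100

centuries = YEARS / 100.0

# The excess day length ramps from 0 ms today to rate * centuries back then,
# so summing it over every elapsed day is the integral of a linear ramp:
#   offset = 0.5 * rate * centuries^2 * days_per_century
offset_ms = 0.5 * RATE_MS_PER_CENTURY * centuries ** 2 * DAYS_PER_CENTURY
offset_hours = offset_ms / 1000 / 3600

print(f"elapsed days: {YEARS * 365.25:,.0f}")             # about a million days
print(f"accumulated clock offset: {offset_hours:.1f} h")  # roughly 6-7 hours

# In that many hours Earth turns through ~100 degrees of longitude, which is
# why a 720 B.C.E. eclipse computed with a constant spin rate lands in the
# Atlantic rather than over Babylon.
print(f"longitude shift: {offset_hours / 24 * 360:.0f} degrees")
```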


News Article | September 13, 2016
Site: phys.org

A study, by San Diego Zoo Global conservationists, released this week (Sept. 12, 2016) is shedding new light on how scientists evaluate polar bear diet and weight loss during their fasting season. On average, a polar bear loses up to 30 percent of its total body mass while fasting during the open-water season. Although some scientists previously believed land-based foods could supplement the bears' nutritional needs until the sea ice returns, a new study published in the scientific journal Physiological and Biochemical Zoology has revealed that access to terrestrial food is not sufficient to reduce the rate of body mass loss for fasting polar bears. The study—undertaken by Manitoba Sustainable Development, the University of Alberta, and Environment and Climate Change Canada—weighed polar bears that were detained in the Polar Bear Holding Facility in Churchill, Manitoba, Canada from 2009 to 2014. Polar bears were kept in this facility as part of the Polar Bear Alert Program, which aims to reduce conflict between humans and polar bears around the town of Churchill. To prevent habituation, polar bears are not fed while in the facility, which allowed for a controlled measure of their weight loss. On average, polar bears lost 2.2 pounds (1 kilogram) of mass per day—exactly the same amount as free-ranging bears measured during the ice-free season on the coastline of Hudson Bay. Scientists reported that even with land-based food opportunities, polar bears lost the same amount of weight. "Some studies have suggested that polar bears could adapt to land-based foods to offset the missing calories during a shortened hunting period on the ice," said Nicholas Pilfold, Ph. D., lead author of the study and a postdoctoral associate in Applied Animal Ecology at San Diego Zoo Global. "Yet, our results contradict this, as unfed polar bears in our study lost mass at the same rate as free-ranging bears that had access to land-based food." Researchers also estimated starvation timelines for adult males and subadults, and found that subadults were more likely to starve before their adult counterparts. "Subadult polar bears have lower fat stores, and the added energy demands associated with growth," said Pilfold, "Future reductions to on-ice hunting opportunities due to sea ice loss will affect the younger polar bears first—especially given that these bears are less-experienced hunters." Today, it is estimated that there are approximately 26,000 polar bears throughout the Arctic. The Western Hudson Bay subpopulation of polar bears is currently stable, as the length of the ice-free season has shown recent short-term stability. However, past increases in the length of the ice-free season have caused declines in the number of bears, with subadults having a higher mortality rate than adults. The current research helps to shed light on the mechanisms of past population declines, as well as to provide an indication of what may occur if sea ice declines again. For nearly a decade, San Diego Zoo Global's researchers and its U.S. and Canadian partners have focused on developing conservation strategies to boost wild populations of polar bears. At the San Diego Zoo, polar bears "collaborated" with researchers at the U.S. Geological Survey in Alaska by wearing an accelerometer collar to track their movements. 
The data gained from accelerometers on collared polar bears—at the Zoo and in the Arctic—will provide scientists with new insights into the bears' daily behavior, movements and energy needs, and a better understanding of the effects of climate change on polar bears. Bringing species back from the brink of extinction is the goal of San Diego Zoo Global. As a leader in conservation, the work of San Diego Zoo Global includes on-site wildlife conservation efforts (representing both plants and animals) at the San Diego Zoo, San Diego Zoo Safari Park, and San Diego Zoo Institute for Conservation Research, as well as international field programs on six continents. The work of these entities is inspiring children through the San Diego Zoo Kids network, reaching out through the internet and in children's hospitals nationwide. The work of San Diego Zoo Global is made possible by the San Diego Zoo Global Wildlife Conservancy and is supported in part by the Foundation of San Diego Zoo Global.
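The starvation timelines mentioned above come down to simple arithmetic once the rate of loss is known. The snippet below is illustrative only: the 1-kilogram-per-day loss and the roughly 30 percent limit on tolerable mass loss are taken from the article, while the starting body masses are assumed round numbers, not data from the study.

```python
# Illustrative fasting timelines. LOSS_KG_PER_DAY and MAX_FRACTION_LOST come
# from the figures in the article; the starting masses are assumptions chosen
# only to show why subadults run out of reserves first.
LOSS_KG_PER_DAY = 1.0
MAX_FRACTION_LOST = 0.30

for label, start_mass_kg in [("adult male (assumed 500 kg)", 500.0),
                             ("subadult (assumed 200 kg)", 200.0)]:
    days = start_mass_kg * MAX_FRACTION_LOST / LOSS_KG_PER_DAY
    print(f"{label}: ~{days:.0f} days to lose ~30% of body mass")
```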


News Article | October 26, 2016
Site: www.eurekalert.org

(Edmonton, AB) ProTraining, a University of Alberta spinoff company that provides mental health education and training to emergency personnel, announced today that it won a coveted Brandon Hall Group Gold Award for Excellence in the Learning Category (Best Advance in Custom Content). The prestigious awards program has been running for more than 20 years, and is often referred to as the 'Academy Awards' of the learning industry. ProTraining developed the program to increase the skills of first responders to de-escalate potentially difficult situations, and to improve interactions with individuals who may have mental health issues. The program is delivered via a combination of online and in-class training. The online course is the first step in the program, developed in partnership with Edmonton-based testing and training company Yardstick. "Our online program is based on empirical peer-reviewed research on a new approach to police training. This was conducted over a multi-year period, and the research program led to significant decreases in the use of physical force by police. There were also multiple other benefits from this program," says Dr. Peter Silverstone, a professor and psychiatrist in the University of Alberta's Department of Psychiatry, and chair of the Advisory Board at ProTraining. After the peer-reviewed training proved so successful, ProTraining was formed as a University of Alberta spinoff company with the help of TEC Edmonton and Yardstick. "We are always thrilled to hear about our clients' successes," says Jayant Kumar, TEC Edmonton vice president, Technology Management. "We'd like to congratulate ProTraining on being recognized for providing a valuable service to the first responder community." "Officers interacting with the public need specific skills to decrease the risk of negative outcomes. This online version offers a unique, engaging and interactive way to not only inform officers about useful skills, but also includes realistic video scenarios allowing them to practice these skills" says Dr. Yasmeen Krameddine, post-doctoral fellow in the Department of Psychiatry, and subject matter expert for this online course. According to Krameddine, the ProTraining course is a world leading online program. "Winning a Brandon Hall Group Excellence Award means an organization is an elite innovator within Human Capital Management. The award signifies that the organization's work represents a leading practice in that HCM function," said Rachel Cooke, chief operating officer of Brandon Hall Group and head of the awards program. "Their achievement is also notable because of the positive impact their work in HCM has on business results. All award winners have to demonstrate a measurable benefit to the business, not just the HCM operation. That's an important distinction. Our HCM award winners are helping to transform the business." More information about the innovative program can be found at protraining.com or on the Canadian Police Knowledge Network. ProTraining is a University of Alberta spinoff company created with the help of TEC Edmonton. ProTraining provides mental health and de-escalation training courses focusing on saving lives and preventing violent encounters in police interactions using online and classroom training. Courses are developed in partnership with law enforcement, protective service officers, bus operators and security professionals in Canada, America, and European Police Organizations. Contact information@protraining.com for custom programming. 
Brandon Hall Group is an HCM research and advisory services firm that provides insights around key performance areas, including Learning and Development, Talent Management, Leadership Development, Talent Acquisition and Workforce Management. TEC Edmonton helps technology entrepreneurs accelerate their growth. In addition to being the commercialization agent for University of Alberta technologies, TEC Edmonton operates Greater Edmonton's largest accelerator for early-stage technology companies, including both university spinoffs and companies from the broader community. TEC Edmonton provides client services in four broad areas: business development, funding and finance, technology commercialization and entrepreneur development. In 2015, TEC Edmonton was identified by the Swedish University Business Incubator (UBI) Index as the 4th best university business incubator in North America, and was also named Canadian "Incubator of the Year" at the 2014 Startup Canada Awards. For more information, visit http://www. .


The International Nurses Association is pleased to welcome MaryJane Johnson, RN, to their prestigious organization with her upcoming publication in the Worldwide Leaders in Healthcare. MaryJane Johnson is a Palliative Care Nurse currently serving patients at Health and Social Services, Yukon Government, Whitehorse, Yukon, Canada. MaryJane holds over 33 years of experience and an extensive expertise in all facets of nursing, especially palliative care and dementia care. MaryJane Johnson graduated with her Nursing Degree from the University of Alberta in Edmonton, Canada in 1983, and remains associated with the University’s hospital to this day. To keep up to date with the latest advances in her field, MaryJane maintains a professional membership with the Registered Nurses Association of the Yukon, the College of Nurses of Ontario, and the Canadian Hospice Palliative Care Association. MaryJane says that her great success is due to her passion for palliative care, based upon her belief that people should leave the world with the same care and love that they enter it. Learn more about MaryJane Johnson here and read her upcoming publication in Worldwide Leaders in Healthcare.


News Article | October 27, 2016
Site: www.eurekalert.org

To the naked eye, ancient rocks may look completely inhospitable, but in reality, they can sustain an entire ecosystem of microbial communities in their fracture waters isolated from sunlight for millions, if not billions, of years. New scientific findings discovered the source of the essential energy to sustain the life kilometers below Earth's surface with implications for life not only on our planet but also on Mars. The two essential substances used by the deep subsurface microbes are hydrogen and sulfate dissolved in the fracture water. There is a basic understanding that reactions between the water and minerals in the rock produce hydrogen, but what about sulfate? "We are very interested in the source of sulfate and how sustainable it is in those long isolated fracture water systems" says Long Li, assistant professor in the University of Alberta's Department of Earth and Atmospheric Sciences and Canada Research Chair in Stable Isotope Geochemistry. Li--who worked as postdoctoral fellow with Barbara Sherwood Lollar, professor in the Department of Earth Sciences at University of Toronto and Boswell Wing in the Department of Earth and Planetary Sciences at McGill University--examined the relative ratios of several types of sulfur atoms that have different neutron numbers, namely sulfur isotopes, in the dissolved sulfate in the billion-year-old water collected from 2.4 kilometers below the surface in Timmins, Ontario, Canada. They observed a unique distribution pattern called sulfur isotope mass-independent fractionation. "To date this signature of ancient Earth sulfur has only been found in rocks and minerals," says Sherwood Lollar. "Based on the match in the isotopic signature between the dissolved sulfate and the pyrite minerals in the 2.7 billion year old host rocks, we demonstrated that the sulfate was produced by oxidation of sulfide minerals in the host rocks by oxidants generated by radiolysis of water. The same pyrite and other sulfide ores that make these rocks ideal for economic mining of metals, produce the 'fuel' for microbial metabolisms." The authors demonstrate that the sulfate in this ancient water is not modern sulfate from surface water flowing down, but instead, just like the hydrogen, is actually produced in place by reaction between the water and the wall rock. What this means is that the reaction will occur naturally and can persist for as long as the water and rock are in contact, potentially billions of years. "The wow factor is high," says Li, who explains that billion-year-old rocks, exposed or unexposed, compose more than half of Earth's continental crust. "If geological processes can naturally supply a steady energy source in these rocks, the modern terrestrial subsurface biosphere may expand significantly both in breadth and depth." Some locations on Mars have similar mineral assemblages to the rocks in Timmins. This allows the scientists to speculate that microbial life can indeed be supported on Mars. "Because this is a fairly common geological setting on modern Mars, we think that as long as the right minerals and liquid water are present, maybe kilometers below the Martian surface, they may interact and produce energy for life, if there is any." Li concludes that if there is any life on Mars right now--a question that has long piqued people's curiosity--the best bet is to look below the surface. 
"Sulfur mass-independent fractionation in subsurface fracture waters indicates a long-standing sulfur cycle in Precambrian rocks" appeared in the October 27 issue of Nature Communications, an open access journal part of the Nature group of publications.


News Article | February 15, 2017
Site: www.marketwired.com

EDMONTON, ALBERTA--(Marketwired - Feb. 8, 2017) - Dalmac Energy Inc. (the "Company" or "Dalmac") (TSX VENTURE:DAL) is pleased to announce the appointment of Su Chun, of Edmonton, Alberta, to be its Chief Financial Officer, effective immediately. Mr. Jonathan Gallo, the previous Chief Financial Officer of Dalmac, only provided his services to Dalmac on a contract basis and instead will be focusing his efforts on his primary accounting business. As part of Dalmac's succession plan, Ms. Chun has worked at Dalmac for nearly two years learning under the instruction of Mr. Gallo. Further, in order to ensure a smooth transition, Mr. Gallo will still be available to Dalmac for consultation on a contract basis as needed. Su Chun, CA, CPA, has served as the Controller at Dalmac Oilfield Services Inc. from June 2015 to the present. She has been an integral part of navigating Dalmac through the financial downturn in the Alberta oilfield services industry, cost restructuring at Dalmac, and implementing a new financing agreement to strengthen the operating cash flows of the Company. Ms. Chun brings over seven years of financial accounting, assurance, and management experience to Dalmac. Prior to assuming her role as Controller at Dalmac, she worked at MNP LLP, a national accounting firm, and at an international oil and gas company based out of Calgary. Ms. Chun holds a Bachelor of Commerce degree from the University of Alberta and a Chartered Professional Accountant designation. Dalmac is a diversified provider of well stimulation and fluid management services, which include fluid transfers, hot oiling, frack heating, well acidizing, tank rentals and equipment moving. Dalmac is also a key distributor and supplier of glycol- and methanol-related products. Headquartered out of Edmonton with operations branches in Fox Creek, Edson, and Warburg, Dalmac has been servicing the oil and gas fields of west central Alberta for over 60 years. The Company has master service agreements (MSAs) with most of North America's leading exploration and production companies. Approximately half of Dalmac's revenue comes from recurring fluid transfer and maintenance-related operations and the balance is derived from service-related activities such as drilling, completions and well workovers. Dalmac is headquartered in Edmonton, Alberta, Canada and trades on the TSX Venture Exchange under the symbol "DAL". Additional information on the Company is available on its website at www.dalmacenergy.com and on SEDAR at sedar.com. Neither TSX Venture Exchange nor its Regulation Services Provider (as that term is defined in policies of the TSX Venture Exchange) accepts responsibility for the adequacy or accuracy of this release.


The International Association of HealthCare Professionals is pleased to welcome Donna Marie Wachowich, MD, FCFP, Family Practitioner, to their prestigious organization with her upcoming publication in The Leading Physicians of the World. Dr. Donna Marie Wachowich is a highly trained and qualified physician with an extensive expertise in all facets of her work, especially family care and low risk obstetrics. Dr. Wachowich has been in practice for 30 years and is currently serving patients within Valley View Family Practice in Calgary, Alberta, Canada. She is also affiliated with the Foothills Medical Center. Dr. Wachowich attended the University of Alberta in Edmonton, Canada, where she graduated with her Medical Degree in 1982. Following her graduation, she completed her residency training at Queen’s University in Kingston, Ontario. Dr. Wachowich has earned the coveted title of Fellow of the College of Family Physicians of Canada. Dr. Wachowich was on the sexual response team of the Calgary Area Medical Staff Association, and speaks locally on sexual crimes and the appropriate responses. She maintains a professional membership with the Canadian College of Family Practice and the Canadian Medical Protective Association, allowing her to stay current with the latest advances in her field. Dr. Wachowich attributes her success to providing high quality patient care, and staying abreast in the new sciences, medical, and technological advances. In her free time, Dr. Wachowich enjoys curling, traveling, cooking, knitting, and photography. Learn more about Dr. Wachowich by reading her upcoming publication in The Leading Physicians of the World. FindaTopDoc.com is a hub for all things medicine, featuring detailed descriptions of medical professionals across all areas of expertise, and information on thousands of healthcare topics.  Each month, millions of patients use FindaTopDoc to find a doctor nearby and instantly book an appointment online or create a review.  FindaTopDoc.com features each doctor’s full professional biography highlighting their achievements, experience, patient reviews and areas of expertise.  A leading provider of valuable health information that helps empower patient and doctor alike, FindaTopDoc enables readers to live a happier and healthier life.  For more information about FindaTopDoc, visit http://www.findatopdoc.com


DEERFIELD, Ill.--(BUSINESS WIRE)--Fortune Brands Home & Security, Inc. (NYSE: FBHS), an industry-leading home and security products company, announced that effective today, Tracey Belcourt has joined the Company as the senior vice president of global growth and development. Belcourt brings more than 17 years of experience in global strategy, mergers and acquisitions (M&A) and business development. She comes to Fortune Brands from Mondelez International, Inc. where she spent four years as the executive vice president of strategy focused on development and execution of the company’s global growth strategies. Prior to that she spent 13 years consulting at Bain & Company where she had an opportunity to work with global clients in the industrial products, airline and consumer products categories with a focus on strategic performance and growth. She began her career in academia at Concordia University in Montreal, Canada, then at the University of Bonn in Germany. Belcourt holds both a Ph.D. and M.A. in economics from Queen’s University in Kingston, Ontario and a B.S. in economics and mathematics from the University of Alberta. “Tracey is a collaborative, results-oriented leader who is motivated by purpose and the ability to have a real impact on the business. She will be a great fit for our culture and we have confidence in her leadership abilities and overall approach to business,” said Chris Klein, chief executive officer, Fortune Brands. “I’m excited to welcome Tracey to our team in a key role to continue to accelerate our global growth strategy and enhance our ability to complete value-creating mergers and acquisitions.” Belcourt will partner with the executive team to identify, assess and execute opportunities to grow the business around the world in current segments, adjacencies, new segments and new geographies. She will also lead strategic planning and insights. Fortune Brands Home & Security, Inc. (NYSE: FBHS), headquartered in Deerfield, Ill., creates products and services that fulfill the dreams of homeowners and help people feel more secure. The Company’s four operating segments are Cabinets, Plumbing, Doors and Security. Its trusted brands include more than a dozen core brands under MasterBrand Cabinets; Moen, ROHL and Riobel under the Global Plumbing Group (GPG); Therma-Tru entry door systems; and Master Lock and SentrySafe security products under The Master Lock Company. Fortune Brands holds market leadership positions in all of its segments. Fortune Brands is part of the S&P 500 Index. For more information, please visit www.FBHS.com.


News Article | October 26, 2016
Site: www.theguardian.com

The latest Black Mirror series from Charlie Brooker presents, despite its transition to Netflix, another unsettling collection of future shock nightmares drawn from consumer technology and social media trends. The second episode, Playtest, has an American tourist lured to a British game development studio to test a new augmented-reality horror game that engages directly with each player’s brain via a biorobotic implant. The AI program mines the character’s darkest fears and manifests them in the real world as photorealistic graphics. Inevitably, terror and mental breakdown follow. The idea of a video game that can analyse a player’s personality and change accordingly may seem like the stuff of outlandish sci-fi to some Black Mirror viewers. But it isn’t. This could well be where game design is heading. Eight years ago, video game writer Sam Barlow had a new idea about how to scare the crap out of video game players. Working on the survival horror adventure Silent Hill: Shattered Memories, Barlow introduced a character named Dr Kaufmann, a psychotherapist whose role, ostensibly, was to evaluate the mental wellbeing of protagonist Harry Mason. But that’s not really why he was there. Dr Kaufmann’s actual role was to psychologically assess the player. At key points throughout the terrifying narrative, the game provided a questionnaire inspired by the “Big Five” personality test, a method used by academic psychologists for personality research. Players would be asked things like: Are you a private person? Do you always listen to other people’s feelings? In this way it was building a psychological profile of the player. At the same time, the system was also drawing data from how players interacted with the game world: how long they spent exploring each area before moving on; whether they strayed from clearly marked paths; whether they faced non-player characters while they talked. Every action had an effect on the narrative. “Most scenes in the game had layers of variation – in the textures and colour, the lighting and the props,” explains Barlow. “Characters also had multiple appearances and personality differences. All phone calls, voicemails and readable materials had multiple variations according to different profile slices. As you approached a door to a new room, the game was spooling in the assets, testing your profile and loading up the custom asset packages to assemble your own version.” The idea was to draw in and then unsettle the player as much as possible based on their psychological traits. Characters, monsters and environments would all be subtly changed to reflect the player’s own fears of aggression, enclosure or darkness. Game designers have been attempting to learn, assess and react to player types since the days of Dungeons and Dragons. Richard Bartle, co-creator of the original MUD roleplaying game, formed a taxonomy of players in 1996, and his types – Achievers, Explorers, Socialisers, and Killers – have often been used by designers to try to pre-empt and entice different player types. Over the last decade, however, the concept of truly reactive “player modelling”, in which the game learns in real time from each individual player, has become an important part of academic research into artificial intelligence and machine learning.
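Stripped of the horror dressing, the loop Barlow describes is simple: fold questionnaire answers and play telemetry into a profile, then let the profile choose which variant of the next scene to load. The sketch below is a minimal illustration of that idea in Python; every class, trait name and threshold in it is invented for the example, and none of it is code from Shattered Memories.

```python
# Minimal sketch of questionnaire-plus-telemetry player profiling driving
# asset selection. All names, traits and thresholds are invented for
# illustration; this is not the Shattered Memories implementation.
from dataclasses import dataclass, field

@dataclass
class PlayerProfile:
    answers: dict = field(default_factory=dict)   # e.g. {"private_person": True}
    seconds_exploring: float = 0.0                # time spent off the main path
    faced_npcs_while_talking: int = 0

    def trait(self, name: str) -> float:
        """Crude 0..1 score for a hypothetical profile slice."""
        if name == "cautious":
            explore = min(self.seconds_exploring / 600.0, 1.0)
            private = 1.0 if self.answers.get("private_person") else 0.0
            return 0.5 * explore + 0.5 * private
        if name == "empathetic":
            return min(self.faced_npcs_while_talking / 10.0, 1.0)
        return 0.0

def choose_asset_package(profile: PlayerProfile) -> str:
    """Pick a scene variant as the player approaches the next door."""
    if profile.trait("cautious") > 0.6:
        return "variant_claustrophobic_dark"
    if profile.trait("empathetic") > 0.6:
        return "variant_hostile_characters"
    return "variant_default"

profile = PlayerProfile(answers={"private_person": True}, seconds_exploring=540)
print(choose_asset_package(profile))   # -> "variant_claustrophobic_dark"
```

A shipping game would track far more signals than this, but the shape of the decision is the same: profile in, content variant out.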
In 2004, AI researchers Georgios Yannakakis and John Hallam published a seminal paper detailing their work on Pac-Man. They created a modified version of the popular arcade game with the ghosts controlled by an evolutionary neural network that adjusted their behaviour based on each player’s individual strategies. In the same year, PhD student Christian Thurau presented his own player modelling system that used pattern recognition and machine learning techniques to teach AI characters how to move in a game world, based on watching humans play Quake II. In short: games were beginning to watch and learn from players. Many other studies followed. In 2007, researchers at the University of Alberta’s Intelligent Reasoning Critiquing and Learning group (under Vadim Bulitko) developed PaSSAGE (Player-Specific Stories via Automatically Generated Events), an AI-based interactive storytelling system that could observe and learn from player activities in a role-playing adventure. As the game progressed, the program sorted players into five different types (based on Robin’s Laws of Dungeons & Dragons) and then served them game events from a library of pre-written mini-missions. If they seemed to like looking for items in the game world, they were given a quest to find an object; if they liked fighting, they were given events that involved combat. That system was interesting (and is still being evolved in the department), but it relied on hand-designed set-piece events, and only had a limited grasp on who the player was. Matthew Guzdial, a PhD student at the Georgia Institute of Technology’s School of Interactive Computing, is currently working on a more adaptable evolution of this concept – a version of Nintendo’s Super Mario Bros platformer that features a neural network that observes player actions and builds novel level designs based on this data. “We’ve successfully been able to demonstrate that the generator creates levels that match a learned play style”, says Guzdial, who collaborated with Adam Summerville from the University of California, Santa Cruz. “Put simply, if a player likes exploring, it creates levels that must be explored; if a player speed-runs, it makes levels that are suited to speed-running.” Super Mario, it turns out, is a popular test-bed for AI researchers. It’s familiar, it allows lots of different player actions in a constrained environment, and its source code is easily available. At the University of Copenhagen, AI researchers Noor Shaker, Julian Togelius and the aforementioned Yannakakis developed a slightly different experiment based on the game. This time players were asked to provide emotional feedback on each play-through, giving scores for fun, challenge and frustration; this input was combined with data drawn from watching them play (how often the player jumped, ran or died, how many enemies they killed, etc), and the AI program constructed new levels as a result.
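PaSSAGE and the Mario generators share the same skeleton: watch what the player does, update a model of what they seem to enjoy, then pick or build content to match. Here is a compact sketch of that observe-model-serve loop; the type names, update rules and event annotations are placeholders chosen for illustration, not the actual PaSSAGE model.

```python
# Toy observe-model-serve loop: telemetry nudges a player model, and the model
# scores a library of hand-written events. Types, rules and annotations are
# invented for illustration.
from collections import Counter

PLAYER_TYPES = ["fighter", "explorer", "storyteller", "tactician", "method_actor"]

# How strongly each canned event appeals to each player type.
EVENT_LIBRARY = {
    "find_the_lost_amulet": {"explorer": 0.9, "storyteller": 0.3},
    "bandit_ambush":        {"fighter": 0.9, "tactician": 0.4},
    "village_dispute":      {"storyteller": 0.8, "method_actor": 0.6},
}

def update_model(model: Counter, action: str) -> None:
    """Nudge the player model after each observed action."""
    if action == "searched_container":
        model["explorer"] += 1
    elif action == "attacked_enemy":
        model["fighter"] += 1
    elif action == "talked_to_npc":
        model["storyteller"] += 1

def pick_next_event(model: Counter) -> str:
    """Serve the event whose annotations best match the learned preferences."""
    total = sum(model.values()) or 1
    weights = {t: model[t] / total for t in PLAYER_TYPES}
    return max(EVENT_LIBRARY,
               key=lambda ev: sum(weights[t] * w for t, w in EVENT_LIBRARY[ev].items()))

model = Counter()
for action in ["searched_container", "searched_container", "attacked_enemy"]:
    update_model(model, action)
print(pick_next_event(model))   # an exploration-heavy player gets the fetch quest
```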
Over the last decade, Yannakakis and colleagues over at the University of Malta’s Institute of Digital Games, where he currently works as an associate professor, have explored various forms of machine learning to estimate a player’s behavioural, cognitive and emotional patterns during play. They have combined deep-learning algorithms, which build general models of player experience from massive datasets, with sequence-mining algorithms, which learn from sequences of player actions (like continually choosing health pick-ups over ammo). They have also explored preference learning, which allows an AI system to learn from player choices between particular content types (for example, preferring levels with lots of jumping challenges over those with lots of enemies). Not only have they used behavioural data gathered during play, they’ve also used age, gender and other player details to inform their systems. Their aim isn’t just to make interesting games, however – the AI techniques they’re exploring may well be used in educational software or as diagnostic or treatment tools in mental health care. “Given a good estimate of a player’s experience, AI algorithms can automatically – or semi-automatically – procedurally generate aspects of a game such as levels, maps, audio, visuals, stories, or even game rules,” says Yannakakis. “The estimate can be used to help designers shape a better experience for the player. By tracking their preferences, goals and styles during the design process, AI can assist and inspire designers to create better, more novel, more surprising game content.” For Julian Togelius, one of the foremost experts on AI games research, now based at NYU, the next step is active player modelling – he envisages an AI level designer that doesn’t just react to inputs, but is actually curious about the player and their preferences, and wants to find out more. “There is this machine learning technique called active learning, where the learning algorithm chooses which training examples to work on itself,” he explains. “Using this technique, you could actually have a game that chooses in what way to explore you, the player: the game is curious about you and wants to find out more, therefore serving you situations where it does not know what you will do. That’s something that will be interesting for the player too, because the game has a reasonably good model of what you’re capable of and will create something that’s novel and interesting to you.” Of course, in many ways we’re already seeing this kind of player modelling happening in the conventional games industry. Valve’s critically acclaimed zombie shooter Left 4 Dead features an AI director that varies the type and threat level of undead enemies based on player activities in the game so far. With the arrival of free-to-play digital games on social media platforms and mobile phones, we also saw the emergence of a whole new game design ethos based on studying player data and iterating games accordingly. In its prime, Zynga was famed for its huge data science department that watched how players interacted with titles such as Farmville and Mafia Wars, worked out where they were getting bored or frustrated, and tweaked the gameplay to iron out those kinks. The analysis of player metrics quickly became a business in itself with companies such as Quantic Foundry and GameAnalytics set up to help smartphone developers garner information from the activities of players. But these systems are commercially motivated and based around making game design bets on the activities of thousands of players – they’re not about actually understanding players on an individual emotional level. That concept is definitely coming. Some AI researchers are shifting away from machine learning projects that watch what players do and toward systems that work out what they feel.
It’s possible to get an idea about a player’s excitement, engagement or frustration from analysing certain in-game actions – is the player hammering the jump button, are they avoiding or engaging enemies, are they moving slowly or quickly? Simple actions can give away a lot about the player’s state of mind. In 2014, Julian Togelius found he was able to make informed assumptions about key character traits of test subjects by watching how they played Minecraft. “We asked them questions about their life motives then analysed the logs from the game,” he says. “Traits like independence and curiosity very strongly correlated with lots of things that happened in the game.” So could an AI program study that data and change a game to tweak those feelings? “The major challenge is to relate content and [player] behaviour to emotion,” says Noor Shaker, a researcher at the University of Copenhagen who completed a PhD in player-driven procedural content generation. “Ultimately, we want to be able to identify the aspects of games that have an impact on how players experience them.” Shaker is using a variety of methods for this purpose: neuroevolution, random decision forests and multivariate adaptive spline models are all machine learning toolsets that let a system gradually learn from and adapt to different player behaviours. “My work recently revolves around building more accurate models of experience, implementing interactive tools that allow us to visualise the expressive space of players’ emotions,” says Shaker. “Most of the work I have seen so far, such as adaptation in Left 4 Dead, focuses on game difficulty and adjusting the behaviour of the NPCs according to relatively simple metrics of player’s behaviour. I believe there are many other aspects to experience than difficulty and there are many more elements that can be considered to manipulate player experience than the behaviour of the NPCs. Recent research has shown that emotions such as frustration, engagement and surprise can be detected and modelled by machine learning methods.” Shaker, then, is interested in developing a video game AI system that understands not just how the player plays, but how the player is feeling as they play. Imagine a game that learns a player’s emotional state and generates non-player characters and story fragments that it knows will hit them right in the heart. “I believe data-driven automatic content personalisation is doable,” says Shaker.
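One of the model families Shaker names, random decision forests, makes the telemetry-to-emotion step concrete enough to sketch. The example below trains a scikit-learn random forest on made-up session features and frustration labels; the feature set and the data are stand-ins for the kind of signals described above, not material from the studies themselves.

```python
# Sketch: classify a play session as "frustrated" or "engaged" from telemetry.
# The features and labels are synthetic stand-ins, invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Per session: [jump presses/min, deaths, enemies avoided, mean speed, pauses]
X = np.array([
    [40,  9, 2, 1.1, 6],   # hammering jump, dying, pausing often
    [38, 11, 1, 0.9, 5],
    [35,  8, 2, 1.0, 4],
    [12,  1, 7, 2.3, 0],   # moving fast and smoothly, few deaths
    [15,  2, 6, 2.0, 1],
    [22,  3, 4, 1.6, 2],
])
y = ["frustrated", "frustrated", "frustrated", "engaged", "engaged", "engaged"]

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

new_session = np.array([[37, 10, 1, 0.9, 5]])
print(model.predict(new_session)[0])        # most likely "frustrated"
print(model.predict_proba(new_session)[0])  # probability per class
```

A real pipeline would label sessions with the players' own reported feelings, as the Copenhagen Mario experiments did, rather than hand-written guesses.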
So far, much of this research has concentrated on how the player behaves within the game world. But that’s not the only place to gather data. As consumers in the digital era, we’re used to being profiled by major corporations: Facebook, Amazon, Microsoft and Google all use behavioural targeting techniques to serve personalised ads and content to users. Advanced algorithms track our web-browsing activities via cookies and web beacons and learn our preferences. The data is all out there, and there’s no reason why games makers couldn’t use it too. In fact, AI researchers are already creating games that mine information from popular websites and bring it back for use in the narrative. Gabriella Barros is working on the concept of “data adventures” with Julian Togelius, in which the AI gathers information from sites like Wikipedia and OpenStreetMap to create globe-trotting point-and-click adventures in the style of Where in the World is Carmen Sandiego – except they’re based in real locations and populated by real people. These data games are just the beginning, argues Michael Cook, an AI researcher at Falmouth University. “Right now they’re interested in huge, open data platforms like Wikipedia or government statistics,” he says. “But you can imagine in the future a game which takes your Facebook feed instead of a Wikipedia database, and populates the game world with people you know, the things they like doing, the places they visit and the relationships people have with one another. Whether or not that sounds like a scary idea is another question, but I can definitely see it as a natural extension of [the data game concept]. We already open our lives up to so many companies every day, we might as well get a bespoke, commissioned video game out of the deal.” Copenhagen’s Noor Shaker points out that privacy issues are a bottleneck with social media data – but then it’s a bottleneck that Google and co have deftly circumnavigated. “Once we have the data, and depending on the source and type, natural language processing methods such as sentiment analysis could be used to profile and cluster players according to their opinion about different social, cultural or political matters,” she says. “Statistics could also be collected about games, books, or songs they already purchased, liked or expressed opinion about. All this information could feed powerful machine learning methods such as neural networks or standard classification techniques that learn profiles, discover similarity or predict personality traits.” So now we’re getting closer to the Black Mirror concept. Imagine something like The Sims, where the pictures on your apartment walls are photos from your Facebook wall, where neighbours are your real-life friends. Or, on a darker tangent, imagine a horror adventure that knows about your relationships, your Twitter arguments, your political views; imagine a horror game that knows what you watch on YouTube. “It is only natural to expect that game data can be fused with social media activity to better profile players and provide a better gaming experience,” says Yannakakis. Researchers can envisage a game that builds a detailed psychological and social profile of a player, from both their in-game actions and online footprint – but there’s still a gap between this and the horror game posited in Black Mirror, which performs an invasive neurological hack on the player. Brain-computer interfacing of this sort is still the stuff of bleeding-edge medical research and science fiction. However, we’re already seeing the use of devices – both in research and in consumer production – that can measure physiological states such as skin conductance and heart-rate variability to assess a player’s emotional reaction to game content. Konami’s 1997 arcade dating game Oshiete Your Heart, for example, featured a sensor that measured the player’s heart rate and skin conductance to influence the outcome of each romantic liaison. Nevermind, released by Flying Mollusk last year, is a biofeedback-enhanced horror adventure that increases the level of challenge based on the player’s stress readings. Yannakakis and other researchers are also using off-the-shelf smart camera technologies like Intel RealSense and the emotion recognition software Affectiva to track a player’s facial expressions and monitor their heartbeat – both indicators of a variety of emotions. Noor Shaker has studied how tracking a player’s head pose while they take part in a game can tell us about the experience they’re having.
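The biofeedback games described above all reduce to the same control loop: smooth a noisy physiological signal and nudge the challenge level toward a target arousal band. Here is a sketch of that loop with the sensor read stubbed out; no real device SDK is assumed, and the target band and step sizes are arbitrary choices for illustration.

```python
# Sketch of a biofeedback difficulty loop. read_heart_rate() is a stand-in for
# whatever a real sensor SDK would expose.
import random

def read_heart_rate() -> float:
    return random.gauss(88, 8)          # stub sensor, beats per minute

TARGET_LOW, TARGET_HIGH = 75.0, 95.0    # desired arousal band (assumed)
smoothed = 80.0
spawn_rate = 1.0                        # enemies spawned per 10 seconds

for tick in range(120):
    smoothed = 0.9 * smoothed + 0.1 * read_heart_rate()   # exponential smoothing
    if smoothed > TARGET_HIGH:
        spawn_rate = max(0.2, spawn_rate * 0.95)   # player stressed: ease off
    elif smoothed < TARGET_LOW:
        spawn_rate = min(5.0, spawn_rate * 1.05)   # player calm: ramp up
    # the game loop would apply spawn_rate when scheduling the next wave

print(f"smoothed heart rate {smoothed:.0f} bpm, spawn rate {spawn_rate:.2f}")
```

The commercial titles mentioned above presumably wrap far more careful signal processing around the same basic feedback idea.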
Right now, these physiological inputs are mostly confined to research departments, but that may change. Valve, the company behind games like Portal and Half-Life and the HTC Vive VR headset, has been experimenting with biometric inputs for years. Founder Gabe Newell predicted in 2011 that we would one day see game controllers with built-in heart-rate and skin-response detectors. A game supported by these sensors could easily present each player with different items, concepts or options to gauge a heart or skin response, adapting content on the fly depending on the reaction. Imagine a VR headset with sensors that measure skin heat response and heart rate. People are already hacking this sort of thing together. This all sounds terrifying, but the technology needn’t be used in the way the Black Mirror episode imagines. There are benevolent, perhaps even beautiful, possibilities in the idea of games learning from players. One company looking into this potential is Mobius AI, a New York and UK-based startup developing a cognitive AI engine for developers. Co-founder Dr Mitu Khandaker-Kokoris is more interested in the potential relationships that could occur between players and AI characters who have the ability to identify and learn from individual players. “What games really lack is that serendipitous kind of connection we feel when we meet someone in the real world that we get along with,” she says. “One of the ways this will happen is through games featuring AI-driven characters who truly understand us, and what we as individual players are expressing through the things we are actually saying and doing in the game. “Imagine, for instance, that you were the only one in the world who an AI-driven character could trust fully because the game could infer that you have similar personalities. This character could then take you down a unique experience, which only you have access to. It’s a fascinating problem space, and a great challenge to think about how games could truly work you out – or rather, who you are pretending to be – by paying attention to not only what you’re saying, but how you’re saying it.” Interestingly, Khandaker-Kokoris, who is also working on the procedural storytelling game Little Invasion Tales, is more skeptical about the role of personal data mining in the future of game design. “We play games, often, to be someone who would have a different online history than our own,” she says. “But then, we are partly always ourselves, too. We have to work out what it would mean in terms of role-play and the idea of a permeable magic circle.” What’s certain, though, is that game creators and AI researchers are moving in the same direction: toward systems that provide content based on individual player preferences and activities. Games now cost many millions to produce – the assumption that enough players will react favourably to a single narrative, and a single experience, is becoming prohibitively risky. We live in an age of behavioural modelling and data science, an age in which Amazon is capable of building personalised video adverts in real time based on viewer preferences mined from the web. In this context, games that know you – that learn from you, that are curious about you – are almost inevitable.


News Article | January 13, 2016
Site: www.reuters.com

Coal is transported via conveyor belt to the coal-fired Jim Bridger Power Plant in Wyoming. WASHINGTON Global emissions of mercury from manmade sources fell 30 percent from 1990 to 2010, in part from decreasing use of coal, the U.S. Geological Survey (USGS) reported on Wednesday. The greatest decline of the toxic pollutant was in Europe and North America, offsetting increases in Asia, the agency said, citing an international study. The findings challenge longstanding assumptions on emission trends and show that local and regional efforts can have a major impact, it said. "This is great news for focused efforts on reducing exposure of fish, wildlife and humans to toxic mercury,” said David Krabbenhoft, a USGS scientist and one of the study’s co-authors. A metal that poses health risks, mercury can be converted into a gas during industrial activities as well as such natural events as volcanic eruptions. The study was carried out by the USGS, Harvard University, China's Peking University, Germany's Max Planck Institute for Chemistry and the University of Alberta in Canada. It was published in the Proceedings of the National Academy of Sciences. The analysis found that the drop came because mercury had been phased out of many commercial products. Controls have been put in place on coal-fired power plants that removed mercury from the coal being burned. Many power plants also have switched to natural gas from coal, the USGS said.


News Article | December 5, 2016
Site: www.wired.com

OpenAI, the billion-dollar San Francisco artificial intelligence lab backed by Tesla CEO Elon Musk, just unveiled a new virtual world. It’s called Universe, and it’s a virtual world like no other. This isn’t a digital playground for humans. It’s a school for artificial intelligence. It’s a place where AI can learn to do just about anything. Other AI labs have built similar worlds where AI agents can learn on their own. Researchers at the University of Alberta offer the Atari Learning Environment, where agents can learn to play old Atari games like Breakout and Space Invaders. Microsoft offers Malmo, based on the game Minecraft. And just today, Google’s DeepMind released an environment called DeepMind Lab. But Universe is bigger than any of these. It’s an AI training ground that spans any software running on any machine, from games to web browsers to protein folders. “The domain we chose is everything that a human can do with a computer,” says Greg Brockman, OpenAI’s chief technology officer. In coder-speak, Universe is a software platform—software for running other software—and much of it is now open source, so anyone can use and even modify it. In theory, AI researchers can plug any application into Universe, which then provides a common way for AI “agents” to interact with these applications. That means researchers can build bots that learn to navigate one application and then another and then another. For OpenAI, the hope is that Universe can drive the development of machines with “general intelligence”—the same kind of flexible brain power that humans have. “An AI should be able to solve any problem you throw at it,” says OpenAI researcher and former Googler Ilya Sutskever. That’s a ridiculously ambitious goal. And if it’s ever realized, it won’t happen for a very long time. But Sutskever argues that it’s already routine for AI systems to do things that seemed ridiculously ambitious just a few years ago. He compares Universe to the ImageNet project created by Stanford computer scientist Fei-Fei Li in 2009. The goal of ImageNet was to help computers “see” like humans. At the time, that seemed impossible. But today, Google’s Photo app routinely recognizes faces, places, and objects in digital images. So does Facebook. Now, OpenAI wants to expand artificial intelligence to every dimension of the digital realm—and possibly beyond. In Universe, AI agents interact with the virtual world by sending simulated mouse and keyboard strokes via what’s called Virtual Network Computing, or VNC. In this way, Universe facilitates reinforcement learning, an AI technique where agents learn tasks by trial and error, carefully keeping tabs on what works and what doesn’t, what brings the highest score or wins a game or grabs some other reward. It’s a powerful technology: Reinforcement learning is how Google’s DeepMind lab built AlphaGo, the AI that recently beat one of the world’s top players at the ancient game of Go. But with Universe, reinforcement learning can happen inside any piece of software. Agents can readily move between applications, learning to crack one and then another. In the long run, Sutskever says, they can even practice “transfer learning,” in which an agent takes what it has learned in one application and applies it to another. OpenAI, he says, is already building agents that can transfer at least some learning from one driving game to another. 
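In code, that interaction model looks like an ordinary reinforcement-learning loop. The snippet below is adapted from the quick-start example OpenAI published when Universe was released; the environment name and the remotes option follow that example and should be treated as assumptions if the API has since changed. The "agent" here does nothing smarter than hold the up arrow, but the loop shows the VNC-style interface: pixel observations in, simulated keyboard events out, rewards back from the environment.

```python
# Adapted from the Universe quick-start published at release; treat the
# environment id and configure() options as assumptions, not a stable API.
import gym
import universe  # importing this registers the Universe environments with gym

env = gym.make('flashgames.DuskDrive-v0')
env.configure(remotes=1)        # start one remote, VNC-connected environment
observation_n = env.reset()

while True:
    # One action list per remote: each action is a simulated keyboard event.
    action_n = [[('KeyEvent', 'ArrowUp', True)] for _ in observation_n]
    observation_n, reward_n, done_n, info = env.step(action_n)
    env.render()
```

Swapping the hard-coded action for a learned policy, and the driving game for a browser or a protein folder, is the point of making the interface uniform.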
Michael Bowling, a University of Alberta professor who helped create the Atari Learning Environment, questions how well Universe will work in practice, if only because he hasn’t used it. But he applauds the concept—an AI proving ground that spans not just games but everything else. “It crystallizes an important idea: Games are a helpful benchmark, but the goal is AI.” Still, games are where it starts. OpenAI has seeded Universe with about a thousand games, securing approval from publishers like Valve and Microsoft. It’s also working with Microsoft to add Malmo and says it’s interested in adding DeepMind Lab as well. Games have always served as a natural training tool for AI. They’re more contained than the real world, and there’s a clear system of rewards, so AI agents can readily learn which actions to take and which to avoid. Games aren’t ends in and of themselves, but they’ve already helped create AI that has a meaningful effect on the real world. After building AI that can play old Atari games better than any human ever could, DeepMind used much the same technology to refine the operation of Google’s worldwide network of computer data centers, reducing its energy bill by hundreds of millions of dollars. Craig Quiter is using Universe with a similar goal in mind. Quiter helped build the platform at OpenAI before moving across town to Otto, the self-driving truck startup Uber acquired this summer in a deal worth about $680 million. Last month, drawing on work from several engineers who worked on autonomous cars inside Google, Otto’s driverless 18-wheeler delivered 50,000 cans of Budweiser down 120 miles of highway from Fort Collins to Colorado Springs. But Quiter is looking well beyond the $30,000 in hardware and software that made this delivery possible. With help from Universe, he’s building an AI that can play Grand Theft Auto V. Today, Otto’s truck can navigate a relatively calm interstate. But in the years to come, the company hopes to build autonomous vehicles that can respond to just about anything they encounter on the road, including cars spinning out of control across several lanes of traffic. The digitized chaos of Grand Theft Auto, the thinking goes, can help the AI controlling those vehicles learn to handle the unexpected. Meanwhile, researchers at OpenAI are already pushing Universe beyond games into web browsers and protein-folding apps used by biologists. Andrej Karpathy, the lead researcher of this sub-project, dubbed World of Bits, questions how useful games will be in building AI for the real world. But an AI that learns how to use a web browser is, in a sense, already learning to participate in the real world. The web is part of our daily lives. Navigating a browser requires both motor skills and language skills. It’s a gateway to any software or any person. The rub is that reinforcement learning inside a web browser is far more difficult to pull off. Universe includes a deep neural network that can automatically read scores from a game screen in much the same way neural nets can recognize objects or faces in photos. But web services have no score. Researchers must define their own reward functions. Universe allows for this, but it’s still unclear what rewards will help agents, say, sign into a website or look up facts on Wikipedia, tasks that OpenAI is already exploring.
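Because the browser tasks carry no built-in score, the reward has to be written by hand, as the paragraph notes. The function below is a hypothetical example of such a shaping reward for a "log in to a site" task; the observation fields and the success test are invented for illustration and are not OpenAI's World of Bits code.

```python
# Hypothetical hand-written reward for a "log in" browser task. The observation
# fields ("url", "page_text") and the success condition are invented here.
def login_reward(observation: dict, elapsed_seconds: float) -> float:
    """+1 (minus a small time penalty) on success, -0.5 on a clear failure,
    and a tiny step cost otherwise."""
    if observation.get("url", "").endswith("/dashboard"):   # assumed success page
        return 1.0 - 0.01 * elapsed_seconds
    if "incorrect password" in observation.get("page_text", "").lower():
        return -0.5
    return -0.001
```

An agent trained against a reward like this is shaped by exactly how the success test is written, which is why defining these rewards is the part the researchers describe as still unclear.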
But if we can teach machines these more amorphous tasks—teach AI agents to do anything on a computer—Sutskever believes we can teach them to do just about anything else. After all, an AI can’t browse the internet unless it understands the natural way we humans talk. It can’t play Grand Theft Auto without the motor skills of a human. And like so many others, Quiter argues that navigating virtual worlds isn’t so different from navigating the real world. If Universe reaches its goal, then general intelligence isn’t that far away. It’s a ridiculous aim—but it may not be ridiculous for long. Update: This story has been updated with mention of DeepMind Lab.


News Article | November 7, 2016
Site: www.eurekalert.org

Long-term survivors of childhood cancer live longer thanks to improvements to cancer treatments, but a new study looking at three decades of therapy suggests patients do not report better health status. The findings from the Childhood Cancer Survivor Study (CCSS), which include feedback from a survey of more than 14,000 adult survivors treated from 1970 to 1999, appear online in Annals of Internal Medicine. "Improved survival following a diagnosis of childhood cancer is one of the success stories of modern medicine," said corresponding author Kirsten Ness, co-first author on the paper, member of the St. Jude Department of Epidemiology and Cancer Control, and CCSS investigator. "As part of our ongoing work for the Childhood Cancer Survivor Study, we wanted to investigate how survivors treated with contemporary therapy view their health status compared with survivors from earlier decades. Surprisingly, the data from the survey show a lack of improvement in perceived health status by childhood cancer survivors over the past 30 years, which serves as an important reminder that cures for cancer do not come without some consequences to patients." The Childhood Cancer Survivor Study is a multi-institutional investigation of childhood cancer survivors that has provided detailed information not only on the prevalence of disease but also on contributory factors that influence the adverse health status of cancer survivors. The cohort of patients in the CCSS was recently expanded to include individuals diagnosed between 1987 and 1999. This allowed investigators to look at how improvements to treatments have affected reported patient health over three decades. The new study involved 14,566 adult patients aged 18-48 years old who were treated for pediatric cancer in 27 institutions across North America. The analysis focused on treatments for solid tumors and cancers of the blood (including acute lymphoblastic leukemia, astrocytoma, medulloblastoma, Hodgkin lymphoma, non-Hodgkin lymphoma, neuroblastoma, Wilms tumor, rhabdomyosarcoma, Ewing sarcoma and osteosarcoma). Surgery, radiation and chemotherapy treatments were selected for evaluation. As part of the survey, patients provided feedback that described their general health, functional impairment, limits to activity, mental health, pain and anxiety. Treatment scores and adverse health status were reported as findings for 1970-79, 1980-89 and 1990-99. While contemporary therapy for certain childhood cancers has led to a reduction of late mortality and extended the lifespan of survivors, the researchers reported there was no parallel improvement in patient-reported health status among survivors. "Overall, we observed increases in the proportions of childhood cancer survivors treated from 1990-99 who reported poor general health and anxiety," said Ness. "Considerable progress has been made over the years to extend the lives of childhood cancer survivors," said Melissa Hudson, M.D., a co-first author of the paper, director of the St. Jude Division of Cancer Survivorship and CCSS co-investigator. "Survivors from more recent eras of treatment are less likely to die from the late effects of cancer treatment and are living longer. The current study reemphasizes that one of the significant challenges ahead is to find ways to improve quality of life and health for all survivors of childhood cancer." While the findings indicate a lack of reported improvement in health status for survivors, the investigators did highlight potential limitations of the study. 
Although participation in the survey was high, not every survivor eligible for inclusion agreed to take part. The study also did not account for the effect of health risk factors on mortality. In addition, the researchers pointed out that worse later health outcomes could at least partially be explained by the fact that survivors are living longer. The authors did note one area where more interventions could help in the future. Activities that interfere with overall wellness for the wider population also contribute to adverse health status for childhood cancer survivors. High-risk behaviors like smoking, heavy drinking, lack of exercise or poor diet were associated with adverse health status in the survey. Clinicians and survivors should, therefore, work together to improve health status outcomes by using appropriate interventions and modifying high-risk behaviors. The other authors are Kendra Jones, Yutaka Yasui, Todd Gibson, Daniel Green, Kevin Krull, Gregory Armstrong, and Leslie Robison, all of St. Jude; Wendy Leisenring, Fred Hutchinson Cancer Research Center; Yan Chen, University of Alberta; Marilyn Stovall, University of Texas; Joseph Neglia, University of Minnesota Medical School; Tara Henderson, University of Chicago; Jacqueline Casillas, University of California at Los Angeles; Jennifer Ford and Kevin Oeffinger, Memorial Sloan-Kettering Cancer Center; Karen Effinger, Emory University; and Paul Nathan, The Hospital for Sick Children, Toronto. This work was supported by grants (CA55727, CA21765) from the National Cancer Institute, part of the National Institutes of Health; and ALSAC. St. Jude Children's Research Hospital is leading the way the world understands, treats and cures childhood cancer and other life-threatening diseases. It is the only National Cancer Institute-designated Comprehensive Cancer Center devoted solely to children. Treatments developed at St. Jude have helped push the overall childhood cancer survival rate from 20 percent to 80 percent since the hospital opened more than 50 years ago. St. Jude freely shares the breakthroughs it makes, and every child saved at St. Jude means doctors and scientists worldwide can use that knowledge to save thousands more children. Families never receive a bill from St. Jude for treatment, travel, housing and food -- because all a family should worry about is helping their child live. To learn more, visit stjude.org or follow St. Jude on Twitter and Instagram at @stjuderesearch.
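The era-by-era comparison described in the study above amounts to grouping survey responses by treatment decade and reporting the proportion of survivors with each adverse outcome. Below is a minimal, hypothetical Python sketch of that kind of tabulation; the records and values are invented placeholders for illustration and are not CCSS data or the study's actual analysis code.

```python
# Hedged sketch: tabulate the share of respondents reporting poor general health
# by treatment era, as described in the article. All records below are invented.
from collections import defaultdict

# Each hypothetical record: (treatment_era, reported_poor_general_health)
survey_records = [
    ("1970-79", False), ("1970-79", True),
    ("1980-89", False), ("1980-89", False),
    ("1990-99", True),  ("1990-99", False),
]

counts = defaultdict(lambda: {"poor": 0, "total": 0})
for era, poor_health in survey_records:
    counts[era]["total"] += 1
    counts[era]["poor"] += int(poor_health)

for era in sorted(counts):
    share = counts[era]["poor"] / counts[era]["total"]
    print(f"{era}: {share:.0%} of respondents reported poor general health")
```

The same grouping pattern extends directly to the other outcomes the survey covered, such as functional impairment, activity limits, pain and anxiety.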


News Article | December 14, 2016
Site: www.sciencenews.org

Our grasp of food allergy science is as jumbled as a can of mixed nuts. While there are tantalizing clues on how food allergies emerge and might be prevented, misconceptions are plentiful and broad conclusions are lacking, concludes a new report by the National Academies of Sciences, Engineering and Medicine. As a result, both the general public and medical community are confused and ill-informed about food allergies and what to do about them. Most prevention strategies and many tests used to diagnose a food allergy aren’t supported by scientific evidence and should be abandoned, the 562-page report concludes. “We are much more in the dark than we thought,” says Virginia Stallings, a coeditor of the new report, released November 30. While solid data are hard to come by, the report notes, estimates suggest that 12 million to 15 million Americans suffer from food allergies. Common culprits include peanuts, milk, eggs, fish, shellfish, sesame, wheat and soy. Food allergies should be distinguished from food intolerances; the two are often confused by the public and practitioners, says Stallings, a pediatrician and research director of the nutrition center at the Children's Hospital of Philadelphia. Strictly defined food allergies, the primary focus of the report, arise from a specific immune response to even a small amount of the allergen; they produce effects such as hives, swelling, vomiting, diarrhea and, most crucially, anaphylaxis, a severe, potentially deadly allergic reaction. These effects reliably occur within two hours each time a person ingests that food. Allergic reactions that fall outside this strict definition and food-related intolerances, such as gastrointestinal distress after ingesting lactose, are a legitimate public health concern. But the mechanisms behind them are probably very different from those behind the more strictly defined food allergies, as are the outcomes, says Stallings. Anyone suspecting a food allergy should see a specialist. If medical history and preliminary results hint at problems, then the gold standard diagnostic test should be applied: the oral food challenge. This test exposes an individual to small amounts of the potentially offending food while under supervision. Doctors and others in health care should abandon many unproven tests, such as ones that analyze gastric juices or measure skin’s electrical resistance, the report concludes. Regarding prevention, research has borne a little fruit: The authors recommend that parents give infants foods that contain potential allergens. This recommendation is largely based on peanut allergy research suggesting early exposure is better than late (SN: 3/21/2015, p. 15). There’s little to no evidence supporting virtually all other behaviors thought to prevent food allergies, such as taking vitamin D supplements, or women avoiding allergens while pregnant or breastfeeding. While additional rigorous long-term studies are needed to better understand why food allergies arise, the report addresses many issues that society can confront in the meantime. Industry needs to develop a low-dose (0.075 milligrams) epinephrine injector to treat infants who experience food allergy anaphylaxis; the U.S. Food and Drug Administration, Department of Agriculture and the food manufacturing industry need to revamp food labeling so it reflects allergy risks; and relevant agencies should establish consistent guidelines for schools and airplanes that include first-aid training and on-site epinephrine supplies. 
“This report is mammoth and very impressive,” says Anita Kozyrskyj, whose research focuses on the infant gut microbiome. Kozyrskyj, of the University of Alberta in Canada, presented research to the report’s authors while they were gathering evidence. She says the report identifies issues that can help guide the research community. But its real value is in the recommendations for parents, schools, caregivers and health care providers who are dealing with food allergies in the here and now.


News Article | October 26, 2016
Site: www.latimes.com

In the future, your clothes will work for you. A team of scientists led out of the Georgia Institute of Technology has created a fabric that can gather energy from both sunlight and motion, then store it in embedded fibers. The textile, described in Science Advances, could help pave the way for energy-harvesting clothes and new wearable devices. Scientists and engineers have been working for years on creating fabrics that, if worn, could harvest energy for the wearer, said senior author Zhong Lin Wang, a nanotechnologist at Georgia Tech. “The objective was to harvest energy from our living environment, for example, human walking or muscle movement and fabric; the goal is to drive small electronics,” he said. “And this research recently attracted a lot of attention because these days, flexible electronics, wearable electronics, have become very popular and fashionable today. But each of them needs a power source.” It’s not an easy task to make devices that can be flexible enough to create a material that can actually be sewn into shirts, jackets or other garments. Wang’s team, for one, has been working on various aspects of this early-stage technology for 11 years. On top of that, any energy would have to be stored in some way that didn’t involve carrying around a bulky battery. Wang and his team solved these issues by creating a triple-threat (or perhaps, a triple-thread) fabric: It uses dye-sensitized solar cells shaped into long fibers to harvest light energy; it uses fiber-shaped triboelectric nanogenerators to harvest electrostatic charges made by normal movement; and it also uses fiber-shaped supercapacitors to store the energy in electrochemical form. Under sunlight, the solar cells provide the majority of the power; but indoors or on a cloudy day, the movement-based fibers pick up the slack, Wang said. (He was quick to add that you don’t need to flap your arms or do anything dramatic — the fibers will gather energy from small, normal movements.) “Our idea is to try to use whatever is available, whenever it’s available,” he said. The researchers have a roughly 225-square-centimeter (or 35-square-inch) patch of the energy-harvesting fabric that’s roughly as flexible as woven straw. The scientists hope to make it as flexible as common cloth. The key to that, Wang said, is thinner fibers. Currently they’re 15 to 20 centimeters long and about 2 millimeters wide. Once they reach about half a millimeter in width, they will be much more pliant, he added. But that’s if you were to make a device entirely out of these fibers. Theoretically, a few of these fibers also could be woven into any kind of textile, such as cotton, and offer some energy-harvesting benefits without sacrificing softness and flexibility, Wang said. While the technology could allow users to charge phones and wearable devices, it also could make interactive garments — a gown with LED lights, for example — more feasible for fashion designers. The research also could prove useful for building flexible screens, designing heart and other health monitors, and may even find applications in robotics. “I think it’s an exciting development,” said Thomas Thundat, a physicist in the chemical and materials engineering department at the University of Alberta in Canada, who was not involved in the study. “Wearable and portable devices are getting popular, and I believe that they will revolutionize our society in the near future,” Thundat said. 
“But the biggest problem is the batteries.” Wang’s new fabric helps solve that power problem, said Thundat, who envisioned a range of possible applications. Electronics embedded in smart clothing could heat you in the winter or keep you cool when working outside in the summer, he said. They could monitor the temperature, humidity and toxins in the environment, warning of extreme pollution or chemical exposure. Before the technology gets there, however, Wang ticked off the items on his to-do list. “There’s a lot of things to do: Number one is to get the fiber thinner, so more flexible,” he said. “Number two is to improve the durability or robustness, so they can last longer. And third is, we have to work on continuing to improve the performance. The more power, the better.” Oct. 28, 11 a.m.: This article has been updated with comments from physicist Thomas Thundat. This article was originally published at 12:05 p.m. Oct. 26.
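To make the hybrid harvest-and-store scheme described above more concrete, here is a toy energy-budget sketch in Python: solar-cell fibers dominate when light is available, triboelectric fibers contribute during movement, and the combined power is integrated into a fiber supercapacitor whose capacity follows E = 0.5 * C * V^2. Every number and function name below is an assumption chosen for illustration; none of the values come from the Georgia Tech study.

```python
# Toy sketch with assumed parameters (not measured values from the study):
# combine solar and triboelectric harvesting, then accumulate the energy
# in a supercapacitor whose capacity is E = 0.5 * C * V^2.

def harvested_power_mw(sunlight_fraction: float, moving: bool) -> float:
    """Total harvested power in milliwatts from both fiber types."""
    solar_mw = 5.0 * sunlight_fraction   # assumed peak solar-fiber output
    tribo_mw = 0.5 if moving else 0.0    # assumed motion-driven output
    return solar_mw + tribo_mw

def simulate_storage(minutes: int, capacitance_f: float = 10.0,
                     max_voltage_v: float = 2.5) -> float:
    """Integrate harvested energy (joules) into the supercapacitor, minute by minute."""
    capacity_j = 0.5 * capacitance_f * max_voltage_v ** 2
    stored_j = 0.0
    for minute in range(minutes):
        outdoors = minute < minutes // 2   # first half outdoors, second half indoors
        power_w = harvested_power_mw(1.0 if outdoors else 0.0, moving=True) / 1000.0
        stored_j = min(capacity_j, stored_j + power_w * 60.0)
    return stored_j

print(f"Energy stored after one hour of wear: {simulate_storage(60):.2f} J")
```

In this sketch the indoor half of the hour contributes only the motion term, mirroring the article's point that the movement-based fibers pick up the slack when sunlight is unavailable.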


While the medicinal cannabis industry continues to rapidly advance and expand operations by identifying new leading-edge products, leaders are turning towards the expertise and knowledge of other medical sectors, especially with influence from the biopharma sector. Medical marijuana and legal cannabis companies with recent developments and performance of note include: INSYS Therapeutics, Inc. (NASDAQ: INSY), Vinergy Resources Ltd (OTC: VNNYF) (CSE: VIN.CN), Canopy Growth Corporation (OTC: TWMJF) (TSX: WEED.TO), Aurora Cannabis Inc. (OTC: ACBFF) (TSX-V: ACB.V), Aphria Inc. (OTC: APHQF) (TSX-V: APH.V). Vinergy Resources Ltd (OTCQB: VNNYF) (CSE:VIN), in conjunction with its proposed acquisition of MJ Biopharma (announced December 14, 2016), is pleased to announce that, as a part of the Company's strategy to develop a lab for research and development products that test and identify specific cannabinoid isolates for targeted therapeutic purposes, it has appointed John Simon to the Company's Scientific Advisory Board (SAB). John has a Bachelor of Science from the University of Alberta, is a senior member of the American Society for Quality, a Certified Quality Auditor (CQA), a Registered Quality Assurance Professional in Good Laboratory Practice (RQAP-GLP) and maintains Regulatory Affairs Certification (RAC) through the Regulatory Affairs Professional Society. Read this and more news for Vinergy Resources at: http://marketnewsupdates.com/news/vnnyf.html Through John's consultancy practice, he assists companies with both site licenses and product licenses. He has helped companies obtain, renew and maintain in good standing Drug Establishment Licenses (DEL); Medical Device Establishment Licenses (MDEL); Natural and Non-prescription Site Licenses (NNHPD); and Licenses to Cultivate and Distribute under the Marihuana for Medical Purposes Regulations (MMPR) (now under the ACMPR). "With John's substantial background in QA and regulatory affairs specific to drug development and the cannabis industry, he will be a key asset in driving our cannabis product and technology initiatives," said Mr. Kent Deuters, CEO of MJ Biopharma. Vinergy Resources also announced this week a major breakthrough while conducting research and development on oral cannabinoid complex (Tetrahydrocannabinol (THC), Cannabidiol (CBD), Cannabinol (CBN) and Terpenes) delivery strips and controlled time-release capsule technology. This novel approach will be the basis for several products where water or saliva is the catalyst used to activate the carrier for delivery and absorption of the cannabinoid complex into the body. Other cannabis and legal marijuana market performances and developments of note include: Aurora Cannabis Inc. (OTCQB: ACBFF) (TSX-V: ACB.V), a dually listed company, on Wednesday closed up on the OTC markets at $1.96 trading over 500,000 shares and closed even on the TSX at $2.56 trading over 2.6 million shares by the market close. Aurora Cannabis and Radient Technologies (RTI.V) this week provided an update on their previously announced collaboration arrangements. Read the full announcement at http://finance.yahoo.com/news/aurora-cannabis-radient-technologies-exclusive-124000123.html Canopy Growth Corporation (OTC: TWMJF) (TSX: WEED.TO) this week released its financial results for the third quarter of fiscal year 2017, the period ended December 31, 2016. All financial information in this press release is reported in Canadian dollars, unless otherwise indicated. 
Consolidated financial results include the accounts of the Company and its wholly owned subsidiaries, which include Tweed Inc. ("Tweed"), Tweed Farms Inc. ("Tweed Farms"), and Bedrocan Canada Inc. ("Bedrocan Canada") and its investments in affiliates. Read the full report at http://finance.yahoo.com/news/canopy-growth-corporation-reports-third-113000287.html Aphria Inc. (OTCQB: APHQF) (TSX-V: APH.V), a dually listed company, on Wednesday closed up on the OTC markets at $5.01 trading over 400,000 shares and closed up on the TSX at $6.52 trading over 4.8 million shares by the market close. Aphria, one of Canada's lowest-cost producers, produces, supplies and sells medical cannabis. Located in Leamington, Ontario, the greenhouse capital of Canada, Aphria is truly powered by sunlight, allowing for the most natural growing conditions available. INSYS Therapeutics, Inc. (NASDAQ: INSY) closed up over 12% on Wednesday at $10.82 trading over 3.2 million shares by the market close. Insys Therapeutics this week announced that the Company is providing for the use of Cannabidiol Oral Solution at doses up to 40 mg/kg/day in compassionate use studies in subjects with refractory pediatric epilepsy following completion of 48 weeks of treatment in the ongoing long-term safety study. The long-term safety study permitted subjects who had completed the initial safety and pharmacokinetic (PK) study to receive Cannabidiol Oral Solution at doses up to 40 mg/kg/day for up to 48 weeks. DISCLAIMER: MarketNewsUpdates.com (MNU) is a third party publisher and news dissemination service provider, which disseminates electronic information through multiple online media channels. MNU is NOT affiliated in any manner with any company mentioned herein. MNU and its affiliated companies are a news dissemination solutions provider and are NOT a registered broker/dealer/analyst/adviser, hold no investment licenses and may NOT sell, offer to sell or offer to buy any security. MNU's market updates, news alerts and corporate profiles are NOT a solicitation or recommendation to buy, sell or hold securities. The material in this release is intended to be strictly informational and is NEVER to be construed or interpreted as research material. All readers are strongly urged to perform research and due diligence on their own and consult a licensed financial professional before considering any level of investing in stocks. All material included herein is republished content and details which were previously disseminated by the companies mentioned in this release. MNU is not liable for any investment decisions by its readers or subscribers. Investors are cautioned that they may lose all or a portion of their investment when investing in stocks. For current services performed, MNU has been compensated three thousand nine hundred dollars for news coverage of the current press release issued by Vinergy Resources Ltd by a non-affiliated third party. MNU HOLDS NO SHARES OF ANY COMPANY NAMED IN THIS RELEASE. This release contains "forward-looking statements" within the meaning of Section 27A of the Securities Act of 1933, as amended, and Section 21E of the Securities Exchange Act of 1934, as amended, and such forward-looking statements are made pursuant to the safe harbor provisions of the Private Securities Litigation Reform Act of 1995. 
"Forward-looking statements" describe future expectations, plans, results, or strategies and are generally preceded by words such as "may", "future", "plan" or "planned", "will" or "should", "expected," "anticipates", "draft", "eventually" or "projected". You are cautioned that such statements are subject to a multitude of risks and uncertainties that could cause future circumstances, events, or results to differ materially from those projected in the forward-looking statements, including the risks that actual results may differ materially from those projected in the forward-looking statements as a result of various factors, and other risks identified in a company's annual report on Form 10-K or 10-KSB and other filings made by such company with the Securities and Exchange Commission. You should consider these factors in evaluating the forward-looking statements included herein, and not place undue reliance on such statements. The forward-looking statements in this release are made as of the date hereof and MNU undertakes no obligation to update such statements.


News Article | February 16, 2017
Site: www.spie.org

From the SPIE Photonics West Show Daily: Following in the footsteps of the popular BiOS Hot Topics sessions, the first-ever neurophotonics plenary session at Photonics West this year featured 10 rapid-fire presentations covering the broad spectrum of neurophotonics R&D currently taking place worldwide. "There is a strong focus on developing the technologies to dramatically impact our understanding of how the brain works," said David Boas, who moderated the session and is editor-in-chief of SPIE's Neurophotonics journal. One of the initial challenges has been to find new ways to measure tens of thousands of neurons simultaneously. This requires taking an interdisciplinary approach to technology development that brings together neuroscientists, engineers, physicists, and clinical researchers. It also prompted SPIE to add a technology application track on the brain this year. "SPIE recognized the need to bring together all the different groups in this field to get an overview of the many neurophotonics activities going on," Boas said. The neurophotonics plenary session showcased the diversity of these research efforts, from genetically encoded indicators of neuronal activity to 3-photon microscopy for deep brain imaging, chemical sectioning for high throughput brain imaging, and mapping functional connections in the brain. "We need to step back and think about all of these important methods and the larger picture," said Rafael Yuste, professor of neuroscience at Columbia University and a pioneer in the development of optical methods for brain research. His presentation covered novel neurotechnologies and their impact on science, medicine, and society. "Why don't we already understand the brain?" Yuste asked. "People say it's just too complicated, but I believe the reason ... is that we don't have the right method yet. We do have methods that allow us to see entire activity of the brain, but not enough resolution of a single neuron. We need to be able to record from inside the neuron and capture every single spike in every neuron in brain circuits." Here are highlights from other plenary talks: Taking a cue from Nobel Laureate Roger Tsien, a pioneer in the field of engineering proteins for neuroscience, Canadian researchers at the University of Alberta are working to develop new kinds of protein indicators to study neuronal activity, noted the university's Robert Campbell. While early calcium indicators were synthetic tools, the Campbell Lab is working on genetically encoded proteins, taking a fluorescent protein and turning it into a calcium indicator, "a proxy for neuronal activity," Campbell said. Most recently, they have developed FlicR1, a new type of red fluorescent voltage indicator that can be used to image spontaneous activity in neurons. "We are very optimistic about this new indicator," he said. Optical detection of spatial-temporal correlations in whole brain activity: Studying these types of correlations is "very important because morphology and functionality in the brain are tightly correlated to each other," said Francesco Pavone of Università degli Studi di Firenze in Italy. His group is taking a multi-modality approach in mouse models to study brain rehabilitation following a stroke. 
They are using light-sheet microscopy to look at vasculature remodeling, two-photon imaging to study structural plasticity, and wide-field meso-scale imaging to evaluate functional plasticity. "We would like to study at all brain levels the map of all activated cells," Pavone said. "Do we have the technology to develop multi-cell, multiplane optogenetics with millisecond temporal resolution and single cell precision?" asked Valentina Emiliani, director of the Neurophotonics Laboratory at University Paris Descartes. Her lab is working with computer-generated holography, spatial light modulators (SLMs), and endoscopy to control the activity of a single neuronal cell. "We have been able to achieve very robust photostimulation of a cell while the mice were freely moving, with nice spatial resolution," she said. Peter So, professor of mechanical and biological engineering at Massachusetts Institute of Technology, described his group's work using 3D holographic excitation for targeted scanning as a way to study and map synaptic locations in the brain. "Neurons generate responses from many synaptic inputs, and we found that there are over 10,000 synaptic locations we would like to look at in parallel and map using synaptic coordinates to map activity," he said. "Three-photon has vastly improved the signal-to-background ratio for deep imaging in non-sparsely labeled brain," said Cornell University's Chris Xu. By combining a long wavelength (1300-1700 nm, the optimum spectral windows for deep imaging) with high excitation, Xu said researchers are making new inroads into deep imaging of brain tissue. Three-photon microscopy is also valuable for structural imaging and for imaging brain activity "in an entire mouse cortical column," Xu added. Mapping functional connections in the mouse brain for understanding and treating disease: Mapping brain function is typically performed using task-based approaches to relate brain topography to function, noted Adam Bauer of Washington University School of Medicine. "But we want to be able to help patients who are incapable of performing tasks, such as infants and those with impairments," he said. For this reason, the lab has developed the functional connectivity optical intrinsic signal (fcOIS) imaging system to study mouse models of Alzheimer's, functional connectivity following focal ischemia, and to map cell-specific connectivity in awake mice. Maria Angela Franceschini of the Athinoula A. Martinos Center for Biomedical Imaging described her group's work developing MetaOX, a tissue oxygen consumption monitor. The instrument has been tested in neonatal intensive care units to monitor hypoxic ischemic injury and therapeutic hypothermia. It uses frequency-domain near infrared spectroscopy to acquire quantitative measurements of hemoglobin concentration and oxygenation, and diffuse correlation spectroscopy to create an index of blood flow. The device is also being evaluated in Africa to study the effects of malnutrition on brain development, and in Uganda to study hydrocephalus outcomes in newborns. Shaoqun Zeng of the Wuhan National Lab for Optoelectronics in China outlined his group's work using chemical sectioning for high-throughput fluorescence imaging of a whole mouse brain at synaptic resolution. The goal is to systematically and automatically obtain a complete morphology of individual neurons. 
Opportunities and priorities in neurophotonics: perspectives from the NIH. Edmund Talley of the US National Institutes of Health shared his experiences with the US BRAIN Initiative, which is slated to receive more than $430 million in the 2017 federal budget, plus $1.6 billion in dedicated funds through 2026 via the 21st Century Cures Act passed in December 2016. "There is some very serious investment in neurotechnologies to understand how the mind works, and there is bipartisan political support," Talley said. "Multiple federal agencies are funding this." Photonics West 2017, 28 January through 2 February at the Moscone Center, encompassed more than 4700 presentations on light-based technologies across more than 95 conferences. It was also the venue for dozens of technical courses for professional development, the Prism Awards for Photonics Innovation, the SPIE Startup Challenge, a two-day job fair, two major exhibitions, and a diverse business program with more than 25 events. SPIE Photonics West 2018 will run 27 January through 1 February at Moscone Center.
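As a point of reference for the MetaOX description above: frequency-domain NIRS instruments recover oxy- and deoxyhemoglobin concentrations, from which tissue oxygen saturation follows as StO2 = [HbO2] / ([HbO2] + [Hb]). The sketch below applies only that textbook ratio; it is not the device's algorithm, and the sample concentrations are invented.

```python
# Hedged sketch: tissue oxygen saturation from oxy- and deoxyhemoglobin
# concentrations, the kind of quantity a frequency-domain NIRS monitor reports.
# Textbook definition only; the example concentrations below are invented.

def tissue_oxygen_saturation(hbo2_umol_l: float, hb_umol_l: float) -> float:
    """StO2 = [HbO2] / ([HbO2] + [Hb]), returned as a fraction of total hemoglobin."""
    total = hbo2_umol_l + hb_umol_l
    if total <= 0:
        raise ValueError("Total hemoglobin concentration must be positive")
    return hbo2_umol_l / total

# Example: 45 and 25 micromolar give StO2 of roughly 64 percent.
print(f"StO2 = {tissue_oxygen_saturation(45.0, 25.0):.0%}")
```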


NEW YORK, March 02, 2017 (GLOBE NEWSWIRE) -- Tonix Pharmaceuticals Holding Corp. (Nasdaq:TNXP) (Tonix), a company that is developing innovative pharmaceutical products to address public health challenges, working with researchers from the University of Alberta, a leading Canadian research university, today announced the successful synthesis of a potential smallpox-preventing vaccine. This vaccine candidate, TNX-801, is a live form of horsepox virus (HPXV) that has been demonstrated to have protective vaccine activity in mice. “Presently, the safety concern of existing smallpox-preventing vaccines outweigh the potential benefit to provide immunization of first responders or the general public. By developing TNX-801 as a horsepox vaccine to prevent smallpox infection, we hope to have a safer vaccine to protect against smallpox than is currently available,” stated Seth Lederman, M.D., president and chief executive officer of Tonix. “Vaccines are a critical component of the infrastructure of global public health. Vaccination protects those who are vaccinated and also those who are not vaccinated, by decreasing the risk of contagion.” “Our goal is to improve on current methods that protect the public from possible viral outbreaks,” said Professor David Evans, Ph.D., FCAHS, Professor and Vice-Dean (Research), Faculty of Medicine and Dentistry at the University of Alberta, in Edmonton, Alberta, Canada, and principal investigator of the TNX-801 research project. HPXV was synthesized by Professor Evans and Research Associate Ryan Noyce, Ph.D., at the University of Alberta, with Dr. Lederman as co-investigator of the research and co-inventor of the TNX-801 patent. Under their research and development agreement, Tonix wholly owns the synthesized HPXV virus stock and related sequences. Professor Evans and Dr. Noyce also demonstrated that HPXV has protective vaccine activity in mice, using a model of lethal vaccinia infection. Vaccine manufacturing activities have been initiated by Tonix to support further nonclinical testing of TNX-801. Dr. Lederman stated, “Our research collaboration is dedicated to creating tools and innovative products that better protect public health.” Horsepox, an equine disease caused by a virus and characterized by eruptions in the mouth and on the skin, is believed to be eradicated. No true HPXV outbreaks have been reported since 1976, at which time the United States Department of Agriculture obtained the viral sample used for the sequence published in 2006 that allowed the synthesis of TNX-801. In 1798, Dr. Edward Jenner, English physician and scientist, speculated that smallpox is a human version of pox diseases in animals. Jenner had a strong suspicion that his vaccine began as a pox disease in horses and went on to show that it could be used to vaccinate against smallpox. Smallpox was eradicated as a result, and no cases of naturally occurring smallpox have been reported since 1977. Jenner’s vaccine appears to have evolved considerably in the vaccinia stocks maintained in different countries around the world, since vaccinia was mostly selected for growth and production.  Being able to provide safe and effective smallpox-preventing vaccines remains important and necessary for addressing and protecting public health. About the Material Threat Medical Countermeasures Provisions in the 21st Century Cures Act In 2016, the 21st Century Cures Act (Act) was signed into law to support ongoing biomedical innovation. 
One part of the Act, Section 3086, is aimed at “Encouraging Treatments for Agents that Present a National Security Threat.” This section of the Act created a new priority review voucher program for “material threat medical countermeasures.” The Act defines such countermeasures as drugs or vaccines intended to treat biological, chemical, radiological, or nuclear agents that present a national security threat, or to treat harm from a condition that may be caused by administering a drug or biological product against such an agent. The priority review vouchers are awarded at the time of FDA approval and are fully transferrable and may be sold to other companies to be used for priority review of any New Drug Application (NDA) or Biologic Licensing Application (BLA). Tonix is developing innovative pharmaceutical products to address public health challenges, with TNX-102 SL in Phase 3 development for posttraumatic stress disorder (PTSD). TNX-102 SL is designed for bedtime use and is believed to improve overall PTSD symptoms by improving sleep quality in PTSD patients.  PTSD is a serious condition characterized by chronic disability, inadequate treatment options especially for military-related PTSD and overall high utilization of healthcare services creating significant economic burden. TNX-102 SL was recently granted Breakthrough Therapy designation by the FDA for the treatment of PTSD. Other development efforts include TNX-601, a clinical candidate at Pre-IND (Investigational New Drug) application stage, designed for daytime use for the treatment of PTSD, and TNX-801, a potential smallpox-preventing vaccine. *TNX-102 SL (cyclobenzaprine HCl sublingual tablets) is an investigational new drug and has not been approved for any indication. This press release and further information about Tonix can be found at www.tonixpharma.com. Certain statements in this press release are forward-looking within the meaning of the Private Securities Litigation Reform Act of 1995. These statements may be identified by the use of forward-looking words such as “anticipate,” “believe,” “forecast,” “estimate,” “expect,” and “intend,” among others. These forward-looking statements are based on Tonix's current expectations and actual results could differ materially. There are a number of factors that could cause actual events to differ materially from those indicated by such forward-looking statements. These factors include, but are not limited to, substantial competition; our need for additional financing; uncertainties of patent protection and litigation; uncertainties of government or third party payor reimbursement; limited research and development efforts and dependence upon third parties; and risks related to failure to obtain FDA clearances or approvals and noncompliance with FDA regulations. As with any pharmaceutical under development, there are significant risks in the development, regulatory approval and commercialization of new products. Tonix does not undertake an obligation to update or revise any forward-looking statement. Investors should read the risk factors set forth in the Annual Report on Form 10-K for the year ended December 31, 2015, as filed with the Securities and Exchange Commission (the “SEC”) on March 3, 2016, and future periodic reports filed with the SEC on or after the date hereof. All of Tonix's forward-looking statements are expressly qualified by all such risk factors and other cautionary statements. The information set forth herein speaks only as of the date hereof.


News Article | November 30, 2016
Site: globenewswire.com

SAN DIEGO, Nov. 30, 2016 (GLOBE NEWSWIRE) -- Otonomy, Inc. (NASDAQ:OTIC), a biopharmaceutical company focused on the development and commercialization of innovative therapeutics for diseases and disorders of the ear, today announced the appointment of Kathie M. Bishop, Ph.D., as chief scientific officer. Dr. Bishop is a neuroscientist with more than fifteen years of pharmaceutical development experience. At Ionis Pharmaceuticals, she led translational research and development of programs in the neurology franchise, including SPINRAZA™ (nusinersen), a treatment for patients with spinal muscular atrophy that is awaiting regulatory approval. "Kathie is a great fit to lead our development efforts given her neuroscience background and successful track record managing significant development programs from inception through to registration," said David A. Weber, Ph.D., president and CEO of Otonomy. "Furthermore, her extensive experience with local drug delivery in the nusinersen as well as other programs is highly relevant to our focus in developing locally administered therapeutics for otic disorders." Dr. Bishop succeeds Carl LeBel, Ph.D., who had previously announced his retirement. She joins Otonomy from Tioga Pharmaceuticals, where she had served as chief scientific officer since 2015. Previously, she served in product development management roles at Ionis Pharmaceuticals including vice president, clinical development. At Ionis, she led translational research and development of a portfolio of programs in the neurology franchise, which included clinical-stage products for the treatment of spinal muscular atrophy, myotonic dystrophy, and amyotrophic lateral sclerosis and preclinical programs targeting various disorders including retinal degeneration. Prior to Ionis, she served in research and development leadership roles at Ceregene, a company focused on the development of gene therapy products for the treatment of neurodegenerative disorders and retinal diseases. Before joining Ceregene, she worked as a post-doctoral fellow in the Molecular Neurobiology Lab at the Salk Institute in La Jolla. Dr. Bishop obtained her Ph.D. in Neuroscience from the University of Alberta, a B.A. in Psychology from Simon Fraser University and a B.Sc. in Cell Biology and Genetics from the University of British Columbia. Otonomy is a biopharmaceutical company focused on the development and commercialization of innovative therapeutics for diseases and disorders of the ear. OTIPRIO® (ciprofloxacin otic suspension) is approved in the United States for use during tympanostomy tube placement surgery in pediatric patients, and commercial launch commenced in March 2016. OTO-104 is a steroid in development for the treatment of Ménière's disease and other severe balance and hearing disorders. Two Phase 3 trials in Ménière's disease patients are underway, with results expected during the second half of 2017. OTO-311 is an NMDA receptor antagonist for the treatment of tinnitus that is in a Phase 1 clinical safety trial. Otonomy’s proprietary formulation technology utilizes a thermosensitive gel and drug microparticles to enable single-dose treatment by a physician. For additional information please visit www.otonomy.com. This press release contains forward-looking statements within the meaning of the Private Securities Litigation Reform Act of 1995. Forward-looking statements generally relate to future events or future financial or operating performance of Otonomy. 
Forward-looking statements in this press release include, but are not limited to, the timing of results for the two OTO-104 Phase 3 clinical trials in Ménière's disease. Otonomy's expectations regarding these matters may not materialize, and actual results in future periods are subject to risks and uncertainties. Actual results may differ materially from those indicated by these forward-looking statements as a result of these risks and uncertainties, including but not limited to: Otonomy's limited operating history and its expectation that it will incur significant losses for the foreseeable future; Otonomy's ability to obtain additional financing; Otonomy's dependence on the commercial success of OTIPRIO and the regulatory success and advancement of additional product candidates, such as OTO-104 and OTO-311, and label expansion indications for OTIPRIO; the uncertainties inherent in the clinical drug development process, including, without limitation, Otonomy's ability to adequately demonstrate the safety and efficacy of its product candidates, the preclinical and clinical results for its product candidates, which may not support further development, and challenges related to patient enrollment in clinical trials; Otonomy's ability to obtain regulatory approval for its product candidates; side effects or adverse events associated with Otonomy's product candidates; competition in the biopharmaceutical industry; Otonomy's dependence on third parties to conduct preclinical studies and clinical trials; the timing and outcome of hospital pharmacy and therapeutics reviews and other facility reviews; the impact of coverage and reimbursement decisions by third-party payors on the pricing and market acceptance of OTIPRIO; Otonomy's dependence on third parties for the manufacture of OTIPRIO and product candidates; Otonomy's dependence on a small number of suppliers for raw materials; Otonomy's ability to protect its intellectual property related to OTIPRIO and its product candidates in the United States and throughout the world; expectations regarding potential market size, opportunity and growth; Otonomy's ability to manage operating expenses; implementation of Otonomy's business model and strategic plans for its business, products and technology; and other risks. Information regarding the foregoing and additional risks may be found in the section entitled "Risk Factors" in Otonomy's Quarterly Report on Form 10-Q filed with the Securities and Exchange Commission (the "SEC") on November 3, 2016, and Otonomy's future reports to be filed with the SEC. The forward-looking statements in this press release are based on information available to Otonomy as of the date hereof. Otonomy disclaims any obligation to update any forward-looking statements, except as required by law.


News Article | December 21, 2016
Site: www.marketwired.com

Increases size of non-brokered private placement of Common Shares, CEE (Canadian Exploration Expenditures) and CDE (Canadian Development Expenditures) Flow through Shares; Announces the Consolidation "Reverse Share Split" of shares; Debt Repayment; Reinstatement to Trade; Related Party Loan Agreement; Additions to the Management Team - John Hogg, P. Geo. will become President of the Company, with Don Cameron joining as Independent Director and Don Wright, Sean Rooney, Bill Calsbeck and Graham MacPherson as Executive Advisors
THIS PRESS RELEASE IS NOT TO BE DISTRIBUTED TO U.S. NEWSWIRE SERVICES OR FOR DISSEMINATION IN THE UNITED STATES. ANY FAILURE TO COMPLY WITH THIS RESTRICTION MAY CONSTITUTE A VIOLATION OF U.S. SECURITIES LAW.
Jaguar Resources Inc. (TSX VENTURE:JRI) ("Jaguar" or the "Company") is pleased to announce that the non-brokered private placement previously announced on May 31, 2016 has been increased to raise $34 million in total equity proceeds.
Reverse Split
In advance of closing of the non-brokered private placement, the Company received unanimous approval from its Board of Directors to complete a reverse share split of the issued and outstanding common shares at a ratio of 10 old common shares for each new common share held. Current outstanding shares consist of 84,405,538 and upon completion of the reverse share split the corporation will have approximately 8,440,554 common shares.
Increased Size of Private Placement
After the consolidation of its common shares outstanding, Jaguar intends to raise $22,000,000 through the issuance of 18,333,333 common shares of the Company ("Common Shares") at $1.20 per common share, $10,000,000 through the issuance of 7,575,757 Canadian Exploration Expenditures ("CEE") flow through shares at a price of $1.32 per CEE flow through share, and $2,000,000 through the issuance of 1,562,500 Canadian Development Expenditures ("CDE") flow through shares at a price of $1.28 per CDE flow through share (collectively, the "Placement"). The Company retains an over-allotment option to increase the size of the Placement by 15% for 30 days. The Placement is subject to the approval of the TSX Venture Exchange. A finder's fee will be payable on a portion of the private placement. All securities issued will be subject to a four-month hold period from the date of closing.
Debt Repayment
The Company announces the issuance of 83,333 common shares (Shares) to a consultant pursuant to contractual arrangements, and to settle a certain debt of the Company owed to such consultant at a deemed price of $0.12 per Share, representing in aggregate $9,999.96 (the Debt Repayment). The Company's total shares-for-debt issuance in August, including the 83,333 common shares, is 10,974,462 shares.
Reinstatement to Trade
The Cease Trade Order issued May 6, 2015 by the Alberta Securities Commission was revoked on March 15, 2016. Subsequently, the Company has addressed compliance and corporate governance concerns, primarily pertaining to related party transactions, as part of the Exchange's reinstatement to trade review process. This included implementing policies and procedures for corporate governance, compensation and disclosure. The securities of the company will be reinstated to trade at the open on December 23, 2016.
Related Party Loan Agreement
The Company has entered into a loan agreement with Corbin Blume for the $150,000 loan issued on June 13, 2014. The loan will be repaid at an interest rate of the Royal Bank of Canada prime rate plus three percent (3%) annually. 
The loan will be paid back on or before March 15, 2017.
Appointments to Board of Directors / Advisory Team
The Board of Directors of the Company (the "Board") is pleased to announce that, effective January 9, 2017, Mr. John Hogg (P. Geo.) will be joining the Company as its new President. Mr. Hogg will report directly to the Board of Directors and be responsible for the Company's strategic direction and growth. Mr. Hogg is a professional geologist with over 35 years of oil and gas industry experience. He has recently been the President of Skybattle Resources Ltd, and is currently a Director of EOS-Petro Inc. He has worked for Gulf Canada, Husky Energy, Pan Canadian Energy, EnCana, Conoco-Phillips and MGM Energy Corp., where he was Vice President Exploration/Operations and a Corporate Officer.
Effective January 9, 2017, Mr. Donald Cameron will be joining the Board as Independent Director. Mr. Cameron began his career with Esso Resources and has over thirty years of senior management experience in all aspects of business development in the oil and gas industry. Mr. Cameron has been CEO, President and Director of both private and public energy companies. Among his previous achievements, he was Senior Vice President, Development for Sobeys Inc. Mr. Cameron holds a Bachelor of Arts degree in Economics and Business from Queen's University, Kingston, Ontario, Canada.
The Board is also pleased to announce that, effective January 9, 2017, Mr. Donald Wright, Mr. Sean Rooney, Mr. Bill Calsbeck and Mr. Graham MacPherson will be joining the Company as Executive Advisors. Mr. Donald Wright is currently President and Chief Executive Officer of The Winnington Capital Group Inc. He is an active investor in both the private and public equity markets. Mr. Wright's career has spanned over 40 years in the investment industry. He has held several leadership positions, including President of Merrill Lynch Canada, Executive Vice-President, Director and member of the Executive Committee of Burns Fry Ltd., Chair and Chief Executive Officer of TD Securities Inc. and Deputy Chair of TD Bank Financial Group.
Mr. Sean Rooney has over 25 years' experience in food & beverage operations in both public and private businesses. Sean was formerly President and CEO of Aramark Stadiums & Arenas, along with ownership of the Pittsburgh Steelers. He has also developed and managed several private entities in the gaming and casino industries, including PB Kennel Club, Empire City Casino & Yonkers Raceway, and Boro Leisure Services.
Mr. Bill Calsbeck has over 30 years of capital markets and micro-cap experience. For the past 10 years he has been managing the West Coast operations of Ubequity Capital Partners Inc. Mr. Calsbeck began his career in banking and trust services and after several years moved into the human resource field, providing consulting services to clients such as the Vancouver Stock Exchange, Expo 86, MDA, and several major financial institutions. Over his career, Mr. Calsbeck has been a member of the board of directors of many public companies and has served as Chairman, as secretary and on audit committees.
Mr. Graham MacPherson is a professional Engineer with 28 years' experience in the oil and gas industry, where he has extensive experience in project management, company management and technical services consulting. Mr. MacPherson is a Mechanical Engineering graduate from the University of Alberta and a Petroleum Technologist (NAIT). 
These additions strongly position the Company to move forward with its corporate plans and future direction, which will involve the drilling of exploration prospects in Alberta, along with the evaluation of other oil and gas exploration and development opportunities outside of Canada.
Agents and Advisors
Jaguar retained Imperial Capital LLC (New York, New York), Topleft Securities (Toronto, Ontario) and Roche Securities Ltd. (Toronto, Ontario) as agents to the company. Thunderstone Capital Inc. (Calgary, Alberta) acted as a strategic advisor to the company.
Jaguar's business strategy is to seek to provide shareholders with growth by exploring its existing assets at Bannock Creek, Saskatchewan. Jaguar is also pursuing industry farm-in drilling opportunities both within Canada and worldwide. As part of its corporate strategy of acquiring additional assets, the Company is in the process of evaluating several potential transactions which individually or together could be material. The Company cannot predict whether any current or future potential opportunities will result in one or more transactions involving the Company. The Company may issue equity or utilize debt facilities to finance all or a portion of any such potential acquisitions and/or drilling programs on its Saskatchewan lands and future Alberta land holdings.
Neither the TSX Venture Exchange nor its Regulation Services Provider (as that term is defined in the policies of the TSX Venture Exchange) accepts responsibility for the adequacy or accuracy of this release.
Forward-looking Information
This news release contains forward-looking statements and forward-looking information within the meaning of applicable securities laws. These statements relate to future events or future performance. All statements other than statements of historical fact may be forward-looking statements or information. More particularly and without limitation, this news release contains forward-looking statements and information relating to the resumption of trading of the Company's common shares on the Exchange. The forward-looking statements and information are based on certain key expectations and assumptions made by management of the Company, including, without limitation, the Company having adequate financial resources to satisfy all the requirements of the Exchange related to resumption of trading. Although management of the Company believes that the expectations and assumptions on which such forward-looking statements and information are based are reasonable, undue reliance should not be placed on the forward-looking statements and information since no assurance can be given that they will prove to be correct. Forward-looking statements and information are provided for the purpose of providing information about the current expectations and plans of management of the Company relating to the future. Readers are cautioned that reliance on such statements and information may not be appropriate for other purposes, such as making investment decisions. Since forward-looking statements and information address future events and conditions, by their very nature they involve inherent risks and uncertainties. Actual results could differ materially from those currently anticipated due to a number of factors and risks. 
These include, but are not limited to, the Company's ability to continue operations without adequate capital, the Company's ability to raise further capital, the Company's ability to efficiently and successfully explore and develop its properties, availability of drilling rigs, failure to interpret geological and geophysical information accurately, and the likelihood of those or any geological structures containing hydrocarbons. Accordingly, readers should not place undue reliance on the forward-looking statements and information contained in this news release. Readers are cautioned that the foregoing list of factors is not exhaustive. The forward-looking statements and information contained in this news release are made as of the date hereof and no undertaking is given to update publicly or revise any forward-looking statements or information, whether as a result of new information, future events or otherwise, unless so required by applicable securities laws or the TSXV. The forward-looking statements and information contained in this news release are expressly qualified by this cautionary statement.
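For readers who want to sanity-check the share arithmetic quoted in the release above, the short Python sketch below reproduces the 10-for-1 consolidation and the gross-proceeds figures. It is illustrative only; the share counts and prices come from the announcement, and the rounding conventions are assumptions.

```python
# Illustrative check of the consolidation and placement arithmetic in the release.
# All figures are taken from the announcement; rounding is assumed, not specified.

pre_split_shares = 84_405_538
post_split_shares = pre_split_shares / 10           # 10 old shares for each new share
print(f"Post-consolidation shares: ~{post_split_shares:,.0f}")   # ~8,440,554

offerings = {
    "Common Shares":    (18_333_333, 1.20),   # stated target ~$22.0 million
    "CEE flow-through": (7_575_757, 1.32),    # stated target ~$10.0 million
    "CDE flow-through": (1_562_500, 1.28),    # stated target  $2.0 million
}
for name, (shares, price) in offerings.items():
    print(f"{name}: {shares:,} x ${price:.2f} = ${shares * price:,.2f}")

total = sum(shares * price for shares, price in offerings.values())
print(f"Total gross proceeds: ~${total:,.2f}")       # ~$34 million, as announced

# Debt settlement: 83,333 shares at a deemed $0.12 per share
print(f"Debt settled: ${83_333 * 0.12:,.2f}")        # $9,999.96
```

Running the sketch shows the three tranches sum to roughly $34.0 million, matching the headline figure.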


News Article | November 17, 2016
Site: www.eurekalert.org

Researchers at University of California San Diego School of Medicine have shown that ustekinumab, a human antibody used to treat arthritis, significantly induces response and remission in patients with moderate to severe Crohn's disease. Results of the clinical trial will appear in the November 16 issue of the New England Journal of Medicine.
"A high percentage of the patients in the study who had not responded to conventional therapies were in clinical remission after only a single dose of intravenous ustekinumab," said William J. Sandborn, MD, professor of medicine at UC San Diego School of Medicine and director of the Inflammatory Bowel Disease Center at UC San Diego Health. "Finding effective new treatment options for this patient population is critical because Crohn's disease can dramatically impact a person's quality of life. Patients suffering from this disease may go to the bathroom up to 20 times a day and experience abdominal pain, ulcers and a reduced appetite."
Crohn's disease is a chronic inflammatory disease of the gastrointestinal tract that affects approximately 700,000 people in the United States. It can affect any part of the GI tract but it is more commonly found at the end of the small intestine (the ileum) where it joins the beginning of the large intestine (or colon). Crohn's disease is usually treated with glucocorticoids, immunosuppressants, tumor necrosis factor (TNF) antagonists or integrin inhibitors. "The drawbacks of these therapies include an increased risk of infection and cancer, and limited efficacy," said Sandborn. "Ustekinumab has not been associated with an increased risk of serious adverse events."
The rates of response in the randomized study at week six among patients receiving intravenous ustekinumab at a dose of either 130 mg or approximately 6 mg per kilogram were significantly higher than the rates among patients receiving a placebo. The study also found subcutaneous (injected) ustekinumab every 8 to 12 weeks maintained remission in patients. "This study indicates that ustekinumab may have a long duration of action, a likelihood that may become better understood in future trials," said Sandborn. "Our current findings offer hope for those suffering from this debilitating gastrointestinal tract disease."
The Inflammatory Bowel Disease (IBD) Center at UC San Diego Health is dedicated to diagnosing and treating people with IBD from around the world. The center's leadership in IBD medical research means patient access to clinical trials for the newest therapies and advanced surgical techniques for the treatment of this challenging condition. Care is provided by a multidisciplinary team of specialists in gastroenterology, endoscopy, oncology, surgery, transplantation and radiology. 
Co-authors of the study include: Brian Feagan, Robarts Clinical Trials, Robarts Research Institute, Western University, London; Subrata Ghosh, University of Calgary; Levinus Dieleman, University of Alberta; Stephan Targan, Cedars-Sinai Medical Center; Christopher Gasink, Douglas Jacobstein, Yinghua Lang, Joshua Friedman, Jewel Johanns, Long-Long Gao, Ye Miao, and Omoniyi Adedokun, Janssen Research and Development; Marion Blank, Janssen Scientific Affairs; Bruce Sands, Jean-Frédéric Colombel, Icahn School of Medicine; Seymour Katz, New York University School of Medicine; Stephen Hanauer, Feinberg School of Medicine; Severine Vermeire, Paul Rutgeerts, University Hospitals Leuven; Willem de Villiers, Stellenbosch University; Zsolt Tulassay, Semmelweis University; Ursula Seidler, Hannover Medical School; Bruce Salzberg, Atlanta Gastroenterology Specialists; Pierre Desreumaux, Hôpital Claude Huriez; Scott Lee, University of Washington Medical Center; and Edward Loftus, Jr., Mayo Clinic.


News Article | February 28, 2017
Site: www.PR.com

Strathmore's Who's Who Honors Dale R. Mudt as a 2017 Professional of the Year
Sarnia, Ontario, Canada, February 28, 2017 --( PR.com )-- Dale R. Mudt, of Sarnia, Ontario, Canada, has recently been honored as a 2017 Strathmore's Who's Who Professional of the Year for his outstanding contributions and achievements in the field of Chemical Engineering.
About Dale R. Mudt
Dale R. Mudt is Manager, Process Automation at the Suncor Energy Products Inc. Refinery. Mr. Mudt earned a BSc with Distinction in Chemical Engineering from the University of Alberta. Mr. Mudt's expertise is Real Time Optimization, Management, Advanced Process Control, and Data Acquisition. He has spoken on Advanced Control Optimization and Safety Systems to 4th-year engineering students and presented "Refinery Real Time Optimization...On the Road for 25 Years" at the Manufacturing Technology Network Conference. Prior to joining Suncor, Mr. Mudt worked for several consulting engineering firms in the food, mining, and inorganic chemical industries. He is a registered Professional Engineer in the Province of Ontario and is a member of both the CSChE and AIChE. In his leisure time, Mr. Mudt enjoys fishing, travel, sports, genealogy and spending time with his family. www.suncor.com
About Strathmore's Who's Who
Strathmore's Who's Who publishes an annual two thousand page hard cover biographical registry, honoring successful individuals in the fields of Business, the Arts and Sciences, Law, Engineering and Government. Based on one's position and lifetime of accomplishments, we honor professional men and women in all academic areas and professions. Inclusion is limited to individuals who have demonstrated leadership and achievement in their occupation, industry or profession.


News Article | March 25, 2016
Site: phys.org

Home to a mix of preserved wetlands, green rolling hills and dense boreal forests, the Beaver Hills area east of Edmonton has been designated as a United Nations Educational, Scientific and Cultural Organization (UNESCO) Biosphere Reserve, under its Man and the Biosphere Programme. The area joins a network of 669 sites in 120 countries that foster ecologically sustainable human and economic development. Researchers from various faculties at the U of A have conducted dozens of studies there over the last 30 years, focused on work ranging from wildlife and outdoor recreation to wetlands and land management. "University of Alberta research has benefited from the Beaver Hills area in many ways," said Guy Swinnerton, professor emeritus in the Faculty of Physical Education and Recreation and chair of the Beaver Hills Initiative Protected Areas Working Group. Swinnerton, who has enjoyed the Beaver Hills area as both a hiker and a researcher for many years, assisted in the nomination process for the UNESCO designation. He began taking students to the area in 1978 while teaching courses about protected areas and outdoor recreation. "Beaver Hills has different types of protected areas, and it's that whole mosaic that is important," he said. The Beaver Hills Biosphere Reserve becomes the second area of Alberta to win UNESCO designation, after the Waterton Biosphere Reserve in 1979. Home to the U of A's Augustana Miquelon Lake Research Station, the biosphere's 1,572 square kilometres also encompass Elk Island National Park, Miquelon Lake Provincial Park, Cooking Lake-Blackfoot Provincial Recreation Area, the Ukrainian Cultural Heritage Village, the Ministik Lake Game Bird Sanctuary and the Strathcona Wilderness Centre. With its well-preserved, protected parklands and forests sitting next to surrounding farms and residential subdivisions, the Beaver Hills biosphere provides opportunities for university researchers and government scientists to investigate, through comparative studies, how to protect biodiversity and practise sustainable development within the lived-in landscape. "It's this total landscape approach that demonstrates how we have to work collectively to find balance between conservation and sustainable development," Swinnerton said. "It's a hidden gem," added Glynnis Hood, an associate professor of environmental science based at the U of A's Augustana Campus. "Beaver Hills is spectacular because of its subtle beauty. There are ecological surprises around every corner, because you're not looking for the big features like mountains, but for the small surprises." One of those surprises is the fisher, a weasel thought to be gone from the area that seems to have a healthy population and is now the subject of a collaborative University of Victoria study involving Augustana Campus. "The Beaver Hills biosphere offers a rich opportunity to keep exploring questions that are right in our own backyard," said Hood, who lives near Miquelon Lake and has for years guided students in researching area wetlands. She's also studied human-wildlife conflicts and is currently researching low-impact wetland management practices. Last year she and colleague Glen Hvenegaard led the first field course in environmental science and ecology at the Miquelon Lake Research Station, which opened in 2015. The 17-day course, which will be offered biannually, gave U of A students the chance to appreciate the Beaver Hills area's rich diversity as they studied everything from park interpretation to muskrats to soil science. 
"It was a great way to get the students to really live in the landscape and understand it intimately though research," Hood said. The UNESCO designation affirms the Beaver Hills Biosphere Reserve as a world-class discovery ground that, through the work of U of A researchers and other groups, is yielding insights into global problems. "It demonstrates grassroots excellence and honours the commitment of organizations and people in solving conservation and sustainable development problems on the ground," Swinnerton said.


REDONDO BEACH, CA / ACCESSWIRE / December 13, 2016 / BioLargo, Inc. (OTCQB: BLGO), owner and developer of the breakthrough AOS (Advanced Oxidation System), a low-energy, high-efficiency clean water technology, announced the start of a relationship with Chicago Bridge & Iron, NV (NYSE: CBI). According to the press release and a number of recent interviews with BioLargo's President & CEO, Dennis P. Calvert, the new relationship was formed to support the commercialization of BioLargo's proprietary technology and to provide independent performance verification. BioLargo also reports the AOS has been proven to disinfect and decontaminate water better, faster and at a lower cost than any other competing technology. Based on the breadth and significance of the technical performance claims for its AOS, BioLargo has a broad range of commercial opportunities for large industrial applications that must contend with water, such as maritime ballast water management systems, wastewater treatment, environmental remediation, food safety, oil & gas, mining, and agriculture. Its future uses also promise to impact the drinking water industry, including municipal systems, home use, and emerging nations.
The company is also busy commercializing its new "CupriDyne Clean", an industrial odor control product launched last May. The company reports that the product is so effective and low-cost that it is gaining rapid traction through trials with leaders within the waste handling industry and that it has had some early sales. Management believes sales will continue to climb as they finalize supplier agreements with large multi-location customer accounts. CupriDyne Clean may also have an important role to play in industries that contend with volatile organic compounds like hydrogen sulfide (H2S) that impact air quality and safety.
Dennis P. Calvert, President & CEO of BioLargo commented, "All of our technologies at BioLargo can serve a wide array of industrial customers that want clean water and clean air. Our mission to 'Make Life Better' includes helping industry tackle operational challenges cost effectively. That intersection of service is likely where our new relationship with CB&I will shine the brightest and we look forward to working with the exceptional team at CB&I to serve industry."
With more than 40,000 employees, $13 billion in annual revenue and over $20 billion in future contracts, CB&I is a world-leading engineering, procurement, fabrication, and construction company, and a provider of environmental and infrastructure services. CB&I builds oil refineries, liquefied natural gas terminals, wastewater treatment plants, offshore platforms, and power plants. CB&I is also the world's largest tank construction company and builds tanks for the oil & gas, mining, water, and wastewater industries. The company also remediates hazardous waste problems. Clean water and clean air are at the heart of many of the industries served by CB&I and BioLargo's technologies.
Details in the first announcement were slim. This news sends notice to the investment world and to industry that BioLargo's technologies can have an important role to play in helping solve air and water contamination problems in a safe, effective and affordable way. Calvert has been quick to point out that the current version of the AOS has been engineered to serve entry-level clients and that important scale-up work is required to serve very large-scale industrial clients. 
BioLargo Water's research team recently showcased the first pre-commercial prototype of its AOS water treatment system, billed as the lowest-cost, highest-impact scalable clean water technology in the world. By combining a cutting-edge carbon matrix, advanced iodine chemistry, and electrolysis, this technology rapidly and inexpensively eliminates bacteria and chemical contaminants in water without leaving residual toxins. University of Alberta researchers, in collaboration with BioLargo Water scientists, have confirmed test results validating that the AOS achieves unprecedented rates of disinfection, eliminating infectious biological pathogens such as Salmonella, Listeria and E. coli. The AOS has also been proven effective in oxidizing and removing hard-to-manage soluble organic acids, aromatic compounds, and solvents faster than existing technologies and with very little input energy. Test results also validate its extremely high oxidation potential for tackling a long "watch list" of contaminants identified by the EPA. The company reports that future generations of the AOS will include the extraction and harvesting of important contaminants like sulfur, nitrates, phosphorus, and even heavy metals.
The company's first "Alpha" AOS was constructed in collaboration with the Northern Alberta Institute of Technology (NAIT)'s Center for Sensors and Systems Integration and with NAIT's Applied Bio/Nanotechnology Industrial Research Chair. Its "Beta" unit is expected to be ready for commercial trials in 2017. What places the AOS above competing technologies is its exceptionally high rate of disinfection (100x more effective than the competition, as verified in poultry production applications) and remarkably low capital and operational costs, made possible by the extremely low amount of electrical energy required to power the oxidation process. Studies have shown the AOS to achieve remarkable rates of disinfection at less than 1/20th the electrical energy input of competing technologies. The AOS is scalable and modular in design to meet a wide variety of needs in the marketplace. BioLargo is already working on what it calls the "Gen 2 AOS" for ultra-high flow rates. Because the markets for the AOS are very large and the needs so great, management reports that they believe it is only a matter of time before industry adopts this new breakthrough low-cost technology.
Oil and gas companies such as Exxon Mobil Corporation (NYSE: XOM), Halliburton Company (NYSE: HAL), Schlumberger Limited (NYSE: SLB), Chevron Corporation (NYSE: CVX) and Royal Dutch Shell plc (NYSE: RDS-A) could dramatically reduce water transportation, sourcing and disposal costs by adopting the AOS. The AOS has been shown to be cost effective at removing problematic contaminants from oil & gas "produced water", and any technology such as the AOS that could cost-effectively enable water recycling on-site could slash costs and greatly improve the bottom line for many producers that are now suffering big losses due to persistently low oil prices. It could also alleviate the costly problem of injecting produced water deep into injection wells, and simultaneously reduce pollution.
The maritime industry faces increasing regulatory pressure to eliminate the detrimental transfer and release of invasive marine species through the discharge of ballast water. 
This issue prompted the International Maritime Organization to impose regulations for the treatment and discharge of ballast water, and these new rules are scheduled to come into force beginning in September 2017. An estimated 65,000 ships must adopt ballast water treatment systems type-approved under the International Convention for the Control and Management of Ships' Ballast Water and Sediments, 2004 (BWMC). Approved systems must disinfect seawater to specified standards without adding any toxic elements to the discharged water. Global Water Intelligence estimates that the average cost for each ballast water management system will be more than $750,000 and the total cost to outfit every vessel will be about $46.5 billion. Because it is the highest impact, lowest cost, lowest energy technology known that can solve this problem, the AOS could be the most practical solution for maritime operators such as DryShips, Inc. (NASDAQ: DRYS), Navios Maritime Holdings, Inc. (NASDAQ: NM), Diana Shipping, Inc. (NYSE: DSX), Sino-Global Shipping America, Ltd. (NASDAQ: SINO), Diana Containerships Inc. (DCIX) and several others. In an effort to reduce the incidence of foodborne illness in the poultry industry, the U.S. Department of Agriculture's Food Safety and Inspection Service (FSIS) announced new, stricter federal standards to reduce Salmonella and Campylobacter in ground chicken and turkey products, as well as in raw chicken breasts, legs, and wings. The new regulations took effect July 1, 2016 and have the potential to impact sales of the poultry processing operations of Tyson Foods, Inc. (NYSE: TSN), Pilgrim's Pride Corporation (NASDAQ: PPC), Sanderson Farms, Inc. (NASDAQ: SAFM), Hormel Foods Corporation (NYSE: HRL), Perdue, Cargill, Smithfield Foods, Inc., Conagra Foods, Inc., and every other poultry processor. Researchers at the University of Alberta confirmed that the AOS could be highly effective in reducing cross-contamination of pathogens when poultry is washed in chill tanks. Water quality in municipal water systems is also a growing concern; a few large water treatment companies that provide water services to millions of U.S. residents are American Water Works Company, Inc. (NYSE: AWK), American States Water Company (NYSE: AWR), Aqua America, Inc. (NYSE: WTR) and Veolia Environnement S.A. (OTC: VEOEY). The need for a better, lower-cost clean water technology is urgent, and CB&I may just be the perfect company to support implementation of the breakthrough low-cost water and air treatment technologies developed by BioLargo, Inc. that can help solve problems across such a broad spectrum of industries. Except for the historical information presented herein, matters discussed in this release contain forward-looking statements that are subject to certain risks and uncertainties that could cause actual results to differ materially from any future results, performance or achievements expressed or implied by such statements. Emerging Growth LLC, which owns SECFilings.com, is not registered with any financial or securities regulatory authority, and does not provide nor claim to provide investment advice or recommendations to readers of this release. Emerging Growth LLC may from time to time have a position in the securities mentioned herein and may increase or decrease such positions without notice. For making specific investment decisions, readers should seek their own advice. 
Emerging Growth LLC may be compensated for its services in the form of cash-based compensation or equity securities in the companies it writes about, or a combination of the two. For full disclosure please visit: http://secfilings.com/Disclaimer.aspx.
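The fleet-wide retrofit estimate quoted in the article above can be sanity-checked with simple arithmetic. The short sketch below (plain Python, with the article's figures hard-coded) multiplies the estimated number of ships by the quoted average system cost and compares the result with the $46.5 billion total; the two figures agree to within roughly five per cent, which is about what one would expect for a market estimate of this kind.

```python
# Back-of-envelope check of the ballast-water retrofit figures quoted in the
# article: roughly 65,000 ships, an average system cost of more than
# $750,000, and a fleet-wide total of about $46.5 billion.
ships = 65_000                  # vessels needing type-approved systems
cost_per_system = 750_000       # USD, quoted average cost per system
fleet_total_quoted = 46.5e9     # USD, quoted fleet-wide estimate

fleet_total_implied = ships * cost_per_system
avg_cost_implied = fleet_total_quoted / ships

print(f"65,000 ships x $750k each    = ${fleet_total_implied / 1e9:.1f} billion")
print(f"$46.5 billion / 65,000 ships = ${avg_cost_implied:,.0f} per vessel")
# -> about $48.8 billion and about $715,000 per vessel, i.e. the quoted
#    figures are mutually consistent to within roughly five per cent.
```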


News Article | March 11, 2016
Site: cleantechnica.com

They come from the West Coast, from as far south as California, as far north as Alaska, and as far east as the Atlantic coast. Their joint letter refers to “Misrepresentation,” “lack of information,” and “Disregard for science that was not funded by the proponent.” Scientists condemn the flawed review process for Lelu Island, at the mouth of British Columbia’s Skeena River, as “a symbol of what is wrong with environmental decision-making in Canada.” More than 130 scientists signed on to this letter. “This letter is not about being for or against LNG, the letter is about scientific integrity in decision-making,” said Dr. Jonathan Moore, Liber Ero Chair of Coastal Science and Management, Simon Fraser University. One of the other signatories is Otto Langer, former Chief of Habitat Assessment at the Department of Fisheries and Oceans (DFO), who wrote: These are tough words for a Federal government that promised to put teeth back in the gutted environmental review process. In Prime Minister Justin Trudeau’s defense, this is yet another problem he inherited from the previous administration, and the task of cleaning up this mess seems enormous. That said, this government was aware the environmental review process was broken before it was elected and has not intervened to at least stop the process from moving forward until it is prepared to take action. The Liberal Government appears to be facing a tough decision. So far, it has attempted to work with the provinces. On Lelu Island, as well as the equally controversial proposed Kinder Morgan Pipeline expansion and Site C Dam project, continuing to support Premier Clark’s policies in this manner would appear to necessitate betraying the trust of the Canadian people. Here are a few choice excerpts from the public letter that more than 130 scientists sent to Catherine McKenna and Prime Minister Trudeau: ” … The CEAA draft report has not accurately characterized the importance of the project area, the Flora Bank region, for fish. The draft CEAA report1 states that the “…marine habitats around Lelu Island are representative of marine ecosystems throughout the north coast of B.C.”. In contrast, five decades of science has repeatedly documented that this habitat is NOT representative of other areas along the north coast or in the greater Skeena River estuary, but rather that it is exceptional nursery habitat for salmon2-6 that support commercial, recreational, and First Nation fisheries from throughout the Skeena River watershed and beyond7. A worse location is unlikely to be found for PNW LNG with regards to potential risks to fish and fisheries….” ” … CEAA’s draft report concluded that the project is not likely to cause adverse effects on fish in the estuarine environment, even when their only evidence for some species was an absence of information. For example, eulachon, a fish of paramount importance to First Nations and a Species of Special Concern8, likely use the Skeena River estuary and project area during their larval, juvenile, and adult life-stages. There has been no systematic study of eulachon in the project area. Yet CEAA concluded that the project posed minimal risks to this fish…” ” … CEAA’s draft report is not a balanced consideration of the best-available science. 
On the contrary, CEAA relied upon conclusions presented in proponent-funded studies which have not been subjected to independent peer-review and disregarded a large and growing body of relevant independent scientific research, much of it peer-reviewed and published…” ” …The PNW LNG project presents many different potential risks to the Skeena River estuary and its fish, including, but not limited to, destruction of shoreline habitat, acid rain, accidental spills of fuel and other contaminants, dispersal of contaminated sediments, chronic and acute sound, seafloor destruction by dredging the gas pipeline into the ocean floor, and the erosion and food-web disruption from the trestle structure. Fisheries and Oceans Canada (DFO) and Natural Resources Canada provided detailed reviews12 on only one risk pathway – habitat erosion – while no such detailed reviews were conducted on other potential impacts or their cumulative effects…” ” … CEAA’s draft report concluded that the project posed moderate risks to marine fish but that these risks could be mitigated. However, the proponent has not fully developed their mitigation plans and the plans that they have outlined are scientifically dubious. For example, the draft assessment states that destroyed salmon habitat will be mitigated; the “proponent identified 90 000 m2 of lower productivity habitats within five potential offsetting sites that could be modified to increase the productivity of fisheries”, when in fact, the proponent did not present data on productivity of Skeena Estuary habitats for fish at any point in the CEAA process. Without understanding relationships between fish and habitat, the proposed mitigation could actually cause additional damage to fishes of the Skeena River estuary…” British Columbia Institute of Technology 1. Marvin Rosenau, Ph.D., Professor, British Columbia Institute of Technology. 2. Eric M. Anderson, Ph.D., Faculty, British Columbia Institute of Technology. British Columbia Ministry of Environment 1. R. S. Hooton, M.Sc., Former Senior Fisheries Management Authority for British Columbia Ministry of Environment, Skeena Region. California Academy of Sciences 1. John E. McCosker, Ph.D., Chair of Aquatic Biology, Emeritus, California Academy of Sciences. Department of Fisheries and Oceans Canada 1. Otto E. Langer, M.Sc., R.P.Bio., Fisheries Biologist, Former Chief of Habitat Assessment, Department of Fisheries and Oceans Canada Memorial University of Newfoundland 1. Ian A. Fleming, Ph.D., Professor, Memorial University of Newfoundland. 2. Brett Favaro, Ph.D., Liber Ero conservation fellow, Memorial University of Newfoundland. Norwegian Institute for Nature Research 1. Rachel Malison, Ph.D., Marie Curie Fellow and Research Ecologist, The Norwegian Institute for Nature Research. Russian Academy of Science 1. Alexander I. Vedenev, Ph.D., Head of Ocean Noise Laboratory, Russian Academy of Science 2. Victor Afanasiev, Ph.D., Russian Academy of Sciences. Sakhalin Research Institute of Fisheries and Oceanography 1. Alexander Shubin, M.Sc. Fisheries Biologist, Sakhalin Research Institute of Fisheries and Oceanography. Simon Fraser University, BC 1. Jonathan W. Moore, Ph.D., Liber Ero Chair of Coastal Science and Management, Associate Professor, Simon Fraser University. 2. Randall M. Peterman, Ph.D., Professor Emeritus and Former Canada Research Chair in Fisheries Risk Assessment and Management, Simon Fraser University. 3. John D. 
Reynolds, Ph.D., Tom Buell BC Leadership Chair in Salmon Conservation, Professor, Simon Fraser University 4. Richard D. Routledge, Ph.D., Professor, Simon Fraser University. 5. Evelyn Pinkerton, Ph.D., School of Resource and Environmental Management, Professor, Simon Fraser University. 6. Dana Lepofsky, Ph.D., Professor, Simon Fraser University 7. Nicholas Dulvy, Ph.D., Canada Research Chair in Marine Biodiversity and Conservation, Professor, Simon Fraser University. 8. Ken Lertzman, Ph.D., Professor, Simon Fraser University. 9. Isabelle M. Côté, Ph.D., Professor, Simon Fraser University. 10. Brendan Connors, Ph.D., Senior Systems Ecologist, ESSA Technologies Ltd., Adjunct Professor, Simon Fraser University. 11. Lawrence Dill, Ph.D., Professor Emeritus, Simon Fraser University. 12. Patricia Gallaugher, Ph.D., Adjunct Professor, Simon Fraser University. 13. Anne Salomon, Ph.D., Associate Professor, Simon Fraser University. 14. Arne Mooers, Ph.D., Professor, Simon Fraser University. 15. Lynne M. Quarmby, Ph.D., Professor, Simon Fraser University. 16. Wendy J. Palen, Ph.D., Associate Professor, Simon Fraser University. University of Alaska 1. Peter Westley, Ph.D., Assistant Professor of Fisheries, University of Alaska Fairbanks. 2. Anne Beaudreau, Ph.D., Assistant Professor of Fisheries, University of Alaska Fairbanks. 3. Megan V. McPhee, Ph.D., Assistant Professor, University of Alaska Fairbanks. University of Alberta 1. David W. Schindler, Ph.D., Killam Memorial Professor of Ecology Emeritus, University of Alberta. 2. Suzanne Bayley, Ph.D., Emeritus Professor, University of Alberta. University of British Columbia 1. John G. Stockner, Ph.D., Emeritus Senior Scientist DFO, West Vancouver Laboratory, Adjunct Professor, University of British Columbia. 2. Kai M.A. Chan, Ph.D., Canada Research Chair in Biodiversity and Ecosystem Services, Associate Professor, University of British Columbia 3. Hadi Dowlatabadi, Ph.D., Canada Research Chair in Applied Mathematics and Integrated Assessment of Global Change, Professor, University of British Columbia 4. Sarah P. Otto, Ph.D., Professor and Director, Biodiversity Research Centre, University of British Columbia. 5. Michael Doebeli, Ph.D., Professor, University of British Columbia. 6. Charles J. Krebs, Ph.D., Professor, University of British Columbia. 7. Amanda Vincent, Ph.D., Professor, University of British Columbia. 8. Michael Healey, Ph.D., Professor Emeritus, University of British Columbia. University of California (various campuses) 1. Mary E. Power, Ph.D., Professor, University of California, Berkeley 2. Peter B. Moyle, Ph.D., Professor, University of California. 3. Heather Tallis, Ph.D., Chief Scientist, The Nature Conservancy, Adjunct Professor, University of California, Santa Cruz. 4. James A. Estes, Ph.D., Professor, University of California. 5. Eric P. Palkovacs, Ph.D., Assistant Professor, University of California-Santa Cruz. 6. Justin D. Yeakel, Ph.D., Assistant Professor, University of California. 7. John L. Largier, Ph.D., Professor, University of California Davis. University of Montana 1. Jack A. Stanford, Ph.D., Professor of Ecology, University of Montana. 2. Andrew Whiteley, Ph.D., Assistant Professor, University of Montana. 3. F. Richard Hauer, Ph.D., Professor and Director, Center for Integrated Research on the Environment, University of Montana. University of New Brunswick 1. Richard A. Cunjak, Ph.D., Professor, University of New Brunswick. University of Ontario Institute of Technology 1. Douglas A. 
Holdway, Ph.D., Canada Research Chair in Aquatic Toxicology, Professor, University of Ontario Institute of Technology. University of Ottawa 1. Jeremy Kerr, Ph.D., University Research Chair in Macroecology and Conservation, Professor, University of Ottawa. University of Toronto 1. Martin Krkosek, Ph.D., Assistant Professor, University of Toronto. 2. Gail McCabe, Ph.D., University of Toronto. University of Victoria 1. Chris T. Darimont, Ph.D., Associate Professor, University of Victoria 2. John Volpe, Ph.D., Associate Professor, University of Victoria. 3. Aerin Jacob, Ph.D., Postdoctoral Fellow, University of Victoria. 4. Briony E.H. Penn, Ph.D., Adjunct Professor, University of Victoria. 5. Natalie Ban, Ph.D., Assistant Professor, School of Environmental Studies, University of Victoria. 6. Travis G. Gerwing, Ph.D., Postdoctoral Fellow, University of Victoria. 7. Eric Higgs, Ph.D., Professor, University of Victoria. 8. Paul C. Paquet, Ph.D., Senior Scientist, Raincoast Conservation Foundation, Adjunct Professor, University of Victoria. 9. James K. Rowe, Ph.D., Assistant Professor, University of Victoria. University of Washington 1. Charles Simenstad, Ph.D., Professor, University of Washington. 2. Daniel Schindler, Ph.D., Harriet Bullitt Endowed Chair in Conservation, Professor, University of Washington. 3. Julian D. Olden, Ph.D., Associate Professor, University of Washington. 4. P. Sean McDonald, Ph.D., Research Scientist, University of Washington. 5. Tessa Francis, Ph.D., Research Scientist, University of Washington. University of Windsor 1. Hugh MacIsaac, Ph.D., Canada Research Chair, Great Lakes Institute for Environmental Research, Professor, University of Windsor. Photo Credits: 9 of the scientists condemning the CEAA review are professors at the University of Victoria. Photo shows U Vic students listening to a UN official in 2012, by Herb Neufeld via Flickr (CC BY-SA 2.0 License); screen shot from a Liberal campaign video in which Trudeau promised to bring real change to Ottawa; 8 of the scientists condemning the CEAA review are professors at the University of British Columbia. Photo of UBC by abdallahh via Flickr (CC BY-SA 2.0 License); 5 of the scientists condemning the CEAA review are from the University of Washington. Photo is Mary Gates Hall, at the University of Washington, by Nam-ho Park via Flickr (CC BY-SA 2.0 License); 5 of the scientists condemning the CEAA review are from the Skeena Fisheries Commission. Photo is Coast mountains near the mouth of the Skeena River by Roy Luck via Flickr (CC BY-SA 2.0 License); 16 of the scientists condemning the CEAA review were professors at Simon Fraser University. Photo shows SFU's Reflective Pool by Jon the Happy Web Creative via Flickr (CC BY-SA 2.0 License).


News Article | October 28, 2016
Site: www.prfire.com

New Film "Microbirth" Reveals the Microscopic Secrets of Childbirth [http://microbirth.com] – A new documentary "MICROBIRTH" warns that how our children are born could have serious repercussions for their lifelong health. "Microbirth" looks at childbirth in a whole new way: through the lens of a microscope. Featuring Ivy League scientists, the film investigates the latest research that is starting to indicate modern birth practices could be interfering with critical biological processes. This could be making our children more susceptible to disease later in life. Recent population studies have shown that babies born by Caesarean section have approximately a 20% increased risk of developing asthma, a 20% increased risk of developing type 1 diabetes, a 20% increased risk of obesity, and slightly smaller increases in the risk of gastro-intestinal conditions like Crohn's disease or coeliac disease. These conditions are all linked to the immune system. In the film, scientists hypothesise that Caesarean section could be interfering with "the seeding of the baby's microbiome". This is an important microbiological process in which bacteria are transferred from mother to baby in the birth canal. As a consequence, the baby's immune system may not develop to its full potential. Another hypothesis is that the stresses and hormones associated with natural birth could switch on or off certain genes related to the immune system and metabolism. If a baby is born by C-section, this might affect these epigenetic processes. Dr Rodney R Dietert, Professor of Immunotoxicology at Cornell University, says, "Over the past 20-30 years, we've seen dramatic increases in childhood asthma, type 1 diabetes, coeliac disease, childhood obesity. We've also seen increases in Caesarean delivery. Does Caesarean cause these conditions? No. What Caesarean does is not allow the baby to be seeded with the microbes. The immune system doesn't mature. And the metabolism changes. It's the immune dysfunction and the changes in metabolism that we now know contribute to those diseases and conditions." Dr Matthew Hyde, Research Associate of Neonatal Medicine, Imperial College London, says, "We are increasingly seeing a world out there with what is really a public health time-bomb waiting to go off. And the research we are doing suggests it is only going to get worse, generation on generation. So tomorrow's generation really is on the edge of the precipice unless we can begin to do something about it." The film's co-director Toni Harman says, "The very latest scientific research is starting to indicate that the microscopic processes happening during childbirth could be critical for the life-long health of the baby. We are hoping 'Microbirth' raises awareness of the importance of 'seeding the microbiome' for all babies, whether born naturally or by C-section, to give all children the best chance of a healthy life. This could be an exciting opportunity to improve health across populations. And it all starts at birth." "MICROBIRTH" is premiering with hundreds of simultaneous grass-roots public screenings around the world on Saturday 20th September 2014. http://microbirth.com/events – High-res images and academics available for interview upon request. – Short synopsis of "Microbirth": "Microbirth" is a new sixty-minute documentary looking at birth in a whole new way: through the lens of a microscope. Investigating the latest scientific research, the film reveals that how we give birth could impact the lifelong health of our children. 
http://microbirth.com – "Microbirth" is an independent production by Alto Films Ltd. The film has been produced and directed by British filmmaking couple Toni Harman and Alex Wakeford. Their previous film "Freedom For Birth" premiered in over 1,100 public screenings in 50 countries in September 2012. – "Microbirth" will premiere at grass-roots public screenings around the world on Saturday 20th September 2014. The film will then be represented for international broadcast sales as well as being available via online platforms. For a full list of screenings, please visit: http://microbirth.com/events – For more information about the film, please visit http://microbirth.com – "Microbirth" includes the following scientists and academics: RODNEY DIETERT, Professor of Immunotoxicology, Cornell University MARTIN BLASER, Director of the Human Microbiome Program & Professor of Translational Medicine, New York University MARIA GLORIA DOMINGUEZ BELLO, Associate Professor, Department of Medicine, New York University PHILIP STEER, Emeritus Professor of Obstetrics, Imperial College, London NEENA MODI, Professor of Neonatal Medicine, Imperial College, London MATTHEW HYDE, Research Associate in the Section of Neonatal Medicine, Imperial College, London SUE CARTER, Professor, Behavioral Neurobiologist, University of North Carolina, Chapel Hill ALEECA BELL, Assistant Professor, Dept of Women, Children and Family Health Science, University of Illinois at Chicago STEFAN ELBE, Professor of International Relations, University of Sussex and Director of Centre for Global Health Policy ANITA KOZYRSKYJ, Professor, Department of Pediatrics, University of Alberta and Co-Principal Investigator, Synergy in Microbiota Research (SyMBIOTA) JACQUELYN TAYLOR, Associate Professor of Nursing, Yale University HANNAH DAHLEN, Professor of Midwifery, University of Western Sydney LESLEY PAGE, Professor of Midwifery, King's College London and President, Royal College of Midwives
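The "approximately 20% increased risk" figures quoted in the film's publicity are relative risks; what they mean in absolute terms depends on how common each condition is to begin with. The sketch below works through the conversion with an assumed, purely illustrative baseline prevalence; the baseline is not a figure from the film or the studies it cites.

```python
# What a "20% increased risk" means in absolute terms. The baseline
# prevalence below is a made-up illustrative number, not a figure from
# the film or from the studies it cites.
baseline_prevalence = 0.08   # assumed: 8% of children develop the condition
relative_risk = 1.20         # "approximately 20% increased risk"

exposed_prevalence = baseline_prevalence * relative_risk
absolute_increase = exposed_prevalence - baseline_prevalence

print(f"baseline {baseline_prevalence:.1%} -> "
      f"{exposed_prevalence:.1%} with RR 1.2 "
      f"(absolute increase {absolute_increase:.1%})")
# -> baseline 8.0% -> 9.6% with RR 1.2 (absolute increase 1.6%)
```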


News Article | November 17, 2016
Site: www.sciencedaily.com

Researchers at University of California San Diego School of Medicine have shown that ustekinumab, a human antibody used to treat arthritis, significantly induces response and remission in patients with moderate to severe Crohn's disease. Results of the clinical trial will appear in the November 16 issue of the New England Journal of Medicine. "A high percentage of the patients in the study who had not responded to conventional therapies were in clinical remission after only a single dose of intravenous ustekinumab," said William J. Sandborn, MD, professor of medicine at UC San Diego School of Medicine and director of the Inflammatory Bowel Disease Center at UC San Diego Health. "Finding effective new treatment options for this patient population is critical because Crohn's disease can dramatically impact a person's quality of life. Patients suffering from this disease may go to the bathroom up to 20 times a day and experience abdominal pain, ulcers and a reduced appetite." Crohn's disease is a chronic inflammatory disease of the gastrointestinal tract that affects approximately 700,000 people in the United States. It can affect any part of the GI tract but it is more commonly found at the end of the small intestine (the ileum) where it joins the beginning of the large intestine (or colon). Crohn's disease is usually treated with glucocorticoids, immunosuppressants, tumor necrosis factor (TNF) antagonists or integrin inhibitors. "The drawbacks of these therapies include an increased risk of infection and cancer, and limited efficacy," said Sandborn. "Ustekinumab has not been associated with an increased risk of serious adverse events." The rates of remission response in the randomized study at week six among patients receiving intravenous ustekinumab at a dose of either 130 mg or approximately 6 mg per kilogram were significantly higher than the rates among patients receiving a placebo. The study also found subcutaneous (injected) ustekinumab every 8 to 12 weeks maintained remission in patients. "This study indicates that ustekinumab may have a long duration of action, a likelihood that may become better understood in future trials," said Sandborn. "Our current findings offer hope for those suffering from this debilitating gastrointestinal tract disease." The Inflammatory Bowel Disease (IBD) Center at UC San Diego Health is dedicated to diagnosing and treating people with IBD from around the world. The center's leadership in IBD medical research means patient access to clinical trials for the newest therapies and advanced surgical techniques for the treatment of this challenging condition. Care is provided by a multidisciplinary team of specialists in gastroenterology, endoscopy, oncology, surgery, transplantation and radiology. 
Co-authors of the study include: Brian Feagan, Robarts Clinical Trials, Robarts Research Institute, Western University, London; Subrata Ghosh, University of Calgary; Levinus Dieleman, University of Alberta; Stephan Targan, Cedars-Sinai Medical Center; Christopher Gasink, Douglas Jacobstein, Yinghua Lang, Joshua Friedman, Jewel Johanns, Long-Long Gao, Ye Miao, and Omoniyi Adedokun, Janssen Research and Development; Marion Blank, Janssen Scientific Affairs; Bruce Sands and Jean-Frédéric Colombel, Icahn School of Medicine; Seymour Katz, New York University School of Medicine; Stephen Hanauer, Feinberg School of Medicine; Severine Vermeire and Paul Rutgeerts, University Hospitals Leuven; Willem de Villiers, Stellenbosch University; Zsolt Tulassay, Semmelweis University; Ursula Seidler, Hannover Medical School; Bruce Salzberg, Atlanta Gastroenterology Specialists; Pierre Desreumaux, Hopital Claude Huriez; Scott Lee, University of Washington Medical Center; and Edward Loftus, Jr., Mayo Clinic. This study was funded by Janssen Research and Development (NCT01369329, NCT01369342, NCT01369355).


Prado C.M.M.,University of Alberta | Heymsfield S.B.,Pennington Biomedical Research Center
Journal of Parenteral and Enteral Nutrition | Year: 2014

Body composition refers to the amount of fat and lean tissues in our body; it is a science that looks beyond a unit of body weight, accounting for the proportion of different tissues and their relationship to health. Although body weight and body mass index are well-known indexes of health status, most researchers agree that they are rather inaccurate measures, especially for elderly individuals and patients with specific clinical conditions. The emerging use of imaging techniques such as dual-energy x-ray absorptiometry, computerized tomography, magnetic resonance imaging, and ultrasound imaging in the clinical setting has highlighted the importance of lean soft tissue (LST) as an independent predictor of morbidity and mortality. It is clear from emerging studies that body composition health will be vital in treatment decisions, prognostic outcomes, and quality of life in several nonclinical and clinical states. This review explores the methodologies and the emerging value of imaging techniques in the assessment of body composition, focusing on the value of LST to predict nutrition status. © 2014 American Society for Parenteral and Enteral Nutrition.


Kutty S.,University of Nebraska at Omaha | Smallhorn J.F.,University of Alberta
Journal of the American Society of Echocardiography | Year: 2012

Atrioventricular septal defects comprise a disease spectrum characterized by deficient atrioventricular septation, with several common features seen in all affected hearts and variability in atrioventricular valve morphology and interatrial and interventricular communications. Atrioventricular septal defects are among the more common defects encountered by pediatric cardiologists and echocardiographers. Despite advances in understanding, standard two-dimensional echocardiography may not be the optimal method for the morphologic and functional evaluation of this lesion, particularly malformations of the atrioventricular valve(s). In this review, the authors summarize the role of three-dimensional echocardiography in the diagnostic evaluation of atrioventricular septal defects.


Evans J.P.,University of North Carolina at Chapel Hill | Meslin E.M.,Indiana University | Marteau T.M.,King's College London | Caulfield T.,University of Alberta
Science | Year: 2011

Unrealistic expectations and uncritical translation of genetic discoveries may undermine other promising approaches to preventing disease and improving health.


Armstrong P.,University of Alberta | Boden W.,Buffalo General Hospital
Annals of Internal Medicine | Year: 2011

A transformation in ST-segment elevation myocardial infarction (STEMI) care in the United States has unfolded. It asserts the superiority of primary percutaneous coronary intervention (PPCI) over fibrinolysis, on the basis of studies showing PPCI to be the better method for reperfusion of patients with STEMI. Although clear benefit has resulted from national programs directed toward achieving shorter times to PPCI in facilities with around-the-clock access, most patients present to non-PPCI hospitals. Because delay to PPCI for most patients with STEMI presenting to non-PPCI centers remains outside current guidelines, many are denied benefit from pharmacologic therapy. This article describes why this approach creates a treatment paradox in which greater effort to improve PPCI treatment for acute STEMI often leads to unnecessary avoidance of, and delay in, the use of fibrinolysis. Recent evidence confirms the unfavorable consequences of delay to PPCI and shows that early prehospital fibrinolysis combined with strategic mechanical co-interventions affords excellent outcomes. The authors believe it is time to embrace an integrated dual reperfusion strategy to best serve all patients with STEMI. © 2011 American College of Physicians.


Patent
Massachusetts Institute of Technology, President And Fellows Of Harvard College and University of Alberta | Date: 2015-02-06

The invention, in some aspects relates to light-activated ion channel polypeptides and encoding nucleic acids and also relates in part to compositions comprising light-activated ion channel polypeptides and methods using light-activated ion channel polypeptides to alter cell activity and function.


Grant
Agency: Cordis | Branch: FP7 | Program: CP-IP | Phase: KBBE-2007-3-1-03 | Award Amount: 11.21M | Year: 2008

Replacing fossil oil with renewable resources is perhaps the most urgent need and the most challenging task that human society faces today. Cracking fossil hydrocarbons and building the desired chemicals with advanced organic chemistry usually requires many times more energy than is contained in the final product. Thus, using plant material in the chemical industry not only replaces the fossil material contained in the final product but also saves substantial energy in processing. Of particular interest are seed oils, which show great variation in composition between different plant species. Many of the oil qualities found in wild species would be very attractive to the chemical industry if they could be obtained at moderate cost, in bulk quantities and with a secure supply. Genetic engineering of vegetable oil qualities in high-yielding oil crops could yield such products in a relatively short time frame. This project aims at developing such added-value oils in dedicated industrial oil crops, mainly in the form of various wax esters particularly suited for lubrication. The project brings together the most prominent scientists in plant lipid biotechnology in an unprecedented world-wide effort to produce added-value oils in industrial oil crops within a time frame of four years, and to develop a tool box of genes and an understanding of cellular lipid metabolism that allows the rational design of a vast array of industrial oil qualities in oil crops. Since the GM technologies that will be used in the project are met with great scepticism in Europe, it is crucial that ideas, expectations and results are communicated to the public and that methods, ethics, risks and risk assessment are open for debate. The keywords of our communication strategies will be openness and an understanding of public concerns.


Patent
President And Fellows Of Harvard College and University of Alberta | Date: 2015-06-17

Provided herein are variants of an archaerhodopsin useful for application such as optical measurement of membrane potential. The present invention also relates to polynucleotides encoding the variants; nucleic acid constructs, vectors, cells comprising the polynucleotides, and cells comprising the polypeptides; and methods of using the variants.


Patent
University of Alberta and University of Lethbridge | Date: 2015-02-05

The disclosure provides methods for the treatment of skin disorders through the use of minimally invasive terahertz radiation. The method includes exposing skin cells to terahertz radiation in an amount sufficient to modulate gene expression in the skin cells. The modulation of gene expression then results in a reduction of the disease state, or aspects thereof, in the exposed skin cells.


Grant
Agency: Cordis | Branch: FP7 | Program: CP-FP | Phase: KBBE-2009-2-4-01 | Award Amount: 4.05M | Year: 2010

The NanoLyse project will focus on the development of validated methods and reference materials for the analysis of engineered nano-particles (ENP) in food and beverages. The developed methods will cover all relevant classes of ENP with reported or expected food and food contact material applications, i.e. metal, metal oxide/silicate, surface functionalised and organic encapsulate (colloidal/micelle type) ENP. Priority ENPs have been selected out of each class as model particles to demonstrate the applicability of the developed approaches, e.g. nano-silver, nano-silica, an organically surface modified nano-clay and organic nano-encapsulates. Priority will be given to methods which can be implemented in existing food analysis laboratories. A dual approach will be followed. Rapid imaging and screening methods will allow the distinction between samples which contain ENP and those that do not. These methods will be characterised by minimal sample preparation, cost-efficiency, high throughput and will be achieved by the application of automated smart electron microscopy imaging and screening techniques in sensor and immunochemical formats. More sophisticated, hyphenated methods will allow the unambiguous characterisation and quantification of ENP. These will include elaborate sample preparation, separation by flow field fractionation and chromatographic techniques as well as mass spectrometric and electron microscopic characterisation techniques. The developed methods will be validated using the well characterised food matrix reference materials that will be produced within the project. Small-scale interlaboratory method performance studies and the analysis of a few commercially available products claiming or suspect to contain ENP will demonstrate the applicability and soundness of the developed methods.


Grant
Agency: Cordis | Branch: FP7 | Program: CP | Phase: ENERGY.2010.5.2-3 | Award Amount: 5.31M | Year: 2011

CO2CARE aims to support the large scale demonstration of CCS technology by addressing the research requirements of CO2 storage site abandonment. It will deliver technologies and procedures for abandonment and post-closure safety, satisfying the regulatory requirements for transfer of responsibility. The project will focus on three key areas: well abandonment and long-term integrity; reservoir management and prediction from closure to the long-term; risk management methodologies for long-term safety. Objectives will be achieved via integrated laboratory research, field experiments and state-of-the-art numerical modelling, supported by literature review and data from a rich portfolio of real storage sites, covering a wide range of geological and geographical settings. CO2CARE will develop plugging techniques to ensure long-term well integrity; study the factors critical to long-term site safety; develop monitoring methods for leakage detection; investigate and develop remediation technologies. Predictive modelling approaches will be assessed for their ability to help define acceptance criteria. Risk management procedures and tools to assess post-closure system performance will be developed. Integrating these, the technical criteria necessary to assess whether a site meets the high level requirements for transfer of responsibility defined by the EU Directive will be established. The technologies developed will be implemented at the Ketzin site and dry-run applications for site abandonment will be developed for hypothetical closure scenarios at Sleipner and K12-B. Participation of partners from the US, Canada, Japan and Australia and data obtained from current and closed sites will add to the field monitoring database and place the results of CO2CARE in a world-wide perspective. Research findings will be presented as best-practice guidelines. Dissemination strategy will deliver results to a wide range of international stakeholders and the general public.


News Article | September 16, 2016
Site: www.cemag.us

Inspired by the anatomy of insects, an interdisciplinary research team at the University of Alberta has come up with a novel way to quickly and accurately detect dangerous airborne chemicals. The work started with Arindam Phani, a graduate student in U of A’s Department of Chemical and Materials Engineering, who observed that most insects have tiny hairs on their body surfaces, and it is not clear what the hairs are for. Trying to make sense of what these hairs may be capable of, Phani designed experiments involving a “forest” of tiny hairs on a thin vibrating crystal chip, under the guidance of his academic advisor Thomas Thundat, the Canada Research Chair in Oil Sands Molecular Engineering. The two joined forces with Vakhtang Putkaradze, Centennial Professor in the University of Alberta’s Department of Mathematical and Statistical Sciences. The experiments and subsequent theoretical explanation formed the crux of a new study published in Scientific Reports, an online, open access journal from the publishers of Nature. “We wanted to do something that nobody else does,” says Putkaradze, a mathematician who is also a renowned expert in the field of mechanics. “When using resonators as sensors, most people want to get rid of dissipation or friction because it’s considered highly undesirable, it tends to obscure what you are trying to measure. We have taken that undesirable thing and made it useful.” “Sensing chemicals without chemical receptors has been a challenge in normal conditions,” says Thundat, a world-leading expert in the field of sensing. “We realized that there is a wealth of information contained in the frictional loss of a mechanical resonator in motion and is more pronounced at the nanoscale.”
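The idea that the "frictional loss of a mechanical resonator" carries useful information can be illustrated with the standard textbook model of a driven, damped resonator: damping broadens the resonance peak and lowers the quality factor Q = omega0/gamma, so anything that changes dissipation (for example, material interacting with the hair-covered surface) shows up as a measurable change in linewidth. The sketch below is a generic illustration of that relationship, not the Alberta team's analysis, and all numerical values in it are arbitrary assumptions.

```python
import numpy as np

# Generic driven, damped harmonic resonator: amplitude vs. drive frequency.
# Dissipation (the damping rate gamma) sets the linewidth of the resonance
# and the quality factor Q = omega0 / gamma, which is the quantity a
# dissipation-based sensor tracks. All numbers are arbitrary examples.
omega0 = 2 * np.pi * 5e6                             # resonance, assumed 5 MHz
for gamma in (2 * np.pi * 1e3, 2 * np.pi * 5e3):     # two damping levels
    omega = np.linspace(0.999 * omega0, 1.001 * omega0, 200_001)
    amp = 1.0 / np.sqrt((omega0**2 - omega**2)**2 + (gamma * omega)**2)

    # The full width at half maximum of the power response (amp**2) is
    # approximately gamma, so Q can be read off the curve as omega0 / FWHM.
    half_max_power = amp.max()**2 / 2
    band = omega[amp**2 >= half_max_power]
    fwhm = band[-1] - band[0]
    print(f"gamma/2pi = {gamma / (2 * np.pi) / 1e3:.0f} kHz  ->  "
          f"measured Q = {omega0 / fwhm:.0f} (expected ~{omega0 / gamma:.0f})")
```

A resonator whose surface picks up a trace chemical sees its effective damping change; in this simple picture the sensor readout is the shift in Q (or linewidth) rather than the shift in resonance frequency that conventional mass-loading sensors track.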


News Article | February 16, 2017
Site: www.eurekalert.org

(Edmonton, AB) Every day tens of thousands of Canadians unwillingly find themselves becoming shadows of their former selves. They grasp onto moments of clarity--fleeting windows of time--before slipping away again into confusion; robbed of memories, talents and their very personalities. Alzheimer's is a heart-wrenching disease that directly affects half a million Canadians. There is no cure, let alone treatment to stop progression of the disease. While current answers are few, research at the University of Alberta is spearheading the discovery of new potential therapies for the future. A study published in the journal Alzheimer's and Dementia: Translational Research and Clinical Intervention examines if a compound called AC253 can inhibit a "rogue" protein called amyloid. The protein is found in large numbers in the brains of Alzheimer's patients and is suspected to be a key player in the development of the disease. "The way I look at it, it's hard to ignore the biggest player on the stage, which is the amyloid protein. Whatever treatment you develop, it's got to address that player," says Jack Jhamandas, Professor of Neurology in the Faculty of Medicine & Dentistry at the University of Alberta and senior author of the study. "In our previous work we have shown that there are certain drug compounds that can protect nerve cells from amyloid toxicity. One of these is a compound we call AC253. It sounds like an Air Canada flight. I hope this one is on time and takes us to our destination!" The team, comprised of postdoctoral fellows and research associates Rania Soudy, Aarti Patel and Wen Fu, tested AC253 on mice bred by David Westaway (a University of Alberta collaborator) to develop Alzheimer's. Mice were treated with a continuous infusion of AC253 for five months, beginning at three months of age before development of the disease. "We found at eight months, when these mice typically have a lot of amyloid in the brain and have a lot of difficulty in memory and learning tasks, that they actually improved their memory and learning," says Jhamandas, also a member of the U of A's Neuroscience and Mental Health Institute. As part of the study, the team of local and international researchers also developed and tested a more efficient method of getting the compound into the brain. Given an injection three times a week for 10 weeks of AC253 with a slightly modified structure, they again found there was an improvement in memory and learning performance. In addition, the researchers noted there was a lower amount of amyloid in the brains of mice treated with the compound compared to mice that did not get the drug, and that they exhibited reduced inflammation of the brain. The team is now planning additional studies to examine optimal dosage and methods of further improving the compound to increase its effectiveness in the brain. Much more work is needed before the research can move to human trials. Despite the long path still ahead, Jhamandas believes the findings offer both hope and a new way forward to unlock the Alzheimer's enigma. "Alzheimer's is a complex disease. Not for a moment do I believe that the solution is going to be a simple one, but maybe it will be a combination of solutions." "We can't build nursing homes and care facilities fast enough because of an aging population. And that tsunami, the silver tsunami, is coming if not already here," adds Jhamandas. "At a human level, if you can keep someone home instead of institutionalized, even for a year, what does it mean to them? 
It means the world to them and their families."


News Article | November 1, 2016
Site: www.marketwired.com

New director a champion of "entrepreneurs for entrepreneurs" approach to business education EDMONTON, AB--(Marketwired - November 01, 2016) - TEC Edmonton announced today the appointment of Lan Tan as its new Director of Entrepreneur Development. Lan has been with TEC Edmonton for six years on the Business Development team, and brings over 10 years of experience working with entrepreneurs and startup companies to her new role. Coming from an entrepreneurial family, Lan began her career in international business as a liaison between industries importing and exporting to Europe and China. "I believe in a hands-on, roll up your sleeves and work type of approach," says Lan of her new role. "I'm looking forward to developing programs that will impart knowledge and tools for entrepreneurs to utilize." TEC Edmonton's "entrepreneurs for entrepreneurs" approach Lan is facilitating comes from her experience as an entrepreneur herself who learned from watching others. Lan will partner with other Edmonton business service providers to create programs that facilitate formal sharing and learning amongst Edmonton entrepreneurs. "Lan's contagious enthusiasm and passion for helping entrepreneurs succeed has set her apart, and the Entrepreneur Development program will no doubt thrive under her direction," said TEC Edmonton CEO Chris Lumb. Part of Lan's role will also be to oversee the TEC Edmonton accelerator space (opening early 2017) on the main floor of TEC Edmonton's home in Enterprise Square, and to design industry-supported acceleration programs for early-stage technology companies. About TEC Edmonton TEC Edmonton is a business accelerator that helps emerging technology companies grow successfully. As a joint venture of the University of Alberta and Edmonton Economic Development Corporation, TEC Edmonton operates the Edmonton region's largest accelerator for early-stage technology companies, and also manages commercialization of University of Alberta technologies. TEC Edmonton delivers services in four areas: Business Development, Funding and Finance, Technology Management, and Entrepreneur Development. Since 2011, TEC clients have generated $680M in revenue, raised $350M in financing and funding, invested $200M in R&D, grown both revenue and employment by 25 per cent per year and now employ over 2,400 people in the region. In addition, TEC has assisted in the creation of 22 spinoff companies from the University of Alberta in the last four years. TEC Edmonton was named the 4th best university business incubator in North America by the University Business Incubator (UBI) Global Index in 2015, and "Incubator of the Year" by Startup Canada in 2014. For more information, visit www.tecedmonton.com.


TORONTO, ON--(Marketwired - December 06, 2016) - The Canadian Council for Aboriginal Business (CCAB) is pleased to announce: The Lifetime Achievement Award recognizes a First Nations (Status or Non-Status), Inuit, or Métis business person whose community leadership and business success has made a substantive contribution to the economic and social well-being of Aboriginal peoples across Canada. The National Youth Entrepreneur of the Year award recognizes an up-and-coming Aboriginal entrepreneur under the age of 35. Respected Métis entrepreneur Herb Belcourt is the founder of several businesses, including Belcourt Construction, started in 1965 and the third-largest power-line company in Alberta. In 2001, Herb and two fellow co-founders formed the Belcourt Brosseau Métis Awards, a $13-million endowment with a mandate to support Métis students pursuing further education. To date, $17-million is in the endowment, and over 15 years $6-million has been given away to over 1,000 students in over 200 programs in every institution in Alberta. Mr. Belcourt's accolades include an Honorary Doctorate of Laws (University of Alberta, 2001), The Order of Athabasca University (2006), Investiture as a Member of the Order of Canada (2010) and an Honorary Diploma from NorQuest College (2014). Isabell Ringenoldus (First Nations) owns and operates TAWS Security, which provides physical security and mobile patrol security, as well as many technological solutions and value-added services to empower their clients and staff. TAWS Security is based on the Fort McMurray #468 First Nation in Anzac, Alberta. 100% of their ownership and management team are local Fort McMurray residents. Following the unfortunate and devastating event of the Fort McMurray wildfire, TAWS Security was able to showcase its ability and resources to immediately deploy both management and trained guards to the Regional Municipality of Wood Buffalo within hours after the fire started. The Chief Operating Officer of TAWS Security was promptly appointed to the position of Director of Private Security Service. The award ceremonies will take place January 31, 2017, at the CCAB Annual Toronto Gala, Ritz Carlton, 181 Wellington Street West in Toronto, Ontario, from 17:30 to 21:00. About the Canadian Council for Aboriginal Business (CCAB): CCAB is committed to the full participation of Aboriginal people in Canada's economy. A national non-profit, non-partisan association, CCAB offers knowledge, resources, and programs to both mainstream and Aboriginal-owned companies that foster economic opportunities for Aboriginal people and businesses across Canada. About the Aboriginal Lifetime Achievement Award: The Aboriginal Lifetime Achievement Award is part of CCAB's Aboriginal Business Hall of Fame, which recognizes Aboriginal persons whose business leadership has made a substantive contribution to the economic and social well-being of Aboriginal people over a lifetime. The inaugural award was given in 2005 and there have been over 22 laureates since then. About the National Youth Entrepreneur of the Year award: The award is presented annually by the Canadian Council for Aboriginal Business to an Aboriginal entrepreneur under the age of 35. The recipient will receive a $10,000 financial award and be recognized at CCAB's 2017 Toronto Gala. For more information go to: https://www.ccab.com/awards


News Article | February 1, 2016
Site: motherboard.vice.com

“For me, a calorie is a unit of measurement that’s a real pain in the rear.” Bo Nash is 38. He lives in Arlington, Texas, where he’s a technology director for a textbook publisher. And he’s 5’10” and 245 lbs—which means he is classed as obese. In an effort to lose weight, Nash uses an app to record the calories he consumes and a Fitbit band to track the energy he expends. These tools bring an apparent precision: Nash can quantify the calories in each cracker crunched and stair climbed. But when it comes to weight gain, he finds that not all calories are equal. How much weight he gains or loses seems to depend less on the total number of calories, and more on where the calories come from and how he consumes them. The unit, he says, has a “nebulous quality to it”. Tara Haelle is also obese. She had her second son on St Patrick’s Day in 2014, and hasn’t been able to lose the 70 lbs she gained during pregnancy. Haelle is a freelance science journalist, based in Illinois. She understands the science of weight loss, but, like Nash, doesn’t see it translate into practice. “It makes sense from a mathematical and scientific and even visceral level that what you put in and what you take out, measured in the discrete unit of the calorie, should balance,” says Haelle. “But it doesn’t seem to work that way.” Nash and Haelle are in good company: more than two-thirds of American adults are overweight or obese. For many of them, the cure is diet: one in three are attempting to lose weight in this way at any given moment. Yet there is ample evidence that diets rarely lead to sustained weight loss. These are expensive failures. This inability to curb the extraordinary prevalence of obesity costs the United States more than $147 billion in healthcare, as well as $4.3 billion in job absenteeism and yet more in lost productivity. At the heart of this issue is a single unit of measurement—the calorie—and some seemingly straightforward arithmetic. “To lose weight, you must use up more calories than you take in,” according to the Centers for Disease Control and Prevention. Dieters like Nash and Haelle could eat all their meals at McDonald’s and still lose weight, provided they burn enough calories, says Marion Nestle, professor of nutrition, food studies and public health at New York University. “Really, that’s all it takes.” But Nash and Haelle do not find weight control so simple. And part of the problem goes way beyond individual self-control. The numbers logged in Nash’s Fitbit, or printed on the food labels that Haelle reads religiously, are at best good guesses. Worse yet, as scientists are increasingly finding, some of those calorie counts are flat-out wrong—off by more than enough, for instance, to wipe out the calories Haelle burns by running an extra mile on a treadmill. A calorie isn’t just a calorie. And our mistaken faith in the power of this seemingly simple measurement may be hindering the fight against obesity. The process of counting calories begins in an anonymous office block in Maryland. The building is home to the Beltsville Human Nutrition Research Center, a facility run by the US Department of Agriculture. When we visit, the kitchen staff are preparing dinner for people enrolled in a study. Plastic dinner trays are laid out with meatloaf, mashed potatoes, corn, brown bread, a chocolate-chip scone, vanilla yoghurt and a can of tomato juice. 
The staff weigh and bag each item, sometimes adding an extra two-centimetre sliver of bread to ensure a tray’s contents add up to the exact calorie requirements of each participant. “We actually get compliments about the food,” says David Baer, a supervisory research physiologist with the Department. The work that Baer and colleagues do draws on centuries-old techniques. Nestle traces modern attempts to understand food and energy back to a French aristocrat and chemist named Antoine Lavoisier. In the early 1780s, Lavoisier developed a triple-walled metal canister large enough to house a guinea pig. Inside the walls was a layer of ice. Lavoisier knew how much energy was required to melt ice, so he could estimate the heat the animal emitted by measuring the amount of water that dripped from the canister. What Lavoisier didn’t realise—and never had time to find out; he was put to the guillotine during the Revolution—was that measuring the heat emitted by his guinea pigs was a way to estimate the amount of energy they had extracted from the food they were digesting. Until recently, the scientists at Beltsville used what was essentially a scaled-up version of Lavoisier’s canister to estimate the energy used by humans: a small room in which a person could sleep, eat, excrete, and walk on a treadmill, while temperature sensors embedded in the walls measured the heat given off and thus the calories burned. (We now measure this energy in calories. Roughly speaking, one calorie is the heat required to raise the temperature of one kilogram of water by one degree Celsius.) Today, those ‘direct-heat’ calorimeters have largely been replaced by ‘indirect-heat’ systems, in which sensors measure oxygen intake and carbon dioxide exhalations. Scientists know how much energy is used during the metabolic processes that create the carbon dioxide we breathe out, so they can work backwards to deduce that, for example, a human who has exhaled 15 litres of carbon dioxide must have used 94 calories of energy. The facility’s three indirect calorimeters are down the halls from the research kitchen. “They’re basically nothing more than walk-in coolers, modified to allow people to live in here,” physiologist William Rumpler explains as he shows us around. Inside each white room, a single bed is folded up against the wall, alongside a toilet, sink, a small desk and chair, and a short treadmill. A couple of airlocks allow food, urine, faeces and blood samples to be passed back and forth. Apart from these reminders of the room’s purpose, the vinyl-floored, fluorescent-lit units resemble a 1970s dorm room. Rumpler explains that subjects typically spend 24 to 48 hours inside the calorimeter, following a highly structured schedule. A notice pinned to the door outlines the protocol for the latest study: 6:00 to 6:45pm – Dinner; 11:00pm – Latest bedtime, mandatory lights out; 11:00pm to 6:30am – Sleep, remain in bed even if not sleeping. In between meals, blood tests and bowel movements, calorimeter residents are asked to walk on the treadmill at 3 miles per hour for 30 minutes. They fill the rest of the day with what Rumpler calls “low activity”. “We encourage people to bring knitting or books to read,” he says. “If you give people a free hand, you’ll be surprised by what they’ll do inside the chamber.” He tells us that one of his less cooperative subjects smuggled in a bag of M&Ms, and then gave himself away by dropping them on the floor.
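The arithmetic behind indirect calorimetry is simple enough to sketch in a few lines of code. The snippet below is only a back-of-the-envelope illustration built on the ratio quoted above (94 calories per 15 litres of exhaled carbon dioxide); real systems also track oxygen uptake, and the function name is invented for the example.

```python
# Back-of-the-envelope sketch of indirect calorimetry, based on the ratio
# quoted in the article: 94 kcal of energy per 15 litres of exhaled CO2.
# Illustration only; real calorimeters combine oxygen uptake and CO2 output,
# and the exact conversion depends on which fuel the body is burning.

KCAL_PER_LITRE_CO2 = 94 / 15  # about 6.3 kcal per litre, from the article's example


def energy_from_co2(litres_co2: float) -> float:
    """Estimate energy expenditure (kcal) from the volume of exhaled CO2 (litres)."""
    return litres_co2 * KCAL_PER_LITRE_CO2


if __name__ == "__main__":
    for litres in (5, 15, 30):
        print(f"{litres:>4} L CO2  ->  {energy_from_co2(litres):6.1f} kcal")
    # The 15 L case reproduces the article's 94-calorie figure.
```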
Using a bank of screens just outside the rooms, Rumpler can monitor exactly how many calories each subject is burning at any moment. Over the years, he and his colleagues have aggregated these individual results to arrive at numbers for general use: how many calories a 120-lb woman burns while running at 4.0 miles an hour, say, or the calories a sedentary man in his 60s needs to consume every day. It’s the averages derived from thousands of extremely precise measurements that provide the numbers in Bo Nash’s movement tracker and help Tara Haelle set a daily calorie intake target that is based on her height and weight. Measuring the calories in food itself relies on another modification of Lavoisier’s device. In 1848, an Irish chemist called Thomas Andrews realised that he could estimate calorie content by setting food on fire in a chamber and measuring the temperature change in the surrounding water. (Burning food is chemically similar to the ways in which our bodies break food down, despite being much faster and less controlled.) Versions of Andrews’s ‘bomb calorimeter’ are used to measure the calories in food today. At the Beltsville centre, samples of the meatloaf, mashed potatoes and tomato juice have been incinerated in the lab’s bomb calorimeter. “We freeze-dry it, crush into a powder, and fire it,” says Baer. Humans are not bomb calorimeters, of course, and we don’t extract every calorie from the food we eat. This problem was addressed at the end of the 19th century, in one of the more epic experiments in the history of nutrition science. Wilbur Atwater, a Department of Agriculture scientist, began by measuring the calories contained in more than 4,000 foods. Then he fed those foods to volunteers and collected their faeces, which he incinerated in a bomb calorimeter. After subtracting the energy measured in the faeces from that in the food, he arrived at the Atwater values, numbers that represent the available energy in each gram of protein, carbohydrate and fat. These century-old figures remain the basis for today’s standards. When Baer wants to know the calories per gram figure for that night’s meatloaf, he corrects the bomb calorimeter results using Atwater values. This entire enterprise, from the Beltsville facility to the numbers on the packets of the food we buy, creates an aura of scientific precision around the business of counting calories. That precision is illusory. The trouble begins at source, with the lists compiled by Atwater and others. Companies are allowed to incinerate freeze-dried pellets of product in a bomb calorimeter to arrive at calorie counts, though most avoid that hassle, says Marion Nestle. Some use the data developed by Atwater in the late 1800s. But the Food and Drug Administration (FDA) also allows companies to use a modified set of values, published by the Department of Agriculture in 1955, that take into account our ability to digest different foods in different ways. Atwater’s numbers say that Tara Haelle can extract 8.9 calories per gram of fat in a plate of her favourite Tex-Mex refried beans; the modified table shows that, thanks to the indigestibility of some of the plant fibres in legumes, she only gets 8.3 calories per gram. Depending on the calorie-measuring method that a company chooses—the FDA allows two more variations on the theme, for a total of five—a given serving of spaghetti can contain from 200 to 210 calories. These uncertainties can add up. 
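To see how the choice of conversion factors nudges a label number, here is a minimal sketch assuming a made-up serving of refried beans. The general Atwater factors of roughly 4 calories per gram of protein and carbohydrate are standard; the 8.9 and 8.3 calories per gram of fat come from the article's legume example, and the gram amounts and function name are invented for illustration.

```python
# Minimal sketch of how a label calorie count is assembled from per-gram
# energy factors. Protein and carbohydrate are credited at about 4 kcal/g;
# the two fat values (8.9 vs 8.3 kcal/g) are the article's refried-beans
# example of the original versus modified Atwater tables. The serving
# amounts below are invented purely for illustration.

def label_calories(protein_g, carb_g, fat_g,
                   protein_factor=4.0, carb_factor=4.0, fat_factor=9.0):
    """Return an estimated calorie count for one serving."""
    return (protein_g * protein_factor
            + carb_g * carb_factor
            + fat_g * fat_factor)


serving = {"protein_g": 8.0, "carb_g": 30.0, "fat_g": 10.0}  # hypothetical beans

original = label_calories(**serving, fat_factor=8.9)   # original Atwater fat value
modified = label_calories(**serving, fat_factor=8.3)   # 1955 modified value for legumes

print(f"original factors: {original:.0f} kcal")
print(f"modified factors: {modified:.0f} kcal")
print(f"difference:       {original - modified:.0f} kcal per serving")
```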
Haelle and Bo Nash might deny themselves a snack or sweat out another few floors on the StairMaster to make sure they don’t go 100 calories over their daily limit. If the data in their calorie counts is wrong, they can go over regardless. There’s also the issue of serving size. After visiting over 40 US chain restaurants, including Olive Garden, Outback Steak House and PF Chang’s China Bistro, Susan Roberts of Tufts University’s nutrition research centre and colleagues discovered that a dish listed as having, say, 500 calories could contain 800 instead. The difference could easily have been caused, says Roberts, by local chefs heaping on extra french fries or pouring a dollop more sauce. It would be almost impossible for a calorie-counting dieter to accurately estimate their intake given this kind of variation. Even if the calorie counts themselves were accurate, dieters like Haelle and Nash would have to contend with the significant variations between the total calories in the food and the amount our bodies extract. These variations, which scientists have only recently started to understand, go beyond the inaccuracies in the numbers on the back of food packaging. In fact, the new research calls into question the validity of nutrition science’s core belief that a calorie is a calorie. Using the Beltsville facilities, for instance, Baer and his colleagues found that our bodies sometimes extract fewer calories than the number listed on the label. Participants in their studies absorbed around a third fewer calories from almonds than the modified Atwater values suggest. For walnuts, the difference was 21 per cent. This is good news for someone who is counting calories and likes to snack on almonds or walnuts: he or she is absorbing far fewer calories than expected. The difference, Baer suspects, is due to the nuts’ particular structure: “All the nutrients—the fat and the protein and things like that—they’re inside this plant cell wall.” Unless those walls are broken down—by processing, chewing or cooking—some of the calories remain off-limits to the body, and thus are excreted rather than absorbed. Another striking insight came from an attempt to eat like a chimp. In the early 1970s, Richard Wrangham, an anthropologist at Harvard University and author of the book Catching Fire: How cooking made us human, observed wild chimps in Africa. Wrangham attempted to follow the entirely raw diet he saw the animals eating, snacking only on fruit, seeds, leaves, and insects such as termites and army ants. “I discovered that it left me incredibly hungry,” he says. “And then I realised that every human eats their food cooked.” Wrangham and his colleagues have since shown that cooking unlaces microscopic structures that bind energy in foods, reducing the work our gut would otherwise have to do. It effectively outsources digestion to ovens and frying pans. Wrangham found that mice fed raw peanuts, for instance, lost significantly more weight than mice fed the equivalent amount of roasted peanut butter. The same effect holds true for meat: there are many more usable calories in a burger than in steak tartare. Different cooking methods matter, too. In 2015, Sri Lankan scientists discovered that they could more than halve the available calories in rice by adding coconut oil during cooking and then cooling the rice in the refrigerator. Wrangham’s findings have significant consequences for dieters. 
If Nash likes his porterhouse steak bloody, for example, he will likely be consuming several hundred calories less than if he has it well-done. Yet the FDA’s methods for creating a nutrition label do not for the most part account for the differences between raw and cooked food, or pureed versus whole, let alone the structure of plant versus animal cells. A steak is a steak, as far as the FDA is concerned. Industrial food processing, which subjects foods to extremely high temperatures and pressures, might be freeing up even more calories. The food industry, says Wrangham, has been “increasingly turning our food to mush, to the maximum calories you can get out of it. Which, of course, is all very ironic, because in the West there’s tremendous pressure to reduce the number of calories you’re getting out of your food.” He expects to find examples of structural differences that affect caloric availability in many more foods. “I think there is work here for hundreds and probably thousands of nutritionists for years,” he says. There’s also the problem that no two people are identical. Differences in height, body fat, liver size, levels of the stress hormone cortisol, and other factors influence the energy required to maintain the body’s basic functions. Between two people of the same sex, weight and age, this number may differ by up to 600 calories a day—over a quarter of the recommended intake for a moderately active woman. Even something as seemingly insignificant as the time at which we eat may affect how we process energy. In one recent study, researchers found that mice fed a high-fat diet between 9 AM and 5 PM gained 28 percent less weight than mice fed the exact same food across a 24-hour period. The researchers suggested that irregular feedings affect the circadian cycle of the liver and the way it metabolises food, thus influencing overall energy balance. Such differences would not emerge under the feeding schedules in the Beltsville experiments. Until recently, the idea that genetics plays a significant role in obesity had some traction: researchers hypothesised that evolutionary pressures may have favoured genes that predisposed some people to hold on to more calories in the form of added fat. Today, however, most scientists believe we can’t blame DNA for making us overweight. “The prevalence of obesity started to rise quite sharply in the 1980s,” says Nestle. “Genetics did not change in that ten- or twenty-year period. So genetics can only account for part of it.” Instead, researchers are beginning to attribute much of the variation to the trillions of tiny creatures that line the coiled tubes inside our midriffs. The microbes in our intestines digest some of the tough or fibrous matter that our stomachs cannot break down, releasing a flow of additional calories in the process. But different species and strains of microbes vary in how effective they are at releasing those extra calories, as well as how generously they share them with their host human. In 2013, researchers in Jeffrey Gordon’s lab at Washington University tracked down pairs of twins of whom one was obese and one lean. They took gut microbes from each, and inserted them into the intestines of microbe-free mice. Mice that got microbes from an obese twin gained weight; the others remained lean, despite eating the exact same diet. “That was really striking,” said Peter Turnbaugh, who used to work with Gordon and now heads his own lab at the University of California, San Francisco.
“It suggested for the first time that these microbes might actually be contributing to the energy that we gain from our diet.” The diversity of microbes that each of us hosts is as individual as a fingerprint and yet easily transformed by diet and our environment. And though it is poorly understood, new findings about how our gut microbes affect our overall energy balance are emerging almost daily. For example, it seems that medications that are known to cause weight gain might be doing so by modifying the populations of microbes in our gut. In November 2015, researchers showed that risperidone, an antipsychotic drug, altered the gut microbes of mice that received it. The microbial changes slowed the animals’ resting metabolisms, causing them to increase their body mass by 10 per cent in two months. The authors liken the effects to a 30-lb weight gain over one year for an average human, which they say would be the equivalent of an extra cheeseburger every day. Other evidence suggests that gut microbes might affect weight gain in humans as they do in lab animals. Take the case of the woman who gained more than 40 lbs after receiving a transplant of gut microbes from her overweight teenage daughter. The transplant successfully treated the mother’s intestinal infection of Clostridium difficile, which had resisted antibiotics. But, as of the study’s publication last year, she hadn’t been able to shed the excess weight through diet or exercise. The only aspect of her physiology that had changed was her gut microbes. All of these factors introduce a disturbingly large margin of error for an individual who is trying, like Nash, Haelle and millions of others, to count calories. The discrepancies between the number on the label and the calories that are actually available in our food, combined with individual variations in how we metabolise that food, can add up to much more than the 200 calories a day that nutritionists often advise cutting in order to lose weight. Nash and Haelle can do everything right and still not lose weight. None of this means that the calorie is a useless concept. Inaccurate as they are, calorie counts remain a helpful guide to relative energy values: standing burns more calories than sitting; cookies contain more calories than spinach. But the calorie is broken in many ways, and there’s a strong case to be made for moving our food accounting system away from that one particular number. It’s time to take a more holistic look at what we eat. Wilbur Atwater worked in a world with different problems. At the beginning of the 20th century, nutritionists wanted to ensure people were well fed. The calorie was a useful way to quantify a person’s needs. Today, excess weight affects more people than hunger; 1.9 billion adults around the world are considered overweight, 600 million of them obese. Obesity brings with it a higher risk of diabetes, heart disease and cancer. This is a new challenge, and it is likely to require a new metric. One option is to focus on something other than energy intake. Like satiety, for instance. Picture a 300-calorie slice of cheesecake: it is going to be small. “So you’re going to feel very dissatisfied with that meal,” says Susan Roberts. If you eat 300 calories of a chicken salad instead, with nuts, olive oil and roasted vegetables, “you’ve got a lot of different nutrients that are hitting all the signals quite nicely,” she says. “So you’re going to feel full after you’ve eaten it.
That fullness is going to last for several hours.” As a result of her research, Roberts has created a weight-loss plan that focuses on satiety rather than a straight calorie count. The idea is that foods that help people feel satisfied and full for longer should prevent them from overeating at lunch or searching for a snack soon after clearing the table. Whole apples, white fish and Greek yoghurt are on her list of the best foods for keeping hunger at bay. There’s evidence to back up this idea: in one study, Roberts and colleagues found that people lost three times more weight by following her satiety plan compared with a traditional calorie-based one—and kept it off. Harvard nutritionist David Ludwig, who also proposes evaluating food on the basis of satiety instead of calories, has shown that teens given instant oats for breakfast consumed 650 more calories at lunch than their peers who were given the same number of breakfast calories in the form of a more satisfying omelette and fruit. Meanwhile, Adam Drewnowski, an epidemiologist at the University of Washington, has his own calorie upgrade: a nutrient density score. This system ranks food in terms of nutrition per calorie, rather than simply overall caloric value. Dark green vegetables and legumes score highly. Though the details of their approaches differ, all three agree: changing how we measure our food can transform our relationship with it for the better. Individual consumers could start using these ideas now. But persuading the food industry and its watchdogs, such as the FDA, to adopt an entirely new labelling system based on one of these alternative measures is much more of a challenge. Consumers are unlikely to see the calorie replaced by Roberts’s or Drewnowski’s units on their labels any time soon; nonetheless, this work is an important reminder that there are other ways to measure food, ones that might be more useful for both weight loss and overall health. Down the line, another approach might eventually prove even more useful: personalised nutrition. Since 2005, David Wishart of the University of Alberta has been cataloguing the hundreds of thousands of chemical compounds in our bodies, which make up what’s known as the human metabolome. There are now 42,000 chemicals on his list, and many of them help digest the food we eat. His food metabolome database is a more recent effort: it contains about 30,000 chemicals derived directly from food. Wishart estimates that both databases may end up listing more than a million compounds. “Humans eat an incredible variety of foods,” he says. “Then those are all transformed by our body. And they’re turned into all kinds of other compounds.” We have no idea what they all are, he adds—or what they do. According to Wishart, these chemicals and their interactions affect energy balance. He points to research demonstrating that high-fructose corn syrup and other forms of added fructose (as opposed to fructose found in fruit) can trigger the creation of compounds that lead us to form an excess of fat cells, unrelated to additional calorie consumption. “If we cut back on some of these things,” he says, “it seems to revert our body back to more appropriate, arguably less efficient metabolism, so that we aren’t accumulating fat cells in our body.” It increasingly seems that there are significant variations in the way each one of us metabolises food, based on the tens of thousands—perhaps millions—of chemicals that make up each of our metabolomes.
This, in combination with the individuality of each person’s gut microbiome, could lead to the development of personalised dietary recommendations. Wishart imagines a future where you could hold up your smartphone, snap a picture of a dish, and receive a verdict on how that food will affect you as well as how many calories you’ll extract from it. Your partner might receive completely different information from the same dish. Or maybe the focus will shift to tweaking your microbial community: if you’re trying to lose weight, perhaps you will curate your gut microbiome so as to extract fewer calories without harming your overall health. Peter Turnbaugh cautions that the science is not yet able to recommend a particular set of microbes, let alone how best to get them inside your gut, but he takes comfort from the fact that our microbial populations are “very plastic and very malleable”—we already know that they change when we take antibiotics, when we travel and when we eat different foods. “If we’re able to figure this out,” he says, “there is the chance that someday you might be able to tailor your microbiome” to get the outcomes you want. None of these alternatives is ready to replace the calorie tomorrow. Yet the need for a new system of food accounting is clear. Just ask Haelle. “I’m kind of pissed at the scientific community for not coming up with something better for us,” she confesses, recalling a recent meltdown at TGI Friday’s as she navigated a confusing datasheet to find a low-calorie dish she could eat. There should be a better metric for people like her and Nash—people who know the health risks that come with being overweight and work hard to counter them. And it’s likely there will be. Science has already shown that the calorie is broken. Now it has to find a replacement. This story originally appeared on Mosaic with the headline, "Why the calorie is broken." It is published under a CC BY 4.0 license.


News Article | February 15, 2017
Site: www.eurekalert.org

Imagine patterning and visualizing silicon at the atomic level, something that, if done successfully, would revolutionize the quantum and classical computing industry. A team of scientists in Edmonton, Canada, has done just that, led by a world-renowned physicist and his up-and-coming protégé. University of Alberta PhD student Taleana Huff teamed up with her supervisor Robert Wolkow to harness a technique called atomic force microscopy--or AFM--to pattern and image electronic circuits at the atomic level. This is the first time the powerful technique has been applied to atom-scale fabrication and imaging of a silicon surface, notoriously difficult because the act of applying the technique risks damaging the silicon. However, the reward is worth the risk, because this level of control could spark a revolution in the technology industry. "It's kind of like braille," explains Huff. "You bring the atomically sharp tip really close to the sample surface to simply feel the atoms by using the forces that naturally exist among all materials." One of the problems with working at the atomic scale is the risk of perturbing the thing you are measuring by the act of measuring it. Huff, Wolkow, and their research collaborators have largely overcome those problems and as a result can now build by moving individual atoms around: most importantly, those atomically defined structures result in a new level of control over single electrons. This is the first time that the powerful AFM technique has been shown to see not only the silicon atoms but also the electronic bonds between those atoms. Central to the technique is a powerful new computational approach that analyzes and verifies the identity of the atoms and bonds seen in the images. "We couldn't have performed these new and demanding computations without the support of Compute Canada. This combined computation and measurement approach succeeds in creating a foundation for a whole new generation of both classical and quantum computing architectures," says Wolkow. He has his long-term sights set on making ultra-fast and ultra-low-power silicon-based circuits, potentially consuming one ten-thousandth of the power of circuits currently on the market. "Imagine instead of your phone battery lasting a day that it could last weeks at a time, because you're only using a couple of electrons per computational pattern," says Huff, who explains that the precision of the work will allow the group and potential industry investors to geometrically pattern atoms to make just about any kind of logic structure imaginable. This hands-on work was exactly what drew the self-described Canadian-by-birth American-by-personality to condensed matter physics in the University of Alberta's Faculty of Science. Following undergraduate work in astrophysics--and an internship at NASA--Huff felt the urge to get more tangible with her graduate work. (With hobbies that include power lifting and motorcycle restoration, she comes by the desire for tangibility quite honestly.) "I wanted something that I could touch, something that was going to be a physical product I could work with right away," says Huff. And in terms of who she wanted to work with, she went straight to the top, seeking out Wolkow, renowned the world over for his work on quantum dots, dangling bonds, and industry-pushing atomic-scale science. "He just has such passion and conviction for what he does," she continues. "With Bob, it's like, 'we're going to change the world.' I find that really inspiring," says Huff.
"Taleana has the passion and the drive to get very challenging things done. She now has understanding and skills that are truly unique in the world giving us a great advantage in the field," says Wolkow. "We just need to work on her taste in music," he adds with a laugh. The group's latest research findings, "Possible observation of chemical bond contrast in AFM images of a hydrogen terminated silicon surface" were published in the February 13, 2017 issue of Nature Communications.


News Article | November 22, 2016
Site: www.npr.org

The Standing Rock Resistance Is Unprecedented (It's Also Centuries Old) As resistance to the Dakota Access Pipeline in Standing Rock, N.D., concludes its seventh month, two narratives have emerged: that the resistance is unprecedented, and that it is part of a centuries-old struggle. Both are true. The scope of the resistance at Standing Rock exceeds just about every protest in Native American history. But that history itself, of indigenous people fighting to protect not just their land, but the land, is centuries old. Over the weekend, the situation at Standing Rock grew more contentious. On Sunday night, Morton County police sprayed the crowd of about 400 people with tear gas and water as temperatures dipped below freezing. But the resistance, an offspring of history, continues. Through the years, details of such protests change — sometimes the foe is the U.S. government; sometimes a large corporation; sometimes, as in the case of the pipeline, a combination of the two. Still, the broad strokes of each land infringement and each resistance stay essentially the same. In that tradition, the tribes gathered at Standing Rock today are trying to stop a natural gas pipeline operator from bulldozing what they say are sacred sites to construct an 1,172-mile oil pipeline. The tribes also want to protect the Missouri River, the primary water source for the Standing Rock Reservation, from a potential pipeline leak. (Energy Transfer Partners, which is building the pipeline, says on its website that it emphasizes safety and that, "in many instances we exceed government safety standards to ensure a long-term, safe and reliable pipeline.") Since April, when citizens of the Standing Rock Sioux Nation set up the Sacred Stone Camp, thousands of people have passed through and pledged support. Environmentalists and activist groups like Black Lives Matter and Code Pink have also stepped in as allies. Many people who have visited say that the camp is beyond anything they've ever experienced. "It's historic, really. I don't think anything like this has ever happened in documented history," said Ruth Hopkins, a reporter from Indian Country Today. But there are historical preludes, and you don't have to look too far back to find them. In 2015, when the Keystone XL pipeline was being debated, numerous Native American tribes and the Indigenous Environmental Network organized against it. The pipeline would have stretched 1,179 miles from Canada to the Gulf of Mexico. The Rosebud Sioux, a tribe in South Dakota, called the proposed pipeline an "act of war" and set up an encampment where the pipeline was to be constructed. Also joining in were the Environmental Protection Agency, the Natural Resources Defense Council, and the Omaha, Dene, Ho-Chunk, and Creek Nations, whose lands the pipeline would have traversed. President Obama vetoed Keystone XL. But even at the time, A. Gay Kingman, the executive director of the Great Plains Tribal Chairman's Association, warned that the reprieve would be temporary. "Wopila [thank you] to all our relatives who stood strong to oppose the KXL," Kingman said in a statement after the veto. "But keep the coalitions together, because there are more pipelines proposed, and we must protect our Mother Earth for our future generations." In the case of the Dakota Access Pipeline, the Standing Rock Sioux have been able to attract support from hundreds of tribes all over the country, not just in places that would be directly affected. The tribes aren't just leaning on long-held beliefs about the importance of the natural world. They're also using long-held resistance strategies.
Like the encampment itself. "If you don't know very much about Native American people, you wouldn't understand that this is something that's kind of natural to us," said Hopkins, who is enrolled in the Sisseton Wahpeton Oyate Nation and was born on the Standing Rock Reservation. "When we have ceremonies, we do camps like this. It's something that we've always known how to do, going back to pre-colonial times." In the late 1800s, more than 10,000 members of the Lakota Sioux, Cheyenne and Arapaho tribes set up camp to resist the U.S. Army's attempt to displace them in search of gold. That camp took form at the Little Bighorn River in Montana. After the soldiers attacked the camp in June of 1876, the Battle of the Little Bighorn, widely known as (Gen. George) Custer's Last Stand, erupted. In defeating the Army, the tribes won a huge land rights victory for Native Americans. There was also Wounded Knee, a protest that was part of the American Indian Movement. During the 1973 demonstration, about 200 people occupied the town of Wounded Knee on the Pine Ridge Reservation in South Dakota — the site of an 1890 massacre in which U.S. soldiers killed hundreds of Native Americans. Protesters turned Wounded Knee into what one former AIM leader called "an armed camp" in order to protest corruption in tribal leadership and draw attention to the U.S. government's failure to honor treaties. Over the course of the 1973 occupation, two Sioux men were killed and hundreds more arrested. But the resistance, which lasted 71 days, underscored Native American civil rights issues in a way that many see reflected today in Standing Rock. If Native American resistance is an old story, that's because the systemic violation of indigenous land rights is an old story. And if history is any precedent, the resistance won't end at Standing Rock. "There are no rights being violated here that haven't been violated before," said Kim Tallbear, a professor of Native Studies at the University of Alberta, who for years worked on tribal issues as an environmental planner for the U.S. Environmental Protection Agency and the Department of Energy. Those violations, she said, have taken two forms: long-term disregard for indigenous land rights and a "bureaucratic disregard for consultation with indigenous people." When she sees images of police using pepper spray and water cannons or security guards unleashing dogs on Standing Rock protesters, Tallbear said, she isn't shocked. "I'm, like, oh yeah, they did that in the 19th century, they did that in the 16th century," she said. "This is not new. ... The contemporary tactics used against indigenous people might look a little bit more complex or savvy, but to me, I can read it all as part of a longstanding colonial project." "Maybe for non-Natives who thought that the West was won, and the Indian Wars were over, and Native people were mostly dead and gone and isn't that too bad – now, they're like, 'Oh wait a minute, they're still there? And they're still fighting the same things they were 150 years ago?'"


News Article | November 29, 2016
Site: www.eurekalert.org

Heart medication taken in combination with chemotherapy reduces the risk of serious cardiovascular damage in patients with early-stage breast cancer, according to results from a new landmark clinical trial. Existing research has shown some cancer therapies such as Herceptin greatly improve survival rates for early-stage breast cancer, but come with a fivefold increase in the risk of heart failure -- a devastating condition as life-threatening as the cancer itself. A new five-year study, led by researchers at the University of Alberta and Alberta Health Services and funded by the Canadian Institutes of Health Research (CIHR) and Alberta Cancer Foundation, shows that two kinds of heart medications, beta blockers and ACE inhibitors, effectively prevent a drop in heart function from cancer treatment. "We think this is practice-changing," said Edith Pituskin, co-investigator of the MANTICORE trial. "This will improve the safety of the cancer treatment that we provide." Pituskin, an assistant professor in the Faculty of Nursing and Faculty of Medicine & Dentistry at the U of A, published the team's findings Nov. 28 in the Journal of Clinical Oncology. In the double-blind trial, 100 patients from Alberta and Manitoba with early-stage breast cancer were randomly assigned to receive a beta blocker, an ACE inhibitor or a placebo for one year. Beta blockers and ACE inhibitors are drugs used to treat several conditions, including heart failure. Cardiac MRI images taken over a two-year period showed that patients who received the beta blockers showed fewer signs of heart weakening than the placebo group. The ACE inhibitor drug also had heart protection effects. Study lead Ian Paterson, a cardiologist at the Mazankowski Alberta Heart Institute and associate professor with the U of A's Department of Medicine, said these medications not only safeguard against damage to the heart, but may improve breast cancer survival rates by limiting interruptions to chemotherapy treatment. Any time a patient shows signs of heart weakening, he said, chemotherapy is stopped immediately, sometimes for a month or two months until heart function returns to normal. "We are aiming for two outcomes for these patients--we're hoping to prevent heart failure and we're hoping for them to receive all the chemotherapy that they are meant to get, when they are supposed to get it--to improve their odds of remission and survival." Patients with heart failure often experience fatigue, shortness of breath or even death, making it "an equally devastating disease with worse prognosis than breast cancer," Paterson said. Brenda Skanes has a history of cardiovascular problems in her family--her mom died of a stroke and her dad had a heart attack. She was eager to join the trial, both for her own health and the health of other breast cancer survivors. "I met survivors through my journey who experienced heart complications caused by Herceptin. If they had access to this, maybe they wouldn't have those conditions now," she said. "Me participating, it's for the other survivors who are just going into treatment." With two daughters of her own and a mother who lost her fight with colon cancer, study participant Debbie Cameron says she'd do anything to help prevent others from going through similar upheaval. "My daughters are always in the back of my mind and the what ifs--if they're diagnosed, what would make their treatment safer, better," Cameron said.
"Anything I could do to make this easier for anybody else or give some insight to treatment down the road was, to me, a very easy decision." Pituskin said the study team, which also includes collaborators from the AHS Clinical Trials Unit at the Cross Cancer Institute and the University of Manitoba, represents a strong mix of research disciplines, particularly the oncology and cardiology groups. She said the results would not have been possible without funding support from CIHR and the Alberta Cancer Foundation. "Local people in Alberta supported a study that not only Albertans benefited from, but will change, again, the way care is delivered around the world." The results are expected to have a direct impact on clinical practice guidelines in Canada and beyond. "Every day in Canada, around 68 women are diagnosed with breast cancer. This discovery holds real promise for improving these women's quality of life and health outcomes," said Stephen Robbins, scientific director of CIHR's Cancer Research Institute. "We couldn't be more pleased with this return on our investment," said Myka Osinchuk, CEO of the Alberta Cancer Foundation. "This clinical research will improve treatment and make life better not only for Albertans facing cancer, but also for those around the world." Paterson said the research team is also investigating how to prevent heart complications in patients with other cancers, noting several other therapies have been linked to heart complications.


News Article | November 16, 2016
Site: www.marketwired.com

EDMONTON, AB--(Marketwired - November 16, 2016) - TEC Edmonton announced today that its annual VenturePrize competition is now open. This year marks the 15th anniversary of VenturePrize, Alberta's premier business plan competition. TEC Edmonton invites companies province-wide to submit business plans in Health, Fast Growth, Student, and Information & Communications Technology streams. "This is a very exciting year for VenturePrize," says TEC Edmonton CEO Chris Lumb. "We look forward to offering the best of TEC Edmonton's network of expertise to help Alberta companies achieve their goals." More than a competition, the road to the VenturePrize finals includes educational opportunities spread out over several months, involving a seminar series and personalized business coaching. The month-long seminar series covers a wide variety of business, marketing and legal topics designed to help participants perfect their business plans and hone their pitching skills. Companies that are paired with mentors will also receive personalized expertise from seasoned entrepreneurs. Interested companies can register for VenturePrize here. Past VenturePrize finalists and winners include Fitset, MagnetTx Oncology Solutions, Pogo CarShare, and Localize. TEC Edmonton is a business accelerator that helps emerging technology companies grow successfully. As a joint venture of the University of Alberta and Edmonton Economic Development Corporation, TEC Edmonton operates the Edmonton region's largest accelerator for early-stage technology companies, and also manages commercialization of University of Alberta technologies. TEC Edmonton delivers services in four areas: Business Development, Funding and Finance, Technology Management, and Entrepreneur Development. Since 2011, TEC clients have generated $680M in revenue, raised $350M in financing and funding, invested $200M in R&D, grown both revenue and employment by 25 per cent per year and now employ over 2,400 people in the region. In addition, TEC has assisted in the creation of 22 spinoff companies from the University of Alberta in the last four years. TEC Edmonton was named the 4th best university business incubator in North America by the University Business Incubator (UBI) Global Index in 2015, and "Incubator of the Year" by Startup Canada in 2014. For more information, visit www.tecedmonton.com.


News Article | November 30, 2016
Site: www.eurekalert.org

Montreal, November 30, 2016 -- For some, the start of December marks the beginning of the most wonderful time of the year. But for most university students, the coming weeks mean final exams, mounting stress and negative moods. While that doesn't seem like an ideal combination for great grades, new research from Concordia University in Montreal shows that the occasional bout of bad feelings can actually improve students' academic success. A study published in Developmental Psychology by Erin Barker, professor of psychology in Concordia's Faculty of Arts and Science, shows that students who were mostly happy during their four years of university but who also experienced occasional negative moods had the highest GPAs at the time of graduation. In contrast, the study also confirmed that students who experienced high levels of negative moods and low levels of positive moods often ended up with the lowest GPAs -- a pattern consistent with depressive disorders. "Students often report feeling overwhelmed and experiencing high levels of anxiety and depressive symptoms," says Barker, who is also a member of the Centre for Research in Human Development. "This study shows that we need to teach them strategies to both manage negative emotions and stress in productive ways, and to maintain positive emotional experiences." For the study, Barker and her co-authors* worked with 187 first-year students at a large university. The researchers tracked the students throughout their four years of schooling by having them complete questionnaires about recent emotional experiences each year, beginning in the first year and continuing throughout their undergraduate degree. "We looked at students' response patterns to better understand how experiences of positive and negative emotions occurred over time. We then combined average patterns to look at how each person varied from their own average and examined different combinations of trait and state affects together," Barker explains. "This allowed us to identify the pattern associated with the greatest academic success: those who were happy for the most part, but who also showed bouts of elevated negative moods." These findings demonstrate that both negative and positive emotions play a role in our successes. "We often think that feeling bad is bad for us. But if you're generally a happy person, negative emotions can be motivating. They can signal to you that there is a challenge that you need to face. Happy people usually have coping resources and support that they draw on to meet that challenge." In January, Barker and psychology graduate students Sarah Newcomb-Anjo and Kate Mulvihill will expand on this research by launching a new study focused on life beyond graduation. Their plan: examine patterns of emotional experience and well-being as former students navigate new challenges associated with finding work or entering a post-graduation program. *Partners in research: This study was co-authored by Carsten Wrosch, professor of psychology in Concordia's Faculty of Arts and Science, Andrea L. Howard from Carleton University and Nancy L. Galambos from the University of Alberta. The research was funded by a grant awarded to N. Galambos by the Social Sciences and Humanities Research Council of Canada.
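The trait-versus-state distinction Barker describes can be made concrete with a small data-wrangling step: a student's "trait" affect is their own average across the annual surveys, and each year's "state" affect is that year's deviation from the personal average. The sketch below shows that person-mean centring on invented numbers; it illustrates the idea only and is not the authors' actual dataset or statistical model.

```python
# Illustration of separating "trait" affect (a student's own average across
# years) from "state" affect (each year's deviation from that average).
# The data frame is invented; this is only the person-mean centring idea,
# not the study's data or analysis.

import pandas as pd

df = pd.DataFrame({
    "student": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "year":    [1, 2, 3, 4, 1, 2, 3, 4],
    # self-reported negative affect on some survey scale (made up)
    "neg_affect": [1.5, 2.0, 3.5, 1.8, 3.0, 3.2, 3.4, 3.6],
})

# Trait affect: each student's mean across all four years.
df["trait_neg"] = df.groupby("student")["neg_affect"].transform("mean")

# State affect: how far a given year sits above or below that personal mean.
df["state_neg"] = df["neg_affect"] - df["trait_neg"]

print(df)
# Student A looks like a generally happier student with one elevated bout of
# negative affect (year 3, state_neg = +1.3); student B is persistently high.
```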


News Article | February 16, 2017
Site: www.rdmag.com

A new diagnostic method has correctly predicted autism in 80 percent of high-risk infants, according to a new study. Researchers at the University of North Carolina have developed a method using magnetic resonance imaging (MRI) in infants with older siblings with autism to correctly predict whether infants would later meet the criteria for autism at two years old. “Our study shows that early brain development biomarkers could be very useful in identifying babies at the highest risk for autism before behavioral symptoms emerge,” Dr. Joseph Piven, the Thomas E. Castelloe Distinguished Professor of Psychiatry at UNC and senior author of the paper, said in a statement. “Typically, the earliest an autism diagnosis can be made is between ages two and three. But for babies with older autistic siblings, our imaging approach may help predict during the first year of life which babies are most likely to receive an autism diagnosis at 24 months.” It is estimated that one out of every 68 children develops Autism Spectrum Disorder (ASD) in the U.S. Patients have characteristic social deficits and demonstrate a range of ritualistic, repetitive and stereotyped behaviors. Despite extensive research, it has been impossible to identify those at ultra-high risk for autism prior to two years old, which is the earliest time when the hallmark behavioral characteristics of ASD can be observed and a diagnosis made in most children. In the study, the researchers conducted MRI scans of infants at six, 12 and 24 months old. The researchers found that the babies who developed autism experienced a hyper-expansion of brain surface area from six to 12 months, as compared to babies who had an older sibling with autism but did not themselves show evidence of the condition at 24 months of age. They also found that increased growth rate of surface area in the first year of life was linked to increased growth rate of overall brain volume in the second year, which is tied to the emergence of autistic social deficits in the second year. The next step was to take the data—MRIs of brain volume, surface area and cortical thickness at six and 12 months of age, plus the sex of the infants—and use a computer program to identify a way to classify babies most likely to meet criteria for autism at two years old. The computer program developed an algorithm that the researchers applied to a separate set of study participants. The researchers concluded that brain differences at six and 12 months in infants with older siblings with autism correctly predicted eight of 10 infants who would later meet criteria for autism at two years old in comparison to those with older ASD siblings who did not meet the criteria at two years old. “This means we potentially can identify infants who will later develop autism, before the symptoms of autism begin to consolidate into a diagnosis,” Piven said. Such a test could be helpful to parents who have a child with autism and then have a second child, making it possible to intervene ‘pre-symptomatically’ before the emergence of the defining symptoms of autism. Researchers could then begin to examine the effect of interventions on children during a period before the syndrome is present and when the brain is most malleable. “Putting this into the larger context of neuroscience research and treatment, there is currently a big push within the field of neurodegenerative diseases to be able to detect the biomarkers of these conditions before patients are diagnosed, at a time when preventive efforts are possible,” Piven said.
“In Parkinson’s for instance, we know that once a person is diagnosed, they’ve already lost a substantial portion of the dopamine receptors in their brain, making treatment less effective.” The research, which was led by researchers at the Carolina Institute for Developmental Disabilities (CIDD) at the University of North Carolina, where Piven is director, included hundreds of children from across the country. The project’s other clinical sites included the University of Washington, Washington University in St. Louis and The Children’s Hospital of Philadelphia. Other key collaborators are McGill University, the University of Alberta, the University of Minnesota, the College of Charleston and New York University. “This study could not have been completed without a major commitment from these families, many of whom flew in to be part of this,” first author Heather Hazlett, Ph.D., assistant professor of psychiatry at the UNC School of Medicine and a CIDD researcher, said in a statement.


News Article | November 2, 2016
Site: www.eurekalert.org

(Edmonton) A new discovery from University of Alberta scientists represents an important milestone in the fight against thyroid cancer. In a study published in EBioMedicine and recently presented at the American Thyroid Association annual meeting, the team has identified a marker of aggressive disease for papillary thyroid cancer, which comprises about 90 per cent of all thyroid cancers. The marker--a protein known as Platelet Derived Growth Factor Receptor Alpha, or PDGFRA--could also be used as a therapeutic target for future treatments. Todd McMullen, senior author and associate professor of surgery with the U of A's Faculty of Medicine & Dentistry, believes the findings will have a significant clinical impact. "The big problem for individual patients and physicians is knowing if the patient has the disease that is easy to treat or if they have a more aggressive variant. A lot of patients get over-treated simply because we don't want to miss the one case in five that may spread to other sites," says McMullen. "The only way to be sure it doesn't spread is to undertake a larger surgery which can have lifelong consequences. Most of these patients are young. They have children. The majority tend to opt for the surgery because until now we haven't had another tool to help them know when it is needed." Each year approximately 6,300 Canadians will be diagnosed with thyroid cancer. More than three quarters of those patients are women. Treatments for the disease include radioactive iodine therapy and surgery. Those who opt for aggressive surgery can see their speech affected, have trouble eating, swallowing and even breathing as a result. "We came up with a tool to identify aggressive tumours so that people can have just the right amount of surgery. No more, no less," says McMullen. "What we're really excited about is that this is both a diagnostic tool and a therapy. It can be used to do both. We've identified the mechanism of how this protein actually drives metastasis in thyroid cancer. And not only that, we found out that it also makes the cancer resistant to radioactive iodine therapy." McMullen says that by identifying the mechanism, the team is able to predict which people will have recurrent disease and which patients will respond to radioactive iodine therapy--both tools that are currently lacking in the medical community. The foundation of the work stems from previous efforts in which McMullen's team examined thyroid cancer patient specimens. In a study published in 2012 they looked at genetic signatures showing which patients experienced metastasis and which patients did not. Through their efforts at that time they discovered PDGFRA was linked to metastatic disease. According to McMullen, this latest research significantly advances that work. In the very near future the team hopes to begin two separate clinical trials. The first will investigate a new way to treat thyroid cancers using a cancer drug that specifically targets PDGFRA. The second will work on a new diagnostic tool to give patients an early indicator of whether their thyroid cancer will be aggressive or not. "We hope within the next 18 months that we can prove the utility of this approach and change the way thyroid cancers are managed for those patients that have the worst disease," says McMullen. "We were lucky enough to find something that we think is important for thyroid cancer. It will be put to the test now."


News Article | February 15, 2017
Site: www.eurekalert.org

This first-of-its-kind study used MRIs to image the brains of infants, and then researchers used brain measurements and a computer algorithm to accurately predict autism before symptoms set in CHAPEL HILL, NC - Using magnetic resonance imaging (MRI) in infants with older siblings with autism, researchers from around the country were able to correctly predict 80 percent of those infants who would later meet criteria for autism at two years of age. The study, published today in Nature, is the first to show it is possible to identify which infants - among those with older siblings with autism - will be diagnosed with autism at 24 months of age. "Our study shows that early brain development biomarkers could be very useful in identifying babies at the highest risk for autism before behavioral symptoms emerge," said senior author Joseph Piven, MD, the Thomas E. Castelloe Distinguished Professor of Psychiatry at the University of North Carolina-Chapel Hill. "Typically, the earliest an autism diagnosis can be made is between ages two and three. But for babies with older autistic siblings, our imaging approach may help predict during the first year of life which babies are most likely to receive an autism diagnosis at 24 months." This research project included hundreds of children from across the country and was led by researchers at the Carolina Institute for Developmental Disabilities (CIDD) at the University of North Carolina, where Piven is director. The project's other clinical sites included the University of Washington, Washington University in St. Louis, and The Children's Hospital of Philadelphia. Other key collaborators are McGill University, the University of Alberta, the University of Minnesota, the College of Charleston, and New York University. "This study could not have been completed without a major commitment from these families, many of whom flew in to be part of this," said first author Heather Hazlett, PhD, assistant professor of psychiatry at the UNC School of Medicine and a CIDD researcher. "We are still enrolling families for this study, and we hope to begin work on a similar project to replicate our findings." People with Autism Spectrum Disorder (or ASD) have characteristic social deficits and demonstrate a range of ritualistic, repetitive and stereotyped behaviors. It is estimated that one out of 68 children develop autism in the United States. For infants with older siblings with autism, the risk may be as high as 20 out of every 100 births. There are about 3 million people with autism in the United States and tens of millions around the world. Despite much research, it has been impossible to identify those at ultra-high risk for autism prior to 24 months of age, which is the earliest time when the hallmark behavioral characteristics of ASD can be observed and a diagnosis made in most children. For this Nature study, Piven, Hazlett, and researchers from around the country conducted MRI scans of infants at six, 12, and 24 months of age. They found that the babies who developed autism experienced a hyper-expansion of brain surface area from six to 12 months, as compared to babies who had an older sibling with autism but did not themselves show evidence of the condition at 24 months of age. Increased growth rate of surface area in the first year of life was linked to increased growth rate of overall brain volume in the second year of life. Brain overgrowth was tied to the emergence of autistic social deficits in the second year. 
Previous behavioral studies of infants who later developed autism - who had older siblings with autism -revealed that social behaviors typical of autism emerge during the second year of life. The researchers then took these data - MRIs of brain volume, surface area, cortical thickness at 6 and 12 months of age, and sex of the infants - and used a computer program to identify a way to classify babies most likely to meet criteria for autism at 24 months of age. The computer program developed the best algorithm to accomplish this, and the researchers applied the algorithm to a separate set of study participants. The researchers found that brain differences at 6 and 12 months of age in infants with older siblings with autism correctly predicted eight out of ten infants who would later meet criteria for autism at 24 months of age in comparison to those infants with older ASD siblings who did not meet criteria for autism at 24 months. "This means we potentially can identify infants who will later develop autism, before the symptoms of autism begin to consolidate into a diagnosis," Piven said. If parents have a child with autism and then have a second child, such a test might be clinically useful in identifying infants at highest risk for developing this condition. The idea would be to then intervene 'pre-symptomatically' before the emergence of the defining symptoms of autism. Research could then begin to examine the effect of interventions on children during a period before the syndrome is present and when the brain is most malleable. Such interventions may have a greater chance of improving outcomes than treatments started after diagnosis. "Putting this into the larger context of neuroscience research and treatment, there is currently a big push within the field of neurodegenerative diseases to be able to detect the biomarkers of these conditions before patients are diagnosed, at a time when preventive efforts are possible," Piven said. "In Parkinson's for instance, we know that once a person is diagnosed, they've already lost a substantial portion of the dopamine receptors in their brain, making treatment less effective." Piven said the idea with autism is similar; once autism is diagnosed at age 2-3 years, the brain has already begun to change substantially. "We haven't had a way to detect the biomarkers of autism before the condition sets in and symptoms develop," he said. "Now we have very promising leads that suggest this may in fact be possible." For this research, NIH funding was provided by the Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD), the National Institute of Mental Health (NIMH), and the National Institute of Biomedical Imaging and Bioengineering. Autism Speaks and the Simons Foundation contributed additional support.
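As a rough illustration of the kind of classification workflow the article describes (train on brain measurements from one group of infants, then evaluate on a held-out group), here is a minimal sketch in Python using scikit-learn. The study's actual model, features and data are not reproduced here; every variable name and number below is a hypothetical placeholder.

```python
# Illustrative sketch only -- not the study's actual model or data.
# It mimics the general workflow described above: take surface-area,
# cortical-thickness and volume measurements at 6 and 12 months plus sex,
# train a classifier on one group of infants, then evaluate it on a
# held-out group. All names and numbers here are hypothetical.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)

# Hypothetical feature matrix: [surface_area_6m, surface_area_12m,
# cortical_thickness_6m, cortical_thickness_12m, brain_volume_12m, sex]
X_train = rng.normal(size=(300, 6))
y_train = rng.integers(0, 2, size=300)   # 1 = met autism criteria at 24 months
X_test = rng.normal(size=(100, 6))
y_test = rng.integers(0, 2, size=100)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

# "Eight out of ten" in the article corresponds to sensitivity (recall) on the
# held-out infants; with random placeholder data this number is meaningless.
pred = model.predict(X_test)
print("sensitivity on held-out set:", recall_score(y_test, pred))
```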


News Article | January 6, 2016
Site: phys.org

"The differences are subtle, and most humans wouldn't pick up on them, yet the birds do perceive the variances," says Chris Sturdy, professor of psychology at the University of Alberta and one of the authors on a recent study of the birds' vocal communication. "These birds can pack a lot of information into a really simple signal." Sturdy and his former PhD student Allison Hahn, lead author on the study, were measuring the production and perception of chickadee vocalizations. "We're studying natural vocalizations, and we're asking the birds how they perceive them," says Hahn. "We can see if there are certain features within their vocalizations that birds rely on more than others to perceive differences." Focusing on the chickadees' song (or "fee-bee"), used by the birds for territorial defence and mate attraction—versus the call (the ubiquitous "chick-a-dee-dee-dee") used for flock mobilization and communication among flock members—the scientists found that the birds generalized their learning from one song to another and could make distinctions among similar songs from other geographical regions. They worked with birds from Alberta—chickadees are non-migratory—and used recordings from other locations to conduct their study. Hahn and Sturdy made several other significant discoveries by studying the bioacoustic differences between male and female chickadees. "The previous belief was that females didn't even sing," says Sturdy, noting that male song has been the dominant area of inquiry until now. "People have been studying songbirds forever, but no one to date has documented the fact that female vocal production is symmetrical to males." Sturdy notes that the findings present an entirely new avenue of natural history research. Along with perceiving subtle nuances based on geography, the birds can also tell the difference between the sexes singing a similar song. "We taught the birds how to discriminate the two," says Hahn, noting that the focus was on perception rather than behaviour. "Our studies help us and the world to better understand nature," notes Sturdy. The U of A researchers in comparative cognition and neuroethology are unique in their combination of fundamental studies of vocal production with studies of perception—allowing the discovery of whether acoustic differences observable in the first type of studies are used by the birds in meaningful ways to make sense of their auditory worlds. The findings, "Black-capped chickadees categorize songs based on features that vary geographically," were published in the leading international journal Animal Behavior. Explore further: Chickadees Tweet About Themselves More information: Allison H. Hahn et al. Black-capped chickadees categorize songs based on features that vary geographically, Animal Behaviour (2016). DOI: 10.1016/j.anbehav.2015.11.017


News Article | August 31, 2016
Site: www.scientificcomputing.com

University of Alberta mechanical engineering professors Pierre Mertiny and Marc Secanell are looking to make an old technology new again and save some money for transit train operators such as the Edmonton LRT while they do it. "The flywheel is an old technology, but that's partly what makes it so sensible," says Mertiny. "Fundamentally, it's a really simple technology. We already have everything we need." The two recently calculated that the use of flywheel technology to assist light rail transit in Edmonton, Alberta, would produce energy savings of 31 per cent and cost savings of 11 per cent. Their findings are published in the July 2016 edition of the journal Energy ("Analysis of a flywheel storage system for light rail transit"). A flywheel is exactly what it sounds like: a disk, also known as the rotor, rotates and increases its rotational speed as it is fed electricity. This rotational energy can then be turned back into electrical energy whenever it is needed. It is, in a sense, a mechanical battery. The system loses very little energy to heat or friction because it operates in a vacuum and may even use magnetic bearings to levitate the rotor. Although we don't hear a lot about flywheel technology, it is used for 'high-end' applications, like the International Space Station or race cars built by Audi and Porsche. In North America, high-capacity flywheels are also used in areas of high population density, such as New York, Massachusetts and Pennsylvania, to buffer electricity to prevent power outages. Secanell and Mertiny examined the possibility of using flywheel technology to store energy generated when the city's LRT trains decelerate and stop. Trains such as the LRT are designed with so-called dynamic braking, using traction motors on the train's wheels, for smooth stops. But the deceleration generates energy, which needs to go somewhere. "Electric and fuel cell vehicles already implement regenerative braking in order to store the energy produced during braking for start-up, so why would trains not be able to do so?" says Secanell, whose research also focuses on fuel cell vehicle technologies. Currently that electricity is considered 'dirty' electricity because it is intermittent and therefore difficult to use. Conventional systems simply send the braking electric power to resistors on the train, which convert the electrical energy to heat, which is then released into the air. A flywheel system would take the electrical energy and store it as mechanical energy. This mechanical energy would then be converted back to electrical energy when the train is ready to leave the station again. "It's difficult to use a conventional battery for this purpose," explains Mertiny. "You need to recharge and discharge a lot of energy very quickly. Batteries don't last long under those conditions." Mertiny and Secanell predict that using a flywheel to capture the electricity generated by a train's deceleration and applying it for acceleration would produce an energy savings of 31 per cent and cost savings of 11 per cent on the Edmonton LRT system. A flywheel system could result in substantial energy and cost savings for the city. "The city of Hannover in Germany is already testing flywheel technology for just this purpose," says Mertiny. "They have banks of flywheels at each station to capture and re-use the electricity generated when their trains come into the station." Keeping the flywheels at each station meant that Hannover's trains did not have to be retro-fitted for the development.
Secanell and Mertiny are involved in a pan-Canadian Energy Storage Network investigating ways to optimize flywheel energy storage and cost. Mertiny is also currently working with Landmark Homes of Edmonton, through the U of A's Nasseri School of Building Science and Engineering, to develop a prototype flywheel to store solar energy for household use.
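To make the "mechanical battery" idea concrete, here is a minimal sketch of the underlying physics: the energy stored in a spinning disk is E = ½Iω², with I = ½mr² for a uniform rotor. The rotor mass, speed and efficiency figures below are hypothetical round numbers, not values from the Energy paper.

```python
# Minimal sketch of the physics behind a flywheel "mechanical battery".
# All numbers are hypothetical round figures, not values from the study.
import math

def disk_inertia(mass_kg, radius_m):
    """Moment of inertia of a uniform solid disk: I = 1/2 * m * r^2."""
    return 0.5 * mass_kg * radius_m ** 2

def stored_energy_joules(inertia, rpm):
    """Rotational kinetic energy: E = 1/2 * I * omega^2."""
    omega = rpm * 2 * math.pi / 60.0   # convert rev/min to rad/s
    return 0.5 * inertia * omega ** 2

I = disk_inertia(mass_kg=100.0, radius_m=0.5)   # hypothetical rotor
E = stored_energy_joules(I, rpm=20000)
print(f"stored energy: {E / 3.6e6:.2f} kWh")    # 3.6 MJ per kWh

# A braking train returns only part of its kinetic energy after losses in the
# motor/generator, power electronics and the flywheel itself.
braking_energy_kwh = 10.0        # hypothetical energy captured from one stop
round_trip_efficiency = 0.85     # hypothetical round-trip efficiency
print(f"recovered per stop: {braking_energy_kwh * round_trip_efficiency:.1f} kWh")
```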


News Article | April 27, 2016
Site: motherboard.vice.com

On April 26, David and Collet Stephan, the Alberta couple who treated their sick toddler with horseradish and onion smoothies for two weeks, were found guilty of failing to provide the necessaries of life to their 18-month-old son, a sentence that carries up to five years in jail. Ezekiel died of meningitis in 2012. It’s a tragic story, even more so because nobody doubts that David and Collet Stephan loved their son. Instead, they seem to have misguidedly believed that alternative therapies, including an echinacea tincture they bought at a naturopath’s office, would help him get better. The boy eventually stopped breathing, and they phoned 911. He later died in hospital. (The parents testified that they thought Ezekiel had the flu, although a family friend and nurse had suggested he may have meningitis.) Doctors, government, and the alternative medicine industry all have a duty to do better here. Horseradish and echinacea are no substitute for conventional, science-based medicine. Patients across Canada need better access to family doctors, and they need to know—without a doubt—when it’s time to seek one out, and forego the naturopath. “I hope this sends a strong message about the nature of [alternative] services,” Tim Caulfield, Canada Research Chair in Health Law and Policy at the University of Alberta, who has been following the case, told Motherboard on Monday, shortly after the verdict. “And I hope it causes policymakers throughout Canada to rethink how they’re positioning these therapies in our healthcare system.” Alternative therapies, including naturopaths’ services, are popular. It’s easy to see why: in Canada, which suffers from a longstanding doctor shortage, it can be difficult—if not impossible—to get a family doctor. Even when you do have one, that doctor is often rushed. By contrast, naturopaths sit with their patients for half an hour or longer, going over every little detail of their health. One of their most valuable services is “lifestyle counselling,” simple diet and exercise advice, that doctors often don’t have the time to do. Naturopaths have been given the right to self-regulate in many parts of Canada, like Alberta, which gives them a veneer of professionalism. But the general public should be clear on this: plenty of their most popular services still have little or no science behind them. In a 2011 survey in Alberta, Caulfield found that homeopathy, detoxification, and hydrotherapy were among the most popular and advertised treatments offered by Alberta’s naturopaths. “There is no scientific evidence to support those services at all,” he said. Detoxing, for one, has been debunked over and over again. But people keep paying for it. The growing creep to pseudoscience, and a distrust of conventional medicine, is something we all need to address—Canada’s doctors and policymakers included. A step in the right direction was former Health Minister Rona Ambrose’s announcement, in 2015, that “nosodes” (homeopathic treatments) would be labelled clearly to show they are not vaccines. The College of Naturopaths of Ontario is in line with this advice. But there’s clearly still some confusion around alternative therapies. In November, another trial begins in Alberta, into the death of 7-year-old Ryan Lovett, whose mother treated his illness with “holistic” treatments. The Canadian Press found several cases dating back to the 1960s. 
Since I first wrote about Ezekiel Stephan, I’ve heard from many naturopaths who point out that they’re licensed professionals, working to protect their patients. I don’t doubt that’s true. But what needs to be made absolutely clear is that such treatments are not an alternative to conventional, science-based medicine. Naturopaths have an important role in this. The Alberta naturopath whose office provided Ezekiel’s parents with the tincture is now under investigation. As for David and Collet Stephan, observers seem to doubt that they’ll be sentenced to a full five years in jail. “Alternative practitioners shouldn’t be your go-to primary care physician,” Caulfield said. If an adult wants to pay for a detox or some other alternative treatment, that’s one thing. “We shouldn’t be testing out our ideologies around healthcare on our children.”


News Article | March 2, 2017
Site: www.sciencenews.org

In the battle of wits between humans and machines, computers have just upped the ante. Two new poker-playing programs can best professionals at heads-up no-limit Texas Hold’em, a two-player version of poker without restrictions on the size of bets. It’s another in a growing list of complex games, including chess, checkers (SN: 7/21/07, p. 36) and Go (SN: 12/24/16, p. 28), in which computers reign supreme. Computer scientists from the University of Alberta in Canada report that their program, known as DeepStack, roundly defeated professional poker players, playing 3,000 hands against each. The program didn’t win every hand — sometimes the luck of the draw was against it. But after the results were tallied, DeepStack beat 10 out of 11 card sharks, the scientists report online March 2 in Science. (DeepStack also beat the 11th competitor, but that victory was not statistically significant.) “This work is very impressive,” says computer scientist Murray Campbell, one of the creators of Deep Blue, the computer that bested chess grandmaster Garry Kasparov in 1997. DeepStack “had a huge margin of victory,” says Campbell, of IBM’s Thomas J. Watson Research Center in Yorktown Heights, N.Y. Likewise, computer scientists led by Tuomas Sandholm of Carnegie Mellon University in Pittsburgh recently trounced four elite heads-up no-limit Texas Hold’em players with a program called Libratus. Each contestant played 30,000 hands against the program during a tournament held in January in Pittsburgh. Libratus was “much tougher than any human I’ve ever played,” says poker pro Jason Les. Previously, Michael Bowling — one of DeepStack’s creators — and colleagues had created a program that could play a two-person version of poker in which the size of bets is limited. That program played the game nearly perfectly: It was statistically unbeatable within a human lifetime (SN: 2/7/2015, p.14). But no-limit poker is vastly more complicated because when any bet size is allowed, there are many more possible actions. Players must decide whether to go all in, play it safe with a small wager or bet something in between. “Heads-up no-limit Texas Hold’em … is, in fact, far more complex than chess,” Campbell says. In the card game, each player is dealt two cards facedown and both players share five cards dealt faceup, with rounds of betting between stages of dealing. Unlike chess or Go, where both players can see all the pieces on the board, in poker, some information is hidden — the two cards in each player’s hand. Such games, known as imperfect-information games, are particularly difficult for computers to master. To hone DeepStack’s technique, the researchers used deep learning — a method of machine learning that formulates an intuition-like sense of when to hold ’em and when to fold ’em. When it’s the program’s turn, it sorts through options for its next few actions and decides what to do. As a result, DeepStack’s nature “looks a lot more like humans’,” says Bowling. Libratus computes a strategy for the game ahead of time and updates itself as it plays to patch flaws in its tactics that its human opponents have revealed. Near the end of a game, Libratus switches to real-time calculation, during which it further refines its methods. Libratus is so computationally demanding that it requires a supercomputer to run. (DeepStack can run on a laptop.) Teaching computers to play games with hidden information, like poker, could eventually lead to real-life applications. 
“The whole area of imperfect-information games is a step towards the messiness of the real world,” says Campbell. Computers that can handle that messiness could assist with business negotiations or auctions, and could help guard against hidden risks, in cybersecurity, for example.
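As a toy illustration of why no-limit betting explodes the decision space, the sketch below compares the expected value of calling different bet sizes given an estimated chance of winning. This is emphatically not how DeepStack or Libratus make decisions; both rely on far more sophisticated game-theoretic search and learned value estimates. The pot size and win probability are hypothetical.

```python
# Toy illustration only -- NOT how DeepStack or Libratus play. It shows the
# basic expected-value trade-off behind a call/fold decision, and how the
# opponent's freedom to pick any bet size changes that trade-off.

def call_ev(pot_after_bet, bet_to_call, win_probability):
    """EV of calling: win everything in the middle with probability p,
    otherwise lose the chips used to call."""
    return win_probability * pot_after_bet - (1 - win_probability) * bet_to_call

win_probability = 0.30   # hypothetical estimate against the opponent's range
base_pot = 100.0         # hypothetical pot before the opponent bets

for bet in (10.0, 50.0, 100.0, 300.0):
    pot_after_bet = base_pot + bet          # chips in the middle after the bet
    ev = call_ev(pot_after_bet, bet, win_probability)
    decision = "call" if ev > 0 else "fold"
    print(f"opponent bets {bet:>5.0f}: EV of calling = {ev:>6.1f} -> {decision}")
```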


News Article | September 7, 2016
Site: cleantechnica.com

The Canadian province of Alberta is partnering with the world’s leading all-electric bus firm, BYD, for the development of “smarter, safer transit buses,” according to an email sent to CleanTechnica and EV Obsession. The new framework agreement seems to be a serious one, as it was actually signed in the presence of Canadian Prime Minister Justin Trudeau and Canadian Minister of International Trade Chrystia Freeland, while in Shanghai. “Alberta is making strides in clean technology development and is looking to be a leader in smart infrastructure and electric transportation systems. BYD is excited to leverage the province’s machine learning, advanced sensors, and software development expertise to build even smarter and safer zero-emission buses,” stated BYD Heavy Industries Vice President Ted Dowling. “Not only has BYD delivered more than 10,000 buses worldwide that have more than 250 million kilometers of in revenue service, but in head-to-head trials against diesel buses, our battery-electric buses have proven their reliability in even the coldest climates. BYD is proud of our record as a global leader in all-electric buses, and this partnership will support both smarter transit technology and high-tech skill creation for a greener future.” The email notes that BYD is bringing substantial electric bus expertise to the partnership, having been arguably the first to bring a long-range, all-electric transit bus to market (back in 2011). The BYD lineup now includes 7 different electric bus models, “ranging from a 23-foot coach to a 60-foot articulated transit bus,” BYD notes. Going on: “There are BYD battery-electric buses running on 6 continents that have together saved customers tens of millions of dollars in fuel and maintenance costs. BYD’s proprietary Iron-Phosphate (or ‘Fe’) Battery is the safest and longest lasting electric bus battery available on the market today. Not only is the battery fully recyclable and flame resistant, but BYD also offers a full 12-year battery warranty, the longest electric battery warranty available in the industry.” “We are very pleased to be partnering with BYD to promote the facilitation of collaborative research and development, and commercialization activities,” commented Deron Bilous, Minister of Economic Development and Trade with the Government of Alberta. “Collaboration with global firms like BYD support diversification of Alberta’s economy, high tech skill creation, and increased export of technology products and services.” Current plans call for initial program details to be hammered out between BYD, partner firms in Alberta’s tech industry, the Alberta Centre for Advanced MNT Products, the University of Alberta, and the University of Calgary, later this fall.


News Article | October 31, 2016
Site: www.eurekalert.org

Scientists have found a way to use satellites to track photosynthesis in evergreens -- a discovery that could improve our ability to assess the health of northern forests amid climate change. An international team of researchers used satellite sensor data to identify slight colour shifts in evergreen trees that show seasonal cycles of photosynthesis -- the process in which plants use sunlight to convert carbon dioxide and water into glucose. Photosynthesis is easy to track in deciduous trees -- when leaves bud or turn yellow and fall off. But until recently, it had been impossible to detect in evergreen conifers on a large scale. "Photosynthesis is arguably the most important process on the planet, without which life as we know it would not exist," said John Gamon, lead researcher and a professor of biological sciences at the University of Alberta. "As the climate changes, plants respond -- their photosynthesis changes, their growing season changes. And if photosynthesis changes, that in turn further affects the atmosphere and climate." Through their CO2-consuming ways, plants have been slowing climate change far more than scientists previously realized. The "million-dollar question" is whether this will continue as the planet continues to warm due to human activity, Gamon said. Scientists have two hypotheses -- the first is that climate change and longer growing seasons will result in plants sucking up even more CO2, further slowing climate change. The other predicts a drop in photosynthetic activity due to drought conditions that stress plants, causing them to release CO2 into the atmosphere through a process called respiration -- thereby accelerating climate change. "If it's hypothesis one, that's helping us. If it's hypothesis two, that's pretty scary," said Gamon. The research team combined two different satellite bands -- one of which was used to study oceans and only recently made public by NASA -- to track seasonal changes in green (pigment created by chlorophyll) and yellow (created by carotenoid) needle colour. The index they developed provides a new tool to monitor changes in northern forests, which cover 14 per cent of all the land on Earth. Gamon has taken a leave of absence from the U of A to further the research, now funded by NASA, at the University of Nebraska-Lincoln. His lab in the U.S. is reviewing 15 years' worth of satellite data on forests in Canada and Alaska to ultimately determine whether photosynthetic cycles are happening earlier because of climate change and whether forests are becoming more or less productive at converting CO2. "Those are key questions we haven't been able to answer for the boreal forest as a whole," he said. Researchers from the University of Toronto, University of North Carolina, University of Maryland, Baltimore County, University of Barcelona, NASA and the U.S. Forest Service collaborated on the project. Their findings were published Monday in the Proceedings of the National Academy of Sciences.
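The article does not spell out how the two bands are combined, but indices of this kind typically take a normalized-difference form. The sketch below shows that general form with hypothetical band names and reflectance values; it is not the specific index published in the study.

```python
# Sketch of the general form many satellite vegetation indices take: a
# normalized difference of two spectral bands. The band choices and values
# below are hypothetical placeholders, not the study's actual index.
import numpy as np

def normalized_difference(band_a, band_b):
    """(A - B) / (A + B), computed per pixel; values range from -1 to 1."""
    band_a = np.asarray(band_a, dtype=float)
    band_b = np.asarray(band_b, dtype=float)
    return (band_a - band_b) / (band_a + band_b)

# Hypothetical monthly reflectance values for one evergreen forest pixel:
# a pigment-sensitive green band and a reference red band.
green_band = np.array([0.04, 0.05, 0.08, 0.11, 0.12, 0.10])
red_band   = np.array([0.06, 0.06, 0.06, 0.05, 0.05, 0.06])

index = normalized_difference(green_band, red_band)
print(np.round(index, 3))   # rising values would suggest ramping photosynthesis
```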


News Article | September 16, 2016
Site: www.rdmag.com

Inspired by the anatomy of insects, an interdisciplinary research team at the University of Alberta has come up with a novel way to quickly and accurately detect dangerous airborne chemicals. The work started with Arindam Phani, a graduate student in U of A's Department of Chemical and Materials Engineering, who observed that most insects have tiny hairs on their body surfaces, and it is not clear what the hairs are for. Trying to make sense of what these hairs may be capable of, Phani designed experiments involving a "forest" of tiny hairs on a thin vibrating crystal chip, under the guidance of his academic advisor Thomas Thundat, the Canada Research Chair in Oil Sands Molecular Engineering. The two joined forces with Vakhtang Putkaradze, Centennial Professor in the University of Alberta's Department of Mathematical and Statistical Sciences. The experiments and subsequent theoretical explanation formed the crux of a new study published in the Sept. 6 issue of Scientific Reports, an online, open access journal from the publishers of Nature. "We wanted to do something that nobody else does," said Putkaradze, a mathematician who is also a renowned expert in the field of mechanics. "When using resonators as sensors, most people want to get rid of dissipation or friction because it's considered highly undesirable, it tends to obscure what you are trying to measure. We have taken that undesirable thing and made it useful." "Sensing chemicals without chemical receptors has been a challenge in normal conditions," said Thundat, a world-leading expert in the field of sensing. "We realized that there is a wealth of information contained in the frictional loss of a mechanical resonator in motion and is more pronounced at the nanoscale." The idea is that any object moving rapidly through the air can probe the properties of the surrounding environment. Imagine having a wand in your hand and moving it back and forth, and—even with your eyes closed—you can feel whether the wand is moving through air, water, or honey, just by feeling the resistance. Now, picture this wand with billions of tiny hairs on it, moving back and forth several million times per second, and just imagine the sensing possibilities. "With the nanostructures, we can feel tiny changes in the air surrounding the resonator," says Putkaradze. "This sensitivity makes the device useful for detecting a wide variety of chemicals." Phani, who is the first author on the publication, believes "similar mechanisms involving motions of nano-hairs may be used for sensing by living organisms." Because the friction is changing dramatically with minute changes in the environment and is easy to measure, it may be possible to eventually produce a gadget of the size similar to or slightly larger than a Rubik's cube and designed to plug into a wall. At present, the group's device is geared primarily to sensing chemical vapors in air. "We are thinking that this device can work like a smaller and cheaper spectrometer, measuring chemicals in the parts-per-million range," added Putkaradze. Putkaradze explains that, apart from size and reasonable cost, what sets the device apart from larger and more expensive equipment is its versatility. "Because our sensor is not directed to detect any specific chemical, it can interpret a broad range, and it doesn't require that we actually attach the molecules to anything to create a mechanical response, meaning that it's also reusable." 
The team adds that the most immediate and obvious use will be for environmental air quality monitoring. "We would like to work with applications like law enforcement and scientific laboratories, but the most obvious use is for environmental observation of chemical air pollution in cities and the resource industry," Putkaradze concluded. Future iterations are geared toward detecting particulate matter, such as dust, as well as viruses present in air, which would be invaluable for public health.
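The measurement principle, that friction from the surrounding air damps the resonator and lowers its quality factor Q, can be sketched with a simple driven-oscillator model. The resonance frequency, Q values and sweep range below are hypothetical, not the device's actual parameters.

```python
# Minimal sketch of the principle the sensor exploits: friction (damping)
# from the surrounding air broadens a resonator's response and lowers its
# quality factor Q, so a change in Q signals a change in the environment.
# All parameters are hypothetical, not the device's actual values.
import numpy as np

def amplitude(freq, f0, q):
    """Steady-state amplitude of a driven damped oscillator (arbitrary units)."""
    return 1.0 / np.sqrt((f0**2 - freq**2) ** 2 + (f0 * freq / q) ** 2)

def estimate_q(freq, response):
    """Estimate Q as f0 / (full width at half power) from a response curve."""
    peak = np.argmax(response)
    half_power = response[peak] / np.sqrt(2.0)
    above = freq[response >= half_power]
    return freq[peak] / (above.max() - above.min())

freq = np.linspace(0.90e6, 1.10e6, 20001)     # sweep around a 1 MHz resonance
for q_true in (5000.0, 2000.0):               # cleaner air vs. "stickier" air
    resp = amplitude(freq, f0=1.0e6, q=q_true)
    print(f"true Q = {q_true:.0f}, estimated Q = {estimate_q(freq, resp):.0f}")
```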


News Article | December 14, 2016
Site: www.eurekalert.org

Each year, thousands of Canadian men with prostate cancer undergo biopsies to help their doctors better understand the progression and nature of their disease. It provides vital, sometimes life-saving information, yet cancer researcher John Lewis knows it can be a difficult test to ask of anyone. "Currently the best way to get information is through a biopsy, which involves pushing 12 needles through an organ the size of a walnut. As you might imagine, it's a very uncomfortable and invasive procedure," says Lewis, the Frank and Carla Sojonky Chair in Prostate Cancer Research at the University of Alberta and a member of the Cancer Research Institute of Northern Alberta. "Patients with low grade prostate cancer can decide not to get treatment and instead monitor the disease, but monitoring usually involves a biopsy every year or so. Many people opt for surgery instead of more biopsies. It is clearly something we need to improve upon." A new innovation from the University of Alberta is promising to do just that through a relatively painless procedure. A study published in the journal Cancer Research describes the use of focused ultrasound along with particles called nanodroplets for the enhanced detection of cancer biomarkers in the blood. The research team, led by senior co-authors John Lewis and Roger Zemp, used the technique on tumours to cause extracellular vesicles to be released into the bloodstream, giving them large amounts of genetic material to analyze from drawing just a small sample of blood. "With a little bit of ultrasound energy, nanodroplets phase-change into microbubbles. That's important because ultrasound can really oscillate these microbubbles," says Roger Zemp, professor of engineering at the U of A. "The microbubbles absorb of the ultrasound energy and then act like boxing gloves to punch the tumour cells and knock little vesicles off." "That led us to detect some genes that were indicative of the aggressiveness of the tumour. That's potentially very powerful. You can get a genetic characterization of the tumour, but do it relatively non-invasively." "Separately, the ultrasound and nanodroplets had very little effect," says Robert Paproski, first author of the study and a research associate working with Roger Zemp in the Faculty of Engineering. "But when we added the two together they had a very big effect. It allows us to detect roughly 100 times more vesicles than would normally be there, that are specific from the tumour." The researchers say the technique is as accurate as using needles in a biopsy, with the ultrasound able to give them information about specific parts of a tumour. They add that genetic information can be used for personalized medicine - helping doctors know if a patient's tumour has a specific mutation which would then allow them to determine what medications would work best for treatment. The team is pushing forward with strategies to further enrich the population of key vesicles released into the bloodstream through the technique, focusing on the biomarkers that are of the most importance. Lewis believes they can quickly progress the work to clinical trials, and from there to real world applications, thanks to the accessibility of the technology. "Focused ultrasound systems are already used in the clinic. Microbubbles are already used in the clinic. So I think the movement of this into the clinic is relatively straightforward." 
Prostate Cancer Canada is the leading national foundation dedicated to the prevention of the most common cancer in men through research, advocacy, education, support and awareness. As one of the largest investors in prostate cancer research in Canada, Prostate Cancer Canada is committed to continuous discovery in the areas of prevention, diagnosis, treatment, and support. Alberta Cancer Foundation is the official fundraiser for all 17 cancer centres in Alberta, including the Cross Cancer Institute in Edmonton and the Tom Baker Cancer Centre in Calgary, supporting Albertans at the point of care.


News Article | February 23, 2017
Site: www.eurekalert.org

EDMONTON (Under embargo until Thursday, February 23, 2017 at 10 a.m. MST)--In the middle of Alberta's boreal forest, a bird eats a wild chokecherry. During his scavenging, the bird is caught and eaten by a fox. The cherry seed, now inside the belly of the bird within the belly of fox, is transported far away from the tree it came from. Eventually, the seed is deposited on the ground. After being broken down in the belly of not one but two animals, the seed is ready to germinate and become a cherry tree itself. The circle of life at work. Diploendozoochory, or the process of a seed being transported in the gut of multiple animals, occurs with many species of plants in habitats around the world. First described by Charles Darwin in 1859, this type of seed dispersal has only been studied a handful of times. And in a world affected by climate change and increasing rates of human development, understanding this process is becoming increasingly important. A new study by researchers at the University of Alberta's Department of Biological Sciences is the first to comprehensively examine existing literature to identify broader patterns and suggest ways in which the phenomenon is important for plant populations and seed evolution. Anni Hämäläinen, lead investigator and postdoctoral fellow, explains that predator-assisted seed dispersal is important to colonize and recolonize plant life in the wild. "Thick-shelled seeds may benefit from the wear and tear of passing through the guts of two animals, making them better able to germinate than if they had passed through the gut of the prey alone," explains Hämäläinen. "It's even possible that some plants have evolved specifically to take advantage of these predator-specific behaviours." Often larger than prey animals, predators cover larger distances with ease. As humans continue to develop and alter wilderness, such as by cutting down forests or building roads, predators may be the only animals large enough to navigate across these areas and enable plants to recolonize them. "Climate change will alter where some plants can find suitable places to grow," explains Hämäläinen. "Seed-carrying predators may have a role in helping plants cover a larger area and hence move with the changing climate." These different factors are like pieces in a puzzle, explains Hämäläinen: to fully understand the big picture of how they affect plant populations, scientists need to know how all of the pieces fit together. "Our work has highlighted how interesting and important diploendozoochory is, and we hope that it will help and encourage others to fill some of these gaps in our understanding," says Hämäläinen. The paper "The ecological significance of secondary seed dispersal by carnivores" is published in Ecosphere. This research was conducted by Anni Hämäläinen, Kate Broadley, Amanda Droghini, Jessica Haines, Clayton Lamb, and Sophie Gilbert, under the supervision of Stan Boutin, professor in the Department of Biological Sciences and Alberta Biodiversity Conservation Chair.


News Article | November 30, 2016
Site: www.marketwired.com

EDMONTON, AB--(Marketwired - November 30, 2016) - TEC Edmonton announced today that Dr. Randy Yatscoff, Executive Vice-President of Business Development, has been awarded the National Startup Canada Adam Chowaniec Lifetime Achievement Award. The award, only one of which is presented at the national level, recognizes an individual who has made a long-term impact on advancing an environment of entrepreneurial growth and success in Canada. "I'm grateful for the incredible people and teams I've worked with over the years that made this possible," said Randy. "This award is really about the people who come together to be bigger than the sum of their parts." "There is no more deserving person than Randy for this award," says TEC Edmonton CEO Chris Lumb. "Randy brings passion, commitment, and action to everything he does, and it's an honour to work with him. We and our clients are all richer from Randy's presence, and this award recognizes his outstanding contributions to Canadian entrepreneurship." Dr. Yatscoff has worked with TEC Edmonton since 2008, initially as an Entrepreneur-in-Residence before becoming Executive Vice-President of Business Development in 2010. In his role at TEC, he oversees a team that serves over 80 startup companies per year. During his time at TEC Edmonton, Randy has directly or indirectly helped to create 15 university spinoff companies like Metabolomic Technologies Inc. (MTI) and Tevosol, allowing research innovations to make a real-world impact. In addition to university-based companies, Randy has also mentored dozens of companies in the community. Randy's time at TEC Edmonton is backed by more than a decade of experience as a biotech executive, notably serving as President and CEO of the drug development company Isotechnika. During his tenure at Isotechnika, Randy helped raise $200 million in equity financing and took the company public. In an earlier life, Randy was also an accomplished academic, a full professor and researcher at several Canadian universities. He remains an adjunct professor at the University of Alberta and holds more than 20 patents. About TEC Edmonton TEC Edmonton is a business accelerator that helps emerging technology companies grow successfully. As a joint venture of the University of Alberta and Edmonton Economic Development Corporation, TEC Edmonton operates the Edmonton region's largest accelerator for early-stage technology companies, and also manages commercialization of University of Alberta technologies. TEC Edmonton delivers services in four areas: Business Development, Funding and Finance, Technology Management, and Entrepreneur Development. Since 2011, TEC clients have generated $680M in revenue, raised $350M in financing and funding, invested $200M in R&D, grown both revenue and employment by 25 per cent per year and now employ over 2,400 people in the region. In addition, TEC has assisted in the creation of 22 spinoff companies from the University of Alberta in the last four years. TEC Edmonton was named the 4th best university business incubator in North America by the University Business Incubator (UBI) Global Index in 2015, and "Incubator of the Year" by Startup Canada in 2014. For more information, visit www.tecedmonton.com.


News Article | December 16, 2016
Site: www.eurekalert.org

Something as seemingly harmless as a heartburn pill could lead cancer patients to take a turn for the worse. A University of Alberta study published in journal JAMA Oncology discovered that proton pump inhibitors (PPIs), which are very common medications for heartburn and gastrointestinal bleeding, decrease effects of capecitabine, a type of chemotherapy usually prescribed to gastric cancer patients. The study by Department of Oncology's Michael Sawyer, Michael Chu and their team included more than 500 patients and the results were conclusive: PPIs affected progression-free survival by more than a month; the overall survival in cancer patients was reduced by more than two months, and the disease control rate decreased by 11 per cent. Although this research was focused on gastric cancer patients, Sawyer's team has followed up with another study in early stage colorectal cancer and discovered that those who took PPIs and capecitabine were also at risk for decreased cancer treatment efficacy. In that study, patients who took PPIs while on capecitabine had a decreased chance of being cured of their colorectal cancer. PPIs are very popular for their efficacy and many of them are over-the-counter drugs (some common brands are Nexium, Prevacid and Protonix). Sawyer explains the risk of this interaction is high as some cancer patients may not even have these medications prescribed by a physician, but could obtain them easily over-the-counter at a pharmacy and accidentally alter their chemotherapy treatment without knowing it: "This could be a very common and underappreciated side effect. One study estimated that at 20 per cent of cancer patients in general take proton pump inhibitors." The explanation for the negative outcome may be in gastric pH levels. Previous studies had been done on the interaction of this type of chemo with the antacid medication Maalox, without obtaining any alarming results; but unlike Maalox, PPI's are able to raise pH to a point where they could affect disintegration of capecitabine tablets. "Given that PPIs are much more potent and can essentially abolish gastric acidity there may be a significant interaction between capecitabine and PPIs," says Sawyer. Sawyer, a clinical pharmacologist and medical oncologist and member of the U of A's Faculty of Medicine & Dentistry since 2001, is currently conducting more research on this topic to unveil more about the interaction of chemotherapy with other medications. This discovery may lead to change the usual procedures for prescription of PPIs. Some cancer patients cannot discontinue these medications in order to treat bleedings or other gastric conditions that must be kept under control. "In that case, there are alternatives for oncologists or family doctors that become aware of this risk," says Sawyer. "Physicians should use caution in prescribing PPIs to patients on capecitabine and, if they must use PPIs due to gastrointestinal bleeding issues, maybe they should consider using other types of chemotherapy that don't present this interaction."


EDMONTON, AB--(Marketwired - October 27, 2016) - TEC Edmonton announced today its acceptance of two additional early-stage information technology companies to its T-Squared Accelerator program. SensorUp, founded by Dr. Steve Liang, Associate professor of Geomatics Engineering at the University of Calgary, provides an open standard platform for connectivity to and between Internet of Things (IoT) devices, data, and analytics over the Web. "We are excited that SensorUp has been chosen to join the T-Squared Accelerator. More importantly, it's great to know that TEC Edmonton and TELUS share the same vision for SensorUp, which is building the Internet of Things with open standards," says Steve Liang, Founder and CEO of SensorUp. "I'm confident that this partnership will pave a quick path for our growth and establish SensorUp as a leading Internet of Things platform." TVCom is developing an e-Commerce platform that connects TV viewers to real-time and context relevant fashion, merchandise and supplemental content. The company was also the 2016 TEC VenturePrize TELUS ICT Stream Winner earlier this year. "Much of what we have accomplished so far is intimately linked to TEC Edmonton's support of our project from the start," said TVCom CTO and University of Alberta Computer Science student Pavlo Malynin. "We take tremendous pride in knowing that we have some of Alberta's finest experts supporting our technology." "SensorUp and TVCom are both very exciting Alberta-based companies building globally relevant and scalable platforms. We look forward to working closely with them over the next 12 months to rapidly accelerate their progress," stated Shaheel Hooda, Program Director of the T-Squared Accelerator program and Executive in Residence at TEC Edmonton. The T-Squared Accelerator is a collaboration between TEC Edmonton and TELUS that provides promising early-stage information and communications technology (ICT) companies with 12 months of free incubation space and support in Edmonton's Enterprise Square, along with seed funding and expert mentorship from TELUS and TEC Edmonton to advance their business. For more information, visit www.tecedmonton.com/t-squared-accelerator. About TEC Edmonton TEC Edmonton is a business accelerator that helps emerging technology companies grow successfully. As a joint venture of the University of Alberta and Edmonton Economic Development Corporation, TEC Edmonton operates the Edmonton region's largest accelerator for early-stage technology companies, and also manages commercialization of University of Alberta technologies. TEC Edmonton delivers services in four areas: Business Development, Funding and Finance, Technology Management, and Entrepreneur Development. Since 2011, TEC clients have generated $680M in revenue, raised $350M in financing and funding, invested $200M in R&D, grown both revenue and employment by 25 per cent per year and now employ over 2,400 people in the region. In addition, TEC has assisted in the creation of 22 spinoff companies from the University of Alberta in the last four years. TEC Edmonton was named the 4th best university business incubator in North America by the University Business Incubator (UBI) Global Index in 2015, and "Incubator of the Year" by Startup Canada in 2014. For more information, visit www.tecedmonton.com.


News Article | March 2, 2017
Site: www.fastcompany.com

Beth Linas has a reputation among her scientific colleagues for her love of social media. “Oh I’m ridiculed,” she tells me, via Twitter direct message (of course). “Not by everyone, but by some old school folk.” Linas, an infectious disease epidemiologist, tweets regularly about topics that she’s passionate about, whether it’s mobile technology or public health. During her fellowship year at the National Science Foundation, she is leveraging social media to help debunk theories that aren’t scientifically validated, such as that vaccines are linked to autism, as well as improve health literacy and inspire more women to train for STEM careers. Increasingly, young scientists like Linas regard Facebook, Twitter, and blogging platforms as a key part of their day job. Not everyone is on board. Linas stresses that the ridicule from her colleagues isn’t mean-spirited, but it still demonstrates some fundamental discomfort with engaging with the public. Social media is viewed by many, she says, as time spent away from more important work, like peer-reviewed research. Experts say that academics have to walk a fine line, even today. Many scientists today will encounter a “cultural pushback,” says Tim Caulfield, a health policy professor at the University of Alberta, if they’re viewed as being too “self-promotional.” Carl Sagan, for instance, is remembered for his television persona, but many forget that he was also a prolific scientific researcher. Scientists on social media also risk alienating colleagues or university officials if they tweet or post about a controversial topic that doesn’t reflect well on their institution. For Prachee Avasthi, assistant professor of anatomy and cell biology at the University of Kansas Medical Center, the biggest risk is to her reputation within the scientific community, so she watches what she tweets. “[Another] scientist might have power over me in that they might review my grants or papers.” Despite the risks, experts who have studied the trend see this as a way to increase public support for the sciences at a time when the Trump administration is questioning facts and threatening funding for basic research. “I tell academics that social media is now the top source of science information for people,” says Paige Jarreau, a science communication specialist at Louisiana State University. “And they are a trusted voice for people that don’t have that background and literacy.” Surveys show that public confidence in the scientific community has remained stable since the early 1970s, and that scientists are more trusted than public officials and religious leaders. For that reason, Caulfield argues that it’s meaningful for scientists to be part of the conversation even if they have far fewer followers than celebrities peddling pseudo-science, such as actress and Goop founder Gwyneth Paltrow (Caulfield is the author of a book titled Is Gwyneth Paltrow Wrong About Everything?). A trusted voice can be very influential, he says. “[Scientists communicating online] is an important part of pushing back against misinformation.” Caulfield says he has been personally criticized for “spending so much time tweeting,” but he’s noticed a shift in recent years. Now, he says, students, scientists, and universities are approaching him to advise them on how to communicate their work to the public. At universities and medical centers, including Louisiana State University, science departments are now hosting regular workshops to encourage scientists to be present on social media.
For Dana Smith, a science writer and communicator who previously worked at the Gladstone Institutes, it’s no longer an option for scientists not to engage with lay audiences. “It’s becoming a moral obligation,” she says, with much of their research funding coming from taxpayers. For this reason, she personally made the switch from academia–she was a doctoral psychology researcher at the University of Cambridge–to communications. She doesn’t think everyone needs to be the “next public face of science,” but she encourages researchers to try their hand at the occasional blog post or tweet.


News Article | November 27, 2013
Site: www.theguardian.com

Polar bear populations are a sensitive topic for the Canadian government, which has faced international criticism for its policies on climate change and for allowing limited hunting of bears, mainly by indigenous communities. The Canadian environment minister provoked outrage last October when she discounted abundant scientific studies of polar bear decline across the Arctic, saying her brother, a hunter, was having no trouble finding bears. Leona Aglukkaq, an Inuk, spoke of a "debate" about the existence of climate change. "Scientists latch on to the wildlife in the north to state their case that climate change is happening and the polar bears will disappear and whatnot," she said. "But people on the ground will say the polar bear population is quite healthy. You know, in these regions, the population has increased, in fact. Why are you [saying it's] decreasing?" she told a meeting. "My brother is a full-time hunter who will tell you polar bear populations have increased and scientists are wrong." Scientists dispute this. One single polar bear population on the western shore of Hudson Bay, for example, has shrunk by nearly 10% to 850 bears in under a decade, according to the latest Canadian government estimate seen by the Guardian. The rate of decline – and an even sharper drop in the birth and survival rate of young cubs – puts the entire population of western Hudson Bay polar bears at risk of collapse within a matter of years, scientists have warned. "All indications are that this population could collapse in the space of a year or two if conditions got bad enough," said Andrew Derocher, a polar bear scientist at the University of Alberta. "In 2020, I think it is still an open bet that we are going to have polar bears in western Hudson Bay." The latest Canadian government estimates, which have yet to be shared with independent scientists or the public, confirm scientists' fears that the polar bears of the western Hudson Bay have little chance of long-term survival. In 1987, when the first reliable estimates of polar bear population were made, using a technique known as mark and recapture, there were about 1,200 bears in the western Hudson Bay area; by 2004, the figure had dropped to 935. "Now we are somewhere in the ballpark of 850," said Nick Lunn, an Environment Canada scientist, who is considered to be the leading expert on the polar bear population of western Hudson Bay. "This gives us a glimpse of what may be coming down the road for other subpopulations." The polar bears of western Hudson Bay are at greater risk in a warming Arctic because of their relatively southern exposure. But scientists have projected two-thirds of all polar bears could disappear by 2050 under climate change. Polar bear experts had been braced for a 10% decline in the western Hudson Bay population, based on observations about the retreat of sea ice and the deteriorating condition of polar bears, especially mothers and cubs. The ice-free season in Hudson Bay has expanded by about a day every year for the past 30 years, reaching 143 days last year. Scientists have predicted polar bears will be unable to survive once it reaches 160 days. Earlier break-up is forcing polar bears off the ice at their peak feeding time in the spring, when bears typically pack on two-thirds of the weight they need to survive the year. With freeze-up occurring later each year, bears are skinnier and less healthy when it comes time to return to the ice. 
"You can see their backbones and their hips and shoulder blades when they are moving and they are visibly thin," said Ian Stirling, a wildlife biologist at the University of Alberta, who has studied the population for more than 35 years. Scientists are already seeing the effects of that extended starvation on future generations of polar bears. Female polar bears are now on average 88lbs lighter than they were in the early 1980s. They are having fewer cubs, and those cubs tend to be lighter, which means they have a lower rate of survival. Stirling, who conducts aerial surveys of polar bears, said he was struck each year by the scarcity of young cubs returning to the ice in the autumn. "There is no way a population can remain stable, if the young aren't surviving," said Stirling. "If the climate continues to warm, slowly and steadily, they are on the way out."


News Article | October 28, 2016
Site: co.newswire.com

Dr. Raj Padwal, of the University of Alberta, and his research team have selected TeleMED Diagnostic Management Inc. (TeleMED) as their Health Technology Partner in a study titled Telemonitoring and Protocolized Case Management for Hypertension in Seniors. This study is funded by the Canadian Institutes of Health Research and will be a randomized controlled trial to determine if telemonitoring is the optimal method for controlling blood pressure in Canadian seniors.


News Article | October 26, 2016
Site: www.nature.com

It didn’t take long for Canada’s Prime Minister Justin Trudeau to send scientists swooning. Within days of taking office on 4 November 2015, the centre-left Liberal relaxed restrictions on government scientists’ ability to speak to the press and the public, and reinstated a long-form census prized by social scientists. A year on, Trudeau has boosted science budgets and restored some research jobs cut by his Conservative predecessor, Stephen Harper. “The sun has peeked through some of the clouds,” says Paul Dufour, a science-policy analyst at the University of Ottawa. “The dark prince has left.” Yet many in Canada’s science community say they are reserving judgement, waiting to see whether Trudeau can sustain his string of victories as he tackles some of the country’s thorniest science-policy issues. Among them are revisions of processes ranging from environmental regulations to Canada’s system for doling out research grants. Kathleen Walsh, executive director of the non-profit science-advocacy group Evidence for Democracy in Ottawa, worries that some of the Trudeau government’s environmental policies may favour style over substance. Take the prime minister’s decision to put a price on carbon — starting at Can$10 (US$7.5) per tonne in 2018 and rising to Can$50 per tonne in 2022. Environmentalists and economists say that those prices are too low to achieve Canada’s goal of reducing its greenhouse-gas emissions by 30% below the 2005 level by 2030. Many also see that emissions goal, set by Harper, as lacklustre. And Trudeau has not broached harder subjects, such as fulfilling a campaign promise to phase out fossil-fuel subsidies. “The Trudeau government has squandered an opportunity for effective national action,” says Douglas Macdonald, an environmental-policy expert at the University of Toronto. The prime minister’s first budget, released in March, brought good news for scientists: an increase of roughly Can$95 million for the country’s research councils — more than twice the 2015 boost (see ‘Budget boost’). But there are still grumbles about how research councils’ funds are apportioned. “A lot of money is going to large institutions,” says Walsh. “Your everyday scientists in everyday labs are still struggling.” Earlier this year, health scientists cried foul over reforms to the Canadian Institutes of Health Research (CIHR) system for awarding grants. Researchers complained that the measures, including a switch to online peer review, made reviews less effective and put early-career scientists at a disadvantage. More than 1,000 researchers signed a letter demanding changes; in September, the CIHR launched an international review of its grant processes. A broader examination of the government’s science-funding system, called the Fundamental Science Review, began in June. Science minister Kirsty Duncan says that the government has received more than 1,200 public comments, and a final report on the review is due by early 2017 at the latest. The Trudeau government is also re-examining Harper’s changes to fisheries and environmental-assessment laws, with recommendations due by early 2017. In the meantime, controversial projects such as a natural-gas plant on the British Columbia coast are receiving government approval. “There is a rush by companies to get hearings over and the necessary papers in place before [environmental assessment] regulations are strengthened,” says David Schindler, an ecologist at the University of Alberta in Edmonton.
Trudeau’s main campaign promise to scientists was to reinstate evidence-based decision-making. To that end, jobs are being restored to some government research departments after a loss of roughly 1,800 positions during the Harper administration — 344 of those at the agency Environment Canada alone. Now, the department of fisheries and oceans is hiring 135 scientists. And the Professional Institute of the Public Service of Canada, a union that represents government workers, wants the administration to hire 1,500 extra scientists next year. Many researchers are waiting for Trudeau to deliver on his promise to install a chief science officer to keep science at the heart of governance. That position is still in the planning stage, and Duncan would not comment on when an appointment would be made. “We’re kind of still in the honeymoon period,” says Dufour. “Everyone is willing to give the government some long string. But at some point they’re going to have to take some actual action.”


News Article | November 22, 2016
Site: www.npr.org

The Standing Rock Resistance Is Unprecedented (It's Also Centuries Old) As resistance to the Dakota Access Pipeline in Standing Rock, N.D., concludes its seventh month, two narratives have emerged: that the resistance is unprecedented, and that it is centuries old. Both are true. The scope of the resistance at Standing Rock exceeds just about every protest in Native American history. But that history itself, of indigenous people fighting to protect not just their land, but the land, is centuries old. Over the weekend, the situation at Standing Rock grew more contentious. On Sunday night, Morton County police sprayed the crowd of about 400 people with tear gas and water as temperatures dipped below freezing. But the resistance, an offspring of history, continues. Through the years, details of such protests change — sometimes the foe is the U.S. government; sometimes a large corporation; sometimes, as in the case of the pipeline, a combination of the two. Still, the broad strokes of each land infringement and each resistance stay essentially the same. In that tradition, the tribes gathered at Standing Rock today are trying to stop a natural gas pipeline operator from bulldozing what they say are sacred sites to construct an 1,172-mile oil pipeline. The tribes also want to protect the Missouri River, the primary water source for the Standing Rock Reservation, from a potential pipeline leak. (Energy Transfer Partners, which is building the pipeline, says on its website that it emphasizes safety and that, "in many instances we exceed government safety standards to ensure a long-term, safe and reliable pipeline.") Since April, when citizens of the Standing Rock Sioux Nation set up the Sacred Stone Camp, thousands of people have passed through and pledged support. Environmentalists and activist groups like Black Lives Matter and Code Pink have also stepped in as allies. Many people who have visited say that the camp is beyond anything they've ever experienced. "It's historic, really. I don't think anything like this has ever happened in documented history," said Ruth Hopkins, a reporter from Indian Country Today. But there are historical preludes, and you don't have to look too far back to find them. In 2015, when the Keystone XL pipeline was being debated, numerous Native American tribes and the Indigenous Environmental Network organized against it. The pipeline would have stretched 1,179 miles from Canada to the Gulf of Mexico. The Rosebud Sioux, a tribe in South Dakota, called the proposed pipeline an "act of war" and set up an encampment where the pipeline was to be constructed. Also joining in were the Environmental Protection Agency, the Natural Resources Defense Council, and the Omaha, Dene, Ho-chunk, and Creek Nations, whose lands the pipeline would have traversed. President Obama vetoed Keystone XL. But even at the time, A. Gay Kingman, the executive director of the Great Plains Tribal Chairman's Association, warned that the reprieve would be temporary. "Wopila [thank you] to all our relatives who stood strong to oppose the KXL," Kingman said in a statement after the veto. "But keep the coalitions together, because there are more pipelines proposed, and we must protect our Mother Earth for our future generations." In the case of the Dakota Access Pipeline, the Standing Rock Sioux have been able to attract support from hundreds of tribes all over the country, not just in places that would be directly affected. The tribes aren't just leaning on long-held beliefs about the importance of the natural world. They're also using long-held resistance strategies.
Like the encampment itself. "If you don't know very much about Native American people, you wouldn't understand that this is something that's kind of natural to us," said Hopkins, who is enrolled in the Sisseton Wahpeton Oyate Nation and was born on the Standing Rock Reservation. "When we have ceremonies, we do camps like this. It's something that we've always known how to do, going back to pre-colonial times." In the late 1800s, more than 10,000 members of the Lakota Sioux, Cheyenne and Arapaho tribes set up camp to resist the U.S. Army's attempt to displace them in search of gold. That camp took form at the Little Bighorn River in Montana. After the soldiers attacked the camp in June of 1876, the Battle of the Little Bighorn, widely known as (Gen. George) Custer's Last Stand, erupted. In defeating the Army, the tribes won a huge land rights victory for Native Americans. There was also Wounded Knee, a protest that was part of the American Indian Movement. During the 1973 demonstration, about 200 people occupied the town of Wounded Knee on the Pine Ridge Reservation in South Dakota — the site of an 1890 massacre in which U.S. soldiers killed hundreds of Native Americans. Protesters turned Wounded Knee into what one former AIM leader called "an armed camp" in order to protest corruption in tribal leadership and draw attention to the U.S. government's failure to honor treaties. Over the course of the 1973 occupation, two Sioux men were killed and hundreds more arrested. But the resistance, which lasted 71 days, underscored Native American civil rights issues in a way that many see reflected today in Standing Rock. If Native American resistance is an old story, that's because the systemic violation of indigenous land rights is an old story. And if history is any precedent, the resistance won't end at Standing Rock. "There are no rights being violated here that haven't been violated before," said Kim Tallbear, a professor of Native Studies at the University of Alberta, who for years worked on tribal issues as an environmental planner for the U.S. Environmental Protection Agency and the Department of Energy. Those violations, she said, have taken two forms: long-term disregard for indigenous land rights and a "bureaucratic disregard for consultation with indigenous people." When she sees images of police using pepper spray and water cannons or security guards unleashing dogs on Standing Rock protesters, Tallbear said, she isn't shocked. "I'm, like, oh yeah, they did that in the 19th century, they did that in the 16th century," she said. "This is not new. ... The contemporary tactics used against indigenous people might look a little bit more complex or savvy, but to me, I can read it all as part of a longstanding colonial project." "Maybe for non-Natives who thought that the West was won, and the Indian Wars were over, and Native people were mostly dead and gone and isn't that too bad – now, they're like, 'Oh wait a minute, they're still there? And they're still fighting the same things they were 150 years ago?'"


News Article | December 12, 2016
Site: www.eurekalert.org

New research shows that, when focused, we process information continuously, rather than in waves as previously thought. EDMONTON (Monday, December 5, 2016)--You're in a crowded lecture theatre. Around you are a million tiny distractions: someone rustling in their bag; a door opening for latecomers; a phone vibrating or lighting up; another listener having a snack; a pen dropping on the floor. However, you remain focused, concentrating on the speaker, listening and engaging with the talk. But how do you do that? New research shows that when we're paying attention to something, that information is processed in a continuous manner. But when we're trying to ignore something, we perceive and experience information in waves or frames, like scenes in a movie. Cognitive neuroscientist Kyle Mathewson and Sayeed Kizuk, a graduate of the bachelor of science program in honours psychology and a current master of science student, recently published research explaining the phenomenon. "We are better at prioritizing certain times when we are not attending to that space in the world," explains Mathewson, assistant professor in the Department of Psychology at the University of Alberta and Neuroscience and Mental Health Institute affiliate. "This research shows that the two processes for attending to space and attending to time interact with one another." Our brains oscillate at many different frequencies, explains Mathewson, and each frequency has a different role. "This study examined 12 hertz alpha oscillations, a mechanism used to inhibit, or ignore, a certain stimulus, thereby allowing us to focus on a particular time or space that we are experiencing, while ignoring others," says Mathewson. For example, if there is a repetitive stimulus in the world, such as the sound of someone's voice in a lecture theatre, the alpha waves lock onto the timing of that stimulus, and the brain becomes better at processing things that occur in time with that stimulus. The new findings show, surprisingly, that this happens more in places we are ignoring. "We are bombarded with so much information and stimulation that we can't possibly process it all at once. Whether it be commuting, engaging in our work, studying for a class, or working out, our brains select the useful information and ignore the rest, so that we can focus on a single or a few items in order to make appropriate responses in the world. This research helps explain how," says Mathewson. Mathewson is now working on stimulating the brain at alpha frequencies in order to understand how to improve brain function in meaningful ways. For instance, improving one's ability to focus and perform in real-world situations, such as working on a project or riding a bike. "To better understand how the brain and mind works can help us improve performance and attention in our everyday lives, to improve our safety, increase our work productivity, do better at school, and perform better in sports," explains Mathewson. "We're developing and testing novel, portable technologies to make this possible." The paper, "Power and phase of alpha oscillations reveal an interaction between spatial and temporal visual attention," was published in the Journal of Cognitive Neuroscience in fall 2016. The University of Alberta Faculty of Science is a research and teaching powerhouse dedicated to shaping the future by pushing the boundaries of knowledge in the classroom, laboratory, and field.
Through exceptional teaching, learning, and research experiences, we competitively position our students, staff, and faculty for current and future success.
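To make the phase-locking idea above concrete, here is a minimal Python sketch of one standard way to quantify it: band-pass an EEG channel around 12 Hz, extract the instantaneous alpha phase with a Hilbert transform, and measure how consistent that phase is at the onsets of a repetitive stimulus (inter-trial phase coherence). This is a generic illustration, not the analysis code from the published study; the channel data, sampling rate and stimulus times are assumed.

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def alpha_phase_locking(eeg, stim_onsets, fs=250.0, band=(10.0, 14.0)):
        # Band-pass around the alpha band, then take the instantaneous phase.
        nyq = fs / 2.0
        b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="band")
        phase = np.angle(hilbert(filtfilt(b, a, eeg)))
        # Phase at each stimulus onset; coherence is the length of the mean
        # unit phase vector (0 = random phase, 1 = perfect locking).
        onset_phases = phase[np.asarray(stim_onsets, dtype=int)]
        return np.abs(np.mean(np.exp(1j * onset_phases)))

    # Toy demonstration with synthetic data: a noisy 12 Hz oscillation probed
    # by stimuli arriving once per alpha cycle gives a coherence close to 1.
    fs = 250.0
    t = np.arange(0, 20, 1 / fs)
    eeg = np.sin(2 * np.pi * 12 * t) + 0.5 * np.random.randn(t.size)
    stim_onsets = np.round(np.arange(1.0, 19.0, 1 / 12) * fs).astype(int)
    print(alpha_phase_locking(eeg, stim_onsets, fs))

Shuffling the stimulus onsets in this sketch drives the coherence toward zero, which is the contrast a phase-locking analysis exploits.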


News Article | April 11, 2016
Site: www.biosciencetechnology.com

Several studies have recognized a link between obesity and cancer. Richard Lehner, professor of Pediatrics and investigator at the University of Alberta's Faculty of Medicine & Dentistry, has taken his research further to understand how tumour cells grow through scavenging very low-density lipoproteins (VLDL) and low-density lipoproteins (LDL), commonly known as the "bad cholesterol", and what mechanisms can be used to reduce the malignant cells' growth. The innovative study, an effort of over two years by Lehner's group in collaboration with Gerald Hoefler and his team (Medical University of Graz, Austria), was published in the scientific journal Cell Reports. The data gathered from their experiments suggest a feed-forward loop, in which tumours not only use lipids as "building blocks" to grow, but can also regulate their host's lipid metabolism to increase production of these lipids. The "bad cholesterol" binds to LDL receptors in the liver, the organ in charge of degrading it and excreting it from the organism as bile. "Cancer cells need lipids to grow. They can make their own lipids or get more from the host because these cells grow so fast," explains Lehner. "The tumour signals to the liver: 'I need more cholesterol for growth' and the liver is reprogrammed to secrete those lipids." One of the key factors in this process is a group of proteins we all have that, in larger quantities, may decrease the number of LDL receptors available to clear the cholesterol. The tumour affects these proteins to reduce clearance of cholesterol from the blood, leaving the LDL for the cancer to feed on. These findings led Lehner and Hoefler to an interesting hypothesis: minimizing the liver's production of LDL would deprive a tumour of its constant supply and therefore reduce its possibility of growth. Their experiments in pre-clinical models proved to be successful, confirming lower tumour development with the regulation of the proteins that affect production of VLDL (precursors of LDL) and uptake of LDL by receptors from the liver. This research received the support of grants from the Canadian Institutes of Health Research (CIHR) and the Austrian Science Fund (FWF) and its DK Programme. It was also possible through the Faculty of Medicine & Dentistry's Lipid Analysis Core Facility, the Women and Children Health Research Institute (WCHRI) and the Canada Foundation for Innovation (CFI). The next step for Lehner and his team will be to test existing medications that would help in limiting the production of cholesterol on patients undergoing cancer treatment -- adding them to their current therapies. "There are medications approved that we can test," says Lehner. "They were not developed for cancer, they were manufactured for people with hypercholesterolemia [a chronic condition in which patients have very high levels of cholesterol in their blood], but it will be interesting for us to test them with cancer patients and see if there is improvement." Lehner intends to expand the support received and develop these tests locally, including technology and facilities from the institutes and clinics related to the University of Alberta. "The collaboration with Austria was to set the concept of the investigation," he explains. "We have a great group here, great cancer researchers. We are in good hands to continue." Should these potential clinical trials prove to be effective, we could be facing an improved way to help cancer patients: eliminating the tumour, while preventing it from growing at the same time.


News Article | February 16, 2017
Site: www.eurekalert.org

Alpha cells in the pancreas can be induced in living mice to quickly and efficiently become insulin-producing beta cells when the expression of just two genes is blocked, according to a study led by researchers at the Stanford University School of Medicine. Studies of human pancreases from diabetic cadaver donors suggest that the alpha cells' "career change" also occurs naturally in diabetic humans, but on a much smaller and slower scale. The research suggests that scientists may one day be able to take advantage of this natural flexibility in cell fate to coax alpha cells to convert to beta cells in humans to alleviate the symptoms of diabetes. "It is important to carefully evaluate any and all potential sources of new beta cells for people with diabetes," said Seung Kim, MD, PhD, professor of developmental biology and of medicine. "Now we've discovered what keeps an alpha cell as an alpha cell, and found a way to efficiently convert them in living animals into cells that are nearly indistinguishable from beta cells. It's very exciting." Kim is the senior author of the study, which will be published online Feb. 16 in Cell Metabolism. Postdoctoral scholar Harini Chakravarthy, PhD, is the lead author. "Transdifferentiation of alpha cells into insulin-producing beta cells is a very attractive therapeutic approach for restoring beta cell function in established Type 1 diabetes," said Andrew Rakeman, PhD, the director of discovery research at JDRF, an organization that funds research into Type 1 diabetes. "By identifying the pathways regulating alpha to beta cell conversion and showing that these same mechanisms are active in human islets from patients with Type 1 diabetes, Chakravarthy and her colleagues have made an important step toward realizing the therapeutic potential of alpha cell transdifferentiation." Rakeman was not involved in the study. Cells in the pancreas called beta cells and alpha cells are responsible for modulating the body's response to the rise and fall of blood glucose levels after a meal. When glucose levels rise, beta cells release insulin to cue cells throughout the body to squirrel away the sugar for later use. When levels fall, alpha cells release glucagon to stimulate the release of stored glucose. Although both Type 1 and Type 2 diabetes are primarily linked to reductions in the number of insulin-producing beta cells, there are signs that alpha cells may also be dysfunctional in these disorders. "In some cases, alpha cells may actually be secreting too much glucagon," said Kim. "When there is already not enough insulin, excess glucagon is like adding gas to a fire." Because humans have a large reservoir of alpha cells, and because the alpha cells sometimes secrete too much glucagon, converting some alpha cells to beta cells should be well-tolerated, the researchers believe. The researchers built on a previous study in mice several years ago that was conducted in a Swiss laboratory, which also collaborated on the current study. It showed that when beta cells are destroyed, about 1 percent of alpha cells in the pancreas begin to look and act like beta cells. But this happened very slowly. "What was lacking in that initial index study was any sort of understanding of the mechanism of this conversion," said Kim. "But we had some ideas based on our own work as to what the master regulators might be." 
Chakravarthy and her colleagues targeted two main candidates: a protein called Arx known to be important during the development of alpha cells and another called DNMT1 that may help alpha cells "remember" how to be alpha cells by maintaining chemical tags on its DNA. The researchers painstakingly generated a strain of laboratory mice unable to make either Arx or DNMT1 in pancreatic alpha cells when the animals were administered a certain chemical compound in their drinking water. They observed a rapid conversion of alpha cells into what appeared to be beta cells in the mice within seven weeks of blocking the production of both these proteins. To confirm the change, the researchers collaborated with colleagues in the laboratory of Stephen Quake, PhD, a co-author and professor of bioengineering and of applied physics at Stanford, to study the gene expression patterns of the former alpha cells. They also shipped the cells to collaborators in Alberta, Canada, and at the University of Illinois to test the electrophysiological characteristics of the cells and whether and how they responded to glucose. "Through these rigorous studies by our colleagues and collaborators, we found that these former alpha cells were -- in every way -- remarkably similar to native beta cells," said Kim. The researchers then turned their attention to human pancreatic tissue from diabetic and nondiabetic cadaver donors. They found that samples of tissue from children with Type 1 diabetes diagnosed within a year or two of their death include a proportion of bi-hormonal cells -- individual cells that produce both glucagon and insulin. Kim and his colleagues believe they may have caught the cells in the act of converting from alpha cells to beta cells in response to the development of diabetes. They also saw that the human alpha cell samples from the diabetic donors had lost the expression of the very genes -- ARX and DNMT1 -- they had blocked in the mice to convert alpha cells into beta cells. "So the same basic changes may be happening in humans with Type 1 diabetes," said Kim. "This indicates that it might be possible to use targeted methods to block these genes or the signals controlling them in the pancreatic islets of people with diabetes to enhance the proportion of alpha cells that convert into beta cells." Kim is a member of Stanford Bio-X, the Stanford Cardiovascular Institute, the Stanford Cancer Institute and the Stanford Child Health Research Institute. Researchers from the University of Alberta, the University of Illinois, the University of Geneva and the University of Bergen are also co-authors of the study. The research was supported by the National Institutes of Health (grants U01HL099999, U01HL099995, UO1DK089532, UO1DK089572 and UC4DK104211), the California Institute for Regenerative Medicine, the Juvenile Diabetes Research Foundation, the Center of Excellence for Stem Cell Genomics, the Wallenberg Foundation, the Swiss National Science Foundation, the NIH Beta-Cell Biology Consortium, the European Union, the Howard Hughes Medical Institute, the H.L. Snyder Foundation, the Elser Trust and the NIH Human Islet Resource Network. Stanford's Department of Developmental Biology also supported the work. The Stanford University School of Medicine consistently ranks among the nation's top medical schools, integrating research, medical education, patient care and community service. For more news about the school, please visit http://med. . 
The medical school is part of Stanford Medicine, which includes Stanford Health Care and Stanford Children's Health. For information about all three, please visit http://med. .


News Article | August 31, 2016
Site: www.nature.com

No statistical methods were used to predetermine sample size. The experiments were not randomized. The investigators were not blinded to allocation during experiments and outcome assessment. We obtained 23 sediment cores from 8 different lakes by using a percussion corer deployed from the frozen lake surface (ref. 51). To prevent any internal mixing, we discarded all upper suspended sediments and only kept the compacted sediment for further investigation. Cores were cut into smaller sections to allow transport and storage. All cores were taken to laboratories at the University of Calgary and were stored cold at 5 °C until subsequent subsampling. Cores were split using an adjustable tile saw, cutting only the PVC pipe. The split half was taken into a positive pressure laboratory for DNA subsampling. DNA samples were taken wearing a full body suit, mask and sterile gloves; the top 10 mm were removed using two sterile scalpels and samples were taken with a 5 ml sterile disposable syringe (3–4 cm2) and transferred to a 15 ml sterile spin tube. Caution was taken not to cross-contaminate between layers or to sample sediments in contact with the inner side of the PVC pipe. Samples were taken every centimetre in the lowest 1 m of the core (except for Spring Lake, the lowest 2 m), then intervals of 2 cm higher up, and finally samples were taken every 5 cm, and subsequently frozen until analysed. Pollen samples were taken immediately next to the DNA samples, while macrofossil samples were cut from the remaining layer in 1 cm or 2 cm slices. Following sampling, the second intact core halves were visually described and wrapped for transport. All cores were stored at 5 °C before, during and after shipment to the University of Copenhagen (Denmark). An ITRAX core scanner was used to take high-resolution images and to measure magnetic susceptibility at the Department of Geoscience, Aarhus University. Magnetic susceptibility (ref. 52) was measured every 0.5 cm using a Bartington Instruments MS2 system (Extended Data Fig. 2). Pollen was extracted using a standard protocol (ref. 30). Lycopodium markers were added to determine pollen concentrations (ref. 53) (see Supplementary Information). Samples were mounted in (2000 cs) silicone oil and pollen including spores were counted using a Leica Laborlux-S microscope at 400× magnification and identified using keys (refs 30, 53, 54) as well as reference collections of North American and Arctic pollen housed at the University of Alberta and the Danish Natural History Museum, respectively. Pollen and pteridophyte spores were identified at least to family level and, more typically, to genera. Green algae coenobia of Pediastrum boryanum and Botryococcus were recorded to track changes in lake trophic status. Pollen influx values were calculated using pollen concentrations divided by the deposition rate (see Supplementary Information). Microfossil diagrams were produced and analysed using PSIMPOLL 4.10 (ref. 31). The sequences were zoned with CONIIC (ref. 31), with a stratigraphy-constrained clustering technique using the information statistic as a distance measure. All macrofossils were retrieved using a 100 μm mesh size and were identified but not quantified. Plant macrofossils identified as terrestrial taxa (or unidentifiable macrofossils with terrestrial characteristics where no preferable material could be identified) were selected for radiocarbon (14C) dating of the lacustrine sediment.
All macrofossils were subjected to a standard acid-base-acid (ABA) chemical pre-treatment at the Oxford Radiocarbon Accelerator Unit (ORAU), following a standard protocol (ref. 55), with appropriate ‘known age’ (that is, independently dendrochronologically-dated tree-ring) standards run alongside the unknown age plant macrofossil samples (ref. 56). Specifically, this ABA chemical pre-treatment (ORAU laboratory pre-treatment code ‘VV’) involved successive 1 M HCl (20 min, 80 °C), 0.2 M NaOH (20 min, 80 °C) and 1 M HCl (1 h, 80 °C) washes, with each stage followed by rinsing to neutrality (≥3 times) with ultrapure MilliQ deionised water. The three principal stages of this process (successive ABA washes) are similar across most radiocarbon laboratories and are, respectively, intended to remove: (i) sedimentary- and other carbonate contaminants; (ii) organic (principally humic- and fulvic-) acid contaminants; and (iii) any dissolved atmospheric CO2 that might have been absorbed during the preceding base wash. Thus, any potential secondary carbon contamination was removed, leaving the samples pure for combustion and graphitisation. Accelerator mass spectrometry (AMS) 14C dating was subsequently performed on the 2.5 MV HVEE tandem AMS system at ORAU (ref. 57). As is standard practice, measurements were corrected for natural isotopic fractionation by normalizing the data to a standard δ13C value of −25‰ VPDB, before reporting as conventional 14C ages before present (BP, before AD 1950) (ref. 58). These 14C data were calibrated with the IntCal13 calibration curve (ref. 59) and modelled using the Bayesian statistical software OxCal v. 4.2 (ref. 60). Poisson process (‘P_Sequence’) deposition models were applied to each of the Charlie and Spring Lake sediment profiles (ref. 61), with objective ‘Outlier’ analysis applied to each of the constituent 14C determinations (ref. 62). The P_Sequence model takes into account the complexity (randomness) of the underlying sedimentation process, and thus provides realistic age-depth models for the sediment profiles on the calibrated radiocarbon (IntCal) timescale. The rigidity of the P_Sequence (the regularity of the sedimentation rate) is determined iteratively within OxCal through a model averaging approach, based upon the likelihood (calibrated 14C) data included within the model (ref. 60). A prior ‘Outlier’ probability of 5% was applied to each of the 14C determinations, because there was no reason, a priori, to believe that any samples were more likely to be statistical outliers than others. All 14C determinations are provided in Extended Data Table 1; OxCal model coding is provided in the Supplementary Information; and plots of the age-depth models derived for Spring and Charlie Lakes are given in Extended Data Fig. 2. All DNA extractions and pre-PCR analyses were performed in the ancient DNA facilities of the Centre for GeoGenetics, Copenhagen. Total genomic DNA was extracted using a modified version of an organic extraction protocol (ref. 63). We used a lysis buffer containing 68 mM N-lauroylsarcosine sodium salt, 50 mM Tris-HCl (pH 8.0), 150 mM NaCl, and 20 mM EDTA (pH 8.0) and, immediately before extraction, 1.5 ml 2-mercaptoethanol and 1.0 ml 1 M DTT were added for each 30 ml lysis buffer. Approximately 2 g of sediment was added, and 3 ml of buffer, together with 170 μg of proteinase K, and vortexed vigorously for 2× 20 s using a FastPrep-24 at speed 4.0 m s−1. An additional 170 μg of proteinase K was added to each sample and incubated, gently rotating overnight at 37 °C.
For removal of inhibitors we used the MOBIO (MO BIO Laboratories, Carlsbad, CA) C2 and C3 buffers following the manufacturer’s protocol. The extracts were further purified using phenol-chloroform and concentrated using 30 kDa Amicon Ultra-4 centrifugal filters as described in the Andersen extraction protocol (ref. 63). Our extraction method was changed from this protocol with the following modifications: no lysis matrix was added due to the minerogenic nature of the samples, and the two phenol, one chloroform step was altered, thus both phenol:chloroform:supernatant were added simultaneously in the respective ratio 1:0.5:1, followed by gentle rotation at room temperature for 10 min and spun for 5 min at 3,200g. For dark-coloured extracts, this phenol:chloroform step was repeated. All extracts were quantified using the Quant-iT dsDNA HS assay kit (Invitrogen) on a Qubit 2.0 Fluorometer according to the manufacturer’s manual. The measured concentrations were used to calculate the total ng DNA extracted per g of sediment (Fig. 2). 32 samples were prepared for shotgun metagenome sequencing (ref. 64) using the NEBNext DNA Library Prep Master Mix Set for 454 (New England BioLabs) following the manufacturer’s protocol with the following modifications: (i) all reaction volumes (except for the end repair step) were decreased to half the size as in the protocol, and (ii) all purification steps were performed using the MinElute PCR Purification kit (Qiagen). Metagenome libraries were amplified using AmpliTaq Gold (Applied Biosystems), given 14–20 cycles, and quantified using the 2100 BioAnalyser chip (Agilent). All libraries were purified using Agencourt AMPure XP beads (BeckmanCoulter), quantified on the 2100 BioAnalyzer and pooled equimolarly. All pooled libraries were sequenced on an Illumina HiSeq 2500 platform and treated as single-end reads. Metagenomic reads were demultiplexed and trimmed using AdapterRemoval 1.5 (ref. 65) with a minimum base quality of 30 and minimum length of 30 bp (ref. 66). All reads with poly-A/T tails ≥ 4 were removed from each sample. Low-quality reads and duplicates were removed using String Graph Assembler (SGA) (ref. 67), setting the preprocessing tool dust-threshold = 1 and index algorithm = ‘ropebwt’, and using the SGA filter tool to remove exact and contained duplicates. Each quality-controlled (QC) read was thereafter allowed an equal chance to map to reference sequences using Bowtie2 version 2.2.4 (ref. 68) (end-to-end alignment and mode –k 50; reads were allowed a total of 500 hits before being parsed). A few reads with more than 500 matches were confirmed by checking that the best blast hit belonged to this taxon, and that alternative hits have lower e-values and alignment scores. We used the full nucleotide database (nt) from GenBank (accessed 4 March 2015), which due to size and downstream handling was divided into 9 consecutive equally sized databases and indexed using Bowtie2-build. All QC checked fastq files were aligned end-to-end using Bowtie2 default settings. Each alignment was merged using SAMtools (ref. 69), sorted according to read identifier and imported to MEGAN v. 10.5 (ref. 70). We performed a lowest common ancestor (LCA) analysis using the built-in algorithm in MEGAN and computed the taxonomic assignments employing the embedded NCBI taxonomic tree (March 2015 version) on reads having 100% matches to a reference sequence.
We call this pipeline ‘Holi’ because it takes a holistic approach: it has no a priori assumption about the environment, and each read is given an equal chance to align against the nt database containing the vast majority of organismal sequences (see Supplementary Information). In silico testing of ‘Holi’ sensitivity (see Supplementary Information) revealed 0.1% as a reliable minimum threshold for Viridiplantae taxa. For metazoan reads, which were found to be under-represented in our data, we set this threshold to 3 unique reads in one sample or 3 unique reads in three different samples from the same lake. In addition, we confirmed each read within the metazoans by checking that the best blast hit belonged to this taxon, and that alternative hits have lower e-values and alignment scores (ref. 71). We merged all sequences from all blanks and subtracted this from the total data set (instead of pairing for each extract and library build), using lowest taxonomic end nodes. Candidate detection was performed by decreasing the detection threshold in ‘Holi’ from 0.1% to 0.01% to increase the detection of contaminating plants, and similarly for metazoans, we decreased the detection level and subtracted all with 2 or more reads per taxon (see Supplementary Information). We performed a series of in silico tests to measure the sensitivity and specificity of our assignment method and to estimate the likelihood of false positives (see Supplementary Information). We generated 1,030,354,587 Illumina reads distributed across 32 sediment samples and used the dedicated computational pipeline (‘Holi’) for handling read de-multiplexing, adaptor trimming, quality control, and duplicate and low-complexity read removal (see Supplementary Information). The 257,890,573 reads passing these filters were further aligned against the whole non-redundant nucleotide (nt) sequence database (ref. 72). Hereafter, we used a lowest common ancestor approach (ref. 70) to recover taxonomic information from the 985,818 aligning reads. Plants represented by less than 0.1% of the total reads assigned were discarded to limit false positives resulting from database mis-annotations, PCR and sequencing errors (see Supplementary Information). Given the low number of reads assigned to multicellular, eukaryotic organisms (metazoans), we set a minimal threshold of 3 counts per sample or 1 count in each of three samples. For plants and metazoans this resulted in 511,504 and 2,596 reads assigned at the family or genus levels, respectively. The read counts were then normalized for generating plant and metazoan taxonomic profiles (Extended Data Figs 5 and 6). Taxonomic profiles for reads assigned to bacteria, archaea, fungi and alveolata were also produced (see Supplementary Information). We estimated the DNA damage levels using the MapDamage package 2.0 (ref. 40) for the most abundant organisms (Extended Data Fig. 7b). These represent distinctive sources, which help to account for potential differences between damage accumulated from source to deposition or during deposition. Input SAM files were generated for each sample using Bowtie2 (ref. 68) to align all QC reads from each sample against each reference genome. All aligning sequences were converted to BAM format, sorted and parsed through MapDamage by running the statistical estimation using only the 5′-ends (–forward) for single reads.
All frequencies of cytosine to thymine mutations per position from the 5′ ends were parsed and the standard deviation was calculated to generate DNA damage models for each lake (Extended Data Fig. 7a and Supplementary Information).
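As a rough illustration of the read-count thresholds described above (plant taxa kept only if they account for at least 0.1% of assigned plant reads; metazoan taxa kept with at least 3 reads in one sample or reads present in three samples from the same lake), here is a minimal Python sketch. The dictionary-based input format and the taxon names are hypothetical, and the text states the metazoan rule in two slightly different ways, so this implements one reading of it; the actual 'Holi' pipeline operates on alignment files rather than plain counts.

    # Minimal sketch of the taxon-filtering rules described above; the input
    # structures and taxon names are hypothetical, not the authors' code.

    def filter_plants(plant_counts, min_fraction=0.001):
        # Keep plant taxa representing at least 0.1% of all assigned plant reads.
        total = sum(plant_counts.values())
        if total == 0:
            return {}
        return {taxon: n for taxon, n in plant_counts.items()
                if n / total >= min_fraction}

    def filter_metazoans(metazoan_counts, single_sample_min=3, multi_sample_min=3):
        # Keep metazoan taxa with >= 3 reads in one sample, or with reads present
        # in at least three samples from the same lake (one reading of the rule).
        kept = {}
        for taxon, per_sample in metazoan_counts.items():
            if max(per_sample.values()) >= single_sample_min:
                kept[taxon] = per_sample
            elif sum(1 for n in per_sample.values() if n > 0) >= multi_sample_min:
                kept[taxon] = per_sample
        return kept

    # Toy usage with made-up numbers.
    plants = {"Salix": 9000, "Picea": 5, "Artemisia": 995}
    print(sorted(filter_plants(plants)))       # Picea falls below 0.1% and is dropped

    metazoans = {
        "Mammuthus": {"s1": 1, "s2": 1, "s3": 1},   # kept: present in three samples
        "Lepus":     {"s1": 2, "s2": 0, "s3": 0},   # dropped: two reads, one sample
    }
    print(sorted(filter_metazoans(metazoans)))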


Catuneanu O.,University of Alberta | Zecchin M.,National Institute of Oceanography and Applied Geophysics - OGS
Marine and Petroleum Geology | Year: 2013

Both allogenic and autogenic processes may contribute to the formation of sequence stratigraphic surfaces, particularly at the scale of fourth-order and lower rank cycles. This is the case with all surfaces that are associated with transgression, which include the maximum regressive surface, the transgressive ravinement surfaces and the maximum flooding surface, and, under particular circumstances, the subaerial unconformity as well. Not all autogenic processes play a role in the formation of sequence stratigraphic surfaces, but only those that can influence the direction of shoreline shift. Any changes in shoreline trajectory, whether autogenic or allogenic in origin, influence the stratal stacking patterns in the rock record that sequence stratigraphic interpretations are based upon. The discrimination between the allogenic and autogenic processes that may control changes in shoreline trajectory is a matter of interpretation and is tentative at best in many instances. For this reason, the definition and nomenclature of units and bounding surfaces need to be based on the observation of stratal features and stacking patterns rather than the interpretation of the controlling mechanisms. In this light, we extend the concept of 'sequence' to include all cycles bounded by recurring surfaces of sequence stratigraphic significance, irrespective of the origin of these surfaces. The updated sequence concept promotes a separation between the objective observation of field criteria and the subsequent interpretation of controlling parameters, and stresses that a sequence stratigraphic unit is defined by its bounding surfaces and not by its interpreted origin. The use of high-frequency sequences eliminates the need to employ the concepts of parasequence or small-scale cycle in high-resolution studies, and simplifies the sequence stratigraphic methodology and the nomenclature. © 2012 Elsevier Ltd.


Zecchin M.,National Institute of Oceanography and Applied Geophysics - OGS | Catuneanu O.,University of Alberta
Marine and Petroleum Geology | Year: 2013

High-resolution sequence stratigraphy tackles scales of observation that typically fall below the resolution of seismic exploration methods, commonly referred to as 4th-order or lower rank. Outcrop- and core-based studies are aimed at recognizing features at these scales, and represent the basis for high-resolution sequence stratigraphy. Such studies adopt the most practical ways to subdivide the stratigraphic record, and take into account stratigraphic surfaces with physical attributes that may only be detectable at outcrop scale. The resolution offered by exposed strata typically allows the identification of a wider array of surfaces as compared to those recognizable at the seismic scale, which permits an accurate and more detailed description of cyclic successions in the rock record. These surfaces can be classified as 'sequence stratigraphic', if they serve as systems tract boundaries, or as facies contacts, if they develop within systems tracts. Both sequence stratigraphic surfaces and facies contacts are important in high-resolution studies; however, the workflow of sequence stratigraphic analysis requires the identification of sequence stratigraphic surfaces first, followed by the placement of facies contacts within the framework of systems tracts and bounding sequence stratigraphic surfaces. Several types of stratigraphic units may be defined, from architectural units bounded by the two nearest non-cryptic stratigraphic surfaces to systems tracts and sequences. The need for other types of stratigraphic units in high-resolution studies, such as parasequences and small-scale cycles, may be replaced by the usage of high-frequency sequences. The sequence boundaries that may be employed in high-resolution sequence stratigraphy are represented by the same types of surfaces that are used traditionally in larger scale studies, but at a correspondingly lower hierarchical level. © 2012 Elsevier Ltd.


Wu Y.,Tsinghua University | Zhao R.C.H.,Peking Union Medical College | Tredget E.E.,University of Alberta
Stem Cells | Year: 2010

Our understanding of the role of bone marrow (BM)-derived cells in cutaneous homeostasis and wound healing had long been limited to the contribution of inflammatory cells. Recent studies, however, suggest that the BM contributes a significant proportion of noninflammatory cells to the skin, which are present primarily in the dermis in fibroblast-like morphology and in the epidermis in a keratinocyte phenotype; and the number of these BM-derived cells increases markedly after wounding. More recently, several studies indicate that mesenchymal stem cells derived from the BM could significantly impact wound healing in diabetic and nondiabetic animals, through cell differentiation and the release of paracrine factors, implying a profound therapeutic potential. This review discusses the most recent understanding of the contribution of BM-derived noninflammatory cells to cutaneous homeostasis and wound healing. © AlphaMed Press.


Abbaszadeh M.,UTRC - United Technologies Research Center | Marquez H.J.,University of Alberta
Automatica | Year: 2012

In this paper, a generalized robust nonlinear H∞ filtering method is proposed for a class of Lipschitz descriptor systems, in which the nonlinearities appear both in the state and measured output equations. The system is assumed to have norm-bounded uncertainties in the realization matrices as well as nonlinear uncertainties. We synthesize the H∞ filter through semidefinite programming and strict LMIs. The admissible Lipschitz constants of the nonlinear functions are maximized through LMI optimization. The resulting H∞ filter guarantees asymptotic stability of the estimation error dynamics with a prespecified disturbance attenuation level and is robust against time-varying parametric uncertainties as well as Lipschitz nonlinear additive uncertainty. An explicit bound on the tolerable nonlinear uncertainty is derived based on a norm-wise robustness analysis. © 2012 Elsevier Ltd. All rights reserved.
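For readers less familiar with this setting, the LaTeX sketch below states a generic Lipschitz descriptor system and the H∞ attenuation objective. The notation is illustrative only and is not taken from the paper, whose exact system class and performance index may differ.

    % Generic sketch, not the paper's exact formulation: a descriptor system
    % (E possibly singular) with Lipschitz nonlinearities and disturbance w.
    \begin{align*}
      E\dot{x} &= A x + \Phi(x,u) + B w, &
      y &= C x + \Psi(x,u) + D w,\\
      \|\Phi(x_1,u)-\Phi(x_2,u)\| &\le \gamma_{1}\,\|x_1-x_2\|, &
      \|\Psi(x_1,u)-\Psi(x_2,u)\| &\le \gamma_{2}\,\|x_1-x_2\|.
    \end{align*}
    % H-infinity filtering objective: the error on the estimated output z must
    % satisfy a prescribed attenuation level mu for all nonzero disturbances.
    \begin{equation*}
      \sup_{\|w\|_{2}\neq 0}\;\frac{\|z-\hat{z}\|_{2}}{\|w\|_{2}} \;<\; \mu .
    \end{equation*}

In this picture, the LMI optimization mentioned in the abstract searches for filter matrices that certify the attenuation bound while making the admissible Lipschitz constants as large as possible.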


Fearon K.,University of Edinburgh | Arends J.,Albert Ludwigs University of Freiburg | Baracos V.,University of Alberta
Nature Reviews Clinical Oncology | Year: 2013

Cancer cachexia is a metabolic syndrome that can be present even in the absence of weight loss ('precachexia'). Cachexia is often compounded by pre-existing muscle loss, and is exacerbated by cancer therapy. Furthermore, cachexia is frequently obscured by obesity, leading to under-diagnosis and excess mortality. Muscle wasting (the signal event in cachexia) is associated not only with reduced quality of life, but also markedly increased toxicity from chemotherapy. Many of the primary events driving cachexia are likely mediated via the central nervous system and include inflammation-related anorexia and hypoanabolism or hypercatabolism. Treatment of cachexia should be initiated early. In addition to active management of secondary causes of anorexia (such as pain and nausea), therapy should target reduced food intake (nutritional support), inflammation-related metabolic change (anti-inflammatory drugs or nutrients) and reduced physical activity (resistance exercise). Advances in the understanding of the molecular biology of the brain, immune system and skeletal muscle have provided novel targets for the treatment of cachexia. The combination of therapies into a standard multimodal package coupled with the development of novel therapeutics promises a new era in supportive oncology whereby quality of life and tolerance to cancer therapy could be improved considerably.


Hayduk L.A.,University of Alberta | Littvay L.,Central European University
BMC Medical Research Methodology | Year: 2012

Background: Structural equation modeling developed as a statistical melding of path analysis and factor analysis that obscured a fundamental tension between a factor preference for multiple indicators and path modeling's openness to fewer indicators. Discussion: Multiple indicators hamper theory by unnecessarily restricting the number of modeled latents. Using the few best indicators - possibly even the single best indicator of each latent - encourages development of theoretically sophisticated models. Additional latent variables permit stronger statistical control of potential confounders, and encourage detailed investigation of mediating causal mechanisms. Summary: We recommend the use of the few best indicators. One or two indicators are often sufficient, but three indicators may occasionally be helpful. More than three indicators are rarely warranted because additional redundant indicators provide less research benefit than single indicators of additional latent variables. Scales created from multiple indicators can introduce additional problems, and are prone to being less desirable than either single or multiple indicators. © 2012 Hayduk and Littvay; licensee BioMed Central Ltd.
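As a concrete illustration of the single-indicator case discussed above, one standard device (sketched here in LaTeX for illustration, not quoted from the paper) is to fix the indicator's loading and set its error variance from an assumed reliability, which identifies the latent variable with only one observed measure.

    % Single-indicator measurement model: loading fixed at 1, error variance
    % fixed from an assumed reliability rho (an analyst-supplied value).
    \begin{align*}
      y &= \lambda\,\eta + \varepsilon, \qquad \lambda = 1,\\
      \operatorname{Var}(\varepsilon) &= (1-\rho)\,\operatorname{Var}(y).
    \end{align*}

The cost of this device is that the reliability must be supplied by the analyst rather than estimated from redundant indicators, which is the trade-off implicit in recommending the few best indicators.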


News Article | December 14, 2016
Site: www.sciencenews.org

In a hotel ballroom in Seoul, South Korea, early in 2016, a centuries-old strategy game offered a glimpse into the fantastic future of computing. The computer program AlphaGo bested a world champion player at the Chinese board game Go, four games to one (SN Online: 3/15/16). The victory shocked Go players and computer gurus alike. “It happened much faster than people expected,” says Stuart Russell, a computer scientist at the University of California, Berkeley. “A year before the match, people were saying that it would take another 10 years for us to reach this point.” The match was a powerful demonstration of the potential of computers that can learn from experience. Elements of artificial intelligence are already a reality, from medical diagnostics to self-driving cars (SN Online: 6/23/16), and computer programs can even find the fastest routes through the London Underground. “We don’t know what the limits are,” Russell says. “I’d say there’s at least a decade of work just finding out the things we can do with this technology.” AlphaGo’s design mimics the way human brains tackle problems and allows the program to fine-tune itself based on new experiences. The system was trained using 30 million positions from 160,000 games of Go played by human experts. AlphaGo’s creators at Google DeepMind honed the software even further by having it play games against slightly altered versions of itself, a sort of digital “survival of the fittest.” These learning experiences allowed AlphaGo to more efficiently sweat over its next move. Programs aimed at simpler games play out every single hypothetical game that could result from each available choice in a branching pattern — a brute-force approach to computing. But this technique becomes impractical for more complex games such as chess, so many chess-playing programs sample only a smaller subset of possible outcomes. That was true of Deep Blue, the computer that beat chess master Garry Kasparov in 1997. But Go offers players many more choices than chess does. A full-sized Go board includes 361 playing spaces (compared with chess’ 64), often has various “battles” taking place across the board simultaneously and can last for more moves. AlphaGo overcomes Go’s sheer complexity by drawing on its own developing knowledge to choose which moves to evaluate. This intelligent selection led to the program’s surprising triumph, says computer scientist Jonathan Schaeffer of the University of Alberta in Canada. “A lot of people have put enormous effort into making small, incremental progress,” says Schaeffer, who led the development of the first computer program to achieve perfect play of checkers. “Then the AlphaGo team came along and, incremental progress be damned, made this giant leap forward.” Real-world problems have complexities far exceeding those of chess or Go, but the winning strategies demonstrated in 2016 could be game changers.
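To make the article's contrast between brute-force enumeration and learned move selection concrete, here is a minimal Python sketch. It is a toy illustration only: the tiny counting game and the hand-written policy_score and value heuristics are invented stand-ins for the learned networks the article describes, not AlphaGo's actual components.

# Toy contrast between exhaustive lookahead and policy-guided selective search.
# The "policy" and "value" below are simple heuristics standing in for learned networks.

def legal_moves(state):
    # In this toy game, a state is a running total and a move adds 1-5 to it.
    return [1, 2, 3, 4, 5]

def apply_move(state, move):
    return state + move

def policy_score(state, move):
    # Stand-in for a policy network: prefer moves that push the total toward 21.
    return -abs(21 - (state + move))

def value(state):
    # Stand-in for a value estimate at the search horizon.
    return -abs(21 - state)

def search(state, depth, top_k=None):
    # With top_k=None every legal move is expanded (brute force); with a small
    # top_k only the moves ranked best by the policy are expanded.
    if depth == 0:
        return value(state)
    moves = legal_moves(state)
    if top_k is not None:
        moves = sorted(moves, key=lambda m: policy_score(state, m), reverse=True)[:top_k]
    return max(search(apply_move(state, m), depth - 1, top_k) for m in moves)

if __name__ == "__main__":
    print("exhaustive search   :", search(0, depth=4))
    print("policy-guided, top 2:", search(0, depth=4, top_k=2))

With four moves of lookahead the exhaustive search evaluates 5^4 = 625 leaf positions while the top-2 guided search evaluates only 2^4 = 16, and both reach the same answer in this toy; scaling that kind of pruning up to Go's hundreds of legal moves per turn is the essence of why learned move selection matters.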


News Article | December 1, 2016
Site: www.marketwired.com

GRANDE PRAIRIE, AB--(Marketwired - December 01, 2016) - ANGKOR GOLD CORP. (TSX VENTURE: ANK) ( : ANKOF) ("Angkor" or "the Company") announced today the resignation of Mr. Aaron Triplett as Angkor's CFO to pursue other opportunities, and the appointment of Mr. Terry Mereniuk, B.Comm., CPA, CMC, as interim CFO. Mr. Mereniuk is currently a director of Angkor. Mr. Mereniuk has been a Director and CFO of several public and private companies. Prior to that, he owned and operated his own accounting firm. Terry obtained a Bachelor of Commerce (with distinction) from the University of Alberta in 1981. He has been a Certified Management Consultant since June 1988 and a Chartered Professional Accountant since December 1983. The Company welcomes Mr. Mereniuk to his new role and extends its thanks to Mr. Triplett for his service. His contributions to the Company have been greatly appreciated. ANGKOR Gold Corp. is a public company listed on the TSX-Venture Exchange and is Cambodia's premier mineral explorer with a large land package and a first-mover advantage building strong relationships with all levels of government and stakeholders. ANGKOR's six exploration licenses in the Kingdom of Cambodia cover 1,352 km2, which the company has been actively exploring over the past 6 years. The company has now covered all tenements with stream sediment geochemical sampling and has flown low-level aeromagnetic surveys over most of the ground. Angkor has diamond drilled 21,855 metres in 190 holes, augered 2,643 metres over 728 holes, collected over 165,000 termite mound samples and 'B' and 'C' zone soil samples in over 20 centres of interest over a combined area of more than 140 km2, in addition to numerous trenches, IP surveys and detailed geological field mapping. Exploration on all tenements is ongoing. Website at: http://www.angkorgold.ca or follow us @AngkorGold for all the latest updates. Neither TSX Venture Exchange nor its Regulation Services Provider (as that term is defined in the policies of the TSX Venture Exchange) accepts responsibility for the adequacy or accuracy of this release. Certain of the statements made and information contained herein may constitute "forward-looking information". In particular, references to the private placement and future work programs or expectations on the quality or results of such work programs are subject to risks associated with operations on the property, exploration activity generally, equipment limitations and availability, as well as other risks that we may not be currently aware of. Accordingly, readers are advised not to place undue reliance on forward-looking information. Except as required under applicable securities legislation, the Company undertakes no obligation to publicly update or revise forward-looking information, whether as a result of new information, future events or otherwise.


News Article | June 8, 2016
Site: www.techtimes.com

Could the bison provide clues to the mystery of ancient American settlement? Bones of giant steppe bison and traces of their ice-age hunters have led researchers to conclude that early humans likely colonized North America southward from Alaska along the Pacific coast – not through the Rocky Mountains as previously thought. But when and how this happened remains a mystery. Not Through The Rocky Mountains Corridor? The first ancient people in America are thought to have arrived from Siberia using an ice-free corridor along the Rocky Mountains during the late Pleistocene era. It remains uncertain when the crossing opened and how people spread across the rest of America. The traditional assumption is that people swept into the continent in a single wave 13,500 years ago, but contradicting evidence suggests that human societies had settled far to the east by about 14,500 years ago and far to the south more than 15,000 years ago. More recent evidence also shows that the Rocky Mountain corridor remained open only until about 21,000 years ago, when the eastern and western ice sheets coalesced at the height of the last ice age and completely separated populations. Now, using radiocarbon dating and DNA analysis, researchers from the University of California, Santa Cruz followed the ancient hunters by tracking bison movements. Studying 78 bison fossils, they identified two distinct populations to the north and south, and traced when the animals migrated and interbred. They discovered that southern bison moved in first as the southern part of the corridor opened, followed by the northern bison; the two populations began to mingle in the open pass approximately 13,000 years ago. What this means: the mountain corridor likely cleared of ice more than a thousand years after humans had already colonized the south – a suggestion that early humans first inhabited the Americas along the Pacific coast. "When the corridor opened, people were already living south of there," said study author and ecology and evolutionary biology professor Beth Shapiro. "And because those people were bison hunters, we can assume they would have followed the bison as they moved north into the corridor." First author and postdoctoral researcher Peter Heintzman said that given these results, one would be hard-pressed to think otherwise. "Fourteen to 15,000 years ago, there's still a hell of a lot of ice around everywhere," he told the Guardian. "And if that wasn't opened up you'd have to go around the ice, and going the coastal route is the simplest explanation." The Rocky Mountain corridor, however, remains important for its role in later migrations and the exchange of ideas between people to the north and south, Heintzman added. Heintzman pointed to tidal erosion as the reason so little archaeological evidence survives along the Pacific coast to vouch for its use by ancient people migrating south. In the north, on the other hand, site dating is improving, but only a handful of sites have been found along the Bering Strait land bridge. Here the bison fossils come in: bison are deemed the most numerous fossil mammals of their kind in western North America. These animals, unlike most other large mammals such as sloths and dire wolves, also survived mass extinction events. The over-six-foot-tall steppe bison of this period were much more massive than their living counterparts, according to author Duane Froese from the University of Alberta in Canada. Modern bison descended from these giants, Heintzman said, although they reside south of the range of their ancestors.
Many of the fossil samples came from the Royal Alberta Museum and other institutions' collections. They were revealed through mining operations and later made available for scientific research. The findings were published on June 6 in the journal Proceedings of the National Academy of Sciences.


News Article | December 8, 2016
Site: motherboard.vice.com

For Lida Xing, a paleontologist based at the China University of Geosciences, scientific progress occasionally calls for some light espionage. This kind of situation arose last year, after he made an astonishing discovery at an amber market in Myitkyina, Myanmar. Suspended within a snowglobe-sized chunk of fossilized tree resin, Xing recognized the partial remains of an exquisitely preserved feathered tail belonging to a small juvenile coelurosaur, a type of bird-like dinosaur. The fossil dates 99 million years back to the middle Cretaceous period, when temperatures were warm, sea levels were high, and dinosaurs walked among the earliest flowering plants. Its discovery at the market was a stroke of "great luck," Xing told me over email. "I often visit amber markets," he said. "But this is the only dinosaur amber I've ever seen." So, where does the paleontological reconnaissance come in? Burmese amber markets happen to be fed by amber mines in Hukawng Valley, located in the Kachin State of northern Myanmar. This region is currently under the control of the insurgent Kachin Independence Army, which has a long history of conflict with the Burmese government. "Resellers buy scraps from amber miners and sell them on the markets," Xing explained. "The mines are extremely dangerous, so foreigners can hardly get there." Xing decided to go undercover. "I disguised myself as a Burmese man with a face painted with Thanaka," he told me. (Thanaka is a popular cosmetic paste in Myanmar, yellowish-white in color, made from finely ground tree bark.) Stealthily camouflaged and armed with a fake ID, Xing snuck into the region. He met the prospector responsible for excavating the dinosaur tail, who guided Xing through the mines and showed him new geological samples. "We are very lucky," he said of the escapade. As for the dinosaur tail itself, Xing persuaded the Dexu Institute of Palaeontology in Chaozhou to purchase it, and has since headed up an international team of researchers to analyse the fossil with computerized tomography (CT) scanning techniques. The results, published Thursday in Current Biology, shed light on the evolution of feathers and reveal intimate details about this particular coelurosaur, including its coloration pattern, skeletal features, and even the hemoglobin molecules that ran through its blood, which left traces of iron oxide within the tail. Xing and his co-authors, including paleontologists Ryan McKellar of the Royal Saskatchewan Museum and Philip J. Currie of the University of Alberta, were able to identify the animal as a coelurosaur by its flexible vertebral structure, which distinguishes it from the fused rod-like spines of avian dinosaurs that would have sported similar feathered plumages. The feather coloration pattern suggests that the young dinosaur had chestnut-brown dorsal feathers, while its underbelly was pale. Xing told me that the brown feathers may have acted as "protective coloration," helping the coelurosaur blend into the woodland environments in which it is presumed to have lived. Small coelurosaurs like this one would likely have scuttled along the ground hunting insects in tropical forests, probably populated by trees similar to the kauri trees still found in New Zealand. That said, there is still a lot to learn about Myanmar's rich paleontological history. "The environment of the middle Cretaceous from northern Myanmar does not appear to be formally studied," Xing said. 
This gap in paleontological knowledge is caused both by the remote location of amber mines and fossil beds in the region and by the longstanding social and political unrest that makes much of Myanmar's north off-limits to outsiders. The fact that this gorgeous snapshot of the Cretaceous world wound up in a Burmese amber market seems like a great incentive for more paleontologists to scout out local vendors. Indeed, Xing and McKellar previously teamed up on a June 2016 study about an amber specimen containing spectacular Cretaceous bird wings, which was also sourced from these markets. Hopefully, these recent discoveries will spark efforts to collect more of these astonishing amber-encased time capsules, even if it requires top secret dinosaurian espionage. These fossils may not bring dinosaurs back to life, as in Jurassic Park, but they still offer valuable and vivid tableaus of long-dead ecosystems.


Semiconducting inorganic double helix: New flexible semiconductor for electronics, solar technology and photocatalysis Abstract: It is the double helix, with its stable and flexible structure of genetic information, that made life on Earth possible in the first place. Now a team from the Technical University of Munich (TUM) has discovered a double helix structure in an inorganic material. The material comprising tin, iodine and phosphorus is a semiconductor with extraordinary optical and electronic properties, as well as extreme mechanical flexibility. Flexible yet robust - this is one reason why nature codes genetic information in the form of a double helix. Scientists at TU Munich have now discovered an inorganic substance whose elements are arranged in the form of a double helix. The substance called SnIP, comprising the elements tin (Sn), iodine (I) and phosphorus (P), is a semiconductor. However, unlike conventional inorganic semiconducting materials, it is highly flexible. The centimeter-long fibers can be arbitrarily bent without breaking. "This property of SnIP is clearly attributable to the double helix," says Daniela Pfister, who discovered the material and works as a researcher in the work group of Tom Nilges, Professor for Synthesis and Characterization of Innovative Materials at TU Munich. "SnIP can be easily produced on a gram scale and is, unlike gallium arsenide, which has similar electronic characteristics, far less toxic." Countless application possibilities: The semiconducting properties of SnIP promise a wide range of application opportunities, from energy conversion in solar cells and thermoelectric elements to photocatalysts, sensors and optoelectronic elements. By doping with other elements, the electronic characteristics of the new material can be adapted to a wide range of applications. Due to the arrangement of atoms in the form of a double helix, the fibers, which are up to a centimeter in length, can be easily split into thinner strands. The thinnest fibers to date comprise only five double helix strands and are only a few nanometers thick. That opens the door also to nanoelectronic applications. "Especially the combination of interesting semiconductor properties and mechanical flexibility gives us great optimism regarding possible applications," says Professor Nilges. "Compared to organic solar cells, we hope to achieve significantly higher stability from the inorganic materials. For example, SnIP remains stable up to around 500°C (930 °F)." Just at the beginning: "Similar to carbon, where we have the three-dimensional (3D) diamond, the two dimensional graphene and the one dimensional nanotubes," explains Professor Nilges, "we here have, alongside the 3D semiconducting material silicon and the 2D material phosphorene, for the first time a one dimensional material - with perspectives that are every bit as exciting as carbon nanotubes." Just as with carbon nanotubes and polymer-based printing inks, SnIP double helices can be suspended in solvents like toluene. In this way, thin layers can be produced easily and cost-effectively. "But we are only at the very beginning of the materials development stage," says Daniela Pfister. "Every single process step still needs to be worked out." Since the double helix strands of SnIP come in left and right-handed variants, materials that comprise only one of the two should display special optical characteristics. This makes them highly interesting for optoelectronics applications. 
But so far, there is no technology available for separating the two variants. Theoretical calculations by the researchers have shown that a whole range of further elements should form these kinds of inorganic double helices. Extensive patent protection is pending. The researchers are now working intensively on finding suitable production processes for further materials. Interdisciplinary cooperation: An extensive interdisciplinary alliance is working on the characterization of the new material: Photoluminescence and conductivity measurements have been carried out at the Walter Schottky Institute of the TU Munich. Theoretical chemists from the University of Augsburg collaborated on the theoretical calculations. Researchers from the University of Kiel and the Max Planck Institute of Solid State Research in Stuttgart performed transmission electron microscope investigations. Mössbauer spectra and magnetic properties were measured at the University of Augsburg, while researchers of TU Cottbus contributed thermodynamics measurements. The research was funded by the DFG (SPP 1415), the international graduate school ATUMS (TU Munich and the University of Alberta, Canada) and the TUM Graduate School.


News Article | August 6, 2013
Site: www.theguardian.com

A starved polar bear found dead in Svalbard as "little more than skin and bones" perished due to a lack of sea ice on which to hunt seals, according to a renowned polar bear expert. Climate change has reduced sea ice in the Arctic to record lows in the last year and Dr Ian Stirling, who has studied the bears for almost 40 years and examined the animal, said the lack of ice forced the bear into ranging far and wide in an ultimately unsuccessful search for food. "From his lying position in death the bear appears to simply have starved and died where he dropped," Stirling said. "He had no external suggestion of any remaining fat, having been reduced to little more than skin and bone." The bear had been examined by scientists from the Norwegian Polar Institute in April in the southern part of Svalbard, an Arctic island archipelago, and appeared healthy. The same bear had been captured in the same area in previous years, suggesting that the discovery of its body, 250km away in northern Svalbard in July, represented an unusual movement away from its normal range. The bear probably followed the fjords inland as it trekked north, meaning it may have walked double or treble that distance. Polar bears feed almost exclusively on seals and need sea ice to capture their prey. But 2012 saw the lowest level of sea ice in the Arctic on record. Prond Robertson, at the Norwegian Meteorological Institute, said: "The sea ice break up around Svalbard in 2013 was both fast and very early." He said recent years had been poor for ice around the islands: "Warm water entered the western fjords in 2005-06 and since then has not shifted." Stirling, now at Polar Bears International and previously at the University of Alberta and the Canadian Wildlife Service, said: "Most of the fjords and inter-island channels in Svalbard did not freeze normally last winter and so many potential areas known to that bear for hunting seals in spring do not appear to have been as productive as in a normal winter. As a result the bear likely went looking for food in another area but appears to have been unsuccessful." Research published in May showed that loss of sea ice was harming the health, breeding success and population size of the polar bears of Hudson Bay, Canada, as they spent longer on land waiting for the sea to refreeze. Other work has shown polar bear weights are declining. In February a panel of polar bear experts published a paper stating that rapid ice loss meant options such as the feeding of starving bears by humans needed to be considered to protect the 20,000-25,000 animals thought to remain. The International Union for the Conservation of Nature, the world's largest professional conservation network, states that of the 19 populations of polar bear around the Arctic, data is available for 12. Of those, eight are declining, three are stable and one is increasing. The IUCN predicts that increasing ice loss will mean between one-third and a half of polar bears will be lost in the next three generations, about 45 years. But the US and Russian governments said in March that faster-than-expected ice losses could mean two-thirds are lost. Attributing a single incident to climate change can be controversial, but Douglas Richardson, head of living collections at the Highland Wildlife Park near Kingussie, said: "It's not just one bear though. There are an increasing number of bears in this condition: they are just not putting down enough fat to survive their summer fast. 
This particular polar bear is the latest bit of evidence of the impact of climate change." Ice loss due to climate change is "absolutely, categorically and without question" the cause of falling polar bear populations, said Richardson, who cares for the UK's only publicly kept polar bears. He said 16 years was not particularly old for a wild male polar bear, which usually live into their early 20s. "There may have been some underlying disease, but I would be surprised if this was anything other than starvation," he said. "Once polar bears reach adulthood they are normally nigh on indestructible, they are hard as nails." Jeff Flocken, at the International Fund for Animal Welfare, said: "While it is difficult to ascribe a single death or act to climate change it couldn't be clearer that drastic and long-term changes in their Arctic habitat threaten the survival of the polar bear. The threat of habitat loss from climate change, exacerbated by unsustainable killing for commercial trade in Canada, could lead to the demise of one of the world's most iconic animals, and this would be a true tragedy."


News Article | October 31, 2016
Site: www.sciencemag.org

UPDATE: The fossil Tetrapodophis amplectus will return to the Bürgermeister-Müller-Museum in Solnhofen, Germany, later this month, sources say. The fossil’s owner had temporarily removed it because of damage it had sustained during CT scanning at the European Synchrotron Radiation Facility (ESRF) in Grenoble, France. “Two very important bones of the holotype were partially damaged,” says Martin Röper, director of the museum. After investigating the extent of the damage, the owner agreed to return it to the museum—but scientists will now only be able to study it in-house, Röper says. The good news, says Paul Tafforeau of ESRF, is that the facility has improved its imaging protocols for flat fossils, so that “it can never happen again.” Here is our original story: SALT LAKE CITY—It is a tiny, fragile thing: a squashed skull barely a centimeter in length; a sinuous curving body about two fingers long; four delicate limbs with grasping hands. In a major paper last year, researchers called this rare fossil from more than 100 million years ago the first known four-legged snake. But at a meeting of the Society of Vertebrate Paleontology (SVP) here last week, another team suggested that it’s a marine lizard instead. Even as scientists debate the identity of this controversial specimen, the only one of its kind, it appears to be inaccessible for further study. And paleontologists are mad as hell. “It’s horrifying,” says Jacques Gauthier, a paleontologist at Yale University. As far as he’s concerned, if the fossil can’t be studied, it doesn’t exist. “For me, the take-home message is that I don’t want to mention the name Tetrapodophis ever again.” A year ago, researchers led by David Martill of the University of Portsmouth in the United Kingdom reported in Science that the fossil, which they named Tetrapodophis amplectus (for four-footed snake), was a missing link in the snake evolutionary tree. Researchers knew snakes had evolved from four-limbed reptiles, but few transitional forms had been discovered, and researchers continue to wrangle over whether the first lizards to lose their limbs and become snakes were terrestrial burrowers or aquatic swimmers. Martill and colleagues reported that the fossil, which they described as a specimen in a German museum, originated from a Brazilian outcrop of the Crato Formation, a 108-million-year-old limestone layer rich in both marine and terrestrial species. They identified snakelike features in the fossil, including a long body consisting of more than 150 vertebrae, a relatively short tail of 112 vertebrae, hooked teeth, and scales on its belly. Those features, they say, support the hypothesis that snakes evolved from burrowing ancestors. But many paleontologists weren’t convinced. Last week at the annual SVP meeting here, vertebrate paleontologist Michael Caldwell of the University of Alberta in Edmonton, Canada, and colleagues presented their own observations of the specimen, rebutting Martill’s paper point by point to a standing-room-only crowd. The new analysis hinges on the “counterpart” to the original fossil, which was also housed in the Bürgermeister-Müller Museum in Solnhofen, Germany. When the slab of rock containing the fossil was cracked open, the body of the organism stayed mostly in one half of the slab, whereas the skull was mostly in the other half, paired with a mold or impression of the body. This counterpart slab, Caldwell says, preserved clearer details of the skull in particular. 
In his group’s analysis of the counterpart, he says, “every single character that was identified in the original manuscript as being diagnostic of a snake was either not the case or not observable.” For example, in snake skulls, a bone called the quadrate is elongated, which allows snakes to open their jaws very wide. This fossil’s quadrate bone is more C-shaped, and it surrounds the animal’s hearing apparatus—a “characteristic feature” of a group of lizards called squamates, says co-author Robert Reisz, a vertebrate paleontologist at the University of Toronto in Mississauga, Canada. He and Caldwell add that although the fossil has more vertebrae in its body than in its tail, the tail isn’t short, but longer than that of many living lizards. They are working on a paper arguing that the fossil is probably a dolichosaur, an extinct genus of marine lizard. Martill and co-author Nicholas Longrich of the University of Bath in the United Kingdom, neither of whom was at the meeting, stand strongly by their original analysis. Longrich cites all the snakelike features discussed in the original paper. “In virtually every single respect [it] looks like a snake, except for one little detail—it has arms and legs,” he told Science by email. Many researchers who attended the talk, including Gauthier and paleontologist Jason Head of the University of Cambridge in the United Kingdom, are persuaded that Tetrapodophis is not a snake. But as for what it is, there may be as many opinions as there are paleontologists. Hong-yu Yi of the Institute of Vertebrate Paleontology and Paleoanthropology in Beijing, China, says she’s reserving judgment on the specimen’s identity until further analysis. “I was always waiting for a longer description of the specimen. I’m still waiting,” she says. That analysis may never happen. Caldwell says he went back to the Bürgermeister-Müller Museum several months ago to study the specimen again; separately, Head says he also attempted to study the fossil. Neither could get access to it. Caldwell says that the fossil wasn’t actually part of the museum’s collection, but was on loan from a private owner. Researchers who declined to be named because of ongoing discussions around the fossil say that it may have been damaged during study, prompting the collector to restrict access to it. “I don’t even know if a publication at this moment is appropriate because no one else will be able to access this specimen,” Yi says. In fact, some researchers say the original paper should not have been published, because the fossil was not officially deposited in a museum or other repository, so the authors couldn’t guarantee that future researchers could access it. “I have nothing against” private fossil collecting, Gauthier says. But when a fossil enters the scientific literature, he says, “then it has to be available. Science requires repeatability.” In response, Science deputy editor Andrew Sugden says, “Our understanding at the time of publication and in subsequent correspondence was that the specimen was accessible at the museum, as stated at the end of the paper.” Researchers had already raised other questions about the fossil’s transport out of Brazil. Brazil passed laws in the 1940s making all fossils property of the state rather than private owners. “Most of the exploration of the limestone quarries in that region of the country began in the second half of the 20th century,” says Tiago Simões, a paleontologist at the University of Alberta, who was also an author on the SVP talk. 
“So the vast majority” of fossils from those areas were collected after the law had passed. “That really touches on some very sensitive ethical boundaries.” Head agrees. “The best way to move forward is to literally erase the specimen from our research program. Tetrapodophis is no longer science. … It’s not repeatable, it’s not testable. If any good can come out of Tetrapodophis, it’s the recognition that we have got to maintain scientific standards when it comes to fossils … they have to be accessible."


News Article | September 12, 2016
Site: www.cemag.us

Flexible yet robust — this is one reason why nature codes genetic information in the form of a double helix. Scientists at TU Munich have now discovered an inorganic substance whose elements are arranged in the form of a double helix. The substance called SnIP, comprising the elements tin (Sn), iodine (I) and phosphorus (P), is a semiconductor. However, unlike conventional inorganic semiconducting materials, it is highly flexible. The centimeter-long fibers can be arbitrarily bent without breaking. "This property of SnIP is clearly attributable to the double helix," says Daniela Pfister, who discovered the material and works as a researcher in the work group of Tom Nilges, Professor for Synthesis and Characterization of Innovative Materials at TU Munich. "SnIP can be easily produced on a gram scale and is, unlike gallium arsenide, which has similar electronic characteristics, far less toxic." The semiconducting properties of SnIP promise a wide range of application opportunities, from energy conversion in solar cells and thermoelectric elements to photocatalysts, sensors and optoelectronic elements. By doping with other elements, the electronic characteristics of the new material can be adapted to a wide range of applications. Due to the arrangement of atoms in the form of a double helix, the fibers, which are up to a centimeter in length can be easily split into thinner strands. The thinnest fibers to date comprise only five double helix strands and are only a few nanometers thick. That opens the door also to nanoelectronic applications. "Especially the combination of interesting semiconductor properties and mechanical flexibility gives us great optimism regarding possible applications," says Nilges. "Compared to organic solar cells, we hope to achieve significantly higher stability from the inorganic materials. For example, SnIP remains stable up to around 500 C (930 F)." "Similar to carbon, where we have the three-dimensional (3D) diamond, the two dimensional graphene and the one dimensional nanotubes," says Nilges, "we here have, alongside the 3D semiconducting material silicon and the 2D material phosphorene, for the first time a one dimensional material — with perspectives that are every bit as exciting as carbon nanotubes." Just as with carbon nanotubes and polymer-based printing inks, SnIP double helices can be suspended in solvents like toluene. In this way, thin layers can be produced easily and cost-effectively. "But we are only at the very beginning of the materials development stage," says Pfister. "Every single process step still needs to be worked out." Since the double helix strands of SnIP come in left and right-handed variants, materials that comprise only one of the two should display special optical characteristics. This makes them highly interesting for optoelectronics applications. But, so far there is no technology available for separating the two variants. Theoretical calculations by the researchers have shown that a whole range of further elements should form these kinds of inorganic double helices. Extensive patent protection is pending. The researchers are now working intensively on finding suitable production processes for further materials. An extensive interdisciplinary alliance is working on the characterization of the new material: Photoluminescence and conductivity measurements have been carried out at the Walter Schottky Institute of the TU Munich. Theoretical chemists from the University of Augsburg collaborated on the theoretical calculations. 
Researchers from the University of Kiel and the Max Planck Institute of Solid State Research in Stuttgart performed transmission electron microscope investigations. Mössbauer spectra and magnetic properties were measured at the University of Augsburg, while researchers of TU Cottbus contributed thermodynamics measurements. The research was funded by the DFG (SPP 1415), the international graduate school ATUMS (TU Munich and the University of Alberta, Canada) and the TUM Graduate School.


VANCOUVER, BC--(Marketwired - February 15, 2017) - SG Spirit Gold Inc. (TSX VENTURE: SG) (the "Company") is pleased to announce that following its review of strategic acquisition opportunities, the Company has entered into a definitive amalgamation agreement, effective February 10, 2017 (the "Definitive Agreement"), with Northern Lights Marijuana Company Limited ("DOJA"). Pursuant to the terms of the Definitive Agreement, the Company will acquire all of the issued and outstanding securities of DOJA (the "Transaction"). DOJA is a privately-owned company based in Canada's picturesque Okanagan Valley, that is committed to becoming a licensed producer of marijuana under the Access to Cannabis for Medical Purposes Regulations ("ACMPR") and building a fast growing lifestyle brand that offers the highest quality handcrafted cannabis strains in Canada. DOJA was founded in 2013 by the same team that founded and built SAXX Underwear into an internationally recognizable brand. The DOJA team plans to build upon their past success in the consumer packaged goods industry and their mutual interest in, and appreciation for, cannabis culture and grow DOJA into a market leading brand in the cannabis industry. DOJA's marijuana production growth strategy can be broken down into three phases: DOJA's team has the experience to ensure they successfully navigate the ACMPR licensing process and deliver on their vision for the DOJA brand. Trent Kitsch -- Chief Executive Officer: Mr. Kitsch co-founded DOJA in 2013. Prior thereto, Mr. Kitsch founded SAXX Underwear in 2007 and successfully built SAXX into a globally recognizable brand and the fastest growing underwear brand in North America before fully exiting the business in 2015. In 2013, Mr. Kitsch and his wife Ria Kitsch founded Kitsch Wines in the Okanagan Valley. Trent is a proven entrepreneur and a graduate of the Richard Ivey School of Business with a major in entrepreneurship. Ryan Foreman -- President: Mr. Foreman co-founded DOJA with Mr. Kitsch in 2013. Mr. Foreman has spent over 15 years developing e-commerce operations within the consumer goods space working with influential brands and industry disrupters in the lifestyle and action sports markets. He has expertise developing and managing teams executing all business aspects including system integrations, domestic and international compliance, fulfillment, website development and online marketing. Jeff Barber -- Chief Financial Officer: Mr. Barber joined DOJA in 2016 after selling his ownership in a boutique M&A advisory firm in Calgary. Prior thereto, he was an investment banker with Raymond James Limited and previously held investment banking and equity research positions at Canaccord Genuity Corp. Jeff began his career as an economist with Deloitte LLP. Throughout his career, Mr. Barber has worked closely with various public company boards and executive teams to assist in institutional capital initiatives and advise on go-public transactions, valuations and M&A mandates. Jeff Barber is a CFA charterholder and holds a Masters degree in Finance and Economics from the University of Alberta. Zena Prokosh -- Chief Operating Officer: Ms. Prokosh joined DOJA after spending two years with THC Biomed International Ltd., where she was an Alternate Responsible Person In Charge and played an integral role in guiding the company through the MMPR/ACMPR licensing process. 
Prior thereto she was the Curator and Germplasm PlantSMART Research Technician / Lab Manager at the UBC Charles Fipke Centre for Innovative Research in Kelowna. Zena was accepted to and attended the 2016 Masterclass Medicinal Cannabis® held in Leiden, Netherlands. Ms. Prokosh holds a B.Sc. in Biology from UBC. Ria Kitsch -- Vice President: Mrs. Kitsch has been with DOJA since inception. Ria was formerly head of marketing for SAXX Underwear. Prior to that, Ms. Kitsch was employed with WaterPlay Solutions Corp., where she became a top salesperson and territory manager by quickly identifying and executing strategies to grow in regulated markets. Strong customer service skills and marketing focus make her a front-line specialist. Mrs. Kitsch earned a Business Honors degree from UBC-Okanagan. Shawn McDougall -- Master Grower and Curer: Mr. McDougall brings over a decade of cannabis growing and curing experience to DOJA. Shawn is truly a cannabis connoisseur and he will thoughtfully curate DOJA's strain selection to represent the full spectrum of the cannabis experience. Prior to joining DOJA, Shawn was the Master Grower for a number of patients under the Marijuana Medical Access Regulations and also consulted for MMPR applicants. Shawn has continued to hone his craft over the years and developed growing techniques that allow him to consistently produce high-quality cannabis and impressive yields. Shawn is an automation specialist and ticketed HVAC technician. In accordance with the terms of the Definitive Agreement, DOJA will amalgamate with a wholly-owned subsidiary of the Company, following which the resulting amalgamated entity will continue as a wholly-owned subsidiary of the Company. In consideration for completion of the Transaction, the current holders of DOJA class "A" voting common shares will be issued one-and-eight-tenths (1.8) common shares of the Company in exchange for every share of DOJA they hold. Existing convertible securities of DOJA will be exchanged for convertibles of the Company, on substantially the same terms, and applying the same exchange ratio. Prior to closing of the Transaction it is anticipated that the Company will apply to list its common shares for trading on the Canadian Securities Exchange (the "CSE") and voluntarily delist its shares from the TSX Venture Exchange. On closing of the Transaction it is anticipated that the Company will change its name to "DOJA Cannabis Company Limited", and will reconstitute its board of directors to consist of Trent Kitsch, Jeffrey Barber, Ryan Foreman and Patrick Brauckmann, with Trent Kitsch serving as Chief Executive Officer, Jeffrey Barber serving as Chief Financial Officer and Ryan Foreman serving as President. Closing of the Transaction remains subject to a number of conditions, including the completion of satisfactory due diligence, receipt of any required shareholder, regulatory and third-party consents, the Canadian Securities Exchange having conditionally accepted the listing of the Company's common shares, the TSX Venture Exchange having consented to the voluntarily delisting of the Company's common shares, and the satisfaction of other customary closing conditions. Additional information regarding the Transaction will be made available under the Company's profile on SEDAR (www.sedar.com) as such information becomes available. The Transaction cannot close until the required approvals are obtained, and the Company's common shares have been delisted from the TSX Venture Exchange. 
There can be no assurance that the Transaction will be completed as proposed or at all, or that the Company's common shares will be listed and posted for trading on any stock exchange. Trading in the Company's common shares has been halted and it is anticipated that trading will remain halted until completion of the Transaction. Neither the TSX Venture Exchange, nor the Canadian Securities Exchange, has in any way passed upon the merits of the proposed Transaction and has neither approved nor disapproved the contents of this press release. On behalf of the Board, Neither the TSX Venture Exchange nor its regulation services provider (as that term is defined in the policies of the TSX Venture Exchange) accepts responsibility for the adequacy or accuracy of this release. This news release includes certain "forward-looking statements" under applicable Canadian securities legislation. Forward-looking statements include, but are not limited to, statements with respect to the terms and conditions of the proposed Transaction; and future developments and the business and operations of DOJA. Forward-looking statements are necessarily based upon a number of estimates and assumptions that, while considered reasonable, are subject to known and unknown risks, uncertainties, and other factors which may cause the actual results and future events to differ materially from those expressed or implied by such forward-looking statements. Such factors include, but are not limited to: general business, economic, competitive, political and social uncertainties, uncertain capital markets; and delay or failure to receive board, shareholder or regulatory approvals. There can be no assurance that the Transaction will proceed on the terms contemplated above or at all and that such statements will prove to be accurate, as actual results and future events could differ materially from those anticipated in such statements. Accordingly, readers should not place undue reliance on forward-looking statements. The Company and DOJA disclaim any intention or obligation to update or revise any forward-looking statements, whether as a result of new information, future events or otherwise, except as required by law.


News Article | September 19, 2016
Site: www.chromatographytechniques.com

Addressing fundamental unknowns about the earliest history of Earth’s crust, scientists have precisely dated the world’s oldest rock unit at 4.02 billion years old. Driven by research at the University of Alberta, the findings suggest that early Earth was largely covered with an oceanic crust-like surface. “It gives us important information about how the early continents formed,” says lead author Jesse Reimink. “Because it’s so far back in time, we have to grasp at every piece of evidence we can. We have very few data points with which to evaluate what was happening on Earth at this time.” In fact, only three locations worldwide exist with rocks or minerals older than 4 billion years: one from Northern Quebec, mineral grains from Western Australia, and the rock formation from Canada’s Northwest Territories examined in this new study. While it is well known that the oldest rocks formed prior to 4 billion years ago, the unique twist on Reimink’s rock is the presence of well-preserved grains of the mineral zircon, leaving no doubt about the date it formed. The sample in question was found during fieldwork by Reimink’s PhD supervisor, Tom Chacko, in an area roughly 300 kilometres north of Yellowknife. Reimink recently completed his PhD at the University of Alberta before starting a post-doctoral fellowship at the Carnegie Institution for Science in Washington, D.C. “Zircons lock in not only the age but also other geochemical information that we’ve exploited in this paper,” Reimink continues. “Rocks and zircon together give us much more information than either on their own. Zircon retains its chemical signature and records age information that doesn’t get reset by later geological events, while the rock itself records chemical information that the zircon grains don’t.” He explains that the chemistry of the rock itself looks like rocks that are forming today in modern Iceland, which is transitional between oceanic and continental crust. In fact, Iceland has been hypothesized as an analog for how continental crusts started to form. “We examined the rock itself to analyze those chemical signatures to explore the way that the magma intrudes into the surrounding rock.” One signature in particular recorded the assimilation step of magma from Earth’s crust. “While the magma cooled, it simultaneously heated up and melted the rock around it, and we have evidence for that.” Reimink says that the lack of signatures of continental crust in this rock, different from what the early continents were expected to look like, leads to more questions than answers. Reimink says one of the biggest challenges as a geologist is that as we travel back in time on Earth, the quantity and quality of available evidence decreases. “Earth is constantly recycling itself, the crust is being deformed or melted, and pre-history is being erased,” remarks Reimink. “The presence of continents above water and exposed to the atmosphere has huge implications in atmospheric chemistry and the presence or absence of life. The amount of continents on Earth has a large chemical influence both on processes in the deep Earth (mantle and core) and at the Earth’s surface (atmosphere and biosphere). There are constant feedback loops between chemistry and geology. Though there are still a lot of unknowns, this is just one example that everything on Earth is intertwined.” “No evidence for Hadean continental crust within Earth’s oldest evolved rock unit” appears in the October issue of Nature Geoscience.


The International Nurses Association is pleased to welcome Deb Sterling-Bauer, RN, Cardiovascular Nurse, to their prestigious organization with her upcoming publication in the Worldwide Leaders in Healthcare. Deb Sterling-Bauer is currently serving as Program Nurse Educator for the Heartfit Clinic and is proprietor of DSB Nursing Services in Calgary, Alberta, Canada. With over 25 years of experience and extensive expertise in all facets of nursing, she specializes in cardiovascular prevention and rehabilitation. Deb Sterling-Bauer’s career in nursing began in 1990 when she graduated with her initial Nursing Diploma from the Misericordia Hospital School of Nursing. An advocate for continuing education, Deb gained her Bachelor of Science Degree in Nursing from the University of Alberta. She then went on to obtain her Certificates as a Medical Exercise Professional from the American Academy of Health, Fitness, and Rehab Professionals and her Holistic Nursing specialization. Furthermore, Deb holds additional certification in Telemetry. To keep up to date with the latest advances and developments in her field, Deb maintains professional memberships with the Canadian Nurses Association, the Alberta Association of Registered Nurses in Private Practices, the College and Association of Registered Nurses of Alberta, the Canadian Holistic Nurses Association, and the American Holistic Nurses Association, and reads the American Journal of Nursing. She attributes her success to her desire to always be learning new skills. When she is not assisting patients, Deb enjoys staying physically active by skiing, cycling, and hiking. Learn more about Deb Sterling-Bauer here: http://inanurse.org/network/index.php?do=/4132169/info/ and be sure to read her upcoming publication in Worldwide Leaders In Healthcare.


The International Association of HealthCare Professionals is pleased to welcome Stephen S. Campbell, MD, Family Practitioner, to their prestigious organization with his upcoming publication in The Leading Physicians of the World. He is a highly-trained and qualified family practitioner with vast expertise in all facets of his work. Dr. Campbell has been practicing for over 19 years and is currently serving as a family practitioner at The Everett Clinic in Shoreline, Washington. Dr. Stephen S. Campbell received his medical degree at the University of British Columbia in Vancouver, Canada in 1995. He then completed an internship at the University of Alberta in Edmonton, prior to a family medicine residency at the same educational institution. Dr. Campbell specializes in chronic disease management and preventative health, and is also an experienced executive leader and manager within the healthcare field. He remains a professional member of the American Academy of Family Physicians, the Washington Academy of Family Physicians, and the American Association for Physician Leadership, and attributes his success to hard work, listening to what his patients say and putting them first. When not working he enjoys cooking, traveling, and is a passionate hockey fan. Learn more about Dr. Campbell here: http://www.everettclinic.com/ and by reading his upcoming publication in The Leading Physicians of the World. FindaTopDoc.com is a hub for all things medicine, featuring detailed descriptions of medical professionals across all areas of expertise, and information on thousands of healthcare topics.  Each month, millions of patients use FindaTopDoc to find a doctor nearby and instantly book an appointment online or create a review.  FindaTopDoc.com features each doctor’s full professional biography highlighting their achievements, experience, patient reviews and areas of expertise.  A leading provider of valuable health information that helps empower patient and doctor alike, FindaTopDoc enables readers to live a happier and healthier life.  For more information about FindaTopDoc, visit http://www.findatopdoc.com


News Article | October 26, 2016
Site: spaceref.com

Astronomers have found a pair of extraordinary cosmic objects that dramatically burst in X-rays. This discovery, obtained with NASA's Chandra X-ray Observatory and ESA's XMM-Newton observatory, may represent a new class of explosive events found in space. The mysterious X-ray sources flare up and become about a hundred times brighter in less than a minute, before returning to original X-ray levels after about an hour. At their peak, these objects qualify as ultraluminous X-ray sources (ULXs) that give off hundreds to thousands of times more X-rays than typical binary systems where a star is orbiting a black hole or neutron star. "We've never seen anything like this," said Jimmy Irwin of the University of Alabama, who led the study that appears in the latest issue of the journal Nature [http://www.nature.com]. "Astronomers have seen many different objects that flare up, but these may be examples of an entirely new phenomenon." While magnetars -- young neutron stars with powerful magnetic fields -- have been known to produce bright and rapid flares in X-rays, these newly discovered objects are different in key ways. First, magnetars only take a few seconds to tens of seconds to decline in X-rays after a flare. Secondly, these new flaring objects are found in populations of old stars in elliptical galaxies, which are spherical or egg-shaped galaxies that are composed mostly of older stars. This makes it unlikely that these new flaring objects are young, astronomically speaking, like magnetars are thought to be. Also, these objects are brighter in X-rays during their "calm" periods. "These flares are extraordinary," said Peter Maksym, a co-author from the Harvard-Smithsonian Center for Astrophysics. "For a brief period, one of the sources became one of the brightest ULX to ever be seen in an elliptical galaxy." When they are not flaring, these sources appear to be normal binary systems where a black hole or neutron star is pulling material from a companion star similar to the Sun. This indicates that the flares do not significantly disrupt the binary system. While the nature of these flares is unknown, the team has begun to search for answers. One idea is that the flares represent episodes when matter being pulled away from a companion star falls rapidly onto a black hole or neutron star. This could happen when the companion makes its closest approach to the compact object in an eccentric orbit. Another explanation could involve matter falling onto an intermediate-mass black hole, with a mass of about 800 times that of the Sun for one source and 80 times that of the Sun for the other. "Now that we've discovered these flaring objects, observational astronomers and theorists alike are going to be working hard to figure out what's happening," said co-author Gregory Sivakoff of the University of Alberta. One of the sources, located near and presumably associated with the galaxy NGC 4636 at a distance of 47 million light-years, was observed with Chandra to flare once. Five flares were detected from the other source, which is located near the galaxy NGC 5128 at a distance of 14 million light-years. Four of these flares were seen with Chandra and one with XMM-Newton. The team looked at the X-ray variation of several thousand X-ray sources in Chandra observations of 70 nearby galaxies. Although several examples of flaring X-ray sources were found, none exhibited the behavior of the giant rapid flares reported here.
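The intermediate-mass black hole idea mentioned above can be put in context with the Eddington luminosity, the standard rough ceiling on steady accretion-powered emission, about 1.26e38 erg/s per solar mass. The snippet below is a back-of-the-envelope comparison, not a calculation from the study: it evaluates that limit for a typical 10 solar-mass stellar black hole and for the 80 and 800 solar-mass accretors suggested for these sources.

```python
# Eddington luminosity: L_Edd ~= 1.26e38 erg/s * (M / M_sun), the standard limit
# at which radiation pressure on ionized hydrogen balances gravity.
L_EDD_PER_SOLAR_MASS = 1.26e38  # erg/s

def eddington_luminosity(mass_solar: float) -> float:
    """Approximate Eddington luminosity (erg/s) for an accretor of the given mass."""
    return L_EDD_PER_SOLAR_MASS * mass_solar

if __name__ == "__main__":
    for m in (10, 80, 800):  # a stellar-mass example plus the two masses quoted above
        print(f"M = {m:4d} M_sun  ->  L_Edd ~ {eddington_luminosity(m):.2e} erg/s")
    # ULX-level luminosities (roughly 1e39 erg/s and above) are easier to
    # accommodate if the accretor is more massive than an ordinary stellar
    # black hole, which is the motivation for the intermediate-mass idea.
```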


Casey J.R.,University of Alberta | Grinstein S.,Hospital for Sick Children | Orlowski J.,McGill University
Nature Reviews Molecular Cell Biology | Year: 2010

Protons dictate the charge and structure of macromolecules and are used as energy currency by eukaryotic cells. The unique function of individual organelles therefore depends on the establishment and stringent maintenance of a distinct pH. This, in turn, requires a means to sense the prevailing pH and to respond to deviations from the norm with effective mechanisms to transport, produce or consume proton equivalents. A dynamic, finely tuned balance between proton-extruding and proton-importing processes underlies pH homeostasis not only in the cytosol, but in other cellular compartments as well.
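One way to appreciate the review's point about compartment-specific pH is the definition pH = -log10[H+]: modest pH differences correspond to large differences in proton concentration. The snippet below is simply that arithmetic; the organelle pH values are typical textbook figures used for illustration, not values taken from the review.

```python
def proton_concentration(ph: float) -> float:
    """Free H+ concentration (mol/L) implied by a given pH, from pH = -log10[H+]."""
    return 10.0 ** (-ph)

if __name__ == "__main__":
    # Representative textbook values, for illustration only.
    compartments = {"cytosol": 7.2, "mitochondrial matrix": 7.8, "lysosome": 4.7}
    cytosolic = proton_concentration(compartments["cytosol"])
    for name, ph in compartments.items():
        conc = proton_concentration(ph)
        print(f"{name:22s} pH {ph:.1f} -> [H+] = {conc:.2e} M "
              f"({conc / cytosolic:6.1f}x cytosolic)")
```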


Fujimoto K.,Japan National Astronomical Observatory | Sydora R.D.,University of Alberta
Physical Review Letters | Year: 2012

The dissipation mechanism in collisionless magnetic reconnection in a quasisteady period is investigated for the antiparallel field configuration. A three-dimensional simulation in a fully kinetic system reveals that a current-aligned electromagnetic mode produces turbulent electron flow that facilitates the transport of the momentum responsible for the current density. It is found that the electromagnetic turbulence is drastically enhanced by plasmoid formations and has a significant impact on the dissipation at the magnetic x-line. The linear analyses confirm that the mode survives in the real ion-to-electron mass ratio, which assures the importance of the turbulence in collisionless reconnection. © 2012 American Physical Society.


Geng H.,Tsinghua University | Liu C.,University of Alberta | Yang G.,Tsinghua University
IEEE Transactions on Industrial Electronics | Year: 2013

In this paper, the low-voltage ride-through (LVRT) capability of the doubly fed induction generator (DFIG)-based wind energy conversion system in the asymmetrical grid fault situation is analyzed, and the control scheme for the system is proposed to follow the requirements defined by the grid codes. As analyzed in the paper, the control efforts of the negative-sequence current are much higher than that of the positive-sequence current for the DFIG. As a result, the control capability of the DFIG restrained by the dc-link voltage will degenerate for the fault type with higher negative-sequence voltage component and 2φ fault turns out to be the most serious scenario for the LVRT problem. When the fault location is close to the grid connection point, the DFIG may be out of control resulting in non-ride-through zones. In the worst circumstance when LVRT can succeed, the maximal positive-sequence reactive current supplied by the DFIG is around 0.4 pu, which coordinates with the present grid code. Increasing the power rating of the rotor-side converter can improve the LVRT capability of the DFIG but induce additional costs. Based on the analysis, an LVRT scheme for the DFIG is also proposed by taking account of the code requirements and the control capability of the converters. As verified by the simulation and experimental results, the scheme can promise the DFIG to supply the defined positive-sequence reactive current to support the power grid and mitigate the oscillations in the generator torque and dc-link voltage, which improves the reliability of the wind farm and the power system. © 2012 IEEE.
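The positive- and negative-sequence quantities discussed in the abstract come from the standard symmetrical-component (Fortescue) decomposition of an unbalanced three-phase set. The sketch below is a generic illustration of that decomposition for a hypothetical asymmetrical voltage sag; the phasor values are invented and the code is not from the paper.

```python
import cmath

A = cmath.exp(2j * cmath.pi / 3)  # the 120-degree rotation operator 'a'

def symmetrical_components(va: complex, vb: complex, vc: complex):
    """Return (zero, positive, negative) sequence components of three phasors."""
    v0 = (va + vb + vc) / 3
    v1 = (va + A * vb + A**2 * vc) / 3   # positive sequence
    v2 = (va + A**2 * vb + A * vc) / 3   # negative sequence
    return v0, v1, v2

if __name__ == "__main__":
    # Illustrative per-unit phase voltages during an asymmetrical sag
    # (not an exact solution of any particular fault): phase A healthy,
    # phases B and C depressed in magnitude.
    va = 1.00 * cmath.exp(0j)
    vb = 0.55 * cmath.exp(-1j * 2 * cmath.pi / 3)
    vc = 0.55 * cmath.exp(+1j * 2 * cmath.pi / 3)
    v0, v1, v2 = symmetrical_components(va, vb, vc)
    print(f"|V0| = {abs(v0):.3f} pu, |V1| = {abs(v1):.3f} pu, |V2| = {abs(v2):.3f} pu")
    # A larger |V2| relative to |V1| is exactly the situation the abstract says
    # demands more converter effort from the DFIG during asymmetrical faults.
```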


Mezzacapo F.,Max Planck Institute of Quantum Optics | Boninsegni M.,University of Alberta
Physical Review B - Condensed Matter and Materials Physics | Year: 2012

We study the ground-state phase diagram of the quantum J1–J2 model on the honeycomb lattice by means of an entangled-plaquette variational ansatz. Values of energy and relevant order parameters are computed in the range 0 ≤ J2/J1 ≤ 1. The system displays classical order for J2/J1 ≲ 0.2 (Néel) and for J2/J1 ≳ 0.4 (collinear). In the intermediate region, the ground state is disordered. Our results show that the reduction of the half-filled Hubbard model to the model studied here does not yield accurate predictions. © 2012 American Physical Society.
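Purely as a reading aid, the snippet below encodes the approximate phase boundaries quoted in this abstract (Néel order for J2/J1 below about 0.2, collinear order above about 0.4, and a disordered ground state in between); it is a restatement of the reported result, not an independent calculation.

```python
def honeycomb_j1j2_phase(j2_over_j1: float) -> str:
    """Classify the ground state of the quantum J1-J2 model on the honeycomb
    lattice according to the approximate boundaries quoted in the abstract."""
    if not 0.0 <= j2_over_j1 <= 1.0:
        raise ValueError("the study covers 0 <= J2/J1 <= 1")
    if j2_over_j1 < 0.2:       # approximate boundary quoted in the abstract
        return "Neel antiferromagnet"
    if j2_over_j1 > 0.4:       # approximate boundary quoted in the abstract
        return "collinear order"
    return "magnetically disordered"

if __name__ == "__main__":
    for r in (0.1, 0.3, 0.6):
        print(f"J2/J1 = {r:.1f}: {honeycomb_j1j2_phase(r)}")
```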


In some embodiments, the present invention pertains to methods of detecting a contamination of an environment by a fracture fluid that comprises magnetic particles. In some embodiments, such methods include: (1) collecting a sample from the environment; and (2) measuring a magnetic susceptibility of the sample in order to detect the presence or absence of the magnetic particles. Further embodiments of the present invention pertain to methods of tracing fracture fluids in a mineral formation. In some embodiments, such methods include: (1) associating the fracture fluids with magnetic particles; (2) introducing the fracture fluids into the mineral formation; and (3) measuring a magnetic susceptibility of the fracture fluids. Additional embodiments of the present invention pertain to fracture fluids containing the aforementioned magnetic particles, the actual magnetic particles, and methods of making said magnetic particles.
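As a loose sketch of the detection logic described above (measure a sample's magnetic susceptibility and compare it with the natural background to infer the presence of the tagged fracture fluid), here is a minimal, hypothetical implementation. The function name, threshold rule, and numbers are assumptions for illustration only; the patent does not specify them.

```python
from statistics import mean, stdev

def contamination_flag(sample_susceptibility: float,
                       background_susceptibilities: list[float],
                       n_sigma: float = 3.0) -> bool:
    """Flag a sample as containing magnetic tracer particles if its volume
    magnetic susceptibility exceeds the background mean by n_sigma standard
    deviations. Purely illustrative decision rule."""
    mu = mean(background_susceptibilities)
    sigma = stdev(background_susceptibilities)
    return sample_susceptibility > mu + n_sigma * sigma

if __name__ == "__main__":
    # Hypothetical SI volume susceptibilities (dimensionless) of uncontaminated
    # samples from the area, plus one suspect sample.
    background = [2.1e-5, 1.8e-5, 2.4e-5, 2.0e-5, 1.9e-5]
    suspect = 9.5e-5
    print("Tracer particles detected:", contamination_flag(suspect, background))
```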


News Article | February 15, 2017
Site: physicsworld.com

Taken from the February 2017 issue of Physics World Following a false alarm in 2004, two groups report what could be the first observation of supersolids, a theoretically predicted state of matter that is both a superfluid and a solid at the same time. Stephen Ornes reports We learn it from a young age: solids hold their shapes; liquids flow. Physical states of matter are mutually exclusive. A solid occupies a particular position in space, its molecules fixed. A fluid assumes the shape of its container, its molecules in constant motion. But a so-called supersolid, a predicted phase of matter that forms only under extreme circumstances, doesn’t follow this idea of order. To describe supersolids is an exercise in contradictions. On the one hand, they form rigid crystalline structures. On the other, theory predicts that part of their mass also acts like a superfluid – a quantum phase of matter that flows like a liquid, but without viscosity. That combination lets supersolids do things that seem unfathomable to the humdrum, room-temperature, Newtonian world, like flow through themselves – without friction. Although the Russian physicists Alexander Andreev and Ilya Liftshitz first predicted in 1969 that supersolids could form in helium close to absolute zero, definite proof has been hard to come by, and this elusive phase of matter has largely remained entrenched in the world of theory. That may have changed, though: two independent groups of researchers – one at the Massachusetts Institute of Technology (MIT) in the US, and the other at ETH Zurich in Switzerland – recently reported forming supersolids. Both of the new papers were posted on the arXiv preprint server in October (arXiv:1610.08194; arXiv:1609.09053), though they have not yet been published in peer-reviewed journals. Experts in the field say that so far, the evidence for supersolids looks convincing, with the usual caveats: namely, that more work and replication are needed. Both teams report coaxing supersolids into existence by manipulating a Bose–Einstein condensate (BEC), a bizarre state of matter that forms when bosons are chilled to within a fraction of a degree above absolute zero. The near-simultaneous reporting of two cases of supersolids, found using different experimental approaches, is exciting not only because supersolids may now join the ranks of exotic, fundamental phases, like superconductivity and superfluidity, but also because the material has travelled a long and at times rocky path from prediction to experimental evidence. “There are no scoops in science, only a slow construction of truth,” says physicist Sébastien Balibar at the École Normale Supérieure in Paris, who has conducted research on quantum solids and was not involved in the new studies. “Discoveries are very rarely made in one shot.” The latest reports weren’t the first from physicists who suspected they’d formed supersolids. In a study published in 2004, Pennsylvania State University physicist Moses Chan, together with his graduate student Eun-Seong Kim, reported extraordinary results from experiments using helium-4, the most abundant isotope of helium on Earth (Nature 427 225). At cold temperatures, helium-4 can be encouraged to form either a solid (at high pressure) or a superfluid (at standard pressure). Experiments in the 1930s showed that helium undergoes a phase transition to become a superfluid at 2.2 K, below which it exhibits spectacularly bizarre behaviour, like flowing up the walls of its container and out down the sides. 
Chan and Kim started with solid helium-4. They put the material in a torsional oscillator – a device that rotates in alternating directions – and lowered the temperature. At a sliver of a degree above absolute zero, the rotation of the device increased in frequency, which suggested that the amount of mass that was rotating had decreased. That change was consistent with the 1969 predictions by Andreev and Liftshitz, who hypothesized that some of the helium’s mass would form a superfluid that could flow through the rest of the solid without friction. Other groups reproduced the experiment and found the same results, exciting the condensed-matter physics community. Still, doubt lingered, and for years, the results from Chan and Kim remained controversial. One team that set out to reproduce the experiment comprised John Reppy, a physicist at Cornell University in the US, and graduate student Sophie Rittner. In a paper published in 2006, they reported that the frequency uptick was tied to defects in the solid helium. When they warmed the helium and let it cool slowly – a process called annealing that smooths out defects – the signature of supersolidity vanished. Then, in a paper published in Nature in 2007, physicist John Beamish at the University of Alberta, Canada, and his collaborators challenged Chan and Kim’s findings by suggesting that solid helium wasn’t perfectly stiff but instead had some give, a “giant plasticity”. This effect could allow some atoms to slide past each other, mimicking the properties of supersolidity. In later experiments, Beamish’s group worked with Balibar and his colleagues in Paris to better understand this effect, and bolstered the case for the new explanation. Chan, ultimately, brought this chapter to its close. Reppy had been Chan’s adviser in graduate school, and Chan set out to redesign his own experiment to test alternative ideas about the supersolid state. In a paper published in 2012 and based on a new set-up, he reported finding no increase in rotational frequency – and thus no evidence for supersolids (Phys. Rev. Lett. 109 155301). “This is a remarkable piece of science history,” says physicist Tilman Pfau, who studies particle interactions in BECs at the University of Stuttgart, in Germany. “The same author that claims something, gets criticized, goes back to the lab, sees he was wrong and writes a paper about it.” While some researchers continue to pursue the formation of supersolids in helium, many other labs have turned to BECs. Albert Einstein first predicted the existence of this state of matter in 1924, based on theoretical work by Indian physicist Satyendra Bose, but it took decades to develop the machinery needed to test the prediction. The first BEC was created in a lab in Colorado, US, in 1995, when physicists used lasers and magnetic fields to trap a clutch of rubidium atoms as the temperature was reduced as much as possible. Just above absolute zero, the individual atoms all began behaving like one giant superatom – a single quantum entity at its lowest energy state. Research into the discovery and properties of BECs netted Nobel prizes for physicists Eric Cornell, at the US National Institute of Standards and Technology, and Carl Wieman, then at the University of Colorado Boulder and now at Stanford University, as well as Wolfgang Ketterle at MIT, whose lab is one of the two that has produced new findings on supersolids. 
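To make the torsional-oscillator inference described above concrete: the resonant frequency scales as f = sqrt(k/I)/(2π), so if part of the solid helium stops rotating with the cell, the moment of inertia I falls and the frequency rises; for small shifts, ΔI/I ≈ -2Δf/f. The sketch below applies that relation to invented example numbers and is an illustration of the reasoning, not the published data.

```python
import math

def resonant_frequency(k: float, inertia: float) -> float:
    """Resonant frequency of a torsional oscillator, f = sqrt(k / I) / (2*pi)."""
    return math.sqrt(k / inertia) / (2.0 * math.pi)

def apparent_decoupled_fraction(f_before: float, f_after: float) -> float:
    """Fraction of the rotating moment of inertia that appears to decouple,
    inferred from a small upward frequency shift via dI/I ~ -2 * df/f."""
    return 2.0 * (f_after - f_before) / f_before

if __name__ == "__main__":
    # Invented example values, not the published measurements.
    f_warm = 1000.000   # Hz, resonance with all of the helium rotating
    f_cold = 1000.500   # Hz, resonance after the frequency increase at low temperature
    frac = apparent_decoupled_fraction(f_warm, f_cold)
    print(f"Apparent non-rotating (supersolid-like) fraction: {frac * 100:.2f}%")

    # Sanity check of the small-shift relation against the full formula:
    k, inertia = 1.0e3, 1.0                              # arbitrary units
    f0 = resonant_frequency(k, inertia)
    f1 = resonant_frequency(k, inertia * (1.0 - frac))   # that fraction decouples
    print(f"Full formula gives a frequency shift of {(f1 - f0) / f0 * 100:.3f}%")
```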
In the two decades since a BEC was first observed, physicists have become adept at finding ways to control every term in the Hamiltonian – the mathematical description of the energy state of the material. It is through tweaking the values of these terms that they’ve been able to probe new fundamental phases of matter, like supersolids. Physicists often characterize transitions between phases of matter by what kind of symmetry is broken. Liquid water, for example, at the molecular level, looks the same under any transformation. The arrangement of molecules at one place in the liquid looks like the arrangement of molecules at another. But ice is a crystal, which means its structure looks the same only when observed at periodic intervals. So the translational symmetry of the liquid is broken as it becomes a crystal. Both forming a crystal and forming a superfluid are associated with breaking symmetry; thus, to form a supersolid requires two kinds of symmetry to break simultaneously. First, a superfluid must be formed. An advantage of working with BECs is that it is well known how to make BECs behave like superfluids, making them a natural place to start; another is that physicists know how to vary atom interactions in the material. Second, while this superfluidity is maintained, the superfluid must become regularly ordered into regions of high and low density, like atoms in a crystal. Physicists have posited a variety of ways to stimulate atom interactions that lead to a solid state while maintaining superfluidity, i.e. the long-sought supersolid state. “Supersolidity is a paradoxical competition between two different and contradictory types of order,” says Balibar. One of those is the order demanded by solidity, where individual atoms line up on a lattice; the other is superfluidity, where the atoms effectively combine, accumulating to the same quantum state. “Atoms in a supersolid should be localized and delocalized at the same time, distinguishable and indistinguishable.” There may be more than one way to coax a solid from a BEC superfluid. One group that reported its findings in October, led by Tilman Esslinger at ETH Zurich, trapped the BEC at the intersection of crossing lasers, with each laser forming an optical cavity. The interaction of the photons and atoms in the BEC gave rise to self-organization – the hallmark of solidity – even though the material continued to look like a superfluid. Pfau says the new work “goes clearly beyond” what groups have done before; Balibar, in Paris, says that the results look “convincing” and “the fundamental effect is clearly there”. At the same time, Balibar cautions that although Esslinger’s group claims evidence for spontaneous symmetry breaking, he’d like to see better confirmation. “That’s not totally obvious to me since the period of the supersolid is fixed by the laser wavelength.” The other group, from Ketterle’s lab at MIT, also used lasers, but with a kind of BEC that takes advantage of the connections between the spin of an atom – an intrinsic quantum property that’s analogous to rotation – and its motion. (Spin–orbit coupling is a physical interaction that underlies many unusual physical phenomena, including topological insulators and some behaviours in superconductors.) The physicists used a laser to transfer some momentum to the atoms in the BEC, which led to the formation of interference patterns. From those patterns emerged tiger-like stripes of alternating density – standing waves – in the material. 
In its paper, Ketterle’s group reports that this density modulation breaks translational symmetry, the requirement for a solid. Physicist Thomas Busch, who studies quantum processes in ultracold atomic gases at the Okinawa Institute of Science and Technology, in Japan, says theorists predicted a few years ago that the supersolid stripes should emerge. At the same time, he notes that experimental verification is exciting news to the community. Neither group explicitly showed that the material could flow through itself, though the papers do offer arguments in favour of superfluidity. Despite past controversies over what is or isn’t a supersolid, Busch says that the vast majority of people will not have a problem calling the entities in the two new studies supersolids. “Figuring out the exact ‘super’ properties of the states created is now an exciting task for the future,” he says. Finding new states of matter has been a driving force in cold-atom research for decades, and supersolids are the latest bizarre material to join a growing list that already includes things like superfluids and superconductors. For the last two years, Pfau’s group, in Stuttgart, has been exploring quantum ferro­fluids – magnetic droplets that can self-organize out of BECs at low temperature. “Nobody would have thought before [we observed the material in the lab] that this was a stable state of matter,” he says. Last year, in a paper published in Nature, the group reported that quantum ferrofluids can also break translational symmetry, which means they might be a good place to search for other supersolids. Because scientists have been working with BECs for decades, they’ve figured out a lot about how to tame them and tune them to probe fundamental phases of matter. But they’re just getting started, says Busch. Now they’re looking for ways not only to identify other exotic phases, but also to explore what happens when these strange materials are combined, or how they act under other experimental conditions. “How do these systems actually behave by themselves? How do they react to external stimulation? What happens if we squeeze them?” Busch likens this era of discovery to what happened in the years after BECs were first discovered, when physicists couldn’t wait to get to know the new condensates better. “The first thing people did [to BECs] was to squeeze them – the stuff you do when you get a new toy.” In addition, he says, physicists want to study the effects of different long-range interactions and better understand how impurities affect the properties of the materials. Impurities could be critical in finding applications for supersolids. Busch notes that in semiconductor research, impurities added through doping can change the conductivity of a material and make it fit a certain use. Higher dimensions may also be in store. In the preprint from Ketterle’s group, the researchers note a couple of possible future directions: more characterization of the system, for example, or extending their method to a 2D spin–orbit coupling system. Achieving supersolidity in three dimensions would be another major milestone, but breaking symmetries in three dimensions would be difficult to realize in experiments. Exotic states of matter, like supersolids, show that under extreme conditions our physical reality behaves in bizarre ways that aren’t easy to explain. “The physics of cold atoms is some kind of simulation of fundamental problems that are well defined, but hard to calculate,” says Balibar. 
Theory may predict a spectrum of undiscovered properties that emerge in idealized matter, but controlling such strange stuff under extreme conditions is difficult. “Real matter has defects and surface states,” he says, “so our understanding of real matter is far from being complete.”


News Article | February 15, 2017
Site: www.marketwired.com

VANCOUVER, BRITISH COLUMBIA--(Marketwired - Feb. 14, 2017) - Conifex Timber Inc. ("Conifex") (TSX:CFF) is pleased to announce that Janine North has joined its Board of Directors. Ms. North has extensive background and experience in the resource sector, particularly in the northern interior region of British Columbia. Ms. North is the recently retired CEO of the Northern Development Initiative Trust ("NDIT"), a regional development corporation focused on building a stronger economy across central and northern British Columbia. Ms. North's previous roles prior to NDIT included management of logging and trucking companies, and management of certain forest districts in British Columbia. Ms. North currently serves as a director and the Chair of Governance and Human Resources for BC Hydro, and as a director of viaSport British Columbia, an independent not-for-profit organization, tasked by the provincial government with promoting and developing amateur sport in B.C. Ms. North previously chaired the Nechako-Kitamaat Development Fund, Vice-Chaired the Central Interior Logging Association, served as Vice President of the Agricultural Institute of Canada and held directorships with the Association of Mineral Exploration of B.C., Canadian Sport Institute and Junior Achievement BC. Ms. North's commitment to excellence has been recognized through being honoured with the Influential Women in Business Award by Business in Vancouver, 50 Most Influential Women in BC; Association for Mineral Exploration of BC Honorary Service Roll; Honorary Lifetime Alumni University of Northern B.C.; Northern B.C.'s Woman of Influence and Impact; Northern B.C. Mentor of the Year to Industry and Business; Northern B.C. Newsmaker of the Year; and Central Interior Logging Association Contractor of the Year. Ms. North is an accredited corporate director with the Canadian Institute of Corporate Directors and a graduate of the University of Alberta Faculty of Forestry and Agriculture. Conifex and its subsidiaries' primary business currently includes timber harvesting, reforestation, forest management, sawmilling logs into lumber and wood chips, and value added lumber finishing and distribution. Conifex's lumber products are sold in the United States, Chinese, Canadian and Japanese markets. Conifex has expanded its operations to include bioenergy production following the commencement of commercial operations of its power generation facility at Mackenzie, British Columbia.


News Article | February 21, 2017
Site: www.eurekalert.org

For the first time ever, scientists have captured images of terahertz electron dynamics of a semiconductor surface on the atomic scale. The successful experiment indicates a bright future for the new and quickly growing sub-field called terahertz scanning tunneling microscopy (THz-STM), pioneered by the University of Alberta in Canada. THz-STM allows researchers to image electron behaviour at extremely fast timescales and explore how that behaviour changes between different atoms. "We can essentially zoom in to observe very fast processes with atomic precision and over super fast time scales," says Vedran Jelic, PhD student at the University of Alberta and lead author on the new study. "THz-STM provides us with a new window into the nanoworld, allowing us to explore ultrafast processes on the atomic scale. We're talking a picosecond, or a millionth millionth of a second. It's something that's never been done before." Jelic and his collaborators used their scanning tunneling microscope (STM) to capture images of silicon atoms by raster scanning a very sharp tip across the surface and recording the tip height as it follows the atomic corrugations of the surface. While the original STM can measure and manipulate single atoms--for which its creators earned a Nobel Prize in 1986--it does so using wired electronics and is ultimately limited in speed and thus time resolution. Modern lasers produce very short light pulses that can measure a whole range of ultra-fast processes, but typically over length scales limited by the wavelength of light at hundreds of nanometers. Much effort has been expended to overcome the challenges of combining ultra-fast lasers with ultra-small microscopy. The University of Alberta scientists addressed these challenges by working in a unique terahertz frequency range of the electromagnetic spectrum that allows wireless implementation. Normally the STM needs an applied voltage in order to operate, but Jelic and his collaborators are able to drive their microscope using pulses of light instead. These pulses occur over really fast timescales, which means the microscope is able to see really fast events. By incorporating the THz-STM into an ultrahigh vacuum chamber, free from any external contamination or vibration, they are able to accurately position their tip and maintain a perfectly clean surface while imaging ultrafast dynamics of atoms on surfaces. Their next step is to collaborate with fellow material scientists and image a variety of new surfaces on the nanoscale that may one day revolutionize the speed and efficiency of current technology, ranging from solar cells to computer processing. "Terahertz scanning tunneling microscopy is opening the door to an unexplored regime in physics," concludes Jelic, who is studying in the Ultrafast Nanotools Lab with University of Alberta professor Frank Hegmann, a world expert in ultra-fast terahertz science and nanophysics. Their findings, "Ultrafast terahertz control of extreme tunnel currents through single atoms on a silicon surface," appeared in the February 20 issue of Nature Physics.
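The picosecond figure quoted above follows directly from the frequencies involved: an oscillation's period is T = 1/f, so a 1 THz field completes one cycle in 1 ps. The snippet below is just that conversion for a few frequencies in the terahertz band; the specific frequencies are illustrative, not parameters from the experiment.

```python
def period_ps(frequency_thz: float) -> float:
    """Oscillation period in picoseconds for a frequency given in terahertz (T = 1/f)."""
    return 1.0 / frequency_thz  # 1 / THz = 1e-12 s = 1 ps

if __name__ == "__main__":
    for f in (0.5, 1.0, 3.0):  # illustrative frequencies within the terahertz band
        print(f"{f:.1f} THz  ->  period of {period_ps(f):.2f} ps")
```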


News Article | November 30, 2016
Site: www.eurekalert.org

ANCHORAGE -- Permafrost loss due to a rapidly warming Alaska is leading to significant changes in the freshwater chemistry and hydrology of Alaska's Yukon River Basin with potential global climate implications. This is the first time a Yukon River study has been able to use long-term continuous water chemistry data to document hydrological changes over such an enormous geographic area and long time span. The results of the study have global climate change implications because of the cascading effects of such dramatic chemical changes on freshwater, oceanic and high-latitude ecosystems, the carbon cycle and the rural communities that depend on fish and wildlife in Alaska's iconic Yukon River Basin. The study was led by researcher Ryan Toohey of the Department of the Interior's Alaska Climate Science Center and published in Geophysical Research Letters. Permafrost rests below much of the surface of the Yukon River Basin, a silent store of thousands of years of frozen water, minerals, nutrients and contaminants. Above the permafrost is the 'active layer' of soil that freezes and thaws each year. Aquatic ecosystems -- and their plants and animals -- depend on the ebb and flow of water through this active layer and its specific chemical composition of minerals and nutrients. When permafrost thaws, the soil's active layer expands and new pathways open for water to flow through different parts of the soil, bedrock and groundwater. These new pathways ultimately change the chemical composition of both surface water and groundwater. "As the climate gets warmer," said Toohey, "the thawing permafrost not only enables the release of more greenhouse gases to the atmosphere, but our study shows that it also allows much more mineral-laden and nutrient-rich water to be transported to rivers, groundwater and eventually the Arctic Ocean. Changes to the chemistry of the Arctic Ocean could lead to changes in currents and weather patterns worldwide." Another recent study by University of Alberta scientist Suzanne Tank documented similar changes on another major Arctic river, the Mackenzie River in Canada. With two of these rivers showing striking, long-term changes in their water chemistry, Toohey noted that "these trends strongly suggest that permafrost loss is leading to massive changes in hydrology within the arctic and boreal forest that may have consequences for the carbon cycle, fish and wildlife habitat and other ecosystem services." The Yukon River Basin, which is the size of California, starts in northwestern British Columbia, then flows northwest through Yukon across the interior of Alaska to its delta, where it discharges into the Bering Sea. Eventually, its waters reach the Arctic Ocean; it is one of six major rivers that play an important role in the circulation and chemical makeup of the Arctic Ocean. This study, which analyzed more than 30 years of data, sheds light on how the effects of climate change are already affecting this system. The study specifically found that the Yukon River and one of its major tributaries, the Tanana River, have experienced significant increases in calcium, magnesium and sulfate over the last three decades. As permafrost loss allows for more water to access more soil and bedrock, increased weathering most likely explains these significant increases. In fact, the annual pulse of sulfate in the Yukon River jumped by 60 percent over the past thirty years. 
This research also suggests that groundwater, enriched with organic carbon and other minerals, is likely contributing to these changes. How long the river stays frozen plays an important role in erosion. The Yukon River ice has been breaking up earlier and earlier, often accompanied by tremendous flooding events that devastate the communities on its banks. At the same time, the river has been freezing up later and later. When the river is unfrozen, its banks and soils are more susceptible to erosion. Phosphorus, often a product of this erosion, has increased by over 200 percent during December. All of these increases impact the aquatic ecosystems of the Yukon River and may ultimately contribute to changes in the Arctic Ocean. Together, said the authors, the research shows that permafrost degradation is already fundamentally transforming the way that high-latitude, Northern Hemisphere ecosystems function. The study was the result of a unique collaboration between the USGS, the Yukon River Inter-Tribal Watershed Council, the Pilot Station Traditional Council and the Indigenous Observation Network funded by these organizations and the Administration for Native Americans and the National Science Foundation. ION is a citizen-science network that depends on Alaska Native Tribes and First Nations along the Yukon River and its tributaries to participate in scientific research. The study, "Multi-decadal increases in Yukon River Basin chemical fluxes as indicators of changing flowpaths, groundwater, and permafrost," was just published in Geophysical Research Letters. It is authored by R.C. Toohey, USGS Alaska Climate Science Center; N.M. Herman-Mercer and P.F. Schuster, USGS National Research Program; E. Mutter, Yukon River Inter-Tribal Watershed Council; and J.C. Koch, USGS Alaska Science Center.
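Multi-decadal changes of the kind reported here, such as the 60 percent rise in the annual sulfate pulse, are typically quantified by fitting a trend to a long series of annual fluxes. The sketch below shows that generic calculation on synthetic data; the numbers are invented and the method is only an illustration of the approach, not the study's actual flux-estimation procedure.

```python
from statistics import mean

def linear_trend(years: list[int], values: list[float]) -> float:
    """Ordinary least-squares slope of values against years (units of value per year)."""
    y_bar, v_bar = mean(years), mean(values)
    num = sum((y - y_bar) * (v - v_bar) for y, v in zip(years, values))
    den = sum((y - y_bar) ** 2 for y in years)
    return num / den

if __name__ == "__main__":
    # Synthetic 30-year record of an annual solute flux (arbitrary units)
    # that rises roughly 60% over the period.
    years = list(range(1985, 2015))
    flux = [100.0 + 2.0 * (y - 1985) for y in years]
    slope = linear_trend(years, flux)
    pct_change = slope * (years[-1] - years[0]) / flux[0] * 100
    print(f"Trend: {slope:.2f} units/yr, ~{pct_change:.0f}% change over the record")
```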


News Article | February 29, 2016
Site: www.techtimes.com

Polar bear encounters with humans are on the rise, as suggested by documents from the Manitoba government in Canada. In fact, encounters on the Hudson Bay shores may have reached a record high. As a result, more polar bears are ending up in specialized lockups in Churchill. In 2013, there were 229 documented polar bear cases in Churchill. The number jumped to 351 cases in 2015, which Manitoba Conservation regional wildlife manager Daryll Hedman called a high number of occurrences. Commonly known as the "polar bear jail," the holding facility is where polar bears are held and tranquilized before their re-entry into the wild. In 2013, there were 36 bears in custody. In 2015, the inmate population jumped to 65. Patrol officers from Manitoba Conservation have already increased their activities, but Hedman and other specialists said climate change has a lot to do with the increased polar bear encounters. Two-thirds of the world's polar bears call Canada home. Unfortunately, due to the effects of climate change, the polar bear population in Hudson Bay could be extinct within the next several decades. Hunting during the winter months is important for the polar bears. They use this time to stock up on fatty seal meat to help them make it through the summer months, when they face food scarcity on land. Recently, there has been a delay in the yearly freeze-up of Arctic waters. Additionally, the ice melts faster in spring. These changes shrink the polar bears' window to stock up on the seal meat they need to build fat reserves. Since fatty meat is scarce on land, hungry polar bears tend to venture into human towns for food. Normally, polar bear encounters happen around late August. Hedman said that lately, polar bear encounters have been documented as early as the first of July. "What's the tipping point? What's the threshold that they can go without food? When they're on land, they're not eating," added Hedman. It's a question of how long these polar bears can go without hunting seals on the sea ice. University of Alberta polar bear expert Andrew Derocher said that as polar bears spend more time on dry land without food, the chances of them venturing into populated areas increase. "Hungry bears are always going to be a problem. All projections are that they will increase their on-land time," said Derocher.


News Article | August 25, 2016
Site: phys.org

Wheelchair users facing persistent shoulder strain or injury will soon have assistance thanks to a device recently developed by University of Alberta researchers and partnered with help from TEC Edmonton's Technology Management team.


News Article | September 8, 2016
Site: www.cemag.us

University of Alabama at Birmingham researchers are exploring ways to wrap pig tissue with a protective coating to ultimately fight diabetes in humans. The nano-thin bilayers of protective material are meant to deter or prevent immune rejection. The ultimate goal: transplant insulin-producing cell-clusters from pigs into humans to treat Type 1 diabetes. In preclinical work begun this year, these stealth insulin-producers — pancreatic islets from pigs or mice coated with thin bilayers of biomimetic material — are being tested in vivo in a mouse model of diabetes, say UAB investigators Hubert Tse, Ph.D., and Eugenia Kharlampieva, Ph.D. Tse is an immunologist and associate professor in the Department of Microbiology, UAB School of Medicine, and Kharlampieva is a polymer and materials chemist and associate professor in the Department of Chemistry, UAB College of Arts and Sciences. Their research, supported by two new JDRF Diabetes Foundation grants, “is a nice example of a truly multidisciplinary project that encompasses distinct areas of expertise including engineering, nanomaterials, immunology and islet transplantation,” says Fran Lund, Ph.D., professor and chair of Microbiology at UAB. “The project also melds basic science and engineering with the goal of developing better treatments for diabetes.” “Our collaboration works because we have the same mindset,” Kharlampieva says of her collaboration with Tse. “We want to do good science.” One of the chief jobs of pancreatic islets is production of insulin to regulate levels of blood sugar. In Type 1 diabetes, the β-cells that produce insulin are destroyed by an autoimmune attack by the body’s own immune system. To protect transplanted donor islets, researchers elsewhere have tried to coat islets with thick gels, or with coatings that bind covalently or ionically to the islets. Those approaches have had limited success. Tse and Kharlampieva have taken a different approach, applying a gentler and much thinner coating of just five bilayers of biomimetic material about 30 nanometers thick. These layers act as a physical barrier that dissipates reactive oxygen species, and they also dampen the immune response. The thinness of the coat allows nutrients and oxygen easy passage to the cells. “We did not expect the multilayers would show such a large, potential benefit,” Kharlampieva says of the immunomodulation shown by the bilayers. The Tse-Kharlampieva collaboration got its start out of efforts to solve a problem in a UAB service to provide islets to national researchers — the islets often died or stopped secreting insulin during the three to five days of shipping. Kharlampieva was asked whether her bilayers might somehow protect the islets and preserve viability and functionality. The bilayers are held together by hydrogen bonding, through an attraction between polar groups in the layers, which Kharlampieva calls a “friendlier approach” than covalent or ionic bonds. One of the layers, tannic acid, is a polyphenol that can scavenge destructive free radicals, much like the polyphenols found in green tea. Tse — who studies how oxidative stress contributes to islet dysfunction and autoimmune responses in Type 1 diabetes — wondered whether tannic acid’s ability to defuse radical oxygen species might help to lessen autoimmune dysregulation. In collaborative research over more than five years, the UAB researchers showed that the answer was yes. 
These findings were reported in a 2012 Advanced Functional Materials paper by Tse, Kharlampieva, and colleagues. In a 2014 Advanced Healthcare Materials paper, the researchers further examined the immunomodulatory effect of the hydrogen-bonded multilayers, in the form of hollow shells. The next step for the UAB researchers is in vivo testing of xeno- and allotransplantation to see if the bilayer-coated pancreatic islets have decreased risk of graft rejection, while restoring control of blood sugar. Xenotransplantation is transplanting from one species to another, and allotransplantation is transplanting from one member of a species to a different member of the same species. In a one-year, in vivo demonstration grant, the UAB researchers found that nano-coated mouse islets survived and functioned as long as 40 days in diabetic mice that lack working immune systems. “We showed that they do stay alive, and they function to regulate blood glucose,” Tse says. Now Tse and Kharlampieva, supported by two new JDRF grants, are testing the survival and functioning of nano-coated islets from mice or pigs in diabetic mice with intact immune systems. The pig islets come from their University of Alberta collaborator Greg Korbutt, Ph.D. Korbutt’s team in Edmonton, Canada, has shown that human islets transplanted into immunosuppressed patients with brittle diabetes can produce insulin independence. “They are the leader in islet transplantation and developed the Edmonton Protocol for novel immunosuppression,” Tse says. Pig islets — in contrast to scarce supplies of human islets — offer an unlimited source of insulin-producing tissue. In the UAB experiments, the mouse and pig islets are coated with four or five bilayers of tannic acid and either poly(N–vinylpyrrolidone) or poly(N–vinylcaprolactam) by UAB research scientist Veronika Kozlovskaya, Ph.D. Mouse islet collection and transplantation of mouse or pig islets into mice is performed by UAB research technician Michael Zeiger, who grew up in Indonesia learning surgical skills from his veterinarian father. At UAB, Lund holds the Charles H. McCauley Chair of Microbiology.


News Article | February 15, 2017
Site: www.nature.com

As the Arctic slipped into the half-darkness of autumn last year, it seemed to enter the Twilight Zone. In the span of a few months, all manner of strange things happened. The cap of sea ice covering the Arctic Ocean started to shrink when it should have been growing. Temperatures at the North Pole soared more than 20 °C above normal at times. And polar bears prowling the shorelines of Hudson Bay had a record number of run-ins with people while waiting for the water to freeze over. It was a stark illustration of just how quickly climate change is reshaping the far north. And if last autumn was bizarre, it's the summers that have really got scientists worried. As early as 2030, researchers say, the Arctic Ocean could lose essentially all of its ice during the warmest months of the year — a radical transformation that would upend Arctic ecosystems and disrupt many northern communities. Change will spill beyond the region, too. An increasingly blue Arctic Ocean could amplify warming trends and even scramble weather patterns around the globe. “It’s not just that we’re talking about polar bears or seals,” says Julienne Stroeve, a sea-ice researcher at University College London. “We all are ice-dependent species.” With the prospect of ice-free Arctic summers on the horizon, scientists are striving to understand how residents of the north will fare, which animals face the biggest risks and whether nations could save them by protecting small icy refuges. But as some researchers look even further into the future, they see reasons to preserve hope. If society ever manages to reverse the surge in greenhouse-gas concentrations — as some suspect it ultimately will — then the same physics that makes it easy for Arctic sea ice to melt rapidly may also allow it to regrow, says Stephanie Pfirman, a sea-ice researcher at Barnard College in New York City. She and other scientists say that it’s time to look beyond the Arctic’s decline and start thinking about what it would take to restore sea ice. That raises controversial questions about how quickly summer ice could return and whether it could regrow fast enough to spare Arctic species. Could nations even cool the climate quickly through geoengineering, to reverse the most drastic changes up north? Pfirman and her colleagues published a paper1 last year designed to kick-start a broader conversation about how countries might plan for the regrowth of ice, and whether they would welcome it. Only by considering all the possibilities for the far future can the world stay one step ahead of the ever-changing Arctic, say scientists. “We’ve committed to the Arctic of the next generation,” Pfirman says. “What comes next?” Pfirman remembers the first time she realized just how fast the Arctic was unravelling. It was September 2007, and she was preparing to give a talk. She went online to download the latest sea-ice maps and discovered something disturbing: the extent of Arctic ice had shrunk past the record minimum and was still dropping. “Oh, no! It’s happening,” she thought. Although Pfirman and others knew that Arctic sea ice was shrinking, they hadn’t expected to see such extreme ice losses until the middle of the twenty-first century. “It was a wake-up call that we had basically run out of time,” she says. In theory, there’s still a chance that the world could prevent the total loss of summer sea ice. 
Global climate models suggest that about 3 million square kilometres — roughly half of the minimum summer coverage in recent decades — could survive if countries fulfil their commitments to the newly ratified Paris climate agreement, which limits global warming to 2 °C above pre-industrial temperatures. But sea-ice researchers aren’t counting on that. Models have consistently underestimated ice losses in the past, causing scientists to worry that the declines in the next few decades will outpace projections2. And given the limited commitments that countries have made so far to address climate change, many researchers suspect the world will overshoot the 2 °C target, all but guaranteeing essentially ice-free summers (winter ice is projected to persist for much longer). In the best-case scenario, the Arctic is in for a 4–5 °C temperature rise, thanks to processes that amplify warming at high latitudes, says James Overland, an oceanographer at the US National Oceanic and Atmospheric Administration in Seattle, Washington. “We really don’t have any clue about how disruptive that’s going to be.” The Arctic’s 4 million residents — including 400,000 indigenous people — will feel the most direct effects of ice loss. Entire coastal communities, such as many in Alaska, will be forced to relocate as permafrost melts and shorelines crumble without sea ice to buffer them from violent storms, according to a 2013 report3 by the Brookings Institution in Washington DC. Residents in Greenland will find it hard to travel on sea ice, and reindeer herders in Siberia could struggle to feed their animals. At the same time, new economic opportunities will beckon as open water allows greater access to fishing grounds, oil and gas deposits, and other sources of revenue. People living at mid-latitudes may not be immune, either. Emerging research4 suggests that open water in the Arctic might have helped to amplify weather events, such as cold snaps in the United States, Europe and Asia in recent winters. Indeed, the impacts could reach around the globe. That’s because sea ice helps to cool the planet by reflecting sunlight and preventing the Arctic Ocean from absorbing heat. Keeping local air and water temperatures low, in turn, limits melting of the Greenland ice sheet and permafrost. With summer ice gone, Greenland’s glaciers could contribute more to sea-level rise, and permafrost could release its stores of greenhouse gases such as methane. Such is the vast influence of Arctic ice. “It is really the tail that wags the dog of global climate,” says Brenda Ekwurzel, director of climate science at the Union of Concerned Scientists in Cambridge, Massachusetts. But Arctic ecosystems will take the biggest hit. In 2007, for example, biologists in Alaska noticed something odd: vast numbers of walruses had clambered ashore on the coast of the Chukchi Sea. From above, it looked like the Woodstock music festival — with tusks — as thousands of plump pinnipeds crowded swathes of ice-free shoreline. Normally, walruses rest atop sea ice while foraging on the shallow sea floor. But that year, and almost every year since, sea-ice retreat made that impossible by late summer. Pacific walruses have adapted by hauling out on land, but scientists with the US Fish and Wildlife Service worry that their numbers will continue to decline. Here and across the region, the effects of Arctic thawing will ripple through ecosystems. In the ocean, photosynthetic plankton that thrive in open water will replace algae that grow on ice. 
Some models5 suggest that biological productivity in a seasonally ice-free Arctic could increase by up to 70% by 2100, which could boost revenue from Arctic fisheries even more. (To prevent a seafood gold rush, five Arctic nations have agreed to refrain from unregulated fishing in international waters for now.) Many whales already seem to be benefiting from the bounty of food, says Sue Moore, an Arctic mammal specialist at the Pacific Marine Environmental Laboratory. But the changing Arctic will pose a challenge for species whose life cycles are intimately linked to sea ice, such as walruses and Arctic seals — as well as polar bears, which don’t have much to eat on land. Research6 suggests that many will starve if the ice-free season gets too long in much of the Arctic. “Basically, you can write off most of the southern populations,” says Andrew Derocher, a biologist at the University of Alberta in Edmonton, Canada. Such findings spurred the US Fish and Wildlife Service to list polar bears as threatened in 2008. Ice-dependent ecosystems may survive for longest along the rugged north shores of Greenland and Canada, where models suggest that about half a million square kilometres of summer sea ice will linger after the rest of the Arctic opens up (see ‘Going, going …’). Wind patterns cause ice to pile up there, and the thickness of the ice — along with the high latitude — helps prevent it from melting. “The Siberian coastlines are the ice factory, and the Canadian Arctic Archipelago is the ice graveyard,” says Robert Newton, an oceanographer at Columbia University’s Lamont–Doherty Earth Observatory in Palisades, New York. Groups such as the wildlife charity WWF have proposed protecting this ‘last ice area’ as a World Heritage Site in the hope that it will serve as a life preserver for many Arctic species. Last December, Canada announced that it would at least consider setting the area aside for conservation, and indigenous groups have expressed interest in helping to manage it. (Before he left office, then-US president Barack Obama joined Canadian Prime Minister Justin Trudeau in pledging to protect 17% of the countries’ Arctic lands and 10% of marine areas by 2020.) But the last ice area has limitations as an Arctic Noah’s ark. Some species don’t live in the region, and those that do are there in only small numbers. Derocher estimates that there are less than 2,000 polar bears in that last ice area today — a fraction of the total Arctic population of roughly 25,000. How many bears will live there in the future depends on how the ecosystem evolves with warming. The area may also be more vulnerable than global climate models suggest. Bruno Tremblay, a sea-ice researcher at McGill University in Montreal, Canada, and David Huard, an independent climate consultant based in Quebec, Canada, studied the fate of the refuge with a high-resolution sea-ice and ocean model that better represented the narrow channels between the islands of the Canadian archipelago. In a report7 commissioned by the WWF, they found that ice might actually be able to sneak between the islands and flow south to latitudes where it would melt. According to the model, Tremblay says, “even the last ice area gets flushed out much more efficiently”. If the future of the Arctic seems dire, there is one source of optimism: summer sea ice will return whenever the planet cools down again. “It’s not this irreversible process,” Stroeve says. 
“You could bring it back even if you lose it all.” Unlike land-based ice sheets, which wax and wane over millennia and lag behind climate changes by similar spans, sea ice will regrow as soon as summer temperatures get cold enough. But identifying the exact threshold at which sea ice will return is tricky, says Dirk Notz, a sea-ice researcher at the Max Planck Institute for Meteorology in Hamburg, Germany. On the basis of model projections, researchers suggest that the threshold hovers around 450 parts per million (p.p.m.) — some 50 p.p.m. higher than today. But greenhouse-gas concentrations are not the only factor that affects ice regrowth; it also depends on how long the region has been ice-free in summer, which determines how much heat can build up in the Arctic Ocean. Notz and his colleagues studied the interplay between greenhouse gases and ocean temperature with a global climate model8. They increased CO2 from pre-industrial concentrations of 280 p.p.m. to 1,100 p.p.m. — a bit more than the 1,000 p.p.m. projected by 2100 if no major action is taken to curtail greenhouse-gas emissions. Then they left it at those levels for millennia. This obliterated both winter and summer sea ice, and allowed the ocean to warm up. The researchers then reduced CO2 concentrations to levels at which summer ice should have returned, but it did not regrow until the ocean had a chance to cool off, which took centuries. By contrast, if the Arctic experiences ice-free summers for a relatively short time before greenhouse gases drop, then models suggest ice would regrow much sooner. That could theoretically start to happen by the end of the century, assuming that nations take very aggressive steps to reduce carbon dioxide levels1, according to Newton, Pfirman and their colleagues. So even if society cannot forestall the loss of summer sea ice in coming decades, taking action to keep CO2 concentrations under control could still make it easier to regrow the ice cover later, Notz says. Given the stakes, some researchers have proposed global-scale geoengineering to cool the planet and, by extension, preserve or restore ice. Others argue that it might be possible to chill just the north, for instance by artificially whitening the Arctic Ocean with light-coloured floating particles to reflect sunlight. A study9 this year suggested installing wind-powered pumps to bring water to the surface in winter, where it would freeze, forming thicker ice. But many researchers hesitate to embrace geoengineering. And most agree that regional efforts would take tremendous effort and have limited benefits, given that Earth’s circulation systems could just bring more heat north to compensate. “It’s kind of like walking against a conveyor the wrong way,” Pfirman says. She and others agree that managing greenhouse gases — and local pollutants such as black carbon from shipping — is the only long-term solution. Returning to a world with summer sea ice could have big perks, such as restoring some of the climate services that the Arctic provides to the globe and stabilizing weather patterns. And in the region itself, restoring a white Arctic could offer relief to polar bears and other ice-dependent species, says Pfirman. These creatures might be able to weather a relatively short ice-free window, hunkered down in either the last ice area or other places set aside to preserve biodiversity. When the ice returned, they could spread out again to repopulate the Arctic. That has almost certainly happened during past climate changes. 
For instance, researchers think the Arctic may have experienced nearly ice-free summers during the last interglacial period, 130,000 years ago10. But one thing is certain: getting back to a world with Arctic summer sea ice won’t be simple, politically or technically. Not everyone will embrace a return to an ice-covered Arctic, especially if it’s been blue for several generations. Companies and countries are already eyeing the opportunities for oil and gas exploration, mining, shipping, tourism and fishing in a region hungry for economic development. “In many communities, people are split,” Pfirman says. Some researchers also say that the idea of regrowing sea ice seems like wishful thinking, because it would require efforts well beyond what nations must do to meet the Paris agreement. Limiting warming to 2 °C will probably entail converting huge swathes of land into forest and using still-nascent technologies to suck billions of tonnes of CO2 out of the air. Lowering greenhouse-gas concentrations enough to regrow ice would demand even more. And if summer sea ice ever does come back, it’s hard to know how a remade Arctic would work, Derocher says. “There will be an ecosystem. It will function. It just may not look like the one we currently have.”


News Article | December 21, 2016
Site: www.eurekalert.org

For three billion years or more, the evolution of the first animal life on Earth was ready to happen, practically waiting in the wings. But the breathable oxygen it required wasn't there, and a lack of simple nutrients may have been to blame. Then came a fierce planetary metamorphosis. Roughly 800 million years ago, in the late Proterozoic Eon, phosphorus, a chemical element essential to all life, began to accumulate in shallow ocean zones near coastlines widely considered to be the birthplace of animals and other complex organisms, according to a new study by geoscientists from the Georgia Institute of Technology and Yale University. Along with phosphorus accumulation came a global chemical chain reaction, which included other nutrients, that powered organisms to pump oxygen into the atmosphere and oceans. Shortly after that transition, waves of climate extremes swept the globe, freezing it over twice for tens of millions of years each time, a highly regarded theory holds. The elevated availability of nutrients and bolstered oxygen also likely fueled evolution's greatest lunge forward. After billions of years, during which life consisted almost entirely of single-celled organisms, animals evolved. At first, they were extremely simple, resembling today's sponges or jellyfish, but Earth was on its way from being, for eons, a planet less than hospitable to complex life to becoming one bursting with it. In the last few hundred million years, biodiversity has blossomed, leading to dense jungles and grasslands echoing with animal calls, and waters writhing with every shape of fin and color of scale. And most every stage of development has left its mark on the fossil record. The researchers are careful not to imply that phosphorous necessarily caused the chain reaction, but in sedimentary rock taken from coastal areas, the nutrient has marked the spot where that burst of life and climate change took off. "The timing is definitely conspicuous," said Chris Reinhard, an assistant professor in Georgia Tech's School of Earth and Atmospheric Sciences. Reinhard and Noah Planavsky, a geochemist from Yale University, who headed up the research together, have mined records of sedimentary rock that formed in ancient coastal zones, going down layer by layer to 3.5 billion years ago, to compute how the cycle of the essential fertilizer phosphorus evolved and how it appeared to play a big part in a veritable genesis. They noticed a remarkable congruency as they moved upward through the layers of shale into the time period where animal life began, in the late Proterozoic Eon. "The most basic change was from very limited phosphorous availability to much higher phosphorus availability in surface waters of the ocean," Reinhard said. "And the transition seemed to occur right around the time that there were very large changes in ocean-atmosphere oxygen levels and just before the emergence of animals." Reinhard and Planavsky, together with an international team, have proposed that a scavenging of nutrients in an anoxic (nearly O2-free) world stunted photosynthetic organisms that otherwise had been poised for at least two billion years to make stockpiles of oxygen. Then that balanced system was upset and oceanic phosphorus made its way to coastal waters. The scientists published their findings in the journal Nature on Wednesday, December 21, 2016. Their research was funded by the National Science Foundation, the NASA Astrobiology Institute, the Sloan Foundation and the Japan Society for the Promotion of Science. 
The work provides a new view into what factors allowed life to reshape Earth's atmosphere. It helps lay a foundation that scientists can apply to make predictions about what would allow life to alter exoplanets' atmospheres, and may inspire deeper studies, here on Earth, of how oceanic-atmospheric chemistry drives climate instability and influences the rise and fall of life through the ages. Complex living things, including animals, usually have an immense metabolism and require ample O2 to drive it. The evolution of animals is unthinkable without it. The path to understanding how a nutrient dearth would starve out breathable oxygen production leads back to a very special kind of bacteria called cyanobacteria, the mother of oxygen on Earth. "The only reason we have a well-oxygenated planet we can live on is because of oxygenic photosynthesis," Planavsky said. "O2 is the waste product of photosynthesizing cells, like cyanobacteria, combining CO2 and water to build sugars." And photosynthesis is an evolutionary singularity, meaning it only evolved once in Earth's history - in cyanobacteria. Some other biological phenomena evolved repeatedly in dozens or hundreds of unrelated instances across the ages, such as the transition from single-celled organisms to rudimentary multicellular organisms. But scientists are confident that oxygenic photosynthesis evolved only this one time in Earth's history, only in cyanobacteria, and all plants and other beings on Earth that photosynthesize co-opted the development. Cyanobacteria are credited with filling Earth's atmosphere with O2, and they've been around for 2.5 billion years or more. That raises the question: What took so long? Basic nutrients that fed the bacteria weren't readily available, the scientists hypothesize. The phosphorus, which Planavsky and Reinhard specifically tracked, was in the ocean for billions of years, too, but it was tied up in the wrong places. For eons, the mineral iron, which once saturated oceans, likely bonded with phosphorus, and sank it down to dark ocean depths, far away from those shallows -- also called continental margins -- where cyanobacteria would have needed it to thrive and make oxygen. Even today, iron is used to treat waters polluted with fertilizer to remove phosphorus by sinking it as deep sediment. The researchers also used a geochemical model to show how a global system with high iron concentration and low phosphorus availability combined with low nitrogen availability in ocean shallows could perpetuate itself in a low-oxygen world. "It looks to have been such a stable planetary system," Reinhard said. "But it's obviously not the planet we live on now, so the question is, how did we transition from this low-oxygen state to where we are now?" What ultimately caused that change is a question for future research. But something did change about 800 million years ago, and cyanobacteria and other minute organisms in continental margin ecosystems got more phosphorus, the backbone of DNA and RNA, and a main actor in cell metabolism. The bacteria became more active, reproduced more quickly, ate lots more phosphorus and made loads more O2. "Phosphorus is not only essential for life," Planavsky said. "What's implicit in all this is: It can control the amount of life on our planet." When the newly multiplied bacteria died, they fell to the floor of those ocean shallows, stacking up layer by layer to decay and enrich the mud with phosphorus. The mud eventually compressed to stone.
"As the biomass increased in phosphorus content, the more of it landed in layers of sedimentary rock," Reinhard said. "To scientists, that shale is the pages of the sea floor's history book." Scientists have thumbed through them for decades, compiling data. Planavsky and Reinhard analyzed some 15,000 rock records for their study. "The first compilation we had of this was only 600 samples," Planavsky said. Reinhard added, "But you could already see it then. The phosphorus jolt was as clear as day. And as the database grew in size, the phenomenon became more entrenched." That first signal of phosphorus in Earth's coast shallows pops up in the shale record like a shot from a starting pistol in the race for abundant life. The following people coauthored the study: Benjamin Gill from Virginia Tech, Kazumi Ozaki from the University of Tokyo, Leslie Robbins and Kurt Konhauser from the University of Alberta, Timothy Lyons from the University of California Riverside, Woodward Fischer from the California Institute of Technology, Chunjiang Wang from the University of Petroleum in Beijing, and Devon Cole from Yale University. The study was funded by the National Science Foundation (grant EAR-1338290), the NASA Astrobiology Institute (grant NNA15BB03A), the Sloan Foundation (grant FR-2015-65744) and the Japan Society for the Promotion of Science. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the sponsoring agencies.


Vedran Jelic, PhD student at the University of Alberta and lead author on a new paper pioneering microscopy at terahertz frequencies. Credit: John Ulan for the University of Alberta. For the first time ever, scientists have captured images of terahertz electron dynamics of a semiconductor surface on the atomic scale. The successful experiment indicates a bright future for the new and quickly growing sub-field called terahertz scanning tunneling microscopy (THz-STM), pioneered by the University of Alberta in Canada. THz-STM allows researchers to image electron behaviour at extremely fast timescales and explore how that behaviour changes between different atoms. "We can essentially zoom in to observe very fast processes with atomic precision and over super fast time scales," says Vedran Jelic, PhD student at the University of Alberta and lead author on the new study. "THz-STM provides us with a new window into the nanoworld, allowing us to explore ultrafast processes on the atomic scale. We're talking a picosecond, or a millionth millionth of a second. It's something that's never been done before." Jelic and his collaborators used their scanning tunneling microscope (STM) to capture images of silicon atoms by raster scanning a very sharp tip across the surface and recording the tip height as it follows the atomic corrugations of the surface. While the original STM can measure and manipulate single atoms—for which its creators earned a Nobel Prize in 1986—it does so using wired electronics and is ultimately limited in speed and thus time resolution. Modern lasers produce very short light pulses that can measure a whole range of ultra-fast processes, but typically over length scales limited by the wavelength of light at hundreds of nanometers. Much effort has been expended to overcome the challenges of combining ultra-fast lasers with ultra-small microscopy. The University of Alberta scientists addressed these challenges by working in a unique terahertz frequency range of the electromagnetic spectrum that allows wireless implementation. Normally the STM needs an applied voltage in order to operate, but Jelic and his collaborators are able to drive their microscope using pulses of light instead. These pulses occur over really fast timescales, which means the microscope is able to see really fast events. By incorporating the THz-STM into an ultrahigh vacuum chamber, free from any external contamination or vibration, they are able to accurately position their tip and maintain a perfectly clean surface while imaging ultrafast dynamics of atoms on surfaces. Their next step is to collaborate with fellow material scientists and image a variety of new surfaces on the nanoscale that may one day revolutionize the speed and efficiency of current technology, ranging from solar cells to computer processing. "Terahertz scanning tunneling microscopy is opening the door to an unexplored regime in physics," concludes Jelic, who is studying in the Ultrafast Nanotools Lab with University of Alberta professor Frank Hegmann, a world expert in ultra-fast terahertz science and nanophysics. Their findings, "Ultrafast terahertz control of extreme tunnel currents through single atoms on a silicon surface," appeared in the February 20 issue of Nature Physics. Explore further: Researchers demonstrate way to shape electron beams in time through interaction with terahertz electromagnetic fields More information: Vedran Jelic et al. 
Ultrafast terahertz control of extreme tunnel currents through single atoms on a silicon surface, Nature Physics (2017). DOI: 10.1038/nphys4047


News Article | March 2, 2017
Site: www.sciencenewsdaily.org

A team of computing scientists from the University of Alberta's Computer Poker Research Group is once again capturing the world's collective fascination with artificial intelligence. In a historic result for the flourishing AI research community, the team—which includes researchers from Charles University in Prague and Czech Technical University—has developed an AI system called DeepStack that defeated professional poker players in December 2016. The landmark findings have just been published in Science, one of the world's most prestigious peer-reviewed scientific journals.


News Article | September 8, 2016
Site: phys.org

The discovery of the structure of DNA in 1953 made it immediately obvious how DNA could be copied, or replicated. The three-dimensional structure of PrPSc has remained elusive, but the hope is that its discovery would likewise promote the understanding of prion replication, as well as lead to the development of structure-based therapeutic interventions. Convinced that the structure of what they call 'infectious conformers'—PrPSc from the brain of diseased animals—will be most informative, a team led by Holger Wille and Howard Young from the University of Alberta in Edmonton, Canada, and Jesús Requena from the University of Santiago de Compostela, Spain, is applying electron cryomicroscopy (cryo-EM) to the problem. In this study, they used cryo-EM to record and analyze the structure of PrPSc isolated from the brain of infected mice. Prion-infected mouse and human brains contain a mix of different versions of PrPSc because different types of molecules such as lipids and sugars have been attached to the core protein. The heterogeneity of these modified brain-derived PrPSc makes it difficult to analyze their structure. To avoid this difficulty, the researchers started with PrPSc molecules that were truncated to delete the attachment of one type of modification, the so-called GPI lipid anchor. By using as a source the brains of transgenic mice expressing a GPI-anchorless form of the prion protein, they were able to analyze a more homogeneous version of PrPSc that nonetheless retained its ability to cause disease and convert normal cellular prion proteins. In the diseased brain, PrPSc molecules are often arranged in fibrils. The cryo-EM images of the mouse GPI-anchorless PrPSc fibrils, and their subsequent analysis, showed that they consist of two intertwined protofilaments of defined volume. As cryo-EM preserves the native structure of specimens, this information sets a structural restraint for the conformation of GPI-anchorless PrPSc, with the implication that PrPSc molecules can form protofilaments with the observed dimensions only if they are folded up onto themselves. Based on their own analyses (and consistent with data from related studies), the researchers conclude that the cryo-EM data reveal a four-rung β-solenoid architecture as the basic element for the structure of the mammalian prion GPI-anchorless PrPSc. β-solenoids are protein structures that consist of an array of repetitive elements with secondary structures that are predominantly beta sheets. These PrPSc beta-sheet rungs, the researchers propose, serve as templates for new unfolded PrPSc molecules. What they have learned about the structure of GPI-anchorless PrPSc and its four-rung β-solenoid architecture, the researchers say, allows them to rule out all previously proposed templating mechanisms for the replication of infectious prions in vivo. Discussing their ideas for the conversion of PrPC to PrPSc, the researchers note that the molecular forces responsible for the templating are fundamentally similar to those operating during the replication of DNA. "Because the exquisite specificity of the A:T and G:C pairings is lacking", they conclude that "a much more complex array of forces controls the pairing of the pre-existing and nascent β-rungs". "Templating based on a four-rung β-solenoid architecture", they say, "must involve the upper- and lowermost β-solenoid rungs [which] are inherently aggregation-prone".
"Once an additional β-rung has formed", they propose, "it creates a fresh "sticky" edge ready to continue templating until the incoming unfolded PrP molecule has been converted into another copy of the infectious conformer". The researchers acknowledge that higher-resolution structures, as well as structures of other PrPSc molecules, will be needed. Nonetheless, they conclude, "we present data based on cryo-EM analysis that strongly support the notion that GPI-anchorless PrPSc fibrils consist of stacks of four-rung β-solenoids. Two of such protofilaments intertwine to form double fibrils [...]. The four-rung β-solenoid architecture of GPI-anchorless PrPSc provides unique and novel insights into the molecular mechanism by which mammalian prions replicate". More information: Vázquez-Fernández E, Vos MR, Afanasyev P, Cebey L, Sevillano AM, Vidal E, et al. (2016) The Structural Architecture of an Infectious Mammalian Prion Using Electron Cryomicroscopy. PLoS Pathog 12(9): e1005835. DOI: 10.1371/journal.ppat.1005835


News Article | February 16, 2017
Site: www.marketwired.com

VANCOUVER, BRITISH COLUMBIA--(Marketwired - Feb. 16, 2017) - Vinergy Resources Ltd. ("Vinergy" or the "Company") (CSE:VIN)(OTCQB:VNNYF) in conjunction with its proposed acquisition of MJ BioPharma (announced December 14, 2016) is pleased to announce that, as a part of the Company's strategy to develop a lab for research and development products that test and identify specific cannabinoid isolates for targeted therapeutic purposes, it has appointed John Simon to the Company's Scientific Advisory Board (SAB). John has a Bachelor of Science from the University of Alberta, is a senior member of the American Society for Quality, a Certified Quality Auditor (CQA), a Registered Quality Assurance Professional in Good Laboratory Practice (RQAP-GLP) and maintains Regulatory Affairs Certification (RAC) through the Regulatory Affairs Professional Society. John has held various management positions in Quality Assurance and Regulatory Affairs and has worked as a consultant supporting clients in the medical device, pharmaceutical, biotechnology and natural health product industries since 2004. He has been directly involved in Federal Drug Administration (FDA) and Health Canada audits of medical device manufacturers, drug manufacturers, testing facilities, and clinical sites. He has experience with submissions to the FDA and Health Canada. Through John's consultancy practice he assists companies with both site licenses and product licenses. He has helped companies obtain, renew and maintain in good standing Drug Establishment Licenses (DEL); Medical Device Establishment Licenses (MDEL); Natural and Non-prescription Site Licenses (NNHPD); and Licenses to Cultivate and Distribute under the Marihuana for Medical Purposes Regulations (MMPR) (now under the ACMPR). John also works in creating quality systems to support ISO certification for various clients (ISO 17025, ISO 13485 and ISO 9001). John consults to groups in the creation of specifications, batch records and procedures to support the design and development of a variety of products including cosmetics, natural health products, medical devices, biologics, pharmaceuticals and controlled substances. "With John's substantial background in QA and regulatory affairs specific to drug development and the cannabis industry he will be a key asset in driving our cannabis product and technology initiatives," said Mr.Kent Deuters, CEO of MJ Biopharma. This news release does not constitute an offer to sell or a solicitation of an offer to buy any of the securities in the United States. The securities have not been and will not be registered under the United States Securities Act of 1933, as amended (the "U.S. Securities Act"), or any state securities laws and may not be offered or sold within the United States or to U.S. Persons unless registered under the U.S. Securities Act and applicable state securities laws or an exemption from such registration is available. The CSE does not accept responsibility for the adequacy or accuracy of this release. The forward-looking information contained in this press release is made as of the date of this press release and, except as required by applicable law, the Company does not undertake any obligation to update publicly or to revise any of the included forward-looking information, whether as a result of new information, future events or otherwise, except as may be required by law. By its very nature, such forward-looking information requires the Company to make assumptions that may not materialize or that may not be accurate. 
This forward-looking information is subject to known and unknown risks and uncertainties and other factors, which may cause actual results, levels of activity and achievements to differ materially from those expressed or implied by such information. Information pertaining to the Target has been provided by the Target.


Scientists have discovered the oldest known water on Earth in an ancient pool in Canada. The water, which was untouched for 2 billion years, was found 3 kilometers (1.86 miles) underground. In 2013, researchers found water dating back about 1.5 billion years at the same site, an underground tunnel at the Kidd Mine in Ontario, but after searching deeper, they found an even older source of water buried underground. After conducting an analysis of the gases dissolved in the ancient groundwater, which include neon, helium, argon and xenon, the researchers dated the liquid to at least 2 billion years, making it the oldest known water in the world. The researchers were also able to find chemical traces left behind by tiny unicellular organisms that once lived in the water. "The microbes that produced this signature couldn't have done it overnight. This isn't just a signature of very modern microbiology," study lead Barbara Sherwood Lollar from the University of Toronto told BBC News. "This has to be an indication that organisms have been present in these fluids on a geological timescale." The discovery, which was presented at the American Geophysical Union Fall Meeting in San Francisco, offers clues about the possibility of alien life residing within the underground pockets of water on planet Mars. NASA has been sending rovers to the planet with the hope of finding evidence of extraterrestrial life. Studying water on Earth such as the one discovered in Canada may offer hints on where life may be found on the red planet and elsewhere in the solar system. In an earlier study that looked at the 1.5-billion-year-old water discovered 2.4 kilometers underground, researchers have found evidence that the ancient water has its own independent life support system. What this means is that it is possible that exotic life has been evolving underground, totally independent from life on the surface for billions of years without sunlight or atmospheric oxygen. If microbial communities can evolve in parallel to life as we know it deep below the surface, it is possible that the same thing can happen on the red planet. "Because this is a fairly common geological setting in early Earth as well as modern Mars, we think that as long as the right minerals and water are present, likely kilometers below the surface, they can produce the necessary energy source to support the microbes. I'm not saying that these microbes definitively exist, but the conditions are right to support microbial life on Mars," said Long Li from the University of Alberta. © 2017 Tech Times, All rights reserved. Do not reproduce without permission.


News Article | March 16, 2016
Site: www.nature.com

Following the defeat of one of its finest human players, the ancient game of Go has joined the growing list of tasks at which computers perform better than humans. In a 6-day tournament in Seoul, watched by a reported 100 million people around the world, the computer algorithm AlphaGo, created by the Google-owned company DeepMind, beat Go professional Lee Sedol by 4 games to 1. The complexity and intuitive nature of the ancient board game had established Go as one the greatest challenges in artificial intelligence (AI). Now the big question is what the DeepMind team will turn to next. AlphaGo’s general-purpose approach — which was mainly learned, with a few elements crafted specifically for the game — could be applied to problems that involve pattern recognition, decision-making and planning. But the approach is also limited. “It’s really impressive, but at the same time, there are still a lot of challenges,” says Yoshua Bengio, a computer scientist at the University of Montreal in Canada. Lee, who had predicted that he would win the Google tournament in a landslide, was shocked by his loss. In October, AlphaGo beat European champion Fan Hui. But the version of the program that won in Seoul is significantly stronger, says Jonathan Schaeffer, a computer scientist at the University of Alberta in Edmonton, Canada, whose Chinook software mastered draughts in 2007: “I expected them to use more computational resources and do a lot more learning, but I still didn’t expect to see this amazing level of performance.” The improvement was largely down to the fact that the more AlphaGo plays, the better it gets, says Miles Brundage, a social scientist at Arizona State University in Tempe, who studies trends in AI. AlphaGo uses a brain-inspired architecture known as a neural network, in which connections between layers of simulated neurons strengthen on the basis of experience. It learned by first studying 30 million Go positions from human games and then improving by playing itself over and over again, a technique known as reinforcement learning. Then, DeepMind combined AlphaGo’s ability to recognize successful board configurations with a ‘look-ahead search’, in which it explores the consequences of playing promising moves and uses that to decide which one to pick. Next, DeepMind could tackle more games. Most board games, in which players tend to have access to all information about play, are now solved. But machines still cannot beat humans at multiplayer poker, say, in which each player sees only their own cards. The DeepMind team has expressed an interest in tackling Starcraft, a science-fiction strategy game, and Schaeffer suggests that DeepMind devise a program that can learn to play different types of game from scratch. Such programs already compete annually at the International General Game Playing Competition, which is geared towards creating a more general type of AI. Schaeffer suspects that DeepMind would excel at the contest. “It’s so obvious, that I’m positive they must be looking at it,” he says. DeepMind’s founder and chief executive Demis Hassabis mentioned the possibility of training a version of AlphaGo using self-play alone, omitting the knowledge from human-expert games, at a conference last month. The firm created a program that learned to play less complex arcade games in this manner in 2015. Without a head start, AlphaGo would probably take much longer to learn, says Bengio — and might never beat the best human. 
But it’s an important step, he says, because humans learn with such little guidance. DeepMind, based in London, also plans to venture beyond games. In February the company founded DeepMind Health and launched a collaboration with the UK National Health Service: its algorithms could eventually be applied to clinical data to improve diagnoses or treatment plans. Such applications pose different challenges from games, says Oren Etzioni, chief executive of the non-profit Allen Institute for Artificial Intelligence in Seattle, Washington. “The universal thing about games is that you can collect an arbitrary amount of data,” he says — and that the program is constantly getting feedback on what’s a good or bad move by playing many games. But, in the messy real world, data — on rare diseases, say — might be scarcer, and even with common diseases, labelling the consequences of a decision as ‘good’ or ‘bad’ may not be straightforward. Hassabis has said that DeepMind’s algorithms could give smartphone personal assistants a deeper understanding of users’ requests. And AI researchers see parallels between human dialogue and games: “Each person is making a play, and we have a sequence of turns, and each of us has an objective,” says Bengio. But they also caution that language and human interaction involve a lot more uncertainty. DeepMind is fuelled by a “very powerful cocktail” of the freedoms usually reserved for academic researchers, and by the vast staff and computing resources that come with being a Google-backed firm, says Joelle Pineau, a computer scientist at McGill University in Montreal. Its achievement with Go has prompted speculation about when an AI will have a versatile, general intelligence. “People’s minds race forward and say, if it can beat a world champion it can do anything,” says Etzioni. But deep reinforcement learning remains applicable only in certain domains, he says: “We are a long, long way from general artificial intelligence.” DeepMind’s approach is not the only way to push the boundaries of AI. Gary Marcus, a neuroscientist at New York University in New York City, has co-founded a start-up company, Geometric Intelligence, to explore learning techniques that extrapolate from a small number of examples, inspired by how children learn. In its short life, AlphaGo probably played hundreds of millions of games — many more than Lee, who still won one of the five games against AlphaGo. “It’s impressive that a human can use a much smaller quantity of data to pick up a pattern,” says Marcus. “Probably, humans are much faster learners than computers.”
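For readers wondering how the pieces described above fit together, the toy sketch below shows the general recipe of scoring candidate moves with a learned value estimate and a policy prior, then picking the best after a short look-ahead. It uses a trivial counting game and hand-written stand-ins for the networks; it is not DeepMind's code, and the real AlphaGo combines deep neural networks with Monte Carlo tree search over full Go positions.

```python
# Toy illustration of "policy prior + look-ahead + value estimate" on a trivial
# counting game (players alternately add 1-3; whoever reaches exactly 21 wins).
# policy_prior and value_estimate are hand-written stubs, not learned networks.
TARGET = 21

def legal_moves(total):
    return [m for m in (1, 2, 3) if total + m <= TARGET]

def policy_prior(total):
    # Stand-in for a policy network: a fixed, mildly informed preference.
    moves = legal_moves(total)
    prefs = {m: 2.0 if (TARGET - total - m) % 4 == 0 else 1.0 for m in moves}
    z = sum(prefs.values())
    return {m: p / z for m, p in prefs.items()}

def value_estimate(total):
    # Stand-in for a value network: +1 if the player to move here can force a
    # win (true when the distance to 21 is not a multiple of 4), else -1.
    return 1.0 if (TARGET - total) % 4 != 0 else -1.0

def choose_move(total, c=0.5):
    """One-step look-ahead: prefer moves that leave the opponent in a bad
    position, with a small bonus from the policy prior."""
    prior = policy_prior(total)
    scored = [(-value_estimate(total + m) + c * prior[m], m) for m in legal_moves(total)]
    return max(scored)[1]

total = 0
while total < TARGET:
    move = choose_move(total)
    total += move
    print(f"play {move} -> running total {total}")
```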


News Article | March 2, 2017
Site: www.eurekalert.org

A team of computing scientists from the University of Alberta's Computer Poker Research Group is once again capturing the world's collective fascination with artificial intelligence. In a historic result for the flourishing AI research community, the team -- which includes researchers from Charles University in Prague and Czech Technical University -- has developed an AI system called DeepStack that defeated professional poker players in December 2016. The landmark findings have just been published in Science, one of the world's most prestigious peer-reviewed scientific journals. DeepStack bridges the gap between approaches used for games of perfect information -- like those used in checkers, chess, and Go--with those used for imperfect information games, reasoning while it plays using "intuition" honed through deep learning to reassess its strategy with each decision. "Poker has been a longstanding challenge problem in artificial intelligence," says Michael Bowling, professor in the University of Alberta's Faculty of Science and principal investigator on the study. "It is the quintessential game of imperfect information in the sense that the players don't have the same information or share the same perspective while they're playing." Don't let the name fool you: imperfect information games are serious business. These "games" are a general mathematical model that describe how decision-makers interact. Artificial intelligence research has a storied history of using parlour games to study these models, but attention has been focused primarily on perfect information games. "We need new AI techniques that can handle cases where decision-makers have different perspectives," says Bowling, explaining that developing techniques to solve imperfect information games will have applications well beyond the poker table. "Think of any real world problem. We all have a slightly different perspective of what's going on, much like each player only knowing their own cards in a game of poker." Immediate applications include making robust medical treatment recommendations, strategic defense planning, and negotiation. This latest discovery builds on an already impressive body of research findings about artificial intelligence and imperfect information games that stretches back to the creation of the University of Alberta's Computer Poker Research Group in 1996. Bowling, who became the group's principal investigator in 2006, has led the group to several milestones for artificial intelligence. He and his colleagues developed Polaris in 2008, beating top poker players at heads-up limit Texas hold'em poker. They then went on to solve heads-up limit hold'em with Cepheus, published in 2015 in Science. DeepStack extends the ability to think about each situation during play--which has been famously successful in games like checkers, chess, and Go--to imperfect information games using a technique called continual re-solving. This allows DeepStack to determine the correct strategy for a particular poker situation without thinking about the entire game by using its "intuition" to evaluate how the game might play out in the near future. "We train our system to learn the value of situations," says Bowling. "Each situation itself is a mini poker game. Instead of solving one big poker game, it solves millions of these little poker games, each one helping the system to refine its intuition of how the game of poker works. And this intuition is the fuel behind how DeepStack plays the full game." 
Thinking about each situation as it arises is important for complex problems like heads-up no-limit hold'em, which has vastly more unique situations than there are atoms in the universe, largely due to players' ability to wager different amounts including the dramatic "all-in." Despite the game's complexity, DeepStack takes action at human speed -- with an average of only three seconds of "thinking" time--and runs on a simple gaming laptop with an Nvidia graphics processing unit. To test the approach, DeepStack played against a pool of professional poker players in December, 2016, recruited by the International Federation of Poker. Thirty-three players from 17 countries were recruited, with each asked to play a 3000-hand match over a period of four weeks. DeepStack beat each of the 11 players who finished their match, with only one outside the margin of statistical significance, making it the first computer program to beat professional players in heads-up no-limit Texas hold'em poker. "DeepStack: Expert-Level Artificial Intelligence in No-Limit Poker" will be published online by the journal Science on Thursday, March 2, 2017.
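As a very rough illustration of the depth-limited idea described above, the sketch below searches only a few decisions ahead and lets a stand-in "value network" score everything beyond that frontier. It deliberately omits the game-theoretic machinery DeepStack actually relies on (hidden cards, ranges of possible hands and counterfactual regret minimization), and the action set, transition and value function are invented placeholders rather than the published algorithm.

```python
# Toy sketch of depth-limited look-ahead with a learned evaluation at the
# frontier. The action set, transition and value_net below are placeholders,
# not DeepStack's actual representation of poker situations.

def value_net(situation):
    # Stand-in for the deep network ("intuition") that scores a situation.
    # Here: a toy heuristic favouring lines with more aggressive actions.
    return situation.count("raise") - situation.count("fold")

def legal_actions(situation):
    return ["fold", "call", "raise"]          # hypothetical action set

def play(situation, action):
    return situation + (action,)              # hypothetical transition

def lookahead(situation, depth):
    """Return (best_action, value), searching `depth` decisions ahead and
    replacing anything deeper with the value_net estimate."""
    if depth == 0 or not legal_actions(situation):
        return None, value_net(situation)
    best_action, best_value = None, float("-inf")
    for a in legal_actions(situation):
        _, v = lookahead(play(situation, a), depth - 1)
        if v > best_value:
            best_action, best_value = a, v
    return best_action, best_value

print(lookahead(situation=("deal",), depth=3))
```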


Liu X.,University of Arkansas at Little Rock | Xu W.,University of Alberta
IEEE Transactions on Power Systems | Year: 2010

In this paper we develop a load dispatch model to minimize the emission of oxides of nitrogen (NOx). This model takes into account both thermal generators and wind turbines. We derive a closed-form expression in terms of the incomplete gamma function (IGF) to characterize the impact of wind power. Accordingly, the effects of wind power on emission control are investigated. The model is implemented in a computer program and a set of numerical experiments for a standard test system is reported. © 2010 IEEE.
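As a rough illustration of why the incomplete gamma function shows up in this setting, the sketch below computes the expected output of a single turbine whose wind speed follows a Weibull distribution and whose power curve rises linearly between cut-in and rated speed. The Weibull and turbine parameters are assumed for illustration, and this is not the paper's exact formulation; a direct numerical integration is included as a cross-check.

```python
# Expected wind power under a Weibull wind-speed model, written with the
# regularized lower incomplete gamma function. All parameters are assumed.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, gammainc   # gammainc(s, x) = regularized lower IGF

k, c = 2.0, 8.0                        # Weibull shape and scale (m/s), assumed
v_in, v_r, v_out = 3.0, 12.0, 25.0     # cut-in, rated, cut-out speeds (m/s), assumed
P_r = 2.0                              # rated power (MW), assumed

cdf = lambda v: 1.0 - np.exp(-(v / c) ** k)          # Weibull CDF

def truncated_first_moment(a, b):
    """Integral of v * pdf(v) from a to b, via the incomplete gamma function."""
    s = 1.0 + 1.0 / k
    return c * gamma(s) * (gammainc(s, (b / c) ** k) - gammainc(s, (a / c) ** k))

# Closed form for a power curve that ramps linearly from v_in to v_r, then is flat.
ramp = P_r / (v_r - v_in) * (truncated_first_moment(v_in, v_r)
                             - v_in * (cdf(v_r) - cdf(v_in)))
flat = P_r * (cdf(v_out) - cdf(v_r))
expected_power_closed = ramp + flat

# Numerical cross-check: integrate power(v) * pdf(v) directly.
pdf = lambda v: (k / c) * (v / c) ** (k - 1) * np.exp(-(v / c) ** k)
power = lambda v: P_r * min(max((v - v_in) / (v_r - v_in), 0.0), 1.0)
expected_power_numeric, _ = quad(lambda v: power(v) * pdf(v), 0.0, v_out)

print(f"closed form: {expected_power_closed:.4f} MW, numeric: {expected_power_numeric:.4f} MW")
```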


Liu X.,University of Arkansas at Little Rock | Xu W.,University of Alberta
IEEE Transactions on Sustainable Energy | Year: 2010

In this paper, a load dispatch model is developed for a system consisting of both thermal generators and wind turbines. The stochastic availability of wind power is included in the model as a probabilistic constraint. This strategy, referred to as the here-and-now approach, avoids the probabilistic infeasibility that appears in conventional models. It is shown that, based on the presented model, the impacts of stochastic wind speed on the generated power can be readily assessed. © 2010 IEEE.
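To make the "here-and-now" idea concrete, the toy sketch below enforces a chance constraint on a single turbine: it commits only to a wind power level that will actually be available with a chosen probability under a Weibull wind-speed model. The parameters and the 25 per cent shortfall risk are assumptions for illustration and do not reproduce the paper's dispatch model.

```python
# Toy chance constraint: the largest wind power w we can schedule such that
# Prob(available power >= w) >= 1 - eps. All parameters are assumed.
import numpy as np

k, c = 2.0, 8.0                        # Weibull shape and scale (m/s), assumed
v_in, v_r, v_out = 3.0, 12.0, 25.0     # cut-in, rated, cut-out speeds (m/s), assumed
P_r, eps = 2.0, 0.25                   # rated power (MW) and accepted shortfall risk

F = lambda v: 1.0 - np.exp(-(v / c) ** k)               # Weibull CDF
F_inv = lambda p: c * (-np.log(1.0 - p)) ** (1.0 / k)   # Weibull quantile function

# Delivering at least w requires a wind speed between v_w and v_out, where v_w
# comes from inverting the linear part of the power curve. So we need
# F(v_out) - F(v_w) >= 1 - eps, i.e. F(v_w) <= F(v_out) - (1 - eps).
limit = F(v_out) - (1.0 - eps)
if limit <= F(v_in):
    w_max = 0.0                        # no positive output can be guaranteed at this confidence
else:
    v_w = min(F_inv(limit), v_r)       # cap at rated speed
    w_max = P_r * (v_w - v_in) / (v_r - v_in)

print(f"Wind power schedulable with {1 - eps:.0%} confidence: {w_max:.2f} MW")
```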


Heusch G.,University of Duisburg - Essen | Libby P.,Harvard University | Gersh B.,Rochester College | Yellon D.,University College London | And 3 more authors.
The Lancet | Year: 2014

Remodelling is a response of the myocardium and vasculature to a range of potentially noxious haemodynamic, metabolic, and inflammatory stimuli. Remodelling is initially functional, compensatory, and adaptive but, when sustained, progresses to structural changes that become self-perpetuating and pathogenic. Remodelling involves responses not only of the cardiomyocytes, endothelium, and vascular smooth muscle cells, but also of interstitial cells and matrix. In this Review we characterise the remodelling processes in atherosclerosis, vascular and myocardial ischaemia-reperfusion injury, and heart failure, and we draw attention to potential avenues for innovative therapeutic approaches, including conditioning and metabolic strategies.


News Article | September 2, 2016
Site: www.chromatographytechniques.com

A rare small-bodied pterosaur, a flying reptile from the Late Cretaceous period approximately 77 million years ago, is the first of its kind to have been discovered on the west coast of North America. Pterosaurs are the earliest vertebrates known to have evolved powered flight. The specimen is unusual as most pterosaurs from the Late Cretaceous were much larger with wingspans between 4 and 11 meters (the biggest being as large as a giraffe, with a wingspan of a small plane), whereas this new specimen had a wingspan of only 1.5 meters. The fossils of this animal are the first associated remains of a small pterosaur from this time, comprising a humerus, dorsal vertebrae (including three fused notarial vertebrae) and other fragments. They are the first to be positively identified from British Columbia, Canada and have been identified as belonging to an azhdarchoid pterosaur, a group of short-winged and toothless flying reptiles which dominated the final phase of pterosaur evolution. Previous studies suggest that the Late Cretaceous skies were only occupied by much larger pterosaur species and birds, but this new finding, which is reported in the Royal Society Open Science journal, provides crucial information about the diversity and success of Late Cretaceous pterosaurs. "This new pterosaur is exciting because it suggests that small pterosaurs were present all the way until the end of the Cretaceous, and weren't outcompeted by birds. The hollow bones of pterosaurs are notoriously poorly preserved, and larger animals seem to be preferentially preserved in similarly aged Late Cretaceous ecosystems of North America. This suggests that a small pterosaur would very rarely be preserved, but not necessarily that they didn't exist," said lead author Elizabeth Martin-Silverstone, a palaeobiology PhD Student at the University of Southampton. The fossil fragments were found on Hornby Island in British Columbia in 2009 by a collector and volunteer from the Royal British Columbia Museum, who then donated them to the Museum. At the time, it was given to Victoria Arbour, a then PhD student and dinosaur expert at the University of Alberta. Victoria, as a postdoctoral researcher at North Carolina State University and the North Carolina Museum of Natural Sciences, then contacted Martin-Silverstone and the Royal BC Museum sent the specimen for analysis in collaboration with Mark Witton, a pterosaur expert at the University of Portsmouth. "The specimen is far from the prettiest or most complete pterosaur fossil you'll ever see, but it's still an exciting and significant find. It's rare to find pterosaur fossils at all because their skeletons were lightweight and easily damaged once they died, and the small ones are the rarest of all. But luck was on our side and several bones of this animal survived the preservation process. Happily, enough of the specimen was recovered to determine the approximate age of the pterosaur at the time of its death. By examining its internal bone structure and the fusion of its vertebrae we could see that, despite its small size, the animal was almost fully grown. The specimen thus seems to be a genuinely small species, and not just a baby or juvenile of a larger pterosaur type" said Witton. "The absence of small juveniles of large species—which must have existed—in the fossil record is evidence of a preservational bias against small pterosaurs in the Late Cretaceous. 
It adds to a growing set of evidence that the Late Cretaceous period was not dominated by large or giant species, and that smaller pterosaurs may have been well represented in this time. As with other evidence of smaller pterosaurs, the fossil specimen is fragmentary and poorly preserved: researchers should check collections more carefully for misidentified or ignored pterosaur material, which may enhance our picture of pterosaur diversity and disparity at this time," added Martin-Silverstone.


University of Alberta PhD student Taleana Huff teamed up with her supervisor Robert Wolkow to channel a technique called atomic force microscopy—or AFM—to pattern and image electronic circuits at the atomic level. This is the first time the powerful technique has been applied to atom-scale fabrication and imaging of a silicon surface, notoriously difficult because the act of applying the technique risks damaging the silicon. However, the reward is worth the risk, because this level of control could stimulate the revolution of the technology industry. "It's kind of like braille," explains Huff. "You bring the atomically sharp tip really close to the sample surface to simply feel the atoms by using the forces that naturally exist among all materials." One of the problems with working at the atomic scale is the risk of perturbing the thing you are measuring by the act of measuring it. Huff, Wolkow, and their research collaborators have largely overcome those problems and as a result can now build by moving individual atoms around: most importantly, those atomically defined structures result in a new level of control over single electrons. This is the first time that the powerful AFM technique has been shown to see not only the silicon atoms but also the electronic bonds between those atoms. Central to the technique is a powerful new computational approach that analyzes and verifies the identity of the atoms and bonds seen in the images. "We couldn't have performed these new and demanding computations without the support of Compute Canada. This combined computation and measurement approach succeeds in creating a foundation for a whole new generation of both classical and quantum computing architectures," says Wolkow. He has his long-term sights set on making ultra-fast and ultra-low-power silicon-based circuits, potentially consuming ten thousand times less power than what is on the market. "Imagine instead of your phone battery lasting a day that it could last weeks at a time, because you're only using a couple of electrons per computational pattern," says Huff, who explains that the precision of the work will allow the group and potential industry investors to geometrically pattern atoms to make just about any kind of logic structure imaginable. This hands-on work was exactly what drew the self-described Canadian-by-birth American-by-personality to condensed matter physics in the University of Alberta's Faculty of Science. Following undergraduate work in astrophysics—and an internship at NASA—Huff felt the urge to get more tangible with her graduate work. (With hobbies that include power lifting and motorcycle restoration, she comes by the desire for tangibility quite honestly.) "I wanted something that I could touch, something that was going to be a physical product I could work with right away," says Huff. And in terms of who she wanted to work with, she went straight to the top, seeking out Wolkow, renowned the world over for his work with quantum dots, dangling bonds, and industry-pushing work on atomic-scale science. "He just has such passion and conviction for what he does," she continues. "With Bob, it's like, 'we're going to change the world.' I find that really inspiring," says Huff. "Taleana has the passion and the drive to get very challenging things done. She now has understanding and skills that are truly unique in the world giving us a great advantage in the field," says Wolkow. "We just need to work on her taste in music," he adds with a laugh. 
The group's latest research findings, "Possible observation of chemical bond contrast in AFM images of a hydrogen terminated silicon surface" were published in the February 13, 2017 issue of Nature Communications.


News Article | February 15, 2017
Site: www.eurekalert.org

A study published today in the journal Nature is the first to show that it is possible to predict within the first year of life, whether some infants will go on to develop autism. The ability to identify autism risk during infancy could set the stage for developing very early preventive treatments when the brain is most malleable. Earlier detection also provides opportunities for early treatment--and earlier intervention is known to be associated with better long term outcomes. Researchers used magnetic resonance imaging (MRI) technology to capture brain images of infants who are considered at high risk for developing autism spectrum disorder (ASD) by virtue of having an older sibling with ASD. The research team took different measurements of the child's brain at 6 and 12 months of age, including overall volume, surface area and thickness of the cerebral cortex in particular regions. A computer-generated algorithm was used to combine these measurements and was able to predict which babies would develop autism by age two with more than 90 percent accuracy. The Center for Autism Research (CAR) at Children's Hospital of Philadelphia (CHOP) was a major study site in the multicenter research project. The study's lead site was based at University of North Carolina-Chapel Hill. "The results of this study are a real breakthrough for early diagnosis of autism," said Robert T. Schultz, PhD, who directs the Center for Autism Research and led the CHOP study site. "While we have known for some time that autism emerges in subtle, gradual ways over the first few years of life, this study offers the first firm evidence before a child's first birthday predicting whether certain high-risk children are likely to be diagnosed with autism." Despite extensive research, it has been impossible until now to identify these children before the second year of life, when behaviors typical of autism emerge. "Our study shows that early brain development biomarkers could be very useful in identifying babies at the highest risk for autism before behavioral symptoms emerge," said the study's senior author, Joseph Piven, MD, of the Carolina Institute for Developmental Disabilities (CIDD) at the University of North Carolina. Autism Spectrum Disorder (or ASD) is a complex developmental disability characterized by difficulties in social interaction, verbal and nonverbal communication, and repetitive behaviors or interests. Behavioral symptoms usually become evident between ages two and four, and research has shown that children who receive the earliest treatment tend to reap the most benefits. It is estimated that one in 68 school-aged children are diagnosed with autism. In infants who have older siblings with autism, the risk of developing ASD may be as high as 20 out of every 100 births. There are about 3 million people with autism in the United States and tens of millions around the world. For this Nature study, Piven, Schultz, and researchers from across North America conducted MRI scans of 106 high-risk infants and 42 low-risk infants at six, 12, and 24 months of age. They found that the babies who developed autism experienced much more rapid growth of the brain's surface area from six to 12 months than babies who did not show evidence of autism at 24 months of age. The study team also found a link between increased growth rate of surface area in the first year of life and an increased growth rate of overall brain volume in the second year of life. 
Extensive prior research has identified enlarged brain size as a risk factor for autism. This most recent study shows this pattern of rapid growth originates in specific brain regions long before brain size itself shows significant enlargement. In addition, brain overgrowth correlated with the severity of social deficits that emerged by age two. The researchers made measurements of cortical surface areas and cortical thickness at 6 and 12 months of age and studied the rate of growth between 6 and 12 months of age. These measurements, combined with brain volume and sex of the infants predicted with a high degree of accuracy who would develop autism by age 24 months. To generate these predictive results, the team drew on machine learning, a statistical approach that uses pattern recognition to make very detailed predictions. The brain differences at 6 and 12 months of age in infants with older siblings with autism correctly predicted eight out of ten infants who would later meet criteria for autism at 24 months of age in comparison to those infants with older ASD siblings who did not meet criteria for autism at 24 months. This analytic approach was also almost perfect in predicting which high-risk babies would not develop autism by age 2 years. The authors emphasize that the effectiveness of the algorithm needs to be reproduced in future studies in order to be ready for clinical use. "If we are able to replicate these results in further studies, these findings promise to change how we approach infant and toddler screening for autism, making it possible to identify infants who will later develop autism before the behavioral symptoms of autism become apparent," Schultz said. For example, if parents have a child with autism and then have a second child, such a test might be clinically useful in identifying infants at highest risk for developing this condition. The idea would be to then intervene 'pre-symptomatically' before the defining symptoms of autism emerge. The study also has implications for developing new autism treatments, said Schultz, a pediatric neuropsychologist. "Using brain imaging, we were able to pinpoint areas of the brain where atypical development contributes to autism. Understanding these neural mechanisms may guide us in developing opportunities for early treatment--possibly, before the symptoms of autism become outwardly visible." The same collaborators published a related study last month using functional MRI scans to identify brain networks involved in a key social behavior called initiation of joint attention. In this behavior--often impaired in ASD--a baby focuses on an object and draws another person's attention to that object. This study is the earliest known description of how functional brain systems underlie an important social behavior. In addition to adding to the neurobiology of how social behavior develops, those findings may inform efforts to design new treatments. "Putting this into the larger context of neuroscience research and treatment, there is currently a big push within the field to be able to detect the biomarkers of these conditions before patients are diagnosed, at a time when preventive efforts are possible," Piven added. "In Parkinson's, for instance, we know that once a person is diagnosed, they've already lost a substantial portion of the dopamine receptors in their brain, making treatment less effective." 
Piven said the idea with autism is similar; once autism is diagnosed at age two to three years, the brain has already begun to change substantially. "We haven't had a way to detect the biomarkers of autism before the condition sets in and symptoms develop," he said. "Now we have very promising leads that suggest this may in fact be possible." The National Institutes of Health (grants HD055741, EB005149, HD003110 and MH093510) funded this study. This research was led by researchers at the Carolina Institute for Developmental Disabilities (CIDD) at the University of North Carolina, which is directed by the study's senior author, Joseph Piven, MD, the Thomas E. Castelloe Distinguished Professor of Psychiatry at the University of North Carolina-Chapel Hill. Other clinical sites included Children's Hospital of Philadelphia, the University of Washington, and Washington University in St. Louis. Other key collaborators are McGill University, the University of Alberta, the College of Charleston, and New York University. Heather Cody Hazlett, et al "Early Brain Development in Infants at High Risk for Autism Spectrum Disorder" Nature, in print Feb. 16, 2017. http://doi. About Children's Hospital of Philadelphia: Children's Hospital of Philadelphia was founded in 1855 as the nation's first pediatric hospital. Through its long-standing commitment to providing exceptional patient care, training new generations of pediatric healthcare professionals, and pioneering major research initiatives, Children's Hospital has fostered many discoveries that have benefited children worldwide. Its pediatric research program is among the largest in the country. In addition, its unique family-centered care and public service programs have brought the 535-bed hospital recognition as a leading advocate for children and adolescents. For more information, visit http://www.
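The prediction step described in this article is, at its core, a supervised-learning problem: anatomical measurements from infancy go in, a later diagnosis comes out, and performance is judged with cross-validation. The sketch below shows that general shape using synthetic numbers and a generic scikit-learn classifier; it is not the study's data, feature set or algorithm, which relied on detailed MRI-derived surface-area and thickness measures and a more sophisticated machine-learning pipeline.

```python
# Schematic only: synthetic "infant brain" features and a generic classifier
# evaluated with cross-validation. Not the study's data or algorithm.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 148                                   # roughly the number of infants scanned
# Hypothetical features: surface area at 6 and 12 months, cortical thickness,
# total brain volume, sex (all synthetic).
X = rng.normal(size=(n, 5))
# Synthetic labels: outcome loosely driven by the 6-to-12-month "growth" signal.
y = ((X[:, 1] - X[:, 0]) + 0.5 * rng.normal(size=n) > 0.3).astype(int)

model = make_pipeline(StandardScaler(), LogisticRegression())
scores = cross_val_score(model, X, y, cv=10)     # 10-fold cross-validation
print(f"mean cross-validated accuracy: {scores.mean():.2f}")
```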


News Article | April 25, 2016
Site: motherboard.vice.com

UPDATE: On the afternoon of April 26, a jury convicted David and Collet Stephan of failing to provide their son with the necessaries of life. The maximum penalty is five years in prison. The story is developing and we'll update it as we know more. When their son Ezekiel was very sick, David and Collet Stephan, who live in Alberta, didn’t take him to see a doctor. Instead, for two weeks they gave him smoothies of hot pepper, horseradish, ginger root and onion, and eventually brought him to a naturopath clinic, where they bought an echinacea tincture. At that point, the 18-month-old was so stiff from his illness that he couldn’t sit in his car seat, and had to be transported lying on a mattress in the back of the car, as the Lethbridge court heard. Ezekiel later stopped breathing and was rushed to hospital. A few days later, he died of meningitis. On Monday, a jury started deliberating whether David and Collet, whose son died in 2012, are guilty of failing to provide the necessaries of life. Lots of people have already condemned Ezekiel's parents for not doing more to help him—not even after a family friend, who’s also a nurse, suggested he might have meningitis. During the trial, his dad testified that he thought Ezekiel had the flu. It isn’t the first time that a parent has chosen a dangerous alternative cure for their kid, and it raises a troubling question: why are naturopaths even treating kids? It can be hard to find a family doctor in many parts of Canada, and that’s surely part of it. But doing a juice cleanse when you’re an adult, and can make up your own mind, is one thing. (That fancy green drink isn’t really shedding toxins from your body anyway, by the way.) Parents’ belief in pseudoscience is putting kids at risk. In another case, in Calgary, a 7-year-old died after his mom used “holistic medicine” to treat him. (She’s awaiting trial later this year.) Parents continue to shun vaccines, or to buy naturopathic remedies for their kids, like “nosodes.” Or they take celebrities’ advice over their own doctor’s. People, for one, was recently forced to pull down a recipe for Kristin Cavallari’s “natural” infant formula, which could have been seriously dangerous for babies. Tim Caulfield, Canada Research Chair in Health Law and Policy at the University of Alberta, has been following the case. “There are a lot of things going on in society that are making way for pseudoscience,” Caulfield told Motherboard. “There’s some good evidence to suggest people are fed up with the conventional system.” In Canada, it can be difficult (if not impossible) to get a family doctor: in BC, for example, the government recently had to abandon an election promise to match everyone in the province with a family doctor, because there simply aren’t enough. Canada’s doctor shortage is a longstanding problem, and there are all sorts of reasons behind it: an aging population that needs more medical care, younger doctors’ demand for shorter working hours, and funding troubles, to name a few. Even if a patient does have a doctor, they’ll too often rush through an appointment, whereas a naturopath might sit and listen for half an hour or longer, sending the patient off with a range of (sometimes expensive) treatments and cures. Celebrities are playing into it, as Caulfield’s recent book, Is Gwyneth Paltrow Wrong About Everything?, goes to show. “They spread the word about these therapies and put them in the public mind,” he said. As the title of his book implies, Paltrow is a prime offender. 
Last year, health experts found themselves urging women not to get their vaginas “steamed” after she recommended it. But we can point a finger at government, too. In 2012, naturopaths in Alberta became a self-regulated profession (they’re self-regulated in other parts of Canada, too), which gives them a veneer of professionalism. A lot of health experts pushed back against it. It was a compromise, given that more people were seeking them out, Caulfield said. “If these practitioners are going to be more popular, they have to make sure there’s a minimum standard,” he said. “You can sympathize with that, but very quickly it becomes legitimization.” Naturopaths in Canada and beyond are treating patients who suffer from a range of conditions, including cancer. And they’re treating kids. The jury will decide how much Ezekiel’s parents are to blame for what happened, but even the Crown has recognized that the problem wasn’t that they didn’t love their son. It’s that they seem to have trusted smoothies and tinctures to treat him more than they trusted modern medicine.


News Article | March 2, 2017
Site: motherboard.vice.com

Everyone's been losing their shit about a recent study from the University of Alberta, which reported that up to 75 litres of urine were present in a standard public swimming pool in a Canadian city—about the equivalent volume of "20 large milk jugs," as The National Post memorably reported. That's a lot of pee. But how bad is it, really? The first thing graduate student Lindsay Blackstock, lead author of the paper, told me when I called her was: "It's really not bad at all." In a public swimming pool, harmful bacteria are zapped by a boatload of chlorine. It's even possible to filter and drink human pee (astronauts do this on the International Space Station). The American Chemical Society has previously endorsed peeing in the ocean. Plus, Blackstock said, most of the gross stuff "that could be harmful in water is broken down in a wastewater treatment plant," and that happens well before the water gets to our sinks or pools. Aside from a resounding "ew" from the media, the only thing scientists are worried about is that some nitrogenous compounds present in pee—such as urea or ammonia—can interact with a swimming pool's chlorine to create disinfection byproducts (DBPs), as reported in a 2014 paper. These DBPs, like trihalomethanes, have been shown to potentially cause lung irritation, eye irritation and occupational asthma. Potentially. Occupational. These are keywords—you're not going to drop dead in a puff of pee-coloured steam the next time you enter a hot tub. So with the reader's best interest in mind, I asked Blackstock: If the real problem here is DBP, should we just stop putting chlorine in our swimming pools? "Goodness, no," she said. "There are waterborne pathogens that can be introduced into the swimming pool, but the chlorine or disinfectants used are an extremely effective way of eliminating these dangerous pathogens." Basically, pee is the least of your problems. Blackstock and her team measured the urine in pools by sampling 29 hot tubs, as well as indoor and outdoor pools, across two Canadian cities that they wouldn't name. They tested for acesulfame potassium, or Ace-K, a sweetener that's commonly consumed but not metabolized by the body—the Ace-K you drink is the same Ace-K you pee. With a good average of how much Ace-K is present in recreational bodies of water, Blackstock and her team then got that "75 litres of urine" number from the larger of the two pools they tracked for three weeks. Ace-K levels rose over the three-week period, leading to the conclusion that more pee was present. Cue the freakout. "Keep in mind how diluted that [pee] would be in 870,000 litres" of total pool water, she said. The pool water in question was 0.0086 per cent urine. The thought of swimming in pee is undeniably gross (even though some people have been known to wash their faces with it). But the activity of swimming is much better for you than any damage pee in the pool water could do, Blackstock added. Plus, there are some very easy fixes. "We want people to keep swimming but quit peeing," she said. "Make sure to rinse off in the showers provided to rinse off any personal care products that could also interact with chlorine … and leave the swimming pool to go to the restroom when nature calls."
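The arithmetic behind the headline number is a simple tracer-dilution estimate: measure the concentration of a compound that people excrete but that pools don't otherwise contain (Ace-K), multiply by the pool volume to get the total tracer mass, and divide by a typical Ace-K concentration in urine. A minimal sketch in Python, using illustrative placeholder concentrations rather than the study's actual measurements:

# Tracer-dilution estimate of urine volume in a pool (illustrative sketch).
# The two concentrations below are placeholder assumptions, not values from the study.
POOL_VOLUME_L = 870_000               # reported volume of the large pool, in litres
ACE_K_IN_POOL_NG_PER_L = 2_000        # hypothetical Ace-K concentration measured in pool water
ACE_K_IN_URINE_NG_PER_L = 23_000_000  # hypothetical average Ace-K concentration in urine

def estimate_urine_litres(pool_volume_l, ace_k_pool_ng_l, ace_k_urine_ng_l):
    """Total Ace-K mass in the pool divided by the Ace-K concentration in urine."""
    total_ace_k_ng = ace_k_pool_ng_l * pool_volume_l
    return total_ace_k_ng / ace_k_urine_ng_l

urine_l = estimate_urine_litres(POOL_VOLUME_L, ACE_K_IN_POOL_NG_PER_L, ACE_K_IN_URINE_NG_PER_L)
print(f"Estimated urine: {urine_l:.0f} L ({100 * urine_l / POOL_VOLUME_L:.4f}% of the pool)")

With these placeholder numbers the estimate comes out to roughly 75 L, or about 0.009 per cent of the pool, the same order as the figures quoted above; the study's own inputs would of course differ.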


News Article | February 1, 2016
Site: www.nanotech-now.com

Abstract: All-dielectric metamaterials. Saman Jahani (1) and Zubin Jacob (1,2,*). (1) Department of Electrical and Computer Engineering, University of Alberta; (2) Birck Nanotechnology Center, School of Electrical and Computer Engineering, Purdue University. The ideal material for nanophotonic applications will have a large refractive index at optical frequencies, respond to both the electric and magnetic fields of light, support large optical chirality and anisotropy, confine and guide light at the nanoscale, and be able to modify the phase and amplitude of incoming radiation in a fraction of a wavelength. Artificial electromagnetic media, or metamaterials, based on metallic or polar dielectric nanostructures can provide many of these properties by coupling light to free electrons (plasmons) or phonons (phonon polaritons), respectively, but at the inevitable cost of significant energy dissipation and reduced device efficiency. Recently, however, there has been a shift in the approach to nanophotonics. Low-loss electromagnetic responses covering all four quadrants of possible permittivities and permeabilities have been achieved using completely transparent and high-refractive-index dielectric building blocks. Moreover, an emerging class of all-dielectric metamaterials consisting of anisotropic crystals has been shown to support large refractive index contrast between orthogonal polarizations of light. These advances have revived the exciting prospect of integrating exotic electromagnetic effects in practical photonic devices, to achieve, for example, ultrathin and efficient optical elements, and realize the long-standing goal of subdiffraction confinement and guiding of light without metals. In this Review, we present a broad outline of the whole range of electromagnetic effects observed using all-dielectric metamaterials: high-refractive-index nanoresonators, metasurfaces, zero-index metamaterials and anisotropic metamaterials. Finally, we discuss current challenges and future goals for the field at the intersection with quantum, thermal and silicon photonics, as well as biomimetic metasurfaces.
New transparent metamaterials under development could make possible computer chips and interconnecting circuits that use light instead of electrons to process and transmit data, representing a potential leap in performance. Although optical fibers are now used to transmit large amounts of data over great distances, the technology cannot easily be miniaturized because the wavelength of light is too large to fit within the minuscule dimensions of microcircuits. "The role of optical fibers is to guide light from point A to point B, in fact, across continents," said Zubin Jacob, an assistant professor of electrical and computer engineering at Purdue University. "The biggest advantage of doing this compared to copper cables is that it has a very high bandwidth, so large amounts of data can pass through these optical cables as opposed to copper wires. However, on our computers and consumer electronics we still use copper wires between different parts of the chip. The reason is that you can't confine light to the same size as a nanoscale copper wire." Transparent metamaterials, nanostructured artificial media with transparent building blocks, allow unprecedented control of light and may represent a solution.
Researchers are making progress in developing metamaterials that shrink the wavelength of light, pointing toward a strategy to use light instead of electrons to process and transmit data in computer chips. "If you have very high bandwidth communication on the chip as well as interconnecting circuits between chips, you can go to faster clock speeds, so faster data processing," Jacob said. Such an advance could make it possible to shrink the bulkiness of a high-performance computer cluster to the size of a standard desktop machine. Unlike some of the metamaterials under development, which rely on the use of noble metals such as gold and silver, the new metamaterials are made entirely of dielectric materials, or insulators and non-metals. This approach could allow researchers to overcome a major limitation encountered thus far in the development of technologies based on metamaterials: using metals results in the loss of too much light to be practical for many applications. A review article about all-dielectric metamaterials appeared online this month in the journal Nature Nanotechnology, highlighting the rapid development in this new field of research. The article was authored by doctoral student Saman Jahani and Jacob. "A key factor is that we don't use metals at all in this metamaterial, because if you use metals a lot of the light goes into heat and is lost," Jacob said. "We want to bring everything to the silicon platform because this is the best material to integrate electronic and photonic devices on the same chip." A critical detail is the material's "anisotropic velocity" – meaning light is transmitted much faster in one direction through the material than in another. Conventional materials transmit light at almost the same speed no matter which direction it is traveling through the material. "The tricky part of this work is that we require the material to be highly anisotropic," he said. "So in one direction light travels almost as fast as it would in a vacuum, and in the other direction it travels as it would in silicon, which is around four times slower." The innovation could make it possible to modify a phenomenon called "total internal reflection," the principle currently used to guide light in fiber optics. The researchers are working to engineer total internal reflection in optical fibers surrounded by the new silicon-based metamaterial. "Our contribution has been basically the fact that we have been able to adapt this total internal reflection phenomenon down to the nanoscale, which was conventionally thought impossible," Jacob said. Because the material is transparent it is suitable for transmitting light, which is a critical issue for practical device applications. The approach could reduce heating in circuits, meaning less power would be required to operate devices. Such an innovation could in the long run bring miniaturized data processing units. "Another fascinating application for these transparent metamaterials is in enhancing light-matter coupling for single quantum light emitters," Jacob said. "The size of light waves inside a fiber are too large to effectively interact with tiny atoms and molecules. The transparent metamaterial cladding can compress the light waves to sub-wavelength values thus allowing light to effectively interact with quantum objects. This can pave the way for light sources at the single photon level." 
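To put rough numbers on the anisotropy described above: the phase velocity of light in a dielectric is v = c/n, and conventional total internal reflection at a core-cladding interface requires the core index to exceed the cladding index, with critical angle theta_c = arcsin(n_clad/n_core). A minimal Python sketch with assumed, illustrative refractive indices (none of the specific values come from the review):

import math

C = 299_792_458.0  # speed of light in vacuum, m/s

# Hypothetical indices for a strongly anisotropic cladding material:
# near vacuum-like along one axis, silicon-like along the orthogonal axis.
n_fast_axis = 1.05
n_slow_axis = 3.5

v_fast = C / n_fast_axis
v_slow = C / n_slow_axis
print(f"Fast-axis phase velocity: {v_fast:.3e} m/s")
print(f"Slow-axis phase velocity: {v_slow:.3e} m/s (about {v_fast / v_slow:.1f}x slower)")

# Conventional total internal reflection: light inside the denser core that hits
# the interface beyond the critical angle is reflected back rather than escaping.
def critical_angle_deg(n_core, n_clad):
    if n_clad >= n_core:
        raise ValueError("no total internal reflection: cladding index >= core index")
    return math.degrees(math.asin(n_clad / n_core))

print(f"Critical angle for a silica core (n=1.45) in air (n=1.0): "
      f"{critical_angle_deg(1.45, 1.0):.1f} degrees")

The review's argument, as described in the article, is that a transparent anisotropic cladding can preserve this kind of reflection condition at nanoscale dimensions without resorting to lossy metals; the numbers above are only meant to illustrate the magnitude of the index contrast involved.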
The research is being performed jointly at Purdue's Birck Nanotechnology Center in the university's Discovery Park and at the University of Alberta. The researchers have obtained a U.S. patent on the design. The research was funded by the Natural Sciences and Engineering Research Council of Canada and the Helmholtz-Alberta Initiative.


News Article | December 15, 2016
Site: www.24-7pressrelease.com

OTTAWA, ON, December 15, 2016-- Donna Karlin, The Shadow Coach, Certified Executive Coach, and Founder/President of The No Ceiling, Just Sky Institute, A Better Perspective, and The School of Shadow Coaching, has been recognized as a Distinguished Professional in her field through Women of Distinction Magazine. Donna Karlin will soon be featured in an upcoming edition of the Women of Distinction Magazine in 2016. Coaching and developing sustainable leadership as a Certified Executive Coach and Principal of The No Ceiling, Just Sky Institute, Donna Karlin changed career paths after being inspired by her son's surgeon. Her son was born partially paralyzed and required plastic reconstructive surgery; his doctor later asked her to work with all of her future patients, and she agreed. Working closely with patients and families only after returning to school and consulting with professionals across many fields of medicine, she eventually designed her own methodology, Shadow Coaching, which she now teaches around the world. "Working with parents and their children who needed everything from facial reanimation and re-enervation to working with terminally ill cancer kids, I had to wade through the 'masks' that these parents and kids wore to hide their fears during treatment and surgical interventions," Karlin explained. Transitioning from coaching long-term care and terminally ill patients to a realm where she could support healthcare awareness and change at the decision-making level, Karlin's model demanded that she expand into the corporate, government, political, and military arena. "Soon after I founded the Shadow Coaching methodology, there was a push from seasoned practitioners to learn what I had created, so I founded The School of Shadow Coaching in 2004," Karlin said. "Today I teach advanced practitioners and listen to each one of them as they learn and percolate on how to best use the model I created. I continue to evolve the methodology and how I do my work as a result. If we don't listen to those we serve, then it's about us as practitioners and not about why we do what we do." Shadow Coaching is an essential element and key tool in evolving leadership and supporting fundamental change within organizations. Specifically, it is a combination of organizational psychology, organizational systems, and human systems; situational, observational coaching between coach and client with mutually agreed upon objectives. This can take place in a group format where the coachee is observed within group dynamics to further strengthen organizational and team cohesiveness. This type of coaching is ideally suited for organizational leaders who must make decisions and act adaptively in intense work environments. Karlin's umbrella organization, The No Ceiling, Just Sky Institute, which houses A Better Perspective, The School of Shadow Coaching, and her work with the TED Fellows community, brings a 360-degree perspective to human evolvement, a dynamic, strengths-focused, full-circle approach to leadership and organizational development. An award-winning author of the book 'Leaders: Their Stories, Their Words - Conversations with Human-Based Leaders', Karlin also authored a second book, 'The Power of Coaching', and has contributed to the International Journal of Coaching in Organizations.
She is active as a member of the Advisory Council for the International Academy of Behavioral Medicine, Counseling, and Psychotherapy, is a member of the International Coach Federation and the International Critical Incident Stress Foundation, and is a Founding Fellow of the Institute of Coaching at McLean Hospital, Harvard Medical School. In her downtime, she does pro bono work for the TED Fellows, the Unreasonable Institute, StartingBloc, and several other non-profits. Karlin holds a certification in Organizational Psychology with a focus on Executive Coaching from the Professional School of Psychology, is a Certified Diplomate in Professional Coaching through the International Academy of Behavioral Medicine, Counseling, and Psychotherapy, and is a Certified Executive Coach through the Center for Executive Coaching. Karlin completed postgraduate studies in organizational behavior at the University of Alberta. For more information, visit www.noceilingjustskyinstitute.com and www.abetterperspective.com. About Women of Distinction Magazine: Women of Distinction Magazine strives to continually bring the very best out in each article published and highlight Women of Distinction. Women of Distinction Magazine's mission is to have a platform where women can grow, inspire, empower, educate and encourage professionals from any industry by sharing stories of courage and success. Contact: Women of Distinction Magazine, Melville, NY, 631-465-9024


Goulopoulou S.,University of North Texas Health Science Center | Davidge S.T.,University of Alberta | Davidge S.T.,Women and Childrens Health Research Institute
Trends in Molecular Medicine | Year: 2015

Highlights: Endothelium-derived factor production and signaling are altered in preeclampsia. Placenta-derived factors induce systemic maternal vascular dysfunction. Preeclampsia is a heterogeneous syndrome, and multiple pathways have been proposed for both the causal and the perpetuating factors leading to maternal vascular dysfunction. Postulated mechanisms include an imbalance in the bioavailability and activity of endothelium-derived contracting and relaxing factors, as well as oxidative stress. Studies have shown that placenta-derived factors [antiangiogenic factors, microparticles (MPs), cell-free nucleic acids] are released into the maternal circulation and act on the vascular wall to modify the secretory capacity of endothelial cells and alter the responsiveness of vascular smooth muscle cells to constricting and relaxing stimuli. These molecules signal their deleterious effects on the maternal vascular wall via pathways that provide the molecular basis for novel and effective therapeutic interventions. © 2014 Elsevier Ltd.


Chue P.,University of Alberta | Lalonde J.K.,Roche Holding AG
Neuropsychiatric Disease and Treatment | Year: 2014

The negative symptoms of schizophrenia represent an impairment of normal emotional responses, thought processes and behaviors, and include blunting or flattening of affect, alogia/aprosody, avolition/apathy, anhedonia, and asociality. Negative symptoms contribute to a reduced quality of life, increased functional disability, increased burden of illness, and poorer long-term outcomes, to a greater degree than positive symptoms. Primary negative symptoms are prominent and persistent in up to 26% of patients with schizophrenia, and they are estimated to occur in up to 58% of outpatients at any given time. Negative symptoms respond less well to medications than positive symptoms, and to date treatment options for negative symptoms have been limited, with no accepted standard treatment. Modest benefits have been reported with a variety of different agents, including second-generation antipsychotics and add-on therapy with antidepressants and other pharmacological classes. Recent clinical research focusing on negative symptoms targets novel biological systems, such as glutamatergic neurotransmission. Different approaches include: enhancing N-methyl-D-aspartate receptor function with agents that bind directly to the glycine ligand site or with glycine reuptake inhibitors; influencing the metabotropic glutamate receptor (mGluR2/3) with positive allosteric modulators; and stimulating nicotinic acetylcholine receptors. In conclusion, the lack of clearly efficacious pharmacological treatments for the management of negative symptoms represents a significant unmet need, especially considering the impact of these symptoms on patient outcomes. Hence, further research to identify and characterize novel pharmacological treatments for negative symptoms is greatly needed. © 2014 Chue and Lalonde.


Lalonde S.V.,French National Center for Scientific Research | Konhauser K.O.,University of Alberta
Proceedings of the National Academy of Sciences of the United States of America | Year: 2015

The Great Oxidation Event (GOE) is currently viewed as a protracted process during which atmospheric oxygen increased above ∼10^-5 times the present atmospheric level (PAL). This threshold represents an estimated upper limit for sulfur isotope mass-independent fractionation (S-MIF), an Archean signature of atmospheric anoxia that begins to disappear from the rock record at 2.45 Ga. However, an increasing number of papers have suggested that the timing for oxidative continental weathering, and by conventional thinking the onset of atmospheric oxygenation, was hundreds of millions of years earlier than previously thought despite the presence of S-MIF. We suggest that this apparent discrepancy can be resolved by the earliest oxidative-weathering reactions occurring in benthic and soil environments at profound redox disequilibrium with the atmosphere, such as biological soil crusts and freshwater microbial mats covering riverbed, lacustrine, and estuarine sediments. We calculate that oxygenic photosynthesis in these millimeter-thick ecosystems provides sufficient oxidizing equivalents to mobilize sulfate and redox-sensitive trace metals from land to the oceans while the atmosphere itself remained anoxic with its attendant S-MIF signature. As continental freeboard increased significantly between 3.0 and 2.5 Ga, the chemical and isotopic signatures of benthic oxidative weathering would have become more globally significant from a mass-balance perspective. These observations help reconcile evidence for pre-GOE oxidative weathering with the history of atmospheric chemistry, and support the plausible antiquity of a terrestrial biosphere populated by cyanobacteria well before the GOE.
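For a sense of absolute scale, the S-MIF threshold quoted above can be converted to a partial pressure: oxygen makes up about 21 per cent of the modern 1 atm atmosphere, so 10^-5 PAL corresponds to roughly 2 x 10^-6 atm of O2. A minimal sketch of that conversion (the present-day figures are standard reference values, not numbers from the paper):

# Convert the ~1e-5 PAL threshold for S-MIF preservation into a partial pressure.
PRESENT_O2_FRACTION = 0.21   # O2 volume fraction of the modern atmosphere
TOTAL_PRESSURE_ATM = 1.0     # modern surface pressure, atm
SMIF_THRESHOLD_PAL = 1e-5    # estimated upper limit for preserving S-MIF, in PAL

present_pO2_atm = PRESENT_O2_FRACTION * TOTAL_PRESSURE_ATM
threshold_pO2_atm = SMIF_THRESHOLD_PAL * present_pO2_atm
print(f"Present-day pO2: {present_pO2_atm:.2f} atm")
print(f"S-MIF threshold: about {threshold_pO2_atm:.1e} atm O2")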
