Edmonton, Canada

The University of Alberta is a public research university located in Edmonton, Alberta, Canada. It was founded in 1908 by Alexander Cameron Rutherford, the first premier of Alberta, and Henry Marshall Tory, its first president. Its enabling legislation is the Post-secondary Learning Act.

The university comprises four campuses in Edmonton, the Augustana Campus in Camrose, and a staff centre in downtown Calgary. The original north campus consists of 150 buildings covering 50 city blocks on the south rim of the North Saskatchewan River valley, directly across from downtown Edmonton. More than 39,000 students from across Canada and 150 other countries participate in nearly 400 programs in 18 faculties.

The University of Alberta is a major economic driver in Alberta. The university's impact on the Alberta economy is an estimated $12.3 billion annually, or five per cent of the province's gross domestic product. With more than 15,000 employees, the university is Alberta's fourth-largest employer.

The university has been recognized by the Academic Ranking of World Universities, the QS World University Rankings and the Times Higher Education World University Rankings as one of the top five universities in Canada and one of the top 100 universities worldwide. According to the 2014 QS World University Rankings, the top faculty area at the University of Alberta is Arts and Humanities, and the top-ranked subject is English Language and Literature.

The University of Alberta has graduated more than 260,000 alumni, including Governor General Roland Michener, Prime Minister Joe Clark, Chief Justice of Canada Beverley McLachlin, Alberta premiers Peter Lougheed, Dave Hancock and Jim Prentice, Edmonton Mayor Don Iveson and Nobel laureate Richard E. Taylor.

The university is a member of the Alberta Rural Development Network, the Association for the Advancement of Sustainability in Higher Education and the Sustainability Tracking, Assessment & Rating System. (Wikipedia)



Patent
University of Alberta | Date: 2016-09-06

A method of converting lipids to useful olefins includes reacting a mixture of lipids and a reactant olefin with microwave irradiation in the presence of ruthenium metathesis catalysts. The lipids may be unsaturated triacylglycerols or alkyl esters of fatty acids. The lipids may be sourced from renewable sources such as vegetable oil, waste cooking oil, or waste animal products.


Patent
University of Alberta | Date: 2015-04-17

The invention includes methods, pharmaceutical compositions, and uses thereof for treating patients with Papillary Thyroid Carcinoma (PTC) using a Platelet Derived Growth Factor Receptor Alpha (PDGFRA) inhibitor. The PDGFRA inhibitor is preferably an antibody specific to PDGFRA that increases the sensitivity of PTC cells to radioiodine treatment. Moreover, the antibody can be used in combination with other PDGFRA inhibitors such as tyrosine kinase inhibitors and RNA interference molecules.


Patent
Massachusetts Institute of Technology and University of Alberta | Date: 2016-11-23

The invention, in some aspects, relates to compositions and methods for altering cell activity and function, and to the introduction and use of light-activated ion channels.


Patent
University of Alberta | Date: 2016-09-21

A combined hydrothermal and activation process that uses hemp bast fiber as the precursor to produce graphene-like carbon nanosheets; a carbon nanosheet comprising carbonized crystalline cellulose; a carbon nanosheet formed by carbonizing crystalline cellulose; a capacitive structure comprising interconnected carbon nanosheets of carbonized crystalline cellulose; and a method of forming a nanosheet by carbonizing crystalline cellulose. The interconnected two-dimensional carbon nanosheets also contain very high levels of mesoporosity.


Patent
University of Alberta | Date: 2014-11-07

A thermal emitter is provided, including a periodic structure operating as a metamaterial on an optically thick substrate, the periodic structure thermally emitting at high temperatures within a specified narrow wavelength band at a predetermined resonance, the metamaterial including a composite medium of natural materials. The emitter may be part of a thermophotovoltaic device. The thermal emitter may include a plurality of layered films, wherein the distance between each adjacent film is substantially less than the emission wavelength.


Patent
University of Alberta | Date: 2016-09-12

The invention provides a binding-induced DNA nanomachine that can be activated by proteins and nucleic acids. This new type of nanomachine harnesses specific target binding to trigger assembly of separate DNA components that are otherwise unable to spontaneously assemble. Three-dimensional DNA tracks of high density are constructed on gold nanoparticles (AuNPs) functionalized with hundreds of single-stranded oligonucleotides and tens of copies of an affinity ligand. A DNA swing arm, free in solution, can be linked to a second affinity ligand. Binding of a target molecule to the two ligands brings the swing arm to the AuNP and initiates autonomous, stepwise movement of the swing arm around the AuNP surface. The movement of the swing arm generates hundreds of oligonucleotides in response to a single binding event. The new nanomachines have several unique and advantageous features over DNA nanomachines that rely on DNA self-assembly.


An underwater camera system includes a projector operable to project a pattern of electromagnetic radiation toward a target object. The electromagnetic radiation includes at least three different wavelengths. A sensor directed toward the target object receives electromagnetic radiation reflected from the target object, and the system stores the corresponding image data. One or more processors process the image data to compute a refractive normal according to a wavelength dispersion represented by differences in the image data, and to compute an interface distance corresponding to the distance from a center point of the sensor to the first refractive interface nearest the sensor according to the refractive normal. The processors generate a 3D representation of the target object by back-projecting each pixel of the image data at the first, second, and third wavelengths in order to determine an object point location according to the refractive normal and interface distance.
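The core geometric operation behind such refraction-aware back-projection is bending each pixel ray at the refractive interface with Snell's law; because the refractive index depends on wavelength, the three projected wavelengths diverge in a measurable way. Below is a minimal, hypothetical Python sketch of that single operation; the geometry, constants, and function names are illustrative assumptions, not the patented calibration procedure.

```python
# Sketch: refracting a camera pixel ray at a flat interface via Snell's law
# (vector form). The index n2 is wavelength dependent, which is what a
# dispersion-based system exploits; all values here are illustrative.
import numpy as np

def refract(ray_dir, normal, n1=1.0, n2=1.33):
    """Return the refracted unit direction for a ray crossing an interface."""
    d = ray_dir / np.linalg.norm(ray_dir)
    n = normal / np.linalg.norm(normal)   # normal pointing toward the camera
    cos_i = -d @ n
    sin2_t = (n1 / n2) ** 2 * (1.0 - cos_i ** 2)
    if sin2_t > 1.0:
        raise ValueError("total internal reflection")
    return (n1 / n2) * d + ((n1 / n2) * cos_i - np.sqrt(1.0 - sin2_t)) * n

# A pixel ray leaving the camera and crossing a horizontal air/water interface:
ray = np.array([0.2, 0.0, 1.0])
normal = np.array([0.0, 0.0, -1.0])
print(refract(ray, normal))   # the ray bends toward the normal in water
```

Repeating this per pixel for each of the three wavelengths (with slightly different n2) yields back-projected rays whose spread encodes the refractive normal and interface distance.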


Healthy brain development involves changes in brain structure and function that are believed to support cognitive maturation. However, understanding how structural changes such as gray matter thinning relate to functional changes is challenging. To gain insight into structure-function relationships in development, the present study took a data-driven approach to define age-related patterns of variation in gray matter volume (GMV), cerebral blood flow (CBF) and blood-oxygen level dependent (BOLD) signal variation (fractional amplitude of low-frequency fluctuations; fALFF) in 59 healthy children aged 7-18 years, and examined relationships between modalities. Principal components analysis (PCA) was applied to each modality in parallel, and participant scores for the top components were assessed for age associations. We found that decompositions of CBF, GMV and fALFF all included components for which scores were significantly associated with age. The dominant patterns in GMV and CBF showed significant (GMV) or trend-level (CBF) associations with age and a strong spatial overlap, driven by increased signal intensity in default mode network (DMN) regions. GMV, CBF and fALFF additionally showed components accounting for 3-5% of variability with significant age associations. However, these patterns were relatively spatially independent, with small-to-moderate overlap between modalities. Independence of age effects was further demonstrated by correlating individual subject maps between modalities: CBF was significantly less correlated with GMV and fALFF in older children relative to younger. These spatially independent effects of age suggest that the parallel decline observed in global GMV and CBF may not reflect spatially synchronized processes. © 2017 Wiley Periodicals, Inc.
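The parallel-decomposition strategy described above (one PCA per modality, then age associations on subject scores) is straightforward to express in code. The following Python sketch uses simulated arrays; the shapes, names, and threshold are assumptions for illustration, not the study's actual pipeline.

```python
# Sketch: PCA per imaging modality, then correlate subject component scores
# with age (simulated data; dimensions and names are illustrative).
import numpy as np
from sklearn.decomposition import PCA
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_subjects, n_voxels = 59, 5000            # 59 children, as in the abstract
age = rng.uniform(7, 18, n_subjects)       # ages 7-18 years

modalities = {                             # stand-ins for per-subject maps
    name: rng.normal(size=(n_subjects, n_voxels))
    for name in ("GMV", "CBF", "fALFF")
}

for name, data in modalities.items():
    scores = PCA(n_components=5).fit_transform(data)   # subject scores
    for k in range(scores.shape[1]):
        r, p = pearsonr(scores[:, k], age)
        if p < 0.05:
            print(f"{name} component {k}: r = {r:.2f}, p = {p:.3f}")
```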


Rahimi B.,University of Alberta
Clinical Nuclear Medicine | Year: 2017

ABSTRACT: Systemic radioisotope therapy with 131I-metaiodobenzylguanidine (131I-MIBG) is an effective form of targeted therapy for neuroendocrine tumors. One of the absolute contraindications to administering 131I-MIBG therapy listed in the 2008 European Association of Nuclear Medicine guidelines is renal insufficiency requiring dialysis, although this contraindication is not evidence based. We describe a 68-year-old woman with a metastatic small bowel neuroendocrine tumor who developed renal insufficiency requiring hemodialysis. Imaging and dosimetry with 131I-MIBG were performed and showed that the radiation doses to the whole body and lungs were within safe limits. She was treated with 1820 MBq of 131I-MIBG with no short-term adverse reactions. Copyright © 2017 Wolters Kluwer Health, Inc. All rights reserved.


Spence N.D.,University of Alberta
Journal of Racial and Ethnic Health Disparities | Year: 2016

Objectives: Debates surrounding the importance of social context versus individual-level processes have a long history in public health. Aboriginal peoples in Canada are very diverse, and the reserve communities in which they reside are complex mixes of various cultural and socioeconomic circumstances. The social forces of these communities are believed to affect health, in addition to individual-level determinants, but no large-scale work has ever probed their relative effects. One aspect of social context, relative deprivation, as indicated by income inequality, has greatly influenced the social determinants of health landscape. An investigation of relative deprivation in Canada's Aboriginal population has never been conducted. This paper proposes a new model of Aboriginal health, using a multidisciplinary theoretical approach that is multilevel. Methods: This study explored the self-rated health of respondents using two levels of determinants, contextual and individual. Data were from the 2001 Aboriginal Peoples Survey. There were 18,890 Registered First Nations (a subgroup of Aboriginal peoples) on reserve nested within 134 communities. The model was assessed using a hierarchical generalized linear model. Results: There was no significant variation at the contextual level. Subsequently, a sequential logistic regression analysis was run. With the sole exception of culture, demographics, lifestyle factors, formal health services, and social support were significant in explaining self-rated health. Conclusions: The non-significant effect of social context, and by extension relative deprivation, as indicated by income inequality, is noteworthy, and the primary role of individual-level processes, including material conditions, social support, and lifestyle behaviors, on health outcomes is illustrated. It is proposed that social structure is best conceptualized as a dynamic determinant of health inequality, and more multilevel theoretical models of Aboriginal health should be developed and tested. © W. Montague Cobb-NMA Health Institute 2015.
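A hierarchical generalized linear model of the kind used here, a random-intercept logistic regression with individuals nested in communities, can be sketched as follows. The data frame, column names, and simulated values are hypothetical; this illustrates the model class, not the paper's analysis.

```python
# Sketch: two-level (random-intercept) logistic model of self-rated health,
# individuals nested in communities. All data and names are simulated.
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "good_health": rng.integers(0, 2, n),      # 1 = good/excellent self-rating
    "social_support": rng.normal(size=n),
    "smoker": rng.integers(0, 2, n),
    "community": rng.integers(0, 134, n),      # level-2 grouping variable
})

# Variance component: one random intercept per community.
model = BinomialBayesMixedGLM.from_formula(
    "good_health ~ social_support + smoker",
    {"community": "0 + C(community)"},
    df,
)
result = model.fit_vb()    # fast variational Bayes fit
print(result.summary())
```

If the estimated community-level variance is near zero, as the paper reports for its contextual level, a single-level logistic regression is a defensible simplification.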


Kouritzin M.A.,University of Alberta
Computational Statistics and Data Analysis | Year: 2017

A class of discrete-time branching particle filters is introduced with individual resampling: if there are N_n particles alive at time n, with N_0 = N and a_n ≤ 1 ≤ b_n, L̂^i_{n+1} is the current unnormalized importance weight for particle i, and A_{n+1} = (1/N) Σ_{i=1}^{N_n} L̂^i_{n+1}, then the weight is preserved when L̂^i_{n+1} ∈ (a_n A_{n+1}, b_n A_{n+1}). Otherwise, the particle branches, producing either ⌊L̂^i_{n+1}/A_{n+1}⌋ or that number plus one offspring, so that the expected number of offspring is L̂^i_{n+1}/A_{n+1}. The algorithms are shown to be stable with respect to the number of particles and perform better than the bootstrap algorithm as well as other popular resampled particle filters on both tracking problems considered here. Moreover, the new branching filters run significantly faster than these other particle filters on tracking and Bayesian model selection problems. © 2017 Elsevier B.V.
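The individual resampling rule above translates directly into code: a particle whose weight is close to the current average keeps its weight, while an outlying particle branches into a random number of average-weight offspring. The sketch below is a hedged reconstruction of that rule from the abstract's notation; the branching details and default thresholds are assumptions.

```python
# Sketch: individual branching resampling for one particle filter step.
# Particles with weights inside (a*A, b*A) are kept as-is; others branch
# into floor(w/A) or floor(w/A)+1 offspring of weight A (residual rounding).
import numpy as np

def branch_particles(particles, weights, a=0.5, b=2.0, rng=None):
    rng = rng or np.random.default_rng()
    A = weights.mean()                       # average unnormalized weight
    new_p, new_w = [], []
    for x, w in zip(particles, weights):
        if a * A < w < b * A:                # weight preserved
            new_p.append(x)
            new_w.append(w)
        else:                                # branch: expected copies = w/A
            ratio = w / A
            n_copies = int(ratio) + (rng.random() < ratio - int(ratio))
            new_p.extend([x] * n_copies)
            new_w.extend([A] * n_copies)     # offspring carry average weight
    return np.array(new_p), np.array(new_w)

rng = np.random.default_rng(0)
xs, ws = rng.normal(size=100), np.abs(rng.normal(size=100))
xs2, ws2 = branch_particles(xs, ws)
print(len(xs2), ws2.sum(), ws.sum())         # total weight preserved on average
```

Because only outlying particles are touched, the particle count stays stable in expectation, which is the stability property the abstract highlights.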


Wiens D.P.,University of Alberta
Statistics and Computing | Year: 2017

We present and discuss the theory of minimax I- and D-robust designs on a finite design space, and detail three methods for their construction that are new in this context: (i) a numerical search for the optimal parameters in a provably minimax-robust parametric class of designs, (ii) a first-order iterative algorithm similar to that of Wynn (Ann Math Stat 41:1655–1664, 1970), and (iii) response-adaptive designs. These designs minimize a loss function, based on the mean squared error of the predicted responses or the parameter estimates, when the regression response is possibly misspecified. The loss function being minimized has first been maximized over a neighbourhood of the approximate and possibly inadequate response being fitted by the experimenter. The methods presented are all vastly more economical, in terms of the computing time required, than previously available algorithms. © 2017 Springer Science+Business Media New York
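For readers unfamiliar with Wynn-type first-order algorithms, the classical (non-robust) D-optimal version on a finite design space is only a few lines of Python: at each iteration, move a small amount of design mass to the candidate point with the largest prediction variance. This sketch shows the classical algorithm only, not the paper's minimax-robust variant.

```python
# Sketch: Wynn's first-order algorithm for a D-optimal design on a finite
# design space (classical version; the robust loss of the paper is omitted).
import numpy as np

def wynn_d_optimal(F, n_iter=2000):
    """F: (m, p) matrix whose rows are candidate regressors f(x_i).
    Returns design weights w approximately maximizing log det sum w_i f f^T."""
    m, p = F.shape
    w = np.full(m, 1.0 / m)                       # start from the uniform design
    for k in range(n_iter):
        M = F.T @ (w[:, None] * F)                # information matrix
        Minv = np.linalg.inv(M)
        d = np.einsum("ij,jk,ik->i", F, Minv, F)  # variance function d(x_i)
        j = int(np.argmax(d))                     # most under-served point
        alpha = 1.0 / (k + 2)                     # classical step size
        w *= 1.0 - alpha
        w[j] += alpha
    return w

# Quadratic regression on 21 equispaced points in [-1, 1]:
x = np.linspace(-1, 1, 21)
F = np.column_stack([np.ones_like(x), x, x ** 2])
print(np.round(wynn_d_optimal(F), 3))             # mass concentrates near -1, 0, 1
```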


Mehri M.,Sharif University of Technology | Asadi H.,University of Alberta | Asadi H.,Amirkabir University of Technology | Kouchakzadeh M.A.,Sharif University of Technology
Computer Methods in Applied Mechanics and Engineering | Year: 2017

As a first endeavor, the aeroelastic responses of functionally graded carbon nanotube reinforced composite (FG-CNTRC) truncated conical curved panels subjected to aerodynamic load and axial compression are investigated. The nonlinear dynamic equations of FG-CNTRC conical curved panels are derived according to Green's strains and the Novozhilov nonlinear shell theory. The aerodynamic load is estimated in accordance with the quasi-steady Krumhaar's modified supersonic piston theory, taking into account the effect of the panel curvature. The matrix transform method, along with the harmonic differential quadrature method (HDQM), is employed to solve the nonlinear equations of motion of the FG-CNTRC truncated conical curved panel. The advantage of the matrix transform method is that only the meridional direction needs to be discretized. Effects of the semi-vertex angle of the cone, subtended angle of the panel, boundary conditions, geometrical parameters, volume fraction and distribution of CNT, and Mach number on the aeroelastic characteristics of the FG-CNTRC conical curved panel are examined via a set of parametric studies, and pertinent conclusions are outlined. The results prove that panels with different FG distributions have different critical dynamic pressures. It is found that the semi-vertex and subtended angles play a pivotal role in changing the critical circumferential mode number of the flutter instability. The research also shows that the high efficiency of the proposed method with few grid points, which requires less CPU time, is attributable to the matrix transform method and the higher-order harmonic approximation function in the HDQM. © 2017 Elsevier B.V.


Scarpella E.,University of Alberta
Current Opinion in Genetics and Development | Year: 2017

The problem of long-distance transport is solved in many multicellular organisms by tissue networks such as the vascular networks of plants. Because tissue networks transport from one tissue area to another, they are polar and continuous; most of them, including plant vascular networks, are also plastic. Surprisingly, the formation of tissue networks is in most cases just as polar, continuous and plastic. Available evidence suggests that the polarity, continuity and plasticity of plant vascular networks and their formation could be accounted for by a patterning process that combines: (i) excess of developmental alternatives competing for a limiting cell-polarizing signal; (ii) positive feedback between cell polarization and continuous, cell-to-cell transport of the cell-polarizing signal; and (iii) gradual restriction of differentiation that increasingly removes the cell-polarizing signal. © 2017 Elsevier Ltd


Kono S.,University of Alberta
Leisure Sciences | Year: 2017

Although the paradigmatic discussion has encouraged leisure scholars to critically examine their inquiry assumptions (Parry, Johnson, & Stewart, 2013), Henderson (2011) and Neville (2013) provided critical comments from pragmatist perspectives on the dominance of the paradigmatic framework in the leisure literature. However, what it means for leisure researchers undertaking empirical research to adopt pragmatism remains unaddressed. The purpose of this article is to offer a starting point for applying pragmatist discussion and pragmatism to empirical leisure research projects. I first describe implications of John Dewey's pragmatism for an empirical inquiry while contrasting them with ontological and epistemological concerns in the paradigmatic schema. Second, I critically reflect upon my previous leisure research project from the Deweyan perspective. I identify several research stages wherein pragmatist leisure scholars should be aware of the implications of their inquiry philosophy, including research question formulation, research design and methodological choice, and research outcome reporting. © 2017 Taylor & Francis Group, LLC


Egerton R.F.,University of Alberta
Ultramicroscopy | Year: 2017

We discuss the delocalization of the inelastic scattering of 60–300 keV electrons in a thin specimen, for energy losses below 50 eV where the delocalization length exceeds atomic dimensions. Analytical expressions are derived for the point spread function (PSF) that describes the radial distribution of this scattering, based on its angular distribution and a dielectric representation of energy loss. We also compute a PSF for energy deposition, which is directly related to the radiolysis damage created by a small-diameter probe. These concepts are used to explain the damage kinetics, measured as a function of probe diameter, in various polymers. We also evaluate a "leapfrog" coarse-scanning procedure as a technique for energy-filtered imaging of a beam-sensitive specimen. © 2017 Elsevier B.V.


Pedrycz W.,University of Alberta
Advances in Intelligent Systems and Computing | Year: 2017

The apparent challenges in data analytics call for new advanced technologies. Granular Computing, along with a diversity of its formal settings, offers a badly needed conceptual and algorithmic environment that becomes instrumental here. In virtue of the key facets of data analytics, there is a genuine quest to foster new development avenues of Granular Computing by bringing in concepts of information granules of higher type and higher order. © Springer International Publishing Switzerland 2017.


Zhang W.,University of Alberta
Nature Methods | Year: 2017

To expand the range of experiments that are accessible with optogenetics, we developed a photocleavable protein (PhoCl) that spontaneously dissociates into two fragments after violet-light-induced cleavage of a specific bond in the protein backbone. We demonstrated that PhoCl can be used to engineer light-activatable Cre recombinase, Gal4 transcription factor, and a viral protease that in turn was used to activate opening of the large-pore ion channel Pannexin-1. © 2017 Nature Publishing Group, a division of Macmillan Publishers Limited. All Rights Reserved.


Luc J.G.Y.,University of Alberta
ASAIO Journal | Year: 2017

Normothermic ex vivo lung perfusion (EVLP) allows for assessment and reconditioning of donor lungs. Though a leukocyte filter (LF) is routinely incorporated into the EVLP circuit, its efficacy remains to be determined. Twelve pig lungs were perfused and ventilated ex vivo in a normothermic state for 12 hours. Lungs (n=3) were allocated to 4 groups according to perfusate composition and the presence or absence of an LF in the circuit (acellular ± LF, cellular ± LF). Acceptable physiologic lung parameters were achieved during EVLP; however, increased amounts of pro-inflammatory cytokines (TNF-α, IL-6) and leukocytes in the perfusate were observed regardless of the presence or absence of an LF. Analysis of cells washed off the LF demonstrates that it trapped leukocytes but became saturated over the 12 hours, rendering it ineffective throughout perfusion. We conclude that there is no objective evidence to support the routine incorporation of an LF during EVLP, as it does not provide further benefit and its removal does not appear to cause harm. The lack of hypothesized benefit of an LF may be due to its saturation with donor leukocytes, leading to similar amounts of circulating leukocytes in the perfusate with and without an LF. Copyright © 2017 by the American Society for Artificial Internal Organs


In this brief report, time-varying (including both gradual and abrupt change) and time-constant diversification models are fitted to a phylogeny of endemic birds of mainland China to test the diversification patterns of endemic birds in the region. The results show, through model comparison, that the phylogeny of endemic birds is best described by a constant-rate diversification model. Limitations of the study are discussed. In particular, the omission of non-endemic taxa and the limited sampling of endemic taxa could influence the conclusions of the study. © 2016 International Society of Zoological Sciences, Institute of Zoology/Chinese Academy of Sciences and John Wiley & Sons Australia, Ltd


Ultrafast control of current on the atomic scale is essential for future innovations in nanoelectronics. Extremely localized transient electric fields on the nanoscale can be achieved by coupling picosecond duration terahertz pulses to metallic nanostructures. Here, we demonstrate terahertz scanning tunnelling microscopy (THz-STM) in ultrahigh vacuum as a new platform for exploring ultrafast non-equilibrium tunnelling dynamics with atomic precision. Extreme terahertz-pulse-driven tunnel currents up to 10^7 times larger than steady-state currents in conventional STM are used to image individual atoms on a silicon surface with 0.3 nm spatial resolution. At terahertz frequencies, the metallic-like Si(111)-(7 × 7) surface is unable to screen the electric field from the bulk, resulting in a terahertz tunnel conductance that is fundamentally different than that of the steady state. Ultrafast terahertz-induced band bending and non-equilibrium charging of surface states opens new conduction pathways to the bulk, enabling extreme transient tunnel currents to flow between the tip and sample. © 2017 Nature Publishing Group


Woolgar E.,University of Alberta
Journal of High Energy Physics | Year: 2017

The new positive energy conjecture was first formulated by Horowitz and Myers in the late 1990s to probe for a possible extended, nonsupersymmetric AdS/CFT correspondence. We consider a version formulated for complete, asymptotically Poincaré-Einstein Riemannian metrics (M, g) with bounded scalar curvature R ≥ −n(n − 1) and with no (inner) boundary except possibly a finite union of compact, totally geodesic hypersurfaces (horizons). This version then asserts that any such (M, g) must have mass not less than a certain bound which is realized as the mass m0 of a metric g0 induced on a time-symmetric slice of a spacetime called an AdS soliton. This conjecture remains unproved, having so far resisted standard techniques. Little is known other than that the conjecture is true for metrics which are sufficiently small perturbations of g0. We pose another test for the conjecture. We assume its validity and attempt to prove as a corollary the corresponding scalar curvature rigidity statement, which is that g0 is the unique asymptotically Poincaré-Einstein metric with mass m0 obeying R ≥ −n(n − 1). Were a second such metric g1 not isometric to g0 to exist, it then may well admit perturbations of lower mass, contradicting the assumed validity of the conjecture. We find enough rigidity to show that the minimum mass metric must be static Einstein, so the problem is reduced to that of static uniqueness. When n = 3 the manifold must be isometric to a time-symmetric slice of an AdS soliton spacetime, or must have a non-compact horizon. En route we study the mass aspect, obtaining and generalizing known results: (i) we relate the mass aspect of static metrics to the holographic energy density, (ii) we obtain the conformal invariance of the mass aspect when the bulk dimension is odd, and (iii) we show the vanishing of the mass aspect for negative Einstein manifolds with Einstein conformal boundary. © 2017, The Author(s).


Lam N.N.,University of Alberta
Nature Reviews Nephrology | Year: 2017

Living kidney donation provides the best therapeutic outcomes for eligible patients with end-stage renal disease. To ensure suitability for living kidney donation, donor candidates undergo a thorough medical, surgical, and psychosocial evaluation by a multidisciplinary transplant assessment team. Numerous guidelines are available to assist clinicians in the process of donor evaluation and selection. These guidelines outline the minimum recommended requirements for donor screening and additional tests that are indicated when abnormalities arise; however, evidence suggests that some of these additional tests might not be required in certain donor candidates. Measured glomerular filtration rate (GFR) using isotopic methods is more accurate than estimated GFR for the assessment of renal function; however, a new clinical tool might enable the identification of donor candidates for whom nuclear medicine renal scans are not needed. Persistent isolated microscopic haematuria caused by urologic or glomerular diseases can preclude donation and such abnormalities can often be identified by kidney biopsy. Cystoscopy might not be useful for young patients, however, owing to the rarity of urological cancers in this population. Currently, no evidence exists to support the notion that donor candidates at low-risk of cardiac events should undergo additional preoperative cardiovascular evaluation before donation. Reducing and/or eliminating the need for additional testing has the potential to enhance efficiency in the donor evaluation process, improve patient satisfaction, and increase access to living donor kidney transplantation. © 2017 Nature Publishing Group, a division of Macmillan Publishers Limited. All Rights Reserved.


Xu C.,University of Alberta
Journal of Mathematical Chemistry | Year: 2017

In this paper, we incorporate stochastic incidence of a chemical reaction into the standard Keizer's open chemical reaction. We prove that a positive stationary distribution (PSD) for the associated chemical master equation exists and is globally asymptotically stable. We present threshold dynamics of the stochastic Keizer's model in terms of the profile of the PSD for both finite and infinite volume sizes V. This establishes a sharp link between the deterministic Keizer's model and the stochastic model. In this way, we resolve Keizer's paradox from a new perspective. This simple model reveals that such stochastic incidence, though negligible as V goes to infinity, may play an indispensable role in the stochastic formulation of irreversible biochemical reactions. © 2017 Springer International Publishing Switzerland
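To make the central object concrete: for a chemical master equation on a truncated state space, the stationary distribution is the normalized null left eigenvector of the generator matrix. The following Python sketch computes the PSD for a generic open birth-death reaction scheme; the rate constants and truncation level are illustrative assumptions, not Keizer's specific model.

```python
# Sketch: stationary distribution of a truncated chemical master equation
# for an open birth-death reaction (rates and truncation are illustrative).
import numpy as np

K = 200                         # truncate molecule counts at n = K
V = 10.0                        # system volume
birth = lambda n: 2.0 * V       # zeroth-order production, propensity k1*V
death = lambda n: 0.1 * n       # first-order degradation, propensity k2*n

# Generator Q of the continuous-time chain on states {0, ..., K}.
Q = np.zeros((K + 1, K + 1))
for n in range(K + 1):
    if n < K:
        Q[n, n + 1] = birth(n)
    if n > 0:
        Q[n, n - 1] = death(n)
    Q[n, n] = -Q[n].sum()       # rows of a generator sum to zero

# The PSD pi solves pi Q = 0, i.e. it is the null left eigenvector of Q.
w, vl = np.linalg.eig(Q.T)
pi = np.real(vl[:, np.argmin(np.abs(w))])
pi = np.abs(pi) / np.abs(pi).sum()
print("PSD mode:", pi.argmax(), " mean count:", (np.arange(K + 1) * pi).sum())
```

Watching how the shape of pi changes as V grows is the kind of profile analysis the abstract refers to when linking the stochastic model back to the deterministic one.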


Jones B.,University of Alberta
Sedimentary Geology | Year: 2017

Many spring deposits throughout the world are characterized by spectacular deposits of calcium carbonate that are formed of various combinations of aragonite and calcite, and in very rare cases vaterite. The factors that control the precipitation of the aragonite and calcite have been the subject of considerable debate that has been based on natural precipitates and information gained from numerous laboratory experiments. Synthesis of this information indicates that there is probably no single universal factor that controls calcite and aragonite precipitation in all springs. Instead, the reason for aragonite as opposed to calcite precipitation should be ascertained by considering the following ordered series of possibilities for each system. First, aragonite, commonly with calcite as a co-precipitate, will form from spring water that has a high CO2 content and rapid CO2 degassing, irrespective of the Mg:Ca ratio and scale of precipitation. Second, aragonite can be precipitated from waters that have low levels of CO2 degassing provided that the Mg:Ca ratio is high enough to inhibit calcite precipitation. Third, the presence of biofilms may lead to the simultaneous precipitation of aragonite and calcite (irrespective of CO2 degassing or Mg:Ca ratio) either within the different microdomains that develop in the biofilm or because of diurnal changes in various geochemical parameters associated with the biofilm. Although the precipitation of calcite and aragonite has commonly been linked directly to water temperature, there is no clear evidence for this proposition. It is possible, however, that temperature may be influencing another parameter that plays a more direct role in the precipitation of these CaCO3 polymorphs. Despite the advances that have been made, the factors that ultimately control calcite and aragonite precipitation remain open to debate; this long-standing problem has still not been fully resolved. © 2017


Pedrycz W.,University of Alberta
Journal of Advanced Computational Intelligence and Intelligent Informatics | Year: 2017

This study is aimed at a brief, carefully focused retrospective view of Computational Intelligence, a paradigm supporting the analysis and synthesis of intelligent systems. We stress the reasons behind the emergence of this discipline and identify its main features. We highlight the synergistic aspects of Computational Intelligence arising from the interaction and collaboration of fuzzy sets, neural networks, and evolutionary optimization. Some promising directions of future fundamental and applied research are also identified.


Alles S.R.A.,University of Alberta | Smith P.A.,University of Alberta
Neuroscientist | Year: 2017

The gabapentinoids (pregabalin and gabapentin) are first line treatments for neuropathic pain. They exert their actions by binding to the α2δ accessory subunits of voltage-gated Ca2+ channels. Because these subunits interact with critical aspects of the neurotransmitter release process, gabapentinoid binding prevents transmission in nociceptive pathways. Gabapentinoids also reduce plasma membrane expression of voltage-gated Ca2+ channels but this may have little direct bearing on their therapeutic actions. In animal models of neuropathic pain, gabapentinoids exert an anti-allodynic action within 30 minutes but most of their in vitro effects are 30-fold slower, taking at least 17 hours to develop. This difference may relate to increased levels of α2δ expression in the injured nervous system. Thus, in situations where α2δ is experimentally upregulated in vitro, gabapentinoids act within minutes to interrupt trafficking of α2δ subunits to the plasma membrane within nerve terminals. When α2δ is not up-regulated, gabapentinoids act slowly to interrupt trafficking of α2δ protein from cell bodies to nerve terminals. This improved understanding of the mechanism of gabapentinoid action is related to their slowly developing actions in neuropathic pain patients, to the concept that different processes underlie the onset and maintenance of neuropathic pain and to the use of gabapentinoids in management of postsurgical pain. © The Author(s) 2016.


Chaput J.-P.,Eastern Research Group | Saunders T.J.,University of Prince Edward Island | Carson V.,University of Alberta
Obesity Reviews | Year: 2017

Research examining the health effects of physical activity, sedentary behaviour and sleep on different health outcomes has largely been conducted independently or in isolation of the other behaviours. However, the fact that time is finite (i.e. 24 h) suggests that the debate on whether or not the influence of a single behaviour is independent of another one is conceptually incorrect. Time spent in one behaviour should naturally depend on the composition of the rest of the day. Recent evidence using more appropriate analytical approaches to deal with this methodological issue shows that the combination of sleep, movement and non-movement behaviours matters and all components of the 24-h movement continuum should be targeted to enhance health and prevent childhood obesity. The objective of this review is to discuss research investigating how combinations of physical activity, sedentary behaviour and sleep are related to childhood obesity. Emerging statistical approaches (e.g. compositional data analysis) that can provide a good understanding of the best ‘cocktail’ of behaviours associated with lower adiposity and improved health are also discussed. Finally, future research directions are provided. Collectively, it becomes clearer that guidelines and public health interventions should target all movement behaviours synergistically to optimize health of children and youth around the world. © 2017 World Obesity Federation
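The compositional approach mentioned above treats the 24-hour day as parts of a whole, so the behaviours must be transformed out of the simplex before ordinary regression applies; a common device is the isometric log-ratio (ilr) transform. The sketch below is a generic illustration with simulated data; the behaviour categories, coefficients, and outcome are assumptions, not results from the cited literature.

```python
# Sketch: isometric log-ratio (ilr) coordinates for a 24-hour movement
# composition (sleep, sedentary, light activity, MVPA), then a linear
# regression on a toy adiposity outcome. All data are simulated.
import numpy as np

rng = np.random.default_rng(0)
n = 300
comp = rng.dirichlet([8.0, 9.0, 5.0, 1.0], size=n)   # proportions of the day

def ilr(x):
    """Pivot-coordinate ilr transform of compositions x (rows sum to 1)."""
    D = x.shape[1]
    coords = []
    for i in range(D - 1):
        g = np.exp(np.log(x[:, i + 1:]).mean(axis=1))  # geometric mean of rest
        coords.append(np.sqrt((D - i - 1) / (D - i)) * np.log(x[:, i] / g))
    return np.column_stack(coords)

Z = ilr(comp)
bmi_z = 0.8 * Z[:, 0] + rng.normal(scale=0.5, size=n)  # toy outcome
X = np.column_stack([np.ones(n), Z])
beta, *_ = np.linalg.lstsq(X, bmi_z, rcond=None)
print("ilr regression coefficients:", np.round(beta, 2))
```

Because ilr coordinates are real-valued and unconstrained, standard models can then ask how reallocating time from one behaviour to another relates to adiposity, which is exactly the 'cocktail of behaviours' question posed above.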


Veeman M.,University of Alberta
Canadian Journal of Agricultural Economics | Year: 2017

Despite overwhelming evidence of benefits overall from lower barriers to trade in goods and services, the number of trade restrictions continues to grow. In some nations, nationalistic politicians who threaten to build walls between nations and reject both new and established trade treaties have gained considerable public support by fanning the resentment and anger of those left behind in a globalizing world. Trade and immigration have incorrectly been blamed. Instead, it should be recognized that economic growth has been fueled by technological and institutional changes that have been accelerated by trade and investment. While these changes improve standards of living overall and create new job opportunities, they also displace workers in high cost regions and industries. Perceptions of the fairness of distribution of gains from trade are likely to be improved where public policy effectively assists labor adjustment and mandates socially acceptable employment standards and safety nets. Possible ways to encourage improvements in communication of the benefits of growth that arise from trade and globalization are suggested. © 2016 Canadian Agricultural Economics Society


Ji Y.,University of Alberta | Kumar S.,Texas A&M University | Mookerjee V.,University of Texas at Dallas
Information Systems Research | Year: 2016

We study operational and managerial problems arising in the context of security monitoring where sessions, rather than raw individual events, are monitored to prevent attacks. The objective of the monitoring problem is to maximize the benefit of monitoring minus the monitoring cost. The key trade-off in our model is that as more sessions are monitored, the attack costs should decrease. However, the monitoring cost would likely increase with the number of sessions being monitored. A key step in solving the problem is to derive the probability density of a system with n sessions being monitored, with a session's age measured as the time elapsed since it last generated a suspicious event. We next optimize the number of sessions monitored by trading off the attack cost saved with the cost of monitoring. A profiling step is added prior to monitoring and a resulting two-dimensional optimization problem is studied. Through numerical simulation, we find that a simple size-based policy is quite robust for a very reasonable range of values and, under typical situations, performs almost as well as the two more sophisticated policies do. Also, we find that adopting a simplified policy without using the option of managing sessions using an age threshold can greatly increase the ease of finding an optimal solution, and reduce operational overhead with little performance loss compared with a policy using such an option. The insights gained from the mechanics of profiling and monitoring are leveraged to suggest a socially optimal contract for outsourcing these activities in a reward-based contract. We also study penalty-based contracts. Such contracts (specifically, when the penalty is levied as a percentage of the monthly service fee) do not achieve the social optimum. We show how an appropriate penalty coefficient can be chosen to implement a socially optimal penalty-based contract. In addition, we provide a high-level comparison between reward- and penalty-based contracts. In a penalty-based contract, the setting of the fixed payment can be challenging because it requires additional knowledge of the total expected malicious event rate, which needs to be observed through a period of no monitoring. © 2016 INFORMS.
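The central trade-off in the monitoring problem, attack cost falling and monitoring cost rising as more sessions are watched, can be illustrated with a toy numeric optimization. The functional forms, constants, and names below are assumptions for illustration; the paper derives its actual benefit function from a session-age probability density.

```python
# Sketch: choose how many sessions to monitor by maximizing
# (attack cost saved) - (monitoring cost). Functional forms are toy choices.
import numpy as np

def net_benefit(n, total_sessions=1000, attack_cost=5000.0,
                unit_monitor_cost=2.0):
    coverage = n / total_sessions
    expected_attack_cost = attack_cost * (1.0 - coverage) ** 2  # falls with n
    saved = attack_cost - expected_attack_cost
    return saved - unit_monitor_cost * n                        # cost rises with n

ns = np.arange(0, 1001)
benefits = np.array([net_benefit(n) for n in ns])
n_star = int(ns[benefits.argmax()])
print(f"monitor {n_star} sessions; net benefit {benefits.max():.0f}")
```

An interior optimum exists whenever the marginal attack cost saved by monitoring one more session eventually drops below the marginal monitoring cost, which mirrors the paper's first-order trade-off.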


Lopaschuk G.D.,University of Alberta
Canadian Journal of Cardiology | Year: 2017

Ischemic heart disease and heart failure are leading causes of mortality and morbidity worldwide. They continue to be a major burden on health care systems throughout the world, despite major advances made over the past 40 years in developing new therapeutic approaches to treat these debilitating diseases. A potential therapeutic approach that has been underutilized in treating ischemic heart disease and heart failure is "metabolic modulation." Major alterations in myocardial energy substrate metabolism occur in ischemic heart disease and heart failure, and are associated with an energy deficit in the heart. A metabolic shift from mitochondrial oxidative metabolism to glycolysis, as well as an uncoupling between glycolysis and glucose oxidation, plays a crucial role in the development of cardiac inefficiency (oxygen consumed per work performed) and functional impairment in ischemic heart disease as well as in heart failure. This has led to the concept that optimizing energy substrate use with metabolic modulators can be a potentially promising approach to decrease the severity of ischemic heart disease and heart failure, primarily by improving cardiac efficiency. Two approaches for metabolic modulator therapy are to stimulate myocardial glucose oxidation and/or inhibit fatty acid oxidation. In this review, the past, present, and future of metabolic modulators as an approach to optimizing myocardial energy substrate metabolism and treating ischemic heart disease and heart failure are discussed. This includes a discussion of pharmacological interventions that target enzymes involved in fatty acid uptake, fatty acid oxidation, and glucose oxidation in the heart, as well as enzymes involved in ketone and branched chain amino acid catabolism in the heart. © 2017 Canadian Cardiovascular Society.


Metcalfe P.D.,University of Alberta
Journal of the Canadian Urological Association | Year: 2017

The child with a neuropathic bladder requires lifelong dedicated care. Just as each patient presents with unique physiology, each phase of their life presents varying challenges. The primary concern for our patients is their renal health, but continence and independence also play significant roles. Most patients can be managed conservatively, but a myriad of surgical options are also available, reinforcing our emphasis on individualized care. Appropriate pre-surgical planning is required to ensure the right patient receives the best operation for his/her wants and needs. Furthermore, the numerous potential complications must be understood and long-term followup and surveillance is required. This review outlines the basic pathophysiology, investigations, and treatments, with a focus on the changing needs throughout their lives. © 2017 Canadian Urological Association.


Blackmore D.,University of Alberta
Journal of clinical neuromuscular disease | Year: 2017

OBJECTIVES: Despite its relatively common occurrence, definitive diagnosis of small fiber neuropathy (SFN) remains problematic. In practice, patients with pain, numbness, and/or paresthesias in their lower limbs are diagnosed with SFN if found to have dissociated sensory loss in their feet, that is, impaired pinprick perception (PP) but relatively preserved vibration. We sought to assess the sensitivity and specificity of clinical examination and various diagnostic tools available for screening SFN. METHODS: Medical records of 56 patients diagnosed with SFN were reviewed. Diagnosis was based on symptoms, detailed neurological examination that included PP, and abnormal results on at least one testing modality: quantitative sudomotor axon reflex (sweat) test (QSART), quantitative sensory testing (QST), and heart rate variability (HRV) testing. RESULTS: Sensitivity of PP was relatively consistent across modalities at about 63% in the presence of appropriate sensory symptoms. Laboratory testing diagnosed 88% of patients when both QSART and QST were employed. QST was most sensitive for detection of SFN, with heat-pain testing having higher sensitivity than cooling. HRV testing revealed low correlation across all groups. CONCLUSIONS: The diagnostic yield for SFN increases by combining clinical features with various testing modalities. In symptomatic patients, we propose the following criteria for diagnosis of SFN: definite SFN, abnormal neurological examination and both QSART and QST; probable SFN, abnormal neurological examination and either QSART or QST; possible SFN, abnormal neurological examination, QSART, or QST.


News Article | April 26, 2017
Site: www.treehugger.com

With much of the trail following the shoulders of busy highways, Edmund Aunger says the trail is dangerous and should not be promoted as a tourist attraction.

Edmund Aunger has a message for anyone thinking about coming to Canada this year to cycle the Trans Canada Trail: “Do not come!” The trail is supposed to be completed by July 1, 2017, just in time for Canada’s 150th birthday, but Aunger fears it isn’t what people expect it to be, based on misleading advertising. Much of the trail runs along the shoulders of highways, with motor vehicles racing past at dangerous speeds. It is not safe.

In an interview with CBC radio host Ian Brown, Aunger, an avid cyclist and political science professor at the University of Alberta, explained that the original intent for the trail was to provide a safe transportation network for non-motorized travelers: cyclists, runners, and hikers in summertime, cross-country skiers in winter. Canadians realized the importance of having non-motorized trails after a horrific cycling accident in 1985 killed three children and injured six others. Unfortunately, what’s now being touted as the Trans Canada Trail, or "The Great Trail," has strayed dangerously far from its original vision.

“[The current trail] is 8,500 kilometres (5,280 miles) of roads and highways, it is 5,000 kilometres (3,100 mi) of ATV trails, it is 7,000 kilometres (4,350 mi) of waterways including Lake Superior. Of course you can't walk and ride your bicycle in it, and people who walk or cycle through that area still have to go on the Trans Canada Highway. This, compared to the dream and the promises that were made, this is absolutely horrible.”

If anyone understands the dangers of mixing cars and bicycles, it is Aunger. On July 14, 2012, his wife Elizabeth was struck and killed by a vehicle while cycling on a highway in Prince Edward Island. She had always refused to ride on roads, saying they were far too dangerous, but when their guide suddenly left the trail and took them onto a highway, they had no choice but to ride the short 2.9-kilometer (1.8 mile) distance to reach the next section of trail. Within minutes, Elizabeth was dead.

Aunger is completing a cross-country cycling trip this summer, traveling the final leg from Ottawa to Charlottetown, PEI. (He divided it up into five stages over five years.) He cycles in honor of Elizabeth and, through his website, Ride The Trail, promotes a safe, accessible, and consistently passable trail.

Aunger continues to be an outspoken critic of the trail, which he thinks failed because there was no government oversight; its construction was left to volunteers, which would never be done for highways or other important transportation infrastructure. He is petitioning the government to create minimum standards for safety and quality on the Trans Canada Trail, and to ensure that it is a “genuinely non-motorized and world-class greenway.” You can add your name to the petition here. Learn more at Ride The Trail.ca, where you can also find a heart-wrenching blog post containing driver excuses for striking cyclists.


A new discovery at the University of Alberta will fundamentally alter how we view spinal cord function and rehabilitation after spinal cord injuries. Neuroscientists found that spinal blood flow in rats was unexpectedly compromised long after a spinal cord injury (chronic ischemia), and that improving blood flow, or simply inhaling more oxygen, produces lasting improvements in cord oxygenation and motor functions, such as walking. Previous work had shown that while blood flow was temporarily disrupted at the injury site, it resumed rapidly, and it was more or less assumed that blood flow was normal below the injury. This turns out to be wrong.

"We've shown for the first time that spinal cord injuries (SCI) lead to a chronic state of poor blood flow and lack of oxygen to neuronal networks in the spinal cord," says co-principal investigator Karim Fouad, professor, Faculty of Rehabilitation Medicine and Canada Research Chair for spinal cord injury. "By elevating oxygen in the spinal cord we can improve function and re-establish activity in different parts of the body."

Published in Nature Medicine on May 1, 2017, the study demonstrates chronic ischemic hypoxia (lack of blood and oxygen) after spinal cord injury and how blood flow plays a key role in the cause and treatment of motor disorders. Simply put, this could mean restored activity and ability in parts of the body that stopped working after spinal cord injury in the near future.

The discovery, like most "eureka moments" in science, happened by accident. The lead author Yaqing (Celia) Li, rehabilitation science post-doctoral fellow, and David Bennett, co-principal investigator and professor, Faculty of Rehabilitation Medicine, were looking at the injured spinal cord of a rat under a microscope and noticed the capillaries contracting in response to application of dietary amino acids like tryptophan. "I thought, 'why would capillaries contract, when conventionally arteries are the main contractile vessels, and why should dietary amino acids circulating in the blood cause these contractions?'" says Bennett. "That is just plain weird, that what you eat should influence blood flow in the spinal cord." So they set out to answer these questions.

Li, Bennett and Fouad found that the AADC (aromatic L-amino acid decarboxylase) enzyme that converts dietary amino acids into trace amines was upregulated in specialized cells called pericytes that wrap capillaries. Unexpectedly, these trace amines produced in the pericytes caused them to contract, clamping down on the capillaries and reducing blood flow. This surprising finding led them to make basic measurements of blood flow and oxygenation below the injury, which led to the discovery of the chronic ischemic hypoxia. They reasoned that the capillaries were excessively constricted by these pericytes after SCI, since there is an ample supply of tryptophan. So they decided to try blocking AADC to improve blood flow.

"Since blood flow below the injury is compromised, the neuronal networks function poorly with a lack of oxygen. So we blocked the AADC enzyme and found that it improved blood flow and oxygenation to the networks below the injury," Bennett says. "More importantly, this allowed the animals to produce more muscle activity."

As an alternative treatment to blocking the AADC enzyme in the spinal cord of rats, the neuroscientists exposed the animals to higher oxygen levels, and even they were surprised to see what happened next. "The rat could walk better!" Fouad says. "The change in oxygen restored function, albeit temporarily."

Though the team knows their discovery can have big implications in the world of neuroscience, rehabilitation and spinal cord injury, they are quick to mention a disclaimer. "There is still a long way to go when it comes to treatment and helping patients with spinal cord injuries," says Fouad. "But this discovery has helped us understand the etiology of spinal cord injuries in a way we never did before. We can now design treatments that improve blood flow to produce long-term rehabilitation after SCI. Possibly even simple therapies such as exercise or just breathing will play a role in preventing long-term hypoxia and damage to the spinal cord. It's a small but important step in the right direction, stemming from studying an obscure enzyme in the spinal cord -- and that's the beauty of basic science."


News Article | April 17, 2017
Site: cen.acs.org

A record in carbohydrate synthesis has been broken: Researchers have synthesized a glycan nearly twice as large as, and more complex than, any made before. The polysaccharide, called an arabinogalactan, contains 92 sugars and is an essential cell-wall component in Mycobacterium tuberculosis, the bacterium that causes tuberculosis, and other mycobacteria. The tuberculosis drug ethambutol works by blocking the polysaccharide’s biosynthesis. “It is important that somebody showed this can be done,” says glycan expert Peter H. Seeberger of the Max Planck Institute of Colloids & Interfaces, who was not involved in the synthesis. “It illustrates what is possible with traditional methods. Chemists can now think of what one might do with such large carbohydrates.” Xin-Shan Ye of Peking University, whose group carried out the total synthesis, says he believes the work could lead to novel tuberculosis vaccines and a better understanding of the bacterium’s mechanism of cell-wall biosynthesis (Nat. Commun. 2017, DOI: 10.1038/ncomms14851). Natural glycans, which are oligosaccharides or polysaccharides, adopt a multiplicity of structural forms in living organisms. They exist in only small quantities and are hard to isolate, making it difficult to characterize them, evaluate them as drug targets, and use them in carbohydrate vaccines and other applications. To overcome these limitations, researchers synthesize the glycans in the laboratory, often up to a few tens of sugars in length. Synthesizing larger glycans is tough work: They are structurally complex, they have specific regio- and stereochemistry, the syntheses include multiple protecting-group manipulations, and reaction intermediates are difficult to purify. Chemists have wrestled with this complex chemistry over the years. In 1993, a RIKEN group including Tomoya Ogawa and Yukishige Ito synthesized a 25-sugar glycan—a feat that held the size record for many years. Todd L. Lowary of the University of Alberta and coworkers made a 22-unit fragment of the mycobacterial arabinogalactan a decade later. Since then, carbohydrate chemists have synthesized glycans in the 30- and 40-unit range. And Seeberger and coworkers just submitted a paper reporting the automated carbohydrate synthesis of a linear 50-sugar oligosaccharide—the largest single glycan ever created in one go, without combining smaller fragments. But the mycobacterial arabinogalactan not only has nearly twice as many sugar units as Seeberger’s glycan but is also branched and more complex. The researchers assembled it by using preactivation-based one-pot glycosylation, an efficient synthetic method they developed previously. They optimized the method to rapidly synthesize five-, six-, and seven-unit glycans, each in a single pot. They used coupling reactions to combine sets of those short glycans into one 30-sugar galactan and two identical 31-sugar arabinans. The team then joined the three fragments into the 92-sugar arabinogalactan. One graduate student and several undergraduates accomplished the synthesis in two-and-a-half years. Lowary says his group has recently been trying to make glycans with close to 100 sugars. “We’re nowhere near that, but now Ye and coworkers have been able to do it,” he says. “Considering the sheer size and complexity of the arabinogalactan, the efficient way they’ve put it together is tremendous. It shows how far you can push the envelope, and it will be a catalyst for future work in large-glycan synthesis.”


News Article | April 17, 2017
Site: www.eurekalert.org

EDMONTON (EMBARGOED UNTIL WEDNESDAY, APRIL 12 AT 4 A.M. MST)--Researchers at the University of Alberta have demystified the way that polar bears search for their typical prey of ringed seals. The answer, it turns out, is simple: they follow their nose using the power of wind.

Using satellite telemetry data collected from 123 adult polar bears in Canada's Hudson Bay over 11 years, the researchers merged the movements of polar bears with wind patterns to explore how they looked for seals. They hypothesized that when a bear smells prey, it moves upwind to find it. But what is a bear to do before it smells anything at all?

"Predators search for prey using odours in the air, and their success depends on how they move relative to the wind," explained Ron Togunov, University of Alberta alumnus and lead author on the study. "Travelling crosswind gives the bears a steady supply of new air streams and maximizes the area they can sense through smell."

While this phenomenon had been suspected in many animals, it had not been quantified in mammals until now. The best conditions for olfactory hunting, explained UAlberta professor Andrew Derocher, co-author and renowned polar bear expert, occur at night during the winter. "Crosswind search was most frequent when winds were slow, when it is easier to localize the source of a certain smell, and at night, when bears are relatively active and vision is less effective, so bears rely more heavily on their sense of smell."

The findings also raise questions about the implications of climate change. "Wind speeds in the Arctic are projected to increase, potentially making olfaction more difficult," explained Togunov. "It is important to understand how polar bear hunting success will be affected by these changing conditions."

The study, "Windscapes and olfactory foraging in a large carnivore," was published in Scientific Reports in April 2017.


News Article | April 8, 2017
Site: www.techtimes.com

Cats and dogs as pets have many beneficial effects on the overall well-being of an individual or a family. Not only are these furry friends great stress busters when one returns home from a hectic day at work, but they also offer health benefits. A new study reveals that having a pet may help in weight reduction and allergy resistance as well.

The research, conducted by scientists at the University of Alberta in Canada, shows that dogs may boost babies' immunity against allergens while they are still in the mother's womb. The researchers analyzed fecal matter of 746 babies, registered to the Canadian Healthy Infant Longitudinal Development study, to determine whether early exposure to dogs led to an increased immunity against certain allergens. The researchers noticed that the presence of a dog or cat increased exposure to two types of microbes, which are associated with lowered risks of allergies, as well as obesity.

"The abundance of these two bacteria were increased twofold when there was a pet in the house," revealed Anita Kozyrskyj, the study's lead author and pediatric epidemiologist at the university. The microbes Ruminococcus and Oscillospira were responsible for reducing allergic reactions, as well as maintaining body mass index, or BMI.

Researchers revealed that these bacteria were transmitted from the dog to the baby even before birth. The initial transmission, which starts when the baby is still inside the womb, occurs through indirect transference from the mother. However, this transmission continued until three months after the baby's birth, directly from the dog to the infant. Scientists also reported that this bacterial transference was the same in babies delivered through C-section as it was for those from a normal vaginal delivery. However, researchers note that these changes in bacterial concentration, leading to better allergic immunity and lower BMI, are limited to babies at that particular stage of life. They will not affect older children or adults in any way.

This is not the first time that research has suggested that pets are beneficial for the family, especially children and infants. A previous Tech Times report shared that studies showed that owning a pet led to reduced risk of asthma development in children. That research indicated that the presence of a pet dog during the first year of a baby's life reduced asthma development risk by as much as 13 percent. The current study once again establishes that dogs and cats as pets are valuable not just for the companionship they offer, but also for the overall health benefits that humans derive from them.

The results of the recent study have been published in the journal Microbiome. © 2017 Tech Times, All rights reserved. Do not reproduce without permission.


News Article | April 16, 2017
Site: www.theguardian.com

Within them sits some 80,000 years of history, offering researchers tantalising clues about climate change and the Earth’s past. At least that was the case – until the precious cache of Arctic ice cores was hit by warming temperatures. A freezer malfunction at the University of Alberta in Edmonton has melted part of the world’s largest collection of ice cores from the Canadian Arctic, reducing some of the ancient ice into puddles. “For every ice-core facility on the planet, this is their No1 nightmare,” said glaciologist Martin Sharp. The ice cores – long cylinders extracted from glaciers – contain trapped gasses and particles that offer a glimpse into atmospheric history. “When you lose part of an ice core,” Sharp said, “you lose part of the record of past climates, past environments – an archive of the history of our atmosphere. You just don’t have easy access to information about those past time periods.” The university had recently acquired the dozen cores, or 1.4km (0.9 miles) of ice, drilled from five locations in the Canadian Arctic, and carefully transported them from Ottawa to Edmonton. The samples were moved into the university’s brand-new, custom-built C$4m (US$3m) facility earlier this month. Days later, one of the freezers tripped a high-heat alarm. “The way in which the freezer failed meant that it started to pump heat into the freezer,” said Sharp. “So it wasn’t just a question of it gradually warming up … It was actually quite rapidly raised to a temperature of 40C (104F).” Sharp rushed to the walk-in storage freezer to survey the damage and found steaming puddles gathered around the millennia-old ice. “It was more like a changing room in a swimming pool than a freezer,” he said. “I’ve had better days, let’s say that.” Around 13% of the archive had been exposed to high heat, representing more than 180m of ice. None of the cores were completely destroyed. “There are some which are clearly toast and there are others which are not obviously very much affected,” Sharp said. An ice core from the Penny Ice Cap on Baffin Island lost about a third of its mass, amounting to about 22,000 years of history, while a core from Mount Logan, Canada’s tallest mountain, saw 16,000 years melted away. But much of the collection was unaffected by the malfunction, thanks to a stroke of luck. A television crew had been documenting the ice core move and had asked that the samples be put in a second freezer because the lighting was better. The university complied, storing nearly 90% of the collection in an unaffected freezer. The question is now whether any research can be carried out on the affected cores. “This incident will affect research, no question,” said Sharp. “It rules out certain studies that we might have wanted to conduct on the cores, such as reconstructing continuous long-term histories where parts of the cores have been lost or contaminated.” As returning to the Arctic to replace the damaged ice cores would be a costly endeavour, the focus is now on regularly monitoring and safeguarding the ice cores that are left. “It’s by no means a write-off from a scientific point of view,” said Sharp. “It’s just disappointing to have this happen at all and to have lost some ice that would be of potentially quite a bit of interest.”


News Article | April 26, 2017
Site: www.eurekalert.org

(Edmonton, AB) Yet more evidence can be added to the growing literature showing that women with cardiovascular disease may receive different health care and experience worse outcomes than men. A study analyzing data from 21,062 Albertans discharged from hospital emergency rooms after presenting primarily for atrial fibrillation/flutter (AFF) showed that 1.3 per cent of women and 0.9 per cent of men died within one month. That translates to roughly 40 per cent more deaths among women than men (1.3 per cent is about 1.4 times 0.9 per cent). "We found this to be true after adjusting for factors such as age and other health-related variables," said Rhonda Rosychuk, a statistician and professor in Pediatrics at the U of A. "It's an unexpected discrepancy." The study also showed that women experienced shorter or longer wait times to see a physician and specialist in follow-up care, depending on other factors. It's far too early to draw any causal connections with the higher death rates, explained Rosychuk. "Further examination is required to determine if these differences are systemic, for example, related to care delivery." But there's no denying the significance of these findings, given that mortality and time to death varied by sex, she added. "Overall, emergency, family medicine and specialist clinician groups should be aware of these differences and do their best to ensure evidence-based management is provided to both men and women." This is the first study to establish sex-based differences in outcomes after discharge from Alberta emergency departments following AFF presentation. Previous investigators have identified numerous sex-based differences in AFF. The study was published in the Canadian Journal of Cardiology, and funding was received from the Women & Children's Health Research Institute and Alberta Health. The University of Alberta collaborators on this project were Michelle Graham, Brian Holroyd, Xuechen Zhang, and Brian Rowe.


News Article | April 18, 2017
Site: www.eurekalert.org

Four hundred and thirty million years ago, long before the evolution of barracudas or sharks, a different kind of predator stalked the primordial seas. The original sea monsters were eurypterids--better known as sea scorpions. Related to both modern scorpions and horseshoe crabs, sea scorpions had thin, flexible bodies. Some species also had pinching claws and could grow up to three metres in length. New research by University of Alberta scientists Scott Persons and John Acorn hypothesizes that sea scorpions had another weapon at their disposal: a serrated, slashing tail spine. "Our study suggests that sea scorpions used their tails, weaponized by their serrated spiny tips, to dispatch their prey," says Scott Persons, paleontologist and lead author on the study. Sparked by the discovery of a new fossil specimen of the eurypterid Slimonia acuminata, Persons and Acorn make the biomechanical case that these sea scorpions attacked and killed their prey with sidelong strikes of their serrated tails. The fossil, collected from the Patrick Burn Formation near Lesmahagow, Scotland, shows a Slimonia acuminata with its serrated-spine-tipped tail curved strongly to one side. Unlike lobsters and shrimps, which can flip their broad tails up and down to help them swim, eurypterid tails were vertically inflexible but horizontally highly mobile. "This means that these sea scorpions could slash their tails from side to side, meeting little hydraulic resistance and without propelling themselves away from an intended target," explains Persons. "Perhaps clutching their prey with their sharp front limbs, eurypterids could kill prey using a horizontal slashing motion." Among the likely prey of Slimonia acuminata and other eurypterids were early vertebrates. The paper, "A sea scorpion's strike: New evidence of extreme lateral flexibility in the opisthosoma of eurypterids," was published in The American Naturalist in April 2017.


News Article | April 24, 2017
Site: www.medicalnewstoday.com

The new study is the work of researchers from the University of Exeter in the United Kingdom and the University of Alberta in Canada. They report their findings in the Journal of Neuroinflammation. Co-author Paul Eggleton, an immunologist and professor at the University of Exeter Medical School, says that multiple sclerosis can have a "devastating impact on people's lives," and yet, unfortunately, the present situation is that "all medicine can offer is treatment and therapy for the symptoms." Multiple sclerosis (MS) is a disease in which the immune system mistakenly attacks tissue of the central nervous system - which comprises the brain, spinal cord, and optic nerve. As the disease progresses, it destroys more and more of the fatty myelin sheath that insulates and protects the nerve fibers that send electrical messages in the central nervous system. This destruction can lead to brain damage, vision impairment, pain, altered sensation, extreme fatigue, problems with movement, and other symptoms. As research into the cause of MS progresses, scientists are becoming increasingly interested in the role of mitochondria - the tiny components inside cells that produce units of energy for powering the cell. In earlier work, the team behind the new study was the first to provide an explanation for the role of defective mitochondria in MS through clinical and laboratory experiments. In their new investigation, the researchers study a protein called Rab32, which is known to be involved in certain mitochondrial processes. They found that levels of Rab32 are much higher in the brains of people with MS and hardly detectable in the brains of people without the disease. They also discovered that the presence of Rab32 coincides with disruption of a communication system that makes mitochondria malfunction, with toxic effects in the brain cells of people with MS. The disruption arises because a cell compartment called the endoplasmic reticulum (ER) sits too close to the mitochondria. The ER produces, processes, and transports many compounds that are used inside and outside the cell. The researchers note that one of the functions of the ER is to store calcium, and if the distance between the ER and mitochondria is too short, communication between the mitochondria and the calcium supply is disrupted. Calcium uptake into mitochondria is already known to be critical to cell functioning. Although they did not discover what causes Rab32 levels to increase, the team believes that the problem may lie in a defect at the base of the ER. The study could help scientists find ways to use Rab32 as a treatment target, as well as look for other proteins that may cause similar disruptions, note the authors.


Study finds some significant differences in brains of men and women
Do the anatomical differences between men and women—sex organs, facial hair, and the like—extend to our brains? The question has been as difficult to answer as it has been controversial. Now, the largest brain-imaging study of its kind has found some sex-specific patterns, but overall more similarities than differences. The work raises new questions about how brain differences between the sexes may influence intelligence and behavior.

This new solar-powered device can pull water straight from the desert air
You can’t squeeze blood from a stone, but wringing water from the desert sky is now possible, thanks to a new spongelike device that uses sunlight to suck water vapor from air, even in low humidity. The device can produce nearly 3 liters of water per day, and researchers say future versions will be even better. That means homes in the driest parts of the world could soon have a solar-powered appliance capable of delivering all the water they need, offering relief to billions of people.

A precious collection of ice cores from the Canadian Arctic has suffered a catastrophic meltdown. A freezer failure at a cold storage facility in Edmonton run by the University of Alberta caused 180 of the meter-long ice cylinders to melt, depriving scientists of some of the oldest records of climate change in Canada’s far north.

Why do shoelaces untie themselves? This team may have the answer
It’s the bane of tennis shoe wearers everywhere: No matter how tightly you tie your laces, they seem to come undone, often at the most inopportune time. Now, for the first time, scientists have untangled why shoelace knots fail. The work also reveals the best knots to tie, which could have implications for everything from surgery to new cancer drugs.

In 2005, NASA’s Cassini spacecraft spied jets of water ice and vapor erupting into space from fissures on Enceladus, evidence of a salty ocean beneath the saturnian moon’s placid icy surface. Now, it turns out that the jets contain hydrogen gas, a sign of ongoing reactions on the floor of that alien sea. Because such chemistry provides energy for microbial life on Earth, the discovery makes Enceladus the top candidate for hosting life elsewhere in the solar system—besting even Jupiter’s Europa, another icy moon with an ocean.


News Article | April 27, 2017
Site: www.chromatographytechniques.com

After being headless for almost a century, a dinosaur skeleton that had become a tourist attraction in Dinosaur Provincial Park was finally reconnected to its head. Researchers at the University of Alberta have matched the headless skeleton to a Corythosaurus skull from the university’s Paleontology Museum that had been collected in 1920 by George Sternberg. “In the early days of dinosaur hunting and exploration, explorers only took impressive and exciting specimens for their collections, such as skulls, tail spines and claws,” explained graduate student Katherine Bramble, adding the practice was commonly referred to as head hunting. “Now, it’s common for paleontologists to come across specimens in the field without their skulls.” The headless Corythosaurus skeleton has been a tourist attraction in Dinosaur Provincial Park since the 1990s. In the early 2010s, a group of scientists noticed newspaper clippings dating back to the 1920s in the debris around the site. Among them was Darren Tanke, technician at the Royal Tyrrell Museum and co-author on the paper, who began to wonder if this skeleton could be related to the skull at the University of Alberta. That was where Bramble and her supervisor Philip Currie came in, along with former post-doctoral fellow Angelica Torices. “Using anatomical measurements of the skull and the skeleton, we conducted a statistical analysis,” Bramble explained. “Based on these results, we believed there was potential that the skull and this specimen belonged together.” In 2012, the skull and skeleton of the Corythosaurus were reunited. Whole once more, the specimen resides at the University of Alberta. As natural erosion takes place and human activity digs up new specimens, more headless dinosaur skeletons continue to crop up. “It’s becoming more and more common,” said Bramble. “One institution will have one part of a skeleton. Years later, another will collect another part of a skeleton that could belong to the same animal.” The reasons are many, ranging from the historical practice of head hunting to a lack of resources for exploration to new parts of skeletons becoming exposed. This discovery highlights a growing field of study in paleontology, Bramble noted. “Researchers are now trying to develop new ways of determining whether or not disparate parts of skeletons come from the same animal,” she explained. “For this paper, we used anatomical measurements, but there are many other ways of matching, such as conducting a chemical analysis of the rock in which the specimens are found.” As scientists develop new methods for matching specimens, Bramble hopes more dinosaur skeletons will be reunited as well. The entire story is explained in detail in the paper, “Reuniting the ‘head hunted’ Corythosaurus excavatus (Dinosauria: Hadrosauridae) holotype skull with its dentary and postcranium,” which was published in the April 2017 edition of Cretaceous Research.
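As a rough illustration of how measurement-based matching can work, the sketch below fits a trend between two anatomical measurements across reference specimens and asks whether a candidate skull-skeleton pair is consistent with it. All numbers are invented for the example, and the statistical analysis in the published paper is more involved than this.

```python
import numpy as np

# Hypothetical reference data from complete Corythosaurus specimens:
# skull length vs. femur length, both in centimetres (illustrative values).
skull = np.array([70.0, 74.0, 81.0, 88.0, 95.0])
femur = np.array([98.0, 104.0, 112.0, 123.0, 131.0])

# Fit a simple least-squares line: femur ~ a * skull + b.
a, b = np.polyfit(skull, femur, 1)
resid = femur - (a * skull + b)
sigma = resid.std(ddof=2)  # residual spread of the reference sample

def consistent(skull_len, femur_len, k=2.0):
    """True if the pair falls within k residual SDs of the reference trend."""
    predicted = a * skull_len + b
    return abs(femur_len - predicted) <= k * sigma

# Candidate: the museum skull paired with the headless park skeleton.
print(consistent(84.0, 117.0))  # -> True: the proportions are compatible
```

The two-standard-deviation cutoff is an arbitrary illustrative choice; a real analysis would weigh many measurements and specimens at once.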


News Article | April 27, 2017
Site: www.rdmag.com

A Corythosaurus is now complete after being without its head for nearly a century. Researchers at the University of Alberta matched a dinosaur skull, displayed at the university’s Paleontology Museum after being collected by George Sternberg in 1920, with its body, which has been displayed in Dinosaur Provincial Park in Alberta since the 1990s. After examining newspaper clippings dating back to the 1920s, scientists at the University of Alberta began to wonder if the skeleton at Dinosaur Provincial Park could be related to the skull at the university. “Using anatomical measurements of the skull and the skeleton, we conducted a statistical analysis,” University of Alberta graduate student Katherine Bramble said in a statement. “Based on these results, we believed there was potential that the skull and this specimen belonged together.” Researchers have been finding more headless dinosaur skeletons recently, due to natural erosion and human activity digging up new specimens. “In the early days of dinosaur hunting and exploration, explorers only took impressive and exciting specimens for their collections, such as skulls, tail spines and claws,” said Bramble. “Now, it’s common for paleontologists to come across specimens in the field without their skulls.” “One institution will have one part of a skeleton. Years later, another will collect another part of a skeleton that could belong to the same animal,” she added. According to Bramble, this particular aspect of paleontology has grown in recent years. “Researchers are now trying to develop new ways of determining whether or not disparate parts of skeletons come from the same animal,” she said. “For this paper, we used anatomical measurements, but there are many other ways of matching, such as conducting a chemical analysis of the rock in which the specimens are found.”


News Article | March 6, 2017
Site: www.techtimes.com

Researchers have a new theory as to why some dinosaur species stood on two feet instead of four: their predecessors’ need to run faster and for longer distances. Bipedalism was handed down from their ancestors, the much smaller proto-dinosaurs, according to paleontologists from the University of Alberta in Canada. These ancient creatures used to walk on all fours but evolved to stand upright, a trait passed on to their much bigger dino descendants. According to one ongoing theory, proto-dinosaurs turned bipedal so as to free their forelimbs to catch prey. For lead study author and postdoctoral fellow Scott Persons, this theory does not hold water. “Many ancient bipedal dinosaurs were herbivores, and even early carnivorous dinosaurs evolved small forearms,” said Persons in a statement. “Rather than using their hands to grapple with prey, it is more likely they seized their meals with their powerful jaws.” The key to this evolution lies in the tails of the proto-dinosaurs, which carried huge, leg-powering muscles. This muscle mass, Persons explained, offered the power and strength required for the ancient dinosaurs to stand on and move with their hind legs — a similar effect seen in many modern lizards that run bipedally. Proto-dinosaurs eventually came to run faster and for greater distances, their elongated hind limbs answering the need for speed while their smaller forelimbs helped reduce body weight and improve balance. This way, some of them entirely gave up moving on all fours. Eventually some dinosaurs returned to a four-legged stance; these were mostly species with heavy horns and plates around their heads that would have made it difficult to balance upright. Herbivores also evolved over time to carry bigger guts in order to break down the cellulose in their plant diets. That meant more weight on the front half of the animal’s body, making it harder to balance on just the hind legs, Persons added. Scientists have also studied why some modern fast-running animals, including horses and cheetahs, aren’t bipedal. Around 252 million years ago, during the Permian period, it appears that some creatures started to lose their leg-powering tail muscles as many became burrowers that needed strong front limbs to dig. Living underground likely helped those proto-mammals survive mass extinction events, as humongous back legs and a large tail would have made it difficult to move underground and evade predators. When proto-mammals emerged from burrowing and some of them evolved into fast runners, they lacked the tail muscles that would have favored a bipedal stance. The findings are discussed in the Journal of Theoretical Biology. Earlier this month, a new method that harnesses high-powered lasers probed dinosaur fossils and helped uncover one ancient animal’s transition from small feathered dinosaur to flying bird. Using the technique on the four-winged, feathered dinosaur Anchiornis, the team found that the prehistoric creature had drumstick-shaped legs, bird-like arms, and a long, slender tail. The Jurassic creature has not been classified as a bird but kept some characteristics found in birds, and it lived around the time birds diverged from their similar-looking dinosaur ancestors. © 2017 Tech Times, All rights reserved. Do not reproduce without permission.


News Article | April 17, 2017
Site: www.newscientist.com

A major glacier in Alaska has retreated to its lowest point in 900 years as a consequence of global warming. Glaciers around the world are in retreat. But the Columbia glacier is one of the most dramatic and well-documented cases, as well as the largest contributor to sea level rise of the 50 or so glaciers that descend to the sea in Alaska. Anders Carlson at Oregon State University and his colleagues have put the current ebb in the historical context of the past millennium, during which human contributions to climate change weren’t always so high. In 2004, the team bored down into the mud at the bottom of Prince William Sound, the bay on the southern coast of Alaska that the glacier flows into, and examined the layers of sediment deposited over a period of about 1600 years. The glacier recently receded past a geological fault line, on either side of which are rocks of differing magnetic and chemical compositions. So Carlson and his colleagues looked for a corresponding shift in the sediment to find the last time the glacier crossed this fault. The number and thickness of rings in the trunks of trees uncovered in the retreat of the glacier provided a timeline for the initial advance of the ice, and gave information about past climate conditions. By cross-referencing the sediments and tree data, the team found a matching point about 900 years ago. Running climate simulations revealed that summer air temperatures about 1°C higher than normal between 1910 and 1980 led to the glacier thinning until it became unstable in the 1980s and triggered the rapid retreat of the past three decades. The team attributes that warming to human-caused climate change. “What was surprising was the tight coupling between surface temperature of the glacier and its response,” says Carlson. Linking the behaviour of glaciers that terminate in water with climate is difficult, especially in areas with tectonic activity. But the team’s approach has allowed them to do just that, says Julie Brigham-Grette at the University of Massachusetts, Amherst. “Because of the coupling of the tree rings and the sediments, they can make the case that this retreat is a response to temperature, and not the internal dynamics of the glacier,” she says. The result is of wider importance, says Chris Rapley at University College London. “It shows that a small temperature increase of less than 2°C is sufficient to destabilise a glacier,” he says. International efforts to fight climate change are focused on limiting warming to 2°C, but are widely thought not to be sufficient to achieve that limit. “Previous analyses have speculated that the warming acted as a trigger for the mechanical processes of the retreat, and this analysis provides evidence that this is the case,” Rapley says. It’s unlikely that this is an isolated case. Alberto Reyes at the University of Alberta in Canada, one of the study’s authors, says that at some sites around the world, retreating glaciers are exposing trees that are some 7,000 years old, indicating that those glaciers are now smaller than they have been in many thousands of years.


News Article | April 3, 2017
Site: www.medicalnewstoday.com

The first step in shaping the brain is that the neural plate, a sheet-like cell layer, curves to form the neural tube. Assistant Professor Makoto Suzuki of the National Institute for Basic Biology, Professor Naoto Ueno, and their colleagues have shown that during neural tube formation, transient increases in the concentration of calcium ions in cells drive the necessary changes in cell shape and are essential for neural tube formation. This result was published in the journal Development, and an image from this research was selected for the cover. In this study, the researchers observed the cell population during neural tube formation in embryos of the African clawed frog (Xenopus laevis) using the fluorescent protein GECO, the brightness of which varies depending on the intracellular calcium ion concentration. They found that the pattern of fluctuation in intracellular calcium ion concentration across the cell population is complex. Local and transient rises in intracellular calcium ion concentration were found to cause cell deformation and contribute to the formation of the neural tube. Suzuki said, "According to these results, in the elevated pattern of calcium ions, we found that there is a pattern randomly occurring in single cells, and a pattern that occurs synchronously in many neighboring cells. It was found that the morphological changes necessary for normal neural tube formation occurred by combining these different patterns." This research was conducted as a joint research project with the National Institute for Basic Biology, Kyoto University, Osaka University, and the University of Alberta.
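The two reported patterns, isolated single-cell flashes versus synchronized rises in neighbouring cells, can be told apart with a very simple test on ΔF/F0 traces. The following sketch is illustrative only; the baseline window, threshold, lag tolerance and toy data are assumptions, not the study's analysis parameters.

```python
import numpy as np

def dff(trace, baseline_frames=20):
    """Convert raw fluorescence to ΔF/F0 using an initial-baseline estimate."""
    f0 = trace[:baseline_frames].mean()
    return (trace - f0) / f0

def transient_frames(trace, threshold=0.5):
    """Frames where ΔF/F0 exceeds a simple fixed threshold."""
    return set(np.flatnonzero(dff(trace) > threshold))

def synchronous(cell_a, cell_b, max_lag=2):
    """True if two neighbouring cells have transients within max_lag frames."""
    fa, fb = transient_frames(cell_a), transient_frames(cell_b)
    return any(abs(i - j) <= max_lag for i in fa for j in fb)

# Toy traces: cells 1 and 2 flash together (multicellular pattern),
# cell 3 flashes alone (single-cell pattern).
rng = np.random.default_rng(0)
t = np.full((3, 100), 100.0) + rng.normal(0, 1, (3, 100))
t[0, 50:55] += 120; t[1, 51:56] += 120; t[2, 80:85] += 120
print(synchronous(t[0], t[1]), synchronous(t[0], t[2]))  # -> True False
```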


News Article | May 2, 2017
Site: www.eurekalert.org

Amantadine hydrochloride may be the most common medication you've never heard of. This compound has been around for decades as the basis for antiviral and other medications, from flu therapy to treatments for brain disorders such as Parkinson's disease and the fatigue associated with multiple sclerosis. And yet, this compound has long been a bit of an enigma because of missing information on its properties. Now, chemists at the National Institute of Standards and Technology (NIST) and collaborators have published the first data on this important chemical's thermodynamic properties, including data on how it responds to heat and changes from a solid into a gas. Such data are valuable to the chemical and pharmaceutical industries for getting the highest production yields and shelf life for the medication. "Our research results are not directly related to the medical application of this multifunctional drug, although I am really fascinated by the range of its pharmacological activity," NIST research chemist Ala Bazyleva said. "We studied its thermodynamic properties and decomposition," Bazyleva said. "It is surprising, given the long history of amantadine-based drugs, that there is almost no information like this in the literature for many of them. Chemical engineers often have to rely on estimates and predictions based on similar compounds. Collating this information and developing these types of recommendations is at the core of what our group at NIST does." Amantadine hydrochloride belongs to the diamondoid class, a family of compounds whose structure is based on a cage of carbon atoms similar to diamond. Amantadine has a single carbon cage with a nitrogen atom attached on one side. Nonmedical studies have focused on the solid form of amantadine hydrochloride because it was expected to form disordered, or plastic, crystals, as many diamondoids do. It turns out that amantadine hydrochloride does not. Bazyleva began studying amantadine hydrochloride years ago while in Belarus working on her doctoral dissertation, and continued the effort during her postdoctoral studies in Germany and Canada. But progress was slow, partly because amantadine hydrochloride changes from a solid directly into a gas (a process called sublimation) and simultaneously falls apart, or decomposes. She needed a model explaining this complex process, one that incorporates detailed, high-level quantum chemistry calculations. She finally got access to this computational capability after she began working with the Thermodynamics Research Center (TRC) Group at NIST in Boulder several years ago. "NIST was fundamental in facilitating the modeling component," Bazyleva said. "In particular, the unique combination of facilities, software and expertise in quantum chemical computations allowed us to apply high-level calculations to get insight into the structure and stability of the drug in the gas phase." While the compound behaves as if it is ionic (composed of positively and negatively charged pieces, though neutral overall) in the solid crystal form and when dissolved in a liquid, quantum chemistry calculations revealed that it decomposes into two neutral compounds in the gas phase. The data were generated by NIST's TRC, which for more than 70 years has been producing chemical data for scientific research and industrial process design. Co-authors are from the Belarusian State University in Belarus; the University of Rostock in Germany; and the University of Alberta in Canada. Paper: A. Bazyleva, A.V. Blokhin, D.H. Zaitsau, G.J.
Kabo, E. Paulechka, A.F. Kazakov and J.M. Shaw. Thermodynamics of the antiviral and antiparkinsonian drug amantadine hydrochloride: condensed state properties and decomposition. Journal of Chemical and Engineering Data. Published online May 1. DOI: 10.1021/acs.jced.7b00107
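For context on how "responds to heat and changes from a solid into a gas" becomes a measurable number: sublimation enthalpies are conventionally extracted from vapour-pressure data via the Clausius-Clapeyron relation. The equation below is the standard textbook form, shown for orientation only; it is not a claim about the authors' specific fitting procedure.

```latex
\ln\frac{p_2}{p_1} = -\frac{\Delta_{\mathrm{sub}}H}{R}\left(\frac{1}{T_2}-\frac{1}{T_1}\right)
```

Measuring the vapour pressure p at several temperatures T yields the sublimation enthalpy from the slope of ln p against 1/T. Simultaneous decomposition complicates exactly this step, because the measured pressure then includes breakdown products, which is one reason the gas-phase quantum-chemical modelling described above was needed.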


News Article | May 8, 2017
Site: www.eurekalert.org

A new process for water filtration using carbon dioxide consumes roughly one-thousandth the energy of conventional methods, recently published scientific research has shown. The research was led by Dr Orest Shardt of the University of Limerick, Ireland, together with Dr Sangwoo Shin (now at the University of Hawaii, Manoa), while they were postdoctoral researchers at Princeton University (United States) last year. With global demand for clean water increasing, there is a continuing need to improve the performance of water treatment processes. Dr Shardt expects this new method, which uses CO2, could be applied in a variety of industries such as mining, food and beverage production, pharmaceutical manufacturing and water treatment. The research, published in the open-access scientific journal Nature Communications, indicates that the new process could be easily scaled up, "suggesting the technique could be particularly beneficial in both the developing and developed worlds". The new method could also be used to remove bacteria and viruses without chlorination or ultraviolet treatment. "We are at the early stages of developing this concept. Eventually, this new method could be used to clean water for human consumption or to treat effluent from industrial facilities," Dr Shardt stated. Currently, water filtration technologies such as microfiltration or ultrafiltration use porous membranes to remove suspended particles and solutes. These processes trap and remove suspended particles, such as fine silt, by forcing the suspension through a porous material with gaps that are smaller than the particles. Energy must be expended to overcome the friction of pushing the water through these small passages. These kinds of filtration processes have drawbacks such as high pumping costs and a need for periodic replacement of the membranes due to fouling. The research by Drs Shardt and Shin demonstrates an alternative membraneless method for separating suspended particles that works by exposing the colloidal suspension to CO2. "The demonstration device is made from a standard silicone polymer, a material that is commonly used in microfluidics research and similar to what is used in household sealants. While we have not yet analysed the capital and operating costs of a scaled-up process based on our device, the low pumping energy it requires, just 0.1% that of conventional filtration methods, suggests that the process deserves further research," said Dr Shardt. "What we need to do now is to study the effects of various compounds, such as salts and dissolved organic matter that are present in natural and industrial water, to understand what impact they will have on the process. This could affect how we optimise the operating conditions, design the flow channel, and scale up the process," he continued. Since joining the €86 million Bernal Institute at the University of Limerick last September, Dr Shardt has been continuing his research on the mathematical modelling and simulation of the water purification process and the physical phenomena on which it is based. "As a new arrival to Ireland, I'm now looking for motivated PhD students to work with me in this area. I am sure that creative students will find new ways to improve the process and apply it in unexpected ways," Dr Shardt concluded. To read the research paper, published in Nature Communications, visit: https:/ The paper is authored by Sangwoo Shin, Orest Shardt, Patrick B Warren and Howard A Stone. It was published online on May 2, 2017.
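The headline energy comparison follows from the basic hydraulic relation that pumping power equals flow rate times pressure drop. The toy comparison below uses invented pressure drops, chosen purely to illustrate how a thousandfold difference can arise; the figures are not data from the paper.

```python
# Pumping power for a steady flow: P = Q * dp
# (Q: flow rate in m^3/s, dp: pressure drop in Pa, P in watts).
Q = 1.0e-6             # 1 mL/s, an arbitrary bench-scale flow rate

dp_membrane = 1.0e5    # ~1 bar across a fine porous membrane (assumed)
dp_channel = 1.0e2     # ~1 mbar along an open, membraneless channel (assumed)

P_membrane = Q * dp_membrane   # 0.1 W
P_channel = Q * dp_channel     # 0.0001 W

print(f"membrane: {P_membrane} W, channel: {P_channel} W")
print(f"ratio: {P_membrane / P_channel:.0f}x")  # -> 1000x, i.e. ~0.1% of the energy
```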
Dr Orest Shardt is a lecturer in fluid mechanics and transport processes in the Bernal Institute at the University of Limerick (Ireland). He obtained his PhD in chemical engineering from the University of Alberta (Edmonton, Canada) in 2014. His thesis examined the behaviour of droplets in liquid mixtures via high performance computing with lattice Boltzmann methods. He was subsequently awarded a postdoctoral fellowship from the Natural Sciences and Engineering Research Council of Canada, which he held at Princeton University (USA) until August 2016. He joined the University of Limerick in September 2016. University of Limerick, Ireland, with more than 13,000 students and 1,300 staff, is an energetic and enterprising institution with a proud record of innovation and excellence in education, research and scholarship. The dynamic, entrepreneurial and pioneering values which drive UL's mission and strategy ensure that it capitalises on local, national and international engagement and connectivity. It is renowned for providing an outstanding student experience and conducting leading-edge research. Its commitment is to make a difference by shaping the future through educating and empowering its students. UL is situated on a superb riverside campus of over 130 hectares with the River Shannon as a unifying focal point. Outstanding recreational, cultural and sporting facilities further enhance this exceptional learning and research environment.


CALGARY, ALBERTA / ACCESSWIRE / May 4, 2017 / Daniel Gundersen and Kingsway Financial Services Inc. ("Kingsway" and, together with Daniel Gundersen, the "Concerned Shareholders") have concerns about the future of Eagle Energy Inc. (TSX: EGL) (OTC PINK: EGRGF) ("Eagle" or the "Company"). In late 2015, Mr. Gundersen negotiated the sale of Maple Leaf Royalties Corp. ("Maple Leaf") to Eagle. He was the CEO of Maple Leaf at the time. In that transaction, Maple Leaf shareholders received 7,141,815 common shares of Eagle, representing 16.7% of the outstanding shares. The Concerned Shareholders currently exercise control or direction over 1,631,254 common shares of Eagle, representing 3.8% of the outstanding shares. On March 13, 2017, Eagle announced a new plan that included increased debt, suspension of its dividend and an expensive and onerous term loan. This announcement forced the Concerned Shareholders to respond. The Concerned Shareholders have expressed their dissatisfaction with the Company's plan and recommended changes that would be in the best interests of shareholders. Their requests were rejected. The Concerned Shareholders believe that the Company is in immediate need of a stronger leadership team. Therefore, the Concerned Shareholders have submitted notice to Eagle that they will nominate four new independent directors for election at the upcoming Annual General Meeting to be held on June 14, 2017 (the "Meeting").

Need for Change at Eagle

The Concerned Shareholders have developed a business plan to address Eagle's current problems and maximize value for shareholders:

1. Replace the Board of Directors - A change of direction is required. We have nominated four highly qualified and talented individuals for election.
2. Reduce Overhead - Immediate steps will be taken to reduce the cost structure of the business.
3. Sell Assets - The most logical and lowest-risk source of capital for Eagle is to sell assets.
4. Reduce Debt - Proceeds from asset sales will be used to reduce debt. By reducing debt, we will lower the risk profile of the company and also decrease interest payment costs.
5. Maximize Value of Remaining Assets - The sale of assets will make Eagle a much simpler entity that would be more attractive to prospective buyers.

The Concerned Shareholders intend to file and disseminate an information circular in due course in order to permit shareholders of Eagle to make an informed decision in advance of the upcoming Meeting. Replacing the current board of Eagle with four independent, aligned and highly qualified business professionals offers shareholders a great opportunity from this point forward. The Concerned Shareholders have the people and a plan to minimize shareholder risk and maximize shareholder value. The proposed nominees of the Concerned Shareholders (the "Nominees") are as follows: Mr. Gundersen has over 20 years of direct oil and gas industry experience. Since February 2016, Mr. Gundersen has been an independent businessman managing oil and gas assets and providing consulting services to industry. From November 2014 to January 2016, he was CEO and a director of Maple Leaf Royalties Corp., a TSX-V listed oil and gas royalties company. From October 2013 to October 2014, he was an independent businessman with active oil and gas interests who also provided consulting services to various industry clients.
From January 2011 to September 2013, he was Vice President, Energy Finance for Sandstorm Metals and Energy Ltd., a TSX-V listed commodities streaming company where $33 million was deployed into oil and gas streaming transactions. From 2008 to 2010, he was the Vice President, Engineering for DeeThree Exploration Ltd., a TSX listed oil and gas exploration and production company. He was Vice President, Engineering at Dual Exploration Inc., a TSX listed oil and gas exploration and production company, from 2005 until the company's sale in 2006. He also held management roles with Cyries Energy Inc., a TSX listed oil and gas exploration and production company, from 2007 to 2008, and Devlan Exploration Inc., a TSX listed oil and gas exploration and production company, from 2002 to 2005. Mr. Gundersen is a professional engineer, a member of APEGA, and a Chartered Financial Analyst (CFA) charterholder. Mr. Fong has had a 23-year career in investments, financial and business analysis, and public markets. From 2012 to September 2016, he was the Director of Equity Capital Markets and Compliance & Disclosure for the TSX Venture Exchange in western Canada. During this period, Mr. Fong was responsible for the strategic direction of the public venture markets and all aspects of operations for the TSX Venture Exchange in western Canada, while at all times ensuring and protecting the integrity of the venture capital markets. Prior to this, Mr. Fong worked for and represented various independent investment dealer firms as the senior regional executive and business leader overseeing retail brokerage, public venture capital investment banking and compliance functions. Mr. Fong is a graduate of the Alberta School of Business at the University of Alberta and a holder of the Chartered Financial Analyst (CFA) designation. Mr. Gilewicz has served as the Chief Financial Officer of Journey Energy Inc. since September 2012. Previously, Mr. Gilewicz served as Chief Financial Officer and Vice President of Finance at Vero Energy Inc. from November 2005 to August 2012. Before that, Mr. Gilewicz served as Vice President of Finance and Chief Financial Officer of Devlan Exploration Inc. and its spinoff company, Dual Exploration Inc., from September 1999 to November 2005. Prior to this, Mr. Gilewicz served as a Senior Manager at Deloitte & Touche LLP. Mr. Gilewicz has served as a director of several publicly traded oil and gas and service companies and has also been the chair of the Finance Committee for the Exploration and Producers Association of Canada. Mr. Gilewicz is a Certified Public Accountant and received his Bachelor of Commerce degree from the University of Saskatchewan. Mr. Porter has over 35 years of diverse oil and gas experience, which began with field operations and progressed to senior levels of management. An independent businessman, he has served as a board member for a number of public and private corporations in both the service and producing sectors of the oil and gas industry. Currently he serves on the boards of Granite Oil Corp. (formerly DeeThree Exploration Ltd.) and Return Energy Inc. (formerly DualEx Energy International Inc.). Prior to founding DeeThree in January 2007, Mr. Porter was Executive Vice President, COO, and Director of Dual Exploration from July 2005 until the sale of the company in December 2006. From 1996 to July 2005, he was Executive Vice President, COO, Director and Secretary of Devlan Exploration Inc.
Prior to founding Devlan, he co-founded Bredal Energy Corp., a private oil and gas company. He served as President and a director of Bredal until its sale in October 2016. Mr. Gundersen has over 20 years of direct oil and gas industry experience. In late 2015, as CEO of Maple Leaf, he negotiated the sale of Maple Leaf to Eagle. Maple Leaf shareholders received 7,141,815 common shares of Eagle, which represents 16.7% of the Company. In October 2016, Mr. Gundersen submitted an offer to Eagle to purchase certain Eagle assets. Eagle declined his offer. Kingsway is a holding company functioning as a merchant bank with a focus on long term value creation. The company owns or controls stakes in several insurance industry assets and utilizes its subsidiaries, 1347 Advisors LLC and 1347 Capital LLC, to pursue opportunities acting as an advisor, an investor and a financier. The common shares of Kingsway are listed on the Toronto Stock Exchange and the New York Stock Exchange under the trading symbol "KFS." The Nominees are Robert Fong, Gerald Gilewicz, Daniel Gundersen, and Bradley Porter. The table below sets out, in respect of each Nominee, his name, province or state and country of residence, his principal occupation, business or employment within the five preceding years, and the number of common shares of Eagle beneficially owned, or controlled or directed, directly or indirectly, by such Nominee. 1. Information set out in the table above has been provided by each Nominee. To the knowledge of the Concerned Shareholders, no Nominee is, at the date hereof, or has been, within ten (10) years before the date hereof: (a) a director, chief executive officer or chief financial officer of any company that (i) was subject to a cease trade order, an order similar to a cease trade order or an order that denied the relevant company access to any exemption under securities legislation that was in effect for a period of more than thirty (30) consecutive days (each, an "order"), in each case that was issued while the Nominee was acting in the capacity as director, chief executive officer or chief financial officer, or (ii) was subject to an order that was issued after the Nominee ceased to be a director, chief executive officer or chief financial officer and which resulted from an event that occurred while that person was acting in the capacity as director, chief executive officer or chief financial officer; (b) a director or executive officer of any company that, while such Nominee was acting in that capacity, or within one (1) year of such Nominee ceasing to act in that capacity, became bankrupt, made a proposal under any legislation relating to bankruptcy or insolvency or was subject to or instituted any proceedings, arrangement or compromise with creditors or had a receiver, receiver manager or trustee appointed to hold its assets; or (c) someone who became bankrupt, made a proposal under any legislation relating to bankruptcy or insolvency, or became subject to or instituted any proceedings, arrangement or compromise with creditors, or had a receiver, receiver manager or trustee appointed to hold the assets of such Nominee.
To the knowledge of the Concerned Shareholders, as at the date hereof, no Nominee has been subject to: (a) any penalties or sanctions imposed by a court relating to securities legislation, or by a securities regulatory authority, or has entered into a settlement agreement with a securities regulatory authority; or (b) any other penalties or sanctions imposed by a court or regulatory body that would likely be considered important to a reasonable security holder in deciding whether to vote for a Nominee. To the knowledge of the Concerned Shareholders, none of the Nominees, nor any associates or affiliates of the Nominees, has any material interest, direct or indirect, in any transaction since the commencement of Eagle's most recently completed financial year or in any proposed transaction which has materially affected or will materially affect Eagle or any of its subsidiaries. The information contained in this press release does not and is not meant to constitute a solicitation of a proxy within the meaning of applicable securities laws. Shareholders are not being asked at this time to execute a proxy in favour of the Nominees. In connection with the Meeting, the Concerned Shareholders intend to file a dissident information circular (the "Information Circular") in due course in compliance with applicable securities laws. Notwithstanding the foregoing, the Concerned Shareholders are voluntarily providing the disclosure required under section 9.2(4) of National Instrument 51-102 - Continuous Disclosure Obligations in accordance with securities laws applicable to public broadcast solicitations. This press release and any solicitation made by the Concerned Shareholders in advance of the Meeting is, or will be, as applicable, made by the Concerned Shareholders, and not by or on behalf of the management of Eagle. All costs incurred for any solicitation will be borne by the Concerned Shareholders, provided that, subject to applicable law, the Concerned Shareholders may seek reimbursement from Eagle of the Concerned Shareholders' out-of-pocket expenses, including proxy solicitation expenses and legal fees, incurred in connection with a successful reconstitution of the Board. The Concerned Shareholders are not soliciting proxies in connection with the Meeting at this time, and shareholders are not being asked at this time to execute proxies in favour of Nominees. Any proxies solicited by the Concerned Shareholders will be solicited pursuant to the Information Circular sent to shareholders of Eagle, after which solicitations may be made by or on behalf of the Concerned Shareholders, by mail, telephone, fax, email or other electronic means, and in person by directors, officers and employees of the Concerned Shareholders or their proxy advisor D.F. King or by the Nominees. Any proxies solicited by the Concerned Shareholders in connection with the Meeting may be revoked by instrument in writing by the shareholder giving the proxy or by its duly authorized officer or attorney, or in any other manner permitted by law and the articles of Eagle. Neither of the Concerned Shareholders nor, to their knowledge, any of their associates or affiliates, has any material interest, direct or indirect, by way of beneficial ownership of securities or otherwise, in any matter proposed to be acted on at the Meeting, other than the election of directors to the Board.
Barrel of Oil Equivalent: Where amounts are expressed on a barrel of oil equivalent ("boe") basis, natural gas volumes have been converted to boe at a ratio of 6,000 cubic feet of natural gas to one barrel of oil equivalent. This conversion ratio is based upon an energy equivalent conversion method primarily applicable at the burner tip and does not represent value equivalence at the wellhead. Boe figures may be misleading, particularly if used in isolation. Eagle's principal business office is 2710, 500 - 4th Avenue S.W., Calgary, Alberta T2P 2V6. A copy of this press release may be obtained on Eagle's SEDAR profile at .


News Article | May 3, 2017
Site: www.eurekalert.org

VIDEO: Extension of continental, marginal and marine environments from ~18.4 to ~10.5 Ma, showing the two marine incursions reported in this study.
A tiny shark tooth, part of a mantis shrimp and other microscopic marine organisms reveal that as the Andes rose, the Eastern Amazon sank twice, each time for less than a million years. Water from the Caribbean flooded the region from Venezuela to northwestern Brazil. These new findings by Smithsonian scientists and colleagues, published this week in Science Advances, fuel an ongoing controversy regarding the geologic history of the region. "Pollen records from oil wells in eastern Colombia and outcrops in northwestern Brazil clearly show two short-lived events in which ocean water from the Caribbean flooded what is now the northwest part of the Amazon basin," said Carlos Jaramillo, staff scientist at the Smithsonian Tropical Research Institute and lead author of the study. "Geologists disagree about the origins of the sediments in this area, but we provide clear evidence that they are of marine origin, and that the flooding events were fairly brief," Jaramillo said. His team dated the two flooding events to between 17 and 18 million years ago and between 16 and 12 million years ago. Several controversial interpretations of the history of the region include the existence of a large, shallow sea covering the Amazon for millions of years, a freshwater megalake, shifting lowland rivers occasionally flooded by seawater, frequent seawater incursions, and a long-lived "para-marine metalake," which has no modern analog. Jaramillo assembled a diverse team from the Smithsonian and the University of Illinois at Urbana-Champaign; Corporacion Geologica Ares; the University of Birmingham; the University of Ghent; the Universidad del Norte, Barranquilla, Colombia; the University of Alberta, Edmonton; the University of Zurich; Ecopetrol, S.A.; Hocol, S.A.; the Royal Netherlands Institute for Sea Research at Utrecht University; the University of Texas of the Permian Basin; and the Naturalis Biodiversity Center. Together, they examined evidence including more than 50,000 individual pollen grains representing more than 900 pollen types from oil drilling cores from the Saltarin region of Colombia and found two distinct layers of marine pollen separated by layers of non-marine pollen types. They also found several fossils of marine organisms in the lower layer: a shark tooth and a mantis shrimp. "It's important to understand changes across the vast Amazonian landscape that had a profound effect, both on the evolution and distribution of life there and on the modern and ancient climates of the continent," Jaramillo said. The Smithsonian Tropical Research Institute, headquartered in Panama City, Panama, is a part of the Smithsonian Institution. The Institute furthers the understanding of tropical nature and its importance to human welfare, trains students to conduct research in the tropics and promotes conservation by increasing public awareness of the beauty and importance of tropical ecosystems. STRI website: http://www. . C. Jaramillo, I. Romero, C. D'Apolito, J. Ortiz. "Miocene flooding events of western Amazonia." Science Advances. Manuscript Number: sciadv.1601693; Smithsonian Tropical Research Institute


Announces positive results from the first phase of a project to develop a functional energy drink at the 16th European Meeting on Supercritical Fluid Technologies held in Lisbon, Portugal
EDMONTON, ALBERTA--(Marketwired - May 4, 2017) - Ceapro Inc. (TSX VENTURE:CZO) ("Ceapro" or the "Company"), a growth-stage biotechnology company focused on the development and commercialization of active ingredients for healthcare and cosmetic industries, announced today the successful completion of the first development phase of a project entitled, "Beta glucan with coenzyme Q10 ("CoQ10"): A novel ingredient for functional beverages." This project included two studies, which were conducted at the University of Alberta by Dr. Feral Temelli's team along with Ceapro researchers. The primary objective of the first study was to find the optimal technical conditions to combine beta glucan and CoQ10, and to characterize the physicochemical properties of the resulting newly formed chemical complex. The second study involved the development and testing of a prototype beverage formulation with the new chemical complex developed in the first study. Positive results were obtained from both studies. The first study resulted in the successful development of a novel water-soluble chemical complex (CoQ10-Beta Glucan) obtained from the utilization of Ceapro's enabling Pressurized Gas eXpanded Technology ("PGX"). The second study resulted in the successful preparation of an appealing prototype beverage formulation that was determined to be well liked by a trained panel. This conclusion comes from a blinded study involving 91 subjects who were asked to compare two formulations, beta glucan alone ("BG") versus CoQ10-impregnated beta glucan ("iBG"), based on several attributes such as appearance, flavor, sweetness, aftertaste, bitterness and thickness. No difference was observed between the two formulations, confirming the successful impregnation of the new complex (iBG) and its uniform dispersion in water. These positive findings open new opportunities for commercialization of CoQ10, which is normally poorly bioavailable and commonly sold in the form of gel capsules or emulsions due to its liposoluble nature. The lipophilic antioxidant CoQ10 is a natural substance present in all human cells that plays a fundamental role during aerobic cellular respiration. While many approaches have attempted to compensate for the depletion of this "energy source" caused by aging, certain diseases and the use of drugs to decrease cholesterol levels (statins), bioavailability remains an issue due to its poor solubility in water and its crystalline nature. Bernhard Seifried, Ph.D., Senior Scientist/Engineer at Ceapro, commented, "We are excited with these results, where we were able to demonstrate for the first time that it is possible to form stable mixtures of coenzyme Q10 nanocrystals in water due to impregnation onto a highly porous water-soluble matrix like beta glucan, which acts as a carrier to potentially enhance its bioavailability and bring this antioxidant compound to the targeted cells. We are pleased with the outcomes of this project, which we believe represents the first potential commercial application from the use of our PGX technology." "We are very encouraged with the positive results from this study.
We believe that the prototype beverage that was developed in this first phase exhibited the potential of iBG as a functional ingredient for incorporation into beverages and may inspire new applications in food products or natural health products. While many health claims have been reported with current commercial formulations, our next phase will be to conduct a bioavailability study in humans in order to support efficacy with solid scientific data. Upon the successful demonstration of an increased concentration of CoQ10 in human blood, we plan to pursue commercialization into this large market in partnership with a multinational company," added Gilles Gagnon, M.Sc., MBA, President and CEO of Ceapro. This project was co-funded by Alberta Innovates Bio Solutions and Ceapro Inc. The Company's patented Pressurized Gas eXpanded (PGX) technology is unique and disruptive, with several key advantages over conventional drying and purification technologies; it can be used to process biopolymers into high-value, nano-sized polymer structures and novel bio-nanocomposites. PGX is ideally suited for processing challenging high-molecular-weight, water-soluble biopolymers. It has the ability to make ultra-light, highly porous polymer structures on a continuous basis, which is not possible using today's conventional technologies. PGX was invented by Dr. Feral Temelli from the Department of Agricultural, Food & Nutritional Science of the University of Alberta (U of A) along with Dr. Bernhard Seifried, now Senior Researcher at Ceapro. The license from U of A provides Ceapro with exclusive worldwide rights in all industrial applications. Ceapro Inc. is a Canadian biotechnology company involved in the development of proprietary extraction technology and the application of this technology to the production of extracts and "active ingredients" from oats and other renewable plant resources. Ceapro adds further value to its extracts by supporting their use in cosmeceutical, nutraceutical and therapeutic products for humans and animals. The Company has a broad range of expertise in natural product chemistry, microbiology, biochemistry, immunology and process engineering. These skills merge in the fields of active ingredients, biopharmaceuticals and drug-delivery solutions. For more information on Ceapro, please visit the Company's website at www.ceapro.com. Neither the TSX Venture Exchange nor its Regulation Service Provider (as that term is defined in the policies of the TSX Venture Exchange) accepts responsibility for the adequacy or accuracy of this release.
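The "no difference" finding from the 91-subject blinded comparison is the outcome a formulator wants here: it suggests the CoQ10 impregnation did not change the sensory profile. Below is a minimal sketch of how one attribute from such a paired comparison might be checked, using made-up scores; the release does not say which statistical test, if any, was actually used:

    # Paired comparison of BG vs. iBG on one sensory attribute (hypothetical data).
    import math, random, statistics

    random.seed(1)
    n = 91
    bg  = [random.gauss(6.5, 1.2) for _ in range(n)]    # beta glucan alone
    ibg = [s + random.gauss(0.0, 0.8) for s in bg]      # CoQ10-impregnated beta glucan

    diffs = [a - b for a, b in zip(ibg, bg)]
    t = statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))
    # |t| below ~1.99 (two-sided, df = 90, alpha = 0.05) means no detectable difference
    print(f"paired t = {t:.2f}")

Each attribute (appearance, flavor, sweetness, aftertaste, bitterness, thickness) would be checked the same way, one test per attribute.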


CALGARY, ALBERTA / ACCESSWIRE / May 4, 2017 / Daniel Gundersen and Kingsway Financial Services Inc. ("Kingsway" and, together with Daniel Gundersen, the "Concerned Shareholders") have concerns about the future of Eagle Energy Inc. (TSX: EGL) (OTC PINK: EGRGF) ("Eagle" or the "Company"). In late 2015, Mr. Gundersen negotiated the sale of Maple Leaf Royalties Corp. ("Maple Leaf") to Eagle. He was the CEO of Maple Leaf at the time. In that transaction, Maple Leaf shareholders received 7,141,815 common shares of Eagle, which represents 16.7% of the outstanding shares. The Concerned Shareholders currently exercise control or direction over 1,631,254 common shares of Eagle, representing 3.8% of the outstanding shares. On March 13, 2017, Eagle announced a new plan that included increased debt, suspension of its dividend and an expensive and onerous term loan. This announcement forced the Concerned Shareholders to respond. The Concerned Shareholders have expressed their dissatisfaction with the Company's plan and recommended changes that would be in the best interests of shareholders. Their requests were rejected. The Concerned Shareholders believe that the Company is in immediate need of a stronger leadership team. Therefore, the Concerned Shareholders have submitted notice to Eagle that they will nominate four new independent directors for election at the upcoming Annual General Meeting to be held on June 14, 2017 (the "Meeting"). Need for Change at Eagle: Share Performance Has Been Dismal - The trading price of Eagle's stock is down 70% since October 1, 2015, while the Energy Index is up 15% in the same time period. Board and Management Are Not Aligned with All Shareholders - They own only 2.1% of the Company. Overhead Costs Are Unsustainably High - Eagle's general and administrative cash expenses are triple those of many of its peers. The New Term Loan is Expensive and Onerous - Interest payments will increase significantly under the new term loan, which has demanding financial covenants. Eagle's New Plan is a High-Risk Plan - Increasing capital expenditures and increasing debt levels increase risk to shareholders. The Concerned Shareholders have developed a business plan to address Eagle's current problems and maximize value for shareholders. 1. Replace the Board of Directors - A change of direction is required. We have nominated four highly qualified and talented individuals for election. 2. Reduce Overhead - Immediate steps will be taken to reduce the cost structure of the business. 3. Sell Assets - The most logical and lowest-risk source of capital for Eagle is to sell assets. 4. Reduce Debt - Proceeds from asset sales will be used to reduce debt. By reducing debt, we will lower the risk profile of the company and also decrease interest payment costs. 5. Maximize Value of Remaining Assets - The sale of assets will make Eagle a much simpler entity that would be more attractive to prospective buyers. The Concerned Shareholders intend to file and disseminate an information circular in due course in order to permit shareholders of Eagle to make an informed decision in advance of the upcoming Meeting. Replacing the current board of Eagle with four independent, aligned and highly qualified business professionals offers shareholders a great opportunity from this point forward. The Concerned Shareholders have the people and a plan to minimize shareholder risk and maximize shareholder value. The proposed nominees of the Concerned Shareholders (the "Nominees") are as follows: Mr. 
Gundersen has over 20 years of direct oil and gas industry experience. Since February 2016, Mr. Gundersen has been an independent businessman managing oil and gas assets and providing consulting services to industry. From November 2014 to January 2016, he was CEO and director of Maple Leaf Royalties Corp., a TSX-V listed oil and gas royalties company. From October 2013 to October 2014, he was an independent businessman with active oil and gas interests, also providing consulting services to various industry clients. From January 2011 to September 2013, he was Vice President, Energy Finance for Sandstorm Metals and Energy Ltd., a TSX-V listed commodities streaming company, where $33 million was deployed into oil and gas streaming transactions. From 2008 to 2010, he was the Vice President, Engineering for DeeThree Exploration Ltd., a TSX listed oil and gas exploration and production company. He was Vice President, Engineering at Dual Exploration Inc., a TSX listed oil and gas exploration and production company, from 2005 until the company's sale in 2006. He also held management roles with Cyries Energy Inc., a TSX listed oil and gas exploration and production company, from 2007 to 2008, and Devlan Exploration Inc., a TSX listed oil and gas exploration and production company, from 2002 to 2005. Mr. Gundersen is a professional engineer, a member of APEGA, and a Chartered Financial Analyst (CFA) charterholder. Mr. Fong has had a 23-year career in investments, financial and business analysis and public markets. From 2012 to September 2016, he was the Director of Equity Capital Markets and Compliance & Disclosure for the TSX Venture Exchange in western Canada. During this period, Mr. Fong was responsible for the strategic direction of the public venture markets and all aspects of operations for the TSX Venture Exchange in western Canada, while at all times ensuring and protecting the integrity of the venture capital markets. Prior to this, Mr. Fong worked for and represented various independent investment dealer firms as the senior regional executive and business leader overseeing retail brokerage, public venture capital investment banking and compliance functions. Mr. Fong is a graduate of the Alberta School of Business at the University of Alberta and is a holder of the Chartered Financial Analyst (CFA) designation. Mr. Gilewicz has served as the Chief Financial Officer of Journey Energy Inc. since September 2012. Previously, Mr. Gilewicz served as Chief Financial Officer and Vice President of Finance at Vero Energy Inc. from November 2005 to August 2012. Previous to that, Mr. Gilewicz served as Vice President of Finance and Chief Financial Officer of Devlan Exploration Inc. and its spinoff company, Dual Exploration Inc., from September 1999 to November 2005. Prior to this, Mr. Gilewicz served as a Senior Manager at Deloitte & Touche LLP. Mr. Gilewicz has served as a director of several publicly traded oil and gas and service companies and has also been the chair of the Finance Committee for the Exploration and Producers Association of Canada. Mr. Gilewicz is a Certified Public Accountant and received his Bachelor of Commerce degree from the University of Saskatchewan. Mr. Porter has over 35 years of diverse oil and gas experience which began with field operations and progressed to senior levels of management. 
An independent businessman, he has served as a board member for a number of public and private corporations in both the service and producing sectors of the oil and gas industry. Currently he serves on the boards of Granite Oil Corp. (formerly DeeThree Exploration Ltd.) and Return Energy Inc. (formerly DualEx Energy International Inc.). Prior to founding DeeThree in January 2007, Mr. Porter was Executive Vice President, COO, and Director of Dual Exploration from July 2005 until the company's sale in December 2006. From 1996 to July 2005, he was Executive Vice President, COO, Director and Secretary of Devlan Exploration Inc. Prior to founding Devlan, he co-founded Bredal Energy Corp., a private oil and gas company. He served as President and a director of Bredal until its sale in October 2016. Mr. Gundersen has over 20 years of direct oil and gas industry experience. In late 2015, as CEO of Maple Leaf, he negotiated the sale of Maple Leaf to Eagle. Maple Leaf shareholders received 7,141,815 common shares of Eagle, which represents 16.7% of the Company. In October 2016, Mr. Gundersen submitted an offer to Eagle to purchase certain Eagle assets. Eagle declined his offer. Kingsway is a holding company functioning as a merchant bank with a focus on long-term value creation. The company owns or controls stakes in several insurance industry assets and utilizes its subsidiaries, 1347 Advisors LLC and 1347 Capital LLC, to pursue opportunities acting as an advisor, an investor and a financier. The common shares of Kingsway are listed on the Toronto Stock Exchange and the New York Stock Exchange under the trading symbol "KFS." The Nominees are Robert Fong, Gerald Gilewicz, Daniel Gundersen, and Bradley Porter. The table below sets out, in respect of each Nominee, his name, province or state and country of residence, his principal occupation, business or employment within the five preceding years, and the number of common shares of Eagle beneficially owned, or controlled or directed, directly or indirectly, by such Nominee. Name, Province or State and Country of Residence(1) Present Principal Occupation, Business or Employment and Principal Occupation, Business or Employment During the Preceding Five Years Number of Common Shares Beneficially Owned or Controlled or Directed (Directly or Indirectly) Robert Fong Alberta, Canada Director of Equity Capital Markets and Compliance & Disclosure at the TSX Venture Exchange from October 2012 to September 2016, Vice President & Sales Manager at Union Securities Ltd. from October 2010 to October 2012 13,000 Gerald Gilewicz Alberta, Canada Chief Financial Officer of Journey Energy Inc. from September 2012 to Present Chief Financial Officer of Vero Energy Inc. from October 2005 to September 2012 23,484 Daniel Gundersen Alberta, Canada Chief Executive Officer of Peace Energy Inc. from February 2016 to Present and from October 2013 to October 2014, Chief Executive Officer and Director of Maple Leaf Royalties Corp. from November 2014 to January 2016, Vice President, Energy Finance of Sandstorm Metals & Energy Ltd. from January 2011 to September 2013 515,254 Bradley Porter Alberta, Canada President of HighRange Capital Corporation from January 2007 to Present 42,532 1. Information set out in the table above has been provided by each Nominee. 
To the knowledge of the Concerned Shareholders, no Nominee is, at the date hereof, or has been, within ten (10) years before the date hereof: (a) a director, chief executive officer or chief financial officer of any company that (i) was subject to a cease trade order, an order similar to a cease trade order or an order that denied the relevant company access to any exemption under securities legislation that was in effect for a period of more than thirty (30) consecutive days (each, an "order"), in each case that was issued while the Nominee was acting in the capacity as director, chief executive officer or chief financial officer, or (ii) was subject to an order that was issued after the Nominee ceased to be a director, chief executive officer or chief financial officer and which resulted from an event that occurred while that person was acting in the capacity as director, chief executive officer or chief financial officer; (b) a director or executive officer of any company that, while such Nominee was acting in that capacity, or within one (1) year of such Nominee ceasing to act in that capacity, became bankrupt, made a proposal under any legislation relating to bankruptcy or insolvency or was subject to or instituted any proceedings, arrangement or compromise with creditors or had a receiver, receiver manager or trustee appointed to hold its assets; or (c) someone who became bankrupt, made a proposal under any legislation relating to bankruptcy or insolvency, or became subject to or instituted any proceedings, arrangement or compromise with creditors, or had a receiver, receiver manager or trustee appointed to hold the assets of such Nominee. To the knowledge of the Concerned Shareholders, as at the date hereof, no Nominee has been subject to: (a) any penalties or sanctions imposed by a court relating to securities legislation, or by a securities regulatory authority, or has entered into a settlement agreement with a securities regulatory authority; or (b) any other penalties or sanctions imposed by a court or regulatory body that would likely be considered important to a reasonable security holder in deciding whether to vote for a Nominee. To the knowledge of the Concerned Shareholders, none of the Nominees, nor any associates or affiliates of the Nominees, has any material interest, direct or indirect, in any transaction since the commencement of Eagle's most recently completed financial year or in any proposed transaction which has materially affected or will materially affect Eagle or any of its subsidiaries. The information contained in this press release does not and is not meant to constitute a solicitation of a proxy within the meaning of applicable securities laws. Shareholders are not being asked at this time to execute a proxy in favour of the Nominees. In connection with the Meeting, the Concerned Shareholders intend to file a dissident information circular (the "Information Circular") in due course in compliance with applicable securities laws. Notwithstanding the foregoing, the Concerned Shareholders are voluntarily providing the disclosure required under section 9.2(4) of National Instrument 51-102 - Continuous Disclosure Obligations in accordance with securities laws applicable to public broadcast solicitations. This press release and any solicitation made by the Concerned Shareholders in advance of the Meeting is, or will be, as applicable, made by the Concerned Shareholders, and not by or on behalf of the management of Eagle. 
All costs incurred for any solicitation will be borne by the Concerned Shareholders, provided that, subject to applicable law, the Concerned Shareholders may seek reimbursement from Eagle of the Concerned Shareholders' out-of-pocket expenses, including proxy solicitation expenses and legal fees, incurred in connection with a successful reconstitution of the Board. The Concerned Shareholders are not soliciting proxies in connection with the Meeting at this time, and shareholders are not being asked at this time to execute proxies in favour of Nominees. Any proxies solicited by the Concerned Shareholders will be solicited pursuant to the Information Circular sent to shareholders of Eagle, after which solicitations may be made by or on behalf of the Concerned Shareholders, by mail, telephone, fax, email or other electronic means, and in person by directors, officers and employees of the Concerned Shareholders or its proxy advisor D.F. King or by the Nominees. Any proxies solicited by the Concerned Shareholders in connection with the Meeting may be revoked by instrument in writing by the shareholder giving the proxy or by its duly authorized officer or attorney, or in any other manner permitted by law and the articles of Eagle. Neither of the Concerned Shareholders nor, to their knowledge, any of their associates or affiliates, has any material interest, direct or indirect, by way of beneficial ownership of securities or otherwise, in any matter proposed to be acted on at the Meeting, other than the election of directors to the Board. Barrel of Oil Equivalent: Where amounts are expressed on a barrel of oil equivalent ("boe") basis, natural gas volumes have been converted to boe at a ratio of 6,000 cubic feet of natural gas to one barrel of oil equivalent. This conversion ratio is based upon an energy equivalent conversion method primarily applicable at the burner tip and does not represent value equivalence at the wellhead. Boe figures may be misleading, particularly if used in isolation. Eagle's principal business office is 2710, 500 - 4th Avenue S.W., Calgary, Alberta T2P 2V6. A copy of this press release may be obtained on Eagle's SEDAR profile at www.sedar.com.
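The boe disclosure above is a fixed energy-equivalence conversion, and the arithmetic is simple enough to sketch; the function name and example volumes below are illustrative only:

    # Convert natural gas volumes to barrels of oil equivalent (boe) at the
    # disclosed ratio of 6,000 cubic feet (6 mcf) of gas per boe.
    MCF_PER_BOE = 6.0

    def gas_to_boe(gas_mcf: float) -> float:
        """Gas volume in thousand cubic feet (mcf) to boe."""
        return gas_mcf / MCF_PER_BOE

    # Example: 1,200 mcf of gas plus 300 bbl of oil
    total_boe = gas_to_boe(1200.0) + 300.0
    print(f"{total_boe:.0f} boe")  # 500 boe

As the disclosure notes, this is an energy equivalence at the burner tip, not a statement of value equivalence at the wellhead.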




Air Canada is participating in the Civil Aviation Alternate Fuel Contrail and Emissions Research project (CAAFCER), a research project led by the National Research Council of Canada (NRC) to test the environmental benefits of biofuel use on contrails. This project will use advanced sensing equipment mounted on a research aircraft operated by the NRC to measure the impact of biofuel blends on contrail formation by aircraft on five biofuel flights operated by Air Canada between Montreal and Toronto in the coming days, weather permitting. During these flights the NRC will trail the Air Canada aircraft with a modified T-33 research jet to sample and test the contrail biofuel emissions. The sustainable biofuel is produced by AltAir Fuels from used cooking oil and supplied by SkyNRG. A reduction in the thickness and coverage of contrails produced by the jet engines of aircraft could reduce aviation's impact on the environment, an important beneficial effect of sustainable biofuel usage in aviation. This project involves six stakeholder organizations, with primary funding from the Green Aviation Research and Development Network (GARDN), a non-profit organization funded by the Business-Led Networks of Centres of Excellence of the Government of Canada and the Canadian aerospace industry. The project has further financial support from the NRC and the enabling support of Air Canada ground and flight operations. In addition to Air Canada, other CAAFCER partners include (in alphabetical order) Boeing, National Research Council Canada (NRC), SkyNRG, University of Alberta, and Waterfall. Since 2009, GARDN has built a portfolio of more than 30 R&D projects to reduce the environmental impact of a new generation of engines, structures and systems by the aerospace industry. To reduce its own emissions Air Canada has adopted a four-pillar strategy that includes: the use of new technology; improved operations; infrastructure changes; and the use of economic instruments. One example is Air Canada's participation as an airline partner in Canada's Biojet Supply Chain Initiative (CBSCI), a three-year collaborative project begun in 2015 with 14 stakeholder organizations to introduce 400,000 liters of sustainable aviation biofuel (biojet) into the shared fuel system at Montreal airport. The CBSCI project is a first in Canada and is aimed at creating a sustainable Canadian supply chain of biojet using renewable feedstocks. In 2012 Air Canada operated two biofuel flights: one between Toronto and Mexico City as part of a series of commercial biofuel flights that took the secretary general of ICAO to the United Nations Conference on Sustainable Development held in Rio de Janeiro; the second transported a number of Olympic athletes and officials on their way to the London 2012 Olympic Games. In 2016 Air Canada continued taking delivery of the Boeing 787 Dreamliner. Initial results show these aircraft are delivering approximately 20% improvement in efficiency over the aircraft they replaced. Air Canada plans to introduce 37 of these new aircraft in the coming years. In addition, later this year, it will acquire up to 79 new Boeing 737 Max aircraft, expected to yield a 14% decrease in fuel use compared with the airline's current narrow-body aircraft. 
The aircraft investments represent a commitment of more than $11 billion at list prices. Air Canada has achieved a 40% improvement in average fuel efficiency between 1990 and 2016.
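Taken at face value, a 40% efficiency gain over the 26 years from 1990 to 2016 works out to a compounded improvement of roughly 1.3% per year. A quick back-of-envelope check (my arithmetic, not a figure from the article):

    # Average annual fuel-efficiency gain implied by a 40% improvement, 1990-2016.
    years = 2016 - 1990               # 26 years
    overall = 1.40                    # 40% total improvement
    annual = overall ** (1 / years) - 1
    print(f"~{annual:.1%} per year")  # ~1.3% per year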


News Article | May 5, 2017
Site: www.marketwired.com

EDMONTON, AB--(Marketwired - May 05, 2017) - Innovative, technology-focused Alberta entrepreneurs took centre stage on Thursday night at the 15th VenturePrize Awards Celebration. VenturePrize awards the province's most promising early-stage technology ventures. In addition to emerging companies, the event showcased just how far the culture of entrepreneurship in Alberta has come in 15 years. "In 2002, some leaders in Edmonton looked at the technology clusters in Silicon Valley and Boston and said, 'why not here?'" said Allan Scott, Co-Founder of TEC Edmonton and VenturePrize. "The rest is history. Congratulations to all the winners and participants of this year's awards." vrCAVE strives to make virtual reality an immersive social experience by creating collaborative VR games. When standing inside a vrCAVE unit, players can touch, see, and hear each other while freely interacting with the shared virtual world. vrCAVE partners with entertainment businesses to share its unique multiplayer virtual reality experiences, eliminating the need for individuals to own their own VR equipment. Tevosol, Inc. is a medical device company developing the Ex-Vivo Organ Support System (EVOSS™) with the goal of increasing the number of organs available for transplantation. Tevosol's unique innovation keeps a donated organ warm and supplied with oxygen as if it is still in the body, and extends its "ex-vivo," or out-of-body, life. The device revives organs deemed unsuitable and allows assessment of transplant suitability, which has the potential to double or triple the number of available donor organs worldwide. Preza has created a temperature monitoring system for the restaurant industry using the latest developments in wireless technology. The system has a one-step installation process using a direct connection to existing Wi-Fi, and the software is tailored specifically for restaurants, offering a daily summary report and customizable alerts. Squire aims to recapture the lost benefits of a secondary school environment by making every post-secondary campus, regardless of size, feel like a small classroom. Squire's solution is an online, mobile and web-based platform where students can create virtual study tables to work with their classmates, find tutors and on-demand help through video calls, and receive ongoing assistance directly from professors. Ceres Solutions takes an innovative and environmentally conscious approach to adding value to the tons of spent grain produced every day in Alberta as a byproduct of the beer brewing process. Ceres uses the brewer's spent grain as a substrate to grow gourmet mushrooms. The process results in a high-quality mushroom as well as a grain with crude protein content increased by up to 180 per cent, resulting in a high-value livestock feed. The jurors' Screener's Award of Merit went to Good Glucos. A surprise element of the evening was the introduction of VentureKids, a celebration of Alberta entrepreneurs under 16 years old. Kaitlyn Coen, Sophia Fairweather, and JR Wikkerink of Screamin Brothers were each presented with an award of recognition and $1,000 to put towards their future education. It was also announced that this year's competition would be the last. "VenturePrize started 15 years ago, when the Edmonton region had much less public focus on entrepreneurship," said TEC Edmonton CEO Chris Lumb. "We ultimately decided to deliver the same services offered through VenturePrize in a different way. 
The program has had a great run, and would not have been possible without the support of our passionate volunteers and sponsors over the years."
About TEC Edmonton
TEC Edmonton is a business accelerator that helps emerging technology companies grow successfully. As a joint venture of the University of Alberta and Edmonton Economic Development Corporation, TEC Edmonton operates the Edmonton region's largest accelerator for early-stage technology companies, and also manages commercialization of University of Alberta technologies. TEC Edmonton delivers services in four areas: Business Development, Funding and Finance, Technology Management, and Entrepreneur Development. Since 2011, TEC clients have generated $680M in revenue, raised $350M in financing and funding, invested $200M in R&D, grown both revenue and employment by 25 per cent per year, and now employ over 2,400 people in the region. In addition, TEC has assisted in the creation of 22 spinoff companies from the University of Alberta in the last four years. TEC Edmonton was named the 4th-best university business incubator in North America by the University Business Incubator (UBI) Global Index in 2015, and "Incubator of the Year" by Startup Canada in 2014. For more information, visit www.tecedmonton.com.
About VenturePrize
VenturePrize ran for 15 years as an annual business plan competition that showcased Alberta's top innovative companies. Over 100 volunteers every year have helped make VenturePrize possible, serving in such roles as mentors, screeners, judges and seminar presenters. The total value of prizes since 2002 is estimated at $2.5 million, and over 1,000 companies have competed. Organizations that provide administrative support for VenturePrize include Innovate Calgary, NAIT, Business Link, TELUS and DynaLIFE. Learn more at www.ventureprize.com.


News Article | April 26, 2017
Site: www.prweb.com

6Connex, the trusted choice for enterprise organizations seeking to create engaging digital experiences through virtual events and environments, today announces that Hootsuite, the most widely used platform for managing social media, will be hosting the largest free online social media conference of its kind, Connect via Hootsuite, on the 6Connex platform. The global event will bring together social innovators, inspiring brands, and industry experts to show how companies can increase reach and grow revenue through social marketing and social selling. The Connect via Hootsuite conference on May 3rd will run entirely on the 6Connex Virtual Destinations platform, known for being the most flexible, configurable and scalable virtual event software on the market. Fully HTML5, the platform will be accessible to thousands of marketers from any browser, on any device. Hootsuite selected 6Connex to raise the bar for their online conference series. The front-end user experience was a top priority, as was a back-end toolset that included a robust integration with Marketo, and more options for staff and partners. "Connect via Hootsuite will help attendees better develop strategies for leveraging social to grow their business," said Craig Ryomoto, VP Growth at Hootsuite. "We chose the 6Connex platform to give attendees an optimal user experience while connecting with peers and industry experts." "With more than 15 million users worldwide, Hootsuite is a true pioneer in social media management," said Mike Nelson, CEO of 6Connex. "As an industry innovator, their trust in our platform speaks volumes. We look forward to making the May 3rd event their most successful online social media conference yet." This year's Connect via Hootsuite conference, "From Reach to Revenue: Profiting From Social," offers exclusive content such as access to social media reports, guides and case studies, as well as expert training through live presentations and virtual booths hosted by top industry brands. In addition, attendees can connect with social media pros from around the world. The conference opens May 3rd at 5 a.m. PT and runs to 9:30 p.m. PT. To learn more or to register, visit https://hootsuite.6connex.com/event/CVH/login. To learn more about the 6Connex Virtual Destinations platform, please visit http://www.6connex.com/ or contact lisa.farrell(at)6connex(dot)com.
ABOUT 6CONNEX
6Connex powers virtual events and environments for career fairs, employee onboarding, user conferences, corporate universities and other applications. The 100-percent SaaS virtual destination platform is backed by a team of passionate virtual event experts, dedicated to both market innovation and customer success. More than 100 organizations rely on 6Connex, including Intuit, GE, Economist, Salesforce, Hootsuite, Ericsson, CDC, the American Association of Medical Schools, and the University of Alberta. 6Connex is based in Pleasanton, CA, with offices in London and Shanghai.


News Article | May 5, 2017
Site: www.eurekalert.org

Supplements containing arsenic have been banned in the European Union since 1999 and in North America since 2013. In many countries, however, they are still added to poultry feed to prevent parasitic infection and promote weight gain. In the journal Angewandte Chemie, scientists have now demonstrated that the danger to human health may be greater than previously thought, because the metabolic breakdown of these compounds in chickens occurs via intermediates that are significantly more toxic than the initial additives. Roxarsone (3-nitro-4-hydroxyphenylarsonic acid, "Rox") is a common feed supplement that is only slightly toxic to the animals that have been tested. However, we do not yet have enough knowledge about which arsenic-containing metabolites are found in treated chickens and what risks these pose to human health. The toxicity of arsenic-containing species depends strongly on the type of compound and can vary by orders of magnitude. In a controlled feeding study of 1,600 chickens, a team headed by Bin Hu at Wuhan University in China and X. Chris Le at the University of Alberta in Canada analyzed liver samples from birds treated with Rox. Previously, these researchers found a number of different arsenic-containing species in chicken livers, breast meat, and waste. By using various mass spectrometric and chromatographic methods, they have now been able to identify three additional compounds. These compounds are Rox derivatives that have an additional methyl group (-CH3) on their arsenic atom. The three methylated compounds make up about 42% of the total arsenic compounds found in the chicken livers. What causes this methylation? The researchers point to the enzyme arsenic methyltransferase (As3MT), which is also involved in the human metabolism of arsenic. However, this enzyme only methylates trivalent arsenic, whereas Rox and its derivatives contain arsenic in its pentavalent form. Tests with reduced versions of Rox have shown that the breakdown of Rox occurs via trivalent intermediates. Tests with cell cultures have shown that these species are 300 to 30,000 times as toxic as Rox derivatives with pentavalent arsenic. It remains to be determined whether, and at what concentrations, these highly toxic intermediates occur in treated chickens. In the poultry industry, Rox supplementation is usually halted five days before slaughter. Liver samples taken after this interval still contained residues of arsenic compounds at a concentration that, at least if the chicken liver is consumed, could be alarming. The researchers recommend an assessment of the extent of human exposure to various arsenic compounds to determine whether feed containing arsenic is more problematic for human health than previously thought. Dr. Chris Le is Distinguished University Professor and Director of the Analytical and Environmental Toxicology Division at the University of Alberta. He is a Fellow of the Royal Society of Canada (Academy of Science) and Canada Research Chair in Bio-analytical Technology and Environmental Health. Reference: Hanyong Peng et al., "Methylated Phenylarsenical Metabolites Discovered in Chicken Liver," Angewandte Chemie International Edition (2017). DOI: 10.1002/anie.201700736
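Because relative toxicity across arsenic species spans orders of magnitude, exposure assessments typically weight each species' concentration by a toxicity factor rather than summing raw concentrations. The sketch below shows the shape of that calculation with entirely hypothetical numbers (the study reports relative toxicities, not these concentrations):

    # Toxicity-weighted exposure: concentration x relative toxicity, summed.
    # All values are hypothetical illustrations, not data from the study.
    species = {
        # name: (concentration ng/g, toxicity relative to pentavalent Rox)
        "Rox (pentavalent)":          (10.0, 1.0),
        "methylated Rox (pentaval.)":  (7.0, 1.0),
        "trivalent intermediate":      (0.05, 3000.0),  # within the 300-30,000x range
    }

    weighted = sum(conc * tox for conc, tox in species.values())
    print(f"toxicity-weighted exposure: {weighted:.0f} (arbitrary units)")
    # Even at 0.05 ng/g, the trivalent intermediate dominates the weighted total.

This is why trace amounts of the trivalent intermediates could matter far more than the bulk pentavalent residues, as the article argues.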


News Article | April 26, 2017
Site: motherboard.vice.com

Given that caribou are herbivores, it seems possible that populations in the Arctic might actually benefit from climate change: there'd be more green stuff around for them to eat. Well, no. A team of researchers has released a new study finding that the changes to vegetation that come with melted sea ice seem to put the animals' food sources in danger. Lead researcher Per Fauchald, research manager at the Norwegian Institute for Nature Research, said he and his team originally thought caribou could stand to benefit from a greening Arctic. In fact, caribou populations may very well decline in the face of climate change, as they'll face increasing pressures. It's bad news for the Indigenous communities that rely on them. Animation of annual sea ice concentration, summer greening on the Arctic tundra and the development of caribou populations from 1982-2011. Herd population fluctuations over time are depicted by growing and shrinking caribou symbols. Image: Per Fauchald. Caribou are herbivores, feeding on grass, herbs and lichen. A greener Arctic would theoretically mean more food, he said in a Skype interview. "We found that the opposite was happening." The team studied over 35 years' worth of population data on 11 herds of caribou moving across North America, from Alaska to Labrador. They found no evidence of large herds eating the vegetation supplied by a greener Arctic. In fact, declining sea ice is connected to more summer pasture plant growth, and maybe a drop in caribou populations, possibly because different kinds of shrubs are now spreading, with toxins that caribou typically avoid. Mark Boyce, an ecology professor at the University of Alberta who was not involved in this new research, published a paper in 2009 on the global decline of caribou and reindeer. "Caribou numbers go up and down, and that's always happened," he said in a phone interview. "But, what's really disturbing about the recent trend, over the last few decades or so, is there are declining populations everywhere and there are very few herds anywhere that are actually increasing." "'Shrubification' is happening," Fauchald said of the changes now taking place to Arctic vegetation. This is where small shrubs are expanding into the tundra, while the grass and herbs that have historically grown there are being taken over. Boyce has studied the socioeconomic impact of this loss on northern Indigenous cultures, for whom the caribou hunt is important. "[Some] Inuit peoples, as well as northern First Nations, really depend on caribou," said Boyce. "It's their lifeblood for meat and they use the hides." In 2016, Nunavut Arctic College Media published a book about Inuit Elders observing climate change. The book, titled The Caribou Taste Different Now, details how climate change has impacted local food sources. In excerpts in Nunatsiaq Online, Inuit Elders said that caribou are disappearing. Where the animals can still be found, they appear skinnier, like they're sick. "This is the problem with climate change," said Fauchald, citing the scarcity of data around its impacts on caribou. "It's hard to predict the future because we've never seen anything like this before."
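The core of the analysis is an association between herd trajectories and environmental indices over roughly 35 years. A bare-bones sketch of that kind of test on synthetic data follows; the actual study covered 11 herds and used far more careful methods than a single correlation:

    # Correlate a synthetic herd-size series with a synthetic greening index.
    import random

    random.seed(7)
    years = range(1982, 2012)
    greening = [0.30 + 0.01 * (y - 1982) + random.gauss(0, 0.02) for y in years]
    herd = [100_000 - 1_500 * (y - 1982) + random.gauss(0, 5_000) for y in years]

    def pearson(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = sum((x - mx) ** 2 for x in xs)
        vy = sum((y - my) ** 2 for y in ys)
        return cov / (vx * vy) ** 0.5

    print(f"r(greening, herd) = {pearson(greening, herd):.2f}")  # strongly negative here

A negative correlation like this is consistent with, but does not by itself prove, the study's finding that greening did not help the herds.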


News Article | May 4, 2017
Site: news.yahoo.com

FILE - In this Aug. 22, 2015, file photo, volunteers learn to deploy fire shelters with practice equipment after a call out by fire officials seeking to supplement their usual resources in Omak, Wash. Wildland firefighters will have to wait at least an extra year before getting better fire shelters than the ones that failed to save the 19 Arizona firefighters. The U.S. Forest Service's expedited attempt following those deaths to have an upgraded shelter later this year is being pushed back a year after nearly 100 prototypes couldn't outperform the current shelter developed in 2002. (AP Photo/Elaine Thompson, File) BOISE, Idaho (AP) — Crews who battle wildfires will have to wait at least another year before getting better fire shelters than those that failed to save 19 firefighters trapped by flames in Arizona four years ago, officials told The Associated Press on Thursday. The deaths pushed the U.S. Forest Service to speed up work to get an upgraded shelter in place this year, but the effort has been delayed a year after prototypes could not outperform the shelter developed in 2002. It comes as firefighters are facing more destructive wildfires and struggling to protect more homes being built in or near remote areas. "The reason there isn't (a new shelter) is because there were no great options to choose from," said Tony Petrilli, fire shelter project leader for the U.S. Forest Service at the Missoula Technology and Development Center in Montana. Petrilli escaped serious injury or death by getting into a fire shelter as flames roared past on Colorado's Storm King Mountain in 1994. His elation at emerging as a survivor didn't last long. Within minutes, he was among the first to find the bodies of some of the 14 firefighters whose fire shelters didn't save them. His radio message reporting the deaths rattled federal agencies and led to the development of the 2002 shelter. The Forest Service wants to replace that shelter following the 2013 deaths in Arizona. Those two fires are among the deadliest for wildland firefighters in U.S. history and the worst since fire shelters became mandatory in 1977. But the effort faces serious setbacks, and the agency says it won't meet its December deadline to create an upgraded shelter for the 2018 fire season despite help from NASA, research universities and private companies. After spending roughly $200,000 to $500,000 on the program, it's possible the 2002 shelter will stay the standard, the Forest Service said. "We're not having a whole lot of success," said Mark Ackerman, a former academic at the University of Alberta in Edmonton, Alberta. He helped develop the 2002 shelter and is helping the government test new designs to replace it. "Based on what we're seeing now, I don't think the game-changer is there." But several promising materials that showed up recently remain, and the extra year will allow time for testing, Petrilli said. Scientists need to create a shelter that can repel radiant heat, which is felt standing near flames, and convective heat, felt if you put a hand into the fire. Today's shelters reflect 95 percent of radiant heat, and firefighters have survived in them for an hour with brief exposure to direct flames. The challenge is making them last as fire burns around them. The shelter Petrilli used in 1994 could last only seconds in direct flames. The current 4.5-pound (2-kilogram) shelter with an aluminum foil-woven silica outer shell can withstand direct flames and 2,000 degrees Fahrenheit (1,090 degrees Celsius) for about a minute. 
The Forest Service says it could save lives if it can create a shelter of equal weight that can withstand those conditions longer. Ackerman has aimed for two minutes, but prototypes have not come close to that. "What we're talking about is buying a little more time," he said. Based on several years of testing, he estimated that a shelter with double the protection would be double the weight because of the added insulation. That's a non-starter with frontline firefighters. A survey at the start of the development process found that less than 5 percent wanted to carry a heavier, more protective shelter. About half wanted a lighter shelter with the same protection, and most of the rest wanted a shelter of similar weight with better protection. Shawna Legarza, director of the Forest Service's National Fire and Aviation Management Program, who spent 20 years battling wildfires, said putting more weight on firefighters, who already carry 40 or 50 pounds, is not an option. Instead, federal officials have been working to reduce wildfire risks. Former Interior Secretary Sally Jewell urged homeowners who build in wild areas to create rock walls, green grass extending from homes, or rock lawns to stop approaching flames. "I think we are making some progress across the nation in people making their homes defensible," Legarza said. "If we have to respond to a wildfire, we're only going to do it when we can be successful. Our Number 1 thing is our employees." One of them, Missoula, Montana-based smokejumper Dan Cottrell, said safety considerations have increased since he started fighting fires in 1995. He said carrying a fire shelter is worth the extra weight. "People are pretty aware of their limitations, but it's better than nothing," he said.


News Article | March 30, 2017
Site: www.csmonitor.com

This is an artist's illustration of the dinosaur Daspletosaurus horneri, based on the distribution of texture on the facial bones as described in a new study published in the journal Scientific Reports. According to that research, the face of tyrannosaurs was covered by an extensive mask of large, flat scales, with regions of armor-like skin on the snout, jaws, and ornamental horns. The large horn behind the eye was covered by horn, the same material that makes up human fingernails. The small bumps on the flat scales are Integumentary Sensory Organs (ISOs), as seen in crocodylians, where they provide extreme tactile sensitivity. The skull is 895 millimeters long. Before Tyrannosaurus rex terrorized Cretaceous North America, another frightful lizard ruled Montana. Some 75 million years ago, Daspletosaurus horneri, at nearly 30 feet long and over seven feet tall, dominated the landscape. And although D. horneri doesn't have the movie-star reputation of its younger, larger cousin, T. rex, its bones could supply revolutionary pieces of the puzzle of dinosaur evolution and our picture of the tyrannosaurid family. "With this dinosaur, we've literally changed the face of tyrannosaurs," paleontologist Thomas Carr says of D. horneri. Dr. Carr and colleagues don't just describe and name the new dinosaur species in a paper published Thursday in the journal Scientific Reports. They also investigated what the face of these beasts may have looked like in life: blood, flesh, scales and all. And it turns out, these tyrannosaurs may have had patches of armor-like skin across their faces, horns covered in a hard, protective layer much like fingernails or bird beaks, and a highly sensitive snout protected by flat scales. This would have made their snouts a lot like those of dinosaurs' close relatives: crocodilians, Carr and his coauthors say. In crocodiles, sensitive snouts help them sense tiny vibrations in murky waters or measure the temperature of nests. "Crocodiles' faces are as sensitive as human fingertips," Carr, who is director of the Carthage Institute of Paleontology and a professor at Carthage College in Kenosha, Wis., says in a phone interview with The Christian Science Monitor. "Basically their entire face is a fingertip. And we are floating the hypothesis that tyrannosaurs were no different." It's not just the faces of dinosaurs that D. horneri might be changing. It could also help reshape biologists' ideas of how evolution works. That dinosaur was first mentioned in scientific literature in 1992, and eventually became known among paleontologists by the nickname 'Two Medicine tyrannosaurine,' for the site where it was found. But D. horneri wasn't officially named as a species until now, 25 years later. The animal's bones had been shelved. When Carr began to examine the bones, the first task was to find out whether this was indeed a new species of dinosaur or a transitional form between D. horneri's closest relative, Daspletosaurus torosus, and T. rex. D. horneri is in fact distinct enough in its morphology to be a new species, Carr and his colleagues say. So they gave it a species name. The name Daspletosaurus horneri, or "Horner's Frightful Lizard", honors famed American paleontologist John R. "Jack" Horner. But that presents a new dilemma. D. horneri is younger than D. torosus, but the two are quite closely related, likely roamed the same lands (D. torosus was unearthed in Alberta), and lived perhaps just 100,000 years apart – geologically "a blink of an eye," Carr says. 
Scientists think the most common mechanism for speciation is cladogenesis, in which a new species branches off from an ancestral species, typically due to geographic isolation or some other force. But Carr and his colleagues think this is a case of anagenesis, in which one species simply evolves into another, without branching. "That's not easy to show in the fossil record," Lawrence Witmer, a paleontologist at Ohio University who was not involved in the research, says in a phone interview with the Monitor. But Dr. Witmer thinks Carr and his team made the case well. Still, cautions Hans-Dieter Sues, curator and chair of vertebrate paleontology at the National Museum of Natural History of the Smithsonian Institution, who also was not part of the research team, "The fossil record for most animals, especially dinosaurs, is far too meager to confidently make inferences regarding anagenesis." "The authors' interpretation is plausible but difficult to test rigorously unless there are much larger samples," he writes in an email to the Monitor. That may be changing, says Philip Currie, a paleontologist at the University of Alberta who was not involved in the research. "There wasn't a lot that could be said before about anagenesis. But now certainly there's enough tyrannosaur specimens that have been found, and especially this group of things from Montana that seem to be different from the ones in Alberta," he says in a phone interview with the Monitor. And "the radiometric dating has improved so much so you can be a lot more specific about what level these things are coming from." And it's not just tyrannosaurs, Dr. Currie says. Some paleontologists are pointing to fossil evidence of anagenesis in other families of dinosaurs, like hadrosaurs and ceratopsian dinosaurs. "Most people in recent years have tended to think that cladogenesis is the way that evolution tends to work most of the time," Currie says. But research is beginning to show that that might not be the case. "We think it's worth investigating just how widespread anagenesis might be," Carr says. "We really don't know." To help with that, Carr and his colleagues hope their evidence suggesting D. horneri might be the product of anagenesis could help shape criteria for other paleontologists to look for in the fossil record. Does D. horneri really suggest that tyrannosaurs had crocodile-like faces? Carr's investigation of tyrannosaur facial flesh began with the bones of D. horneri. The tyrannosaur expert and his team compared the texture on the dinosaur's skull bones with the bones of other animals and found that the roughness on D. horneri matched the roughness on crocodilian snout bones. Because the skin overlying the bones in crocodilians actually changes the texture of the bone itself, Carr says, the scientists can use those changes as clues to figure out what soft tissue was where on the dinosaur's face. If the parallel between the dinosaurs and crocodilians that Carr and his colleagues draw is correct, this could provide intriguing insight into how the dinosaur's scales develop, suggests Paul Gignac, a paleontologist at Oklahoma State University who was not involved in the research. "Crocodiles grow facial scales by actually cracking their skin, like drying paint, because the underlying bone grows faster than the skin above it," Dr. Gignac explains in an email to the Monitor. 
"The parallel that Carr and colleagues draw between tyrannosaurs and crocodiles suggests a similar kind of development, which we previously thought was unique to crocodiles and their kin, but that may no longer be the case." Carr and his colleagues took the parallel a step further and, pointing to small holes in the facial bones thought to provide channels for blood vessels and nerves, suggest that D. horneri's snout was as sensitive to touch as crocodilian snouts. They argue that the arrangement of these holes, called foramina, and the texture of the bones are clues that the tyrannosaur had the same kind of specialized sense organs found in crocodilian snout skin called integumentary sensory organs (ISOs). "The authors interpret the openings on the tips of the snout and lower jaw of tyrannosaurs as transmitting nerves that provide sensation, and this make a lot of sense," Gignac says. "The major nerve that allows people to sense touch on their faces ... is the same nerve that allows birds to sense touch and temperature on their beaks and crocodiles to sense pressure waves under water." "This nerve ('the trigeminal nerve') also innervates the whiskers in cats and dogs," he adds. "It is a ubiquitous and important neural component of how vertebrates interact with their environments. So, it seems apropos that evolution would also equip an apex predator – one thought to engage in social, head-biting behaviors, just like crocodiles – with a touch-sensitive snout that would, for example, reinforce cues of social dominance and weakness." But not everyone is sold on this hypothesis. "I question the authors' inferences regarding crocodile-like facial sensitivity in tyrannosaurs," Dr. Sues says. "Lizards have numerous tiny openings in their jaw bones for the passage of nerves and blood vessels that supply the superficial tissues of the snout," as do other animals as well, he points out. "It's certainly not just dinosaurs, certainly not tyrannosaurs and crocodiles" that have these holes in their facial bones, Currie agrees. And, the vastly different lifestyles of the beasts adds to both Sues and Currie's skepticism. "Most crocodilians and their extinct relatives are/were semi-aquatic and have highly specialized receptors for detecting pressure changes in water. (Not surprisingly, certain land-adapted extinct crocs lack these structures.)," Sues says. During the Cretaceous, Montana would have looked a bit different than it does today. A shallow sea called the Western Interior Seaway ran through the middle of North America at the time. "D. horneri lived on the coastal plain, which was forested and carved by an extensive stream system," Carr writes in an email. "The climate would have been comparable to, say, modern-day Mississippi." Carr admits that more research needs to be done. "We really need to get a handle on the distribution, density, and number of these foramina. I think this really needs to be quantified to make the comparisons a bit more rigorous," he says. But ultimately, Carr says, "What we've proposed as a hypothesis, and what will really test it is the discovery of a tyrannosaur with its skin preserved on its face. And I think that day will come, and that will be the test of our hypothesis." "I really hope someone finds a T. rex mummy," he says. "Wouldn't that be great?" [Editor's note: This article was updated to add details about the Cretaceous environment.]


UAlberta partners with top Chinese institution, Tsinghua University, to create Joint Research Centre for Future Energy and Environment during recent Alberta mission. EDMONTON, Alberta, April 22, 2017 /PRNewswire/ -- The University of Alberta is teaming up with research partners in China to develop low-carbon, sustainable energy solutions while tackling global environmental challenges. Officials from the U of A and Tsinghua University were in Beijing on April 20 to sign an agreement to create the Joint Research Centre for Future Energy and Environment. It was one of several key agreements the U of A signed with Chinese partners as part of a wider Government of Alberta trade mission, led by Premier Rachel Notley, to strengthen ties with the province's second-largest trading partner. "The University of Alberta values our long-standing partnerships with China and Tsinghua University, which bring together world-leading talent to address globally important issues such as clean energy, environment and climate change," said U of A President David Turpin. "Strengthening these collaborations will open even more avenues of discovery and lead to new ideas, technologies and innovations that will benefit both countries and the world." Larry Kostiuk, the U of A's associate vice-president of research, said the Joint Research Centre for Future Energy and Environment is the latest evolution in more than two decades of collaborations between the U of A and Tsinghua University—arguably China's best research institution and among the top in the world. In 2012, the U of A and Tsinghua created the Sino-Canadian Energy and Environment Research and Education Initiative, which has led to more than 30 partnerships in clean energy, environment, water, energy transport and policy. Kostiuk said the new centre elevates relations with Tsinghua to a "completely new level." Researchers will collaborate on a range of problems related to energy, environment and climate change, renewable energy, advanced power systems, energy transport and more. "This is a rare opportunity for a Canadian university to partner with Tsinghua University in such a significant way," said Kostiuk, who was in China for the signing. "We come from different places and backgrounds, but we're going to come together and leverage our different perspectives to solve common problems." The centre will be based at Tsinghua in a state-of-the-art research facility. Once operational, the centre will be able to apply for grant funding through the Chinese Ministry of Education, which is establishing strategic international research centres across the country. This centre would be the only one created in partnership with a Canadian university. Kostiuk will serve as the new centre's deputy director while retaining his position as director of U of A's Future Energy Systems initiative, which brings together researchers across disciplines to improve and develop new low-carbon energy technologies, integrate them into today's infrastructure and understand the social and economic impacts of their adoption. Kostiuk said the U of A-Tsinghua centre will be even broader in scope than Future Energy Systems, addressing environment and water issues not necessarily tied to energy. "Tsinghua University is a world leader in clean, low-carbon and renewable energy research and technologies. We all look forward to getting to work with incredibly bright people on both sides." 
"We are pleased to work with the University of Alberta, which has a global reputation in energy systems research," said Qikun Xue, vice-president of research with Tsinghua University. "We look forward to bringing our strengths together to tackle many critical issues facing our planet." In addition to the joint research centre, the Faculty of Rehabilitation Medicine signed a memorandum of understanding with Guanghua International Education Association to develop training for health professionals that will help China enhance and expand rehabilitation capacity. TEC Edmonton signed an agreement with Tsinghua University's research innovation incubator, TusPark/TusStar, on a new joint incubator. TusPark/TusStar operates the largest university science park in the world, and the new partnership would expand its global reach, creating economic opportunities for Edmonton and Alberta. "I am extremely proud to support the University of Alberta and TEC Edmonton in forming relationships with such innovative partners in China," said Premier Notley. "We look forward to seeing this partnership thrive, and to watching Alberta's expertise across a variety of areas, not only create opportunities for Albertans, but make a difference around the world." For further information: Bryan Alary, Communications Manager, Marketing & Communications, University of Alberta, Office: 780-492-0336  |  Email: bryan.alary@ualberta.ca


VANCOUVER, BRITISH COLUMBIA--(Marketwired - April 27, 2017) - Strikepoint Gold Inc. (TSX VENTURE:SKP) ("Strikepoint" or the "Company") Shawn Khunkhun, CEO of Strikepoint, stated: "The appointment of Andy Randell as Vice President of Exploration enables a strong move forward for Strikepoint's inaugural work program in the Yukon. Randell and the Company share the same vision for the development of our Yukon portfolio. With state-of-the-art technology driven by a passion for discovery, Andy's work ethic, his adherence to strict environmental codes and his innate respect for the local First Nations, as well as his past involvement with Ryan Gold Corp. and Victoria Gold Corp. in the Yukon, we are privileged to have him lead our charge in May." Andy Randell is a Vancouver-based geologist with nearly 20 years of experience in Europe, South America and North America across a variety of commodities. He founded his own consultancy, Strata GeoData Services, in 2014, and then launched the 'Hive' initiative shortly afterwards, whereby graduate geologists are given the opportunity to work and be mentored on real geological projects. He was recognized by the Canadian Institute of Mining in 2016, when he was awarded the Bedford Young Mining Leaders Award for the Hive concept. In January 2017, Andy was elected to the Board of Directors of the Association for Mineral Exploration (AME) for a three-year term. He also sits on the Mentoring Committee of the Association of Professional Engineers and Geoscientists BC (APEGBC), is on the CIM Geological Society committee and is President of the Below BC Geological Association, a public-facing geological education non-profit society. Andy earned his Bachelor of Science degree from the University of Wales, Cardiff (UK) and is a registered professional geoscientist (P.Geo.) with APEGBC. He is a strong advocate of the exploration industry with a focus on sustainability, education and succession planning. Andy is extremely familiar with Strikepoint's Yukon portfolio of properties owing to his previous position as Chief Geologist for Ryan Gold Corp., where he oversaw the major sampling, mapping and drilling programs of 2011 to 2013. Prior to this, Andy was a Project Geologist for Victoria Gold Corp. on their Eagle Gold Project. He has had extensive interaction with local government and First Nations, which is enabling Strikepoint to get a rapid start to the 2017 field season. We are also pleased to announce that Hive has dedicated geologist Scott Dorion to the Yukon projects. Scott is a graduate of the University of Alberta and has worked across Canada and Australia on gold- and copper-focused projects. Most recently, Scott was the Project Geologist for Ryan Gold Corp., and so has first-hand knowledge of the properties in the portfolio. He was also a geologist with Victoria Gold on their Eagle Project site. ON BEHALF OF THE BOARD of STRIKEPOINT GOLD INC. Neither the TSX Venture Exchange nor its Regulation Services Provider (as that term is defined in the policies of the TSX Venture Exchange) accepts responsibility for the adequacy or accuracy of this release.


News Article | April 24, 2017
Site: www.chromatographytechniques.com

A new study has made a major discovery towards finding the cause of multiple sclerosis (MS), potentially paving the way for research into new treatments. Ahead of MS Awareness Week, which starts today (Monday April 24), an international team involving the University of Exeter Medical School and the University of Alberta has discovered a new cellular mechanism—an underlying defect in brain cells—that may cause the disease, and a potential hallmark that may be a target for future treatment of the autoimmune disorder. The study was recently published in the Journal of Neuroinflammation and was part-funded by the Royal Devon & Exeter NHS Foundation Trust. "Multiple sclerosis can have a devastating impact on people's lives, affecting mobility, speech, mental ability and more. So far, all medicine can offer is treatment and therapy for the symptoms - as we do not yet know the precise causes, research has been limited. Our exciting new findings have uncovered a new avenue for researchers to explore. It is a critical step, and in time, we hope it might lead to effective new treatments for MS," said Paul Eggleton, of the University of Exeter Medical School. Multiple sclerosis affects around 2.5 million people around the world. Typically, people are diagnosed in their 20s and 30s, and it is more common in women than men. Although the cause has so far been a mystery, the disease causes the body's own immune system to attack myelin - the fatty "sheaths" that protect nerves in the brain and spinal cord. This leads to brain damage, a reduction in blood supply and oxygen and the formation of lesions in the body. Symptoms can be wide-ranging, and can include muscle spasms, mobility problems, pain, fatigue, and problems with speech. Scientists have long suspected that mitochondria, the energy-creating "powerhouses" of the cell, play a role in causing multiple sclerosis. The joint Exeter-Alberta research team was the first to combine clinical and laboratory experiments to explain how mitochondria become defective in people with MS. Using human brain tissue samples, they found that a protein called Rab32 is present in large quantities in the brains of people with MS, but is virtually absent in healthy brain cells. Where Rab32 is present, the team discovered that a part of the cell that stores calcium (the endoplasmic reticulum, or ER) gets too close to the mitochondria. The resulting miscommunication with the calcium supply triggers the mitochondria to misbehave, ultimately causing toxicity for the brain cells of people with MS. Researchers do not yet know what causes an unwelcome influx of Rab32, but they believe the defect could originate at the base of the ER organelle. The finding will enable scientists to search for effective treatments that target Rab32 and to determine whether there are other proteins that may play a role in triggering MS. "No one knows for sure why people develop MS and we welcome any research that increases our understanding of how to stop it. There are currently no treatments available for many of the more than 100,000 people in the UK who live with this challenging and unpredictable condition. We want people with MS to have a range of treatments to choose from, and be able to get the right treatment at the right time," said David Schley, research communications manager at the MS Society.


News Article | April 28, 2017
Site: www.marketwired.com

LONDON, ENGLAND--(Marketwired - April 28, 2017) - Nanotechnology has a multitude of applications: from healthcare to textiles to new consumer gadgets, innovative uses are constantly emerging. It has now found a new role, with University of Alberta-based nanotechnology accelerator Ingenuity Lab using it as the basis for the development of a revolutionary new way to clean up oil spills. The system has received a vote of confidence from Natural Resources Canada, with the organization providing $1.7m to fund its ongoing development. Using a carbon-nanotube mesh combined with other minerals and polymers, Ingenuity Lab's system acts as a sponge that attracts and absorbs oil underwater. When it is fully saturated with oil, the mesh is removed from the water and exposed to heat, electricity or ultraviolet light, causing it to expel the collected oil. The oil spill cleaning system has come a long way in a remarkably short amount of time. Ingenuity Lab director Dr. Carlo Montemagno said his team was able to demonstrate the effectiveness of the membrane approximately a year ago, and is now working on developing a large-scale version of the system. "Where you might see it is being rolled off the back [of a vessel] and dragged or moved through the water. As it becomes saturated, it would be brought on board, the oil would be expelled and it would be redeployed." Past tests have shown the system can be remarkably effective and is capable of cleaning up 100 percent of a spill – even the heavier oil that may be trapped below the surface. The system also recovers the oil, allowing it to be reused and potentially recouping some of the cost associated with cleaning up a spill. The technology is far more advanced than current methods of containing a spill, which usually involve floating booms and skimming oil from the water's surface. Ingenuity Lab's system would make a substantial difference should the world see another event on the scale of the 2010 Deepwater Horizon disaster, or the Exxon Valdez spill of 1989. This is by no means the first nanotechnology project Ingenuity Lab has worked on. Other projects have looked at capturing carbon emissions, healthcare and agriculture. Montemagno said the team is now developing a pilot system and expects to begin field tests in less than two years. "Our mission is to develop solutions to significant societal problems and challenges, and translate those solutions to the marketplace."


News Article | May 4, 2017
Site: www.marketwired.com

CALGARY, ALBERTA--(Marketwired - May 3, 2017) - Orbus Pharma Inc. (the "Company") announced today that the securities regulators (the "Commissions") in the Provinces of Ontario, British Columbia, Manitoba, Alberta and Québec have granted a full revocation (the "Revocation") of the cease trade order imposed by each of them in May, 2010 against the securities of the Company. The cease trade orders had been imposed by the Commissions for failure by the Company to file its required filings by the filing deadline as prescribed by applicable securities laws. Its common shares were listed on the TSX NEX Exchange ("NEX") under the symbol ORB, but were suspended from trading on the NEX on April 30, 2010 for failure to maintain minimum NEX Exchange listing requirements. Following the issuance of the cease trade orders, the Company's shares were delisted from the NEX on January 25, 2012. The Company applied in or about March, 2017 to each of the Commissions for a revocation of the cease trade orders and at that time requested relief from filing the annual and quarterly financial reports and related MD&A for 2010 - 2013. In April, 2017, the Company filed annual audited financial statements and related MD&A for 2014, 2015 and 2016, and the Commissions granted the requested relief. On May 3, 2017, each of the Commissions revoked the cease trade orders issued against the Company. As a condition of revoking the Ontario cease trade order, the Ontario Securities Commission requested that the Company undertake not to complete a restructuring transaction, significant acquisition or reverse takeover of a business not located in Canada unless the Company first received a receipt for a final prospectus in respect of such business. The Company has given such undertaking. The Company intends to hold a meeting of shareholders within 90 days of the date of the Revocation. Although the Company has been inactive, following the Revocation, the Company intends to reactivate itself. In the near term, the Company intends to seek opportunities to acquire assets or a business and obtain financing, in conjunction with which it may seek a listing on a Canadian stock exchange. The Continuous Disclosure Materials can be reviewed on SEDAR under the Company's profile at www.sedar.com. Directors and Officers of the Company: Greg Muir (President, CEO and acting CFO and Director) is currently Vice President Finance and Information Technology with Crestline Coach Ltd, headquartered in Saskatoon. In this role, he provides corporate and functional leadership to drive operational excellence and outstanding financial performance. He has held key roles in both private and publicly listed companies where his responsibilities included, but were not limited to, operations management, enterprise financing and regulatory compliance. Mr. Muir is a Chartered Professional Accountant (CPA, CMA), Management Accounting, with an MBA specializing in Finance and Statistics and a Bachelor of Arts in Economics. Laurie M. Paré (Director) is a Financial Consultant and President of Bellevue Spur Capital Corporation, a private company. He is a former partner of PricewaterhouseCoopers LLP. Mr. Paré has a Bachelor of Commerce degree from the University of Alberta and is a Chartered Professional Accountant. He is a Director on the Board of Directors of Imperial Metals Corporation. Jeffrey McCaig (Director) is the Chairman of the board of directors of the Trimac Group of Companies, of which he was CEO until December 31, 2015. Mr. 
McCaig has been a director of MEG Energy Inc. since March 1, 2014, a director of Potash Corporation of Saskatchewan since January 2001 and a director of Bantrel Company since 2000, becoming its Chairman in December 2007. Mr. McCaig is also a director and co-owner of the Calgary Flames Hockey Club. Mr. McCaig holds a degree in economics from Harvard University, a law degree from Osgoode Hall Law School, and a Master of Science in Management degree from Stanford University. Other than as disclosed above, no director or executive officer of the Company has been subject to: (a) any penalties or sanctions imposed by a court relating to securities legislation or by a securities regulatory authority, or has entered into a settlement agreement with a securities regulatory authority; or (b) any other penalties or sanctions imposed by a court or regulatory body that would likely be considered important to a reasonable investor in making an investment decision. This press release contains "forward looking information" within the meaning of applicable Canadian securities legislation. Forward looking information includes, but is not limited to, statements with respect to the Company's expectation with respect to future plans for the business, raising capital, listing on a stock exchange, and the anticipated timing of such events. Generally, forward looking information can be identified by the use of forward-looking terminology such as "plans", "expects" or "does not expect", "is expected", "budget", "scheduled", "estimates", "forecasts", "intends", "anticipates" or "does not anticipate", or "believes", or variations of such words and phrases, or statements that certain actions, events or results "may", "could", "would", "might" or "will be taken", "occur" or "be achieved". Forward-looking information is subject to known and unknown risks, uncertainties and other factors that may cause the actual results, level of activity, performance or achievements of the Company to be materially different from those expressed or implied by such forward-looking information, including but not limited to: general business, economic, competitive and regulatory risks. Although the Company has attempted to identify important factors that could cause actual results to differ materially from those contained in forward-looking information, there may be other factors that cause results not to be as anticipated, estimated or intended. There can be no assurance that such information will prove to be accurate, as actual results and future events could differ materially from those anticipated in such statements. Accordingly, readers should not place undue reliance on forward looking information. The Company does not undertake to update any forward-looking information, except in accordance with applicable securities laws. Neither TSX Venture Exchange nor its Regulation Services Provider (as that term is defined in the policies of the TSX Venture Exchange) accepts responsibility for the adequacy or accuracy of this release.


More deaths occurred among female patients with atrial fibrillation/flutter than among male patients 30 and 90 days after discharge from emergency departments, reports the Canadian Journal of Cardiology. Philadelphia, PA, May 3, 2017 - Atrial fibrillation and flutter (also known as AFF) is associated with serious health problems and is a significant contributor to death rates. Investigators have identified differences in outcomes for male and female patients who presented with AFF to emergency departments in Alberta, Canada and were then discharged. Most importantly, women experienced higher death rates than men at 30 and 90 days after discharge. Their findings are published in the Canadian Journal of Cardiology. The prevalence of AFF increases with age. Rates are 5.9% in men and 2.8% in women 65-69 years of age, increasing to 8.0% in men and 6.7% in women aged 80 years and older. The number of people with AFF is expected to rise substantially in the next ten years, given increased life expectancy. "As health care systems are stretched beyond their capacity, there are various pressures on the emergency departments," explained lead investigator Rhonda J. Rosychuk, PhD, Professor of Pediatrics at the Department of Pediatrics, University of Alberta and the Women & Children's Health Research Institute, Edmonton, Alberta, Canada. "In Alberta, women were more likely to be discharged from the emergency department than men for acute myocardial infarction, unstable angina, stable angina, and chest pain. However, there are few data on the epidemiology of AFF in the emergency department setting, and sex differences are not well understood." Investigators examined differences in outcomes for male and female patients who presented with AFF to emergency departments across Alberta, Canada. They extracted anonymized data from linked provincial databases for all patients who had been discharged from the emergency department after presenting for AFF from 1999 to 2011. They analyzed data from 21,062 patients, 47.5% of whom were women. The investigators identified important differences between male and female patients in times to return to the emergency department, follow-up visit, and death. Women experienced shorter or longer waits to see a physician and specialist in follow-up, depending on factors such as socioeconomic group and the presence of other medical conditions. Overall, women experienced higher death rates than men at 30 and 90 days after discharge, and this remained significant after adjustment for other demographic and health-related variables. Within 30 days of discharge, 234 patients had died (1.3% female vs 0.9% male). Of these, 6.0%, 6.8%, and 5.6% of deaths were reported as AFF, heart failure, and stroke related, respectively. Within 90 days of discharge, there were 548 deaths (2.9% female vs 2.4% male). Of these deaths, 4.6%, 5.3%, and 4.6% were reported as AFF, heart failure, and stroke related, respectively, and there were more deaths following stroke for women than for men. Previously, investigators have reported conflicting results regarding AFF care and outcomes between men and women. The differences identified in this study suggest that further examination is required to determine if they are physiological (related to patient factors) or systemic (related to factors such as income, access to services, health care biases, etc.). Mortality and time to death varied by sex, suggesting that the consequences of these differences are important. 
"Sex and gender-based analyses provide opportunities for clinicians and researchers to identify health inequities and advocate for changes in health care delivery," commented study co-author Brian H. Rowe, MD, MSc, Scientific Director at the Institute of Circulatory and Respiratory Health (ICRH) for the Canadian Institutes of Health Research (CIHR). "This research adds to accumulating evidence that women with cardiovascular disease may receive different management and experience worse outcomes than men." "Emergency, family medicine, and specialist clinician groups should be aware of the sex-based differences we have identified and ensure similar evidence-based management is provided to both men and women to improve health outcomes," concluded Dr. Rosychuk. AFF is an irregular heartbeat (arrhythmia) that is associated with blood clots to the brain (e.g., stroke) and other organs, heart failure, and sometimes death. It affects approximately 350,000 Canadians and 2.66 million Americans.


Jing Y.,University of Alberta | Shahbazpanahi S.,University of Ontario Institute of Technology
IEEE Transactions on Signal Processing | Year: 2012

This paper deals with optimal joint user power control and relay distributed beamforming for two-way relay networks, where two end-users exchange information through multiple relays, each of which is assumed to have its own power constraint. The problem includes the design of the distributed beamformer at the relays and the power control scheme for the two end-users to optimize the network performance. Considering the overall two-way network performance, we maximize the lower signal-to-noise ratio (SNR) of the two communication links. For single-relay networks, this maximization problem is solved analytically. For multi-relay networks, we propose an iterative numerical algorithm to find the optimal solution. Since the complexity of the optimal algorithm is too high for large networks, two low-complexity sub-optimal algorithms are also proposed, which are numerically shown to perform close to the optimal technique. It is also shown via simulation that for two-way networks with either a single relay or multiple relays, proper user power control and relay distributed beamforming can significantly improve the network performance, especially when the power constraints of the two end-users are unbalanced. Our approach also substantially improves the power efficiency of the network. © 1991-2012 IEEE.
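
For readers who want a feel for the max-min objective, the sketch below grid-searches the two user powers in a textbook single-relay amplify-and-forward model with assumed real channel gains. It is a minimal numerical illustration, not the paper's analytic single-relay solution or its iterative multi-relay algorithm.

```python
import numpy as np

# Textbook two-way amplify-and-forward model, single relay (assumed setup).
h1, h2 = 0.9, 0.4            # real channel gains between users and relay
sigma2 = 1.0                 # noise variance at relay and users
P1, P2, PR = 2.0, 2.0, 2.0   # power budgets: user 1, user 2, relay

def link_snrs(p1, p2, pr):
    """SNRs of both links after each user cancels its own known signal."""
    g2 = pr / (p1 * h1**2 + p2 * h2**2 + sigma2)  # squared relay gain
    snr1 = g2 * h1**2 * h2**2 * p2 / (sigma2 * (g2 * h1**2 + 1.0))
    snr2 = g2 * h1**2 * h2**2 * p1 / (sigma2 * (g2 * h2**2 + 1.0))
    return snr1, snr2

# Brute-force the max-min SNR over a coarse grid of user powers,
# with the relay transmitting at its full budget.
grid = np.linspace(0.05, 1.0, 40)
best = max((min(link_snrs(a * P1, b * P2, PR)), a * P1, b * P2)
           for a in grid for b in grid)
print("max-min SNR %.3f at p1 = %.2f, p2 = %.2f" % best)
```

Rerunning with unbalanced budgets shows the effect the abstract highlights: past a point, extra power from one user mainly consumes relay gain and drags down the weaker link, so the max-min optimum backs that user off.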


Hebblewhite M.,University of Montana | Merrill E.H.,University of Alberta
Oikos | Year: 2011

Partial migration is widespread in ungulates, yet few studies have assessed the demographic mechanisms by which these alternative strategies are maintained in populations. Over the past two decades, the proportion of resident individuals in the Ya Ha Tinda elk herd near Banff National Park has been increasing despite an overall population decline. We compared demographic rates of migrant and resident elk to test for demographic mechanisms of partial migration. We determined adult female survival for 132 elk, pregnancy rates for 150 female elk, and calf survival for 79 calves. Population vital rates were combined in Leslie-matrix models to estimate demographic fitness, which we defined as the migration strategy-specific population growth rate. We also tested for differences between migratory strategies in the factors influencing risk of mortality for adult females, using Cox proportional hazards regression with time-varying covariates of exposure to forage biomass, wolf predation risk, and group size. Despite higher pregnancy rates and winter calf weights associated with higher forage quality, survival of migrant adult females and calves was lower than that of resident elk. Resident elk traded high-quality food for reduced predation risk by selecting areas close to human activity and by living in groups 20% larger than those of migrants. Thus, residents experienced higher adult female survival and calf survival, but lower pregnancy rates and calf weights. Cause-specific mortality of migrants was dominated by wolf and grizzly bear predation, whereas resident mortality was dominated by human hunting. These demographic differences translated into a slightly higher (2-3%), but non-significant, population growth rate for residents compared with migrants, suggesting demographic balancing between the two strategies during our study. Despite this statistical equivalence, our results are also consistent with a slow long-term decline in migrants because of higher wolf-caused mortality. These results emphasize that the different tradeoffs migrants and residents make between forage and predation risk may affect the demographic balance of partially migratory populations, which may explain recent declines in migratory behavior in many ungulate populations around the world. © 2011 The Authors.
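
To make the Leslie-matrix step concrete: a minimal female-only, two-stage (calf, adult) projection matrix can be assembled from pregnancy rate, calf survival and adult survival, with the strategy-specific growth rate read off as the dominant eigenvalue. The rates below are hypothetical placeholders, not the study's estimates, and this is just one of several ways to parameterize such a matrix.

```python
import numpy as np

# Placeholder vital rates for one migratory strategy (not the study's values).
preg   = 0.85   # pregnancy rate of adult females
s_calf = 0.40   # first-year (calf) survival
s_ad   = 0.88   # adult female survival

# Female calves produced per adult female (0.5 assumes an even sex ratio);
# the adult must survive the year to recruit her calf into the census.
fec = 0.5 * preg * s_ad

L = np.array([[0.0,    fec ],    # row 1: reproduction into the calf stage
              [s_calf, s_ad]])   # row 2: survival into the adult stage

lam = max(np.linalg.eigvals(L).real)   # dominant eigenvalue (Perron root)
print("strategy-specific growth rate lambda = %.3f" % lam)  # >1 grows, <1 declines
```

Computing lambda separately with migrant-specific and resident-specific rates and comparing the two values is the matrix analogue of the 2-3% growth-rate gap reported above.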


Ustin S.L.,University of California at Davis | Gamon J.A.,University of Alberta
New Phytologist | Year: 2010

Contents: Summary; I. Introduction; II. History of functional-type classifications of vegetation; III. History of remote sensing of vegetation; IV. New sensors and perspectives; V. Measuring detailed canopy structure; VI. The emerging hypothesis of 'optical types'; VII. Conclusions; Acknowledgements; References. Summary: Conceptually, plant functional types represent a classification scheme between species and broad vegetation types. Historically, these were based on physiological, structural and/or phenological properties, whereas recently, they have reflected plant responses to resources or environmental conditions. Often, an underlying assumption, based on an economic analogy, is that the functional role of vegetation can be identified by linked sets of morphological and physiological traits constrained by resources, based on the hypothesis of functional convergence. Using these concepts, ecologists have defined a variety of functional traits that are often context dependent, and the diversity of proposed traits demonstrates the lack of agreement on universal categories. Historically, remotely sensed data have been interpreted in ways that parallel these observations, often focused on the categorization of vegetation into discrete types, often dependent on the sampling scale. At the same time, current thinking in both ecology and remote sensing has moved towards viewing vegetation as a continuum rather than as discrete classes. The capabilities of new remote sensing instruments have led us to propose a new concept of optically distinguishable functional types ('optical types') as a unique way to address the scale dependence of this problem. This would ensure more direct relationships between ecological information and remote sensing observations. © The Authors (2010). Journal compilation © New Phytologist Trust (2010).
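
A toy sketch of the 'optical types' idea, under invented numbers: if functional groups really are optically distinguishable, plain clustering of reflectance measurements should recover them without any prior vegetation labels. The two synthetic spectral classes, the two-band (red, near-infrared) simplification and the two-cluster choice below are all assumptions made for illustration.

```python
import numpy as np

# Synthetic red/NIR reflectances for two made-up canopy classes.
rng = np.random.default_rng(2)
class_a = rng.normal([0.05, 0.50], 0.02, (50, 2))   # low red, bright NIR
class_b = rng.normal([0.04, 0.30], 0.02, (50, 2))   # low red, dimmer NIR
X = np.vstack([class_a, class_b])

# Plain k-means with two centroids, using no class labels at all.
centroids = X[rng.choice(len(X), size=2, replace=False)]
for _ in range(20):
    labels = np.argmin(((X[:, None, :] - centroids) ** 2).sum(-1), axis=1)
    centroids = np.array([X[labels == k].mean(axis=0) for k in (0, 1)])

print("recovered 'optical type' centroids (red, NIR):")
print(np.round(centroids, 3))
```

On these well-separated synthetic spectra the recovered centroids fall near the generating means; the open question the review raises is how far such grouping holds for real vegetation that varies as a continuum.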


He J.,University of Alberta | Li Y.W.,University of Alberta | Blaabjerg F.,University of Aalborg
IEEE Transactions on Industrial Electronics | Year: 2014

To accomplish superior harmonic compensation performance using distributed generation (DG) unit power electronics interfaces, an adaptive hybrid voltage- and current-controlled method (HCM) is proposed in this paper. It is shown that the proposed adaptive HCM can reduce the number of low-pass/band-pass filters in the DG unit's digital controller. Moreover, phase-locked loops are not necessary, as the microgrid frequency deviation can be automatically identified by the power control loop. Consequently, the proposed control method provides opportunities to reduce DG control complexity without affecting harmonic compensation performance. Comprehensive simulation and experimental results from a single-phase microgrid are provided to verify the feasibility of the proposed adaptive HCM approach. © 1982-2012 IEEE.
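
The no-PLL claim is easiest to see through a conventional P-f/Q-V droop relation, in which the power loop itself encodes the frequency deviation. The sketch below uses textbook droop equations with assumed slopes and set points; it is not the paper's specific adaptive controller.

```python
# Textbook droop relations (assumed parameters, per-unit quantities).
f_nom, V_nom = 60.0, 1.0        # nominal frequency (Hz) and voltage (p.u.)
kp, kq = 0.01, 0.05             # P-f and Q-V droop slopes (assumed)
P_ref, Q_ref = 0.0, 0.0         # power set points (p.u.)

def droop(P_meas, Q_meas):
    """Frequency/voltage commands implied by the measured power flow."""
    f = f_nom - kp * (P_meas - P_ref)   # frequency falls as real power rises
    v = V_nom - kq * (Q_meas - Q_ref)   # voltage falls with reactive power
    return f, v

# In steady state the unit settles at the microgrid frequency, so the
# deviation can be read back from the power loop with no PLL:
f_cmd, _ = droop(P_meas=0.5, Q_meas=0.0)
print("implied frequency deviation: %+.3f Hz" % (f_cmd - f_nom))
```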


Woodside M.T.,University of Alberta | Woodside M.T.,Canadian National Institute For Nanotechnology | Block S.M.,Stanford University
Annual Review of Biophysics | Year: 2014

Folding may be described conceptually in terms of trajectories over a landscape of free energies corresponding to different molecular configurations. In practice, energy landscapes can be difficult to measure. Single-molecule force spectroscopy (SMFS), whereby structural changes are monitored in molecules subjected to controlled forces, has emerged as a powerful tool for probing energy landscapes. We summarize methods for reconstructing landscapes from force spectroscopy measurements under both equilibrium and nonequilibrium conditions. Other complementary, but technically less demanding, methods provide a model-dependent characterization of key features of the landscape. Once reconstructed, energy landscapes can be used to study critical folding parameters, such as the characteristic transition times required for structural changes and the effective diffusion coefficient setting the timescale for motions over the landscape. We also discuss issues that complicate measurement and interpretation, including the possibility of multiple states or pathways and the effects of projecting multiple dimensions onto a single coordinate. Copyright © 2014 by Annual Reviews. All rights reserved.
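
For the nonequilibrium case, one widely used estimator is the Jarzynski equality, which recovers a free-energy difference from repeated irreversible pulls. The sketch below applies it to synthetic Gaussian work values in units of kT; the true free-energy difference and the work variance are assumptions chosen so the answer is known in advance.

```python
import numpy as np

# Synthetic work values from repeated nonequilibrium pulls (units of kT).
rng = np.random.default_rng(0)
dG_true, sigma_W = 5.0, 1.0
# For Gaussian work, <W> = dG + sigma^2/2 (the dissipated part), so draw:
work = rng.normal(dG_true + 0.5 * sigma_W**2, sigma_W, size=2000)

# Jarzynski equality with kT = 1: exp(-dG) = <exp(-W)>.
dG_est = -np.log(np.mean(np.exp(-work)))
print("estimated dG = %.2f kT (true %.2f kT)" % (dG_est, dG_true))
```

The exponential average is dominated by rare low-work trajectories, which is why such reconstructions degrade as pulling gets faster and the work distribution broadens.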


Grant
Agency: European Commission | Branch: FP7 | Program: CP-FP | Phase: HEALTH.2010.4.2-1 | Award Amount: 7.36M | Year: 2010

Assuming an annual birth rate of 10.25 births/1,000 population, approximately 25,000 Extremely Low Gestational Age Newborns (ELGANs) are born every year in the EU. Conservative figures estimate that approximately half of these babies will develop low blood pressure and require treatment. However, no uniform criteria exist to define hypotension, and the evidence to support current management strategies is limited. Many of these interventions have been derived from the adult literature and have not been validated in the newborn. Dopamine remains the most commonly used inotrope despite little evidence that it improves outcome. Hypotension is associated not only with mortality in preterm infants but also with brain injury and impaired neurosensory development in ELGAN survivors. Preterm brain injury has far-reaching implications for the child, parents, family, health service and society at large. It is therefore essential that we now design and perform the appropriate trials to determine whether the infusion of inotropic agents is associated with improved outcome. We have assembled a consortium with expertise in key areas of neonatal cardiology, neonatology, neurophysiology, basic science and pharmacology with the intention of answering these questions. The objectives of the group are as follows: 1. To perform a multinational, randomized controlled trial to evaluate whether a more restricted approach to the diagnosis and management of hypotension, compared with a standard approach using dopamine as a first-line inotrope, affects survival without significant brain injury at 36 weeks gestational age in infants born at less than 28 weeks gestation, and survival without neurodevelopmental disability at 2 years corrected age; 2. To perform pharmacokinetic and pharmacodynamic studies of dopamine; and 3. To develop and adapt a formulation of dopamine suitable for newborns in order to apply for a Paediatric Use Marketing Authorization.


Grant
Agency: European Commission | Branch: FP7 | Program: CP | Phase: ICT-2009.1.1 | Award Amount: 7.45M | Year: 2010

The availability of position information plays an increasing role in wireless communications networks already today and will be an integral part of future systems. Such networks can inherently offer stand-alone positioning, especially in situations where conventional satellite-based positioning systems such as GPS fail (e.g., indoors). In this framework, positioning information is an important enabler either for location- and context-aware services or even to improve the communications system itself. The WHERE2 project is a successor of the WHERE project and addresses the combination of positioning and communications in order to exploit synergies and to enhance the efficiency of future wireless communications systems. The key objective of WHERE2 is to assess the fundamental synergies between the two worlds of heterogeneous cooperative positioning and communications under realistic real-world constraints. The estimation of the position of mobile terminals (MTs) is the main goal in WHERE2. The positioning algorithms combine measurements from heterogeneous infrastructure and complement them with cooperative measurements between MTs, additional information from inertial sensors, and context information. Based on the performance of the geo-aided positioning strategies (in the sense of accuracy, complexity, signalling overhead, reliability of the provided information, etc.), the impact on coordinated, cooperative, and cognitive networks is assessed. This is done under realistic scenarios and system parameters following ongoing standardization processes. A joint and integrated demonstration using multiple hardware platforms provides verification of the performance of dedicated cooperative algorithms. All tasks in WHERE2 are covered by different work packages, which interact closely to ensure integrated research on cooperative positioning and communications.
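
One elementary ingredient of such systems is estimating a mobile terminal's position from noisy range measurements to known anchors. The Gauss-Newton least-squares sketch below uses an invented anchor layout and noise level, and covers only the simplest non-cooperative case, without the inter-terminal, inertial or context information the project combines.

```python
import numpy as np

# Invented square anchor layout (metres) and a true MT position to recover.
anchors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
true_mt = np.array([37.0, 61.0])
rng = np.random.default_rng(1)
ranges = np.linalg.norm(anchors - true_mt, axis=1) + rng.normal(0.0, 1.0, 4)

x = np.array([50.0, 50.0])              # initial position guess
for _ in range(10):                     # Gauss-Newton iterations
    diff = x - anchors                  # vectors anchor -> estimate
    d = np.linalg.norm(diff, axis=1)    # predicted ranges
    J = diff / d[:, None]               # Jacobian of the range model
    r = ranges - d                      # range residuals
    x = x + np.linalg.lstsq(J, r, rcond=None)[0]

print("estimated MT position:", np.round(x, 2))
```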


Grant
Agency: European Commission | Branch: FP7 | Program: CP-FP | Phase: HEALTH-2007-3.1-1 | Award Amount: 4.22M | Year: 2009

Facilitating Implementation of Research Evidence (FIRE) is a proposed four-year programme of research to identify and validate key factors determining the successful implementation of research evidence in practice. The study is underpinned by a conceptual framework, the Promoting Action on Research Implementation in Health Services (PARiHS) framework, which proposes that the successful implementation of research evidence depends on the complex interplay of the evidence, the context of implementation and the way the process is facilitated. The planned research will focus on evaluating the feasibility and effectiveness of facilitation as an implementation strategy. A randomised controlled trial with three intervention arms (standard dissemination and two different models of facilitation) and six units in each of five countries (four in Europe, plus Canada; n=30) is planned. The units will be asked to implement research-based guidance on continence promotion and will receive differing levels of facilitation support to do so. Detailed contextual, process and outcome data will be collected to fully explore the complex processes at work during implementation. Combining an international consortium and an experienced research team with a theory-driven, multi-method evaluation study and detailed attention to stakeholder involvement and dissemination throughout, the study has the potential to make a significant contribution to the knowledge and practice of translating research evidence at a clinical, organisational and policy level, within Europe and internationally.


Grant
Agency: GTR | Branch: NERC | Program: | Phase: Research Grant | Award Amount: 1.47M | Year: 2015

Concerns are growing about how much melting occurs on the surface of the Greenland Ice Sheet (GrIS), and how much this melting will contribute to sea level rise (1). It seems that the amount of melting is accelerating and that the impact on sea level rise is over 1 mm each year (2). This information is of concern to governmental policy makers around the world because of the risk to the viability of populated coastal and low-lying areas. There is currently a great scientific need to predict the amount of melting that will occur on the surface of the GrIS over the coming decades (3), since the uncertainties are high. The current models used to predict the amount of melting in a warmer climate rely heavily on determining the albedo: the fraction of incoming solar energy that the snow cover and ice surface reflect. Surfaces which are whiter are said to have higher albedo, reflect more sunlight and melt less. Surfaces which are darker absorb more sunlight and so melt more. Just how the albedo varies over time depends on a number of factors, including how wet the snow and ice are. One important factor that has been missed to date is bio-albedo. Each drop of water in wet snow and ice contains thousands of tiny microorganisms, mostly algae and cyanobacteria, which are pigmented - they have a built-in sunblock - to protect them from sunlight. These algae and cyanobacteria have a large impact on the albedo, lowering it significantly. They also glue together dust particles that are swept out of the air by the falling snow. These dust particles also contain soot from industrial activity and forest fires, and so the mix of pigmented microbes and dark dust at the surface produces a darker ice sheet. We urgently need to know more about the factors that lead to and limit the growth of the pigmented microbes. Recent work by our group in the darkest zone of the ice sheet surface in the SW of Greenland shows that the darkest areas have the highest numbers of cells. Were these algae to grow equally well in other areas of the ice sheet surface, the rate of melting of the whole ice sheet would increase very quickly. A major concern is that there will be more wet ice surfaces for these microorganisms to grow in, and for longer, during a period of climate warming, and so the microorganisms will grow in greater numbers and over a larger area, lowering the albedo and increasing the amount of melt that occurs each year. The nutrient - plant food - that the microorganisms need comes from the ice crystals and dust on the ice sheet surface, and there are fears that increased nitrogen levels in snow and ice may contribute to the growth of the microorganisms. This project aims to be the first to examine the growth and spread of the microorganisms in a warming climate, and to incorporate biological darkening into models that predict the future melting of the GrIS. References: 1. Sasgen, I. and 8 others. Timing and origin of recent regional ice-mass loss in Greenland. Earth and Planetary Science Letters 333-334, 293-303 (2012). 2. Rignot, E., Velicogna, I., van den Broeke, M. R., Monaghan, A. & Lenaerts, J. Acceleration of the contribution of the Greenland and Antarctic ice sheets to sea level rise. Geophys. Res. Lett. 38, L05503, doi:10.1029/2011gl046583 (2011). 3. Milne, G. A., Gehrels, W. R., Hughes, C. W. & Tamisiea, M. E. Identifying the causes of sea-level change. Nature Geosci. 2, 471-478 (2009).
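The albedo arithmetic that drives these concerns is simple, and a toy calculation shows the leverage involved. Below, the incoming flux, the albedo values and the assumption that all absorbed energy goes into melting are illustrative simplifications, not project data; only the latent heat of fusion is a standard constant:

    # Toy illustration of albedo's leverage on surface melt (Python).
    LATENT_HEAT_FUSION = 3.34e5  # J/kg, standard value for ice

    def melt_kg_per_m2_day(incoming_w_per_m2, albedo):
        """Daily melt per m^2 if all absorbed shortwave went into melting ice."""
        absorbed = (1.0 - albedo) * incoming_w_per_m2   # W/m^2
        return absorbed * 86_400 / LATENT_HEAT_FUSION

    flux = 300.0  # assumed mean summer shortwave flux, W/m^2
    for albedo in (0.8, 0.6, 0.4):  # clean snow, dusty ice, biologically darkened ice
        print(f"albedo {albedo:.1f}: ~{melt_kg_per_m2_day(flux, albedo):.0f} kg/m^2/day")
    # Halving the albedo from 0.8 to 0.4 triples the absorbed energy, and with it
    # the potential melt - which is why biological darkening matters.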


Grant
Agency: GTR | Branch: ESRC | Program: | Phase: Research Grant | Award Amount: 24.84K | Year: 2012

The World Health Organization (WHO) model of age-friendly cities emphasizes the theme of supportive urban environments for older citizens. These are defined as environments that encourage active ageing by optimizing opportunities for health, participation and security in order to enhance quality of life as people age (WHO, Global Age-friendly Cities, 2007). The goal of establishing age-friendly cities should be seen in the context of pressures arising from population ageing and urbanisation. By 2030, two-thirds of the world's population will reside in cities, with - for urban areas in high-income countries - at least one-quarter of their populations aged 60 and over. This development raises important issues for older people: To what extent will cities develop as age-friendly communities? Will so-called global cities integrate or segregate their ageing populations? What kind of variations might occur across different types of urban areas? How are different groups of older people affected by urban change? The age-friendly city perspective has been influential in raising awareness about the impact of population ageing. Against this, the value of this approach has yet to be assessed in the context of modern cities influenced by pressures associated with global social and economic change. The IPNS has four main objectives: first, to build a collaborative research-based network focused on understanding population ageing in the context of urban environments; second, to develop a research proposal for a cross-national study examining different approaches to building age-friendly cities; third, to provide a systematic review of data sets and other resources of relevance to developing a research proposal on age-friendly cities; fourth, to develop training for early-career researchers working on ageing and urban issues. The network represents the first attempt to facilitate comparative research on the issue of age-friendly cities. It builds upon two meetings held at the Universities of Keele and Manchester in 2011 that sought to establish the basis for cross-national work around the age-friendly theme. The IPNS brings together world-class research groups in Europe, Hong Kong and North America, professionals concerned with urban design and architecture, and leading NGOs working in the field of ageing. A range of activities has been identified over the two-year funding period: (1) Preparation of research proposals for a cross-national study of approaches to developing age-friendly urban environments. (2) Two workshops to specify theoretical and methodological issues raised by demographic change and urbanisation. (3) A Summer School exploring links between data resources of potential relevance to the ageing and urbanisation theme and which might underpin research proposals. (4) Master classes for network members from key researchers in the field of urbanisation and ageing. (5) A workshop with a user-based theme developing older people's participation in research on building age-friendly communities. (6) Themed workshops (face-to-face and via video-link) to identify research and policy gaps drawing on interdisciplinary perspectives. The IPNS will be sustained in a variety of ways at the end of the funding period. A collaborative research proposal, as well as one to maintain the network, will be major outputs from the project, and work with potential funding bodies will continue after 2014.
Dissemination activities will continue through professional networks, symposia at major international conferences, and involvement in expert meetings. The project will continue to be publicized through a website maintained by the host UK HEI, and will continue to contribute to policy development around the theme of age-friendly cities, notably with the main NGOs working in the field.


Grant
Agency: European Commission | Branch: FP7 | Program: CP | Phase: ICT-2007.1.1 | Award Amount: 5.55M | Year: 2008

To increase ubiquitous and mobile network access and data rates, scientific and technological development is focusing more and more on the integration of radio access networks (RANs). For the efficient usage of RANs, knowledge of the position of mobile terminals (MTs) is valuable information for allocating resources, or predicting the allocation, within a heterogeneous RAN infrastructure. The main objective of WHERE is to combine wireless communications and navigation for the benefit of ubiquitous access in a future mobile radio system. The impact will be manifold, such as real-time localization knowledge in B3G/4G systems that allows them to increase efficiency. Satellite navigation systems will be supplemented with techniques that improve the accuracy and availability of position information. The WHERE project addresses the combination of positioning and communication in order to exploit synergies and to improve the efficiency of future wireless communication systems. Thus, the estimation of the position of MTs based on different RANs is the main goal of WHERE. Positioning algorithms, and algorithms for combining several positioning measurements, allow the position of MTs to be estimated. Appropriate definitions of scenarios and system parameters, together with channel propagation measurements and derived models, will allow the performance of RAN-based positioning to be assessed. Based on the performance of RAN positioning, location-based strategies and protocols will be developed in order to optimise, as well as cross-optimise, different OSI layers of communication systems and RAT selection policies. Performance assessment of the algorithms is provided by theoretical studies and simulations. Hardware signal processing will provide a verification of the performance of dedicated algorithms under realistic conditions. All the tasks are covered by different work packages, which interact closely to ensure integrated research on positioning and communications.
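As an illustration of what a single RAN-based measurement can provide, the sketch below inverts the standard log-distance path-loss model to turn a received signal strength reading into a rough range. The reference power and path-loss exponent are assumed calibration values; in practice they would come from channel propagation measurements of the kind planned in the project:

    def distance_from_rssi(rssi_dbm, p0_dbm=-40.0, n=2.7, d0=1.0):
        """Invert the log-distance model RSSI = P0 - 10*n*log10(d/d0).

        p0_dbm: assumed received power at reference distance d0
        n:      assumed path-loss exponent (~2 free space, 2.5-4 indoors)
        """
        return d0 * 10 ** ((p0_dbm - rssi_dbm) / (10.0 * n))

    # Invented RSSI readings from three base stations; the resulting ranges could
    # feed a least-squares position estimator like the WHERE2 sketch above.
    for rssi_dbm in (-55.0, -70.0, -85.0):
        print(f"RSSI {rssi_dbm:.0f} dBm -> ~{distance_from_rssi(rssi_dbm):.1f} m")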


Grant
Agency: European Commission | Branch: FP7 | Program: CP-FP | Phase: HEALTH.2010.4.2-3 | Award Amount: 3.75M | Year: 2010

The emergence of suicidality in patients receiving drug treatment is of concern because of the overall burden and the possible link with completed suicide. The lack of uniform requirements for defining, detecting and recording suicidality and the presence of disease-related confounders create major problems. It is possible that Medication-Related Suicidality (MRS) differs from Psychopathology-Related Suicidality (PRS) in terms of phenomenology, clinical expression and time course, and may vary between children and adults. Unlike PRS, the time course of MRS may be associated with possible differences in drug pharmacokinetics; abrupt onset; absence of suicidality prior to the start of medication; and emergence of suicidality-related co-morbidities after treatment. This proposal will focus on developing a web-based comprehensive methodology for the assessment and monitoring of suicidality and its mediators in children and adolescents using the HealthTracker™ (a paediatric web-based health outcome monitoring system), with the aim of developing a Suicidality Assessment and Monitoring Module, a Bio-psycho-social Mediators of Suicidality Assessment Module, and a Suicidality-Related Psychiatric and Physical Illness Module. The information obtained will be used to computer-generate classifications of suicidality using the Classification of Suicide-Related Thoughts and Behaviour (Silverman et al, 2007) and the Columbia Classification Algorithm of Suicidal Assessment (C-CASA) (Posner et al, 2007). The existing Medication Characteristics Module will be expanded to allow documentation of the pharmacological characteristics of medication, to explore whether they mediate MRS. The methodology will then be tested in 3 paediatric observational trials (risperidone in conduct disorder; fluoxetine in depression; and montelukast in bronchial asthma) and standardized, so that it can be used in pharmacovigilance and in epidemiological, observational, and registration trials.


Grant
Agency: European Commission | Branch: FP7 | Program: CP-IP | Phase: KBBE.2011.2.2-02 | Award Amount: 7.84M | Year: 2012

NutriTech will build on the foundations of traditional human nutrition research, using cutting-edge analytical technologies and methods to comprehensively evaluate the diet-health relationship and critically assess their usefulness for the future of nutrition research and human well-being. Technologies include genomics, transcriptomics, proteomics, metabolomics, laser scanning cytometry, NMR-based lipoprotein profiling and advanced imaging by MRI/MRS. All methods will be applied in an integrated manner to quantify the effect of diet on phenotypic flexibility, based on metabolic flexibility (the capacity of the organism to adapt fuel oxidation to fuel availability). However, NutriTech will move beyond the state of the art by applying these integrated methods to assess the underlying and related cell-biological and genetic mechanisms and multiple physiological processes of adaptation when homeostasis is challenged. Methods will in the first instance be evaluated within a human intervention study, and the resulting optimal methods will be validated in a number of existing cohorts against established endpoints. NutriTech will disseminate the harmonised and integrated technologies on a global scale via a large academic network including 6 non-EU partners and by providing an integrated and standardised data storage and evaluation platform. The impact of NutriTech will be manifold, and exploitation is crucial, as major breakthroughs from our technology and research are expected. This will be achieved by collaboration with a consortium of 8 major food industries and by exploitation of specific technologies by our 6 SME partners. Overall, NutriTech will lay the foundations for the successful integration of emerging technologies into nutrition research.


The International Nurses Association is pleased to welcome MaryJane Johnson, RN, to their prestigious organization with her upcoming publication in Worldwide Leaders in Healthcare. MaryJane Johnson is a Palliative Care Nurse currently serving patients at Bayshore Home Health in Ottawa, Ontario, Canada, and is also affiliated with the University of Alberta Hospital. MaryJane has over 33 years of experience and extensive expertise in all facets of nursing, especially palliative care and dementia care. MaryJane Johnson graduated with her Nursing Degree from the University of Alberta in Edmonton, Canada in 1983, and remains associated with the University’s hospital to this day. To keep up to date with the latest advances in her field, MaryJane maintains a professional membership with the Registered Nurses Association of Ontario, the College of Nurses of Ontario, and the Canadian Hospice Palliative Care Association. MaryJane says that her great success is due to her passion for palliative care, based upon her belief that people should leave the world with the same care and love with which they enter it. Learn more about MaryJane Johnson and read her upcoming publication in Worldwide Leaders in Healthcare.


News Article | March 22, 2016
Site: motherboard.vice.com

In 2008, Canada eliminated the position of national science adviser, angering scientists who saw the office as a key point of contact between the government and the scientific community. Over the next eight years, Canada went to war on science by preventing researchers from talking to the press (in a word, “muzzling”) and cutting billions in funding for research. Along the way, Canada gained a reputation for being flagrantly anti-science. Now, Canada is looking to revive the role of the national science adviser, and undo some of the damage done during the Harper years, with a “chief science officer.” Whoever is chosen for the position, and whenever the choice is made—staff of science minister Kirsty Duncan would neither confirm nor deny that they will be named with the release of the federal budget on Tuesday—they will have one hell of a job ahead of them. Although much of the role of the chief science officer appears undefined at the moment, one theme overarches the entire discussion: transparency. When Prime Minister Justin Trudeau appointed Duncan, he said that the chief science officer would be mandated to “ensure that government science is fully available to the public, that scientists are able to speak freely about their work, and that scientific analyses are considered when the government makes decisions.” The previous Canadian government’s dubious record on science is a big ship to turn around, and it’s been on the same, dirge-like course for nearly a decade. With that in mind, here are some badass scientists that we think would be perfect for the job.
Katie Gibbs knows how to get people fired up about transparency (resist the urge to fall asleep after reading that word), which is pretty damn impressive. In 2012, the Harper government’s campaign to muzzle scientists was in full swing, and Gibbs was one of the chief organizers behind a protest that ended up swelling into thousands of angry researchers marching on Parliament Hill. A scientist by training, and a staunch advocate of government transparency by trade, Gibbs hasn’t let up on her cage-rattling since that day four years ago. In the intervening years, she’s helped to run Evidence for Democracy, an advocacy group that sprang up in the wake of the protest. Bringing her outlook and history of campaigning for transparency into the government itself would be a big move.
BRENDA PARLEE, Canada Research Chair in Social Responses to Ecological Change, University of Alberta: Parlee’s bread and butter is researching the impacts of climate change, but with a focus on aboriginal beliefs that is all too uncommon in Canadian science today. She and her team of students go out into the field to engage with indigenous communities about changes to their environment as a result of climate change—the declining populations of certain animals, for example. In 2013, she helped organize a permanent exhibit at the University of Alberta called “Elders as Scientists” to raise awareness about indigenous knowledge systems. Appointing Parlee would make aboriginal knowledge a part of the communication process between scientists, the government, and the public. Canada’s track record with our indigenous peoples has been pretty awful in nearly every regard for, well, ever, and including them and their knowledge in our science priorities would be a welcome gesture.
Alain Bourque was once a climatologist for Environment Canada, but these days he mostly specializes in handing out scientific knowledge suplexes.
Who better to take on the role of bridging the gap between science, government, and the public? He’s served as the executive director of Ouranos, a Canadian climate change think tank, since 2013, so he knows how to run an organization. He’s also somewhat of a firebrand when it comes to keeping temperatures on an even keel, which is a plus, and doesn’t hesitate to lay out the scientific consensus about climate in no uncertain terms—even when faced by government ministers. Basically, he’s got the cred and isn’t afraid to flaunt it.


News Article | November 1, 2016
Site: www.marketwired.com

EDMONTON, AB--(Marketwired - November 01, 2016) - The PCL family of companies is pleased to announce that succession transition has officially taken place for its leadership position of President and Chief Executive Officer. Dave Filipchuk is appointed the eighth President and CEO of PCL in its 110-year history. Mr. Filipchuk previously held the position of Deputy CEO, and before that was President and COO, Canadian and Australian Operations, with responsibility for the performance of PCL's Buildings and Civil Infrastructure divisions. Mr. Filipchuk has a BSc degree in civil engineering from the University of Alberta and attended the Ivey Executive program at the University of Western Ontario. He is Gold Seal certified and a member of APEGA. Mr. Filipchuk has been with PCL for 32 years and possesses a wealth of knowledge of both the company and the construction industry, having lived and worked in both Canada and the United States in PCL's buildings and civil infrastructure sectors. "I am extremely proud and excited to assume the position of President and CEO at PCL," said Mr. Filipchuk. "Guiding a company with such a storied and successful history is an opportunity I look forward to enjoying well into the future. I would like to thank Paul Douglas for his tireless work and dedication in leading PCL for the past seven years, and I congratulate him on an amazing career in construction and on his new role with our company." Mr. Douglas assumes the role of Chairman with PCL Construction's Board of Directors. "Succession planning is all about having the right people in the right place at the right time," said Mr. Douglas in talking about handing over the reins to Mr. Filipchuk. "We take succession seriously at all levels of our organization and make sure an appropriate amount of time is provided for a seamless transition to preserve continuity in our business. Dave Filipchuk has my, and the entire board of directors', full support in officially becoming the eighth CEO of this great company." During his 31 years with PCL, and apart from leading PCL to some of its most successful years to date, Mr. Douglas has received numerous accolades. Among those are recognition as Alberta's 2015 Business Person of the Year and inclusion on the list of Alberta's 50 Most Influential People for the past two years. About PCL Construction PCL is a group of independent construction companies that carries out work across Canada, the United States, the Caribbean, and in Australia. These diverse operations in the civil infrastructure, heavy industrial, and buildings markets are supported by a strategic presence in 31 major centers. Together, these companies have an annual construction volume of $8.5 billion, making PCL the largest contracting organization in Canada and one of the largest in North America. Watch us build at PCL.com.


News Article | December 7, 2016
Site: www.nature.com

The sequencing of a 10,600-year-old genome has settled a lengthy legal dispute over who should own the oldest mummy in North America — and given scientists a rare insight into early inhabitants of the Americas. The controversy centred on the ‘Spirit Cave Mummy’, a human skeleton unearthed in 1940 in northwest Nevada. The Fallon Paiute-Shoshone Tribe has long argued that it should be given the remains for reburial, whereas the US government opposed repatriation. Now, genetic analysis has proved that the skeleton is more closely related to contemporary Native Americans than to other global populations. The mummy was handed over to the tribe on 22 November. The genome of the Spirit Cave Mummy is significant because it could help to reveal how ancient humans settled the Americas, says Jennifer Raff, an anthropological geneticist at the University of Kansas in Lawrence. “It’s been a quest for a lot of geneticists to understand what the earliest peoples here looked like,” she says. The case follows the US government’s decision this year that another controversial skeleton, an 8,500-year-old human known as Kennewick Man, is Native American and qualifies for repatriation on the basis of genome sequencing. Some researchers lament such decisions because the buried skeletons are then unavailable for scientific study. But others point out that science could benefit if Native American tribes use ancient DNA to secure the return of more remains, because this may deliver long-sought data on the peopling of the region. “At least we get the knowledge before the remains are put back in the ground,” says Steven Simms, an archaeologist at Utah State University in Logan, who has studied the Spirit Cave Mummy. “We’ve got a lot of material in this country that’s been repatriated and never will be available to science.” The Spirit Cave Mummy is one of a handful of skeletons from the Americas that are more than 10,000 years old. Archaeologists Georgia and Sydney Wheeler discovered it in Nevada’s Spirit Cave in 1940. The skeleton, an adult male aged around 40 at the time of his death, was shrouded in a rabbit-skin blanket and reed mats and was wearing moccasins; he was found with the cremated or partial remains of three other individuals. The Wheelers concluded that the remains were 1,500–2,000 years old. But when radiocarbon dating in the 1990s determined that they were much older, the finds drew attention from both scientists and the Fallon Paiute-Shoshone Tribe. The tribe considers Spirit Cave to be part of its ancestral homeland and wanted the remains and artefacts. The US Native American Graves Protection and Repatriation Act (NAGPRA) mandates that remains be returned to affiliated tribes if they are deemed ‘Native American’ by biological or cultural connections. In 2000, the US government’s Bureau of Land Management (BLM), which oversees the land where the mummy was found, decided against repatriation. The tribe sued, and in 2006 a US District Court judge ordered the agency to reconsider the case, calling the BLM’s decision “arbitrary and capricious”. The mummy’s remains were stored out of view in a Nevada museum, and placed off-limits to most research, except for efforts to determine its ancestry. In a 2014 monograph based on earlier examination of the remains, US anthropologists Douglas Owsley and Richard Jantz noted that the mummy’s skull was shaped differently from those of contemporary Native Americans from the region (Kennewick Man, Texas A&M Univ. Press).
That contributed to the BLM’s decision to seek DNA analysis, says Bryan Hockett, an archaeologist at the bureau’s Nevada office in Reno. The tribe was originally opposed to genetic analysis to prove the mummy’s ancestry, says Hockett, but eventually agreed. In October 2015, Eske Willerslev, an evolutionary geneticist who specializes in ancient DNA analysis at the Natural History Museum of Denmark in Copenhagen, travelled to Nevada to collect bone and tooth samples from the mummy and other remains for DNA sequencing, after meeting with tribe members several months earlier. Willerslev’s team concluded that the Spirit Cave remains are more closely related to indigenous groups in North and South America than to any other contemporary population. The BLM gave Nature a preliminary scientific report from the team, and a 31-page memo outlining its reasoning for repatriating the remains. Willerslev declined to comment because his team’s data have not yet been published in a journal. Hockett says the genome findings offered the only unequivocal evidence that the remains are Native American. No evidence links the remains to any specific group — not even the ancient DNA — but NAGPRA allows the return of human remains to tribes that have a geographical connection. Len George, chair of the Fallon Paiute-Shoshone Tribe, did not respond to requests for comment. The genome of a 12,600-year-old skeleton from Montana, called the Anzick Child, is the only other published ancient genome from the Americas that is older than 10,000 years. The Spirit Cave remains and the Anzick Child both seem genetically closer to South American groups than to some North American groups, and the migrations behind this pattern are not yet understood, says Raff. One possibility is that both individuals lived before their local populations began spreading across regions of the Americas, says population geneticist Pontus Skoglund at Harvard Medical School in Boston, Massachusetts. Sequencing ancient DNA, which has become easier and cheaper in recent years, could help to determine the origins of many other ancient bones. Remains as old as the Spirit Cave Mummy are rare, but there are many younger remains that are not clearly affiliated to any tribe, and which might now be deemed Native American through ancient DNA sequencing and thus repatriated, scientists say. The BLM announced its intentions to repatriate the Spirit Cave remains in October and received no formal objections, says Hockett. But Jantz, the anthropologist who co-led the Spirit Cave skull study and is based at the University of Tennessee in Knoxville, laments the decision. “It’s just a sad day for science. We will lose a lot of information about the history of human occupation in the Americas as a consequence,” he says. Further molecular study of the remains could identify details about the Spirit Cave individuals — from the foods they consumed to the diseases that afflicted them. “I think Willerslev is the last guy who is going to look at these things,” Jantz adds. Dennis O’Rourke, a biological anthropologist at the University of Kansas, says he would like to see more researchers follow Willerslev’s example and work with Native American groups to decide whether to sequence ancient human remains, rebury them, or both. And Kimberley TallBear, an anthropologist who studies the views of indigenous groups on genetics at the University of Alberta in Edmonton, Canada, says researchers with O’Rourke’s attitude to studying ancient remains are becoming more common. 
She thinks it is wrong for scientists opposed to repatriation to conclude that tribes are not open to research. “Tribes do not like having a scientific world view politically shoved down their throat,” she says, “but there is interest in the science.”


News Article | October 26, 2016
Site: www.eurekalert.org

(Edmonton, AB) ProTraining, a University of Alberta spinoff company that provides mental health education and training to emergency personnel, announced today that it won a coveted Brandon Hall Group Gold Award for Excellence in the Learning Category (Best Advance in Custom Content). The prestigious awards program has been running for more than 20 years, and is often referred to as the 'Academy Awards' of the learning industry. ProTraining developed the program to increase the skills of first responders to de-escalate potentially difficult situations, and to improve interactions with individuals who may have mental health issues. The program is delivered via a combination of online and in-class training. The online course is the first step in the program, developed in partnership with Edmonton-based testing and training company Yardstick. "Our online program is based on empirical peer-reviewed research on a new approach to police training. This was conducted over a multi-year period, and the research program led to significant decreases in the use of physical force by police. There were also multiple other benefits from this program," says Dr. Peter Silverstone, a professor and psychiatrist in the University of Alberta's Department of Psychiatry, and chair of the Advisory Board at ProTraining. After the peer-reviewed training proved so successful, ProTraining was formed as a University of Alberta spinoff company with the help of TEC Edmonton and Yardstick. "We are always thrilled to hear about our clients' successes," says Jayant Kumar, TEC Edmonton vice president, Technology Management. "We'd like to congratulate ProTraining on being recognized for providing a valuable service to the first responder community." "Officers interacting with the public need specific skills to decrease the risk of negative outcomes. This online version offers a unique, engaging and interactive way to not only inform officers about useful skills, but also includes realistic video scenarios allowing them to practice these skills," says Dr. Yasmeen Krameddine, post-doctoral fellow in the Department of Psychiatry, and subject matter expert for this online course. According to Krameddine, the ProTraining course is a world-leading online program. "Winning a Brandon Hall Group Excellence Award means an organization is an elite innovator within Human Capital Management. The award signifies that the organization's work represents a leading practice in that HCM function," said Rachel Cooke, chief operating officer of Brandon Hall Group and head of the awards program. "Their achievement is also notable because of the positive impact their work in HCM has on business results. All award winners have to demonstrate a measurable benefit to the business, not just the HCM operation. That's an important distinction. Our HCM award winners are helping to transform the business." More information about the innovative program can be found at protraining.com or on the Canadian Police Knowledge Network. ProTraining is a University of Alberta spinoff company created with the help of TEC Edmonton. ProTraining provides mental health and de-escalation training courses focused on saving lives and preventing violent encounters in police interactions, using online and classroom training. Courses are developed in partnership with law enforcement, protective service officers, bus operators and security professionals in Canada and the United States, and with European police organizations. Contact information@protraining.com for custom programming.
Brandon Hall Group is an HCM research and advisory services firm that provides insights around key performance areas, including Learning and Development, Talent Management, Leadership Development, Talent Acquisition and Workforce Management. TEC Edmonton helps technology entrepreneurs accelerate their growth. In addition to being the commercialization agent for University of Alberta technologies, TEC Edmonton operates Greater Edmonton's largest accelerator for early-stage technology companies, including both university spinoffs and companies from the broader community. TEC Edmonton provides client services in four broad areas: business development, funding and finance, technology commercialization and entrepreneur development. In 2015, TEC Edmonton was identified by the Swedish University Business Incubator (UBI) Index as the 4th-best university business incubator in North America, and was also named Canadian "Incubator of the Year" at the 2014 Startup Canada Awards.


News Article | February 15, 2017
Site: www.marketwired.com

EDMONTON, ALBERTA--(Marketwired - Feb. 8, 2017) - Dalmac Energy Inc. (the "Company" or "Dalmac") (TSX VENTURE:DAL) is pleased to announce the appointment of Su Chun, of Edmonton, Alberta, as its Chief Financial Officer, effective immediately. Mr. Jonathan Gallo, the previous Chief Financial Officer of Dalmac, provided his services to Dalmac only on a contract basis and will instead be focusing his efforts on his primary accounting business. As part of Dalmac's succession plan, Ms. Chun has worked at Dalmac for nearly two years learning under the instruction of Mr. Gallo. Further, in order to ensure a smooth transition, Mr. Gallo will still be available to Dalmac for consultation on a contract basis as needed. Su Chun, CA, CPA, has served as the Controller at Dalmac Oilfield Services Inc. from June 2015 to the present. She has been an integral part of navigating Dalmac through the financial downturn in the Alberta oilfield services industry, cost restructuring at Dalmac, and implementing a new financing agreement to strengthen the operating cash flows of the Company. Ms. Chun brings over seven years of financial accounting, assurance, and management experience to Dalmac. Prior to assuming her role as Controller at Dalmac, she worked at MNP LLP, a national accounting firm, and at an international oil and gas company based out of Calgary. Ms. Chun holds a Bachelor of Commerce degree from the University of Alberta and a Chartered Professional Accountant designation. Dalmac is a diversified provider of well stimulation and fluid management services, which include fluid transfers, hot oiling, frack heating, well acidizing, tank rentals and equipment moving. Dalmac is also a key distributor/supplier of glycol- and methanol-related products. Headquartered in Edmonton with operations branches in Fox Creek, Edson, and Warburg, Dalmac has been servicing the oil and gas fields of west central Alberta for over 60 years. The Company has master service agreements (MSAs) with most of North America's leading exploration and production companies. Approximately half of Dalmac's revenue comes from recurring, fluid transfer and maintenance-related operations and the balance is derived from service-related activities such as drilling, completions and well workovers. Dalmac trades on the TSX Venture Exchange under the symbol "DAL". Additional information on the Company is available on its website at www.dalmacenergy.com and on SEDAR at sedar.com. Neither TSX Venture Exchange nor its Regulation Services Provider (as that term is defined in policies of the TSX Venture Exchange) accepts responsibility for the adequacy or accuracy of this release.


The International Association of HealthCare Professionals is pleased to welcome Donna Marie Wachowich, MD, FCFP, Family Practitioner, to their prestigious organization with her upcoming publication in The Leading Physicians of the World. Dr. Donna Marie Wachowich is a highly trained and qualified physician with extensive expertise in all facets of her work, especially family care and low-risk obstetrics. Dr. Wachowich has been in practice for 30 years and is currently serving patients within Valley View Family Practice in Calgary, Alberta, Canada. She is also affiliated with the Foothills Medical Centre. Dr. Wachowich attended the University of Alberta in Edmonton, Canada, where she graduated with her Medical Degree in 1982. Following her graduation, she completed her residency training at Queen’s University in Kingston, Ontario. Dr. Wachowich has earned the coveted title of Fellow of the College of Family Physicians of Canada. Dr. Wachowich was on the sexual response team of the Calgary Area Medical Staff Association, and speaks locally on sexual crimes and the appropriate responses. She maintains a professional membership with the Canadian College of Family Practice and the Canadian Medical Protective Association, allowing her to stay current with the latest advances in her field. Dr. Wachowich attributes her success to providing high-quality patient care and staying abreast of new scientific, medical, and technological advances. In her free time, Dr. Wachowich enjoys curling, traveling, cooking, knitting, and photography. Learn more about Dr. Wachowich by reading her upcoming publication in The Leading Physicians of the World. FindaTopDoc.com is a hub for all things medicine, featuring detailed descriptions of medical professionals across all areas of expertise, and information on thousands of healthcare topics. Each month, millions of patients use FindaTopDoc to find a doctor nearby and instantly book an appointment online or create a review. FindaTopDoc.com features each doctor’s full professional biography highlighting their achievements, experience, patient reviews and areas of expertise. A leading provider of valuable health information that helps empower patient and doctor alike, FindaTopDoc enables readers to live a happier and healthier life. For more information about FindaTopDoc, visit http://www.findatopdoc.com


DEERFIELD, Ill.--(BUSINESS WIRE)--Fortune Brands Home & Security, Inc. (NYSE: FBHS), an industry-leading home and security products company, announced that effective today, Tracey Belcourt has joined the Company as the senior vice president of global growth and development. Belcourt brings more than 17 years of experience in global strategy, mergers and acquisitions (M&A) and business development. She comes to Fortune Brands from Mondelez International, Inc., where she spent four years as the executive vice president of strategy focused on development and execution of the company’s global growth strategies. Prior to that, she spent 13 years consulting at Bain & Company, where she had an opportunity to work with global clients in the industrial products, airline and consumer products categories with a focus on strategic performance and growth. She began her career in academia at Concordia University in Montreal, Canada, then at the University of Bonn in Germany. Belcourt holds both a Ph.D. and an M.A. in economics from Queen’s University in Kingston, Ontario and a B.S. in economics and mathematics from the University of Alberta. “Tracey is a collaborative, results-oriented leader who is motivated by purpose and the ability to have a real impact on the business. She will be a great fit for our culture and we have confidence in her leadership abilities and overall approach to business,” said Chris Klein, chief executive officer, Fortune Brands. “I’m excited to welcome Tracey to our team in a key role to continue to accelerate our global growth strategy and enhance our ability to complete value-creating mergers and acquisitions.” Belcourt will partner with the executive team to identify, assess and execute opportunities to grow the business around the world in current segments, adjacencies, new segments and new geographies. She will also lead strategic planning and insights. Fortune Brands Home & Security, Inc. (NYSE: FBHS), headquartered in Deerfield, Ill., creates products and services that fulfill the dreams of homeowners and help people feel more secure. The Company’s four operating segments are Cabinets, Plumbing, Doors and Security. Its trusted brands include more than a dozen core brands under MasterBrand Cabinets; Moen, ROHL and Riobel under the Global Plumbing Group (GPG); Therma-Tru entry door systems; and Master Lock and SentrySafe security products under The Master Lock Company. Fortune Brands holds market leadership positions in all of its segments. Fortune Brands is part of the S&P 500 Index. For more information, please visit www.FBHS.com.


News Article | October 26, 2016
Site: www.theguardian.com

The latest Black Mirror series from Charlie Brooker presents, despite its transition to Netflix, another unsettling collection of future shock nightmares drawn from consumer technology and social media trends. The second episode, Playtest, has an American tourist lured to a British game development studio to test a new augmented-reality horror game that engages directly with each player’s brain via a biorobotic implant. The AI program mines the character’s darkest fears and manifests them in the real world as photorealistic graphics. Inevitably, terror and mental breakdown follow. The idea of a video game that can analyse a player’s personality and change accordingly may seem like the stuff of outlandish sci-fi to some Black Mirror viewers. But it isn’t. This could well be where game design is heading. Eight years ago, video game writer Sam Barlow had a new idea about how to scare the crap out of video game players. Working on the survival horror adventure Silent Hill: Shattered Memories, Barlow introduced a character named Dr Kaufmann, a psychotherapist whose role, ostensibly, was to evaluate the mental wellbeing of protagonist Harry Mason. But that’s not really why he was there. Dr Kaufmann’s actual role was to psychologically assess the player. At key points throughout the terrifying narrative, the game provided a questionnaire inspired by the “Big Five” personality test, a method used by academic psychologists for personality research. Players would be asked things like: Are you a private person? Do you always listen to other people’s feelings? In this way it was building a psychological profile of the player. At the same time, the system was also drawing data from how players interacted with the game world: how long they spent exploring each area before moving on; whether they strayed from clearly marked paths; whether they faced non-player characters while they talked. Every action had an effect on the narrative. “Most scenes in the game had layers of variation – in the textures and colour, the lighting and the props,” explains Barlow. “Characters also had multiple appearances and personality differences. All phone calls, voicemails and readable materials had multiple variations according to different profile slices. As you approached a door to a new room, the game was spooling in the assets, testing your profile and loading up the custom asset packages to assemble your own version.” The idea was to draw in and then unsettle the player as much as possible based on their psychological traits. Characters, monsters and environments would all be subtly changed to reflect their own fears of aggression, enclosure or darkness. Game designers have been attempting to learn, assess and react to player types since the days of Dungeons and Dragons. Richard Bartle, co-creator of the original MUD roleplaying game, formed a taxonomy of players in 1996, and his types – Achievers, Explorers, Socialisers, and Killers – have often been used by designers to try to pre-empt and entice different player types. Over the last decade, however, the concept of truly reactive “player modelling”, in which the game learns in real time from each individual player, has become an important part of academic research into artificial intelligence and machine learning. In 2004, AI researchers Georgios Yannakakis and John Hallam published a seminal paper detailing their work on Pac-Man.
They created a modified version of the popular arcade game with the ghosts controlled by an evolutionary neural network that adjusted their behaviour based on each player’s individual strategies. In the same year, PhD student Christian Thurau presented his own player modelling system that used pattern recognition and machine learning techniques to teach AI characters how to move in a game world, based on watching humans play Quake II. In short: games were beginning to watch and learn from players. Many other studies followed. In 2007, researchers at the University of Alberta’s Intelligent Reasoning Critiquing and Learning group (under Vadim Bulitko) developed PaSSAGE (Player-Specific Stories via Automatically Generated Events), an AI-based interactive storytelling system that could observe and learn from player activities in a role-playing adventure. As the game progressed, the program sorted players into five different types (based on the Robin’s Laws of Dungeons & Dragons) and then served them game events from a library of pre-written mini-missions. If they seemed to like looking for items in the game world, they were given a quest to find an object; if they liked fighting, they were given events that involved combat. That system was interesting (and is still being evolved in the department), but it relied on hand-designed set-piece events, and only had a limited grasp on who the player was. Matthew Guzdial, a PhD student at the Georgia Institute of Technology’s School of Interactive Computing, is currently working on a more adaptable evolution of this concept – a version of Nintendo’s Super Mario Bros platformer that features a neural network capable of observing player actions and building novel level designs based on this data. “We’ve successfully been able to demonstrate that the generator creates levels that match a learned play style,” says Guzdial, who collaborated with Adam Summerville from the University of California, Santa Cruz. “Put simply, if a player likes exploring, it creates levels that must be explored; if a player speed-runs, it makes levels that are suited to speed-running.” Super Mario, it turns out, is a popular test-bed for AI researchers. It’s familiar, it allows lots of different player actions in a constrained environment, and its source code is easily available. At the University of Copenhagen, AI researchers Noor Shaker, Julian Togelius and the aforementioned Yannakakis developed a slightly different experiment based on the game. This time players were asked to provide emotional feedback on each play-through, giving scores for fun, challenge and frustration; this input was combined with data drawn from watching them play (how often the player jumped, ran or died, how many enemies they killed, etc), and the AI program constructed new levels as a result. Over the last decade, Yannakakis and colleagues at the University of Malta’s Institute of Digital Games, where he currently works as an associate professor, have explored various forms of machine learning to estimate a player’s behavioural, cognitive and emotional patterns during play. They have combined deep-learning algorithms, which build general models of player experience from massive datasets, with sequence-mining algorithms, which learn from sequences of player actions (like continually choosing health pick-ups over ammo).
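A toy sketch makes the PaSSAGE idea concrete: accumulate evidence for each player type from logged actions, then serve the pre-written event that suits the leading type. The five type names follow the Robin’s Laws labels cited above; the actions, evidence weights and event library are invented, and a real system would learn these rather than hard-code them:

    # Toy PaSSAGE-style player modelling (Python); weights and events are invented.
    from collections import defaultdict

    PLAYER_TYPES = ["fighter", "power_gamer", "tactician", "storyteller", "method_actor"]

    EVIDENCE = {  # how strongly each logged action suggests each type
        "attacked_guard": {"fighter": 2},
        "looted_chest":   {"power_gamer": 2},
        "scouted_ahead":  {"tactician": 1},
        "talked_to_npc":  {"storyteller": 1, "method_actor": 1},
        "stayed_in_role": {"method_actor": 2},
    }

    EVENTS = {  # one pre-written mini-mission per player type
        "fighter":      "an ambush on the forest road",
        "power_gamer":  "a rumour of a hidden relic",
        "tactician":    "a fort with three ways in",
        "storyteller":  "a stranger with a long tale",
        "method_actor": "an oath the player must keep",
    }

    def update_model(model, action):
        """Fold one observed action into the running player-type scores."""
        for ptype, weight in EVIDENCE.get(action, {}).items():
            model[ptype] += weight

    def next_event(model):
        """Serve the event matching the strongest player type so far."""
        return EVENTS[max(PLAYER_TYPES, key=lambda t: model[t])]

    model = defaultdict(int)
    for action in ["talked_to_npc", "looted_chest", "looted_chest", "attacked_guard"]:
        update_model(model, action)
    print(next_event(model))  # -> "a rumour of a hidden relic"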
Yannakakis and colleagues have also explored preference learning, which allows an AI system to learn from player choices between particular content types (for example, preferring levels with lots of jumping challenges over those with lots of enemies). Not only have they used behavioural data gathered during play, they’ve also used age, gender and other player details to inform their systems. Their aim isn’t just to make interesting games, however – the AI techniques they’re exploring may well be used in educational software or as diagnostic or treatment tools in mental health care. “Given a good estimate of a player’s experience, AI algorithms can automatically – or semi-automatically – procedurally generate aspects of a game such as levels, maps, audio, visuals, stories, or even game rules,” says Yannakakis. “The estimate can be used to help designers shape a better experience for the player. By tracking their preferences, goals and styles during the design process, AI can assist and inspire designers to create better, more novel, more surprising game content.” For Julian Togelius, one of the foremost experts on AI games research, now based at NYU, the next step is active player modelling – he envisages an AI level designer that doesn’t just react to inputs, but is actually curious about the player and their preferences, and wants to find out more. “There is this machine learning technique called active learning, where the learning algorithm chooses which training examples to work on itself,” he explains. “Using this technique, you could actually have a game that chooses in what way to explore you, the player: the game is curious about you and wants to find out more, therefore serving you situations where it does not know what you will do. That’s something that will be interesting for the player too, because the game has a reasonably good model of what you’re capable of and will create something that’s novel and interesting to you.” Of course, in many ways we’re already seeing this kind of player modelling happening in the conventional games industry. Valve’s critically acclaimed zombie shooter Left 4 Dead features an AI director that varies the type and threat level of undead enemies based on player activities in the game so far. With the arrival of free-to-play digital games on social media platforms and mobile phones, we also saw the emergence of a whole new game design ethos based on studying player data and iterating games accordingly. In its prime, Zynga was famed for its huge data science department that watched how players interacted with titles such as Farmville and Mafia Wars, worked out where they were getting bored or frustrated, and tweaked the gameplay to iron out those kinks. The analysis of player metrics quickly became a business in itself, with companies such as Quantic Foundry and GameAnalytics set up to help smartphone developers garner information from the activities of players. But these systems are commercially motivated and based around making game design bets on the activities of thousands of players – they’re not about actually understanding players on an individual emotional level. That concept is definitely coming. Some AI researchers are shifting away from machine learning projects that watch what players do and toward systems that work out what they feel.
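The pairwise preference learning mentioned above is also easy to sketch: given records of which of two levels a player preferred, fit a utility function over level features so that preferred items score higher (a Bradley-Terry-style model). The features, training pairs and learning rate below are invented for illustration:

    import numpy as np

    # Each level is described by invented features: [jump_challenges, enemies, secrets].
    # Each pair records that the player preferred the first level over the second.
    pairs = [
        (np.array([8.0, 1.0, 3.0]), np.array([2.0, 7.0, 1.0])),
        (np.array([6.0, 2.0, 4.0]), np.array([3.0, 6.0, 0.0])),
        (np.array([7.0, 0.0, 2.0]), np.array([1.0, 5.0, 2.0])),
    ]

    w = np.zeros(3)   # linear utility weights
    lr = 0.1
    for _ in range(200):
        for preferred, rejected in pairs:
            diff = preferred - rejected
            p = 1.0 / (1.0 + np.exp(-w @ diff))  # P(preferred beats rejected)
            w += lr * (1.0 - p) * diff           # gradient ascent on log-likelihood
    print("learned utility weights:", w.round(2))
    # A positive weight on jump_challenges says: generate more jumping puzzles.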
It’s possible to get an idea about a player’s excitement, engagement or frustration from analysing certain in-game actions – is the player hammering the jump button, are they avoiding or engaging enemies, are they moving slowly or quickly? Simple actions can give away a lot about the player’s state of mind. In 2014, Julian Togelius found he was able to make informed assumptions about key character traits of test subjects by watching how they played Minecraft. “We asked them questions about their life motives, then analysed the logs from the game,” he says. “Traits like independence and curiosity very strongly correlated with lots of things that happened in the game.” So could an AI program study that data and change a game to tweak those feelings? “The major challenge is to relate content and [player] behaviour to emotion,” says Noor Shaker, a researcher at the University of Copenhagen who completed a PhD in player-driven procedural content generation. “Ultimately, we want to be able to identify the aspects of games that have an impact on how players experience them.” Shaker is using a variety of methods for this purpose – neuroevolution, random decision forests and multivariate adaptive spline models – machine learning toolsets that gradually learn from and adapt to different player behaviours. “My work recently revolves around building more accurate models of experience, implementing interactive tools that allow us to visualise the expressive space of players’ emotions,” says Shaker. “Most of the work I have seen so far, such as adaptation in Left 4 Dead, focuses on game difficulty and adjusting the behaviour of the NPCs according to relatively simple metrics of player’s behaviour. I believe there are many other aspects to experience than difficulty and there are many more elements that can be considered to manipulate player experience than the behaviour of the NPCs. Recent research has shown that emotions such as frustration, engagement and surprise can be detected and modelled by machine learning methods.”
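A toy version of reading player state from raw input logs extracts exactly the kinds of signals mentioned above - jump-button hammering, deaths, aggression. The event names, thresholds and the frustration heuristic are invented for illustration; real systems learn such mappings from labelled data:

    # Toy affect signals from an input log (Python); thresholds are invented.
    def affect_features(log, duration_s):
        """log: list of event names, e.g. 'jump', 'death', 'enemy_engaged'."""
        return {
            "jump_rate":  log.count("jump") / duration_s,    # button hammering?
            "death_rate": log.count("death") / duration_s,   # dying often?
            "aggression": log.count("enemy_engaged") / max(1, len(log)),
        }

    def looks_frustrated(f):
        """Invented heuristic: lots of jumping plus frequent deaths."""
        return f["jump_rate"] > 1.0 and f["death_rate"] > 0.05

    log = ["jump"] * 75 + ["death"] * 5 + ["enemy_engaged"] * 3
    f = affect_features(log, duration_s=60.0)
    print(f, "-> frustrated?", looks_frustrated(f))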
These data games are just the beginning, argues Michael Cook, an AI researcher at Falmouth University. “Right now they’re interested in huge, open data platforms like Wikipedia or government statistics,” he says. “But you can imagine in the future a game which takes your Facebook feed instead of a Wikipedia database, and populates the game world with people you know, the things they like doing, the places they visit and the relationships people have with one another. Whether or not that sounds like a scary idea is another question, but I can definitely see it as a natural extension of [the data game concept]. We already open our lives up to so many companies every day, we might as well get a bespoke, commissioned video game out of the deal.”

Copenhagen’s Noor Shaker points out that privacy issues are a bottleneck with social media data – but then it’s a bottleneck that Google and co. have deftly circumvented. “Once we have the data, and depending on the source and type, natural language processing methods such as sentiment analysis could be used to profile and cluster players according to their opinion about different social, cultural or political matters,” she says. “Statistics could also be collected about games, books, or songs they already purchased, liked or expressed opinion about. All this information could feed powerful machine learning methods such as neural networks or standard classification techniques that learn profiles, discover similarity or predict personality traits.”

So now we’re getting closer to the Black Mirror concept. Imagine something like The Sims, where the pictures on your apartment walls are photos from your Facebook wall, where neighbours are your real-life friends. Or, on a darker tangent, imagine a horror adventure that knows about your relationships, your Twitter arguments, your political views; imagine a horror game that knows what you watch on YouTube. “It is only natural to expect that game data can be fused with social media activity to better profile players and provide a better gaming experience,” says Yannakakis.

Researchers can envisage a game that builds a detailed psychological and social profile of a player, from both their in-game actions and online footprint – but there’s still a gap between this and the horror game posited in Black Mirror, which performs an invasive neurological hack on the player. Brain-computer interfacing of this sort is still the stuff of bleeding-edge medical research and science fiction. However, we’re already seeing the use of devices – both in research and in consumer production – that can measure physiological states such as skin conductance and heart-rate variability to assess a player’s emotional reaction to game content. Konami’s 1997 arcade dating game Oshiete Your Heart, for example, featured a sensor that measured the player’s heart rate and skin conductance to influence the outcome of each romantic liaison. Nevermind, released by Flying Mollusk last year, is a biofeedback-enhanced horror adventure that increases the level of challenge based on the player’s stress readings. Yannakakis and other researchers are also using off-the-shelf smart camera technologies like Intel RealSense and the emotion recognition software Affectiva to track a player’s facial expressions and monitor their heartbeat – both indicators of a variety of emotions. Noor Shaker has studied how tracking a player’s head pose while they take part in a game can tell us about the experience they’re having.
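The profiling pipeline Shaker outlines above could be sketched in a few lines: score a player’s public posts with a sentiment lexicon, combine the scores with purchase statistics, and cluster players into profiles. Everything below (the toy lexicon, the posts, the feature choices) is invented for illustration, not drawn from her work:

```python
# Sketch of sentiment-plus-purchases player profiling via clustering.
# Lexicon, posts and features are toy stand-ins for illustration only.
import numpy as np
from sklearn.cluster import KMeans

LEXICON = {"love": 1, "great": 1, "fun": 1, "hate": -1, "boring": -1, "scary": -1}

def sentiment(text):
    # Crude lexicon score: mean signed word value over the post.
    words = text.lower().split()
    return sum(LEXICON.get(w, 0) for w in words) / max(len(words), 1)

players = [
    {"posts": ["love this game", "great level design"], "horror_bought": 0, "puzzle_bought": 5},
    {"posts": ["so scary, hate it"],                     "horror_bought": 1, "puzzle_bought": 0},
    {"posts": ["boring puzzles", "fun shooter though"],  "horror_bought": 3, "puzzle_bought": 1},
]
X = np.array([
    [np.mean([sentiment(p) for p in pl["posts"]]), pl["horror_bought"], pl["puzzle_bought"]]
    for pl in players
])
profiles = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(profiles)  # cluster id per player: a crude "profile"
```

A production system would of course use far richer features and a trained sentiment model, but the shape of the pipeline is the same.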
Right now, these physiological inputs are mostly confined to research departments, but that may change. Valve, the company behind games like Portal and Half-Life and the HTC Vive VR headset, has been experimenting with biometric inputs for years. Founder Gabe Newell predicted in 2011 that we would one day see game controllers with built-in heart-rate and skin response detectors. A game supported by these sensors could easily present each player with different items, concepts or options to gauge a heart or skin response, adapting content on the fly depending on the reaction. Imagine a VR headset with sensors that measure skin heat response and heart rate. People are already hacking this sort of thing together (a toy sketch of such a loop appears at the end of this article).

This all sounds terrifying, but it needn’t be used in the way the Black Mirror episode does. There are benevolent, perhaps even beautiful, possibilities in the idea of games learning from players. One company looking into this potential is Mobius AI, a New York and UK-based startup developing a cognitive AI engine for developers. Co-founder Dr Mitu Khandaker-Kokoris is more interested in the potential relationships that could occur between players and AI characters who have the ability to identify and learn from individual players. “What games really lack is that serendipitous kind of connection we feel when we meet someone in the real world that we get along with,” she says. “One of the ways this will happen is through games featuring AI-driven characters who truly understand us, and what we as individual players are expressing through the things we are actually saying and doing in the game.

“Imagine, for instance, that you were the only one in the world who an AI-driven character could trust fully because the game could infer that you have similar personalities. This character could then take you down a unique experience, which only you have access to. It’s a fascinating problem space, and a great challenge to think about how games could truly work you out – or rather, who you are pretending to be – by paying attention to not only what you’re saying, but how you’re saying it.”

Interestingly, Khandaker-Kokoris, who is also working on the procedural storytelling game Little Invasion Tales, is more skeptical about the role of personal data mining in the future of game design. “We play games, often, to be someone who would have a different online history than our own,” she says. “But then, we are partly always ourselves, too. We have to work out what it would mean in terms of role-play and the idea of a permeable magic circle.”

What’s certain, though, is that game creators and AI researchers are moving in the same direction: toward systems that provide content based on individual player preferences and activities. Games now cost many millions to produce – the assumption that enough players will react favourably to a single narrative, and a single experience, is becoming prohibitively risky. We live in an age of behavioural modelling and data science, an age in which Amazon is capable of building personalised video adverts in real-time based on viewer preferences mined from the web. In this context, games that know you – that learn from you, that are curious about you – are almost inevitable.
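As promised above, here is a toy sketch of that biofeedback loop in the spirit of Nevermind: smooth a raw heart-rate signal, compare it with a resting baseline, and scale the challenge accordingly. The sensor values, baseline and bounds are made up for illustration and are not from any shipped game:

```python
# Toy biofeedback-driven difficulty: elevated heart rate eases the game,
# a calm reading nudges it harder. All numbers are illustrative.

def adapt_difficulty(hr_samples, resting_hr=65.0, alpha=0.2):
    """Return a difficulty multiplier in [0.5, 1.5] from raw heart-rate
    samples, using an exponential moving average to suppress sensor noise."""
    smoothed = hr_samples[0]
    for hr in hr_samples[1:]:
        smoothed = alpha * hr + (1 - alpha) * smoothed
    arousal = (smoothed - resting_hr) / resting_hr  # fractional elevation
    # Stressed player -> ease off; calm player -> push a little harder.
    return max(0.5, min(1.5, 1.0 - arousal))

print(adapt_difficulty([70, 82, 95, 101, 99]))  # elevated HR -> easier game
print(adapt_difficulty([63, 64, 66, 65, 64]))   # calm -> slightly harder
```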


News Article | January 13, 2016
Site: www.reuters.com

Coal is transported via conveyor belt to the coal-fired Jim Bridger Power Plant in Wyoming.

WASHINGTON Global emissions of mercury from manmade sources fell 30 percent from 1990 to 2010, in part from decreasing use of coal, the U.S. Geological Survey (USGS) reported on Wednesday. The greatest decline of the toxic pollutant was in Europe and North America, offsetting increases in Asia, the agency said, citing an international study. The findings challenge longstanding assumptions on emission trends and show that local and regional efforts can have a major impact, it said. "This is great news for focused efforts on reducing exposure of fish, wildlife and humans to toxic mercury," said David Krabbenhoft, a USGS scientist and one of the study’s co-authors. A metal that poses health risks, mercury can be converted into a gas during industrial activities as well as during natural events such as volcanic eruptions. The study was carried out by the USGS, Harvard University, China's Peking University, Germany's Max Planck Institute for Chemistry and the University of Alberta in Canada. It was published in the Proceedings of the National Academy of Sciences. The analysis attributed the drop to the phase-out of mercury in many commercial products and to controls installed at coal-fired power plants that capture mercury from the coal being burned. Many power plants also have switched to natural gas from coal, the USGS said.


NEW YORK, March 02, 2017 (GLOBE NEWSWIRE) -- Tonix Pharmaceuticals Holding Corp. (Nasdaq:TNXP) (Tonix), a company that is developing innovative pharmaceutical products to address public health challenges, working with researchers from the University of Alberta, a leading Canadian research university, today announced the successful synthesis of a potential smallpox-preventing vaccine. This vaccine candidate, TNX-801, is a live form of horsepox virus (HPXV) that has been demonstrated to have protective vaccine activity in mice.

“Presently, the safety concerns of existing smallpox-preventing vaccines outweigh the potential benefit to provide immunization of first responders or the general public. By developing TNX-801 as a horsepox vaccine to prevent smallpox infection, we hope to have a safer vaccine to protect against smallpox than is currently available,” stated Seth Lederman, M.D., president and chief executive officer of Tonix. “Vaccines are a critical component of the infrastructure of global public health. Vaccination protects those who are vaccinated and also those who are not vaccinated, by decreasing the risk of contagion.”

“Our goal is to improve on current methods that protect the public from possible viral outbreaks,” said Professor David Evans, Ph.D., FCAHS, Professor and Vice-Dean (Research), Faculty of Medicine and Dentistry at the University of Alberta, in Edmonton, Alberta, Canada, and principal investigator of the TNX-801 research project. HPXV was synthesized by Professor Evans and Research Associate Ryan Noyce, Ph.D., at the University of Alberta, with Dr. Lederman as co-investigator of the research and co-inventor of the TNX-801 patent. Under their research and development agreement, Tonix wholly owns the synthesized HPXV virus stock and related sequences. Professor Evans and Dr. Noyce also demonstrated that HPXV has protective vaccine activity in mice, using a model of lethal vaccinia infection. Vaccine manufacturing activities have been initiated by Tonix to support further nonclinical testing of TNX-801. Dr. Lederman stated, “Our research collaboration is dedicated to creating tools and innovative products that better protect public health.”

Horsepox, an equine disease caused by a virus and characterized by eruptions in the mouth and on the skin, is believed to be eradicated. No true HPXV outbreaks have been reported since 1976, at which time the United States Department of Agriculture obtained the viral sample used for the sequence published in 2006 that allowed the synthesis of TNX-801. In 1798, Dr. Edward Jenner, the English physician and scientist, speculated that smallpox is a human version of pox diseases in animals. Jenner had a strong suspicion that his vaccine began as a pox disease in horses and went on to show that it could be used to vaccinate against smallpox. Smallpox was eradicated as a result, and no cases of naturally occurring smallpox have been reported since 1977. Jenner’s vaccine appears to have evolved considerably in the vaccinia stocks maintained in different countries around the world, since vaccinia was mostly selected for growth and production. Being able to provide safe and effective smallpox-preventing vaccines remains important and necessary for addressing and protecting public health.

About the Material Threat Medical Countermeasures Provisions in the 21st Century Cures Act

In 2016, the 21st Century Cures Act (Act) was signed into law to support ongoing biomedical innovation.
One part of the Act, Section 3086, is aimed at “Encouraging Treatments for Agents that Present a National Security Threat.” This section of the Act created a new priority review voucher program for “material threat medical countermeasures.” The Act defines such countermeasures as drugs or vaccines intended to treat biological, chemical, radiological, or nuclear agents that present a national security threat, or to treat harm from a condition that may be caused by administering a drug or biological product against such an agent. The priority review vouchers are awarded at the time of FDA approval, are fully transferable, and may be sold to other companies to be used for priority review of any New Drug Application (NDA) or Biologic Licensing Application (BLA).

Tonix is developing innovative pharmaceutical products to address public health challenges, with TNX-102 SL in Phase 3 development for posttraumatic stress disorder (PTSD). TNX-102 SL is designed for bedtime use and is believed to improve overall PTSD symptoms by improving sleep quality in PTSD patients. PTSD is a serious condition characterized by chronic disability, inadequate treatment options, especially for military-related PTSD, and overall high utilization of healthcare services, creating significant economic burden. TNX-102 SL was recently granted Breakthrough Therapy designation by the FDA for the treatment of PTSD. Other development efforts include TNX-601, a clinical candidate at the Pre-IND (Investigational New Drug) application stage, designed for daytime use for the treatment of PTSD, and TNX-801, a potential smallpox-preventing vaccine. *TNX-102 SL (cyclobenzaprine HCl sublingual tablets) is an investigational new drug and has not been approved for any indication. This press release and further information about Tonix can be found at www.tonixpharma.com.

Certain statements in this press release are forward-looking within the meaning of the Private Securities Litigation Reform Act of 1995. These statements may be identified by the use of forward-looking words such as “anticipate,” “believe,” “forecast,” “estimate,” “expect,” and “intend,” among others. These forward-looking statements are based on Tonix's current expectations and actual results could differ materially. There are a number of factors that could cause actual events to differ materially from those indicated by such forward-looking statements. These factors include, but are not limited to, substantial competition; our need for additional financing; uncertainties of patent protection and litigation; uncertainties of government or third party payor reimbursement; limited research and development efforts and dependence upon third parties; and risks related to failure to obtain FDA clearances or approvals and noncompliance with FDA regulations. As with any pharmaceutical under development, there are significant risks in the development, regulatory approval and commercialization of new products. Tonix does not undertake an obligation to update or revise any forward-looking statement. Investors should read the risk factors set forth in the Annual Report on Form 10-K for the year ended December 31, 2015, as filed with the Securities and Exchange Commission (the “SEC”) on March 3, 2016, and future periodic reports filed with the SEC on or after the date hereof. All of Tonix's forward-looking statements are expressly qualified by all such risk factors and other cautionary statements. The information set forth herein speaks only as of the date hereof.


News Article | October 28, 2016
Site: www.prfire.com

New Film “Microbirth” Reveals the Microscopic Secrets of Childbirth [http://microbirth.com] – A new documentary “MICROBIRTH” warns that how our children are born could have serious repercussions for their lifelong health. “Microbirth” looks at childbirth in a whole new way: through the lens of a microscope. Featuring Ivy League scientists, the film investigates the latest research that is starting to indicate modern birth practices could be interfering with critical biological processes. This could be making our children more susceptible to disease later in life.

Recent population studies have shown babies born by Caesarean section have approximately a 20% increased risk of developing asthma, a 20% increased risk of developing type 1 diabetes, a 20% increased risk of obesity, and slightly smaller increases in gastro-intestinal conditions like Crohn’s disease or coeliac disease. These conditions are all linked to the immune system.

In the film, scientists hypothesise that Caesarean section could be interfering with “the seeding of the baby’s microbiome”, an important microbiological process in which bacteria are transferred from mother to baby in the birth canal. As a consequence, the baby’s immune system may not develop to its full potential. Another hypothesis is that the stresses and hormones associated with natural birth could switch on or off certain genes related to the immune system and metabolism. If a baby is born by C-section, this might affect these epigenetic processes.

Dr Rodney R Dietert, Professor of Immunotoxicology at Cornell University, says, “Over the past 20-30 years, we’ve seen dramatic increases in childhood asthma, type 1 diabetes, coeliac disease, childhood obesity. We’ve also seen increases in Caesarean delivery. Does Caesarean cause these conditions? No. What Caesarean does is not allow the baby to be seeded with the microbes. The immune system doesn’t mature. And the metabolism changes. It’s the immune dysfunction and the changes in metabolism that we now know contribute to those diseases and conditions.”

Dr Matthew Hyde, Research Associate of Neonatal Medicine, Imperial College London, says, “We are increasingly seeing a world out there with what is really a public health time-bomb waiting to go off. And the research we are doing suggests it is only going to get worse, generation on generation. So tomorrow’s generation really is on the edge of the precipice unless we can begin to do something about it.”

The film’s co-director Toni Harman says, “The very latest scientific research is starting to indicate that the microscopic processes happening during childbirth could be critical for the life-long health of the baby. We are hoping “Microbirth” raises awareness of the importance of “seeding the microbiome” for all babies, whether born naturally or by C-section, to give all children the best chance of a healthy life. This could be an exciting opportunity to improve health across populations. And it all starts at birth.”

“MICROBIRTH” is premiering with hundreds of simultaneous grass-roots public screenings around the world on Saturday 20th September 2014. http://microbirth.com/events – High-res images and academics available for interview upon request. – Short synopsis of “Microbirth”: “Microbirth” is a new sixty-minute documentary looking at birth in a whole new way: through the lens of a microscope. Investigating the latest scientific research, the film reveals how the way we give birth could impact the lifelong health of our children.
http://microbirth.com – “Microbirth” is an independent production by Alto Films Ltd. The film has been produced and directed by British filmmaking couple Toni Harman and Alex Wakeford. Their previous film “Freedom For Birth” premiered in over 1,100 public screenings in 50 countries in September 2012. – “Microbirth” will premiere at grass-roots public screenings around the world on Saturday 20th September 2014. The film will then be represented for international broadcast sales as well as being available via online platforms. For a full list of screenings, please visit: http://microbirth.com/events – For more information about the film, please visit http://microbirth.com – “Microbirth” includes the following scientists and academics: RODNEY DIETERT, Professor of Immunotoxicology, Cornell University; MARTIN BLASER, Director of the Human Microbiome Program & Professor of Translational Medicine, New York University; MARIA GLORIA DOMINGUEZ BELLO, Associate Professor, Department of Medicine, New York University; PHILIP STEER, Emeritus Professor of Obstetrics, Imperial College London; NEENA MODI, Professor of Neonatal Medicine, Imperial College London; MATTHEW HYDE, Research Associate in the Section of Neonatal Medicine, Imperial College London; SUE CARTER, Professor, Behavioral Neurobiologist, University of North Carolina, Chapel Hill; ALEECA BELL, Assistant Professor, Dept of Women, Children and Family Health Science, University of Illinois at Chicago; STEFAN ELBE, Professor of International Relations, University of Sussex and Director of Centre for Global Health Policy; ANITA KOZYRSKYJ, Professor, Department of Pediatrics, University of Alberta and Co-Principal Investigator, Synergy in Microbiota Research (SyMBIOTA); JACQUELYN TAYLOR, Associate Professor of Nursing, Yale University; HANNAH DAHLEN, Professor of Midwifery, University of Western Sydney; LESLEY PAGE, Professor of Midwifery, King’s College London and President, Royal College of Midwives


News Article | February 16, 2017
Site: www.spie.org

From the SPIE Photonics West Show Daily: The first-ever neurophotonics plenary session at Photonics West featured 10 rapid-fire presentations covering the broad spectrum of current neurophotonics R&D.

Following in the footsteps of the popular BiOS Hot Topics sessions, the first-ever neurophotonics plenary session at Photonics West this year featured 10 rapid-fire presentations covering the broad spectrum of neurophotonics R&D currently taking place worldwide. "There is a strong focus on developing the technologies to dramatically impact our understanding of how the brain works," said David Boas, who moderated the session and is editor-in-chief of SPIE's Neurophotonics journal.

One of the initial challenges has been to find new ways to measure tens of thousands of neurons simultaneously. This requires taking an interdisciplinary approach to technology development that brings together neuroscientists, engineers, physicists, and clinical researchers. It also prompted SPIE to add a technology application track on the brain this year. "SPIE recognized the need to bring together all the different groups in this field to get an overview of the many neurophotonics activities going on," Boas said. The neurophotonics plenary session showcased the diversity of these research efforts, from genetically encoded indicators of neuronal activity to 3-photon microscopy for deep brain imaging, chemical sectioning for high throughput brain imaging, and mapping functional connections in the brain.

"We need to step back and think about all of these important methods and the larger picture," said Rafael Yuste, professor of neuroscience at Columbia University and a pioneer in the development of optical methods for brain research. His presentation covered novel neurotechnologies and their impact on science, medicine, and society. "Why don't we already understand the brain?" Yuste asked. "People say it's just too complicated, but I believe the reason ... is that we don't have the right method yet. We do have methods that allow us to see the entire activity of the brain, but not with enough resolution for a single neuron. We need to be able to record from inside the neuron and capture every single spike in every neuron in brain circuits."

Here are highlights from other plenary talks:

Taking a cue from Nobel Laureate Roger Tsien, a pioneer in the field of engineering proteins for neuroscience, Canadian researchers at the University of Alberta are working to develop new kinds of protein indicators to study neuronal activity, noted the university's Robert Campbell. While early calcium indicators were synthetic tools, the Campbell Lab is working on genetically encoded proteins, taking a fluorescent protein and turning it into a calcium indicator, "a proxy for neuronal activity," Campbell said. Most recently, they have developed FlicR1, a new type of red fluorescent voltage indicator that can be used to image spontaneous activity in neurons. "We are very optimistic about this new indicator," he said.

Optical detection of spatial-temporal correlations in whole brain activity

Studying these types of correlations is "very important because morphology and functionality in the brain are tightly correlated to each other," said Francesco Pavone of Università degli Studi di Firenze in Italy. His group is taking a multi-modality approach in mouse models to study brain rehabilitation following a stroke.
They are using light-sheet microscopy to look at vasculature remodeling, two-photon imaging to study structural plasticity, and wide-field meso-scale imaging to evaluate functional plasticity. "We would like to study at all brain levels the map of all activated cells," Pavone said.

"Do we have the technology to develop multi-cell, multiplane optogenetics with millisecond temporal resolution and single cell precision?" asked Valentina Emiliani, director of the Neurophotonics Laboratory at University Paris Descartes. Her lab is working with computer-generated holography, spatial light modulators (SLMs), and endoscopy to control the activity of a single neuronal cell. "We have been able to achieve very robust photostimulation of a cell while the mice were freely moving, with nice spatial resolution," she said.

Peter So, professor of mechanical and biological engineering at Massachusetts Institute of Technology, described his group's work using 3D holographic excitation for targeted scanning as a way to study and map synaptic locations in the brain. "Neurons generate responses from many synaptic inputs, and we found that there are over 10,000 synaptic locations we would like to look at in parallel and map using synaptic coordinates to map activity," he said.

"Three-photon has vastly improved the signal-to-background ratio for deep imaging in non-sparsely labeled brain," said Cornell University's Chris Xu. By combining a long wavelength (1300-1700 nm, the optimum spectral windows for deep imaging) with high excitation, Xu said researchers are making new inroads into deep imaging of brain tissue. Three-photon microscopy is also valuable for structural imaging and for imaging brain activity "in an entire mouse cortical column," Xu added.

Mapping functional connections in the mouse brain for understanding and treating disease

Mapping brain function is typically performed using task-based approaches to relate brain topography to function, noted Adam Bauer of Washington University School of Medicine. "But we want to be able to help patients who are incapable of performing tasks, such as infants and those with impairments," he said. For this reason, the lab has developed the functional connectivity optical intrinsic signal (fcOIS) imaging system to study mouse models of Alzheimer's, functional connectivity following focal ischemia, and to map cell-specific connectivity in awake mice.

Maria Angela Franceschini of the Athinoula A. Martinos Center for Biomedical Imaging described her group's work developing MetaOX, a tissue oxygen consumption monitor. The instrument has been tested in neonatal intensive care units to monitor hypoxic ischemic injury and therapeutic hypothermia. It uses frequency-domain near infrared spectroscopy to acquire quantitative measurements of hemoglobin concentration and oxygenation, and diffuse correlation spectroscopy to create an index of blood flow. The device is also being evaluated in Africa to study the effects of malnutrition on brain development, and in Uganda to study hydrocephalus outcomes in newborns.

Shaoqun Zeng of the Wuhan National Lab for Optoelectronics in China outlined his group's work using chemical sectioning for high-throughput fluorescence imaging of a whole mouse brain at synaptic resolution. The goal is to systematically and automatically obtain a complete morphology of individual neurons.
Opportunities and priorities in neurophotonics: perspectives from the NIH

Edmund Talley of the US National Institutes of Health shared his experiences with the US BRAIN Initiative, which is slated to receive more than $430 million in the 2017 federal budget, plus $1.6 billion in dedicated funds through 2026 via the 21st Century Cures Act passed in December 2016. "There is some very serious investment in neurotechnologies to understand how the mind works, and there is bipartisan political support," Talley said. "Multiple federal agencies are funding this."

Photonics West 2017, 28 January through 2 February at the Moscone Center, encompassed more than 4700 presentations on light-based technologies across more than 95 conferences. It was also the venue for dozens of technical courses for professional development, the Prism Awards for Photonics Innovation, the SPIE Startup Challenge, a two-day job fair, two major exhibitions, and a diverse business program with more than 25 events. SPIE Photonics West 2018 will run 27 January through 1 February at Moscone Center.


REDONDO BEACH, CA / ACCESSWIRE / December 13, 2016 / BioLargo, Inc. (OTCQB: BLGO), owner and developer of the breakthrough AOS (Advanced Oxidation System), a low-energy, high-efficiency clean water technology, announced the start of a relationship with Chicago Bridge & Iron Company N.V. (NYSE: CBI). According to the press release and a number of recent interviews with BioLargo's President & CEO, Dennis P. Calvert, the new relationship was formed to support the commercialization of BioLargo's proprietary technology and to provide independent performance verification. BioLargo also reports the AOS has been proven to disinfect and decontaminate water better, faster and at a lower cost than any other competing technology. Based on the breadth and significance of the technical performance claims for its AOS, BioLargo has a broad range of commercial opportunities for large industrial applications that must contend with water, such as maritime ballast water management systems, wastewater treatment, environmental remediation, food safety, oil & gas, mining, and agriculture. Its future uses also promise to impact the drinking water industry, including municipal, home use, and emerging nations.

The company is also busy commercializing its new "CupriDyne Clean", an industrial odor control product launched last May. The company reports that the product is so effective and low-cost that it is gaining rapid traction through trials with leaders within the waste handling industry and that it has had some early sales. Management believes sales will continue to climb as they finalize supplier agreements with large multi-location customer accounts. CupriDyne Clean may also have an important role to play in industries that contend with volatile compounds like hydrogen sulfide (H2S) that impact air quality and safety.

Dennis P. Calvert, President & CEO of BioLargo, commented, "All of our technologies at BioLargo can serve a wide array of industrial customers that want clean water and clean air. Our mission to 'Make Life Better' includes helping industry tackle operational challenges cost effectively. That intersection of service is likely where our new relationship with CB&I will shine the brightest and we look forward to working with the exceptional team at CB&I to serve industry."

With more than 40,000 employees, $13 billion in annual revenue and over $20 billion in future contracts, CB&I is a world-leading engineering, procurement, fabrication, and construction company, and a provider of environmental and infrastructure services. CB&I builds oil refineries, liquefied natural gas terminals, wastewater treatment plants, offshore platforms, and power plants. CB&I is also the world's largest tank construction company and builds tanks for the oil & gas, mining, water, and wastewater industries. The company also remediates hazardous waste problems. Clean water and clean air are at the heart of many of the industries served by CB&I and BioLargo's technologies. Details in the first announcement were slim. This news sends notice to the investment world and to industry that BioLargo's technologies can have an important role to play in helping solve air and water contamination problems in a safe, effective and affordable way. Calvert has been quick to point out that the current version of the AOS has been engineered to serve entry-level clients and that important scale-up work is required to serve very large-scale industrial clients.
BioLargo Water's research team recently showcased the first pre-commercial prototype of its AOS water treatment system, billed as the lowest cost and highest impact scalable clean water technology in the world. By combining a cutting-edge carbon matrix, advanced iodine chemistry, and electrolysis, the technology rapidly and inexpensively eliminates bacteria and chemical contaminants in water without leaving residual toxins. University of Alberta researchers, in collaboration with BioLargo Water scientists, have confirmed test results that validate the AOS achieves unprecedented rates of disinfection, eliminating infectious biological pathogens such as Salmonella, Listeria and E. coli. The AOS has also been proven effective in oxidizing and removing hard-to-manage soluble organic acids, aromatic compounds, and solvents faster than existing technologies and with very little input energy. Test results also point to its extremely high oxidation potential, positioning it to tackle a long "watch list" of contaminants identified by the EPA. The company reports that future generations of the AOS will include the extraction and harvesting of important contaminants like sulfur, nitrates, phosphorus, and even heavy metals.

The company's first "Alpha" AOS was constructed in collaboration with the Northern Alberta Institute of Technology (NAIT)'s Center for Sensors and Systems Integration and with NAIT's Applied Bio/Nanotechnology Industrial Research Chair. Its "Beta" unit is expected to be ready for commercial trials in 2017. What places the AOS above competing technologies is its exceptionally high rate of disinfection (100x more effective than the competition, as verified in poultry production applications) and remarkably low capital and operational costs, made possible by the extremely low amount of electrical energy required to power the oxidation process. Studies have shown the AOS to achieve remarkable rates of disinfection at less than 1/20th the electrical energy input of competing technologies. The AOS is scalable and modular in design to meet a wide variety of needs in the marketplace. BioLargo is already working on what it calls the "Gen 2 AOS" for ultra-high flow rates. Because the markets for the AOS are very large and the needs so great, management reports that they believe it is only a matter of time before industry adopts this new breakthrough low-cost technology.

Oil and gas companies such as Exxon Mobil Corporation (NYSE: XOM), Halliburton Company (NYSE: HAL), Schlumberger Limited (NYSE: SLB), Chevron Corporation (NYSE: CVX) and Royal Dutch Shell plc (NYSE: RDS-A) could dramatically reduce water transportation, sourcing and disposal costs by adopting the AOS. The AOS has been shown to be cost effective at removing problematic contaminants from oil & gas "produced water", and any technology such as the AOS that could cost-effectively enable water recycling on-site could slash costs and greatly improve the bottom line for many producers that are now suffering big losses due to persistently low oil prices. It could also alleviate the costly problem of injecting produced water deep into injection wells, and simultaneously reduce pollution. The maritime industry faces increasing regulatory pressure to eliminate the detrimental transfer and release of invasive marine species through the discharge of ballast water.
This issue prompted the International Maritime Organization to impose regulations for the treatment and discharge of ballast water, and these new rules are scheduled to come into force beginning in September 2017. An estimated 65,000 ships must adopt ballast water treatment systems type-approved under the International Convention for the Control and Management of Ships' Ballast Water and Sediments, 2004 (BWMC). Approved systems must disinfect seawater to specified standards without adding any toxic elements to the discharged water. Global Water Intelligence estimates that the average cost for each ballast water management system will be more than $750,000 and the total cost to outfit every vessel will be about $46.5 billion. Because it is the highest impact, lowest cost, lowest energy technology known that can solve this problem, the AOS could be the most practical solution for maritime operators such as DryShips, Inc. (NASDAQ: DRYS), Navios Maritime Holdings, Inc. (NASDAQ: NM), Diana Shipping, Inc. (NYSE: DSX), Sino-Global Shipping America, Ltd. (NASDAQ: SINO), Diana Containerships Inc. (DCIX) and several others.

In an effort to reduce the incidence of foodborne illness in the poultry industry, the U.S. Department of Agriculture's Food Safety and Inspection Service (FSIS) announced new, stricter federal standards to reduce Salmonella and Campylobacter in ground chicken and turkey products, as well as in raw chicken breasts, legs, and wings. The new regulations took effect July 1, 2016 and have the potential to impact sales of the poultry processing operations of Tyson Foods, Inc. (NYSE: TSN), Pilgrim's Pride Corporation (NASDAQ: PPC), Sanderson Farms, Inc. (NASDAQ: SAFM), Hormel Foods Corporation (NYSE: HRL), Perdue, Cargill, Smithfield Foods, Inc., Conagra Foods, Inc., and every other poultry processor. Researchers at the University of Alberta confirmed that the AOS could be highly effective in reducing cross-contamination of pathogens when poultry is washed in chill tanks.

Water quality in municipal water systems is also a growing concern, and a few large water treatment companies that provide water services to millions of U.S. residents are American Water Works Company, Inc. (NYSE: AWK), American States Water Company (NYSE: AWR), Aqua America, Inc. (NYSE: WTR) and Veolia Environnement S.A. (OTC: VEOEY). The need for a better and lower cost clean water technology is urgent, and CB&I may just be the perfect company to support implementation of the breakthrough low-cost water and air treatment technologies developed by BioLargo, Inc. that can help solve problems across such a broad spectrum of industries.

Except for the historical information presented herein, matters discussed in this release contain forward-looking statements that are subject to certain risks and uncertainties that could cause actual results to differ materially from any future results, performance or achievements expressed or implied by such statements. Emerging Growth LLC, which owns SECFilings.com, is not registered with any financial or securities regulatory authority, and does not provide nor claims to provide investment advice or recommendations to readers of this release. Emerging Growth LLC may from time to time have a position in the securities mentioned herein and may increase or decrease such positions without notice. For making specific investment decisions, readers should seek their own advice.
Emerging Growth LLC may be compensated for its services in the form of cash-based compensation or equity securities in the companies it writes about, or a combination of the two. For full disclosure please visit: http://secfilings.com/Disclaimer.aspx.


News Article | February 28, 2017
Site: www.PR.com

Strathmore’s Who’s Who Honors Dale R. Mudt as a 2017 Professional of the Year

Dale R. Mudt, of Sarnia, Ontario, Canada, has recently been honored as a 2017 Strathmore’s Who’s Who Professional of the Year for his outstanding contributions and achievements in the field of Chemical Engineering.

Sarnia, Ontario, Canada, February 28, 2017 --( PR.com )-- About Dale R. Mudt: Dale R. Mudt is Manager, Process Automation, at the Suncor Energy Products Inc. refinery. Mr. Mudt earned a BSc with Distinction in Chemical Engineering from the University of Alberta. His expertise spans Real Time Optimization, Management, Advanced Process Control, and Data Acquisition. He has spoken on Advanced Control Optimization and Safety Systems to fourth-year engineering students and presented "Refinery Real Time Optimization...On the Road for 25 Years" at the Manufacturing Technology Network Conference. Prior to joining Suncor, Mr. Mudt worked for several consulting engineering firms in the food, mining, and inorganic chemical industries. He is a registered Professional Engineer in the Province of Ontario and a member of both the CSChE and AIChE. In his leisure time, Mr. Mudt enjoys fishing, travel, sports, genealogy and spending time with his family. www.suncor.com

About Strathmore’s Who’s Who: Strathmore's Who's Who publishes an annual two-thousand-page hardcover biographical registry honoring successful individuals in the fields of Business, the Arts and Sciences, Law, Engineering and Government. Based on one's position and lifetime of accomplishments, it honors professional men and women in all academic areas and professions. Inclusion is limited to individuals who have demonstrated leadership and achievement in their occupation, industry or profession.


While the medicinal cannabis industry continues to rapidly advance and expand operations by identifying new leading-edge products, leaders are turning towards the expertise and knowledge of other medical sectors, especially the biopharma sector. Medical marijuana and legal cannabis companies in the markets with recent developments and performance of note include: INSYS Therapeutics, Inc. (NASDAQ: INSY), Vinergy Resources Ltd (OTC: VNNYF) (CSE: VIN.CN), Canopy Growth Corporation (OTC: TWMJF) (TSX: WEED.TO), Aurora Cannabis Inc. (OTC: ACBFF) (TSX-V: ACB.V), and Aphria Inc. (OTC: APHQF) (TSX-V: APH.V).

Vinergy Resources Ltd (OTCQB: VNNYF) (CSE: VIN), in conjunction with its proposed acquisition of MJ Biopharma (announced December 14, 2016), is pleased to announce that, as part of the Company's strategy to develop a lab for research and development products that test and identify specific cannabinoid isolates for targeted therapeutic purposes, it has appointed John Simon to the Company's Scientific Advisory Board (SAB). John has a Bachelor of Science from the University of Alberta, is a senior member of the American Society for Quality, a Certified Quality Auditor (CQA), a Registered Quality Assurance Professional in Good Laboratory Practice (RQAP-GLP) and maintains Regulatory Affairs Certification (RAC) through the Regulatory Affairs Professional Society. Read this and more news for Vinergy Resources at: http://marketnewsupdates.com/news/vnnyf.html

Through John's consultancy practice, he assists companies with both site licenses and product licenses. He has helped companies obtain, renew and maintain in good standing Drug Establishment Licenses (DEL); Medical Device Establishment Licenses (MDEL); Natural and Non-prescription Site Licenses (NNHPD); and Licenses to Cultivate and Distribute under the Marihuana for Medical Purposes Regulations (MMPR) (now under the ACMPR). "With John's substantial background in QA and regulatory affairs specific to drug development and the cannabis industry, he will be a key asset in driving our cannabis product and technology initiatives," said Mr. Kent Deuters, CEO of MJ Biopharma.

Vinergy Resources also announced this week a major breakthrough while conducting research and development on oral cannabinoid complex (Tetrahydrocannabinol (THC), Cannabidiol (CBD), Cannabinol (CBN) and Terpenes) delivery strips and controlled time-release capsule technology. This novel approach will be the basis for several products where water or saliva is the catalyst used to activate the carrier for delivery and absorption of the cannabinoid complex into the body.

Other performances and developments of note in the cannabis and legal marijuana markets include: Aurora Cannabis Inc. (OTCQB: ACBFF) (TSX-V: ACB.V), a dually listed company, on Wednesday closed up on the OTC markets at $1.96, trading over 500,000 shares, and closed even on the TSX at $2.56, trading over 2.6 million shares by the market close. Aurora Cannabis and Radient Technologies (RTI.V) this week provided an update on their previously announced collaboration arrangements. Read the full announcement at http://finance.yahoo.com/news/aurora-cannabis-radient-technologies-exclusive-124000123.html

Canopy Growth Corporation (OTC: TWMJF) (TSX: WEED.TO) this week released its financial results for the third quarter of fiscal year 2017, the period ended December 31, 2016. All financial information in this press release is reported in Canadian dollars, unless otherwise indicated.
Consolidated financial results include the accounts of the Company and its wholly-owned subsidiaries, which include Tweed Inc. ("Tweed"), Tweed Farms Inc. ("Tweed Farms"), and Bedrocan Canada Inc. ("Bedrocan Canada"), and its investments in affiliates. Read the full report at http://finance.yahoo.com/news/canopy-growth-corporation-reports-third-113000287.html

Aphria Inc. (OTCQB: APHQF) (TSX-V: APH.V), a dually listed company, on Wednesday closed up on the OTC markets at $5.01, trading over 400,000 shares, and closed up on the TSX at $6.52, trading over 4.8 million shares by the market close. Aphria, one of Canada's lowest cost producers, produces, supplies and sells medical cannabis. Located in Leamington, Ontario, the greenhouse capital of Canada, Aphria is truly powered by sunlight, allowing for the most natural growing conditions available.

INSYS Therapeutics, Inc. (NASDAQ: INSY) closed up over 12% on Wednesday at $10.82, trading over 3.2 million shares by the market close. Insys Therapeutics this week announced that the Company is providing for the use of Cannabidiol Oral Solution at doses up to 40 mg/kg/day in compassionate use studies in subjects with refractory pediatric epilepsy following completion of 48 weeks of treatment in the ongoing long-term safety study. The long-term safety study permitted subjects who had completed the initial safety and pharmacokinetic (PK) study to receive Cannabidiol Oral Solution at doses up to 40 mg/kg/day for up to 48 weeks.

DISCLAIMER: MarketNewsUpdates.com (MNU) is a third party publisher and news dissemination service provider, which disseminates electronic information through multiple online media channels. MNU is NOT affiliated in any manner with any company mentioned herein. MNU and its affiliated companies are a news dissemination solutions provider and are NOT a registered broker/dealer/analyst/adviser, holds no investment licenses and may NOT sell, offer to sell or offer to buy any security. MNU's market updates, news alerts and corporate profiles are NOT a solicitation or recommendation to buy, sell or hold securities. The material in this release is intended to be strictly informational and is NEVER to be construed or interpreted as research material. All readers are strongly urged to perform research and due diligence on their own and consult a licensed financial professional before considering any level of investing in stocks. All material included herein is republished content and details which were previously disseminated by the companies mentioned in this release. MNU is not liable for any investment decisions by its readers or subscribers. Investors are cautioned that they may lose all or a portion of their investment when investing in stocks. For current services performed, MNU has been compensated three thousand nine hundred dollars for news coverage of the current press release issued by Vinergy Resources Ltd by a non-affiliated third party. MNU HOLDS NO SHARES OF ANY COMPANY NAMED IN THIS RELEASE. This release contains "forward-looking statements" within the meaning of Section 27A of the Securities Act of 1933, as amended, and Section 21E of the Securities Exchange Act of 1934, as amended, and such forward-looking statements are made pursuant to the safe harbor provisions of the Private Securities Litigation Reform Act of 1995.
"Forward-looking statements" describe future expectations, plans, results, or strategies and are generally preceded by words such as "may", "future", "plan" or "planned", "will" or "should", "expected," "anticipates", "draft", "eventually" or "projected". You are cautioned that such statements are subject to a multitude of risks and uncertainties that could cause future circumstances, events, or results to differ materially from those projected in the forward-looking statements, including the risks that actual results may differ materially from those projected in the forward-looking statements as a result of various factors, and other risks identified in a company's annual report on Form 10-K or 10-KSB and other filings made by such company with the Securities and Exchange Commission. You should consider these factors in evaluating the forward-looking statements included herein, and not place undue reliance on such statements. The forward-looking statements in this release are made as of the date hereof and MNU undertakes no obligation to update such statements.


News Article | March 11, 2016
Site: cleantechnica.com

They come from across the continent: from as far south as California, as far north as Alaska, and as far east as the Atlantic coast. Their joint letter refers to "misrepresentation," "lack of information," and "disregard for science that was not funded by the proponent." The scientists condemn the flawed review process for Lelu Island, at the mouth of British Columbia's Skeena River, as "a symbol of what is wrong with environmental decision-making in Canada." More than 130 scientists signed the letter. "This letter is not about being for or against LNG, the letter is about scientific integrity in decision-making," said Dr. Jonathan Moore, Liber Ero Chair of Coastal Science and Management, Simon Fraser University. Another signatory is Otto Langer, former Chief of Habitat Assessment at the Department of Fisheries and Oceans (DFO). These are tough words for a federal government that promised to put teeth back in the gutted environmental review process. In Prime Minister Justin Trudeau's defense, this is yet another problem he inherited from the previous administration, and the task of cleaning up this mess seems enormous. That said, this government was aware the environmental review process was broken before it was elected and has not intervened to at least stop the process from moving forward until it is prepared to take action. The Liberal government appears to be facing a tough decision. So far, it has attempted to work with the provinces. On Lelu Island, as well as the equally controversial proposed Kinder Morgan pipeline expansion and the Site C dam project, continuing to support Premier Clark's policies in this manner would appear to necessitate betraying the trust of the Canadian people. Here are a few choice excerpts from the public letter that more than 130 scientists sent to Catherine McKenna and Prime Minister Trudeau:

"… The CEAA draft report has not accurately characterized the importance of the project area, the Flora Bank region, for fish. The draft CEAA report [1] states that the '…marine habitats around Lelu Island are representative of marine ecosystems throughout the north coast of B.C.' In contrast, five decades of science has repeatedly documented that this habitat is NOT representative of other areas along the north coast or in the greater Skeena River estuary, but rather that it is exceptional nursery habitat for salmon [2-6] that support commercial, recreational, and First Nation fisheries from throughout the Skeena River watershed and beyond [7]. A worse location is unlikely to be found for PNW LNG with regards to potential risks to fish and fisheries. …"

"… CEAA's draft report concluded that the project is not likely to cause adverse effects on fish in the estuarine environment, even when their only evidence for some species was an absence of information. For example, eulachon, a fish of paramount importance to First Nations and a Species of Special Concern [8], likely use the Skeena River estuary and project area during their larval, juvenile, and adult life-stages. There has been no systematic study of eulachon in the project area. Yet CEAA concluded that the project posed minimal risks to this fish. …"

"… CEAA's draft report is not a balanced consideration of the best-available science.
On the contrary, CEAA relied upon conclusions presented in proponent-funded studies which have not been subjected to independent peer review and disregarded a large and growing body of relevant independent scientific research, much of it peer-reviewed and published. …"

"… The PNW LNG project presents many different potential risks to the Skeena River estuary and its fish, including, but not limited to, destruction of shoreline habitat, acid rain, accidental spills of fuel and other contaminants, dispersal of contaminated sediments, chronic and acute sound, seafloor destruction by dredging the gas pipeline into the ocean floor, and the erosion and food-web disruption from the trestle structure. Fisheries and Oceans Canada (DFO) and Natural Resources Canada provided detailed reviews [12] on only one risk pathway – habitat erosion – while no such detailed reviews were conducted on other potential impacts or their cumulative effects. …"

"… CEAA's draft report concluded that the project posed moderate risks to marine fish but that these risks could be mitigated. However, the proponent has not fully developed their mitigation plans and the plans that they have outlined are scientifically dubious. For example, the draft assessment states that destroyed salmon habitat will be mitigated; the 'proponent identified 90,000 m2 of lower productivity habitats within five potential offsetting sites that could be modified to increase the productivity of fisheries', when in fact, the proponent did not present data on productivity of Skeena Estuary habitats for fish at any point in the CEAA process. Without understanding relationships between fish and habitat, the proposed mitigation could actually cause additional damage to fishes of the Skeena River estuary. …"

The signatories include:

British Columbia Institute of Technology
1. Marvin Rosenau, Ph.D., Professor, British Columbia Institute of Technology. 2. Eric M. Anderson, Ph.D., Faculty, British Columbia Institute of Technology.

British Columbia Ministry of Environment
1. R. S. Hooton, M.Sc., Former Senior Fisheries Management Authority for British Columbia Ministry of Environment, Skeena Region.

California Academy of Sciences
1. John E. McCosker, Ph.D., Chair of Aquatic Biology, Emeritus, California Academy of Sciences.

Department of Fisheries and Oceans Canada
1. Otto E. Langer, M.Sc., R.P.Bio., Fisheries Biologist, Former Chief of Habitat Assessment, Department of Fisheries and Oceans Canada.

Memorial University of Newfoundland
1. Ian A. Fleming, Ph.D., Professor, Memorial University of Newfoundland. 2. Brett Favaro, Ph.D., Liber Ero conservation fellow, Memorial University of Newfoundland.

Norwegian Institute for Nature Research
1. Rachel Malison, Ph.D., Marie Curie Fellow and Research Ecologist, The Norwegian Institute for Nature Research.

Russian Academy of Sciences
1. Alexander I. Vedenev, Ph.D., Head of Ocean Noise Laboratory, Russian Academy of Sciences. 2. Victor Afanasiev, Ph.D., Russian Academy of Sciences.

Sakhalin Research Institute of Fisheries and Oceanography
1. Alexander Shubin, M.Sc., Fisheries Biologist, Sakhalin Research Institute of Fisheries and Oceanography.

Simon Fraser University, BC
1. Jonathan W. Moore, Ph.D., Liber Ero Chair of Coastal Science and Management, Associate Professor, Simon Fraser University. 2. Randall M. Peterman, Ph.D., Professor Emeritus and Former Canada Research Chair in Fisheries Risk Assessment and Management, Simon Fraser University. 3. John D.
Reynolds, Ph.D., Tom Buell BC Leadership Chair in Salmon Conservation, Professor, Simon Fraser University. 4. Richard D. Routledge, Ph.D., Professor, Simon Fraser University. 5. Evelyn Pinkerton, Ph.D., School of Resource and Environmental Management, Professor, Simon Fraser University. 6. Dana Lepofsky, Ph.D., Professor, Simon Fraser University. 7. Nicholas Dulvy, Ph.D., Canada Research Chair in Marine Biodiversity and Conservation, Professor, Simon Fraser University. 8. Ken Lertzman, Ph.D., Professor, Simon Fraser University. 9. Isabelle M. Côté, Ph.D., Professor, Simon Fraser University. 10. Brendan Connors, Ph.D., Senior Systems Ecologist, ESSA Technologies Ltd., Adjunct Professor, Simon Fraser University. 11. Lawrence Dill, Ph.D., Professor Emeritus, Simon Fraser University. 12. Patricia Gallaugher, Ph.D., Adjunct Professor, Simon Fraser University. 13. Anne Salomon, Ph.D., Associate Professor, Simon Fraser University. 14. Arne Mooers, Ph.D., Professor, Simon Fraser University. 15. Lynne M. Quarmby, Ph.D., Professor, Simon Fraser University. 16. Wendy J. Palen, Ph.D., Associate Professor, Simon Fraser University.

University of Alaska
1. Peter Westley, Ph.D., Assistant Professor of Fisheries, University of Alaska Fairbanks. 2. Anne Beaudreau, Ph.D., Assistant Professor of Fisheries, University of Alaska Fairbanks. 3. Megan V. McPhee, Ph.D., Assistant Professor, University of Alaska Fairbanks.

University of Alberta
1. David W. Schindler, Ph.D., Killam Memorial Professor of Ecology Emeritus, University of Alberta. 2. Suzanne Bayley, Ph.D., Emeritus Professor, University of Alberta.

University of British Columbia
1. John G. Stockner, Ph.D., Emeritus Senior Scientist DFO, West Vancouver Laboratory, Adjunct Professor, University of British Columbia. 2. Kai M.A. Chan, Ph.D., Canada Research Chair in Biodiversity and Ecosystem Services, Associate Professor, University of British Columbia. 3. Hadi Dowlatabadi, Ph.D., Canada Research Chair in Applied Mathematics and Integrated Assessment of Global Change, Professor, University of British Columbia. 4. Sarah P. Otto, Ph.D., Professor and Director, Biodiversity Research Centre, University of British Columbia. 5. Michael Doebeli, Ph.D., Professor, University of British Columbia. 6. Charles J. Krebs, Ph.D., Professor, University of British Columbia. 7. Amanda Vincent, Ph.D., Professor, University of British Columbia. 8. Michael Healey, Ph.D., Professor Emeritus, University of British Columbia.

University of California (various campuses)
1. Mary E. Power, Ph.D., Professor, University of California, Berkeley. 2. Peter B. Moyle, Ph.D., Professor, University of California. 3. Heather Tallis, Ph.D., Chief Scientist, The Nature Conservancy, Adjunct Professor, University of California, Santa Cruz. 4. James A. Estes, Ph.D., Professor, University of California. 5. Eric P. Palkovacs, Ph.D., Assistant Professor, University of California, Santa Cruz. 6. Justin D. Yeakel, Ph.D., Assistant Professor, University of California. 7. John L. Largier, Ph.D., Professor, University of California, Davis.

University of Montana
1. Jack A. Stanford, Ph.D., Professor of Ecology, University of Montana. 2. Andrew Whiteley, Ph.D., Assistant Professor, University of Montana. 3. F. Richard Hauer, Ph.D., Professor and Director, Center for Integrated Research on the Environment, University of Montana.

University of New Brunswick
1. Richard A. Cunjak, Ph.D., Professor, University of New Brunswick.

University of Ontario Institute of Technology
1. Douglas A.
Holdway, Ph.D., Canada Research Chair in Aquatic Toxicology, Professor, University of Ontario Institute of Technology.

University of Ottawa
1. Jeremy Kerr, Ph.D., University Research Chair in Macroecology and Conservation, Professor, University of Ottawa.

University of Toronto
1. Martin Krkosek, Ph.D., Assistant Professor, University of Toronto. 2. Gail McCabe, Ph.D., University of Toronto.

University of Victoria
1. Chris T. Darimont, Ph.D., Associate Professor, University of Victoria. 2. John Volpe, Ph.D., Associate Professor, University of Victoria. 3. Aerin Jacob, Ph.D., Postdoctoral Fellow, University of Victoria. 4. Briony E.H. Penn, Ph.D., Adjunct Professor, University of Victoria. 5. Natalie Ban, Ph.D., Assistant Professor, School of Environmental Studies, University of Victoria. 6. Travis G. Gerwing, Ph.D., Postdoctoral Fellow, University of Victoria. 7. Eric Higgs, Ph.D., Professor, University of Victoria. 8. Paul C. Paquet, Ph.D., Senior Scientist, Raincoast Conservation Foundation, Adjunct Professor, University of Victoria. 9. James K. Rowe, Ph.D., Assistant Professor, University of Victoria.

University of Washington
1. Charles Simenstad, Ph.D., Professor, University of Washington. 2. Daniel Schindler, Ph.D., Harriet Bullitt Endowed Chair in Conservation, Professor, University of Washington. 3. Julian D. Olden, Ph.D., Associate Professor, University of Washington. 4. P. Sean McDonald, Ph.D., Research Scientist, University of Washington. 5. Tessa Francis, Ph.D., Research Scientist, University of Washington.

University of Windsor
1. Hugh MacIsaac, Ph.D., Canada Research Chair, Great Lakes Institute for Environmental Research, Professor, University of Windsor.


News Article | March 25, 2016
Site: phys.org

Home to a mix of preserved wetlands, green rolling hills and dense boreal forests, the Beaver Hills area east of Edmonton has been designated as a United Nations Educational, Scientific and Cultural Organization (UNESCO) Biosphere Reserve, under its Man and the Biosphere Programme. The area joins a network of 669 sites in 120 countries that foster ecologically sustainable human and economic development. Researchers from various faculties at the U of A have conducted dozens of studies there over the last 30 years, focused on work ranging from wildlife and outdoor recreation to wetlands and land management. "University of Alberta research has benefited from the Beaver Hills area in many ways," said Guy Swinnerton, professor emeritus in the Faculty of Physical Education and Recreation and chair of the Beaver Hills Initiative Protected Areas Working Group. Swinnerton, who has enjoyed the Beaver Hills area as both a hiker and a researcher for many years, assisted in the nomination process for the UNESCO designation. He began taking students to the area in 1978 while teaching courses about protected areas and outdoor recreation. "Beaver Hills has different types of protected areas, and it's that whole mosaic that is important," he said. The Beaver Hills Biosphere Reserve becomes the second area of Alberta to win UNESCO designation, after the Waterton Biosphere Reserve in 1979. Home to the U of A's Augustana Miquelon Lake Research Station, the biosphere's 1,572 square kilometres also encompass Elk Island National Park, Miquelon Lake Provincial Park, Cooking Lake-Blackfoot Provincial Recreation Area, the Ukrainian Cultural Heritage Village, the Ministik Lake Game Bird Sanctuary and the Strathcona Wilderness Centre. With its well-preserved, protected parklands and forests sitting next to surrounding farms and residential subdivisions, the Beaver Hills biosphere provides opportunities for university researchers and government scientists to investigate, through comparative studies, how to protect biodiversity and practise sustainable development within the lived-in landscape. "It's this total landscape approach that demonstrates how we have to work collectively to find balance between conservation and sustainable development," Swinnerton said. "It's a hidden gem," added Glynnis Hood, an associate professor of environmental science based at the U of A's Augustana Campus. "Beaver Hills is spectacular because of its subtle beauty. There are ecological surprises around every corner, because you're not looking for the big features like mountains, but for the small surprises." One of those surprises is the fisher, a weasel once thought to be gone from the area; it appears to have a healthy population and is now the subject of a collaborative University of Victoria study involving Augustana Campus. "The Beaver Hills biosphere offers a rich opportunity to keep exploring questions that are right in our own backyard," said Hood, who lives near Miquelon Lake and has for years guided students in researching area wetlands. She's also studied human-wildlife conflicts and is currently researching low-impact wetland management practices. Last year she and colleague Glen Hvenegaard led the first field course in environmental science and ecology at the Miquelon Lake Research Station, which opened in 2015. The 17-day course, which will be offered biannually, gave U of A students the chance to appreciate the Beaver Hills area's rich diversity as they studied everything from park interpretation to muskrats to soil science.
"It was a great way to get the students to really live in the landscape and understand it intimately through research," Hood said. The UNESCO designation affirms the Beaver Hills Biosphere Reserve as a world-class discovery ground that, through the work of U of A researchers and other groups, is yielding insights into global problems. "It demonstrates grassroots excellence and honours the commitment of organizations and people in solving conservation and sustainable development problems on the ground," Swinnerton said.


Prado C.M.M., University of Alberta | Heymsfield S.B., Pennington Biomedical Research Center
Journal of Parenteral and Enteral Nutrition | Year: 2014

Body composition refers to the amount of fat and lean tissues in our body; it is a science that looks beyond a unit of body weight, accounting for the proportion of different tissues and their relationship to health. Although body weight and body mass index are well-known indexes of health status, most researchers agree that they are rather inaccurate measures, especially for elderly individuals and patients with specific clinical conditions. The emerging use of imaging techniques such as dual-energy x-ray absorptiometry, computerized tomography, magnetic resonance imaging, and ultrasound imaging in the clinical setting has highlighted the importance of lean soft tissue (LST) as an independent predictor of morbidity and mortality. It is clear from emerging studies that body composition will be vital in treatment decisions, prognostic outcomes, and quality of life in several nonclinical and clinical states. This review explores the methodologies and the emerging value of imaging techniques in the assessment of body composition, focusing on the value of LST to predict nutrition status. © 2014 American Society for Parenteral and Enteral Nutrition.


Kutty S., University of Nebraska at Omaha | Smallhorn J.F., University of Alberta
Journal of the American Society of Echocardiography | Year: 2012

Atrioventricular septal defects comprise a disease spectrum characterized by deficient atrioventricular septation, with several common features seen in all affected hearts and variability in atrioventricular valve morphology and interatrial and interventricular communications. Atrioventricular septal defects are among the more common defects encountered by pediatric cardiologists and echocardiographers. Despite advances in understanding, standard two-dimensional echocardiography may not be the optimal method for the morphologic and functional evaluation of this lesion, particularly malformations of the atrioventricular valve(s). In this review, the authors summarize the role of three-dimensional echocardiography in the diagnostic evaluation of atrioventricular septal defects.


Evans J.P., University of North Carolina at Chapel Hill | Meslin E.M., Indiana University | Marteau T.M., King's College London | Caulfield T., University of Alberta
Science | Year: 2011

Unrealistic expectations and uncritical translation of genetic discoveries may undermine other promising approaches to preventing disease and improving health.


Armstrong P., University of Alberta | Boden W., Buffalo General Hospital
Annals of Internal Medicine | Year: 2011

A transformation in ST-segment elevation myocardial infarction (STEMI) care in the United States has unfolded. It asserts the superiority of primary percutaneous coronary intervention (PPCI) over fibrinolysis, on the basis of studies showing superior reperfusion with the former in patients with STEMI. Although clear benefit has resulted from national programs directed toward achieving shorter times to PPCI in facilities with around-the-clock access, most patients present to non-PPCI hospitals. Because delay to PPCI for most patients with STEMI presenting to non-PPCI centers remains outside current guidelines, many are denied the benefit of pharmacologic therapy. This article describes how this approach creates a treatment paradox: the more effort spent improving access to PPCI for acute STEMI, the more the use of fibrinolysis is unnecessarily avoided and delayed. Recent evidence confirms the unfavorable consequences of delay to PPCI and shows that early prehospital fibrinolysis combined with strategic mechanical co-interventions affords excellent outcomes. The authors believe it is time to embrace an integrated dual reperfusion strategy to best serve all patients with STEMI. © 2011 American College of Physicians.


Patent
Massachusetts Institute of Technology, President And Fellows Of Harvard College and University of Alberta | Date: 2015-02-06

The invention, in some aspects, relates to light-activated ion channel polypeptides and encoding nucleic acids, and also relates in part to compositions comprising light-activated ion channel polypeptides and methods of using light-activated ion channel polypeptides to alter cell activity and function.


Grant
Agency: European Commission | Branch: FP7 | Program: CP-IP | Phase: KBBE-2007-3-1-03 | Award Amount: 11.21M | Year: 2008

Replacing fossil oil with renewable resources is perhaps the most urgent need and the most challenging task that human society faces today. Cracking fossil hydrocarbons and building the desired chemicals with advanced organic chemistry usually requires many times more energy than is contained in the final product. Thus, using plant material in the chemical industry not only replaces the fossil material contained in the final product but also saves substantial energy in processing. Of particular interest are seed oils, which show great variation in composition between different plant species. Many of the oil qualities found in wild species would be very attractive for the chemical industry if they could be obtained at moderate cost, in bulk quantities and with a secure supply. Genetic engineering of vegetable oil qualities in high-yielding oil crops could yield such products in a relatively short time frame. This project aims at developing such added-value oils in dedicated industrial oil crops, mainly in the form of various wax esters particularly suited for lubrication. The project brings together the most prominent scientists in plant lipid biotechnology in an unprecedented worldwide effort to produce added-value oils in industrial oil crops within a time frame of four years, as well as to develop a toolbox of genes and an understanding of cellular lipid metabolism that will allow the rational design of a vast array of industrial oil qualities in oil crops. Since the GM technologies that will be used in the project are met with great scepticism in Europe, it is crucial that ideas, expectations and results are communicated to the public and that methods, ethics, risks and risk assessment are open for debate. The keywords of our communication strategies will be openness and an understanding of public concerns.


Patent
President And Fellows Of Harvard College and University of Alberta | Date: 2015-06-17

Provided herein are variants of an archaerhodopsin useful for applications such as optical measurement of membrane potential. The present invention also relates to polynucleotides encoding the variants; nucleic acid constructs, vectors, and cells comprising the polynucleotides; cells comprising the polypeptides; and methods of using the variants.


Patent
University of Alberta and University of Lethbridge | Date: 2015-02-05

The disclosure provides methods for the treatment of skin disorders through the use of minimally invasive terahertz radiation. The method includes exposing skin cells to terahertz radiation in an amount sufficient to modulate gene expression in the skin cells. The modulation of gene expression then results in a reduction of the disease state, or aspects thereof, in the exposed skin cells.


Grant
Agency: European Commission | Branch: FP7 | Program: CP-FP | Phase: KBBE-2009-2-4-01 | Award Amount: 4.05M | Year: 2010

The NanoLyse project will focus on the development of validated methods and reference materials for the analysis of engineered nanoparticles (ENP) in food and beverages. The developed methods will cover all relevant classes of ENP with reported or expected food and food-contact-material applications, i.e. metal, metal oxide/silicate, surface-functionalised and organic encapsulate (colloidal/micelle type) ENP. Priority ENPs have been selected from each class as model particles to demonstrate the applicability of the developed approaches, e.g. nano-silver, nano-silica, an organically surface-modified nano-clay and organic nano-encapsulates. Priority will be given to methods which can be implemented in existing food analysis laboratories. A dual approach will be followed. Rapid imaging and screening methods will allow the distinction between samples which contain ENP and those that do not. These methods will be characterised by minimal sample preparation, cost-efficiency and high throughput, and will be achieved by the application of automated smart electron microscopy imaging and screening techniques in sensor and immunochemical formats. More sophisticated, hyphenated methods will allow the unambiguous characterisation and quantification of ENP. These will include elaborate sample preparation, separation by field-flow fractionation and chromatographic techniques, as well as mass spectrometric and electron microscopic characterisation techniques. The developed methods will be validated using the well-characterised food matrix reference materials that will be produced within the project. Small-scale interlaboratory method performance studies and the analysis of a few commercially available products claiming or suspected to contain ENP will demonstrate the applicability and soundness of the developed methods.


Grant
Agency: European Commission | Branch: FP7 | Program: CP | Phase: ENERGY.2010.5.2-3 | Award Amount: 5.31M | Year: 2011

CO2CARE aims to support the large-scale demonstration of CCS technology by addressing the research requirements of CO2 storage site abandonment. It will deliver technologies and procedures for abandonment and post-closure safety, satisfying the regulatory requirements for transfer of responsibility. The project will focus on three key areas: well abandonment and long-term integrity; reservoir management and prediction from closure to the long term; and risk management methodologies for long-term safety. Objectives will be achieved via integrated laboratory research, field experiments and state-of-the-art numerical modelling, supported by literature review and data from a rich portfolio of real storage sites covering a wide range of geological and geographical settings. CO2CARE will develop plugging techniques to ensure long-term well integrity; study the factors critical to long-term site safety; develop monitoring methods for leakage detection; and investigate and develop remediation technologies. Predictive modelling approaches will be assessed for their ability to help define acceptance criteria. Risk management procedures and tools to assess post-closure system performance will be developed. Integrating these, the project will establish the technical criteria necessary to assess whether a site meets the high-level requirements for transfer of responsibility defined by the EU Directive. The technologies developed will be implemented at the Ketzin site, and dry-run applications for site abandonment will be developed for hypothetical closure scenarios at Sleipner and K12-B. Participation of partners from the US, Canada, Japan and Australia, and data obtained from current and closed sites, will add to the field monitoring database and place the results of CO2CARE in a worldwide perspective. Research findings will be presented as best-practice guidelines. The dissemination strategy will deliver results to a wide range of international stakeholders and the general public.


News Article | March 2, 2017
Site: www.sciencenews.org

In the battle of wits between humans and machines, computers have just upped the ante. Two new poker-playing programs can best professionals at heads-up no-limit Texas Hold’em, a two-player version of poker without restrictions on the size of bets. It’s another in a growing list of complex games, including chess, checkers (SN: 7/21/07, p. 36) and Go (SN: 12/24/16, p. 28), in which computers reign supreme. Computer scientists from the University of Alberta in Canada report that their program, known as DeepStack, roundly defeated professional poker players, playing 3,000 hands against each. The program didn’t win every hand — sometimes the luck of the draw was against it. But after the results were tallied, DeepStack beat 10 out of 11 card sharks, the scientists report online March 2 in Science. (DeepStack also beat the 11th competitor, but that victory was not statistically significant.) “This work is very impressive,” says computer scientist Murray Campbell, one of the creators of Deep Blue, the computer that bested chess grandmaster Garry Kasparov in 1997. DeepStack “had a huge margin of victory,” says Campbell, of IBM’s Thomas J. Watson Research Center in Yorktown Heights, N.Y. Likewise, computer scientists led by Tuomas Sandholm of Carnegie Mellon University in Pittsburgh recently trounced four elite heads-up no-limit Texas Hold’em players with a program called Libratus. Each contestant played 30,000 hands against the program during a tournament held in January in Pittsburgh. Libratus was “much tougher than any human I’ve ever played,” says poker pro Jason Les. Previously, Michael Bowling — one of DeepStack’s creators — and colleagues had created a program that could play a two-person version of poker in which the size of bets is limited. That program played the game nearly perfectly: It was statistically unbeatable within a human lifetime (SN: 2/7/2015, p.14). But no-limit poker is vastly more complicated because when any bet size is allowed, there are many more possible actions. Players must decide whether to go all in, play it safe with a small wager or bet something in between. “Heads-up no-limit Texas Hold’em … is, in fact, far more complex than chess,” Campbell says. In the card game, each player is dealt two cards facedown and both players share five cards dealt faceup, with rounds of betting between stages of dealing. Unlike chess or Go, where both players can see all the pieces on the board, in poker, some information is hidden — the two cards in each player’s hand. Such games, known as imperfect-information games, are particularly difficult for computers to master. To hone DeepStack’s technique, the researchers used deep learning — a method of machine learning that formulates an intuition-like sense of when to hold ’em and when to fold ’em. When it’s the program’s turn, it sorts through options for its next few actions and decides what to do. As a result, DeepStack’s nature “looks a lot more like humans’,” says Bowling. Libratus computes a strategy for the game ahead of time and updates itself as it plays to patch flaws in its tactics that its human opponents have revealed. Near the end of a game, Libratus switches to real-time calculation, during which it further refines its methods. Libratus is so computationally demanding that it requires a supercomputer to run. (DeepStack can run on a laptop.) Teaching computers to play games with hidden information, like poker, could eventually lead to real-life applications. 
"The whole area of imperfect-information games is a step towards the messiness of the real world," says Campbell. Computers that can handle that messiness could assist with business negotiations or auctions, and could help guard against hidden risks in cybersecurity, for example.
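Both programs are built around equilibrium-finding for imperfect-information games; the published systems use counterfactual-regret-style methods at vastly larger scale. As a toy illustration only, and not the authors' code, the sketch below applies regret matching, the basic building block of such solvers, to rock-paper-scissors, a game whose equilibrium is to mix all three actions equally:

    import random

    ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors

    def payoff(a, b):
        """+1 if action a beats b, 0 on a tie, -1 on a loss."""
        if a == b:
            return 0
        return 1 if (a - b) % 3 == 1 else -1

    def strategy_from_regret(regret):
        """Regret matching: play actions in proportion to positive regret."""
        pos = [max(r, 0.0) for r in regret]
        total = sum(pos)
        return [p / total for p in pos] if total > 0 else [1.0 / ACTIONS] * ACTIONS

    def sample(probs):
        r, c = random.random(), 0.0
        for i, p in enumerate(probs):
            c += p
            if r < c:
                return i
        return ACTIONS - 1

    def train(iterations=200_000):
        regret = [[0.0] * ACTIONS for _ in range(2)]
        strategy_sum = [[0.0] * ACTIONS for _ in range(2)]
        for _ in range(iterations):
            strats = [strategy_from_regret(regret[p]) for p in range(2)]
            acts = [sample(s) for s in strats]
            for p in range(2):
                for a in range(ACTIONS):
                    strategy_sum[p][a] += strats[p][a]
                # Regret: how much better each action would have done
                # than the action actually played against this opponent.
                u = payoff(acts[p], acts[1 - p])
                for a in range(ACTIONS):
                    regret[p][a] += payoff(a, acts[1 - p]) - u
        # The *average* strategy converges toward equilibrium (~1/3 each).
        return [[s / sum(row) for s in row] for row in strategy_sum]

    print(train()[0])

Poker solvers apply the same regret-driven self-play to astronomically larger decision trees; that is where Libratus's precomputed abstractions and DeepStack's learned value estimates come in.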


News Article | October 31, 2016
Site: www.eurekalert.org

Scientists have found a way to use satellites to track photosynthesis in evergreens -- a discovery that could improve our ability to assess the health of northern forests amid climate change. An international team of researchers used satellite sensor data to identify slight colour shifts in evergreen trees that show seasonal cycles of photosynthesis -- the process in which plants use sunlight to convert carbon dioxide and water into glucose. Photosynthesis is easy to track in deciduous trees -- when leaves bud or turn yellow and fall off. But until recently, it had been impossible to detect in evergreen conifers on a large scale. "Photosynthesis is arguably the most important process on the planet, without which life as we know it would not exist," said John Gamon, lead researcher and a professor of biological sciences at the University of Alberta. "As the climate changes, plants respond -- their photosynthesis changes, their growing season changes. And if photosynthesis changes, that in turn further affects the atmosphere and climate." Through their CO2-consuming ways, plants have been slowing climate change far more than scientists previously realized. The "million-dollar question" is whether this will continue as the planet continues to warm due to human activity, Gamon said. Scientists have two hypotheses -- the first is that climate change and longer growing seasons will result in plants sucking up even more CO2, further slowing climate change. The other predicts a drop in photosynthetic activity due to drought conditions that stress plants, causing them to release CO2 into the atmosphere through a process called respiration -- thereby accelerating climate change. "If it's hypothesis one, that's helping us. If it's hypothesis two, that's pretty scary," said Gamon. The research team combined two different satellite bands -- one of which was used to study oceans and only recently made public by NASA -- to track seasonal changes in green (pigment created by chlorophyll) and yellow (created by carotenoid) needle colour. The index they developed provides a new tool to monitor changes in northern forests, which cover 14 per cent of all the land on Earth. Gamon has taken a leave of absence from the U of A to further the research, now funded by NASA, at the University of Nebraska-Lincoln. His lab in the U.S. is reviewing 15 years' worth of satellite data on forests in Canada and Alaska to ultimately determine whether photosynthetic cycles are happening earlier because of climate change and whether forests are becoming more or less productive at converting CO2. "Those are key questions we haven't been able to answer for the boreal forest as a whole," he said. Researchers from the University of Toronto, University of North Carolina, University of Maryland, Baltimore County, University of Barcelona, NASA and the U.S. Forest Service collaborated on the project. Their findings were published Monday in the Proceedings of the National Academy of Sciences.
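The article does not spell out the index's formula. Pigment indices of this kind are typically normalized differences of two reflectance bands, one tracking chlorophyll (green) and one tracking carotenoids (yellow); the sketch below shows that generic form. The band roles and reflectance values are illustrative assumptions, not the bands or data from the PNAS paper:

    def normalized_difference(band_a, band_b):
        """Generic two-band index: (a - b) / (a + b), bounded in [-1, 1]."""
        return (band_a - band_b) / (band_a + band_b)

    # Hypothetical reflectances for one evergreen pixel across seasons.
    # As carotenoid (yellow) pigment rises relative to chlorophyll (green),
    # the index drops, flagging reduced photosynthetic activity.
    seasons = {
        "summer": (0.08, 0.04),  # (chlorophyll-sensitive, carotenoid-sensitive)
        "winter": (0.05, 0.06),
    }
    for name, (green, yellow) in seasons.items():
        print(name, round(normalized_difference(green, yellow), 3))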


News Article | September 16, 2016
Site: www.rdmag.com

Inspired by the anatomy of insects, an interdisciplinary research team at the University of Alberta has come up with a novel way to quickly and accurately detect dangerous airborne chemicals. The work started with Arindam Phani, a graduate student in the U of A's Department of Chemical and Materials Engineering, who observed that most insects have tiny hairs on their body surfaces, and it is not clear what the hairs are for. Trying to make sense of what these hairs may be capable of, Phani designed experiments involving a "forest" of tiny hairs on a thin vibrating crystal chip, under the guidance of his academic advisor Thomas Thundat, the Canada Research Chair in Oil Sands Molecular Engineering. The two joined forces with Vakhtang Putkaradze, Centennial Professor in the University of Alberta's Department of Mathematical and Statistical Sciences. The experiments and subsequent theoretical explanation formed the crux of a new study published in the Sept. 6 issue of Scientific Reports, an online, open access journal from the publishers of Nature. "We wanted to do something that nobody else does," said Putkaradze, a mathematician who is also a renowned expert in the field of mechanics. "When using resonators as sensors, most people want to get rid of dissipation or friction because it's considered highly undesirable; it tends to obscure what you are trying to measure. We have taken that undesirable thing and made it useful." "Sensing chemicals without chemical receptors has been a challenge in normal conditions," said Thundat, a world-leading expert in the field of sensing. "We realized that there is a wealth of information contained in the frictional loss of a mechanical resonator in motion, and that it is more pronounced at the nanoscale." The idea is that any object moving rapidly through the air can probe the properties of the surrounding environment. Imagine having a wand in your hand and moving it back and forth: even with your eyes closed, you can feel whether the wand is moving through air, water, or honey, just by feeling the resistance. Now, picture this wand with billions of tiny hairs on it, moving back and forth several million times per second, and just imagine the sensing possibilities. "With the nanostructures, we can feel tiny changes in the air surrounding the resonator," says Putkaradze. "This sensitivity makes the device useful for detecting a wide variety of chemicals." Phani, who is the first author on the publication, believes "similar mechanisms involving motions of nano-hairs may be used for sensing by living organisms." Because the friction changes dramatically with minute changes in the environment and is easy to measure, it may be possible to eventually produce a gadget similar in size to, or slightly larger than, a Rubik's cube and designed to plug into a wall. At present, the group's device is geared primarily to sensing chemical vapors in air. "We are thinking that this device can work like a smaller and cheaper spectrometer, measuring chemicals in the parts-per-million range," added Putkaradze. Putkaradze explains that, apart from size and reasonable cost, what sets the device apart from larger and more expensive equipment is its versatility. "Because our sensor is not directed to detect any specific chemical, it can interpret a broad range, and it doesn't require that we actually attach the molecules to anything to create a mechanical response, meaning that it's also reusable."
The team adds that the most immediate and obvious use will be for environmental air-quality monitoring. Concluded Putkaradze: "We would like to work with applications like law enforcement and scientific laboratories, but the most obvious use is for environmental observation of chemical air pollution in cities and the resource industry." Future iterations are geared toward detecting particulate matter, such as dust, as well as the number of viruses present in the air, which would be invaluable for public health.
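A standard way to quantify the frictional losses the team exploits is the resonator's quality factor, Q = f0 / FWHM, read off the resonance peak: more dissipation means a broader peak and a lower Q. The following sketch estimates Q from a synthetic frequency sweep; the Lorentzian line shape and every number in it are assumptions for illustration, not the group's data or analysis:

    import numpy as np

    def quality_factor(freqs, amps):
        """Estimate Q = f0 / FWHM; half-power points sit at peak / sqrt(2)."""
        i_peak = int(np.argmax(amps))
        f0 = freqs[i_peak]
        half = amps[i_peak] / np.sqrt(2.0)
        above = np.where(amps >= half)[0]
        fwhm = freqs[above[-1]] - freqs[above[0]]
        return f0 / fwhm

    # Synthetic driven-oscillator amplitude with linewidth gamma (Hz).
    # More dissipation (larger gamma) -> broader peak -> lower Q.
    f = np.linspace(0.9e6, 1.1e6, 20001)
    f0_true, gamma = 1.0e6, 2.0e3
    amp = 1.0 / np.sqrt((f0_true**2 - f**2)**2 + (gamma * f)**2)
    print(f"Q ~ {quality_factor(f, amp):.0f}")  # roughly f0 / gamma = 500

A vapor adsorbing on the nano-hairs changes the effective damping, so a falling Q (equivalently, a rising dissipation 1/Q) is one plausible readout for such a sensor.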


News Article | February 23, 2017
Site: www.eurekalert.org

EDMONTON -- In the middle of Alberta's boreal forest, a bird eats a wild chokecherry. During its scavenging, the bird is caught and eaten by a fox. The cherry seed, now inside the belly of the bird within the belly of the fox, is transported far away from the tree it came from. Eventually, the seed is deposited on the ground. After being broken down in the belly of not one but two animals, the seed is ready to germinate and become a cherry tree itself. The circle of life at work. Diploendozoochory, or the process of a seed being transported in the gut of multiple animals, occurs with many species of plants in habitats around the world. First described by Charles Darwin in 1859, this type of seed dispersal has only been studied a handful of times. And in a world affected by climate change and increasing rates of human development, understanding this process is becoming increasingly important. A new study by researchers at the University of Alberta's Department of Biological Sciences is the first to comprehensively examine existing literature to identify broader patterns and suggest ways in which the phenomenon is important for plant populations and seed evolution. Anni Hämäläinen, lead investigator and postdoctoral fellow, explains that predator-assisted seed dispersal is important to colonize and recolonize plant life in the wild. "Thick-shelled seeds may benefit from the wear and tear of passing through the guts of two animals, making them better able to germinate than if they had passed through the gut of the prey alone," explains Hämäläinen. "It's even possible that some plants have evolved specifically to take advantage of these predator-specific behaviours." Often larger than prey animals, predators cover larger distances with ease. As humans continue to develop and alter wilderness, such as by cutting down forests or building roads, predators may be the only animals large enough to navigate across these areas and enable plants to recolonize them. "Climate change will alter where some plants can find suitable places to grow," explains Hämäläinen. "Seed-carrying predators may have a role in helping plants cover a larger area and hence move with the changing climate." These different factors are like pieces in a puzzle, explains Hämäläinen: to fully understand the big picture of how they affect plant populations, scientists need to know how all of the pieces fit together. "Our work has highlighted how interesting and important diploendozoochory is, and we hope that it will help and encourage others to fill some of these gaps in our understanding," says Hämäläinen. The paper "The ecological significance of secondary seed dispersal by carnivores" is published in Ecosphere. This research was conducted by Anni Hämäläinen, Kate Broadley, Amanda Droghini, Jessica Haines, Clayton Lamb, and Sophie Gilbert, under the supervision of Stan Boutin, professor in the Department of Biological Sciences and Alberta Biodiversity Conservation Chair.


News Article | November 30, 2016
Site: www.marketwired.com

EDMONTON, AB--(Marketwired - November 30, 2016) - TEC Edmonton announced today that Dr. Randy Yatscoff, Executive Vice-President of Business Development, has been awarded the National Startup Canada Adam Chowaniec Lifetime Achievement Award. The award, only one of which is presented at the national level, recognizes an individual who has made a long-term impact on advancing an environment of entrepreneurial growth and success in Canada. "I'm grateful for the incredible people and teams I've worked with over the years that made this possible," said Randy. "This award is really about the people who come together to be bigger than the sum of their parts." "There is no more deserving person than Randy for this award," says TEC Edmonton CEO Chris Lumb. "Randy brings passion, commitment, and action to everything he does, and it's an honour to work with him. We and our clients are all richer from Randy's presence, and this award recognizes his outstanding contributions to Canadian entrepreneurship." Dr. Yatscoff has worked with TEC Edmonton since 2008, initially as an Entrepreneur-in-Residence before becoming Executive Vice-President of Business Development in 2010. In his role at TEC, he oversees a team that serves over 80 startup companies per year. During his time at TEC Edmonton, Randy has directly or indirectly helped to create 15 university spinoff companies like Metabolomic Technologies Inc. (MTI) and Tevosol, allowing research innovations to make a real-world impact. In addition to university-based companies, Randy has also mentored dozens of companies in the community. Randy's time at TEC Edmonton is backed by more than a decade of experience as a biotech executive, notably serving as President and CEO of the drug development company Isotechnika. During his tenure at Isotechnika, Randy helped raise $200 million in equity financing and took the company public. In an earlier life, Randy was also an accomplished academic, a full professor and researcher at several Canadian universities. He remains an adjunct professor at the University of Alberta and holds more than 20 patents. About TEC Edmonton TEC Edmonton is a business accelerator that helps emerging technology companies grow successfully. As a joint venture of the University of Alberta and Edmonton Economic Development Corporation, TEC Edmonton operates the Edmonton region's largest accelerator for early-stage technology companies, and also manages commercialization of University of Alberta technologies. TEC Edmonton delivers services in four areas: Business Development, Funding and Finance, Technology Management, and Entrepreneur Development. Since 2011, TEC clients have generated $680M in revenue, raised $350M in financing and funding, invested $200M in R&D, grown both revenue and employment by 25 per cent per year and now employ over 2,400 people in the region. In addition, TEC has assisted in the creation of 22 spinoff companies from the University of Alberta in the last four years. TEC Edmonton was named the 4th best university business incubator in North America by the University Business Incubator (UBI) Global Index in 2015, and "Incubator of the Year" by Startup Canada in 2014. For more information, visit www.tecedmonton.com.


News Article | August 31, 2016
Site: www.scientificcomputing.com

University of Alberta mechanical engineering professors Pierre Mertiny and Marc Secanell are looking to make an old technology new again and save some money for transit train operators such as the Edmonton LRT while they do it. "The flywheel is an old technology, but that's partly what makes it so sensible," says Mertiny. "Fundamentally, it's a really simple technology. We already have everything we need." The two recently calculated that the use of flywheel technology to assist light rail transit in Edmonton, Alberta, would produce energy savings of 31 per cent and cost savings of 11 per cent. Their findings are published in the July 2016 edition of the journal Energy ("Analysis of a flywheel storage system for light rail transit"). A flywheel is exactly what it sounds like: a disk, also known as the rotor, rotates and increases its rotational speed as it is fed electricity. This rotational energy can then be turned back into electrical energy whenever it is needed. It is, in a sense, a mechanical battery. The system loses very little energy to heat or friction because it operates in a vacuum and may even use magnetic bearings to levitate the rotor. Although we don't hear a lot about flywheel technology, it is used for 'high-end' applications, like the International Space Station or race cars built by Audi and Porsche. In North America, high-capacity flywheels are also used in areas of high population density, such as New York, Massachusetts and Pennsylvania, to buffer electricity to prevent power outages. Secanell and Mertiny examined the possibility of using flywheel technology to store energy generated when the city's LRT trains decelerate and stop. Trains such as the LRT are designed with so-called dynamic braking, using traction motors on the train's wheels, for smooth stops. But the deceleration generates energy, which needs to go somewhere. "Electric and fuel cell vehicles already implement regenerative braking in order to store the energy produced during braking for start-up, so why would trains not be able to do so?" says Secanell, whose research also focuses on fuel cell vehicle technologies. Currently that electricity is considered 'dirty' electricity because it is intermittent and therefore difficult to use. Conventional systems simply send the braking electric power to resistors on the train, which convert the electrical energy to heat, which is then released into the air. A flywheel system would take the electrical energy and store it as mechanical energy. This mechanical energy would then be converted back to electrical energy when the train is ready to leave the station again. "It's difficult to use a conventional battery for this purpose," explains Mertiny. "You need to recharge and discharge a lot of energy very quickly. Batteries don't last long under those conditions." Mertiny and Secanell predict that using a flywheel to capture the electricity generated by a train's deceleration and applying it for acceleration would produce an energy savings of 31 per cent and cost savings of 11 per cent on the Edmonton LRT system. A flywheel system could result in substantial energy and cost savings for the city. "The city of Hannover in Germany is already testing flywheel technology for just this purpose," says Mertiny. "They have banks of flywheels at each station to capture and re-use the electricity generated when their trains come into the station." Keeping the flywheels at each station meant that Hannover's trains did not have to be retrofitted.
Secanell and Mertiny are involved in a pan-Canadian Energy Storage Network investigating ways to optimize flywheel energy storage and cost. Mertiny is also currently working with Landmark Homes of Edmonton, through the U of A's Nasseri School of Building Science and Engineering, to develop a prototype flywheel to store solar energy for household use.
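For a sense of scale, a flywheel stores rotational kinetic energy E = ½Iω², and a solid-disk rotor has moment of inertia I = ½mr². The rotor parameters below are illustrative assumptions, not figures from the Energy paper:

    import math

    def disk_flywheel_energy_kwh(mass_kg, radius_m, rpm):
        """Kinetic energy of a solid-disk rotor: E = 0.5 * I * w^2, I = 0.5 * m * r^2."""
        inertia = 0.5 * mass_kg * radius_m ** 2      # kg m^2
        omega = rpm * 2.0 * math.pi / 60.0           # rad/s
        return 0.5 * inertia * omega ** 2 / 3.6e6    # joules -> kWh

    # Hypothetical station rotor: 100 kg, 0.5 m radius, spun to 20,000 rpm.
    print(f"{disk_flywheel_energy_kwh(100.0, 0.5, 20_000):.1f} kWh")  # ~7.6 kWh

Usable capacity is the difference in stored energy between the rotor's maximum and minimum operating speeds, which is why flywheels are cycled within a speed band rather than spun down to a stop.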




News Article | February 1, 2016
Site: motherboard.vice.com

“For me, a calorie is a unit of measurement that’s a real pain in the rear.” Bo Nash is 38. He lives in Arlington, Texas, where he’s a technology director for a textbook publisher. And he’s 5’10” and 245 lbs—which means he is classed as obese. In an effort to lose weight, Nash uses an app to record the calories he consumes and a Fitbit band to track the energy he expends. These tools bring an apparent precision: Nash can quantify the calories in each cracker crunched and stair climbed. But when it comes to weight gain, he finds that not all calories are equal. How much weight he gains or loses seems to depend less on the total number of calories, and more on where the calories come from and how he consumes them. The unit, he says, has a “nebulous quality to it”. Tara Haelle is also obese. She had her second son on St Patrick’s Day in 2014, and hasn’t been able to lose the 70 lbs she gained during pregnancy. Haelle is a freelance science journalist, based in Illinois. She understands the science of weight loss, but, like Nash, doesn’t see it translate into practice. “It makes sense from a mathematical and scientific and even visceral level that what you put in and what you take out, measured in the discrete unit of the calorie, should balance,” says Haelle. “But it doesn’t seem to work that way.” Nash and Haelle are in good company: more than two-thirds of American adults are overweight or obese. For many of them, the cure is diet: one in three are attempting to lose weight in this way at any given moment. Yet there is ample evidence that diets rarely lead to sustained weight loss. These are expensive failures. This inability to curb the extraordinary prevalence of obesity costs the United States more than $147 billion in healthcare, as well as $4.3 billion in job absenteeism and yet more in lost productivity. At the heart of this issue is a single unit of measurement—the calorie—and some seemingly straightforward arithmetic. “To lose weight, you must use up more calories than you take in,” according to the Centers for Disease Control and Prevention. Dieters like Nash and Haelle could eat all their meals at McDonald’s and still lose weight, provided they burn enough calories, says Marion Nestle, professor of nutrition, food studies and public health at New York University. “Really, that’s all it takes.” But Nash and Haelle do not find weight control so simple. And part of the problem goes way beyond individual self-control. The numbers logged in Nash’s Fitbit, or printed on the food labels that Haelle reads religiously, are at best good guesses. Worse yet, as scientists are increasingly finding, some of those calorie counts are flat-out wrong—off by more than enough, for instance, to wipe out the calories Haelle burns by running an extra mile on a treadmill. A calorie isn’t just a calorie. And our mistaken faith in the power of this seemingly simple measurement may be hindering the fight against obesity. The process of counting calories begins in an anonymous office block in Maryland. The building is home to the Beltsville Human Nutrition Research Center, a facility run by the US Department of Agriculture. When we visit, the kitchen staff are preparing dinner for people enrolled in a study. Plastic dinner trays are laid out with meatloaf, mashed potatoes, corn, brown bread, a chocolate-chip scone, vanilla yoghurt and a can of tomato juice. 
The staff weigh and bag each item, sometimes adding an extra two-centimetre sliver of bread to ensure a tray’s contents add up to the exact calorie requirements of each participant. “We actually get compliments about the food,” says David Baer, a supervisory research physiologist with the Department. The work that Baer and colleagues do draws on centuries-old techniques. Nestle traces modern attempts to understand food and energy back to a French aristocrat and chemist named Antoine Lavoisier. In the early 1780s, Lavoisier developed a triple-walled metal canister large enough to house a guinea pig. Inside the walls was a layer of ice. Lavoisier knew how much energy was required to melt ice, so he could estimate the heat the animal emitted by measuring the amount of water that dripped from the canister. What Lavoisier didn’t realise—and never had time to find out; he was put to the guillotine during the Revolution—was that measuring the heat emitted by his guinea pigs was a way to estimate the amount of energy they had extracted from the food they were digesting. Until recently, the scientists at Beltsville used what was essentially a scaled-up version of Lavoisier’s canister to estimate the energy used by humans: a small room in which a person could sleep, eat, excrete, and walk on a treadmill, while temperature sensors embedded in the walls measured the heat given off and thus the calories burned. (We now measure this energy in calories. Roughly speaking, one calorie is the heat required to raise the temperature of one kilogram of water by one degree Celsius.) Today, those ‘direct-heat’ calorimeters have largely been replaced by ‘indirect-heat’ systems, in which sensors measure oxygen intake and carbon dioxide exhalations. Scientists know how much energy is used during the metabolic processes that create the carbon dioxide we breathe out, so they can work backwards to deduce that, for example, a human who has exhaled 15 litres of carbon dioxide must have used 94 calories of energy. The facility’s three indirect calorimeters are down the halls from the research kitchen. “They’re basically nothing more than walk-in coolers, modified to allow people to live in here,” physiologist William Rumpler explains as he shows us around. Inside each white room, a single bed is folded up against the wall, alongside a toilet, sink, a small desk and chair, and a short treadmill. A couple of airlocks allow food, urine, faeces and blood samples to be passed back and forth. Apart from these reminders of the room’s purpose, the vinyl-floored, fluorescent-lit units resemble a 1970s dorm room. Rumpler explains that subjects typically spend 24 to 48 hours inside the calorimeter, following a highly structured schedule. A notice pinned to the door outlines the protocol for the latest study: 6:00 to 6:45pm – Dinner, 11:00pm – Latest bedtime, mandatory lights out, 11:00pm to 6:30am – Sleep, remain in bed even if not sleeping. In between meals, blood tests and bowel movements, calorimeter residents are asked to walk on the treadmill at 3 miles per hour for 30 minutes. They fill the rest of the day with what Rumpler calls “low activity”. “We encourage people to bring knitting or books to read,” he says. “If you give people a free hand, you’ll be surprised by what they’ll do inside the chamber.” He tells us that one of his less cooperative subjects smuggled in a bag of M&Ms, and then gave himself away by dropping them on the floor.
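The carbon dioxide arithmetic described above is, at its core, a fixed proportionality. Here is a minimal Python sketch of that conversion, assuming the simple linear ratio implied by the example (94 calories per 15 litres of CO2); real indirect calorimetry also measures oxygen uptake and corrects for the respiratory quotient, which this does not attempt:

    # Back-of-the-envelope indirect calorimetry, using only the linear
    # ratio implied by the example above (94 kcal per 15 L of CO2).
    # Real systems also track oxygen consumption and the respiratory
    # quotient; this is an illustration, not laboratory code.

    KCAL_PER_LITRE_CO2 = 94 / 15  # roughly 6.3 kcal per litre of exhaled CO2

    def calories_burned(litres_co2_exhaled: float) -> float:
        """Estimate energy expenditure from exhaled CO2 volume."""
        return KCAL_PER_LITRE_CO2 * litres_co2_exhaled

    print(round(calories_burned(15)))  # 94, matching the example above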
Using a bank of screens just outside the rooms, Rumpler can monitor exactly how many calories each subject is burning at any moment. Over the years, he and his colleagues have aggregated these individual results to arrive at numbers for general use: how many calories a 120-lb woman burns while running at 4.0 miles an hour, say, or the calories a sedentary man in his 60s needs to consume every day. It’s the averages derived from thousands of extremely precise measurements that provide the numbers in Bo Nash’s movement tracker and help Tara Haelle set a daily calorie intake target that is based on her height and weight. Measuring the calories in food itself relies on another modification of Lavoisier’s device. In 1848, an Irish chemist called Thomas Andrews realised that he could estimate calorie content by setting food on fire in a chamber and measuring the temperature change in the surrounding water. (Burning food is chemically similar to the ways in which our bodies break food down, despite being much faster and less controlled.) Versions of Andrews’s ‘bomb calorimeter’ are used to measure the calories in food today. At the Beltsville centre, samples of the meatloaf, mashed potatoes and tomato juice have been incinerated in the lab’s bomb calorimeter. “We freeze-dry it, crush into a powder, and fire it,” says Baer. Humans are not bomb calorimeters, of course, and we don’t extract every calorie from the food we eat. This problem was addressed at the end of the 19th century, in one of the more epic experiments in the history of nutrition science. Wilbur Atwater, a Department of Agriculture scientist, began by measuring the calories contained in more than 4,000 foods. Then he fed those foods to volunteers and collected their faeces, which he incinerated in a bomb calorimeter. After subtracting the energy measured in the faeces from that in the food, he arrived at the Atwater values, numbers that represent the available energy in each gram of protein, carbohydrate and fat. These century-old figures remain the basis for today’s standards. When Baer wants to know the calories per gram figure for that night’s meatloaf, he corrects the bomb calorimeter results using Atwater values. This entire enterprise, from the Beltsville facility to the numbers on the packets of the food we buy, creates an aura of scientific precision around the business of counting calories. That precision is illusory. The trouble begins at source, with the lists compiled by Atwater and others. Companies are allowed to incinerate freeze-dried pellets of product in a bomb calorimeter to arrive at calorie counts, though most avoid that hassle, says Marion Nestle. Some use the data developed by Atwater in the late 1800s. But the Food and Drug Administration (FDA) also allows companies to use a modified set of values, published by the Department of Agriculture in 1955, that take into account our ability to digest different foods in different ways. Atwater’s numbers say that Tara Haelle can extract 8.9 calories per gram of fat in a plate of her favourite Tex-Mex refried beans; the modified table shows that, thanks to the indigestibility of some of the plant fibres in legumes, she only gets 8.3 calories per gram. Depending on the calorie-measuring method that a company chooses—the FDA allows two more variations on the theme, for a total of five—a given serving of spaghetti can contain from 200 to 210 calories. These uncertainties can add up. 
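To make the Atwater arithmetic concrete, here is a minimal Python sketch of how a label calorie count is assembled from per-gram energy factors. The 8.9 and 8.3 kcal/g fat values are the ones quoted above; the 4 kcal/g values for protein and carbohydrate are the standard general Atwater factors, and the function itself is an illustration rather than any agency's official method:

    # Assembling a label calorie count from Atwater-style factors.
    # fat_factor defaults to the original Atwater value quoted above;
    # pass 8.3 to use the modified 1955 value for legumes.

    def label_calories(protein_g, carb_g, fat_g,
                       protein_factor=4.0, carb_factor=4.0, fat_factor=8.9):
        return (protein_g * protein_factor
                + carb_g * carb_factor
                + fat_g * fat_factor)

    # The same 10 g of fat in refried beans, under the two tables:
    print(label_calories(0, 0, 10))                  # 89.0 kcal
    print(label_calories(0, 0, 10, fat_factor=8.3))  # 83.0 kcal

The six-calorie gap on a single serving is small, but as the article notes, such discrepancies compound across every item a dieter logs.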
Haelle and Bo Nash might deny themselves a snack or sweat out another few floors on the StairMaster to make sure they don’t go 100 calories over their daily limit. If the data in their calorie counts is wrong, they can go over regardless. There’s also the issue of serving size. After visiting over 40 US chain restaurants, including Olive Garden, Outback Steakhouse and PF Chang’s China Bistro, Susan Roberts of Tufts University’s nutrition research centre and colleagues discovered that a dish listed as having, say, 500 calories could contain 800 instead. The difference could easily have been caused, says Roberts, by local chefs heaping on extra french fries or pouring a dollop more sauce. It would be almost impossible for a calorie-counting dieter to accurately estimate their intake given this kind of variation. Even if the calorie counts themselves were accurate, dieters like Haelle and Nash would have to contend with the significant variations between the total calories in the food and the amount our bodies extract. These variations, which scientists have only recently started to understand, go beyond the inaccuracies in the numbers on the back of food packaging. In fact, the new research calls into question the validity of nutrition science’s core belief that a calorie is a calorie. Using the Beltsville facilities, for instance, Baer and his colleagues found that our bodies sometimes extract fewer calories than the number listed on the label. Participants in their studies absorbed around a third fewer calories from almonds than the modified Atwater values suggest. For walnuts, the difference was 21 per cent. This is good news for someone who is counting calories and likes to snack on almonds or walnuts: he or she is absorbing far fewer calories than expected. The difference, Baer suspects, is due to the nuts’ particular structure: “All the nutrients—the fat and the protein and things like that—they’re inside this plant cell wall.” Unless those walls are broken down—by processing, chewing or cooking—some of the calories remain off-limits to the body, and thus are excreted rather than absorbed. Another striking insight came from an attempt to eat like a chimp. In the early 1970s, Richard Wrangham, an anthropologist at Harvard University and author of the book Catching Fire: How cooking made us human, observed wild chimps in Africa. Wrangham attempted to follow the entirely raw diet he saw the animals eating, snacking only on fruit, seeds, leaves, and insects such as termites and army ants. “I discovered that it left me incredibly hungry,” he says. “And then I realised that every human eats their food cooked.” Wrangham and his colleagues have since shown that cooking unlaces microscopic structures that bind energy in foods, reducing the work our gut would otherwise have to do. It effectively outsources digestion to ovens and frying pans. Wrangham found that mice fed raw peanuts, for instance, lost significantly more weight than mice fed the equivalent amount of roasted peanut butter. The same effect holds true for meat: there are many more usable calories in a burger than in steak tartare. Different cooking methods matter, too. In 2015, Sri Lankan scientists discovered that they could more than halve the available calories in rice by adding coconut oil during cooking and then cooling the rice in the refrigerator. Wrangham’s findings have significant consequences for dieters.
If Nash likes his porterhouse steak bloody, for example, he will likely be consuming several hundred calories less than if he has it well-done. Yet the FDA’s methods for creating a nutrition label do not for the most part account for the differences between raw and cooked food, or pureed versus whole, let alone the structure of plant versus animal cells. A steak is a steak, as far as the FDA is concerned. Industrial food processing, which subjects foods to extremely high temperatures and pressures, might be freeing up even more calories. The food industry, says Wrangham, has been “increasingly turning our food to mush, to the maximum calories you can get out of it. Which, of course, is all very ironic, because in the West there’s tremendous pressure to reduce the number of calories you’re getting out of your food.” He expects to find examples of structural differences that affect caloric availability in many more foods. “I think there is work here for hundreds and probably thousands of nutritionists for years,” he says. There’s also the problem that no two people are identical. Differences in height, body fat, liver size, levels of the stress hormone cortisol, and other factors influence the energy required to maintain the body’s basic functions. Between two people of the same sex, weight and age, this number may differ by up to 600 calories a day—over a quarter of the recommended intake for a moderately active woman. Even something as seemingly insignificant as the time at which we eat may affect how we process energy. In one recent study, researchers found that mice fed a high-fat diet between 9 AM and 5 PM gained 28 percent less weight than mice fed the exact same food across a 24-hour period. The researchers suggested that irregular feedings affect the circadian cycle of the liver and the way it metabolises food, thus influencing overall energy balance. Such differences would not emerge under the feeding schedules in the Beltsville experiments. Until recently, the idea that genetics plays a significant role in obesity had some traction: researchers hypothesised that evolutionary pressures may have favoured genes that predisposed some people to hold on to more calories in the form of added fat. Today, however, most scientists believe we can’t blame DNA for making us overweight. “The prevalence of obesity started to rise quite sharply in the 1980s,” says Nestle. “Genetics did not change in that ten- or twenty-year period. So genetics can only account for part of it.” Instead, researchers are beginning to attribute much of the variation to the trillions of tiny creatures that line the coiled tubes inside our midriffs. The microbes in our intestines digest some of the tough or fibrous matter that our stomachs cannot break down, releasing a flow of additional calories in the process. But different species and strains of microbes vary in how effective they are at releasing those extra calories, as well as how generously they share them with their host human. In 2013, researchers in Jeffrey Gordon’s lab at Washington University tracked down pairs of twins of whom one was obese and one lean. They took gut microbes from each, and inserted them into the intestines of microbe-free mice. Mice that got microbes from an obese twin gained weight; the others remained lean, despite eating the exact same diet. “That was really striking,” said Peter Turnbaugh, who used to work with Gordon and now heads his own lab at the University of California, San Francisco.
“It suggested for the first time that these microbes might actually be contributing to the energy that we gain from our diet.” The diversity of microbes that each of us hosts is as individual as a fingerprint and yet easily transformed by diet and our environment. And though it is poorly understood, new findings about how our gut microbes affect our overall energy balance are emerging almost daily. For example, it seems that medications that are known to cause weight gain might be doing so by modifying the populations of microbes in our gut. In November 2015, researchers showed that risperidone, an antipsychotic drug, altered the gut microbes of mice that received it. The microbial changes slowed the animals’ resting metabolisms, causing them to increase their body mass by 10 per cent in two months. The authors liken the effects to a 30-lb weight gain over one year for an average human, which they say would be the equivalent of an extra cheeseburger every day. Other evidence suggests that gut microbes might affect weight gain in humans as they do in lab animals. Take the case of the woman who gained more than 40 lbs after receiving a transplant of gut microbes from her overweight teenage daughter. The transplant successfully treated the mother’s intestinal infection of Clostridium difficile, which had resisted antibiotics. But, as of the study’s publication last year, she hadn’t been able to shed the excess weight through diet or exercise. The only aspect of her physiology that had changed was her gut microbes. All of these factors introduce a disturbingly large margin of error for an individual who is trying, like Nash, Haelle and millions of others, to count calories. The discrepancies between the number on the label and the calories that are actually available in our food, combined with individual variations in how we metabolise that food, can add up to much more than the 200 calories a day that nutritionists often advise cutting in order to lose weight. Nash and Haelle can do everything right and still not lose weight. None of this means that the calorie is a useless concept. Inaccurate as they are, calorie counts remain a helpful guide to relative energy values: standing burns more calories than sitting; cookies contain more calories than spinach. But the calorie is broken in many ways, and there’s a strong case to be made for moving our food accounting system away from that one particular number. It’s time to take a more holistic look at what we eat. Wilbur Atwater worked in a world with different problems. At the beginning of the 20th century, nutritionists wanted to ensure people were well fed. The calorie was a useful way to quantify a person’s needs. Today, excess weight affects more people than hunger; 1.9 billion adults around the world are considered overweight, 600 million of them obese. Obesity brings with it a higher risk of diabetes, heart disease and cancer. This is a new challenge, and it is likely to require a new metric. One option is to focus on something other than energy intake. Like satiety, for instance. Picture a 300-calorie slice of cheesecake: it is going to be small. “So you’re going to feel very dissatisfied with that meal,” says Susan Roberts. If you eat 300 calories of a chicken salad instead, with nuts, olive oil and roasted vegetables, “you’ve got a lot of different nutrients that are hitting all the signals quite nicely,” she says. “So you’re going to feel full after you’ve eaten it.
That fullness is going to last for several hours.” As a result of her research, Roberts has created a weight-loss plan that focuses on satiety rather than a straight calorie count. The idea is that foods that help people feel satisfied and full for longer should prevent them from overeating at lunch or searching for a snack soon after clearing the table. Whole apples, white fish and Greek yoghurt are on her list of the best foods for keeping hunger at bay. There’s evidence to back up this idea: in one study, Roberts and colleagues found that people lost three times more weight by following her satiety plan compared with a traditional calorie-based one—and kept it off. Harvard nutritionist David Ludwig, who also proposes evaluating food on the basis of satiety instead of calories, has shown that teens given instant oats for breakfast consumed 650 more calories at lunch than their peers who were given the same number of breakfast calories in the form of a more satisfying omelette and fruit. Meanwhile, Adam Drewnowski, an epidemiologist at the University of Washington, has his own calorie upgrade: a nutrient density score. This system ranks food in terms of nutrition per calorie, rather than simply overall caloric value. Dark green vegetables and legumes score highly. Though the details of their approaches differ, all three agree: changing how we measure our food can transform our relationship with it for the better. Individual consumers could start using these ideas now. But persuading the food industry and its watchdogs, such as the FDA, to adopt an entirely new labelling system based on one of these alternative measures is much more of a challenge. Consumers are unlikely to see the calorie replaced by Roberts’s or Drewnowski’s units on their labels any time soon; nonetheless, this work is an important reminder that there are other ways to measure food, ones that might be more useful for both weight loss and overall health. Down the line, another approach might eventually prove even more useful: personalised nutrition. Since 2005, David Wishart of the University of Alberta has been cataloguing the hundreds of thousands of chemical compounds in our bodies, which make up what’s known as the human metabolome. There are now 42,000 chemicals on his list, and many of them help digest the food we eat. His food metabolome database is a more recent effort: it contains about 30,000 chemicals derived directly from food. Wishart estimates that both databases may end up listing more than a million compounds. “Humans eat an incredible variety of foods,” he says. “Then those are all transformed by our body. And they’re turned into all kinds of other compounds.” We have no idea what they all are, he adds—or what they do. According to Wishart, these chemicals and their interactions affect energy balance. He points to research demonstrating that high-fructose corn syrup and other forms of added fructose (as opposed to fructose found in fruit) can trigger the creation of compounds that lead us to form an excess of fat cells, unrelated to additional calorie consumption. “If we cut back on some of these things,” he says, “it seems to revert our body back to more appropriate, arguably less efficient metabolism, so that we aren’t accumulating fat cells in our body.” It increasingly seems that there are significant variations in the way each one of us metabolises food, based on the tens of thousands—perhaps millions—of chemicals that make up each of our metabolomes.
This, in combination with the individuality of each person’s gut microbiome, could lead to the development of personalised dietary recommendations. Wishart imagines a future where you could hold up your smartphone, snap a picture of a dish, and receive a verdict on how that food will affect you as well as how many calories you’ll extract from it. Your partner might receive completely different information from the same dish. Or maybe the focus will shift to tweaking your microbial community: if you’re trying to lose weight, perhaps you will curate your gut microbiome so as to extract fewer calories without harming your overall health. Peter Turnbaugh cautions that the science is not yet able to recommend a particular set of microbes, let alone how best to get them inside your gut, but he takes comfort from the fact that our microbial populations are “very plastic and very malleable”—we already know that they change when we take antibiotics, when we travel and when we eat different foods. “If we’re able to figure this out,” he says, “there is the chance that someday you might be able to tailor your microbiome” to get the outcomes you want. None of these alternatives is ready to replace the calorie tomorrow. Yet the need for a new system of food accounting is clear. Just ask Haelle. “I’m kind of pissed at the scientific community for not coming up with something better for us,” she confesses, recalling a recent meltdown at TGI Friday’s as she navigated a confusing datasheet to find a low-calorie dish she could eat. There should be a better metric for people like her and Nash—people who know the health risks that come with being overweight and work hard to counter them. And it’s likely there will be. Science has already shown that the calorie is broken. Now it has to find a replacement. This story originally appeared on Mosaic with the headline, "Why the calorie is broken." It is published under a CC BY 4.0 license.


News Article | November 16, 2016
Site: www.marketwired.com

EDMONTON, AB--(Marketwired - November 16, 2016) - TEC Edmonton announced today that its annual VenturePrize competition is now open. This year marks the 15th anniversary of VenturePrize, Alberta's premier business plan competition. TEC Edmonton invites companies province-wide to submit business plans in Health, Fast Growth, Student, and Information & Communications Technology streams. "This is a very exciting year for VenturePrize," says TEC Edmonton CEO Chris Lumb. "We look forward to offering the best of TEC Edmonton's network of expertise to help Alberta companies achieve their goals." More than a competition, the road to the VenturePrize finals includes educational opportunities spread out over several months, involving a seminar series and personalized business coaching. The month-long seminar series covers a wide variety of business, marketing and legal topics designed to help participants perfect their business plans and hone their pitching skills. Companies that are paired with mentors will also receive personalized expertise from seasoned entrepreneurs. Interested companies can register for VenturePrize through TEC Edmonton's website. Past VenturePrize finalists and winners include Fitset, MagnetTx Oncology Solutions, Pogo CarShare, and Localize. TEC Edmonton is a business accelerator that helps emerging technology companies grow successfully. As a joint venture of the University of Alberta and Edmonton Economic Development Corporation, TEC Edmonton operates the Edmonton region's largest accelerator for early-stage technology companies, and also manages commercialization of University of Alberta technologies. TEC Edmonton delivers services in four areas: Business Development, Funding and Finance, Technology Management, and Entrepreneur Development. Since 2011, TEC clients have generated $680M in revenue, raised $350M in financing and funding, invested $200M in R&D, grown both revenue and employment by 25 per cent per year and now employ over 2,400 people in the region. In addition, TEC has assisted in the creation of 22 spinoff companies from the University of Alberta in the last four years. TEC Edmonton was named the 4th best university business incubator in North America by the University Business Incubator (UBI) Global Index in 2015, and "Incubator of the Year" by Startup Canada in 2014. For more information, visit www.tecedmonton.com.


News Article | January 6, 2016
Site: phys.org

"The differences are subtle, and most humans wouldn't pick up on them, yet the birds do perceive the variances," says Chris Sturdy, professor of psychology at the University of Alberta and one of the authors on a recent study of the birds' vocal communication. "These birds can pack a lot of information into a really simple signal." Sturdy and his former PhD student Allison Hahn, lead author on the study, were measuring the production and perception of chickadee vocalizations. "We're studying natural vocalizations, and we're asking the birds how they perceive them," says Hahn. "We can see if there are certain features within their vocalizations that birds rely on more than others to perceive differences." Focusing on the chickadees' song (or "fee-bee"), used by the birds for territorial defence and mate attraction—versus the call (the ubiquitous "chick-a-dee-dee-dee") used for flock mobilization and communication among flock members—the scientists found that the birds generalized their learning from one song to another and could make distinctions among similar songs from other geographical regions. They worked with birds from Alberta—chickadees are non-migratory—and used recordings from other locations to conduct their study. Hahn and Sturdy made several other significant discoveries by studying the bioacoustic differences between male and female chickadees. "The previous belief was that females didn't even sing," says Sturdy, noting that male song has been the dominant area of inquiry until now. "People have been studying songbirds forever, but no one to date has documented the fact that female vocal production is symmetrical to males." Sturdy notes that the findings present an entirely new avenue of natural history research. Along with perceiving subtle nuances based on geography, the birds can also tell the difference between the sexes singing a similar song. "We taught the birds how to discriminate the two," says Hahn, noting that the focus was on perception rather than behaviour. "Our studies help us and the world to better understand nature," notes Sturdy. The U of A researchers in comparative cognition and neuroethology are unique in their combination of fundamental studies of vocal production with studies of perception—allowing the discovery of whether acoustic differences observable in the first type of studies are used by the birds in meaningful ways to make sense of their auditory worlds. The findings, "Black-capped chickadees categorize songs based on features that vary geographically," were published in the leading international journal Animal Behavior. Explore further: Chickadees Tweet About Themselves More information: Allison H. Hahn et al. Black-capped chickadees categorize songs based on features that vary geographically, Animal Behaviour (2016). DOI: 10.1016/j.anbehav.2015.11.017


News Article | March 2, 2017
Site: www.fastcompany.com

Beth Linas has a reputation among her scientific colleagues for her love of social media. “Oh I’m ridiculed,” she tells me, via Twitter direct message (of course). “Not by everyone, but by some old school folk.” Linas, an infectious disease epidemiologist, tweets regularly about topics that she’s passionate about, whether it’s mobile technology or public health. During her fellowship year at the National Science Foundation, she is leveraging social media to help debunk theories that aren’t scientifically validated, such as that vaccines are linked to autism, as well as improve health literacy and inspire more women to train for STEM careers. Increasingly, young scientists like Linas regard Facebook, Twitter, and blogging platforms as a key part of their day job. Not everyone is on board. Linas stresses that the ridicule from her colleagues isn’t mean-spirited, but it still demonstrates some fundamental discomfort with engaging with the public. Social media is viewed by many, she says, as time spent away from more important work, like peer-reviewed research. Experts say that academics have to walk a fine line, even today. Many scientists today will encounter a “cultural pushback,” says Tim Caulfield, a health policy professor at the University of Alberta, if they’re viewed as being too “self promotional.” Carl Sagan, for instance, is remembered for his television persona but many forget that he was also a prolific scientific researcher. Scientists on social media also risk alienating colleagues or university officials if they tweet or post about a controversial topic that doesn’t reflect well on their institution. For Prachee Avasthi, assistant professor of anatomy and cell biology at University of Kansas Medical Center, the biggest risk is to her reputation within the scientific community, so she watches what she tweets. “[Another] scientist might have power over me in that they might review my grants or papers.” Despite the risks, experts who have studied the trend see this as a way to increase public support for the sciences at a time when the Trump administration is questioning facts and threatening funding for basic research. “I tell academics that social media is now the top source of science information for people,” says Paige Jarreau, a science communication specialist at Louisiana State University. “And they are a trusted voice for people that don’t have that background and literacy.” Surveys show that public confidence in the scientific community has remained stable since the early 1970s, and that they are more trusted than public officials and religious leaders. For that reason, Caulfield argues that it’s meaningful for scientists to be part of the conversation even if they have far fewer followers than celebrities peddling pseudo-science, such as actress and Goop founder Gwyneth Paltrow (Caulfield is the author of a book titled Is Gwyneth Paltrow Wrong About Everything?). A trusted voice can be very influential, he says. “[Scientists communicating online] is an important part of pushing back against misinformation.” Caulfield says he has been personally criticized for “spending so much time tweeting,” but he’s noticed a shift in recent years. Now, he says, students, scientists, and universities are approaching him to advise them on how to communicate their work to the public. At universities and medical centers, including Louisiana State University, science departments are now hosting regular workshops to encourage scientists to be present on social media.
For Dana Smith, a science writer and communicator who previously worked at the Gladstone Institutes, it’s no longer an option for scientists not to engage with lay audiences. “It’s becoming a moral obligation,” she says, with much of their research funding coming from taxpayers. For this reason, she personally made the switch from academia–she was a doctoral psychology researcher at the University of Cambridge–to communications. She doesn’t think everyone needs to be the “next public face of science,” but she encourages researchers to try their hand at the occasional blog post or tweet.


News Article | August 31, 2016
Site: www.nature.com

No statistical methods were used to predetermine sample size. The experiments were not randomized. The investigators were not blinded to allocation during experiments and outcome assessment. We obtained 23 sediment cores from 8 different lakes by using a percussion corer deployed from the frozen lake surface (ref. 51). To prevent any internal mixing, we discarded all upper suspended sediments and only kept the compacted sediment for further investigation. Cores were cut into smaller sections to allow transport and storage. All cores were taken to laboratories at the University of Calgary and were stored cold at 5 °C until subsequent subsampling. Cores were split using an adjustable tile saw, cutting only the PVC pipe. The split half was taken into a positive pressure laboratory for DNA subsampling. DNA samples were taken wearing a full body suit, mask and sterile gloves; the top 10 mm were removed using two sterile scalpels and samples were taken with a 5 ml sterile disposable syringe (3–4 cm3) and transferred to a 15 ml sterile spin tube. Caution was taken not to cross-contaminate between layers or to sample sediments in contact with the inner side of the PVC pipe. Samples were taken every centimetre in the lowest 1 m of the core (except for Spring Lake, the lowest 2 m), then at intervals of 2 cm higher up, and finally every 5 cm; samples were subsequently frozen until analysed. Pollen samples were taken immediately next to the DNA samples, while macrofossil samples were cut from the remaining layer in 1 cm or 2 cm slices. Following sampling, the second intact core halves were visually described and wrapped for transport. All cores were stored at 5 °C before, during and after shipment to the University of Copenhagen (Denmark). An ITRAX core scanner was used to take high-resolution images and to measure magnetic susceptibility at the Department of Geoscience, Aarhus University. Magnetic susceptibility (ref. 52) was measured every 0.5 cm using a Bartington Instruments MS2 system (Extended Data Fig. 2). Pollen was extracted using a standard protocol (ref. 30). Lycopodium markers were added to determine pollen concentrations (ref. 53) (see Supplementary Information). Samples were mounted in (2000 cs) silicone oil and pollen including spores were counted using a Leica Laborlux-S microscope at 400× magnification and identified using keys (refs 30, 53, 54) as well as reference collections of North American and Arctic pollen housed at the University of Alberta and the Danish Natural History Museum, respectively. Pollen and pteridophyte spores were identified at least to family level and, more typically, to genera. Green algae coenobia of Pediastrum boryanum and Botryococcus were recorded to track changes in lake trophic status. Pollen influx values were calculated as pollen concentrations divided by the deposition rate (see Supplementary Information). Microfossil diagrams were produced and analysed using PSIMPOLL 4.10 (ref. 31). The sequences were zoned with CONIIC (ref. 31), a stratigraphically constrained clustering technique, using the information statistic as a distance measure. All macrofossils were retrieved using a 100 μm mesh size and were identified but not quantified. Plant macrofossils identified as terrestrial taxa (or unidentifiable macrofossils with terrestrial characteristics where no preferable material could be identified) were selected for radiocarbon (14C) dating of the lacustrine sediment.
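The pollen concentration and influx calculations mentioned above are simple arithmetic, and a short sketch may help. This assumes the standard exotic-marker method (a known number of Lycopodium spores added per sample); the variable names and numbers are illustrative, not values from the study:

    # Pollen concentration via added Lycopodium markers, and influx as
    # concentration divided by the deposition rate (years per cm of
    # sediment). Illustrative numbers only.

    def pollen_concentration(pollen_counted, markers_counted,
                             markers_added, sample_volume_cm3):
        """Fossil pollen grains per cm3 of sediment."""
        return (pollen_counted / markers_counted) * markers_added / sample_volume_cm3

    def pollen_influx(concentration_per_cm3, years_per_cm):
        """Grains deposited per cm2 per year."""
        return concentration_per_cm3 / years_per_cm

    conc = pollen_concentration(312, 45, 10_000, 1.0)  # ~69,300 grains per cm3
    print(round(pollen_influx(conc, 25)))              # ~2,773 grains per cm2 per year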
All macrofossils were subjected to a standard acid-base-acid (ABA) chemical pre-treatment at the Oxford Radiocarbon Accelerator Unit (ORAU), following a standard protocol (ref. 55), with appropriate ‘known age’ (that is, independently dendrochronologically-dated tree-ring) standards run alongside the unknown age plant macrofossil samples (ref. 56). Specifically, this ABA chemical pre-treatment (ORAU laboratory pre-treatment code ‘VV’) involved successive 1 M HCl (20 min, 80 °C), 0.2 M NaOH (20 min, 80 °C) and 1 M HCl (1 h, 80 °C) washes, with each stage followed by rinsing to neutrality (≥3 times) with ultrapure MilliQ deionised water. The three principal stages of this process (successive ABA washes) are similar across most radiocarbon laboratories and are, respectively, intended to remove: (i) sedimentary- and other carbonate contaminants; (ii) organic (principally humic- and fulvic-) acid contaminants; and (iii) any dissolved atmospheric CO2 that might have been absorbed during the preceding base wash. Thus, any potential secondary carbon contamination was removed, leaving the samples pure for combustion and graphitisation. Accelerator mass spectrometry (AMS) 14C dating was subsequently performed on the 2.5 MV HVEE tandem AMS system at ORAU (ref. 57). As is standard practice, measurements were corrected for natural isotopic fractionation by normalizing the data to a standard δ13C value of −25‰ VPDB, before reporting as conventional 14C ages before present (BP, before AD 1950) (ref. 58). These 14C data were calibrated with the IntCal13 calibration curve (ref. 59) and modelled using the Bayesian statistical software OxCal v. 4.2 (ref. 60). Poisson process (‘P_Sequence’) deposition models were applied to each of the Charlie and Spring Lake sediment profiles (ref. 61), with objective ‘Outlier’ analysis applied to each of the constituent 14C determinations (ref. 62). The P_Sequence model takes into account the complexity (randomness) of the underlying sedimentation process, and thus provides realistic age-depth models for the sediment profiles on the calibrated radiocarbon (IntCal) timescale. The rigidity of the P_Sequence (the regularity of the sedimentation rate) is determined iteratively within OxCal through a model averaging approach, based upon the likelihood (calibrated 14C) data included within the model (ref. 60). A prior ‘Outlier’ probability of 5% was applied to each of the 14C determinations, because there was no reason, a priori, to believe that any samples were more likely to be statistical outliers than others. All 14C determinations are provided in Extended Data Table 1; OxCal model coding is provided in the Supplementary Information; and plots of the age-depth models derived for Spring and Charlie Lakes are given in Extended Data Fig. 2.
For removal of inhibitors we used the MOBIO (MO BIO Laboratories, Carlsbad, CA) C2 and C3 buffers following the manufacturer’s protocol. The extracts were further purified using phenol-chloroform and concentrated using 30 kDa Amicon Ultra-4 centrifugal filters as described in the Andersen extraction protocol (ref. 63). Our extraction method was changed from this protocol with the following modifications: no lysis matrix was added due to the minerogenic nature of the samples, and the two-phenol, one-chloroform step was altered so that phenol, chloroform and supernatant were added simultaneously in the respective ratio 1:0.5:1, followed by gentle rotation at room temperature for 10 min and spinning for 5 min at 3,200g. For dark-coloured extracts, this phenol:chloroform step was repeated. All extracts were quantified using the Quant-iT dsDNA HS assay kit (Invitrogen) on a Qubit 2.0 Fluorometer according to the manufacturer’s manual. The measured concentrations were used to calculate the total ng DNA extracted per g of sediment (Fig. 2). Thirty-two samples were prepared for shotgun metagenome sequencing (ref. 64) using the NEBNext DNA Library Prep Master Mix Set for 454 (New England BioLabs) following the manufacturer’s protocol with the following modifications: (i) all reaction volumes (except for the end repair step) were decreased to half the size as in the protocol, and (ii) all purification steps were performed using the MinElute PCR Purification kit (Qiagen). Metagenome libraries were amplified using AmpliTaq Gold (Applied Biosystems) for 14–20 cycles and quantified using the 2100 BioAnalyzer chip (Agilent). All libraries were purified using Agencourt AMPure XP beads (BeckmanCoulter), quantified on the 2100 BioAnalyzer and pooled equimolarly. All pooled libraries were sequenced on an Illumina HiSeq 2500 platform and treated as single-end reads. Metagenomic reads were demultiplexed and trimmed using AdapterRemoval 1.5 (ref. 65) with a minimum base quality of 30 and minimum length of 30 bp (ref. 66). All reads with poly-A/T tails ≥ 4 were removed from each sample. Low-quality reads and duplicates were removed using String Graph Assembler (SGA; ref. 67), setting the preprocessing tool dust-threshold = 1 and index algorithm = ‘ropebwt’, and using the SGA filter tool to remove exact and contained duplicates. Each quality-controlled (QC) read was thereafter allowed an equal chance to map to reference sequences using Bowtie2 version 2.2.4 (ref. 68) (end-to-end alignment, mode -k 50; for example, reads were allowed a total of 500 hits before being parsed). A few reads with more than 500 matches were confirmed by checking that the best blast hit belonged to this taxon, and that alternative hits had lower e-values and alignment scores. We used the full nucleotide database (nt) from GenBank (accessed 4 March 2015), which due to size and downstream handling was divided into 9 consecutive equally sized databases and indexed using Bowtie2-build. All QC-checked fastq files were aligned end-to-end using Bowtie2 default settings. Each alignment was merged using SAMtools (ref. 69), sorted according to read identifier and imported to MEGAN v. 10.5 (ref. 70). We performed a lowest common ancestor (LCA) analysis using the built-in algorithm in MEGAN and computed the taxonomic assignments employing the embedded NCBI taxonomic tree (March 2015 version) on reads having 100% matches to a reference sequence.
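A toy illustration of this lowest-common-ancestor step may help: a read that matches several reference taxa is assigned to the deepest taxon shared by all of its hits' lineages, and assigned taxa are then filtered by detection thresholds like those described below. MEGAN runs the LCA against the full NCBI taxonomy; the miniature parent table and taxa here are hypothetical, and the threshold functions simplify the rules stated in the text:

    # Toy lowest-common-ancestor (LCA) assignment over a tiny taxonomy.

    PARENT = {                      # child -> parent (hypothetical subtree)
        "Picea glauca": "Picea",
        "Picea mariana": "Picea",
        "Picea": "Pinaceae",
        "Pinaceae": "Viridiplantae",
        "Salix": "Viridiplantae",
        "Viridiplantae": "root",
    }

    def lineage(taxon):
        """Path from a taxon up to the root."""
        path = [taxon]
        while taxon in PARENT:
            taxon = PARENT[taxon]
            path.append(taxon)
        return path

    def lca(hit_taxa):
        """Deepest taxon present in every hit's lineage."""
        lineages = [lineage(t) for t in hit_taxa]
        common = set(lineages[0]).intersection(*map(set, lineages[1:]))
        return next(t for t in lineages[0] if t in common)

    # Simplified versions of the detection thresholds described below:
    def keep_plant(taxon_reads, total_assigned):
        return taxon_reads / total_assigned >= 0.001   # the 0.1% cut-off

    def keep_metazoan(reads_per_sample):
        return (max(reads_per_sample) >= 3
                or sum(r > 0 for r in reads_per_sample) >= 3)

    print(lca(["Picea glauca", "Picea mariana"]))  # Picea
    print(lca(["Picea glauca", "Salix"]))          # Viridiplantae
    print(keep_plant(1200, 985_818))               # True (~0.12% of reads)
    print(keep_metazoan([2, 0, 0, 0]))             # False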
We call this pipeline ‘Holi’ because it takes a holistic approach: it has no a priori assumption of environment, and each read is given an equal chance to align against the nt database, which contains the vast majority of organismal sequences (see Supplementary Information). In silico testing of ‘Holi’ sensitivity (see Supplementary Information) revealed 0.1% as a reliable minimum threshold for Viridiplantae taxa. For metazoan reads, which were found to be under-represented in our data, we set this threshold to 3 unique reads in one sample or 3 unique reads in three different samples from the same lake. In addition, we confirmed each read assigned within the metazoans by checking that the best blast hit belonged to this taxon, and that alternative hits had lower e-values and alignment scores (ref. 71). We merged all sequences from all blanks and subtracted this from the total data set (instead of pairing for each extract and library build), using lowest taxonomic end nodes. Candidate detection was performed by decreasing the detection threshold in ‘Holi’ from 0.1% to 0.01% to increase the detection of contaminating plants; similarly for metazoans, we decreased the detection level and subtracted all taxa with 2 or more reads (see Supplementary Information). We performed a series of in silico tests to measure the sensitivity and specificity of our assignment method and to estimate the likelihood of false positives (see Supplementary Information). We generated 1,030,354,587 Illumina reads distributed across 32 sediment samples and used the dedicated computational pipeline (‘Holi’) for handling read de-multiplexing, adaptor trimming, quality control, and duplicate and low-complexity read removal (see Supplementary Information). The 257,890,573 reads passing filters were further aligned against the whole non-redundant nucleotide (nt) sequence database (ref. 72). Hereafter, we used a lowest common ancestor approach (ref. 70) to recover taxonomic information from the 985,818 aligning reads. Plants represented by less than 0.1% of the total reads assigned were discarded to limit false positives resulting from database mis-annotations, PCR and sequencing errors (see Supplementary Information). Given the low number of reads assigned to multicellular, eukaryotic organisms (metazoans), we set a minimal threshold of 3 counts per sample or 1 count in each of three samples. For plants and metazoans this resulted in 511,504 and 2,596 reads assigned at the family or genus levels, respectively. The read counts were then normalized for generating plant and metazoan taxonomic profiles (Extended Data Figs 5 and 6). Taxonomic profiles for reads assigned to bacteria, archaea, fungi and alveolata were also produced (see Supplementary Information). We estimated the DNA damage levels using the MapDamage package 2.0 (ref. 40) for the most abundant organisms (Extended Data Fig. 7b). These represent distinctive sources, which help to account for potential differences between damage accumulated from source to deposition or during deposition. Input SAM files were generated for each sample using Bowtie2 (ref. 68) to align all QC reads from each sample against each reference genome. All aligning sequences were converted to BAM format, sorted and parsed through MapDamage by running the statistical estimation using only the 5′-ends (--forward) for single reads.
All frequencies of cytosine to thymine mutations per position from the 5′ ends were parsed and the standard deviation was calculated to generate DNA damage models for each lake (Extended Data Fig. 7a and Supplementary Information).
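The damage signal being estimated here is the frequency of apparent cytosine-to-thymine substitutions at each position from the 5′ end of the reads, the classic deamination pattern of degraded ancient DNA. A self-contained sketch of that counting, with made-up aligned sequences (MapDamage itself works from BAM alignments and fits a statistical model, which this does not attempt):

    # Count how often a reference C appears as T in the read, per
    # position from the 5' end. Elevated C->T at the first positions
    # is the ancient-DNA deamination signature. Toy data only.

    def ct_frequencies(pairs, length):
        """pairs: (reference, read) strings already aligned, equal length."""
        opportunities = [0] * length   # positions where the reference has a C
        ct_counts = [0] * length       # ...and the read shows a T
        for ref, read in pairs:
            for i in range(min(length, len(ref))):
                if ref[i] == "C":
                    opportunities[i] += 1
                    if read[i] == "T":
                        ct_counts[i] += 1
        return [c / o if o else 0.0 for c, o in zip(ct_counts, opportunities)]

    reads = [("CCGAC", "TCGAC"), ("CAGCC", "TAGCC"), ("CCGAC", "CTGAC")]
    print(ct_frequencies(reads, 5))  # high C->T near position 0, as in damaged DNA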


News Article | February 15, 2017
Site: www.eurekalert.org

Imagine patterning and visualizing silicon at the atomic level, something which, if done successfully, will revolutionize the quantum and classical computing industry. A team of scientists in Edmonton, Canada has done just that, led by a world-renowned physicist and his up-and-coming protégé. University of Alberta PhD student Taleana Huff teamed up with her supervisor Robert Wolkow to channel a technique called atomic force microscopy--or AFM--to pattern and image electronic circuits at the atomic level. This is the first time the powerful technique has been applied to atom-scale fabrication and imaging of a silicon surface, notoriously difficult because the act of applying the technique risks damaging the silicon. However, the reward is worth the risk, because this level of control could stimulate the revolution of the technology industry. "It's kind of like braille," explains Huff. "You bring the atomically sharp tip really close to the sample surface to simply feel the atoms by using the forces that naturally exist among all materials." One of the problems with working at the atomic scale is the risk of perturbing the thing you are measuring by the act of measuring it. Huff, Wolkow, and their research collaborators have largely overcome those problems and as a result can now build by moving individual atoms around: most importantly, those atomically defined structures result in a new level of control over single electrons. This is the first time that the powerful AFM technique has been shown to see not only the silicon atoms but also the electronic bonds between those atoms. Central to the technique is a powerful new computational approach that analyzes and verifies the identity of the atoms and bonds seen in the images. "We couldn't have performed these new and demanding computations without the support of Compute Canada. This combined computation and measurement approach succeeds in creating a foundation for a whole new generation of both classical and quantum computing architectures," says Wolkow. He has his long-term sights set on making ultra-fast and ultra-low-power silicon-based circuits, potentially consuming ten thousand times less power than what is on the market. "Imagine instead of your phone battery lasting a day that it could last weeks at a time, because you're only using a couple of electrons per computational pattern," says Huff, who explains that the precision of the work will allow the group and potential industry investors to geometrically pattern atoms to make just about any kind of logic structure imaginable. This hands-on work was exactly what drew the self-described Canadian-by-birth American-by-personality to condensed matter physics in the University of Alberta's Faculty of Science. Following undergraduate work in astrophysics--and an internship at NASA--Huff felt the urge to get more tangible with her graduate work. (With hobbies that include power lifting and motorcycle restoration, she comes by the desire for tangibility quite honestly.) "I wanted something that I could touch, something that was going to be a physical product I could work with right away," says Huff. And in terms of who she wanted to work with, she went straight to the top, seeking out Wolkow, renowned the world over for his work with quantum dots, dangling bonds, and industry-pushing work on atomic-scale science. "He just has such passion and conviction for what he does," she continues. "With Bob, it's like, 'we're going to change the world.' I find that really inspiring," says Huff. 
"Taleana has the passion and the drive to get very challenging things done. She now has understanding and skills that are truly unique in the world giving us a great advantage in the field," says Wolkow. "We just need to work on her taste in music," he adds with a laugh. The group's latest research findings, "Possible observation of chemical bond contrast in AFM images of a hydrogen terminated silicon surface" were published in the February 13, 2017 issue of Nature Communications.


News Article | November 30, 2016
Site: www.eurekalert.org

Montreal, November 30, 2016 -- For some, the start of December marks the beginning of the most wonderful time of the year. But for most university students, the coming weeks mean final exams, mounting stress and negative moods. While that doesn't seem like an ideal combination for great grades, new research from Concordia University in Montreal shows that the occasional bout of bad feelings can actually improve students' academic success. A study published in Developmental Psychology by Erin Barker, professor of psychology in Concordia's Faculty of Arts and Science, shows that students who were mostly happy during their four years of university but who also experienced occasional negative moods had the highest GPAs at the time of graduation. In contrast, the study also confirmed that students who experienced high levels of negative moods and low levels of positive moods often ended up with the lowest GPAs -- a pattern consistent with depressive disorders. "Students often report feeling overwhelmed and experiencing high levels of anxiety and depressive symptoms," says Barker, who is also a member of the Centre for Research in Human Development. "This study shows that we need to teach them strategies to both manage negative emotions and stress in productive ways, and to maintain positive emotional experiences." For the study, Barker and her co-authors* worked with 187 first-year students at a large university. The researchers tracked the students throughout their four years of schooling by having them complete questionnaires about recent emotional experiences each year, beginning in the first year and continuing throughout their undergraduate degree. "We looked at students' response patterns to better understand how experiences of positive and negative emotions occurred over time. We then combined average patterns to look at how each person varied from their own average and examined different combinations of trait and state affects together," Barker explains. "This allowed us to identify the pattern associated with the greatest academic success: those who were happy for the most part, but who also showed bouts of elevated negative moods." These findings demonstrate that both negative and positive emotions play a role in our successes. "We often think that feeling bad is bad for us. But if you're generally a happy person, negative emotions can be motivating. They can signal to you that there is a challenge that you need to face. Happy people usually have coping resources and support that they draw on to meet that challenge." In January, Barker and psychology graduate students Sarah Newcomb-Anjo and Kate Mulvihill will expand on this research by launching a new study focused on life beyond graduation. Their plan: examine patterns of emotional experience and well-being as former students navigate new challenges associated with finding work or entering a post-graduation program. *Partners in research: This study was co-authored by Carsten Wrosch, professor of psychology in Concordia's Faculty of Arts and Science, Andrea L. Howard from Carleton University and Nancy L. Galambos from the University of Alberta. The research was funded by a grant awarded to N. Galambos by the Social Sciences and Humanities Research Council of Canada.
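The trait-versus-state decomposition Barker describes, comparing each student's yearly moods against that student's own average, resembles standard person-mean centering. A minimal sketch under that assumption, with invented numbers; this is not the authors' code or data:

    # Person-mean centering: a person's "trait" affect is their own mean
    # across the four annual reports; each year's "state" affect is the
    # deviation from that mean. Scores below are made up.

    scores = {  # hypothetical negative-affect ratings, one per year
        "student_a": [2.1, 2.4, 3.8, 2.2],  # mostly low, one elevated year
        "student_b": [4.0, 4.2, 4.1, 4.3],  # chronically elevated
    }

    for student, xs in scores.items():
        trait = sum(xs) / len(xs)                   # between-person component
        states = [round(x - trait, 2) for x in xs]  # within-person deviations
        print(student, "trait:", round(trait, 2), "states:", states)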


News Article | November 2, 2016
Site: www.eurekalert.org

(Edmonton) A new discovery from University of Alberta scientists represents an important milestone in the fight against thyroid cancer. In a study published in EBioMedicine and recently presented at the American Thyroid Association annual meeting, the team has identified a marker of aggressive disease for papillary thyroid cancer, which comprises about 90 per cent of all thyroid cancers. The marker--a protein known as Platelet Derived Growth Factor Receptor Alpha, or PDGFRA--could also be used as a therapeutic target for future treatments. Todd McMullen, senior author and associate professor of surgery with the U of A's Faculty of Medicine & Dentistry, believes the findings will have a significant clinical impact. "The big problem for individual patients and physicians is knowing if the patient has the disease that is easy to treat or if they have a more aggressive variant. A lot of patients get over-treated simply because we don't want to miss the one case in five that may spread to other sites," says McMullen. "The only way to be sure it doesn't spread is to undertake a larger surgery which can have lifelong consequences. Most of these patients are young. They have children. The majority tend to opt for the surgery because until now we haven't had another tool to help them know when it is needed." Each year approximately 6,300 Canadians will be diagnosed with thyroid cancer. More than three quarters of those patients are women. Treatments for the disease include radioactive iodine therapy and surgery. Those who opt for aggressive surgery can see their speech affected, have trouble eating, swallowing and even breathing as a result. "We came up with a tool to identify aggressive tumours so that people can have just the right amount of surgery. No more, no less," says McMullen. "What we're really excited about is that this is both a diagnostic tool and a therapy. It can be used to do both. We've identified the mechanism of how this protein actually drives metastasis in thyroid cancer. And not only that, we found out that it also makes the cancer resistant to radioactive iodine therapy." McMullen says that by identifying the mechanism, the team is able to predict which people will have recurrent disease and which patients will respond to radioactive iodine therapy--both tools that are currently lacking in the medical community. The foundation of the work stems from previous efforts in which McMullen's team examined thyroid cancer patient specimens. In a study published in 2012 they looked at genetic signatures showing which patients experienced metastasis and which patients did not. Through their efforts at that time they discovered PDGFRA was linked to metastatic disease. According to McMullen, this latest research significantly advances that work. In the very near future the team hopes to begin two separate clinical trials. The first will investigate a new way to treat thyroid cancers using a cancer drug that specifically targets PDGFRA. The second will work on a new diagnostic tool to give patients an early indicator of whether their thyroid cancer will be aggressive or not. "We hope within the next 18 months that we can prove the utility of this approach and change the way thyroid cancers are managed for those patients that have the worst disease," says McMullen. "We were lucky enough to find something that we think is important for thyroid cancer. It will be put to the test now."


News Article | December 14, 2016
Site: www.eurekalert.org

Each year, thousands of Canadian men with prostate cancer undergo biopsies to help their doctors better understand the progression and nature of their disease. These biopsies provide vital, sometimes life-saving information, yet cancer researcher John Lewis knows they can be a difficult test to ask of anyone. "Currently the best way to get information is through a biopsy, which involves pushing 12 needles through an organ the size of a walnut. As you might imagine, it's a very uncomfortable and invasive procedure," says Lewis, the Frank and Carla Sojonky Chair in Prostate Cancer Research at the University of Alberta and a member of the Cancer Research Institute of Northern Alberta. "Patients with low grade prostate cancer can decide not to get treatment and instead monitor the disease, but monitoring usually involves a biopsy every year or so. Many people opt for surgery instead of more biopsies. It is clearly something we need to improve upon." A new innovation from the University of Alberta is promising to do just that through a relatively painless procedure. A study published in the journal Cancer Research describes the use of focused ultrasound along with particles called nanodroplets for the enhanced detection of cancer biomarkers in the blood. The research team, led by senior co-authors John Lewis and Roger Zemp, used the technique on tumours to cause extracellular vesicles to be released into the bloodstream, giving them large amounts of genetic material to analyze from just a small sample of blood. "With a little bit of ultrasound energy, nanodroplets phase-change into microbubbles. That's important because ultrasound can really oscillate these microbubbles," says Roger Zemp, professor of engineering at the U of A. "The microbubbles absorb the ultrasound energy and then act like boxing gloves to punch the tumour cells and knock little vesicles off." "That led us to detect some genes that were indicative of the aggressiveness of the tumour. That's potentially very powerful. You can get a genetic characterization of the tumour, but do it relatively non-invasively." "Separately, the ultrasound and nanodroplets had very little effect," says Robert Paproski, first author of the study and a research associate working with Roger Zemp in the Faculty of Engineering. "But when we added the two together they had a very big effect. It allows us to detect roughly 100 times more vesicles than would normally be there, that are specific from the tumour." The researchers say the technique is as accurate as using needles in a biopsy, with the ultrasound able to give them information about specific parts of a tumour. They add that genetic information can be used for personalized medicine - helping doctors know if a patient's tumour has a specific mutation, which would then allow them to determine what medications would work best for treatment. The team is pushing forward with strategies to further enrich the population of key vesicles released into the bloodstream through the technique, focusing on the biomarkers that are of the most importance. Lewis believes they can quickly progress the work to clinical trials, and from there to real world applications, thanks to the accessibility of the technology. "Focused ultrasound systems are already used in the clinic. Microbubbles are already used in the clinic. So I think the movement of this into the clinic is relatively straightforward."
Prostate Cancer Canada is the leading national foundation dedicated to the prevention of the most common cancer in men through research, advocacy, education, support and awareness. As one of the largest investors in prostate cancer research in Canada, Prostate Cancer Canada is committed to continuous discovery in the areas of prevention, diagnosis, treatment, and support. Alberta Cancer Foundation is the official fundraiser for all 17 cancer centres in Alberta, including the Cross Cancer Institute in Edmonton and the Tom Baker Cancer Centre in Calgary, supporting Albertans at the point of care.


News Article | December 16, 2016
Site: www.eurekalert.org

Something as seemingly harmless as a heartburn pill could lead cancer patients to take a turn for the worse. A University of Alberta study published in the journal JAMA Oncology discovered that proton pump inhibitors (PPIs), which are very common medications for heartburn and gastrointestinal bleeding, decrease the effects of capecitabine, a type of chemotherapy usually prescribed to gastric cancer patients. The study by the Department of Oncology's Michael Sawyer, Michael Chu and their team included more than 500 patients and the results were conclusive: PPIs shortened progression-free survival by more than a month; overall survival in cancer patients was reduced by more than two months, and the disease control rate decreased by 11 per cent. Although this research was focused on gastric cancer patients, Sawyer's team has followed up with another study in early stage colorectal cancer and discovered that those who took PPIs and capecitabine were also at risk for decreased cancer treatment efficacy. In that study, patients who took PPIs while on capecitabine had a decreased chance of being cured of their colorectal cancer. PPIs are very popular for their efficacy and many of them are over-the-counter drugs (some common brands are Nexium, Prevacid and Protonix). Sawyer explains the risk of this interaction is high, as some cancer patients may not even have these medications prescribed by a physician, but could obtain them easily over the counter at a pharmacy and accidentally alter their chemotherapy treatment without knowing it: "This could be a very common and underappreciated side effect. One study estimated that 20 per cent of cancer patients in general take proton pump inhibitors." The explanation for the negative outcome may be in gastric pH levels. Previous studies had been done on the interaction of this type of chemo with the antacid medication Maalox, without obtaining any alarming results; but unlike Maalox, PPIs are able to raise gastric pH to a point where they could affect disintegration of capecitabine tablets. "Given that PPIs are much more potent and can essentially abolish gastric acidity there may be a significant interaction between capecitabine and PPIs," says Sawyer. Sawyer, a clinical pharmacologist and medical oncologist and member of the U of A's Faculty of Medicine & Dentistry since 2001, is currently conducting more research on this topic to learn more about the interaction of chemotherapy with other medications. This discovery may lead to changes in the usual procedures for prescribing PPIs. Some cancer patients cannot discontinue these medications, which they need to control bleeding or other gastric conditions. "In that case, there are alternatives for oncologists or family doctors that become aware of this risk," says Sawyer. "Physicians should use caution in prescribing PPIs to patients on capecitabine and, if they must use PPIs due to gastrointestinal bleeding issues, maybe they should consider using other types of chemotherapy that don't present this interaction."


News Article | February 16, 2017
Site: www.eurekalert.org

(Edmonton, AB) Every day tens of thousands of Canadians unwillingly find themselves becoming shadows of their former selves. They grasp onto moments of clarity--fleeting windows of time--before slipping away again into confusion; robbed of memories, talents and their very personalities. Alzheimer's is a heart-wrenching disease that directly affects half a million Canadians. There is no cure, let alone treatment to stop progression of the disease. While current answers are few, research at the University of Alberta is spearheading the discovery of new potential therapies for the future. A study published in the journal Alzheimer's and Dementia: Translational Research and Clinical Intervention examines whether a compound called AC253 can inhibit a "rogue" protein called amyloid. The protein is found in large quantities in the brains of Alzheimer's patients and is suspected to be a key player in the development of the disease. "The way I look at it, it's hard to ignore the biggest player on the stage, which is the amyloid protein. Whatever treatment you develop, it's got to address that player," says Jack Jhamandas, professor of neurology in the Faculty of Medicine & Dentistry at the University of Alberta and senior author of the study. "In our previous work we have shown that there are certain drug compounds that can protect nerve cells from amyloid toxicity. One of these is a compound we call AC253. It sounds like an Air Canada flight. I hope this one is on time and takes us to our destination!" The team, which included postdoctoral fellows and research associates Rania Soudy, Aarti Patel and Wen Fu, tested AC253 on mice bred by David Westaway (a University of Alberta collaborator) to develop Alzheimer's. Mice were treated with a continuous infusion of AC253 for five months, beginning at three months of age, before development of the disease. "We found at eight months, when these mice typically have a lot of amyloid in the brain and have a lot of difficulty in memory and learning tasks, that they actually improved their memory and learning," says Jhamandas, also a member of the U of A's Neuroscience and Mental Health Institute. As part of the study, the team of local and international researchers also developed and tested a more efficient method of getting the compound into the brain. When mice were given injections of a structurally modified AC253 three times a week for 10 weeks, the researchers again found an improvement in memory and learning performance. In addition, the researchers noted a lower amount of amyloid in the brains of mice treated with the compound compared to mice that did not get the drug, as well as reduced inflammation of the brain. The team is now planning additional studies to examine optimal dosage and methods of further improving the compound to increase its effectiveness in the brain. Much more work is needed before the research can move to human trials. Despite the long path still ahead, Jhamandas believes the findings offer both hope and a new way forward to unlock the Alzheimer's enigma. "Alzheimer's is a complex disease. Not for a moment do I believe that the solution is going to be a simple one, but maybe it will be a combination of solutions." "We can't build nursing homes and care facilities fast enough because of an aging population. And that tsunami, the silver tsunami, is coming if not already here," adds Jhamandas. "At a human level, if you can keep someone home instead of institutionalized, even for a year, what does it mean to them?
It means the world to them and their families."


News Article | November 22, 2016
Site: www.npr.org

The Standing Rock Resistance Is Unprecedented (It's Also Centuries Old) As resistance to the Dakota Access Pipeline in Standing Rock, N.D., concludes its seventh month, two narratives have emerged: the resistance is unprecedented, and it is also centuries old. Both are true. The scope of the resistance at Standing Rock exceeds just about every protest in Native American history. But that history itself, of indigenous people fighting to protect not just their land, but the land, is centuries old. Over the weekend, the situation at Standing Rock grew more contentious. On Sunday night, Morton County police sprayed the crowd of about 400 people with tear gas and water as temperatures dipped below freezing. But the resistance, an offspring of history, continues. Through the years, details of such protests change — sometimes the foe is the U.S. government; sometimes a large corporation; sometimes, as in the case of the pipeline, a combination of the two. Still, the broad strokes of each land infringement and each resistance stay essentially the same. In that tradition, the tribes gathered at Standing Rock today are trying to stop a pipeline operator from bulldozing what they say are sacred sites to construct an 1,172-mile oil pipeline. The tribes also want to protect the Missouri River, the primary water source for the Standing Rock Reservation, from a potential pipeline leak. (Energy Transfer Partners, which is building the pipeline, says on its website that it emphasizes safety and that, "in many instances we exceed government safety standards to ensure a long-term, safe and reliable pipeline.") Since April, when citizens of the Standing Rock Sioux Nation set up the Sacred Stone Camp, thousands of people have passed through and pledged support. Environmentalists and activist groups like Black Lives Matter and Code Pink have also stepped in as allies. Many people who have visited say that the camp is beyond anything they've ever experienced. "It's historic, really. I don't think anything like this has ever happened in documented history," said Ruth Hopkins, a reporter from Indian Country Today. But there are historical preludes, and you don't have to look too far back to find them. In 2015, when the Keystone XL pipeline was being debated, numerous Native American tribes and the Indigenous Environmental Network organized against it. The pipeline would have stretched 1,179 miles from Canada to the Gulf of Mexico. The Rosebud Sioux, a tribe in South Dakota, called the proposed pipeline an "act of war" and set up an encampment where the pipeline was to be constructed. Also joining in were the Environmental Protection Agency, the Natural Resources Defense Council, and the Omaha, Dene, Ho-chunk, and Creek Nations, whose lands the pipeline would have traversed. President Obama vetoed Keystone XL. But even at the time, A. Gay Kingman, the executive director of the Great Plains Tribal Chairman's Association, warned that the reprieve would be temporary. "Wopila [thank you] to all our relatives who stood strong to oppose the KXL," Kingman said in a statement after the veto. "But keep the coalitions together, because there are more pipelines proposed, and we must protect our Mother Earth for our future generations." In the case of the Dakota Access Pipeline, the Standing Rock Sioux have been able to attract support from hundreds of tribes all over the country, not just in places that would be directly affected. The tribes aren't just leaning on long-held beliefs about the importance of the natural world. They're also using long-held resistance strategies.
Like the encampment itself. "If you don't know very much about Native American people, you wouldn't understand that this is something that's kind of natural to us," said Hopkins, who is enrolled in the Sisseton Wahpeton Oyate Nation and was born on the Standing Rock Reservation. "When we have ceremonies, we do camps like this. It's something that we've always known how to do, going back to pre-colonial times." In the late 1800s, more than 10,000 members of the Lakota Sioux, Cheyenne and Arapaho tribes set up camp to resist the U.S. Army's attempt to displace them in search of gold. That camp took form at the Little Bighorn River in Montana. After the soldiers attacked the camp in June of 1876, the Battle of the Little Bighorn, widely known as (Gen. George) Custer's Last Stand, erupted. In defeating the Army, the tribes won a huge land rights victory for Native Americans. There was also Wounded Knee, a protest that was part of the American Indian Movement. During the 1973 demonstration, about 200 people occupied the town of Wounded Knee on the Pine Ridge Reservation in South Dakota — the site of an 1890 massacre in which U.S. soldiers killed hundreds of Native Americans. Protesters turned Wounded Knee into what one former AIM leader called "an armed camp" in order to protest corruption in tribal leadership and draw attention to the U.S. government's failure to honor treaties. Over the course of the 1973 occupation, two Sioux men were killed and hundreds more arrested. But the resistance, which lasted 71 days, underscored Native American civil rights issues in a way that many see reflected today in Standing Rock. If Native American resistance is an old story, that's because the systemic violation of indigenous land rights is an old story. And if history is any precedent, the resistance won't end at Standing Rock. "There are no rights being violated here that haven't been violated before," said Kim Tallbear, a professor of Native Studies at the University of Alberta, who for years worked on tribal issues as an environmental planner for the U.S. Environmental Protection Agency and the Department of Energy. Those violations, she said, have taken two forms: long-term disregard for indigenous land rights and a "bureaucratic disregard for consultation with indigenous people." When she sees images of police using pepper spray and water cannons or security guards unleashing dogs on Standing Rock protesters, Tallbear said, she isn't shocked. "I'm, like, oh yeah, they did that in the 19th century, they did that in the 16th century," she said. "This is not new. ... The contemporary tactics used against indigenous people might look a little bit more complex or savvy, but to me, I can read it all as part of a longstanding colonial project." "Maybe for non-Natives who thought that the West was won, and the Indian Wars were over, and Native people were mostly dead and gone and isn't that too bad – now, they're like, 'Oh wait a minute, they're still there? And they're still fighting the same things they were 150 years ago?'"


News Article | November 29, 2016
Site: www.eurekalert.org

Heart medication taken in combination with chemotherapy reduces the risk of serious cardiovascular damage in patients with early-stage breast cancer, according to results from a new landmark clinical trial. Existing research has shown some cancer therapies such as Herceptin greatly improve survival rates for early-stage breast cancer, but come with a fivefold increase in the risk of heart failure -- a devastating condition as life-threatening as the cancer itself. A new five-year study, led by researchers at the University of Alberta and Alberta Health Services and funded by the Canadian Institutes of Health Research (CIHR) and the Alberta Cancer Foundation, shows that two kinds of heart medications, beta blockers and ACE inhibitors, effectively prevent a drop in heart function from cancer treatment. "We think this is practice-changing," said Edith Pituskin, co-investigator of the MANTICORE trial. "This will improve the safety of the cancer treatment that we provide." Pituskin, an assistant professor in the Faculty of Nursing and Faculty of Medicine & Dentistry at the U of A, and her colleagues published their findings Nov. 28 in the Journal of Clinical Oncology. In the double-blind trial, 100 patients from Alberta and Manitoba with early-stage breast cancer were selected at random to receive either a beta blocker, an ACE inhibitor or a placebo for one year. Beta blockers and ACE inhibitors are drugs used to treat several conditions, including heart failure. Cardiac MRI images taken over a two-year period showed that patients who received the beta blockers showed fewer signs of heart weakening than the placebo group. The ACE inhibitor drug also had heart protection effects. Study lead Ian Paterson, a cardiologist at the Mazankowski Alberta Heart Institute and associate professor with the U of A's Department of Medicine, said these medications not only safeguard against damage to the heart, but may improve breast cancer survival rates by limiting interruptions to chemotherapy treatment. Any time a patient shows signs of heart weakening, he said, chemotherapy is stopped immediately, sometimes for a month or two until heart function returns to normal. "We are aiming for two outcomes for these patients--we're hoping to prevent heart failure and we're hoping for them to receive all the chemotherapy that they are meant to get, when they are supposed to get it--to improve their odds of remission and survival." Patients with heart failure often experience fatigue, shortness of breath or even death, making it "an equally devastating disease with worse prognosis than breast cancer," Paterson said. Brenda Skanes has a history of cardiovascular problems in her family--her mom died of a stroke and her dad had a heart attack. She was eager to join the trial, both for her own health and the health of other breast cancer survivors. "I met survivors through my journey who experienced heart complications caused by Herceptin. If they had access to this, maybe they wouldn't have those conditions now," she said. "Me participating, it's for the other survivors who are just going into treatment." With two daughters of her own and a mother who lost her fight with colon cancer, study participant Debbie Cameron says she'd do anything to prevent others from going through similar upheaval. "My daughters are always in the back of my mind and the what ifs--if they're diagnosed, what would make their treatment safer, better," Cameron said.
"Anything I could do to make this easier for anybody else or give some insight to treatment down the road was, to me, a very easy decision." Pituskin said the study team, which also includes collaborators from the AHS Clinical Trials Unit at the Cross Cancer Institute and the University of Manitoba, represents a strong mix of research disciplines, particularly the oncology and cardiology groups. She said the results would not have been possible without funding support from CIHR and the Alberta Cancer Foundation. "Local people in Alberta supported a study that not only Albertans benefited from, but will change, again, the way care is delivered around the world." The results are expected to have a direct impact on clinical practice guidelines in Canada and beyond. "Every day in Canada, around 68 women are diagnosed with breast cancer. This discovery holds real promise for improving these women's quality of life and health outcomes," said Stephen Robbins, scientific director of CIHR's Cancer Research Institute. "We couldn't be more pleased with this return on our investment," said Myka Osinchuk, CEO of the Alberta Cancer Foundation. "This clinical research will improve treatment and make life better not only for Albertans facing cancer, but also for those around the world." Paterson said the research team is also investigating how to prevent heart complications in patients with other cancers, noting several other therapies have been linked to heart complications.


News Article | February 16, 2017
Site: www.biosciencetechnology.com

Using magnetic resonance imaging (MRI) in infants with older siblings with autism, researchers from around the country were able to correctly predict 80 percent of those infants who would later meet criteria for autism at two years of age. The study, published today in Nature, is the first to show it is possible to identify which infants – among those with older siblings with autism – will be diagnosed with autism at 24 months of age. “Our study shows that early brain development biomarkers could be very useful in identifying babies at the highest risk for autism before behavioral symptoms emerge,” said senior author Joseph Piven, MD, the Thomas E. Castelloe Distinguished Professor of Psychiatry at the University of North Carolina-Chapel Hill. “Typically, the earliest an autism diagnosis can be made is between ages two and three. But for babies with older autistic siblings, our imaging approach may help predict during the first year of life which babies are most likely to receive an autism diagnosis at 24 months.” This research project included hundreds of children from across the country and was led by researchers at the Carolina Institute for Developmental Disabilities (CIDD) at the University of North Carolina, where Piven is director. The project’s other clinical sites included the University of Washington, Washington University in St. Louis, and The Children’s Hospital of Philadelphia. Other key collaborators are McGill University, the University of Alberta, the University of Minnesota, the College of Charleston, and New York University. “This study could not have been completed without a major commitment from these families, many of whom flew in to be part of this,” said first author Heather Hazlett, PhD, assistant professor of psychiatry at the UNC School of Medicine and a CIDD researcher. “We are still enrolling families for this study, and we hope to begin work on a similar project to replicate our findings.” People with Autism Spectrum Disorder (or ASD) have characteristic social deficits and demonstrate a range of ritualistic, repetitive and stereotyped behaviors. It is estimated that one out of 68 children develops autism in the United States. For infants with older siblings with autism, the risk may be as high as 20 out of every 100 births. There are about 3 million people with autism in the United States and tens of millions around the world. Despite much research, it has been impossible to identify those at ultra-high risk for autism prior to 24 months of age, which is the earliest time when the hallmark behavioral characteristics of ASD can be observed and a diagnosis made in most children. For this Nature study, Piven, Hazlett, and researchers from around the country conducted MRI scans of infants at six, 12, and 24 months of age. They found that the babies who developed autism experienced a hyper-expansion of brain surface area from six to 12 months, as compared to babies who had an older sibling with autism but did not themselves show evidence of the condition at 24 months of age. Increased growth rate of surface area in the first year of life was linked to increased growth rate of overall brain volume in the second year of life. Brain overgrowth was tied to the emergence of autistic social deficits in the second year. Previous behavioral studies of infants who later developed autism – who had older siblings with autism – revealed that social behaviors typical of autism emerge during the second year of life.
The researchers then took these data – MRIs of brain volume, surface area, cortical thickness at 6 and 12 months of age, and sex of the infants – and used a computer program to identify a way to classify babies most likely to meet criteria for autism at 24 months of age. The computer program developed the best algorithm to accomplish this, and the researchers applied the algorithm to a separate set of study participants. The researchers found that brain differences at 6 and 12 months of age in infants with older siblings with autism correctly predicted eight out of ten infants who would later meet criteria for autism at 24 months of age in comparison to those infants with older ASD siblings who did not meet criteria for autism at 24 months. “This means we potentially can identify infants who will later develop autism, before the symptoms of autism begin to consolidate into a diagnosis,” Piven said. If parents have a child with autism and then have a second child, such a test might be clinically useful in identifying infants at highest risk for developing this condition. The idea would be to then intervene ‘pre-symptomatically’ before the emergence of the defining symptoms of autism. Research could then begin to examine the effect of interventions on children during a period before the syndrome is present and when the brain is most malleable.  Such interventions may have a greater chance of improving outcomes than treatments started after diagnosis. “Putting this into the larger context of neuroscience research and treatment, there is currently a big push within the field of neurodegenerative diseases to be able to detect the biomarkers of these conditions before patients are diagnosed, at a time when preventive efforts are possible,” Piven said. “In Parkinson’s for instance, we know that once a person is diagnosed, they’ve already lost a substantial portion of the dopamine receptors in their brain, making treatment less effective.” Piven said the idea with autism is similar; once autism is diagnosed at age 2-3 years, the brain has already begun to change substantially. “We haven’t had a way to detect the biomarkers of autism before the condition sets in and symptoms develop,” he said. “Now we have very promising leads that suggest this may in fact be possible.”
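The classification step described above can be illustrated with a generic sketch. The study's actual algorithm is not reproduced here; this is a minimal toy example assuming a standard scikit-learn classifier, synthetic feature values, and feature names that merely mirror the measures the article lists (brain volume, surface area, cortical thickness at 6 and 12 months, and sex).

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_infants = 100

# Columns stand in for: volume_6m, volume_12m, surface_6m, surface_12m,
# thickness_6m, thickness_12m, sex -- all values here are synthetic.
X = rng.normal(size=(n_infants, 7))
y = rng.integers(0, 2, size=n_infants)  # 1 = met autism criteria at 24 months

# Fit on some participants and evaluate on held-out ones, echoing the
# article's application of the trained algorithm to a separate set of
# study participants.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)
print("held-out accuracy per fold:", scores.round(2))

With real brain measurements in X and diagnostic outcomes in y, the held-out scores would correspond to the kind of prediction accuracy the article reports; with the random data above they will hover near chance.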


CALGARY, AB --(Marketwired - December 08, 2016) - A collaborative research project involving four universities in Alberta and Atlantic Canada has received major funding to address the issue of pipeline corrosion caused by microbial activity. The federal government announcement was made today by the Minister of Science, Kirsty Duncan, in Montreal. The $7.8 million comes through the Genome Canada 2015 Large-Scale Applied Research Project Competition (LSARP). It will support Managing Microbial Corrosion in Canadian Offshore and Onshore Oil Production, a four-year research project set to begin in January with an aim to improve pipeline integrity. "This work will definitely help to pinpoint how microbial activity causes corrosion in carbon steel infrastructure and help in its early detection so we can minimize leaks," says Lisa Gieg, an associate professor in the Department of Biological Sciences at the University of Calgary. "It's not just about pipelines; this research will look at all points of contact between oil and steel in extraction, production and processing. This work can help make the industry safer." Gieg is one of three project leaders, who include John Wolodko, an associate professor and Alberta Innovates Strategic Chair in Bio and Industrial Materials at the University of Alberta; and Faisal Khan, a professor and the Vale Research Chair of Process Safety and Risk Engineering at Memorial University in St. John's, NL. Also working on the project are Rob Beiko, an associate professor in computer science and Canada Research Chair in Bioinformatics at Dalhousie University in Halifax, N.S., and Dr. Tesfaalem Haile, a senior corrosion specialist at InnoTech Alberta in Devon, AB. Beiko will be building a database to analyse the microbiology and chemistry lab results, while Haile's team will be working with the University of Alberta to simulate microbial corrosion in the lab and at the pilot scale. "To some degree, [microbial degradation of pipelines] is akin to a cancer diagnosis and treatment in the medical field," says Wolodko. "While there is significant knowledge and best practices in diagnosing and treating cancer, it is still not completely understood, and significant research is still required to further eliminate its impact to society. "While this problem is complex, this pan-Canadian project brings together research groups from across Canada in different science disciplines to tackle this problem collectively. By bringing this multidisciplinary focus to this problem, it is hoped that this research will lead to a better understanding of the breadth of microbes responsible for microbial corrosion, and will help academia and industry develop improved solutions to rapidly identify and mitigate this form of corrosion globally." While researchers at Memorial University are involved in all stages of the project, Faisal Khan, Head, Department of Process Engineering, and Director, C-RISE, Memorial University, says the focus for Memorial is on how microbes cause corrosion. Khan leads Memorial's multidisciplinary team, which also includes Kelly Hawboldt, Department of Process Engineering, Faculty of Engineering and Applied Science; and Christina Bottaro, Department of Chemistry, Faculty of Science. "We know that microbes cause corrosion, but we are examining how they cause corrosion," said Khan. "We are identifying the chemical source and how it reacts to the surface of the metal to cause corrosion. The risk models that we're developing will link the corrosion process to the outcome.
This will be very important for industry when evaluating their level of corrosion intervention and control, and where to focus their resources on corrosion mitigation." Corrosion of steel infrastructure is estimated to cost the oil and gas industry in the range of $3 billion to $7 billion each year in maintenance, repairs and replacement. Microbiologically influenced corrosion is responsible for at least 20 per cent of that cost. The research team will take samples from a wide range of environments including offshore platforms and both upstream pipelines and transmission pipelines, which are all associated with different fluid chemistries and physical characteristics. By using the latest in genomics techniques, the interdisciplinary team will be able to look for trends related to specific microbes and chemistries that lead to microbial corrosion. Ultimately, the project will lead to better predictions of whether microbial corrosion will occur in a given oil and gas operation. All three project leads say the key to success in this project is collaboration. Bringing the experience, skills and expertise from across a range of disciplines and from multiple universities provides the best opportunity to succeed in finding solutions to ensure the safety of pipelines and other oil and gas infrastructure. "Genome Alberta and Genome Atlantic are pleased to be supporting a major study that will develop technologies to proactively detect and pinpoint microbial corrosion in both offshore and onshore oil production," notes David Bailey, President and CEO, Genome Alberta. "These researchers will apply their combined expertise to help address the protection of our natural environment, as well as our growing energy needs," says John Reynolds, acting vice-president (research) at the University of Calgary. "We look forward to working with our research partners and funders who have joined together to support this important work through this Genome Canada award." This grant was one of 13 projects that received funding in an announcement made by the federal government Thursday. Combined with co-funding from the provinces, international organizations and the private sector, the total announcement is worth $110 million. This includes a second project involving a University of Calgary lead to research methods of bioremediation of potential oil spills in the arctic. All the funded projects involve emerging knowledge about genomics (e.g., the genetic makeup of living organisms) to help address challenges in the natural resource and environmental sectors. The project will be managed by Genome Alberta in conjunction with Genome Atlantic, and with an international collaboration of partners that are working together to ensure safer and more secure hydrocarbon energy production: Genome Canada, Alberta Economic Development & Trade, Research & Development Corporation of Newfoundland & Labrador, University of Newcastle upon Tyne, Natural Resources Canada, InnoTech Alberta, VIA University College, DNV-GL Canada, U of C Industrial Research Chair, and in-kind support from a variety of industry partners. About the University of Calgary The University of Calgary is making tremendous progress on its journey to become one of Canada's top five research universities, where research and innovative teaching go hand in hand, and where we fully engage the communities we both serve and lead. This strategy is called Eyes High, inspired by the university's Gaelic motto, which translates as 'I will lift up my eyes.' 
For more information, visit ucalgary.ca. Stay up to date with University of Calgary news headlines on Twitter @UCalgary. For details on faculties and how to reach experts go to our media centre at ucalgary.ca/news/media. About the University of Alberta The University of Alberta in Edmonton is one of Canada's top teaching and research universities, with an international reputation for excellence across the humanities, sciences, creative arts, business, engineering, and health sciences. Home to 39,000 students and 15,000 faculty and staff, the university has an annual budget of $1.84 billion and attracts nearly $450 million in sponsored research revenue. The U of A offers close to 400 rigorous undergraduate, graduate, and professional programs in 18 faculties on five campuses, including one rural and one francophone campus. The university has more than 275,000 alumni worldwide. The university and its people remain dedicated to the promise made in 1908 by founding president Henry Marshall Tory that knowledge shall be used for "uplifting the whole people." About Memorial University Memorial University is one of the largest universities in Atlantic Canada. As the province's only university, Memorial plays an integral role in the education and cultural life of Newfoundland and Labrador. Offering diverse undergraduate and graduate programs to almost 18,000 students, Memorial provides a distinctive and stimulating environment for learning in St. John's, a safe friendly city with great historic charm, a vibrant cultural life and easy access to a wide range of outdoor activities. About Genome Alberta Genome Alberta is a publicly funded not-for-profit genomics research funding organization based in Calgary, Alberta, but leads projects at institutions around the province and participates in a variety of other projects across the country. In partnership with Genome Canada, Industry Canada, and the Province of Alberta, Genome Alberta was established in 2005 to focus on genomics as one of the central components of the Life Sciences Initiative in Alberta, and to help position genomics as a core research effort. For more information on the range of projects led and managed by Genome Alberta, visit http://GenomeAlberta.ca


News Article | February 16, 2017
Site: www.rdmag.com

A new diagnostic method has correctly predicted autism in 80 percent of high-risk infants, according to a new study. Researchers at the University of North Carolina have developed a method using magnetic resonance imaging (MRI) in infants with older siblings with autism to correctly predict whether infants would later meet the criteria for autism at two years old. “Our study shows that early brain development biomarkers could be very useful in identifying babies at the highest risk for autism before behavioral symptoms emerge,” Dr. Joseph Piven, the Thomas E. Castelloe Distinguished Professor of Psychiatry at UNC and senior author of the paper, said in a statement. “Typically, the earliest an autism diagnosis can be made is between ages two and three. But for babies with older autistic siblings, our imaging approach may help predict during the first year of life which babies are most likely to receive an autism diagnosis at 24 months.” It is estimated that one out of every 68 children in the U.S. develops Autism Spectrum Disorder (ASD). The patients have characteristic social deficits and demonstrate a range of ritualistic, repetitive and stereotyped behaviors. Despite extensive research, it has been impossible to identify those at ultra-high risk for autism prior to two years old, which is the earliest time when the hallmark behavioral characteristics of ASD can be observed and a diagnosis made in most children. In the study, the researchers conducted MRI scans of infants at six, 12 and 24 months old. The researchers found that the babies who developed autism experienced a hyper-expansion of brain surface area from six to 12 months, as compared to babies who had an older sibling with autism but did not themselves show evidence of the condition at 24 months of age. They also found that increased growth rate of surface area in the first year of life was linked to increased growth rate of overall brain volume in the second year, which is tied to the emergence of autistic social deficits in the second year. The next step was to take the data—MRIs of brain volume, surface area, cortical thickness at six and 12 months of age and the sex of the infants—and use a computer program to identify a way to classify babies most likely to meet criteria for autism at two years old. The computer program developed an algorithm that the researchers applied to a separate set of study participants. The researchers concluded that brain differences at six and 12 months in infants with older siblings with autism correctly predicted eight of 10 infants who would later meet criteria for autism at two years old in comparison to those with older ASD siblings who did not meet the criteria at two years old. “This means we potentially can identify infants who will later develop autism, before the symptoms of autism begin to consolidate into a diagnosis,” Piven said. This test could be helpful for parents who have a child with autism and then have a second child, allowing doctors to intervene ‘pre-symptomatically’ before the emergence of the defining symptoms of autism. Researchers could then begin to examine the effect of interventions on children during a period before the syndrome is present and when the brain is most malleable. “Putting this into the larger context of neuroscience research and treatment, there is currently a big push within the field of neurodegenerative diseases to be able to detect the biomarkers of these conditions before patients are diagnosed, at a time when preventive efforts are possible,” Piven said.
“In Parkinson’s for instance, we know that once a person is diagnosed, they’ve already lost a substantial portion of the dopamine receptors in their brain, making treatment less effective.” The research, which was led by researchers at the Carolina Institute for Developmental Disabilities (CIDD) at the University of North Carolina, where Piven is director, included hundreds of children from across the country. The project’s other clinical sites included the University of Washington, Washington University in St. Louis and The Children’s Hospital of Philadelphia. Other key collaborators are McGill University, the University of Alberta, the University of Minnesota, the College of Charleston and New York University. “This study could not have been completed without a major commitment from these families, many of whom flew in to be part of this,” first author Heather Hazlett, Ph.D., assistant professor of psychiatry at the UNC School of Medicine and a CIDD researcher, said in a statement.


News Article | February 15, 2017
Site: www.eurekalert.org

This first-of-its-kind study used MRIs to image the brains of infants, and then researchers used brain measurements and a computer algorithm to accurately predict autism before symptoms set in. CHAPEL HILL, NC - Using magnetic resonance imaging (MRI) in infants with older siblings with autism, researchers from around the country were able to correctly predict 80 percent of those infants who would later meet criteria for autism at two years of age. The study, published today in Nature, is the first to show it is possible to identify which infants - among those with older siblings with autism - will be diagnosed with autism at 24 months of age. "Our study shows that early brain development biomarkers could be very useful in identifying babies at the highest risk for autism before behavioral symptoms emerge," said senior author Joseph Piven, MD, the Thomas E. Castelloe Distinguished Professor of Psychiatry at the University of North Carolina-Chapel Hill. "Typically, the earliest an autism diagnosis can be made is between ages two and three. But for babies with older autistic siblings, our imaging approach may help predict during the first year of life which babies are most likely to receive an autism diagnosis at 24 months." This research project included hundreds of children from across the country and was led by researchers at the Carolina Institute for Developmental Disabilities (CIDD) at the University of North Carolina, where Piven is director. The project's other clinical sites included the University of Washington, Washington University in St. Louis, and The Children's Hospital of Philadelphia. Other key collaborators are McGill University, the University of Alberta, the University of Minnesota, the College of Charleston, and New York University. "This study could not have been completed without a major commitment from these families, many of whom flew in to be part of this," said first author Heather Hazlett, PhD, assistant professor of psychiatry at the UNC School of Medicine and a CIDD researcher. "We are still enrolling families for this study, and we hope to begin work on a similar project to replicate our findings." People with Autism Spectrum Disorder (or ASD) have characteristic social deficits and demonstrate a range of ritualistic, repetitive and stereotyped behaviors. It is estimated that one out of 68 children develops autism in the United States. For infants with older siblings with autism, the risk may be as high as 20 out of every 100 births. There are about 3 million people with autism in the United States and tens of millions around the world. Despite much research, it has been impossible to identify those at ultra-high risk for autism prior to 24 months of age, which is the earliest time when the hallmark behavioral characteristics of ASD can be observed and a diagnosis made in most children. For this Nature study, Piven, Hazlett, and researchers from around the country conducted MRI scans of infants at six, 12, and 24 months of age. They found that the babies who developed autism experienced a hyper-expansion of brain surface area from six to 12 months, as compared to babies who had an older sibling with autism but did not themselves show evidence of the condition at 24 months of age. Increased growth rate of surface area in the first year of life was linked to increased growth rate of overall brain volume in the second year of life. Brain overgrowth was tied to the emergence of autistic social deficits in the second year.
Previous behavioral studies of infants who later developed autism - who had older siblings with autism -revealed that social behaviors typical of autism emerge during the second year of life. The researchers then took these data - MRIs of brain volume, surface area, cortical thickness at 6 and 12 months of age, and sex of the infants - and used a computer program to identify a way to classify babies most likely to meet criteria for autism at 24 months of age. The computer program developed the best algorithm to accomplish this, and the researchers applied the algorithm to a separate set of study participants. The researchers found that brain differences at 6 and 12 months of age in infants with older siblings with autism correctly predicted eight out of ten infants who would later meet criteria for autism at 24 months of age in comparison to those infants with older ASD siblings who did not meet criteria for autism at 24 months. "This means we potentially can identify infants who will later develop autism, before the symptoms of autism begin to consolidate into a diagnosis," Piven said. If parents have a child with autism and then have a second child, such a test might be clinically useful in identifying infants at highest risk for developing this condition. The idea would be to then intervene 'pre-symptomatically' before the emergence of the defining symptoms of autism. Research could then begin to examine the effect of interventions on children during a period before the syndrome is present and when the brain is most malleable. Such interventions may have a greater chance of improving outcomes than treatments started after diagnosis. "Putting this into the larger context of neuroscience research and treatment, there is currently a big push within the field of neurodegenerative diseases to be able to detect the biomarkers of these conditions before patients are diagnosed, at a time when preventive efforts are possible," Piven said. "In Parkinson's for instance, we know that once a person is diagnosed, they've already lost a substantial portion of the dopamine receptors in their brain, making treatment less effective." Piven said the idea with autism is similar; once autism is diagnosed at age 2-3 years, the brain has already begun to change substantially. "We haven't had a way to detect the biomarkers of autism before the condition sets in and symptoms develop," he said. "Now we have very promising leads that suggest this may in fact be possible." For this research, NIH funding was provided by the Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD), the National Institute of Mental Health (NIMH), and the National Institute of Biomedical Imaging and Bioengineering. Autism Speaks and the Simons Foundation contributed additional support.


EDMONTON, AB--(Marketwired - October 27, 2016) - TEC Edmonton announced today its acceptance of two additional early-stage information technology companies into its T-Squared Accelerator program. SensorUp, founded by Dr. Steve Liang, associate professor of Geomatics Engineering at the University of Calgary, provides an open standard platform for connectivity to and between Internet of Things (IoT) devices, data, and analytics over the Web. "We are excited that SensorUp has been chosen to join the T-Squared Accelerator. More importantly, it's great to know that TEC Edmonton and TELUS share the same vision for SensorUp, which is building the Internet of Things with open standards," says Steve Liang, Founder and CEO of SensorUp. "I'm confident that this partnership will pave a quick path for our growth and establish SensorUp as a leading Internet of Things platform." TVCom is developing an e-Commerce platform that connects TV viewers to real-time and context relevant fashion, merchandise and supplemental content. The company was also the 2016 TEC VenturePrize TELUS ICT Stream Winner earlier this year. "Much of what we have accomplished so far is intimately linked to TEC Edmonton's support of our project from the start," said TVCom CTO and University of Alberta Computer Science student Pavlo Malynin. "We take tremendous pride in knowing that we have some of Alberta's finest experts supporting our technology." "SensorUp and TVCom are both very exciting Alberta-based companies building globally relevant and scalable platforms. We look forward to working closely with them over the next 12 months to rapidly accelerate their progress," stated Shaheel Hooda, Program Director of the T-Squared Accelerator program and Executive in Residence at TEC Edmonton. The T-Squared Accelerator is a collaboration between TEC Edmonton and TELUS that provides promising early-stage information and communications technology (ICT) companies with 12 months of free incubation space and support in Edmonton's Enterprise Square, along with seed funding and expert mentorship from TELUS and TEC Edmonton to advance their business. For more information, visit www.tecedmonton.com/t-squared-accelerator. About TEC Edmonton TEC Edmonton is a business accelerator that helps emerging technology companies grow successfully. As a joint venture of the University of Alberta and Edmonton Economic Development Corporation, TEC Edmonton operates the Edmonton region's largest accelerator for early-stage technology companies, and also manages commercialization of University of Alberta technologies. TEC Edmonton delivers services in four areas: Business Development, Funding and Finance, Technology Management, and Entrepreneur Development. Since 2011, TEC clients have generated $680M in revenue, raised $350M in financing and funding, invested $200M in R&D, grown both revenue and employment by 25 per cent per year and now employ over 2,400 people in the region. In addition, TEC has assisted in the creation of 22 spinoff companies from the University of Alberta in the last four years. TEC Edmonton was named the 4th best university business incubator in North America by the University Business Incubator (UBI) Global Index in 2015, and "Incubator of the Year" by Startup Canada in 2014. For more information, visit www.tecedmonton.com.


News Article | February 16, 2017
Site: www.eurekalert.org

Alpha cells in the pancreas can be induced in living mice to quickly and efficiently become insulin-producing beta cells when the expression of just two genes is blocked, according to a study led by researchers at the Stanford University School of Medicine. Studies of human pancreases from diabetic cadaver donors suggest that the alpha cells' "career change" also occurs naturally in diabetic humans, but on a much smaller and slower scale. The research suggests that scientists may one day be able to take advantage of this natural flexibility in cell fate to coax alpha cells to convert to beta cells in humans to alleviate the symptoms of diabetes. "It is important to carefully evaluate any and all potential sources of new beta cells for people with diabetes," said Seung Kim, MD, PhD, professor of developmental biology and of medicine. "Now we've discovered what keeps an alpha cell as an alpha cell, and found a way to efficiently convert them in living animals into cells that are nearly indistinguishable from beta cells. It's very exciting." Kim is the senior author of the study, which will be published online Feb. 16 in Cell Metabolism. Postdoctoral scholar Harini Chakravarthy, PhD, is the lead author. "Transdifferentiation of alpha cells into insulin-producing beta cells is a very attractive therapeutic approach for restoring beta cell function in established Type 1 diabetes," said Andrew Rakeman, PhD, the director of discovery research at JDRF, an organization that funds research into Type 1 diabetes. "By identifying the pathways regulating alpha to beta cell conversion and showing that these same mechanisms are active in human islets from patients with Type 1 diabetes, Chakravarthy and her colleagues have made an important step toward realizing the therapeutic potential of alpha cell transdifferentiation." Rakeman was not involved in the study. Cells in the pancreas called beta cells and alpha cells are responsible for modulating the body's response to the rise and fall of blood glucose levels after a meal. When glucose levels rise, beta cells release insulin to cue cells throughout the body to squirrel away the sugar for later use. When levels fall, alpha cells release glucagon to stimulate the release of stored glucose. Although both Type 1 and Type 2 diabetes are primarily linked to reductions in the number of insulin-producing beta cells, there are signs that alpha cells may also be dysfunctional in these disorders. "In some cases, alpha cells may actually be secreting too much glucagon," said Kim. "When there is already not enough insulin, excess glucagon is like adding gas to a fire." Because humans have a large reservoir of alpha cells, and because the alpha cells sometimes secrete too much glucagon, converting some alpha cells to beta cells should be well-tolerated, the researchers believe. The researchers built on a previous mouse study, conducted several years ago in a Swiss laboratory that also collaborated on the current study. It showed that when beta cells are destroyed, about 1 percent of alpha cells in the pancreas begin to look and act like beta cells. But this happened very slowly. "What was lacking in that initial index study was any sort of understanding of the mechanism of this conversion," said Kim. "But we had some ideas based on our own work as to what the master regulators might be."
Chakravarthy and her colleagues targeted two main candidates: a protein called Arx known to be important during the development of alpha cells and another called DNMT1 that may help alpha cells "remember" how to be alpha cells by maintaining chemical tags on its DNA. The researchers painstakingly generated a strain of laboratory mice unable to make either Arx or DNMT1 in pancreatic alpha cells when the animals were administered a certain chemical compound in their drinking water. They observed a rapid conversion of alpha cells into what appeared to be beta cells in the mice within seven weeks of blocking the production of both these proteins. To confirm the change, the researchers collaborated with colleagues in the laboratory of Stephen Quake, PhD, a co-author and professor of bioengineering and of applied physics at Stanford, to study the gene expression patterns of the former alpha cells. They also shipped the cells to collaborators in Alberta, Canada, and at the University of Illinois to test the electrophysiological characteristics of the cells and whether and how they responded to glucose. "Through these rigorous studies by our colleagues and collaborators, we found that these former alpha cells were -- in every way -- remarkably similar to native beta cells," said Kim. The researchers then turned their attention to human pancreatic tissue from diabetic and nondiabetic cadaver donors. They found that samples of tissue from children with Type 1 diabetes diagnosed within a year or two of their death include a proportion of bi-hormonal cells -- individual cells that produce both glucagon and insulin. Kim and his colleagues believe they may have caught the cells in the act of converting from alpha cells to beta cells in response to the development of diabetes. They also saw that the human alpha cell samples from the diabetic donors had lost the expression of the very genes -- ARX and DNMT1 -- they had blocked in the mice to convert alpha cells into beta cells. "So the same basic changes may be happening in humans with Type 1 diabetes," said Kim. "This indicates that it might be possible to use targeted methods to block these genes or the signals controlling them in the pancreatic islets of people with diabetes to enhance the proportion of alpha cells that convert into beta cells." Kim is a member of Stanford Bio-X, the Stanford Cardiovascular Institute, the Stanford Cancer Institute and the Stanford Child Health Research Institute. Researchers from the University of Alberta, the University of Illinois, the University of Geneva and the University of Bergen are also co-authors of the study. The research was supported by the National Institutes of Health (grants U01HL099999, U01HL099995, UO1DK089532, UO1DK089572 and UC4DK104211), the California Institute for Regenerative Medicine, the Juvenile Diabetes Research Foundation, the Center of Excellence for Stem Cell Genomics, the Wallenberg Foundation, the Swiss National Science Foundation, the NIH Beta-Cell Biology Consortium, the European Union, the Howard Hughes Medical Institute, the H.L. Snyder Foundation, the Elser Trust and the NIH Human Islet Resource Network. Stanford's Department of Developmental Biology also supported the work. The Stanford University School of Medicine consistently ranks among the nation's top medical schools, integrating research, medical education, patient care and community service. For more news about the school, please visit http://med. . 
The medical school is part of Stanford Medicine, which includes Stanford Health Care and Stanford Children's Health.


Zecchin M.,National Institute of Oceanography and Applied Geophysics - OGS | Catuneanu O.,University of Alberta
Marine and Petroleum Geology | Year: 2013

High-resolution sequence stratigraphy tackles scales of observation that typically fall below the resolution of seismic exploration methods, commonly referred to as 4th-order or lower rank. Outcrop- and core-based studies are aimed at recognizing features at these scales, and represent the basis for high-resolution sequence stratigraphy. Such studies adopt the most practical ways to subdivide the stratigraphic record, and take into account stratigraphic surfaces with physical attributes that may only be detectable at outcrop scale. The resolution offered by exposed strata typically allows the identification of a wider array of surfaces as compared to those recognizable at the seismic scale, which permits an accurate and more detailed description of cyclic successions in the rock record. These surfaces can be classified as 'sequence stratigraphic', if they serve as systems tract boundaries, or as facies contacts, if they develop within systems tracts. Both sequence stratigraphic surfaces and facies contacts are important in high-resolution studies; however, the workflow of sequence stratigraphic analysis requires the identification of sequence stratigraphic surfaces first, followed by the placement of facies contacts within the framework of systems tracts and bounding sequence stratigraphic surfaces. Several types of stratigraphic units may be defined, from architectural units bounded by the two nearest non-cryptic stratigraphic surfaces to systems tracts and sequences. Other types of stratigraphic units used in high-resolution studies, such as parasequences and small-scale cycles, may be replaced by high-frequency sequences. The sequence boundaries that may be employed in high-resolution sequence stratigraphy are represented by the same types of surfaces that are used traditionally in larger scale studies, but at a correspondingly lower hierarchical level. © 2012 Elsevier Ltd.
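The two-step workflow the abstract describes (bounding surfaces first, facies contacts second) can be made concrete with a small sketch. The following Python fragment is purely illustrative; the class and field names are hypothetical and are not taken from the paper, which defines no code:

```python
from dataclasses import dataclass

@dataclass
class StratigraphicSurface:
    name: str
    bounds_systems_tract: bool  # does this surface serve as a systems tract boundary?

def classify(surface: StratigraphicSurface) -> str:
    # The paper's rule: systems tract boundaries are 'sequence stratigraphic'
    # surfaces; surfaces developing within systems tracts are facies contacts.
    return ("sequence stratigraphic surface"
            if surface.bounds_systems_tract else "facies contact")

def analyse(surfaces):
    # Workflow order matters: identify sequence stratigraphic surfaces first,
    # then place facies contacts within the resulting systems tract framework.
    framework = [s for s in surfaces if s.bounds_systems_tract]
    contacts = [s for s in surfaces if not s.bounds_systems_tract]
    return framework, contacts

surfaces = [
    StratigraphicSurface("subaerial unconformity", True),
    StratigraphicSurface("maximum flooding surface", True),
    StratigraphicSurface("within-trend flooding surface", False),
]
framework, contacts = analyse(surfaces)
print([s.name for s in framework], [s.name for s in contacts])
```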


Wu Y.,Tsinghua University | Zhao R.C.H.,Peking Union Medical College | Tredget E.E.,University of Alberta
Stem Cells | Year: 2010

Our understanding of the role of bone marrow (BM)-derived cells in cutaneous homeostasis and wound healing had long been limited to the contribution of inflammatory cells. Recent studies, however, suggest that the BM contributes a significant proportion of noninflammatory cells to the skin, which are present primarily in the dermis in fibroblast-like morphology and in the epidermis in a keratinocyte phenotype; and the number of these BM-derived cells increases markedly after wounding. More recently, several studies indicate that mesenchymal stem cells derived from the BM could significantly impact wound healing in diabetic and nondiabetic animals, through cell differentiation and the release of paracrine factors, implying a profound therapeutic potential. This review discusses the most recent understanding of the contribution of BM-derived noninflammatory cells to cutaneous homeostasis and wound healing. © AlphaMed Press.


Abbaszadeh M.,UTRC - United Technologies Research Center | Marquez H.J.,University of Alberta
Automatica | Year: 2012

In this paper, a generalized robust nonlinear H∞ filtering method is proposed for a class of Lipschitz descriptor systems, in which the nonlinearities appear both in the state and measured output equations. The system is assumed to have norm-bounded uncertainties in the realization matrices as well as nonlinear uncertainties. We synthesize the H∞ filter through semidefinite programming and strict LMIs. The admissible Lipschitz constants of the nonlinear functions are maximized through LMI optimization. The resulting H∞ filter guarantees asymptotic stability of the estimation error dynamics with prespecified disturbance attenuation level and is robust against time-varying parametric uncertainties as well as Lipschitz nonlinear additive uncertainty. An explicit bound on the tolerable nonlinear uncertainty is derived based on a norm-wise robustness analysis. © 2012 Elsevier Ltd. All rights reserved.
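The abstract's filter synthesis rests on LMI feasibility problems solved by semidefinite programming. The sketch below is not the paper's H∞ synthesis conditions; it is a minimal stand-in, a classical Lyapunov-stability LMI (find P ≻ 0 with AᵀP + PA ≺ 0), written with the CVXPY package (assumed available) to show the general form such conditions take:

```python
import cvxpy as cp
import numpy as np

# Hypothetical stable system matrix; illustration only, not from the paper.
A = np.array([[-2.0, 1.0],
              [0.0, -1.0]])
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [
    P >> eps * np.eye(n),                 # P strictly positive definite
    A.T @ P + P @ A << -eps * np.eye(n),  # strict Lyapunov inequality
]
# Pure feasibility problem: any P satisfying the LMIs certifies stability.
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print(prob.status)
print(P.value)
```

The paper's actual conditions couple several such matrix inequalities and maximize the admissible Lipschitz constant as an LMI objective, but the solver-level mechanics are of this kind.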


Fearon K.,University of Edinburgh | Arends J.,Albert Ludwigs University of Freiburg | Baracos V.,University of Alberta
Nature Reviews Clinical Oncology | Year: 2013

Cancer cachexia is a metabolic syndrome that can be present even in the absence of weight loss ('precachexia'). Cachexia is often compounded by pre-existing muscle loss, and is exacerbated by cancer therapy. Furthermore, cachexia is frequently obscured by obesity, leading to under-diagnosis and excess mortality. Muscle wasting (the signal event in cachexia) is associated not only with reduced quality of life, but also markedly increased toxicity from chemotherapy. Many of the primary events driving cachexia are likely mediated via the central nervous system and include inflammation-related anorexia and hypoanabolism or hypercatabolism. Treatment of cachexia should be initiated early. In addition to active management of secondary causes of anorexia (such as pain and nausea), therapy should target reduced food intake (nutritional support), inflammation-related metabolic change (anti-inflammatory drugs or nutrients) and reduced physical activity (resistance exercise). Advances in the understanding of the molecular biology of the brain, immune system and skeletal muscle have provided novel targets for the treatment of cachexia. The combination of therapies into a standard multimodal package coupled with the development of novel therapeutics promises a new era in supportive oncology whereby quality of life and tolerance to cancer therapy could be improved considerably.


Semiconducting inorganic double helix: New flexible semiconductor for electronics, solar technology and photocatalysis
Abstract: It is the double helix, with its stable and flexible structure of genetic information, that made life on Earth possible in the first place. Now a team from the Technical University of Munich (TUM) has discovered a double helix structure in an inorganic material. The material comprising tin, iodine and phosphorus is a semiconductor with extraordinary optical and electronic properties, as well as extreme mechanical flexibility. Flexible yet robust - this is one reason why nature codes genetic information in the form of a double helix. Scientists at TU Munich have now discovered an inorganic substance whose elements are arranged in the form of a double helix. The substance called SnIP, comprising the elements tin (Sn), iodine (I) and phosphorus (P), is a semiconductor. However, unlike conventional inorganic semiconducting materials, it is highly flexible. The centimeter-long fibers can be arbitrarily bent without breaking. "This property of SnIP is clearly attributable to the double helix," says Daniela Pfister, who discovered the material and works as a researcher in the work group of Tom Nilges, Professor for Synthesis and Characterization of Innovative Materials at TU Munich. "SnIP can be easily produced on a gram scale and, unlike gallium arsenide, which has similar electronic characteristics, is far less toxic."
Countless application possibilities
The semiconducting properties of SnIP promise a wide range of application opportunities, from energy conversion in solar cells and thermoelectric elements to photocatalysts, sensors and optoelectronic elements. By doping with other elements, the electronic characteristics of the new material can be adapted to a wide range of applications. Due to the arrangement of atoms in the form of a double helix, the fibers, which are up to a centimeter in length, can be easily split into thinner strands. The thinnest fibers to date comprise only five double helix strands and are only a few nanometers thick. That also opens the door to nanoelectronic applications. "Especially the combination of interesting semiconductor properties and mechanical flexibility gives us great optimism regarding possible applications," says Professor Nilges. "Compared to organic solar cells, we hope to achieve significantly higher stability from the inorganic materials. For example, SnIP remains stable up to around 500°C (930 °F)."
Just at the beginning
"Similar to carbon, where we have the three-dimensional (3D) diamond, the two-dimensional graphene and the one-dimensional nanotubes," explains Professor Nilges, "we here have, alongside the 3D semiconducting material silicon and the 2D material phosphorene, for the first time a one-dimensional material - with perspectives that are every bit as exciting as carbon nanotubes." Just as with carbon nanotubes and polymer-based printing inks, SnIP double helices can be suspended in solvents like toluene. In this way, thin layers can be produced easily and cost-effectively. "But we are only at the very beginning of the materials development stage," says Daniela Pfister. "Every single process step still needs to be worked out." Since the double helix strands of SnIP come in left- and right-handed variants, materials that comprise only one of the two should display special optical characteristics. This makes them highly interesting for optoelectronics applications. 
But so far, there is no technology available for separating the two variants. Theoretical calculations by the researchers have shown that a whole range of further elements should form these kinds of inorganic double helices. Extensive patent protection is pending. The researchers are now working intensively on finding suitable production processes for further materials.
Interdisciplinary cooperation
An extensive interdisciplinary alliance is working on the characterization of the new material: Photoluminescence and conductivity measurements have been carried out at the Walter Schottky Institute of the TU Munich. Theoretical chemists from the University of Augsburg collaborated on the theoretical calculations. Researchers from the University of Kiel and the Max Planck Institute of Solid State Research in Stuttgart performed transmission electron microscope investigations. Mössbauer spectra and magnetic properties were measured at the University of Augsburg, while researchers at TU Cottbus contributed thermodynamics measurements. The research was funded by the DFG (SPP 1415), the international graduate school ATUMS (TU Munich and the University of Alberta, Canada) and the TUM Graduate School.


News Article | December 14, 2016
Site: www.sciencenews.org

In a hotel ballroom in Seoul, South Korea, early in 2016, a centuries-old strategy game offered a glimpse into the fantastic future of computing. The computer program AlphaGo bested a world champion player at the Chinese board game Go, four games to one (SN Online: 3/15/16). The victory shocked Go players and computer gurus alike. “It happened much faster than people expected,” says Stuart Russell, a computer scientist at the University of California, Berkeley. “A year before the match, people were saying that it would take another 10 years for us to reach this point.” The match was a powerful demonstration of the potential of computers that can learn from experience. Elements of artificial intelligence are already a reality, from medical diagnostics to self-driving cars (SN Online: 6/23/16), and computer programs can even find the fastest routes through the London Underground. “We don’t know what the limits are,” Russell says. “I’d say there’s at least a decade of work just finding out the things we can do with this technology.” AlphaGo’s design mimics the way human brains tackle problems and allows the program to fine-tune itself based on new experiences. The system was trained using 30 million positions from 160,000 games of Go played by human experts. AlphaGo’s creators at Google DeepMind honed the software even further by having it play games against slightly altered versions of itself, a sort of digital “survival of the fittest.” These learning experiences allowed AlphaGo to more efficiently sweat over its next move. Programs aimed at simpler games play out every single hypothetical game that could result from each available choice in a branching pattern — a brute-force approach to computing. But this technique becomes impractical for more complex games such as chess, so many chess-playing programs sample only a smaller subset of possible outcomes. That was true of Deep Blue, the computer that beat chess master Garry Kasparov in 1997. But Go offers players many more choices than chess does. A full-sized Go board includes 361 playing spaces (compared with chess’ 64), often has various “battles” taking place across the board simultaneously and can last for more moves. AlphaGo overcomes Go’s sheer complexity by drawing on its own developing knowledge to choose which moves to evaluate. This intelligent selection led to the program’s surprising triumph, says computer scientist Jonathan Schaeffer of the University of Alberta in Canada. “A lot of people have put enormous effort into making small, incremental progress,” says Schaeffer, who led the development of the first computer program to achieve perfect play of checkers. “Then the AlphaGo team came along and, incremental progress be damned, made this giant leap forward.” Real-world problems have complexities far exceeding those of chess or Go, but the winning strategies demonstrated in 2016 could be game changers.
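The article's contrast between brute-force enumeration and selective search is easy to quantify. The sketch below is illustrative only: the node counts use the board sizes quoted in the article as stand-in branching factors, and the selection rule shown is the generic UCB1 formula from Monte Carlo tree search, not AlphaGo's actual rule, which also folds in neural-network policy priors:

```python
import math

# Why brute force fails: full game-tree enumeration grows as b**d for
# branching factor b and depth d (illustrative numbers only).
for b, game in ((64, "chess-like"), (361, "Go-like")):
    print(f"{game}: {b**4:.3e} positions at depth 4")

def ucb1(total_value: float, visits: int, parent_visits: int,
         c: float = 1.4) -> float:
    # UCB1 balances exploitation (average value so far) against
    # exploration (uncertainty about rarely visited moves).
    if visits == 0:
        return float("inf")  # always try an unvisited move once
    return total_value / visits + c * math.sqrt(math.log(parent_visits) / visits)

# Hypothetical per-move statistics: (accumulated value, visit count).
stats = {"A": (6.0, 10), "B": (3.0, 4), "C": (0.0, 0)}
parent = sum(v for _, v in stats.values())
best = max(stats, key=lambda m: ucb1(*stats[m], parent))
print("move to explore next:", best)
```

The point of the selective rule is that the program spends its limited evaluations on the handful of moves its accumulated knowledge flags as promising, instead of expanding all 361 branches at every level.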


News Article | June 8, 2016
Site: www.techtimes.com

Could the bison provide clues to the mystery of ancient American settlement? Bones of giant steppe bison and traces of their ice-age hunters have led researchers to conclude that early humans likely colonized North America south from Alaska along the Pacific coast – not through the Rocky Mountains as previously thought. But when and how this happened remains a mystery.
Not Through The Rocky Mountains Corridor?
The first ancient people in America are thought to have reached their destination from Siberia using an ice-free corridor up along the Rocky Mountains during the late Pleistocene era. It remains uncertain when the crossing was created and how the people spread across the rest of America. The traditional assumption is that people swept into the continent in a single wave 13,500 years ago, but there has been contradicting evidence that human societies had settled far to the east as early as 14,500 years ago and far to the south more than 15,000 years ago. More recent proof shows, too, that the Rocky Mountain corridor was open until about 21,000 years ago, at the height of the last ice age when east and west ice sheets coalesced and completely separated populations. Now, using radiocarbon dating and DNA analysis, researchers from the University of California, Santa Cruz followed ancient hunters and tracked bison movements. Studying 78 bison fossils, they found two distinct populations to the north and south, and traced when the animals migrated and interbred. They discovered that southern bison moved in first with the opening of the southern part of the corridor, followed by the northern bison. The two started to mingle in the open pass approximately 13,000 years ago. What this means: the mountains likely cleared of ice more than a thousand years after humans had colonized the south – a suggestion that early humans first inhabited the Americas along the Pacific coast. "When the corridor opened, people were already living south of there," said study author and ecology and evolutionary biology professor Beth Shapiro. "And because those people were bison hunters, we can assume they would have followed the bison as they moved north into the corridor." First author and postdoc researcher Peter Heintzman said that given these results, one would be hard-pressed to think otherwise. "Fourteen to 15,000 years ago, there's still a hell of a lot of ice around everywhere," he told the Guardian. "And if that wasn't opened up you'd have to go around the ice, and going the coastal route is the simplest explanation." The Rocky Mountains corridor, however, remains important for its role in later migrations and idea exchange between people north and south, Heintzman added. Heintzman pointed to tidal erosion to explain why there is little archaeological evidence along the Pacific coast to vouch for its use by ancient people migrating south. In the north, on the other hand, site dating is improving, but only a handful of sites have been found in the land bridge along the Bering Strait. Here enter the fossils of bison, which are deemed the most numerous mammals of their kind in western North America. These animals, unlike most other large mammals like sloths and dire wolves, also survived mass extinction events. The over-6-foot-tall steppe bison of this period were much more massive than their living counterparts, according to author Duane Froese from the University of Alberta in Canada. Modern bison descended from these giants, Heintzman said, although they reside south of the range of their ancestors. 
Many of the fossil samples came from the Royal Alberta Museum and other institutions' collections. They were revealed through mining operations and later made available for scientific research. The findings were published on June 6 in the journal Proceedings of the National Academy of Sciences.
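The radiocarbon dating behind the bison timeline rests on simple first-order decay arithmetic. As a hedged illustration (this is the textbook formula for a raw, uncalibrated age, not the study's calibrated method, which corrects raw ages against calibration curves such as IntCal):

```python
import math

LIBBY_MEAN_LIFE = 8033.0  # years; conventional value used for raw 14C ages

def radiocarbon_age(fraction_modern: float) -> float:
    # First-order decay: N/N0 = exp(-t / tau), so t = -tau * ln(N/N0).
    # This yields a raw radiocarbon age, not a calendar age.
    return -LIBBY_MEAN_LIFE * math.log(fraction_modern)

# A bone retaining about 20% of its original 14C dates to roughly 13,000
# radiocarbon years, the era when the corridor's bison began to mingle.
print(f"{radiocarbon_age(0.20):.0f} radiocarbon years")
```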


News Article | August 6, 2013
Site: www.theguardian.com

A starved polar bear found dead in Svalbard as "little more than skin and bones" perished due to a lack of sea ice on which to hunt seals, according to a renowned polar bear expert. Climate change has reduced sea ice in the Arctic to record lows in the last year and Dr Ian Stirling, who has studied the bears for almost 40 years and examined the animal, said the lack of ice forced the bear into ranging far and wide in an ultimately unsuccessful search for food. "From his lying position in death the bear appears to simply have starved and died where he dropped," Stirling said. "He had no external suggestion of any remaining fat, having been reduced to little more than skin and bone." The bear had been examined by scientists from the Norwegian Polar Institute in April in the southern part of Svalbard, an Arctic island archipelago, and appeared healthy. The same bear had been captured in the same area in previous years, suggesting that the discovery of its body, 250km away in northern Svalbard in July, represented an unusual movement away from its normal range. The bear probably followed the fjords inland as it trekked north, meaning it may have walked double or treble that distance. Polar bears feed almost exclusively on seals and need sea ice to capture their prey. But 2012 saw the lowest level of sea ice in the Arctic on record. Prond Robertson, at the Norwegian Meteorological Institute, said: "The sea ice break up around Svalbard in 2013 was both fast and very early." He said recent years had been poor for ice around the islands: "Warm water entered the western fjords in 2005-06 and since then has not shifted." Stirling, now at Polar Bears International and previously at the University of Alberta and the Canadian Wildlife Service, said: "Most of the fjords and inter-island channels in Svalbard did not freeze normally last winter and so many potential areas known to that bear for hunting seals in spring do not appear to have been as productive as in a normal winter. As a result the bear likely went looking for food in another area but appears to have been unsuccessful." Research published in May showed that loss of sea ice was harming the health, breeding success and population size of the polar bears of Hudson Bay, Canada, as they spent longer on land waiting for the sea to refreeze. Other work has shown polar bear weights are declining. In February a panel of polar bear experts published a paper stating that rapid ice loss meant options such as the feeding of starving bears by humans needed to be considered to protect the 20,000-25,000 animals thought to remain. The International Union for the Conservation of Nature, the world's largest professional conservation network, states that of the 19 populations of polar bear around the Arctic, data is available for 12. Of those, eight are declining, three are stable and one is increasing. The IUCN predicts that increasing ice loss will mean between one-third and a half of polar bears will be lost in the next three generations, about 45 years. But the US and Russian governments said in March that faster-than-expected ice losses could mean two-thirds are lost. Attributing a single incident to climate change can be controversial, but Douglas Richardson, head of living collections at the Highland Wildlife Park near Kingussie, said: "It's not just one bear though. There are an increasing number of bears in this condition: they are just not putting down enough fat to survive their summer fast. 
This particular polar bear is the latest bit of evidence of the impact of climate change." Ice loss due to climate change is "absolutely, categorically and without question" the cause of falling polar bear populations, said Richardson, who cares for the UK's only publicly kept polar bears. He said 16 years was not particularly old for a wild male polar bear, which usually live into their early 20s. "There may have been some underlying disease, but I would be surprised if this was anything other than starvation," he said. "Once polar bears reach adulthood they are normally nigh on indestructible, they are hard as nails." Jeff Flocken, at the International Fund for Animal Welfare, said: "While it is difficult to ascribe a single death or act to climate change it couldn't be clearer that drastic and long-term changes in their Arctic habitat threaten the survival of the polar bear. The threat of habitat loss from climate change, exacerbated by unsustainable killing for commercial trade in Canada, could lead to the demise of one of the world's most iconic animals, and this would be a true tragedy."


News Article | October 31, 2016
Site: www.sciencemag.org

UPDATE: The fossil Tetrapodophis amplectus will return to the Bürgermeister-Müller-Museum in Solnhofen, Germany, later this month, sources say. The fossil’s owner had temporarily removed it because of damage it had sustained during CT scanning at the European Synchrotron Radiation Facility (ESRF) in Grenoble, France. “Two very important bones of the holotype were partially damaged,” says Martin Röper, director of the museum. After investigating the extent of the damage, the owner agreed to return it to the museum—but scientists will now only be able to study it in-house, Röper says. The good news, says Paul Tafforeau of ESRF, is that the facility has improved its imaging protocols for flat fossils, so that “it can never happen again.” Here is our original story: SALT LAKE CITY—It is a tiny, fragile thing: a squashed skull barely a centimeter in length; a sinuous curving body about two fingers long; four delicate limbs with grasping hands. In a major paper last year, researchers called this rare fossil from more than 100 million years ago the first known four-legged snake. But at a meeting of the Society of Vertebrate Paleontology (SVP) here last week, another team suggested that it’s a marine lizard instead. Even as scientists debate the identity of this controversial specimen, the only one of its kind, it appears to be inaccessible for further study. And paleontologists are mad as hell. “It’s horrifying,” says Jacques Gauthier, a paleontologist at Yale University. As far as he’s concerned, if the fossil can’t be studied, it doesn’t exist. “For me, the take-home message is that I don’t want to mention the name Tetrapodophis ever again.” A year ago, researchers led by David Martill of the University of Portsmouth in the United Kingdom reported in Science that the fossil, which they named Tetrapodophis amplectus (for four-footed snake), was a missing link in the snake evolutionary tree. Researchers knew snakes had evolved from four-limbed reptiles, but few transitional forms had been discovered, and researchers continue to wrangle over whether the first lizards to lose their limbs and become snakes were terrestrial burrowers or aquatic swimmers. Martill and colleagues reported that the fossil, which they described as a specimen in a German museum, originated from a Brazilian outcrop of the Crato Formation, a 108-million-year-old limestone layer rich in both marine and terrestrial species. They identified snakelike features in the fossil, including a long body consisting of more than 150 vertebrae, a relatively short tail of 112 vertebrae, hooked teeth, and scales on its belly. Those features, they say, support the hypothesis that snakes evolved from burrowing ancestors. But many paleontologists weren’t convinced. Last week at the annual SVP meeting here, vertebrate paleontologist Michael Caldwell of the University of Alberta in Edmonton, Canada, and colleagues presented their own observations of the specimen, rebutting Martill’s paper point by point to a standing-room-only crowd. The new analysis hinges on the “counterpart” to the original fossil, which was also housed in the Bürgermeister-Müller Museum in Solnhofen, Germany. When the slab of rock containing the fossil was cracked open, the body of the organism stayed mostly in one half of the slab, whereas the skull was mostly in the other half, paired with a mold or impression of the body. This counterpart slab, Caldwell says, preserved clearer details of the skull in particular. 
In his group’s analysis of the counterpart, he says, “every single character that was identified in the original manuscript as being diagnostic of a snake was either not the case or not observable.” For example, in snake skulls, a bone called the quadrate is elongated, which allows snakes to open their jaws very wide. This fossil’s quadrate bone is more C-shaped, and it surrounds the animal’s hearing apparatus—a “characteristic feature” of a group of lizards called squamates, says co-author Robert Reisz, a vertebrate paleontologist at the University of Toronto in Mississauga, Canada. He and Caldwell add that although the fossil has more vertebrae in its body than in its tail, the tail isn’t short, but longer than that of many living lizards. They are working on a paper arguing that the fossil is probably a dolichosaur, an extinct genus of marine lizard. Martill and co-author Nicholas Longrich of the University of Bath in the United Kingdom, neither of whom was at the meeting, stand strongly by their original analysis. Longrich cites all the snakelike features discussed in the original paper. “In virtually every single respect [it] looks like a snake, except for one little detail—it has arms and legs,” he told Science by email. Many researchers who attended the talk, including Gauthier and paleontologist Jason Head of the University of Cambridge in the United Kingdom, are persuaded that Tetrapodophis is not a snake. But as for what it is, there may be as many opinions as there are paleontologists. Hong-yu Yi of the Institute of Vertebrate Paleontology and Paleoanthropology in Beijing, China, says she’s reserving judgment on the specimen’s identity until further analysis. “I was always waiting for a longer description of the specimen. I’m still waiting,” she says. That analysis may never happen. Caldwell says he went back to the Bürgermeister-Müller Museum several months ago to study the specimen again; separately, Head says he also attempted to study the fossil. Neither could get access to it. Caldwell says that the fossil wasn’t actually part of the museum’s collection, but was on loan from a private owner. Researchers who declined to be named because of ongoing discussions around the fossil say that it may have been damaged during study, prompting the collector to restrict access to it. “I don’t even know if a publication at this moment is appropriate because no one else will be able to access this specimen,” Yi says. In fact, some researchers say the original paper should not have been published, because the fossil was not officially deposited in a museum or other repository, so the authors couldn’t guarantee that future researchers could access it. “I have nothing against” private fossil collecting, Gauthier says. But when a fossil enters the scientific literature, he says, “then it has to be available. Science requires repeatability.” In response, Science deputy editor Andrew Sugden says, “Our understanding at the time of publication and in subsequent correspondence was that the specimen was accessible at the museum, as stated at the end of the paper.” Researchers had already raised other questions about the fossil’s transport out of Brazil. Brazil passed laws in the 1940s making all fossils property of the state rather than private owners. “Most of the exploration of the limestone quarries in that region of the country began in the second half of the 20th century,” says Tiago Simões, a paleontologist at the University of Alberta, who was also an author on the SVP talk. 
“So the vast majority” of fossils from those areas were collected after the law had passed. “That really touches on some very sensitive ethical boundaries.” Head agrees. “The best way to move forward is to literally erase the specimen from our research program. Tetrapodophis is no longer science. … It’s not repeatable, it’s not testable. If any good can come out of Tetrapodophis, it’s the recognition that we have got to maintain scientific standards when it comes to fossils … they have to be accessible."


VANCOUVER, BC--(Marketwired - February 15, 2017) - SG Spirit Gold Inc. (TSX VENTURE: SG) (the "Company") is pleased to announce that following its review of strategic acquisition opportunities, the Company has entered into a definitive amalgamation agreement, effective February 10, 2017 (the "Definitive Agreement"), with Northern Lights Marijuana Company Limited ("DOJA"). Pursuant to the terms of the Definitive Agreement, the Company will acquire all of the issued and outstanding securities of DOJA (the "Transaction"). DOJA is a privately owned company based in Canada's picturesque Okanagan Valley that is committed to becoming a licensed producer of marijuana under the Access to Cannabis for Medical Purposes Regulations ("ACMPR") and building a fast-growing lifestyle brand that offers the highest-quality handcrafted cannabis strains in Canada. DOJA was founded in 2013 by the same team that founded and built SAXX Underwear into an internationally recognizable brand. The DOJA team plans to build upon their past success in the consumer packaged goods industry and their mutual interest in, and appreciation for, cannabis culture and grow DOJA into a market-leading brand in the cannabis industry. DOJA's marijuana production growth strategy can be broken down into three phases. DOJA's team has the experience to ensure they successfully navigate the ACMPR licensing process and deliver on their vision for the DOJA brand. Trent Kitsch -- Chief Executive Officer: Mr. Kitsch co-founded DOJA in 2013. Prior thereto, Mr. Kitsch founded SAXX Underwear in 2007 and successfully built SAXX into a globally recognizable brand and the fastest-growing underwear brand in North America before fully exiting the business in 2015. In 2013, Mr. Kitsch and his wife Ria Kitsch founded Kitsch Wines in the Okanagan Valley. Trent is a proven entrepreneur and a graduate of the Richard Ivey School of Business with a major in entrepreneurship. Ryan Foreman -- President: Mr. Foreman co-founded DOJA with Mr. Kitsch in 2013. Mr. Foreman has spent over 15 years developing e-commerce operations within the consumer goods space, working with influential brands and industry disrupters in the lifestyle and action sports markets. He has expertise in developing and managing teams executing all aspects of the business, including system integrations, domestic and international compliance, fulfillment, website development and online marketing. Jeff Barber -- Chief Financial Officer: Mr. Barber joined DOJA in 2016 after selling his ownership in a boutique M&A advisory firm in Calgary. Prior thereto, he was an investment banker with Raymond James Limited and previously held investment banking and equity research positions at Canaccord Genuity Corp. Jeff began his career as an economist with Deloitte LLP. Throughout his career, Mr. Barber has worked closely with various public company boards and executive teams to assist in institutional capital initiatives and advise on go-public transactions, valuations and M&A mandates. Jeff Barber is a CFA charterholder and holds a Master's degree in Finance and Economics from the University of Alberta. Zena Prokosh -- Chief Operating Officer: Ms. Prokosh joined DOJA after spending two years with THC Biomed International Ltd., where she was an Alternate Responsible Person In Charge and played an integral role in guiding the company through the MMPR/ACMPR licensing process. 
Prior thereto she was the Curator and Germplasm PlantSMART Research Technician / Lab Manager at the UBC Charles Fipke Centre for Innovative Research in Kelowna. Zena was accepted to and attended the 2016 Masterclass Medicinal Cannabis® held in Leiden, Netherlands. Ms. Prokosh holds a B.Sc. in Biology from UBC. Ria Kitsch -- Vice President: Mrs. Kitsch has been with DOJA since inception. Ria was formerly head of marketing for SAXX Underwear. Prior to that, Ms. Kitsch was employed with WaterPlay Solutions Corp., where she became a top salesperson and territory manager by quickly identifying and executing strategies to grow in regulated markets. Strong customer service skills and marketing focus make her a front-line specialist. Mrs. Kitsch earned a Business Honors degree from UBC-Okanagan. Shawn McDougall -- Master Grower and Curer: Mr. McDougall brings over a decade of cannabis growing and curing experience to DOJA. Shawn is truly a cannabis connoisseur and he will thoughtfully curate DOJA's strain selection to represent the full spectrum of the cannabis experience. Prior to joining DOJA, Shawn was the Master Grower for a number of patients under the Marijuana Medical Access Regulations and also consulted for MMPR applicants. Shawn has continued to hone his craft over the years and developed growing techniques that allow him to consistently produce high-quality cannabis and impressive yields. Shawn is an automation specialist and ticketed HVAC technician. In accordance with the terms of the Definitive Agreement, DOJA will amalgamate with a wholly-owned subsidiary of the Company, following which the resulting amalgamated entity will continue as a wholly-owned subsidiary of the Company. In consideration for completion of the Transaction, the current holders of DOJA class "A" voting common shares will be issued one-and-eight-tenths (1.8) common shares of the Company in exchange for every share of DOJA they hold. Existing convertible securities of DOJA will be exchanged for convertibles of the Company, on substantially the same terms, and applying the same exchange ratio. Prior to closing of the Transaction it is anticipated that the Company will apply to list its common shares for trading on the Canadian Securities Exchange (the "CSE") and voluntarily delist its shares from the TSX Venture Exchange. On closing of the Transaction it is anticipated that the Company will change its name to "DOJA Cannabis Company Limited", and will reconstitute its board of directors to consist of Trent Kitsch, Jeffrey Barber, Ryan Foreman and Patrick Brauckmann, with Trent Kitsch serving as Chief Executive Officer, Jeffrey Barber serving as Chief Financial Officer and Ryan Foreman serving as President. Closing of the Transaction remains subject to a number of conditions, including the completion of satisfactory due diligence, receipt of any required shareholder, regulatory and third-party consents, the Canadian Securities Exchange having conditionally accepted the listing of the Company's common shares, the TSX Venture Exchange having consented to the voluntarily delisting of the Company's common shares, and the satisfaction of other customary closing conditions. Additional information regarding the Transaction will be made available under the Company's profile on SEDAR (www.sedar.com) as such information becomes available. The Transaction cannot close until the required approvals are obtained, and the Company's common shares have been delisted from the TSX Venture Exchange. 
There can be no assurance that the Transaction will be completed as proposed or at all, or that the Company's common shares will be listed and posted for trading on any stock exchange. Trading in the Company's common shares has been halted and it is anticipated that trading will remain halted until completion of the Transaction. Neither the TSX Venture Exchange, nor the Canadian Securities Exchange, has in any way passed upon the merits of the proposed Transaction and has neither approved nor disapproved the contents of this press release. On behalf of the Board, Neither the TSX Venture Exchange nor its regulation services provider (as that term is defined in the policies of the TSX Venture Exchange) accepts responsibility for the adequacy or accuracy of this release. This news release includes certain "forward-looking statements" under applicable Canadian securities legislation. Forward-looking statements include, but are not limited to, statements with respect to the terms and conditions of the proposed Transaction; and future developments and the business and operations of DOJA. Forward-looking statements are necessarily based upon a number of estimates and assumptions that, while considered reasonable, are subject to known and unknown risks, uncertainties, and other factors which may cause the actual results and future events to differ materially from those expressed or implied by such forward-looking statements. Such factors include, but are not limited to: general business, economic, competitive, political and social uncertainties, uncertain capital markets; and delay or failure to receive board, shareholder or regulatory approvals. There can be no assurance that the Transaction will proceed on the terms contemplated above or at all and that such statements will prove to be accurate, as actual results and future events could differ materially from those anticipated in such statements. Accordingly, readers should not place undue reliance on forward-looking statements. The Company and DOJA disclaim any intention or obligation to update or revise any forward-looking statements, whether as a result of new information, future events or otherwise, except as required by law.


News Article | April 27, 2016
Site: motherboard.vice.com

On April 26, David and Collet Stephan, the Alberta couple who treated their sick toddler with horseradish and onion smoothies for two weeks, were found guilty of failing to provide the necessaries of life to their 18-month-old son, a conviction that carries up to five years in jail. Ezekiel died of meningitis in 2012. It's a tragic story, even more so because nobody doubts that David and Collet Stephan loved their son. Instead, they seem to have misguidedly believed that alternative therapies, including an echinacea tincture they bought at a naturopath's office, would help him get better. The boy eventually stopped breathing, and they phoned 911. He later died in hospital. (The parents testified that they thought Ezekiel had the flu, although a family friend and nurse had suggested he may have meningitis.) Doctors, government, and the alternative medicine industry all have a duty to do better here. Horseradish and echinacea are no substitute for conventional, science-based medicine. Patients across Canada need better access to family doctors, and they need to know—without a doubt—when it's time to seek one out, and forgo the naturopath. "I hope this sends a strong message about the nature of [alternative] services," Tim Caulfield, Canada Research Chair in Health Law and Policy at the University of Alberta, who has been following the case, told Motherboard on Monday, shortly after the verdict. "And I hope it causes policymakers throughout Canada to rethink how they're positioning these therapies in our healthcare system." Alternative therapies, including naturopaths' services, are popular. It's easy to see why: in Canada, which suffers from a longstanding doctor shortage, it can be difficult—if not impossible—to get a family doctor. Even when you do have one, that doctor is often rushed. By contrast, naturopaths sit with their patients for half an hour or longer, going over every little detail of their health. One of their most valuable services is "lifestyle counselling," simple diet and exercise advice, that doctors often don't have the time to do. Naturopaths have been given the right to self-regulate in many parts of Canada, like Alberta, which gives them a veneer of professionalism. But the general public should be clear on this: plenty of their most popular services still have little or no science behind them. In a 2011 survey in Alberta, Caulfield found that homeopathy, detoxification, and hydrotherapy were among the most popular and advertised treatments offered by Alberta's naturopaths. "There is no scientific evidence to support those services at all," he said. Detoxing, for one, has been debunked over and over again. But people keep paying for it. The growing creep toward pseudoscience, and a distrust of conventional medicine, is something we all need to address—Canada's doctors and policymakers included. A step in the right direction was former Health Minister Rona Ambrose's announcement, in 2015, that "nosodes" (homeopathic treatments) would be labelled clearly to show they are not vaccines. The College of Naturopaths of Ontario is in line with this advice. But there's clearly still some confusion around alternative therapies. In November, another trial begins in Alberta, into the death of 7-year-old Ryan Lovett, whose mother treated his illness with "holistic" treatments. The Canadian Press found several cases dating back to the 1960s. 
Since I first wrote about Ezekiel Stephan, I've heard from many naturopaths who point out that they're licensed professionals, working to protect their patients. I don't doubt that's true. But what needs to be made absolutely clear is that such treatments are not an alternative to conventional, science-based medicine. Naturopaths have an important role in this. The Alberta naturopath whose office provided Ezekiel's parents with the tincture is now under investigation. As for David and Collet Stephan, observers seem to doubt that they'll be sentenced to a full five years in jail. "Alternative practitioners shouldn't be your go-to primary care physician," Caulfield said. If an adult wants to pay for a detox or some other alternative treatment, that's one thing. "We shouldn't be testing out our ideologies around healthcare on our children."


News Article | December 8, 2016
Site: motherboard.vice.com

For Lida Xing, a paleontologist based at the China University of Geosciences, scientific progress occasionally calls for some light espionage. This kind of situation arose last year, after he made an astonishing discovery at an amber market in Myitkyina, Myanmar. In a snowglobe-sized chunk of fossilized tree resin, Xing recognized the partial remains of an exquisitely preserved feathered tail belonging to a small juvenile coelurosaur, a type of bird-like dinosaur. The fossil dates back 99 million years to the middle Cretaceous period, when temperatures were warm, sea levels were high, and dinosaurs walked among the earliest flowering plants. Its discovery at the market was a stroke of "great luck," Xing told me over email. "I often visit amber markets," he said. "But this is the only dinosaur amber I've ever seen." So, where does the paleontological reconnaissance come in? Burmese amber markets happen to be fed by amber mines in Hukawng Valley, located in the Kachin State of northern Myanmar. This region is currently under the control of the insurgent Kachin Independence Army, which has a long history of conflict with the Burmese government. "Resellers buy scraps from amber miners and sell them on the markets," Xing explained. "The mines are extremely dangerous, so foreigners can hardly get there." Xing decided to go undercover. "I disguised myself as a Burmese man with a face painted with Thanaka," he told me. (Thanaka is a popular cosmetic paste in Myanmar, yellowish-white in color, made from finely ground tree bark.) Stealthily camouflaged and armed with a fake ID, Xing snuck into the region. He met the prospector responsible for excavating the dinosaur tail, who guided Xing through the mines and showed him new geological samples. "We are very lucky," he said of the escapade. As for the dinosaur tail itself, Xing persuaded the Dexu Institute of Palaeontology in Chaozhou to purchase it, and has since headed up an international team of researchers to analyse the fossil with computerized tomography (CT) scanning techniques. The results, published Thursday in Current Biology, shed light on the evolution of feathers and reveal intimate details about this particular coelurosaur, including its coloration pattern, skeletal features, and even the hemoglobin molecules that ran through its blood, which left traces of iron oxide within the tail. Xing and his co-authors, including paleontologists Ryan McKellar of the Royal Saskatchewan Museum and Philip J. Currie of the University of Alberta, were able to identify the animal as a coelurosaur by its flexible vertebral structure, which distinguishes it from the fused rod-like spines of avian dinosaurs that would have sported similar feathered plumages. The feather coloration pattern suggests that the young dinosaur had chestnut-brown dorsal feathers, while its underbelly was pale. Xing told me that the brown feathers may have acted as "protective coloration," helping the coelurosaur blend into the woodland environments in which it is presumed to have lived. Small coelurosaurs would likely have scuttled along the ground hunting insects in tropical forests populated by trees similar to the kauri trees still found in New Zealand. That said, there is still a lot to learn about Myanmar's rich paleontological history. "The environment of the middle Cretaceous from northern Myanmar does not appear to be formally studied," Xing said. 
This gap in paleontological knowledge is caused both by the remote location of amber mines and fossil beds in the region and by the longstanding social and political unrest that makes much of Myanmar's north off-limits to outsiders. The fact that this gorgeous snapshot of the Cretaceous world wound up in a Burmese amber market seems like a great incentive for more paleontologists to scout out local vendors. Indeed, Xing and McKellar previously teamed up on a June 2016 study about an amber specimen containing spectacular Cretaceous bird wings, which was also sourced from these markets. Hopefully, these recent discoveries will spark efforts to collect more of these astonishing amber-encased time capsules, even if it requires top secret dinosaurian espionage. These fossils may not bring dinosaurs back to life, as in Jurassic Park, but they still offer valuable and vivid tableaus of long-dead ecosystems.


News Article | December 1, 2016
Site: www.marketwired.com

GRANDE PRAIRIE, AB--(Marketwired - December 01, 2016) - ANGKOR GOLD CORP. (TSX VENTURE: ANK) ( : ANKOF) ("Angkor" or "the Company") announced today the resignation of Mr. Aaron Triplett as Angkor's CFO to pursue other opportunities, and the appointment of Mr. Terry Mereniuk, B.Comm., CPA, CMC, as interim CFO. Mr. Mereniuk is currently a director of Angkor. Mr. Mereniuk has been a Director and CFO of several public and private companies. Prior to that, he owned and operated his own accounting firm. Terry obtained a Bachelor of Commerce (with distinction) from the University of Alberta in 1981. He has been a Certified Management Consultant since June 1988 and a Chartered Professional Accountant since December 1983. The Company welcomes Mr. Mereniuk to his new role and extends its thanks to Mr. Triplett for his service. His contributions to the Company have been greatly appreciated. ANGKOR Gold Corp. is a public company listed on the TSX-Venture Exchange and is Cambodia's premier mineral explorer with a large land package and a first-mover advantage building strong relationships with all levels of government and stakeholders. ANGKOR's six exploration licenses in the Kingdom of Cambodia cover 1,352 km2, which the company has been actively exploring over the past 6 years. The company has now covered all tenements with stream sediment geochemical sampling and has flown low-level aeromagnetic surveys over most of the ground. Angkor has diamond drilled 21,855 metres in 190 holes, augered 2,643 metres over 728 holes, collected over 165,000 termite mound samples and 'B' and 'C' zone soil samples in over 20 centres of interest over a combined area of more than 140 km2, in addition to numerous trenches, IP surveys and detailed geological field mapping. Exploration on all tenements is ongoing. Website at: http://www.angkorgold.ca or follow us @AngkorGold for all the latest updates. Neither TSX Venture Exchange nor its Regulation Services Provider (as that term is defined in the policies of the TSX Venture Exchange) accepts responsibility for the adequacy or accuracy of this release. Certain of the statements made and information contained herein may constitute "forward-looking information". In particular references to the private placement and future work programs or expectations on the quality or results of such work programs are subject to risks associated with operations on the property, exploration activity generally, equipment limitations and availability, as well as other risks that we may not be currently aware of. Accordingly, readers are advised not to place undue reliance on forward-looking information. Except as required under applicable securities legislation, the Company undertakes no obligation to publicly update or revise forward-looking information, whether as a result of new information, future events or otherwise.


Casey J.R.,University of Alberta | Grinstein S.,Hospital for Sick Children | Orlowski J.,McGill University
Nature Reviews Molecular Cell Biology | Year: 2010

Protons dictate the charge and structure of macromolecules and are used as energy currency by eukaryotic cells. The unique function of individual organelles therefore depends on the establishment and stringent maintenance of a distinct pH. This, in turn, requires a means to sense the prevailing pH and to respond to deviations from the norm with effective mechanisms to transport, produce or consume proton equivalents. A dynamic, finely tuned balance between proton-extruding and proton-importing processes underlies pH homeostasis not only in the cytosol, but in other cellular compartments as well.
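To make the scale of these gradients concrete, here is a small illustrative Python sketch (not taken from the review itself): it converts pH values for a few compartments into free-proton concentrations via [H+] = 10^(-pH), showing the several-hundred-fold difference the homeostatic machinery must maintain between, say, cytosol and lysosome. The compartment pH values below are typical textbook figures, used here purely as illustrative assumptions.

```python
# Illustrative sketch: organellar pH -> free-proton concentration.
# The pH values are assumed textbook-style figures, not data from the paper.
compartment_ph = {
    "cytosol": 7.2,
    "mitochondrial matrix": 8.0,
    "early endosome": 6.3,
    "lysosome": 4.7,
}

for name, ph in compartment_ph.items():
    h_conc = 10 ** (-ph)  # free [H+] in mol/L
    print(f"{name:22s} pH {ph:.1f} -> [H+] ~ {h_conc:.2e} M")
```

Running this shows the lysosome holding roughly 300 times the free-proton concentration of the cytosol, which is the kind of deviation-from-norm that the proton-transporting machinery described above must actively sustain.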


Fujimoto K.,Japan National Astronomical Observatory | Sydora R.D.,University of Alberta
Physical Review Letters | Year: 2012

The dissipation mechanism in collisionless magnetic reconnection during a quasisteady period is investigated for the antiparallel field configuration. A three-dimensional simulation of a fully kinetic system reveals that a current-aligned electromagnetic mode produces turbulent electron flow that facilitates the transport of the momentum responsible for the current density. It is found that the electromagnetic turbulence is drastically enhanced by plasmoid formation and has a significant impact on the dissipation at the magnetic x-line. Linear analyses confirm that the mode survives at the real ion-to-electron mass ratio, underscoring the importance of the turbulence in collisionless reconnection. © 2012 American Physical Society.
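The mass-ratio point can be illustrated numerically. The Python sketch below (an illustration, not the authors' code) compares the ion and electron inertial lengths, d_s = c/ω_ps with ω_ps = sqrt(n q²/ε₀ m_s), for a reduced mass ratio typical of kinetic simulations versus the real proton-electron ratio. The scale separation d_i/d_e = sqrt(m_i/m_e) is exactly what reduced-mass-ratio simulations compress, which is why checking that a mode survives at the real mass ratio matters. The density value is an arbitrary assumption; the ratio itself is density-independent.

```python
# Illustrative sketch: scale separation between ion and electron inertial
# lengths at a reduced vs. the real ion-to-electron mass ratio.
import math
from scipy.constants import c, e, epsilon_0, m_e, m_p

n = 1e6  # assumed plasma density in m^-3 (~1 cm^-3); the ratio below
         # does not depend on this choice

def inertial_length(mass):
    """d_s = c / omega_ps for a species of the given mass."""
    omega_p = math.sqrt(n * e**2 / (epsilon_0 * mass))
    return c / omega_p

d_e = inertial_length(m_e)
for label, m_i in (("reduced (m_i/m_e = 100) ", 100 * m_e),
                   ("real    (m_i/m_e ~ 1836)", m_p)):
    print(f"{label}: d_i/d_e = {inertial_length(m_i) / d_e:.1f}")
```

The reduced ratio gives d_i/d_e = 10, while the real ratio gives roughly 43, a fourfold wider separation between the scales on which ions and electrons decouple.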


Geng H.,Tsinghua University | Liu C.,University of Alberta | Yang G.,Tsinghua University
IEEE Transactions on Industrial Electronics | Year: 2013

In this paper, the low-voltage ride-through (LVRT) capability of the doubly fed induction generator (DFIG)-based wind energy conversion system under asymmetrical grid faults is analyzed, and a control scheme is proposed to meet the requirements defined by the grid codes. As the analysis shows, the control effort required for the negative-sequence current is much higher than that for the positive-sequence current in the DFIG. As a result, the control capability of the DFIG, constrained by the dc-link voltage, degrades for fault types with a higher negative-sequence voltage component, and the 2φ fault turns out to be the most severe scenario for the LVRT problem. When the fault location is close to the grid connection point, the DFIG may lose control, resulting in non-ride-through zones. In the worst case in which LVRT can still succeed, the maximal positive-sequence reactive current supplied by the DFIG is around 0.4 pu, which is consistent with present grid codes. Increasing the power rating of the rotor-side converter can improve the LVRT capability of the DFIG but incurs additional cost. Based on the analysis, an LVRT scheme for the DFIG is proposed that takes into account the code requirements and the control capability of the converters. As verified by simulation and experimental results, the scheme enables the DFIG to supply the defined positive-sequence reactive current to support the power grid while mitigating oscillations in the generator torque and dc-link voltage, improving the reliability of the wind farm and the power system. © 2012 IEEE.
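For readers unfamiliar with the sequence decomposition this analysis relies on, the following minimal Python sketch (not the authors' controller) applies the standard Fortescue transform to a hypothetical bolted phase-to-phase fault. It recovers the classic result that such a 2φ fault produces a negative-sequence voltage component as large as the positive-sequence one, consistent with the paper identifying this fault type as the most severe LVRT case. The phasor values are assumed for illustration only.

```python
# Illustrative sketch: symmetrical (Fortescue) components of an unbalanced
# three-phase voltage during a hypothetical bolted phase-b-to-c fault.
import numpy as np

a = np.exp(2j * np.pi / 3)  # 120-degree rotation operator

def sequence_components(va, vb, vc):
    """Return (zero, positive, negative) sequence phasors."""
    v0 = (va + vb + vc) / 3
    v1 = (va + a * vb + a**2 * vc) / 3
    v2 = (va + a**2 * vb + a * vc) / 3
    return v0, v1, v2

# Assumed terminal voltages (pu): phase a unfaulted, phases b and c
# collapsed onto each other by the bolted fault.
va = 1.0 + 0j
vb = -0.5 + 0j
vc = -0.5 + 0j

v0, v1, v2 = sequence_components(va, vb, vc)
print(f"|V+| = {abs(v1):.3f} pu, |V-| = {abs(v2):.3f} pu, |V0| = {abs(v0):.3f} pu")
```

The output, |V+| = |V-| = 0.5 pu with no zero-sequence component, shows why the converter's limited negative-sequence control authority becomes the binding constraint in this scenario.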


Mezzacapo F.,Max Planck Institute of Quantum Optics | Boninsegni M.,University of Alberta
Physical Review B - Condensed Matter and Materials Physics | Year: 2012

We study the ground-state phase diagram of the quantum J1-J2 model on the honeycomb lattice by means of an entangled-plaquette variational ansatz. Values of the energy and relevant order parameters are computed in the range 0 ≤ J2/J1 ≤ 1. The system displays classical order for J2/J1 ≲ 0.2 (Néel) and for J2/J1 ≳ 0.4 (collinear). In the intermediate region, the ground state is disordered. Our results show that the reduction of the half-filled Hubbard model to the model studied here does not yield accurate predictions. © 2012 American Physical Society.
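As a toy illustration of the model being studied (not of the entangled-plaquette method itself), the Python sketch below exactly diagonalizes the spin-1/2 J1-J2 Heisenberg Hamiltonian on a single 6-site ring, with J1 bonds between neighbours and J2 bonds between next-nearest neighbours. The cluster geometry is a hypothetical minimal stand-in, far too small to show the lattice's phases, but it makes the Hamiltonian construction concrete.

```python
# Illustrative sketch: exact diagonalization of a tiny J1-J2 Heisenberg
# cluster (6-site periodic ring; an assumed toy geometry, not the honeycomb
# lattice of the paper).
import numpy as np

# Spin-1/2 operators (hbar = 1).
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
idm = np.eye(2, dtype=complex)

N = 6  # sites on the ring

def site_op(op, i):
    """Embed a single-site operator at site i in the N-site Hilbert space."""
    mats = [idm] * N
    mats[i] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def heisenberg_bond(i, j):
    """S_i . S_j on the full Hilbert space."""
    return (site_op(sx, i) @ site_op(sx, j)
            + site_op(sy, i) @ site_op(sy, j)
            + site_op(sz, i) @ site_op(sz, j))

def hamiltonian(j1, j2):
    H = np.zeros((2**N, 2**N), dtype=complex)
    for i in range(N):
        H += j1 * heisenberg_bond(i, (i + 1) % N)  # nearest neighbours
        H += j2 * heisenberg_bond(i, (i + 2) % N)  # next-nearest neighbours
    return H

for ratio in (0.0, 0.2, 0.4, 0.8):
    e0 = np.linalg.eigvalsh(hamiltonian(1.0, ratio))[0]
    print(f"J2/J1 = {ratio:.1f}: E0 per site = {e0 / N:+.4f}")
```

Scaling this brute-force construction to meaningful lattice sizes is hopeless (the Hilbert space grows as 2^N), which is precisely the motivation for variational approaches such as the entangled-plaquette ansatz used in the paper.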
