Winnipeg, Canada

The University of Manitoba is a public university in the province of Manitoba, Canada. Located in Winnipeg, it is a research-intensive post-secondary educational institution. Founded in 1877, it was Western Canada’s first university.


Patent
University of Manitoba | Date: 2014-12-17

A composition for therapy of a peripheral neuropathy disorder in a subject in need thereof. The composition comprises an effective amount of an agent selected from a group consisting of pirenzepine, oxybutynin, muscarinic toxin 7, a muscarinic receptor antagonist, and combinations thereof, and a pharmacologically acceptable carrier and/or an excipient. The composition is useful for therapy of peripheral neuropathies exemplified by peripheral neuropathies induced by systemic diseases, peripheral neuropathies induced by metabolic diseases, chemotherapy-induced peripheral neuropathies, compression-induced peripheral neuropathies, peripheral neuropathies induced by exposure to dichloroacetate, immune-mediated peripheral neuropathies, peripheral neuropathies induced by infections, and genetically acquired peripheral neuropathies.


Patent
University of Manitoba | Date: 2015-02-18

An enzymatic method is provided for restructuring an affinity ligand bound heterogeneous glycoform antibody sample to a substantially homogeneous single desired glycoform antibody sample for therapeutic uses, together with kits for performing the methods. A method for enzymatically altering the Fc region of an affinity ligand bound antibody from a heterogeneous glycoform to a substantially homogeneous single glycoform comprises: contacting the affinity ligand bound heterogeneous glycoform antibody with a reaction buffer designed for a particular glycoform modification for a time sufficient and under conditions to modify the glycoform of the Fc region to a substantially homogeneous single form; optionally adding one or more nucleotide sugars and/or cofactors; and releasing the substantially homogeneous single glycoform antibody sample from said affinity ligand. The invention also encompasses biopharmaceuticals comprising single glycoform mAbs and polyclonal antibodies enzymatically produced for the treatment of cancers and immune disorders, as well as compositions comprising the single glycoform antibodies as a biopharmaceutical.


Patent
University of Manitoba | Date: 2016-09-28

The Mito-Ob obese mouse model overexpresses the mitochondrial protein prohibitin (PHB). Mito-Ob male mice develop insulin resistance in addition to obesity and they do not develop overt diabetes. It has been discovered that these mice also spontaneously develop nonalcoholic steatohepatitis (NASH) and hepatocarcinogenesis over time. Also described is a mutant Mito-Ob mouse that develops lymphadenopathy and histiocytosis.


A ventilator system for determining respiratory system resistance (R), comprising: a flowmeter that measures the flow rate (V̇) and volume (V) of gas received by a patient; a pressure sensor that measures pressure near the airway of the patient (Paw); and electronic circuitry to receive the Paw, V̇ and V signals, which is also connected to a control system of the ventilator, comprising: circuitry to generate an output that results in a step decrease (negative pulse) in the pressure and/or flow output of the ventilator during selected inflation cycles; circuitry to measure Paw, V̇ and V at a point (T0) near the beginning of the pulse, at a point (T1) near the trough of the negative pulse, and at a point preceding T0 but after the onset of inspiratory effort; and circuitry to calculate the value of resistance (R) based on the measured values.
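One plausible reading of the resistance calculation described in this claim, sketched under the assumption that the elastic (volume-dependent) pressure component changes negligibly over the brief negative pulse (the patent's exact method may differ):

```latex
R \;\approx\; \frac{P_{aw}(T_0) - P_{aw}(T_1)}{\dot{V}(T_0) - \dot{V}(T_1)}
```

Here P_aw is airway pressure and V̇ is flow; the sample taken before T0 but after the onset of inspiratory effort presumably lets the patient's own inspiratory effort be accounted for.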


Maghsudi S.,Yale University | Hossain E.,University of Manitoba
IEEE Transactions on Wireless Communications | Year: 2017

We investigate a distributed downlink user association problem in a dynamic small cell network, where every small base station (SBS) obtains its required energy through ambient energy harvesting. On the one hand, energy harvesting is inherently opportunistic, so that the amount of available energy is a random variable. On the other hand, users arrive at random and require different wireless services, rendering the energy consumption a random variable. In this paper, we develop a probabilistic framework to mathematically model and analyze the random behavior of energy harvesting and energy consumption. We further analyze the probability of QoS satisfaction (success probability) for each user with respect to every SBS. The proposed user association scheme is distributed in the sense that every user independently selects its corresponding SBS, with the success probability serving as the performance metric. The success probability, however, depends on a variety of random factors such as energy harvesting, channel quality, and network traffic, whose distribution or statistical characteristics might not be known to users. Since acquiring knowledge of these random variables (even their statistics) is very costly in a dense network, we develop a bandit-theoretical formulation for distributed SBS selection when no prior information is available to users. The performance is analyzed both theoretically and numerically. © 2017 IEEE.
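The abstract does not give the algorithm's details, but a standard bandit treatment of the SBS-selection problem it describes looks roughly like the following UCB1-style sketch. This is illustrative Python only; the reward model, index policy, and class names are assumptions, not the paper's exact formulation:

```python
import math
import random

class SBSSelector:
    """Each user treats the SBSs as bandit arms; the binary QoS-success
    feedback after each association attempt is the reward."""

    def __init__(self, num_sbs):
        self.counts = [0] * num_sbs        # times each SBS was selected
        self.successes = [0.0] * num_sbs   # accumulated QoS successes

    def select(self):
        # Try each SBS once before applying the UCB index.
        for s, n in enumerate(self.counts):
            if n == 0:
                return s
        t = sum(self.counts)
        # UCB1 index: empirical success rate plus an exploration bonus.
        ucb = [self.successes[s] / self.counts[s]
               + math.sqrt(2 * math.log(t) / self.counts[s])
               for s in range(len(self.counts))]
        return max(range(len(self.counts)), key=lambda s: ucb[s])

    def update(self, sbs, qos_satisfied):
        self.counts[sbs] += 1
        self.successes[sbs] += 1.0 if qos_satisfied else 0.0

# Toy usage: three SBSs with hidden per-association success probabilities.
probs = [0.3, 0.6, 0.8]
user = SBSSelector(num_sbs=3)
for _ in range(1000):
    s = user.select()
    user.update(s, random.random() < probs[s])
print(user.counts)  # the best SBS should dominate the selections
```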


Munro D.,University of Manitoba | Treberg J.R.,University of Manitoba
Journal of Experimental Biology | Year: 2017

Mitochondria are widely recognized as a source of reactive oxygen species (ROS) in animal cells, where it is assumed that overproduction of ROS leads to an overwhelmed antioxidant system and oxidative stress. In this Commentary, we describe a more nuanced model of mitochondrial ROS metabolism, where integration of ROS production with consumption by the mitochondrial antioxidant pathways may lead to the regulation of ROS levels. Superoxide and hydrogen peroxide (H2O2) are the main ROS formed by mitochondria. However, superoxide, a free radical, is converted to the non-radical, membrane-permeant H2O2; consequently, ROS may readily cross cellular compartments. By combining measurements of production and consumption of H2O2, it can be shown that isolated mitochondria can intrinsically approach a steady-state concentration of H2O2 in the medium. The central hypothesis here is that mitochondria regulate the concentration of H2O2 to a value set by the balance between production and consumption. In this context, the consumers of ROS are not simply a passive safeguard against oxidative stress; instead, they control the established steady-state concentration of H2O2. By considering the response of rat skeletal muscle mitochondria to high levels of ADP, we demonstrate that H2O2 production by mitochondria is far more sensitive to changes in mitochondrial energetics than is H2O2 consumption; this concept is further extended to evaluate how the muscle mitochondrial H2O2 balance should respond to changes in aerobic work load. We conclude by considering how differences in the ROS consumption pathways may lead to important distinctions amongst tissues, along with briefly examining implications for differing levels of activity, temperature change and metabolic depression. © 2017 Published by The Company of Biologists Ltd.
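The steady-state argument can be made concrete with a minimal balance equation; the symbols here are illustrative assumptions, not the Commentary's notation. If mitochondria produce H2O2 at rate J_prod and consume it with approximately first-order kinetics (rate constant k_cons), then:

```latex
\frac{d[\mathrm{H_2O_2}]}{dt} = J_{\mathrm{prod}} - k_{\mathrm{cons}}\,[\mathrm{H_2O_2}]
\quad\Longrightarrow\quad
[\mathrm{H_2O_2}]_{ss} = \frac{J_{\mathrm{prod}}}{k_{\mathrm{cons}}}
```

On this sketch, the finding that production is far more sensitive to mitochondrial energetics than consumption implies that the steady-state level shifts mainly through J_prod.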


Altman A.D.,University of Manitoba | Robinson C.,University of Manitoba
Gynecologic Oncology | Year: 2017

Objective: This paper reviews all current literature for vulvar postoperative care, and forms a summary of evidence based practice. Data sources: Scopus, Cochrane Library, CINHAL, Web of Science Core Collection, PubMed, Embase, Google Scholar, clinicaltrials.gov and Medline databases were searched. Methods of study selection: Various combinations of key-terms were used to identify relevant articles. All identified primary research articles and review articles were then examined with their references in order to identify further relevant studies. The literature was examined within gynecology, gynecologic oncology, surgical oncology, urology, plastic surgery and dermatology. Tabulation, integration and results: A total of 199 studies were reviewed and 80 were included in this paper. All relevant studies pertaining to the subject were included. Studies were excluded if there was no relevance to the review as deemed by both authors. Conclusion: There remains much room for improvement to minimize postoperative stay, decrease the chances of morbidity and improve patient outcome and satisfaction, while establishing standardized care pathways. Further research and clinical trials are needed in this area to help us to provide evidence-based care to our postoperative vulvar patient population. © 2017 Elsevier Inc.


Rizvi S.,University of Manitoba
Proceedings of the 2016 19th International Multi-Topic Conference, INMIC 2016 | Year: 2016

A novel multipath routing protocol for wireless sensor networks is proposed. The protocol constructs multiple paths based on the residual energy of the nodes in the network and allows the source node to make energy-based decisions to select a path for data transmission from the set of discovered paths. Using alternative paths for routing data packets incorporates load balancing in the network, which maximizes network lifetime and minimizes energy consumption. The results show that the proposed algorithm consumes less energy (84%, 78%, and 60% less) and achieves a longer network lifetime (394%, 195%, and 105% longer) than directed diffusion and its variants. © 2016 IEEE.
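As a rough illustration of the kind of residual-energy-based path selection described above: among the discovered paths, pick the one whose weakest node has the most residual energy, which steers traffic away from nearly depleted nodes. The function name and max-min rule here are hypothetical stand-ins, not the protocol's actual decision logic:

```python
# Illustrative sketch only; the paper's protocol is more elaborate.
def select_path(paths, residual_energy):
    """paths: list of node-id lists; residual_energy: node-id -> joules.
    Max-min selection: maximize the minimum residual energy along a path."""
    return max(paths, key=lambda p: min(residual_energy[n] for n in p))

residual_energy = {"A": 0.9, "B": 0.2, "C": 0.7, "D": 0.8}
paths = [["A", "B", "D"], ["A", "C", "D"]]
print(select_path(paths, residual_energy))  # -> ['A', 'C', 'D']
```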


Landrum L.,University of Manitoba
Architecture and Culture | Year: 2016

This paper argues for a pre-theoretical and pro-theatrical understanding of theory. To begin, it considers the Greek tradition of theōria as practiced around the fifth century BCE in the period just before Plato appropriated the cultural practice of theōria as a model for philosophical inquiry. As will be shown, this protophilosophical practice of theōria was profoundly theatrical, which is to say, spectacular and dramatic in social, situational, and symbolic ways. Such events of theōria involved diverse citizens participating as active witnesses in recurring festivals that had both intimate and far-reaching political, religious, and aesthetic significance. Reflecting on some present-day settings and occasions for practicing theory, this paper concludes with a disciplinary provocation: the re-engagement of theōria’s fundamental theatricality can reanimate the social, situational, and symbolic dimensions of architectural theory, without sacrificing either its relative independence or its capacity for heuristic wonder. © 2016 Informa UK Limited, trading as Taylor & Francis Group.


Carkner M.K.,University of Manitoba | Entz M.H.,University of Manitoba
Field Crops Research | Year: 2017

Most non-genetically modified (GM) soybean (Glycine max Merr.) cultivars are bred and performance-tested under conventional conditions and have rarely been tested in organic production. Twelve non-GM soybean cultivars were evaluated in weed-free and weedy conditions on 5 organic farms and 1 transitional farm in southern Manitoba in 2014 and 2015. The mean cultivar yield ranged from 1384 to 1807 kg ha−1. Weed biomass at soybean maturity ranged from 1289 to 2553 kg ha−1 and was significantly affected by cultivar. Significant site–cultivar interactions were observed for soybean biomass, height, and yield. Site accounted for 72.4% of yield variability; cultivar accounted for only 1%. Our hypothesis that cultivars with greater early season height are more competitive with weeds was not supported. Yield loss due to weeds ranged between 20 and 44%; lower yield loss was associated with timely weed management. Partial least squares regression was used to assess the main factors controlling grain yield. Higher soil nitrate (N) status negatively impacted final grain yield in this study, suggesting that soil nutrient status impacted the soybean cultivars’ competitive ability against weeds. Results suggest that weed management and soil N status are of equal importance to cultivar choice for successful organic soybean production in Manitoba. © 2017 Elsevier B.V.
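For readers unfamiliar with the method, a partial least squares regression of yield on candidate factors can be run in a few lines. This sketch uses synthetic data and hypothetical variable names, not the study's dataset:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Synthetic predictors loosely mimicking the kinds of factors assessed.
rng = np.random.default_rng(0)
n = 60
weed_biomass = rng.uniform(1300, 2600, n)   # kg/ha
soil_nitrate = rng.uniform(5, 60, n)        # kg N/ha
early_height = rng.uniform(10, 40, n)       # cm
X = np.column_stack([weed_biomass, soil_nitrate, early_height])
# Toy yield response with the reported directions of effect built in.
y = 2000 - 0.3 * weed_biomass - 5 * soil_nitrate + rng.normal(0, 50, n)

pls = PLSRegression(n_components=2)
pls.fit(X, y)
print(pls.coef_.ravel())  # signs/magnitudes rank each factor's influence
print(pls.score(X, y))    # R^2 of the fitted model
```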


Rogers A.,University of Manitoba | Safi-Harb S.,University of Manitoba
Monthly Notices of the Royal Astronomical Society | Year: 2017

Energy losses from isolated neutron stars are commonly attributed to the emission of electromagnetic radiation from a rotating point-like magnetic dipole in vacuum. This emission mechanism predicts a braking index n = 3, which is not observed in highly magnetized neutron stars. Despite this fact, a dipole field and rapid early rotation are often assumed a priori, typically causing a discrepancy between the characteristic age and the associated supernova remnant (SNR) age. We focus on neutron stars with 'anomalous' magnetic fields that have established SNR associations and known ages. Anomalous X-ray pulsars (AXPs) and soft gamma repeaters (SGRs) are usually described in terms of the magnetar model that posits a large magnetic field established by dynamo action. The high magnetic field pulsars (HBPs) have extremely large magnetic fields just above the quantum electrodynamics scale (but below that of the AXPs and SGRs), and central compact objects (CCOs) may have buried fields that will emerge in the future as nascent magnetars. In the first part of this series, we examined magnetic field growth as a method of uniting the CCOs with HBPs and X-ray dim isolated neutron stars (XDINSs) through evolution. In this work, we constrain the characteristic age of these neutron stars using the related SNR age for a variety of energy-loss mechanisms and allowing for arbitrary initial spin periods. In addition to the SNR age, we also use the observed braking indices and X-ray luminosities to constrain the models. © 2016 The Authors.


Rodgers J.A.,University of Manitoba | Koper N.,University of Manitoba
Journal of Wildlife Management | Year: 2017

Grassland bird species have declined more than birds of any other region in North America and industrial development may exert further pressure on these species. We evaluated effects of conventional natural gas infrastructure on the relative abundances of grassland songbirds in southeastern Alberta, Canada at sites with shallow gas well pad densities ranging from 0 to 16 pads/258 ha (0–24 well heads/258 ha). Conventional gas wells have a relatively small footprint and minimal associated noise and maintenance activities, allowing us to focus on effects of the infrastructure itself and vegetation surrounding wells. We conducted fixed-radius point counts and vegetation sampling at 34 sites in 2010 and 40 sites in 2011. We used generalized linear mixed models and information theory to evaluate effects of infrastructure on birds. Relative abundances of vesper sparrow (Pooecetes gramineus) and western meadowlark (Sturnella neglecta) increased near gas wells, whereas abundance of the threatened Sprague's pipit (Anthus spragueii) declined. Vegetation near infrastructure was shorter and sparser than locations farther from wells, but discrepancies with avian habitat preferences suggest that, in contrast to conclusions of previous studies, vegetation structure could not explain responses to infrastructure by birds. Instead, gas wells may have acted as artificial shrubs because they attracted species that use vegetation for perching but were avoided by species that avoid shrubs. Our results suggest that observed effects were a direct result of the presence of wells and associated fencing, and thus risk mitigation should focus on reducing the extent of aboveground infrastructure. © 2017 The Wildlife Society. © The Wildlife Society, 2017


Leston L.,University of Manitoba | Koper N.,University of Manitoba
Landscape and Urban Planning | Year: 2017

Urban rights-of-way (ROWs) offer large underused tracts of land that could be managed for plants and butterflies of threatened ecosystems like tall-grass prairies. However, built-up unvegetated urban lands might serve as barriers preventing butterflies and resource plants from settling along ROWs. Further, negative edge effects from surrounding urban lands or frequent mowing and spraying associated with urbanization may prevent butterflies from benefiting from urban ROWs as habitats. Yet, because ROWs often run for kilometres, they might facilitate movement from other, similar habitats close to which they run. To determine if surrounding built-up lands had a greater effect on butterflies than did the abundance of resource plants along ROWs, we surveyed butterflies and resource plants along transects in 48 transmission lines in or near Winnipeg, Manitoba, 2007–2009. In general, butterfly richness and abundance were better predicted by available resources than by built-up urban lands surrounding ROWs. Butterfly species richness per visit increased by 85% with increases from 10 plant species per site to 80 species of plants per site, while abundance per species per visit increased by 100% with increases from negligible forb cover to 5% forb cover, and by 112% with increases in vegetation height-density from 5 cm to 40 cm high. If appropriate resource plants are reintroduced and managed for along urban ROWs, densities of most butterfly species will increase along these lines despite surrounding built-up urban lands. Thus, urban ROWs present an opportunity for restoring habitats for prairie butterflies. © 2016 Elsevier B.V.


Waritanant T.,University of Manitoba | Major A.,University of Manitoba
Optics Letters | Year: 2017

A selectable and discretely tunable multi-wavelength diode-pumped Nd:YVO4 laser operating at 1064.0, 1073.1, and 1085.2 nm was demonstrated using a single intracavity birefringent plate. Under 11.2 W of absorbed pump power, a maximum output power of more than 3.8 W at any of the three wavelengths was achieved, with slope and optical-to-optical efficiencies of more than 45% and 35%, respectively. © 2017 Optical Society of America.


Gusynin V.P.,NASU Bogolyubov Institute for Theoretical Physics | Pyatkovskiy P.K.,University of Manitoba
Physical Review D | Year: 2016

Previous analytical studies of quantum electrodynamics in 2+1 dimensions (QED3) have shown the existence of a critical number of fermions for the onset of chiral symmetry breaking, the best known being the value Nc ≈ 3.28 obtained by Nash to order 1/N² in the 1/N expansion [D. Nash, Phys. Rev. Lett. 62, 3024 (1989)]. This analysis is reconsidered by solving the Dyson-Schwinger equations for the fermion propagator and the vertex to show that the more accurate gauge-independent value is Nc ≈ 2.85, which means that the chiral symmetry is dynamically broken for integer values N ≤ 2, while for N ≥ 3 the system is in a chirally symmetric phase. An estimate for the value of the chiral condensate ⟨ψ̄ψ⟩ is given for N = 2. Knowing Nc precisely would be important for comparison between continuum studies and lattice simulations of QED3. © 2016 American Physical Society.


Yoo J.,University of Manitoba | Koper N.,University of Manitoba
PLoS ONE | Year: 2017

Grassland songbird populations across North America have experienced dramatic population declines due to habitat loss and degradation. In Canada, energy development continues to fragment and disturb prairie habitat, but effects of oil and gas development on reproductive success of songbirds in North American mixed-grass prairies remain largely unknown. From 2010-2012, in southeastern Alberta, Canada, we monitored 257 nests of two ground-nesting grassland songbird species, Savannah sparrow (Passerculus sandwichensis) and chestnut-collared longspur (Calcarius ornatus). Nest locations varied with proximity to and density of conventional shallow gas well structures and associated roads in forty-two 258-ha mixed-grass prairie sites. We estimated the probabilities of nest success and clutch size relative to gas well structures and roads. There was little effect of distance to or density of gas well structures on nest success; however, Savannah sparrow experienced lower nest success near roads. Clutch sizes were lower near gas well structures and cattle water sources. Minimizing habitat disturbance surrounding gas well structures, and reducing abundance of roads and trails, would help minimize impacts on reproductive success for some grassland songbirds. © 2017 Yoo, Koper. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.


Asefi M.,University of Manitoba | Zakaria A.,American University of Sharjah | Lovetri J.,University of Manitoba
IEEE Transactions on Microwave Theory and Techniques | Year: 2017

A novel 3-D microwave imaging approach performed within a resonant air-filled metallic chamber is introduced and investigated. The new method utilizes the measurements of normal electric-field components at discrete points along the metallic chamber's wall - near the chamber-wall boundary, the normal-field components are dominant, while the tangential components vanish. The inversion algorithm fully incorporates the resonant features of the low-loss chamber. A numerical study is used to quantify the imaging performance of using this technique compared with the traditional unbounded domain imaging. An experimental system is presented where the electric field is collected using 24 antennas distributed in three circumferential layers around an object of interest located inside the circular-cylindrical metallic chamber. For collecting the normal component of the field, two types of linearly polarized antennas are investigated: λ/4 monopole antennas and specially designed reconfigurable antennas (RAs), both projecting perpendicularly out from the chamber walls into the enclosure. The measured data are calibrated and then inverted using a multiplicatively regularized finite-element contrast source inversion algorithm. Using 3-D reconstructions of simple dielectric targets, it is shown that utilizing the RAs improves imaging performance due to a reduction in the modeling error introduced in the inversion algorithm. © 1963-2012 IEEE.


Woodgate R.L.,University of Manitoba | Busolo D.S.,University of Manitoba
BMJ Open | Year: 2017

Objectives: Cancer has been described using metaphors for over 4 decades. However, little is known about healthy adolescents' perspectives of cancer expressed through metaphors. This paper reports on findings specific to adolescents' perspectives of cancer using metaphors. The findings emerged from a qualitative ethnographic study that sought to understand Canadian adolescents' conceptualisation of cancer and cancer prevention. Design: To arrive at a detailed description, data were obtained using individual interviews, focus groups and photovoice. Setting: Six high schools from a western Canadian province. Participants: 75 Canadian adolescents. Results: Four metaphors emerged from the data: loss (cancer as the sick patient and cancer as death itself); military (cancer as a battle); living thing (haywire cells and other living things) and faith (cancer as God's will), with the loss and military metaphors being the ones most frequently used by adolescents. Adolescents' descriptions of cancer were partly informed by their experiences with family members with cancer but also by what occurs in their social worlds, including mass media. Adolescents related cancer to emotions such as sadness and fear. Accordingly, more holistic and factual cancer descriptions, education and psychosocial support are needed to direct cancer messaging and clinical practice. Conclusions: Findings from this study suggest that the public and healthcare providers be more aware of how they communicate cancer messages. © Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.


Allard D.,University of Manitoba
Proceedings of the Association for Information Science and Technology | Year: 2016

This poster describes the process of translocal meaning making, a set of information practices by which newcomers from the Philippines to Winnipeg, Canada, came to make sense of and operate within the Winnipeg information context. It describes how newcomers constructed meaning, encountering and incorporating diverse, complex, and often contradictory information and information resources into their daily lives as they migrated and settled in an unknown information context. This five-step process demonstrates that migrants' information practices are more dynamic, fluid, and iterative than articulated in previous studies that examine the information practices of migrants. Copyright © 2016 by Association for Information Science and Technology


Amaratunga T.,University of Manitoba
Ultrasound Quarterly | Year: 2017

ABSTRACT: Mayer-Rokitansky-Küster-Hauser (MRKH) syndrome is a rare disorder characterized by aplasia or hypoplasia of the uterus and vagina due to arrest in the development of the müllerian ducts. Women with this syndrome have the normal 46,XX karyotype, normal female secondary sex characteristics, and primary amenorrhea. Only a few cases have been described in the literature where a fibroid develops from a rudimentary, nonfunctioning uterus in patients with MRKH syndrome. In even rarer instances, a fibroid can develop in patients with a congenitally absent uterus. Here, we present the first reported case of an ectopic fibroid in association with congenital absence of a uterus found by ultrasound in a 66-year-old white female patient with MRKH syndrome and unilateral renal agenesis. This is an open-access article distributed under the terms of the Creative Commons Attribution-Non Commercial-No Derivatives License 4.0 (CC BY-NC-ND), where it is permissible to download and share the work provided it is properly cited. The work cannot be changed in any way or used commercially without permission from the journal. Copyright © 2017 Wolters Kluwer Health, Inc. All rights reserved


Martineau P.,University of Ottawa | Leslie W.D.,University of Manitoba
Bone | Year: 2017

Trabecular bone score (TBS) is a texture index derived from standard lumbar spine dual energy X-ray absorptiometry (DXA) images and provides information about the underlying bone independent of the bone mineral density (BMD). Several salient observations have emerged. Numerous studies have examined the relationship between TBS and fracture risk and have shown that lower TBS values are associated with increased risk for major osteoporotic fracture in postmenopausal women and older men, with this result being independent of BMD values and other clinical risk factors. Therefore, despite being derived from standard DXA images, the information contained in TBS is independent and complementary to the information provided by BMD and the FRAX® tool. A procedure to generate TBS-adjusted FRAX probabilities has become available with the resultant predicted fracture risks shown to be more accurate than the standard FRAX tool. With these developments, TBS has emerged as a clinical tool for improved fracture risk prediction and guiding decisions regarding treatment initiation, particularly for patients with FRAX probabilities around an intervention threshold. In this article, we review the development, validation, clinical application, and limitations of TBS. © 2017 Elsevier Inc.


Marrie R.A.,University of Manitoba
Nature Reviews Neurology | Year: 2017

Most efforts aimed at understanding the notable heterogeneity of outcomes in multiple sclerosis (MS) have focused on disease-specific factors, such as symptoms at initial presentation, initial relapse rate, and age at symptom onset. These factors, however, explain relatively little of the heterogeneity of disease outcomes. Owing to the high prevalence of comorbidity in MS and the potential for its prevention or treatment, comorbidity is of rising interest as a factor that could explain the heterogeneity of outcomes. A rapidly growing body of evidence suggests that comorbidity adversely affects outcomes throughout the disease course in MS, including diagnostic delays from symptom onset, disability at diagnosis and subsequent progression, cognition, mortality, and health-related quality of life. Therefore, clinicians need to incorporate the prevention and management of comorbidity when treating patients with MS, but managing comorbidities in MS successfully may require the adoption of new collaborative models of care. © 2017 Nature Publishing Group, a division of Macmillan Publishers Limited. All Rights Reserved.


Ferroptosis is a recently discovered form of regulated necrosis that involves iron-dependent lipid peroxidation. How cells die once ferroptosis is triggered remains unclear. Ferroptosis is hypothesized to require three critical events: (1) accumulation of redox-active iron, (2) glutathione depletion, and (3) lipid peroxidation. It is proposed that these three events must unfold simultaneously because stopping any critical event also stops ferroptosis. These events are hypothesized to amplify in severity through positive feedback loops. The cause of death in ferroptosis is therefore the synergistic combination of antioxidant depletion, iron toxicity, and membrane denaturation. The relevance of these feedback loops for cancer and neurodegenerative therapies is discussed. © 2017 Elsevier Ltd


Sicherer S.H.,Mount Sinai School of Medicine | Simons F.E.R.,University of Manitoba
Pediatrics | Year: 2017

Anaphylaxis is a severe, generalized allergic or hypersensitivity reaction that is rapid in onset and may cause death. Epinephrine (adrenaline) can be life-saving when administered as rapidly as possible once anaphylaxis is recognized. This clinical report from the American Academy of Pediatrics is an update of the 2007 clinical report on this topic. It provides information to help clinicians identify patients at risk of anaphylaxis and new information about epinephrine and epinephrine autoinjectors (EAs). The report also highlights the importance of patient and family education about the recognition and management of anaphylaxis in the community. Key points emphasized include the following: (1) validated clinical criteria are available to facilitate prompt diagnosis of anaphylaxis; (2) prompt intramuscular epinephrine injection in the mid-outer thigh reduces hospitalizations, morbidity, and mortality; (3) prescribing EAs facilitates timely epinephrine injection in community settings for patients with a history of anaphylaxis and, if specific circumstances warrant, for some high-risk patients who have not previously experienced anaphylaxis; (4) prescribing epinephrine for infants and young children weighing <15 kg, especially those who weigh 7.5 kg and under, currently presents a dilemma, because the lowest dose available in EAs, 0.15 mg, is a high dose for many infants and some young children; (5) effective management of anaphylaxis in the community requires a comprehensive approach involving children, families, preschools, schools, camps, and sports organizations; (6) prevention of anaphylaxis recurrences involves confirmation of the trigger, discussion of specific allergen avoidance, allergen immunotherapy (eg, with stinging insect venom, if relevant), and a written, personalized anaphylaxis emergency action plan; and (7) the management of anaphylaxis also involves education of children and supervising adults about anaphylaxis recognition and first-aid treatment. Copyright © 2017 by the American Academy of Pediatrics.


Kelly C.,University of Manitoba
Health and Social Care in the Community | Year: 2017

There is growing attention to the training and education of Personal Support Workers, or PSWs, who work in community, home and long-term care settings supporting older people and people with disabilities. In Ontario, Canada, amid a volatile policy landscape, the provincial government launched an effort to standardise PSW education. Using qualitative methods, this study considered the question: What are the central educational issues reflected by students, working PSWs and key informants, and are they addressed by the PSW programme and training standards? Phase one was a public domain analysis completed between January and March 2014 and updated for major developments after that period. Phase two, completed between August 2014 and March 2015, included 15 key informant interviews and focus group discussions and mini-phone interviews with 35 working PSWs and current PSW students. According to the participants, the central educational issues are: casualisation of labour that is not conveyed in educational recruitment efforts, disconnect between theory and working conditions, overemphasis on long-term care as a career path, and variability of PSW education options. While the standards should help to address the final issue, they do not address the other key issues raised, which have to do with the structural organisation of work. There is thus a disconnect between the experiences of students, PSWs and key informants and the policy decisions surrounding this sector. This is particularly significant as education is often touted as a panacea for issues in long-term and community care. In fact, the curriculum of some of the PSW programmes, especially those in public college settings, is robust. Yet, the underlying issues will remain barring a structural overhaul of the organisation of long-term and community care sectors founded on a social revaluing of older people and the gendered work of care. © 2017 John Wiley & Sons Ltd.


Artificially sweetened beverages like diet soda are strategically positioned in the market as a healthful alternative to sugar-sweetened drinks, which are traditionally linked to a greater risk for conditions such as heart disease, obesity, and diabetes. A new study, however, links diet soda to an increased risk for stroke and dementia, adding to the growing list of health perils associated with the beverages.

According to the new research published in the journal Stroke, people who drank at least one diet soda every day had nearly three times the risk of suffering from stroke or dementia. The findings were based on 4,300 subjects of the Framingham Heart Study. Over the next decade, subjects who consumed one artificially sweetened soft drink each day had almost three times the risk of having an ischemic stroke - the condition in which an artery to the brain becomes blocked - compared to those who never drank these soda products. At least one diet soda a day, too, translated to 2.89 times greater risk of developing Alzheimer's disease, the most prevalent form of dementia, which is characterized by memory and cognitive skill decline.

"We know that sugary and artificially sweetened beverages are not great for us. This study adds strength to that, and also says they may not be great for your brain, specifically," said Heather Snyder, Alzheimer's Association senior director, in a CNN report. Snyder pointed to alternatives such as cardiovascular fitness to elevate heart rate and enhance blood flow, as well as mental games and puzzles to keep challenging the mind.

A 2016 study warned that babies born to mothers consuming diet soda while pregnant were at a greater risk of developing childhood obesity. According to researchers from the University of Manitoba in Canada, pregnant women consuming artificially sweetened liquids every day predispose their children to a higher body mass index during childhood. Of the 3,033 pregnant subjects included in the study, the team saw that 29.5 percent drank these diet drinks, while 5.1 percent of kids born to them became overweight by their first year. "To our knowledge, our results provide the first human evidence that artificial sweetener consumption during pregnancy may increase the risk of early childhood overweight," concluded the authors.

Diet soda contains high levels of artificial sweeteners, including a form known as aspartame. Early this year, a study argued that there exists no evidence that artificially sweetened drinks are better options for staying slim than sugar-laden versions. Diet drinks are deemed unable to slash the risk for obesity-related diseases, including type 2 diabetes. Experts even raise a red flag: diet drinks can actually cause one to gain weight, mainly through stimulating one's sweet cravings and leading one to overeat.

Aspartame is low-calorie yet up to 200 times sweeter than regular sugar. It is used worldwide as a sugar substitute in cereals, chewing gum, soft drinks, and thousands of other foods and drinks, yet it is not immune to controversy. Reports have linked aspartame to a greater chance of brain tumors and cancer, premature birth, allergies, and liver damage. The artificial sweetener sucralose, marketed under the brand name Splenda, has also been tied to a significantly increased risk of leukemia and other cancers. In 2013, it was downgraded from a "safe" to "caution" standing because of research from the Ramazzini Institute. © 2017 Tech Times, All rights reserved. Do not reproduce without permission.


News Article | April 17, 2017
Site: www.eurekalert.org

HOUSTON -- (April 12, 2017) -- Rice University professor and engineer Richard Baraniuk has been elected to the American Academy of Arts and Sciences. He is one of 228 new members announced today by the academy, which honors some of the world's most accomplished scholars, scientists, writers, artists and civic, business and philanthropic leaders. Baraniuk is the Victor E. Cameron Professor of Electrical and Computer Engineering at Rice. Others in the academy's Class of 2017 include philanthropist and singer-songwriter John Legend, actress Carol Burnett, chairman of the board of Xerox Corp. Ursula Burns, mathematician Maryam Mirzakhani, immunologist James P. Allison, writer Chimamanda Ngozi Adichie and Pulitzer Prize winners, MacArthur Fellows and winners of the Academy, Grammy, Emmy and Tony awards. "In a tradition reaching back to the earliest days of our nation, the honor of election to the American Academy is also a call to service," said Academy President Jonathan F. Fanton. "Through our projects, publications and events, the academy provides members with opportunities to make common cause and produce the useful knowledge for which the academy's 1780 charter calls." Baraniuk is one of the world's leading experts on machine learning and compressive sensing, a branch of signal processing that enables engineers to deduce useful information from far fewer data samples than would ordinarily be required. He is a co-inventor of the single-pixel camera and of the FlatCam, a lens-less camera that is thinner than a dime and can be fabricated like a microchip. A pioneer in education, Baraniuk founded Rice-based Connexions in 1999 to bring textbooks and other learning materials to the internet. Next came OpenStax, which provides high-quality, peer-reviewed, college-level textbooks to students worldwide as free downloads or low-cost printed publications. More than 1.8 million college students have used one of the 27 textbooks published by OpenStax. These textbooks are estimated to have saved students more than $100 million during the 2016-17 academic year. Baraniuk is also using OpenStax to develop a software platform for textbooks that deliver personalized lessons. The American Academy of Arts and Sciences membership comes less than a month after Baraniuk was selected as one of 13 Vannevar Bush Faculty Fellows -- one of the Defense Department's most coveted basic research awards for U.S. university scientists and engineers -- and a week after he was inducted into the National Academy of Inventors as a fellow. "It was a complete, total surprise," Baraniuk said about the announcement that the academy had elected him. "It's fantastic news. And it's a tribute to all the tremendous mentors I've had at Rice and my colleagues around the globe. This would never have happened without their guidance and support." Baraniuk was raised in Winnipeg, Canada. He has three degrees in electrical and computer engineering: a B.S. from the University of Manitoba, an M.S. from the University of Wisconsin and a Ph.D. from the University of Illinois at Urbana-Champaign. He holds 28 U.S. patents and six foreign patents in signal processing and acquisition. He came to Rice in 1992 and has received multiple teaching awards as a member of the faculty. Baraniuk is also a fellow of the American Association for the Advancement of Science and of the Institute of Electrical and Electronics Engineers (IEEE). Three times he has been named a Thomson Reuters Highly Cited Researcher.
Among his other honors and awards are the 2012 Compressive Sampling Pioneer Award and the 2008 Wavelet Pioneer Award, both from the International Society for Optics and Photonics, and the IEEE Signal Processing Society's Best Paper (2015), Technical Achievement (2014) and Education (2010) awards. The American Academy of Arts and Sciences' new honorees will be inducted at a ceremony Oct. 7 in Cambridge, Mass. The list of the 237th class of new members is available at http://www. . The academy is one of the country's oldest learned societies and independent policy research centers. It convenes leaders from the academic, business and government sectors to respond to the challenges facing -- and opportunities available to -- the nation and the world. Members contribute to academy publications and studies in science, engineering and technology policy; global security and international affairs; the humanities, arts and education; and American institutions and the public good.


News Article | May 8, 2017
Site: www.businesswire.com

BOSTON & CHICAGO--(BUSINESS WIRE)--Meketa Investment Group, a global investment consulting firm, is pleased to announce that Gordon “Gord” Latter recently joined the firm as Executive Vice President and Consultant. Mr. Latter will be based in Meketa’s Chicago office. “Gord’s many years of industry experience are a terrific value-add for Meketa and will help us even more efficiently and effectively meet the needs of our clients,” said Frank Benham, Managing Principal and Director of Research, Meketa Investment Group. “We are very pleased to have Gord on our team and look forward to benefitting from his valuable insights.” “I am thrilled to be joining Meketa, a firm with an enviable record providing investment and advisory services to the institutional investor community,” said Mr. Latter. “I look forward to working closely with this talented team to build upon the firm’s strong foundation.” Gordon Latter Biography Gordon Latter has over 20 years of experience in the financial services industry, serving in both asset management and consulting roles. Prior to joining Meketa, Mr. Latter served as Executive Vice President of Ryan Labs Asset Management, a New York-based fixed income manager. Previously, he was a Managing Director at RBC Global Asset Management, as well as a Senior Director and Head of Pension & Endowment Strategy at Merrill Lynch. Mr. Latter also served as a Consultant at Leong & Associates. Mr. Latter graduated from the University of Manitoba with a Bachelor of Commerce (Honors) in Actuarial Mathematics. He is a Fellow with the Society of Actuaries and Canadian Institute of Actuaries. About Meketa Investment Group Founded in 1978, Meketa Investment Group is an employee-owned, full service investment consulting and advisory firm. As an independent fiduciary, the firm serves institutional investors in discretionary and non-discretionary roles. The firm consults on more than $450 billion in assets for over 150 clients whose aggregate institutional assets exceed $900 billion. For more information, please visit www.meketagroup.com.


News Article | April 25, 2017
Site: www.PR.com

The American Law Society Board of Directors is proud to announce that Daniel Gunn has been accepted as a new member. Manitoba, Canada, April 25, 2017 --( PR.com )-- Daniel Gunn is an experienced criminal lawyer who co-founded Campbell Gunn Inness, a criminal law firm based in Winnipeg, Canada. Mr. Gunn represents criminal cases at all levels of court in Manitoba and Ontario including homicide, drug and wiretap cases, and impaired driving. Through his career, Mr. Gunn has represented clients in many high profile cases, including the case of Gerald Daniel Blanchard, which dealt with charges related to international and high-tech crimes. Mr. Gunn earned his Bachelor of Laws from Osgoode Hall Law School at York University in 1994. He was called to the Ontario Bar in 1996 and was called to the Manitoba Bar in 2001. Prior to law school, Mr. Gunn earned a Bachelor of Arts with distinction from the University of Manitoba in 1989. He also attended the National Theatre School in Montreal from 1989-1990. Of note is Mr. Gunn’s extensive contribution to the Crown-Defence Conference as a speaker (2003, 2006, 2010, 2013). He acts as a learning group facilitator for criminal advocacy at the Canadian Centre for Professional Legal Education (2013-present). He serves as independent counsel for the Province of Manitoba (2008-present). He has also served as legal counsel to the Chief Electoral Officer for Nisichawayasihk Cree Nation, Nelson House (2005, 2006, 2010, 2014). American Law Society's Board of Directors selectively chooses lawyers who show a history of greatness and consistency. It is a privilege to have Mr. Gunn join the organization. If you would like to contact Mr. Gunn, you may contact him through his profile page at: www.americanlawsociety.org/winnipeg-manitoba/daniel-gunn "We look forward to following Daniel Gunn's career and are extremely excited to see his articles, videos, and information posted on America Law Society's platform." -Valerie Dougherty of American Law Society.


News Article | April 20, 2017
Site: www.eurekalert.org

Scientists at St. Boniface Hospital Albrechtsen Research Centre and the University of Manitoba have developed a drug that combats 2 of the top 10 "priority pathogens" recently defined by the World Health Organization (WHO) as antibiotic-resistant bacteria requiring new interventions.¹ The drug, dubbed PEG-2S, has received a provisional patent, and its development is highlighted in a study published today in the Canadian Journal of Physiology and Pharmacology (CJPP). Without affecting healthy cells, the drug prevents the proliferation of a harmful bacterium that possesses a specific type of energy supply shared by a number of other bacteria. The paper, entitled "Development of a novel rationally designed antibiotic to inhibit a nontraditional bacterial target", revealed that a variety of bacteria share a unique respiratory sodium pump (NQR) that supplies energy vital to the bacteria's survival. The study showed that the drug in question, PEG-2S, inhibits the function of the NQR pump and the production and growth of Chlamydia trachomatis bacteria. The drug is highly targeted, impacting only bacterial cells with NQR pumps, and is not toxic to normal, healthy cells. The list of NQR-possessing bacteria is growing steadily as genomic information becomes available. With more than 20 different pathogenic bacteria containing NQR, the possibility for this drug to avoid multidrug resistance through NQR inhibition represents a potential breakthrough in antibiotic design. Traditional targets for antibiotics are limited; no new antibiotics have been discovered since 1987. Only 2 antibiotics have received US FDA approval since 2009. "New drugs are not being approved because they share the same target to which the bacteria are developing resistance. We have not only defined a new and effective target, we have designed a drug to attack it without affecting normal cells," explains St. Boniface Hospital Executive Director of Research and University of Manitoba professor of physiology and pathophysiology Dr. Grant Pierce. "The first pathogen our research team studied (Chlamydia trachomatis) has confirmed that NQR is a good target, and it is shared by many bacteria in need of a more effective antibiotic." "The results from our collaboration are tremendously exciting," adds lead author, University of Manitoba Faculty of Science professor Dr. Pavel Dibrov. "We are currently designing PEG-2S variations and hope to tailor PEG-based antimicrobials to each specific NQR-containing pathogenic bacterium." "Antibiotic and antimicrobial resistance to superbugs is a priority research direction in pharmacology. The quality and findings of this study may be instrumental in our efforts to develop new drugs and technologies that effectively address this global health alarm recently raised by the World Health Organization," say CJPP Editors Dr. Ghassan Bkaily and Dr. Pedro D'Orléans-Juste. "I applaud the research collaboration that resulted in this new breakthrough," said Dr. Digvir Jayas, Vice-President (Research and International) and Distinguished Professor at the University of Manitoba. "Solving the complex and evolving challenges of antibiotic resistance will put new tools in the hands of caregivers around the globe." "New antibiotics targeting this priority list of pathogens will help to reduce deaths due to resistant infections around the world," says Prof. Evelina Tacconelli, Head of the Division of Infectious Diseases at the University of Tübingen and a major contributor to the development of the WHO list. "Waiting any longer will cause further public health problems and dramatically impact on patient care."²

¹ WHO priority pathogens list for R&D of new antibiotics

Please cite Canadian Journal of Physiology and Pharmacology as the source of this story and include a hyperlink to the research study: dx.doi.org/10.1139/cjpp-2016-0505. Published since 1929, this monthly journal reports current research in all aspects of physiology, nutrition, pharmacology, and toxicology, contributed by recognized experts and scientists. It publishes symposium reviews and award lectures and occasionally dedicates entire issues or portions of issues to subjects of special interest to its international readership. The journal periodically publishes a "Made In Canada" special section that features invited review articles from internationally recognized scientists who have received some of their training in Canada. Canadian Science Publishing publishes the NRC Research Press suite of journals but is not affiliated with the National Research Council of Canada. Papers published by Canadian Science Publishing are peer-reviewed by experts in their field. The views of the authors in no way reflect the opinions of Canadian Science Publishing. Requests for commentary about the contents of any study should be directed to the authors.


News Article | April 25, 2017
Site: www.PR.com

The American Law Society Board of Directors is proud to announce that Gregory Gordon Evans has been accepted as a new member with ATL distinction. The America's Top Lawyers list is comprised of well-rounded individuals representing a diverse cross-section of U.S. legal advocates. Winnipeg Manitoba, Canada, April 25, 2017 --( PR.com )-- Gregory Gordon Evans has exclusively practiced family law since 1998. His Winnipeg Manitoba-based practice, Evans Family Law Corporation, is a collaborative family law practice that offers an alternative to traditional, litigation-based separation and divorce. Greg understands the human impact of family difficulties and offers an approach that favors mutual legal agreements rather than bitter court battles. Indeed, he is a mediator and an arbitrator, receiving his training in 2005 from the Arbitration and Mediation Institute of Manitoba. Greg earned his Bachelor of Laws from the University of Manitoba in 1997. There, he was awarded the Frank Billinkoff Prize for highest standing in Canadian Charter of Rights and Freedoms (1996-1997). Prior to law school, Greg earned his Bachelor of Arts in Linguistics, English, and Psychology in 1986. Of note, he earned a Diploma of Sign Language Interpreting from St. Paul Technical Vocational Institute in 1981 - he has served the deaf community nationally and internationally for more than three decades. Also of note: Greg is a founding member of Collaborative Practice Manitoba and acts as its Membership Committee chair; he is a member of the Steering Committee; he was named to the Best Lawyers International list for 2015; he was awarded Family Lawyer of the Year in 2016 in Winnipeg by Best Lawyers; and he is listed as a Top Lawyer in the Global Directory of Who’s Who as a lifetime member. American Law Society's Board of Directors selectively chooses lawyers who show a history of greatness and consistency. It is a privilege to have Mr. Evans join the organization. If you would like to contact Greg, you may contact him through his profile page at: www.americanlawsociety.org/winnipeg-manitoba/gregory-evans "We look forward to following Gregory G. Evans' career and are extremely excited to see his articles, videos, and information posted on America Law Society's platform." -Valerie Dougherty of American Law Society.


News Article | May 5, 2017
Site: marketersmedia.com

— Trish Bishop joins Resolute as Director of Service Delivery, ensuring all Resolute service areas are aligned for growth and success. “I am super excited with my new role at Resolute. I’ve been in IT for more than 20 years and it’s rare to find an IT shop that is as disciplined, mature, and innovative as Resolute. I’ve always pushed the envelope in terms of ensuring tech works for end users, aiming for solution simplicity, ease of use, and relevance. The customer focus and technology talent at Resolute is off the charts. Any organization would benefit from their awesome portfolio of services.” We’ve hired eight new team members in the last two months (bringing the count to 64 employees), including a Senior Talent Acquisition Specialist who will help us continue to grow to meet increasing client demand. A main area that’s expanded is our bilingual IT support, part of Resolute's managed services team, which is crucial for supporting clients in Winnipeg and across Canada. We had to build out more of our office space and restructure to make room for all the new people. It’s likely we’ll have to extend to another floor of our building if we maintain this recent level of hiring, but we’ll continue to scale to meet client demand. If our clients need more services, our team will keep growing to provide them. Despite the rapid growth, one thing Resolute President Rod De Vos wants to make clear is that the values and the culture that the company was founded on in 2005 aren’t going to change. “We’re known as the company that goes the extra mile for both clients and community; that’s not about to change. We’ve had a long history of giving back to the community and putting our clients first. I’m excited to bring these values to our new team members and welcome them into the Resolute family.” For years, we’ve had a successful partnership with the University of Manitoba and University of Winnipeg for our Developer Student Co-op program. Lisa Wise, Co-op Coordinator from the U of M, said, “Resolute has been a huge supporter and advocate of our program for so long and we truly value the partnership.” Now, we’re starting a second Student Co-op program centered around infrastructure-related learning that we will extend out to educational institutions that are teaching network/device management and other infrastructure-related courses. Resolute Technology Solutions is a full-service IT company located in Winnipeg’s Exchange District providing consulting, managed services, and development for companies across North America. For more information, please visit https://www.resolutets.com


Stajduhar K.I.,University of Victoria | Funk L.,University of Manitoba | Outcalt L.,University of Victoria
Palliative Medicine | Year: 2013

Background: Family caregivers are assuming growing responsibilities in providing care to dying family members. Supporting them is fundamental to ensure quality end-of-life care and to buffer potentially negative outcomes, although family caregivers frequently acknowledge a deficiency of information, knowledge, and skills necessary to assume the tasks involved in this care. Aim: The aim of this inquiry was to explore how family caregivers describe learning to provide care to palliative patients. Design: Secondary analysis of data from four qualitative studies (n = 156) with family caregivers of dying people. Data sources: Data included qualitative interviews with 156 family caregivers of dying people. Results: Family caregivers learn through the following processes: trial and error, actively seeking needed information and guidance, applying knowledge and skills from previous experience, and reflecting on their current experiences. Caregivers generally preferred and appreciated a supported or guided learning process that involved being shown or told by others, usually learning reactively after a crisis. Conclusions: Findings inform areas for future research to identify effective, individualized programs and interventions to support positive learning experiences for family caregivers of dying people. © The Author(s) 2013.


Cuzzocrea A.,CNR Institute for High Performance Computing and Networking | Leung C.K.-S.,University of Manitoba | Mackinnon R.K.,University of Manitoba
Future Generation Computer Systems | Year: 2014

Nowadays, high volumes of massive data can be generated from various sources (e.g., sensor data from environmental surveillance). Many existing distributed frequent itemset mining algorithms do not allow users to express the itemsets to be mined according to their intention via the use of constraints. Consequently, these unconstrained mining algorithms can yield numerous itemsets that are not interesting to users. Moreover, due to inherited measurement inaccuracies and/or network latencies, the data are often riddled with uncertainty. These call for both constrained mining and uncertain data mining. In this journal article, we propose a data-intensive computer system for tree-based mining of frequent itemsets that satisfy user-defined constraints from a distributed environment such as a wireless sensor network of uncertain data. © 2013 Elsevier B.V. All rights reserved.
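
As a rough illustration of the two ingredients this abstract combines - expected support over probabilistic transactions and a user-defined constraint pushed into candidate generation - here is a minimal Python sketch. The toy data, the item-membership constraint, and the naive enumeration are all invented for illustration; they do not reproduce the authors' tree-based distributed algorithm.

    from itertools import combinations

    # Each transaction maps item -> existential probability (uncertain data).
    # Hypothetical toy data, invented for this sketch.
    transactions = [
        {"a": 0.9, "b": 0.7, "c": 0.3},
        {"a": 0.8, "c": 0.6},
        {"b": 0.5, "c": 0.9, "d": 0.4},
    ]

    # A simple user-defined constraint: only items from an allowed set.
    allowed = {"a", "b", "c"}

    def expected_support(itemset, db):
        """Expected support under item independence: sum over transactions
        of the product of the member items' probabilities."""
        total = 0.0
        for t in db:
            p = 1.0
            for item in itemset:
                if item not in t:
                    p = 0.0
                    break
                p *= t[item]
            total += p
        return total

    def mine(db, minsup):
        # The constraint is applied before enumeration ("pushed" into the search).
        items = sorted({i for t in db for i in t if i in allowed})
        results = {}
        for k in range(1, len(items) + 1):
            for cand in combinations(items, k):
                sup = expected_support(cand, db)
                if sup >= minsup:
                    results[cand] = sup
        return results

    print(mine(transactions, minsup=1.0))  # only singletons pass at this threshold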


Vagianos K.,University of Manitoba | Bernstein C.N.,University of Manitoba
Inflammatory Bowel Diseases | Year: 2012

Background: The aim of this study was to longitudinally study serum homocysteine levels in patients with Crohn's disease (CD) and ulcerative colitis (UC) in relation to disease activity and B vitamin status. Methods: In all, 98 consecutive adult patients (age 25-55 years) with CD (n = 70) and UC (n = 28) were enrolled and assessed at three timepoints over 1 year. Results: There were no significant differences in levels of homocysteine, B vitamins, or dietary intake by disease type, disease activity, or across visits. 13% of all inflammatory bowel disease (IBD) patients had elevated homocysteine at least once during the study. Nine patients with CD had fluctuating homocysteine levels during the study but these were inconsistent, ranging from within normal range to elevated levels in any individual. Six of these nine patients were persistently in remission. 30% of all IBD patients had vitamin B6 deficiency, 11% had vitamin B12 deficiency, and one patient (CD) had folate deficiency. All vitamins showed a significant correlation between intake and serum levels (B6: r = 0.46, P < 0.001; B12: r = 0.42, P < 0.001; folate: r = 0.26, P = 0.008). There was an inverse relationship between serum homocysteine in the blood and serum vitamin B12 (r = -0.241, P = 0.017). Conclusions: Serum homocysteine was mostly normal in patients with IBD and changed minimally over time. There was no association between disease activity and elevation of serum homocysteine. 30% of patients had vitamin B6 deficiency, but vitamin B6 was not associated with elevated homocysteine. The routine measurement of homocysteine is not warranted. Copyright © 2011 Crohn's & Colitis Foundation of America, Inc.


Blunden P.G.,University of Manitoba | Melnitchouk W.,Jefferson Lab | Thomas A.W.,University of Adelaide
Physical Review Letters | Year: 2012

We present a new dispersive formulation of the γZ box radiative corrections to weak charges of bound protons and neutrons in atomic parity violation measurements on heavy nuclei such as ¹³³Cs and ²¹³Ra. We evaluate for the first time a small but important additional correction arising from Pauli blocking of nucleons in a heavy nucleus. Overall, we find a significant shift in the γZ correction to the weak charge of ¹³³Cs, approximately 4 times larger than the current uncertainty on the value of sin²θW, but with a reduced error compared to earlier estimates. © 2012 American Physical Society.


Bernstein C.N.,University of Manitoba | Ng S.C.,Chinese University of Hong Kong | Lakatos P.L.,Semmelweis University | Moum B.,University of Oslo | Loftus Jr. E.V.,Mayo Medical School
Inflammatory Bowel Diseases | Year: 2013

Standardized mortality rates in ulcerative colitis (UC) are no different from those in the general population. Patients who are older and have more comorbidities have increased mortality. Emergent colectomy still carries 30-day mortality rates of approximately 5%. In more recent studies, UC surgery rates at 10 years from diagnosis are nearly 3% in Hungary, <10% in referral center studies from Asia, approximately 10% in Norway, the European Cohort Study of Inflammatory Bowel Diseases and Manitoba, Canada, and nearly 17% in Olmsted County, Minnesota. These rates are for the most part lower than reported colectomy rates from studies completed before 1990. Short-term colectomy rates in severe hospitalized UC have remained stable at 27% for several years. Generally, children seem to have higher rates of extensive colitis at diagnosis than adults. There also seem to be higher rates of colectomy in children than in adults (i.e., at least 20% at 10 years), and perhaps this reflects a higher rate of extensive disease. Acute severe colitis in patients with UC still represents a condition with a high early colectomy rate and a measurable mortality rate. Copyright © 2013 Crohn's & Colitis Foundation of America, Inc.


Martens P.J.,Manitoba Center for Health Policy | Chochinov H.M.,University of Manitoba | Prior H.J.,Manitoba Center for Health Policy
Journal of Clinical Psychiatry | Year: 2014

Objective: To compare the causes and rates of death for people with and without schizophrenia in Manitoba, Canada. Method: Using de-identified administrative databases at the Manitoba Centre for Health Policy, a population-based analysis was performed to compare age- and sex-adjusted 10-year (1999-2008) mortality rates, overall and by specific cause, of decedents aged 10 years or older who had at least 1 diagnosis of schizophrenia (ICD-9-CM code 295, ICD-10-CA codes F20, F21, F23.2, F25) over a 12-year period (N=9,038) to the rest of the population (N=969,090). Results: The mortality rate for those with schizophrenia was double that of the rest of the population (20.00% vs 9.37%). The all-cause mortality rate was higher for people with schizophrenia compared to all others (168.9 vs 99.1 per thousand; relative risk [RR]=1.70, P<.0001); rates of death due to suicide (RR=8.67, P<.0001), injury (RR=2.35, P<.0001), respiratory illness (RR=2.00, P<.0001), and circulatory illness (RR=1.64, P<.0001) were also significantly higher in people with schizophrenia. Overall cancer deaths were similar (28.6 vs 27.3 per thousand, P=.42, NS) except in the middle-aged group (40-59), in which cancer death rates were significantly higher for those with schizophrenia (28.7 vs 11.6 per thousand; RR=2.48, P<.01). Mortality rates due to lung cancer were significantly higher in people with schizophrenia (9.4 vs 6.4 per thousand, RR=1.45, P<.001). Conclusions: People with schizophrenia are at increased risk of death compared to the general population, and the majority of these deaths are occurring in older age from physical disease processes. Risk of cancer mortality is significantly higher in middle-aged but not younger or older patients with schizophrenia. Understanding these patients' vulnerabilities to physical illness has important public health implications for prevention, screening, and treatment as the population ages. © Copyright 2014 Physicians Postgraduate Press, Inc.
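
A quick arithmetic check of the reported rate ratios, using only the rounded per-thousand figures quoted above (the published RRs are age- and sex-adjusted, so small differences are expected):

    # Rough consistency check of the reported rate ratios (rounded inputs).
    rate_scz, rate_pop = 168.9, 99.1          # all-cause deaths per 1,000
    print(round(rate_scz / rate_pop, 2))      # -> 1.7, matching RR = 1.70

    lung_scz, lung_pop = 9.4, 6.4             # lung cancer deaths per 1,000
    print(round(lung_scz / lung_pop, 2))      # -> 1.47, close to the adjusted 1.45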


John P.,University of Manitoba | Montgomery P.,Vancouver Island Health Authority
Journal of Geriatric Psychiatry and Neurology | Year: 2013

Objectives: 1. To determine if Self-Rated Health (SRH) predicts dementia over a five-year period in cognitively intact older adults, and in older adults with Cognitive Impairment, No Dementia (CIND); and 2. To determine if different methods of eliciting SRH (age-referenced (AR) versus unreferenced) yield similar results. Design: Prospective cohort. Population: 1468 cognitively intact adults and 94 older adults with CIND aged 65+ living in the community, followed over five years. Measures: Age, gender, education, subjective memory loss, depressive symptoms, functional status, cognition, SRH and AR-SRH were all measured; dementia was diagnosed on clinical examination. Those with abnormal cognition not meeting criteria for dementia were diagnosed with CIND. Results: In those who were cognitively intact at time 1, and had good SRH: 69.4% were intact; 6.0% had CIND; 6.9% had dementia, and 17.7% had died at time 2, while in those with poor SRH: 44.9% were intact, 11.1% had CIND, 9.1% had dementia, and 34.8% had died (p<0.001, chi-square test). In multinomial regression models SRH predicted dementia and death. In those with CIND at time 1 and good SRH: 2.3% were intact; 18.6% had CIND; 34.9% had dementia and 44.2% had died at time 2, while in those with poor SRH: 4.8% were intact, 31.0% had CIND, 19.0% had dementia, and 43.6% had died (p=0.30, chi-square test). In multinomial regression models, this was not significant. AR-SRH analyses were similar. Conclusions: In cognitively intact older adults SRH predicts dementia. In older adults with CIND, SRH does not predict dementia. © The Author(s) 2012.


Blunden P.G.,University of Manitoba | Melnitchouk W.,Jefferson Lab | Thomas A.W.,University of Adelaide
Physical Review Letters | Year: 2011

We present a new formulation of one of the major radiative corrections to the weak charge of the proton - that arising from the axial-vector hadron part of the γZ box diagram, Re□γZ^A. This formulation, based on dispersion relations, relates the γZ box contributions to moments of the F3^γZ interference structure function. It has a clear connection to the pioneering work of Marciano and Sirlin, and enables a systematic approach to improved numerical precision. Using currently available data, the total correction from all intermediate states is Re□γZ^A = 0.0044(4) at zero energy, which shifts the theoretical estimate of the proton weak charge from 0.0713(8) to 0.0705(8). The energy dependence of this result, which is vital for interpreting the Qweak experiment, is also determined. © 2011 American Physical Society.
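
For readers unfamiliar with the approach, a single-variable dispersion relation of this general kind is commonly written schematically as below. This is a sketch only: the precise signs, crossing-symmetry factors, and the reduction of the imaginary part to moments of the F3^γZ structure function follow the paper's conventions, not this note.

    \Re e\,\Box_{\gamma Z}^{A}(E) \;=\;
      \frac{2}{\pi}\,\mathcal{P}\!\int_{0}^{\infty} dE'\,
      \frac{E'}{E'^{2}-E^{2}}\;\Im m\,\Box_{\gamma Z}^{A}(E')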


PURPOSE OF REVIEW: Of the idiopathic interstitial pneumonias, the differentiation between idiopathic pulmonary fibrosis (IPF) and nonspecific interstitial pneumonitis (NSIP) raises considerable diagnostic challenges, as their clinical presentations share many overlapping features. IPF is a fibrosing pneumonia of unknown cause, showing a histologic pattern of usual interstitial pneumonia (UIP), and has a poorer prognosis than does NSIP. This review examines whether the radiographic features of IPF and NSIP as assessed by high-resolution computed tomography (HRCT) can be used to distinguish between these two entities. RECENT FINDINGS: The diagnostic accuracy of HRCT for UIP and NSIP has been reported to be approximately 70% in various studies. Disagreement between the HRCT diagnosis and the histologic diagnosis occurs in approximately one-third of the cases. The predominant feature of honeycombing on HRCT yields a specificity of approximately 95% and sensitivity of approximately 40% for UIP. In contrast, a predominant feature of ground glass opacities (GGOs) gives a sensitivity of approximately 95% and specificity of approximately 40% for NSIP. SUMMARY: The finding of honeycombing as the predominant HRCT feature suggests the diagnosis of UIP and may exclude the need for biopsy. Predominant features of GGOs are not specific enough to distinguish between NSIP and UIP. Copyright © 2012 Lippincott Williams & Wilkins.
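
To make the quoted operating points concrete, here is a small Python illustration with an invented 50/50 case mix; the counts are hypothetical, chosen only to reproduce sensitivity and specificity of roughly 40% and 95% for honeycombing as a marker of UIP, and are not data from the review.

    # Hypothetical 2x2 table: honeycombing-predominant HRCT as a test for UIP.
    tp, fn = 40, 60   # 100 UIP cases: 40 detected -> sensitivity ~40%
    tn, fp = 95, 5    # 100 NSIP cases: 95 correctly negative -> specificity ~95%

    sensitivity = tp / (tp + fn)   # 0.40
    specificity = tn / (tn + fp)   # 0.95
    ppv = tp / (tp + fp)           # 0.89 at this (invented) 50/50 case mix
    print(sensitivity, specificity, round(ppv, 2))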


Song L.,Peking University | Niyato D.,Nanyang Technological University | Han Z.,University of Houston | Hossain E.,University of Manitoba
IEEE Wireless Communications | Year: 2014

Device-to-device communication underlaying cellular networks allows mobile devices such as smartphones and tablets to use the licensed spectrum allocated to cellular services for direct peer-to-peer transmission. D2D communication can use either one-hop transmission (i.e., D2D direct communication) or multi-hop cluster-based transmission (i.e., in D2D local area networks). The D2D devices can compete or cooperate with each other to reuse the radio resources in D2D networks. Therefore, resource allocation and access for D2D communication can be treated as games. The theories behind these games provide a variety of mathematical tools to effectively model and analyze the individual or group behaviors of D2D users. In addition, game models can provide distributed solutions to the resource allocation problems for D2D communication. The aim of this article is to demonstrate the applications of game-theoretic models to study the radio resource allocation issues in D2D communication. The article also outlines several key open research directions. © 2002-2012 IEEE.
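
As a flavor of the game-theoretic treatment, the following Python sketch runs best-response dynamics in a toy channel-selection congestion game: each D2D pair repeatedly moves to the channel with the fewest co-channel users. Games of this form admit a potential function, so the loop terminates at a pure Nash equilibrium. The setup is invented for illustration and is not any specific model from the article.

    import random

    # Toy congestion game for D2D channel reuse.
    N_PAIRS, N_CHANNELS = 8, 3
    choice = [random.randrange(N_CHANNELS) for _ in range(N_PAIRS)]

    def load(ch):
        # number of D2D pairs currently on channel ch
        return sum(1 for c in choice if c == ch)

    changed = True
    while changed:
        changed = False
        for i in range(N_PAIRS):
            # cost of channel ch = co-channel pairs other than pair i after moving;
            # subtracting (choice[i] == ch) removes pair i's own contribution.
            best = min(range(N_CHANNELS),
                       key=lambda ch: load(ch) - (choice[i] == ch))
            if best != choice[i]:
                choice[i] = best
                changed = True

    print("equilibrium channel loads:", [load(c) for c in range(N_CHANNELS)])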


Gordon V.,Cancer Care Manitoba | Gordon V.,University of Manitoba | Banerji S.,Cancer Care Manitoba | Banerji S.,University of Manitoba | Banerji S.,Manitoba Institute of Cell Biology
Clinical Cancer Research | Year: 2013

The triple-negative breast cancer (TNBC) subtype, defined clinically by the lack of estrogen, progesterone, and Her2 receptor expression, accounts for 10% to 15% of annual breast cancer diagnoses. Currently, limited therapeutic options have shown clinical benefit beyond cytotoxic chemotherapy. Defining this clinical cohort and identifying subtype-specific molecular targets remain critical for new therapeutic development. The current era of high-throughput molecular analysis has revealed new insights into these targets and confirmed the phosphoinositide 3-kinase (PI3K) as a key player in pathogenesis. The improved knowledge of the molecular basis of TNBC in parallel with efforts to develop new PI3K pathway-specific inhibitors may finally produce the therapeutic breakthrough that is desperately needed. © 2013 American Association for Cancer Research.


Domanski D.,University of Victoria | Murphy L.C.,University of Manitoba | Borchers C.H.,University of Victoria
Analytical Chemistry | Year: 2010

We have developed a phosphatase-based phosphopeptide quantitation (PPQ) method for determining phosphorylation stoichiometry in complex biological samples. This PPQ method is based on enzymatic dephosphorylation, combined with specific and accurate peptide identification and quantification by multiple reaction monitoring (MRM) with stable-isotope-labeled standard peptides. In contrast with classical MRM methods for the quantitation of phosphorylation stoichiometry, the PPQ-MRM method needs only one nonphosphorylated SIS (stable isotope-coded standard) and two analyses (one for the untreated sample and one for the phosphatase-treated sample), from which the expression and modification levels can accurately be determined. From these analyses, the percent phosphorylation can be determined. In this manuscript, we compare the PPQ-MRM method with an MRM method without phosphatase and demonstrate the application of these methods to the detection and quantitation of phosphorylation of the classic phosphorylated breast cancer biomarkers (ERα and HER2), and for phosphorylated RAF and ERK1, which also contain phosphorylation sites of biological importance. Using synthetic peptides spiked into a complex protein digest, we were able to use our PPQ-MRM method to accurately determine the total phosphorylation stoichiometry on specific peptides as well as the absolute amount of the peptide and phosphopeptide present. Analyses of samples containing ERα protein revealed that the PPQ-MRM method is capable of determining phosphorylation stoichiometry in proteins from cell lines, and is in good agreement with determinations obtained using the direct MRM approach in terms of phosphorylation and total protein amount. © 2010 American Chemical Society.
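
The stoichiometry arithmetic implied by the two-run PPQ design can be sketched as follows; this is a simplified illustration, not the authors' exact pipeline. The untreated run quantifies the nonphosphorylated peptide against the SIS standard, while the phosphatase-treated run reports the total (phospho plus nonphospho) of the same peptide.

    def percent_phosphorylation(nonphospho_untreated, total_after_phosphatase):
        """PPQ-style stoichiometry sketch: phospho amount is the gain in the
        nonphosphorylated peptide signal after dephosphorylation. Amounts in
        fmol relative to the SIS standard (illustrative arithmetic only)."""
        phospho = total_after_phosphatase - nonphospho_untreated
        return 100.0 * phospho / total_after_phosphatase

    # e.g. 30 fmol nonphospho before treatment, 50 fmol total after:
    print(percent_phosphorylation(30.0, 50.0))   # -> 40.0 (% phosphorylated)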


Power K.E.,University of Ontario Institute of Technology | McCrea D.A.,University of Manitoba | Fedirchuk B.,University of Manitoba
Journal of Physiology | Year: 2010

This is the first study to report on the increase in motoneurone excitability during fictive scratch in adult decerebrate cats. Intracellular recordings from antidromically identified motoneurones revealed a decrease in the voltage threshold for spike initiation (Vth), a suppression of motoneurone afterhyperpolarization and activation of voltage-dependent excitation at the onset of scratch. These state-dependent changes recovered within 10-20 s after scratch and could be evoked after acute transection of the spinal cord at C1. Thus, there is a powerful intraspinal system that can quickly and reversibly re-configure neuronal excitability during spinal network activation. Fictive scratch was evoked in spinal intact and transected decerebrate preparations by stroking the pinnae following topical curare application to the dorsal cervical spinal cord and neuromuscular block. Hyperpolarization of Vth occurred (mean -5.8 mV) in about 80% of ipsilateral flexor, extensor or bifunctional motoneurones during fictive scratch. The decrease in Vth began before any scratch-evoked motoneurone activity as well as during the initial phase in which extensors are tonically hyperpolarized. The Vth of contralateral extensors depolarized by a mean of +3.7 mV during the tonic contralateral extensor activity accompanying ipsilateral scratch. There was a consistent and substantial reduction of afterhyperpolarization amplitude without large increases in motoneurone conductance in both spinal intact and transected preparations. Depolarizing current injection increased, and hyperpolarization decreased, the amplitude of rhythmic scratch drive potentials in acute spinal preparations, indicating that the spinal scratch-generating network can activate voltage-dependent conductances in motoneurones. The enhanced excitability in spinal preparations associated with fictive scratch indicates the existence of previously unrecognized, intraspinal mechanisms increasing motoneurone excitability. © 2010 The Authors. Journal compilation © 2010 The Physiological Society.


Bernstein C.N.,University of Manitoba | Loftus Jr. E.V.,Mayo Medical School | Ng S.C.,Chinese University of Hong Kong | Lakatos P.L.,Semmelweis University | Moum B.,University of Oslo
Gut | Year: 2012

Hospitalisation and surgery are considered to be markers of more severe disease in Crohn's disease. These are costly events and limiting these costs has emerged as one rationale for the cost of expensive biologic therapies. The authors sought to review the most recent international literature to estimate current hospitalisation and surgery rates for Crohn's disease and place them in the historical context of where they have been, whether they have changed over time, and to compare these rates across different jurisdictions. It is in this context that the authors could set the stage for interpreting some of the early data and studies that will be forthcoming on rates of hospitalisation and surgery in an era of more aggressive biologic therapy. The most recent data from Canada, the United Kingdom and Hungary all suggest that surgical rates were falling prior to the advent of biologic therapy, and continue to fall during this treatment era. The impact of biologic therapy on surgical rates will have to be analysed in the context of evolving reductions in developed regions before biologic therapy was even introduced. Whether more aggressive medical therapy will decrease the requirement for surgery over long periods of time remains to be proven.


Middleton D.J.,CSIRO | Weingartl H.M.,Canadian Food Inspection Agency | Weingartl H.M.,University of Manitoba
Current Topics in Microbiology and Immunology | Year: 2012

Hendra virus (HeV) and Nipah virus (NiV) form a separate genus Henipavirus within the family Paramyxoviridae, and are classified as biosafety level 4 pathogens due to their high case fatality rate following human infection and because of the lack of effective vaccines or therapy. Both viruses emerged from their natural reservoir during the last decade of the twentieth century, causing severe disease in humans, horses and swine, and infecting a number of other mammalian species. The current review summarizes our up-to-date understanding of pathology and pathogenesis in the natural reservoir species, the Pteropus bat, and in the equine and porcine spill-over species. © 2012 Springer-Verlag Berlin Heidelberg.


Nguyen G.C.,University of Toronto | Nguyen G.C.,Institute for Clinical Evaluative science | Nguyen G.C.,Johns Hopkins University | Bernstein C.N.,University of Manitoba
American Journal of Gastroenterology | Year: 2013

OBJECTIVES: There is practice variation in the duration of anticoagulation for venous thromboembolism (VTE) in inflammatory bowel disease (IBD) patients. Clinicians must weigh the high risk of recurrent VTE with the risk of gastrointestinal bleeding. METHODS: We implemented Markov decision analysis to compare the costs and effectiveness of extended anticoagulation vs. time-limited anticoagulation (6 months) among IBD patients with first unprovoked VTE over a 5-year time horizon. In a secondary analysis, we added two strategies in which therapeutic-dose or prophylactic-dose anticoagulation was administered during IBD flares. RESULTS: Compared with time-limited anticoagulation, extended anticoagulation yielded slightly higher quality-adjusted life years (QALYs) (4.40 vs. 4.38) and costs ($21,158 vs. $20,825), and an incremental cost-effectiveness ratio (ICER) of $15,254/QALY over 5 years. In secondary analysis, pharmacological prophylaxis during IBD flares was associated with the highest QALYs (4.41) and costs ($28,177), but was not cost-effective when compared with extended anticoagulation (ICER=$1,158,717/QALY). Anticoagulation during flares yielded the lowest cost ($19,681) and same QALYs as extended anticoagulation. In probabilistic sensitivity analysis, extended anticoagulation yielded higher QALYs than time-limited anticoagulation in 91% of trials and was dominant or cost-effective (<$50,000/QALY) in 72% of trials. When analyzed over a lifetime, extended anticoagulation dominated time-limited anticoagulation with higher effectiveness (18.44 vs. 17.95 QALYs) and lower costs ($94,738 vs. $102,874) and was highly robust in sensitivity analyses. CONCLUSIONS: Our analyses suggest that extended anticoagulation may provide marginal benefit over time-limited anticoagulation and should be considered in the management of first unprovoked VTE in IBD. Anticoagulation and prophylaxis during IBD flares are alternative viable strategies. © 2013 by the American College of Gastroenterology.
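
The headline ICER can be reproduced, to rounding, from the figures quoted above:

    # ICER = (cost_A - cost_B) / (QALY_A - QALY_B), using the rounded values
    # quoted in the abstract; the published $15,254/QALY comes from unrounded inputs.
    cost_ext, cost_lim = 21158.0, 20825.0
    qaly_ext, qaly_lim = 4.40, 4.38
    icer = (cost_ext - cost_lim) / (qaly_ext - qaly_lim)
    print(round(icer))   # -> 16650 $/QALY, the same order as the reported 15254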


Mahajan S.T.,University Hospitals Case Medical Center | Patel P.B.,University Hospitals Case Medical Center | Marrie R.A.,University of Manitoba
Journal of Urology | Year: 2010

Purpose: We describe the prevalence of overactive bladder symptoms in patients with multiple sclerosis as well as the rates of evaluation and treatment of urinary complaints. Materials and Methods: Data from the fall 2005 North American Research Committee On Multiple Sclerosis survey were examined, including the Urogenital Distress Inventory plus a nocturia question, the SF-12, and inquiries regarding urological care and treatments. Data were analyzed using descriptive statistics, chi-square and Student's t tests, ANOVA and multivariable logistic regression. Results: Of 16,858 surveys distributed, 9,702 (58%) were completed. Participants with a surgically altered bladder (n = 21) were excluded from analysis. At least 1 moderate to severe urinary symptom (score of 2 or greater) was reported by 6,263 (65%) respondents. Increasing overactive bladder symptoms were correlated with longer disease duration (r = 0.135) and increasing physical disability (r = 0.291) (both p <0.001). Decreased quality of life was associated with increasing disability (p <0.001) and overactive bladder symptom score (p <0.001). Of patients with moderate to severe overactive bladder symptoms only 2,710 (43.3%) were evaluated by urology and 2,361 (51%) were treated with an anticholinergic medication. Treated patients more frequently reported leakage (p <0.001) and newer treatments were significantly underused (less than 10% total use). Catheter use was reported by 2,309 (36.8%) respondents, and was associated with greater disability, higher overactive bladder symptom score and reduced quality of life (all p <0.001). Conclusions: This large-scale study identified high rates of overactive bladder symptoms in patients with MS, and correlations with increasing disease duration and physical disability. Despite an increasing awareness of overactive bladder symptoms and the need for evaluation and treatment, many patients remain underserved. © 2010 American Urological Association Education and Research, Inc.


VANCOUVER, BRITISH COLUMBIA--(Marketwired - Feb. 22, 2017) - Power Metals Corp. ("Power Metals Corp." or the "Company") (TSX VENTURE:PWM)(FRANKFURT:OAA1) is very pleased to announce that it has acquired the Coyote Project (the "Project") located in the Lisbon Valley area in the Paradox Basin, Utah. The Project includes 150 placer mineral claims covering an area of 3,000 acres and inclusive of lithium brine mineral rights, on trend and adjoining to the north, the Lisbon Valley oil and gas field, where historic lithium brine content has been reported as high as 730 parts per million lithium (Superior Oil 88-21P). Johnathan More, CEO of Power Metals noted, "We are extremely excited to have been able to position the company in the Lisbon Valley, as a starting point. As we roll out our plan, we intend to deploy increased resources towards the building of a petro lithium portfolio in the United States including but not limited to the acquisition of oil field assets, lithium brine, oil wells and associated infrastructure." To view the map accompanying this press release please click the following link: http://media3.marketwire.com/docs/PowerMap221.pdf More continued, "Structurally, the Coyote Project is situated down dip from an existing oilfield and within a geosyncline basin feature which could represent a fluid trap for migrating brine fluids. The property lies entirely within a zone identified by the United States Geological Survey (USGS) to contain 40% plus TDS (total dissolved solids) within the Pennsylvanian brine aquifers (USGS Report 1962). To date, the most concentrated brines have been found in Pennsylvanian rocks, especially in the thin clastic breaks which separate the salt beds in the Paradox Formation. The porous Mississippian dolomites and limestones appear to offer the potential of sustained brine flow from a large reservoir, especially where they have been faulted into contact with rich Paradox salt beds." The Lisbon Valley oil and gas field is located approximately 40 miles southeast of Moab, Utah in the salt anticline belt on the southwest edge of the Paradox Basin in San Juan county. The oilfield was first discovered by Pure Oil Company in 1960. The Lisbon field produces oil and gas from the southwest flank of a faulted anticlinal trap in the Devonian sandstones and Mississippian limestones (Segal et al., 1986). The Paradox Basin covers large parts of San Juan, Garfield, Wayne, Emery, and Grand Counties in southeastern Utah. The Basin was a structural and depositional trough associated with the Pennsylvanian-age Ancestral Rocky Mountains. The subsiding basin developed a shallow-water carbonate shelf that locally contained carbonate buildups along its south and southwest margins. The region is home to the former Rio Algom uranium mill facility, an active copper mine operated by Lisbon Valley Mining Company, and a natural gas processing plant located in the city of Lisbon, Utah. The company has entered into a number of discussions with parties who have had extensive experience with, or whose main operating business includes the separation of metals and physical particulate from water, recycled water and oil and gas waste water. The company hopes to conclude an agreement to test these processes and methods for commercial scale application. Power Metals is pleased to appoint Mr. Ron Bourgeois as Project Manager covering its asset base in Alberta and Utah. Mr. Bourgeois has over thirty years of experience in executive management, particularly in the oil and gas industry. 
He has held numerous and varying management and public company positions with extensive experience in the development and financing of major oil and gas resources and infrastructure assets around the world. Specifically, Mr. Bourgeois has significant experience in developing commercial solutions in the extractive industries to liberate major resource bases, recently including the Palo Duro Basin, Texas, where Mr. Bourgeois worked closely on fracking and other solutions. Mr. Bourgeois holds a B. Comm. (Hons.) from the University of Manitoba and has been a chartered accountant since 1976. In connection with the appointment, the Company announces the grant of 100,000 options at $0.48 per share. The Company is acquiring a 100% interest in the Coyote Project in consideration of the issuance of 3,500,000 shares of Power Metals and a payment of US$50,000. John F. Wightman, MSc. (Geology), P.Eng., FGAC, a qualified person, prepared the disclosure reports related to these projects. National Instrument 43-101 reports have not been prepared on these properties. Power Metals Corp is one of Canada's newest premier mining companies with a mandate to explore, develop and acquire high quality mining projects for minerals contributing to power. We are committed to building an arsenal of projects in both lithium and clean power fuels like uranium. We see an unprecedented opportunity to supply the staggering growth of the lithium battery industry. ON BEHALF OF THE BOARD, Neither the TSX Venture Exchange nor its Regulation Service Provider (as that term is defined in the policies of the TSX Venture Exchange) accepts responsibility for the adequacy or accuracy of this release. No stock exchange, securities commission or other regulatory authority has approved or disapproved the information contained herein.


News Article | October 28, 2016
Site: www.marketwired.com

The Royal Canadian Air Force (RCAF) will conduct flybys for the NHL Heritage Classic event between the Winnipeg Jets and the Edmonton Oilers in Winnipeg this weekend. The first flyby will occur shortly after 3 p.m. on Saturday, October 22 during the NHL Alumni Game, and the second during the NHL Heritage Classic event shortly after 2 p.m. on Sunday, October 23. The flybys will take place over the Investors Group Football Field on the University of Manitoba campus grounds. During the flybys, the aircraft will fly at an altitude no lower than 500 feet above the highest obstacle in their path before resuming a higher flying altitude for the return to home bases at Bagotville, Que., and Moose Jaw, Sask. Flybys by RCAF aircraft are carefully planned and closely controlled to ensure public safety at all times. The RCAF is proud to share in community events such as this, with flybys that allow us to demonstrate to Canadians the skills we have honed with the effective aircraft entrusted to us.


News Article | February 22, 2017
Site: www.nature.com

Previous studies have used age-binned means of the δ15N database over hand-picked geologic intervals to propose changes in this proxy with time2, 4, 14, 31. These studies have provided a qualitative indication that the δ15N record appears to vary systematically over geologic time. However, they are not statistically robust, because two samples drawn from a single population will often express different means owing to random noise. A Student's t-test is a more statistically robust method for determining whether two (otherwise normally distributed) sample sets are likely to arise from the same population, which is considered the null hypothesis. We therefore performed 754 independent two-tailed t-tests spanning every possible time-weighted binning of the δ15N database, assuming unequal variances in the sample sets. In all but a few cases at the extremes (where the bin sizes of one of the sample sets were small), the null hypothesis was rejected at greater than 99% confidence and so we can conclude that the divided sample sets arise from populations with different means and variances. The sample sets are defined as ranging from the first database entry from the Proterozoic, at 0.70 Gyr ago, to the entry with the age shown on the horizontal axis of Extended Data Fig. 1, and then from the subsequent entry to our final database entry, with age of 3.80 Gyr. Extended Data Fig. 1 shows the logarithm of the 'false-positive' probability that the two sample sets arise from populations of the same mean and variance. Using this method, datasets from 2.31 Gyr ago (this study), 2.50 Gyr ago and 2.70–2.80 Gyr ago are demonstrated to be the most statistically meaningful pivot ages (pivot ages separate the database into distinct sample sets).

As discussed in the main text, the large number of database entries from about 2.50 Gyr ago stems from predominantly deep-water environments that show small stratigraphic shifts in δ15N, interpreted to reflect temporary localized nitrification/denitrification in an otherwise reducing ocean16, 17, 19. As a result, the global database may be slightly biased towards results showing an 'oxic' nitrogen cycle at this time period. The data presented in this study are from unequivocally oxic shallow waters, and the statistical analysis confirms that our new data provide stronger statistical power in separating the datasets, even given the bias in the database at 2.50 Gyr ago. As we note in the main text, additional δ15N data from shallow-water depositional environments in this crucial interval are required to test alternative hypotheses.

Although it is beyond the scope of this current study, we additionally note that the most statistically meaningful separation of the δ15N database occurs when the sample sets are split into the time periods 0.70–2.71 Gyr ago and 2.75–3.80 Gyr ago. The statistical power for this split is driven primarily by the predominance of extremely 15N-enriched δ15N measurements (upwards of +55‰, dominantly in kerogens) from this time period. The origin of these extreme values is highly debated, with hypotheses including the onset of partial nitrification18, and effects from ammonia degassing under highly alkaline conditions32. Regardless, it is clear that the data from around 2.70 Gyr ago do not represent a modern-style aerobic N cycle, as no such extreme values are seen anywhere in the modern Earth system.
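
A minimal Python sketch of the binning sweep described above, with synthetic values standing in for the real database (scipy's ttest_ind with equal_var=False is the two-tailed Welch unequal-variance t-test):

    import numpy as np
    from scipy import stats

    # Synthetic stand-in for the database: 755 entries sorted by age,
    # with a step in mean d15N at 2.31 Gyr ago (invented for illustration).
    rng = np.random.default_rng(0)
    ages = np.sort(rng.uniform(0.70, 3.80, 755))          # Gyr ago
    d15n = np.where(ages < 2.31,
                    rng.normal(5, 2, 755),                # younger, "oxic"-like
                    rng.normal(1, 2, 755))                # older, Archean-like

    log_p = []
    for i in range(2, len(ages) - 1):                     # each side needs >= 2 points
        young, old = d15n[:i], d15n[i:]
        t, p = stats.ttest_ind(young, old, equal_var=False)
        log_p.append((ages[i], np.log10(p)))

    best_age, best_logp = min(log_p, key=lambda x: x[1])
    print(f"most significant pivot near {best_age:.2f} Gyr (log10 p = {best_logp:.1f})")
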
These statistical analyses therefore demonstrate that the nitrogen cycle underwent massive changes in both the early Neoarchean4 and at the GOE, with the data from this study forming the key pivot point for the latter.

Iron speciation was determined by means of the sequential extraction technique described in ref. 21, with a relative standard deviation of <5% for all extraction steps. TOC was measured on a Leco analyser after decarbonation by treatment with 20% HCl, with a 1σ of 0.05%. δ13C was measured at the SIFIR Laboratory at the University of Manitoba. A calibration line was calculated by least-squares linear regression of analyses of two international standards (USGS40, USGS41) performed at the beginning, middle and end of each run. Replicate analyses of international standard USGS Green River shale SGR-1b (δ13Corg = −29.3 ± 0.1‰ VPDB) alongside unknown samples yielded δ13Corg = −29.5 ± 0.2‰ (n = 29).

Kerogen was extracted following a method modified from ref. 33 in the Geobiology laboratory at the University of St Andrews. Approximately 100–200 mg of bulk rock powders were decarbonated twice with 10% (v/v) HCl overnight at 40 °C in a clean hood, then transferred to Teflon beakers in a dedicated fume cupboard, where 5 ml of 10% HCl + 2 ml of concentrated HF was added and volatilized at 40 °C. Residues were rinsed five times with Milli-Q water. Chloroform was added to the residue, shaken, and allowed to settle in separation funnels for about 30 min. Heavy minerals that sank to the bottom were first removed, and then floated kerogen was transferred to a Teflon beaker, dried in a clean hood, and stored in an anaerobic chamber until analysis. A subset of samples were also extracted commercially at Global Geolab Ltd, using techniques similar to those above, except that kerogens were separated out by heavy liquid separation with zinc bromide instead of chloroform. Repeat extracts of the same sample (all plotted in Fig. 2) had consistent δ15N values between laboratories, generally within 1‰ (see Source Data for Fig. 2).

Kerogen N isotope ratios (δ15Nker) were measured using a Eurovector 3028HT elemental analyser fitted with a Costech Zero Blank autosampler coupled to an Isoprime isotope ratio mass spectrometer, at the University of Leeds. Columns with reagents were fitted to the elemental analyser along with either a high-resolution CN gas chromatography column (Elemental Microanalysis E3037), or a NCH column (Elemental Microanalysis E3001), as below. A magnesium perchlorate-carbosorb trap was used to trap water and CO2. The setup was leak-checked and then the combustion and reduction furnaces were heated to operating temperatures and left purging with He overnight. The combustion furnace was held at 1,020 °C and the reduction furnace at 650 °C. The gas chromatography column was baked at 190 °C with He flowing overnight, and then its temperature was reduced to the normal running temperature (80 °C for the NCH column, and 110 °C for the high-resolution CN column). Samples were prepared by weighing between 10 mg and 30 mg of kerogen into 8 mm × 5 mm tin cups. These were loaded into the autosampler and purged for at least an hour before analyses. Upon sealing the autosampler chamber and opening it to the main He flow, mass 28 was monitored until it returned to a stable background (less than 7 × 10⁻¹¹ nA). Samples were combusted in a pulse of pure oxygen (N5.0 grade, BOC) and injected into a stream of helium (CP grade, BOC).
The resulting gases were passed through chromous oxide and silvered cobaltous oxide, fine copper wires, and a magnesium perchlorate/carbosorb trap before entering the gas chromatography column. The mass 29/28 ratio of the sample N2 gas was measured relative to a pulse of pure N2 (research grade, BOC) and corrected to the AIR scale (the air standard) using the USGS-25 and USGS-26 ammonium sulfate standards, with δ15N values of −30.1‰ and +53.7‰, respectively. Repeated runs of standard materials during each analytical session produced standard deviations of the raw δ15N that were generally between 0.15‰ and 0.41‰, with the majority ≤0.30‰. Data were corrected with bracketing standards using a simple linear regression equation. Repeats of an in-house yeast standard (7.6 wt% N) gave a long-term average value of −0.8 ± 0.31‰ (1σ, 37 runs with both NCH and high-resolution CN gas chromatography columns), with in-run reproducibility always ≤0.2‰ where three or more repeats were measured during the same analytical session. A sample size test using the same yeast standard determined that samples producing peak heights of <1 nA have larger variability, approaching the blank δ15N value as their peak height decreased. Repeat analyses of the yeast standard with peak height >1 nA produced δ15N values that differed by ≤0.1‰. Therefore, analyses that produced peak heights of <1 nA were discarded in this study.

The analysis of organic materials with low concentrations of nitrogen can be complicated by the production of CO gas (at masses 28 and 29) as a result of incomplete combustion, which can alter the apparent 15N/14N ratio of the sample. We took the following precautions to ensure that data were not affected by CO production during incomplete combustion: (1) combustion tests using a low-N organic material (cornflower, 0.07 wt% N); (2) mass 30 monitoring; and (3) use of an NCH column to produce a better separation between the N2 and unwanted CO (which might produce a secondary mass 28 peak for samples affected by partial combustion).

A subset of samples from the Rooihoogte and Timeball Hill formations was analysed for bulk rock geochemistry (wt% K2O) to screen for post-depositional alteration at the University of St Andrews, using standard X-ray fluorescence (with 1σ of 0.02 wt%).

Bulk nitrogen content (% TN) and bulk δ15N (δ15Nbulk, without decarbonation) were measured at the SIFIR Laboratory at the University of Manitoba. Analyses were performed using a Costech 4010 elemental analyser fitted with a Costech Zero Blank autosampler and coupled to a Thermo Finnigan Delta V Plus isotope-ratio mass spectrometer via an open-split interface (ConFlo III, Thermo Finnigan). A magnesium perchlorate-carbosorb trap was placed before the ConFlo III to remove remaining water and CO2. To improve the efficiency of sample combustion, the temperature in the oxidation column was raised to 1,050 °C, and a 'macro' O2 injection loop was used. The setup was leak-checked and then the oxidation and reduction columns were heated to operating temperatures and left purging with He overnight. The oxidation column was held at 1,050 °C and the reduction column at 650 °C. The approximately 3-m-long stainless steel gas chromatography column was baked at 100–110 °C with He flowing overnight, and then its temperature was reduced to the normal running temperature (55 °C). CO level was monitored during analytical sessions. Sample normalization was performed using the two-point calibration described in ref. 34, by analysing two international standards (USGS40 and USGS41) at the beginning, middle and end of each analytical session. Two certified standards were additionally analysed alongside the samples: B2153, soil, % TN = 0.13 ± 0.02%, δ15N = +6.70 ± 0.15‰ (Elemental Microanalysis); and SDO-1, Devonian Ohio Shale, % TN = 0.36 ± 0.01%, δ15N = −0.8 ± 0.3‰ (USGS). The data obtained were % TN = 0.14 ± 0.00% and δ15N values of +6.76 ± 0.02‰ (n = 3) for B2153, and % TN = 0.37 ± 0.00% and −0.32 ± 0.02‰ (n = 3) for SDO-1.

A subset of extracted kerogens and bulk rock powders were also run for δ15N by nano-elemental analyser-isotope ratio mass spectrometry at Syracuse University, following methods outlined in ref. 35. The benefit of this approach is that it is specifically designed for analysis of as little as 0.5 mg of kerogen and 50 nanomoles of N, thus limiting some of the complications associated with achieving complete combustion on larger samples. Encapsulated sample powders were evacuated to remove atmospheric N2 present in capsule pore space and purged with Ar. Sample combustion was performed in an Elementar Isotope Cube elemental analyser with reaction conditions set at 1,100 °C and 650 °C for the oxidation and reduction reactors, respectively. Oxygen flow was set at 30 ml min⁻¹ and introduced to the helium stream for 90 s, initiating when the sample is dropped into the oxidation reactor. The elemental analyser is coupled to an automated cryotrapping system that was built using a modified Elementar TraceGas analyser. The generated N2 gas was trapped in a silica-gel-filled, stainless steel trap cooled in liquid N2. Following complete collection of the N2 peak from the high-flow elemental analyser, the He flow through the cryotrap was switched to a lower flow (2 ml min⁻¹) via actuation of a VICI Valco 6-port valve. The trap was heated and N2 was released to a room-temperature capillary gas chromatography column (JW CarboBOND, 25 m, 0.53 mm internal diameter, 5 μm), and ultimately to the isotope ratio mass spectrometer. The Elementar elemental analyser traps CO2 from combustion in a molecular sieve trap that is released to waste or to the isotope ratio mass spectrometer directly for δ13C analyses. This ensures that CO2 is not trapped in the N2 cryotrap and mitigates the potential for neo-formed CO within the ion source. All samples were run in triplicate and blank-corrected using Keeling-style plots and normalized using the two-point correction scheme of ref. 34. Use of Keeling plots allows for simple estimation of the influence of the N2 procedural blank on samples and for high-fidelity measurements of δ15N on the small sample sizes employed. The reproducibility of replicate analyses of standards - IAEA N1 (+0.4‰), IAEA N2 (+20.35‰) and NIST Peach Leaves (+1.98‰) - and samples was ±0.26‰.

Nitrogen is preserved in the sedimentary rock record primarily as organic N or as ammonium substituting for potassium in phyllosilicates36. The sedimentary N isotope values can be modified by a number of post-depositional processes, including diagenesis, burial and metamorphism. Therefore, before interpreting sedimentary δ15N data, it is first necessary to examine the possible impacts of post-depositional alteration on the primary signal. Here we examine trends in supplementary and bulk rock data to validate our δ15N dataset as representing a primary signal. Degradation of organic matter during early diagenesis can offset primary δ15N signals by 2‰ to 3‰ (ref. 37).
High-pressure metamorphism does not impart significant δ15N changes38, although high-temperature metamorphism can increase δ15N in ammoniated phyllosilicates (and possibly N2; but see ref. 39) owing to volatilization of 15N-depleted nitrogen36, 38. Since the Rooihoogte and Timeball Hill formations have only experienced lower greenschist facies metamorphism23, this mechanism would be expected to produce at most a 1‰–2‰ positive shift in δ15Nker. Cross-plots demonstrate no correlation between % N in kerogen (Nker) and δ15Nker values (Extended Data Fig. 4a), rendering no evidence for metamorphic devolatilization of 15N-depleted nitrogen from organics. δ15Nbulk and % TN show only a loose positive correlation (with R² = 0.34; Extended Data Fig. 5a), in the opposite direction of what would be expected from substantial loss of 15N-depleted N from whole rocks via devolatilization. Only a weak negative correlation exists between wt% TOC and δ13Corg (R² = 0.42; Extended Data Fig. 4c), also inconsistent with substantial devolatilization of 13C-depleted carbon during metamorphism. These data indicate that loss of N during metamorphism and deep burial did not greatly alter the primary δ15N (or δ13C) values.

Nitrogen isotope exchange can occur between rocks and N-containing compounds when fluids migrate during organic matter maturation40. Similar to metamorphism, offset during thermal maturation generally results from preferential volatilization of 15N-depleted nitrogen from organic molecules. The δ15N of natural gas is highly variable, but can be as low41, 42 as −12‰. Nitrogen isotope exchange during fluid migration would tend to homogenize the isotopic composition of participating N pools, decreasing the isotopic range within the organic N pool and differences between organic and inorganic N pools40. Bulk rock δ15N (δ15Nbulk) values cover the measured range of δ15Nker, but are generally more positive than δ15Nker, inconsistent with complete isotopic homogenization. We observe only a very weak negative correlation between δ15Nbulk and TOC:TN (R² = 0.29; Extended Data Fig. 5b), suggesting that some 15N-enriched ammonium could have been sorbed onto and/or incorporated into clay minerals in very-low-TOC sediments, presumably during exchange with post-depositional fluids. The % TN (but not δ15Nbulk) indeed shows a clear positive correlation with % K2O (R² = 0.81; Extended Data Fig. 5c), supporting incorporation of N into illites during K-metasomatism; however, there is no correlation between δ15Nbulk and % K2O (R² = 0.10; Extended Data Fig. 5d), suggesting that this exchange did not greatly affect bulk δ15N values.

Source Data for Fig. 2 and Extended Data Figs 4 and 5 are available in the online version of the paper. Data for Fig. 1 are from ref. 29 and references therein; the full data table is available from the corresponding author on reasonable request.
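
The Keeling-style blank correction mentioned in these methods can be illustrated with a short sketch: the measured δ15N of a sample-plus-blank mixture is linear in the inverse of the total N analysed, so the intercept of a linear fit (1/N → 0) recovers the blank-free sample value. The numbers below are invented for the illustration.

    import numpy as np

    # Invented data: same sample analysed at several sizes, with a constant blank.
    n_total = np.array([60.0, 100.0, 200.0, 400.0])    # nanomoles N2 analysed
    d15n_meas = np.array([3.40, 3.64, 3.82, 3.91])     # measured d15N (per mil)

    # Linear fit of measured d15N against 1/(total N); intercept = sample d15N.
    slope, intercept = np.polyfit(1.0 / n_total, d15n_meas, 1)
    print(f"blank-corrected sample d15N ~ {intercept:.2f} per mil")   # -> ~4.00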


News Article | January 4, 2016
Site: www.scientificcomputing.com

In this special feature, we have invited top astronomers to handpick the Hubble Space Telescope image that has the most scientific relevance to them. The images they've chosen aren't always the colorful glory shots that populate the countless "best of" galleries around the internet; rather, their impact comes in the scientific insights they reveal.

My all-time favorite astronomical object is the Orion Nebula - a beautiful and nearby cloud of gas that is actively forming stars. I was a high school student when I first saw the nebula through a small telescope and it gave me such a sense of achievement to manually point the telescope in the right direction and, after a fair bit of hunting, to finally track it down in the sky (there was no automatic 'go-to' button on that telescope). Of course, what I saw on that long ago night was an amazingly delicate and wispy cloud of gas in black and white. One of the wonderful things that Hubble does is to reveal the colors of the universe. And this image of the Orion Nebula is our best chance to imagine what it would look like if we could possibly go there and see it up close. So many of Hubble's images have become iconic, and for me the joy is seeing its beautiful images bring science and art together in a way that engages the public. The entrance to my office features an enormous copy of this image wallpapered on a wall 4 m wide and 2.5 m tall. I can tell you, it's a lovely way to start each working day.

The impact of the fragments of Comet Shoemaker Levy 9 with Jupiter in July 1994 was the first time astronomers had advance warning of a planetary collision. Many of the world's telescopes, including the recently repaired Hubble, turned their gaze onto the giant planet. The comet crash was also my first professional experience of observational astronomy. From a frigid dome on Mount Stromlo, we hoped to see Jupiter's moons reflect light from comet fragments crashing into the far side of Jupiter. Unfortunately we saw no flashes of light from Jupiter's moons. However, Hubble got an amazing and unexpected view. The impacts on the far side of Jupiter produced plumes that rose so far above Jupiter's clouds that they briefly came into view from Earth. As Jupiter rotated on its axis, enormous dark scars came into view. Each scar was the result of the impact of a comet fragment, and some of the scars were larger in diameter than our moon. For astronomers around the globe, it was a jaw-dropping sight.

[Image credit: NASA, ESA and Jonathan Nichols (University of Leicester), CC BY]

This pair of images shows a spectacular ultraviolet aurora light show occurring near Saturn's north pole in 2013. The two images were taken just 18 hours apart, but show changes in the brightness and shape of the auroras. We used these images to better understand how much of an impact the solar wind has on the auroras. We used Hubble photographs like these, acquired by my astronomer colleagues, to monitor the auroras while using the Cassini spacecraft, in orbit around Saturn, to observe radio emissions associated with the lights. We were able to determine that the brightness of the auroras is correlated with higher radio intensities. Therefore, I can use Cassini's continuous radio observations to tell me whether or not the auroras are active, even if we don't always have images to look at. This was a large effort including many Cassini investigators and Earth-based astronomers.
This far-ultraviolet image of Jupiter's northern aurora shows the steady improvement in capability of Hubble's scientific instruments. The Space Telescope Imaging Spectrograph (STIS) images showed, for the first time, the full range of auroral emissions that we were just beginning to understand. The earlier Wide Field Planetary Camera 2 (WFPC2) camera had shown that Jupiter's auroral emissions rotated with the planet, rather than being fixed with the direction to the sun; thus Jupiter did not behave like the Earth. We knew that there were aurora from the mega-ampere currents flowing from Io along the magnetic field down to Jupiter, but we were not certain this would occur with the other satellites. While there were many ultraviolet images of Jupiter taken with STIS, I like this one because it clearly shows the auroral emissions from the magnetic footprints of Jupiter's moons Io, Europa, and Ganymede, and Io's emission clearly shows the height of the auroral curtain. To me it looks three-dimensional.

Take a good look at these images of the dwarf planet, Pluto, which show detail at the extreme limit of Hubble's capabilities. A few days from now, they will be old hat, and no-one will bother looking at them again. Why? Because in early May, the New Horizons spacecraft will be close enough to Pluto for its cameras to reveal better detail, as the craft nears its 14 July rendezvous. Yet this sequence of images - dating from the early 2000s - has given planetary scientists their best insights to date, the variegated colors revealing subtle variations in Pluto's surface chemistry. That yellowish region prominent in the center image, for example, has an excess of frozen carbon monoxide. Why that should be is unknown. The Hubble images are all the more remarkable given that Pluto is only 2/3 the diameter of our own moon, but nearly 13,000 times farther away.

I once dragged my wife into my office to proudly show her the results of some imaging observations made at the Anglo-Australian Telescope with a (then) new and (then) state-of-the-art 8,192 x 8,192 pixel imager. The images were so large, they had to be printed out on multiple A4 pages, and then stuck together to create a huge black-and-white map of a cluster of galaxies that covered a whole wall. I was crushed when she took one look and said: "Looks like mould." Which just goes to show the best science is not always the prettiest. My choice of the greatest image from HST is another black-and-white image from 2012 that also "looks like mould." But buried in the heart of the image is an apparently unremarkable faint dot. However, it represents the confirmed detection of the coldest example of a brown dwarf then discovered: an object lurking less than 10 parsecs (32.6 light years) away from the sun with a temperature of about 350 Kelvin (77 degrees Celsius) - colder than a cup of tea! And to this day it remains one of the coldest compact objects we've detected outside our solar system.

[Image credit: NASA/ESA/STScI, processing by Lucas Macri (Texas A&M University). Observations carried out as part of HST Guest Observer program 9810.]

In 2004, I was part of a team that used the recently-installed Advanced Camera for Surveys (ACS) on Hubble to observe a small region of the disk of a nearby spiral galaxy (Messier 106) on 12 separate occasions within 45 days.
These observations allowed us to discover over 200 Cepheid variables, which are very useful to measure distances to galaxies and ultimately determine the expansion rate of the universe (appropriately named the Hubble constant). This method requires a proper calibration of Cepheid luminosities, which can be done in Messier 106 thanks to a very precise and accurate estimate of the distance to this galaxy (24.8 million light-years, give or take 3%) obtained via radio observations of water clouds orbiting the massive black hole at its center (not included in the image). A few years later, I was involved in another project that used these observations as the first step in a robust cosmic distance ladder and determined the value of the Hubble constant with a total uncertainty of three percent.

[Image credit: NASA, ESA and H.E. Bond (STScI), CC BY]

One of the images that excited me most - even though it never became famous - was our first one of the light echo around the strange explosive star V838 Monocerotis. Its eruption was discovered in January 2002, and its light echo was discovered about a month later, both from small ground-based telescopes. Although light from the explosion travels straight to the Earth, it also goes out to the side, reflects off nearby dust, and arrives at Earth later, producing the "echo." Astronauts had serviced Hubble in March 2002, installing the new Advanced Camera for Surveys (ACS). In April, we were one of the first to use ACS for science observations. I always liked to think that NASA somehow knew that the light from V838 was on its way to us from 20,000 light-years away, and got ACS installed just in time! The image, even in only one color, was amazing. We obtained many more Hubble observations of the echo over the ensuing decade, and they are some of the most spectacular of all, and VERY famous, but I still remember being awed when I saw this first one.

[Image credit: X-ray: NASA/CXC/Univ of Iowa/P.Kaaret et al.; Optical: NASA/ESA/STScI/Univ of Iowa/P.Kaaret et al., CC BY-NC]

Galaxies form stars. Some of those stars end their "normal" lives by collapsing into black holes, but then begin new lives as powerful X-ray emitters powered by gas sucked off a companion star. I obtained this Hubble image (in red) of the Medusa galaxy to better understand the relation between black hole X-ray binaries and star formation. The striking appearance of the Medusa arises because it's a collision between two galaxies - the "hair" is remnants of one galaxy torn apart by the gravity of the other. The blue in the image shows X-rays, imaged with the Chandra X-ray Observatory. The blue dots are black hole binaries. Earlier work had suggested that the number of X-ray binaries is simply proportional to the rate at which the host galaxy forms stars. These images of the Medusa allowed us to show that the same relation holds, even in the midst of galactic collisions.

[Image credit: NASA, ESA, the Hubble Heritage (STScI/AURA)-ESA/Hubble Collaboration, and A. Evans (University of Virginia, Charlottesville/NRAO/Stony Brook University), CC BY]

Some of the Hubble Space Telescope images that appeal to me a great deal show interacting and merging galaxies, such as the Antennae (NGC 4038 and NGC 4039), the Mice (NGC 4676), the Cartwheel galaxy (ESO 350-40), and many others without nicknames. These are spectacular examples of violent events that are common in the evolution of galaxies.
The images provide us with exquisite detail about what goes on during these interactions: the distortion of the galaxies, the channeling of gas towards their centers, and the formation of stars. I find these images very useful when I explain to the general public the context of my own research, the accretion of gas by the supermassive black holes at the centers of such galaxies. Particularly neat and useful is a video put together by Frank Summers at the Space Telescope Science Institute (STScI), illustrating what we learn by comparing such images with models of galaxy collisions.
Our best computer simulations tell us galaxies grow by colliding and merging with each other. Similarly, our theories tell us that when two spiral galaxies collide, they should form a large elliptical galaxy. But actually seeing it happen is another story entirely! This beautiful Hubble image has captured a galaxy collision in action. This doesn’t just tell us that our predictions are good, but it lets us start working out the details, because we can now see what actually happens. There are fireworks of new star formation triggered as the gas clouds collide and huge distortions going on as the spiral arms break up. We have a long way to go before we’ll completely understand how big galaxies form, but images like this are pointing the way.
This is the highest-resolution view of a collimated jet powered by a supermassive black hole in the nucleus of the galaxy M87 (the biggest galaxy in the Virgo Cluster, 55 million light-years from us). The jet shoots out of the hot plasma region surrounding the black hole (top left) and we can see it streaming down across the galaxy, over a distance of 6,000 light-years. The white/purple light of the jet in this stunning image is produced by the stream of electrons spiraling around magnetic field lines at a speed of approximately 98% of the speed of light (a short back-of-envelope calculation appears below). Understanding the energy budget of black holes is a challenging and fascinating problem in astrophysics. When gas falls into a black hole, a huge amount of energy is released in the form of visible light, X-rays and jets of electrons and positrons traveling almost at the speed of light. With Hubble, we can measure the size of the black hole (a thousand times bigger than the central black hole of our galaxy), the energy and speed of its jet, and the structure of the magnetic field that collimates it.
NASA, Jayanne English (University of Manitoba), Sally Hunsberger (Pennsylvania State University), Zolt Levay (Space Telescope Science Institute), Sarah Gallagher (Pennsylvania State University), and Jane Charlton (Pennsylvania State University), CC BY
When my Hubble Space Telescope proposal was accepted in 1998, it was one of the biggest thrills of my life. To imagine that, for me, the telescope would capture Stephan’s Quintet, a stunning compact group of galaxies! Over the next billion years Stephan’s Quintet galaxies will continue in their majestic dance, guided by each other’s gravitational attraction. Eventually they will merge, change their forms, and ultimately become one. We have since observed several other compact groups of galaxies with Hubble, but Stephan’s Quintet will always be special because its gas has been released from its galaxies and lights up in dramatic bursts of intergalactic star formation. What a fine thing to be alive at a time when we can build the Hubble and push our minds to glimpse the meaning of these signals from our universe. Thanks to all the heroes who made and maintained Hubble.
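As a quick aside, the “98% of the speed of light” quoted for the M87 jet electrons can be turned into two useful numbers with a few lines of Python; the viewing angle below is an illustrative assumption, not a figure from the article:

    import math

    beta = 0.98                   # electron speed as a fraction of c (quoted above)

    # Lorentz factor: each electron carries roughly 5x its rest energy.
    gamma = 1.0 / math.sqrt(1.0 - beta**2)

    # Apparent transverse speed of a jet feature moving at angle theta to our
    # line of sight; for small angles it can exceed c ("superluminal" motion).
    theta = math.radians(17.0)    # assumed viewing angle, illustrative only
    beta_apparent = beta * math.sin(theta) / (1.0 - beta * math.cos(theta))

    print(f"Lorentz factor: {gamma:.1f}")              # ~5.0
    print(f"apparent speed: {beta_apparent:.1f} c")    # ~4.5 c

Apparent faster-than-light motion of exactly this kind has been reported for knots in the M87 jet; it is a geometric effect, not a violation of relativity.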
When Hubble was launched in 1990, I was beginning my PhD studies into gravitational lensing, the action of mass bending the paths of light rays as they travel across the universe. Hubble’s image of the massive galaxy cluster Abell 2218 brings this gravitational lensing into sharp focus, revealing how the massive quantity of dark matter present in the cluster — matter that binds the many hundreds of galaxies together — magnifies the light from sources many times more distant. As you stare deeply into the image, these highly magnified images are apparent as long thin streaks, the distorted views of baby galaxies that would normally be impossible to detect. It gives you pause to think that such gravitational lenses, acting as natural telescopes, use the gravitational pull from invisible matter to reveal amazing detail of the universe we cannot normally see!
NASA, ESA, J. Rigby (NASA Goddard Space Flight Center), K. Sharon (Kavli Institute for Cosmological Physics, University of Chicago), and M. Gladders and E. Wuyts (University of Chicago)
Gravitational lensing is an extraordinary manifestation of the effect of mass on the shape of space-time in our universe. Essentially, where there is mass the space is curved, and so objects viewed in the distance, beyond these mass structures, have their images distorted. It’s somewhat like a mirage; indeed, this is the term the French use for this effect. In the early days of the Hubble Space Telescope, an image appeared of the lensing effects of a massive cluster of galaxies: the tiny background galaxies were stretched and distorted but embraced the cluster, almost like a pair of hands. I was stunned. This was a tribute to the extraordinary resolution of the telescope, operating far above the Earth’s atmosphere. Viewed from the ground, these extraordinarily thin wisps of galactic light would have been smeared out and not distinguishable from the background noise. My third-year astrophysics class explored the 100 Top Shots of Hubble, and they were most impressed by the extraordinary but true colors of the clouds of gas. However, I cannot go past an image displaying the effect of mass on the very fabric of our universe.
NASA, ESA, J. Richard (Center for Astronomical Research/Observatory of Lyon, France), and J.-P. Kneib (Astrophysical Laboratory of Marseille, France), CC BY
With General Relativity, Einstein postulated that matter changes space-time and can bend light. A fascinating consequence is that very massive objects in the universe will magnify light from distant galaxies, in essence becoming cosmic telescopes. With the Hubble Space Telescope, we have now harnessed this powerful ability to peer back in time to search for the first galaxies. This Hubble image shows a hive of galaxies that have enough mass to bend light from very distant galaxies into bright arcs. My first project as a graduate student was to study these remarkable objects, and I still use the Hubble today to explore the nature of galaxies across cosmic time.
To the human eye, the night sky in this image is completely empty: a tiny region no thicker than a grain of rice held at arm’s length. The Hubble Space Telescope was pointed at this region for 12 full days, letting light hit the detectors and slowly, one by one, the galaxies appeared, until the entire image was filled with 10,000 galaxies stretching all the way across the universe. The most distant are tiny red dots tens of billions of light years away, dating back to a time just a few hundred million years after the Big Bang.
The scientific value of this single image is enormous. It revolutionized our theories both of how early galaxies could form and how rapidly they could grow. The history of our universe, as well as the rich variety of galaxy shapes and sizes, is contained in a single image. To me, what truly makes this picture extraordinary is that it gives a glimpse into the scale of our visible universe. So many galaxies in so small an area implies that there are 100 thousand million galaxies across the entire night sky. One entire galaxy for every star in our Milky Way! (A back-of-envelope version of this extrapolation appears at the end of this article.)
NASA, ESA, and J. Lotz, M. Mountain, A. Koekemoer, and the HFF Team (STScI), CC BY
This is what Hubble is all about. A single, awe-inspiring view can unmask so much about our Universe: its distant past, its ongoing assembly, and even the fundamental physical laws that tie it all together. We’re peering through the heart of a swarming cluster of galaxies. Those glowing white balls are giant galaxies that dominate the cluster center. Look closely and you’ll see diffuse shreds of white light being ripped off of them! The cluster is acting like a gravitational blender, churning many individual galaxies into a single cloud of stars. But the cluster itself is just the first chapter in the cosmic story being revealed here. See those faint blue rings and arcs? Those are the distorted images of other galaxies that sit far in the distance. The immense gravity of the cluster causes the space-time around it to warp. As light from distant galaxies passes by, it’s forced to bend into weird shapes, the way a warped magnifying glass would distort and brighten our view of a faint candle. Leveraging our understanding of Einstein’s General Relativity, Hubble is using the cluster as a gravitational telescope, allowing us to see farther and fainter than ever before possible. We are looking far back in time to see galaxies as they were more than 13 billion years ago! As a theorist, I want to understand the full life cycle of galaxies – how they are born (small, blue, bursting with new stars), how they grow, and eventually how they die (big, red, fading with the light of ancient stars). Hubble allows us to connect these stages. Some of the faintest, most distant galaxies in this image are destined to become monster galaxies like those glowing white in the foreground. We’re seeing the distant past and the present in a single glorious picture.
Tanya Hill, Honorary Fellow of the University of Melbourne and Senior Curator (Astronomy), Museum Victoria. This article was originally published on The Conversation. Read the original article.
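The extrapolation from one tiny deep field to the whole sky is simple arithmetic, and worth making explicit. A minimal Python sketch, assuming a survey footprint of roughly 11 square arcminutes (about the ACS field of view; that footprint is our assumption, while the 10,000-galaxy count is from the article):

    # Back-of-envelope check of the "100 thousand million galaxies" figure.
    galaxies_in_field = 10_000                 # counted in the deep image (from the article)
    field_arcmin2 = 11.0                       # assumed footprint, ~ the ACS field of view
    full_sky_deg2 = 41_253                     # total solid angle of the sky in square degrees
    full_sky_arcmin2 = full_sky_deg2 * 3600    # 60 x 60 square arcminutes per square degree

    n_patches = full_sky_arcmin2 / field_arcmin2
    total = galaxies_in_field * n_patches
    print(f"{total:.1e} galaxies over the whole sky")   # ~1.4e11, i.e. about 10^11

The result lands within a factor of a few of the article's 100 thousand million, which is all such an estimate can promise: it assumes the deep field is a typical patch of sky.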


News Article | December 7, 2016
Site: www.marketwired.com

WINNIPEG, MB--(Marketwired - December 07, 2016) - More than 300 future female hockey stars will hit the ice with Olympian, World Champion and Canadian women's hockey star Cassie Campbell-Pascall for the 6th Scotiabank Girls HockeyFest in Winnipeg, on Sunday, December 11, from 10:00 a.m. to 6:05 p.m. at the MTS Iceplex. Originally created in partnership with the Ottawa Senators in 2004, Scotiabank Girls HockeyFest has provided female minor hockey players with positive experiences through the game of hockey for over a decade. "Scotiabank Girls HockeyFest is an incredible opportunity to bring together young female hockey players to improve their skills, build their confidence and celebrate Canada's game," said Cassie Campbell-Pascall. "I know first-hand how valuable it is to receive support from the hockey community as a young athlete. The support I received helped me get to where I am today, and I'm thrilled to be able to give back through Scotiabank Girls HockeyFest -- and to celebrate Canada's game with these girls who share my love for the game." Scotiabank Girls HockeyFest is a day-long on- and off-ice training series designed to engage and encourage young girls across Canada to dream big and reach their full potential. The event is available at no cost to girls aged 7-14. In partnership with Hockey Manitoba, the program includes on- and off-ice training from Olympian/hockey great Cassie Campbell-Pascall and members of the University of Manitoba Bisons Varsity Women's Hockey Team. The day also includes a session explaining the importance of healthy and proper nutrition, and a keynote address from Cassie Campbell-Pascall on the importance of teamwork and giving back to the community. "We are delighted to present Scotiabank Girls HockeyFest, encouraging young athletes to reach their full potential through teamwork, on-ice drills and inspiration from hockey heroes," said Martin MacCool, District Vice President at Scotiabank. "Girls HockeyFest in Winnipeg will allow us to share our passion for the sport and to inspire a new generation of players to achieve their hockey dreams." Scotiabank has a long tradition of supporting Canadian hockey at all levels -- from local teams and minor hockey associations to professional players and leagues. Scotiabank is proud to support over 8,000 community hockey teams across Canada through the Scotiabank Community Hockey Sponsorship Program. To learn more about the event and Scotiabank's commitment to kids' community hockey, visit www.scotiabankgirlshockeyfest.com.
About Scotiabank
Scotiabank is the Official Bank of the NHL®, NHL Alumni™, CWHL, Vancouver Canucks®, Winnipeg Jets®, Toronto Maple Leafs®, Ottawa Senators®, Edmonton Oilers® and the Calgary Flames® whose home arena is the Scotiabank Saddledome. The Bank also supports the Montreal Canadiens®. Scotiabank's Community Hockey Sponsorship Program supports over 8,000 minor hockey teams in communities across Canada. To find out more about Scotiabank's hockey programs, please visit www.scotiabank.com/the5thseason.


Among older women residing in nursing homes, administration of cranberry capsules compared with placebo resulted in no significant difference in presence of bacteriuria plus pyuria (presence of bacteria and white blood cells in the urine, a sign of urinary tract infection [UTI]), or in the number of episodes of UTIs over 1 year, according to a study published online by JAMA. The study is being released to coincide with its presentation at IDWeek 2016. Urinary tract infection is the most commonly diagnosed infection among nursing home residents. Bacteriuria is prevalent in 25 percent to 50 percent of women living in nursing homes, and pyuria is present in 90 percent of those with bacteriuria. Cranberry capsules are an understudied, nonantimicrobial prevention strategy used in this population. Manisha Juthani-Mehta, M.D., of the Yale School of Medicine, New Haven, Conn., and colleagues randomly assigned 185 women (average age, 86 years; with or without bacteriuria plus pyuria at study entry) residing in nursing homes to two oral cranberry capsules, each capsule containing 36 mg of the active ingredient proanthocyanidin (i.e., 72 mg total, equivalent to 20 ounces of cranberry juice) or placebo administered once a day. Of the 185 study participants (31 percent with bacteriuria plus pyuria at study entry), 147 completed the study. Overall adherence was 80 percent. After adjustment for various factors, there was no significant difference in the presence of bacteriuria plus pyuria between the treatment group vs the control group (29.1 percent vs 29.0 percent). (A rough, unadjusted version of this comparison is sketched after this item.) There were also no significant differences in number of symptomatic UTIs (10 episodes in the treatment group vs 12 in the control group), rates of death (17 vs 16 deaths), hospitalization, antibiotics administered for suspected UTIs, or total antimicrobial utilization. "Many studies of cranberry products have been conducted over several decades with conflicting evidence of its utility for UTI prevention. The results have led to the recommendation that cranberry products do not prevent UTI overall but may be effective in older women. This trial did not show a benefit of cranberry capsules in terms of a lower presence of bacteriuria plus pyuria among older women living in nursing homes," the authors write. Editor's Note: Please see the article for additional information, including other authors, author contributions and affiliations, financial disclosures, funding and support, etc.
Editorial: Cranberry for Prevention of Urinary Tract Infection? - Time to Move on
"The continuing promotion of cranberry use to prevent recurrent UTI in the popular press or online advice seems inconsistent with the reality of repeated negative studies or positive studies compromised by methodological shortcomings. Any continued promotion of the use of cranberry products seems to go beyond available scientific evidence and rational reasoning," writes Lindsay E. Nicolle, M.D., F.R.C.P.C., of the University of Manitoba, Winnipeg, Manitoba, Canada, in an accompanying editorial. "Some of this conviction is likely an interest of individuals or groups to promote the use of natural health products for clinical benefits, allowing avoidance of medical interventions and, potentially, giving women who experience recurrent UTI an element of personal control in managing their problem.
The current emphasis on antimicrobial stewardship and limiting antimicrobial use whenever possible also may have some influence in the continued endorsement of cranberry juice or tablets as a nonantimicrobial strategy for management of UTI." "Recurrent UTI is a common problem that is distressing to patients and, because it is so frequent, costly for the health care system. It is time to identify other potential approaches for management. This certainly must include a wiser use of antimicrobial therapy for syndromes of recurrent UTI in women in long-term care facilities. Other possible interventions to explore in this and other populations may include, among other approaches, adherence inhibitors or immunologic interventions. Intellectual discussions and clinical trial activity should be redirected to identify and evaluate other innovative antimicrobial and nonantimicrobial approaches. It is time to move on from cranberries." Editor's Note: The author has completed and submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest and none were reported.
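As a rough illustration of why 29.1 percent versus 29.0 percent amounts to "no significant difference", here is an unadjusted two-proportion z-test in Python. The group sizes are assumed (the 185 participants split roughly evenly); the published analysis was adjusted for covariates, so this is a sketch, not the study's method:

    import math

    p1, n1 = 0.291, 92    # cranberry group: bacteriuria plus pyuria (n assumed)
    p2, n2 = 0.290, 93    # placebo group (n assumed)

    p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)                 # pooled proportion
    se = math.sqrt(p_pool * (1 - p_pool) * (1/n1 + 1/n2))    # standard error of the difference
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    print(f"z = {z:.2f}, p = {p_value:.2f}")                 # z ~ 0.01, p ~ 0.99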


Reza-Paul S.,University of Manitoba | Mwangi P.,Bar Hostess Empowerment and Support Program | Butler J.,UN Population Fund
The Lancet | Year: 2015

A community empowerment-based response to HIV is a process by which sex workers take collective ownership of programmes to achieve the most effective HIV outcomes and address social and structural barriers to their overall health and human rights. Community empowerment has increasingly gained recognition as a key approach for addressing HIV in sex workers, with its focus on addressing the broad context within which the heightened risk for infection takes place in these individuals. However, large-scale implementation of community empowerment-based approaches has been scarce. We undertook a comprehensive review of community empowerment approaches for addressing HIV in sex workers. Within this effort, we did a systematic review and meta-analysis of the effectiveness of community empowerment in sex workers in low-income and middle-income countries. We found that community empowerment-based approaches to addressing HIV among sex workers were significantly associated with reductions in HIV and other sexually transmitted infections, and with increases in consistent condom use with all clients. Despite the promise of a community-empowerment approach, we identified formidable structural barriers to implementation and scale-up at various levels. These barriers include regressive international discourses and funding constraints; national laws criminalising sex work; and intersecting social stigmas, discrimination, and violence. The evidence base for community empowerment in sex workers needs to be strengthened and diversified, including its role in aiding access to, and uptake of, combination interventions for HIV prevention. Furthermore, social and political change is needed regarding the recognition of sex work as work, both globally and locally, to encourage increased support for community empowerment responses to HIV. © 2015 Elsevier Ltd.
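The pooling at the heart of such a meta-analysis is mechanical and easy to sketch. Below is a minimal DerSimonian-Laird random-effects calculation in Python; the odds ratios and variances are invented placeholders, not the studies from this review:

    import numpy as np

    # Hypothetical per-study odds ratios (<1 favours the intervention) and
    # variances of the log odds ratios -- placeholders only.
    log_or = np.log(np.array([0.68, 0.74, 0.55, 0.81]))
    var = np.array([0.04, 0.06, 0.09, 0.05])

    w = 1.0 / var                                   # inverse-variance (fixed) weights
    fixed = np.sum(w * log_or) / np.sum(w)
    q = np.sum(w * (log_or - fixed) ** 2)           # Cochran's Q heterogeneity statistic
    k = len(log_or)
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

    w_re = 1.0 / (var + tau2)                       # random-effects weights
    pooled = np.sum(w_re * log_or) / np.sum(w_re)
    se = 1.0 / np.sqrt(np.sum(w_re))
    lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
    print(f"pooled OR {np.exp(pooled):.2f} (95% CI {np.exp(lo):.2f}-{np.exp(hi):.2f})")

The between-study variance tau2 is what distinguishes the random-effects pooled estimate from a simple inverse-variance average.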


Dr. Neumeister has previously served as President of the Plastic Surgery Foundation and the Plastic Surgery Research Council, and is a Leader in Translational Regenerative Medicine
SOUTH PLAINFIELD, NJ--(Marketwired - Dec 9, 2016) - Majesco Entertainment, Inc. (NASDAQ: COOL) ("Majesco"), following the announcement that it had signed a definitive merger agreement with PolarityTE, Inc. ("Polarity") (www.polarityte.com), announced it has appointed Michael W. Neumeister, MD, FRCSC, FACS as Chief Medical Officer (http://www.siumed.edu/surgery/plastics/cvs/neumeister_cv.html). Dr. Neumeister was formerly President of the Plastic Surgery Foundation and Plastic Surgery Research Council and is a Leader in Regenerative Medicine (http://www.siumed.edu/surgery/plastics/cvs/MWN%20CV.pdf). Following satisfaction of the conditions to closing, including approval of stockholders, Polarity will be acquired by Majesco and will operate as a wholly-owned subsidiary of Majesco, which will change its name to Polarity in connection with the contemplated transaction. "Polarity seeks to alter the paradigms of regenerative medicine and patient-specific tissue engineering for the future. It is with these ambitious goals in mind that I am pleased to announce that world-renowned plastic and reconstructive surgeon Dr. Michael Neumeister has agreed to join as Chief Medical Officer of PolarityTE™. Beyond his tremendous expertise in some of the most complex reconstructive procedures performed, he has remained a leader and mentor in the field and an innovator in pragmatic translational regenerative medicine," said Chief Executive Officer and Chairman Dr. Denver Lough. "I am extremely excited to join the Polarity Team and help transform the landscape of translational tissue engineering and reconstructive surgery. I believe I can add tremendous value with a large network of clinical thought leaders and a practical viewpoint on the application of the technology. Dr. Lough and I have a close working relationship and I consider him to be one of the most gifted and brilliant innovators in regenerative medicine, and it is an honor to be brought on to his team in this role. I have no doubt he will change the field as we know it," said Dr. Neumeister.
About PolarityTE
PolarityTE, Inc. is the owner of a novel regenerative medicine and tissue engineering platform developed and patented by Denver Lough MD, PhD. This radical and proprietary technology employs a patient's own cells for the healing of full-thickness, functionally-polarized tissues. If clinically successful, the PolarityTE platform will be able to provide medical professionals with a truly new paradigm in wound healing and reconstructive surgery by utilizing a patient's own tissue substrates for the regeneration of skin, bone, muscle, cartilage, fat, blood vessels and nerves. It is because PolarityTE uses a natural and biologically sound platform technology, which is readily adaptable to a wide spectrum of organ and tissue systems, that the company and its world-renowned clinical advisory board are poised to drastically change the field and future of translational regenerative medicine. More info can be found online at www.polarityte.com. Welcome to the Shift™.
About Michael W. Neumeister, MD, FRCSC, FACS
Professor & Chairman - Department of Surgery
The Elvin G. Zook Endowed Chair in Plastic Surgery
Microsurgery/Research Lab Director
Director: Memorial Medical Center Regional Burn Unit
Director: Memorial Medical Center Wound Center
Southern Illinois University School of Medicine, Springfield, IL
Dr. Neumeister is Professor & Chairman of the Department of Surgery and holds The Elvin G. Zook Endowed Chair in Plastic Surgery at Southern Illinois University School of Medicine in Springfield, IL. He received his medical degree from the University of Toronto and previously completed a degree in physiology and pharmacology at the University of Western Ontario. Dr. Neumeister began his residency in general surgery at Dalhousie University in Halifax, Nova Scotia, and went on to complete his plastic surgery residency at the University of Manitoba. He continued his training as a microsurgery fellow at Harvard University's Brigham & Women's Hospital in Boston and completed a one-year hand and microsurgery fellowship at Southern Illinois University School of Medicine. Dr. Neumeister is board certified in plastic surgery by the Royal College of Surgeons of Canada and the American Board of Plastic Surgery. He has also received his Certificate in Surgery of The Hand (SOTH). Dr. Neumeister is the Editor in Chief of the official AAHS journal HAND. He is the past President of the American Society of Reconstructive Microsurgery, American Association for Hand Surgery, The Plastic Surgery Foundation (The Research Body of The American Society of Plastic Surgeons), Plastic Surgery Research Council, and the Midwest Association of Plastic Surgeons. His memberships include the American Society of Plastic Surgeons, Plastic Surgery Foundation, Plastic Surgery Research Council, American Association for Hand Surgery, American Society of Reconstructive Microsurgery, American Society for Surgery of the Hand, American Burn Association, American Council of Academic Plastic Surgeons and the American Association of Plastic Surgeons, where he also serves as an elected official on several of their committees. Dr. Neumeister has received awards for presentations given regionally, nationally and internationally and has over 150 published manuscripts and book chapters. Dr. Neumeister's research interests include allotransplantation, tissue engineering, the role of stem cells in reconstruction, ischemia reperfusion, peripheral nerve, and burn modulation.
Forward-Looking Statements
Certain statements contained in this release are "forward-looking statements" within the meaning of the Private Securities Litigation Reform Act of 1995. Forward-looking statements contained in this release relate to, among other things, the Company's ongoing compliance with the requirements of The NASDAQ Stock Market and the Company's ability to maintain the closing bid price requirements of The NASDAQ Stock Market on a post reverse split basis. They are generally identified by words such as "believes," "may," "expects," "anticipates," "should" and similar expressions. Readers should not place undue reliance on such forward-looking statements, which are based upon the Company's beliefs and assumptions as of the date of this release. The Company's actual results could differ materially due to risk factors and other items described in more detail in the "Risk Factors" section of the Company's Annual Reports filed with the SEC (copies of which may be obtained at www.sec.gov). Subsequent events and developments may cause these forward-looking statements to change.
The Company specifically disclaims any obligation or intention to update or revise these forward-looking statements as a result of changed events or circumstances that occur after the date of this release, except as required by applicable law.


WINNIPEG, MB--(Marketwired - November 14, 2016) - 3D Signatures Inc. (TSX VENTURE: DXD) (the "Company" or "3DS") is pleased to introduce its Clinical and Scientific Advisory Board (CSAB), comprised of world-renowned physicians and researchers. The CSAB, made up of external experts, will serve as a resource to Dr. Sabine Mai, Director and Chair of 3DS' Clinical and Scientific Advisory Board, and to 3DS' CEO, Jason Flowerday. The CSAB will help guide the clinical development of 3DS' proprietary genomic analysis software from research right through to validation and regulatory approval. The Company is currently focused on Prostate Cancer, Hodgkin's Lymphoma, Multiple Myeloma, and Alzheimer's disease. Dr. Sabine Mai is a tenured Professor of Physiology and Pathophysiology, Biochemistry and Medical Genetics, Human Anatomy and Cell Science, University of Manitoba. She is also Director of The Genomic Centre for Cancer Research and Diagnosis (GCCRD) at the University of Manitoba. Dr. Mai is an internationally known researcher who has more than one hundred publications related to research on Genomic Instability and the 3D nuclear organization in cancer and Alzheimer's disease. Most recently she has contributed to a library of patents related to her work on 3D Genomic Analysis. She is the recipient of numerous academic awards, including the Braidwood Jackson Memorial Award; the Dr. Saul Highman Memorial Award; the Rh Award (Basic Science); and the J&J Cognition Challenge (2013). She was recognized in 2015 as one of the Top 100: Canada's Most Powerful Women and has recently accepted an Editorial Board Member position with Genes, Chromosomes and Cancer, a high-profile peer-reviewed academic journal. Dr. Anderson is the Kraft Family Professor of Medicine at Harvard Medical School, as well as Director of the Lebow Institute for Myeloma Therapeutics and Jerome Lipper Multiple Myeloma Center at Dana-Farber Cancer Institute. He is a Doris Duke Distinguished Clinical Research Scientist and American Cancer Society Clinical Research Professor. After graduating from Johns Hopkins Medical School, he trained in internal medicine at Johns Hopkins Hospital, and then completed hematology, medical oncology, and tumor immunology training at the Dana-Farber Cancer Institute. Over the last three decades, he has focused his laboratory and clinical research studies on multiple myeloma. He has developed laboratory and animal models of the tumor in its microenvironment, which have allowed for both the identification of novel targets and the validation of novel targeted therapies, and has then rapidly translated these studies to clinical trials, culminating in FDA approval of novel targeted therapies. His paradigm for identifying and validating targets in the tumor cell and its milieu has transformed myeloma therapy and markedly improved patient outcomes. Dr. Klotz is internationally recognized for his contributions to the treatment of prostate cancer, notably for pioneering the adoption of Active Surveillance as a standard aspect of patient care. Dr. Klotz obtained his medical degree and residency training from the University of Toronto, with a special fellowship in uro-oncology and tumour biology at Memorial Sloan Kettering Cancer Centre, New York. He is a widely published uro-oncologist who serves on the board or heads many medical/scientific organizations. He is a Professor, Department of Surgery, University of Toronto, past Chief of Urology, Sunnybrook Health Sciences Centre, Toronto, and Chairman, World Uro-Oncology Federation.
Dr. Klotz was awarded the Order of Canada in 2014 for his contribution to prostate cancer treatment. Dr. Knecht established himself as a prominent haematologist through his ground-breaking translational research on lymphoma biology. His current focus is on the molecular events leading to the transition from the mononuclear Hodgkin cell to the multinuclear Reed-Sternberg cell and the impact of 3D nuclear telomere organization on this transformation. Dr. Knecht received his medical degree from the University of Zurich, Switzerland, with post-graduate work under both Maxime Seligmann (Haematology) and Karl Lennert (Haematopathology) in Paris and Kiel, respectively. Dr. Knecht is currently a Professor of Medicine and Chief, Division of Haematology, at McGill University and the Jewish General Hospital, Montreal. Dr. Drachenberg is a urologic oncologist and researcher and a strong proponent of Active Surveillance for prostate cancer patients. Dr. Drachenberg attended medical school at the University of British Columbia and completed his urology residency at Dalhousie University. He is an American Foundation of Urology Scholar with fellowship training in urologic oncology at the National Cancer Institute in Bethesda, Maryland. He founded the laparoscopic urology program and the prostate brachytherapy, cryotherapy, and HIFU programs at the University of Manitoba, where he works as assistant professor of surgery, director of research for the Manitoba Prostate Center and Section of Urology, and Chair of the Genito-Urinary disease site group, CancerCare Manitoba. Dr. Kotb completed his medical residency training in Paris, France, and then became a staff member at Paris XI University. He joined the Hematology-Oncology team at Sherbrooke University (QC, Canada) in 2005 as an Assistant, then Associate, Professor. He also worked as Director of Hematology undergraduate education, Head of the supra-regional team of Hematological Neoplasia and Head of the Institutional Oncology Quality Sub-committee. In late 2011, he moved to British Columbia to work at the BC Cancer Agency as an Oncologist/Hematologist, Associate Professor at the University of British Columbia and affiliate Professor at the University of Victoria. He joined the team at CancerCare Manitoba in September 2014. His practice and research activity is focused on lymphoid neoplasia, primarily myeloma and lymphoma. Dr. Cremer is an internationally recognized scientist specializing in the study of nuclear architecture. He is one of the pioneers of interphase cytogenetics and comparative genomic hybridization (CGH). These methods have become widely used tools for cytogenetic analyses of chromosomal imbalances. He has been a corresponding member of the Heidelberg Academy for Sciences and Humanities since 2000, a member of Germany's National Academy of Sciences Leopoldina since 2006, and an honorary member of both the European Cytogenetics Association (ECA) and the German Society of Human Genetics since 2011, as well as the recipient of that society's Medal of Honor. Dr. Cremer is an independent expert to 3DS. "The newly formed CSAB will be invaluable in guiding our clinical programs," said Dr. Sabine Mai, Company Director and Chair of 3DS' Clinical and Scientific Advisory Board.
"Each member is a distinguished leader in their field and can bring insights that will help us achieve our objectives: to validate and secure the approval of accurate and minimally invasive first-in-class biomarkers that allow clinicians to personalize treatments and improve outcomes for cancer and Alzheimer's disease patients." The Company recently announced participation in a major clinical trial for prostate cancer diagnosis and management known as PRECISE (PRostate Evaluation for Clinically Important disease MRI vs Standard Evaluation procedures). The trial marks the Company's first step toward validation and approval of clinical risk assessment tests for prostate cancer. It is currently being tested as a new blood-based biomarker to accurately stratify Prostate Cancer patients into risk groups. Such a tool does not currently exist for prostate cancer patients. For more information about the PRECISE Trial and Prostate Cancer Canada, please visit their website at http://www.prostatecancer.ca. 3DS (TSX VENTURE: DXD) is a personalized medicine company with a proprietary software platform based on the three-dimensional analysis chromosomal signatures. The technology is well developed and supported by 16 clinical studies on over 1,500 patients on 13 different cancers and Alzheimer's disease. Depending on the desired application, the technology can measure the stage of disease, rate of progression of disease, drug efficacy, and drug toxicity. The technology is designed to predict the course of disease and to personalize treatment for the individual patient. For more information, visit the Company's new website at http://www.3dsignatures.com. This news release includes forward-looking statements that are subject to risks and uncertainties. Forward-looking statements involve known and unknown risks, uncertainties, and other factors that could cause the actual results of the Company to be materially different from the historical results or from any future results expressed or implied by such forward-looking statements. All statements within, other than statements of historical fact, are to be considered forward looking. In particular, the Company's statements that it expects to benefit greatly from its association with the individuals named in this news release is forward-looking information. Although 3DS believes the expectations expressed in such forward-looking statements are based on reasonable assumptions, such statements are not guarantees of future performance and actual results or developments may differ materially from those in forward-looking statements. Risk factors that could cause actual results or outcomes to differ materially from the results expressed or implied by forward-looking information include, among other things: market demand; technological changes that could impact the Company's existing products or the Company's ability to develop and commercialize future products; competition; existing governmental legislation and regulations and changes in, or the failure to comply with, governmental legislation and regulations; the ability to manage operating expenses, which may adversely affect the Company's financial condition; the Company's ability to successfully maintain and enforce its intellectual property rights and defend third-party claims of infringement of their intellectual property rights; adverse results or unexpected delays in clinical trials; changes in laws, general economic and business conditions; and changes in the regulatory regime. 
There can be no assurances that such statements will prove accurate and, therefore, readers are advised to rely on their own evaluation of such uncertainties. We do not assume any obligation to update any forward-looking statements. Neither the TSX Venture Exchange nor its Regulation Service Provider (as that term is defined in the policies of the TSX Venture Exchange) accepts responsibility for the adequacy or accuracy of this release.


News Article | October 28, 2016
Site: www.eurekalert.org

New Rochelle, NY, October 28, 2016--Researchers describe the first evidence linking prolactin inducible protein (PIP) to the immune system's ability to recognize and destroy foreign cells, such as tumor cells. New research in PIP-deficient mice that demonstrates the role of PIP in cell-mediated immunity and suggests that this immune regulatory function may be protective against breast cancer is presented in DNA and Cell Biology, a peer-reviewed journal from Mary Ann Liebert, Inc., publishers. The article is available free on the DNA and Cell Biology website until November 18, 2016. Coauthors Olivia Ihedioha, Robert Shiu, Jude Uzonna, and Yvonne Myal, University of Manitoba, Winnipeg, Canada, describe the potential clinical implications of these findings, in which PIP could represent an effective new target for the development of novel immunotherapeutic agents. The researchers review their recent studies of PIP, known as a biomarker of mammary differentiation, in the article entitled "Prolactin-Inducible Protein: From Breast Cancer Biomarker to Immune Modulator--Novel Insights from Knockout Mice." "Breast cancers are among the most common tumors. PIP was observed to be selectively expressed by these cells," says Carol Shoshkes Reiss, PhD, Editor-in-Chief of DNA and Cell Biology and Professor, Departments of Biology and Neural Science, and Global Public Health at New York University, NY. "The work from the Myal lab in this paper is exciting because of the immunoregulatory activity they describe. I hope it will lead to novel therapeutic approaches to this devastating disease." DNA and Cell Biology is the trusted source for authoritative, peer-reviewed reporting on the latest research in the field of molecular biology. By combining mechanistic and clinical studies from multiple systems in a single journal, DNA and Cell Biology facilitates communication among biological sub-disciplines. Coverage includes gene structure, function, and regulation, molecular medicine, cellular organelles, protein biosynthesis and degradation, and cell-autonomous inflammation and host cell response to infection. Complete tables of content and a sample issue may be viewed on the DNA and Cell Biology website. Mary Ann Liebert, Inc., publishers is a privately held, fully integrated media company known for establishing authoritative peer-reviewed journals in many promising areas of science and biomedical research, including Human Gene Therapy, Antioxidants and Redox Signaling, and AIDS Research and Human Retroviruses. Its biotechnology trade magazine, GEN (Genetic Engineering & Biotechnology News), was the first in its field and is today the industry's most widely read publication worldwide. A complete list of the firm's 80 journals, books, and newsmagazines is available on the Mary Ann Liebert, Inc., publishers website.


SASKATOON, Saskatchewan & WINNIPEG, Manitoba--(BUSINESS WIRE)--As one of the major symptoms experienced by patients with Multiple Sclerosis (MS), neuropathic pain can be extremely debilitating. A leading neuro-immunology team led by Dr. Michael Namaka at the University of Manitoba in Winnipeg, Manitoba, is looking to determine the analgesic efficacy of two of CanniMed Therapeutics Inc.’s (TSX: CMED) plant-derived cannabinoid oil extracts using a rodent model of MS-induced neuropathic pain. The study, entitled “Identifying the molecular mechanisms involved in suppressing multiple sclerosis induced neuropathic pain following cannabinoid treatment in an animal model of multiple sclerosis (MS)”, will address the scientific merit of using medical cannabis in alleviating neuropathic pain. Study Specifics: Experimental Autoimmune Encephalomyelitis (EAE) is a well-known animal model of MS. Dr. Namaka has several recent publications that demonstrate that EAE animals develop neuropathic pain following an immune system-mediated insult such as an MS attack.1,2,3,4,5,6,7 In his publications, Dr. Namaka and others have shown that key biological targets such as TNFα and CX3CL1 increase during EAE-induced neuropathic pain. As such, EAE animals will receive analgesic treatment intervention with one of two cannabinoid oil extract products, at an oral dose comparable to that used in humans, to see if this medication can reduce the expression of these pathological molecules that drive chronic pain (a schematic of the standard expression fold-change calculation appears after the reference list below). “This research endeavour will be the first pre-clinical scientific validation to identify the direct molecular mechanisms of action of herbal medical cannabis oils and their direct potential impact on neuropathic pain for MS patients,” said Dr. Namaka, B.Sc. Pharm, M.Sc. Pharm, PhD; EPP, Associate Professor, College of Pharmacy, Rady Faculty of Health Sciences at the University of Manitoba. “With CanniMed’s ability to supply consistent, quality-controlled and pharmaceutical-grade medical cannabis oils for this trial, we are confident that our outcomes will be standardized and provide us with direction on how cannabis oil will also respond in the patient population.” This trial involves the investigation of two CanniMed® Oil products (CanniMed® 10:10; CanniMed® 1:20) to identify whether THC (tetrahydrocannabinol) and CBD (cannabidiol) together, or CBD alone, has an impact on MS-related neuropathic pain. The future goal of this pre-clinical study is to use these validated scientific findings to identify the lead cannabinoid oil extract that will move forward to a clinical trial involving human subjects with chronic pain. “CanniMed is committed to working with leading physicians and researchers across Canada and around the world in the effort of identifying the potential impact of medical cannabis in supporting symptom management of a number of medical conditions,” said Brent Zettl, President and CEO, CanniMed Therapeutics Inc. “Research endeavours like this one will build upon the expanding library of pre-clinical and clinical research in order to demonstrate to patients, physicians, regulatory groups and governments that medical cannabis is an important therapeutic option.” This study is currently underway, and Dr. Namaka has indicated that the early preliminary results provide compelling scientific evidence for the specific molecular mechanisms by which the extracts exert their beneficial effects to suppress pain. These exciting results are expected to be submitted for publication within the next eight months.
CanniMed Therapeutics Inc. has invested $80,000 CDN in pre-clinical research funding to the University of Manitoba under the direction of Dr. Namaka to explore the molecular mechanisms that are responsible for the beneficial effects of cannabinoids in the treatment of multiple sclerosis-induced neuropathic pain. The Company is a Canadian-based, international plant biopharmaceutical company and a leader in the Canadian medical cannabis industry, with 15 years of pharmaceutical cannabis cultivation experience, state-of-the-art, GMP-compliant plant production processes and world class research and development platforms with a wide range of pharmaceutical-grade cannabis products. In addition, the Company has an active plant biotechnology research and product development program focused on the production of plant-based materials for pharmaceutical, agricultural and environmental applications. CanniMed Ltd., a wholly-owned subsidiary of the Company, was the first producer to be licensed under the Marihuana for Medical Purposes Regulations, the predecessor to the current Access to Cannabis for Medical Purposes Regulations. Prairie Plant Systems Inc., a wholly-owned subsidiary of the Company, was the sole supplier to Health Canada under the former medical marijuana system for 13 years, and has been producing safe and consistent medical marijuana for thousands of Canadian patients, with no incident of diversion. This news release contains forward-looking statements within the meaning of applicable securities laws. All statements that are not historical facts, including without limitation, the pre-clinical scientific validation of molecular mechanisms of action and statements regarding future estimates, plans, programs, forecasts, projections, objectives, assumptions, expectations or beliefs of future performance, are “forward-looking statements”. Forward-looking statements can be identified by the use of words such as “plans”, “expects” or “does not expect”, “is expected”, “estimates”, “intends”, “anticipates” or “does not anticipate”, or “believes”, or variations of such words and phrases or state that certain actions, events or results “may”, “could”, “would”, “might” or “will” be taken, occur or be achieved. Forward-looking statements are based on assumptions and involve known and unknown risks, uncertainties and other factors which may cause the actual results, performance or achievements of CanniMed Therapeutics Inc. to be materially different from any future results, performance or achievements expressed or implied by the forward-looking statements, including the risk that the results of the study will not be conclusive and the risks described in CanniMed Therapeutics Inc.’s documents filed with applicable Canadian securities regulatory authorities which may be viewed at sedar.com. The forward-looking statements included in this news release are made as of the date of this news release. CanniMed Therapeutics Inc. does not undertake to publicly update such forward-looking statements to reflect new information, subsequent events or otherwise, unless required by applicable securities legislation. 1 Acosta, C., Cortes, C., Altaweel, K., MacPhee, H., Hoogervorst, B., Bhullar, H., MacNeil, B., Mahmoud Torabi, Burczynski, F., Namaka, M. (2015). Immune System Induction of Nerve Growth Factor in an Animal Model of Multiple Sclerosis: Implications in Re-myelination and Myelin repair. CNS & Neurological Disorders Drug Targets, 14 (8), 1069 - 78. 
2 Khorshid Ahmad, T., Acosta, C., Cortes, C., Lakowski, T.M., Namaka, M. (2015). Transcriptional Regulation of Brain Derived Neurotrophic Factor (BDNF) by Methyl CpG Binding Protein 2 (MeCP2): A Novel Mechanism for Re-myelination and/or Myelin Repair Involved in the Treatment of Multiple Sclerosis (MS). Molecular Neurobiology, 53 (2), 1092 - 1107. 3 Turcotte, D. A., Doupe, M., Torabi, M., Gomori, A. J., Ethans, K., Esfahani, F., Galloway, K., Namaka, M. (2015). Nabilone as an Adjunctive to Gabapentin for Multiple Sclerosis - Induced Neuropathic Pain: A Randomized Controlled Trial. Pain Medicine, 16 (1), 149 - 59. 4 Zhu, W., Acosta, C., MacNeil, B.J., Cortes, C., Intrater, H., Gong, Y., Namaka, M. (2013). Elevated Expression of Fractalkine (CX3CL1) and Fractalkine receptor (CX3CR1) in the Dorsal Root Ganglia (DRG) and Spinal Cord (SC) in Experimental Autoimmune Encephalomyelitis (EAE): Implications in Multiple Sclerosis (MS) - Induced Neuropathic Pain (NPP). BioMed Research International, 2013 (September), doi: 10.1155/2013/480702. 5 Begum, F., Zhu, W., Cortes, C., MacNeil, B. J., Namaka, M. P. (2013). Elevation of Tumor Necrosis Factor Alpha in Dorsal Root Ganglia and Spinal Cord is Associated with Neuroimmune Modulation of Pain in an Animal Model of Multiple Sclerosis. Journal of Neuroimmune Pharmacology, 8 (3), 677 - 90. 6 Zhu, W., Frost, E. E., Begum, F., Vora, P., Au, K., Gong, Y., MacNeil, B., Pillai, P., Namaka, M. (2012). The Role of Dorsal Root Ganglia Activation and Brain - Derived Neurotrophic Factor in Multiple Sclerosis. Journal of Cellular and Molecular Medicine, 16 (8), 1856 - 65. 7 Turcotte, D., Le Dorze, J.A., Esfahani, F., Frost, E., Gomori, A., Namaka, M. (2010). Examining the Roles of Cannabinoids in Pain and Other Therapeutic Indications. Expert Opinion on Pharmacotherapy, 11 (1), 17 - 31.
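The release does not specify how expression of TNFα and CX3CL1 will be quantified. A standard choice for relative quantification by qPCR is the 2^-ΔΔCt fold-change calculation sketched below; the cycle-threshold values and the helper function are hypothetical, included only to make the logic concrete:

    import numpy as np

    def fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
        """2**-(ddCt): expression of a target gene in treated vs control animals,
        normalized to a housekeeping (reference) gene. Hypothetical helper."""
        d_ct_treated = np.mean(ct_target_treated) - np.mean(ct_ref_treated)
        d_ct_control = np.mean(ct_target_control) - np.mean(ct_ref_control)
        return 2.0 ** -(d_ct_treated - d_ct_control)

    # Hypothetical cycle thresholds (a higher Ct means less transcript).
    tnfa_treated = [26.1, 25.8, 26.4]    # TNFa, cannabinoid-treated EAE animals
    ref_treated  = [18.0, 18.2, 17.9]    # housekeeping gene, same animals
    tnfa_control = [24.0, 23.7, 24.2]    # TNFa, untreated EAE controls
    ref_control  = [18.1, 17.9, 18.0]

    fc = fold_change(tnfa_treated, ref_treated, tnfa_control, ref_control)
    print(f"TNFa fold change (treated/control): {fc:.2f}")   # < 1 = reduced expression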


CALGARY, AB--(Marketwired - December 08, 2016) - A collaborative research project titled 'GENICE' that partners the University of Calgary and the University of Manitoba has been awarded $10.7 million as part of the Genome Canada 2015 Large-Scale Applied Research Project Competition (LSARP). The award, announced today in Montreal by Minister of Science Kirsty Duncan, funds research teams led by the University of Calgary's Casey Hubert, associate professor in the Faculty of Science and Campus Alberta Innovation Program Chair in Geomicrobiology, and the University of Manitoba's Research Professor Gary Stern, Centre for Earth Observation Science. They will combine their expertise in the areas of genomics, microbiology, petroleomics and sea-ice physics to investigate the potential for natural microbial communities to mitigate oil spills, as warmer temperatures and melting sea ice usher in increasing shipping throughout Arctic waters. "Bioremediation in the cold Arctic and in the presence of sea ice remains poorly understood," Hubert says. "By developing a better understanding of how Arctic microbes will be mobilized in the event of a spill, we can better model and map what will happen and what our response should be, should an accidental spill ever occur." With northern shipping increasing by 166 per cent since 2004, and cruise ships and tourism increasing by 500 per cent in the past five years, the pressures on the Northwest Passage have never been greater. The Passage is a sea route connecting the northern Atlantic and Pacific Oceans through the Arctic Ocean, along the northern coast of North America via waterways through the Canadian Arctic Archipelago. "The expertise that Manitoba brings to the table is in the areas of petroleomics and sea ice physics, as well as our new facility [under construction in Churchill, Manitoba] that will allow us to study oil degradation processes under controlled Arctic conditions," says Stern. The soon-to-be-completed Churchill Marine Observatory (CMO) is a globally unique, highly innovative, multidisciplinary research facility located in Churchill, Manitoba, adjacent to Canada's only Arctic deep-water port. The CMO will directly support the technological, scientific, ethical, environmental, economic, legal and social research that is needed to safely guide (through policy development) the unprecedented Arctic marine transportation and oil and gas exploration and development throughout the Arctic. The University of Calgary is partnering closely with the University of Manitoba on this CFI-sponsored initiative, which is being built at the perfect time to support the new Genome Canada project. "The idea is that we will be able to emulate different thermodynamic states of the sea-ice and how, under these conditions, different crude and fuel oils will interact with native microbial populations in a controlled environment," Stern adds. The 2015 LSARP competition aims to support applied research projects focused on using genomic approaches to address challenges and opportunities of importance to Canada's natural resources and environment sectors, including interactions between natural resources and the environment, thereby contributing to the Canadian bioeconomy and the well-being of Canadians. "Climate change may present the opportunity for year-round shipping traffic along Canada's Arctic coast.
The work of the GENICE team on genomics-based bioremediation will help Canadian companies and agencies be better prepared to mitigate the environmental impact of expanding industrial activities in the Arctic," notes Reno Pontarollo, President & CEO of Genome Prairie. "Casey Hubert and Gary Stern are working to address the growing pressures on Arctic marine environments, while also offering insights into protecting other coastal areas in Canada," notes John Reynolds, acting vice-president (research) at the University of Calgary. "We thank Genome Canada and its subsidiaries, as well as the wide range of partners who have come together to support this project." The project will be managed by Genome Alberta in conjunction with Genome Prairie and with an international collaboration of funding partners that have shown the desire to protect the complex Arctic environment: Genome Canada, Alberta Economic Development and Trade, University of Manitoba, Natural Resources Canada, Arctic Institute of North America, Arctic Research Foundation, Stantec Consulting Ltd., National Research Council of Canada, Research Manitoba, University of Calgary Petroleum Reservoir Group, University of Newcastle Upon Tyne, Georgia Institute of Technology, Churchill Northern Studies Centre, Amundsen Science Inc., Environment and Climate Change Canada, Genome Quebec, Aphorist, and Aarhus University.
About the University of Calgary
The University of Calgary is making tremendous progress on its journey to become one of Canada's top five research universities, where research and innovative teaching go hand in hand, and where we fully engage the communities we both serve and lead. This strategy is called Eyes High, inspired by the university's Gaelic motto, which translates as 'I will lift up my eyes.' For more information, visit ucalgary.ca. Stay up to date with University of Calgary news headlines on Twitter @UCalgary. For details on faculties and how to reach experts go to our media center at ucalgary.ca/mediacentre
About the University of Manitoba
For nearly 140 years, the University of Manitoba has been recognized as Manitoba's premier university - shaping our leaders, enhancing our community, and conducting world-class research. Our home is Manitoba but our impact is global. The university has a tradition of excellence in research, scholarly work and creative activities. Our connection to the agricultural and natural landscapes of the Canadian Prairie, to the Arctic, and to local and Indigenous communities has shaped our research focus. We have made pioneering contributions in many fields and developed life-changing solutions to problems faced by peoples in Manitoba, Canada and the world.
About Genome Alberta
Genome Alberta is a publicly funded not-for-profit genomics research funding organization based in Calgary, Alberta, but leads projects at institutions around the province and participates in a variety of other projects across the country. In partnership with Genome Canada, Industry Canada, and the Province of Alberta, Genome Alberta was established in 2005 to focus on genomics as one of the central components of the Life Sciences Initiative in Alberta, and to help position genomics as a core research effort. For more information on the range of projects led and managed by Genome Alberta, visit http://GenomeAlberta.ca


Feldmann H.,National Institute of Allergy and Infectious Diseases | Feldmann H.,University of Manitoba | Geisbert T.W.,University of Texas Medical Branch
The Lancet | Year: 2011

Ebola viruses are the causative agents of a severe form of viral haemorrhagic fever in man, designated Ebola haemorrhagic fever, and are endemic in regions of central Africa. The exception is the species Reston Ebola virus, which has not been associated with human disease and is found in the Philippines. Ebola virus constitutes an important local public health threat in Africa, with a worldwide effect through imported infections and through the fear of misuse for biological terrorism. Ebola virus is thought to also have a detrimental effect on the great ape population in Africa. Case-fatality rates of the African species in man are as high as 90%, with no prophylaxis or treatment available. Ebola virus infections are characterised by immune suppression and a systemic inflammatory response that causes impairment of the vascular, coagulation, and immune systems, leading to multiorgan failure and shock, and thus, in some ways, resembling septic shock. © 2011 Elsevier Ltd.


Frank J.,University of Manitoba | Garcia P.,University of Illinois at Urbana - Champaign
American Journal of Agricultural Economics | Year: 2011

Using literature-based measures and a modified Bayesian method specified here, we estimate liquidity costs and their determinants for the live cattle and hog futures markets. Volume and volatility are simultaneously determined and significantly related to the bid-ask spread. Daily volume is negatively related to the spread while volatility and average volume per transaction display positive relationships. Electronic trading has a significant competitive effect on liquidity costs, particularly in the live cattle market. Results are sensitive to the bid-ask spread measure, with our modified Bayesian method providing estimates most consistent with expectations and the competitive structure in these markets. © 2010 The Author. Published by Oxford University Press on behalf of the Agricultural and Applied Economics Association. All rights reserved.
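The abstract does not reproduce the authors' modified Bayesian estimator. As a rough illustration of the literature-based spread measures it is benchmarked against, the sketch below implements the classic Roll (1984) estimator, which backs an effective bid-ask spread out of the negative first-order autocovariance of transaction price changes. All data here are synthetic.

import numpy as np

def roll_spread(prices):
    """Classic Roll (1984) effective spread estimator.

    Infers the spread from the negative first-order serial covariance of
    price changes; returns NaN when the covariance is positive and the
    estimator is undefined.
    """
    dp = np.diff(np.asarray(prices, dtype=float))
    cov = np.cov(dp[1:], dp[:-1])[0, 1]
    return 2.0 * np.sqrt(-cov) if cov < 0 else float("nan")

# Illustration on synthetic transactions: a random-walk "true" price
# observed alternately at the bid and the ask.
rng = np.random.default_rng(0)
true_spread = 0.50                        # hypothetical spread, $/cwt
mid = np.cumsum(rng.normal(0, 0.2, 5000))
side = rng.choice([-1.0, 1.0], size=5000)
trades = mid + side * true_spread / 2
print(roll_spread(trades))                # should be close to 0.50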


Leslie W.D.,University of Manitoba | Rubin M.R.,Columbia University | Schwartz A.V.,University of California at San Francisco | Kanis J.A.,University of Sheffield
Journal of Bone and Mineral Research | Year: 2012

There is a growing body of research showing that diabetes is an independent risk factor for fracture. Type 2 diabetes (T2D), which predominates in older individuals and is increasing globally as a consequence of the obesity epidemic, is associated with normal or even increased dual-energy x-ray absorptiometry (DXA)-derived areal bone mineral density (BMD). Therefore, the paradoxical increase in fracture risk has led to the hypothesis that there are diabetes-associated alterations in material and structural properties. An overly glycated collagen matrix, confounded by a low turnover state, in the setting of subtle cortical abnormalities, may lead to compromised biomechanical competence. In current clinical practice, because BMD is central to fracture prediction, a consequence of this paradox is a lack of suitable methods, including FRAX, to predict fracture risk in older adults with T2D. The option of adding diabetes to the FRAX algorithm is appealing but requires additional data from large population-based cohorts. The need for improved methods for identification of fracture in older adults with T2D is an important priority for osteoporosis research. © 2012 American Society for Bone and Mineral Research.


Safronetz D.,National Institute of Allergy and Infectious Diseases | Feldmann H.,National Institute of Allergy and Infectious Diseases | Feldmann H.,University of Manitoba | De Wit E.,National Institute of Allergy and Infectious Diseases
Annual Review of Pathology: Mechanisms of Disease | Year: 2015

Emerging infectious diseases of zoonotic origin are shaping today's infectious disease field more than ever. In this article, we introduce and review three emerging zoonotic viruses. Novel hantaviruses emerged in the Americas in the mid-1990s as the cause of severe respiratory infections, designated hantavirus pulmonary syndrome, with case fatality rates of around 40%. Nipah virus emerged a few years later, causing respiratory infections and encephalitis in Southeast Asia, with case fatality rates ranging from 40% to more than 90%. A new coronavirus emerged in 2012 on the Arabian Peninsula with a clinical syndrome of acute respiratory infections, later designated as Middle East respiratory syndrome (MERS), and an initial case fatality rate of more than 40%. Our current state of knowledge on the pathogenicity of these three severe, emerging viral infections is discussed. © 2015 by Annual Reviews.


Elsawy H.,University of Manitoba | Hossain E.,University of Manitoba | Haenggi M.,University of Notre Dame
IEEE Communications Surveys and Tutorials | Year: 2013

For more than three decades, stochastic geometry has been used to model large-scale ad hoc wireless networks, and it has succeeded in developing tractable models to characterize and better understand the performance of these networks. Recently, stochastic geometry models have been shown to provide tractable yet accurate performance bounds for multi-tier and cognitive cellular wireless networks. Given the need for interference characterization in multi-tier cellular networks, stochastic geometry models offer high potential to simplify their modeling and provide insights into their design. Hence, a new research area dealing with the modeling and analysis of multi-tier and cognitive cellular wireless networks is increasingly attracting the attention of the research community. In this article, we present a comprehensive survey on the literature related to stochastic geometry models for single-tier as well as multi-tier and cognitive cellular wireless networks. A taxonomy based on the target network model, the point process used, and the performance evaluation technique is also presented. To conclude, we discuss the open research challenges and future research directions. © 2009-2012 IEEE.
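A minimal sketch of the kind of tractable model this survey covers (not any specific paper's setup): base stations drawn from a homogeneous Poisson point process, nearest-station association, Rayleigh fading, and Monte Carlo estimation of downlink SINR coverage in the interference-limited regime. All parameter values are illustrative.

import numpy as np

rng = np.random.default_rng(1)

def coverage_probability(lam=1e-5, alpha=4.0, sinr_db=0.0,
                         trials=2000, radius=5000.0):
    """Monte Carlo SINR coverage for a single-tier Poisson cellular network.

    lam     : base-station density (stations per m^2)
    alpha   : path-loss exponent
    sinr_db : SINR threshold in dB
    The typical user sits at the origin; thermal noise is neglected.
    """
    thr = 10 ** (sinr_db / 10)
    area = np.pi * radius ** 2
    covered = 0
    for _ in range(trials):
        n = rng.poisson(lam * area)
        if n == 0:
            continue                           # no station: not covered
        r = radius * np.sqrt(rng.random(n))    # uniform points in a disc
        fading = rng.exponential(1.0, n)       # Rayleigh power fading
        power = fading * r ** (-alpha)
        serving = np.argmin(r)                 # nearest-station association
        interference = power.sum() - power[serving]
        if power[serving] > thr * interference:
            covered += 1
    return covered / trials

print(coverage_probability())   # roughly 0.55 for these illustrative values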


Wong G.,Public Health Agency of Canada | Wong G.,University of Manitoba | Qiu X.,Public Health Agency of Canada | Olinger G.G.,National Institute of Allergy and Infectious Diseases | And 3 more authors.
Trends in Microbiology | Year: 2014

Filovirus infections cause fatal hemorrhagic fever characterized by the initial onset of general symptoms before rapid progression to severe disease; the most virulent species can cause death to susceptible hosts within 10 days after the appearance of symptoms. Before the advent of monoclonal antibody (mAb) therapy, infection of nonhuman primates (NHPs) with the most virulent filovirus species was fatal if interventions were not administered within minutes. A novel nucleoside analogue, BCX4430, has since been shown to also demonstrate protective efficacy with a delayed treatment start. This review summarizes and evaluates the potential of current experimental candidates for treating filovirus disease with regard to their feasibility and use in the clinic, and assesses the most promising strategies towards the future development of a pan-filovirus medical countermeasure. © 2014 Elsevier Ltd.


Asaduzzaman A.M.,University of Manitoba | Kruger P.,University of Burgundy
Journal of Physical Chemistry C | Year: 2010

A first principles theoretical study on the diffusion mechanism of Ti interstitials and O vacancies in rutile TiO2 is reported. We find that the diffusion depends strongly on the defect charge. Weakly charged Ti ions diffuse preferentially through the open channels along the c axis with a barrier of ∼0.4 eV. Ti4+ ions, however, diffuse perpendicular to c by an interstitialcy mechanism with a barrier of ∼0.2 eV. Neutral oxygen vacancies diffuse along the c axis with a barrier of 0.65 eV. © 2010 American Chemical Society.
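To put the reported barriers in perspective, a transition-state (Arrhenius) estimate of hop rates can be sketched as below; the attempt frequency is an assumed typical phonon-scale value, not a number from the paper.

import numpy as np

k_B = 8.617e-5          # Boltzmann constant, eV/K
nu = 1e13               # assumed attempt frequency, Hz (typical phonon scale)
T = 300.0               # temperature, K

# Diffusion barriers reported in the abstract (eV)
barriers = {"weakly charged Ti, along c": 0.4,
            "Ti4+, perpendicular to c": 0.2,
            "neutral O vacancy, along c": 0.65}

for label, Ea in barriers.items():
    rate = nu * np.exp(-Ea / (k_B * T))       # Arrhenius hop rate
    print(f"{label}: ~{rate:.2e} hops/s at {T:.0f} K")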


Vokey J.R.,University of Lethbridge | Jamieson R.K.,University of Manitoba
Psychological Science | Year: 2014

Grainger, Dufau, Montant, Ziegler, and Fagot (2012a) taught 6 baboons to discriminate words from nonwords in an analogue of the lexical decision task. The baboons more readily identified novel words than novel nonwords as words, and they had difficulty rejecting nonwords that were orthographically similar to learned words. In a subsequent test (Ziegler, Hannagan, et al., 2013), responses from the same animals evinced a transposed-letter effect. These three effects, when seen in skilled human readers, are taken as hallmarks of orthographic processing. We show, by simulation of the unique learning trajectory of each baboon, that the results can be interpreted equally well as an example of simple, familiarity-based discrimination of pixel maps without orthographic processing. © The Author(s) 2014.
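A toy version of the familiarity account (not the authors' simulation code): treat each stimulus as a raw pixel vector, score a probe by its mean similarity to stored exemplars, and discriminate "words" from "nonwords" with no orthographic representation at all. The stimuli below are synthetic vectors, not baboon data.

import numpy as np

rng = np.random.default_rng(2)

def familiarity(probe, memory):
    """Mean cosine similarity between a probe pixel map and stored exemplars."""
    sims = memory @ probe / (np.linalg.norm(memory, axis=1) * np.linalg.norm(probe))
    return sims.mean()

# Toy stimuli: "words" cluster around a shared prototype pixel pattern,
# "nonwords" are unstructured noise (purely illustrative).
prototype = rng.random(64)
words = prototype + rng.normal(0, 0.2, (50, 64))
nonwords = rng.random((50, 64))

memory = words[:40]                         # learned items
novel_word, novel_nonword = words[45], nonwords[0]
print(familiarity(novel_word, memory) > familiarity(novel_nonword, memory))  # True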


Drew D.A.,Tufts Medical Center | Lok C.E.,Toronto General Hospital | Cohen J.T.,Institute for Clinical Research and Health Policy Studies | Wagner M.,University of Würzburg | And 2 more authors.
Journal of the American Society of Nephrology | Year: 2015

Hemodialysis vascular access recommendations promote arteriovenous (AV) fistulas first; however, it may not be the best approach for all hemodialysis patients, because likelihood of successful fistula placement, procedure-related and subsequent costs, and patient survival modify the optimal access choice. We performed a decision analysis evaluating AV fistula, AV graft, and central venous catheter (CVC) strategies for patients initiating hemodialysis with a CVC, a scenario occurring in over 70% of United States dialysis patients. A decision tree model was constructed to reflect progression from hemodialysis initiation. Patients were classified into one of three vascular access choices: maintain CVC, attempt fistula, or attempt graft. We explicitly modeled probabilities of primary and secondary patency for each access type, with success modified by age, sex, and diabetes. Access-specific mortality was incorporated using preexisting cohort data, including terms for age, sex, and diabetes. Costs were ascertained from the 2010 USRDS report and Medicare for procedure costs. An AV fistula attempt strategy was found to be superior to AV grafts and CVCs in regard to mortality and cost for the majority of patient characteristic combinations, especially younger men without diabetes. Women with diabetes and elderly men with diabetes had similar outcomes, regardless of access type. Overall, the advantages of an AV fistula attempt strategy lessened considerably among older patients, particularly women with diabetes, reflecting the effect of lower AV fistula success rates and lower life expectancy. These results suggest that vascular access-related outcomes may be optimized by considering individual patient characteristics. Copyright © 2015 by the American Society of Nephrology.
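A minimal sketch of the decision-tree arithmetic such an analysis rests on. All probabilities and costs below are invented placeholders, not the study's inputs; the real model also conditions success and mortality on age, sex, and diabetes.

# Decision-tree sketch of the access-choice comparison. All numbers are
# invented placeholders for illustration, NOT values used in the study.
strategies = {
    # (P(access success), annual cost if success, annual cost if failure)
    "attempt fistula": (0.60, 12_000, 25_000),
    "attempt graft":   (0.75, 18_000, 27_000),
    "maintain CVC":    (1.00, 30_000, 30_000),
}

def expected_annual_cost(p_success, c_success, c_failure):
    """Expected cost of one branch: success keeps the new access working;
    failure falls back to catheter-level costs."""
    return p_success * c_success + (1 - p_success) * c_failure

for name, params in strategies.items():
    print(f"{name}: ${expected_annual_cost(*params):,.0f}/yr expected")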


Herzallah S.,Mu'tah University | Holley R.,University of Manitoba
LWT - Food Science and Technology | Year: 2012

A reversed phase-high performance liquid chromatography method was developed to quantify sinigrin, sinalbin, allyl isothiocyanate and benzyl isothiocyanate present in aqueous and freeze-dried yellow and Oriental (brown) mustard extract samples using two pre-treatment methods (autoclaving, boiling) to prevent degradation by myrosinase. The lowest detection limits for sinigrin and sinalbin were 0.05 mg/L and for allyl- and benzyl isothiocyanate were 2 mg/L. The methods developed make it possible to quantify both the glucosinolates (sinigrin, and sinalbin) and their hydrolysis products (allyl- and benzyl isothiocyanate) with the same mobile phase, and only require adjustment of the wavelength and a change in the ratio of the high performance liquid chromatography mobile phase solvents (tetrabutylammonium hydrogen sulphate and acetonitrile). The use of a single method yielded accurate and rapid results for the four compounds (sinigrin, sinalbin, allyl- and benzyl isothiocyanate). Autoclaving of both yellow and brown mustard powder before glucosinolate extraction did not consistently improve the amount of sinalbin and sinigrin recovered over boiling treatments because the thermal stability of myrosinase proved problematic in glucosinolate recovery. Nonetheless, the highest extract yields found were 4.06 g/100 g for sinigrin and 2.57 g/100 g for sinalbin, respectively, which represented over 94 g/100 g extract yield of sinigrin from the Oriental mustard powder. © 2012 Elsevier Ltd.
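A generic external-standard calibration step of the sort such HPLC quantification implies, shown only as a sketch; the standard concentrations and peak areas below are made up, not the paper's data.

import numpy as np

# Generic external-standard calibration for HPLC quantification.
std_conc = np.array([0.05, 0.5, 5.0, 50.0])       # standards, mg/L (made up)
std_area = np.array([1.1, 10.8, 106.0, 1055.0])   # detector response (made up)

slope, intercept = np.polyfit(std_conc, std_area, 1)  # linear calibration fit

def quantify(peak_area):
    """Concentration (mg/L) of a sample from its peak area via the fitted line."""
    return (peak_area - intercept) / slope

print(f"{quantify(530.0):.1f} mg/L")   # sample with a peak area of 530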


Grant
Agency: European Commission | Branch: H2020 | Program: RIA | Phase: BG-07-2015 | Award Amount: 5.51M | Year: 2016

Objectives:
1) to improve the observation and prediction of oil spreading in the sea, using novel on-line sensors on board vessels, fixed structures or gliders, and smart data transfer into operational awareness systems;
2) to examine the true environmental impacts and benefits of a suite of marine oil spill response methods (mechanical collection in water and below ice, in situ burning, use of chemical dispersants, bioremediation, electro-kinetics, and combinations of these) in cold climates and ice-infested areas;
3) to assess the impacts on biota of naturally and chemically dispersed oil, in situ burning residues and non-collected oil using biomarker methods, and to develop specific methods for the rapid detection of the effects of oil pollution;
4) to develop a strategic Net Environmental Benefit Analysis tool (sNEBA) for oil spill response strategy decision making.
A truly trans-disciplinary consortium will carry out the project. Oil sensors will be applied to novel platforms such as ferry-boxes, smart buoys, and gliders. The environmental impacts of the oil spill response methods will be assessed by performing pilot tests and field experiments in the coastal waters of Greenland, as well as laboratory tests in Svalbard and the Baltic Sea, with the main focus on dispersed oil, in situ burning residues and non-collected oil. The sNEBA tool will be developed to include and overarch the biological and technical knowledge obtained in the project, as well as to integrate with operational assessments based on expertise in coastal protection and shoreline response. This can be used in establishing cross-border and trans-boundary cooperation and agreements. The proposal addresses novel observation technology and integrated response methods at extreme cold temperatures and in ice. It also addresses the environmental impacts and includes a partner from Canada. The results are vital for the offshore industry and will enhance the business of oil spill response services.


Grant
Agency: European Commission | Branch: FP7 | Program: CSA-CA | Phase: HEALTH-2007-2.1.2-6 | Award Amount: 3.59M | Year: 2009

The I-DCC coordination and support project responds to the topic defined by HEALTH-2007-2.1.2-6 International activities in large-scale data gathering and systems biology. I-DCC integrates European skills, efforts, resources, and infrastructure in an international collaborative programme to establish a European Data Coordination Centre for mouse genetic resources. Large-scale mouse mutagenesis programs have been established worldwide with the aim of creating loss-of-function mutations in all genes in the mouse. At the moment, these resources largely consist of frozen archives of genetically modified mouse embryonic stem cells that can be conveniently disseminated within the international mouse genetics community for the purpose of generating knockout mice. Most notably, the EUCOMM, KOMP and NorCOMM projects (based in Europe, the USA and Canada, respectively) have been recently established and are expected to achieve near-complete coverage of all mouse genes using high-throughput gene trapping and gene targeting technology. Currently, there is no single point of entry for information on the international ES cell resources. The overall goal of the I-DCC is to lead an effort to establish common data formats and a single portal for access to information on all mutant ES cell resources. This project is built on exceptionally strong international expertise in mouse genome informatics and includes key scientists and bioinformaticians actively involved in the three ongoing large-scale mouse knockout projects (EUCOMM, KOMP and NorCOMM). The I-DCC will 1) coordinate with existing international bioinformatics resources to catalogue and assign unique identifiers to all mouse genes, 2) collaborate with all major ES cell production sites to define standard data formats for the annotation and display of mutant alleles, 3) provide ES cell repositories with standardized data on all ES cell lines and 4) establish a common web portal to facilitate the distribution and use


News Article | March 9, 2016
Site: www.biosciencetechnology.com

The bathroom scale may show a good number but how much of that weight is fat, not muscle? New studies are adding to the evidence that the scale doesn't always tell the whole story when it comes to weight-related health risks. Keeping body fat low is more important for healthy aging than a low overall weight, researchers reported Monday in the journal Annals of Internal Medicine. A separate study found young people who aren't physically fit are at greater risk of developing Type 2 diabetes later in life even if their weight is healthy. Here are some things to know:

Yes. Body mass index, or BMI, is a measure of a person's weight compared to their height. For many people, that's plenty of evidence to tell if they're overweight or obese and thus at increased risk of heart disease, diabetes and premature death. Generally, a BMI of 25 and above indicates overweight, while 30 and above indicates obesity. Someone who is 5 feet, 9 inches would hit that obesity threshold at 203 pounds (see the arithmetic check after this article).

BUT IT'S NOT A PERFECT MEASURE
Some people have a high BMI because they're more muscular. More common are people who harbor too little muscle and too much body fat even if their BMI is in the normal range. Body composition shifts as we age, with the proportion of muscle decreasing and the proportion of body fat increasing. That slows metabolism, making it easier to put on pounds in middle age even if people haven't changed how they eat or how much they exercise. Dr. William Leslie of the University of Manitoba wondered if poorly measured body fat might help explain the controversial "obesity paradox," where some studies have suggested that being moderately overweight later in life might be good for survival. He tracked 50,000 middle-aged and older Canadians, mostly women, who'd undergone screening for bone-thinning osteoporosis. Those screening X-rays - known as DXA for dual-energy X-ray absorptiometry - measure bone and also allow an estimation of fat. A higher percent of body fat, independent of the person's BMI, was linked to reduced survival, Leslie reported. Risk began rising when body fat was in the range of 36 percent to 38 percent. Interestingly, being underweight also was linked to reduced survival, possibly reflecting age-related frailty. "It's not just the amount of body you've got, but what you're actually made of," Leslie concludes.

A high BMI is one of the biggest risk factors for Type 2 diabetes. But a second study reported in Annals Monday suggests people can still be at risk if they're skinny but not physically fit. Researchers in Sweden and New York checked records of about 1.5 million Swedish men who at age 18 received medical exams for mandatory military service, and tracked how many developed diabetes many years later. Low muscle strength and low aerobic fitness each were associated with an increased diabetes risk - regardless of whether the men were normal weight or overweight. Scoring low on both added to the risk.

WHAT DO THE FINDINGS MEAN?
For diabetes, "normal-weight persons may not receive appropriate lifestyle counseling if they are sedentary or unfit because of their lower perceived risk," wrote obesity specialist Peter Katzmarzyk of Louisiana's Pennington Biomedical Research Center, who wasn't involved in the study. That study also suggests fitness in adolescence can have long-lasting impact. And Leslie said doctors should consider patients' body composition, not just weight, in assessing their health.

HOW TO TELL
Most people won't benefit from a DXA scan for fat, stressed Dympna Gallagher, who directs the human body composition laboratory at Columbia University Medical Center and thinks those tests are more for research than real life. Other methods for determining body composition range from measuring skinfold thickness to "bioimpedance" scales that use a tiny electrical current, but all have varying degrees of error, Gallagher said. Plus, normal body fat varies with age and there's no agreement on the best cutoffs for health, she said. Her recommendation: Check your waistline, even if your BMI is normal. Abdominal fat, an apple-shaped figure, is riskier than fat that settles on the hips. The government says men are at increased risk of health problems if their waist circumference is larger than 40 inches, and 35 inches for women.
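As a check on the article's example: BMI is weight in kilograms divided by height in metres squared, and the sketch below confirms that 5 feet 9 inches and 203 pounds lands right at the obesity threshold of 30.

# Arithmetic check of the article's BMI example: BMI = kg / m^2.
weight_lb, height_in = 203, 69            # 5 ft 9 in
weight_kg = weight_lb * 0.453592
height_m = height_in * 0.0254
bmi = weight_kg / height_m ** 2
print(f"BMI = {bmi:.1f}")                 # ~30.0, the obesity threshold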


News Article | December 23, 2016
Site: www.eurekalert.org

The Biophysical Society has announced the winners of its annual CPOW Travel Awards to attend the Biophysical Society's 61st Annual Meeting in New Orleans, Louisiana, February 11-15, 2017. CPOW, the Society's Committee for Professional Opportunities for Women, has initiated these travel fellowships to increase the number of women biophysicists and encourage their participation at the Meeting. The recipients of this competitive award must be female postdoctoral fellows or mid-career scientists presenting a poster or oral presentation at the conference. Each awardee receives a travel grant and will be recognized at a reception on Saturday, February 11, at the Ernest N. Morial Convention Center.

Teresa Aman, University of Washington, HCN CHANNEL GATING STUDIED WITH TMFRET AND A FLUORESCENT NONCANONICAL AMINO ACID.
Anna Blice-Baum, Sam Houston State University, CARDIAC-SPECIFIC EXPRESSION OF VCP/TER94 RNAI OR DISEASE ALLELES DISRUPTS DROSOPHILA HEART STRUCTURE AND IMPAIRS FUNCTION.
Lusine Demirkhanyan, University of Illinois at Chicago, ASSESSMENT OF ENDOGENOUS AND EXOGENOUS MODULATORS OF THE TRPM7 CHANNEL IN PLANAR LIPID BILAYERS.
Maria Hoernke, Albert-Ludwigs-Universität, GUV AND LUV LEAKAGE: HOW ALL-OR-NONE AND GRADED LEAKAGE SCALE WITH VESICLE SIZE.
Pooja Jadiya, Temple University, GENETIC RESCUE OF MITOCHONDRIAL CALCIUM EFFLUX IN ALZHEIMER'S DISEASE PRESERVES MITOCHONDRIAL FUNCTION AND PROTECTS AGAINST NEURONAL CELL DEATH.
Marthe Ludtmann, UCL, Institute of Neurology, DIRECT MODULATION OF THE MITOCHONDRIAL PERMEABILITY TRANSITION PORE BY OLIGOMERIC ALPHA-SYNUCLEIN CAUSES TOXICITY IN PD.
Yoojin Oh, Johannes Kepler University, Linz, CURLI MEDIATE BACTERIAL ADHESION TO FIBRONECTIN VIA A TENSILE COLLECTIVE BINDING NETWORK.
Laura Orellana, Science for Life Laboratory, TRAPPING ON-PATHWAY INTERMEDIATES FOR LARGE SCALE CONFORMATIONAL CHANGES WITH COARSE-GRAINED SIMULATIONS.
Hagit Peretz Soroka, University of Manitoba, NOVEL MECHANISM FOR DRIVING AMOEBOIDLIKE MOTILITY OF HUMAN NEUTROPHILS UNDER AN ELECTRIC FIELD, BASED ON INTRACELLULAR PROTON CURRENTS AND CYTOPLASM STREAMING.
Sarah Rouse, Imperial College London, STRUCTURAL AND MECHANISTIC INSIGHTS INTO TRANSPORT OF FUNCTIONAL AMYLOID SUBUNITS ACROSS THE PSEUDOMONAS OUTER MEMBRANE.
Siobhan Toal, University of Pennsylvania, DETERMINING THE ROLE OF N-TERMINAL ACETYLATION ON α-SYNUCLEIN FUNCTION.
Shelli Frey, Gettysburg College, THE ROLE OF SPHINGOMYELIN AND GANGLIOSIDE GM1 IN THE INTERACTION OF POLYGLUTAMINE PEPTIDES WITH LIPID MEMBRANES.
Rebecca Howard, Stockholm University, TRANSMEMBRANE STRUCTURAL DETERMINANTS OF ALCOHOL BINDING AND MODULATION IN A MODEL LIGAND-GATED ION CHANNEL.
Sabina Mate, INIBIOLP-CONICET-UNLP, ORIENTATIONAL PROPERTIES OF DOPC/SM/CHOLESTEROL MIXTURES: A PM-IRRAS STUDY.
Ekaterina Nestorovich, The Catholic University of America, LIPID DYNAMICS AND THE ANTHRAX TOXIN INTRACELLULAR JOURNEY.

The Biophysical Society, founded in 1958, is a professional, scientific Society established to encourage development and dissemination of knowledge in biophysics. The Society promotes growth in this expanding field through its annual meeting, monthly journal, and committee and outreach activities. Its 9000 members are located throughout the U.S. and the world, where they teach and conduct research in colleges, universities, laboratories, government agencies, and industry. For more information on these awards, the Society, or the 2017 Annual Meeting, visit http://www.


News Article | February 15, 2017
Site: www.eurekalert.org

A new study published in the Canadian Journal of Microbiology has identified new toxic metalloid-reducing bacteria in highly polluted abandoned gold mine tailings in Manitoba's Nopiming Provincial Park. "These bacteria have the ability to convert toxic components that exist as a result of mining activities into less toxic forms and are prevalent in extreme environments," says Dr. Vladimir Yurkov, Professor at the University of Manitoba. These bacteria or their enzymes may be potential candidates for the development of bioremediation technologies, a treatment that uses naturally occurring organisms to break down toxic substances. "We wanted to look at the bacterial resistance to toxic waste, which would be an important asset within the context of heavily polluted mines. We also aimed to enrich our understanding of the microbial diversity of extreme environments, knowing that the vast majority of these microbes and their potential uses and benefits, remain undiscovered," continued Dr. Yurkov. Aerobic anoxygenic phototrophs (AAPs) are a physiological group of bacteria that have been found in many different environments, including harsh or extreme environments. Habitats with extremely high concentrations of metalloid oxides are toxic, but AAPs are able to survive in these locales. They do so by converting the toxic compounds to less toxic forms through a process called reduction. Microbes capable of removing toxic compounds from their environment are potentially beneficial for bioremediation, the use of bacteria to clean up contaminated environments. By identifying bacteria that are capable of living in extreme conditions, candidates for bioremediation can be found. The Central Gold Mine operated from 1927 to 1937, and although the mine was abandoned more than 75 years ago, the tailings, the byproducts left over from the operation, remain highly polluted with heavy metalloid oxides. To better understand the microbial diversity of these environments, researchers from the University of Manitoba isolated AAP strains from soil samples at four different sites within the Central Gold Mine tailings. Physiological study of five of the strains showed that they could grow under a wide range of temperature, acidity and salt content. Importantly, all of them were highly resistant to toxic metalloid oxides, and were able to convert toxic tellurite to the less toxic elemental form tellurium, a process which could potentially contribute to decontamination of the tailings. The study also concluded that despite resembling previously discovered AAP, the five isolates characterized were phylogenetically unique, and may represent new species. These studies of microbial diversity are critical. "There are countless undiscovered microbes with unique abilities in every possible environment. Less than 1% of existing microbes are currently known in pure laboratory cultures. The majority of bacterial diversity is only theoretically indicated by DNA sequencing," says Dr. Yurkov. This research makes important contributions to the fields of microbial diversity in extreme environments and bioremediation. Identification of novel microbes that can inhabit extreme environments that most other forms of life cannot tolerate could eventually lead to the development of tools for environmental detoxification. Added Dr. Yurkov, "Continually searching for these microbes and investigating details of their physiology and biochemistry could uncover the great potential of possible benefits for our society". 
The paper, "Aerobic Anoxygenic Phototrophs in Gold Mine Tailings in Nopiming Provincial Park, Manitoba, Canada" by Elizabeth Hughes, Breanne Head, Chris Maltman, Michele Piercey-Normore and Vladimir Yurkov was published today in the Canadian Journal of Microbiology.


News Article | March 4, 2016
Site: www.nature.com

Following a record winter in many ways, Arctic sea-ice cover seems poised to reach one of its smallest winter maxima ever. As of 28 February, ice covered 14.525 million square kilometres, or 938,000 square kilometres less than the 1981–2010 average. And researchers are using a new technique to capture crucial information about the thinning ice pack in near real time, to better forecast future changes. Short-term weather patterns and long-term climate trends have conspired to create an extraordinary couple of months, even by Arctic standards. “This winter will be the topic of research for many years to come,” says Jennifer Francis, a climate scientist at Rutgers University in New Brunswick, New Jersey. “There’s such an unusual cast of characters on the stage that have never played together before.” The characters include the El Niño weather pattern that is pumping heat and moisture across the globe, and the Arctic Oscillation, a large-scale climate pattern whose shifts in recent months have pushed warm air northward. Together, they are exacerbating the long-term decline of Arctic sea ice, which has shrunk by an average of 3% each February since satellite records began in 1979. A persistent ridge of high-pressure air perched off the US West Coast has steered weather systems around drought-stricken California, funnelling warmth northward. As a consequence, sea ice is particularly scarce this year in the Bering Sea. “The ice would normally be extensive and cold, but we have open water instead,” says Francis. A storm last December compounded the situation by pushing warm air — more than 20 °C above average — to the North Pole. In January, an Arctic Oscillation-driven warm spell heated the air above most of the Arctic Ocean. By February, ice had begun to circulate clockwise around the Arctic basin and out through the Fram Strait, says Julienne Stroeve, a researcher at the US National Snow and Ice Data Center (NSIDC) in Boulder, Colorado. Given the Arctic’s notoriously unpredictable weather, the low maximum doesn’t necessarily foretell record-low melting this summer, when sea ice will reach its annual minimum. (The biggest summer melt on record happened in 2012, a year without an El Niño.) But researchers have one new tool with which to track the changes as they happen this year — the first detailed, near-real-time estimates of ice thickness, from the European Space Agency’s CryoSat-2 satellite. Three research groups currently calculate Arctic ice thickness from satellite data, but with a lag time of at least a month. Faster estimates would allow shipping companies to better plot routes through the Arctic, and scientists to improve their longer-term forecasts of ice behaviour. “The quicker you have these estimates of sea-ice thickness, the quicker you can start assimilating them into models and make more timely predictions of what’s going to happen,” says Rachel Tilling, a sea-ice researcher at University College London. She and her colleagues have developed a faster way to get information on ice thickness from CryoSat-2 (see ‘Measuring stick’). The satellite measures thickness by comparing the time that it takes for radar signals to bounce off the ice, as opposed to open water. Normally, it takes several months for satellite operators to calculate CryoSat-2’s precise orbit (and therefore the exact location of the ice and water that it flew over).
But Tilling’s group instead runs a quick-and-dirty analysis of orbital data, then combines it with near-real-time information on ice concentration from the NSIDC and ice type from the Norwegian Meteorological Service (R. L. Tilling et al. Cryosphere Discuss. http://doi.org/bcw5; 2016). The result is ice-thickness measurements that are ready in just 3 days, and accurate to within 1.5% of those produced months later. The current winter cycle is the first complete season for the near-real-time data. (The measurements cannot be done in the summer, when melt ponds on the ice confuse the satellite.) Tilling has begun to speak to shipping companies, among others, that are interested in using the data as fast as they are produced. “It really is a new era for CryoSat-2,” she says. More-accurate ice-thickness data would improve climate models and give better forecasts for the possible impacts of thick or thin sea ice, says Nathan Kurtz, a cryosphere scientist at NASA’s Goddard Space Flight Center in Greenbelt, Maryland. Kurtz helps to lead NASA’s IceBridge project, which will begin flying aeroplanes north of Greenland later this month to measure ice thickness using lasers and an infrared camera that can detect heat from the underlying water. Thickness measurements are more crucial than ever, given the changing Arctic, says David Barber, a sea-ice specialist at the University of Manitoba in Winnipeg, Canada. He and his colleagues reported last year that there is increased open water all around the edge of the Arctic ice pack every month of the year (D. G. Barber et al. Prog. Oceanogr. 139, 122–150; 2015). “We’re getting more open water in the winter than we were expecting,” Barber says. “These changes are happening very quickly, and I don’t think people are fully aware of how dramatic they are.”
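The radar measurement described above yields freeboard, the height of the ice surface above the waterline; converting freeboard to thickness uses hydrostatic equilibrium. The sketch below uses typical textbook densities and an assumed snow depth, not the Tilling et al. processing chain.

# Freeboard-to-thickness conversion by hydrostatic equilibrium, the
# standard step behind radar-altimeter ice thickness. Densities are
# typical literature values, not the mission's exact ones.
RHO_WATER, RHO_ICE, RHO_SNOW = 1024.0, 917.0, 320.0   # kg/m^3

def ice_thickness(freeboard_m, snow_depth_m=0.2):
    """Sea-ice thickness (m) from ice freeboard and snow depth (both in m)."""
    return (RHO_WATER * freeboard_m + RHO_SNOW * snow_depth_m) / (RHO_WATER - RHO_ICE)

print(f"{ice_thickness(0.15):.2f} m")   # 15 cm of freeboard -> ~2 m of ice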


News Article | October 26, 2016
Site: www.newscientist.com

Is this one of the worst wrong turns in the history of parenting advice? Telling people to delay the age they start their babies on solid food might be contributing to the rise in food allergies. Babies used to be given their first solids when they were around 4 months old. Many start showing an interest in the food their family is eating around this time, as well as developing a larger appetite. But since the World Health Organization published a report about a decade ago saying that babies should be exclusively breastfed until 6 months, countries like the UK and US have recommended parents hold off until then. Not all UK parents follow this 6-month rule but healthcare staff and parenting websites all tend to give out this advice. NHS leaflets and websites warn parents that if they start weaning earlier than 6 months they must avoid potentially allergenic foods, like peanuts and eggs. Yet this is at odds with the latest research and advice from allergy specialists, says Elissa Abrams of the University of Manitoba in Winnipeg, Canada, in a review published this week. Several studies now suggest that, to avoid developing food allergies, it’s better for babies to be exposed to food from 4 months old. A trial published this year, for instance, showed the best way to avoid an allergy to peanuts – one of the commonest food allergies – is to give them to babies from 4 months of age. Surveys and observational studies have suggested it could be beneficial for infants to encounter wheat, egg and cow’s milk from an earlier age, while there are also concerns that delayed weaning could cause anaemia, due to a lack of iron in breastmilk. One criticism of the WHO stance has been that it is more relevant to developing countries, where babies who are breastfed for longer are less likely to be exposed to contaminated water while they are at their most vulnerable. In developed countries, the risk of this is very low. But scientific thinking has changed too. Pregnant women used to be told to avoid eating peanuts, until a 2013 study showed that this actually increased the likelihood of their babies having a peanut allergy. Is it time to change weaning advice? Despite her paper, Abrams sticks to the official Canadian Pediatric Society line to hold out until 6 months “because of the benefits of breastfeeding for babies and mothers”. And that could be the heart of the matter – the weaning question may be being distorted by efforts to raise breastfeeding rates. Because the official line is that babies should be given nothing other than breastmilk, rather than formula, for the first 6 months of life, telling parents they can give solids during this time could dilute this message. “Breastfeeding in the UK is so politicised that weaning has become drawn in,” says one researcher who did not want to be named. Different advice is given by some other expert groups. The European Food Safety Authority says weaning at 4 months is fine. The British Dietetic Association says that while parents should aim for 6 months they should use their own judgement as different babies have different needs. “Sometimes they’re starting to grab food out of your hand before six months,” says dietitian and BDA spokesperson Tanya Thomas. “As a parent you know when they’re ready.” We will know more later this year, when the results from a large trial that addresses exactly this question will be published. In the meantime, perhaps new parents should be informed of the questions hanging over the 6-month rule. They’re not babies, after all.


News Article | November 29, 2016
Site: www.eurekalert.org

Heart medication taken in combination with chemotherapy reduces the risk of serious cardiovascular damage in patients with early-stage breast cancer, according to results from a new landmark clinical trial. Existing research has shown some cancer therapies such as Herceptin greatly improve survival rates for early-stage breast cancer, but come with a fivefold risk of heart failure -- a devastating condition as life-threatening as the cancer itself. A new five-year study, led by researchers at the University of Alberta and Alberta Health Services and funded by the Canadian Institutes of Health Research (CIHR) and Alberta Cancer Foundation, shows that two kinds of heart medications, beta blockers and ACE inhibitors, effectively prevent a drop in heart function from cancer treatment. "We think this is practice-changing," said Edith Pituskin, co-investigator of the MANTICORE trial. "This will improve the safety of the cancer treatment that we provide." Pituskin, an assistant professor in the Faculty of Nursing and Faculty of Medicine & Dentistry at the U of A, published their findings Nov. 28 in the Journal of Clinical Oncology. In the double-blind trial, 100 patients from Alberta and Manitoba with early-stage breast cancer were selected at random to receive either a beta blocker, ACE inhibitor or placebo for one year. Beta blockers and ACE inhibitors are drugs used to treat several conditions, including heart failure. Cardiac MRI images taken over a two-year period showed that patients who received the beta blockers showed fewer signs of heart weakening than the placebo group. The ACE inhibitor drug also had heart protection effects. Study lead Ian Paterson, a cardiologist at the Mazankowski Alberta Heart Institute and associate professor with the U of A's Department of Medicine, said these medications not only safeguard against damage to the heart, but may improve breast cancer survival rates by limiting interruptions to chemotherapy treatment. Any time a patient shows signs of heart weakening, he said, chemotherapy is stopped immediately, sometimes for a month or two months until heart function returns to normal. "We are aiming for two outcomes for these patients--we're hoping to prevent heart failure and we're hoping for them to receive all the chemotherapy that they are meant to get, when they are supposed to get it--to improve their odds of remission and survival." Patients with heart failure often experience fatigue, shortness of breath or even death, making it "an equally devastating disease with worse prognosis than breast cancer," Paterson said. Brenda Skanes has a history of cardiovascular problems in her family--her mom died of a stroke and her dad had a heart attack. She was eager to join the trial, both for her own health and the health of other breast cancer survivors. "I met survivors through my journey who experienced heart complications caused by Herceptin. If they had access to this, maybe they wouldn't have those conditions now," she said. "Me participating, it's for the other survivors who are just going into treatment." With two daughters of her own and a mother who lost her fight with colon cancer, study participant Debbie Cameron says she'd do anything to prevent others from going through similar upheaval. "My daughters are always in the back of my mind and the what ifs--if they're diagnosed, what would make their treatment safer, better," Cameron said.
"Anything I could do to make this easier for anybody else or give some insight to treatment down the road was, to me, a very easy decision." Pituskin said the study team, which also includes collaborators from the AHS Clinical Trials Unit at the Cross Cancer Institute and the University of Manitoba, represents a strong mix of research disciplines, particularly the oncology and cardiology groups. She said the results would not have been possible without funding support from CIHR and the Alberta Cancer Foundation. "Local people in Alberta supported a study that not only Albertans benefited from, but will change, again, the way care is delivered around the world." The results are expected to have a direct impact on clinical practice guidelines in Canada and beyond. "Every day in Canada, around 68 women are diagnosed with breast cancer. This discovery holds real promise for improving these women's quality of life and health outcomes," said Stephen Robbins, scientific director of CIHR's Cancer Research Institute. "We couldn't be more pleased with this return on our investment," said Myka Osinchuk, CEO of the Alberta Cancer Foundation. "This clinical research will improve treatment and make life better not only for Albertans facing cancer, but also for those around the world." Paterson said the research team is also investigating how to prevent heart complications in patients with other cancers, noting several other therapies have been linked to heart complications.


Chen Y.,University of Manitoba | Munkholm L.J.,University of Aarhus | Nyord T.,University of Aarhus
Soil and Tillage Research | Year: 2012

Soil-tool interactions are at the centre of many agricultural field operations, including slurry injection. Understanding of soil-tool interaction behaviours (soil cutting forces and soil disturbance) is important for designing high performance injection tools. A discrete element model was developed to simulate a slurry injection tool (a sweep) and its interaction with soil using Particle Flow Code in Three Dimensions (PFC3D). In the model, spherical particles with bonds and viscous damping between particles were used to simulate agricultural soil aggregates and their cohesive behaviours. To serve the model development, the sweep was tested in three different soils (coarse sand, loamy sand, and sandy loam). In the tests, soil cutting forces (draught and vertical forces) and soil disturbance characteristics (soil cross-section disturbance and surface deformation) resulting from the sweep were measured. The measured draught and vertical forces were used in calibrations of the most sensitive model parameter, particle stiffness. The calibrated particle stiffness was 0.75×10³ N m⁻¹ for the coarse sand, 2.75×10³ N m⁻¹ for the loamy sand, and 6×10³ N m⁻¹ for the sandy loam. The calibrated model was validated using the soil disturbance characteristics measured in those three soils. The simulations agreed well with the measurements, with relative errors below 10% in most cases. © 2012 Elsevier B.V.
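A one-dimensional sketch of the contact law such a bonded-particle model rests on: a linear spring set by the particle stiffness plus viscous damping at each contact. The stiffnesses are the calibrated values quoted in the abstract; the damping coefficient and test values are illustrative assumptions.

# PFC-style linear contact law: spring force proportional to particle
# overlap plus viscous damping on the relative velocity. Stiffnesses are
# the calibrated values from the abstract; damping is an assumed value.
K_SOILS = {"coarse sand": 0.75e3, "loamy sand": 2.75e3, "sandy loam": 6.0e3}  # N/m

def contact_force(overlap_m, rel_velocity_ms, k, c=5.0):
    """Normal contact force (N): linear spring k plus viscous damping c."""
    if overlap_m <= 0.0:
        return 0.0                       # particles not touching: no force
    return k * overlap_m + c * rel_velocity_ms

for soil, k in K_SOILS.items():
    print(f"{soil}: {contact_force(1e-4, 0.01, k):.3f} N")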


Smith J.R.,Hoffmann-La Roche | Ariano R.E.,University of Manitoba | Toovey S.,University College London
Critical Care Medicine | Year: 2010

The clinical course of pandemic H1N1 2009 influenza can be severe, particularly in the very young and patients with comorbidities. Pandemic H1N1 2009 is sensitive to the antiviral agents oseltamivir and zanamivir but is resistant to the M2 inhibitors. Although few clinical data are yet available, treatment of pandemic H1N1 2009 influenza in hospital settings with oseltamivir or zanamivir appears to be beneficial. In hospitalized patients with severe influenza treated with oseltamivir, mortality and length of stay are significantly reduced, and viral load is reduced more quickly than in untreated patients. In high-risk patients, reductions in the risk of complications and mortality have been demonstrated with both oseltamivir and zanamivir, although there are fewer data on the latter. There is no evidence yet that other antiviral agents are effective in severe or pandemic H1N1 2009 influenza. Current World Health Organization guidance strongly recommends the use of oseltamivir for severe or progressive infection with pandemic H1N1 2009, with zanamivir as an alternative if the infecting virus is oseltamivir-resistant. Very little resistance to oseltamivir has been found to date. Copyright © 2010 by the Society of Critical Care Medicine and Lippincott Williams & Wilkins.


Gordon J.W.,St Boniface General Hospital Research Center | Shaw J.A.,St Boniface General Hospital Research Center | Kirshenbaum L.A.,St Boniface General Hospital Research Center | Kirshenbaum L.A.,University of Manitoba
Circulation Research | Year: 2011

The progression from cardiac injury to symptomatic heart failure has been intensely studied over the last decade, and is largely attributable to a loss of functional cardiac myocytes through necrosis, intrinsic and extrinsic apoptosis pathways and autophagy. Therefore, the molecular regulation of these cellular programs has been rigorously investigated in the hopes of identifying a potential cell target that could promote cell survival and/or inhibit cell death to avert, or at least prolong, the degeneration toward symptomatic heart failure. The nuclear factor (NF)-κB super family of transcription factors has been implicated in the regulation of immune cell maturation, cell survival, and inflammation in many cell types, including cardiac myocytes. Recent studies have shown that NF-κB is cardioprotective during acute hypoxia and reperfusion injury. However, prolonged activation of NF-κB appears to be detrimental and promotes heart failure by eliciting signals that trigger chronic inflammation through enhanced elaboration of cytokines including tumor necrosis factor α, interleukin-1, and interleukin-6, leading to endoplasmic reticulum stress responses and cell death. The underlying mechanisms that account for the multifaceted and differential outcomes of NF-κB on cardiac cell fate are presently unknown. Herein, we posit a novel paradigm in which the timing, duration of activation, and cellular context may explain mechanistically the differential outcomes of NF-κB signaling in the heart that may be essential for future development of novel therapeutic interventions designed to target NF-κB responses and heart failure following myocardial injury. © 2011 American Heart Association, Inc.


Weber R.E.,University of Aarhus | Campbell K.L.,University of Manitoba
Acta Physiologica | Year: 2011

As demonstrated by August Krogh et al. a century ago, the oxygen-binding reaction of vertebrate haemoglobin is cooperative (described by sigmoid O2 equilibrium curves) and modulated by CO2 and protons (lowered pH) that - in conjunction with later discovered allosteric effectors (chloride, lactate and organic phosphate anions) - enhance O2 unloading from blood in relatively acidic and oxygen-poor tissues. Based on the exothermic nature of the oxygenation of the haem groups, haemoglobin-O2 affinity also decreases with rising temperature. This thermal sensitivity favours oxygen unloading in warm working muscles, but may become detrimental in regionally heterothermic animals, for example in cold-tolerant birds and mammals and warm-bodied fish, where it may perturb the balance between O2 unloading and O2 requirement in organs with substantially different temperatures than at the respiratory organs, and thus commonly is reduced or obliterated. Given that the oxygenation of haemoglobin is linked with the endothermic release of allosteric effectors, increased effector interaction is an effective strategy that is widely exploited to achieve adaptive reductions in the temperature dependence of blood-O2 affinity. The molecular mechanisms implicated in heterothermic vertebrates from different taxonomic groups reveal remarkable variability, both as regards the effectors implicated (protons in tunas, organic phosphates in sharks and billfish, chloride ions in ruminants and chloride and phosphate anions in the extinct woolly mammoth, etc.) and binding sites for the same effectors, indicating multiple evolutionary origins, but convergent physiological functionality (reductions in temperature dependence of O2-binding affinity that safeguard tissue O2 supply). © 2010 The Authors.
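The temperature dependence described here follows the van't Hoff relation: the more exothermic the overall oxygenation reaction, the more O2 affinity (indexed by P50, the half-saturation pressure) shifts with temperature, and endothermic effector release makes the apparent enthalpy less negative. The enthalpy values in the sketch below are illustrative, not measurements.

import numpy as np

R = 8.314                    # gas constant, J/(mol*K)

def p50_at(p50_ref, delta_h, t_ref=310.0, t=300.0):
    """Van't Hoff shift of haemoglobin P50 (torr) from t_ref to t (K).

    delta_h is the apparent overall oxygenation enthalpy in J/mol
    (negative = exothermic); less negative values, as produced by
    endothermic effector release, damp the temperature shift.
    """
    return p50_ref * np.exp((delta_h / R) * (1.0 / t - 1.0 / t_ref))

# Illustrative enthalpies: a strongly exothermic Hb vs a heterotherm-like Hb
print(f"{p50_at(26.0, -40e3):.1f} torr")   # large affinity gain on cooling
print(f"{p50_at(26.0, -10e3):.1f} torr")   # reduced temperature dependence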


Eckert D.J.,Neuroscience Research Australia NeuRA | Eckert D.J.,University of New South Wales | Younes M.K.,University of Manitoba | Younes M.K.,University of Calgary
Journal of Applied Physiology | Year: 2014

Historically, brief awakenings from sleep (cortical arousals) have been assumed to be vitally important in restoring airflow and blood-gas disturbances at the end of obstructive sleep apnea (OSA) breathing events. Indeed, in patients with blunted chemical drive (e.g., obesity hypoventilation syndrome) and in instances when other defensive mechanisms fail, cortical arousal likely serves an important protective role. However, recent insight into the pathogenesis of OSA indicates that a substantial proportion of respiratory events do not terminate with a cortical arousal from sleep. In many cases, cortical arousals may actually perpetuate blood-gas disturbances, breathing instability, and subsequent upper airway closure during sleep. This brief review summarizes the current understanding of the mechanisms mediating respiratory-induced cortical arousal, the physiological factors that influence the propensity for cortical arousal, and the potential dual roles that cortical arousal may play in OSA pathogenesis. Finally, the extent to which existing sedative agents decrease the propensity for cortical arousal and their potential to be therapeutically beneficial for certain OSA patients are highlighted. Copyright © 2014 the American Physiological Society.


Skipetrov S.E.,University Grenoble Alpes | Skipetrov S.E.,French National Center for Scientific Research | Page J.H.,University of Manitoba
New Journal of Physics | Year: 2016

During the last 30 years, the search for Anderson localization of light in three-dimensional (3D) disordered samples yielded a number of experimental observations that were first considered successful, then disputed by opponents, and later refuted by their authors. This includes recent results for light in TiO2 powders that Sperling et al now show to be due to fluorescence and not to Anderson localization (2016 New J. Phys. 18 013039). The difficulty of observing Anderson localization of light in 3D may be due to a number of factors: insufficient optical contrast between the components of the disordered material, near-field effects, etc. The way to overcome these difficulties may consist in using partially ordered materials, complex structured scatterers, or clouds of cold atoms in magnetic fields. © 2016 IOP Publishing Ltd and Deutsche Physikalische Gesellschaft.


Quigley E.M.M.,Alimentary Pharmabiotic Center | Bernstein C.N.,University of Manitoba
American Journal of Gastroenterology | Year: 2012

Irritable bowel syndrome (IBS) and inflammatory bowel disease (IBD) are common, chronic, relapsing, and potentially disabling disorders in the West which are increasing in prevalence in the rest of the world. They typically afflict young adults in the prime of their lives and, consequently, may inflict a considerable emotional, personal, and socioeconomic toll. Not surprisingly, therefore, their management requires considerable clinical acumen and a fundamental commitment to the many dimensions of the patient-doctor relationship. There the similarities end. Despite a considerable body of recent data reporting a number of abnormalities (both upregulation and downregulation) in various components of the mucosal and systemic immune response in IBS, none of these findings come even close to the inflammatory processes that typify IBD. Furthermore, there is little evidence that those with an established diagnosis of IBS (in contrast to those with IBS-type symptoms in which IBD may have been missed (1)) can evolve into IBD; IBS, regardless of immunological or microbiological findings, should not be considered as a part of the spectrum of IBD. If IBS and IBD are distinct entities, can they co-exist and lead to diagnostic confusion for the clinician? © 2012 by the American College of Gastroenterology.


Watt C.,University of Manitoba | Salewski V.,University of Osnabrück
Oikos | Year: 2011

The many definitions of Bergmann's rule have resulted in confusion and debate over how and in what organisms to test the original rule. Watt et al. published a paper in 2010, based directly on Bergmann's original paper, in the hopes of clarifying the rule and presenting direct translations to resolve uncertainties. Recently, Olalla-Tárraga has criticized our publication, stating that we assumed the rule was a causal law, which has narrowed our epistemological scope of the rule. We argue we did not assume the rule was a law and suggest that Olalla-Tárraga has only focused on the observed pattern and has ignored the proposed mechanism, which is inherent in the definition. We also discuss the proposed mechanism and describe why it cannot apply to ectotherms. Despite this, we encourage a thorough investigation of the mechanisms responsible for maintaining Bergmann's pattern in ectotherms and support Olalla-Tárraga's quest for a unifying mechanism to explain body size gradients in endotherms and ectotherms. © 2011 The Authors.


Duan W.H.,Monash University | Gong K.,Monash University | Wang Q.,University of Manitoba
Carbon | Year: 2011

The initiation and development of wrinkles in a single-layer graphene sheet subjected to in-plane shear displacements are investigated. The dependence of the wavelength and amplitude of the wrinkles on the applied shear displacements is explicitly obtained with molecular mechanics simulations. A continuum model is developed for the characteristics of the wrinkles, which shows that the wrinkle wavelength decreases with an increase in shear loading, while the amplitude of the wrinkles is found to initially increase and then become stable. The propagation and growth process of the wrinkles in the sheet is elucidated. It is expected that the research could promote applications of graphene in the transportation of biological systems, separation science, and the development of fluidic electronics. © 2011 Elsevier Ltd. All rights reserved.


Duan W.H.,Monash University | Wang Q.,University of Manitoba | Collins F.,Monash University
Chemical Science | Year: 2011

Dispersion of carbon nanotubes with sodium dodecyl sulfate (SDS) surfactant is reported by molecular mechanics simulations from an energy perspective. The interaction energy of carbon nanotubes in a tube bundle is first calculated to estimate the force sufficient to separate it from the bundle. The binding energy between increasing numbers of SDS molecules with a carbon nanotube is next estimated to identify the threshold number of surfactant molecules for a possible dispersion. With the help of ultrasonication, a sufficient number of SDS molecules are found to penetrate into an initial gap between a single tube and other nanotubes in the bundle. Owing to further congregation of the surfactants at the gap site, the gap becomes enlarged until complete dispersion. In addition to the dispersion observation in view of the interaction and binding energy perspectives, four congregation processes were identified to reveal the aggregation morphologies of SDS surfactants on the surface of carbon nanotubes as well as the effect of diameter of a carbon nanotube on the adsorption density. © The Royal Society of Chemistry 2011.


Vahedi A.,Red River College | Gorczyca B.,University of Manitoba
Water Research | Year: 2014

A number of different flocculation mechanisms are involved in the formation of chemical coagulation flocs. Consequently, two flocs with the same size may have been formed by different mechanisms of aggregation and therefore have different arrangements of primary particles. As a result, two flocs with the same size may have different masses or mass distributions and therefore different settling velocities. Although the correct estimation of the floc mass and density is critical for the development of a floc settling model, none of the suggested floc settling models incorporate information on the mass distribution and variable density of flocs. A probability-based method is used to determine floc fractal dimensions on floc images. The results demonstrated that flocs formed in lime softening coagulation are multifractal. The multifractal spectra indicated the existence of multiple fractal dimensions, as opposed to the unique box-counting dimension, a morphology-based fractal dimension typically introduced into Stokes' law. These fractal dimensions may provide information on the flocs' aggregation mechanism, structure, and the distribution of mass inside the floc. More research is required to investigate how to utilize the information obtained from the multifractal spectra to incorporate the variable floc density and nonhomogeneous mass distribution of flocs into floc settling models. © 2014 Elsevier Ltd.
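For context, the standard fractal extension of Stokes' law (a textbook result, not the variable-density model this paper calls for) lowers a floc's excess density as it grows, so settling velocity scales as d^(Df-1) rather than the solid-sphere d^2. All parameter values below are illustrative.

# Fractal-floc extension of Stokes' law: effective excess density falls
# as rho_e ∝ d^(Df-3), so settling velocity scales as v ∝ d^(Df-1).
g, mu = 9.81, 1.0e-3                 # gravity (m/s^2), water viscosity (Pa*s)
rho_p, rho_w = 2500.0, 1000.0        # primary particle / water density, kg/m^3
d0 = 1e-6                            # primary particle size, m (assumed)

def settling_velocity(d, Df):
    """Stokes settling (m/s) of a fractal floc of size d (m) and dimension Df."""
    rho_eff = (rho_p - rho_w) * (d / d0) ** (Df - 3.0)  # excess density
    return rho_eff * g * d ** 2 / (18.0 * mu)

for Df in (1.8, 2.3, 3.0):           # Df = 3.0 recovers the solid sphere
    print(f"Df = {Df}: v = {settling_velocity(100e-6, Df):.2e} m/s")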


Wong G.,Public Health Agency of Canada | Wong G.,University of Manitoba | Kobinger G.P.,Public Health Agency of Canada | Kobinger G.P.,University of Manitoba | Kobinger G.P.,University of Pennsylvania
Clinical Microbiology Reviews | Year: 2015

The 2014-2015 outbreak of Ebola virus (EBOV), originating in Guinea, is now responsible for the infection of more than 20,000 people in 9 countries. Whereas past filovirus outbreaks in sub-Saharan Africa have been rapidly brought under control with comparably few cases, this outbreak has been particularly resistant to containment efforts. Both the general population and primary health care workers have been affected by this outbreak, with hundreds of doctors and nurses being infected in the line of duty. In the absence of approved therapeutics, several caregivers have turned to investigational new drugs as well as experimental therapies in an effort to save lives. This review aims to summarize the candidates currently under consideration for postexposure use in infected patients during the largest EBOV outbreak in history. © 2015, American Society for Microbiology. All Rights Reserved.


Luo Y.,University of Manitoba | Luo Y.,University of South China
Osteoporosis International | Year: 2016

Osteoporotic fracture has been found to be associated with many clinical risk factors, and the associations have been explored predominantly by evidence-based and case-control approaches. The major challenges emerging from these studies are the large number of risk factors, the difficulty of quantifying them, the incompleteness of the list, and the interdependence of the risk factors. A biomechanical sorting of the risk factors may shed light on resolving these issues. Based on the definition of the load-strength ratio (LSR), we first identified the four biomechanical variables determining fracture risk: the risk of fall, impact force, bone quality, and bone geometry. Then, we explored the links between the FRAX clinical risk factors and the biomechanical variables by looking for evidence in the literature. To accurately assess fracture risk, none of the four biomechanical variables can be ignored, and their values must be subject-specific. A clinical risk factor contributes to osteoporotic fracture by affecting one or more of the biomechanical variables, and a biomechanical variable represents the integral effect of all the clinical risk factors linked to it. The clinical risk factors in FRAX mostly stand for bone quality; the other three biomechanical variables are not adequately represented by the clinical risk factors. From the biomechanical viewpoint, most clinical risk factors are interdependent, as they affect the same biomechanical variable(s). As biomechanical variables must be expressed numerically before their use in calculating the LSR, the numerical value of a biomechanical variable can be used as a gauge of the linked clinical risk factors to measure their integral effect on fracture risk, which may be more efficient than studying each risk factor individually. © 2015, International Osteoporosis Foundation and National Osteoporosis Foundation.
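The LSR logic can be stated compactly. A minimal formulation follows, with the functional groupings of the four biomechanical variables assumed for illustration (the abstract does not give an explicit formula):

```latex
\mathrm{LSR} \;=\; \frac{\text{load}}{\text{strength}}
\;=\; \frac{F_{\mathrm{impact}}\left(\text{fall risk}\right)}
           {S\left(\text{bone quality},\ \text{bone geometry}\right)},
\qquad \text{fracture is predicted when } \mathrm{LSR} \ge 1 .
```

On this reading, each FRAX clinical risk factor enters the ratio only through the biomechanical variable(s) it affects, which is why the authors treat the numerical value of each variable as an integral gauge of its linked risk factors.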


Duan W.H.,Monash University | Wang Q.,University of Manitoba
ACS Nano | Year: 2010

Transportation of water molecules in a carbon nanotube based on an energy pump concept is investigated by molecular dynamics simulations. A small portion of the initially twisted wall of a carbon nanotube is employed to function as an energy pump for possible smooth transportation of water molecules. The momentum and resultant force on a water molecule, and the corresponding displacement and velocity of the molecule, are studied in particular to disclose the transportation process. The efficiency of the transportation is found to depend on the size of the energy pump. Once the process for the transportation of one molecule is elucidated, the transportation of 20 water molecules is simulated to investigate the effect of the environmental temperature and of fluctuations in the nanotube channel on the transportation. It is revealed that the accelerated period for multiple water molecules is longer than that in the transportation of a single water molecule. In addition, fluctuations in the nanotube wall due to buckling propagation, as well as a higher environmental temperature, lead to marked decreases in the water velocity and hence retard the transportation process. © 2010 American Chemical Society.


Bekker A.,University of Manitoba | Holland H.D.,University of Pennsylvania
Earth and Planetary Science Letters | Year: 2012

During the Lomagundi Event, ca. 2.22 to 2.06 Ga, marine carbonates recorded the largest and longest uninterrupted positive carbon isotope excursion, the earliest extensive marine sulfate evaporites were deposited, and the average ferric iron to total iron (Fe2O3/∑Fe) ratio of shales increased dramatically. At the end of the Lomagundi Event, the first economic sedimentary phosphorites were deposited, and the carbon isotope values of marine carbonates returned to ~0‰ (VPDB). Thereafter, marine sulfate evaporites and phosphorites again became scarce, while the average Fe2O3/∑Fe ratio of shales decreased to values intermediate between those of the Archean and Lomagundi-age shales. We propose that the large isotopic and chemical excursions during the Lomagundi Event were caused by a positive feedback between the rise of atmospheric O2, the weathering of sulfides in the pre-2.3 Ga continental crust, and the flux of phosphate to the oceans (cf. Holland, 2002). The rise in the terrestrial phosphate flux led to an increase in the burial rate of organic carbon and a major transfer of oxygen from the carbon to the sulfur cycle. The end of the Lomagundi Event was probably caused by a decrease in the terrestrial phosphate flux related to the weathering of low-pyrite sediments that were deposited during the Lomagundi Event. The rate of deposition of organic matter and the precipitation of sulfate evaporites decreased, the isotopic and chemical excesses of the Lomagundi Event were eliminated, and the ocean-atmosphere system entered the period frequently called the Boring Billion. © 2011 Elsevier B.V.


Grant
Agency: European Commission | Branch: H2020 | Program: RIA | Phase: Health | Award Amount: 2.04M | Year: 2014

IF-EBOLa has been strategically designed to respond efficiently to the critical needs of controlling the spread of the current EBOV outbreak. The work will involve two of the main EVD outbreak sites, Sierra Leone and Guinea. MDs, public health authorities and virus experts working on site, under ethical regulatory rules, will extend their collaboration to companies and institutions to form a consortium of outstanding complementary partners sharing their innovative technological approaches toward a common goal. The project aims to contribute an innovative early and accurate diagnostic enabling early treatment, and includes two phases: (I) a preparation phase covering ethical authorizations, antibody production, and technical and field organization, together with the start of a follow-up of the homeostatic profiles of contacts diagnosed early with EBOV and of self-cured convalescent individuals in the absence of existing treatment, using an ultrasensitive method for detecting pernicious microorganisms from the EC USDEP project (qualified as a European success story in 2010 by the EC Project Officer); and (II) using a widely validated approach revisited with an innovative concept (strongly supported by the EC/EMA-WHO), an experimental passive immunotherapy based on the neutralizing capacity of horse anti-EBOV polyclonal F(ab')2, administered to early-diagnosed patients (n > 300, to be adapted as a function of the epidemic situation) in order to reduce their pre-existing viremia and their mortality and to follow the evolution of their homeostatic profiles during and after this treatment (once patients become convalescent). The evolution of homeostatic status will help generate high-quality scientific data for understanding EVD, the effect of this therapy, and cure parameters characterized at three different levels: immune (transcriptomes, NGS, metagenomics); infectious (agents other than EBOV, DNA arrays); and EBOV diversity (sequencing and metagenomics).


News Article | November 10, 2016
Site: www.npr.org

In excavated waste heaps along the western coast of Greenland, researchers have found evidence that ancient Greenlanders, known as the paleo-Inuit or Saqqaq, may have been eating large amounts of bowhead whale. But these 4,000-year-old "dumpsters" are from millennia before humans had specialized technology to hunt down such massive prey. The pits are filled with the bones from other animals, like harp seals and caribou, but barely any whale. Out of around 100,000 excavated bones, only a mere hundred fragments were identified as bowhead whale parts — perhaps three bones in total. But in the greasy soil, an analysis revealed a great deal of bowhead whale DNA. About half of the mammal DNA recovered from two of these ancient trash piles is from bowhead whales, researchers report in Nature Communications on Tuesday. "That was a surprise, definitely," says Frederik Seersholm, a geneticist at the Natural History Museum of Denmark and Curtin University in Australia. It's enough DNA that it suggests ancient Greenlanders must have been eating large amounts of whale, the team thinks. "The archaeologists have found a lot of harp seal. It was several thousand harp seals they were killing [a year], and apparently it's just as much whale by volume," Seersholm says. "My idea of it is they would have had to go out looking for them." But, without the large multiple-person vessels, specialized harpoons and floats that came to the Arctic some millennia later, it would have been extraordinarily difficult for the Saqqaq culture to hunt animals as large as bowhead whales, Seersholm says. At best, the Saqqaq might have had small, single-person kayaks and small spears or lances. But they might still have been able to do it, Seersholm thinks. After all, others have been able to hunt bowheads using simple spears and small boats. "In old hunting methods from a hundred years ago, people described coming up behind the whale," he says. If Saqqaq hunters ventured out to sea in kayaks, Seersholm says, they would have needed small teams. Maneuvering against the wind and tide, hunters can avoid detection and creep up on resting whales at the surface. "If you stab it with the lance just below the flipper, you can hit it straight into the heart," Seersholm says. "With just small boats and three men, you should be able to kill them." Then, after dragging the whale on land, people would butcher it and carry the meat and blubber back to the settlement for consumption and leave the heavy bones behind, explaining the lack of whale bones from the excavation. But not everyone is convinced. "The DNA is fascinating. The science is indeed impressive," says Brooke Milne, an anthropologist at the University of Manitoba who was not involved with the work. But the interpretation that Saqqaq peoples were actively hunting bowhead whales is a bit fishy, she thinks. For one thing, hunting a bowhead whale in this fashion would have been incredibly dangerous. "Imagine them going out into open water in small watercraft, making the assumption they had seaworthy kayaks, to spear a 50-ton whale behind the flipper," Milne says. "What would happen if they went in the water? What if [the whale] destroyed the boat? You'd lose these crucial members of your society. [The hunters] would drown in the frigid water." The Saqqaq roamed the Arctic in very small groups that would struggle to survive after the loss of a hunter or a team of hunters, Milne says. "I can see going after smaller whale like beluga or narwhal. But if you have a small population, why would you even go after that higher-risk quarry when you have equally reliable, lesser-risk animals?" For that reason, Milne thinks if the Saqqaq were consuming bowhead whales, "it's more likely scavenging a dead or beached whale." And there are still a few key missing items. For example, there was little baleen, the fibrous plates bowhead whales use to filter feed and which are made of a valuable and tough, sinewy material. And the stone tools that the Saqqaq used are mostly made of flint, which quickly gums up with fat when butchering a massive, blubbery animal like whale. And finally, Milne says that it's not impossible that oils from dead whales seeped into the archaeological sites over the years — making this a case of contamination rather than evidence of whale dinners. Even so, bowhead DNA is still in those ancient waste heaps. Whether early Greenlanders were hunting them or scavenging them off the beach is an open question, says Marcello Manino, an archaeologist at the University of Aarhus who was also not involved with the study. He says it's exciting that the question is there at all. "Sometimes the recoverable remains are not giving us the full picture of what might have been taken back," he says. Analyzing DNA in the soils has helped fill that picture in. "This is the first case study like this that I know of, and I'm sure it will be applied more now that we know what the potential is." For example, perhaps researchers can look for mammoth DNA in ice-age trash piles. "It could be another case where a lot more mammoth meat was being consumed than we might think from the actual bones," he says. Like whales, those too might be so large that people would only have carved off the meat to carry back. "That's why this is so interesting. It could open up a new kind of research," Manino says.


WINNIPEG, MANITOBA--(Marketwired - Dec. 12, 2016) - Winston Gold Mining Corp. ("Winston Gold" or the "Corporation") (CSE:WGC) (CSE:WGC.CN) (OTCQB:WGMCF) is pleased to announce the results of its Annual General and Special Meeting of Shareholders (the "Meeting") held on December 12, 2016, in Vancouver. At the Meeting, the shareholders of the Corporation unanimously approved all resolutions put before them by management, including the election of directors, re-appointment of the auditor, continuation of the Corporation into British Columbia from Manitoba and the accompanying provisions, and the Corporation's 10% rolling stock option plan.

At the Meeting, the Corporation's shareholders re-elected Murray Nye, Max Polinsky, Darwin Ben Porterfield, and Allan Fabbro as directors of the Corporation. In addition, the shareholders elected Stanley Stewin as a director. Mr. Stewin is a Member of the Institute of Chartered Accountants of Manitoba (2007 to present) and obtained a Bachelor of Commerce (Honours) from the University of Manitoba. Mr. Stewin has over 20 years' experience in the agricultural industry. He is currently Head of Audits at the Canadian Grain Commission in Winnipeg, Manitoba (2007 to present), where he manages a staff of five professionals. He was previously Head of Country Operations, Eastern Region, at Agricore United, Winnipeg, Manitoba (1985 to 2007), an agricultural business with a grain handle in excess of 11 million metric tons and crop production sales in excess of $900 million. Mr. Stewin has extensive experience in restructuring and re-organizing departments and organizations, involving business analysis, developing business plans, leading negotiations, and community consultations.

The Corporation is also pleased to announce the appointment of Ronan Sabo-Walsh as its Chief Financial Officer, effective immediately. He replaces Mr. Max Polinsky, who has acted as the Corporation's Chief Financial Officer since September 29, 2014. Mr. Polinsky will retain his status as the President and a director of the Corporation. Mr. Sabo-Walsh holds a Bachelor of Commerce degree in Finance from the University of British Columbia and has over 5 years' experience in corporate finance. He has been employed since 2011 by V Baron Global Financial Canada Ltd., a full-service merchant bank providing ongoing financial and back-office support to public companies, and currently holds the title of Assistant Manager, Corporate Finance. Mr. Sabo-Walsh is also the VP, Finance of Novo Resources Corp., a mineral exploration company listed on the TSX Venture Exchange. Mr. Sabo-Walsh has extensive experience with public listings, merger transactions, and public company management.

Winston Gold is a junior mining company focused on advancing high-grade, low-cost mining opportunities into production. Toward that end, the Corporation has acquired two under-explored and under-exploited gold/silver mining opportunities: the Winston Gold project near Helena, Montana, and the Gold Ridge project near Willcox, Arizona.

On behalf of the Board of Directors of the Company.

The CSE has neither approved nor disapproved the information contained herein.


News Article | November 18, 2015
Site: www.nature.com

Two satellites that were accidentally launched into the wrong orbit will be repurposed to make the most stringent test to date of a prediction made by Albert Einstein's general theory of relativity — that clocks run more slowly the closer they are to heavy objects. The satellites, operated by the European Space Agency (ESA), were mislaunched last year by a Russian Soyuz rocket that put them into elliptical, rather than circular, orbits. This left them unfit for their intended use as part of a European global-navigation system called Galileo. But the two spacecraft still have atomic clocks on board. According to general relativity, the clocks' 'ticking' should slow down as the satellites move closer to Earth in their wonky orbits, because the heavy planet's gravity bends the fabric of space-time. The clocks should then speed up as the spacecraft recede. On 9 November, ESA announced that teams at Germany's Center of Applied Space Technology and Microgravity (ZARM) in Bremen and the department of Time–Space Reference Systems at the Paris Observatory will now track this rise and fall [1]. By comparing the speed of the clocks' ticking with the spacecraft's known altitudes — pinpointed within a few centimetres by monitoring stations on the ground, which bounce lasers off the satellites — the teams can test the accuracy of Einstein's theory. Launching space experiments takes enormous time and money, so using the off-course Galileo satellites is "a brilliant idea", says Gerald Gwinner, a physicist at the University of Manitoba in Winnipeg, Canada, who is not involved in the work. "Even a mishap can be turned into something useful and fascinating," he adds. "This is a classic case of 'When life gives you lemons, make lemonade'." In 1976, NASA launched an atomic clock aboard Gravity Probe A from Earth's surface, 10,000 kilometres into space, to compare its ticking with an identical clock on the ground. But that probe stayed in the air for just shy of two hours. The Galileo satellites, by contrast, will conduct experiments for a year, climbing and falling by 8,500 kilometres twice each day. The test is the first time that scientists have had the chance to improve on the 1976 measurement [2], says ESA. The agency's senior satnav adviser, Javier Ventura-Traveset, says that it will be the most accurate assessment ever conducted of how gravity affects the passing of time. (A 2010 experiment [3] claimed a measurement 10,000 times more precise than Gravity Probe A, but the assertion is disputed [4, 5].) ESA expects the results, which should arrive in around a year, to be four times more accurate than those of Gravity Probe A — enabling the agency to test whether theory agrees with reality to a precision of below 0.004%. No one expects Einstein's theory, published almost 100 years ago, to break down — it has passed every test thrown at it. But the results should nonetheless prove fascinating, says Gwinner. "While we don't know if and where relativity might break down, it is important to push the limits of our knowledge further and further, to eventually find hints of deviations. If this can be done as a money-saving opportunity, even better." A future ESA experiment called the Atomic Clock Ensemble in Space, or ACES, is scheduled to fly on the International Space Station in 2017, and will push Einstein's theory to even greater limits, with a precision that may reach 0.0002%. In the meantime, the Galileo satellites might still find uses in navigation, adds Ventura-Traveset. Since the spacecraft's launch, a series of manoeuvres has gone some way to rectifying their errant orbits. This could potentially permit them to participate in the Galileo system in the future while simultaneously carrying out the relativity tests. But that has yet to be decided, he says.
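The size of the effect the teams are after can be estimated from first principles. Below is a minimal Python sketch of the weak-field gravitational redshift between perigee and apogee; the altitudes are assumptions chosen only to match the ~8,500 km swing quoted above, not official orbit data.

```python
import math

GM_EARTH = 3.986004418e14    # m^3 s^-2, Earth's standard gravitational parameter
C = 299_792_458.0            # m/s, speed of light
R_EARTH = 6.371e6            # m, mean Earth radius

# Assumed perigee/apogee altitudes (illustrative, apogee - perigee ~ 8,500 km):
r_perigee = R_EARTH + 17.5e6
r_apogee = R_EARTH + 26.0e6

# Weak-field gravitational redshift between two radii:
#   df/f = [U(r_apogee) - U(r_perigee)] / c^2,  with potential U(r) = -GM/r,
# i.e. the clock ticks more slowly (lower frequency) at perigee.
dff = GM_EARTH * (1.0 / r_perigee - 1.0 / r_apogee) / C**2
print(f"fractional frequency swing per orbit: {dff:.1e}")   # ~5e-11
```

A fractional frequency swing of a few parts in 10^11, repeated every orbit for a year, is what makes the sub-0.004% test quoted above feasible: the periodic signal sits comfortably above the stability floor of the satellites' atomic clocks.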


News Article | December 20, 2016
Site: www.businesswire.com

VAN NUYS, Calif.--(BUSINESS WIRE)--Valley Presbyterian Hospital is pleased to announce the appointment of Kevin Rice, MD, to the Valley Presbyterian Hospital Board of Directors. “Dr. Rice offers a diverse perspective and expertise from his work serving patients in the San Fernando Valley,” said Gustavo Valdespino, Valley Presbyterian Hospital President & CEO. “We look forward to working with an exceptional community leader and physician who has demonstrated leadership and dedication to Valley Presbyterian Hospital and the surrounding community.”

Kevin Rice, MD, is the Medical Director of the Radiology Division of Valley Presbyterian Hospital, and has been a physician at Valley Presbyterian Hospital since 2007. Dr. Rice completed his medical education at the University of Manitoba, and completed a four-year diagnostic radiology residency at the University of Ottawa. He also completed a year of sub-specialty training and research in a fellowship awarded to him at Brown University in Providence, Rhode Island. Following his training, Dr. Rice went into private practice with Kern Radiology, one of the largest radiology groups in California, where he was on the Executive Board serving as Vice President of Finance and Chief Financial Officer. In 2000, Dr. Rice joined Renaissance Imaging Medical Associates, where he continues to practice. He is board certified by the American Board of Radiology.

Our current 2016 Board of Directors are:
Greg Kay, MD, Chairman of the Board
David W. Fleming, Chairman Emeritus, Special Advisor, State Senator Robert Hertzberg
Peter Koetters, MD, Chief of Staff, Valley Presbyterian Hospital
David Adelman, Secretary, Partner, Greenberg & Bass, LLP
Alex Guerrero, Treasurer, Senior Vice President, VEDC
Daniel Chandler, President, Chandler Pratt & Partners
Dianne F. Harrison, PhD, President, California State University, Northridge
Luca Jacobellis, President, Cal Net Technology Group
Matthew Mischel, MD, Physician, Valley Internal Medicine & Nephrology Medical Group
Todd Moldawer, MD, Physician, Southern California Orthopedic Institute (SCOI)
Ganesa Pandian, MD, Medical Director of Cardiology, Valley Presbyterian Hospital
Kevin Rice, MD, Medical Director of Radiology, Valley Presbyterian Hospital
Barbara Romero, Commissioner, Los Angeles Public Works
Sukshma Sreepathi, MD, Physician, President, Tri-Valley Neonatal Medical Group
Gustavo Valdespino, President & CEO, Valley Presbyterian Hospital
Vladimir Victorio, Vice President – Senior Private Banker, Wells Fargo Wealth Advisors

Valley Presbyterian Hospital is a 350-bed facility that ranks among the largest acute care hospitals in the San Fernando Valley. Founded in 1958, the nonprofit, non-sectarian, independent, community hospital provides high quality, patient-centered care through leading-edge technology and a full range of medical services. For more information, visit www.valleypres.org.


News Article | November 3, 2016
Site: motherboard.vice.com

The mystery of a "pinging" sound emanating from the Arctic seafloor in Canada, which local hunters say has been driving away wildlife, just grew deeper. The Canadian Department of National Defence (DND) dispatched a military plane to investigate the noise after the territory of Nunavut asked for assistance in determining its origin. When the crew returned, they reported that they found nothing except for some whales and walruses. "The air crew performed various multi-sensor searches in the area, including an acoustic search for 1.5 hours, without detecting any acoustic anomalies," DND spokesperson Ashley Lemire wrote me in an email. "The crew did not detect any surface or subsurface contacts. The crew did observe two pods of whales and six walruses in the area of interest." Locals had theorized the noise might be due to local mining activities, or even environmentalists, although both denied it. So could whales really be to blame? It wouldn't be the first time that whale sounds were confused with something more sinister. In 2014, amateur zoologists recorded what they believed to be Vermont's own version of the Loch Ness Monster: a creature named Champ, after Lake Champlain. Professionals, however, thought it might be whales echolocating. (Beluga whales can live both in cold ocean water, like Canada's Arctic, and in warmer freshwater.) According to University of Manitoba professor Steve Ferguson, who studies the evolutionary ecology of large Arctic mammals like whales, it's possible but highly unlikely that whales are the cause of the mysterious pinging. "Beluga and narwhal whales use echolocation commonly," Ferguson said. This echolocation "might scare away fish that whales might eat, but it's unlikely it would scare away wildlife." Moreover, Ferguson said, if you're close enough to hear a whale's pinging echolocation, instead of their booming calls, then you're likely swimming right beside it. "If you heard the echolocation, you'd probably be able to attribute it to the whale," he said. Bowhead whales emit a large, low-frequency sound to communicate with each other over long distances, Ferguson said, which could conceivably be perceived as a sort of "hum," another word used by Nunavut hunters to describe this mysterious sound. But there's no way it could be heard as a ping, Ferguson contended. According to the DND, the species of whale spotted by the plane wasn't reported. "I think the assumption is that the pinging is a more human sound," Ferguson said, adding yet another layer of intrigue to the case of the unknown Arctic ping. Someone really needs to call up Mulder and Scully.


Kirpalani H.,Children's Hospital of Philadelphia | Kirpalani H.,McMaster University | Millar D.,Royal Maternity Hospital | Lemyre B.,University of Ottawa | And 3 more authors.
New England Journal of Medicine | Year: 2013

BACKGROUND: To reduce the risk of bronchopulmonary dysplasia in extremely-low-birth-weight infants, clinicians attempt to minimize the use of endotracheal intubation by the early introduction of less invasive forms of positive airway pressure. METHODS: We randomly assigned 1009 infants with a birth weight of less than 1000 g and a gestational age of less than 30 weeks to one of two forms of noninvasive respiratory support - nasal intermittent positive-pressure ventilation (IPPV) or nasal continuous positive airway pressure (CPAP) - at the time of the first use of noninvasive respiratory support during the first 28 days of life. The primary outcome was death before 36 weeks of postmenstrual age or survival with bronchopulmonary dysplasia. RESULTS: Of the 497 infants assigned to nasal IPPV for whom adequate data were available, 191 died or survived with bronchopulmonary dysplasia (38.4%), as compared with 180 of 490 infants assigned to nasal CPAP (36.7%) (adjusted odds ratio, 1.09; 95% confidence interval, 0.83 to 1.43; P = 0.56). The frequencies of air leaks and necrotizing enterocolitis, the duration of respiratory support, and the time to full feedings did not differ significantly between treatment groups. CONCLUSIONS: Among extremely-low-birth-weight infants, the rate of survival to 36 weeks of post-menstrual age without bronchopulmonary dysplasia did not differ significantly after noninvasive respiratory support with nasal IPPV as compared with nasal CPAP. Copyright © 2013 Massachusetts Medical Society.
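As a quick arithmetic check, the unadjusted odds ratio can be recomputed from the counts in the abstract. The Python sketch below uses Woolf's logit method for the confidence interval and, as expected, lands close to the adjusted figures reported above (the published estimate additionally adjusts for trial covariates):

```python
import math

# Counts reported in the abstract (death or BPD vs. not, by arm):
a, b = 191, 497 - 191    # nasal IPPV: events, non-events
c, d = 180, 490 - 180    # nasal CPAP: events, non-events

or_unadj = (a * d) / (b * c)
se = math.sqrt(1/a + 1/b + 1/c + 1/d)          # Woolf's logit-scale SE
lo = math.exp(math.log(or_unadj) - 1.96 * se)
hi = math.exp(math.log(or_unadj) + 1.96 * se)
print(f"unadjusted OR = {or_unadj:.2f} (95% CI {lo:.2f} to {hi:.2f})")
# -> about 1.07 (0.83 to 1.39), consistent with the adjusted 1.09 (0.83 to 1.43)
```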


Papakyriakou T.,University of Manitoba | Miller L.,Canadian Department of Fisheries and Oceans
Annals of Glaciology | Year: 2011

Springtime measurements of CO2 exchange over seasonal sea ice in the Canadian Arctic Archipelago using eddy covariance show that CO2 was generally released to the atmosphere during the cold (ice surface temperatures less than about -6°C) early part of the season, but was absorbed from the atmosphere as warming advanced. Hourly maximum efflux and uptake rates approached 1.0 and -3.0 μmol m⁻² s⁻¹, respectively. These CO2 flux rates are far greater than previously reported over sea ice and are comparable in magnitude to exchanges observed within other systems (terrestrial and marine). Uptake generally occurred for wind speeds in excess of 6 m s⁻¹ and corresponded to local maxima in temperature at the snow-ice interface and net radiation. Efflux, on the other hand, occurred under weaker wind speeds and periods of local minima in temperature and net radiation. The wind speeds associated with uptake are above a critical threshold for drifting and blowing snow, suggesting that ventilation of the snowpack and turbulent exchange with the brine-wetted grains are an important part of the process. Both the uptake and release fluxes may be at least partially driven by the temperature sensitivity of the carbonate system speciation in the brine-wetted snow base and upper sea ice. The period of maximum springtime CO2 uptake occurred as the sea-ice permeability increased, passing a critical threshold allowing vertical brine movement throughout the sea-ice sheet. At this point, atmospheric CO2 would have been available to the under-ice sea-water carbonate system, with ramifications for carbon cycling in sea-ice-dominated polar waters.


Vriend J.,University of Manitoba | Reiter R.J.,University of Texas Health Science Center at San Antonio
Journal of Pineal Research | Year: 2015

The expression of 'clock' genes occurs in all tissues, but especially in the suprachiasmatic nuclei (SCN) of the hypothalamus, groups of neurons in the brain that regulate circadian rhythms. Melatonin is secreted by the pineal gland in a circadian manner as influenced by the SCN. There is also considerable evidence that melatonin, in turn, acts on the SCN directly influencing the circadian 'clock' mechanisms. The most direct route by which melatonin could reach the SCN would be via the cerebrospinal fluid of the third ventricle. Melatonin could also reach the pars tuberalis (PT) of the pituitary, another melatonin-sensitive tissue, via this route. The major 'clock' genes include the period genes, Per1 and Per2, the cryptochrome genes, Cry1 and Cry2, the clock (circadian locomotor output cycles kaput) gene, and the Bmal1 (aryl hydrocarbon receptor nuclear translocator-like) gene. Clock and Bmal1 heterodimers act on E-box components of the promoters of the Per and Cry genes to stimulate transcription. A negative feedback loop between the cryptochrome proteins and the nucleus allows the Cry and Per proteins to regulate their own transcription. A cycle of ubiquitination and deubiquitination controls the levels of CRY protein degraded by the proteasome and, hence, the amount of protein available for feedback. Thus, it provides a post-translational component to the circadian clock mechanism. BMAL1 also stimulates transcription of REV-ERBα and, in turn, is also partially regulated by negative feedback by REV-ERBα. In the 'black widow' model of transcription, proteasomes destroy transcription factors that are needed only for a particular period of time. In the model proposed herein, the interaction of melatonin and the proteasome is required to adjust the SCN clock to changes in the environmental photoperiod. In particular, we predict that melatonin inhibition of the proteasome interferes with negative feedback loops (CRY/PER and REV-ERBα) on Bmal1 transcription genes in both the SCN and PT. Melatonin inhibition of the proteasome would also tend to stabilize BMAL1 protein itself in the SCN, particularly at night when melatonin is naturally elevated. Melatonin inhibition of the proteasome could account for the effects of melatonin on circadian rhythms associated with molecular timing genes. The interaction of melatonin with the proteasome in the hypothalamus also provides a model for explaining the dramatic 'time of day' effect of melatonin injections on reproductive status of seasonal breeders. Finally, the model predicts that a proteasome inhibitor such as bortezomib would modify circadian rhythms in a manner similar to melatonin. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.


Wilding J.P.H.,University of Liverpool | Woo V.,University of Manitoba | Soler N.G.,Hospital Sisters Health System Medical Group | Pahor A.P.,Astrazeneca | And 3 more authors.
Annals of Internal Medicine | Year: 2012

Background: Dapagliflozin, a selective inhibitor of sodium-glucose cotransporter 2, may improve glycemic control with a lower dose of insulin and attenuate the associated weight gain in patients with inadequate control despite high doses of insulin. Objective: To evaluate the efficacy and safety of adding dapagliflozin therapy in patients whose type 2 diabetes mellitus is inadequately controlled with insulin with or without oral antidiabetic drugs. Design: A 24-week, randomized, placebo-controlled, multicenter trial followed by a 24-week extension period. An additional 56-week extension period is ongoing. (ClinicalTrials.gov registration number: NCT00673231) Setting: 126 centers in Europe and North America from 30 April 2008 to 19 November 2009. Patients: 808 patients with inadequately controlled type 2 diabetes mellitus receiving at least 30 U of insulin daily, with or without up to 2 oral antidiabetic drugs. Intervention: Patients were randomly assigned in a 1:1:1:1 ratio and allocated with a computer-generated scheme to receive placebo or 2.5, 5, or 10 mg of dapagliflozin, once daily, for 48 weeks. Measurements: The primary outcome was change in hemoglobin A1c from baseline to 24 weeks. Secondary outcomes included changes in body weight, insulin dose, and fasting plasma glucose level at 24 weeks and during the 24-week extension period. Adverse events were evaluated throughout both 24-week periods. Results: 800 patients were analyzed. After 24 weeks, mean hemoglobin A1c decreased by 0.79% to 0.96% with dapagliflozin compared with 0.39% with placebo (mean difference, -0.40% [95% CI, -0.54% to -0.25%] in the 2.5-mg group, -0.49% [CI, -0.65% to -0.34%] in the 5-mg group, and -0.57% [CI, -0.72% to -0.42%] in the 10-mg group). Daily insulin dose decreased by 0.63 to 1.95 U with dapagliflozin and increased by 5.65 U with placebo (mean difference, -7.60 U [CI, -10.32 to -4.87 U] in the 2.5-mg group, -6.28 U [CI, -8.99 to -3.58 U] in the 5-mg group, and -6.82 U [CI, -9.56 to -4.09 U] in the 10-mg group). Body weight decreased by 0.92 to 1.61 kg with dapagliflozin and increased by 0.43 kg with placebo (mean differences, -1.35 kg [CI, -1.90 to -0.80 kg] in the 2.5-mg group, -1.42 kg [CI, -1.97 to -0.88 kg] in the 5-mg group, and -2.04 kg [CI, -2.59 to -1.48 kg] in the 10-mg group). These effects were maintained at 48 weeks. Compared with the placebo group, patients in the pooled dapagliflozin groups had a higher rate of hypoglycemic episodes (56.6% vs. 51.8%), events suggesting genital infection (9.0% vs. 2.5%), and events suggesting urinary tract infection (9.7% vs. 5.1%). Limitation: Insulin doses were not titrated to target, and the study was not designed to evaluate long-term safety. Conclusion: Dapagliflozin improves glycemic control, stabilizes insulin dosing, and reduces weight without increasing major hypoglycemic episodes in patients with inadequately controlled type 2 diabetes mellitus. © 2012 American College of Physicians.


Lynch J.P.,University of California at Los Angeles | Clark N.M.,Loyola University | Zhanel G.G.,University of Manitoba
Expert Opinion on Pharmacotherapy | Year: 2013

Introduction: Bacteria within the family Enterobacteriaceae are important pathogens in nosocomial and community settings. Over the past two decades, antimicrobial resistance among Enterobacteriaceae dramatically escalated worldwide. The authors review the mechanisms of antimicrobial resistance among Enterobacteriaceae, epidemiology and global spread of resistance elements and discuss therapeutic options. Areas covered: An exhaustive search for literature relating to Enterobacteriaceae was performed using PubMed, using the following key words: Enterobacteriaceae; Klebsiella pneumoniae; Escherichia coli; antimicrobial resistance; plasmids; global epidemiology; carbapenemases (CPEs); extended spectrum β-lactamases (ESBLs) and multidrug resistance (MDR). Expert opinion: Enterobacteriaceae are inhabitants of intestinal flora and spread easily among humans (via hand carriage, contaminated food or water or environmental sources). Antimicrobial resistance may develop via plasmids, transposons or other mobile resistance elements. Mutations conferring resistance typically increase over time; the rate of increase is amplified by selection pressure from antibiotic use. Factors that enhance spread of antimicrobial resistance include: crowding; lack of hygiene; overuse and over-the-counter use of antibiotics; tourism; refugees and international travel. Clonal spread of resistant organisms among hospitals, geographic regions and continents has globally fueled the explosive rise in resistance. The emergence and widespread dissemination of MDR clones containing novel resistance elements (particularly ESBLs and CPEs) has greatly limited therapeutic options. In some cases, infections due to MDR Enterobacteriaceae are untreatable with existing antimicrobial agents. The authors discuss current and future therapeutic options for difficult-to-treat infections due to these organisms. © 2013 Informa UK, Ltd.


Vriend J.,University of Manitoba | Reiter R.J.,University of Texas Health Science Center at San Antonio
Life Sciences | Year: 2014

Proteasome inhibitors and melatonin are both intimately involved in the regulation of major signal transduction proteins including p53, cyclin p27, transcription factor NF-κB, apoptotic factors Bax and Bim, caspase 3, caspase 9, anti-apoptotic factor Bcl-2, TRAIL, NRF2 and transcription factor beta-catenin. The fact that these factors are shared targets of the proteasome inhibitor bortezomib and melatonin suggests the working hypothesis that melatonin is a proteasome inhibitor. Supporting this hypothesis is the fact that melatonin shares with bortezomib a selective pro-apoptotic action in cancer cells. Furthermore, both bortezomib and melatonin increase the sensitivity of human glioma cells to TRAIL-induced apoptosis. Direct evidence for melatonin inhibition of the proteasome was recently found in human renal cancer cells. We raise the issue whether melatonin should be investigated in combination with proteasome inhibitors to reduce toxicity, to reduce drug resistance, and to enhance efficacy. This may be particularly valid for hematological malignancies in which proteasome inhibitors have been shown to be useful. Further studies are necessary to determine whether the actions of melatonin on cellular signaling pathways are due to a direct inhibitory effect on the catalytic core of the proteasome, due to an inhibitory action on the regulatory particle of the proteasome, or due to an indirect effect of melatonin on phosphorylation of signal transducing factors. © 2014 Elsevier Inc.


Vriend J.,University of Manitoba | Reiter R.J.,University of Texas Health Science Center at San Antonio
Cellular and Molecular Life Sciences | Year: 2014

Melatonin has been widely studied for its role in photoperiodism in seasonal breeders; it is also a potent antioxidant. Ubiquitin, a protein also widespread in living cells, contributes to many cellular events, although the most well known is that of tagging proteins for destruction by the proteasome. Herein, we suggest a model in which melatonin interacts with the ubiquitin-proteasome system to regulate a variety of seemingly unrelated processes. Ubiquitin, for example, is a major regulator of central activity of thyroid hormone type 2 deiodinase; the subsequent regulation of T3 may be central to the melatonin-induced changes in seasonal reproduction and seasonal changes in metabolism. Both melatonin and ubiquitin also have important roles in protecting cells from oxidative stress. We discuss the interaction of melatonin and the ubiquitin-proteasome system in oxidative stress through regulation of the ubiquitin-activating enzyme, E1. Previous reports have shown that glutathiolation of this enzyme protects proteins from unnecessary degradation. In addition, evidence is discussed concerning the interaction of ubiquitin and melatonin in activation of the transcription factor NF-κB as well as modulating cellular levels of numerous signal transducing factors including the tumor suppressor, p53. Some of the actions of melatonin on the regulatory particle of the proteasome appear to be related to its inhibition of the calcium-dependent calmodulin kinase II, an enzyme which reportedly copurifies with proteasomes. Many of the actions of melatonin on signal transduction are similar to those of a proteasome inhibitor. While these actions of melatonin could be explained by a direct inhibitory action on the catalytic core particle of the proteasome, this has not been experimentally verified. If our hypothesis of melatonin as a general inhibitor of the ubiquitin-proteasome system is confirmed, it is predicted that more examples of this interaction will be demonstrated in a variety of tissues in which ubiquitin and melatonin co-exist. Furthermore, the hypothesis of melatonin as an inhibitor of the ubiquitin-proteasome system will be a very useful model for clinical testing of melatonin. © 2014 Springer.


Coulthard S.,University of Ulster | Johnson D.,University of Manitoba | McGregor J.A.,University of Sussex
Global Environmental Change | Year: 2011

The purpose of this paper is to explore the extent to which a social wellbeing approach can offer a useful way of addressing the policy challenge of reconciling poverty and environmental objectives for development policy makers. In order to provide detail from engagement with a specific policy challenge it takes as its illustrative example the global fisheries crisis. This crisis portends not only an environmental disaster but also a catastrophe for human development and for the millions of people directly dependent upon fish resources for their livelihoods and food security. The paper presents the argument for framing the policy problem using a social conception of human wellbeing, suggesting that this approach provides insights which have the potential to improve fisheries policy and governance. By broadening the scope of analysis to consider values, aspirations and motivations and by focusing on the wide range of social relationships that are integral to people achieving their wellbeing, it provides a basis for better understanding the competing interests in fisheries which generate conflict and which often undermine existing policy regimes. © 2011 Elsevier Ltd.


Hossain E.,University of Manitoba | Rasti M.,Amirkabir University of Technology | Tabassum H.,University of Manitoba | Abdelnasser A.,University of Manitoba
IEEE Wireless Communications | Year: 2014

The evolving fifth generation (5G) cellular wireless networks are envisioned to overcome the fundamental challenges of existing cellular networks, for example, higher data rates, excellent end-to-end performance, and user-coverage in hot-spots and crowded areas with lower latency, energy consumption, and cost per information transfer. To address these challenges, 5G systems will adopt a multi-tier architecture consisting of macrocells, different types of licensed small cells, relays, and device-to-device (D2D) networks to serve users with different quality-of-service (QoS) requirements in a spectrum and energy-efficient manner. Starting with the visions and requirements of 5G multi-tier networks, this article outlines the challenges of interference management (e.g. power control, cell association) in these networks with shared spectrum access (i.e. when the different network tiers share the same licensed spectrum). It is argued that the existing interference management schemes will not be able to address the interference management problem in prioritized 5G multi-tier networks where users in different tiers have different priorities for channel access. In this context a survey and qualitative comparison of the existing cell association and power control schemes is provided to demonstrate their limitations for interference management in 5G networks. Open challenges are highlighted and guidelines are provided to modify the existing schemes in order to overcome these limitations and make them suitable for the emerging 5G systems. © 2002-2012 IEEE.


Andraschko F.,University of Kaiserslautern | Andraschko F.,University of Manitoba | Enss T.,University of Heidelberg | Sirker J.,University of Kaiserslautern | Sirker J.,University of Manitoba
Physical Review Letters | Year: 2014

We propose to observe many-body localization in cold atomic gases by realizing a Bose-Hubbard chain with binary disorder and studying its nonequilibrium dynamics. In particular, we show that measuring the difference in occupation between even and odd sites, starting from a prepared density-wave state, provides clear signatures of localization. Furthermore, we confirm as hallmarks of the many-body localized phase a logarithmic increase of the entanglement entropy in time and Poissonian level statistics. Our numerical density-matrix renormalization group calculations for infinite system size are based on a purification approach; this allows us to perform the disorder average exactly, producing data free of statistical noise, with maximal simulation times up to a factor of 10 longer than in the clean case. © 2014 American Physical Society.


Perera N.,ERL Phase Power Technologies | Rajapakse A.D.,University of Manitoba
IEEE Transactions on Power Delivery | Year: 2013

This paper presents the development of a new protection method for series-compensated double-circuit transmission lines using current transients. Using the proposed method, the faulted circuit can be identified locally, by comparing the polarities of wavelet coefficients of the branch currents. Applicability of the proposed method is demonstrated using a 500-kV transmission system simulated in an electromagnetic transient simulation program. Comparisons with the conventional distance and phase comparison protection schemes show that the proposed method can provide faster and more reliable protection for the series-compensated double-circuit transmission systems. The security of the relay can be further enhanced if the fault direction information is exchanged between the relays at two ends. © 1986-2012 IEEE.
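To make the polarity-comparison idea concrete, here is a toy numpy sketch: it extracts first-level Haar detail coefficients from the two circuits' current records and compares their signs at the instant of the strongest transient. The decision rule, signal values, and window are illustrative assumptions only; the published relay uses engineered wavelet filters, sampling windows, and security thresholds that are not reproduced here.

```python
import numpy as np

def haar_detail(x):
    """First-level Haar DWT detail coefficients of a 1-D current record."""
    x = np.asarray(x, dtype=float)
    x = x[: len(x) // 2 * 2]                      # trim to even length
    return (x[0::2] - x[1::2]) / np.sqrt(2.0)

def faulted_circuit(i_ckt1, i_ckt2):
    """Toy faulted-circuit selector for a double-circuit line.

    Compares the polarities of the two circuits' detail coefficients at the
    strongest transient; opposite polarity suggests a fault internal to one
    circuit, and the larger magnitude marks the faulted one.
    Returns 1, 2, or None (no polarity disagreement found).
    """
    d1, d2 = haar_detail(i_ckt1), haar_detail(i_ckt2)
    n = min(len(d1), len(d2))
    d1, d2 = d1[:n], d2[:n]
    k = int(np.argmax(np.maximum(np.abs(d1), np.abs(d2))))
    if np.sign(d1[k]) != np.sign(d2[k]):
        return 1 if abs(d1[k]) > abs(d2[k]) else 2
    return None

# Synthetic 64-sample records: a sharp current reversal on circuit 1 only.
i1 = np.full(64, 100.0); i1[33:] = -800.0         # faulted circuit
i2 = np.full(64, 100.0); i2[33:] = 300.0          # healthy circuit (induced rise)
print(faulted_circuit(i1, i2))                    # -> 1
```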


Meng J.,University of Houston | Yin W.,Rice University | Li H.,University of Tennessee at Knoxville | Hossain E.,University of Manitoba | Han Z.,University of Houston
IEEE Journal on Selected Areas in Communications | Year: 2011

Spectrum sensing, which aims at detecting spectrum holes, is the precondition for the implementation of cognitive radio (CR). Collaborative spectrum sensing among the cognitive radio nodes is expected to improve the ability of checking complete spectrum usage. Due to hardware limitations, each cognitive radio node can only sense a relatively narrow band of radio spectrum. Consequently, the available channel sensing information is far from being sufficient for precisely recognizing the wide range of unoccupied channels. Aiming at breaking this bottleneck, we propose to apply matrix completion and joint sparsity recovery to reduce sensing and transmission requirements and improve sensing results. Specifically, equipped with a frequency selective filter, each cognitive radio node senses linear combinations of multiple channel information and reports them to the fusion center, where occupied channels are then decoded from the reports by using novel matrix completion and joint sparsity recovery algorithms. As a result, the number of reports sent from the CRs to the fusion center is significantly reduced. We propose two decoding approaches, one based on matrix completion and the other based on joint sparsity recovery, both of which allow exact recovery from incomplete reports. The numerical results validate the effectiveness and robustness of our approaches. In particular, in small-scale networks, the matrix completion approach achieves exact channel detection with a number of samples no more than 50% of the number of channels in the network, while joint sparsity recovery achieves similar performance in large-scale networks. © 2006 IEEE.
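As one concrete instance of the matrix-completion side of the proposal, the sketch below implements the standard soft-thresholded SVD (SoftImpute-style) iteration on a toy low-rank "channel report" matrix with half the entries observed. It is a generic nuclear-norm heuristic under assumed data, not the specific decoding algorithm of the paper:

```python
import numpy as np

def soft_impute(M_obs, mask, lam=0.5, iters=200):
    """Complete a low-rank matrix from partial entries by iterative
    soft-thresholded SVD (a standard nuclear-norm heuristic).

    M_obs: matrix holding the observed entries (zeros elsewhere)
    mask:  boolean array, True where an entry was observed
    """
    X = np.zeros_like(M_obs)
    for _ in range(iters):
        # Keep observed data; fill missing entries with the current estimate.
        Y = np.where(mask, M_obs, X)
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        s = np.maximum(s - lam, 0.0)           # soft-threshold singular values
        X = (U * s) @ Vt
    return X

# Tiny demo: a rank-2 matrix with 50% of entries observed at random.
rng = np.random.default_rng(0)
M = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 40))
mask = rng.random(M.shape) < 0.5
M_hat = soft_impute(np.where(mask, M, 0.0), mask)
print("relative error:", np.linalg.norm(M_hat - M) / np.linalg.norm(M))
```

The joint-sparsity decoder plays the analogous role when the unknown is a sparse occupancy vector shared across nodes rather than a low-rank matrix.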


Rasti M.,Amirkabir University of Technology | Hossain E.,University of Manitoba
IEEE Transactions on Wireless Communications | Year: 2013

A distributed priority-based power and admission control algorithm is presented to address the priority-based gradual removal problem in cellular wireless networks. We assume that there exist two classes of priority for users (high-priority users versus low-priority users) and that a minimal number of low-priority users should be gradually removed, subject to the constraint that all high-priority users are supported with their target signal-to-interference-plus-noise ratios (SINRs), which is assumed feasible. In our proposed algorithm, each high-priority user rigidly tracks its target SINR by employing the conventional target-SINR tracking power control algorithm, and each transmitting low-priority user tracks its target SINR as long as its required transmit power is below a threshold; otherwise, it temporarily removes itself. Each removed low-priority user resumes its transmission if the required transmit power to reach its target SINR goes below a given threshold which is different from the former. Of these two thresholds, whose values are analytically obtained, the former is provided by the base station and the latter is obtained in a distributed manner as a function of the former. We show that the distributed power-update function corresponding to our proposed algorithm has at least one fixed point, which is not unique in general. The point to which our proposed algorithm converges depends on the initial transmit power levels of the users. We also show that our proposed algorithm, at each of its fixed points, not only provides all high-priority users with their (feasible) target SINRs but also guarantees that no low-priority user is erroneously removed (i.e., no additional low-priority user could be supported along with the currently supported users). Furthermore, for the special case of a common target SINR tracked by all low-priority users, we show that our proposed algorithm minimizes the outage ratio of low-priority users subject to a zero outage ratio for high-priority users. Simulation results confirm our analytical developments and show that our proposed priority-based power and admission control algorithm solves the priority-based gradual removal problem efficiently. © 2013 IEEE.
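The backbone of the scheme, conventional target-SINR tracking combined with a power-threshold self-removal test, can be sketched in a few lines. The link gains, noise level, target SINR, and removal threshold below are illustrative assumptions (in the paper the two thresholds are analytically derived), and in this feasible toy instance no user ends up removed:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4                                  # low-priority users in a toy uplink
# Illustrative link-gain matrix: strong direct links, weak cross links.
G = np.eye(N) + rng.uniform(0.005, 0.05, (N, N)) * (1 - np.eye(N))
noise = 1e-3
gamma_t = 2.0                          # common target SINR
p_removal = 1.0                        # illustrative removal threshold
p = np.full(N, 0.01)                   # initial transmit powers

for _ in range(100):
    interference = G @ p - np.diag(G) * p           # at each user's receiver
    sinr = np.diag(G) * p / (noise + interference)
    # Conventional target-SINR tracking update: p <- (gamma_t / SINR) * p.
    p_new = np.where(p > 0, gamma_t * p / np.maximum(sinr, 1e-12), 0.0)
    # Self-removal when the required power exceeds the threshold; the paper
    # also derives a second threshold governing resumption (omitted here).
    p = np.where(p_new > p_removal, 0.0, p_new)

print("powers:", p.round(4))           # converged; no removals in this instance
print("SINRs :", (np.diag(G) * p / (noise + G @ p - np.diag(G) * p)).round(2))
```

The fixed point reached by this update is exactly the state the abstract analyzes: every active user sits at its target SINR, and any user whose required power would exceed the threshold has backed off to zero.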


Prime S.L.,University of Manitoba | Vesia M.,York University | Crawford J.D.,York University
Philosophical Transactions of the Royal Society B: Biological Sciences | Year: 2011

Constructing an internal representation of the world from successive visual fixations, i.e. separated by saccadic eye movements, is known as trans-saccadic perception. Research on trans-saccadic perception (TSP) has been traditionally aimed at resolving the problems of memory capacity and visual integration across saccades. In this paper, we review this literature on TSP with a focus on research showing that egocentric measures of the saccadic eye movement can be used to integrate simple object features across saccades, and that the memory capacity for items retained across saccades, like visual working memory, is restricted to about three to four items. We also review recent transcranial magnetic stimulation experiments which suggest that the right parietal eye field and frontal eye fields play a key functional role in spatial updating of objects in TSP. We conclude by speculating on possible cortical mechanisms for governing egocentric spatial updating of multiple objects in TSP. This journal is © 2011 The Royal Society.


Sun X.D.,Heilongjiang Academy of Agricultural Science | Holley R.A.,University of Manitoba
Comprehensive Reviews in Food Science and Food Safety | Year: 2012

The shelf life of packaged fresh red meats is most frequently determined by the activity of microorganisms, which results in the development of off-odors, gas, and slime, but it is also influenced by biochemical factors such as lipid radical chain and pigment oxidation causing undesirable flavors and surface discoloration. The predominant bacteria associated with spoilage of refrigerated meats are Pseudomonas, Acinetobacter/Moraxella (Psychrobacter), Shewanella putrefaciens, lactic acid bacteria, Enterobacteriaceae, and Brochothrix thermosphacta. The spoilage potential of these organisms and factors influencing their impact on meat quality are discussed. High-O2 modified atmosphere packaging (MAP; 80% O2 + 20% CO2) is commonly used for meat retail display, but vacuum packaging remains the major MAP method used for meat distribution. Two-step master packaging (an outer anoxic atmosphere of 20% CO2 + 80% N2 with an inner gas-permeable film) is used for centralized MAP distribution, but CO use (0.4%) in low-O2 packaging systems is limited by consumer uncertainty that CO may mask spoilage. Active packaging, where the film contributes more than a gas/physical barrier, is an important technology and has been studied widely. Its application in combination with MAP is very promising, but impediments remain to its widespread industrial use. The influence of processing technologies, including modified atmospheres, on lipid oxidation and discoloration of meats is analyzed. Because both organic acids and antioxidants have been evaluated for their effects on microorganism growth, in concert with the prevention of lipid oxidation, work in this area is examined. © 2012 Institute of Food Technologists®.


Israels S.J.,University of Manitoba | Rand M.L.,Hospital for Sick Children
Pediatric Blood and Cancer | Year: 2013

Identifying the molecular basis of inherited platelet disorders has contributed to our understanding of normal platelet physiology. Many of these conditions are rare, but close observation of clinical and laboratory phenotype, and subsequent identification of the abnormal protein and mutated gene, have provided us with unique opportunities to examine specific aspects of platelet biogenesis and function. Phenotype-genotype association studies are providing a detailed understanding of the structure and function of platelet membrane receptors, the biogenesis and release of platelet granules, and the assembly of the cytoskeleton. Genetic polymorphisms contributing to decreased or increased platelet adhesion and activation may translate into increased clinical risks for bleeding or thrombosis. More recently, genome wide association studies have identified new genes contributing to the variation in normal platelet function. © 2012 Wiley Periodicals, Inc.


News Article | March 23, 2016
Site: www.sciencenews.org

White-tailed prairie dogs — those stand-up, nose-wiggling nibblers of grass — turn out to be routine killers of baby ground squirrels. And the strongest sign of successful white-tailed motherhood could be repeat ground squirrel kills, researchers say.

At a Colorado prairie dog colony, females that kill at least two ground squirrels raise three times as many offspring during their lives as nonkiller females, says John Hoogland of the University of Maryland Center for Environmental Science in Frostburg. The “serial killers,” as he calls repeat-attack females, rarely even nibble at the carcasses and aren’t getting much, if any, meat bonus. Instead, the supermom assassins may improve grazing in their territories by reducing competition from grass-snitching ground squirrels, Hoogland and Charles Brown of the University of Tulsa propose March 23 in Proceedings of the Royal Society B.

“This really caught me by surprise,” Hoogland says. Carnivorous mammals killing other carnivore species wouldn’t be surprising, but prairie dogs and ground squirrels eat plants. He knows of no other systematic study documenting routine fatal attacks by one herbivore species on another. “It’s also striking because it’s so subtle,” he says. He had been watching prairie dogs in general for decades, and the white-tailed prairie dogs in the Arapaho National Wildlife Refuge for a year, before he noticed an attack. A female “jumped on something, shook it, shook it hard, kept attacking — and then walked away,” he says. The encounter lasted just minutes. Hoogland rushed from his observation tower to the scene of the fight and, to his surprise, retrieved a dead baby ground squirrel.

Once he and his colleagues knew what to look for, they saw 101 such lethal attacks (mostly by females, but also by some males) over six years and inferred 62 more from carcasses. A propensity for killing ground squirrels turned out to be the only factor (among such possibilities as body mass, age and number of neighbors) that predicted lifetime success in raising lots of young. That measure, which biologists call fitness, is a big deal in analyzing how populations change and species evolve.

Hoogland and Brown propose that prairie dogs and ground squirrels compete for grazing. An analysis of the animals’ diets finds at least six plant species in common, the researchers say. Hoogland didn’t directly test whether the serial-killer prairie dogs simply had great territories that attracted lots of ground squirrels and thus provided more opportunities for killing. But if that were true, he says, he would expect the holders of such prime territory to have robust body sizes, and therefore some link between maternal body size and high offspring number. No such link shows up, he says. The best explanation Hoogland can think of for the benefit of killing squirrels, he says, is that prairie dogs are slaying the competition for food.

Still, the idea that prairie dogs and ground squirrels compete for plants needs more support, says ecologist Liesbeth Bakker of the Netherlands Institute of Ecology in Wageningen. The total of ground squirrel kills was an impressive number, she says, but it’s unclear what percentage of the local ground squirrel population it represents. If the deaths remove only a small proportion of ground squirrels, competition isn’t likely to ease much. Also, any effect would be weakened by the relative sizes of the species. “The ground squirrels are about half the size of the prairie dogs and thus eat less food,” she says.

Behavioral ecologist James Hare wonders why ground squirrels venture into prairie dog territory if it’s so dangerous. One idea Hoogland suggests is that prairie dogs’ vigilance in raising alarms about predators might make the risks of hanging out in a colony worthwhile. Hare, at the University of Manitoba in Canada, also wonders whether ground squirrels have trouble finding good habitat free of prairie dogs. Hoogland too is left with questions, including one about the big-family bonus of interspecific killing: “Is this really unique to prairie dogs, or is this more common?”


News Article | November 3, 2016
Site: www.csmonitor.com

Climate scientists have calculated just how fast humans' carbon emissions are melting Arctic sea ice in a new study: just 75 miles in a fossil-fuel-powered car melts one square foot of Arctic sea ice.

Sorry, polar bears: the Arctic Ocean might be free of sea ice before 2050. According to new calculations, for every metric ton of carbon dioxide emitted, about three square meters (approximately 32.3 square feet) of Arctic summer sea ice disappears. And with humans currently emitting about 35 to 40 billion metric tons of CO2 each year, the future doesn't look very frozen.

It's not hard to rack up those emissions. About 2,433 miles of driving – roughly the distance from Washington, D.C., to Las Vegas – or just one seat on a return flight from New York to London produces, on average, a metric ton of CO2 emissions. Or, for those who aren't long-distance travelers, just over 75 miles of driving in a typical fossil-fuel-powered car produces enough emissions to melt one square foot of ice.

That's according to Dirk Notz, head of a research group at the Max Planck Institute for Meteorology in Germany that studies sea ice. Dr. Notz calculated the relationship between CO2 emissions and the loss of Arctic summer sea ice as lead author of a paper published Thursday in the journal Science. "Our study now provides individuals with the sense that their own individual actions make a difference," Notz tells The Christian Science Monitor in a phone interview. "If I decide to drive my car a little less or to buy a car that uses less fuel, for example, all these little actions will make a difference for sea ice."

Technically, Notz has calculated when there will be less than 1 million square kilometers (386,000 square miles) of Arctic sea ice left in September, after summer melting – a threshold commonly used to define sea-ice-free conditions. Winter temperatures will continue to freeze parts of the Arctic Ocean. That 1 million square kilometers "seems like quite a lot of ice," says Walter Meier, a sea ice researcher at the NASA Goddard Space Flight Center who was not part of the study. "But in reality it's not that much." "The Arctic Ocean will be for all intents and purposes a blue Arctic Ocean" when that happens, he says.

Dr. Meier says Notz's calculations oversimplify the relationship between carbon emissions and Arctic sea ice loss. "The climate system is, in reality, a lot more complex than that," he says in a phone interview with the Monitor. Kevin Trenberth of the Climate Analysis Section at the National Center for Atmospheric Research, who also was not part of the research, agrees. "I think it's too simple because it doesn't deal with ocean transports and it doesn't deal with atmospheric transports," he says in a phone interview with the Monitor. Furthermore, Dr. Trenberth says, seasonal variations complicate trends, so the calculated relationship between CO2 and sea ice loss could be off. Although he questions their methods, Trenberth agrees with the researchers that the Arctic will see an ice-free September. And, he says, it could come as soon as the 2030s.

What would a world with a blue Arctic look like? "We're changing ice that has been around for many years to, mostly, ice that forms every year," James Overland, an oceanographer at the NOAA Pacific Marine Environmental Laboratory who was not part of the study, tells the Monitor in a phone interview. In decades past, ice would build up and become thicker through the winter.
While some of that ice would melt during the summer, most would remain to accumulate more ice year after year. Animals like polar bears and walruses use those thicker sheets of ice as a sort of home base when hunting. Thinner and more fragmented ice could upend their way of life. And that's not just true for animals; a disrupted icy ecosystem could make it more difficult for native human populations to hunt and forage too.

But loss of Arctic sea ice probably won't have only a local impact. "Arctic sea ice regulates the temperature of our planet by cooling the Atlantic and Pacific waters," David Barber, a sea ice and climate scientist at the University of Manitoba who was not involved in the research, writes in an email to the Monitor. Some research has suggested that less Arctic ice could lead to a weakening of the jet stream, an atmospheric system that affects the global climate. This shift could already be leading to more extreme weather events, like flooding, freezing, and even droughts.

And on top of that, Arctic sea ice serves as a sort of refrigerator for the planet, Notz explains. When the summer sun's rays hit the vast, bright ice, much of that energy is reflected back. But the dark waters of a blue ocean will absorb that heat, leading to even more warming and melting. Then, warmer, more wave-filled waters can eat away at glaciers, coastlines, and permafrost, in a spiral of changes.

It is important to note, however, that melting Arctic sea ice will not directly raise sea levels. Like melting ice in a soda glass, the oceans won't spill over from ice that is already floating in water. But as ice on land, such as the Greenland ice sheet, melts as an indirect effect of the disappearing sea ice, that water will flow into the oceans and raise sea levels.

The Paris climate agreement is set to go into effect Friday with the aim of meeting an ambitious goal: preventing global temperatures from rising more than 2 degrees Celsius (3.6 degrees Fahrenheit) above pre-industrial levels. But, by Notz's calculations, Arctic summer sea ice will already be gone if temperatures reach that threshold. That sea ice could survive, however, if the more aggressive target of 1.5 degrees Celsius of warming is attained. Further complicating things, the Arctic is heating up faster than the rest of the world, perhaps two or three times faster, Notz says.

Trenberth points out that greenhouse gas emissions can also have a delayed effect. So even if humans suddenly stopped emitting carbon altogether, temperatures likely would still rise. And, he says, although scientists have been discussing ways to extract CO2 from the atmosphere, "this is an extremely difficult thing to really achieve."

Looking at sea ice loss in the Arctic is "a really stark indicator of climate change," Meier says. "We think of the Arctic as a cold place, but in a lot of ways it's relatively warm," he says. During the summer, many sections sit on the cusp of the freezing point. So, while going from 80 to 82 degrees Fahrenheit might not matter much in Washington, D.C., Meier explains, going from 31 to 33 degrees in the Arctic is the difference between ice skating and swimming.

While this study doesn't really add new information for scientists, Meier says, for people whose everyday lives are far removed from both Arctic sea ice and their own carbon emissions, "it really helps people understand and visualize the impact."
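The article's conversions are all simple linear proportions, so they can be sanity-checked in a few lines. Below is a minimal Python sketch: the ~3 square meters per metric ton and ~2,433 miles per ton figures come from the article itself, while the present-day September extent (~4.5 million square kilometers) and the global emission rate (~36 billion metric tons of CO2 per year) are outside assumptions added here for illustration, not numbers from the study.

```python
# Back-of-envelope check of the article's sea ice arithmetic.
# From the article: ~3 m^2 of September Arctic sea ice lost per metric
# ton of CO2, and ~2,433 miles of driving per ton emitted.
# Assumed for illustration (not from the article): ~4.5 million km^2 of
# September ice today, ~36 billion t of CO2 emitted per year.

M2_PER_TON_CO2 = 3.0        # m^2 of summer sea ice lost per metric ton of CO2
MILES_PER_TON_CO2 = 2433.0  # miles of driving that emit ~1 t of CO2
SQFT_PER_M2 = 10.7639       # square feet per square meter

# Ice melted per mile driven, in square feet (~0.013 sq ft per mile,
# i.e. roughly 75 miles per square foot, matching the article).
ice_per_mile_sqft = M2_PER_TON_CO2 * SQFT_PER_M2 / MILES_PER_TON_CO2
print(f"ice lost per mile driven: {ice_per_mile_sqft:.3f} sq ft")
print(f"miles per square foot of ice: {1 / ice_per_mile_sqft:.0f}")

# Rough timing of an 'ice-free' September (< 1 million km^2 remaining).
ICE_FREE_THRESHOLD_KM2 = 1.0e6
ASSUMED_SEPT_EXTENT_KM2 = 4.5e6   # hypothetical present-day extent
ANNUAL_EMISSIONS_T = 36e9         # hypothetical global emissions, t CO2/yr

remaining_m2 = (ASSUMED_SEPT_EXTENT_KM2 - ICE_FREE_THRESHOLD_KM2) * 1e6
tons_budget = remaining_m2 / M2_PER_TON_CO2
years_left = tons_budget / ANNUAL_EMISSIONS_T
print(f"CO2 budget until ice-free September: {tons_budget / 1e9:.0f} Gt")
print(f"~{years_left:.0f} years at assumed emissions")
```

Under these assumptions the budget works out to roughly 1,200 gigatons of CO2, a little over 30 years of emissions, which is consistent with the "before 2050" framing in the article.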


News Article | December 9, 2015
Site: www.nature.com

As a palliative-care researcher, Susan McClement has talked to many people dying of cancer and their families — and some of their stories are burned into her brain. One man was so concerned by the sight of his emaciated wife, whose body had been ravaged by metastatic breast cancer, that he resorted to force-feeding her — pinching her nose and slipping in a spoonful of food when she opened her mouth. Convinced that food would give her the energy to fight the cancer, he turned his daily visits into protracted battles. She died a few weeks later.

McClement, who works at the University of Manitoba in Winnipeg, Canada, says that nutritional conflicts can become a source of regret for relatives. “They said, ‘You know, if I could do it over again, I would have spent much less time fighting about tapioca pudding and much more time telling my wife that I loved her.’”

The woman in this case had cachexia, a metabolic disorder that affects some 9 million people worldwide, including as many as 80% of people with advanced cancer. It typically involves extreme weight and muscle loss, makes routine activities difficult and increases the risk of deadly complications such as infections. Adding calories doesn’t reverse cachexia, and McClement says that the disorder sometimes provokes extreme reactions from family members because it serves as visual confirmation of their worst fears. “It’s a constant reminder that the person is sick and is not going to get better,” says McClement.

Cachexia is seen in the late stages of almost every major chronic illness, affecting 16–42% of people with heart failure, 30% of those with chronic obstructive pulmonary disease and up to 60% of people with kidney disease. But for many years it was overlooked, as physicians and researchers focused their attention on the primary illness instead. Now, scientists are increasingly viewing cachexia as a distinct, treatable condition. Basic research has revealed how it is driven by inflammation and metabolic imbalances, and has generated drug targets, says Stefan Anker, a cardiologist and cachexia specialist at the University Medical Center Göttingen in Germany. “Now we have quite a number of powerful options to test,” he says.

This has spurred investment from drug developers who aim to reduce suffering, and possibly give patients the strength to withstand chemotherapy or surgery. But some high-profile clinical trials in the past two years have produced disappointing results, prompting much self-reflection in the young field. “I’m a little bit worried that if we don’t see a successful clinical trial in the next five years, the dollars from the pharmaceutical industry to develop a treatment will go somewhere else,” says Jose Garcia, a clinical researcher focused on wasting disorders at the Michael E. DeBakey Veterans Affairs Medical Center in Houston, Texas. “In my view, that would be a missed opportunity.”

The term cachexia is derived from the Greek kakos and hexis, meaning ‘bad condition’. It is thought that Hippocrates recognized the syndrome — but it took until 2006 for the cachexia field to start working up a formal definition, which includes a loss of 5% or more of body weight over 12 months, and reduced muscle strength. In the clinic, it remains under-recognized by oncologists, says Egidio Del Fabbro, a palliative-care physician and researcher at Virginia Commonwealth University in Richmond. There are no standard guidelines for treatment.
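The two criteria the article attributes to that 2006 consensus definition reduce to a simple threshold check. The sketch below (Python) is purely illustrative: the function name and example numbers are hypothetical, and a real diagnosis involves additional clinical criteria and judgment beyond the two mentioned here.

```python
# Illustrative check of the two cachexia criteria named in the article:
# a loss of 5% or more of body weight over 12 months, plus reduced
# muscle strength. Hypothetical helper; not a diagnostic tool.

def meets_article_criteria(weight_now_kg: float,
                           weight_year_ago_kg: float,
                           muscle_strength_reduced: bool) -> bool:
    """True if both criteria mentioned in the article are met."""
    pct_loss = 100.0 * (weight_year_ago_kg - weight_now_kg) / weight_year_ago_kg
    return pct_loss >= 5.0 and muscle_strength_reduced

# Example: 70 kg a year ago, 65 kg now (about a 7.1% loss), with
# reduced muscle strength -> meets both criteria.
print(meets_article_criteria(65.0, 70.0, True))   # True
```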
In the past decade, researchers have made strides in learning about the causes of cachexia, thanks to funding from the US National Cancer Institute and some advocacy groups. New international conferences (including one that wrapped up this week in Paris) and the launch of a research journal — the Journal of Cachexia, Sarcopenia and Muscle — have also drummed up interest in the field.

It is now clear that a key mechanism underlying cachexia is the increased breakdown of muscle protein, along with dampened protein synthesis, which leads to overall muscle loss. Studies in 2001 helped to jump-start the field when they identified genes that were more active in atrophying rodent muscles than in normal ones [1, 2]. These genes encode enzymes called E3 ubiquitin ligases, which tag proteins for destruction in the cell. Mice without these enzymes were resistant to muscle loss. Muscle cells seem to make more of these ligases when hit with certain inflammatory signals from tumours or from immune cells responding to cancer or other illness. Abnormalities in apoptosis (programmed cell death) and in the muscle cell’s energy-producing organelles, mitochondria, have also been implicated.

Several drug-makers have homed in on the protein myostatin, which blocks muscle growth. In a 2010 paper [3] that got many people excited about a possible cachexia drug, researchers from biotechnology company Amgen in Thousand Oaks, California, showed that they could reverse muscle loss and extend the lives of mice with tumours and cachexia by blocking signalling through the myostatin pathway.

Research since then suggests that cachexia is more than a muscle disease. Studies [4] have identified problems in the brain’s regulation of appetite and feeding, and even ways in which the liver might be contributing to the energy imbalance that sees the body burn its own tissue to sustain itself. Others have looked at fat tissue, which can also waste away in cachexia. They showed that inflammation [5] and molecules made by tumours [6] cause white fat cells to turn into brown fat cells, which burn more energy to generate heat than white fat cells. The question that researchers are now tackling is how tissues and organs — muscle, brain, fat, even bone — are communicating with one another. A paper published last week [7] suggests that fat signalling could be involved in muscle atrophy.

All this research has brought more representatives of biotechnology and pharmaceutical companies to cachexia meetings in recent years, says Denis Guttridge, a cell biologist at the Ohio State University in Columbus, who organizes one such conference. “That’s exciting for a basic scientist like myself,” he says. “I can see the increase in the translational pipeline.”

Despite the excitement in labs, clinical research has so far proved disappointing. In 2011, biotech firm GTx of Memphis, Tennessee, launched two late-stage clinical trials of enobosarm, a molecule that binds to the same receptor as testosterone but only in muscle and bone, mimicking the hormone’s ability to stimulate muscle build-up but without its undesirable side effects. Results from earlier, smaller trials looked promising: people taking the drug had increased lean body mass and improved physical function, as measured by their speed at climbing stairs [8]. But in the larger tests of the drug, on people with advanced lung cancer, the benefits in function disappeared. The firm has since abandoned muscle wasting, and is instead testing larger doses of enobosarm to treat breast cancer.
A pair of unpublished studies on people with lung cancer and cachexia tested a compound called anamorelin, which mimics ghrelin, an appetite-stimulating peptide hormone produced mainly by the stomach. The trials were sponsored by pharmaceutical company Helsinn in Lugano, Switzerland, which reported that participants in the treatment group put on weight and muscle mass compared with those taking a placebo, but showed no difference in hand-grip strength. Still, the company announced last week that the European Medicines Agency is reviewing its drug for approval.

There is a lot of debate about why the trials failed to show functional improvements. Some researchers say that the teams did not use the most clinically relevant measures of muscle function. “We don’t really know what is the best test for this,” says Garcia. “If you can climb up a set of stairs one second faster, what does that mean?” This confusion about trial design is a problem for the field, says Anker. “We need to reach consensus on endpoints and what to aim for in our treatments.”

Another problem is that animal data on cachexia may not translate into humans. Some work has tried to make a case that the mechanisms found in rodents might be similar to those in humans, by looking at human tissue samples, says Vickie Baracos, a clinical translational researcher in muscle wasting at the University of Alberta in Edmonton, Canada. “But held up to scrutiny, this clinical evidence is often rather sketchy.” Researchers in the field lament the dearth of human data and clinical samples. Baracos says that studies are needed that follow people with cachexia over time, collecting blood and muscle samples along the way. “A cachexia data repository with a biobank would sure be a great thing,” she says.

Perhaps the biggest challenge is that the field has to compete for funding and recognition with research into other major diseases, says Anker. “Cachexia is competing for internal resources within big companies, fighting with cancer, cardiology,” he says. Few companies have dedicated cachexia groups or departments. GTx stopped its work on muscle wasting in part because insurers did not seem interested in covering a medication that was only going to target cachexia and not cancer, says Mary Ann Johnston, the company’s vice-president for clinical development. “There’s a lack of interest in supportive care.”

But an effective treatment would be transformative, says Garcia. It might spur physicians to talk more to patients and their families about the troubling symptoms of cachexia. Without the tools to treat the syndrome, many doctors don’t address it, he says. And that vacuum of information can be distressing. McClement, for her part, has been interviewing more families of people with cachexia. She hopes to find ways to better inform them about the condition and help them to cope. Given the absence of pharmacological interventions, such psychosocial ones are important, she says. “That’s all we’ve got.”




News Article | March 23, 2016
Site: www.sciencenews.org

White-tailed prairie dogs — those stand-up, nose-wiggling nibblers of grass — turn out to be routine killers of baby ground squirrels. And the strongest sign of successful white-tailed motherhood could be repeat ground squirrel kills, researchers say. At a Colorado prairie dog colony, females that kill at least two ground squirrels raise three times as many offspring during their lives as nonkiller females, says John Hoogland of the University of Maryland Center for Environmental Science in Frostburg. The “serial killers,” as he calls repeat-attack females, rarely even nibble at the carcasses and aren’t getting much, if any, meat bonus. Instead, the supermom assassins may improve grazing in their territories by reducing competition from grass-snitching ground squirrels, Hoogland and Charles Brown of the University of Tulsa propose March 23 in Proceedings of the Royal Society B. “This really caught me by surprise,” Hoogland says. Carnivorous mammals killing other carnivore species wouldn’t be surprising, but prairie dogs and ground squirrels eat plants. He knows of no other systematic study documenting routine fatal attacks by one herbivore species on another. “It’s also striking because it’s so subtle,” he says. He had been watching prairie dogs in general for decades and the white-tailed prairie dogs in the Arapaho National Wildlife Refuge for a year before he noticed an attack. A female “jumped on something, shook it, shook it hard, kept attacking — and then walked away,” he says. The encounter lasted just minutes. Hoogland rushed from his observation tower to the scene of the fight and, to his surprise, retrieved a dead baby ground squirrel. Once he and his colleagues knew what to look for, they saw 101 such lethal attacks (mostly from females, but also from some males) over six years and inferred 62 more from carcasses. A propensity for killing ground squirrels turned out to be the only factor (among such possibilities as body mass, age and number of neighbors) that predicted a tendency toward lifetime success in raising lots of young. That factor, which biologists describe as fitness, is a big deal in analyzing how populations change and species evolve. Hoogland and Brown propose that prairie dogs and ground squirrels compete for grazing. An analysis of the animals’ diets finds at least six plant species in common, the researchers say. Hoogland didn’t directly test to see if the serial killer prairie dogs just had great territories that attracted lots of ground squirrels and thus provided more opportunities for killing. But if that were true, he says, he would predict that the holders of this prime territory would have robust body sizes, and therefore there would be some link between maternal body size and high offspring number. No such link shows up, he says. The best hypothesis explaining the benefit of killing squirrels that Hoogland can think of, he says, is that prairie dogs slay the competition for food resources. Still, the idea that prairie dogs and ground squirrels compete for plants needs more information, says ecologist Liesbeth Bakker of the Netherlands Institute of Ecology in Wageningen. The total of ground squirrel kills was an impressive number, she says, but it’s unclear what percentage it represents. If the deaths remove only a small proportion of ground squirrels, competition isn’t likely to ease much. Also, any effect would be weakened by the relative sizes of the species. 
“The ground squirrels are about half the size of the prairie dogs and thus eat less food,” she says. Behavioral ecologist James Hare wonders why ground squirrels venture into prairie dog territory if it’s so dangerous. One of the ideas Hoogland suggests is that prairie dog vigilance in raising alarms about predators might make the risks of hanging out in a colony worthwhile. Hare, at the University of Manitoba in Canada, also wonders whether ground squirrels have trouble finding good habitat free from prairie dogs. Hoogland too is left with questions, including one about the big-family bonus of interspecific killing. “Is this really unique to prairie dogs or is this more common?”


Chieochan S.,University of Manitoba | Hossain E.,University of Manitoba | Diamond J.,TRLabs
IEEE Communications Surveys and Tutorials | Year: 2010

Efficient channel assignment is crucial for successful deployment and operation of IEEE 802.11-based WLANs. In this article we present a survey of state-of-the-art channel assignment schemes in IEEE 802.11-based WLANs. After detailing the schemes, we provide a qualitative comparison among them in terms of algorithm execution behavior, complexity, and scalability. We then conclude the survey with several research issues open for further investigation. © 2010 IEEE.
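For flavor, the simplest class of schemes covered by such surveys can be sketched in a few lines of Python: a centralized greedy pass that gives each access point the channel least used by its interference neighbors. This is a generic illustration, not a specific algorithm from the survey; the conflict graph below is a made-up input.

    def greedy_channel_assignment(aps, conflicts, channels=(1, 6, 11)):
        # Visit APs in order; give each the channel least used among the
        # already-assigned neighbors it interferes with.
        assignment = {}
        for ap in aps:
            used = [assignment[n] for n in conflicts.get(ap, ()) if n in assignment]
            assignment[ap] = min(channels, key=used.count)
        return assignment

    # Example: A conflicts with B and C; B also conflicts with D.
    print(greedy_channel_assignment(
        ["A", "B", "C", "D"],
        {"A": {"B", "C"}, "B": {"A", "D"}, "C": {"A"}, "D": {"B"}}))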


Butler M.,University of Manitoba | Meneses-Acosta A.,Autonomous University of the State of Morelos
Applied Microbiology and Biotechnology | Year: 2012

The demand for production of glycoproteins from mammalian cell culture continues with an increased number of approvals as biopharmaceuticals for the treatment of unmet medical needs. This is particularly the case for humanized monoclonal antibodies which are the largest and fastest growing class of therapeutic pharmaceuticals. This demand has fostered efforts to improve the efficiency of production as well as to address the quality of the final product. Chinese hamster ovary cells are the predominant hosts for stable transfection and high efficiency production on a large scale. Specific productivity of recombinant glycoproteins from these cells can be expected to be above 50 pg/cell/day giving rise to culture systems with titers of around 5 g/L if appropriate fed-batch systems are employed. Cell engineering can delay the onset of programmed cell death to ensure prolonged maintenance of productive viable cells. The clinical efficacy and quality of the final product can be improved by strategic metabolic engineering. The best example of this is the targeted production of afucosylated antibodies with enhanced antibody-dependent cell cytotoxicity, an important function for use in cancer therapies. The development of culture media from non-animal sources continues and is important to ensure products of consistent quality and without the potential danger of contamination. Process efficiencies may also be improved by employing disposable bioreactors with the associated minimization of downtime. Finally, advances in downstream processing are needed to handle the increased supply of product from the bioreactor but maintaining the high purity demanded of these biopharmaceuticals. © Springer-Verlag Berlin Heidelberg 2012.


Luque S.P.,University of Manitoba | Fried R.,TU Dortmund
PLoS ONE | Year: 2011

Zero offset correction of diving depth measured by time-depth recorders is required to remove artifacts arising from temporal changes in the accuracy of pressure transducers. Currently used methods for this procedure are in the proprietary software domain, where researchers cannot study them in sufficient detail, so they have little or no control over how their data were changed. GNU R package diveMove implements a procedure in the Free Software domain that consists of recursively smoothing and filtering the input time series using moving quantiles. This paper describes, demonstrates, and evaluates the proposed method by using a "perfect" data set, which is subsequently corrupted to provide input for the proposed procedure. The method is evaluated by comparing the corrected time series to the original, uncorrupted data set from an Antarctic fur seal (Arctocephalus gazella Peters, 1875). The Root Mean Square Error of the corrected data set, relative to the "perfect" data set, was nearly identical to the magnitude of noise introduced into the latter. The method thus provides a flexible, reliable, and efficient mechanism to perform zero offset correction for analyses of diving behaviour. We illustrate applications of the method to data sets from four species with large differences in diving behaviour, measured using different sampling protocols and instrument characteristics. © 2011 Luque, Fried.
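To convey the gist of the recursive moving-quantile idea, here is a minimal Python sketch. It is an illustration of the approach the abstract describes, not the diveMove implementation (which is an R package); the window sizes and the 5% quantile are assumed placeholders.

    import numpy as np

    def zero_offset_correct(depth, windows=(121, 31), q=0.05):
        # Estimate the drifting surface ("zero") line with a running low
        # quantile, recursively: each pass re-smooths the previous estimate.
        offset = np.asarray(depth, dtype=float)
        for w in windows:
            half = w // 2
            padded = np.pad(offset, half, mode="edge")
            offset = np.array([np.quantile(padded[i:i + w], q)
                               for i in range(len(depth))])
        # Subtract the estimated offset; readings above the surface become 0.
        return np.clip(np.asarray(depth, dtype=float) - offset, 0.0, None)

    # Usage: corrected = zero_offset_correct(raw_depth_samples)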


Kumar A.,University of Manitoba | Kumar A.,Rutgers University
Journal of Antimicrobial Chemotherapy | Year: 2011

The need for early antimicrobial therapy is well established for life-threatening bacterial and fungal infections including meningitis and sepsis/septic shock. However, a link between the outcome of serious viral infections and delays in antiviral therapy is not as well recognized. Recently, with the occurrence of the influenza A/H1N1 pandemic of 2009, a large body of data regarding this issue has become available. Studies analysing data from this pandemic have consistently shown that delays in initiation of antiviral therapy following symptom onset are significantly associated with disease severity and death. Optimal survival and minimal disease severity appear to result when antivirals are started as soon as possible after symptom onset. © The Author 2011. Published by Oxford University Press on behalf of the British Society for Antimicrobial Chemotherapy. All rights reserved.


Cerny P.,University of Manitoba | London D.,University of Oklahoma | Novak M.,Masaryk University
Elements | Year: 2012

Pegmatites accentuate the trace element signatures of their granitic sources. Through that signature, the origin of pegmatites can commonly be ascribed to granites whose own source characteristics are known and distinctive. Interactions with host rocks that might modify the composition of pegmatites are limited by the rapid cooling and low heat content of pegmatite-forming magmas. The trace element signatures of most pegmatites clearly align with those of S-type (sedimentary source, mostly postcollisional tectonic environment) and A-type (anorogenic environment, lower continental crust ± mantle source) granites. Pegmatites are not commonly associated with I-type (igneous source) granites. The distinction between granites that spawn pegmatites and those that do not appears to depend on the presence or absence, respectively, of fluxing components, such as B, P, and F, in addition to H2O, at the source.


Abergel D.S.L.,University of Manitoba | Apalkov V.,Georgia State University | Berashevich J.,University of Manitoba | Ziegler K.,University of Augsburg | Chakraborty T.,University of Manitoba
Advances in Physics | Year: 2010

The electronic properties of graphene, a two-dimensional crystal of carbon atoms, are exceptionally novel. For instance, the low-energy quasiparticles in graphene behave as massless chiral Dirac fermions, which has led to the experimental observation of many interesting effects similar to those predicted in the relativistic regime. Graphene also has immense potential to be a key ingredient of new devices, such as single molecule gas sensors, ballistic transistors and spintronic devices. Bilayer graphene, which consists of two stacked monolayers and where the quasiparticles are massive chiral fermions, has a quadratic low-energy band structure which generates very different scattering properties from those of the monolayer. It also presents the unique property that a tunable band gap can be opened and controlled easily by a top gate. These properties have made bilayer graphene a subject of intense interest. In this review, we provide an in-depth description of the physics of monolayer and bilayer graphene from a theorist's perspective. We discuss the physical properties of graphene in an external magnetic field, reflecting the chiral nature of the quasiparticles near the Dirac point with a Landau level at zero energy. We address the unique integer quantum Hall effects, the role of electron correlations, and the recent observation of the fractional quantum Hall effect in monolayer graphene. The quantum Hall effect in bilayer graphene is fundamentally different from that of a monolayer, reflecting the unique band structure of this system. The theory of transport in the absence of an external magnetic field is discussed in detail, along with the role of disorder studied in various theoretical models. Recent experimental observations of a metal-insulator transition in hydrogenated graphene are discussed in terms of a self-consistent theory and compared with related numerical simulations. We highlight the differences and similarities between monolayer and bilayer graphene, and focus on thermodynamic properties such as the compressibility, the plasmon spectra, the weak localization correction, the quantum Hall effect and optical properties. Confinement of electrons in graphene is non-trivial due to Klein tunnelling. We review various theoretical and experimental studies of quantum confined structures made from graphene. The band structure of graphene nanoribbons and the role of the sublattice symmetry, edge geometry and the size of the nanoribbon on the electronic and magnetic properties are very active areas of research, and a detailed review of these topics is presented. Also, the effects of substrate interactions, adsorbed atoms, lattice defects and doping on the band structure of finite-sized graphene systems are discussed. We also include a brief description of graphane, a gapped material obtained from graphene by attaching hydrogen atoms to each carbon atom in the lattice. © 2010 Taylor & Francis.


Arrington J.,Argonne National Laboratory | Blunden P.G.,University of Manitoba | Melnitchouk W.,Jefferson Lab
Progress in Particle and Nuclear Physics | Year: 2011

We review the role of two-photon exchange (TPE) in electron-hadron scattering, focusing in particular on hadronic frameworks suitable for describing the low and moderate Q2 region relevant to most experimental studies. We discuss the effects of TPE on the extraction of nucleon form factors and their role in the resolution of the proton electric to magnetic form factor ratio puzzle. The implications of TPE on various other observables, including neutron form factors, electroproduction of resonances and pions, and nuclear form factors, are summarized. Measurements seeking to directly identify TPE effects, such as through the angular dependence of polarization observables, nonlinear ε contributions to the cross sections, and via e+p to e-p cross section ratios, are also outlined. In the weak sector, we describe the role of TPE and γZ interference in parity-violating electron scattering, and assess their impact on the extraction of the strange form factors of the nucleon and the weak charge of the proton. © 2011 Elsevier B.V. All rights reserved.


News Article | October 29, 2016
Site: marketersmedia.com

Seema Goel will be speaking at the upcoming University Art Association of Canada Conference this week at UQAM in Montreal. Her talk Data Dexterities is part of the session “Making Knowledge: Craft and the Digital” on Friday October 28th. In Data Dexterities she explores her own art/craft practice, highlighting the use of digital technologies as a material to enhance touch and play, where the viewer’s awareness of the digital experience is integral to the success of the work. Data Dexterities: The project of shifting the digital experience beyond the binary is well underway. From the simple yes/no response, Seema now strives to mimic the multiplicity available in human interaction. How does she, in craft, participate in this shift to engage the nuance and complexity of touch, materiality, and maker-user connection? How is craft language equally explored and accentuated through this effort? This presentation explores the contradictions and connections between touch, craft, and digital interfaces through her own craft-based art practice. Bio: Seema Goel is a Canadian artist, writer, and curator. Her current work explores the manipulations and representations of the natural world resulting from human intervention. Using a wide range of media including taxidermy, projection, natural materials, and responsive technologies, she invites the viewer to engage these subjects through humour, touch, and participation. She has exhibited in North America and Europe, and her writing has appeared in numerous literary publications and newspapers, and on radio and stage. Goel holds an MFA from the Rhode Island School of Design, an Associated Arts Diploma from the Ontario College of Art and Design, and a BSc from McGill. She is also an alumna of the Harvard Summer Writing program, the Banff Centre Writing program, St. Peter’s Abbey, Fort San, and also managed to moonlight in the Brown Creative Writing program while a student at RISD. She is currently the STEAM coordinator and artist-in-residence in the Faculty of Science at the University of Manitoba. For more information, please visit http://seemagoel.com/


News Article | November 2, 2016
Site: www.eurekalert.org

Including canola oil in a healthy diet may help reduce abdominal fat in as little as four weeks, according to health researchers. "Visceral, or abdominal, fat increases the risk for cardiovascular disease, and is also associated with increased risk for conditions such as metabolic syndrome and diabetes," said Penny M. Kris-Etherton, Distinguished Professor of Nutrition, Penn State. "Monounsaturated fats in canola oil decrease this fat that has adverse health effects." Kris-Etherton and colleagues found that after one month of adhering to diets that included canola oil, participants had 0.11 kilograms, or a quarter pound, less belly fat than they did before the diet. They also found that the weight lost from the mid-section did not redistribute elsewhere in the body. The researchers report their results at The Obesity Society's Annual Scientific Meeting today (Nov. 2). "As a general rule, you can't target weight loss to specific body regions," said Kris-Etherton. "But monounsaturated fatty acids seem to specifically target abdominal fat." In order to incorporate canola oil into the diet, Kris-Etherton suggests using it when sautéing foods, in baking, adding it to a smoothie and in salad dressings. Canola oil is high in monounsaturated fatty acids, which have been shown to have beneficial effects on body composition, especially in people with obesity. When participants consumed conventional canola oil or high-oleic acid canola oil for just four weeks, they lost abdominal fat. The researchers tested the effect of five different vegetable oil blends in 101 participants' diets through a controlled study. The subjects were randomly assigned to follow for four weeks each of the treatment oil diets: conventional canola, high-oleic acid canola, high-oleic acid canola with DHA (a type of omega-3 fatty acid), corn/safflower and flax/safflower. After each four-week diet period, participants were given a four-week break before starting the next diet period. The participants consumed two smoothies during the day, which contained the specified treatment oil. The quantity of oil was calculated based on the participant's energy needs. For example, a participant who was on a 3,000-calorie diet would receive 60 grams of the treatment oil per day, providing 18 percent of his or her total dietary energy. Each smoothie would then contain 100 grams of orange sherbet, 100 grams of non-fat milk, 100 grams of frozen unsweetened strawberries and 30 grams of canola oil. A hundred grams is equivalent to roughly three-and-a-half ounces and 30 grams is approximately two tablespoons. The canola oil was carefully incorporated into the test diets so as to not exceed the participants' daily calorie needs. All of the participants had abdominal obesity, or increased waist circumference, and were either at risk for or had metabolic syndrome -- a group of conditions including obesity, type 2 diabetes, high blood pressure, high blood sugar, low HDL (also known as good cholesterol) and excess body fat around the waist. The researchers point out that further studies should be conducted to look at the long-term effects of a diet high in monounsaturated fatty acids, like canola oil. Also contributing to this research were Xiaoran Liu, a doctoral student, Sheila G. West, professor, biobehavioral health and nutritional sciences, Jennifer A. Fleming, instructor and clinical research coordinator, nutritional sciences, and Cindy E.
McCrea, graduate student, biobehavioral health, all at Penn State; Benoît Lamarche, professor, nutrition, and Patrick Couture, professor, endocrinology and nephrology, both at Laval University; David J. A. Jenkins, professor, nutritional sciences and medicine, University of Toronto; Shuaihua Pu, a doctoral student, and Peter J. H. Jones, Canada Research Chair in Functional Foods and Nutrition, both at University of Manitoba; and Philip W. Connelly, staff scientist, Keenan Research Centre for Biomedical Science of St. Michael's Hospital, Toronto. Agriculture and Agri-Food Canada, the Canola Council of Canada, Dow AgroSciences, the Flax Council of Canada, and the National Center for Advancing Translational Sciences all supported this research.
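The dosing arithmetic reported above is easy to verify; in this small Python check, the 9 kcal-per-gram energy density of fat is a standard nutrition value rather than a figure from the article:

    # Verify: 60 g of oil on a 3,000-kcal diet supplies 18% of daily energy.
    KCAL_PER_GRAM_FAT = 9.0   # standard energy density of fat (assumption)

    oil_kcal = 60.0 * KCAL_PER_GRAM_FAT   # 540 kcal from the treatment oil
    share = oil_kcal / 3000.0             # fraction of total dietary energy
    print(f"{oil_kcal:.0f} kcal from oil = {share:.0%} of the diet")  # 18%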


News Article | April 20, 2016
Site: motherboard.vice.com

It’s a tale as old as time: energy company proposes big project, energy company says it will have no effects on the local population, local population says it’ll actually poison their land, and their people, for decades. Classic! The energy company in question here is Nalcor Energy, and the project is the multi-billion dollar Muskrat Falls hydroelectric dam in Labrador, Newfoundland, which got the green light from the provincial government in 2012. Flooding the reservoir to build the dam will release toxic methylmercury into the area around nearby Lake Melville, but Nalcor argues that it will be diluted enough to have no effect on the local Inuit population. But a new study, commissioned by the aboriginal Nunatsiavut Government and completed by scientists from Memorial University, Harvard, and the University of Manitoba, says that the toxic mercury released during the dam’s construction will have highly detrimental effects on the area’s wildlife and the aboriginal people who live off of it. More than 200 individuals (and their children and grandchildren) could be affected by the toxic mercury, the study’s authors concluded. Additionally, 66 percent of the community in nearby Rigolet will be pushed above acceptable mercury levels, per the most conservative US Environmental Protection Agency guidelines, according to the report. Nalcor’s more positive assessment of the dam’s effects was “false and based on incorrect assumptions,” a summary of the study for policymakers states. “The findings from epidemiological studies show that [mercury] is associated with lifelong neurocognitive deficits,” Harvard epidemiologist and study co-author Elsie Sunderland told me. “This isn’t something that you would see visibly. It’s basically a direct impact on their brain development, so they wouldn’t realize the potential they would have without this kind of exposure.” One of the main indicators of this kind of mercury exposure is children with lowered IQs, Sunderland said. Gilbert Bennett, vice-president of the Nalcor project that oversees the Muskrat Falls dam, said in a prepared statement sent to Motherboard that "we do not predict that creation of the Muskrat Falls reservoir will heighten risk to people in Lake Melville." “We will carefully review the assumptions, approaches, parameters and outcomes of the study by Nunatsiavut Government, and any implications of the report on the project’s ongoing environmental effects monitoring programs,” the statement reads. A spokesperson for Newfoundland and Labrador's minister of environment and conservation Perry Trimper said the minister has yet to make a decision on the environmental impacts of the Muskrat Falls project, and will take the recent study's findings into consideration. According to Sunderland, contamination of the region would take just 120 hours, and the effects would persist for decades. “We are looking at multiple generations of exposure to higher levels of methylmercury,” Sunderland said. So, how did Nalcor not catch this, if these findings are right? According to Sunderland, Nalcor simply did not take the needed measurements, and instead just assumed that the mercury would be diluted. If Nalcor had done the work, they would have seen that this is flatly untrue, she contended. “I don’t see this as a difference in opinion, or a difference in findings,” said Sunderland. “That’s a misrepresentation, because they didn’t have any findings.
They didn’t study the physical characteristics of the estuary.” Nalcor declined to comment directly on this allegation. To offset the impacts of releasing methylmercury into the environment, the researchers suggest completely clearing the area of trees, vegetation, and topsoil. Even then, however, the report suggests around 30 Inuit people will be negatively affected by the high levels of mercury. “Removal of soil from the reservoir was not considered during the environmental assessment and therefore is not part of our construction plans,” Bennett said in his statement. The flooding of the reservoir to build the Muskrat Falls dam is scheduled to take place later this year, and the dam is set to be constructed by 2017.


Apalkov V.M.,Georgia State University | Chakraborty T.,University of Manitoba
Physical Review Letters | Year: 2011

We study the fractional quantum Hall states on the surface of a topological insulator thin film in an external magnetic field, where the Dirac fermion nature of the charge carriers has been experimentally established only recently. Our studies indicate that the fractional quantum Hall states should indeed be observable in the surface Landau levels of a topological insulator. The strength of the effect will, however, be different from that in graphene, due to the finite thickness of the topological insulator film and the admixture of Landau levels of the two surfaces of the film. At a small film thickness, that mixture results in a strongly nonmonotonic dependence of the excitation gap on the film thickness. At a large enough film thickness, the excitation gaps in the lowest two Landau levels are comparable in strength. © 2011 American Physical Society.


Apalkov V.M.,Georgia State University | Chakraborty T.,University of Manitoba
Physical Review Letters | Year: 2011

Here, we show that the incompressible Pfaffian state originally proposed for the 5/2 fractional quantum Hall state in conventional two-dimensional electron systems can actually be found in bilayer graphene at one of the Landau levels. The properties and stability of the Pfaffian state at this special Landau level strongly depend on the magnetic field strength. The graphene system shows a transition from the incompressible to a compressible state with increasing magnetic field. At a finite magnetic field of ∼10 T, the Pfaffian state in bilayer graphene becomes more stable than its counterpart in conventional electron systems. © 2011 American Physical Society.


Guruacharya S.,Nanyang Technological University | Niyato D.,Nanyang Technological University | Kim D.I.,Sungkyunkwan University | Hossain E.,University of Manitoba
IEEE Transactions on Wireless Communications | Year: 2013

This paper considers the problem of downlink power allocation in an orthogonal frequency-division multiple access (OFDMA) cellular network with macrocells underlaid with femtocells. The femto-access points (FAPs) and the macro-base stations (MBSs) in the network are assumed to compete with each other to maximize their capacity under power constraints. This competition is captured in the framework of a Stackelberg game with the MBSs as the leaders and the FAPs as the followers. The leaders are assumed to have foresight enough to consider the responses of the followers while formulating their own strategies. The Stackelberg equilibrium is introduced as the solution of the Stackelberg game, and it is shown to exist under some mild assumptions. The game is expressed as a mathematical program with equilibrium constraints (MPEC), and the best response for a one leader-multiple follower game is derived. The best response is also obtained when a quality-of-service constraint is placed on the leader. Orthogonal power allocation between leader and followers is obtained as a special case of this solution under high interference. These results are used to build algorithms to iteratively calculate the Stackelberg equilibrium, and a sufficient condition is given for its convergence. The performance of the system at a Stackelberg equilibrium is found to be much better than that at a Nash equilibrium. © 2002-2012 IEEE.
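To make the leader-follower logic concrete, here is a toy Python example (not the paper's MPEC formulation, which covers many cells, multiple followers, and QoS constraints): one leader and one follower share two subchannels, the follower water-fills against the leader's interference, and the leader grid-searches its own power split while anticipating that response. All channel gains and power budgets are made-up illustrative values.

    import numpy as np

    def waterfill(gains, noise, p_total):
        # Classic water-filling: maximize sum log(1 + gains*p/noise) subject
        # to sum(p) <= p_total, by bisection on the common water level mu.
        base = noise / gains
        lo, hi = 0.0, base.max() + p_total
        for _ in range(60):
            mu = 0.5 * (lo + hi)
            if np.maximum(0.0, mu - base).sum() > p_total:
                hi = mu
            else:
                lo = mu
        return np.maximum(0.0, lo - base)

    def sum_rate(g_own, p_own, g_cross, p_cross, noise):
        # Shannon rates per subchannel, treating the other player's signal
        # as interference added to the noise floor.
        return np.sum(np.log2(1.0 + g_own * p_own / (noise + g_cross * p_cross)))

    def stackelberg(g_l, g_f, g_f2l, g_l2f, noise, p_l_max, p_f_max, grid=200):
        # Leader enumerates its power splits, anticipating the follower's
        # water-filling best response, and keeps the split maximizing its rate.
        best = None
        for frac in np.linspace(0.0, 1.0, grid + 1):
            p_l = np.array([frac, 1.0 - frac]) * p_l_max
            p_f = waterfill(g_f, noise + g_l2f * p_l, p_f_max)
            r_l = sum_rate(g_l, p_l, g_f2l, p_f, noise)
            if best is None or r_l > best[0]:
                best = (r_l, p_l, p_f)
        return best

    # Hypothetical two-subchannel instance.
    r_l, p_l, p_f = stackelberg(
        g_l=np.array([2.0, 1.0]),     # leader's own channel gains
        g_f=np.array([1.0, 2.5]),     # follower's own channel gains
        g_f2l=np.array([0.3, 0.3]),   # follower-to-leader interference gains
        g_l2f=np.array([0.4, 0.4]),   # leader-to-follower interference gains
        noise=1.0, p_l_max=2.0, p_f_max=2.0)
    print("leader rate:", round(r_l, 2), "leader powers:", p_l, "follower powers:", p_f)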


Farquhar J.,University of Maryland University College | Zerkle A.L.,University of Maryland University College | Bekker A.,University of Manitoba
Photosynthesis Research | Year: 2011

This article examines the geological evidence for the rise of atmospheric oxygen and the origin of oxygenic photosynthesis. The evidence for the rise of atmospheric oxygen places a minimum time constraint before which oxygenic photosynthesis must have developed, and was subsequently established as the primary control on the atmospheric oxygen level. The geological evidence places the global rise of atmospheric oxygen, termed the Great Oxidation Event (GOE), between ∼ 2.45 and ∼ 2.32 Ga, and it is captured within the Duitschland Formation, which shows a transition from mass-independent to mass-dependent sulfur isotope fractionation. The rise of atmospheric oxygen during this interval is closely associated with a number of environmental changes, such as glaciations and intense continental weathering, and led to dramatic changes in the oxidation state of the ocean and the seawater inventory of transition elements. There are other features of the geologic record predating the GOE by as much as 200-300 million years, perhaps extending as far back as the Mesoarchean-Neoarchean boundary at 2.8 Ga, that suggest the presence of low level, transient or local, oxygenation. If verified, these features would not only imply an earlier origin for oxygenic photosynthesis, but also require a mechanism to decouple oxygen production from oxidation of Earth's surface environments. Most hypotheses for the GOE suggest that oxygen production by oxygenic photosynthesis is a precondition for the rise of oxygen, but that a synchronous change in atmospheric oxygen level is not required by the onset of this oxygen source. The potential lag-time in the response of Earth surface environments is related to the way that oxygen sinks, such as reduced Fe and sulfur compounds, respond to oxygen production. Changes in oxygen level imply an imbalance in the sources and sinks for oxygen. Changes in the cycling of oxygen have occurred at various times before and after the GOE, and do not appear to require corresponding changes in the intensity of oxygenic photosynthesis. The available geological constraints for these changes do not, however, disallow a direct role for this metabolism. The geological evidence for early oxygen and hypotheses for the controls on oxygen level are the basis for the interpretation of photosynthetic oxygen production as examined in this review. © Springer Science+Business Media B.V. 2010.


Apalkov V.M.,Georgia State University | Chakraborty T.,University of Manitoba
Solid State Communications | Year: 2014

We report on the properties of incompressible states of Dirac fermions in graphene in the presence of an anisotropic Hamiltonian and a quantizing magnetic field. We introduce the necessary formalism to incorporate the unimodular spatial metric to deal with the anisotropy in the system. The incompressible state in graphene is found to survive the anisotropy up to a critical value of the anisotropy parameter. The anisotropy also introduces two branches in the collective excitations of the corresponding Laughlin state. It strongly influences the short-range behavior of the pair-correlation functions in the incompressible ground state. © 2013 Elsevier Ltd.


Apalkov V.M.,Georgia State University | Chakraborty T.,University of Manitoba
Physical Review Letters | Year: 2010

Here we report from our theoretical studies that, in biased bilayer graphene, one can induce phase transitions from an incompressible fractional quantum Hall state to a compressible state by tuning the band gap at a given electron density. The nature of such phase transitions is different for weak and strong interlayer coupling. Although more levels interact in the strong-coupling case, there are fewer transitions than for weak coupling. The intriguing scenario of tunable phase transitions in the fractional quantum Hall states is unique to bilayer graphene and has no counterpart in conventional semiconductor systems. © 2010 The American Physical Society.


Hasan M.,University of Manitoba | Hossain E.,University of Manitoba | Kim D.I.,Sungkyunkwan University
IEEE Transactions on Wireless Communications | Year: 2014

Device-to-device (D2D) communication in cellular networks allows direct transmission between two cellular devices with local communication needs. Due to the increasing number of autonomous heterogeneous devices in future mobile networks, an efficient resource allocation scheme is required to maximize network throughput and achieve higher spectral efficiency. In this paper, the performance of network-integrated D2D communication under channel uncertainties is investigated, where D2D traffic is carried through relay nodes. Considering a multi-user and multi-relay network, we propose a robust distributed solution for resource allocation with a view to maximizing network sum-rate when the interference from other relay nodes and the link gains are uncertain. An optimization problem is formulated for allocating radio resources at the relays to maximize the end-to-end rate as well as satisfy the quality-of-service (QoS) requirements for cellular and D2D user equipments under a total power constraint. Each of the uncertain parameters is modeled by a bounded distance between its estimated and actual values. We show that the robust problem is convex, and a gradient-aided dual decomposition algorithm is applied to allocate radio resources in a distributed manner. Finally, to reduce the cost of robustness, defined as the reduction of achievable sum-rate, we utilize the chance constraint approach to achieve a trade-off between robustness and optimality. The numerical results show that there is a distance threshold beyond which relay-aided D2D communication significantly improves network performance when compared to direct communication between D2D peers. © 2002-2012 IEEE.


Abdelnasser A.,University of Manitoba | Hossain E.,University of Manitoba | Kim D.I.,Sungkyunkwan University
IEEE Transactions on Wireless Communications | Year: 2014

Small cells such as femtocells overlaying the macrocells can enhance the coverage and capacity of cellular wireless networks and increase the spectrum efficiency by reusing the frequency spectrum assigned to the macrocells in a universal frequency reuse fashion. However, management of both the cross-tier and co-tier interferences is one of the most critical issues for such a two-tier cellular network. Centralized solutions for interference management in a two-tier cellular network with orthogonal frequency-division multiple access (OFDMA), which yield optimal/near-optimal performance, are impractical due to the computational complexity. Distributed solutions, on the other hand, lack the superiority of centralized schemes. In this paper, we propose a semi-distributed (hierarchical) interference management scheme based on joint clustering and resource allocation for femtocells. The problem is formulated as a mixed integer non-linear program (MINLP). The solution is obtained by dividing the problem into two sub-problems, where the related tasks are shared between the femto gateway (FGW) and femtocells. The FGW is responsible for clustering, where correlation clustering is used as a method for femtocell grouping. In this context, a low-complexity approach for solving the clustering problem is used based on semi-definite programming (SDP). In addition, an algorithm is proposed to reduce the search range for the best cluster configuration. For a given cluster configuration, within each cluster, one femto access point (FAP) is elected as a cluster head (CH) that is responsible for resource allocation among the femtocells in that cluster. The CH performs sub-channel and power allocation in two steps iteratively, where a low-complexity heuristic is proposed for the sub-channel allocation phase. Numerical results show the performance gains due to clustering in comparison to other related schemes. Also, the proposed correlation clustering scheme offers performance, which is close to that of the optimal clustering, with a lower complexity. © 2014 IEEE.


Chakraborty T.,University of Manitoba | Apalkov V.M.,Georgia State University
Solid State Communications | Year: 2013

The relativistic-like behavior of electrons in graphene significantly influences the interaction properties of these electrons in a quantizing magnetic field, resulting in more stable fractional quantum Hall effect states as compared to those in conventional (non-relativistic) semiconductor systems. In bilayer graphene the interaction strength can be controlled by a bias voltage and by the orientation of the magnetic field. The finite bias voltage between the graphene monolayers can, in fact, enhance the interaction strength in a given Landau level. As a function of the bias voltage, a graphene bilayer system shows transitions from a state with weak electron-electron interactions to a state with strong interactions. Interestingly, the in-plane component of a tilted magnetic field can also alter the interaction strength in bilayer graphene. We also discuss the nature of the Pfaffian state in bilayer graphene and demonstrate that the stability of this state can be greatly enhanced by applying an in-plane magnetic field. © 2013 Elsevier Ltd.


Akkarajitsakul K.,University of Manitoba | Hossain E.,University of Manitoba | Niyato D.,Nanyang Technological University | Kim D.I.,Sungkyunkwan University
IEEE Communications Surveys and Tutorials | Year: 2011

Multiple access methods in a wireless network allow multiple nodes to share a set of available channels for data transmission. The nodes can either compete or cooperate with each other to access the channel(s) so that either an individual or a group objective can be achieved. Game theory, which is a mathematical tool developed to understand the interaction among rational entities, can be applied to model and to analyze individual or group behaviour of nodes for multiple access in wireless networks. Game theory also enables us to model the selfish/malicious behaviour of nodes, and subsequently design punishment or defense mechanisms for robust multiple access in wireless networks. In addition, game models can provide distributed solutions to the multiple access problems, which are based on solid theoretical foundations. In this survey, we provide a comprehensive review of the game models (e.g., noncooperative/cooperative, static/dynamic, and complete/incomplete information) developed for different multiple access schemes (i.e., contention-free and contention-based random channel access) in wireless networks. We consider time-division multiple access (TDMA), frequency-division multiple access (FDMA), code-division multiple access (CDMA), ALOHA, and carrier sense multiple access (CSMA)-based wireless networks. In addition, game models for multiple access in dynamic spectrum access-based cognitive radio networks are reviewed. The major findings from the game models used for these different access schemes are highlighted. Finally, several key open research directions are outlined. © 2005 IEEE.


Apalkov V.M.,Georgia State University | Apalkov V.M.,University of Manitoba | Chakraborty T.,Georgia State University | Chakraborty T.,University of Manitoba
Physical Review Letters | Year: 2014

The effects of mutual Coulomb interactions between Dirac fermions in monolayer graphene on the Hofstadter energy spectrum are investigated. For two flux quanta per unit cell of the periodic potential, interactions open a gap in each Landau level with the smallest gap in the n=1 Landau level. For more flux quanta through the unit cell, where the noninteracting energy spectra have many gaps in each Landau level, interactions enhance the low-energy gaps and strongly suppress the high-energy gaps and almost close a high-energy gap for n=1. The signature of the interaction effects in the Hofstadter system can be probed through magnetization, which is governed by the mixing of the Landau levels and is enhanced by the Coulomb interaction. © 2014 American Physical Society.


Choi K.W.,Seoul National University of Science and Technology | Hossain E.,University of Manitoba | Kim D.I.,Sungkyunkwan University
IEEE Transactions on Wireless Communications | Year: 2011

We propose a novel subchannel and transmission power allocation scheme for multi-cell orthogonal frequency-division multiple access (OFDMA) networks with cognitive radio (CR) functionality. The multi-cell CR-OFDMA network not only has to control the interference to the primary users (PUs) but also has to coordinate inter-cell interference within itself. The proposed scheme allocates the subchannels to the cells so as to maximize the system capacity, while at the same time limiting the transmission power on the subchannels on which the PUs are active. We formulate this joint subchannel and transmission power allocation problem as an optimization problem. To solve it efficiently, we divide it into multiple subproblems by using the dual decomposition method, and present algorithms to solve these subproblems. The resulting scheme efficiently allocates the subchannels and the transmission power in a distributed way. The simulation results show that the proposed scheme provides significant improvement over the traditional fixed subchannel allocation scheme in terms of system throughput. © 2011 IEEE.
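The dual decomposition step can be illustrated in miniature on a single-transmitter version of the problem (a sketch only; the actual scheme additionally coordinates multiple cells and caps power on PU-occupied subchannels). The coupling total-power constraint is priced with a multiplier, each subchannel then solves its own one-variable subproblem in closed form, and the multiplier is updated by subgradient ascent:

    import numpy as np

    def dual_power_allocation(gains, p_total, noise=1.0, step=0.05, iters=2000):
        # Maximize sum_k log2(1 + gains[k] * p[k] / noise) s.t. sum(p) <= p_total.
        lam, ln2 = 1.0, np.log(2.0)
        for _ in range(iters):
            # Per-subchannel subproblem: argmax_p log2(1 + g*p/noise) - lam*p,
            # whose closed-form solution is a water-filling-type expression.
            p = np.maximum(0.0, 1.0 / (lam * ln2) - noise / gains)
            # Subgradient update on the price of the total-power constraint.
            lam = max(1e-9, lam + step * (p.sum() - p_total))
        return p

    # Example: three subchannels with unit noise and a power budget of 3.
    print(dual_power_allocation(np.array([1.0, 0.5, 2.0]), p_total=3.0))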


Choi K.W.,Seoul National University of Science and Technology | Hossain E.,University of Manitoba | Kim D.I.,Sungkyunkwan University
IEEE Transactions on Wireless Communications | Year: 2011

We propose a novel cooperative spectrum sensing algorithm for a cognitive radio (CR) network to detect a primary user (PU) network that exhibits some degree of randomness in topology (e.g., due to mobility). We model the PU network as a random geometric network that can better describe small-scale mobile PUs. Based on this model, we formulate the random PU network detection problem in which the CR network detects the presence of a PU receiver within a given detection area. To address this problem, we propose a location-aware cooperative sensing algorithm that linearly combines multiple sensing results from secondary users (SUs) according to their geographical locations. In particular, we invoke the Fisher linear discriminant analysis to determine the linear coefficients for combining the sensing results. The simulation results show that the proposed sensing algorithm yields comparable performance to the optimal maximum likelihood (ML) detector and outperforms the existing ones, such as equal coefficient combining, OR-rule-based and AND-rule-based cooperative sensing algorithms, by a very wide margin. © 2011 IEEE.
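The Fisher linear discriminant combining step admits a compact, generic sketch (this is the textbook discriminant applied to sensing statistics, not the authors' exact location-aware weighting; the training arrays and threshold below are assumptions):

    import numpy as np

    def fisher_weights(x_h0, x_h1):
        # x_h0, x_h1: (num_samples, num_SUs) sensing statistics collected
        # when the PU is known absent / present (training data).
        mu0, mu1 = x_h0.mean(axis=0), x_h1.mean(axis=0)
        # Within-class scatter; solve S_w w = mu1 - mu0 for the weights.
        s_w = np.cov(x_h0, rowvar=False) + np.cov(x_h1, rowvar=False)
        w = np.linalg.solve(s_w, mu1 - mu0)
        return w / np.linalg.norm(w)

    def pu_present(x, w, threshold):
        # Linearly combine the SUs' current statistics and compare against a
        # threshold chosen, e.g., for a target false-alarm probability.
        return float(x @ w) > threshold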


Webborn N.,University of Brighton | Van De Vliet P.,International Paralympic Committee Medical and Scientific | Van De Vliet P.,University of Manitoba
The Lancet | Year: 2012

Paralympic medicine describes the health-care issues of those 4500 or so athletes who gather every 4 years to compete in 20 sports at the Summer Paralympic Games and in five sports at the Winter Paralympic Games. Paralympic athletes compete within six impairment groups: amputation or limb deficiencies, cerebral palsy, spinal cord-related disability, visual impairment, intellectual impairment, or a range of physically impairing disorders that do not fall into the other classification categories, known as les autres. The variety of impairments, many of which are severe, fluctuating, or progressive disorders (and are sometimes rare), makes maintenance of health in thousands of Paralympians while they undertake elite competition an unusual demand on health-care resources. The increased physical fitness of athletes with disabilities has important implications for cardiovascular risk reduction in a population for whom the prevalence of risk factors can be high.


Saba R.,Public Health Agency of Canada | Sorensen D.L.,Public Health Agency of Canada | Booth S.A.,Public Health Agency of Canada | Booth S.A.,University of Manitoba
Frontiers in Immunology | Year: 2014

MicroRNAs (miRNAs) are a class of small non-coding RNA molecules that can play critical roles as regulators of numerous pathways and biological processes, including the immune response. Emerging as one of the most important miRNAs orchestrating immune and inflammatory signaling, often through its recognized target genes IRAK1 and TRAF6, is microRNA-146a (miR-146a). MiR-146a is one of a small number of miRNAs whose expression is strongly induced following challenge of cells with bacterial endotoxin, and prolonged expression has been linked to immune tolerance, implying that it acts as a fine-tuning mechanism to prevent an overstimulation of the inflammatory response. In other cells, miR-146a has been shown to play a role in the control of the differentiation of megakaryocytic and monocytic lineages, adaptive immunity, and cancer. In this review, we discuss the central role ascribed to miR-146a in innate immunity. We particularly focus on the role played by miR-146a in the regulation and signaling mediated by one of the main classes of pattern recognition receptors, toll/IL-1 receptors (TLRs). Additionally, we also discuss the role of miR-146a in several classes of autoimmune pathologies where this miRNA has been shown to be dysregulated, as well as its potential role in the pathobiology of neurodegenerative diseases. © 2014 Saba, Sorensen and Booth.


Apalkov V.M.,Georgia State University | Chakraborty T.,University of Manitoba
Physical Review B - Condensed Matter and Materials Physics | Year: 2011

Misoriented bilayer graphene with commensurate angles shows unique magneto-optical properties. The optical absorption spectra of such a system strongly depend on the angle of rotation. For a general commensurate twist angle, the absorption spectrum has a simple single-peak structure. However, our studies indicate that there are special angles at which the absorption spectra of the rotated bilayer exhibit well-developed multipeak structures. These angles correspond to even symmetry of the rotated graphene with respect to the sublattice exchange. Magnetospectroscopy can therefore be a potentially useful scheme to determine the twist angles. © 2011 American Physical Society.


Apalkov V.M.,Georgia State University | Chakraborty T.,University of Manitoba
Physical Review B - Condensed Matter and Materials Physics | Year: 2012

Trilayer graphene in the fractional quantum Hall effect regime displays a set of unique interaction-induced transitions that can be tuned entirely by the applied bias voltage. These transitions occur near the anticrossing points of two Landau levels. In a large magnetic field (8 T) the electron-electron interactions close the anticrossing gap, resulting in some unusual transitions between different Landau levels. For the filling factor ν=2/3, these transitions are accompanied by a change of spin polarization of the ground state. For small Zeeman energy, this provides a unique opportunity to control the spin polarization of the ground state by fine tuning the bias voltage. © 2012 American Physical Society.


Singh H.,University of Manitoba | Nugent Z.,University of Manitoba | Demers A.A.,University of Manitoba | Demers A.A.,Public Health Agency of Canada | Bernstein C.N.,University of Manitoba
Gastroenterology | Year: 2011

Background & Aims: There are limited data on the risk of nonmelanoma skin cancer (NMSC) among individuals with inflammatory bowel disease (IBD), including those with or without exposure to immunosuppressant medications. Methods: Individuals with IBD (n = 9618) were identified from the University of Manitoba IBD Epidemiology Database and matched with randomly selected controls (n = 91,378) based on age, sex, and postal area of residence on the date of IBD diagnosis (index date). Groups were followed up from the index date until a diagnosis of any invasive cancer (including NMSC), death, migration from the province, or the end of the study (December 31, 2009), whichever came first. Cox regression analysis was performed to calculate the relative risk of NMSC among the individuals with IBD, adjusting for frequency of ambulatory care visits and socioeconomic status. Results: Of the individuals followed, 1696 were diagnosed with basal cell skin cancer (BCC) and 341 were diagnosed with squamous cell skin cancer (SCC). Individuals with IBD had an increased risk for BCC, compared with controls (hazard ratio, 1.20; 95% confidence interval [CI], 1.03-1.40). Among patients with IBD, use of thiopurines increased the risk of SCC (hazard ratio, 5.40; 95% CI, 2.00-14.56), compared with controls. Use of thiopurines also was associated with SCC in a case-control, nested analysis of individuals with IBD (odds ratio, 20.52; 95% CI, 2.42-173.81). Conclusions: The risk of BCC could be increased among individuals with IBD. Use of thiopurines increases the risk of SCC among individuals with IBD. © 2011 AGA Institute.


News Article | December 1, 2016
Site: www.eurekalert.org

Systematic review looks at which DXA-based measurements can be used to help identify patients at increased risk of fracture

Increased risk of fracture has been shown to be one of the complications arising from longstanding diabetes. With the worldwide increase in Type 2 Diabetes (T2D), in part due to aging populations, there is also increasing concern about how to identify and manage patients with diabetes who are at high risk of osteoporotic fracture. Osteoporosis is usually diagnosed from bone mineral density (BMD) measured by dual-energy X-ray absorptiometry (DXA). The authors reviewed data on skeletal parameters and techniques readily available from DXA scanning, and considered their utility in routine clinical practice for predicting fracture risk. In addition to measuring BMD, DXA offers other applications and measurements: trabecular bone score (TBS), skeletal geometry and DXA-based finite-element analysis, vertebral fracture assessment (VFA), and body composition.

They also looked at fracture prediction tools, and specifically at the widely used Fracture Risk Assessment Tool (FRAX®), which is incorporated into modern DXA scanners. FRAX underestimates fracture risk in individuals with T2D, with factors contributing to this underestimation including the higher BMD observed in T2D, the greater risk for falls, and alterations in material strength. Nevertheless, several methods have been proposed to improve the performance of FRAX in T2D. The review summarizes the evidence for the effect of various DXA-derived skeletal parameters in T1D and T2D, and draws conclusions on whether they can be used to account for the excess fracture risk.

Lead author Professor William D. Leslie of the Department of Medicine, University of Manitoba, Canada stated: "Diabetes is associated with increased fracture risk that is only partially reflected by the BMD reductions seen in T1D, and is underestimated in T2D where BMD is increased. While BMD from DXA still stratifies fracture risk in those with diabetes, additional measures that can be obtained from DXA help to identify high-risk patients. Incorporating this additional information into risk prediction models may help to avoid systematically underestimating the risk of osteoporosis-related fractures in people with diabetes."

Reference: Schacter G I, Leslie W D. DXA-Based Measurements in Diabetes: Can They Predict Fracture Risk? Calcif Tissue Int. DOI 10.1007/s00223-016-0191-x

Calcified Tissue International & Musculoskeletal Research is a peer-reviewed journal which publishes original preclinical, translational and clinical research, and reviews, concerning the structure and function of bone and other musculoskeletal tissues in living organisms, as well as clinical studies of musculoskeletal disease. It includes studies of cell biology, molecular biology, intracellular signalling, and physiology, as well as research into the hormones, cytokines and other mediators that influence the musculoskeletal system. The journal also publishes clinical studies of relevance to bone disease, mineral metabolism, muscle function, and musculoskeletal interactions. IOF Professional members can freely access IOF scientific journals via the IOF website.

The International Osteoporosis Foundation (IOF) is the world's largest nongovernmental organization dedicated to the prevention, diagnosis and treatment of osteoporosis and related musculoskeletal diseases.
IOF members, including committees of scientific researchers as well as 234 patient, medical and research societies in 99 locations, work together to make fracture prevention and healthy mobility a worldwide health care priority.


News Article | December 2, 2015
Site: www.biosciencetechnology.com

The answer to a bendable, flexible body sensor may be in your mouth. A new stretchy sensor has been developed with the help of chewing gum and carbon nanotubes. While many metal wearable sensors are sensitive, they can stop working if twisted or bent. The new type of sensor can monitor slight movements such as bending a finger or turning your head, with high sensitivity even when strained up to 530 percent, researchers from the University of Manitoba in Canada reported in ACS Applied Materials & Interfaces. According to a press release from the American Chemical Society, the sensor could also be used to monitor breathing by sensing changes in humidity when water vapor is released with each exhale. To create the flexible sensor, scientists chewed a piece of Doublemint gum for 30 minutes, washed it with ethanol, and let it sit overnight. The sensing material, a carbon nanotube solution, was then added, and the material was stretched and creased to align the nanotubes properly.


News Article | November 30, 2015
Site: news.yahoo.com

PARIS (Reuters) - Bill Gates grows most animated when the talk turns to the "cool" new energy technologies that have yet to leave the lab. Gates was a rare civilian sharing the limelight alongside presidents and prime ministers at the opening session of Paris climate talks on Monday. Offstage, in a barren conference room, he excitedly described the possibility of generating energy through the long-speculated process of artificial photosynthesis, using the energy of sunshine to produce liquid hydrocarbons that could challenge the supremacy of fossil fuels.

“If it works it would be magical,” says Gates, hugging his elbows to his side and rocking lightly in his seat. “Because with liquids you don’t have the intermittency problem [that] batteries [do]. You can put the liquid into a big tank and burn it whenever you want.” “There are dozens of things like that that are high risk but huge impact if they are successful.”

Gates was in Paris to push his latest bit of entrepreneurial philanthropy: the Breakthrough Energy Coalition, an informal club of 28 private investors from around the world, including several hedge fund billionaires who have agreed to follow his lead and pump seed money into energy research and development. Gates believes the energy sector suffers from a dearth of such funding, the reason much of the world is still burning coal for its power. A readiness to put another billion dollars of his own money into what is already a roughly billion-dollar portfolio of energy investments was also enough for Gates to convince 20 governments to commit to doubling their own R&D investments within five years.

“If we are to avoid the levels of warming that are dangerous we need to move at full speed,” the co-founder of Microsoft told a trio of journalists, including one from Reuters. Gates says the energy sector’s complacency about developing new technologies makes it ripe for disruption. “We need to surprise them that these alternative ways of doing energy can come along and come along in an economic way,” he says.

Gates has become a devotee of Vaclav Smil, a little-known Czech-Canadian professor of the environment at the University of Manitoba in Winnipeg whom he calls “the best energy author there is”. Smil has written extensively about the long periods of time required for new energy technologies to take off. Oil, gas, nuclear: for all, the period from invention to widespread deployment was half a century. It’s a warning to those who think new technologies will be a quick fix for the warming planet, though Gates thinks new energy sources can be moved to market faster these days. “I’m more of an optimist than Vaclav,” he says, noting that “the world, scientifically, is far more sophisticated than at any time in the past - our understanding of material science, our ability to simulate things, just the number of scientists and engineers in the world alone.”

Gates figures it will take a decade to develop two or three breakthrough technologies, then another 20 years before the technologies can become a core of the energy system. Thirty years. But don’t venture capitalists like to be around to see and enjoy the returns on their investments? “Well I hope to be alive then,” he said with a slight grimace. “I just went to my Dad’s 90th birthday. And it will be my 90th birthday in 2045.”


The International Association of HealthCare Professionals is pleased to welcome Alex G. Pappas, DMD, to their prestigious organization with his upcoming publication in The Leading Physicians of the World. Dr. Pappas is a highly trained and qualified dentist with extensive expertise in all facets of his work, especially general and cosmetic dentistry. Dr. Alex G. Pappas has been in practice for more than 25 years and is currently serving patients within his own practice in Brandon, Manitoba, Canada. Furthermore, he is also affiliated with Brandon Regional Hospital. Dr. Alex G. Pappas’ career in dentistry began in 1991, when he graduated with his Doctor of Dental Medicine degree from the University of Manitoba in Winnipeg, Canada. Since graduating, Dr. Pappas has completed many advanced training courses, including in the use of 3D scanning in dentistry. To keep up to date with the latest advances and developments in the field, Dr. Pappas maintains a professional membership with the Winnipeg Dental Society, and reads the Oral Health Medical Journal. He attributes his success to his passion for dentistry, and to utilizing the latest technology to take his dentistry to the next level. When he is not working, Dr. Pappas enjoys skiing, golfing, and racquet sports. Learn more about Dr. Pappas by reading his upcoming publication in The Leading Physicians of the World. FindaTopDoc.com is a hub for all things medicine, featuring detailed descriptions of medical professionals across all areas of expertise, and information on thousands of healthcare topics. Each month, millions of patients use FindaTopDoc to find a doctor nearby and instantly book an appointment online or create a review. FindaTopDoc.com features each doctor’s full professional biography highlighting their achievements, experience, patient reviews and areas of expertise. A leading provider of valuable health information that helps empower patient and doctor alike, FindaTopDoc enables readers to live a happier and healthier life. For more information about FindaTopDoc, visit http://www.findatopdoc.com


Hess J.E.,Columbia River Inter Tribal Fish Commission | Campbell N.R.,Columbia River Inter Tribal Fish Commission | Close D.A.,University of British Columbia | Docker M.F.,University of Manitoba | Narum S.R.,Columbia River Inter Tribal Fish Commission
Molecular Ecology | Year: 2013

Unlike most anadromous fishes that have evolved strict homing behaviour, Pacific lamprey (Entosphenus tridentatus) seem to lack philopatry as evidenced by minimal population structure across the species range. Yet unexplained findings of within-region population genetic heterogeneity coupled with the morphological and behavioural diversity described for the species suggest that adaptive genetic variation underlying fitness traits may be responsible. We employed restriction site-associated DNA sequencing to genotype 4439 quality-filtered single nucleotide polymorphism (SNP) loci for 518 individuals collected across a broad geographical area including British Columbia, Washington, Oregon and California. A subset of putatively neutral markers (N = 4068) identified a significant amount of variation among three broad populations: northern British Columbia, Columbia River/southern coast and 'dwarf' adults (FCT = 0.02, P ≪ 0.001). Additionally, 162 SNPs were identified as adaptive through outlier tests, and inclusion of these markers revealed a signal of adaptive variation related to geography and life history. The majority of the 162 adaptive SNPs were not independent and formed four groups of linked loci. Analyses with matsam software found that 42 of these outlier SNPs were significantly associated with geography, run timing and dwarf life history, and 27 of these 42 SNPs aligned with known genes or highly conserved genomic regions using the genome browser available for sea lamprey. This study provides both neutral and adaptive context for observed genetic divergence among collections and thus reconciles previous findings of population genetic heterogeneity within a species that displays extensive gene flow. © 2012 John Wiley & Sons Ltd.


Ellis M.J.,University of Manitoba | Leddy J.J.,State University of New York at Buffalo | Willer B.,State University of New York at Buffalo
Brain Injury | Year: 2015

Primary objective: To present a novel pathophysiological approach to acute concussion and post-concussion syndrome (PCS). Research design: Review of the literature. Methods and procedures: PubMed searches were performed to identify articles related to the pathophysiology and treatment of concussion and PCS. Relevant articles that contributed to the primary objective of the paper were included. Main outcome and results: This paper presents an evidence-based approach to acute concussion and PCS that focuses on the identification of specific post-concussion disorders (PCDs) caused by impairments in global brain metabolism (Physiologic PCD) or neurological sub-system dysfunction (Vestibulo-ocular PCD and Cervicogenic PCD) that can be distinguished by features of the clinical history, physical examination and treadmill exercise testing. This novel approach also allows for the initiation of evidence-based, multi-disciplinary therapeutic interventions that can improve individual symptoms and promote efficient neurological recovery. Conclusion: Future studies incorporating neuro-imaging and exercise science techniques are underway at the authors' institutions to validate this novel pathophysiological approach to acute concussion and PCS. © 2015 Informa UK Ltd.


Chochinov H.M.,Manitoba Palliative Care Research Unit | Chochinov H.M.,University of Manitoba
Journal of Pain and Symptom Management | Year: 2013

Providing care for patients and caring about patients should go hand in hand. Caring implicates our fundamental attitude towards patients, and our ability to convey kindness, compassion and respect. Yet all too often, patients and families experience health care as impersonal and mechanical, and quickly discover that patienthood trumps personhood. The consequences of a medical system organized around care rather than caring are considerable. Despite technical competence, patients and families are less satisfied with medical encounters when caring is lacking. Lack of empathy and emotional disengagement from patients typifies health care provider burnout. Caring is the gateway to disclosure; without it, patients are less likely to say what is bothering them, leading to missed diagnoses, medical errors and compromised patient safety. There are also liability issues, with most complaints levied against health care professionals stemming from failures in care tenor. Formal education for health care providers lacks a continued focus on achieving a culture of caring. If caring really matters, health care systems can insist on certain behaviors and impose certain obligations on health care providers to improve care tenor, empathy, and effective communication. Caregivers need to be engaged in looking at their own attitudes towards patients, their own vulnerability, their own fears and whatever else it is that shapes their tone of care. Health care professionals must set aside some time, supported by their institutions, to advance a culture of caring - now is the time to take action. © 2013 U.S. Cancer Pain Relief Committee. Published by Elsevier Inc. All rights reserved.


Chochinov H.M.,University of Manitoba | Chochinov H.M.,Manitoba Palliative Care Research Unit
Journal of Pain and Symptom Management | Year: 2011

Many people believe that spending large amounts of money on end-of-life care is unjustified and even irrational. This fails to recognize that the value of time, particularly quality time, appears to increase as death draws near. Paying for treatment that merely allows patients and families to avoid confronting the inevitability of death is wrong. However, palliative care, which can bolster the quality of a patient's remaining days, provides benefits that extend to the family and beyond. How can the notion of time gaining value toward the end of life be incorporated into conventional cost-benefit analyses? A standard QALY (Quality-Adjusted Life Year) is the product of quality of life and time, without adjusting for any change in the value of time. An additional variable - a Valuation Index (Palliative) (or VIP) - needs to be factored into the equation, providing a rational explanation for what otherwise might be deemed irrational spending. When one recognizes the multitude of important things that happen as people approach the very end of life, the numbers start to add up. © 2011 U.S. Cancer Pain Relief Committee. Published by Elsevier Inc. All rights reserved.
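
The paper does not give an explicit formula, so the following rendering, with v(t_i) standing in for the VIP, is only one plausible way to write the proposed adjustment:

\[ \mathrm{QALY} = \sum_i q_i\, t_i \quad\longrightarrow\quad \mathrm{QALY}_{\mathrm{VIP}} = \sum_i q_i\, t_i\, v(t_i), \]

where q_i is the quality of life during interval i, t_i is its duration, and v(t_i) >= 1 is a weight that grows as the interval approaches the end of life; setting v = 1 everywhere recovers the standard QALY.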


Kang H.,Pusan National University | Edmon P.P.,University of Manitoba | Jones T.W.,University of Minnesota
Astrophysical Journal | Year: 2012

We calculate nonthermal radiation from cosmic-ray (CR) protons and electrons accelerated at CR modified plane and spherical shocks, using time-dependent, diffusive shock acceleration (DSA) simulations that include radiative losses of CR electrons. Strong non-relativistic shocks with physical parameters relevant for young supernova remnants (SNRs) are considered in both the plane-parallel and spherically symmetric geometries, and compared at times when their dynamical and CR properties are concordant. A thermal leakage injection model and a Bohm-like diffusion coefficient are adopted. After DSA energy gains balance radiative losses, the electron spectrum at the plane shock approaches a time-asymptotic spectrum with a super-exponential cutoff above the equilibrium momentum. The postshock electron spectrum cuts off at a progressively lower momentum downstream from the shock due to the energy losses. That results in the steepening of the volume integrated electron energy spectrum by one power of the particle energy. These features evolve toward lower energies in the spherical, SNR shocks. In a CR modified shock, pion decay gamma-ray emission reveals distinct signatures of nonlinear DSA due to the concave proton momentum spectrum. Although the electron momentum spectrum has a much weaker concavity, the synchrotron spectral slope at the shock may flatten by about 0.1-0.3 between radio and X-ray bands. The slope of the volume integrated emission spectrum behaves nonlinearly around the break frequency. © 2012. The American Astronomical Society. All rights reserved.
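
For orientation, the "Bohm-like" diffusion coefficient named above has a standard reference form (given here as background; the paper's exact prescription is not quoted in this summary):

\[ \kappa_{\mathrm{Bohm}} = \tfrac{1}{3}\, r_g v, \qquad r_g = \frac{pc}{qB}, \]

so that \(\kappa \propto p\) for relativistic particles (v ≈ c); a Bohm-like coefficient is one of this order, possibly with momentum-dependent corrections.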


Roksandic M.,University of Winnipeg | Armstrong S.D.,University of Manitoba
American Journal of Physical Anthropology | Year: 2011

Paleodemography, the study of demographic parameters of past human populations, relies on assumptions including biological uniformitarianism, stationary populations, and the ability to determine point age estimates from skeletal material. These assumptions have been widely criticized in the literature and various solutions have been proposed. The majority of these solutions rely on statistical modeling, and have not seen widespread application. Most bioarchaeologists recognize that our ability to assess chronological age is inherently limited, and have instead resorted to large, qualitative age categories. However, there has been little attempt in the literature to systematize and define the stages of development and ageing used in bioarchaeology. We propose that stages should be based on the human life history pattern, and that their skeletal markers should have easily defined and clear endpoints. In addition to a standard five-stage developmental model based on the human life history pattern, current among human biologists, we suggest divisions within the adult stage that recognize the specific nature of skeletal samples. We therefore propose the following eight stages recognizable in human skeletal development and senescence: infancy, early childhood, late childhood, adolescence, young adulthood, full adulthood, mature adulthood, and senile adulthood. Striving toward a better prediction of chronological ages will remain important and could eventually help us understand to what extent past societies differed in the timing of these life stages. Furthermore, paleodemographers should try to develop methods that rely on the type of age information accessible from the skeletal material, which uses life stages rather than point age estimates. Copyright © 2011 Wiley-Liss, Inc.


McCubbin J.A.,University of Winnipeg | Krokhin O.V.,University of Manitoba
Tetrahedron Letters | Year: 2010

Electron-rich aromatic and heteroaromatic rings are functionalized directly with a variety of benzylic alcohols under mild conditions. The reaction is catalyzed by commercially available pentafluorophenylboronic acid, which is stable under ambient conditions and recoverable. The reaction itself is highly atom economical and produces water as the only byproduct. A Friedel-Crafts mechanism is proposed. © 2010 Elsevier Ltd. All rights reserved.


Blonde L.,Ochsner Medical Center | Jendle J.,Örebro University | Gross J.,Federal University of Rio Grande do Sul | Woo V.,University of Manitoba | And 3 more authors.
The Lancet | Year: 2015

Background For patients with type 2 diabetes who do not achieve target glycaemic control with conventional insulin treatment, advancing to a basal-bolus insulin regimen is often recommended. We aimed to compare the efficacy and safety of long-acting glucagon-like peptide-1 receptor agonist dulaglutide with that of insulin glargine, both combined with prandial insulin lispro, in patients with type 2 diabetes. Methods We did this 52 week, randomised, open-label, phase 3, non-inferiority trial at 105 study sites in 15 countries. Patients (aged ≥18 years) with type 2 diabetes inadequately controlled with conventional insulin treatment were randomly assigned (1:1:1), via a computer-generated randomisation sequence with an interactive voice-response system, to receive once-weekly dulaglutide 1·5 mg, dulaglutide 0·75 mg, or daily bedtime glargine. Randomisation was stratified by country and metformin use. Participants and study investigators were not masked to treatment allocation, but were unaware of dulaglutide dose assignment. The primary outcome was a change in glycated haemoglobin A1c (HbA1c) from baseline to week 26, with a 0·4% non-inferiority margin. Analysis was by intention to treat. This trial is registered with ClinicalTrials.gov, number NCT01191268. Findings Between Dec 9, 2010, and Sept 21, 2012, we randomly assigned 884 patients to receive dulaglutide 1·5 mg (n=295), dulaglutide 0·75 mg (n=293), or glargine (n=296). At 26 weeks, the adjusted mean change in HbA1c was greater in patients receiving dulaglutide 1·5 mg (-1·64% [95% CI -1·78 to -1·50], -17·93 mmol/mol [-19·44 to -16·42]) and dulaglutide 0·75 mg (-1·59% [-1·73 to -1·45], -17·38 mmol/mol [-18·89 to -15·87]) than in those receiving glargine (-1·41% [-1·55 to -1·27], -15·41 mmol/mol [-16·92 to -13·90]). The adjusted mean difference versus glargine was -0·22% (95% CI -0·38 to -0·07, -2·40 mmol/mol [-4·15 to -0·77]; p=0·005) for dulaglutide 1·5 mg and -0·17% (-0·33 to -0·02, -1·86 mmol/mol [-3·61 to -0·22]; p=0·015) for dulaglutide 0·75 mg. Five (<1%) patients died after randomisation because of septicaemia (n=1 in the dulaglutide 1·5 mg group); pneumonia (n=1 in the dulaglutide 0·75 mg group); cardiogenic shock; ventricular fibrillation; and an unknown cause (n=3 in the glargine group). We recorded serious adverse events in 27 (9%) patients in the dulaglutide 1·5 mg group, 44 (15%) patients in the dulaglutide 0·75 mg group, and 54 (18%) patients in the glargine group. The most frequent adverse events, arising more often with dulaglutide than glargine, were nausea, diarrhoea, and vomiting. Interpretation Dulaglutide in combination with lispro resulted in a significantly greater improvement in glycaemic control than did glargine and represents a new treatment option for patients unable to achieve glycaemic targets with conventional insulin treatment. Funding Eli Lilly and Company. © 2015 Elsevier Ltd.


Manghera M.,University of Winnipeg | Douville R.N.,University of Winnipeg | Douville R.N.,University of Manitoba
Retrovirology | Year: 2013

Humans are symbiotic organisms; our genome is populated with a substantial number of endogenous retroviruses (ERVs), some remarkably intact, while others are remnants of their former selves. Current research indicates that not all ERVs remain silent passengers within our genomes; re-activation of ERVs is often associated with inflammatory diseases. ERVK is the most recently endogenized and transcriptionally active ERV in humans, and as such may potentially contribute to the pathology of inflammatory disease. Here, we showcase the transcriptional regulation of ERVK. Expression of ERVs is regulated in part by epigenetic mechanisms, but also depends on transcriptional regulatory elements present within retroviral long terminal repeats (LTRs). These LTRs are responsive to both viral and cellular transcription factors; and we are just beginning to appreciate the full complexity of transcription factor interaction with the viral promoter. In this review, an exploration into the inflammatory transcription factor sites within the ERVK LTR will highlight the possible mechanisms by which ERVK is induced in inflammatory diseases. © 2013 Manghera and Douville; licensee BioMed Central Ltd.


News Article | October 15, 2016
Site: www.techtimes.com

Operating a smartwatch, in most cases, requires the use of both hands and can become tedious if one hand is holding an object or occupied with tasks. Researchers from Dartmouth College and the University of Manitoba are looking to address this problem using WristWhirl. According to the researchers, while there are efforts to develop methods that allow same-side hand (SSH) operation of smartwatches, they are centered on discrete input operations and on commands through finger postures. And while tilting the wrist may be a viable approach, losing sight of the smartwatch's display becomes the consequence. The WristWhirl project explores an alternative approach, using whirls and continuous wrist movements to operate smartwatches with the same hand.

"When observing the collective range-of-motions of the wrist along each of its axes of movement, the hand can be viewed as a natural joystick," Jun Gong, Xing-Dong Yang and Pourang Irani detail in their paper titled WristWhirl: One-handed Continuous Smartwatch Input using Wrist Gestures. "We explore the ability of the human wrist to perform complex gestures using full wrist motions, or wrist whirls."

As a proof of concept, the team of three designed and built the WristWhirl prototype, which has a 2-inch TFT display. They augmented the watch strap with a dozen infrared proximity sensors and a Piezo vibration sensor connected to an Arduino DUE microcontroller board. The board is then hooked to a laptop that reads the sensor data. Once the device is strapped to a wrist, the user performs a pinch to mark the start of a gesture; another pinch tells the system that the gesture has ended. In total, the team studied eight gestures: four directional (up, down, left, right) and four free-form, including a circle, a rectangle, and a question mark. Respondents took only half a second to perform a directional gesture, and a second and a half for a free-form gesture.

To illustrate potential usage scenarios for these gestures, the team designed four applications: gesture shortcuts, a music player, 2D navigation, and game input. Directional gestures, such as swiping, can be used for navigating content within apps - a music player, for instance. They are also suited to map navigation - using a swipe to pan the map and double taps to zoom. These gestures are the equivalent of flicking a touch screen. Free-form gestures, on the other hand, allow for more complex shapes and can be used to launch preassigned apps or to call a number on speed dial. In combination with directional gestures, games such as Tetris and Fruit Ninja can be played.

The paper will be presented next week at the 29th annual ACM Symposium on User Interface Software and Technology, which will be held in Tokyo, Japan.
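
The article does not include the prototype's software, but the interaction it describes - a pinch delimiting each gesture, with the wrist trace then read like a joystick - can be sketched. The Python fragment below is a hypothetical illustration: the function names, sensor-frame format, and threshold are invented, and free-form shapes would additionally need a template matcher.

PINCH_THRESHOLD = 0.6  # normalized piezo amplitude treated as a pinch

def segment_gesture(samples):
    """Collect (x, y) wrist positions between two pinch events."""
    trace, recording = [], False
    for piezo, x, y in samples:      # one tuple per sensor frame
        if piezo > PINCH_THRESHOLD:
            if recording:            # second pinch: gesture ends
                return trace
            recording = True         # first pinch: gesture starts
        elif recording:
            trace.append((x, y))
    return trace

def classify_directional(trace):
    """Map net wrist displacement to one of the four directional gestures."""
    if not trace:
        return None
    dx = trace[-1][0] - trace[0][0]
    dy = trace[-1][1] - trace[0][1]
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "up" if dy > 0 else "down"

# Example: a rightward flick delimited by two pinches.
frames = [(0.9, 0.0, 0.0), (0.1, 0.0, 0.0), (0.1, 0.4, 0.05),
          (0.1, 0.9, 0.1), (0.9, 0.0, 0.0)]
print(classify_directional(segment_gesture(frames)))  # -> right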


Muller A.L.,St Boniface Hospital Research Center | Dhalla N.S.,University of Manitoba
Heart Failure Reviews | Year: 2012

It is believed that cardiac remodeling due to geometric and structural changes is a major mechanism for the progression of heart failure in different pathologies including hypertension, hypertrophic cardiomyopathy, dilated cardiomyopathy, diabetic cardiomyopathy, and myocardial infarction. Increases in the activities of proteolytic enzymes such as matrix metalloproteinases, calpains, cathepsins, and caspases contribute to the process of cardiac remodeling. In addition to modifying the extracellular matrix, both matrix metalloproteinases and cathepsins have been shown to affect the activities of subcellular organelles in cardiomyocytes. The activation of calpains and caspases has been identified to induce subcellular remodeling in failing hearts. Proteolytic activities associated with different proteins including caspases, calpain, and the ubiquitin-proteasome system have been shown to be involved in cardiomyocyte apoptosis, which is an integral part of cardiac remodeling. This article discusses and compares how the activities of various proteases are involved in different cardiac abnormalities with respect to alterations in apoptotic pathways, cardiac remodeling, and cardiac dysfunction. An imbalance appears to occur between the activities of some proteases and their endogenous inhibitors in various types of hypertrophied and failing hearts, and this is likely to further accentuate subcellular remodeling and cardiac dysfunction. The importance of inhibiting the activities of both extracellular and intracellular proteases specific to distinct etiologies, in attenuating cardiac remodeling and apoptosis as well as biochemical changes of subcellular organelles, in heart failure has been emphasized. It is suggested that combination therapy to inhibit different proteases may prove useful for the treatment of heart failure. © 2011 Springer Science+Business Media, LLC.


Fernyhough P.,University of Manitoba | Fernyhough P.,St Boniface Hospital Research Center | Calcutt N.A.,University of California at San Diego
Cell Calcium | Year: 2010

Abnormal neuronal calcium (Ca2+) homeostasis has been implicated in numerous diseases of the nervous system. The pathogenesis of two increasingly common disorders of the peripheral nervous system, namely neuropathic pain and diabetic polyneuropathy, has been associated with aberrant Ca2+ channel expression and function. Here we review the current state of knowledge regarding the role of Ca2+ dyshomeostasis and associated mitochondrial dysfunction in painful and diabetic neuropathies. The central impact of both alterations of Ca2+ signalling at the plasma membrane and also intracellular Ca2+ handling on sensory neurone function is discussed and related to abnormal endoplasmic reticulum performance. We also present new data highlighting sub-optimal axonal Ca2+ signalling in diabetic neuropathy and discuss the putative role for this abnormality in the induction of axonal degeneration in peripheral neuropathies. The accumulating evidence implicating Ca2+ dysregulation in both painful and degenerative neuropathies, along with recent advances in understanding of regional variations in Ca2+ channel and pump structures, makes modulation of neuronal Ca2+ handling an increasingly viable approach for therapeutic interventions against the painful and degenerative aspects of many peripheral neuropathies. © 2009 Elsevier Ltd. All rights reserved.


Chou K.-L.,University of Hong Kong | Liang K.,University of Hong Kong | Sareen J.,University of Manitoba
Journal of Clinical Psychiatry | Year: 2011

Objective: The objective of this study is to document the prevalence of social isolation from close friends and religious group members and to test the association of having infrequently contacted close friends and members of religious groups with the current DSM-IV mood, anxiety, and substance use disorders. Method: We conducted a secondary data analysis based on a cross-sectional, population-based study conducted in 2004-2005 that consists of a nationally representative sample of 34,653 American community-dwelling adults aged 18 years and older. Mood, anxiety, and substance use disorders were assessed using the Alcohol Use Disorder and Associated Disabilities Interview Schedule-DSM-IV version. Due to missing values for social network characteristics, we focused on 33,368 subjects in this study. Results: We found that many Americans lacked frequently contacted close friends (10.1%; 95% CI, 9.6%-10.6%) or religious group members (58.7%; 95% CI, 57.5%-59.9%) in their social network. After adjusting for sociodemographic variables, lifetime diagnosis of the disorder in question, and social isolation in terms of 10 other social ties, we found that the absence of close friends was associated (P < .01) with an increased risk of major depressive disorder, dysthymic disorder, social phobia, and generalized anxiety disorder; the absence of frequently contacted religious group members in a network was positively related (P < .01) to alcohol abuse and dependence, drug abuse, and nicotine dependence. Conclusions: These results suggest that social isolation is common in the United States and is associated with a higher risk of mental health problems. Results provide valuable information for prevention and treatment. © Copyright 2011 Physicians Postgraduate Press, Inc.


Sharanowski B.J.,University of Manitoba | Dowling A.P.G.,University of Arkansas | Sharkey M.J.,University of Kentucky
Systematic Entomology | Year: 2011

This study examined subfamilial relationships within Braconidae, using 4 kb of sequence data for 139 taxa. Genetic sampling included previously used markers for phylogenetic studies of Braconidae (28S and 18S rDNA) as well as new nuclear protein-coding genes (CAD and ACC). Maximum likelihood and Bayesian inference of the concatenated dataset recovered a robust phylogeny, particularly for early divergences within the family. This study focused primarily on non-cyclostome subfamilies, but the monophyly of the cyclostome complex was strongly supported. There was evidence supporting an independent clade, termed the aphidioid complex, as sister to the cyclostome complex of subfamilies. Maxfischeria was removed from Helconinae and placed within its own subfamily within the aphidioid complex. Most relationships within the cyclostome complex were poorly supported, probably because of lower taxonomic sampling within this group. Similar to other studies, there was strong support for the alysioid subcomplex containing Gnamptodontinae, Alysiinae, Opiinae and Exothecinae. Cenocoeliinae was recovered as sister to all other subfamilies within the euphoroid complex. Planitorus and Mannokeraia, previously placed in Betylobraconinae and Masoninae, respectively, were moved to the Euphorinae, and may share a close affiliation with Neoneurinae. Neoneurinae and Ecnomiinae were placed as tribes within Euphorinae. A sister relationship between the microgastroid and sigalphoid complexes was also recovered. The helconoid complex included a well-supported lineage that is parasitic on lepidopteran larvae (macrocentroid subcomplex). Helconini was raised to subfamily status, and was recovered as sister to the macrocentroid subcomplex. Blacinae was demoted to tribal status and placed within the newly circumscribed subfamily Brachistinae, which also contains the tribes Diospilini, Brulleiini and Brachistini, all formerly in Helconinae. © 2011 The Authors. Systematic Entomology © 2011 The Royal Entomological Society.


Hajmohammad S.,University of Manitoba | Vachon S.,University of Western Ontario
Journal of Supply Chain Management | Year: 2016

This study takes a conceptual theory building approach to develop a framework for managing supplier sustainability risk - the adverse impact on a buying organization from a supplier's social or environmental misconduct. Using anecdotal evidence and the literature, we present four distinct risk management strategies that supply managers adopt: risk avoidance, monitoring-based risk mitigation, collaboration-based risk mitigation, and risk acceptance. Drawing on agency and resource dependence theories, we study how the interactions of two key risk management predictors - that is, the supply managers' perceived risk and the buyer-supplier dependence structure - affect supply managers' strategy choice. Specifically, we propose that a collaboration-based mitigation strategy, involving direct interaction and solution development with the suppliers, is selected by supply managers in a high perceived risk-buyer dominant context. In a low perceived risk-buyer dominant context, however, a monitoring-based mitigation strategy is preferred. When the buyer and the supplier are not dependent on each other and there is low perceived risk, the supply managers accept the risk by taking no action, whereas in a high perceived risk-independent context the supply managers avoid the risk by terminating the relationship with the supplier. We conclude the study by describing the theoretical contributions and managerial implications of the study as well as avenues for future research. © 2016 Institute for Supply Management, Inc.
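
Because the framework is a two-by-two mapping from context to strategy, it can be restated compactly. The sketch below simply encodes the four pairings named in the abstract; the function and label names are ours.

def risk_strategy(perceived_risk: str, dependence: str) -> str:
    """perceived_risk: 'high' or 'low'; dependence: 'buyer-dominant' or 'independent'."""
    table = {
        ("high", "buyer-dominant"): "collaboration-based risk mitigation",
        ("low",  "buyer-dominant"): "monitoring-based risk mitigation",
        ("high", "independent"):    "risk avoidance (terminate the relationship)",
        ("low",  "independent"):    "risk acceptance (no action)",
    }
    return table[(perceived_risk, dependence)]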


Dufault B.,University of Manitoba | Klar N.,University of Western Ontario
American Journal of Epidemiology | Year: 2011

The ecologic study design is routinely used by epidemiologists in spite of its limitations. It is presently unknown how well the challenges of the design are dealt with in epidemiologic research. The purpose of this bibliometric review was to critically evaluate the characteristics, statistical methods, and reporting of results of modern cross-sectional ecologic papers. A search through 6 major epidemiology journals identified all cross-sectional ecologic studies published since January 1, 2000. A total of 125 articles met the inclusion requirements and were assessed via common evaluative criteria. It was found that a considerable number of cross-sectional ecologic studies use unreliable methods or contain statistical oversights; most investigators who adjusted their outcomes for age or sex did so improperly (64%), statistical validity was a potential issue for 20% of regression models, and simple linear regression was the most common analytic approach (31%). Many authors omitted important information when discussing the ecologic nature of their study (31%), the choice of study design (58%), and the susceptibility of their research to the ecological fallacy (49%). These results suggest that there is a need for an international set of guidelines that standardizes reporting on ecologic studies. Additionally, greater attention should be given to the relevant biostatistical literature. © 2011 The Author.
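
One safeguard relevant to the regression problems counted above is to weight group-level regressions by unit population size rather than fit an unweighted simple linear regression. The Python sketch below illustrates the contrast with hypothetical region-level data; this particular remedy is our illustration, not a prescription drawn from the review.

import numpy as np
import statsmodels.api as sm

rate     = np.array([12.1, 9.4, 15.2, 7.8, 11.0])    # outcome rate per region
exposure = np.array([0.31, 0.22, 0.45, 0.18, 0.27])  # mean exposure per region
pop      = np.array([52000, 8000, 110000, 3000, 27000])

X = sm.add_constant(exposure)
unweighted = sm.OLS(rate, X).fit()               # the common default
weighted   = sm.WLS(rate, X, weights=pop).fit()  # larger units count more
print(unweighted.params, weighted.params)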


Murrell S.,University of Manitoba | Wu S.-C.,National Tsing Hua University | Wu S.-C.,National Health Research Institute | Butler M.,University of Manitoba
Biotechnology Advances | Year: 2011

Dengue viral infection has become an increasing global health concern, with over two-fifths of the world's population at risk of infection. It is the most rapidly spreading vector-borne disease, attributed to changing demographics, urbanization, environment, and global travel. It continues to be a threat in over 100 tropical and sub-tropical countries, affecting predominantly children. Dengue also carries a hefty financial burden on the health care systems in affected areas, as those infected seek care for their symptoms. The search for a suitable vaccine for dengue has been ongoing for the last sixty years, yet an effective treatment or vaccine remains elusive. A vaccine must be protective against all four serotypes of dengue and be cost-effective. Many approaches to developing candidate vaccines have been employed. The candidates include live attenuated tetravalent vaccines, chimeric tetravalent vaccines based on attenuated dengue virus or Yellow Fever 17D, and recombinant DNA vaccines based on flavivirus and non-flavivirus vectors. This review outlines the challenges involved in dengue vaccine development and presents the current stages of proposed vaccine candidate development. © 2010 Elsevier Inc.


Fernyhough P.,St Boniface Hospital Research Center | Fernyhough P.,University of Manitoba
Current Diabetes Reports | Year: 2015

Diabetic neuropathy is a dying-back neurodegenerative disease of the peripheral nervous system where mitochondrial dysfunction has been implicated as an etiological factor. Diabetes (type 1 or type 2) invokes an elevation of intracellular glucose concentration simultaneously with impaired growth factor support by insulin, and this dual alteration triggers a maladaptation in the metabolism of adult sensory neurons. The energy-sensing pathway comprising the AMP-activated protein kinase (AMPK)/sirtuin (SIRT)/peroxisome proliferator-activated receptor-γ coactivator-1α (PGC-1α) signaling axis is the target of these damaging changes in nutrient levels, e.g., induction of nutrient stress, and loss of insulin-dependent growth factor support, and instigates an aberrant metabolic phenotype characterized by a suppression of mitochondrial oxidative phosphorylation and a shift to anaerobic glycolysis. There is discussion of how this loss of mitochondrial function and transition to overreliance on glycolysis contributes to the diminishment of collateral sprouting and axon regeneration in diabetic neuropathy in the context of the highly energy-consuming nerve growth cone. © 2015, Springer Science+Business Media New York.


Zhang Y.,University of Manitoba | Rempel C.,Canola Council of Canada | Liu Q.,Agriculture and Agri Food Canada
Critical Reviews in Food Science and Nutrition | Year: 2014

The rising costs of nonrenewable feedstocks and environmental concerns over their industrial usage have encouraged the study and development of renewable products, including thermoplastic starch (TPS). Starch is an abundant, plant-based biodegradable material with interesting physicochemical characteristics that can be exploited, and it has received attention for the development of TPS products. Starch exhibits usable thermoplastic properties when plasticizers, elevated temperatures, and shear are present. The choice of plasticizer has an effect on TPS, even among plasticizers with similar plasticization principles. Most TPS have a glass transition temperature, Tg, in the range of approximately -75 to 10°C. The glass transition of TPS is detected by differential scanning calorimetry (DSC) and dynamic mechanical analysis (DMA), although DMA has been found to be more sensitive and effective. TPS has low tensile properties, typically below 6 MPa in tensile strength (TS). The addition of synthetic polymers, nanoclay, and fiber can improve TS and water resistance. The moisture sorption behavior of TPS is described by the GAB and BET models, from which the monolayer moisture content and specific area are derived. Current studies on surface tension, gas permeability, crystallinity, and so on of TPS are also reviewed. © 2014 Copyright Taylor and Francis Group, LLC.
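
For reference, the GAB model mentioned above is commonly written as follows (a standard parameterization; the symbols are the conventional ones, not taken from this review):

\[ M = \frac{M_0\, C\, K\, a_w}{(1 - K a_w)\,(1 - K a_w + C K a_w)}, \]

where M is the equilibrium moisture content at water activity a_w, M_0 is the monolayer moisture content, and C and K are fitted constants; the BET model corresponds to the special case K = 1, usually applied only at low a_w.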


Chou K.-L.,University of Hong Kong | Afifi T.O.,University of Manitoba
American Journal of Epidemiology | Year: 2011

The authors' objective in this study was to examine the role of disordered gambling as a risk factor for the subsequent occurrence of specific Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, Axis I psychiatric disorders after adjusting for medical conditions, health-related quality of life, and stressful life events. Community-dwelling respondents from nationally representative US samples (n = 33,231) were interviewed in 2000-2001 and 2004-2005. Past-year disordered gambling at baseline was associated with the subsequent occurrence of any Axis I psychiatric disorder, any mood disorder, bipolar disorder, generalized anxiety disorder, posttraumatic stress disorder, any substance use disorder, alcohol use disorders, and alcohol dependence disorder after adjustment for sociodemographic variables. After simultaneous adjustment for medical conditions, health-related quality of life, and recent stressful life events, disordered gambling remained significantly related to any mood disorder, generalized anxiety disorder, posttraumatic stress disorder, alcohol use disorders, and alcohol dependence. The clinical implications of these findings are that treatment providers need to screen gambling patients for mood, anxiety, and substance use problems and monitor the possible development of later comorbid conditions. © 2011 The Author.


Linnen R.L.,University of Western Ontario | Van Lichtervelde M.,IDR | Cerny P.,University of Manitoba
Elements | Year: 2012

Rare-element granitic pegmatites are well recognized for the diversity and concentrations of metal ores that they host. The supply of some of these elements is of concern, and the European Commission recently designated metals such as tantalum and niobium as "critical materials" or "strategic resources." Field relationships, mineral chemistry, and experimental constraints indicate that these elements are concentrated dominantly by magmatic processes. The granitic melts involved in these processes are very unusual because they contain high concentrations of fluxing compounds, which play a key role at both the primary magmatic and metasomatic stages. In particular, the latter may involve highly fluxed melts rather than aqueous fluids.


Gough S.C.L.,Oxford Center for Diabetes | Harris S.,University of Western Ontario | Woo V.,University of Manitoba | Davies M.,University of Leicester
Diabetes, Obesity and Metabolism | Year: 2013

All the basal insulin products currently available have suboptimal pharmacokinetic (PK) properties, with none reliably providing a reproducible and peakless pharmacodynamic (PD) effect that endures over 24 h from once-daily dosing. Insulin degludec is a novel acylated basal insulin with a unique mechanism of protracted absorption involving the formation of a depot of soluble multihexamer chains after subcutaneous injection. PK/PD studies show that insulin degludec has a very long duration of action, with a half-life exceeding 25 h. Once-daily dosing produces a steady-state profile characterized by a near-constant effect, which varies little from injection to injection in a given patient. Clinically, insulin degludec has been shown consistently to carry a lower risk of nocturnal hypoglycaemia than once-daily insulin glargine, in both basal+bolus and basal-only insulin regimens. The constancy of the steady-state profile of insulin degludec also means that day-to-day irregularities at the time of injection have relatively little PD influence, thereby offering the possibility of greater treatment flexibility for patients. © 2012 Blackwell Publishing Ltd.
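
As background for the near-constant steady-state profile described above, the standard accumulation relation for repeated dosing (general pharmacokinetics, not specific to this paper) reads, for dosing interval τ and half-life t_{1/2}:

\[ R_{\mathrm{ac}} = \frac{1}{1 - 2^{-\tau/t_{1/2}}}, \]

so with τ = 24 h and t_{1/2} ≈ 25 h, R_ac ≈ 2: steady-state exposure is roughly double that after a single dose, with correspondingly flattened peaks and troughs.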


Shalchi A.,University of Manitoba | Busching I.,Ruhr University Bochum
Astrophysical Journal | Year: 2010

The propagation of cosmic rays in the Galaxy is controlled by spatial diffusion coefficients. Since parallel scattering is the strongest scattering effect, it is believed that the diffusion coefficient along the mean magnetic field controls the confinement of charged cosmic particles in the Galaxy. In this paper, we combine a turbulence spectrum with dissipation range with a nonlinear particle diffusion theory to compute transport parameters. We find a decreasing parallel diffusion coefficient for increasing particle energy for particles having rigidities lower than 3 GeV. Our approach provides an explanation of the observed boron-to-carbon ratio at these particle energies without the need to introduce stochastic acceleration. © 2010 The American Astronomical Society. All rights reserved.


Biala A.K.,St Boniface Hospital Research Center | Dhingra R.,University of Manitoba | Kirshenbaum L.A.,University of Manitoba
Journal of Molecular and Cellular Cardiology | Year: 2015

Aging is a degenerative process that unfortunately is an inevitable part of life and a risk factor for cardiovascular disease, including heart failure. Among the several theories purported to explain the effects of age on cardiac dysfunction, the mitochondrion has emerged as a central regulator of this process. Hence, it is not surprising that abnormalities in mitochondrial quality control, including biogenesis and turnover, have such detrimental effects on cardiac function. In fact, mitochondria serve as a conduit for biological signals for apoptosis, necrosis and autophagy, respectively. The removal of damaged mitochondria by autophagy/mitophagy is essential for mitochondrial quality control and cardiac homeostasis. Defects in mitochondrial dynamism (fission/fusion events) have been linked to cardiac senescence and heart failure. In this review we discuss the impact of aging on mitochondrial dynamics and of senescence on cardiovascular health. This article is part of a Special Issue entitled: CV Aging. © 2015 Elsevier Ltd.


Sonebi M.,Queen's University of Belfast | Bassuoni M.T.,University of Manitoba
Construction and Building Materials | Year: 2013

In this study, the effects of water-to-cement ratio (W/C), cement content and coarse aggregate content on the density, void ratio, infiltration rate, and compressive strength of portland cement pervious concrete (PCPC) were investigated by statistical modelling. Two-level factorial design and response surface methodology (RSM) were used. The PCPC mixtures were made with W/C in the range of 0.28-0.40, cement content in the range of 350-415 kg/m3, and coarse aggregate content in the range of 1200-1400 kg/m3. In addition, examples were given of using multi-parametric optimization to produce isoresponses of a desirability function for PCPC satisfying specified criteria, including cost. The results show that W/C, cement content, coarse aggregate content and their interactions are key parameters which significantly affect the characteristic performance of PCPC. The statistical models developed in this study can facilitate optimizing the mixture proportions of PCPC for target performance by reducing the number of trial batches needed. © 2012 Elsevier Ltd. All rights reserved.
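
The screening step described above can be made concrete with a short sketch. The Python fragment below fits a two-level factorial model with all interactions to hypothetical trial-batch data; a full RSM analysis would add centre points and quadratic terms, and the numbers here are invented.

import pandas as pd
import statsmodels.formula.api as smf

batches = pd.DataFrame({
    "wc":     [0.28, 0.40, 0.28, 0.40, 0.28, 0.40, 0.28, 0.40],  # W/C ratio
    "cement": [350, 350, 415, 415, 350, 350, 415, 415],           # kg/m3
    "agg":    [1200, 1200, 1200, 1200, 1400, 1400, 1400, 1400],   # kg/m3
    "fc":     [11.2, 8.9, 14.6, 12.1, 9.8, 7.5, 12.9, 10.4],      # MPa, invented
})

# 'wc * cement * agg' expands to all main effects and interactions,
# mirroring the key-parameter screening of a 2^3 factorial design.
model = smf.ols("fc ~ wc * cement * agg", data=batches).fit()
print(model.params)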


Kiani K.,Islamic Azad University | Wang Q.,University of Manitoba
European Journal of Mechanics, A/Solids | Year: 2012

Interaction of a moving nanoparticle with a single-walled carbon nanotube (SWCNT) is of concern. The SWCNT is simulated by an equivalent continuum structure (ECS) under simply supported boundary conditions. The moving nanoparticle is modeled by a moving point load by considering its full inertial effects and Coulomb friction with the inner surface of the ECS. The ECS under the moving nanoparticle is modeled based on the Rayleigh, Timoshenko, and higher-order beam theories in the context of the nonlocal continuum theory of Eringen. The dimensionless discrete equations of motion associated with the nonlocal beam models are then obtained by using Galerkin method. The effects of slenderness ratio of the ECS, ratio of mean radius to thickness of the ECS, mass weight and velocity of the moving nanoparticle, and small scale parameter on the dynamic response of the SWCNT are explored. The capabilities of various nonlocal beam theories in capturing the longitudinal and transverse displacements as well as the nonlocal axial force and bending moment are also scrutinized in some detail. The possibility of moving nanoparticle separation from the inner surface of the SWCNT is examined by monitoring the sign of the contact force. Subsequently, the role of important parameters on the possibility of this phenomenon is explored using various nonlocal beam theories. © 2011 Elsevier Masson SAS. All rights reserved.
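
For readers unfamiliar with the Eringen model the abstract invokes, the differential form commonly used in nonlocal beam studies relates stress to strain through a small-scale parameter e_0 a (this is the standard textbook form, not an equation quoted from the paper):

\[ \sigma_{xx} - (e_0 a)^2\, \frac{\partial^2 \sigma_{xx}}{\partial x^2} = E\, \varepsilon_{xx}, \]

and setting e_0 a = 0 recovers the classical (local) constitutive law of the corresponding beam theory.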


Bonaz B.L.,Grenoble University Hospital Center | Bernstein C.N.,University of Manitoba
Gastroenterology | Year: 2013

Psycho-neuro-endocrine-immune modulation through the brain-gut axis likely has a key role in the pathogenesis of inflammatory bowel disease (IBD). The brain-gut axis involves interactions among the neural components, including (1) the autonomic nervous system, (2) the central nervous system, (3) the stress system (hypothalamic-pituitary-adrenal axis), (4) the (gastrointestinal) corticotropin-releasing factor system, and (5) the intestinal response (including the intestinal barrier, the luminal microbiota, and the intestinal immune response). Animal models suggest that the cholinergic anti-inflammatory pathway through an anti-tumor necrosis factor effect of the efferent vagus nerve could be a therapeutic target in IBD through a pharmacologic, nutritional, or neurostimulation approach. In addition, the psychophysiological vulnerability of patients with IBD, secondary to the potential presence of any mood disorders, distress, increased perceived stress, or maladaptive coping strategies, underscores the psychological needs of patients with IBD. Clinicians need to address these issues with patients because there is emerging evidence that stress or other negative psychological attributes may have an effect on the disease course. Future research may include exploration of markers of brain-gut interactions, including serum/salivary cortisol (as a marker of the hypothalamic-pituitary-adrenal axis), heart rate variability (as a marker of the sympathovagal balance), or brain imaging studies. The widespread use and potential impact of complementary and alternative medicine and the positive response to placebo (in clinical trials) is further evidence that exploring other psycho-interventions may be important therapeutic adjuncts to the conventional therapeutic approach in IBD. © 2013 AGA Institute.


GREENWOOD VILLAGE, Colo.--(BUSINESS WIRE)--Red Robin Gourmet Burgers, Inc., (NASDAQ:RRGB), today announced the appointment of Guy J. Constant as executive vice president and chief financial officer, effective December 14, 2016. Mr. Constant will be responsible for leading financial disciplines at the Company including accounting and control, financial planning and analysis, operations finance, credit and external reporting. Mr. Constant brings to Red Robin more than 20 years of leadership in corporate finance, including more than a decade in the restaurant industry. He has extensive experience in operations and financial management for public companies, including strategy development and execution, treasury, financial planning and analysis, financial reporting, board management and investor relations. "Guy’s finance experience, strategic mindset and results orientation completes our standout Red Robin leadership team as we continue to improve our performance and set Red Robin up to serve generations of guests to come,” said Denny Marie Post, Red Robin Gourmet Burgers, Inc.’s chief executive officer. “Throughout his career, Guy has developed and driven high performance teams. We’re pumped to have him join our organization.” Prior to joining Red Robin, Mr. Constant served as chief financial officer, executive vice president of Finance and treasurer for Rent-A-Center, Inc. He previously served in various executive roles at Brinker International, Inc. including executive vice president and chief financial officer, president of the Chili’s Global Restaurant Division, senior vice president and vice president of Finance and senior director of Executive Compensation. Before his executive tenure at Brinker, he served in various marketing, finance and human resources roles of increasing scope and responsibility at AMR Corporation, the parent company of American Airlines. Mr. Constant earned his Bachelor of Arts in economics and political science from the University of Manitoba and a Master of Business Administration from the University of Western Ontario. "I am very excited to join the Red Robin team,” said Guy Constant. “During my many years in the industry, I have admired the Red Robin brand and business, which has been built upon a strong culture and talented leadership within both Operations and at the Home Office. I look forward to helping Denny and her team continue to deliver on the promises we have made to our team members, guests and shareholders through Everyday Value and improved service, built upon the foundation of a returns-focused, disciplined approach to capital allocation." Red Robin Gourmet Burgers, Inc. (www.redrobin.com), a casual dining restaurant chain founded in 1969 that operates through its wholly-owned subsidiary, Red Robin International, Inc., and under the trade name, Red Robin Gourmet Burgers and Brews, is the Gourmet Burger Authority™, famous for serving more than two dozen craveable, high-quality burgers with Bottomless Steak Fries® in a fun environment welcoming to guests of all ages. At Red Robin, burgers are more than just something guests eat; they're a bonding experience that brings together friends and families, kids and adults. In addition to its many burger offerings, Red Robin serves a wide variety of salads, soups, appetizers, entrees, desserts and signature beverages. 
Red Robin offers a variety of options behind the bar, including its extensive selection of local and regional beers, and innovative adult beer shakes and cocktails, earning the restaurant a VIBE Vista Award for Best Beer Program in a Multi-Unit Chain Restaurant. There are more than 540 Red Robin restaurants across the United States and Canada, including Red Robin Express® locations and those operating under franchise agreements. Red Robin… YUMMM®! Connect with Red Robin on Facebook, Instagram and Twitter.


Araji M.T.,University of Manitoba | Shakour S.A.,Abu Dhabi University
Materials and Design | Year: 2013

Sustainable soft materials in design applications should aim to deplete fewer resources, generate less pollution, and carry less toxicity for the entire ecosystem. The outcome would be environmental benefits, particularly in the production, specification, and use of appropriate materials. For this purpose, the paper conducted a survey among manufacturers, designers, and end-users to explore concerns related to the inadequate consideration of environmental factors in the extraction, processing, fabrication, and selection of soft materials. Four criteria, namely aesthetic, functional, economic, and environmental, were examined on the basis of a comprehensive set of 33 governing factors. The analysis reports criterion response rates that capture the intensity of respondents' experience on a three-point Likert scale. Analysis of Variance (ANOVA) was then used to determine differences and interactions between the independent and dependent variables. Results show that the main effect for criteria is not significant, but there are mean differences in the consideration of criteria when respondents evaluate factors on the rating scale. Overall, the interaction variation and plots highlight the statistically significant differences between criteria. The environmental criterion is of marginal importance to all populations. © 2012 Elsevier Ltd.
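As a concrete illustration of the analysis described, here is a minimal sketch of a two-way ANOVA testing main effects of respondent group and criterion plus their interaction; the group labels, criteria, and ratings below are invented placeholders, not the survey data.

```python
# Hypothetical two-way ANOVA sketch: 3-point Likert ratings by respondent
# group and materials-selection criterion. All values are invented.
import itertools
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
groups = ["manufacturer", "designer", "end_user"]
criteria = ["aesthetic", "functional", "economic", "environmental"]
rows = [
    {"group": g, "criterion": c, "rating": int(rng.integers(1, 4))}
    for g, c in itertools.product(groups, criteria)
    for _ in range(5)  # five hypothetical respondents per cell
]
data = pd.DataFrame(rows)

# Main effects for group and criterion, plus their interaction.
model = ols("rating ~ C(group) * C(criterion)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))
```

ANOVA on ordinal Likert data is a common but debatable design choice; nonparametric alternatives exist.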


Sanchez G.V.,George Washington University | Master R.N.,Quest Diagnostics Nichols Institute | Karlowsky J.A.,University of Manitoba | Bordon J.M.,Providence Hospital
Antimicrobial Agents and Chemotherapy | Year: 2012

This study examines in vitro antimicrobial resistance data from Escherichia coli isolates obtained from urine samples of U.S. outpatients between 2000 and 2010 using The Surveillance Network (TSN). Antimicrobial susceptibility results (n = 12,253,679) showed the greatest increases in E. coli resistance from 2000 to 2010 for ciprofloxacin (3% to 17.1%) and trimethoprim-sulfamethoxazole (TMP-SMX) (17.9% to 24.2%), whereas nitrofurantoin (0.8% to 1.6%) and ceftriaxone (0.2% to 2.3%) showed minimal change. From 2000 to 2010, the antimicrobial resistance of urinary E. coli isolates to ciprofloxacin and TMP-SMX among outpatients increased substantially. Copyright © 2012, American Society for Microbiology. All Rights Reserved.
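The underlying trend computation is simply the share of nonsusceptible isolates per drug per year; a minimal sketch follows, with invented records rather than TSN data.

```python
# Percentage of resistant isolates per (year, drug); records are invented.
from collections import defaultdict

records = [  # (year, drug, resistant)
    (2000, "ciprofloxacin", False), (2000, "ciprofloxacin", True),
    (2010, "ciprofloxacin", True), (2010, "ciprofloxacin", False),
    (2010, "nitrofurantoin", False), (2010, "nitrofurantoin", False),
]

counts = defaultdict(lambda: [0, 0])  # (year, drug) -> [resistant, total]
for year, drug, resistant in records:
    counts[(year, drug)][0] += int(resistant)
    counts[(year, drug)][1] += 1

for (year, drug), (r, n) in sorted(counts.items()):
    print(f"{year} {drug}: {100 * r / n:.1f}% resistant (n={n})")
```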


Deng C.,University of Manitoba | Deng C.,Massachusetts Institute of Technology | Schuh C.A.,Massachusetts Institute of Technology
Applied Physics Letters | Year: 2012

Molecular dynamics with an embedded-atom method potential is used to simulate the nanoindentation of Cu63.5Zr36.5 metallic glasses. In particular, the effects of cyclic loading within the nominal elastic range on the overall strength and plasticity of metallic glass are studied. The simulated results are in line with the characteristics of experimentally observed hardening effects. In addition, analysis based on local von Mises strain suggests that the hardening is induced by confined microplasticity and stiffening in regions of the originally preferred yielding path, requiring a higher applied load to trigger a secondary one. © 2012 American Institute of Physics.
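For readers unfamiliar with the local von Mises strain measure mentioned, here is a minimal numpy sketch: from a local deformation gradient, form the Lagrangian strain and take the von Mises invariant of its deviatoric part. The deformation gradient below is an arbitrary example, and note that prefactor conventions vary (2/3 gives the equivalent tensile strain; the 1/2 shear-strain convention is common in the metallic-glass literature).

```python
# Sketch of a per-atom von Mises strain calculation of the kind used in
# such analyses. F is an arbitrary example, not simulation output.
import numpy as np

def von_mises_shear_strain(F):
    """Von Mises shear strain (1/2-prefactor convention) for deformation gradient F."""
    eta = 0.5 * (F.T @ F - np.eye(3))              # Lagrangian (Green) strain
    dev = eta - (np.trace(eta) / 3.0) * np.eye(3)  # deviatoric part
    return np.sqrt(0.5 * np.sum(dev * dev))

F = np.array([[1.02, 0.03, 0.00],
              [0.00, 0.99, 0.00],
              [0.00, 0.00, 1.00]])
print(von_mises_shear_strain(F))
```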


Ono S.,Massachusetts Institute of Technology | Fayek M.,University of Manitoba
Chemical Geology | Year: 2011

Both isotopic (Pb and O) and chemical compositions were measured by two in-situ techniques, SIMS and EPMA, on ~20 μm-diameter areas of over forty-eight individual grains of uraninite in the early Proterozoic quartz pebble conglomerate uranium deposits of the Elliot Lake district, in order to constrain the origin of uraninite and its post-mineralization history. Together with textural observation by SEM and Pb isotope analyses and chemical compositions of brannerite and uranothorite, our data reveal a protracted uranium remobilization history for the uranium deposits in the Elliot Lake district. All grains of uraninite examined show consistently high Th contents, supporting a detrital origin of uraninite derived from pegmatitic rocks. The unimodal distribution of Th concentration in uraninite excludes the possibility that uraninite formed by multiple pathways, with some grains detrital and others later diagenetic/hydrothermal in origin. The measured uraninite grains yield a U-Pb discordia age of 1.8 Ga, which is much younger than the depositional age of the host Huronian Basin (2.45 to 2.2 Ga). This age closely corresponds to the age of peak metamorphism in the Huronian Basin, suggesting that all uraninite grains completely lost their Pb during this time. The least texturally and chemically altered grains of uraninite yield δ18O-SMOW values of -10 to -22 ‰. This range of δ18O values is lower than that expected for uraninite from granitic/pegmatitic rocks by 10 ‰. It is concluded that the oxygen isotope ratios of uraninite were completely reset during the peak metamorphism and/or by interaction with recent meteoric water. The δ18O of uraninite, however, does not vary systematically with Th/U, Pb/U, or 207Pb/206Pb ratios, indicating that uraninite exchanged its O isotopes without much disturbance of its chemical composition. Among the various geochemical signatures in uraninite, high Th content is the only original signature preserved since detrital deposition. Both the Pb and O isotope systems have been disturbed to various degrees by later events. The coupling/decoupling of the composition and the Pb and O isotope systematics of uraninite reflect the protracted mineralization/remobilization history of the oldest uraninite. © 2010 Elsevier B.V.
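For reference, the quoted δ18O values follow standard isotope-geochemistry notation (not specific to this paper): the per mil (‰) deviation of the sample's 18O/16O ratio from the SMOW standard,

```latex
\delta^{18}\mathrm{O} = \left(
  \frac{(^{18}\mathrm{O}/^{16}\mathrm{O})_{\mathrm{sample}}}
       {(^{18}\mathrm{O}/^{16}\mathrm{O})_{\mathrm{SMOW}}} - 1
\right) \times 1000
```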


Bellan S.E.,University of California at Berkeley | Fiorella K.J.,University of California at Berkeley | Melesse D.Y.,University of Manitoba | Getz W.M.,University of California at Berkeley | And 3 more authors.
The Lancet | Year: 2013

Background The proportion of heterosexual HIV transmission in sub-Saharan Africa that occurs within cohabiting partnerships, compared with that in single people or extra-couple relationships, is widely debated. We estimated the proportional contribution of different routes of transmission to new HIV infections. As plans to use antiretroviral drugs as a strategy for population-level prevention progress, understanding the importance of different transmission routes is crucial to target intervention efforts. Methods We built a mechanistic model of HIV transmission with data from Demographic and Health Surveys (DHS) for 2003-2011, of 27 201 cohabiting couples (men aged 15-59 years and women aged 15-49 years) from 18 sub-Saharan African countries with information about relationship duration, age at sexual debut, and HIV serostatus. We combined this model with estimates of HIV survival times and country-specific estimates of HIV prevalence and coverage of antiretroviral therapy (ART). We then estimated the proportion of recorded infections in surveyed cohabiting couples that occurred before couple formation, between couple members, and because of extra-couple intercourse. Findings In surveyed couples, we estimated that extra-couple transmission accounted for 27-61% of all HIV infections in men and 21-51% of all those in women, with ranges showing intercountry variation. We estimated that in 2011, extra-couple transmission accounted for 32-65% of new incident HIV infections in men in cohabiting couples, and 10-47% of new infections in women in such couples. Our findings suggest that transmission within couples occurs largely from men to women; however, the latter sex have a very high-risk period before couple formation. Interpretation Because of the large contribution of extra-couple transmission to new HIV infections, interventions for HIV prevention should target the general sexually active population and not only serodiscordant couples. Funding US National Institutes of Health, US National Science Foundation, and J S McDonnell Foundation. © 2013 Elsevier Ltd.
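The route attribution in such models can be illustrated with toy logic; this is a simplification for intuition only, not the authors' mechanistic model, which works with survival times, seroprevalence, and ART coverage.

```python
# Toy route attribution: compare an estimated infection time with the
# couple-formation time, then use partner infection order. All inputs
# are invented.
def classify_route(infection_year, couple_formed_year, partner_infected_earlier):
    """Attribute one infection to pre-couple, within-couple, or extra-couple."""
    if infection_year < couple_formed_year:
        return "pre-couple"
    return "within-couple" if partner_infected_earlier else "extra-couple"

print(classify_route(2004, 2006, False))  # pre-couple
print(classify_route(2008, 2006, True))   # within-couple
print(classify_route(2008, 2006, False))  # extra-couple
```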


Shamov G.A.,University of Manitoba | Shamov G.A.,Kazan State Technological University
Inorganic Chemistry | Year: 2012

Free and ligated oxide clusters of thorium(IV) and uranium(IV) were studied with density functional theory using an all-electron scalar relativistic method as well as energy-consistent relativistic f-in-core pseudopotentials. The main driving force for cluster formation is the sintering of the dioxoactinide moieties, which is more favorable for thorium(IV) than for uranium(IV) because, for the latter, a penalty must be paid for bending the uranyl(IV) unit. We assumed that the rhombic structural motif already present in the (AnO2)2 dimer could guide an explanation of the preference for the existing An6O8-type clusters. On this basis, we theoretically explored the possibility of similar (zonohedral) polyhedral actinide oxide clusters and found that the next possible cluster would have An12O20 stoichiometry. Our DFT computations predict that the corresponding zonohedral clusters are minima on the potential energy surface. The alternating An-O rhombic structural motif also offers a possible explanation for the existence and stoichiometry of the only nonfluorite cluster known thus far, An12O20, which is nonzonohedral and nonconvex but still a rhombic polyhedron. Our relativistic all-electron DFT computations on both free cationic and ligated clusters predict that preparation of the larger clusters is not thermodynamically forbidden. We also found that for uranium(IV), the oxide dimer and hexamer clusters are antiferromagnetic, broken-spin singlets in their ground state, whereas ligated [U6O8] clusters prefer an all high-spin electronic configuration. © 2012 American Chemical Society.


Hammond A.W.,University of Manitoba | Crist B.D.,University of Missouri
Injury | Year: 2013

Introduction: Diabetics, smokers, patients with open fractures, and drug addicts have been shown to be at increased risk of wound complications with traditional calcaneus fixation. The purpose of this study is to examine whether high-risk patients with intra-articular calcaneus fractures can be managed safely using percutaneous reduction and fixation, by examining a consecutive series of patients treated by the senior author. Methods: The treatment group consisted of the senior author's first 17 percutaneously treated calcaneus fractures in high-risk patients. Risk factors included open fracture, smoking, diabetes, and cocaine, alcohol, and solvent abuse. Reduction techniques included temporary external fixation, inflatable bone tamps, and arthroscopically assisted reduction manoeuvres. Fixation was accomplished with cannulated 4.5 mm screws. Patients were followed up for a minimum of 3 months to look for wound complications and subsidence. Results: Surgery was performed within 15 days of injury (average 6.7 days). Risk factors included: open fracture, 1; smoking, 16; diabetes, 2; and substance abuse, 9. Sanders' classification comprised six type 2, nine type 3, and two type 4 fractures. Böhler's angle increased from an average of -1.5° (range -37° to +30°) to 25.8° (range 7-36°). There were no wound issues or infections with the calcaneal fixation. Reduction was deemed excellent or good in 14 cases, fair in 2, and poor in 1. Loss of Böhler's angle of >4° occurred in four cases; in three of these, the patients were non-compliant with weight bearing. Conclusion: High-risk patients with intra-articular calcaneus fractures that meet the criteria for surgical management can be treated with percutaneous surgical techniques with a low risk of wound complications. © 2012 Elsevier Ltd. All rights reserved.


News Article | October 28, 2016
Site: www.techtimes.com

You might enjoy cranberries, but they won't cure your UTI, new research suggests. Consumption of cranberry products such as cranberry juice, cranberry capsules and the fruit itself has long been believed to be a natural way to avoid urinary tract infections, or UTIs, but a new study appears to debunk this longstanding preconception. Ingestion of cranberry products in juice or capsule form has been vaunted as a legitimate means to combat recurring UTIs since "at least the first half of the last century," according to the study. Before antibiotics were part of the medical vocabulary, acidifying urine was thought to be a proper means to treat UTIs, and because cranberry juice was thought to decrease urine pH, it was widely explored as a UTI treatment. Quinic acid in cranberry juice is metabolized into hippuric acid, a fact that contributed to the initial premise that cranberry juice can treat UTIs. However, subsequent studies determined that the concentration of hippuric acid in the urine is too low to produce an antibacterial effect unless cranberry juice consumption is raised far beyond normal levels, essentially rendering the remedy snake oil. While some studies suggest that cranberries may prevent repeated infections among younger women, researchers have now found that nursing home residents who ingested high-potency cranberry capsules did not have fewer episodes of UTI than those given a placebo. "Although our study was only in nursing home women, many other studies have been done in other populations, which have not shown a benefit," said Dr. Manisha Juthani-Mehta, the lead author of the study, now published in the Journal of the American Medical Association. Juthani-Mehta said that there is no reason for women who enjoy cranberry products to stop the habit. However, spending large sums on cranberry products in the hopes of curing or preventing UTIs is, according to her, not worthwhile, especially for patients who give up a sizable share of their income to buy them. A 30-day supply of cranberry capsules can cost upward of $200. "[C]ranberry products should not be recommended as a medical intervention for the prevention of UTI," wrote Dr. Lindsay E. Nicolle, a UTI expert at the University of Manitoba, in the same journal. So if your clinician happens to be promoting or even encouraging the purchase and consumption of cranberry products as a specific means to treat UTIs, you can go ahead and inform them that this age-old notion has been proven false. "It is time to move on from cranberries," Dr. Nicolle said. © 2017 Tech Times, All rights reserved. Do not reproduce without permission.


Singh S.,Mayo Medical School | Singh P.P.,Mayo Medical School | Murad M.H.,Robert d Patricia rn Center For The Science Of Health Care Delivery | Singh H.,University of Manitoba | Samadder N.J.,University of Utah
The American journal of gastroenterology | Year: 2014

We performed a meta-analysis to estimate the pooled prevalence, risk factors, and outcomes of interval colorectal cancers (CRCs). A systematic literature search through October 2013 identified population-based studies reporting the prevalence of interval CRCs (CRCs diagnosed within 6-36 months of colonoscopy). We estimated the pooled prevalence; patient-, endoscopist-, and tumor-related risk factors; and outcomes of interval CRCs, as compared with detected CRCs (CRCs diagnosed at or within 6 months of colonoscopy). Twelve studies reporting on 7,912 interval CRCs were included. The pooled prevalence of interval CRCs was 3.7% (95% confidence interval (CI)=2.8-4.9%). These cancers were 2.4 times more likely to arise in the proximal colon (6.5%; 95% CI=4.9-8.6%) than in the distal colon (2.9%; 95% CI=2.0-4.2%). Patients with interval CRCs were older (age >65-70 years vs. <65-70 years: odds ratio (OR)=1.15; 95% CI=1.02-1.30), had more comorbidities (high Charlson comorbidity index: OR=2.00; 95% CI=1.77-2.27), and more often had diverticular disease (OR=4.25; 95% CI=2.58-7.00). There was a nonsignificant time trend of declining prevalence of interval CRCs, from 4.8% in the 1990s to 4.2% between 2000 and 2005 and 3.7% beyond 2005. Patients with interval CRCs were less likely to present at an advanced stage (OR=0.79; 95% CI=0.67-0.94), although there was no survival benefit. Considerable heterogeneity was observed in most of the analyses. Based on this meta-analysis, approximately 1 in 27 CRCs is an interval CRC, although confidence in these estimates is low because of the heterogeneity among studies. Interval CRCs are more likely to arise in the proximal colon and are diagnosed in older patients and in patients with comorbidities or diverticular disease.
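A minimal sketch of the pooling step follows, assuming inverse-variance weighting on the logit scale with a DerSimonian-Laird between-study variance; the study counts are invented, and the authors' exact method may differ.

```python
# Random-effects pooling of a prevalence across studies (invented counts).
import numpy as np

events = np.array([120, 80, 45])        # interval CRCs per study (invented)
totals = np.array([3000, 2500, 1200])   # all CRCs per study (invented)

p = events / totals
y = np.log(p / (1 - p))                  # logit-transformed prevalence
v = 1 / events + 1 / (totals - events)   # approximate variance of logit(p)

# DerSimonian-Laird between-study variance tau^2.
w = 1 / v
mu_fe = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - mu_fe) ** 2)
tau2 = max(0.0, (Q - (len(y) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

# Random-effects pooled estimate and 95% CI, back-transformed.
w_re = 1 / (v + tau2)
mu = np.sum(w_re * y) / np.sum(w_re)
se = np.sqrt(1 / np.sum(w_re))
expit = lambda x: 1 / (1 + np.exp(-x))
print(f"pooled prevalence {expit(mu):.2%} "
      f"(95% CI {expit(mu - 1.96 * se):.2%}-{expit(mu + 1.96 * se):.2%})")
```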


Leslie W.D.,University of Manitoba | Leslie W.D.,St Boniface General Hospital | Morin S.N.,McGill University
Current Opinion in Rheumatology | Year: 2014

Purpose of review: To summarize the recently published studies that provide insights into the changing epidemiology of osteoporosis and fractures. Recent findings: The main themes reviewed are fracture outcomes; trends in fractures rates; fracture risk assessment and monitoring; atypical femoral fractures; male osteoporosis; falls and physical activity; and sarcopenia, obesity, and metabolic syndrome. Summary: Osteoporotic fractures were found to have long-term consequences on excess mortality (10 years) and economic costs (5 years). The large burden of nonhip nonvertebral fractures has been underestimated. Divergent (but mostly declining) trends in fracture rates were confirmed in several cohorts from around the world. This has significant implications for healthcare planners and clinicians responsible for the care of individuals with osteoporosis, and also impacts on the calibration of fracture prediction tools. Although fracture prediction tools differ in their complexity, performance characteristics are similar when applied to the general population. Large, high-quality comparative studies with different case mixes are needed. Fracture probability does not appear to be responsive enough to support goal-directed treatment at this time. A consensus on the diagnosis of osteoporosis in men has emerged, based upon the same absolute bone density cutoff for both men and women. Finally, a plethora of new data highlight the importance of falls, physical activity, and body composition as contributors to skeletal health. © 2014 Wolters Kluwer Health | Lippincott Williams & Wilkins.


Buhl M.,University of St. Andrews | Schreckenbach G.,University of Manitoba
Inorganic Chemistry | Year: 2010

A recently proposed pathway for the scrambling of axial (uranyl) and equatorial O atoms in [UO2(OH)4]2- (1) is refined using Car-Parrinello molecular dynamics (CPMD) simulations in an explicit solvent (water) and with model counterions (NH4+). According to constrained CPMD/BLYP simulations and thermodynamic integration, 1 can be deprotonated to [UO3(OH)3]3- with a T-shaped UO3 group (ΔA = 7.1 kcal/mol), which in turn can undergo a solvent-assisted proton transfer via a cis-[UO2(OH)4]2-·OH- complex and a total overall barrier of ΔA‡ = 12.5 kcal/mol. According to computed relative energies of trans- and cis-[UO2(OH)4]2- in the gas phase and in a polarizable continuum, "pure" functionals such as BLYP underestimate this overall barrier somewhat, and estimates of ΔA‡ ≈ 16 and 17 kcal/mol are obtained at the B3LYP and CCSD(T) levels, respectively, in excellent agreement with the experiment. © 2010 American Chemical Society.
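For context, in constrained-MD thermodynamic integration the free-energy difference along a reaction coordinate ξ is obtained by integrating the average force of constraint sampled at fixed values of ξ (metric-tensor corrections omitted here for simplicity):

```latex
\Delta A = A(\xi_{1}) - A(\xi_{0})
         = -\int_{\xi_{0}}^{\xi_{1}} \left\langle f_{\xi} \right\rangle_{\xi}\, \mathrm{d}\xi
```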


Dillon R.L.,University of Manitoba | Muller W.J.,McGill University
Cancer Research | Year: 2010

The phosphatidylinositol 3′ kinase/Akt pathway is frequently dysregulated in cancer, which can have unfavorable consequences in terms of cell proliferation, survival, metabolism, and migration. Increasing evidence suggests that Akt1, Akt2, and Akt3 play unique roles in breast cancer initiation and progression. We have recently shown that in contrast to Akt1, which accelerates mammary tumor induction in transgenic mice, Akt2 promotes metastasis of tumor cells without affecting the latency of tumor development. Despite the distinct phenotypic outputs resulting from Akt1 or Akt2 activation, very little is known about the mode by which such unique functions originate from these highly related kinases. Here we discuss potential mechanisms contributing to the differing functional specificity of Akt1 and Akt2 with respect to migration, invasion, and metastasis. ©2010 AACR.


Leslie W.D.,University of Manitoba | Morin S.N.,McGill University | Lix L.M.,University of Saskatchewan
Journal of Clinical Endocrinology and Metabolism | Year: 2012

Context: There is contradictory information on whether the rate of bone mineral density (BMD) loss is an independent risk factor for osteoporotic fractures and whether this should be included in fracture prediction systems. Objective: This study was undertaken to better define rate of BMD loss as a contributor to fracture risk in routine clinical practice. Design and Setting: We performed a retrospective cohort study using a database of all clinical BMD results for the province of Manitoba, Canada. Patients: We included 4498 untreated women age 40 yr and older at the time of a second BMD test performed between April 1996 and March 2009. Main Outcome Measures: A total of 146 women with major osteoporotic fracture outcomes after the second BMD test (mean observation, 2.7 yr) and relevant covariates were identified in population-based computerized health databases. Results: Annualized percentage change in total hip BMD was no greater in fracture compared to nonfracture women (-0.4±1.7 vs. -0.5±1.4; P=0.166). After adjustment for final total hip BMD, other covariates, and medication use, rate of total hip BMD change did not predict major osteoporotic fractures (hazard ratio, 0.95 per SD decrease; 95% confidence interval, 0.81-1.10). Similar results were also seen in analyses based upon change in lumbar spine and femoral neck BMD. Conclusions: We found no evidence that BMD loss, as detected during routine clinical monitoring, was a significant independent risk factor for major osteoporotic fractures. Copyright © 2012 by The Endocrine Society.
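A minimal sketch of this type of Cox model follows, using the lifelines package on simulated data generated under the paper's conclusion (fracture hazard depends on age and final BMD, but not on the rate of BMD change); nothing here is from the Manitoba registry.

```python
# Cox proportional hazards sketch on simulated data. The simulated hazard
# deliberately has NO dependence on bmd_change_pct, mirroring the finding.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 500
age = rng.normal(65, 8, n)
bmd_change_pct = rng.normal(-0.5, 1.5, n)   # annualized % change (no true effect)
final_hip_bmd = rng.normal(0.75, 0.10, n)   # g/cm^2

hazard = np.exp(0.04 * (age - 65) - 2.0 * (final_hip_bmd - 0.75))
event_time = rng.exponential(30.0 / hazard)  # larger hazard -> earlier fracture
followup = 3.0                               # ~3 yr of observation, as in the study

df = pd.DataFrame({
    "time": np.minimum(event_time, followup),
    "fracture": (event_time < followup).astype(int),
    "bmd_change_pct": bmd_change_pct,
    "final_hip_bmd": final_hip_bmd,
    "age": age,
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="fracture")
cph.print_summary()  # exp(coef) gives hazard ratios; bmd_change_pct HR ~ 1
```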


Ng A.K.Y.,University of Manitoba | Padilha F.,Tembo Solutions | Pallis A.A.,University of Aegean
Journal of Transport Geography | Year: 2013

Much research on dry ports associates them with enhanced seaport efficiency, relieving congestion without (significant) capacity expansion. It has also posited that dry ports are essential elements in the competitive position of seaports, since they facilitate access to (overlapping) hinterlands. However, work focusing on how institutions can strengthen (or dissipate) the bureaucratic and logistical roles of dry ports has remained scarce, especially for developing economies. Hence, through investigating the recent development of dry ports in four Brazilian states, this paper examines how the institutional framework affects the bureaucratic and logistical roles of dry ports in emerging economies. The paper posits that the Brazilian institutional framework in place has acted as a causal factor in strengthening the bureaucratic roles of dry ports while at the same time dissipating their logistical roles. By establishing the causal relation between these forces, the paper provides important insight into the impacts of institutions on transportation and regional development in different geographical regions. © 2012 Elsevier Ltd.


Murphy L.C.,University of Manitoba | Seekallu S.V.,University of Manitoba | Watson P.H.,University of Manitoba | Watson P.H.,BC Cancer Agency
Endocrine-Related Cancer | Year: 2011

Multiple sites of phosphorylation on human estrogen receptor α (ERα) have been identified by a variety of methodologies. Now, with the emerging availability of phospho-site-specific antibodies to ERα, the relevance of ERα phosphorylation in human breast cancer in vivo is being explored. Multiple phosphorylated sites in ERα can be detected in multiple breast tumor biopsy samples, providing evidence of their relevance to human breast cancer in vivo. Published data suggest that detection in primary breast tumors of phosphorylation at some sites in ERα is associated with a better clinical outcome, while phosphorylation at other sites is associated with a poorer clinical outcome, most often in patients who have been treated with tamoxifen. This suggests the hypothesis that phospho-profiling of ERα in human breast tumors, to establish an 'ERα phosphorylation code', may be a more accurate marker of prognosis and/or response to endocrine therapy in human breast cancer. © 2011 Society for Endocrinology Printed in Great Britain.


Morin S.N.,McGill University | Lix L.M.,University of Manitoba | Leslie W.D.,University of Manitoba
Journal of Bone and Mineral Research | Year: 2014

Previous fracture increases the risk of subsequent fractures regardless of the site of the initial fracture. Fracture risk assessment tools have been developed to guide clinical management; however, no discrimination is made as to the site of the prior fracture. Our objective was to determine which sites of previous nontraumatic fractures are most strongly associated with a diagnosis of osteoporosis, defined by a bone mineral density (BMD) T-score of ≤ -2.5 at the femoral neck, and an incident major osteoporotic fracture. Using administrative health databases, we conducted a retrospective historical cohort study of 39,991 women age 45 years and older who had BMD testing with dual-energy X-ray absorptiometry (DXA). Logistic regression and Cox proportional multivariate models were used to test the association of previous fracture site with risk of osteoporosis and incident fractures. Clinical fractures at the following sites were strongly and independently associated with higher risk of an osteoporotic femoral neck T-score after adjustment for age: hip (odds ratio [OR], 3.58; 95% confidence interval [CI], 3.04-4.21), pelvis (OR, 2.23; 95% CI, 1.66-3.0), spine (OR, 2.16; 95% CI, 1.77-2.62), and humerus (OR, 1.74; 95% CI, 1.49-2.02). Cox proportional hazards models, with adjustment for age and femoral neck BMD, showed the greatest increase in risk for a major osteoporotic fracture for women who had sustained previous fractures of the spine (hazard ratio [HR], 2.08; 95% CI, 1.72-2.53), humerus (HR, 1.70; 95% CI, 1.44-2.01), patella (HR, 1.54; 95% CI, 1.10-2.18), and pelvis (HR, 1.45; 95% CI, 1.04-2.02). In summary, our results confirm that nontraumatic fractures in women are associated with osteoporosis at the femoral neck and that the site of previous fracture impacts on future osteoporotic fracture risk, independent of BMD. © 2014 American Society for Bone and Mineral Research.
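A sketch of the age-adjusted logistic model of the kind described follows, on simulated data (invented, not the Manitoba cohort); the true odds ratio for a prior hip fracture is set to 3.5 in the simulation so the recovered estimate can be checked against it.

```python
# Age-adjusted logistic regression: odds of a femoral-neck T-score <= -2.5
# given a prior clinical hip fracture. All data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 2000
age = rng.normal(65, 10, n)
prior_hip_fx = rng.binomial(1, 0.05, n)

# Simulation truth: OR ~ 3.5 for prior hip fracture, modest age effect.
logit_p = -2.0 + 0.05 * (age - 65) + np.log(3.5) * prior_hip_fx
p = 1.0 / (1.0 + np.exp(-logit_p))
osteoporosis = rng.binomial(1, p)

df = pd.DataFrame({"osteoporosis": osteoporosis, "age": age,
                   "prior_hip_fx": prior_hip_fx})
fit = smf.logit("osteoporosis ~ age + prior_hip_fx", data=df).fit(disp=0)
print(np.exp(fit.params))      # odds ratios
print(np.exp(fit.conf_int()))  # 95% CIs
```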


Hirschmugl C.J.,University of Wisconsin - Milwaukee | Gough K.M.,University of Manitoba
Applied Spectroscopy | Year: 2012

The beamline design, microscope specifications, and initial results from the new mid-infrared beamline (IRENI) are reviewed. Synchrotron-based spectrochemical imaging, as recently implemented at the Synchrotron Radiation Center in Stoughton, Wisconsin, demonstrates the new capability to achieve diffraction-limited chemical imaging across the entire mid-infrared region, simultaneously, with high signal-to-noise ratio. IRENI extracts a large swath of radiation (320 hor. × 25 vert. mrad²) to homogeneously illuminate a commercial infrared (IR) microscope equipped with an IR focal plane array (FPA) detector. Wide-field images are collected, in contrast to single-pixel imaging from the confocal geometry with raster scanning, commonly used at most synchrotron beamlines. IRENI rapidly generates high quality, high spatial resolution data. The relevant advantages (spatial oversampling, speed, sensitivity, and signal-to-noise ratio) are discussed in detail and demonstrated with examples from a variety of disciplines, including formalin-fixed and flash-frozen tissue samples, live cells, fixed cells, paint cross-sections, polymer fibers, and novel nanomaterials. The impact of Mie scattering corrections on this high quality data is shown, and first results with a grazing angle objective are presented, along with future enhancements and plans for implementation of similar, small-scale instruments. © 2012 Society for Applied Spectroscopy.
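For context, "diffraction limited" here means a spatial resolution set by the wavelength and the objective's numerical aperture; by the Rayleigh criterion (the NA value below is illustrative, not taken from the paper),

```latex
\Delta x \approx \frac{0.61\,\lambda}{\mathrm{NA}},
\qquad \text{e.g.}\quad
\Delta x \approx \frac{0.61 \times 10\ \mu\mathrm{m}}{0.65} \approx 9.4\ \mu\mathrm{m}
```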


Leslie W.D.,University of Manitoba | Morin S.,McGill University | Lix L.M.,University of Saskatchewan
Annals of Internal Medicine | Year: 2010

Background: Several national organizations recommend that fracture risk assessment and osteoporotic treatment be based on estimated absolute 10-year fracture risk rather than bone mineral density (BMD) alone. Objective: To assess the changes in physician prescribing behavior after introduction of absolute 10-year fracture risk reporting. Design: Before-and-after study. Setting: Manitoba, Canada, which has an integrated BMD program in which tests are linkable to a population-based administrative health database repository. Patients: Women 50 years or older who were not receiving osteoporosis medication (2042 before and 3889 after intervention). Intervention: Introduction of a system reporting absolute 10-year fracture risk along with dual-energy x-ray absorptiometry results. Measurements: The proportion of untreated women who were prescribed osteoporosis medications in the year after baseline BMD measurement. Results: Absolute fracture risk reporting reclassified more women (32.7%) into lower-risk categories than into higher-risk categories (10%). This effect was more prominent in women younger than 65 years. Fewer women per physician were prescribed osteoporosis drugs after introduction of absolute fracture risk reporting. The absolute fracture risk reporting system was associated with an overall reduction in osteoporosis medications dispensed (adjusted absolute reduction, 9.0 percentage points [95% CI, 3.9 to 14.2 percentage points]; relative reduction, 21.3% [CI, 9.2% to 33.5%]; P < 0.001). The reduction was attributed to fewer drugs dispensed to women at low and moderate risk for fracture. No differences in fracture rates were observed. Limitations: This was a nonrandomized study. The risk assessment system studied differs slightly from other 10-year fracture risk assessment models. Conclusion: Change from a T-score-based fracture risk reporting system to a system based on absolute 10-year fracture risk was associated with appropriate, guideline-based changes in prescription of osteoporosis medications. Primary Funding Source: None. © 2010 American College of Physicians.
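As a consistency check on the reported effect sizes (the baseline prescribing rate below is inferred, not stated in the abstract): the relative reduction is the absolute reduction divided by the baseline rate, implying a baseline of roughly 42%,

```latex
\text{relative reduction} = \frac{\text{absolute reduction}}{\text{baseline rate}}
\quad\Longrightarrow\quad
\text{baseline} \approx \frac{9.0\ \text{points}}{0.213} \approx 42\%
```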


Bolton J.M.,University of Manitoba | Gunnell D.,University of Bristol | Turecki G.,McGill University
BMJ (Online) | Year: 2015

Suicide is the 15th most common cause of death worldwide. Although relatively uncommon in the general population, suicide rates are much higher in people with mental health problems. Clinicians often have to assess and manage suicide risk. Risk assessment is challenging for several reasons, not least because conventional approaches to risk assessment rely on patient self reporting and suicidal patients may wish to conceal their plans. Accurate methods of predicting suicide therefore remain elusive and are actively being studied. Novel approaches to risk assessment have shown promise, including empirically derived tools and implicit association tests. Service provision for suicidal patients is often substandard, particularly at times of highest need, such as after discharge from hospital or the emergency department. Although several drug based and psychotherapy based treatments exist, the best approaches to reducing the risk of suicide are still unclear. Some of the most compelling evidence supports long established treatments such as lithium and cognitive behavioral therapy. Emerging options include ketamine and internet based psychotherapies. This review summarizes the current science in suicide risk assessment and provides an overview of the interventions shown to reduce the risk of suicide, with a focus on the clinical management of people with mental disorders.


Galpern P.,University of Manitoba | Manseau M.,University of Manitoba | Wilson P.,Trent University
Molecular Ecology | Year: 2012

Landscape genetic analyses are typically conducted at one spatial scale. Considering multiple scales may be essential for identifying landscape features influencing gene flow. We examined landscape connectivity for woodland caribou (Rangifer tarandus caribou) at multiple spatial scales using a new approach based on landscape graphs that creates a Voronoi tessellation of the landscape. To illustrate the potential of the method, we generated five resistance surfaces to explain how landscape pattern may influence gene flow across the range of this population. We tested each resistance surface using a raster at the spatial grain of available landscape data (200 m grid squares). We then used our method to produce up to 127 additional grains for each resistance surface. We applied a causal modelling framework with partial Mantel tests, where evidence of landscape resistance is tested against an alternative hypothesis of isolation-by-distance, and found statistically significant support for landscape resistance to gene flow in 89 of the 507 spatial grains examined. We found evidence that major roads as well as the cumulative effects of natural and anthropogenic disturbance may be contributing to the genetic structure. Using only the original grid surface yielded no evidence for landscape resistance to gene flow. Our results show that using multiple spatial grains can reveal landscape influences on genetic structure that may be overlooked with a single grain, and suggest that coarsening the grain of landcover data may be appropriate for highly mobile species. We discuss how grains of connectivity and related analyses have potential landscape genetic applications in a broad range of systems. © 2012 Blackwell Publishing Ltd.
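The partial Mantel test at the heart of the causal-modelling framework can be sketched as follows; the matrices are toy placeholders, not caribou data, and production analyses would typically use an established package rather than this hand-rolled version.

```python
# Partial Mantel test sketch: correlation between genetic and landscape-
# resistance distances, controlling for geographic distance; significance
# by permuting rows/columns of the genetic distance matrix.
import numpy as np

def offdiag(m):
    """Upper-triangle (off-diagonal) entries of a symmetric distance matrix."""
    i, j = np.triu_indices_from(m, k=1)
    return m[i, j]

def partial_r(a, b, c):
    """First-order partial correlation r(a, b | c)."""
    rab = np.corrcoef(a, b)[0, 1]
    rac = np.corrcoef(a, c)[0, 1]
    rbc = np.corrcoef(b, c)[0, 1]
    return (rab - rac * rbc) / np.sqrt((1 - rac**2) * (1 - rbc**2))

def partial_mantel(gen, res, geo, n_perm=999, seed=0):
    rng = np.random.default_rng(seed)
    obs = partial_r(offdiag(gen), offdiag(res), offdiag(geo))
    n, hits = gen.shape[0], 0
    for _ in range(n_perm):
        p = rng.permutation(n)
        hits += partial_r(offdiag(gen[np.ix_(p, p)]),
                          offdiag(res), offdiag(geo)) >= obs
    return obs, (hits + 1) / (n_perm + 1)

# Toy distance matrices for five sampling locations.
rng = np.random.default_rng(1)
pts = rng.random((5, 2))
geo = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)

def noisy(base, scale):
    m = base + scale * rng.random(base.shape)
    m = (m + m.T) / 2
    np.fill_diagonal(m, 0.0)
    return m

res = noisy(geo, 0.1)   # hypothetical resistance distances
gen = noisy(res, 0.05)  # hypothetical genetic distances

r, p = partial_mantel(gen, res, geo)
print(f"partial Mantel r = {r:.3f}, p = {p:.3f}")
```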


Lidula N.W.A.,University of Moratuwa | Rajapakse A.D.,University of Manitoba
Renewable and Sustainable Energy Reviews | Year: 2014

Microgrids can operate in parallel with the grid or as a power island. They are thus expected to perform a seamless transition from islanded to parallel operation and vice versa. This paper reviews the existing DG interconnection standards for microgrid resynchronization, investigates possible simple solutions for voltage balancing, and shows that an existing synchrocheck relay with a circuit breaker is sufficient to reconnect an islanded, highly unbalanced microgrid back to the utility grid. © 2014 Elsevier Ltd.
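To make the reconnection criterion concrete, here is an illustrative sketch of synchrocheck logic; the threshold values are typical IEEE 1547-style synchronization limits used as assumptions, not figures taken from the paper.

```python
# Illustrative synchrocheck: permit breaker close only when the voltage,
# frequency, and phase-angle differences across the point of common
# coupling are all within limits. Thresholds are assumed typical values.
def synchrocheck_permits_close(dv_pct, df_hz, dphi_deg,
                               max_dv_pct=10.0, max_df_hz=0.3,
                               max_dphi_deg=20.0):
    """Return True if the islanded microgrid may be reconnected to the grid."""
    return (abs(dv_pct) <= max_dv_pct and
            abs(df_hz) <= max_df_hz and
            abs(dphi_deg) <= max_dphi_deg)

print(synchrocheck_permits_close(3.0, 0.1, 8.0))  # True: within all limits
print(synchrocheck_permits_close(3.0, 0.5, 8.0))  # False: frequency error too large
```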
