St. Louis, MO, United States

Washington University in St. Louis is a private research university located in St. Louis, Missouri, United States. Founded in 1853 and named after George Washington, the university has students and faculty from all 50 U.S. states and more than 120 countries. Twenty-two Nobel laureates have been affiliated with Washington University, nine having done the major part of their pioneering research at the university. Washington University's undergraduate program is ranked 14th in the nation and 7th in admissions selectivity by U.S. News and World Report. The university is ranked 30th in the world by the Academic Ranking of World Universities. In 2006, the university received $434 million in federal research funds, ranking seventh among private universities receiving federal research and development support, and in the top four in funding from the National Institutes of Health.

Washington University is made up of seven graduate and undergraduate schools that encompass a broad range of academic fields. Officially incorporated as "The Washington University," the university is occasionally referred to as "WUSTL," an acronym derived from its initials. More commonly, however, students refer to the university as "Wash. U." To prevent confusion over its location, the Board of Trustees added the phrase "in St. Louis" in 1976. (Wikipedia)



Patent
Washington University in St. Louis, Eli Lilly and Company | Date: 2017-04-05

A method to treat conditions characterized by the formation of amyloid plaques, both prophylactically and therapeutically, is described. The method employs humanized antibodies that sequester soluble Aβ peptide from human biological fluids, or that preferably bind specifically to an epitope contained within positions 13-28 of the amyloid beta (Aβ) peptide.


Treiman R.,Washington University in St. Louis | Boland K.,Washington University in St. Louis
Journal of Memory and Language | Year: 2017

Choosing between alternative spellings for sounds can be difficult for even experienced spellers. We examined the factors that influence adults’ choices in one such case: single- versus double-letter spellings of medial consonants in English. The major systematic influence on the choice between medial singletons and doublets has been thought to be phonological context: whether the preceding vowel is phonologically long or short. With phonological context equated, we found influences of graphotactic context—both the number of letters in the spelling of the vowel and the spelling sequence following the medial consonant—in adults’ spelling of nonwords and in the English vocabulary itself. Existing models of the spelling process do not include a mechanism by which the letters that are selected for one phoneme can influence the choice of spellings for another phoneme and thus require modification in order to explain the present results. © 2016 Elsevier Inc.


McLaughlin M.,Washington University in St. Louis
Child Abuse and Neglect | Year: 2017

A number of research studies have documented an association between child maltreatment and family income. Yet, little is known about the specific types of economic shocks that affect child maltreatment rates. The paucity of information is troubling given that more than six million children are reported for maltreatment annually in the U.S. alone. This study examines whether an exogenous shock to families’ disposable income, a change in the price of gasoline, predicts changes in child maltreatment. The findings of a fixed-effects regression show that increases in state-level gas prices are associated with increases in state-level child maltreatment referral rates, even after controlling for demographic and other economic variables. The results are robust to the manner of estimation; random-effects and mixed-effects regressions produce similar estimates. The findings suggest that fluctuations in the price of gas may have important consequences for children. © 2017 Elsevier Ltd
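
As an illustration of the study design described above, a state and year fixed-effects regression can be sketched in a few lines of Python. The toy panel, the variable names, and the pandas/statsmodels workflow are assumptions for exposition, not the author's actual data or code.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Toy state-year panel standing in for the real referral-rate and gas-price data.
    rng = np.random.default_rng(0)
    rows = [(s, y, rng.normal(2.5, 0.5), rng.normal(45, 5))
            for s in [f"S{i}" for i in range(10)] for y in range(2005, 2012)]
    panel = pd.DataFrame(rows, columns=["state", "year", "gas_price", "referral_rate"])

    # State dummies absorb time-invariant state traits; year dummies absorb national shocks.
    model = smf.ols("referral_rate ~ gas_price + C(state) + C(year)", data=panel)
    result = model.fit(cov_type="cluster", cov_kwds={"groups": panel["state"]})
    print(result.params["gas_price"])  # the state-level gas-price association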


Govero J.,Washington University in St. Louis | Esakky P.,Washington University in St. Louis | Scheaffer S.M.,Washington University in St. Louis | Fernandez E.,Washington University in St. Louis | And 8 more authors.
Nature | Year: 2016

Infection of pregnant women with Zika virus (ZIKV) can cause congenital malformations including microcephaly, which has focused global attention on this emerging pathogen. In addition to transmission by mosquitoes, ZIKV can be detected in the seminal fluid of affected males for extended periods of time and transmitted sexually. Here, using a mouse-adapted African ZIKV strain (Dakar 41519), we evaluated the consequences of infection in the male reproductive tract of mice. We observed persistence of ZIKV, but not the closely related dengue virus (DENV), in the testis and epididymis of male mice, and this was associated with tissue injury that caused diminished testosterone and inhibin B levels and oligospermia. ZIKV preferentially infected spermatogonia, primary spermatocytes and Sertoli cells in the testis, resulting in cell death and destruction of the seminiferous tubules. Less damage was caused by a contemporary Asian ZIKV strain (H/PF/2013), in part because this virus replicates less efficiently in mice. The extent to which these observations in mice translate to humans remains unclear, but longitudinal studies of sperm function and viability in ZIKV-infected humans seem warranted.


Kharasch E.D.,Washington University in St. Louis
Clinical Pharmacology in Drug Development | Year: 2017

Methadone is a cornerstone therapy for opioid addiction and a public health strategy for HIV/AIDS and hepatitis C reduction. Methadone is also used for acute and chronic pain. As use for chronic pain has grown, so too have adverse events. Constitutive and acquired (drug interactions) inter- and intraindividual variability in methadone pharmacokinetics and pharmacodynamics confounds reliable clinical use. Identification of enzymes and transporters responsible for methadone disposition has been a long-sought ideal. Initial in vitro studies identified CYP3A4 as metabolizing methadone. Subsequently, by extrapolation, CYP3A4 was long assumed to be responsible for clinical methadone disposition. However, CYP2B6 is also a major catalyst of methadone metabolism in vitro. It has now been unequivocally established that CYP2B6, not CYP3A4, is the principal determinant of methadone metabolism, clearance, elimination, and plasma concentrations in humans. Methadone disposition is susceptible to inductive and inhibitory drug interactions. CYP2B6 genetics also influences methadone metabolism and clearance, which were diminished in CYP2B6*6 carriers and increased in CYP2B6*4 carriers. CYP2B6 genetics can explain, in part, interindividual variability in methadone metabolism and clearance. Thus, both constitutive variability due to CYP2B6 genetics, and CYP2B6-mediated drug interactions, can alter methadone disposition, clinical effect, and drug safety. Methadone is not a substrate for major influx or efflux transporters. © 2017, The American College of Clinical Pharmacology


Payne P.R.,Washington University in St. Louis
Pacific Symposium on Biocomputing | Year: 2016

The modern healthcare and life sciences ecosystem is moving towards an increasingly open and data-centric approach to discovery science. This evolving paradigm is predicated on a complex set of information needs related to our collective ability to share, discover, reuse, integrate, and analyze open biological, clinical, and population level data resources of varying composition, granularity, and syntactic or semantic consistency. Such an evolution is further impacted by a concomitant growth in the size of data sets that can and should be employed for both hypothesis discovery and testing. When such open data can be accessed and employed for discovery purposes, a broad spectrum of high impact end-points is made possible. These span the spectrum from identification of de novo biomarker complexes that can inform precision medicine, to the repositioning or repurposing of extant agents for new and cost-effective therapies, to the assessment of population level influences on disease and wellness. Of note, these types of uses of open data can be either primary, wherein open data is the substantive basis for inquiry, or secondary, wherein open data is used to augment or enrich project-specific or proprietary data that is not open in and of itself. This workshop is concerned with the key challenges, opportunities, and methodological best practices whereby open data can be used to drive the advancement of discovery science in all of the aforementioned capacities.


Patel R.G.,Washington University in St. Louis
Facial Plastic Surgery | Year: 2017

The nose is a complex structure important in facial aesthetics and in respiratory physiology. Nasal defects can pose a challenge to reconstructive surgeons who must re-create nasal symmetry while maintaining nasal function. A basic understanding of the underlying nasal anatomy is thus necessary for successful nasal reconstruction. © 2017 by Thieme Medical Publishers, Inc.


Han J.,Washington University in St. Louis
Nuclear Medicine Communications | Year: 2017

OBJECTIVE: The P2X7 receptor (P2X7R) is a key regulatory element in the neuroinflammatory cascade that provides a promising target for imaging neuroinflammation. GSK1482160, a P2X7R modulator with nanomolar binding affinity and high selectivity, has been successfully radiolabeled and utilized for imaging P2X7 levels in a mouse model of lipopolysaccharide-induced systemic inflammation. In the current study, we further characterized its binding profile and determined whether [11C]GSK1482160 can detect changes in P2X7R expression in a rodent model of multiple sclerosis. METHODS: [11C]GSK1482160 was synthesized with high specific activity and high radiochemical purity. Radioligand saturation and competition binding assays were performed for [11C]GSK1482160 using HEK293-hP2X7R living cells. Micro-PET studies were carried out in nonhuman primates. In vitro autoradiography and immunohistochemistry studies were then carried out to evaluate tracer uptake and P2X7 expression in experimental autoimmune encephalomyelitis (EAE) rat lumbar spinal cord at EAE-peak and EAE-remitting stages compared with sham rats. RESULTS: [11C]GSK1482160 binds to HEK293-hP2X7R living cells with high binding affinity (Kd=5.09±0.98 nmol/l, Ki=2.63±0.6 nmol/l). Micro-PET studies showed high tracer retention and a homogeneous distribution in the brain of nonhuman primates. In the EAE rat model, tracer uptake of [11C]GSK1482160 in rat lumbar spinal cord was highest at the EAE-peak stage (277.74±79.74 PSL/mm²), followed by the EAE-remitting stage (149.00±54.14 PSL/mm²) and sham (66.37±1.48 PSL/mm²). The tracer uptake correlated strongly with P2X7-positive cell counts, activated microglia numbers, and disease severity. CONCLUSION: We conclude that [11C]GSK1482160 has the potential for application in monitoring neuroinflammation. Copyright © 2017 Wolters Kluwer Health, Inc. All rights reserved.
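
For reference (standard radioligand pharmacology, not a detail stated in the abstract), the Ki reported from a competition assay is conventionally derived from the measured IC50 via the Cheng-Prusoff relation, where [L] is the free radioligand concentration and K_d its dissociation constant:

    K_i = \frac{\mathrm{IC}_{50}}{1 + [L]/K_d}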


Camazine M.N.,Washington University in St. Louis
Pediatric Critical Care Medicine | Year: 2017

OBJECTIVE: To determine if the use of fresh frozen plasma/frozen plasma 24 hours compared to solvent detergent plasma is associated with international normalized ratio reduction or ICU mortality in critically ill children. DESIGN: This is an a priori secondary analysis of a prospective, observational study. Study groups were defined as those transfused with either fresh frozen plasma/frozen plasma 24 hours or solvent detergent plasma. Outcomes were international normalized ratio reduction and ICU mortality. Multivariable logistic regression was used to determine independent associations. SETTING: One hundred one PICUs in 21 countries. PATIENTS: All critically ill children admitted to a participating unit were included if they received at least one plasma unit during six predefined 1-week (Monday to Friday) periods. All children were exclusively transfused with either fresh frozen plasma/frozen plasma 24 hours or solvent detergent plasma. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: There were 443 patients enrolled in the study. Twenty-four patients (5%) were excluded because no plasma type was recorded; the remaining 419 patients were analyzed. The fresh frozen plasma/frozen plasma 24 hours group included 357 patients, and the solvent detergent plasma group included 62 patients. The median (interquartile range) age and weight were 1 year (0.2–6.4) and 9.4 kg (4.0–21.1), respectively. There was no difference in reason for admission, severity of illness score, pretransfusion international normalized ratio, or lactate values; however, there was a difference in primary indication for plasma transfusion (p < 0.001). There was no difference in median (interquartile range) international normalized ratio reduction between the fresh frozen plasma/frozen plasma 24 hours and solvent detergent plasma study groups, –0.2 (–0.4 to 0) and –0.2 (–0.3 to 0), respectively (p = 0.80). ICU mortality was lower in the solvent detergent plasma versus fresh frozen plasma/frozen plasma 24 hours group, 14.5% versus 29.1%, respectively (p = 0.02). Upon adjusted analysis, solvent detergent plasma transfusion was independently associated with reduced ICU mortality (odds ratio, 0.40; 95% CI, 0.16–0.99; p = 0.05). CONCLUSIONS: Solvent detergent plasma use in critically ill children may be associated with improved survival. These hypothesis-generating data support a randomized controlled trial comparing solvent detergent plasma to fresh frozen plasma/frozen plasma 24 hours. © 2017 The Society of Critical Care Medicine and the World Federation of Pediatric Intensive and Critical Care Societies


Holmes B.B.,Washington University in St. Louis
Journal of Neuro-Ophthalmology | Year: 2017

Vertebrobasilar dolichoectasia (VBD) is characterized by significant dilation, elongation, and tortuosity of the vertebrobasilar system. We present a unique case of VBD, confirmed by neuroimaging studies, showing vascular compression of the right optic tract and lower cranial nerves leading to an incongruous left homonymous inferior quadrantanopia and glossopharyngeal neuralgia. © 2017 by North American Neuro-Ophthalmology Society


Queller D.C.,Washington University in St. Louis
American Naturalist | Year: 2017

Evolutionary biology is undergirded by an extensive and impressive set of mathematical models. Yet only one result, Fisher’s theorem about selection and fitness, is generally accorded the status of a fundamental theorem. I argue that although its fundamental status is justified by its simplicity and scope, there are additional results that seem similarly fundamental. I suggest that the most fundamental theorem of evolution is the Price equation, both because of its simplicity and broad scope and because it can be used to derive four other familiar results that are similarly fundamental: Fisher’s average-excess equation, Robertson’s secondary theorem of natural selection, the breeder’s equation, and Fisher’s fundamental theorem. These derivations clarify both the relationships behind these results and their assumptions. Slightly less fundamental results include those for multivariate evolution and social selection. A key feature of fundamental theorems is that they have great simplicity and scope, which are often achieved by sacrificing perfect accuracy. Quantitative genetics has been more productive of fundamental theorems than population genetics, probably because its empirical focus on unknown genotypes freed it from the tyranny of detail and allowed it to focus on general issues. © 2017 by The University of Chicago.
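
For reference, the Price equation singled out above can be stated in one line. In standard notation, with w_i the fitness of entity i, z_i its trait value, \Delta z_i the parent-offspring change in the trait, and bars denoting population averages:

    \bar{w}\, \Delta\bar{z} = \operatorname{Cov}(w_i, z_i) + \operatorname{E}(w_i\, \Delta z_i)

The covariance term captures selection on the trait and the expectation term captures transmission effects; specializing z_i or adding inheritance assumptions yields the related results the abstract lists.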


Catano C.P.,Washington University in St. Louis | Dickson T.L.,University of Nebraska at Omaha | Myers J.A.,Washington University in St. Louis
Ecology Letters | Year: 2017

A major challenge in ecology, conservation and global-change biology is to understand why biodiversity responds differently to similar environmental changes. Contingent biodiversity responses may depend on how disturbance and dispersal interact to alter variation in community composition (β-diversity) and assembly mechanisms. However, quantitative syntheses of these patterns and processes across studies are lacking. Using null-models and meta-analyses of 22 factorial experiments in herbaceous plant communities across Europe and North America, we show that disturbance diversifies communities when dispersal is limited, but homogenises communities when combined with increased immigration from the species pool. In contrast to the hypothesis that disturbance and dispersal mediate the strength of niche assembly, both processes altered β-diversity through neutral-sampling effects on numbers of individuals and species in communities. Our synthesis suggests that stochastic effects of disturbance and dispersal on community assembly play an important, but underappreciated, role in mediating biotic homogenisation and biodiversity responses to environmental change. © 2017 John Wiley & Sons Ltd/CNRS.


Madero J.E.,Washington University in St. Louis | Axelbaum R.L.,Washington University in St. Louis
Combustion and Flame | Year: 2017

Studies of high-water-content fuels (also known as wet fuels) have demonstrated that, under proper conditions, stable combustion can be achieved at very high water concentrations. Stable spray flames of wet fuels have been attained with fuel/water mixtures having stoichiometric adiabatic flame temperatures as low as 251 °C. In this study, we investigate low-volatility wet fuels, using glycerol as the fuel and ethanol as a stabilization additive. This study expands on previous work by determining the minimum amount of ethanol that needs to be added to a glycerol/water mixture to produce a stable flame and by investigating the spray dynamics and structure for these fuels, to delineate the mechanism of ignition and to understand how ethanol alters the vaporization behavior, droplet breakup, and spray dynamics. Detailed 2-D velocity, Sauter mean diameter (SMD), 2-D flux, and number concentration measurements were performed with a Phase Doppler Particle Analyzer (PDPA) in sprays of three fuel/water mixtures: (a) 30% glycerol/70% water, (b) 30% glycerol/10% ethanol/60% water, and (c) the same mixture as (b) but in a combusting spray. All percentages are by weight. Results show that the addition of ethanol to the glycerol/water mixture turns the hollow-cone spray pattern into a narrow full-cone pattern, leading to recirculation of fine droplets in the region just downstream of the nozzle, which is essential to ignition. The high concentration of fine droplets, along with the high vapor pressure and high activity coefficient of ethanol, leads to extremely rapid vaporization of ethanol in the inner recirculation zone. The combustion of the ethanol raises the temperature in this region, while the swirling flow brings heat upstream towards the nozzle, further enhancing stability. These results explain why the addition of 10% ethanol can lead to robust flames of glycerol/water mixtures that might not be expected to yield stable combustion. © 2017 The Combustion Institute
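
For reference, the Sauter mean diameter used in these measurements is the standard spray statistic: the diameter of a droplet whose volume-to-surface-area ratio equals that of the spray as a whole. With n_i the number of droplets in size class d_i (definition supplied here for convenience, not taken from the paper):

    D_{32} = \frac{\sum_i n_i d_i^{3}}{\sum_i n_i d_i^{2}}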


Bowles R.D.,University of Utah | Setton L.A.,Washington University in St. Louis
Biomaterials | Year: 2017

The intervertebral disc contributes to motion, weight bearing, and flexibility of the spine, but is susceptible to damage and morphological changes that contribute to pathology with age and injury. Engineering strategies that rely upon synthetic materials or composite implants that do not interface with the biological components of the disc have not met with widespread use or desirable outcomes in the treatment of intervertebral disc pathology. Here we review bioengineering advances to treat disc disorders, using cell-supplemented materials, or acellular, biologically based materials, that provide opportunity for cell-material interactions and remodeling in the treatment of intervertebral disc disorders. While a field still in early development, bioengineering-based strategies employing novel biomaterials are emerging as promising alternatives for clinical treatment of intervertebral disc disorders. © 2017 Elsevier Ltd


Feinstein Z.,Washington University in St. Louis
Operations Research Letters | Year: 2017

This paper provides a framework for modeling the financial system with multiple illiquid assets during a crisis. This work generalizes the paper by Amini et al. (2016) by allowing for differing liquidation strategies. The main result is a proof of sufficient conditions for the existence of an equilibrium liquidation strategy with corresponding unique clearing payments and liquidation prices. An algorithm for computing the maximal clearing payments and prices is provided. © 2017 Elsevier B.V.
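
As background on the class of models being generalized, the notation below is the classic single-asset Eisenberg-Noe clearing condition rather than the multi-asset formulation of the paper itself. With \bar{p}_i the total obligations of firm i, e_i its external assets, and \Pi_{ji} the fraction of firm j's obligations owed to firm i, clearing payments form a fixed point:

    p_i = \min\Big( \bar{p}_i,\; e_i + \sum_j \Pi_{ji}\, p_j \Big)

Existence of such fixed points classically follows from monotonicity arguments, the template that results of this kind extend to settings with illiquid assets and endogenous liquidation prices.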


Harris J.K.,Washington University in St. Louis
Journal of Public Health Management and Practice | Year: 2017

CONTEXT: Foodborne illness affects 1 in 4 US residents each year. Few of those sickened seek medical care or report the illness to public health authorities, complicating prevention efforts. Citizens who report illness identify food establishments with more serious and critical violations than found by regular inspections. New media sources, including online restaurant reviews and social media postings, have the potential to improve reporting. OBJECTIVE: We implemented a Web-based Dashboard (HealthMap Foodborne Dashboard) to identify and respond to tweets about food poisoning from St Louis City residents. DESIGN AND SETTING: This report examines the performance of the Dashboard in its first 7 months after implementation in the City of St Louis Department of Health. MAIN OUTCOME MEASURES: We examined the number of relevant tweets captured and replied to, the number of foodborne illness reports received as a result of the new process, and the results of restaurant inspections following each report. RESULTS: In its first 7 months (October 2015-May 2016), the Dashboard captured 193 relevant tweets. Our replies to relevant tweets resulted in more filed reports than several previously existing foodborne illness reporting mechanisms in St Louis during the same time frame. The proportion of restaurants with food safety violations was not statistically different (p = .60) in restaurants inspected after reports from the Dashboard compared with those inspected following reports through other mechanisms. CONCLUSION: The Dashboard differs from other citizen engagement mechanisms in its use of current data, allowing direct interaction with constituents on issues when relevant to the constituent to provide time-sensitive education and mobilizing information. In doing so, the Dashboard technology has potential for improving foodborne illness reporting and can be implemented in other areas to improve response to public health issues such as suicidality, spread of Zika virus infection, and hospital quality. This is an open-access article distributed under the terms of the Creative Commons Attribution-Non Commercial-No Derivatives License 4.0 (CC BY-NC-ND), where it is permissible to download and share the work provided it is properly cited. The work cannot be changed in any way or used commercially without permission from the journal. Copyright © 2017 Wolters Kluwer Health, Inc. All rights reserved.
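
A minimal sketch of the keyword-plus-location tweet triage such a dashboard performs is given below; the keyword list, the bounding box for St Louis, and the tweet record layout are illustrative assumptions, not details of the HealthMap implementation.

    FOOD_POISONING_TERMS = ("food poisoning", "foodpoisoning", "threw up", "stomach cramps")

    # Approximate lon/lat bounding box for the City of St Louis (west, south, east, north).
    ST_LOUIS_BBOX = (-90.32, 38.53, -90.16, 38.77)

    def is_relevant(tweet):
        """True for geotagged tweets that mention a food-poisoning term inside the city box."""
        text = tweet["text"].lower()
        if not any(term in text for term in FOOD_POISONING_TERMS):
            return False
        coords = tweet.get("coordinates")
        if coords is None:
            return False
        lon, lat = coords
        west, south, east, north = ST_LOUIS_BBOX
        return west <= lon <= east and south <= lat <= north

    tweets = [  # toy records standing in for a streaming-API feed
        {"text": "pretty sure I got food poisoning at lunch", "coordinates": (-90.20, 38.63)},
        {"text": "best ramen in town today", "coordinates": (-90.25, 38.64)},
    ]
    print([t["text"] for t in tweets if is_relevant(t)])  # -> the first tweet only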


Nussinov Z.,Washington University in St. Louis
Philosophical Magazine | Year: 2017

We apply microcanonical ensemble considerations to suggest that, whenever it may thermalise, a general disorder-free many-body Hamiltonian of a typical atomic system has solid-like eigenstates at low energies and fluid-type (and gaseous, plasma) eigenstates associated with energy densities exceeding those present in the melting (and, respectively, higher energy) transition(s). In particular, the lowest energy density at which the eigenstates of such a clean many body atomic system undergo a non-analytic change is that of the melting (or freezing) transition. We invoke this observation to analyse the evolution of a liquid upon supercooling (i.e. cooling rapidly enough to avoid solidification below the freezing temperature). Expanding the wavefunction of a supercooled liquid in the complete eigenbasis of the many-body Hamiltonian, only the higher energy liquid-type eigenstates contribute significantly to measurable hydrodynamic relaxations (e.g. those probed by viscosity) while static thermodynamic observables become weighted averages over both solid- and liquid-type eigenstates. Consequently, when extrapolated to low temperatures, hydrodynamic relaxation times of deeply supercooled liquids (i.e. glasses) may seem to diverge at nearly the same temperature at which the extrapolated entropy of the supercooled liquid becomes that of the solid. In this formal quantum framework, the increasingly sluggish (and spatially heterogeneous) dynamics in supercooled liquids as their temperature is lowered stems from the existence of the single non-analytic change of the eigenstates of the clean many-body Hamiltonian at the equilibrium melting transition present in low energy solid-type eigenstates. We derive a single (possibly computable) dimensionless parameter fit to the viscosity and suggest other testable predictions of our approach. © 2017 Informa UK Limited, trading as Taylor & Francis Group


Frachetti M.D.,Washington University in St. Louis | Smith C.E.,Washington University in St. Louis | Traub C.M.,Washington University in St. Louis | Williams T.,University College London
Nature | Year: 2017

There are many unanswered questions about the evolution of the ancient 'Silk Roads' across Asia. This is especially the case in their mountainous stretches, where harsh terrain is seen as an impediment to travel. Considering the ecology and mobility of inner Asian mountain pastoralists, we use 'flow accumulation' modelling to calculate the annual routes of nomadic societies (from 750 m to 4,000 m elevation). Aggregating 500 iterations of the model reveals a high-resolution flow network that simulates how centuries of seasonal nomadic herding could shape discrete routes of connectivity across the mountains of Asia. We then compare the locations of known high-elevation Silk Road sites with the geography of these optimized herding flows, and find a significant correspondence in mountainous regions. Thus, we argue that highland Silk Road networks (from 750 m to 4,000 m) emerged slowly in relation to long-established mobility patterns of nomadic herders in the mountains of inner Asia.
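
Flow accumulation itself is a standard terrain-routing primitive borrowed from hydrology. The sketch below is a generic single-direction (D8-style) accumulation on a toy elevation grid, offered only to illustrate the primitive; the paper's model aggregates 500 iterations over ecology-based cost surfaces, which this sketch does not attempt to reproduce.

    import numpy as np

    def d8_flow_accumulation(elev):
        """Each cell drains to its steepest-descent 8-neighbor; cells are
        processed from high to low so upstream flow arrives before passing on."""
        rows, cols = elev.shape
        acc = np.ones_like(elev, dtype=float)           # every cell contributes itself
        for flat in np.argsort(elev, axis=None)[::-1]:  # highest cells first
            r, c = divmod(flat, cols)
            best_drop, target = 0.0, None
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr, cc = r + dr, c + dc
                    if (dr or dc) and 0 <= rr < rows and 0 <= cc < cols:
                        drop = elev[r, c] - elev[rr, cc]
                        if drop > best_drop:
                            best_drop, target = drop, (rr, cc)
            if target is not None:
                acc[target] += acc[r, c]                # pass accumulated flow downslope
        return acc

    elev = np.array([[3., 2., 1.],
                     [4., 3., 0.],
                     [5., 4., 3.]])
    print(d8_flow_accumulation(elev))  # high values trace the drainage routes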


Markham C.,Washington University in St. Louis
Current Opinion in Pediatrics | Year: 2017

PURPOSE OF REVIEW: Brain-directed critical care for children is a relatively new area of subspecialization in pediatric critical care. Pediatric neurocritical care teams combine the expertise of neurology, neurosurgery, and critical care medicine. The positive impact of delivering specialized care to pediatric patients with acute neurological illness is becoming more apparent, but the optimum way to implement and sustain the delivery of this is complicated and poorly understood. We aim to provide emerging evidence supporting that effective implementation of pediatric neurocritical care pathways can improve patient survival and outcomes. We also provide an overview of the most effective strategies across the field of implementation science that can facilitate deployment of neurocritical care pathways in the pediatric ICU. RECENT FINDINGS: Implementation strategies can broadly be grouped according to six categories: planning, educating, restructuring, financing, managing quality, and attending to the policy context. Using a combination of these strategies in the last decade, several institutions have improved patient morbidity and mortality. Although much work remains to be done, emerging evidence supports that implementation of evidence-based care pathways for critically ill children with two common neurological diagnoses – status epilepticus and traumatic brain injury – improves outcomes. SUMMARY: Pediatric and neonatal neurocritical care programs that support evidence-based care can be effectively structured using appropriately sequenced implementation strategies to improve outcomes across a variety of patient populations and in a variety of healthcare settings. Copyright © 2017 Wolters Kluwer Health, Inc. All rights reserved.


Dhar R.,Washington University in St. Louis
Neurocritical Care | Year: 2017

Neurologic disturbances including encephalopathy, seizures, and focal deficits complicate the course of 10–30% of patients undergoing organ or stem cell transplantation. While much of this morbidity is multifactorial and often associated with extra-cerebral dysfunction (e.g., graft dysfunction, metabolic derangements), immunosuppressive drugs also contribute significantly. This can either be through direct toxicity (e.g., posterior reversible encephalopathy syndrome from calcineurin inhibitors such as tacrolimus in the acute postoperative period) or by facilitating opportunistic infections in the months after transplantation. Other neurologic syndromes such as akinetic mutism and osmotic demyelination may also occur. While much of this neurologic dysfunction may be reversible if related to metabolic factors or drug toxicity (and the etiology is recognized and reversed), cases of multifocal cerebral infarction, hemorrhage, or infection may have poor outcomes. As transplant patients survive longer, delayed infections (such as progressive multifocal leukoencephalopathy) and post-transplant malignancies are increasingly reported. © 2017 Springer Science+Business Media New York


Olsen M.C.,Washington University in St. Louis
ASAIO Journal | Year: 2017

Extracorporeal membrane oxygenation (ECMO) has been reported as an alternative to cardiopulmonary bypass during lung transplantation. Reports in the literature have been limited to adult practice, in which ECMO has been associated with decreased pulmonary and renal complications, lower overall mortality, and lower in-hospital mortality. We present four pediatric lung transplantations performed on ECMO and discuss relevant perfusion management. Copyright © 2017 by the American Society for Artificial Internal Organs


Im S.,University of California at Merced | Moseley B.,Washington University in St. Louis
Proceedings of the Annual ACM-SIAM Symposium on Discrete Algorithms | Year: 2017

In this paper we consider minimizing the k-norms of flow time on a single machine offline using a preemptive scheduler for k ≥ 1. We show the first O(1)-approximation for the problem, improving upon the previous best O(log log P)-approximation by Bansal and Pruhs (FOCS 09 and SICOMP 14), where P is the ratio of the maximum job size to the minimum. Our main technical ingredient is a novel combination of quasi-uniform sampling and iterative rounding, which is of interest in its own right. Copyright © by SIAM.
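
For concreteness, in standard scheduling notation consistent with the abstract: if job j has release time r_j and completion time C_j, its flow time is F_j = C_j − r_j, and the objective is to minimize the k-norm

    \lVert F \rVert_k = \Big( \sum_j (C_j - r_j)^{k} \Big)^{1/k}, \qquad k \ge 1.

Here k = 1 gives total (equivalently, average) flow time, while larger k increasingly penalizes the worst-off jobs, interpolating toward maximum flow time.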


Katz J.I.,Washington University in St. Louis
Astrophysical Journal | Year: 2017

The emission of the white dwarf-M dwarf binary AR Sco is driven by the rapid synchronization of its white dwarf, rather than by accretion. Synchronization requires a magnetic field ∼100 Gauss at the M dwarf and ∼10⁸ Gauss at the white dwarf, larger than the fields of most intermediate polars but within the range of fields of known magnetic white dwarfs. The spindown power is dissipated in the atmosphere of the M dwarf, within the near zone of the rotating white dwarf's field, by magnetic reconnection, accelerating particles that produce the observed synchrotron radiation. The displacement of the optical maximum from conjunction may be explained either by dissipation in a bow wave as the white dwarf's magnetic field sweeps past the M dwarf or by a misaligned white dwarf rotation axis and oblique magnetic moment. In the latter case the rotation axis precesses with a period of decades, predicting a drift in the orbital phase of the optical maximum. Binaries whose emission is powered by synchronization may be termed synchronars, in analogy to magnetars. © 2017. The American Astronomical Society. All rights reserved.


Cross A.J.,Washington University in St. Louis | Skemer P.,Washington University in St. Louis
Journal of Geophysical Research: Solid Earth | Year: 2017

Dynamic recrystallization and phase mixing are considered to be important processes in ductile shear zone formation, as they collectively enable a permanent transition to the strain-weakening, grain-size sensitive deformation regime. While dynamic recrystallization is well understood, the underlying physical processes and timescales required for phase mixing remain enigmatic. Here, we present results from high-strain phase mixing experiments on calcite-anhydrite composites. A poorly mixed starting material was synthesized from fine-grained calcite and anhydrite powders. Samples were deformed in the Large Volume Torsion apparatus at 500°C and shear strain rates of 5×10⁻⁵ to 5×10⁻⁴ s⁻¹, to finite shear strains of up to γ=57. Microstructural evolution is quantified through analysis of backscattered electron images and electron backscatter diffraction data. During deformation, polycrystalline domains of the individual phases are geometrically stretched and thinned, causing an increase in the spatial density of interphase boundaries. At moderate shear strains (γ≥6), domains are so severely thinned that they become "monolayers" of only one or two grains in width and form a thin compositional layering. Monolayer formation is accompanied by a critical increase in the degree of grain boundary pinning and, consequently, grain-size reduction below the theoretical limit established by the grain-size piezometer or deformation mechanism field boundary. Ultimately, monolayers neck and disaggregate at high strains (17 < γ < 57) to complete the phase mixing process. This "geometric" phase mixing mechanism is consistent with observations of mylonites, where layer (i.e., foliation) formation is associated with strain localization, and layers are ultimately destroyed at the mylonite-ultramylonite transition. ©2017. American Geophysical Union.


Diringer M.,Washington University in St. Louis
Handbook of clinical neurology | Year: 2017

The brain operates in an extraordinarily intricate environment which demands precise regulation of electrolytes. Tight control over their concentrations and gradients across cellular compartments is essential and when these relationships are disturbed neurologic manifestations may develop. Perturbations of sodium are the electrolyte disturbances that most often lead to neurologic manifestations. Alterations in extracellular fluid sodium concentrations produce water shifts that lead to brain swelling or shrinkage. If marked or rapid they can result in profound changes in brain function which are proportional to the degree of cerebral edema or contraction. Adaptive mechanisms quickly respond to changes in cell size by either increasing or decreasing intracellular osmoles in order to restore size to normal. Unless cerebral edema has been severe or prolonged, correction of sodium disturbances usually restores function to normal. If the rate of correction is too rapid or overcorrection occurs, however, new neurologic manifestations may appear as a result of osmotic demyelination syndrome. Disturbances of magnesium, phosphate and calcium all may contribute to alterations in sensorium. Hypomagnesemia and hypocalcemia can lead to weakness, muscle spasms, and tetany; the weakness from hypophosphatemia and hypomagnesemia can impair respiratory function. Seizures can be seen in cases with very low concentrations of sodium, magnesium, calcium, and phosphate. © 2017 Elsevier B.V. All rights reserved.


Mardis E.R.,Washington University in St. Louis
Nature Protocols | Year: 2017

Recent advances in the field of genomics have largely been due to the ability to sequence DNA at increasing throughput and decreasing cost. DNA sequencing was first introduced in 1977, and next-generation sequencing technologies have been available only during the past decade, but the diverse experiments and corresponding analyses facilitated by these techniques have transformed biological and biomedical research. Here, I review developments in DNA sequencing technologies over the past 10 years and look to the future for further applications.


News Article | April 17, 2017
Site: www.prweb.com

LearnHowToBecome.org, a leading resource provider for higher education and career information, has analyzed more than a dozen metrics to rank Missouri’s best universities and colleges for 2017. Of the 40 four-year schools on the list, Washington University in St. Louis, Saint Louis University, Maryville University of Saint Louis, William Jewell College and Rockhurst University were the top five. Fourteen two-year schools also made the list, and State Fair Community College, Crowder College, Jefferson College, East Central College and State Technical College of Missouri were ranked as the best five. A full list of the winning schools is included below.

“The schools on our list have created high-quality learning experiences for students in Missouri, with career outcomes in mind,” said Wes Ricketts, senior vice president of LearnHowToBecome.org. “They’ve shown this through the certificates and degrees that they offer, paired with excellent employment services and a record of strong post-college earnings for grads.”

To be included on the “Best Colleges in Missouri” list, schools must be regionally accredited, not-for-profit institutions. Each college is also appraised on additional data that includes annual alumni salaries 10 years after entering college, employment services, student/teacher ratio, graduation rate and the availability of financial aid.

For complete details on each college, its individual scores, and the data and methodology used to determine the LearnHowToBecome.org “Best Colleges in Missouri” list, visit:

The Best Four-Year Colleges in Missouri for 2017 include:
Avila University
Baptist Bible College
Calvary Bible College and Theological Seminary
Central Methodist University-College of Liberal Arts and Sciences
College of the Ozarks
Columbia College
Culver-Stockton College
Drury University
Evangel University
Fontbonne University
Hannibal-LaGrange University
Harris-Stowe State University
Kansas City Art Institute
Lincoln University
Lindenwood University
Maryville University of Saint Louis
Midwestern Baptist Theological Seminary
Missouri Baptist University
Missouri Southern State University
Missouri State University-Springfield
Missouri University of Science and Technology
Missouri Valley College
Missouri Western State University
Northwest Missouri State University
Park University
Rockhurst University
Saint Louis University
Southeast Missouri State University
Southwest Baptist University
Stephens College
Truman State University
University of Central Missouri
University of Missouri-Columbia
University of Missouri-Kansas City
University of Missouri-St Louis
Washington University in St Louis
Webster University
Westminster College
William Jewell College
William Woods University

Missouri’s Best Two-Year Colleges for 2017 include:
Crowder College
East Central College
Jefferson College
Lake Career and Technical Center
Mineral Area College
Missouri State University - West Plains
Moberly Area Community College
North Central Missouri College
Ozarks Technical Community College
St. Charles Community College
State Fair Community College
State Technical College of Missouri
Texas County Technical College
Three Rivers Community College

About Us: LearnHowtoBecome.org was founded in 2013 to provide data- and expert-driven information about employment opportunities and the education needed to land the perfect career. Our materials cover a wide range of professions, industries and degree programs, and are designed for people who want to choose, change or advance their careers. We also provide helpful resources and guides that address social issues, financial aid and other special interests in higher education. Information from LearnHowtoBecome.org has proudly been featured by more than 700 educational institutions.


Patent
Washington University in St. Louis | Date: 2017-01-25

The present invention relates to mutant peptides of the E protein of the West Nile virus and other flaviviruses useful for discriminating flaviviral infections, as well as kits, methods and uses related thereto.


Botero C.A.,North Carolina State University | Botero C.A.,Washington University in St. Louis | Weissing F.J.,University of Groningen | Wright J.,Norwegian University of Science and Technology | Rubenstein D.R.,Columbia University
Proceedings of the National Academy of Sciences of the United States of America | Year: 2015

In an era of rapid climate change, there is a pressing need to understand how organisms will cope with faster and less predictable variation in environmental conditions. Here we develop a unifying model that predicts evolutionary responses to environmentally driven fluctuating selection and use this theoretical framework to explore the potential consequences of altered environmental cycles. We first show that the parameter space determined by different combinations of predictability and timescale of environmental variation is partitioned into distinct regions where a single mode of response (reversible phenotypic plasticity, irreversible phenotypic plasticity, bet-hedging, or adaptive tracking) has a clear selective advantage over all others. We then demonstrate that, although significant environmental changes within these regions can be accommodated by evolution, most changes that involve transitions between regions result in rapid population collapse and often extinction. Thus, the boundaries between response mode regions in our model correspond to evolutionary tipping points, where even minor changes in environmental parameters can have dramatic and disproportionate consequences on population viability. Finally, we discuss how different life histories and genetic architectures may influence the location of tipping points in parameter space and the likelihood of extinction during such transitions. These insights can help identify and address some of the cryptic threats to natural populations that are likely to result from any natural or human-induced change in environmental conditions. They also demonstrate the potential value of evolutionary thinking in the study of global climate change. © 2015, National Academy of Sciences. All rights reserved.


Repovs G.,University of Ljubljana | Barch D.M.,Washington University in St. Louis
Frontiers in Human Neuroscience | Year: 2012

A growing number of studies have reported altered functional connectivity in schizophrenia during putatively "task-free" states and during the performance of cognitive tasks. However, there have been few systematic examinations of functional connectivity in schizophrenia across rest and different task states to assess the degree to which altered functional connectivity reflects a stable characteristic or whether connectivity changes vary as a function of task demands. We assessed functional connectivity during rest and during three working memory loads of an N-back task (0-back, 1-back, 2-back) among: (1) individuals with schizophrenia (N = 19); (2) the siblings of individuals with schizophrenia (N = 28); (3) healthy controls (N = 10); and (4) the siblings of healthy controls (N = 17). We examined connectivity within and between four brain networks: (1) frontal-parietal (FP); (2) cingulo-opercular (CO); (3) cerebellar (CER); and (4) default mode (DMN). In terms of within-network connectivity, we found that connectivity within the DMN and FP increased significantly between resting state and 0-back, while connectivity within the CO and CER decreased significantly between resting state and 0-back. Additionally, we found that connectivity within both the DMN and FP was further modulated by memory load. In terms of between network connectivity, we found that the DMN became significantly more "anti-correlated" with the FP, CO, and CER networks during 0-back as compared to rest, and that connectivity between the FP and both CO and CER networks increased with memory load. Individuals with schizophrenia and their siblings showed consistent reductions in connectivity between both the FP and CO networks with the CER network, a finding that was similar in magnitude across rest and all levels of working memory load. These findings are consistent with the hypothesis that altered functional connectivity in schizophrenia reflects a stable characteristic that is present across cognitive states. © 2012 Repovš and Barch.
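
The within- and between-network connectivity summaries used in this and the following studies reduce, in the standard approach, to averaging entries of a region-by-region correlation matrix. A minimal sketch follows; the Fisher z-transform step and the toy region-to-network labels are conventional assumptions for illustration, not details taken from the paper.

    import numpy as np

    def network_connectivity(ts, labels):
        """Mean within/between-network connectivity from (timepoints, regions) BOLD data."""
        corr = np.corrcoef(ts.T)                      # region-by-region correlations
        z = np.arctanh(np.clip(corr, -0.999, 0.999))  # Fisher z-transform
        labels = np.asarray(labels)
        nets = sorted(set(labels))
        out = {}
        for i, a in enumerate(nets):
            for b in nets[i:]:
                mask = np.outer(labels == a, labels == b)
                if a == b:
                    mask &= ~np.eye(len(labels), dtype=bool)  # drop self-correlations
                out[(a, b)] = z[mask].mean()
        return out

    rng = np.random.default_rng(0)
    ts = rng.standard_normal((200, 6))  # toy data: 200 timepoints, 6 regions
    print(network_connectivity(ts, ["FP", "FP", "CO", "CO", "DMN", "DMN"]))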


Mamah D.,Washington University in St. Louis | Barch D.M.,Washington University in St. Louis | Repovs G.,University of Ljubljana
Journal of Affective Disorders | Year: 2013

Background: Bipolar disorder (BPD) and schizophrenia (SCZ) share clinical characteristics and genetic contributions. Functional dysconnectivity across various brain networks has been reported to contribute to the pathophysiology of both SCZ and BPD. However, research examining resting-state neural network dysfunction across multiple networks to understand the relationship between these two disorders is lacking. Methods: We conducted a resting-state functional connectivity fMRI study of 35 BPD and 25 SCZ patients, and 33 controls. Using previously defined regions-of-interest, we computed the mean connectivity within and between five neural networks: default mode (DM), fronto-parietal (FP), cingulo-opercular (CO), cerebellar (CER), and salience (SAL). Repeated measures ANOVAs were used to compare groups, adjusting false discovery rate to control for multiple comparisons. The relationship of connectivity with the SANS/SAPS, vocabulary and matrix reasoning was investigated using hierarchical linear regression analyses. Results: Decreased within-network connectivity was only found for the CO network in BPD. Across groups, connectivity was decreased between CO-CER (p<0.001), to a larger degree in SCZ than in BPD. In SCZ, there was also decreased connectivity in CO-SAL, FP-CO, and FP-CER, while BPD showed decreased CER-SAL connectivity. Disorganization symptoms were predicted by connectivity between CO-CER and CER-SAL. Discussion: Our findings indicate dysfunction in the connections between networks involved in cognitive and emotional processing in the pathophysiology of BPD and SCZ. Both similarities and differences in connectivity were observed across disorders. Further studies are required to investigate relationships of neural networks to more diverse clinical and cognitive domains underlying psychiatric disorders. © 2013 Elsevier B.V.


Carter C.S.,University of California at Davis | Barch D.M.,Washington University in St. Louis
Schizophrenia Bulletin | Year: 2012

The Cognitive Neuroscience Treatment Research to Improve Cognition in Schizophrenia initiative, funded by an R13 conference grant from the National Institute of Mental Health, has sought to facilitate the translation of measures from the basic science of cognition into practical brain-based tools to measure treatment effects on cognition in schizophrenia. In this overview article, we summarize the process and products of the sixth meeting in this series, which focused on the identification of promising imaging paradigms based on the measurement of cognitive evoked potentials (event-related potentials), on cognition-related time-frequency analyses of the electroencephalogram, and on functional magnetic resonance imaging. A total of 23 well-specified paradigms from cognitive neuroscience that measure cognitive functions previously identified as targets for treatment development were identified at the meeting as recommended for the further developmental work needed to validate and optimize them as biomarker measures. Individual paradigms are discussed in detail in 6 domain-based articles in this volume. Ongoing issues related to the development of these and other measures as valid, sensitive, and reliable measurement and assessment tools, as well as the steps necessary for the development of specific measures for use as biomarkers for treatment development and personalized medicine, are discussed. © 2011 The Author.


Stein P.K.,Washington University in St. Louis | Pu Y.,CardioNet
Sleep Medicine Reviews | Year: 2012

Heart rate (HR) is modulated by the combined effects of the sympathetic and parasympathetic nervous systems. Therefore, measurement of changes in HR over time (heart rate variability or HRV) provides information about autonomic functioning. HRV has been used to identify high-risk people, to understand the autonomic components of different disorders, and to evaluate the effects of different interventions. Since the signal required to measure HRV is already being collected on the electrocardiogram (ECG) channel of the polysomnogram (PSG), collecting data for research on HRV and sleep is straightforward, but applications have been limited. As reviewed here, HRV has been applied to understand autonomic changes during different sleep stages. It has also been applied to understand the effect of sleep-disordered breathing, periodic limb movements and insomnia both during sleep and during the daytime. HRV has been successfully used to screen people for possible referral to a Sleep Lab. It has also been used to monitor the effects of continuous positive airway pressure (CPAP). A novel HRV measure, cardiopulmonary coupling (CPC), has been proposed for assessing sleep quality. Evidence also suggests that HRV collected during a PSG can be used in risk stratification models, at least for older adults. Caveats for accurate interpretation of HRV are also presented. © 2011 Elsevier Ltd.
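
For background, two of the most widely used time-domain HRV statistics can be computed directly from the interbeat (RR) interval series extracted from the ECG channel. The definitions below are the conventional SDNN and RMSSD; the toy interval values are illustrative only.

    import numpy as np

    def sdnn(rr_ms):
        """SDNN: standard deviation of normal-to-normal intervals (overall HRV)."""
        return np.std(rr_ms, ddof=1)

    def rmssd(rr_ms):
        """RMSSD: root mean square of successive differences (short-term, vagally mediated HRV)."""
        return np.sqrt(np.mean(np.diff(rr_ms) ** 2))

    rr = np.array([812., 798., 841., 827., 805., 836.])  # toy RR intervals in milliseconds
    print(sdnn(rr), rmssd(rr))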


Gilmore A.W.,Washington University in St. Louis | Nelson S.M.,University of Texas at Dallas | McDermott K.B.,Washington University in St. Louis
Trends in Cognitive Sciences | Year: 2015

The manner by which the human brain learns and recognizes stimuli is a matter of ongoing investigation. Through examination of meta-analyses of task-based functional MRI and resting state functional connectivity MRI, we identified a novel network strongly related to learning and memory. Activity within this network at encoding predicts subsequent item memory, and at retrieval differs for recognized and unrecognized items. The direction of activity flips as a function of recent history: from deactivation for novel stimuli to activation for stimuli that are familiar due to recent exposure. We term this network the 'parietal memory network' (PMN) to reflect its broad involvement in human memory processing. We provide a preliminary framework for understanding the key functional properties of the network. © 2015 Elsevier Ltd.


Griffin I.J.,University of California at Davis | Cooke R.J.,Washington University in St. Louis
Early Human Development | Year: 2012

The long-term effects of prematurity, early diet and catch-up growth on metabolic risk and body adiposity are of increasing interest to neonatologists. Poor growth is known to be associated with poorer neuro-developmental outcome, but concern exists that increased rates of "catch-up" (or "recovery") growth may be associated with increased adiposity and the later development of metabolic syndrome. In this manuscript we review the published data on body composition in preterm infants, and present new analyses of body adiposity in preterm infants during the first 12-15 months of life, and the effect of growth rate (weight gain) on body adiposity. We conclude that although preterm infants have increased adiposity at term corrected age, they generally have lower body fat than their term peers during the rest of the first 12-15 months of life. Although more rapid "catch-up" growth in preterm infants during the first year of life is associated with greater body fatness than slower rates of growth, these higher rates of growth lead to body composition more similar to that of the term-born infant than do slower rates of growth. Although more studies are needed to determine whether these short-term increases or the longer-term decreases in adiposity modify the risk of chronic diseases such as diabetes mellitus, hypertension or other components of the metabolic syndrome, the widely held concerns that preterm babies have greater adiposity than their term peers, and that this is worsened by greater amounts of catch-up growth, are not supported by the available evidence. © 2012 Elsevier Ireland Ltd.


Clifford D.B.,Washington University in St. Louis | DeLuca A.,Instituto Of Clinica Delle Malattie Infettive | Simpson D.M.,Mount Sinai Medical School | Arendt G.,Heinrich Heine University Düsseldorf
The Lancet Neurology | Year: 2010

Background: Treatment of multiple sclerosis with natalizumab is complicated by rare occurrence of progressive multifocal leukoencephalopathy (PML). Between July, 2006, and November, 2009, there were 28 cases of confirmed PML in patients with multiple sclerosis treated with natalizumab. Assessment of these clinical cases will help to inform future therapeutic judgments and improve the outcomes for patients. Recent developments: The risk of PML increases with duration of exposure to natalizumab over the first 3 years of treatment. No new cases occurred during the first two years of natalizumab marketing but, by the end of November, 2009, 28 cases had been confirmed, of which eight were fatal. The median treatment duration to onset of symptoms was 25 months (range 6-80 months). The presenting symptoms most commonly included changes in cognition, personality, and motor performance, but several cases had seizures as the first clinical event. Although PML has developed in patients without any previous use of disease-modifying therapies for multiple sclerosis, previous therapy with immunosuppressants might increase risk. Clinical diagnosis by use of MRI and detection of JC virus in the CSF was established in all but one case. Management of PML has routinely used plasma exchange (PLEX) or immunoabsorption to hasten clearance of natalizumab and shorten the period in which natalizumab remains active (usually several months). Exacerbation of symptoms and enlargement of lesions on MRI have occurred within a few days to a few weeks after PLEX, indicative of immune reconstitution inflammatory syndrome (IRIS). This syndrome seems to be more common and more severe in patients with natalizumab-associated PML than it is in patients with HIV-associated PML. Where next?: Diagnosis of natalizumab-associated PML requires optimised clinical vigilance, reliable and sensitive PCR testing of the JC virus, and broadened criteria for recognition of PML lesions by use of MRI, including contrast enhancement. Optimising the management of IRIS reactions will be needed to improve outcomes. Predictive markers for patients at risk for PML must be sought. It is crucial to monitor the risk incurred during use of natalizumab beyond 3 years. © 2010 Elsevier Ltd. All rights reserved.


Huang S.,Washington University in St. Louis | Kang W.,Peking University | Yang L.,Washington University in St. Louis
Applied Physics Letters | Year: 2013

We report first-principles results on the electronic structure of silicene. For planar and simply buckled silicenes, we confirm their zero-gap nature and show a significant renormalization of their Fermi velocity by including many-electron effects. However, the other two recently proposed silicene structures exhibit a finite bandgap, indicating that they are gapped semiconductors instead of expected Dirac-fermion semimetals. This finite bandgap is preserved with the Ag substrate included. Moreover, our GW calculation reveals enhanced many-electron effects in these two-dimensional structures. Finally, the bandgap of the latter two structures can be tuned in a wide range by applying strain. © 2013 AIP Publishing LLC.


Faccio R.,Washington University in St. Louis
Ageing Research Reviews | Year: 2011

As the skeleton ages, the balanced formation and resorption of normal bone remodeling is lost, and bone loss predominates. The osteoclast is the specialized cell that is responsible for bone resorption. It is a highly polarized cell that must adhere to the bone surface and migrate along it while resorbing, and cytoskeletal reorganization is critical. Podosomes, highly dynamic actin structures, mediate osteoclast motility. Resorbing osteoclasts form a related actin complex, the sealing zone, which provides the boundary for the resorptive microenvironment. Similar to podosomes, the sealing zone rearranges itself to allow continuous resorption while the cell is moving. The major adhesive protein controlling the cytoskeleton is αvβ3 integrin, which collaborates with the growth factor M-CSF and the ITAM receptor DAP12. In this review, we discuss the signaling complexes assembled by these molecules at the membrane, and their downstream mediators that control OC motility and function via the cytoskeleton. © 2009 Elsevier Ireland Ltd.


Anticevic A.,Yale University | Repovs G.,University of Ljubljana | Barch D.M.,Washington University in St. Louis
Schizophrenia Bulletin | Year: 2013

Substantial evidence implicates working memory (WM) as a core deficit in schizophrenia (SCZ), purportedly due to primary deficits in dorsolateral prefrontal cortex functioning. Recent findings suggest that SCZ is also associated with abnormalities in suppression of certain regions during cognitive engagement, namely the default mode system, which may further contribute to WM pathology. However, no study has systematically examined activation and suppression abnormalities across both encoding and maintenance phases of WM in SCZ. Twenty-eight patients and 24 demographically matched healthy subjects underwent functional magnetic resonance imaging at 3T while performing a delayed match-to-sample WM task. Groups were accuracy matched to rule out performance effects. Encoding load was identical across subjects to facilitate comparisons across WM phases. We examined activation differences using an assumed-model approach at the whole-brain level and within meta-analytically defined WM areas. Despite matched performance, we found regions showing less recruitment during encoding and maintenance in SCZ subjects. Furthermore, we identified 2 areas closely matching the default system that SCZ subjects failed to deactivate across WM phases. Lastly, activation in prefrontal regions predicted the degree of deactivation for healthy but not SCZ subjects. Current results replicate and extend prefrontal recruitment abnormalities across WM phases in SCZ. Results also indicate deactivation abnormalities across WM phases, possibly due to inefficient prefrontal recruitment. Such regional deactivation may be critical for suppressing sources of interference during WM trace formation. Thus, deactivation deficits may constitute an additional source of impairments, which needs to be further characterized for a complete understanding of WM pathology in SCZ. © 2011 The Author.


Coffin Talbot J.,University of Oregon | Johnson S.L.,Washington University in St. Louis | Kimmel C.B.,University of Oregon
Development | Year: 2010

The ventrally expressed secreted polypeptide endothelin1 (Edn1) patterns the skeleton derived from the first two pharyngeal arches into dorsal, intermediate and ventral domains. Edn1 activates expression of many genes, including hand2 and Dlx genes. We wanted to know how hand2/Dlx genes might generate distinct domain identities. Here, we show that differential expression of hand2 and Dlx genes delineates domain boundaries before and during cartilage morphogenesis. Knockdown of the broadly expressed genes dlx1a and dlx2a results in both dorsal and intermediate defects, whereas knockdown of three intermediate-domain restricted genes dlx3b, dlx4b and dlx5a results in intermediate-domain-specific defects. The ventrally expressed gene hand2 patterns ventral identity, in part by repressing dlx3b/4b/5a. The jaw joint is an intermediate-domain structure that expresses nkx3.2 and a more general joint marker, trps1. The jaw joint expression of trps1 and nkx3.2 requires dlx3b/4b/5a function, and expands in hand2 mutants. Both hand2 and dlx3b/4b/5a repress dorsal patterning markers. Collectively, our work indicates that the expression and function of hand2 and Dlx genes specify major patterning domains along the dorsoventral axis of zebrafish pharyngeal arches.


Bostrom K.I.,University of California at Los Angeles | Rajamannan N.M.,Northwestern University | Towler D.A.,Washington University in St. Louis
Circulation Research | Year: 2011

Vascular calcification increasingly afflicts our aging, dysmetabolic population. Once considered only a passive process of dead and dying cells, vascular calcification has now emerged as a highly regulated form of biomineralization organized by collagenous and elastin extracellular matrices. During skeletal bone formation, paracrine epithelial-mesenchymal and endothelial-mesenchymal interactions control osteochondrocytic differentiation of multipotent mesenchymal progenitor cells. These paracrine osteogenic signals, mediated by potent morphogens of the bone morphogenetic protein and wingless-type MMTV integration site family member (Wnt) superfamilies, are also active in the programming of arterial osteoprogenitor cells during vascular and valve calcification. Inflammatory cytokines, reactive oxygen species, and oxylipids, which are increased in the clinical settings of atherosclerosis, diabetes, and uremia that promote arteriosclerotic calcification, elicit the ectopic vascular activation of osteogenic morphogens. Specific extracellular and intracellular inhibitors of bone morphogenetic protein-Wnt signaling have been identified as contributing to the regulation of osteogenic mineralization during development and disease. These inhibitory pathways and their regulators afford the development of novel therapeutic strategies to prevent and treat valve and vascular sclerosis. © 2011 American Heart Association, Inc.


Cole M.W.,Rutgers University | Cole M.W.,Washington University in St. Louis | Repovs G.,University of Ljubljana | Anticevic A.,Yale University
Neuroscientist | Year: 2014

Recent findings suggest the existence of a frontoparietal control system consisting of flexible hubs that regulate distributed systems (e.g., visual, limbic, motor) according to current task goals. A growing number of studies are reporting alterations of this control system across a striking range of mental diseases. We suggest this may reflect a critical role for the control system in promoting and maintaining mental health. Specifically, we propose that this system implements feedback control to regulate symptoms as they arise (e.g., excessive anxiety reduced via regulation of the amygdala), such that an intact control system is protective against a variety of mental illnesses. Consistent with this possibility, recent results indicate that several major mental illnesses involve altered brain-wide connectivity of the control system, likely compromising its ability to regulate symptoms. These results suggest that this "immune system of the mind" may be an especially important target for future basic and clinical research. © The Author(s) 2014.


Anticevic A.,Washington University in St. Louis | Repovs G.,University of Ljubljana | Barch D.M.,Washington University in St. Louis
Schizophrenia Bulletin | Year: 2012

Emotional abnormalities are a critical clinical feature of schizophrenia (SCZ), but complete understanding of their underlying neuropathology is lacking. Numerous studies have examined amygdala activation in response to affective stimuli in SCZ, but no consensus has emerged. However, behavioral studies examining 'in-the-moment' processing of affect have suggested intact emotional processing in SCZ. To examine which aspects of emotional processing may be impaired in SCZ, we combined behavior and neuroimaging to investigate the effects of aversive stimuli during minimal cognitive engagement at the level of behavior, amygdala recruitment, and whole-brain task-based functional connectivity (tb-fcMRI), because impairments may manifest only at the level of across-region functional integration. Twenty-eight patients and 24 matched controls underwent rapid event-related fMRI at 3 T while performing a simple perceptual decision task with negative or neutral distraction. We examined perceptual decision slowing, amygdala activation, and whole-brain amygdala tb-fcMRI, while ensuring that group signal-to-noise profiles were matched. Following scanning, subjects rated all images for experienced arousal and valence. No significant group differences emerged for negative vs neutral reaction time, emotional ratings, or amygdala activation. However, even in the absence of behavioral or activation differences, SCZ subjects demonstrated significantly weaker amygdala-prefrontal cortical coupling, specifically during negative distraction. Whereas in-the-moment perception, behavioral response, and amygdala recruitment to negative stimuli during minimal cognitive load seem to be intact, there is evidence of aberrant amygdala-prefrontal integration in SCZ subjects. Such abnormalities may prove critical for understanding disturbances in patients' ability to use affective cues when guiding the higher-level cognitive processes needed in social interactions. © 2012 The Author.


Yonelinas A.P.,University of California at Davis | Jacoby L.L.,Washington University in St. Louis
Memory and Cognition | Year: 2012

The process-dissociation procedure was developed to separate the controlled and automatic contributions of memory. It has spawned the development of a host of new measurement approaches and has been applied across a broad range of fields in the behavioral sciences, ranging from studies of memory and perception to neuroscience and social psychology. Although it has not been without its shortcomings or critics, its growing influence attests to its utility. In the present article, we briefly review the factors motivating its development, describe some of the early applications of the general method, and review the literature examining its underlying assumptions and boundary conditions. We then highlight some of the specific issues that the methods have been applied to and discuss some of the more recent applications of the procedure, along with future directions. © 2012 Psychonomic Society, Inc.
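For readers new to the method, the procedure's core arithmetic is compact enough to state outright. Below is a minimal sketch in Python of the standard estimating equations (following Jacoby's original logic); the performance figures in the example are made up for illustration.

    # Core process-dissociation estimates. Under the procedure's independence
    # assumption:
    #   P(inclusion) = R + A * (1 - R)   # controlled OR automatic memory succeeds
    #   P(exclusion) = A * (1 - R)       # automatic influence despite intent to exclude
    # so R = P(inclusion) - P(exclusion), and A = P(exclusion) / (1 - R).

    def process_dissociation(p_inclusion: float, p_exclusion: float):
        """Estimate controlled (R) and automatic (A) contributions of memory."""
        r = p_inclusion - p_exclusion
        a = p_exclusion / (1 - r) if r < 1 else float("nan")
        return r, a

    # Made-up example: 80% of studied items produced under inclusion
    # instructions, 30% produced under exclusion instructions.
    r, a = process_dissociation(0.80, 0.30)
    print(f"controlled (R) = {r:.2f}, automatic (A) = {a:.2f}")  # R = 0.50, A = 0.60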


Krug M.K.,Washington University in St. Louis | Carter C.S.,University of California at Davis
Brain Research | Year: 2012

In classic Stroop paradigms, increasing the proportion of control-demanding incongruent trials results in strategic adjustments in behavior and implementation of cognitive control processes. We manipulated expectancy for incongruent trials in an emotional facial Stroop task to investigate the behavioral and neural effects of proportion manipulation in a cognitively demanding task with emotional stimuli. Subjects performed a high expectancy (HE) task (65% incongruent trials) and a low expectancy (LE) task (35% incongruent trials) during functional magnetic resonance imaging (fMRI). As in standard Stroop tasks, behavioral interference was reduced in the emotional facial Stroop HE task compared with the LE task. Functional MRI data revealed a switch in cognitive control strategy, from reactive, event-related activation of a medial and lateral cognitive control network and right amygdala in the LE task to proactive, sustained activation of right dorsolateral prefrontal cortex (DLPFC) in the HE task. Higher trait anxiety was associated with impairment (slower response time and decreased accuracy) as well as reduced activity in left ventrolateral prefrontal cortex, anterior insula, and orbitofrontal cortex in the HE task on high-conflict trials with task-irrelevant emotional information, suggesting that individual differences in anxiety may be associated with expectancy-related strategic control adjustments, particularly when emotional stimuli must be ignored. © 2012 Elsevier B.V.


News Article | February 22, 2017
Site: www.csmonitor.com

In the increasingly divisive political atmosphere, many Americans appear to be aligning themselves as if ready for battle. But in the science community, some are arming themselves for conversation rather than a fight. As hundreds of scientists across disciplines gathered in Boston, Mass., for the annual meeting of the American Association for the Advancement of Science (AAAS) last week, many grappled with how to bridge the growing divide separating scientific consensus from public understanding and policy discussions. For some scientists, the widening gulf is a rallying call to demand respect for science and evidence-based decision-making from policymakers. For others, it underscores the need to better understand how the gap formed and to find new ways to bridge it. These scientists say the emerging narrative that pits an "educated elite" against "ignorant masses" is overly simplistic and counterproductive. Science doesn't solely belong to scientists, and suggesting that only credentialed researchers are smart enough to understand its implications and engage with it is fundamentally flawed, suggests Rush Holt, a physicist and the chief executive officer of AAAS. "We probably have ourselves to blame – scientists," he told The Christian Science Monitor in an interview ahead of the meeting. "We've allowed a gap to form, to even widen, between those who do science and those who don't. So people who don't do science say, 'Well, science is what scientists do,' rather than saying, 'It's a way of gathering and evaluating evidence that I, too, can use.' " Some scientists have suggested that the problem is an educational one. Those who disregard science and scientific consensus as not for them simply don't have the knowledge – the facts, according to this thinking. And, as Dietram Scheufele, a professor of science communication at the University of Wisconsin-Madison, pointed out in a talk at the AAAS meeting, in the current "fake news panic" that mentality can fuel an impression that "if they just had the correct facts, they could make better decisions." That notion, referred to as the "knowledge deficit hypothesis" in academic circles, is problematic, Dr. Scheufele said. It bestows a sort of responsibility and expertise on those in the know to impart knowledge to those who are not, and it denies that the lay public has anything to contribute to the conversation. That idea, as Asheley Landrum, a cognitive scientist at the Annenberg Public Policy Center at the University of Pennsylvania, explained in a talk at the AAAS meeting, suggests that "any public skepticism or negative attitudes toward science is due to the fact that people just don't know enough and that if they only knew more, that they would accept it." But studies testing this theory have shown that added science knowledge only slightly increases subjects' acceptance of scientific consensus on polarized issues, such as climate change. This suggests that it's not necessarily that people don't know or understand what experts are saying on a topic, Dr. Landrum said. "They just choose not to align with it." And that choice may have more to do with worldview than any active dismissal of the scientific perspective. According to Dan Kahan, a psychology professor at Yale Law School, people stick with their tribe and align their views on a scientific topic according to their political, religious, or other identity. And, he finds, this is true for both liberals and conservatives, Republicans and Democrats.
For example, someone who is liberal is more likely to dismiss information that challenges the liberal perspective on an issue, whether or not it is factual. Similarly, they seek out news reports and data that are in line with their own pre-existing views. Landrum suggests that one way to cut through this divide might be simply to pique people's interest in learning about science for the sake of their own curiosity. She posits the "curiosity deficit hypothesis" as the real driver behind polarization of scientific knowledge. The idea is that someone who is motivated to learn more about a scientific topic for personal satisfaction rather than a specific utility will be more open to scientific knowledge that might contradict their previously held viewpoint. Although Landrum has yet to work out how to spark someone's curiosity, she said the goal is to eliminate the polarization of science, the sense that there are two conflicting options, so people are more open to understanding what scientists are reporting – whether or not it aligns with what their political, religious, or other kind of tribes are asserting. Recognizing that anyone can think scientifically, and that science isn't just about having knowledge but is a way of gaining it, could be an important way to bridge that gap as well, said AAAS's Dr. Holt. "You don't have to wear a lab coat to be able to ask questions so that they can be answered empirically and verifiably." People in all walks of life employ the scientific method in their daily routines, he notes. Mechanics use it to diagnose engine problems, bakers use it to perfect their confections, and truckers use it to determine the most efficient routes. In Asia, some rice farmers are so knowledgeable about their crop and the ecology of the area that they are referred to as expert farmers, Barbara Schaal, an evolutionary biologist at Washington University in St. Louis and president of AAAS, told the Monitor ahead of the meeting. These farmers didn't go to school to study nutrient density or soil composition or agricultural hydrology. But that doesn't stop them from using science to figure out the best way to grow their crop. Dr. Schaal observed one farmer, who had discovered a genetic mutation in his field, conduct an experiment to figure out why his rice had turned out purple instead of white. "He was a rice farmer, and he was curious," Schaal said. And as a result, he used the scientific process without even knowing it. But scientists with PhDs and published papers are unsettled by terms like "fake news" and "alternative facts" appearing in dialogues today. To some of them, expertise and consensus that have been years in the making are being undermined by a growing trend toward doubting scientists and scientific evidence. "We live in a world where people are trying to silence facts," Naomi Oreskes, a professor of the history of science at Harvard University, told the audience during a speech at the AAAS annual meeting. "We need to speak for facts because facts don't speak for themselves." Science is supposed to inform policy, providing evidence so that policymakers can make informed decisions, Jacquelyn Gill, a paleoecologist at the University of Maine, told the Monitor. But if there is disagreement over the science itself, rather than the policy implications of the science, that undermines that relationship, she said. Dr. Gill and others have decided to rally and to march as a way of drawing attention to the importance of science in our society.
This is not to be confused with advocating for more funding or support for scientists per se, she explains. "I'm not interested in a scientists' march," Gill told the Monitor. "What I'm interested in is a group of people that stands up for science, science as evidence-based decision-making, science as publicly accessible, transparent. Science for everyone." She and other scientists in Boston for the meeting spoke at a rally in Copley Square on Sunday, timed to coincide with the meeting. There is a "March for Science" planned for Earth Day (April 22) in Washington, D.C., as well. Dr. Oreskes also spoke at the rally, which was intended to generate energy in support of science, scientific principles, and the conditions necessary for science to be conducted – including open scientific dialogue across international boundaries. At the rally she told the crowd, "It's not political to defend the integrity of facts." An open event like Sunday's rally invites a variety of political and activist expression, and it is difficult to control the message of such a demonstration. Some of the signs toted by rally attendees aligned with Gill's sentiment of unity and celebrating science, with messages like "Science builds bridges" and "Science is for everyone." But some signs were more overtly political, with messages like "Real Science, Fake President" or "Impeach." Although such rallies might get the attention of policymakers and the public, the language used could drive a wedge further between scientists and non-scientists. Kahan cautioned against divisive language. Rallying calls such as "Make America Smart Again," and calling those who don't trust science "dummies," evoke the same identity-based resistance, he said in one of his talks at the meeting. Instead, he suggests that personal connections and less antagonistic dialogues will be more productive in bridging gaps on polarized science issues. Connecting with others, rather than lecturing them, is key to unifying, Scheufele said. "We need to shift from the communication of science to the communication about science."


News Article | December 2, 2016
Site: astrobiology.com

Pluto is thought to possess a subsurface ocean, which is not so much a sign of water as it is a tremendous clue that other dwarf planets in deep space also may contain similarly exotic oceans, naturally leading to the question of life, said one co-investigator with NASA's New Horizons mission to Pluto and the Kuiper Belt. William McKinnon, professor of Earth and planetary sciences in Arts & Sciences at Washington University in St. Louis and a co-author on two of four new Pluto studies published Dec. 1 in Nature, argues that beneath the heart-shaped region on Pluto known as Sputnik Planitia there lies an ocean laden with ammonia. The presence of the pungent, colorless liquid helps to explain not only Pluto's orientation in space but also the persistence of the massive, ice-capped ocean that other researchers call "slushy" but that McKinnon prefers to depict as syrupy. Using computer models along with topographical and compositional data culled from the New Horizons spacecraft's July 2015 flyby of Pluto, McKinnon led a study on Sputnik Planitia's churning nitrogen ice surface that appeared this past June in Nature. He is also an author on the recently released study regarding the reorientation of Pluto caused by this subsurface ocean, which is some 600 miles wide and more than 50 miles thick. "In fact, New Horizons has detected ammonia as a compound on Pluto's big moon, Charon, and on one of Pluto's small moons. So it's almost certainly inside Pluto," McKinnon said. "What I think is down there in the ocean is rather noxious, very cold, salty and very ammonia-rich -- almost a syrup. "It's no place for germs, much less fish or squid, or any life as we know it," he added. "But as with the methane seas on Titan -- Saturn's main moon -- it raises the question of whether some truly novel life forms could exist in these exotic, cold liquids." As humankind explores deeper into the Kuiper Belt and farther from Earth, McKinnon foresees the possible discovery of more such subsurface seas and more potential for exotic life. "The idea is that bodies of Pluto's scale, of which there are more than one out there in the Kuiper Belt, could all have these kinds of oceans. But they'd be very exotic compared to what we think of as an ocean," McKinnon said. "Life can tolerate a lot of stuff: It can tolerate a lot of salt, extreme cold, extreme heat, etc. But I don't think it can tolerate the amount of ammonia Pluto needs to prevent its ocean from freezing -- ammonia is a superb antifreeze. Not that ammonia is all bad. On Earth, microorganisms in the soil fix nitrogen to ammonia, which is important for making DNA and proteins and such. "If you're going to talk about life in an ocean that's completely covered with an ice shell, it seems most likely that the best you could hope for is some extremely primitive kind of organism. It might even be pre-cellular, like we think the earliest life on Earth was." The newly published research delves into the creation -- likely by a 125-mile-wide Kuiper Belt object striking Pluto more than 4 billion years ago -- of the basin that includes Sputnik Planitia. The collapse of the huge crater lifts Pluto's subsurface ocean, and the dense water -- combined with dense surface nitrogen ice that fills in the hole -- forms a huge mass excess that causes Pluto to tip over, reorienting itself with respect to its big moon. But the ocean uplift won't last if warm water ice at the base of the covering ice shell can flow and adjust in the manner of glaciers on Earth.
Add enough ammonia to the water, and it can chill to incredibly cold temperatures (down to minus 145 degrees Fahrenheit) and still be liquid, even if quite viscous, like chilled pancake syrup. At these temperatures, water ice is rigid, and the uplifted ocean becomes permanent. "All of these ideas about an ocean inside Pluto are credible, but they are inferences, not direct detections," McKinnon said, sounding the call for a follow-up mission. "If we want to confirm that such an ocean exists, we will need gravity measurements or subsurface radar sounding, all of which could be accomplished by a future orbiter mission to Pluto. It's up to the next generation to pick up where New Horizons left off!" Reference: "Reorientation of Sputnik Planitia Implies a Subsurface Ocean on Pluto," F. Nimmo et al., 2016 Dec. 1, Nature [http://www.nature.com/nature/journal/v540/n7631/full/nature20148.html]. Image: View of Pluto with color-coded topography as measured by NASA's New Horizons spacecraft. Purple and blue are low and yellow and red are high, and the informally named Sputnik Planitia stands out at top as a broad, 1300 km- (800 mile-) wide, 2.5 km- (1.5 mile-) deep elliptical basin, most likely the site of an ancient impact on Pluto. New Horizons data imply that deep beneath this nitrogen-ice filled basin is an ocean of dense, salty, ammonia-rich water. (Photo: P. M. Schenk LPI / JHUAPL / SwRI / NASA)


News Article | April 27, 2016
Site: www.nature.com

It is precision medicine taken to the extreme: cancer-fighting vaccines that are custom designed for each patient according to the mutations in their individual tumours. With early clinical trials showing promise, that extreme could one day become commonplace — but only if drug developers can scale up and speed up the production of their tailored medicines. The topic was front and centre at the American Association for Cancer Research (AACR) annual meeting in New Orleans, Louisiana, on 16–20 April. Researchers there described early data from clinical trials suggesting that personalized vaccines can trigger immune responses against cancer cells. Investors seem optimistic that those results will translate into benefits for patients; over the past year, venture capitalists have pumped cash into biotechnology start-ups that are pursuing the approach. But some researchers worry that the excitement is too much, too soon for an approach that still faces many technical challenges. “What I do really puzzle at is the level of what I would call irrational exuberance,” says Drew Pardoll, a cancer immunologist at Johns Hopkins University in Baltimore, Maryland. The concept of a vaccine to treat cancer has intrinsic appeal. Some tumour proteins are either mutated or expressed at different levels than in normal tissue. This raises the possibility that the immune system could recognize these unusual proteins as foreign — especially if it were alerted to their presence by a vaccine containing fragments of the mutated protein. The immune system’s army of T cells could then seek out and destroy cancer cells bearing the protein. Decades of research into cancer-treatment vaccines have thus far yielded disappointing clinical trial results, but recent advances — including a suite of drugs that may amplify the effects of cancer vaccines — have rekindled hope for the field. And DNA sequencing of tumour genomes has revealed a staggering diversity of mutations, producing proteins that could serve as ‘antigens’ by alerting the immune system. Last year, researchers reported that they had triggered an immune response in three patients with melanoma by administering a vaccine tailored to their potential tumour antigens [1]. The vaccines' effects on tumour growth are not yet clear, but by the end of 2015, several companies had announced their intention to enter the field. Gritstone Oncology, a start-up firm in Emeryville, California, raised US$102 million to pursue the approach, and Neon Therapeutics of Cambridge, Massachusetts, raised $55 million. A third company, Caperna, spun out of a prominent biotechnology company called Moderna Therapeutics, also in Cambridge. Academic groups are also moving quickly. At the AACR meeting, Robert Schreiber of Washington University in St. Louis described six ongoing studies at his institution in cancers ranging from melanoma to pancreatic cancer. Cancer researcher Catherine Wu of the Dana-Farber Cancer Institute in Boston, Massachusetts, also presented data from a trial in melanoma, showing signs of T-cell responses to the vaccine. But it takes Wu’s team about 12 weeks to generate a vaccine, and the Washington University team needs about 8 weeks. That could limit the treatment to slow-growing cancers, says Wu. There is also a reason that so many researchers choose melanoma for proof-of-principle trials. Melanoma tumours tend to harbour many mutations — sometimes thousands — which provide scientists with ample opportunity to select those that may serve as antigens.
Some researchers worry that tumours with fewer mutations may not be as suitable for personalized vaccines. But Schreiber notes that researchers have been able to design a vaccine for a woman with the brain tumour glioblastoma — which often has relatively few mutations. In that case, however, the tumour had many mutations, some of which may have been caused by her previous cancer treatment. The number of potential antigens could be crucial. At the AACR meeting, Ton Schumacher, an immunologist at the Netherlands Cancer Institute in Amsterdam, noted that many of the mutant proteins that his group has found are not required for tumour survival. As a result, a tumour could maintain its cancerous lifestyle but become resistant to the vaccine if the proteins used to design the vaccine mutate again. “We will need to attack tumours from many different sides,” he says. Pardoll, meanwhile, is concerned that the field is shifting too quickly to the personalized-vaccine approach and leaving behind decades of research on antigens that might be shared across tumours — an approach that has not been borne out in clinical trials thus far, but would be much simpler to manufacture and deploy on a large scale. “I will be the happiest person in the world to be proven wrong on these,” he says of personalized vaccines. “But I think one has to nonetheless be cognizant of where the challenges are.”
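The winnowing step these teams automate (picking which of a tumour's mutations are worth vaccinating against) can be caricatured in a few lines. In the Python sketch below, every field name, peptide, and threshold, including the 500 nM binding cutoff, is an illustrative assumption, not any group's actual pipeline; real workflows depend on tumour/normal sequencing, HLA typing, and dedicated binding predictors.

    from dataclasses import dataclass

    @dataclass
    class MutantPeptide:
        sequence: str           # mutated peptide fragment (hypothetical)
        binding_ic50_nm: float  # predicted MHC class I affinity; lower = stronger
        expression_tpm: float   # tumour expression of the parent gene

    def rank_candidates(peptides, ic50_cutoff=500.0, min_expression=1.0):
        """Keep expressed peptides predicted to bind MHC; strongest binders first."""
        keep = [p for p in peptides
                if p.binding_ic50_nm <= ic50_cutoff
                and p.expression_tpm >= min_expression]
        return sorted(keep, key=lambda p: p.binding_ic50_nm)

    candidates = [
        MutantPeptide("SIINFEKL", 32.0, 14.2),
        MutantPeptide("GILGFVFTL", 820.0, 55.0),  # predicted too weak a binder
        MutantPeptide("KVAELVHFL", 110.0, 0.3),   # barely expressed
    ]
    for p in rank_candidates(candidates):
        print(p.sequence, p.binding_ic50_nm)      # only SIINFEKL survives the filter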


News Article | October 12, 2016
Site: www.sciencenews.org

Neandertals are the comeback kids of human evolution. A mere decade ago, the burly, jut-jawed crowd was known as a dead-end species that lost out to us, Homo sapiens. But once geneticists began extracting Neandertal DNA from fossils and comparing it with DNA from present-day folks, the story changed. Long-gone Neandertals rode the double helix express back to evolutionary relevance as bits of their DNA turned up in the genomes of living people. A molecular window into interbreeding between Neandertals and ancient humans was suddenly flung open. Thanks to ancient hookups, between 20 and 35 percent of Neandertals’ genes live on in various combinations from one person to another. About 1.5 to 4 percent of DNA in modern-day non-Africans’ genomes comes from Neandertals, a population that died out around 40,000 years ago. Even more surprising, H. sapiens’ Stone Age dalliances outside their own kind weren’t limited to Neandertals. Ancient DNA shows signs of interbreeding between now-extinct Neandertal relatives known as Denisovans and ancient humans. Denisovans’ DNA legacy still runs through native populations in Asia and the Oceanic islands. Between 1.9 and 3.4 percent of present-day Melanesians’ genes can be traced to Denisovans (SN Online: 3/17/16). Other DNA studies finger unknown, distant relatives of Denisovans as having interbred with ancestors of native Australians and Papuans (see "Single exodus from Africa gave rise to today’s non-Africans"). Genetic clues also suggest that Denisovans mated with European Neandertals. These findings have renewed decades-old debates about the evolutionary relationship between humans and extinct members of our evolutionary family, collectively known as hominids. Conventional wisdom that ancient hominid species living at the same time never interbred or, if they did, produced infertile offspring no longer holds up. But there is only so much that can be inferred from the handful of genomes that have been retrieved from Stone Age individuals so far. DNA from eons ago offers little insight into how well the offspring of cross-species flings survived and reproduced or what the children of, say, a Neandertal mother and a human father looked like. Those who suspect that Neandertals and other Stone Age hominid species had a big evolutionary impact say that ancient DNA represents the first step to understanding the power of interbreeding in human evolution. But it’s not enough. Accumulating evidence of the physical effects of interbreeding, or hybridization, in nonhuman animals may offer some answers. Skeletal studies of living hybrid offspring — for example, in wolves and monkeys — may tell scientists where to look for signs of interbreeding on ancient hominid fossils. Scientists presented findings on hybridization’s physical effects in a variety of animals in April at the annual meeting of the American Association of Physical Anthropologists in Atlanta. Biological anthropologist Rebecca Ackermann of the University of Cape Town in South Africa co-organized the session to introduce researchers steeped in human evolution to the ins and outs of hybridization in animals and its potential for helping to identify signs of interbreeding on fossils typically regarded as either H. sapiens or Neandertals. “I was astonished by the number of people who came up to me after the session and said that they hadn’t even thought about this issue before,” Ackermann says. Interbreeding is no rare event.
Genome comparisons have uncovered unexpectedly high levels of hybridization among related species of fungi, plants, rodents, birds, bears and baboons, to name a few. Species often don’t fit the traditional concept of populations that exist in a reproductive vacuum, where mating happens only between card-carrying species members. Evolutionary biologists increasingly view species that have diverged from a common ancestor within the last few million years as being biologically alike enough to interbreed successfully and evolve as interconnected populations. These cross-species collaborations break from the metaphor of an evolutionary tree sprouting species on separate branches. Think instead of a braided stream, with related species flowing into and out of genetic exchanges, while still retaining their own distinctive looks and behaviors. Research now suggests that hybridization sometimes ignites helpful evolutionary changes. An initial round of interbreeding — followed by hybrid offspring mating among themselves and with members of parent species — can result in animals with a far greater array of physical traits than observed in either original species. Physical variety in a population provides fuel for natural selection, the process by which individuals with genetic traits best suited to their environment tend to survive longer and produce more offspring. Working in concert with natural selection and random genetic changes over time, hybridization influences evolution in other ways as well. Depending on available resources and climate shifts, among other factors, interbreeding may stimulate the merger of previously separate species or, conversely, prompt one of those species to die out while another carries on. The birth of new species also becomes possible. In hybrid zones where the ranges of related species overlap, interbreeding regularly occurs. “Current evidence for hybridization in human evolution suggests not only that it was important, but that it was an essential creative force in the emergence of our species,” Ackermann says. A vocal minority of researchers have argued for decades that signs of interbreeding with Neandertals appear in ancient human fossils. In their view, H. sapiens interbred with Asian and European Neandertals after leaving Africa at least 60,000 years ago (SN: 8/25/12, p. 22). They point to some Stone Age skeletons, widely regarded as H. sapiens, that display unusually thick bones and other Neandertal-like features. Critics of that view counter that such fossils probably come from particularly stocky humans or individuals who happened to develop a few unusual traits. Interbreeding with Neandertals occurred too rarely to make a dent on human anatomy, the critics say. One proposed hybrid fossil has gained credibility because of ancient DNA (SN: 6/13/15, p. 11). A 37,000- to 42,000-year-old human jawbone found in Romania’s Oase Cave contains genetic fingerprints of a Neandertal ancestor that had lived only four to six generations earlier than the Oase individual. Since the fossil’s discovery in 2002, paleoanthropologist Erik Trinkaus of Washington University in St. Louis has argued that it displays signs of Neandertal influence, including a wide jaw and large teeth that get bigger toward the back of the mouth. In other ways, such as a distinct chin and narrow, high-set nose, a skull later found in Oase Cave looks more like that of a late Stone Age human than a Neandertal. 
Roughly 6 to 9 percent of DNA extracted from the Romanian jaw comes from Neandertals, the team found. “That study gave me great happiness,” Ackermann says. Genetic evidence of hybridization finally appeared in a fossil that had already been proposed as an example of what happened when humans dallied with Neandertals. Hybridization clues such as those seen in the Oase fossil may dot the skulls of living animals as well. Skull changes in mouse hybrids, for instance, parallel those observed on the Romanian fossil, Ackermann’s Cape Town colleague Kerryn Warren reported at the anthropology meeting in April. Warren and her colleagues arranged laboratory liaisons between three closely related house mouse species. First-generation mouse hybrids generally displayed larger heads and jaws and a greater variety of skull shapes than their purebred parents. In later generations, differences between hybrid and purebred mice began to blur. More than 80 percent of second-generation hybrids had head sizes and shapes that fell in between those of their hybrid parents and purebred grandparents. Ensuing generations, including offspring of hybrid-purebred matches, sported skulls that generally looked like those of a purebred species with a few traits borrowed from another species or a hybrid line. Borrowed traits by themselves offered no clear road map for retracing an animal’s hybrid pedigree. There’s a lesson here for hominid researchers, Ackermann warns: Assign fossils to one species or another at your own risk. Ancient individuals defined as H. sapiens or Neandertals or anything else may pull an Oase and reveal a hybrid face. Part of the reason for Ackermann’s caution stems from evidence that hybridization tends to loosen genetic constraints on how bodies develop. That’s the implication of studies among baboons, a primate viewed as a potential model for hybridization in human evolution. Six species of African baboons currently interbreed in three known regions, or hybrid zones. These monkeys evolved over the last several million years in the same shifting habitats as African hominids. At least two baboon species have inherited nearly 25 percent of their DNA from a now-extinct baboon species that inhabited northern Africa, according to preliminary studies reported at the anthropology meeting by evolutionary biologist Dietmar Zinner of the German Primate Center in Göttingen. Unusual arrangements of 32 bony landmarks on the braincase appear in second-generation baboon hybrids, Cape Town physical anthropologist Terrence Ritzman said in another meeting presentation. Such alterations indicate that interbreeding relaxes evolved biological limits on how skulls grow and take shape in baboon species, he concluded. In line with that proposal, hybridization in baboons and many other animals results in smaller canine teeth and the rotation of other teeth in their sockets relative to parent species. Changes in the nasal cavity of baboons showed up as another telltale sign of hybridization in a recent study by Ackermann and Kaleigh Anne Eichel of the University of Waterloo, Canada. The researchers examined 171 skulls from a captive population of yellow baboons, olive baboons and hybrid offspring of the two species. Skulls were collected when animals died of natural causes at a primate research center in San Antonio. Scientists there tracked the purebred or hybrid backgrounds of each animal.
First-generation hybrids from the Texas baboon facility, especially males, possessed larger nasal cavities with a greater variety of shapes, on average, than either parent species, Ackermann and Eichel reported in the May Journal of Human Evolution. Male hybrid baboons, in general, have large faces and boxy snouts. Similarly, sizes and shapes of the mid-face vary greatly from one Eurasian fossil hominid group to another starting around 126,000 years ago, says paleoanthropologist Fred Smith of Illinois State University in Normal. Mating between humans and Neandertals could have produced at least some of those fossils, he says. One example: A shift toward smaller, humanlike facial features on Neandertal skulls from Croatia’s Vindija Cave. Neandertals lived there between 32,000 and 45,000 years ago. Smith has long argued that ancient humans interbred with Neandertals at Vindija Cave and elsewhere. Ackermann agrees. Ancient human skulls with especially large nasal cavities and unusually shaped braincases actually represent human-Neandertal hybrids, she suggests. She points to fossils, dating to between 80,000 and 120,000 years ago, found at the Skhul and Qafzeh caves in Israel. Eurasian Neandertals mated with members of much larger H. sapiens groups before getting swamped by the African newcomers’ overwhelming numbers, Smith suspects. He calls it “extinction by hybridization.” Despite disappearing physically, “Neandertals left a genetic and biological mark on humans,” he says. Some Neandertal genes eluded extinction, he suspects, because they were a help to humans. Several genetic studies suggest that present-day humans inherited genes from both Neandertals and Denisovans that assist in fighting infections (SN: 3/5/16, p. 18). One physical characteristic of hybridization in North American gray wolves is also a sign of interbreeding’s health benefits. Genetic exchanges with coyotes and dogs have helped wolves withstand diseases in new settings, says UCLA evolutionary biologist Robert Wayne. “There are few examples of hybridization leading to new mammal species,” Wayne says. “It’s more common for hybridization to enhance a species’ ability to survive in certain environments.” Despite their name, North American gray wolves often have black fur. Wayne and his colleagues reported in 2009 that black coat color in North American wolves stems from a gene variant that evolved in dogs. Interbreeding with Native American dogs led to the spread of that gene among gray wolves, the researchers proposed. The wolves kept their species identity, but their coats darkened with health benefits, the scientists suspect. Rather than offer camouflage in dark forests, the black-coat gene appears to come with resistance to disease, Wayne said at the anthropology meeting. Black wolves survive distemper and mange better than their gray-haired counterparts, he said. Similarly, DNA comparisons indicate that Tibetan gray wolves acquired a gene that helps them survive at high altitudes by interbreeding with mastiffs that are native to lofty northern Asian locales. Intriguingly, genetic evidence also suggests that present-day Tibetans inherited a high-altitude gene from Denisovans or a closely related ancient population that lived in northeast Asia. Labeling gray wolf hybrids as separate wolf species is a mistake, Wayne and colleagues contend (SN: 9/3/16, p. 7). Hybrids smudge the lines that scientists like to draw between living species as well as fossil hominid species, Wayne says. 
Like wolves, ancient hominids were medium-sized mammals that traveled great distances. It’s possible that an ability to roam enabled humans, Neandertals and Denisovans to cross paths in more populated areas, resulting in hybrid zones, paleoanthropologist John Hawks of the University of Wisconsin–Madison suggests. Hominids may have evolved traits suited to particular climates or regions. If so, populations may have rapidly dispersed when their home areas underwent dramatic temperature and habitat changes. Instead of slowly moving across the landscape and stopping at many points along the way, hominid groups could have trekked a long way before establishing camps in areas where other hominids had long hunted and foraged. Perhaps these camps served as beachheads from which newcomers ventured out to meet and mate with the natives, Hawks says. All ancient hominid populations were genetically alike enough, based on ancient DNA studies, to have been capable of interbreeding, Hawks said at the anthropology meeting. Specific parts of Asia and Europe could have periodically become contact areas for humans, Neandertals, Denisovans and other hominids. Beneficial genes would have passed back and forth, and then into future generations. Ackermann sees merit in that proposal. Hominid hybrid territories would have hosted cultural as well as genetic exchanges among populations, she says, leading to new tool-making styles, social rituals and other innovations. “These weren’t necessarily friendly exchanges,” Ackermann says. Many historical examples describe cultural exchange involving populations that succumb to invaders but end up transforming their conquerors’ way of life. However genes, behaviors and beliefs got divvied up in the Stone Age, a mix of regional populations — including Neandertals and Denisovans — can be considered human ancestors, she theorizes. They all contributed to human evolution’s braided stream. That’s a controversial view. Neandertals and Denisovans lived in relatively isolated areas where contact with other hominid populations was probably rare, says paleoanthropologist Matthew Tocheri of Lakehead University in Thunder Bay, Canada. Random DNA alterations, leading to the spread of genes that happened to promote survival in specific environments, played far more important roles in human evolution than occasional hybridization did, Tocheri predicts. Neandertals and Denisovans can’t yet boast of being undisputed hybrid powers behind humankind’s rise. But a gallery of interbreeding animals could well help detect hybrid hominids hiding in plain sight in the fossil record. This article appears in the October 15, 2016, issue of Science News with the headline, "The Hybrid Factor: The physical effects of interbreeding among animals may offer clues to Neandertals' genetic mark on humans."


ROCKVILLE, Md.--(BUSINESS WIRE)--GlycoMimetics, Inc. (NASDAQ:GLYC) today announced that preclinical research demonstrating the potential of two of its drug candidates, GMI-1271 and GMI-1359, against multiple myeloma will be shared via an oral presentation at the American Association for Cancer Research (AACR) Annual Meeting 2017 in Washington, DC. The company and its collaborators at Washington University in St. Louis will highlight data on GMI-1271, an antagonist of E-selectin, and GMI-1359, a dual antagonist of E-selectin and CXCR4, showing anti-cancer activity in preclinical models of multiple myeloma. “Our results show a strong effect on cancer cells in combination with chemotherapy and, importantly, are supportive of our ongoing Phase 1 clinical studies in multiple myeloma of GMI-1271 as well as our study of GMI-1359 in multiple cancers,” said John Magnani, Ph.D., GlycoMimetics Senior Vice President and Chief Scientific Officer. “These results complement data from other preclinical studies and continue to build the rationale for our ongoing clinical programs with both compounds.” Abstract #5005—Muz, B.B., et al. “Inhibition of E-Selectin or E-selectin together with CXCR4 re-sensitizes multiple myeloma to treatment.” Tuesday, April 4, 3:00-5:00 p.m. ET. The AACR Annual Meeting 2017 takes place from April 1 to 5 at the Walter E. Washington Convention Center. Meeting abstracts are available at AACR’s website. GMI-1271 is currently being evaluated in an ongoing Phase 1/2 clinical trial as a potential treatment for acute myeloid leukemia (AML) and in a Phase 1 clinical trial in multiple myeloma. GMI-1359 is now in a Phase 1 clinical trial. GlycoMimetics is a clinical-stage biotechnology company focused on cancer and sickle cell disease. GlycoMimetics' most advanced drug candidate, rivipansel, a pan-selectin antagonist, is being developed for the treatment of vaso-occlusive crisis in sickle cell disease and is being evaluated in a Phase 3 clinical trial being conducted by its strategic collaborator, Pfizer. GlycoMimetics' wholly-owned drug candidate, GMI-1271, an E-selectin antagonist, is being evaluated in an ongoing Phase 1/2 clinical trial as a potential treatment for AML and in a Phase 1 clinical trial in multiple myeloma. GlycoMimetics has also recently initiated a clinical trial with a third drug candidate, GMI-1359, a combined CXCR4 and E-selectin antagonist. GlycoMimetics is located in Rockville, MD, in the BioHealth Capital Region. Learn more at www.glycomimetics.com. This press release contains forward-looking statements regarding GlycoMimetics’ planned activities with respect to the clinical development of its drug candidates, GMI-1271 and GMI-1359.
Actual results may differ materially from those indicated by such forward-looking statements as a result of various important factors, including the availability and timing of data from ongoing clinical trials, the uncertainties inherent in the initiation of future clinical trials, whether interim results from a clinical trial will be predictive of the final results of the trial or results of early clinical trials will be indicative of the results of future trials, expectations for regulatory approvals, availability of funding sufficient for GlycoMimetics’ foreseeable and unforeseeable operating expenses and capital expenditure requirements, other matters that could affect the availability or commercial potential of GlycoMimetics’ drug candidates and other factors discussed in the “Risk Factors” section of GlycoMimetics’ Annual Report on Form 10-K that was filed with the U.S. Securities and Exchange Commission on February 29, 2016, and other filings GlycoMimetics makes with the Securities and Exchange Commission from time to time. In addition, the forward-looking statements included in this press release represent GlycoMimetics’ views as of the date hereof. GlycoMimetics anticipates that subsequent events and developments may cause its views to change. However, while GlycoMimetics may elect to update these forward-looking statements at some point in the future, GlycoMimetics specifically disclaims any obligation to do so, except as may be required by law. These forward-looking statements should not be relied upon as representing GlycoMimetics’ views as of any date subsequent to the date hereof.


News Article | December 19, 2016
Site: www.eurekalert.org

It all began innocently enough. Tyrone Daulton, a physicist with the Institute for Materials Science and Engineering at Washington University in St. Louis, was studying stardust, tiny specks of heat-resistant minerals thought to have condensed from the gases exhaled by dying stars. Among the minerals that make up stardust are tiny diamonds. In 2007, Richard Kerr, a writer for the journal Science, knowing Daulton's expertise, called to ask whether nanodiamonds found in sediments could be evidence of an ancient impact. Daulton said it was possible the heat and pressure of such a cataclysm could convert carbon in Earth's crust to diamond, but asked to see the paper, which had been published in Science. The Science paper argued that a shower of exploding comet fragments over the North American ice sheet had triggered a sudden climate reversal called the Younger Dryas. Having read the paper, Daulton told the reporter, "It looks interesting, [but] there's not enough information in this paper to say whether they found diamonds." Since then, Daulton has periodically been asked to evaluate Younger Dryas sediments for nanodiamonds. In the issue of the Journal of Quaternary Science released online Dec. 19, he reviews the accumulated evidence and reports on his own analysis of new samples from California and Belgium. For the second time in 10 years, Daulton has carefully reviewed the evidence and found no sign of a spike in nanodiamond concentration in Younger Dryas sediments. Since nanodiamonds are the strongest piece of evidence for the impact hypothesis, their absence effectively discredits it. And so a great idea apparently has been brought low by the humblest of evidence. Nanodiamonds, it bears emphasizing, are tiny -- smaller than bacteria. Impact supporters often claim to find them inside small spheres of carbon, and those spheres are about the size of the period at the end of this sentence. Even so, how is it possible for some scientists to find diamonds in samples and others to find none? One answer is that carbon atoms can arrange themselves in many different configurations. These arrangements, which make the difference between pencil lead and diamond, can be confused with one another. Impact supporters often claim to have found lonsdaleite, a rare form of diamond that has a hexagonal rather than the common, cubic atomic structure. "Lonsdaleite is usually reported in the literature associated with impact sites or in meteorites that were shock processed," Daulton said. "It can also be formed by detonation in the laboratory, so the presence of lonsdaleite to me would be a strong suggestion of an impact." But when he examined Younger Dryas samples reported to contain lonsdaleite, Daulton couldn't find it. Instead, he found aggregates of single-atom-thick sheets of carbon atoms (graphene) and sheets of carbon atoms with attached hydrogen atoms (graphane) that looked "very, very similar to lonsdaleite." So the claim of lonsdaleite was based on a misidentification: Daulton published this result in 2010. End of story? Not so fast. In 2014, a group of researchers reported that they had found a nanodiamond-rich sediment layer that spanned three continents. While claiming to find cubic and hexagonal diamond, they also claimed to find much more abundant n-diamond, a controversial form of diamond characterized by electron diffraction patterns similar to diamond, but with extra "forbidden" reflections that diamond does not exhibit.
Pulled back into the controversy, Daulton again found no diamond or n-diamond in the samples from the Younger Dryas horizon. What he found instead was nanocrystalline copper, which produces diffraction patterns just like the controversial n-diamond. Daulton also attempted to reproduce the analyses that found a spike in the concentration of nanodiamonds at the Younger Dryas but found flaws in the methodology that invalidated the result. Paradoxically, it was Daulton's experience finding nanodiamonds in stardust that prepared him not to find them in sediments.
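The copper-for-diamond confusion is easy to appreciate with a back-of-the-envelope diffraction calculation. The Python sketch below is an illustration using standard textbook lattice constants and selection rules, not Daulton's analysis: copper's d-spacings sit within about 1.5 percent of diamond's, while the 200 reflection, forbidden by the diamond structure factor, is allowed in fcc copper, matching the extra "forbidden" reflections reported for n-diamond.

    import numpy as np

    a_diamond, a_copper = 3.567, 3.615   # lattice constants in angstroms

    def d_spacing(a, h, k, l):
        """d-spacing of the (hkl) planes in a cubic crystal."""
        return a / np.sqrt(h**2 + k**2 + l**2)

    def diamond_allowed(h, k, l):
        """Diamond-structure rule: h,k,l all odd, or all even with h+k+l = 4n."""
        all_odd = h % 2 == 1 and k % 2 == 1 and l % 2 == 1
        all_even = h % 2 == 0 and k % 2 == 0 and l % 2 == 0
        return all_odd or (all_even and (h + k + l) % 4 == 0)

    # Reflections allowed for an fcc metal such as copper (h,k,l all same parity)
    for hkl in [(1, 1, 1), (2, 0, 0), (2, 2, 0), (3, 1, 1)]:
        d_cu, d_dia = d_spacing(a_copper, *hkl), d_spacing(a_diamond, *hkl)
        note = "allowed" if diamond_allowed(*hkl) else "forbidden"
        print(f"{hkl}: Cu {d_cu:.3f} A vs diamond {d_dia:.3f} A ({note} in diamond)")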


News Article | November 28, 2016
Site: www.eurekalert.org

Why are there volcanoes on an island that isn't near any tectonic boundaries? Madagascar, the big island off the east coast of Africa with the lemurs and baobabs, is thought to be sitting in the middle of an old tectonic plate, and so, by the rules of plate tectonics, should be tectonically quiet: few earthquakes and no volcanoes. But it's not. The island has been away from tectonic action for the past 80 million years, said Martin Pratt, research scientist in earth and planetary sciences at Washington University in St. Louis, yet it experiences about 500 earthquakes per year. The island also has volcanoes that have been active within the recent geologic past. "Having active volcanoes in Madagascar is like having erupting volcanoes in St. Louis," said Michael Wysession, professor of earth and planetary sciences. "You have to ask yourself, 'What are they doing there?'" Since this part of the world is geologically complex, there are lots of interesting possible explanations for the volcanoes. To figure it out, the geologists needed to be able to examine not just the island's accessible surface, but also what lies beneath the rigid crust and upper mantle. To image Earth's interior, geologists use a technique called seismic tomography that is similar to the medical CT scan, probing Earth's structure with seismic waves from distant earthquakes and ambient noise. But remote and politically unstable Madagascar was largely unexplored by seismic methods until recently. Starting in 2010, however, three groups, including one led by Washington University seismologists Wysession and Doug Wiens, began to deploy seismic arrays on Madagascar, on nearby islands in the Mozambique channel (between the island and Africa), and on the ocean floor east of Madagascar. In an article published online Nov. 22 in Earth and Planetary Science Letters, the Washington University scientists report that they found three areas of hot rock within the mantle beneath three separate volcanic provinces on the island. They also see signs that the bottom of the lithosphere beneath the central volcanic province has peeled off. As the cold rock sank into the mantle, hotter rock flowed around it to the center and the south of the island. The crust, unburdened, bobbed higher. The northern volcanic province, meanwhile, probably taps a different heat source. Madagascar, originally part of the ancient continent Gondwana, was formed in two steps. The island, together with India, pulled away from Africa 150 million years ago, stretching and thinning the crust on the island's west coast before it finally snapped off. The thinned crust on the west coast sagged and the dips filled with sediments, forming deep basins of sedimentary rocks. Then, about 90 million years ago, when the mini-continent migrated over the Marion hotspot (a mantle plume that now lies beneath the Antarctic plate to the south), brief but voluminous eruptions covered the island in lava. The blast of heat is thought to have cracked the overriding continent into two parts, Madagascar and India; India then scraped past the east coast of Madagascar on its way north toward Asia, leaving a very straight coastline there. But the volcanism in the central, northern, and southern provinces is much younger than the basaltic remains of the 90-million-year-old eruption still found around the perimeter of Madagascar. So the question was: Where did these younger eruptions come from?
Lead author Pratt used three complementary methods to analyze surface waves (seismic waves trapped near Earth's surface), which are created by distant earthquakes and by sources of seismic noise, such as ocean storms. "His approach is clever and creative," Wysession said. "He's taken three really different data sets, some good at high frequencies that give you better resolution at shallow depths, and some better at low frequencies that give you better resolution at greater depths, and he's put them all together. It's a bit like combining an X-ray, an MRI and a CT scan to get a clearer image."
The images show three low-velocity seismic anomalies corresponding to the upwelling of hotter mantle rock along the island's backbone. "We knew about the named volcanic provinces in the center and north," Wysession said. "But we didn't know about the one in the southwest. When we saw the third blob in the images, we checked the literature and discovered that, sure enough, there was volcanic activity there as recently as 9 million years ago."
The cause of the three hot regions in the mantle is a mystery, however. Though there is some indication from the tomographic images that the regions might be connected, particularly the southern two, further modeling of deeper structure will be needed to confirm this. One previously proposed origin for the hot regions is hot rock rising through the mantle at the Comores hot spot, which has created a set of volcanic islands just west of the north end of the island. The authors have a different idea, however, and it comes from the way that the central and southwestern provinces appear to be connected at depth. "If you look at the images that Martin has made," Wysession said, "you can see a horseshoe shape where the central hot mantle anomaly swings west and then comes back east again, connecting the central and southern provinces. The deflecting obstacle seems to be a slab of colder rock.
"We think the lithosphere (the crust and rigid upper mantle) has delaminated, and the bottom of it fell off," Wysession said. "As the cold, dense slab began to sink, hotter rock flowed up and in to replace it, buoying the central province and, as it tilted, blocking flow to the south."
But what caused the bottom of the lithosphere to peel off? "We think it may have been the Marion hotspot," Wysession said. "The underside of the plate was heated by this huge blow torch 95 million years ago, weakening the rock enough that it was able to peel off. So we're still seeing collateral damage from this ancient event." This idea also has the advantage of explaining the unusually high elevations of the northern half of the island. Once the heavy bottom of the plate fell off, it stopped pulling down the crust, which rebounded upward as much as a kilometer as hot rock from below took the place of the delaminated slab. Something similar happened underneath the Great Basin of the western United States, he said, where the bottom of the lithosphere also split off, forming a large blob of cold material sinking down through the mantle below the surface of central Nevada. There, the blow torch that delaminated the plate was an ocean spreading center that was overridden by the North American plate, Wysession said.
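Hypothetically, the merging step Wysession describes can be illustrated in a few lines of Python. The sketch below is illustrative only: it assumes made-up dispersion picks (period, phase velocity) for three datasets and a standard rule of thumb that a surface wave of period T is most sensitive near a depth of about one-third of its wavelength. None of the numbers, names, or functions come from the study itself.

import numpy as np

# Hypothetical dispersion picks as (period in seconds, phase velocity in km/s).
# Short-period waves sample shallow structure; long-period waves sample deeper.
ambient_noise = [(5, 2.9), (10, 3.1), (20, 3.3)]        # shallowest sensitivity
regional_quakes = [(20, 3.3), (40, 3.6), (60, 3.8)]     # intermediate depths
teleseismic = [(60, 3.8), (100, 4.0), (150, 4.1)]       # deepest sensitivity

def merge_dispersion(*datasets):
    # Average phase velocities wherever the datasets' period coverage overlaps.
    merged = {}
    for data in datasets:
        for period, velocity in data:
            merged.setdefault(period, []).append(velocity)
    return sorted((p, float(np.mean(v))) for p, v in merged.items())

# Rule of thumb: a surface wave of period T is most sensitive near a depth of
# about one-third of its wavelength, i.e. velocity * T / 3.
for period, velocity in merge_dispersion(ambient_noise, regional_quakes, teleseismic):
    print(f"T = {period:3d} s, c = {velocity:.2f} km/s, peak sensitivity ~{velocity * period / 3:.0f} km")

Because each dataset constrains a different period band, the merged curve spans a wider depth range than any single dataset, which is the essence of the combined approach described above.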


News Article | December 5, 2016
Site: www.prweb.com

Poets&Quants, the leading online publication for graduate and undergraduate business education news, has produced its first in-house ranking of the 50 best undergraduate business programs in the U.S. Our winner in the debut ranking is the Olin Business School at Washington University in St. Louis, which boasts the single best undergraduate business experience in the U.S. Rounding out the top five undergraduate business programs are No. 2 Notre Dame, No. 3 Wharton, No. 4 Georgetown, and No. 5 UC-Berkeley. The ranking is based on three categories of data:
1) Admissions standards that measure the quality of the incoming students.
2) Alumni perspective on the full educational experience, from the quality and accessibility of faculty to whether students had a "signature experience" or a global immersion to best prepare them for work and life.
3) Employment data including internships before senior year, full-time employment within 90 days of graduation, and average compensation.
"This is the most thorough study of undergraduate business education ever undertaken, an amazing resource for prospective students and parents trying to make smart decisions about where to get the business basics for a successful career," says John A. Byrne, editor-in-chief of Poets&Quants. "Never before has anyone gathered the wealth of data we will publish, from actual acceptance rates and average SAT scores to the schools with the best academic and career advising."
Key data and stats from the study and ranking: Along with the rankings, in-depth feature profiles on all 50 schools will be published on poetsandquantsforundergrads.com. Within the school profiles, much more data is revealed, ranging from the percentage of students receiving scholarships to the percentage of students graduating with debt, and showing where specific schools excel or lag compared with other schools.
"The most popular undergraduate degree granted in the U.S. is the business degree," Byrne says. "In 2013-2014, more than 358,000 undergraduate business degrees were conferred in the U.S., roughly double the next most popular area of study, the health professions. Yet, most of the information available for would-be students and their parents is at the university level and is not specific to business schools. Some very good universities have so-so business programs. Some lesser-known universities have business schools that excel. We're making that apparent with this ranking."
The ranking was created at the request of deans at several leading business schools who were dissatisfied with what is currently available on the market. U.S. News and World Report's ranking, for example, is based solely on the opinions of deans and senior faculty members. Some believe it's little more than a popularity contest because it does not measure the quality of incoming students, the academic experience, or employment outcomes. For questions on methodology, specific data, and interview requests, please contact Nathan Allen, staff writer and project lead, at Nathan(at)poetsandquants(dot)com.
About Poets&Quants
Poets&Quants is the leading resource for complete coverage of graduate business education. Poetsandquants.com features multiple tools and authoritative content, including: consolidated B-school rankings, news and in-depth features, videos, podcasts, two searchable directories, and events — empowering graduate business degree seekers with information they need to make decisions along their journey from pre- to post-MBA. Poetsandquantsforundergrads.com takes that level of coverage to undergraduate business education.
About John A. Byrne
Poets&Quants' Editor-in-Chief John A. Byrne is the founder of C-Change Media, a global digital media company focused on higher education content that operates five websites and hosts events worldwide, bringing together business school students, the world's best schools, and the largest employers. Byrne is the author or co-author of more than ten books, including two New York Times bestsellers, and is the former executive editor of Businessweek, editor-in-chief of Businessweek.com, and editor-in-chief of Fast Company. He also is the creator of the first regularly published rankings of business schools, for Businessweek in 1988, and the author of several business school guidebooks. He wrote an unprecedented 58 cover stories for Businessweek and is the only business journalist to have written covers for Businessweek, Fortune, Forbes, and Fast Company magazines.
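To make the three-part methodology described above concrete, here is a minimal Python sketch of how category scores could be combined into a composite ranking. The school names, scores, and the equal one-third weights are invented assumptions; the article does not disclose Poets&Quants' actual weighting, data, or formula.

# Invented category scores on a 0-10 scale; weights assumed equal for the sketch.
schools = {
    "School A": {"admissions": 9.1, "alumni": 8.7, "employment": 9.4},
    "School B": {"admissions": 8.8, "alumni": 9.2, "employment": 8.9},
    "School C": {"admissions": 9.3, "alumni": 8.5, "employment": 9.0},
}
weights = {"admissions": 1 / 3, "alumni": 1 / 3, "employment": 1 / 3}

def composite(scores):
    # Weighted sum of the three category scores.
    return sum(weights[category] * scores[category] for category in weights)

ranked = sorted(schools, key=lambda name: composite(schools[name]), reverse=True)
for rank, name in enumerate(ranked, start=1):
    print(f"No. {rank} {name}: composite score {composite(schools[name]):.2f}")

Changing the weights dictionary is enough to see how sensitive the ordering is to the weighting choice, which is one reason methodologies like this are usually published alongside the results.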


News Article | November 30, 2016
Site: www.prweb.com

Memorial Healthcare System is expanding its scope of neurology services with the newly developed Memorial Brain Health and Memory Center. Hilary Glazer, MD, a cognitive neurologist at Memorial Neuroscience Institute who specializes in memory, dementia, Alzheimer's disease, and the prevention of cognitive decline, will lead the Center.
The number of Alzheimer's and dementia patients diagnosed each year continues to rise, and even adults as young as their 20s are experiencing cognitive issues. For this reason, the need for specialized care, diagnosis, and prevention in this field is becoming increasingly critical. "Memory loss affects many," Glazer said. "While dementia plays a major role in the lives of older adults, research shows that people can live with cognitive impairment for 20 years or perhaps more, and the stress that this places on families is a burden that causes increased morbidity, hospitalizations, and nursing home placements. Working together with community physicians, we can provide families these resources so they are not left alone to cope with these changes."
The Brain Health and Memory Center provides patients and their families with the support, services and tools they need to reverse, slow, or stop the progression of memory loss and help them live life to the fullest. Dr. Glazer became passionate about helping families going through devastating memory changes after watching her mother care for her father during his battle with brain cancer. While her mother struggled with little support from his healthcare providers, Dr. Glazer was inspired by a new approach to care that she learned about during her medical training: a compassionate, patient- and family-centered multidisciplinary team that focuses on up-to-date treatments and prevention – a model that she is developing at Memorial.
"Everyone is at risk for memory changes," Glazer said. "The key is to come to a memory specialist, someone who has fellowship training in this field. Families and individuals should come early, at the first signs of memory loss, because that is when our treatment has the best chance of working." In the last five years alone, new treatments and lifestyle interventions have been developed, giving families ways to prevent, slow, and in some cases even reverse memory loss.
The Brain Health and Memory Center at Memorial Neuroscience Institute incorporates a multidisciplinary approach that is tailored to the individual and designed to empower patients, their families, and caregivers to promote the best outcomes and get the most out of life. The Center has offices at Hollywood's Memorial Regional Hospital, and will have offices at Memorial Hospital West in Pembroke Pines. Dr. Glazer earned her medical degree at Washington University in St. Louis and completed neurology training at the University of Miami/Jackson Memorial Hospital. She completed a United Council for Neurologic Subspecialties fellowship in Cognitive and Behavioral Neurology at the University of Florida.
About the Neuroscience Institute at Memorial Healthcare System
The Neuroscience Institute at Memorial Regional Hospital is dedicated to the diagnosis and treatment of a wide range of neurological disorders and injuries in adults and children.
Its team of physicians – including specialists in neurology services, neurosurgery, and interventional neuroradiology – uses advanced technology and innovative procedures in the effort to effectively treat chronic and acute neurological disorders. Assessment testing, rehabilitative therapies, social services and other support services also are available to assist patients with acute illness or injury. Aside from its location at Memorial Regional Hospital, there is an additional office at Memorial Hospital West in Pembroke Pines, Fla.
Memorial Healthcare System is one of the largest public healthcare systems in the country and is a national leader in quality care and patient satisfaction. Its facilities include Memorial Regional Hospital, Memorial Regional Hospital South, Joe DiMaggio Children's Hospital, Memorial Hospital West, Memorial Hospital Miramar, Memorial Hospital Pembroke and Memorial Manor nursing home. The system has received the following recognition: Modern Healthcare magazine's "Best Place to Work in Healthcare"; Forbes' "America's Best Employers"; Florida Trend's "Florida's Best Companies to Work For"; and Becker's Hospital Review's "150 Great Places to Work in Healthcare."


News Article | January 2, 2016
Site: www.scientificamerican.com

The first I ever heard of New Year resolutions was after I moved from Spain to the US for my postdoctoral training, in 1997. In Spain, the New Year's party rituals are different too: instead of a countdown of the last ten seconds followed by a rendition of "Auld Lang Syne," we Spaniards close the year by eating twelve grapes of luck—or "uvas de la suerte"—one for each of the twelve strokes of midnight's bells. The grape-eating custom—said to have been started by grape growers in the early 20th century—is akin to blowing out the candles on your birthday cake. You formulate a wish just as midnight approaches (some people do twelve wishes, but I think that's just greedy). Then, if you perform the ritual without mistakes (you have no grapes left after the twelfth bell, and you pop the last grape in your mouth simultaneously with the twelfth bell), your wish(es) are said to come true. Thus, how the New Year will turn out is more a matter of eye-hand-mouth coordination skills than anything else. By contrast, making a list of resolutions for the New Year (a tradition rooted in Puritan beliefs, a quick internet search reveals) places the burden of the next twelve months' outcome smack on the person making the list, and his or her moral fortitude.
Setting aside what the two traditions might indicate about Spanish versus US culture, the point is that I was unfamiliar with the idea of New Year resolutions as a child and young adult (although I hear that the concept, just like the notions of Halloween and Black Friday, is catching on across the Atlantic). Indeed, I had never written down a list of New Year resolutions… but just now I did for the first time ever. The reason for the change? A paper just published in Psychological Science, with the enticing title "Put Your Imperfections behind You: Temporal Landmarks Spur Goal Initiation When They Signal New Beginnings."
A team of scientists from Washington University in St. Louis and the University of Pennsylvania set out to explore, in a series of experiments, why certain dates are more likely to inspire people to pursue their goals (start a new diet, give up smoking, ramp up their exercise regime). The scientists also asked whether people's strengthened resolve at particular temporal landmarks might be linked to their distancing themselves, psychologically, from their past, imperfect personas. The researchers reasoned that salient temporal landmarks may spur goal initiation because they signal new beginnings (i.e. an opportunity to clean the slate and start from scratch). Such transition points include those in social timetables (the start of holidays, the beginning of the New Year), as well as personal life events, recurring (a birthday, a wedding anniversary) and not (a first date, going away to college, moving to a new city). Previous research had already shown that temporal landmarks serve as dividers between people's past, present, and future selves, weakening the psychological connection between them.
[As an aside, I find the last point fascinating: most of us experience a sense of continuity between who we are now and who we used to be. Even if the actual connection between our past and present selves is tenuous, our autobiographical memories usually provide inescapable evidence that, however different we may feel from the way we were at the age of 4 or 14, we are the same people nevertheless.
Even though personal continuity may be a necessary, adaptive illusion, it can nevertheless prevent us from effecting major changes in our behavior ("This is who I am, so there's nothing I can do to change"). Yet, if the construct of a continuous self does weaken at major temporal milestones, such dates may allow the possibility of new and improved selves to be born.]
The scientists further hypothesized that the more starkly a temporal milestone marked the start of a new period, the more psychological distance it should create between past and present selves. The disconnect between the current self and past imperfections could promote goal initiation in multiple ways: boosting perceived self-efficacy, lessening perceived self-tarnish from recent failures, and creating a clean slate, so that deviating from the goal (say, cheating on one's diet) looms far more disastrous than it would as just one more in a long list of infractions.
Throughout the experimental series, the researchers found that temporal landmarks associated with new beginnings (i.e. both the start of a season and that of an academic period) were more appealing choices to jumpstart habit changes than ordinary dates. Consistent with these findings, Jewish participants also felt that the same date (October 5th) was more indicative of a new beginning when described as "the first day after Yom Kippur" than as "the 278th day of the year."
The scientists then recruited a sample of participants who planned to pursue a goal in the New Year. They asked half of the participants to write 3 to 5 reasons why the New Year felt meaningful to them, and the other half to write 3 to 5 reasons why this New Year felt ordinary to them. The people who wrote reasons why the New Year felt like a new beginning spent more time on goal-related activities, such as visiting goal-tracking Web sites and reading written information on how people can increase their chances of achieving their goals.
Next, the team set out to exclude the alternative possibility that people simply start new activities (congruent or incongruent with achieving goals) at temporal milestones. Here, experimental participants were given a fictional scenario in which a Chinese man named Chang turned 36 (which the researchers described as the beginning of a new zodiac cycle to only half of the participants). Chang's birthday coincided with a visit to the doctor, in which he learned that he was at high risk of lung cancer and should avoid smoking. Half of the participants were told that Chang had been wanting to quit smoking for several years but had never succeeded. The other half read that Chang had been tempted to start smoking but had never done it. Participants who were familiar with the Chinese zodiac calendar thought that Chang would be more motivated to quit smoking than those who were unfamiliar. In contrast, participants both familiar and unfamiliar with the Chinese zodiac calendar thought it very unlikely that Chang would be motivated to start smoking when he had not done so in the past. These results indicated that major temporal milestones selectively spur the adoption of behaviors that are goal-congruent, rather than the adoption of any new behaviors.
Finally, the researchers asked whether one reason that people's motivation to pursue goals increases at temporal landmarks might be that, at such points in time, they feel more psychologically separated from their past imperfect selves.
To answer this question, they presented study participants with a fictional scenario in which half of the subjects had to imagine moving to a new city for the first time in 9 years (new-beginning condition). The other half had to imagine moving cities for the 9th consecutive time in 9 years (control condition). Then they had to rate, in various ways, the psychological distance that they felt between their present selves (after the move) and their imperfect past selves. Participants who had to imagine moving for the first time in 9 years felt more disconnected from their imperfect past selves than participants who imagined moving every year for the past 9 years. Participants in the new-beginning condition also said they would be more motivated to tackle personal goals than those in the control condition.
The combined results indicated that temporal landmarks that signal new beginnings cause people to engage in activities designed to facilitate goal initiation, and to predict higher motivation to pursue goals, both for themselves and for other people. This renewed impetus to tackle goals derives in part from the psychological disconnect, occurring at major temporal landmarks, between a person's current self and his or her past inferior self.
In light of this research, I have been pondering my failure to maintain certain exercise habits during the past year. Can I use the 2016 temporal landmark to get myself to exercise 3 times a week? Yeah—that's it! I hereby choose to believe that the major culprit for my former noncompliance was my past inferior self. I am positive that that weak-willed malingerer is miles and miles apart from my much enhanced new 2016 personality upgrade. So, for the first time ever, I have written a long list of New Year resolutions. I have also orchestrated an extended discussion with my immediate family about why 2016 feels so much like a new start. You know, to lock in the change.


News Article | December 14, 2016
Site: www.businesswire.com

CORALVILLE, Iowa--(BUSINESS WIRE)--Voxello, developer of communication solutions for impaired hospitalized patients, today announced that Richard Wieland was appointed as a new member of the Company's Board of Directors. Mr. Wieland will also serve as the interim Chief Financial Officer.
Mr. Wieland is a senior financial executive with a diversified life-science background and more than thirty-five years of business experience in both public and private companies. He has completed over twenty capital transactions, including two successful IPOs and eleven M&A transactions, and successfully managed the post-acquisition integration programs that followed. Most recently, Rich was Executive Vice President and Chief Financial Officer of Unilife Corporation, a NASDAQ-traded medical device company. Prior to Unilife, he spent six years at two biotech companies in the drug discovery and development field, both of which had products in clinical development. Before that, Rich had P&L responsibility for two healthcare companies and was a member of the Board of Directors at Option Care Inc., a NASDAQ-traded home healthcare company that was subsequently acquired by Walgreens. His first CFO position was with LyphoMed Inc., a generic pharmaceutical company with revenues of approximately $225 million that was eventually acquired by Fujisawa Pharmaceutical Co. Ltd. Rich began his career at Procter & Gamble Company in Cincinnati, OH.
Rich earned an MBA degree in finance from the Olin School of Business at Washington University in St. Louis, MO, and an undergraduate degree in accounting and economics from Monmouth College in Monmouth, IL. Rich served in the Army Signal Corps and on the Alumni Board of Monmouth College. After Fujisawa acquired LyphoMed, he established the Wieland Family Foundation, which supports charities for disadvantaged children.
"We are very pleased to have Mr. Wieland join our Board of Directors. His life sciences background, coupled with his financial and management experience, will add significantly to the depth of our already outstanding board, as well as position us for the successful execution of our strategic plan for the Company," said Rives Bird, CEO of Voxello.
Richard Wieland added, "I am pleased to have the opportunity to join the Board of Directors at Voxello. Voxello has a unique opportunity within the hospital market with the noddle™, the Company's first offering, which allows impaired hospitalized patients who cannot communicate by traditional means to do so. It's quite uncommon to find a solution that can provide a tremendous ROI to the hospital, improve patient satisfaction, and comply with new regulations from the Joint Commission for accreditation. Pending FDA clearance, the Company plans to launch in key markets. I am looking forward to working with Rives Bird to help build value for the shareholders."
Voxello provides solutions for impaired hospitalized patients who cannot communicate by traditional means. The company's first product, the noddle™, detects voluntary gestures, such as clicking sounds made with the tongue or an eye blink, to control nurse call and speech generation devices. For more information about Voxello, please visit www.voxello.com.


DENVER, CO--(Marketwired - October 21, 2016) - MediaNews Group, Inc. ("MNG"), the largest shareholder of Monster Worldwide, Inc. (NYSE: MWW) ("Monster" or the "Company"), with an ownership interest of 11.5% of Monster's outstanding shares, announced that it has released an open letter to Monster shareholders along with its definitive consent solicitation materials filed today with the Securities and Exchange Commission (the "SEC"), and that it (through an affiliate) intends to make a cash tender offer for up to 8,925,815 shares of common stock of Monster at a price of $3.70 per share.
The offer price of $3.70 per share represents a 9.8% premium to the closing price of the Monster common stock reported on the NYSE on October 20, 2016, the last full trading day before we announced the tender offer, and a 33.6% premium to the closing price of the Monster common stock reported on the NYSE on August 8, 2016, the last full trading day before announcement of the Randstad merger agreement. The number of shares MNG intends to offer to purchase in the tender offer represents approximately 10% of the outstanding shares of Monster common stock. The tender offer will be open to all Monster shareholders. After giving effect to the tender offer, assuming the purchase of 100% of the Monster common stock sought in the tender offer, MNG is expected to own 19,225,815 shares of Monster common stock, or 21.5% of the Company. MNG is not able to purchase more than 25% of the outstanding Monster common stock without causing a "Change of Control" under the Monster credit agreement.
MNG's offer will not be subject to a financing condition; however, it will be subject to certain other conditions, including the termination of both the Randstad tender offer and merger agreement. Once the tender offer is commenced, offering materials will be mailed to Monster shareholders and filed with the SEC. Monster shareholders should read the offering materials when they become available, because they will contain important information. The tender offer will be held open for at least 20 business days following its commencement, and tenders of shares must be made prior to the expiration of the tender offer period.
Additionally, MNG has released an extensive shareholder presentation outlining 1) its strategic plan to revitalize Monster and 2) its director candidates' substantial qualifications to replace the current Board of Directors. The presentation and other information related to MNG's campaign can be found at www.revitalizemonster.com. MNG's nominees for the Monster Board of Directors are:
The full text of MNG's letter is included below:
MediaNews Group, Inc. ("MNG") currently has an ownership interest of approximately 11.5% of the outstanding shares of Monster Worldwide, Inc. ("Monster" or the "Company"), making us the Company's largest shareholder. We have nominated seven highly qualified candidates to replace the current Board of Directors at Monster, as this Board has proven time and again its inability to make the right strategic and operational decisions to maximize shareholder value at the Company. The current deal with Randstad at $3.40 per share and the "process" that resulted in this offer is just one example in a long line of poor decision-making by the current Board.
In addition to entering into the Randstad deal, this Board oversaw a decline in revenue of over 24% since 2012 and approved a stock repurchase program that had the Company buying back stock in December of 2015 at an average price of $5.99, only to advise shareholders to accept $3.40 per share from Randstad months later. Our director candidates are significantly more qualified and more experienced than the existing Board, and one of our candidates, Daniel Dienst, is prepared to serve as interim CEO so that the turnaround at Monster can begin immediately after our directors are seated.
Why Are We Here? A Better Path Forward For All Monster Shareholders
MNG initially established a position in Monster in July of 2016 because we believed the stock was tremendously undervalued relative to its long-term prospects. We continue to strongly believe this is the case and that the deal with Randstad at $3.40 per share significantly undervalues the Company. Our nominees are dedicated to executing a plan to maximize shareholder value for all investors, and while MNG would make a profit if the Randstad deal closes, we believe there is SIGNIFICANTLY more upside for everyone if the Company is managed properly over the long-term. MNG has a significant and deep understanding of the pressures facing Monster and the changes going on in the recruitment advertising industry. This, in part, is based on the following:
We are confident that with the right talent and plan, Monster can reduce its revenue declines and increase profitability, despite the numerous headwinds facing the Company. We believe the main issues facing Monster are its lack of competent management, its poor strategy to address the shift in the business, and its completely inadequate oversight by a Board that doesn't have the experience, skills or desire to turn the business around. Our current plan for change focuses on three key areas -- 1) returning to growth, 2) optimizing the cost structure and 3) monetizing/restructuring non-core assets -- and would specifically involve the following initiatives:
Our strategy for Monster is well thought out and the result of an exhaustive study of the business and industry, combined with our Board's relevant experience executing these types of initiatives, both at MNG and a host of other businesses. MNG has experience, both at our newspapers and our job board business, making significant expense reductions while minimizing impacts to revenue, and our strategy is greatly informed by these experiences. At MNG, we have reduced total operating expenses significantly over the last few years, and our revenue performance has been as good as or better than that of similar large newspaper companies in the industry over that same time period. Additionally, at our Jobs in the US business, we've been able to significantly reduce expenses while actually growing revenue over the last few years. With the right execution and strategy, focused around the key initiatives listed above, we believe Monster has the ability to deliver long-overdue value to shareholders and can achieve a stock price of $6 - $8 per share over the next 18 months.
Recent Restructuring Actions at the Company Have Not Gone Nearly Far Enough
Monster claims that it has "already taken" actions to address challenges by cutting expenses by over $100 million during the past several years, cutting capital expenditures by 50% and divesting non-core or underperforming assets.
To be clear, the Company has not gone nearly far enough in terms of what it could or should do to rein in expenses, reduce capital spending and divest, restructure or shut down under-performing assets. With regard to operating expenses, Monster still has close to 3,700 employees, with over 1,000 in sales[i] and over 61 offices in 23 countries[ii]. We simply do not accept the notion that the Company has done everything it can to reduce operating expense. Monster spends more on capital expenditures as a percentage of its revenue than its competitors, and this management team/Board has a terrible track record when it comes to generating a return on capital invested -- return on capital has ranged annually from 0.4% to 4.4% since 2013[iii]. We are confident that significant improvements can be made to the product even with a reduced capital expenditures budget, and what is clear is that the money being spent now is generating minimal returns for shareholders. The Company also contends that all non-core or underperforming assets have been divested. It's disingenuous for the Company to say it has done everything it can here when the international business alone has been unprofitable for the last 3 years. Clearly there is more to do to either restructure, sell or shut down pieces of the international business. Moreover, our suspicion is that there are other parts of Monster's business that, when analyzed with the necessary level of scrutiny, would fall into the same category.
MNG is NOT Trying to Take Control of Monster
Monster claims we are trying to take control of the Company without paying a control premium. Comparing a campaign to remove and replace the Board of an underperforming company with an offer to acquire the whole business is akin to comparing apples and oranges. We are offering shareholders a credible and preferable alternative to the sub-optimal Randstad deal. Additionally, since we are shareholders in the Company and intend to continue being shareholders once our nominees are elected to the Board, we are completely aligned with all shareholders in our desire to maximize the value of the Company. MNG has a long-term view on its Monster investment and strong conviction around the well thought out plan we've put together and the experienced slate we've assembled. We are confident the opportunity exists to create significantly more shareholder value if the right Board and leadership team are put in place and the business is operated properly. Collectively, our Board has experience working on 27 turnaround situations and 17 years of experience working within the recruitment advertising industry. The strategy we are proposing for Monster is absolutely realistic, and our Board has the experience and know-how to execute it effectively.
Questionable Strategic and Operational Decisions by the Incumbent Board
The Company's financial performance under the current Board has been terrible, with revenue declining by over 24% since 2012 and stock performance suffering as a result. The incumbent Board has a long history of poor decision-making that has negatively impacted shareholder value. This is the Board that:
Given this history of poor decision-making, why should shareholders now believe the Board when it says that the Randstad deal is the best option to deliver value? An outright sale of the Company was not -- and is not -- the only option.
The incumbent Board took the easy way out via a "fire-sale" since they have little skin in the game, as the incumbent non-employee directors collectively own a measly 0.3% of the stock[viii]. It is clear to us that the Randstad deal was entered into by the Board out of desperation to avoid responsibility for yet another quarterly miss by the Company. The Company did not negotiate a go-shop provision with Randstad, even though it did not run a formal auction process. While other potential buyers are able to technically submit bids, practically speaking, the rushed nature of the process would make it hard for any public company, private company with a sophisticated Board, or traditional private equity firm to participate. These types of potential buyers are used to more formal processes with structured bidding rounds, reasonable deadlines and a level playing field for all participants. We have talked to multiple companies who have stated that they would have participated in a formal process had the Company run one. Moreover, now that the deal with Randstad has been executed, potentially going "hostile" and submitting a competing bid with no access to diligence is not something most public companies, large private companies or traditional private equity funds are comfortable with, thereby severely limiting potential bid activity post the execution of the merger agreement.
Aside from the rushed, flawed nature of the "process" the Company ran, we take issue with the fact that the Company was exploring a sale without evaluating all other alternatives for restructuring the business. As we've stated before, we don't believe now is the right time to sell the Company, especially without an exhaustive evaluation of other options to improve the business, and there is no evidence the Company went through this type of evaluation. Potential short-term price movements SHOULD NOT be a motivating factor for a Board and management team, and it's clear that this, at least in part, was driving the Company's motivation to get a deal done so quickly with Randstad.
Board Has Enabled Tim Yates to Potentially Make Over $4.9 Million if Randstad Deal Closes
Monster CEO Tim Yates automatically gets paid over $1.7 million[ix] if the Randstad deal closes -- even if he keeps his job -- because the Board foolishly awarded equity awards with single-trigger change in control provisions to executives prior to March of 2016 and allowed him to negotiate the deal. Mr. Yates stands to make over $4.9 million[x] if the Randstad deal is successful and he is terminated "without cause" or terminates for "good reason." Mr. Yates owns less than 1%[xi] of the Company and has very little "skin in the game." Given the fact that the Company's stock has declined by over 92% since he first became involved in the business in 2007, and over 55% in the last 12 months[xii], it seems wholly unfair to shareholders that he stands to make such a financial windfall for "selling at the bottom." Despite Mr. Yates' obvious conflicts resulting from his golden parachutes and lack of substantial ownership of the Company, Monster's Board still allowed Mr. Yates to run the Company's haphazard sales "process" and completely relied on him for negotiations and updates.
A SIGNIFICANT Upgrade to the Board - Introduction to MNG's Nominees
Our nominees have a very relevant and diverse set of skills across areas such as finance, sales management, corporate governance, restructuring, technology, recruitment advertising, and operations. Collectively, they have:
Mr. Dienst served as a director and the Chief Executive Officer of Martha Stewart Living Omnimedia Inc., a media and merchandising company, from 2013-2015, where he led the turnaround of the famous brand and orchestrated its successful sale in 2015 to Sequential Brands, Inc. for $353 million. Prior to his service at Martha Stewart Living, Mr. Dienst had a distinguished career in the steel and metals industry, having served as the Group Chief Executive Officer of Sims Metal Management, Ltd. from 2008-2013, the world's largest publicly listed metal and electronics recycler, processing and trading in excess of 15 million tons of metal annually from 270 facilities on five continents. He had previously sold Metal Management, Inc., a company that he founded and served in the capacity of Chief Executive Officer from 2004-2008, to Sims for $1.7 billion in 2008. Mr. Dienst also served as Chairman of the Board and Acting Chief Executive Officer of Metals USA, Inc., one of the nation's largest steel processors, after its reorganization and until its going-private sale to an affiliate of Apollo Management, L.P. in 2004. Mr. Dienst is also experienced in the financial markets, having served as a Managing Director of Corporate and Leveraged Finance at CIBC World Markets Corp., a diversified global financial services firm, from 2000-2004. From 1998-2000, he held various positions within CIBC, including Executive Director of the High Yield and Financial Restructuring Group. Previous to his time at CIBC, he served in various capacities with Jefferies & Company, Inc., a global investment banking firm. Mr. Dienst also recently served from 2014-2015 as a Director of 1st Dibs, Inc., a venture-backed e-commerce business owned by Benchmark Capital, Spark Capital, Index Ventures and Insight Venture Partners. Mr. Dienst holds a B.A. from Washington University in St. Louis and a J.D. from Brooklyn Law School. Mr. Dienst's qualifications as a director include his executive experience as a CEO and director of four public companies, his expertise in turnarounds, special situations and corporate transactions and his experience in the media sector.
Mr. Anto is currently a Senior Vice President at MediaNews Group, Inc. (d/b/a Digital First Media), the second largest newspaper company in the U.S. by circulation, where he has served since 2013. From 2014-2015, he was Vice President of Business Development for MediaNews Group and also CEO at Jobs in the US, a subsidiary of MediaNews with regionally focused job board sites in New England. From 2013-2014 he was Managing Director at Digital First Ventures, the strategic investing division of MediaNews Group. In 2009 he co-founded RumbaTime, LLC, a fashion brand focused on timepieces and accessories, and served as the company's CEO until 2012. From 2006-2009 Mr. Anto was a Senior Analyst and Director of Investments at Harbinger Capital Partners, a multi-strategy investment firm, where he managed one of the largest merchant power investment portfolios in the sector, accounting for approximately 30% of the Fund's assets, and completed M&A and debt financing transactions totaling over $4 billion in value. Prior to his time at Harbinger, Mr. Anto was an associate at ABS Capital Partners, a later-stage venture capital firm, and an analyst at First Union Securities in their technology investment banking group. He is currently on the board at CIPS Marketing Group, Inc., and he has previously served on the boards of Kelson Energy Inc., Kelson Canada and RumbaTime.
He has a BBA from Emory University and an MBA from Columbia University. Mr. Anto's qualifications as a director include his expertise as a previous CEO of a job board business, his executive experience, particularly in the media industry, and his expertise in turnarounds and corporate transactions.
Mr. Bloomfield is currently the CEO of vitalfew, inc., a consulting and advisory business that he founded in 2015. He also serves on the board of governors of TaTech, a leading industry association that enables the interaction of companies in the recruitment technology space; he has been a member since 2006 and on the board of governors since the first board was elected by the membership. In 2016, he co-founded ConversationDriver, a company that uses software to help organizations improve the efficiency of sales outreach, and currently serves as its CRO. From 2012-2015, he served as the Senior Vice President of Sales and Business Development at recruitment technology company ZipRecruiter, which he joined in 2012 as the 20th employee and the first in sales. At ZipRecruiter he developed the entire sales organization, growing it from concept to over 120 reps by the time he left the company. Previously he was Vice President of Business Development at JobTarget, a company that provides technology to organizations that want to offer their own web-based job boards to their members. While at JobTarget, he was instrumental in launching innovative new products and also led the acquisition of two companies. Mr. Bloomfield holds a B.A. from the University of Massachusetts, Amherst. Mr. Bloomfield's qualifications as a director include his expertise in recruitment technologies, developed over a career spanning more than twelve years in the space. He is widely recognized as a thought leader in the sector and, in addition to advising or having advised almost thirty companies in the industry, he is a frequent speaker at industry conferences and events.
Mr. Freeman is the President, a Founding Member, and Director of Alden Global Capital, LLC, a New York-based investment firm focused on deep value, catalyst driven investing. He has been with the firm since its founding in 2007, and has been its President since 2014. Mr. Freeman currently serves as Vice Chairman of MediaNews Group, Inc. (d/b/a Digital First Media), the second largest newspaper business in the United States by circulation with over $1 billion of annual revenue, owning newspapers such as The Denver Post, San Jose Mercury News and Orange County Register. He also serves on the compensation committee and leads the strategic review committee for MNG and has served on its board since 2011. Mr. Freeman is a co-founder and serves on the board of SLT Group, Inc. (d/b/a SLT), a private fitness business based out of New York and started in 2011, which recently took in a large growth investment from North Castle Partners, a private equity firm focused on the health and wellness space. Mr. Freeman also co-founded City of Saints Coffee Roasters in 2013, a third wave coffee roaster, wholesaler and retailer based out of Brooklyn, NY. Prior to Alden, from 2006-2007, Mr. Freeman worked as an Investment Analyst at New York-based Smith Management, a private investment firm. Prior to that, from 2003-2006, Mr. Freeman was an investment banking analyst at Peter J. Solomon Company, a boutique investment bank, working on mergers and acquisitions, restructurings and refinancing assignments.
He has previously served on the boards of The Philadelphia Media Network and The Journal Register Company, among others. Currently, Mr. Freeman also serves as Chairman of the Advisory Board for Jewish Life at Duke University's Freeman Center; he graduated with a BA from Duke University. Mr. Freeman's qualifications as a director include his experience as an investor, investment banker and board member of multiple companies, with expertise in finance, compensation, turnarounds, corporate transactions and significantly improving value at underperforming companies.
Mr. Gregson has served as the Americas Leader for the Insurance Industry for Willis Towers Watson plc since 2013. Prior to his role at Willis Towers Watson, Mr. Gregson was a Managing Director at Alvarez and Marsal Holdings, LLC, a financial advisory services company focused primarily on the financial services industry, from 2010-2013. Mr. Gregson has over thirty years of experience in developing and implementing business solutions for global organizations. Prior to joining Alvarez and Marsal, Mr. Gregson served as founder and president of Bridge Pointe, LLC, a Bermuda-based insurance and reinsurance company and advisory services firm that provides innovative insurance solutions for insurers and corporate sponsors. Previously, he was a co-founder and principal of the Gregson Group, a business advisory firm helping companies align business strategies with organizational and human capital strategies. He is currently a director at Fidelity & Guaranty Life, a provider of life insurance and annuity products, where he serves on the audit, compensation and related-party transactions committees. Mr. Gregson holds a B.A. from the University of Delaware and has attended the Executive Finance Program at the University of Michigan. Mr. Gregson's qualifications as a director include his experience advising companies on complex business and financial issues for thirty years, and his expertise in corporate governance, strategy, and financial/operational performance improvement.
Mr. Robinson is a highly regarded financial and operating executive with thirty years of senior-level strategic, financial, governance, turnaround and M&A experience. He has also served on seven public company boards, and has experience serving as Chairman of the board as well as Chairman of audit and compensation committees. From 2006-2009, Mr. Robinson was Chief Financial Officer and Chief Operating Officer for Miva, Inc., a digital marketing company, and was instrumental in Miva's turnaround and subsequent sale. He was previously Senior Executive Vice President and Chief Financial Officer of HotJobs.com, an online job board, where he was responsible for all finance and administrative functions at the company. After bringing the company to profitability a year ahead of expectation, HotJobs was sold to Yahoo! for $500 million, representing a 75% premium to market. Prior to joining HotJobs, Mr. Robinson was Executive Vice President and Chief Financial Officer for PRT Group, a software and IT services company, where he raised $62 million in its initial public offering. In 1994, Mr. Robinson was recruited by the CEO and Warburg Pincus to serve as the Chief Financial Officer of Valassis Communications, Inc. (f/k/a Advo, Inc.), a Fortune 500 company and the largest direct marketing company on the New York Stock Exchange, with $2 billion in revenues.
Over a three-year period, shareholder value increased 300% due to operational initiatives that he led, in addition to the payout of a one-time $10 special dividend. Previously, Mr. Robinson held senior financial positions with Citigroup, Mars, Inc. and Kraft Foods Group, Inc. He is currently on the board at EVINE Live Inc., and has previously served on the board of The Jones Group, Inc., where he chaired the audit and compensation committees, in addition to having served on the boards of five other public companies over the course of his career. Mr. Robinson holds a B.A. from The University of Wisconsin and an M.B.A. in finance from Harvard Business School. Mr. Robinson's qualifications as a director include his C-level executive experience at multiple companies, his experience serving on the boards of seven public companies and his expertise in finance, corporate governance, turnarounds and corporate transactions.
Since 2002, the Hon. Gregory Slayton has served as the Managing Director of Slayton Capital, an international venture capital firm that has been an early investor in some of the most successful companies in Silicon Valley history. He was an early investor in Google and Salesforce.com and served on the advisory boards of both companies. From 2005-2009, Mr. Slayton was the United States Chief of Mission (de facto Ambassador) to Bermuda, serving under both the Bush and Obama Administrations. From 2000-2002, he served as Chief Executive Officer of ClickAction Inc., an email marketing services company that was acquired by InfoUSA Inc., and prior to this, he was Chief Executive Officer and Chairman of MySoftware, which merged with ClickAction in 2000. He has also served as Distinguished Visiting Professor at Peking University and as a visiting professor at UIBE Business School, Beijing & Szechuan University, Dartmouth College, Harvard University and the Stanford Graduate School of Business. Mr. Slayton has been featured in the Wall Street Journal, Time and three Harvard Business School case studies. He has lived and worked extensively in Asia, Africa, Europe and Latin America, and was a Fulbright Scholar at the University of the Philippines, where he completed a master's in Asian Studies with honors. Mr. Slayton holds a B.A. from Dartmouth College and an M.B.A. from Harvard Business School, having graduated from both institutions with honors. Mr. Slayton's qualifications as a director include his experience as an investor in technology companies, his executive experience as CEO of multiple companies, his experience serving on the boards of four public companies and his expertise in technology, operations and international markets.
As shareholders, we don't have to settle for the "fire-sale" Randstad deal that was brought to us by the current Board or the poor performance of the Company that has taken place under its watch. Simply put, we expect more from the board members we entrust with creating value at the companies we invest in, and we think other shareholders should expect more as well. There is a better path forward for Monster, and we are confident that the plan we've laid out and the board candidates we are presenting represent the best possible alternative to deliver substantial value to shareholders. We encourage all shareholders to visit our website, revitalizemonster.com, to learn more about our nominees and our strategic plan to revitalize the Company. If you have any questions, please contact Okapi Partners LLC at info@okapipartners.com or 212-297-0720.
MediaNews Group, Inc. (d/b/a Digital First Media) is a leader in local, multiplatform news and information, distinguished by its original content and high quality, diversified portfolio of local media assets. Digital First Media is the second largest newspaper company in the United States by circulation, serving an audience of over 40 million readers on a monthly basis. The Company's portfolio of products includes 67 daily newspapers and 180 non-daily publications. Digital First Media has a leading local news audience share in each of its primary markets and its content monetization platforms serve clients on both a national and local scale.
MediaNews Group, Inc., Joseph Anto, Ethan Bloomfield, Daniel Dienst, Heath Freeman, Kevin Gregson, Lowell Robinson and Gregory Slayton (collectively, the "Participants") have filed with the Securities and Exchange Commission (the "SEC") a definitive consent statement and accompanying form of consent card to be used in connection with the solicitation of consents from the stockholders of Monster Worldwide, Inc. (the "Company"). All stockholders of the Company are advised to read the definitive consent statement and other documents related to the solicitation of consents by the Participants as they contain important information, including additional information related to the Participants. The consent statement and an accompanying consent card will be furnished to some or all of the Company's stockholders and will be, along with other relevant documents, available at no charge on the SEC website at http://www.sec.gov/ or from Okapi Partners at (855) 305-0856 or info@okapipartners.com. Information about the Participants and a description of their direct or indirect interests by security holdings is contained in the definitive consent statement on Schedule 14A filed by the Participants with the SEC on October 20, 2016. This document is available free of charge from the sources indicated above.
The tender offer referenced in this press release has not yet commenced. This announcement is for informational purposes only and is not an offer to purchase or a solicitation of an offer to sell securities, nor is it a substitute for the tender offer materials that will be filed with the SEC. The solicitation and offer to buy shares of common stock of Monster will only be made pursuant to an Offer to Purchase and related tender offer materials that will be filed by MNG (through an affiliate) with the SEC. THE TENDER OFFER MATERIALS OF MNG ON SCHEDULE TO (INCLUDING AN OFFER TO PURCHASE, A RELATED LETTER OF TRANSMITTAL AND CERTAIN OTHER TENDER OFFER DOCUMENTS) WILL CONTAIN IMPORTANT INFORMATION. MONSTER SHAREHOLDERS SHOULD READ THESE DOCUMENTS CAREFULLY WHEN THEY BECOME AVAILABLE BECAUSE THEY WILL CONTAIN IMPORTANT INFORMATION THAT MONSTER SHAREHOLDERS SHOULD CONSIDER BEFORE MAKING ANY DECISION REGARDING TENDERING THEIR SECURITIES. Copies of these documents, when filed with the SEC, will be available free of charge by contacting Okapi Partners LLC, the information agent for the tender offer, at (855) 305-0856. These documents, when filed with the SEC, will also be available for free at the SEC's website at www.sec.gov.
THIS PRESS RELEASE CONTAINS FORWARD LOOKING STATEMENTS. FORWARD LOOKING STATEMENTS CAN BE IDENTIFIED BY USE OF WORDS SUCH AS "OUTLOOK", "BELIEVE", "INTEND", "EXPECT", "POTENTIAL", "WILL", "MAY", "SHOULD", "ESTIMATE", "ANTICIPATE", AND DERIVATIVES OR NEGATIVES OF SUCH WORDS OR SIMILAR WORDS.
FORWARD LOOKING STATEMENTS IN THIS PRESS RELEASE ARE BASED UPON PRESENT BELIEFS OR EXPECTATIONS. HOWEVER, FORWARD LOOKING STATEMENTS AND THEIR IMPLICATIONS ARE NOT GUARANTEED TO OCCUR AND MAY NOT OCCUR AS A RESULT OF VARIOUS RISKS, REASONS AND UNCERTAINTIES, INCLUDING UNCERTAINTY AS TO WHETHER THE CONDITIONS TO THE TENDER OFFER WILL BE SATISFIED, THE NUMBER OF SHARES OF MONSTER COMMON STOCK THAT WILL BE TENDERED AND WHETHER THE TENDER OFFER WILL BE COMMENCED OR CONSUMMATED. EXCEPT AS REQUIRED BY LAW, MNG AND ITS OWNERS AND RELATED PERSONS UNDERTAKE NO OBLIGATION TO UPDATE ANY FORWARD LOOKING STATEMENT, WHETHER AS A RESULT OF NEW INFORMATION, FUTURE DEVELOPMENTS OR OTHERWISE.
Note: unless cited below, Monster historical financials and data points referenced in this letter are from the Company's SEC filings and press releases.
[i] According to previous conversations with Monster Investor Relations. Based on our understanding, this includes inside sales, outside sales, management and sales/customer support.
[ii] http://www.monster.com/about/our-locations
[iii] S&P Capital IQ
[iv] 2016 Proxy Peer Group includes: IAC/InterActiveCorp; LinkedIn Corporation; Earthlink Holdings Corp.; VeriSign, Inc.; Shutterfly, Inc.; Pandora Media, Inc.; j2 Global, Inc.; Pegasystems Inc.; Blucora, Inc.; WebMD Health Corp.; NetSuite, Inc.; Web.com Group, Inc.; and DHI Group, Inc. Excludes three companies that are no longer standalone public companies since they have been acquired. Calculation of Cumulative Total Shareholder Return assumes dividends are reinvested.
[v] Company's Consent Revocation Statement, filed on Schedule 14A on October 18, 2016
[vi] Based on stock price of $44.72 on June 8, 2007
[vii] For instance, Monster made statements in a press release filed on Schedule 14D-9/A on September 30, 2016, claiming, "Monster's Board recognized that enhancing Monster's competitive position in the current environment will require continued investment, and the Company will likely operate in a low growth environment with substantial margin pressure for several years." Several days later, Monster wrote in a shareholder presentation filed on Schedule 14A on October 4, 2016 (the "October 4 DEFA14A"), "If the Randstad transaction does not close, Monster's stock price could trade down to or below the pre-announcement price."
[viii] S&P Capital IQ
[ix] Company's Solicitation/Recommendation Statement, filed on Schedule 14D-9 on September 6, 2016
[x] Company's Solicitation/Recommendation Statement, filed on Schedule 14D-9 on September 6, 2016
[xi] Company's Consent Revocation Statement, filed on Schedule 14A on October 18, 2016
[xii] Based on Monster stock price on October 15, 2015 of $7.60
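As a closing aside on the numbers quoted at the top of this release: the stated premiums and ownership percentages are mutually consistent, which a few lines of Python can verify. This is illustrative arithmetic using only figures stated in the release; it is not part of any filing.

# All inputs below are figures quoted in the release itself.
offer_price = 3.70
tender_shares = 8_925_815        # shares sought in the tender offer
post_offer_shares = 19_225_815   # MNG's stake if the tender is fully subscribed

# Implied closing prices from the stated 9.8% and 33.6% premiums.
print(f"Implied Oct 20 close: ${offer_price / 1.098:.2f}")  # about $3.37
print(f"Implied Aug 8 close:  ${offer_price / 1.336:.2f}")  # about $2.77

# Implied shares outstanding from the stated 21.5% post-offer ownership.
outstanding = post_offer_shares / 0.215
print(f"Implied shares outstanding: {outstanding:,.0f}")                  # about 89.4 million
print(f"Tender as % of outstanding: {tender_shares / outstanding:.1%}")   # about 10%
print(f"Implied current stake: {(post_offer_shares - tender_shares) / outstanding:.1%}")  # about 11.5%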


News Article | December 19, 2016
Site: www.eurekalert.org

An international team led by Washington University School of Medicine in St. Louis has selected a third investigational drug to be tested in a worldwide clinical trial - already underway - aimed at finding treatments to prevent Alzheimer's disease. The third drug is being developed by Janssen Research & Development, LLC, in New Jersey. It is designed to lower production of amyloid beta, a protein that clumps together into plaques and damages neurons in the brain, leading to memory loss, cognitive problems and confusion. The drug is designed to block the enzyme beta secretase -- which produces amyloid beta -- with a goal of reducing the amount of amyloid beta available to clump and cause neurodegeneration. This investigational drug joins two others already being evaluated in the Dominantly Inherited Alzheimer's Network Trial Unit (DIAN-TU) study, which involves people with an inherited predisposition to develop Alzheimer's at a young age, usually in their 30s, 40s or 50s. Participants already enrolled will continue on their existing drug regimens, and additional volunteers with no or mild symptoms of cognitive impairment will be enrolled to evaluate the third drug. "We are delighted with the new collaboration with Janssen Research & Development to expand the number of novel therapeutic targets we are testing," said Washington University Alzheimer's specialist Randall J. Bateman, MD, director of the DIAN-TU, a public-private-philanthropic research partnership. "Testing a beta secretase inhibitor in the DIAN-TU trial further diversifies the approach to speed identification of potential preventions and treatments for this devastating disease," added Bateman, who is also the Charles F. and Joanne Knight Distinguished Professor of Neurology at Washington University. The DIAN-TU, launched in 2012, is the first trial aimed at identifying drugs to prevent or slow Alzheimer's in people who are nearly certain to develop the disease due to inherited genetic mutations. Specifically, people in the trial have mutations in one of three genes - APP, PSEN-1 or PSEN-2 - which are linked to early-onset Alzheimer's. The hope is that by intervening early - before Alzheimer's ravages the brain - it may be possible to thwart the disease. As part of the trial, three-quarters of new enrollees will be randomly assigned to receive the beta secretase inhibitor, and one-quarter will receive the placebo. Both groups will be evaluated for at least four years to determine whether the investigational drug delays or prevents the onset of Alzheimer's disease. "Janssen welcomes this opportunity for researchers to test the mechanism of beta secretase inhibition in people who have dominantly inherited genetic mutations that put them at substantial risk of early-onset Alzheimer's disease. The DIAN-TU trials will provide a rigorous and powerful test of the amyloid hypothesis while evaluating a potential preventive treatment option for autosomal dominant Alzheimer's disease," said Gary Romano, MD, PhD, the head of Alzheimer's Disease Development for Janssen Research & Development. Although the trial focuses on people with rare mutations, treatments that are successful in this population potentially could be used to slow or stop the forms of Alzheimer's that occur more commonly in older adults. It is thought that the destructive molecular and cellular processes in the brain are much the same for both types of the disease. 
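To see the 3:1 allocation arithmetic concretely, here is a minimal Python sketch; it is illustrative only, not the trial's actual randomization procedure, and the cohort size and random seed are invented.

```python
import numpy as np

# Illustrative 3:1 randomization: each hypothetical enrollee receives the
# investigational drug with probability 0.75 and placebo with probability 0.25.
rng = np.random.default_rng(42)
n_enrollees = 160                                  # hypothetical cohort size
arms = rng.choice(["drug", "placebo"], size=n_enrollees, p=[0.75, 0.25])
print({arm: int((arms == arm).sum()) for arm in ("drug", "placebo")})
# Expected split: about 120 drug, 40 placebo.
```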
The other two investigational drugs already being tested in the DIAN-TU are gantenerumab, an antibody made by Roche that binds to clumps of amyloid beta and helps remove them from the brain, and solanezumab, an antibody made by Eli Lilly and Co. that binds to free-floating fragments of amyloid beta protein, allowing them to be cleared before they clump together to form plaques. Enrollment in these two groups of the trial was completed in 2015, and these participants will be followed through the end of 2019. Alzheimer's researchers selected the investigational drugs from more than 20 drugs nominated by pharmaceutical companies. Each drug has a unique approach to counter the toxic effects of amyloid beta. Each also passed earlier clinical trials that evaluated safety and effectiveness of the drugs and whether they targeted amyloid beta in study participants. "We are pleased to see the DIAN-TU trial researchers continuing to broaden the types of investigational drugs they are testing," said Maria Carrillo, PhD, chief science officer of the Alzheimer's Association, which is helping to fund the trial. "Alzheimer's is a very complex disease, and it is extremely important that we develop therapies to address Alzheimer's from a variety of angles and at multiple stages of the disease." Along with beginning testing of the beta secretase inhibitor, the new arm of the DIAN-TU study uses a new disease progression model to identify changes in cognition earlier and includes more frequent cognitive testing using remote applications. In addition, the trial will include an investigational imaging-based marker targeting disease progression. This novel radiopharmaceutical tracer - which is being developed by General Electric (GE) Healthcare and called THK-5351 - is designed to detect the brain protein tau by positron emission tomography (PET) scan. Tau accumulates in the brain of individuals with Alzheimer's, where it forms toxic tangles. By incorporating a radioactive atom into a molecule that specifically detects tau, researchers may be able to monitor the amount and location of tau tangles in participants' brains by PET scan. Investigators are hoping to determine whether this imaging method will demonstrate the presence of tau tangles before an individual starts to show symptoms of cognitive decline and if it can help predict the onset of dementia more accurately than existing biomarkers. The DIAN-TU trial is underway at 24 sites across seven countries. Because of the rarity of dominantly inherited Alzheimer's disease, the program will be expanded to additional countries, potentially including Argentina, Bulgaria, China, Germany, Japan, Korea, Mexico, Netherlands and Sweden. For people with Alzheimer's, family members, doctors and researchers interested in participating, the DIAN-TU launched the DIAN Expanded Registry (DIAN EXR). For more information or to register for potential participation in the trial, go to http://www. , call 1-844-DIAN-EXR (342-6397) or email dianexr@wustl.edu. Editor's note: David M. Holtzman, MD, the Andrew B. and Gretchen P. Jones Professor and head of neurology, is listed on the patent related to solanezumab, an antibody that is co-owned by Washington University in St. Louis and Lilly. Washington University has licensed its patent rights to Lilly. The financial interests of the university and Holtzman in this patent are managed in accordance with applicable conflict-of-interest policies and regulations.


Expanded Multicenter Study in U.S. and Europe to Begin Fourth Quarter 2017

ST. LOUIS, Mar. 2, 2017 /PRNewswire/ -- MediBeacon Inc., a portfolio company within the Pansend Life Sciences platform of HC2 Holdings, Inc. (NYSE MKT: HCHC), announced today the successful completion of a real-time, point of care renal function clinical study on subjects with impaired kidney function at Washington University in St. Louis. During the clinical study, kidney function was measured in subjects ranging from normal to impaired Stage 4 Chronic Kidney Disease (CKD). The study also included subjects at St. Louis University Hospital. MediBeacon's proposed Transdermal Glomerular Filtration Rate ("GFR") Monitor uses an optical skin sensor combined with a proprietary fluorescent tracer agent that glows in the presence of light. The system has been designed to provide clinicians continuous real-time monitoring of kidney function with no need for blood sampling. "Completion of our clinical study on patients with impaired kidney function represents an important milestone," said Steve Hanley, MediBeacon CEO. "We anticipate beginning our multicenter clinical study including sites in the United States and Europe during the fourth quarter 2017." Blood samples taken in clinical practice today provide only time-delayed estimates and suffer from variability that may lead to inaccuracies. "Methods to assess kidney function have not changed in 25 years," said Dr. Richard Solomon, Patrick Professor of Medicine and Director, Division of Nephrology and Hypertension at The University of Vermont College of Medicine. "MediBeacon's point of care system could represent a major breakthrough in measuring kidney function." "We are extremely excited by the continued progress MediBeacon has made in validating their technology," said Philip Falcone, HC2's Chairman, Chief Executive Officer and President. "Over time, MediBeacon's innovations have the potential to improve patient care and reduce costs to the healthcare system." Current MediBeacon technology applications are being investigated in the fields of kidney health, gastrointestinal permeability and optical angiography. The company's Intellectual Property (IP) portfolio has grown to 29 granted U.S. patents, with 17 pending patent applications. In September 2016, MediBeacon was awarded a grant from the National Eye Institute (NEI) of the National Institutes of Health (NIH) under Award Number R43EY027207. With this support, the company is pursuing research into the use of a MediBeacon fluorescent tracer agent to visualize vasculature in the eye. In October 2016, MediBeacon, in collaboration with Washington University, was awarded a $1.1 million grant from the Bill & Melinda Gates Foundation for a research project aimed at improving the understanding of childhood malnutrition and its related problems, including stunted growth. MediBeacon's mission is to commercialize biocompatible optical diagnostic agents for physiological monitoring, surgical guidance, and imaging of pathological disease in the human population. Several product concepts in these arenas are contained in the MediBeacon Intellectual Property estate. MediBeacon's portfolio includes a renal function system that uses an optical skin sensor combined with a proprietary fluorescent tracer agent that glows in the presence of light. This system, currently in human trials, is designed to provide clinicians continuous real-time monitoring of a patient's kidney function.
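To illustrate the measurement principle only (MediBeacon's actual tracer, optics, and calibration are proprietary and not described in the release), a transdermal GFR readout can be thought of as fitting an exponential decay to the fluorescence signal: in a one-compartment model the tracer signal follows s(t) = s0*exp(-k*t), and for a tracer cleared solely by the kidneys, GFR is proportional to the fitted rate constant k times the distribution volume. A minimal Python sketch on synthetic data:

```python
import numpy as np
from scipy.optimize import curve_fit

# One-compartment tracer kinetics: signal(t) = s0 * exp(-k * t).
# The data below are synthetic; a real device would also calibrate
# against the optical properties of skin, which this sketch ignores.
def decay(t, s0, k):
    return s0 * np.exp(-k * t)

t = np.linspace(0.0, 8.0, 50)                     # hours after injection
rng = np.random.default_rng(1)
signal = decay(t, 100.0, 0.35) + rng.normal(0.0, 1.0, t.size)

(s0_fit, k_fit), _ = curve_fit(decay, t, signal, p0=(90.0, 0.2))
print(f"fitted clearance rate constant k = {k_fit:.2f} per hour")
# GFR would then be proportional to k times the tracer's distribution
# volume (GFR = k * Vd in this idealized model).
```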
Learn more about MediBeacon at www.medibeacon.com HC2 Holdings, Inc. is a publicly traded (NYSE MKT:HCHC) diversified holding company, which seeks opportunities to acquire and grow businesses that can generate long-term sustainable free cash flow and attractive returns in order to maximize value for all stakeholders. HC2 has a diverse array of operating subsidiaries across seven reportable segments, including Manufacturing, Marine Services, Utilities, Telecommunications, Life Sciences, Insurance and Other. HC2's largest operating subsidiaries include DBM Global Inc., a family of companies providing fully integrated structural and steel construction services, and Global Marine Systems Limited, a leading provider of engineering and underwater services on submarine cables. Founded in 1994, HC2 is headquartered in New York, New York. Learn more about HC2 and its portfolio companies at www.hc2.com




NEW YORK, March 02, 2017 (GLOBE NEWSWIRE) -- HC2 Holdings, Inc. (“HC2”) (NYSE MKT:HCHC), a diversified holding company, announced today that MediBeacon Inc., a portfolio company within HC2’s Pansend Life Sciences subsidiary, successfully completed a real-time, point of care renal function clinical study on subjects with impaired kidney function at Washington University in St. Louis.  During the Pilot Two study, kidney function was measured in subjects ranging from normal to impaired Stage 4 Chronic Kidney Disease (CKD). The study also included subjects at St. Louis University Hospital. A photo accompanying this announcement is available at http://www.globenewswire.com/NewsRoom/AttachmentNg/daa7caaf-6c7f-421b-af21-8df3a0bf3883 “We are extremely excited by the continued progress MediBeacon has made in validating their technology,” said Phil Falcone, HC2’s Chairman, Chief Executive Officer and President.  “This is a significant milestone that further reinforces our confidence in our investment in MediBeacon and its ability to use innovative technology to improve patient care and reduce costs to the healthcare system.” MediBeacon’s proposed Transdermal Glomerular Filtration Rate (“GFR”) Monitor uses an optical skin sensor combined with a proprietary fluorescent tracer agent that glows in the presence of light.  The system has been designed to provide clinicians continuous real-time monitoring of kidney function with no need for blood sampling. “Completion of our Pilot Two study on patients with impaired kidney function represents an important milestone,” said Steve Hanley, MediBeacon CEO.  “We anticipate beginning our multicenter clinical study including sites in the United States and Europe during the fourth quarter 2017.” Blood samples taken in clinical practice today provide only time-delayed estimates and suffer from variability that may lead to inaccuracies. “Methods to assess kidney function have not changed in 25 years,” said Dr. Richard Solomon, Patrick Professor of Medicine and Director, Division of Nephrology and Hypertension at The University of Vermont College of Medicine. “MediBeacon’s point of care system could represent a major breakthrough in measuring kidney function.” Current MediBeacon technology applications are being investigated in the fields of kidney health, gastrointestinal permeability and optical angiography. The company’s Intellectual Property (IP) portfolio has grown to 29 granted U.S. patents, with 17 pending patent applications. In September 2016, MediBeacon was awarded a grant from the National Eye Institute (NEI) of the National Institutes of Health (NIH) under Award Number R43EY027207.  With this support, the company is pursuing research into the use of a MediBeacon fluorescent tracer agent to visualize vasculature in the eye. In October 2016, MediBeacon, in collaboration with Washington University, was awarded a $1.1 million grant from the Bill & Melinda Gates Foundation for a research project aimed at improving the understanding of childhood malnutrition and its related problems, including stunted growth. HC2 Holdings, Inc. is a publicly traded (NYSE MKT:HCHC) diversified holding company, which seeks opportunities to acquire and grow businesses that can generate long-term sustainable free cash flow and attractive returns in order to maximize value for all stakeholders. HC2 has a diverse array of operating subsidiaries across seven reportable segments, including Manufacturing, Marine Services, Utilities, Telecommunications, Life Sciences, Insurance and Other. 
HC2's largest operating subsidiaries include DBM Global Inc., a family of companies providing fully integrated structural and steel construction services, and Global Marine Systems Limited, a leading provider of engineering and underwater services on submarine cables. Founded in 1994, HC2 is headquartered in New York, New York. Learn more about HC2 and its portfolio companies at www.hc2.com MediBeacon’s mission is to commercialize biocompatible optical diagnostic agents for physiological monitoring, surgical guidance, and imaging of pathological disease in the human population. Several product concepts in these arenas are contained in the MediBeacon Intellectual Property estate. MediBeacon’s portfolio includes a renal function system that uses an optical skin sensor combined with a proprietary fluorescent tracer agent that glows in the presence of light. This system, currently in human trials, provides clinicians continuous real-time monitoring of a patient’s kidney function. Learn more about MediBeacon at www.medibeacon.com Safe Harbor Statement Under the Private Securities Litigation Reform Act of 1995: This release contains, and certain oral statements made by our representatives from time to time may contain, forward-looking statements, including statements regarding the commencement or completion of the offering. Generally, forward-looking statements include information describing the offering and other actions, events, results, strategies and expectations and are generally identifiable by use of the words “believes,” “expects,” “intends,” “anticipates,” “plans,” “seeks,” “estimates,” “projects,” “may,” “will,” “could,” “might,” or “continues” or similar expressions. The forward-looking statements in this press release include, without limitation, statements regarding our expectation regarding building shareholder value.  Such statements are based on the beliefs and assumptions of HC2’s management and the management of HC2’s subsidiaries and portfolio companies. The Company believes these judgments are reasonable, but you should understand that these statements are not guarantees of performance or results, and the Company’s actual results could differ materially from those expressed or implied in the forward-looking statements due to a variety of important factors, both positive and negative, that may be revised or supplemented in subsequent reports on Forms 10-K, 10-Q and 8-K. Such important factors include, without limitation, the ability of our subsidiaries (including, target businesses following their acquisition) to generate sufficient net income and cash flows to make upstream cash distributions, capital market conditions, our and our subsidiaries’ ability to identify any suitable future acquisition opportunities, efficiencies/cost avoidance, cost savings, income and margins, growth, economies of scale, combined operations, future economic performance, conditions to, and the timetable for, completing the integration of financial reporting of acquired or target businesses with HC2 or the applicable subsidiary of HC2, completing future acquisitions and dispositions, litigation, potential and contingent liabilities, management’s plans, changes in regulations and taxes. 
These risks and other important factors discussed under the caption “Risk Factors” in our most recent Annual Report on Form 10-K filed with the Securities and Exchange Commission (“SEC”), and our other reports filed with the SEC could cause actual results to differ materially from those indicated by the forward-looking statements made in this press release. You should not place undue reliance on forward-looking statements. All forward-looking statements attributable to HC2 or persons acting on its behalf are expressly qualified in their entirety by the foregoing cautionary statements. All such statements speak only as of the date made, and HC2 undertakes no obligation to update or revise publicly any forward-looking statements, whether as a result of new information, future events or otherwise. The photo is also available at Newscom, www.newscom.com, and via AP PhotoExpress.


WASHINGTON, DC, Dec. 13, 2016 (GLOBE NEWSWIRE) -- Rachel Jacobson, former Deputy General Counsel of Environment, Energy and Installations at the US Department of Defense (DOD), will join WilmerHale. Ms. Jacobson is recognized nationally as a leader in environmental and natural resources law, having served more than 30 years in the federal government. In addition to her experience at the DOD, Ms. Jacobson litigated some of the nation's largest environmental cases while holding senior leadership positions at the Department of Justice (DOJ) and the Department of the Interior (DOI). Ms. Jacobson also has considerable experience working with states and tribes. "Having served at the DOD, DOI and DOJ, Rachel brings significant, high-level experience in environmental law and policy to WilmerHale," said Andrew Spielman, chair of WilmerHale's Energy, Environment and Natural Resources Practice. "Her extensive, cross-cutting experience will help clients with complex environmental legal challenges achieve effective and long-lasting results." Since August 2014, Ms. Jacobson has been the lead environmental lawyer for the DOD, overseeing all activity pertaining to environmental, energy, natural resources and installations matters, including environmental compliance and cleanup, natural resource management, endangered species protection and litigation, energy procurement and siting, domestic and international basing, military construction, and historic preservation. In this capacity, she advised senior-level policy officials in the DOD and its three military departments. Ms. Jacobson served at the DOI from 2009 to 2014, first as Principal Deputy Solicitor, where she was the lead negotiator of the $1 billion early restoration settlement agreement with BP following the Deepwater Horizon oil spill. She was later appointed Acting Assistant Secretary for Fish and Wildlife and Parks, where she oversaw policy for the US Fish and Wildlife Service and National Park Service and led the DOI's planning effort for a suite of post-spill early restoration projects in the Gulf of Mexico valued at $750 million. From 2008 to 2009, Ms. Jacobson served as director of the impact-directed environmental accounts program at the National Fish and Wildlife Foundation, managing a $100 million mitigation portfolio for environmental restoration and habitat conservation. Ms. Jacobson spent the majority of her career at the DOJ, where she supervised and litigated for more than 20 years and built a reputation as an authority in environmental law and natural resource damages. During this time, she was at the forefront of some of the largest environmental cases in US history, including the Exxon Valdez oil spill and the Coeur d'Alene Superfund trial. "I am proud of my years in public service and honored to be joining WilmerHale," Ms. Jacobson said. "The invaluable experience I gained in environmental regulatory and cleanup work, and at the intersection of natural resource management and energy development, is ideally suited to WilmerHale's focus and strength in working with clients facing complex regulatory and litigation challenges." Ms. Jacobson has received numerous academic, professional and government achievement awards and honors, including most recently the 2015 and 2016 Secretary of Defense Medal for Exceptional Public Service. She received her undergraduate degree in Economics from Washington University in St. Louis, Missouri, in 1980 and her law degree from Boston University School of Law in 1984.
Ms. Jacobson will join WilmerHale as special counsel in the firm's Washington DC office in early January.

About Wilmer Cutler Pickering Hale and Dorr LLP
WilmerHale provides legal representation across a comprehensive range of practice areas that are critical to the success of its clients. The law firm's leading Intellectual Property, Litigation/Controversy, Regulatory and Government Affairs, Securities, and Transactional Departments participate in some of the highest-profile legal and policy matters. With a staunch commitment to public service, the firm is renowned as a leader in pro bono representation. WilmerHale is 1,000 lawyers strong with 12 offices in the United States, Europe and Asia. For more information, please visit www.wilmerhale.com. A photo accompanying this release is available at: http://www.globenewswire.com/newsroom/prs/?pkgid=41993




News Article | September 21, 2016
Site: www.techtimes.com

The American presidential debates are crucial to the voting public since the outcome could tip the scale running up to the elections. This is where candidates come face to face with each other and assert their stand on issues common to all, and where they will need to impress viewers. This year, Facebook wants to be at the center of the action. The social media company and ABC are teaming up to make sure folks online can easily tune in to the presidential debates later this year. As most should know by now, both Donald Trump and Hillary Clinton will face off in three debates in hopes of becoming the next president of the United States. ABC is planning to take advantage of Facebook Live's interactive experience to give the debates some spice. The company wants to air the reaction from viewers and, no doubt, there will be some interesting ones for ABC to focus on. Facebook is not the only social network leader to provide live streams to users. Twitter is doing the same thing, and YouTube, the Google-owned entertainment video platform, has also joined the race. "As we move further into the election cycle, there continues to be a voracious appetite for live content and we know many users turn to Facebook to engage and participate in the conversation," said Colby Smith, vice president of digital at ABC News. Opening the doors for social networks to get involved in the presidential debates is a solid move to democratize access to information and intelligent opinion. The initiative should give more people, especially cord-cutters, a reason to tune in and take part. Bear in mind that this is not just a deal for airing the debates. ABC also plans to showcase its original show "Straight Talk," hosted by Matthew Dowd and LZ Granderson. As it stands, the network will produce and distribute the content, while Facebook will be tasked with providing insights on trending keywords and other relevant information the network might want to use. The first debate is expected to take place on Monday, Sept. 26, at Hofstra University in Hempstead, New York. The second debate will take place on Oct. 9 at Washington University in St. Louis. As for the third, it is expected to take place on Oct. 19 at the University of Nevada, Las Vegas. Millennials might do well to watch the debates online because recent reports have shown that they like to waste time at work on social media.


News Article | October 28, 2016
Site: www.techrepublic.com

This article was originally published on CNET. Donald Trump and Hillary Clinton sparred Sunday night in one of the most contentious debates in modern political history. Followers of each presidential candidate, unsurprisingly, took the contest online. The 90-minute duel between the real-estate mogul and seasoned Washington veteran, with just a month to go before Election Day 2016, was the most-tweeted political debate in Twitter's 10-year history, the social network said Sunday. More than 17 million debate-related tweets were sent during the contest and nearly 30 million tweets throughout the course of the day. Voters were eager to see how Republican nominee Donald Trump would respond to a crumbling presidential campaign, which took a new hit last week with revelations of crude remarks he had made about women. Others were eager to see if his Democratic challenger Hillary Clinton would go for the kill. They didn't have to wait long. After the candidates took the stage without a handshake (cue trending hashtag #nohandshake), Trump repeated an apology for his explicit comments about women that have prompted key Republicans to withdraw their support. "This was locker room talk. I'm not proud of it. I apologized to my family. I apologized to the American people. Certainly I'm not proud of it. But this is locker room talk," Trump said. Trump then quickly hit at Clinton, saying her husband, former president Bill Clinton, had done worse, calling him "abusive." Clinton responded, questioning Trump's fitness to be president. She said in addition to his constant barrage of insults, it's clear to anyone who heard the video that it represents "exactly who he is." That sparked the most popular tweet of the night, which came from college professor Moustafa Bayoumi and which also referenced Trump's call during the debate for Muslims to alert authorities to signs of dangerous activities. Social media sentiment, a measure of how viewers responded to the debate, was negative throughout the clash at Washington University in St. Louis. Trump had more than 74 percent negative sentiment compared to Clinton's 53 percent on Twitter, according to Brandwatch, which measures social media. Another group, Spredfast, said the candidates' negative figures stayed steady throughout the event. "If we can draw conclusions from this, it would be that the crowd is responding, on both sides, to the over-the-top sleaze factor by extremely high levels of social media activity," said William Stodden, a political science professor at North Dakota State College of Science and Concordia College in Minnesota. Trump dominated the conversation on Twitter with 64 percent of tweets. According to the social network, Clinton got 36 percent. Kellan Terry, a Brandwatch analyst, said the phrase "locker room talk," which Trump used to explain leaked remarks he made 11 years ago, and "sexual assault," which has been applied to actions those comments described, were among the GOP candidate's top negative mentions. Trump's unfavorable rating affected Clinton, Terry said, because their conversations intertwined. That meant negative comments about Trump that mentioned Clinton weighed on the Democratic candidate. "It's fair to say that her sentiment suffered as people were actually talking negatively about her opponent in a tweet that also mentioned her," he said.
Trump tried to score points by attacking Clinton's relationship with her husband, Bill, and the former president's past infidelities. Clinton countered by stating that her husband was not running for president. She then linked the economic prosperity of the 1990s to consistent, if slow, economic growth during her husband's administration. Clinton touched on cybersecurity when questioned about the security of her private email server. The former secretary of state was investigated by the FBI for using private equipment for government correspondence. "That was a mistake and I take responsibility for using a personal email account," she said. "I'm not making any excuses." Trump snorted with dissent and fired back, "All you have to do is take a look at WikiLeaks and see what they say." As with previous presidential sparring matches, Sunday night's debate was livestreamed next to real-time tweets from the #debate hashtag. Highlighting a key difference between the two social media platforms, Facebook also streamed the debate and placed live video next to algorithmically sorted posts and comments. On Twitter, Trump had almost twice as much conversation as Clinton — 64 percent to 36 percent. On Facebook, Trump had 76 percent of the conversation compared to Clinton's 24 percent. "Hillary has support of most influential social media commentators but high usage of Trump support hashtags suggest Trump's band of supporters more vocal," concluded Talkwalker, a social media analytics company. Among the top hashtags used during the debate, Talkwalker said, were Trump's #BigLeagueTruth with 108,000 mentions, Clinton's #ImWithHer with more than 92,000 mentions, #MAGA (Make America Great Again) with 73,000, and #CrookedHillary with 27,000. The second debate was not without humorous moments for social media commenters. As in the first battle, Trump sniffled throughout the sparring session, prompting #sniffles to become a top trend on Twitter. Jeers were a common response to Trump's threat, if he becomes president, to jail Clinton over her email scandal. The top tweeted moments also included Trump disagreeing with Mike Pence, his running mate, over American involvement in Syria; Trump saying he's "a gentleman" to both hoots and jeers; and Trump saying he would jail Clinton. The night wasn't totally negative, however. Both candidates increased their Twitter followings: @HillaryClinton added about 25,000, while @realDonaldTrump added 16,000, the social network said.


News Article | November 4, 2016
Site: www.prweb.com

The Opening Ceremony was held at the Diaoyutai State Guesthouse. Addressing the Opening Ceremony, Peking University President Professor Lin Jianhua said, "For universities in the Chinese mainland, we have a commitment to take on the responsibility for the society, the nation and the world. Peking University has inherited and has been diligently following this tradition. On the one hand, we teach students to bear social responsibility during their studies and after graduation. On the other hand, the universities have to bear their responsibility for the community, the region, and the nation, during their course of development and advancement." Professor Lin added, "This is a difficult mission, and it calls for concerted efforts of higher education leaders." Speaking at the Opening Ceremony, PolyU President Professor Timothy W. Tong said, "Over the last few decades, global challenges such as economic development, environmental protection and technological innovation have driven universities worldwide to redefine their roles and responsibilities beyond traditional education and research in order to bolster their impact on society. Consequently, social responsibility has become a subject high on the agenda." Professor Tong added, "The USR Network member universities share the same vision of making our world increasingly just, inclusive, peaceful and sustainable. With an emphasis on collaboration among members and with other networks and alliances, the Network has vigorously promoted USR by organizing a number of projects including this University Social Responsibility Summit." This year, the Summit has brought together more than 50 speakers who are higher education leaders and scholars from over 10 countries and regions. They exchanged views at three Presidents' Roundtable sessions respectively themed "Social Responsibility: A Core Mission of Universities in 21st Century?", "USR: Translating Vision into Action and Impact", and "USR in Asia: Challenges and Opportunities". Plenary sessions tomorrow (5 November) will include "Community Engagement in Higher Education: Policy and Practice", "Nurturing Future Leaders through Service-Learning: Strategies and Learning Outcomes" and "Building Disaster Response Capacity – University Students as Community First Responders". This is the first time the Summit has included a separate Student Forum, held on 4 November at the Peking University campus. The Forum attracted more than 100 students, many of them delegates from the USR Network member universities. In addition, four teams of students from PolyU, PekingU, Sichuan University and Beijing Normal University will give presentations tomorrow to share their views and practical experience of USR from the students' perspective. Their presence and contribution at the Summit are evidence of the USR Network's commitment to engaging the university community to address world challenges and shape a better future. The second Executive Committee meeting of the USR Network was held yesterday (3 November) to discuss the strategies and work for the coming year. With two new members, the University of Sao Paulo, Brazil, and the University of Pretoria, South Africa, the USR Network now includes the following 14 universities (in alphabetical order of their country):

Australia | University of New South Wales
Brazil | University of Sao Paulo
Hong Kong, P.R.C. | The Hong Kong Polytechnic University
Israel | University of Haifa
Japan | Kyoto University
Korea | Yonsei University
P.R.C. | Peking University
P.R.C. | Beijing Normal University
P.R.C. | Sichuan University
South Africa | University of Pretoria
U.K. | Clare Hall, University of Cambridge
U.K. | The University of Manchester
U.S.A. | Tufts University
U.S.A. | Washington University in St. Louis

For details of the USR Network, please visit http://www.usrnetwork.org.


News Article | October 25, 2016
Site: news.yahoo.com

Could "clean coal" meet the energy needs of the United States for the next 1,000 years, as Republican presidential nominee Donald Trump said on Sunday (Oct. 9) during the second presidential debate? Scientists contacted by Live Science are dubious both about whether current U.S. supplies of this fossil fuel could last more than a century and whether the country will start implementing industrywide practices to meet the clean coal definition. As of now, there aren't any operational U.S. coal plants that use so-called clean coal technology, said Edward Rubin, a professor of engineering, public policy and mechanical engineering at Carnegie Mellon University in Pittsburgh. Moreover, if the United States continues to use coal at its current rate of consumption, the known coal deposits will last only about 100 more years, according to a 2007 report from The National Academy of Sciences, a nongovernmental, nonprofit group chartered by the U.S. Congress at the request of President Abraham Lincoln. [Election Day 2016: A Guide to the When, Why, What and How] Trump's comment came about during a town-hall debate, held at Washington University in St. Louis. The newly minted internet star Ken Bone — the man with the bright-red sweater and black-rimmed glasses — asked both candidates, "What steps will your energy policy take to meet our energy needs, while at the same time remaining environmentally friendly and minimizing job loss for fossil power plant workers?" During his 2-minute response, Trump said, "There is a thing called clean coal. Coal will last for 1,000 years in this country." The Department of Energy (DOE) coined the term "clean coal" in the 1980s when it created a program with the same name that was devoted to advancing environmental technologies to remove regulated air pollutants from the emissions of coal-powered plants, Rubin said. "The DOE program was created to develop better or less expensive technology for controlling the major pollutants related to acid rain: sulfur dioxide and nitrogen oxide," Rubin told Live Science. Previous regulations had already addressed coal's particulate matter, including dust and soot, he noted. In 2009, the Environmental Protection Agency (EPA) classified the greenhouse gas carbon dioxide (CO2) as a pollutant that could endanger health and lead to climate change and ocean acidification, according to the EPA. In general, clean coal refers to removing any regulated pollutant emitted from burning coal, but nowadays, the focus is on CO2, Rubin said. This effort is called carbon capture and storage, or CCS. The technology for capturing CO2 already exists in the industry (sectors that actually transform raw materials into energy or products), but not in the electrical power sector, which has larger facilities than industry typically does, he said. What's more, this sector doesn't employ as many chemical engineers, who often implement carbon capture, as the industry does, Rubin said. Compressing CO2 and turning it into a "supercritical fluid," Rubin said, can carry out the storage part of CCS. Then, it can be transported through pipelines to a destination — an underground geologic formation, for example. However, implementing CCS costs money, and there's little impetus for power plants to use it unless they are required to do so by the government, Rubin said. [Science of Politics: Why Trump and Clinton Should Be Nice to Each Other] "From everything I've read about Mr. 
"From everything I've read about Mr. Trump's view of climate change, I suspect he would not be a proponent of policies to reduce carbon emissions drastically, although that would be a good question to ask him," Rubin said. There are two CCS plants in the country that are under development: the Kemper Project in Mississippi and the Petra Nova Project in Texas, but neither is open yet, Rubin said. (The National Energy Technology Laboratory website has a list of CCS projects through 2015.) But despite these two projects, clean coal does not appear to be spreading to the hundreds of other working coal plants in the country. Mary Finley-Brook, an associate professor of geography, environmental studies and international studies at the University of Richmond in Virginia, laid it out in an email to Live Science. "Clean coal does not currently exist," Finley-Brook wrote. "It will be expensive to develop and is uncertain (unlikely) to work, depending on what technology is selected. This is moving in the wrong direction from a climate change mitigation perspective. We need energy transition away from fossil fuels, not nice-sounding names to confuse people who don't know better." Pushker Kharecha, a climate scientist at The Earth Institute at Columbia University, added that even if clean coal were developed, it might create a sense of complacency when addressing climate change. "In my opinion, the top priority of emission-reduction efforts should instead be switching to non-fossil energy sources (renewables and nuclear) by as much as possible and as quickly as possible," Kharecha told Live Science in an email. Trump's "1,000 years" statement is off by an order of magnitude, Rubin said. In reality, the estimate is about one-tenth of that. "There is probably sufficient coal to meet the nation's needs for more than 100 years at current production levels," but for less than 250 years, according to the 2007 report. However, with natural gas use spiking, there is less demand for coal. In 2008, coal supplied about 50 percent of electricity to the United States, but it was just 33 percent in 2015, Kharecha said. Democratic presidential nominee and former Secretary of State Hillary Clinton may also have made an erroneous statement regarding energy when she said that the United States is "now, for the first time ever, energy independent." But that's not the case, as the United States imports about 9.4 million barrels of petroleum a day, mostly from Canada, according to NPR.
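The order-of-magnitude point is easy to check with round numbers. The figures below are assumptions for illustration (roughly 250 billion short tons of recoverable U.S. coal and roughly 1 billion short tons of annual production; neither number is taken from the article or the NAS report):

```python
# Back-of-the-envelope check of the "1,000 years" claim. Both inputs are
# assumed round figures, not values quoted in the article.
reserves_bst = 250.0      # recoverable U.S. coal, billion short tons (assumed)
production_bst = 1.0      # annual production, billion short tons (assumed)

years = reserves_bst / production_bst
print(f"~{years:.0f} years at current production")   # ~250, far short of 1,000
```

At these assumed figures the supply lasts a couple of centuries, in line with the report's range of more than 100 but fewer than 250 years, and nowhere near 1,000.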


News Article | December 23, 2016
Site: news.yahoo.com

FILE - In this Sunday, Oct. 9, 2016 file photo, from left, Melania Trump, Ivanka Trump, Eric Trump and Donald Trump, Jr. wait for the second presidential debate between Donald Trump and Democratic presidential nominee Hillary Clinton at Washington University in St. Louis. President-elect Donald Trump's children may see his move to the White House as a way to raise money for their favorite causes. Two recent fundraising pitches featuring the incoming first family were meant to benefit charities, but they also raised questions among ethics experts that the Trumps might be inappropriately selling access. (AP Photo/John Locher, File)

PALM BEACH, Fla. (AP) — One of President-elect Donald Trump's sons will stop directly raising money for his namesake foundation, saying he worries the donations could be perceived as buying access to his father. Eric Trump said Wednesday that it pained him to cease soliciting donations for his organization, which he says has raised more than $15 million for children terminally ill with cancer. "Fighting childhood cancer is a cause that has been central to my life since I was 21 years old," Eric Trump told The Associated Press. "It's an extremely sad day when doing the right thing isn't the right thing. That said, raising awareness for the cause will be a lifelong mission for me." The Trumps have come under scrutiny in recent days for charity ventures that offered access to the incoming first family — including the president-elect. Eric Trump's foundation scuttled a plan to raise money for the children's hospital through an online auction for coffee with his sister Ivanka Trump, who is considering joining the White House in some capacity. And Eric and Donald Trump Jr. backed away from an inauguration event that aimed to raise money for conservation charities. They were named as directors along with two of their friends in a new Texas-based nonprofit that had considered offering $1 million donors the chance to rub elbows with the new president at a "Camouflage & Cufflinks" ball in Washington the day after Trump's swearing-in. The nonprofit also proposed allowing some donors to join one or both of the sons on a hunting or fishing trip. A spokeswoman for the Texas secretary of state said Thursday that the nonprofit registration had been amended a day earlier to remove the Trump sons as directors. Their friend Gentry Beach, a Dallas businessman, is now listed as the sole director of what's called the Opening Day Foundation. In addition, that group has stripped all references to contributors meeting with any of the Trumps, although the sons remain listed as honorary co-chairmen on a revised invitation to the Jan. 21 event. Ethics counselors for Presidents Barack Obama and George W. Bush praised the family for making quick adjustments to avoid the appearance of selling access — but warned that the family must set up bright lines between their business and charitable ventures and the government they are about to lead. Eric Trump, the younger of the president-elect's two adult sons, has raised enough money over the last decade to fund a new intensive care unit at St. Jude Children's Research Hospital in Memphis, Tennessee, which provides free medical care for children. The focus on the Eric Trump Foundation comes after Donald Trump relentlessly criticized his Democratic opponent for the White House, Hillary Clinton, for allegedly providing favors to donors to the Clinton Foundation while she was secretary of state. She has denied those allegations.
News of Eric Trump's decision was first reported Wednesday by The New York Times. Eric Trump said he will likely wind down the Eric Trump Foundation — which had just one employee — but plans to continue public advocacy against childhood cancer. About $5 million of a $20 million, 10-year commitment to St. Jude remains outstanding, money that likely will be raised by donations from patrons at Trump-owned hotels and golf courses. Don Jr. and Eric Trump, who were among the Republican businessman's closest campaign advisers and have played an active role in the transition, are planning to remain in New York to run the massive Trump Organization once their father takes office. Critics have demanded the president-elect divest himself from his business. He was to have addressed the future of the company at a press conference last week, but it has been postponed to January. The future also remains murky for the Donald J. Trump Foundation, a separate charity run by the president-elect that solicited outside gifts and has been criticized for using donations to fund business interests. Associated Press writer Julie Bykowicz contributed to this report from Washington.


Li Z.-Y.,CAS Institute of Physics | Xia Y.,Washington University in St. Louis
Nano Letters | Year: 2010

Single-molecule detection via surface-enhanced Raman scattering (SERS) has raised great interest over the past decade. The usual approach toward this goal is to harness the strong surface plasmon resonance of light with complex metallic nanostructures, such as particle aggregates, two-particle gaps, sharp tips, or particles with sharp apexes. Here we propose another route toward the goal by introducing gain medium into single metal nanoparticles with simple geometry. Our calculations show that cubic gold nanobox particles that contain a gain material within the core can create an extremely high enhancement factor of local field intensity larger than 10^8 and a SERS enhancement factor on the order of 10^16-10^17. © 2010 American Chemical Society.
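For orientation, the link between those two numbers follows the standard electromagnetic "|E|^4" approximation for SERS (a textbook relation, not a formula quoted from the abstract): the SERS enhancement factor is roughly the product of the field intensity enhancements at the incident and Raman-shifted frequencies,

$$\mathrm{EF}_{\mathrm{SERS}} \approx \frac{|E(\omega_{\mathrm{inc}})|^{2}\,|E(\omega_{\mathrm{Raman}})|^{2}}{|E_{0}|^{4}} \approx \left(\frac{I_{\mathrm{loc}}}{I_{0}}\right)^{2},$$

so a local intensity enhancement above 10^8 squares to roughly 10^16, consistent with the reported range of 10^16-10^17.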


Deco G.,University Pompeu Fabra | Corbetta M.,Washington University in St. Louis | Corbetta M.,University of Chieti Pescara
Neuroscientist | Year: 2011

The authors review evidence that spontaneous brain activity (that is, activity not driven by a stimulus or task) at the level of large-scale neural systems is not noise, but orderly and organized in a series of functional networks that maintain, at all times, a high level of coherence. These networks of spontaneous activity correlation, or resting state networks (RSNs), are closely related to the underlying anatomical connectivity, but their topography is also gated by the history of prior task activation. Network coherence does not depend on covert cognitive activity, but its strength and integrity relate to behavioral performance. Some RSNs are functionally organized as dynamically competing systems both at rest and during tasks. Computational studies show that one such dynamic, the anticorrelation between networks, depends on noise-driven transitions between different multistable cluster synchronization states. These multistable states emerge because of transmission delays between regions that are modeled as coupled oscillator systems. Large-scale systems dynamics are useful for keeping different functional subnetworks in a state of heightened competition, which can be stabilized and fired by even small modulations of either sensory or internal signals. © The Author(s) 2011.
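To make the last idea concrete, here is a minimal Python sketch of a delay-coupled oscillator system: two noisy phase oscillators with a transmission delay. This is a toy, not the authors' large-scale model, and every parameter value is illustrative.

    import numpy as np

    # Toy sketch (not the authors' model): two noisy phase oscillators
    # coupled with a transmission delay, in the spirit of the delay-coupled
    # oscillator systems described above. All parameters are illustrative.
    rng = np.random.default_rng(0)
    dt, T = 1e-3, 20.0            # time step (s), total simulated time (s)
    n = int(T / dt)
    f = 40.0                      # intrinsic frequency (Hz)
    K = 5.0                       # coupling strength
    delay = 0.01                  # transmission delay (s)
    d = int(delay / dt)           # delay expressed in time steps
    sigma = 2.0                   # noise amplitude

    theta = np.zeros((n, 2))
    for t in range(max(d, 1), n - 1):
        for i, j in ((0, 1), (1, 0)):
            # each oscillator feels its partner's phase from `delay` seconds ago
            coupling = K * np.sin(theta[t - d, j] - theta[t, i])
            noise = sigma * np.sqrt(dt) * rng.standard_normal()
            theta[t + 1, i] = theta[t, i] + dt * (2 * np.pi * f + coupling) + noise

    # Phase-locking value over the second half of the run: values near 1 mean
    # the pair has settled into one synchronization state.
    plv = np.abs(np.mean(np.exp(1j * (theta[n // 2:, 0] - theta[n // 2:, 1]))))
    print(f"phase-locking value: {plv:.2f}")

Depending on the delay and the noise realization, such a pair can lock in-phase or anti-phase, a toy analogue of the noise-driven transitions between multistable synchronization states described above.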


Kouvelis P.,Washington University in St. Louis | Zhao W.,Shanghai JiaoTong University
Operations Research | Year: 2012

We consider a supply chain with a retailer and a supplier: a newsvendor-like retailer has a single opportunity to order a product from a supplier to satisfy future uncertain demand. Both the retailer and supplier are capital constrained and in need of short-term financing. In the presence of bankruptcy risks for both the retailer and supplier, we model their strategic interaction as a Stackelberg game with the supplier as the leader. We use the supplier's early payment discount scheme as a decision framework to analyze all decisions involved in optimally structuring the trade credit contract (a discounted wholesale price if paying early, a financing rate if delaying payment) from the supplier's perspective. Under mild assumptions we conclude that a risk-neutral supplier should always finance the retailer at rates less than or equal to the risk-free rate. The retailer, if offered an optimally structured trade credit contract, will always prefer supplier financing to bank financing. Furthermore, under optimal trade credit contracts, both the supplier's profit and supply chain efficiency improve, and the retailer might improve his profits relative to bank financing (or, equivalently, relative to a rich retailer under wholesale price contracts), depending on his current "wealth" (working capital and collateral). © 2012 INFORMS.
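For orientation, here is a minimal Python sketch of the frictionless benchmark underlying this setup: the textbook wholesale-price Stackelberg game with a newsvendor follower, solved by backward induction over a price grid. It deliberately omits everything the paper adds (capital constraints, bankruptcy risk, early-payment discounts), and all parameter values are hypothetical.

    import numpy as np
    from scipy.stats import norm

    # Frictionless benchmark only: no capital constraints, no bankruptcy,
    # no trade credit. Parameters are hypothetical.
    p, c = 10.0, 2.0                 # retail price, supplier's unit cost
    mu, sd = 100.0, 30.0             # demand ~ Normal(mu, sd)

    def retailer_q(w):
        """Newsvendor best response: critical fractile (p - w) / p."""
        return norm.ppf((p - w) / p, loc=mu, scale=sd)

    # The supplier (leader) anticipates the retailer's best response and
    # searches a grid of wholesale prices for its own profit maximizer.
    ws = np.linspace(c + 0.01, p - 0.01, 1000)
    profits = (ws - c) * np.array([retailer_q(w) for w in ws])
    w_star = ws[np.argmax(profits)]
    print(f"wholesale price {w_star:.2f}, order quantity {retailer_q(w_star):.1f}")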


Younes A.,University of Texas M. D. Anderson Cancer Center | Bartlett N.L.,Washington University in St. Louis | Leonard J.P.,Cornell University | Kennedy D.A.,Seattle Genetics | And 3 more authors.
New England Journal of Medicine | Year: 2010

BACKGROUND: Hodgkin's lymphoma and anaplastic large-cell lymphoma are the two most common tumors expressing CD30. Previous attempts to target the CD30 antigen with monoclonal antibody-based therapies have shown minimal activity. To enhance the antitumor activity of CD30-directed therapy, the antitubulin agent monomethyl auristatin E (MMAE) was attached to a CD30-specific monoclonal antibody by an enzyme-cleavable linker, producing the antibody-drug conjugate brentuximab vedotin (SGN-35). METHODS: In this phase 1, open-label, multicenter dose-escalation study, we administered brentuximab vedotin (at a dose of 0.1 to 3.6 mg per kilogram of body weight) every 3 weeks to 45 patients with relapsed or refractory CD30-positive hematologic cancers, primarily Hodgkin's lymphoma and anaplastic large-cell lymphoma. Patients had received a median of three previous chemotherapy regimens (range, one to seven), and 73% had undergone autologous stem-cell transplantation. RESULTS: The maximum tolerated dose was 1.8 mg per kilogram, administered every 3 weeks. Objective responses, including 11 complete remissions, were observed in 17 patients. Of 12 patients who received the 1.8-mg-per-kilogram dose, 6 (50%) had an objective response. The median duration of response was at least 9.7 months. Tumor regression was observed in 36 of 42 patients who could be evaluated (86%). The most common adverse events were fatigue, pyrexia, diarrhea, nausea, neutropenia, and peripheral neuropathy. CONCLUSIONS: Brentuximab vedotin induced durable objective responses and resulted in tumor regression for most patients with relapsed or refractory CD30-positive lymphomas in this phase 1 study. Treatment was associated primarily with grade 1 or 2 (mild-to-moderate) toxic effects. (Funded by Seattle Genetics; ClinicalTrials.gov number, NCT00430846.) Copyright © 2010 Massachusetts Medical Society.


Kouvelis P.,Washington University in St. Louis | Zhao W.,Shanghai JiaoTong University
Production and Operations Management | Year: 2011

We study a supply chain of a supplier selling via a wholesale price contract to a financially constrained retailer who faces stochastic demand. The retailer might need to borrow money from a bank to execute his order. The bank offers a loan fairly priced for the relevant risks. Failure to repay the loan leads to a costly bankruptcy (fixed administrative costs, costs proportional to sales, and a depressed collateral value). We identify the retailer's optimal order quantity as a function of the wholesale price and his total wealth (working capital and collateral). The analysis of the supplier's optimal wholesale price problem as a Stackelberg game, with the supplier the leader and the retailer the follower, leads to unique equilibrium solutions in wholesale price and order quantity, with the equilibrium order quantity smaller than the traditional newsvendor one. Furthermore, in the presence of the retailer's bankruptcy risks, increases in the retailer's wealth lead the supplier to charge higher wholesale prices, whereas absent those bankruptcy risks the supplier's wholesale prices stay the same or decrease as the retailer's wealth grows. © 2010 Production and Operations Management Society.
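A stylized, hypothetical illustration of the comparative static reported above (the equilibrium order falling below the traditional newsvendor quantity): folding a per-unit financing friction tau into the retailer's effective cost lowers the critical fractile and hence the order. This is not the paper's model, just the direction of the effect.

    from scipy.stats import norm

    # Stylized illustration only (not the paper's model): a hypothetical
    # per-unit financing friction tau shrinks the critical fractile, so the
    # order quantity falls below the traditional newsvendor benchmark.
    p, w, tau = 10.0, 6.0, 1.0        # price, wholesale price, friction
    mu, sd = 100.0, 30.0              # demand ~ Normal(mu, sd)

    q_newsvendor = norm.ppf((p - w) / p, loc=mu, scale=sd)
    q_constrained = norm.ppf((p - w - tau) / p, loc=mu, scale=sd)
    print(q_newsvendor, q_constrained)   # constrained quantity is smaller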


Patent
University of Illinois at Urbana - Champaign and Washington University in St. Louis | Date: 2014-10-02

Provided are devices and methods capable of interfacing with biological tissues, such as organs like the heart, in real time and using techniques which provide the ability to monitor and control complex physical, chemical, biochemical and thermal properties of the tissues as a function of time. The described devices and methods utilize microscale sensors and actuators to spatially monitor and control a variety of physical, chemical and biological tissue parameters, such as temperature, pH, spatial position, force, pressure and electrophysiology, and to spatially provide a variety of stimuli, such as heat, light, voltage and current.


Patent
University of Illinois at Urbana - Champaign and Washington University in St. Louis | Date: 2014-02-11

Provided are implantable or surface-mounted biomedical devices and related methods for interfacing with a target tissue. The devices have a substrate and device components supported by the substrate. The components of the device are specially configured and packaged to be ultra-thin and mechanically compliant. In particular, device thicknesses are less than 1 mm, with lateral dimensions between about 1 μm and 10 mm, depending on the application. Delivery substrates may be incorporated to assist with device implantation and handling. The devices can be shaped to permit injection in a minimally invasive manner, thereby avoiding unnecessary tissue damage and providing a platform for long-term implantation for interfacing with biological tissue.


Patent
Washington University in St. Louis and Auxagen, Inc. | Date: 2014-03-13

The present invention relates to adjuvant compositions, vaccine compositions, and methods of enhancing an immune response to an antigen.


News Article | November 29, 2016
Site: www.businesswire.com

LOS ANGELES--(BUSINESS WIRE)--Pritzker Group Venture Capital announced it is expanding its Los Angeles venture capital office—materially increasing its on-the-ground presence as it continues to pursue early-stage investment opportunities, particularly in enterprise, consumer tech, healthcare IT and emerging technology (artificial intelligence, internet of things, virtual reality). PGVC partner Gabe Greenbaum has relocated from the firm’s Chicago office to Los Angeles where he will lead the PGVC office in Los Angeles. Joining him from Chicago is Peter Liu, vice president. The firm also recently hired Nico Gimenez, who joins as an associate. “We’re bringing two of our best to expand our LA presence,” said Tony Pritzker, managing partner of Pritzker Group. “Gabe has led a number of the firm’s investments, such as X.ai, Catalytic, Augury, Hightower and AiCure, across our four main focus areas. He and Peter Liu are a perfect complement to the expanding LA technology ecosystem.” Drawn by LA’s great entrepreneurial ecosystem, strong universities, ability to recruit talent, proximity to the valley and, most of all, ample startups and early stage companies seeking a capital partner, PGVC has been increasing its presence in Los Angeles over several years and has invested in more than 20 LA-area companies, including Dollar Shave Club, Honest Company, Hello Giggles, AwesomenessTV, BigFrame, Pluto.TV, Graphiq and Airmap. PGVC’s LA-based companies have yielded significant success. PGVC was an early investor in most of these companies. Dollar Shave Club signed a definitive agreement to be acquired by Unilever for $1 billion in July 2016. Hello Giggles sold to Time, Inc. in 2015, and AwesomenessTV and BigFrame were both sold to DreamWorks. As PGVC expands in LA, it will continue its early stage focus with initial investments generally in the $3 million to $8 million range. The firm also deploys smaller amounts of capital (up to $1 million) into earlier stage companies to support serial entrepreneurs and other unique opportunities. While the firm typically invests $15 million to $20 million over the life of its investment in a company, PGVC has the flexibility to deploy up to $50 million in any one company. “In addition to being a value-added partner for entrepreneurs, with the Pritzker network that spans more than 20 years of investing and building businesses with key stakeholders, we’re also a deep pocketed partner that scales as our entrepreneurs scale,” notes Greenbaum. “We’re excited to build upon the success we’ve had in the LA region and look forward to partnering with more great entrepreneurs.” Pritzker Group, led by Tony and J.B. Pritzker, has three principal investment teams: Private Capital, which acquires and operates leading North America-based companies; Venture Capital, which provides multi-stage venture funding to technology companies throughout the United States; and Asset Management, which partners with top-performing investment managers across global public markets. Pritzker Group Venture Capital helps entrepreneurs build market-leading technology companies at every stage of their growth. Since its founding in 1996, the firm has worked side-by-side with entrepreneurs at more than 150 companies, building partnerships based on trust and integrity. The firm’s proprietary capital structure allows for tremendous flexibility, and its experienced team of investment professionals and entrepreneurs offers companies a vast network of strategic relationships and guidance. 
Successful exits in recent years include Cleversafe (acquired by IBM – NYSE: IBM), Dollar Shave Club (acquired by Unilever), Viv Labs (acquired by Samsung), Fleetmatics (IPO – NYSE: FLTX), SinglePlatform (acquired by Constant Contact), Playdom (acquired by Disney), and LeftHand Networks (acquired by Hewlett-Packard). For more information, visit pritzkergroup.com. Gabe Greenbaum is a partner of Pritzker Group Venture Capital. He joined Pritzker Group in 2012 and has led a number of the firm’s investments across enterprise software, marketplace technologies, machine learning and health care IT. His new role expands on his more than 13 years of entrepreneurial and investment experience, including co-founding two companies of his own, StudentSpace and SwiftIQ. Greenbaum has also been instrumental in building the Pritzker Group Venture Fellows program, helping to expand the firm’s presence in both Chicago and New York, and has been recognized as one of Crain’s 2016 #Tech50. He holds a bachelor’s degree with honors from Washington University in St. Louis, an MBA from Northwestern University’s Kellogg School of Management and a J.D. from Northwestern University School of Law. Peter Liu joined Pritzker Group in 2012 and has been involved in evaluating, executing and supporting more than 25 early- and growth-stage technology investments including SMS Assist, Eved, Opternative and Viv. Liu will continue to source deals and provide oversight for firm investments in digital media, enterprise software, health care IT and emerging technologies. In 2013, Liu founded VentureUP, a network that has galvanized the next generation of VC leaders through peer collaboration, authentic relationship building and professional development. He is a member of the prestigious Kauffman Fellows innovation investment program and the World Economic Forum’s Global Shapers Community, and a mentor for the Network for Teaching Entrepreneurship. He holds a bachelor’s degree in finance and management from the McIntire School of Commerce at the University of Virginia.


News Article | December 22, 2016
Site: www.eurekalert.org

DETROIT - Preterm birth -- birth before 37 weeks of pregnancy -- affects up to one in every six births in the United States and many other countries. It is the number one cause of infant death and long-term illnesses and imposes heavy social and economic burdens. Although preterm birth is a complex condition, infection of the mother and ensuing inflammation in pregnancy are very common triggers. In a recent study published in the premier biomedical research journal Nature Medicine, a team of researchers led by Wayne State University's Kang Chen, Ph.D., assistant professor of obstetrics and gynecology in the School of Medicine, discovered the critical function of a type of mother's immune cells -- B lymphocytes -- in resisting preterm birth triggered by inflammation. According to Chen, B lymphocytes make antibodies to defend the body against infections, but scientists and clinicians have always thought these cells are rare or absent in the uterine lining and not important for pregnancy. Chen's lab discovered that in late pregnancy, mothers' B lymphocytes not only reside in the uterine lining in both humans and mice, but also detect inflammation and uterine stress, which are major causes of preterm birth, and in turn, produce molecules -- including one called PIBF1 -- to suppress uterine inflammation and premature birth. "This study not only reveals the long-neglected function of B lymphocytes in promoting healthy pregnancy, but also supports therapeutic approaches of using B lymphocyte-derived molecules -- such as PIBF1 -- to prevent or treat preterm birth," said Chen. Chen's team has performed proof-of-concept and efficacy studies in animal models, and with the help of the Wayne State University's Technology Commercialization Office, filed a patent for this potential therapeutic approach. "It is truly remarkable that Kang has independently convened and led a team of outstanding scientists to accomplish this original and impressive tour de force, especially considering the many challenges he has encountered in the process," said Chen's collaborators, who included scientists and clinicians from Wayne State University, Beaumont Hospital Dearborn, Yale University, Memorial Sloan-Kettering Cancer Center, Washington University in St. Louis and Icahn School of Medicine at Mount Sinai. The lead authors are Wayne State postdoctoral fellows Bihui Huang and Azure Faucette, who have both assumed independent positions in academia. This study was supported by the Burroughs Wellcome Fund, the National Institute of Allergy and Infectious Diseases of the National Institutes of Health (U01AI95776 Young Investigator Award), the American Congress of Obstetricians and Gynecologists, the Wayne State University Perinatal Initiative, and Wayne State's Office of the Vice President for Research. Wayne State University is one of the nation's pre-eminent public research universities in an urban setting. Through its multidisciplinary approach to research and education, and its ongoing collaboration with government, industry and other institutions, the university seeks to enhance economic growth and improve the quality of life in the city of Detroit, state of Michigan and throughout the world. For more information about research at Wayne State University, visit research.wayne.edu.


News Article | November 21, 2016
Site: www.prweb.com

Two faculty members at Worcester Polytechnic Institute (WPI), José M. Argüello, the Walter and Miriam Rutman Professor of Biochemistry, and L. Ramdas Ram-Mohan, professor of physics, have been elected Fellows of the American Association for the Advancement of Science (AAAS), the world’s largest general scientific society. Election as a AAAS Fellow is an honor bestowed upon AAAS members by their peers in recognition of their scientifically or socially distinguished efforts to advance science or its applications. This year, 391 members have been awarded this honor. “We are delighted and very proud that Professors Argüello and Ram-Mohan are being honored by the AAAS,” said Bruce Bursten, WPI’s provost and retiring chair of the AAAS Section on Chemistry. “Election as a Fellow of the AAAS is a tangible recognition of our colleagues’ sustained academic excellence and their dedication to research and education.” Argüello was elected by the AAAS Section on Biological Sciences “for distinguished research discoveries elucidating the mechanisms underlying metal ion transport and the role of bacterial metal transporters in agriculture and infectious disease.” A member of the WPI faculty since 1996, he is a biochemist whose research focuses on the structure and function of proteins that transport heavy metals like copper, zinc, cobalt, and iron across cell membranes. These micronutrients perform fundamental functions in all living organisms, for example, maintaining structure, conferring catalytic activity to proteins, and participating in the transport of oxygen in the blood and the synthesis of sugars in plants. Metals also contribute to the virulence of pathogenic microorganisms and the ability of a cell to resist infection. Because of the importance of these basic biological functions, a better understanding of the mechanisms of heavy metal transport has implications for the treatment of a host of diseases, for human and animal nutrition, and for the bioremediation of heavy metal pollution. Argüello, who also holds an appointment as a member of the University of Massachusetts Center for Clinical and Translational Science, received a degree in biological chemistry from the National University of Cordoba and a PhD in biological sciences from the National University of Rio Cuarto in Argentina. He completed postdoctoral work in the Department of Physiology at the University of Pennsylvania and in the Department of Molecular Genetics at the University of Cincinnati. He has received multiple research grants from the National Science Foundation (NSF) and the National Institutes of Health (NIH), including an NIH Research Development Award for Minority Faculty in 1995 and a $1.3 million award in 2016 for a systematic study of copper in the bacteria Pseudomonas aeruginosa, a leading cause of hospital-associated infections. He has published nearly 60 scientific articles in peer-reviewed journals, including the Journal of Biological Chemistry, the most-cited biomedical research journal in the world; Argüello was appointed to the journal's editorial board in 2012. He is the co-editor of the 2012 book Topics in Membranes: Metal Transporters (Elsevier). 
Argüello served as a program director in the Division of Molecular and Cellular Biosciences at the NSF's Directorate for Biological Sciences in 2009, and in 2010 was appointed to a four-year term on the NIH's Macromolecular Structure and Function (A) study section to participate in the review and evaluation of research proposals aimed at understanding the nature of biological phenomena and applying that knowledge to enhance human health. In 2012, he received WPI’s Board of Trustees’ Award for Outstanding Research and Creative Scholarship. Ram-Mohan was elected by the AAAS Section on Physics “for major contributions to the development of computational algorithms and important advances in theory of electronic and optical properties of solid state and semiconductor materials.” Since joining the WPI faculty in 1978 he has developed an international reputation as a pioneer in solid state physics, a field that has helped propel extraordinary advances in the speed and power of computers, telecommunications systems, lasers, and other high-tech devices. In addition to exploring the quantum mechanical properties of condensed matter, Ram-Mohan has developed powerful computational tools that have made it possible to predict with great accuracy the properties of increasingly complex semiconductor and optoelectronic devices and to precisely control the design of these ubiquitous systems. Director of the university's Center for Computational NanoScience, Ram-Mohan has published more than 200 peer-reviewed papers on high-energy physics, condensed matter, and semiconductor physics that have garnered more than 3,800 citations. He is also the founder of wavefunction engineering, a method for specifying certain quantum properties of semiconductor heterostructures—assemblies of two dissimilar semiconductor materials that display unique electrical or optoelectronic properties. This innovative method arises from the application of the finite element method, or FEM, a numerical analysis technique used widely in engineering, to quantum heterostructures. Ram-Mohan, recognized as one of the foremost authorities on FEM, described this new field in his landmark 2002 book, Finite Element and Boundary Element Applications to Quantum Mechanics. He is also the founder of Quantum Semiconductor Algorithms Inc., which he established to commercialize his software for designing quantum semiconductor heterostructures. In 2012 he was named a Coleman Fellow at WPI in recognition of his entrepreneurial experience and expertise. Ram-Mohan's research has earned him numerous awards and honors, including election as a fellow of the American Physical Society, the Optical Society of America, the American Vacuum Society, the Australian Institute of Physics, and the United Kingdom Institute of Physics. He has received the Engineering Excellence Award of the Optical Society of America and the Department of the Air Force Certificate of Achievement, and held the Clark Way Harrison Distinguished Visiting Professorship at Washington University in St. Louis in 2005. In 2008 he was awarded the Sarojini Damodaran Fellowship to deliver lectures at the Tata Institute of Fundamental Research in Mumbai. WPI has recognized his research, teaching, and service with the Sigma Xi Senior Faculty Award for Research Excellence, the Board of Trustees' Award for Outstanding Creative Scholarship and Research, the Board of Trustees' Award for Outstanding Teaching, and the Chairman’s Exemplary Faculty Prize.
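To give readers a flavor of what applying the finite element method to quantum mechanics involves, the sketch below solves a particle in a one-dimensional infinite well with linear finite elements (in Python, with hbar = m = 1). This is a generic textbook exercise, not Ram-Mohan's heterostructure software.

    import numpy as np
    from scipy.linalg import eigh

    # Linear ("hat") finite elements for -0.5 * psi'' = E * psi on (0, L)
    # with psi = 0 at both walls; a standard textbook FEM exercise.
    N, L = 200, 1.0                       # interior nodes, well width
    h = L / (N + 1)

    # Stiffness K_ij = 0.5 * integral(phi_i' phi_j'): tridiagonal (2, -1)/h.
    main, off = np.full(N, 2.0 / h), np.full(N - 1, -1.0 / h)
    K = 0.5 * (np.diag(main) + np.diag(off, 1) + np.diag(off, -1))
    # Mass M_ij = integral(phi_i phi_j): tridiagonal (4, 1) * h/6.
    M = (h / 6.0) * (np.diag(np.full(N, 4.0)) + np.diag(np.ones(N - 1), 1)
                     + np.diag(np.ones(N - 1), -1))

    # Generalized eigenproblem K c = E M c; boundary nodes already dropped.
    E, _ = eigh(K, M)
    exact = np.array([1, 2, 3]) ** 2 * np.pi ** 2 / 2.0
    print(np.round(E[:3], 4), np.round(exact, 4))   # FEM vs. exact energies

With 200 elements the three lowest FEM energies should land within a fraction of a percent of the exact values n^2 pi^2 / 2, which is the basic accuracy argument for FEM-style wavefunction engineering.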
Professors Argüello and Ram-Mohan will receive an official certificate and a gold and blue (representing science and engineering, respectively) rosette pin during the AAAS annual meeting on Feb. 18, 2017, in Boston. They join four current AAAS fellows at WPI: Provost Bruce Bursten, Dean of Arts and Sciences Karen Kashmanian Oates, and biology professors David Adams and Pamela Weathers. Founded in 1865 in Worcester, Mass., WPI is one of the nation’s first engineering and technology universities. Its 14 academic departments offer more than 50 undergraduate and graduate degree programs in science, engineering, technology, business, the social sciences, and the humanities and arts, leading to bachelor’s, master’s and doctoral degrees. WPI's talented faculty work with students on interdisciplinary research that seeks solutions to important and socially relevant problems in fields as diverse as the life sciences and bioengineering, energy, information security, materials processing, and robotics. Students also have the opportunity to make a difference to communities and organizations around the world through the university's innovative Global Projects Program. There are more than 45 WPI project centers throughout the Americas, Africa, Asia-Pacific, and Europe.


News Article | October 13, 2016
Site: www.sciencenews.org

Anyone who reads news about science (at Science News or otherwise) will recognize that, like the X-Men or any other superhero franchise, there’s a recurring cast of experimental characters. Instead of Magneto, Professor X, Mystique and the Phoenix, scientists have mice, fruit flies, zebrafish and monkeys. Different types of studies use different stand-ins: Flies for genetics; zebrafish for early development; rats and mice and monkeys for cancer, neuroscience and more. Many of these species have been carefully bred so they are genetically identical, giving scientists maximum control as they study changes in genetics or environment. These animal models have added huge volumes to our understanding of human and animal biology, and will continue to add to our knowledge for many years to come. Now, new techniques such as gene editing mean that scientists can probe and alter the genes of any animal. The methods open the door for new organisms — such as squid and octopuses — to join scientists’ basic toolkits. With these new arrivals come new questions. What is needed for a good animal model, and how are gene-snipping tools changing the game? In the early days of biology, the main emphasis was on description of organisms. But description did not allow for experimentation. “One of the major reasons for doing experiments is [that] you can control a system and make predictions from it,” says Garland Allen, a science historian at Washington University in St. Louis. To conduct good experiments, scientists needed models — stand-ins that had some characteristic they needed to know about, whether that was, say, genetics, heart function or behavior. Those stand-ins had to be well-controlled, well-characterized animals that could be kept the same in every way. Once experiments were conducted in those models, the findings could be applied to other species. The first animal models, then, came about from efforts to control experiments. Animals such as mice and flies are relatively easy to genetically manipulate.  Pick out characteristics that are reliably passed down and easy to identify, such as fly eye color and wing shape. To ensure as much purity as possible, inbreed mice to each other, brother to sister, so they are all genetically identical. “For the past 100 years the tendency and the trend has been to develop as consistent a model as possible, so every individual will respond the same way to the same challenges,” says Nadia Rosenthal, the scientific director at Jackson Laboratory in Bar Harbor, Maine. “If you have five mice that are identical genetically you can modulate their environment with impunity, secure in the knowledge when you get the result, it’s going to be about the environment.” Animals such as flies, rats and mice have other advantages. First, they breed like — well, mice. “Mice … have a very short reproduction cycle. That’s a critical thing in genetics,” Rosenthal explains. “You need multiple generations, and unless you are very patient, elephants are not a good model for this.” Other models, such as zebrafish, have the advantage of clear embryos. Transparent eggs and fry are “ideal for getting light in and getting light out,” notes Eric Edsinger, who develops model organisms at the Marine Biological Laboratory in Woods Hole, Mass. This can be useful for everything from using a simple light microscope to watch cells divide to using light-based techniques to drive genetic and cellular actions. Once these model organisms grow up, another advantage becomes clear — they grow up small. 
When looking for model organisms, Edsinger explains, it’s a real boon to be able to keep many in the lab at once. An octopus model can be useful for studying motion, nervous systems and camouflage — but you’d need a lot of space. “A standard aquarium might only be able to hold one octopus, but you could have 10 pygmy squid,” he notes. A lab the size of a closet can easily hold thousands of fruit flies — a real advantage when scientific space and funding are tight. It also helps if their food is cheap, Allen notes. Grain pellets for mice and rats or a single banana for fruit flies means less money spent. But perhaps the biggest advantage a model organism can have is being able to rough it. “Some octopuses, if the pH just get off by a little bit, they’ll deteriorate and die,” says Edsinger. But pygmy squid — an animal model he has been developing — are a lot more resilient. “I’ve had pygmy squid where I threw them in a bucket and left them on my porch and they were fine. They didn’t mind,” he says. Having a genetically identical, reliable animal model meant the most when an animal’s genetic blueprints, or genomes, were hard to come by. In flies and mice, “their robustness comes in their genetics, and in particular the ability to rapidly develop systems where we can alter genes,” explains Jonathan Gitlin, a developmental biologist and director of research at MBL. That is changing with the advent of CRISPR/Cas9. This gene editing system allows scientists to target specific spots in a genome, where the Cas9 enzyme can then slice, dice and even add in new genes. CRISPR/Cas9 doesn’t require knowing the entire genetic blueprint of the organism you’re tweaking, or developing a specialized system for editing each new area. All the scientist needs is a guide sequence to the right spot.  The ability to edit a single, carefully targeted gene in each organism means that individual members don’t need to be the perfect genetic clones currently filling mouse and fly labs. Scientists may be able to tweak the genes of a single animal to see an effect, and juggle the same gene in another and observe the same impact, even though the two animals aren’t otherwise genetically similar. And if scientists don’t need known DNA sequences, the animal world is their oyster. “I can now take virtually any organism and manipulate the genome and create model systems where I can track cells, manipulate genes,” says Gitlin. Before, he says, if you thought the answers to your scientific question could be found in a cephalopod, “you’d be limited because you [couldn’t] manipulate the genome. Now, you can.” With the help of CRISPR/Cas9, Edsinger is hoping to establish one or more cephalopod organisms that can be used to study motion, camouflage and neural systems. The new models could benefit robotics, computing, prosthetics and much more.  “Genome editing is democratizing the ability of a single person to spend a year focused on something and do powerful functional studies,” Edsinger says. But plenty of care — and plenty of octopuses — will still be required. “I’m not sure there’s any attribute that doesn’t vary at some level,” Rosenthal notes. “You have to assume you’re going to see variability and build that into your experimental design.” CRISPR/Cas9 bursts the world of animal models wide open. But it also runs up against established science and scientific culture.  Right now, the biggest question for new animal models is who will buy. “It’s a marvelous cultural conundrum,” Rosenthal says. 
“People want to use what others have used, so you can build on people’s observations.” If a scientist writes a grant to study mice, he or she can cite large amounts of literature in that model. For a grant to study cephalopods, the history is a lot thinner. And scientists are human, she notes; they feel most comfortable with what they’ve seen before. So the cycle begins. A grant review committee might reject a grant to work on a new squid or squirrel, citing a lack of research in the area. In rejecting the grant, research doesn’t get done, and so there’s no new literature in the field. Wash, rinse, repeat. Edsinger wonders how the world of animal models will look in five or 10 years. Will it be 10 labs trying to establish a single species, or will each lab be working on their favorite? A single species might make for better controlled experiments. But a single scientific laboratory working on a charismatic species might catch the public eye. Young scientists trying to establish new animal models are also trying to establish their careers. In a world filled with mice, fish and flies, Edsinger says, “there’s no template for that.” While CRISPR/Cas9 may provide the tools, it is up to the culture of science itself to determine how many new animals will join the model zoo.
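To make the phrase "a guide sequence to the right spot" concrete: for the widely used S. pyogenes Cas9, a target site is a roughly 20-nucleotide stretch lying immediately upstream of an "NGG" PAM motif. The Python sketch below scans a made-up DNA string for such sites; it is a toy illustration, not a production guide-design tool.

    import re

    # Hypothetical sequence, invented for illustration.
    dna = "ATGCGTACCGGATTACCGTAGGCTTAGGAACGTTAGCCGGTACGGATCCAGG"

    def find_guides(seq, guide_len=20):
        """Return (guide, PAM) pairs: 20 nt immediately 5' of each NGG motif."""
        guides = []
        for m in re.finditer(r"(?=([ACGT]GG))", seq):  # lookahead catches overlaps
            start = m.start()
            if start >= guide_len:
                guides.append((seq[start - guide_len:start], m.group(1)))
        return guides

    for guide, pam in find_guides(dna):
        print(f"guide: {guide}  PAM: {pam}")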


News Article | November 29, 2016
Site: www.prweb.com

Teckst, a first-to-market service that enables two-way text messaging for customer service and eliminates millions of hours of hold time, today announced the expansion of its executive and technology leadership team. Teckst welcomes Ron Garret, one of the first Google employees and lead engineer on the inaugural release of AdWords, and Jodd Readick, co-founder of several innovative telecom firms including Vumber.com, as Technology Advisors. In addition, Teckst has appointed Kris Wiig as the company’s VP of Sales to drive its rapidly growing sales organization. Alson Kemp joins Teckst as VP of Engineering, to lead its growing engineering team and expand Teckst's mobile customer service product portfolio. The leadership team expansion comes on the heels of Teckst’s recent $2.5M funding round. Teckst is experiencing explosive growth, as customer service teams embrace texting as the next big channel in customer service communications. Teckst has amassed more than 100 enterprise customers across 19 industry verticals, integrated with every CRM platform and introduced a beta of a new, cutting-edge bionic (human + AI) TeckstBot™. “Texting for customer service has moved from a nascent idea to primetime, competing directly with channels like Twitter, web chat, and email. We’re seeing incredible demand from enterprises from retail to ride sharing. Adding key players to our team will help maintain our significant lead in the space that we created. The unmatched experience of Ron, Jodd, Kris and Alson will be instrumental for taking Teckst to the next level,” said Matt Tumbleson, CEO & Founder of Teckst. Ron Garret, Ph.D.: Ron is the founder and CEO of Spark Innovations, a maker of auditable digital security products. As an angel investor and serial entrepreneur, he has been a principal in half a dozen startups, including three which he co-founded. One of those companies was acquired pre-launch by Richard Branson and launched as Virgin Charter. Ron was one of the first employees at Google and served as the lead engineer on AdWords, which accounts for a vast majority of Google’s revenues. He began his career at the NASA Jet Propulsion Lab where he worked on ground-breaking research in artificial intelligence and robotics, which ultimately enabled the Mars rover missions. Jodd Readick: Jodd has spent over 30 years as a serial entrepreneur, product innovator and management consultant. Having designed and managed telecom infrastructure systems and software as both an entrepreneur and a Fortune 100 executive, Jodd is a co-founder of four innovative telecom firms, including Vumber.com. He spent five years as Manager of Marketing and Strategic Planning for the DuPont unit that created the Rapid Interactive Prototyping methodology (now known as Agile Development). Previously, he spent six years as a subject matter expert for Arthur Andersen, including on M&A transactions for Samsung and AT&T. Jodd created the least-cost routing system for NYNEX Mobile, advised Wells Fargo on call center architecture, and created the first expert system for medical collection billing and the first auto-dialer. Kris Wiig: Kris is an accomplished sales leader with deep expertise in helping B2B companies develop and execute go-to-market strategies and building sales teams for disruptive technologies. Prior to Teckst, Kris served as Director of Enterprise Sales and as Instructor of Sales Strategy and Sales Communications at General Assembly.
Previously, Kris served as Chief Revenue Officer for LogCheck, a mobile platform for buildings and facilities management, where she helped drive significant revenue growth. Kris began her sales career at Bloomberg. Kris is an active volunteer basketball coach and works with Positive Coaching Alliance to lead workshops for youth sports organizations for young athletes. Alson Kemp: Alson comes to Teckst from Vium, where he was the first employee and rose to Head of Platform, leading the development of Vium's next-generation preclinical research platform and software. Previously, he served as the VP of Engineering at Cerego, where he modernized the infrastructure and guided the expansion of Cerego’s personalized learning platform. As a partner at ROI.DNA, a digital marketing agency, Alson built and led the engineering team through many successful engagements, including StitchFix, BrowserMob and Symantec. Alson's degrees include an MBA from MIT, an MSEE from Oregon State University and a BSEE from Washington University in St. Louis. “When I was first introduced to Teckst, I was struck by the fact that no one was doing what they were doing and have since had multiple customer service interactions that I wish had been SMS based. Teckst has an incredible, enthusiastic team and I’m thrilled to come on board. As we scale, I look forward to adding AI, Natural Language Processing, translations and a range of other tools to massively improve the future of customer service,” said Alson Kemp. New Features for speed, compliance, and service Teckst has introduced several new features to bolster its native texting platform, making it even easier for companies to have two-way human to human conversations. Teckst usage is growing more than 25% month over month, and will process more than 4 million texts this month. At its current rate, Teckst has reduced more than 60,000 hours of hold time and has saved enterprises millions of dollars. Major retailers, mobile carriers, ride-sharing and other innovative customer-centric companies such as JackThreads, KeyMe, Luxe, MM.LaFleur, Managed by Q, Memebox, Plastiq, Shinola, Swoonery and thredUP are turning to Teckst to transform customer engagement with fast, immediate, and conversational texts. About Teckst Teckst is a first-to-market service that enables two-way text messaging for customer service teams. Teckst is embraced by mid-market early adopters and enterprises to modernize customer engagement and directly integrates with every CRM system. Teckst was founded by former Seamless/GrubHub Creative Director, Matt Tumbleson and is headquartered in New York City. Visit teckst.com for more information. Follow Teckst on Twitter, Crunchbase, AngelList, or check out the Teckst Blog.


News Article | October 28, 2016
Site: www.prweb.com

The Article 20 Network, a new human rights organization formed to defend and advance the right to Freedom of Assembly worldwide, announces its official launch weeks before the U.S. presidential election. The Article 20 Network was created over the summer to address obstacles and threats to peaceful demonstration and protest — from the recent use of "free speech zones" at Hofstra’s U.S. presidential debate to the outright ban on anti-Mugabe demonstrations in Zimbabwe. Demonstrations and protests have been growing in size and frequency around the world in the 21st century, with social media and stark racial and economic disparities driving new levels and types of activism. In this new age of dissent, nonviolent demonstrations are increasingly maligned, restricted or met with violence. The Article 20 Network exists to develop creative and strategic solutions to promote the freedom of assembly, a distinct form of free expression – “the body as voice” – through nonviolent means. “We’re days away from a U.S. presidential election that promises to inflame differences of opinion and see more street protests regardless of the outcome,” says the Article 20 Network co-founder and executive director Dan Aymar-Blair. "Elections choose our leaders. Public demonstrations hold them accountable." This summer, the U.N. Special Rapporteur on Free Assembly and Association, Maina Kiai, delivered a scathing critique of the state of peaceful assembly in the United States. His remarks underscore the urgency of the Article 20 Network's efforts. Unique as a standalone nonprofit organization focusing solely on the human rights of protesters, the Article 20 Network considers not only concerns for the immediate safety of all involved in demonstrations (protesters, law enforcement, media and observers), but also bigger questions like the design of public spaces and the role technology plays in the choreography of assemblies. The Article 20 Network has garnered support from prominent social critics and experts. Noam Chomsky says, “The right of free assembly is an essential element of a free and democratic society, the kind of society that we should strive to create and to defend. The Article 20 Network merits praise and support for upholding this fundamental right.” "The right of peaceful assembly is a cornerstone of our basic liberties. It is distinct from, and often enables, rights to speech and expression. The Article 20 Network promotes and embodies these claims at a time when citizens too often lack basic knowledge of the right of assembly and governments too often suppress that right. Their mission is both important and urgent," says John Inazu, Sally D. Danforth Distinguished Professor of Law and Religion, Washington University in St. Louis, and author of Liberty’s Refuge: The Forgotten Freedom of Assembly. “In recent years, the United States has experienced the proliferation of free speech zones, the militarization of protest policing, and other challenges to public dissent,” says Timothy Zick, Mills E. Godwin, Jr. Professor of Law, William & Mary Law School, and author of Speech Out of Doors. “Now more than ever, the Article 20 Network’s mission of preserving and protecting freedom of assembly is vitally important to our democratic processes.” Identifying public awareness as the first step in its strategic plan, the Article 20 Network is rolling out an online knowledge library for community leaders and activists and offering a series of seminars on public demonstration as a human right. 
The Article 20 Network is a 501(c)(3) nonprofit organization based in New York City. It takes its name from Article 20 of the Universal Declaration of Human Rights, which secures the human right to peaceful assembly. The Article 20 Network is currently seeking to reach a larger base of supporters interested in protecting the Freedom of Assembly in the United States and around the world. Visit http://www.a20n.org to find out how to get involved or to make a donation.


News Article | December 8, 2016
Site: www.chromatographytechniques.com

Nothing ruins a potentially fun event like putting it on your calendar. In a series of studies, researchers found that scheduling a leisure activity like seeing a movie or taking a coffee break led people to anticipate less enjoyment and actually enjoy the event less than if the same activities were unplanned. That doesn't mean you can't plan at all: The research showed that roughly planning an event (but not giving a specific time) led to similar levels of enjoyment as unplanned events. "People associate schedules with work. We want our leisure time to be free-flowing," said Selin Malkoc, co-author of the study and assistant professor of marketing at The Ohio State University's Fisher College of Business. "Time is supposed to fly when you're having fun. Anything that limits and constrains our leisure chips away at the enjoyment." Malkoc conducted the study with Gabriela Tonietto, a doctoral student at Washington University in St. Louis. Their results are published in the Journal of Marketing Research. In the paper, they report on 13 separate studies that looked at how scheduling leisure activities affects the way we think about and experience them. In one study, college students were given a calendar filled with classes and extracurricular activities and asked to imagine that this was their actual schedule for the week. Half of the participants were then asked to make plans to get frozen yogurt with a friend two days in advance and add the activity to their calendar. The other half imagined running into a friend and deciding to get frozen yogurt immediately. Results showed that those who scheduled getting frozen yogurt with their friend rated the activity as feeling more like a "commitment" and "chore" than those who imagined the impromptu get-together. "Scheduling our fun activities leads them to take on qualities of work," Malkoc said. The effect is not just for hypothetical activities. In an online study, the researchers had people select an entertaining YouTube video to watch. The catch was that some got to watch their chosen video immediately. Others chose a specific date and time to watch the video and put in on their calendar. Results showed that those who watched the scheduled video enjoyed it less than those who watched it immediately. While people seem to get less enjoyment out of precisely scheduled activities, they don't seem to mind if they are more roughly scheduled. In another study, the researchers set up a stand on a college campus where they gave out free coffee and cookies for students studying for finals. Before setting up the stand, they handed out tickets for students to pick up their coffee and cookies either at a specific time or during a two-hour window. As they were enjoying their treat, the students filled out a short survey. The results showed that those who had a specifically scheduled break enjoyed their time off less than did those who only roughly scheduled the break. "If you schedule leisure activities only roughly, the negative effects of scheduling disappear," Malkoc said. Aim to meet a friend "this afternoon" rather than exactly at 1 p.m. One study showed that even just setting a starting time for a fun activity is enough to make it less enjoyable. "People don't want to put time restrictions of any kind on otherwise free-flowing leisure activities," she said. Malkoc said these findings apply to short leisure activities that last a few hours or less. 
The results also have implications for leisure companies that provide experiences for their customers, Malkoc said. For example, some amusement parks offer tickets for their most popular rides that allow people to avoid long lines. But this research suggests that people will enjoy these rides less if the tickets are set for a particular time. Instead, the parks should give people a window of time to board the ride, which would be the equivalent of rough scheduling in this study.


News Article | January 31, 2017
Site: www.medicalnewstoday.com

Researchers at the University of Pittsburgh and Washington University in St. Louis have provided the first details of how enteroviruses, which cause millions of infections worldwide annually, may enter the body through the intestine. The results of the study are published in the journal Proceedings of the National Academy of Sciences. Enterovirus infections are associated with diseases that can range from mild flu-like symptoms to much more severe outcomes such as inflammation in the brain or heart, acute paralysis, and even death. Enterovirus infections acquired within neonatal intensive care units (NICU) can be devastating as newborns are particularly susceptible to infection by these viruses. Enteroviruses are a class of viruses that are the second most common human infectious agents and are primarily transmitted through close person-to-person contact, touching infected surfaces, or ingesting food or water containing the virus. "Despite their major global impact, especially on the health of children, little is known about the route that these viruses take to cross the intestine, their primary point of entry. Our approach has for the first time shed some light on this process," said senior author Carolyn Coyne, Ph.D., associate professor of microbiology and molecular genetics at the Pitt School of Medicine. In the study, researchers isolated stem cells from premature human small intestines and grew them in the laboratory into enteroids, or so-called "mini-guts," which contained the different cell types and tissue structures that are normally found in the human intestine. Using the mini-gut model, they demonstrated that echovirus 11 (E11), the enterovirus most commonly associated with NICU infections, induced significant damage to the enteroids, which could facilitate passage of the virus into the bloodstream from the infected intestine. The results also provided the first evidence that different types of enteroviruses could target distinct cells within the gastrointestinal tract and might vary in their effectiveness at infecting intestinal cells. "This study not only provides important insights into enterovirus infections, but also provides an important model that could be used to test the efficacy of anti-enterovirus therapeutics in the premature intestine," said Misty Good, M.D., assistant professor of pediatrics at the Washington University School of Medicine in St. Louis and co-senior author of the study. Other authors of the study included Coyne G. Drummond, B.S., and Congrong Ma, M.Sc., of the University of Pittsburgh; and Alexa M. Bolock, B.S., and Cliff J. Luke, Ph.D., of Washington University School of Medicine in St. Louis. The study was funded by National Institutes of Health grants R01AI081759 and K08DK101608, the Burroughs Wellcome Fund, and Children's Hospital of Pittsburgh of UPMC.


News Article | February 16, 2017
Site: www.rdmag.com

A new diagnostic method has correctly predicted autism in 80 percent of high-risk infants, according to a new study. Researchers at the University of North Carolina have developed a method using magnetic resonance imaging (MRI) in infants with older siblings with autism to correctly predict whether the infants would later meet the criteria for autism at two years old. “Our study shows that early brain development biomarkers could be very useful in identifying babies at the highest risk for autism before behavioral symptoms emerge,” Dr. Joseph Piven, the Thomas E. Castelloe Distinguished Professor of Psychiatry at UNC and senior author of the paper, said in a statement. “Typically, the earliest an autism diagnosis can be made is between ages two and three. But for babies with older autistic siblings, our imaging approach may help predict during the first year of life which babies are most likely to receive an autism diagnosis at 24 months.” It is estimated that one out of every 68 children in the U.S. develops Autism Spectrum Disorder (ASD). These patients have characteristic social deficits and demonstrate a range of ritualistic, repetitive and stereotyped behaviors. Despite extensive research, it has been impossible to identify those at ultra-high risk for autism before age two, which is the earliest time when the hallmark behavioral characteristics of ASD can be observed and a diagnosis made in most children. In the study, the researchers conducted MRI scans of infants at six, 12 and 24 months old. The researchers found that the babies who developed autism experienced a hyper-expansion of brain surface area from six to 12 months, as compared to babies who had an older sibling with autism but did not themselves show evidence of the condition at 24 months of age. They also found that increased growth rate of surface area in the first year of life was linked to increased growth rate of overall brain volume in the second year, which is tied to the emergence of autistic social deficits in the second year. The next step was to take the data (MRI measures of brain volume, surface area and cortical thickness at six and 12 months of age, plus the sex of the infants) and use a computer program to identify a way to classify the babies most likely to meet criteria for autism at two years old. The computer program developed an algorithm that the researchers applied to a separate set of study participants. The researchers concluded that brain differences at six and 12 months in infants with older siblings with autism correctly predicted eight of 10 infants who would later meet criteria for autism at two years old, in comparison to those with older ASD siblings who did not meet the criteria at two years old. “This means we potentially can identify infants who will later develop autism, before the symptoms of autism begin to consolidate into a diagnosis,” Piven said. This test could be helpful for parents who have a child with autism and then have a second child, allowing them to intervene ‘pre-symptomatically,’ before the emergence of the defining symptoms of autism. Researchers could then begin to examine the effect of interventions on children during a period before the syndrome is present and when the brain is most malleable. “Putting this into the larger context of neuroscience research and treatment, there is currently a big push within the field of neurodegenerative diseases to be able to detect the biomarkers of these conditions before patients are diagnosed, at a time when preventive efforts are possible,” Piven said. “In Parkinson’s, for instance, we know that once a person is diagnosed, they’ve already lost a substantial portion of the dopamine receptors in their brain, making treatment less effective.” The research, which was led by researchers at the Carolina Institute for Developmental Disabilities (CIDD) at the University of North Carolina, where Piven is director, included hundreds of children from across the country. The project’s other clinical sites included the University of Washington, Washington University in St. Louis and The Children’s Hospital of Philadelphia. Other key collaborators are McGill University, the University of Alberta, the University of Minnesota, the College of Charleston and New York University. “This study could not have been completed without a major commitment from these families, many of whom flew in to be part of this,” first author Heather Hazlett, Ph.D., assistant professor of psychiatry at the UNC School of Medicine and a CIDD researcher, said in a statement.
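For readers curious what "a computer program to identify a way to classify babies" can look like in outline, here is a minimal cross-validated classifier sketch in Python. The feature columns and random placeholder data are stand-ins invented for illustration; the study used its own algorithm on real longitudinal MRI measures.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    # Placeholder columns standing in for the inputs named in the article:
    # brain volume, surface area, cortical thickness at 6 and 12 months, sex.
    X = rng.normal(size=(106, 7))
    y = rng.integers(0, 2, size=106)    # 1 = met ASD criteria at 24 months

    # Train on some folds, evaluate on held-out folds, mimicking the idea of
    # applying the learned rule to a separate set of participants.
    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    scores = cross_val_score(clf, X, y, cv=10)
    print(f"mean cross-validated accuracy: {scores.mean():.2f}")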


News Article | December 22, 2016
Site: www.eurekalert.org

New research findings provide insight into the immune system pathways that may be key to developing an effective tuberculosis (TB) vaccine. The study, to be published Thursday in the journal Nature Communications, was supported by the National Institute of Allergy and Infectious Diseases (NIAID), part of the National Institutes of Health. Globally, an estimated 10.4 million new TB cases occurred in 2015, according to the World Health Organization. A TB vaccine called bacille Calmette-Guérin (BCG) is currently used in countries with a high prevalence of TB to prevent severe forms of the disease in children. However, the protection it provides against pulmonary TB in adults is highly variable, and people vaccinated with BCG are more likely to test false-positive on TB skin tests. Therefore, researchers are working to create an improved TB vaccine. In animal research, scientists have found that during an infection with the tuberculosis bacterium Mycobacterium tuberculosis, immune system cells called T cells are slow to congregate in the lungs and attack the invaders, allowing infection to take hold despite vaccination. Previous research has suggested that a key to a faster immune response might lie with dendritic cells that present molecules from bacteria, viruses, and other harmful invaders to disease-fighting T cells, prompting the T cells to attack incoming pathogens. In this new study, scientists led by a group at Washington University in St. Louis investigated whether the delay could be prevented by activating certain immune pathways. First, the researchers established that T cells from vaccinated mice were capable of responding well to TB bacteria. The scientists then introduced dendritic cells that had already been "primed" by exposure to molecules from TB bacteria into the lungs of vaccinated mice, and exposed the mice to different strains of M. tuberculosis. The researchers found that the vaccinated mice that also received activated dendritic cells at the time of infection reached near-sterilizing levels of immunity against TB. Next, the scientists tried to activate the same immune pathways but without the primed dendritic cells. Specifically, mice vaccinated with the BCG vaccine were treated with two different experimental compounds (amph-CpG and FGK4.5) along with TB antigens designed to activate dendritic cells in the lungs, and then exposed to TB bacteria. The treated mice mounted faster and stronger immune responses against the bacteria, the researchers report. The scientists note that the technique they used in mice may be impractical in humans because the compounds or activated dendritic cells would need to be applied at the time of infection, which is difficult to predict or detect in the real world. However, the research has led to a better understanding of the mechanisms of TB immunity, which may shape the direction of more effective TB vaccine research in the future, the authors wrote. K. Griffiths et al. Targeting dendritic cells to accelerate T-cell activation overcomes a bottleneck in tuberculosis vaccine efficacy. Nature Communications. DOI: 10.1038/ncomms13894 (2016). Katrin Eichelberg, Ph.D., a program officer in the Tuberculosis and Other Mycobacterial Diseases Section in NIAID's Division of Microbiology and Infectious Diseases, is available for comment.
NIAID conducts and supports research -- at NIH, throughout the United States, and worldwide -- to study the causes of infectious and immune-mediated diseases, and to develop better means of preventing, diagnosing and treating these illnesses. News releases, fact sheets and other NIAID-related materials are available on the NIAID website. About the National Institutes of Health (NIH): NIH, the nation's medical research agency, includes 27 Institutes and Centers and is a component of the U.S. Department of Health and Human Services. NIH is the primary federal agency conducting and supporting basic, clinical, and translational medical research, and is investigating the causes, treatments, and cures for both common and rare diseases. For more information about NIH and its programs, visit the NIH website.


News Article | December 14, 2016
Site: www.nature.com

Could predictive algorithms be the key to creating a successful cancer vaccine? Two US nonprofit organizations plan to find out by pitting a range of computer programs against each other to see which can best predict a candidate for a personalized vaccine from a patient's tumour DNA. The Parker Institute for Cancer Immunotherapy in San Francisco, California, and the Cancer Research Institute of New York City announced the algorithmic battle on 1 December. It is part of a multimillion-dollar joint project to solve a major puzzle in the nascent field of cancer immunotherapy: which of a patient's sometimes hundreds of cancer mutations could serve as a call-to-arms for their immune system to attack their tumours. If the effort succeeds, it could spur the development of personalized cancer vaccines that use fragments of these mutated proteins to fire up the body's natural immune responses to them. Because these mutations are found in cancer cells and not healthy ones, the hope is that this would provide a non-toxic way to battle tumours. The idea is gaining traction. In 2014, news that vaccines containing such mutated proteins had vanquished tumours in mice set off a mad dash to find out whether the approach would work in people. A generation of biotechnology companies has been founded around the concept, and clinical trials run by academic labs are under way. Still, a challenge remains. To be a good candidate for a vaccine, a mutated cancer protein must be visible to T cells, the soldiers of the immune system. And for that to happen, tumour cells must chew up the protein into fragments. Those fragments then must bind to specialized proteins, which are shipped to the cell's surface to be displayed to passing T cells. The trick that vaccine researchers must master is using a tumour's DNA to predict which mutations to home in on. "We can do the sequencing and find out the mutations, but it's very hard to know which of these tens or hundreds or thousands of mutations are actually going to protect people from the growth of their cancers," says Pramod Srivastava, an immunologist at the University of Connecticut School of Medicine in Farmington. One approach is to use algorithms to predict which bits of a mutated protein might be seen by a T cell. These work by analysing where the proteins could be cleaved, for example, and which of the resulting fragments will bind tightly to the molecules that put them on display. But each laboratory has a different "secret sauce", says Robert Schreiber, a cancer immunologist at Washington University in St. Louis, Missouri. And most are not very predictive: Robert Petit, chief scientific officer of biotechnology company Advaxis in Princeton, New Jersey, estimates that the algorithms are typically less than 40% accurate. To solve the problem, the Parker Institute and the Cancer Research Institute launched their challenge. They have arranged for 30 laboratories that already use such algorithms to apply their secret sauces to the same DNA and RNA sequences. The sequences will come from cancers such as melanoma and lung cancer, which tend to have many hundreds of mutations and thus could provide ample possibilities for a vaccine. A handful of other laboratories will then test whether any T cells in the tumour recognize those fragments, and are stimulated by them — a sign of a good vaccine target. The alliance will not publicly announce a winner, but hopes to use the most accurate algorithms to design vaccines for clinical trials.
Algorithms can provide a quick answer to a complicated question — crucial if personalized vaccines are to be deployed on a large scale. But ultimately, Srivastava says that the best way to improve the algorithms is to collect more data from animal studies to learn about how T cells naturally respond to mutations. His lab and others are making hundreds of putative vaccines tailored to an individual tumour, and administering them to mice to see which are capable of fighting the cancer. And Drew Pardoll, a cancer immunologist at Johns Hopkins University in Baltimore, Maryland, worries that algorithms may never account for some factors that influence T-cell responses. For example, mutations may be less suitable for a vaccine if they have arisen early in tumour development, giving the immune system time to begin viewing them as ‘normal’. Pardoll argues that the field needs faster, easier and more accurate laboratory tests to determine which mutations best trigger a T-cell response. “We don’t yet know enough about the rules to make perfect predictions,” he says. “You can algorithm until the cows come home and you’re not really going to know if you’re improving things.” But in the absence of speedy lab tests, companies need algorithms, argues Robert Ang, chief business officer at Neon Therapeutics of Cambridge, Massachusetts. “There is already evidence to show that this approach works despite the imperfect algorithms,” he says. “Improving the algorithms even more could be very meaningful.”
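To make the prediction step concrete, the sketch below enumerates every 9-residue peptide that overlaps a point mutation — the raw candidates such algorithms must rank — and scores them with a deliberately crude placeholder. Everything here is hypothetical: the sequence is invented, and real pipelines replace the placeholder scoring function with trained peptide-MHC binding predictors (the competing "secret sauces" described above).

```python
# Toy sketch of neoantigen candidate enumeration (illustrative only).
# The scoring function is a made-up placeholder; real pipelines use
# trained peptide-MHC binding-affinity predictors.

def candidate_peptides(protein: str, mut_pos: int, k: int = 9):
    """Yield every k-mer of `protein` that contains the mutated residue."""
    start = max(0, mut_pos - k + 1)
    stop = min(len(protein) - k, mut_pos)
    for i in range(start, stop + 1):
        yield protein[i:i + k]

def placeholder_score(peptide: str) -> float:
    """Stand-in for a binding predictor: counts hydrophobic anchor residues."""
    hydrophobic = set("AILMFWVY")
    # Positions 2 and 9 are classic MHC class I anchor positions (1-based).
    return sum(peptide[p] in hydrophobic for p in (1, len(peptide) - 1))

mutant = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # hypothetical mutant sequence
ranked = sorted(candidate_peptides(mutant, mut_pos=12),
                key=placeholder_score, reverse=True)
print(ranked[:3])  # top-scoring 9-mers spanning the mutation
```

In a real pipeline, the fragments that survive this ranking would then be checked experimentally, as in the T-cell recognition step the challenge organizers describe.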


News Article | February 15, 2017
Site: www.eurekalert.org

This first-of-its-kind study used MRIs to image the brains of infants, and then researchers used brain measurements and a computer algorithm to accurately predict autism before symptoms set in. CHAPEL HILL, NC - Using magnetic resonance imaging (MRI) in infants with older siblings with autism, researchers from around the country were able to correctly predict 80 percent of those infants who would later meet criteria for autism at two years of age. The study, published today in Nature, is the first to show it is possible to identify which infants - among those with older siblings with autism - will be diagnosed with autism at 24 months of age. "Our study shows that early brain development biomarkers could be very useful in identifying babies at the highest risk for autism before behavioral symptoms emerge," said senior author Joseph Piven, MD, the Thomas E. Castelloe Distinguished Professor of Psychiatry at the University of North Carolina-Chapel Hill. "Typically, the earliest an autism diagnosis can be made is between ages two and three. But for babies with older autistic siblings, our imaging approach may help predict during the first year of life which babies are most likely to receive an autism diagnosis at 24 months." This research project included hundreds of children from across the country and was led by researchers at the Carolina Institute for Developmental Disabilities (CIDD) at the University of North Carolina, where Piven is director. The project's other clinical sites included the University of Washington, Washington University in St. Louis, and The Children's Hospital of Philadelphia. Other key collaborators are McGill University, the University of Alberta, the University of Minnesota, the College of Charleston, and New York University. "This study could not have been completed without a major commitment from these families, many of whom flew in to be part of this," said first author Heather Hazlett, PhD, assistant professor of psychiatry at the UNC School of Medicine and a CIDD researcher. "We are still enrolling families for this study, and we hope to begin work on a similar project to replicate our findings." People with Autism Spectrum Disorder (or ASD) have characteristic social deficits and demonstrate a range of ritualistic, repetitive and stereotyped behaviors. It is estimated that one out of 68 children develops autism in the United States. For infants with older siblings with autism, the risk may be as high as 20 out of every 100 births. There are about 3 million people with autism in the United States and tens of millions around the world. Despite much research, it has been impossible to identify those at ultra-high risk for autism prior to 24 months of age, which is the earliest time when the hallmark behavioral characteristics of ASD can be observed and a diagnosis made in most children. For this Nature study, Piven, Hazlett, and researchers from around the country conducted MRI scans of infants at six, 12, and 24 months of age. They found that the babies who developed autism experienced a hyper-expansion of brain surface area from six to 12 months, as compared to babies who had an older sibling with autism but did not themselves show evidence of the condition at 24 months of age. Increased growth rate of surface area in the first year of life was linked to increased growth rate of overall brain volume in the second year of life. Brain overgrowth was tied to the emergence of autistic social deficits in the second year.
Previous behavioral studies of infants who later developed autism - who had older siblings with autism - revealed that social behaviors typical of autism emerge during the second year of life. The researchers then took these data - MRIs of brain volume, surface area, cortical thickness at 6 and 12 months of age, and sex of the infants - and used a computer program to identify a way to classify babies most likely to meet criteria for autism at 24 months of age. The computer program developed the best algorithm to accomplish this, and the researchers applied the algorithm to a separate set of study participants. The researchers found that brain differences at 6 and 12 months of age in infants with older siblings with autism correctly predicted eight out of ten infants who would later meet criteria for autism at 24 months of age, in comparison to those infants with older ASD siblings who did not meet criteria for autism at 24 months. "This means we potentially can identify infants who will later develop autism, before the symptoms of autism begin to consolidate into a diagnosis," Piven said. If parents have a child with autism and then have a second child, such a test might be clinically useful in identifying infants at highest risk for developing this condition. The idea would be to then intervene 'pre-symptomatically' before the emergence of the defining symptoms of autism. Research could then begin to examine the effect of interventions on children during a period before the syndrome is present and when the brain is most malleable. Such interventions may have a greater chance of improving outcomes than treatments started after diagnosis. "Putting this into the larger context of neuroscience research and treatment, there is currently a big push within the field of neurodegenerative diseases to be able to detect the biomarkers of these conditions before patients are diagnosed, at a time when preventive efforts are possible," Piven said. "In Parkinson's for instance, we know that once a person is diagnosed, they've already lost a substantial portion of the dopamine receptors in their brain, making treatment less effective." Piven said the idea with autism is similar; once autism is diagnosed at age 2-3 years, the brain has already begun to change substantially. "We haven't had a way to detect the biomarkers of autism before the condition sets in and symptoms develop," he said. "Now we have very promising leads that suggest this may in fact be possible." For this research, NIH funding was provided by the Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD), the National Institute of Mental Health (NIMH), and the National Institute of Biomedical Imaging and Bioengineering. Autism Speaks and the Simons Foundation contributed additional support.
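The train-on-one-cohort, test-on-another workflow described above can be sketched generically. The code below is a toy illustration only: the data are synthetic, the logistic-regression model is a simple stand-in rather than the study's actual algorithm, and the feature list merely mirrors the kinds of measures named in the article (surface area, brain volume, cortical thickness, sex).

```python
# Minimal sketch of the classify-then-validate workflow described above.
# Synthetic data only; this is NOT the study's algorithm or its features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)

# Hypothetical features per infant: surface-area growth rate (6-12 mo),
# brain volume, mean cortical thickness, sex (coded 0/1).
def synthetic_cohort(n, asd_rate=0.2):
    y = rng.random(n) < asd_rate
    X = rng.normal(size=(n, 4))
    X[y, 0] += 1.0  # simulate surface-area hyper-expansion in the ASD group
    return X, y.astype(int)

X_train, y_train = synthetic_cohort(300)  # training cohort
X_test, y_test = synthetic_cohort(100)    # separate validation cohort

clf = LogisticRegression().fit(X_train, y_train)
pred = clf.predict(X_test)

# Sensitivity: fraction of true ASD cases the classifier flags
# (the study reported correctly predicting roughly 8 of 10).
print("sensitivity:", recall_score(y_test, pred))
```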


News Article | November 17, 2015
Site: news.yahoo.com

A partial map of the distribution of galaxies in the Sloan Digital Sky Survey, going out to a distance of 7 billion light years. The amount of galaxy clustering that we observe today is a signature of how gravity acted over cosmic time. Don Lincoln is a senior scientist at the U.S. Department of Energy's Fermilab, America's largest Large Hadron Collider research institution. He also writes about science for the public, including his recent "The Large Hadron Collider: The Extraordinary Story of the Higgs Boson and Other Things That Will Blow Your Mind" (Johns Hopkins University Press, 2014). You can follow him on Facebook. Lincoln contributed this article to Space.com's Expert Voices: Op-Ed & Insights. November marks the beginning of the holiday season, but this year, there's even more reason to throw a party: It's the 100th birthday of Einstein's theory of general relativity. Forget the turkey this year — we science enthusiasts can instead celebrate the invention of a paradigm that completely overthrew our fundamental understanding of the very meaning of space and time. In November 1915, Albert Einstein published four papers — each separated by a week, followed by a summary paper in March 1916 — in which he laid out his theory of general relativity and blew humanity's collective mind. Einstein's earlier theory of special relativity (1905) was already confusing enough, because of how it inextricably linked space and time. But at least in that theory, space was comfortably familiar to people who had learned Euclidean geometry: Parallel lines never crossed, the sum of the angles of a triangle was 180 degrees and space was flat. Special relativity might have taken a little work to understand — but Einstein's general relativity was, well, twisted. The new theory showed a changing and dynamic space-time that was tied to the energy and mass density of the universe. Space itself could be bent and warped by the presence of matter. This new vision of the universe was not immediately accepted. It wasn't until 1919 that the scientific community embraced the idea, when Sir Arthur Eddington's expedition to the then-Portuguese island of Príncipe showed that the sun bent the path of light from distant stars as that light passed near it. The day after Eddington's Nov. 6 presentation to the Royal Society, Einstein and his theory were instant scientific rock stars, trumpeted worldwide on the front page of major newspapers. Even prior to the hubbub of 1919, scientists were exploring the consequences of the proposed new paradigm. In one of his papers of 1915, Einstein had compared classical Newtonian gravity and his own theory, and found differences in how they predicted the precession of the orbit of Mercury. While both calculations predicted a precession, the theory of general relativity agreed with data, while Newton's did not. Another early explorer of the consequences of Einstein's idea was Karl Schwarzschild. Already a well-respected scientist, Schwarzschild joined the German army during World War I. While in the trenches on the Russian Front, he contracted a rare autoimmune skin disease, from which he eventually died. Sent home with the intent that he convalesce, Schwarzschild returned to his love of science. Within a month of the flurry of papers in 1915, he explored the consequences of Einstein's theory. As he lay in his bed, afflicted by painful sores, Schwarzschild worked out a solution to the new equations for an extreme bending of space, which we now call a black hole.
In this centennial of Einstein's successful year, we can now look back and see the impact of general relativity on how we understand the universe. In contrast to the halting acceptance of a century ago, the scientific community has now firmly embraced the idea, which has a range of testable consequences. One such prediction is relevant to our modern life. Einstein predicted that, in addition to the familiar (and nonintuitive!) changes in space and time that occur when one approaches the speed of light, the passage of time also depends on the strength of the gravitational field. This implies that clocks that experience stronger gravity tick more slowly than those in a weaker gravitational environment. This prediction was first tested in 1971, when Joseph C. Hafele of Washington University in St. Louis and Richard E. Keating of the United States Naval Observatory flew four very precise atomic clocks around the world and compared them to clocks left stationary in their laboratory. When the clocks were reunited, they reported different elapsed times, in close agreement with the predictions of general relativity. Modern relativity skeptics love to dispute the Hafele-Keating measurement, but the experiment has been repeated many times over the past 44 years. In fact, modern strontium clocks built by JILA — a joint institute of the University of Colorado, Boulder, and the National Institute of Standards and Technology — are so precise that they can measure a shift in time if one clock is lifted a mere 2 centimeters (less than an inch) higher than its twin. More practically, general relativity has a real implication for the GPS system built into your phone. Because the system works by comparing both orbiting and Earth-bound clocks, the fact that the clocks in satellites tick more quickly than their terrestrial cousins must be taken into account. If general relativity were not accounted for, the difference in the clocks would lead the GPS system to tell you that you were in the wrong place. And the effect isn't small. Each day, the offset would be about 6 miles (10 kilometers)! Very quickly, the GPS system would be totally useless. Another triumph of Einstein's theory of general relativity employs the same technique as Eddington's measurement of the deflection of light by distant stars. By using improved versions of the same methods, scientists can use distortions of distant galaxies to literally measure the mass of the universe. There is one prediction of general relativity that has not yet been confirmed directly. If mass can distort space, then moving mass can set up vibrations of space — what scientists call gravitational waves. In 1974, Russell A. Hulse and Joseph H. Taylor Jr. of the University of Massachusetts Amherst discovered a binary pulsar. A pulsar is a rapidly rotating neutron star that emits regular radio signals. In the case of Hulse and Taylor, the pulsar was co-orbiting another very dense stellar object. By watching the binary system, they saw that the orbital period was decreasing very slowly over the years — specifically, 75 millionths of a second per year. This decline is thought to be caused by the loss of energy by gravitational radiation. The observation is persuasive enough for the two men to have been awarded the 1993 Nobel Prize in physics, but it would be valuable to directly observe gravitational waves. A series of experiments conducted here on Earth, employing a range of technologies, is underway.
With these experiments, scientists hope to observe these gravitational ripples as they pass over the planet. These waves are thought to be created in extremely violent astronomical events, like the merging of two black holes. When the waves are finally observed, the achievement will be a crowning confirmation of Einstein's theory. There is absolutely no question that Einstein's theory of general relativity is one of the most impressive intellectual achievements of all time. Our familiar understanding of space and time was shown to be quite wrong. Space can bend and twist under the influence of matter. Mass and energy are inextricably intertwined with the shape of space and time. It is, indeed, Einstein's greatest triumph. Follow all of the Expert Voices issues and debates — and become part of the discussion — on Facebook, Twitter and Google+. The views expressed are those of the author and do not necessarily reflect the views of the publisher. This version of the article was originally published on Space.com.
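The size of the GPS correction quoted in the article can be checked with a first-order, back-of-the-envelope calculation that combines the gravitational (general-relativistic) speed-up of the satellite clocks with their velocity (special-relativistic) slow-down. The constants below are standard textbook values, not figures from the article.

```python
# Back-of-the-envelope check of the GPS relativity correction (first order).
import math

GM = 3.986004418e14       # Earth's gravitational parameter, m^3/s^2
c = 2.99792458e8          # speed of light, m/s
R_earth = 6.371e6         # mean Earth radius, m
r_sat = R_earth + 2.02e7  # GPS orbital radius (~20,200 km altitude), m
day = 86400.0             # seconds per day

# General relativity: the satellite clock runs FAST in weaker gravity.
grav = GM / c**2 * (1 / R_earth - 1 / r_sat)

# Special relativity: orbital speed makes the clock run SLOW.
v = math.sqrt(GM / r_sat)        # circular orbital speed, ~3.9 km/s
vel = -(v**2) / (2 * c**2)

net = (grav + vel) * day         # net fractional offset accumulated per day
print(f"net offset: {net * 1e6:.1f} microseconds/day")   # ~ +38 us/day
print(f"range error: {net * c / 1000:.1f} km/day")       # roughly the 10 km quoted
```

Running this gives an uncorrected drift of about 38 microseconds per day, which multiplied by the speed of light corresponds to a ranging error on the order of 10 kilometers per day, consistent with the figure above.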


News Article | November 23, 2016
Site: www.eurekalert.org

Scientists at Washington University in St. Louis isolated an enzyme that controls the levels of two plant hormones simultaneously, linking the molecular pathways for growth and defense. Similar to animals, plants have evolved small molecules called hormones to control key events such as growth, reproduction and responses to infections. Scientists have long known that distinct plant hormones can interact in complex ways, but how they do so has remained mysterious. In a paper published in the Nov. 14 issue of Proceedings of the National Academy of Sciences, the research team of Joseph Jez, professor of biology in Arts & Sciences and a Howard Hughes Medical Institute Professor, reports that the enzyme GH3.5 can control the levels of two plant hormones, auxin and salicylic acid. It is the first enzyme of its kind known to control completely different classes of hormones. Auxin controls a range of responses in the plant, including cell and tissue growth and normal development. Salicylic acid, on the other hand, helps plants respond to infections, which often take resources away from growth. Plants must tightly control the levels of auxin and salicylic acid to properly grow and react to new threats. "Plants control hormone levels through a combination of making, breaking, modifying and transporting them," said Corey Westfall, a former graduate student who led this Jez lab work along with current graduate student Ashley Sherp. By stitching an amino acid to a hormone, GH3.5 takes the hormone out of circulation, reducing its effect in the plant. Although scientists suspected GH3.5 controlled auxin and salicylic acid, this double action had not been demonstrated in plants. "Our question was really simple," Sherp said. "Can this enzyme actually control multiple hormones? And if that's true in a test tube, what happens back in a plant?" To find out, the researchers induced plants to accumulate large amounts of the protein and then measured their levels of hormones. When GH3.5 was expressed at high levels, the amounts of both auxin and salicylic acid were reduced. Deprived of growth-promoting auxin, the plants stayed small and stunted. The experiment proved that GH3.5 does regulate distinct classes of hormones, but how does it do this? To better understand how the enzyme could control both auxin and salicylic acid, the scientists crystallized GH3.5 and sent the crystals to the European Synchrotron Radiation Facility in Grenoble, France. The particle accelerator there allowed the researchers to fire powerful X-rays into the protein crystal, and the diffraction of the X-rays provided information about the atom-by-atom structure of the enzyme. Westfall assembled these data into a three-dimensional reconstruction of GH3.5, showing it frozen in the act of modifying auxin. The scientists were expecting to find key differences between GH3.5 and related proteins that would account for its unique ability to modify multiple hormones. To their surprise, the part of the enzyme that binds and modifies hormones looked almost identical to that of related enzymes that can only modify auxin. The surprising similarities between the multi-purpose GH3.5 and its single-use relatives suggest that unrecognized elements of these proteins influence which molecules they can bind and transform. "These surprising results mean there's something going on that we're not seeing in the sequence or the structure of these enzymes," Jez said.
Solving this mystery could tell us more about how enzymes distinguish among similar molecules, a discriminatory ability that is critical for all life, including people as well as plants.


News Article | February 17, 2017
Site: www.biosciencetechnology.com

Washington University in St. Louis is collaborating with the biopharmaceutical company Pfizer Inc. on research aimed at speeding the development of new drugs. The university is the first academic institution in the Midwest to join the collaborative network of Pfizer's Centers for Therapeutic Innovation (CTI). The new collaboration is aimed at supporting translational research that has the greatest potential to bring innovative therapies to patients. The collaboration will focus on certain rare diseases, as well as on immunology and inflammation, oncology, neuroscience, and cardiovascular and metabolic diseases. In particular, the program will focus on approaches that involve large-molecule therapeutics and antibodies that have the potential to address multiple diseases. "We are excited to be combining the resources and expertise of Pfizer scientists with the talents of our Washington University faculty in this effort to develop the next generation of therapeutics," said Jennifer K. Lodge, Ph.D., vice chancellor for research at Washington University and a professor of molecular microbiology at the School of Medicine. "With our strength in basic science and translational research and the expertise of Pfizer in drug development, the new collaboration could help St. Louis and our region become even better positioned to make major contributions to benefit patients." CTI brings together academic and National Institutes of Health (NIH) researchers with Pfizer scientists and patient foundations to collaborate in drug discovery in broad areas of interest to the global pharmaceutical maker. "We look forward to beginning this new collaboration with Washington University," said Anthony Coyle, Ph.D., senior vice president and CTI's chief scientific officer. "Washington University's world-class scientific expertise is an excellent addition to CTI's network of academic collaborators, and CTI is proud to complement Pfizer's long-standing relationship with this institution." As part of the collaboration, Washington University researchers will be able to apply for funding to support research projects aimed at drug discovery. A joint steering committee made up of Washington University researchers and Pfizer scientists will be responsible for selecting the research projects and tracking their progress. If the steering committee selects a project, the project team would have access to Pfizer's resources, scientific equipment and opportunities to collaborate with Pfizer scientists, who have extensive expertise in drug development and protein science. "I am pleased that our faculty will be able to participate in this program and have the potential to work on important projects in collaboration with Pfizer," said David H. Perlmutter, M.D., executive vice chancellor for medical affairs and dean of the School of Medicine. "We are moving into an era in which academic-industry collaborations could capitalize on our considerable research talents and activities and, by collaborating with outstanding pharmaceutical companies like Pfizer, we can facilitate our goal of getting new drugs to patients as soon as possible." The Washington University faculty serving on the joint steering committee include Karen Seibert, Ph.D., a professor of anesthesiology and of pathology and immunology and of genetics; Michael S. Kinch, Ph.D., associate vice chancellor and director of Washington University's Center for Research Innovation in Business; Thaddeus S.
Stappenbeck, M.D., Ph.D., the Conan Professor of Laboratory and Genomic Medicine; Leena M. Prabhu, Ph.D., associate director of Washington University’s Office of Technology Management; and Patricia J. Gregory, Ph.D., assistant vice chancellor and executive director of corporate and foundation relations. “This kind of collaboration between academic medical centers and private industry holds great potential for identifying the best ideas and moving them through the research pipeline as quickly as possible,” said Seibert, who served as vice president of research and development for Pfizer at the company’s St. Louis site before joining Washington University. “The fact that Pfizer has a long history of operating in St. Louis with well-established laboratories, equipment and a local team of experienced scientists means we have the potential to begin our collaborations and joint projects right away,” added Seibert, who also co-directs the Center for Clinical Pharmacology, which is a collaboration between the St. Louis College of Pharmacy and the School of Medicine.


News Article | November 14, 2016
Site: www.eurekalert.org

Antibiotics Not Necessarily Warranted for Lower Respiratory Infection With a Bacterial Cause; Disease Course Similar to Nonbacterial Infection
The illness course of lower respiratory tract infection with a bacterial cause is generally mild, uncomplicated and similar to that of nonbacterial lower respiratory tract infection, and does not warrant the immediate prescribing of antibiotics, according to a study from researchers in Europe. Analyzing patient-recorded symptoms for 834 adults with acute cough, 162 of whom were diagnosed with a bacterial infection, researchers found patients with acute bacterial LRTI had only slightly worse symptoms at day two to four after the first office visit (P = .014) and returned more often for a second consultation (27 vs. 17 percent) than those without bacterial LRTI; however, the differences were small and not clinically meaningful. Resolution of symptoms rated moderately bad or worse did not differ between the groups (P = .375). The authors conclude that because there appears to be no meaningful difference in the illness course of bacterial LRTIs, physicians can reassure patients that LRTI, even if bacterial, is a self-limiting condition, and that rather than immediately prescribing an antibiotic, they can follow a strategy of watchful waiting.
Disease Course of Lower Respiratory Tract Infection With a Bacterial Cause
By Jolien Teepe, MD, MSc, et al, University Medical Center Utrecht, The Netherlands

Meta-Analyses Examine Prevalence of Atypical Bacterial Pathogens in Patients With Sore Throat, Cough and Community-Acquired Pneumonia
Two meta-analyses by Ebell, Marchello and colleagues synthesize the literature to provide new insights into potential respiratory pathogens that are not typically subjected to diagnosis and treatment. In the first study, the researchers find high rates of atypical bacterial pathogens in patients with acute lower respiratory tract diseases, including cough, bronchitis and community-acquired pneumonia. Analyzing 50 studies, the researchers found that among adults with CAP, 14 percent had an atypical pathogen: 7 percent had Mycoplasma pneumoniae, 4 percent had Chlamydophila pneumoniae, and 3 percent had Legionella pneumophila. Among children with CAP, 18 percent had Mycoplasma pneumoniae, only 1 percent had Chlamydophila pneumoniae, and Legionella pneumophila was extremely rare (only one case in 1,765 patients). Among patients with prolonged cough, 9 percent of adults and 18 percent of children had Bordetella pertussis. The authors conclude these findings suggest these conditions are underreported, underdiagnosed and undertreated in current clinical practice. They call for future research to help clinicians more accurately diagnose these pathogens and determine if and when antibiotic treatment is helpful. In the second meta-analysis, Ebell, Marchello and colleagues find high rates of both Group C beta-hemolytic streptococcus and Fusobacterium necrophorum in patients presenting with sore throat. Most cases of sore throat are viral, and Group A beta-hemolytic streptococci are responsible for 10 percent of episodes in adults and up to 30 percent in children, but recently it has been suggested that these two other bacteria may be important causes of pharyngitis with similar clinical presentations. Analysis of 16 studies reveals overall prevalences of Group C streptococcus and F. necrophorum were 6 percent and 19 percent, respectively, in patients presenting with sore throat in primary care. The authors call for future research to determine whether these bacteria are truly pathogenic in patients with sore throat and whether antibiotics reduce the duration of symptoms, the likelihood of complications or the spread to others.
Prevalence of Atypical Pathogens in Patients With Cough and Community-Acquired Pneumonia: A Meta-Analysis
Prevalence of Group C Streptococcus and Fusobacterium Necrophorum in Patients With Sore Throat: A Meta-Analysis
By Mark H. Ebell, MD, MS, et al, University of Georgia, Athens

Patients Who Discuss Opioid Risks With Their Physician Less Likely to Report Saving Pills for Later
A research brief by Harvard researchers finds that in a nationally representative sample, the high-risk behavior of saving opioid pills for later use is substantially less likely among patients who report having been counseled by their physicians about the risks of prescription painkiller addiction. Evaluating data from telephone surveys of 385 respondents who reported they had been prescribed strong prescription painkillers within the last two years, the researchers found a 60 percent lower rate, after adjustment for covariates, in self-reported saving of pills among respondents who said they talked with their physicians about the risks of prescription painkiller addiction. The authors conclude these findings suggest that patient education efforts, as currently practiced in the United States, may have positive behavioral consequences that could lower the risks of prescription painkiller abuse. They call for future research in controlled settings to test the effectiveness of physician-patient discussions about addiction risk and related safety measures in promoting appropriate use, storage and disposal of prescription painkillers.
Discussing Opioid Risks With Patients to Reduce Misuse and Abuse: Evidence From 2 Surveys
By Joachim Hero, MPH, et al, Harvard University, Boston

Those Left Behind From Voluntary Medical Home Reforms in Ontario, Canada, More Likely to Be Poor, Urban Immigrants and Receive Lower Quality Care
Seeking to improve health outcomes and reduce cost, more than three-quarters of family physicians in Ontario, Canada, voluntarily transitioned from traditional fee-for-service practices to medical homes over the past decade. Amid these reforms, approximately one in six Ontarians still remain enrolled in fee-for-service practices. Seeking to understand the characteristics and quality of care of patients who did not participate in the voluntary transition to medical homes, researchers found these patients are more likely than patients in medical homes to be poor, to live in urban areas, and to have immigrated in the last 10 years, and less likely to receive recommended screening services. Analyzing administrative data on 10.8 million patients attached to a medical home and 1.3 million patients receiving care from a fee-for-service physician, they found those attached to a fee-for-service physician were less likely to receive recommended testing for diabetes (25 percent vs. 34 percent) and less likely to receive screening for cervical (52 percent vs. 66 percent), breast (58 percent vs. 73 percent), and colorectal cancer (44 percent vs. 62 percent), compared with patients attached to a medical home physician. The authors note these differences in quality of care preceded medical home reforms. Those physicians who opted not to transition to a medical home were more likely to be older, to be international medical graduates and to have smaller panel sizes. The authors call for strategies to improve care for patients left behind by medical home reforms in Ontario through improved primary care attachment or improved services with their existing physician.
Those Left Behind From Voluntary Medical Home Reforms in Ontario, Canada
By Tara Kiran, MD, MSc, CCFP, et al, University of Toronto, Canada

Researchers identify an unrecognized, preventable and treatable condition that may place hospitalized or recently hospitalized patients at increased risk for falling. In a study of 100 hospital inpatients, the researchers find that nearly one-third of patients at risk for falling have subclinical peroneal neuropathy, a condition in which compression of the peroneal nerve at the fibular neck causes foot drop that can lead to tripping and falling. Examination and history of study participants showed that patients with SCPN were nearly five times as likely to have fallen in the past year than those without the condition. The authors conclude that screening for SCPN, implementing preventive measures and treating the disorder may help reduce fall incidence in hospitalized and recently discharged patients.
Subclinical Peroneal Neuropathy: A Common, Unrecognized, and Preventable Finding Associated With a Recent History of Falling in Hospitalized Patients
By Susan E. Mackinnon, MD, et al, Washington University in St. Louis School of Medicine, Missouri

New guidelines released last month by the American Academy of Pediatrics suggest that media use by children is nearly inevitable, but it is up to parents to closely monitor their children's usage. This study sheds light on how primary care physicians can work with parents to realistically manage children's mobile technology use. Researchers interviewed a diverse sample of mothers, fathers and grandmothers to better understand caregivers' views about child mobile technology use, including the perceived benefits, concerns and effects on family interactions, with the goal of informing pediatric guidance. The 35 caregivers interviewed reported feeling much uncertainty about whether mobile technologies are beneficial or harmful to their children's development, how to use digital devices beneficially when their rapid evolution seems out of control, and the important functional purposes media serves in their families despite displacing family time. Specifically, the authors identified three issues that invoked internal tensions between competing viewpoints: 1) effects on the child -- fear of missing out on the benefits of mobile devices vs. concerns about their effects on child thinking and behavior; 2) loss of control -- wanting to use mobile technology in educational ways vs. feeling that rapidly evolving technologies are beyond their control; and 3) family stress -- the necessity of mobile device use in stressed families vs. its displacement of high-quality family time. Given these findings, the authors propose a framework with which pediatric clinicians can respectfully and realistically discuss mobile technology use with caregivers so they can make informed and empowered decisions. They assert that the cognitive dissonance revealed in the responses presents prime opportunities for influencing behavior change; therefore, exploring the inherent tensions in the unknowns surrounding emerging technologies may be an effective entry point into clinicians' conversations with parents.
Overstimulated Consumers or Next-Generation Learners? Parent Tensions About Child Mobile Technology Use
By Jenny S. Radesky, MD, et al, University of Michigan, Ann Arbor

Framework Describes How Health Coaches and Patients Work Together
Despite the growing use of health coaches to support patients in making health-related decisions and behavioral changes, there is little research about how health coaches support patients. Analyzing focus group and individual interviews with patients, family members, health coaches and clinicians, researchers identified several core features of successful coaching and developed a conceptual model to describe how health coaches and patients work together. Among the themes identified by respondents were: shared characteristics between health coaches and patients, availability of health coaches to patients, development of a strong relationship based on trust, the educational role of the health coach, providing personal support for patients, providing support for decision making, and bridging between the patient and the clinician. The resulting model, which attempts to characterize how health coaches and patients optimally work together, can be used in training and supporting health coaches in practice.
A Qualitative Study of How Health Coaches Support Patients in Making Health-Related Decisions and Behavioral Changes
By David H. Thom, MD, PhD, et al, University of California, San Francisco

Peer support interventions delivered by people affected by diabetes are associated with a small but statistically significant reduction in glycosylated hemoglobin, with larger effects among predominantly minority participants, particularly Hispanic participants. The meta-analysis of 17 studies with 4,715 participants found an overall 0.24 percent improvement in HbA1c. The subgroup of studies with predominantly Hispanic participants showed an HbA1c improvement of 0.48 percent in the peer support intervention group compared with the control group. In contrast, the pooled effect size from the seven studies with predominantly white, non-Hispanic participants showed no improvement in HbA1c level with peer support interventions. The authors surmise that peer health coaches might be providing more culturally appropriate health education in ethnic minority populations, particularly in the Latino population. They call for future research to assess the effect of peer interventions on long-term patient-centered outcomes.
Peer Support Interventions for Adults With Diabetes: A Meta-Analysis of Hemoglobin A1c Outcomes
By Sonal J. Patil, MD, et al, University of Missouri, Columbia

Reflecting on physicians' struggles in medicine, in particular the high prevalence of burnout and the challenge of cultivating compassion and meaning, Benjamin R. Doolittle, MD, MDiv, program director of the Combined Internal Medicine-Pediatrics Residency Program at Yale University, asks, "Are we the walking dead?" Doolittle draws parallels between the plot and themes of the popular television show The Walking Dead, asserting that the zombie apocalypse metaphor could shed light on the state of medicine in 2016 as physicians struggle to survive. He challenges physicians to reclaim their central purpose -- to promote life, to focus on the healing of patients and the flourishing of their own lives -- in order to avoid becoming the walking dead.
Are We the Walking Dead? Burnout as Zombie Apocalypse
By Benjamin R. Doolittle, MD, MDiv, Yale University, New Haven, Connecticut

Annals of Family Medicine is a peer-reviewed, indexed research journal that provides a cross-disciplinary forum for new, evidence-based information affecting the primary care disciplines. Launched in May 2003, Annals is sponsored by seven family medical organizations: the American Academy of Family Physicians, the American Board of Family Medicine, the Society of Teachers of Family Medicine, the Association of Departments of Family Medicine, the Association of Family Medicine Residency Directors, the North American Primary Care Research Group, and the College of Family Physicians of Canada. Annals is published six times each year and contains original research from the clinical, biomedical, social and health services areas, as well as contributions on methodology and theory, selected reviews, essays and editorials. Complete editorial content and interactive discussion groups for each published article can be accessed free of charge on the journal's website.
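Several of the digests above are meta-analyses, which pool effect estimates across studies. As a rough illustration of how such pooling commonly works, the sketch below implements inverse-variance random-effects pooling with the DerSimonian-Laird between-study variance. The study effects and standard errors are invented numbers, not data from the papers summarized here.

```python
# Minimal sketch of inverse-variance random-effects pooling
# (DerSimonian-Laird), a common approach behind meta-analytic estimates
# like the pooled HbA1c change above. All numbers are invented.
import math

# (effect estimate, standard error) for k hypothetical studies
studies = [(-0.30, 0.10), (-0.15, 0.08), (-0.40, 0.15), (-0.10, 0.12)]

w = [1 / se**2 for _, se in studies]  # fixed-effect (inverse-variance) weights
theta_f = sum(wi * th for (th, _), wi in zip(studies, w)) / sum(w)

# Cochran's Q and the DerSimonian-Laird between-study variance tau^2
Q = sum(wi * (th - theta_f) ** 2 for (th, _), wi in zip(studies, w))
k = len(studies)
c = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (Q - (k - 1)) / c)

# Random-effects weights and pooled estimate
w_re = [1 / (se**2 + tau2) for _, se in studies]
theta_re = sum(wi * th for (th, _), wi in zip(studies, w_re)) / sum(w_re)
se_re = math.sqrt(1 / sum(w_re))

print(f"pooled effect: {theta_re:.3f} +/- {1.96 * se_re:.3f} (95% CI half-width)")
```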


News Article | March 2, 2017
Site: www.eurekalert.org

Researchers at Karolinska Institutet and KTH Royal Institute of Technology in Sweden have contributed to a recent discovery that the heart is filled with the aid of hydraulic forces, the same as those involved in hydraulic brakes in cars. The findings, which are presented in the journal Scientific Reports, open avenues for completely new approaches to the treatment of heart failure. The mechanisms that cause blood to flow into the ventricles of the heart during the filling, or diastolic, phase are only partly understood. While the protein titin in the heart muscle cells is known to operate as a spring that releases elastic energy during filling, new research at Karolinska Institutet and KTH suggests that hydraulic forces are equally instrumental. Hydraulic force, the force a liquid under pressure exerts on a surface, is exploited in all kinds of mechanical processes, such as car brakes and jacks. In the body, the force is affected by the blood pressure inside the heart and the size difference between the atria and ventricles. During diastole, the valve between the atrium and the ventricle opens, equalising the blood pressure in both chambers. The geometry of the heart thus determines the magnitude of the force. Hydraulic forces that help the heart's chambers to fill with blood arise as a natural consequence of the fact that the atrium is smaller than the ventricle. Using cardiovascular magnetic resonance (CMR) imaging to measure the size of both chambers during diastole in healthy participants, the researchers found that the atrium remains smaller than the ventricle throughout essentially the entire filling process. "Although this might seem simple and obvious, the impact of the hydraulic force on the heart's filling pattern has been overlooked," says Dr. Martin Ugander, a physician and associate professor who heads a research group in clinical physiology at Karolinska Institutet. "Our observation is exciting since it can lead to new types of therapies for heart failure involving trying to reduce the size of the atrium." Heart failure is a common condition in which the heart is unable to pump sufficient quantities of blood around the body. Many patients have disorders of the filling phase, often in combination with an enlarged atrium. If the atrium gets larger in proportion to the ventricle, it reduces the hydraulic force and thus the heart's ability to be filled with blood. "Much of the focus has been on the ventricular function in heart failure patients," says Dr. Elira Maksuti at KTH's Medical Imaging Unit, a recent Ph.D. graduate of KI's and KTH's joint doctoral programme in medical technology. "We think it can be an important part of diagnosis and treatment to measure both the atrium and the ventricle to find out their relative dimensions." The study was the result of a multidisciplinary collaboration between Karolinska Institutet, KTH and Lund University in Sweden and Washington University in St. Louis, USA, and was financed with grants from the Swedish Research Council, the Swedish Heart-Lung Foundation, Stockholm County Council and Karolinska Institutet.
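The geometric argument is easy to make concrete: with the valve open and pressure equalized across the chambers, the net hydraulic force on the atrioventricular plane scales with pressure times the area difference between the chambers. The toy calculation below uses invented but physiologically plausible numbers (a filling pressure of roughly 10 mmHg and hypothetical chamber cross-sections); it illustrates the geometry only and uses no data from the study.

```python
# Toy illustration of the hydraulic-force argument. Chamber areas and
# pressure are assumptions for illustration, not the study's measurements.

MMHG_TO_PA = 133.322  # pascals per mmHg

def hydraulic_force(pressure_mmhg, a_ventricle_cm2, a_atrium_cm2):
    """Net axial force (newtons) pushing the atrioventricular plane
    toward the atrium.

    With the valve open, pressure is equal in both chambers, so the net
    force scales with pressure times the AREA DIFFERENCE between them.
    """
    p = pressure_mmhg * MMHG_TO_PA                     # Pa
    d_area = (a_ventricle_cm2 - a_atrium_cm2) * 1e-4   # m^2
    return p * d_area

# Healthy heart: ventricular cross-section larger than atrial -> filling aid.
print(hydraulic_force(10, 40, 25))  # ~2 N, assists ventricular filling
# Enlarged atrium (as in some heart failure): the force shrinks.
print(hydraulic_force(10, 40, 38))  # ~0.3 N, much weaker filling aid
```

This is why, in the authors' reasoning, an atrium that grows in proportion to the ventricle erodes the hydraulic contribution to filling.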


News Article | December 15, 2015
Site: www.sciencenews.org

Mountains of water ice tower thousands of meters over fields of frozen nitrogen and methane. Glaciers etched with channels hint at heat bubbling up from below. A patchwork of new and old terrains — some laid down in the last 10 million years, some as old as the planet itself — blanket the ground. And what appear to be two ice volcanoes punch through the terrain. The alien landscapes of Pluto and its moons dazzled scientists and nonscientists alike this year. More than eight decades after its discovery, Pluto became much more than a nondescript point of light. It’s a dynamic, complex world unlike any other orbiting the sun (SN: 12/12/15, p. 10). “Seeing a new world for the first time, I mean that’s huge,” says Cathy Olkin, a planetary scientist at the Southwest Research Institute in Boulder, Colo., and a deputy project scientist on the New Horizons mission. “It’s amazing to look at this world and realize I’ve been staring at it for years through a telescope and all that detail was there.” Pluto’s transformation came courtesy of a robotic spacecraft roughly the size of a grand piano (SN: 6/27/15, p. 16). After traveling for nine and a half years across nearly 5 billion kilometers — roughly the distance to the moon and back 6,700 times — New Horizons made its closest approach to Pluto on July 14, just 12,500 kilometers from its surface, close enough to see features the size of New York’s Central Park. As the spacecraft raced toward this remote outpost at roughly 50,000 kilometers per hour, one thing became abundantly clear: People still love Pluto. On flyby day, the world waited anxiously for news from the Johns Hopkins University Applied Physics Laboratory in Laurel, Md., where hundreds of scientists and journalists had gathered for the planet party of the decade (SN Online: 7/15/15). Pictures of Pluto and its largest moon, Charon, soon graced television screens, newspapers and magazines across the globe. The lonely underdog, kicked out of the planet club in 2006, became a celebrity. Planet or not, “Pluto is the star of the solar system,” says mission leader Alan Stern. As soon as the spacecraft beamed back its first detailed images, it was clear that Pluto had reinvented itself many times during its 4.6-billion-year lifetime (SN: 8/8/15, p. 6). Mountains, ice flows and a region devoid of craters implied that Pluto was geologically alive. The surfaces of some moons in the outer solar system have been reworked as well, but unlike Pluto, those satellites are under the influence of gravity from a host planet. Tiny Pluto, far from any other world, mysteriously changes itself. So much about Pluto is alien to Earthlings. On a world where a warm day is about −220° Celsius, the bedrock is made of hardened water ice. But there’s also something oddly familiar: Pluto has blue skies. Layers of haze stack on top of one another to build a tenuous atmosphere (SN Online: 10/15/15). The haze scatters sunlight into the nightside, teasing researchers with glimpses of odd landforms faintly illuminated in Pluto’s twilight. Within that atmosphere, ice moves back and forth across hemispheres through the interminable seasons. It appears to snow on Pluto. Even the five moons held a few surprises. Dramatic canyons slash across Charon, whose dark polar cap has no parallel. Styx, Nix, Kerberos and Hydra tumble and spin like a collection of chaotic tops, a dance seen nowhere else in the solar system (SN: 11/28/15, p. 14). 
And researchers have uncovered all these riches after the spacecraft has transmitted only 20 percent of its data. Complex composition data and images with even more detail have yet to be downloaded. “Who knows what’s going to show up in that 80 percent,” says Mark Showalter, a planetary scientist at the SETI Institute in Mountain View, Calif., who discovered Kerberos and Styx while New Horizons was en route. Such diversity suggests that even more strangeness awaits in the outer solar system, where the age of discovery is not over. NASA’s Juno spacecraft will arrive at Jupiter in July and plans are under way for a mission to the gas giant’s ice-encrusted moon Europa (SN Online: 6/18/15). In August, NASA tasked engineers at the Jet Propulsion Laboratory in Pasadena, Calif., with figuring out what it would take to return to either Uranus or Neptune, which haven’t been visited since the 1980s. And then there are the other dwarf planets, such as Sedna and Eris. “It gives me hope for younger people,” says William McKinnon, a planetary scientist at Washington University in St. Louis. “When they want to go out and explore Eris or some other world, they’re going to find even more amazing things.” New Horizons, meanwhile, is on course for its next stop: 2014 MU69, a 50-kilometer-wide hunk of ice about 1.6 billion kilometers past Pluto (SN Online: 11/5/15). Unlike Pluto, MU69 is probably pristine, an untouched relic from the dawn of the solar system. There, researchers hope to study an example of one of the fundamental building blocks of the planets. Once MU69 is far behind it, the spacecraft will eventually stop transmitting data. It will leave the confines of the solar system and sail into interstellar space. Long after humans have vanished, New Horizons will continue drifting through the galaxy, a monument to a people who weren’t content to watch those wandering points of light in the sky but reached across billions of kilometers to explore new worlds.
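The travel comparisons above are easy to sanity-check. A few lines of Python, using the textbook mean Earth-moon distance (a value not given in the article), reproduce the arithmetic:

```python
# Rough sanity check of the article's travel comparisons.
# The mean Earth-moon distance is a textbook value, not from the article.

distance_km = 5.0e9        # "nearly 5 billion kilometers" to Pluto
moon_km = 384_400          # mean Earth-moon distance
print(f"Moon round trips: {distance_km / (2 * moon_km):,.0f}")  # ~6,500

hours = 9.5 * 365.25 * 24  # "nine and a half years" in hours
print(f"Average speed: {distance_km / hours:,.0f} km/h")        # ~60,000
# The ~50,000 km/h in the article is the speed at the Pluto flyby, a bit
# below the journey-long average because solar gravity slowed the craft.
```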


News Article | February 28, 2017
Site: www.eurekalert.org

How can we know anything about the carbon dioxide levels in the atmosphere in Earth's deep past? Tiny bubbles trapped in ice provide samples of ancient air, but this record goes back only 800,000 years. To reach further back, scientists must depend on climate proxies, or measurable parameters that vary systematically with climate conditions. The standard proxy is the ratio of oxygen isotopes in tiny zooplankton called foraminifera. There are more than 50,000 different species of these bugs, 10,000 living and 40,000 extinct. Because the foraminifera shells fairly faithfully record the ratios of oxygen isotopes in seawater, they provide a signal that can be used to infer ancient temperatures. But there's another potential proxy gathering dust in the sedimentary archive: tiny phytoplankton called coccolithophores. They are found in large numbers throughout the sunlit layer of the ocean. Their tiny, hubcap-like plates, called coccoliths, are the main component of the Chalk, the Late Cretaceous formation that outcrops at the White Cliffs of Dover, and a major component of the "calcareous ooze" that covers much of the seafloor. Because coccolithophores are primary producers that are important to ocean biogeochemistry, they are well-studied organisms. They are less used for paleoceanographic reconstructions than foraminifera, however, because they create their plates inside their cells rather than precipitating them directly from seawater. This means there is a large biological overprint on the climate signal that makes it difficult to interpret. But new findings, published in the March 6 issue of the journal Nature Communications, could change that. Recreating the prehistoric environment under laboratory conditions, a team of scientists from the University of Oxford and the Plymouth Marine Laboratory, including Harry McClelland, now a postdoctoral research associate at Washington University in St. Louis, grew several different species of these algae under varying carbon levels. With this experimental data, they created a mathematical model of carbon fluxes in the coccolithophore cell that accounts for previously unexplained variations in the isotopic composition of the platelets the algae produce, and provides the framework for the development of a new set of proxies. Properly understood, the "noise" may itself be a signal. Coccoliths provide a window on ancient biology as well as climate, McClelland said. McClelland explains that the scientists began with a bit of a mystery. Coccoliths had been divided into two groups -- a light and a heavy group -- based on whether the platelets they precipitated were poorer or richer in the rarer heavy isotope of carbon compared to calcium carbonate formed by physical (abiotic) processes. The departures from the abiotic norm were "both large and enigmatic," McClelland said. Heavy isotopes undergo all of the same chemical reactions as light isotopes, but, simply because they have slightly different masses, they do so at slightly different rates. These tiny differences in reaction rates cause the products of reactions to have different isotope ratios than the source materials. The coccolithophores undertake the relevant carbon chemistry in two different cellular compartments: the chloroplast, where photosynthesis takes place, and coccolith vesicles, where platelets are precipitated. The main problem with deciphering the isotopic record the algae leave is that these two processes drive the isotopic composition of the carbon pool in opposite directions.
In their chloroplasts, coccolithophores take inorganic carbon and build it into biological molecules. This process proceeds far more rapidly for the CO2 containing the light isotope of carbon, causing the isotopic composition of the remaining pool to drift toward the heavier isotope. Platelets growing in coccolith vesicles, on the other hand, preferentially incorporate the heavier form of carbon from the substrate pool. The team chose a number of coccolithophore species, both light and heavy, and grew them in the laboratory -- "it's not all that different from gardening," McClelland said -- and then constructed a mathematical model of the cell that could predict the isotopic outcomes across all species for which data was available. They were able to show that the ratio of calcification to photosynthesis determines whether the platelets are isotopically heavier or lighter than abiogenic calcium carbonate, and to explain the size of the departure as well as its direction. For McClelland, the most exciting part of the study is that it opens a window on the biology of ancient creatures. When people use foraminifera as a climate proxy, he said, they usually pick one species and assume a constant biological effect, or offset. But we can see the impact of varying biology in the chemical signatures of the coccolithophores. With more research, McClelland said, coccolith-based isotopic ratios could be developed into a paleobarometer that would help us to understand the climate system's sensitivity to atmospheric carbon dioxide. "Our model allows scientists to understand algal signals of the past, like never before. It unlocks the potential of fossilized coccolithophores to become a routine tool, used in studying ancient algal physiology and also ultimately as a recorder of past CO2 levels," said senior author Rosalind Rickaby, professor of biogeochemistry at Oxford. The study was funded by the Natural Environment Research Council and the European Research Council.
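The flux logic McClelland describes can be captured in a toy steady-state mass balance. The sketch below illustrates the general idea only; it is not the authors' published model, and the fractionation factors are invented for the example:

```python
# Toy steady-state isotope mass balance for a coccolithophore cell.
# This illustrates the general flux logic only; it is NOT the authors'
# published model, and the fractionation factors below are invented.

def coccolith_offset(calc_to_photo, eps_photo=20.0, eps_calc=1.0):
    """Isotopic offset of the coccolith from the source carbon (per mil),
    given the ratio of calcification flux to photosynthesis flux.
    eps_photo: photosynthesis prefers light carbon, enriching the pool.
    eps_calc: calcite is slightly enriched relative to the pool."""
    f_calc = calc_to_photo / (1.0 + calc_to_photo)   # fraction out as calcite
    f_photo = 1.0 - f_calc                           # fraction out as organics
    # Steady state with source delta = 0:
    #   0 = f_photo * (pool - eps_photo) + f_calc * (pool + eps_calc)
    pool = f_photo * eps_photo - f_calc * eps_calc
    return pool + eps_calc

for ratio in (0.2, 1.0, 5.0):
    print(f"calcification/photosynthesis = {ratio}: "
          f"offset = {coccolith_offset(ratio):+5.1f} per mil")
# Low ratios leave the internal pool strongly enriched (photosynthesis has
# stripped out light carbon), so the calcite records a large positive offset;
# high ratios pull the offset back toward the abiogenic baseline.
```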


News Article | August 31, 2016
Site: www.nature.com

The US Congress and European funding bodies increasingly require science agencies and universities to document the potential impact of research on economic activity. But science agencies, whose job it is to identify and fund the best research, are not the right institutions to unpack the links between research and innovation. Their often well-meaning attempts to count what can be counted — largely, publications or patent activity — have created perverse incentives for researchers and are not credible. More emphasis on publications means that early-career researchers have become replaceable (and often unemployable) cogs in a paper-production machine, while the amount of unread and irreproducible research and patents has exploded. Better incentives, and science, can be established through thoughtful measurement. Countries should think before measuring by drawing on the social and economic sciences and applying standard approaches to evaluation: building testable hypotheses based on theory of change, identifying and measuring inputs and outputs, establishing appropriate comparison groups1, and collecting data and estimating the empirical relationships. Biologists, engineers and physicists might be good at decoding the human genome, expanding our understanding of materials science, and building better models of the origins of the Universe, but they lack the statistical and analytical expertise to evaluate innovation. Although there are enormous hurdles to overcome, more carefully considered approaches will make results more credible and lead to better incentives. The resulting measurement would move the focus away from counting documents and towards tracing what scientists do and how this transitions to economic activity. The measure would focus on the ways in which funding steers scientists into particular research fields, and then the way those scientists transfer ideas. It would use automated approaches to collect data on both funded and non-funded fields of research, rather than relying on manual, burdensome and unreliable self-reports. Let's consider how this approach of thinking, then measuring, might work in the real world to inform links between research and economic growth, and to improve incentives. Take the current imperative from both the US Congress and the Higher Education Funding Council for England (HEFCE) that grants should measure their “impact”. A thinking-first strategy would suggest that grants should be seen as a set of investments that constitute a portfolio, rather than a set of unrelated projects. Evaluating every grant's success would be replaced by a risk-balanced portfolio approach. As such, some grants would surely fail. The results of these failures would be published and valued. The incentives would change from rewarding the publication of positive (and sometimes irreproducible) results to encouraging the publication of failures, and science would gain from the identification of 'dry' research holes2. As US inventor Thomas Edison liked to say, he didn't fail, he just found 10,000 ways that didn't work. The intense focus on publications as a way to measure scientific output has led to three suboptimal outcomes. First, researchers hoard knowledge in order to be the first to publish new findings. Second, institutional structures incentivize lower-risk, incremental research2. And third, too many graduate students are produced who are then put into the academic holding tank of postdoctoral fellowships. But the best way to transmit knowledge is through people3. 
Science would move forward more effectively by tracing the activities of people rather than publications, particularly if the focus is on regional economic development4. Treating the placement and earnings of graduate students and postdoctoral fellows as key outputs of investment, and their education as crucial for the adoption of new ideas, would result in their training being treated as valuable in its own right. An excellent example of this type of investment is Cofactor Genomics, which was founded by graduate scientists working on the Human Genome Project at Washington University in St. Louis, Missouri. Rather than pursue an academic career, they used their expertise to create a company that uses genomics to develop RNA-based disease diagnostics and hired people they had met through grant-funded research. They saw that the technology had great commercial potential, which would have been difficult to pursue in an academic environment. The correct measure of this project's success was not the number of published articles it spawned, but the strength and vibrancy of the networks of human connections that it helped to create. Establishing institutes is standard practice in many scientific domains — examples include the US National Center for Atmospheric Research and CERN, the European particle-physics laboratory, in physical sciences, and the Poverty Action Lab at the Massachusetts Institute of Technology in Cambridge in social science. To the credit of the US academic community, cooperatives have led to the establishment of the Institute for Research on Innovation and Science (IRIS) at the University of Michigan in Ann Arbor. A partnership between IRIS and the US Census Bureau is, for the first time, building links between funding, the scientists it supports and subsequent entrepreneurship. Teams of scientists from 11 universities are beginning to develop the thoughtful approach to measurement that is urgently required. Alas, similar institutes have not been established in Europe, Australia or New Zealand, despite researchers putting the building blocks together. Given that changing incentives is imperative for any country aiming to foster economy-driving innovation, I hope that this gap is quickly closed.


News Article | September 15, 2016
Site: cen.acs.org

Since the 1970s, scientists have hypothesized that the moon was created billions of years ago when a Mars-sized planetary body, dubbed Theia, collided with a very young Earth. Their original idea, known as the giant-impact hypothesis, held that the moon solidified from the melted remnants of Theia. Later, they revised the theory to suggest that the moon coalesced from molten debris and a thick silicate atmosphere kicked up by the collision. Now, however, Kun Wang of Washington University in St. Louis and Stein B. Jacobsen of Harvard University show that small differences in potassium isotope abundances in Earth and moon rocks are enough to tip the balance in favor of a recently proposed theory—that an extremely violent collision vaporized and evenly mixed Earth’s mantle and all of Theia to form the moon (Nature 2016, DOI: 10.1038/nature19341). The general premise of the original giant-impact hypothesis is still favored by scientists. However, over the past few decades, as the sensitivity of isotopic measurements has increased, they realized that the hypothesis had some flaws. For example, in 2001, scientists found that abundances of three oxygen isotopes were the same in both Earth and lunar rocks, suggesting that the moon formed mostly from Earth’s mantle, not Theia. Scientists also could not find any differences in the relative isotopic abundances of potassium, a ubiquitous semivolatile element, between Earth and the moon. To deal with these inconsistencies, theorists invoked a relatively low-energy collision scenario in which the Earth-Theia impact kicked up a silicate atmosphere, allowing elements from both bodies to mix evenly before the moon formed. But Wang contends that this mixing process would be too slow to occur before dust settled back to Earth. Using a newly developed, high-precision method for analyzing potassium isotopes, Wang and Jacobsen found that seven lunar samples returned from Apollo missions are enriched in heavy potassium isotopes compared with terrestrial samples. Though the identical isotopic ratios of other elements such as oxygen still leave no doubt that Earth and Theia mixed during the collision, the subtle differences in potassium isotopes are key to the mechanism by which they mixed. The only process that could have created the potassium isotope fractionation, the authors say, is the preferential condensing of the heavier potassium in the forming moon after a high-energy event. “It was a big puzzle that there was no detectable difference” in the potassium isotopic abundances, says Sarah T. Stewart, a professor at the University of California, Davis, who, with her graduate student Simon Lock, proposed the violent collision model. “Now that a difference has been detected, it places a strong quantitative constraint on the possible mechanisms that led to lunar origin.” “This appears to represent significant new support for the model in which the moon-forming impact was a very high energy event,” adds H. Jay Melosh, an earth, atmospheric, and planetary sciences professor at Purdue University.
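For readers unfamiliar with the notation, isotope differences like these are usually reported as per-mil delta values. The snippet below shows the standard definition with invented ratio values (the article gives no raw numbers); only the roughly 0.4 per mil scale echoes the enrichment Wang and Jacobsen reported:

```python
# Standard per-mil delta notation from isotope geochemistry, applied to
# the 41K/39K ratio discussed above. The ratio values here are invented;
# the ~0.4 per mil enrichment is the order reported by Wang and Jacobsen.

def delta_per_mil(r_sample, r_standard):
    """delta = (R_sample / R_standard - 1) * 1000, in per mil."""
    return (r_sample / r_standard - 1.0) * 1000.0

r_standard = 0.072             # hypothetical 41K/39K reference ratio
r_lunar = r_standard * 1.0004  # a 0.4 per mil heavy enrichment

print(f"delta-41K: {delta_per_mil(r_lunar, r_standard):+.2f} per mil")
```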


News Article | November 9, 2015
Site: www.chromatographytechniques.com

A new study led by scientists at The Scripps Research Institute (TSRI) shows that a technology used in thousands of laboratories, called gas chromatography mass spectrometry (GC-MS), fundamentally alters the samples it analyzes. “We found that even relatively low temperatures used in GC-MS can have a detrimental effect on small molecule analysis,” said study senior author Gary Siuzdak, senior director of TSRI’s Scripps Center for Metabolomics and professor of chemistry, molecular and computational biology. Using new capabilities within XCMS, a data analysis platform developed in the Siuzdak lab, the researchers observed small molecules transforming—and even disappearing—during an experiment meant to mimic the GC-MS process, throwing into question the nature of the data being generated by GC-MS. The study was published online ahead of print on October 4 in the journal Analytical Chemistry. For more than 50 years, chemists and biologists have used GC-MS to identify and measure concentrations of small molecules. When a sample is injected into a GC-MS system, it is heated and vaporized. The vapor travels through a gas chromatography column and the molecules separate, allowing the mass spectrometer to measure the individual molecules in the sample. Today, GC-MS is widely used in thousands of laboratories for tasks such as chemical analysis, disease diagnosis, environmental monitoring and even forensic investigations. The new experiments were initiated when Siuzdak was preparing a short course for students at the American Society for Mass Spectrometry annual meeting. The question arose of how heat from the GC-MS vaporization process could affect results, so Siuzdak and TSRI Research Associate Mingliang Fang ran a set of experiments to compare how small molecules responded to thermal stress. To their surprise, the molecular profiles of as many as 40 percent of the molecules were altered, suggesting that heat from the GC-MS process could dramatically change the chemical composition of the samples. “The results were quite astounding—as this is a technology that has been used for decades,” said Siuzdak. The finding led the researchers to take a closer look at how molecules degrade and transform during GC-MS. The scientists analyzed small molecule metabolites heated at 60, 100 and 250 degrees Celsius to mimic sample preparation and analysis conditions. The team used XCMS combined with a low-temperature liquid chromatography mass spectrometry technology, which has previously been shown not to degrade molecules thermally, to determine the extent of the thermal effects. The researchers observed significant degradation even at the lower temperatures. At the higher temperatures, almost half of the molecules were degraded or completely transformed. “In retrospect, there was very little to be surprised about: heat degrades molecules,” said Siuzdak. “However we’ve simply taken for granted the extent of thermal degradation. While this is a negative result and scientists rarely publish them, I felt compelled especially for the students just getting started in their careers to report the limitation of such a ubiquitous technology.” The researchers noted that even molecules not typically observed in GC-MS can also be transformed; for example, the energy metabolite adenosine triphosphate (ATP) was readily converted into adenosine monophosphate (AMP).
This transformation is relevant for medical research because scientists often use a heating process to look at the ratio of ATP to AMP in cells to estimate the function of cellular components in aging and disease. “People use this ratio to detect disease, but if the ratio can be changed by the heating process, the results will not be accurate,” said Fang. “It is known that ATP is thermally sensitive, but not how it changed under these conditions.” Thermal degradation could also explain why scientists have detected so many unknown molecular “peaks” in the past. Based on the new study, the researchers now believe these metabolites may be byproducts of the heating process—the result of reactions between metabolites as they degrade. So why hadn’t scientists figured out the effect of heating until now? Siuzdak explained that while some scientists had noticed changes in specific metabolites, it was difficult to see changes in overall molecular profiles that contain thousands of molecules. This omic-based study was made possible by new capabilities within the XCMS program developed at the TSRI Scripps Center for Metabolomics. XCMS is a free, cloud-based data analysis technology used to analyze mass spectrometry data all over the globe. “With XCMS, we could expand our study to obtain a global profile of how the metabolites were altered—not just a few compounds,” said Fang. “Fortunately these problems can be overcome with the use of standards in GC-MS as well as using newer, ambient temperature mass spectrometry technologies, and this report will likely stimulate more scientists to move to these less destructive alternatives,” said Siuzdak. In addition to Siuzdak and Fang, authors of the study, “Thermal Degradation of Small Molecules: A Global Metabolomic Investigation,” were Julijana Ivanisevic of the University of Lausanne, Michael E. Kurczy of AstraZeneca and TSRI, Gary J. Patti of Washington University in St. Louis, and Caroline H. Johnson, Linh T. Hoang, Winnie Uritboonthai and H. Paul Benton of TSRI. See http://pubs.acs.org/doi/abs/10.1021/acs.analchem.5b03003
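The paper's core comparison, the same metabolite pool profiled with and without thermal stress, can be illustrated with a trivially small example. This is a sketch of the general screening idea only, with made-up intensities and an arbitrary cutoff, not the XCMS workflow itself:

```python
# A minimal sketch of the screening idea: compare each metabolite's signal
# with and without thermal stress and flag large changes. Intensities and
# the cutoff are made up; this is not the XCMS workflow itself.

features = {                  # feature -> (unheated, heated at 250 C)
    "ATP":     (1.0e6, 1.2e5),
    "AMP":     (2.0e5, 9.5e5),   # rises as ATP breaks down
    "glucose": (5.0e5, 4.8e5),
}

FOLD_CUTOFF = 2.0             # arbitrary threshold for calling "altered"

for name, (before, after) in features.items():
    fold = max(before, after) / min(before, after)
    status = "ALTERED" if fold >= FOLD_CUTOFF else "stable"
    print(f"{name:8s} fold change {fold:6.2f} -> {status}")
```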


News Article | February 21, 2017
Site: www.sciencemag.org

At least two dozen junior and senior researchers are stuck in scientific limbo after being barred from publishing data collected over a 25-year period at a National Institutes of Health (NIH) lab. The unusual ban follows the firing last summer of veteran neurologist Allen Braun by the National Institute on Deafness and Other Communication Disorders (NIDCD) for what many scientists have told Science are relatively minor, if widespread, violations of his lab’s experimental protocol. Most of the violations, which were unearthed after Braun himself reported a problem, involve the prescreening or vetting of volunteers for brain imaging scans and other experiments on language processing. The fallout from the case was recently chronicled on a blog by one of Braun’s former postdocs, and it highlights a not-uncommon problem across science: the career harm to innocent junior investigators following lab misconduct or accidental violations on the part of senior scientists. But this case, say those familiar with it, is extreme. “We’re truly collateral damage,” says Nan Bernstein Ratner of the University of Maryland in College Park, who researches stuttering. She spent 5 years collaborating with Braun. Now, two of her graduate students have had to shift their master’s thesis topics, and an undergraduate she mentored cannot publish a planned paper. “The process has been—you can use this term—surreal.” Braun, who had been at NIH 32 years, including serving as chief of NIDCD’s Language Section since 1994, has filed a legal claim appealing the termination of his employment and would not comment. Andrew Griffith, the scientific director of NIDCD in Bethesda, Maryland, also declined to comment. “Unfortunately, the matter you reference is currently in litigation and therefore NIH is not in a position to provide any information or comment,” he emailed. No one disputes that Braun’s lab failed to properly follow its experimental plan. An audit commissioned by NIH found, among other issues, that more than 200 healthy volunteers had not had a medical history and a physical signed off on by Braun, as required by the lab’s protocol. Nonetheless, affected researchers, senior scientists who are not impacted, and a patient advocacy group all oppose NIH’s decision to halt data publication. There is no evidence, they have argued in letters to NIH officials, that the violations compromised the bulk of the data or the safety of study volunteers. They also question the breadth of the ban. Although the audit examined only volunteers newly enrolled since September 2009, NIDCD determined that the violations it found were sufficiently severe that all data from Braun’s studies back to 1992 were unusable. These ranged from functional MRI (fMRI) studies on people who stutter to how stroke victims regain speech to the neurological basis of more complex language functions. (Science found no evidence that NIH officials are seeking retractions for the dozens of published papers that used those same data sets.) Braun’s lab began unraveling in 2015, when it included two postdocs and four other scientists. According to several sources, Braun learned that a new study volunteer had been incorrectly “coded.” The individual, who shared the same name and age as another volunteer, was misidentified as having already been part of the study, and was not screened as a new volunteer should be. Braun reported the incident to his institutional review board (IRB) and the protocol was suspended. NIDCD then commissioned an external audit of the lab.
The audit, which is dated February 2016 and which Science obtained, noted that Braun had not signed off on histories and physicals for 206 of the 424 volunteers whose records the audit examined. But the audit also noted that of those 206, all but five had received a history and physical elsewhere at the agency, because they were participating in other NIH studies, too. The audit cited other documents missing from Braun’s lab, including screening questionnaires and, in the case of two female volunteers, records of a pregnancy test before they’d entered the scanner. (Pregnant women are typically excluded from MRI studies without special approval.) A month after the audit was completed, the IRB wrote to Braun that it “classified the [violations] as serious deviations and serious unanticipated problems.” But its memo went on: “The Board had insufficient information to determine whether the non-compliance invalidated the study data. The IRB strongly supports the use and publication of data whenever possible so that the contribution of participants is not negated.” The IRB requested that Braun’s work remain suspended until the protocol violations could be resolved. But “if there is appropriate remediation,” it wrote to Braun, the work could continue. This would include re-education of all investigators and staff on the study, as well as, at minimum, better monitoring of the research, medical record forms that support “proper and complete documentation” of various study features, and perhaps additional safeguards. A patient advocacy group whose members volunteered for Braun’s research lobbied NIDCD to free up those data. “I wish to express my opinion that Dr. Braun’s co-researchers be allowed to submit research findings for publication in a timely manner,” wrote Gerald Maguire, a psychiatrist at the University of California, Riverside, School of Medicine, in his role as chair of the Board of Directors of the National Stuttering Association, in a 27 March 2016 letter to Michael Gottesman, NIH deputy director for intramural research. “Stuttering suffers from a dearth of adequate research.” In June 2016, in the same week the IRB planned to discuss a remediation plan, Braun was fired and his lab was shuttered. Because his protocol was still suspended, research from the lab couldn’t be published until NIH administrators gave the green light. Affected researchers pleaded their case. “We believe there are viable ways that data validity can be assured to everyone’s satisfaction,” a group of 15 impacted researchers, most of them junior, wrote to NIDCD administrators. “We sincerely ask you to formulate a plan for data access and publication for all former trainees in Dr. Braun’s lab and his outside collaborators.” NIDCD held firm. “We regret NIDCD cannot approve further publications,” wrote Carter Van Waes, the institute’s clinical director, to one of the signatories, Jed Meltzer of the University of Toronto in Canada. Meltzer was a postdoctoral fellow in Braun’s lab from 2006 to 2010 and continued to collaborate with him. “I have a graduate student who’s got a paper he can’t publish,” Meltzer says. “It’s like one-quarter of his Ph.D.” Another former Braun postdoc is on the job market and has approximately 10 papers that cannot be published. The toll on junior scientists disturbs David Wright of Michigan State University in East Lansing, who from 2012 to 2014 ran the Office of Research Integrity at the Department of Health and Human Services.
Although there are times when data cannot be salvaged, when a principal investigator is fired “the institute really ought to extend itself significantly to protect” the careers of investigators who did nothing wrong, Wright says. He cautions that he’s unfamiliar with details of the Braun case, but notes that supporting junior scientists is not only “a matter of fairness and decency, but also in the public interest.” Scientists familiar with fMRI scans are also puzzled by the publication ban. “There’s nothing here that would flaw this data in any way,” says Joshua Shimony, a neuroradiologist at Washington University in St. Louis in Missouri. And he emphasizes that Braun’s lapses would not have jeopardized safety, as fMRI scans are extremely low risk. His view echoes those of seven senior NIH scientists who specialize in MRI research; they wrote a letter on Braun’s behalf before he was fired. Especially frustrating to Ratner was that NIDCD declined to share whether the volunteers in her group’s stuttering research were among those flagged as violations. On a Friday afternoon earlier this month, she pleaded her case yet again. One week later, a senior NIDCD official replied. “We have extensively reviewed the matter and stand by our decision,” he wrote. “We sincerely regret the impact this has on anyone associated with this protocol.” *Correction, 27 February, 10:22 a.m.: This article mistakenly indicated that the Office of Research Integrity is within NIH. It is within the Department of Health and Human Services, of which NIH is a part.


News Article | November 1, 2016
Site: www.eurekalert.org

The American Physical Society and the American Institute of Physics this month awarded the 2017 Dannie Heineman Prize for Mathematical Physics to Carl M. Bender of Washington University in St. Louis. With this prize he joins the illustrious company of Stephen Hawking, Freeman Dyson, Murray Gell-Mann, Roger Penrose, Steven Weinberg and Edward Witten, among others. Bender, the Wilfred R. and Ann Lee Konneker Distinguished Professor of Physics in Arts & Sciences, was cited "for developing the theory of PT symmetry in quantum systems and sustained seminal contributions that have generated profound and creative new mathematics, impacted broad areas of experimental physics, and inspired generations of mathematical physicists." "I use physics to generate interesting problems, and then I use mathematics to solve those problems," Bender said. "My approach is to understand what's going on in the real world -- where we live -- by studying the complex world, which includes the real world as a special case." He explains that everything physicists observe is on the real axis: all the numbers, positive or negative, rational or irrational, that can be found on a number line. But the real axis is just one line in the infinite plane of complex numbers, which includes numbers with "imaginary" parts. "The complex plane helps us to understand what's going on in the real world," he said. "For example, why are the energy levels in an atom quantized? Why can the atom only have certain energies and not others? We don't understand this because we don't look in the complex plane. In the complex plane, the energy levels are quantized. They are smooth and continuous. But if you take a slice through the complex plane along the real axis, the energy is chopped into disconnected points. It is as though the ramp was removed from a multi-level parking garage, leaving disconnected levels." How does Bender know which problem to pick, which problems might yield when pushed in this way? "You can smell it," he said. "Usually something is true because there is a clear argument for why it is true, but if it is true because 'everybody knows it is true,' then such a claim is potentially suspect." Bender tells a charming story to illustrate what he means. Many years ago his father, a high school physics teacher, put Bender's son to bed by telling him the story of the brachistochrone, a well-known physics problem that had been solved 300 years before. But, as Bender listened from another room, he realized the accepted version was wrong. "My father said, 'This is a classic physics problem; its solution is well known.' But I said, 'That's no longer the right answer.'" Working with an undergraduate eager for a challenge, Bender updated the brachistochrone problem to take into account Einstein's theory of relativity. So, Bender listens for physics that produces the dull thunk of an unexamined assumption instead of ringing true, but there is more to it than that. He also is unusually good at seeing moveable patterns: in letters, in chess positions, in musical compositions as well as in mathematical functions. His presentation slides often include anagrams -- he might introduce his name and university as Crab Lender of Washing Nervy Tuitions. He likes to play speed chess and included in his Harvard dissertation a postal chess game that lasted a year and a half. (It was Appendix H, and none of the examiners noticed it.) He also has mastered most of the repertoire for the clarinet and even considered becoming a professional musician.
But theoretical physics isn't a monologue; it is a conversation. And theoretical physicists, like mathematicians, take ideas for a test drive by describing them to their peers, who help by trying as hard as they can to find flaws. Because there are only a few hundred highly active mathematical physicists in the world, to have these theory-testing conversations, Bender often travels abroad to conferences or on sabbatical. "The interaction and discussions have enriched my productivity immensely," he said. Bender is currently an International Professor of Physics at the University of Heidelberg, Visiting Professor at King's College London, and Member of the Higgs Center in Edinburgh. While mathematicians declare success once they've convinced other mathematicians of the rigor of their proofs, physicists -- while encouraged by the agreement of their peers -- are not satisfied until nature also expresses an opinion. They want experimental proof. And Bender, at heart, is a physicist. "I started out being interested in experimental science and I was good at it," he said. "Built a lab in my house, built my ham rig, ran a radio repair business, etc. But I think experimental science was too slow for me. I preferred to work with pencil and paper at my own pace. "Physics is something you ultimately know is right or wrong, and mathematics is always right. So that's why physics is tricky, more dangerous," he added. So "the best thing that ever happened" to Bender was the confirmation by experiment of a daring quantum-mechanical theory he and his former graduate student Stefan Boettcher proposed in 1998. This is the PT symmetry cited in the Heineman Prize. Characteristically, he arrived at this theory by questioning one of the fundamental assumptions of quantum mechanics. This axiom states that certain aspects of quantum mechanics must be Hermitian, meaning, among other things, that they must remain in the realm of real numbers. "But insisting that quantum mechanics must be Hermitian," Bender said, "is like saying all numbers must be even." Bender and Boettcher proposed a new non-Hermitian theory, a complex generalization of quantum mechanics, which they called PT-symmetric (parity-time symmetric) quantum mechanics. Parity is the symmetry operation that turns your left hand into your right. Time reversal just means that time runs backward rather than forward. In two famous Nobel-winning experiments, other physicists had shown that the universe is neither parity nor time symmetric. A left-handed laboratory can obtain different experimental results from a right-handed laboratory, and a laboratory traveling backward in time can obtain different results from a laboratory traveling forward in time. What Bender and Boettcher argued is that if you reflect both space and time, everything returns to normal. This is because a parity reflection can be exactly compensated by a time reversal. Bender and Boettcher also made a prediction based on their theory, so that the theory was falsifiable. The prediction was that PT-symmetric systems can undergo a transition from real to complex energies. PT symmetry would be broken at this transition, and the behavior of the system would change in an interesting -- and observable -- way. The neat thing about this was that some optical systems obey equations similar to the quantum-mechanical ones that govern atoms. So PT-symmetric systems can be constructed from simple optical components, such as lasers and optical fibers.
"The trick," Bender said, "is to couple a component with gain, where energy flows into the system, to a component that exhibits loss, where energy flows out of the system." The first experiment to confirm the theory was carried out by eight scientists at two universities in the United States and two in Canada, but was physically located at the University of Arkansas. The theory of PT symmetry has since been repeatedly verified by many other experiments. Bender heard about the first experiment in 2008, nearly 10 years after publishing the theory. Demetrios Christodoulides of the University of Central Florida emailed him to say his group was pretty certain they had seen the PT phase transition. "If everything goes well, with a bit of luck, we may have an experimental explosion in the PT area," Christodoulides wrote. "I was on cloud nine for weeks," Bender said. "It took me a long time to come down because never in my life did I think that I would ever predict anything that was directly observable in a laboratory experiment, not to mention a very simple experiment."


WYNNEWOOD, PA, March 03, 2017 -- Dr. Howard J. Eisen has been included in Marquis Who's Who. As in all Marquis Who's Who biographical volumes, individuals profiled are selected on the basis of current reference value. Factors such as position, noteworthy accomplishments, visibility, and prominence in a field are all taken into account during the selection process. Inspired by a personal interest in the field, Dr. Eisen has dedicated his career to advancing cardiovascular medicine. His journey started at Cornell University and the University of Pennsylvania, where he earned a Bachelor of Arts in biology and an MD, respectively, and continued at the Hospital of the University of Pennsylvania, where he served as a medical intern and resident in medicine. Dr. Eisen did his Cardiovascular Medicine Fellowship at Washington University in St. Louis/Barnes Hospital. Dr. Eisen proceeded to acquire various academic roles at the University of Pennsylvania, Temple University, and Drexel University. He has served as the Thomas J. Vischer Professor of Medicine at Drexel's College of Medicine since 2004 and as a member of a study section for the National Institutes of Health since 1999. Over the years, he has worked in the areas of cardiology, cardiovascular disorders, heart failure and transplants, and general clinical research. In order to keep abreast of changes in his field, Dr. Eisen affiliates himself with the American College of Cardiology, the American Society of Transplantation, the International Society for Heart and Lung Transplantation, and the American Federation of Clinical Research. He is also a diplomate of the American Board of Medical Examiners, the American Board of Internal Medicine, and the American Board of Cardiovascular Diseases. Dr. Eisen is board-certified by the ABIM in internal medicine, cardiovascular disease, and advanced heart failure and transplant cardiology. Notably, he has been named a top doctor in Philadelphia Magazine and Castle & Connolly's Top Doctors in America every year since 1996, and in 2006, he was awarded the Alumni Service Award through the American Federation of Clinical Research. Dr. Eisen has also been featured in the 1st through 8th editions of Who's Who in Medicine and Healthcare, and five editions of Who's Who in Science and Engineering. About Marquis Who's Who: Since 1899, when A. N. Marquis printed the first edition of Who's Who in America, Marquis Who's Who has chronicled the lives of the most accomplished individuals and innovators from every significant field of endeavor, including politics, business, medicine, law, education, art, religion and entertainment. Today, Who's Who in America remains an essential biographical source for thousands of researchers, journalists, librarians and executive search firms around the world. Marquis publications may be visited at the official Marquis Who's Who website at www.marquiswhoswho.com


News Article | October 26, 2016
Site: www.eurekalert.org

Alison M. Goate, D.Phil., Professor of Neuroscience, Neurology and Genetic and Genomic Sciences at the Icahn School of Medicine at Mount Sinai, and Lynne D. Richardson, MD, FACEP, Professor of Emergency Medicine and Population Health Science and Policy, have been elected as two of 79 new members to the prestigious National Academy of Medicine (NAM), formerly known as The Institute of Medicine (IOM). "Election to the National Academy of Medicine is considered one of the highest honors in medicine," says Dennis S. Charney, MD, the Anne and Joel Ehrenkranz Dean of the Icahn School of Medicine at Mount Sinai. "The election of Drs. Goate and Richardson is a notable achievement and well-deserved recognition of each of their leadership efforts and important contributions to their particular fields of study." Dr. Goate is an internationally renowned neuropsychiatric researcher and the founding Director of The Ronald M. Loeb Center for Alzheimer's Disease at Mount Sinai. As a molecular geneticist, Dr. Goate has established an international reputation for her research to elucidate the genetic, molecular, and cellular basis of Alzheimer's disease (AD) and related neurodegenerative disorders. Dr. Goate has identified key gene mutations linked to the heritable risk for Alzheimer's disease, including her finding that a rare mutation of the PLD3 gene doubles the risk of developing late-onset AD. Prior to joining Mount Sinai, she led a team of researchers at Washington University in St. Louis that performed the largest ever genome-wide association study of protein markers found in cerebrospinal fluid, resulting in the discovery of three genetic variants associated with an increased risk of developing AD. Dr. Richardson is Professor and Vice Chair of Emergency Medicine and Professor of Population Health Science and Policy at the Icahn School of Medicine at Mount Sinai. She is a practicing emergency physician and a nationally recognized expert in health services research. Dr. Richardson's areas of interest are access and barriers to care, improving effective utilization of health care resources, and health care disparities. She is principal investigator (PI) for an NIH-funded study seeking to develop effective methods for communicating with communities about emergency research. She is also PI of a trial focused on prevention and early treatment of acute lung injury and of the New York City Sickle Cell Implementation Science Consortium. With a strong track record of mentoring young investigators to successful research careers, Dr. Richardson directs a research career development program in emergency medicine and an emergency care research fellowship program. She also received a Health Care Innovation Award from the Centers for Medicare and Medicaid Services to implement a new model of emergency care for older adults. New members are elected by current, active members through a selective process that recognizes people who have made major contributions to the advancement of the medical sciences, health care, and public health. Established in 1970 by the National Academy of Sciences, NAM is a national resource that provides independent, objective analysis and advice on health issues. The new NAM members bring Mount Sinai's total membership in the prestigious group to 21. The distinguished Mount Sinai faculty members whom Drs. Goate and Richardson join in earning this honor are: Joseph D. Buxbaum, PhD; Dennis S. Charney, MD; Kenneth L. Davis, MD; Robert J. Desnick, MD, PhD; Kurt W. Deuschle, MD;
Angela Diaz, MD, MPH; Valentin Fuster, MD, PhD; Bruce Gelb, MD; E. Cuyler Hammond; Kurt Hirschhorn, MD; Philip J. Landrigan, MD, MSc; Diane E. Meier, MD; Eric J. Nestler, MD, PhD; Maria Iandolo New, MD; Peter Palese, PhD; Hugh A. Sampson, MD; Irving J. Selikoff, MD; Pamela Sklar, MD, PhD; and Barbara G. Vickrey, MD, MPH. The Mount Sinai Health System is an integrated health system committed to providing distinguished care, conducting transformative research, and advancing biomedical education. Structured around seven hospital campuses and a single medical school, the Health System has an extensive ambulatory network and a range of inpatient and outpatient services -- from community-based facilities to tertiary and quaternary care. The System includes approximately 7,100 primary and specialty care physicians; 12 joint-venture ambulatory surgery centers; more than 140 ambulatory practices throughout the five boroughs of New York City, Westchester, Long Island, and Florida; and 31 affiliated community health centers. Physicians are affiliated with the renowned Icahn School of Medicine at Mount Sinai, which is ranked among the highest in the nation in National Institutes of Health funding per investigator. The Mount Sinai Hospital is in the "Honor Roll" of best hospitals in America, ranked No. 15 nationally in the 2016-2017 "Best Hospitals" issue of U.S. News & World Report. The Mount Sinai Hospital is also ranked as one of the nation's top 20 hospitals in Geriatrics, Gastroenterology/GI Surgery, Cardiology/Heart Surgery, Diabetes/Endocrinology, Nephrology, Neurology/Neurosurgery, and Ear, Nose & Throat, and is in the top 50 in four other specialties. New York Eye and Ear Infirmary of Mount Sinai is ranked No. 10 nationally for Ophthalmology, while Mount Sinai Beth Israel, Mount Sinai St. Luke's, and Mount Sinai West are ranked regionally. Mount Sinai's Kravis Children's Hospital is ranked in seven out of ten pediatric specialties by U.S. News & World Report in "Best Children's Hospitals." For more information, visit http://www. or find Mount Sinai on Facebook, Twitter and YouTube.


A solution to a major challenge in using minimally invasive cryotherapy to target and kill cancer cells with freezing temperatures while protecting adjacent healthy tissues has been reported by a research team in Texas in an article published this week in the Journal of Biomedical Optics. The journal is published by SPIE, the international society for optics and photonics. Cryotherapy may be used to treat internal and external cancer lesions. Patients benefit from fast recovery, low toxicity, minimal anesthesia, and comparatively low cost. However, a major difficulty until now has been finding an efficient method of monitoring temperatures in real time in order to avoid damaging non-targeted tissues. In "Imaging technique for real-time temperature monitoring during cryotherapy of lesions," authors Elena Petrova, Anton Liopo, Vyacheslav Nadvoretskiy, and Sergey Ermilov of TomoWave Laboratories, Inc., in Houston describe a new technique for monitoring temperature that addresses this problem. "Petrova et al. report the use of red blood cells as temperature sensors to convert reconstructed optoacoustic images to temperature maps," said associate editor Bahman Anvari (University of California, Riverside). "The technique is potentially useful in real-time optoacoustic-based temperature measurements during cryotherapy procedures. The investigators have performed systematic and meticulous studies to validate this temperature measurement approach in tissue-mimicking phantoms." The team investigated applying an optoacoustic temperature monitoring method for noninvasive real-time thermometry of vascularized tissue during cryotherapy. The universal temperature-dependent optoacoustic response of red blood cells was used to convert reconstructed optoacoustic images to temperature maps, yielding the potential to prevent noncancerous tissue from being destroyed or damaged through careful monitoring of tissue temperatures during cryotherapy procedures. "Our results provide an important step towards future noninvasive temperature monitoring in live tissues," the authors write. Lihong Wang, Gene K. Beare Distinguished Professor of Biomedical Engineering at Washington University in St. Louis, is editor-in-chief of the Journal of Biomedical Optics. The journal is published in print and digitally in the SPIE Digital Library, which contains more than 458,000 articles from SPIE journals, proceedings, and books, with approximately 18,000 new research papers added each year. More information: Elena Petrova et al, Imaging technique for real-time temperature monitoring during cryotherapy of lesions, Journal of Biomedical Optics (2016). DOI: 10.1117/1.JBO.21.11.116007
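The conversion step the authors describe, turning a reconstructed optoacoustic image into a temperature map via the temperature-dependent response of red blood cells, amounts to inverting a calibration curve pixel by pixel. The sketch below illustrates that general idea with an invented calibration curve, not the response reported in the paper:

```python
# A minimal sketch of calibration-based optoacoustic thermometry: invert a
# known temperature-vs-signal curve pixel by pixel. The calibration values
# here are invented, not the red-blood-cell response from the paper.

import numpy as np

cal_temps = np.array([-20.0, -10.0, 0.0, 10.0, 20.0, 30.0, 40.0])  # deg C
cal_signal = np.array([0.25, 0.40, 0.55, 0.70, 0.85, 1.00, 1.15])  # a.u.

def to_temperature_map(image):
    """Map normalized optoacoustic intensities to temperatures by
    interpolating the monotonic calibration curve (values outside the
    calibrated range are clamped to its endpoints)."""
    return np.interp(image, cal_signal, cal_temps)

rng = np.random.default_rng(0)
image = rng.uniform(0.3, 1.1, size=(4, 4))   # fake reconstructed image
print(np.round(to_temperature_map(image), 1))
```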


News Article | December 15, 2015
Site: www.scientificamerican.com

Francis Kyakulaga, a district sanitation manager, and I had finished eating a meal at the ground floor restaurant of the Mwaana Hotel on the Trans-African Highway in Uganda. During the meal, we noticed an increasing commotion in the hotel lobby area, and Kyakulaga asked a man what was happening. He informed us that someone had collapsed upstairs. We hurried upstairs to find an unconscious man lying in the hallway. A hotel worker had seen him collapse about one hour before and had informed the hotel management. While the hotel staff prepared our food, they had left the unconscious man in the hallway. They did not know what to do to help him. When I arrived on the scene, I checked his breathing and carotid pulse. His cold, hard neck made me think that he must have been dead long before I got there. Still, I wanted to be sure. I began CPR and shouted for an ambulance. I heard his ribs crack beneath my compressions, and soon I knew that my efforts were hopeless. Kyakulaga personally knew the top healthcare officials in the district, and about ten minutes after he called the head of the nearby hospital, three vehicles arrived, and two medics joined us. They did not bring any equipment with them, although the vehicles were stocked with medical supplies. One medic slowly put on his gloves before checking for the man’s radial pulse. After what seemed like a long time, he hesitantly and incorrectly proclaimed that the man had a weak pulse. The medic then discussed at length with his colleague what to do next. Still unsure, he decided to call the police ambulance for further direction. After several minutes, someone from the police ambulance arrived and declared the man dead. The news that a man had died spread throughout the town, and the street outside filled up with several hundred people. The crowd waited for a glimpse of the dead body, and did not disperse until the body was carried off in the back of a pickup truck. “People want to help, but they don’t know how,” Kyakulaga told me. Most ambulances in Uganda are not staffed by trained medical technicians. Kyakulaga explained: “the hiring process to staff the ambulance goes like this—they ask if you can drive a car. If you say yes, you are hired.” Here, as in most low-resource settings in Africa, even if someone is taken to the hospital in a timely manner, there are no guarantees that the patient will receive the necessary medical intervention. With the already limited government funds for healthcare targeted to treat infectious diseases such as malaria, tuberculosis, and HIV, there is nothing left to improve any phase of emergency medicine. Uganda’s emergency medical system currently depends on the support of poorly-funded non-governmental organizations. Two weeks after the hotel incident, I heard my neighbor at the home where I was staying shout for help. The neighbor’s maid had collapsed. I assessed her and found that she was unresponsive, but she was breathing and had a pulse. Since the neighbor had no personal connection to the hospital to call for a vehicle, we mobilized a car and drove through the sparsely lit, bumpy dirt roads in the Ugandan night to Iganga District Hospital. When the driver and I arrived at the hospital compound, I shouted for a stretcher. No one came. Other patients and their families surrounded me. Some laughed at seeing a foreigner making a commotion. I ran into the hospital and grabbed a stretcher, put the unconscious woman on it, and pushed her into the in-patient room.
There were no medical personnel in the room, so I called for a nurse and a doctor. The woman's breathing seemed to become fainter with every second. Eventually, a nurse ambled into the room and slowly put on her gloves. She glanced at the patient and asked, "Are you her husband?" At this, I raised my voice to request a glucometer, oxygen, and a blood pressure cuff. The nurse gave me a blank look and left the room. A couple of minutes later, a clinical officer came. He also glanced at the patient from some distance away and said, "She's fine." We then transferred her to the casualty ward, and on seeing a foreigner pushing the stretcher with the unconscious woman, the nurses there looked at me blankly. After several seconds of mutual staring, I asked, "Aren't you supposed to do something?" The three nurses meandered to the stretcher and surrounded it. They looked at the patient without touching her. I asked them if they had a blood pressure cuff or a glucometer, and one of them brought the equipment. We found that the maid was hypoglycemic. Fortunately she recovered, but I was shocked by the staff's lack of emergency medicine training.

I later found that stories like these are common, not just in Uganda but in most of Sub-Saharan Africa, where emergency medicine is a grossly neglected part of healthcare systems. In 2004, South Africa established Africa's first residency training program in emergency medicine. Since then, only four other countries in the region (Ethiopia, Botswana, Ghana, and Tanzania) have followed suit. Sub-Saharan Africa has one of the highest rates of traffic fatalities in the world despite having the fewest motorized vehicles. As Africa's economy grows, more people will own cars, and the need for quality emergency medical systems will grow with them. According to the World Health Organization, by 2030 traffic accidents will account for 3.6 percent of total deaths in the world, compared to just 0.8 percent for malaria. This translates to an urgent need for functional emergency medical systems. And emergency care in most Ugandan hospitals is poor: patients fortunate enough to make it to a hospital face a nearly insurmountable challenge to receive any emergency care at all. There are two types of triage in Ugandan government hospitals: either other sick patients recognize that someone is sicker than they are and let that person go ahead in the queue, or medical personnel happen to notice that a patient in line has a very obvious emergency.

To help address the emergency-medicine crisis, Global Emergency Care Collaborative (GECC), a non-governmental organization founded by four American physicians in 2009, has employed "task-shifting" (teaching a non-physician clinician to perform tasks formerly delegated to specialist physicians) to train nurses to become emergency medical providers. Task-shifting is employed in settings where there are not enough physicians to meet the healthcare needs of the population. According to the CIA's World Factbook, Uganda has 0.12 physicians per 1000 people; the United States, by contrast, has 2.5 physicians per 1000 people. GECC's flagship program is a two-year, "train the trainer" Emergency Care Practitioner (ECP) program for nurses, founded in 2008. After the two-year program, currently run at Nyakibale Hospital in Western Uganda, the nurses are qualified to provide independent emergency medical care without a physician.
This independence is necessary in physician-limited settings such as Uganda, where patients often have to wait more than a day in the hospital before seeing a physician. With the ECP program, GECC helped establish the first emergency department in Uganda with internationally acceptable protocols, at Nyakibale Hospital. Currently, private donors from the West fund GECC, which is collaborating with the Ugandan government and academic institutions to transfer administration of the program to Ugandans. The ultimate goal is to incorporate the nurse training program into the Ugandan healthcare system in order to make it sustainable without ongoing international investment. Tobias Kisoke, the program director of GECC at Nyakibale Hospital, said, "This program saves a lot of lives. I think that the idea of training nurses to be emergency medical care providers has a lot of potential throughout Africa. Lack of emergency medicine causes many unnecessary deaths." Nyakibale Hospital Emergency Department has served over 25,000 patients in the past five years, averaging between 13 and 14 patients per day. It remains to be seen whether the model at this relatively well-resourced private hospital can be applied to under-resourced, under-staffed, and high-traffic government hospitals. In collaboration with partners from Mbarara University of Science and Technology, the Ministry of Health, and Masaka Regional Referral Hospital, GECC will expand its operations to Masaka Hospital in October 2015. Masaka Hospital is a regional referral government hospital, and it should serve as a good test of the feasibility of the task-shifting program in a more resource-limited setting. There are notable challenges to running a successful emergency department in a district hospital. "[The] GECC model could work in a district hospital, but it needs outside help," Dr. Luyimbaazi Julius, the medical superintendent at Nyakibale Hospital, said. "The government does not usually take responsibility for the full staffing of the hospital. If a department needs ten nurses to function, the government sometimes only has the resources to pay two." However, the current benefits of the emergency department at Nyakibale are undeniable. Turyamureeba Claudio, hospital administrator of Nyakibale, said, "Hospital patient numbers are increasing after the GECC came to Nyakibale. Many health centers have opened in this district since GECC arrived in 2008, so the number of patients coming to the hospital should be decreasing, but now, the communities know about the emergency department and that they will be seen right away." GECC still faces many challenges. Mark Bisanzo, the President of GECC, stated, "To make this program sustainable, there needs to be investment in emergency medicine. Right now, there is hardly any, but we hope that we can make emergency medicine a part of the Ugandan medical system."

Jae Lee is a senior at Washington University in St. Louis, majoring in Biochemistry and International and Area Studies. This summer, he was in Uganda leading an interdisciplinary pediatric malaria research project, reporting on the healthcare system with a grant from the Pulitzer Center on Crisis Reporting, and coordinating improvements in emergency medicine at the community and hospital levels with Global Emergency Care Collaborative, Iganga District Hospital, Uganda Development and Health Associates, and a few other organizations. He will attend medical school next year.


News Article | November 10, 2016
Site: www.nature.com

Haematopoietic FAS ablation mouse models LysM–FAS (ref. 31), Tie2–FAS (ref. 12), and Tie2–FAS bone-marrow transplantation animals (ref. 12) were generated as described previously. The high-fat diet was a Western-type diet containing 0.15% cholesterol with 42% of calories as fat (TD 88137, Harlan). Studies were conducted with littermates of the C57BL/6 background between 2 and 6 months of age in a specific pathogen-free facility with a 12-h light–12-h dark cycle. The Animal Studies Committee at Washington University in St. Louis approved the experiments. Sample sizes were based on the variance of previous studies in similar experimental settings. A formal randomization tool was not used to allocate animals to experimental groups; cages containing both genotypes as littermates were selected for allocation to experimental groups. Animals were not excluded from analyses unless results could not be generated owing to the death of the animal. No formal blinding was employed in the animal studies. For metabolic phenotyping experiments, males were studied; for cell biology experiments, macrophages from mice of both sexes were studied. For glucose tolerance tests, mice were injected intraperitoneally (i.p.) with 1 g kg−1 glucose after 6 h of fasting; glucose was measured in tail blood using a Contour glucometer (Bayer). Glucose-stimulated insulin-secretion assays were performed in separate cohorts, with insulin measured using ELISA kits (PerkinElmer). For insulin tolerance tests, mice were injected i.p. with 0.75 U kg−1 insulin after 6 h of fasting. For insulin signalling, mice were injected i.p. with 5 mU g−1 insulin after overnight fasting and tissues were collected 10 min later. Body composition analysis was performed using magnetic resonance imaging. Liver triglycerides (ref. 32) and food intake (ref. 12) were assayed as described previously. Global insulin sensitivity was determined by hyperinsulinaemic–euglycaemic clamp as described previously (ref. 33). Animals were implanted with a jugular catheter; five days after surgery, animals were fasted for 4 h and glucose turnover was measured in the basal state and during the clamp in conscious mice. Visceral fat was fixed in formalin. Sections of 5 μm were stained for Mac2 (BD Bioscience) to detect crown-like structures. Processing included slides not treated with primary antibodies to correct for any non-specific staining. For neutral lipid detection, 10-μm sections of frozen liver were fixed with formalin (10%, Sigma) followed by Oil red O staining (Sigma) in 60% isopropanol. Epididymal fat pads were isolated after saline perfusion, minced, washed, and centrifuged (1,000 g for 5 min). Floating adipose tissue was collected and digested with type I collagenase solution (1 mg ml−1, 30 min). This digest was pelleted to yield the stromal vascular fraction (SVF), which was washed twice in FACS wash buffer (PBS supplemented with 4% FBS), then blocked for Fc receptors using anti-mouse CD16/32 (BD Bioscience), followed by staining with APC-conjugated anti-mouse CD11b, eFluor 450-conjugated anti-mouse CD11c, and PE-conjugated anti-mouse F4/80 (eBioscience). Other surface markers included CD18 and ICAM-1 (both PE-conjugated, eBioscience). Samples were analysed using a BD LSR II flow cytometer. Total RNA was extracted using TRIzol (Invitrogen) and reverse transcribed using an iScript cDNA synthesis kit (Bio-Rad). PCR reactions were performed with an ABI PRISM 7000 Sequence Detection System (Applied Biosystems) using the SYBR Green PCR Master Mix assay and primer sequences as described previously (ref. 34).
Tissues or cultured cells were treated with lysis buffer (50 mM Tris HCl, pH 7.4, 1 mM EDTA, 150 mM NaCl, 1% NP40, 0.25% Na deoxycholate, 2 mM Na3VO4, 5 mM NaF, and protease inhibitors; Roche). Whole lysates or fractions were subjected to 4–20% gradient SDS–PAGE (Invitrogen) followed by immunoblotting with antibodies against FAS, transferrin receptor (Abcam), actin (Sigma), total and pAkt (residues T308 and S473), total and pJNK, Lyn, RhoA, Cdc42, Rac, moesin (Cell Signaling), flotillin-1, insulin receptor (BD Bioscience), ICAM-1, myosin (Santa Cruz), and annexin V (Biosensis). Activity of Rho family GTPases was measured by pull-down assays for the GTP-bound forms of the proteins (Cytoskeleton). Lysates were mixed with PAK–GST protein beads (for GTP–Rac binding) or Rhotekin–RBD protein GST beads (for GTP–RhoA binding) and precipitates were analysed by immunoblotting. Bone-marrow-derived macrophages were differentiated in DMEM with 20% L929-conditioned medium. Peritoneal macrophages were elicited from mice by i.p. injection of 4% thioglycollate medium (Sigma), and adherent cells were cultured in DMEM with 10% FBS. RAW 264.7 cells and 293T cells were obtained from the ATCC. RAW 264.7 cells were cultured in DMEM plus 10% FBS. FAS was knocked down in RAW 264.7 cells using a lentiviral-based shRNA strategy (Open Biosystems). 293T cells were transfected with packaging vectors, along with an expression plasmid (pLKO.1-puro system) containing shRNA sequences that were scrambled or specific for mouse FAS mRNA (ref. 34). Viruses were collected and filtered two days later, then used to infect RAW 264.7 cells (10 μg ml−1 polybrene). Infected RAW 264.7 cells were selected with puromycin (4 μg ml−1) for 2 days. For inflammatory activation, macrophages were treated with LPS (100 ng ml−1) or high-dose palmitate (500 μM/1% BSA, with BSA only as control) for 6 h before collection. Lysates were assayed for JNK activation by western blotting. Supernatants were assayed for the pro-inflammatory cytokines TNFα, MCP1, IL1β, and IL12p40 by ELISA (R&D Systems). For lipid labelling analyses, cells were serum-starved for 24 h in the absence or presence of simvastatin (10 μM of the active form, Calbiochem). 14C-acetate (10 μCi) was added to 10-cm culture dishes, cells were collected after 4 h, lipids were extracted with chloroform/methanol, and radioactivity was measured with a liquid scintillation counter. In chase experiments involving de novo lipid synthesis, cells were treated with 14C-acetate (10 μCi) for 24 h, then chased with cold medium for 0.5, 1, or 4 h. DRMs were isolated and lipid extracts were counted. In chase experiments involving palmitate, cells were incubated with 14C-palmitate (5 μCi, ~10 μM) complexed with cold palmitate (50 μM)/BSA (0.1%), then chased with cold medium. In exogenous palmitate reconstitution experiments, FAS-knockout macrophages were treated with palmitate (50 μM/0.1% BSA) for 24 h, with BSA-only groups serving as controls. Beyond determination of FAS status, cell identity was not authenticated and cells were not tested for mycoplasma. Cells were homogenized in hypotonic buffer (1 mM HEPES, pH 8.0, 15 mM KCl, 2 mM MgCl2, 0.1 mM EDTA, and protease inhibitors from Roche). Lysates were subjected to sequential centrifugation steps (2,000 g for 5 min, 10,000 g for 15 min, 100,000 g for 2 h) to yield crude membrane fractions.
To isolate detergent-resistant membranes (DRMs), cells were lysed in MES buffer (10 mM MES, 150 mM NaCl, pH 6.5) containing protease inhibitors, incubated with 1% Triton X-100 on ice for 30 min, and homogenized by 20 passes through 29-gauge needles. Homogenates were adjusted to 40% sucrose and placed under sucrose layers of 5% and 30%. After centrifugation at 39,000 r.p.m. and 4 °C for 16 h, fractions were collected from top to bottom. For submembrane isolation in the absence of detergent, sodium carbonate buffer at high pH (500 mM, pH 11.0) was used and Triton X-100 was omitted from the above procedure; sucrose layers of 5%, 35%, and 45% were used to generate fractions. After Bligh–Dyer extraction, organic phases were collected, dried under nitrogen, and reconstituted in 200 μl chloroform/methanol (1:1) with 0.5% sodium acetate. A 50-μl aliquot was directly injected into a Thermo Vantage triple-quadrupole mass spectrometer in positive mode for the analysis of phosphatidylcholine (including sphingomyelin) species with a neutral loss scan of 183, and for the analysis of phosphatidylethanolamine species with a neutral loss scan of 141. Each individual species was compared to its internal standard, and absolute quantity was determined using a standard curve, all as described (ref. 33). SILAC techniques (refs 35, 36) used RAW 264.7 cells cultured in medium containing heavy (that is, 13C l-lysine and 13C l-arginine) stable-isotope-labelled amino acids (Thermo Scientific). Murine bone-marrow-derived macrophage cells from the different genetic models were grown in light (that is, 12C l-lysine and 12C l-arginine) medium. Cells were lysed and equal amounts of extracts from labelled cells were combined, then subjected to differential centrifugation for crude membrane extraction, or sucrose gradient centrifugation for DRM extraction. Samples were resolved by SDS–PAGE and separated into 10 fractions, and in-gel trypsin digestion was performed before liquid chromatography–tandem mass spectrometry (LC–MS/MS). Nano-high-pressure liquid chromatography–electrospray ionization–MS/MS studies were performed on an LTQ Orbitrap (Thermo) instrument. Samples were loaded with an autosampler onto a 15-cm Magic C18 column (5-μm particles, 300-Å pores, Michrom Bioresources) packed into a PicoFrit tip (New Objective) and analysed with a 2D nanoLC plus HPLC system (Eksigent). Analytical gradients ran from 0–80% organic phase (95% acetonitrile, 0.1% formic acid) over 60 min; the aqueous phase was 2% acetonitrile, 0.1% formic acid. Eluent was routed into a PV-550 Nanospray ion source (New Objective). The LTQ Orbitrap instrument was operated in a data-dependent mode with the precursor scan over the range m/z 350–2,000, followed by twenty MS2 scans using parent ions selected from the MS1 scan. The Orbitrap AGC target was set to 1 × 10⁶, and the MS2 AGC target was 1 × 10⁴, with maximum injection times of 300 ms and 500 ms, respectively. For MS/MS, the LTQ isolation width was 2 Da, normalized collision energy was 30%, and activation time was 10 ms. Raw data were submitted to Mascot Server 2.0 and searched against the SwissProt database. Results were quantified by analysing the Mascot 'dat' file and its respective Thermo 'raw' file using a locally generated program. Relative protein ratios for control versus knockout macrophages were calculated for each identified protein (averaging the signal from multiple peptides) and presented as a percentage of the control in a heat map (CIMminer).
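The relative-quantification step lends itself to a short illustration. The following Python fragment is a sketch only, not the authors' locally generated program; the input format, column names, and the two-sample normalization against the shared heavy-labelled standard are my assumptions. It averages peptide-level light/heavy ratios per protein and expresses knockout signal as a percentage of control, the quantity displayed in the heat map:

```python
# Hypothetical sketch of the SILAC quantification described above. Each
# bone-marrow-derived macrophage (light) sample is assumed to be mixed with
# the same heavy-labelled RAW 264.7 standard, so dividing the knockout and
# control light/heavy ratios cancels the standard out.
import pandas as pd

def per_protein_ratio(peptides: pd.DataFrame) -> pd.Series:
    """Average peptide-level light/heavy ratios for each identified protein."""
    ratio = peptides["light_intensity"] / peptides["heavy_intensity"]
    return ratio.groupby(peptides["protein"]).mean()

# Toy peptide tables; real input would come from the Mascot/raw-file parser.
control_peptides = pd.DataFrame({
    "protein": ["Rac1", "Rac1", "Flot1"],
    "light_intensity": [9.0, 11.0, 6.0],
    "heavy_intensity": [10.0, 10.0, 6.0],
})
knockout_peptides = pd.DataFrame({
    "protein": ["Rac1", "Rac1", "Flot1"],
    "light_intensity": [5.0, 6.0, 6.5],
    "heavy_intensity": [10.0, 10.0, 6.2],
})

percent_of_control = 100.0 * (
    per_protein_ratio(knockout_peptides) / per_protein_ratio(control_peptides)
)
print(percent_of_control)  # values of this kind would populate the heat map
```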
Pathway analysis was performed using databases including DAVID, KEGG, and PANTHER. Giant plasma membrane vesicles (GPMVs) were induced in bone-marrow-derived macrophages or RAW 264.7 cells essentially as described previously (ref. 21). Cells were rinsed with GPMV buffer (10 mM HEPES, 150 mM NaCl, and 2 mM CaCl2, pH 7.4) followed by a 1–2-h incubation with vesiculation-induction GPMV buffer in the presence of PFA (25 mM) and DTT (2 mM) at 37 °C. The vesicles were stained using FAST-DiI (Invitrogen, 0.25 μg ml−1) at room temperature for 30 min. Vesicles were imaged during temperature-controlled cooling by confocal microscopy (ref. 37). A four-chambered cover glass containing vesicle suspensions was placed in a metal block that was gradually cooled by circulating water. Images were captured with a 20× air objective on an inverted Zeiss LSM 510 laser-scanning microscope system. An HeNe laser (at 543 nm excitation) was used for FAST-DiI. In multiple scanning mode for sequential captures, an Argon laser (at 488 nm excitation) was used for EGFP imaging, and single-channel labelled cells were used to calibrate the system and exclude leakage between the two channels. For lipid order analysis, the environmentally sensitive dye di-4-ANEPPDHQ was used as described (ref. 38). The dye was excited by an Argon laser at 488 nm, and the two-channel emission signal was collected at ~560 nm (liquid ordered) and ~620 nm (liquid disordered) simultaneously. Image calculations were carried out in ImageJ using the GP (generalized polarization) analysis plugin (http://www.optinav.com/Generalized_Polarization_Analysis.htm), with modification to include a measured calibration factor G. To determine the generalized polarization (GP) value, the acquired images were analysed and GP was calculated as GP = (I560 − G × I620)/(I560 + G × I620), where I560 and I620 are the intensities in the ordered and disordered emission channels and G is the calibration factor calculated using GPref, the GP value of the dye in 100% DMSO measured using the same imaging settings as used for GPMVs. To analyse the partition patterns of Rac protein in GPMVs, cells expressing Rac–EGFP were used. A retroviral-based plasmid for EGFP-conjugated wild-type Rac was derived from pMX–GFP–RacG12V (Addgene #14567, N-terminal GFP) by site-directed mutagenesis. The plasmid was transfected into PLAT-E cells, and the resulting retrovirus was used to infect primary bone marrow progenitor cells during macrophage differentiation. The GPMVs generated were then labelled with FAST-DiI to monitor phase separation, and two-channel fluorescence was captured at 15 °C. The GFP fluorescence intensity peaks in the two domains were quantified. Analysis of the temperature plot of phase separation and quantification of Rac protein membrane partition were performed using ImageJ software (ref. 37). For spreading assays, macrophages were plated (30,000 cells per well) into four-chambered Lab-Tek slides (Thermo Scientific). After 30 min, cells were gently washed and fixed with 4% paraformaldehyde, followed by permeabilization with 0.1% Triton X-100. The cells were then stained with rhodamine-phalloidin (Invitrogen). Cell images were captured by immunofluorescence microscopy; individual cells were outlined, and total cell area was quantified using ImageJ software. For migration assays, Transwell inserts with 3-μm pore size (Corning) were pre-coated with 0.2% gelatin. Macrophages were trypsinized and 20,000 cells were added in triplicate to inserts in chambers. Medium with vehicle or MCP1 (100 ng ml−1, R&D Systems) was added to the lower wells and, 4 h later, cells that had migrated to the underside of the membrane were fixed.
Cells on the upper side of the membrane were removed, and membranes were cut and positioned with migrated cells facing up, followed by DAPI staining and counting. Cholesterol oxidase activity and membrane cholesterol release potential were measured as described previously (ref. 39). Cells cultured overnight in 96-well plates were rinsed twice with PBS, then treated with 100 μl of PBS containing cholesterol oxidase (2 U ml−1, Sigma) at 37 °C for 10 min. Then 50 μl Amplex red reagent (Invitrogen) was added and, after incubation at 37 °C for 20 min, activity was quantitated by spectrometry at 560 nm. Samples processed without cholesterol oxidase were used to determine background. For cholesterol release potential, cells were labelled with 3H-cholesterol overnight, rinsed with cold medium three times, then incubated with cold medium containing methyl-β-cyclodextrin (1 mM, Sigma) at 37 °C. Aliquots of the medium were collected over time and sedimented, and the supernatants were counted; cell lysates were also processed and counted. For oxysterol production with cholesterol loading (ref. 23), cells were plated in 6-well plates (50,000 cells per well) and incubated overnight in medium with lipoprotein-deficient serum. Cells were treated with a methyl-β-cyclodextrin:cholesterol complex for 10 min, then washed twice and harvested for oxysterol measurements (ref. 40). 25-hydroxycholesterol (25-HC) and 27-hydroxycholesterol (27-HC) were extracted by the Bligh–Dyer method from homogenized macrophages and medium after addition of a deuterated internal standard (d5-27-HC). The organic layer was taken to dryness under nitrogen, then 50 μl of 0.5 M N,N-dimethylglycine/2 M 4-dimethylaminopyridine in chloroform and 50 μl of 1 M 1-ethyl-3-(3-dimethylaminopropyl)carbodiimide in chloroform were added to derivatize the samples. After incubation for 1 h at 50 °C, the reaction was quenched with 50 μl of methanol. The sample was taken to dryness under nitrogen and the residue was treated with 1:3 (v/v) water:hexane to remove the derivatizing reagent. The hexane layer was taken to dryness under nitrogen and the sample was reconstituted with 200 μl of methanol and analysed by LC–MS/MS using a Prominence HPLC system (Shimadzu Scientific Instruments) and a 4000QTRAP mass spectrometer (Applied Biosystems/MDS Sciex Inc.). Data were acquired using Analyst software (v.1.5.1). To express a constitutively active form of Rac protein in FAS-knockout macrophages, we used a selectable retroviral expression system based on pMX–RacG12V with a blasticidin-resistance gene. Plasmids were transfected into PLAT-E cells, and the produced retrovirus was used to infect primary bone marrow progenitor cells during macrophage differentiation. On the day following infection, positive cells were selected in blasticidin (1 μg ml−1). Differentiated macrophages were used for analysing the JNK response 3 days later. For cholesterol rescue experiments, different forms and concentrations of cholesterol were complexed to methyl-β-cyclodextrin (Sigma) at a ratio of 1:10 (with 0.25 mM cyclodextrin) and diluted to the desired concentrations in serum-free medium. ent-Cholesterol (ref. 25) and alkyne cholesterol (ref. 41) were synthesized at Washington University. Coprostanol was purchased from Sigma. Sterol was first dried under nitrogen gas and then sonicated into DMEM with cyclodextrin until flaky chunks disappeared. The solution was then shaken overnight at 37 °C.
Bone-marrow-derived macrophages were pretreated with serum-free medium for 2 h, and then cyclodextrin/cholesterol was added for 10 min, followed by 30 min of exposure to LPS (100 ng ml−1). For the 'cholesterol pulse' experiment, a two-hour serum-free medium incubation followed the cyclodextrin/cholesterol exposure before LPS was added. For reconstitution of cholesterol in GPMVs, cells were treated with cholesterol (25 μM) overnight followed by vesicle preparation; for some experiments, cholesterol solution was added directly to vesicle suspensions at 5 μg ml−1. Data are expressed as mean ± s.e.m. Analyses were performed with GraphPad Prism by two-tailed t-test (for two groups), by one-way ANOVA (for more than two groups) with Tukey's multiple comparison test, by two-way ANOVA (for two independent variables) with Bonferroni post-tests, or by nonlinear curve-fit comparison (where indicated). P < 0.05 was considered significant and is indicated with an asterisk (except for results of nonlinear curve-fit comparisons, where the asterisk indicates P < 0.001). For analysis of proteomic data, pathways with P < 0.05 (modified Fisher's exact P value for gene-enrichment analysis) were selected for protein–protein interaction maps generated by STRING. Data that support the findings of this study are available from the corresponding author upon reasonable request.
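As a rough illustration of the statistical comparisons just described (a sketch only; the analyses in the paper were run in GraphPad Prism, and the values below are invented), the same tests can be reproduced in Python:

```python
# Illustrative re-creation of the described tests; the data are made up.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

wt  = np.array([1.0, 1.2, 0.9, 1.1])
ko  = np.array([1.6, 1.8, 1.5, 1.7])
het = np.array([1.3, 1.4, 1.2, 1.5])

t, p = stats.ttest_ind(wt, ko)            # two-tailed t-test for two groups
print(f"t-test P = {p:.4f}")              # P < 0.05 would earn an asterisk

f, p_anova = stats.f_oneway(wt, ko, het)  # one-way ANOVA, more than two groups
print(f"ANOVA P = {p_anova:.4f}")
labels = ["wt"] * 4 + ["ko"] * 4 + ["het"] * 4
print(pairwise_tukeyhsd(np.concatenate([wt, ko, het]), labels))  # Tukey post hoc
```

And, returning to the generalized polarization measurement described above, a minimal numpy sketch of the pixel-wise GP image calculation (function and variable names are mine, not the authors'):

```python
import numpy as np

def gp_image(i_ordered, i_disordered, g: float) -> np.ndarray:
    """Pixel-wise GP = (I560 - G*I620) / (I560 + G*I620) from the two
    di-4-ANEPPDHQ emission channels; g is the DMSO-derived calibration factor."""
    i_ordered = np.asarray(i_ordered, dtype=float)
    i_disordered = np.asarray(i_disordered, dtype=float)
    num = i_ordered - g * i_disordered
    den = i_ordered + g * i_disordered
    # Guard against empty pixels where both channels are zero
    return np.divide(num, den, out=np.zeros_like(num), where=den != 0)
```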


News Article | December 15, 2016
Site: www.biosciencetechnology.com

Pilloried for their role in the epidemic of prescription painkiller abuse, drugmakers are aggressively pushing their remedy to the problem: a new generation of harder-to-manipulate opioids that have racked up billions in sales, even though there's little proof they reduce rates of overdoses or deaths. More than prescriptions are at stake. Critics worry the drugmakers' nationwide lobbying campaign is distracting from more productive solutions and delaying crucial efforts to steer physicians away from prescription opioids - addictive pain medications involved in the deaths of more than 165,000 Americans since 2000. "If we've learned one lesson from the last 20 years on opioids it's that these products have very, very high inherent risks," said Dr. Caleb Alexander, co-director of Johns Hopkins University's Center for Drug Safety and Effectiveness. "My concern is that they'll contribute to a perception that there is a safe opioid, and there's no such thing as a fully safe opioid." The latest drugs - known as abuse-deterrent formulations, or ADFs - are generally harder to crush or dissolve, which the drugmakers tout as making them difficult to snort or inject. But they still are vulnerable to manipulation and potentially addictive when simply swallowed. National data from an industry-sponsored tracking system also show drug abusers quickly drop the reformulated drugs in favor of older painkillers or heroin. In the last two years, pharmaceutical companies have made a concerted under-the-radar push for bills benefiting the anti-abuse opioids in statehouses and in Congress, where proposed legislation would require the Food and Drug Administration to replace older opioids with the new drugs. The lobbying push features industry-funded advocacy groups and physicians, along with grieving family members, who rarely disclosed the drugmakers' ties during their testimony in support of the drugs. Besides the tamper-resistant pills, ADF opioids are being rolled out in other forms, including injectable drugs and pills that irritate users when they're snorted or contain substances that counteract highs. Making painkillers harder to abuse is a common-sense step. But it's also a multibillion-dollar sales opportunity, offering drugmakers the potential to wipe out lower-cost generic competitors and lock in sales of their higher-priced versions, which cost many times more than conventional pills. The big companies hold multiple patents on the reformulated drugs, shielding them from competition for years - in some cases decades. Though abuse-deterrent painkillers represented less than 5 percent of all opioids prescribed last year, they generated more than $2.4 billion in sales, or roughly a quarter of the nearly $10 billion U.S. market for the drugs, according to IMS Health. The field is dominated by Purdue Pharma's OxyContin, patent-protected until 2030. "We at Purdue make certain that prescribers and other stakeholders understand that opioids with abuse-deterrent properties won't stop all prescription drug abuse, but they are an important part of the comprehensive approach needed to address this public health issue," Purdue spokesman Robert Josephson said in a statement. Like a spokeswoman for Pfizer Inc., Josephson also noted that some public health officials, including the Food and Drug Administration, have endorsed using ADFs. "We need every tool that we can have in our toolbox," said Kentucky state Rep. Addia Wuchner, a Republican who has worked on several bills to benefit reformulated opioids. 
"The extra steps are worth the effort in order to prevent this escalation of more addiction." The current industry campaign draws on the same 50-state strategy that painkiller manufacturers successfully deployed to help kill or weaken measures aimed at stemming the tide of prescription opioids, a playbook The Associated Press and Center for Public Integrity exposed in September. The reporting detailed how opioid drugmakers and the nonprofits they help fund spent more than $880 million on lobbying and political contributions at the state and federal level over the past decade, eight times what the gun lobby reported for the same period. The money represents the drugmakers' spending on all their legislative interests, including opioids. The FDA has approved a handful of the reformulated drugs but has not yet concluded that any reduce rates of addiction, abuse or death, and the evidence gap has led to diverging views among health authorities. Whereas FDA regulators emphasize the potential promise of reformulated painkillers, other government officials stress that they contain the same heroin-like ingredients as traditional opioids. An estimated 78 Americans die from heroin and prescription opioid overdoses every day. "'Abuse-deterrent' sounds to people sometimes like 'Oh, maybe it's not addictive.' But it's no less addictive," said Dr. Tom Frieden, head of the Centers for Disease Control and Prevention. Survey results published this year in the Clinical Journal of Pain showed nearly half of U.S. physicians incorrectly believed that reformulated opioids are less addictive than their predecessors. Many experts see a key role for ADFs in reducing the number of people who first begin abusing opioids, and some say the abuse-deterrent formulations should be the default painkiller for patients with histories of drug use, anxiety or depression. But even they worry that some drugmakers are overselling the technology. They stress that separate measures are needed for the majority of opioid abusers who ingest the pills orally. "The way they're handling the ADF is that this is the answer. And it's not the answer - it's part of the bigger puzzle," said Theodore Cicero, a psychiatry professor at Washington University in St. Louis, who has authored several studies on the drugs. 'You can't put a price tag on anybody's life' Two years after the overdose that killed her 21-year-old son, Terri Bartlett traveled to Illinois' state capital to champion an unlikely cause: revamped painkillers. Bartlett's son Michael became hooked on Vicodin and later graduated to heroin. In emotional testimony last year, she urged lawmakers to support a bill that would prioritize the new harder-to-crush pills, saying she believed her son would still be alive if abuse-deterrent formulations had been on the market then. "You can't put a price tag on anybody's life," she said. Bartlett didn't know then that she had been recruited into a wide-ranging lobbying campaign. A public relations firm hired by OxyContin-maker Purdue had helped recruit her to support the bill, along with local sheriffs and fire chiefs. Her words, and similar testimony from parents of drug abusers elsewhere, reflect a tactic used by the drugmakers across the country. For instance, Purdue paid nearly $95,000 for similar lobbying efforts in New York, state records show. And the industry's fingerprints are easy to spot in other areas. 
Of more than 100 bills dealing with the drugs introduced in 35 states in 2015 and 2016, at least 49 featured nearly identical language requiring insurers to cover abuse-deterrent drugs, according to an analysis of data from Quorum, a legislative tracking service. Several of the bill sponsors said they received the wording from pharmaceutical lobbyists. Since 2012, at least 21 bills related to the drugs have become law, including five that require insurers to pay for the more expensive drugs in Maine, Maryland, Massachusetts, Florida and West Virginia. Wins in such states will give drugmakers momentum to successfully push for copycat laws elsewhere, noted Paul Kelly, a federal lobbyist who has worked on multistate lobbying campaigns for drugstores and major retailers. "It's like a foot in the door," he said. Drugmakers have met with fierce opposition to their ADF legislation from insurers and employers who would be on the hook for the far pricier opioid variations. The Illinois bill - and the 48 strikingly similar measures in other states - would require insurers to cover the drugs in the same way as other opioids, which the insurance companies argue would allow drugmakers to charge whatever they want for them. "That is not the best use of our medical care resources," Vernon Rowen, vice president of state government affairs for the insurance company Aetna, told Illinois lawmakers after Bartlett testified. "It totally eliminates our ability to negotiate discounts with manufacturers." New York Gov. Andrew Cuomo and New Jersey Gov. Chris Christie both vetoed such insurance mandates in the past year, citing the high costs and lack of evidence that the drugs help. Federal health officials also have pushed back against requirements to cover the drugs, citing the "staggering" costs. For example, a 30-day supply of Pfizer's abuse-deterrent Embeda, a combination drug containing morphine, costs $268, while a 30-day supply of generic morphine costs roughly $38, according to data compiled by Truven Health Analytics, a company that tracks drug prices set by manufacturers. The Department of Veterans Affairs' Dr. Bernie Good estimated that converting the 8.8 million-patient system exclusively to the new reformulations would increase opioid spending more than tenfold, to over $1.6 billion annually. Good, who co-directs the VA's program for medication safety, said the vast majority of veterans are not at risk for snorting or injecting their medications. "Would the excess money to pay for abuse-deterrent products - mostly to pay for it in cases where it wouldn't be necessary - be better spent for drug treatment centers?" he asked at a recent federal meeting on the drugs. Federal estimates say at least 2.2 million Americans are addicted to prescription opioids or heroin, yet only one in five actually receives treatment, according to a Surgeon General's report published last month. That's despite some $35 billion already spent annually on substance abuse programs by private and public health providers. State lawmakers who support the abuse-deterrent bills often defend them as an important piece of solving the opioid puzzle, preventing more costly overdoses and hospitalizations. And Fred Brason, executive director of Project Lazarus, a North Carolina-based group that promotes anti-addiction policies in several states, called the focus on the drugs' cost too narrow. "You're already spending that money at the back end," he said. "You're spending it at the emergency department."
He also noted the costs of addiction treatment. When critics raise alarms about higher costs and limited evidence, drugmakers can rely on groups they support financially to argue their side, including the National Association of Drug Diversion Investigators, the Academy of Integrative Pain Management and the Partnership for Drug-Free Kids. Representatives from those groups have testified in favor of abuse-deterrent legislation in at least seven states. NADDI president Charlie Cichon acknowledged his group receives funds from several ADF-makers, but said it views the drugs as a proven part of the solution to the opioid crisis. "We're not testifying for Purdue Pharma's product or Endo's product," he said. And Bob Twillman, executive director of the Academy, said, "Increased use of abuse-deterrent opioids makes it more likely that those patients who need opiates to treat their pain will be able to get them." The Partnership for Drug-Free Kids did not respond to multiple requests for comment. Physicians with financial ties to drugmakers play similar roles. Dr. Gareth Shemesh, a pain specialist, testified in support of a Colorado bill last year brought to the sponsoring legislator by Pfizer. Shemesh had received more than $13,500 from Pfizer that year in speaking fees, travel and meals, and more than $5,000 from Purdue the year before. He did not respond to repeated calls for comment, but Pfizer said he was not paid to testify and did not speak on behalf of any specific product. Purdue and Pfizer also have ramped up contributions to the Republican and Democratic attorneys general associations, which raise unlimited funds to help elect AGs across the country. In 2015 and 2016, they gave a total of $950,000 - more than in the previous four years combined. To date, 51 attorneys general from U.S. states and territories have signed at least one of two National Association of Attorneys General letters to the FDA, urging the agency to favor abuse-deterrent drugs. The pro-ADF playbook even includes a bit of political theater. In at least seven states, lawmakers or advocates have pounded the reformulated pills with hammers to demonstrate how difficult they are to smash. In Illinois, it was Democratic Rep. Sara Feigenholtz wielding the hammer on the same committee that heard Terri Bartlett's testimony. The main sponsor of the bill prioritizing ADFs, Feigenholtz ranked second-highest among legislative recipients of money from Pfizer since the start of 2010, according to an analysis of data from the National Institute on Money in State Politics. The $6,200 she received during that period was more than she had received in the 14 previous years combined. Her bill passed the committee but later stalled in the Legislature and remains pending. She did not return multiple requests for comment. Pfizer said its contributions to Feigenholtz go back 20 years and it would be "inaccurate and misleading" to suggest a tie to any one piece of legislation. Bartlett said she doesn't mind that Purdue was ultimately responsible for her invitation to testify, even though she didn't know that at the time. She still supports the bill. "I want to believe that in every pharmaceutical company there still remains some sort of humanity," she said. "Saving life is expensive."

'An addict can find a way'

The FDA has walked a careful line on the new drugs, promoting them as a promising approach to discouraging abuse while acknowledging their real-world benefits remain largely theoretical.
Earlier this year, the agency highlighted the drugs in its "opioids action plan," issued after scathing criticism from some members of Congress that the FDA wasn't doing enough to combat the epidemic. Thus far, the agency has approved seven drugs with labeling suggesting they are "expected to" discourage abuse, based on studies conducted by pharmaceutical companies. But the FDA has not yet concluded that any of the products have a "real-world impact" on measures like overdose or death, according to Dr. Douglas Throckmorton, an agency deputy director. He and other regulators predict, however, that the reformulations will eventually translate into public health results. "We stand by those predictions," Throckmorton said at a recent public meeting on the drugs. "We're confident in the science, we're confident in the assessments we conducted." Even some former FDA advisers who support expanded use of the drugs say they are only part of the solution. Dr. Lewis Nelson, who previously chaired an FDA panel on drug safety, notes that the drugs don't deter the most common form of abuse: swallowing pills whole. "Certainly, you might not eat one and get high," he said. "You eat three and get high." At least one study found that while OxyContin's reformulation coincided with many abusers switching to other drugs, other users still were able to defeat the pills' technology and snort or inject the contents. David Rook, a 40-year-old Henrico, Virginia, resident who now operates a recovery facility, was among them. Before entering treatment, he said, he would break down abuse-deterrent OxyContins and crush-resistant Opanas using water, lemon juice and a microwave. "The truth is an addict can find a way to abuse a medication one way or the other," he said. A recent HIV outbreak in rural Indiana illustrates the sometimes unpredictable effect of ADFs on abusers' behavior. Approximately 210 people have tested positive for the virus in Scott County since 2014, a public health crisis linked to needle-sharing among abusers of Opana. Endo Pharmaceuticals received approval for a reformulated version of the drug in 2011, making it harder to crush. As a result, many abusers switched from snorting the drug to injecting it with syringes, leading to the spread of the blood-borne virus, according to the state health commissioner and other officials. Endo spokeswoman Heather Zoumas Lubeski declined to comment on the outbreak, but issued a statement saying, "Patient safety has always been a top priority for Endo and we are committed to providing patients with approved products that are safe and effective when used as prescribed." The FDA declined to approve labeling claims for Opana's anti-abuse features, noting that the drug still can easily be cooked and injected. Pfizer, Purdue, Endo and Teva Pharmaceutical Industries Ltd. spent more than $20 million between 2012 and 2015 on federal lobbying efforts that included support of a bill that would require the FDA to gradually replace current opioids with harder-to-abuse versions as they become available. Teva declined comment. Rep. William Keating, D-Mass., first introduced the bill in 2012 and tried again in 2013 and 2015. Like his colleagues at the state level, he employed the hammer-smashing routine to illustrate the medications' crush-resistant properties. Keating said the industry played no part in spurring the bill, even though the head of a nonprofit association funded by abuse-deterrent drugmakers spoke at the press conference introducing his legislation.
He also received $2,500 in political contributions from makers of reformulated opioids in 2011 and 2012, a small fraction of his overall fundraising haul. "My interest in this stems from when I was a district attorney and I got to see the lives that were lost," Keating said in an interview. While Keating's bill has not received a vote in Congress, the FDA already has begun moving in the direction suggested by companies, mapping out a process for removing older opioids from the market when newer versions are shown to be more effective at thwarting abuse. "You don't have to pass a bill, necessarily, to change policy," said Dan Cohen of the Abuse Deterrent Coalition, which represents smaller abuse-deterrent manufacturers. The lack of real-world data on reformulated opioids is the main reason some federal officials haven't embraced them. The CDC did not recommend ADFs in its landmark opioid guidelines this year, the first-ever federal recommendations for doctors prescribing the drugs. Why? Frieden, the agency's director, said his staff could not find any evidence showing the updated opioids actually reduce rates of addiction, overdoses or deaths. Center for Public Integrity data reporter Ben Wieder contributed to this article.


News Article | December 6, 2016
Site: www.eurekalert.org

This week's edition of PLOS Medicine, featuring four Research Articles and two Perspectives, begins a special issue devoted to research on cancer genomics. Research and discussion papers, selected together with two leaders in the field, Guest Editors Elaine Mardis and Marc Ladanyi, will highlight progress in the study of important cancer types and assess the clinical implications of progress in this fast-moving field. In their Perspective article, James Topham and Marco Marra discuss the acquisition of genetic information from tumors, which in recent years has progressed from localized analyses of single genes, and subsequently panels of genes, that are important in specific cancer types, to whole-genome sequencing. Intensive effort is being applied to analyses of tumor genomes aimed at the selection of appropriate therapies for individual patients, and the authors emphasize the need to study the dynamic nature of tumor genome sequences, which can change over time and adapt to cytotoxic and other treatments, to maximize the potential benefit for patients. In a Research Article, Dr. Charles Perou of the University of North Carolina's Lineberger Comprehensive Cancer Center, Chapel Hill, NC, USA, and colleagues study the evolution of tumors in two patients with triple-negative, basal-like breast cancer, a disease associated with lack of the estrogen receptor, progesterone receptor, and HER2, and one that generally results in poor clinical outcomes. For many cancer types, it is the metastases, or the spread of cancer cells from the original tumor to other parts of the body, that are life-threatening, and it is therefore of interest to study the cancer after it has left the site of origin. The researchers studied whole-genome sequence and gene expression information from primary tumors and metastases obtained from the patients at autopsy, and report similar somatic mutation and copy number patterns across all tumors in an individual patient. This analysis identified multiple populations of cells, or clones, in the original tumor as well as in the metastatic sites. The findings suggest that metastatic potential is established early in the trajectory of this form of breast cancer, and that multiple clones from the primary tumor traveled together to distant organs. In a second Research Article, Anindya Dutta and colleagues present a study of gene expression changes in large datasets derived from patients with brain tumors, focusing on low-grade gliomas and on glioblastoma multiforme, a particularly intractable form of the disease. The authors study the expression of large numbers of long noncoding RNAs (lncRNAs), which are thought to be involved in governing the expression of other genes and thereby controlling important processes such as development and tumorigenesis. The authors found that a signature made up of selected lncRNAs was associated with length of survival in patients with low-grade gliomas. If validated in future work, these findings could lead to a way to estimate prognosis for patients with this type of tumor, which might be useful in planning treatment. Further research and discussion articles addressing important topics in the area of cancer genomics will appear throughout the December 2016 issue of PLOS Medicine. No specific funding was received for this article. The authors have declared that no competing interests exist.
This study was supported by funds from the following sources: the Breast Cancer Research Foundation (LAC); the National Institutes of Health (NIH) (LAC, M01RR00046); National Cancer Institute P50-CA58223 Breast SPORE Program (LAC); National Cancer Institute P50-CA58223 Breast SPORE Program (CMP); National Cancer Institute R01-CA195754-01 (CMP); National Cancer Institute R01-CA148761 (CMP); the Breast Cancer Research Foundation (CMP); National Cancer Institute F30-CA200345 (MBS); and the National Human Genome Research Institute Center Initiated Projects U54HG003079 (ERM). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. I have read the journal's policy and the authors of this manuscript have the following competing interests: CMP is an equity stock holder of BioClassifier LLC and University Genomics, and ERM, CMP, and JSP have filed a patent on the PAM50 subtyping assay. ERM served as guest editor on PLOS Medicine's Special Issue on Cancer Genomics. Hoadley KA, Siegel MB, Kanchi KL, Miller CA, Ding L, Zhao W, et al. (2016) Tumor Evolution in Two Patients with Basal-like Breast Cancer: A Retrospective Genomics Study of Multiple Metastases. PLoS Med 13(12): e1002174. doi:10.1371/journal.pmed.1002174 Department of Genetics, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, United States of America; Lineberger Comprehensive Cancer Center, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, United States of America; McDonnell Genome Institute, Washington University in St. Louis, St. Louis, Missouri, United States of America; Department of Mathematics, Washington University in St. Louis, St. Louis, Missouri, United States of America; Division of Hematology/Oncology, Department of Medicine, School of Medicine, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, United States of America; Department of Pathology and Laboratory Medicine, School of Medicine, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, United States of America. The study was funded by the National Cancer Institute grants P01 CA104106 and R01 CA166054 to AD. BJR was supported by training grants T32 GM007267 and T32 CA009109. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. The authors have declared that no competing interests exist. Reon BJ, Anaya J, Zhang Y, Mandell J, Purow B, Abounader R, et al. (2016) Expression of lncRNAs in Low-Grade Gliomas and Glioblastoma Multiforme: An In Silico Analysis. PLoS Med 13(12): e1002192. doi:10.1371/journal.pmed.1002192 Department of Pathology, School of Medicine, University of Virginia, Charlottesville, Virginia, United States of America; Department of Biochemistry, University of Virginia, Charlottesville, Virginia, United States of America; Division of Neuro-Oncology, Neurology Department, University of Virginia Health System, Old Medical School, Charlottesville, Virginia, United States of America.


Wong J.R.,U.S. National Cancer Institute | Harris J.K.,Washington University in St. Louis | Rodriguez-Galindo C.,Dana-Farber Cancer Institute | Johnson K.J.,Washington University in St. Louis
Pediatrics | Year: 2013

OBJECTIVE: Childhood and adolescent melanoma is rare but has been increasing. To gain insight into possible reasons underlying this observation, we analyzed trends in melanoma incidence diagnosed between the ages of 0 and 19 years among US whites by gender, stage, age at diagnosis, and primary site. We also investigated incidence trends by UV-B exposure levels. METHODS: By using Surveillance, Epidemiology, and End Results (SEER) program data (1973-2009), we calculated age-adjusted incidence rates (IRs), annual percent changes, and 95% confidence intervals for each category of interest. Incidence trends were also evaluated by using joinpoint and local regression models. SEER registries were categorized with respect to low or high UV-B radiation exposure. RESULTS: From 1973 through 2009, 1230 white children were diagnosed with malignant melanoma. Overall, pediatric melanoma increased by an average of 2% per year (95% confidence interval, 1.4%-2.7%). Girls, 15- to 19-year-olds, and individuals with low UV-B exposure had significantly higher IRs than boys, younger children, and those living in SEER registries categorized as high UV-B. Over the study period, boys experienced increased IRs for melanoma on the face and trunk, and girls on the lower limbs and hip. The only decreased incidence trend we observed was among 15- to 19-year-olds in the high UV-B exposure group from 1985 through 2009. Local regression curves indicated similar patterns. CONCLUSIONS: These results may help elucidate possible risk factors for adolescent melanoma, but additional individual-level studies will be necessary to determine the reasons for increasing incidence trends. Copyright © 2013 by the American Academy of Pediatrics.
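For readers unfamiliar with the trend statistic reported here, the annual percent change (APC) is conventionally estimated from a log-linear fit of incidence rates against calendar year. A minimal Python sketch, using invented rates rather than SEER data:

```python
# Annual percent change from a log-linear trend: fit log(rate) = a + b*year,
# then APC = 100*(exp(b) - 1). The rates below are illustrative only.
import numpy as np

years = np.arange(1973, 1983)
rates = np.array([4.1, 4.3, 4.2, 4.6, 4.8, 4.7, 5.0, 5.2, 5.1, 5.4])  # per million

slope, _ = np.polyfit(years, np.log(rates), 1)  # slope b of the log-linear fit
apc = 100.0 * (np.exp(slope) - 1.0)
print(f"APC = {apc:.2f}% per year")             # ~2% would match the abstract
```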


Baumard N.,University of Pennsylvania | Boyer P.,Washington University in St. Louis
Trends in Cognitive Sciences | Year: 2013

Moralizing religions, unlike religions with morally indifferent gods or spirits, appeared only recently in some (but not all) large-scale human societies. A crucial feature of these new religions is their emphasis on proportionality (between deeds and supernatural rewards, between sins and penance, and in the formulation of the Golden Rule, according to which one should treat others as one would like others to treat oneself). Cognitive science models that account for many properties of religion can be extended to these religions. Recent models of evolved dispositions for fairness in cooperation suggest that proportionality-based morality is highly intuitive to human beings. The cultural success of moralizing movements, secular or religious, could be explained based on proportionality. © 2013 Elsevier Ltd.


Finkelman B.S.,University of Pennsylvania | Gage B.F.,Washington University in St. Louis | Johnson J.A.,University of Florida | Brensinger C.M.,University of Pennsylvania | Kimmel S.E.,University of Pennsylvania
Journal of the American College of Cardiology | Year: 2011

Objectives The aim of this study was to compare the accuracy of genetic tables and formal pharmacogenetic algorithms for warfarin dosing. Background Pharmacogenetic algorithms based on regression equations can predict warfarin dose, but they require detailed mathematical calculations. A simpler alternative, recently added to the warfarin label by the U.S. Food and Drug Administration, is to use genotype-stratified tables to estimate warfarin dose. This table may increase the use of pharmacogenetic warfarin dosing in clinical practice; however, its accuracy has not been quantified. Methods A retrospective cohort study of 1,378 patients from 3 anticoagulation centers was conducted. Inclusion criteria were a stable therapeutic warfarin dose and complete genetic and clinical data. Five dose prediction methods were compared: 2 methods using only clinical information (empiric 5 mg/day dosing and a formal clinical algorithm), 2 genetic tables (the new warfarin label table and a table based on mean dose stratified by genotype), and 1 formal pharmacogenetic algorithm using both clinical and genetic information. For each method, the proportion of patients whose predicted doses were within 20% of their actual therapeutic doses was determined. Dosing methods were compared using McNemar's chi-square test. Results Warfarin dose prediction was significantly more accurate (all p < 0.001) with the pharmacogenetic algorithm (52%) than with all other methods: empiric dosing (37%; odds ratio [OR]: 2.2), clinical algorithm (39%; OR: 2.2), warfarin label (43%; OR: 1.8), and genotype mean dose table (44%; OR: 1.9). Conclusions Although genetic tables predicted warfarin dose better than empiric dosing, formal pharmacogenetic algorithms were the most accurate. © 2011 American College of Cardiology Foundation.
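The abstract's accuracy metric and paired comparison are easy to make concrete. A minimal sketch, assuming arrays of predicted and actual stable weekly doses (the numbers below are invented, and the `mcnemar` function is statsmodels' implementation of the test named in the abstract):

```python
# A prediction counts as accurate if it falls within 20% of the actual stable
# dose; two methods evaluated on the same patients are compared with
# McNemar's chi-square test on the paired hit/miss table.
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

def within_20_percent(predicted, actual):
    return np.abs(predicted - actual) <= 0.2 * actual

actual  = np.array([35.0, 21.0, 49.0, 28.0, 70.0])   # mg per week, illustrative
pgx     = within_20_percent(np.array([33.0, 24.0, 45.0, 30.0, 62.0]), actual)
empiric = within_20_percent(np.full(actual.shape, 35.0), actual)  # 5 mg/day

# Paired 2x2 table: rows = pharmacogenetic hit/miss, cols = empiric hit/miss
table = np.array([[np.sum(pgx & empiric),  np.sum(pgx & ~empiric)],
                  [np.sum(~pgx & empiric), np.sum(~pgx & ~empiric)]])
print(f"McNemar P = {mcnemar(table, exact=False, correction=False).pvalue:.3f}")
```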


Kurby C.A.,Grand Valley State University | Zacks J.M.,Washington University in St. Louis
Memory and Cognition | Year: 2012

During narrative comprehension, readers construct representations of the situation described by a text, called situation models. Theories of situation model construction and event comprehension posit two distinct types of situation model updating: incremental updating of individual situational dimensions, and global updates in which an old model is abandoned and a new one created. No research to date has directly tested whether readers update their situation models incrementally, globally, or both. We investigated whether both incremental and global updating occur during narrative comprehension. Participants typed what they were thinking while reading an extended narrative, and then segmented the narrative into meaningful events. Each typed think-aloud response was coded for whether it mentioned characters, objects, space, time, goals, or causes. There was evidence for both incremental and global updating: Readers mentioned situation dimensions more when those dimensions changed, controlling for the onset of a new event. Readers also mentioned situation dimensions more at points when a new event began than during event middles, controlling for the presence of situational change. These results support theories that claim that readers engage in both incremental and global updating during extended narrative comprehension. © 2012 Psychonomic Society, Inc.


Hogenesch J.B.,University of Pennsylvania | Herzog E.D.,Washington University in St. Louis
FEBS Letters | Year: 2011

Circadian clocks are present in most organisms and provide an adaptive mechanism to coordinate physiology and behavior with predictable changes in the environment. Genetic, biochemical, and cellular experiments have identified more than a dozen component genes and a signal transduction pathway that support cell-autonomous, circadian clock function. One of the hallmarks of biological clocks is their ability to reset to relevant stimuli while ignoring most others. We review recent results showing intracellular and intercellular mechanisms that convey this robust timekeeping to a variety of circadian cell types. © 2011 Federation of European Biochemical Societies. Published by Elsevier B.V. All rights reserved.


Holtzman M.J.,Washington University | Holtzman M.J.,Washington University in St. Louis | Byers D.E.,Washington University | Alexander-Brett J.,Washington University | Wang X.,Washington University
Nature Reviews Immunology | Year: 2014

An abnormal immune response to environmental agents is generally thought to be responsible for causing chronic respiratory diseases, such as asthma and chronic obstructive pulmonary disease (COPD). Based on studies of experimental models and human subjects, there is increasing evidence that the response of the innate immune system is crucial for the development of this type of airway disease. Airway epithelial cells and innate immune cells represent key components of the pathogenesis of chronic airway disease and are emerging targets for new therapies. In this Review, we summarize the innate immune mechanisms by which airway epithelial cells and innate immune cells regulate the development of chronic respiratory diseases. We also explain how these pathways are being targeted in the clinic to treat patients with these diseases. © 2014 Macmillan Publishers Limited. All rights reserved.


Power J.D.,Washington University in St. Louis | Fair D.A.,Oregon Health And Science University | Schlaggar B.L.,Washington University in St. Louis | Petersen S.E.,Washington University in St. Louis
Neuron | Year: 2010

Recent advances in MRI technology have enabled precise measurements of correlated activity throughout the brain, leading to the first comprehensive descriptions of functional brain networks in humans. This article reviews the growing literature on the development of functional networks, from infancy through adolescence, as measured by resting-state functional connectivity MRI. We note several limitations of traditional approaches to describing brain networks and describe a powerful framework for analyzing networks, called graph theory. We argue that characterization of the development of brain systems (e.g., the default mode network) should be comprehensive, considering not only relationships within a given system, but also how these relationships are situated within wider network contexts. We note that, despite substantial reorganization of functional connectivity, several large-scale network properties appear to be preserved across development, suggesting that functional brain networks, even in children, are organized in manners similar to other complex systems. © 2010 Elsevier Inc.
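As a rough illustration of the graph-theory framework the review describes, the sketch below (synthetic data, not the authors' pipeline) thresholds a resting-state correlation matrix into a binary graph and computes two elementary network properties, node degree and clustering coefficient.

```python
# Illustrative sketch (synthetic data, not the authors' pipeline): threshold
# a resting-state correlation matrix into a binary graph, then compute two
# elementary graph-theory measures, node degree and clustering coefficient.
import numpy as np

rng = np.random.default_rng(0)
timeseries = rng.standard_normal((8, 200))   # 8 fake regional BOLD signals
corr = np.corrcoef(timeseries)               # functional connectivity matrix

adj = (corr > 0.1) & ~np.eye(8, dtype=bool)  # threshold, drop self-loops

def clustering(adj, i):
    """Fraction of node i's neighbor pairs that are themselves connected."""
    nbrs = np.flatnonzero(adj[i])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = adj[np.ix_(nbrs, nbrs)].sum() / 2   # edges among the neighbors
    return links / (k * (k - 1) / 2)

print(adj.sum(axis=1))                          # degree of each region
print([round(clustering(adj, i), 2) for i in range(8)])
```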


Edwards M.,Washington University in St. Louis | Zwolak A.,University of Pennsylvania | Schafer D.A.,University of Virginia | Sept D.,University of Michigan | And 2 more authors.
Nature Reviews Molecular Cell Biology | Year: 2014

Capping protein (CP) binds the fast growing barbed end of the actin filament and regulates actin assembly by blocking the addition and loss of actin subunits. Recent studies provide new insights into how CP and barbed-end capping are regulated. Filament elongation factors, such as formins and ENA/VASP (enabled/vasodilator-stimulated phosphoprotein), indirectly regulate CP by competing with CP for binding to the barbed end, whereas other molecules, including V-1 and phospholipids, directly bind to CP and sterically block its interaction with the filament. In addition, a diverse and unrelated group of proteins interact with CP through a conserved 'capping protein interaction' (CPI) motif. These proteins, including CARMIL (capping protein, ARP2/3 and myosin I linker), CD2AP (CD2-associated protein) and the WASH (WASP and SCAR homologue) complex subunit FAM21, recruit CP to specific subcellular locations and modulate its actin-capping activity via allosteric effects. © 2014 Macmillan Publishers Limited. All rights reserved.


Grant
Agency: Department of Defense | Branch: Army | Program: STTR | Phase: Phase I | Award Amount: 99.96K | Year: 2010

This Phase I STTR effort develops and tests spectropolarimetric surface characterization algorithms for LADAR-based remote sensing. A systematic approach is used that first defines the operational scenarios in which the algorithms are to work. These scenarios are then used to define requirements for LADAR hardware and algorithms, which serve to focus the algorithm development effort. A library of surface material Mueller matrix measurements is used as the basis for a fundamental surface characterization investigation that will establish the ultimate potential to discriminate between different materials/classes of materials. The library used consists of existing measurement data from government and industry sources plus measurements made during Phase I to fill high-priority gaps in the data library. A preliminary design of improved instrumentation (which would be built in Phase II) for measuring full Mueller matrix BRDFs will be made in order to address weaknesses in previous measurement data. Finally, a suite of algorithms will be developed to address the high-priority scenarios identified at the beginning of the effort, and these algorithms will be tested on synthetic LADAR images created to represent these scenarios. Testing will be performed on SDI’s existing LEAP ATR application.
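For readers unfamiliar with the formalism, a Mueller matrix is the 4x4 operator that maps an incident Stokes vector (I, Q, U, V) to the scattered one; a full Mueller-matrix BRDF library tabulates that mapping per material and geometry. The toy sketch below uses an ideal linear polarizer, not any measured surface, just to show the basic algebra.

```python
# Toy sketch of the underlying algebra: a Mueller matrix maps an incident
# Stokes vector (I, Q, U, V) to the scattered one. Real LADAR work would use
# measured per-material Mueller-matrix BRDFs; an ideal horizontal linear
# polarizer stands in here as the simplest nontrivial example.
import numpy as np

M = 0.5 * np.array([[1, 1, 0, 0],
                    [1, 1, 0, 0],
                    [0, 0, 0, 0],
                    [0, 0, 0, 0]])          # ideal horizontal polarizer

s_in = np.array([1.0, 0.0, 0.0, 0.0])       # unpolarized incident light
s_out = M @ s_in

dop = np.linalg.norm(s_out[1:]) / s_out[0]  # degree of polarization
print(s_out, dop)                           # half the intensity, DOP = 1.0
```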


News Article | November 10, 2016
Site: www.sciencenews.org

A bark-paper document with a weird backstory, once suspected to be a forgery, is the real deal, researchers say. If so, that increases the likelihood that the plaster-coated book covered with painted images and writing is the earliest known manuscript from ancient America, dating back to the 13th century. No forger could have known how to reproduce all the bookmaking techniques, colored inks and deities pictured in what’s known as the Grolier Codex, concludes a team of researchers who specialize in the Maya and other ancient American societies. For instance, an illustration of a mountain god includes a flaring, cleft head or headdress. Other images of this god were first discovered at Maya sites several decades after the Grolier Codex turned up in the 1960s. “Even a good faker would not have known about the mountain god’s headdress,” says Maya researcher David Freidel of Washington University in St. Louis, who was not involved in the study. John Carlson, a physicist and astronomer at the University of Maryland in College Park, has argued for the authenticity of the codex since the 1980s. He presented a summary of his research supporting that conclusion in a paper appearing in 2014 in the journal Archaeoastronomy. The new analysis comes from Yale University’s Michael Coe and Mary Miller, Stephen Houston of Brown University and Karl Taube of the University of California, Riverside. Appearing in Maya Archaeology 3, a publication released September 7, the study is based on high-quality photographs of the book’s 10 surviving pages. The original Grolier Codex most likely included 20 attached pages that folded like an accordion, the researchers say. A few surviving pages are still attached to one another. Drawn deities and written glyphs in the Grolier Codex display influences from the Maya site of Chichén Itzá on the Yucatán Peninsula and the Toltec site of Tula in central Mexico, the investigators say. Evidence of contacts between the Maya and Toltec societies dates to between the end of Classic Maya civilization around 950 and the arrival of Spanish explorers in the 1500s. Two radiocarbon studies suggest that the book dates to the 1200s. One study, in 1973, tested uncoated bark paper found with the codex. A 2014 study, by Carlson, dated fragments from the codex itself. “The Grolier Codex was probably made by one scribe who did not distinguish between the Toltec and the Maya,” Houston says. Fake Maya codices are ineptly done and easily spotted, he adds. “They are nothing like the Grolier Codex,” Houston contends; the codex displays a sophisticated understanding of both Maya and Toltec artistic styles and glyphs. Three other Maya bark-paper manuscripts, dubbed the Dresden, Madrid and Paris codices for the cities where they’re housed, were discovered and authenticated by the mid-1800s (SN: 2/21/15, p. 14). Those codices were made before Spanish contact but lack radiocarbon ages. Since its reported discovery in a southern Mexico cave by looters in the early 1960s, the Grolier Codex has aroused suspicions of fakery. A wealthy Mexican collector exhibited the document at New York City’s Grolier Club in 1971, giving the codex its name. The collector claimed that two looters had taken him by airplane to a remote spot where he was shown the book and other finds from the cave, including a wooden mask. Another red flag was the Grolier Codex’s simple illustrations and glyphs. The three authenticated Maya codices feature elaborate notations, calculations and illustrations.
These surviving Maya codices were probably spared by Spanish authorities as special examples of New World culture to show Europeans, Houston says. By contrast, the Grolier Codex is a “run-of-the-mill” ritual guide of a type that the Spanish destroyed en masse, he suspects. A calendar predicting the movements of the planet Venus over 104 years, with an undetermined start date, appears in the Grolier Codex. The number of days between Venus’ appearances as the morning star was used to time ritual events. Gods depicted in the Grolier Codex carry out demands of Venus related to the sun, death and other basic concerns. The new analysis also supports the codex’s authenticity through its examination of previously reported radiocarbon dates and of colored inks that probably include Maya blue (SN: 3/1/08, p. 134), a pigment whose chemical signature remained unknown until the 1980s, archaeologists Harvey and Victoria Bricker, both of Tulane University in New Orleans, wrote in an email to Science News. The Brickers agree that the Grolier Codex contains a mix of Maya and Central Mexican features. Still, the document remains controversial. Susan Milbrath of the Florida Museum of Natural History in Gainesville, a longtime Grolier Codex skeptic, declined to comment on the new paper. But she emailed a 2007 paper suggesting that stains and jagged tears in some parts of the document were produced long after the 1200s, possibly in an effort to make the manuscript look older than it actually is. Plaster and ink erosion on the manuscript caused that damage, Houston says. Editor’s note: This story was updated November 10, 2016, to include details on previous research supporting the authenticity of the codex, to clarify the description of Maya Archaeology 3 and to correct the description of the radiocarbon dating done in 1973.


News Article | November 3, 2016
Site: www.sciencedaily.com

The discarded bone of a chicken leg, still etched with teeth marks from a dinner thousands of years ago, provides some of the oldest known physical evidence for the introduction of domesticated chickens to the continent of Africa, research from Washington University in St. Louis has confirmed. Based on radiocarbon dating of about 30 chicken bones unearthed at the site of an ancient farming village in present-day Ethiopia, the findings shed new light on how domesticated chickens crossed ancient roads -- and seas -- to reach farms and plates in Africa and, eventually, every other corner of the globe. "Our study provides the earliest directly dated evidence for the presence of chickens in Africa and points to the significance of Red Sea and East African trade routes in the introduction of the chicken," said Helina Woldekiros, lead author and a postdoctoral anthropology researcher in Arts & Sciences at Washington University. The main wild ancestor of today's chickens, the red junglefowl, Gallus gallus, is endemic to sub-Himalayan northern India, southern China and Southeast Asia, where chickens were first domesticated 6,000-8,000 years ago. Now nearly ubiquitous around the world, the offspring of these first-domesticated chickens are providing modern researchers with valuable clues to ancient agricultural and trade contacts. The arrival of chickens in Africa and the routes by which they both entered and dispersed across the continent are not well known. Previous research based on representations of chickens on ceramics and paintings, plus bones from other archaeological sites, suggested that chickens were first introduced to Africa through North Africa, Egypt and the Nile Valley about 2,500 years ago. The earliest bone-based evidence of chickens in Africa dates to the late first millennium B.C., from the Saite levels at Buto, Egypt -- approximately 685-525 B.C. This study, published in the International Journal of Osteoarchaeology, pushes that date back by hundreds of years. Co-authored by Catherine D'Andrea, professor of archaeology at Simon Fraser University in Canada, the research also suggests that the earliest introductions may have come from trade routes on the continent's eastern coast. "Some of these bones were directly radiocarbon dated to 819-755 B.C., and with charcoal dates of 919-801 B.C. make these the earliest chickens in Africa," Woldekiros said. "They predate the earliest known Egyptian chickens by at least 300 years and highlight early exotic faunal exchanges in the Horn of Africa during the early first millennium B.C." Despite their widespread, modern-day importance, chicken remains are found in small numbers at archaeological sites. Because wild relatives of the galliform chicken species are plentiful in Africa, this study required researchers to sift through the remnants of many small bird species to identify bones with the unique sizes and shapes that are characteristic of domestic chickens. Woldekiros, the project's zooarchaeologist, studied the chicken bones at a field lab in northern Ethiopia and confirmed her identifications using a comparative bone collection at the Institute of Paleoanatomy at Ludwig Maximilian University in Munich. Excavated by a team of researchers led by D'Andrea of Simon Fraser, the bones analyzed for this study were recovered from the kitchen and living floors of an ancient farming community known as Mezber. The rural village was located in northern Ethiopia about 30 miles from the urban center of the pre-Aksumite civilization.
The pre-Aksumites were the earliest people in the Horn of Africa to form complex, urban-rural trading networks. Linguistic studies of ancient root words for chickens in African languages suggest multiple introductions of chickens to Africa following different routes: from North Africa through the Sahara to West Africa; and from the East African coast to Central Africa. Scholars also have demonstrated the biodiversity of modern-day African village chickens through molecular genetic studies. "It is likely that people brought chickens to Ethiopia and the Horn of Africa repeatedly over a long period of time: over 1,000 years," Woldekiros said. "Our archaeological findings help to explain the genetic diversity of modern African chickens resulting from the introduction of diverse chicken lineages coming from early Arabian and South Asian contexts and later Swahili networks." These findings contribute to broader stories of ways in which people move domestic animals around the world through migration, exchange and trade. Ancient introductions of domestic animals to new regions were not always successful. Zooarchaeological studies of the most popular domestic animals such as cattle, sheep, goats and pigs have demonstrated repeated introductions as well as failures of new species in different regions of the world. "Our study also supports the African Red Sea coast as one possible early route of introduction of chickens to Africa and the Horn," Woldekiros said. "It fits with ways in which maritime exchange networks were important for global distribution of chicken and other agricultural products. The early dates for chickens at Mezber, combined with their presence in all of the occupation phases at Mezber and in Aksumite contexts 40 B.C.-600 A.D. in other parts of Ethiopia, demonstrate their long-term success in northern Ethiopia."
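As background on how dates such as "819-755 B.C." are obtained, the sketch below applies the conventional radiocarbon-age formula (age in years before 1950 equals -8033 times the natural log of the remaining 14C fraction, using the Libby mean life). The fraction shown is hypothetical, and real determinations are further calibrated against tree-ring curves.

```python
# Background sketch: the conventional radiocarbon-age formula, using the
# Libby mean life of 8,033 years. The 14C fraction below is hypothetical;
# real dates are also calibrated against tree-ring curves.
import math

def radiocarbon_age_bp(f14c):
    """Conventional radiocarbon age (years before 1950) from 14C fraction."""
    return -8033 * math.log(f14c)

age_bp = radiocarbon_age_bp(0.715)       # hypothetical surviving fraction
print(round(age_bp), 'BP, i.e. roughly', round(age_bp - 1950), 'B.C.')
```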


News Article | October 26, 2016
Site: www.nature.com

The career progression of many junior researchers is hamstrung by a global postdoc glut, ultra-tight funding and microscopic chances for tenure-track posts. We asked Gary McDowell, Chris Pickett and Jessica Polka how they intend to transform the scientific enterprise to repair some of the dysfunction that chokes researchers' careers and forces young people to choose between quality of life and a chance of advancement. McDowell's interest began when he was a postdoc at Tufts University in Boston, Massachusetts. With Polka and others, he formed Future of Research (FoR) in San Francisco, which seeks to give junior researchers a voice for their concerns and to help them develop solutions. As executive director of the organization, he aims to empower postdocs and other junior scientists with information on career options, postdoc classification and compensation. Pickett was in the middle of a postdoc at Washington University in St. Louis, Missouri, when he realized that he wanted to pursue politics and policy as a way to change the culture of science. A policy fellowship at the American Society for Biochemistry and Molecular Biology (ASBMB) in Rockville, Maryland, turned into an analyst post there, and that led to his current position as director of Rescuing Biomedical Research in Washington DC. He and the group — founded by thought leaders including Shirley Tilghman, former president of Princeton University in New Jersey, and Harold Varmus, former director of the US National Cancer Institute in Bethesda, Maryland — seek to tackle the problems that stop junior researchers from launching sustainable careers in biomedical research. As a PhD student at the University of California, San Francisco (UCSF), and postdoc at Harvard Medical School in Boston, Polka saw the biomedical enterprise as a vast system of moving parts that does not always function optimally. Active in the development of both FoR and Rescuing Biomedical Research, she has most recently seized on using biology preprints to accelerate the pace of knowledge transfer and to promote career development in a venture dubbed ASAPbio, based at UCSF. I got into science to work on really big problems. After witnessing colleagues' frustrations with unequal pay, stymied career development, lack of diversity and other issues, I realized that the biggest problem could be systemic to academia. FoR aims to involve junior scientists in making the scientific enterprise more sustainable, and a crucial part of that is getting them to come together and share experiences and data. Transparency is key. Junior scientists need to know what they are getting into. Postdocs, for example, are dealt with haphazardly at the department level, with differing salaries and benefits at the same institution. Early-career scientists often hear platitudes, such as 'More PhDs make America smarter'. That sounds great, but we haven't been using good science to see whether that argument stands up. We don't track anything to see whether the United States is in fact smarter. It's hard to push back on a romantic ideal. I go to conferences and ask questions. Recently, for example, I asked for data on the number of available jobs in non-academic careers, how competitive those jobs are and whether anyone has modelled the future labour market for new PhD students. Of course, there are no such data or models. People say nobody asks those questions. I find that strange. As a scientist, I want to see the data that test those underlying assumptions.
A generational divide was clear during a discussion about scientific staff positions held at the ASBMB Sustainability Summit in February. A senior scientist asked, incredulously, who would want a second-tier career position. I argued that these are desirable positions, especially after seeing the many difficulties faced by new principal investigators. People seem to agree that early-career scientists have legitimate concerns, but it's also popular to call them young and entitled. There's a sense that 'everyone has to go through what I did'. I don't think junior researchers should have to martyr themselves for science. This year, however, the conversation has increasingly involved graduate students. It is also a big year for US postdocs because they will attain employee rights. On 1 December, an update to the Fair Labor Standards Act (FLSA) will raise the salary threshold below which workers must be paid overtime for working more than 40 hours per week — from US$23,660 to $47,476. In some instances, salaries will need to double to keep postdocs exempt, and institutions are panicking because employees may lose their jobs if those wages can't be doubled. People think it's just about asking for more funding, but it's not. It's about sustainable funding. Funding booms and busts caused the problems we're facing now. By 2020, I hope that people pursuing scientific research careers are as informed as those who are currently in the medical-school system. Graduate students need to know how they will be supported and trained in particular programmes, and by particular principal investigators. All who are trying to maximize their passion for science need to know what they are getting into. Some graduate students and postdocs are not as free as others to leave the lab to pursue career-development opportunities or training, let alone for advocacy. This can make progress difficult for our group and others, because there is only a very small number of people who can advocate for change. Still, I've been pleasantly surprised by the level of engagement. There were about 20 people involved in our first meeting in 2014, and today, we have roughly 100 active volunteers in the United States and abroad who are engaged and in regular contact. I have left the bench — and I don't know if I'll go back. I feel no sadness whatsoever. My research was interesting, but I hadn't yet figured out that I enjoy doing things that effect some kind of change. It's very liberating not having to worry about all the issues that I now spend my time trying to alleviate for others. The biggest challenge that academia faces is the need for a culture change. My motivation was seeing junior scientists buy into the idea that scientific success means attaining a faculty position. Helping early-career scientists to get information about the skills needed for a variety of careers, and encouraging universities to recognize postdocs and improve their pay and benefits — both these goals require people to change their minds about how things have been done since the inception of the biomedical research enterprise. There will always be pockets of resistance. But more people are addressing these issues now, by offering career training for junior researchers and improving the funding outlook, for example. While I was a science-policy analyst, I compared 9 reports and consolidated 250 suggestions into 8 recommendations.
Two of those suggestions are to broaden training for graduate students and postdocs to prepare them for a variety of careers, and to add more staff-scientist positions at universities (Pickett, C. L. et al. Proc. Natl Acad. Sci. USA 112, 10832–10836; 2015). Some of the recommendations in US reports from the National Academy of Sciences in the mid-1990s were the same as those in my report. I fear that if I hadn't written this paper, a lot of these reports would continue to sit on shelves collecting dust. People have been talking about solving these problems for a long time, but there hasn't been enough popular support in the scientific community. And that's not helped by resistance to change in government and at universities. We now have long-time advocates who are active at the same time as broad grass-roots efforts. It's a potent mix for achieving real change. We can't let that momentum go. We want to improve the environment for biomedical researchers at all career stages. Take postdocs, for example. The 2014 US National Postdoctoral Association's Institutional Policy Report showed that there are 37 titles for the single job of postdoc. This hinders the ability to introduce a unified pay or benefits scheme. We're pushing for institutions to harmonize postdoc categories into one central group with the same funding and tax codes. For postdocs to be compensated in a single, uniform way would bring about a huge shift in the science community. Our next step is to encourage research institutes, universities and governmental agencies to pilot some of these recommendations. The biggest barrier to change is that the system we have now works, even if inefficiently, and we don't know what will happen if we change it. The only way around that is to experiment with small-scale pilot programmes. Just as at the bench, successes and failures tell you important things that help to move the conversation forward. In a few years, I want to be able to take my paper and cross off the recommendations that have been tried, and know whether they worked or failed. The first pilot I'd like to see would involve universities collecting and publishing data on the eventual careers of their PhD alumni. If we can aggregate that at the national level, it will be a huge benefit to undergraduates, graduate students and biomedical departments across the country. In 2014, when I was a postdoc, I attended a meeting to discuss the future of the research enterprise. Shortly afterwards, I met Gary McDowell at a meeting of the Boston Postdoctoral Association. The FoR advocacy group emerged from all of this. In the same year, I met Shirley Tilghman, then president of the American Society for Cell Biology, while I was co-chairing the society's new student and postdoc committee. She invited me to join the steering committee of Rescuing Biomedical Research. When I read Pickett's paper and subsequent blogposts for the American Society for Biochemistry and Molecular Biology, his framing of these issues in terms of sustainability brought home the need for fundamental change. Publication is the currency through which scientists obtain credit and recognition, and lies at the centre of a lot of the problems in research. Ron Vale, a molecular pharmacologist at UCSF, really put the issue of increasing time to publication on the table (Vale, R. D. Proc. Natl Acad. Sci. USA 112, 13439–13446; 2015) and organized the rest of us to launch ASAPbio as an independent spin-off group last year.
My volunteering with policy groups and experience in organizing meetings and conducting outreach helped to soften my transition to director of ASAPbio in August. ASAPbio is trying to promote the most productive use of preprints in biology. We see preprinting in other fields — physics, computer science and maths — as a way to address problems that early-career researchers experience most acutely. Having a long time to publication strains early careers, limits feedback on work through a closed peer-review process and slows the pace of science. Transmission of knowledge is the foundation on which all major discoveries are built. We want to accelerate that process. Preprints are not widely used in biology. There's a general lack of awareness about them and they're not part of our culture. We're trying to encourage scientists to have conversations about preprints. I hear two main concerns about preprints. The first is that they will disqualify authors from publication in top-tier journals. That's not true. There's been a remarkable trend of acceptance of these practices at journals in the past few years. The second is that preprint users will get scooped. This is a valid concern, but one that will be easily remedied as more and more people use them. In other fields, preprints are cited, and treated as a first-class research product. If we want this in biology, we need to create the infrastructure, including the introduction of standards. The rise of social media has enabled people to compare their experiences and to coordinate themselves better. In the past, this was possible only through more formal channels. As people debate the more conservative and radical positions on preprints in the public arena of Twitter, anyone can read them and take part in the discussion. People are also starting to share their first preprints with the hashtag #ASAPbio. I would like to see more than 100,000 biology preprints posted each year by 2020. That would represent 10% of the volume of manuscripts that appear on PubMed annually. It would roughly equal the number in physics, too. This is an ambitious number, so I'll consider any increase a win. These interviews have been edited for length and clarity.


News Article | September 6, 2016
Site: news.yahoo.com

BERKELEY, Calif. (Reuters) - Scientists are developing dust-sized wireless sensors implanted inside the body to track neural activity in real-time, offering a potential new way to monitor or treat a range of conditions including epilepsy and control next-generation prosthetics. The tiny devices have been demonstrated successfully in rats, and could be tested in people within two years, the researchers said. "You can almost think of it as sort of an internal, deep-tissue Fitbit, where you would be collecting a lot of data that today we think of as hard to access," said Michel Maharbiz, an associate professor of electrical engineering and computer science at the University of California, Berkeley. Fitbit Inc sells wearable fitness devices that measure data including heart rate, quality of sleep, number of steps walked and stairs climbed, and more. Current medical technologies employ a range of wired electrodes attached to different parts of the body to monitor and treat conditions ranging from heart arrhythmia to epilepsy. The idea here, according to Maharbiz, is to make those technologies wireless. The new sensors have no need for wires or batteries. They use ultrasound waves both for power and to retrieve data from the nervous system. The sensors, which the scientists called "motes," are about the size of a grain of sand. The scientists used them to monitor in real time the rat peripheral nervous system - the part of the body's nervous system that lies outside the brain and spinal cord, according to findings published last month in the journal Neuron. The sensors consist of components called piezoelectric crystals that convert ultrasound waves into electricity that powers tiny transistors in contact with nerve cells in the body. The transistors record neural activity and, using the same ultrasound wave signal, send the data outside the body to a receiver. The researchers said such wireless sensors potentially could give human amputees or quadriplegics a more efficient means of controlling future prosthetic devices. "It's a meaningful advancement in recording data from inside the body," said Dr. Eric Leuthardt, a professor of neurosurgery at Washington University in St. Louis. "Demonstrations of capability are one thing, but making something for clinical use, to be used as a medical device, is still going to have to be worked out." Before implanting wireless sensors into the brain, the science of understanding how the brain processes and shares information needs to advance further, Leuthardt said. To deliver motes, currently one millimeter in size, into the brain, the researchers would need to miniaturize the sensors further to about 50 microns, about the width of a human hair. "It's not impossible," Maharbiz said. "The math is there."


News Article | February 15, 2017
Site: www.wired.com

This story is part of our special coverage, The News in Crisis. When Morgan DeBaun was a student at Washington University in St. Louis during the early Obama years, she and a handful of friends often found themselves at this one lunch table in a campus cafeteria. It was big and round, whereas the other tables in the cafeteria were long and rectangular, and it was perfect for the hours the group spent talking about what shows they were watching, what music they were listening to, and whatever was happening in the news or around campus. They were among the very few black students at the predominantly white university, and the table became a place of both sanctuary and celebration. Over time, other black students would drift into their orbit and join the conversation. It almost felt like gravity—or what DeBaun came to think of as black gravity. That was six years ago, and today DeBaun is the CEO and cofounder of Blavity, a three-year-old media and tech company that’s been described as “BuzzFeed for black millennials.” With 17 full-time staffers in its LA offices, Blavity publishes articles with titles like “From Trayvon Martin to Alton Sterling: Tears That Never Dry” and “Why Atlanta Is the Most Authentically Modern Black Experience on TV Right Now.” At the core of the site is the sense of community DeBaun found at the round table. “Our audience likes to talk to each other,” she says. “You can’t just say, ‘Beyoncé released an album.’ They want to talk and argue about it. So how do we facilitate that engagement?” Part of her strategy is a reliance on user-generated content. Roughly 60 percent of the articles and videos on the site are submitted by readers, then edited by Blavity’s staff. To DeBaun, this isn’t just free content that invites readers into the editorial process—it’s journalism created by and for her target audience. “The people who make the best content on Instagram and Twitter are usually black,” she says. “With Blavity we built a platform to showcase that creativity.” When a series of racist texts was sent to black students at the University of Pennsylvania after Donald Trump’s election victory, Blavity didn’t link to or rely on reporting from, say, The Philadelphia Inquirer (with its 86 percent white newsroom); it published “Reflecting on Racism at UPenn: A Call to Action From the Front Lines,” written by the director of the college’s Black Cultural Center and featuring on-the-ground, in-the-room-where-it-happened details about the incident and its aftermath. “Black people are being attacked at an institutional level,” DeBaun says. “Blavity having scale, and being able to distribute their stories, will be really powerful, especially now.” With the launch this past November of Afrotech, a summit in San Francisco for black people in tech, Blavity is expanding its reach into another community where, as in journalism, people of color remain painfully underrepresented. DeBaun and her growing team at Blavity have another future in mind. This article appears in the March issue.


News Article | November 15, 2016
Site: www.npr.org

Mantis shrimp, a group of aggressive, reef-dwelling crustaceans, take more than one first-place ribbon in the animal kingdom. Outwardly, they resemble their somewhat larger lobster cousins, but their colorful shells contain an impressive set of superpowers. Now, scientists are finding that one of those abilities — incredible eyesight — has potentially lifesaving implications for people with cancer. "They have these ridiculous eyes that sense so many things at once," says Sam Powell, a doctoral student in computer science and engineering at Washington University in St. Louis. "It's been very interesting figuring out what we can do with that, that helps out humans." Powell is part of a collaborative team of engineers and wildlife biologists, co-led by Viktor Gruev at the University of Illinois at Urbana-Champaign. They're working on a set of imaging technologies inspired by the ability of the mantis shrimp to detect polarized light. With the camera the team is developing, Gruev says, cancer surgeons might one day be able to much more clearly see the margins of the tumors they need to remove. Mantis shrimp come in two varieties. There are the "smashers" and the "spearers," named for their attack modes when hunting prey. With their spring-loaded, weaponized legs, these predators can crack a snail shell or harpoon a passing fish in a single punch. The speed of these attacks has earned the mantis shrimp a world record: fastest strike in the animal kingdom. At 30 times faster than the blink of an eye, the attack is so swift that it can vaporize nearby water molecules, producing bubbles where no bubbles should be. The mantis shrimp's powerful punch is aided by its world-class eyesight. Like the eyes of most crustaceans and insects, the mantis shrimp's are made up of thousands of light-trapping facets — picture a fly's eye — known as ommatidia. What's unique to the mantis shrimp is the way the ommatidia of each eye are divided into three sections, each moving independently. That means mantis shrimp vision is able to triangulate distance using up to six images in the brain. "That's important for an animal that makes its living smashing and spearing things," says Roy Caldwell, a mantis shrimp specialist in the integrative biology department at the University of California, Berkeley. But the power of the mantis shrimp eye doesn't end there. Mantis shrimp can perceive an attribute of light that eludes our naked eyes: polarization. Sunlight is messy — a jumble of wavelengths moving in all directions at once. But some surfaces — say, the scale of a fish, or a pair of polarized sunglasses — have a way of changing the light they reflect or transmit, organizing it so it moves in a single plane. We humans can't really tell this is happening. But the mantis shrimp eye has extra sensors that let it analyze the angle of the light wave. That means the shrimp is able to make out where in its neighborhood light is being polarized, and where it isn't. So the polarizing surfaces of fish, crabs and other potential prey look more vivid against the less polarized backdrop of water. This ability to detect and visually capitalize on polarized light "is very common in animals," says Thomas Cronin, a research biologist and professor at the University of Maryland, Baltimore County. "In fact, we're among the few that don't use polarized light very much, if at all." What's unique to some mantis shrimp is their ability to perceive yet another, far rarer variety of polarized light.
This "circular" polarized light moves not in a flat plane, but in a twisted one that spins through space like a helix. Circular polarized light is part of what makes 3-D glasses and DVD technology work. Mantis shrimp not only see this kind of polarization, they broadcast it. Parts of the male's body function as a circular-polarizing surface, flashing a secret code that's only visible within the species. "It gives them an incredibly private channel of communication that no other animal can see," says Caldwell. Males display these body parts during courtship to attract females. Other research has shown that adult males flash their polarizing parts to alert other males to their presence inside a burrow, a warning that the homeowner is armed and dangerous. Inspired by the mantis shrimp's superlative eyesight, the group of researchers are collaborating to build polarization cameras that could be used to help diagnose and remove cancerous tumors. Doctors have long known that, at the cellular level, fast-growing cancer cells are disorganized in comparison with healthy cells. Because of the structural differences, it turns out, some diseased tissues also reflect polarized light differently from healthy tissue. These differences can show up early with cancer — before other symptoms or signs. "Looking at nature can help us design better and more sensitive imaging techniques," Gruev says. His team's cameras, which are small enough for endoscopic use, can see polarization patterns on the surfaces of human and animal tissue. In one study, Gruev and his team tested their polarization cameras in mice, looking for signs of colon cancer. Traditional colonoscopy techniques employ black-and-white images to look for abnormal shapes, such as polyps. But sometimes, cancerous tissue in the colon is flat, blending in with healthy tissue. Gruev's camera, in contrast, successfully converted polarization data into color images in real time in the mice studies, revealing where the healthy tissue ended and the diseased tissue began. Interestingly, different types of cancer cells have different polarization signatures, Gruev says, while healthy tissues have a consistent profile. "The polarization structure makes the cancer apparent," he says. Clinical trials with human breast cancer patients are now underway. At this point, doctors have no way to confirm during the operation that they've completely removed a tumor. One day, Gruev believes, polarization imaging will be part of every surgical oncologist's toolkit, bringing the power of a mantis shrimp's eye to the operating room. "It's kind of the cancer moonshot," Gruev says of his hopes for the diagnostic technology. "Right now, we are still detecting cancer way too late in the game." This post and video were produced by our friends at Deep Look, a wildlife video series from KQED and PBS Digital Studios that explores "the unseen at the very edge of our visible world." KQED's Elliott Kennerson is a digital media producer for the series.


News Article | January 20, 2016
Site: www.nature.com

Scientists hunting for academic jobs got a rare glimpse into the mysterious tenure-track hiring process. A blog post written by computational genomicist Sean Eddy at Harvard University in Cambridge, Massachusetts, outlined the steps that he and his colleagues have taken since November to evaluate nearly 200 applicants for a Harvard faculty position. Interviews for six candidates begin this week. A tweet by Eddy on 9 January attracted fresh attention to the blog post, with commenters applauding his efforts to lift the veil on the selection process. Holly Bik, a genomics and bioinformatics researcher at New York University who is applying for jobs, tweeted her appreciation. Eddy is a co-chair of the hiring committee for the faculty position at Harvard’s FAS Center for Systems Biology, and only joined the faculty there in July 2015. He says that he wrote the blog post to clarify the hiring process, adding that he is not commenting on Harvard’s recruitment policy. “People don’t get a lot of information about what happens in one of these searches and what the selection criteria are,” he says. “I’m worried about people not applying because they think we’re not going to hire minorities or women or people who don’t have Harvard degrees.” Eddy described the first step as triage, in which three faculty members (including the two hiring committee chairs) review every application, spending about ten minutes on each one. He wrote that he is looking for a clear research question in the research proposal, and looks through the publication history and attached publications — not to scrutinize journal impact factors or citation counts, but to assess the quality, creativity and trajectory of a candidate’s scientific contributions. “I think a lot of the angsty gnashing of teeth about needing Nature/Science/Cell papers is self-inflicted by the candidates,” Eddy writes. Honours, grants and letters of recommendation count at this stage, too. Successful applications are then re-read by three faculty members randomly chosen from the entire eight-member committee. They take a more thorough look at the research aims and the publications. In the end, however, Eddy admits, “A lot of it comes down to intangibles, like whether people in the department get excited about a candidate’s research question.” Eddy noted that the applicant group was made up of only 21% female candidates and 5% from underrepresented minorities. Many — Eddy didn’t count the exact number — had done at least some of their training at Harvard. Bik responded to the blog post, noting that she avoided applying for Harvard positions because of the sheer volume of other jobs that she was also applying for. She decided to place her bets on positions where she thought she had the best chance of success. “I said: ‘I’m not going to apply for this one because I have other applications that I really need to focus more of my time on’,” she said in an interview. On Twitter, Eddy asked Bik’s advice on how to avoid applicants taking themselves out of the running so that employers can recruit from a broader pool of applicants. Bik also suggested in her blog comment that crowd-sourced job wikis — on which job applicants anonymously compile information about available faculty positions — would be good places for schools to encourage applications from people who might need a nudge. In his post, Eddy also wrote that he tries to consider his own implicit biases — such as those against women and minorities in science — during the shortlisting process.
One way to measure these biases is with an online test designed by Harvard psychologist Mahzarin Banaji and her colleagues. Eddy says that he has taken this test a few times — after starting at Harvard and again after reading the hiring committee’s guidelines. “I used to think that I don’t have such biases. … Now I know I have implicit biases,” he wrote in the blog post. To counteract them, he said he initially evaluated applications from women and minority candidates separately from those from men, and then combined the shortlists — but not to create quotas, he added. “It’s one of the few concrete things I can think of to do in a process like this, to force people including myself to have a conscious, slow, second look at their decisions,” Eddy said. “It’s a work in progress. These are tough issues.” Bioengineer Ian Holmes, an associate professor at the University of California, Berkeley, who has worked with Eddy in the past, commended him in a tweet. Holmes acknowledged his own bias in subsequent tweets, adding in an interview: “I find it regrettable that I am biased, but I think there is more shame in not acknowledging one’s bias than in having the bias.” Eddy’s post resonated with other faculty members who have been involved in recruitment. Joan Strassmann, an evolutionary biologist and professor at Washington University in St. Louis, tweeted her agreement. Strassmann has written for years about the challenges of hiring faculty members on her Sociobiology blog. “I don’t want anyone to not go into [academia] because it seems like a club with secret rules,” she said in an interview. In November, she wrote about how the process is inherently unfair. “Our job is to hire an excellent scientist, colleague, and teacher,” she wrote. “There are likely to be others even better in the pool, but not discoverable by our imperfect techniques.” Bik says that increased transparency in hiring is good for applicants. Whereas some might have insider information about a position because of well-connected mentors, others may have to rely only on what they can find online. “I think that transparency and availability of information are extremely valuable to evening out the playing field.”


News Article | August 30, 2016
Site: www.fastcompany.com

Whether it’s a new technology, a foreign language, or an advanced skill, staying competitive often means learning new things. Nearly two-thirds of U.S. workers have taken a course or sought additional training to advance their careers, according to a March 2016 study by Pew Research Center. They report that results have included an expanded professional network, a new job or a different career path. Being a quick learner can give you an even greater edge. Research points to six ways you can learn and retain something faster. If you imagine that you’ll need to teach someone else the material or task you are trying to grasp, you can speed up your learning and remember more, according to a study done at Washington University in St. Louis. The expectation changes your mind-set so that you engage in more effective approaches to learning than those who simply learn to pass a test, according to John Nestojko, a postdoctoral researcher in psychology and coauthor of the study. "When teachers prepare to teach, they tend to seek out key points and organize information into a coherent structure," Nestojko writes. "Our results suggest that students also turn to these types of effective learning strategies when they expect to teach." Experts at Louisiana State University’s Center for Academic Success suggest dedicating 30-50 minutes to learning new material. "Anything less than 30 is just not enough, but anything more than 50 is too much information for your brain to take in at one time," writes learning strategies graduate assistant Ellen Dunn. Once you’re done, take a five- to 10-minute break before you start another session. Brief, frequent learning sessions are much better than longer, infrequent ones, agrees Neil Starr, a course mentor at Western Governors University, an online nonprofit university where the average student earns a bachelor’s degree in two and a half years. He recommends preparing for micro learning sessions. "Make note cards by hand for the more difficult concepts you are trying to master," he says. "You never know when you’ll have some in-between time to take advantage of." While it’s faster to take notes on a laptop, using a pen and paper will help you learn and comprehend better. Researchers at Princeton University and UCLA found that when students took notes by hand, they listened more actively and were able to identify important concepts. Taking notes on a laptop, however, leads to mindless transcription, as well as an opportunity for distraction, such as email. "In three studies, we found that students who took notes on laptops performed worse on conceptual questions than students who took notes longhand," writes coauthor and Princeton University psychology professor Pam Mueller. "We show that whereas taking more notes can be beneficial, laptop note takers’ tendency to transcribe lectures verbatim rather than processing information and reframing it in their own words is detrimental to learning." While it sounds counterintuitive, you can learn faster when you practice distributed learning, or "spacing." In an interview with The New York Times, Benedict Carey, author of How We Learn: The Surprising Truth About When, Where, and Why It Happens, says learning is like watering a lawn. "You can water a lawn once a week for 90 minutes or three times a week for 30 minutes," he said. "Spacing out the watering during the week will keep the lawn greener over time." To retain material, Carey said it’s best to review the information one to two days after first studying it.
“One theory is that the brain actually pays less attention during short learning intervals," he said in the interview. "So repeating the information over a longer interval—say a few days or a week later, rather than in rapid succession—sends a stronger signal to the brain that it needs to retain the information."

Downtime is important when it comes to retaining what you learn, and getting sleep in between study sessions can boost your recall up to six months later, according to new research published in Psychological Science. In an experiment held in France, participants were taught the Swahili translation for 16 French words in two sessions. Participants in the "wake" group completed the first learning session in the morning and the second session in the evening of the same day, while participants in the "sleep" group completed the first session in the evening, slept, and then completed the second session the following morning. Participants who had slept between sessions recalled about 10 of the 16 words, on average, while those who hadn't slept recalled only about 7.5 words. "Our results suggest that interweaving sleep between practice sessions leads to a twofold advantage, reducing the time spent relearning and ensuring a much better long-term retention than practice alone," writes psychological scientist Stephanie Mazza of the University of Lyon. "Previous research suggested that sleeping after learning is definitely a good strategy, but now we show that sleeping between two learning sessions greatly improves such a strategy."

When learning a new motor skill, changing the way you practice it can help you master it faster, according to a new study at Johns Hopkins University School of Medicine. In an experiment, participants were asked to learn a computer-based task. Those who used a modified learning technique during their second session performed better than those who repeated the same method. The findings suggest that reconsolidation, a process in which existing memories are recalled and modified with new knowledge, plays a key role in strengthening motor skills, writes Pablo A. Celnik, senior study author and professor of physical medicine and rehabilitation. "What we found is if you practice a slightly modified version of a task you want to master," he writes, "you actually learn more and faster than if you just keep practicing the exact same thing multiple times in a row."
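For readers who want to put the spacing principle into practice, here is a minimal Python sketch of an expanding-interval review scheduler. It is an illustration only: the one-day first gap echoes Carey's advice to revisit material a day or two after first studying it, but the doubling factor and the number of reviews are arbitrary assumptions, not parameters from any of the studies above.

    from datetime import date, timedelta

    def review_schedule(start, first_gap_days=1, growth=2.0, reviews=5):
        """Return review dates whose gaps grow geometrically.

        first_gap_days=1 mirrors the advice to review one to two days
        after first studying; growth=2.0 is an arbitrary illustrative
        choice, not a value taken from the research above.
        """
        gap, day, schedule = first_gap_days, start, []
        for _ in range(reviews):
            day += timedelta(days=round(gap))
            schedule.append(day)
            gap *= growth
        return schedule

    # Example: study on Jan. 1, then review at gaps of 1, 2, 4, 8 and 16 days.
    for d in review_schedule(date(2017, 1, 1)):
        print(d)

The geometric gaps simply spread the same total study time across widening intervals, the "three times a week for 30 minutes" pattern rather than one 90-minute session.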


News Article | February 28, 2017
Site: www.chromatographytechniques.com

Evidence of ancient climate patterns throughout the history of the Earth is recorded in tree rings and in bubbles in polar ice. For really ancient readings, scientists have looked at the oxygen isotopes within fossilized amoeba-like organisms known as foraminifera, whose tiny shells act as a kind of historical thermometer in ocean sediments. But now another record may be available to scientists investigating millions of years of temperature and atmosphere changes, according to a new paper in the journal Nature Communications.

Coccolithophores, phytoplankton that thrive in sunlit surface waters but die and descend to make up a huge proportion of the “calcareous ooze” on the ocean floor, could serve as a whole new gauge of how the Earth has changed, reports the team from the University of Oxford, Washington University in St. Louis, and the Plymouth Marine Laboratory. The team grew different species of coccolithophores and placed the algae in environments with an array of different carbon levels. It was “not all that different from gardening,” said Harry McClelland, one of the authors, now of Washington University. By tracking the algae’s uptake of light and heavy carbon isotopes and cross-referencing those signals with the calcite plates the organisms produce, the researchers were able to calculate the ratio of calcification to photosynthesis.

Senior author Rosalind Rickaby, a professor of biogeochemistry at Oxford, said the work could prove to be a whole new tool for assessing ancient climate and the fluctuations of Earth’s temperature and atmosphere. “Our model allows scientists to understand algal signals of the past, like never before,” said Rickaby. “It unlocks the potential of fossilized coccolithophores to become a routine tool, used in studying ancient algal physiology and also ultimately as a recorder of past CO2 levels.”

Rickaby and some of the same personnel have published a series of coccolithophore studies over the last few months. A September paper in Scientific Reports outlined why the natural recordkeeping of the algae had eluded laboratory detection, and a follow-up in October, also in Nature Communications, found that the uronic acid content of the species reflects adaptation to previous carbon dioxide signatures in the ambient environment.


News Article | November 2, 2015
Site: phys.org

In the following years, a trail of dead crows marked the spread of the virus from the East through the Midwest on to the West Coast. It took only four years for the introduced virus to span the continent. But what happened to bird populations in the wake of the virus's advance? Were some species decimated and others left untouched? After the initial die-off, were the remaining birds immune, or mowed down by successive waves of the disease? Nobody really knew.

Now, a study published in the Nov. 2 issue of Proceedings of the National Academy of Sciences provides some answers. The study, a collaboration among scientists at Colorado State University, the University of California at Los Angeles, Washington University in St. Louis and the Institute for Bird Populations (IBP), is the first to fully document the demographic impacts of West Nile virus on North American bird populations. The scientists analyzed 16 years of mark-recapture data collected at more than 500 bird-banding stations operated under the Monitoring Avian Productivity and Survivorship protocol developed by IBP, a California-based nonprofit that studies declines in bird populations. They discovered large-scale declines in roughly half of the species they studied, a much higher fraction than spot checks like the Christmas bird count had found. "Clearly we didn't see the whole picture," said Joseph A. LaManna, PhD, Tyson postdoctoral researcher in the Department of Biology in Arts & Sciences at Washington University. "It wasn't just the jays that were dying; half the species we studied had significant die-offs."

The data also revealed, however, that some species fared much better than others. Roughly half of the afflicted species managed to rebound within a year or two. Ironically, the resilient ones include the crows and other corvids that were so strongly associated with the disease on its arrival. But a second group of birds, including Swainson's thrush, the purple finch and the tufted titmouse, were not so lucky. Knocked down by the arrival of West Nile virus, they have not rebounded and seem to have suffered long-term population declines. "Many more species of birds than we thought are susceptible to this virus," said T. Luke George, PhD, an ecologist at Colorado State University. "And we also found long-term effects on population growth rates. Prior to this study, we generally thought the West Nile virus had a very short-term effect on bird survival."

The scientists aren't sure why some species fare better than others. Was a common evolutionary history predictive? Or similar habitats or diets? "That's really the question our study opens," said Ryan J. Harrigan, a biologist at the University of California, Los Angeles. "In the end, we're not completely sure why some species recovered from this disease and some did not. That would be the next step forward, addressing that question."

"The deeper story is that without long-term monitoring and detailed data, we miss patterns like these," LaManna said. "Deaths in one area can be easily masked by immigration from other areas, and we wouldn't really notice unless we happened to be looking at the right type of data."

"This year was another big West Nile year," LaManna added. "Most people don't even know that."

More information: Persistent impacts of West Nile virus on North American bird populations, PNAS, www.pnas.org/cgi/doi/10.1073/pnas.1507747112
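The logic of mark-recapture survival estimation can be illustrated with a deliberately simplified Python sketch. The numbers below are invented, and real analyses of banding data of this kind (for example, Cormack-Jolly-Seber models) also estimate detection probability, which this toy version ignores.

    # Toy mark-recapture illustration: apparent survival is the share of
    # banded birds re-encountered in a later season. Invented numbers;
    # real models separate true survival from detection probability.

    def apparent_survival(banded, reencountered):
        return reencountered / banded

    before_wnv = apparent_survival(banded=400, reencountered=120)  # 0.30
    after_wnv = apparent_survival(banded=400, reencountered=80)    # 0.20

    decline = (before_wnv - after_wnv) / before_wnv
    print(f"before: {before_wnv:.2f}  after: {after_wnv:.2f}  decline: {decline:.0%}")

A drop like this at a single station could just be local emigration; seeing it replicated across hundreds of stations, as in the study, is what turns it into evidence of a population-level effect.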


News Article | October 29, 2016
Site: www.prweb.com

Lumeris, a pioneer in population health management solutions headquartered in St. Louis, is pleased to announce that it is a sponsor of Washington University’s ArchHacks in its inaugural year. More than 500 college students from around the country will compete in the hackathon on Nov. 4-6, 2016. Teams of computer programmers and other innovators, through intensive collaboration, will build a HealthTech software product in just 48 hours.

“We’re thrilled that Washington University chose the theme, HealthTech, for its first ArchHacks,” said Ray Wolf, Lumeris’ senior vice president of architecture and innovation. “It’s an industry that is growing and evolving each day. Lumeris’ technology platform has been integral to our success in helping organizations move from fee-for-service to value-based care, so we certainly understand the importance of innovation. We look forward to mentoring the talented teams of students who will help all of us in the healthcare industry address significant challenges and establish superior solutions to drive rapid transformation.”

ArchHacks will offer workshops for students to learn new skills, along with one-on-one time with leaders from the national health and tech communities. “This is a unique opportunity for students to hack healthcare by using their passion for technology, design and building things in collaboration with other students and HealthTech professionals,” said Stephanie Mertz, Washington University computer science student and co-founder of ArchHacks. “We want to make a difference to improve the health and well-being of many populations of people, and what better way to learn than from experts at Lumeris.”

In addition to mentorship, Lumeris will also be providing quadcopter drones as prizes for the ArchHacks team that creates the Best Hack Addressing an Aging Population. ArchHacks, which is free for students, is accepting applications until October 15. Students may visit archhacks.io to apply.

ABOUT ARCHHACKS
ArchHacks is a hackathon hosted at Washington University in St. Louis Nov. 4-6, 2016 for students of all backgrounds with a passion for technology, health, and building things. ArchHacks brings together students from around the country for 48 hours of collaboration, problem solving, and building. We will provide a unique opportunity for students to work with resources and companies they cannot find anywhere else, and our theme, HealthTech, means that the products our participants create will make a real difference. You will have the opportunity to make invaluable connections with corporations, collaborate with friends and, most importantly, develop something that will contribute to the HealthTech community.

ABOUT LUMERIS
Lumeris serves as a long-term operating partner for organizations that are committed to the transition from volume- to value-based care and delivering extraordinary clinical and financial outcomes. Lumeris enables clients to profitably achieve greater results in value-based care arrangements through proven playbooks based on collaboration, transparent data and innovative engagement methodologies. Lumeris offers comprehensive services for managing all types of populations, including launching new Medicare Advantage Health Plans, Commercial and Government Health Plan Optimization, and Multi-Payer, Multi-Population Health Services Organizations (PHSOs) for provider organizations. Currently, Lumeris is engaged with health systems, provider alliances and payers representing tens of millions of lives moving to value-based care.
Lumeris has nearly 800 employees. Lumeris was awarded 2015/2016 Best in KLAS within the Value-Based Care Managed Services category for outstanding efforts to help healthcare professionals deliver better patient care. For 2016 annual enrollment, Essence Healthcare, a Lumeris client, was rated “Excellent” by the Centers for Medicare and Medicaid Services. Essence was Lumeris’ pioneer client and has been leveraging Lumeris for more than a decade to operate its Medicare Advantage plans, which serve more than 62,000 Medicare beneficiaries in various counties throughout Missouri and southern Illinois.


News Article | November 28, 2016
Site: www.rdmag.com

Scientists at Washington University in St. Louis have isolated an enzyme that controls the levels of two plant hormones simultaneously, linking the molecular pathways for growth and defense.

Similar to animals, plants have evolved small molecules called hormones to control key events such as growth, reproduction and responses to infections. Scientists have long known that distinct plant hormones can interact in complex ways, but how they do so has remained mysterious. In a paper published in the Nov. 14 issue of Proceedings of the National Academy of Sciences, the research team of Joseph Jez, professor of biology in Arts & Sciences and a Howard Hughes Medical Institute Professor, reports that the enzyme GH3.5 can control the levels of two plant hormones, auxin and salicylic acid. It is the first enzyme of its kind known to control completely different classes of hormones.

Auxin controls a range of responses in the plant, including cell and tissue growth and normal development. Salicylic acid, on the other hand, helps plants respond to infections, which often take resources away from growth. Plants must tightly control the levels of auxin and salicylic acid to properly grow and react to new threats. "Plants control hormone levels through a combination of making, breaking, modifying and transporting them," said Corey Westfall, a former graduate student who led this work in the Jez lab along with current graduate student Ashley Sherp. By stitching an amino acid to a hormone, GH3.5 takes the hormones out of circulation, reducing their effect in the plant. Although scientists suspected GH3.5 controlled auxin and salicylic acid, this double action had not been demonstrated in plants.

"Our question was really simple," Sherp said. "Can this enzyme actually control multiple hormones? And if that's true in a test tube, what happens back in a plant?" To find out, the researchers induced plants to accumulate large amounts of the protein and then measured their levels of hormones. When GH3.5 was expressed at high levels, the amounts of both auxin and salicylic acid were reduced. Deprived of growth-promoting auxin, the plants stayed small and stunted.

The experiment proved that GH3.5 does regulate distinct classes of hormones, but how does it do this? To better understand how the enzyme could control both auxin and salicylic acid, the scientists crystallized GH3.5 and sent the crystals to the European Synchrotron Radiation Facility in Grenoble, France. There, the particle accelerator fired powerful X-rays into the protein crystals, and the diffraction of the X-rays provided information about the atom-by-atom structure of the enzyme. Westfall assembled these data into a three-dimensional reconstruction of GH3.5, showing it frozen in the act of modifying auxin.

The scientists were expecting to find key differences between GH3.5 and related proteins that would account for its unique ability to modify multiple hormones. To their surprise, the part of the enzyme that binds and modifies hormones looked almost identical to that of related enzymes that can only modify auxin. The surprising similarities between the multi-purpose GH3.5 and its single-use relatives suggest that unrecognized elements of these proteins influence which molecules they can bind and transform. "These surprising results mean there's something going on that we're not seeing in the sequence or the structure of these enzymes," Jez said.
Solving this mystery could tell us more about how enzymes distinguish among similar molecules, a discriminatory ability that is critical for all life, including people as well as plants.


News Article | April 21, 2016
Site: phys.org

The distance between the galactic cosmic rays' point of origin and Earth is limited by the survival of a very rare type of cosmic ray that acts like a tiny clock. The cosmic ray is a radioactive isotope of iron, 60Fe, which has a half-life of 2.6 million years. In that time, half of these iron nuclei decay into other elements. In the 17 years CRIS has been in space, it detected about 300,000 galactic cosmic-ray nuclei of ordinary iron, but just 15 of the radioactive 60Fe.

"Our detection of radioactive cosmic-ray iron nuclei is a smoking gun indicating that there has been a supernova in the last few million years in our neighborhood of the galaxy," said Robert Binns, research professor of physics in Arts & Sciences at Washington University in St. Louis, and lead author on the paper published online in Science on April 21, 2016. "The new data also show the source of galactic cosmic rays is nearby clusters of massive stars where supernova explosions occur every few million years," said Martin Israel, professor of physics at Washington University and a co-author on the paper.

The radioactive iron is believed to be produced in core-collapse supernovae, violent explosions that mark the death of massive stars, which occur primarily in clusters of massive stars called OB associations. There are more than 20 such associations close enough to Earth to be the source of the cosmic rays, including subgroups of the nearby Scorpius-Centaurus association, such as Upper Scorpius (83 stars), Upper Centaurus Lupus (134 stars) and Lower Centaurus Crux (97 stars). Because of their size and proximity, these are the likely sources of the radioactive iron nuclei CRIS detected, the scientists said.

The 60Fe results add to a growing body of evidence that galactic cosmic rays are created and accelerated in OB associations. Earlier CRIS measurements of nickel and cobalt isotopes show there must be a delay of at least 100,000 years between the creation and the acceleration of galactic cosmic-ray nuclei, Binns said. This time lag also means that the nuclei synthesized in a supernova are not accelerated by that supernova but by the shock wave from a second nearby supernova, Israel said, one that occurs quickly enough that a substantial fraction of the 60Fe from the first supernova has not yet decayed. Together, these time constraints mean the second supernova must occur between 100,000 years and a few million years after the first. Clusters of massive stars are one of the few places in the universe where supernovae occur often enough and close enough together to bring this off. "So our observation of 60Fe lends support to the emerging model of cosmic-ray origin in OB associations," Israel said.

Although the supernovae in a nearby OB association that created the 60Fe CRIS observed happened long before people were around to observe suddenly brightening stars (novae), they also may have left traces in Earth's oceans and on the Moon. In 1999, astrophysicists proposed that a supernova explosion in Scorpius might explain the presence of excess radioactive iron in 2.2-million-year-old ocean crust. Two research papers recently published in Nature bolster this case. One research group examined 60Fe deposition worldwide and argued that there might have been a series of supernova explosions, not just one. The other simulated by computer the evolution of the Scorpius-Centaurus association in an attempt to nail down the sources of the 60Fe.
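As a rough check on those time constraints, the surviving fraction of 60Fe follows the standard radioactive-decay law (textbook physics, not a formula quoted from the paper):

\[ \frac{N(t)}{N_0} = \left(\frac{1}{2}\right)^{t/t_{1/2}}, \qquad t_{1/2} = 2.6\ \text{Myr}. \]

If the second supernova's shock wave arrives, say, 1 million years after the first, a fraction \((1/2)^{1/2.6} \approx 0.77\) of the 60Fe remains to be accelerated; after 10 million years only about 7 percent would survive, which is why acceleration must follow synthesis within a few million years for CRIS to see the isotope at all.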
Lunar samples also show elevated levels of 60Fe consistent with supernova debris arriving at the Moon about 2 million years ago. And here, too, there is recent corroboration. A paper just published in Physical Review Letters describes an analysis of nine core samples brought back by the Apollo crews. In fact, you could say there has been a virtual supernova of 60Fe research. More information: "Observation of the 60Fe nucleosynthesis-clock isotope in galactic cosmic rays," Science, DOI: 10.1126/science.aad6004


News Article | March 1, 2017
Site: www.eurekalert.org

Even the most blissful of couples in long-running, exclusive relationships may be fairly clueless when it comes to spotting the ploys their partner uses to avoid dealing with emotional issues, suggests new research from psychologists at Washington University in St. Louis.

"Happier couples see their partners in a more positive light than do less happy couples," said Lameese Eldesouky, lead author of the study and a doctoral student in Psychological and Brain Sciences at Washington University. "They tend to underestimate how often a partner is suppressing emotions and to overestimate a partner's ability to see the bright side of an issue that might otherwise spark negative emotions."

Eldesouky presented the study, titled "Love is Blind, but Not Completely: Emotion Regulation Trait Judgments in Romantic Relationships," on Jan. 20 at the 2017 meeting of the Society for Personality and Social Psychology. Published in the Journal of Personality, the study examines how accurate and biased dating couples are in judging personality characteristics that reflect ways of managing one's emotions. It focuses on two coping mechanisms that can be difficult to spot due to the lack of related visual cues: expressive suppression (stoically hiding one's emotions behind a calm and quiet poker face) and cognitive reappraisal (changing one's perspective to see the silver lining behind a bad situation).

Co-authored by Tammy English, assistant professor of psychology at Washington University, and James Gross, professor of psychology at Stanford University, the study is based on completed questionnaires and interviews with 120 heterosexual couples attending colleges in Northern California. Participants, ranging in age from 18 to 25 years, were recruited as part of a larger study on emotion in close relationships. Each couple had been dating on an exclusive basis for more than six months, with some together as long as four years. In a previous study, English and Gross found that men are more likely than women to use suppression with their partners, and that the ongoing use of emotional suppression can be damaging to the long-term quality of a relationship.

"Suppression is often considered a negative trait while reappraisal is considered a positive trait because of the differential impact these strategies have on emotional well-being and social relationships," English said. "How well you are able to judge someone else's personality depends on your personal skills, your relationship with the person you are judging and the particular trait you are trying to judge," English added. "This study suggests that suppression might be easier to judge than reappraisal because suppression provides more external cues, such as appearing stoic."


News Article | February 15, 2017
Site: www.prweb.com

Comark is pleased to announce the appointment of Jeff Roberts as its new President. In this role, Jeff will provide strategic direction and leadership to the team and drive the vision and strategy for growth within each segment of the business. Prior to his appointment as President of Comark, Jeff served as President of JTECH, an HME company and the largest onsite paging company in the world. His leadership helped JTECH achieve significant growth in global business development and the successful integration of two competitive acquisitions.

“I am very pleased and excited to have this opportunity to work with an outstanding team of people at Comark and to execute the next phase of its growth strategy,” said Mr. Roberts.

“We are excited to welcome someone of Jeff’s caliber to the Comark team. His leadership skills will be a valuable asset in growing the business and promoting improvements in product performance and innovation,” stated Greg Baletsa, Operating Partner at JMC Capital Partners.

Jeff holds both a bachelor’s degree in mechanical engineering and an MBA from Washington University in St. Louis, Missouri.

About Comark
Comark is a world-class provider of smarter automation solutions and IIoT platforms for the industrial and building automation markets. With a proven track record of innovative designs and unparalleled reliability, Comark enables companies to implement automation strategies to improve the performance of their business. Comark is headquartered in Milford, MA. Visit http://www.comarkcorp.com for information on Comark brands and products.


News Article | September 27, 2016
Site: www.biosciencetechnology.com

Researchers at Washington University in St. Louis have made headway on a potential treatment for osteoarthritis, which affects up to 27 million people in the United States, using nanoparticles to quash inflammation and protect cartilage.

This type of arthritis occurs when the protective tissue at the ends of bones, known as cartilage, gradually erodes, and the pain associated with the condition worsens over time. While anti-inflammatory drugs or steroid injections can alleviate some pain, there is no cure for the worn-down cartilage, and the effects of medications do not last long-term.

In the recent study, scientists injected nanoparticles locally into the joints of mice within 24 hours after an injury. The nanoparticles carried a modified peptide bound to a molecule called small interfering RNA (siRNA) that blocks inflammation, decreasing the damage to cartilage. Patients with osteoarthritis have usually experienced some earlier type of injury, which often results in strong inflammation around the joints. The nanoparticles were able to lessen inflammation in the injured joints of mice, and unlike other treatments that are short-lived, the nanoparticles penetrated deep into tissues and stayed in the cartilage cells of the joints for weeks, the team reported.

“I see a lot of patients with osteoarthritis, and there’s really no treatment,” senior author Christine Pham, M.D., associate professor of medicine, said in a university release. “We try to treat their symptoms, but even when we inject steroids into an arthritic joint, the drug only remains for up to a few hours, and then it’s cleared. These nanoparticles remain in the joint longer and help prevent cartilage degeneration.”

The findings, published Sept. 26 in the Proceedings of the National Academy of Sciences, suggest that if nanoparticle injections are given quickly following a joint injury, they could potentially prevent the development of osteoarthritis. Further studies are needed to see whether this method could help people who already have osteoarthritis and have lost a lot of cartilage.

“The inflammatory molecule that we’re targeting not only causes problems after an injury, but it’s also responsible for a great deal of inflammation in advanced cases of osteoarthritis,” Linda Sandell, Ph.D., director of Washington University’s Center for Musculoskeletal Research, said in a statement. “So we think these nanoparticles may be helpful in patients who already have arthritis, and we’re working to develop experiments to test that idea.”


News Article | November 30, 2016
Site: www.prweb.com

Each of these four ‘MenB Strong Moms’ lost a healthy teen child to a horrifying disease known as group B meningococcal disease (MenB). At the time, the disease couldn’t be prevented. Today, thanks to a new vaccine, it can. But many of the kids who should be getting the vaccine still don’t know it exists. This is why they produced the “Meningitis B Shatters Dreams” PSA. They hope to educate young adults and their parents about the availability of the MenB vaccine and encourage vaccination when kids are home over the holidays.

Most kids receive a meningococcal vaccine known as MenACWY, but it does not prevent MenB, which accounts for half of all meningococcal cases in the U.S. among 17- to 23-year-olds. It has also been responsible for several outbreaks on college campuses in the past few years. Although it has been over a year since the CDC announced a recommendation for 16- to 23-year-olds to receive the MenB vaccine, many doctors are still not mentioning it to their patients, and most parents and young adults don’t realize the vaccine exists.

“We all had our kids vaccinated and thought they were protected,” explains Patti Wukovits, RN and director of The Kimberly Coffey Foundation, as she clutches a photo of her daughter. “What we didn’t know was that the vaccine they received didn’t protect them from one of the most common types of meningococcal disease, the one that would kill them: group B. I’m a registered nurse, and I didn’t know. Now I’ve learned that most other medical professionals don’t know either.”

Alicia Stillman, director of The Emily Stillman Foundation, explains, “People need to know that the early symptoms of meningococcal disease are often mistaken for other illnesses. They need to know that the disease can turn deadly in as little as 24 hours. And they need to know that teens are at a high risk of infection. But more importantly, they need to know that kids need both the MenB and the MenACWY vaccines in order to be fully protected against all five of the preventable types of meningococcal disease.”

Patti adds, “I couldn’t protect my daughter. But now a vaccine exists and kids are still not protected. It scares me to think that kids are getting the MenACWY vaccine, but don’t know they need a separate MenB vaccine too. Their parents think they’re fully protected, but they’re not. How will they feel when their child is the next victim? We believe it is our responsibility to warn them. That is what our kids would want us to do.”

“Our kids have brought us together and their message is loud and clear in this PSA,” says Alicia. “We don’t want parents to have to bury their children like we have, and we want kids to take it upon themselves to get protected and ask for the MenB vaccine.”

The “Meningitis B Shatters Dreams” PSA has been created by a special partnership between The Kimberly Coffey Foundation and The Emily Stillman Foundation and is available at https://www.youtube.com/watch?v=UyPxUVzpelw. These are the stories of the children of the ‘MenB Strong Moms’:

Emily Stillman (19) was a sophomore at Kalamazoo College in Michigan, working towards a double major in Psychology and Theatre. Her dream was to be on Saturday Night Live. One night, Emily called home complaining of a headache. It got so bad she went to the hospital, where they treated it as a migraine. By the next morning she was in a coma. Within 30 hours she was brain dead.
Kimberly Coffey (17) was a high school senior from Long Island, New York, who was already enrolled in college with a plan to fulfill her dream of becoming a pediatric nurse. One afternoon Kim complained of fever and body aches. Within hours she developed a purplish blotchy rash all over her body. She was rushed to the hospital, but her heart, kidneys and lungs were already failing and she went into cardiac arrest. She was declared brain dead nine days later and was buried in her prom dress two days before her high school graduation.

Emily Benatar (19) was a freshman at Washington University in St. Louis who excelled in school and had a passion for art. One afternoon she called home to say she had pain in her sternum and was vomiting. A friend took her to the ER, but they released her, telling her to take Tylenol and drink Gatorade. The next day she went to the Student Health Center with a hive-like rash, but not the purplish rash commonly associated with meningococcal disease. They sent her back to her dorm. That evening, within 24 hours of her initial trip to the ER, she became numb and then unresponsive. After twenty-one days in the ICU, she died.

Henry Mackaman (21) was a junior at UW-Madison, majoring in Economics and Creative Writing. His band, Phantom Vibration, had a song on local radio stations, and he wrote a prize-winning play that was performed at the Student Union. He had dreamed of writing a great American novel. One night Henry went to the ER because he felt sick and had a fever of 104 degrees. They sent him home. The next night he returned with slurred speech and numbness on his right side. Six hours later he had a seizure and never regained consciousness. Two days later he was declared brain dead.

For more information and to find us on social media: http://www.ForeverEmily.org http://www.KimberlyCoffeyFoundation.org


News Article | November 8, 2016
Site: news.yahoo.com

Among the postmortems of this most unusual election will be the one by academics and advocates who have spent their careers aiming to put a woman in the White House — and who supported getting Hillary Clinton there. Much of what they’d learned over the years about how to run as and against a woman has been turned upside down this time around. That leaves the question of whether the Trump-Clinton clash was an exception to the rules or a rewriting of them.

“All the years of working toward this goal,” says Marie Wilson, a former head of the Ms. Foundation and the creator of the White House Project, which has trained more than 15,000 women to run for office since 1998, with the ultimate prize being the presidency. “It’s completely different from the race we’d assumed and hoped. And it makes you wonder whether this is a new norm.”

Here’s a look at some of the decades of accumulated “rules” and how they’ve recently been broken. The first: you don’t attack a woman opponent. Remembering how recently that was regarded as an absolute truth makes everyone interviewed for this story laugh.

“Yes, that one used to be ironclad,” says Debbie Walsh, director of the Center for American Women and Politics at Rutgers University, which was founded in 1971, back when women held two seats in the U.S. Senate, 13 in the House of Representatives and not a single one in a governor’s mansion. “Every study, every expert, everything we knew about women’s campaigns was clear that when you run against a woman you don’t attack.”

Ruth Mandel, director of the Eagleton Institute of Politics, also at Rutgers (and founder of the center that Walsh now runs), agrees: “You can’t attack a woman. How can you run as a man attacking a woman? This would hurt your image — it would turn off voters.” Then, mentioning the many times Donald Trump has done exactly that to Clinton, she adds, “I guess that’s not a problem anymore.”

So does this mean all gloves are off for both genders from now on? Not necessarily. While Trump’s attacks on Clinton in general seem to have worked for him, another male candidate was paying the price for similar insults. In the race for Senate in Illinois, Republican incumbent Mark Kirk spoke disparagingly of his Democratic challenger, Rep. Tammy Duckworth, questioning her half-Thai heritage as unpatriotic, even though her ancestors fought with George Washington and she herself is a veteran who lost both legs in battle. He was dinged in the polls. Still, that was an attack seen more as racist than sexist. Which seems to muddy the lesson into something like this: you should be careful attacking a woman unless you are Donald Trump and your brand includes attacking everyone.

Another rule held that voters will give de facto credit to a woman candidate for honesty; that has long been a staple of female campaigning. It was how many of the early successful female candidates positioned themselves, as virtuous outsiders coming to clean up a system filled with corrupt insiders. Not so in this election. Now it is the woman who is seen as the insider and the man who presents himself as the outsider who can fix things and isn’t beholden to outside interest groups or elites. Trump has labeled his opponent “Crooked Hillary” and repeatedly called her a “liar.” It can be argued that he’s made this accusation stick not just to his opponent but to her entire gender. “He’s bundled all women into this untrustworthy category,” says Wilson.
By way of example, she cites the high percentage of women reporters Trump publicly accuses of deceit, and the fact that he’s said all the women who’ve accused him of sexual impropriety are lying. Not only is Clinton not presumed to be more honest because of her gender, experts say, but even when there is statistical evidence — PolitiFact finds that 70 percent of Trump’s statements in this campaign are “mostly false,” “false” or “pants on fire,” compared with 26 percent for Clinton — she still is not accorded a presumption of trust. (It should be noted that he isn’t seen as particularly honest either: a recent ABC poll found 46 percent of likely voters describe him that way, while 38 percent say the same about Clinton. Her supporters point out that this is a result not only of Trump’s recent attacks, but also of Republican attacks on her honesty almost since the day she became first lady.)

Supporters cheer for Donald Trump during a rally on Nov. 7 in Leesburg, Va. (Photo: Chip Somodevilla/Getty Images)

“Somehow the man who has been proven to discriminate against minorities in his business, to not pay his contractors, and who has a fraud trial starting after Election Day for Trump University is seen as the honest straight shooter,” says Kathleen Hall Jamieson, director of the Annenberg Public Policy Center and author of “Beyond the Double Bind,” about overcoming obstacles to women’s leadership. At the same time, she says, “people are shouting ‘lock her up’ when she has been investigated more than anyone in history and there have been no resulting charges.”

Another rule: women have to overcome looking weak and emotional, while men are assumed to be strong and decisive. Again, that was long considered true until it wasn’t. Clinton, Jamieson says, “has more than passed the commander-in-chief test, where even people who aren’t voting for her say she is capable of that role. You would have suspected that would be the main barrier to a woman for president.” But instead it is presumed that she is prepared for the task — and that preparation is, in turn, an unexpected obstacle.

On the other hand, Jamieson argues, Trump has displayed all the behavior that would have been criticized as weak and overly emotional if he were female. Yes, he runs on tough ideas — law and order, building a wall — but at the same time “he fits all the traditional stereotypes that have been used to shut down women,” she says. “He’s thin-skinned, he acts on impulse, he can’t seem to get the details right, he’s scatterbrained and goes off on tangents.” By way of example, she cites Trump’s off-message, often weeklong rants against those he believes have insulted him, his reversal of positions sometimes within days or hours, his denial of having said things that are on video for posterity and his frequent misstatements of fact.

Still another rule: studies and surveys find that women do not believe they are qualified for higher office until they have served at lower levels, and voters seem to expect that of them too. “What’s the first office you are qualified to run for?” asks Laura Liswood, secretary-general of the Council of Women World Leaders, whose membership is 60 women who have led their countries as president or prime minister. “Women say it’s the local school board. Men figure they’ll start with city council, even Congress. Women build their ambition. Men leap to their ambition.” Trump has certainly leaped to his — the first office he’s ever sought is the presidency. But Clinton certainly didn’t start with the school board, either.
She took the historically old-fashioned role of first lady and used it as a springboard in a way no woman had ever done before, straight to statewide office, as senator from New York, and then, of course, secretary of state.

First lady of Arkansas Hillary Rodham Clinton speaks at a conference in 1987 in Little Rock as then-Gov. Bill Clinton looks on. (Photo: AP)

“Hers is hardly the résumé of most female politicians,” Mandel says. But perhaps having to start at the most local levels of government is, she says, “another assumption that should change.”

Yes, or maybe all it means is that there are different rules for a race between Hillary Clinton and Donald Trump. “That’s what I am asked most often lately,” says Erin Vilardi, the founder of VoteRunLead, which was originally part of the White House Project and is now a separate entity training women to run for office. “Is this Hillary, or is this all women? Is this just the reality of this campaign, or of all campaigns by women? Is this just the White House, or all the way down the electoral ladder?”

Certainly this is a campaign like no other. Trump is a singular candidate, breaking the rules not only of how to run against a woman but of how to run at all. He has an ability to get away with what no other candidate can. To wit, when Marco Rubio attempted to get down in the dirt with him over the size of his hands, it was the beginning of the end of Rubio’s candidacy. Similarly, Clinton is arguably one of a kind, as most firsts in any category tend to be. “It took a unique human being to get here,” Walsh says. “She’s a boundary-crossing leader. So you can’t say that what is true for her will be true for other women who will come after her.”

She and other experts interviewed for this article agree that in many ways it would be a good thing if the old stereotypes no longer apply — if women no longer assume the need to start at the bottom, or have to overcome the idea that they are weak or emotional. But this potential silver lining lies within what they see as a dark and worrisome cloud. “Those of us who are schooled in unconscious bias were expecting to spend the election pointing out the sophisticated ways that unseen expectations can work against a woman,” says Gail Evans, who created Working Mother magazine and is a founder of Executive Women for Hillary, a fundraising group. “But this isn’t unconscious anything. This is conscious misogyny. Who would have thought this would be a campaign about women’s bodies, women’s periods, women’s truthfulness?”

Donald Trump speaks as Hillary Clinton listens during a presidential town hall debate at Washington University in St. Louis on Oct. 9. (Photo: Saul Loeb/Pool/Reuters)

Says Wilson: “To hear people talking about hurting her, to hear people talking about killing her? All this misogyny that we like to think is not that deep? It is that deep.”

Too deep, they worry, to disappear with this campaign. So deep that it will, in fact, influence others to come. If Clinton loses, they fear that will have a chilling effect on other campaigns by women. And if she wins, they assume the vitriol of the campaign will make it tougher for a woman to govern. If the old rules are obsolete, or even dented, they wonder, what should the new conventional wisdom be?

“I’d love to see a couple more female self-indulgent, self-described billionaires running for office,” Liswood says.


Liao X.,Rice University | Rong S.,Washington University in St. Louis | Queller D.C.,Washington University in St. Louis
PLoS Biology | Year: 2015

The evolution of sterile worker castes in eusocial insects was a major problem in evolutionary theory until Hamilton developed a method called inclusive fitness. He used it to show that sterile castes could evolve via kin selection, in which a gene for altruistic sterility is favored when the altruism sufficiently benefits relatives carrying the gene. Inclusive fitness theory is well supported empirically and has been applied to many other areas, but a recent paper argued that the general method of inclusive fitness was wrong and advocated an alternative population genetic method. The claim of these authors was bolstered by a new model of the evolution of eusociality with novel conclusions that appeared to overturn some major results from inclusive fitness. Here we report an expanded examination of this kind of model for the evolution of eusociality and show that all three of its apparently novel conclusions are essentially false. Contrary to their claims, genetic relatedness is important and causal, workers are agents that can evolve to be in conflict with the queen, and eusociality is not so difficult to evolve. The misleading conclusions all resulted not from incorrect math but from overgeneralizing from narrow assumptions or parameter values. For example, all of their models implicitly assumed high relatedness, but modifying the model to allow lower relatedness shows that relatedness is essential and causal in the evolution of eusociality. Their modeling strategy, properly applied, actually confirms major insights of inclusive fitness studies of kin selection. This broad agreement of different models shows that social evolution theory, rather than being in turmoil, is supported by multiple theoretical approaches. It also suggests that extensive prior work using inclusive fitness, from microbial interactions to human evolution, should be considered robust unless shown otherwise. © 2015 Liao et al.
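For orientation, the quantitative condition at the heart of this debate is Hamilton's rule, stated here as standard background rather than as a result of this paper: an allele for altruism is favored when

\[ r\,b > c, \]

where \(r\) is the genetic relatedness of the altruist to the beneficiary, \(b\) is the reproductive benefit conferred on the beneficiary, and \(c\) is the reproductive cost to the altruist. For example, a sterile worker aiding a full sister in a singly mated haplodiploid colony (\(r = 0.75\)) is favored whenever the benefit exceeds \(c/0.75\), roughly 1.33 times the cost.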


Boden M.T.,Center for Innovation to Implementation | Thompson R.J.,Washington University in St. Louis
Emotion | Year: 2015

Emotion theories posit that effective emotion regulation depends upon the nuanced information provided by emotional awareness: attending to and understanding one's own emotions. Additionally, the strong associations between facets of emotional awareness and various forms of psychopathology may be partially attributable to associations with emotion regulation. These logically compelling hypotheses are largely uninvestigated, including which facets compose emotional awareness and how they relate to emotion regulation strategies and psychopathology. We used exploratory structural equation modeling of individual difference measures among a large adult sample (n = 919) recruited online. Results distinguished 4 facets of emotional awareness (type clarity, source clarity, involuntary attention to emotion, and voluntary attention to emotion) that were differentially associated with expressive suppression, acceptance of emotions, and cognitive reappraisal. Facets were associated with depression both directly and indirectly via associations with emotion regulation strategies. We discuss implications for theory and research on emotional awareness, emotion regulation, and psychopathology. © 2014 American Psychological Association.


Murayama K.,University of Reading | Miyatsu T.,Washington University in St. Louis | Buchli D.,University of California at Los Angeles | Storm B.C.,University of California at Santa Cruz
Psychological Bulletin | Year: 2014

Retrieving a subset of items can cause the forgetting of other items, a phenomenon referred to as retrieval-induced forgetting. According to some theorists, retrieval-induced forgetting is the consequence of an inhibitory mechanism that acts to reduce the accessibility of nontarget items that interfere with the retrieval of target items. Other theorists argue that inhibition is unnecessary to account for retrieval-induced forgetting, contending instead that the phenomenon can be best explained by noninhibitory mechanisms, such as strength-based competition or blocking. The current article provides the first major meta-analysis of retrieval-induced forgetting, conducted with the primary purpose of quantitatively evaluating the multitude of findings that have been used to contrast these 2 theoretical viewpoints. The results largely supported inhibition accounts but also provided some challenging evidence, with the nature of the results often varying as a function of how retrieval-induced forgetting was assessed. Implications for further research and theory development are discussed. © 2014 American Psychological Association.
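As background for readers new to meta-analysis, the core pooling step is inverse-variance weighting. The Python sketch below uses invented effect sizes and variances and is far simpler than the authors' actual analysis.

    # Minimal fixed-effect meta-analysis: pool study effect sizes by
    # inverse-variance weighting. All numbers are made up for illustration.
    import math

    d = [0.45, 0.30, 0.60, 0.10]   # per-study effect sizes (hypothetical)
    v = [0.02, 0.05, 0.04, 0.03]   # their sampling variances (hypothetical)

    w = [1.0 / vi for vi in v]                       # precision weights
    pooled = sum(wi * di for wi, di in zip(w, d)) / sum(w)
    se = math.sqrt(1.0 / sum(w))                     # SE of the pooled effect
    print(f"pooled effect: {pooled:.3f} (95% CI +/- {1.96 * se:.3f})")

    # Heterogeneity check: Q much larger than k-1 suggests moderators
    # (such as how retrieval-induced forgetting was assessed) matter.
    k = len(d)
    Q = sum(wi * (di - pooled) ** 2 for wi, di in zip(w, d))
    print(f"Q = {Q:.2f} vs. k-1 = {k - 1}")

In practice, moderator effects like those reported here are handled with subgroup analyses or meta-regression rather than a single pooled estimate.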


Fomovsky G.M.,Columbia University | Thomopoulos S.,Washington University in St. Louis | Holmes J.W.,University of Virginia
Journal of Molecular and Cellular Cardiology | Year: 2010

Extracellular matrix (ECM) components play essential roles in development, remodeling, and signaling in the cardiovascular system. They are also important in determining the mechanics of blood vessels, valves, pericardium, and myocardium. The goal of this brief review is to summarize available information regarding the mechanical contributions of ECM in the myocardium. Fibrillar collagen, elastin, and proteoglycans all play crucial mechanical roles in many tissues in the body generally and in the cardiovascular system specifically. The myocardium contains all three components, but their mechanical contributions are relatively poorly understood. Most studies of ECM contributions to myocardial mechanics have focused on collagen, but quantitative prediction of mechanical properties of the myocardium, or changes in those properties with disease, from measured tissue structure is not yet possible. Circumstantial evidence suggests that the mechanics of cardiac elastin and proteoglycans merit further study. Work in other tissues used a combination of correlation, modification or digestion, and mathematical modeling to establish mechanical roles for specific ECM components; this work can provide guidance for new experiments and modeling studies in myocardium. © 2009 Elsevier Ltd.


Hyma K.E.,Washington University in St. Louis | Hyma K.E.,Cornell University | Fay J.C.,Washington University in St. Louis
Molecular Ecology | Year: 2013

Humans have had a significant impact on the distribution and abundance of Saccharomyces cerevisiae through its widespread use in beer, bread and wine production. Yet, similar to other Saccharomyces species, S. cerevisiae has also been isolated from habitats unrelated to fermentations. Strains of S. cerevisiae isolated from grapes, wine must and vineyards worldwide are genetically differentiated from strains isolated from oak-tree bark, exudate and associated soil in North America. However, the causes and consequences of this differentiation have not yet been resolved. Historical differentiation of these two groups may have been influenced by geographic, ecological or human-associated barriers to gene flow. Here, we make use of the relatively recent establishment of vineyards across North America to identify and characterize any active barriers to gene flow between these two groups. We examined S. cerevisiae strains isolated from grapes and oak trees within three North American vineyards and compared them to those isolated from oak trees outside of vineyards. Within vineyards, we found evidence of migration between grapes and oak trees and potential gene flow between the divergent oak-tree and vineyard groups. Yet, we found no vineyard genotypes on oak trees outside of vineyards. In contrast, Saccharomyces paradoxus isolated from the same sources showed population structure characterized by isolation by distance. The apparent absence of ecological or genetic barriers between sympatric vineyard and oak-tree populations of S. cerevisiae implies that vineyards play an important role in the mixing between these two groups. © 2013 John Wiley & Sons Ltd.


Havener R.W.,Cornell University | Liang Y.,Washington University in St. Louis | Brown L.,Cornell University | Yang L.,Washington University in St. Louis | Park J.,Cornell University
Nano Letters | Year: 2014

We report a systematic study of the optical conductivity of twisted bilayer graphene (tBLG) across a large energy range (1.2-5.6 eV) for various twist angles, combined with first-principles calculations. At previously unexplored high energies, our data show signatures of multiple van Hove singularities (vHSs) in the tBLG bands as well as the nonlinearity of the single layer graphene bands and their electron-hole asymmetry. Our data also suggest that excitonic effects play a vital role in the optical spectra of tBLG. Including electron-hole interactions in first-principles calculations is essential to reproduce the shape of the conductivity spectra, and we find evidence of coherent interactions between the states associated with the multiple vHSs in tBLG. © 2014 American Chemical Society.


Tripodi S.J.,Florida State University | Pettus-Davis C.,Washington University in St. Louis
International Journal of Law and Psychiatry | Year: 2013

Women are entering US prisons at nearly double the rate of men and are the fastest growing prison population. The extant literature focuses on the prevalence of the incarceration of women, but few studies emphasize their different trajectories to prison. For example, women prisoners have greater experiences of prior victimization, more reports of mental illness, and higher rates of illicit substance use. The purpose of this study was to understand the prevalence of childhood victimization and its association with adult mental health problems, substance abuse disorders, and further sexual victimization. The research team interviewed a random sample of 125 women prisoners soon to be released from prison to gather information on their childhood physical and sexual victimization, mental health and substance abuse problems as an adult, and sexual victimization in the year preceding incarceration. Results indicate that women prisoners in this sample who were both physically and sexually victimized as children were more likely to be hospitalized as an adult for a psychological or emotional problem. Women who were sexually victimized or both physically and sexually victimized were more likely to attempt suicide. Women who experienced physical victimization as children and women who were both physically and sexually victimized were more likely to have a substance use disorder, and women who were sexually abused as children or both physically and sexually victimized were more likely to be sexually abused in the year preceding prison. This article ends with a discussion of prisons' role in providing treatment for women prisoners and basing this treatment on women's trajectories to prison, which disproportionately include childhood victimization and subsequent mental health and substance use problems. © 2012 Elsevier Ltd.


Contractor A.,Northwestern University | Klyachko V.A.,Washington University in St. Louis | Portera-Cailliau C.,University of California at Los Angeles
Neuron | Year: 2015

Fragile X syndrome (FXS) results from a genetic mutation in a single gene yet produces a phenotypically complex disorder with a range of neurological and psychiatric problems. Efforts to decipher how perturbations in signaling pathways lead to the myriad alterations in synaptic and cellular functions have provided insights into the molecular underpinnings of this disorder. From this large body of data, the theme of circuit hyperexcitability has emerged as a potential explanation for many of the neurological and psychiatric symptoms in FXS. The mechanisms for hyperexcitability range from alterations in the expression or activity of ion channels to changes in neurotransmitters and receptors. Contributions of these processes are often brain region and cell type specific, resulting in complex effects on circuit function that manifest as altered excitability. Here, we review the current state of knowledge of the molecular, synaptic, and circuit-level mechanisms underlying hyperexcitability and their contributions to the FXS phenotypes. Contractor, Klyachko, and Portera-Cailliau review accumulating evidence of changes in channels, neurotransmitters, synapses, and circuits in Fmr1 knockout mice that ultimately cause hyperexcitability. They propose that such alterations in neuronal and circuit excitability could be exploited to treat Fragile X syndrome. © 2015 Elsevier Inc.


Bender C.M.,Washington University in St. Louis | Klevansky S.P.,University of Heidelberg
Physical Review Letters | Year: 2010

An elementary field-theoretic mechanism is proposed that allows one Lagrangian to describe a family of particles having different masses but otherwise similar physical properties. The mechanism relies on the observation that the Dyson-Schwinger equations derived from a Lagrangian can have many different but equally valid solutions. Nonunique solutions to the Dyson-Schwinger equations arise when the functional integral for the Green's functions of the quantum field theory converges in different pairs of Stokes' wedges in complex-field space, and the solutions are physically viable if the pairs of Stokes' wedges are PT symmetric. © 2010 The American Physical Society.


Kaliberov S.A.,Washington University in St. Louis | Buchsbaum D.J.,University of Alabama at Birmingham
Advances in Cancer Research | Year: 2012

Radiation therapy methods have evolved remarkably in recent years which have resulted in more effective local tumor control with negligible toxicity of surrounding normal tissues. However, local recurrence and distant metastasis often occur following radiation therapy mostly due to the development of radioresistance through the deregulation of the cell cycle, apoptosis, and inhibition of DNA damage repair mechanisms. Over the last decade, extensive progress in radiotherapy and gene therapy combinatorial approaches has been achieved to overcome resistance of tumor cells to radiation. In this review, we summarize the results from experimental cancer therapy studies on the combination of radiation therapy and gene therapy. © 2012 Elsevier Inc.


Towler D.A.,Washington University in St. Louis | Demer L.L.,University of California at Los Angeles
Circulation Research | Year: 2011

Vascular calcification increasingly afflicts our aging, dysmetabolic population. Once considered only a passive process of dead and dying cells, data from multiple laboratories worldwide have converged to demonstrate that vascular calcification is a highly regulated form of biomineralization. The goal of this thematic review series is to highlight what is known concerning the biological "players" and "game rules" with respect to vascular mineral metabolism. Armed with this understanding, it is hoped that novel therapeutic strategies can be crafted to prevent and treat vascular calcium accrual, to the benefit of our patients afflicted with arteriosclerotic valvular and vascular diseases. © 2011 American Heart Association, Inc.


Baffy G.,Harvard University | Brunt E.M.,Washington University in St. Louis | Caldwell S.H.,University of Virginia
Journal of Hepatology | Year: 2012

Hepatocellular carcinoma (HCC) is a common cancer worldwide that primarily develops in cirrhosis resulting from chronic infection by hepatitis B virus and hepatitis C virus, alcoholic injury, and to a lesser extent from genetically determined disorders such as hemochromatosis. HCC has recently been linked to non-alcoholic fatty liver disease (NAFLD), the hepatic manifestation of obesity and related metabolic disorders such as diabetes. This association is alarming due to the globally high prevalence of these conditions and may contribute to the rising incidence of HCC witnessed in many industrialized countries. There is also evidence that NAFLD acts synergistically with other risk factors of HCC such as chronic hepatitis C and alcoholic liver injury. Moreover, HCC may complicate non-cirrhotic NAFLD with mild or absent fibrosis, greatly expanding the population potentially at higher risk. Major systemic and liver-specific molecular mechanisms involved include insulin resistance and hyperinsulinemia, increased TNF signaling pathways, and alterations in cellular lipid metabolism. These provide new targets for prevention, early recognition, and effective treatment of HCC associated with NAFLD. Indeed, both metformin and PPAR gamma agonists have been associated with lower risk and improved prognosis of HCC. This review summarizes current evidence as it pertains to the epidemiology, pathogenesis, and prevention of NAFLD-associated HCC. © 2012 European Association for the Study of the Liver. Published by Elsevier B.V. All rights reserved.


Ding W.,Florida State University | Seidel A.,Washington University in St. Louis | Yang K.,Florida State University
Physical Review X | Year: 2012

The logarithmic violations of the area law, i.e., an "area law" with logarithmic correction of the form S ~ L^{d-1} log L, for entanglement entropy are found in both 1D gapless fermionic systems with Fermi points and high-dimensional free fermions. This paper shows that both violations are of the same origin, and that, in the presence of Fermi-liquid interactions, such behavior persists for 2D fermion systems. In this paper, we first consider the entanglement entropy of a toy model, namely, a set of decoupled 1D chains of free spinless fermions, to relate both violations in an intuitive way. We then use multidimensional bosonization to rederive the formula by Gioev and Klich [D. Gioev and I. Klich, Entanglement Entropy of Fermions in Any Dimension and the Widom Conjecture, Phys. Rev. Lett. 96, 100503 (2006).] for free fermions through a low-energy effective Hamiltonian and explicitly show that, in both cases, the logarithmic corrections to the area law share the same origin: the discontinuity at the Fermi surface (points). In the presence of Fermi-liquid (forward-scattering) interactions, the bosonized theory remains quadratic in terms of the original local degrees of freedom, and, after regularizing the theory with a mass term, we are able to calculate the entanglement entropy perturbatively up to second order in powers of the coupling parameter for a special geometry via the replica trick. We show that these interactions do not change the leading scaling behavior for the entanglement entropy of a Fermi liquid. At higher orders, we argue that this should remain true through a scaling analysis.
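For orientation, the cited Gioev-Klich result expresses the leading logarithmic violation for free fermions as a double surface integral over the (rescaled) real-space boundary \partial\Omega and the Fermi surface \partial\Gamma; schematically,

S \sim \frac{L^{d-1} \ln L}{12 \, (2\pi)^{d-1}} \oint_{\partial\Omega} \oint_{\partial\Gamma} |n_x \cdot n_p| \, dS_x \, dS_p,

where n_x and n_p are the unit normals to the two surfaces. The jump in the occupation number across \partial\Gamma is what generates the \ln L enhancement over the area law, the same discontinuity the abstract identifies as the common origin of both violations.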


Seidel A.,Washington University in St. Louis | Yang K.,Florida State University
Physical Review B - Condensed Matter and Materials Physics | Year: 2011

We study the thin-torus limit of the Haldane-Rezayi state. Eight of the ten ground states are found to assume a simple product form in this limit, as is known to be the case for many other quantum Hall trial wave functions. The two remaining states have a somewhat unusual thin-torus limit, where a broken pair of defects forming a singlet is completely delocalized. We derive these limits from the wave functions on the cylinder, and deduce the dominant matrix elements of the thin-torus hollow-core Hamiltonian. We find that there are gapless excitations in the thin-torus limit. This is in agreement with the expectation that local Hamiltonians stabilizing wave functions associated with nonunitary conformal field theories are gapless. We also use the thin-torus analysis to obtain explicit counting formulas for the zero modes of the hollow-core Hamiltonian on the torus, as well as for the parent Hamiltonians of several other paired and related quantum Hall states. © 2011 American Physical Society.


Grant
Agency: GTR | Branch: EPSRC | Program: | Phase: Research Grant | Award Amount: 432.76K | Year: 2014

The connectome, the comprehensive map of neural connections in the human brain, is unique in every individual. Even identical twins differ at the level of neural connectivity. Mapping the human connectome and its variability across individuals is essential for gaining insight into the unknown cognitive aspects of brain function, and for identifying dysfunctional features of the diseased brain. For these reasons, understanding the human brain, its organisation and ultimately its function, is amongst the key scientific challenges of the 21st century. Magnetic resonance imaging (MRI) has revolutionised neuroscience by uniquely allowing both brain anatomy and function to be probed in living humans. Even though MRI allows only macroscopic features to be recovered (at the level of relatively large tissue regions, rather than individual neuronal cells), its non-invasive and in-vivo application has opened tremendous possibilities for brain research. Diffusion-weighted MRI (dMRI) is a particular modality that uniquely allows the mapping of fibre bundles, the underlying connection pathways that mediate information flow between different brain regions. The connection mapping is performed indirectly by processing dMRI images via computational algorithms referred to as tractography. Tractography has already provided fundamental new insights into brain anatomy. The importance of brain connectivity to our understanding of the brain, along with the great potential revealed by tractography algorithms, has led to large initiatives on both sides of the Atlantic. These utilise dMRI to collect state-of-the-art datasets of the healthy adult and the developing brain and map the structural connectome through tractography. They include the $30M NIH Human Connectome Project, the 15M-euro ERC Developing Human Connectome Project, and the £30M UK-funded Biobank Imaging. However, without state-of-the-art analysis methods, and new ways of analysing dMRI data, researchers will fail to get the most out of this vast wealth of upcoming data. In this project, we propose new frameworks for tractography methods centred on neuroanatomy. We particularly focus on problems arising from ambiguous mapping of complex geometries (which are very common in the brain) to the dMRI measurements. These pose significant limits to the accuracy of existing approaches. We propose wholesale changes through computational and algorithmic solutions that will allow connections to be measured in-vivo with unprecedented detail, whole-brain organisation to be studied at a much finer scale, and anatomical features (invisible to existing techniques) to be revealed. These advances will open new possibilities for neuroanatomical studies, but also set the foundations for new basic research in MRI processing and connectivity mapping. We will illustrate their potential using compelling demonstrator applications from basic and clinical neuroscience, including the assessment of benefits from using the new technology in assisting neurosurgical planning.
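As a concrete illustration of what a tractography algorithm does, here is a minimal Python sketch of deterministic streamline tracking: starting from a seed point, it repeatedly steps along the local principal diffusion direction. The input arrays and thresholds are hypothetical stand-ins, and production methods (including the probabilistic algorithms used on Human Connectome Project data) are considerably more sophisticated.

import numpy as np

def track_streamline(seed, directions, fa=None, step=0.5, max_steps=1000, fa_thresh=0.2):
    """Follow the principal diffusion direction from a seed point.

    directions: (X, Y, Z, 3) array of unit vectors, one per voxel.
    fa: optional (X, Y, Z) fractional-anisotropy map used as a stopping mask.
    """
    point = np.asarray(seed, dtype=float)
    path = [point.copy()]
    prev = None
    for _ in range(max_steps):
        idx = tuple(np.round(point).astype(int))
        if any(i < 0 or i >= s for i, s in zip(idx, directions.shape[:3])):
            break  # left the imaging volume
        if fa is not None and fa[idx] < fa_thresh:
            break  # entered low-anisotropy tissue; stop tracking
        d = directions[idx]
        if prev is not None and np.dot(d, prev) < 0:
            d = -d  # keep a consistent orientation along the streamline
        point = point + step * d
        path.append(point.copy())
        prev = d
    return np.array(path)

The sketch also shows exactly where ambiguity creeps in: in voxels where fibre bundles cross or kiss, a single principal direction is a poor summary of the underlying anatomy, which is the class of problem this proposal targets.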


News Article | October 23, 2015
Site: phys.org

But rehydrating is a dangerous process, one that can kill the pollen grain before it can fertilize the egg if not properly controlled. New research from the lab of Elizabeth Haswell, PhD, associate professor of biology in Arts & Sciences at Washington University in St. Louis, published Oct. 23, 2015 in the journal Science shows how pollen survives the reanimation process. A specialized protein with ancient origins helps the hydrating pollen grain relieve excessive pressure and survive the stressful transition. Too little—or too much—of this protein impairs pollen's ability to fertilize the female egg, showing that the protein is a crucial part of reproductive success. Like all living things, plants must respond to forces in their environment in order to adapt and thrive. Sensing gravity, their roots grow down instead of up. Feeling strong winds, plants grow shorter and stockier. And sensing that the membranes that enclose their cells are stretched, they open pores to reduce potentially damaging pressure. The pressure sensor and safety valve are combined in a single protein, named MSL8, which is a mechanosensitive ion channel. An ion channel is a small pore in the cell membrane that allows specific ions (charged atoms) to enter or leave the cell. It is not continuously open, however, but instead is gated, opening and closing in response to membrane stretch. For years, scientists have known that bacteria use stretch-activated channels to relieve excessive internal pressure. When too much water rushes into the cell and stretches the membrane, like an inflating balloon, these channels open to keep the cell from bursting, acting as a kind of safety valve. Researchers found evolutionary cousins of these proteins in plants more than ten years ago, but it wasn't clear just what they were doing. Plants and bacteria diverged billions of years ago—so what links the two still? In many ways, a mature pollen grain resembles a bacterium: a single cell, all on its own, without support from its mother plant. So when an undergraduate in Haswell's lab discovered that dry pollen missing MSL8 died when it took up water too quickly, Haswell realized that despite the evolutionary distance between them, pollen and bacteria both used stretch-activated channels as safety valves. "The bacterial channel protects the bacteria from random environmental stress," Haswell said. "In the pollen a related channel is also protecting the cell, but from stresses it must withstand in order to reproduce." After rehydrating, reanimated pollen grows a long tube to carry sperm cells to the waiting egg cell, something with no parallel in bacteria. Pollen missing MSL8 readily germinated this pollen tube—even better than pollen with the channel—but the tubes went on to burst, unable to control the pressure that powered their growth. The upshot was that pollen without MSL8 didn't fertilize eggs as well. Eric Hamilton, a fourth-year PhD candidate in the Plant and Microbial Biosciences program and the lead author of the paper, was surprised to find it was also difficult to propagate plants that produced high levels of MSL8. Investigating, he discovered that their pollen had a hard time germinating its pollen tube. Pollen with excess MSL8 couldn't build up the pressure required to bust through the tough pollen cell wall. Unable to drive a pollen tube to an egg, it was infertile. So it's a Goldilocks situation: too little MSL8 and pollen bursts; too much, and it can't power the growth required to reach the egg. 
Although pollen protects itself during hydration much as bacteria do, MSL8's role in pollen tube growth shows that plants have adapted this ancient channel to their unique needs. "This study illustrates how important mechanical signals are in biology," Haswell said. "They are not just stress signals from the environment, but also signals that are part of normal developmental processes. Mechanotransduction is important to every aspect of an organism's life." More information: "Mechanosensitive channel MSL8 regulates osmotic forces during pollen hydration and germination," by E.S. Hamilton et al. DOI: 10.1126/science.aac6014
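As a rough, back-of-the-envelope illustration of the forces at play (not a calculation from the paper), the membrane tension that gates a stretch-activated channel in a roughly spherical cell can be estimated from the Young-Laplace relation, T = P * r / 2. The numbers below are hypothetical.

def membrane_tension(pressure_pa, radius_m):
    """Young-Laplace tension (N/m) in the wall of a thin-walled sphere."""
    return pressure_pa * radius_m / 2.0

# Hypothetical values: 0.2 MPa of turgor pressure in a 25-micron-radius grain
print(membrane_tension(0.2e6, 25e-6))  # 2.5 N/m

Tensions of this order are orders of magnitude beyond what a bare lipid bilayer can sustain, which is why the rigid pollen wall bears most of the load and why a pressure-relief valve like MSL8 matters during the vulnerable moments of rehydration and tube growth.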


In a study released yesterday in Proceedings of the National Academy of Sciences (PNAS), scientists at the University of New Mexico use biophysical models of thermoregulation to reveal multiple ways birds and mammals adapt to a wide range of temperatures. The Scholander-Irving model illustrates how warm-blooded birds and mammals maintain body temperature by balancing the rate of metabolic heat production with the rate of heat lost to the environment. Body size has been shown to affect both rates and, as a result, influences an organism's thermal limits: big species are generally able to tolerate colder temperatures than smaller species, and vice versa. This has been used to explain Bergmann's rule, the geographic pattern of increasing size with decreasing temperature that is seen in some groups of animals. However, after looking at the distribution of body sizes across temperatures on Earth, the scientists saw that birds and mammals of nearly every size live basically everywhere. Examples include tiny chickadees that survive cold Alaskan winters and elephants that live in some of the hottest parts of Africa. Clearly size isn't everything, the scientists hypothesized. The researchers extended the Scholander-Irving model to understand how species adapt to temperature without changing size. The research includes three graduate students from UNM: Trevor Fristoe, now a postdoc at Washington University in St. Louis, Mo., Robbie Burger, now a postdoc at the University of North Carolina, and current UNM graduate student Meghan Balk, along with UNM Distinguished Professor James Brown. "We were interested in understanding ways other than body size that species can adapt their physiology and morphology in order to deal with environmental temperatures," Fristoe said. "So we developed a method of measuring adaptation to the thermal environment independent of body size. We incorporated changes in both the rate of heat production via a species' metabolism as well as thermal conductance, the loss of bodily heat to the environment." Thermal conductance could be affected by changes in insulation, like developing thick fur, or changes in body proportion, like big ears or long legs that can help to dissipate heat. The scientists thought that if these types of adaptations are important, then a measure of their mass-independent adaptation should correlate with the temperatures that species experience in the wild. To test this, they collaborated with Imran Khaliq of Ghazi University in Pakistan and Christian Hof of the Senckenberg Biodiversity and Climate Research Centre in Frankfurt, Germany, who compiled data on thermal physiology for hundreds of species of birds and mammals. "Our ideas build on the Scholander-Irving model of heat transfer, which has been around for over 60 years," said Fristoe, who is the lead author of the study. "However, it has only become recently possible to test these types of questions at such a large scale because of the growing availability of physiological data." Comparing physiological and environmental temperature data for 211 bird and 178 mammal species, the scientists demonstrated that birds and mammals have adapted to geographic variation in environmental temperature by concerted changes in both metabolic heat production and thermal conductance. Fristoe and colleagues found that species combined these traits in a number of ways.
"It was possible to adapt to cold environments, for example, by either increasing metabolic heat production, decreasing thermal conductance, or both - the interaction between the two is what really mattered," Fristoe said. "Our study extends on a classic idea in thermal physiology in order to understand adaptations to temperatures across a global scale that goes beyond body size." "Unraveling these various avenues of adaptation to thermal environments has important implications for understanding how species respond to past, present and future climate change," he added.


News Article | October 29, 2016
Site: www.prweb.com

According to the Multiple Sclerosis Foundation, more than 400,000 people in the United States and 2.5 million people around the world have multiple sclerosis (MS). An MS patient and medical professional, Barry Farr, MD, has unique insight into the complex nature of this disease and its debilitating effects. He shares these insights in his new book “Multiple Sclerosis: Coping with Complications” (published by Archway Publishing). Farr writes from his extensive experience as a physician and patient who has battled MS for 24 years. Unlike most other MS books, his work aims to help patients deal with the reality of chronic complications of this disease and shares new strategies. Farr recommends it to patients, their families and their caregivers. Healthcare workers would also find the book interesting and helpful because it includes many new observations about the disease and many new strategies for preventing and controlling its complications. “Multiple Sclerosis: Coping with Complications” is an incisive and innovative addition to the existing literature on MS. It has been recommended by several medical professionals. The following are some reviews: “Every person with multiple sclerosis should read this book.” –Jack M. Gwaltney, Jr., MD “This book should help all patients with MS and many with similar problems due to other conditions (e.g., childbirth causes more cases of chronic rectal sphincter laxity than MS does).” –Robert J. Sherertz, MD “Multiple Sclerosis: Coping with Complications” By Barry Farr, MD Hardcover | 6 x 9in | 450 pages | ISBN 9781480829237 Softcover | 6 x 9in | 450 pages | ISBN 9781480829220 E-Book | 450 pages | ISBN 9781480829244 Available at Amazon and Barnes & Noble About the Author Barry Farr, MD received his medical degree from Washington University in St. Louis and a Master of Science degree in Epidemiology from the London School of Hygiene and Tropical Medicine. He trained in internal medicine and infectious diseases at the University of Virginia. He spent the rest of his career on the faculty of the University of Virginia School of Medicine, where he retired as the William S. Jordan Jr., Professor of Medicine and Epidemiology at the age of 52 due to physical disability from multiple sclerosis. He co-authored 167 medical publications, 137 scientific abstracts for national scientific meetings and co-edited two books. Simon & Schuster, a company with nearly ninety years of publishing experience, has teamed up with Author Solutions, LLC, the leading self-publishing company worldwide, to create Archway Publishing. With unique resources to support books of all kinds, Archway Publishing offers a specialized approach to help every author reach his or her desired audience. For more information, visit archwaypublishing.com or call 888-242-5904.


NASA's New Horizons spacecraft has sent new images to Earth from Pluto that reveal new details about the dwarf planet's icy terrain. New Horizons combined data from two instruments: the Long Range Reconnaissance Imager (LORRI), which took pictures from a range of about 31,000 miles on July 14, 2015, and the Ralph/Multispectral Visible Imaging Camera (MVIC), which added enhanced color approximately 20 minutes after the photos were taken, from a range of 21,000 miles and at a resolution of about 2,100 feet per pixel. The result is a composite image, 160 miles across, of Pluto's Viking Terra area. The photo shows bright methane ices located on many of the rims of the dwarf planet's craters. The image also shows a collection of small, red, soot-like particles called tholins, which are created from reactions between methane and nitrogen. The viewer can see where this red material appears thick; scientists believe the particles are riding along with the ice that is flowing underneath. LORRI also transmitted back a high-resolution image on Dec. 24 that shows the icy plain known as Sputnik Planum, which forms the left side of the dwarf planet's "heart." These images were also taken during the July 14 flyby, when New Horizons was at its closest approach to the planet. The photos were taken at resolutions of about 250 to 280 feet per pixel and show off the planet's icy plains. The image focuses on a strip, 50 miles wide and more than 400 miles long, covering the northwestern shoreline of Sputnik Planum and its surrounding plains. The surface of this area shows changes in composition; scientists believe the darker blocks are probably icebergs floating in denser solid nitrogen. The high-res image also reveals an "X," which scientists think marks a site where four convection cells meet. They believe the pattern of cells stems from thermal convection of the nitrogen-dominated ices, and that a reservoir could be located somewhere deep within. "This part of Pluto is acting like a lava lamp," William McKinnon, deputy lead of the New Horizons Geology, Geophysics and Imaging team from Washington University in St. Louis, said in a press release, "if you can imagine a lava lamp as wide as, and even deeper than, the Hudson Bay." NASA recently released other mind-bending photos of Pluto that were arranged using the infrared Linear Etalon Imaging Spectral Array (LEISA). The results are psychedelic renderings that make any space enthusiast want to take a trip to the dwarf planet.


News Article | December 8, 2016
Site: globenewswire.com

ITASCA, Ill., Dec. 08, 2016 (GLOBE NEWSWIRE) -- First Midwest Bank (the “Bank”) today announced that it has hired four veteran bankers to further the Bank’s strategic expansion of its existing healthcare business. In making the announcement, Mark G. Sander, President of First Midwest Bank said, “We are pleased to have Ken Sinha, Michael Mason, Don Woods and Gerri-Ann Bagdonas join us to expand our already successful and growing Midwest healthcare group. With extensive client relationships, these bankers increase our healthcare lending expertise and capacity while providing us the opportunity to broaden our geographic reach.” The four newly hired bankers, who will be based in a new loan production office in Cleveland, Ohio, have provided credit and non-credit services to numerous healthcare providers as well as senior living organizations in a multi-state region and, as such, provide enhanced market opportunities to the Bank. Healthcare is among the largest and fastest growing industries in the country, and the Bank is excited to build its presence in this important sector. The pending acquisition of Standard Bank & Trust Company represents another step in this effort, as First Midwest looks to further leverage Standard Bank’s successful commercial team servicing medical and dental practitioners. Ken Sinha has more than 30 years of commercial banking experience and joins the Bank as a Senior Vice President in the healthcare group, leading the new office. Active in lending to both non-profit and for-profit healthcare providers throughout the broader Midwest region, Sinha maintains relationships with many large national providers and multi-site operators. Sinha holds a Bachelor of Business Administration degree in economics from Denison University in Ohio. Mike Mason, with more than 20 years of experience, has been named a Senior Vice President. He will have primary responsibility for clients in Ohio and Indiana as well as national providers. Mason holds a Bachelor of Business Administration degree in finance and real estate from the University of Cincinnati and has a Master of Business Administration degree in finance from Washington University in St. Louis. Don Woods also joins the Bank as a Senior Vice President. He has over 30 years of commercial banking experience, including ten years at RBS Citizens in C&I lending. He will focus his efforts in the Michigan, Ohio, Kentucky, and Tennessee markets. Woods holds a Bachelor of Business Administration degree in economics from the University of Dayton. Gerri-Ann Bagdonas has been named a Vice President. With over 20 years of commercial banking experience, she will continue to concentrate her efforts in the markets of Pennsylvania, Ohio, West Virginia and New York. Bagdonas holds a Bachelor of Business Administration degree in management from Lake Erie College in Ohio. First Midwest Bank is a wholly-owned subsidiary of First Midwest Bancorp, Inc. (NASDAQ:FMBI), which is a relationship-based financial institution and, by assets, one of the largest independent publicly traded bank holding companies headquartered in the Midwest. First Midwest Bank and other affiliates provide a full range of commercial, leasing, retail, wealth management, trust and private banking products and services through over 110 locations in metropolitan Chicago, northwest Indiana, central and western Illinois, and eastern Iowa. First Midwest’s website is www.firstmidwest.com.


News Article | January 11, 2016
Site: phys.org

Transmitted to Earth on Dec. 24, this image from the Long Range Reconnaissance Imager (LORRI) extends New Horizons' highest-resolution views of Pluto to the very center of Sputnik Planum, the informally named icy plain that forms the left side of Pluto's "heart" feature. Sputnik Planum is at a lower elevation than most of the surrounding area by a couple of miles, but is not completely flat. Its surface is separated into cells or polygons 10 to 25 miles (16 to 40 kilometers) wide, and when viewed at low sun angles (with visible shadows), the cells are seen to have slightly raised centers and ridged margins, with about 100 yards (100 meters) of overall height variation. Mission scientists believe the pattern of the cells stems from the slow thermal convection of the nitrogen-dominated ices that fill Sputnik Planum. A reservoir that's likely several miles or kilometers deep in some places, the solid nitrogen is warmed at depth by Pluto's modest internal heat, becomes buoyant and rises up in great blobs, and then cools off and sinks again to renew the cycle. "This part of Pluto is acting like a lava lamp," said William McKinnon, deputy lead of the New Horizons Geology, Geophysics and Imaging team, from Washington University in St. Louis, "if you can imagine a lava lamp as wide as, and even deeper than, the Hudson Bay." Computer models by the New Horizons team show that these blobs of overturning solid nitrogen can slowly evolve and merge over millions of years. The ridged margins, which mark where cooled nitrogen ice sinks back down, can be pinched off and abandoned. The "X" feature is likely one of these—a former quadruple junction where four convection cells meet. Numerous, active triple junctions can be seen elsewhere in the LORRI mosaic.


News Article | December 21, 2016
Site: www.chromatographytechniques.com

Chromium is an odorless, tasteless metallic element. One form, chromium-3, is essential to human health, is found in many vegetables, fruits, meats and grains, and is often included in multi-vitamins. Its cancer-causing cousin, chromium-6, made infamous by the California contamination case and the Hollywood movie about Erin Brockovich, occurs naturally but is also produced in large quantities by industry and can contaminate both soil and groundwater. An engineer at Washington University in St. Louis has found a new way to convert the dangerous chromium-6 into common chromium-3 in drinking water, making it safer for human consumption. “The health effects are quite well-known. It’s very potent as an inhaled contaminant, but in drinking water chromium-6 definitely has a negative impact on human health,” said Daniel Giammar, the Walter E. Browne Professor of Environmental Engineering at the School of Engineering & Applied Science. Scientists have previously converted chromium-6 to chromium-3 in a chemical process using iron. During the course of the new research, recently published in the journal Environmental Science & Technology, Giammar and his team took a novel approach, using electricity to do the job. “Electrocoagulation is the particular approach we used to introduce iron into the water,” Giammar said. “Typically, you would use an iron salt and physically add a dose to the water. Electrocoagulation uses two pieces of iron metal in the water, you apply a voltage between them, and that is how you dose iron into the water and convert the chromium-6.” Electrocoagulation systems are widely available, and Giammar finds that using electricity, as opposed to chemical alteration, is an easier, more precise and more scalable process. “It allows you to tailor your dose in a very easy way,” Giammar said. “Electronic controls can be easier than chemical feeding controls. It also allows it to be more applicable for remote operations, because you don’t have to have a source of chemicals. You just use the same pieces of iron, and you can treat the water for a long time.” Giammar’s team previously used the electrocoagulation approach to remove arsenic from drinking water; this is the first time it has been done to convert chromium in drinking water into a safer form. The next step: Researchers hope to use the same technique with selenium, a metal that’s notoriously difficult to remove from water.
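For a sense of how electrocoagulation lets operators "tailor the dose," the iron released from a sacrificial anode is governed by Faraday's law of electrolysis, so the dose is set directly by current and run time. A minimal sketch with hypothetical values, assuming the anode dissolves as Fe(II); this is illustrative chemistry, not a calculation from Giammar's paper.

FARADAY = 96485.0  # Faraday constant, C/mol
M_FE = 55.845      # molar mass of iron, g/mol

def iron_dose_mg_per_l(current_a, time_s, volume_l, z=2):
    """Iron dosed into the water (mg/L) by Faraday's law: m = I*t*M/(z*F).

    z = 2 assumes the sacrificial anode dissolves as Fe(II).
    """
    grams = current_a * time_s * M_FE / (z * FARADAY)
    return 1000.0 * grams / volume_l

# Hypothetical operating point: 0.5 A for 60 s into 10 L of water
print(round(iron_dose_mg_per_l(0.5, 60.0, 10.0), 2))  # ~0.87 mg/L

This linearity is what makes electronic control straightforward: doubling the current or the run time doubles the iron dose, with no chemical feed system to calibrate.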


News Article | December 21, 2016
Site: www.rdmag.com

A new way to convert the dangerous chromium-6 into the benign chromium-3 in drinking water has emerged. Daniel Giammar, the Walter E. Browne Professor of Environmental Engineering at the School of Engineering & Applied Science at Washington University in St. Louis, has found a new way to convert the carcinogenic form of chromium to the safer form by using a process called electrocoagulation. “Electrocoagulation is the particular approach we used to introduce iron into the water,” Giammar said in a statement. “Typically, you would use an iron salt and physically add a dose to the water. Electrocoagulation uses two pieces of iron metal in the water, you apply a voltage between them and that is how you dose iron into the water and convert the chromium-6.” Chromium is an odorless, tasteless metallic element that is essential to human health when found in the form of chromium-3. Chromium-3 is found in many vegetables, fruits, meats and grains and is often included in multi-vitamins. However, in the chromium-6 form it is known to cause cancer and can contaminate both soil and groundwater. “The health effects are quite well-known,” Giammar said. “It’s very potent as an inhaled contaminant but in drinking water chromium-6 definitely has a negative impact on human health.” Scientists have previously converted chromium-6 to chromium-3 using a chemical process involving iron. According to Giammar, electrocoagulation systems are widely available, and using electricity as opposed to chemical alteration is an easier, more precise and scalable process. “It allows you to tailor your dose in a very easy way,” he said. “Electronic controls can be easier than chemical feeding controls. It also allows it to be more applicable for remote operations because you don’t have to have a source of chemicals,” he added. “You just use the same pieces of iron, and you can treat the water for a long time.” Giammar and a team of scientists previously used the electrocoagulation approach to remove arsenic from drinking water. Chromium-6, or hexavalent chromium, gained fame when environmental activist Erin Brockovich successfully sued the Pacific Gas and Electric Co. of California over contamination in 1993. The case settled in 1996 for $333 million, and the story became the focus of a major motion picture in 2000, starring Julia Roberts. In September, the advocacy group Environmental Working Group released a report finding that 200 million Americans across all 50 states have chromium-6 detected in their water, including 138 water systems in New Jersey. However, both New Jersey government officials and other environmental advocates said the readings in New Jersey showed that the levels of chromium-6 were so low that the public is not currently at risk.


Cobanera E.,Indiana University Bloomington | Ortiz G.,Indiana University Bloomington | Nussinov Z.,Washington University in St. Louis
Physical Review Letters | Year: 2010

We show how classical and quantum dualities, as well as duality relations that appear only in a sector of certain theories (emergent dualities), can be unveiled, and systematically established. Our method relies on the use of morphisms of the bond algebra of a quantum Hamiltonian. Dualities are characterized as unitary mappings implementing such morphisms, whose even powers become symmetries of the quantum problem. Dual variables, which have been guessed in the past, can be derived in our formalism. We obtain new self-dualities for four-dimensional Abelian gauge field theories. © 2010 The American Physical Society.


Rebmann T.,Washington University in St. Louis | Greene L.R.,Rochester General Health System
American Journal of Infection Control | Year: 2010

The Association for Professionals in Infection Control and Epidemiology (APIC) began publishing their series of Elimination Guides in 2007. Since then, 9 Elimination Guides have been developed that cover a range of important infection prevention issues, including the prevention of catheter-related bloodstream infections, ventilator-associated pneumonia, and catheter-associated urinary tract infections (CAUTIs), as well as mediastinitis surgical site surveillance. Multidrug-resistant organisms, including methicillin-resistant Staphylococcus aureus, Clostridium difficile, and multidrug-resistant Acinetobacter baumannii, also have been the focus of APIC Elimination Guides. The content of each of these Elimination Guides will be summarized in a series of upcoming Brief Reports published in The Journal. This article provides an executive summary of the APIC Elimination Guide for CAUTIs. Infection preventionists are encouraged to obtain the original, full-length APIC Elimination Guide for more thorough coverage of CAUTI prevention. Copyright © 2010 by the Association for Professionals in Infection Control and Epidemiology, Inc.


Bonaccorso A.,National Institute of Nuclear Physics, Italy | Charity R.J.,Washington University in St. Louis
Physical Review C - Nuclear Physics | Year: 2014

The optical-model potential for the n + 9Be reaction is obtained by two methods. The first method is a modification and generalization of previous work [Bonaccorso and Bertsch, Phys. Rev. C 63, 044604 (2001)] and the second is a dispersive-optical-model fit. The two potentials, as well as quantities derived from the S matrices used to calculate neutron knockout cross sections, are compared. © 2014 American Physical Society.


Chambers J.R.,Washington University in St. Louis | Swan L.K.,University of Florida | Heesacker M.,University of Florida
Psychological Science | Year: 2014

Three studies examined Americans' perceptions of incomes and income inequality using a variety of criterion measures. Contrary to recent findings indicating that Americans underestimate wealth inequality, we found that Americans not only overestimated the rise of income inequality over time, but also underestimated average incomes. Thus, economic conditions in America are more favorable than people seem to realize. Furthermore, ideological differences emerged in two of these studies, such that political liberals overestimated the rise of inequality more than political conservatives. Implications of these findings for public policy debates and ideological disagreements are discussed. © The Author(s) 2013.


Gao H.,Harbin Institute of Technology | Fei Z.,Washington University in St. Louis | Lam J.,University of Hong Kong | Du B.,University of Hong Kong
IEEE Transactions on Automatic Control | Year: 2011

This technical note studies the problem of exponential estimates for Markovian jump systems with mode-dependent interval time-varying delays. A novel Lyapunov-Krasovskii functional (LKF) is constructed with the idea of delay partitioning, and a less conservative exponential estimate criterion is obtained based on the new LKF. Illustrative examples are provided to show the effectiveness of the proposed results. © 2010 IEEE.
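For orientation, delay-partitioning constructions typically augment a quadratic Lyapunov function with one integral term per sub-interval of the delay. A generic form (a standard sketch, not necessarily the exact functional of this technical note) is

V(x_t) = x^T(t) P x(t) + \sum_{i=1}^{m} \int_{t - ih/m}^{t - (i-1)h/m} x^T(s) Q_i x(s) \, ds + \int_{-h}^{0} \int_{t+\theta}^{t} \dot{x}^T(s) R \dot{x}(s) \, ds \, d\theta,

where the delay range [0, h] is split into m segments with separate weights Q_i > 0. Finer partitions generally yield less conservative stability criteria at the price of larger linear matrix inequalities.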


Morrow L.E.,Creighton University | Kollef M.H.,Washington University in St. Louis | Casale T.B.,Creighton University
American Journal of Respiratory and Critical Care Medicine | Year: 2010

Rationale: Enteral administration of probiotics may modify the gastrointestinal environment in a manner that preferentially favors the growth of minimally virulent species. It is unknown whether probiotic modification of the upper aerodigestive flora can reduce nosocomial infections. Objectives: To determine whether oropharyngeal and gastric administration of Lactobacillus rhamnosus GG can reduce the incidence of ventilator-associated pneumonia (VAP). Methods: We performed a prospective, randomized, double-blind, placebo-controlled trial of 146 mechanically ventilated patients at high risk of developing VAP. Patients were randomly assigned to receive enteral probiotics (n = 68) or an inert inulin-based placebo (n = 70) twice a day in addition to routine care. Measurements and Main Results: Patients treated with Lactobacillus were significantly less likely to develop microbiologically confirmed VAP than patients treated with placebo (40.0% with placebo vs. 19.1% with probiotics; P = 0.007). Patients treated with probiotics also had significantly less Clostridium difficile-associated diarrhea than patients treated with placebo (18.6% with placebo vs. 5.8% with probiotics; P = 0.02), although the duration of diarrhea per episode did not differ between groups (13.2 ± 7.4 vs. 9.8 ± 4.9 d; P = 0.39). Patients treated with probiotics had fewer days of antibiotics prescribed for VAP (8.6 ± 10.3 d with placebo vs. 5.6 ± 7.8 d with probiotics; P = 0.05) and for C. difficile-associated diarrhea (2.1 ± 4.8 d with placebo vs. 0.5 ± 2.3 d with probiotics; P = 0.02). No adverse events related to probiotic administration were identified. Conclusions: These pilot data suggest that L. rhamnosus GG is safe and efficacious in preventing VAP in a select, high-risk ICU population. Clinical trial registered with www.clinicaltrials.gov (NCT00613795).


Chase J.M.,Biodiversity Synthesis Laboratory | Knight T.M.,Washington University in St. Louis
Ecology Letters | Year: 2013

There is little consensus about how natural (e.g. productivity, disturbance) and anthropogenic (e.g. invasive species, habitat destruction) ecological drivers influence biodiversity. Here, we show that when sampling is standardised by area (species density) or by individuals (rarefied species richness), the measured effect sizes depend critically on the spatial grain and extent of sampling, as well as on the size of the species pool. This compromises comparisons of effect sizes within studies using standard statistics, as well as among studies using meta-analysis. To derive an unambiguous effect size, we advocate that comparisons be made on a scale-independent metric, such as Hurlbert's Probability of Interspecific Encounter. Analyses of this metric can be used to disentangle the relative influence of changes in the absolute and relative abundances of individuals, as well as their intraspecific aggregations, in driving differences in biodiversity among communities. This and related approaches are necessary to achieve generality in understanding how biodiversity responds to ecological drivers and will necessitate a change in the way many ecologists collect and analyse their data. © 2013 John Wiley & Sons Ltd/CNRS.
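Hurlbert's Probability of Interspecific Encounter, advocated above, has a simple closed form: PIE = (N / (N - 1)) * (1 - sum of p_i^2), where p_i are the species' relative abundances and N the total number of individuals. A short sketch of the standard formula (not code from the paper):

def hurlbert_pie(abundances):
    """Hurlbert's (1971) Probability of Interspecific Encounter.

    The chance that two individuals drawn at random (without replacement)
    belong to different species.
    """
    n = sum(abundances)
    if n < 2:
        raise ValueError("need at least two individuals")
    simpson = sum((a / n) ** 2 for a in abundances)
    return (n / (n - 1)) * (1 - simpson)

# An even community scores higher than one dominated by a single species
print(round(hurlbert_pie([10, 10, 10]), 3))  # 0.690
print(round(hurlbert_pie([28, 1, 1]), 3))    # 0.131

Because PIE is a probability rather than a count, it is far less sensitive to sample size and spatial grain than raw species richness, which is what makes it suitable as the scale-independent metric the authors call for.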


Wan S.-H.,Foundation Medicine | Vogel M.W.,Washington University in St. Louis | Chen H.H.,Mayo Clinic and Foundation
Journal of the American College of Cardiology | Year: 2014

Pre-clinical diastolic dysfunction (PDD) has been broadly defined as left ventricular diastolic dysfunction without the diagnosis of congestive heart failure (HF) and with normal systolic function. PDD is an entity that remains poorly understood, yet has definite clinical significance. Although few original studies have focused on PDD, it has been shown that PDD is prevalent, and that there is a clear progression from PDD to symptomatic HF including dyspnea, edema, and fatigue. In diabetic patients and in patients with coronary artery disease or hypertension, it has been shown that patients with PDD have a significantly higher risk of progression to heart failure and death compared with patients without PDD. Because of these findings and the increasing prevalence of the heart failure epidemic, it is clear that an understanding of PDD is essential to decreasing patients' morbidity and mortality. This review will focus on what is known concerning pre-clinical diastolic dysfunction, including definitions, staging, epidemiology, pathophysiology, and the natural history of the disease. In addition, given the paucity of trials focused on PDD treatment, studies targeting risk factors associated with the development of PDD and therapeutic trials for heart failure with preserved ejection fraction will be reviewed. © 2014 by the American College of Cardiology Foundation.


Ortiz G.,Indiana University Bloomington | Cobanera E.,Indiana University Bloomington | Nussinov Z.,Washington University in St. Louis
Nuclear Physics B | Year: 2012

A new "bond-algebraic" approach to duality transformations provides a very powerful technique to analyze elementary excitations in the classical two-dimensional XY and p-clock models. By combining duality and Peierls arguments, we establish the existence of non-Abelian symmetries, the phase structure, and transitions of these models, unveil the nature of their topological excitations, and explicitly show that a continuous U(1) symmetry emerges when p≥5. This latter symmetry is associated with the appearance of discrete vortices and Berezinskii-Kosterlitz-Thouless-type transitions. We derive a correlation inequality to prove that the intermediate phase, appearing for p≥5, is critical (massless) with decaying power-law correlations. © 2011 Elsevier B.V.


Cohen-Cymberknoh M.,Hebrew University of Jerusalem | Kerem E.,Hebrew University of Jerusalem | Ferkol T.,Washington University in St. Louis | Elizur A.,Institute of Asthma
Thorax | Year: 2013

Airway epithelial cells and immune cells participate in the inflammatory process responsible for much of the pathology found in the lung of patients with cystic fibrosis (CF). Intense bronchial neutrophilic inflammation and release of proteases and oxygen radicals perpetuate the vicious cycle and progressively damage the airways. In vitro studies suggest that CF transmembrane conductance regulator (CFTR)-deficient airway epithelial cells display signalling abnormalities and aberrant intracellular processes which lead to transcription of inflammatory mediators. Several transcription factors, especially nuclear factor-κB, are activated. In addition, the accumulation of abnormally processed CFTR in the endoplasmic reticulum results in unfolded protein responses that trigger 'cell stress' and apoptosis, leading to dysregulation of the epithelial cells and innate immune function in the lung and resulting in exaggerated and ineffective airway inflammation. Measuring airway inflammation is crucial for initiating treatment and monitoring its effect. No inflammatory biomarker predictive for the clinical course of CF lung disease is currently known, although neutrophil elastase seems to correlate with lung function decline. CF animal models mimicking human lung disease may provide important insight into the pathogenesis of lung inflammation in CF and identify new therapeutic targets.


Grant
Agency: GTR | Branch: BBSRC | Program: | Phase: Research Grant | Award Amount: 379.29K | Year: 2015

The Darwinian idea of survival of the fittest is central to our understanding of the diversity of life on this planet. However, if only the fittest survive and reproduce, then why do we see so much variation among individuals in traits that are tied to fitness? This problem is especially striking in social systems where cooperating individuals perform some sort of costly act that helps others. Cooperative behaviour therefore has important effects on the fitness of individuals and those that they interact with (often their relatives). Furthermore, cooperating individuals run the risk of invasion by disruptive cheaters that reap the benefits of cooperative behaviours, but do not pay their fair share of the cost. In such situations, we would expect the best strategy to emerge: either cheating or cooperating. Surprisingly, however, studies of natural populations often reveal variation in the degree to which individuals appear to cooperate and cheat. If either cheating or cooperating is the better strategy, then why is there variation along a cooperator-cheater continuum? To better understand this problem, we believe that it is important to not only describe the nature of the variation that is actually present in populations, but also the genes that generate this variation and the processes shaping their variation. This is because, although evolutionary theory may suggest the best strategy, the genetic changes required may not be possible. For example, some strategies may not exist because any gains may be offset by other fitness costs. Alternatively, cooperative traits may be expressed rarely, or there may be limited opportunities to cheat, and as a result the action of Darwinian selection may simply be too inefficient to mould variation to achieve the optimal or favoured strategy. We propose to address this fundamental question using a simple system for the study of cooperative behaviour, the soil-dwelling social amoeba Dictyostelium discoideum. Under favourable conditions, D. discoideum amoebae exist as single-celled individuals that grow and divide by feeding on bacteria. Upon starvation, however, up to 100,000 amoebae aggregate and cooperate to make a multicellular fruiting body consisting of hardy spores supported by dead stalk cells. Stalk cells thus sacrifice themselves to help the dispersal of spores. Such sacrifices can be favoured because they typically help relatives; when non-relatives interact, however, an individual's sacrifice may instead benefit non-kin. Crucially, like other systems, we have discovered that D. discoideum shows enormous diversity in a wide array of traits, including the degree to which different individuals cooperate, thus providing us with a simple system to investigate why such variation exists. To achieve this goal, we will employ a novel combination of approaches in D. discoideum that allow the genetics and evolution of cooperative behaviour and other traits to be analysed with great power. We will use a large panel of naturally occurring strains to identify natural variation in genes that account for the diversity in the traits we observe. We will characterize the types of genes that produce natural diversity in social traits and ask whether those genes also affect other types of non-social traits, which could suggest that they are constrained or shaped by non-social processes. We will be able to determine the types of evolutionary processes that appear to be responsible for the maintenance or persistence of variation in populations.
Finally, we will integrate these results with models of evolution to develop a better theoretical understanding of how genetic diversity is maintained and evolutionary outcomes constrained. This work will therefore lead to a fundamental advance in our understanding of the types of variation underlying phenotypic diversity in natural populations and the evolutionary processes shaping that variation.
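A toy version of the kind of evolutionary model the proposal aims to integrate: replicator dynamics for the frequency of cooperators in a pairwise helping game with relatedness r, in which cooperation spreads exactly when Hamilton's rule r*b > c holds. The model structure and all parameter values are illustrative assumptions, not taken from the project.

def cooperator_trajectory(x0=0.5, b=3.0, c=1.0, r=0.4, dt=0.01, steps=5000):
    """Replicator dynamics for cooperator frequency x.

    Cooperator payoff: w_C = b * (r + (1 - r) * x) - c
    Cheater payoff:    w_D = b * (1 - r) * x
    so w_C - w_D = r*b - c, and cooperation spreads iff r*b > c.
    """
    x = x0
    for _ in range(steps):
        x += dt * x * (1.0 - x) * (r * b - c)
        x = min(max(x, 0.0), 1.0)  # keep the frequency in [0, 1]
    return x

print(cooperator_trajectory(r=0.4))  # r*b = 1.2 > c: cooperators go to fixation
print(cooperator_trajectory(r=0.2))  # r*b = 0.6 < c: cheaters go to fixation

In this idealized model one strategy always sweeps to fixation, which is exactly why the persistent cooperator-cheater variation observed in natural D. discoideum populations demands the kind of genetic and evolutionary explanation the project seeks.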