Sheffield, United Kingdom

The University of Sheffield is a research university in the city of Sheffield in South Yorkshire, England. It received its royal charter in 1905 as the successor to Sheffield Medical School and University College of Sheffield. One of the original red brick universities, it is also a member of the prestigious Russell Group of research-intensive universities. The University of Sheffield is widely recognised as a leading research and teaching university in the UK and in the world. In 2014, the QS World University Rankings placed Sheffield 66th worldwide and 12th in the UK. In 2011, Sheffield was named 'University of the Year' in the Times Higher Education awards. The Times Higher Education Student Experience Survey 2014 ranked the University of Sheffield 1st for student experience, social life, university facilities and accommodation, among other categories. The university had more than 17,000 undergraduate and around 9,000 postgraduate students in 2012. Its annual income for 2012-13 was £479.8 million, with expenditure of £465.0 million, resulting in a surplus of £14.8 million. Wikipedia.


Shaw P.J.,University of Sheffield
Clinical Medicine, Journal of the Royal College of Physicians of London | Year: 2010

Motor neurone disease (MND) is an adult-onset neurodegenerative disease which leads inexorably, via weakness of the limb, bulbar and respiratory muscles, to death from respiratory failure three to five years after onset. Most MND is sporadic but approximately 10% is inherited. In exciting recent breakthroughs, two new MND genes have been identified. Diagnosis is clinical and sometimes difficult - treatable mimics must be excluded before the diagnosis is ascribed. Riluzole prolongs life by only three to four months and is only available for the amyotrophic lateral sclerosis (ALS) form of MND. Management therefore properly focuses on symptom relief and the preservation of independence and quality of life. Malnutrition is a poor prognostic factor. In appropriate patients enteral feeding is recommended, although its use has yet to be shown to improve survival. In ALS patients with respiratory failure and good or only moderately impaired bulbar function, non-invasive positive pressure ventilation prolongs life and improves quality of life. © Royal College of Physicians, 2010. All rights reserved. Source


Piper P.W.,University of Sheffield
Sub-Cellular Biochemistry | Year: 2015

When investigating aging it is important to focus on the factors that are needed to attain, and which can be manipulated to extend, the longest lifespans. This has long been appreciated by those workers who use Drosophila or Caenorhabditis elegans as model experimental systems to study aging. Often though it seems it is not a consideration in many studies of yeast chronological aging. In this chapter I summarise how recent work has revealed the preconditioning that is needed for yeast to survive for long periods in stationary phase, therefore for it to exhibit a long chronological life span (CLS). Of critical importance in this regard is the nature of the nutrient limitation that, during the earlier growth phase, had forced the cells to undergo growth arrest. I have attempted to highlight those studies that have focussed on the longest CLSs, as this helps to identify investigations that may be addressing – not just factors that can influence chronological longevity – but those factors that are correlated with the authentic processes of chronological aging. Attempting to maximize long-term stationary survival in yeast should also enhance the potential relevance of this organism as an aging model to those who wrestle with the problems of aging in more complex systems. Finally I also give a personal perspective of how studies on the yeast CLS may still yet provide some important new insights into events that are correlated with aging. © Springer Science+Business Media B.V. 2012. Source


Beerling D.J.,University of Sheffield
Philosophical Transactions of the Royal Society B: Biological Sciences | Year: 2012

Exciting evidence from diverse fields, including physiology, evolutionary biology, palaeontology, geosciences and molecular genetics, is providing an increasingly secure basis for robustly formulating and evaluating hypotheses concerning the role of atmospheric carbon dioxide (CO2) in the evolution of photosynthetic eukaryotes. Such studies span over a billion years of evolutionary change, from the origins of eukaryotic algae through to the evolution of our present-day terrestrial floras, and have relevance for plant and ecosystem responses to future global CO2 increases. The papers in this issue reflect the breadth and depth of approaches being adopted to address this issue. They reveal new discoveries pointing to deep evidence for the role of CO2 in shaping evolutionary changes in plants and ecosystems, and establish an exciting cross-disciplinary research agenda for uncovering new insights into feedbacks between biology and the Earth system. © 2012 The Royal Society. Source


Simon J. P.,University of Sheffield | Patricia W. S.,Columbia University
International Journal of Nursing Studies | Year: 2015

Background: Pressure ulcers have an adverse impact on patients and can also result in additional costs and workload for healthcare providers. Interventions to prevent pressure ulcers are focused on identifying at-risk patients and using systems such as mattresses and turning to relieve pressure. Treatments for pressure ulcers are directed towards promoting wound healing and symptom relief. Both prevention and treatments have associated costs for healthcare providers. The aim of this study was to systematically review the economic evidence for prevention and treatment interventions for pressure ulcers. Design: A systematic review of comparative clinical studies that evaluate interventions to either prevent or treat pressure ulcers. Data sources: Searches of the major electronic databases were conducted to identify citations that reported costs or economic analysis for interventions directed towards prevention or treatment of pressure ulcers. Only comparative clinical studies were included. Review articles, case-series, non-randomised studies, and studies in a foreign language that did not have an abstract in English were excluded from the review. Review methods: Decisions regarding inclusion or exclusion were based on a consensus of the authors after review of the title or abstract. Potential citations were obtained for more detailed review and assessed against the inclusion criteria. The studies identified for inclusion were assessed against the 24 key criteria contained in the CHEERS checklist. Costs were standardised to US dollars and adjusted for inflation to 2012 rates. Results: The searches identified 105 potential studies. After review of the citations a total of 23 studies were included: 12 examined prevention interventions and 11 treatments. Review against the CHEERS criteria showed that the majority of included trials had poor reporting and a lack of detail regarding how costs were calculated. Few studies reported more than aggregate costs of treatments, with only a small number reporting unit cost outcomes. Conclusions: Existing evidence was poor in regard to the economic evaluation of interventions for the prevention and treatment of pressure ulcers. Much of the published literature had poor reporting quality when compared to guidelines which provide key criteria for studies to adequately examine costs within an economic analysis. © 2014 Elsevier Ltd. Source


Albutt G.,University of Sheffield
Nursing standard (Royal College of Nursing (Great Britain) : 1987) | Year: 2013

To report nurse educators' perspectives of the appropriateness of pre-registration nursing education programmes in preparing nurses to practise in primary care. Data were collected through semi-structured telephone and face-to-face interviews with eight nurse educators, and were subject to thematic analysis. Nurse educators believed that nursing education programmes did not adequately prepare newly qualified nurses to work in primary care because they provided limited experience in this setting. Factors such as shortage of practice placements in primary care and lack of mentors to supervise and support students were identified as major barriers to student learning and subsequent preparedness to work in primary care. Source


Mann B.E.,University of Sheffield
Topics in Organometallic Chemistry | Year: 2010

Carbon monoxide (CO), like nitric oxide (NO), is an essential signalling molecule in humans. It is active in the cardiovascular system as a vasodilator. In addition, CO possesses anti-inflammatory, anti-apoptotic and anti-proliferative properties and protects tissues from hypoxia and reperfusion injury. Some of its applications in animal models include suppression of organ graft rejection and safeguarding the heart during reperfusion after cardiopulmonary bypass surgery. CO also suppresses arteriosclerotic lesions following angioplasty, reverses established pulmonary hypertension and mitigates the development of post-operative ileus in the murine small intestine and the development of cerebral malaria in mice as well as graft-induced intimal hyperplasia in pigs. There have been several clinical trials using air-CO mixtures for the treatment of lung-, heart-, kidney- and abdominal-related diseases. This review examines the research involving the development of classes of compounds (with particular emphasis on metal carbonyls) that release CO, which could be used in clinically relevant conditions. The review is drawn not only from published papers in the chemical literature but also from the extensive biological literature and patents on CO-releasing molecules (CO-RMs). © 2010 Springer-Verlag Berlin Heidelberg. Source


It is widely recognised that the appropriate representation for expert judgements of uncertainty is as a probability distribution for the unknown quantity of interest. However, formal elicitation of probability distributions is a non-trivial task. We provide an overview of this field, including an outline of the process of eliciting knowledge from experts in probabilistic form. We explore approaches to probabilistic uncertainty specification including direct elicitation and Bayesian analysis. In particular, we introduce the generic technique of elaboration and present a variety of forms of elaboration, illustrated with a series of examples. The methods are applied to the expression of uncertainty in a case study. Mechanistic models are built in just about every area of science and technology, to represent complex physical processes. They are used to predict, understand and control those processes, and increasingly play a role in national and international policy making. As such models gain higher prominence, recipients of their forecasts are increasingly demanding to know how accurate they are. There is therefore a growing interest in quantifying the uncertainties in model predictions. Uncertainty in model outputs, as representations of reality, arises from uncertainty about model inputs (such as initial conditions, external forcing variables and parameters in model equations) and from uncertainty about model structure. Our case study is based on the Sheffield Dynamic Global Vegetation Model (SDGVM), which is used to estimate the combined carbon flux from vegetation in England and Wales in a given year. The extent to which vegetation acts as a carbon sink is an important component of the debate about climate change. We show how different approaches were used to characterise uncertainty in vegetation model parameters, soil conditions and land cover. © 2011 Elsevier Ltd. Source
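
As a minimal sketch of the propagation step described above — sample each uncertain input from an elicited distribution, push the samples through the model, and summarise the output spread — consider the following Monte Carlo fragment. The `vegetation_flux` function and all distribution parameters are hypothetical stand-ins; SDGVM itself is not reproduced here.

```python
# Monte Carlo propagation of elicited input uncertainty through a toy model.
# "vegetation_flux" is a hypothetical stand-in for a mechanistic model such
# as SDGVM; the elicited distributions below are purely illustrative.
import numpy as np

rng = np.random.default_rng(42)

def vegetation_flux(leaf_n, soil_water, land_frac):
    """Toy model: carbon flux as a simple nonlinear response to its inputs."""
    return land_frac * leaf_n * np.tanh(soil_water)

n = 10_000
leaf_n     = rng.gamma(shape=4.0, scale=0.5, size=n)   # elicited Gamma
soil_water = rng.beta(2.0, 2.0, size=n)                # elicited Beta
land_frac  = rng.normal(0.6, 0.05, size=n)             # elicited Normal

flux = vegetation_flux(leaf_n, soil_water, land_frac)
print(f"mean flux = {flux.mean():.3f}")
print(f"95% interval = ({np.percentile(flux, 2.5):.3f}, "
      f"{np.percentile(flux, 97.5):.3f})")
```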


Psychologically focused group interventions for multiple sclerosis were reviewed. The 14 studies reviewed were quantitative and experimental and involved a comparison group (control or other intervention). Compared with controls, psychologically focused group interventions achieved considerable improvements in depression and moderate improvements in self-efficacy and quality of life, but little change in anxiety. Psychologically focused group interventions compared well with other interventions, although evidence was limited. Psychologically focused group intervention was less effective in the short term for depression than individual cognitive behavioural therapy or medication, but comparable in the long term. Intervention heterogeneity made comparisons difficult. Specificity of effect is unclear. Limited evidence suggests psychologically focused group intervention is effective in improving certain outcomes. © 2013 The Author(s). Source


Sugihara S.,University of Fukui | Armes S.P.,University of Sheffield | Lewis A.L.,Biocompatibles UK Ltd
Angewandte Chemie - International Edition | Year: 2010

Shell shock: The one-pot synthesis of shell cross-linked (SCL) micelles was achieved using an ABC triblock copolymer in a 9:1 alcohol/water mixture. These SCL micelles were dialyzed against water, leading to solvation of the core-forming PMPC chains. If the cross-linking is not too high, these chains migrate through the inner shell to join the coronal PEO chains, hence forming nanocages. © 2010 Wiley-VCH Verlag GmbH & Co. KGaA. Source


Pacey A.A.,University of Sheffield
Asian Journal of Andrology | Year: 2010

Quality assurance (QA) and quality control (QC) are fundamental aspects of any laboratory measurement. However, in comparison with other medical disciplines, the need for QA and QC in laboratory andrology has been recognized only recently. Furthermore, there is evidence that the effort required to undertake QA and QC has not been wholly welcomed by some clinicians. Nevertheless, accrediting bodies and regulatory authorities increasingly require evidence that laboratories have effective QA and QC measures in place because both are central to the quality management processes. Following the publication of the 5th edition of the World Health Organization Laboratory Manual, existing QA and QC systems will need to be updated to take into account some of the methodological changes recommended by the manual. Three of these are discussed in this commentary; they relate to: (i) the move to infer semen volume from its weight; (ii) the re-classification of sperm motility grades from four to three; and (iii) the publication of a lower reference limit for morphology of 4% (with a corresponding 95% confidence interval of 3%-4%). The importance of QA and QC in all laboratory tests, including up-and-coming new tests to assess sperm DNA integrity, is discussed. The need for adequate initial training and continuing professional development programmes to support laboratory scientists performing andrology is also described. © 2010 AJA, SIMM & SJTU. Source


Morton R.J.,University of Sheffield
Astronomy and Astrophysics | Year: 2012

Aims. Evidence is beginning to be put forward that demonstrates the role of the chromosphere in supplying energy and mass to the corona. We aim to assess the role of chromospheric jets in active region dynamics. Methods. Using a combination of the Hinode/SOT Ca II H and TRACE 1550 Å and 1600 Å filters, we examine chromospheric jets situated at the edge of a sunspot. Results. Analysis reveals a near-continuous series of jets that raise chromospheric material into the low corona above a sunspot. The jets have average rise speeds of 30 km s⁻¹ and a range of 10-100 km s⁻¹. Enhanced emission observed at the jets' leading edges suggests the formation of a shock front. Increased emission in TRACE bandpasses above the sunspot and the disappearance of the jets from the Ca II filter suggest that some of the chromospheric jet material is heated to at least ∼0.1 MK. The evidence suggests that the jets could be a source of steady, low-level heating for active region features. © 2012 ESO. Source


Introduction: ADHD is a common and growing problem which manifests and is diagnosed via a cluster of behaviours, such as inability to regulate emotions or manage motivational delay, and problems with executive functioning. It frequently accompanies autism spectrum disorders and dyslexia. Homoeopathy is a system of therapeutics based on the Law of Similars, where 'like cures like'. Conditions are treated by highly diluted substances that cause, in healthy persons, symptoms like those of the condition to be treated. The aim of this case report is to describe the homoeopathic treatment and progress of one 16-year-old youth with diagnoses of ADHD, Asperger's syndrome and dyslexia subjected to in-utero cannabis exposure. Methods: The youth received individualised homoeopathic medicines and additional ultra-molecular dilutions of cannabis. Outcome was measured using the parent-completed Conners' Parent Rating Scale-Revised: Long version (CPRS-R:L) every 4 months, with the DSM-IV total score selected for analysis; and the Measure Yourself Medical Outcome Profile (MYMOP) every 6 weeks, completed by parent and patient. Results: At the start of treatment the patient's DSM-IV total T score was 90+ (the highest possible); after 18 months it was 59 (within the normal range). The MYMOP score was 4.5 at the start of treatment and 1.75 after 18 months. Conclusion: Treatment by a homoeopath over 18 months was associated with improvements in ADHD status and patient-generated outcomes. Ultra-molecular dilutions of a recreational drug the patient was exposed to in utero appear to have contributed to the improvements. Systematic research with larger numbers would be required to confirm or refute this single case observation. © 2015 Elsevier GmbH. Source


Sanderson M.,University of Sheffield
Foundations and Trends in Information Retrieval | Year: 2010

Use of test collections and evaluation measures to assess the effectiveness of information retrieval systems has its origins in work dating back to the early 1950s. Across the nearly 60 years since that work started, the use of test collections has become a de facto standard of evaluation. This monograph surveys the research conducted and explains the methods and measures devised for evaluation of retrieval systems, including a detailed look at the use of statistical significance testing in retrieval experimentation. This monograph reviews more recent examinations of the validity of the test collection approach and evaluation measures, as well as outlining trends in current research exploiting query logs and live labs. At its core, the modern-day test collection is little different from the structures that the pioneering researchers in the 1950s and 1960s conceived of. This tutorial and review shows that despite its age, this long-standing evaluation method is still a highly valued tool for retrieval research. © 2010 M. Sanderson. Source
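
To make the surveyed methodology concrete, here is a toy sketch in the spirit of test-collection evaluation: average precision computed per query for two systems, followed by a paired sign-flip randomisation test on the mean difference. All rankings and relevance judgments below are invented.

```python
# Toy test-collection evaluation: per-query average precision (AP) for two
# hypothetical systems, then a paired sign-flip randomisation test.
import numpy as np

def average_precision(ranked_rel):
    """AP for one ranked list; ranked_rel[i] = 1 if the doc at rank i+1 is relevant."""
    ranked_rel = np.asarray(ranked_rel)
    if ranked_rel.sum() == 0:
        return 0.0
    precisions = np.cumsum(ranked_rel) / (np.arange(len(ranked_rel)) + 1)
    return float((precisions * ranked_rel).sum() / ranked_rel.sum())

# Relevance of the top-5 results for 4 queries, systems A and B (made up).
sys_a = [[1,0,1,0,0], [0,1,0,0,1], [1,1,0,0,0], [0,0,0,1,0]]
sys_b = [[1,1,0,0,0], [1,0,0,1,0], [1,1,1,0,0], [0,1,0,0,0]]
ap_a = np.array([average_precision(q) for q in sys_a])
ap_b = np.array([average_precision(q) for q in sys_b])

# Paired randomisation test: randomly flip the sign of per-query differences.
rng = np.random.default_rng(0)
diffs = ap_b - ap_a
observed = abs(diffs.mean())
flips = rng.choice([-1, 1], size=(10_000, len(diffs)))
null = np.abs((flips * diffs).mean(axis=1))
p_value = (null >= observed).mean()
print(f"MAP A = {ap_a.mean():.3f}  MAP B = {ap_b.mean():.3f}  p = {p_value:.3f}")
```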


Brennan V.K.,Business and Conference Center | Dixon S.,University of Sheffield
PharmacoEconomics | Year: 2013

Objective: This review aimed to identify published studies that provide an empirical measure of process utility, which can be incorporated into estimates of QALY calculations. Methods: A literature search was conducted in PubMed to identify published studies of process utility. Articles were included if they were written in the English language and reported empirical measures of process utility that could be incorporated into the QALY calculation; those studies reporting utilities that were not anchored on a scale of 0 representing dead and 1 representing full health were excluded from the review. Results: Fifteen studies published between 1996 and 2012 were included. Studies included respondents from the USA, Australia, Scotland and the UK, Europe and Canada. Eight of the included studies explored process utility associated with treatments; six explored process utility associated with screening procedures or tests; and one was performed in preventative care. A variety of approaches were used to detect and measure process utility: four studies used standard gamble techniques; four studies used time trade-off (TTO); one study used conjoint analysis and one used a combination of conjoint analysis and TTO; one study used SF-36 data; one study used both TTO and EQ-5D; and three studies used wait trade-off techniques. Measures of process utility for different drug delivery methods ranged from 0.02 to 0.27. Utility estimates associated with different dosing strategies ranged from 0.005 to 0.09. Estimates for convenience (able to take on an empty stomach) ranged from 0.001 to 0.028. Estimates of process utility associated with screening and testing procedures ranged from 0.0005 to 0.031. Both of these estimates were obtained for management approaches to cervical cancer screening. Conclusion: The identification of studies through conventional methods was difficult due to the lack of consistent indexing and terminology across studies; however, the evidence does support the existence of process utility in treatment, screening and preventative care settings. There was considerable variation between estimates. The range of methodological approaches used to identify and measure process utility, coupled with the need for further research into, for example, the application of estimates in economic models, means it is difficult to know whether these differences are a true reflection of the amount of process utility that enters into an individual's utility function, or whether they are associated with features of the studies' methodological design. Without further work, and a standardised approach to the methodology for the detection and measurement of process utility, comparisons between estimates are difficult. This literature review supports the existence of process utility and indicates that, despite the need for further research in the area, it could be an important component of an individual's utility function, which should at least be considered, if not incorporated, into cost-utility analyses. © 2013 Springer International Publishing Switzerland. Source
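
As a concrete illustration of how increments of the size reported above would enter a QALY calculation, consider this small worked example; the utility values and duration are invented for illustration and are not drawn from the reviewed studies.

```python
# Worked example of a process-utility increment entering a QALY calculation.
# All numbers are illustrative, not taken from the review.
health_state_utility = 0.75   # anchored at 0 = dead, 1 = full health
process_utility_gain = 0.02   # e.g. a more convenient drug delivery method
duration_years = 5.0

qalys_without = health_state_utility * duration_years
qalys_with = (health_state_utility + process_utility_gain) * duration_years

print(f"QALYs without process utility: {qalys_without:.2f}")
print(f"QALYs with process utility:    {qalys_with:.2f}")
print(f"Incremental QALY gain:         {qalys_with - qalys_without:.2f}")
```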


Cooper R.J.,University of Sheffield
BMJ Open | Year: 2013

Objectives: Over-the-counter (OTC) pharmacy medicines are considered relatively safe in contrast to prescribed and illicit substances, but their abuse and addiction potential is increasingly recognised. Those affected represent a hard to reach group, with little known about their experiences. Study objectives were to describe the experiences and views of those self-reporting OTC medicine abuse, and why medicines were taken, how they were obtained and associated treatment and support sought. Design: Qualitative study using in-depth mainly telephone interviews. Participants: A purposive sample of 25 adults, aged 20-60s, 13 women. Setting: UK, via two internet support groups. Results: Individuals considered themselves 'addicted', but socially and economically active and different from illicit substance misusers. They blamed themselves for losing control over their medicine use, which usually began for genuine medical reasons and not experimentation and was often linked to the cessation of, or ongoing, medical prescribing. Codeine, in compound analgesics, was the main medicine implicated with three distinct dose ranges emerging with decongestant and sedative antihistamine abuse also being reported. Subsequent use was for the 'buzz' or similar effects of the opiate, which was obtained unproblematically by having lists of pharmacies to visit and occasionally using internet suppliers. Perceived withdrawal symptoms were described for all three dose ranges, and work and health problems were reported with higher doses. Mixed views about different treatment and support options emerged with standard drug treatment services being considered inappropriate for OTC medicines and concerns that this 'hidden addiction' was recorded in medical notes. Most supported the continued availability of OTC medicines with appropriate addiction warnings. Conclusions: Greater awareness of the addiction potential of OTC medicines is needed for the public, pharmacists and medical prescribers, along with appropriate communication about, and reviews of, treatment and support options, for this distinct group. Source


Nagase A.,Chiba University | Dunnett N.,University of Sheffield
Landscape and Urban Planning | Year: 2012

Increased stormwater runoff from impervious surfaces is a major concern in urban areas and green roofs are increasingly used as an innovative means of stormwater management. However, there are very few studies on how different vegetation types affect the amount of water runoff. This paper describes an experiment that investigates the influence of plant species and plant diversity on the amount of water runoff from a simulated green roof. Twelve species were selected from the three major taxonomic and functional plant groups that are commonly used for extensive green roofs (forbs, sedum and grasses). Four species were chosen from each group and planted in combinations of increasing diversity and complexity: monocultures, four-species mixtures and twelve-species mixtures. The results showed that there was a significant difference in amount of water runoff between vegetation types; grasses were the most effective for reducing water runoff, followed by forbs and sedum. It was also shown that the size and structure of plants significantly influenced the amount of water runoff. Plant species with taller height, larger diameter, and larger shoot and root biomass were more effective in reducing water runoff from simulated green roofs than plant species with shorter height, smaller diameter, and smaller shoot and root biomass. The amount of water runoff from Sedum spp. was higher than that from bare ground. Species richness did not affect the amount of water runoff in this study. © 2011 Elsevier B.V. Source


Cooper R.J.,University of Sheffield
Journal of Substance Use | Year: 2013

Background: The sale of over-the-counter (OTC) medicines from pharmacies can help individuals self-manage symptoms. However, some OTC medicines may be abused, with addiction and harms being increasingly recognised. This review describes the current knowledge and understanding of OTC medicine abuse. Approach: Comprehensive search of international empirical and review literature between 1990 and 2011. Findings: OTC medicine abuse was identified in many countries and although implicated products varied, five key groups emerged: codeine-based (especially compound analgesic) medicines, cough products (particularly dextromethorphan), sedative antihistamines, decongestants and laxatives. No clear patterns relating to those affected or their experiences were identified and they may represent a hard-to-reach group, which, coupled with heterogeneous data, makes estimating the scale of abuse problematic. Associated harms included direct physiological or psychological harm (e.g. opiate addiction), harm from another ingredient (e.g. ibuprofen-related gastric bleeding) and associated social and economic problems. Strategies and interventions included limiting supplies, raising public and professional awareness and using existing services and Internet support groups, although associated evaluations were lacking. Terminological variations were identified. Conclusions: OTC medicine abuse is a recognised problem internationally but is currently incompletely understood. Research is needed to quantify scale of abuse, evaluate interventions and capture individual experiences, to inform policy, regulation and interventions. © 2013 Informa UK, Ltd. Source


Harman M.,Kings College London | McMinn P.,University of Sheffield
IEEE Transactions on Software Engineering | Year: 2010

Search-based optimization techniques have been applied to structural software test data generation since 1992, with a recent upsurge in interest and activity within this area. However, despite the large number of recent studies on the applicability of different search-based optimization approaches, there has been very little theoretical analysis of the types of testing problem for which these techniques are well suited. There are also few empirical studies that present results for larger programs. This paper presents a theoretical exploration of the most widely studied approach, the global search technique embodied by Genetic Algorithms. It also presents results from a large empirical study that compares the behavior of both global and local search-based optimization on real-world programs. The results of this study reveal that there are cases of the test data generation problem that suit each algorithm, thereby suggesting that a hybrid global-local search (a Memetic Algorithm) may be appropriate. The paper presents a Memetic Algorithm along with further empirical results studying its performance. © 2010 IEEE. Source
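
The hybrid idea can be sketched in a few lines: a genetic algorithm whose offspring are refined by a simple hill climber, driving a toy branch-distance fitness to zero. This illustrates the general Memetic Algorithm pattern rather than the authors' specific implementation.

```python
# Toy memetic algorithm for test-data generation: global genetic search plus
# local hill climbing, minimising a branch-distance-style fitness.
import random

def fitness(ind):
    a, b = ind
    return abs(a - 2 * b)          # 0 means the target branch "a == 2*b" is hit

def mutate(ind, step):
    return tuple(x + random.randint(-step, step) for x in ind)

def crossover(p1, p2):
    return (p1[0], p2[1])

def local_search(ind):
    """Hill climber applied to each offspring -- the 'memetic' ingredient."""
    best = ind
    for _ in range(20):
        cand = mutate(best, step=1)
        if fitness(cand) < fitness(best):
            best = cand
    return best

random.seed(1)
pop = [(random.randint(-100, 100), random.randint(-100, 100)) for _ in range(20)]
for gen in range(50):
    pop.sort(key=fitness)
    if fitness(pop[0]) == 0:
        break                      # a test datum covering the branch was found
    parents = pop[:10]
    children = [local_search(mutate(crossover(random.choice(parents),
                                              random.choice(parents)), step=10))
                for _ in range(10)]
    pop = parents + children
best = min(pop, key=fitness)
print(f"generation {gen}: best input {best}, fitness {fitness(best)}")
```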


Mariani E.,University of Liverpool | Ghassemieh E.,University of Sheffield
Acta Materialia | Year: 2010

6061 O Al alloy foils were welded to form monolithic and SiC fibre-embedded samples using the ultrasonic consolidation (UC) process. Contact pressures of 135, 155 and 175 MPa were investigated at 20 kHz frequency, 50% of the oscillation amplitude, 34.5 mm s-1 sonotrode velocity and 20 °C. Deformed microstructures were analysed using electron backscatter diffraction (EBSD). At all contact pressures deformation occurs by non-steady state dislocation glide. Dynamic recovery is active in the upper and lower foils. Friction at the welding interface, instantaneous internal temperatures (0.5-0.8 of the melting temperature, Tm), contact pressure and fast strain rates result in transient microstructures and grain size reduction by continuous dynamic recrystallization (CDRX) within the bonding zone. Bonding occurs by local grain boundary migration, which allows diffusion and atom interlocking across the contact between two clean surfaces. Textures weaken with increasing contact pressure due to increased strain hardening and different grain rotation rates. High contact pressures enhance dynamic recovery and CDRX. Deformation around the fibre is intense within 50 μm and extends to 450 μm from it. © 2009 Acta Materialia Inc. Source


Golestanian R.,University of Sheffield | Golestanian R.,CNRS Gulliver Laboratory
Physical Review Letters | Year: 2010

A minimal design for a molecular swimmer is proposed that is based on a mechanochemical propulsion mechanism. Conformational changes are induced by electrostatic actuation when specific parts of the molecule temporarily acquire net charges through catalyzed chemical reactions involving ionic components. The mechanochemical cycle is designed such that the resulting conformational changes would be sufficient for achieving low Reynolds number propulsion. The system is analyzed within the recently developed framework of stochastic swimmers to take account of the noisy environment at the molecular scale. The swimming velocity of the device is found to depend on the concentration of the fuel molecule according to the Michaelis-Menten rule in enzymatic reactions. © 2010 The American Physical Society. Source
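
The reported concentration dependence is the standard Michaelis-Menten saturation curve; the sketch below simply evaluates v = v_max·S/(K_M + S) with placeholder constants to show the saturating behaviour.

```python
# Michaelis-Menten dependence of swimming velocity on fuel concentration,
# as reported in the abstract. v_max and K_M are arbitrary placeholders.
v_max = 1.0   # saturating swimming velocity (arbitrary units)
K_M = 0.5     # fuel concentration at half-maximal velocity

def swim_velocity(fuel_conc):
    return v_max * fuel_conc / (K_M + fuel_conc)

for s in (0.1, 0.5, 2.0, 10.0):
    print(f"fuel = {s:5.1f}  ->  v = {swim_velocity(s):.3f}")
```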


Unlike simpler organisms, C. elegans possesses several distinct chemosensory pathways and chemotactic mechanisms. These mechanisms and pathways are individually capable of driving chemotaxis in a chemical concentration gradient. However, it is not understood whether they are redundant or co-operate in more sophisticated ways. Here we examine the specialisation of different chemotactic mechanisms in a model of chemotaxis to NaCl. We explore the performance of different chemotactic mechanisms in a range of chemical gradients and show that, in the model, far from being redundant, the mechanisms are specialised both for different environments and for distinct features within those environments. We also show that the chemotactic drive mediated by the ASE pathway is not robust to the presence of noise in the chemical gradient. This problem cannot be solved along the ASE pathway without destroying its ability to drive chemotaxis. Instead, we show that robustness to noise can be achieved by introducing a second, much slower NaCl-sensing pathway. This secondary pathway is simpler than the ASE pathway, in the sense that it can respond to either up-steps or down-steps in NaCl but not both, and could correspond to one of several candidates in the literature which we identify and evaluate. This work provides one possible explanation of why there are multiple NaCl-sensing pathways and chemotactic mechanisms in C. elegans: rather than being redundant, the different pathways and mechanisms are specialised both for the characteristics of different environments and for distinct features within a single environment. © 2013 Springer Science+Business Media. Source
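
For readers unfamiliar with chemotactic mechanisms of this kind, the fragment below simulates one classic strategy: a biased random walk that turns more often when the sampled concentration is falling (the "pirouette" strategy). It is a generic illustration, not a reimplementation of the paper's model of the ASE or slower NaCl-sensing pathways.

```python
# Generic pirouette-style chemotaxis: turn more often when concentration
# is falling. Gradient, step sizes and turn probabilities are arbitrary.
import numpy as np

rng = np.random.default_rng(3)

def concentration(pos):
    return -np.linalg.norm(pos)        # NaCl gradient peaks at the origin

pos = np.array([10.0, 10.0])
heading = rng.uniform(0, 2 * np.pi)
prev_c = concentration(pos)
for step in range(2000):
    pos = pos + 0.1 * np.array([np.cos(heading), np.sin(heading)])
    c = concentration(pos)
    turn_prob = 0.02 if c > prev_c else 0.25   # pirouette when things get worse
    if rng.random() < turn_prob:
        heading = rng.uniform(0, 2 * np.pi)
    prev_c = c
print(f"final distance from gradient peak: {np.linalg.norm(pos):.2f}")
```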


Stocks M.,University of Sheffield
Nature Genetics | Year: 2015

Three strikingly different alternative male mating morphs (aggressive 'independents', semicooperative 'satellites' and female-mimic 'faeders') coexist as a balanced polymorphism in the ruff, Philomachus pugnax, a lek-breeding wading bird. Major differences in body size, ornamentation, and aggressive and mating behaviors are inherited as an autosomal polymorphism. We show that development into satellites and faeders is determined by a supergene consisting of divergent alternative, dominant and non-recombining haplotypes of an inversion on chromosome 11, which contains 125 predicted genes. Independents are homozygous for the ancestral sequence. One breakpoint of the inversion disrupts the essential CENP-N gene (encoding centromere protein N), and pedigree analysis confirms the lethality of homozygosity for the inversion. We describe new differences in behavior, testis size and steroid metabolism among morphs and identify polymorphic genes within the inversion that are likely to contribute to the differences among morphs in reproductive traits. © 2015 Nature Publishing Group, a division of Macmillan Publishers Limited. All Rights Reserved. Source


Hettema E.H.,University of Sheffield | Erdmann R.,Ruhr University Bochum | van der Klei I.,University of Groningen | Veenhuis M.,University of Groningen
Current Opinion in Cell Biology | Year: 2014

Significant progress has been made towards our understanding of the mechanism of peroxisome formation, in particular concerning sorting of peroxisomal membrane proteins, matrix protein import and organelle multiplication. Here we evaluate the progress made in recent years. We focus mainly on progress made in yeasts. We indicate the gaps in our knowledge and discuss conflicting models. © 2014. Source


Petkovski M.,University of Sheffield
Cement and Concrete Research | Year: 2010

Concrete in structures exposed to high temperatures is practically always heated under stress. Yet, there are few experimental studies in which the concrete was heated under stress and then loaded to the peak, and most of these were performed under uniaxial compression. This paper reports on an experimental study of the effects of different heat-load regimes on the stress-strain behaviour of partially sealed concrete under multiaxial compression, at elevated temperature. The specimens were first heated (stressed/unstressed), then loaded to the peak in multiaxial compression. In contrast with previous experimental research, the results show that concrete heated under relatively low compressive stress has lower strength and stiffness than concrete heated without load. The results suggest that the presence of stress during first heating produces a specific damage, which could be the cause for a major component of the load induced thermal strain (LITS) in concrete. © 2010 Elsevier Ltd. All rights reserved. Source


Herbert R.,University of Sheffield | Best W.,University College London
Cortex | Year: 2010

We describe MH, who presents with agrammatic aphasia and anomia, and who produces semantic errors in the absence of a central semantic impairment. This pattern of performance implies damage to syntactic processes operating between semantics and phonological output. Damage here may lead to lexical selection errors and a deficit in combining words to form phrases. We investigated MH's knowledge and processing of noun syntax in mass and count nouns. She produced more count nouns than mass nouns. She showed impaired knowledge of noun syntax in judgement tasks and production tasks, with mass noun syntax being more impaired than count noun syntax. We interpret these results in terms of a two-stage model of lexical retrieval. We propose that syntactic information represented at the lemma level is activated even in bare noun production, and can be differentially impaired across noun categories. That same damage can lead to semantic errors in production. For MH, limited syntactic options are available to support production, and these favour count noun production. The data provide a new account of output semantic errors. © 2009 Elsevier Srl. Source


Zhu Z.-Q.,University of Sheffield
COMPEL - The International Journal for Computation and Mathematics in Electrical and Electronic Engineering | Year: 2011

Purpose - Fractional slot permanent magnet (PM) brushless machines having concentrated non-overlapping windings have been the subject of research over the last few years. They have already been employed in commercial hybrid electric vehicles (HEVs) due to high torque density, high efficiency, low torque ripple, good flux-weakening and fault-tolerance performance. The purpose of this paper is to overview recent developments and research challenges in such machines in terms of various structural and design features for electric vehicle (EV)/HEV applications. Design/methodology/approach - In the paper, fractional slot PM brushless machines are overviewed according to the following main and sub-topics: first, machine topologies: slot and pole number combinations, all and alternate teeth wound (double- and single-layer windings), unequal tooth structure, modular stator, interior magnet rotor; second, machine parameters and control performance: winding inductances, flux-weakening capability, fault-tolerant performance; and third, parasitic effects: cogging torque, iron loss, rotor eddy current loss, unbalanced magnetic force, acoustic noise and vibration. Findings - Many fractional slot PM machine topologies exist. Owing to rich mmf harmonics, fractional slot PM brushless machines exhibit relatively high rotor eddy current loss, potentially high unbalanced magnetic force and acoustic noise and vibration, while the reluctance torque component is relatively low or even negligible when an interior PM rotor is employed. Originality/value - This is the first overview paper which systematically reviews the recent development and research challenges in fractional-slot PM machines. It summarizes their various structural and design features for EV/HEV applications. © Emerald Group Publishing Limited. Source


Hornik T.,Turbo Power Systems | Zhong Q.-C.,University of Sheffield
IEEE Transactions on Industrial Electronics | Year: 2013

For applications in renewable energy and distributed generation, there is often a need to have a neutral line to provide a current path for unbalanced loads. This can be achieved by using a neutral-point circuit that consists of a conventional neutral leg and a split dc link. In this paper, an H∞ current controller is proposed to force the current flowing through the split dc link to be nearly zero so that the neutral-point voltage is stable, and then, a voltage control loop is added to eliminate the imbalance of the voltage so that the neutral-point voltage is balanced with respect to the dc terminals. The combination of these two controllers, connected in parallel, is able to maintain the neutral point at the midpoint of the dc bus even when the neutral current is large. Hence, the inverter can be connected to unbalanced loads and/or the utility grid. Experimental results are presented to demonstrate the excellent performance of the proposed control scheme. The proposed strategy can also be applied to neutral-point-clamped three-level converters. © 1982-2012 IEEE. Source


Sudbery P.,University of Sheffield
Fungal Genetics and Biology | Year: 2011

Fungal hyphae show extreme polarized growth at the tip. Electron microscope studies have revealed an apical body called the Spitzenkörper that is thought to drive polarized growth. Studies of polarized growth in S. cerevisiae have identified the protein components of the polarized growth machinery that are conserved in other fungi. Fusion of these proteins to GFP and its variants has for the first time allowed the localization of these proteins in real time to the hyphal tip without the need for drastic fixation procedures. Such studies showed that vesicle-associated proteins localize to the Spitzenkörper and identified a second compartment located at the tip surface composed of exocyst and other proteins that mediate the fusion of secretory vesicles with the plasma membrane. © 2011 Elsevier Inc. Source


Liu W.,University of Sheffield
Journal of the Franklin Institute | Year: 2011

A novel design of a frequency invariant beamformer based on a rectangular array is proposed, with two unique features: there are no tapped delay-lines (TDLs) or any other temporal processing involved, and the resultant beamformer has full 360° azimuth angle coverage. This leads to a wideband beamformer with complex-valued coefficients, and its implementation is not as straightforward as the traditional ones. Depending on whether the input signal is complex-valued or real-valued, special arrangements and structures are required for its effective implementation. Several design examples are provided with a satisfactory frequency invariant property. Two sets of implementation results are given, based on both simulations and data collected by a planar microphone array system. © 2011 The Franklin Institute. Source
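
The sketch below only evaluates the wideband response of a small rectangular array under fixed complex-valued weights (simple phase-shift weights designed at a single frequency); the fall-off it prints away from the design frequency is exactly the frequency dependence that a frequency invariant design like the one proposed here aims to remove. The array geometry and all values are arbitrary, and the paper's design method is not implemented.

```python
# Wideband response of a rectangular array with fixed complex weights.
# Narrowband phase-shift weights designed at f0 degrade off-frequency,
# motivating frequency-invariant designs.
import numpy as np

c = 343.0                                   # speed of sound (m/s)
nx, ny, d = 4, 4, 0.04                      # 4 x 4 rectangular grid, 4 cm pitch
gx, gy = np.meshgrid(np.arange(nx) * d, np.arange(ny) * d)
pos = np.column_stack([gx.ravel(), gy.ravel()])

look_az = np.deg2rad(60.0)                  # desired look direction (azimuth)
look = np.array([np.cos(look_az), np.sin(look_az)])

def response(w, f, az):
    u = np.array([np.cos(az), np.sin(az)])
    a = np.exp(-2j * np.pi * f * (pos @ u) / c)   # steering vector at (f, az)
    return abs(np.vdot(w, a))

f0 = 2000.0                                 # single design frequency
w = np.exp(-2j * np.pi * f0 * (pos @ look) / c) / (nx * ny)

for f in (1000.0, 2000.0, 4000.0):
    print(f"f = {f:6.0f} Hz: gain towards look direction = "
          f"{response(w, f, look_az):.3f}")
```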


Zhong Q.-C.,University of Sheffield
IEEE Transactions on Industrial Electronics | Year: 2013

In this paper, the inherent limitations of the conventional droop control scheme are revealed. It has been proven that parallel-operated inverters should have the same per-unit impedance in order for them to share the load accurately in proportion to their power ratings when the conventional droop control scheme is adopted. The droop controllers should also generate the same voltage set-point for the inverters. Both conditions are difficult to meet in practice, which results in errors in proportional load sharing. An improved droop controller is then proposed to achieve accurate proportional load sharing without meeting these two requirements and to reduce the load voltage drop due to the load effect and the droop effect. The load voltage can be maintained within the desired range around the rated value. The strategy is robust against numerical errors, disturbances, noises, feeder impedance, parameter drifts and component mismatches. The only sharing error, which is quantified in this paper, comes from the error in measuring the load voltage. When there are errors in the voltage measured, a fundamental tradeoff between the voltage drop and the sharing accuracy appears. It has also been explained that, in order to avoid errors in power sharing, the global settings of the rated voltage and frequency should be accurate. Experimental results are provided to verify the analysis and design. © 2012 IEEE. Source
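
The per-unit-impedance condition can be illustrated with a toy steady-state computation. The sketch below uses a DC/resistive analogue of two droop-controlled sources feeding a common load, with droop gains chosen inversely to a 2:1 power rating; it illustrates the sharing condition discussed above, not the improved controller itself.

```python
# DC/resistive analogue of droop-controlled load sharing (illustrative only).
# Two sources with droop V_i = V_set - k_i * P_i feed one load resistor
# through individual output resistances. Droop gains are inverse to a
# 2:1 rating, so accurate sharing means P1 = 2 * P2.
import numpy as np

def steady_state(k, r_out, r_load, v_set=1.0, iters=2000):
    p = np.zeros(2)
    for _ in range(iters):
        v_src = v_set - k * p                 # droop-adjusted voltage setpoints
        g = 1.0 / r_out
        v_bus = (g @ v_src) / (g.sum() + 1.0 / r_load)   # nodal analysis
        p = 0.5 * p + 0.5 * g * (v_src - v_bus) * v_src  # relaxed power update
    return p

k = np.array([0.05, 0.10])                    # droop gains (2:1 rating)
print("matched per-unit impedance:",
      np.round(steady_state(k, np.array([0.1, 0.2]), 1.0), 3))
print("equal ohmic impedance:     ",
      np.round(steady_state(k, np.array([0.1, 0.1]), 1.0), 3))
```

With matched per-unit output impedances the computed powers settle in the intended 2:1 ratio; with equal ohmic impedances the proportional-sharing error discussed in the abstract appears.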


Zhong Q.-C.,University of Sheffield | Hornik T.,University of Liverpool
IEEE Transactions on Industrial Electronics | Year: 2013

In this paper, a cascaded current-voltage control strategy is proposed for inverters to simultaneously improve the power quality of the inverter local load voltage and the current exchanged with the grid. It also enables seamless transfer of the operation mode from stand-alone to grid-connected or vice versa. The control scheme includes an inner voltage loop and an outer current loop, with both controllers designed using the H∞ repetitive control strategy. This leads to a very low total harmonic distortion in both the inverter local load voltage and the current exchanged with the grid at the same time. The proposed control strategy can be applied to single-phase inverters and three-phase four-wire inverters. It enables grid-connected inverters to inject balanced clean currents to the grid even when the local loads (if any) are unbalanced and/or nonlinear. Experiments under different scenarios, with comparisons in which the current repetitive controller is replaced with a current proportional-resonant controller, are presented to demonstrate the excellent performance of the proposed strategy. © 2012 IEEE. Source


Zhong Q.-C.,University of Sheffield
IEEE Transactions on Industrial Electronics | Year: 2013

In this paper, the load and/or grid connected to an inverter is modeled as the combination of voltage sources and current sources at harmonic frequencies. As a result, the system can be analyzed at each individual frequency, which avoids the difficulty in defining the reactive power for a system with different frequencies because it is now defined at each individual frequency. Moreover, a droop control strategy is developed for systems delivering power to a constant current source, instead of a constant voltage source. This is then applied to develop a harmonic droop controller so that the right amount of harmonic voltage is added to the inverter reference voltage to compensate the harmonic voltage dropped on the output impedance due to the harmonic current. This forces the output voltage at the individual harmonic frequency to be close to zero and improves the total harmonic distortion (THD) of the output voltage considerably. Both simulation and experimental results are provided to demonstrate that the proposed strategy can significantly improve the voltage THD. © 2012 IEEE. Source


Eiser C.,University of Sheffield | Varni J.W.,Texas A&M University
European Journal of Pediatrics | Year: 2013

Health-related quality of life (HRQOL) is increasingly seen as important to reflect the impact of an illness and its treatment on a patient from the patient's perspective. However, there may be times when it is difficult to obtain this information directly from pediatric patients, and parents are therefore used as substitutes. Nevertheless, informant discrepancy between children and their parents increases the need to identify variables which contribute to the observed differences between children's self-reports and parents' proxy-reports. Discrepancies between child and parent reports have often been regarded as "methodological error" and have led to misconceived arguments about who is "right." The aims of this review are to provide an overview and update to help understand the relation between children's self-report of their symptoms and HRQOL and parents' proxy-reports, the circumstances in which informant discrepancies might be expected, and potential reasons for these discrepancies. Discrepancies can be summarized in relation to characteristics of the child, the adult and the HRQOL domain being measured. We conclude that informant discrepancy is not simply an irritating measurement error, but also has its clinical implications. We argue that parents and children base their judgments of pediatric HRQOL on different information and, as such, comprehensive evaluation needs to take account of both perspectives. This perspective has implications for the design of clinical trials and necessitates routine collection of data from both sources in clinical research and practice. © 2013 Springer-Verlag Berlin Heidelberg. Source


Hassan H.M.A.,University of Manchester | Hassan H.M.A.,University of Sheffield
Chemical Communications | Year: 2010

Lactams are an important class of compounds owing to their presence in numerous biologically active molecules, both natural and non-natural. They are also highly versatile intermediates that can be elaborated into interesting compounds for potential use in organic and medicinal chemistry endeavors. In this feature article, the reader will be given a background to olefin metathesis, followed by concise discussions (with selected examples) of recent applications of ring-closing metathesis to form lactams and macrolactams from acyclic diene precursors, an area which continues to yield attractive applications in the chemical literature, en route or in the final step to the target molecules. © 2010 The Royal Society of Chemistry. Source


Butlin R.K.,University of Sheffield
Genetica | Year: 2010

The process of speciation begins with genomically-localised barriers to gene exchange associated with loci for local adaptation, intrinsic incompatibility or assortative mating. The barrier then spreads until reproductive isolation influences the whole genome. The population genomics approach can be used to identify regions of reduced gene flow by detecting loci with greater differentiation than expected from the average across many loci. Recently, this approach has been used in several systems. I review these studies, concentrating on the robustness of the approach and the methods available to go beyond the simple identification of differentiated markers. Population genomics has already contributed significantly to understanding the balance between gene flow and selection during the evolution of reproductive isolation and has great future potential both in genome species and in non-model organisms. © Springer Science+Business Media B.V. 2008. Source
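
The outlier-scan logic lends itself to a cartoon example: simulate mostly undifferentiated loci plus a handful of strongly differentiated "barrier" loci, compute a simple two-population Fst per locus, and flag values far above the genome-wide background. The data and the quantile threshold below are invented; real scans use more careful null distributions.

```python
# Cartoon population-genomics outlier scan: per-locus Fst for two populations,
# flagging loci far above the genome-wide average. All data simulated.
import numpy as np

rng = np.random.default_rng(7)
n_loci = 1000
p1 = rng.uniform(0.05, 0.95, n_loci)                         # pop 1 allele freqs
p2 = np.clip(p1 + rng.normal(0, 0.03, n_loci), 0.01, 0.99)   # mostly similar
outliers = rng.choice(n_loci, 10, replace=False)
p2[outliers] = np.clip(p1[outliers] + 0.5, 0.01, 0.99)       # "barrier" loci

p_bar = (p1 + p2) / 2
h_t = 2 * p_bar * (1 - p_bar)                 # expected total heterozygosity
h_s = p1 * (1 - p1) + p2 * (1 - p2)           # mean within-pop heterozygosity
fst = (h_t - h_s) / h_t

threshold = np.quantile(fst, 0.99)            # simplistic empirical cutoff
flagged = np.where(fst > threshold)[0]
print(f"flagged {len(flagged)} loci; "
      f"{len(set(flagged) & set(outliers))} of 10 true outliers recovered")
```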


Weetman A.P.,University of Sheffield
Endocrinology and Metabolism Clinics of North America | Year: 2014

Thyroid abnormalities and nonthyroidal illness complicate human immunodeficiency virus (HIV) infection. Among the effects that result from HIV and other opportunistic infections, distinctive features of HIV infection include early lowering of reverse tri-iodothyronine (rT3) levels, with normal free T3 levels. Later, some patients develop an isolated low free thyroxine level. After highly active antiretroviral therapy, the immune system reconstitutes in a way that leads to dysregulation of the autoimmune response and the appearance of Graves disease in 1% to 2% of patients. Opportunistic thyroid infections with unusual organisms are most commonly asymptomatic, but can lead to acute or subacute thyroiditis. © 2014 Elsevier Inc. Source


Tucker A.S.,Kings College London | Fraser G.J.,University of Sheffield
Seminars in Cell and Developmental Biology | Year: 2014

This review considers the diversity observed during both the development and evolution of tooth replacement throughout the vertebrates in a phylogenetic framework from basal extant chondrichthyan fish and more derived teleost fish to mammals. We illustrate the conservation of the tooth regeneration process among vertebrate clades, where tooth regeneration refers to multiple tooth successors formed de novo for each tooth position in the jaws from a common set of retained dental progenitor cells. We discuss the conserved genetic mechanisms that might be modified to promote morphological diversity in replacement dentitions. We review current research and recent progress in this field during the last decade that have promoted our understanding of tooth diversity in an evolutionary developmental context, and show how tooth replacement and dental regeneration have impacted the evolution of the tooth-jaw module in vertebrates. © 2014 Elsevier Ltd. Source


Cunliffe V.T.,University of Sheffield
Wiley Interdisciplinary Reviews: Systems Biology and Medicine | Year: 2015

A wide range of developmental, nutritional, environmental, and social factors affect the biological activities of epigenetic mechanisms. These factors change spatiotemporal patterns of gene expression in a variety of different ways and bring significant impacts to bear on development, physiology, and disease risk throughout the life course. Abundant evidence demonstrates that behavioral stressors and adverse nutritional conditions are particularly potent inducers of epigenetic changes and enhancers of chronic disease risks. Recent insights from both human clinical studies and research with model organisms further indicate that such experience-dependent changes to the epigenome can be transmitted through the germline across multiple generations, with important consequences for the heritability of both adaptive and maladaptive phenotypes. Epigenetics research thus offers many possibilities for developing informative biomarkers of acquired chronic disease risk and determining the effectiveness of preventive and therapeutic interventions. Moreover, the experience-sensitive nature of these disease risks raises important questions about societal and individual responsibilities for the prevention of ill-health and the promotion of well-being during development, across the life course and between generations. Better understanding of how epigenetic mechanisms regulate developmental plasticity and mediate the biological embedding of chronic disease risks is therefore likely to shed important new light on the nature of the pathophysiological mechanisms linking social and health inequalities, and will help to inform public policy initiatives in this area. © 2015 Wiley Periodicals, Inc. Source


Alvarez M.A.,University of Manchester | Lawrence N.D.,University of Sheffield
Journal of Machine Learning Research | Year: 2011

Recently there has been an increasing interest in regression methods that deal with multiple outputs. This has been motivated partly by frameworks like multitask learning, multisensor networks or structured output data. From a Gaussian processes perspective, the problem reduces to specifying an appropriate covariance function that, whilst being positive semi-definite, captures the dependencies between all the data points and across all the outputs. One approach to account for non-trivial correlations between outputs employs convolution processes. Under a latent function interpretation of the convolution transform, we establish dependencies between output variables. The main drawbacks of this approach are the associated computational and storage demands. In this paper we address these issues. We present different efficient approximations for dependent output Gaussian processes constructed through the convolution formalism. We exploit the conditional independencies present naturally in the model. This leads to a form of the covariance similar in spirit to the so-called PITC and FITC approximations for a single output. We show experimental results with synthetic and real data; in particular, we show results in school exam score prediction, pollution prediction and gene expression data. © 2011 Mauricio A. Álvarez and Neil D. Lawrence. Source
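
A minimal numpy sketch of the shared latent function idea: two outputs are scaled versions of one latent function, and a cross-output covariance of the form B[i,j]·k(x,x′) lets observations of either output inform predictions of the other. This is instantaneous mixing, the simplest degenerate case of the convolution construction; the sparse PITC/FITC-style approximations developed in the paper are not implemented.

```python
# Two-output GP from one shared latent function (instantaneous mixing).
# Exact inference on toy data; pure numpy.
import numpy as np

def rbf(a, b, ell=0.3):
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

s = np.array([1.0, 0.7])   # sensitivity of each output to the latent function

def multi_cov(x1, out1, x2, out2):
    """Covariance between observations of outputs out1/out2 at inputs x1/x2."""
    return np.outer(s[out1], s[out2]) * rbf(x1, x2)

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 30)
out = rng.integers(0, 2, 30)                  # which output each point observes
f = np.sin(6 * x)                             # shared latent function
y = s[out] * f + 0.05 * rng.normal(size=30)

K = multi_cov(x, out, x, out) + 0.05**2 * np.eye(30)
alpha = np.linalg.solve(K, y)

# Predict output 1 on a grid, borrowing strength from BOTH outputs' data.
xs = np.linspace(0, 1, 5)
Ks = multi_cov(xs, np.ones(5, dtype=int), x, out)
mean = Ks @ alpha
print(np.round(mean, 3))
print(np.round(s[1] * np.sin(6 * xs), 3))     # ground truth for comparison
```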


Towers M.,University of Sheffield | Wolpert L.,University College London | Tickle C.,University of Bath
Current Opinion in Cell Biology | Year: 2012

The developing limb is one of the first systems where it was proposed that a signalling gradient is involved in pattern formation. This gradient for specifying positional information across the antero-posterior axis is based on Sonic hedgehog signalling from the polarizing region. Recent evidence suggests that Sonic hedgehog signalling also specifies positional information across the antero-posterior axis by a timing mechanism acting in parallel with graded signalling. The progress zone model for specifying proximo-distal pattern, involving timing to provide cells with positional information, continues to be challenged, and there is further evidence that graded signalling by retinoic acid specifies the proximal part of the limb. Other recent papers present the first evidence that gradients of signalling by Wnt5a and FGFs govern cell behaviour involved in outgrowth and morphogenesis of the developing limb. © 2011 Elsevier Ltd. Source


Susmel L.,University of Sheffield
International Journal of Fatigue | Year: 2014

This paper summarises an attempt to propose a unifying design curve suitable for estimating fatigue damage in un-notched plain and short-fibre/particle reinforced concretes subjected to in-service uniaxial cyclic loading. The reference fatigue curve was determined by post-processing about 1500 experimental results taken from the literature and generated by testing both plain and short-fibre/particle reinforced concretes cyclically loaded either in tension, in tension/compression, in compression, or in bending. Further, the effects on the fatigue behaviour of concretes of both lateral static loading and stress concentrators were investigated. The most relevant peculiarity of the proposed design methodology is that the mean stress effect is directly taken into account by using the maximum stress in the cycle under either tension/compression or bending, and the absolute value of the minimum stress under compression (the above design stresses being normalised through the corresponding static strength). This strategy resulted in a great simplification of the problem, allowing all the considered experimental results to fall within a narrow scatter band. The high level of accuracy which can be reached through the proposed unifying fatigue assessment methodology strongly supports the idea that this approach can successfully be used in situations of practical interest to design concretes against fatigue, while remarkably reducing the time and costs associated with the design process. © 2013 Elsevier Ltd. All rights reserved. Source


Rees M.,University of Sheffield
Ecology Letters | Year: 2013

Many experimental studies have quantified how the effects of competition vary with habitat productivity, with the results often interpreted in terms of the ideas of Grime and Tilman. Unfortunately, these ideas are not relevant to many experiments, and so we develop an appropriate resource competition model and use this to explore the effects of habitat productivity on the intensity of competition. Several mechanisms influencing the productivity-competition intensity relationship are identified, and these mechanisms are explored using two classic data sets. In both cases, there is good agreement between the model predictions and empirical patterns. Quantification of the mechanisms identified by the models will allow the development of a simple predictive theory linking measures of the intensity of competition with ecosystem-level properties. © 2012 Blackwell Publishing Ltd/CNRS. Source


Phillips K.L.E.,Sheffield Hallam University | Jordan-Mahy N.,Sheffield Hallam University | Nicklin M.J.H.,University of Sheffield | Le Maitre C.L.,Sheffield Hallam University
Annals of the Rheumatic Diseases | Year: 2013

Objectives: Interleukin 1 (IL-1) is potentially important in the pathogenesis of intervertebral disc (IVD) degeneration, increasing production of matrix degradation enzymes and inhibiting matrix synthesis. Although IL-1 polymorphisms have been linked to increased risk of IVD degeneration, it is still unclear whether IL-1 drives IVD degeneration in vivo or is a secondary feature of degeneration. Here, we investigated whether IVD degeneration could be induced spontaneously by the removal of the natural inhibitor of IL-1 (IL-1 receptor antagonist) in mice that lack a functional IL-1rn gene. Methods: Histological staining and immunohistochemistry were performed on BALB/c IL-1rn+/+ and IL-1rn-/- mice to examine degeneration and to localise and detect production of IL-1, matrix metalloproteinase (MMP)3, MMP7 and a disintegrin and metalloproteinase with thrombospondin motifs (ADAMTS)4. In addition, IVD cells were isolated using collagenase and their proliferation potential determined. Results: IL-1rn-/- knockout mice displayed typical features of human disc degeneration: loss of proteoglycan and normal collagen structure and increased expression of the matrix-degrading enzymes MMP3, MMP7 and ADAMTS4. The histological grade of degeneration increased in IL-1rn-/- mice, and this was more evident in older mice. In addition, IVD cells isolated from IL-1rn-/- mice displayed reduced proliferation potential. Conclusions: Here, we show that IL-1rn-/- mice develop spinal abnormalities that resemble characteristic features associated with human disc degeneration. The current evidence is consistent with a role for IL-1 in the pathogenesis of IVD degeneration. The imbalance between IL-1 and IL-1Ra observed during human IVD degeneration could therefore be a causative factor in the degeneration of the IVD and, as such, is an appropriate pharmaceutical target for inhibiting degeneration. Source


Malicki J.,University of Sheffield | Avidor-Reiss T.,University of Toledo
Organogenesis | Year: 2014

The primary cilium compartmentalizes a tiny fraction of the cell surface and volume, yet many proteins are highly enriched in this area and so efficient mechanisms are necessary to concentrate them in the ciliary compartment. Here we review mechanisms that are thought to deliver protein cargo to the base of cilia and are likely to interact with ciliary gating mechanisms. Given the immense variety of ciliary cytosolic and transmembrane proteins, it is almost certain that multiple, albeit frequently interconnected, pathways mediate this process. It is also clear that none of these pathways is fully understood at the present time. Mechanisms that are discussed below facilitate ciliary localization of structural and signaling molecules, which include receptors, G-proteins, ion channels, and enzymes. These mechanisms form a basis for every aspect of cilia function in early embryonic patterning, organ morphogenesis, sensory perception and elsewhere. © 2014 Landes Bioscience. Source


Corfe B.M.,University of Sheffield
Molecular BioSystems | Year: 2012

The short-chain fatty acid butyrate is classically referred to as an inhibitor of histone deacetylases (HDACi); however, evidence from direct assays is both sparse and contradictory. This paper assesses the strength of the historical evidence, potential gaps, inadequacies and simplifications in the butyrate-as-HDACi hypothesis. An alternate model to explain the action of butyrate is proposed wherein butyrate acts as a product inhibitor of deacetylation. The model makes testable predictions which may enable future determination of the mode of action of this and other SCFAs. © 2012 The Royal Society of Chemistry. Source
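
For reference, a generic competitive product-inhibition rate law of the kind such a model invokes (a textbook sketch, not necessarily the paper's exact formulation; K_i here is a hypothetical inhibition constant for butyrate) is:

\[
v \;=\; \frac{V_{\max}\,[\mathrm{S}]}{K_m\left(1 + [\mathrm{butyrate}]/K_i\right) + [\mathrm{S}]} ,
\]

under which deacetylation slows as butyrate accumulates, without butyrate acting as a classical active-site HDAC inhibitor.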


The present paper investigates the different ways of using the Modified Wöhler Curve Method (MWCM) to perform the fatigue assessment of steel and aluminium welded joints subjected to in-service variable amplitude (VA) multiaxial load histories. Thanks to its specific features, the above critical plane approach can efficiently be applied in terms of nominal, hot-spot, or local quantities, that is, by using any of the stress analysis strategies suggested by the Design Recommendations of the International Institute of Welding (IIW). The MWCM can also be used efficiently along with the so-called Theory of Critical Distances applied in the form of the Point Method (PM). The accuracy of the different formalisations of the MWCM investigated in the present paper was systematically checked against a large number of experimental results taken from the literature, generated by testing, under VA biaxial nominal loading, welded samples having different geometries. This systematic validation exercise allowed us to prove that our multiaxial fatigue criterion is successful in designing welded joints against VA multiaxial fatigue, and this holds true independently of both the definition adopted to calculate the necessary stress quantities and the complexity of the assessed load history. © 2014 Elsevier Ltd. All rights reserved. Source
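
In schematic form, the usual ingredients of the MWCM are the shear stress amplitude τ_a on the critical plane (the plane experiencing the maximum τ_a), an effective stress ratio capturing the influence of the normal/mean stress, and a ρ-dependent modified Wöhler curve (a sketch of the method's general shape, not the exact equations calibrated in this paper):

\[
\rho_{\mathrm{eff}} \;=\; \frac{\sigma_{n,\max}}{\tau_a},
\qquad
\tau_a(N_f) \;=\; \tau_{A,\mathrm{ref}}(\rho_{\mathrm{eff}})
\left(\frac{N_A}{N_f}\right)^{1/k_\tau(\rho_{\mathrm{eff}})} .
\]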


Use of preference-based measures (PBM) of health-related quality of life (HRQoL) is increasing. PBMs allow the calculation of quality-adjusted life years, which can be used in decision making. Research in the field of pediatric PBMs is lacking. This work is the first stage in the development of a new generic, pediatric PBM of HRQoL. Seventy-four qualitative interviews were undertaken with children to find out how health affects their lives. Sampling was purposive, balancing primarily for health within age, with gender and ethnicity as secondary criteria. Interviews covered a wide range of health conditions, and children were successfully able to articulate how their health affected their lives. Eleven dimensions of HRQoL were identified, covering social, emotional, and physical aspects. These are in common with other generic pediatric HRQoL measures, but differ by including feeling jealous and feeling tired/weak and by not including dimensions related to parental, family, or behavioral issues. Source


Lin L.,University of Sheffield | Zhang S.,University of Exeter
Journal of Materials Chemistry | Year: 2012

Functional-group-free graphene materials with high electronic conductivities were prepared via solvothermal reaction of solid sulphur (S) and graphene oxide (GO) in a H2O-NMP or H2O-DMF (1:1 in volume ratio) solvent at 110 °C for 10 h. Ultraviolet-visible (UV/Vis) analysis revealed that S exhibited strong reductive ability in a boiling GO aqueous solution. Its reducing effect could be further enhanced by using NMP or DMF as a surfactant to adjust the solvent surface tension close to the surface energy of graphene. UV/Vis and X-ray photoelectron spectroscopy (XPS) results confirmed that the electronic structure of graphene had been completely restored and the oxygen contents reduced remarkably. No graphitic stacking between the reduced GO (RGO) sheets was found. As-prepared RGO products also exhibited good dispersibility in various solvents. Moreover, they were inter-convertible between different forms: agglomerates, simply air-dried bulk solids and films, and well-dispersed suspensions. © 2012 The Royal Society of Chemistry. Source


Luminescent complexes of the [M(diimine)(CN)4]2- family (M = Ru, Os), and their polynuclear analogues, are structurally versatile components for preparation of supramolecular assemblies based on interaction of the cyanide groups with other metal ions or metal complexes via direct coordination, hydrogen bonding, or halogen bonding. In addition their environment-dependent photophysical properties (solvatochromism and metallochromism), and the ability of the CN groups to act as reporters for excited state behaviour via time-resolved IR spectroscopy, make these fragments spectroscopically as well as structurally versatile. This Perspective article summarises work from the author's group over the last decade on the structures and photophysical properties of these fascinating complexes and their supramolecular assemblies. © The Royal Society of Chemistry 2010. Source


Mahmoud M.M.,University of Sheffield
Circulation Research | Year: 2016

RATIONALE: Blood flow-induced shear stress controls endothelial cell (EC) physiology during atherosclerosis via transcriptional mechanisms that are incompletely understood. The mechanosensitive transcription factor TWIST is expressed during embryogenesis but its role in EC responses to shear stress and focal atherosclerosis is unknown. OBJECTIVE: Investigate whether TWIST regulates endothelial responses to shear stress during vascular dysfunction and atherosclerosis, and compare TWIST function in vascular development and disease. METHODS AND RESULTS: The expression and function of TWIST1 was studied in EC in both developing vasculature and during the initiation of atherosclerosis. In zebrafish, twist was expressed in early embryonic vasculature where it promoted angiogenesis by inducing EC proliferation and migration. In adult porcine and murine arteries, TWIST1 was expressed preferentially at low shear stress regions as evidenced by qPCR and en face staining. Moreover, studies of experimental murine carotid arteries and cultured EC revealed that TWIST1 was induced by low shear stress via a GATA4-dependent transcriptional mechanism. Gene silencing in cultured EC and EC-specific genetic deletion in mice demonstrated that TWIST1 promoted atherosclerosis by inducing inflammation and enhancing EC proliferation associated with vascular leakiness. CONCLUSIONS: TWIST expression promotes developmental angiogenesis by inducing EC proliferation and migration. In addition to its role in development, TWIST is expressed preferentially at low shear stress regions of adult arteries where it promotes atherosclerosis by inducing EC proliferation and inflammation. Thus pleiotropic functions of TWIST control vascular disease as well as development. © 2016 American Heart Association, Inc. Source


Sudholt D.,University of Sheffield
IEEE Transactions on Evolutionary Computation | Year: 2013

In this paper a new method for proving lower bounds on the expected running time of evolutionary algorithms (EAs) is presented. It is based on fitness-level partitions and an additional condition on transition probabilities between fitness levels. The method is versatile, intuitive, elegant, and very powerful. It yields exact or near-exact lower bounds for LO, OneMax, long k-paths, and all functions with a unique optimum. Most lower bounds are very general; they hold for all EAs that only use bit-flip mutation as variation operator, i.e., for all selection operators and population models. The lower bounds are stated with their dependence on the mutation rate. These results have very strong implications. They allow us to determine the optimal mutation-based algorithm for LO and OneMax, i.e., the algorithm that minimizes the expected number of fitness evaluations. This includes the choice of the optimal mutation rate. © 1997-2012 IEEE. Source
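
For orientation, the classical fitness-level argument gives the upper bound below, with s_i a lower bound on the probability of leaving level A_i of the partition; the paper's method works in the reverse direction, with u_i an upper bound on the leaving probability and an additional condition on the distribution γ_{i,j} of jump targets, whose strength is measured by a parameter χ (stated here only schematically, not as the paper's precise theorem):

\[
\mathrm{E}[T] \;\le\; \sum_{i=1}^{m-1} \frac{1}{s_i},
\qquad
\mathrm{E}[T] \;\ge\; \chi \sum_{i=1}^{m-1} \frac{1}{u_i}.
\]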


Hill J.G.,University of Sheffield | Legon A.C.,University of Bristol
Physical Chemistry Chemical Physics | Year: 2015

Benchmark quality structures and interaction energies have been produced using explicitly correlated coupled cluster methods for a systematic series of hydrogen and halogen bonded complexes: B⋯HCCH, B⋯HCl and B⋯ClF, with six different Lewis bases B. Excellent agreement with experimental structures is observed, verifying the method used to deduce the equilibrium deviation from collinearity of the intermolecular bond via rotational spectroscopy. This level of agreement also suggests that the chosen theoretical method can be employed when experimental equilibrium data are not available. The application of symmetry adapted perturbation theory reveals differences in the underlying mechanisms of interaction for hydrogen and halogen bonding, providing insights into the differences in non-linearity. In the halogen bonding case it is shown that the dispersion term is approximately equal to the overall interaction energy, highlighting the importance of choosing the correct theoretical method for this type of interaction. © the Owner Societies 2015. Source
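
Interaction-energy decompositions of this kind can be explored (at a much lower level of theory than the explicitly correlated coupled cluster benchmarks used here) with open-source packages such as Psi4; the OC⋯ClF-like geometry below is rough and purely illustrative:

    import psi4

    # Approximate, hypothetical halogen-bonded dimer; monomers separated by "--"
    dimer = psi4.geometry("""
    0 1
    C   0.000  0.000  0.000
    O   0.000  0.000  1.128
    --
    0 1
    Cl  0.000  0.000 -2.900
    F   0.000  0.000 -4.530
    units angstrom
    """)

    psi4.set_options({"basis": "jun-cc-pVDZ", "scf_type": "df"})
    e_int = psi4.energy("sapt0")  # electrostatics, exchange, induction, dispersion
    print(e_int)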


Chown S.L.,Stellenbosch University | Gaston K.J.,University of Sheffield
Biological Reviews | Year: 2010

Body size is a key feature of organisms and varies continuously because of the effects of natural selection on the size-dependency of resource acquisition and mortality rates. This review provides a critical and synthetic overview of body size variation in insects from a predominantly macroecological (large-scale temporal and spatial) perspective. Because of the importance of understanding the proximate determinants of adult size, it commences with a brief summary of the physiological mechanisms underlying adult body size and its variation, based mostly on findings for the model species Drosophila melanogaster and Manduca sexta. Variation in nutrition and temperature have variable effects on critical weight, the interval to cessation of growth (or terminal growth period) and growth rates, so influencing final adult size. Ontogenetic and phylogenetic variation in size, compensatory growth, scaling at the intra- and interspecific levels, sexual size dimorphism, and body size optimisation are then reviewed in light of their influences on individual and species body size frequency distributions. Explicit attention is given to evolutionary trends, including gigantism, Cope's rule and the rates at which size change has taken place, and to temporal ecological trends such as variation in size with succession and size-selectivity during the invasion process. Large-scale spatial variation in size at the intraspecific, interspecific and assemblage levels is considered, with special attention being given to the mechanisms proposed to underlie clinal variation in adult body size. Finally, areas particularly in need of additional research are identified. © 2009 Cambridge Philosophical Society. Source


Objective: To investigate the phonetic and phonological parameters of speech production associated with cleft palate in single words and in sentence repetition in order to explore the impact of connected speech processes, prosody, and word juncture on word production across contexts. Participants: Two boys (aged 9 years 5 months and 11 years 0 months) with persisting speech impairments related to a history of unilateral cleft lip and palate formed the main focus of the study; three typical adult male speakers provided control data. Method: Audio, video, and electropalatographic recordings were made of the participants producing single words and repeating two sets of sentences. The data were transcribed and the electropalatographic recordings were analyzed to explore lingual-palatal contact patterns across the different speech conditions. Acoustic analysis was used to further inform the perceptual analysis and to make specific durational measurements. Results: The two boys' speech production differed across the speech conditions. Both boys showed typical and atypical phonetic features in their connected speech production. One boy, although often unintelligible, resembled the adult speakers more closely prosodically and in his specific connected speech behaviors at word boundaries. The second boy produced developmentally atypical phonetic adjustments at word boundaries that appeared to promote intelligibility at the expense of naturalness. Conclusion: For older children with persisting speech impairments, it is particularly important to examine specific features of connected speech production, including word juncture and prosody. Sentence repetition data provide useful information to this end, but further investigations encompassing detailed perceptual and instrumental analysis of real conversational data are warranted. © Copyright 2013 American Cleft Palate-Craniofacial Association. Source


Tilling C.,University of Sheffield
Medical decision making : an international journal of the Society for Medical Decision Making | Year: 2010

The time tradeoff (TTO) method of preference elicitation allows respondents to value a state as worse than dead, generally either through the Torrance protocol or the Measurement and Valuation of Health (MVH) protocol. Both of these protocols have significant weaknesses: valuations for states worse than dead (SWD) are elicited through procedures different from those for states better than dead (SBD), and negative values can be extremely negative. The aims of this study were to provide an account of the different TTO designs for SWD, to identify any alternatives to the MVH and Torrance approaches, and to consider the merits of the approaches identified. Medline was searched to identify all health state valuation studies employing TTO, and the ways in which SWD were handled were recorded. Furthermore, to ensure that there are no unpublished but feasible TTO variants, the authors developed a theoretical framework for identifying all potential variants. The search produced 593 hits, of which 218 were excluded. Of the remaining 375 articles, only 29 included protocols for SWD. Of these, 23 used the MVH protocol and 4 used the Torrance protocol. The other 2 used 1 protocol for both SBD and SWD, one making use of lead time and the other using a 2-stage procedure with chaining. The systematic framework did not identify any alternatives to the Torrance and MVH protocols that were superior to the lead time approach. Few studies elicit values for SWD. The lead time approach is a potential alternative to the Torrance and MVH protocols. Key words: QALY; states worse than dead; health state valuation; preference elicitation. Source
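
For concreteness, the lead time approach mentioned above values a state by placing a period of full health (the lead time, l) before the period t spent in the state; if the respondent is indifferent at x years of full health in total, the state's value under the usual scoring is (a sketch of the standard formulation):

\[
u \;=\; \frac{x - l}{t}, \qquad -\,\frac{l}{t} \;\le\; u \;\le\; 1 ,
\]

so SBD and SWD are valued within a single procedure and the most negative attainable value is bounded by the ratio of lead time to state duration.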


Photolithographic techniques have been used to fabricate polymer brush micro- and nanostructures. On exposure to UV light with a wavelength of 244 nm, halogens were selectively removed from films of chloromethylphenyltrichlorosilane and 3-(2-bromoisobutyramido)propyl-triethoxysilane on silicon dioxide. Patterning was achieved at the micrometer scale, by using a mask in conjunction with the incident laser beam, and at the nanometer scale, by utilizing interferometric lithography (IL). Friction force microscopy images of patterned surfaces exhibited frictional contrast due to removal of the halogen but no topographical contrast. In both cases the halogenated surface was used as an initiator for surface atom-transfer radical polymerization. Patterning of the surface by UV lithography enabled the definition of patterns of initiator from which micro- and nanostructured poly[oligo(ethylene glycol)methacrylate] bottle brushes were grown. Micropatterned brushes formed on both surfaces exhibited excellent resistance to protein adsorption, enabling the formation of protein patterns. Using IL, brush structures were formed that covered macroscopic areas (approximately 0.5 cm2) but exhibited a full width at half maximum height as small as 78 nm, with a period of 225 nm. Spatially selective photolytic removal of halogens that are immobilized on a surface thus appears to be a simple, rapid, and versatile method for the formation of micro- and nanostructured polymer brushes and for the control of protein adsorption. Source
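
The quoted nanoscale period follows directly from the standard two-beam interference relation (assuming the usual two-beam geometry, with θ the half-angle between the beams):

\[
p \;=\; \frac{\lambda}{2\sin\theta}
\quad\Rightarrow\quad
\theta \;=\; \arcsin\!\left(\frac{244\ \mathrm{nm}}{2 \times 225\ \mathrm{nm}}\right) \;\approx\; 33^{\circ} .
\]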


Brazier J.,University of Sheffield
British Journal of Psychiatry | Year: 2010

The EQ-5D is a widely used questionnaire for calculating quality-adjusted life-years (QALYs) for assessing cost-effectiveness in healthcare. It reflects the impact of common mental health conditions such as mild to moderate depression but seems to be more problematic for use in people with psychotic and severe and complex nonpsychotic disorders. Source


Gene expression profiling (GEP) and expanded immunohistochemistry (IHC) tests aim to improve decision-making relating to adjuvant chemotherapy for women with early breast cancer. The aim of this report is to assess the clinical effectiveness and cost-effectiveness of nine GEP and expanded IHC tests compared with current prognostic tools in guiding the use of adjuvant chemotherapy in patients with early breast cancer in England and Wales. The nine tests are BluePrint, Breast Cancer Index (BCI), IHC4, MammaPrint, Mammostrat, NPI plus (NPI+), OncotypeDX, PAM50 and Randox Breast Cancer Array. Databases searched included MEDLINE, MEDLINE In-Process & Other Non-Indexed Citations, EMBASE and The Cochrane Library. Databases were searched from January 2009 to May 2011 for the OncotypeDX and MammaPrint tests and from January 2002 to May 2011 for the other tests. A systematic review of the evidence on clinical effectiveness (analytical validity, clinical validity and clinical utility) and cost-effectiveness was conducted. An economic model was developed to evaluate the cost-effectiveness of adjuvant chemotherapy treatment guided by four of the nine tests (OncotypeDX, IHC4, MammaPrint and Mammostrat) compared with current clinical practice in England and Wales, using clinicopathological parameters, in women with oestrogen receptor-positive (ER+), lymph node-negative (LN-), human epidermal growth factor receptor type 2-negative (HER2-) early breast cancer. The literature searches for clinical effectiveness identified 5993 citations, of which 32 full-text papers or abstracts (30 studies) satisfied the criteria for the effectiveness review. A narrative synthesis was performed. Evidence for OncotypeDX supported the prognostic capability of the test. There was some evidence on the impact of the test on decision-making and to support the case that OncotypeDX predicts chemotherapy benefit; however, few studies were UK based and limitations in relation to study design were identified. Evidence for MammaPrint demonstrated that the test score was a strong independent prognostic factor, but the evidence is non-UK based and is based on small sample sizes. Evidence on the Mammostrat test showed that the test was an independent prognostic tool for women with ER+, tamoxifen-treated breast cancer. The three studies appeared to be of reasonable quality and provided data from a UK setting (one study). One large study reported on clinical validity of the IHC4 test, with IHC4 score a highly significant predictor of distant recurrence. This study included data from a UK setting and appeared to be of reasonable quality. Evidence for the remaining five tests (PAM50, NPI+, BCI, BluePrint and Randox) was limited. The economic analysis suggests that treatment guided using IHC4 has the greatest potential to be cost-effective at a £20,000 threshold, given the low cost of the test; however, further research is needed on the analytical validity and clinical utility of IHC4, and the exact cost of the test needs to be confirmed. Current limitations in the evidence base produce significant uncertainty in the results. OncotypeDX has a more robust evidence base, but further evidence on its impact on decision-making in the UK and the predictive ability of the test in an ER+, LN-, HER2- population receiving current drug regimens is needed. For MammaPrint and Mammostrat there were significant gaps in the available evidence and the estimates of cost-effectiveness produced were not considered to be robust by the External Assessment Group.
Methodological weaknesses in the clinical evidence base relate to heterogeneity of patient cohorts and issues arising from the retrospective nature of the evidence. Further evidence is required on the clinical utility of all of the tests and on UK-based populations. A key area of uncertainty relates to whether the tests provide prognostic or predictive ability. The clinical evidence base for OncotypeDX is considered to be the most robust. The economic analysis suggested that treatment guided using IHC4 has the most potential to be cost-effective at a threshold of £20,000; however, the evidence base to support IHC4 needs significant further research. PROSPERO 2011:CRD42011001361, available from www.crd.york.ac.uk/PROSPERO/display_record.asp?ID=CRD42011001361. Source


Storey R.F.,University of Sheffield
Thrombosis and Haemostasis | Year: 2011

The important role of the P2Y12 receptor in amplification of platelet activation and associated responses and the limitations associated with clopidogrel therapy have led to the development of novel inhibitors of this receptor. Three reversibly-binding P2Y12 inhibitors are in phase 3 development: ticagrelor, cangrelor and elinogrel. The pharmacology and clinical trial data for each of these inhibitors are discussed and compared with relevant data for the thienopyridines clopidogrel and prasugrel. © Schattauer 2011. Source


Haslegrave J.,University of Sheffield
Journal of Combinatorial Theory. Series B | Year: 2014

Vince and Wang [6] showed that the average subtree density of a series-reduced tree is between 1/2 and 3/4, answering a conjecture of Jamison [4]. They ask under what conditions a sequence of such trees may have average subtree density tending to either bound; we answer these questions by giving simple necessary and sufficient conditions in each case. © 2014 Elsevier Inc. Source
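
The quantity in question is easy to compute by brute force for small trees (a sketch using networkx; a subtree of a tree is any connected induced subgraph, and the density is the mean subtree order divided by the number of vertices):

    import itertools
    import networkx as nx

    def average_subtree_density(T):
        """Mean order of the subtrees of tree T, divided by the order of T."""
        n = T.number_of_nodes()
        orders = [r for r in range(1, n + 1)
                  for s in itertools.combinations(T.nodes, r)
                  if nx.is_connected(T.subgraph(s))]
        return sum(orders) / (len(orders) * n)

    star = nx.star_graph(4)               # K_{1,4}: series-reduced (no degree-2 vertex)
    print(average_subtree_density(star))  # 0.52, strictly between 1/2 and 3/4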


Staton C.A.,University of Sheffield
Biochemical Society Transactions | Year: 2011

Class 3 semaphorins (Sema3) are a family of secreted proteins that were originally identified as axon guidance factors mediating their signal transduction by forming complexes with neuropilins and plexins. However, the wide expression pattern of Sema3 suggested additional functions other than those associated with the nervous system, and indeed many studies have now indicated that Sema3 proteins and their receptors play a role in angiogenesis. The present review specifically focuses on recent evidence for this role in both physiological and pathological angiogenesis. ©The Authors Journal compilation ©2011 Biochemical Society. Source


Simons M.J.P.,University of Sheffield
Ageing Research Reviews | Year: 2015

Multiple studies have demonstrated that telomere length predicts mortality and that telomeres shorten with age. Although rarely acknowledged, these associations do not dictate causality. I review telomerase knockout and overexpression studies and find little support that telomeres cause aging. In addition, the causality hypothesis assumes that there is a critical telomere length at which senescence is induced. This generates the prediction that variance in telomere length decreases with age. In contrast, using meta-analysis of human data, I find no such decline. Inferring the causal involvement of telomeres in aging from current knowledge is therefore speculative and could hinder scientific progress. © 2015 The Author. Source


Borycki A.-G.,University of Sheffield
Cell Adhesion and Migration | Year: 2013

The importance of laminin-containing basement membranes (BM) for adult muscle function is well established, in particular due to the severe phenotype of congenital muscular dystrophies in patients with mutations disrupting the BM-muscle cell interaction. Developing muscles in the embryo are also dependent on an intact BM. However, the processes controlled by BM-muscle cell interactions in the embryo are only beginning to be elucidated. In this review, we focus on the myotomal BM to illustrate the critical role of laminin-111 in BM assembly and function at the surface of embryonic muscle cells. The myotomal BM also provides an interesting paradigm to study the complex interplay between laminin-containing BM and growth factor-mediated signaling and activity. © 2013 Landes Bioscience. Source


Background: Previous research to develop a new generic paediatric health-related quality of life (HR-QOL) measure generated 11 dimensions of HR-QOL, covering physical, emotional and social functioning. These dimensions and their response scales were developed from interviews with children. Some of these dimensions have alternative wording choices. The measure is intended to be preference based so that it can be used in paediatric economic evaluation. Objectives: The aims of this research were to assess the performance of this new descriptive system in a general and clinical paediatric population, to determine the most appropriate wording for the dimensions and to refine the descriptive system to be amenable to health state valuation to make it suitable for use in economic evaluation. Methods: A sample of 247 children was recruited from general and clinical paediatric populations. Each child completed the descriptive system and data were collected to allow assessment of practicality (including response rates, completion rates and time to complete), content, face and construct validity, whether the child could self-complete and preferences for alternative wordings that could be used for dimensions. These data were used to inform the final choice of wording for dimensions, the scales used for each dimension and the reduction of dimensions to meet the constraints of health state valuation. Results: The descriptive system demonstrated good practicality and validity in both the general and clinical paediatric samples. The completion rates were excellent (>98%), the mean time to complete was low (3.8 minutes for the general and 5.3 minutes for the clinical sample) and there was evidence of face, content and construct validity. The descriptive system was able to demonstrate significant differences between the general and clinical samples and according to the level of health of children. 96% of the school sample and 85% of the clinical sample were able to self-complete. The final choice of wording for the 11 dimensions was determined by the preferences and comments of the children. To make it amenable for health state valuation, the number of dimensions was reduced from 11 to 9 by removing the dimensions 'jealous' and 'embarrassed'. Conclusions: The descriptive system performed well in both the general and the clinical populations, and the final descriptive system generates health states that are feasible for health state valuation. Further research is needed to value the final descriptive system by obtaining preference weights for each health state, thereby making the measure suitable for use in paediatric economic evaluation. © 2011 Adis Data Information BV. All rights reserved. Source


Parker R.J.,ETH Zurich | Goodwin S.P.,University of Sheffield
Monthly Notices of the Royal Astronomical Society | Year: 2012

Observations of binaries in clusters tend to be of visual binaries with separations of tens to hundreds of au. Such binaries are 'intermediates' and their destruction or survival depends on the exact details of their individual dynamical history. We investigate the stochasticity of the destruction of such binaries and the differences between the initial and processed populations using N-body simulations. We concentrate on Orion nebula cluster-like clusters, where the observed binary separation distribution ranges from 62 to 620 au. We find that, starting from the same initial binary population in statistically identical clusters, the number of intermediate binaries that are destroyed after 1 Myr can vary by a factor of >2, and that the resulting separation distributions can be statistically completely different in initially substructured clusters. We also find that the mass ratio distributions are altered (destroying more low mass-ratio systems), but not as significantly as the binary fractions or separation distributions. We conclude that finding very different intermediate (visual) binary populations in different clusters does not provide conclusive evidence that the initial populations were different. © 2012 The Authors Monthly Notices of the Royal Astronomical Society © 2012 RAS. Source


Askes H.,University of Sheffield | Aifantis E.C.,Aristotle University of Thessaloniki
International Journal of Solids and Structures | Year: 2011

In this paper, we discuss various formats of gradient elasticity and their performance in static and dynamic applications. Gradient elasticity theories provide extensions of the classical equations of elasticity with additional higher-order spatial derivatives of strains, stresses and/or accelerations. We focus on the versatile class of gradient elasticity theories whereby the higher-order terms are the Laplacian of the corresponding lower-order terms. One of the challenges of formulating gradient elasticity theories is to keep the number of additional constitutive parameters to a minimum. We start with discussing the general Mindlin theory, which in its most general form has 903 constitutive elastic parameters but which Mindlin reduced to three independent material length scales. Further simplifications are often possible. In particular, the Aifantis theory has only one additional parameter in statics and opens up a whole new field of analytical and numerical solution procedures. We also address how this can be extended to dynamics. An overview of length scale identification and quantification procedures is given. Finite element implementations of the most commonly used versions of gradient elasticity are discussed together with the variationally consistent boundary conditions. Details are provided for particular formats of gradient elasticity that can be implemented with simple, linear finite element shape functions. New numerical results show the removal of singularities in statics and dynamics, as well as the size-dependent mechanical response predicted by gradient elasticity. © 2011 Elsevier Ltd. All rights reserved. Source
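
For reference, the one-parameter Aifantis format referred to above is commonly written as

\[
\sigma_{ij} \;=\; C_{ijkl}\left(\varepsilon_{kl} - \ell^{2}\,\nabla^{2}\varepsilon_{kl}\right),
\]

where ℓ is the single additional internal length scale; the dynamic extensions add analogous Laplacian (micro-inertia) terms acting on the accelerations.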


Smith P.J.,University of Sheffield | Morrin A.,Dublin City University
Journal of Materials Chemistry | Year: 2012

Functional materials can be synthesised at the same time as their final device geometries are patterned. © 2012 The Royal Society of Chemistry. Source


Ong A.C.M.,University of Sheffield | Ong A.C.M.,Sheffield Kidney Institute | Devuyst O.,University of Zurich | Devuyst O.,Catholic University of Louvain | And 2 more authors.
The Lancet | Year: 2015

Autosomal dominant polycystic kidney disease is the most common inherited kidney disease and accounts for 7-10% of all patients on renal replacement therapy worldwide. Although first reported 500 years ago, this disorder is still regarded as untreatable and its pathogenesis is poorly understood despite much study. During the past 40 years, however, remarkable advances have transformed our understanding of how the disease develops and have led to rapid changes in diagnosis, prognosis, and treatment, especially during the past decade. This Review will summarise the key findings, highlight recent developments, and look ahead to the changes in clinical practice that will likely arise from the adoption of a new management framework for this major kidney disease. © 2015 Elsevier Ltd. Source


Ojovan M.I.,University of Sheffield | Lee W.E.,Imperial College London
Metallurgical and Materials Transactions A: Physical Metallurgy and Materials Science | Year: 2011

Glassy wasteforms currently being used for high-level radioactive waste (HLW) as well as for low- and intermediate-level radioactive waste (LILW) immobilization are discussed and their most important parameters are examined, along with a brief description of waste vitrification technology currently used worldwide. Recent developments in advanced nuclear wasteforms are described such as polyphase glass composite materials (GCMs) with higher versatility and waste loading. Aqueous performance of glassy materials is analyzed with a detailed analysis of the role of ion exchange and hydrolysis, and performance of irradiated glasses. © 2010 The Minerals, Metals & Materials Society and ASM International. Source


Feder J.L.,University of Notre Dame | Egan S.P.,University of Notre Dame | Nosil P.,University of Colorado at Boulder | Nosil P.,University of Sheffield
Trends in Genetics | Year: 2012

The emerging field of speciation genomics is advancing our understanding of the evolution of reproductive isolation from the individual gene to a whole-genome perspective. In this new view it is important to understand the conditions under which 'divergence hitchhiking' associated with the physical linkage of gene regions, versus 'genome hitchhiking' associated with reductions in genome-wide rates of gene flow caused by selection, can enhance speciation-with-gene-flow. We describe here a theory predicting four phases of speciation, defined by changes in the relative effectiveness of divergence and genome hitchhiking, and review empirical data in light of the theory. We outline future directions, emphasizing the need to couple next-generation sequencing with selection, transplant, functional genomics, and mapping studies. This will permit a natural history of speciation genomics that will help to elucidate the factors responsible for population divergence and the roles that genome structure and different forms of hitchhiking play in facilitating the genesis of new biodiversity. © 2012 Elsevier Ltd. Source


Goodeve A.C.,University of Sheffield
Blood Reviews | Year: 2010

The common autosomally inherited mucocutaneous bleeding disorder, von Willebrand disease (VWD), results from quantitative or qualitative defects in plasma von Willebrand factor (VWF). Mutation can affect VWF quantity or its functions mediating platelet adhesion and aggregation at sites of vascular damage and carrying pro-coagulant factor VIII (FVIII). Phenotype and genotype analysis in patients with the three VWD types has aided understanding of VWF structure and function. Investigation of patients with specific disease types has identified mutations in up to 70% of type 1 and 100% of type 3 VWD cases. Missense mutations predominate in type 1 VWD and act through mechanisms including rapid clearance and intracellular retention. Many mutations are incompletely penetrant and attributing pathogenicity is challenging. Other factors including blood group O contribute to low VWF level. Missense mutations affecting platelet- or FVIII-binding through a number of mechanisms are responsible for the four type 2 subtypes: 2A, 2B, 2M and 2N. In contrast, mutations resulting in a lack of VWF expression predominate in recessive type 3 VWD. This review explores the genetic basis of each VWD type, relating mutations identified to disease mechanism. Additionally, the utility of genetic analysis within the different disease types is explored. © 2010 Elsevier Ltd. Source


Tee K.L.,University of Manchester | Wong T.S.,University of Sheffield
Biotechnology Advances | Year: 2013

Genetic diversity creation is a core technology in directed evolution where a high quality mutant library is crucial to its success. Owing to its importance, the technology in genetic diversity creation has seen rapid development over the years and its application has diversified into other fields of scientific research. The advances in molecular cloning and mutagenesis since 2008 were reviewed. Specifically, new cloning techniques were classified based on their principles of complementary overhangs, homologous sequences, overlapping PCR and megaprimers and the advantages, drawbacks and performances of these methods were highlighted. New mutagenesis methods developed for random mutagenesis, focused mutagenesis and DNA recombination were surveyed. The technical requirements of these methods and the mutational spectra were compared and discussed with references to commonly used techniques. The trends of mutant library preparation were summarised. Challenges in genetic diversity creation were discussed with emphases on creating "smart" libraries, controlling the mutagenesis spectrum and specific challenges in each group of mutagenesis methods. An outline of the wider applications of genetic diversity creation includes genome engineering, viral evolution, metagenomics and a study of protein functions. The review ends with an outlook for genetic diversity creation and the prospective developments that can have future impact in this field. © 2013 The Authors. Source


Dolan S.R.,University of Sheffield
Physical Review D - Particles, Fields, Gravitation and Cosmology | Year: 2013

Bosonic fields on rotating black hole spacetimes are subject to amplification by superradiance, which induces exponentially-growing instabilities (the "black hole bomb") in two scenarios: if the black hole is enclosed by a mirror, or if the bosonic field has rest mass. Here we present a time-domain study of the scalar field on Kerr spacetime which probes ultra-long timescales up to t ≲ 5×10⁶M, to reveal the growth of the instability. We describe a highly-efficient method for evolving the field, based on a spectral decomposition into a coupled set of 1+1D equations, and an absorbing boundary condition inspired by the "perfectly-matched layers" paradigm. First, we examine the mirror case to study how the instability timescale and mode structure depend on mirror radius. Next, we examine the massive field, whose rich spectrum (revealed through Fourier analysis) generates "beating" effects which disguise the instability. We show that the instability is clearly revealed by tracking the stress-energy of the field in the exterior spacetime. We calculate the growth rate for a range of mass couplings, by applying a frequency-filter to isolate individual modal contributions to the time-domain signal. Our results are in accord with previous frequency-domain studies which put the maximum growth rate at τ⁻¹ ≈ 1.72×10⁻⁷ (GM/c³)⁻¹ for the massive scalar field on Kerr spacetime. © 2013 American Physical Society. Source
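
Schematically, the spectral decomposition referred to above expands the scalar field in spherical harmonics,

\[
\Phi(t,r,\theta,\phi) \;=\; \frac{1}{r}\sum_{\ell,m}\psi_{\ell m}(t,r)\,Y_{\ell m}(\theta,\phi),
\]

and on Kerr the background's spheroidal character couples ψ_{ℓm} to modes of neighbouring ℓ at fixed m, which yields the coupled set of 1+1D evolution equations (a sketch of the standard construction; the paper's precise variables may differ).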


Dolan S.R.,University of Sheffield | Barack L.,University of Southampton
Physical Review D - Particles, Fields, Gravitation and Cosmology | Year: 2013

This is the third in a series of papers aimed at developing a practical time-domain method for self-force calculations in Kerr spacetime. The key elements of the method are (i) removal of a singular part of the perturbation field with a suitable analytic "puncture," (ii) decomposition of the perturbation equations in azimuthal (m-)modes, taking advantage of the axial symmetry of the Kerr background, (iii) numerical evolution of the individual m-modes in 2+1 dimensions with a finite difference scheme, and (iv) reconstruction of the local self-force from the mode sum. Here we report a first implementation of the method to compute the gravitational self-force. We work in the Lorenz gauge, solving directly for the metric perturbation in 2+1 dimensions, for the case of circular geodesic orbits. The modes m=0, 1 contain nonradiative pieces, whose time-domain evolution is hampered by certain gauge instabilities. We study this problem in detail and propose ways around it. In the current work we use the Schwarzschild geometry as a platform for development; in a forthcoming paper - the fourth in the series - we apply our method to the gravitational self-force in Kerr geometry. © 2013 American Physical Society. Source
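
The puncture scheme in steps (i) and (iv) can be summarised as splitting the retarded perturbation into a singular, analytically prescribed piece and a regular residual field, from which the self-force is assembled mode by mode (shown schematically, following the standard effective-source literature rather than the paper's exact expressions):

\[
\bar h_{\alpha\beta} \;=\; \bar h^{\mathcal P}_{\alpha\beta} + \bar h^{\mathcal R}_{\alpha\beta},
\qquad
F^{\mathrm{self}}_{\alpha} \;=\; \lim_{x \to x_{p}} \sum_{m} F^{(m)}_{\alpha}\big[\bar h^{\mathcal R}\big].
\]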


Nouspikel T.,University of Sheffield
Future Oncology | Year: 2013

Human embryonic stem cells (hESCs) display a leaky G1/S checkpoint and inefficient nucleotide excision repair activity. Maintenance of genomic stability in these cells mostly relies on the elimination of damaged cells by high rates of apoptosis. However, a subpopulation survives and proliferates actively, bypassing DNA damage by translesion synthesis, a known mutagenic process. Indeed, high levels of damage-induced mutations were observed in hESCs, similar to those in repair-deficient cells. The surviving cells also become more resistant to further damage, leading to a progressive enrichment of cultures in mutant cells. In long-term cultures, hESCs display features characteristic of neoplastic progression, including chromosomal anomalies often similar to those observed in embryo carcinoma. The implication of these facts for stem cell-based therapy and cancer research are discussed. © 2013 Future Medicine Ltd. Source


Ng F.S.L.,University of Sheffield
Nature Geoscience | Year: 2015

Fast-flowing ice streams carry ice from the interior of the Antarctic Ice Sheet towards the coast. Understanding how ice-stream tributaries operate and how networks of them evolve is essential for developing reliable models of the ice sheet's response to climate change. A particular challenge is to unravel the spatial complexity of flow within and across tributary networks. Here I define a measure of planimetric flow convergence, which can be calculated from satellite measurements of the ice sheet's surface velocity, to explore this complexity. The convergence map of Antarctica clarifies how tributaries draw ice from its interior. The map also reveals curvilinear zones of convergence along lateral shear margins of streaming, and abundant ripples associated with nonlinear ice rheology and changes in bed topography and friction. Convergence on ice-stream tributaries and their feeding zones is uneven and interspersed with divergence. For individual drainage basins, as well as the ice sheet as a whole, fast flow cannot converge or diverge as much as slow flow. I therefore deduce that flow in the ice-stream networks is subject to mechanical regulation that limits flow-orthonormal strain rates. These findings provide targets for ice-sheet simulations and motivate more research into the origin and dynamics of tributarization. © 2015 Macmillan Publishers Limited. Source
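
A simple numerical proxy for such a planimetric convergence field can be built from gridded surface velocities (a sketch only; the negative divergence of the unit flow-direction field is an illustrative stand-in, not necessarily the exact measure defined in the paper):

    import numpy as np

    def planimetric_convergence(u, v, d):
        """u, v: 2-D velocity components on a grid of spacing d (axis 0 = y, axis 1 = x)."""
        speed = np.hypot(u, v) + 1e-12        # avoid division by zero in stagnant ice
        uh, vh = u / speed, v / speed         # unit flow-direction field
        duh_dx = np.gradient(uh, d, axis=1)
        dvh_dy = np.gradient(vh, d, axis=0)
        return -(duh_dx + dvh_dy)             # positive where flowlines converge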


Gariballa S.,United Arab Emirates University | Gariballa S.,University of Sheffield | Alessa A.,United Arab Emirates University
Clinical Nutrition | Year: 2013

Background: Sarcopenia is prevalent in older populations with many causes and varying outcomes; however, information for use in clinical practice is still lacking. Aims: The aim of this report is to identify the clinical determinants and prognostic significance of sarcopenia in a cohort of hospitalized acutely ill older patients. Methods: Four hundred and thirty-two randomly selected patients had their baseline clinical characteristic data assessed within 72 h of admission, at 6 weeks and at 6 months. Nutritional status was assessed from anthropometric and biochemical data. Sarcopenia was diagnosed from low muscle mass and low muscle strength (hand grip) using anthropometric measures based on the European Working Group criteria. Results: Compared with patients without sarcopenia, those diagnosed with sarcopenia, 44 (10%), were more likely to be older, have more depression symptoms and have lower serum albumin concentration. The length of hospital stay (LOS) was significantly longer in patients diagnosed with sarcopenia compared with patients without sarcopenia [mean (SD) LOS 13.4 (8.8) versus 9.4 (7) days, respectively, p=0.003]. The risk of non-elective readmission in the 6 months follow-up period was significantly lower in patients without sarcopenia compared with those diagnosed with sarcopenia (adjusted hazard ratio 0.53, 95% CI 0.32 to 0.87, p=0.013). The death rate was also lower in patients without sarcopenia, 38/388 (10%), compared with those with sarcopenia, 12/44 (27%), p=0.001. Conclusion: Older people with sarcopenia have poor clinical outcomes following acute illness compared with those without sarcopenia. © 2013 Elsevier Ltd and European Society for Clinical Nutrition and Metabolism. Source


Birkhead T.R.,University of Sheffield
Journal of Zoology | Year: 2010

Science progresses through ideas or hypotheses; novel ways of viewing the world. If those ideas survive testing, then they are considered 'the truth', or more crucially, truth-for-now, for the essence of science is that if a new idea provides a better explanation of the way the world is, the truth changes. Darwin's idea of evolution by natural selection, published as the Origin in 1859, replaced the earlier truth of physico- or natural-theology introduced by John Ray in 1691. Despite resistance by the church, Darwin's truth gained widespread acceptance, in part due to the efforts of T. H. Huxley, who on reading the Origin said 'How extremely stupid not to have thought of that!' Despite natural selection's enormous explanatory power, there were certain phenomena it apparently could not explain, including female promiscuity. It was only in the 1960s when natural selection was viewed as operating explicitly on individuals (rather than populations or groups), that this changed. Rather than being a cooperative venture between the sexes, sexual reproduction was now viewed in terms of conflicts of interests, and in so doing provided an explanation for female promiscuity (albeit in a male-biased sort of way). Until this point, sexual selection had been concerned exclusively with mate acquisition. With an evolutionary perspective focussing on individuals, it was recognized that sexual selection might continue after insemination, and that rather than competing for partners, males compete for fertilizations. Later it was acknowledged that females, through cryptic processes can also influence the outcome of sperm competition. Today, post-copulatory sexual selection provides explanations for many previously bewildering reproductive traits, including the extraordinary diversity in male and female genitalia, the design of spermatozoa and ova, of seminal fluid and of copulation behaviour itself. © 2010 The Authors. Journal compilation © 2010 The Zoological Society of London. Source


Walker D.A.,University of Sheffield
Annals of Applied Biology | Year: 2010

The present world population is largely fed by 'industrialised agriculture'. This, in turn, depends on massive inputs of fossil fuels. While this energy expenditure is inescapable, it is an expensive way of 'converting oil into potatoes'. Arguably, in view of global climate change, ever increasing population, ever increasing oil prices, and ever diminishing availability of water and arable land, this is not sustainable. Despite the fact that biofuels inevitably compete for resources that might otherwise be used to grow, store and distribute food, they are frequently held to be desirable and feasible 'green' substitutes for fossil fuels, and even to spare carbon emissions to the atmosphere. This article exposes the absurdity of such 'retro-agriculture', which (except in a few local circumstances) incurs yet more energy expenditure. It seeks to illustrate the misinformation on which some of the advocacy of biofuels has been based. © 2010 Association of Applied Biologists. Source


Cameron D.D.,University of Sheffield
Plant and Soil | Year: 2010

Symbiotic interactions have been shown to facilitate shifts in the structure and function of host plant communities. For example, parasitic plants can induce changes in plant diversity through the suppression of competitive community dominants. Arbuscular mycorrhizal (AM) fungi have also been shown to induce shifts in host communities by increasing host plant nutrient uptake and growth while suppressing non-mycorrhizal species. AM fungi can therefore function as ecosystem engineers, facilitating shifts in host plant communities through the presumed physiological suppression of non-contributing or non-mycorrhizal plant species. This dichotomy in plant response to AM fungi has been suggested as a tool to suppress weed species (many of which are non-mycorrhizal) in agro-ecosystems where mycorrhizal crop species are cultivated. Rinaudo et al. (2010), this issue, have demonstrated that AM fungi can suppress pernicious non-mycorrhizal weed species including Chenopodium album (fat hen) while benefiting the crop plant Helianthus annuus (sunflower). These findings now suggest a future for harnessing AM fungi as agro-ecosystem engineers, representing potential alternatives to costly and environmentally damaging herbicides. © 2010 Springer Science+Business Media B.V. Source


Leslie W.D.,University of Manitoba | Rubin M.R.,Columbia University | Schwartz A.V.,University of California at San Francisco | Kanis J.A.,University of Sheffield
Journal of Bone and Mineral Research | Year: 2012

There is a growing body of research showing that diabetes is an independent risk factor for fracture. Type 2 diabetes (T2D), which predominates in older individuals and is increasing globally as a consequence of the obesity epidemic, is associated with normal or even increased dual-energy x-ray absorptiometry (DXA)-derived areal bone mineral density (BMD). Therefore, the paradoxical increase in fracture risk has led to the hypothesis that there are diabetes-associated alterations in material and structural properties. An overly glycated collagen matrix, confounded by a low turnover state, in the setting of subtle cortical abnormalities, may lead to compromised biomechanical competence. In current clinical practice, because BMD is central to fracture prediction, a consequence of this paradox is a lack of suitable methods, including FRAX, to predict fracture risk in older adults with T2D. The option of adding diabetes to the FRAX algorithm is appealing but requires additional data from large population-based cohorts. The need for improved methods for identification of fracture in older adults with T2D is an important priority for osteoporosis research. © 2012 American Society for Bone and Mineral Research. Source


Huang Z.,University of Sheffield
Engineering Structures | Year: 2010

A non-linear procedure is presented for modelling the bond characteristic between concrete and reinforcing steel for reinforced concrete structures in a fire. The accuracy and reliability of the model are demonstrated by the analysis of one pull-out test and one beam test at ambient temperature and four full-scale beams tested under two fire conditions. The model is clearly capable of predicting the response of reinforced concrete members and structures in a fire with acceptable accuracy. The bond-link element has been found to have good computational stability and efficiency for 3D analysis of reinforced concrete structures in fires. It is shown that the bond condition between the concrete and reinforcing steel bar has an important influence on the fire resistance of reinforced concrete structures, especially when the temperature of the reinforcing steel bar is high (more than 500 °C). Hence, the current assumption of a perfect bond condition for analysis of reinforced concrete structures under fire conditions is unconservative. © 2010 Elsevier Ltd. Source
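
A common starting point for the traction-slip law carried by such bond-link elements (a sketch of the familiar CEB-FIP-type relation with a temperature-degraded bond strength, not necessarily the exact law adopted here) is:

\[
\tau(s) \;=\; \tau_{\max}(T)\left(\frac{s}{s_{1}}\right)^{\alpha}, \qquad 0 \le s \le s_{1},
\]

with τ_max(T) reduced as the reinforcement temperature T rises, consistent with the loss of fire resistance observed above about 500 °C.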


Japan's rural regions have been shrinking for the entire post-war period, and successive efforts to revitalise rural society have failed. This article examines whether the Great East Japan Earthquake and tsunami, and the subsequent meltdown at the Fukushima Daiichi nuclear power plant, present the Japanese state and society with a watershed opportunity to rethink regional revitalisation and national energy procurement strategies. The article begins by summarising the events of March and April 2011, examines possible approaches to the reconstruction of communities in the Tōhoku region, and critiques problems of governance in post-war Japan that the disaster reveals. It concludes by pulling together the information and analysis presented into a discussion of the prospects for achieving the three-point vision for a safe, sustainable, and compassionate society that Prime Minister Naoto Kan set the Reconstruction Design Council. © 2011 Taylor & Francis. Source


Staszewski W.J.,AGH University of Science and Technology | Wallace D.M.,University of Sheffield
Mechanical Systems and Signal Processing | Year: 2014

A wavelet-based Frequency Response Function (FRF) is proposed for vibration analysis of systems with time-varying parameters. The classical FRF is extended to a representation in the combined time-frequency domain using wavelet analysis. Time averaging is performed on the initial FRF to improve the signal-to-noise ratio. It is shown that use of the wavelet ridge algorithm is effective in extracting and visually representing data from wavelet-based FRFs. The wavelet-based FRF is demonstrated on selected time-variant simulated lumped parameter systems and one experimental vibrating system. © 2013 Elsevier Ltd. Source
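
The construction extends the ordinary FRF H(ω) = Y(ω)/X(ω) by replacing Fourier transforms with continuous wavelet transforms of the input x and response y (a minimal sketch using PyWavelets; the wavelet choice and the regularised ratio are illustrative assumptions, not the authors' implementation):

    import numpy as np
    import pywt

    def wavelet_frf(x, y, fs, freqs, wavelet="cmor1.5-1.0"):
        """Time-frequency FRF estimate H(f, t) = W_y / W_x from sampled signals."""
        dt = 1.0 / fs
        fc = pywt.central_frequency(wavelet)    # wavelet centre frequency
        scales = fc / (np.asarray(freqs) * dt)  # scales realising the requested freqs
        Wx, _ = pywt.cwt(x, scales, wavelet, sampling_period=dt)
        Wy, _ = pywt.cwt(y, scales, wavelet, sampling_period=dt)
        return Wy * np.conj(Wx) / (np.abs(Wx) ** 2 + 1e-12)

Averaging |H| over blocks of time, as the abstract describes, trades time resolution for an improved signal-to-noise ratio, and ridge extraction then traces the dominant, time-varying resonances.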


Lopez-Perez D.,Alcatel - Lucent | Chu X.,University of Sheffield | Vasilakos A.V.,National Technical University of Athens | Claussen H.,Alcatel - Lucent
IEEE Journal on Selected Areas in Communications | Year: 2014

With the introduction of femtocells, cellular networks are moving from the conventional centralized network architecture to a distributed one, in which each network cell must make its own radio resource allocation decisions while providing inter-cell interference mitigation. However, realizing such a distributed network architecture is not a trivial task. In this paper, we first introduce a simple self-organization rule, based on minimizing cell transmit power, by following which a distributed cellular network is able to converge to an efficient resource reuse pattern. Based on this self-organization rule, and taking realistic resource allocation constraints into account, we also propose two novel resource allocation algorithms, one autonomous and one coordinated. The performance of the proposed self-organization rule and resource allocation algorithms is evaluated using system-level simulations, which show that power efficiency is not necessarily in conflict with capacity improvements at the network level. The proposed resource allocation algorithms provide significant performance improvements in terms of user outages and network capacity over cutting-edge resource allocation algorithms proposed in the literature. © 2014 IEEE. Source
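
A toy sketch of the flavour of such a self-organization rule is given below: each cell repeatedly selects the resource on which it would need the least transmit power to reach a target SINR, given the interference it currently measures. The topology, path-loss model and parameters are invented for illustration; this is not the paper's algorithm.

```python
# Toy best-response dynamic: each cell picks the channel on which it needs the
# least transmit power, which tends to separate neighbours onto different channels.
import numpy as np

rng = np.random.default_rng(0)
n_cells, n_channels, target_snr = 12, 3, 10.0    # assumed toy parameters
pos = rng.uniform(0, 100, n_cells)               # cell positions on a line
gain = 1.0 / (1.0 + np.subtract.outer(pos, pos) ** 2)  # toy path-gain matrix
power = np.ones(n_cells)
channel = rng.integers(0, n_channels, n_cells)

for _ in range(50):                              # iterate until stable
    for i in range(n_cells):
        best_c, best_p = channel[i], np.inf
        for c in range(n_channels):
            same = [j for j in range(n_cells) if j != i and channel[j] == c]
            interference = sum(power[j] * gain[i, j] for j in same) + 1e-3
            p_needed = target_snr * interference  # power to hit target SINR
            if p_needed < best_p:
                best_c, best_p = c, p_needed
        channel[i], power[i] = best_c, best_p

print(channel, power.round(3))  # neighbouring cells end up on different channels
```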


Powers H.J.,University of Sheffield
Sub-cellular biochemistry | Year: 2012

Riboflavin (7,8-dimethyl-10-ribitylisoalloxazine; vitamin B2) is a water-soluble vitamin, cofactor derivatives of which (FAD, FMN) act as electron acceptors in the oxidative metabolism of carbohydrate, amino acids and fatty acids and which in the reduced state can donate electrons to complex II of the electron transport chain. This means that riboflavin is essential for energy generation in the aerobic cell, through oxidative phosphorylation. The classic effects of riboflavin deficiency on growth and development have generally been explained in terms of these functions. However, research also suggests that riboflavin may have specific functions associated with cell fate determination, which would have implications for growth and development. In particular, riboflavin depletion interferes with the normal progression of the cell cycle, probably through effects on the expression of regulatory genes, exerted at both the transcriptional and proteomic level. Source


Wright H.,University of Sheffield
Local Environment | Year: 2011

The idea of "green infrastructure" has emerged rapidly in planning policy, with little opportunity to reflect on the meanings attached to the concept by different interests. This has contributed to confusion and discomfort, with a "lack of understanding" among planning practitioners in England who argue that green infrastructure could be a "corruptible concept". Here I respond to the resistance to multiple meanings of green infrastructure by critically examining it as a contested concept. Building on the literature that positions ambiguity as unavoidable, I argue that a single precise meaning of "green infrastructure" is problematic because the concept is evolving and divided between environmental theory and socio-economic policy. In doing so, I intend to better equip practitioners in England with an understanding of the meanings attached to green infrastructure and of how its ambiguity is used, so that they may secure the environmental benefits that are at risk of being isolated in green infrastructure theory. © 2011 Copyright Taylor and Francis Group, LLC. Source


Connelly S.,University of Sheffield
Urban Studies | Year: 2011

What is the legitimacy of new forms of governance at community level? This paper addresses the important yet little understood issue of how this is established, developing a constructivist approach to the concept of 'legitimacy' and presenting an analysis of how the legitimacy of community-based organisations is understood and constructed in a northern English city. This shows how their legitimacy draws on a range of pre-existing norms as well as new ones, only some of which are recognisably democratic, and is more a product of informal practices than formal structures. It is consequently fragile and open to challenge, and weak according to the norms of legitimacy derived from the representative democratic tradition or the standpoint of modern deliberative democracy. What could appropriately replace such norms remains unclear, although it is suggested that a way forward may be through reintroducing the value of activism as an acceptable grounding for political legitimacy. © 2010 Urban Studies Journal Limited. Source


Jorgensen A.,University of Sheffield
Landscape and Urban Planning | Year: 2011

The dominant view of landscape research in the latter half of the 20th century saw landscape aesthetics as a discrete area of study, a socio-cultural value detached from other considerations. This view was later challenged by proponents of ecological aesthetics, who countered that what makes landscapes beautiful is often intimately linked to other intrinsic landscape values, such as biodiversity, and that these other values can shift how we perceive and appreciate the beauty of landscapes. At the same time, environmental psychologists and others wrestled with questions regarding the extent to which landscape aesthetics had a biological or cultural basis, and examined the impact of individual differences and life experiences. More recently, landscape urbanism has re-examined the drivers of urban landscape change, prompting questions of whether landscape aesthetics should be abandoned in favour of landscape pragmatism and instrumentality. Furthermore, new understandings of how we might best sustain biological diversity in the context of global climate change signal an end to the perceived biological status quo and the advent of an aesthetics of necessity. This essay outlines these trends and explores their implications for researching landscape aesthetics. © 2011. Source


Hitchmough J.,University of Sheffield
Landscape and Urban Planning | Year: 2011

This essay provides a perspective on non-invasive exotic plant species in relation to sustainable designed urban plantings. It considers the reasons why, despite their ubiquitous presence in most towns and cities, non-invasive exotic plants are increasingly believed to be either hazardous or at the very least incompatible with notions of urban sustainability. Exotic species are reviewed in relation to key measures of what sustainable planting might entail, for example the capacity to minimise carbon expenditure, to support biodiversity and to play a positive role in human perception of designed landscapes. © 2011 Elsevier B.V. Source


Evison S.E.,University of Sheffield
Current Opinion in Insect Science | Year: 2015

Chalkbrood is a fungal brood disease of the honey bee, Apis mellifera, caused by the parasite Ascosphaera apis. Chalkbrood is considered a stress-related disease, and the severity of outbreaks depends on a multitude of interacting factors. The specific relationship between host and parasite in this disease is interesting because the parasite is both heterothallic and semelparous. Recent studies highlight that this host-parasite relationship is influenced by factors such as interactions with other parasite strains or species, and by environmental perturbations. To understand how to protect pollinators most effectively, it is crucial that future research takes a more ecologically relevant approach by studying the basic biology of the host-parasite relationship in the context of the multi-factorial processes that influence it. © 2015 Elsevier Inc. Source


Objective To examine the diagnostic accuracy of novel biomarkers of myocardial injury and troponin assays for the diagnosis of myocardial infarction. Methods We studied 850 patients randomised to the point-of-care testing arm of the Randomised Assessment of Panel Assay of Cardiac markers (RATPAC) study, which recruited low-risk patients presenting with chest pain in six emergency departments. Blood samples were obtained on admission and 90 min from admission. Myocardial infarction was defined by the universal definition of myocardial infarction. The following diagnostic strategies were compared by receiver operator characteristic curve analysis and comparison of area under the curve: individual marker values and the combination of presentation heart fatty acid binding protein (HFABP) and copeptin with troponin. Results 68 patients had a final diagnosis of myocardial infarction. Admission samples were available from 838/1132 patients enrolled in the study. Areas under the curve were as follows (CIs in parentheses): cardiac troponin I (cTnI) Stratus CS 0.94 (0.90 to 0.98), cTnI Beckmann 0.92 (0.88 to 0.96), cTnI Siemens ultra 0.90 (0.85 to 0.95), cardiac troponin T high sensitivity 0.92 (0.88 to 0.96), HFABP 0.84 (0.77 to 0.90), copeptin 0.62 (0.57 to 0.68). HFABP and copeptin were diagnostically inferior to troponin. The combination of HFABP (at the 95th percentile) and troponin (at the 99th percentile) increased diagnostic sensitivity. Conclusions High-sensitivity cardiac troponin is the best single marker. Addition of HFABP to high-sensitivity troponin increased diagnostic sensitivity. Additional measurement of copeptin is not useful in the chest pain population. Source
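
The kind of comparison reported here can be reproduced on illustrative data with scikit-learn, as in the sketch below; the simulated marker distributions, prevalence and thresholds are assumptions, not the RATPAC data.

```python
# Sketch: AUCs for single markers and a combined troponin + HFABP rule.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 800
mi = rng.random(n) < 0.08                  # ~8% prevalence, as in the study
troponin = rng.lognormal(mean=np.where(mi, 1.5, 0.0), sigma=1.0)
hfabp = rng.lognormal(mean=np.where(mi, 1.0, 0.0), sigma=1.0)

print("troponin AUC:", roc_auc_score(mi, troponin).round(3))
print("HFABP    AUC:", roc_auc_score(mi, hfabp).round(3))

# "Either marker above its threshold" rule: sensitivity rises, specificity falls.
trop_99th = np.quantile(troponin[~mi], 0.99)   # 99th centile of non-MI values
hfabp_95th = np.quantile(hfabp[~mi], 0.95)     # 95th centile of non-MI values
positive = (troponin > trop_99th) | (hfabp > hfabp_95th)
sens = (positive & mi).sum() / mi.sum()
spec = (~positive & ~mi).sum() / (~mi).sum()
print(f"combined rule: sensitivity={sens:.2f}, specificity={spec:.2f}")
```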


Longworth L.,Brunel University | Rowen D.,University of Sheffield
Value in Health | Year: 2013

Quality-adjusted life-years (QALYs) are widely used as an outcome for the economic evaluation of health interventions. However, preference-based measures used to obtain health-related utility values to produce QALY estimates are not always included in key clinical studies. Furthermore, organizations responsible for reviewing or producing health technology assessments (HTAs) may have preferred instruments for obtaining utility estimates for QALY calculations. Where data using a preference-based measure or the preferred instrument have not been collected, it may be possible to map or crosswalk from other measures of health outcomes. The aims of this study were 1) to provide an overview of how mapping is currently used as reported in the published literature and in an HTA policy-making context, specifically at the National Institute for Health and Clinical Excellence in the United Kingdom, and 2) to comment on best current practice on the use of mapping for HTA more generally. The review of National Institute for Health and Clinical Excellence guidance found that mapping has been used since the institute was first established, but that reporting of the mapping models used has been poor. Recommendations for mapping in HTA include an explicit consideration of the generalizability of the mapping function to the target sample, reporting of standard econometric and statistical tests including the degree of error in the mapping model across subsets of the range of utility values, and validation of the model(s). Mapping can provide a route for linking outcome data collected in a trial or observational study to the specific preferred instrument for obtaining utility values. In most cases, however, it is still advantageous to collect data directly by using the preferred utility-based instrument, and mapping should usually be viewed as a second-best solution. © 2013 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. Source
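
A minimal sketch of a mapping model is shown below: a regression of utility values on a condition-specific score, fitted in one dataset and used to predict utilities where only the score was collected. The data and the linear functional form are illustrative assumptions; the closing loop echoes the review's recommendation to report error across subsets of the utility range.

```python
# Sketch of a mapping (crosswalk) model on illustrative data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
score = rng.uniform(0, 100, 300)               # condition-specific measure
utility = np.clip(0.2 + 0.007 * score + rng.normal(0, 0.08, 300), None, 1.0)

model = LinearRegression().fit(score.reshape(-1, 1), utility)
new_scores = np.array([[25.0], [60.0], [90.0]])
print(model.predict(new_scores))               # mapped utility estimates

# Report error across subsets of the observed utility range, as recommended.
pred = model.predict(score.reshape(-1, 1))
for lo, hi in [(0.0, 0.6), (0.6, 0.8), (0.8, 1.01)]:
    m = (utility >= lo) & (utility < hi)
    print(f"MAE for utilities in [{lo},{hi}): {np.abs(pred[m] - utility[m]).mean():.3f}")
```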


Lafrance Jr. W.C.,Brown University | Reuber M.,University of Sheffield | Goldstein L.H.,Kings College London
Epilepsia | Year: 2013

The International League Against Epilepsy (ILAE) Neuropsychobiology Commission gave the charge to provide practical guidance for health professionals for the pharmacologic and nonpharmacologic treatment of patients with psychogenic nonepileptic seizures (PNES). Using a consensus review of the literature, an international group of clinician-researchers in epilepsy, neurology, neuropsychology, and neuropsychiatry evaluated key management approaches for PNES. These included the following: presentation of the diagnosis, early phase treatment, psychological and pharmacologic interventions, and maintenance management. The aim of this report is to provide greater clarity about the range and current evidence base for treatment for patients with PNES, with the intention of improving the care of patients with PNES and patients who develop PNES as a comorbidity of epilepsy. © Wiley Periodicals, Inc. © 2013 International League Against Epilepsy. Source


Paravastu S.C.,University of Sheffield
The Cochrane database of systematic reviews | Year: 2013

BACKGROUND: Beta (β) blockers are indicated for use in coronary artery disease (CAD). However, optimal therapy for people with CAD accompanied by intermittent claudication has been controversial because of the presumed peripheral haemodynamic consequences of beta blockers, leading to worsening symptoms of intermittent claudication. This is an update of a review first published in 2008. OBJECTIVES: To quantify the potential harmful effects of beta blockers on maximum walking distance, claudication distance, calf blood flow, calf vascular resistance and skin temperature when used in patients with peripheral arterial disease (PAD). SEARCH STRATEGY: For this update, the Cochrane Peripheral Vascular Diseases Group Trials Search Co-ordinator searched the Specialised Register (last searched March 2013) and the Cochrane Central Register of Controlled Trials (CENTRAL, The Cochrane Library, 2013, Issue 2). SELECTION CRITERIA: Randomised controlled trials (RCTs) evaluating the role of both selective (β1) and non-selective (β1 and β2) beta blockers compared with placebo. We excluded trials that compared different types of beta blockers. Primary outcome measures were claudication distance in metres, time to claudication in minutes and maximum walking distance in metres and minutes (as assessed by treadmill). Secondary outcome measures included calf blood flow (mL/100 mL/min), calf vascular resistance and skin temperature (°C). MAIN RESULTS: We included six RCTs that fulfilled the above criteria, with a total of 119 participants. The beta blockers studied were atenolol, propranolol, pindolol and metoprolol. All trials were of poor quality, with the drugs administered over a short time (10 days to two months). None of the primary outcomes were reported by more than one study. Similarly, secondary outcome measures, with the exception of vascular resistance (as reported by three studies), were each reported by only one study. Pooling of such results was deemed inappropriate. None of the trials showed a statistically significant worsening effect of beta blockers on time to claudication, claudication distance and maximal walking distance as measured on a treadmill, nor on calf blood flow, calf vascular resistance and skin temperature, when compared with placebo. No reports described adverse events associated with the beta blockers studied. AUTHORS' CONCLUSIONS: Currently, no evidence suggests that beta blockers adversely affect walking distance, calf blood flow, calf vascular resistance and skin temperature in people with intermittent claudication. However, because of the lack of large published trials, beta blockers should be used with caution, if clinically indicated. Source


Maklakov A.A.,Uppsala University | Lummaa V.,University of Sheffield
BioEssays | Year: 2013

Why do the two sexes have different lifespans and rates of aging? Two hypotheses based on the asymmetric inheritance of sex chromosomes ("unguarded X") or mitochondrial genomes ("mother's curse") explain sex differences in lifespan as sex-specific maladaptation leading to increased mortality in the shorter-lived sex. While asymmetric inheritance hypotheses equate long life with high fitness, considerable empirical evidence suggests that the sexes resolve the fundamental trade-off between reproduction and survival differently, resulting in sex-specific optima for lifespan. However, selection for sex-specific values of life-history traits is constrained by intersexual genetic correlations, resulting in intra-locus sexual conflict over optimal lifespan. The available data suggest that the evolution of sexual dimorphism only partially resolves these conflicts. Sexual conflict over optimal trait values, which has been demonstrated in model organisms and in humans, is likely to play a key role in shaping the evolution of lifespan, as well as in maintaining genetic variation for sex-specific diseases. © 2013 WILEY Periodicals, Inc. Source


Meier P.S.,University of Sheffield
Addiction | Year: 2011

Aims This paper aims to contribute to a rethink of marketing research priorities to address policy makers' evidence needs in relation to alcohol marketing. Method Discussion paper reviewing evidence gaps identified during an appraisal of policy options to restrict alcohol marketing. Findings Evidence requirements can be categorized as follows: (i) the size of marketing effects for the whole population and for policy-relevant population subgroups, (ii) the balance between immediate and long-term effects and the time lag, duration and cumulative build-up of effects and (iii) comparative effects of partial versus comprehensive marketing restrictions on consumption and harm. These knowledge gaps impede the appraisal and evaluation of existing and new interventions, because without understanding the size and timing of expected effects, researchers may choose inadequate time-frames, samples or sample sizes. To date, research has tended to rely on simplified models of marketing and has focused disproportionately on youth populations. The effects of cumulative exposure across multiple marketing channels, targeting of messages at certain population groups and indirect effects of advertising on consumption remain unclear. Conclusion It is essential that studies into marketing effect sizes are geared towards informing policy decision-makers, anchored strongly in theory, use measures of effect that are well-justified and recognize fully the complexities of alcohol marketing efforts. © 2010 The Author, Addiction © 2010 Society for the Study of Addiction. Source


Finger L.D.,University of Sheffield
Sub-cellular biochemistry | Year: 2012

Processing of Okazaki fragments to complete lagging strand DNA synthesis requires coordination among several proteins. RNA primers and DNA synthesised by DNA polymerase α are displaced by DNA polymerase δ to create bifurcated nucleic acid structures known as 5'-flaps. These 5'-flaps are removed by Flap Endonuclease 1 (FEN), a structure-specific nuclease whose divalent metal ion-dependent phosphodiesterase activity cleaves 5'-flaps with exquisite specificity. FENs are paradigms for the 5' nuclease superfamily, whose members perform a wide variety of roles in nucleic acid metabolism using a similar nuclease core domain that displays common biochemical properties and structural features. A detailed review of FEN structure is undertaken to show how DNA substrate recognition occurs and how FEN achieves cleavage at a single phosphate diester. A proposed double nucleotide unpairing trap (DoNUT) is discussed with regard to FEN and has relevance to the wider 5' nuclease superfamily. The homotrimeric proliferating cell nuclear antigen protein (PCNA) coordinates the actions of DNA polymerase, FEN and DNA ligase by facilitating the hand-off of intermediates between each protein during Okazaki fragment maturation, maximising throughput and minimising the consequences of intermediates being released into the wider cellular environment. FEN has numerous partner proteins that modulate and control its action during DNA replication and is also controlled by several post-translational modification events, all acting in concert to maintain precise and appropriate cleavage of Okazaki fragment intermediates during DNA replication. Source


Renshaw S.A.,University of Sheffield | Trede N.S.,University of Utah
DMM Disease Models and Mechanisms | Year: 2012

Since its first splash 30 years ago, the use of the zebrafish model has been extended from a tool for genetic dissection of early vertebrate development to the functional interrogation of organogenesis and disease processes such as infection and cancer. In particular, there is recent and growing attention in the scientific community directed at the immune systems of zebrafish. This development is based on the ability to image cell movements and organogenesis in an entire vertebrate organism, complemented by increasing recognition that zebrafish and vertebrate immunity have many aspects in common. Here, we review zebrafish immunity with a particular focus on recent studies that exploit the unique genetic and in vivo imaging advantages available for this organism. These unique advantages are driving forward our study of vertebrate immunity in general, with important consequences for the understanding of mammalian immune function and its role in disease pathogenesis. © 2012. Published by The Company of Biologists Ltd. Source


Horton P.,University of Sheffield
Philosophical Transactions of the Royal Society B: Biological Sciences | Year: 2012

The distinctive lateral organization of the protein complexes in the thylakoid membrane investigated by Jan Anderson and co-workers is dependent on the balance of various attractive and repulsive forces. Modulation of these forces allows critical physiological regulation of photosynthesis that provides efficient light-harvesting in limiting light but dissipation of excess potentially damaging radiation in saturating light. The light-harvesting complexes (LHCII) are central to this regulation, which is achieved by phosphorylation of stromal residues, protonation on the lumen surface and de-epoxidation of bound violaxanthin. The functional flexibility of LHCII derives from a remarkable pigment composition and configuration that not only allow efficient absorption of light and efficient energy transfer either to photosystem II or photosystem I core complexes, but through subtle configurational changes can also exhibit highly efficient dissipative reactions involving chlorophyll-xanthophyll and/or chlorophyll-chlorophyll interactions. These changes in function are determined at a macroscopic level by alterations in protein-protein interactions in the thylakoid membrane. The capacity and dynamics of this regulation are tuned to different physiological scenarios by the exact protein and pigment content of the light-harvesting system. Here, the molecular mechanisms involved will be reviewed, and the optimization of the light-harvesting system in different environmental conditions described. © 2012 The Royal Society. Source


Khamas S.K.,University of Sheffield
IEEE Transactions on Antennas and Propagation | Year: 2010

An efficient algorithm is introduced to enhance the convergence of dyadic Green's functions (DGF) in layered spherical media, for which asymptotic expressions have been developed. The formulated expressions involve an infinite series of spherical eigenmodes that can be reduced to the simple homogeneous-media Green's function using the addition theorem of spherical Hankel functions. Substantial improvements in convergence speed are attained by subtracting the asymptotic series representation from the original DGF. The subtracted components are then added back to the solution in the homogeneous-media Green's function form. © 2010 IEEE. Source
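
The underlying acceleration idea can be shown on a generic scalar series (an illustration of the technique, not the paper's DGF formulas): subtract the slowly decaying asymptotic part of each term, sum the rapidly converging remainder, and add the asymptotic part back in closed form.

```python
# Generic illustration of asymptotic subtraction for series acceleration.
import numpy as np

# Example series: sum_{n>=1} 1/(n^2 + 1). Terms behave like 1/n^2 for large n,
# and sum_{n>=1} 1/n^2 = pi^2/6 is known in closed form.
def direct(N):
    n = np.arange(1, N + 1)
    return np.sum(1.0 / (n ** 2 + 1))

def accelerated(N):
    n = np.arange(1, N + 1)
    remainder = 1.0 / (n ** 2 + 1) - 1.0 / n ** 2   # decays like 1/n^4
    return np.sum(remainder) + np.pi ** 2 / 6       # add tail back in closed form

exact = (np.pi / np.tanh(np.pi) - 1) / 2            # closed form: (pi*coth(pi)-1)/2
for N in (10, 100):
    print(N, abs(direct(N) - exact), abs(accelerated(N) - exact))
# The accelerated sum reaches the same accuracy with far fewer terms.
```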


Vasiljevic M.,University of Kent | Crisp R.J.,University of Sheffield
PLoS ONE | Year: 2013

Prejudices towards different groups are interrelated, but research has yet to find a way to promote tolerance towards multiple outgroups. We devise, develop and implement a new cognitive intervention for achieving generalized tolerance based on scientific studies of social categorization. In five laboratory experiments and one field study the intervention led to a reduction of prejudice towards multiple outgroups (elderly, disabled, asylum seekers, HIV patients, gay men), and fostered generalized tolerance and egalitarian beliefs. Importantly, these effects persisted outside the laboratory in a context marked by a history of violent ethnic conflict, increasing trust and reconciliatory tendencies towards multiple ethnic groups in the Former Yugoslav Republic of Macedonia. We discuss the implications of these findings for intervention strategies focused on reducing conflict and promoting peaceful intergroup relations. © 2013 Vasiljevic, Crisp. Source


Zimmerman W.B.,University of Sheffield
Chemical Engineering Science | Year: 2011

Electrochemical microfluidics is a young field, but now achieving substantial successes in science, engineering, and technology. In this review article, the use of electrochemical effects for actuation in microfluidic devices is described, with a focus on electrokinetic flow. Furthermore, the use of electrochemical microfluidic devices in analytic chemistry and biochemistry is detailed, largely for separation and detection, typically exploiting electrophoretic effects. Finally, the use of electrochemical microreactors is explored, with an eye to the synthesis and processing advantages that come from microscale operations. Microfluidic devices are, more than ever, serving as a platform for nanoscience and nanotechnology, with molecular scale manipulation and detection enabled by microfluidic control of the environment. © 2010 Elsevier Ltd. Source


Relton C.,University of Sheffield
Complementary Therapies in Medicine | Year: 2013

The 'placebo effect' concept is intrinsic to the architecture of the double blind placebo randomised controlled trial (RCT), the oft quoted 'gold standard' method of clinical research whose findings are supposed to inform our understanding of the interventions used in clinical practice. The 'placebo effect' concept is often used in discussions of both clinical practice and clinical research, particularly when discussing why patients report improvements with complementary and alternative medicines (CAMs). Despite its frequent use, 'placebo effect' is a non-sequitur, thus confusion abounds. In routine healthcare patients are not told that they might receive placebo. However, in clinical trials the opposite is true. Telling people that they might receive a placebo really complicates things. The uncertainty invoked by information that a placebo may be given can impact trial recruitment, the delivery of the intervention, and the reporting of outcomes, as can the 'meaning responses' invoked by other types of information provided to patients in standard RCT designs. Future CAM research should consider alternative RCT designs that help ensure that participants' experiences are uncontaminated by 'meaning responses' to information that they may receive fake treatments, i.e. placebos. © 2012 Elsevier Ltd. Source


Guttmann W.,University of Sheffield
Science of Computer Programming | Year: 2013

Extended designs distinguish non-terminating and aborting executions of sequential, non-deterministic programs. We show how to treat them algebraically based on techniques we have previously applied to total and general correctness approaches. In particular, we propose modifications to the definition of an extended design which make the theory more clear and simplify calculations, and an approximation order for recursion. We derive explicit formulas for operators on extended designs including non-deterministic choice, sequential composition, while-loops and full recursion. We show how to represent extended designs as designs or prescriptions over an extended state space. The new theory generalises our previous algebraic theory of general correctness by weakening its axioms. It also integrates with partial, total and general correctness into a common foundation which gives a unified semantics of while-programs. Program transformations derived using this semantics are valid in all four correctness approaches. © 2012 Elsevier B.V. All rights reserved. Source
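
A toy denotational reading of the distinction can be sketched in a few lines of code (a sketch under simplifying assumptions, not the paper's algebra): model a program as a map from states to sets of outcomes, where an outcome is either a final state or one of the tokens LOOP (non-termination) and ABORT.

```python
# Toy model distinguishing normal, non-terminating and aborting executions,
# with non-deterministic choice and sequential composition.
LOOP, ABORT = "LOOP", "ABORT"

def choice(p, q):
    """Non-deterministic choice: union of possible outcomes."""
    return lambda s: p(s) | q(s)

def seq(p, q):
    """Sequential composition: run q from each normal outcome of p;
    LOOP and ABORT outcomes of p propagate unchanged."""
    def r(s):
        out = set()
        for o in p(s):
            out |= {o} if o in (LOOP, ABORT) else q(o)
        return out
    return r

skip = lambda s: {s}
diverge = lambda s: {LOOP}          # never terminates
abort = lambda s: {ABORT}           # fails
incr = lambda s: {s + 1}

prog = seq(choice(incr, diverge), incr)
print(prog(0))   # {2, 'LOOP'}: looping branches are kept distinct from aborting ones
```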


Julious S.A.,University of Sheffield
Statistical Methods in Medical Research | Year: 2013

A number of meta-analyses have been undertaken to assess both the safety and efficacy of antidepressants in paediatric and adolescent patients. This article updates those analyses with additionally reported trials. The aim of this analysis was to investigate whether antidepressant treatments are associated with an increased risk of suicide-related outcomes in paediatric and adolescent patients and, in the same population, to assess whether antidepressant treatments are beneficial in terms of efficacy. A meta-analysis of randomised controlled trials of antidepressant treatments compared with placebo in paediatric and adolescent patients was undertaken of 6039 individuals participating in 35 randomised controlled trials. The suicide-related outcomes examined were suicidal behaviour, suicidal ideation, and suicidal behaviour or ideation. These data presented the additional problem that the events of interest are rare. An analysis is described in this article that accounts for these rare events and also includes studies with no events on either treatment arm. There were trends indicating that active treatments increased the risk of these events in absolute terms. For efficacy, the results indicated that antidepressant treatments did have a statistically significant effect compared with placebo, but the effect was smaller for the trials in depression. The results are in the main consistent with previous meta-analyses on a smaller number of trials. There was evidence of an increased risk of suicide-related outcomes on antidepressant treatments, while antidepressant treatments were also shown to be efficacious. © The Author(s) 2011. Source
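
For comparison, the sketch below shows one standard way of pooling log risk ratios with a 0.5 continuity correction for zero cells; note that this simple approach must drop trials with no events in either arm, which is precisely the limitation the article's analysis addresses. All trial counts are invented.

```python
# Inverse-variance pooling of log risk ratios with continuity correction.
import numpy as np

# (events_treat, n_treat, events_control, n_control) per trial; toy numbers
trials = [(2, 100, 0, 100), (1, 150, 1, 145), (3, 200, 1, 210), (0, 80, 0, 82)]

log_rr, weights = [], []
for et, nt, ec, nc in trials:
    if et == 0 and ec == 0:
        continue                     # simple pooling must drop double-zero trials
    et, ec = et + 0.5, ec + 0.5      # continuity correction
    nt, nc = nt + 1, nc + 1
    lr = np.log((et / nt) / (ec / nc))
    var = 1 / et - 1 / nt + 1 / ec - 1 / nc   # standard variance of log RR
    log_rr.append(lr)
    weights.append(1 / var)

log_rr, weights = np.array(log_rr), np.array(weights)
pooled = np.sum(weights * log_rr) / np.sum(weights)
se = np.sqrt(1 / np.sum(weights))
print(f"pooled RR = {np.exp(pooled):.2f}, "
      f"95% CI {np.exp(pooled - 1.96 * se):.2f} to {np.exp(pooled + 1.96 * se):.2f}")
```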


Background. In health technology assessments (HTAs) of interventions that affect survival, it is essential to accurately estimate the survival benefit associated with the new treatment. Generally, trial data must be extrapolated, and many models are available for this purpose. The choice of extrapolation model is critical because different models can lead to very different cost-effectiveness results. A failure to systematically justify the chosen model creates the possibility of bias and inconsistency between HTAs. Objective. To demonstrate the limitations and inconsistencies associated with the survival analysis component of HTAs and to propose a process guide that will help exclude these from future analyses. Methods. We reviewed the survival analysis component of 45 HTAs undertaken for the National Institute for Health and Clinical Excellence (NICE) in the cancer disease area. We drew upon our findings to identify common limitations and to develop a process guide. Results. The chosen survival models were not systematically justified in any of the HTAs reviewed. The range of models considered was usually insufficient, and the rationale for the chosen model was universally limited: In particular, the plausibility of the extrapolated portion of fitted survival curves was very rarely explicitly considered. Limitations. We do not seek to describe and review all methods available for performing survival analysis - several approaches exist that are not mentioned in this article. Instead we seek to analyze methods commonly used in HTAs and limitations associated with their application. Conclusions. Survival analysis has not been conducted systematically in HTAs. A systematic approach such as the one proposed here is required to reduce the possibility of bias in cost-effectiveness results and inconsistency between technology assessments. Source
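
The systematic approach argued for here can be illustrated with the lifelines library: fit several parametric models to the same (here simulated) trial data, compare within-sample fit, and inspect the extrapolated tail explicitly. The data and model list are illustrative assumptions.

```python
# Fit several parametric survival models and compare fit and extrapolation.
import numpy as np
from lifelines import WeibullFitter, ExponentialFitter, LogNormalFitter

rng = np.random.default_rng(3)
true_t = rng.weibull(1.3, 300) * 24          # event times (months), toy data
censor = rng.uniform(0, 36, 300)             # administrative censoring
T = np.minimum(true_t, censor)
E = true_t <= censor                         # event indicator

for fitter in (WeibullFitter(), ExponentialFitter(), LogNormalFitter()):
    f = fitter.fit(T, E)
    s10y = f.survival_function_at_times(120.0).iloc[0]   # extrapolate to 10 years
    print(f"{type(f).__name__:20s} AIC={f.AIC_:7.1f}  S(120 months)={s10y:.3f}")

# AIC ranks within-sample fit only; the plausibility of each extrapolated curve
# against external data must still be judged explicitly, as the paper argues.
```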


Theato P.,Johannes Gutenberg University Mainz | Theato P.,Seoul National University | Theato P.,University of Sheffield
Angewandte Chemie - International Edition | Year: 2011

Worth one's while: The careful use of a single chromophore on methodically designed polymers can be sufficient to modify polymer properties. Recent examples range from controlled micelle destruction and light-controlled precipitation in aqueous solution to the fabrication of nanoporous thin films (see schematic illustration of the use of a photocleavable block copolymer as a template for a nanoporous thin film). © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim. Source


Glaus P.,University of Manchester | Honkela A.,Aalto University | Rattray M.,University of Sheffield
Bioinformatics | Year: 2012

Motivation: High-throughput sequencing enables expression analysis at the level of individual transcripts. The analysis of transcriptome expression levels and differential expression (DE) estimation requires a probabilistic approach to properly account for ambiguity caused by shared exons and finite read sampling as well as the intrinsic biological variance of transcript expression. Results: We present Bayesian inference of transcripts from sequencing data (BitSeq), a Bayesian approach for estimation of transcript expression level from RNA-seq experiments. Inferred relative expression is represented by Markov chain Monte Carlo samples from the posterior probability distribution of a generative model of the read data. We propose a novel method for DE analysis across replicates which propagates uncertainty from the sample-level model while modelling biological variance using an expression-level-dependent prior. We demonstrate the advantages of our method using simulated data as well as an RNA-seq dataset with technical and biological replication for both studied conditions. © The Author(s) 2012. Published by Oxford University Press. Source
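
A drastically simplified sketch of the core inferential idea (a toy, far from the BitSeq model, which handles full alignment likelihoods and biological variance) is a Gibbs sampler over the assignments of reads that align ambiguously to two transcripts:

```python
# Toy Gibbs sampler: posterior for the relative expression of two transcripts
# when some reads align ambiguously to both (equal alignment likelihood assumed).
import numpy as np

rng = np.random.default_rng(4)
n_uniq1, n_uniq2, n_ambig = 60, 30, 50      # read counts, toy data
alpha = np.array([1.0, 1.0])                # Dirichlet prior

theta_samples = []
z = rng.integers(0, 2, n_ambig)             # initial assignments of ambiguous reads
for it in range(2000):
    counts = np.array([n_uniq1 + (z == 0).sum(), n_uniq2 + (z == 1).sum()])
    theta = rng.dirichlet(alpha + counts)   # sample expression proportions
    z = (rng.random(n_ambig) < theta[1]).astype(int)  # reassign reads given theta
    if it >= 500:                           # discard burn-in
        theta_samples.append(theta[0])

ts = np.array(theta_samples)
print(f"posterior mean of theta_1 = {ts.mean():.3f} "
      f"(95% CrI {np.quantile(ts, 0.025):.3f}-{np.quantile(ts, 0.975):.3f})")
```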


Ahmedzai S.H.,University of Sheffield
Journal of Pain and Palliative Care Pharmacotherapy | Year: 2013

Personalized medicine can be defined as the tailoring of therapies to defined subsets of patients based on their likelihood to respond to therapy or their risk of adverse events. This medical model is more established in oncology but personalized pain therapy is showing potential promise. Pharmacogenomics is of growing relevance to the pain field, for example cytochrome P450 2D6 (CYP2D6) polymorphisms with resulting variation in degree of CYP2D6 expression may affect codeine analgesia. Research using quantitative sensory testing is seeking to identify phenotypic subgroups of neuropathic pain patients with different underlying pain mechanisms. Imaging studies have suggested that genetic, environmental, mood, and injury-specific factors combine to produce a unique cerebral pain "signature." The search for central nervous system (CNS) biomarkers for chronic pain is ongoing. © 2013 Informa Healthcare USA, Inc. Source


Enderby P.,University of Sheffield
Handbook of Clinical Neurology | Year: 2013

Dysarthria is a motor speech disorder which can be classified according to the underlying neuropathology and is associated with disturbances of respiration, laryngeal function, airflow direction, and articulation resulting in difficulties of speech quality and intelligibility. There are six major types of dysarthria: flaccid dysarthria associated with lower motor neuron impairment, spastic dysarthria associated with damaged upper motor neurons linked to the motor areas of the cerebral cortex, ataxic dysarthria primarily caused by cerebellar dysfunction, and hyperkinetic dysarthria and hypokinetic dysarthria, which are related to a disorder of the extrapyramidal system. The sixth is generally termed a mixed dysarthria and is associated with damage in more than one area, resulting in speech characteristics of at least two groups. The features of the speech disturbance of these six major types of dysarthria are distinctive and can assist with diagnosis. Dysarthria is a frequent symptom of many neurological conditions and is commonly associated with progressive neurological disease. It has a profound effect upon the patient and their families as communication is integrally related with expressing personality and social relationships. Speech and language therapy can be used to encourage the person to use the speech that is already available to them more effectively, can increase the range and consistency of sound production, can teach strategies for improving intelligibility and communicative effectiveness, can guide the individual to use methods that are less tiring and more successful, and can introduce the appropriate Augmentative and Alternative Communication approaches as and when required. © 2013 Elsevier B.V. Source


Ooi M.K.J.,University of Sheffield
Australian Journal of Botany | Year: 2010

Delayed seedling emergence can negatively affect plant recruitment. Recent work has shown that some species with innate seasonal requirements for germination can have seedling emergence delayed, depending on the season of fire. The impact of this delay, relative both to resprouters and to seedlings of species that emerge independently of season, remains unknown. I assessed delayed emergence and its subsequent impacts on post-fire recruitment success in three Leucopogon species, all of which display a seasonal emergence pattern related to their physiological dormancy. Intra-population comparisons showed that both small (1-6 months) and much larger (12-15 months) delays in emergence reduced seedling survival and growth, and increased the time taken for plants to reach maturity. Fire-season-induced delays produced very similar results, with higher mortality and slower growth after winter fires compared with post-summer fire cohorts. Seasonal emergence patterns, associated with seed dormancy and germination cues, may therefore provide a mechanism that determines the variation in recruitment success after fires in different seasons. A better understanding of the relationship between fire season and timing of emergence in physiologically dormant species would be timely, considering the forecast widening of the fire season due to climate change. © CSIRO 2010. Source


Hedayat M.,Tehran University of Medical Sciences | Netea M.G.,Radboud University Nijmegen | Rezaei N.,Tehran University of Medical Sciences | Rezaei N.,University of Sheffield
The Lancet Infectious Diseases | Year: 2011

Toll-like receptors (TLRs) recognise highly conserved molecular structures, collectively known as pathogen-associated molecular patterns. In the past two decades, the development and clinical implementation of TLR ligands (ie, chemically modified synthetic derivatives of naturally occurring ligands and fully synthetic small molecules) have been topics of intense research. Targeted manipulation of TLR signalling has been applied clinically to boost vaccine effectiveness, promote a robust T helper 1-predominant immune response against viral infection, or dampen the exaggerated inflammatory response to bacterial infection. Use of these new therapeutic molecules as adjuncts to conventional pharmacotherapy or stand-alone treatments might offer solutions to unmet clinical needs or could replace existing partly effective therapeutic strategies. © 2011 Elsevier Ltd. Source


De Palma M.,Ecole Polytechnique Federale de Lausanne | Lewis C.E.,University of Sheffield
Cancer Cell | Year: 2013

Tumor-associated macrophages (TAMs) promote key processes in tumor progression, such as angiogenesis, immunosuppression, invasion, and metastasis. A growing number of studies have also shown that TAMs can either enhance or antagonize the antitumor efficacy of cytotoxic chemotherapy, cancer-cell-targeting antibodies, and immunotherapeutic agents, depending on the type of treatment and tumor model. TAMs also drive reparative mechanisms in tumors after radiotherapy or treatment with vascular-targeting agents. Here, we discuss the biological significance and clinical implications of these findings, with an emphasis on novel approaches that effectively target TAMs to increase the efficacy of such therapies. © 2013 Elsevier Inc. Source


Willett P.,University of Sheffield
Journal of Chemical Information and Modeling | Year: 2013

The use of data fusion in similarity-based virtual screening is studied. Data fusion is the name given to a body of techniques that combine multiple sources of data into a single source, with the expectation that the resulting fused source will be more informative than the individual input sources. The scores that are merged by the fusion rule can be of two types: either the structure's actual similarity, as computed using some particular similarity measure, or the rank of the structure when all of the N computed similarities are ranked in decreasing order of the scores for the chosen similarity measure. Data fusion first found application in similarity fusion, where a single reference structure is searched using different similarity measures; it has since been extended to encompass multiple reference structures. Both approaches can benefit from the availability of training data linking similarity scores and probabilities of activity, but unsupervised fusion rules are available that enable effective searches to be carried out even in the absence of such data. Source
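
The two families of fusion rule described here can be sketched in a few lines (illustrative scores, not a real screening run):

```python
# Minimal sketch of data fusion for virtual screening: rank the database
# separately under each similarity measure, then combine ranks or raw scores.
import numpy as np

rng = np.random.default_rng(5)
n = 10
sim_a = rng.random(n)                 # scores from similarity measure A
sim_b = rng.random(n)                 # scores from similarity measure B

# rank 0 = most similar under each measure
rank_a = np.argsort(np.argsort(-sim_a))
rank_b = np.argsort(np.argsort(-sim_b))

sum_rank = rank_a + rank_b            # SUM rule on ranks
max_score = np.maximum(sim_a, sim_b)  # MAX rule on raw scores
print("fused ordering (sum of ranks):", np.argsort(sum_rank))
print("fused ordering (max of scores):", np.argsort(-max_score))
```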


Chantry M.,University of Bristol | Willis A.P.,University of Sheffield | Kerswell R.R.,University of Bristol
Physical Review Letters | Year: 2014

The aim in the dynamical systems approach to transitional turbulence is to construct a scaffold in phase space for the dynamics using simple invariant sets (exact solutions) and their stable and unstable manifolds. In large (realistic) domains where turbulence can coexist with laminar flow, this requires identifying exact localized solutions. In wall-bounded shear flows, the first of these has recently been found in pipe flow, but questions remain as to how they are connected to the many known streamwise-periodic solutions. Here we demonstrate that the origin of the first localized solution is in a modulational symmetry-breaking Hopf bifurcation from a known global traveling wave that has twofold rotational symmetry about the pipe axis. Similar behavior is found for a global wave of threefold rotational symmetry, this time leading to two localized relative periodic orbits. The clear implication is that many global solutions should be expected to lead to more realistic localized counterparts through such bifurcations, which provides a constructive route for their generation. © 2014 American Physical Society. Source


Black J.A.,University of Sheffield
Geotechnical Testing Journal | Year: 2015

Transparent synthetic soils have been developed as a soil surrogate to enable internal visualization of geotechnical processes in physical models. While significant developments have been made to enhance the methodology and capabilities of transparent soil modelling, the technique has not yet been exploited to its fullest potential. Tests are typically conducted at 1 g in small bench-scale models, which raises concerns about the impact of scale and stress level on previously reported work. This paper recognizes this limitation and outlines the development of an improved testing methodology whereby the transparent soil and laser-aided imaging technique are translated to the centrifuge environment. This has the considerable benefit of providing increased stresses that better reflect the prototype condition. The paper describes the technical challenges associated with implementing this revised experimental methodology, summarizes the test equipment and systems developed, and presents initial experimental results to validate and confirm the successful implementation and scaling of transparent soil testing in the high-gravity centrifuge test environment. A 0.6 m wide prototype strip foundation was tested at two scales using the principle of "modelling of models", in which similar performance was observed. The scientific developments discussed have the potential to provide a step change in transparent soil modelling methodology, crucially providing more representative stress conditions that reflect prototype conditions, while making a broader positive contribution to physical modelling capabilities for assessing complex soil-structure boundary problems. Copyright © 2015 by ASTM Int'l (all rights reserved). Source
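
The scaling logic behind "modelling of models" is compact enough to sketch (the scale factors here are illustrative; the stress argument is the standard centrifuge scaling law):

```python
# Centrifuge scaling sketch: the same 0.6 m prototype strip footing can be
# represented at two model scales, each spun at N x gravity so that stress at
# model depth z/N equals prototype stress at depth z (rho * N*g * z/N = rho*g*z).
B_prototype = 0.6                      # prototype footing width (m)
for N in (20, 40):                     # two assumed geometric scale factors
    B_model = B_prototype / N          # model footing width (m)
    g_level = N                        # required centrifuge acceleration (x g)
    print(f"1/{N} scale: footing {B_model * 1000:.0f} mm tested at {g_level} g")
# Similar observed behaviour at both scales supports the scaling assumptions.
```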


Mason S.,University of Sheffield
Academic Emergency Medicine | Year: 2011

Demand for emergency care is rising throughout the western world and represents a major public health problem. Increased reliance on professionalized health care by the public means that strategies need to be developed to manage demand safely, in a way that is achievable and acceptable both to consumers of emergency care and to service providers. In the United Kingdom, strategies have previously been aimed at managing demand better and have included introducing new emergency services for patients to access, extending the skills within the existing workforce, and, more recently, introducing time targets for emergency departments (EDs). This article reviews the effect of these strategies on demand for care and discusses the successes and failures, with reference to future plans for tackling this increasingly difficult problem in health care. © 2011 by the Society for Academic Emergency Medicine. Source


Lion S.,Royal Holloway, University of London | Boots M.,University of Sheffield
Ecology Letters | Year: 2010

There has been a renewed controversy over the processes that determine evolution in spatially structured populations. Recent theoretical and empirical studies have suggested that parasites should be expected to be more 'prudent' (less harmful and slower transmitting) when infection occurs locally. Using a novel approach based on spatial moment equations, we show that the evolution of parasites in spatially structured host populations is determined by the interplay of genetic and demographic spatial structuring, which in turn depends on the details of the ecological dynamics. This allows a detailed understanding of the roles of epidemiology, demography and network topology. Demographic turnover is needed for local interactions to select for prudence in the susceptible-infected models that have been the focus of previous studies. In diseases with little demographic turnover (as is typical of many human diseases), we show that only parasites causing diseases with long-lived immunity are likely to be prudent in space. We further demonstrate why, at intermediate parasite dispersal, virulence can evolve to higher levels than predicted by non-spatial theory. © 2010 Blackwell Publishing Ltd/CNRS. Source


Tsoi D.T.,University of Sheffield
Cochrane database of systematic reviews (Online) | Year: 2010

BACKGROUND: Patients with schizophrenia smoke more heavily than the general population and this contributes to their higher morbidity and mortality from smoking-related illnesses. It remains unclear what interventions can help them to quit or reduce smoking. OBJECTIVES: To evaluate the benefits and harms of different treatments for nicotine dependence in schizophrenia. SEARCH STRATEGY: We searched the Cochrane Tobacco Addiction Group Specialized Register and electronic databases including MEDLINE, EMBASE and PsycINFO from inception to April 2010. SELECTION CRITERIA: We included randomized trials for smoking cessation or reduction, comparing any pharmacological or non-pharmacological intervention with placebo or with another therapeutic control in adult smokers with schizophrenia or schizoaffective disorder. DATA COLLECTION AND ANALYSIS: Two reviewers independently assessed the eligibility and quality of trials and extracted data. Outcome measures included smoking abstinence, reduction in the amount smoked and any change in mental state. We extracted abstinence and reduction data at the end of treatment and at least six months after the intervention. We used the most rigorous definition of abstinence or reduction and biochemically validated data where available. Any reported adverse events were noted. Where appropriate, we pooled data using a random effects model. MAIN RESULTS: We included 21 trials (11 trials of smoking cessation; four trials of smoking reduction; one trial for relapse prevention; five trials reported smoking outcomes for interventions aimed at other purposes). Seven trials compared bupropion with placebo; meta-analysis showed that smoking cessation rates after bupropion were significantly higher than with placebo at the end of treatment (seven trials, N=340; risk ratio [RR] 2.84; 95% confidence interval [CI] 1.61 to 4.99) and after six months (five trials, N=214, RR 2.78; 95% CI 1.02 to 7.58). Expired carbon monoxide (CO) level and the number of cigarettes smoked daily were significantly lower with bupropion at the end of therapy but not after six months. There were no significant differences in positive, negative and depressive symptoms between bupropion and placebo groups. There were no reports of major adverse events such as seizures with bupropion. Contingent reinforcement (CR) with money may increase smoking abstinence rates and reduce the level of smoking in patients with schizophrenia. However, it is uncertain whether these benefits are maintained in the longer term. There was no evidence of benefit in the few trials of other pharmacological therapies (including nicotine replacement therapy (NRT)) and psychosocial interventions in helping smokers with schizophrenia to quit or reduce smoking. AUTHORS' CONCLUSIONS: Bupropion increases smoking abstinence rates in smokers with schizophrenia, without jeopardising their mental state. Bupropion may also reduce the amount these patients smoke. CR may help this group of patients to quit and reduce smoking. We failed to find convincing evidence that other interventions have a beneficial effect on smoking behaviour in schizophrenia. Source


Richards D.R.,University of Sheffield
Biodiversity and Conservation | Year: 2013

In order to better understand public interest in environmental issues it is necessary to not only consider present and recent levels of environmental awareness, but to set a longer term historical baseline. Large databases derived from scanned historical books, such as Google Ngram, provide a resource which can be used to assess historical levels of interest in environmental issues. Historical trends in the occurrence of nine environmental indicator terms were analysed between 1800 and 2009, and it was found that usage of all terms was highest during the last 50 years of this period. However, the usage of seven of the indicator terms investigated has now peaked and is in decline, and in some cases this decline began around 20 years ago. The observed patterns may indicate reduced interest in the environment, acceptance of environmental issues, or shifting trends in the terminology used by the environmental movement. © 2013 Springer Science+Business Media Dordrecht. Source
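
The style of analysis is straightforward to reproduce on exported frequency data; the sketch below assumes a hypothetical CSV export (ngram_frequencies.csv) with a year column and one column per indicator term, which is not the dataset used in the paper.

```python
# Sketch: smooth each term's frequency series, find its peak year, and compare
# recent usage with the peak, as in the paper's trend analysis.
import pandas as pd

df = pd.read_csv("ngram_frequencies.csv")     # hypothetical export
df = df[(df["year"] >= 1800) & (df["year"] <= 2009)]

for term in df.columns.drop("year"):
    smoothed = df[term].rolling(window=5, center=True).mean()
    peak_year = int(df.loc[smoothed.idxmax(), "year"])
    recent = smoothed.iloc[-10:].mean()
    print(f"{term}: peak {peak_year}, recent usage = {recent / smoothed.max():.0%} of peak")
```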


Bryant R.G.,University of Sheffield
Progress in Physical Geography | Year: 2013

The dust cycle can play an important role in the land-atmosphere-ocean system through interaction with biogeochemical cycles and direct and indirect radiative forcing of the atmosphere. One of the limiting factors for existing global models of dust transport, atmospheric processing and deposition is the quality and availability of data to allow evaluation and validation of emission schemes against in situ data from source regions. This review provides a critical overview of recent studies of aeolian processes from within or on dust sources, and focuses on studies dealing with retrieval of dust emission data, quantification of the contribution and variability of dust emissions from specific landforms, and the use of remote sensing data to reconcile dust storm inventories by direct comparison to dust source geomorphology. These case studies highlight significant advances in both field measurement and regional understanding of important components of the dust cycle derived through use of remote sensing data. However, recent research also demonstrates that most source regions exhibit significant spatial and temporal heterogeneity in dust emissions from candidate geomorphologies, which has direct implications for strategies aimed at inclusion of dust emission schemes at a scale relevant to climate models. To accommodate these factors and other significant scaling issues, additional research is needed to increase our quantification of a wider range of dust source types and geomorphological contexts over longer time periods. © The Author(s) 2013. Source


Petchey O.L.,University of Sheffield | Belgrano A.,Institute of Marine Research
Biology Letters | Year: 2010

The sizes of individual organisms, rather than their taxonomy, are used to inform management and conservation in some aquatic ecosystems. The European Science Foundation Research Network, SIZEMIC, facilitates the integration of such approaches with the more taxonomic approaches used in terrestrial ecology. During its 4-year tenure, the Network is bringing together theorists, empiricists, government employees and practitioners from a range of disciplines, via a series of meetings, working groups and research visits. The research conducted suggests that organismal size, with a generous helping of taxonomy, provides the most probable route to universal indicators of ecological status. © 2010 The Royal Society. Source


Meth P.,University of Sheffield
Environment and Planning A | Year: 2011

Interconnections between crime prevention and local governance practices are increasingly evident through the involvement of local partnerships and local government in crime prevention. The microlocal workings and the political implications of these interconnections have, however, received far less attention. This paper uses a case study from South Africa to understand the microlocal experiences of the interconnections between what is described here as 'crime management' and local governance. It is argued that the extent of interconnection is beyond that captured by the concept of 'partnership', as multiple governance structures, including local political parties, engage in crime management. Furthermore, the paper illustrates how local governance is dominated by crime management and that this domination is explicitly tied to the party political 'ambitions' of the dominant ANC party. The interconnection is theorised as the criminalisation of governance within a context of state-building. Source


Fishwick D.,University of Sheffield
Clinics in Chest Medicine | Year: 2012

Asthma and extrinsic allergic alveolitis (EAA) remain prevalent respiratory diseases and the cause of a significant disease burden. This article reviews the recently described occupational and environmental causes of these conditions. Even over the limited time span addressed by this article, novel agents and new data relating to already suggested causes have been described. Various types of work tasks or exposures appear to cause both asthma and EAA. Isocyanates, the best example of agents with the dual potential to cause asthma and EAA, are discussed, as is the new understanding of the role metal-working fluids play in causing respiratory disease. © 2012. Source


Cucchiella F.,University of LAquila | D'Adamo I.,University of LAquila | Lenny Koh S.C.,University of Sheffield
Journal of Cleaner Production | Year: 2015

Solar energy is a form of renewable energy that can be used to combat climate change through an environmentally accepted energy supply policy with support from both private and public consumers. Numerous factors contribute to the economic and environmental performance of solar energy investments, such as average annual irradiation, consumers' consumption, the Feed-in Tariff incentive system, the energy portfolio, emissions produced by the photovoltaic system, the rated power of the individual modules, the disposable income of the investor, the available surface for the installation of the photovoltaic panels, and the mission that characterises the project (environmental maximisation, economic maximisation or self-sufficiency of the system during the first year). Given the particular geographical position of Italy, the economic profitability and environmental impact of such systems were estimated, first on the provincial scale and then on the regional scale, to delineate general characteristics that are not driven by a single scenario. The indicators used include the following: net present value (NPV), internal rate of return (IRR), discounted payback period (DPbP), discounted aggregate cost-benefit ratio (BCr) and reduction of emissions of carbon dioxide (ERcd). The ultimate objective of the paper is to define the number of photovoltaic (PV) systems necessary to reach the target of renewable energy production in the above settings. A general scenario appropriate to achieving this goal is examined, together with the total wealth generated by this framework and the reduction in CO2 emissions resulting from its implementation. The indicators used here are total net present value per capita and reduction of carbon dioxide emissions per capita. © 2013 Elsevier Ltd. Source
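
A minimal sketch of the economic indicators for a single system is given below, using the numpy-financial package; every cash-flow and emission figure is an illustrative assumption rather than a value from the paper.

```python
# NPV, IRR, discounted payback period and CO2 savings for a toy PV investment.
import numpy as np
import numpy_financial as npf

capex = 6000.0                         # EUR, 3 kW system (assumed)
annual_benefit = 650.0                 # EUR/yr: saved bills + feed-in tariff (assumed)
lifetime, rate = 20, 0.05              # years, discount rate (assumed)
cash = np.array([-capex] + [annual_benefit] * lifetime)

npv = npf.npv(rate, cash)              # cash[0] is at time 0
irr = npf.irr(cash)

# discounted payback period: first year cumulative discounted flow >= 0
disc = cash / (1 + rate) ** np.arange(lifetime + 1)
dpbp = np.argmax(np.cumsum(disc) >= 0)

# CO2 saved, assuming ~1300 kWh/kW/yr yield and 0.4 kg CO2/kWh grid factor
co2_per_year = 3 * 1300 * 0.4 / 1000   # tonnes CO2 per year
print(f"NPV={npv:.0f} EUR, IRR={irr:.1%}, DPbP={dpbp} years, "
      f"CO2 saved={co2_per_year * lifetime:.1f} t over lifetime")
```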


Leggett G.J.,University of Sheffield
ACS Nano | Year: 2011

Continued progress in the fast-growing field of nanoplasmonics will require the development of new methods for the fabrication of metal nanostructures. Optical lithography provides a continually expanding tool box. Two-photon processes, as demonstrated by Shukla et al. (doi: 10.1021/nn103015g), enable the fabrication of gold nanostructures encapsulated in dielectric material in a simple, direct process and offer the prospect of three-dimensional fabrication. At higher resolution, scanning probe techniques enable nanoparticle placement by localized oxidation, and near-field sintering of nanoparticulate films enables direct writing of nanowires. Direct laser printing of single gold nanoparticles offers a remarkable capability for the controlled fabrication of model structures for fundamental studies, particle-by-particle. Optical methods continue to provide a powerful support for research into metamaterials. © 2011 American Chemical Society. Source


Jetz W.,Imperial College London | Freckleton R.P.,University of Sheffield
Philosophical transactions of the Royal Society of London. Series B, Biological sciences | Year: 2015

In taxon-wide assessments of threat status, many species remain unassessed owing to a lack of data. Here, we present a novel spatial-phylogenetic statistical framework that uses a small set of readily available or derivable characteristics, including phylogenetically imputed body mass and remotely sensed human encroachment, to provide initial baseline predictions of threat status for data-deficient species. Applied to assessed mammal species worldwide, the approach effectively identifies threatened species and predicts the geographical variation in threat. For the 483 data-deficient species, the models predict highly elevated threat, with 69% 'at-risk' species in this set, compared with 22% among assessed species. This results in 331 additional potentially threatened mammals, with elevated conservation importance in rodents, bats and shrews, and in areas such as Colombia, Sulawesi and the Philippines. These findings demonstrate the future potential for combining phylogenies and remotely sensed data with species distributions to identify species and regions of conservation concern. © 2015 The Author(s) Published by the Royal Society. All rights reserved. Source


Young-Afat D.A.,University of Sheffield
Epidemiology | Year: 2016

The ‘cohort multiple randomized controlled trial’ (cmRCT), a new design for pragmatic trials, embeds multiple trials within a cohort. The cmRCT is an attractive alternative to conventional RCTs in fields where recruitment is slow, multiple new (competing) interventions for the same condition have to be tested, new interventions are highly preferred by patients and doctors, and the risk of disappointment bias, cross-over, and contamination is considerable. In order to prevent these unwanted effects, the cmRCT provides information on randomization to the intervention group/arm only, and only after randomization (i.e. pre-randomization). To some, especially in a clinical setting, this is not ethically acceptable. In this paper, we argue that pre-randomization in the cmRCT can be avoided by adopting a staged informed consent procedure. In the first stage, at entry into the cohort, all potential participants are asked for their informed consent to participate in a cohort study and broad consent to be either randomly selected to be approached for experimental interventions or to serve as controls without further notice during participation in the cohort. In a second stage, at the initiation of an RCT within the cohort, informed consent to receive the intervention is then only sought in those randomly selected for the intervention arm. At the third stage, after completion of each RCT, all cohort participants receive aggregate disclosure of trial results. This staged informed consent procedure avoids pre-randomization in the cmRCT and aims to keep participants actively engaged in the research process. Copyright © 2016 Wolters Kluwer Health, Inc. All rights reserved. Source


Buchan R.,University of Sheffield
Journal of Conflict and Security Law | Year: 2012

The legality of cyber attacks is generally approached from the use of force prohibition contained in Article 2(4) UN Charter. In order to constitute an unlawful use of force it is widely accepted that an intervention must produce physical damage. Of course, a cyber attack can cause physical damage and therefore violate Article 2(4). Upon the available evidence, I submit that the deployment of the Stuxnet virus against Iran in 2010 is such an example. However, the issue is that many cyber attacks do not manifest physical damage and are thus not captured by Article 2(4). Contrary to claims in existing cyber war literature, this does not mean that such attacks are lawful. Instead, I argue that where such attacks are coercive in nature they will nevertheless violate the non-intervention principle that is embedded in customary international law. I suggest that the cyber attack against Estonia in 2007 provides a good example of a cyber attack amounting to an unlawful intervention. © Oxford University Press 2012; all rights reserved. Source


Stevens J.W.,University of Sheffield
Pharmaceutical Statistics | Year: 2011

A meta-analysis of a continuous outcome measure may involve missing standard errors. Whether this is a problem depends on the assumptions made about the population standard deviation. Multiple imputation can be used to impute missing values while allowing for uncertainty in the imputation. Markov chain Monte Carlo simulation is a multiple imputation technique for generating posterior predictive distributions for missing data. We present an example of imputing missing variances using WinBUGS. The example highlights the importance of checking model assumptions, whether for missing or observed data. © 2011 John Wiley & Sons, Ltd. Source
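
As an illustration of the approach (the paper itself works in WinBUGS; the sketch below is a simplified Python stand-in with invented study data), missing standard errors can be multiply imputed from a distribution fitted to the observed ones, with the spread across imputations carrying the extra uncertainty into the pooled estimate:

```python
# A simplified stand-in (not the paper's WinBUGS model) for multiply
# imputing missing standard errors; all study numbers are invented.
import numpy as np

rng = np.random.default_rng(1)

means = np.array([1.2, 0.8, 1.5, 1.1, 0.9])         # study effect estimates
ses = np.array([0.30, 0.25, np.nan, 0.35, np.nan])  # two standard errors missing
missing = np.isnan(ses)

# Crude imputation model: observed log-SEs treated as roughly normal.
obs = np.log(ses[~missing])
mu, sd = obs.mean(), obs.std(ddof=1)

M = 1000  # number of imputations
pooled = np.empty(M)
for m in range(M):
    ses_m = ses.copy()
    ses_m[missing] = np.exp(rng.normal(mu, sd, size=missing.sum()))
    w = 1.0 / ses_m**2                               # inverse-variance weights
    pooled[m] = np.sum(w * means) / np.sum(w)

# The spread across imputations carries the uncertainty due to the
# missing standard errors into the pooled estimate.
print(pooled.mean(), pooled.std(ddof=1))
```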


Lewis S.L.,University College London | Lewis S.L.,University of Leeds | Edwards D.P.,University of Sheffield | Galbraith D.,University of Leeds
Science | Year: 2015

Tropical forests house over half of Earth's biodiversity and are an important influence on the climate system. These forests are experiencing escalating human influence, altering their health and the provision of important ecosystem functions and services. Impacts started with hunting and millennia-old megafaunal extinctions (phase I), continuing via low-intensity shifting cultivation (phase II), to today's global integration, dominated by intensive permanent agriculture, industrial logging, and attendant fires and fragmentation (phase III). Such ongoing pressures, together with an intensification of global environmental change, may severely degrade forests in the future (phase IV, global simplification) unless new "development without destruction" pathways are established alongside climate change-resilient landscape designs. © 2015 American Association for the Advancement of Science. All rights reserved. Source


Haycock J.W.,University of Sheffield
Methods in molecular biology (Clifton, N.J.) | Year: 2011

Cell culture in two dimensions has been routinely and diligently undertaken in thousands of laboratories worldwide for the past four decades. However, the culture of cells in two dimensions is arguably primitive and does not reproduce the anatomy or physiology of a tissue for informative or useful study. Creating a third dimension for cell culture is clearly more relevant, but requires a multidisciplinary approach and multidisciplinary expertise. When entering the third dimension, investigators need to consider the design of scaffolds for supporting the organisation of cells or the use of bioreactors for controlling nutrient and waste product exchange. As 3D culture systems become more mature and relevant to human and animal physiology, the ability to design and develop co-cultures becomes possible as does the ability to integrate stem cells. The primary objectives for developing 3D cell culture systems vary widely - and range from engineering tissues for clinical delivery through to the development of models for drug screening. The intention of this review is to provide a general overview of the common approaches and techniques for designing 3D culture models. Source


Willett P.,University of Sheffield
Methods in molecular biology (Clifton, N.J.) | Year: 2011

This chapter reviews the use of molecular fingerprints for chemical similarity searching. The fingerprints encode the presence of 2D substructural fragments in a molecule, and the similarity between a pair of molecules is a function of the number of fragments that they have in common. Although this provides a very simple way of estimating the degree of structural similarity between two molecules, it has been found to provide an effective and an efficient tool for searching large chemical databases. The review describes the historical development of similarity searching since it was first described in the mid-1980s, reviews the many different coefficients, representations, and weightings that can be combined to form a similarity measure, describes quantitative measures of the effectiveness of similarity searching, and concludes by looking at current developments based on the use of data fusion and machine learning techniques. Source
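
To make the mechanics concrete, here is a minimal sketch (toy bit-set fingerprints, not a real chemistry toolkit) of similarity searching with the Tanimoto coefficient, the most widely used association coefficient in this setting:

```python
# Toy fingerprint similarity search: molecules as sets of 2D fragment
# bit positions, ranked against a query by Tanimoto coefficient.
def tanimoto(fp_a: set[int], fp_b: set[int]) -> float:
    """Tanimoto coefficient: bits in common / bits in the union."""
    common = len(fp_a & fp_b)
    return common / (len(fp_a) + len(fp_b) - common)

# Invented database: molecule id -> set of fragment bit positions.
database = {
    "mol1": {1, 4, 7, 9},
    "mol2": {1, 4, 8},
    "mol3": {2, 3, 5, 9},
}
query = {1, 4, 7}

# Rank the database by decreasing similarity to the query structure.
hits = sorted(database, key=lambda m: tanimoto(query, database[m]), reverse=True)
for m in hits:
    print(m, round(tanimoto(query, database[m]), 3))
```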


Iorio A.,McMaster University | Puccetti P.,University of Perugia | Makris M.,University of Sheffield
Blood | Year: 2012

The development of alloantibodies or inhibitors is the most serious complication a patient with severe hemophilia can experience from treatment with clotting factor concentrates. Although common in previously untreated patients, inhibitor development is rare in multiply exposed, well-tolerized patients. There has been a nonevidence-based reluctance to change concentrate because of a perceived greater inhibitor risk after the switch, even though most patients are now likely to be using a concentrate on which they did not begin. Inhibitors in previously treated patients are observed in approximately 2 per 1000 patient/years, which makes it difficult to study and compare rates among different products. Because the baseline inhibitor risk in previously treated patients may vary over time, it is important to compare the risk in patients switching to a new product with that in a parallel control group of nonswitching patients or within a case-controlled study. The study designs imposed by regulators are suboptimal in detecting immunogenicity signals. The issue of immunogenicity of new products is likely to gain more relevance in the near future, with a call for effective postmarketing surveillance studies for all of the new engineered factor VIII concentrates with prolonged half-lives that are likely to enter clinical practice. © 2012 by The American Society of Hematology. Source


Halawa A.,University of Sheffield
Annals of Transplantation | Year: 2011

Acute graft dysfunction can be caused by ischaemic damage or immunological injury leading to serious consequences both in the short and long term. We are in a desperate need for biomarkers of immune and nonimmune injury at different time points of the transplantation time course, beginning from a potential kidney donors where acute kidney damage can pass unnoticed, during the early post-transplant periods to predict acute transplant dysfunction due to various causes and during long term follow up to predict chronic histological changes. The implementation of these novel biomarkers could increase the sensitivity of diagnosis and monitoring of kidney injury in kidney transplant recipients. Traditionally acute graft dysfunction is diagnosed by measuring serum creatinine concentrations. Unfortunately rise in serum creatinine is a late sign of kidney damage. It indicates rather predicts the damage. The treatment, in order to be effective, must be instituted very early after the initiating insult, well before the serum creatinine even begins to rise. Fortunately, emerging technologies such as functional genomics and proteomics have uncovered novel candidates that are emerging as potentially useful biomarkers of acute kidney injury (AKI). The most promising of biomarkers in AKI for clinical use include a plasma panel consisting of Neutrophil Gelatinase-Associated Lipocalin (NGAL) and Cystatin C and a urine panel including NGAL, Il-18 and Kidney Injury Molecule 1 (KIM-1). Most of these biomarkers were developed in non-transplant AKI, yet their role in clinical transplantation has to be identified. © Ann Transplant. Source


Ryan F.P.,University of Sheffield
Current Neuropharmacology | Year: 2011

There is growing evidence that the env genes of two or more human endogenous retroviruses (HERVs) of the W family are contributing to the inflammatory processes, and thus to the pathogenesis, of multiple sclerosis (MS). Increasing understanding of the human endogenous retroviral locus, ERVWE1, and the putative multiple sclerosis associated retrovirus, or MSRV, and in particular of the HERV-W env sequences associated with these, offers the potential of new lines of pharmacological research that might assist diagnosis, prognosis and therapy of multiple sclerosis. © 2011 Bentham Science Publishers Ltd. Source


We present a reanalysis of the stochastic model of organelle production and show that the equilibrium distributions for the organelle numbers predicted by this model can be readily calculated in three different scenarios. These three distributions can be identified as standard distributions, and the corresponding exact formulae for their mean and variance can therefore be used in further analysis. This removes the need to rely on stochastic simulations or approximate formulae (derived using the fluctuation dissipation theorem). These calculations allow for further analysis of the predictions of the model. On the basis of this we question the extent to which the model can be used to conclude that peroxisome biogenesis is dominated by de novo production when Saccharomyces cerevisiae cells are grown on glucose medium. © Craven. Source
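
For the simplest scenario of this kind, de novo production at a constant rate combined with first-order degradation, the stationary organelle-number distribution is Poisson, so the exact mean and variance coincide. A quick Gillespie-style simulation (my own sketch, with invented rates) illustrates the agreement without recourse to approximate formulae:

```python
# Gillespie-style simulation of constant de novo production (rate k_prod)
# plus first-order degradation (rate k_deg per organelle); rates invented.
import numpy as np

rng = np.random.default_rng(0)
k_prod, k_deg = 5.0, 0.5

t, n, t_end = 0.0, 0, 20_000.0
dwell = {}                      # total time spent at each organelle number
while t < t_end:
    total = k_prod + k_deg * n
    dt = rng.exponential(1.0 / total)
    dwell[n] = dwell.get(n, 0.0) + dt
    t += dt
    n += 1 if rng.random() < k_prod / total else -1

states = np.array(sorted(dwell))
w = np.array([dwell[s] for s in states])
w /= w.sum()                    # time-weighted stationary distribution
mean = (states * w).sum()
var = ((states - mean) ** 2 * w).sum()
print("simulated mean, variance:", mean, var)
print("exact Poisson value:     ", k_prod / k_deg)   # mean = variance = 10
```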


Green P.L.,University of Sheffield
Mechanical Systems and Signal Processing | Year: 2015

This work details the Bayesian identification of a nonlinear dynamical system using a novel MCMC algorithm: 'Data Annealing'. Data Annealing is similar to Simulated Annealing in that it allows the Markov chain to easily clear 'local traps' in the target distribution. To achieve this, training data is fed into the likelihood such that its influence over the posterior is introduced gradually - this allows the annealing procedure to be conducted with reduced computational expense. Additionally, Data Annealing uses a proposal distribution which allows it to conduct a local search accompanied by occasional long jumps, reducing the chance that it will become stuck in local traps. Here it is used to identify an experimental nonlinear system. The resulting Markov chains are used to approximate the covariance matrices of the parameters in a set of competing models before the issue of model selection is tackled using the Deviance Information Criterion. © 2014 The Author. Source
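
The abstract's two ingredients, gradual introduction of training data into the likelihood and a mixed local/long-jump proposal, can be sketched as follows (a toy Gaussian-mean problem of my own construction, not the experimental system identified in the paper):

```python
# Toy "Data Annealing"-style MCMC: data enter the likelihood in stages,
# and the proposal mixes small local moves with occasional long jumps.
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(2.0, 1.0, size=200)   # synthetic training data

def log_like(theta, batch):
    """Gaussian log-likelihood of the batch with mean = theta, sd = 1."""
    return -0.5 * np.sum((batch - theta) ** 2)

theta, chain = 0.0, []
n_batches = 10
for stage in range(1, n_batches + 1):
    batch = data[: stage * len(data) // n_batches]  # data introduced gradually
    for _ in range(500):
        # Mixture proposal: mostly local moves, occasional long jumps
        # to help the chain clear local traps in the target.
        step = 0.05 if rng.random() < 0.9 else 2.0
        prop = theta + rng.normal(0.0, step)
        if np.log(rng.random()) < log_like(prop, batch) - log_like(theta, batch):
            theta = prop
        chain.append(theta)

print("posterior mean estimate:", np.mean(chain[-1000:]))
```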


Wise S.,University of Sheffield
Computers and Geosciences | Year: 2011

Studies of the detailed characteristics of DEM error have been hampered by the difficulty in obtaining a large sample of error values for a DEM. The approach proposed in this paper is to resample a DEM to a lower resolution and then reinterpolate back to the original resolution which produces a large sample of error values well distributed across the DEM. This method is applied to a sample area from Scotland, which contains a variety of terrain types. The results show that the standard measure of error, the root mean square error (RMSE) of elevation, shows only moderate correlation with a visual assessment of the quality of DEMs produced by a range of interpolation methods. The frequency distribution and strength of spatial autocorrelation are shown to vary with the initial data density and interpolation method. When the source data density is low, the error has strong spatial autocorrelation and a distribution that is close to being Gaussian. However, as the data density increases, levels of spatial autocorrelation drop and the distribution becomes leptokurtic with values very strongly clustered around zero. At the level of the individual DEM point, elevation error is shown to be a poor predictor of error in slope derivatives which depend on the spatial pattern of elevation errors around the point and are also sensitive to differences in terrain. At the level of a whole DEM, however, RMSE of elevation is a good predictor of RMSE in gradient and aspect but not of curvature. © 2011 Elsevier Ltd. Source
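
The resample-and-reinterpolate idea is straightforward to sketch (synthetic terrain here, rather than the Scottish test area):

```python
# Synthetic stand-in for a DEM: coarsen, reinterpolate, and treat the
# differences as a dense, well-distributed sample of interpolation error.
import numpy as np
from scipy.ndimage import zoom

rng = np.random.default_rng(0)
x, y = np.meshgrid(np.linspace(0, 4, 200), np.linspace(0, 4, 200))
dem = 50 * np.sin(x) * np.cos(y) + rng.normal(0, 0.5, x.shape)

factor = 4
coarse = dem[::factor, ::factor]          # resample to lower resolution
rebuilt = zoom(coarse, factor, order=1)   # reinterpolate (bilinear) back

error = rebuilt - dem                     # one error value per grid cell
print("elevation RMSE:", np.sqrt(np.mean(error**2)))
# The full error grid also supports the distribution-shape and spatial
# autocorrelation checks discussed in the paper.
```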


Julious S.A.,University of Sheffield
Pharmaceutical Statistics | Year: 2011

In a non-inferiority trial to assess a new investigative treatment, it may be necessary to consider an indirect comparison with placebo via the active control in the current trial. We can, therefore, use the fact that there is a common active control in the comparisons of the investigative treatment and placebo. In analysing a non-inferiority trial, the ABC of Assay sensitivity, Bias minimisation and Constancy assumption needs to be considered. It is highlighted how the ABC assumptions can potentially fail when there is placebo creep or a patient population shift. In this situation, the belief about the placebo response, expressed as a prior probability in a Bayesian formulation, could be used with the observed treatment effects to set the non-inferiority limit. © 2011 John Wiley & Sons, Ltd. Source
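
One hedged illustration of the final point (a generic fraction-of-effect construction with an invented Beta prior, not necessarily the paper's exact formulation): placing a prior on the placebo response induces a distribution for the non-inferiority limit that retains a stated fraction of the active-versus-placebo effect:

```python
# Invented numbers throughout: a prior belief about the placebo response
# rate propagated into a distribution for the non-inferiority limit.
import numpy as np

rng = np.random.default_rng(7)

control_response = 0.60                          # assumed active-control rate
placebo_prior = rng.beta(20, 60, size=100_000)   # prior: placebo around 25%

effect = control_response - placebo_prior        # control-vs-placebo effect
retain = 0.50                                    # fraction of effect to preserve
ni_limit = retain * effect                       # induced margin draws

print("median NI limit:", np.median(ni_limit))
print("95% interval:", np.percentile(ni_limit, [2.5, 97.5]))
```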


Ong A.C.M.,University of Sheffield | Harris P.C.,Mayo Medical School
Kidney International | Year: 2015

It is 20 years since the identification of PKD1, the major gene mutated in autosomal dominant polycystic kidney disease (ADPKD), followed closely by the cloning of PKD2. These major breakthroughs have led in turn to a period of intense investigation into the function of the two proteins encoded, polycystin-1 and polycystin-2, and how defects in either protein lead to cyst formation and nonrenal phenotypes. In this review, we summarize the major findings in this area and present a current model of how the polycystin proteins function in health and disease. Source


OBJECTIVES: To investigate the association between genotype at the soluble interleukin 6 receptor (sIL-6R) A358C single nucleotide polymorphism (SNP, rs8192284), previously reported to correlate with soluble receptor levels, and response to anti-TNF therapy in subjects with RA. METHODS: In a large cohort of Caucasian RA patients treated with anti-TNF medications (total, n = 1050; etanercept, n = 455; infliximab, n = 450; and adalimumab, n = 142), the sIL-6R A358C polymorphism was genotyped using a Taqman 5'-allelic discrimination assay. Linear regression analysis adjusted for baseline 28 joint disease activity score (DAS28), baseline HAQ score, gender and use of concurrent DMARDs was used to assess the association of genotype at this polymorphism with response to anti-TNF therapy, defined by change in DAS28 after 6 months of treatment. Analyses were performed in the entire cohort, and also stratified by an anti-TNF agent. Additional analysis according to the EULAR response criteria was also performed, with the chi-squared test used to compare genotype groups. RESULTS: No association between genotype at sIL-6R A358C and response to anti-TNF treatment was detected either in the cohort as a whole or after stratification by anti-TNF agent, in either the linear regression analysis or with response segregated according to EULAR criteria. CONCLUSIONS: This study shows that genotype at the functional sIL-6R A358C SNP is not associated with response to anti-TNF treatment in patients with RA. Source


Ahmed M.Y.M.,Military Technical College | Qin N.,University of Sheffield
Progress in Aerospace Sciences | Year: 2011

Among a variety of design requirements, reducing the drag and aeroheating on hypersonic vehicles is the most crucial one. Unfortunately, these two objectives are often conflicting. On one hand, sharp, slender forebody designs reduce drag and ensure longer ranges and more economic flights; however, they are more vulnerable to aerodynamic heating. On the other hand, blunt forebodies produce more drag but are preferred as far as aeroheating is concerned. In addition, in the context of hypersonic vehicles, blunt geometries are preferred over slender ones for practical reasons such as higher volumetric efficiency and better accommodation of crew or on-board equipment. In principle, a blunt vehicle flying at hypersonic speeds generates a strong bow shock wave ahead of its nose, which is responsible for the high drag and aeroheating levels. There have been a number of efforts devoted to reducing both the drag and the aeroheating by modifying the flowfield ahead of the vehicle's nose. Of these techniques, using spikes is the simplest and the most reliable. A spike is simply a slender rod attached to the stagnation point of the vehicle's nose. The spike replaces the strong bow shock with a system of weaker shocks, along with creating a zone of recirculating flow ahead of the forebody, thus reducing both drag and aeroheating. Since their introduction to the high-speed vehicles domain in the late 1940s, spikes have been extensively studied using both experimental facilities and numerical simulation techniques. The present paper is devoted to surveying these studies and illustrating the contributions of the authors in this field. The paper also highlights areas in the field that need further investigation. © 2011 Elsevier Ltd. Source


Thomas J.A.,University of Sheffield
Dalton Transactions | Year: 2011

Recent years have seen a surge of interest in the metal-ion directed construction of discrete molecular assemblies. Once the versatility of this approach for the construction of hitherto inaccessible molecular architectures was demonstrated, work towards fully functional systems rapidly developed. Since these architectures have a wide range of possible applications, this perspective review will focus on one important aspect of this research: the construction of hosts with optical or electrochemical sensing outputs. As this overview will illustrate, research in this area is now producing working examples of sensors for a variety of ionic, molecular, and biomolecular guests. © 2011 The Royal Society of Chemistry. Source


Nosil P.,University of Sheffield
Evolution | Year: 2013

In a recent paper, Yukilevich (2012) showed that asymmetries between Drosophila species in the strength of premating isolation tend to match asymmetries in the costs of hybridization (inferred from asymmetries in the strength of postzygotic isolation and range sizes). The results provide novel evidence that the outcome of reinforcement can depend on the strength and frequency of selection against hybridization. Here, I reanalyze the data to demonstrate that another (unconsidered) factor, namely the quantitative degree of sympatry between species, also predictably affects reinforcement. Specifically, premating isolation is strongest at intermediate degrees of sympatry. This result complements, rather than challenges, those of Yukilevich (2012). One possible explanation for this newly discovered pattern is that when the degree of sympatry is small, selection for avoidance of hybridization is rare, but when the degree of sympatry is large, homogenizing gene flow overcomes reinforcing selection. Thus, reinforcement may depend on the balance between selection and gene flow. However, the current work examined degree of sympatry, not gene flow itself. Thus, further data on gene flow levels in Drosophila is required to test this hypothesis, which emerged from the patterns reported here. © 2012 The Author(s). Evolution © 2012 The Society for the Study of Evolution. Source


The anthrax protein protective antigen (PA) is responsible for cell-surface recognition and aids the delivery of the toxic anthrax enzymes into host cells. By targeting PA and preventing it from binding to host cells, it is hoped that the delivery of toxins into the cell will be inhibited. The current assay reported for PA is a low-throughput functional assay. Here, a high-throughput screening method using differential scanning fluorimetry (DSF) was developed and optimized to screen a number of libraries from various sources, including a selection of FDA-approved drugs as well as hits selected by a virtual screening campaign. DSF is a rapid technique that uses fluorescence to monitor the thermal unfolding of proteins using a standard qPCR instrument. A positive shift in the calculated melting temperature (Tm) of the protein in the presence of a compound, relative to the Tm of the unbound protein, indicates that stabilization of the protein by ligand binding may have occurred. Optimization of the melting assay showed SYPRO Orange to be an ideal dye as a marker and led to the reduction of the DMSO concentration to <1% (v/v) in the final assay. The final assay volume was minimized to 25 μL, with 5 μg protein per well of a 96-well plate. In addition, a buffer, salt and additive screen led to the selection of 10 mM HEPES-NaOH pH 7.5, 100 mM NaCl as the assay buffer. This method has been shown here to be useful as a primary method for the detection of small-molecule PA ligands, giving a hit rate of approximately 7%. These ligands can then be studied further using PA functional assays to confirm their biological activities before being selected as lead compounds for the treatment of anthrax. Source


Booth A.,University of Sheffield
International Journal of Technology Assessment in Health Care | Year: 2010

Objectives: The aim of this study is to review briefly different methods for determining the optimal retrieval of studies for inclusion in a health technology assessment (HTA) report. Methods: This study reviews the methodology literature related to specific methods for evaluating yield from literature searching strategies and for deciding whether to continue or desist in the searching process. Results: Eight different methods were identified. These include using the Capture-recapture technique; obtaining Feedback from the commissioner of the HTA report; seeking the Disconfirming case; undertaking comparison against a known Gold standard; evaluating retrieval of Known items; recognizing the Law of diminishing returns, specifying a priori Stopping rules, and identifying a point of Theoretical saturation. Conclusions: While this study identified a variety of possible methods, there has been very little formal evaluation of the specific strengths and weaknesses of the different techniques. The author proposes an evaluation agenda drawing on an examination of existing data together with exploration of the specific impact of missing relevant studies. Copyright © Cambridge University Press 2010. Source
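
As a concrete example of the first method, a Lincoln-Petersen capture-recapture estimate (with invented search yields) treats two independent searches as two captures and uses their overlap to estimate the total population of relevant studies:

```python
# Capture-recapture applied to literature searching: two independent
# searches act as two "captures"; the overlap estimates the total pool.
def lincoln_petersen(n1: int, n2: int, overlap: int) -> float:
    """Estimated total number of relevant studies given two search yields."""
    return n1 * n2 / overlap

n_search1, n_search2, n_both = 45, 38, 30   # invented yields
total = lincoln_petersen(n_search1, n_search2, n_both)
found = n_search1 + n_search2 - n_both      # distinct studies retrieved so far
print(f"estimated total: {total:.0f}; retrieved so far: {found}")
# When retrieved approaches the estimated total, further searching has
# little expected yield, informing the decision to stop.
```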


Naser H.,University of Sheffield
International Journal of Energy Economics and Policy | Year: 2015

This paper examines the causal relationship between nuclear energy consumption and economic growth for four industrialised countries (the US, Canada, Japan, and France) between 1965 and 2010. In a multivariate framework that accounts for other key determinants, such as oil demand and price, a modified version of the Granger causality test developed by Toda and Yamamoto (1995) is applied. Results show that there is one-way causality from nuclear energy consumption to economic growth in Japan, denoting that an energy conservation policy that aims to minimise nuclear energy consumption may adversely affect economic growth. Conversely, increasing real GDP causes additional nuclear energy consumption in France. In the US and Canada, there is evidence that supports the neutrality hypothesis. Looking at the other investigated channels, the level of real oil prices seems to play a vital role in driving the demand for nuclear power in three out of four countries. There is also a causal linkage between oil and nuclear energy consumption in the US, Japan, and France, suggesting that the uncertainty surrounding the global oil market plays a key role in determining the demand for nuclear energy. This means that policies in these countries should endeavour to overcome the constraints on nuclear energy consumption to withstand any unexpected hikes in oil prices, which may adversely affect economic growth in such oil-importing countries. © 2015, Econjournals. All rights reserved. Source
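
The Toda-Yamamoto procedure itself is easy to sketch: fit a levels regression with p + d_max lags of both series and Wald-test only the first p lag coefficients of the candidate cause. The implementation below (with simulated series standing in for the energy and GDP data) is my own illustration, not the authors' code:

```python
# Toda-Yamamoto-style Granger non-causality test via a lag-augmented
# levels regression and a chi-squared Wald test; simulated data.
import numpy as np
from scipy import stats

def toda_yamamoto(y, x, p, d_max):
    """Test that x does not Granger-cause y: regress y in levels on
    p + d_max lags of both series, then restrict only the first p x-lags."""
    k = p + d_max
    n = len(y)
    X = np.column_stack(
        [np.ones(n - k)]
        + [y[k - j : n - j] for j in range(1, k + 1)]   # lags of y
        + [x[k - j : n - j] for j in range(1, k + 1)]   # lags of x
    )
    Y = y[k:]
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ beta
    sigma2 = resid @ resid / (len(Y) - X.shape[1])
    cov = sigma2 * np.linalg.inv(X.T @ X)
    idx = np.arange(1 + k, 1 + k + p)     # coefficients on x lags 1..p only
    b, V = beta[idx], cov[np.ix_(idx, idx)]
    wald = b @ np.linalg.solve(V, b)
    return wald, stats.chi2.sf(wald, df=p)

rng = np.random.default_rng(0)
x = rng.normal(size=300).cumsum()              # I(1) series, so d_max = 1
y = 0.5 * np.roll(x, 1) + rng.normal(size=300)
y[0] = 0.0
print(toda_yamamoto(y, x, p=2, d_max=1))       # small p-value: x causes y
```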


Many patients who attend an emergency department (ED) with chest pain receive a diagnosis of non-cardiac chest pain (NCCP), and often suffer poor psychological outcomes and continued pain. This study assessed the role of illness representations in explaining psychological distress and continued chest pain in patients attending an ED. ED NCCP patients (N = 138) completed measures assessing illness representations, anxiety, depression and quality of life (QoL) at baseline, and chest pain at one month. Illness representations explained significant amounts of the variance in anxiety (Adj. R2 = .38), depression (Adj. R2 = .18) and mental QoL (Adj. R2 = .36). A belief in psychological causes had the strongest associations with outcomes. At one month, 28.7% of participants reported experiencing frequent pain, 13.2% infrequent pain and 58.1% no pain. Anxiety, depression and poor QoL, but not illness representations, were associated with continued chest pain. The findings suggest that (i) continued chest pain is related to psychological distress and poor QoL, (ii) interventions should be aimed at reducing psychological distress and improving QoL and (iii) given the associations between perceived psychological causes and psychological distress/QoL, NCCP patients in the ED might benefit from psychological therapies to manage their chest pain. Source


Dean P.,University of Sheffield
Functional neurology | Year: 2010

Many functional models of the cerebellar microcircuit are based on the adaptive-filter model first proposed by Fujita. The adaptive filter has powerful signal processing capacities that are suitable for both sensory and motor tasks, and uses a simple and intuitively plausible decorrelation learning rule that offers an account of the evolution of the inferior olive. Moreover, in those cases where the input-output transformations of cerebellar microzones have been sufficiently characterised, they appear to conform to those predicted by the adaptive-filter model. However, these cases are few in number, and comparing the model with the internal operations of the microcircuit itself has not proved straightforward. Whereas some microcircuit features appear compatible with adaptive-filter function, others, such as simple granular-layer processing or Purkinje cell bistability, do not. How far these seeming incompatibilities indicate additional computational roles for the cerebellar microcircuit remains to be determined. Source


Background: Osteoarthritis is a common presentation in primary care, and non-selective non-steroidal anti-inflammatory drugs (sometimes also referred to as traditional NSAIDs or tNSAIDs) and selective cyclo-oxygenase 2 inhibitors (COX-2 inhibitors) are commonly used to treat it. The UK's National Institute for Health and Clinical Excellence (NICE) recommends taking patient risk factors into account when selecting a tNSAID or a COX-2 inhibitor, but GPs have lacked practical guidance on assessing patient risk. Methods: A multi-disciplinary group that included primary care professionals (PCPs) developed an evidence-based consensus statement with an accompanying flowchart aimed at providing concise and specific guidance on NSAID use in osteoarthritis treatment. An open invitation to meet and discuss the issue was made to relevant healthcare professionals in South Yorkshire. A round table meeting was held that used a modified nominal group technique, aimed at generating opinions and ideas from all stakeholders in the consensus process. A draft developed from this meeting went through successive revisions until a consensus was achieved. Results: Four statements on the use of tNSAIDs and COX-2 inhibitors (and an attached category of evidence) were agreed: 1) tNSAIDs are effective drugs in relieving pain and immobility associated with osteoarthritis. COX-2 inhibitors are equally effective; 2) tNSAIDs and COX-2 inhibitors vary in their potential gastrointestinal, liver, and cardio-renal toxicity. This risk varies between individual treatments within both groups and is increased with dose and duration of treatment; 3) COX-2 inhibitors are associated with a significantly lower gastrointestinal toxicity compared to tNSAIDs. Co-prescribing of aspirin reduces this advantage; 4) PPIs should always be considered with a tNSAID and with a COX-2 inhibitor in higher GI risk patients. An accompanying flowchart to guide management was also agreed. Conclusions: Individual patient risk is an important factor in choice of treatment for patients with osteoarthritis and the consensus statement developed offers practical guidance for GPs and others in primary care. Where there are clinical uncertainties, guidance developed and agreed by local clinicians has a role to play in improving patient management. © 2012 Adebajo; licensee BioMed Central Ltd. Source


Milne R.,University of Sheffield
Social Studies of Science | Year: 2012

The paper explores the role of imagined geographies in the shaping of new technologies. I argue that the role of place in future-oriented visions of technoscience is a neglected topic in studies of the social shaping of technology. The paper proposes an approach that combines the sociology of expectations with the geography of science. It focuses on the interplay between envisaged and current geographies to highlight the recursive dynamics of place and imagination. To illustrate this approach, the paper discusses the example of biopharming, the production of biopharmaceuticals using genetically modified crops. I argue that expectations for biopharming bear the imprint of place, or rather of the places in which they are imagined, as well as those they imagine, and ultimately those they produce. I use this example to suggest how social studies of science and technology can usefully investigate the spaces, places and scales of technological development. © The Author(s) 2012. Source


Kerswell R.R.,University of Bristol | Pringle C.C.T.,Coventry University | Willis A.P.,University of Sheffield
Reports on Progress in Physics | Year: 2014

This article introduces and reviews recent work using a simple optimization technique for analysing the nonlinear stability of a state in a dynamical system. The technique can be used to identify the most efficient way to disturb a system such that it transits from one stable state to another. The key idea is introduced within the framework of a finite-dimensional set of ordinary differential equations (ODEs) and then illustrated for a very simple system of two ODEs which possesses bistability. Then the transition to turbulence problem in fluid mechanics is used to show how the technique can be formulated for a spatially-extended system described by a set of partial differential equations (the well-known Navier-Stokes equations). Within that context, the optimization technique bridges the gap between (linear) optimal perturbation theory and the (nonlinear) dynamical systems approach to fluid flows. The fact that the technique has now been recently shown to work in this very high dimensional setting augurs well for its utility in other physical systems. © 2014 IOP Publishing Ltd. Source
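
In the spirit of the two-ODE example the article uses, here is a toy optimization (a bistable system of my own construction, not the article's): bisect on perturbation amplitude over a set of directions to find the smallest disturbance that carries the state from one stable equilibrium into the basin of the other:

```python
# Minimal-seed search for a toy bistable two-ODE system: the smallest
# perturbation of the state (-1, 0) that reaches the other stable state.
import numpy as np
from scipy.integrate import solve_ivp

A = 4.0  # coupling: transient y-activity can push x over the barrier

def rhs(t, s):
    x, y = s
    return [x - x**3 + A * y, -y]   # stable states near x = +1 and x = -1

def flips(d, theta):
    """Does a perturbation of amplitude d in direction theta flip the state?"""
    s0 = [-1.0 + d * np.cos(theta), d * np.sin(theta)]
    sol = solve_ivp(rhs, (0.0, 50.0), s0, rtol=1e-8)
    return sol.y[0, -1] > 0.0

best_d, best_theta = np.inf, None
for theta in np.linspace(0.0, np.pi, 60):   # search over directions
    lo, hi = 0.0, 2.0
    if not flips(hi, theta):
        continue
    for _ in range(30):                     # bisect on amplitude
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if flips(mid, theta) else (mid, hi)
    if hi < best_d:
        best_d, best_theta = hi, theta

print(f"minimal seed: amplitude {best_d:.3f} at angle {best_theta:.2f} rad")
```

Because the system is non-normal through the A*y coupling, the optimal perturbation found this way needs less amplitude than a direct push along x, which is the essence of the gap the technique bridges between linear optimal-perturbation theory and the nonlinear basin-boundary picture.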


Dayer M.J.,Taunton and Somerset NHS trust | Jones S.,University of Surrey | Prendergast B.,Cardiothoracic Services | Baddour L.M.,Rochester College | And 3 more authors.
The Lancet | Year: 2015

Background: Antibiotic prophylaxis given before invasive dental procedures in patients at risk of developing infective endocarditis has historically been the focus of infective endocarditis prevention. Recent changes in antibiotic prophylaxis guidelines in the USA and Europe have substantially reduced the number of patients for whom antibiotic prophylaxis is recommended. In the UK, guidelines from the National Institute for Health and Clinical Excellence (NICE) recommended complete cessation of antibiotic prophylaxis for prevention of infective endocarditis in March, 2008. We aimed to investigate changes in the prescribing of antibiotic prophylaxis and the incidence of infective endocarditis since the introduction of these guidelines. Methods: We did a retrospective secular trend study, analysed as an interrupted time series, to investigate the effect of antibiotic prophylaxis versus no prophylaxis on the incidence of infective endocarditis in England. We analysed data for the prescription of antibiotic prophylaxis from Jan 1, 2004, to March 31, 2013, and hospital discharge episode statistics for patients with a primary diagnosis of infective endocarditis from Jan 1, 2000, to March 31, 2013. We compared the incidence of infective endocarditis before and after the introduction of the NICE guidelines using segmented regression analysis of the interrupted time series. Findings: Prescriptions of antibiotic prophylaxis for the prevention of infective endocarditis fell substantially after introduction of the NICE guidance (mean 10 900 prescriptions per month [Jan 1, 2004, to March 31, 2008] vs 2236 prescriptions per month [April 1, 2008, to March 31, 2013], p<0·0001). Starting in March, 2008, the number of cases of infective endocarditis increased significantly above the projected historical trend, by 0·11 cases per 10 million people per month (95% CI 0·05-0·16, p<0·0001). By March, 2013, 35 more cases per month were reported than would have been expected had the previous trend continued. This increase in the incidence of infective endocarditis was significant for both individuals at high risk of infective endocarditis and those at lower risk. Interpretation: Although our data do not establish a causal association, prescriptions of antibiotic prophylaxis have fallen substantially and the incidence of infective endocarditis has increased significantly in England since introduction of the 2008 NICE guidelines. Funding: Heart Research UK, Simplyhealth, and US National Institutes of Health. © 2015 Elsevier Ltd. Source
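
The segmented-regression machinery behind such an interrupted time series analysis can be sketched briefly (simulated monthly counts, not the English hospital data):

```python
# Segmented regression for an interrupted time series: baseline trend,
# step change at the interruption, and change in slope afterwards.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
months = np.arange(120)                 # ten years of monthly data
guideline = 60                          # month the (simulated) guidance begins
post = (months >= guideline).astype(float)

# Simulate: near-flat baseline trend, then a post-guideline rise in slope.
cases = 30 + 0.02 * months + 0.11 * post * (months - guideline)
cases = cases + rng.normal(0, 1.5, months.size)

X = sm.add_constant(np.column_stack([
    months,                       # underlying secular trend
    post,                         # step change at the interruption
    post * (months - guideline),  # change in slope after the interruption
]))
fit = sm.OLS(cases, X).fit()
print("estimated change in slope:", fit.params[3])  # ~0.11 by construction
```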


Goodeve A.C.,University of Sheffield
Journal of Thrombosis and Haemostasis | Year: 2015

Hemophilia B is an X-chromosome-linked inherited bleeding disorder primarily affecting males, but those carrier females with reduced factor IX activity (FIX:C) levels may also experience some bleeding. Genetic analysis has been undertaken for hemophilia B since the mid-1980s, through linkage analysis to track inheritance of an affected allele, and to enable determination of the familial mutation. Mutation analysis using PCR and Sanger sequencing along with dosage analysis for detection of large deletions/duplications enables mutation detection in > 97% of patients with hemophilia B. The risk of the development of inhibitory antibodies, which are reported in ~ 2% of patients with hemophilia B, can be predicted, especially in patients with large deletions, and these individuals are also at risk of anaphylaxis, and nephrotic syndrome if they receive immune tolerance induction. Inhibitors also occur in patients with nonsense mutations, occasionally in patients with small insertions/deletions or splice mutations, and rarely in patients with missense mutations (p.Gln237Lys and p.Gln241His). Hemophilia B results from several different mechanisms, and those associated with hemophilia B Leyden, ribosome readthrough of nonsense mutations and apparently 'silent' changes that do not alter amino acid coding are explored. Large databases of genetic variants in healthy individuals and patients with a range of disorders, including hemophilia B, are yielding useful information on sequence variant frequency to help establish possible variant pathogenicity, and a growing range of algorithms are available to help predict pathogenicity for previously unreported variants. © 2015 The Authors. Source


Dennell R.,University of Sheffield
Quaternary International | Year: 2013

Deserts are now extensive across continental Asia south of 45° N from Arabia and SW Asia to the Thar Desert of India, and north-eastwards through Central Asia to North China. Despite the potential importance of arid regions to human evolutionary studies, Palaeolithic records from areas that are now desert are generally poor, and the best information tends to be derived from springs and palaeolakes, partly because these are obvious taphonomic traps for archaeological, faunal and other environmental material, and partly because water would have been the most critical resource for survival. This paper provides an overview of what can currently be stated about the Palaeolithic record from areas of Asia that are now deserts, particularly in relation to Middle Pleistocene hominin evolution, the expansion in MIS4-3 of Homo sapiens, and the extinction of its competitors. It is suggested that among the reasons why H. sapiens was ultimately more successful than Neanderthals in MIS 3-4 in colonising continental Asia are that they were physiologically better adapted to high summer temperatures, and were probably more skilled in creating a viable resource base in semi-arid and arid landscapes. Neanderthals in Central Asia may have faced additional problems in dealing with low winter temperatures, large areas of salt deserts and sand seas, and non-potable water supplies. Nevertheless, even H. sapiens does not appear to have developed the means to survive habitually in Asian deserts until the terminal Pleistocene, and in most cases, the Holocene. © 2012 Elsevier Ltd and INQUA. Source


Davies J.,University of Sheffield
Computers and Education | Year: 2012

This paper focuses on 25 UK teenagers' language and literacy practices on Facebook; it draws on data from interviews as well as from Facebook 'walls'. To explore whether Facebook provides opportunities for new literacy practices through text-making, the research considers how teenagers use the site to present themselves and 'do friendship'. Continuities of the teenagers' interactions were traced across the domains of school, home and Facebook and were found to reflect both 'traditional' and new ways of self presenting and of 'doing friendship'. © 2011 Elsevier Ltd. Source


Naser H.,University of Sheffield
International Journal of Energy Economics and Policy | Year: 2014

This paper empirically examines the relationship between oil consumption, nuclear energy consumption, oil prices and economic growth in four emerging economies (Russia, China, South Korea, and India) over the period from 1965 to 2010. Applying a modified version of the Granger causality test developed by Toda and Yamamoto, we find that the level of world crude oil prices (WTI) plays a crucial role in determining economic growth in the investigated countries. The results suggest that there is a unidirectional causality running from real GDP to oil consumption in China and South Korea, while a bidirectional relationship between oil consumption and real GDP growth appears in India. Furthermore, the results propose that while nuclear energy stimulates economic growth in both South Korea and India, the rapid increase in China's economic growth requires additional usage of nuclear energy. Source


Goodwin S.P.,University of Sheffield
Monthly Notices of the Royal Astronomical Society: Letters | Year: 2013

Binary properties are usually expressed (for good observational reasons) as a function of primary mass. It has been found that the distribution of companion masses - the mass ratio distribution - is different for different primary masses. We argue that system mass is the more fundamental physical parameter to use. We show that if system masses are drawn from a log-normal mass function, then the different observed mass ratio distributions as a function of primary mass, from M-dwarfs to A-stars, are all consistent with a universal, flat, system mass ratio distribution. We also show that the brown dwarf mass ratio distribution is not drawn from the same flat distribution, suggesting that the process which decides upon mass ratios is very different in brown dwarfs and stars. © 2012 The Author Published by Oxford University Press on behalf of the Royal Astronomical Society. Source
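
The paper's central claim is easy to explore numerically. A sketch (with invented log-normal parameters): draw system masses, assign every system a mass ratio from one flat distribution, and observe how selecting on primary mass reshapes the apparent mass ratio distribution:

```python
# Monte Carlo of a universal flat system mass ratio distribution viewed
# through primary-mass selection; all parameter values are invented.
import numpy as np

rng = np.random.default_rng(5)
n = 1_000_000
m_sys = rng.lognormal(mean=np.log(0.5), sigma=0.8, size=n)  # system masses
q = rng.uniform(0.0, 1.0, size=n)                           # flat in q

m1 = m_sys / (1.0 + q)      # primary mass (q = m2 / m1, m_sys = m1 + m2)
m2 = m_sys - m1             # companion mass

for lo, hi, label in [(0.1, 0.5, "M-dwarf-like"), (1.5, 2.5, "A-star-like")]:
    sel = (m1 > lo) & (m1 < hi)
    hist, _ = np.histogram(q[sel], bins=5, range=(0, 1), density=True)
    print(label, np.round(hist, 2))
# Selecting on primary mass reshapes the observed q distribution even
# though every system was drawn from the same flat distribution in q.
```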


Bishop N.,University of Sheffield
Early Human Development | Year: 2010

Osteogenesis imperfecta is characterised by bone fragility leading to fracture and bone deformity, chronic bone pain and reduced mobility. Presentation in infancy may be anticipated through shortened or bowed femurs on antenatal ultrasound scanning, or because of family history. Other conditions can present in the neonatal period with osteoporosis and fractures, but clinical features should allow differentiation. Management is multidisciplinary, with the mainstay of medical intervention being the use of bisphosphonates. Intervention with these medications, in association with specialised nursing, physio- and occupational therapy input, has reduced fracture frequency by up to 50% in published series, and has shown significant effects on vertebral morphometry when started early (around 6 weeks of age). Outcomes in older children are encouraging, with a reduction in fracture frequency of up to 50%; however, the longer-term effects of early intervention remain to be determined. In particular, the effects on life-limiting structural outcomes such as scoliosis and basilar invagination remain unclear. © 2010 Elsevier Ltd. Source


Abbasov F.G.,University of Sheffield
Energy Policy | Year: 2014

The major objective of this paper is to apply a multidimensional lens to the European Union's (EU's) vision for the yet-to-be-established Southern Gas Corridor. I will argue that the EU's natural gas vision towards the Caspian basin is based not only on bringing additional gas volumes to EU markets in order to ensure physical security of supply. It is rather a multidimensional form of external governance geared, firstly, towards absorbing all the actors along the whole value chain into the EU's common energy regulatory framework and shifting energy provision from a bilateral political domain onto a multilateral market domain. Secondly, it is a process of diffusion of norms and values into the governance systems of the EU's energy partners. © 2013 Elsevier Ltd. Source


Sidambe A.T.,University of Sheffield
Materials | Year: 2014

Titanium (Ti) and its alloys may be processed via advanced powder manufacturing routes such as additive layer manufacturing (or 3D printing) or metal injection moulding. This field is receiving increased attention from various manufacturing sectors including the medical devices sector. It is possible that advanced manufacturing techniques could replace the machining or casting of metal alloys in the manufacture of devices because of associated advantages that include design flexibility, reduced processing costs, reduced waste, and the opportunity to more easily manufacture complex or custom-shaped implants. The emerging advanced manufacturing approaches of metal injection moulding and additive layer manufacturing are receiving particular attention from the implant fabrication industry because they could overcome some of the difficulties associated with traditional implant fabrication techniques such as titanium casting. Using advanced manufacturing, it is also possible to produce more complex porous structures with improved mechanical performance, potentially matching the modulus of elasticity of local bone. While the economic and engineering potential of advanced manufacturing for the manufacture of musculo-skeletal implants is therefore clear, the impact on the biocompatibility of the materials has been less investigated. In this review, the capabilities of advanced powder manufacturing routes in producing components that are suitable for biomedical implant applications are assessed with emphasis placed on surface finishes and porous structures. Given that biocompatibility and host bone response are critical determinants of clinical performance, published studies of in vitro and in vivo research have been considered carefully. The review concludes with a future outlook on advanced Ti production for biomedical implants using powder metallurgy. © 2014 by the authors. Source


Quantitative trait locus (QTL) mapping is frequently used in evolutionary studies to understand the genetic architecture of continuously varying traits. The majority of studies have been conducted in specially created crosses, in which genetic differences between parental lines are identified by linkage analysis. Detecting QTL segregating within populations is more problematic, especially in wild populations, because these populations typically have complicated and unbalanced multigenerational pedigrees. However, QTL mapping can still be conducted in such populations using a variance components mixed model approach, and the advent of appropriate statistical frameworks and better genotyping methods mean that the approach is gaining popularity. In this study it is shown that all studies described to date report evidence of QTL of major effect on trait variation, but that these findings are probably caused by inflated estimates of QTL effect sizes due to the Beavis effect. Using simulations I show that even the most powerful studies conducted to date are likely to give misleading descriptions of the genetic architecture of a trait. I show that an interpretation of a mapping study of beak color in the zebra finch (Taeniopygia guttata), that suggested genetic variation was determined by a small number of loci of large effect, which are possibly maintained by antagonistic pleiotropy, is likely to be incorrect. More generally, recommendations are made to how QTL mapping can be combined with other approaches to provide more accurate descriptions of a trait's genetic architecture. © 2013 The Author(s). Evolution © 2013 The Society for the Study of Evolution. Source
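
The Beavis effect invoked here can be demonstrated in a few lines (all settings invented): simulate many loci of equal small effect, report effect sizes only for the loci that reach significance, and the reported effects come out inflated:

```python
# Simulating the Beavis effect: conditioning effect-size estimates on
# statistical significance inflates them; all parameters are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
n_ind, n_loci, true_beta = 300, 50, 0.15

genotypes = rng.binomial(2, 0.5, size=(n_ind, n_loci)).astype(float)
trait = genotypes @ np.full(n_loci, true_beta) + rng.normal(0, 2.0, n_ind)

reported = []
for j in range(n_loci):
    slope, _, _, pval, _ = stats.linregress(genotypes[:, j], trait)
    if pval < 0.05:                  # only "detected" QTL get reported
        reported.append(abs(slope))

print("true per-locus effect:", true_beta)
print("mean reported effect:",
      np.mean(reported) if reported else "none detected")
```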


Mccourt D.,University of Sheffield
British Journal of Politics and International Relations | Year: 2013

This article employs the interpretive approach to show that Britain's embrace of humanitarian intervention in Kosovo in 1999 was less a result of the election of New Labour or the psychology of Tony Blair, as conventional wisdom suggests, and more a consequence of a change in belief among policy-makers in the UK and beyond regarding the use of force for humanitarian ends, which originated prior to 1997 in American intervention in Bosnia in the summer of 1995. The effects of the moralism of the new government and its leader must therefore be viewed within a wider transatlantic context and against the background of the continued importance of the 'Atlanticist' tradition in UK foreign policy, with important implications for the study of UK foreign policy beyond the Balkans. © 2012 Political Studies Association. Source


Birkin F.,University of Sheffield | Polesie T.,Gothenburg University
Ecological Economics | Year: 2013

This paper considers how epistemic analyses (Birkin and Polesie, 2011; Foucault, 1970, 1990a, 1990b) may assist with the development of sustainability economics (Bartelmus, 2010; Baumgärtner and Quaas, 2010a, 2010b; Söderbaum, 2011) and the capability approach (Ballet et al., 2011; Martins, 2011; Rauschmayer and Leßmann, 2011; Scerri, 2012). It was the French social theorist Michel Foucault (1926-1984) who coined the term "episteme" to refer to the "possibility of knowledge" that determines the development of thought and knowledge in a given period. For Foucault, epistemes were the "buried" foundations of knowledge that his epistemic "archaeology" could unearth. In 2007, Foucault was identified as the most cited author of books in the humanities by Thomson Reuters' ISI Web of Science. This paper begins with a brief definition and description of epistemic analyses. A summary analysis of the Modern episteme and neoclassical economics is then provided, and this is followed by outline evidence for the emerging episteme. Finally, the opportunity is considered for the emerging episteme to reinforce and enhance sustainability economics and the capability approach. © 2013 Elsevier B.V. Source


Spears D.A.,University of Sheffield
International Journal of Coal Geology | Year: 2012

Tonsteins are volcanic ash falls in coal-bearing sequences that have altered to kaolinite. Only in the last twenty years or so has the volcanic origin become firmly established. The evidence for the volcanic origin encompasses bed form, including lateral extent, structures, textures, volcanogenic mineralogy and geochemistry. The lines of evidence are reviewed from a historical perspective. Tonsteins came to prominence over a hundred years ago because of their stratigraphic value. Tonsteins continue to be of value in coalfield exploration, but their stratigraphic value has been enhanced in recent years with the ability to determine radiometric ages with a high level of accuracy. The geochemistry not only provides good evidence for the volcanic origin, but also enables tectonomagmatic deductions to be made for areas external to the coal basin. In addition, some tonsteins are indicators for the discovery of rare-metal ore deposits (Nb, REEs, and Ga). There is also the potential to use the geochemistry to identify specific tonsteins; essentially a chemostratigraphic approach. Tonsteins are seen as one class of bentonites; others include K-bentonites, metatonsteins, and possibly illitic bentonites. Tonsteins have been linked to seatearths, fireclays and fragmental clay rocks. Although all such rocks contain kaolinite, the origins differ. The seatearths contain rootlets and were subjected to pedogenic activity. Alteration of the silicate minerals is limited. Under conditions of a high water table, and in the presence of organic matter, reducing conditions prevailed, leading to gleization. On the other hand, the fireclays show extensive alteration to kaolinite. There is a link with the major marine bands, and this has implications in terms of sequence stratigraphy. The fragmental clayrocks have also been linked to soil-forming processes, but in this case possibly the reworked soil was the product of an open system containing free alumina minerals that reacted with silica in solution during diagenesis. © 2011 Elsevier B.V. Source


It has been suggested that primary afferent C-fibres that respond to innocuous tactile stimuli are important in the sensation of pleasurable touch. Although it is known that C-tactile fibres terminate in the substantia gelatinosa (lamina II) of the spinal cord, virtually all of the neurons in this region are interneurons, and currently it is not known how impulses in C-mechanoreceptors are transmitted to higher centres. In the current study, I have tested the quantitative response properties of 'wide dynamic range' projection neurons in lamina I of the spinal cord to graded velocity brushing stimuli to identify whether low-threshold mechanoreceptor input to these neurons arises from myelinated or unmyelinated nerve fibres. Graded velocity brushing stimuli (6.6-126 cm s-1) were used to characterize the mechanoreceptor inputs to 'wide dynamic range' neurons in lamina I of the dorsal horn that had axons that projected to the contralateral parabrachial nucleus. The most effective tactile stimuli for activation of 'wide dynamic range' lamina I spinoparabrachial neurons were low-velocity brush strokes: peak discharge occurred at a mean velocity of 9.2 cm s-1 (range 6.6-20.4 cm s-1, s.d. 5.0 cm s-1), and declined exponentially as brush velocity increased. The data indicate that C-fibres, but not A-fibres, conveyed low-threshold mechanoreceptor inputs to lamina I projection neurons. © 2010 The Author. Journal compilation © 2010 The Physiological Society. Source


McDermott C.J.,University of Sheffield
The Lancet Neurology | Year: 2015

Background: Gastrostomy feeding is commonly used to support patients with amyotrophic lateral sclerosis who develop severe dysphagia. Although recommended by both the American Academy of Neurology and the European Federation of Neurological Societies, currently little evidence indicates the optimum method and timing for gastrostomy insertion. We aimed to compare gastrostomy insertion approaches in terms of safety and clinical outcomes. Methods: In this large, longitudinal, prospective cohort study (ProGas), we enrolled patients with a diagnosis of definite, probable, laboratory supported, or possible amyotrophic lateral sclerosis who had agreed with their treating clinicians to undergo gastrostomy at 24 motor neuron disease care centres or clinics in the UK. The primary outcome was 30-day mortality after gastrostomy. This study was registered on the UK Clinical Research Network database, identification number 9923. Findings: Between Nov 2, 2010, and Jan 31, 2014, 345 patients were recruited of whom 330 had gastrostomy. 163 (49%) patients underwent percutaneous endoscopic gastrostomy, 121 (37%) underwent radiologically inserted gastrostomy, 43 (13%) underwent per-oral image-guided gastrostomy, and three (1%) underwent surgical gastrostomy. 12 patients (4%, 95% CI 2·1-6·2) died within the first 30 days after gastrostomy: five (3%) of 163 after percutaneous endoscopic gastrostomy, four (3%) of 121 after radiologically inserted gastrostomy, and three (7%) of 43 after per-oral image-guided gastrostomy (p=0·46). Including repeat attempts in 14 patients, 21 (6%) of 344 gastrostomy procedures could not be completed: 11 (6%) of 171 percutaneous endoscopic gastrostomies, seven (6%) of 121 radiologically inserted gastrostomies, and three (6%) of 45 per-oral image-guided gastrostomies (p=0·947). Interpretation: The three methods of gastrostomy seemed to be as safe as each other in relation to survival and procedural complications. In the absence of data from randomised trials, our findings could inform clinicians and patients in reaching decisions about gastrostomy and will stimulate further research into the nutritional management in patients with amyotrophic lateral sclerosis. Funding: Motor Neurone Disease Association of Great Britain and Northern Ireland (MNDA) and the Sheffield Institute for Translational Neuroscience (SITraN). © 2015 ProGas Study Group. Open Access article distributed under the terms of CC BY. Source


Wilkinson R.,University of Sheffield
AAC: Augmentative and Alternative Communication | Year: 2013

This paper uses conversation analysis to investigate the form and use of iconic gestures by a man with severe Broca-type aphasia in interaction with his speech and language therapist. Deconstructing iconic gestures into the different types of methods used to produce them, the paper analyzes how these gestures can depict certain entities, such as actions or types of people, in ways that may be understandable to the recipient. It is also observed that these iconic gestures can constitute gestural contributions, which not only communicate certain semantic meanings, but also accomplish social actions, such as answering or repairing. The implications of this analysis for our understanding of compensatory behavior in aphasia, and of augmentative and alternative communication in social interaction more generally, are discussed. © 2013 International Society for Augmentative and Alternative Communication. Source


Marcotti W.,University of Sheffield
Experimental Physiology | Year: 2012

Hair cells in the mammalian inner ear convert sound into electrical signals that are relayed to the nervous system by the chemical neurotransmitter glutamate. Electrical information encoding sound is then passed through the central nervous system to the higher auditory centres in the brain, where it is used to construct a temporally and spatially accurate representation of the auditory landscape. To achieve this, hair cells must encode fundamental properties of sound stimuli at extremely high rates, not only during mechano-electrical transduction, which occurs in the hair bundles at the cell apex, but also during electrochemical transduction at the specialized ribbon synapses at the cell base. How is the development of such a sophisticated cell regulated? More specifically, to what extent does physiological activity contribute to the progression of the intrinsic genetic programmes that drive cell differentiation? Hair cell differentiation takes about 3 weeks in most rodents, from terminal mitosis during embryonic development to the onset of hearing around 2 weeks after birth. Until recent years, most of the molecules involved in hair cell development and function were unknown, which was mainly due to difficulties in working with the mammalian cochlea and the very small number of hair cells, about 16,000 in humans, present in the auditory organ. Recent advances in the ability to record from the acutely isolated cochlea maintained in near-physiological conditions, combined with the use of genetically modified mouse models, has allowed the identification of several proteins and molecular mechanisms that are crucial for the maturation and function of hair cells. In this article, I highlight recent findings from my laboratory that have furthered our understanding of how developing hair cells acquire the remarkable sensitivity of adult auditory sensory receptors. © 2012 The Author. Experimental Physiology © 2012 The Physiological Society. Source


Taylor W.,University of California at Santa Barbara | Jones R.A.L.,University of Sheffield
Langmuir | Year: 2013

The adsorption of lysozyme protein was measured ex situ on well-characterized gold surfaces coated by end-tethered polyethylene oxide brushes of various molecular weights and controlled grafting densities. The adsorbed amount of protein for different molecular weight brushes was found to collapse onto one master curve when plotted against brush coverage. We interpret this relationship in terms of a model involving site-blocking of the adsorption of proteins at the substrate and discuss the role of the physical attraction of PEO segments to gold. We account for our observation of a simple exponential relationship between protein adsorption and normalized brush coverage with a simple protein adsorption model. In contrast to other studies in similar systems, we do not observe protein adsorption on brushes at high grafting density, and we suggest that this discrepancy may be due to the solubility effects of salt upon the brushes, influencing their protein binding affinity, in the limit of high grafting density and high brush volume fraction. © 2013 American Chemical Society. Source


Jackson B.C.,University of Sheffield
Genetica | Year: 2011

Over the past decade several theoretical and empirical studies have revived interest in the role of chromosomes in speciation. The resulting models do not suffer from the problems experienced by previously proposed mechanisms of chromosomal speciation, because they invoke suppression of recombination rather than a reduction in the fitness of heterokaryotypes as their core process. However, they are not free from difficulties. The evidence for recombination-suppression models is discussed here. The general conclusion is that a consensus opinion on which models best describe the real-world situation is currently unlikely because of an inability of the available empirical evidence to fully distinguish between them, which may be due in part to a lack of exclusivity. I argue that future work should take this lack of exclusivity into account. Resolving the biogeography of speciation is also suggested in order to tell the various models apart. Further study is needed which focuses on confirming the operation of individual elements of the various models, rather than attempting to validate any single mechanism as a whole. © 2011 Springer Science+Business Media B.V. Source


Rennie I.G.,University of Sheffield
Eye | Year: 2010

Vasoproliferative tumours are uncommon retinal lesions that may occur in isolation (primary) or in association with another ocular condition (secondary). They may be unilateral or bilateral and have a predilection for the peripheral inferior temporal quadrant of the retina. Vasoproliferative tumours can be associated with abnormalities of the macula, including epiretinal membrane formation and cystoid macular edema. A number of modalities have been used to treat these tumours including cryotherapy and radiotherapy. © 2010 Macmillan Publishers Limited All rights reserved. Source


Rees M.,University of Sheffield | Ellner S.P.,Cornell University
Methods in Ecology and Evolution | Year: 2016

Patterns of survival and reproduction determine fitness, and there is a rich body of theory linking demography with evolution. For example, selection analysis can be used to predict changes in trait values, and by approximating selection, equations describing trait dynamics can be derived and evolutionary endpoints predicted. Here, we provide an overview of how these methods can be used to understand evolutionary dynamics and selection in a structured population modelled by an integral projection model (IPM). General expressions for selection are given, and we show how these can be approximated using eigenvalue sensitivities. Approximations for the dynamics of both the trait mean and variance are presented and compared to simulation results (IPMs and individual-based models) for monocarpic perennials. We describe how selection can be decomposed into components related to survival and reproduction and into components resulting from changes in demography versus changes in population structure. We show that, in an empirically based IPM, effects of changes in population structure can be a major component of the overall selection pressure. Most studies of selection in natural populations have focused entirely on selection due to changes in demography, so our results may help explain why directional selection is often predicted but no evolutionary response is observed. The endpoints of evolution, what we expect to see in nature, are explored using ideas from adaptive dynamics theory. The key methods are based on evolutionarily stable strategies (ESS) and convergence stability. Efficient methods for finding an ESS are described. We then extend these ideas to function-valued traits, where it is assumed that an entire function can evolve, rather than a few parameters defining it. By combining theory for IPMs with ideas from several different fields, we show how the dynamics of ecologically complex, evolving systems can be understood. © 2016 British Ecological Society. Source
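
As a rough illustration of the machinery described here, the sketch below builds a small discretized IPM, extracts the dominant eigenvalue, and forms the standard left/right-eigenvector sensitivity matrix. All kernel components and parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Minimal discretized integral projection model (IPM) with a Caswell-style
# eigenvalue sensitivity analysis. Functional forms and parameters are toys.
m = 100
z = np.linspace(0.0, 10.0, m)             # size mesh
h = z[1] - z[0]
Z1, Z = np.meshgrid(z, z, indexing="ij")  # Z1 = size next year, Z = size now

surv = 1.0 / (1.0 + np.exp(-(Z - 4.0)))                    # survival rises with size
grow = np.exp(-0.5 * ((Z1 - (0.6 * Z + 2.0)) / 0.8) ** 2)  # Gaussian growth kernel
grow /= grow.sum(axis=0, keepdims=True) * h                # integrates to 1 over z'
fec = 0.05 * Z * np.exp(-0.5 * ((Z1 - 1.0) / 0.5) ** 2)    # size-dependent fecundity
K = h * (surv * grow + fec)                                # discretized kernel

vals, vecs = np.linalg.eig(K)
i = np.argmax(vals.real)
lam = vals[i].real                                  # dominant eigenvalue: growth rate
w = np.abs(vecs[:, i].real)                         # right eigenvector: stable structure
valsT, vecsT = np.linalg.eig(K.T)
v = np.abs(vecsT[:, np.argmax(valsT.real)].real)    # left eigenvector: reproductive value

S = np.outer(v, w) / (v @ w)   # sensitivity of lambda to each kernel entry
print(f"lambda = {lam:.3f}, max sensitivity = {S.max():.3f}")
```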


Tennant A.,University of Sheffield
IEEE Transactions on Antennas and Propagation | Year: 2010

A two-element time-modulated array system is configured to operate as a direction finding antenna. The signal from each element of the array is time switched to provide a phase modulated output in which the depth of modulation is dependent on the angle of arrival of the received signal. Details of an experimental system designed to operate at 2400 MHz are presented and the results are compared to theoretical predictions. © 2010 IEEE. Source
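
The principle described, in which modulation depth encodes angle of arrival, follows from simple path-difference geometry. The sketch below assumes half-wavelength element spacing (not stated here) at the 2400 MHz operating frequency.

```python
import numpy as np

# Sketch of the direction-finding principle: switching the receiver between
# two elements square-wave-modulates the received phase, and the depth of
# that modulation encodes the angle of arrival. The half-wavelength spacing
# is an assumption; only the 2400 MHz operating frequency is given above.
c = 3e8
f = 2.4e9
lam = c / f
d = lam / 2.0

def phase_mod_depth(theta_deg):
    """Peak-to-peak phase modulation (rad) when toggling between elements.

    A plane wave arriving at angle theta reaches the two elements with a
    path difference d*sin(theta), i.e. a phase offset of 2*pi*d*sin(theta)/lam.
    """
    theta = np.radians(theta_deg)
    return 2.0 * np.pi * d * np.sin(theta) / lam

for ang in (0, 10, 30, 60):
    print(f"{ang:2d} deg -> depth {np.degrees(phase_mod_depth(ang)):6.1f} deg")
# Depth rises monotonically with angle over 0-90 deg, so measuring the
# modulation depth of the phase-modulated output yields the direction.
```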


Parkin N.,University of Sheffield
Cochrane database of systematic reviews (Online) | Year: 2012

The permanent canine tooth in the maxillary (upper) jaw sometimes does not erupt into the mouth correctly. In about 1% to 3% of the population these teeth will be diverted into the roof of the mouth (palatally). It has been suggested that if the primary canine is removed at the right time this palatal eruption might be avoided. This is an update of a Cochrane review first published in 2009. To evaluate the effect of extracting the primary maxillary canine on the eruption of the palatally ectopic maxillary permanent canine. We searched the following electronic databases: the Cochrane Oral Health Group's Trials Register (to 20 April 2012), the Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library 2012, Issue 1), MEDLINE via OVID (1946 to 20 April 2012) and EMBASE via OVID (1980 to 20 April 2012). There were no restrictions regarding language or date of publication. Trials were selected if they met the following criteria: a randomised or quasi-randomised controlled trial, involving the extraction of the deciduous maxillary canine and assessing eruption/non-eruption of the palatally displaced maxillary permanent canine. Data extraction was undertaken independently by two review authors. The primary outcome was the reported prevalence of eruption or non-eruption of the ectopic permanent canine into the mouth following observation or intervention. Results were to be expressed as risk ratios for dichotomous outcomes with 95% confidence intervals and mean differences for continuous outcomes. Heterogeneity was to be investigated, including both clinical and methodological factors. Authors of trials were contacted to request unpublished data. Reports of two randomised controlled trials previously excluded from an earlier version of the review due to "deficiencies in reporting, insufficient data" have now been included. These two trials included approximately 128 children, with more than 150 palatally displaced canine teeth, and both were conducted by the same research group. Data presented in the trial reports are either incomplete or inconsistent. Both trials are at high risk of bias. It must be emphasised that both trials have serious deficiencies in the way they were designed, conducted, and reported, and attempts to contact the authors to obtain detailed information and clarify inconsistencies have been unsuccessful. Allocation to treatment appears to be at the level of the individual, but outcomes of successful treatment relate to included teeth and data are not reported for each treatment group. Adverse effects are not reported. Neither trial provides any evidence to guide clinical decision making. There is currently no evidence of the effects of extraction of primary canine teeth in 10-13 year old children with one or two palatally displaced permanent canine teeth. Source


Prescott T.J.,University of Sheffield | Diamond M.E.,International School for Advanced Studies | Wing A.M.,University of Birmingham
Philosophical Transactions of the Royal Society B: Biological Sciences | Year: 2011

Active sensing systems are purposive and information-seeking sensory systems. Active sensing usually entails sensor movement, but more fundamentally, it involves control of the sensor apparatus, in whatever manner best suits the task, so as to maximize information gain. In animals, active sensing is perhaps most evident in the modality of touch. In this theme issue, we look at active touch across a broad range of species from insects, terrestrial and marine mammals, through to humans. In addition to analysing natural touch, we also consider how engineering is beginning to exploit physical analogues of these biological systems so as to endow robots with rich tactile sensing capabilities. The different contributions show not only the varieties of active touch-antennae, whiskers and fingertips-but also their commonalities. They explore how active touch sensing has evolved in different animal lineages, how it serves to provide rapid and reliable cues for controlling ongoing behaviour, and even how it can disintegrate when our brains begin to fail. They demonstrate that research on active touch offers a means both to understand this essential and primary sensory modality, and to investigate how animals, including man, combine movement with sensing so as to make sense of, and act effectively in, the world. © 2011 The Royal Society. Source


Griffin D.W.,University of British Columbia | Harris P.R.,University of Sheffield
Psychological Science | Year: 2011

Self-affirmation, reflecting on one's defining personal values, increases acceptance of threatening information, but does it do so at the cost of inducing undue alarm in people at low risk of harm? We contrast an alarm model, wherein self-affirmation simply increases response to threat, with a calibration model, wherein self-affirmation increases sensitivity to the self-relevance of health-risk information. Female seafood consumers (N = 165) completed a values self-affirmation or control task before reading a U.S. Food and Drug Administration brochure on mercury in seafood. Findings support the calibration model: Among frequent seafood consumers, self-affirmation generally increased concern (reports of depth of thought, personal message relevance, perceived risk, and negative affect) for those high in defensiveness and reduced it for those low in defensiveness. Among infrequent consumers of seafood, self-affirmation typically reduced concern. Thus, self-affirmation increased the sensitivity with which women at different levels of risk, and at different levels of defensiveness, responded cognitively and affectively to the materials. © The Author(s) 2011. Source


Sequential chemical leaching is a long-established method for the determination of the composition of specific fractions in the coal. A British Standard Method [BS 1016, 1977] to determine forms of sulphur in the coal is one example of such an approach. In an international collaborative programme on modes of occurrence of trace elements in coals edited by Davidson (2000) [11] sequential leaching was the method of choice of three of the participating laboratories, reflecting the advantages of this method in establishing trace element distributions in coal samples. In the present work results of a sequential leaching study using a well-documented coal (Eggborough power station, UK) are compared with previous results on this coal primarily to test the leaching protocol. Quantitative extraction of all coal fractions is difficult to achieve and further treatment of the data is rewarding. If the analyses are comprehensive, and include major elements, then additional information is obtained from a statistical analysis of the data. With this approach, failure to quantitatively extract specific fractions can be overcome. The problem is analogous to statistically analyzing a set of samples representing one or more coal seams. Published sequential leaching schemes do differ in the number of steps and also in the sequence. Clearly this is a disadvantage and a common scheme should be adopted. In the scheme we used at Sheffield we attempted to take the organic fraction into solution before the silicates. This was not successful, although trace elements associated with a resistate fraction were identified. It is proposed that the CSIRO protocol be adopted in which the "organic" associated elements are determined by difference. However, this is not entirely satisfactory and future work could be directed towards improving the organic-silicate split for the trace elements. In the present work a mudrock analysis has been used to estimate the silicate contribution in the coal and from this the organic contribution can be calculated. An alternative approach would be to preferentially remove organic matter at some stage within the leaching sequence by low temperature oxidation. © 2012 Elsevier Ltd. All rights reserved. Source


Hardie R.C.,University of Cambridge | Juusola M.,University of Sheffield | Juusola M.,Beijing Normal University
Current Opinion in Neurobiology | Year: 2015

Phototransduction in Drosophila's microvillar photoreceptors is mediated by phospholipase C (PLC) resulting in activation of two distinct Ca²⁺-permeable channels, TRP and TRPL. Here we review recent evidence on the unresolved mechanism of their activation, including the hypothesis that the channels are mechanically activated by physical effects of PIP₂ depletion on the membrane, in combination with protons released by PLC. We also review molecularly explicit models indicating how Ca²⁺-dependent positive and negative feedback along with the ultracompartmentalization provided by the microvillar design can account for the ability of fly photoreceptors to respond to single photons 10-100× more rapidly than vertebrate rods, yet still signal under full sunlight. © 2015 Elsevier Ltd. Source


Pope S.A.,University of Sheffield
IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control | Year: 2013

Previous studies into solid elastic metamaterials which have a simultaneously negative effective bulk modulus and density have proposed designs for materials with relatively narrow bandwidths, because of the reliance on resonators to provide the dispersive material properties. Some of the proposed novel applications for metamaterials, such as invisibility cloaks and sub-wavelength lenses, generally require materials with inherently larger bandwidths for practical exploitation. In this paper, a well-known electromagnetic metamaterial design is used together with the electrical-mechanical circuit analogies to propose a simultaneously double negative elastic metamaterial design which does not suffer from the narrow bandwidth constraints of previous designs. An interesting consequence of the proposed design is that it has an effective wavelength which asymptotically goes to infinity with frequency. © 1986-2012 IEEE. Source
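
For context on why resonator-based designs are narrowband, the sketch below evaluates the textbook effective mass of a mass-in-mass resonator, which is negative only in a band just above the internal resonance. This is the conventional mechanism the paper seeks to improve upon, not the proposed circuit-analogy design, and all parameter values are illustrative.

```python
import numpy as np

# Effective mass of a classic mass-in-mass resonator: outer mass m1 carrying
# an inner mass m2 on a spring with resonance w0. The resonance makes the
# effective mass dispersive and negative only just above w0, which is the
# narrowband behaviour discussed above. Values are illustrative assumptions.
m1, m2 = 1.0, 0.5
w0 = 2 * np.pi * 1000.0                       # inner resonance at 1 kHz

w = 2 * np.pi * np.linspace(500.0, 2000.0, 2001)
m_eff = m1 + m2 * w0**2 / (w0**2 - w**2)

band = w[m_eff < 0] / (2 * np.pi)
print(f"negative effective mass from {band.min():.0f} to {band.max():.0f} Hz")
# Roughly 1000-1225 Hz here: a narrow band tied to the resonator, in contrast
# to the broadband double-negative behaviour the paper's design targets.
```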


Meier P.S.,University of Sheffield
NAD Nordic Studies on Alcohol and Drugs | Year: 2010

This paper sets out to chart key trends in alcohol consumption and harm, and in related policy activity, in the UK between 1990 and 2010. As a journal paper cannot provide a comprehensive picture, the focus will be on England as the most populous region, with comments on salient developments in the other regions where these are different. The paper draws on a variety of data sources, especially general population surveys, government reports, industry figures, National Statistics products, and recent reviews of data trends. It is structured around the themes: 1) trends in volume and patterns of consumption in adults and children; 2) trends in major alcohol-related harms; 3) changes in the affordability and availability of alcohol; 4) influences of major players including policy makers, media and industry and 5) the current (mid-2010) status of policy efforts. The reviewed data show that the UK has seen significant changes in the patterns and contexts of consumption during the 1990s and 2000s. Major consumption changes include falling per capita consumption, a rise in heavy episodic drinking, increasing preference for higher alcohol content beverages and a polarisation of the distribution of consumption in the population where heavy drinkers consume even higher volumes whilst moderate drinkers appear to decrease their average intake. Context changes include rising availability and affordability of alcohol, with few alcohol control policy efforts, and a switch from predominantly on-trade to off-trade drinking. Such trends help explain the current rapid increase in alcohol-related admissions and other heavy end consequences in the context of falling per capita consumption. Source


Von Caemmerer S.,Australian National University | Quick W.P.,International Rice Research Institute | Quick W.P.,University of Sheffield | Furbank R.T.,CSIRO
Science | Year: 2012

Another "green revolution" is needed for crop yields to meet demands for food. The international C4 Rice Consortium is working toward introducing a higher-capacity photosynthetic mechanism - the C4 pathway - into rice to increase yield. The goal is to identify the genes necessary to install C4 photosynthesis in rice through different approaches, including genomic and transcriptional sequence comparisons and mutant screening. Source


Khamas S.K.,University of Sheffield
IEEE Transactions on Antennas and Propagation | Year: 2010

An efficient model is developed to accelerate the convergence of the infinite summation of the dyadic Green's function (DGF) when the source and observation points are placed in different layers of a dielectric sphere, thereby expediting computational analysis. The proposed procedure is based on asymptotic extraction principles in which the quasi-static images are extracted from the spectral domain DGF. The effectiveness of the approach is demonstrated in a method of moments model where a microstrip antenna as well as a conformal dipole array have been studied. © 2010 IEEE. Source
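
The asymptotic-extraction idea can be shown on a toy series: subtract a slowly decaying tail with a known closed-form sum, leaving a rapidly converging remainder. The series below is a generic stand-in, not the DGF summation itself.

```python
import math

# Toy illustration of asymptotic extraction: sum_{n>=1} 1/(n^2 + 1) converges
# slowly because its terms decay like 1/n^2. Subtracting the asymptotic term
# 1/n^2 (whose closed-form sum is pi^2/6) leaves a remainder decaying like
# 1/n^4, so far fewer terms are needed -- the same principle as extracting
# quasi-static images from the spectral-domain DGF.
N = 20
direct = sum(1.0 / (n * n + 1.0) for n in range(1, N + 1))
accelerated = math.pi**2 / 6 + sum(
    1.0 / (n * n + 1.0) - 1.0 / (n * n) for n in range(1, N + 1)
)
exact = 0.5 * (math.pi / math.tanh(math.pi) - 1.0)     # known closed form
print(abs(direct - exact), abs(accelerated - exact))   # ~5e-2 vs ~4e-5
```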


Rennie I.G.,University of Sheffield
Eye | Year: 2012

Eye colour is one of the most important characteristics in determining facial appearance. In this paper I shall discuss the anatomy and genetics of normal eye colour, together with a wide and diverse range of conditions that may produce an alteration in normal iris pigmentation or form. © 2012 Macmillan Publishers Limited All rights reserved. Source


Brown J.E.,University of Leeds | Sim S.,University of Sheffield
Neoplasia | Year: 2010

The preferential metastasis of prostate cancer cells to bone disrupts the process of bone remodeling and results in lesions that cause significant pain and patient morbidity. Although prostate-specific antigen (PSA) is an established biomarker in prostate cancer, it provides only limited information relating to bone metastases and the treatment of metastatic bone disease with bisphosphonates or novel noncytotoxic targeted or biological agents that may provide clinical benefits without affecting PSA levels. As bone metastases develop, factors derived from bone metabolism are released into blood and urine, including N- and C-terminal peptide fragments of type 1 collagen and bone-specific alkaline phosphatase, which represent potentially useful biomarkers for monitoring metastatic bone disease. A number of clinical trials have investigated these bone biomarkers with respect to their diagnostic, prognostic, and predictive values. Results suggest that higher levels of bone biomarkers are associated with an increased risk of skeletal-related events and/or death. As a result of these findings, bone biomarkers are now being increasingly used as study end points, particularly in studies investigating novel agents with putative bone effects. Data from prospective clinical trials are needed to validate the use of bone biomarkers and to confirm that marker levels provide additional information beyond traditional methods of response evaluation for patients with metastatic prostate cancer. © 2010 Neoplasia Press, Inc. Source


Stafford T.,University of Sheffield | Dewar M.,The New York Times
Psychological Science | Year: 2014

In the present study, we analyzed data from a very large sample (N = 854,064) of players of an online game involving rapid perception, decision making, and motor responding. Use of game data allowed us to connect, for the first time, rich details of training history with measures of performance from participants engaged for a sustained amount of time in effortful practice. We showed that lawful relations exist between practice amount and subsequent performance, and between practice spacing and subsequent performance. Our methodology allowed an in situ confirmation of results long established in the experimental literature on skill acquisition. Additionally, we showed that greater initial variation in performance is linked to higher subsequent performance, a result we link to the exploration/exploitation trade-off from the computational framework of reinforcement learning. We discuss the benefits and opportunities of behavioral data sets with very large sample sizes and suggest that this approach could be particularly fecund for studies of skill acquisition. © The Author(s) 2013. Source
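
The "lawful relations" between practice and performance are conventionally modelled as a power law of practice. The sketch below fits that form to synthetic data; it is an illustration of the functional form, not a re-analysis of the game data set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training histories: performance follows a power law of practice,
# score = a * n^b, plus multiplicative noise. Parameters are illustrative.
n = np.arange(1, 201)                       # number of practice attempts
a, b = 400.0, 0.15                          # assumed scale and learning exponent
score = a * n**b * np.exp(rng.normal(0.0, 0.05, n.size))

# Fit by linear regression in log-log space: log(score) = log(a) + b*log(n)
coeffs = np.polyfit(np.log(n), np.log(score), 1)
print(f"estimated exponent b = {coeffs[0]:.3f}, scale a = {np.exp(coeffs[1]):.1f}")
```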


Excessive alcohol consumption may lead to the development of alcohol-related liver disease (ALD). Liver biopsy may be used in patients with suspected ALD to confirm the diagnosis, exclude other or additional liver pathologies, and provide accurate staging of the degree of liver injury in order to enable the prediction of prognosis and inform treatment decisions. However, as it is an invasive procedure that carries the risk of morbidity and mortality, current UK guidance advises a selective approach to its use. Databases searched included the Cumulative Index to Nursing and Allied Health Literature (from 1982 to January 2010), Web of Knowledge and Science Citation Index (from 1969 to January 2010). Study quality was assessed using the QUADAS (Quality Assessment of Diagnostic Accuracy Studies) checklist. Owing to the heterogeneity of the studies, no formal meta-analysis was undertaken. A de novo mathematical model was constructed to estimate the incremental costs and incremental quality-adjusted life-years (QALYs) associated with alternative strategies compared with a biopsy-all strategy. The tests are assessed first as a replacement for liver biopsy, and secondly as an additional test prior to liver biopsy. Thirty-six scenarios were assessed for each non-invasive test strategy, which varied the sensitivity of biopsy, the anxiety associated with biopsy, sensitivity and specificity values and whether or not the biopsy was percutaneous or transjugular. For each scenario, threshold levels were reported where biopsying all patients was more cost-effective than the strategy for two parameters (the decreased level of abstinence associated with the strategy compared with biopsying all and the level of incidental QALY gain associated with biopsy). No studies were identified that specifically assessed the ELF test, although a study was identified that evaluated the diagnostic accuracy of the European Liver Fibrosis Test (essentially, the ELF test with the addition of age to the algorithm) compared with biopsy. Three studies of FibroTest, no relevant studies of FibroMax, and six studies of FibroScan assessing accuracy compared with biopsy in patients with known or suspected alcohol-related liver disease were identified. In all studies, the number of patients with suspected ALD was small, meaning that the estimated sensitivities and specificities were not robust. No conclusive estimate of the cost per QALY of each non-invasive test could be provided. Scenarios exist in which each of the strategies analysed is more cost-effective than biopsying all patients and, in contrast, scenarios exist in which each strategy is less cost-effective than biopsying all patients. Study selection and data analysis were undertaken by one reviewer. No conclusive result can be provided on the most cost-effective strategy until further data are available. A large number of parameters require data; however, the following are selected as being of most importance: (1) the sensitivity and specificity of each non-invasive liver test (NILT) against biopsy at validated and pre-selected cut-off thresholds; (2) the influence of potential confounding variables such as current drinking behaviour and the degree of hepatic inflammation on the performance of NILTs; and (3) the likelihood, and magnitude, of decreases in abstinence rates associated with a diagnosis of significant ALD by diagnostic modality and the incidental gains in QALYs that may be associated with biopsy. The National Institute for Health Research Technology Assessment programme. Source
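
The central quantity in the de novo model is the incremental cost per QALY of one strategy over another. The sketch below shows the calculation with openly hypothetical numbers; as the abstract notes, the report derives threshold values rather than point estimates.

```python
# Incremental cost-effectiveness ratio (ICER) of biopsying all patients versus
# a non-invasive-test-first strategy -- the comparison the de novo model makes.
# Every number below is hypothetical; the report could not provide conclusive
# point estimates because key parameters are so uncertain.
def icer(cost_new, qaly_new, cost_old, qaly_old):
    return (cost_new - cost_old) / (qaly_new - qaly_old)

biopsy_all = {"cost": 1500.0, "qalys": 8.10}   # hypothetical per-patient totals
nilt_first = {"cost": 900.0, "qalys": 8.05}    # hypothetical: cheaper, slightly less effective

value = icer(biopsy_all["cost"], biopsy_all["qalys"],
             nilt_first["cost"], nilt_first["qalys"])
print(f"ICER of biopsy-all vs NILT-first: {value:.0f} per QALY gained")
# This would then be compared against a willingness-to-pay threshold.
```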


Bingle C.D.,University of Sheffield
Biochemical Society Transactions | Year: 2011

WFDC (whey/four-disulfide core)-domain-containing proteins are defined by the possession of one or more 40-50 amino acid domains that include eight conserved cysteine residues linked by four characteristic intramolecular disulfide bonds. Some also contain other structural domains, whereas in many the WFDC-domain is the only domain present. The WFDC-domain is not limited to mammals but is widespread across all lineages. There is increasing evidence to suggest that mammalian WFDC-domain-containing proteins are undergoing rapid molecular evolution and as might be expected they exhibit low levels of sequence similarity coupled with multiple examples of species-specific gene acquisition and gene loss. The characteristic structural domain (that is generally encoded by a single exon) makes these proteins relatively easy to identify in databases. This review will outline the repertoire of such domains within the mouse, but similar principles can be applied to the identification of all proteins within individual species. ©The Authors Journal compilation ©2011 Biochemical Society. Source


Crossley J.G.M.,University of Sheffield
Medical Teacher | Year: 2014

This article describes the problem of disorientation in students as they become doctors. Disorientation arises because students have a poor or inaccurate understanding of what they are training to become. If they do not know what they are becoming it is hard for them to prioritise and contextualise their learning, to make sense of information about where they are now (assessment and feedback) or to determine the steps they need to take to develop (formative feedback and "feedforward"). It is also a barrier to the early development of professional identity. Using the analogy of a map, the paper describes the idea of a curriculum that is articulated as a developmental journey - a "roadmap curriculum". This is not incompatible with a competency-based curriculum, and certainly requires the same integration of knowledge, skills and attitudes. However, the semantic essence of a roadmap curriculum is fundamentally different; it must describe the pathway or pathways of development toward being a doctor in ways that are both authentic to qualified doctors and meaningful to learners. Examples from within and outside medicine are cited. Potential advantages and implications of this kind of curricular reform are discussed. © 2014 Informa UK Ltd. All rights reserved: reproduction in whole or part not permitted. Source


Gartland A.,University of Sheffield
Wiley Interdisciplinary Reviews: Membrane Transport and Signaling | Year: 2012

Bone is a dynamic organ that from the early stages of development defines an organism's form and function and responds to the external environment, a process that continues as the organism grows and persists even once maturity is reached. The formation, growth, and integrity of bone are co-ordinated and maintained throughout life via the finely tuned actions of osteoblasts and osteoclasts, with disruption in this balance leading to skeletal abnormalities and bone disease. The precise complement of mechanisms balancing these actions is not fully known, although several regulatory systems are known to be involved and current treatments for bone disease target these systems. The actions of purinergic signaling in bone have come to light over the past 20 years or so, but previously the emphasis was largely placed upon G-protein coupled P2Y receptors. This article details the current status of P2X receptors in bone, mainly focussing on the P2X7 receptor for which the most compelling evidence exists for its regulatory role in bone. The contribution of other P2X receptors to bone biology and future directions are also discussed. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim. Source


Johnson M.P.,University of Sheffield | Ruban A.V.,Queen Mary, University of London
Photosynthesis Research | Year: 2014

Light-driven photosynthetic electron transport is coupled to the movement of protons from the chloroplast stroma to the thylakoid lumen. The proton motive force that is generated is used to drive the conformational rotation of the transmembrane thylakoid ATPase enzyme which converts ADP (adenosine diphosphate) and Pi (inorganic phosphate) into ATP (adenosine triphosphate), the energy currency of the plant cell required for carbon fixation and other metabolic processes. According to Mitchell's chemiosmotic hypothesis, the proton motive force can be parsed into the transmembrane proton gradient (ΔpH) and the electric field gradient (Δψ), which are thermodynamically equivalent. In chloroplasts, the proton motive force has been suggested to be split almost equally between Δψ and ΔpH (Kramer et al., Photosynth Res 60:151-163, 1999). One of the central pieces of evidence for this theory is the existence of a steady-state electrochromic shift (ECS) absorption signal detected at ~515 nm in plant leaves during illumination. The interpretation of this signal is complicated, however, by a heavily overlapping absorption change at ~535 nm associated with the formation of photoprotective energy dissipation (qE) during illumination. In this study, we present new evidence that dissects the overlapping contributions of the ECS and qE-related absorption changes in wild-type Arabidopsis leaves using specific inhibitors of the ΔpH (nigericin) and Δψ (valinomycin) and separately using leaves of the Arabidopsis lut2npq1 mutant that lacks qE. In both cases, our data show that no steady-state ECS signal persists in the light longer than ~60 s. The consequences of our observations for the suggested parsing of steady-state thylakoid proton motive force between the proton gradient (ΔpH) and the electric field gradient (Δψ) are discussed. © 2013 Springer Science+Business Media Dordrecht. Source
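
For reference, one common statement of the chemiosmotic parsing referred to here, with the pH term converted to electrical units (sign convention assumed):

```latex
% Proton motive force across the thylakoid membrane (one common convention),
% with \Delta\mathrm{pH} = \mathrm{pH}_{\mathrm{stroma}} - \mathrm{pH}_{\mathrm{lumen}}:
\mathrm{pmf} \;=\; \Delta\psi \;+\; \frac{2.303\,RT}{F}\,\Delta\mathrm{pH}
% At 25 C, 2.303RT/F is about 59 mV, so each unit of pH gradient is
% thermodynamically equivalent to 59 mV of electric field gradient.
```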


Drayton R.M.,University of Sheffield
Biochemical Society Transactions | Year: 2012

Resistance to the cytotoxic effects of cisplatin can be mediated through changes in a wide variety of cellular processes and signalling pathways. The discovery of microRNAs as regulators of protein expression through the targeting of mRNA has led to a number of studies on the effect of cisplatin treatment on microRNA expression, and the ability of microRNAs to modulate cisplatin resistance. ©The Authors Journal compilation ©2012 Biochemical Society. Source


Thomas J.A.,University of Sheffield
Chemical Society Reviews | Year: 2015

An overview of optical biomolecular imaging is provided. Following a brief history of the development of probes and technologies in this area, general approaches which use biomolecular imaging in current commercial systems are discussed. A brief summary of research challenges in this area, in terms of both the chemistry and technique development, is introduced. Finally, areas rich for possible future development are suggested. This journal is © The Royal Society of Chemistry 2015. Source


Ruderman M.S.,University of Sheffield
Solar Physics | Year: 2010

In this paper we study kink oscillations of coronal loops in the presence of flows. Using the thin-tube approximation we derive the general governing equation for kink oscillations of a loop with the density varying along the loop in the presence of flows. This equation remains valid even when the density and flow are time dependent. The derived equation is then used to study the effect of flows on eigenfrequencies of kink oscillations of coronal loops. The implication of the obtained results on coronal seismology is discussed. © 2010 Springer Science+Business Media B.V. Source
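
For orientation, in the static, uniform-density limit the eigenmodes reduce to propagation at the standard kink speed; flow and longitudinal density structure shift these eigenfrequencies, which is what the derived governing equation captures:

```latex
% Kink speed of a uniform static magnetic flux tube, with internal/external
% densities \rho_i, \rho_e and Alfven speeds v_{Ai}, v_{Ae}:
c_k \;=\; \sqrt{\frac{\rho_i v_{Ai}^2 + \rho_e v_{Ae}^2}{\rho_i + \rho_e}}
```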


Keylock C.J.,University of Sheffield
Nonlinear Processes in Geophysics | Year: 2010

In this paper, classical surrogate data methods for testing hypotheses concerning nonlinearity in time-series data are extended using a wavelet-based scheme. This gives a method for systematically exploring the properties of a signal relative to some metric or set of metrics. A signal continuum is defined from a linear variant of the original signal (same histogram and approximately the same Fourier spectrum) to the exact replication of the original signal. Surrogate data are generated along this continuum with the wavelet transform fixing in place an increasing proportion of the properties of the original signal. Eventually, chaotic or nonlinear behaviour will be preserved in the surrogates. The technique permits various research questions to be answered and examples covered in the paper include identifying a threshold level at which signals or models for those signals may be considered similar on some metric, analysing the complexity of the Lorenz attractor, characterising the differential sensitivity of metrics to the presence of multifractality for a turbulence time-series, and determining the amplitude of variability of the Hölder exponents in a multifractional Brownian motion that is detectable by a calculation method. Thus, a wide class of analyses of relevance to geophysics can be undertaken within this framework. © 2010 Author(s). Source
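
The classical surrogate methods being extended here are typically of the iterated amplitude-adjusted Fourier transform (IAAFT) family, which preserve a signal's amplitude spectrum and value distribution while randomizing everything else. A minimal sketch of the classical construction (not the wavelet-based extension) follows.

```python
import numpy as np

def iaaft(x, n_iter=100, seed=0):
    """Iterated amplitude-adjusted Fourier transform (IAAFT) surrogate.

    Produces a surrogate with (approximately) the same Fourier amplitude
    spectrum and exactly the same value distribution as x -- the classical
    linear-surrogate construction that the wavelet-based scheme generalizes.
    """
    rng = np.random.default_rng(seed)
    amplitudes = np.abs(np.fft.rfft(x))      # target spectrum
    sorted_x = np.sort(x)                    # target value distribution
    s = rng.permutation(x)                   # start from a random shuffle
    for _ in range(n_iter):
        # Impose the target amplitude spectrum, keeping the current phases
        phases = np.angle(np.fft.rfft(s))
        s = np.fft.irfft(amplitudes * np.exp(1j * phases), n=len(x))
        # Impose the target value distribution by rank-order remapping
        s = sorted_x[np.argsort(np.argsort(s))]
    return s

x = np.cumsum(np.random.default_rng(1).normal(size=1024))  # toy signal
print(np.allclose(np.sort(iaaft(x)), np.sort(x)))          # True: same histogram
```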


Walters K.,University of Sheffield
Journal of Theoretical Biology | Year: 2012

In this paper we use approximate Bayesian computation to estimate the parameters in an immortal model of colonic stem cell division. We base the inferences on the observed DNA methylation patterns of cells sampled from the human colon. Utilising DNA methylation patterns as a form of molecular clock is an emerging area of research and has been used in several studies investigating colonic stem cell turnover. There is much debate concerning the two competing models of stem cell turnover: the symmetric (immortal) and asymmetric models. Early simulation studies concluded that the observed methylation data were not consistent with the immortal model. A later modified version of the immortal model that included preferential strand segregation was subsequently shown to be consistent with the same methylation data. Most of this earlier work assumes site independent methylation models that do not take account of the known processivity of methyltransferases whilst other work does not take into account the methylation errors that occur in differentiated cells. This paper addresses both of these issues for the immortal model and demonstrates that approximate Bayesian computation provides accurate estimates of the parameters in this neighbour-dependent model of methylation error rates. The results indicate that if colonic stem cells divide asymmetrically then colon stem cell niches are maintained by more than 8 stem cells. Results also indicate the possibility of preferential strand segregation and provide clear evidence against a site-independent model for methylation errors. In addition, algebraic expressions for some of the summary statistics used in the approximate Bayesian computation (that allow for the additional variation arising from cell division in differentiated cells) are derived and their utility discussed. © 2012 Elsevier Ltd. Source
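
Approximate Bayesian computation in its simplest rejection form is easy to state in code. The sketch below uses a toy stand-in simulator; a real application would simulate methylation patterns under the neighbour-dependent error model described above.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(error_rate, n_sites=100):
    """Stand-in simulator returning one summary statistic (mean methylation).

    A placeholder only: a real application would simulate methylation
    patterns under the neighbour-dependent error model of stem cell division.
    """
    return rng.binomial(n_sites, error_rate) / n_sites

# Rejection ABC: keep prior draws whose simulated summary statistic lies
# within a tolerance of the observed one; the kept draws approximate the
# posterior distribution of the parameter.
observed = 0.23      # hypothetical observed summary statistic
tolerance = 0.02
accepted = []
for _ in range(50_000):
    theta = rng.uniform(0.0, 1.0)            # prior on the error rate
    if abs(simulate(theta) - observed) < tolerance:
        accepted.append(theta)

print(np.mean(accepted), np.std(accepted))   # approximate posterior summary
```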


Hay C.,University of Sheffield
British Journal of Politics and International Relations | Year: 2013

It has taken quite a while for a consolidated crisis discourse to emerge in Britain in response to the seismic events of 2007-09. But one is now clearly evident, widely accepted and deeply implicated in government economic policy. It is a 'crisis of debt' discourse to which the response is austerity and deficit reduction; it is paradigm-reinforcing rather than paradigm-threatening. In this article I consider the appropriateness of such a crisis discourse, arguing that an alternative 'crisis of growth' discourse is rather more compelling and would point in very different policy directions while generating very different expectations about the effects of deficit reduction. Such a discourse can just about be detected in the growing criticism of the government's austerity programme, but it is yet to lead to the positing of a new growth model. I explore the implications of both crisis discourses for responses to the crisis, concluding with an assessment of the prospects for the return to growth under a new growth model in the years ahead. © 2012 The Author. British Journal of Politics and International Relations © 2012 Political Studies Association. Source


Diamond P.,University of Sheffield
British Journal of Politics and International Relations | Year: 2013

This article traces the roots of ideas currently influencing the Labour party relating to the role of the state in Britain's political economy, exploring the trajectory by which such ideas have entered contemporary debate and how they continue to shape the party's agenda. Subsequent sections explore diverse interpretations of New Labour, the economic legacy of the Blair and Brown governments, the re-imagining of British political economy undertaken by Labour under Miliband's leadership, and the interlinking 'progressive dilemmas' that have so far emerged. The article concludes by suggesting that the ambitious rediscovery of interventionism will only be realisable if Labour confronts historical dilemmas relating to the structure and efficacy of the British state. Such a confrontation requires serious engagement between the doctrines of British social democracy and the overlapping and interlinking narratives of liberal political thought. British Journal of Politics and International Relations © 2012 Political Studies Association. Source


Kesserwani G.,University of Sheffield
Journal of Hydraulic Research | Year: 2013

This paper compares various topography discretization approaches for Godunov-type shallow water numerical models. Many different approaches have become popular in Godunov-type water wave models. To date, the literature lacks an investigative study distinguishing their pros and cons, and assessing their reliability relating to issues of practical interest. To address this gap, this work reviews and assesses five standard topography discretization methods: the upwind method, the surface gradient method, the mathematically balanced set of the shallow water equations, the hydrostatic reconstruction technique and the discontinuous Galerkin discretization. The study further considers mix-mode approaches that incorporate wetting and drying in conjunction with the topography discretization. Steady and transient hydraulic tests are employed to measure the performance of the approaches relating to the issues of mesh size, topography's differentiability, accuracy-order of the numerical scheme, and impact of wetting and drying. © 2013 Copyright International Association for Hydro-Environment Engineering and Research. Source
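
Of the five approaches, the hydrostatic reconstruction technique is compact enough to sketch: interface water depths are rebuilt against the higher of the two bed elevations so that a lake at rest produces identical left and right states and hence no spurious flux. A minimal 1D sketch (the Riemann flux computation itself is omitted):

```python
def hydrostatic_reconstruction(hL, hR, zL, zR):
    """Hydrostatic reconstruction of interface water depths (Audusse et al.).

    The bed is replaced at the interface by max(zL, zR) and the depths are
    clipped so that a still water surface (h + z constant, u = 0) produces
    identical left/right states and hence zero net flux: well-balancing.
    """
    z_face = max(zL, zR)
    hL_star = max(0.0, hL + zL - z_face)   # clipping also handles drying
    hR_star = max(0.0, hR + zR - z_face)
    return hL_star, hR_star

# Lake at rest over an uneven bed: surface level h + z = 2.0 on both sides
print(hydrostatic_reconstruction(hL=1.5, hR=1.0, zL=0.5, zR=1.0))  # (1.0, 1.0)
```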


Warren S.,University of Sheffield
Cultural Geographies | Year: 2013

This article investigates the under-addressed topic of audiencing in relation to art in landscape, considering the ways in which this study can enliven cultural geography. Exploring how issues of interpretation and reception have been approached in the past, it tailors mixed methods to trace audience practices using the case study of James Turrell's Skyspace at Yorkshire Sculpture Park, England. Turrell's site art is installed in a remodelled deershelter within the Bretton Estate, bringing together contemporary art, heritage and working landscape. This research contributes to recent debate on post-phenomenological work by representing multiple subjects' engagements with site art. Vignettes of audiencing are presented that challenge authorial control and curatorial interpretation in specific ways pointing toward the open-endedness of the production and reception of cultural forms. Developing cultural geography's engagement with art, the article challenges geographers to consider how the meaning of works and sites can be renegotiated according to the specialisms of others, and the social dimensions of audience experience. By showing how critical enquiry can become more democratized through the inclusion of different subjects, it reveals the important theoretical, methodological and empirical contributions the study of audiencing can make to the geographies of art. © The Author(s) 2012. Source


There is considerable interest in the role of inequality in affecting social outcomes, yet there is also uncertainty and disagreement about the appropriate scale at which to measure inequality within such analyses. Whilst some have argued for larger-area inequality measures to be used, there are good theoretical, empirical and intuitive grounds to think that local inequality may have relevance as a driver of social ills. This paper explores whether differing understandings of 'local' inequality do, or can, matter and, if so, within which contexts this is the case. Contrasting findings across the two areas support the notion that local inequality does have relevance to social outcomes but that the socio-spatial context matters. © 2012 Urban Studies Journal Limited. Source


Cooper J.R.,University of Sheffield
Geomorphology | Year: 2012

Previous studies have shown that spatial variance in fluid and critical shear stress, caused by form roughness, can increase bedload flux. Others have revealed that variance in flow velocity increases with relative submergence and that bed mobility is reduced at lower submergences. The paper explores the link between these observations and addresses the following questions: is grain roughness sufficient to cause variance in fluid shear stress and an increase in bedload flux; if this variance changes with submergence, does this mean that the increase is dependent on submergence; and does this explain the change in mobility with submergence? A simple, statistical bedload model, based on spatial distributions of fluid and critical shear stress, has been used to explore these effects over a water-worked gravel deposit. Estimates of spatially distributed fluid shear stress were gained from laboratory flume measurements of near-bed flow velocity, and a distribution of critical shear stress was simulated using a discrete particle model of the sediment distribution used in the flume. The velocity data were used to describe the change in the spatial distribution of near-bed velocity with relative submergence, which allowed the effects of submergence on flux to also be considered. The main conclusions were: (i) spatial variance in fluid shear stress from grain roughness was not sufficient to have an appreciable effect on bedload flux over water-worked gravel beds with a spatial distribution of critical shear stress; (ii) spatial variance in critical shear stress, caused by grain roughness, had a much larger influence and increased bedload flux. This was a level of increase similar to that observed in studies where form roughness was high; (iii) spatially averaged estimates of fluid and critical shear stress should not be used to estimate bedload flux even if form roughness is low; and (iv) a rise in relative submergence increased bedload flux. This was due to changes in the spatial distribution of near-bed velocity and not due to a lowering in the local flow velocity as has been suggested by previous studies. © 2011 Elsevier B.V. Source
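
The statistical argument rests on the convexity of bedload laws in excess shear stress: with spatial variance present, the mean of the flux exceeds the flux at the mean. The sketch below illustrates this with a Meyer-Peter-Muller-style law and assumed lognormal distributions, not the distributions measured in the flume.

```python
import numpy as np

rng = np.random.default_rng(0)

# Bedload laws are convex in excess shear stress, so spatial variance in
# fluid and critical shear stress raises the spatially averaged flux above
# the flux computed from spatially averaged stresses. The transport law and
# the lognormal distributions below are illustrative assumptions.
def flux(tau, tau_c):
    excess = np.maximum(tau - tau_c, 0.0)
    return 8.0 * excess**1.5          # dimensionless MPM-style transport law

n = 200_000
tau_mean, tau_c_mean = 1.2, 1.0
tau = rng.lognormal(np.log(tau_mean) - 0.5 * 0.4**2, 0.4, n)      # fluid stress field
tau_c = rng.lognormal(np.log(tau_c_mean) - 0.5 * 0.3**2, 0.3, n)  # grain-scale variation

print(flux(tau_mean, tau_c_mean))     # flux from spatially averaged stresses
print(flux(tau, tau_c).mean())        # mean flux over the distributions: larger
```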


There is increasing evidence that background selection, the effects of the elimination of recurring deleterious mutations by natural selection on variability at linked sites, may be a major factor shaping genome-wide patterns of genetic diversity. To accurately quantify the importance of background selection, it is vital to have computationally efficient models that include essential biological features. To this end, a structured coalescent procedure is used to construct a model of background selection that takes into account the effects of recombination, recent changes in population size and variation in selection coefficients against deleterious mutations across sites. Furthermore, this model allows a flexible organization of selected and neutral sites in the region concerned, and has the ability to generate sequence variability at both selected and neutral sites, allowing the correlation between these two types of sites to be studied. The accuracy of the model is verified by checking against the results of forward simulations. These simulations also reveal several patterns of diversity that are in qualitative agreement with observations reported in recent studies of DNA sequence polymorphisms. These results suggest that the model should be useful for data analysis. © 2013 Macmillan Publishers Limited All rights reserved. Source
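
For orientation, the classical fully linked limit of background selection (Charlesworth, Morgan & Charlesworth 1993) reduces neutral diversity to roughly the equilibrium frequency of the mutation-free class; the recombination, demographic change and variable selection coefficients handled by the structured coalescent model all act to modify this reduction:

```latex
% Classical fully linked limit: U is the diploid deleterious mutation rate,
% hs the heterozygous selection coefficient, and B the ratio of neutral
% diversity to its value without selection at linked sites.
B \;=\; \frac{\pi}{\pi_0} \;\approx\; \exp\!\left(-\frac{U}{2hs}\right)
```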


Mann B.E.,University of Sheffield
Organometallics | Year: 2012

The development of CO-releasing molecules, CO-RMs, for medical applications is reviewed from a personal point of view. The review covers the initial discovery of CO-RMs and then concentrates on developments involving the author. The review finishes with suggestions for areas meriting further investigation. © 2012 American Chemical Society. Source


Buckley H.L.,Lincoln University at Christchurch | Freckleton R.P.,University of Sheffield
Journal of Ecology | Year: 2010

The positive interspecific abundance-occupancy relationship (AOR) is a ubiquitous, but highly variable ecological pattern. Understanding this variation is a key challenge for community ecologists and little progress has been made using ecological trait data to predict variation in abundance and occupancy. We used a data set from vascular plants in New Zealand South Island tussock grasslands measured at a landscape scale over 25 years to analyse AORs within a single habitat type. We firstly modelled the interspecific relationship between abundance and occupancy across species. We then measured the deviations in the slopes of the abundance-occupancy relationship for individual species from this overall interspecific relationship and related these slope deviations to data on species' life-history and ecological traits. Highly invasive species that increased their ranges and abundances during the 25-year study period had significantly steeper slopes in abundance-occupancy space than the interspecific relationship although species with increased dispersal ability did not. Those species that were clonal showed significantly shallower slopes suggesting that clonality causes species to respond more slowly in occupancy than abundance to changes in their environment. Synthesis. These results show that considering the population dynamics of individual species allows us to relate species' traits to their trajectory over time in abundance-occupancy space and thus can lead to a better understanding of the variation and scatter around the interspecific abundance-occupancy pattern. © 2010 The Authors. Journal compilation © 2010 British Ecological Society. Source


While A.,University of Sheffield | Jonas A.E.G.,University of Hull | Gibbs D.,University of Hull
Transactions of the Institute of British Geographers | Year: 2010

The management of carbon emissions holds some prospect for challenging sustainable development as the organising principle of socio-environmental regulation. This paper explores the rise of a distinctive low-carbon polity as an ideological state project, and examines its potential ramifications for the regulation of economy-environment relations at the urban and regional scale. Carbon control would seem to introduce a new set of values into state regulation and this might open up possibilities for challenging mainstream modes of urban and regional development in a manner not possible under sustainable development. But low-carbon restructuring also portends intensified uneven development, new forms of state control and a socially uneven reworking of state-society relations. In order to explore these issues we start by setting out a framework for conceptualising environmental regulation based around the idea of eco-state restructuring. This idea is introduced to capture the conflicts, power struggles and strategic selectivities involved as governments seek to reconcile environmental protection with multiple other pressures and demands. Overall the paper seeks to make a distinctive contribution to theoretical work on state environmental regulation and the emerging spatial dimensions of climate policy. © 2009 The Authors. Journal compilation © Royal Geographical Society (with The Institute of British Geographers) 2009. Source


The integration of top-down (lithographic) and bottom-up (synthetic chemical) methodologies remains a major goal in nanoscience. At larger length scales, light-directed chemical synthesis, first reported two decades ago, provides a model for this integration, by combining the spatial selectivity of photolithography with the synthetic utility of photochemistry. This review describes attempts to realise a similar integration at the nanoscale, by employing near-field optical probes to initiate selective chemical transformations in regions a few tens of nm in size. A combination of near-field exposure and an ultra-thin resist yields exceptional performance: in self-assembled monolayers, an ultimate resolution of 9 nm (ca. λ/30) has been achieved. A wide range of methodologies, based on monolayers of thiols, silanes and phosphonic acids, and thin films of nanoparticles and polymers, have been developed for use on metal and oxide surfaces, enabling the fabrication of metal nanowires, nanostructured polymers and nanopatterned oligonucleotides and proteins. Recently parallel lithography approaches have demonstrated the capacity to pattern macroscopic areas, and the ability to function under fluid, suggesting exciting possibilities for surface chemistry at the nanoscale. This journal is © The Royal Society of Chemistry 2012. Source


Graves' disease and other disorders of thyroid function may occur following treatment with novel anticancer agents or during periods of lymphocyte recovery after lymphopenia. There are three main settings for such lymphocyte reconstitution: recovery after a bone marrow or haematopoietic stem cell transplant, alemtuzumab treatment and the use of highly active antiretroviral therapy (HAART) for human immunodeficiency virus infection. The available evidence suggests that Graves' disease behaves as normal in most of these cases and should be treated conventionally, but it may follow a more favourable course in those receiving alemtuzumab or HAART. As spontaneous or drug-induced remission may be more likely in these two settings, first-line treatment should usually consist of an antithyroid drug. © 2014 John Wiley & Sons Ltd. Source


Wise S.,University of Sheffield
Computers and Geosciences | Year: 2012

The Shannon-Weaver Information statistic has been proposed as a useful measure of the quality of a Digital Elevation Model. However, the statistic, usually referred to as entropy, is based purely on the range of values in a dataset and their relative proportion and is not directly related to the accuracy of those values or their spatial arrangement. These properties suggest that a better understanding is needed of how entropy behaves with respect to DEMs and that is what this paper seeks to provide. Previous work has suggested two uses for entropy: firstly as a measure of the loss of information caused by smoothing and aggregation of a DEM and secondly as a means of comparing DEMs and DEM products of varying quality. A series of theoretical and practical tests were used to test these ideas. It was found that entropy may well be useful as a measure of the information lost when a DEM is aggregated to a coarser scale but is not related to smoothing in any systematic way. There was no consistent relationship between entropy and the measures of DEM quality tested. Although it may have uses as a quality measure in some instances, further work would be needed to establish its reliability. © 2012 Elsevier Ltd. Source
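
The statistic itself is simple to compute, and the aggregation experiment is easy to reproduce in outline. The sketch below applies Shannon entropy to the value histogram of a toy surface; the grid size, bin count and synthetic surface are all assumptions.

```python
import numpy as np

def shannon_entropy(values, bins=256):
    """Shannon-Weaver entropy H = -sum(p * log2 p) of a DEM's value histogram.

    Note the statistic depends only on the range and relative proportion of
    elevation values -- not on their accuracy or spatial arrangement.
    """
    counts, _ = np.histogram(values, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(0)
dem = np.cumsum(np.cumsum(rng.normal(size=(256, 256)), 0), 1)  # toy correlated surface

# Aggregating to a coarser grid (block mean over 4x4 cells) loses information,
# which the entropy of the aggregated DEM reflects.
coarse = dem.reshape(64, 4, 64, 4).mean(axis=(1, 3))
print(shannon_entropy(dem), shannon_entropy(coarse))
```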


Wyatt L.R.,University of Sheffield
Journal of Atmospheric and Oceanic Technology | Year: 2012

The accuracy of wave direction and spreading at the Bragg-matched wavelength measured with HF radar over a wide range of HF operating frequencies is demonstrated by comparison with buoy data. The agreement for shortwave direction is better than that obtained for wind direction, which has been the more common application of this measurement, because these waves are not always aligned with the wind direction, particularly in short fetch and low wind speed situations. The method assumes a model of shortwave directionality and the validity of this is explored by using the buoy Fourier coefficients, with inconclusive results. The radar measurements do not use the linear dispersion relationship, but the comparison with buoy data does, and the implications of this are discussed. © 2012 American Meteorological Society. Source


Carlton J.,University of Sheffield
Optometry and Vision Science | Year: 2013

PURPOSE: Patient-reported outcome (PRO) instruments are increasingly common in both clinical practice and research. The data obtained from these instruments can be used to help inform clinical and policy decision making. However, the methodological approaches undertaken in developing PROs are not frequently reported: literature on the development of the descriptive systems for PROs is sparse in comparison with that on the assessment of the psychometric properties of such instruments. The purpose of this study is to describe the results of qualitative interviews conducted to identify potential themes for the Child Amblyopia Treatment Questionnaire (CAT-QoL), a pediatric disease-specific health-related quality-of-life instrument for amblyopia designed for children aged 4 to 7 years. METHODS: Semistructured interviews were undertaken with 59 children (aged 3 years 9 months to 9 years 11 months; average, 6 years 3 months) with amblyopia. The interviews were transcribed verbatim and imported into QSR NVivo 8. Interview transcripts were analyzed to identify potential items to be included in the descriptive system. Thematic content analysis was undertaken using Framework. RESULTS: Eleven potential themes were identified for inclusion in the CAT-QoL instrument, namely, physical sensation of the treatment, pain, being able to play with other children, how other children have treated them, ability to undertake schoolwork, ability to undertake other tasks, sad or unhappy, cross, worried, frustrated, and feelings toward family members. CONCLUSIONS: Children are able to identify their thoughts and opinions about their own health and to describe what impact their amblyopia treatment has had on their daily lives. Themes for the draft descriptive system for a pediatric self-reported amblyopia QoL instrument have been identified, and a draft version of the CAT-QoL instrument has been developed. Further research is required to refine and assess the psychometric properties of the instrument. © 2013 American Academy of Optometry. Source


Lopez-Perez D.,Alcatel - Lucent | Guvenc I.,Florida International University | Chu X.,University of Sheffield
IEEE Communications Magazine | Year: 2012

In this article we provide a comprehensive review of the handover process in heterogeneous networks (HetNets) and identify technical challenges in mobility management. Along these lines, we evaluate the mobility performance of HetNets with the 3rd Generation Partnership Project (3GPP) Release-10 range expansion and enhanced inter-cell interference coordination (eICIC) features such as almost blank subframes (ABSFs). Simulation assumptions and parameters of a related study item in 3GPP are used to investigate the impact of various handover parameters on mobility performance. In addition, we propose a mobility-based inter-cell interference coordination (MB-ICIC) scheme, in which picocells configure coordinated resources so that macrocells can schedule their high-mobility user equipments (UEs) in these resources without co-channel interference from picocells. MB-ICIC also benefits low-mobility UEs, since handover parameters can be more flexibly optimized. Simulations using the 3GPP simulation assumptions are performed to evaluate the performance of MB-ICIC under several scenarios. © 2012 IEEE. Source
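As an illustration of the handover parameters evaluated above, the sketch below checks a simplified version of the 3GPP A3 entering condition: the target cell must exceed the serving cell by a hysteresis margin for the full time-to-trigger window. Cell-specific offsets are omitted and the sample-count treatment of time-to-trigger is an assumption made for brevity; this is not the authors' simulator.

```python
def a3_event_triggered(rsrp_serving, rsrp_target, hysteresis_db=2.0,
                       ttt_samples=4):
    """Simplified 3GPP A3 entering condition: the target cell's RSRP
    must exceed the serving cell's by the hysteresis margin for every
    sample in the time-to-trigger window (cell offsets omitted)."""
    window = zip(rsrp_serving[-ttt_samples:], rsrp_target[-ttt_samples:])
    return all(t > s + hysteresis_db for s, t in window)

# Illustrative RSRP traces (dBm) for a UE moving from a macrocell
# toward a picocell; values are invented for the example.
macro = [-80.0, -82.0, -84.0, -86.0, -88.0]
pico = [-85.0, -79.0, -80.0, -82.0, -84.0]
print(a3_event_triggered(macro, pico))  # True: a handover would be triggered
```

Shortening the time-to-trigger window or the hysteresis margin makes handover more aggressive, which is exactly the trade-off between handover failures and ping-pong handovers that such studies explore.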


Cook S.J.,Aberystwyth University | Swift D.A.,University of Sheffield
Earth-Science Reviews | Year: 2012

Closed topographic basins are found beneath contemporary ice masses and within the footprint of former ice masses in all glaciated regions. We present the first integrated review of subglacial basin occurrence and formation and the implications of such basins for glaciological processes and the evolution of landscape. Our purpose is to motivate research in areas where understanding of basin origin and process significance is weak. Basins on the order of 10–10² m deep and 10²–10³ m long are produced by glacial erosion of subglacial rock and/or sediment and are known as 'overdeepenings'. Outlet and valley glaciers can 'overdeepen' their beds far below sea level or local fluviatile base level. Larger basins, typically in ice sheet contexts, may have a pre-glacial (usually tectonic) origin. Subglacial basins are important glaciologically because they require ice, water and sediment to ascend an adverse subglacial slope in order to exit the glacial system, the efficiency of which is dependent upon the gradient of the adverse slope and that of the ice surface. Basins thus influence subglacial drainage system morphology and transmissivity, the thickness and distribution of basal ice and sediment layers, and the mechanisms and dynamics of ice flow. Adverse gradients that exceed 11 times that of the ice surface may even permit the formation of subglacial lakes. We speculate that, in comparison to ice masses with few or no subglacial basins, those with numerous or very large basins may respond to climatic changes with unexpected vigour. In addition, erosion rates and transport pathways of water and sediment through the glacial system, and the expression of these processes in the sediment and landform record, may be unexpectedly complex. Further, our review shows that, in a warming climate, ice masses resting on adverse slopes will be vulnerable to rapid and potentially catastrophic retreat; new lakes in subglacial basins exposed by mountain glacier retreat will present an increasing hazard; and subglacial lakes may drain catastrophically. On even longer time scales, we speculate that the glacial excavation and post-glacial filling of basins in mountainous regions should contribute importantly to climate-related changes in isostasy and relief. Although the controls on overdeepening and their influence on other glacial and landscape processes remain uncertain, we hypothesise that overdeepened glacial systems reflect an equilibrium ice-bed geometry that maximises the efficiency of ice discharge. Improved understanding of overdeepening processes, especially overdeepened-bed hydrology, is therefore necessary to understand fully the dynamic behaviour of valley and outlet glaciers, and thus the fate of Earth's largest ice masses. © 2012 Elsevier B.V. Source
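The factor of 11 quoted above follows from the subglacial hydraulic potential: water at the bed can pond only where the adverse bed slope exceeds roughly rho_ice / (rho_water - rho_ice), about 11 times the ice-surface slope. A minimal sketch of that threshold, assuming standard densities (this is a derived illustration, not code from the review):

```python
RHO_ICE = 917.0     # kg/m^3
RHO_WATER = 1000.0  # kg/m^3

def critical_slope_ratio(rho_i=RHO_ICE, rho_w=RHO_WATER):
    """Adverse bed slope, as a multiple of the ice-surface slope, above
    which the subglacial hydraulic-potential gradient reverses and
    water can pond: rho_i / (rho_w - rho_i), approximately 11."""
    return rho_i / (rho_w - rho_i)

def water_ponds(bed_slope, surface_slope):
    """True if an adverse (ascending) bed slope is steep enough,
    relative to the ice-surface slope, for a subglacial lake to form."""
    return bed_slope > critical_slope_ratio() * surface_slope

print(round(critical_slope_ratio(), 1))                  # ~11.0
print(water_ponds(bed_slope=0.12, surface_slope=0.01))   # True
```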


Levin I.,U.S. National Institute of Standards and Technology | Reaney I.M.,University of Sheffield
Advanced Functional Materials | Year: 2012

The room-temperature structure of Na1/2Bi1/2TiO3 (NBT) ceramics was studied using several transmission electron microscopy (TEM) techniques. High-angle annular dark field imaging in a scanning TEM confirmed an essentially random distribution of Bi and Na, while electron diffraction revealed significant disorder of the octahedral rotations and cation displacements. Diffraction-contrast dark-field and Fourier-filtered high-resolution TEM images were used to develop a model that reconciles local and average octahedral tilting in NBT. According to this model, NBT consists of nanoscale twin domains which exhibit a⁻a⁻c⁺ tilting. The coherence length of the in-phase tilting, however, is limited to a few unit cells and is at least one order of magnitude shorter than that of anti-phase tilting. Assemblages of such nanodomains are proposed to exhibit an average a⁻a⁻c⁻ tilt system. Diffuse sheets of intensity in electron diffraction patterns are attributed to local cation displacements correlated along both (111) and (100) chains and suggest partial polar ordering of these displacements. Overall, the TEM data indicate significant chemical, cation-displacement and tilt disorder of the NBT structure at the nano and mesoscale and support the premise that the Cc symmetry recently proposed from powder diffraction refinements is an averaged "best fit" cell. The structure of Na1/2Bi1/2TiO3 (NBT) is studied using transmission electron microscopy (TEM). NBT consists of nanoscale twin domains featuring in-phase and anti-phase octahedral tilting. The coherence length of the in-phase tilting is limited to a few unit cells and is at least one order of magnitude shorter than that of anti-phase tilting. Assemblages of such nanodomains exhibit average anti-phase tilting with monoclinic pseudo-symmetry. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim. Source


Weiner J.,Copenhagen University | Freckleton R.P.,University of Sheffield
Annual Review of Ecology, Evolution, and Systematics | Year: 2010

Constant final yield is an empirical generalization concerning the total biomass production of plant stands growing at different densities after a period of growth. Total standing biomass initially increases in proportion to density, levels off, and then remains constant as density increases further. We review the empirical basis for and mathematical formulations of this pattern, and we clarify the relationship of constant final yield to density-dependent mortality (self-thinning). There are several mechanisms that can explain the pattern, and it has a clear evolutionary basis. Constant final yield is a key to understanding population- and community-level phenomena. Establishing whether or not a plant community is at or close to constant final yield is important for understanding and predicting its behavior. It represents the maximum biomass for a genotype in an environment after a period of growth and, as such, can serve as a baseline for the measurement of disturbance in plant communities. Copyright © 2010 by Annual Reviews. All rights reserved. Source
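One common mathematical formulation of the pattern described above is the reciprocal yield law of Shinozaki and Kira, in which mean per-plant biomass w declines with density N such that stand biomass Y saturates. The review discusses several formulations, so this is offered as a representative sketch rather than as the authors' preferred model:

```latex
% Reciprocal yield law: per-plant biomass w declines with density N,
% so stand biomass Y = Nw rises, levels off, and approaches 1/A.
\[
  \frac{1}{w} = A N + B
  \quad\Longrightarrow\quad
  Y = N w = \frac{N}{A N + B}
  \;\xrightarrow{\;N \to \infty\;}\; \frac{1}{A}
\]
```

At low density, Y is approximately N/B and rises in proportion to density; at high density, Y approaches the constant final yield 1/A.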


Dorling D.,University of Sheffield
Geographical Journal | Year: 2010

In January 2010 we learnt that within London the best-off 10th of the population each had recourse to 273 times the wealth of the worst-off 10th of that population (Hills et al. 2010, An anatomy of economic inequality in the UK, Report of the National Equality Panel, Government Equalities Office, London). It is hard to find any city in an affluent country that is more unequal. This wealth gap did not include the assets of the UK super-rich, who mostly live in or near London. In April 2010 the Sunday Times newspaper reported that the wealth of the richest 1000 people in the UK had risen by an average of £77 million each in just one year, to stand at £335.5 billion. Today in the UK we are again as unequal as we were around 1918. For 60 years we became more equal, but for the last 30 years, more unequal. Looking at inequality trends it is very hard, initially, to notice when the party of government changed. However, closer inspection of the time series suggests there were key times when the trends changed direction, when the future was much less like the past and when how people voted and acted appeared to matter more than at other times. With all three main parties offering what appear to be very similar solutions to the issue of reducing inequality, it seems unlikely that voting in 2010 will make much of a difference. However, inequalities are now at unsustainable extremes. Action has been taken such that some inequalities, especially in education, have begun to shrink. The last two times that the direction of trends in inequalities changed, in the 1920s and 1970s, there were several general elections held within a relatively short time period. Inequality is expensive. The UK is not as well off as it once was. It could be time for a change again. Which way will we go? © 2010 The Author(s). Journal compilation © 2010 The Royal Geographical Society. Source


Lambie-Mumford H.,University of Sheffield
Journal of Social Policy | Year: 2013

This article charts the rise of one of the UK's most high-profile forms of food bank: the Trussell Trust Foodbank franchise. Employing empirical data, it seeks to embed the phenomenon of the growth of Foodbanks within a social policy research context. In the first instance, the role of recent and ongoing shifts in the social policy context is examined, notably the importance of welfare diversification under previous Labour governments (1997-2010) and the current public spending cuts, welfare restructuring and Big Society rhetoric of the Conservative-Liberal Democrat Coalition government. The paper goes on to explore the nature of Foodbanks as emergency initiatives, providing relief and alleviation for the 'symptoms' of food insecurity and poverty. Data are presented which demonstrate some of the ways in which the Foodbank model, and those who run the projects, navigate the tension between addressing symptoms rather than 'root causes' of poverty and food insecurity. In the face of the simultaneous growth in emergency food initiatives and significant upheavals in social policy and welfare provision, the article culminates with an argument for social policy research and practice to harness and prioritise a human rights-based approach to food experiences. Copyright © 2012 Cambridge University Press. Source


Armitage C.J.,University of Sheffield
Journal of Child Psychology and Psychiatry and Allied Disciplines | Year: 2012

Background: Body satisfaction interventions have typically been multifaceted and targeted at clinical populations. The aim of the present research was to isolate the effects of self-affirmation on body satisfaction in a community sample and to test whether self-affirmation works by shifting the basis of one's self-esteem to domains other than body weight and shape. Methods: Adolescents (N = 220) were randomized to complete a self-affirmation manipulation or an equivalently active control task before rating their body shape and weight, and completing measures of perceived threat, body satisfaction and self-esteem. Results: Affirmed girls showed significantly greater body satisfaction and perceived significantly less threat from having to rate their body shape and weight compared with the equivalently active control group. Mediator analyses showed that the effects were due both to increases in self-esteem and to shifts away from using body shape and weight as a source of self-esteem. Self-affirmation did not affect boys because they: (a) were less threatened by having to rate their body shape and weight, and (b) principally derived their self-esteem from sources other than body shape and weight. Conclusions: The findings provide support for the unique effects of self-affirmation on girls' body satisfaction, thereby isolating one active ingredient of programs to increase body satisfaction and identifying a potential mechanism for understanding self-affirmation effects. Further research is required to establish the long-term effects of self-affirmation and to test how self-affirmation interacts with other active ingredients in treatment programs. © 2011 The Author. Journal of Child Psychology and Psychiatry © 2011 Association for Child and Adolescent Mental Health. Source


Snowden J.A.,University of Sheffield
Blood | Year: 2016

Autologous hematopoietic stem cell transplantation (HSCT) is increasingly used for severe autoimmune and inflammatory diseases, but the mechanisms involved have yet to be elucidated. In this issue of Blood, Delemarre et al report findings in both animal and human models that provide insights into the restoration of functionality and diversity within the regulatory T-cell (Treg) compartment following HSCT.1 ©2016 by The American Society of Hematology. Source


Jenkins L.,University of Sheffield
Sociology of Health and Illness | Year: 2015

Traditional theories of socialisation, in which the child was viewed as a passive subject of external influences, are increasingly being rejected in favour of a new sociology of childhood which frames the child as a social actor. This article demonstrates the way in which conversation analysis can reveal children's agency in the micro-detail of naturally occurring episodes in which children express bodily sensations and pain in everyday life. Based on 71 video-recordings of mealtimes with five families, each with two children under 10 years old, the analysis focuses on the components of children's expressions of bodily sensation (including pain), the character of parents' responses and the nature of the subsequent talk. The findings provide further evidence that children are social actors, active in constructing, accepting and resisting the nature of their physical experience and pain. A conversation analysis of ordinary family talk facilitates a description of how a child's agency is built, maintained or resisted through the interactional practices participants employ to display knowledge. © 2015 Foundation for the Sociology of Health & Illness/John Wiley & Sons Ltd. Source


BACKGROUND: Remote monitoring (RM) strategies have the potential to deliver specialised care and management to patients with heart failure (HF). OBJECTIVE: To determine the clinical effectiveness and cost-effectiveness of home telemonitoring (TM) or structured telephone support (STS) strategies compared with usual care for adult patients who have been recently discharged (within 28 days) from acute care after an exacerbation of HF. DATA SOURCES: Fourteen electronic databases (including MEDLINE, EMBASE, PsycINFO and The Cochrane Library) and research registers were searched to January 2012, supplemented by hand-searching of relevant articles and contact with experts. REVIEW METHODS: The review included randomised controlled trials (RCTs) or observational cohort studies with a contemporaneous control group that included the following RM interventions: (1) TM (including cardiovascular implanted monitoring devices) with medical support provided during office hours or 24/7; and (2) STS programmes delivered by human-to-human contact (HH) or a human-to-machine interface (HM). A systematic review and network meta-analysis (where appropriate) of the clinical evidence was carried out using standard methods. A Markov model was developed to evaluate the cost-effectiveness of different RM packages compared with usual care for recently discharged HF patients. TM 24/7 and TM using cardiovascular monitoring devices were not considered in the economic model because of the lack of data and/or their unsuitability for the UK setting. Given the heterogeneity in the components of usual care and RM interventions, the cost-effectiveness analysis was performed using a set of costing scenarios designed to reflect the different configurations of usual care and RM in the UK. RESULTS: The literature searches identified 3060 citations. Six RCTs met the inclusion criteria and were added to the 15 trials identified from previous systematic reviews, giving a total of 21 RCTs included in the systematic review. No trials of cardiovascular implanted monitoring devices, and no observational studies, met the inclusion criteria. The methodological quality of the studies varied widely and reporting was generally poor. Compared with usual care, RM was beneficial in reducing all-cause mortality for STS HH [hazard ratio (HR) 0.77, 95% credible interval (CrI) 0.55 to 1.08], TM during office hours (HR 0.76, 95% CrI 0.49 to 1.18) and TM 24/7 (HR 0.49, 95% CrI 0.20 to 1.18); however, these results were statistically inconclusive. The results for TM 24/7 should be treated with caution because of the poor methodological quality of the only included study in this network. No favourable effect on mortality was observed with STS HM. Similar reductions were observed in all-cause hospitalisations for TM interventions, whereas STS interventions had no major effect. A sensitivity analysis, in which one study was excluded because it provided better-than-usual support to the control group, showed larger beneficial effects for most outcomes, particularly for TM during office hours. In the cost-effectiveness analyses, TM during office hours was the most cost-effective strategy, with an estimated incremental cost-effectiveness ratio (ICER) of £11,873 per quality-adjusted life-year (QALY) compared with usual care, whereas STS HH had an ICER of £228,035 per QALY compared with TM during office hours. STS HM was dominated by usual care. Similar results were observed in scenario analyses performed using higher costs of usual care, higher costs of STS HH and lower costs of TM during office hours.
LIMITATIONS: The RM interventions included in the review were heterogeneous in terms of monitored parameters and HF selection criteria, and lacked detail on the components of the RM care packages and of usual care (e.g. communication protocols, routine staff visits and resources used). As a result, the economic model used scenarios for different RM classifications, with their costs estimated using bottom-up costing methods. Although users can decide which of these scenarios is most representative of their setting, uncertainties remain about the assumptions made in estimating these costs. In addition, the model assumed that the effectiveness of the interventions was constant over time, irrespective of the duration of deployment, and that the interventions were equally effective in different age/severity groups. CONCLUSIONS: Despite wide variation in usual care and RM strategies, the cost-effectiveness analyses suggest that TM during office hours was an optimal strategy in most costing scenarios. However, clarity was lacking in descriptions of the components of RM packages and usual care, and robust estimation of costs was lacking; further research is needed in these areas. STUDY REGISTRATION: PROSPERO registration no. CRD42011001368. FUNDING: The National Institute for Health Research Health Technology Assessment programme. Source
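For clarity on the ICER figures quoted above: the ratio is simply the incremental cost divided by the incremental QALYs of one strategy over a comparator, with dominance as the degenerate cases. A minimal sketch with invented illustrative numbers (the assessment itself used a Markov model over costing scenarios, not this calculation):

```python
def icer(cost_new, qaly_new, cost_base, qaly_base):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY
    of a new strategy over a comparator, with simple dominance checks."""
    d_cost = cost_new - cost_base
    d_qaly = qaly_new - qaly_base
    if d_qaly <= 0 and d_cost >= 0:
        return "dominated"   # costs more (or the same) for no QALY gain
    if d_cost <= 0 and d_qaly >= 0:
        return "dominant"    # costs no more and yields at least as many QALYs
    return d_cost / d_qaly

# Illustrative numbers only, not those of the assessment above:
print(round(icer(cost_new=14000, qaly_new=5.2,
                 cost_base=12000, qaly_base=5.0)))  # 10000 pounds per QALY gained
```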


Cleasby I.R.,University of Sheffield | Nakagawa S.,University of Otago
Behavioral Ecology and Sociobiology | Year: 2011

One of the fundamental assumptions underlying linear regression models is that the errors have a constant variance (i.e., that they are homoscedastic). When this assumption is violated, standard errors from a regression can be biased and inconsistent, meaning that the associated p values and 95% confidence intervals cannot be trusted. The assumption of homoscedasticity is made for statistical rather than biological reasons; in most real datasets, some form of heteroscedasticity is likely to exist. However, a survey of the behavioural ecology literature showed that only about 5% of articles explicitly mentioned heteroscedasticity, implying that in the remaining 95% it went unexamined or at least unreported. These results strongly indicate that heteroscedasticity is widely under-reported within behavioural ecology. The aim of this article is to raise awareness of heteroscedasticity amongst behavioural ecologists. Using topical examples from fields in behavioural ecology such as sexual dimorphism and animal personality, we highlight the biological importance of considering heteroscedasticity. We also emphasize that researchers should pay closer attention to the variance in their data and consider what factors could cause heteroscedasticity. In addition, we introduce some simple methods of dealing with heteroscedasticity. The two methods we focus on are: (1) incorporating variance functions within a generalised least squares (GLS) framework to model the functional form of heteroscedasticity; and (2) heteroscedasticity-consistent standard error (HCSE) estimators, which can be used when the functional form of heteroscedasticity is unknown. Using case studies, we show how both methods can influence the output from linear regression models. Finally, we hope that more researchers will come to consider heteroscedasticity as an important source of additional information about the particular biological process being studied, rather than as an impediment to statistical analysis. © 2011 Springer-Verlag. Source
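As a hedged illustration of the second remedy mentioned above (HCSE estimators), the sketch below fits the same simulated heteroscedastic data with classical and HC3 robust standard errors using statsmodels. The data-generating process and parameter values are invented for illustration; the original article's case studies are not reproduced here.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
x = rng.uniform(0, 10, n)
# Heteroscedastic errors: residual spread grows with x, as is common
# in, for example, body-size or behavioural data.
y = 2.0 + 0.5 * x + rng.normal(scale=0.3 * x, size=n)

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()                  # classical (homoscedastic) SEs
hc3 = sm.OLS(y, X).fit(cov_type="HC3")    # heteroscedasticity-consistent SEs

print(ols.bse)  # standard errors assuming constant variance
print(hc3.bse)  # HC3 robust standard errors, valid under heteroscedasticity
```

The point estimates are identical in both fits; only the standard errors, and hence the p values and confidence intervals, change.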


Goodeve A.C.,University of Sheffield
Blood | Year: 2013

In this issue of Blood, Johnsen et al have analyzed the frequency of coding sequence variants in the von Willebrand factor gene (VWF) and have identified 7 missense variants independently associated with levels of von Willebrand factor (VWF) or factor VIII (FVIII).1 Several rare missense variants have been previously identified, predominantly in European patients with von Willebrand disease (VWD),2,3 and some have now been reported to be common African American sequence variations. Copyright © 2011 by The American Society of Hematology; all rights reserved. Source


Deppe M.H.,University of Sheffield
Magnetic resonance in medicine : official journal of the Society of Magnetic Resonance in Medicine / Society of Magnetic Resonance in Medicine | Year: 2011

Washout of inert gases is a measure of pulmonary function well known in lung physiology. This work presents a method combining inert gas washout and spatially resolved imaging using hyperpolarized ³He, thus providing complementary information on lung function and physiology. The nuclear magnetic resonance signal of intrapulmonary hyperpolarized ³He is used to track the total amount of gas present within the lungs during multiple-breath washout via tidal breathing. Before the washout phase, 3D ventilation images are acquired using ³He magnetic resonance imaging from the same dose of inhaled gas. The measured washout signal is corrected for T₁ relaxation and radiofrequency depletion, converting it into a quantity proportional to the apparent amount of gas within the lungs. The use of a pneumotachograph for acquisition of breathing volumes during washout, together with lung volumes derived from the magnetic resonance imaging data, permits assessment of the washout curves against physiological model predictions for healthy lungs. The shape of the resulting washout curves obtained from healthy volunteers matches the predictions, demonstrating the utility of the technique for the quantitative assessment of lung function. The proposed method can be readily integrated with a standard breath-hold ³He ventilation imaging sequence, thus providing additional information from a single dose of gas. Copyright © 2010 Wiley-Liss, Inc. Source
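A hedged sketch of the signal correction described above: each RF excitation depletes the nonrenewable 3He polarization by a factor cos(alpha), and T1 relaxation decays it over time, so dividing the measured signal by both loss terms leaves a quantity proportional to the apparent gas remaining. All parameter values below are illustrative assumptions, not those of the study.

```python
import numpy as np

def corrected_gas_signal(signal, t_s, pulses_per_breath, flip_deg, t1_s):
    """Correct a measured hyperpolarized-3He washout signal for T1
    relaxation and RF depletion, yielding a quantity proportional to
    the apparent amount of gas remaining in the lungs.

    signal            : measured signal at each breath (array-like)
    t_s               : acquisition time of each breath, in seconds
    pulses_per_breath : RF excitations applied per breath
    flip_deg          : flip angle in degrees
    t1_s              : longitudinal relaxation time of 3He in the lung
    """
    signal = np.asarray(signal, dtype=float)
    t_s = np.asarray(t_s, dtype=float)
    n = np.arange(1, signal.size + 1) * pulses_per_breath
    rf_loss = np.cos(np.radians(flip_deg)) ** n   # depletion by prior pulses
    t1_loss = np.exp(-t_s / t1_s)                 # polarization decay over time
    return signal / (rf_loss * t1_loss)

# Illustrative values only; actual flip angles, pulse counts and T1
# depend on the sequence and on intrapulmonary oxygen concentration.
sig = [1.00, 0.55, 0.30, 0.17, 0.09]
t = [0, 4, 8, 12, 16]
print(corrected_gas_signal(sig, t, pulses_per_breath=50, flip_deg=1.0, t1_s=20.0))
```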


Freckleton R.P.,University of Sheffield
Behavioral Ecology and Sociobiology | Year: 2011

There has been a great deal of recent discussion of the practice of regression analysis (or, more generally, linear modelling) in behaviour and ecology. In this paper, I wish to highlight two factors that have been under-considered, collinearity and measurement error in predictors, and to consider what happens when both exist at the same time. I examine the consequences for conventional regression analysis (ordinary least squares, OLS) as well as for model averaging methods, typified by information theoretic approaches based around Akaike's information criterion. Collinearity causes variance inflation of estimated slopes in OLS analysis, as is well known. In the presence of collinearity, model averaging reduces this variance for predictors with weak effects, but can also lead to parameter bias. When collinearity is strong or when all predictors have strong effects, model averaging relies heavily on the full model including all predictors, and hence its results and those of OLS are essentially the same. I highlight that it is not safe simply to eliminate collinear variables without due consideration of their likely independent effects, as this can lead to biases. I also consider measurement error, and show that it can lead to extreme biases when predictors are collinear, have strong effects, but differ in their degree of measurement error. I highlight techniques for dealing with and diagnosing these problems. These results reinforce that automated model selection techniques should not be relied on in the analysis of complex multivariable datasets. © 2010 Springer-Verlag. Source
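A short diagnostic sketch for the collinearity problem discussed above: variance inflation factors (VIFs) quantify how much a predictor's estimated-slope variance is inflated by its correlation with the other predictors, via VIF = 1/(1 - R²) from regressing that predictor on the rest. The simulated data below are illustrative assumptions, not an example from the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(2)
n = 500
x1 = rng.normal(size=n)
x2 = 0.9 * x1 + np.sqrt(1 - 0.9 ** 2) * rng.normal(size=n)  # collinear with x1 (r ~ 0.9)
x3 = rng.normal(size=n)                                      # roughly independent predictor

X = sm.add_constant(pd.DataFrame({"x1": x1, "x2": x2, "x3": x3}))
vifs = {col: round(variance_inflation_factor(X.values, i), 2)
        for i, col in enumerate(X.columns) if col != "const"}
print(vifs)  # expect VIF near 1/(1 - 0.9**2), about 5.3, for x1 and x2; near 1 for x3
```

Large VIFs flag the variance inflation described above, but, as the paper argues, they do not by themselves justify dropping a collinear predictor without considering its likely independent effect.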


Gaston K.J.,University of Sheffield
BioScience | Year: 2011

In contrast to their rarity, the commonness of species has historically received surprisingly little explicit attention from ecologists. However, this situation is changing. Here I review the current understanding of the nature of commonness, with particular emphasis on the dynamics and causes of this state, as well as on its ecological and evolutionary implications. Depending on the focal issue, common species can variously have lower, greater, or similar per capita influences compared with rare ones. Importantly, however, these influences almost invariably remain strong because of the high numbers of individuals and local occurrences in taxonomic assemblages contributed by the relatively few species that are common. The importance of these species highlights the significance of deepening concerns over the declines of many common species and the vital need for a balanced approach to maintaining their commonness while also addressing the more familiar conservation issue of preventing the loss of rare species. © 2011 by American Institute of Biological Sciences. All rights reserved. Source


Hunter C.A.,University of Sheffield
Chemical Science | Year: 2013

The bulk properties of liquids provide information on the thermodynamic properties of intermolecular interactions between non-polar molecules. Literature data on noble gases, alkanes and perfluorocarbons have been analysed to investigate the relationship of the magnitude of van der Waals interactions between non-polar molecules with chemical structure and molecular architecture. A molecular model of the liquid state is proposed based on the concept of a zero point void, which has a volume of 5 Å