Southampton, United Kingdom

The University of Southampton is a public university located in Southampton, England. Southampton is a research-intensive university and a founding member of the Russell Group of elite British universities. The origins of the university date back to the founding of the Hartley Institution in 1862, following a legacy to the Corporation of Southampton by Henry Robertson Hartley. In 1902, the Institution developed into the Hartley University College, with degrees awarded by the University of London. On 29 April 1952, the institution was granted a Royal Charter giving the University of Southampton full university status. It is a member of the European University Association and the Association of Commonwealth Universities, and is an accredited institution of the Worldwide Universities Network. Besides being recognised as one of the leading research universities in the UK, Southampton has also achieved consistently high scores for its teaching and learning activities. It additionally has one of the highest proportions of income derived from research activities in Britain, and is regularly ranked in the top 100 universities in the world. As of 2014, Southampton is one of the few universities to achieve a top-20 UK position in the most established national and international rankings. The University of Southampton currently has over 16,000 undergraduate and 7,000 postgraduate students, making it the largest university by higher education students in the South East region. The university has seven teaching campuses. The main campus is located in the Highfield area of Southampton and is supplemented by four other campuses within the city: Avenue Campus, housing the Faculty of Humanities; the National Oceanography Centre, housing courses in Ocean and Earth science; Southampton General Hospital, offering courses in Medicine and Health science; and Boldrewood Campus, an engineering and maritime technology campus that also houses the university's strategic ally Lloyd's Register.
In addition, the university operates a School of Art based in nearby Winchester and an international branch in Malaysia offering courses in Engineering. Each campus is equipped with its own library facilities. The university has over 5,000 places at university-owned halls of residence, spread over two main complexes and several other smaller halls located within a couple of miles of the university. The University of Southampton Students' Union provides support, representation and social activities for the students, ranging from involvement in the Union's four media outlets to any of the 200 affiliated societies and 80 sports. The university owns and operates a sports ground at nearby Wide Lane for use by students and also operates a sports centre on the main campus. Highfield Campus also houses three main arts venues supported by the university and Arts Council England. (Wikipedia)

Zhang X.,University of Southampton
Acta Mechanica Sinica/Lixue Xuebao | Year: 2012

Noise generated by civil transport aircraft during the take-off and approach-to-land phases of operation is an environmental problem. The aircraft noise problem is first reviewed in this article. The review is followed by a description and assessment of a number of sound propagation methods suitable for applications with a background mean flow field pertinent to aircraft noise. Of the three main areas of the noise problem, i.e. generation, propagation, and radiation, propagation provides a vital link between near-field noise generation and far-field radiation. Its accurate assessment ensures the overall validity of a prediction model. Of the various classes of propagation equations, linearised Euler equations are often cast in either the time domain or the frequency domain. The equations are often solved numerically by computational aeroacoustics techniques, but are subject to the onset of Kelvin-Helmholtz (K-H) instability modes which may ruin the solutions. Other forms of linearised equations, e.g. acoustic perturbation equations, have been proposed, with differing degrees of success. © The Chinese Society of Theoretical and Applied Mechanics and Springer-Verlag Berlin Heidelberg 2012. Source
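For orientation, a hedged sketch (standard textbook form, not the specific formulation assessed in the article) of the time-domain linearised Euler equations about a uniform mean flow (ū, 0) with mean density ρ̄ and mean pressure p̄:

```latex
\begin{aligned}
\partial_t \rho' + \bar{u}\,\partial_x \rho' + \bar{\rho}\,(\partial_x u' + \partial_y v') &= 0,\\
\partial_t u' + \bar{u}\,\partial_x u' + \tfrac{1}{\bar{\rho}}\,\partial_x p' &= 0,\\
\partial_t v' + \bar{u}\,\partial_x v' + \tfrac{1}{\bar{\rho}}\,\partial_y p' &= 0,\\
\partial_t p' + \bar{u}\,\partial_x p' + \gamma \bar{p}\,(\partial_x u' + \partial_y v') &= 0.
\end{aligned}
```

For a sheared mean flow ū(y), an additional coupling term v' dū/dy enters the u'-equation; it is such mean-shear terms that support the K-H instability modes mentioned above.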

Garabato A.C.N.,University of Southampton
Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences | Year: 2012

The ocean flows because it is forced by winds, tides and exchanges of heat and freshwater with the overlying atmosphere and cryosphere. To achieve a state where the defining properties of the ocean (such as its energy and momentum) do not continuously increase, some form of dissipation or damping is required to balance the forcing. The ocean circulation is thought to be forced primarily at the large scales characteristic of ocean basins, yet to be damped at much smaller scales down to those of centimetre-sized turbulence. For decades, physical oceanographers have sought to comprehend the fundamentals of this fractal puzzle: how the ocean circulation is driven, how it is damped and how ocean dynamics connects the very different scales of forcing and dissipation. While in the last two decades significant advances have taken place on all three of these fronts, the thrust of progress has been in understanding the driving mechanisms of ocean circulation and the ocean's ensuing dynamical response, with issues surrounding dissipation receiving comparatively little attention. This choice of research priorities stems not only from logistical and technological difficulties in observing and modelling the physical processes responsible for damping the circulation, but also from the untested assumption that the evolution of the ocean's state over time scales of concern to humankind is largely independent of dissipative processes. In this article, I illustrate some of the key advances in our understanding of ocean circulation that have been achieved in the last 20 years and, based on a range of evidence, contend that the field will soon reach a stage in which uncertainties surrounding the arrest of ocean circulation will pose the main challenge to further progress.
It is argued that the role of the circulation in the coupled climate system will stand as a further focal point of major advances in understanding within the next two decades, supported by contextual factors driving physical oceanography towards a more operational enterprise. The basic elements that a strategy for the future must have to foster progress in these two areas are discussed, with an overarching emphasis on the promotion of curiosity-driven fundamental research against opposing external pressures and on the importance of upholding fundamental research as the apex of education in the field. © 2012 The Royal Society. Source

Evans N.,University of Southampton | Kim K.-Y.,Gwangju Institute of Science and Technology
Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics | Year: 2014

We study the D3/probe D5 system with two domain wall hypermultiplets. The conformal symmetry can be broken by a magnetic field, B (or running coupling), which promotes condensation of the fermions on each individual domain wall. Separation of the domain walls promotes condensation of the fermions between one wall and the other. We study the competition between these two effects, showing a first-order phase transition when the separation is ≈ 0.56 λ^(1/4) B^(−1/2). We identify extremal brane configurations which exhibit both condensations simultaneously, but they are not the preferred ground state. © 2013 The Authors. Published by Elsevier B.V. Source

Makrodimopoulos A.,University of Southampton
International Journal for Numerical Methods in Biomedical Engineering | Year: 2010

A major difficulty when applying the kinematic theorem in limit analysis is the derivation of expressions for the dissipation functions and the set of plastically admissible strains. At present, no standard methodology exists. Here, it is shown that they can be readily obtained, provided that the yield restriction can be rewritten as an intersection of cones and that the expression defining the dual cones is available. This is always possible for the case of self-dual cones and some other classes, and covers many of the well-known criteria. Therefore, a difficult obstacle to the use of the kinematic theorem in conjunction with any numerical method can be overcome. The methodology is illustrated by giving the expressions of the dissipation functions for various conic yield restrictions. Special emphasis is given to upper-bound finite element limit analysis. Taking advantage of duality in conic programming, we can obtain the dual problem, where knowledge of the dual cone is not necessary. Therefore, this formulation is feasible for any cone. Finally, it is interesting that the form of the dual problem, for varying yield strength within the finite element, differs from that presented in other papers. © 2009 John Wiley & Sons, Ltd. Source
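As a hedged aside, the two standard convex-analysis objects the abstract relies on (generic definitions, not the paper's specific derivations) are the dual cone of a cone K and the dissipation function as the support function of the yield set B:

```latex
K^{*} = \{\, y \;:\; \langle x, y \rangle \ge 0 \ \ \forall\, x \in K \,\},
\qquad
D(\dot{\varepsilon}) = \sup_{\sigma \in B}\; \sigma : \dot{\varepsilon}.
```

A cone is self-dual when K* = K; familiar examples are the nonnegative orthant, the second-order (Lorentz) cone and the positive-semidefinite cone, which is why so many well-known yield criteria are covered.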

The influence of the use of the generalized Hermite polynomial on the Hermite-based lattice Boltzmann (LB) construction approach, lattice sets, the thermal weights, moments and the equilibrium distribution function (EDF) is addressed. A new moment system is proposed. The theoretical possibility of obtaining a unique high-order Hermite-based single-relaxation-time LB model capable of exactly matching some first hydrodynamic moments thermally (i) on a Cartesian lattice, (ii) with thermal weights in the EDF, (iii) whilst the highest possible hydrodynamic moments that are exactly matched are obtained with the shortest Cartesian lattice sets with some fixed real-valued temperatures, is also analyzed. © 2014, Higher Education Press and Springer-Verlag Berlin Heidelberg. Source
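As a minimal, hedged illustration of the Gauss-Hermite quadrature step that underlies such lattice sets (a generic D1Q3 construction, not the higher-order lattices analysed in the paper; the variable names are ours):

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

# 3-point Gauss-Hermite (probabilists') quadrature: its nodes are the D1Q3
# lattice velocities 0, +/-sqrt(3), and its normalised weights are 2/3, 1/6, 1/6.
nodes, raw_w = hermegauss(3)
w = raw_w / np.sqrt(2.0 * np.pi)  # normalise so the weights sum to one

# Rest-state equilibrium distribution (u = 0, reference temperature T = 1):
rho = 1.0
f_eq = rho * w

assert np.isclose(f_eq.sum(), rho)               # zeroth moment: density
assert np.isclose((f_eq * nodes).sum(), 0.0)     # first moment: momentum
assert np.isclose((f_eq * nodes**2).sum(), rho)  # second moment: rho * T
```

The second-moment check illustrates the sense in which a lattice "exactly matches" a hydrodynamic moment thermally; matching higher-order moments requires more quadrature nodes, which is the trade-off the abstract discusses.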

A free surface Green function method is employed in numerical simulations of the hydrodynamic performance of a submerged spheroid in a fluid of infinite depth. The free surface Green function consists of the Rankine source potential and a singular wave integral. The singularity of the wave integral is removed with the use of the Havelock regular wave integral. The finite boundary element method is applied in the discretisation of the fluid motion problem, so that the panel integral of the Rankine source potential is evaluated by the Hess-Smith formula and the panel integral of the regular wave integral is evaluated in a straightforward way owing to its regularity. The present method's results are in good agreement with earlier numerical results. © 2013 Elsevier B.V. Source

Gale P.A.,University of Southampton
Accounts of Chemical Research | Year: 2011

Cystic fibrosis is the most well-known of a variety of diseases termed channelopathies, in which the regulation of ion transport across cell membranes is so disrupted that the threshold of a pathology is passed. The human toll exacted by these diseases has led a number of research groups, including our own, to create compounds that mediate ion transport across lipid bilayers. In this Account, we discuss three classes of synthetic compounds that were refined to bind and transport anions across lipid bilayer membranes. All of the compounds were originally designed as anion receptors, that is, species that would simply create stable complexes with anions, but were then further developed as transporters. By studying structurally simple systems and varying their properties to change the degree of preorganization, the affinity for anions, or the lipophilicity, we have begun to rationalize why particular anion transport mechanisms (cotransport or antiport processes) occur in particular cases. For example, we have studied the chloride transport properties of receptors based on the closely related structures of isophthalamide and pyridine-2,6-dicarboxamide: the central ring in each case was augmented with pendant methylimidazole groups designed to cotransport H+ and Cl-. We observed that the more preorganized pyridine-based receptor was the more efficient transporter, a finding replicated with a series of isophthalamides in which one contained hydroxyl groups designed to preorganize the receptor. This latter class of compound, together with the natural product prodigiosin, can transport bicarbonate (as part of a chloride/bicarbonate antiport process) across lipid bilayer membranes. We have also studied the membrane transport properties of calix[4]pyrroles. Although the parent meso-octamethylcalix[4]pyrrole functions solely as a Cs+/Cl- cotransporter, other compounds with increased anion affinities can function through an antiport process.
One example is octafluoro-meso-octamethylcalix[4]pyrrole; with its electron-withdrawing substituents, it can operate through a chloride/bicarbonate antiport process. Moreover, calix[4]pyrroles with additional hydrogen bond donors can operate through a chloride/nitrate antiport process. Thus, increasing the affinity of the receptor in these cases allows the compound to transport an anion in the absence of a cation. Finally, we have studied the transport properties of simple thioureas and shown that these compounds are highly potent chloride/bicarbonate antiport agents that function at low concentrations. In contrast, the urea analogues are inactive. The higher hydrophobicity (reflected in higher values for the logarithm of the water-octanol partition constant, or log P) and lower polar surface areas of the thiourea compounds compared to their urea analogues may provide a clue to the high potency of these compounds. This observation might serve as a basis for designing future small-molecule transporters. © 2011 American Chemical Society. Source

Baldwin D.S.,University of Southampton | Loft H.,Lundbeck | Dragheim M.,Lundbeck
European Neuropsychopharmacology | Year: 2012

The efficacy, safety, and tolerability of Lu AA21004 versus placebo, using duloxetine as active reference, in patients with DSM-IV-TR diagnosed major depressive disorder (MDD) were evaluated in this 8-week, multi-site study. Patients (n = 766) had a baseline Montgomery-Åsberg Depression Rating Scale (MADRS) total score ≥ 26 and were randomly assigned (1:1:1:1:1) to 2.5, 5 or 10 mg Lu AA21004, placebo, or 60 mg duloxetine. The 5 mg and 10 mg doses of Lu AA21004 were tested separately versus placebo at p ≤ 0.025 in a pre-specified order. In the pre-defined primary efficacy analysis [mean change from baseline in MADRS total score at Week 8, full analysis set, ANCOVA, last observation carried forward (LOCF)], the differences from placebo (n = 145) of -1.7 (Lu AA21004 5 mg, n = 155) and -1.5 points (Lu AA21004 10 mg, n = 151) were not statistically significant; nor were those for Lu AA21004 2.5 mg (-1.4 points, n = 155) or duloxetine (-2.0 points, n = 149). Mixed-model repeated-measures (MMRM) analyses of the primary endpoint and most secondary endpoints were supportive of likely efficacy for Lu AA21004 5 mg and 10 mg and duloxetine. Treatment-emergent adverse events led to the withdrawal of 72 patients: 8% (placebo), 12% (duloxetine), and 6%, 11% and 9% in the Lu AA21004 groups (2.5 mg, 5 mg and 10 mg, respectively). The most common adverse events were nausea, headache, dizziness, and dry mouth. No clinically relevant changes were seen in vital signs, weight, ECG, or laboratory results. In summary, none of the active treatment groups, including duloxetine, separated from placebo in the primary analysis in this 'failed' study. Findings on secondary outcome measures, using MMRM instead of LOCF, were supportive of likely efficacy for Lu AA21004 5 mg and 10 mg and duloxetine. Lu AA21004 (2.5, 5 and 10 mg) was well tolerated. © 2011 Elsevier B.V. Source
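As a hedged sketch of the LOCF imputation named in the primary analysis (a generic implementation with hypothetical scores, not the study's statistical code):

```python
import numpy as np

def locf(visits):
    """Last observation carried forward: fill NaNs (missed visits)
    with the most recent observed value for each patient (row)."""
    filled = np.asarray(visits, dtype=float).copy()
    for row in filled:            # each row is a view, so edits persist
        last = np.nan
        for j, v in enumerate(row):
            if np.isnan(v):
                row[j] = last     # carry the previous observation forward
            else:
                last = v
    return filled

# Hypothetical MADRS totals for two patients over four visits; the second
# patient drops out after visit 2, so visits 3-4 carry that score forward.
scores = [[30.0, 25.0, 20.0, 16.0],
          [31.0, 28.0, np.nan, np.nan]]
print(locf(scores)[1])  # -> [31. 28. 28. 28.]
```

LOCF is known to bias estimates when dropout is informative, which is one reason an MMRM reanalysis such as the one described above can reach a different conclusion from the LOCF-based primary analysis.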

Chen G.,University of Southampton
Journal of Physics D: Applied Physics | Year: 2010

Surface potential measurement provides a useful tool to gauge the electrical properties of materials. It has been observed that the potential of a sample with an initially high surface potential decays faster than that of a sample with an initially lower surface potential, known as the cross-over phenomenon. The phenomenon was first observed a few decades ago and various theories and models have been proposed. A common feature of the existing models is the assumption of single charge carrier injection from a corona-charged surface. With our recent space charge measurement results on corona-charged samples, double injection from both electrodes has been verified. Based on this new fact, a new model based on bipolar charge injection is proposed, and initial numerical simulation reveals that the surface potential cross-over phenomenon can occur under bipolar charge injection. © 2010 IOP Publishing Ltd. Source

Clubley S.K.,University of Southampton
Engineering Structures | Year: 2014

This paper investigates the influence of long-duration blast loads on the structural response of aluminium cylindrical shell structures. Full-scale coupled non-linear dynamics are examined experimentally at one of the world's most powerful air blast testing facilities. Evaluating structural response to blast loads of this magnitude is exceptionally difficult using only computational fluid dynamics; it is typically not achievable without incurring unmanageable solution domains. Clearing, diffraction and exhaust of a long-duration blast wave across any comparatively small structure impose constraints leading to the use of approximated drag coefficients, designed primarily to expedite the calculation of net translational forces. In this research, detailed pressure histories measured experimentally on the surface of the cylindrical shell are used to accurately configure a computational analysis, dispensing with the requirement to use approximated drag forces. When further combined with accurate material test data, fibre-optic-controlled strain gauge instrumentation and high-speed video photography, a full comparative model was possible. This paper shows that without exact knowledge of long-duration flow-field effects a priori, it is very difficult to reliably determine the mode of structural response and degree of blast resistance. Preliminary modelling predicted global sway and localised plate buckling; however, subsequent experimental testing showed a crushing failure of the shell before any translational movement occurred. Results in this paper will be of direct interest to both practitioners and researchers considering the dynamic response of cylindrical shell structures subject to high-power explosive blasts from sources such as hydrocarbon vapour cloud ignition. © 2013 Elsevier Ltd. Source

Gale P.A.,University of Southampton | Gale P.A.,King Abdulaziz University | Gale P.A.,University of Canterbury | Caltagirone C.,University of Cagliari
Chemical Society Reviews | Year: 2015

This Tutorial Review provides a short survey of anion sensing by small molecule anion receptors, molecular ensembles and chemodosimeters. The review highlights the many different mechanisms and approaches employed by supramolecular chemists for anion sensing and the wide structural variety present in these systems. This journal is © The Royal Society of Chemistry. Source

Sandberg R.D.,University of Southampton
Computers and Fluids | Year: 2011

A novel axis treatment using parity conditions is presented for flow equations in cylindrical coordinates that are represented in azimuthal Fourier modes. The correct parity states of scalars and the velocity vector are derived such that symmetry conditions for each Fourier mode of the respective variable can be determined. These symmetries are then used to construct finite-difference and filter stencils at and near the axis, and an interpolation scheme for the computation of terms premultiplied by 1/r. A grid convergence study demonstrates that the axis treatment retains the formal accuracy of the spatial discretization scheme employed. Two further test cases are considered for evaluation of the axis treatment: the propagation of an acoustic pulse and direct numerical simulation of a fully turbulent supersonic axisymmetric wake. The results demonstrate the applicability of the axis treatment for non-axisymmetric flows. © 2011 Elsevier Ltd. Source
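The parity argument can be summarised as follows (a hedged, generic statement of the symmetry, under the common convention that crossing the axis maps θ to θ + π):

```latex
f(-r, \theta, z) = f(r, \theta + \pi, z)
\;\;\Longrightarrow\;\;
\hat{f}_{m}(-r) = (-1)^{m}\, \hat{f}_{m}(r)
```

for each azimuthal Fourier mode m of a scalar f, while the radial and azimuthal velocity components flip sign across the axis and therefore acquire the opposite parity, \hat{u}_{r,m}(-r) = (-1)^{m+1}\hat{u}_{r,m}(r). These parities determine which stencil coefficients survive at and near r = 0.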

Pretorius M.L.,European Southern Observatory | Knigge C.,University of Southampton
Monthly Notices of the Royal Astronomical Society | Year: 2012

We combine two complete, X-ray flux-limited surveys, the ROSAT Bright Survey (RBS) and the ROSAT North Ecliptic Pole (NEP) survey, to measure the space density (ρ) and X-ray luminosity function (Φ) of non-magnetic cataclysmic variables (CVs). The combined survey has a flux limit of F_X ≳ 1.1 × 10^-12 erg cm^-2 s^-1 over most of its solid angle of just over, but is as deep as ≃10^-14 erg cm^-2 s^-1 over a small area. The CV sample that we construct from these two surveys contains 20 non-magnetic systems. We carefully include all sources of statistical error in calculating ρ and Φ by using Monte Carlo simulations; the most important uncertainty proves to be the often large errors in distance estimates. If we assume that the 20 CVs in the combined RBS and NEP survey sample are representative of the intrinsic population, the space density of non-magnetic CVs is. We discuss the difficulty in measuring Φ in some detail: in order to account for biases in the measurement, we have to adopt a functional form for Φ. Assuming that the X-ray luminosity function of non-magnetic CVs is a truncated power law, we constrain the power-law index to -0.80 ± 0.05. It seems likely that the two surveys have failed to detect a large, faint population of short-period CVs, and that the true space density may well be a factor of 2 or 3 larger than what we have measured; this is possible even if we only allow for undetected CVs to have X-ray luminosities in the narrow range 28.7 < log(L_X/erg s^-1) < 29.7. However, a ρ as high as 2 × 10^-4 pc^-3 would require that the majority of CVs have X-ray luminosities below L_X = 4 × 10^28 erg s^-1 in the 0.5-2.0 keV band. © 2011 The Authors, Monthly Notices of the Royal Astronomical Society © 2011 RAS. Source
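As a hedged sketch of the Monte Carlo error propagation described above (a toy 1/V_max estimator with entirely hypothetical fluxes, distances and a full-sky solid angle; the paper itself adopts a functional form for Φ rather than this simple estimator):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sample: measured fluxes (erg cm^-2 s^-1), best-fit distances
# (pc) with ~30% uncertainties, and a representative survey flux limit.
flux  = np.array([5.0e-12, 3.2e-12, 2.0e-12, 1.5e-12])
dist  = np.array([150.0, 210.0, 320.0, 480.0])
derr  = 0.3 * dist
f_lim = 1.1e-12
omega = 4 * np.pi            # toy solid angle: full sky (real surveys cover less)

rho_trials = np.empty(10_000)
for i in range(rho_trials.size):
    d = np.abs(rng.normal(dist, derr))    # resample distances within their errors
    d_max = d * np.sqrt(flux / f_lim)     # distance at which flux would hit f_lim
    v_max = (omega / 3.0) * d_max**3      # maximum survey volume per system, pc^3
    rho_trials[i] = np.sum(1.0 / v_max)   # 1/V_max space-density estimator

lo, med, hi = np.percentile(rho_trials, [16, 50, 84])
print(f"rho = {med:.2e} (+{hi - med:.2e} / -{med - lo:.2e}) pc^-3")
```

The 16th-84th percentile spread of rho_trials is dominated by the 30% distance errors entering V_max through d_max cubed, mirroring the finding above that distance uncertainties are the most important source of error.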

Zheludev N.I.,University of Southampton
Optics and Photonics News | Year: 2011

Advancements in metamaterials have enabled dynamic, quantum-effect-enabled systems offering exciting applications. The metamaterial paradigm promises groundbreaking new functionalities such as invisibility and imaging with unlimited resolution. When high-speed switching is not the prime objective, metamaterials can be reliably and reversibly controlled by microelectromechanical (MEMS) actuators that reposition parts of the metamolecules. Metamaterials in which metal nanostructures are hybridized with nonlinear and switchable dielectrics or semiconductors provide a way to achieve changes faster than can be achieved by mechanical repositioning of parts. Sensor applications represent another rapidly growing area of metamaterials research. Plasmonic metamaterial nanostructures can also be used to improve light-harvesting solutions, permitting a considerable reduction in physical thickness and improved efficiency in solar photovoltaic absorber layers. Source

Sonuga-Barke E.J.,University of Southampton
Journal of child psychology and psychiatry, and allied disciplines | Year: 2010

Early intervention approaches have rarely been implemented for the prevention of attention deficit/hyperactivity disorder (ADHD). In this paper we explore whether such an approach may represent an important new direction for therapeutic innovation. We propose that such an approach is most likely to be of value when grounded in and informed by developmental models of the dynamic, complex and heterogeneous nature of the condition. First, we set out a rationale for early intervention grounded in the science of ADHD viewed through developmental models. Second, we re-examine the concept of disorder-onset from the perspective of developmental trajectories and phenotypes. Third, we examine potential causal pathways to ADHD with regard to originating risk, pathophysiological mediators, environmental moderators and developmental continuities. Finally, we explore the potential value of strategies for identifying young children at risk for ADHD, and implementing interventions in ways that can target these underlying pathogenic processes. The utility of such an approach represents an important area for future research but still requires 'proof of concept'. Therefore prior to widespread clinical implementation, far greater knowledge is required of (i) developmental pathways into ADHD, (ii) the value of identifying neuropsychological mediators of these pathways, and (iii) the extent to which targeting mediating mechanisms will improve treatment outcomes for children with ADHD. Source

King S.F.,University of Southampton
Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics | Year: 2013

We discuss a minimal predictive see-saw model in which the right-handed neutrino mainly responsible for the atmospheric neutrino mass has couplings to (νe, νμ, ντ) proportional to (0, 1, 1) and the right-handed neutrino mainly responsible for the solar neutrino mass has couplings to (νe, νμ, ντ) proportional to (1, 4, 2), with a relative phase η=-2π/5. We show how these patterns of couplings could arise from an A4 family symmetry model of leptons, together with Z3 and Z5 symmetries which fix η=-2π/5 up to a discrete phase choice. The PMNS matrix is then completely determined by one remaining parameter which is used to fix the neutrino mass ratio m2/m3. The model predicts the lepton mixing angles θ12≈34°, θ23≈41°, θ13≈9.5°, which exactly coincide with the current best fit values for a normal neutrino mass hierarchy, together with the distinctive prediction for the CP violating oscillation phase δ≈106°. © 2013 Elsevier B.V. Source
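As a hedged sketch of the structure being described (the standard type-I see-saw with two dominant right-handed neutrinos; the overall scales m_a and m_b are placeholder parameters, and the matrices are simply the outer products of the couplings quoted above):

```latex
m_{\nu} = -\, m_{D}\, M_{R}^{-1}\, m_{D}^{T}
\;\approx\;
m_{a}\begin{pmatrix}0&0&0\\ 0&1&1\\ 0&1&1\end{pmatrix}
+ m_{b}\, e^{i\eta}\begin{pmatrix}1&4&2\\ 4&16&8\\ 2&8&4\end{pmatrix},
\qquad \eta = -\tfrac{2\pi}{5}.
```

The single remaining free parameter is then the ratio m_b/m_a, consistent with the abstract's statement that one parameter fixes the mass ratio m2/m3.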

Siddle H.V.,University of Southampton | Kaufman J.,University of Cambridge
Immunology | Year: 2015

Naturally transmissible tumours can emerge when a tumour cell gains the ability to pass as an infectious allograft between individuals. The ability of these tumours to colonize a new host and to cross histocompatibility barriers contradicts our understanding of the vertebrate immune response to allografts. Two naturally occurring contagious cancers are currently active in the animal kingdom: canine transmissible venereal tumour (CTVT), which spreads among dogs, and devil facial tumour disease (DFTD), among Tasmanian devils. CTVT is generally not fatal, as a tumour-specific host immune response controls or clears the tumours after transmission and a period of growth. In contrast, the growth of DFTD tumours is not controlled by the Tasmanian devil's immune system and the disease causes close to 100% mortality, severely impacting the devil population. To avoid the host immune response, both DFTD and CTVT use a variety of immune escape strategies that have similarities to many single-organism tumours, including MHC loss and the expression of immunosuppressive cytokines. However, both tumours appear to have a complex interaction with the immune system of their respective host, which has evolved over the relatively long life of these tumours. The Tasmanian devil is struggling to survive with the burden of this disease, and it is only with an understanding of how DFTD passes between individuals that a vaccine might be developed. Further, an understanding of how these tumours achieve natural transmissibility should provide insights into general mechanisms of immune escape that emerge during tumour evolution. © 2014 The Authors. Immunology published by John Wiley & Sons Ltd. Source

Juffmann T.,University of Vienna | Juffmann T.,Stanford University | Ulbricht H.,University of Southampton | Arndt M.,University of Vienna
Reports on Progress in Physics | Year: 2013

We describe the state of the art in preparing, manipulating and detecting coherent molecular matter. We focus on experimental methods for handling the quantum motion of compound systems from diatomic molecules to clusters or biomolecules. Molecular quantum optics offers many challenges and innovative prospects: already the combination of two atoms into one molecule takes several well-established methods from atomic physics, such as for instance laser cooling, to their limits. The enormous internal complexity that arises when hundreds or thousands of atoms are bound in a single organic molecule, cluster or nanocrystal provides a richness that can only be tackled by combining methods from atomic physics, chemistry, cluster physics, nanotechnology and the life sciences. We review various molecular beam sources and their suitability for matter-wave experiments. We discuss numerous molecular detection schemes and give an overview over diffraction and interference experiments that have already been performed with molecules or clusters. Applications of de Broglie studies with composite systems range from fundamental tests of physics up to quantum-enhanced metrology in physical chemistry, biophysics and the surface sciences. Nanoparticle quantum optics is a growing field, which will intrigue researchers still for many years to come. This review can, therefore, only be a snapshot of a very dynamical process. © 2013 IOP Publishing Ltd. Source

Meyer C.,CNRS Physics of Complex Systems | Luckhurst G.R.,University of Southampton | Dozov I.,CNRS Physics of Complex Systems
Physical Review Letters | Year: 2013

We extend the twist-bend nematic (NTB) model to describe the electro-optics of this novel phase. We predict an electroclinic effect (ECE) subject to a dc electric field E applied perpendicular to the helix axis or wave vector q, with rotation of the NTB optic axis around E. This linear effect, with its flexoelectric origin, is a close analog to the electro-optic effects observed for chiral liquid crystals. However, in nematics composed of achiral molecules having a bent shape, it is the electro-optic signature of the NTB phase. We test our model experimentally in the low-temperature nematic phase of the odd liquid crystal dimer, CB7CB, with its molecules having, on average, a bent shape. The ECE measurements confirm the previously proposed twist-bend nematic structure of this phase, with its broken chiral symmetry, extremely short (<10 nm) doubly degenerate pitch and ultrafast, submicrosecond response times. © 2013 American Physical Society. Source

Di Bari P.,University of Southampton | Marzola L.,University of Tartu
Nuclear Physics B | Year: 2013

We show that, within SO(10)-inspired leptogenesis, there exists a solution, with definite constraints on neutrino parameters, able simultaneously to reproduce the observed baryon asymmetry and to satisfy the conditions for the independence of the final asymmetry from the initial conditions (strong thermal leptogenesis). We find that the wash-out of a pre-existing asymmetry as large as O(0.1) requires: (i) reactor mixing angle 2° ≲ θ13 ≲ 20°, in agreement with the experimental result θ13 = 8°-10°; (ii) atmospheric mixing angle 16° ≲ θ23 ≲ 41°, compatible only with the current lowest experimentally allowed values; (iii) Dirac phase in the range -π/2 ≲ δ ≲ π/5, with the bulk of the solutions around δ ≃ -π/5 and such that sign(J_CP) = -sign(η_B); (iv) neutrino masses m_i normally ordered; (v) lightest neutrino mass in the range m_1 ≃ 15-25 meV, corresponding to ∑_i m_i ≃ 85-105 meV; (vi) neutrinoless double beta decay (0νββ) effective neutrino mass m_ee ≃ 0.8 m_1. All together, this set of predictive constraints characterises the solution quite distinctively, representing a difficult-to-forge, fully testable signature. In particular, the predictions m_ee ≃ 0.8 m_1 ≃ 15 meV can be tested by cosmological observations and (ultimately) by 0νββ experiments. We also discuss different interesting aspects of the solution, such as theoretical uncertainties, stability under variation of the involved parameters, and the forms of the orthogonal and RH neutrino mixing matrices. © 2013 Elsevier B.V. Source
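For reference, the standard definition behind prediction (vi), in the usual PDG phase conventions (not a result derived in the paper itself):

```latex
m_{ee} \;=\; \Bigl|\sum_{i} U_{ei}^{2}\, m_{i}\Bigr|
\;=\; \bigl|\, c_{12}^{2} c_{13}^{2}\, m_{1}
+ s_{12}^{2} c_{13}^{2}\, m_{2}\, e^{i\alpha_{21}}
+ s_{13}^{2}\, m_{3}\, e^{i(\alpha_{31} - 2\delta)} \,\bigr|,
```

so the quoted relation m_ee ≃ 0.8 m_1 corresponds to a particular combination of the Majorana phases for the normally ordered masses selected by the solution.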

Elkington P.T.,University of Southampton | Friedland J.S.,Imperial College London
The Lancet Infectious Diseases | Year: 2015

Tuberculosis remains a global pandemic. The current depiction of the Mycobacterium tuberculosis life cycle proposes that airborne bacilli are inhaled and phagocytosed by alveolar macrophages, resulting in the formation of a granuloma that ruptures into the airways to reinitiate the infectious cycle. However, this widely proposed model overlooks the fact, established 100 years ago, that the initial site of M tuberculosis implantation is in the lower zones of the lungs, whereas infectious cavitary pulmonary disease develops at the lung apices. The immunological events at these two pulmonary locations are different: cavitation occurs only in the apices, not in the bases. Yet the current conceptual model of tuberculosis renders the immunology of these two temporally and spatially separated events identical. One key consequence is that prevention of primary childhood tuberculosis at the lung bases is regarded as adequate immunological protection, but extensive evidence shows that greater immunity could predispose to immunopathology and transmission at the lung apex. A much greater understanding of time and place in the immunopathological mechanisms underlying human tuberculosis is needed before further pre-exposure vaccination trials can be done. © 2015 Elsevier Ltd. Source

Byrne J.P.,Trinity College Dublin | Kitchen J.A.,University of Southampton | Gunnlaugsson T.,Trinity College Dublin
Chemical Society Reviews | Year: 2014

Ligands containing the btp [2,6-bis(1,2,3-triazol-4-yl)pyridine] motif have appeared with increasing regularity over the last decade. This class of ligands, formed in a one-pot 'click' reaction, has been studied for various purposes, such as generating d- and f-metal coordination complexes and supramolecular self-assemblies, and forming dendritic and polymeric networks. This review article introduces btp as a novel and highly versatile terdentate building block with huge potential in inorganic supramolecular chemistry. We focus on the coordination chemistry of btp ligands with a wide range of metals, and how it compares with that of other classical pyridyl and polypyridyl ligands, and then present a selection of applications including use in catalysis, enzyme inhibition, photochemistry, molecular logic and materials, e.g. polymers, dendrimers and gels. The photovoltaic potential of triazolium derivatives of btp, and its interactions with anions, will also be discussed. This journal is © the Partner Organisations 2014. Source

Shetty P.,University of Southampton
Indian Journal of Pediatrics | Year: 2013

Advances in agriculture and food systems, consequent increases in food availability, and a shift in dietary consumption patterns with the economic development and urbanization of developing societies lead to adverse health outcomes. The structure of the habitual diet is altered and is characterized by increasing consumption of fats (largely saturated fats from animal sources) and sugars. Lifestyle changes that occur concurrently in an increasingly urbanized environment contribute to a reduction in physical activity levels, which promotes overweight and obesity. The essence of these changes is captured by the term 'nutrition transition', which accompanies the demographic and epidemiologic transition in these countries with economic development. The existing burden of undernutrition in developing countries is thus compounded by the adverse effects of the nutrition transition, notably the increasing prevalence of obesity and non-communicable diseases. This double burden of malnutrition adds to the health and economic burden of developing societies. © 2013 Dr. K C Chaudhuri Foundation. Source

Diagnostic formulations attempt to impose order on the messy reality of psychopathological phenomena. By doing this, so their advocates argue, they provide both the platform necessary for systematic scientific study and, crucially, the bridge of shared terms and concepts vital if psychiatric science is to be truly translational; where scientific endeavour is guided by clinical priorities and, in turn, scientific findings innovate clinical practice. The diagnostic schemes we currently work with, taking DSM-5 as the obvious case, are the product of an interesting historical process of ongoing revision - at the same time pragmatic and scientific. On the one hand, it is a process both anchored firmly in historical precedent and constrained by the practical needs of clinicians, patients and health insurance companies. On the other hand, it is a process open to new empirical data about how best to cluster symptomatic expressions and differentiate clinical presentations - so that over historical time diagnostic categories achieve an increasingly accurate mapping of the taxonomy (i.e., underlying structure), and related pathophysiology, of psychiatric phenomena. Resolving the inevitable tensions that arise when trying to reconcile these pragmatic (economic and professional) and scientific priorities has proved to be both challenging and contentious. The study of heterogeneity, as exemplified by the articles highlighted in this editorial, indicates a range of different approaches that can be used effectively to refine psychiatric taxonomies by incorporating developmental and pathophysiological data to help identify new putative subtypes of potential therapeutic significance. © 2015 Association for Child and Adolescent Mental Health. Source

Non-alcoholic fatty liver disease is now recognized as the hepatic component of the metabolic syndrome. Non-alcoholic fatty liver disease is a spectrum of fat-associated liver conditions that can result in end-stage liver disease and the need for liver transplantation. Simple steatosis, or fatty liver, occurs early in non-alcoholic fatty liver disease and may progress to non-alcoholic steatohepatitis, fibrosis and cirrhosis, with increased risk of hepatocellular carcinoma. Prevalence estimates for non-alcoholic fatty liver disease range from 17 to 33% in the general population, and it has been estimated that non-alcoholic fatty liver disease exists in up to 70% of people with Type 2 diabetes. Non-alcoholic fatty liver disease increases the risk of Type 2 diabetes and cardiovascular disease. In people with Type 2 diabetes, non-alcoholic fatty liver disease is the most frequent cause (∼80%) of fatty liver diagnosed by ultrasound. As non-alcoholic fatty liver disease is strongly associated with insulin resistance, the presence of non-alcoholic fatty liver disease with diabetes often contributes to poor glycaemic control. Consequently, strategies that decrease liver fat and improve whole-body insulin sensitivity may contribute both to prevention of Type 2 diabetes and to better glycaemic control in people who have already developed diabetes. This review summarizes the Dorothy Hodgkin lecture given by the author at the 2012 Diabetes UK annual scientific conference, proposing that fatty acid fluxes through the liver are crucial for the pathogenesis of non-alcoholic fatty liver disease and for increasing insulin resistance. © 2012 Diabetes UK. Source

Calder P.C.,University of Southampton
British Journal of Clinical Pharmacology | Year: 2013

Eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA) are n-3 fatty acids found in oily fish and fish oil supplements. These fatty acids are able to partly inhibit a number of aspects of inflammation, including leucocyte chemotaxis, adhesion molecule expression and leucocyte-endothelial adhesive interactions, production of eicosanoids such as prostaglandins and leukotrienes from the n-6 fatty acid arachidonic acid, production of inflammatory cytokines, and T cell reactivity. In parallel, EPA gives rise to eicosanoids that often have lower biological potency than those produced from arachidonic acid, and EPA and DHA give rise to anti-inflammatory and inflammation-resolving resolvins and protectins. Mechanisms underlying the anti-inflammatory actions of n-3 fatty acids include altered cell membrane phospholipid fatty acid composition, disruption of lipid rafts, inhibition of activation of the pro-inflammatory transcription factor nuclear factor kappa B (so reducing expression of inflammatory genes), activation of the anti-inflammatory transcription factor NR1C3 (i.e. peroxisome proliferator activated receptor γ) and binding to the G protein coupled receptor GPR120. These mechanisms are interlinked. In adult humans, an EPA plus DHA intake greater than 2 g day⁻¹ seems to be required to elicit anti-inflammatory actions, but few dose-finding studies have been performed. Animal models demonstrate benefit from n-3 fatty acids in rheumatoid arthritis (RA), inflammatory bowel disease (IBD) and asthma. Clinical trials of fish oil in patients with RA demonstrate benefit supported by meta-analyses of the data. Clinical trials of fish oil in patients with IBD and asthma are inconsistent, with no overall clear evidence of efficacy. © 2012 The British Pharmacological Society. Source

Abusara M.A.,University of Exeter | Guerrero J.M.,University of Aalborg | Sharkh S.M.,University of Southampton
IEEE Transactions on Industrial Electronics | Year: 2014

Line-interactive uninterruptible power supply (UPS) systems are good candidates for providing energy storage within a microgrid to help improve its reliability, economy, and efficiency. In grid-connected mode, power can be imported from the grid by the UPS to charge its battery. Power can also be exported when required, e.g., when the tariffs are advantageous. In stand-alone mode, the UPS supplies local distributed loads in parallel with other sources. In this paper, a line-interactive UPS and its control system are presented and discussed. Power flow is controlled using the frequency and voltage drooping technique to ensure seamless transfer between grid-connected and stand-alone parallel modes of operation. The drooping coefficients are chosen to limit the energy imported by the UPS when reconnecting to the grid and to give good transient response. Experimental results of a microgrid consisting of two 60-kW line-interactive UPS systems are provided to validate the design. © 1982-2012 IEEE. Source
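The frequency and voltage drooping technique mentioned in the abstract above can be sketched in a few lines. This is a generic textbook sketch, not the controller from the paper: the nominal values (50 Hz, 230 V) and the droop coefficients are illustrative assumptions, chosen so that exporting 60 kW droops the frequency by 0.3 Hz.

```python
def droop_setpoints(p_out, q_out, p_set=0.0, q_set=0.0,
                    f_nom=50.0, v_nom=230.0,
                    kp=5e-6,    # Hz per W of real-power export (assumed value)
                    kq=1e-4):   # V per var of reactive-power export (assumed value)
    """Conventional P-f / Q-V droop: the inverter lowers its frequency
    and voltage setpoints as it exports more real and reactive power,
    so parallel units share load without a communication link."""
    f = f_nom - kp * (p_out - p_set)
    v = v_nom - kq * (q_out - q_set)
    return f, v

# Exporting 60 kW / 10 kvar droops the setpoints to 49.7 Hz / 229 V.
```

Keeping kp small bounds the power transient for a given frequency offset, which mirrors the design consideration the abstract describes for limiting energy exchange when reconnecting to the grid.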

Like other forms of infant feeding, breastfeeding is a fundamental act of care. Yet despite being the recommended way of feeding babies, breastfeeding is not always easy to do. In addition to lack of support, bio-physical problems and the need to return to work, discomfort with breastfeeding in public is a factor shaping infant feeding choice (and the decision to stop breastfeeding specifically). With increased awareness of breast milk's health benefits in recent years, there has been a rise in efforts to make breastfeeding in public more commonplace and socially acceptable (including through lactation advocacy or "lactivism"). This paper considers breastfeeding in public and lactation advocacy in the UK through interviews with lactation activists, non-activist breastfeeding mothers, and participant-observation at two breastfeeding picnics held in 2009. Building on existing scholarship in Geography, I suggest that lactivism can be understood as an effort to expand the boundaries of where care-work is allowed to take place: thus constituting a form of "care-work activism". © 2010 Elsevier Ltd. Source

Heller V.,University of Southampton
Journal of Hydraulic Research | Year: 2011

Scale effects arise from force ratios that are not identical between a model and its real-world prototype, and they result in deviations between the up-scaled model observations and the prototype observations. This review article considers mechanical, Froude and Reynolds model-prototype similarities, describes scale effects for typical hydraulic flow phenomena and discusses how scale effects are avoided, compensated or corrected. Four approaches are addressed to obtain model-prototype similarity, to quantify scale effects and to define limiting criteria under which they can be neglected: inspectional analysis, dimensional analysis, calibration and scale series, which are applied to landslide-generated impulse waves. Tables include both limiting criteria to avoid significant scale effects and typical scales of physical hydraulic engineering models for a wide variety of hydraulic flow phenomena. The article further shows why it is challenging to model sediment transport and distensible structures in a physical hydraulic model without significant scale effects. Possible future research directions are finally suggested. © 2011 International Association for Hydro-Environment Engineering and Research. Source
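To make the Froude similarity mentioned above concrete: requiring equal model and prototype Froude numbers v/√(gL) fixes the exponent of the geometric scale factor λ for every derived quantity. The function below tabulates the standard Froude exponents (textbook values, not data from this article):

```python
def froude_scale_factors(lam):
    """Multiply a model measurement by the returned factor to get the
    prototype value under Froude similarity, where lam = L_prototype /
    L_model is the geometric scale factor."""
    return {
        "length": lam,            # by definition of the geometric scale
        "velocity": lam ** 0.5,   # from v ~ (g L)^0.5 with g fixed
        "time": lam ** 0.5,       # t = L / v
        "discharge": lam ** 2.5,  # Q = v * L^2
        "force": lam ** 3.0,      # F ~ rho g L^3 with rho, g fixed
    }

# Example: in a 1:25 model, a measured wave speed of 1 m/s corresponds
# to 5 m/s at prototype scale, and discharges scale up by 25**2.5.
```

Because Reynolds similarity cannot be satisfied simultaneously in the same fluid, viscous forces are relatively too large in small Froude models, which is exactly the origin of the scale effects the article reviews.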

Freeman C.T.,University of Southampton
Control Engineering Practice | Year: 2014

Electrode arrays are gaining increasing popularity within the rehabilitation and assistive technology communities, due to their potential to deliver selective electrical stimulation to underlying muscles. This paper develops the first model-based control strategy in this area, unlocking the potential for faster, more accurate postural control. Because of the time-varying nonlinear musculoskeletal dynamics, the approach fuses model identification with iterative learning control (ILC), and employs a restricted input subspace comprising only those inputs deemed critical to task completion. The subspace selection embeds past experience and/or structural knowledge, with a dimension chosen to effect a trade-off between the test time and overall accuracy. Experimental results using a 40-element surface electrode array confirm accurate tracking of three reference hand postures. © 2013 Elsevier Ltd. Source
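The subspace-restricted ILC idea above can be sketched as follows. This is a minimal gradient-type sketch under assumed simplifications (a fixed proportional learning gain and an orthonormal basis B for the restricted subspace); the paper's actual update is built on an identified musculoskeletal model.

```python
import numpy as np

def ilc_update(u, e, B, gain=0.5):
    """One ILC trial update restricted to an input subspace.

    u    : current stimulation input vector (length n)
    e    : tracking error recorded on the last trial (length n)
    B    : n x m matrix with orthonormal columns spanning the restricted
           subspace of 'task-critical' inputs (m << n, an assumption here)
    gain : proportional learning gain (assumed constant)

    The raw learning update gain * e is projected onto span(B), so only
    the m retained input directions are ever adjusted between trials.
    """
    delta = gain * e
    return u + B @ (B.T @ delta)  # orthogonal projection onto span(B)
```

Shrinking m shortens each test cycle at the cost of asymptotic accuracy, which is the trade-off between test time and overall accuracy that the abstract refers to.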

Sainsbury R.,University of Southampton
Cancer Treatment Reviews | Year: 2013

The development of endocrine therapies has transformed the treatment of patients with breast cancer. The shift from ablative surgery and aggressive chemotherapies to more targeted, better-tolerated therapy has improved both mortality and quality of life for patients with hormone-responsive disease. During the 1970s, the selective oestrogen-receptor modulator tamoxifen emerged as a new treatment for women with advanced breast cancer. The subsequent development of numerous and diverse selective endocrine therapies such as luteinising hormone-releasing hormone agonists, aromatase inhibitors (AIs) and oestrogen-receptor antagonists has added further treatment options. Furthermore, with well-tolerated and effective endocrine therapy, adjuvant treatment became an option for patients with early breast cancer. Tamoxifen emerged as the gold-standard adjuvant therapy in the 1980s; however, later trials in postmenopausal women showed that AIs offer advantages over tamoxifen. In addition to being indicated as adjuvant therapy, some AIs are also being evaluated for use as a preventative measure in high-risk women. This chronological account outlines key milestones in the evolution of endocrine therapies over the last 40 years, highlighting each class of agent and the key trials that have led to changes in clinical practice. The advances in endocrine therapies outlined here, coupled with advances in breast cancer management and diagnostics, will likely lead to more patient-tailored therapy, resulting in greater clinical benefits and more cost-effective treatment strategies. © 2012 Elsevier Ltd. Source

Yardley L.,University of Southampton
Journal of medical Internet research | Year: 2011

Hand-washing is regarded as a potentially important behavior for preventing transmission of respiratory infection, particularly during a pandemic. The objective of our study was to evaluate whether a Web-based intervention can encourage more frequent hand-washing in the home, and to examine potential mediators and moderators of outcomes, as a necessary first step before testing effects of the intervention on infection rates in the PRIMIT trial (PRimary care trial of a website based Infection control intervention to Modify Influenza-like illness and respiratory infection Transmission). In a parallel-group pragmatic exploratory trial design, 517 nonblinded adults recruited through primary care were automatically randomly assigned to a fully automated intervention comprising 4 sessions of tailored motivational messages and self-regulation support (n = 324) or to a no-intervention control group (n = 179; ratio 2:1). Hand-washing frequency and theory of planned behavior cognitions relating to hand-washing were assessed by online questionnaires at baseline (in only half of the control participants, to permit evaluation of effects of baseline assessment on effect sizes), at 4 weeks (postintervention; all participants), and at 12 weeks. Hand-washing rates in the intervention group were higher at 4 weeks than in the control group (mean 4.40, n = 285 and mean 4.04, n = 157, respectively; P < .001, Cohen d = 0.42) and remained higher at 12 weeks (mean 4.45, n = 282 and mean 4.12, n = 154, respectively; P < .001, Cohen d = 0.34). Hand-washing intentions and positive attitudes toward hand-washing increased more from baseline to 4 weeks in the intervention group than in the control group. Mediation analyses revealed positive indirect effects of the intervention on change in hand-washing via intentions (coefficient = 0.15, 95% confidence interval [CI] 0.08-0.26) and attitudes (coefficient = 0.16, 95% CI 0.09-0.26). 
Moderator analyses confirmed that the intervention was similarly effective for men and women, those of higher and lower socioeconomic status, and those with higher and lower levels of perceived risk. This study provides promising evidence that Web-based interventions could be an effective method of promoting hand hygiene in the home. Data were collected during the 2010 influenza pandemic, when participants in both groups had already been exposed to extensive publicity about the need for hand hygiene, suggesting that our intervention could add to existing public health campaigns. However, further research is required to determine the effects of the intervention on actual infection rates. TRIAL: International Standard Randomized Controlled Trial Number (ISRCTN): 75058295; http://www.controlled-trials.com/ISRCTN75058295 (Archived by WebCite at http://www.webcitation.org/62KSbkNmm). Source
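For reference, the Cohen d values quoted above are standardized mean differences: the between-group difference in mean hand-washing score divided by the pooled standard deviation. The abstract does not report the standard deviations, so the common SD of 0.9 used in the example below is a purely hypothetical value chosen for illustration:

```python
import math

def cohens_d(mean_a, mean_b, sd_a, sd_b, n_a, n_b):
    """Cohen's d for two independent groups, using the pooled SD."""
    pooled_var = ((n_a - 1) * sd_a**2 + (n_b - 1) * sd_b**2) / (n_a + n_b - 2)
    return (mean_a - mean_b) / math.sqrt(pooled_var)

# With the reported 4-week means (4.40 vs 4.04, n = 285 vs 157) and a
# hypothetical common SD of 0.9, d comes out near 0.40, the same order
# as the reported d = 0.42.
```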

Fall C.H.,University of Southampton
Indian journal of pediatrics | Year: 2013

The "developmental origins of health and disease" (DOHaD) hypothesis proposes that environmental conditions during fetal and early post-natal development influence lifelong health and capacity through permanent effects on growth, structure and metabolism. This has been called 'programming'. The hypothesis is supported by epidemiological evidence in humans linking newborn size, and infant growth and nutrition, to adult health outcomes, and by experiments in animals showing that maternal under- and over-nutrition and other interventions (e.g., glucocorticoid exposure) during pregnancy lead to abnormal metabolism and body composition in the adult offspring. Early life programming is now thought to be important in the etiology of obesity, type 2 diabetes, and cardiovascular disease, opening up the possibility that these common diseases could be prevented by achieving optimal fetal and infant development. This is likely to have additional benefits for infant survival and human capital (e.g., improved cognitive performance and physical work capacity). Fetal nutrition is influenced by the mother's diet and body size and composition, but hard evidence that the nutrition of the human mother programmes chronic disease risk in her offspring is currently limited. Recent findings from follow-up of children born after randomised nutritional interventions in pregnancy are mixed, but show some evidence of beneficial effects on vascular function, lipid concentrations, glucose tolerance and insulin resistance. Work in experimental animals suggests that epigenetic phenomena, whereby gene expression is modified by DNA methylation, and which are sensitive to the nutritional environment in early life, may be one mechanism underlying programming. Source

Wilson S.J.,University of Southampton
Clinical and experimental allergy : journal of the British Society for Allergy and Clinical Immunology | Year: 2013

Eosinophilia is a marker of corticosteroid responsiveness and risk of exacerbation in asthma; although it has been linked to submucosal matrix deposition, its relationship with other features of airway remodelling is less clear. The aim of this study was to investigate the relationship between airway eosinophilia and airway remodelling. Bronchial biopsies from subjects (n = 20 in each group) with mild steroid-naïve asthma, with either low (0-0.45 mm⁻²) or high (23.43-46.28 mm⁻²) submucosal eosinophil counts, and healthy controls were assessed for in vivo epithelial damage (using epidermal growth factor receptor staining), mucin expression, airway smooth muscle (ASM) hypertrophy and inflammatory cells within ASM. The proportion of in vivo damaged epithelium was significantly greater (P = 0.02) in the high-eosinophil (27.37%) than the low-eosinophil (4.14%) group. Mucin expression and goblet cell numbers were similar in the two eosinophil groups; however, MUC-2 expression was increased (P = 0.002) in the high-eosinophil group compared with controls. The proportion of submucosa occupied by ASM was higher in both asthma groups (P = 0.021 and P = 0.046) compared with controls. In the ASM, eosinophil and T-lymphocyte numbers were higher (P < 0.05) in the high-eosinophil group than in both the low-eosinophil group and the controls, whereas the numbers of mast cells were increased in the high-eosinophil group (P = 0.01) compared with controls. Submucosal eosinophilia is a marker (and possibly a cause) of epithelial damage and is related to infiltration of ASM with eosinophils and T lymphocytes, but is unrelated to mucus metaplasia or smooth muscle hypertrophy. © 2013 John Wiley & Sons Ltd. Source

Croston J.H.,University of Southampton | Hardcastle M.J.,University of Hertfordshire
Monthly Notices of the Royal Astronomical Society | Year: 2014

The synchrotron-radiating particles and magnetic fields in low-power radio galaxies (including most nearby cluster-centre sources), if at equipartition, can provide only a small fraction of the total internal energy density of the radio lobes or plumes, which is now well constrained via X-ray observations of their external environments. We consider the constraints on models for the dominant energy contribution in low-power radio-galaxy lobes obtained from a detailed comparison of how the internal equipartition pressure and external pressure measured from X-ray observations evolve with distance for two radio galaxies, 3C 31 and Hydra A. We rule out relativistic lepton dominance of the radio lobes, and conclude that models in which magnetic field or relativistic protons/ions carried up the jet dominate lobe energetics are unlikely. Finally, we argue that entrainment of material from the jet surroundings can provide the necessary pressure, and construct a simple self-consistent model of the evolution of the entrainment rate required for pressure balance along the 100-kpc-scale plumes of 3C 31. Such a model requires that the entrained material is heated to temperatures substantially above that of the surrounding intragroup medium, and that the temperature of the thermal component of the jet increases with distance, though remaining sub-relativistic. © 2014 The Authors Published by Oxford University Press on behalf of the Royal Astronomical Society. Source

Palmer K.T.,University of Southampton
Clinical medicine (London, England) | Year: 2013

Most pregnant women are exposed to some physical activity at work. This Concise Guidance is aimed at doctors advising healthy women with uncomplicated singleton pregnancies about the risks arising from five common workplace exposures (prolonged working hours, shift work, lifting, standing and heavy physical workload). The adverse outcomes considered are: miscarriage, preterm delivery, small for gestational age, low birth weight, pre-eclampsia and gestational hypertension. Systematic review of the literature indicates that these exposures are unlikely to carry much of an increased risk for any of the outcomes, since small apparent effects might be explicable in terms of chance, bias, or confounding, while larger and better studies yield lower estimated risks compared with smaller and weaker studies. In general, patients can be reassured that such work is associated with little, if any, adverse effect on pregnancy. Moreover, moderate physical exercise is thought to be healthy in pregnancy and most pregnant women undertake some physical work at home. The guidelines provide risk estimates and advice on counselling. Source

Boyer K.,University of Southampton
Health and Place | Year: 2012

The UK has some of the lowest breastfeeding duration rates in the industrialised world. This paper considers women's experiences breastfeeding in public as a factor in breastfeeding duration. Research is based on an analysis of: 11 interviews and a 46-person survey of new mothers in Southampton, Hampshire; 180 postings about breastfeeding in public on UK parenting website mumsnet; and a patent application for a 'portable lactation module'. I analyse these data through an engagement with the work of cultural theorist Sara Ahmed to argue that the 'limits of sociability' in public space in the UK can be marked through affective practice. This paper makes three unique contributions to scholarship. First, it increases understanding regarding an issue of direct importance to health policy by filling a gap in knowledge about women's experiences breastfeeding outside the home in the UK. Second, it contributes to the field of health geography by showing how affective environments can constrain health-promoting behaviours. Third, it extends conceptual work in human geography more broadly through an analysis of the relationships between affect, embodiment and urban subjectivity. © 2012 Elsevier Ltd. Source

Read R.C.,University of Southampton
Clinical Microbiology and Infection | Year: 2014

Neisseria meningitidis, the cause of meningococcal disease, has been the subject of sophisticated molecular epidemiological investigation as a consequence of the significant public health threat posed by this organism. The use of multilocus sequence typing and whole genome sequencing classifies the organism into clonal complexes. Extensive phenotypic, genotypic and epidemiological information is available on the PubMLST website. The human nasopharynx is the sole ecological niche of this species, and carrier isolates show extensive genetic diversity as compared with hyperinvasive lineages. Horizontal gene exchange and recombination events within the meningococcal genome during residence in the human nasopharynx result in antigenic diversity even within clonal complexes, so that individual clones may express, for example, more than one capsular polysaccharide (serogroup). Successful clones are capable of wide global dissemination, and may be associated with explosive epidemics of invasive disease. © 2014 European Society of Clinical Microbiology and Infectious Diseases. Source

Shepherd J.P.,University of Southampton
Cochrane database of systematic reviews (Online) | Year: 2011

Human papillomavirus (HPV) is the key risk factor for cervical cancer. Continuing high rates of HPV and other sexually transmitted infections (STIs) in young people demonstrate the need for effective behavioural interventions. To assess the effectiveness of behavioural interventions for young women to encourage safer sexual behaviours to prevent transmission of STIs (including HPV) and cervical cancer. Systematic literature searches were performed on the following databases: Cochrane Central Register of Controlled Trials (CENTRAL, Issue 4, 2009), Cochrane Gynaecological Cancer Review Group (CGCRG) Specialised Register, MEDLINE, EMBASE, CINAHL, PsycINFO, Social Science Citation Index and Trials Register of Promoting Health Interventions (TRoPHI), up to the end of 2009. All references were screened for inclusion against selection criteria. Randomised controlled trials (RCTs) of behavioural interventions for young women up to the age of 25 years that included, amongst other things, information provision about the transmission and prevention of STIs. Trials had to measure behavioural outcomes (e.g. condom use) and/or biological outcomes (e.g. incidence of STIs, cervical cancer). A narrative synthesis was conducted. Meta-analysis was not considered appropriate due to heterogeneity between the interventions and trial populations. A total of 5271 references were screened and of these 23 RCTs met the inclusion criteria. Most were conducted in the USA and in health-care clinics (e.g. family planning). The majority of interventions provided information about STIs and taught safer sex skills (e.g. communication), occasionally supplemented with provision of resources (e.g. free sexual health services). They were heterogeneous in duration, contact time, provider, behavioural aims and outcomes. A variety of STIs were addressed including HIV and chlamydia. 
None of the trials explicitly mentioned HPV or cervical cancer prevention. Statistically significant effects for behavioural outcomes (e.g. increasing condom use) were common, though not universal, and varied according to the type of outcome. There were no statistically significant effects of abstaining from or reducing sexual activity. There were few statistically significant effects on biological (STI) outcomes. Considerable uncertainty exists in the risk of bias due to incomplete or ambiguous reporting. Behavioural interventions for young women which aim to promote sexual behaviours protective of STI transmission can be effective, primarily at encouraging condom use. Future evaluations should include a greater focus on HPV and its link to cervical cancer, with long-term follow-up to assess impact on behaviour change, rates of HPV infection and progression to cervical cancer. Studies should use an RCT design where possible, with integral process evaluation and cost-effectiveness analysis where appropriate. Given the predominance of USA studies in this systematic review, evaluations conducted in other countries would be particularly useful. Source

Knigge C.,University of Southampton | Baraffe I.,University of Exeter | Patterson J.,Columbia University
Astrophysical Journal, Supplement Series | Year: 2011

We present an attempt to reconstruct the complete evolutionary path followed by cataclysmic variables (CVs), based on the observed mass-radius relationship of their donor stars. Along the way, we update the semi-empirical CV donor sequence presented previously by one of us, present a comprehensive review of the connection between CV evolution and the secondary stars in these systems, and reexamine most of the commonly used magnetic braking (MB) recipes, finding that even conceptually similar ones can differ greatly in both magnitude and functional form. The great advantage of using donor radii to infer mass-transfer and angular-momentum-loss (AML) rates is that they sample the longest accessible timescales and are most likely to represent the true secular (evolutionary average) rates. We show explicitly that if CVs exhibit long-term mass-transfer-rate fluctuations, as is often assumed, the expected variability timescales are so long that other tracers of the mass-transfer rate - including white dwarf (WD) temperatures - become unreliable. We carefully explore how much of the radius difference between CV donors and models of isolated main-sequence stars may be due to mechanisms other than mass loss. The tidal and rotational deformation of Roche-lobe-filling stars produces ≃ 4.5% radius inflation below the period gap and ≃ 7.9% above. A comparison of stellar models to mass-radius data for non-interacting stars suggests a real offset of ≃ 1.5% for fully convective stars (i.e., donors below the gap) and ≃ 4.9% for partially radiative ones (donors above the gap). We also show that donor bloating due to irradiation is probably smaller than, and at most comparable to, these effects. After calibrating our models to account for these issues, we fit self-consistent evolution sequences to our compilation of donor masses and radii. 
In the standard model of CV evolution, AMLs below the period gap are assumed to be driven solely by gravitational radiation (GR), while AMLs above the gap are usually described by an MB law first suggested by Rappaport et al. We adopt simple scaled versions of these AML recipes and find that these are able to match the data quite well. The optimal scaling factors turn out to be f_GR = 2.47 ± 0.22 below the gap and f_MB = 0.66 ± 0.05 above (the errors here are purely statistical, and the standard model corresponds to f_GR = f_MB = 1). This revised model describes the mass-radius data significantly better than the standard model. Some of the most important implications and applications of our results are as follows. (1) The revised evolution sequence yields correct locations for the minimum period and the upper edge of the period gap; the standard sequence does not. (2) The observed spectral types of CV donors are compatible with both standard and revised models. (3) A direct comparison of predicted and observed WD temperatures suggests an even higher value for f_GR, but this comparison is sensitive to the assumed mean WD mass and the possible existence of mass-transfer-rate fluctuations. (4) The predicted absolute magnitudes of donor stars in the near-infrared form a lower envelope around the observed absolute magnitudes for systems with parallax distances. This is true for all of our sequences, so any of them can be used to set firm lower limits on (or obtain rough estimates of) the distances toward CVs based only on P_orb and single-epoch near-IR measurements. (5) Both standard and revised sequences predict that short-period CVs should be susceptible to dwarf nova (DN) eruptions, consistent with observations. However, both sequences also predict that the fraction of DNe among long-period CVs should decline with P_orb above the period gap. Observations suggest the opposite behavior, and we discuss the possible explanations for this discrepancy. 
(6) Approximate orbital period distributions constructed from our evolution sequences suggest that the ratio of long-period CVs to short-period, pre-bounce CVs is about 3× higher for the revised sequence than the standard one. This may resolve a long-standing problem in CV evolution. Tables describing our donor and evolution sequences are provided in electronically readable form. © 2011. The American Astronomical Society. All rights reserved. Source
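The scaled angular-momentum-loss prescription at the heart of the revised model can be written out explicitly. Below the gap the loss rate is the standard gravitational-radiation quadrupole formula multiplied by the fitted factor (a sketch in conventional notation, not reproduced from the paper; M1 and M2 are the component masses and a the orbital separation):

```latex
\dot{J}_{\mathrm{sys}} =
\begin{cases}
f_{\mathrm{GR}}\,\dot{J}_{\mathrm{GR}}, & P_{\mathrm{orb}} \text{ below the period gap},\\
f_{\mathrm{MB}}\,\dot{J}_{\mathrm{MB}}, & P_{\mathrm{orb}} \text{ above the period gap},
\end{cases}
\qquad
\dot{J}_{\mathrm{GR}} = -\frac{32}{5}\,\frac{G^{7/2}}{c^{5}}\,
\frac{M_{1}^{2}\,M_{2}^{2}\,\left(M_{1}+M_{2}\right)^{1/2}}{a^{7/2}} ,
```

with f_GR ≈ 2.47 and f_MB ≈ 0.66 as fitted above, and J̇_MB given by the adopted Rappaport et al. magnetic-braking law.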

Vesely J.,Charles University | Rios R.,University of Southampton
Chemical Society Reviews | Year: 2014

Nucleophilic addition to carbon-nitrogen double bonds (imines) represents one of the most common strategies for the synthesis of amine derivatives. In order to circumvent the problem associated with the low reactivity of imines in nucleophilic addition, various imines with electron-withdrawing groups at nitrogen have been studied, and many of them have been successfully applied in asymmetric methodologies. N-carbamoyl imines in particular were found to be useful in the enantioselective synthesis of various organic compounds, owing to their increased reactivity toward nucleophiles and the relative ease with which the carbamoyl moiety can be removed from target molecules. The aim of this review is to cover enantioselective methods based on N-carbamoyl imines, focusing on synthetically useful protocols. © The Royal Society of Chemistry. Source

Wood R.J.K.,University of Southampton
International Journal of Refractory Metals and Hard Materials | Year: 2010

This paper looks at the tribology of thermally sprayed WC-Co based coatings, covering their high-energy air-sand erosion resistance, slurry jet impingement erosion performance, dry and wet sliding tribology, and abrasion and abrasion-corrosion behaviour. The tribological and tribo-corrosion performance of the coatings is related to their mechanical and corrosion properties as well as to deposition parameters, microstructure and actual composition. For example, the anisotropic microstructure of thermally sprayed WC-Co-Cr coatings, in particular the low fracture toughness in a direction parallel to the substrate, has been observed to affect the nature of crack formation under 200 μJ air-solid particle erosion conditions. Voids and occasionally other microstructural features (e.g., cobalt lakes, splat boundaries, interfacial inclusions) in the coating act as crack initiation sites. The erosion rate was dominated by cracks within 5 μm of the surface and was relatively insensitive to the total length of cracks, showing that a near-surface damage front controls the erosion rate; this region is coincident with the region of maximum shear stress induced by erodent impacts. Optimisation of the deposition parameters of HVOF 86WC-10Co-4Cr coatings shows an improvement in erosion resistance of more than 50% over the conventional D-gun applied coating of identical nominal composition. The variation in the slurry erosion performance of the thermally sprayed coatings is also linked to directional fracture toughness and crack propagation paths, which are influenced by the presence of pores, inhomogeneous carbide distributions and substrate grit blast remnants. The influence of slurry jet angle is more pronounced under 0.4 μJ energy conditions, where maximum erosion occurred at 90° and the minimum at 30°, in contrast to 7 μJ slurry erosion rates, which were independent of jet angle. 
This reflects the lower levels of fluctuating stresses imparted to the coating during low-energy slurry impacts, leading to the impact angle having a greater effect on the subcritical crack growth rate than under higher-energy conditions. The abrasion resistance of these coatings was found to be comparable to that of sintered cermets of the same composition. The synergistic effects between micro- and macro-abrasion and corrosion for detonation gun (D-gun) sprayed WC-10Co-4Cr coatings are shown to be significant and to depend on the environment. The size effect of the abradant relative to the microstructure and splat size is important, as is the propensity for the various phases to passivate and so control corrosion levels. Comparisons between exposed and freshly polished coating surfaces in strong NaOH solutions (pH 11) show significantly lower wear rates for the exposed surface, owing to a negative wear-corrosion synergy arising from selective phase removal and the effects of localised passivation. Dry and wet sliding wear resistance of these coatings is shown to be high (wear rates of 10⁻¹⁶-10⁻¹⁸ m³/N m) with modest coefficient of friction levels between 0.2 and 0.5. The presence of oxides on the binder phases appears to influence the friction and wear levels. Wear appears to occur by carbide ejection and/or by tribo-chemical processes. © 2009 Elsevier Ltd. All rights reserved. Source

Littlefield B.T.R.,University of Southampton | Weller M.T.,University of Bath
Nature Communications | Year: 2012

Nanoporous materials have important industrial applications as molecular sieves, catalysts and in gas separation and storage. They are normally produced as moderately dense silicates (SiO2) and aluminosilicates, making their specific capacities for the uptake and storage of gases, such as hydrogen, relatively low. Here we report the synthesis and characterization of lightweight, nanoporous structures formed from the metal hydroxide Be(OH)2 in combination with relatively low levels of framework phosphate or arsenate. Three new zeotype structures are described, constructed mainly of Be(OH)4 tetrahedra bridged through hydroxide into three-membered rings; these units link together to produce several previously unknown zeotype cage types and some of the most structurally complex nanoporous materials ever discovered. These materials have very low densities, between 1.12 and 1.37 g cm⁻³, and theoretical porosities of 63-68% of their total volume, thereby yielding very high total specific pore volumes of up to 0.60 cm³ g⁻¹. © 2012 Macmillan Publishers Limited. All rights reserved. Source
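The quoted pore volumes can be cross-checked from the density and porosity figures in the abstract, since specific pore volume is fractional porosity divided by bulk density. A quick sanity check (pairing the extreme values is my assumption, not stated in the paper):

```python
# Specific pore volume (cm^3 per g) = fractional porosity / bulk density (g per cm^3).
densities = (1.12, 1.37)   # g cm^-3, range quoted in the abstract
porosities = (0.63, 0.68)  # fraction of total volume (63-68%)

# Largest pore volume pairs the highest porosity with the lowest density, and vice versa.
v_max = max(porosities) / min(densities)
v_min = min(porosities) / max(densities)
print(f"{v_min:.2f}-{v_max:.2f} cm^3/g")  # -> 0.46-0.61 cm^3/g, consistent with "up to 0.60"
```

The upper bound reproduces the abstract's headline figure to within rounding.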

Roach P.L.,University of Southampton
Current Opinion in Chemical Biology | Year: 2011

The radical SAM superfamily of enzymes catalyzes a broad spectrum of biotransformations by employing a common obligate intermediate, the 5'-deoxyadenosyl radical (DOA•). Radical formation occurs via the reductive cleavage of S-adenosylmethionine (SAM or AdoMet). The resultant highly reactive primary radical is a potent oxidant that enables the functionalization of relatively inert substrates, including unactivated C-H bonds. The reactions initiated by the DOA• are breathtaking in their efficiency, elegance and, in many cases, the complexity of the biotransformation achieved. This review describes the common features shared by enzymes that generate the DOA• and the intriguing variations or modifications that have recently been reported. The review also highlights selected examples of the diverse biotransformations that ensue. © 2010 Elsevier Ltd. Source

Brailsford S.,University of Southampton
Journal of Simulation | Year: 2014

At the 2010 OR Society Simulation Workshop, there was a lively panel discussion entitled 'Discrete-event simulation is dead, long live agent-based simulation', which was subsequently written up as a position paper for the Journal of Simulation (Siebers et al, 2010). This paper continues that discussion and, to quote Mark Twain, argues that rumours of the death of discrete-event simulation (DES) are greatly exaggerated. There has undoubtedly been a recent surge of interest within the mainstream OR community in the use of agent-based modelling, but this paper suggests that many of the cited benefits of agent-based simulation (ABS) can be achieved through the use of a traditional DES approach. These arguments are illustrated by several examples where DES has been used successfully to tackle 'ABS-type' problems. © 2014 Operational Research Society Ltd. All rights reserved. Source

Vlachantoni A.,University of Southampton
Maturitas | Year: 2012

Gender inequalities in financial resources in later life result from the combined effect of women's atypical life courses, which include interrupted employment records and periods of care provision, and the fact that pension systems have generally been slow in mitigating 'diversions' from continuous and full-time working lives. Gender differentials in financial resources often mean that older women face a greater likelihood of poverty than older men, and such a risk can be experienced for longer periods by women, as a result of their higher average life expectancy. For example, across the EU-27, 16% of men compared to 23% of women aged 65 and over faced a poverty risk, and at age 65, men can expect to live another 17 years on average, while women can expect another 21 years. Although modern pension systems are increasingly recognising the diversity of women's patterns of paid and unpaid work, for example by accounting for periods of childcare in the calculation of the state pension, research continues to show a 'penalty' for women who have spent significant periods of their life providing care to children or dependent adults in and outside the household. Reducing such a penalty is particularly important as population ageing and an increasing demand for formal and informal care are likely to present challenges with critical policy implications for societies and individuals alike. © 2012 Elsevier Ireland Ltd. Source

Walker V.,University of Southampton
Annals of Clinical Biochemistry | Year: 2012

Ammonia is produced continuously in the body. It crosses the blood-brain barrier readily and at increased concentration it is toxic to the brain. A highly integrated system protects against this: ammonia produced during metabolism is detoxified temporarily by incorporation into the non-toxic amino acid glutamine. This is transported safely in the circulation to the small intestine, where ammonia is released, carried directly to the liver in the portal blood, converted to non-toxic urea and finally excreted in urine. As a result, plasma concentrations of ammonia in the systemic circulation are normally very low (<40μmol/L). Hyperammonaemia develops if the urea cycle cannot control the ammonia load. This occurs when the load is excessive, portal blood from the intestines bypasses the liver and/or the urea cycle functions poorly. By far, the commonest cause is liver damage. This review focuses on other causes in adults. Because they are much less common, the diagnosis may be missed or delayed, with disastrous consequences. There is effective treatment for most of them, but it must be instituted promptly to avoid fatality or long-term neurological damage. Of particular concern are unsuspected inherited defects of the urea cycle and fatty acid oxidation presenting with catastrophic illness in previously normal individuals. Early identification of the problem is the challenge. Source

Dymond M.K.,University of Southampton
Journal of the Royal Society, Interface / the Royal Society | Year: 2013

While it is widely accepted that the lipid composition of eukaryotic membranes is under homeostatic control, the mechanisms through which cells sense lipid composition are still the subject of debate. It has been postulated that membrane curvature elastic energy is the membrane property that is regulated by cells, and that lipid composition is maintained by a ratio control function derived from the concentrations of type II and type 0 lipids, weighted appropriately. We assess this proposal by seeking a signature of ratio control in quantified lipid composition data obtained by electrospray ionization mass spectrometry from over 40 independent asynchronous cell populations. Our approach revealed the existence of a universal 'pivot' lipid, which marks the boundary between type 0 lipids and type II lipids, and which is invariant between different cell types or cells grown under different conditions. The presence of such a pivot species is a distinctive signature of the operation in vivo, in human cell lines, of a control function that is consistent with the hypothesis that membrane elastic energy is homeostatically controlled. Source

Ruhl H.A.,University of Southampton | Rybicki N.B.,U.S. Geological Survey
Proceedings of the National Academy of Sciences of the United States of America | Year: 2010

Great effort continues to focus on ecosystem restoration and reduction of nutrient inputs thought to be responsible, in part, for declines in estuary habitats worldwide. The ability of environmental policy to address restoration is limited, in part, by uncertainty in the relationships between costly restoration and benefits. Here, we present results from an 18-y field investigation (1990-2007) of submerged aquatic vegetation (SAV) community dynamics and water quality in the Potomac River, a major tributary of the Chesapeake Bay. River and anthropogenic discharges lower water clarity by introducing nutrients that stimulate phytoplankton and epiphyte growth as well as suspended sediments. Efforts to restore the Chesapeake Bay are often viewed as failing. Overall nutrient reduction and SAV restoration goals have not been met. In the Potomac River, however, reduced in situ nutrients, wastewater-treatment effluent nitrogen, and total suspended solids were significantly correlated to increased SAV abundance and diversity. Species composition and relative abundance also correlated with nutrient and water-quality conditions, indicating declining fitness of exotic species relative to native species during restoration. Our results suggest that environmental policies that reduce anthropogenic nutrient inputs do result in improved habitat quality, with increased diversity and native species abundances. The results also help elucidate why SAV cover has improved only in some areas of the Chesapeake Bay. Source

Holgate S.T.,University of Southampton
Allergy, Asthma and Immunology Research | Year: 2010

The original concept of asthma being primarily a disease of airways smooth muscle drove the development of bronchodilator drugs. However, when it was realised that airway inflammation underpinned the disordered airway function, this gave way to the development of controller therapies such as inhaled cromones and corticosteroids. More recently, the discovery of complex interconnecting cytokine and chemokine networks has stimulated the development of biologics, with varying success. With the recognition that airway wall "remodelling" is present early in asthma inception and is in part driven by aberrant epithelial-mesenchymal communication, both genetic and environmental factors beyond allergen exposure, such as virus infection and air pollution, are increasingly seen as important not only in asthma exacerbations but also in the origins of asthma and its evolution into different sub-phenotypes. This brings us full circle, to once again considering that the origins of asthma lie in defects in the formed elements of the airway: the epithelium, smooth muscle, and vasculature. Over the last 25 years Professor You Young Kim has engaged in the exciting discovery science of allergy and asthma and has made an enormous contribution in bringing Korea to the forefront of disease management and research, a position that both he and his colleagues can justly be proud of. © Copyright The Korean Academy of Asthma, Allergy and Clinical Immunology. Source

Sugiura S.,Toyota Central RandD Laboratories Inc. | Hanzo L.,University of Southampton
IEEE Signal Processing Letters | Year: 2012

In this letter, we investigate the effects of training-based channel estimation on the achievable performance of the recent spatial modulation (SM) based multiple-input multiple-output (MIMO) scheme. This is motivated by the fact that the SM transmitter is constituted by a single radio-frequency (RF) branch and multiple antenna elements (AEs); hence simultaneous pilot transmissions from the AEs are impossible, unlike in classic multiple-RF MIMO transmitters. Our simulation results demonstrate that the SM scheme's BER curve exhibits a performance penalty when relying on realistic, imperfect channel estimation. In order to combat this limitation, we propose two single-RF arrangements, namely a reduced-complexity joint channel estimation and data detection aided SM scheme as well as a non-coherently detected single-RF space-time shift keying scheme dispensing with channel estimation. © 1994-2012 IEEE. Source
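The single-RF constraint discussed above follows directly from how spatial modulation maps bits: each block selects one active antenna index plus one constellation point, so only one antenna radiates at a time. A minimal illustrative mapper (the antenna count and QPSK alphabet are my own hypothetical choices, not the letter's simulation parameters):

```python
import math

def sm_map(bits, n_tx=4, constellation=(1+1j, -1+1j, -1-1j, 1-1j)):
    """Map log2(n_tx) antenna-selection bits plus log2(M) symbol bits
    to a transmit vector with a single non-zero (active) entry."""
    na = int(math.log2(n_tx))               # bits choosing the antenna
    ns = int(math.log2(len(constellation))) # bits choosing the symbol
    assert len(bits) == na + ns
    antenna = int("".join(map(str, bits[:na])), 2)
    symbol = constellation[int("".join(map(str, bits[na:])), 2)]
    x = [0j] * n_tx                         # all antennas silent...
    x[antenna] = symbol                     # ...except the selected one
    return x

print(sm_map([1, 0, 1, 1]))  # antenna index 2 active, carrying symbol 1-1j
```

Because every transmit vector has exactly one non-zero entry, a single RF chain suffices, and the per-antenna pilots assumed by classic MIMO channel estimators cannot be sent simultaneously.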

Blumensath T.,University of Southampton | Davies M.E.,University of Edinburgh
IEEE Journal on Selected Topics in Signal Processing | Year: 2010

Sparse signal models are used in many signal processing applications. The task of estimating the sparsest coefficient vector in these models is a combinatorial problem, and efficient, often suboptimal strategies have to be used. Fortunately, under certain conditions on the model, several algorithms can be shown to efficiently calculate near-optimal solutions. In this paper, we study one of these methods, the so-called Iterative Hard Thresholding algorithm. While this method has strong theoretical performance guarantees whenever certain theoretical properties hold, empirical studies show that the algorithm's performance degrades significantly whenever these conditions fail; what is more, in this regime the algorithm often fails to converge. As we are here interested in applying the method to real-world problems, in which it is not generally known whether the theoretical conditions are satisfied, we suggest a simple modification that guarantees the convergence of the method even in this regime. With this modification, empirical evidence suggests that the algorithm is faster than many other state-of-the-art approaches while showing similar performance. Moreover, the modified algorithm retains theoretical performance guarantees similar to those of the original algorithm. © IEEE. Source
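The basic iteration studied here is easy to state: a gradient step on the least-squares objective followed by hard thresholding to the K largest-magnitude coefficients. The sketch below implements that basic scheme with a fixed step size; the paper's modification (an adaptively normalized step size that guarantees convergence) is not reproduced here, and the problem sizes are illustrative only:

```python
import numpy as np

def iht(A, y, K, mu=0.1, iters=300):
    """Basic iterative hard thresholding: x <- H_K(x + mu * A^T (y - A x))."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + mu * A.T @ (y - A @ x)       # gradient step on 0.5*||y - Ax||^2
        small = np.argsort(np.abs(x))[:-K]   # indices of all but the K largest
        x[small] = 0.0                       # hard-threshold operator H_K
    return x

# Tiny demo: a 2-sparse vector observed through a random Gaussian matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 40)) / np.sqrt(30)  # roughly unit-norm columns
x_true = np.zeros(40)
x_true[5], x_true[17] = 3.0, -2.0
x_hat = iht(A, A @ x_true, K=2)
print(np.flatnonzero(x_hat))  # indices of the recovered support
```

Note that the output is K-sparse by construction at every iteration; whether it also converges to the true coefficients is exactly what depends on the conditions the paper discusses.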

O'Kelly I.,University of Southampton
Pflugers Archiv European Journal of Physiology | Year: 2015

Two-pore domain potassium (K2P) channels are implicated in an array of physiological and pathophysiological roles. As a result of their biophysical properties, these channels produce a background leak K+ current which has a direct effect on cellular membrane potential and activity. The regulation of potassium leak from cells through K2P channels is of critical importance to cell function, development and survival. Controlling the cell surface expression of these channels is one way to regulate their function, and is achieved through a balance between regulated channel delivery to, and retrieval from, the cell surface. Here, we explore the modes of retrieval of K2P channels from the plasma membrane and observe that K2P channels are endocytosed in both a clathrin-mediated and clathrin-independent manner. K2P channels use a variety of pathways and show altered internalisation and sorting in response to external cues. These pathways, working in concert, equip the cell with a range of approaches to maintain steady-state levels of channels and to respond rapidly should changes in channel density be required. © 2014, The Author(s). Source

Freeman C.T.,University of Southampton
Control Engineering Practice | Year: 2012

Iterative learning control is a methodology applicable to systems which repeatedly track a specified reference trajectory defined over a finite time duration. Here the methodology is instead applied to the point-to-point motion control problem in which the output is only specified at a subset of time instants. The iterative learning framework is expanded to address this case, and conditions for convergence to zero point-to-point tracking error are derived. It is shown how the extra design freedom the point-to-point set-up brings allows additional input, output and state constraints to be simultaneously addressed, hence providing a powerful design framework of wide practical utility. Experimental results confirm the performance and accuracy that can be achieved, and the improvements gained over the standard ILC framework. © 2012 Elsevier Ltd. Source
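The point-to-point setting can be illustrated in the lifted (supervector) framework: the plant becomes a lower-triangular matrix mapping the input sequence to the output sequence, and the error is evaluated only at the specified time instants. The toy example below uses a first-order plant and a simple gradient-type update; both are my own illustrative choices, not the paper's design:

```python
import numpy as np

# Lifted model over N samples of a toy plant with impulse response 0.5^k.
N = 20
G = np.zeros((N, N))
for i in range(N):
    for j in range(i + 1):
        G[i, j] = 0.5 ** (i - j)

# Point-to-point task: the output is specified only at samples 7 and 15.
pts = [7, 15]
r = np.array([1.0, -0.5])   # desired output values at those instants
Gp = G[pts, :]              # rows of G at the specified instants only

# Gradient-type ILC over trials: u_{k+1} = u_k + beta * Gp^T e_k, e_k = r - Gp u_k.
u = np.zeros(N)
beta = 0.1
for _ in range(500):
    e = r - Gp @ u
    u = u + beta * Gp.T @ e

print(np.round(Gp @ u, 3))  # -> [ 1.  -0.5]
```

The extra design freedom the paper exploits is visible here: many input sequences achieve zero error at the two specified instants, and it is this null space that admits the additional input, output and state constraints.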

Chillingworth D.R.J.,University of Southampton
Nonlinearity | Year: 2010

We give a complete analysis of low-velocity dynamics close to grazing for a generic one degree of freedom impact oscillator. This includes nondegenerate (quadratic) grazing and minimally degenerate (cubic) grazing, corresponding respectively to nondegenerate and degenerate chatter. We also describe the dynamics associated with generic one-parameter bifurcation at a more degenerate (quartic) graze, showing in particular how this gives rise to the often-observed highly convoluted structure in the stable manifolds of chattering orbits. The approach adopted is geometric, using methods from singularity theory. © 2010 IOP Publishing Ltd & London Mathematical Society. Source

King S.F.,University of Southampton | Muhlleitner M.,Karlsruhe Institute of Technology | Nevzorov R.,Institute of Theoretical and Experimental Physics | Walz K.,Karlsruhe Institute of Technology
Nuclear Physics B | Year: 2013

We study the phenomenology of Higgs bosons close to 126 GeV within the scale invariant unconstrained Next-to-Minimal Supersymmetric Standard Model (NMSSM), focusing on the regions of parameter space favoured by low fine-tuning considerations, namely stop masses of order 400 GeV to 1 TeV and an effective μ parameter between 100 and 200 GeV, with large (but perturbative) λ and low tan β = 2-4. We perform scans over the above parameter space, focusing on the observable Higgs cross sections into γγ, WW, ZZ, bb, ττ final states, and study the correlations between these observables. We show that the γγ signal strength may be enhanced by up to a factor of about two, not only due to the effect of singlet-doublet mixing, which occurs more often when the 126 GeV Higgs boson is the next-to-lightest CP-even one, but also due to light stops (and, to a lesser extent, light chargino and charged Higgs loops). There may also be smaller enhancements in the Higgs decay channels into WW, ZZ, correlated with the γγ enhancement. However, no such correlation is observed involving the Higgs decay channels into bb, ττ. The requirement of having perturbative couplings up to the GUT scale favours the interpretation of the 126 GeV Higgs boson as the second lightest NMSSM CP-even state, which can decay into pairs of lighter neutralinos, CP-even or CP-odd Higgs bosons, leading to characteristic signatures of the NMSSM. In a non-negligible part of the parameter range the increase in the γγ rate is due to the superposition of rates from nearly degenerate Higgs bosons. Resolving these Higgs bosons would rule out the Standard Model and provide evidence for the NMSSM. © 2013 Elsevier B.V. Source

Calder P.C.,University of Southampton
Nutrients | Year: 2010

Long chain fatty acids influence inflammation through a variety of mechanisms; many of these are mediated by, or at least associated with, changes in the fatty acid composition of cell membranes. Changes in these compositions can modify membrane fluidity, cell signaling leading to altered gene expression, and the pattern of lipid mediator production. Cells involved in the inflammatory response are typically rich in the n-6 fatty acid arachidonic acid, but the contents of arachidonic acid and of the n-3 fatty acids eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA) can be altered through oral administration of EPA and DHA. Eicosanoids produced from arachidonic acid have roles in inflammation. EPA also gives rise to eicosanoids, and these often have differing properties from those of arachidonic acid-derived eicosanoids. EPA and DHA give rise to newly discovered resolvins, which are anti-inflammatory and inflammation resolving. Increased membrane content of EPA and DHA (and decreased arachidonic acid content) results in a changed pattern of production of eicosanoids and resolvins. Changing the fatty acid composition of cells involved in the inflammatory response also affects production of peptide mediators of inflammation (adhesion molecules, cytokines etc.). Thus, the fatty acid composition of cells involved in the inflammatory response influences their function; the contents of arachidonic acid, EPA and DHA appear to be especially important. The anti-inflammatory effects of marine n-3 PUFAs suggest that they may be useful as therapeutic agents in disorders with an inflammatory component. © 2010 by the authors; licensee Molecular Diversity Preservation International, Basel, Switzerland. Source

Grimble R.F.,University of Southampton
Proceedings of the Nutrition Society | Year: 2010

The objective of the present review is to provide an overview of the metabolic effects of pro-inflammatory cytokine production during infection and injury; to highlight the disadvantages of pro-inflammatory cytokine production and inflammatory stress on morbidity and mortality of patients; to identify the influence of genetics and adiposity on inflammatory stress in patients and to indicate how nutrients may modulate the inflammatory response in patients. Recent research has shown clearly that adipose tissue actively secretes a wide range of pro- and anti-inflammatory cytokines. Paradoxically, although inflammation is an essential part of the response of the body to infection, surgery and trauma, it can adversely affect patient outcome. The metabolic effects of inflammation are mediated by pro-inflammatory cytokines. Metabolic effects include insulin insensitivity, hyperlipidaemia, muscle protein loss and oxidant stress. These effects, as well as being present during infective disease, are also present in diseases with a covert inflammatory basis. These latter diseases include obesity and type 2 diabetes mellitus. Inflammatory stress also increases during aging. The level of cytokine production, within individuals, is influenced by single nucleotide polymorphisms (SNP) in cytokine genes. The combination of SNP controls the relative level of inflammatory stress in both overt and covert inflammatory diseases. The impact of cytokine genotype on the intensity of inflammatory stress derived from an obese state is unknown. While studies remain to be done in the latter context, evidence shows that these genomic characteristics influence morbidity and mortality in infectious disease and diseases with an underlying inflammatory basis and thereby influence the cost of in-patient obesity. Antioxidants and n-3 PUFA alter the intensity of the inflammatory process. Recent studies show that genotypic factors influence the effectiveness of immunonutrients. 
A better understanding of this aspect of nutrient-gene interactions and of the genomic factors that influence the intensity of inflammation during disease will help in the more effective targeting of nutritional therapy. © 2010 The Author. Source

Modi N.,Imperial College London | Clark H.,University of Southampton | Wolfe I.,London School of Hygiene and Tropical Medicine | Costello A.,University College London | Budge H.,University of Nottingham
The Lancet | Year: 2013

Despite a general acknowledgment that research in children is necessary and ethical, the evidence base for child-specific treatments is still sparse. We investigated children's biomedical and health services research in the UK in relation to training, infrastructure and activity, research evidence, and visibility. We show that excellent opportunities for career researchers exist through a competitive, national integrated academic training programme, but that the number of academic paediatricians has decreased by 18% between 2000 and 2011, falling from 11.3% to 5.9% of the consultant workforce. The potential for rapid delivery of studies in children through the National Health Service (NHS) is not being realised: clinical trainees are poorly equipped with core research skills; most newly appointed consultant paediatricians have little or no research experience; less than 5% of contracted consultant time supports research; less than 2.5% of the 2 million children seen in the NHS every year are recruited to studies; and ten of the 20 UK children's hospitals do not have a clinical research facility. Support through National Institute for Health Research networks is good for studies into drugs, but inconsistent for non-drug research; less than 5% of registered studies involve children and only one children's biomedical research centre has been allocated funding from 2012. Of the UK annual public and charitable biomedical research expenditure of roughly 2.2 billion, about 5% is directed at child health research. The scant evidence base is impeding the development of clinical guidance and policy: less than 20% of the outputs of the National Institute for Health and Clinical Excellence are applicable to children. Paediatric representation on major research boards is weak. Parent and young people's advocacy is fragmented, and their views are insufficiently heeded by regulatory bodies. 
The strong UK Government commitment to biomedical research has not been translated fully to research for children. The power of research in children to turn the tide of the growing burden of non-communicable, chronic, adult diseases that have their origins in early life, to benefit the health of an ageing population and future generations, and to reduce health-care costs is inadequately recognised. On the basis of our findings, we make several recommendations to improve early-years research, including the formation of multidisciplinary, cross-institutional groups of clinical and non-clinical child health researchers and their access to diagnostic and laboratory facilities suitable for children; a unified Children's Research Network for drug studies and non-drug studies; regulatory assessment of research that is proportionate and based on consistent national criteria; an expansion of research posts; support for parents' and young people's advocacy; collaboration between children's research charities; improved research training for paediatric trainees; and closer integration of child health research with core NHS activities. Source

Bakar K.S.,Yale University | Sahu S.K.,University of Southampton
Journal of Statistical Software | Year: 2015

Hierarchical Bayesian modeling of large point-referenced space-time data is increasingly becoming feasible in many environmental applications due to recent advances in both statistical methodology and computing power. Implementation of these methods using Markov chain Monte Carlo (MCMC) computational techniques, however, requires the development of problem-specific, user-written computer code, possibly in a low-level language. This programming requirement is hindering the widespread use of Bayesian model-based methods among practitioners, and hence there is an urgent need to develop high-level software that can analyze large data sets rich in both space and time. This paper develops the package spTimer for hierarchical Bayesian modeling of stylized environmental space-time monitoring data, as a contributed software package in the R language, which is fast becoming a very popular statistical computing platform. The package is able to fit, and spatially and temporally predict, large amounts of space-time data using three recently developed Bayesian models. The user is given control over many options regarding covariance function selection, distance calculation, prior selection and tuning of the implemented MCMC algorithms, although suitable defaults are provided. The package has many other attractive features, such as on-the-fly transformations and the ability to spatially predict temporally aggregated summaries on the original scale, which avoids the storage problem that arises when using MCMC methods for large datasets. A simulation example with more than a million observations and a real-life data example are used to validate the underlying code and to illustrate the software's capabilities. Source
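As a rough illustration of the covariance structure such hierarchical space-time models place on the data, the sketch below builds a separable exponential space-time covariance matrix in Python. This is not spTimer's actual code (spTimer is an R package), and the function name, parameter names and values are invented for illustration only:

```python
import numpy as np

def st_cov(coords, times, sigma2=1.0, phi_s=0.5, phi_t=0.2):
    """Separable exponential space-time covariance, a common building
    block in hierarchical Bayesian space-time models (illustrative only).
    coords and times give one spatial coordinate and one time stamp per
    observation; sigma2 is the variance, phi_s and phi_t decay rates."""
    ds = np.abs(coords[:, None] - coords[None, :])  # pairwise spatial distances
    dt = np.abs(times[:, None] - times[None, :])    # pairwise temporal lags
    return sigma2 * np.exp(-phi_s * ds) * np.exp(-phi_t * dt)

# Three observations at (site, time) pairs (0,0), (1,0), (2,1)
coords = np.array([0.0, 1.0, 2.0])
times = np.array([0.0, 0.0, 1.0])
C = st_cov(coords, times)
```

The resulting matrix is symmetric and positive definite, as a valid covariance must be; an MCMC sampler would evaluate such a matrix (or a sparser approximation) at every iteration, which is why efficient compiled implementations matter for large data sets.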

Moreau L.,University of Southampton
Foundations and Trends in Web Science | Year: 2010

Provenance, i.e., the origin or source of something, is becoming an important concern, since it offers the means to verify data products, to infer their quality, to analyse the processes that led to them, and to decide whether they can be trusted. For instance, provenance enables the reproducibility of scientific results; provenance is necessary to track attribution and credit in curated databases; and, it is essential for reasoners to make trust judgements about the information they use over the Semantic Web. As the Web allows information sharing, discovery, aggregation, filtering and flow in an unprecedented manner, it also becomes very difficult to identify, reliably, the original source that produced an information item on the Web. Since the emerging use of provenance in niche applications is undoubtedly demonstrating the benefits of provenance, this monograph contends that provenance can and should reliably be tracked and exploited on the Web, and investigates the necessary foundations to achieve such a vision. Multiple data sources have been used to compile the largest bibliographical database on provenance so far. This large corpus permits the analysis of emerging trends in the research community. Specifically, the CiteSpace tool identifies clusters of papers that constitute research fronts, from which characteristics are extracted to structure a foundational framework for provenance on the Web. Such an endeavour requires a multi-disciplinary approach, since it requires contributions from many computer science sub-disciplines, but also other non-technical fields given the human challenge that is anticipated. To develop such a vision, it is necessary to provide a definition of provenance that applies to the Web context. A conceptual definition of provenance is expressed in terms of processes, and is shown to generalise various definitions of provenance commonly encountered. 
Furthermore, by bringing realistic distributed systems assumptions, this definition is refined as a query over assertions made by applications. Given that the majority of work on provenance has been undertaken by the database, workflow and e-science communities, some of their work is reviewed, contrasting approaches, and focusing on important topics believed to be crucial for bringing provenance to the Web, such as abstraction, collections, storage, queries, workflow evolution, semantics and activities involving human interactions. However, provenance approaches developed in the context of databases and workflows essentially deal with closed systems. By that, it is meant that workflow or database management systems are in full control of the data they manage, and track their provenance within their own scope, but not beyond. In the context of the Web, a broader approach is required by which chunks of provenance representation can be brought together to describe the provenance of information flowing across multiple systems. For this purpose, this monograph puts forward the Open Provenance Vision, which is an approach that consists of controlled vocabulary, serialisation formats and interfaces to allow the provenance of individual systems to be expressed, connected in a coherent fashion, and queried seamlessly. In this context, the Open Provenance Model is an emerging community-driven representation of provenance, which has been actively used by some 20 teams to exchange provenance information, in line with the Open Provenance Vision. After identifying an open approach and a model for provenance, techniques to expose provenance over the Web are investigated. In particular, Semantic Web technologies are discussed since they have been successfully exploited to express, query and reason over provenance. 
Symmetrically, Semantic Web technologies such as RDF, underpinning the Linked Data effort, are analysed since they offer their own difficulties with respect to provenance. A powerful argument for provenance is that it can help make systems transparent, so that it becomes possible to determine whether a particular use of information is appropriate under a set of rules. Such capability helps make systems and information accountable. To offer accountability, provenance itself must be authentic, and rely on security approaches, which are described in the monograph. This is then followed by systems where provenance is the basis of an auditing mechanism to check past processes against rules or regulations. In practice, not all users want to check and audit provenance, instead, they may rely on measures of quality or trust; hence, emerging provenance-based approaches to compute trust and quality of data are reviewed. © 2010 L. Moreau. Source

Objective: Nonpharmacological treatments are available for attention deficit hyperactivity disorder (ADHD), although their efficacy remains uncertain. The authors undertook meta-analyses of the efficacy of dietary (restricted elimination diets, artificial food color exclusions, and free fatty acid supplementation) and psychological (cognitive training, neurofeedback, and behavioral interventions) ADHD treatments. Method: Using a common systematic search and a rigorous coding and data extraction strategy across domains, the authors searched electronic databases to identify published randomized controlled trials that involved individuals who were diagnosed with ADHD (or who met a validated cutoff on a recognized rating scale) and that included an ADHD outcome. Results: Fifty-four of the 2,904 nonduplicate screened records were included in the analyses. Two different analyses were performed. When the outcome measure was based on ADHD assessments by raters closest to the therapeutic setting, all dietary (standardized mean differences = 0.21-0.48) and psychological (standardized mean differences = 0.40-0.64) treatments produced statistically significant effects. However, when the best probably blinded assessment was employed, effects remained significant for free fatty acid supplementation (standardized mean difference = 0.16) and artificial food color exclusion (standardized mean difference = 0.42) but were substantially attenuated to nonsignificant levels for other treatments. Conclusions: Free fatty acid supplementation produced small but significant reductions in ADHD symptoms even with probably blinded assessments, although the clinical significance of these effects remains to be determined. Artificial food color exclusion produced larger effects but often in individuals selected for food sensitivities. 
Better evidence for efficacy from blinded assessments is required for behavioral interventions, neurofeedback, cognitive training, and restricted elimination diets before they can be supported as treatments for core ADHD symptoms. Source

Unemo M.,Orebro University | Clarke I.N.,University of Southampton
Current Opinion in Infectious Diseases | Year: 2011

Purpose of review: This review focuses on the anatomy of the Swedish new variant of Chlamydia trachomatis (nvCT). This information provides an interesting insight into the emergence of new strains (how, where, and when), and the important lessons learned are discussed. Recent findings: In late 2006, the nvCT was first reported in Sweden; it carries a 377 bp deletion within its plasmid which covers the single targets originally used by the Roche and Abbott diagnostic systems. The nvCT spread rapidly, with thousands of falsely negative diagnoses. Genome sequencing and phenotypic characterization showed that the biological fitness of nvCT, compared with wild-type CT in vitro, is unaltered. Therefore, the rapid transmission of nvCT was due to the selective advantage gained from failed diagnosis and the introduction of nvCT into a high-frequency transmitting population. The proportions of nvCT cases are now converging toward equilibrium with the wild-type CT strains. Interestingly, the nvCT remains rarely reported beyond the Nordic countries. Summary: The spread of nvCT had a substantial impact on C. trachomatis identification, epidemiology, and public health in Sweden. Lessons learned from this experience include the importance of investigating the incidence and epidemiology of infection in detail, the frequent participation in appropriate quality assurance schemes, and the careful design of diagnostic assays. The nvCT presents a unique opportunity to study the spread of a single C. trachomatis strain within both the human and bacterial populations; this may substantially increase our knowledge of the epidemiology and transmission of chlamydial infections, and other sexually transmitted infections. © 2011 Wolters Kluwer Health | Lippincott Williams & Wilkins. Source

Roure D.D.,University of Southampton
Computer | Year: 2010

e-Science projects have generated new ways of thinking, new expertise and methods, and a new collaborative infrastructure. © 2010 IEEE. Source

Calder P.C.,University of Southampton
Proceedings of the Nutrition Society | Year: 2012

Inflammation plays a key role in many common conditions and diseases. Fatty acids can influence inflammation through a variety of mechanisms acting from the membrane to the nucleus. They act through cell surface and intracellular receptors that control inflammatory cell signalling and gene expression patterns. Modifications of inflammatory cell membrane fatty acid composition can modify membrane fluidity, lipid raft formation and cell signalling, leading to altered gene expression, and can alter the pattern of lipid and peptide mediator production. Cells involved in the inflammatory response usually contain a relatively high proportion of the n-6 fatty acid arachidonic acid in their membrane phospholipids. Eicosanoids produced from arachidonic acid have well-recognised roles in inflammation. Oral administration of the marine n-3 fatty acids EPA and DHA increases the contents of EPA and DHA in the membranes of cells involved in inflammation. This is accompanied by a decrease in the amount of arachidonic acid present. EPA is also a substrate for eicosanoid synthesis, and the eicosanoids produced from EPA are often less potent than those produced from arachidonic acid. EPA gives rise to E-series resolvins and DHA gives rise to D-series resolvins and protectins. Resolvins and protectins are anti-inflammatory and inflammation resolving. Thus, the exposure of inflammatory cells to different types of fatty acids can influence their function and so has the potential to modify inflammatory processes. © 2012 The Author. Source

Khalid S.,University of Southampton
Methods in molecular biology (Clifton, N.J.) | Year: 2013

The time and length scales accessible by biomolecular simulations continue to increase. This is in part due to improvements in algorithms and computing performance, but is also the result of the emergence of coarse-grained (CG) potentials, which complement and extend the information obtainable from fully detailed models. CG methods have already proven successful for a range of applications that benefit from the ability to rapidly simulate spontaneous self-assembly within a lipid membrane environment, including the insertion and/or oligomerization of a range of "toy models," transmembrane peptides, and single- and multi-domain proteins. While these simplified approaches sacrifice atomistic level detail, it is now straightforward to "reverse map" from CG to atomistic descriptions, providing a strategy to assemble membrane proteins within a lipid environment, prior to all-atom simulation. Moreover, recent developments have been made in "dual resolution" techniques, allowing different molecules in the system to be modeled with atomistic or CG resolution simultaneously. Source

Starink M.J.,University of Southampton
Thermochimica Acta | Year: 2014

In this work a new model for diffusion-controlled precipitation reactions is derived, analysed and tested against a wide range of data. The model incorporates elements of the extended volume concept and combines this with a new treatment of soft impingement of diffusion fields. The model derivation involves an integration over iso-concentration regions in the parent phase in the extended volume, which leads to a single analytical equation describing the relation between the fraction transformed, α, and the extended volume fraction, α_ext, as: α = [exp(-2α_ext) - 1]/(2α_ext) + 1. The model is compared to a range of new and old data on diffusion-controlled reactions, including precipitation reactions and exsolution reactions, showing very good performance and outperforming classical and recent models. The model allows a new interpretation of existing data which, for the first time, shows a consistent analysis, in which the Avrami constants, n, take values that are always consistent with transformation theory. © 2014 Elsevier B.V. All rights reserved. Source
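Reading the closed form as α = [exp(-2α_ext) - 1]/(2α_ext) + 1 (the signs are reconstructed here from the limiting behaviour), a quick numerical check confirms the limits a soft-impingement model should satisfy: α ≈ α_ext when impingement is negligible, and α → 1 as α_ext grows, with α increasing monotonically in between:

```python
import math

def starink_alpha(x_ext):
    """Fraction transformed from the extended volume fraction, using the
    closed form alpha = [exp(-2*x) - 1]/(2*x) + 1 (signs assumed as
    reconstructed; valid for x_ext > 0)."""
    return (math.exp(-2.0 * x_ext) - 1.0) / (2.0 * x_ext) + 1.0
```

A Taylor expansion of the same expression gives α ≈ α_ext - (2/3)α_ext² for small extended fractions, so the model reduces to the impingement-free result at early times, as expected.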

Merle A.,University of Southampton
International Journal of Modern Physics D | Year: 2013

We review the model building aspects of keV sterile neutrinos as Dark Matter (DM) candidates. After giving a brief discussion of some cosmological, astrophysical and experimental aspects, we first discuss the currently known neutrino data and observables. We then explain the purpose and goal of neutrino model building, and review some generic methods used. Afterwards, certain aspects specific to keV neutrino model building are discussed, before reviewing the bulk of models in the literature. We try to keep the discussion on a pedagogical level, while nevertheless pointing out some finer details where necessary and useful. Ideally, this review should enable a graduate student, or an interested colleague from cosmology or astrophysics with some prior experience, to start working in the field. © 2013 World Scientific Publishing Company. Source

Jones G.A.,University of Southampton
Journal of Combinatorial Theory. Series B | Year: 2013

A generalised Paley map is a Cayley map for the additive group of a finite field F, with a subgroup S = -S of the multiplicative group as generating set, cyclically ordered by powers of a generator of S. We characterise these as the orientably regular maps with orientation-preserving automorphism group acting primitively and faithfully on the vertices; allowing a non-faithful primitive action yields certain cyclic coverings of these maps. We determine the fields of definition and the orbits of the absolute Galois group Gal(Q̄/Q) on these maps, and we show that if (q-1)/(p-1) divides |S|, where |F| = q = p^e with p prime, then these maps are the only orientably regular embeddings of their underlying graphs; in particular this applies to the Paley graphs, where |S| = (q-1)/2 is even. © 2012. Source
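For concreteness, the classical Paley graph referred to here (vertices a finite field, edges joining pairs whose difference is a non-zero square, so that |S| = (q-1)/2) can be built in a few lines of Python. The sketch below covers prime fields Z/qZ only; the paper works over general finite fields GF(p^e):

```python
def paley_graph(q):
    """Adjacency sets of the Paley graph on Z/qZ for a prime q = 1 (mod 4):
    x ~ y iff x - y is a non-zero quadratic residue mod q.
    (Prime-field sketch only; not the paper's general construction.)"""
    squares = {(x * x) % q for x in range(1, q)}  # non-zero squares mod q
    return {v: {(v + s) % q for s in squares} for v in range(q)}

g = paley_graph(13)
```

Since q ≡ 1 (mod 4) makes -1 a square, the generating set satisfies S = -S and the graph is undirected; each vertex has degree (q-1)/2, i.e. 6 when q = 13.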

Levitt M.H.,University of Southampton
Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences | Year: 2013

Molecular endofullerenes are supramolecular systems consisting of fullerene cages encapsulating small molecules. Although most early examples consist of encapsulated metal clusters, recently developed synthetic routes have provided endofullerenes with non-metallic guest molecules in high purity and macroscopic quantities. The encapsulated light molecule behaves as a confined quantum rotor, displaying rotational quantization as well as translational quantization, and a rich coupling between the translational and rotational degrees of freedom. Furthermore, many encapsulated molecules display spin isomerism. Spectroscopies such as inelastic neutron scattering, nuclear magnetic resonance and infrared spectroscopy may be used to obtain information on the quantized energy level structure and spin isomerism of the guest molecules. It is also possible to study the influence of the guest molecules on the cages, and to explore the communication between the guest molecules and the molecular environment outside the cage. © 2013 The Author(s) Published by the Royal Society. All rights reserved. Source

This paper is a sympathetic critique of mainstream grammars of urban injustice, arguing that they are frequently too one-sided and selective to adequately grasp the full complexity of urban realities. Most prominently, I contend that urban injustice and punitiveness co-exist with, if not sometimes depend upon, more supportive responses within urban space. I therefore counterbalance the spectacular logics of punitive urbanism and the everyday logics of control with a tripartite approach to logics assembled within the urban voluntary sector (abeyance, care and survival) as a way to reconnect to a broader set of practices. Two case studies are used to illustrate these contentions. © 2012 The Author. Antipode © 2012 Antipode Foundation Ltd. Source

King S.F.,University of Southampton
Journal of High Energy Physics | Year: 2011

Tri-bimaximal neutrino mixing may arise from see-saw models based on family symmetry which is spontaneously broken by flavons with particular vacuum alignments. In this paper we derive approximate analytic results which express the deviations from tri-bimaximal neutrino mixing due to vacuum misalignment. We also relate vacuum misalignment to departures from form dominance, corresponding to complex deviations from the real orthogonal R matrix, where such corrections are necessary to allow for successful leptogenesis. The analytic results show that the corrections to tri-bimaximal mixing and form dominance depend on the pattern of the vacuum misalignment, with the two effects being uncorrelated. © SISSA 2011. Source

Del Debbio L.,University of Edinburgh | Zwicky R.,University of Southampton
Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics | Year: 2011

We consider mass-deformed conformal gauge theories (mCGT) and investigate the scaling behaviour of hadronic observables as a function of the fermion mass. Applying renormalization group arguments directly to matrix elements, we find m_H ~ m^(1/(1+γ*)) and F ~ m^(η_F(γ*)) for given η_F(γ*), for the hadronic masses and the decay constants respectively, thereby generalizing our results from a previous paper to the entire spectrum. Applying the Hellmann-Feynman theorem to the trace anomaly, we obtain the hadron mass scaling independently of renormalization group arguments. From the trace anomaly we obtain a relation reminiscent of the Gell-Mann-Oakes-Renner relation in QCD. Using the new results, we discuss the scaling of the S-parameter inside the conformal window. Finally, we discuss how spectral representations can be used to relate the mass and decay constant trajectories. © 2011 Elsevier B.V. Source

Beullens P.,University of Southampton
International Journal of Production Economics | Year: 2014

While many review articles exist on (deterministic) lot sizing models used in the context of price and quantity discounts, buyer-vendor coordination, supply chain management, and joint economic lot sizing problems, they do not convey the impact of important findings which date back to at least 2002, or, in hindsight, to 1984. As a result, many recent articles still model the financial implications of lot sizing decisions without the assurance that these models help the firm(s) involved to maximise Net Present Value (NPV). This paper therefore reviews these findings, while also adding its own contributions, so as to convey their general importance to lot sizing theory. We show that the underlying principles used in the four key articles that have led to a division in modelling approaches are in fact all in line with NPV, and argue that there should therefore not be the discrepancies that currently persist in the literature. We establish the connections between these four strands of the literature using the solution to a simple variation of Harris' EOQ model, thereby deriving results from Boyaci and Gallego (2002) and Beullens and Janssens (2011), but showing their general applicability to any type of supply-chain structure. The breadth of implications for deterministic lot sizing theory is illustrated using practical examples. We present a stochastic version of the model of Crowther (1964), which is arguably the least understood and applied model, but on the other hand the most important one for realising how these modelling strands can be unified. © 2014 Elsevier B.V. All rights reserved. Source

King S.F.,University of Southampton | Luhn C.,Durham University
Reports on Progress in Physics | Year: 2013

This is a review paper about neutrino mass and mixing and flavour model building strategies based on discrete family symmetry. After a pedagogical introduction and overview of the whole of neutrino physics, we focus on the PMNS mixing matrix and the latest global fits following the Daya Bay and RENO experiments which measure the reactor angle. We then describe the simple bimaximal, tri-bimaximal and golden ratio patterns of lepton mixing and the deviations required for a non-zero reactor angle, with solar or atmospheric mixing sum rules resulting from charged lepton corrections or residual trimaximal mixing. The different types of see-saw mechanism are then reviewed as well as the sequential dominance mechanism. We then give a mini-review of finite group theory, which may be used as a discrete family symmetry broken by flavons either completely, or with different subgroups preserved in the neutrino and charged lepton sectors. These two approaches are then reviewed in detail in separate chapters including mechanisms for flavon vacuum alignment and different model building strategies that have been proposed to generate the reactor angle. We then briefly review grand unified theories (GUTs) and how they may be combined with discrete family symmetry to describe all quark and lepton masses and mixing. Finally, we discuss three model examples which combine an SU(5) GUT with the discrete family symmetries A4, S4 and Δ(96). © 2013 IOP Publishing Ltd. Source

Scaife A.M.M.,University of Southampton | Heald G.H.,Netherlands Institute for Radio Astronomy
Monthly Notices of the Royal Astronomical Society: Letters | Year: 2012

We present parametrized broad-band spectral models valid at frequencies between 30 and 300MHz for six bright radio sources selected from the 3C survey, spread in right ascension from 0 to 24 h. For each source, data from the literature are compiled and tied to a common flux density scale. These data are then used to parametrize an analytic polynomial spectral calibration model. The optimal polynomial order in each case is determined using the ratio of the Bayesian evidence for the candidate models. Maximum likelihood parameter values for each model are presented, with associated errors, and the percentage error in each model as a function of frequency is derived. These spectral models are intended as an initial reference for science from the new generation of low-frequency telescopes now coming online, with particular emphasis on the Low Frequency Array (LOFAR). © 2012 The Authors. Monthly Notices of the Royal Astronomical Society. © 2012 RAS. Source
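Calibration models of this kind are typically low-order polynomials in log flux density versus log frequency. A minimal evaluator of that generic form is sketched below; the coefficients and reference frequency are placeholders for illustration, not the published maximum-likelihood values:

```python
import math

def model_flux(freq_mhz, coeffs, ref_mhz=150.0):
    """Evaluate a broad-band polynomial spectral model of the generic form
    log10 S = a0 + a1*x + a2*x**2 + ..., where x = log10(freq/ref).
    Coefficients and reference frequency are illustrative placeholders."""
    x = math.log10(freq_mhz / ref_mhz)
    log_s = sum(a * x ** i for i, a in enumerate(coeffs))
    return 10.0 ** log_s  # flux density on a linear scale
```

In this parameterization the constant term fixes the flux density at the reference frequency and the linear term plays the role of a spectral index, so doubling the frequency scales a pure power-law model by 2 raised to that index.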

Niu X.,University of Southampton | DeMello A.J.,ETH Zurich
Biochemical Society Transactions | Year: 2012

In the present paper, we review and discuss current developments and challenges in the field of droplet-based microfluidics. This discussion includes an assessment of the basic fluid dynamics of segmented flows, material requirements, fundamental unit operations and how integration of functional components can be applied to specific biological problems. ©The Authors Journal compilation ©2012 Biochemical Society. Source

Poletti F.,University of Southampton
Optics Express | Year: 2014

We propose a novel hollow core fiber design based on nested and non-touching antiresonant tube elements arranged around a central core. We demonstrate through numerical simulations that such a design can achieve considerably lower loss than other state-of-the-art hollow fibers. By adding additional pairs of coherently reflecting surfaces without introducing nodes, the Hollow Core Nested Antiresonant Nodeless Fiber (HC-NANF) can achieve values of confinement loss similar to or lower than its already low surface scattering loss, while maintaining multiple, octave-wide antiresonant windows of operation. As a result, the HC-NANF can in principle reach a total value of loss, including leakage, surface scattering and bend contributions, that is lower than that of conventional solid fibers. In addition, through resonant out-coupling of higher-order modes, such fibers can be made to behave as effectively single-mode fibers. © 2014 Optical Society of America. Source

Shang H.L.,University of Southampton
Computational Statistics and Data Analysis | Year: 2013

Error density estimation in a nonparametric functional regression model with functional predictor and scalar response is considered. The unknown error density is approximated by a mixture of Gaussian densities with means being the individual residuals, and variance as a constant parameter. This proposed mixture error density has a form of a kernel density estimator of residuals, where the regression function is estimated by the functional Nadaraya-Watson estimator. A Bayesian bandwidth estimation procedure that can simultaneously estimate the bandwidths in the kernel-form error density and the functional Nadaraya-Watson estimator is proposed. A kernel likelihood and posterior for the bandwidth parameters are derived under the kernel-form error density. A series of simulation studies show that the proposed Bayesian estimation method performs on par with the functional cross validation for estimating the regression function, but it performs better than the likelihood cross validation for estimating the regression error density. The proposed Bayesian procedure is also applied to a nonparametric functional regression model, where the functional predictors are spectroscopy wavelengths and the scalar responses are fat/protein/moisture content, respectively. © 2013 Elsevier B.V. All rights reserved. Source
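The mixture error density described here has a simple closed form: it is an equal-weight mixture of Gaussians centred at the individual residuals, which is exactly a Gaussian kernel density estimator of the residuals. A minimal sketch in Python, with the paper's Bayesian bandwidth estimation omitted and the bandwidth h treated as given:

```python
import math

def error_density(e, residuals, h):
    """Kernel-form error density: an equal-weight mixture of Gaussians
    centred at the residuals with common bandwidth h. (A sketch of the
    density's form only; the paper estimates h by a Bayesian procedure,
    which is not shown here.)"""
    n = len(residuals)
    c = 1.0 / (n * h * math.sqrt(2.0 * math.pi))
    return c * sum(math.exp(-0.5 * ((e - r) / h) ** 2) for r in residuals)
```

Because each mixture component integrates to 1/n, the density integrates to one by construction, which a crude Riemann sum over a wide grid confirms numerically.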

Del Valle E.,University of Southampton
Journal of the Optical Society of America B: Optical Physics | Year: 2011

The maximum entanglement allowed between two coupled qubits in the steady state established by independent incoherent sources of excitation is reported. Asymmetric configurations where one qubit is excited while the other dissipates the excitation are optimal for entanglement, reaching values three times larger than with thermal sources. The reason is the purification of the steady-state mixture (that includes a Bell state) thanks to the saturation of the pumped qubit. Photon antibunching between the cross emission of the qubits is proposed to experimentally evidence such large degrees of entanglement. © 2011 Optical Society of America. Source

Nedic J.,Imperial College London | Vassilicos J.C.,Imperial College London | Ganapathisubramani B.,University of Southampton
Physical Review Letters | Year: 2013

The recently discovered nonequilibrium turbulence dissipation law implies the existence of axisymmetric turbulent wake regions where the mean flow velocity deficit decays as the inverse of the distance from the wake-generating body and the wake width grows as the square root of that distance. This behavior is different from any documented boundary-free turbulent shear flow to date. Its existence is confirmed in wind tunnel experiments of wakes generated by plates with irregular edges placed normal to an incoming free stream. The wake characteristics of irregular bodies such as buildings, bridges, mountains, trees, coral reefs, and wind turbines are critical in many areas of environmental engineering and fluid mechanics. © 2013 American Physical Society. Source

Bassi A.,University of Trieste | Bassi A.,National Institute of Nuclear Physics, Italy | Lochan K.,Tata Institute of Fundamental Research | Satin S.,Chennai Mathematical Institute | And 2 more authors.
Reviews of Modern Physics | Year: 2013

Quantum mechanics is an extremely successful theory that agrees with every experimental test. However, the principle of linear superposition, a central tenet of the theory, apparently contradicts a commonplace observation: macroscopic objects are never found in a linear superposition of position states. Moreover, the theory does not explain why during a quantum measurement, deterministic evolution is replaced by probabilistic evolution, whose random outcomes obey the Born probability rule. In this article a review is given of an experimentally falsifiable phenomenological proposal, known as continuous spontaneous collapse: a stochastic nonlinear modification of the Schrödinger equation, which resolves these problems, while giving the same experimental results as quantum theory in the microscopic regime. Two underlying theories for this phenomenology are reviewed: trace dynamics and gravity-induced collapse. As the macroscopic scale is approached, predictions of this proposal begin to differ appreciably from those of quantum theory and are being confronted by ongoing laboratory experiments that include molecular interferometry and optomechanics. These experiments, which test the validity of linear superposition for large systems, are reviewed here, and their technical challenges, current results, and future prospects summarized. It is likely that over the next two decades or so, these experiments can verify or rule out the proposed stochastic modification of quantum theory. © 2013 American Physical Society. Source

In the World Health Organisation European Region, more than 2,370,000 years of life are lost to liver disease before the age of 50; more than to lung cancer, trachea, bronchus, oesophageal, stomach, colon, rectum and pancreatic cancer combined. Between 60% and 80% of these deaths are alcohol related, and no pharmaceutical therapy for alcohol-related liver disease has yet been shown to improve long-term survival. The toxicity of alcohol is dose related at both the individual and the population level; overall liver mortality is largely determined by population alcohol consumption. Trends in alcohol consumption correlate closely with trends in overall liver mortality, with 3-5-fold decreases or increases in liver mortality in different European countries over the last few decades. The evidence base for alcohol control measures aimed at reducing population alcohol consumption has been subjected to rigorous evaluation, most recently by the Organisation for Economic Co-operation and Development (OECD). Effective alcohol policy measures reduce alcohol mortality, including mortality from liver disease. The most effective and cost-effective measures have been summarised by the OECD and the World Health Organisation: regular incremental above-inflation tax increases, a minimum price for alcohol, effective protection of children from alcohol marketing, and low-level interventions from clinicians. Simple, cheap and effective changes to alcohol policy by European institutions and member states have the potential to dramatically reduce liver mortality in Europe. © 2015 European Association for the Study of the Liver. Published by Elsevier B.V. All rights reserved. Source

Mirnezami A.H.,University of Southampton
Colorectal disease : the official journal of the Association of Coloproctology of Great Britain and Ireland | Year: 2010

Robotic colorectal surgery is an emerging field and may offer a solution to some of the difficulties inherent to conventional laparoscopic surgery. The aim of this review is to provide a comprehensive and critical analysis of the available literature on the use of robotic technology in colorectal surgery. Studies reporting outcomes of robotic colorectal surgery were identified by systematic searches of electronic databases. Outcomes examined included operating time, length of stay, blood loss, complications, cost, oncological outcome, and conversion rates. Seventeen studies (nine case series, seven comparative studies, one randomized controlled trial) describing 288 procedures were identified and reviewed. Study heterogeneity precluded a meta-analysis of the data. Robotic procedures tend to take longer and cost more, but may reduce the length of stay, blood loss, and conversion rates. Complication profiles and short-term oncological outcomes are similar to those of laparoscopic surgery. Robotic colorectal surgery is a promising field and may provide a powerful additional tool for optimal management of more challenging pathology, including rectal cancer. Further studies are required to better define its role. © 2010 The Authors. Colorectal Disease © 2010 The Association of Coloproctology of Great Britain and Ireland. Source

Palmer K.T.,University of Southampton
Best Practice and Research: Clinical Rheumatology | Year: 2011

Carpal tunnel syndrome (CTS) is a fairly common condition in working-aged people, sometimes caused by physical occupational activities, such as repeated and forceful movements of the hand and wrist or use of hand-held, powered, vibratory tools. Symptoms may be prevented or alleviated by primary control measures at work, and some cases of disease are compensable. Following a general description of the disorder, its epidemiology and some of the difficulties surrounding diagnosis, this review focusses on the role of occupational factors in the causation of CTS and factors that can mitigate risk. Areas of uncertainty, debate and research interest are emphasised where relevant. © 2011 Elsevier Ltd. All rights reserved. Source

Stulz E.,University of Southampton
Chemistry - A European Journal | Year: 2012

The use of DNA in nanobiotechnology has advanced to a stage at which almost any two- or three-dimensional architecture can be designed with high precision. The choice of the DNA sequences is essential for successful self-assembly, and opens new ways of making nanosized monomolecular assemblies with predictable structure and size. The inclusion of designer nucleoside analogues further adds functionality with addressable groups, which have an influence on the function of the DNA nano-objects. This article highlights the recent achievements in this emerging field and gives an outlook on future perspectives and applications. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim. Source

Non-adherence to prescribed treatments is the primary cause of treatment failure in pediatric long-term conditions. Greater understanding of parents' and caregivers' reasons for non-adherence can help to address this problem and improve outcomes for children with long-term conditions. We carried out a systematic review and thematic synthesis of qualitative studies. Medline, Embase, Cinahl and PsycInfo were searched for relevant studies published in English and German between 1996 and 2011. Papers were included if they contained qualitative data, for example from interviews or focus groups, reporting the views of parents and caregivers of children with a range of long-term conditions on their treatment adherence. Papers were quality assessed and analysed using thematic synthesis. Nineteen papers were included, reporting 17 studies with caregivers from 423 households in five countries. Long-term conditions included asthma, cystic fibrosis, HIV, diabetes and juvenile arthritis. Across all conditions caregivers were making ongoing attempts to balance competing concerns about the treatment (such as perceived effectiveness or fear of side effects) with concerns about the condition itself (for instance its perceived long-term threat to the child). Although the barriers to implementing treatment regimens varied across the different conditions (including the complexity and time-consuming nature of treatments and the unpalatability and side effects of medications), it was clear that caregivers worked hard to overcome these day-to-day challenges and to deal with child resistance to treatments. Yet carers reported that strict treatment adherence, which is expected by health professionals, could threaten their priorities around preserving family relationships and providing a 'normal life' for their child and any siblings. 
Treatment adherence in long-term pediatric conditions is a complex issue which needs to be seen in the context of caregivers balancing the everyday needs of the child within everyday family life. Health professionals may be able to help caregivers respond positively to the challenge of treatment adherence for long-term conditions by simplifying treatment regimens to minimise impact on family life and being aware of difficulties around child resistance and supportive of strategies to attempt to overcome this. Caregivers would also welcome help with communicating with children about treatment goals. Source

Andrade T.,Durham University | Withers B.,University of Southampton
Journal of High Energy Physics | Year: 2014

We consider a holographic model consisting of Einstein-Maxwell theory in d + 1 bulk spacetime dimensions with d - 1 massless scalar fields. Momentum relaxation is realised simply through spatially dependent sources for operators dual to the neutral scalars, which can be engineered so that the bulk stress tensor and resulting black brane geometry are homogeneous and isotropic. We analytically calculate the DC conductivity, which is finite. In the d = 3 case, both the black hole geometry and shear-mode current-current correlators are those of a sector of massive gravity. © 2014 The Author(s). Source

Withers B.,University of Southampton
Journal of High Energy Physics | Year: 2014

We construct cohomogeneity-three, finite temperature stationary black brane solutions dual to a field theory exhibiting checkerboard order. The checkerboards form a backreacted part of the bulk solution, and are obtained numerically from the coupled Einstein-Maxwell-scalar PDE system. They arise spontaneously and without the inclusion of an explicit lattice. The phase exhibits both charge and global U(1)-current modulation, which are periodic in two spatial directions. The current circulates within each checkerboard plaquette. We explore the competition with striped phases, finding first-order checkerboard to stripe phase transitions. We also detail spatially modulated instabilities of asymptotically AdS black brane backgrounds with neutral scalar profiles, including those with a hyperscaling violating IR geometry at zero temperature. © 2014, The Author(s). Source

Clarke N.,University of Southampton
Health and Social Care in the Community | Year: 2013

Recent UK social care reforms characterised by a policy of increasing personalisation and choice in adult social care have been accompanied by major reorganisation and investment in workforce training and development. There is an assumed link between training and the quality of care received. This assumption has a long pedigree in social care, but rarely does it receive the scrutiny necessary for us to understand better the nature of this relationship. This paper focuses on the potential for in-service training to contribute to the transformation in social care expected by policy-makers. Reviewing recent findings from the evaluation of training in social care shows that problems persist in demonstrating that training results in changes in practitioner behaviour back on the job. Findings within the social care literature mirror those found more widely in suggesting that learner characteristics, intervention design and delivery, and the workplace environment combine to influence whether training transfers to use on the job. The argument advanced here is that without a focus on the transfer of training, the contribution of training to quality of care outcomes will remain illusory. A shift is required in policy-makers' mindsets away from training, to a focus on training transfer in directing workforce development strategies. It might then be possible to begin to identify how and in what configurations training may be associated with quality of care outcomes. © 2012 Blackwell Publishing Ltd. Source

May C.,University of Southampton
Implementation Science | Year: 2013

Understanding and evaluating the implementation of complex interventions in practice is an important problem for healthcare managers and policy makers, and for patients and others who must operationalize them beyond formal clinical settings. It has been argued that this work should be founded on theory that provides a foundation for understanding, designing, predicting, and evaluating dynamic implementation processes. This paper sets out core constituents of a general theory of implementation, building on Normalization Process Theory and linking it to key constructs from recent work in sociology and psychology. These are informed by ideas about agency and its expression within social systems and fields, social and cognitive mechanisms, and collective action. This approach unites a number of contending perspectives in a way that makes possible a more comprehensive explanation of the implementation and embedding of new ways of thinking, enacting and organizing practice. © 2013 May; licensee BioMed Central Ltd. Source

Shih S.-I.,University of Southampton
PLoS ONE | Year: 2013

There is a rapidly increasing trend in media-media multitasking, or MMM (using two or more media concurrently). At a recent conference, scholars from diverse disciplines expressed concerns that indulgence in MMM may compromise well-being and/or cognitive abilities. However, research on MMM's impacts is too sparse to inform the general public and policy makers whether MMM should be encouraged, managed, or minimized. The primary purpose of the present study was to develop an innovative computerized instrument - the Survey of the Previous Day (SPD) - to quantify MMM as well as media-nonmedia and nonmedia-nonmedia multitasking and sole-tasking. The secondary purpose was to examine whether these indices could predict a sample of well-being related, psychosocial measures. In the SPD, participants first recalled (typed) what they did during each hour of the previous day. In later parts of the SPD, participants analysed activities and their timing and duration for each hour of the previous day, while the relevant recall was on display. Participants also completed the Media Use Questionnaire. The results showed no significant relationships between the tasking measures and the well-being related measures. Given how little is known about the associations between MMM and well-being, the null results may offer some general reassurance to those who are apprehensive about negative impacts of MMM. © 2013 Shui-I Shih. Source
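The four tasking categories the SPD distinguishes (sole-tasking, media-media, media-nonmedia, and nonmedia-nonmedia multitasking) can be illustrated with a small classification sketch. The activity labels and the hourly diary below are hypothetical examples, not data or code from the instrument itself:

```python
# Hypothetical hourly diary: each entry is the set of activities a
# participant reported doing concurrently during one hour.
MEDIA = {"tv", "music", "web", "phone"}  # assumed media categories

day = [
    {"tv", "web"},           # two media at once
    {"web"},                 # a single activity
    {"music", "cooking"},    # one media, one non-media activity
    {"cooking", "cleaning"}, # two non-media activities
]

def classify(hour):
    """Assign one hour's concurrent activities to a tasking category."""
    media = [a for a in hour if a in MEDIA]
    if len(hour) < 2:
        return "sole"
    if len(media) >= 2:
        return "media-media"
    return "media-nonmedia" if media else "nonmedia-nonmedia"

counts = {}
for hour in day:
    kind = classify(hour)
    counts[kind] = counts.get(kind, 0) + 1
print(counts)
```

Summing such per-hour labels over a day yields the kind of tasking indices the study correlates with well-being measures.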

Holgate S.T.,University of Southampton
Journal of Allergy and Clinical Immunology | Year: 2011

Current asthma therapy is based on the use of adrenergic bronchodilator and anti-inflammatory drugs, the specificity, efficacy, duration of action, and safety of which have been derived through classical pharmacology and medicinal chemistry. That asthma is a TH2-type inflammatory disorder frequently associated with atopy and allergic comorbidities has led to a concentrated effort to find treatments that act selectively on this pathway. A systematic literature review was undertaken, as well as a review of the Web site Clinicaltrials.gov for ongoing trials. Targets have included T cells themselves and their associated cytokines, chemokines, and receptors, mostly targeted with biological agents. With the exception of anti-human IgE, none of these have met the expectations predicted from animal models and human in vitro tests. For most of these new therapies, only a very small subpopulation appears to respond. A case is made for a different approach to drug discovery based on acquiring a greater understanding of asthma stratification, the relevant pathways involved, and the development of appropriate diagnostic tests enabling the targeting of selective treatments to those asthmatic phenotypes most likely to respond. The recognition that asthma is more than allergy mandates improved predictive animal models and an appreciation that many of the environmental insults that initiate, consolidate, and exacerbate asthma operate through an epithelium functioning in a disorderly fashion. An integrated model that places the epithelium at the forefront of asthma pathogenesis suggests that greater emphasis should be placed on therapeutics that increase the airways' resistance against the inhaled environment rather than focusing only on suppression of inflammation. © 2011 American Academy of Allergy, Asthma & Immunology. Source

Blainey S.,University of Southampton
Journal of Transport Geography | Year: 2010

This paper details models which have been developed to forecast the total number of trips made from local rail stations in England and Wales over a one year period. The use of multiple linear regression and geographically weighted regression in calibration are compared, with both explaining over 75% of the variation in the observed data. The latter technique has not previously been used in rail demand modelling, and allows significant spatial variations in the effect of independent variables to be identified and mapped. A number of catchment definition methods are investigated, as is the inclusion of a wide range of demographic and service related explanatory variables. The models developed are used to forecast usage at stations on the recently opened Ebbw Vale branch line in South Wales and these predictions are compared to initial usage figures. © 2008 Elsevier Ltd. All rights reserved. Source
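The contrast between a single global regression and a geographically weighted one can be sketched numerically: a geographically weighted regression (GWR) refits the model at each location, down-weighting distant observations with a kernel, so coefficients are allowed to vary over space. The synthetic station data, the Gaussian kernel, and the bandwidth value below are illustrative assumptions, not the paper's calibration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stations: coordinates, one predictor (a stand-in for a
# catchment variable), and trips whose slope drifts west to east.
n = 200
coords = rng.uniform(0, 100, size=(n, 2))
pop = rng.uniform(1, 10, size=n)
beta_local = 1.0 + coords[:, 0] / 100.0        # true spatially varying effect
trips = beta_local * pop + rng.normal(0, 0.5, size=n)

def ols(X, y, w=None):
    """(Weighted) least-squares coefficients via the normal equations."""
    if w is None:
        w = np.ones_like(y)
    Xw = X * w[:, None]
    return np.linalg.solve(Xw.T @ X, Xw.T @ y)

X = np.column_stack([np.ones(n), pop])

# Global model: one coefficient vector for every station.
beta_global = ols(X, trips)

# GWR-style local models: refit at each station with Gaussian
# distance weights (the bandwidth is a tuning choice).
bandwidth = 20.0
beta_gwr = np.empty((n, 2))
for i in range(n):
    d = np.linalg.norm(coords - coords[i], axis=1)
    w = np.exp(-(d / bandwidth) ** 2)
    beta_gwr[i] = ols(X, trips, w)

# Local slopes should track the true spatial drift better than the
# single global slope does.
err_global = np.abs(beta_global[1] - beta_local).mean()
err_local = np.abs(beta_gwr[:, 1] - beta_local).mean()
print(err_local < err_global)
```

Mapping the per-station coefficients `beta_gwr` is what makes the spatial variation in an effect visible, which a single global fit cannot show.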

Vorobev A.,University of Southampton
Physical Review E - Statistical, Nonlinear, and Soft Matter Physics | Year: 2010

We use the Cahn-Hilliard approach to model the slow dissolution dynamics of binary mixtures. An important peculiarity of the Cahn-Hilliard-Navier-Stokes equations is the necessity to use the full continuity equation even for a binary mixture of two incompressible liquids due to dependence of mixture density on concentration. The quasicompressibility of the governing equations brings a short time-scale (quasiacoustic) process that may not affect the slow dynamics but may significantly complicate the numerical treatment. Using the multiple-scale method we separate the physical processes occurring on different time scales and, ultimately, derive the equations with the filtered-out quasiacoustics. The derived equations represent the Boussinesq approximation of the Cahn-Hilliard-Navier-Stokes equations. This approximation can be further employed as a universal theoretical model for an analysis of slow thermodynamic and hydrodynamic evolution of the multiphase systems with strongly evolving and diffusing interfacial boundaries, i.e., for the processes involving dissolution/nucleation, evaporation/condensation, solidification/melting, polymerization, etc. © 2010 The American Physical Society. Source
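For orientation, a generic dimensionless form of the equations referred to above can be written as follows; the notation is schematic (with $f_0$ a homogeneous free energy and $\mathrm{Cn}$ a Cahn number), an assumption of this sketch rather than the paper's exact conventions:

```latex
\partial_t C + (\mathbf{u}\cdot\nabla)C = \nabla^{2}\mu ,
\qquad
\mu = \frac{\partial f_{0}}{\partial C} - \mathrm{Cn}\,\nabla^{2}C ,
\qquad
\partial_t \rho + \nabla\cdot(\rho\,\mathbf{u}) = 0 ,\quad \rho=\rho(C).
```

The last relation, the full continuity equation with a concentration-dependent density, is the source of the quasicompressibility discussed above; the Boussinesq-type reduction filters out the fast quasiacoustic dynamics it introduces.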

Holgate S.T.,University of Southampton
Immunological Reviews | Year: 2011

The adoption of the concept that asthma is primarily a disease most frequently associated with elaboration of T-helper 2 (Th2)-type inflammation has led to the widely held concept that its origins, exacerbation, and persistence are allergen driven. Setting aside the asthma that is expressed in non-allergic individuals leaves the great proportion of asthma that is associated with allergy (or atopy) and that often has its onset in early childhood. Evidence is presented that asthma is primarily an epithelial disorder and that its origin as well as its clinical manifestations have more to do with altered epithelial physical and functional barrier properties than being purely linked to allergic pathways. In genetically susceptible individuals, impaired epithelial barrier function renders the airways vulnerable to early life virus infection, and this in turn provides the stimulus to prime immature dendritic cells toward directing a Th2 response and local allergen sensitization. Continued epithelial susceptibility to environmental insults such as viral, allergen, and pollutant exposure and impaired repair responses leads to asthma persistence and provides the mediator and growth factor microenvironment for persistence of inflammation and airway wall remodeling. Increased deposition of matrix in the epithelial lamina reticularis provides evidence for ongoing epithelial barrier dysfunction, while physical distortion of the epithelium consequent upon repeated bronchoconstriction provides additional stimuli for remodeling. This latter response initially serves a protective function but, if exaggerated, may lead to fixed airflow obstruction associated with more severe and chronic disease. Dual pathways in the origins, persistence, and progression of asthma help explain why anti-inflammatory treatments fail to influence the natural history of asthma in childhood and do so only partially in chronic severe disease. 
Positioning the airway epithelium as fundamental to the origins and persistence of asthma provides a rationale for pursuit of therapeutics that increase the resistance of the airways to environmental insults rather than concentrating all effort on suppressing inflammation. © 2011 John Wiley & Sons A/S. Source

Channon A.A.R.,University of Southampton
Journal of Biosocial Science | Year: 2011

Birth weight is known to be closely related to child health, although as many infants in developing countries are not weighed at birth and thus will not have a recorded birth weight, it is difficult to use birth weight when analysing the determinants of child illness. It is common to use a proxy for birth weight instead, namely the mother's perception of the baby's size at birth. Using DHS surveys in Cambodia, Kazakhstan and Malawi, the responses to this question were assessed to indicate the relationship between birth weight and mother's perception. The determinants of perception were investigated using multilevel ordinal regression to gauge if they are different for infants with and without a recorded birth weight, and to consider if there are societal or community influences on perception of size. The results indicate that mother's perception is closely linked to birth weight, although there are other influences on the classification of infants into size groups. On average, a girl of the same birth weight as a boy will be classified into a smaller size category. Likewise, infants who died by the time of the survey will be classified as smaller than similarly heavy infants who are still alive. There are significant variations in size perception between sampling districts and clusters, indicating that mothers mainly judge their child's size against a national norm. However, there is also evidence that the size of infants in the community around the newborn also has an effect on the final size perception classification. Overall the results indicate that mother's perception of size is a good proxy for birth weight in large nationally representative surveys, although care should be taken to control for societal influences on perception. © Copyright Cambridge University Press 2011. Source

Dias O.J.C.,University of Southampton | Godazgar M.,University of Cambridge | Santos J.E.,University of Cambridge
Physical Review Letters | Year: 2015

We provide strong evidence that, up to 99.999% of extremality, Kerr-Newman black holes (KNBHs) are linear mode stable within Einstein-Maxwell theory. We derive and solve, numerically, a coupled system of two partial differential equations for two gauge invariant fields that describe the most general linear perturbations of a KNBH. We determine the quasinormal mode (QNM) spectrum of the KNBH as a function of its three parameters and find no unstable modes. In addition, we find that the lowest radial overtone QNMs that are connected continuously to the gravitational ℓ = m = 2 Schwarzschild QNM dominate the spectrum for all values of the parameter space (m is the azimuthal number of the wave function and ℓ measures the number of nodes along the polar direction). Furthermore, the (lowest radial overtone) QNMs with ℓ = m approach Re ω = mΩ_H^ext and Im ω = 0 at extremality; this is a universal property for any field of arbitrary spin |s|≤2 propagating on a KNBH background (ω is the wave frequency and Ω_H^ext the black hole angular velocity at extremality). We compare our results with available perturbative results in the small charge or small rotation regimes and find good agreement. © 2015 American Physical Society. Source
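The extremal limit quoted above can be stated compactly; here $a$, $Q$ and $M$ denote the Kerr-Newman spin parameter, charge and mass in geometric units, with extremality at $a^2 + Q^2 = M^2$:

```latex
\operatorname{Re}\omega \;\to\; m\,\Omega_{H}^{\mathrm{ext}},
\qquad
\operatorname{Im}\omega \;\to\; 0,
\qquad \text{as} \qquad a^{2} + Q^{2} \to M^{2}.
```

The vanishing imaginary part signals arbitrarily long-lived modes near extremality, consistent with the absence of an instability reported in the abstract.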

Pound A.,University of Southampton
Physical Review D - Particles, Fields, Gravitation and Cosmology | Year: 2015

In general-relativistic perturbation theory, a point mass accelerates away from geodesic motion due to its gravitational self-force. Because the self-force is small, one can often approximate the motion as geodesic. However, it is well known that self-force effects accumulate over time, making the geodesic approximation fail on long time scales. It is less well known that this failure at large times translates to a failure at large distances as well. At second perturbative order, two large-distance pathologies arise: spurious secular growth and infrared-divergent retarded integrals. Both stand in the way of practical computations of second-order self-force effects. Utilizing a simple flat-space scalar toy model, I develop methods to overcome these obstacles. The secular growth is tamed with a multiscale expansion that captures the system's slow evolution. The divergent integrals are eliminated by matching to the correct retarded solution at large distances. I also show how to extract conservative self-force effects by taking local-in-time "snapshots" of the global solution. These methods are readily adaptable to the physically relevant case of a point mass orbiting a black hole. © 2015 American Physical Society. Source
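The secular-growth problem and its multiscale cure can be seen in a standard toy example, the weakly damped oscillator x'' + εx' + x = 0 (this flat-space analogy is this sketch's choice, not the paper's model): the naive first-order expansion cos t − (εt/2)cos t grows linearly in t, while the two-timescale solution e^(−εt/2) cos t, built on the slow time T = εt, stays uniformly accurate:

```python
import numpy as np

eps = 0.05
t = np.linspace(0, 200, 2001)   # "long" times, t of order 1/eps and beyond

# Exact solution of x'' + eps*x' + x = 0 with x(0)=1, x'(0)=0.
omega = np.sqrt(1 - eps**2 / 4)
exact = np.exp(-eps * t / 2) * (
    np.cos(omega * t) + (eps / (2 * omega)) * np.sin(omega * t)
)

# Naive first-order expansion: the secular term grows linearly in t,
# so the approximation fails badly once t is of order 1/eps.
naive = np.cos(t) - (eps * t / 2) * np.cos(t)

# Two-timescale (multiscale) approximation: the slow time eps*t
# appears only inside a bounded envelope, so there is no secular growth.
multiscale = np.exp(-eps * t / 2) * np.cos(t)

err_naive = np.abs(naive - exact).max()
err_multi = np.abs(multiscale - exact).max()
print(err_multi < err_naive)
```

The naive error grows without bound as the window lengthens, while the multiscale error stays small; this is the same mechanism by which a multiscale expansion tames secular growth in the second-order self-force problem.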

Wintrup J.,University of Southampton
BMC Medical Ethics | Year: 2015

Background: In the UK, higher education and health care providers share responsibility for educating the workforce. The challenges facing health practice also face health education, and as educators we are implicated by the way we design curricula and through students' experiences and their stories. This paper asks whether ethics education has a new role to play, in a context of major organisational change, a global and national austerity agenda and the ramifications of disturbing reports of failures in care. It asks: how would it be different if equal amounts of attention were given to the conditions in which health decisions are made, if the ethics of organisational and policy decisions were examined, and if guiding collaborations with patients and others who use services informed ethics education and its processes? Discussion: The discussion is in three parts. In part one, an example from an inspection report is used to question the ways in which clinical events are decontextualised and constructed for different purposes. The ramifications of a decision are reflected upon and a case made for different kinds of allegiances to be developed. In part two, I broaden the scope of ethics education and make a case for beginning with the messy realities of practice rather than with overarching moral theories. The importance of power in ethical practice is introduced, and in part three the need for greater political and personal awareness is proposed as a condition of moral agency. Summary: This paper proposes that ethics education has a new contribution to make, in supporting and promoting ethical practice - as it is defined in and by the everyday actions and decisions of practitioners and people who need health services. Ethics education that promotes moral agency, rather than problem solving approaches, would explore not only clinical problems, but also the difficult and contested arenas in which they occur. 
It would seek multiple perspectives and would begin with places and people, and their priorities. It would support students to locate their practice in imperfect global contexts, and to understand how individual and collective forms of power can influence healthcare quality. © 2015 Wintrup; licensee BioMed Central. Source

Pound A.,University of Southampton
Physical Review D - Particles, Fields, Gravitation and Cosmology | Year: 2015

Through second order in perturbative general relativity, a small compact object in an external vacuum spacetime obeys a generalized equivalence principle: although it is accelerated with respect to the external background geometry, it is in free fall with respect to a certain effective vacuum geometry. However, this single principle takes very different mathematical forms, with very different behaviors, depending on how one treats perturbed motion. Furthermore, any description of perturbed motion can be altered by a gauge transformation. In this paper, I clarify the relationship between two treatments of perturbed motion and the gauge freedom in each. I first show explicitly how one common treatment, called the Gralla-Wald approximation, can be derived from a second, called the self-consistent approximation. I next present a general treatment of smooth gauge transformations in both approximations, in which I emphasize that the approximations' governing equations can be formulated in an invariant manner. All of these analyses are carried through second perturbative order, but the methods are general enough to go to any order. Furthermore, the tools I develop, and many of the results, should have broad applicability to any description of perturbed motion, including osculating-geodesic and two-timescale descriptions. © 2015 American Physical Society. Source

King S.F.,University of Southampton
Journal of Physics G: Nuclear and Particle Physics | Year: 2015

Neutrino mass and mixing data motivates extending the Standard Model (SM) to include a non-Abelian discrete flavour symmetry in order to accurately predict the large leptonic mixing angles and CP violation. We begin with an overview of the SM puzzles, followed by a description of some classic lepton mixing patterns. Lepton mixing may be regarded as a deviation from tri-bimaximal mixing, with charged lepton corrections leading to solar mixing sum rules, or tri-maximal lepton mixing leading to atmospheric mixing rules. We survey neutrino mass models, using a roadmap based on the open questions in neutrino physics. We then focus on the seesaw mechanism with right-handed neutrinos, where sequential dominance (SD) can account for large lepton mixing angles and CP violation, with precise predictions emerging from constrained SD (CSD). We define the flavour problem and discuss progress towards a theory of flavour using GUTs and discrete family symmetry. We classify models as direct, semidirect or indirect, according to the relation between the Klein symmetry of the mass matrices and the discrete family symmetry, in all cases focussing on spontaneous CP violation. Finally we give two examples of realistic and highly predictive indirect models with CSD, namely an A to Z of flavour with Pati-Salam and a fairly complete A4 × SU(5) SUSY GUT of flavour, where both models have interesting implications for leptogenesis. Source
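The tri-bimaximal pattern used as the reference point above corresponds, up to the sign and phase conventions that vary between authors, to the lepton mixing matrix

```latex
U_{\mathrm{TB}} =
\begin{pmatrix}
\sqrt{\tfrac{2}{3}} & \tfrac{1}{\sqrt{3}} & 0 \\[2pt]
-\tfrac{1}{\sqrt{6}} & \tfrac{1}{\sqrt{3}} & \tfrac{1}{\sqrt{2}} \\[2pt]
\tfrac{1}{\sqrt{6}} & -\tfrac{1}{\sqrt{3}} & \tfrac{1}{\sqrt{2}}
\end{pmatrix},
```

giving $\sin^2\theta_{12} = 1/3$, $\sin^2\theta_{23} = 1/2$ and $\theta_{13} = 0$; the observed nonzero $\theta_{13}$ is what the deviation schemes mentioned above (charged lepton corrections and tri-maximal mixing) parametrise.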

Pang D.-W.,University of Southampton
Journal of High Energy Physics | Year: 2015

We study corner contributions to holographic entanglement entropy in non-conformal backgrounds: a kink for D2-branes as well as a cone and two different types of crease for D4-branes. Unlike in 2 + 1-dimensional CFTs, the corner contribution to the holographic entanglement entropy of D2-branes exhibits a power law behaviour rather than a logarithmic term. However, the logarithmic term emerges in the holographic entanglement entropy of D4-branes. We identify the logarithmic term for a cone in the D4-brane background as the universal contribution under appropriate limits and compare it with other physical quantities. © 2015, The Author(s). Source

de Medeiros Varzielas I.,University of Southampton
Journal of High Energy Physics | Year: 2015

The observed neutrino mixing, having a near maximal atmospheric neutrino mixing angle and a large solar mixing angle, is close to tri-bi-maximal. This structure may be related to the existence of a discrete non-Abelian family symmetry. In this paper the family symmetry is the non-Abelian discrete group Δ(27), a subgroup of SU(3) with triplet and anti-triplet representations. Different frameworks are constructed in which the mixing follows from combining fermion mass terms with the vacuum structure enforced by the discrete symmetry. Mass terms for the fermions originate from familon triplets, anti-triplets or both. Vacuum alignment for the family symmetry breaking familons follows from simple invariants. © 2015, The Author(s). Source

Macarthur B.D.,University of Southampton | Lemischka I.R.,Mount Sinai School of Medicine
Cell | Year: 2013

Recent reports using single-cell profiling have indicated a remarkably dynamic view of pluripotent stem cell identity. Here, we argue that the pluripotent state is not well defined at the single-cell level but rather is a statistical property of stem cell populations, amenable to analysis using the tools of statistical mechanics and information theory. © 2013 Elsevier Inc. Source
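A minimal information-theoretic reading of this claim: if pluripotency is a statistical property of a population rather than of single cells, it can be quantified by, for example, the Shannon entropy of the distribution of cells over discretised expression states. The marker states and cell counts below are hypothetical illustrations, not data from the paper:

```python
import numpy as np

def shannon_entropy(counts):
    """Entropy (in bits) of an empirical distribution over discrete states."""
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    p = p[p > 0]                      # 0 * log 0 is taken as 0
    return float(-(p * np.log2(p)).sum())

# Hypothetical populations: counts of cells observed in each of four
# discretised expression states of a marker gene.
homogeneous = [97, 1, 1, 1]       # most cells share one state
heterogeneous = [25, 25, 25, 25]  # cells spread across states

print(shannon_entropy(heterogeneous))  # maximal for 4 states: 2.0 bits
print(shannon_entropy(homogeneous) < shannon_entropy(heterogeneous))
```

On this view, a "more pluripotent" population is one whose state distribution carries more entropy, a property invisible in any single-cell snapshot.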

Parker J.D.,University of Southampton
Myrmecological News | Year: 2010

Research on aging in social insects has progressed much more than has been generally acknowledged. Here I review what I think are the four greatest contributions of social insect work to the field of aging research, with the hope of highlighting the truly exciting discoveries being made. These include the reversal of the fecundity/lifespan and size/lifespan trade-offs due to the evolution of sociality, the finding that social environment can reverse the effects of aging, the contribution of social insect work to the overturning of the free radical theory of aging, and the discovery of vitellogenin as an important protein for longevity. All of these discoveries have important ramifications for human and mammalian aging. Source

To determine the natural history and clinical significance of forefoot bursae over a 12-month period in patients with rheumatoid arthritis (RA). Patients with RA (n=149) attending rheumatology outpatient clinics were assessed at baseline. A total of 120 participants, mean±SD age 60.7±12.1 years and mean±SD disease duration 12.99±10.4 years, completed the 12-month followup (98 women, 22 men, 93 rheumatoid factor positive, 24 rheumatoid factor negative, and 3 unknown). Musculoskeletal ultrasound (US) was used to identify forefoot bursae in all of the participants. Clinical markers of disease activity (well-being visual analog scale [VAS], erythrocyte sedimentation rate [ESR], C-reactive protein [CRP] level, and Disease Activity Score in 28 joints [DAS28]) and foot symptoms on the Leeds Foot Impact Scale (LFIS) Questionnaire were recorded on both occasions. Presence of US-detectable forefoot bursae was identified at baseline in 93.3% of the returning participants (n=120; individual mean 3.7, range 0-11). Significant associations were identified between bursae presence and patient-reported foot impact for impairment/footwear (LFISIF; baseline: r=0.226, P=0.013 and 12 months: r=0.236, P=0.009) and activity limitation/participation restriction (LFISAP; baseline: r=0.254, P=0.005 and 12 months: r=0.235, P=0.010). After 12 months, 42.5% of participants had an increase in the number of US-detectable forefoot bursae and 45% of participants had a decrease. Changes in bursae number significantly correlated with changes in LFISIF (r=0.216, P=0.018) and LFISAP (r=0.193, P=0.036). No significant associations were identified between changes in bursae and changes in global well-being VAS, ESR, CRP level, or DAS28. The findings of this study suggest that forefoot bursae may regress or hypertrophy over time in patients with RA, and that these changes may be associated with self-reported foot impairment and activity restriction. Copyright © 2010 by the American College of Rheumatology. 
Source
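
The associations above are reported as correlation coefficients (r) with P values. As a minimal, self-contained illustration of the statistic involved (a Pearson r on made-up numbers, not the study's actual analysis pipeline or data):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: bursae counts vs. a foot-impact score
bursae = [0, 2, 3, 5, 7, 11]
impact = [4, 6, 9, 12, 15, 22]
print(round(pearson_r(bursae, impact), 3))
```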

Shadbolt N.,University of Southampton
International Journal of Human Computer Studies | Year: 2013

In this issue Brian Gaines (Gaines, 2012) provides a magisterial review of the origins of humankind and human knowledge. It is a reminder that, in spite of all our technology, it takes a very special type of scholarship to weave such a compelling narrative over such a monumental range of time and material. This article takes a number of strands of Gaines' arguments and presents them in the context of the research trajectories of my own work, that of my colleagues, and that of the field. © 2012 Published by Elsevier Ltd. Source

Sung K.-C.,Sungkyunkwan University | Wild S.H.,University of Edinburgh | Byrne C.D.,University of Southampton
Journal of Clinical Endocrinology and Metabolism | Year: 2013

Context: Fatty liver is associated with an increased risk of type 2 diabetes, but whether an increased risk remains in people in whom fatty liver resolves over time is not known. Objective: The objective of the study was to assess the risk of incident diabetes at a 5-year follow-up in people in whom: 1) new fatty liver developed; 2) existing fatty liver resolved; and 3) fatty liver severity worsened over 5 years. Design and Methods: A total of 13,218 people without diabetes at baseline from a Korean occupational cohort were examined at baseline and after 5 years, using a retrospective study design. Fatty liver status was assessed at baseline and follow-up as absent, mild, or moderate/severe using standard ultrasound criteria. Adjusted odds ratios (aORs) and 95% confidence intervals (CIs) for incident diabetes at follow-up were estimated after controlling for multiple potential confounders. Results: Two hundred thirty-four people developed incident diabetes. Over 5 years, fatty liver resolved in 828, developed in 1640, and progressed from mild to moderate/severe in 324 people. Resolution of fatty liver was not associated with a risk of incident diabetes [aOR 0.95 (95% CI 0.46, 1.96), P=.89]. Development of new fatty liver was associated with incident diabetes [aOR 2.49 (95% CI 1.49, 4.14), P=.001]. In individuals in whom severity of fatty liver worsened over 5 years (from mild to moderate/severe), there was a marked increase in the risk of incident diabetes [aOR 6.13 (95% CI 2.56, 14.68), P=.001, compared with the risk in people with resolution of fatty liver]. Conclusion: Change in fatty liver status over time is associated with markedly variable risks of incident diabetes. Copyright © 2013 by The Endocrine Society. Source
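
The adjusted odds ratios above come from multivariable models, but the underlying arithmetic of an odds ratio and its Wald 95% confidence interval can be sketched on an unadjusted 2×2 table (the counts below are hypothetical, chosen only for illustration, not the cohort's data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio for a 2x2 table with a Wald 95% CI.
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of ln(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: incident diabetes by new-fatty-liver status
print(odds_ratio_ci(60, 1580, 174, 11404))
```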

Holt R.I.G.,University of Southampton
Analytical and Bioanalytical Chemistry | Year: 2011

It is believed that athletes have been abusing growth hormone (GH) for its anabolic and lipolytic effects since the early 1980s, at least a decade before endocrinologists began to treat adults with GH deficiency. There is an on-going debate about whether GH is performance enhancing. Although many of the early studies were negative, more recent studies suggest that GH improves strength and sprint capacity, particularly when it is combined with anabolic steroids. Although use of GH is banned by the World Anti-Doping Agency (WADA), its detection remains challenging. Two approaches have been developed to detect GH abuse. The first is based on measurement of pituitary GH isoforms; after injection of recombinant human GH, which comprises solely the 22-kDa isoform, endogenous production is down-regulated leading to an increase in the 22-kDa isoform relative to other isoforms. The second is based on measurement of markers of GH action. Insulin-like growth factor-I (IGF-I) and N-terminal pro-peptide of type III collagen (P-III-NP) increase in response to GH administration in a dose-dependent manner. When combined with discriminant function analysis, use of these markers differentiates between individuals taking GH and placebo. Subsequent studies have shown that the test is applicable across different ethnicities and is unaffected by injury. WADA regulations state that when analytes are measured by immunoassay, two assays are needed. Final validation of the marker test is currently being undertaken with modern commercially available immunoassays to finalise the threshold values to be used to determine whether a doping offence has been committed. [Figure not available: see fulltext.] © 2011 Springer-Verlag. Source
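
The marker approach combines IGF-I and P-III-NP through discriminant function analysis. The published GH-2000 score formulae are deliberately not reproduced here; instead, a generic two-marker Fisher linear discriminant on synthetic data sketches the idea of collapsing two markers into one separating score:

```python
def fisher_discriminant(group0, group1):
    """Fisher linear discriminant for two groups of 2-D marker values.
    Returns weights w so that score = w[0]*x + w[1]*y separates the groups."""
    def mean(g):
        n = len(g)
        return [sum(p[0] for p in g) / n, sum(p[1] for p in g) / n]

    def scatter(g, m):
        sxx = sum((p[0] - m[0]) ** 2 for p in g)
        syy = sum((p[1] - m[1]) ** 2 for p in g)
        sxy = sum((p[0] - m[0]) * (p[1] - m[1]) for p in g)
        return sxx, syy, sxy

    m0, m1 = mean(group0), mean(group1)
    s0, s1 = scatter(group0, m0), scatter(group1, m1)
    sxx, syy, sxy = s0[0] + s1[0], s0[1] + s1[1], s0[2] + s1[2]
    det = sxx * syy - sxy * sxy
    dx, dy = m1[0] - m0[0], m1[1] - m0[1]
    # w = S^-1 (m1 - m0), with S the pooled within-class scatter matrix
    return [(syy * dx - sxy * dy) / det, (sxx * dy - sxy * dx) / det]

# Synthetic (log-scale) IGF-I / P-III-NP pairs: placebo vs. GH-treated
placebo = [(5.1, 4.0), (5.3, 4.1), (5.0, 3.9), (5.2, 4.2)]
treated = [(5.9, 4.8), (6.1, 5.0), (5.8, 4.9), (6.0, 4.7)]
print(fisher_discriminant(placebo, treated))
```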

Tavassoli A.,University of Southampton
Chemical Society Reviews | Year: 2011

The human immunodeficiency virus (HIV), the causative agent of acquired immunodeficiency syndrome (AIDS), relies heavily on protein-protein interactions in almost every step of its lifecycle. Targeting these interactions, especially those between virus and host proteins, is increasingly viewed as an ideal avenue for the design and development of new therapeutics. In this tutorial review, we outline the lifecycle of HIV and describe some of the protein-protein interactions that control and regulate each step of this process, also detailing efforts to develop therapies that target these interactions. © 2011 The Royal Society of Chemistry. Source

Elkington P.T.,University of Southampton
Journal of Infection | Year: 2013

Transmission of Mycobacterium tuberculosis (Mtb) continues uninterrupted. Pre-exposure vaccination remains a central focus of tuberculosis research but 25 years of follow up is needed to determine whether a novel childhood vaccination regime protects from adult disease, or like BCG assists Mtb dissemination by preventing childhood illness but not infective adult pulmonary tuberculosis. Therefore, different strategies to interrupt the life cycle of Mtb need to be explored. This personal perspective discusses alternative approaches that may be delivered in a shorter time frame. © 2013 The British Infection Association. Source

Clayton C.R.I.,University of Southampton
Geotechnique | Year: 2011

This paper provides the background to the 50th Rankine Lecture. It considers the growth in emphasis of the prediction of ground displacements during design in the past two decades of the 20th century, as a result of the lessons learnt from field observations. The historical development of the theory of elasticity is then described, as are the constitutive frameworks within which it has been proposed that geotechnical predictions of deformation should be carried out. Factors affecting the stiffness of soils and weak rocks are reviewed, and the results of a numerical experiment, assessing the impact of a number of stiffness parameters on the displacements around a retaining structure, are described. Some field and laboratory methods of obtaining stiffness parameters are considered and critically discussed, and the paper concludes with a suggested strategy for the measurement and integration of stiffness data, and the developments necessary to improve the existing state of the art. Source

Jones D.I.,University of Southampton
Monthly Notices of the Royal Astronomical Society | Year: 2010

In this paper, we investigate the effect of a pinned superfluid component on the gravitational wave emission of a steadily rotating deformed neutron star. We show that the superfluid pinning allows the possibility for there to be gravitational wave emission at both the stellar spin frequency Ω and its first harmonic, 2Ω. This contrasts with the conventional case where there is no pinned superfluidity, where either only the 2Ω harmonic is present or else the star undergoes precession, a feature which is not believed to be common in the known pulsar population. This work motivates the carrying out of gravitational wave searches where both the Ω and 2Ω harmonics are searched for, even in targeted searches for waves from known pulsars which are not observed to precess. Observation of such a two-component signal would provide evidence in favour of pinned superfluidity inside the star. © 2010 The Author. Journal compilation © 2010 RAS. Source
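
The predicted two-harmonic signal can be sketched numerically: a toy waveform with power at both Ω and 2Ω (the amplitudes, phases and frequency below are arbitrary choices for illustration, not model predictions) shows both lines in a discrete Fourier spectrum:

```python
import cmath, math

N = 128             # samples over the observation window
f0 = 8              # spin frequency, in DFT bins (arbitrary toy value)
h1, h2 = 1.0, 0.4   # arbitrary amplitudes at the spin frequency and its harmonic

# Toy strain: emission at both the spin frequency and its first harmonic
signal = [h1 * math.sin(2 * math.pi * f0 * n / N)
          + h2 * math.sin(2 * math.pi * 2 * f0 * n / N) for n in range(N)]

def dft_mag(x, k):
    """Magnitude of the k-th discrete Fourier coefficient."""
    return abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / N)
                   for n in range(N)))

mags = [dft_mag(signal, k) for k in range(N // 2)]
peaks = [k for k, m in enumerate(mags) if m > 1.0]
print(peaks)  # the bins f0 and 2*f0 carry all the power
```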

Carling P.A.,University of Southampton
Marine Geology | Year: 2014

Lag deposits of cobble-sized clasts found within the troughs of fine-gravel dunes on a rock-bed intertidal zone are explained by the process of enhanced fluid transport due to attached seaweed. Field determinations of the competence of the tidal currents demonstrated that the currents were not able to transport weed-free clasts on the tidal flat. Controlled flume experiments were used to demonstrate that the presence of the fucoid alga (Fucus vesiculosus (L.)) attached to clasts reduces the critical velocities and shear stresses to levels that allow the tidal currents to entrain the clasts and move them over the backs of dunes to the lee sides. Subsequent burial of the weedy clasts in the dune troughs by dune progression kills the weed, leaving the latterly exposed clasts as an accumulating weed-free lag. The weight of attached weed typically equals the submerged weight of the clast, and the critical velocity and critical shear stress for initial motion are both roughly halved by the presence of the weed. The drag induced by weed is four times that experienced by weed-free clasts, and mobile clasts exhibit near-equal mobility. © 2014 Elsevier B.V. Source
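
The roughly halved entrainment threshold is consistent with a simple force-balance argument: if critical conditions scale as u_crit ∝ sqrt(resisting weight / effective drag), then quadrupling the drag while the resisting weight stays comparable halves the critical velocity. A back-of-envelope sketch under that idealisation (not the paper's flume analysis):

```python
import math

def critical_velocity_ratio(drag_factor, weight_factor=1.0):
    """Ratio of weedy to weed-free critical velocity under a simple
    force balance: u_crit ~ sqrt(resisting weight / effective drag)."""
    return math.sqrt(weight_factor / drag_factor)

# Weed multiplies drag ~4x while the resisting weight stays comparable
print(critical_velocity_ratio(4.0))  # -> 0.5, i.e. the threshold is halved
```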

Ishimori H.,Kyoto University | King S.F.,University of Southampton
Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics | Year: 2014

We propose a first model of quarks based on the discrete family symmetry δ(6N2) in which the Cabibbo angle is correctly determined by a residual Z2 × Z2 subgroup, and the smaller quark mixing angles may be qualitatively understood from the model. The present model of quarks may be regarded as a first step towards formulating a complete model of quarks and leptons based on δ(6N2), in which the lepton mixing matrix is fully determined by a Klein subgroup. For example, the choice N = 28 provides an accurate determination of both the reactor angle and the Cabibbo angle. © 2014. Source

Existing models for the initiation of salt withdrawal minibasins focus on the role of triggers that exist within the minibasin, either stratigraphic (e.g. differential deposition) or tectonic (extension, translation or contraction). Existing studies tend to focus on complex settings, such as continental margins, which contain many different potential triggering mechanisms. It can be difficult in these settings to identify which process is responsible for minibasin initiation, or the influence of individual factors on their subsequent development. Salt withdrawal minibasins also exist in simpler settings, without any obvious intrinsic trigger; the region of the North German Basin used by Trusheim (1960) in the classic definition of salt withdrawal geometries was of this nature. There is no overall basal or surface slope, no major lateral movement, and there is no depositional heterogeneity. Previously recognized trigger processes for minibasin initiation do not apply in this benign setting, suggesting that other, potentially more fundamental, influences may be at work. A simple forward-modelling approach shows how, in the absence of any other mechanism, a new minibasin can develop as the consequence of salt movement driven by its neighbour, and families of withdrawal minibasins can propagate across a region from a single seed point. This new mechanism may explain how some minibasins appear to initiate before the sediment density has exceeded that of the underlying salt. The forward modelling also indicates that some minibasins begin to invert to form turtle anticlines before the underlying salt has been evacuated, so that the timing of turtle formation may not be diagnostic of weld formation. This mechanism may also give rise to salt-cored turtles that have a lens of salt trapped beneath their cores. These new findings have implications for hydrocarbon migration and trapping. © 2014 Elsevier B.V. Source

Serjeantson D.,University of Southampton
Environmental Archaeology | Year: 2014

A recent review of bone remains from more than 90 assemblages from southern Britain confirms that the animals show no evidence for continuity from the Mesolithic period. Fish and birds are almost absent and few remains - less than 5% - are from wild animals. One site only, the Coneybury Anomaly, has a mix of wild and domestic animals as well as birds and fish, but it is unique. Nearly all assemblages, even those with a few bones only, include sheep, an animal unsuited to the environment of Britain at the time. The animal remains support the argument that all aspects of the Neolithic way of life were introduced together by incomers rather than adopted by a local population. © Association for Environmental Archaeology 2014. Source

Lan Z.,University of Southampton | Ohberg P.,Heriot - Watt University
Physical Review A - Atomic, Molecular, and Optical Physics | Year: 2014

The recently realized spin-orbit-coupled quantum gases [Lin et al., Nature (London) 471, 83 (2011); Wang et al., Phys. Rev. Lett. 109, 095301 (2012); Cheuk et al., Phys. Rev. Lett. 109, 095302 (2012)] mark a breakthrough in the cold atom community. In these experiments, two hyperfine states are selected from a hyperfine manifold to mimic a pseudospin-1/2 spin-orbit-coupled system by the method of Raman dressing, which is applicable to both bosonic and fermionic gases. In this paper, we show that the method used in these experiments can be generalized to create any large pseudospin spin-orbit-coupled gas if more hyperfine states are coupled equally by the Raman lasers. As an example, we study in detail a quantum gas with three hyperfine states coupled by the Raman lasers and show that, when the state-dependent energy shifts of the three states are comparable, triple-degenerate minima appear at the bottom of the band dispersions, thus realizing a spin-1 spin-orbit-coupled quantum gas. A novel feature of this three-minima regime is that there can be two different kinds of stripe phases with different wavelengths, which has an interesting connection to the ferromagnetic and polar phases of spin-1 spinor Bose-Einstein condensates without spin-orbit coupling. © 2014 American Physical Society. Source
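
For the pseudospin-1/2 case these experiments realize, the Raman-dressed lower band has a standard closed form, and scanning it numerically shows the characteristic double-well dispersion (this is the textbook two-level result in recoil units kr = m = ħ = 1, so Er = 1/2, not the paper's three-level calculation):

```python
import math

def lower_band(q, omega, delta=0.0):
    """Lower dressed band of the two-level Raman Hamiltonian,
    E-(q) = q^2/2 + 1/2 - sqrt((q + delta/2)^2 + (omega/2)^2),
    in recoil units (kr = m = hbar = 1, Er = 1/2)."""
    return q * q / 2 + 0.5 - math.sqrt((q + delta / 2) ** 2 + (omega / 2) ** 2)

def count_minima(omega, delta=0.0, qmax=2.0, n=4001):
    """Count local minima of the lower band on a quasimomentum grid."""
    qs = [-qmax + 2 * qmax * i / (n - 1) for i in range(n)]
    es = [lower_band(q, omega, delta) for q in qs]
    return sum(1 for i in range(1, n - 1)
               if es[i] < es[i - 1] and es[i] < es[i + 1])

print(count_minima(omega=1.0))  # double minima while omega < 4 Er
print(count_minima(omega=3.0))  # single minimum at larger Raman coupling
```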

Naish D.,University of Southampton
Journal of Zoology | Year: 2014

Between the Middle Jurassic and Holocene, birds evolved an enormous diversity of behaviours. The distribution and antiquity of these behaviours is difficult to establish given a relatively poor fossil record. Rare crop, stomach and gut contents typically reveal diets consistent with morphology but stem-members of some lineages (including Cariamae and Coraciiformes) seem to have been different in ecology from their extant relatives. Most of our ideas about the behaviour of fossil birds are based on analogy (with skull form, limb proportions and claw curvature being used to guide hypotheses). However, this has limitations given that some extinct taxa lack extant analogues and that some extant taxa do not behave as predicted by osteology. Reductionist methods have been used to test predation style and running ability in fossil taxa including moa, Gastornis and phorusrhacids. Virtually nothing is known of nesting and nest-building behaviour but colonial nesting is known from the Cretaceous onwards. Rare vegetative nests demonstrate modern nest-building from the Eocene onwards. Ornamental rectrices indicate that sexually driven display drove some aspects of feather evolution and evidence for loud vocal behaviour and intraspecific combat is known for some taxa. Our knowledge of fossil bird behaviour indicates that 'modern' behaviours are at least as old as crown birds. Stem-members of extant lineages, however, may sometimes or often have differed from extant taxa. © 2014 The Zoological Society of London. Source

Danovaro R.,Stazione Zoologica Anton Dohrn | Danovaro R.,Marche Polytechnic University | Snelgrove P.V.R.,Memorial University of Newfoundland | Tyler P.,University of Southampton
Trends in Ecology and Evolution | Year: 2014

Deep-sea ecosystems represent Earth's major ecological research frontier. Focusing on seafloor ecosystems, we demonstrate how new technologies underpin discoveries that challenge major ecological hypotheses and paradigms, illuminating new deep-sea geosphere-biosphere interactions. We now recognize greater habitat complexity, new ecological interactions, the importance of 'dark energy', and chemosynthetic production in fuelling biodiversity. We also acknowledge functional hotspots that contradict the view of the deep sea as a food-poor, metabolically inactive, and minor component of global carbon cycles. Symbioses appear widespread, revealing novel adaptations. Populations show complex spatial structure and evolutionary histories. These new findings redefine deep-sea ecology and the role of Earth's largest biome in global biosphere functioning. Indeed, deep-sea exploration can open new perspectives in ecological research to help mitigate exploitation impacts. © 2014 Elsevier Ltd. Source

Arshad S.H.,University of Southampton
Current Allergy and Asthma Reports | Year: 2010

Common indoor allergens include house dust mite, cockroach, animal dander, and certain molds. In genetically susceptible children, exposure to these indoor allergens during the critical postnatal period may lead to sensitization in early childhood. Consistent evidence indicates that children sensitized to common indoor allergens are at several-fold higher risk of asthma and allergy. Due to conflicting evidence from prospective studies, some doubt remains regarding a direct and dose-response relationship between exposure and development of asthma. However, in recent years, evidence has accumulated that exposure to indoor allergen causes asthma and allergy, but this effect may depend on dose and type of allergen as well as the underlying genetic susceptibility of the child. © 2010 Springer Science+Business Media, LLC. Source

Holgate S.T.,University of Southampton
Current Opinion in Allergy and Clinical Immunology | Year: 2010

PURPOSE OF REVIEW: To explore new ground in asthma pathogenesis. Asthma is an inflammatory disorder of the airways that has a strong association with allergy, as characterized by a Th2-type T cell response. However, a range of approaches that have targeted this immunological component have so far been disappointing. Most asthma therapy still relies on bronchodilators and corticosteroids rather than treating underlying disease mechanisms. RECENT FINDINGS: In this review, a case is made that asthma has its primary origin in the airways and involves defective behaviour of the epithelium in relation to environmental exposures. This includes defects in barrier function and impaired innate immunity, which provide the substrate upon which allergic sensitization can occur. Once the airways are sensitized, repeated allergen exposure leads to disease persistence. Such mechanisms could explain airway wall remodelling and the susceptibility of the asthmatic lung to exacerbations provoked by viruses, air pollution, certain drugs, and biologically active allergens. SUMMARY: Activation of the epithelial-mesenchymal trophic unit could be responsible for the emergence of different asthma phenotypes and direct a more targeted approach to treatment. There is also the possibility of developing treatments that increase the lung's resistance to the inhaled environment rather than focusing on the suppression of inflammation once established. © 2010 Wolters Kluwer Health | Lippincott Williams & Wilkins. Source

Sevellec F.,University of Southampton | Fedorov A.V.,Yale University
Journal of Climate | Year: 2013

Variations in the strength of the Atlantic meridional overturning circulation (AMOC) are a major potential source of decadal and longer climate variability in the Atlantic. This study analyzes continuous integrations of tangent linear and adjoint versions of an ocean general circulation model [Océan Parallélisé (OPA)] and rigorously shows the existence of a weakly damped oscillatory eigenmode of the AMOC centered in the North Atlantic Ocean and controlled solely by linearized ocean dynamics. In this particular GCM, the mode period is roughly 24 years, its e-folding decay time scale is 40 years, and it is the least-damped oscillatory mode in the system. Its mechanism is related to the westward propagation of large-scale temperature anomalies in the northern Atlantic in the latitudinal band between 30° and 60°N. The westward propagation results from a competition among mean eastward zonal advection, equivalent anomalous westward advection caused by the mean meridional temperature gradient, and westward propagation typical of long baroclinic Rossby waves. The zonal structure of temperature anomalies alternates between a dipole (corresponding to an anomalous AMOC) and anomalies of one sign (yielding no changes in the AMOC). Further, it is shown that the system is nonnormal, which implies that the structure of the least-damped eigenmode of the tangent linear model is different from that of the adjoint model. The "adjoint" mode describes the sensitivity of the system (i.e., it gives the most efficient patterns for exciting the leading eigenmode). An idealized model is formulated to highlight the role of the background meridional temperature gradient in the North Atlantic for the mode mechanism and the system nonnormality. © 2013 American Meteorological Society. Source
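
Nonnormality, and the resulting difference between the tangent-linear eigenmode and its adjoint, can be illustrated with a 2×2 toy system (the decay rates are loosely inspired by the quoted time scales; the shear coupling is an arbitrary illustration, not the GCM's operator):

```python
# Toy nonnormal linear system dx/dt = A x (rates in 1/year)
decay_fast, decay_slow, shear = -1.0 / 30, -1.0 / 40, 10.0
A = [[decay_slow, shear],
     [0.0, decay_fast]]

# Right eigenvector of the least-damped eigenvalue (decay_slow) is (1, 0)
right = [1.0, 0.0]
Ar = [A[0][0] * right[0] + A[0][1] * right[1],
      A[1][0] * right[0] + A[1][1] * right[1]]
assert Ar == [decay_slow * right[0], decay_slow * right[1]]

# Left (adjoint) eigenvector v satisfies v A = lambda v:
# v = (1, shear / (decay_slow - decay_fast)) is heavily weighted toward
# the second component -- the most efficient pattern for exciting the mode.
left = [1.0, shear / (decay_slow - decay_fast)]
vA = [left[0] * A[0][0] + left[1] * A[1][0],
      left[0] * A[0][1] + left[1] * A[1][1]]
print(left)  # adjoint sensitivity pattern, very different from (1, 0)
```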

Calder P.C.,University of Southampton
Current Opinion in Clinical Nutrition and Metabolic Care | Year: 2013

Purpose of Review: The purpose of this review is to discuss recent studies reporting on the influence of fatty acids on gene expression in relation to inflammation and immune responses. RECENT FINDINGS: Saturated fatty acids promote, whereas several n-3 fatty acids, in particular eicosapentaenoic and docosahexaenoic acids, some isomers of conjugated linoleic acid, and punicic acid suppress, expression of inflammatory genes. The most common targets of fatty acids are genes encoding cytokines, chemokines, cyclooxygenase, nitric oxide synthase, and matrix metalloproteinases. The anti-inflammatory actions of fatty acids often involve inhibition of activation of nuclear factor-κB and activation of peroxisome proliferator-activated receptors α and γ. Common upstream events include actions on Toll-like receptors and via G-protein coupled receptors. Fatty acids can influence expression of genes involved in immune and inflammatory cell development and differentiation. Recent studies using genome-wide analyses demonstrate that dietary fatty acids can alter expression of a large number (many hundreds) of genes in human peripheral blood mononuclear cells. SUMMARY: A wide range of fatty acids alter expression of genes involved in development, differentiation, and function of cells involved in inflammation and immunity. © 2013 Wolters Kluwer Health | Lippincott Williams & Wilkins. Source

Gale P.A.,University of Southampton | Perez-Tomas R.,University of Barcelona | Quesada R.,University of Burgos
Accounts of Chemical Research | Year: 2013

In this Account, we discuss the development of new lipid bilayer anion transporters based on the structure of anionophoric natural products (the prodigiosins) and purely synthetic supramolecular systems. We have studied the interaction of these compounds with human cancer cell lines, and, in general, the most active anion transporter compounds possess the greatest anti-cancer properties. Initially, we describe the anion transport properties of synthetic molecules that are based on the structure of the family of natural products known as the prodiginines. Obatoclax, for example, is a prodiginine derivative with an indole ring that is currently in clinical trials for use as an anti-cancer drug. The anion transport properties of the compounds were correlated with their toxicity toward small cell human lung cancer GLC4 cells. We studied related compounds with enamine moieties, tambjamines, that serve as active transporters. These molecules and others in this series could depolarize acidic compartments within GLC4 cells and trigger apoptosis. In a study of the variation of lipophilicity of a series of these compounds, we observed that, as log P increases, the anion transport efficiency reaches a peak and then decreases. In addition, we discuss the anion transport properties of series of synthetic supramolecular anion receptor species. We synthesized trisureas and thioureas based on the tren backbone, and found that the thiourea compounds effectively transport anions. Fluorination of the pendant phenyl groups in this series of compounds greatly enhances the transport properties. Similar to our earlier results, the most active anion transporters reduced the viability of human cancer cell lines by depolarizing acidic compartments in GLC4 cells and triggering apoptosis. In an attempt to produce simpler transporters that obey Lipinski's Rule of Five, we synthesized simpler systems containing a single urea or thiourea group. 
Once again the thiourea systems, and in particular a thiourea with a pendant indole group, transported anions efficiently. A series of related compounds containing a pendant trifluoromethyl group showed enhanced transport and significant anticancer properties. Researchers still need to determine the exact mechanism by which these compounds depolarize acidic organelles within cancer cells. However, this work shows that these transporters, based upon both natural products and purely synthetic supramolecular systems, transport anions, depolarize acidic compartments within cancer cells and trigger apoptosis. © 2013 American Chemical Society. Source
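
Lipinski's Rule of Five, invoked above as a design constraint, is a simple descriptor filter: at most one violation among molecular weight > 500, log P > 5, hydrogen-bond donors > 5 and acceptors > 10. A minimal checker (the descriptor values below are hypothetical, illustrating the rule rather than describing the reported transporters):

```python
def lipinski_ok(mol_weight, log_p, h_bond_donors, h_bond_acceptors):
    """True if a molecule violates at most one of Lipinski's four criteria."""
    violations = sum([
        mol_weight > 500,
        log_p > 5,
        h_bond_donors > 5,
        h_bond_acceptors > 10,
    ])
    return violations <= 1

# Hypothetical descriptors for a small thiourea transporter candidate
print(lipinski_ok(mol_weight=320.4, log_p=3.8,
                  h_bond_donors=2, h_bond_acceptors=3))  # -> True
```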

Trueman C.N.,University of Southampton
Palaeontology | Year: 2013

Biomineralized tissues are chemically altered after death, and this diagenetic alteration can obscure original biological chemical features or provide new chemical information about the depositional environment. To use the chemistry of fossil biominerals to reconstruct biological, environmental or taphonomic information, a solid appreciation of biomineralization, mineral diagenesis and biomineral-water interaction is needed. Here, I summarize the key recent developments in the fields of biomineralization and post-mortem trace element exchange that have significant implications for our understanding of the diagenetic behaviour of biominerals and the ways in which biomineral chemistry can be used in palaeontological and taphonomic research. © The Palaeontological Association. Source

In this paper I investigate the evolution of cooperation in the prisoner's dilemma when individuals change their strategies subject to performance evaluation of their neighbours over variable time horizons. In the monochrome setting, in which all agents by default share the same performance evaluation rule, weighting past events strongly dramatically enhances the prevalence of cooperators. For co-evolutionary models, in which evaluation time horizons and strategies can co-evolve, I demonstrate that cooperation naturally associates with long-term evaluation of others while defection is typically paired with very short time horizons. Moreover, considering the continuous spectrum in between enhanced and discounted weights of past performance, cooperation is optimally supported when cooperators give enhanced weight neither to past nor to more recent events, but simply average payoffs. Payoff averaging is also found to emerge as the dominant strategy for cooperators in co-evolutionary models, thus proposing a natural route to the evolution of cooperation in viscous populations. © 2013 Brede. Source
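
The evaluation-horizon idea can be made concrete with a single update rule: an exponentially weighted payoff memory interpolates between reacting only to the latest interaction and averaging over the whole past. This is a hypothetical sketch of such a rule, not the paper's exact model:

```python
def weighted_memory(payoffs, gamma):
    """Exponentially weighted payoff memory: m <- (1 - gamma) * m + gamma * p.
    gamma -> 1: only the most recent payoff counts (short horizon);
    gamma -> 0: past performance dominates (long horizon)."""
    m = payoffs[0]
    for p in payoffs[1:]:
        m = (1 - gamma) * m + gamma * p
    return m

def running_average(payoffs):
    """Plain payoff averaging, the rule reported to best support cooperation."""
    m, n = 0.0, 0
    for p in payoffs:
        n += 1
        m += (p - m) / n
    return m

# Payoff stream of a cooperator occasionally exploited by defectors
noisy = [3, 0, 3, 3, 0, 3, 3, 3, 0, 3]
print(weighted_memory(noisy, gamma=1.0))  # short horizon: last payoff only
print(running_average(noisy))             # long horizon: smooths the defections
```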

Free vibration and thermal stability analyses of functionally graded (FG) sandwich plates are carried out using the advanced Hierarchical Trigonometric Ritz Formulation (HTRF). Refined higher-order kinematic plate models accounting for through-the-thickness deformation are developed within the framework of Carrera's Unified Formulation (CUF). The Principle of Virtual Displacements (PVD) is used as the variational statement to develop the HTRF. Uniform, linear and non-linear temperature rises through the plate thickness are considered. The non-linear temperature distribution is given in different forms: (i) power-law through-the-thickness variation; (ii) solution of the one-dimensional Fourier heat conduction equation; (iii) sinusoidal. The effect of initial thermal stresses on the free vibration behavior of the FG sandwich plates is investigated. The accuracy of the presented formulation is discussed in detail. Moreover, the effects of volume fraction index, aspect ratio, boundary conditions, length-to-thickness ratio, sandwich plate type and through-the-thickness temperature distribution on both natural frequencies and critical temperatures are thoroughly investigated. © 2014 Elsevier Ltd. Source
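
The power-law temperature rise mentioned in (i) has a standard closed form in the FG-plate literature; a minimal sketch (generic convention with z measured from the mid-plane of a plate of thickness h, and illustrative values, not the paper's specific implementation):

```python
def temperature(z, h, t_bottom, t_top, p):
    """Power-law through-thickness temperature distribution:
    T(z) = T_b + (T_t - T_b) * ((2z + h) / (2h))**p,
    with z in [-h/2, h/2]; p = 1 recovers the linear rise."""
    return t_bottom + (t_top - t_bottom) * ((2 * z + h) / (2 * h)) ** p

h = 0.02  # plate thickness [m] (illustrative value)
for p in (1, 2, 5):
    profile = [round(temperature(z, h, 300.0, 600.0, p), 1)
               for z in (-h / 2, 0.0, h / 2)]
    print(p, profile)  # bottom, mid-plane and top temperatures
```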

Wang Y.,University of Southampton
Journal of Applied Physics | Year: 2010

This paper presents a systematic review of long period fiber gratings (LPFGs) written by the CO2 laser irradiation technique. First, various fabrication techniques based on CO2 laser irradiation are demonstrated for writing LPFGs in different types of optical fibers such as conventional glass fibers, solid-core photonic crystal fibers, and air-core photonic bandgap fibers. Second, possible mechanisms, e.g., residual stress relaxation, glass structure changes, and physical deformation, of refractive index modulation in the CO2-laser-induced LPFGs are analyzed. Third, asymmetrical mode coupling, resulting from single-side laser irradiation, is discussed to understand the unique optical properties of the CO2-laser-induced LPFGs. Fourth, several pretreatment and post-treatment techniques are proposed to enhance the efficiency of grating fabrication. Fifth, sensing applications of the CO2-laser-induced LPFGs are investigated to develop various LPFG-based temperature, strain, bend, torsion, pressure, and biochemical sensors. Finally, communication applications of the CO2-laser-induced LPFGs are investigated to develop various LPFG-based band-rejection filters, gain equalizers, polarizers, and couplers. © 2010 American Institute of Physics. Source
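
The resonance behind all of these LPFG devices is the phase-matching condition λ_res = (n_eff,core − n_eff,clad,m)·Λ between the core mode and the m-th cladding mode. A numeric sketch (the effective-index difference and grating periods are illustrative values, not fabrication data from the review):

```python
def lpfg_resonance_wavelength(n_core, n_cladding_mode, period_um):
    """Phase-matching condition for a long-period fiber grating:
    lambda_res = (n_eff_core - n_eff_cladding_mode) * period."""
    return (n_core - n_cladding_mode) * period_um

# Illustrative effective indices and grating periods (micrometres)
n_co, n_cl = 1.4500, 1.4461
for period in (380, 400, 420):
    print(period, round(lpfg_resonance_wavelength(n_co, n_cl, period), 3))
```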

Mcnabb J.,University of Southampton
Geological Journal | Year: 2015

H.G. Wells' novels The Time Machine and The Island of Doctor Moreau were both concerned with the evolutionary destiny of mankind and what it meant to be human, both important areas of discussion for Victorian natural science in the 1890s. In this essay, I set these two works in their broader scientific context and explore some of the then contemporary influences on them drawn from the emerging disciplines of archaeology and anthropology. Wells was a student of T.H. Huxley whose influence on his own emerging views on human evolution is clear. While most scientists and the lay-public accepted the reality of evolution by the 1890s, and the natural origins of the human species, fear of the implications of our 'primitive' heritage pervaded popular and scientific works. Wells bridged that gap with an uncompromising outlook delivered to the public as scientific truth delivered through short stories, novels and scientific journalism. © 2014 John Wiley & Sons, Ltd. Source

Gabard G.,University of Southampton
Journal of Sound and Vibration | Year: 2013

Acoustic liners remain a key technology for reducing community noise from aircraft engines. The choice of optimal impedance relies heavily on the modeling of sound absorption by liners under grazing flows. The Myers condition assumes an infinitely thin boundary layer, but several impedance conditions have recently been proposed to include a small but finite boundary layer thickness. This paper presents a comparison of these impedance conditions against an exact solution for a simple benchmark problem and for parameters representative of inlet and bypass ducts on turbofan engines. The boundary layer thickness can have a significant impact on sound absorption, although its actual influence depends strongly on the details of the incident sound field. The impedance condition proposed by Brambley seems to provide some improvements in predicting sound absorption compared to the Myers condition. The boundary layer profile is found to have little influence on sound absorption. © 2012 Elsevier Ltd. All rights reserved. Source
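
As background to why the impedance value matters for absorption: for a locally reacting liner at normal incidence (the simplest textbook case, without the grazing flow that the compared boundary conditions address), the absorption coefficient follows directly from the normalized impedance:

```python
def absorption_coefficient(zeta):
    """Normal-incidence absorption of a locally reacting surface with
    normalized impedance zeta = Z / (rho * c):
    alpha = 1 - |R|^2 with R = (zeta - 1) / (zeta + 1)."""
    r = (zeta - 1) / (zeta + 1)
    return 1 - abs(r) ** 2

# A matched liner (zeta = 1) absorbs everything; mismatch reduces alpha
for zeta in (1 + 0j, 2 + 0j, 1 + 1j, 0.5 - 1j):
    print(zeta, round(absorption_coefficient(zeta), 3))
```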

Dorne J.L.C.M.,University of Southampton
Toxicology | Year: 2010

For non-genotoxic carcinogens ("thresholded toxicants"), Acceptable/Tolerable Daily Intakes (ADI/TDI) represent a level of exposure "without appreciable health risk" when consumed daily or weekly for a lifetime, and are derived by applying a 100-fold uncertainty factor (UF) to a no-observed-adverse-effect level (NOAEL) or to a benchmark dose. This UF allows for interspecies differences and human variability and has been subdivided to take into account toxicokinetics and toxicodynamics, with even values of 10^0.5 (3.16) for the human aspect. Ultimately, such refinements allow chemical-specific adjustment factors and physiologically based models to replace such uncertainty factors. Intermediate to chemical-specific adjustment factors are pathway-related uncertainty factors, which have been derived for phase I metabolism, phase II metabolism and renal excretion. Pathway-related uncertainty factors are presented here as derived from meta-analyses of toxicokinetic variability data in humans, using therapeutic drugs metabolised by a single pathway in subgroups of the population. Pathway-related lognormal variability was derived for each metabolic route. The resulting pathway-related uncertainty factors showed that the current uncertainty factor for toxicokinetics (3.16) would not cover human variability arising from genetic polymorphism and age differences (neonates, children, the elderly). Latin hypercube (Monte Carlo) models have also been developed using quantitative metabolism data and pathway-related lognormal variability to predict toxicokinetic variability and uncertainty factors for compounds handled by several metabolic routes. For each compound, the model gave accurate predictions compared to published data, and observed differences arose from data limitations, inconsistencies between published studies and assumptions made during model design and sampling. 
Finally, under the 6th Framework EU project NOMIRACLE (http://viso.jrc.it/nomiracle/), novel methods to improve the risk assessment of chemical mixtures were explored: (1) harmonisation of the use of uncertainty factors for human and ecological risk assessment using mechanistic descriptors, and (2) the use of toxicokinetic interaction data to derive UFs for chemical mixtures. The use of toxicokinetic data in risk assessment is discussed, together with future approaches including sound statistical approaches to optimise the predictability of models and recombinant technology/toxicokinetic assays to identify metabolic routes for chemicals and screen mixtures of environmental health importance. © 2009 Elsevier Ireland Ltd. All rights reserved. Source
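The subdivision of the default 100-fold factor described above can be sketched numerically. This is a minimal illustration: the 10^0.6 x 10^0.4 interspecies split and the 10^0.5 human TK/TD subfactors follow the standard scheme summarised in the abstract, while the NOAEL value is invented.

```python
# Default 100-fold uncertainty factor subdivided into toxicokinetic (TK)
# and toxicodynamic (TD) components: 10 for interspecies differences times
# 10 for human variability.
INTERSPECIES_TK = 10 ** 0.6   # ~4.0
INTERSPECIES_TD = 10 ** 0.4   # ~2.5
HUMAN_TK = 10 ** 0.5          # ~3.16
HUMAN_TD = 10 ** 0.5          # ~3.16

def acceptable_daily_intake(noael_mg_per_kg):
    """ADI = NOAEL / composite uncertainty factor."""
    uf = INTERSPECIES_TK * INTERSPECIES_TD * HUMAN_TK * HUMAN_TD
    return noael_mg_per_kg / uf

adi = acceptable_daily_intake(50.0)   # invented NOAEL of 50 mg/kg bw/day
```

Replacing any one of these default subfactors with a pathway-related or chemical-specific value, as the abstract describes, simply substitutes into the same product.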

Wood R.J.,University of Southampton
Nucleic acids research | Year: 2010

A real-time assay for CpG-specific cytosine-C5 methyltransferase activity has been developed. The assay applies a break-light oligonucleotide in which the methylation of an unmethylated 5'-CG-3' site is enzymatically coupled to the development of a fluorescent signal. This sensitive assay can measure rates of DNA methylation down to 0.34 +/- 0.06 fmol/s. The assay is reproducible, with a coefficient of variation over six independent measurements of 4.5%. Product concentration was accurately measured from fluorescence signals using a linear calibration curve, which achieved a goodness of fit (R(2)) above 0.98. The oligonucleotide substrate contains three C5-methylated cytosine residues and one unmethylated 5'-CG-3' site. Methylation yields an oligonucleotide containing the optimal substrate for the restriction enzyme GlaI. Cleavage of the fully methylated oligonucleotide leads to separation of fluorophore from quencher, giving a proportional increase in fluorescence. This method has been used to assay the activity of DNMT1, the principal maintenance methyltransferase in human cells, and for the kinetic characterization of the bacterial cytosine-C5 methyltransferase M.SssI. The assay has been shown to be suitable for real-time monitoring of DNMT1 activity in a high-throughput format, with low background signal and the ability to obtain linear rates of methylation over long periods, making this a promising method for high-throughput screening for inhibitors. Source
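A linear calibration of the kind described (fluorescence signal against product concentration, with R^2 above 0.98) can be sketched as follows. The data points, function name and measured signal below are illustrative, not taken from the paper.

```python
# Hypothetical calibration data: fluorescence (arbitrary units) measured at
# known product concentrations (fmol). Values are invented for illustration.
conc = [0.0, 5.0, 10.0, 20.0, 40.0]
signal = [2.1, 12.0, 21.8, 41.5, 82.0]

def linear_fit(x, y):
    """Ordinary least-squares slope and intercept, plus goodness of fit R^2."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return slope, intercept, 1.0 - ss_res / ss_tot

slope, intercept, r2 = linear_fit(conc, signal)

# Invert the calibration to convert a measured signal into a concentration.
measured = 30.0
product_fmol = (measured - intercept) / slope
```

In the assay itself, successive concentrations recovered this way over time would give the methylation rate.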

Metwally M.,University of Southampton
Cochrane database of systematic reviews (Online) | Year: 2012

Fibroids are the most common benign tumours of the female genital tract and are associated with numerous clinical problems, including a possible negative impact on fertility. In women requesting preservation of fertility, fibroids can be surgically removed (myomectomy) by laparotomy, laparoscopically or hysteroscopically, depending on the size, site and type of fibroid. Myomectomy is, however, a procedure that is not without risk and can result in serious complications. It is therefore essential to determine whether such a procedure can result in an improvement in fertility and, if so, to then determine the ideal surgical approach. To examine the effect of myomectomy on fertility outcomes and to compare different surgical approaches. We searched the Cochrane Menstrual Disorders and Subfertility Group (MDSG) Specialised Register, Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE, PsycINFO, CINAHL, Database of Abstracts of Reviews of Effects (DARE), LILACS, conference abstracts on the ISI Web of Knowledge, OpenSigle for grey literature from Europe, and ongoing clinical trials registered online. The final search was in June 2012. Randomised controlled trials examining the effect of myomectomy compared to no intervention, or comparing different surgical approaches, with regard to fertility outcomes in infertile women with uterine fibroids. Data collection and analysis were conducted in accordance with the procedure suggested in the Cochrane Handbook for Systematic Reviews of Interventions. One study examined the effect of myomectomy on reproductive outcomes and showed no evidence for a significant effect on the clinical pregnancy rate for intramural (OR 1.88, 95% CI 0.57 to 6.14), submucous (OR 2.04, 95% CI 0.62 to 6.66), combined intramural and subserous (OR 2.00, 95% CI 0.40 to 10.09) and combined intramural submucous fibroids (OR 3.24, 95% CI 0.72 to 14.57). 
Similarly, there was no evidence for a significant effect of myomectomy for any of the described types of fibroids on the miscarriage rate (intramural fibroids OR 0.89 (95% CI 0.14 to 5.48), submucous fibroids OR 0.63 (95% CI 0.09 to 4.40), combined intramural and subserous fibroids OR 0.25 (95% CI 0.01 to 4.73) and combined intramural submucous fibroids OR 0.50 (95% CI 0.03 to 7.99)). Two studies compared open versus laparoscopic myomectomy and found no evidence for a significant effect on the live birth rate (OR 0.80, 95% CI 0.42 to 1.50), clinical pregnancy rate (OR 0.96, 95% CI 0.52 to 1.78), ongoing pregnancy rate (OR 1.61, 95% CI 0.26 to 10.04), miscarriage rate (OR 1.31, 95% CI 0.40 to 4.27), preterm labour rate (OR 0.68, 95% CI 0.11 to 4.43) and caesarean section rate (OR 0.59, 95% CI 0.13 to 2.72). There is currently insufficient evidence from randomised controlled trials to evaluate the role of myomectomy in improving fertility. Regarding the surgical approach to myomectomy, current evidence from two randomised controlled trials suggests there is no significant difference between the laparoscopic and open approach regarding fertility performance. This evidence needs to be viewed with caution due to the small number of studies. Finally, there is currently no evidence from randomised controlled trials regarding the effect of hysteroscopic myomectomy on fertility outcomes. Source
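Odds ratios with confidence intervals of the form quoted above can be computed from 2x2 counts with the standard Woolf (log-OR) method. The counts in this sketch are invented for illustration, not taken from the trial.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with 95% CI (Woolf method) for a 2x2 table:
    a/b = events/non-events in one group, c/d = events/non-events in the other."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Invented counts: 12/20 pregnancies after treatment versus 8/20 without.
or_, lo, hi = odds_ratio_ci(12, 8, 8, 12)
# The CI straddles 1, so the difference is not statistically significant:
# the same pattern as the fibroid results quoted above.
```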

Dobson D.,University of Southampton
Cochrane database of systematic reviews (Online) | Year: 2012

Infantile colic is a common disorder, affecting around one in six families, and in 2001 was reported to cost the UK National Health Service in excess of £65 million per year (Morris 2001). Although it usually remits by six months of age, there is some evidence of longer-term sequelae for both children and parents. Manipulative therapies, such as chiropractic and osteopathy, have been suggested as interventions to reduce the severity of symptoms. To evaluate the results of studies designed to address the efficacy or effectiveness of manipulative therapies (specifically, chiropractic, osteopathy and cranial manipulation) for infantile colic in infants less than six months of age. We searched the following databases: CENTRAL (2012, Issue 4), MEDLINE (1948 to April Week 3 2012), EMBASE (1980 to 2012 Week 17), CINAHL (1938 to April 2012), PsycINFO (1806 to April 2012), Science Citation Index (1970 to April 2012), Social Science Citation Index (1970 to April 2012), Conference Proceedings Citation Index - Science (1990 to April 2012) and Conference Proceedings Citation Index - Social Science & Humanities (1970 to April 2012). We also searched all available years of LILACS, PEDro, ZETOC, WorldCat, TROVE, DART-Europe, ClinicalTrials.gov and ICTRP (May 2012), and contacted over 90 chiropractic and osteopathic institutions around the world. In addition, we searched CentreWatch, NRR Archive and UKCRN in December 2010. Randomised trials evaluating the effect of chiropractic, osteopathy or cranial osteopathy alone or in conjunction with other interventions for the treatment of infantile colic. In pairs, five of the review authors (a) assessed the eligibility of studies against the inclusion criteria, (b) extracted data from the included studies and (c) assessed the risk of bias for all included studies. Each article or study was assessed independently by two review authors. 
One review author entered the data into Review Manager software and the team's statistician (PP) reviewed the chosen analytical settings. We identified six studies for inclusion in our review, representing a total of 325 infants. There were three further studies that we could not find information about and we identified three other ongoing studies. Of the six included studies, five were suggestive of a beneficial effect and one found no evidence that manipulative therapies had any beneficial effect on the natural course of infantile colic. Tests for heterogeneity imply that there may be some underlying difference between this study and the other five. Five studies measured daily hours of crying and these data were combined, suggesting that manipulative therapies had a significant effect on infant colic - reducing average crying time by one hour and 12 minutes per day (mean difference (MD) -1.20; 95% confidence interval (CI) -1.89 to -0.51). This conclusion is sustained even when considering only studies with a low risk of selection bias (sequence generation and allocation concealment) (MD -1.24; 95% CI -2.16 to -0.33); those with a low risk of attrition bias (MD -1.95; 95% CI -2.96 to -0.94), or only those studies that have been published in the peer-reviewed literature (MD -1.01; 95% CI -1.78 to -0.24). However, when combining only those studies with a low risk of performance bias (parental 'blinding'), the improvement in daily crying hours was not statistically significant (MD -0.57; 95% CI -2.24 to 1.09). One study considered whether the reduction in crying time was clinically significant. 
This found that a greater proportion of parents of infants receiving a manipulative therapy reported clinically significant improvements than did parents of those receiving no treatment (reduction in crying to less than two hours: odds ratio (OR) 6.33; 95% CI 1.54 to 26.00; more than 30% reduction in crying: OR 3.70; 95% CI 1.15 to 11.86). Analysis of data from three studies that measured 'full recovery' from colic as reported by parents found that manipulative therapies did not result in significantly higher proportions of parents reporting recovery (OR 11.12; 95% CI 0.46 to 267.52). One study measured infant sleeping time and found manipulative therapy resulted in statistically significant improvement (MD 1.17; 95% CI 0.22 to 2.12). Source
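The pooled mean differences reported above come from fixed-effect meta-analysis. A minimal sketch of inverse-variance pooling, recovering each study's standard error from its reported confidence interval, looks like this; the per-study numbers are invented.

```python
import math

def pooled_md(studies, z=1.96):
    """Fixed-effect inverse-variance pooling of mean differences.
    Each study is (md, ci_lo, ci_hi); the SE is recovered from the CI width."""
    num = den = 0.0
    for md, lo, hi in studies:
        se = (hi - lo) / (2.0 * z)
        w = 1.0 / se ** 2          # inverse-variance weight
        num += w * md
        den += w
    pooled = num / den
    se_pooled = math.sqrt(1.0 / den)
    return pooled, pooled - z * se_pooled, pooled + z * se_pooled

# Invented per-study crying-time differences in hours/day (MD, CI low, CI high).
studies = [(-1.0, -1.9, -0.1), (-1.5, -2.7, -0.3), (-1.1, -2.3, 0.1)]
md, lo, hi = pooled_md(studies)
# Note the third study alone is non-significant (its CI crosses 0), yet the
# pooled estimate can still exclude 0, as in the review's combined result.
```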

Flower A.,University of Southampton
Cochrane database of systematic reviews (Online) | Year: 2012

Endometriosis is characterized by the presence of tissue that is morphologically and biologically similar to normal endometrium in locations outside the uterus. Surgical and hormonal treatments of endometriosis have unpleasant side effects and high rates of relapse. In China, treatment of endometriosis using Chinese herbal medicine (CHM) is routine and considerable research into the role of CHM in alleviating pain, promoting fertility, and preventing relapse has taken place. This review is an update of a previous review published in the Cochrane Database of Systematic Reviews 2009, issue No 3. To review the effectiveness and safety of CHM in alleviating endometriosis-related pain and infertility. We searched the Menstrual Disorders and Subfertility Group Trials Register, Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library) and the following English language electronic databases (from their inception to 31/10/2011): MEDLINE, EMBASE, AMED, CINAHL, and NLH. We also searched Chinese language electronic databases: Chinese Biomedical Literature Database (CBM), China National Knowledge Infrastructure (CNKI), Chinese Sci & Tech Journals (VIP), Traditional Chinese Medical Literature Analysis and Retrieval System (TCMLARS), and Chinese Medical Current Contents (CMCC). Randomised controlled trials (RCTs) involving CHM versus placebo, biomedical treatment, another CHM intervention; or CHM plus biomedical treatment versus biomedical treatment were selected. Only trials with confirmed randomisation procedures and laparoscopic diagnosis of endometriosis were included. Risk of bias assessment, and data extraction and analysis were performed independently by three review authors. Data were combined for meta-analysis using relative risk (RR) for dichotomous data. A fixed-effect statistical model was used, where appropriate. Data not suitable for meta-analysis were presented as descriptive data. 
Two Chinese RCTs involving 158 women were included in this review. Both these trials described adequate methodology. Neither trial compared CHM with placebo treatment. There was no evidence of a significant difference in rates of symptomatic relief between CHM and gestrinone administered subsequent to laparoscopic surgery (95.65% versus 93.87%; risk ratio (RR) 1.02, 95% confidence interval (CI) 0.93 to 1.12, one RCT). The intention-to-treat analysis also showed no significant difference between the groups (RR 1.04, 95% CI 0.91 to 1.18). There was no significant difference between the CHM and gestrinone groups with regard to the total pregnancy rate (69.6% versus 59.1%; RR 1.18, 95% CI 0.87 to 1.59, one RCT). CHM administered orally and then in conjunction with a herbal enema resulted in a greater proportion of women obtaining symptomatic relief than with danazol (RR 5.06, 95% CI 1.28 to 20.05; RR 5.63, 95% CI 1.47 to 21.54, respectively). Overall, 100% of women in all the groups showed some improvement in their symptoms. Oral plus enema administration of CHM showed a greater reduction in average dysmenorrhoea pain scores than did danazol (mean difference (MD) -2.90, 95% CI -4.55 to -1.25; P < 0.01). Combined oral and enema administration of CHM also showed a greater improvement, measured as the disappearance or shrinkage of adnexal masses, than with danazol (RR 1.70, 95% CI 1.04 to 2.78). For lumbosacral pain, rectal discomfort, or vaginal nodule tenderness, there was no significant difference between CHM and danazol. Post-surgical administration of CHM may have comparable benefits to gestrinone but with fewer side effects. Oral CHM may have a better overall treatment effect than danazol; it may be more effective in relieving dysmenorrhoea and shrinking adnexal masses when used in conjunction with a CHM enema. However, more rigorous research is required to accurately assess the potential role of CHM in treating endometriosis. Source

Dearing J.A.,University of Southampton
Journal of Paleolimnology | Year: 2013

The new Future Earth Framework and International Council of Science Grand Challenges highlight the need to combine environmental and complexity sciences. An improved understanding of trajectories, interactions, fast and slow processes, alternate steady states and thresholds in key natural and social phenomena are vital to the design of sustainable management strategies. Lake sediment records can provide highly resolved time-series of data that give essential long term perspectives for complex socio-ecological systems, especially at regional scales. This means that these records have important roles in addressing the Future Earth agenda, especially for Forecasting, Observing and Confining environmental change within the proposed interdisciplinary themes of Dynamic Planet and Global Development. © 2013 Springer Science+Business Media Dordrecht. Source

Annison H.,University of Southampton
Theoretical Criminology | Year: 2014

This article engages with the Imprisonment for Public Protection (IPP) sentence of the UK Criminal Justice Act 2003, a prominent measure against 'dangerous offenders', in a 'substantively political light' (O'Malley, 1999). It provides an interpretation based on policymakers' beliefs and traditions. I argue that the perceived need for the IPP sentence and its ultimate form was the result of New Labour ministers' reliance on the Third Way political ideology and its implications for criminal justice policy. In addition, in terms of the policymaking process, I suggest that the 'Westminster tradition' conditioned policymakers' actions in relation to the IPP sentence, in ways that were crucial to its outcome. The article concludes with an examination of the moral significance of these beliefs and traditions by reference to Bauman's discussion of the dangers of a modern 'garden culture'. © The Author(s) 2013. Source

Nield J.M.,University of Southampton
Geology | Year: 2011

Aeolian dune development is influenced by feedback between surface properties and sediment transport, yet little is known about the larger-scale temporal and spatial nature of this relationship. Surface moisture is particularly influential, and is generally recognized in aeolian environments for its ability to increase the critical shear velocity required to entrain sediment in beach settings or, alternatively, to sustain vegetation and stabilize surfaces at a dune-field scale. However, conceptual models and field work have alluded to its importance in protodune initiation, while field observations suggest that seasonal moisture input may contribute to residual dune ridge formation at the dune-field scale. This has the potential to reveal geomorphic adaptation to variations in climate, and to identify a recognizable signature in the rock record. This article presents a simulation model that produces geomorphological features similar to field observations and is capable of examining the implications of surface and transport feedback at both scales. Results (1) reveal the control of surface moisture at different temporal scales, (2) display complexity in the development of multiple spatial scales within a cellular automaton framework, (3) highlight the importance of transient sand strips and sediment supply frequency in aeolian transport dynamics and protodune development, and (4) explore the relationship and significance of feedback duration, development time, and bedform spatial scale in the development of incipient dunes. This study illustrates the importance of considering geomorphic feedback when assessing the influence of surface moisture in aeolian process-dominated systems. © 2011 Geological Society of America. Source
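A cellular automaton of the Werner type, with an ad hoc moisture rule, gives the flavour of the simulation framework described. This toy is not the paper's model: every rule and parameter below is illustrative.

```python
import random

random.seed(1)

# Werner-style 1-D cellular automaton: slabs of sand are entrained, hop
# downwind a fixed length, and deposit with a higher probability on sandy
# cells than on bare ones. Cells at or below WET_LEVEL mimic a moist,
# protected surface that resists entrainment.
N, STEPS, HOP = 100, 5000, 5
WET_LEVEL = 1
height = [2] * N          # uniform initial sand cover, in slabs

for _ in range(STEPS):
    i = random.randrange(N)
    if height[i] <= WET_LEVEL:        # moist surface: no entrainment
        continue
    height[i] -= 1                    # pick up one slab
    j = i
    while True:                       # transport downwind until deposition
        j = (j + HOP) % N
        p_deposit = 0.6 if height[j] > 0 else 0.4
        if random.random() < p_deposit:
            height[j] += 1
            break

relief = max(height) - min(height)    # bedform relief that has developed
```

Even this minimal feedback (deposition favoured on existing sand, entrainment suppressed on moist cells) lets relief self-organize from a flat bed, which is the qualitative point of such models.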

Thase M.E.,University of Pennsylvania | Kingdon D.,University of Southampton | Turkington D.,NTW NHS Foundation Trust
World Psychiatry | Year: 2014

Cognitive behavior therapy (CBT), as exemplified by the model of psychotherapy developed and refined over the past 40 years by A.T. Beck and colleagues, is one of the treatments of first choice for ambulatory depressive and anxiety disorders. Over the past several decades, there have been vigorous efforts to adapt CBT for treatment of more severe mental disorders, including schizophrenia and the more chronic and/or treatment refractory mood disorders. These efforts have primarily studied CBT as an adjunctive therapy, i.e., in combination with pharmacotherapy. Given the several limitations of state-of-the-art pharmacotherapies for these severe mental disorders, demonstration of clinically meaningful additive effects for CBT would have important implications for improving public health. This paper reviews the key developments in this important area of therapeutics, providing a summary of the current state of the art and suggesting directions for future research. Source

Capraro V.,University of Southampton
PLoS ONE | Year: 2013

Social dilemmas are situations in which collective interests are at odds with private interests: pollution, depletion of natural resources, and intergroup conflicts are at their core social dilemmas. Because of their multidisciplinarity and their importance, social dilemmas have been studied by economists, biologists, psychologists, sociologists, and political scientists. These studies typically explain the tendency to cooperate by dividing people into proself and prosocial types, by appealing to forms of external control or, in iterated social dilemmas, to long-term strategies. But recent experiments have shown that cooperation is possible even in one-shot social dilemmas without forms of external control, and that the rate of cooperation typically depends on the payoffs. This makes a predictive division between proself and prosocial people impossible and shows that people have a natural inclination to cooperate. The key innovation of this article is to postulate that humans have a natural inclination to cooperate and consequently do not act a priori as single agents, as assumed by standard economic models, but forecast how a social dilemma would evolve if they formed coalitions and then act according to their most optimistic forecast. Formalizing this idea, we propose the first predictive model of human cooperation able to organize a number of different experimental findings that are not explained by the standard model. We show also that the model makes satisfactorily accurate quantitative predictions of population average behavior in one-shot social dilemmas. © 2013 Valerio Capraro. Source
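One way to see how a payoff-dependent cooperation rate can arise from comparing coalition and singleton outcomes is the following toy calculation. It is a simplified stand-in of our own devising, not the article's actual functional form.

```python
# Toy payoff-dependent cooperation in a one-shot Prisoner's Dilemma: compare
# the "coalition" outcome (mutual cooperation, payoff R) with the singleton
# outcome (mutual defection, payoff P), discounted by the incentives to
# deviate. T is the temptation payoff and S the sucker's payoff.
def cooperation_tendency(R, P, T, S):
    """Returns a value in [0, 1] that grows with the coalition's advantage."""
    gain = R - P                  # what cooperating together adds
    risk = (T - R) + (P - S)      # deviation incentive plus exploitation cost
    return max(0.0, min(1.0, gain / (gain + risk)))

weak_dilemma = cooperation_tendency(R=10, P=5, T=11, S=4)
strong_dilemma = cooperation_tendency(R=6, P=5, T=11, S=0)
# The tendency to cooperate tracks the payoffs, as in the experiments cited:
# the milder the dilemma, the higher the predicted cooperation rate.
```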

Grivas C.,University of Southampton
Progress in Quantum Electronics | Year: 2016

The field of optically pumped planar waveguide lasers has seen rapid development over the last two decades, driven by the requirements of a range of applications. This sustained research effort has led to the demonstration of a large variety of miniature, highly efficient laser sources by combining different gain media and resonator geometries. One of the most attractive features of waveguide lasers is the broad range of regimes in which they can operate, spanning from continuous-wave and single-frequency operation through to the generation of femtosecond pulses. Furthermore, their technology has experienced considerable advances to provide increased output power levels, deriving benefits from the relative immunity to heat generated in the gain medium during laser operation and the use of cladding-pumped architectures. This second part of the review on optically pumped planar waveguide lasers provides a snapshot of state-of-the-art research in this field in terms of gain materials and laser system designs, as well as a perspective on the status of their application as real devices in various research areas. © 2016 Elsevier Ltd. All rights reserved. Source

Carling P.A.,University of Southampton
Journal of Geophysical Research: Earth Surface | Year: 2013

Landforms, morphologically similar to aeolian yardangs but formed by erosion of bedrock by currents on an estuarine rock platform, are described for the first time. The geometries of the "yardangs" are described and related to semi-lemniscate shapes that minimize hydraulic drag. The processes of bedrock erosion by the reversing sediment-laden tidal currents are described, and a semi-quantitative model for landform evolution is proposed. The model casts doubt on the "simple" role of the maximum in the two-dimensional vertical suspended sediment flux distribution and the consequent distribution of potential kinetic energy flux in the process of shaping the rock wall facing the ebb flow. Rather, although the kinetic energy flux increases away from the bed, the sediment becomes finer and abrasion is likely insignificant compared with coarse sand abrasion lower in the profile. In addition, the vertical distribution of sediment flux is mediated by topographic forcing, which raises the elevation at which bed load intersects the yardang prow. Consequent erosion leads to ebb-facing caprock collapse and yardang shortening. In contrast, the role of ebb-flow separation is paramount in mediating the abrasion process that molds the rock surface facing the flood flow. The length of yardangs is the least conservative dimension, reducing through time more rapidly than the height and width. Width is the more conservative dimension, which implies that once the caprock is destroyed, scour over the obstacle is more significant in reducing body height than scour of the flanks, which reduces width. The importance of vertical fissures in instigating the final breakdown of smaller yardangs and their extinction is noted. Similarities to aeolian yardang geometries and formation principles and processes are noted, as are the differences. The model has implications for aeolian yardang models generally. © 2012. American Geophysical Union. All Rights Reserved. Source

Recently (Starink, 2014) a new model for diffusion-controlled precipitation reactions based on the extended volume concept was derived. The model leads to an analytical equation describing the relation between the fraction transformed, α, the reaction time, t, and the reaction exponent, n, as: α = {exp(-2(k1t)^n) - 1}/(2(k1t)^n) + 1. In the present work, new analysis methods are derived which allow determination of the reaction exponent n. The new methods are applied to the analysis of nucleation and it is shown that generally during a reaction with growth in three dimensions there are only two modes: either the nucleation rate in the extended volume is constant or it is negligibly small. A new approach to the interaction of diffusion-controlled growth and nucleation is proposed to rationalise these findings. The exponential decay of the average solute content predicted by the new model is further analysed and compared with a range of experimental data and contrasted with other models. The new model is found to agree excellently with these solute decay data. © 2015 Elsevier B.V. All rights reserved. Source
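The analytical expression for the fraction transformed is easy to evaluate and to check against its limiting behaviour; the rate constant k1 and exponent n used below are illustrative values, not fitted ones.

```python
import math

def fraction_transformed(t, k1, n):
    """alpha = {exp(-2*(k1*t)**n) - 1} / (2*(k1*t)**n) + 1.
    math.expm1 computes exp(u) - 1 accurately for small u."""
    x = 2.0 * (k1 * t) ** n
    return math.expm1(-x) / x + 1.0

# Limiting behaviour with illustrative k1 = 1, n = 1.5:
early = fraction_transformed(t=1e-3, k1=1.0, n=1.5)   # alpha -> 0 as t -> 0
late = fraction_transformed(t=1e6, k1=1.0, n=1.5)     # alpha -> 1 as t -> infinity
```

For small x the expression reduces to alpha ≈ x/2 = (k1*t)^n, the usual early-stage power law, while for large x it saturates at 1, so the formula interpolates smoothly between the two regimes.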

Nicholls R.J.,University of Southampton
Oceanography | Year: 2011

Coastal areas constitute important habitats, and they contain a large and growing population, much of it located in economic centers such as London, New York, Tokyo, Shanghai, Mumbai, and Lagos. The range of coastal hazards includes climate-induced sea level rise, a long-term threat that demands a broad response. Global sea levels rose 17 cm through the twentieth century, and are likely to rise more rapidly through the twenty-first century, when a rise of more than 1 m is possible. In some locations, these changes may be exacerbated by (1) increases in storminess due to climate change, although this scenario is less certain, and (2) widespread human-induced subsidence due to ground fluid withdrawal from, and drainage of, susceptible soils, especially in deltas. Relative sea level rise has a range of potential impacts, including higher extreme sea levels (and flooding), coastal erosion, salinization of surface and ground waters, and degradation of coastal habitats such as wetlands. Without adaptation, large land areas and millions of people could be displaced by sea level rise. Appropriate responses include climate mitigation (a global response) and/or adaptation (a local response). A combination of these strategies appears to be the most appropriate approach to sea level rise regardless of the uncertainty. Adaptation responses can be characterized as (1) protect, (2) accommodate, or (3) retreat. While these adaptation responses could reduce impacts significantly, they will need to be consistent with responses to all coastal hazards, as well as with wider societal and development objectives; hence, an integrated coastal management philosophy is required. In some developed countries, including England and the Netherlands, proactive adaptation plans are already being formulated. Coastal cities worldwide will be a major focus for adaptation efforts because of their concentrations of people and assets. 
Developing countries will pose adaptation challenges, especially in deltaic areas and small islands, which are the most vulnerable settings. © 2011 by The Oceanography Society. All rights reserved. Source

Prugel-Bennett A.,University of Southampton
IEEE Transactions on Evolutionary Computation | Year: 2010

This paper identifies five distinct mechanisms by which a population-based algorithm might have an advantage over a solo-search algorithm in classical optimization. These mechanisms are illustrated through a number of toy problems. Simulations are presented comparing different search algorithms on these problems. The plausibility of these mechanisms occurring in classical optimization problems is discussed. The first mechanism we consider relies on putting together building blocks from different solutions. This is extended to include problems containing critical variables. The second mechanism is the result of focusing of the search caused by crossover. Also discussed in this context is strong focusing produced by averaging many solutions. The next mechanism to be examined is the ability of a population to act as a low-pass filter of the landscape, ignoring local distractions. The fourth mechanism is a population's ability to search different parts of the fitness landscape, thus hedging against bad luck in the initial position or the decisions it makes. The final mechanism is the opportunity of learning useful parameter values to balance exploration against exploitation. © 2010 IEEE. Source
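The fourth mechanism (a population hedging against bad luck in the initial position) can be demonstrated in a few lines. The landscape and climber below are toy constructions of our own, in the spirit of the paper's toy problems.

```python
import math
import random

random.seed(0)

# Toy rugged landscape: a smooth bowl plus ripples creating many local optima.
def fitness(x):
    return -(x * x) - 2.0 * math.sin(10.0 * x) ** 2

def hill_climb(x, steps=200, step=0.05):
    """A solo stochastic hill climber: accept only improving moves."""
    for _ in range(steps):
        cand = x + random.uniform(-step, step)
        if fitness(cand) > fitness(x):
            x = cand
    return x

# A population of independent climbers hedges against a bad start: its best
# member can, by construction, do no worse than any single climber within it.
starts = [random.uniform(-3.0, 3.0) for _ in range(20)]
finals = [hill_climb(s) for s in starts]
solo = fitness(finals[0])                    # one climber, one start
best_pop = max(fitness(x) for x in finals)   # best of the population
```

The same harness extends naturally to the paper's other mechanisms, e.g. averaging or recombining members of `finals` to illustrate focusing and low-pass filtering of the landscape.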

Kingan M.J.,University of Southampton
Journal of Sound and Vibration | Year: 2013

A theoretical model is presented for calculating the broadband noise produced by the interaction of an open rotor with the wake from either an upstream contra-rotating rotor or a stationary pylon. The model is used to investigate the dependence of the radiated noise on parameters such as pylon-rotor gap and the polar and azimuthal directivity of the noise field. A simple model is also presented which assumes that the unsteady loading on adjacent blades is uncorrelated. It is shown that the simple model can be used to calculate broadband interaction noise for most practical open rotor geometries. © 2013 Elsevier Ltd. All rights reserved. Source

Sonuga-Barke E.J.S.,University of Southampton
Journal of Personality Disorders | Year: 2014

Young people with conduct disorder often experience histories of psychosocial adversity and socioeconomic insecurity. For these individuals, real-world future outcomes are not only delayed in their delivery but also highly uncertain. Under such circumstances, accentuated time preference (extreme favoring of the present over the future) is a rational response to the everyday reality of social and economic transactions. Building on this observation, the author sets out the hypothesis that the exaggerated temporal discounting displayed by individuals with conduct disorder reported by White et al. (2014) is an adaptation to chronic exposure to psychosocial insecurity during development. The author postulates that this adaptation leads to (a) a decision-making bias whereby delay and uncertainty are coded as inseparable characteristics of choice outcomes and/or (b) reprogramming of the brain networks regulating intertemporal decision making. Future research could explore the putative role of environmental exposures to adversity in the development of exaggerated temporal discounting in conduct disorder as well as the mediating role of putative cognitive and neurobiological adaptations. © 2014 The Guilford Press Source
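Temporal discounting of the kind discussed is often described with a hyperbolic form V = A/(1 + kD). Both the functional form and the parameter values below are a generic illustration, not those used in the cited studies.

```python
# Hyperbolic delay discounting: the subjective value V of amount A delayed
# by D days, with discount rate k. A larger k means steeper discounting,
# i.e. stronger favouring of the present over the future.
def discounted_value(amount, delay, k):
    return amount / (1.0 + k * delay)

typical_k, steep_k = 0.01, 0.5   # illustrative discount rates

# With steep discounting, 100 units delayed by 30 days is subjectively worth
# less than an immediate 10 units; with typical discounting, far more.
steep = discounted_value(100, 30, steep_k)      # 100 / 16 = 6.25
typical = discounted_value(100, 30, typical_k)  # 100 / 1.3 ~ 76.9
```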

Kaparis K.,University of Southampton | Letchford A.N.,Lancaster University
Mathematical Programming | Year: 2010

Valid inequalities for 0-1 knapsack polytopes often prove useful when tackling hard 0-1 Linear Programming problems. To generate such inequalities, one needs separation algorithms for them, i.e., routines for detecting when they are violated. We present new exact and heuristic separation algorithms for several classes of inequalities, namely lifted cover, extended cover, weight and lifted pack inequalities. Moreover, we show how to improve a recent separation algorithm for the 0-1 knapsack polytope itself. Extensive computational results, on MIPLIB and OR Library instances, show the strengths and limitations of the inequalities and algorithms considered. © 2010 Springer and Mathematical Programming Society. Source
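As a concrete illustration of what "separation" means here, a common greedy heuristic for the plain cover inequalities can be sketched as follows (a sketch only; the exact and lifted separation routines of the paper are considerably more sophisticated):

```python
def find_violated_cover(a, b, x_star):
    """Greedy separation heuristic for cover inequalities on a 0-1 knapsack ax <= b.

    A cover C satisfies sum_{j in C} a[j] > b; its cover inequality
    sum_{j in C} x[j] <= |C| - 1 is violated at the fractional LP point
    x_star if sum_{j in C} x_star[j] > |C| - 1. Items are added greedily
    in decreasing x_star (ties broken by larger weight) until a cover forms.
    Returns the cover as sorted indices, or None if no violation is found.
    """
    order = sorted(range(len(a)), key=lambda j: (-x_star[j], -a[j]))
    cover, weight = [], 0.0
    for j in order:
        cover.append(j)
        weight += a[j]
        if weight > b:  # C is now a cover
            if sum(x_star[i] for i in cover) > len(cover) - 1 + 1e-9:
                return sorted(cover)
            return None
    return None

# Knapsack 3x1 + 4x2 + 5x3 <= 8 with fractional point (1, 1, 0.4):
# C = {1, 2, 3} has weight 12 > 8 and sum x* = 2.4 > |C| - 1 = 2, so it is violated.
print(find_violated_cover([3, 4, 5], 8, [1.0, 1.0, 0.4]))  # [0, 1, 2]
```

If a violated cover is found, the corresponding inequality is added as a cutting plane and the LP is re-solved; lifting then strengthens the inequality further.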

Fischbacher T.,University of Southampton
Journal of High Energy Physics | Year: 2010

The list of six previously known nontrivial stationary points in the scalar potential of N = 8, D = 4 supergravity with gauge group SO(8) is extended by fourteen new entries, whose properties have been obtained numerically using the sensitivity backpropagation technique. Eight of the new solutions break the gauge group completely, while three have a residual symmetry of U(1). Three further ones break the gauge group to U(1) × U(1). While the approximate numerical data are somewhat inconclusive, there is evidence that one of these may have a residual N = 1 supersymmetry and hence correspond to a stable vacuum. It must be pointed out that this list of new solutions most likely is not exhaustive. © SISSA 2010. Source
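The generic numerical strategy behind such searches can be sketched as descending the squared gradient norm of the potential, |∇V|², so that saddle points and maxima are located as well as minima. The toy double-well potential below is purely illustrative, standing in for the vastly more complicated 70-dimensional supergravity potential:

```python
def grad(f, p, h=1e-5):
    """Central finite-difference gradient of f at point p."""
    g = []
    for i in range(len(p)):
        q1, q2 = list(p), list(p)
        q1[i] += h
        q2[i] -= h
        g.append((f(q1) - f(q2)) / (2 * h))
    return g

def find_stationary_point(V, p, steps=20000, lr=0.01):
    """Locate a stationary point of V (where grad V = 0) by gradient
    descent on |grad V|^2. Unlike minimising V itself, this can converge
    to saddles and maxima, which matters when hunting vacua."""
    g2 = lambda q: sum(gi * gi for gi in grad(V, q))
    for _ in range(steps):
        p = [pi - lr * gi for pi, gi in zip(p, grad(g2, p))]
    return p

# Toy double-well potential; its stationary points are (0, 0) and (+/-1, 0).
V = lambda q: (q[0] ** 2 - 1) ** 2 + q[1] ** 2
p = find_stationary_point(V, [0.8, 0.3])  # converges toward (1, 0)
```

Starting points are then varied at random to enumerate distinct stationary points, which is also why such a list can never be guaranteed exhaustive.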

King S.F.,University of Southampton
Journal of High Energy Physics | Year: 2010

Following the anomalous like-sign dimuon charge asymmetry measured by the D0 collaboration at the Tevatron collider we discuss the implications of large CP violation in B_d,s mixing for Supersymmetric (SUSY) Standard Models, focussing on those models which allow a family symmetry and unification. For the Minimal Supersymmetric Standard Model (MSSM) we show that it is only possible to account for B_s mixing and CP violation at the expense of large squark mixing which would require a new approach to family symmetry models. In order to describe both B_s and B_d mixing and CP violation we are led to consider SUSY models with Higgs fields transforming as triplets under a family symmetry. We describe a realistic such model based on Δ(27) family symmetry in which tree-level exchange of the second Higgs family predicts B_s and B_d mixing and CP violation in good agreement with a recent global fit, while naturally suppressing flavour and CP violation involving the first and second quark and lepton families. © SISSA 2010. Source

Broderick N.G.R.,University of Southampton
Optics Express | Year: 2010

I introduce the problem of transforming one optical pulse into another via nonlinear propagation in a length of dispersion varying optical fibre. Then, using a genetic algorithm to design the dispersion profiles, I show that the problem can be solved, leading to high quality pulse transforms that are significantly better than previously published results. Finally, I suggest further work and other applications for this method. © 2010 Optical Society of America. Source
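The design loop described above can be caricatured in a few lines. The objective below is a hypothetical profile-matching cost, standing in for the expensive nonlinear pulse-propagation simulation that the paper actually evaluates for each candidate dispersion profile:

```python
import random

def genetic_search(fitness, n_genes, pop_size=40, generations=200,
                   sigma=0.1, seed=1):
    """Minimal real-valued genetic algorithm: tournament selection,
    one-point crossover, sparse Gaussian mutation, and elitism
    (the best individual is carried into the next generation)."""
    rng = random.Random(seed)
    pop = [[rng.uniform(0, 1) for _ in range(n_genes)] for _ in range(pop_size)]

    def pick():
        a, b = rng.sample(pop, 2)
        return a if fitness(a) < fitness(b) else b

    for _ in range(generations):
        nxt = [min(pop, key=fitness)]          # elitism
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, n_genes)
            child = p1[:cut] + p2[cut:]        # one-point crossover
            child = [g + rng.gauss(0, sigma) if rng.random() < 0.2 else g
                     for g in child]           # sparse Gaussian mutation
            nxt.append(child)
        pop = nxt
    return min(pop, key=fitness)

# Hypothetical objective: match a target shape (e.g. a dispersion profile).
target = [0.2, 0.5, 0.9, 0.5, 0.2]
mismatch = lambda prof: sum((p - t) ** 2 for p, t in zip(prof, target))
best = genetic_search(mismatch, n_genes=len(target))
```

In the paper the genome would encode the dispersion profile along the fibre and the fitness would measure the distance between the simulated output pulse and the desired one.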

Peacock A.C.,University of Southampton
Optics Letters | Year: 2010

Numerical simulations are used to investigate soliton-like propagation in tapered silicon core optical fibers. The simulations are based on a realistic tapered structure with nanoscale core dimensions and a decreasing anomalous dispersion profile to compensate for the effects of linear and nonlinear loss. An intensity misfit parameter is used to establish the optimum taper dimensions that preserve the pulse shape while reducing temporal broadening. Soliton formation from Gaussian input pulses is also observed, further evidence of the potential for tapered silicon fibers to find use in a range of signal processing applications. © 2010 Optical Society of America. Source

Barker D.J.P.,University of Southampton | Barker D.J.P.,Oregon Health And Science University
Public Health | Year: 2012

Coronary heart disease, type 2 diabetes, breast cancer and many other chronic diseases are unnecessary. Their occurrence is not mandated by genes passed down to us through thousands of years of evolution. Chronic diseases are not the inevitable lot of humankind. They are the result of the changing pattern of human development. We could readily prevent them, had we the will to do so. Prevention of chronic disease and an increase in healthy ageing require improvement in the nutrition of girls and young women. Many babies in the womb in the Western world today are receiving unbalanced and inadequate diets. Many babies in the developing world are malnourished because their mothers are chronically malnourished. Protecting the nutrition and health of girls and young women should be the cornerstone of public health. Not only will this prevent chronic disease, but it will produce new generations who have better health and well-being through their lives. © 2011 The Royal Society for Public Health. Source

Leonardi S.,University of Puerto Rico at Mayaguez | Castro I.P.,University of Southampton
Journal of Fluid Mechanics | Year: 2010

Computations of channel flow with rough walls comprising staggered arrays of cubes having various plan area densities are presented and discussed. The cube height h is 12.5% of the channel half-depth and Reynolds numbers (u_τh/ν) are typically around 700, well into the fully rough regime. A direct numerical simulation technique, using an immersed boundary method for the obstacles, was employed with typically 35 million cells. It is shown that the surface drag is predominantly form drag, which is greatest at an area coverage around 15%. The height variation of the axial pressure force across the obstacles weakens significantly as the area coverage decreases, but is always largest near the top of the obstacles. Mean flow velocity and pressure data allow precise determination of the zero-plane displacement (defined as the height at which the axial surface drag force acts) and this leads to noticeably better fits to the log-law region than can be obtained by using the zero-plane displacement merely as a fitting parameter. There are consequent implications for the value of von Kármán's constant. As the effective roughness of the surface increases, it is also shown that there are significant changes to the structure of the turbulence field around the bottom boundary of the inertial sublayer. In distinct contrast to two-dimensional roughness (longitudinal or transverse bars), increasing the area density of this three-dimensional roughness leads to a monotonic decrease in normalized vertical stress around the top of the roughness elements. Normalized turbulence stresses in the outer part of the flows are nonetheless very similar to those in smooth-wall flows. © 2010 Cambridge University Press. Source
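The fitting step discussed above amounts to a least-squares fit of the rough-wall log law u(z) = (u_τ/κ) ln((z − d)/z₀), with zero-plane displacement d and roughness length z₀. A sketch on synthetic data (all parameter values hypothetical, κ = 0.40 assumed); in the paper d is determined independently from the drag distribution, which removes one free parameter from exactly this kind of fit:

```python
import math

def log_law(z, u_tau, d, z0, kappa=0.40):
    """Mean velocity in the inertial sublayer over a rough wall:
    u(z) = (u_tau / kappa) * ln((z - d) / z0)."""
    return (u_tau / kappa) * math.log((z - d) / z0)

# Synthetic profile with known parameters (u_tau=0.5, d=0.08, z0=0.01)...
heights = [0.15, 0.2, 0.3, 0.4, 0.6, 0.8]
u = [log_law(z, 0.5, 0.08, 0.01) for z in heights]

# ...then a brute-force least-squares fit of (d, z0) with u_tau assumed known.
def fit(heights, u, u_tau):
    best = None
    for d in [i * 0.005 for i in range(0, 25)]:        # d in [0, 0.12]
        for z0 in [j * 0.001 for j in range(1, 40)]:   # z0 in (0, 0.04]
            e = sum((ui - log_law(z, u_tau, d, z0)) ** 2
                    for z, ui in zip(heights, u))
            if best is None or e < best[0]:
                best = (e, d, z0)
    return best

err, d_fit, z0_fit = fit(heights, u, 0.5)  # recovers d=0.08, z0=0.01
```

With noisy data, leaving both d and z₀ free makes the fit poorly conditioned, which is why an independent, physically defined d sharpens the log-law fit and constrains the inferred von Kármán constant.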

Atkinson P.M.,University of Southampton | Massari R.,University of San Marino
Geomorphology | Year: 2011

In previous research, a logistic regression of landslide occurrence on several explanatory variables was fitted and used to map landslide susceptibility for a small area in the central Apennines, Italy. Here, the spatial dependence or spatial correlation in the residuals from the fitted regression model was accounted for by inserting an autocovariate into the model. The autocovariate was estimated by applying a Gibbs sampler to the susceptibilities for neighbouring pixels. As in any landslide susceptibility analysis, accuracy was difficult to assess because of the requirement for data on future landslides. However, by comparing the ordinary logistic model to the autologistic model obtained on the same set of data, it was possible to assess the influence of the autocovariate. The autocovariate rendered the model simpler because several variables lost their significance and were, therefore, omitted from the model. Further, areas of high landslide susceptibility estimated from the autologistic model were geographically clustered, as one would expect, and this may be advantageous in terms of (i) interpreting the model and (ii) displaying the results to non-experts. © 2011 Elsevier B.V. Source
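A toy one-dimensional version of the autologistic idea, in which each pixel's logit gains a term proportional to the mean susceptibility of its neighbours, shows how the autocovariate produces the geographic clustering noted above. All coefficients here are hypothetical, and the paper estimates the autocovariate with a Gibbs sampler over two-dimensional pixel neighbourhoods rather than this simple fixed-point iteration:

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def autologistic_map(covariate, beta0, beta1, gamma, n_iter=50):
    """Iterate susceptibilities on a 1-D strip of pixels:
    logit(p_i) = beta0 + beta1 * x_i + gamma * mean(p of neighbours),
    where gamma is the autocovariate coefficient capturing spatial
    dependence. gamma = 0 recovers the ordinary logistic model."""
    p = [0.5] * len(covariate)
    for _ in range(n_iter):
        new = []
        for i, x in enumerate(covariate):
            nbrs = [p[j] for j in (i - 1, i + 1) if 0 <= j < len(p)]
            auto = sum(nbrs) / len(nbrs)
            new.append(sigmoid(beta0 + beta1 * x + gamma * auto))
        p = new
    return p

# A single high-covariate pixel (e.g. steep slope) raises susceptibility
# in its neighbours too when gamma > 0, clustering the high-risk area.
x = [0.0, 0.0, 3.0, 0.0, 0.0]
clustered = autologistic_map(x, beta0=-2.0, beta1=1.0, gamma=2.0)
plain = autologistic_map(x, beta0=-2.0, beta1=1.0, gamma=0.0)
```

Because the autocovariate absorbs spatially structured variation, covariates that were merely proxies for location can lose significance, exactly as reported above.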

Calder P.C.,University of Southampton
European Journal of Pharmacology | Year: 2011

Inflammation underlies many common conditions and diseases. Fatty acids can influence inflammation through a variety of mechanisms, including acting via cell surface and intracellular receptors/sensors that control inflammatory cell signalling and gene expression patterns. Some effects of fatty acids on inflammatory cells appear to be mediated by, or at least are associated with, changes in fatty acid composition of cell membranes. Changes in these compositions can modify membrane fluidity, lipid raft formation, cell signalling leading to altered gene expression, and the pattern of lipid and peptide mediator production. Cells involved in the inflammatory response are typically rich in the n-6 fatty acid arachidonic acid, but the contents of arachidonic acid and of the n-3 fatty acids eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA) can be altered through oral administration of EPA and DHA. Eicosanoids produced from arachidonic acid have roles in inflammation. EPA also gives rise to eicosanoids and these may have differing properties from those of arachidonic acid-derived eicosanoids. EPA and DHA give rise to resolvins which are anti-inflammatory and inflammation resolving. Thus, fatty acid exposure and the fatty acid composition of human inflammatory cells influence their function. As a result of their anti-inflammatory actions marine n-3 fatty acids have therapeutic efficacy in rheumatoid arthritis, although benefits in other inflammatory diseases and conditions have not been unequivocally demonstrated. The anti-inflammatory effects of marine n-3 fatty acids may contribute to their protective actions towards atherosclerosis, plaque rupture and cardiovascular mortality. The therapeutic dose of n-3 fatty acids is not clear. © 2011 Elsevier B.V. All rights reserved. Source

Muller G.,University of Southampton
Renewable Energy | Year: 2013

Many industrial processes and renewable energy sources produce thermal energy with temperatures below 100 °C. The cost-effective generation of mechanical energy from this thermal energy still constitutes an engineering problem. The atmospheric steam engine is a very simple machine which employs the steam generated by boiling water at atmospheric pressures. Its main disadvantage is the low theoretical efficiency of 0.064. In this article, first the theory of the atmospheric steam engine is extended to show that operation for temperatures between 60 °C and 100 °C is possible although efficiencies are further reduced. Second, the addition of a forced expansion stroke, where the steam volume is increased using external energy, is shown to lead to significantly increased overall efficiencies ranging from 0.084 for a boiler temperature of T0 = 60 °C to 0.25 for T0 = 100 °C. The simplicity of the machine indicates cost-effectiveness. The theoretical work shows that the atmospheric steam engine still has development potential. © 2012 Elsevier Ltd. Source

Bartlett R.,University of Southampton
Ageing and Society | Year: 2014

After decades of silencing and discrimination, people with dementia are beginning to join forces, take action and campaign for social change. Drawing on data obtained from 'activists' with dementia using diary interview method and participant observation, this paper considers the emergent modes of dementia activism in the context of the social movement literature, and in particular, work emphasising the role of networks in health social movements. The study identified three emergent modes of dementia activism; these were the 'protecting-self against decline' mode, '(re)gaining respect' mode, and 'creating connections with other people with dementia' mode. Taken together, these modes show how a sense of elapsing time pervades this form of activism. The investigation reinforces the contention that time is a dominant force that structures human motivation and goals. Furthermore, it raises the possibility that activism can protect against decline amongst people with dementia given the appropriate temporal space. Copyright © Cambridge University Press 2012. Source

Grocott M.P.,University of Southampton
Cochrane database of systematic reviews (Online) | Year: 2012

Studies have suggested that increasing whole body blood flow and oxygen delivery around the time of surgery reduces mortality, morbidity and the expense of major operations. To describe the effects of increasing perioperative blood flow using fluids with or without inotropes or vasoactive drugs. Outcomes were mortality, morbidity, resource utilization and health status. We searched CENTRAL (The Cochrane Library 2012, Issue 1), MEDLINE (1966 to March 2012) and EMBASE (1982 to March 2012). We manually searched the proceedings of major conferences and personal reference databases up to December 2011. We contacted experts in the field and pharmaceutical companies for published and unpublished data. We included randomized controlled trials with or without blinding. We included studies involving adult patients (aged 16 years or older) undergoing surgery (patients having a procedure in an operating room). The intervention met the following criteria. 'Perioperative' was defined as starting up to 24 hours before surgery and stopping up to six hours after surgery. 'Targeted to increase global blood flow' was defined by explicit measured goals that were greater than in controls, specifically one or more of cardiac index, oxygen delivery, oxygen consumption, stroke volume (and the respective derived indices), mixed venous oxygen saturation (SvO2), oxygen extraction ratio (O2ER) or lactate. Two authors independently extracted the data. We contacted study authors for additional data. We used Review Manager software. We included 31 studies of 5292 participants. There was no difference in mortality: 282/2615 (10.8%) died in the control group and 238/2677 (8.9%) in the treatment group, RR of 0.89 (95% CI 0.76 to 1.05, P = 0.18). However, the results were sensitive to analytical methods and the intervention was better than control when inverse variance or Mantel-Haenszel random-effects models were used, RR of 0.72 (95% CI 0.55 to 0.95, P = 0.02). 
The results were also sensitive to withdrawal of studies with methodological limitations. The rates of three morbidities were reduced by increasing global blood flow: renal failure, RR of 0.71 (95% CI 0.57 to 0.90); respiratory failure, RR of 0.51 (95% CI 0.28 to 0.93); and wound infections, RR of 0.65 (95% CI 0.51 to 0.84). There were no differences in the rates of nine other morbidities: arrhythmia, pneumonia, sepsis, abdominal infection, urinary tract infection, myocardial infarction, congestive cardiac failure or pulmonary oedema, or venous thrombosis. The number of patients with complications was reduced by the intervention, RR of 0.68 (95% CI 0.58 to 0.80). Hospital length of stay was reduced in the treatment group by a mean of 1.16 days (95% CI 0.43 to 1.89, P = 0.002). There was no difference in critical care length of stay. There were insufficient data to comment on quality of life and cost effectiveness. It remains uncertain whether increasing blood flow using fluids, with or without inotropes or vasoactive drugs, reduces mortality in adults undergoing surgery. The primary analysis in this review (mortality at longest follow-up) showed no difference between the intervention and control, but this result was sensitive to the method of analysis, the withdrawal of studies with methodological limitations, and is dominated by a single large RCT. Overall, for every 100 patients in whom blood flow is increased perioperatively to defined goals, one can expect 13 in 100 patients (from 40/100 to 27/100) to avoid a complication, 2/100 to avoid renal impairment (from 8/100 to 6/100), 5/100 to avoid respiratory failure (from 10/100 to 5/100), and 4/100 to avoid postoperative wound infection (from 10/100 to 6/100). On average, patients receiving the intervention stay in hospital one day less. It is unlikely that the intervention causes harm. 
The balance of current evidence does not support widespread implementation of this approach to reduce mortality but does suggest that complications and duration of hospital stay are reduced. Source
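For orientation, the headline mortality counts above can be recomputed as a crude (unpooled) risk ratio with its 95% confidence interval. The review's quoted RR of 0.89 differs slightly because a meta-analysis weights each trial separately rather than pooling the raw counts:

```python
import math

def risk_ratio(events_t, n_t, events_c, n_c, z=1.96):
    """Crude risk ratio (treatment vs control) with a 95% CI from the
    standard log-RR variance: SE = sqrt(1/a - 1/n1 + 1/c - 1/n2)."""
    rr = (events_t / n_t) / (events_c / n_c)
    se = math.sqrt(1 / events_t - 1 / n_t + 1 / events_c - 1 / n_c)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Mortality counts from the review: 238/2677 (treatment) vs 282/2615 (control).
rr, lo, hi = risk_ratio(238, 2677, 282, 2615)  # crude RR ~0.82
```

The sensitivity of the pooled result to the choice of model (fixed-effect vs random-effects, and weighting scheme) is exactly the fragility flagged in the review's conclusions.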

Calder P.C.,University of Southampton
Proceedings of the Nutrition Society | Year: 2010

Lipids traditionally used in artificial nutrition are based on n-6 fatty acid-rich vegetable oils like soyabean oil. This may not be optimal because it may present an excessive supply of linoleic acid. One alternative to the use of soyabean oil is its partial replacement by fish oil, which contains n-3 fatty acids. These fatty acids influence inflammatory and immune responses and so may be useful in particular situations where those responses are not optimal. Fish oil-containing lipid emulsions have been used in parenteral nutrition in adult patients post-surgery (mainly gastrointestinal). This has been associated with alterations in patterns of inflammatory mediators and in immune function and, in some studies, a reduction in length of intensive care unit (ICU) and hospital stay. Perioperative administration of fish oil may be superior to post-operative. Parenteral fish oil has been used in critically ill adults. Here the influence on inflammatory processes, immune function and clinical endpoints is not clear, since there are too few studies and those that are available report contradictory findings. Fish oil is included in combination with other nutrients in various enteral formulas. In post-surgical patients and in those with mild sepsis or trauma, there is clinical benefit from a formula including fish oil and arginine. A formula including fish oil, borage oil and antioxidants has demonstrated marked benefits on gas exchange, ventilation requirement, new organ failures, ICU stay and mortality in patients with acute respiratory distress syndrome, acute lung injury or severe sepsis. © 2010 The Author. Source

Strefford J.C.,University of Southampton
British Journal of Haematology | Year: 2015

Chronic lymphocytic leukaemia (CLL) remains at the forefront of the genetic analysis of human tumours, principally due to its prevalence, protracted natural history and accessibility to suitable material for analysis. With the application of high-throughput genetic technologies, we have an unbridled view of the architecture of the CLL genome, including a comprehensive description of the copy number and mutational landscape of the disease, a detailed picture of clonal evolution during pathogenesis, and the molecular mechanisms that drive genomic instability and therapeutic resistance. This work has nuanced the prognostic importance of established copy number alterations, and identified novel prognostically relevant gene mutations that function within biological pathways that are attractive treatment targets. Herein, an overview of recent genomic discoveries will be reviewed, with associated biological and clinical implications, and a view into how clinical implementation may be facilitated. © 2014 John Wiley & Sons Ltd. Source

Brunton-Smith I.,University of Surrey | Sturgis P.,University of Southampton
Criminology | Year: 2011

For a long time, criminologists have contended that neighborhoods are important determinants of how individuals perceive their risk of criminal victimization. Yet, despite the theoretical importance and policy relevance of these claims, the empirical evidence base is surprisingly thin and inconsistent. Drawing on data from a national probability sample of individuals, linked to independent measures of neighborhood demographic characteristics, visual signs of physical disorder, and reported crime, we test four hypotheses about the mechanisms through which neighborhoods influence fear of crime. Our large sample size, analytical approach, and the independence of our empirical measures enable us to overcome some of the limitations that have hampered much previous research into this question. We find that neighborhood structural characteristics, visual signs of disorder, and recorded crime all have direct and independent effects on individual-level fear of crime. Additionally, we demonstrate that individual differences in fear of crime are strongly moderated by neighborhood socioeconomic characteristics; between-group differences in expressed fear of crime are both exacerbated and ameliorated by the characteristics of the areas in which people live. © 2011 American Society of Criminology. Source

Proud C.G.,University of Southampton
Biochemical Society Transactions | Year: 2013

mTORC1 (mammalian target of rapamycin complex 1) is activated by nutrients, growth factors and certain hormones. Signalling downstream of mTORC1 promotes protein synthesis by both activating the processes of translation initiation and elongation, in the short term, and the production of new ribosomes, in the longer term. mTORC1 signalling stimulates the translation of the mRNAs encoding the ribosomal proteins, activates RNA polymerases I and III, which make the rRNAs, and promotes the processing of the precursor for the main rRNAs. Taken together, these effects allow mTORC1 signalling to drive cell growth and proliferation. © 2013 Biochemical Society. Source

Rius M.,University of Southampton | Darling J.A.,U.S. Environmental Protection Agency
Trends in Ecology and Evolution | Year: 2014

Genetic admixture of divergent intraspecific lineages is increasingly suspected to have an important role in the success of colonising populations. However, admixture is not a universally beneficial genetic phenomenon. Selection is typically expected to favour locally adapted genotypes and can act against admixed individuals, suggesting that there are some conditions under which admixture will have negative impacts on population fitness. Therefore, it remains unclear how often admixture acts as a true driver of colonisation success. Here, we review the population consequences of admixture and discuss its costs and benefits across a broad spectrum of ecological contexts. We critically evaluate the evidence for a causal role of admixture in successful colonisation, and consider that role more generally in driving population range expansion. © 2014 Elsevier Ltd. Source

Chen S.,University of Southampton
IEEE Transactions on Broadcasting | Year: 2011

This contribution applies a digital predistorter to compensate for distortions caused by memory high power amplifiers (HPAs) which exhibit true output saturation characteristics. Particle swarm optimization is first implemented to identify the Wiener HPA's parameters. The estimated Wiener HPA model is then directly used to design the predistorter. The proposed digital predistorter solution is attractive owing to its low on-line computational complexity, small memory requirement and simple VLSI hardware implementation. Moreover, the designed predistorter is capable of successfully compensating for serious nonlinear distortions and memory effects caused by the memory HPA operating in the output saturation region. Simulation results are presented to demonstrate the effectiveness of this novel digital predistorter design. © 2011 IEEE. Source
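A minimal particle swarm optimiser of the kind used for the identification step, applied here to a hypothetical memoryless saturating-amplifier model (the paper's actual Wiener HPA, a linear filter followed by a saturating nonlinearity, is more elaborate, but the fitting loop has the same shape):

```python
import random

def pso(cost, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=3):
    """Minimal particle swarm optimiser: inertia w, cognitive pull c1
    toward each particle's personal best, social pull c2 toward the
    swarm's global best. Returns (best position, best cost)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pcost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pcost[i])
    gbest, gcost = pbest[g][:], pcost[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            c = cost(pos[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = pos[i][:], c
                if c < gcost:
                    gbest, gcost = pos[i][:], c
    return gbest, gcost

# Hypothetical saturating amplifier y = g*x / (1 + |x|/s), true (g, s) = (2.0, 1.5).
xs = [0.1 * k for k in range(-20, 21)]
truth = [2.0 * x / (1 + abs(x) / 1.5) for x in xs]

def err(p):
    g, s = p[0], max(p[1], 1e-6)  # clamp s so the model stays defined
    return sum((g * x / (1 + abs(x) / s) - y) ** 2 for x, y in zip(xs, truth))

best, best_err = pso(err, bounds=[(0.1, 5.0), (0.1, 5.0)])
```

Once the amplifier parameters are identified, the predistorter can be designed directly from the fitted model, which is what keeps the on-line complexity low.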

Isenring E.,Bond University | Elia M.,University of Southampton
Nutrition | Year: 2015

The risk for malnutrition increases with age and presence of cancer, and it is particularly common in older cancer patients. A range of simple and validated nutrition screening tools can be used to identify malnutrition risk in cancer patients (e.g., Malnutrition Screening Tool, Mini Nutritional Assessment Short Form Revised, Nutrition Risk Screening, and the Malnutrition Universal Screening Tool). Unintentional weight loss and current body mass index are common components of screening tools. Patients with cancer should be screened at diagnosis, on admission to hospitals or care homes, and during follow-up at outpatient or general practitioner clinics, at regular intervals depending on clinical status. Nutritional assessment is a comprehensive assessment of dietary intake, anthropometrics, and physical examination often conducted by dietitians or geriatricians after simple screening has identified at-risk patients. The result of nutritional screening, assessment and the associated care plans should be documented, and communicated, within and between care settings for best patient outcomes. © 2015 Elsevier Inc. Source

Maertens A.P.,Massachusetts Institute of Technology | Weymouth G.D.,University of Southampton
Computer Methods in Applied Mechanics and Engineering | Year: 2015

An accurate Cartesian-grid treatment for intermediate Reynolds number fluid-solid interaction problems is described. We first identify the discontinuity of the velocity gradient at the interface as the reason existing immersed boundary methods cannot handle intermediate Reynolds number flows. We address this issue by generalizing the Boundary Data Immersion Method (BDIM, Weymouth and Yue (2011)), in which the field equations of each domain are combined analytically, through the addition of a higher order term to the integral formulation. The new method retains the desirable simplicity of direct forcing methods and smoothes the velocity field at the fluid-solid interface while removing its bias. Based on a second-order convolution, it achieves second-order convergence in the L2 norm, regardless of the Reynolds number. This results in accurate flow predictions and pressure fields without spurious fluctuations, even at high Reynolds number. A treatment for sharp corners is also derived that significantly improves the flow predictions near the trailing edge of thin airfoils. The second-order BDIM is applied to unsteady problems relevant to ocean energy extraction as well as animal and vehicle locomotion for Reynolds numbers up to 10^5. © 2014 Elsevier B.V. Source
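The kernel-smoothed blending at the heart of BDIM can be sketched as follows. The cosine-kernel moment below is one common smoothed-indicator choice and the velocities are scalars for brevity; these are illustrative assumptions, and the paper's second-order method adds a further correction term (involving the interface-normal gradient) that is not shown here:

```python
import math

def mu0(d, eps):
    """Smoothed indicator over a half-width eps: 0 deep inside the body
    (signed distance d << 0), 1 in the fluid (d >> 0), and a C1-smooth
    transition across |d| < eps (zeroth moment of a cosine kernel)."""
    if d <= -eps:
        return 0.0
    if d >= eps:
        return 1.0
    return 0.5 * (1 + d / eps + math.sin(math.pi * d / eps) / math.pi)

def blended_velocity(u_fluid, u_body, signed_distance, eps):
    """BDIM-style convex blend of the fluid and body velocities, replacing
    the sharp (discontinuous-gradient) interface condition on a Cartesian
    grid with a smooth transition."""
    m = mu0(signed_distance, eps)
    return m * u_fluid + (1 - m) * u_body
```

The smoothing removes the velocity-gradient discontinuity flagged above; the second-order correction then removes the O(eps) bias that this zeroth-order blend alone would introduce.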

Yates C.M.,University of Birmingham | Calder P.C.,University of Southampton | Ed Rainger G.,University of Birmingham
Pharmacology and Therapeutics | Year: 2014

Omega-3 (n-3) polyunsaturated fatty acids (n-3 PUFAs) have well documented anti-inflammatory properties, and consequently therapeutic potential in chronic inflammatory diseases. Here we discuss the effects of n-3 PUFAs on various inflammatory pathways and how this leads to alterations in the function of inflammatory cells, most importantly endothelial cells and leukocytes. Strong evidence indicates n-3 PUFAs are beneficial as a dietary supplement in certain diseases such as rheumatoid arthritis; however for other conditions such as asthma, the data are less robust. A clearer understanding of the pharmacology of n-3 PUFAs will help to establish targets to modulate chronic inflammatory diseases. © 2013 Elsevier Inc. All rights reserved. Source

Ersoy E.Y.,Dokuz Eylul University | Palmer M.R.,University of Southampton
Lithos | Year: 2013

We present a compilation and comparison of geochemical data of Aegean Eocene to Recent magmatic rocks: (1) North Anatolian Eocene magmatic rocks (NAEM), (2) Aegean to west Anatolian Oligocene-Miocene magmatic rocks (AOMM), (3) Pliocene-Quaternary South Aegean volcanic arc (SAVA), (4) Pliocene-Quaternary Denizli-Isparta volcanics (DIV), and (5) Na-alkaline basalts with intra-plate geochemical affinity (IPV). These rocks are also compared with Miocene Galatean volcanics (GVP) from central Anatolia. The NAEM, SAVA and GVP show similar geochemical features indicative of a subduction-related origin in which the subducted oceanic plate contaminated the overlying mantle wedge. The distinct geochemical features of the AOMM reflect derivation from an intensely metasomatised mantle source, resulting from partial subduction and accretion of both continental and oceanic assemblages in the fore-arc of a southward migrating subduction system. These features provide an insight into the history of the distinct types of mantle metasomatism in the region and into its geodynamic evolution - an evolution that includes complex interaction of subduction roll-back, slab break-off, strike-slip faulting along major transfer zones, block rotations and core complex formation. Thus, the Eocene to recent magmatism in the region was controlled by various tectonic events: (1) the NAEM was most probably related to break-off of the subducted slab in western Anatolia, (2) magmatic activity in the western AOMM was controlled by rotational extension around poles in northern Greece developed in response to rotational roll-back of the Hellenic subduction system, (3) while AOMM magmatism in the east is closely associated with core complex formation and asthenosphere-related thermal input along a ~N-S-trending slab tear. 
In contrast, the rocks of the DIV and IPV carry asthenospheric mantle geochemical signatures indicative of roll-back induced asthenospheric upwelling in Rhodope to NW Anatolia, and slab tear-induced asthenospheric upwelling beneath the Menderes Core Complex. © 2013 Elsevier B.V. Source

Lander S.K.,University of Southampton
Astrophysical Journal Letters | Year: 2016

The activity of magnetars is believed to be powered by colossal magnetic energy reservoirs. We sketch an evolutionary picture in which internal field evolution in magnetars generates a twisted corona, from which energy may be released suddenly in a single giant flare, or more gradually through smaller outbursts and persistent emission. Given the ages of magnetars and the energy of their giant flares, we suggest that their evolution is driven by a novel mechanism: magnetic flux transport/decay due to persistent plastic flow in the crust, which would invalidate the common assumption that the crustal lattice is static and evolves only under Hall drift and Ohmic decay. We estimate the field strength required to induce plastic flow as a function of crustal depth, and the viscosity of the plastic phase. The star's superconducting core may also play a role in magnetar field evolution, depending on the star's spindown history and how rotational vortices and magnetic fluxtubes interact. © 2016. The American Astronomical Society. All rights reserved. Source

Proud C.G.,University of Southampton
American Journal of Clinical Nutrition | Year: 2014

Amino acids are the precursors for the synthesis of proteins. In humans, approximately half of the 20 different amino acids are essential, i.e., must be obtained from the diet. Cells must therefore take account of amino acid availability to achieve sustainable rates of protein synthesis. One of the major mechanisms involved in this is signaling through a complex of proteins termed mammalian target of rapamycin complex (mTORC) 1, which is activated by amino acids. In turn, mTORC1 regulates the production of ribosomes, the molecular machines that make proteins, and the activity of other cellular components required for protein synthesis. mTORC1 signaling promotes the transcription of the genes for ribosomal RNAs and many other components involved in ribosome production. It also positively regulates the translation of the messenger RNAs (mRNAs) for ribosomal proteins. Indeed, recent studies have shown that mammalian target of rapamycin signaling drives the translation of mRNAs for many anabolic enzymes and other proteins involved in diverse cellular functions. The translational machinery is also regulated by the absence of amino acids through the protein kinase GCN2 (general control nonrepressed 2), which phosphorylates and in effect inhibits the translation initiation factor eIF2 (eukaryotic initiation factor 2). This process shuts down general protein synthesis to conserve amino acids. © 2014 American Society for Nutrition. Source

Dolan S.R.,University of Sheffield | Barack L.,University of Southampton
Physical Review D - Particles, Fields, Gravitation and Cosmology | Year: 2013

This is the third in a series of papers aimed at developing a practical time-domain method for self-force calculations in Kerr spacetime. The key elements of the method are (i) removal of a singular part of the perturbation field with a suitable analytic "puncture," (ii) decomposition of the perturbation equations in azimuthal (m-)modes, taking advantage of the axial symmetry of the Kerr background, (iii) numerical evolution of the individual m-modes in 2+1 dimensions with a finite difference scheme, and (iv) reconstruction of the local self-force from the mode sum. Here we report a first implementation of the method to compute the gravitational self-force. We work in the Lorenz gauge, solving directly for the metric perturbation in 2+1 dimensions, for the case of circular geodesic orbits. The modes m=0, 1 contain nonradiative pieces, whose time-domain evolution is hampered by certain gauge instabilities. We study this problem in detail and propose ways around it. In the current work we use the Schwarzschild geometry as a platform for development; in a forthcoming paper - the fourth in the series - we apply our method to the gravitational self-force in Kerr geometry. © 2013 American Physical Society. Source
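The numerical core of step (iii) above is an explicit finite-difference evolution of a hyperbolic system. As a loose, self-contained illustration of that kind of scheme — a toy 1+1D flat-space wave equation with a leapfrog update and a Courant-stable time step, not the paper's 2+1D Lorenz-gauge system — one might write:

```python
import numpy as np

# Toy 1+1D wave equation psi_tt = psi_xx, leapfrog scheme, Courant factor 0.5.
# All parameters are illustrative; this is not the self-force code.
nx, nt = 201, 400
dx = 1.0 / (nx - 1)
dt = 0.5 * dx                              # CFL-stable: dt <= dx
x = np.linspace(0.0, 1.0, nx)
psi = np.exp(-200.0 * (x - 0.5) ** 2)      # Gaussian initial data
psi_prev = psi.copy()                      # zero initial velocity

for _ in range(nt):
    lap = np.zeros_like(psi)
    lap[1:-1] = psi[2:] - 2.0 * psi[1:-1] + psi[:-2]   # second spatial difference
    psi_next = 2.0 * psi - psi_prev + (dt / dx) ** 2 * lap
    psi_next[0] = psi_next[-1] = 0.0       # fixed boundaries
    psi_prev, psi = psi, psi_next
```

With the Courant condition satisfied, the evolution stays bounded; violating it (dt > dx) makes the same loop blow up, which is the kind of stability issue that becomes much subtler for the nonradiative m = 0, 1 modes discussed above.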

Merle A.,University of Southampton | Niro V.,University of Barcelona
Physical Review D - Particles, Fields, Gravitation and Cosmology | Year: 2013

Earlier studies of the influence of dark matter keV sterile neutrinos on neutrinoless double beta decay concluded that there is no significant modification of the decay rate. These studies have focused only on a mass of the keV sterile neutrino above 2 and 4 keV, respectively, as motivated by certain production mechanisms. On the other hand, alternative production mechanisms have been proposed, which relax the lower limit for the mass, and new experimental data are available, too. For this reason, an updated study is timely and worthwhile. We focus on the most recent data, i.e., the newest Chandra and XMM-Newton observational bounds on the x-ray line originating from radiative keV sterile neutrino decay, as well as the new measurement of the previously unknown leptonic mixing angle θ13. While the previous works might have been a little short-sighted, the new observational bounds do indeed render any influences of keV sterile neutrinos on neutrinoless double beta decay small. This conclusion even holds in case not all the dark matter is made up of keV sterile neutrinos. © 2013 American Physical Society. Source

De Liberato S.,University of Southampton
Physical Review Letters | Year: 2014

Improvements in both the photonic confinement and the emitter design have led to a steady increase in the strength of the light-matter coupling in cavity quantum electrodynamics experiments. This has allowed us to access interaction-dominated regimes in which the state of the system can only be described in terms of mixed light-matter excitations. Here we show that, when the coupling between light and matter becomes strong enough, this picture breaks down, and light and matter degrees of freedom totally decouple. A striking consequence of such a counterintuitive phenomenon is that the Purcell effect is reversed and the spontaneous emission rate, usually thought to increase with the light-matter coupling strength, plummets instead for large enough couplings. © 2014 American Physical Society. Source

Richardson D.J.,University of Southampton
Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences | Year: 2016

Researchers are within a factor of 2 or so from realizing the maximum practical transmission capacity of conventional single-mode fibre transmission technology. It is therefore timely to consider new technological approaches offering the potential for more cost-effective scaling of network capacity than simply installing more and more conventional single-mode systems in parallel. In this paper, I review physical layer options that can be considered to address this requirement, including the potential for reduction in both fibre loss and nonlinearity for single-mode fibres, the development of ultrabroadband fibre amplifiers and finally the use of space division multiplexing. © 2016 The Author(s) Published by the Royal Society. All rights reserved. Source

Sharkey A.M.,University of Cambridge | Macklon N.S.,University of Southampton
Reproductive BioMedicine Online | Year: 2013

Although embryo implantation is essential for human survival, it remains an enigmatic biological phenomenon. Following fertilization, the resulting blastocyst must signal its presence to the mother, attach to the luminal epithelium of the endometrium and embed into the decidualising stroma. Failure to do so results in infertility, which affects around 9% of women. Subsequent placental development requires remodelling of maternal blood vessels by trophoblast cells from the placenta that invade deep into the decidua. Failure in these very early stages can compromise fetal development, resulting in diseases of pregnancy such as intrauterine growth restriction or pre-eclampsia, which can also affect health in adulthood. Abnormal implantation therefore constitutes a significant disease burden in humans. Although we have known for many years that successful implantation requires an embryo that is competent to implant and an endometrium that is receptive, the molecular basis of these processes remains poorly understood. Our inability to identify implantation-competent embryos or to diagnose/treat the non-receptive endometrium therefore limits our ability to intervene through assisted reproduction techniques. This Implantation Symposium aims to review recent exciting developments in our understanding of the biology of early implantation and to highlight the rapid progress being made to translate these into improved diagnosis and treatment. © 2013, Reproductive Healthcare Ltd. Published by Elsevier Ltd. All rights reserved. Source

Javanainen J.,University of Connecticut | Ruostekoski J.,University of Southampton | Li Y.,University of Connecticut | Yoo S.-M.,University of Connecticut
Physical Review Letters | Year: 2014

We study the collective response of a dense atomic sample to light essentially exactly using classical-electrodynamics simulations. In a homogeneously broadened atomic sample there is no overt Lorentz-Lorenz local field shift of the resonance, nor a collective Lamb shift. However, the addition of inhomogeneous broadening restores the usual mean-field phenomenology. © 2014 American Physical Society. Source

King S.F.,University of Southampton
Journal of High Energy Physics | Year: 2014

We propose a model of quark and lepton mixing based on the tetrahedral A4 family symmetry with quark-lepton unification via the tetra-colour Pati-Salam gauge group SU(4)PS, together with SU(2)L × U(1)R. The "tetra-model" solves many of the flavour puzzles and remarkably gives ten predictions at leading order, including all six PMNS parameters. The Cabibbo angle is approximately given by θC ≈ 1/4, due to the tetra-vacuum alignment (1, 4, 2), providing the Cabibbo connection between quark and lepton mixing. Higher order corrections are responsible for the smaller quark mixing angles and CP violation and provide corrections to the Cabibbo and lepton mixing angles and phases. The tetra-model involves an SO(10)-like pattern of Dirac and heavy right-handed neutrino masses, with the strong up-type quark mass hierarchy cancelling in the see-saw mechanism, leading to a normal hierarchy of neutrino masses with an atmospheric angle in the first octant, θ23^l = 40° ± 1°, a solar angle θ12^l = 34° ± 1°, and a reactor angle θ13^l = 9.0° ± 0.5°, depending on the ratio of neutrino masses m2/m3, and a Dirac CP-violating oscillation phase δ^l = 260° ± 5°. © 2014 The Author(s). Source

King S.F.,University of Southampton
Journal of High Energy Physics | Year: 2014

We propose an elegant theory of flavour based on A4 × Z5 family symmetry with Pati-Salam unification which provides an excellent description of quark and lepton masses, mixing and CP violation. The A4 symmetry unifies the left-handed families and its vacuum alignment determines the columns of Yukawa matrices. The Z5 symmetry distinguishes the right-handed families and its breaking controls CP violation in both the quark and lepton sectors. The Pati-Salam symmetry relates the quark and lepton Yukawa matrices, with Yu = Yν and Yd ~ Ye. Using the see-saw mechanism with very hierarchical right-handed neutrinos and CSD4 vacuum alignment, the model predicts the entire PMNS mixing matrix and gives a Cabibbo angle θC ≈ 1/4. In particular, for a discrete choice of Z5 phases, it predicts maximal atmospheric mixing, θ23^l = 45° ± 0.5°, and leptonic CP-violating phase δ^l = 260° ± 5°. The reactor angle prediction is θ13^l = 9° ± 0.5°, while the solar angle is 34° ≳ θ12^l ≳ 31°, for a lightest neutrino mass in the range 0 ≲ m1 ≲ 0.5 meV, corresponding to a normal neutrino mass hierarchy and a very small rate for neutrinoless double beta decay. © 2014, The Author(s). Source

Koppens F.H.L.,ICFO - Institute of Photonic Sciences | Chang D.E.,California Institute of Technology | Garcia De Abajo F.J.,CSIC - Institute of Optics | Garcia De Abajo F.J.,University of Southampton
Nano Letters | Year: 2011

Graphene plasmons provide a suitable alternative to noble-metal plasmons because they exhibit much tighter confinement and relatively long propagation distances, with the advantage of being highly tunable via electrostatic gating. Here, we propose to use graphene plasmons as a platform for strongly enhanced light-matter interactions. Specifically, we predict unprecedented high decay rates of quantum emitters in the proximity of a carbon sheet, observable vacuum Rabi splittings, and extinction cross sections exceeding the geometrical area in graphene nanoribbons and nanodisks. Our theoretical results provide the basis for the emerging and potentially far-reaching field of graphene plasmonics, offering an ideal platform for cavity quantum electrodynamics, and supporting the possibility of single-molecule, single-plasmon devices. © 2011 American Chemical Society. Source

Huefner A.,University of Cambridge | Kuan W.-L.,University of Cambridge | Barker R.A.,University of Cambridge | Mahajan S.,University of Cambridge | Mahajan S.,University of Southampton
Nano Letters | Year: 2013

Distinction between closely related and morphologically similar cells is difficult by conventional methods, especially without labeling. Using nuclear-targeted gold nanoparticles (AuNPs) as intracellular probes we demonstrate the ability to distinguish between progenitor and differentiated cell types in a human neuroblastoma cell line using surface-enhanced Raman spectroscopy (SERS). SERS spectra from the whole cell area as well as only the nucleus were analyzed using principal component analysis that allowed unambiguous distinction of the different cell types. SERS spectra from the nuclear region showed the developments during cellular differentiation by identifying an increase in DNA/RNA ratio and proteins transcribed. Our approach using nuclear-targeted AuNPs and SERS imaging provides label-free and noninvasive characterization that can play a vital role in identifying cell types in biomedical stem cell research. © 2013 American Chemical Society. Source

Ou J.Y.,Optoelectronics Research Center | Plum E.,Optoelectronics Research Center | Jiang L.,University of Southampton | Zheludev N.I.,Optoelectronics Research Center
Nano Letters | Year: 2011

We introduce mechanically reconfigurable photonic metamaterials (RPMs) as a flexible platform for realizing metamaterial devices with reversible and large-range tunable characteristics in the optical part of the spectrum. Here we illustrate this concept for a temperature-driven RPM exhibiting reversible relative transmission changes of up to 50%. © 2011 American Chemical Society. Source

Brambilla G.,University of Southampton
Journal of Optics A: Pure and Applied Optics | Year: 2010

Optical fibre nanowires and microwires offer a variety of enabling properties, including large evanescent fields, flexibility, configurability, high confinement, robustness and compactness. These distinctive features have been exploited in a wealth of applications ranging from telecommunication devices to sensors, from optical manipulation to high Q resonators. In this paper I will review the fundamentals and applications of nanowires and microwires manufactured from optical fibres. © 2010 IOP Publishing Ltd. Source

Background: The impact of childhood sexual abuse on birth experiences was highlighted 20 years ago in Birth. Subsequent accounts in the midwifery press testify to the emotional trauma that women who were sexually abused as children may suffer during childbirth and the potential for caregivers to make the situation worse. This study synthesizes research on the maternity care experiences of women who were sexually abused in childhood to answer the questions: what do women need during their childbearing experiences and what can health care practitioners do about it? Methods: A metasynthesis was conducted to integrate the findings of several qualitative studies. The eight eligible studies identified by database searches were closely read, recurring themes were extracted and compared across studies, and core themes were identified by means of an interpretative process of synthesis. Results: The key themes identified were control, remembering, vulnerability, dissociation, disclosure, and healing. If women were able to retain control and forge positive, trusting relationships with health care professionals, they felt safe and might experience healing in the process. "Safety" requires that women are not reminded of abusive situations. In the absence of control and trusting relationships, maternity care can be experienced as a re-enactment of abuse. Conclusions: During their maternity care experience women who were sexually abused in childhood need to "feel safe." Health care professionals can help them achieve this feeling by seeking to ensure that those experiences do not re-enact abuse. (BIRTH 40:2 June 2013) © 2013, Wiley Periodicals, Inc. Source

Abusara M.A.,University of Exeter | Sharkh S.M.,University of Southampton
IEEE Transactions on Power Electronics | Year: 2013

This paper is concerned with the design and control of a three-phase voltage source grid-connected interleaved inverter. This topology enables the use of low-current devices capable of switching at high frequency, which together with the ripple cancelation feature reduces the size of the output filter and the inverter considerably compared to an equivalent classical two-level voltage source inverter with an LCL output filter using high-current devices with considerably lower switching frequency. Due to its higher switching frequency and low-filter component values, the interleaved inverter also has a much higher bandwidth than the classical inverter, which improves grid voltage harmonics disturbance rejection and increases the speed of response of the inverter and its capability to ride through grid disturbance (e.g., voltage sags and swells). The paper discusses the selection of the number of channels and the filter component values of the interleaved inverter. The design of the digital control system is then discussed in detail. Simulation and practical results are presented to validate the design and demonstrate its capabilities. © 1986-2012 IEEE. Source

Ranasinghe R.T.,University of Cambridge | Brown T.,University of Southampton
Chemical Communications | Year: 2011

Real time PCR is the mainstay of current nucleic acid assays, underpinning applications in forensic science, point-of-care diagnostics and detection of bioterrorism agents. Despite its broad utility, the search for new tests continues, inspired by second and third generation DNA sequencing technologies and fuelled by progress in single molecule fluorescence spectroscopy, nanotechnology and microfabrication. These new methods promise the direct detection of nucleic acids without the need for enzymatic amplification. In this feature article, we provide a chemist's perspective on this multidisciplinary area, introducing the concepts of single molecule detection then focussing on the selection of labels and probe chemistry suitable for generating a signal detectable by ultrasensitive fluorescence spectroscopy. Finally, we discuss the further developments that are required to incorporate these detection platforms into integrated 'sample-in-answer-out' instruments, capable of detecting many target sequences in a matter of minutes. © The Royal Society of Chemistry. Source

Holmes C.,University of Southampton
Neuropathology and Applied Neurobiology | Year: 2013

There is a great deal of evidence suggesting an important role for systemic inflammation in the pathogenesis of Alzheimer's disease. The role of systemic inflammation, and indeed inflammation in general, is still largely considered to be that of a contributor to the disease process rather than of aetiological importance, although there is emerging evidence to suggest that its role may predate the deposition of amyloid. Therapies aimed at reducing inflammation in individuals with mild cognitive impairment and Alzheimer's disease have been disappointing and have largely focused on the need to ameliorate central inflammation, with little attention to the importance of dampening down systemic inflammation. Novel approaches in this area require a greater understanding of the effects of systemic inflammation on the development and progression of Alzheimer's disease and of the communicating pathways between the systemic and central innate immune systems. © 2012 The Authors Neuropathology and Applied Neurobiology © 2012 British Neuropathological Society. Source

Freeman C.T.,University of Southampton | Tan Y.,University of Melbourne
IEEE Transactions on Control Systems Technology | Year: 2013

Iterative learning control (ILC) is concerned with tracking a reference trajectory defined over a finite time duration, and is applied to systems which perform this action repeatedly. However, in many application domains the output is not critical at all points over the task duration. In this paper the facility to track an arbitrary subset of points is therefore introduced, and the additional flexibility this brings is used to address other control objectives in the framework of iterative learning. These comprise hard and soft constraints involving the system input, output and states. Experimental results using a robotic arm confirm that embedding constraints in the ILC framework leads to performance superior to that obtained using standard ILC with an a priori specified reference. © 1993-2012 IEEE. Source
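The core idea — learning over repeated trials while penalising error only at a subset of critical time points — can be sketched with a simple P-type ILC update u_{k+1} = u_k + L·e_k. The plant, learning gain and tracked-point set below are all hypothetical illustrations, not the paper's constrained algorithm:

```python
import numpy as np

def plant(u):
    """Toy first-order plant y[t] = 0.9*y[t-1] + 0.5*u[t], run over one finite trial."""
    y = np.zeros_like(u)
    prev = 0.0
    for t, ut in enumerate(u):
        prev = 0.9 * prev + 0.5 * ut
        y[t] = prev
    return y

T = 50
ref = np.sin(np.linspace(0.0, np.pi, T))   # reference over the finite trial duration
track = np.arange(T) % 5 == 0              # only every 5th sample is deemed critical
u = np.zeros(T)
gain = 1.0                                 # learning gain L; converges since |1 - L*0.5| < 1

for trial in range(60):                    # the task is repeated; learn between trials
    e = ref - plant(u)
    e[~track] = 0.0                        # ignore error at the non-critical samples
    u = u + gain * e                       # P-type ILC update u_{k+1} = u_k + L*e_k

final_err = float(np.max(np.abs((ref - plant(u))[track])))
```

After the trials, the tracking error at the critical points shrinks toward zero while the input remains free elsewhere; in the paper that freedom is instead exploited to satisfy input/output/state constraints.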

Chang V.,Leeds Beckett University | Chang V.,University of Southampton
Future Generation Computer Systems | Year: 2014

Limitations imposed by the traditional practice in financial institutions of running risk analysis on the desktop mean many rely on models which assume a "normal" Gaussian distribution of events, which can seriously underestimate the real risk. In this paper, we propose an alternative service which uses the elastic capacities of Cloud Computing to escape the limitations of the desktop and produce accurate results more rapidly. The Business Intelligence as a Service (BIaaS) in the Cloud has a dual-service approach to compute risk and pricing for financial analysis. The first type of BIaaS service uses three APIs to simulate the Heston Model to compute the risks and asset prices, and computes the volatility (unsystematic risks) and the implied volatility (systematic risks), which can be tracked at any time. The second type of BIaaS service uses two APIs to provide business analytics for stock market analysis, and presents results in visualised form, so that stakeholders without prior knowledge can understand them. A full case study with two sets of experiments is presented to support the validity and originality of BIaaS. Three additional examples are used to support the accuracy of the predicted stock index movement as a result of the use of the Heston Model and its associated APIs. We describe the architecture of deployment, together with examples and results which show how our approach improves risk and investment analysis, maintaining accuracy and efficiency whilst improving performance over desktops. © 2013 Elsevier B.V. All rights reserved. Source
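The Heston model referred to above describes an asset price with stochastic (non-Gaussian) variance, which is why it captures fat-tailed risk that a fixed-volatility model misses. A minimal sketch of a Monte Carlo simulation of its dynamics — an Euler full-truncation scheme with illustrative parameters; the paper's APIs are not reproduced here — might look like:

```python
import numpy as np

rng = np.random.default_rng(1)

def heston_paths(s0=100.0, v0=0.04, kappa=2.0, theta=0.04, xi=0.3,
                 rho=-0.7, r=0.01, T=1.0, steps=252, n_paths=20_000):
    """Euler (full-truncation) simulation of the Heston model:
        dS = r*S dt + sqrt(v)*S dW1,   dv = kappa*(theta - v) dt + xi*sqrt(v) dW2,
    with corr(dW1, dW2) = rho. Returns terminal prices S_T for each path."""
    dt = T / steps
    s = np.full(n_paths, s0)
    v = np.full(n_paths, v0)
    for _ in range(steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n_paths)
        vp = np.maximum(v, 0.0)                    # full truncation keeps v usable
        s *= np.exp((r - 0.5 * vp) * dt + np.sqrt(vp * dt) * z1)
        v += kappa * (theta - vp) * dt + xi * np.sqrt(vp * dt) * z2
    return s

prices = heston_paths()
```

Repricing or risk measures (e.g. tail quantiles of `prices`) are then simple reductions over the simulated paths, which is exactly the embarrassingly parallel workload that benefits from elastic Cloud capacity.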

Kingdon D.,University of Southampton
British Journal of Psychiatry | Year: 2013

Original ideas are needed in developing new interventions for psychosis, and computer-assisted therapy for auditory hallucinations is one such novel approach. As with any early-phase development, it will require further refinement and evaluation. There are now a range of ongoing studies into different intervention strategies and these promise to enhance the therapeutic potency of clinical psychiatrists and mental health teams. If the relative lack of research funding, focus and support from academic sources on this area were to change, even more could be delivered. Source

Baldwin D.S.,University of Southampton | Foong T.,Kirkstall Health Center
British Journal of Psychiatry | Year: 2013

Depressive symptoms and depressive illness are associated with impairments in sexual function and satisfaction, but the findings of randomised placebo-controlled trials demonstrate that antidepressant drugs can be associated with the development or worsening of sexual dysfunction. Sexual difficulties during antidepressant treatment often resolve as depression lifts but may persist over long periods, and can reduce self-esteem and affect mood and relationships adversely. Sexual dysfunction during antidepressant treatment typically has many possible causes, but the risk of dysfunction varies with differing antidepressants, and should be considered when selecting an antidepressant. Source

Gale P.A.,University of Southampton
Chemical Communications | Year: 2011

This article looks back at advances made in anion complexation since the turn of the millennium and ahead to the application of receptors in areas such as organocatalysis and nanotechnology. © 2011 The Royal Society of Chemistry. Source

Avramidis A.N.,University of Southampton
INFORMS Journal on Computing | Year: 2014

A random vector X with given univariate marginals can be obtained by first applying the normal distribution function to each coordinate of a vector Z of correlated standard normals to produce a vector U of correlated uniforms over (0,1) and then transforming each coordinate of U by the relevant inverse marginal. One approach to fitting requires, separately for each pair of coordinates of X, the rank correlation, r(ρ), or the product-moment correlation, r_L(ρ), where ρ is the correlation of the corresponding coordinates of Z, to equal some target r*. We prove the existence and uniqueness of a solution for any feasible target, without imposing restrictions on the marginals. For the case where r(ρ) cannot be computed exactly because of an infinite discrete support, the relevant infinite sums are approximated by truncation, and lower and upper bounds on the truncation errors are developed. With a function r̃(ρ) defined by the truncated sums, a bound on the error r(ρ*) − r* is given, where ρ* is a solution to r̃(ρ*) = r*. Based on this bound, an algorithm is proposed that determines truncation points so that the solution has any specified accuracy. The new truncation method has potential for significant work reduction relative to truncating heuristically, largely because as required accuracy decreases, so does the number of terms in the truncated sums. This is quantified with examples. The gain appears to increase with the heaviness of tails. © 2014 INFORMS. Source
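The forward construction described in the first sentence (the NORTA transform) can be sketched as follows. The helper `norta_sample` and the choice of marginals are illustrative; the paper's actual contribution — fitting ρ to a target correlation with controlled truncation error — is not implemented here:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def norta_sample(rho, inv_marginals, n):
    """NORTA construction: correlated standard normals Z -> correlated
    uniforms U via the normal CDF -> inverse marginal transforms -> X."""
    d = len(inv_marginals)
    cov = np.full((d, d), rho)
    np.fill_diagonal(cov, 1.0)
    z = rng.multivariate_normal(np.zeros(d), cov, size=n)  # the vector Z
    u = norm.cdf(z)                                        # the vector U over (0,1)
    return np.column_stack([f(u[:, j]) for j, f in enumerate(inv_marginals)])

# illustrative continuous marginals: Exponential(1) and standard lognormal
x = norta_sample(0.7, [lambda u: -np.log1p(-u),
                       lambda u: np.exp(norm.ppf(u))], 50_000)
```

For continuous marginals the induced rank correlation is r(ρ) = (6/π)·arcsin(ρ/2), so ρ = 0.7 yields a Spearman correlation near 0.68; it is the discrete, infinite-support case where the paper's truncation bounds are needed.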

Ho W.C.G.,University of Southampton
Monthly Notices of the Royal Astronomical Society: Letters | Year: 2011

Neutron stars accreting matter from low-mass binary companions are observed to undergo bursts of X-rays due to the thermonuclear explosion of material on the neutron star surface. We use recent results on superfluid and superconducting properties to show that the core temperature in these neutron stars may not be uniquely determined for a range of observed accretion rates. The degeneracy in inferred core temperatures could contribute to explaining the difference between neutron stars which have very short recurrence times between multiple bursts and those which have long recurrence times between bursts: short bursting sources have higher temperatures and normal neutrons in the stellar core, while long bursting sources have lower temperatures and superfluid neutrons. If correct, measurements of the lowest luminosity from among the short bursting sources and highest luminosity from among the long bursting sources can be used to constrain the critical temperature for the onset of neutron superfluidity. © 2011 The Author Monthly Notices of the Royal Astronomical Society © 2011 RAS. Source

Cheong Y.C.,University of Southampton
The Cochrane database of systematic reviews | Year: 2013

Acupuncture is commonly undertaken during an assisted reproductive technology (ART) cycle although its role in improving live birth and pregnancy rates is unclear. To determine the effectiveness and safety of acupuncture as an adjunct to ART cycles for male and female subfertility. All reports which described randomised controlled trials of acupuncture in assisted conception were obtained through searches of the Menstrual Disorders and Subfertility Group Specialised Register, CENTRAL, Ovid MEDLINE, EMBASE, CINAHL (Cumulative Index to Nursing & Allied Health Literature), AMED, www.clinicaltrials.gov (all from inception to July 2013), National Research Register, and the Chinese clinical trial database (all to November 2012). Randomised controlled trials of acupuncture for couples who were undergoing ART, comparing acupuncture treatment alone or acupuncture with concurrent ART versus no treatment, placebo or sham acupuncture plus ART for the treatment of primary and secondary infertility. Women with medical illness that was deemed to contraindicate ART or acupuncture were excluded. Twenty randomised controlled trials were included in the review and nine were excluded. Study selection, quality assessment and data extraction were performed independently by two review authors. Meta-analysis was performed using odds ratio (OR) and 95% confidence intervals (CI). The outcome measures were live birth rate, clinical ongoing pregnancy rate, miscarriage rate, and any reported side effects of treatment. The quality of the evidence for the primary outcome (live birth) was rated using GRADE methods.
This updated meta-analysis showed no evidence of overall benefit of acupuncture for improving live birth rate (LBR) regardless of whether acupuncture was performed around the time of oocyte retrieval (OR 0.87, 95% CI 0.59 to 1.29, 2 studies, n = 464, I(2) = 0%, low quality evidence) or around the day of embryo transfer (ET) (OR 1.22, 95% CI 0.87 to 1.70, 8 studies, n = 2505, I(2) = 69%, low quality evidence). There was no evidence that acupuncture had any effect on pregnancy or miscarriage rates, or had significant side effects. There is no evidence that acupuncture improves live birth or pregnancy rates in assisted conception. Source
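The pooled statistics quoted above are odds ratios with 95% confidence intervals. As a reminder of how a single study's OR and Wald-type CI are obtained from a 2×2 table — the counts below are hypothetical, not taken from the included trials:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a/b = events/non-events in the treatment arm,
    c/d = events/non-events in the control arm."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)        # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# hypothetical counts: 55/177 live births vs non-births with acupuncture,
# 60/172 in the control arm
or_, lo, hi = odds_ratio_ci(55, 177, 60, 172)
# OR is below 1 but the CI spans 1: no statistically significant effect,
# the same qualitative pattern as the pooled result quoted above
```

A meta-analysis then pools such log-ORs across studies (e.g. inverse-variance weighting), with I² quantifying between-study heterogeneity as in the figures quoted above.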

Doncaster C.P.,University of Southampton
Proceedings. Biological sciences / The Royal Society | Year: 2013

Altruistic acts involve the actor donating fitness to beneficiaries at net cost to itself. In contrast, parasitic acts involve the actor extracting benefit from others at net cost to the donors. Both behaviours may have the same direct net-cost transferral of fitness from donor to beneficiary; the key difference between parasitism and altruism is thus who drives the interaction. Identifying the evolutionary driver is not always straightforward in practice, yet it is crucial in determining the conditions necessary to sustain such fitness exchange. Here, we put classical ecological competition into a novel game-theoretic framework in order to distinguish altruism from parasitism. The distinction depends on the type of interaction that beneficiaries have among themselves. When this is not costly, net-cost transferrals of fitness from the donor are strongly altruistic, and sustained only by indirect benefits to the donor from assortative mixing. When the interaction among beneficiaries is costly, however, net-cost transferrals of fitness from the donor are sustainable without assortative mixing. The donor is then forced into apparent or incidental altruism driven by parasitism from the beneficiary. We consider various scenarios in which direct and indirect fitness consequences of strong altruism may have different evolutionary drivers. Source

Clarke I.N.,University of Southampton
Annals of the New York Academy of Sciences | Year: 2011

We know surprisingly little about the evolutionary origins of Chlamydia trachomatis. It causes both ocular (trachoma) and sexually transmitted infections in humans, it is an obligate intracellular pathogen, and there are only a few "isolates" that have been well characterized. From the first few genomes analyzed, it seems that the C. trachomatis genome is highly conserved. The genomes possess high synteny and, in some cases, the sequence variation between genomes is as little as 20 SNPs. Recent indications from partial genome analyses suggest that recombination is the mechanism for generating diversity. There is no accurate molecular clock by which to measure the evolution of C. trachomatis. The origins of both sexually transmitted and ocular C. trachomatis are unclear, but it seems likely that they evolved with humans and shared a common ancestor with environmental chlamydiae some 700 million years ago. Subsequently, evolution within mammalian cells has been accompanied by radical reduction in the C. trachomatis genome. © 2011 New York Academy of Sciences. Source

Holgate S.T.,University of Southampton
Allergy, Asthma and Immunology Research | Year: 2013

My research career has focused on the causes of asthma and its treatment. After establishing the key role that mast cells play in the inflammatory response in asthma, attention was turned towards understanding disease chronicity and variability across the lifecourse. Through a combination of studies on airway biopsies and primary cell cultures we have established that asthma is primarily an epithelial disease driven by increased environmental susceptibility to injury and an altered repair response as depicted by sustained activation of the epithelial mesenchymal trophic unit (EMTU) that is invoked in foetal branching morphogenesis. Varied activation of the EMTU connects the origins of asthma to its progression over time with involvement of epithelial susceptibility through impaired barrier and innate immune functions and altered mesenchymal susceptibility as exemplified by polymorphisms of the metalloprotease gene, ADAM33. Taken together these observations have led to a fundamental re-evaluation of asthma pathogenesis. Rather than placing allergic inflammation as the prime abnormality, it is proposed that the airway epithelium lies at the center of asthma pathogenesis, and that in conjunction with the underlying mesenchyme, it is the principle orchestrator of both the induction of asthma and its evolution over the lifecourse. This concept has provided the basis for a new preventative and therapeutic approach focused more on increasing the airways' resistance to environmental insults rather than suppressing downstream inflammation once it is established. © Copyright The Korean Academy of Asthma, Allergy and Clinical Immunology. Source

Leaf A.,University of Southampton
Seminars in Fetal and Neonatal Medicine | Year: 2013

Establishing enteral feeding in high-risk, very preterm infants is difficult: they are born at a time of rapid growth and development, yet immaturity of gut and metabolic function makes it difficult to accumulate adequate nutrients. Parenteral nutrition will provide the bulk of nutrients in the first few weeks while the preterm infant gut adapts. Intestinal function, nutritional substrate and microbial environment all interact to enable this to happen, and imbalance of these components may result in the serious condition of necrotising enterocolitis. Mother's breast milk is the safest feed and there is no evidence that delaying the introduction of small volumes is of benefit. Volumes can gradually be increased as tolerated and nutrient intakes optimised with addition of supplements or breast-milk fortifier to minimise the extent of extrauterine growth restriction. © 2013. Source

Barack L.,University of Southampton | Sago N.,Kyoto University
Physical Review D - Particles, Fields, Gravitation and Cosmology | Year: 2011

We study conservative finite-mass corrections to the motion of a particle in a bound (eccentric) strong-field orbit around a Schwarzschild black hole. We assume the particle's mass μ is much smaller than the black hole mass M, and explore post-geodesic corrections of O(μ/M). Our analysis uses numerical data from a recently developed code that outputs the Lorenz-gauge gravitational self-force (GSF) acting on the particle along the eccentric geodesic. First, we calculate the O(μ/M) conservative correction to the periastron advance of the orbit, as a function of the (gauge-dependent) semilatus rectum and eccentricity. A gauge-invariant description of the GSF precession effect is made possible in the circular-orbit limit, where we express the correction to the periastron advance as a function of the invariant azimuthal frequency. We compare this relation with results from fully nonlinear numerical-relativistic simulations. In order to obtain a gauge-invariant measure of the GSF effect for fully eccentric orbits, we introduce a suitable generalization of Detweiler's circular-orbit "redshift" invariant. We compute the O(μ/M) conservative correction to this invariant, expressed as a function of the two invariant frequencies that parametrize the orbit. Our results are in good agreement with results from post-Newtonian calculations in the weak-field regime, as we shall report elsewhere. The results of our study can inform the development of analytical models for the dynamics of strongly gravitating binaries. They also provide an accurate benchmark for future numerical-relativistic simulations. © 2011 American Physical Society. Source

Lowe M.,University of Surrey | Wrigley N.,University of Southampton
Economic Geography | Year: 2010

One important element of recent conceptualizations of the distinctive nature and challenges of retail transnational corporations (TNCs) is a focus on the mutual transformation of both the markets entered by the retail TNCs and, reciprocally, of the organizational structures of the firms themselves. There are important similarities between this view of ongoing transnational-operation-induced organizational transformation of the retail TNCs and processes that strategic and organizational management scholars describe as "continuous morphing." In this article, we provide a theoretically informed study of one of the most dynamic of the retail TNCs (Tesco plc) "morphing" its organizational structures and competencies during a high risk, but potentially transformational, market entry into the western United States. The article positions the study within the rapidly emerging literature on transnational retail and the global economy, interprets the innovative dimensions of Tesco's U.S. market entry-particularly its attempts to achieve "territorial" and "network embeddedness"-through the conceptual lens recently provided by economic geographers, and assesses ongoing transformational impacts of the entry on the firm. It adds value to the literature on retail foreign direct investment by documenting the ways in which this leading retail TNC has attempted both to reconfigure its existing capabilities and to develop the new capabilities required to support its U.S. market entry via a combination of processes that we describe as transference, splicing, and enhanced imitation. © 2010 Clark University. Source

Stanton N.A.,University of Southampton
Ergonomics | Year: 2014

This paper presents the Event Analysis of Systemic Teamwork (EAST) method as a means of modelling distributed cognition in systems. The method comprises three network models (i.e. task, social and information) and their combination. This method was applied to the interactions between the sound room and control room in a submarine, following the activities of returning the submarine to periscope depth. This paper demonstrates three main developments in EAST. First, building the network models directly, without reference to the intervening methods. Second, the application of analysis metrics to all three networks. Third, the combination of the aforementioned networks in different ways to gain a broader understanding of the distributed cognition. Analyses have shown that EAST can be used to gain both qualitative and quantitative insights into distributed cognition. Future research should focus on the analyses of network resilience and modelling alternative versions of a system. Practitioner summary: This paper presents a practical method for analysing and evaluating distributed cognition in complex systems. The Event Analysis of Systemic Teamwork (EAST) method presents task, social and information network models both individually and combined. The network models can be analysed qualitatively by visual inspection and quantitatively using network analysis metrics. © 2013 Taylor & Francis. Source
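The kind of network analysis metrics EAST applies can be illustrated with a tiny example in plain Python. The roles and communication links below are invented for illustration and are not data from the submarine study; the metrics (density and normalised degree centrality) are standard graph measures of the sort the method uses.

```python
# Toy "social network" in the spirit of EAST's quantitative analysis:
# nodes are hypothetical control-room roles, edges are observed
# communications between them (illustrative only).
edges = [
    ("sonar_op", "sound_room_super"),
    ("sound_room_super", "ops_officer"),
    ("ops_officer", "periscope_op"),
    ("ops_officer", "helm"),
]

nodes = sorted({n for e in edges for n in e})
degree = {n: 0 for n in nodes}
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

n = len(nodes)
density = 2 * len(edges) / (n * (n - 1))                   # fraction of possible links present
centrality = {v: d / (n - 1) for v, d in degree.items()}   # normalised degree centrality
key_agent = max(centrality, key=centrality.get)            # most connected role

print(f"density = {density:.2f}, key agent = {key_agent}")
```

In EAST such metrics are computed for the task, social and information networks and compared across system configurations; a high-centrality node flags an agent or information element on which the distributed cognition depends heavily.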

Tonin M.,University of Southampton
Journal of Public Economics | Year: 2011

This paper examines the interaction between minimum wage legislation and tax evasion by employed labor. I develop a model in which firms and workers may agree to report less than the true amount of earnings to the fiscal authorities. I show that introducing a minimum wage creates a spike in the distribution of declared earnings and induces higher compliance by some agents, thus reducing their disposable income. The comparison of food consumption and of the consumption-income gap before and after the massive minimum wage hike that took place in Hungary in 2001 reveals that households who appeared to benefit from the hike actually experienced a drop compared to similar but unaffected households, thus supporting the prediction of the theory. © 2011 Elsevier B.V. Source
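The predicted spike in declared earnings can be sketched with a toy simulation. All numbers below are invented for illustration and are not calibrated to the Hungarian data: each worker and firm collude to declare only a fraction of true earnings, and a minimum wage then imposes a legal floor on the declared amount for anyone kept in employment.

```python
import random

random.seed(0)
MIN_WAGE = 60.0

# Hypothetical workers: (true earnings, agreed under-reporting share).
workers = [(random.uniform(40, 160), random.uniform(0.4, 0.9)) for _ in range(10_000)]

def declared(true_wage, share, min_wage=None):
    d = true_wage * share                  # collusive under-reporting
    if min_wage is not None and true_wage >= min_wage:
        d = max(d, min_wage)               # the floor binds the *declared* wage
    return d

before = [declared(w, s) for w, s in workers]
after = [declared(w, s, MIN_WAGE) for w, s in workers]

spike = sum(1 for d in after if d == MIN_WAGE)   # mass bunched exactly at the minimum
print(f"workers declaring exactly the minimum wage: {spike}")
print(f"mean declared before: {sum(before)/len(before):.1f}, "
      f"after: {sum(after)/len(after):.1f}")
```

Because some colluding pairs are forced to declare more than they otherwise would, declared earnings bunch exactly at the minimum and average compliance rises, which is the mechanism behind the drop in disposable income that the paper documents.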

Reichle E.D.,University of Southampton | Reingold E.M.,University of Toronto
Frontiers in Human Neuroscience | Year: 2013

Several current computational models of eye-movement control in reading posit a tight link between the eye and mind, with lexical processing directly triggering most "decisions" about when to start programming a saccade to move the eyes from one word to the next. One potential problem with this theoretical assumption, however, is that it may violate neurophysiological constraints imposed by the time required to encode visual information, complete some amount of lexical processing, and then program a saccade. In this article, we review what has been learned about these timing constraints from studies using ERP and MEG. On the basis of this review, it would appear that the temporal constraints are too severe to permit direct lexical control of eye movements without a significant amount of parafoveal processing (i.e., pre-processing of word n+1 from word n). This conclusion underscores the degree to which the perceptual, cognitive, and motor processes involved in reading must be highly coordinated to support skilled reading, a par excellence example of a task requiring visual-cognitive expertise. © 2013 Reichle and Reingold. Source
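The timing argument can be made concrete with back-of-the-envelope arithmetic. The stage durations below are rough illustrative assumptions of the magnitudes discussed in this literature, not figures from any specific ERP/MEG study:

```python
# Illustrative serial timing budget for direct lexical control of saccades
# (all values are assumed round numbers, for illustration only).
eye_to_brain_ms = 60       # visual information reaching cortex
lexical_access_ms = 150    # some amount of lexical processing
saccade_program_ms = 125   # programming the saccade to the next word

serial_total = eye_to_brain_ms + lexical_access_ms + saccade_program_ms
typical_fixation_ms = 250  # rough average fixation duration in reading

print(f"serial pipeline: {serial_total} ms vs fixation: {typical_fixation_ms} ms")
print(f"shortfall of {serial_total - typical_fixation_ms} ms must be absorbed, "
      "e.g. by parafoveal preview of word n+1")
```

With any plausible values the strictly serial pipeline overruns the average fixation, which is why the review concludes that direct lexical control requires substantial parafoveal pre-processing.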

Population dynamics between and within Pleistocene groups are vital to understanding wider behavioural processes like social transmission and cultural variation. The late Middle Palaeolithic (MIS 5d-3, ca. 115,000-35,000 BP [years before present]) permits a novel, data-driven assessment of these concepts through a unique record: bifacial tools made by classic Neanderthals. Previously, studies of late Middle Palaeolithic bifacial tools were hampered by a convoluted plethora of competing terms, types and regional entities. This paper presents a large-scale intercomparison of this tool type, and bridges typo-technological and spatio-temporal data from across Western Europe (Britain, Belgium, the Netherlands, France and Germany). Results indicate a high level of variation among individual bifacial tools and assemblages. Each bifacial tool concept is correlated with various methods of production, resulting in large degrees of morphological variation. Despite such variation, a distinct three-fold, macro-regional pattern was identified: the Mousterian of Acheulean Tradition (MTA) in the southwest dominated by handaxes, the Keilmessergruppen (KMG) in the northeast typified by backed and leaf-shaped bifacial tools, and, finally, a new unit, the Mousterian with Bifacial Tools (MBT), geographically situated between these two major entities and characterised by a wider variety of bifacial tools. Differing local conditions, such as raw material or function, are not sufficient to explain this observed macro-regional tripartite pattern. Instead, the MTA and KMG can be viewed as two distinct cultural traditions, where the production of a specific bifacial tool concept was passed on over generations. Conversely, the MBT is interpreted as a border zone where highly mobile groups of Neanderthals from both the east (KMG) and west (MTA) interacted. Principally, this study presents an archaeological contribution to behavioural concepts such as regionality, culture, social transmission and population dynamics. It illustrates the interpretive potential of large-scale lithic studies, and more specifically the presence of regionalised cultural behaviour amongst late Neanderthal groups in Western Europe. © 2013 Elsevier Ltd. Source

Breen D.J.,University of Southampton | Lencioni R.,University of Pisa
Nature Reviews Clinical Oncology | Year: 2015

Image-guided ablation (IGA) techniques have evolved considerably over the past 20 years and are increasingly used to definitively treat small primary cancers of the liver and kidney. IGA is recommended by most guidelines as the best therapeutic choice for patients with early stage hepatocellular carcinoma (HCC)-defined as either a single tumour smaller than 5 cm or up to three nodules smaller than 3 cm-when surgical options are precluded, and has potential as first-line therapy, in lieu of surgery, for patients with very early stage tumours smaller than 2 cm. With regard to renal cell carcinoma, despite the absence of any randomized trial comparing the outcomes of IGA with those of standard partial nephrectomy, a growing amount of data demonstrate robust oncological outcomes for this minimally invasive approach and testify to its potential as a standard-of-care treatment. Herein, we review the various ablation techniques, the supporting evidence, and clinical application of IGA in the treatment of primary liver and kidney cancers. © 2015 Macmillan Publishers Limited. Source

Del Valle E.,University of Southampton | Laussy F.P.,TU Munich
Physical Review Letters | Year: 2010

A counterpart of the Mollow triplet (luminescence line shape of a two-level system under coherent excitation) is obtained for the case of incoherent excitation in a cavity. The system acquires coherence through the strong-coupling between the cavity and the emitter. Analytical expressions, in excellent agreement with numerical results, pinpoint analogies and differences between the conventional resonance fluorescence spectrum and its cavity QED analogue under incoherent excitation. Most notably, the satellites broaden and split sublinearly with increasing incoherent pumping. © 2010 The American Physical Society. Source

Sayer A.A.,University of Southampton
Age and ageing | Year: 2013

Sarcopenia is the age-related loss of skeletal muscle mass and function. It is now recognised as a major clinical problem for older people and research in the area is expanding exponentially. One of the most important recent developments has been convergence in the operational definition of sarcopenia, combining measures of muscle mass and strength or physical performance. This has been accompanied by considerable progress in understanding of pathogenesis from animal models of sarcopenia. Well-described risk factors include age, gender and levels of physical activity, and this knowledge is now being translated into effective management strategies including resistance exercise, with recent interest in the additional role of nutritional intervention. Sarcopenia is currently a major focus for drug discovery and development, although there remains debate about the best primary outcome measure for trials, and various promising avenues to date have proved unsatisfactory. The concept of 'new tricks for old drugs' is, however, promising; for example, there is some evidence that angiotensin-converting enzyme inhibitors may improve physical performance. Future directions will include a deeper understanding of the molecular and cellular mechanisms of sarcopenia and the application of a lifecourse approach to understanding aetiology, as well as to informing the optimal timing of interventions. Source

Akeroyd A.G.,University of Southampton | Sugiyama H.,Ritsumeikan University
Physical Review D - Particles, Fields, Gravitation and Cosmology | Year: 2011

The existence of doubly charged Higgs bosons (H±±) is a distinctive feature of the Higgs triplet model (HTM), in which neutrinos obtain tree-level masses from the vacuum expectation value of a neutral scalar in a triplet representation of SU(2)L. We point out that a large branching ratio for the decay of a singly charged Higgs boson to a doubly charged Higgs boson via H±→H±±W* is possible in a sizeable parameter space of the HTM. From the production mechanism q′q̄→W*→H±±H∓ the above decay mode would give rise to pair production of H±±, with a cross section which can be comparable to that of the standard pair-production mechanism qq̄→Z*(γ*)→H++H−−. We suggest that the presence of a sizeable branching ratio for H±→H±±W* could significantly enhance the detection prospects of H±± in the four-lepton channel. Moreover, the decays H0→H±W* and A0→H±W* from production of the neutral triplet scalars H0 and A0 would also provide an additional source of H±, which can subsequently decay to H±±. © 2011 American Physical Society. Source

Keane A.J.,University of Southampton
AIAA Journal | Year: 2012

This paper addresses the problem of robust design optimization. Such formulations are inevitably multi-objective because the designer wants good performance and also small variations in that performance. The desire for processes that produce robust designs stems from the observation that if only nominal performance is considered during design optimization, sensitive designs often result, and these commonly fail to meet objectives when the inevitable uncertainties of manufacture, operating conditions, and degradation in operation are considered. It is assumed that design is carried out using analysis codes that are expensive to run. Because of this and the need for the multiple calls associated with Monte Carlo methods, use is made of surrogate-based optimization tools to speed up the search. Here, the methodology of cokriging is examined to allow results from using differing numbers of Monte Carlo samples to be simply combined. The primary aim was to avoid always having to use large numbers of samples in the Monte Carlo assessment of design robustness. The application of these methods is illustrated by considering a gas-turbine compressor blade optimization, in which a range of shape errors is considered, simulating foreign object damage, erosion damage, and manufacturing errors. Consideration is also given to variation in operating conditions. Copyright © 2012 by Andy J. Keane. Source
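The Monte Carlo robustness assessment at the heart of such formulations can be sketched as follows. The objective function and noise level below are invented stand-ins for an expensive analysis code and manufacturing-style uncertainty; the point is only the structure of the two competing objectives.

```python
import math
import random

random.seed(1)

def performance(x):
    # Toy stand-in for an expensive analysis code (illustrative only).
    return math.sin(5 * x) + 0.5 * x

def robust_objectives(x, sigma=0.05, n_samples=200):
    """Monte Carlo estimate of the robust-design trade-off: mean performance
    and its spread under random perturbations of the design variable x."""
    samples = [performance(x + random.gauss(0.0, sigma)) for _ in range(n_samples)]
    mean = sum(samples) / n_samples
    var = sum((s - mean) ** 2 for s in samples) / (n_samples - 1)
    return mean, math.sqrt(var)   # two objectives: high mean, low spread

for x in (0.3, 1.0, 1.6):
    mean, spread = robust_objectives(x)
    print(f"x = {x:.1f}: mean = {mean:.3f}, spread = {spread:.3f}")
```

A robust design optimizer treats the returned mean and spread as competing objectives; the cokriging idea in the paper then lets cheap low-sample and expensive high-sample estimates of these quantities be fused in one surrogate.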

O'Hara K.,University of Southampton
IEEE Internet Computing | Year: 2014

This column critically examines the hypothesis that the Internet is responsible for creating echo chambers, in which groups can seal themselves off from heterodox opinion, via filtering and recommendation technology. Echo chambers are held responsible by many for political polarization, and the growth of extremism, yet the evidence doesn't seem to support this view. Echo chambers certainly exist, and can be detrimental to deliberation and discussion, but equally have a role to play in group formation, solidarity, and identity. The case for intervening in Internet governance to suppress echo chambers is not proven. © 2014 IEEE. Source

Postle A.D.,University of Southampton
Current Opinion in Clinical Nutrition and Metabolic Care | Year: 2012

Purpose of review: Lipidomics characterizes the composition of intact lipid molecular species in biological systems and the field has been driven by some spectacular advances in mass spectrometry instrumentation and applications. This review will highlight these advances and outline their recent application to address clinical issues. Recent findings: This review first identifies recent advances in lipid detection and analysis by a variety of mass spectrometry techniques, then reviews specific application including stable isotope labelling of lipids, lipid mass spectrometry imaging, data analysis and bioinformatics, and finally presents examples of the application of lipidomics to selected disease states. Summary: Lipidomics so far has been principally concerned with identifying novel methodologies, but recent advances demonstrating applications in diabetes, neurodegenerative diseases, cystic fibrosis and other respiratory diseases clearly indicate the potential usefulness of lipidomics both to generate biomarkers of disease and to probe signalling and metabolic processes. © 2012 Wolters Kluwer Health | Lippincott Williams & Wilkins. Source

Sugiura S.,Toyota Central RandD Laboratories Inc. | Hanzo L.,University of Southampton
IEEE Transactions on Wireless Communications | Year: 2013

A joint dispersion-matrix and constellation optimization algorithm is proposed for the recent space-time shift keying (STSK) scheme. More specifically, the theoretical gradients of the discrete-input continuous-output memoryless channel (DCMC) capacity with respect to both the dispersion-matrix set and the modem constellations are derived, which allows a substantial reduction in the computational complexity required for maximizing the system's capacity. Furthermore, we also conceive a near-capacity irregular-precoded STSK (IR-PSTSK) architecture, which is designed with the aid of extrinsic information transfer (EXIT) charts, while invoking STSK subcodes optimized by the proposed algorithm. © 2013 IEEE. Source

Nicholls R.J.,University of Southampton | Cazenave A.,CNRS Geophysical Research and Oceanographic Laboratory
Science | Year: 2010

Global sea levels have risen through the 20th century. These rises will almost certainly accelerate through the 21st century and beyond because of global warming, but their magnitude remains uncertain. Key uncertainties include the possible role of the Greenland and West Antarctic ice sheets and the amplitude of regional changes in sea level. In many areas, nonclimatic components of relative sea-level change (mainly subsidence) can also be locally appreciable. Although the impacts of sea-level rise are potentially large, the application and success of adaptation are large uncertainties that require more assessment and consideration. Copyright Science 2010 by the American Association for the Advancement of Science; all rights reserved. Source

Milledge J.J.,University of Southampton
Reviews in Environmental Science and Biotechnology | Year: 2011

There has been considerable discussion in recent years about the potential of microalgae for the production of sustainable and renewable biofuels, but there may be other more readily exploitable commercial opportunities for microalgae. This paper briefly reviews the current and potential situation for the commercial application of the growth of microalgae for products other than biofuels. © 2010 Springer Science+Business Media B.V. Source

Moodie R.,University of Melbourne | Stuckler D.,University of Cambridge | Monteiro C.,University of Sao Paulo | Sheron N.,University of Southampton | And 4 more authors.
The Lancet | Year: 2013

The 2011 UN high-level meeting on non-communicable diseases (NCDs) called for multisectoral action including with the private sector and industry. However, through the sale and promotion of tobacco, alcohol, and ultra-processed food and drink (unhealthy commodities), transnational corporations are major drivers of global epidemics of NCDs. What role then should these industries have in NCD prevention and control? We emphasise the rise in sales of these unhealthy commodities in low-income and middle-income countries, and consider the common strategies that the transnational corporations use to undermine NCD prevention and control. We assess the effectiveness of self-regulation, public-private partnerships, and public regulation models of interaction with these industries and conclude that unhealthy commodity industries should have no role in the formation of national or international NCD policy. Despite the common reliance on industry self-regulation and public-private partnerships, there is no evidence of their effectiveness or safety. Public regulation and market intervention are the only evidence-based mechanisms to prevent harm caused by the unhealthy commodity industries. Source

Dalby M.J.,University of Glasgow | Gadegaard N.,University of Glasgow | Oreffo R.O.C.,University of Southampton
Nature Materials | Year: 2014

Stem cells respond to nanoscale surface features, with changes in cell growth and differentiation mediated by alterations in cell adhesion. The interaction of nanotopographical features with integrin receptors in the cells' focal adhesions alters how the cells adhere to materials surfaces, and defines cell fate through changes in both cell biochemistry and cell morphology. In this Review, we discuss how cell adhesions interact with nanotopography, and we provide insight as to how materials scientists can exploit these interactions to direct stem cell fate and to understand how the behaviour of stem cells in their niche can be controlled. We expect knowledge gained from the study of cell-nanotopography interactions to accelerate the development of next-generation stem cell culture materials and implant interfaces, and to fuel discovery of stem cell therapeutics to support regenerative therapies. © 2014 Macmillan Publishers Limited. Source

Objectives: To determine the extent to which undergraduate medical students experience (and/or witness) bullying and harassment during their first year on full-time placements and to compare with new General Medical Council (GMC) evidence on bullying and harassment of doctors in training. Setting: A UK university offering medical and nursing undergraduate programmes. Participants: 309 medical and nursing undergraduate students with 30-33 weeks' placement experience (123 medical students and 186 nursing students); overall response rate: 47%. Primary and secondary outcome measures: (A) students' experience of bullying and harassment; (B) witnessing bullying and harassment; (C) actions taken by students; (D) comparison of medical and nursing students' data. Results: Within 8 months of starting clinical placements, a fifth of medical and a quarter of nursing students reported experiencing bullying and harassment. Cohorts differ in the type of exposure reported and in their responses. Whereas some nursing students follow incidences with query and challenge, most medical students acquiesce. Conclusions: Bullying and harassment of medical (and nursing) students - as well as witnessing of such incidents - occurs as soon as students enter the clinical environment. This augments evidence published by the GMC in its first report on undermining of doctors in training (December 2013). The data suggest differences between nursing and medical students in how they respond to such incidents. Source

Palmer K.T.,University of Southampton
British Medical Bulletin | Year: 2012

Background: Changing demographics mean that many patients with large joint arthritis will work beyond traditional retirement age. This review considers the impact of knee osteoarthritis (OA) on work participation and the relation between work and total knee replacement (TKR). Sources: Two systematic searches in Embase and Medline, supplemented by three systematic reviews. Areas of agreement: Probably, although evidence is limited, knee OA considerably impairs participation in work (labour force participation, work attendance and work productivity). Areas of uncertainty/research need: Little is known about effective interventions (treatments, work changes and policies) to improve vocational participation in patients with knee OA, or how type of work affects long-term clinical outcomes (e.g. pain, function and the need for revision surgery) in patients with TKRs. The need for such research is pressing and opportune, as increasing numbers of patients with knee OA or TKR expect to work on. © 2012 The Author. Source

Bahaj A.S.,University of Southampton
Renewable and Sustainable Energy Reviews | Year: 2011

Ocean energy has many forms, encompassing tides, surface waves, ocean circulation, salinity and thermal gradients. This paper considers two of these: the kinetic energy resource in tidal streams or marine currents, driven by gravitational effects, and the resource in wind-driven waves, derived ultimately from solar energy. There is growing interest around the world in the utilisation of wave energy and marine currents (tidal stream) for the generation of electrical power. Marine currents are predictable and could be utilised without the need for barrages and the impounding of water, whilst wave energy is inherently less predictable, being a consequence of wind energy. The conversion of these resources into sustainable electrical power offers immense opportunities to nations endowed with such resources, and this work is partially aimed at addressing such prospects. The research presented conveys the current status of wave and marine current energy conversion technologies, addressing issues related to their infancy (only a handful being at the commercial prototype stage) as compared to others such as offshore wind. The work establishes a step-by-step approach that could be used in technology and project development, depicting results based on experimental and field observations on device fundamentals, modelling approaches and project development issues. It includes analysis of the various pathways and approaches needed for technology and device or converter deployment. As most technology developments are currently UK based, the paper also discusses the UK's financial mechanisms available to support this area of renewable energy, highlighting the economic approaches needed in technology development phases. Future prospects for wave and marine current ocean energy technologies are also examined. © 2011 Elsevier Ltd. All rights reserved. Source

Parker H.,University of Southampton
Leukemia | Year: 2016

Histone methyltransferases (HMTs) are important epigenetic regulators of gene transcription and are disrupted at the genomic level in a spectrum of human tumours including haematological malignancies. Using high-resolution single nucleotide polymorphism (SNP) arrays, we identified recurrent deletions of the SETD2 locus in 3% (8/261) of chronic lymphocytic leukaemia (CLL) patients. Further validation in two independent cohorts showed that SETD2 deletions were associated with loss of TP53, genomic complexity and chromothripsis. With next-generation sequencing we detected mutations of SETD2 in an additional 3.8% of patients (23/602). In most cases, SETD2 deletions or mutations were often observed as a clonal event and always as a mono-allelic lesion, leading to reduced mRNA expression in SETD2-disrupted cases. Patients with SETD2 abnormalities and wild-type TP53 and ATM from five clinical trials employing chemotherapy or chemo-immunotherapy had reduced progression-free and overall survival compared with cases wild type for all three genes. Consistent with its postulated role as a tumour suppressor, our data highlight SETD2 aberration as a recurrent, early loss-of-function event in CLL pathobiology linked to aggressive disease. Leukemia advance online publication, 10 June 2016; doi:10.1038/leu.2016.134. © 2016 Macmillan Publishers Limited. Source

de Medeiros Varzielas I.,University of Southampton | Hiller G.,TU Dortmund
Journal of High Energy Physics | Year: 2015

Flavor symmetries successfully explain lepton and quark masses and mixings, yet it is usually hard to distinguish different models that predict the same mixing angles. Further experimental input could be available if the agents of flavor breaking, or new physics with non-trivial flavor charges, are sufficiently low in mass to be detectable. The recent hint for lepton-nonuniversality in the ratio of branching fractions B → Kμμ over B → Kee, RK, suggests the latter, at least for indirect detection via rare decays. We demonstrate the discriminating power of the rare decay data on flavor model building taking into account viable leptonic mixings and show how correlations with other observables exist in leptoquark models. We give expectations for branching ratios B → Kℓℓ′, B(s) → ℓℓ′ and ℓ → ℓ′γ, and Higgs decays h → ℓℓ′. © 2015, The Author(s). Source

Merle A.,Max Planck Institute for Physics | Merle A.,University of Southampton | Schneider A.,University of Zurich | Schneider A.,University of Sussex
Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics | Year: 2015

The recent observation of an X-ray line at an energy of 3.5 keV mainly from galaxy clusters has initiated a discussion about whether we may have seen a possible dark matter signal. If confirmed, this signal could stem from a decaying sterile neutrino with a mass of 7.1 keV. Such a particle could make up all the dark matter, but it is not clear how it was produced in the early Universe. In this letter we show that it is possible to discriminate between different production mechanisms with present-day astronomical data. The most stringent constraint comes from the Lyman-α forest and seems to disfavor all but one of the main production mechanisms proposed in the literature: production via the decay of heavy scalar singlets. Pinning down the production mechanism will help to decide whether the X-ray signal indeed comprises an indirect detection of dark matter. © 2015 The Authors. Published by Elsevier B.V. Source

Elliott S.J.,University of Southampton
The Journal of the Acoustical Society of America | Year: 2010

In order to reduce annoyance from the audio output of personal devices, it is necessary to maintain the sound level at the user position while minimizing the levels elsewhere. If the dark zone, within which the sound is to be minimized, extends over the whole far field of the source, the problem reduces to that of minimizing the radiated sound power while maintaining the pressure level at the user position. It is shown analytically that the optimum two-source array then has a hypercardioid directivity and gives about 7 dB reduction in radiated sound power, compared with a monopole producing the same on-axis pressure. The performance of other linear arrays is studied using monopole simulations for the motivating example of a mobile phone. The trade-off is investigated between the performance in reducing radiated noise and the electrical power required to drive the array for different numbers of elements. It is shown, for both simulations and experiments conducted on a small array of loudspeakers under anechoic conditions, that both two- and three-element arrays provide a reasonable compromise between these competing requirements. The implementation of the two-source array in a coupled enclosure is also shown to reduce the electrical power requirements. Source
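The constrained power-minimisation idea behind the optimum two-source array can be reproduced in a crude free-field sketch: minimise the radiated power of two monopoles subject to a fixed on-axis pressure. This is only an illustrative model, not the paper's analysis; in this simplified form the reduction tends towards about 6 dB at small spacings kd, whereas the paper's fuller treatment reports about 7 dB.

```python
import cmath
import math

def power_reduction_db(kd):
    """Power reduction of the optimal two-monopole array relative to a single
    monopole producing the same on-axis pressure, for dimensionless spacing kd.
    Free-field sketch: power = q^H A q with mutual-radiation matrix
    A = [[1, s], [s, 1]], s = sinc(kd), constrained so that a^H q = 1."""
    s = math.sin(kd) / kd                  # mutual-radiation term sinc(kd)
    a = [1.0 + 0j, cmath.exp(1j * kd)]     # on-axis steering vector
    # Solve A x = a for the 2x2 system by hand.
    det = 1 - s * s
    x = [(a[0] - s * a[1]) / det, (a[1] - s * a[0]) / det]
    gain = (a[0].conjugate() * x[0] + a[1].conjugate() * x[1]).real  # a^H A^-1 a
    return 10 * math.log10(gain)

for kd in (0.3, 1.0, math.pi / 2):
    print(f"kd = {kd:.2f}: power reduction = {power_reduction_db(kd):.2f} dB")
```

The optimal source strengths produce a directional (hypercardioid-like) pattern, and the benefit shrinks as the spacing grows, consistent with the trade-offs the paper explores for larger arrays.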

Hector A.L.,University of Southampton
Coordination Chemistry Reviews | Year: 2016

The use of preceramic polymer and sol-gel processing methods in the production of silicon nitride and a number of related materials is reviewed. Amorpho