Wellstead P.,Maynooth Co.
AI and Society | Year: 2011
The time required to develop new drugs is growing continuously, and most drugs fail in the development process because we lack the detailed knowledge of biology and physiology needed to understand the result of a proposed treatment. The problem is one of complexity: we do not know the full complexity of living organisms, and traditional biology lacks the language to capture and integrate that complexity. As a result, the life sciences are undergoing a period of radical change as the technological and mathematical methods developed for the analysis of the physical sciences are adapted for use in understanding living systems. This introduction of quantitative mathematical methods to represent and understand a previously descriptive subject resembles the Newtonian revolution in physics and its subsequent impact upon industry and manufacture. And just as in the post-Newtonian developments, the new ways are being resisted, as traditional reductionist biologists argue against system-level analysis. The comparison between the industrial revolution and the emerging revolution in the life sciences is so strong that it can usefully be employed to explain the current process, the industrialisation of biology, in a way that informs the traditionalist movement. In particular, we draw upon ideas from innovation cycles and the staging of change in science and industry to clarify the current change processes in life science. Using specific examples in technology development, we outline lessons that can be learnt in order to smooth the process of change and make it a harmonious one, rather than one of conflict. © 2009 Springer-Verlag London Limited.
Ordonez-Hurtado R.H.,Maynooth Co.
International Journal of Control | Year: 2015
A new methodology is presented that provides conclusive information about the existence or non-existence of a common quadratic Lyapunov function (CQLF) for a finite set of stable second-order systems. Despite the high complexity of the CQLF problem, even in the case of N second-order systems, the results presented in this paper rest on very simple and intuitive theoretical support, including topics such as the classical intersection of convex sets and properties of convex linear combinations. Illustrative examples show the performance of the proposed methodology. © 2014 Taylor & Francis.
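The condition the paper works with can be illustrated with a small numerical check: a symmetric matrix P ≻ 0 is a CQLF for the systems ẋ = Aᵢx exactly when AᵢᵀP + PAᵢ ≺ 0 for every i. A minimal pure-Python sketch for 2×2 systems, using the fact that a symmetric 2×2 matrix is negative definite iff its trace is negative and its determinant positive (the example matrices and candidate P are illustrative choices, not taken from the paper):

```python
# Check whether a candidate P is a common quadratic Lyapunov function
# for a set of stable second-order (2x2) systems:
#   P = P^T > 0   and   A_i^T P + P A_i < 0 (negative definite) for all i.

def mat_mult(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

def mat_add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def is_neg_def(M):
    # symmetric 2x2: negative definite iff trace < 0 and det > 0
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return tr < 0 and det > 0

def is_cqlf(P, systems):
    # P positive definite <=> -P negative definite
    if not is_neg_def([[-P[0][0], -P[0][1]], [-P[1][0], -P[1][1]]]):
        return False
    return all(
        is_neg_def(mat_add(mat_mult(transpose(A), P), mat_mult(P, A)))
        for A in systems
    )

# Two Hurwitz-stable example systems and the identity as candidate P
A1 = [[-1.0, 0.0], [0.0, -2.0]]
A2 = [[-1.0, 1.0], [-1.0, -1.0]]
P = [[1.0, 0.0], [0.0, 1.0]]
print(is_cqlf(P, [A1, A2]))  # → True: P = I works for this pair
```

The paper's own methodology decides existence/non-existence conclusively; this sketch only verifies a single candidate P.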
Doran A.G.,Teagasc |
Doran A.G.,Maynooth Co. |
BMC Bioinformatics | Year: 2013
Background: Single nucleotide polymorphisms (SNPs) are the most abundant genetic variant found in vertebrates and invertebrates. SNP discovery has become a highly automated, robust and relatively inexpensive process, allowing the identification of many thousands of mutations for model and non-model organisms. Annotating large numbers of SNPs can be a difficult and complex process. Many available tools are optimised for use with organisms densely sampled for SNPs, such as humans. There are currently few tools that are species non-specific or that support non-model organism data. Results: Here we present SNPdat, a high-throughput analysis tool that can provide a comprehensive annotation of both novel and known SNPs for any organism with a draft sequence and annotation. Using a dataset of 4,566 SNPs identified in cattle using high-throughput DNA sequencing, we demonstrate the annotations performed and the statistics that can be generated by SNPdat. Conclusions: SNPdat provides users with a simple tool for annotation of genomes that are either not supported by other tools or have a small number of annotated SNPs available. SNPdat can also be used to analyse datasets from organisms which are densely sampled for SNPs. As a command line tool it can easily be incorporated into existing SNP discovery pipelines, and it fills a niche for analyses involving non-model organisms that are not supported by many available SNP annotation tools. SNPdat will be of great interest to scientists involved in SNP discovery and analysis projects, particularly those with limited bioinformatics experience. © 2013 Doran and Creevey; licensee BioMed Central Ltd.
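The core step such an annotation tool automates can be sketched as an interval lookup: each SNP position is intersected with gene coordinates from the genome annotation and classified accordingly. A simplified pure-Python illustration; the toy coordinates, gene names, and two-category output are hypothetical and do not reflect SNPdat's actual input format or the richer annotations it produces:

```python
# Toy SNP annotation: classify each SNP as genic or intergenic by
# intersecting its position with gene intervals from a GTF-like annotation.
# Data and field layout are illustrative only.

genes = {
    "chr1": [("GENE_A", 100, 500), ("GENE_B", 900, 1400)],
    "chr2": [("GENE_C", 50, 300)],
}

def annotate_snp(chrom, pos, annotation):
    """Return (feature, gene_name) for a SNP at (chrom, pos)."""
    for name, start, end in annotation.get(chrom, []):
        if start <= pos <= end:
            return ("genic", name)
    return ("intergenic", None)

snps = [("chr1", 250), ("chr1", 700), ("chr2", 120)]
for chrom, pos in snps:
    feature, gene = annotate_snp(chrom, pos, genes)
    print(chrom, pos, feature, gene)
```

A production tool would additionally distinguish exonic/intronic/UTR features, report distances to the nearest gene, and use an indexed data structure rather than a linear scan.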
Lu B.,Maynooth Co. |
Charlton M.,Maynooth Co. |
Fotheringham A.S.,Maynooth Co.
Procedia Environmental Sciences | Year: 2011
Geographically Weighted Regression (GWR) is a local modelling technique for estimating regression models with spatially varying relationships. Euclidean distance has generally been the default metric for calibrating GWR models in previous research and applications; however, it may not always be the most reasonable choice when the study area is partitioned by natural or man-made features. Thus, we attempt to use a non-Euclidean distance metric in GWR. In this study, a GWR model is established to explore spatially varying relationships between house price and floor area, using sampled house prices in London. Network distance is adopted to calibrate this GWR model. Compared with the results from calibrations with Euclidean distance or adaptive kernels, the output using network distance with a fixed kernel shows a significant improvement, and the river Thames has a clear cut-off effect on the parameter estimates. © 2010 Published by Elsevier Ltd.
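GWR calibrates a separate weighted least-squares fit at every regression point, with observation weights decaying with distance through a kernel; swapping Euclidean for network distance changes only the distances fed to that kernel. A minimal one-point sketch with a fixed Gaussian kernel, beta = (XᵀWX)⁻¹XᵀWy for X = [1, floor area] (toy numbers, pure Python; the study itself uses dedicated GWR calibration software):

```python
import math

# One-point GWR calibration via weighted least squares:
#   beta = (X^T W X)^(-1) X^T W y
# with a fixed Gaussian kernel w_i = exp(-0.5 * (d_i / bandwidth)^2).
# d_i could equally be network distances; only the kernel input changes.

def gwr_point(x, y, dists, bandwidth):
    w = [math.exp(-0.5 * (d / bandwidth) ** 2) for d in dists]
    # accumulate X^T W X (2x2) and X^T W y (2x1) for X = [1, x_i]
    s11 = sum(w)
    s12 = sum(wi * xi for wi, xi in zip(w, x))
    s22 = sum(wi * xi * xi for wi, xi in zip(w, x))
    t1 = sum(wi * yi for wi, yi in zip(w, y))
    t2 = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    det = s11 * s22 - s12 * s12
    b0 = (s22 * t1 - s12 * t2) / det  # local intercept
    b1 = (s11 * t2 - s12 * t1) / det  # local slope: price per unit area
    return b0, b1

# toy floor areas, prices, and distances from one regression point
area = [50.0, 70.0, 90.0, 120.0]
price = [200.0, 260.0, 330.0, 410.0]
dist = [0.5, 1.0, 2.0, 4.0]  # e.g. network distances in km
b0, b1 = gwr_point(area, price, dist, bandwidth=2.0)
print(round(b0, 2), round(b1, 2))
```

Repeating this at every regression point, each with its own distance vector, yields the spatially varying coefficient surfaces GWR is used for; a network barrier such as a river inflates the network distances across it, which is what produces the cut-off effect described above.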
Coll J.,Maynooth Co. |
Bourke D.,National University of Ireland |
Skeffington M.S.,National University of Ireland |
Gormally M.,National University of Ireland |
Sweeney J.,Maynooth Co.
Climate Research | Year: 2014
Active blanket bogs are ombrotrophic peatland systems of the boreo-temperate zones, although blanket peat tends to form only under the warmest and wettest of those conditions. In Europe, this combination is common only in Scotland and Ireland, coincident with the oceanic climate, and constitutes a significant global component of this ecosystem. Associated with this Atlantic distribution, Ireland holds 50% of the remaining blanket bogs of conservation importance within the Atlantic Biogeographic Region of Europe. Future climate change is anticipated to place additional pressure on these systems. Active blanket bog distributions in Ireland were modelled using 7 bioclimatic envelope modelling techniques implemented in the BIOMOD modelling framework. The 1961 to 1990 baseline models achieved very good agreement with the observed distribution and suggest a strong dependency on climate. The discrimination ability of the fitted models was assessed using the area under the curve (range 0.915 to 0.976) of a receiver operating characteristic plot. An ensemble prediction from all the models was computed in BIOMOD and used to project changes based on outputs from a dynamically downscaled climate change scenario for 2031 to 2060. The predictions, consistent between the individual models for the baseline, change substantially under the climate change projections, with losses of ~82% to gains of ~15% projected depending on the model type. However, small gains in climate space in the Midlands, east and northeast of the country projected by the consensus model are unlikely to be realised, as it will not be possible for new habitat to form. © Inter-Research 2014.
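The two quantitative steps in this abstract, the ensemble prediction and the ROC AUC check, can both be sketched compactly: a simple ensemble is the mean of the individual models' suitability probabilities, and the AUC equals the probability that a randomly chosen presence site scores higher than a randomly chosen absence site (the Mann-Whitney formulation). BIOMOD is an R framework and supports weighted ensembles, so this pure-Python sketch with toy values is only illustrative:

```python
# Ensemble of habitat-suitability models plus AUC of the ensemble.
# Probabilities and observations are toy values, not the study's data.

def ensemble_mean(predictions):
    """Average the per-site probabilities from several models."""
    n_models = len(predictions)
    n_sites = len(predictions[0])
    return [sum(p[i] for p in predictions) / n_models for i in range(n_sites)]

def auc(scores, labels):
    """AUC via the Mann-Whitney statistic:
    P(presence score > absence score), counting ties as 1/2."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# three "models" scoring five sites; sites 0 and 1 are observed presences
preds = [
    [0.9, 0.7, 0.4, 0.2, 0.1],
    [0.8, 0.6, 0.5, 0.3, 0.2],
    [0.7, 0.8, 0.3, 0.4, 0.1],
]
obs = [1, 1, 0, 0, 0]
ens = ensemble_mean(preds)
print(round(auc(ens, obs), 3))  # → 1.0: every presence outscores every absence
```

An AUC of 0.5 would indicate no discrimination; the fitted models' range of 0.915 to 0.976 reported above sits near the perfect-discrimination end of the scale.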