
Pickett S.D.,GlaxoSmithKline | Green D.V.S.,GlaxoSmithKline | Hunt D.L.,Tessella plc | Pardoe D.A.,GlaxoSmithKline | Hughes I.,GlaxoSmithKline
ACS Medicinal Chemistry Letters | Year: 2011

Traditional lead optimization projects involve long synthesis and testing cycles, favoring extensive structure-activity relationship (SAR) analysis and molecular design steps, in an attempt to limit the number of cycles that a project must run to optimize a development candidate. Microfluidic-based chemistry and biology platforms, with cycle times of minutes rather than weeks, lend themselves to unattended autonomous operation. The bottleneck in the lead optimization process is therefore shifted from synthesis or test to SAR analysis and design. As such, the way is open to an algorithm-directed process, without the need for detailed user data analysis. Here, we present results of two synthesis and screening experiments, undertaken using traditional methodology, to validate a genetic algorithm optimization process for future application to a microfluidic system. The algorithm has several novel features that are important for the intended application. For example, it is robust to missing data and can suggest compounds for retest to ensure reliability of optimization. The algorithm is first validated on a retrospective analysis of an in-house library embedded in a larger virtual array of presumed inactive compounds. In a second, prospective experiment with MMP-12 as the target protein, 140 compounds are submitted for synthesis over 10 cycles of optimization. Comparison is made to the results from the full combinatorial library that was synthesized manually and tested independently. The results show that compounds selected by the algorithm are heavily biased toward the more active regions of the library, while the algorithm is robust to both missing data (compounds where synthesis failed) and inactive compounds. This publication places the full combinatorial library and biological data into the public domain with the intention of advancing research into algorithm-directed lead optimization methods. © 2010 American Chemical Society.
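The closed-loop selection described in this abstract can be sketched as a simple genetic algorithm over a two-monomer virtual library. Everything below is invented for illustration (the monomer pool sizes, the toy potency surface, the 10% synthesis-failure rate, and the population size are all assumptions, not the authors' parameters); the point is only to show how ranking can skip missing results, so that failed syntheses do not stall the optimization.

```python
import random

random.seed(0)

# Hypothetical two-monomer combinatorial library (pool sizes are invented).
N_A, N_B = 20, 20

def assay(a, b):
    """Toy potency score; ~10% of syntheses 'fail' and return no data."""
    if random.random() < 0.1:
        return None                           # synthesis failed: missing data
    return -((a - 14) ** 2 + (b - 6) ** 2)    # invented activity peak at (14, 6)

def evolve(cycles=10, pop_size=14):
    """Algorithm-directed loop: synthesize, test, rank, breed, repeat."""
    pop = [(random.randrange(N_A), random.randrange(N_B)) for _ in range(pop_size)]
    scores = {}
    for _ in range(cycles):
        for cpd in pop:
            if cpd not in scores:
                scores[cpd] = assay(*cpd)
        # Rank only compounds that produced data; missing results are ignored
        # rather than treated as inactive, so failed syntheses cannot mislead.
        ranked = sorted((c for c in scores if scores[c] is not None),
                        key=lambda c: scores[c], reverse=True)
        parents = ranked[:4]
        # Next generation: crossover of monomers plus occasional mutation.
        pop = []
        while len(pop) < pop_size:
            (a, _), (_, b) = random.sample(parents, 2)
            if random.random() < 0.2:
                a = random.randrange(N_A)
            if random.random() < 0.2:
                b = random.randrange(N_B)
            pop.append((a, b))
    return max((c for c in scores if scores[c] is not None), key=lambda c: scores[c])

best = evolve()
```

Ten cycles of a fourteen-compound population caps the budget at 140 syntheses, mirroring the scale of the prospective experiment, though the selection scheme here is a generic sketch rather than the published algorithm.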


Caie P.D.,AstraZeneca | Walls R.E.,AstraZeneca | Ingleston-Orme A.,AstraZeneca | Daya S.,AstraZeneca | And 4 more authors.
Molecular Cancer Therapeutics | Year: 2010

The application of high-content imaging in conjunction with multivariate clustering techniques has recently shown value in the confirmation of cellular activity and further characterization of drug mode of action following pharmacologic perturbation. However, such practical examples of phenotypic profiling of drug response published to date have largely been restricted to cell lines and phenotypic response markers that are amenable to basic cellular imaging. As such, these approaches preclude the analysis of both complex heterogeneous phenotypic responses and subtle changes in cell morphology across physiologically relevant cell panels. Here, we describe the application of a cell-based assay and custom-designed image analysis algorithms built to monitor morphologic phenotypic response in detail across distinct cancer cell types. We further describe the integration of these methods with automated data analysis workflows incorporating principal component analysis, Kohonen neural networking, and kNN classification to enable rapid and robust interrogation of such data sets. We show the utility of these approaches by providing novel insight into pharmacologic response across four cancer cell types: Ovcar3, MiaPaCa2, and MCF7 cells wild-type and mutant for p53. These methods have the potential to drive the development of a new generation of novel therapeutic classes encompassing pharmacologic compositions or polypharmacology in the appropriate disease context. ©2010 AACR.
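Two of the analysis steps named above, principal component analysis followed by kNN classification, can be illustrated on synthetic data. This is not the authors' bespoke pipeline (the feature dimensions, class structure, and neighbour count are invented, and the Kohonen self-organizing map step is omitted); it only shows the generic PCA-then-kNN pattern for classifying phenotypic profiles.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for per-cell morphology features: two phenotype
# classes whose mean feature vectors differ (dimensions are invented).
n_per_class, n_features = 50, 12
class0 = rng.normal(0.0, 1.0, (n_per_class, n_features))
class1 = rng.normal(1.5, 1.0, (n_per_class, n_features))
X = np.vstack([class0, class1])
y = np.array([0] * n_per_class + [1] * n_per_class)

# Principal component analysis via SVD of the mean-centred feature matrix.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
X2 = Xc @ Vt[:2].T          # project onto the first two components

def knn_predict(train, labels, query, k=5):
    """Classify a query profile by majority vote of its k nearest neighbours."""
    d = np.linalg.norm(train - query, axis=1)
    votes = labels[np.argsort(d)[:k]]
    return np.bincount(votes).argmax()

# Leave-one-out accuracy of the reduced-dimension classifier.
correct = sum(knn_predict(np.delete(X2, i, 0), np.delete(y, i), X2[i]) == y[i]
              for i in range(len(y)))
accuracy = correct / len(y)
```

Reducing to a few principal components before the neighbour search is a common way to make kNN robust when many image-derived features are correlated or noisy.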


Isherwood B.J.,AstraZeneca | Walls R.E.,AstraZeneca | Roberts M.E.,Tessella plc | Houslay T.M.,University of Stirling | And 3 more authors.
Journal of Biomolecular Screening | Year: 2013

Phenotypic screening seeks to identify substances that modulate phenotypes in a desired manner with the aim of progressing first-in-class agents. Successful campaigns require physiological relevance, robust screening, and an ability to deconvolute perturbed pathways. High-content analysis (HCA) is increasingly used in cell biology and offers one approach to prosecution of phenotypic screens, but challenges exist in exploitation where data generated are high volume and complex. We combine development of an organotypic model with novel HCA tools to map phenotypic responses to pharmacological perturbations. We describe implementation for angiogenesis, a process that has long been a focus for therapeutic intervention but has lacked robust models that more completely recapitulate the mechanisms involved. The study used human primary endothelial cells in co-culture with stromal fibroblasts to model multiple aspects of angiogenic signaling: cell interactions, proliferation, migration, and differentiation. Multiple quantitative descriptors were derived from automated microscopy using custom-designed algorithms. Data were extracted using a bespoke informatics platform that integrates processing, statistics, and feature display into a streamlined workflow for building and interrogating fingerprints. Ninety compounds were characterized, defining mode of action by phenotype. Our approach for assessing phenotypic outcomes in complex assay models is robust and capable of supporting a range of phenotypic screens at scale. © 2013 Society for Laboratory Automation and Screening.
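The fingerprint idea in this abstract, reducing many quantitative descriptors per well to a compound-level signature and then comparing signatures to group compounds by mode of action, can be sketched in a few lines. The descriptor names, plate layout, effect sizes, and similarity threshold below are all invented for illustration; the published platform is far more elaborate.

```python
import numpy as np

rng = np.random.default_rng(7)

# Invented descriptor names for an angiogenesis co-culture readout.
descriptors = ["tubule_length", "branch_points", "endothelial_area", "nuclei_count"]

# Simulated plate: vehicle-control wells plus three compounds; cmpd_b is
# constructed to share a mechanism (same effect direction) with cmpd_a.
control = rng.normal(100, 5, (24, len(descriptors)))
effect = np.array([-30.0, -20.0, -10.0, 0.0])     # anti-angiogenic signature
cmpd_a = rng.normal(100, 5, (6, len(descriptors))) + effect
cmpd_b = rng.normal(100, 5, (6, len(descriptors))) + 0.8 * effect
cmpd_c = rng.normal(100, 5, (6, len(descriptors))) + np.array([0.0, 0.0, 25.0, 30.0])

def fingerprint(wells, ctrl):
    """z-score each descriptor's well mean against the vehicle-control wells."""
    return (wells.mean(axis=0) - ctrl.mean(axis=0)) / ctrl.std(axis=0)

fp_a, fp_b, fp_c = (fingerprint(w, control) for w in (cmpd_a, cmpd_b, cmpd_c))

def similarity(u, v):
    """Cosine similarity between two phenotypic fingerprints."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Compounds sharing a mechanism should show similar fingerprints.
sim_ab = similarity(fp_a, fp_b)
sim_ac = similarity(fp_a, fp_c)
```

Normalizing against in-plate vehicle controls, as sketched here, is a standard way to make fingerprints comparable across plates and runs.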


Ljosa V.,The Broad Institute of MIT and Harvard | Caie P.D.,AstraZeneca | Caie P.D.,University of Edinburgh | Ter Horst R.,Radboud University Nijmegen | And 13 more authors.
Journal of Biomolecular Screening | Year: 2013

Quantitative microscopy has proven a versatile and powerful phenotypic screening technique. Recently, image-based profiling has shown promise as a means for broadly characterizing molecules' effects on cells in several drug-discovery applications, including target-agnostic screening and predicting a compound's mechanism of action (MOA). Several profiling methods have been proposed, but little is known about their comparative performance, impeding the wider adoption and further development of image-based profiling. We compared these methods by applying them to a widely applicable assay of cultured cells and measuring the ability of each method to predict the MOA of a compendium of drugs. A very simple method that is based on population means performed as well as methods designed to take advantage of the measurements of individual cells. This is surprising because many treatments induced a heterogeneous phenotypic response across the cell population in each sample. Another simple method, which performs factor analysis on the cellular measurements before averaging them, provided substantial improvement and was able to predict MOA correctly for 94% of the treatments in our ground-truth set. To facilitate the ready application and future development of image-based phenotypic profiling methods, we provide our complete ground-truth and test data sets, as well as open-source implementations of the various methods in a common software framework. © 2013 Society for Laboratory Automation and Screening.
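The simplest baseline this comparison describes, averaging single-cell measurements into one population-mean profile per treatment and then calling MOA by nearest neighbour, can be sketched directly. The data below are simulated (feature count, MOA structure, and cell numbers are all invented), and the factor-analysis refinement the paper found best is omitted; this only illustrates the population-mean baseline.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated single-cell measurements: 3 mechanisms of action (MOA),
# 4 treatments per MOA, 200 cells per treatment, 8 features per cell.
n_moa, n_per_moa, n_cells, n_feat = 3, 4, 200, 8
moa_centres = rng.normal(0, 2, (n_moa, n_feat))

profiles, moas = [], []
for m in range(n_moa):
    for _ in range(n_per_moa):
        cells = moa_centres[m] + rng.normal(0, 1, (n_cells, n_feat))
        profiles.append(cells.mean(axis=0))   # population-mean profile
        moas.append(m)
profiles = np.array(profiles)
moas = np.array(moas)

def predict_moa(i):
    """Nearest-neighbour MOA call, excluding the held-out treatment itself."""
    d = np.linalg.norm(profiles - profiles[i], axis=1)
    d[i] = np.inf
    return moas[np.argmin(d)]

accuracy = np.mean([predict_moa(i) == moas[i] for i in range(len(moas))])
```

Averaging over hundreds of cells shrinks per-feature noise by roughly the square root of the cell count, which is one reason such a simple profile can compete with single-cell methods, though it discards the population heterogeneity the abstract notes.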


Prescott B.,ITER Organization | Downing J.,ITER Organization | Downing J.,Tessella plc | Maio M.D.,ITER Organization | How J.,ITER Organization
Fusion Engineering and Design | Year: 2010

The ITER Organization, in common with many other fusion laboratories, has an authenticated-access website devoted to the communication of information to all its staff and remote collaborators. In 2007 and 2008, the number of registered users of this site increased by more than a factor of ten, to over 3000 at present, with approximately 900 unique users per month. In parallel, the project management structure of the organisation has been put in place. A decision was taken to move the web platform from simple HTML to Microsoft SharePoint [1] and to web-enable the many applications and databases used for ITER management. This decision has been well justified by the power and extensive flexibility provided by SharePoint: for example, it permits different groups to publish their own information and to collaborate, and to consolidate disparate spreadsheet data in linked SharePoint lists to improve quality and maintainability. This paper examines the use of SharePoint at ITER: why it was selected and what benefits it brings to both the local and remote ITER community. Some active case studies are presented. The paper also looks ahead at what future benefits this platform offers to ITER and reviews the type of information that the site can profitably publish. The paper also highlights some of the limitations of the platform and the problems of integration with other ITER systems, and discusses its potential for adaptability in other scientific organisations. © 2010 ITER Organization. Published by Elsevier B.V. All rights reserved.
