Wilson P.,PricewaterhouseCoopers LLP
Information Security Technical Report | Year: 2011

The adoption of cloud computing has faced challenges, with concerns about risk, loss of control over data, and the assurance of security and access control. This paper argues that these concerns should be treated as requirements to be fulfilled rather than barriers to adoption: the benefits of cloud computing are such that businesses that resist it could face real challenges in future, so the risks need to be, and can be, approached with a more positive outlook in light of this more balanced view. © 2011 Elsevier Ltd. All rights reserved.

Mason A.,PricewaterhouseCoopers LLP
Journal of business continuity & emergency planning | Year: 2011

As many new standards for business continuity (BC) are developed across the globe, there is a danger that some of the benefits of developing an industry code or standard are being eroded. The very definition of the term 'standard' - a level of quality or excellence that is accepted as the norm or by which actual attainments are judged - is at risk as the proliferation and diversification of standards in existence and under development today continue to grow almost unchecked. This paper provides a personal view on the necessity of an international certifiable standard within the BC industry, with the hope of influencing the debate in this area and thereby contributing to the international evolution of BC. The standards-related information is based on the author's experience as a member of the British Standards Institution technical committee that developed BS 25999 parts 1 and 2, and on his experience implementing both standards through to certification within his own organisation. References to the Business Continuity Institute are made not as to a parochial 'British' group, but in terms of its growing development into a true global professional membership organisation.

Shaikh A.R.,PricewaterhouseCoopers LLP | Butte A.J.,Stanford University | Schully S.D.,U.S. National Cancer Institute | Dalton W.S.,DeBartolo Family Personalized Medicine Institute at the Moffitt Cancer Center | And 2 more authors.
Journal of Medical Internet Research | Year: 2014

Biomedicine is undergoing a revolution driven by high-throughput and connective computing that is transforming medical research and practice. Using oncology as an example, the speed and capacity of genomic sequencing technologies are advancing the utility of individual genetic profiles for anticipating risk and targeting therapeutics. The goal is to enable an era of "P4" medicine that will become increasingly more predictive, personalized, preemptive, and participative over time. This vision hinges on leveraging potentially innovative and disruptive technologies in medicine to accelerate discovery and to reorient clinical practice for patient-centered care. Based on a panel discussion at the Medicine 2.0 conference in Boston with representatives from the National Cancer Institute, Moffitt Cancer Center, and Stanford University School of Medicine, this paper explores how emerging sociotechnical frameworks, informatics platforms, and health-related policy can be used to encourage data liquidity and innovation. This builds on the Institute of Medicine's vision for a "rapid learning health care system" to enable an open source, population-based approach to cancer prevention and control. © Abdul R Shaikh, Atul J Butte, Sheri D Schully, William S Dalton, Muin J Khoury, Bradford W Hesse.

Dogan N.,PricewaterhouseCoopers LLP | Tanrikulu Z.,Bogazici University
Information Technology and Management | Year: 2013

Classification algorithms are the most commonly used data mining models and are widely applied to extract valuable knowledge from huge amounts of data. The criteria used to evaluate classifiers are mostly accuracy, computational complexity, robustness, scalability, integration, comprehensibility, stability, and interestingness. This study compares the accuracy, speed (CPU time consumed), and robustness of classification algorithms across various datasets and implementation techniques. The data miner selects a model mainly with respect to classification accuracy; therefore, the performance of each classifier plays a crucial role in selection. Complexity is mostly dominated by the time required for classification; in terms of complexity, the CPU time consumed by each classifier is used here. The study first applies certain classification models to multiple datasets in three stages: first, implementing the algorithms on the original datasets; second, on the same datasets with continuous variables discretised; and third, on the same datasets with principal component analysis applied. The accuracy and speed of the results are then compared. The relationship between dataset characteristics, implementation attributes, accuracy, and CPU time is also examined and discussed. Moreover, a regression model is introduced to show the correlating effect of dataset and implementation conditions on classifier accuracy and CPU time. Finally, the study addresses the robustness of the classifiers, measured by repetitive experiments on both noisy and cleaned datasets. © 2012 Springer Science+Business Media, LLC.
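The evaluation protocol described above - running each classifier on the same dataset and recording both accuracy and CPU time consumed - can be sketched as follows. This is a minimal illustrative harness, not the study's actual code: the two classifiers (a majority-class baseline and a from-scratch 1-nearest-neighbour) and the toy dataset are assumptions chosen only to keep the example self-contained.

```python
# Illustrative sketch of a classifier-comparison harness: measure each
# model's accuracy and CPU time on a shared train/test split. The
# classifiers and toy data are stand-ins, not those used in the study.
import time

def majority_class(train_X, train_y, test_X):
    """Baseline: always predict the most frequent training label."""
    label = max(set(train_y), key=train_y.count)
    return [label] * len(test_X)

def one_nearest_neighbour(train_X, train_y, test_X):
    """1-NN with squared Euclidean distance, implemented from scratch."""
    preds = []
    for x in test_X:
        dists = [sum((a - b) ** 2 for a, b in zip(x, t)) for t in train_X]
        preds.append(train_y[dists.index(min(dists))])
    return preds

def evaluate(clf, train_X, train_y, test_X, test_y):
    """Return (accuracy, CPU seconds) for one classifier, mirroring the
    study's two main criteria."""
    start = time.process_time()            # CPU time, not wall-clock
    preds = clf(train_X, train_y, test_X)
    cpu = time.process_time() - start
    acc = sum(p == y for p, y in zip(preds, test_y)) / len(test_y)
    return acc, cpu

# Toy separable data: class 0 clusters near the origin, class 1 near (5, 5).
train_X = [(0, 0), (1, 0), (0, 1), (1, 1), (5, 5), (6, 5), (5, 6)]
train_y = [0, 0, 0, 0, 1, 1, 1]
test_X = [(0.5, 0.5), (5.5, 5.5)]
test_y = [0, 1]

for clf in (majority_class, one_nearest_neighbour):
    acc, cpu = evaluate(clf, train_X, train_y, test_X, test_y)
    print(f"{clf.__name__}: accuracy={acc:.2f}, cpu={cpu:.6f}s")
```

The study's second and third stages (discretising continuous variables, applying principal component analysis) would slot in as preprocessing steps applied to `train_X`/`test_X` before calling `evaluate`, leaving the harness unchanged.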

PricewaterhouseCoopers LLP | Date: 2013-10-23

A document processing system for accurately and efficiently analyzing documents and methods for making and using same. Each incoming document includes at least one section of textual content and is provided in an electronic form or as a paper-based document that is converted into an electronic form. Since many categories of documents, such as legal and accounting documents, often include one or more common text sections with similar textual content, the document processing system compares the documents to identify and classify the common text sections. The document comparison can be further enhanced by dividing the document into document segments and comparing the document segments; whereas, the conversion of paper-based documents likewise can be improved by comparing the resultant electronic document with a library of standard phrases, sentences, and paragraphs. The document processing system thereby enables an image of the document to be manipulated, as desired, to facilitate its review.
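The core comparison step the patent abstract describes - dividing a document into segments and matching them against a library of standard phrases, sentences, and paragraphs to classify common text sections - can be sketched with a fuzzy string match. This is a hypothetical illustration, not the patented method: the sentence-level segmentation, the similarity measure (`difflib.SequenceMatcher`), the 0.9 threshold, and the library contents are all assumptions.

```python
# Hypothetical sketch of common-text-section classification: split a
# document into segments and flag those that closely match a library of
# standard text. The segmentation rule, similarity metric, threshold,
# and library entries are illustrative assumptions, not the patent's.
from difflib import SequenceMatcher

STANDARD_LIBRARY = [
    "This agreement shall be governed by the laws of the State of New York.",
    "All rights reserved.",
]

def classify_segments(document, threshold=0.9):
    """Return (segment, is_common) pairs; a segment is 'common' when it
    closely matches any entry in the standard-text library."""
    results = []
    for segment in (s.strip() for s in document.split(".") if s.strip()):
        segment += "."
        score = max(
            SequenceMatcher(None, segment.lower(), std.lower()).ratio()
            for std in STANDARD_LIBRARY
        )
        results.append((segment, score >= threshold))
    return results

doc = ("This agreement shall be governed by the laws of the state of New York. "
       "The purchase price is one million dollars.")
for seg, common in classify_segments(doc):
    print(("COMMON  " if common else "UNIQUE  ") + seg)
```

In this sketch, boilerplate segments are flagged as COMMON so that review effort can focus on the UNIQUE segments, which matches the abstract's stated goal of making document review more efficient.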
