Bethesda, MD, United States


Wentzensen N.,U.S. National Cancer Institute | Gravitt P.E.,Johns Hopkins University | Long R.,Communications Engineering Branch | Schiffman M.,U.S. National Cancer Institute | And 9 more authors.
Journal of Clinical Microbiology | Year: 2012

Carcinogenic human papillomavirus (HPV) infections are necessary causes of most anogenital cancers. Viral load has been proposed as a marker for progression to cancer precursors but has been confirmed only for HPV16. Challenges in studying viral load are related to the lack of validated assays for a large number of genotypes. We compared viral load measured by Linear Array (LA) HPV genotyping with the gold standard, quantitative PCR (Q-PCR). LA genotyping and Q-PCR were performed in 143 cytology specimens from women referred to colposcopy. LA signal strength was measured by densitometry. Correlation coefficients and receiver operating characteristic (ROC) analyses were used to evaluate analytical and clinical performance. We observed a moderate to strong correlation between the two quantitative viral load measurements, ranging from an R value of 0.61 for HPV31 to an R value of 0.86 for HPV52. We also observed agreement between visual LA signal strength evaluation and Q-PCR. Both quantifications agreed on the disease stages with highest viral load, which varied by type (cervical intraepithelial neoplasia grade 2 [CIN2] for HPV52, CIN3 for HPV16 and HPV33, and cancer for HPV18 and HPV31). The area under the curve (AUC) for HPV16 Q-PCR at the CIN3 cutoff was 0.72 (P = 0.004), and the AUC for HPV18 LA at the CIN2 cutoff was 0.78 (P = 0.04). Quantification of LA signals correlates with the current gold standard for viral load, Q-PCR. Analyses of viral load need to address multiple infections and type attribution to evaluate whether viral load has clinical value beyond the established HPV16 finding. Our findings support conducting comprehensive studies of viral load and cervical cancer precursors using quantitative LA genotyping data. Copyright © 2012, American Society for Microbiology. All Rights Reserved.
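The correlation analysis in this abstract compares two paired quantitative measurements (LA densitometry signal vs. Q-PCR viral load). A minimal sketch of that comparison, with entirely hypothetical sample values standing in for the real assay data:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two paired measurement series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical paired values: LA band densitometry vs. Q-PCR log10 copy number.
la_signal = [0.2, 0.5, 0.9, 1.4, 2.0, 2.8]
qpcr_load = [1.1, 1.8, 2.6, 3.0, 4.1, 5.2]
r = pearson_r(la_signal, qpcr_load)
```

An R value near 0.61 to 0.86, as reported per genotype above, would indicate moderate to strong agreement between the two assays.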


Kim E.,Lehigh University | Huang X.,Lehigh University | Tan G.,Lehigh University | Long L.R.,Communications Engineering Branch | Antani S.,Communications Engineering Branch
Progress in Biomedical Optics and Imaging - Proceedings of SPIE | Year: 2010

As medical imaging rapidly expands, there is an increasing need to structure and organize image data for efficient analysis, storage and retrieval. In response, a large fraction of research in the areas of content-based image retrieval (CBIR) and picture archiving and communication systems (PACS) has focused on structuring information to bridge the "semantic gap", a disparity between machine and human image understanding. An additional consideration in medical images is the organization and integration of clinical diagnostic information. As a step towards bridging the semantic gap, we design and implement a hierarchical image abstraction layer using an XML-based language, Scalable Vector Graphics (SVG). Our method encodes features from the raw image and clinical information into an extensible "layer" that can be stored in an SVG document and efficiently searched. Any feature extracted from the raw image, including color, texture, orientation, size, neighbor information, etc., can be combined in our abstraction with high-level descriptions or classifications. Our representation can natively characterize an image in a hierarchical tree structure to support multiple levels of segmentation. Furthermore, being a World Wide Web Consortium (W3C) standard, SVG can be displayed by most web browsers, scripted with ECMAScript (the standardized scripting language, e.g. JavaScript, JScript), and indexed and retrieved by XML databases and XQuery. Using these open source technologies enables straightforward integration into existing systems. From our results, we show that the flexibility and extensibility of our abstraction facilitates effective storage and retrieval of medical images. © 2010 Copyright SPIE - The International Society for Optical Engineering.
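The idea of encoding extracted region features as an SVG "layer" can be sketched with the standard library's XML tools. The element names, `data-` attributes, and feature schema below are illustrative assumptions, not the paper's actual format:

```python
import xml.etree.ElementTree as ET

def regions_to_svg(regions):
    """Wrap extracted image-region features in an SVG group, so the document
    can be rendered by a browser and queried as XML (hypothetical schema)."""
    svg = ET.Element("svg", xmlns="http://www.w3.org/2000/svg")
    layer = ET.SubElement(svg, "g", id="abstraction-layer")
    for i, r in enumerate(regions):
        # Each region becomes a shape element carrying its features as attributes.
        ET.SubElement(layer, "rect", {
            "id": f"region-{i}",
            "x": str(r["x"]), "y": str(r["y"]),
            "width": str(r["w"]), "height": str(r["h"]),
            "data-texture": r["texture"],   # high-level descriptor
        })
    return ET.tostring(svg, encoding="unicode")

doc = regions_to_svg([{"x": 10, "y": 20, "w": 40, "h": 30, "texture": "coarse"}])
```

Because the output is plain XML, an XQuery engine or XML database could index and retrieve regions by their feature attributes, which is the integration path the abstract describes.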


Kim E.,Lehigh University | Antani S.,Communications Engineering Branch | Huang X.,Lehigh University | Long L.R.,Communications Engineering Branch | Demner-Fushman D.,Communications Engineering Branch
Progress in Biomedical Optics and Imaging - Proceedings of SPIE | Year: 2011

In clinical decision processes, relevant scientific publications and their associated medical images can provide valuable and insightful information. However, effectively searching through both text and image data is a difficult and arduous task. More specifically in the area of image search, finding similar images (or regions within images) poses another significant hurdle for effective knowledge dissemination. Thus, we propose a method using local regions within images to perform and refine medical image retrieval. In our first example, we define and extract large, characteristic regions within an image, and then show how to use these regions to match a query image to similar content. In our second example, we enable the formulation of a mixed query based upon text, image, and region information, to better represent the end user's search intentions. Given our new framework for region-based queries, we present an improved set of similar search results. © 2011 Copyright Society of Photo-Optical Instrumentation Engineers (SPIE).
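Matching a query image's characteristic regions against a database of region descriptors can be sketched as a nearest-neighbor ranking over feature vectors. The descriptor contents and database layout here are assumptions for illustration:

```python
import math

def nearest_regions(query_vec, db, k=2):
    """Rank stored region descriptors by Euclidean distance to the query
    region's feature vector and return the k closest region names."""
    scored = sorted(db.items(), key=lambda kv: math.dist(query_vec, kv[1]))
    return [name for name, _ in scored[:k]]

# Hypothetical database: region name -> 2-D feature vector (e.g. color/texture).
region_db = {"lesion_a": [0.0, 0.0], "lesion_b": [5.0, 5.0], "lesion_c": [1.0, 1.0]}
matches = nearest_regions([0.2, 0.2], region_db, k=2)
```

A mixed text/image/region query, as proposed in the paper, would then combine such a region ranking with scores from the text and whole-image modalities.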


Song D.,Thomson Reuters | Kim E.,Villanova University | Huang X.,Lehigh University | Patruno J.,Lehigh Valley Health Network | And 4 more authors.
IEEE Transactions on Medical Imaging | Year: 2015

Cervical cancer is the second most common type of cancer for women. Existing screening programs for cervical cancer, such as the Pap smear, suffer from low sensitivity; as a result, many ill patients go undetected in the screening process. Using images of the cervix as an aid in cervical cancer screening has the potential to greatly improve sensitivity, and can be especially useful in resource-poor regions of the world. In this paper, we develop a data-driven computer algorithm for interpreting cervical images based on color and texture. We are able to obtain 74% sensitivity and 90% specificity when differentiating high-grade cervical lesions from low-grade lesions and normal tissue. On the same dataset, using Pap tests alone yields a sensitivity of 37% and specificity of 96%, and using the HPV test alone gives 57% sensitivity and 93% specificity. Furthermore, we develop a comprehensive algorithmic framework based on Multimodal Entity Coreference for combining various tests to perform disease classification and diagnosis. When integrating multiple tests, we adopt information gain and gradient-based approaches for learning the relative weights of different tests. In our evaluation, we present a novel algorithm that integrates cervical images, Pap, HPV, and patient age, which yields 83.21% sensitivity and 94.79% specificity, a statistically significant improvement over using any single source of information alone. © 2014 IEEE.
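The information-gain weighting mentioned for combining tests measures how much a test's (discretized) output reduces uncertainty about the disease label. A minimal sketch of that computation, with toy labels rather than the paper's data:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def information_gain(labels, test_outputs):
    """Reduction in label entropy after splitting cases by a test's
    discretized output; higher gain suggests a more informative test."""
    n = len(labels)
    groups = {}
    for y, v in zip(labels, test_outputs):
        groups.setdefault(v, []).append(y)
    conditional = sum(len(g) / n * entropy(g) for g in groups.values())
    return entropy(labels) - conditional
```

Tests with higher gain would receive larger relative weights when the modalities (image, Pap, HPV, age) are fused.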


Xu T.,Lehigh University | Xin C.,Lehigh University | Long L.R.,Communications Engineering Branch | Antani S.,Communications Engineering Branch | And 3 more authors.
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2015

Cervical cancer is one of the most common types of cancer in women worldwide. Most deaths from cervical cancer occur in less developed areas of the world. In this work, we introduce a new image dataset along with ground truth diagnoses for evaluating image-based cervical disease classification algorithms. We collect a large number of cervigram images from a database provided by the US National Cancer Institute. From these images, we extract three types of complementary image features: Pyramid Histogram in L*A*B* color space (PLAB), Pyramid Histogram of Oriented Gradients (PHOG), and Pyramid Histogram of Local Binary Patterns (PLBP). PLAB captures color information, PHOG encodes edge and gradient information, and PLBP extracts texture information. Using these features, we run seven classic machine-learning algorithms to differentiate images of high-risk patient visits from those of low-risk patient visits. Extensive experiments are conducted on both balanced and imbalanced subsets of the data to compare the seven classifiers. These results can serve as a baseline for future research in cervical dysplasia classification using images. The image-based classifiers also outperform several other screening tests on the same datasets. © Springer International Publishing Switzerland 2015.
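The common structure behind PLAB, PHOG, and PLBP is the spatial pyramid: per-cell histograms are computed over successively finer grids and concatenated. A simplified sketch over a single-channel image (the grid depth, bin count, and value range are illustrative choices, not the paper's settings):

```python
def pyramid_histogram(image, levels=2, bins=4, max_val=256):
    """Concatenate per-cell value histograms over a 1x1, 2x2, ... grid,
    so the feature keeps both global and local distribution information."""
    h, w = len(image), len(image[0])
    feats = []
    for lvl in range(levels):
        cells = 2 ** lvl                      # cells per side at this level
        for ci in range(cells):
            for cj in range(cells):
                hist = [0] * bins
                for i in range(ci * h // cells, (ci + 1) * h // cells):
                    for j in range(cj * w // cells, (cj + 1) * w // cells):
                        hist[image[i][j] * bins // max_val] += 1
                feats.extend(hist)
    return feats

# 4x4 toy "image" of zero intensities: 1 + 4 cells, 4 bins each -> 20 features.
feats = pyramid_histogram([[0] * 4 for _ in range(4)])
```

For PLBP the per-pixel values would be local binary pattern codes instead of raw intensities, and for PHOG they would be gradient-orientation bins; the pyramid concatenation step is the same.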


Huang X.,Lehigh University | Zhu Y.,Lehigh University | Li H.,Lehigh University | Guan H.,Communications Engineering Branch | And 4 more authors.
International Journal of Biomedical Engineering and Technology | Year: 2012

We propose a new method for automated delineation of tumour boundaries by using joint information from whole-body PET and diagnostic CT images. Due to varying levels of FDG uptake in different organs, we apply locally adaptive thresholds of SUV to acquire an initial estimate of hot spot locations and shape in PET images. The hot spot boundaries are further improved by applying Competition Diffusion (CD) and Mode-Seeking Region Growing (MSRG) algorithms. These hot spots seen in PET are then confirmed and more accurately segmented considering CT information, through the Joint Likelihood Ratio Test technique for probabilistic integration. Experiments show that the proposed multi-modal method achieves more accurate and reproducible tumour delineation than using PET or CT alone. Copyright © 2012 Inderscience Enterprises Ltd.
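The locally adaptive SUV thresholding step can be sketched as a per-organ cutoff: each voxel is compared against a fraction of the maximum SUV observed within its organ label. The threshold fraction and the 2-D toy arrays below are assumptions for illustration, not the paper's parameters:

```python
def adaptive_hotspot_mask(suv, organ_labels, factor=0.5):
    """Mark a voxel as a hot-spot candidate when its SUV reaches a fixed
    fraction of the maximum SUV in its own organ region, so organs with
    high physiological FDG uptake get correspondingly higher cutoffs."""
    organ_max = {}
    for row_s, row_o in zip(suv, organ_labels):
        for s, o in zip(row_s, row_o):
            organ_max[o] = max(organ_max.get(o, 0.0), s)
    return [[1 if s >= factor * organ_max[o] else 0
             for s, o in zip(row_s, row_o)]
            for row_s, row_o in zip(suv, organ_labels)]

# Toy 2x2 slice: organ 0 (top row) has higher baseline uptake than organ 1.
mask = adaptive_hotspot_mask([[2.0, 8.0], [1.0, 4.0]], [[0, 0], [1, 1]])
```

In the full method this initial mask would then be refined by the Competition Diffusion and Mode-Seeking Region Growing steps before the CT-based probabilistic integration.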


Xu T.,Lehigh University | Huang X.,Lehigh University | Kim E.,Villanova University | Long L.R.,Communications Engineering Branch | Antani S.,Communications Engineering Branch
Progress in Biomedical Optics and Imaging - Proceedings of SPIE | Year: 2015

Cervical cancer is one of the most common types of cancer in women worldwide. Existing screening programs for cervical cancer suffer from low sensitivity. Using images of the cervix (cervigrams) as an aid in detecting pre-cancerous changes to the cervix has good potential to improve sensitivity and help reduce the number of cervical cancer cases. In this paper, we present a method that utilizes multi-modality information extracted from the multiple tests of a patient's visit to classify the visit as either low-risk or high-risk. Our algorithm integrates image features and text features to make a diagnosis. We also present two strategies to estimate the missing values in text features: Image Classifier Supervised Mean Imputation (ICSMI) and Image Classifier Supervised Linear Interpolation (ICSLI). We evaluate our method on a large medical dataset and compare it with several alternative approaches. The results show that the proposed method with the ICSLI strategy achieves the best result of 83.03% specificity and 76.36% sensitivity. When higher specificity is desired, our method can achieve 90% specificity with 62.12% sensitivity. © 2015 SPIE.
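The idea behind image-classifier-supervised imputation is to fill a missing test value using only cases whose image-classifier score is similar to the query's, rather than a global mean. This is a loose sketch of an ICSMI-style strategy; the binning scheme and fallback are my assumptions, not the paper's definition:

```python
def classifier_supervised_mean_impute(values, image_scores, n_bins=2):
    """Replace each missing test value (None) with the mean of observed
    values from cases whose image-classifier score lands in the same bin."""
    lo, hi = min(image_scores), max(image_scores)
    def bin_of(s):
        return min(n_bins - 1, int((s - lo) / (hi - lo + 1e-9) * n_bins))
    groups = {}
    for v, s in zip(values, image_scores):
        if v is not None:
            groups.setdefault(bin_of(s), []).append(v)
    bin_means = {b: sum(vs) / len(vs) for b, vs in groups.items()}
    overall = (sum(v for v in values if v is not None)
               / sum(v is not None for v in values))   # fallback for empty bins
    return [v if v is not None else bin_means.get(bin_of(s), overall)
            for v, s in zip(values, image_scores)]

# Toy data: the two missing values inherit means from score-similar cases.
filled = classifier_supervised_mean_impute([1, None, 3, None],
                                           [0.1, 0.2, 0.9, 0.8])
```

The ICSLI variant described in the abstract would interpolate linearly along the classifier score instead of using per-bin means.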


Gururajan A.,Texas Tech University | Kamalakannan S.,Texas Tech University | Sari-Sarraf H.,Texas Tech University | Shahriar M.,Texas Tech University | And 2 more authors.
Computerized Medical Imaging and Graphics | Year: 2011

In this paper, we address the issue of computer-assisted indexing in one specific case, i.e., for the 17,000 digitized images of the spine acquired during the National Health and Nutrition Examination Survey (NHANES). The crucial step in this process is to accurately segment the cervical and lumbar spine in the radiographic images. To that end, we have implemented a unique segmentation system that consists of a suite of spine-customized automatic and semi-automatic statistical shape segmentation algorithms. Using the aforementioned system, we have developed experiments to optimally generate a library of spine segmentations, which currently include 2000 cervical and 2000 lumbar spines. This work is expected to contribute toward the creation of a biomedical Content-Based Image Retrieval system that will allow retrieval of vertebral shapes by using query by image example or query by shape example. © 2010 Elsevier Ltd.
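Query-by-shape retrieval over a library of vertebral segmentations needs a distance between boundary contours. A deliberately minimal sketch, comparing correspondence-ordered point sets after centroid alignment (no rotation or scale fitting, unlike a full Procrustes or statistical shape model comparison):

```python
import math

def shape_distance(a, b):
    """Mean point-to-point distance between two contours after translating
    each to its centroid; assumes points are in corresponding order."""
    def centered(pts):
        cx = sum(p[0] for p in pts) / len(pts)
        cy = sum(p[1] for p in pts) / len(pts)
        return [(x - cx, y - cy) for x, y in pts]
    ca, cb = centered(a), centered(b)
    return sum(math.dist(p, q) for p, q in zip(ca, cb)) / len(ca)

# A vertebral contour and a translated copy should match near-perfectly.
contour = [(0.0, 0.0), (2.0, 0.0), (2.0, 1.0), (0.0, 1.0)]
shifted = [(x + 5.0, y - 3.0) for x, y in contour]
d = shape_distance(contour, shifted)
```

A CBIR system like the one envisioned here would rank the 2000 cervical and 2000 lumbar segmentations by such a distance to answer query-by-shape-example requests.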
