Navalpakkam B.K., Pattern Recognition Laboratory | Navalpakkam B.K., Institute of Medical Physics | Braun H., Institute of Medical Physics | Kuwert T., Friedrich-Alexander-University Erlangen-Nuremberg | Quick H.H., Institute of Medical Physics
Investigative Radiology | Year: 2013

OBJECTIVES: Attenuation correction of positron emission tomographic (PET) data is critical in providing accurate and quantitative PET volumes. Deriving an attenuation map (μ-map) from magnetic resonance (MR) volumes is a challenge in PET/MR hybrid imaging. The difficulty lies in differentiating cortical bone from air on standard MR sequences because both these classes yield little to no MR signal and thus show no distinguishable information. The objective of this contribution is 2-fold: (1) to generate and evaluate a continuous-valued computed tomography (CT)-like attenuation map (μ-map) from dedicated MR sequences and (2) to compare its PET quantification accuracy with respect to a CT-based attenuation map as the criterion standard and other segmentation-based attenuation maps for studies of the head. MATERIALS AND METHODS: Three-dimensional Dixon volume-interpolated breath-hold examination and ultrashort echo time sequences were acquired for each patient on a Siemens 3-T Biograph mMR PET/MR hybrid system, and the corresponding patient CT was acquired on a Siemens Biograph 64. A pseudo-CT model was trained using the ε-insensitive support vector regression (ε-SVR) technique on 5 patients with CT/MR/PET triplets, and the resulting model was evaluated on 5 additional patients who were not included in the training process. Four μ-maps were compared, 3 of which were derived from CT: scaled CT (μ-map_CT), 3-class segmented CT without cortical bone (μ-map_nobone), and 4-class segmented CT with cortical bone (μ-map_bone); the fourth was derived from MR sequences via the ε-SVR technique described above (ie, the MR-predicted μ-map_MR). PET volumes were reconstructed with each of these μ-maps, and relative difference images were calculated with respect to μ-map_CT as the criterion standard. RESULTS: For PET quantification, the proposed method yields a mean (SD) absolute error of 2.40% (3.69%) and 2.16% (1.77%) for the complete brain and the regions close to the cortical bone, respectively. In contrast, PET using μ-map_nobone yielded 10.15% (3.31%) and 11.03% (2.26%) for the same regions, whereas PET using μ-map_bone resulted in errors of 3.96% (3.71%) and 4.22% (3.91%). Furthermore, it is shown that the model can be extended to predict pseudo-CTs for other anatomical regions on the basis of MR information alone. CONCLUSIONS: In this study, the generation of continuous-valued attenuation maps from MR sequences is demonstrated, and its effect on PET quantification is evaluated in comparison with segmentation-based μ-maps. A less-than-2-minute acquisition time makes the proposed approach promising for clinical application in studies of the head. However, further experiments are required to validate and evaluate this technique for attenuation correction in other regions of the body. Copyright © 2013 by Lippincott Williams & Wilkins.
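As a rough illustration of the regression step, the following minimal sketch shows how ε-insensitive support vector regression can map per-voxel MR-derived features to continuous CT-like intensities. The feature layout, hyperparameters, and randomly generated training data below are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np
from sklearn.svm import SVR

# Illustrative only: X holds per-voxel MR-derived features (e.g., Dixon
# water/fat and UTE echo intensities -- an assumed layout), y holds the
# co-registered CT intensities in Hounsfield units.
rng = np.random.default_rng(0)
X_train = rng.random((1000, 4))            # 4 hypothetical MR features per voxel
y_train = rng.uniform(-1000, 2000, 1000)   # placeholder CT values in HU

# epsilon defines the insensitive tube: prediction errors smaller than
# epsilon incur no loss, which is the defining property of eps-SVR.
model = SVR(kernel="rbf", C=10.0, epsilon=5.0)
model.fit(X_train, y_train)

# Predict a continuous pseudo-CT value for each voxel of an unseen patient.
X_new = rng.random((5, 4))
pseudo_ct = model.predict(X_new)
```

In practice, the predicted pseudo-CT values would then be converted to 511-keV attenuation coefficients to build the μ-map, as in any CT-based attenuation correction.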


Chitchian S., University of Texas Medical Branch | Chitchian S., Intel Corporation | Mayer M.A., Pattern Recognition Laboratory | Boretsky A.R., University of Texas Medical Branch | And 2 more authors.
Journal of Biomedical Optics | Year: 2012

Image enhancement of retinal structures in optical coherence tomography (OCT) scans through denoising has the potential to aid in the diagnosis of several eye diseases. In this paper, a locally adaptive denoising algorithm using the double-density dual-tree complex wavelet transform, a combination of the double-density wavelet transform and the dual-tree complex wavelet transform, is applied to reduce speckle noise in OCT images of the retina. The algorithm overcomes a limitation of the commonly used multiple-frame averaging technique, namely the limited number of frames that can be recorded due to eye movements, by providing comparable image quality in roughly an order of magnitude less acquisition time. In addition, improvements in image quality metrics and a 5-dB increase in the signal-to-noise ratio are attained. © 2012 Society of Photo-Optical Instrumentation Engineers (SPIE).
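The wavelet-domain thresholding idea behind such denoising can be sketched as follows. Note that the paper uses a double-density dual-tree complex wavelet transform; this sketch substitutes an ordinary separable DWT (via PyWavelets) purely to show the subband soft-thresholding scheme, and the MAD-based noise estimate is a standard assumption, not the authors' choice.

```python
import numpy as np
import pywt

def denoise_oct(image, wavelet="db4", levels=4):
    """Speckle reduction by wavelet-domain soft thresholding.

    Simplified stand-in: a separable DWT replaces the paper's
    double-density dual-tree complex wavelet transform.
    """
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    # Estimate the noise level from the finest diagonal subband (MAD rule).
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(image.size))
    denoised = [coeffs[0]]
    for detail in coeffs[1:]:
        denoised.append(tuple(pywt.threshold(d, thresh, mode="soft")
                              for d in detail))
    return pywt.waverec2(denoised, wavelet)
```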


Wasza J., Pattern Recognition Laboratory | Bauer S., Pattern Recognition Laboratory | Hornegger J., Pattern Recognition Laboratory | Hornegger J., Friedrich-Alexander-University Erlangen-Nuremberg
Proceedings of the IEEE International Conference on Computer Vision | Year: 2011

Recent advances in range imaging (RI) have enabled dense 3-D scene acquisition in real-time. However, due to physical limitations and the underlying range sampling principles, range data are subject to noise and may contain invalid measurements. Hence, data preprocessing is a prerequisite for practical applications but poses a challenge with respect to real-time constraints. In this paper, we propose a generic and modality-independent pipeline for efficient RI data preprocessing on the graphics processing unit (GPU). The contributions of this work are efficient GPU implementations of normalized convolution for the restoration of invalid measurements, bilateral temporal averaging for dynamic scenes, and guided filtering for edge-preserving denoising. Furthermore, we show that the transformation from range measurements to 3-D world coordinates can be computed efficiently on the GPU. The pipeline has been evaluated on real data from a Time-of-Flight sensor and Microsoft's Kinect. In a run-time performance study, we show that for VGA-resolution data, our preprocessing pipeline runs at 100 fps on an off-the-shelf consumer GPU. © 2011 IEEE.
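For the last stage mentioned, the mapping from range measurements to 3-D coordinates, a minimal CPU sketch under a pinhole camera model looks like the following; the intrinsic parameters are placeholders, and the paper performs this step on the GPU.

```python
import numpy as np

def depth_to_world(depth, fx, fy, cx, cy):
    """Back-project a depth map to 3-D camera coordinates (pinhole model).

    Intrinsics (fx, fy, cx, cy) are sensor-specific placeholders; the
    values used in the paper's pipeline are not given here.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack((x, y, depth), axis=-1)  # h x w x 3 point cloud
```

Because every pixel is independent, this transform is trivially data-parallel, which is what makes it a good fit for the GPU pipeline the paper describes.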


Neumann D., Pattern Recognition Laboratory | Lugauer F., Pattern Recognition Laboratory | Bauer S., Pattern Recognition Laboratory | Wasza J., Pattern Recognition Laboratory | And 2 more authors.
Proceedings of the IEEE International Conference on Computer Vision | Year: 2011

The modeling of three-dimensional scene geometry from temporal point cloud streams is of particular interest for a variety of computer vision applications. With the advent of RGB-D imaging devices that deliver dense, metric and textured 6-D data in real-time, on-the-fly reconstruction of static environments has come into reach. In this paper, we propose a system for real-time point cloud mapping based on an efficient implementation of the iterative closest point (ICP) algorithm on the graphics processing unit (GPU). In order to achieve robust mappings at real-time performance, our nearest neighbor search evaluates both geometric and photometric information in a direct manner. For acceleration of the search space traversal, we exploit the inherent computing parallelism of GPUs. In this work, we have investigated the fitness of the random ball cover (RBC) data structure and search algorithm, originally proposed for high-dimensional problems, for 6-D data. In particular, we introduce a scheme that enables both fast RBC construction and queries. The proposed system is validated on an indoor scene modeling scenario. For dense data from the Microsoft Kinect sensor (640×480 px), our implementation achieved ICP runtimes of less than 20 ms on an off-the-shelf consumer GPU. © 2011 IEEE.
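A minimal CPU sketch of point-to-point ICP is shown below; a k-d tree stands in for the paper's GPU random ball cover, and the joint geometric-photometric (6-D) matching is omitted for brevity, so this illustrates only the alternation of correspondence search and closed-form rigid alignment.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iterations=20):
    """Point-to-point ICP on N x 3 point clouds (simplified sketch)."""
    tree = cKDTree(target)
    R, t = np.eye(3), np.zeros(3)
    src = source.copy()
    for _ in range(iterations):
        _, idx = tree.query(src)          # nearest-neighbor correspondences
        matched = target[idx]
        # Closed-form rigid alignment of matched pairs (Kabsch / SVD).
        mu_s, mu_m = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_step = Vt.T @ D @ U.T           # reflection-safe rotation
        t_step = mu_m - R_step @ mu_s
        src = src @ R_step.T + t_step
        R, t = R_step @ R, R_step @ t + t_step  # accumulate the pose
    return R, t
```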


Mayer M.A., Pattern Recognition Laboratory | Hornegger J., Pattern Recognition Laboratory | Mardin C.Y., Universitätsklinikum | Tornow R.P., Universitätsklinikum
Biomedical Optics Express | Year: 2010

Automated measurements of the retinal nerve fiber layer thickness on circular OCT B-scans provide physicians with additional parameters for glaucoma diagnosis. We propose a novel retinal nerve fiber layer segmentation algorithm for frequency-domain data that can be applied to scans from both normal healthy subjects and glaucoma patients using the same set of parameters. In addition, the algorithm remains almost unaffected by image quality. The main part of the segmentation process is based on the minimization of an energy function consisting of gradient and local smoothing terms. A quantitative evaluation comparing the automated segmentation results to manually corrected segmentations from three reviewers is performed. A total of 72 scans from glaucoma patients and 132 scans from normal subjects, each from a different person, composed the database for the evaluation of the segmentation algorithm. A mean absolute error per A-scan of 2.9 μm was achieved on glaucomatous eyes, and 3.6 μm on healthy eyes. The mean absolute segmentation error over all A-scans lies below 10 μm on 95.1% of the images. Thus, our approach provides a reliable tool for extracting diagnostically relevant parameters from OCT B-scans for glaucoma diagnosis. © 2010 Optical Society of America.
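As a simplified illustration of energy-minimizing boundary extraction (not the authors' exact energy), the following dynamic-programming sketch combines a gradient data term with a smoothness penalty on depth jumps between neighboring A-scans.

```python
import numpy as np

def segment_boundary(gradient, smooth_weight=2.0):
    """Extract one layer boundary per A-scan from a depth x n_ascans map of
    gradient responses by minimizing data term + smoothness penalty.
    Simplified stand-in for the paper's energy minimization."""
    depth, n_ascans = gradient.shape
    cost = np.empty((depth, n_ascans))
    cost[:, 0] = -gradient[:, 0]          # data term: favor strong gradients
    back = np.zeros((depth, n_ascans), dtype=int)
    z = np.arange(depth)
    for a in range(1, n_ascans):
        # Pairwise smoothness penalty |z - z'| between adjacent A-scans.
        total = cost[:, a - 1][None, :] + \
            smooth_weight * np.abs(z[:, None] - z[None, :])
        back[:, a] = np.argmin(total, axis=1)
        cost[:, a] = total[np.arange(depth), back[:, a]] - gradient[:, a]
    # Backtrack the minimal-energy boundary path.
    path = np.empty(n_ascans, dtype=int)
    path[-1] = int(np.argmin(cost[:, -1]))
    for a in range(n_ascans - 1, 0, -1):
        path[a - 1] = back[path[a], a]
    return path
```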
