Mayer M.A.,Pattern Recognition Laboratory | Borsdorf A.,Pattern Recognition Laboratory | Wagner M.,University of Heidelberg | Hornegger J.,Pattern Recognition Laboratory | And 2 more authors.
Biomedical Optics Express | Year: 2012

We introduce a novel speckle noise reduction algorithm for OCT images. In contrast to present approaches, the algorithm does not rely on simple averaging of multiple image frames or denoising on the final averaged image. Instead, it uses wavelet decompositions of the single frames for a local noise and structure estimation. Based on this analysis, the wavelet detail coefficients are weighted, averaged and reconstructed. At a signal-to-noise gain of about 100% we observe only a minor sharpness decrease, as measured by a full-width-half-maximum reduction of 10.5%. While a similar signal-to-noise gain would require averaging of 29 frames, we achieve this result using only 8 frames as input to the algorithm. A possible application of the proposed algorithm is preprocessing in retinal structure segmentation algorithms, to allow a better differentiation between real tissue information and unwanted speckle noise. © 2012 Optical Society of America.
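The abstract's idea can be illustrated in a much-simplified form. The sketch below (an assumption-laden toy, not the authors' multiscale method) performs a single-level Haar decomposition of each frame, attenuates detail coefficients with a hypothetical Wiener-style significance weight, averages the weighted coefficients across frames, and reconstructs; the weighting rule and noise parameter are inventions for the example:

```python
import numpy as np

def haar2d(x):
    # Single-level 2-D Haar decomposition: approximation band plus
    # horizontal, vertical and diagonal detail bands.
    p00, p10 = x[0::2, 0::2], x[1::2, 0::2]
    p01, p11 = x[0::2, 1::2], x[1::2, 1::2]
    a = (p00 + p10 + p01 + p11) / 2.0
    h = (p00 - p10 + p01 - p11) / 2.0
    v = (p00 + p10 - p01 - p11) / 2.0
    d = (p00 - p10 - p01 + p11) / 2.0
    return a, (h, v, d)

def ihaar2d(a, details):
    # Exact inverse of haar2d.
    h, v, d = details
    x = np.empty((2 * a.shape[0], 2 * a.shape[1]))
    x[0::2, 0::2] = (a + h + v + d) / 2.0
    x[1::2, 0::2] = (a - h + v - d) / 2.0
    x[0::2, 1::2] = (a + h - v - d) / 2.0
    x[1::2, 1::2] = (a - h - v + d) / 2.0
    return x

def wavelet_weighted_average(frames, noise_sigma):
    # Decompose each frame, attenuate detail coefficients that are small
    # relative to the assumed noise level (likely speckle), then average
    # the weighted coefficients across frames and reconstruct.
    approx, weighted = [], []
    for f in frames:
        a, bands = haar2d(np.asarray(f, float))
        w_bands = [(c**2 / (c**2 + noise_sigma**2)) * c for c in bands]
        approx.append(a)
        weighted.append(w_bands)
    a_mean = np.mean(approx, axis=0)
    det_mean = tuple(np.mean([wb[i] for wb in weighted], axis=0)
                     for i in range(3))
    return ihaar2d(a_mean, det_mean)
```

In structure-free regions the weight is near zero and speckle is suppressed beyond what plain frame averaging achieves, which is the qualitative effect the abstract describes.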

Navalpakkam B.K.,Pattern Recognition Laboratory | Navalpakkam B.K.,Institute of Medical Physics | Braun H.,Institute of Medical Physics | Kuwert T.,Friedrich - Alexander - University, Erlangen - Nuremberg | Quick H.H.,Institute of Medical Physics
Investigative Radiology | Year: 2013

OBJECTIVES: Attenuation correction of positron emission tomographic (PET) data is critical in providing accurate and quantitative PET volumes. Deriving an attenuation map (μ-map) from magnetic resonance (MR) volumes is a challenge in PET/MR hybrid imaging. The difficulty lies in differentiating cortical bone from air in standard MR sequences because both these classes yield little to no MR signal and thus show no distinguishable information. The objective of this contribution is 2-fold: (1) to generate and evaluate a computed tomography (CT)-like attenuation map (μ-map) with continuous density values from dedicated MR sequences and (2) to compare its PET quantification accuracy with respect to a CT-based attenuation map as the criterion standard and other segmentation-based attenuation maps for studies of the head. MATERIALS AND METHODS: Three-dimensional Dixon-volume interpolated breath-hold examination and ultrashort echo time sequences were acquired for each patient on a Siemens 3-T Biograph mMR PET/MR hybrid system and the corresponding patient CT on a Siemens Biograph 64. A pseudo-CT training was done using the epsilon-insensitive support vector regression (ε-SVR) technique on 5 patients who had CT/MR/PET triplets, and the generated model was evaluated on 5 additional patients who were not included in the training process. Four μ-maps were compared, 3 of them derived from CT: scaled CT (μ-mapCT), 3-class segmented CT without cortical bone (μ-mapnobone), 4-class segmented CT with cortical bone (μ-mapbone), and 1 from MR sequences via the ε-SVR technique previously mentioned (ie, MR predicted [μ-mapMR]). Positron emission tomographic volumes with each of the previously mentioned μ-maps were reconstructed, and relative difference images were calculated with respect to μ-mapCT as the criterion standard.
RESULTS: For PET quantification, the proposed method yields a mean (SD) absolute error of 2.40% (3.69%) and 2.16% (1.77%) for the complete brain and the regions close to the cortical bone, respectively. In contrast, PET using μ-mapnobone yielded 10.15% (3.31%) and 11.03% (2.26%) for the same regions, whereas PET using μ-mapbone resulted in errors of 3.96% (3.71%) and 4.22% (3.91%). Furthermore, it is shown that the model can be extended to predict pseudo-CTs for other anatomical regions on the basis of only MR information. CONCLUSIONS: In this study, the generation of continuous valued attenuation maps from MR sequences is demonstrated and its effect on PET quantification is evaluated in comparison with segmentation-based μ-maps. A less-than-2-minute acquisition time makes the proposed approach promising for clinical application in studies of the head. However, further experiments are required to validate and evaluate this technique for attenuation correction in other regions of the body. Copyright © 2013 by Lippincott Williams & Wilkins.
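The ε-insensitive loss at the heart of ε-SVR has a compact form; the sketch below shows the penalty such a regression minimizes when mapping MR-derived features to CT-like attenuation values (the ε value here is illustrative, and the abstract does not specify the features or hyperparameters actually used):

```python
import numpy as np

def epsilon_insensitive_loss(y_true, y_pred, eps):
    # Residuals inside the eps-tube cost nothing; outside it, the
    # penalty grows linearly with the distance to the tube.
    r = np.abs(np.asarray(y_true, float) - np.asarray(y_pred, float))
    return np.maximum(0.0, r - eps)
```

Training then minimizes the sum of this loss over training voxels plus a regularization term, which makes the fit tolerant to small prediction errors in the pseudo-CT values.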

Chitchian S.,University of Texas Medical Branch | Chitchian S.,Intel Corporation | Mayer M.A.,Pattern Recognition Laboratory | Boretsky A.R.,University of Texas Medical Branch | And 2 more authors.
Journal of Biomedical Optics | Year: 2012

Image enhancement of retinal structures in optical coherence tomography (OCT) scans through denoising has the potential to aid in the diagnosis of several eye diseases. In this paper, a locally adaptive denoising algorithm using the double-density dual-tree complex wavelet transform, a combination of the double-density wavelet transform and the dual-tree complex wavelet transform, is applied to reduce speckle noise in OCT images of the retina. The algorithm overcomes the limitations of the commonly used multiple-frame averaging technique, namely the limited number of frames that can be recorded due to eye movements, by providing comparable image quality in an order of magnitude less acquisition time. In addition, improvements in image quality metrics and a 5 dB increase in the signal-to-noise ratio are attained. © 2012 Society of Photo-Optical Instrumentation Engineers (SPIE).

Balk L.,VU University Amsterdam | Mayer M.,Pattern Recognition Laboratory | Uitdehaag B.M.J.,VU University Amsterdam | Petzold A.,VU University Amsterdam | Petzold A.,University College London
Journal of Neurology | Year: 2013

The application of spectral domain optical coherence tomography as a surrogate for neurodegeneration in a range of neurological disorders demands better understanding of the physiological variation of retinal layer thicknesses, which may mask any value of this emerging outcome measure. A prospective study compared retinal layer thicknesses between control subjects (n = 15) and runners (n = 27) participating in a 10-km charity run. Three scans were performed using an eye-tracking function (EBF) and automated scan registration for optimal precision at (1) baseline, (2) directly after the run, and (3) following a rehydration period. Retinal layer segmentation was performed with suppression of axial retinal vessel signal artifacts. Following the run, there was an increase in the relative retinal nerve fibre layer (p = 0.018), the combined inner plexiform/ganglion cell layer (p = 0.038), and the outer nuclear layer (p = 0.018) in runners compared to controls. The initial increase of thickness in the outer nuclear layer of runners (p < 0.0001) was likely related to (noncompliant) rehydration during exercise. Following a period of rest and rehydration, the difference in thickness change between the two groups disappeared for all retinal layers except the retinal nerve fibre layer (RNFL) (p < 0.05). There is a quantifiable change in the axial thickness of retinal layers that can be explained by an increase in the cellular volume. This effect may potentially be caused by H2O volume shifts. © 2013 Springer-Verlag Berlin Heidelberg.

Grimm R.,Pattern Recognition Laboratory | Grimm R.,Siemens AG | Furst S.,TU Munich | Souvatzoglou M.,TU Munich | And 9 more authors.
Medical Image Analysis | Year: 2014

Accurate localization and uptake quantification of lesions in the chest and abdomen using PET imaging is challenged by respiratory motion occurring during the exam. This work describes how a stack-of-stars MRI acquisition on integrated PET/MRI systems can be used to derive a high-resolution motion model, how many respiratory phases need to be differentiated, how much MRI scan time is required, and how the model is employed for motion-corrected PET reconstruction. MRI self-gating is applied to perform respiratory gating of the MRI data and simultaneously acquired PET raw data. After gated PET reconstruction, the MRI motion model is used to fuse the individual gates into a single, motion-compensated volume with high signal-to-noise ratio (SNR). The proposed method is evaluated in vivo for 15 clinical patients. The gating requires 5-7 bins to capture the motion to an average accuracy of 2 mm. With 5 bins, the motion-modeling scan can be shortened to 3-4 min. The motion-compensated reconstructions show significantly higher accuracy in lesion quantification in terms of standardized uptake value (SUV) and different measures of lesion contrast compared to ungated PET reconstruction. Furthermore, unlike gated reconstructions, the motion-compensated reconstruction does not lead to SNR loss. © 2014 Elsevier B.V.
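The gating step described above (deriving the self-gating signal from the k-space center is omitted here) reduces to sorting time points into respiratory bins. A minimal sketch, assuming a 1-D respiratory surrogate signal and quantile-based amplitude binning so that each of the 5 gates receives comparable count statistics:

```python
import numpy as np

def bin_respiratory_phases(signal, n_bins=5):
    # Amplitude-based gating: split the respiratory surrogate into
    # n_bins quantile bins so each gate receives roughly the same
    # number of samples (and hence comparable PET count statistics).
    edges = np.quantile(signal, np.linspace(0.0, 1.0, n_bins + 1))
    edges[-1] += 1e-9  # make the top edge inclusive
    return np.clip(np.digitize(signal, edges) - 1, 0, n_bins - 1)
```

Each PET list-mode event would then be routed to the gate of its acquisition time before gated reconstruction and motion-model fusion.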

Bauer S.,Pattern Recognition Laboratory | Wasza J.,Pattern Recognition Laboratory | Haase S.,Pattern Recognition Laboratory | Marosi N.,Siemens AG | And 2 more authors.
Proceedings of the IEEE International Conference on Computer Vision | Year: 2011

In radiation therapy, prior to each treatment fraction, the patient must be aligned to computed tomography (CT) data. Patient setup verification systems based on range imaging (RI) can accurately verify the patient position and adjust the treatment table at a fine scale, but require an initial manual setup using lasers and skin markers. We propose a novel markerless solution that enables a fully automatic initial coarse patient setup. The table transformation that brings template and reference data in congruence is estimated from point correspondences based on matching local surface descriptors. Inherently, this point-based registration approach is capable of coping with gross initial misalignments and partial matching. Facing the challenge of multi-modal surface registration (RI/CT), we have adapted state-of-the-art descriptors to achieve invariance to mesh resolution and robustness to variations in topology. In a case study on real data from a low-cost RI device (Microsoft Kinect), the performance of different descriptors is evaluated on anthropomorphic phantoms. Furthermore, we have investigated the system's resilience to deformations for mono-modal RI/RI registration of data from healthy volunteers. Under gross initial misalignments, our method resulted in an average angular error of 1.5° and an average translational error of 13.4 mm in RI/CT registration. This coarse patient setup provides a feasible initialization for subsequent refinement with verification systems. © 2011 IEEE.
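Once descriptor matching has produced point correspondences, the rigid table transformation can be estimated in closed form. A minimal SVD-based (Kabsch/Umeyama-style) sketch, assuming correspondences are already given and outlier rejection is handled elsewhere:

```python
import numpy as np

def rigid_from_correspondences(src, dst):
    # Closed-form least-squares rigid transform: returns R, t such
    # that dst ~ R @ src + t for corresponding point rows.
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.eye(3)
    D[2, 2] = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflection
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t
```

Because the estimate uses correspondences rather than an initial pose, it copes with the gross misalignments mentioned in the abstract.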

Schmalz C.,Friedrich - Alexander - University, Erlangen - Nuremberg | Schmalz C.,Pattern Recognition Laboratory | Forster F.,Friedrich - Alexander - University, Erlangen - Nuremberg | Angelopoulou E.,Pattern Recognition Laboratory
Optical Engineering | Year: 2011

Traditionally, most camera calibrations rely on a planar target with well-known marks. However, the localization error of the marks in the image is a source of inaccuracy. We propose the use of high-resolution digital displays as active calibration targets to obtain more accurate calibration results for all types of cameras. The display shows a series of coded patterns to generate correspondences between world points and image points. This has several advantages. No special calibration hardware is necessary because suitable displays are practically ubiquitous. The method is fully automatic, and no identification of marks is necessary. For a coding scheme based on phase shifting, the localization accuracy is approximately independent of the camera's focus settings. Most importantly, higher accuracy can be achieved compared to passive targets, such as printed checkerboards. A rigorous evaluation is performed to substantiate this claim. Our active target method is compared to standard calibrations using a checkerboard target. We perform camera calibrations with different combinations of displays, cameras, and lenses, as well as with simulated images, and find markedly lower reprojection errors when using active targets. For example, in a stereo reconstruction task, the accuracy of a system calibrated with an active target is five times better. © 2011 Society of Photo-Optical Instrumentation Engineers (SPIE).
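The phase-shifting decoding mentioned above can be sketched generically (the authors' actual coding scheme is not specified in the abstract; the sinusoidal intensity model below is the standard N-step assumption):

```python
import numpy as np

def decode_phase(images):
    # N-step phase shifting under the model
    #   I_k = A + B * cos(phi + 2*pi*k/N),   k = 0..N-1.
    # The wrapped phase phi is recovered per pixel with an arctangent,
    # independent of the offset A and modulation B -- which is why the
    # localization is largely insensitive to defocus.
    images = np.asarray(images, float)
    n = images.shape[0]
    k = np.arange(n).reshape((-1,) + (1,) * (images.ndim - 1))
    delta = 2.0 * np.pi * k / n
    s = np.sum(images * np.sin(delta), axis=0)
    c = np.sum(images * np.cos(delta), axis=0)
    return np.arctan2(-s, c)
```

The decoded phase gives a dense, sub-pixel world-to-image correspondence at every display pixel, in contrast to the sparse corners of a checkerboard.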

Neumann D.,Pattern Recognition Laboratory | Lugauer F.,Pattern Recognition Laboratory | Bauer S.,Pattern Recognition Laboratory | Wasza J.,Pattern Recognition Laboratory | And 2 more authors.
Proceedings of the IEEE International Conference on Computer Vision | Year: 2011

The modeling of three-dimensional scene geometry from temporal point cloud streams is of particular interest for a variety of computer vision applications. With the advent of RGB-D imaging devices that deliver dense, metric and textured 6-D data in real-time, on-the-fly reconstruction of static environments has come into reach. In this paper, we propose a system for real-time point cloud mapping based on an efficient implementation of the iterative closest point (ICP) algorithm on the graphics processing unit (GPU). In order to achieve robust mappings at real-time performance, our nearest neighbor search evaluates both geometric and photometric information in a direct manner. For acceleration of the search space traversal, we exploit the inherent computing parallelism of GPUs. In this work, we have investigated the fitness of the random ball cover (RBC) data structure and search algorithm, originally proposed for high-dimensional problems, for 6-D data. In particular, we introduce a scheme that enables both fast RBC construction and queries. The proposed system is validated on an indoor scene modeling scenario. For dense data from the Microsoft Kinect sensor (640×480 px), our implementation achieved ICP runtimes of <20 ms on an off-the-shelf consumer GPU. © 2011 IEEE.
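The RBC idea can be illustrated with a simplified CPU sketch of the approximate "one-shot" variant (representative selection by strided sampling is an assumption here; the original uses random selection, and the GPU version replaces the loops with parallel brute-force scans):

```python
import numpy as np

def build_rbc(points, n_reps):
    # Offline construction: choose representatives and assign every
    # point to its nearest representative ("ball" membership).
    reps_idx = np.linspace(0, len(points) - 1, n_reps).astype(int)
    reps = points[reps_idx]
    d2 = ((points[:, None, :] - reps[None, :, :]) ** 2).sum(-1)
    return reps, np.argmin(d2, axis=1)

def rbc_query(q, points, reps, owner):
    # One-shot query: find the nearest representative, then search
    # exhaustively only inside that representative's ball. Both stages
    # are small brute-force scans, which map well onto a GPU.
    r = int(np.argmin(((reps - q) ** 2).sum(-1)))
    members = np.flatnonzero(owner == r)
    d2 = ((points[members] - q) ** 2).sum(-1)
    return int(members[np.argmin(d2)])
```

With O(sqrt(N)) representatives, each query touches roughly 2*sqrt(N) candidates instead of N, at the cost of occasionally missing the true nearest neighbor near ball boundaries; a tolerable trade-off inside an iterative scheme like ICP.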

Wasza J.,Pattern Recognition Laboratory | Bauer S.,Pattern Recognition Laboratory | Hornegger J.,Pattern Recognition Laboratory | Hornegger J.,Friedrich - Alexander - University, Erlangen - Nuremberg
Proceedings of the IEEE International Conference on Computer Vision | Year: 2011

Recent advances in range imaging (RI) have enabled dense 3-D scene acquisition in real-time. However, due to physical limitations and the underlying range sampling principles, range data are subject to noise and may contain invalid measurements. Hence, data preprocessing is a prerequisite for practical applications but poses a challenge with respect to real-time constraints. In this paper, we propose a generic and modality-independent pipeline for efficient RI data preprocessing on the graphics processing unit (GPU). The contributions of this work are efficient GPU implementations of normalized convolution for the restoration of invalid measurements, bilateral temporal averaging for dynamic scenes, and guided filtering for edge-preserving denoising. Furthermore, we show that the transformation from range measurements to 3-D world coordinates can be computed efficiently on the GPU. The pipeline has been evaluated on real data from a Time-of-Flight sensor and Microsoft's Kinect. In a run-time performance study, we show that for VGA-resolution data, our preprocessing pipeline runs at 100 fps on an off-the-shelf consumer GPU. © 2011 IEEE.
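One stage of such a pipeline, normalized convolution for restoring invalid measurements, can be sketched on the CPU (the 3×3 box kernel and the loop-based filtering are simplifications for illustration; the paper's GPU implementation and kernel choice may differ):

```python
import numpy as np

def normalized_convolution_3x3(depth, valid):
    # Box-filter the masked data and the validity mask, then divide:
    # invalid pixels are filled from valid neighbors only, and valid
    # pixels are smoothed without bleeding in invalid readings.
    h, w = depth.shape
    num = np.zeros((h, w))
    den = np.zeros((h, w))
    dp = np.pad(depth * valid, 1)
    vp = np.pad(valid.astype(float), 1)
    for dy in range(3):
        for dx in range(3):
            num += dp[dy:dy + h, dx:dx + w]
            den += vp[dy:dy + h, dx:dx + w]
    return np.where(den > 0, num / np.maximum(den, 1e-12), 0.0)
```

Dividing the filtered data by the filtered mask is what distinguishes normalized convolution from plain smoothing: a pixel surrounded mostly by holes still receives a properly weighted estimate rather than a value dragged toward zero.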

Mayer M.A.,Pattern Recognition Laboratory | Hornegger J.,Pattern Recognition Laboratory | Mardin C.Y.,Universitatsklinikum | Tornow R.P.,Universitatsklinikum
Biomedical Optics Express | Year: 2010

Automated measurements of the retinal nerve fiber layer thickness on circular OCT B-Scans provide physicians additional parameters for glaucoma diagnosis. We propose a novel retinal nerve fiber layer segmentation algorithm for frequency domain data that can be applied on scans from both normal healthy subjects, as well as glaucoma patients, using the same set of parameters. In addition, the algorithm remains almost unaffected by image quality. The main part of the segmentation process is based on the minimization of an energy function consisting of gradient and local smoothing terms. A quantitative evaluation comparing the automated segmentation results to manually corrected segmentations from three reviewers is performed. A total of 72 scans from glaucoma patients and 132 scans from normal subjects, all from different persons, composed the database for the evaluation of the segmentation algorithm. A mean absolute error per A-Scan of 2.9 μm was achieved on glaucomatous eyes, and 3.6 μm on healthy eyes. The mean absolute segmentation error over all A-Scans lies below 10 μm on 95.1% of the images. Thus our approach provides a reliable tool for extracting diagnostic relevant parameters from OCT B-Scans for glaucoma diagnosis. © 2010 Optical Society of America.
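The gradient-plus-smoothness energy minimization described above can be illustrated with a toy dynamic-programming sketch (the paper's actual energy and optimization are more involved; the per-column cost image and the |Δy| penalty here are stand-ins for the gradient and local smoothing terms):

```python
import numpy as np

def segment_boundary(cost, smooth=1.0):
    # Dynamic programming over columns (A-scans): minimize
    #   sum_x cost[y_x, x] + smooth * |y_x - y_{x-1}|,
    # i.e. a per-pixel data term plus a local smoothness penalty.
    rows, cols = cost.shape
    acc = np.asarray(cost[:, 0], float).copy()
    back = np.zeros((rows, cols), dtype=int)
    y = np.arange(rows)
    for x in range(1, cols):
        # trans[i, j]: cost of reaching row i at column x from row j.
        trans = acc[None, :] + smooth * np.abs(y[:, None] - y[None, :])
        back[:, x] = np.argmin(trans, axis=1)
        acc = trans[y, back[:, x]] + cost[:, x]
    path = np.empty(cols, dtype=int)
    path[-1] = int(np.argmin(acc))
    for x in range(cols - 1, 0, -1):
        path[x - 1] = back[path[x], x]
    return path
```

The smoothness weight plays the role described in the abstract: it keeps the recovered layer boundary continuous even where the image gradient is weak or corrupted by speckle.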
