Tomei S., CNRS Research Center for Image Acquisition and Processing for Health | Reilhac A., CH Lyon Sud | Visvikis D., French Institute of Health and Medical Research | Boussion N., CERMEP | And 3 more authors.
IEEE Transactions on Nuclear Science | Year: 2010

The purpose of this paper is to generate and distribute a database of simulated whole-body 18F-FDG positron emission tomography (PET) oncology images. To our knowledge, this database is the first to address the need for simulated 18F-FDG PET oncology images by providing a series of realistic whole-body patient images with well-controlled inserted lesions of calibrated uptake. It also fulfills the requirements of detection performance studies by including normal and pathological cases. The originality of the database rests on three points. First, we built a complex model of the 18F-FDG patient based on the Zubal phantom, combined with activity distributions in the main organs of interest derived from a series of 70 clinical cases. Second, we proposed a model of lesion extent corresponding to real lymphoma patients; the lesion contrast levels were derived from a human observer detection study so as to cover the entire range of detectability. Third, the simulated database was generated with the PET-SORTEO Monte Carlo simulation tool, which was fully validated against the geometry of the ECAT EXACT HR+ (CTI/Siemens, Knoxville). The oncoPET_DB database is composed of 100 whole-body simulated PET images: 50 normal cases obtained from different noise realizations of the healthy model, and 50 pathological cases including lesions of calibrated uptake and various diameters. Such a database will be useful for evaluating algorithms that affect quantification or contrast recovery, for performing observer studies, and for assessing computer-aided diagnosis methods. Perspectives include enriching the present database with new pathological and normal cases accounting for interindividual variability of anatomy and FDG uptake. © 2010 IEEE.
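As an illustration of what "calibrated uptake" can mean in practice, here is a minimal sketch of inserting a lesion at a prescribed local contrast into an activity phantom. The contrast definition, function names, and numbers are assumptions for illustration, not details taken from the paper.

```python
# Hypothetical sketch: inserting a lesion of calibrated local contrast into
# an activity phantom. Contrast definition and all names are assumptions.
import numpy as np

def insert_lesion(activity, lesion_mask, contrast):
    """Set lesion voxels to bg * (1 + contrast), where bg is the mean
    background uptake inside the lesion region before insertion."""
    out = activity.copy()
    bg = activity[lesion_mask].mean()
    out[lesion_mask] = bg * (1.0 + contrast)
    return out

# Uniform 3D background with a 5-voxel-radius spherical lesion at contrast 0.5
phantom = np.full((64, 64, 64), 10.0)  # e.g., kBq/mL, purely illustrative
zz, yy, xx = np.ogrid[:64, :64, :64]
mask = (xx - 32) ** 2 + (yy - 32) ** 2 + (zz - 32) ** 2 <= 5 ** 2
with_lesion = insert_lesion(phantom, mask, contrast=0.5)
```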


Vunckx K., Catholic University of Leuven | Atre A., Catholic University of Leuven | Baete K., Catholic University of Leuven | Reilhac A., Biospective Inc. | And 4 more authors.
IEEE Transactions on Medical Imaging | Year: 2012

In emission tomography, image reconstruction, and therefore also tracer development and diagnosis, may benefit from the use of anatomical side information obtained with other imaging modalities in the same subject, as it helps to correct for the partial volume effect. One way to implement this is to use the anatomical image to define the a priori distribution in a maximum-a-posteriori (MAP) reconstruction algorithm. In this contribution, we use the PET-SORTEO Monte Carlo simulator to evaluate the quantitative accuracy reached by three different anatomical priors when reconstructing positron emission tomography (PET) brain images, using volumetric magnetic resonance imaging (MRI) to provide the anatomical information. The priors are: 1) a prior developed specifically for FDG PET brain imaging, which relies on a segmentation of the MR image (Baete, 2004); 2) the joint entropy prior (Nuyts, 2007); 3) a prior that encourages smoothness within a position-dependent neighborhood computed from the MR image, recently proposed by our group (Vunckx and Nuyts, 2010) and based on the prior presented by Bowsher (2004). The two latter priors do not rely on an explicit segmentation, which makes them more generally applicable than a segmentation-based prior. All three priors produced a compromise between noise and bias that was clearly better than that obtained with postsmoothed maximum-likelihood expectation maximization (MLEM) or MAP with a relative difference prior. The performance of the joint entropy prior was slightly worse than that of the other two priors, and the performance of the segmentation-based prior was quite sensitive to the accuracy of the segmentation. In contrast to the joint entropy prior, the Bowsher prior is easily tuned and does not suffer from convergence problems. © 2011 IEEE.
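A minimal sketch of the idea behind a Bowsher-style position-dependent neighborhood, shown here in 1D: each voxel is coupled only to the neighbors whose MR intensities are most similar to its own, and a smoothing penalty acts within that selected set. The neighborhood size, quadratic penalty form, and all names are illustrative assumptions, not the authors' exact formulation.

```python
# Illustrative 1D sketch of a Bowsher-style position-dependent neighborhood:
# couple each voxel only to the neighbors most similar in MR intensity.
import numpy as np

def bowsher_neighbors(mr, half_width=3, keep=2):
    """For each voxel j, keep the `keep` neighbors within +/- half_width
    whose MR intensity is closest to mr[j]."""
    n = len(mr)
    nbrs = []
    for j in range(n):
        cand = [k for k in range(max(0, j - half_width),
                                 min(n, j + half_width + 1)) if k != j]
        cand.sort(key=lambda k: abs(mr[k] - mr[j]))
        nbrs.append(cand[:keep])
    return nbrs

def penalty_gradient(x, nbrs):
    """Gradient of the prior 0.5 * sum_j sum_{k in N(j)} (x[j] - x[k])^2,
    which a MAP algorithm would add to the likelihood gradient."""
    g = np.zeros_like(x)
    for j, ks in enumerate(nbrs):
        for k in ks:
            g[j] += x[j] - x[k]
            g[k] += x[k] - x[j]
    return g
```

Because smoothing never crosses the selected neighborhoods, edges present in the MR image are preserved in the reconstruction without requiring an explicit segmentation.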


Nguyen T.M., University of Windsor | Nguyen T.M., Biospective Inc. | Wu Q.M.J., University of Windsor
IEEE Transactions on Medical Imaging | Year: 2016

The finite Gaussian mixture model with kernel correlation is a flexible tool that has recently received attention for point set registration. While many point set registration algorithms have been presented in the literature, an important issue arising from these studies concerns the mapping of data with nonlinear relationships and the ability to select a suitable kernel; kernel selection is crucial for effective point set registration. We focus here on multiple-kernel point set registration and make several contributions in this paper. First, each observation is modeled using the Student's t-distribution, which is heavy-tailed and more robust than the Gaussian distribution. Second, by automatically adjusting the kernel weights, the proposed method prunes ineffective kernels: after parameter learning, the saliencies of the irrelevant kernels go to zero. This makes the choice of kernels less critical and makes it easy to include other kinds of kernels. Finally, we show empirically that our model outperforms state-of-the-art methods recently proposed in the literature. © 2016 IEEE.
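The robustness argument for the Student's t-distribution can be made concrete: its log-density penalizes large residuals logarithmically rather than quadratically, so outliers exert bounded influence during parameter learning. A small sketch under that framing follows; the function names and the choice of 3 degrees of freedom are assumptions.

```python
# Sketch: Gaussian vs. Student's t log-densities for increasing residuals.
import numpy as np
from scipy.special import gammaln

def gauss_logpdf(r, sigma):
    return -0.5 * np.log(2 * np.pi * sigma ** 2) - 0.5 * (r / sigma) ** 2

def student_t_logpdf(r, nu, sigma):
    """Univariate Student's t with nu degrees of freedom and scale sigma."""
    z = (r / sigma) ** 2
    return (gammaln((nu + 1) / 2) - gammaln(nu / 2)
            - 0.5 * np.log(nu * np.pi * sigma ** 2)
            - (nu + 1) / 2 * np.log1p(z / nu))

r = np.array([0.1, 1.0, 10.0])          # the last residual is an outlier
print(gauss_logpdf(r, 1.0))             # outlier penalized quadratically
print(student_t_logpdf(r, 3.0, 1.0))    # outlier penalized only logarithmically
```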


Bellec P., University of Montreal | Lavoie-Courchesne S., University of Montreal | Lavoie-Courchesne S., Montreal Neurological Institute | Dickinson P., University of Montreal | And 5 more authors.
Frontiers in Neuroinformatics | Year: 2012

The analysis of neuroimaging databases typically involves a large number of interconnected steps called a pipeline. The pipeline system for Octave and Matlab (PSOM) is a flexible framework for the implementation of pipelines in the form of Octave or Matlab scripts. PSOM does not introduce new language constructs to specify the steps and structure of the workflow. All steps of analysis are instead described by a regular Matlab data structure, documenting their associated command and options, as well as their input, output, and cleaned-up files. The PSOM execution engine provides a number of automated services: (1) it executes jobs in parallel on a local computing facility as long as the dependencies between jobs allow for it and sufficient resources are available; (2) it generates a comprehensive record of the pipeline stages and the history of execution, detailed enough to fully reproduce the analysis; (3) if an analysis is started multiple times, it executes only the parts of the pipeline that need to be reprocessed. PSOM is distributed under an open-source MIT license and can be used without restriction for academic or commercial projects. The package has no external dependencies besides Matlab or Octave, is straightforward to install, and supports a variety of operating systems (Linux, Windows, Mac). We ran several benchmark experiments on a public database of 200 subjects, using a pipeline for the preprocessing of functional magnetic resonance images. The benchmark results showed that PSOM is a powerful solution for the analysis of large databases using local or distributed computing resources. © 2012 Bellec, Lavoie-Courchesne, Dickinson, Lerch, Zijdenbos and Evans.
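A minimal Python sketch of the execution model described above (PSOM itself is Matlab/Octave, and this is not its API): jobs declared as plain data with their command and input/output files, dependencies inferred from those files, and a job re-executed only when its outputs are missing or stale. The job names, commands, and file names are hypothetical.

```python
# Not PSOM's API: a minimal sketch of the same execution model.
import os
import subprocess

pipeline = {  # hypothetical job names, commands, and files
    "motion_correct": {"command": "mc in.nii mc.nii",
                       "files_in": ["in.nii"], "files_out": ["mc.nii"]},
    "smooth": {"command": "smooth mc.nii sm.nii",
               "files_in": ["mc.nii"], "files_out": ["sm.nii"]},
}

def stale(job):
    """True if any output is missing or older than the newest input."""
    if not all(os.path.exists(f) for f in job["files_out"]):
        return True
    newest_in = max(os.path.getmtime(f) for f in job["files_in"])
    return any(os.path.getmtime(f) < newest_in for f in job["files_out"])

def run(pipeline):
    done = set()
    while len(done) < len(pipeline):
        progressed = False
        # outputs still owed by unfinished jobs block the jobs that need them
        pending = {f for n, j in pipeline.items() if n not in done
                   for f in j["files_out"]}
        for name, job in pipeline.items():
            if name in done or pending.intersection(job["files_in"]):
                continue
            if stale(job):
                subprocess.run(job["command"].split(), check=True)
            done.add(name)
            progressed = True
        if not progressed:
            raise RuntimeError("circular or unsatisfiable dependencies")
```

Restarting `run` after an interruption re-executes only jobs whose outputs are missing or out of date, which mirrors the restart behavior the abstract describes.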


Vrenken H., VU University Amsterdam | Jenkinson M., University of Oxford | Horsfield M.A., Royal Infirmary | Battaglini M., University of Siena | And 13 more authors.
Journal of Neurology | Year: 2013

Focal lesions and brain atrophy are the most extensively studied aspects of multiple sclerosis (MS), but the image acquisition and analysis techniques used can be further improved, especially those for studying within-patient changes of lesion load and atrophy longitudinally. Improved accuracy and sensitivity will reduce the number of patients required to detect a given treatment effect in a trial and, ultimately, will allow reliable characterization of individual patients for personalized treatment. Based on open issues in the field of MS research and the current state of the art in magnetic resonance image analysis methods for assessing brain lesion load and atrophy, this paper makes recommendations to improve these measures for longitudinal studies of MS. Briefly: (1) images should be acquired using 3D pulse sequences, with near-isotropic spatial resolution and multiple image contrasts, to allow more comprehensive analyses of lesion load and atrophy across timepoints; image artifacts need special attention, given their effects on image analysis results; (2) automated image segmentation methods integrating the assessment of lesion load and atrophy are desirable; (3) a standard dataset with benchmark results should be set up to facilitate development, calibration, and objective evaluation of image analysis methods for MS. © 2012 The Author(s).
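The claim that better accuracy and sensitivity shrink the required trial follows from the standard two-sample power calculation, n = 2(z_{1-α/2} + z_{1-β})² σ²/Δ² per arm: measurement error inflates σ², and n grows with it. A back-of-envelope sketch, with purely illustrative effect-size and variance numbers:

```python
# Sketch of the sample-size argument: per-arm
# n = 2 * (z_{1-alpha/2} + z_{1-beta})^2 * sd^2 / effect^2.
from scipy.stats import norm

def n_per_arm(effect, sd, alpha=0.05, power=0.80):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * (z * sd / effect) ** 2

# Same treatment effect, measured with less vs. more noise:
print(n_per_arm(effect=0.2, sd=0.5))  # ~98  -> round up to 99 per arm
print(n_per_arm(effect=0.2, sd=0.7))  # ~192 -> a much larger trial
```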
