CNRS Computer Science Laboratory

Paris, France

Eskildsen S.F.,University of Aarhus | Eskildsen S.F.,Montreal Neurological Institute | Coupe P.,Montreal Neurological Institute | Coupe P.,CNRS Computer Science Laboratory | And 5 more authors.
NeuroImage | Year: 2013

Predicting Alzheimer's disease (AD) in individuals with some symptoms of cognitive decline may have great influence on treatment choice and disease progression. Structural magnetic resonance imaging (MRI) has the potential of revealing early signs of neurodegeneration in the human brain and may thus aid in predicting and diagnosing AD. Surface-based cortical thickness measurements from T1-weighted MRI have demonstrated high sensitivity to cortical gray matter changes. In this study we investigated the possibility of using patterns of cortical thickness measurements for predicting AD in subjects with mild cognitive impairment (MCI). We used a novel technique for identifying cortical regions potentially discriminative for separating individuals with MCI who progress to probable AD from individuals with MCI who do not. Specific patterns of atrophy were identified at four time periods before diagnosis of probable AD, and features were selected as regions of interest within these patterns. The selected regions were used for cortical thickness measurements and applied in a classifier for testing the ability to predict AD at the four stages. In the validation, the test subjects were excluded from the feature selection to obtain unbiased results. The accuracy of the prediction improved as the time to conversion from MCI to AD decreased, from 70% at 3 years before the clinical criteria for AD were met, to 76% at 6 months before AD. By inclusion of test subjects in the feature selection process, the prediction accuracies were artificially inflated to a range of 73% to 81%. Two important results emerge from this study. First, prediction accuracies of conversion from MCI to AD can be improved by learning the atrophy patterns that are specific to the different stages of disease progression. This has the potential to guide the further development of imaging biomarkers in AD. 
Second, the results show that one needs to be careful when designing training, testing and validation schemes to ensure that datasets used to build the predictive models are not used in testing and validation. © 2012 Elsevier Inc.
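The data-leakage pitfall described above can be sketched in a few lines. The subjects, labels, and two-feature budget below are all hypothetical, and the selection criterion (absolute difference of class means) is only a stand-in for the one used in the study; the point is that the held-out subject never enters feature selection:

```python
import random

random.seed(0)

# Hypothetical toy data: 20 subjects, 5 features each, binary outcome
# (progressor vs. stable). Names and sizes are illustrative only.
subjects = [[random.random() for _ in range(5)] for _ in range(20)]
labels = [i % 2 for i in range(20)]

def select_features(train_idx):
    """Rank features using ONLY the training subjects (here: by the
    absolute difference of class means, a stand-in for any criterion)."""
    scores = []
    for f in range(5):
        m0 = [subjects[i][f] for i in train_idx if labels[i] == 0]
        m1 = [subjects[i][f] for i in train_idx if labels[i] == 1]
        scores.append(abs(sum(m0) / len(m0) - sum(m1) / len(m1)))
    return sorted(range(5), key=lambda f: -scores[f])[:2]

# Leave-one-out: the held-out subject never enters feature selection,
# avoiding the optimistic bias described above.
for test in range(len(subjects)):
    train_idx = [i for i in range(len(subjects)) if i != test]
    chosen = select_features(train_idx)
    assert test not in train_idx  # held-out subject excluded
```

Selecting features on the full dataset and only then cross-validating the classifier would reuse the test subjects' data, which is exactly the inflation the study quantifies.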


Aubert-Broche B.,Montreal Neurological Institute | Fonov V.S.,Montreal Neurological Institute | Garcia-Lorenzo D.,Montreal Neurological Institute | Garcia-Lorenzo D.,French Institute of Health and Medical Research | And 7 more authors.
NeuroImage | Year: 2013

Cross-sectional analysis of longitudinal anatomical magnetic resonance imaging (MRI) data may be suboptimal as each dataset is analyzed independently. In this study, we evaluate how much variability can be reduced by analyzing structural volume changes in longitudinal data using longitudinal analysis. We propose a two-part pipeline that consists of longitudinal registration and longitudinal classification. The longitudinal registration step includes the creation of subject-specific linear and nonlinear templates that are then registered to a population template. The longitudinal classification step comprises a four-dimensional expectation-maximization algorithm, using a priori classes computed by averaging the tissue classes of all time points obtained cross-sectionally. To study the impact of these two steps, we apply the framework completely ("LL method": Longitudinal registration and Longitudinal classification) and partially ("LC method": Longitudinal registration and Cross-sectional classification) and compare these with a standard cross-sectional framework ("CC method": Cross-sectional registration and Cross-sectional classification). The three methods are applied to (1) a scan-rescan database to analyze reliability and (2) the NIH pediatric population to compare gray matter growth trajectories evaluated with a linear mixed model. The LL method, and the LC method to a lesser extent, significantly reduced the variability in the measurements in the scan-rescan study and gave the best-fitted gray matter growth model with the NIH pediatric MRI database. The results confirm that both steps of the longitudinal framework reduce variability and improve accuracy in comparison with the cross-sectional framework, with longitudinal classification yielding the greatest impact. Using the improved method to analyze longitudinal data, we study the growth trajectories of anatomical brain structures in childhood using the NIH pediatric MRI database. 
We report age- and gender-related growth trajectories of specific regions of the brain during childhood that could be used as a reference in studying the impact of neurological disorders on brain development. © 2013 Elsevier Inc.
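The a priori class computation described above (averaging the cross-sectional tissue classifications over all time points) can be sketched as follows. The three tissue classes and the four-voxel "image" are made up for illustration; a real pipeline operates on full probability volumes:

```python
# Hypothetical 4-voxel "image" with per-time-point tissue probabilities
# (gm, wm, csf); the values are invented for illustration.
timepoints = [
    [(0.7, 0.2, 0.1), (0.1, 0.8, 0.1), (0.2, 0.2, 0.6), (0.5, 0.4, 0.1)],
    [(0.6, 0.3, 0.1), (0.2, 0.7, 0.1), (0.1, 0.3, 0.6), (0.5, 0.3, 0.2)],
    [(0.8, 0.1, 0.1), (0.1, 0.8, 0.1), (0.2, 0.1, 0.7), (0.4, 0.4, 0.2)],
]

def average_priors(tps):
    """Voxelwise average of the cross-sectional tissue classifications,
    used as the a priori classes for the 4D EM step."""
    n = len(tps)
    voxels = len(tps[0])
    priors = []
    for v in range(voxels):
        priors.append(tuple(sum(tp[v][c] for tp in tps) / n
                            for c in range(3)))
    return priors

priors = average_priors(timepoints)
```

Averaging keeps the priors consistent across the subject's time points, which is what lets the 4D EM classification reduce the scan-to-scan variability reported above.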


Balister P.,University of Memphis | Li H.,CNRS Computer Science Laboratory | Schelp R.,University of Memphis
Journal of Combinatorial Theory. Series B | Year: 2017

We show that if G is a graph on at least 3r+4s vertices with minimum degree at least 2r+3s, then G contains r+s vertex-disjoint cycles, where s of these cycles each either contain two chords or have order 4 and contain one chord. © 2017 Elsevier Inc.
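A minimal check of the theorem's hypothesis, assuming a graph stored as an adjacency dict. The theorem itself guarantees the cycles; the code only verifies the order and minimum-degree conditions:

```python
def satisfies_hypothesis(adj, r, s):
    """Check |V(G)| >= 3r + 4s and minimum degree >= 2r + 3s for a
    graph given as an adjacency dict; the theorem then guarantees the
    r + s vertex-disjoint cycles described above."""
    n = len(adj)
    min_deg = min(len(nbrs) for nbrs in adj.values())
    return n >= 3 * r + 4 * s and min_deg >= 2 * r + 3 * s

# Example: K7 (complete graph on 7 vertices) with r = 1, s = 1:
# 7 >= 3 + 4 and every degree is 6 >= 2 + 3.
k7 = {v: [u for u in range(7) if u != v] for v in range(7)}
assert satisfies_hypothesis(k7, 1, 1)
```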


Eskildsen S.F.,University of Aarhus | Coupe P.,CNRS Computer Science Laboratory | Fonov V.S.,Montreal Neurological Institute | Pruessner J.C.,McGill University | Collins D.L.,Montreal Neurological Institute
Neurobiology of Aging | Year: 2015

Optimized magnetic resonance imaging (MRI)-based biomarkers of Alzheimer's disease (AD) may allow earlier detection and refined prediction of the disease. In addition, they could serve as valuable tools when designing therapeutic studies of individuals at risk of AD. In this study, we combine (1) a novel method for grading medial temporal lobe structures with (2) robust cortical thickness measurements to predict AD among subjects with mild cognitive impairment (MCI) from a single T1-weighted MRI scan. Using AD and cognitively normal individuals, we generate a set of features potentially discriminating between MCI subjects who convert to AD and those who remain stable over a period of 3 years. Using mutual information-based feature selection, we identify 5 key features optimizing the classification of MCI converters. These features are the gradings of the left and right hippocampi and the cortical thicknesses of the left precuneus, the left superior temporal sulcus, and the anterior part of the right parahippocampal gyrus. We show that these features are highly stable in cross-validation and enable a prediction accuracy of 72% using a simple linear discriminant classifier, the highest prediction accuracy obtained on the baseline Alzheimer's Disease Neuroimaging Initiative first phase cohort to date. The proposed structural features are consistent with Braak stages and previously reported atrophic patterns in AD and are easy to transfer to new cohorts and to clinical practice. © 2015 Elsevier Inc.
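Mutual information-based feature selection ranks features by I(F; L), the information a feature carries about the label. A minimal sketch for binary features and labels with invented data follows; this is a generic plug-in estimator, not the exact pipeline of the study:

```python
from math import log2

def mutual_information(feature, label):
    """I(F; L) in bits for two binary sequences, estimated from
    empirical joint and marginal frequencies."""
    n = len(feature)
    mi = 0.0
    for f in (0, 1):
        for l in (0, 1):
            p_fl = sum(1 for x, y in zip(feature, label)
                       if x == f and y == l) / n
            p_f = sum(1 for x in feature if x == f) / n
            p_l = sum(1 for y in label if y == l) / n
            if p_fl > 0:
                mi += p_fl * log2(p_fl / (p_f * p_l))
    return mi

# A feature identical to the label carries maximal information (1 bit);
# a constant feature carries none.
labels  = [0, 0, 1, 1, 0, 1, 0, 1]
perfect = labels[:]      # fully informative
useless = [0] * 8        # uninformative
assert mutual_information(perfect, labels) == 1.0
assert mutual_information(useless, labels) == 0.0
```

Ranking candidate features by this score and keeping the top few is the basic mechanism behind selecting the 5 key features mentioned above.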


Coupe P.,CNRS Computer Science Laboratory | Manjon J.V.,Polytechnic University of Valencia | Chamberland M.,Université de Sherbrooke | Descoteaux M.,Université de Sherbrooke | Hiba B.,University of Bordeaux Segalen
NeuroImage | Year: 2013

In this paper, a new single image acquisition super-resolution method is proposed to increase the image resolution of diffusion weighted (DW) images. Based on a nonlocal patch-based strategy, the proposed method uses a non-diffusion image (b0) to constrain the reconstruction of DW images. An extensive validation is presented with a gold standard built on averaging 10 high-resolution DW acquisitions. A comparison with classical interpolation methods such as trilinear and B-spline interpolation demonstrates the competitive results of our proposed approach in terms of improvements on image reconstruction, fractional anisotropy (FA) estimation, generalized FA and angular reconstruction for tensor and high angular resolution diffusion imaging (HARDI) models. In addition, first results of reconstructed ultra-high-resolution DW images are presented at 0.6×0.6×0.6 mm³ and 0.4×0.4×0.4 mm³, using our gold standard based on the average of 10 acquisitions, and on a single acquisition. Finally, fiber tracking results show the potential of the proposed super-resolution approach to accurately analyze white matter brain architecture. © 2013 Elsevier Inc.
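The trilinear baseline the method is compared against can be sketched directly. The 2×2×2 volume below is illustrative, and the function assumes interior fractional coordinates (each coordinate in [0, size−1)):

```python
def trilinear(vol, x, y, z):
    """Trilinear interpolation of a 3D volume (nested lists) at a
    fractional coordinate: a weighted average of the 8 surrounding
    voxels. This is the classical baseline, not the paper's
    patch-based reconstruction."""
    x0, y0, z0 = int(x), int(y), int(z)
    dx, dy, dz = x - x0, y - y0, z - z0
    val = 0.0
    for i in (0, 1):
        for j in (0, 1):
            for k in (0, 1):
                w = ((dx if i else 1 - dx) *
                     (dy if j else 1 - dy) *
                     (dz if k else 1 - dz))
                val += w * vol[x0 + i][y0 + j][z0 + k]
    return val

# A 2x2x2 volume: interpolating at the centre averages all 8 voxels.
vol = [[[0, 1], [2, 3]], [[4, 5], [6, 7]]]
assert trilinear(vol, 0.5, 0.5, 0.5) == 3.5
```

Interpolation of this kind can only blur the existing samples, which is why a reconstruction constrained by the b0 image can outperform it.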


Poizat P.,University of Évry Val d'Essonne | Poizat P.,CNRS Computer Science Laboratory | Salaun G.,Grenoble Institute of Technology
Proceedings of the ACM Symposium on Applied Computing | Year: 2012

Choreographies allow business and service architects to specify with a global perspective the requirements of applications built over distributed and interacting software entities. While being a standard for the abstract specification of business workflows and collaboration between services, the Business Process Modeling Notation (BPMN) has only been recently extended into BPMN 2.0 to support an interaction model of choreography, which, as opposed to interconnected interface models, is better suited to top-down development processes. An important issue with choreographies is realizability, i.e., whether peers obtained via projection from a choreography interact as prescribed in the choreography requirements. In this work, we propose a realizability checking approach for BPMN 2.0 choreographies. Our approach is formally grounded on a model transformation into the LOTOS NT process algebra and the use of equivalence checking. It is also completely tool-supported through interaction with the Eclipse BPMN 2.0 editor and the CADP process algebraic toolbox. © 2012 ACM.
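The essence of the realizability problem can be illustrated with projections. The peers and messages below are hypothetical, and the sketch only shows why local views can fail to capture a global ordering; it is not the paper's LOTOS NT encoding or equivalence check:

```python
def project(choreography, peer):
    """Local view of one peer: the send/receive events it takes part
    in. A choreography is a list of (sender, receiver, message)
    interactions in their prescribed global order."""
    events = []
    for snd, rcv, msg in choreography:
        if peer == snd:
            events.append(("!", msg, rcv))   # send event
        elif peer == rcv:
            events.append(("?", msg, snd))   # receive event
    return events

# Classic non-realizable pattern: the global order of two interactions
# between disjoint peer pairs cannot be observed by any single peer.
c1 = [("a", "b", "m"), ("c", "d", "n")]
c2 = [("c", "d", "n"), ("a", "b", "m")]
peers = ("a", "b", "c", "d")
assert all(project(c1, p) == project(c2, p) for p in peers)
```

Since the two distinct choreographies have identical projections, no composition of the projected peers can enforce one ordering over the other, which is the kind of mismatch the equivalence check detects.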


Boubchir L.,Qatar University | Boubchir L.,CNRS Computer Science Laboratory | Boashash B.,Qatar University | Boashash B.,University of Queensland
IEEE Transactions on Signal Processing | Year: 2013

This paper presents a novel nonparametric Bayesian estimator for signal and image denoising in the wavelet domain. This approach uses a prior model of the wavelet coefficients designed to capture the sparseness of the wavelet expansion. A new family of Bessel K Form (BKF) densities is designed to fit the observed histograms, so as to provide a probabilistic model for the marginal densities of the wavelet coefficients. This paper first shows how the BKF prior can characterize images belonging to Besov spaces. Then, a new hyper-parameter estimator based on the EM algorithm is designed to estimate the parameters of the BKF density, and it is compared with a cumulants-based estimator. Exploiting this prior model, another novel contribution is to design a Bayesian denoiser based on the Maximum A Posteriori (MAP) estimation under the 0-1 loss function, for which we formally establish the mathematical properties and derive a closed-form expression. Finally, a comparative study on a digitized database of natural images and biomedical signals shows the effectiveness of this new Bayesian denoiser compared to other classical and Bayesian denoising approaches. Results on biomedical data illustrate the method in the temporal as well as the time-frequency domain. © 1991-2012 IEEE.
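Wavelet-domain denoising in general follows a transform, shrink, invert pattern. The sketch below uses a one-level Haar transform with soft thresholding as a simplified stand-in; it is not the paper's BKF-based MAP estimator, whose closed-form shrinkage rule is derived from the prior:

```python
def haar_forward(signal):
    """One level of the Haar wavelet transform (even-length input):
    pairwise averages (approximation) and differences (detail)."""
    half = len(signal) // 2
    approx = [(signal[2*i] + signal[2*i+1]) / 2 for i in range(half)]
    detail = [(signal[2*i] - signal[2*i+1]) / 2 for i in range(half)]
    return approx, detail

def haar_inverse(approx, detail):
    out = []
    for a, d in zip(approx, detail):
        out += [a + d, a - d]
    return out

def soft(x, t):
    """Soft thresholding: shrink small, noise-dominated coefficients
    toward zero, exploiting the sparseness of the expansion."""
    return 0.0 if abs(x) <= t else (x - t if x > 0 else x + t)

def denoise(signal, t):
    approx, detail = haar_forward(signal)
    return haar_inverse(approx, [soft(d, t) for d in detail])

noisy = [1.0, 1.1, 0.9, 1.0, 5.0, 5.1, 4.9, 5.0]
clean = denoise(noisy, 0.2)
```

A Bayesian denoiser replaces the fixed threshold with a shrinkage rule derived from the coefficient prior (here, the BKF density), which adapts the amount of shrinkage to the observed statistics.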


Manjon J.V.,Polytechnic University of Valencia | Coupe P.,CNRS Computer Science Laboratory
Frontiers in Neuroinformatics | Year: 2016

The amount of medical image data produced in clinical and research settings is growing rapidly, resulting in vast amounts of data to analyze. Automatic and reliable quantitative analysis tools, including segmentation, make it possible to analyze brain development and to understand specific patterns of many neurological diseases. This field has recently experienced many advances with successful techniques based on non-linear warping and label fusion. In this work we present a novel and fully automatic pipeline for volumetric brain analysis based on multi-atlas label fusion technology that is able to provide accurate volumetric information at different levels of detail in a short time. This method is available through the volBrain online web interface (http://volbrain.upv.es), which is publicly and freely accessible to the scientific community. Our new framework has been compared with current state-of-the-art methods showing very competitive results. © 2016 Manjón and Coupé.
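Multi-atlas label fusion can be sketched at its simplest as a voxelwise majority vote over labels propagated from several registered atlases. The atlases and label codes below are hypothetical, and volBrain's actual fusion is more sophisticated (e.g. non-local and weighted):

```python
from collections import Counter

def majority_vote_fusion(atlas_labels):
    """Voxelwise majority vote over the labels propagated from several
    atlases; a simplified stand-in for weighted label fusion."""
    fused = []
    for votes in zip(*atlas_labels):
        fused.append(Counter(votes).most_common(1)[0][0])
    return fused

# Three hypothetical atlases labelling the same 5 voxels
# (0 = background, 1 = hippocampus, 2 = ventricle).
atlases = [
    [0, 1, 1, 2, 2],
    [0, 1, 2, 2, 2],
    [0, 1, 1, 2, 0],
]
assert majority_vote_fusion(atlases) == [0, 1, 1, 2, 2]
```

Fusing several atlases averages out the registration errors of any single atlas, which is the key reason multi-atlas pipelines outperform single-template segmentation.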


This paper deals with noise propagation from camera sensor to displacement and strain maps when the grid method is employed to estimate these quantities. It is shown that closed-form equations can be employed to predict the link between metrological characteristics such as resolution and spatial resolution in displacement and strain maps on the one hand and various quantities characterising grid images such as brightness, contrast and standard deviation of noise on the other hand. Various numerical simulations confirm first the relevance of this approach in the case of an idealised camera sensor impaired by a homoscedastic Gaussian white noise. Actual CCD or CMOS sensors exhibit, however, a heteroscedastic noise. A pre-processing step is therefore proposed to first stabilise noise variance prior to employing the predictive equations, which provide the resolution in strain and displacement maps due to sensor noise. This step is based on both a modelling of sensor noise and the use of the generalised Anscombe transform to stabilise noise variance. Applying this procedure in the case of a translation test confirms that it is possible to model correctly noise propagation from sensor to displacement and strain maps, and thus also to predict the actual link between resolution, spatial resolution and standard deviation of noise in grid images. © 2013 Wiley Publishing Ltd.
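The variance-stabilisation step can be sketched with the generalized Anscombe transform. The formula below assumes a zero sensor offset, with the gain and Gaussian noise level taken as known from the sensor noise model; the full transform in the literature also subtracts a gain-times-offset term:

```python
from math import sqrt

def generalized_anscombe(x, gain=1.0, sigma=0.0):
    """Generalized Anscombe transform: maps Poisson-Gaussian (hence
    heteroscedastic) intensities to approximately unit-variance data.
    gain and sigma are the sensor gain and Gaussian noise standard
    deviation; with gain=1 and sigma=0 this reduces to the classical
    Anscombe transform 2*sqrt(x + 3/8). Zero sensor offset is assumed."""
    return (2.0 / gain) * sqrt(gain * x + 0.375 * gain**2 + sigma**2)

# Classical Anscombe value at zero intensity: 2*sqrt(3/8).
assert abs(generalized_anscombe(0.0) - 2.0 * sqrt(0.375)) < 1e-12
```

After this transform the noise is approximately homoscedastic, so the closed-form predictive equations derived for Gaussian white noise apply to the stabilised grid images.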


Teytaud F.,CNRS Computer Science Laboratory | Teytaud O.,CNRS Computer Science Laboratory
2011 IEEE Conference on Computational Intelligence and Games, CIG 2011 | Year: 2011

Solving games is standard in the fully observable case. The partially observable case is much more difficult; whenever the number of strategies is finite (which is not necessarily the case, even when the state space is finite), the main tool for exact solving is the construction of the full matrix game and its solution by linear programming. We here propose tools for approximating the value of partially observable games. The lemmas are relatively general, and we apply them to derive rigorous bounds on the Nash equilibrium of phantom tic-tac-toe and phantom Go. © 2011 IEEE.
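Exact solving by linear programming is the method mentioned above; as a lightweight alternative sketch, fictitious play approximates the value of a small zero-sum matrix game. The example game is matching pennies, whose value is 0; this is a classical approximation scheme, not the paper's bounding technique:

```python
def fictitious_play(payoff, iterations=2000):
    """Approximate the value of a zero-sum matrix game by fictitious
    play: each player repeatedly best-responds to the opponent's
    empirical mixture of past actions (Robinson's classical scheme)."""
    rows, cols = len(payoff), len(payoff[0])
    row_counts = [0] * rows
    col_counts = [0] * cols
    row_counts[0] = col_counts[0] = 1
    for _ in range(iterations):
        # Row player maximises expected payoff against column history;
        # column player minimises it against row history.
        r = max(range(rows), key=lambda i: sum(payoff[i][j] * col_counts[j]
                                               for j in range(cols)))
        c = min(range(cols), key=lambda j: sum(payoff[i][j] * row_counts[i]
                                               for i in range(rows)))
        row_counts[r] += 1
        col_counts[c] += 1
    # Expected payoff under the product of empirical mixtures.
    n = sum(row_counts)
    return sum(payoff[i][j] * row_counts[i] * col_counts[j]
               for i in range(rows) for j in range(cols)) / (n * n)

# Matching pennies: value 0; the estimate converges towards it.
value = fictitious_play([[1, -1], [-1, 1]])
```

For partially observable games the matrix rows and columns range over information-set strategies, which is what makes the exact construction expensive and approximation attractive.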
