CNRS Computer Science Laboratory

Paris, France

Eskildsen S.F.,University of Aarhus | Coupe P.,CNRS Computer Science Laboratory | Fonov V.S.,Montreal Neurological Institute | Pruessner J.C.,McGill University | Collins D.L.,Montreal Neurological Institute
Neurobiology of Aging | Year: 2015

Optimized magnetic resonance imaging (MRI)-based biomarkers of Alzheimer's disease (AD) may allow earlier detection and refined prediction of the disease. In addition, they could serve as valuable tools when designing therapeutic studies of individuals at risk of AD. In this study, we combine (1) a novel method for grading medial temporal lobe structures with (2) robust cortical thickness measurements to predict AD among subjects with mild cognitive impairment (MCI) from a single T1-weighted MRI scan. Using AD and cognitively normal individuals, we generate a set of features potentially discriminating between MCI subjects who convert to AD and those who remain stable over a period of 3 years. Using mutual information-based feature selection, we identify 5 key features optimizing the classification of MCI converters. These features are the left and right hippocampi gradings and cortical thicknesses of the left precuneus, left superior temporal sulcus, and right anterior part of the parahippocampal gyrus. We show that these features are highly stable in cross-validation and enable a prediction accuracy of 72% using a simple linear discriminant classifier, the highest prediction accuracy obtained on the baseline Alzheimer's Disease Neuroimaging Initiative first phase cohort to date. The proposed structural features are consistent with Braak stages and previously reported atrophic patterns in AD and are easy to transfer to new cohorts and to clinical practice. © 2015 Elsevier Inc.
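The two ingredients highlighted above, mutual-information-based feature selection and a simple linear discriminant classifier, can be sketched on synthetic data. Everything below (the median binarization, the regularisation constant, the toy data) is an illustrative assumption, not the paper's actual pipeline:

```python
import numpy as np

def mutual_information(x_binary, y):
    """Empirical mutual information (in nats) between two binary arrays."""
    mi = 0.0
    for xv in (0, 1):
        for yv in (0, 1):
            p_xy = np.mean((x_binary == xv) & (y == yv))
            p_x = np.mean(x_binary == xv)
            p_y = np.mean(y == yv)
            if p_xy > 0:
                mi += p_xy * np.log(p_xy / (p_x * p_y))
    return mi

def select_features(X, y, k):
    """Rank features by mutual information with the labels after a median
    binarization, and keep the k best."""
    scores = [mutual_information((X[:, j] > np.median(X[:, j])).astype(int), y)
              for j in range(X.shape[1])]
    return np.argsort(scores)[::-1][:k]

def fisher_lda(X, y):
    """Two-class Fisher linear discriminant: weight vector and threshold."""
    m0, m1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    Sw = np.cov(X[y == 0].T) + np.cov(X[y == 1].T)  # within-class scatter
    w = np.linalg.solve(Sw + 1e-6 * np.eye(X.shape[1]), m1 - m0)
    return w, w @ ((m0 + m1) / 2)
```

A sample is assigned to the positive class when its projection `X @ w` exceeds the threshold; cross-validating the selected feature indices, as the paper does, would reveal how stable the ranking is.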

Coupe P.,CNRS Computer Science Laboratory | Manjon J.V.,Polytechnic University of Valencia | Chamberland M.,Université de Sherbrooke | Descoteaux M.,Université de Sherbrooke | Hiba B.,University of Bordeaux Segalen
NeuroImage | Year: 2013

In this paper, a new single-acquisition super-resolution method is proposed to increase the resolution of diffusion-weighted (DW) images. Based on a nonlocal patch-based strategy, the proposed method uses a non-diffusion image (b0) to constrain the reconstruction of the DW images. An extensive validation is presented with a gold standard built by averaging 10 high-resolution DW acquisitions. A comparison with classical interpolation methods such as trilinear and B-spline interpolation demonstrates the competitive results of the proposed approach in terms of image reconstruction, fractional anisotropy (FA) estimation, generalized FA and angular reconstruction for tensor and high angular resolution diffusion imaging (HARDI) models. In addition, first results on reconstructed ultra-high-resolution DW images at 0.6×0.6×0.6 mm³ and 0.4×0.4×0.4 mm³ are presented, using the gold standard based on the average of 10 acquisitions, and on a single acquisition. Finally, fiber tracking results show the potential of the proposed super-resolution approach to accurately analyze white matter brain architecture. © 2013 Elsevier Inc.
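The nonlocal patch-based strategy underlying the method can be illustrated in its simplest one-dimensional, denoising form: each sample is replaced by a weighted average of samples whose surrounding patches look similar. This is a generic sketch of the principle, not the paper's b0-constrained reconstruction:

```python
import numpy as np

def nonlocal_means_1d(signal, patch_radius=2, h=0.5):
    """Denoise a 1D signal by averaging samples whose surrounding patches
    are similar (the nonlocal patch-based principle)."""
    n = len(signal)
    pad = np.pad(signal, patch_radius, mode="reflect")
    # Row i holds the patch centred on sample i.
    patches = np.array([pad[i:i + 2 * patch_radius + 1] for i in range(n)])
    out = np.empty(n)
    for i in range(n):
        d2 = np.sum((patches - patches[i]) ** 2, axis=1)
        w = np.exp(-d2 / (h ** 2))  # similar patches get large weights
        out[i] = np.sum(w * signal) / np.sum(w)
    return out
```

In the super-resolution setting the same patch-similarity weights are computed on a reference image and used to constrain the reconstruction of the low-resolution data.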

Poizat P.,University of Évry Val d'Essonne | Poizat P.,CNRS Computer Science Laboratory | Salaun G.,Grenoble Institute of Technology
Proceedings of the ACM Symposium on Applied Computing | Year: 2012

Choreographies allow business and service architects to specify, from a global perspective, the requirements of applications built over distributed and interacting software entities. While being a standard for the abstract specification of business workflows and collaboration between services, the Business Process Modeling Notation (BPMN) has only recently been extended into BPMN 2.0 to support an interaction model of choreography which, as opposed to interconnected interface models, is better suited to top-down development processes. An important issue with choreographies is realizability, i.e., whether peers obtained via projection from a choreography interact as prescribed in the choreography requirements. In this work, we propose a realizability checking approach for BPMN 2.0 choreographies. Our approach is formally grounded on a model transformation into the LOTOS NT process algebra and the use of equivalence checking. It is also completely tool-supported through interaction with the Eclipse BPMN 2.0 editor and the CADP process algebraic toolbox. © 2012 ACM.
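A drastically simplified picture of the realizability question: compare the behaviours allowed by the choreography with those of the composition of projected peers. The sketch below uses finite-depth trace equality on toy labelled transition systems; the actual approach relies on equivalence checking with CADP over LOTOS NT models, which is strictly stronger:

```python
def traces(lts, start, depth):
    """All action sequences of length <= depth from `start` in a labelled
    transition system given as {state: [(action, next_state), ...]}."""
    result = {()}
    frontier = {((), start)}
    for _ in range(depth):
        nxt = set()
        for tr, s in frontier:
            for a, t in lts.get(s, []):
                nxt.add((tr + (a,), t))
        result |= {tr for tr, _ in nxt}
        frontier = nxt
    return result

def realizable_up_to(choreography, composition, start, depth):
    """Finite-depth trace comparison: a cheap stand-in for a full
    equivalence check between choreography and distributed composition."""
    return traces(choreography, start, depth) == traces(composition, start, depth)
```

A classic non-realizable case: a choreography prescribing interaction `a` before `b` between disjoint pairs of peers, whose projection loses the ordering and also allows `b` before `a`.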

Poizat P.,University of Évry Val d'Essonne | Poizat P.,CNRS Computer Science Laboratory | Yan Y.,Concordia University at Montréal
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2010

Service-Oriented Computing supports description, publication, discovery and composition of services to fulfil end-user needs. Yet, service composition processes commonly assume that service descriptions and user needs share the same abstraction level, and that services have been pre-designed to integrate. To release these strong assumptions and to augment the possibilities of composition, we add adaptation features into the service composition process using semantic structures for exchanged data, for service functionalities, and for user needs. Graph planning encodings enable us to retrieve service compositions efficiently. Our composition technique supports conversations for both services and user needs, and it is fully automated thanks to a tool, pycompose, which can interact with state-of-the-art graph planning tools. © 2010 Springer-Verlag.
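The planning flavour of the composition problem can be sketched as a breadth-first search over sets of available data types, with services as operators. The service names and types below are hypothetical, and the paper's planning-graph encoding and the pycompose tool are considerably richer:

```python
from collections import deque

def compose(services, available, goal):
    """Find a shortest service sequence producing `goal` from `available`.
    `services` maps name -> (set of required inputs, set of produced outputs).
    Breadth-first layering loosely mirrors planning-graph expansion."""
    start = frozenset(available)
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        facts, plan = queue.popleft()
        if goal in facts:
            return plan
        for name, (inputs, outputs) in services.items():
            if inputs <= facts:  # service applicable: all inputs available
                nxt = frozenset(facts | outputs)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, plan + [name]))
    return None  # no composition reaches the goal
```

Semantic adaptation, as described above, would additionally allow a service input to be satisfied by a semantically compatible type rather than an exact match.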

Loshchilov I.,French Institute for Research in Computer Science and Automation | Schoenauer M.,French Institute for Research in Computer Science and Automation | Sebag M.,CNRS Computer Science Laboratory
GECCO'12 - Proceedings of the 14th International Conference on Genetic and Evolutionary Computation Companion | Year: 2012

In this paper, we study the performance of IPOP-s*aACM-ES and BIPOP-s*aACM-ES, recently proposed self-adaptive surrogate-assisted Covariance Matrix Adaptation Evolution Strategies. Both algorithms were tested using restarts until a total budget of 10^6 D function evaluations was reached, where D is the dimension of the search space. We compared the surrogate-assisted algorithms with their surrogate-less versions, IPOP-aCMA-ES and BIPOP-CMA-ES, two of the algorithms with the best overall performance observed during BBOB-2009 and BBOB-2010. The comparison shows that the surrogate-assisted versions outperform the original CMA-ES algorithms by a factor of 2 to 4 on 8 out of 24 noiseless benchmark problems, showing the best results among all algorithms of BBOB-2009 and BBOB-2010 on the Ellipsoid, Discus, Bent Cigar, Sharp Ridge and Sum of Different Powers functions. Copyright 2012 ACM.
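The surrogate-assisted idea, spending expensive true evaluations only on candidates that a cheap model ranks as promising, can be sketched as follows. The nearest-neighbour surrogate and the simple random local search below are crude stand-ins for the paper's ranking-based surrogate and CMA-ES:

```python
import numpy as np

def surrogate_assisted_search(f, dim, budget, pool=20, keep=4, seed=0):
    """Minimise f by sampling around the incumbent; a cheap surrogate
    pre-screens candidates so only the most promising spend true evaluations."""
    rng = np.random.default_rng(seed)
    x_best = rng.normal(size=dim)
    f_best = f(x_best)
    archive = [(x_best.copy(), f_best)]  # all truly evaluated points
    evals = 1
    while evals < budget:
        cands = x_best + 0.3 * rng.normal(size=(pool, dim))
        # Surrogate: predict each candidate's value as that of its nearest
        # archived point (a toy substitute for a learned ranking model).
        preds = [min(archive, key=lambda p: np.linalg.norm(c - p[0]))[1]
                 for c in cands]
        for i in np.argsort(preds)[:keep]:  # true evaluations for top picks
            if evals >= budget:
                break
            fx = f(cands[i])
            evals += 1
            archive.append((cands[i].copy(), fx))
            if fx < f_best:
                x_best, f_best = cands[i].copy(), fx
    return x_best, f_best
```

Here only `keep` out of `pool` candidates per iteration consume the evaluation budget, which is the source of the speed-up reported above.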

Boubchir L.,Qatar University | Boubchir L.,CNRS Computer Science Laboratory | Boashash B.,Qatar University | Boashash B.,University of Queensland
IEEE Transactions on Signal Processing | Year: 2013

This paper presents a novel nonparametric Bayesian estimator for signal and image denoising in the wavelet domain. This approach uses a prior model of the wavelet coefficients designed to capture the sparseness of the wavelet expansion. A new family of Bessel K Form (BKF) densities is designed to fit the observed histograms, so as to provide a probabilistic model for the marginal densities of the wavelet coefficients. The paper first shows how the BKF prior can characterize images belonging to Besov spaces. Then, a new hyper-parameter estimator based on the EM algorithm is designed to estimate the parameters of the BKF density, and it is compared with a cumulants-based estimator. Exploiting this prior model, another novel contribution is a Bayesian denoiser based on maximum a posteriori (MAP) estimation under the 0-1 loss function, for which the mathematical properties are formally established and a closed-form expression is derived. Finally, a comparative study on a digitized database of natural images and biomedical signals shows the effectiveness of this new Bayesian denoiser compared to other classical and Bayesian denoising approaches. Results on biomedical data illustrate the method in the temporal as well as the time-frequency domain. © 1991-2012 IEEE.
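For context, the classical wavelet shrinkage baseline that such Bayesian denoisers are compared against can be sketched with a single-level Haar transform and soft thresholding. This is the generic shrinkage rule, not the paper's closed-form BKF/MAP estimator:

```python
import numpy as np

def haar_step(x):
    """One level of the orthonormal Haar transform (even-length input)."""
    s = (x[0::2] + x[1::2]) / np.sqrt(2)  # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)  # detail coefficients
    return s, d

def inverse_haar_step(s, d):
    x = np.empty(2 * len(s))
    x[0::2] = (s + d) / np.sqrt(2)
    x[1::2] = (s - d) / np.sqrt(2)
    return x

def denoise_haar_soft(signal, threshold):
    """Single-level Haar shrinkage: soft-threshold the detail coefficients.
    A MAP estimator under a sparse prior replaces this fixed rule with one
    derived from the coefficient density."""
    s, d = haar_step(signal)
    d = np.sign(d) * np.maximum(np.abs(d) - threshold, 0.0)
    return inverse_haar_step(s, d)
```

The Bayesian view replaces the hand-tuned threshold with a shrinkage function derived from the prior on the wavelet coefficients.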

Manjon J.V.,Polytechnic University of Valencia | Coupe P.,CNRS Computer Science Laboratory
Frontiers in Neuroinformatics | Year: 2016

The amount of medical image data produced in clinical and research settings is growing rapidly, resulting in vast amounts of data to analyze. Automatic and reliable quantitative analysis tools, including segmentation, make it possible to analyze brain development and to understand specific patterns of many neurological diseases. This field has recently seen many advances, with successful techniques based on non-linear warping and label fusion. In this work we present a novel and fully automatic pipeline for volumetric brain analysis based on multi-atlas label fusion technology that is able to provide accurate volumetric information at different levels of detail in a short time. The method is available through the volBrain online web interface, which is publicly and freely accessible to the scientific community. Our new framework has been compared with current state-of-the-art methods, showing very competitive results. © 2016 Manjón and Coupé.
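The simplest instance of multi-atlas label fusion is per-voxel majority voting over registered atlas segmentations, sketched below; practical pipelines combine this idea with non-linear warping and more sophisticated, locally weighted fusion:

```python
import numpy as np

def majority_vote_fusion(atlas_labels):
    """Fuse label maps from several registered atlases by per-voxel
    majority vote, the simplest multi-atlas label fusion scheme."""
    stack = np.stack(atlas_labels)  # shape: (n_atlases, *volume_shape)
    n_labels = int(stack.max()) + 1
    # Count votes for each label at every voxel, then pick the winner.
    votes = np.stack([(stack == lab).sum(axis=0) for lab in range(n_labels)])
    return np.argmax(votes, axis=0)
```

Ties resolve to the lowest label index; weighting each atlas vote by local intensity similarity is the usual refinement.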

This paper deals with noise propagation from the camera sensor to displacement and strain maps when the grid method is employed to estimate these quantities. It is shown that closed-form equations can be employed to predict the link between metrological characteristics such as resolution and spatial resolution in displacement and strain maps on the one hand, and various quantities characterising grid images such as brightness, contrast and the standard deviation of noise on the other hand. Numerical simulations first confirm the relevance of this approach in the case of an idealised camera sensor impaired by homoscedastic Gaussian white noise. Actual CCD or CMOS sensors, however, exhibit heteroscedastic noise. A pre-processing step is therefore proposed to stabilise the noise variance prior to employing the predictive equations, which provide the resolution in strain and displacement maps due to sensor noise. This step is based on both a model of the sensor noise and the use of the generalised Anscombe transform to stabilise the noise variance. Applying this procedure in the case of a translation test confirms that it is possible to correctly model noise propagation from the sensor to the displacement and strain maps, and thus also to predict the actual link between resolution, spatial resolution and the standard deviation of noise in grid images. © 2013 Wiley Publishing Ltd.
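The variance-stabilisation step can be made concrete. For a Poisson-Gaussian sensor model z = gain·p + n, with p Poisson-distributed and n Gaussian with mean `offset` and standard deviation `sigma`, the generalised Anscombe transform maps z to approximately unit variance; the parameter names below are ours, not the paper's notation:

```python
import numpy as np

def generalized_anscombe(z, gain, sigma, offset=0.0):
    """Generalised Anscombe transform: maps Poisson-Gaussian sensor values
    z = gain*Poisson + N(offset, sigma^2) to roughly unit noise variance."""
    arg = gain * z + 0.375 * gain ** 2 + sigma ** 2 - gain * offset
    # Clip negative arguments (possible in deep shadows) before the sqrt.
    return (2.0 / gain) * np.sqrt(np.maximum(arg, 0.0))
```

After this transform the heteroscedastic noise behaves like homoscedastic Gaussian noise, so the closed-form predictive equations apply.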

Teytaud F.,CNRS Computer Science Laboratory | Teytaud O.,CNRS Computer Science Laboratory
2011 IEEE Conference on Computational Intelligence and Games, CIG 2011 | Year: 2011

Solving games is well understood in the fully observable case. The partially observable case is much more difficult: whenever the number of strategies is finite (which is not necessarily the case, even when the state space is finite), the main tool for exact solving is the construction of the full matrix game and its solution by linear programming. We propose tools for approximating the value of partially observable games. The lemmas are relatively general, and we apply them to derive rigorous bounds on the Nash equilibrium of phantom tic-tac-toe and phantom Go. © 2011 IEEE.
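Since building and solving the full matrix game by linear programming is the exact route, a lightweight way to approximate a zero-sum game's value, with a rigorous two-sided sandwich, is fictitious play. This is a generic illustration, not the bounding technique of the paper:

```python
import numpy as np

def fictitious_play(A, iters=5000):
    """Approximate the value of the zero-sum matrix game A (row player
    maximises) by fictitious play; returns lower and upper bounds."""
    m, n = A.shape
    row_counts = np.zeros(m)
    col_counts = np.zeros(n)
    row_counts[0] = col_counts[0] = 1
    for _ in range(iters):
        # Each player best-responds to the opponent's empirical mixture.
        row_counts[np.argmax(A @ col_counts)] += 1
        col_counts[np.argmin(row_counts @ A)] += 1
    p = row_counts / row_counts.sum()
    q = col_counts / col_counts.sum()
    # p guarantees at least min(p@A); q concedes at most max(A@q).
    return float(np.min(p @ A)), float(np.max(A @ q))
```

The true game value always lies between the two returned numbers, and the gap shrinks as the empirical mixtures converge.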

Bourgeau T.,CNRS Computer Science Laboratory
2011 7th International Conference on Network and Service Management, CNSM 2011 | Year: 2011

Network topology discovery with distributed traceroute-based measurement systems is important to monitor, measure, diagnose and capture IP-level network topology dynamism. Depending on the size of the discovered topology and the accuracy with which its dynamism is captured, a trade-off must be made between the measurement time granularity and the scale of these measurement systems. In this paper, we present our large-scale measurement dataset and an analysis of the network topology dynamism captured in a real measurement scenario. We also quantify, with a proposed algorithm, the dynamism information missed at coarser measurement time granularities. These results confirm that probing less frequently, as most existing measurement systems do today, can dramatically reduce the dynamism information captured. © 2011 IFIP.
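The core trade-off, coarser probing granularity missing topology dynamism, can be quantified on toy snapshot data by comparing link-set changes at two sampling rates. The snapshots below are illustrative, not the paper's dataset or algorithm:

```python
def missed_dynamism(snapshots, step):
    """Count IP-level link changes visible at the finest granularity but
    missed when only every `step`-th topology snapshot is kept.
    Each snapshot is a set of (node_a, node_b) links."""
    def changes(series):
        # Symmetric difference between consecutive snapshots counts
        # links that appeared or disappeared.
        return sum(len(a ^ b) for a, b in zip(series, series[1:]))
    return changes(snapshots) - changes(snapshots[::step])
```

A short-lived route flap that appears and reverts between two coarse probes is exactly the kind of event this difference captures.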
