Lacombe F., Mauna Kea Technologies MKT | Lavaste O., CNRS Chemistry Institute of Rennes | Senhadji L., French Institute of Health and Medical Research | Senhadji L., University of Rennes 1
IRBM | Year: 2011

The DIAPRECA project aims to exploit, jointly or sequentially, optical microscopy exploration techniques such as reflectance and Raman for better early, in vivo discrimination between healthy, precancerous and cancerous tissues. DIAPRECA brings together the laboratories LTSI-INSERM U 642, ProCadec-CNRS 6164 and IGDR-CNRS 6061 of the University of Rennes 1, the gastroenterology department of the Brest university hospital, and the company Mauna Kea Technologies. The project covered both the design of dedicated instruments and software and ex vivo and in vivo experiments. © 2011 Elsevier Masson SAS. All rights reserved.


Andre B., Mauna Kea Technologies MKT | Andre B., French Institute for Research in Computer Science and Automation | Vercauteren T., Mauna Kea Technologies MKT | Ayache N., French Institute for Research in Computer Science and Automation
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2012

In this paper we present the first Content-Based Image Retrieval (CBIR) framework in the field of in vivo endomicroscopy, with applications ranging from training support to diagnosis support. We propose to adapt the standard Bag-of-Visual-Words method to the retrieval of endomicroscopic videos. Retrieval performance is evaluated both indirectly, from a classification point of view, and directly, with respect to a perceived-similarity ground truth. The proposed method significantly outperforms several state-of-the-art CBIR methods on two different endomicroscopy databases. With the aim of building a self-training simulator, we use retrieval results to estimate the interpretation difficulty experienced by endoscopists. Finally, by incorporating clinical knowledge about perceived similarity and endomicroscopy semantics, we are able 1) to learn an adequate visual similarity distance and 2) to build visual-word-based semantic signatures that extract, from low-level visual features, higher-level clinical knowledge expressed in the endoscopist's own language. © 2012 Springer-Verlag.
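As a rough illustration of the Bag-of-Visual-Words retrieval pipeline mentioned in this abstract, the Python sketch below clusters pooled local descriptors into a codebook, turns each image or video into a normalized visual-word histogram, and ranks database entries by L1 distance to the query signature. The helper names (build_codebook, bovw_signature, retrieve), the codebook size and the distance measure are illustrative assumptions, not the authors' exact settings.

```python
# Minimal Bag-of-Visual-Words retrieval sketch (illustrative; the paper's
# exact descriptors, codebook size and similarity measure may differ).
import numpy as np
from sklearn.cluster import KMeans

def build_codebook(descriptors, k=100, seed=0):
    """Cluster pooled local descriptors (N x D array) into k visual words."""
    return KMeans(n_clusters=k, random_state=seed, n_init=10).fit(descriptors)

def bovw_signature(descriptors, codebook):
    """L1-normalized histogram of visual-word assignments for one image/video."""
    words = codebook.predict(descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

def retrieve(query_sig, database_sigs, top_k=5):
    """Return indices of the top_k database signatures closest in L1 distance."""
    dists = np.abs(database_sigs - query_sig).sum(axis=1)
    return np.argsort(dists)[:top_k]
```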


Andre B., Mauna Kea Technologies MKT | Andre B., French Institute for Research in Computer Science and Automation | Vercauteren T., Mauna Kea Technologies MKT | Buchner A.M., University of Pennsylvania | Ayache N., French Institute for Research in Computer Science and Automation
2010 7th IEEE International Symposium on Biomedical Imaging: From Nano to Macro, ISBI 2010 - Proceedings | Year: 2010

In vivo pathology from endomicroscopy videos can be a challenge for many physicians. To ease this task, we propose a content-based video retrieval method that provides, given a query video, relevant similar videos from an expert-annotated database. Our main contribution is to revisit the Bag-of-Visual-Words method by weighting the contributions of the dense local regions according to the registration results of mosaicing. We perform a leave-one-patient-out k-nearest-neighbor classification and show significantly better accuracy (e.g. around 94% for 9 neighbors) than when the video images are used independently. Fewer neighbors are needed to classify the queries, and our signature summation technique reduces retrieval runtime. © 2010 IEEE.
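The two main ingredients of this abstract, a weighted summation of per-frame signatures and a leave-one-patient-out k-nearest-neighbor evaluation, could be sketched along the following lines in Python. The weighting scheme shown (a simple weighted average, with weights assumed to come from the mosaicing registration) and the helper names are illustrative assumptions rather than the published protocol.

```python
# Sketch of video-level signature summation and leave-one-patient-out k-NN
# classification (weights and k are illustrative, not the published settings).
import numpy as np
from collections import Counter

def video_signature(frame_signatures, frame_weights):
    """Weighted average of per-frame BoVW histograms, then L1 normalization.
    frame_weights would be derived from the mosaicing registration, e.g. to
    down-weight regions observed in several overlapping frames."""
    sig = np.average(frame_signatures, axis=0, weights=frame_weights)
    return sig / max(sig.sum(), 1.0)

def leave_one_patient_out_knn(signatures, labels, patient_ids, k=9):
    """Classify each video by a k-NN vote, excluding all videos from the
    same patient from the candidate set."""
    preds = []
    for i, (sig, pid) in enumerate(zip(signatures, patient_ids)):
        mask = np.array([p != pid for p in patient_ids])
        dists = np.abs(signatures[mask] - sig).sum(axis=1)
        neighbor_labels = np.array(labels)[mask][np.argsort(dists)[:k]]
        preds.append(Counter(neighbor_labels).most_common(1)[0][0])
    return preds
```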


Andre B., Mauna Kea Technologies MKT | Andre B., French Institute for Research in Computer Science and Automation | Vercauteren T., Mauna Kea Technologies MKT | Perchant A., Mauna Kea Technologies MKT | Ayache N., French Institute for Research in Computer Science and Automation
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2010

Interpreting endomicroscopic images remains a significant challenge, especially since a single still image may not always contain enough information to make a robust diagnosis. To aid physicians, we investigated local feature-based retrieval methods that provide, given a query image, similar annotated images from a database of endomicroscopic images combined with high-level diagnoses represented as textual information. Local feature-based methods may be limited by the small field of view (FOV) of endomicroscopy and by the fact that they take into account neither the spatial relationship between the local features nor the temporal relationship between successive images of the video sequences. To extract discriminative information over the entire image field, our proposed method collects local features in a dense manner instead of using a standard salient-region detector. After the retrieval process, we introduce a verification step that is driven by the textual information in the database and that uses the spatial relationship between the local features: a spatial criterion built from the co-occurrence matrix of local features removes outliers by thresholding. To overcome the small-FOV problem and take advantage of the video sequence, we propose to combine image retrieval with mosaicing, which essentially projects the temporal dimension onto a large-field-of-view image. In this framework, videos, represented by mosaics, and single images can be retrieved with the same tools. With a leave-n-out cross-validation, our results show that taking into account the spatial relationship between local features and the temporal information of endomicroscopic videos through image mosaicing improves retrieval accuracy. © 2010 Springer Berlin Heidelberg.
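To make the spatial-verification idea concrete, the sketch below builds a co-occurrence matrix of dense visual words over 4-connected grid neighbors and keeps a retrieved candidate only if its co-occurrence pattern correlates with the query's above a threshold. The neighborhood definition, the correlation score and the threshold are assumptions for illustration; the paper's exact spatial criterion may differ.

```python
# Illustrative co-occurrence-based spatial check on dense visual words
# (neighborhood, score and threshold are assumptions, not the paper's criterion).
import numpy as np

def cooccurrence_matrix(word_grid, n_words):
    """Count co-occurrences of visual words in 4-connected neighboring cells
    of a dense grid (word_grid: 2-D array of visual-word indices)."""
    C = np.zeros((n_words, n_words))
    h, w = word_grid.shape
    for y in range(h):
        for x in range(w):
            a = word_grid[y, x]
            if x + 1 < w:
                b = word_grid[y, x + 1]
                C[a, b] += 1
                C[b, a] += 1
            if y + 1 < h:
                b = word_grid[y + 1, x]
                C[a, b] += 1
                C[b, a] += 1
    return C / max(C.sum(), 1.0)

def spatially_consistent(query_grid, candidate_grid, n_words, threshold=0.5):
    """Keep a retrieved candidate only if its word co-occurrence pattern
    correlates with the query's above the threshold (cosine similarity)."""
    cq = cooccurrence_matrix(query_grid, n_words).ravel()
    cc = cooccurrence_matrix(candidate_grid, n_words).ravel()
    denom = np.linalg.norm(cq) * np.linalg.norm(cc)
    return denom > 0 and float(cq @ cc) / denom >= threshold
```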
