Dittami S.M.,Paris-Sorbonne University |
Dittami S.M.,CNRS Integrative Biology of Marine Models |
Eveillard D.,CNRS Nantes Atlantic Computer Science Laboratory |
Tonon T.,Paris-Sorbonne University |
Tonon T.,CNRS Integrative Biology of Marine Models
Molecular Ecology | Year: 2014
Increasing evidence exists that bacterial communities interact with and shape the biology of algae, and that their evolutionary histories are connected. Despite these findings, physiological studies have generally been, and still are, carried out with axenic or at least antibiotic-treated cultures. Here, we argue that considering interactions between algae and their associated bacteria is key to understanding their biology and evolution. To deal with the complexity of the resulting 'holobiont' system, a metabolism-centred approach that uses combined metabolic models for algae and their associated bacteria is proposed. We believe that these models will be valuable tools both for studying algal-bacterial interactions and for elucidating processes important for the acclimation of the holobiont to environmental changes. © 2014 John Wiley & Sons Ltd.
Rusu I.,CNRS Nantes Atlantic Computer Science Laboratory
Theoretical Computer Science | Year: 2014
Common intervals of K permutations over the same set of n elements were first investigated by T. Uno and M. Yagiura (2000), who proposed an efficient algorithm to find common intervals when K = 2. Several particular classes of intervals have been defined since then, e.g. conserved intervals and nested common intervals, with applications mainly in genome comparison. Each such class, including common intervals, led to the development of a specific algorithmic approach for K = 2 and, except for nested common intervals, for its extension to an arbitrary K. In this paper, we propose a common and efficient algorithmic framework for finding different types of common intervals in a set P of K permutations, for arbitrary K. Our generic algorithm is based on a global representation of the information stored in P, called the MinMax-profile of P, and an efficient data structure, called an LR-stack, that we introduce here. We show that common intervals (and their subclasses of irreducible common intervals and same-sign common intervals), nested common intervals (and their subclass of maximal nested common intervals), as well as conserved intervals (and their subclass of irreducible conserved intervals) may be obtained by appropriately setting the parameters of our algorithm in each case. All the resulting algorithms run in O(Kn + N) time and need O(n) additional space, where N is the number of solutions. The algorithms for nested common intervals and maximal nested common intervals are new for K > 2, in the sense that no other algorithm has been given so far to solve the problem with the same complexity, or better. The other algorithms are as efficient as the best known algorithms. © 2014 Elsevier B.V.
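To make the notion of a common interval concrete, here is a deliberately naive O(n^2) enumeration for two permutations. It is illustration only: it uses none of the paper's MinMax-profile/LR-stack machinery and is far from the O(Kn + N) bound; the function name and approach are ours, not the paper's.

```python
def common_intervals(p, q):
    # Relabel q by positions in p: a window of the relabelled sequence
    # is a common interval iff its values are consecutive integers,
    # i.e. max - min == window length - 1.
    pos = {v: i for i, v in enumerate(p)}
    r = [pos[v] for v in q]
    found = []
    for i in range(len(r)):
        lo = hi = r[i]
        for j in range(i + 1, len(r)):
            lo, hi = min(lo, r[j]), max(hi, r[j])
            if hi - lo == j - i:
                found.append(frozenset(q[i:j + 1]))
    return found
```

For p = (1, 2, 3, 4) and q = (3, 1, 2, 4), the common intervals of size at least 2 are {1, 2}, {1, 2, 3} and {1, 2, 3, 4}.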
Le Capitaine H.,CNRS Nantes Atlantic Computer Science Laboratory |
Le Capitaine H.,University of La Rochelle
IEEE Transactions on Fuzzy Systems | Year: 2012
Matching pairs of objects is a fundamental operation in data analysis. However, it requires the definition of a similarity measure between the objects to be matched, and this measure may not be adapted to the various properties of each object. Consequently, designing a method to learn a measure of similarity between pairs of objects is an important generic problem in machine learning. In this paper, a general framework of fuzzy logic-based similarity measures, built on T-equalities derived from residual implication functions, is proposed. Then, a model that allows us to learn the parametric similarity measures is introduced. This is achieved by an online learning algorithm with an efficient implication-based loss function. Experiments on real datasets show that the learned measures are effective at a wide range of scales and achieve better results than existing fuzzy similarity measures. Moreover, the learning algorithm is fast enough to be used in real-world applications, where computation time is a key criterion when choosing an inference system. © 2012 IEEE.
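As a minimal sketch of the underlying idea, the Łukasiewicz residual implication induces the T-equality E(a, b) = 1 - |a - b|, which can be aggregated over features into a parametric similarity. The weighted aggregation below is a toy assumption of ours, not the paper's learned model.

```python
def luk_implication(a, b):
    # Lukasiewicz residual implication: I(a, b) = min(1, 1 - a + b)
    return min(1.0, 1.0 - a + b)

def t_equality(a, b):
    # T-equality induced by the implication:
    # E(a, b) = min(I(a, b), I(b, a)) = 1 - |a - b| for Lukasiewicz
    return min(luk_implication(a, b), luk_implication(b, a))

def similarity(x, y, w):
    # Weighted mean of feature-wise T-equalities; the weights w play
    # the role of the parameters one could fit with online learning.
    s = sum(wi * t_equality(xi, yi) for wi, xi, yi in zip(w, x, y))
    return s / sum(w)
```

For instance, similarity([0.2, 0.9], [0.2, 0.4], [1.0, 1.0]) combines a perfect match (1.0) with a partial one (0.5) into 0.75.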
Pigeau A.,CNRS Nantes Atlantic Computer Science Laboratory
Multimedia Tools and Applications | Year: 2016
The widespread use of camera-equipped mobile devices raises the need to automatically organize large personal media collections. Event detection algorithms have been proposed in the literature to tackle this problem, but a shortcoming of these studies is the lack of comparison between the different solutions. The goal of this work is therefore to provide a set of tools for assessing an event detection method. Our first contribution is our prototype, Life Gallery, which retrieves the metadata of a personal media collection in order to facilitate the building of a benchmark. Our second contribution is an event detection algorithm based on successive refinements of an initial temporal partition. Our solution uses a simple algorithm that groups consecutive similar media and splits groups containing dissimilar media. We deliberately provide a basic solution so that, in future work, our results can be compared with more complex solutions in order to properly assess their benefit. Finally, our third contribution is a benchmark of 6 different users’ collections obtained with our Life Gallery application. © 2016 Springer Science+Business Media New York
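A common baseline for building an initial temporal partition of a media collection is to cut wherever the time gap between consecutive items exceeds a threshold. The sketch below is that generic baseline under our own naming, not the paper's refinement algorithm, whose merge/split steps would operate on groups like these.

```python
def group_by_gap(timestamps, gap):
    # Start a new event whenever the gap between consecutive media
    # exceeds the threshold; timestamps are assumed sorted ascending.
    groups, cur = [], [timestamps[0]]
    for t in timestamps[1:]:
        if t - cur[-1] > gap:
            groups.append(cur)
            cur = [t]
        else:
            cur.append(t)
    groups.append(cur)
    return groups
```

For example, group_by_gap([0, 1, 2, 50, 51], 10) yields two events: [0, 1, 2] and [50, 51].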
Goualard F.,CNRS Nantes Atlantic Computer Science Laboratory
ACM Transactions on Mathematical Software | Year: 2014
The algorithm that computes the midpoint of an interval with floating-point bounds requires careful devising to handle all possible inputs correctly. We review several implementations from prominent C/C++ interval arithmetic packages and analyze their potential failure to deliver the expected results. We then show how to amend them to avoid common pitfalls. The results presented are also relevant to noninterval arithmetic computation, such as the implementation of bisection methods. Enough background on IEEE 754 floating-point arithmetic is provided for this article to serve as a practical introduction to the analysis of floating-point computation. © 2014 ACM.
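One of the pitfalls in question is easy to reproduce: the textbook midpoint (a + b) / 2 overflows to infinity for large finite bounds. The sketch below illustrates the failure and one partial remedy; it is not the amended algorithms of the article, which must also handle infinite bounds, signed zeros and rounding direction.

```python
import sys

def naive_mid(a, b):
    # Textbook midpoint: the intermediate sum a + b can overflow to
    # infinity even when the true midpoint is finite.
    return (a + b) / 2.0

def safer_mid(a, b):
    # Splitting the division avoids that overflow for large finite
    # bounds (division by 2 is exact for normal doubles).
    return a / 2.0 + b / 2.0

big = sys.float_info.max
print(naive_mid(big, big))   # inf: a + b overflows
print(safer_mid(big, big))   # big, the mathematically correct midpoint
```

Note that dividing first is not a universal fix either: for subnormal bounds, a / 2 itself loses precision, which is among the cases a complete implementation must address.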
Mourad R.,CNRS Nantes Atlantic Computer Science Laboratory |
Sinoquet C.,CNRS Nantes Atlantic Computer Science Laboratory |
Leray P.,CNRS Nantes Atlantic Computer Science Laboratory
BMC Bioinformatics | Year: 2011
Background: Discovering the genetic basis of common genetic diseases in the human genome represents a public health issue. However, the dimensionality of the genetic data (up to 1 million genetic markers) and its complexity make the statistical analysis a challenging task. Results: We present an accurate modeling of dependences between genetic markers, based on a forest of hierarchical latent class models, which is a particular class of probabilistic graphical models. This model offers an adapted framework to deal with the fuzzy nature of linkage disequilibrium blocks. In addition, the data dimensionality can be reduced through the latent variables of the model, which synthesize the information borne by genetic markers. In order to tackle the learning of both the forest structure and the probability distributions, a generic algorithm has been proposed. A first implementation of our algorithm has been shown to be tractable on benchmarks describing 10^5 variables for 2000 individuals. Conclusions: The forest of hierarchical latent class models offers several advantages for genome-wide association studies: accurate modeling of linkage disequilibrium, flexible data dimensionality reduction and biological meaning borne by latent variables. © 2011 Mourad et al; licensee BioMed Central Ltd.
Ishii D.,Japan National Institute of Information and Communications Technology |
Goldsztejn A.,CNRS Nantes Atlantic Computer Science Laboratory |
Jermann C.,CNRS Nantes Atlantic Computer Science Laboratory
Constraints | Year: 2012
This paper presents an interval-based method that follows the branch-and-prune scheme to compute a verified paving of a projection of the solution set of an under-constrained system. Benefits of this algorithm include an anytime solving process, homogeneous verification of inner boxes, and applicability to generic problems, allowing any number of (possibly nonlinear) equality and inequality constraints. We present three key improvements of the algorithm dedicated to projection problems: (i) the verification process is enhanced in order to prove larger boxes in the projection space faster; (ii) computational effort is saved by pruning redundant portions of the solution set that would project identically; (iii) a dedicated branching strategy reduces the number of treated boxes. Experimental results indicate that various applications can be modeled as projection problems and solved efficiently by the proposed method. © Springer Science+Business Media, LLC 2012.
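The basic branch-and-prune scheme can be sketched on a toy problem: paving the disc x^2 + y^2 <= 1 with inner and boundary boxes. This illustrates only the generic scheme (classify, prune, bisect); the paper's projection handling, verification and pruning improvements are not reproduced, and the interval evaluation below ignores rounding.

```python
def sq(lo, hi):
    # Interval extension of x -> x^2 (exact range, rounding ignored).
    if lo >= 0:
        return lo * lo, hi * hi
    if hi <= 0:
        return hi * hi, lo * lo
    return 0.0, max(lo * lo, hi * hi)

def pave(xl, xu, yl, yu, eps, inner, boundary):
    # Branch-and-prune: boxes proven inside the disc go to 'inner',
    # boxes proven outside are pruned, undecided boxes are bisected
    # along their widest dimension until their width falls below eps.
    (sxl, sxu), (syl, syu) = sq(xl, xu), sq(yl, yu)
    lo, hi = sxl + syl, sxu + syu
    if hi <= 1.0:
        inner.append((xl, xu, yl, yu))
    elif lo > 1.0:
        return                        # pruned: provably outside
    elif max(xu - xl, yu - yl) < eps:
        boundary.append((xl, xu, yl, yu))
    elif xu - xl >= yu - yl:
        m = 0.5 * (xl + xu)
        pave(xl, m, yl, yu, eps, inner, boundary)
        pave(m, xu, yl, yu, eps, inner, boundary)
    else:
        m = 0.5 * (yl + yu)
        pave(xl, xu, yl, m, eps, inner, boundary)
        pave(xl, xu, m, yu, eps, inner, boundary)

inner, boundary = [], []
pave(-1.0, 1.0, -1.0, 1.0, 0.05, inner, boundary)
area = sum((b - a) * (d - c) for a, b, c, d in inner)
print(area)  # approaches pi from below as eps shrinks
```

The "anytime" character mentioned in the abstract corresponds to the fact that every box placed in `inner` is already verified when it is produced, so the computation can be stopped at any point with a valid partial paving.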
Goldsztejn A.,CNRS Nantes Atlantic Computer Science Laboratory
Reliable Computing | Year: 2012
In Modal Intervals Revisited Part 1, new extensions to generalized intervals (intervals whose bounds are not constrained to be ordered), called AE-extensions, were defined. They provide the same interpretations as modal intervals and therefore enhance the interpretations of classical interval extensions (for example, both inner and outer approximations of function ranges are in the scope of AE-extensions). The construction of AE-extensions is similar to the construction of classical interval extensions. In particular, a natural AE-extension has been defined from Kaucher arithmetic, which simplified some central results of modal interval theory. Starting from this framework, the mean-value AE-extension is now defined. It represents a new way to linearize a real function that is compatible with both inner and outer approximations of its range. With a quadratic order of convergence for real-valued functions, it allows one to overcome some difficulties that were encountered when using a preconditioning process together with the natural AE-extensions. Some application examples are finally presented, displaying the application potential of the mean-value AE-extension.
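As a point of reference, the classical (outer) mean-value form, which the mean-value AE-extension generalizes to Kaucher intervals so as to also yield inner approximations, can be written as follows (this is the standard textbook form, not the paper's generalized formula):

```latex
% Classical mean-value form: an outer enclosure of the range of f
% over the interval [x], linearized around a centre c.
\operatorname{range}\bigl(f,[x]\bigr)
  \;\subseteq\;
  f(c) \,+\, [f']\bigl([x]\bigr)\,\bigl([x]-c\bigr),
  \qquad c = \operatorname{mid}\bigl([x]\bigr),
% where [f']([x]) is an interval enclosure of the derivative of f
% over [x]; the enclosure tightens quadratically as [x] shrinks.
```

The mean-value AE-extension replaces the classical intervals here by generalized intervals, which is what makes the same linearization compatible with both inner and outer approximations of the range.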
Rusu I.,CNRS Nantes Atlantic Computer Science Laboratory
Journal of Discrete Algorithms | Year: 2012
In this paper, we address two different problems related to conserved regions in K ≥ 2 genomes represented as permutations. The first one requires enumerating the conserved regions, whereas the second one asks only to count them. We show that the generator-based technique, introduced by Bergeron et al. to enumerate common intervals of K genomes represented as permutations, may be extended in two directions. First, it may be applied to signed permutations, yielding (1) a method to enumerate in O(Kn+N) time the N conserved intervals of K such permutations on n elements, and (2) a method to enumerate in O(Kn) time the irreducible conserved intervals of those K permutations. Second, it may be used to solve in O(Kn) time the counting problem, for every class of intervals that admits a so-called canonical generator. Both common and conserved intervals of K (signed) permutations admit such a generator. Although some (though not all) of the above running times had already been reached by previous algorithms, this is the first time that the features shared by common and conserved intervals are exploited within a common efficient framework. © 2011 Elsevier B.V. All rights reserved.
Le Capitaine H.,CNRS Nantes Atlantic Computer Science Laboratory
Pattern Recognition | Year: 2014
The possibility of selecting a subset of classes, instead of one unique class, for assignment is of great interest in many decision-making systems. Selecting a subset of classes rather than a singleton reduces the error rate and allows the reduced set to be passed on to another classifier or to an expert. This second step provides additional information and therefore increases the quality of the result. In this paper, a unified view of the problem of class selection with probabilistic classifiers is presented. The proposed framework, based on the evaluation of probabilistic equivalence, makes it possible to recover class-selective frameworks that have been proposed in the literature. We also describe an approach in which the decision rules are compared with the help of a normalized area under the error/selection curve. This makes the evaluation relatively independent of the performance of the classifier without the reject option, and thus yields a reliable evaluation of class-selection decision rules. The power of this generic proposition is demonstrated by evaluating it and comparing it to several state-of-the-art methods on nine real-world datasets and four different probabilistic classifiers. © 2013 Elsevier Ltd.
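One simple instance of a class-selective decision rule is a ratio rule: keep every class whose posterior probability is within a factor beta of the best class. This sketch is one common variant chosen for illustration, not the specific equivalence-based rules the paper unifies.

```python
def select_classes(probs, beta):
    # Keep class k when P(k|x) >= beta * max_j P(j|x).
    # beta = 1 degenerates to the usual argmax single-class decision;
    # smaller beta trades a larger selected subset for a lower risk
    # of excluding the true class.
    m = max(probs)
    return [k for k, p in enumerate(probs) if p >= beta * m]
```

With posteriors [0.5, 0.3, 0.2] and beta = 0.5, classes 0 and 1 are retained; with a confident posterior like [0.9, 0.05, 0.05], the rule collapses to the singleton {0}.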