Vega C., IRSTEA | Vega C., CNRS Laboratory for Informatics | Durrieu S., IRSTEA
International Journal of Applied Earth Observation and Geoinformation

This paper presents a method for individual tree crown extraction and characterisation from a canopy surface model (CSM). The method is based on a conventional algorithm that localises local maxima (LM) on a smoothed version of the CSM and then models the tree crowns around each maximum at the plot level. The novelty of the approach lies in the introduction of controls on both the degree of CSM filtering and the shape of the elliptic crowns, together with a multi-filtering-level crown fusion approach to balance omission and commission errors. The algorithm derives the total tree height and the mean crown diameter from the elliptic tree crowns generated. The method was tested and validated on a mountainous forested area mainly covered by mature, even-aged black pine (Pinus nigra ssp. nigra [Arn.]) stands. Mean stem detection per plot was 73.97%. Algorithm performance was slightly affected by both stand density and stand heterogeneity (i.e. the distribution of tree diameter classes). The total tree height and the mean crown diameter were estimated with root mean squared errors of 1.83 m and 1.48 m, respectively. Tree heights were slightly underestimated in flat areas and overestimated on slopes. The mean crown diameter was underestimated by 17.46% on average. © 2011 Elsevier B.V.
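A minimal sketch of the local-maxima (LM) step this abstract describes: smooth a CSM raster, then flag cells that are local maxima within a fixed window. The paper's filtering controls, elliptic crown modelling, and crown fusion are not reproduced; `csm`, `sigma`, `window`, and `min_height` are hypothetical inputs.

```python
# Illustrative LM detection on a smoothed canopy surface model (CSM).
import numpy as np
from scipy import ndimage

def detect_tree_tops(csm: np.ndarray, sigma: float = 1.0,
                     window: int = 5, min_height: float = 2.0) -> np.ndarray:
    """Return a boolean mask of candidate tree tops in a CSM raster."""
    smoothed = ndimage.gaussian_filter(csm, sigma=sigma)   # degree of CSM filtering
    local_max = ndimage.maximum_filter(smoothed, size=window)
    # A cell is a candidate tree top if it equals its neighbourhood maximum
    # and exceeds a minimum vegetation height threshold.
    return (smoothed == local_max) & (smoothed > min_height)
```

Varying `sigma` plays the role of the multiple filtering levels mentioned above: smaller values keep more maxima (fewer omissions, more commissions), larger values merge crowns.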

Vu V.-V., University Pierre and Marie Curie | Labroche N., University Pierre and Marie Curie | Bouchon-Meunier B., CNRS Laboratory for Informatics
Pattern Recognition

In this article, we address the problem of automatic constraint selection to improve the performance of constraint-based clustering algorithms. To this end, we propose a novel active learning algorithm that relies on a k-nearest neighbors graph and a new constraint utility function to generate queries to the human expert. This mechanism is paired with propagation and refinement processes that limit the number of constraint candidates and introduce a minimal diversity in the proposed constraints. Existing constraint selection heuristics are based on random selection or on a min-max criterion, and are thus either inefficient or better suited to spherical clusters. In contrast, our method is designed to benefit all constraint-based clustering algorithms. Comparative experiments conducted on real datasets with two distinct, representative constraint-based clustering algorithms show that our approach significantly improves clustering quality while minimizing the number of human expert solicitations. © 2011 Elsevier Ltd. All rights reserved.
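A hedged sketch of the general idea: build a k-nearest-neighbors graph and query the expert on point pairs selected by a utility score, while keeping some diversity among queried points. The utility proxy below (long k-NN edges are more likely to cross cluster boundaries) is an assumption for illustration only, not the paper's utility function, and the propagation and refinement steps are omitted.

```python
# Toy active constraint selection over a k-NN graph.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def select_queries(X: np.ndarray, k: int = 5, n_queries: int = 10):
    """Return index pairs (i, j) to submit as must-link/cannot-link queries."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    dist, idx = nn.kneighbors(X)                # idx[:, 0] is the point itself
    # Assumed utility proxy: longer k-NN edges are more informative to label.
    pairs = [(i, int(idx[i, j]), dist[i, j])
             for i in range(len(X)) for j in range(1, k + 1)]
    pairs.sort(key=lambda p: -p[2])
    seen, queries = set(), []
    for i, j, _ in pairs:
        if i not in seen and j not in seen:     # minimal diversity: avoid
            queries.append((i, j))              # re-querying the same points
            seen.update((i, j))
        if len(queries) == n_queries:
            break
    return queries
```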

El Din M.S., CNRS Laboratory for Informatics | Zhi L., Key Laboratory of Mathematics Mechanization
SIAM Journal on Optimization

Let P = {h1, . . . , hs} ⊂ Z[Y1, . . . , Yk], let D ≥ deg(hi) for 1 ≤ i ≤ s, let σ bound the bit length of the coefficients of the hi's, and let Φ be a quantifier-free P-formula defining a convex semialgebraic set S. We design an algorithm returning a rational point in S if and only if S ∩ Q^k ≠ ∅. It requires σ^{O(1)} D^{O(k^3)} bit operations. If a rational point is output, its coordinates have bit length dominated by σ D^{O(k^3)}. Using this result, we obtain a procedure for deciding whether a polynomial f ∈ Z[X1, . . . , Xn] is a sum of squares of polynomials in Q[X1, . . . , Xn]. Denote by d the degree of f and by τ the maximum bit length of the coefficients of f, and set D = (n+d choose n) and k ≤ D(D + 1) − (n+2d choose n). This procedure requires τ^{O(1)} D^{O(k^3)} bit operations, and the coefficients of the output polynomials have bit length dominated by τ D^{O(k^3)}. © 2010 Society for Industrial and Applied Mathematics.
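A worked toy example of the Gram-matrix view underlying sum-of-squares (SOS) testing: a polynomial f is SOS iff f = mᵀGm for some positive semidefinite matrix G, where m is a monomial basis of half the degree of f. The floating-point check below is only an illustration of that formulation; the paper's algorithm works over Q with certified bit-complexity bounds, which a numerical test cannot provide.

```python
# f(x) = x^4 + 2x^2 + 1 = (x^2 + 1)^2, with monomial basis m = [1, x, x^2].
# G below satisfies m^T G m = 1 + 2x^2 + x^4, i.e. it is a Gram matrix of f.
import numpy as np

G = np.array([[1.0, 0.0, 1.0],
              [0.0, 0.0, 0.0],
              [1.0, 0.0, 1.0]])
eigenvalues = np.linalg.eigvalsh(G)
print("G is PSD:", np.all(eigenvalues >= -1e-9))   # True -> f is a sum of squares
```

The bound k ≤ D(D + 1) − (n+2d choose n) in the abstract reflects the dimension of the affine set of such Gram matrices: D(D + 1)/2 free entries of a symmetric D × D matrix, minus one linear constraint per coefficient of f.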

Gonzales C., CNRS Laboratory for Informatics | Dubuisson S., CNRS Institute of Robotics and Intelligent Systems
International Journal of Computer Vision

The particle filter (PF) is a method dedicated to posterior density estimation using weighted samples whose elements are called particles. This approach can be applied to object tracking in video sequences in complex situations; in this paper, we focus on articulated object tracking, i.e., tracking objects that can be decomposed into a set of subparts. One of PF's crucial steps is resampling, in which particles are resampled to avoid degeneracy problems. In this paper, we propose to exploit mathematical properties of articulated objects to swap conditionally independent subparts of the particles in order to generate new particle sets. We then introduce a new resampling method called Combinatorial Resampling (CR) that resamples over the particle set resulting from all the “admissible” swappings, the so-called combinatorial set. In essence, CR is quite similar to the combination of a crossover operator and a usual resampling, but there is a fundamental difference between CR and the use of crossover operators: we prove that CR is sound, i.e., in a Bayesian framework, it is guaranteed to represent without any bias the posterior densities of the states over time. By construction, the particle sets produced by CR better represent the density to estimate over the whole state space than the original set; therefore, CR produces higher-quality samples. The combinatorial set is generally of exponential size, however, so for scalability we show how it can be implicitly constructed and resampled from, resulting in a resampling scheme that is both efficient and effective. Finally, through experiments on challenging synthetic and real video sequences, we show that our resampling method outperforms all classical resampling methods, both in the quality of its results and in computation times. © 2014, Springer Science+Business Media New York.
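A minimal sketch of the swapping idea behind Combinatorial Resampling: when two subparts of an articulated object are conditionally independent given the rest of the state, their values can be exchanged between particles to diversify the set. The admissibility structure and the implicit construction of the exponential combinatorial set from the paper are not reproduced; the flat state layout and column slice here are assumptions for illustration.

```python
# Explicit swap of one conditionally independent subpart across particles.
import numpy as np

rng = np.random.default_rng(0)

def swap_subpart(particles: np.ndarray, subpart: slice) -> np.ndarray:
    """Exchange one subpart's state among particles via a random pairing.

    particles: (n_particles, state_dim) array; `subpart` selects the columns
    holding that subpart's state (an assumed layout).
    """
    out = particles.copy()
    perm = rng.permutation(len(particles))      # random re-pairing of particles
    out[:, subpart] = particles[perm, subpart]  # swap the subpart states
    return out

# Usage: after a standard resampling step, diversify the equally weighted set
# by swapping, e.g., the subpart stored in columns 2:4.
resampled = rng.normal(size=(100, 6))
diversified = swap_subpart(resampled, slice(2, 4))
```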

Labroche N., CNRS Laboratory for Informatics

This paper describes two new online fuzzy clustering algorithms based on medoids. These algorithms have been developed to deal with either very large datasets that do not fit in main memory or data streams in which data are produced continuously. The innovative aspect of our approach is the combination of fuzzy methods, which are well adapted to outliers and overlapping clusters, with medoids, together with the introduction of a decay mechanism to adapt more effectively to changes over time in the data streams. The use of medoids instead of means makes it possible to deal with non-numerical data (e.g. sequences, …) and improves the interpretability of the cluster centers. Experiments conducted on artificial and real datasets show that our new algorithms are competitive with state-of-the-art clustering algorithms in terms of partition purity, F1 score, and computation times. Finally, experiments conducted on artificial data streams show the benefit of our decay mechanism in the case of evolving distributions. © 2013 Elsevier B.V.
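A hedged sketch of what a decay mechanism buys in a streaming setting: weight each observation by an exponential decay in its age, so the medoid tracks a drifting distribution. The fuzzy membership machinery of the paper's algorithms is not reproduced; this shows only how decayed weights shift a single cluster's medoid, with a hypothetical `decay` rate and 1-D points for brevity.

```python
# Medoid selection under exponentially decayed point weights.
import numpy as np

def decayed_medoid(points: np.ndarray, timestamps: np.ndarray,
                   now: float, decay: float = 0.01) -> int:
    """Return the index of the medoid under decayed point weights."""
    w = np.exp(-decay * (now - timestamps))           # older points count less
    dist = np.abs(points[:, None] - points[None, :])  # pairwise 1-D distances
    cost = dist @ w                                   # weighted total dissimilarity
    return int(np.argmin(cost))

# Usage: recent points dominate, so the medoid follows the newer mode.
pts = np.array([0.0, 0.1, 0.2, 5.0, 5.1])
ts = np.array([0.0, 1.0, 2.0, 99.0, 100.0])
print(decayed_medoid(pts, ts, now=100.0))  # picks a point in the recent cluster
```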
