

University of Technology of Compiègne, France

Piro P., University of Nice Sophia Antipolis | Piro P., Italian Institute of Technology | Nock R., University of Antilles Guyane | Nielsen F., Ecole Polytechnique - Palaiseau | And 2 more authors.

Voting rules relying on k-nearest neighbors (k-NN) are an effective tool in countless machine learning techniques. Thanks to its simplicity, k-NN classification is very attractive to practitioners, as it achieves very good performance in several practical applications. However, it suffers from various drawbacks, such as sensitivity to "noisy" instances and poor generalization when dealing with sparse high-dimensional data. In this paper, we tackle the k-NN classification problem at its core by providing a novel k-NN boosting approach. Namely, we propose a supervised learning algorithm, called Universal Nearest Neighbors (UNN), that induces a leveraged k-NN rule by globally minimizing a surrogate risk upper bounding the empirical misclassification rate over training data. Interestingly, this surrogate risk can be arbitrarily chosen from a class of Bregman loss functions, including the familiar exponential, logistic and squared losses. Furthermore, we show that UNN makes it possible to efficiently filter a dataset of instances by keeping only a small fraction of the data. Experimental results on the synthetic Ripley's dataset show that such a filtering strategy is able to reject "noisy" examples and yields a classification error close to the optimal Bayes error. Experiments on standard UCI datasets show significant improvements over the current state of the art. © 2011 Elsevier B.V.
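The leveraged k-NN rule described above can be sketched in a few lines of Python. This is a minimal sketch, not the paper's exact procedure: the round-robin prototype choice, the closed-form leveraging step for the exponential surrogate, and all function names are simplifying assumptions.

```python
import math

def knn_indices(X, x, k):
    """Indices of the k nearest training points to x (squared Euclidean)."""
    order = sorted(range(len(X)),
                   key=lambda j: sum((a - b) ** 2 for a, b in zip(X[j], x)))
    return order[:k]

def train_unn(X, y, k, rounds=12, eps=1e-9):
    """Learn one leveraging coefficient alpha_j per prototype by greedily
    minimizing an exponential surrogate risk (binary labels y in {-1, +1})."""
    n = len(X)
    nn = [knn_indices(X, X[i], k) for i in range(n)]
    # reciprocal neighborhoods: R[j] = examples having j among their k-NN
    R = [[i for i in range(n) if j in nn[i]] for j in range(n)]
    w = [1.0] * n                       # boosting weights over training data
    alpha = [0.0] * n                   # leveraging coefficients
    for t in range(rounds):
        j = t % n                       # simplistic round-robin prototype choice
        wp = sum(w[i] for i in R[j] if y[i] == y[j])   # agreeing weight
        wn = sum(w[i] for i in R[j] if y[i] != y[j])   # disagreeing weight
        if wp + wn < eps:
            continue
        delta = 0.5 * math.log((wp + eps) / (wn + eps))
        alpha[j] += delta
        for i in R[j]:                  # multiplicative weight update
            w[i] *= math.exp(-delta * y[i] * y[j])
    return alpha

def predict_unn(X, y, alpha, x, k):
    """Sign of the leveraged vote among the k nearest prototypes."""
    s = sum(alpha[j] * y[j] for j in knn_indices(X, x, k))
    return 1 if s >= 0 else -1
```

Prototypes whose coefficient stays near zero contribute nothing to the vote, which is what makes the filtering strategy mentioned in the abstract possible.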

Nielsen F., Ecole Polytechnique - Palaiseau | Nielsen F., Sony | Nock R., University of Antilles Guyane
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

A Jensen-Bregman divergence is a distortion measure defined by a Jensen convexity gap induced by a strictly convex functional generator. Jensen-Bregman divergences unify the squared Euclidean and Mahalanobis distances with the celebrated information-theoretic Jensen-Shannon divergence, and can further be skewed to include Bregman divergences in limit cases. We study the geometric properties and combinatorial complexities of both the Voronoi diagrams and the centroidal Voronoi diagrams induced by such a class of divergences. We show that Jensen-Bregman divergences occur in two contexts: (1) when symmetrizing Bregman divergences, and (2) when computing the Bhattacharyya distances of statistical distributions. Since the Bhattacharyya distance of popular parametric exponential family distributions in statistics can be computed equivalently as Jensen-Bregman divergences, these skew Jensen-Bregman Voronoi diagrams allow one to define a novel family of statistical Voronoi diagrams. © 2011 Springer-Verlag.
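The Jensen convexity gap lends itself to a direct sketch. The two generators below are the standard special cases the abstract names (Jensen-Shannon and squared Euclidean); the function names themselves are assumptions for illustration.

```python
import math

def jensen_bregman(F, p, q):
    """Jensen-Bregman divergence of generator F:
    JB_F(p, q) = (F(p) + F(q)) / 2 - F((p + q) / 2)."""
    m = [(a + b) / 2 for a, b in zip(p, q)]
    return (F(p) + F(q)) / 2 - F(m)

def neg_entropy(x):
    """F(x) = sum_i x_i log x_i; yields the Jensen-Shannon divergence."""
    return sum(xi * math.log(xi) for xi in x if xi > 0)

def sq_norm(x):
    """F(x) = ||x||^2; yields one quarter of the squared Euclidean distance."""
    return sum(xi * xi for xi in x)
```

With `sq_norm` the gap works out to exactly `||p - q||^2 / 4`, illustrating how the squared Euclidean distance arises as a special case.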

Piro P., University of Nice Sophia Antipolis | Nock R., University of Antilles Guyane | Nielsen F., Sony | Barlaud M., University of Nice Sophia Antipolis
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

The k-nearest neighbors (k-NN) classification rule is still an essential tool for computer vision applications, such as scene recognition. However, k-NN still features some major drawbacks, which mainly reside in the uniform voting among the nearest prototypes in the feature space. In this paper, we propose a new method that is able to learn the "relevance" of prototypes, thus classifying test data using a weighted k-NN rule. In particular, our algorithm, called Multi-class Leveraged k-Nearest Neighbor (MLNN), learns the prototype weights in a boosting framework, by minimizing a surrogate exponential risk over training data. We propose two main contributions for improving computational speed and accuracy. On the one hand, we implement learning in an inherently multiclass way, thus providing significant computation time reduction over one-versus-all approaches. Furthermore, the leveraging weights enable effective data selection, thus reducing the cost of k-NN search at classification time. On the other hand, we propose a kernel generalization of our approach to take into account real-valued similarities between data in the feature space, thus enabling more accurate estimation of the local class density. We tested MLNN on three datasets of natural images. Results show that MLNN significantly outperforms classic k-NN and weighted k-NN voting. Furthermore, using an adaptive Gaussian kernel provides significant performance improvement. Finally, the best results are obtained when using MLNN with an appropriately learned distance metric. © 2011 Springer-Verlag Berlin Heidelberg.
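A minimal sketch of the kernel-weighted voting rule, assuming the leveraging weights have already been learned by the boosting procedure described above; the Gaussian-kernel form and all names below are illustrative assumptions, not the paper's exact formulation.

```python
import math

def mlnn_predict(prototypes, labels, weights, x, k, sigma=1.0):
    """MLNN-style weighted k-NN vote: each of the k nearest prototypes
    contributes its learned leverage weight, modulated here by a Gaussian
    kernel on the squared distance; the class with the largest total wins."""
    d2 = [(sum((a - b) ** 2 for a, b in zip(p, x)), j)
          for j, p in enumerate(prototypes)]
    scores = {}
    for dist2, j in sorted(d2)[:k]:
        kern = math.exp(-dist2 / (2 * sigma ** 2))
        scores[labels[j]] = scores.get(labels[j], 0.0) + weights[j] * kern
    return max(scores, key=scores.get)
```

Setting all weights to one and the kernel to a constant recovers the classic uniform k-NN vote that the paper improves upon.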

Piro P., University of Nice Sophia Antipolis | Barlaud M., University of Nice Sophia Antipolis | Nock R., University of Antilles Guyane | Nielsen F., Ecole Polytechnique - Palaiseau
Lecture Notes in Electrical Engineering

Image classification is a challenging task in computer vision. For example, fully understanding real-world images may involve both scene and object recognition. Many approaches have been proposed to extract meaningful descriptors from images and classify them in a supervised learning framework. In this chapter, we revisit the classic k-nearest neighbors (k-NN) classification rule, which has been shown to be very effective when dealing with local image descriptors. However, k-NN still features some major drawbacks, mainly due to the uniform voting among the nearest prototypes in the feature space. We therefore propose a generalization of the classic k-NN rule in a supervised learning (boosting) framework. Namely, we redefine the voting rule as a strong classifier that linearly combines predictions from the k closest prototypes. In order to induce this classifier, we propose a novel learning algorithm, MLNN (Multiclass Leveraged Nearest Neighbors), which gives a simple procedure for performing prototype selection very efficiently. We tested our method first on object classification using 12 categories of objects, then on scene recognition as well, using 15 real-world categories. Experiments show significant improvement over classic k-NN in terms of classification performance. © 2013 Springer Science+Business Media.
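The prototype-selection step mentioned above can be sketched as a simple threshold on the learned leveraging weights; the threshold rule and all names below are assumptions for illustration, not the chapter's exact procedure.

```python
def select_prototypes(prototypes, labels, weights, threshold=1e-3):
    """Keep only prototypes whose learned leverage weight is meaningfully
    nonzero, shrinking the reference set searched at classification time."""
    kept = [(p, c, w) for p, c, w in zip(prototypes, labels, weights)
            if abs(w) > threshold]
    P = [p for p, _, _ in kept]   # retained prototype vectors
    C = [c for _, c, _ in kept]   # their class labels
    W = [w for _, _, w in kept]   # their leverage weights
    return P, C, W
```

Since k-NN search cost grows with the size of the reference set, dropping zero-weight prototypes speeds up classification without changing the leveraged vote.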

Musard M., University of Franche Comte | Poggi M.-P., University of Antilles Guyane
Physical Education and Sport Pedagogy

Introduction: The aims of this literature review were to characterize the communications presented during six Association for Research on Intervention in Sport (ARIS) French-speaking congresses from 2000 to 2010 and to compare research trends between the French and English research traditions. The definition of pedagogy is close to the notion of intervention and attempts to study, in an interacting or isolated way, three key components: the educator (professor, coach, teacher educator), the participants (students, children, adults) and the curriculum/knowledge in specific contexts. The theoretical framework is centred on the social construction of scientific knowledge.

Method: All the communications of the six ARIS congresses (n = 836), reflecting the multiple facets of educational research, were analysed. The quantitative treatment of the data, with the assistance of Sphinx software, consisted of univariate (frequencies, percentages) and bivariate (chi-square statistic) analyses to identify possible significant relations between variables such as the congress, country and sex of author(s), institution, and so on.

Findings: The results highlight the commonalities and differences between French and English research on intervention/pedagogy. We notice the continuing expansion of the field of intervention/pedagogy in both the Francophone and Anglophone worlds, but the low participation of practitioners shows how difficult it is to move educational research closer to everyday practice. The results show that the contexts studied within ARIS are mainly school physical education and teacher education, followed by coaching and other contexts. The majority of studies are centred on one component of pedagogy, but rarely on the interactions between the educators and the participants. Francophone research is essentially descriptive and heuristic, using mostly qualitative methods (interviews and observations). This orientation of Francophone studies towards heuristic research seems to contrast with Anglophone research, even though qualitative research has recently come to dominate there as well.

Conclusion: The ARIS researchers have a common object of study (intervention/pedagogy), share the same ideas and aims, learn to do their work better as they interact regularly, and develop a shared repertoire of knowledge, theories and methods. We see here signs of an undeniable wealth of knowledge, and the combination of different theories and methods leads to a better understanding of educational phenomena. © 2013 Association for Physical Education.
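The bivariate analysis mentioned in the Method section can be illustrated with a minimal pure-Python Pearson chi-square statistic on a contingency table; this is a generic sketch of the statistic, not a reproduction of the Sphinx software's analysis.

```python
def chi_square(table):
    """Pearson chi-square statistic for a contingency table (list of rows),
    as used in bivariate analyses relating variables such as congress,
    country and institution."""
    rows = [sum(r) for r in table]           # row marginals
    cols = [sum(c) for c in zip(*table)]     # column marginals
    total = sum(rows)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = rows[i] * cols[j] / total  # expected count under independence
            stat += (obs - exp) ** 2 / exp
    return stat
```

Comparing the statistic against the chi-square distribution with `(rows - 1) * (cols - 1)` degrees of freedom then gives the significance of the relation between the two variables.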
