Laboratoire Jean Kuntzmann

Grenoble, France

Cevikalp H.,Eskişehir Osmangazi University | Triggs B.,Laboratoire Jean Kuntzmann
Pattern Recognition | Year: 2013

We introduce a large margin linear binary classification framework that approximates each class with a hyperdisk - the intersection of the affine support and the bounding hypersphere of its training samples in feature space - and then finds the linear classifier that maximizes the margin separating the two hyperdisks. We contrast this with Support Vector Machines (SVMs), which find the maximum-margin separator of the pointwise convex hulls of the training samples, arguing that replacing convex hulls with looser convex class models such as hyperdisks provides safer margin estimates that improve the accuracy on some problems. Both the hyperdisks and their separators are found by solving simple quadratic programs. The method is extended to nonlinear feature spaces using the kernel trick, and multi-class problems are dealt with by combining binary classifiers in the same ways as for SVMs. Experiments on a range of data sets show that the method compares favourably with other popular large margin classifiers. © 2012 Elsevier Ltd.
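For illustration, the geometric construction can be sketched in a few lines of NumPy. The snippet below is not the authors' quadratic-programming formulation: it approximates each hyperdisk with a bounding ball centred at the class centroid (rather than the minimal bounding hypersphere), finds the closest pair of points on the two hyperdisks by alternating projections (which converges for two compact convex sets), and takes the perpendicular bisector of the joining segment as the linear separator.

```python
import numpy as np

def fit_hyperdisk(X):
    """Hyperdisk of one class: the affine hull of its samples clipped to a bounding ball.
    Returns (c, V, r): the centroid, an orthonormal basis of the affine support (rows),
    and the radius of a bounding ball centred at the centroid (a stand-in for the
    minimal bounding hypersphere of the paper)."""
    c = X.mean(axis=0)
    _, s, Vt = np.linalg.svd(X - c, full_matrices=False)
    V = Vt[s > 1e-10 * s.max()]                      # directions spanning the affine support
    r = np.linalg.norm(X - c, axis=1).max()          # bounding-ball radius
    return c, V, r

def project_hyperdisk(x, disk):
    """Euclidean projection onto a hyperdisk: project onto the affine support, then
    clip radially to the bounding ball (exact because the ball centre lies in the support)."""
    c, V, r = disk
    p = c + V.T @ (V @ (x - c))
    d = np.linalg.norm(p - c)
    return p if d <= r else c + (r / d) * (p - c)

def hyperdisk_separator(X_pos, X_neg, n_iter=200):
    """Closest points of the two hyperdisks via alternating projections; the classifier
    is the perpendicular bisector of the joining segment (degenerate if the disks overlap)."""
    disk_pos, disk_neg = fit_hyperdisk(X_pos), fit_hyperdisk(X_neg)
    p = X_pos.mean(axis=0)
    for _ in range(n_iter):
        q = project_hyperdisk(p, disk_neg)
        p = project_hyperdisk(q, disk_pos)
    w = p - q
    b = -w @ (p + q) / 2.0
    return w, b                                      # predict with np.sign(w @ x + b)
```

The kernelized and multi-class extensions mentioned in the abstract are not covered by this sketch.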


Cevikalp H.,Eskişehir Osmangazi University | Triggs B.,Laboratoire Jean Kuntzmann | Franc V.,Czech Technical University
2013 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition, FG 2013 | Year: 2013

In this paper, we consider face detection along with facial landmark localization, inspired by recent studies showing that incorporating object parts improves detection accuracy. To this end, we train root and part detectors, where the root detector returns candidate image regions that cover the entire face, and the part detector searches for the landmark locations within each candidate region. We use a cascade of binary and one-class classifiers for root detection and an SVM-like learning algorithm for part detection. Our proposed face detector outperforms most of the successful face detection algorithms in the literature and gives the second best result on all tested challenging face detection databases. Experimental results show that including parts improves the detection performance when face images are large and the details of the eyes and mouth are clearly visible, but does not introduce any improvement when the images are small. © 2013 IEEE.
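The root-plus-parts architecture can be summarized by a small search loop. The sketch below is purely structural: `root_score` and `part_score` are hypothetical stand-ins for the trained cascade of binary/one-class classifiers and the SVM-like part model, and the window size, stride, and landmark search radius are illustrative values, not the paper's settings.

```python
import numpy as np

def detect_faces(img, root_score, part_score, part_offsets, win=64, stride=16, thresh=0.0):
    """Two-stage sketch: the root detector proposes face-sized windows, then each
    landmark is localised by searching a small neighbourhood around its nominal
    offset inside the accepted window."""
    H, W = img.shape
    detections = []
    for y in range(0, H - win + 1, stride):
        for x in range(0, W - win + 1, stride):
            window = img[y:y + win, x:x + win]
            if root_score(window) < thresh:              # cheap cascade-style rejection
                continue
            landmarks = []
            for dy, dx in part_offsets:                   # nominal landmark positions
                best, best_pos = -np.inf, (dy, dx)
                for ny in range(max(dy - 4, 0), min(dy + 5, win)):
                    for nx in range(max(dx - 4, 0), min(dx + 5, win)):
                        s = part_score(window, (ny, nx))
                        if s > best:
                            best, best_pos = s, (ny, nx)
                landmarks.append((y + best_pos[0], x + best_pos[1]))
            detections.append(((y, x, win), landmarks))
    return detections
```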


Cevikalp H.,Eskişehir Osmangazi University | Triggs B.,Laboratoire Jean Kuntzmann | Yavuz H.S.,Eskişehir Osmangazi University | Kucuk Y.,Anadolu University | And 2 more authors.
Neurocomputing | Year: 2010

This paper introduces a geometrically inspired large margin classifier that can be a better alternative to support vector machines (SVMs) for classification problems with a limited number of training samples. In contrast to the SVM classifier, we approximate classes with the affine hulls of their samples rather than convex hulls. For any pair of classes approximated with affine hulls, we introduce two methods for finding the best separating hyperplane between them. In the first proposed formulation, we compute the closest points on the affine hulls of the classes and connect these two points with a line segment. The optimal separating hyperplane between the two classes is chosen to be the hyperplane that is orthogonal to the line segment and bisects it. The second formulation is derived by modifying the ν-SVM formulation. Both formulations are extended to the nonlinear case by using the kernel trick. Based on our findings, we also develop a geometric interpretation of the least squares SVM classifier and show that it is a special case of the proposed method. Multi-class classification problems are dealt with by constructing and combining several binary classifiers, as with SVMs. Experiments on several databases show that the proposed methods work as well as the SVM classifier, if not better. © 2010 Elsevier B.V.
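The first formulation lends itself to a compact linear-algebra sketch: the closest points of the two affine hulls solve an ordinary least-squares problem, and the separator is the hyperplane orthogonal to, and bisecting, the segment that joins them. This is an illustrative reading of the abstract, not the authors' exact implementation; the kernel and ν-SVM-style variants are omitted.

```python
import numpy as np

def affine_hull(X):
    """Affine hull of the class samples: centroid plus an orthonormal basis of directions."""
    c = X.mean(axis=0)
    _, s, Vt = np.linalg.svd(X - c, full_matrices=False)
    return c, Vt[s > 1e-10 * s.max()]

def affine_hull_separator(X_pos, X_neg):
    """Closest points of the two affine hulls (a linear least-squares problem), then the
    hyperplane orthogonal to the joining segment and passing through its midpoint."""
    (c1, V1), (c2, V2) = affine_hull(X_pos), affine_hull(X_neg)
    # Minimise ||(c1 + V1^T a) - (c2 + V2^T b)|| over the hull coordinates a and b.
    A = np.hstack([V1.T, -V2.T])
    coef, *_ = np.linalg.lstsq(A, c2 - c1, rcond=None)
    p = c1 + V1.T @ coef[:V1.shape[0]]
    q = c2 + V2.T @ coef[V1.shape[0]:]
    w = p - q                                     # near zero if the affine hulls intersect
    b = -w @ (p + q) / 2.0
    return w, b                                   # predict with np.sign(w @ x + b)
```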


Krysta M.,Laboratoire des Ecoulements Geophysiques et Industriels | Krysta M.,Laboratoire Jean Kuntzmann | Blayo E.,Laboratoire Jean Kuntzmann | Cosme E.,Laboratoire des Ecoulements Geophysiques et Industriels | Verron J.,Laboratoire des Ecoulements Geophysiques et Industriels
Monthly Weather Review | Year: 2011

In the standard four-dimensional variational data assimilation (4D-Var) algorithm the background error covariance matrix B remains static over time. It may therefore be unable to correctly take into account the information accumulated by a system into which data are gradually being assimilated. A possible method for remedying this flaw is presented and tested in this paper. The hybrid variational-smoothing algorithm is based on a reduced-rank incremental 4D-Var. Its consistent coupling to a singular evolutive extended Kalman (SEEK) smoother ensures the evolution of the B matrix. In the analysis step, a low-dimensional error covariance matrix is updated so as to take into account the increased confidence level in the state vector it describes, once the observations have been introduced into the system. In the forecast step, the basis spanning the corresponding control subspace is propagated via the tangent linear model. The hybrid method is implemented and tested in twin experiments employing a shallow-water model. The background error covariance matrix is initialized using an EOF decomposition of a sample of model states. The quality of the analyses and the information content in the bases spanning the control subspaces are also assessed. Several numerical experiments are conducted that differ in the initialization of the B matrix. The feasibility of the method is illustrated. Since the improvement due to the hybrid method is not universal, the configurations that benefit from employing it instead of the standard 4D-Var are described, and possible reasons for this are proposed. © 2011 American Meteorological Society.
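The reduced-rank analysis step described here can be sketched with small dense matrices. The snippet below is a generic SEEK-type update, not the authors' implementation: it assumes a linear(ized) observation operator H and a reduced factorization of the error covariance P ≈ S U Sᵀ, and it leaves the forecast step (propagation of the basis S by the tangent linear model) aside.

```python
import numpy as np

def seek_analysis(x_f, S, U_f, H, R, y):
    """One SEEK-type analysis step in the reduced control subspace.
    x_f : (n,) forecast state;   S : (n, r) basis of the control subspace
    U_f : (r, r) reduced forecast error covariance, so that P_f ~= S @ U_f @ S.T
    H   : (m, n) linearized observation operator;   R : (m, m) observation error covariance
    y   : (m,) observation vector
    Returns the analysis state and the updated (reduced) covariance, reflecting the
    increased confidence once the observations have been assimilated."""
    HS = H @ S
    U_a = np.linalg.inv(np.linalg.inv(U_f) + HS.T @ np.linalg.solve(R, HS))
    innovation = y - H @ x_f
    x_a = x_f + S @ (U_a @ (HS.T @ np.linalg.solve(R, innovation)))
    return x_a, U_a
```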


Veres D.,CNRS Laboratory for Glaciology and Environmental Geophysics | Veres D.,Romanian Academy of Sciences | Bazin L.,French National Center for Scientific Research | Landais A.,French National Center for Scientific Research | And 13 more authors.
Climate of the Past | Year: 2013

The deep polar ice cores provide reference records commonly employed in the global correlation of past climate events. However, temporal divergences reaching up to several thousand years (ka) exist between ice cores over the last climatic cycle. In this context, we introduce the Antarctic Ice Core Chronology 2012 (AICC2012), a new and coherent timescale developed for four Antarctic ice cores, namely Vostok, EPICA Dome C (EDC), EPICA Dronning Maud Land (EDML) and Talos Dome (TALDICE), alongside the Greenlandic NGRIP record. The AICC2012 timescale has been constructed using the Bayesian tool Datice (Lemieux-Dudon et al., 2010), which combines glaciological inputs and data constraints, including a wide range of relative and absolute gas and ice stratigraphic markers. We focus here on the last 120 ka, whereas the companion paper by Bazin et al. (2013) focuses on the interval 120-800 ka. Compared to previous timescales, AICC2012 presents an improved timing for the last glacial inception, respecting the glaciological constraints of all analyzed records. Moreover, with the addition of numerous new stratigraphic markers and an improved calculation of the lock-in depth (LID) based on δ15N data employed as the Datice background scenario, AICC2012 presents a slightly improved timing for the bipolar sequence of events over Marine Isotope Stage 3 associated with the seesaw mechanism, with maximum differences of about 600 yr with respect to the previous Datice-derived chronology of Lemieux-Dudon et al. (2010), hereafter denoted LD2010. Our improved scenario confirms the regional differences in millennial-scale variability over the last glacial period: while the EDC isotopic record (events of triangular shape) displays peaks roughly at the same time as the NGRIP abrupt isotopic increases, the EDML isotopic record (events characterized by broader peaks or even extended periods of high isotope values) reaches its isotopic maximum several centuries earlier. It is expected that the future contribution of both other long ice core records and other types of chronological constraints to the Datice tool will lead to further refinements in the ice core chronologies beyond the AICC2012 chronology. For the time being, however, we recommend that AICC2012 be used as the preferred chronology for the Vostok, EDC, EDML and TALDICE ice core records, both over the last glacial cycle (this study) and beyond (following Bazin et al., 2013). The ages for NGRIP in AICC2012 are virtually identical to those of GICC05 for the last 60.2 ka, whereas the ages beyond are independent of those in GICC05modelext (as in the construction of AICC2012, GICC05modelext was included only via the background scenarios and not as age markers). As such, where issues of phasing between the Antarctic records included in AICC2012 and NGRIP are involved, the NGRIP ages in AICC2012 should be used to avoid introducing false offsets. However, for issues involving only Greenland ice cores, there is not yet a strong basis to recommend superseding GICC05modelext as the age scale for Greenland ice cores. © Author(s) 2013. CC Attribution 3.0 License.


Bonnivard M.,Laboratoire Jean Kuntzmann | Bucur D.,CNRS Mathematics Laboratory
Journal of Mathematical Fluid Mechanics | Year: 2012

Relying on the effect of microscopic asperities, one can mathematically justify that viscous fluids adhere completely to the boundary of an impermeable domain. The rugosity effect accounts asymptotically for the transformation of complete-slip boundary conditions on a rough surface into total-adherence boundary conditions, as the amplitude of the rugosities vanishes. The decreasing rate (average velocity divided by the amplitude of the rugosities) computed on nearby flat layers is strongly influenced by the geometry. Recent results prove that this ratio has a uniform upper bound for certain geometries, such as periodic and "almost Lipschitz" boundaries. The purpose of this paper is to prove that such a result holds for arbitrary (non-periodic) crystalline boundaries and general (non-smooth) periodic boundaries. © 2011 Springer Basel AG.


Jakabcin L.,Laboratoire Jean Kuntzmann
ESAIM - Control, Optimisation and Calculus of Variations | Year: 2016

We study a model for visco-elasto-plastic deformation with fracture, in which fracture is approximated via a diffuse interface model. We show that a discretized-in-time quasistatic evolution converges to a solution of the continuous-in-time evolution, thereby proving the existence of a solution to our model. © EDP Sciences, SMAI, 2015.
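The time discretization used in the existence argument follows the usual incremental pattern: at each time step the state is obtained by minimizing an energy for the current loading, starting from the previous state. The toy skeleton below only illustrates that pattern; the paper's visco-elasto-plastic energy with a diffuse-interface fracture term is far richer than the hypothetical one-variable functional used here.

```python
import numpy as np
from scipy.optimize import minimize

def incremental_evolution(energy, u0, loads):
    """Generic time-discretised quasistatic scheme: minimise the energy at each load
    step, warm-started from the previous minimiser (a sketch only; not the paper's
    visco-elasto-plastic / diffuse-fracture functional)."""
    states = [np.atleast_1d(np.asarray(u0, dtype=float))]
    for load in loads:
        res = minimize(lambda u: energy(u, load), states[-1])
        states.append(res.x)
    return states

# Hypothetical single-degree-of-freedom example: a hardening spring pulled by a ramp load.
toy_energy = lambda u, t: 0.5 * (u[0] - t) ** 2 + 0.25 * u[0] ** 4
path = incremental_evolution(toy_energy, 0.0, np.linspace(0.0, 1.0, 11))
```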


Doyen L.,Laboratoire Jean Kuntzmann
Computational Statistics and Data Analysis | Year: 2012

An imperfect maintenance model for repairable systems is considered. Corrective Maintenances (CM) are assumed to be minimal, i.e. As Bad As Old (ABAO), and are considered to be left- and right-censored. Preventive Maintenances (PM) are assumed to be performed at predetermined planned times and to follow a Brown-Proschan (BP) model, i.e. they renew the system (As Good As New, AGAN) with probability p and are minimal (ABAO) with probability 1-p. The BP PM effects (AGAN or ABAO) are assumed not to be observed. In this context, different methods (maximum likelihood, moment estimation, and the expectation-maximization algorithm) are considered in order to jointly estimate the PM efficiency parameter p and the parameters of the first-time-to-failure distribution of the new, unmaintained system. A method to individually assess the effect of each PM is also proposed. Finally, some reliability characteristics are computed: the failure and cumulative failure intensities, the reliability, and the expected cumulative number of failures. All the corresponding algorithms are detailed and applied to a real maintenance data set from an electric power plant. © 2012 Elsevier B.V. All rights reserved.
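As a concrete reading of the model, the failure process can be simulated directly: between renewals the system fails according to a non-homogeneous Poisson process (minimal CMs), and each planned PM renews it with probability p. The Monte Carlo sketch below assumes a power-law (Weibull-type) initial intensity λ(a) = αβa^(β−1); it illustrates the model itself, not the estimation methods (ML, moments, EM) studied in the paper, and all parameter values are hypothetical.

```python
import numpy as np

def simulate_bp_system(alpha, beta, p, pm_times, horizon, rng=None):
    """Simulate failure (CM) times of one system under the Brown-Proschan PM model.
    CMs are minimal (ABAO): between renewals the process is an NHPP with intensity
    alpha*beta*a**(beta-1), a being the age since the last renewal.  Each planned PM
    is AGAN (virtual age reset to zero) with probability p, and ABAO otherwise."""
    rng = np.random.default_rng() if rng is None else rng
    cum = lambda a: alpha * a ** beta                     # cumulative intensity Lambda(a)
    inv = lambda L: (L / alpha) ** (1.0 / beta)           # its inverse
    failures, renewal, t = [], 0.0, 0.0
    events = sorted(tau for tau in pm_times if tau < horizon) + [horizon]
    for nxt in events:                                    # simulate up to the next PM (or the horizon)
        while True:
            gap = rng.exponential(1.0)                    # Exp(1) increment of Lambda
            cand = renewal + inv(cum(t - renewal) + gap)  # next failure time by inversion
            if cand >= nxt:
                break
            failures.append(cand)
            t = cand                                      # minimal repair: the age keeps running
        t = nxt
        if nxt < horizon and rng.random() < p:            # unobserved BP effect of the PM
            renewal = nxt                                 # AGAN: virtual age reset to zero
    return failures

# Hypothetical usage: yearly PMs over 10 years, wear-out regime (beta > 1).
cms = simulate_bp_system(alpha=0.5, beta=2.2, p=0.7, pm_times=range(1, 10), horizon=10.0)
```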


Hussain S.U.,University of Caen Lower Normandy | Triggs B.,Laboratoire Jean Kuntzmann
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2012

Features such as Local Binary Patterns (LBP) and Local Ternary Patterns (LTP) have been very successful in a number of areas including texture analysis, face recognition and object detection. They are based on the idea that small patterns of qualitative local gray-level differences contain a great deal of information about higher-level image content. Current local pattern features use hand-specified codings that are limited to small spatial supports and coarse gray-level comparisons. We introduce Local Quantized Patterns (LQP), a generalization that uses lookup-table-based vector quantization to code larger or deeper patterns. LQP inherits some of the flexibility and power of visual word representations without sacrificing the run-time speed and simplicity of local pattern ones. We show that it outperforms well-established features including HOG, LBP and LTP and their combinations on a range of challenging object detection and texture classification problems. © 2012 Springer-Verlag.
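The core idea, replacing a hand-specified coding of local comparison patterns by a learned, lookup-table-backed vector quantizer, can be sketched as follows. This is a simplified illustration rather than the paper's method: it uses binary (not ternary) comparisons, scikit-learn's KMeans in place of the paper's codebook learning, and a Python dict as the lookup table; the neighbourhood offsets shown are an arbitrary example.

```python
import numpy as np
from sklearn.cluster import KMeans

def pattern_vectors(img, offsets):
    """Binary comparison pattern at every interior pixel: 1 if the neighbour at the
    given offset is at least as bright as the centre pixel, else 0."""
    H, W = img.shape
    m = max(max(abs(dy), abs(dx)) for dy, dx in offsets)
    center = img[m:H - m, m:W - m]
    bits = [(img[m + dy:H - m + dy, m + dx:W - m + dx] >= center) for dy, dx in offsets]
    return np.stack(bits, axis=-1).reshape(-1, len(offsets)).astype(np.uint8)

def learn_codebook(train_images, offsets, n_words=64):
    """Vector-quantize the observed patterns into n_words visual words."""
    pats = np.vstack([pattern_vectors(im, offsets) for im in train_images])
    return KMeans(n_clusters=n_words, n_init=4, random_state=0).fit(pats)

def lqp_histogram(img, offsets, codebook, lut=None):
    """Image descriptor: histogram of quantized patterns.  The dict lut caches
    pattern -> word assignments so repeated patterns are coded by table lookup."""
    lut = {} if lut is None else lut
    hist = np.zeros(codebook.n_clusters)
    for pat in map(tuple, pattern_vectors(img, offsets)):
        if pat not in lut:
            lut[pat] = int(codebook.predict(np.array([pat]))[0])
        hist[lut[pat]] += 1
    return hist / hist.sum()

# Hypothetical deeper-than-LBP neighbourhood: all 24 offsets within a 5x5 window.
offsets = [(dy, dx) for dy in (-2, -1, 0, 1, 2) for dx in (-2, -1, 0, 1, 2) if (dy, dx) != (0, 0)]
```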


Diaz M.,Laboratoire Jean Kuntzmann | Sturm P.,French Institute for Research in Computer Science and Automation
2011 IEEE International Conference on Computational Photography, ICCP 2011 | Year: 2011

Access to the scene irradiance is a desirable feature in many computer vision algorithms. Applications like BRDF estimation, relighting or augmented reality need measurements of an object's photometric properties, and the simplest way to obtain them is to use a camera. However, the first step necessary to achieve this goal is the computation of the function that relates scene irradiance to image intensities. In this paper we propose to exploit the large variety of an object's appearances in photo collections to recover this non-linear function for each of the cameras that acquired the available images. This process, also known as radiometric calibration, uses an unstructured set of images to recover the cameras' geometric calibration and a 3D scene model using available methods. From this input, the camera response function is estimated for each image. This highly ill-posed problem is made tractable by using appropriate priors. The proposed approach is based on the empirical prior on camera response functions introduced by Grossberg and Nayar. Linear methods are proposed that compute approximate solutions, which are then refined by non-linear least squares optimization. © 2011 IEEE.
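A miniature version of the response-recovery step is shown below. It is not the paper's photo-collection pipeline: it assumes two registered images of the same scene with a known exposure ratio, parameterizes the inverse response with a low-order polynomial basis (a stand-in for the Grossberg-Nayar empirical basis used as the prior in the paper), solves the resulting linear least-squares problem, and then refines with a non-linear least-squares step, mirroring the linear-initialisation-plus-refinement strategy of the abstract.

```python
import numpy as np
from scipy.optimize import least_squares

def g(I, a):
    """Inverse response: identity plus polynomial corrections that vanish at 0 and 1,
    so that g(0) = 0 and g(1) = 1 fix the overall scale."""
    H = np.stack([I ** (c + 2) - I for c in range(len(a))], axis=-1)
    return I + H @ a

def calibrate(I1, I2, k, order=4):
    """Fit g such that g(I2) = k * g(I1) for intensity pairs related by exposure ratio k."""
    H1 = np.stack([I1 ** (c + 2) - I1 for c in range(order)], axis=-1)
    H2 = np.stack([I2 ** (c + 2) - I2 for c in range(order)], axis=-1)
    a0, *_ = np.linalg.lstsq(H2 - k * H1, k * I1 - I2, rcond=None)   # linear initialisation
    # Non-linear refinement of the coefficients and the exposure ratio jointly.
    resid = lambda x: g(I2, x[:-1]) - x[-1] * g(I1, x[:-1])
    sol = least_squares(resid, np.append(a0, k))
    return sol.x[:-1], sol.x[-1]

# Hypothetical two-exposure example with a gamma-like ground-truth response.
E1 = np.linspace(0.01, 0.45, 200)
I1, I2 = E1 ** (1 / 2.2), (2.0 * E1) ** (1 / 2.2)      # exposure ratio k = 2
a, k_hat = calibrate(I1, I2, k=2.0)
irradiance_estimate = g(I1, a)                          # estimate of the scene irradiance E1
```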
