CNRS Lille Research Center in Informatics, Signal and Automatic control

Lille, France

Elvira C.,CNRS Lille Research Center in Informatics, Signal and Automatic control | Chainais P.,CNRS Lille Research Center in Informatics, Signal and Automatic control | Dobigeon N.,National Polytechnic Institute of Toulouse
IEEE Transactions on Signal Processing | Year: 2017

Sparse representations have proven their efficiency in solving a wide class of inverse problems encountered in signal and image processing. Conversely, enforcing the information to be spread uniformly over representation coefficients exhibits relevant properties in various applications, such as robust encoding in digital communications. Antisparse regularization can be naturally expressed through an ℓ∞-norm penalty. This paper derives a probabilistic formulation of such representations. A new probability distribution, referred to as the democratic prior, is first introduced. Its main properties, as well as three random variate generators for this distribution, are derived. This distribution is then used as a prior to promote antisparsity in a Gaussian linear model, yielding a fully Bayesian formulation of antisparse coding. Two Markov chain Monte Carlo algorithms are proposed to generate samples according to the posterior distribution. The first is a standard Gibbs sampler. The second uses Metropolis-Hastings moves that exploit the proximity mapping of the log-posterior distribution. These samples are used to approximate maximum a posteriori and minimum mean square error estimators of both parameters and hyperparameters. Simulations on synthetic data illustrate the performance of the two proposed samplers, for both complete and over-complete dictionaries. All results are compared to the recent deterministic variational FITRA algorithm. © 2016 IEEE.
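The MAP estimate under such an antisparse model amounts to an ℓ∞-regularized least-squares problem. As an illustrative sketch (not the paper's samplers), the code below runs proximal gradient descent on 0.5‖y − Ax‖² + λ‖x‖∞, evaluating the ℓ∞ proximity mapping via the Moreau decomposition, i.e., projection onto an ℓ1 ball; all function names are ours.

```python
import numpy as np

def project_l1_ball(v, radius):
    # Euclidean projection of v onto the l1 ball of the given radius
    if np.abs(v).sum() <= radius:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(u) + 1) > (css - radius))[0][-1]
    theta = (css[rho] - radius) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def prox_linf(v, lam):
    # Moreau decomposition: prox of lam*||.||_inf is v minus its
    # projection onto the l1 ball of radius lam
    return v - project_l1_ball(v, lam)

def antisparse_map(A, y, lam=1.0, n_iter=500):
    # proximal gradient on 0.5*||y - A x||^2 + lam*||x||_inf
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        x = prox_linf(x - grad / L, lam / L)
    return x
```

With an identity dictionary the solution is simply the prox of y, so the iteration can be checked against a closed form; the ℓ∞ penalty clips the largest coefficients, spreading energy toward the others.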

El Ahmar Y.,CEA Saclay Nuclear Research Center | Le Pallec X.,CNRS Lille Research Center in Informatics, Signal and Automatic control | Gerard S.,CEA Saclay Nuclear Research Center
CEUR Workshop Proceedings | Year: 2016

Recent empirical studies about UML showed that software practitioners often use it to communicate. When they use diagram(s) during a meeting with clients/users or during an informal discussion with their architect, they may want to highlight some elements to synchronise the visual support to their discourse. To that end, they are free to use color, size, brightness, grain and/or orientation. The mentioned freedom is due to the lack of formal specifications of their use in the UML standard and refers to what is called the secondary notation, by the Cognitive dimensions framework. According to the Semiology of Graphics (SoG), one of the main references in cartography, each mean of visual annotation is characterized by its perceptual properties. Being under modeler's control, the 5 means of visual annotations can differently be applied to UML graphic components: to the border, text, background and to the related other graphic nodes. In that context, the goal of this research is to study the effective implementations, which maintain the perceptual properties of, especially, the size visual variation. This latter has been chosen because it is considered as the "strongest" among the other visual means, having all the perceptual properties. The present proposal consists of a quantitative methodology using an experiment as strategy of inquiry. The participants will be the ∼ 20 attendees of the HuFaMo workshop. They must be experts on modeling and they know UML. The treatment is the reading and the visual extraction of information from a set of UML sequence diagrams, provided via a web application. The dependent variables we study are the responses and the response times of participants, that will be validated based on the SoG principles. © 2016, CEUR-WS. All rights reserved.

Sun Y.,Sant'Anna School of Advanced Studies | Lipari G.,CNRS Lille Research Center in Informatics, Signal and Automatic control
Proceedings - Real-Time Systems Symposium | Year: 2016

We address the problem of schedulability analysis for a set of sporadic real-time tasks scheduled by the Global Earliest Deadline First (G-EDF) policy on a multiprocessor platform. State-of-the-art tests for schedulability analysis of multiprocessor global scheduling are often incomparable: a task set judged unschedulable by one test may be verified schedulable by another, and vice versa. In this paper, we first develop a new schedulability test that integrates the limited carry-in technique and the Response Time Analysis (RTA) procedure for Global EDF schedulability analysis. We then provide an over-approximate variant of this test with better run-time efficiency, and extend both tests to self-suspending tasks. All schedulability tests proposed in the paper provably dominate their state-of-the-art counterparts. Finally, we conduct extensive comparisons among different schedulability tests; our new tests show significant improvements for schedulability analysis of Global EDF. © 2015 IEEE.
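For context, the sketch below implements two classical ingredients the abstract builds on, not the paper's new tests: the simple GFB utilization test for G-EDF on m processors, and the standard uniprocessor fixed-priority RTA fixed-point iteration. Task parameters are illustrative; implicit deadlines are assumed.

```python
import math

def gfb_schedulable(tasks, m):
    # GFB test: implicit-deadline sporadic tasks, each (wcet, period),
    # are G-EDF schedulable on m processors if U_sum <= m - (m-1)*U_max
    utils = [c / t for c, t in tasks]
    u_sum, u_max = sum(utils), max(utils)
    return u_sum <= m - (m - 1) * u_max

def rta_response_times(tasks):
    # classic uniprocessor fixed-priority RTA; tasks sorted by priority
    # (highest first), each (wcet, period), deadline == period
    resp = []
    for i, (c_i, t_i) in enumerate(tasks):
        r = c_i
        while True:
            # fixed-point iteration on the interference of higher-priority tasks
            nxt = c_i + sum(math.ceil(r / t_j) * c_j for c_j, t_j in tasks[:i])
            if nxt == r or nxt > t_i:
                break
            r = nxt
        resp.append(nxt if nxt <= t_i else None)  # None: deadline miss
    return resp
```

The paper's contribution is precisely to combine an RTA-style iteration of this kind with the limited carry-in technique for the much harder G-EDF multiprocessor case.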

Amor B.B.,CNRS Lille Research Center in Informatics, Signal and Automatic control | Su J.,Texas Tech University | Srivastava A.,Florida State University
IEEE Transactions on Pattern Analysis and Machine Intelligence | Year: 2016

We study the problem of classifying actions of human subjects using depth movies generated by Kinect or other depth sensors. Representing the human body as a dynamical skeleton, we study the evolution of skeletal shapes as trajectories on Kendall's shape manifold. The action data is typically corrupted by large variability in execution rates within and across subjects, which causes major problems in statistical analyses. To address this issue, we adapt a recently developed framework of Su et al. [1], [2] to this problem domain. Here, the variable execution rates correspond to re-parameterizations of trajectories, and one uses a parameterization-invariant metric for aligning, comparing, averaging, and modeling trajectories. This is based on a combination of transported square-root vector fields (TSRVFs) of trajectories and the standard Euclidean norm, which allows computational efficiency. We develop a comprehensive suite of computational tools for this application domain: smoothing and denoising skeleton trajectories using median filtering, up- and down-sampling actions in the time domain, simultaneous temporal registration of multiple actions, and extracting invertible Euclidean representations of actions. Due to invertibility, these Euclidean representations allow both discriminative and generative models for statistical analysis. For instance, they can be used in an SVM-based classification of original actions, as demonstrated here using the MSR Action-3D, MSR Daily Activity, and 3D Action Pairs datasets. Using only the skeletal information, we achieve state-of-the-art classification results on these datasets. © 1979-2012 IEEE.
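A minimal Euclidean sketch of the square-root velocity idea underlying the TSRVF (the paper additionally transports vector fields on Kendall's shape manifold, and full rate invariance further requires optimizing over re-parameterizations, which is omitted here):

```python
import numpy as np

def srvf(x, dt=1.0):
    # square-root velocity field of a Euclidean trajectory x of shape (T, d):
    # q(t) = v(t) / sqrt(|v(t)|), with v the velocity
    v = np.gradient(x, dt, axis=0)
    speed = np.linalg.norm(v, axis=1, keepdims=True)
    return v / np.sqrt(np.maximum(speed, 1e-12))

def srvf_distance(x1, x2, dt=1.0):
    # L2 distance between SRVFs; a rate-sensitive precursor of the
    # parameterization-invariant metric (invariance needs alignment)
    q1, q2 = srvf(x1, dt), srvf(x2, dt)
    return np.sqrt(np.sum((q1 - q2) ** 2) * dt)
```

In the full framework this L2 distance, computed after optimal re-parameterization and parallel transport, becomes the invariant metric used for averaging and modeling skeleton trajectories.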

Boiret A.,Lille University of Science and Technology | Boiret A.,CNRS Lille Research Center in Informatics, Signal and Automatic control
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2016

We study a subclass of tree-to-word transducers: linear tree-to-word transducers, which cannot use several copies of the input. We aim to study the equivalence problem on this class, using minimization and normalization techniques. We identify a Myhill-Nerode characterization. It provides a minimal normal form on our class, computable in Exptime. This paper extends an existing result on tree-to-word transducers without copy or reordering (sequential tree-to-word transducers), by accounting for all possible reorderings in the output. © Springer International Publishing Switzerland 2016.

Chainais P.,CNRS Lille Research Center in Informatics, Signal and Automatic control | Leray A.,Laboratory Interdisciplinaire Carnot de Bourgogne
IEEE Transactions on Image Processing | Year: 2016

The registration process is a key step for super-resolution (SR) reconstruction. More and more devices make it possible to overcome this bottleneck using a controlled positioning system, e.g., sensor shifting using a piezoelectric stage. This makes it possible to acquire multiple images of the same scene at different controlled positions, after which a fast SR algorithm can be used for efficient SR reconstruction. In this case, the optimal use of r² images for a resolution enhancement factor r is generally not enough to obtain satisfying results, due to the random inaccuracy of the positioning system. Thus, we propose to take several images around each reference position. We study the error produced by the SR algorithm due to spatial uncertainty as a function of the number of images per position. We obtain a lower bound on the number of images that is necessary to ensure a given error upper bound with probability higher than some desired confidence level. Such results provide valuable guidance for the design of SR systems. © 1992-2012 IEEE.
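As a simplified model of this trade-off (not the paper's exact bound): if each acquisition's positioning error is i.i.d. N(0, σ²), averaging n images per position shrinks the error standard deviation to σ/√n, and a Gaussian tail bound yields a lower bound on n for a target error ε at a given confidence level:

```python
from math import ceil
from statistics import NormalDist

def images_per_position(sigma, eps, confidence=0.9):
    # smallest n such that the mean of n iid N(0, sigma^2) positioning
    # errors lies within +/- eps with the requested confidence:
    # n >= (z * sigma / eps)^2, z the two-sided Gaussian quantile
    z = NormalDist().inv_cdf(0.5 + confidence / 2.0)
    return ceil((z * sigma / eps) ** 2)
```

For instance, halving the tolerated error ε quadruples the required number of images per position, which is the kind of design guidance the abstract alludes to.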

Devlaminck V.,CNRS Lille Research Center in Informatics, Signal and Automatic control
Journal of the Optical Society of America A: Optics and Image Science, and Vision | Year: 2015

In this paper, we address the issue of the existence of a solution for the depolarizing differential Mueller matrix of a homogeneous medium. Such a medium is characterized by linear changes of its differential optical properties with z, the thickness of the medium. We show that, under a short correlation distance assumption, it is possible to derive such a linear solution, and we make this solution explicit in the particular case where the random fluctuation processes associated with the optical properties are Gaussian white noise-like. A solution to the problem of noncommutativity in a previously proposed model [J. Opt. Soc. Am. 30, 2196 (2013)] is given by assuming a random permutation of the order of the layers and by averaging all the differential matrices resulting from these permutations. It is shown that the underlying assumption in this case is exactly the Gaussian white noise assumption. Finally, a recently proposed approach [Opt. Lett. 39, 4470 (2014)] for the analysis of the statistical properties related to changes in optical properties is revisited, and the experimental conditions under which these results apply are specified. © 2015 Optical Society of America.
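The layer-permutation averaging step can be illustrated on ordinary matrices (a conceptual sketch only; actual Mueller matrices are 4×4, and the paper averages differential matrices under its statistical assumptions):

```python
import itertools
import numpy as np

def permutation_averaged_product(layers):
    # average the (generally non-commuting) ordered matrix product
    # over all permutations of the layer order
    n = len(layers)
    dim = layers[0].shape[0]
    acc = np.zeros((dim, dim))
    count = 0
    for perm in itertools.permutations(range(n)):
        prod = np.eye(dim)
        for i in perm:
            prod = layers[i] @ prod  # apply layer i on top of the stack so far
        acc += prod
        count += 1
    return acc / count
```

When the layer matrices commute, the average reduces to the plain product; when they do not, the average symmetrizes the ordering, which is the role the random-permutation assumption plays in the model.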

Baudry B.,French Institute for Research in Computer Science and Automation | Monperrus M.,CNRS Lille Research Center in Informatics, Signal and Automatic control
ACM Computing Surveys | Year: 2015

Early experiments with software diversity in the mid-1970s investigated N-version programming and recovery blocks to increase the reliability of embedded systems. Four decades later, the literature about software diversity has expanded in multiple directions: goals (fault tolerance, security, software engineering), means (managed or automated diversity), and analytical studies (quantification of diversity and its impact). Our article contributes to the field of software diversity as the first work that adopts an inclusive vision of the area, with an emphasis on the most recent advances in the field. This survey includes classical work about design and data diversity for fault tolerance, as well as the cybersecurity literature that investigates randomization at different system levels. It broadens this standard scope of diversity to include the study and exploitation of natural diversity and the management of diverse software products. Our survey includes the most recent works, with an emphasis on the period from 2000 to the present. The targeted audience is researchers and practitioners in one of the surveyed fields who miss the big picture of software diversity. Assembling the multiple facets of this fascinating topic sheds new light on the field. © 2015 ACM.

Lemaire F.,CNRS Lille Research Center in Informatics, Signal and Automatic control | Temperville A.,CNRS Lille Research Center in Informatics, Signal and Automatic control
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2016

We present new algorithms for computing sparse representations of systems of parametric rational fractions by means of changes of coordinates. Our algorithms are based on computing sparse matrices encoding the degrees of the parameters occurring in the fractions. Our methods facilitate further qualitative analysis of systems involving rational fractions. Contrary to symmetry-based approaches, which reduce the number of parameters, our methods only increase the sparsity, and are thus complementary. Computations previously done by hand can now be fully automated by our methods. © Springer International Publishing AG 2016.
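As an illustration of the bookkeeping involved (not the paper's algorithm), each rational fraction can be encoded by the degrees of the parameters appearing in its numerator and denominator, with sparsity measured as the number of zero entries; the monomial encoding below is ours:

```python
def degree_matrix(fractions, params):
    # fractions: list of (numer, denom); each side is a list of monomials,
    # a monomial being a dict {symbol_name: exponent}
    D = []
    for numer, denom in fractions:
        row = []
        for p in params:
            # degree of parameter p across all monomials of the fraction
            row.append(max([m.get(p, 0) for m in numer + denom] or [0]))
        D.append(row)
    return D

def sparsity(D):
    # number of zero entries: the quantity a good change of
    # coordinates tries to increase
    return sum(1 for row in D for d in row if d == 0)
```

For example, the Michaelis-Menten-style fraction k1*x/(k2 + x) has degree 1 in both parameters k1 and k2, so its row in the degree matrix is [1, 1] and contributes no sparsity before any change of coordinates.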

Bressel M.,University of Franche Comte | Bressel M.,CNRS Lille Research Center in Informatics, Signal and Automatic control | Hilairet M.,University of Franche Comte | Hissel D.,University of Franche Comte | Ould Bouamama B.,CNRS Lille Research Center in Informatics, Signal and Automatic control
IEEE Transactions on Industrial Electronics | Year: 2016

Although the proton exchange membrane fuel cell is a promising clean and efficient energy converter that can power an entire building with electricity and heat in a combined manner, it suffers from a limited lifespan due to degradation mechanisms. Consequently, in past years, research has been conducted to estimate the state of health, and more recently the remaining useful life (RUL), in order to extend the life of such devices. However, the methods developed so far are unable to perform prognostics with online uncertainty quantification, due to the computational cost. This paper aims at tackling this issue by proposing an observer-based prognostic algorithm. An extended Kalman filter estimates the actual state of health and the dynamics of the degradation, with the associated uncertainty. An inverse first-order reliability method is used to extrapolate the state of health until a threshold is reached, for which the RUL is given with a 90% confidence interval. The global method is validated using a simulation model built from degradation data. Finally, the algorithm is tested on a dataset coming from a long-term experimental test on an eight-cell fuel cell stack subjected to a variable power profile. © 1982-2012 IEEE.
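A toy version of the prognostic pipeline, with a linear Kalman filter standing in for the paper's extended Kalman filter and a deterministic extrapolation standing in for the inverse FORM step (all parameter values are illustrative):

```python
import numpy as np

def kalman_rul(measurements, dt, threshold, r=0.01, q=1e-6):
    # state = [health, decay_rate]; model: health decreases linearly,
    # health_{k+1} = health_k - dt * decay_rate
    F = np.array([[1.0, -dt], [0.0, 1.0]])
    H = np.array([[1.0, 0.0]])  # only health is measured
    x = np.array([measurements[0], 0.0])
    P = np.eye(2)
    for z in measurements[1:]:
        # predict
        x = F @ x
        P = F @ P @ F.T + q * np.eye(2)
        # update
        S = H @ P @ H.T + r
        K = (P @ H.T) / S
        x = x + (K * (z - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P
    health, rate = x
    # deterministic extrapolation to the end-of-life threshold
    rul = (health - threshold) / rate if rate > 0 else float('inf')
    return health, rate, rul
```

In the paper, this extrapolation step is replaced by an inverse first-order reliability method, which propagates the filter's state uncertainty to deliver the RUL with a 90% confidence interval rather than a point estimate.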
