Pechberti S., LIVIC Laboratory | Gruyer D., LIVIC Laboratory | Vigneron V., IBISC Laboratory
IEEE Conference on Intelligent Transportation Systems, Proceedings, ITSC

This paper proposes a new radar sensor model for Advanced Driver Assistance Systems (ADAS) prototyping. The model is embedded in the SiVIC platform (Simulator for Vehicle, Infrastructure and Sensors). Many simulators already exist for this purpose, but none is designed to meet the objectives of real-time computation and highly sampled signal generation, and few offer the ability to be integrated into a dynamic platform for ADAS prototyping. In this paper, several radar technologies will be presented. Then, a radar designed especially for the automotive domain will be described, exploring each of its subparts, i.e. the radar antenna and the propagation channel. The generic model, the hypotheses made on electromagnetic waves, and the modelling of environmental objects will also be provided. A first model of a simple duplex radar with Frequency Shift Keying (FSK) modulation is implemented and shown as an illustration of the defined architecture. Finally, in order to optimize the signal generation time, several software architecture solutions will be proposed. © 2012 IEEE.
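
As a side note, the range and velocity relations that a two-frequency FSK duplex radar typically relies on can be sketched in a few lines. The snippet below is a generic illustration, not taken from the paper; the function names and the numerical example are assumptions.

import math

C = 3.0e8  # speed of light (m/s)

def fsk_range(delta_phi, delta_f):
    # Range from the phase difference (rad) between the returns of the two
    # FSK carriers, with delta_f the frequency step (Hz) between them.
    return C * delta_phi / (4.0 * math.pi * delta_f)

def doppler_velocity(f_doppler, f_carrier):
    # Radial velocity from the Doppler shift (Hz) measured on one carrier.
    return C * f_doppler / (2.0 * f_carrier)

# Illustrative numbers only: 77 GHz carrier, 250 kHz step, 0.5 rad phase
# difference, 10 kHz Doppler shift.
print(fsk_range(0.5, 250e3), doppler_velocity(10e3, 77e9))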

Migniot C., IBISC Laboratory | Ababsa F., IBISC Laboratory
Journal of Real-Time Image Processing

This paper addresses the problem of 3D tracking of human gestures for buying behavior estimation. The top view of the customers, which has rarely been used for human tracking, is exploited in this particular context. This point of view avoids occlusions, except those of the arms. We propose a hybrid 3D-2D tracking method based on the particle filtering framework, which uses the exclusion principle to separate the observations related to each customer and deals with multi-person tracking. The head and shoulders are tracked in the 2D space, while the arms are tracked in the 3D space: these are the spaces where they are the most descriptive. We validate our method both experimentally, so as to obtain qualitative results, and on-site. We demonstrate that it provides a good estimation in various cases and situations in real time (~40 fps). © 2014 Springer-Verlag Berlin Heidelberg.
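
For context, a single predict/update/resample cycle of a generic particle filter is sketched below. This is background only, not the authors' implementation; likelihood is a hypothetical observation model that scores a 2D head position against the top-view image.

import numpy as np

def particle_filter_step(particles, likelihood, motion_std=5.0):
    # particles: Nx2 array of candidate (x, y) head positions.
    n = len(particles)
    # Predict: random-walk motion model.
    particles = particles + np.random.normal(0.0, motion_std, particles.shape)
    # Update: weight each particle by the observation likelihood.
    weights = np.array([likelihood(p) for p in particles])
    weights = weights / weights.sum()
    # Resample: draw particles in proportion to their weights.
    idx = np.random.choice(n, size=n, p=weights)
    return particles[idx]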

Didier J.-Y., IBISC Laboratory | Mallem M., IBISC Laboratory
CBSE 2014 - Proceedings of the 17th International ACM SIGSOFT Symposium on Component-Based Software Engineering (Part of CompArch 2014)

When programming software applications, developers have to deal with many functional and non-functional requirements. During the last decade, especially in the augmented reality field of research, many frameworks have been developed using a component-based approach in order to fulfil the non-functional requirements. In this paper, we focus on one such specific requirement: race condition issues in component-based systems. We present a heuristic that analyses data flows and detects components that may be subject to race conditions. A toy example introducing the problem and the solution is developed and implemented with the ARCS (Augmented Reality Component System) framework. We also show the results of our algorithm on real-size applications using up to 70 components and compare them with results obtained by developers who had to do exactly the same work by hand. Copyright © 2014 ACM.
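
One plausible reading of such a data-flow heuristic is sketched below; it is a sketch under stated assumptions, not the actual ARCS algorithm. A component reachable from two or more independently threaded sources is flagged as a potential race condition.

from collections import defaultdict

def potential_races(edges, concurrent_sources):
    # edges: (producer, consumer) data-flow links between components.
    # concurrent_sources: components that emit data from independent threads.
    reachable_from = defaultdict(set)
    for src in concurrent_sources:
        stack = [src]
        while stack:
            node = stack.pop()
            for producer, consumer in edges:
                if producer == node and src not in reachable_from[consumer]:
                    reachable_from[consumer].add(src)
                    stack.append(consumer)
    # A component fed by several independent sources may be subject to races.
    return [c for c, srcs in reachable_from.items() if len(srcs) > 1]

# Example: two sensor components feeding the same tracker on separate threads.
print(potential_races([("camera", "tracker"), ("imu", "tracker")], ["camera", "imu"]))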

Rukubayihunga A., IBISC Laboratory | Didier J.-Y., IBISC Laboratory | Otmane S., IBISC Laboratory
5th International Conference on Image Processing, Theory, Tools and Applications 2015, IPTA 2015

Providing relevant information at the right time and in the right place is the major challenge in augmented reality, especially when it is applied in industry-related applications. Indeed, by superimposing virtual elements on images that capture the real scene, augmented reality has proved its potential and maturity for facilitating maintenance activities, especially in training, repair or inspection. In such systems, instructions for assembly or disassembly actions are displayed linearly to users and triggered sequentially and manually by the operator once each individual step has been completed. In this paper, we explore a metric that allows the system to determine automatically when two objects are assembled, since this provides hints on the current step of the maintenance scenario. The metric is based on pose estimations and reprojection errors, considering the two objects as independent. The first results obtained on both synthetic and real image sequences show that this metric is effective in detecting assembly/disassembly instants. We also provide guidelines on how to integrate this metric into a larger computer vision system designed around maintenance task scenarios delivered through augmented reality. © 2015 IEEE.
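
A hedged sketch of how a pose- and reprojection-based assembly test could be formulated is given below; the function names, the expected assembled transform and the pixel threshold are illustrative assumptions, not the paper's metric.

import numpy as np

def reprojection_error(K, R_a, t_a, R_ab, t_ab, points_b, observed_uv):
    # K: 3x3 camera intrinsics; (R_a, t_a): estimated pose of object A in the
    # camera frame; (R_ab, t_ab): expected pose of B relative to A once assembled;
    # points_b: Nx3 model points of B; observed_uv: Nx2 detected image points of B.
    pts_cam = R_a @ (R_ab @ points_b.T + t_ab.reshape(3, 1)) + t_a.reshape(3, 1)
    proj = K @ pts_cam
    uv = (proj[:2] / proj[2]).T
    # Mean pixel distance between predicted and observed projections of B.
    return np.linalg.norm(uv - observed_uv, axis=1).mean()

def assembled(error, threshold_px=3.0):
    # A small residual suggests B sits where it should relative to A.
    return error < threshold_px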

Boucher M., IBISC Laboratory | Ababsa F., IBISC Laboratory | Mallem M., IBISC Laboratory
Procedia Computer Science

Although historically popular and well established, monocular SLAM is subject to some limitations. The advent of cheap depth sensors has made it possible to circumvent some of them. Related methods frequently focus heavily on depth data; however, these sensors have their own weaknesses. In some cases it is more appropriate to use intensity and depth information equally. We first conduct a few experiments in optimal conditions to determine how to use good-quality information in our monocular-based SLAM. From this, we propose a lightweight SLAM designed for small, constrained environments. © 2014 The Authors. Published by Elsevier B.V.
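
As a minimal sketch of the idea of weighting both modalities equally (an assumption made here, not the authors' formulation), a per-pixel cost could combine a photometric and a geometric residual as follows.

import numpy as np

def joint_residual(i_ref, i_warped, d_ref, d_warped, w_intensity=1.0, w_depth=1.0):
    # Photometric residual between the reference intensity image and the current
    # image warped by the candidate camera motion, plus the corresponding depth
    # residual (in metres), each squared and weighted.
    r_i = i_warped - i_ref
    r_d = d_warped - d_ref
    return w_intensity * r_i**2 + w_depth * r_d**2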
