Allard J., INRIA |
Allard J., Lille University of Science and Technology |
Faure F., INRIA |
Faure F., Joseph Fourier University |
And 7 more authors.
ACM Transactions on Graphics | Year: 2010
We introduce a new method for simulating frictional contact between volumetric objects using interpenetration volume constraints. When applied to complex geometries, our formulation results in dramatically simpler systems of equations than those of traditional mesh contact models. Contact between highly detailed meshes can be simplified to a single unilateral constraint equation, or accurately processed at arbitrary geometry-independent resolution with simultaneous sticking and sliding across contact patches. We exploit fast GPU methods for computing layered depth images, which provides us with the intersection volumes and gradients necessary to formulate the contact equations as linear complementarity problems. Straightforward and popular numerical methods, such as projected Gauss-Seidel, can be used to solve the system. We demonstrate our method in a number of scenarios and present results involving both rigid and deformable objects at interactive rates. © 2010 ACM.
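The abstract formulates contact as a linear complementarity problem (LCP) solved with projected Gauss-Seidel. As a minimal sketch of that solver (not the paper's implementation, and with a hypothetical toy system rather than contact data), the LCP asks for z such that w = Mz + q with z ≥ 0, w ≥ 0, and zᵀw = 0:

```python
import numpy as np

def projected_gauss_seidel(M, q, iterations=100):
    """Solve the LCP  w = M z + q,  z >= 0, w >= 0, z^T w = 0
    by Gauss-Seidel sweeps with projection onto z_i >= 0.
    Converges for symmetric positive definite M."""
    n = len(q)
    z = np.zeros(n)
    for _ in range(iterations):
        for i in range(n):
            # Residual of row i excluding the diagonal term, then
            # the unconstrained update projected onto the nonnegative orthant.
            r = q[i] + M[i] @ z - M[i, i] * z[i]
            z[i] = max(0.0, -r / M[i, i])
    return z

# Toy 2x2 SPD system (illustrative values, not from the paper).
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, 1.0])
z = projected_gauss_seidel(M, q)
w = M @ z + q  # complementarity: z and w are nonnegative with z @ w == 0
```

In a contact setting, z would collect the contact impulses and w the corresponding gap or volume rates; the per-row projection is what makes plain Gauss-Seidel handle the unilateral constraints.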
Ncibi A., INRIA |
Claveau V.,French National Center for Scientific Research |
Gravier G.,French National Center for Scientific Research |
MMEDIA 2013 - 5th International Conference on Advances in Multimedia | Year: 2013
Multi-label video annotation is a challenging task and a necessary first step for further processing. In this paper, we investigate the task of labelling TV stream segments into programs or several types of breaks through machine learning. Our contribution is twofold: 1) we propose simple yet efficient descriptors for this labelling task, 2) we show that Conditional Random Fields (CRFs) are especially well suited for it. In particular, through several experiments, we show that CRFs outperform other machine learning techniques while requiring little training data, thanks to their ability to exploit the different kinds of sequential information present in our data.
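The sequential modelling that a linear-chain CRF brings to segment labelling reduces, at prediction time, to Viterbi decoding over unary (per-segment) and pairwise (transition) scores. A minimal NumPy sketch under assumed toy scores (two hypothetical labels, "program" = 0 and "break" = 1; not the paper's features or model):

```python
import numpy as np

def viterbi_decode(emissions, transitions):
    """Most likely label sequence for a linear-chain model.
    emissions:   T x K array of per-segment label scores.
    transitions: K x K array of label-to-label scores.
    Returns the arg-max path as a list of label indices."""
    T, K = emissions.shape
    score = emissions[0].copy()        # best score ending in each label
    back = np.zeros((T, K), dtype=int)  # backpointers
    for t in range(1, T):
        # cand[i, j]: best path ending in i at t-1, then moving to j.
        cand = score[:, None] + transitions + emissions[t][None, :]
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    # Trace the best path backwards.
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy example: the middle segment weakly looks like a break, but
# sticky transitions keep the whole run labelled "program".
emissions = np.array([[2.0, 0.0], [0.0, 1.0], [2.0, 0.0]])
transitions = np.array([[1.0, -1.0], [-1.0, 1.0]])
labels = viterbi_decode(emissions, transitions)  # -> [0, 0, 0]
```

The example shows the effect the abstract attributes to CRFs: the transition scores let context override a locally ambiguous segment, which per-segment classifiers cannot do.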