Nieto M., Vicomtech IK4 |
Salgado L., Technical University of Madrid
Pattern Recognition Letters | Year: 2011
This paper introduces a new method for the simultaneous computation of sets of lines meeting at multiple vanishing points using the Expectation-Maximisation (EM) algorithm. The proposed method is based on the formulation of novel error functions in the projective plane between lines and points, which involve non-linear optimisation procedures that allow finite and infinite vanishing points to be treated equally. These functions are incorporated into the EM framework, which naturally handles the multi-modality of the problem with mixture models. Results show that the joint computation of vanishing points and the lines converging at them enhances both the robustness and the accuracy of locating these points. © 2011 Elsevier B.V. All rights reserved.
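The scheme described in the abstract can be illustrated with a minimal EM sketch. The algebraic residual l·v below stands in for the paper's projective-plane error functions, and the pairwise-cross initialization is a placeholder; representing each vanishing point as a unit 3-vector is one way to treat finite and infinite points identically, as the abstract suggests.

```python
import numpy as np

def em_vanishing_points(lines, K=2, iters=30, sigma=0.05):
    """Toy EM clustering of homogeneous lines (N, 3) around K vanishing
    points, each kept as a unit 3-vector (so points at infinity need no
    special case). Not the authors' exact error functions."""
    # normalize lines so the residual l.v is scale-independent
    lines = lines / np.linalg.norm(lines[:, :2], axis=1, keepdims=True)
    step = len(lines) // K
    # init each VP as the intersection (cross product) of two nearby lines
    vps = np.array([np.cross(lines[k * step], lines[k * step + 1])
                    for k in range(K)])
    vps /= np.linalg.norm(vps, axis=1, keepdims=True)
    for _ in range(iters):
        # E-step: soft assignment of each line to each vanishing point
        res = lines @ vps.T                          # (N, K) residuals l.v
        w = np.exp(-0.5 * (res / sigma) ** 2)
        w /= w.sum(axis=1, keepdims=True) + 1e-12
        # M-step: each VP is the eigenvector of the weighted scatter with
        # the smallest eigenvalue, minimizing sum_i w_ik (l_i . v)^2
        for k in range(K):
            A = (lines * w[:, k:k + 1]).T @ lines
            vps[k] = np.linalg.eigh(A)[1][:, 0]
    return vps, w
```

On synthetic pencils of lines through one finite and one infinite point, the mixture recovers both points up to sign without any branching between the two cases.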
Shafiq S.I., University of Newcastle |
Sanin C., University of Newcastle |
Szczerbicki E., Technical University of Gdansk |
Toro C., Vicomtech IK4
Studies in Computational Intelligence | Year: 2015
This paper presents a knowledge representation case study that constructs the Decisional DNA of engineering objects. Decisional DNA, as a knowledge representation structure, not only offers great possibilities for gathering explicit knowledge of formal decision events but is also a powerful tool for the decision-making process. The concept of a Virtual Engineering Object (VEO), which is a knowledge and experience representation of engineering artefacts, is also discussed. In this paper, we present several Sets of Experience of engineering objects used in manufacturing that were collected for the construction of a VEO-chromosome within the VEO-Decisional DNA. The VEO is used to enhance manufacturing systems with predictive capabilities, facilitating decision-making and knowledge handling in engineering processes. © Springer International Publishing Switzerland 2015.
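The hierarchy sketched in the abstract (Sets of Experience grouped into a VEO-chromosome inside a Decisional DNA) can be pictured with a minimal data-structure sketch. The field names below are illustrative assumptions, not the authors' schema.

```python
from dataclasses import dataclass, field

@dataclass
class SetOfExperience:
    """One formal decision event (hedged sketch, not the authors' format)."""
    variables: dict                                  # measured values of the event
    functions: list = field(default_factory=list)    # relations between variables
    constraints: list = field(default_factory=list)  # limits on the variables
    rules: list = field(default_factory=list)        # if-then knowledge

@dataclass
class VEOChromosome:
    """Groups the Sets of Experience of one engineering object."""
    name: str
    experiences: list = field(default_factory=list)

    def add(self, soe: SetOfExperience):
        self.experiences.append(soe)

@dataclass
class DecisionalDNA:
    """Collection of chromosomes, queryable for decision support."""
    chromosomes: dict = field(default_factory=dict)

    def query(self, name):
        return self.chromosomes.get(name)
```

A manufacturing event, e.g. a milling operation with its process parameters, would be stored as one `SetOfExperience` and appended to the machine's chromosome for later reuse in decision-making.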
Nieto M., Vicomtech IK4 |
Cortes A., Vicomtech IK4 |
Otaegui O., Vicomtech IK4 |
Arrospide J., Technical University of Madrid |
And 2 more authors.
Journal of Real-Time Image Processing | Year: 2016
A novel approach to real-time lane modeling using a single camera is proposed. The method is based on an efficient design and implementation of a particle filter which applies the concepts of the Rao-Blackwellized particle filter (RBPF), separating the state into linear and non-linear parts. As a result, the dimensionality of the problem is reduced, which allows the system to run in real time on embedded systems. The method is used to determine the position of the vehicle within its own lane and the curvature of the road ahead, in order to enhance the performance of advanced driver assistance systems. Its effectiveness has been demonstrated by implementing a prototype and testing it empirically on road sequences with different illumination conditions (day and nighttime), pavement types, traffic densities, etc. Results show that our proposal can accurately determine whether the vehicle is approaching the lane markings (Lane Departure Warning) and the curvature of the road ahead, achieving processing times below 2 ms per frame on laptop CPUs and 12 ms on embedded CPUs. © 2012, Springer-Verlag Berlin Heidelberg.
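The Rao-Blackwellization idea behind the abstract — sample only the non-linear part of the state and solve the linear-Gaussian part analytically per particle — can be sketched on a toy scalar model. The observation model z = sin(θ) + x below is illustrative, not the paper's lane geometry.

```python
import numpy as np

def rbpf_step(particles, z, rng, a=1.0, q_th=0.01, q_x=0.01, r=0.1):
    """One RBPF step on state (theta, x): theta is non-linear (sampled),
    x is linear-Gaussian, tracked by a 1D Kalman filter per particle.
    Toy model z = sin(theta) + x + noise, not the authors' lane model."""
    th, mu, P, w = particles
    # propagate the non-linear part by sampling
    th = th + rng.normal(0, np.sqrt(q_th), size=th.shape)
    # Kalman predict for the linear part, conditional on theta
    mu = a * mu
    P = a * a * P + q_x
    # Kalman update with the pseudo-measurement z - sin(theta)
    y = z - np.sin(th) - mu            # innovation
    S = P + r                          # innovation variance
    K = P / S
    mu = mu + K * y
    P = (1.0 - K) * P
    # weight by the marginal likelihood N(y; 0, S), then resample
    w = w * np.exp(-0.5 * y * y / S) / np.sqrt(2 * np.pi * S)
    w = w / w.sum()
    idx = rng.choice(len(th), size=len(th), p=w)
    return th[idx], mu[idx], P[idx], np.full(len(th), 1.0 / len(th))
```

Because only θ is sampled, the particle cloud covers a lower-dimensional space than a plain particle filter over (θ, x) would, which is the dimensionality reduction the abstract credits for the real-time performance.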
Murua A., Vicomtech IK4 |
Gonzalez I., Iriscom Sistemas S.L. |
2011 Federated Conference on Computer Science and Information Systems, FedCSIS 2011 | Year: 2011
Cloud computing will play a large part in the ICT domain over the next ten years or more. Many aspects are still at an experimental stage, and the long-term impact on service provisioning and usage remains unknown. While the first efforts in this field focused on service provisioning for enterprises, the cloud is now reaching individuals. Our proposal goes a step further: based on the proven benefits of the cloud, it aims to improve Internet and technology access for the people who are left behind whenever technological progress takes place. This paper presents the delivery of cloud-based Assistive Technology Services to individuals who face technology accessibility barriers due to ageing or disabilities. As a demonstration of how such future services could work, we give an example of how an Assistive Service is delivered to an individual in an easy and seamless way. This proof of concept was developed within the INREDIS research project. © 2011 Polish Info Processing Soc.
Unzueta L., Vicomtech IK4 |
Pimenta W., University of Minho |
Goenetxea J., Vicomtech IK4 |
Santos L.P., University of Minho |
And 2 more authors.
Image and Vision Computing | Year: 2014
In this paper we present a robust and lightweight method for the automatic fitting of deformable 3D face models to facial images. Popular fitting techniques, such as those based on statistical models of shape and appearance, require a training stage using a set of facial images and their corresponding facial landmarks, which must be manually labeled. Therefore, new images to which the model is fitted cannot differ too much in shape and appearance (including illumination variation, facial hair, wrinkles, etc.) from those used for training. By contrast, our approach fits a generic face model in two steps: (1) the detection of facial features based on local image gradient analysis and (2) the backprojection of a deformable 3D face model through the optimization of its deformation parameters. The proposed approach retains the advantages of both learning-free and learning-based approaches. Thus, we can estimate the position, orientation, shape and actions of faces, and initialize user-specific face tracking approaches, such as Online Appearance Models (OAMs), which have been shown to be more robust than generic user tracking approaches. Experimental results show that our method outperforms other fitting alternatives under challenging illumination conditions, with a computational cost that allows implementation on devices with low hardware specifications, such as smartphones and tablets. Our approach lends itself nicely to many frameworks addressing semantic inference in face images and videos. © 2014 Elsevier B.V.
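Step (2) of the abstract — fitting pose and deformation parameters so that a 3D point model backprojects onto detected 2D landmarks — can be sketched as a small least-squares problem. The weak-perspective camera, in-plane rotation, and Gauss-Newton solver below are illustrative simplifications, not the paper's formulation.

```python
import numpy as np

def project(params, mean_shape, basis):
    """params = [scale, tx, ty, rz, alpha_1..alpha_D] -> (N, 2) image points.
    mean_shape is (N, 3); basis is (D, N, 3) deformation modes (hypothetical)."""
    s, tx, ty, rz = params[:4]
    alpha = params[4:]
    shape3d = mean_shape + np.tensordot(alpha, basis, axes=1)  # deform
    c, si = np.cos(rz), np.sin(rz)
    R = np.array([[c, -si], [si, c]])                # in-plane rotation only
    return s * (shape3d[:, :2] @ R.T) + np.array([tx, ty])

def fit(landmarks2d, mean_shape, basis, iters=30):
    """Gauss-Newton on the 2D reprojection error, numeric Jacobian."""
    x = np.r_[1.0, 0.0, 0.0, 0.0, np.zeros(basis.shape[0])]
    for _ in range(iters):
        r = (project(x, mean_shape, basis) - landmarks2d).ravel()
        J = np.empty((r.size, x.size))
        for j in range(x.size):                      # finite differences
            dx = np.zeros_like(x)
            dx[j] = 1e-6
            J[:, j] = ((project(x + dx, mean_shape, basis)
                        - landmarks2d).ravel() - r) / 1e-6
        x -= np.linalg.lstsq(J, r, rcond=None)[0]    # Gauss-Newton step
    return x
```

In the paper's pipeline the landmarks would come from step (1), the gradient-based feature detector; here they are assumed given, and the fitted parameters would then seed a user-specific tracker such as an OAM.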