Center for Machine Perception

Prague, Czech Republic


Gronat P.,French Institute for Research in Computer Science and Automation | Gronat P.,Czech Technical University | Gronat P.,Center for Machine Perception | Obozinski G.,CNRS ENS Informatics Department | And 4 more authors.
Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition | Year: 2013

The aim of this work is to localize a query photograph by finding other images depicting the same place in a large geotagged image database. This is a challenging task due to changes in viewpoint, imaging conditions and the large size of the image database. The contribution of this work is two-fold. First, we cast the place recognition problem as a classification task and use the available geotags to train a classifier for each location in the database, in a manner similar to per-exemplar SVMs in object recognition. Second, as only a few positive training examples are available for each location, we propose a new approach to calibrate all the per-location SVM classifiers using only the negative examples. The proposed calibration relies on a significance measure essentially equivalent to the p-values classically used in statistical hypothesis testing. Experiments are performed on a database of 25,000 geotagged street view images of Pittsburgh and demonstrate that the proposed approach improves place recognition accuracy over previous work. © 2013 IEEE.
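As a rough illustration of the calibration idea described above, here is a hedged sketch of scoring one per-location classifier with an empirical p-value computed from negative examples only; the library choice, function names and parameters are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of per-location SVM calibration using only negative
# examples, in the spirit of the abstract above (names and data are made up).
import numpy as np
from sklearn.svm import LinearSVC

def train_location_classifier(pos_feats, neg_feats):
    """Train one per-location SVM (few positives, many negatives)."""
    X = np.vstack([pos_feats, neg_feats])
    y = np.hstack([np.ones(len(pos_feats)), -np.ones(len(neg_feats))])
    clf = LinearSVC(C=1.0).fit(X, y)
    # Calibration set: raw SVM scores of the negatives only.
    neg_scores = np.sort(clf.decision_function(neg_feats))
    return clf, neg_scores

def significance(clf, neg_scores, query_feat):
    """Empirical p-value-like measure: the fraction of negatives scoring at
    least as high as the query; smaller means more significant."""
    s = clf.decision_function(query_feat.reshape(1, -1))[0]
    return np.mean(neg_scores >= s)
```

Ranking database locations by ascending significance value then makes scores from differently trained per-location classifiers comparable.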


Cintula P.,Academy of Sciences of the Czech Republic | Klement E.P.,Johannes Kepler University | Mesiar R.,Slovak University of Technology in Bratislava | Mesiar R.,University of Ostrava | Navara M.,Center for Machine Perception
Fuzzy Sets and Systems | Year: 2010

This paper surveys the present state of knowledge on propositional fuzzy logics extending SBL with an additional involutive negation. The involutive negation is added as a new propositional connective in order to improve the expressive power of the standard mathematical fuzzy logics based on continuous triangular norms. © 2009 Elsevier B.V. All rights reserved.
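For orientation, a brief sketch of why the extra involutive negation matters (standard facts about t-norm logics, not a restatement of the paper): in SBL, the logic of continuous t-norms without zero divisors, the residual negation is not involutive, so the added connective genuinely increases expressive power.

```latex
% Sketch under standard definitions: in a t-norm logic the residual negation
% is \neg x = x \rightarrow 0.  For the product or G\"odel t-norm this yields
% the non-involutive G\"odel negation, whereas the added connective is the
% involutive standard negation \sim x = 1 - x.
\[
  \neg x \;=\; x \rightarrow 0 \;=\;
  \begin{cases}
    1, & x = 0,\\
    0, & x > 0,
  \end{cases}
  \qquad
  \sim x \;=\; 1 - x, \qquad \sim\,\sim x \;=\; x .
\]
```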


Simanek J.,Czech Technical University | Reinstein M.,Center for Machine Perception | Kubelka V.,Center for Machine Perception
IEEE/ASME Transactions on Mechatronics | Year: 2014

This paper presents an evaluation of four state estimation architectures exploiting the extended Kalman filter (EKF) for 6-DOF dead reckoning of a mobile robot. The EKF is a well-proven and commonly used technique for the fusion of inertial data and robot odometry. However, different approaches to designing the architecture of the state estimator lead to different performance and computational demands. While seeking the best possible solution for the mobile robot, both the nonlinear model and the error model are addressed, each with and without a complementary filter for attitude estimation. Performance is determined experimentally in terms of the precision of both indoor and outdoor navigation, including complex structured environments such as stairs and rough terrain. According to the evaluation, the nonlinear model combined with the complementary filter is selected as the best candidate (reaching 0.8 m RMSE and an average return position error (RPE) of 4% of distance driven) and implemented for real-time onboard processing during a rescue mission deployment. © 2014 IEEE.
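For context, a minimal sketch of a generic complementary filter for attitude estimation of the kind mentioned above; this is a textbook form, not the paper's exact estimator, and the blending constant and sensor interface are assumptions.

```python
# Minimal complementary-filter sketch: blend integrated gyro rates (accurate
# short-term, but drifting) with the accelerometer gravity direction (noisy,
# but drift-free).  alpha close to 1 trusts the gyro more.
import numpy as np

def complementary_filter(roll, pitch, gyro, accel, dt, alpha=0.98):
    # Gyro propagation of the previous attitude estimate.
    roll_g  = roll  + gyro[0] * dt
    pitch_g = pitch + gyro[1] * dt
    # Attitude implied by gravity measured in the body frame.
    roll_a  = np.arctan2(accel[1], accel[2])
    pitch_a = np.arctan2(-accel[0], np.hypot(accel[1], accel[2]))
    # Complementary blend: high-pass the gyro, low-pass the accelerometer.
    return (alpha * roll_g  + (1.0 - alpha) * roll_a,
            alpha * pitch_g + (1.0 - alpha) * pitch_a)
```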


Navara M.,Center for Machine Perception | Petrik M.,University of Ostrava | Petrik M.,Academy of Sciences of the Czech Republic | Sarkoci P.,Slovak University of Technology in Bratislava
Proceedings of The International Symposium on Multiple-Valued Logic | Year: 2010

The paper shows a direct correspondence between the first partial derivatives of a continuous Archimedean triangular norm and the first derivatives of its additive generator. An explicit formula for the additive generator is obtained. The result is applied to the problem of convex combinations of strict triangular norms. © 2010 IEEE.
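As a hedged illustration of the kind of correspondence involved (standard calculus on the generator representation, not the paper's exact statement):

```latex
% For a strict t-norm with a differentiable additive generator t (continuous,
% strictly decreasing, t(1)=0), differentiating the generator representation
% links the partial derivatives of T to the derivative of t.
\[
  T(x,y) \;=\; t^{-1}\bigl(t(x) + t(y)\bigr)
  \quad\Longrightarrow\quad
  \frac{\partial T}{\partial x}(x,y) \;=\; \frac{t'(x)}{t'\bigl(T(x,y)\bigr)},
\]
% wherever t' does not vanish; the generator can then be recovered from the
% partial derivatives up to a positive multiplicative constant.
```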


Kukelova Z.,Center for Machine Perception | Byrod M.,Lund University | Josephson K.,Lund University | Pajdla T.,Center for Machine Perception | Astrom K.,Lund University
Computer Vision and Image Understanding | Year: 2010

A number of minimal problems of structure from motion for cameras with radial distortion have recently been studied and solved in some cases. These problems are known to be numerically very challenging and in several cases there were no practical algorithms yielding solutions in floating point arithmetic. We make some crucial observations concerning the floating point implementation of Gröbner basis computations and use these new insights to formulate fast and stable algorithms for two minimal problems with radial distortion previously solved in exact rational arithmetic only: (i) simultaneous estimation of essential matrix and a common radial distortion parameter for two partially calibrated views and six image point correspondences and (ii) estimation of fundamental matrix and two different radial distortion parameters for two uncalibrated views and nine image point correspondences. We demonstrate that these two problems can be efficiently solved in floating point arithmetic in simulated and real experiments. For comparison we have also invented a new non-minimal algorithm for estimating fundamental matrix and two different radial distortion parameters for two uncalibrated views and twelve image point correspondences based on a generalized eigenvalue problem. © 2008 Elsevier Inc. All rights reserved.
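For context, a minimal sketch of the one-parameter division model commonly used in such radial-distortion minimal problems; this is an illustration of the modelling idea, not the paper's solver, and the function below is hypothetical.

```python
# One-parameter division model for radial distortion (illustrative sketch).
import numpy as np

def undistort_division_model(points_d, lam):
    """Map distorted image points (N x 2, centered on the distortion center)
    to undistorted points via p_u = p_d / (1 + lam * ||p_d||^2)."""
    r2 = np.sum(points_d**2, axis=1, keepdims=True)
    return points_d / (1.0 + lam * r2)

# With this model, lifting each point to [x, y, 1 + lam * r^2] keeps the
# epipolar constraint polynomial in lam, which is what makes the 6-point
# (essential matrix) and 9-point (fundamental matrix) minimal problems
# amenable to Groebner basis solvers.
```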


Fojtu S.,Center for Machine Perception | Havlena M.,Center for Machine Perception | Pajdla T.,Center for Machine Perception
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2012

The Nao humanoid robot from Aldebaran Robotics is equipped with an odometry sensor that provides rather inaccurate robot pose estimates. We propose using Structure from Motion (SfM) to enable visual odometry from the Nao camera without the need to add artificial markers to the scene, and show that the robot pose estimates can be significantly improved by fusing the data from the odometry sensor and visual odometry. The implementation consists of the sensor modules streaming robot data, the mapping module creating a 3D model, the visual localization module estimating the camera pose w.r.t. the model, and the navigation module planning robot trajectories and performing the actual movement. All of the modules are connected through the RSB middleware, which makes the solution independent of the given robot type. © Springer-Verlag Berlin Heidelberg 2012.
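A hedged sketch of one simple way to blend odometry and visual-localization poses of the kind fused above; this generic weighted fusion and its weight are illustrative assumptions, not the paper's module.

```python
# Blend a planar odometry pose with a visual-localization pose (x, y, theta);
# the visual estimate is trusted more whenever localization against the 3D
# model succeeds.
import numpy as np

def fuse_pose(odom_xytheta, visual_xytheta, w_visual=0.8):
    x = (1 - w_visual) * odom_xytheta[0] + w_visual * visual_xytheta[0]
    y = (1 - w_visual) * odom_xytheta[1] + w_visual * visual_xytheta[1]
    # Blend angles on the unit circle to avoid wrap-around problems.
    a = np.angle((1 - w_visual) * np.exp(1j * odom_xytheta[2])
                 + w_visual * np.exp(1j * visual_xytheta[2]))
    return np.array([x, y, a])
```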


Pajdla T.,Center for Machine Perception | Havlena M.,Center for Machine Perception | Heller J.,Center for Machine Perception
Studies in Computational Intelligence | Year: 2012

We present an approach to constructing a model of the universe for explaining observations and making decisions based on learning new concepts. We use a weak statistical model, e.g., a discriminative classifier, to distinguish errors in measurements from improper modeling. We use Boolean logic to combine the outcomes of direct detectors of relevant events, e.g., the presence of sound and the presence of a human shape in the field of view, into more complex models explaining the states in which the universe may appear. The process of constructing a new concept is initiated when a significant disagreement - an incongruence - has been observed between the incoming data and the current model of the universe. Then, a new concept, i.e., a new direct detector, is trained on the incongruent data and combined with existing models to remove the incongruence. We demonstrate the concept in an experiment with human audio-visual detection. © 2012 Springer-Verlag Berlin Heidelberg.
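A minimal, simplified sketch of the incongruence check described above; the detector names and the conjunction used as the composite model are assumptions made for illustration only.

```python
# Flag an incongruence when the Boolean combination of direct detectors
# disagrees with the composite event detector (illustrative sketch).
def detect_incongruence(sound_detected: bool,
                        human_shape_detected: bool,
                        composite_event_detected: bool) -> bool:
    """The composite model predicts the event (e.g. a person speaking in the
    field of view) as the conjunction of its direct detectors; a disagreement
    between this prediction and the composite detector's output is the
    incongruence that triggers learning of a new concept."""
    predicted = sound_detected and human_shape_detected
    return predicted != composite_event_detected
```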


Havlena M.,Center for Machine Perception | Heller J.,Center for Machine Perception | Kayser H.,Carl von Ossietzky University | Bach J.-H.,Carl von Ossietzky University | And 2 more authors.
Studies in Computational Intelligence | Year: 2012

The recently introduced theory of incongruence allows for the detection of unexpected events in observations via the disagreement of classifiers at the specific and general levels of a classifier hierarchy, which encodes the understanding a machine currently has of the world. We present an application of this theory, a hierarchy of classifiers describing an audio-visual speaker detector, and show successful incongruence detection on sequences acquired by a static as well as a moving AWEAR 2.0 device using the presented classifier hierarchy. © 2012 Springer-Verlag Berlin Heidelberg.
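A hedged sketch of the general-versus-specific disagreement underlying such a speaker hierarchy; the classifier scores, threshold, and the particular expectation encoded below are illustrative assumptions, not the hierarchy used in the paper.

```python
# Report an unexpected event when the expectation formed from the general-level
# classifiers (speech heard AND face seen) disagrees with the specific
# audio-visual speaker classifier (illustrative sketch).
def incongruent_speaker(audio_score: float,
                        visual_score: float,
                        av_speaker_score: float,
                        thr: float = 0.5) -> bool:
    general_says_speaker = audio_score > thr and visual_score > thr
    specific_says_speaker = av_speaker_score > thr
    return general_says_speaker != specific_says_speaker
```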


Cech J.,Center for Machine Perception | Mittal R.,French Institute for Research in Computer Science and Automation | Deleforge A.,French Institute for Research in Computer Science and Automation | Sanchez-Riera J.,French Institute for Research in Computer Science and Automation | And 2 more authors.
IEEE-RAS International Conference on Humanoid Robots | Year: 2015

In this paper we present a method for detecting and localizing an active speaker, i.e., a speaker that emits a sound, through the fusion of visual reconstruction from a stereoscopic camera pair and sound-source localization from several microphones. Both the cameras and the microphones are embedded in the head of a humanoid robot. The proposed statistical fusion model associates 3D faces of potential speakers with 2D sound directions. The paper has two contributions: (i) a method that discretizes the two-dimensional space of all possible sound directions and accumulates evidence for each direction by estimating the time difference of arrival (TDOA) over all the microphone pairs, such that all the microphones are used simultaneously and symmetrically, and (ii) an audio-visual alignment method that maps 3D visual features onto 2D sound directions and onto TDOAs between microphone pairs. This allows both sensing modalities to be implicitly represented in a common audio-visual coordinate frame. Using simulated as well as real data, we quantitatively assess the robustness of the method against noise and reverberations, and we compare it with several other methods. Finally, we describe a real-time implementation of the proposed technique on a humanoid head embedding four microphones and two cameras: this enables natural human-robot interactive behavior. © 2013 IEEE.
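As a hedged sketch of accumulating TDOA evidence over a grid of candidate sound directions, the snippet below uses GCC-PHAT, a common TDOA estimator; the exact estimator, microphone geometry and direction grid of the paper are not restated here, so the names and parameters are assumptions.

```python
# Sum, over all microphone pairs, the cross-correlation value at the TDOA
# predicted by each candidate direction; the best direction is the argmax.
import numpy as np

def gcc_phat(sig_a, sig_b, fs):
    """Generalized cross-correlation with phase transform between two mics;
    returns lag values (seconds) and the correlation at each lag."""
    n = len(sig_a) + len(sig_b)
    A, B = np.fft.rfft(sig_a, n), np.fft.rfft(sig_b, n)
    R = A * np.conj(B)
    cc = np.fft.irfft(R / (np.abs(R) + 1e-12), n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    lags = np.arange(-max_shift, max_shift + 1) / fs
    return lags, cc

def direction_scores(signals, mic_pairs, expected_tdoa, fs):
    """signals: list of mic waveforms; expected_tdoa[k, d]: TDOA predicted by
    direction d for pair k.  All microphones contribute symmetrically."""
    scores = np.zeros(expected_tdoa.shape[1])
    for k, (i, j) in enumerate(mic_pairs):
        lags, cc = gcc_phat(signals[i], signals[j], fs)
        scores += np.interp(expected_tdoa[k], lags, cc)
    return scores
```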
