MVTec Software GmbH

München, Germany



Steger C., MVTec Software GmbH
International Journal of Computer Vision | Year: 2016

We propose camera models for cameras that are equipped with lenses that can be tilted in an arbitrary direction (often called Scheimpflug optics). The proposed models are comprehensive: they can handle all tilt lens types that are in common use for machine vision and consumer cameras and correctly describe the imaging geometry of lenses for which the ray angles in object and image space differ, which is true for many lenses. Furthermore, they are versatile since they can also be used to describe the rectification geometry of a stereo image pair in which one camera is perspective and the other camera is telecentric. We also examine the degeneracies of the models and propose methods to handle the degeneracies. Furthermore, we examine the relation of the proposed camera models to different classes of projective camera matrices and show that all classes of projective cameras can be interpreted as cameras with tilt lenses in a natural manner. In addition, we propose an algorithm that can calibrate an arbitrary combination of perspective and telecentric cameras (no matter whether they are tilted or untilted). The calibration algorithm uses a planar calibration object with circular control points. It is well known that circular control points may lead to biased calibration results. We propose two efficient algorithms to remove the bias and thus obtain accurate calibration results. Finally, we perform an extensive evaluation of the proposed camera models and calibration algorithms that establishes the validity and accuracy of the proposed models. © 2016 The Author(s)
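The imaging geometry of a tilted sensor can be illustrated numerically: a viewing ray through the projection center is intersected with a rotated image plane, which induces a homography between the untilted and tilted images. The Python sketch below uses an assumed parameterization (tilt angle `rho` about an in-plane axis at orientation `tau`, pivoting on the optical axis at image distance `d`); this is an illustrative convention, not necessarily the one used in the paper.

```python
import numpy as np

def rotation_about_axis(axis, angle):
    """Rodrigues rotation matrix about a unit axis."""
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def project_tilted(x, y, tau, rho, d=1.0):
    """Map normalized coordinates (x, y) on an untilted sensor plane at
    distance d to coordinates on a sensor tilted by angle rho about an
    in-plane axis at orientation tau (illustrative convention)."""
    R = rotation_about_axis([np.cos(tau), np.sin(tau), 0.0], rho)
    c = np.array([0.0, 0.0, d])        # tilt pivot on the optical axis
    n = R @ np.array([0.0, 0.0, 1.0])  # normal of the tilted plane
    v = np.array([x, y, 1.0])          # viewing ray direction
    t = np.dot(n, c) / np.dot(n, v)    # ray/plane intersection parameter
    p = t * v
    # Express the intersection point in the 2D frame of the tilted plane.
    return float(R[:, 0] @ (p - c)), float(R[:, 1] @ (p - c))
```

With `rho = 0` the mapping reduces to the identity (for `d = 1`); a nonzero tilt shifts the image coordinates projectively, which is why the tilt can be folded into a homography and related to projective camera matrices.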


Steger C., MVTec Software GmbH
ISPRS Journal of Photogrammetry and Remote Sensing | Year: 2012

Four algorithms for estimating the fundamental matrix and radial distortion for the case that the images are related by a pure camera translation are proposed. This restricted class of motions is important in robotics and industrial applications. It is assumed that the radial distortions can be modeled by the division model. First, a linear 3-point minimal solver is derived, which is essential to drive a RANSAC algorithm. The minimal solver is then extended to an overdetermined linear solver, which is essential to compute starting values for the third algorithm, a Gold Standard solver that returns the maximum likelihood estimates of the fundamental matrix and radial distortion. The three algorithms are combined into a RANSAC algorithm that determines the fundamental matrix and radial distortion automatically. An experimental evaluation determines the characteristics of the algorithms and shows the high accuracy, precision, and robustness that can be achieved. Furthermore, the evaluation shows that using algorithms that determine the fundamental matrix and radial distortion for general camera motions is not a viable alternative for the case of pure camera translations because they are significantly slower and less robust than the algorithms proposed here. © 2012 International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS).
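The two ingredients of the abstract can be checked numerically: under the division model an undistorted point is proportional to the homogeneous lift (x, y, 1 + λr²) of the distorted point, and for a pure camera translation t the fundamental matrix is the skew-symmetric matrix [t]×. The sketch below simulates distorted correspondences and verifies that the epipolar residual vanishes; the values of λ and t are arbitrary illustrations, and the code is a consistency check, not the paper's solvers.

```python
import numpy as np

def skew(t):
    """Cross-product matrix; for a pure translation t, F = skew(t)."""
    return np.array([[0, -t[2], t[1]],
                     [t[2], 0, -t[0]],
                     [-t[1], t[0], 0]])

def distort(xu, lam):
    """Undistorted -> distorted point under the division model."""
    ru = np.hypot(xu[0], xu[1])
    if ru == 0:
        return np.asarray(xu, float)
    # Solve lam * ru * rd^2 - rd + ru = 0 for the distorted radius rd.
    rd = (1 - np.sqrt(1 - 4 * lam * ru * ru)) / (2 * lam * ru)
    return np.asarray(xu, float) * (rd / ru)

def undistort_h(xd, lam):
    """Lift a distorted point to homogeneous undistorted coordinates:
    (x, y, 1 + lam * r^2) is proportional to the undistorted point."""
    return np.array([xd[0], xd[1], 1 + lam * (xd[0] ** 2 + xd[1] ** 2)])

lam = -0.05                           # illustrative distortion coefficient
t = np.array([0.2, 0.1, 0.05])        # illustrative pure camera translation
F = skew(t)                           # fundamental matrix of a pure translation
rng = np.random.default_rng(0)
X = rng.uniform([-1, -1, 4], [1, 1, 8], (5, 3))  # random 3D points
residuals = []
for Xi in X:
    x1 = distort(Xi[:2] / Xi[2], lam)              # image 1: P1 = [I | 0]
    x2 = distort((Xi + t)[:2] / (Xi + t)[2], lam)  # image 2: P2 = [I | t]
    residuals.append(undistort_h(x2, lam) @ F @ undistort_h(x1, lam))
```

Each residual is zero up to floating-point error, which is the constraint that the 3-point minimal solver and the overdetermined linear solver exploit.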


Steger C., MVTec Software GmbH
Computer Vision and Image Understanding | Year: 2013

This paper presents an approach to extract curvilinear structures (lines) and their widths from two-dimensional images with high accuracy. Models for asymmetric parabolic and Gaussian line profiles are proposed. These types of lines occur frequently in applications. Scale-space descriptions of parabolic and Gaussian lines are derived in closed form. A detailed analysis of these scale-space descriptions shows that parabolic and Gaussian lines are biased more significantly than the well-known asymmetric bar-shaped lines by the partial derivatives of the Gaussian filters that are used to extract the lines. A bias function is constructed that relates the parameters of the lines to biased measurements that can be extracted from the image. It is shown that this bias function can be inverted. This is used to derive an algorithm to remove the bias from the line positions and widths. Examples on synthetic and real images show the high subpixel accuracy that can be achieved with the proposed algorithm. In particular, the line extractor is tested on a publicly available data set that includes manually labeled ground truth. The results on this data set show that very accurate results can be achieved on real data if the appropriate line model is used. © 2012 Elsevier Inc. All rights reserved.
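The basic extraction machinery the paper builds on can be sketched in one dimension: a line profile is convolved with Gaussian derivative kernels, the line center is the pixel with the strongest negative second derivative, and a subpixel position is obtained from a Taylor step toward the zero crossing of the first derivative. This is a simplified 1D sketch (the paper treats 2D lines, line widths, and the bias removal, none of which is shown here); the profile and filter scales are arbitrary illustrations.

```python
import numpy as np

sigma = 2.0
k = np.arange(-8, 9, dtype=float)
g = np.exp(-k**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
g1 = -k / sigma**2 * g                      # first Gaussian derivative kernel
g2 = (k**2 / sigma**4 - 1 / sigma**2) * g   # second Gaussian derivative kernel

x = np.arange(41, dtype=float)
profile = np.exp(-(x - 20.3)**2 / (2 * 1.5**2))  # synthetic line, true center 20.3

r1 = np.convolve(profile, g1, mode="same")  # smoothed first derivative
r2 = np.convolve(profile, g2, mode="same")  # smoothed second derivative

n = int(np.argmin(r2))    # strongest negative second derivative response
t = -r1[n] / r2[n]        # Taylor step: solve r1(n + t) ~ r1(n) + t * r2(n) = 0
center = n + t            # subpixel line position
```

The recovered `center` lies close to the true value 20.3 even though the pixel grid is integer, which is the sense in which the extractor is a subpixel method; the paper's bias function then corrects the remaining systematic shift for asymmetric profiles.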




Patent
MVTec Software GmbH | Date: 2011-01-06

The invention provides a method for recognizing instances of a 3D object in 3D scene data and for determining the 3D poses of said instances comprising the following steps: (a) providing 3D scene data; (b) selecting at least one reference point from the 3D scene data; (c) computing, for each selected reference point, pose candidates for the 3D object under the assumption that said reference point is part of the 3D object; and (d) computing a set of filtered poses from the pose candidates.
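Step (d), computing a set of filtered poses from the pose candidates, is commonly realized by clustering: candidates that agree in translation and rotation are grouped, clusters are scored by size, and each cluster is averaged into one filtered pose. The sketch below uses a deliberately simplified pose representation (a translation vector plus a single rotation angle) and greedy clustering; the tolerances, representation, and averaging are illustrative assumptions, not the patented procedure (real implementations average rotations properly, e.g. via quaternions).

```python
import numpy as np

def filter_poses(candidates, t_tol=0.05, r_tol=0.1):
    """Greedy clustering of pose candidates (sketch of step (d)).
    Each candidate is (translation: 3-vector, rotation angle: float).
    Candidates within the tolerances of a cluster mean join it; clusters
    are scored by size and averaged into one filtered pose each."""
    clusters = []  # each entry: [translation sum, angle sum, count]
    for t, r in candidates:
        t = np.asarray(t, dtype=float)
        for c in clusters:
            if (np.linalg.norm(c[0] / c[2] - t) < t_tol
                    and abs(c[1] / c[2] - r) < r_tol):
                c[0] += t
                c[1] += r
                c[2] += 1
                break
        else:
            clusters.append([t.copy(), r, 1])
    clusters.sort(key=lambda c: -c[2])  # most supported cluster first
    return [(c[0] / c[2], c[1] / c[2], c[2]) for c in clusters]
```

Three mutually consistent candidates and one outlier thus yield a top-ranked filtered pose supported by three votes and a singleton cluster for the outlier.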


The present invention provides a system and methods for automatic parameter determination in machine vision in general, and in object recognition in particular. Many machine vision systems use algorithms that demand the user to specify one or more parameters in order to adapt the behavior of the algorithm in dependence of the current application. This is not desirable because the complexity of the algorithm should be hidden from the user and a manual parameter determination is contrary to a desirable high degree of automation. The present invention provides a method to automatically determine the most frequently used parameters in machine vision solely based on the input image itself. The method is explained in detail using an object recognition system as an example. In particular, the model generation process based on a model image of the object is explained. However, also other systems that use edge extraction algorithms, for example, can benefit from the present invention.
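One widely used instance of image-driven parameter determination is deriving hysteresis thresholds for edge extraction from the gradient magnitude distribution of the input image itself. The sketch below uses a simple percentile heuristic; the function name, percentile, and ratio are assumptions for illustration, and this is a common heuristic, not the patented procedure.

```python
import numpy as np

def auto_edge_thresholds(image, high_pct=92.0, ratio=0.4):
    """Derive hysteresis thresholds for edge extraction from the image
    itself: a percentile of the nonzero gradient magnitudes gives the
    high threshold, and a fixed ratio of it gives the low threshold.
    (A common heuristic, not the patented method.)"""
    img = np.asarray(image, dtype=float)
    gy, gx = np.gradient(img)
    mag = np.hypot(gx, gy)              # gradient magnitude per pixel
    if not np.any(mag > 0):
        return 0.0, 0.0                 # flat image: no edges to threshold
    high = np.percentile(mag[mag > 0], high_pct)
    return ratio * high, high
```

Because the thresholds are computed from the image's own gradient statistics, the same call adapts to bright, dark, or low-contrast inputs without the user tuning anything, which is the degree of automation the abstract argues for.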




Patent
MVTec Software GmbH | Date: 2012-12-04

The invention provides a method for recognizing instances of a 3D object in 3D scene data and scene intensity data and for determining the 3D poses of said instances comprising the following steps: (a) providing 3D object data and obtaining object intensity data; (b) providing 3D scene data and scene intensity data; (c) extracting scene feature points from the intensity data; (d) selecting at least one reference point from the 3D scene data; (e) computing, for each selected reference point, pose candidates for the 3D object under the assumption that said reference point is part of the 3D object by maximizing the number of extracted scene feature points that are consistent with the 3D object under the given pose candidate; (f) computing a set of filtered poses from the pose candidates.
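The consistency maximization in step (e) amounts to scoring each pose candidate by how many extracted scene feature points it explains. A minimal sketch, assuming points are compared by Euclidean distance after transforming the model with the candidate pose (R, t); the function name, tolerance, and matching rule are illustrative assumptions, not the patented scoring.

```python
import numpy as np

def pose_score(model_pts, scene_pts, R, t, tol=0.02):
    """Count the scene feature points that lie within `tol` of some
    model point transformed by the candidate pose (R, t) -- a sketch of
    the consistency criterion maximized in step (e)."""
    transformed = model_pts @ R.T + t          # model points in scene frame
    # Pairwise distances: rows = scene points, columns = model points.
    d = np.linalg.norm(scene_pts[:, None, :] - transformed[None, :, :], axis=2)
    return int(np.sum(d.min(axis=1) < tol))
```

A candidate whose transformed model coincides with the scene scores the full point count, while clutter points (and wrong poses) contribute nothing, so maximizing this score over the pose candidates of each reference point selects the best-supported hypothesis.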


Patent
MVTec Software GmbH | Date: 2014-04-16

The invention provides a method for recognizing instances of a 3D object in 3D scene data and scene intensity data and for determining the 3D poses of said instances comprising the following steps: (a) providing 3D object data and obtaining object intensity data; (b) providing 3D scene data and scene intensity data; (c) extracting scene feature points from the intensity data; (d) selecting at least one reference point from the 3D scene data; (e) computing, for each selected reference point, pose candidates for the 3D object under the assumption that said reference point is part of the 3D object by maximizing the number of extracted scene feature points that are consistent with the 3D object under the given pose candidate; (f) computing a set of filtered poses from the pose candidates.


Patent
MVTec Software GmbH | Date: 2013-03-18

The CV-CAD (computer vision-computer-aided design) model is an enhanced CAD (computer-aided design) model that integrates local and global computer vision data in order to represent an object not only geometrically but also in terms of computer vision. The CV-CAD model provides a scalable solution for intelligent and automatic object recognition, tracking and augmentation based on generic models of objects.
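A container along these lines pairs the geometric CAD data with local and global computer-vision data. The sketch below is purely illustrative: every field name and type is an assumption about what such a model might hold, not the patented format.

```python
from dataclasses import dataclass, field

@dataclass
class CVCADModel:
    """Illustrative container for a CV-CAD model: CAD geometry augmented
    with computer-vision data (all fields are assumed for illustration)."""
    vertices: list                    # CAD geometry: 3D points
    faces: list                       # CAD geometry: vertex index triples
    local_features: dict = field(default_factory=dict)    # keypoint -> descriptor
    global_signature: list = field(default_factory=list)  # e.g. a shape histogram
```

Keeping both kinds of data in one object is what lets the same model drive geometric reasoning (rendering, pose refinement) and appearance-based matching (recognition, tracking, augmentation) without a separate conversion step.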
