Fraunhofer Institute for Optronics, System Technologies and Image Exploitation

Ettlingen, Germany

Pfrommer J., Fraunhofer Institute for Optronics, System Technologies and Image Exploitation
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2017

The OPC Unified Architecture (OPC UA) is a protocol for Ethernet-based communication in industrial settings. At its core, OPC UA defines a set of services for interaction with a server-side information model that combines object orientation with semantic technologies. Additional companion specifications use the OPC UA meta-model to define domain-specific modeling concepts for semantic interoperability. The open62541 project is an open-source implementation of the OPC UA standard. In this work, we give a short introduction to the core concepts of OPC UA and show how the measures taken to scale OPC UA to Big Data settings are reflected in the architecture of open62541. © Springer International Publishing AG 2017.
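As a rough illustration of the server-side information model the abstract refers to, the sketch below models an address space as typed nodes linked by references, with browsing as reference traversal. The NodeId strings, node classes, and reference types follow OPC UA conventions, but the Python classes are purely illustrative; open62541 implements this in C.

```python
# Illustrative sketch of an OPC UA-style information model: typed nodes
# linked by references. Names follow OPC UA conventions, but this is a
# toy model, not the open62541 C API.

class Node:
    def __init__(self, node_id, node_class, browse_name, value=None):
        self.node_id = node_id          # unique identifier in the address space
        self.node_class = node_class    # e.g. "Object", "Variable", "ObjectType"
        self.browse_name = browse_name
        self.value = value
        self.references = []            # list of (reference_type, target Node)

    def add_reference(self, ref_type, target):
        self.references.append((ref_type, target))

# Build a tiny address space: a sensor object with a temperature variable.
objects = Node("ns=0;i=85", "Object", "Objects")
sensor = Node("ns=1;i=1001", "Object", "TemperatureSensor")
temp = Node("ns=1;i=1002", "Variable", "Temperature", value=21.5)
objects.add_reference("Organizes", sensor)
sensor.add_reference("HasComponent", temp)

# A "Browse" service call amounts to following references from a node.
def browse(node):
    return [(ref, t.browse_name) for ref, t in node.references]

print(browse(sensor))   # [('HasComponent', 'Temperature')]
```

Services like Read, Write, and Browse then operate uniformly on this graph, which is what lets companion specifications add domain semantics without changing the protocol.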

Manger D., Fraunhofer Institute for Optronics, System Technologies and Image Exploitation
Proceedings of the 2012 9th Conference on Computer and Robot Vision, CRV 2012 | Year: 2012

In current biometric-based identification systems, tattoos and other body modifications have been shown to provide a useful source of information. Besides manual category label assignment, approaches utilizing state-of-the-art content-based image retrieval (CBIR) techniques have become increasingly popular. While local feature-based similarities of tattoo images achieve excellent retrieval accuracy, scalability to large image databases can be addressed with the popular bag-of-words model. In this paper, we show how recent advances in CBIR can be utilized to build a large-scale tattoo image retrieval system. Compared to other systems, we chose a different approach to circumvent the loss of accuracy caused by the bag-of-words quantization. Its efficiency and effectiveness are shown in experiments with several tattoo databases of up to 330,000 images. © 2012 IEEE.
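The bag-of-words baseline the abstract mentions (whose quantization loss the paper works around) can be sketched as follows; the vocabulary and descriptors are random toy data, and the cosine scoring over word histograms is one common choice rather than the paper's method.

```python
import numpy as np

# Sketch of bag-of-words image retrieval: local descriptors are quantized
# to their nearest "visual word", each image is summarized by a normalized
# word histogram, and retrieval ranks images by histogram similarity.

rng = np.random.default_rng(0)
vocab = rng.normal(size=(5, 8))             # 5 visual words, 8-dim descriptors

def quantize(descriptors):
    # assign each local descriptor to its nearest visual word (Euclidean)
    d = np.linalg.norm(descriptors[:, None, :] - vocab[None, :, :], axis=2)
    return d.argmin(axis=1)

def bow_histogram(descriptors):
    h = np.bincount(quantize(descriptors), minlength=len(vocab)).astype(float)
    return h / np.linalg.norm(h)

# index a small "database" of images, each with 20 local descriptors
database = {i: rng.normal(size=(20, 8)) for i in range(3)}
hists = {i: bow_histogram(d) for i, d in database.items()}

def retrieve(query_descriptors):
    # cosine similarity between normalized histograms
    q = bow_histogram(query_descriptors)
    scores = {i: float(q @ h) for i, h in hists.items()}
    return sorted(scores, key=scores.get, reverse=True)

ranking = retrieve(database[1])             # query with a database image
```

The quantization step is exactly where accuracy is lost relative to raw local-feature matching, which motivates the paper's alternative.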

Metzler J., Fraunhofer Institute for Optronics, System Technologies and Image Exploitation
Proceedings - 2012 IEEE 9th International Conference on Advanced Video and Signal-Based Surveillance, AVSS 2012 | Year: 2012

The objective of human re-identification is to recognize a specific individual at different locations and to determine whether an individual has already appeared. This is of particular interest in multi-camera networks with non-overlapping fields of view. However, it is still an unsolved computer vision task due to several challenges, e.g. significant changes in the appearance of humans as well as differing illumination, camera parameters, etc. In addition, in surveillance scenarios, for instance, usually only low-resolution videos are available, so that biometric approaches may not be applicable. This paper presents a whole-body appearance-based human re-identification approach for low-resolution videos. We propose a novel appearance model computed from several images of an individual. The model is based on means of covariance descriptors determined by spectral clustering techniques. The proposed approach is tested on a multi-camera data set of a typical surveillance scenario and compared to a color-histogram-based method. © 2012 IEEE.
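A region covariance descriptor of the kind the appearance model builds on can be sketched as below; the per-pixel feature set chosen here (position, intensity, gradient magnitudes) is a common illustrative choice, not necessarily the paper's exact feature set.

```python
import numpy as np

# Sketch of a region covariance descriptor: each pixel of an image region
# is mapped to a feature vector (here: x, y, intensity, |dI/dx|, |dI/dy|),
# and the region is described by the covariance matrix of those vectors.

def covariance_descriptor(region):
    h, w = region.shape
    ys, xs = np.mgrid[0:h, 0:w]
    gy, gx = np.gradient(region.astype(float))
    feats = np.stack([xs.ravel(), ys.ravel(), region.ravel(),
                      np.abs(gx).ravel(), np.abs(gy).ravel()], axis=1)
    return np.cov(feats, rowvar=False)     # 5x5 symmetric descriptor

rng = np.random.default_rng(1)
patch = rng.integers(0, 256, size=(16, 12)).astype(float)
C = covariance_descriptor(patch)           # one descriptor per body region
```

Averaging such descriptors over frames clustered by spectral clustering, as the abstract describes, yields one compact model per individual that is fairly robust to illumination changes.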

Hebel M., Fraunhofer Institute for Optronics, System Technologies and Image Exploitation | Stilla U., TU Munich
IEEE Transactions on Geoscience and Remote Sensing | Year: 2012

Tasks such as city modeling or urban planning require the registration, alignment, and comparison of multiview and/or multitemporal remote sensing data. Airborne laser scanning (ALS) is one of the established techniques to deliver these data. Regrettably, direct georeferencing of ALS measurements usually leads to considerable displacements that limit connectivity and/or comparability of overlapping point clouds. Most reasons for this effect can be found in the imprecision of the positioning and orientation sensors and their misalignment to the laser scanner. Typically, these sensors consist of a global navigation satellite system receiver and an inertial measurement unit. This paper presents a method for the automatic self-calibration of such ALS systems and the alignment of the acquired laser point clouds. Although applicable to classical nadir configurations, a novelty of our approach is the consideration of multiple data sets that were recorded with an oblique forward-looking full-waveform laser scanner. A combination of a region-growing approach with a random-sample-consensus segmentation method is used to extract planar shapes. Matching objects in overlapping data sets are identified with regard to several geometric attributes. A new methodology is presented to transfer the planarity constraints into systems of linear equations to determine both the boresight parameters and the data alignment. In addition to system calibration and data registration, the presented workflow results in merged 3-D point clouds that contain information concerning rooftops and all building facades. This database represents a solid basis and reference for applications such as change detection. © 2012 IEEE.
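The random-sample-consensus plane segmentation step can be sketched as follows; the threshold, iteration count, and synthetic point cloud are illustrative, not the paper's settings.

```python
import numpy as np

# Sketch of RANSAC plane extraction: repeatedly fit a plane to three
# random points and keep the plane supported by the most inliers.

def ransac_plane(points, n_iter=200, threshold=0.05, seed=0):
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-12:
            continue                            # degenerate (collinear) sample
        normal /= norm
        dist = np.abs((points - p0) @ normal)   # point-to-plane distances
        inliers = dist < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

# synthetic cloud: 200 points near the z = 0 plane plus 50 outliers
rng = np.random.default_rng(2)
plane_pts = np.column_stack([rng.uniform(0, 10, 200),
                             rng.uniform(0, 10, 200),
                             rng.normal(0, 0.01, 200)])
cloud = np.vstack([plane_pts, rng.uniform(0, 10, (50, 3))])
mask = ransac_plane(cloud)                      # flags mostly the plane points
```

In the paper's workflow this is combined with region growing, so that extracted planes correspond to contiguous roof and facade patches rather than arbitrary coplanar point sets.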

Huebner C.S., Fraunhofer Institute for Optronics, System Technologies and Image Exploitation
Proceedings of SPIE - The International Society for Optical Engineering | Year: 2011

The degree of image degradation due to atmospheric turbulence is particularly severe when imaging over long horizontal paths, since the turbulence is strongest close to the ground. The most pronounced effects include image blurring and image dancing and, in the case of strong turbulence, image distortion as well. To mitigate these effects, a number of methods from the field of image processing have been proposed, most of which aim exclusively at the restoration of static scenes. But there is also an increasing interest in advancing turbulence mitigation to encompass moving objects as well. Therefore, in this paper a procedure is described that employs block matching for the segmentation of static scene elements and moving objects, such that image restoration can be carried out for both separately. This way, motion blurring is taken into account in addition to atmospheric blurring, effectively reducing motion artefacts and improving the overall restoration result. Motion-compensated averaging with subsequent blind deconvolution is used for the actual image restoration. © 2011 SPIE.
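Block matching of the kind used for the segmentation can be sketched as an exhaustive sum-of-absolute-differences (SAD) search per block; the block and search-window sizes here are illustrative.

```python
import numpy as np

# Sketch of exhaustive block matching: each block of the current frame is
# searched within a small window of the previous frame, keeping the
# displacement with minimum sum of absolute differences (SAD). Blocks with
# persistent directed displacement can then be treated as moving objects,
# the rest as static scene disturbed only by turbulence jitter.

def block_match(prev, curr, block=8, search=4):
    h, w = curr.shape
    motion = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(h // block):
        for bx in range(w // block):
            y0, x0 = by * block, bx * block
            ref = curr[y0:y0 + block, x0:x0 + block].astype(float)
            best_sad, best_d = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y1, x1 = y0 + dy, x0 + dx
                    if y1 < 0 or x1 < 0 or y1 + block > h or x1 + block > w:
                        continue            # candidate block outside the frame
                    cand = prev[y1:y1 + block, x1:x1 + block].astype(float)
                    sad = np.abs(ref - cand).sum()
                    if best_sad is None or sad < best_sad:
                        best_sad, best_d = sad, (dy, dx)
            motion[by, bx] = best_d
    return motion

# synthetic check: shift a random texture 2 px to the right between frames
rng = np.random.default_rng(3)
frame0 = rng.uniform(0.0, 255.0, (32, 32))
frame1 = np.roll(frame0, 2, axis=1)
vectors = block_match(frame0, frame1)       # interior blocks report (0, -2)
```

Turbulence-induced dancing produces zero-mean jitter in these vectors over time, while genuine object motion is directed, which is what makes the segmentation possible.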

Adomeit U., Fraunhofer Institute for Optronics, System Technologies and Image Exploitation
Proceedings of SPIE - The International Society for Optical Engineering | Year: 2012

A main criterion for the comparison and selection of thermal imagers for military applications is their nominal range performance. This nominal range performance is calculated for a defined task and standardized target and environmental conditions. The only standardization available to date is STANAG 4347. The target defined there is based on a main battle tank in front view. Because of modified military requirements, this target is no longer up to date. Today, different tasks are of interest, especially the differentiation between friend and foe and the identification of humans. There is no direct way to differentiate between friend and foe in asymmetric scenarios, but one clue can be that someone is carrying a weapon. This clue can be transformed into the observer tasks detection (a person is carrying or is not carrying an object), recognition (the object is a long- / medium- / short-range weapon or civil equipment) and identification (the object can be named, e.g. AK-47, M-4, G36, RPG-7, axe, shovel). These tasks can be assessed experimentally, and from the results of such an assessment, a standard target for handheld objects may be derived. For a first assessment, a human carrying 13 different handheld objects in front of his chest was recorded at four different ranges with an IR dual-band camera. From the recorded data, a perception experiment was prepared. It was conducted with 17 observers in a 13-alternative forced-choice, unlimited-observation-time arrangement. The results of the test, together with Minimum Temperature Difference Perceived measurements of the camera and the temperature difference and critical dimension derived from the recorded imagery, allowed defining a first standard target according to the above tasks. This standard target consists of 2.5 / 3.5 / 5 line pairs on target for detection / recognition / identification (DRI), 0.24 m critical size and 1 K temperature difference. The values are preliminary and have to be refined in the future; this requires different aspect angles, different carriage and movement. © 2012 SPIE.
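The relation between line pairs on target, critical size, and range follows Johnson-type criteria: a task requiring N resolvable cycles across a target of critical dimension d is solved out to the range at which the sensor still resolves N cycles over the target's angular subtense. A sketch using the preliminary target values above and an assumed sensor resolution (the 4 cyc/mrad figure is a made-up example, not from the paper):

```python
# Johnson-criteria style range estimate: a task requiring N resolvable
# cycles (line pairs) across a target of critical size d [m] is solved up
# to range R = f * d * 1000 / N, where f [cycles/mrad] is the highest
# spatial frequency the sensor resolves at the target's contrast.

def task_range(critical_size_m, cycles_required, resolved_freq_cyc_per_mrad):
    # cycles on target at range R: n = f * (d / R) * 1000   (d/R in mrad)
    # solving n = cycles_required for R:
    return resolved_freq_cyc_per_mrad * critical_size_m * 1000.0 / cycles_required

f_resolved = 4.0        # ASSUMED example sensor performance, cycles/mrad
for task, n in [("detection", 2.5), ("recognition", 3.5), ("identification", 5.0)]:
    print(f"{task}: {task_range(0.24, n, f_resolved):.0f} m")
```

As expected, the more demanding tasks (more required cycles over the same 0.24 m critical size) yield shorter nominal ranges.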

Huebner C.S., Fraunhofer Institute for Optronics, System Technologies and Image Exploitation
Proceedings of SPIE - The International Society for Optical Engineering | Year: 2012

Many remote sensing applications are concerned with observing objects over long horizontal paths, and often the atmosphere between observer and object is quite turbulent, especially in arid or semi-arid regions. Depending on its degree, atmospheric turbulence can cause quite severe image degradation, the foremost effects being temporal and spatial blurring. And since the observed objects are not necessarily stationary, motion blurring can also factor into the degradation process. At present, the majority of existing image-processing methods aim exclusively at the restoration of static scenes. But there is a growing interest in enhancing turbulence mitigation methods to include moving objects as well. Therefore, the approach in this paper is to employ block matching as a motion detection algorithm to detect and estimate object motion, in order to separate directed movement from turbulence-induced undirected motion. This enables a segmentation of static scene elements and moving objects, provided that the object movement exceeds the turbulence motion. Local image stacking is carried out for the moving elements, thus effectively reducing the motion blur created by averaging and improving the overall final image restoration by means of blind deconvolution. © 2012 Copyright Society of Photo-Optical Instrumentation Engineers (SPIE).
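Motion-compensated averaging, the stage that the stacking above feeds into, can be sketched with a global shift estimated by FFT cross-correlation; real turbulence compensation uses local (per-block) motion, and the subsequent blind deconvolution is omitted here.

```python
import numpy as np

# Sketch of motion-compensated averaging: frames are registered to a
# reference frame (here with a single global shift estimated via FFT
# cross-correlation) before temporal averaging, so the average suppresses
# turbulence-induced random image dancing without smearing scene structure.

def estimate_shift(ref, frame):
    # peak of the circular cross-correlation surface
    corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))).real
    peak = np.unravel_index(corr.argmax(), corr.shape)
    # map the peak location to signed shifts
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

def compensated_average(frames):
    ref = frames[0]
    acc = ref.astype(float).copy()
    for f in frames[1:]:
        dy, dx = estimate_shift(ref, f)
        acc += np.roll(f, (dy, dx), axis=(0, 1))   # undo the estimated shift
    return acc / len(frames)

rng = np.random.default_rng(4)
scene = rng.uniform(0, 1, (64, 64))
# simulate frames jittered by global shifts (turbulence "dancing")
frames = [np.roll(scene, (dy, dx), axis=(0, 1))
          for dy, dx in [(0, 0), (1, -2), (-3, 2), (2, 1)]]
avg = compensated_average(frames)                  # recovers the scene
```

In the paper's pipeline, a blind deconvolution of the averaged image follows to remove the remaining atmospheric blur.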

Zepp A., Fraunhofer Institute for Optronics, System Technologies and Image Exploitation
Proceedings of SPIE - The International Society for Optical Engineering | Year: 2012

Wavefront sensing with a holographically created diffraction grating is a promising new approach in adaptive optics. In this paper, the realization of such a holographic wavefront sensor for the detection of defocus is presented. The core of the sensor is a computer-generated hologram displayed on a spatial light modulator. The experimental results of the sensor usage are compared with simulated data to prove its functionality. For the sensor design, the detector size can be an important parameter; hence, the influence of this factor on the sensor response is analyzed. Furthermore, the effect of scintillation on the sensor output is covered. A preliminary simulation shows that the holographic wavefront sensor is not absolutely independent of scintillation, as had been assumed on theoretical grounds. © 2012 SPIE.
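The aberration the sensor is designed to detect is the Zernike defocus mode. The sketch below only illustrates extracting a defocus coefficient from a wavefront by modal projection over the pupil, not the holographic measurement itself; the wavefront and coefficient values are made up for illustration.

```python
import numpy as np

# Sketch of the defocus term the sensor targets: the Zernike defocus mode
# Z(rho) = sqrt(3) * (2*rho^2 - 1), and extraction of its coefficient from
# a wavefront by a normalized inner product over the unit pupil.

n = 256
y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
rho2 = x**2 + y**2
pupil = rho2 <= 1.0
Z_defocus = np.sqrt(3.0) * (2.0 * rho2 - 1.0)

def defocus_coefficient(wavefront):
    # Zernike modes are orthogonal over the unit disk, so the coefficient
    # follows from projecting onto the defocus mode inside the pupil.
    num = (wavefront * Z_defocus * pupil).sum()
    den = (Z_defocus**2 * pupil).sum()
    return num / den

# a wavefront with 0.4 waves of defocus plus tilt (orthogonal to defocus)
wf = 0.4 * Z_defocus + 0.7 * x
coeff = defocus_coefficient(wf)   # recovers ~0.4; tilt does not leak in
```

The holographic sensor encodes this mode in its computer-generated hologram, so the defocus coefficient maps to the intensity balance of two diffraction spots instead of an explicit projection.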

Michaelsen E., Fraunhofer Institute for Optronics, System Technologies and Image Exploitation
Symmetry | Year: 2014

Gestalt Algebra gives a formal structure suitable for describing complex patterns in the image plane. This can be useful for recognizing hidden structure in images. The work at hand refers to the laws of perceptual psychology. A manifold called the Gestalt Domain is defined. Next to the position in 2D, it also contains an orientation and a scale component. Algebraic operations on it are given for mirror symmetry as well as organization into rows. Additionally, the Gestalt Domain contains an assessment component, and all the meaning of the operations implementing the Gestalt laws is realized in the functions giving this component. The operation for mirror symmetry is binary, combining two parts into one aggregate, as usual in standard algebra. The operation for organization into rows, however, combines n parts into an aggregate, where n may well be more than two. This is algebra in its more general sense. For recognition, primitives are extracted from digital raster images by Lowe's Scale-Invariant Feature Transform (SIFT). Lowe's key-point descriptors can also be utilized. Experiments are reported with a set of images put forth for the Computer Vision and Pattern Recognition Workshops (CVPR) 2013 symmetry contest. © 2014 by the authors.
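The binary mirror-symmetry operation on the Gestalt Domain can be caricatured as follows; the aggregate's position, orientation, and scale follow the description above, while the assessment function is a simplified stand-in for the paper's Gestalt-law formulation.

```python
import math

# Sketch of the binary mirror-symmetry operation: each Gestalt carries a
# 2D position, an orientation, a scale, and an assessment in [0, 1].
# Combining two parts yields an aggregate at their midpoint, oriented
# along the symmetry axis, assessed by how well the parts fit together.
# The assessment here (a simple scale-similarity product) is illustrative.

class Gestalt:
    def __init__(self, x, y, orientation, scale, assessment=1.0):
        self.x, self.y = x, y
        self.orientation = orientation    # radians
        self.scale = scale
        self.assessment = assessment

def mirror_op(g, h):
    mx, my = (g.x + h.x) / 2, (g.y + h.y) / 2           # midpoint position
    axis = math.atan2(h.y - g.y, h.x - g.x) + math.pi / 2  # symmetry axis
    scale_sim = min(g.scale, h.scale) / max(g.scale, h.scale)
    assessment = g.assessment * h.assessment * scale_sim
    return Gestalt(mx, my, axis, g.scale + h.scale, assessment)

a = Gestalt(0.0, 0.0, 0.3, 2.0, 0.9)
b = Gestalt(4.0, 0.0, -0.3, 2.0, 0.8)
s = mirror_op(a, b)    # aggregate at (2, 0) with combined assessment
```

The n-ary row operation works analogously, except that it aggregates an arbitrary number of parts, which is what takes the structure beyond standard binary algebra.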

Pagel F., Fraunhofer Institute for Optronics, System Technologies and Image Exploitation
IEEE Intelligent Vehicles Symposium, Proceedings | Year: 2010

Cameras are getting smaller and cheaper, so the cost-effective use of multi-camera systems in vehicles is becoming more and more attractive. In many cases, it is desirable to cover the whole environment all around the vehicle. But often, design restrictions and energy consumption do not allow constellations of cameras with overlapping fields of view. However, for a common geometric use of the extracted information, e.g. in structure-from-motion tasks, it is necessary to know the relative alignment of the cameras, i.e. the extrinsic calibration parameters. This paper addresses the extrinsic calibration of a multi-camera rig with non-overlapping fields of view on a mobile platform. As the fields of view do not necessarily overlap, common calibration methods based on corresponding image points between the camera views will fail. This problem can be overcome by using the mobility of the platform. A pattern-based method for extrinsic calibration of the camera rig on a mobile platform is presented. © 2010 IEEE.
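The geometric constraint that makes calibration from platform motion possible is the hand-eye relation A·X = X·B between per-camera ego-motions A and B and the unknown inter-camera transform X. The sketch below recovers the rotation part from two synthetic motions by aligning rotation axes; it illustrates the constraint, not the paper's pattern-based method.

```python
import numpy as np

# Hand-eye style constraint behind non-overlapping extrinsic calibration:
# when the rig moves, each camera measures its own ego-motion, and the
# inter-camera rotation X satisfies A @ X = X @ B for every motion pair.
# Two motions with non-parallel rotation axes suffice to recover X.

def rot(axis, angle):
    # Rodrigues' formula for a rotation matrix
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * K @ K

def rotation_axis(R):
    # unit rotation axis from the skew-symmetric part of R
    v = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return v / np.linalg.norm(v)

# ground-truth inter-camera rotation and two synthetic platform motions
X_true = rot([0, 0, 1], 0.5)
A1, A2 = rot([1, 0, 0], 0.4), rot([0, 1, 0], 0.7)
B1, B2 = X_true.T @ A1 @ X_true, X_true.T @ A2 @ X_true   # A X = X B

# axes satisfy a_i = X b_i; solve for X by SVD orthogonal alignment
a = np.stack([rotation_axis(A1), rotation_axis(A2),
              np.cross(rotation_axis(A1), rotation_axis(A2))])
b = np.stack([rotation_axis(B1), rotation_axis(B2),
              np.cross(rotation_axis(B1), rotation_axis(B2))])
U, _, Vt = np.linalg.svd(a.T @ b)
X_est = U @ Vt                          # recovers X_true on exact data
```

With noisy real motions, many such pairs are stacked into a least-squares problem, and the translation part needs additional metric information (e.g. the calibration pattern the paper uses).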
