Manger D., Fraunhofer Institute for Optronics, System Technologies and Image Exploitation
Proceedings of the 2012 9th Conference on Computer and Robot Vision, CRV 2012 | Year: 2012

In current biometric-based identification systems, tattoos and other body modifications have been shown to provide a useful source of information. Besides manual category label assignment, approaches utilizing state-of-the-art content-based image retrieval (CBIR) techniques have become increasingly popular. While local feature-based similarities of tattoo images achieve excellent retrieval accuracy, scalability to large image databases can be addressed with the popular bag-of-words model. In this paper, we show how recent advances in CBIR can be utilized to build a large-scale tattoo image retrieval system. Compared to other systems, we chose a different approach to circumvent the loss of accuracy caused by the bag-of-words quantization. Its efficiency and effectiveness are shown in experiments with several tattoo databases of up to 330,000 images. © 2012 IEEE.
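To make the retrieval setting concrete, the following is a minimal bag-of-visual-words baseline sketch: SIFT descriptors are quantized against a k-means vocabulary and gallery images are ranked by cosine similarity of their word histograms. This illustrates the standard BoW scheme the paper builds on and improves upon; it is not the paper's quantization-loss workaround, and the library choices (OpenCV, scikit-learn) and parameters are assumptions.

```python
# Illustrative bag-of-visual-words retrieval baseline; NOT the paper's exact
# pipeline. Vocabulary size and batch size are placeholder assumptions.
import cv2
import numpy as np
from sklearn.cluster import MiniBatchKMeans

_sift = cv2.SIFT_create()

def extract_descriptors(path):
    """Detect SIFT keypoints in one image and return their 128-D descriptors."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, desc = _sift.detectAndCompute(img, None)
    return desc if desc is not None else np.empty((0, 128), np.float32)

def build_vocabulary(image_paths, n_words=10000):
    """Cluster a sample of local descriptors into a visual vocabulary."""
    sample = np.vstack([extract_descriptors(p) for p in image_paths])
    return MiniBatchKMeans(n_clusters=n_words, batch_size=10000).fit(sample)

def bow_histogram(desc, vocab):
    """Quantize descriptors to visual words; return an L2-normalized histogram."""
    words = vocab.predict(desc.astype(np.float32))
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(np.float32)
    return hist / (np.linalg.norm(hist) + 1e-12)

def rank_gallery(query_hist, gallery_hists):
    """Rank gallery images by cosine similarity to the query histogram."""
    sims = gallery_hists @ query_hist   # rows of gallery_hists are histograms
    return np.argsort(-sims)            # indices of best matches first
```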


Toselli I., Fraunhofer Institute for Optronics, System Technologies and Image Exploitation | Toselli I., University of Miami
Journal of the Optical Society of America A: Optics, Image Science, and Vision | Year: 2014

In this paper, the concept of anisotropy at different atmospheric turbulence scales is introduced. A power spectrum and its associated structure function with inner and outer scale effects and anisotropy are also shown. The power spectrum includes an effective anisotropic parameter ζ_eff to describe anisotropy, which is useful for modeling optical turbulence when a non-Kolmogorov power law and anisotropy along the direction of propagation are present. © 2014 Optical Society of America.
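For orientation, the sketch below evaluates a generic anisotropic, non-Kolmogorov, von Kármán-type spectrum in which an effective anisotropy factor stretches the transverse wavenumber axes, while inner and outer scales enter through an exponential cutoff and a low-wavenumber roll-off. This is an illustrative textbook-style form, not the exact expression derived in the paper; the cutoff constants and the placement of ζ_eff are assumptions.

```python
# Illustrative anisotropic non-Kolmogorov spectrum with inner/outer-scale
# roll-offs. Generic von Karman-type form for exposition only; kappa_m and
# kappa_0 constants and the zeta_eff factors are assumptions.
import numpy as np
from scipy.special import gamma

def anisotropic_spectrum(kx, ky, kz, alpha=11/3, zeta_eff=2.0,
                         l0=5e-3, L0=10.0, Cn2_tilde=1e-14):
    """Spectral density Phi_n(kx, ky, kz) for power law 3 < alpha < 4.

    zeta_eff stretches the two transverse wavenumber axes, modelling
    anisotropy perpendicular to the propagation direction (z).
    """
    A = gamma(alpha - 1.0) * np.cos(alpha * np.pi / 2.0) / (4.0 * np.pi**2)
    kappa2 = zeta_eff**2 * (kx**2 + ky**2) + kz**2   # anisotropic wavenumber
    kappa_m = 5.92 / l0          # inner-scale cutoff (Kolmogorov-style constant)
    kappa_0 = 2.0 * np.pi / L0   # outer-scale cutoff
    return (A * Cn2_tilde * zeta_eff**2
            * np.exp(-kappa2 / kappa_m**2)
            / (kappa2 + kappa_0**2) ** (alpha / 2.0))
```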


Metzler J., Fraunhofer Institute for Optronics, System Technologies and Image Exploitation
Proceedings - 2012 IEEE 9th International Conference on Advanced Video and Signal-Based Surveillance, AVSS 2012 | Year: 2012

The objective of human re-identification is to recognize a specific individual at different locations and to determine whether an individual has already appeared. This is of particular interest in multi-camera networks with non-overlapping fields of view. However, it is still an unsolved computer vision task due to several challenges, e.g. significant changes in the appearance of individuals as well as varying illumination, camera parameters, etc. In addition, in surveillance scenarios, for instance, usually only low-resolution videos are available, so that biometric approaches may not be applicable. This paper presents a whole-body appearance-based human re-identification approach for low-resolution videos. We propose a novel appearance model computed from several images of an individual. The model is based on means of covariance descriptors determined by spectral clustering techniques. The proposed approach is tested on a multi-camera data set of a typical surveillance scenario and compared to a color-histogram-based method. © 2012 IEEE.
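As a rough illustration of such a model, the sketch below computes one region covariance descriptor per detection, groups a track's descriptors with spectral clustering, and keeps one mean per cluster. The per-pixel feature set, the log-Euclidean distance, and the number of clusters are assumptions chosen for exposition, not necessarily the choices made in the paper.

```python
# Minimal covariance-descriptor appearance model sketch (assumptions noted above).
import numpy as np
from scipy.linalg import logm
from sklearn.cluster import SpectralClustering

def covariance_descriptor(img):
    """Covariance of per-pixel features [x, y, R, G, B] for one detection."""
    h, w, _ = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    feats = np.stack([xs.ravel(), ys.ravel(),
                      img[..., 0].ravel(), img[..., 1].ravel(),
                      img[..., 2].ravel()], axis=1).astype(np.float64)
    return np.cov(feats, rowvar=False) + 1e-6 * np.eye(feats.shape[1])

def log_euclidean_dist(C1, C2):
    """Log-Euclidean distance between two SPD covariance matrices."""
    return np.linalg.norm(logm(C1) - logm(C2), ord='fro')

def appearance_model(track_images, n_clusters=3):
    """Cluster a track's covariance descriptors; return one mean per cluster."""
    covs = [covariance_descriptor(im) for im in track_images]
    D = np.array([[log_euclidean_dist(a, b) for b in covs] for a in covs])
    affinity = np.exp(-D / (D.mean() + 1e-12))
    labels = SpectralClustering(n_clusters=n_clusters,
                                affinity='precomputed').fit_predict(affinity)
    # Averaging matrix logarithms per cluster keeps the means SPD-friendly.
    return [np.mean([logm(covs[i]) for i in np.where(labels == k)[0]], axis=0)
            for k in range(n_clusters)]
```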


Hebel M., Fraunhofer Institute for Optronics, System Technologies and Image Exploitation | Stilla U., TU Munich
IEEE Transactions on Geoscience and Remote Sensing | Year: 2012

Tasks such as city modeling or urban planning require the registration, alignment, and comparison of multiview and/or multitemporal remote sensing data. Airborne laser scanning (ALS) is one of the established techniques to deliver these data. Regrettably, direct georeferencing of ALS measurements usually leads to considerable displacements that limit the connectivity and/or comparability of overlapping point clouds. Most reasons for this effect can be found in the imprecision of the positioning and orientation sensors and their misalignment to the laser scanner. Typically, these sensors comprise a global navigation satellite system receiver and an inertial measurement unit. This paper presents a method for the automatic self-calibration of such ALS systems and the alignment of the acquired laser point clouds. Although applicable to classical nadir configurations, a novelty of our approach is the consideration of multiple data sets that were recorded with an oblique forward-looking full-waveform laser scanner. A combination of a region-growing approach with a random-sample-consensus segmentation method is used to extract planar shapes. Matching objects in overlapping data sets are identified with regard to several geometric attributes. A new methodology is presented to transfer the planarity constraints into systems of linear equations to determine both the boresight parameters and the data alignment. In addition to system calibration and data registration, the presented workflow results in merged 3-D point clouds that contain information concerning rooftops and all building facades. This database represents a solid basis and reference for applications such as change detection. © 2012 IEEE.
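Two of the building blocks named above can be sketched compactly: a RANSAC plane fit that supplies planar segment hypotheses, and a small linear least-squares system assembled from point-to-plane constraints to estimate an alignment correction. The small-angle rotation-plus-translation parametrisation and the thresholds are illustrative assumptions, not the paper's full boresight model.

```python
# RANSAC plane segmentation and a linearized point-to-plane alignment solve.
# Parametrisation and thresholds are illustrative assumptions.
import numpy as np

def ransac_plane(points, n_iter=500, tol=0.05, rng=np.random.default_rng(0)):
    """Fit a plane n.p + d = 0 to a 3-D point set; return the inlier mask."""
    best_inliers = np.zeros(len(points), bool)
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-9:
            continue                       # degenerate (collinear) sample
        n = n / np.linalg.norm(n)
        d = -n @ p0
        inliers = np.abs(points @ n + d) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

def solve_alignment(points, normals, dists):
    """Least-squares correction (small rotation r, translation t) such that
    matched points satisfy n.(p + r x p + t) + d = 0 to first order."""
    A = np.hstack([np.cross(points, normals), normals])   # coefficients of r, t
    b = -(np.einsum('ij,ij->i', normals, points) + dists)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:3], x[3:]    # small-angle rotation vector, translation
```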


Huebner C.S., Fraunhofer Institute for Optronics, System Technologies and Image Exploitation
Proceedings of SPIE - The International Society for Optical Engineering | Year: 2011

The degree of image degradation due to atmospheric turbulence is particularly severe when imaging over long horizontal paths, since the turbulence is strongest close to the ground. The most pronounced effects include image blurring and image dancing and, in the case of strong turbulence, image distortion as well. To mitigate these effects, a number of methods from the field of image processing have been proposed, most of which aim exclusively at the restoration of static scenes. But there is also an increasing interest in advancing turbulence mitigation to encompass moving objects as well. Therefore, in this paper a procedure is described that employs block matching for the segmentation of static scene elements and moving objects, so that image restoration can be carried out separately for both. In this way, motion blurring is taken into account in addition to atmospheric blurring, effectively reducing motion artefacts and improving the overall restoration result. Motion-compensated averaging with subsequent blind deconvolution is used for the actual image restoration. © 2011 SPIE.
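The segmentation and averaging steps can be sketched as follows: exhaustive block matching against a reference frame flags blocks with non-zero displacement as moving, and only blocks classified as static are accumulated into a temporal average; a blind-deconvolution stage would then follow separately. Block size, search radius, and the motion threshold are assumptions for illustration, not the paper's settings.

```python
# Block-matching segmentation and motion-compensated averaging sketch.
# Assumes grayscale frames whose dimensions are multiples of the block size.
import numpy as np

def block_motion(frame, reference, block=16, search=4):
    """Per-block displacement magnitudes via exhaustive block matching."""
    h, w = frame.shape
    mags = np.zeros((h // block, w // block))
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            patch = frame[y:y + block, x:x + block]
            best, best_dxy = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if yy < 0 or xx < 0 or yy + block > h or xx + block > w:
                        continue
                    cand = reference[yy:yy + block, xx:xx + block]
                    err = np.mean((patch - cand) ** 2)
                    if err < best:
                        best, best_dxy = err, (dy, dx)
            mags[by, bx] = np.hypot(*best_dxy)
    return mags

def motion_compensated_average(frames, block=16, thresh=1.0):
    """Average only blocks classified as static (displacement below thresh)."""
    ref = frames[0].astype(np.float64)
    acc, cnt = np.zeros_like(ref), np.zeros_like(ref)
    for f in frames:
        f = f.astype(np.float64)
        moving = block_motion(f, ref, block=block) >= thresh
        mask = np.repeat(np.repeat(~moving, block, axis=0), block, axis=1)
        acc[mask] += f[mask]
        cnt[mask] += 1
    return acc / np.maximum(cnt, 1)
```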
