Raytrix GmbH

Kiel, Germany


Carlsohn M.F., Ingenieurberatung Dr. Carlsohn | Kemmling A., Universitätsklinikum Campus Lübeck | Petersen A., Raytrix GmbH | Wietzke L., Raytrix GmbH
Informatik aktuell | Year: 2017

Particle image velocimetry of simulated blood flow within transparent 3D resin models of cerebral aneurysms allows real-time 3D simulation of the flow dynamics with and without implants, offering a new pre-interventional option for endovascular treatment. The purpose of the experiments is to demonstrate the feasibility of 3D flow visualization of particle flow in small volumes (below 5 mm diameter), even though the flow dynamics are captured with a monocular camera set-up. © Springer-Verlag Berlin Heidelberg 2016.
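
At the core of any PIV measurement is the cross-correlation of interrogation windows between successive particle images. The sketch below shows only this generic planar-PIV building block, not the 3D plenoptic flow reconstruction of the paper; the function name and the synthetic example are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of the standard planar-PIV correlation step: estimate the
# particle displacement between two interrogation windows via FFT-based
# cross-correlation. Generic illustration only, not the paper's 3D pipeline.

def piv_displacement(window_a, window_b):
    """Integer-pixel displacement (dy, dx) of particles from window_a to window_b."""
    a = window_a - window_a.mean()
    b = window_b - window_b.mean()
    # Circular cross-correlation: the peak location equals the dominant shift.
    corr = np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrapped peak indices to signed shifts.
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))

# Example: a synthetic particle field shifted by (3, -2) pixels.
rng = np.random.default_rng(0)
frame_a = rng.random((64, 64))
frame_b = np.roll(frame_a, shift=(3, -2), axis=(0, 1))
print(piv_displacement(frame_a, frame_b))   # -> (3, -2)
```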


Johannsen O., University of Heidelberg | Heinze C., Raytrix GmbH | Goldluecke B., University of Heidelberg | Perwass C., Raytrix GmbH
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2013

Plenoptic cameras provide a robust way to capture 3D information with a single shot. This is accomplished by encoding the direction of the incoming rays with a microlens array (MLA) in front of the camera sensor. In the focused plenoptic camera, an MLA acts like multiple small cameras that capture the virtual scene on the focus plane of a main lens from slightly different angles, which enables algorithmic depth reconstruction. This virtual depth is measured on the camera side and is independent of the main lens used. The connection between actual lateral distances and virtual depth, however, does depend on the main lens parameters, and needs to be carefully calibrated. In this paper, we propose an approach to calibrate focused plenoptic cameras, which allows a metric analysis of a given scene. To achieve this, we minimize an energy model based upon the thin lens equation. The model allows us to estimate intrinsic and extrinsic parameters and corrects for radial lateral as well as radial depth distortion. © Springer-Verlag 2013.
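
The calibration rests on the thin lens equation, which links the lens-independent virtual depth to metric object distances once the main lens parameters are known. The following is a minimal sketch of that mapping only, not the paper's energy minimization; the parameter names, the sign convention (main-lens image plane assumed v·B in front of the MLA) and the example values are assumptions.

```python
# Minimal sketch (not the paper's calibration): mapping a virtual depth v to a
# metric object distance with the thin lens equation. A Keplerian-type set-up
# is assumed, in which the main-lens image plane lies v*B in front of the MLA;
# for a Galilean set-up the sign of v*B flips.

def virtual_to_metric_depth(v, B, b_L, f):
    """Metric object distance for a given virtual depth.

    v   : virtual depth estimated from microlens disparities (unitless)
    B   : spacing between microlens array (MLA) and sensor [mm]
    b_L : spacing between main lens and MLA [mm]
    f   : focal length of the main lens [mm]
    """
    d_image = b_L - v * B                       # main-lens image distance
    if d_image <= f:
        raise ValueError("image distance must exceed the focal length")
    return 1.0 / (1.0 / f - 1.0 / d_image)      # thin lens: 1/f = 1/d_o + 1/d_i

# Example with made-up numbers: 50 mm main lens, MLA 55 mm behind it,
# MLA-to-sensor spacing 0.5 mm, virtual depth 4.
print(virtual_to_metric_depth(v=4.0, B=0.5, b_L=55.0, f=50.0))   # ~883 mm
```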


Damghanian M., Mid Sweden University | Olsson R., Mid Sweden University | Sjöström M., Mid Sweden University | Erdmann A., Raytrix GmbH | Perwass C., Raytrix GmbH
2014 IEEE International Conference on Image Processing, ICIP 2014 | Year: 2014

Evaluation of state-of-the-art plenoptic cameras is necessary for design and application purposes. In this work, spatial resolution is investigated in a multi-focus plenoptic camera using two approaches: empirical and model-based. The Raytrix R29 plenoptic camera, which utilizes three types of micro lenses with different focal lengths in a hexagonal array structure to increase the depth of field, is studied. The model-based approach utilizes the previously proposed sampling pattern cube (SPC) model for representation and evaluation of plenoptic capturing systems. For the experimental resolution measurements, spatial resolution values are extracted from images reconstructed by the provided Raytrix reconstruction method. Both the measurement-based and the SPC-model-based approaches demonstrate a gradual variation of the resolution values over a wide depth range for the multi-focus R29 camera. Moreover, the good agreement between the results from the model-based approach and those from the empirical approach confirms the suitability of the SPC model for evaluating high-level camera parameters, such as spatial resolution, in a complex capturing system like the R29 multi-focus plenoptic camera. © 2014 IEEE.
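
For context, a common empirical way to obtain spatial resolution values from a reconstructed image is an edge-based MTF measurement. The sketch below illustrates that generic procedure under simplifying assumptions (a vertical edge, simple column averaging); it is not necessarily the measurement protocol used in the paper.

```python
import numpy as np

# Minimal sketch of an edge-based spatial resolution estimate: edge spread
# function -> line spread function -> MTF -> limiting frequency. Generic
# illustration only; thresholds and the synthetic edge are assumptions.

def resolution_from_edge(roi, threshold=0.1):
    """roi: 2D crop containing a vertical dark-to-bright edge."""
    esf = roi.mean(axis=0)                        # edge spread function
    lsf = np.diff(esf)                            # line spread function
    lsf = lsf * np.hanning(lsf.size)              # window to reduce leakage
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                                 # normalize to DC
    freqs = np.fft.rfftfreq(lsf.size)             # cycles per pixel
    below = np.nonzero(mtf < threshold)[0]
    return freqs[below[0]] if below.size else freqs[-1]

# Synthetic example: a blurred step edge.
x = np.linspace(-8, 8, 64)
edge = np.tile(1.0 / (1.0 + np.exp(-x / 1.5)), (32, 1))
print(resolution_from_edge(edge))   # limiting frequency in cycles/pixel
```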


Heinze C., Raytrix GmbH | Spyropoulos S., Raytrix GmbH | Hussmann S., Nürnberg University of Applied Sciences | Perwass C., Raytrix GmbH
IEEE Transactions on Instrumentation and Measurement | Year: 2016

The concept of plenoptic cameras has existed since the early 1900s, with recent advances in computational power making them a viable tool for industrial measurement applications. Plenoptic cameras can give 3-D information for every point of the scene with one camera, one lens, and a single image. This is possible by placing a microlens array directly in front of the image sensor. The depth estimation is based on disparities observed in neighboring microlens images, similar to stereo camera approaches. To relate these so-called virtual depth units to metric distances, a camera calibration must be performed. This paper presents a robust, automated camera calibration technique, which introduces new ways to model a multifocus plenoptic camera. © 2015 IEEE.
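
The disparity-to-virtual-depth step mentioned above reduces to a similar-triangles relation: the virtual depth equals the microlens baseline divided by the observed disparity. A minimal sketch with illustrative names, leaving out the metric calibration that the paper describes:

```python
import numpy as np

# Minimal sketch of disparity-based virtual depth estimation for a focused
# plenoptic camera. Assumes disparity and microlens baseline are given in the
# same units (e.g. pixels); an illustration, not the paper's calibration.

def virtual_depth(baseline_px, disparity_px):
    """Virtual depth from the disparity observed between two microlens images.

    By similar triangles, a point at distance a = v*B in front of the MLA
    (B = MLA-sensor spacing) appears shifted by d = baseline / v between the
    micro images of two lenses separated by `baseline`, hence v = baseline / d.
    """
    disparity_px = np.asarray(disparity_px, dtype=float)
    return baseline_px / disparity_px

# Example: neighboring microlens images 23 px apart, measured shifts of 4-8 px.
print(virtual_depth(23.0, [4.0, 5.75, 8.0]))   # -> [5.75, 4.0, 2.875]
```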


Heinze C., Raytrix GmbH | Spyropoulos S., Raytrix GmbH | Hussmann S., Nürnberg University of Applied Sciences | Perwass C., Raytrix GmbH
Conference Record - IEEE Instrumentation and Measurement Technology Conference | Year: 2015

The concept of plenoptic cameras has existed since the early 1900s, with recent advances in computational power making them a viable tool for industrial applications. Plenoptic cameras can give 3D information about the scene with one camera, one lens, and a single image. This is possible by placing a microlens array directly in front of the image sensor. The depth estimation is based on disparities observed in individual microlens images, similar to stereo camera approaches. To relate these so-called virtual depth units to metric distances, a camera calibration must be performed. This paper presents a robust, automated camera calibration technique, which introduces new ways to model a multi-focus plenoptic camera. © 2015 IEEE.


Grant
Agency: European Commission | Branch: H2020 | Program: MSCA-ITN-ETN | Phase: MSCA-ITN-2015-ETN | Award Amount: 3.80M | Year: 2015

Current displays fall far short of truly recreating visual reality. This can never be achieved by painting an image on a flat surface such as a TV screen, but requires a full-parallax display which can recreate the complete lightfield, i.e. the light traveling in every direction through every point in space. Recent years have seen major developments towards this goal, promising a new generation of ultra-realistic displays with applications in medicine, informatics, manufacturing, entertainment, gaming and more. However, achieving this will require a new generation of researchers trained both in the relevant physics and in the biology of human vision. The European Training Network on Full-Parallax Imaging (ETN-FPI) aims to develop this new generation. The research concept of the network is to start from the notions of plenoptics, light field and integral imaging, which are sometimes used interchangeably, and to harmonize and further advance research in these areas under the umbrella of the wider, viewer-centred FPI concept. Fifteen early-stage researchers will be employed on novel interdisciplinary FPI projects. The training disciplines include theoretical and applied optics, multi-dimensional image processing, and visual neuroscience. The targeted applications range from plenoptic microscopy to interaction with visual content generated by wide field-of-view displays. Extensive studies on the perception of continuous parallax and subjective tests on various displays are planned. The training program includes local training, online seminars, workshops, training schools and a conference. The researchers trained within the network are expected to acquire competitive skills and become highly competent in bringing innovations to the field of ultra-realistic and interactive visual media. This will enhance their employability in a rapidly growing sector in academia and industry and position Europe as a leader in this emerging multidisciplinary field.


Fleischmann O., University of Kiel | Wietzke L., Raytrix GmbH | Sommer G., University of Kiel
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2010

We propose a novel curvature estimation algorithm which is capable of estimating the curvature of digital curves and two-dimensional curved image structures. The algorithm is based on the conformal projection of the curve or image signal to the two-sphere. Due to the geometric structure of the embedded signal, the curvature may be estimated in terms of first-order partial derivatives in ℝ³. This structure allows us to obtain the curvature by simply convolving the projected signal with the appropriate kernels. We show that the method performs an implicit plane fitting by convolving the projected signals with the derivative kernels. Since the algorithm is based on convolutions, its implementation is straightforward for digital curves as well as images. We compare the proposed method with differential geometric curvature estimators. It turns out that the novel estimator is as accurate as the standard differential geometric methods in synthetic as well as real and noise-perturbed environments. © 2010 Springer-Verlag.
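
The implicit plane fit can be made explicit for digital curves: project the curve samples stereographically onto the unit sphere, fit a plane in ℝ³, and back-project the resulting circle to obtain the curvature. The sketch below performs this fit with least squares over a local window rather than with the paper's convolution kernels; names and window size are illustrative.

```python
import numpy as np

# Minimal sketch of curvature estimation for a digital 2D curve via conformal
# (stereographic) projection to the unit sphere. The paper realizes the plane
# fit implicitly through convolutions with derivative kernels; here the fit is
# done explicitly over a local window of curve samples.

def stereographic(points):
    """Project 2D points onto the unit sphere (inverse projection from the north pole)."""
    u, v = points[:, 0], points[:, 1]
    s = u**2 + v**2 + 1.0
    return np.column_stack((2.0 * u / s, 2.0 * v / s, (s - 2.0) / s))

def curvature_from_window(points):
    """Curvature of the circle best fitting a local window of 2D curve points."""
    p = stereographic(np.asarray(points, dtype=float))
    centroid = p.mean(axis=0)
    # Total-least-squares plane fit: the normal is the right singular vector
    # belonging to the smallest singular value of the centered point cloud.
    _, _, vt = np.linalg.svd(p - centroid)
    n = vt[-1]
    h = float(n @ centroid)
    # The plane n.p = h cuts the sphere in a circle; its back-projection to the
    # plane is a circle of radius sqrt(1 - h^2) / |n_z - h|.
    return abs(n[2] - h) / np.sqrt(max(1.0 - h**2, 1e-12))

# Sanity check: samples from a circle of radius 5 should give curvature 0.2.
t = np.linspace(0.0, 0.5, 9)
window = np.column_stack((3.0 + 5.0 * np.cos(t), -1.0 + 5.0 * np.sin(t)))
print(curvature_from_window(window))   # ~0.2
```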


Wietzke L., Raytrix GmbH | Sommer G., University of Kiel
Journal of Mathematical Imaging and Vision | Year: 2010

This work covers a fundamental problem of local phase-based image analysis: the isotropic generalization of the classical 1D analytic signal to two dimensions. The analytic signal enables the analysis of local phase and amplitude information of 1D signals. Local phase, amplitude and additional orientation information can be extracted by the 2D monogenic signal, with the restriction to intrinsically 1D signals. In the case of 2D image signals, the monogenic signal enables the rotationally invariant analysis of lines and edges. In this work we present the 2D analytic signal as a novel generalization of both the analytic signal and the 2D monogenic signal. In the case of 2D image signals, the 2D analytic signal enables the isotropic analysis of lines, edges, corners and junctions in one unified framework. Furthermore, we show that 2D signals are defined on a 3D projective subspace of the homogeneous conformal space, which delivers a descriptive geometric interpretation of signals and provides new insights into the relation between geometry and 2D image signals. Finally, we introduce a novel algebraic signal representation, which can be regarded as an alternative, fully isomorphic representation to classical matrices and tensors. We show the solution of isotropic intrinsically 2D image analysis without the need for steering techniques. © Springer Science+Business Media, LLC 2010.
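
The monogenic signal that this work generalizes can be computed directly via the Riesz transform in the Fourier domain. Below is a minimal sketch of that standard construction (not the 2D analytic signal proposed in the paper); the difference-of-Gaussians band-pass prefilter and parameter values are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of the 2D monogenic signal: local amplitude, phase and
# orientation obtained from the Riesz transform in the Fourier domain.

def monogenic_signal(image, sigma_fine=1.0, sigma_coarse=4.0):
    """Local amplitude, phase and orientation maps of a 2D image."""
    image = np.asarray(image, dtype=float)
    wy = np.fft.fftfreq(image.shape[0])[:, None]
    wx = np.fft.fftfreq(image.shape[1])[None, :]
    radius = np.hypot(wx, wy)
    radius[0, 0] = 1.0                          # avoid division by zero at DC

    # Difference-of-Gaussians band-pass prefilter (illustrative choice).
    bandpass = np.exp(-2 * np.pi**2 * sigma_fine**2 * radius**2) \
             - np.exp(-2 * np.pi**2 * sigma_coarse**2 * radius**2)
    bandpass[0, 0] = 0.0                        # remove the DC component

    spectrum = np.fft.fft2(image) * bandpass
    f = np.real(np.fft.ifft2(spectrum))
    # Riesz transform: frequency responses -i*wx/|w| and -i*wy/|w|.
    r1 = np.real(np.fft.ifft2(spectrum * (-1j * wx / radius)))
    r2 = np.real(np.fft.ifft2(spectrum * (-1j * wy / radius)))

    amplitude = np.sqrt(f**2 + r1**2 + r2**2)
    phase = np.arctan2(np.hypot(r1, r2), f)     # near 0/pi: line-like, near pi/2: edge-like
    orientation = np.arctan2(r2, r1)            # local structure orientation
    return amplitude, phase, orientation

# Example on a synthetic oriented pattern.
y, x = np.mgrid[0:128, 0:128]
amp, phi, theta = monogenic_signal(np.cos(0.3 * x + 0.2 * y))
```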


Perwass C., Raytrix GmbH | Wietzke L., Raytrix GmbH
Proceedings of SPIE - The International Society for Optical Engineering | Year: 2012

Placing a micro lens array in front of an image sensor transforms a normal camera into a single-lens 3D camera, which also allows the user to change the focus and the point of view after a picture has been taken. While the concept of such plenoptic cameras has been known since 1908, only recently have the increased computing power of low-cost hardware and advances in micro lens array production made the application of plenoptic cameras feasible. This text presents a detailed analysis of plenoptic cameras and introduces a new type of plenoptic camera with an extended depth of field and a maximal effective resolution of up to a quarter of the sensor resolution. © 2012 Copyright Society of Photo-Optical Instrumentation Engineers (SPIE).
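
Refocusing after capture follows from the projection geometry: a virtual image point at a chosen virtual depth maps into every nearby microlens image at a predictable offset, so averaging those samples renders the scene focused at that depth. The following is a minimal sketch under simplifying assumptions (known microlens image centers, nearest-neighbour sampling, an output grid aligned with the sensor); it is not Raytrix's rendering pipeline.

```python
import numpy as np

# Minimal sketch of synthetic refocusing for a focused plenoptic camera:
# render the scene at virtual depth v by averaging, for every output point,
# the raw samples that a virtual image point at that position would project
# to under the surrounding microlenses. Geometry and names are illustrative.

def refocus(raw, centers, d_micro, v):
    """raw     : 2D raw plenoptic sensor image
       centers : (N, 2) array of microlens image centers in pixels (row, col)
       d_micro : diameter of one microlens image in pixels
       v       : virtual depth at which to render (baseline / disparity)"""
    h, w = raw.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)      # output grid = sensor grid
    out = np.zeros((h, w))
    cnt = np.zeros((h, w))
    for cy, cx in centers:
        # A virtual point at (y, x) is seen by this microlens only while its
        # projection stays inside the micro image of diameter d_micro.
        mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= (v * d_micro / 2.0) ** 2
        sy = np.clip(np.round(cy + (cy - yy[mask]) / v).astype(int), 0, h - 1)
        sx = np.clip(np.round(cx + (cx - xx[mask]) / v).astype(int), 0, w - 1)
        out[mask] += raw[sy, sx]
        cnt[mask] += 1
    return out / np.maximum(cnt, 1)
```

Rendering the same raw image at several virtual depths yields the post-capture change of focus described in the abstract.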


Digital imaging system for synthesizing an image using data recorded with a plenoptic camera comprising a photosensor line having a plurality of photosensors; an optics arrangement, including a main lens (10) and a microlens set-up (12) arranged in at least a line of focus of the main lens (10), to direct light from an object (30) to the photosensor line via the main lens (10) and the microlens line set-up (12); a control arrangement to set a relative distance between the photosensor line and the microlens set-up; wherein, for a set of light rays passing through a particular portion of the focal line of the main lens (10), a plurality of the photosensors are arranged to sense different ones of the set of light rays that concurrently arrive at the particular portion of the focal line at different angles of incidence, and to output light data characterizing the sensed light rays; and a processor in a computer set-up programmable to provide image data characterizing a synthesized image as a function of the light sensed at different photosensors, the position of the photosensors relative to the microlens set-up (12), and an angle of incidence of the sensed light upon the focal line of the main lens (10), wherein the focal lengths of each of the microlenses over the at least one-dimensional set-up differ from each other.
