Visual Computing Group

Sardinia, Italy


Yang Y.,Yale University | Pintus R.,Visual Computing Group | Gobbetti E.,Visual Computing Group | Rushmeier H.,Yale University
Journal on Computing and Cultural Heritage | Year: 2017

We propose three automatic algorithms for analyzing digitized medieval manuscripts: text block computation, text line segmentation, and special component extraction, which build on established clustering algorithms and a template-matching technique. All three methods are fully automatic, requiring no user intervention or input. Moreover, they operate on a per-page basis; unlike some prior methods, which need a set of pages from the same manuscript for training, they can analyze a single page without any additional input pages, eliminating the need for training on pages with a similar layout. We extensively evaluated the algorithms on 1,771 page images from six publicly available historical manuscripts, which differ significantly from each other in layout structure, acquisition resolution, writing style, and other characteristics. The experimental results indicate very satisfactory performance; for example, the average precision and recall obtained by the text block computation method reach 98% and 99%, respectively. © 2017 ACM.
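The paper's own pipeline (clustering plus template matching) is not reproduced here; as a hedged illustration of the text-line-segmentation subtask, the following sketch uses the classical horizontal projection-profile baseline, which splits a binarized page into row ranges wherever an ink-free gap occurs. Function and parameter names are my own.

```python
import numpy as np

def segment_text_lines(binary_page, min_gap=2):
    """Split a binarized page (1 = ink, 0 = background) into text-line
    row ranges using a horizontal projection profile."""
    profile = binary_page.sum(axis=1)          # ink pixels per row
    in_line, start, lines = False, 0, []
    for row, count in enumerate(profile):
        if count > 0 and not in_line:
            in_line, start = True, row         # a text line begins
        elif count == 0 and in_line:
            in_line = False
            lines.append((start, row))         # the line just ended
    if in_line:
        lines.append((start, len(profile)))
    # merge ranges separated by gaps narrower than min_gap rows
    merged = []
    for s, e in lines:
        if merged and s - merged[-1][1] < min_gap:
            merged[-1] = (merged[-1][0], e)
        else:
            merged.append((s, e))
    return merged
```

Real manuscripts with skewed or touching lines need the more robust per-page clustering the paper describes; this baseline only conveys the basic idea.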


Rodriguez M.B.,Visual Computing Group | Alcocer P.P.V.,Polytechnic University of Catalonia
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2012

Volume rendering has been a relevant topic in scientific visualization for the last two decades. A decade ago, exploring reasonably large volume datasets required costly workstations due to the high processing cost of this kind of visualization. In recent years, a high-end PC or laptop has been sufficient to handle medium-sized datasets, thanks especially to the fast evolution of GPU hardware. New embedded CPUs that sport powerful graphics chipsets make complex 3D applications feasible on such devices. However, beyond heavily marketed presentations and their hype, little empirical data is available for comparing absolute and relative capabilities. In this paper, we analyze the graphics hardware in current high-end Android mobile devices and perform a practical comparison on a well-known GPU-intensive task: volume rendering. We analyze different aspects by implementing three classical algorithms and show how current state-of-the-art mobile GPUs behave in volume rendering. © 2012 Springer-Verlag.
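All classical volume-rendering algorithms ultimately accumulate color and opacity along viewing rays. As a minimal, GPU-free sketch (not the paper's benchmarked implementations), the core front-to-back emission-absorption compositing for one ray can be written as:

```python
def composite_ray(samples, transfer):
    """Front-to-back emission-absorption compositing along one ray.
    samples: scalar values sampled along the ray;
    transfer: maps a scalar to ((r, g, b), alpha)."""
    color = [0.0, 0.0, 0.0]
    alpha = 0.0
    for s in samples:
        rgb, a = transfer(s)
        w = (1.0 - alpha) * a          # weight by remaining transparency
        for i in range(3):
            color[i] += w * rgb[i]
        alpha += w
        if alpha >= 0.99:              # early ray termination
            break
    return color, alpha
```

On a GPU this loop runs per pixel in a fragment or compute shader; early ray termination is one of the optimizations whose payoff differs markedly between desktop and mobile GPUs.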


Balsa Rodriguez M.,Visual Computing Group | Gobbetti E.,Visual Computing Group | Iglesias Guitian J.A.,Visual Computing Group | Iglesias Guitian J.A.,University of Zaragoza | And 5 more authors.
Computer Graphics Forum | Year: 2014

Great advancements in commodity graphics hardware have favoured graphics processing unit (GPU)-based volume rendering as the main adopted solution for interactive exploration of rectilinear scalar volumes on commodity platforms. Nevertheless, long data transfer times and GPU memory size limitations are often the main limiting factors, especially for massive, time-varying or multi-volume visualization, as well as for networked visualization on the emerging mobile devices. To address this issue, a variety of level-of-detail (LOD) data representations and compression techniques have been introduced. In order to improve capabilities and performance over the entire storage, distribution and rendering pipeline, the encoding/decoding process is typically highly asymmetric, and systems should ideally compress at data production time and decompress on demand at rendering time. Compression and LOD pre-computation does not have to adhere to real-time constraints and can be performed off-line for high-quality results. In contrast, adaptive real-time rendering from compressed representations requires fast, transient and spatially independent decompression. In this report, we review the existing compressed GPU volume rendering approaches, covering sampling grid layouts, compact representation models, compression techniques, GPU rendering architectures and fast decoding techniques. © 2014 The Authors Computer Graphics Forum © 2014 The Eurographics Association and John Wiley & Sons Ltd.


Gunther T.,Visual Computing Group | Theisel H.,Otto Von Guericke University of Magdeburg
Computer Graphics Forum | Year: 2016

Inertial particles are finite-sized objects traveling with a velocity that differs from the underlying carrying flow, i.e., they are mass-dependent and subject to inertia. Their backward integration is in practice infeasible, since a slight change in the initial velocity causes extreme changes in the recovered position. Thus, if an inertial particle is observed, it is difficult to recover where it came from. This is known as the source inversion problem, which has many practical applications in recovering the source of airborne or waterborne pollution. Inertial trajectories live in a higher-dimensional spatio-velocity space. In this paper, we show that this space is only sparsely populated. Assuming that inertial particles are released with a given initial velocity (e.g., from rest), particles may reach a certain location only with a limited set of possible velocities. In fact, with increasing integration duration and depending on the particle response time, inertial particles converge to a terminal velocity. We show that the set of initial positions that lead to the same location form a curve. We extract these curves by devising a derived vector field in which they appear as tangent curves. Most importantly, the derived vector field only involves forward-integrated flow map gradients, which are much more stable to compute than backward trajectories. After extraction, we interactively visualize the curves in the domain and display the reached velocities using glyphs. In addition, we encode the rate of change of the terminal velocity along the curves, which gives an indication of the convergence to the terminal velocity. With this, we present the first solution to the source inversion problem that considers actual inertial trajectories. We apply the method to steady and unsteady flows in both 2D and 3D domains. © 2016 The Author(s) Computer Graphics Forum © 2016 The Eurographics Association and John Wiley & Sons Ltd. Published by John Wiley & Sons Ltd.
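The convergence to a terminal velocity can be seen in a much simpler setting than the paper's spatio-velocity framework. The sketch below forward-integrates a 1D inertial particle under a basic linear-drag model dv/dt = (u(x) − v)/r, where r is the response time; it is an illustrative reduction, not the paper's trajectory model.

```python
def advect_inertial(x, v, u, r, dt, steps):
    """Forward-integrate a 1D inertial particle with response time r in a
    carrier flow u(x), using explicit Euler and a linear drag model:
        dv/dt = (u(x) - v) / r
    Returns the final position and velocity."""
    for _ in range(steps):
        a = (u(x) - v) / r    # drag pulls v toward the local flow velocity
        v += a * dt
        x += v * dt
    return x, v
```

In a constant flow, v approaches u exponentially with time constant r; this is the terminal-velocity behavior that makes the reachable velocity set at a location so limited.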


Mura C.,University of Zürich | Mattausch O.,University of Zürich | Villanueva A.J.,Visual Computing Group | Gobbetti E.,Visual Computing Group | Pajarola R.,University of Zürich
Proceedings - 13th International Conference on Computer-Aided Design and Computer Graphics, CAD/Graphics 2013 | Year: 2013

We present a robust approach for reconstructing the architectural structure of complex indoor environments given a set of cluttered input scans. Our method first uses an efficient occlusion-aware process to extract planar patches as candidate walls, separating them from clutter and coping with missing data. Using a diffusion process to further increase its robustness, our algorithm is able to reconstruct a clean architectural model from the candidate walls. To our knowledge, this is the first indoor reconstruction method which goes beyond a binary classification and automatically recognizes different rooms as separate components. We demonstrate the validity of our approach by testing it on both synthetic models and real-world 3D scans of indoor environments. © 2013 IEEE.
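The paper's occlusion-aware patch extraction is not detailed in the abstract; as a hedged stand-in for the generic first step of finding candidate wall planes in a scan, a plain RANSAC plane fit looks like this (names and tolerances are my own):

```python
import numpy as np

def ransac_plane(points, iters=200, tol=0.02, seed=0):
    """Fit a dominant plane to an Nx3 point set with RANSAC.
    Returns ((normal, d), inlier_mask) with n.p + d = 0."""
    rng = np.random.default_rng(seed)
    best_inliers, best_model = None, None
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                    # degenerate (collinear) sample
        n /= norm
        d = -n.dot(p0)
        dist = np.abs(points @ n + d)   # point-to-plane distances
        inliers = dist < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, d)
    return best_model, best_inliers
```

Running this repeatedly (removing inliers each time) yields the planar patches; the paper's contribution lies in making this separation robust to occlusion, clutter, and missing data, which the bare fit above does not address.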


Gobbetti E.,Visual Computing Group | Guitian J.A.I.,Visual Computing Group | Marton F.,Visual Computing Group
Computer Graphics Forum | Year: 2012

We present a novel multiresolution compression-domain GPU volume rendering architecture designed for interactive local and networked exploration of rectilinear scalar volumes on commodity platforms. In our approach, the volume is decomposed into a multiresolution hierarchy of bricks. Each brick is further subdivided into smaller blocks, which are compactly described by sparse linear combinations of prototype blocks stored in an overcomplete dictionary. The dictionary is learned, using limited computational and memory resources, by applying the K-SVD algorithm to a re-weighted non-uniformly sampled subset of the input volume, harnessing the recently introduced method of coresets. The result is a scalable high-quality coding scheme, which allows very large volumes to be compressed off-line and then decompressed on demand during real-time GPU-accelerated rendering. Volumetric information can be maintained in compressed format throughout the entire rendering pipeline. In order to efficiently support high-quality filtering and shading, a specialized real-time renderer closely coordinates decompression with rendering, combining at each frame images produced by raycasting selectively decompressed portions of the current view- and transfer-function-dependent working set. The quality and performance of our approach are demonstrated on massive static and time-varying datasets. © 2012 The Author(s).


Mura C.,University of Zürich | Mattausch O.,University of Zürich | Jaspe Villanueva A.,Visual Computing Group | Gobbetti E.,Visual Computing Group | Pajarola R.,University of Zürich
Computers and Graphics (Pergamon) | Year: 2014

We present a robust approach for reconstructing the main architectural structure of complex indoor environments given a set of cluttered 3D input range scans. Our method uses an efficient occlusion-aware process to extract planar patches as candidate walls, separating them from clutter and coping with missing data, and automatically extracts the individual rooms that compose the environment by applying a diffusion process on the space partitioning induced by the candidate walls. This diffusion process, which has a natural interpretation in terms of heat propagation, makes our method robust to artifacts and other imperfections that occur in typical scanned data of interiors. For each room, our algorithm reconstructs an accurate polyhedral model by applying methods from robust statistics. We demonstrate the validity of our approach by evaluating it on both synthetic models and real-world 3D scans of indoor environments. © 2014 The Authors.
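The heat-propagation interpretation of the room-extraction step can be sketched on a cell-adjacency graph: cells of the space partition exchange heat freely through open faces, while candidate walls damp the exchange, so heat pools within rooms. This is an illustrative toy (my own weights and update rule), not the paper's diffusion formulation.

```python
def diffuse(adjacency, heat, steps=50, rate=0.2):
    """Iteratively propagate heat over a cell-adjacency graph.
    adjacency[i] is a list of (j, w) pairs, where w in [0, 1] is the
    permeability of the face shared by cells i and j (0 = solid wall)."""
    h = list(heat)
    for _ in range(steps):
        nxt = h[:]
        for i, nbrs in enumerate(adjacency):
            for j, w in nbrs:
                nxt[i] += rate * w * (h[j] - h[i])   # exchange across the face
        h = nxt
    return h
```

Seeding heat in one cell and thresholding the equilibrated values groups cells into rooms; cells behind an impermeable wall receive none of the seed's heat.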


Iglesias Guitian J.A.,Visual Computing Group | Gobbetti E.,Visual Computing Group | Marton F.,Visual Computing Group
Visual Computer | Year: 2010

We report on a light-field-display-based virtual environment enabling multiple naked-eye users to perceive detailed multi-gigavoxel volumetric models as floating in space, responsive to their actions, and delivering different information in different areas of the workspace. Our contributions include a set of specialized interactive illustrative techniques able to provide different contextual information in different areas of the display, as well as an out-of-core CUDA-based raycasting engine with a number of improvements over current GPU volume raycasters. The possibilities of the system are demonstrated by the multi-user interactive exploration of 64-Gvoxel data sets on a 35-Mpixel light field display driven by a cluster of PCs. © Springer-Verlag 2010.


Mura C.,University of Zürich | Villanueva A.J.,Visual Computing Group | Mattausch O.,University of Zürich | Gobbetti E.,Visual Computing Group | Pajarola R.,University of Zürich
Computer Graphics Forum | Year: 2014

Reconstructing the architectural shape of interiors is a problem that is gaining increasing attention in the field of computer graphics. Some solutions have been proposed in recent years, but cluttered environments with multiple rooms and non-vertical walls still represent a challenge for state-of-the-art methods. We propose an occlusion-aware pipeline that extends current solutions to work with complex environments with arbitrary wall orientations. © The Eurographics Association 2014.


Rodriguez M.B.,Visual Computing Group | Agus M.,Visual Computing Group | Marton F.,Visual Computing Group | Gobbetti E.,Visual Computing Group
Computer Graphics Forum | Year: 2015

We introduce a novel approach for letting casual viewers explore detailed 3D models integrated with structured, spatially associated descriptive information organized in a graph. Each node associates a subset of the 3D surface seen from a particular viewpoint with the related descriptive annotation, together with its author-defined importance. Graph edges describe, instead, the strength of the dependency relation between information nodes, allowing content authors to describe the preferred order of presentation of information. At run-time, users navigate inside the 3D scene using a camera controller, while adaptively receiving unobtrusive guidance towards interesting viewpoints and history- and location-dependent suggestions on important information, which is adaptively presented using 2D overlays displayed over the 3D scene. The capabilities of our approach are demonstrated in a real-world cultural heritage application involving the public presentation of a sculptural complex on a large projection-based display. A user study was performed to validate our approach. © 2015 The Author(s) Computer Graphics Forum © 2015 The Eurographics Association and John Wiley & Sons Ltd. Published by John Wiley & Sons Ltd.
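A hypothetical sketch of how the importance values and dependency edges could drive the history-dependent suggestions (the scoring rule below is my own invention, not the paper's): an unseen node is ranked by its importance, penalized by the strength of its not-yet-seen dependencies, so prerequisite content tends to be suggested first.

```python
def next_suggestion(nodes, edges, seen):
    """Pick the most important unseen node whose dependencies are covered.
    nodes: {node_id: importance}
    edges: {node_id: {dependency_id: strength}}
    seen:  set of node_ids the user has already visited."""
    best, best_score = None, float("-inf")
    for nid, importance in nodes.items():
        if nid in seen:
            continue
        deps = edges.get(nid, {})
        # penalize nodes whose strong dependencies have not been seen yet
        penalty = sum(s for dep, s in deps.items() if dep not in seen)
        score = importance - penalty
        if score > best_score:
            best, best_score = nid, score
    return best
```

With strong enough edge weights, an introductory node is offered before the detailed node that depends on it, matching the author-specified presentation order.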
