DEI
Porto, Portugal

Rodrigues R., INESC Porto | Coelho A., INESC Porto | Reis L.P., DEI
GRAPP 2010 - Proceedings of the International Conference on Computer Graphics Theory and Applications | Year: 2010

Procedural modelling is presented as a solution for generating three-dimensional models of urban environments that saves both money and time while maintaining an acceptable level of visual fidelity. Nevertheless, anchor (or monumental) buildings, which identify certain urban areas, require more careful modelling because of the high level of detail needed and are generally modelled by hand. We propose automating the building modelling process by introducing additional knowledge from textual descriptions into a procedural modelling system. The results show that the data model is flexible enough to build distinct models of churches. The data model can also provide an initial structure for high-level modelling, supplying the global shape of the building and the location of doors, windows, and other structures; highly detailed models can then be built from this initial structure. The results also demonstrate that a 3D model can be created from a text, allowing non-specialised users to work more effectively with a procedural modelling system.
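
To make the idea of such an intermediate structure concrete, the following minimal Python sketch shows one possible shape for a data model that stores a building's global form and the placement of doors and windows. Every class and field name here is hypothetical and not taken from the paper.

```python
# Hypothetical sketch of the kind of intermediate data model the abstract
# describes: a global building shape plus the placement of openings
# (doors, windows) that a detailed procedural pass can later refine.
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class Opening:
    kind: str                    # "door", "window", "rose_window", ...
    facade: str                  # which facade it sits on, e.g. "west"
    position: Tuple[float, float]  # (x, y) offset on the facade, in metres
    size: Tuple[float, float]      # (width, height) in metres


@dataclass
class BuildingModel:
    name: str
    footprint: List[Tuple[float, float]]  # polygon of ground-plan vertices
    height: float                         # overall height in metres
    openings: List[Opening] = field(default_factory=list)


# A church-like instance that a text-analysis front end might emit,
# to be handed to a procedural modelling system for detailed geometry.
church = BuildingModel(
    name="example church",
    footprint=[(0, 0), (20, 0), (20, 40), (0, 40)],
    height=25.0,
    openings=[
        Opening("door", "west", (8.0, 0.0), (4.0, 6.0)),
        Opening("window", "south", (5.0, 10.0), (2.0, 4.0)),
    ],
)
```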


Rahimi A., University of California at San Diego | Ghofrani A., University of California at Santa Barbara | Angel M., University of California at Santa Barbara | Cheng K.-T., University of California at Santa Barbara | And 3 more authors.
Proceedings - Design Automation Conference | Year: 2014

Thousands of deep and wide pipelines working concurrently make GPGPUs high-power-consuming parts. Energy-efficiency techniques employ voltage overscaling, which increases timing sensitivity to variations and hence aggravates energy-use issues. This paper proposes a method to increase the spatiotemporal reuse of computational effort through a combination of compilation and micro-architectural design. An associative memristive memory (AMM) module is integrated with the floating-point units (FPUs). Together, these enable fine-grained partitioning of values and the identification of high-frequency sets of values for the FPUs by searching the space of possible inputs, with the help of application-specific profile feedback. For every kernel execution, the compiler pre-stores these frequent sets of values in the AMM modules, which represent partial functionality of the associated FPU and are evaluated concurrently over two clock cycles. Our simulation results show high hit rates with 32-entry AMM modules, enabling a 36% reduction in the average energy used by the kernel codes. Compared to voltage overscaling, this technique enhances robustness against timing errors with 39% average energy saving. Copyright 2014 ACM.
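
As a rough illustration of the lookup idea (not the authors' implementation), the Python sketch below simulates a small associative table of frequently profiled operand pairs that is consulted before the FPU is invoked. The 32-entry size echoes the abstract; the workload and function names are made up.

```python
# Illustrative simulation of associative reuse: a small table of the most
# frequent operand pairs (gathered from profile feedback) is checked before
# invoking the FPU; on a hit, the precomputed result is reused.
from collections import Counter


def build_amm_table(profiled_inputs, results, size=32):
    """Keep the `size` most frequent operand pairs and their FPU results."""
    freq = Counter(profiled_inputs)
    top = [pair for pair, _ in freq.most_common(size)]
    return {pair: results[pair] for pair in top}


def fpu_multiply(a, b):
    return a * b  # stand-in for the real floating-point unit


def execute(a, b, amm_table, stats):
    pair = (a, b)
    if pair in amm_table:          # AMM hit: skip the full FPU computation
        stats["hits"] += 1
        return amm_table[pair]
    stats["misses"] += 1
    return fpu_multiply(a, b)


# Toy profile: a skewed workload where a few operand pairs dominate.
profile = [(1.0, 2.0)] * 50 + [(3.0, 0.5)] * 30 + [(2.0, 2.0)] * 5
results = {pair: fpu_multiply(*pair) for pair in set(profile)}
table = build_amm_table(profile, results)

stats = {"hits": 0, "misses": 0}
for a, b in profile:
    execute(a, b, table, stats)
print(stats)  # high hit rate expected, since the table captures the hot pairs
```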


Gupta P., Birla Institute of Technology and Science | Markan C.M., DEI
Journal of Consciousness Studies | Year: 2015

As species evolved, consciousness (awareness) manifested at different levels: physical, mental, and subtle. But why different species exhibit different grades of consciousness continues to intrigue researchers. A plausible reason could be that adaptation to environmental changes, and hence survival and evolution, all depend on the level of consciousness a species possesses. This could be why evolutionarily older species (with lower-order consciousness) only implicitly (slowly and unconsciously) adapt, whereas evolved species (with higher-order consciousness) explicitly (quickly and consciously) adapt to unforeseen situations, gaining a tremendous survival advantage. This ability requires exploring innumerable possibilities, including representations that may not have been experienced before, and requires faster, brain-wide computations. We argue that the transition from slow adaptation to fast learning can be explained by considering two different regimes of computation in the brain: a Classical Regime based on slow neuronal signalling, and a much faster Quantum Regime marked by subtler quantum computations at the sub-neuronal level. We conjecture that as the brains of species increased in size, a threshold was reached beyond which species could volitionally control attention and exploit the Quantum Regime, which not only enabled them to quickly perform non-local computations, develop dynamic brain-wide neural associations, and adapt very fast, but also to be mentally aware, observe themselves, and hence speed up their own evolution. © Imprint Academic Ltd, 2015.


Carvalho V., DEI | Vasconcelos R.M., DNV GL | Soares F.O., DEI
Digital Signal Processing: A Review Journal | Year: 2013

This paper presents a study undertaken to identify the type and location of yarn periodical errors using three different signal processing approaches based on FFT - Fast Fourier Transform, FWHT - Fast Walsh-Hadamard Transform, and FDFI - Fast Impulse Frequency Determination. Available commercial equipment is based exclusively on an FFT approach, which is unable to clearly detect all types of periodical yarn errors, particularly impulse errors. The theoretical basis of each of the three signal processing techniques is described. Their performance when applied to several simulated errors, namely pulse errors and impulse errors, is analyzed in detail. Finally, the three techniques' ability to successfully characterize errors in actual data taken from real textile yarns with known sinusoidal errors is explored and commented upon. © 2013 Elsevier Inc.
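
As a hedged illustration of the FFT baseline discussed here (not the paper's data or code), the sketch below builds a synthetic yarn-diameter signal containing a sinusoidal periodic error and recovers the error's frequency from the magnitude spectrum; all parameter values are arbitrary.

```python
# Synthetic example: a sinusoidal periodic error in a simulated yarn-diameter
# signal shows up as a clear peak in the FFT magnitude spectrum.
import numpy as np

fs = 1000.0                      # samples per metre of yarn (assumed rate)
length = 8.0                     # metres of yarn analysed
t = np.arange(0, length, 1 / fs)

base_diameter = 1.0              # normalised mean yarn diameter
noise = 0.02 * np.random.randn(t.size)
periodic_error = 0.1 * np.sin(2 * np.pi * 5.0 * t)   # defect every 0.2 m
signal = base_diameter + noise + periodic_error

spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

peak = freqs[np.argmax(spectrum)]
print(f"dominant error frequency ~ {peak:.2f} cycles per metre")  # about 5.0
```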


Berthold T., Zuse Institute Berlin | Salvagnin D., DEI
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2013

Branch-and-bound methods for mixed-integer programming (MIP) are traditionally based on solving a linear programming (LP) relaxation and branching on a variable which takes a fractional value in the (single) computed relaxation optimum. In this paper we study branching strategies for mixed-integer programs that exploit knowledge of multiple alternative optimal solutions (a "cloud") of the current LP relaxation. These strategies naturally extend state-of-the-art methods like strong branching, pseudocost branching, and their hybrids. We show that by exploiting dual degeneracy, and thus multiple alternative optimal solutions, it is possible to enhance traditional methods. We present preliminary computational results, applying the newly proposed strategy to full strong branching, which is known to be the MIP branching rule that leads to the fewest search nodes. It turns out that cloud branching can reduce the mean running time by up to 30% on standard test sets. © Springer-Verlag 2013.
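
The following simplified Python sketch illustrates one ingredient of the cloud idea under stated assumptions: variables that take an integer value in some alternative optimum are dropped from the branching-candidate list, since for those variables one branching child cannot improve the LP bound. The cloud points below are hard-coded for illustration; a real implementation would obtain them from the LP solver by moving within the optimal face.

```python
# Illustrative cloud-based filter of branching candidates. Variable names and
# cloud points are made up; this is a sketch of the idea, not the paper's code.

def is_integral(x, eps=1e-6):
    return abs(x - round(x)) < eps


def cloud_filter(candidates, cloud):
    """Keep only variables that are fractional in every alternative optimum."""
    kept = []
    for var in candidates:
        if all(not is_integral(point[var]) for point in cloud):
            kept.append(var)
    return kept


# Three alternative optima of a degenerate LP relaxation over x0, x1, x2.
cloud = [
    {"x0": 0.5, "x1": 1.0, "x2": 0.3},
    {"x0": 0.5, "x1": 0.7, "x2": 0.0},
    {"x0": 0.5, "x1": 0.2, "x2": 0.6},
]
candidates = ["x0", "x1", "x2"]
print(cloud_filter(candidates, cloud))  # only 'x0' is fractional everywhere
```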
