Rocha B.M., Medical University of Graz | Rocha B.M., National Laboratory of Scientific Computing | Kickinger F., CAE Software Solutions | Prassl A.J., Medical University of Graz | And 6 more authors.
IEEE Transactions on Biomedical Engineering | Year: 2011

Electrical activity in cardiac tissue can be described by the bidomain equations, whose solution for large-scale simulations remains a computational challenge. Improvements in the discrete formulation of the problem that decrease computational and/or memory demands are therefore highly desirable. In this study, we propose a novel technique for computing shape functions of finite elements (FEs). The technique generates macro FEs (MFEs) based on the local decomposition of elements into tetrahedral subelements with linear shape functions. Such an approach necessitates the direct use of hybrid meshes (HMs) composed of different types of elements. MFEs are compared to standard FEs with respect to accuracy and memory (RAM) usage under different scenarios of cardiac modeling, including bidomain and monodomain simulations in 2-D and 3-D for simple and complex tissue geometries. In problems with analytical solutions, MFEs displayed the same numerical accuracy as standard linear triangular and tetrahedral elements. In propagation simulations, conduction velocity and activation times agreed very well with those computed with standard FEs. However, MFEs offer a significant decrease in memory requirements. We conclude that HMs composed of MFEs are well suited for solving problems in cardiac computational electrophysiology. © 2011 IEEE.
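
To make the element decomposition concrete, here is a minimal sketch that assembles the 8x8 matrix of a hexahedral macro element from six linear (P1) tetrahedral subelements for a scalar diffusion operator. The particular 6-tetrahedron split, the VTK vertex ordering, and the scalar conductivity are illustrative assumptions; the paper's MFE construction covers further element types and the full bidomain operator.

```python
import numpy as np

def p1_tet_stiffness(verts, sigma=1.0):
    """Local stiffness matrix of a linear (P1) tetrahedron for -div(sigma*grad u)."""
    x0, x1, x2, x3 = verts
    M = np.column_stack([x1 - x0, x2 - x0, x3 - x0])
    vol = abs(np.linalg.det(M)) / 6.0
    Minv = np.linalg.inv(M)                   # rows are grad(lambda_1..3)
    g = np.vstack([-Minv.sum(axis=0), Minv])  # gradients of all 4 P1 basis functions
    return sigma * vol * (g @ g.T)

# One 6-tetrahedron split of a hexahedron; all subelements share diagonal 0-6.
HEX_TO_TETS = [(0, 1, 2, 6), (0, 2, 3, 6), (0, 3, 7, 6),
               (0, 7, 4, 6), (0, 4, 5, 6), (0, 5, 1, 6)]

def macro_hex_stiffness(hex_verts, sigma=1.0):
    """8x8 macro-element matrix accumulated from the tetrahedral subelements."""
    K = np.zeros((8, 8))
    for tet in HEX_TO_TETS:
        Kt = p1_tet_stiffness(hex_verts[list(tet)], sigma)
        for a, i in enumerate(tet):
            for b, j in enumerate(tet):
                K[i, j] += Kt[a, b]
    return K

# Unit cube in VTK vertex ordering; row sums of K vanish for pure diffusion.
cube = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
                 [0, 0, 1], [1, 0, 1], [1, 1, 1], [0, 1, 1]], dtype=float)
print(np.abs(macro_hex_stiffness(cube).sum(axis=1)).max())  # ~0
```

The point of the construction is that only the eight hexahedron nodes need to be stored in the mesh, while the cheap quadrature-free P1 formulas do the per-subelement work, which is consistent with the memory savings the abstract reports.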


Ocaña K., National Laboratory of Scientific Computing | De Oliveira D., Federal University of Fluminense
Advances and Applications in Bioinformatics and Chemistry | Year: 2015

Today's genomic experiments have to process so-called "biological big data," which now reaches terabytes and petabytes in size. Processing this amount of data on their own workstations can take scientists weeks or months. Parallelism techniques and high-performance computing (HPC) environments can be applied to reduce the total processing time and to ease the management, treatment, and analysis of this data. However, running bioinformatics experiments in HPC environments such as clouds, grids, clusters, and graphics processing units (GPUs) requires scientists to integrate computational, biological, and mathematical techniques and technologies. Several solutions have already been proposed to allow scientists to process their genomic experiments using HPC capabilities and parallelism techniques. This article presents a systematic review of the most recently published research involving genomics and parallel computing. Our objective is to gather the main characteristics, benefits, and challenges that scientists can consider when running their genomic experiments so as to benefit from parallelism techniques and HPC capabilities. © 2015 Ocaña and de Oliveira.
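
As a toy illustration of the parallelism the review surveys, the sketch below distributes an embarrassingly parallel per-sequence task over worker processes. The GC-content function and the in-memory input are stand-ins, and Python's multiprocessing stands in for the cluster, grid, cloud, and GPU back ends discussed in the article.

```python
from multiprocessing import Pool

def gc_content(seq):
    """GC fraction of one sequence -- a stand-in for a heavier per-record analysis."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / max(len(seq), 1)

if __name__ == "__main__":
    # Hypothetical input; in practice records would be read from FASTA/FASTQ files.
    sequences = ["ACGTACGGCC", "TTATTAGGCG", "GGGCCCATAT"] * 100_000
    with Pool(processes=4) as pool:  # one worker per core
        results = pool.map(gc_content, sequences, chunksize=1_000)
    print(sum(results) / len(results))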


Rocha B.M., National Laboratory of Scientific Computing | Campos F.O., Medical University of Graz | Amorim R.M., Federal University of Juiz de Fora | Plank G., Medical University of Graz | And 4 more authors.
Concurrency Computation Practice and Experience | Year: 2011

The modeling of the electrical activity of the heart is of great medical and scientific interest, because it provides a way to better understand the related biophysical phenomena, allows the development of new diagnostic techniques, and serves as a platform for drug tests. Cardiac electrophysiology may be simulated by solving a partial differential equation coupled to a system of ordinary differential equations describing the electrical behavior of the cell membrane. The numerical solution is, however, computationally demanding because of the fine temporal and spatial sampling required. The demand for real-time, high-definition 3D graphics has made modern graphics processing units (GPUs) highly parallel, multithreaded, many-core processors with tremendous computational horsepower, which makes GPUs a promising alternative for simulating the electrical activity of the heart. The aim of this work is to study the performance of GPUs for solving the equations underlying the electrical activity in a simple cardiac tissue. In tests on 2D cardiac tissues with different cell models, the GPU implementation runs 20 times faster than a parallel CPU implementation with 4 threads on a quad-core machine; parts of the code are even accelerated by a factor of 180. © 2010 John Wiley & Sons, Ltd.
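
The structure that makes such simulations GPU-friendly can be sketched with a simple operator-splitting scheme: an ODE (reaction) step that is independent at every grid point, followed by an explicit diffusion step. The NumPy version below is a CPU stand-in for the CUDA implementation, and the FitzHugh-Nagumo model and all constants are illustrative assumptions rather than the paper's setup.

```python
import numpy as np

# Grid, time step, diffusion, and FitzHugh-Nagumo parameters are illustrative.
nx = ny = 128
dx, dt, D = 0.025, 0.01, 0.001
a, b, eps = 0.1, 0.5, 0.01

v = np.zeros((ny, nx))        # transmembrane potential
w = np.zeros((ny, nx))        # recovery variable
v[:, :8] = 1.0                # stimulate the left edge

def laplacian(u):
    """5-point Laplacian with no-flux boundaries."""
    up = np.pad(u, 1, mode="edge")
    return (up[:-2, 1:-1] + up[2:, 1:-1] +
            up[1:-1, :-2] + up[1:-1, 2:] - 4.0 * u) / dx**2

for step in range(5000):
    # Reaction (ODE) step: each point is independent -- the part that maps to GPU threads.
    dv = v * (1.0 - v) * (v - a) - w
    dw = eps * (v - b * w)
    v += dt * dv
    w += dt * dw
    # Diffusion (PDE) step on the structured grid.
    v += dt * D * laplacian(v)

print("activated fraction:", float((v > 0.5).mean()))
```

In the reaction step every grid point evaluates the same cell model on its own state, which is exactly the data-parallel pattern that yields the large GPU speedups reported above.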


Amorim R.M., Federal University of Juiz de Fora | Rocha B.M., National Laboratory of Scientific Computing | Campos F.O., Medical University of Graz | Dos Santos R.W., Federal University of Juiz de Fora
2010 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBC'10 | Year: 2010

The modeling of the electrical activity of the heart is of great medical and scientific interest, as it provides a way to better understand the related biophysical phenomena, allows the development of new diagnostic techniques, and serves as a platform for drug tests. However, due to the multi-scale nature of the underlying processes, simulations of cardiac bioelectric activity are still a computational challenge. In addition, the implementation of these computer models is a time-consuming and error-prone process. In this work we present a tool for prototyping ordinary differential equations (ODEs) in the area of cardiac modeling that provides automatic generation of high-performance solvers tailored to the architecture of graphics processing units (GPUs). The performance of these automatic solvers was evaluated using four different cardiac myocyte models. The GPU versions of the solvers were between 75 and 290 times faster than the CPU versions. © 2010 IEEE.
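
The core idea, generating solver source code from a declarative model description, can be sketched in a few lines. The tiny model format, the euler_step name, and the Python target below are hypothetical; the actual tool emits high-performance CUDA solvers for full cardiac myocyte models.

```python
import numpy as np

# Hypothetical two-variable "model description"; real targets are full myocyte models.
MODEL = {
    "states": ["v", "w"],
    "rhs": {
        "v": "v * (1 - v) * (v - 0.1) - w",   # toy FitzHugh-Nagumo kinetics
        "w": "0.01 * (v - 0.5 * w)",
    },
}

def generate_euler_solver(model):
    """Emit source code for one explicit-Euler step applied to many cells at once."""
    lines = ["def euler_step(states, dt):"]
    lines += [f"    {s} = states['{s}']" for s in model["states"]]
    lines += [f"    d_{s} = {expr}" for s, expr in model["rhs"].items()]
    lines += [f"    states['{s}'] = {s} + dt * d_{s}" for s in model["states"]]
    return "\n".join(lines)

namespace = {}
exec(generate_euler_solver(MODEL), namespace)   # "compile" the generated solver

states = {"v": np.full(10_000, 0.3), "w": np.zeros(10_000)}  # 10k cells in lockstep
for _ in range(100):
    namespace["euler_step"](states, dt=0.01)
print(states["v"][:3])
```

Because the solver is produced from the model description rather than hand-written, adding a new myocyte model only requires writing its equations, not reimplementing the numerical code.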


Esquef P.A.A., National Laboratory of Scientific Computing | Apolinario J.A., Military Institute of Engineering of Rio de Janeiro | Biscainho L.W.P., Federal University of Rio de Janeiro
IEEE Transactions on Information Forensics and Security | Year: 2014

In this paper, an edit detection method for forensic audio analysis is proposed. It develops and improves a previous method through changes in the signal processing chain and a novel detection criterion. As with the original method, electrical network frequency (ENF) analysis is central to the novel edit detector, for it allows monitoring anomalous variations of the ENF related to audio edit events. Working in an unsupervised manner, the edit detector compares the extent of ENF variations, centered at the nominal frequency, with a variable threshold that defines the upper limit for normal variations observed in unedited signals. The ENF variations caused by edits in the signal are likely to exceed the threshold, providing a mechanism for their detection. The proposed method is evaluated in both qualitative and quantitative terms on two distinct annotated databases. Results are reported for the originally noisy database signals as well as versions of them further degraded under controlled conditions. A comparative performance evaluation in terms of equal error rate (EER) reveals that, for one of the tested databases, the EER improves from 7% with the original method to 4% with the new one. When the signals are amplitude-clipped or corrupted by broadband background noise, the performance figures of the novel method follow the same profile as those of the original method. © 2014 IEEE.
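
The underlying principle can be sketched as follows: track the dominant frequency near the nominal ENF in each frame and flag abrupt frame-to-frame jumps. The 60 Hz nominal, the synthetic splice, the fixed threshold, and plain FFT peak picking are assumptions for illustration; the paper's detector uses a more refined estimator and a variable, signal-adaptive threshold.

```python
import numpy as np

def enf_track(x, fs, nominal=60.0, frame=2.0, pad=16):
    """Estimate the dominant near-nominal frequency in each analysis frame."""
    n = int(frame * fs)
    freqs = []
    for start in range(0, len(x) - n + 1, n):
        seg = x[start:start + n] * np.hanning(n)
        spec = np.abs(np.fft.rfft(seg, pad * n))          # zero-padded FFT
        f = np.fft.rfftfreq(pad * n, 1.0 / fs)
        band = (f > nominal - 1.0) & (f < nominal + 1.0)  # search +/- 1 Hz
        freqs.append(f[band][np.argmax(spec[band])])
    return np.array(freqs)

fs = 1000.0
t = np.arange(0, 15.0, 1.0 / fs)
part_a = np.sin(2 * np.pi * 59.95 * t)      # mains hum captured at one moment
part_b = np.sin(2 * np.pi * 60.10 * t)      # hum from elsewhere in the recording
edited = np.concatenate([part_a, part_b])   # the splice creates an ENF step

enf = enf_track(edited, fs)
jumps = np.abs(np.diff(enf))
print("suspect frame boundaries:", np.where(jumps > 0.05)[0])  # threshold is illustrative
```

In unedited recordings the ENF drifts only slowly around its nominal value, so a frame-to-frame jump well above that normal drift is the signature an edit leaves behind.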
