Chen D., Hubei University | Chen D., Collaborative and Innovative Center for Educational Technology | Li X., Beijing Normal University | Li X., Center for Collaboration and Innovation in Brain and Learning science | And 5 more authors.
IEEE Transactions on Computers | Year: 2015

Analysis of neural data with multiple modes and high density has recently become a trend with advances in neuroscience research and practice. There is a pressing need for an approach that accurately and uniquely captures features without loss or destruction of the interactions among the modes, typically space, time, and frequency. Moreover, the approach must be able to quickly analyze neural data of exponentially growing scale, spanning tens or even hundreds of channels, so that timely conclusions and decisions can be made. A salient approach to multi-way data analysis is parallel factor analysis (PARAFAC), which has proven effective in the decomposition of electroencephalography (EEG) data. However, conventional PARAFAC is suited only to offline analysis due to its high complexity, which grows as O(n²) with increasing data size. In this study, a large-scale PARAFAC method has been developed, supported by general-purpose computing on the graphics processing unit (GPGPU). Compared with PARAFAC running on a conventional CPU-based platform, the new approach improves run-time performance by more than 360 times and scales effectively by more than 400 times in all dimensions. Moreover, the proposed approach forms the basis of a model for the analysis of electrocorticography (ECoG) recordings obtained from epilepsy patients, which proves effective for epileptic state detection. The time evolutions of the proposed model correlate well with clinical observations, the frequency signature is stable and high in the ictal phase, and the spatial signature explicitly identifies the propagation of neural activity among brain regions. The model supports real-time analysis of ECoG in more than 1,000 channels on inexpensive, readily available cyber-infrastructure. © 2014 IEEE.
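
To make the decomposition concrete, below is a minimal sketch of a rank-3 PARAFAC (CP) decomposition of a synthetic space × time × frequency tensor, using the open-source tensorly library rather than the authors' GPGPU implementation; the channel, sample, and band counts are illustrative only. (tensorly can also run on GPU backends such as PyTorch or CuPy, which mirrors the GPGPU idea in spirit.)

```python
# Minimal PARAFAC sketch on a synthetic space x time x frequency EEG-like
# tensor. Uses tensorly, a stand-in for the authors' GPGPU implementation;
# dimensions and rank are illustrative assumptions.
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

rng = np.random.default_rng(0)
channels, samples, bands = 64, 512, 30            # hypothetical sizes
tensor = tl.tensor(rng.standard_normal((channels, samples, bands)))

# Rank-3 CP decomposition: each component yields one spatial, one temporal,
# and one spectral signature, preserving the cross-mode interactions.
weights, (spatial, temporal, spectral) = parafac(tensor, rank=3, n_iter_max=100)

print(spatial.shape, temporal.shape, spectral.shape)  # (64, 3) (512, 3) (30, 3)
```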


Cai C., Wuhan University | Zeng K., Beijing Normal University | Tang L., Wuhan University | Chen D., Collaborative and Innovative Center for Educational Technology | And 4 more authors.
Future Generation Computer Systems | Year: 2015

Synchronization measurement of non-stationary nonlinear data is an ongoing problem in the study of complex systems, e.g., in neuroscience. Existing methods are largely based on the Fourier and wavelet transforms, and there is a lack of methods capable of (1) measuring the synchronization strength of multivariate data while adapting to non-stationary, nonlinear dynamics and (2) meeting the needs of sophisticated scientific or engineering applications. This study proposes an approach that measures the synchronization strength of bivariate non-stationary nonlinear data against phase differences. The approach, abbreviated AD-PDSA, relies on adaptive algorithms for data decomposition. A parallelized variant, GAD-PDSA, was also developed using general-purpose computing on the graphics processing unit (GPGPU), which greatly improves the scalability of data processing. We developed a model on the basis of GAD-PDSA to verify its effectiveness in analyzing multi-channel, event-related potential (ERP) recordings against the Daubechies (DB) wavelet, with reference to the Morlet wavelet transform (MWT). GAD-PDSA was applied to an EEG dataset obtained from epilepsy patients, and the synchronization analysis proved an effective indicator for epileptic focus localization. © 2014 Elsevier B.V.
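
As an illustration of the phase-difference idea, the sketch below computes a phase-locking value for a bivariate signal pair. It takes the instantaneous phase from a plain Hilbert transform rather than from the paper's adaptive decomposition, so it is a simplified stand-in for AD-PDSA, not the algorithm itself.

```python
# Simplified phase-difference synchronization measure for a bivariate pair.
# Instantaneous phase comes from a Hilbert transform here; the paper instead
# uses adaptive data decomposition, so this is only an illustrative stand-in.
import numpy as np
from scipy.signal import hilbert

def phase_sync(x, y):
    """Phase-locking value in [0, 1]; 1 means perfectly phase-locked."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.exp(1j * dphi).mean())

t = np.linspace(0, 10, 2000)
x = np.sin(2 * np.pi * 5 * t) + 0.2 * np.random.randn(t.size)
y = np.sin(2 * np.pi * 5 * t + 0.5) + 0.2 * np.random.randn(t.size)
print(phase_sync(x, y))   # close to 1 for these phase-locked sinusoids
```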


Chen X., Wuhan University | Chen D., Wuhan University | Chen D., Collaborative and Innovative Center for Educational Technology | Wang L., Wuhan University | And 5 more authors.
IEEE Transactions on Emerging Topics in Computing | Year: 2014

As very large scale integration (VLSI) technology enters the nanoscale regime, VLSI design is increasingly sensitive to variations in process, voltage, and temperature. Layer assignment plays a crucial role in the industrial VLSI design flow. However, existing layer assignment approaches have largely ignored these variations, which can lead to significant timing violations. To address this issue, a variation-aware layer assignment approach for cost minimization is proposed in this paper. The approach is a single-stage stochastic program that directly controls the timing yield via a single parameter, and it is solved using Monte Carlo simulation and the Latin hypercube sampling technique. A hierarchical design is also adopted to enable optimization on a multicore platform. Experiments on 5,000 industrial nets demonstrate that the proposed approach: 1) significantly improves the timing yield, by 64% in comparison with the nominal design, and 2) reduces the wire cost by 15.7% in comparison with the worst-case design. © 2015 IEEE.
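
A toy sketch of the yield-estimation step follows: Latin hypercube sampling over an assumed Gaussian process variation estimates the timing yield of each candidate layer, and a single parameter (the required yield) decides feasibility. The delay model, sigmas, and costs are invented for illustration and are not the paper's formulation.

```python
# Toy illustration of single-parameter yield control: estimate the timing
# yield of candidate layer assignments via Latin hypercube sampling over an
# assumed Gaussian delay variation. All numbers are hypothetical.
import numpy as np
from scipy.stats import qmc, norm

def timing_yield(nominal_delay, sigma, t_constraint, n_samples=4096):
    """P(delay <= t_constraint) under Gaussian variation, estimated by LHS."""
    sampler = qmc.LatinHypercube(d=1, seed=0)
    u = sampler.random(n_samples)                  # stratified uniforms
    delays = nominal_delay + sigma * norm.ppf(u[:, 0])
    return np.mean(delays <= t_constraint)

eta = 0.95                                         # required timing yield
candidates = {"thin": (1.00, 0.08, 1.0),           # (delay, sigma, cost)
              "thick": (0.85, 0.05, 1.6)}
for layer, (delay, sigma, cost) in candidates.items():
    y = timing_yield(delay, sigma, t_constraint=1.0)
    print(layer, "yield=%.3f" % y, "feasible:", y >= eta, "cost:", cost)
```

Among feasible candidates, the cheapest layer would then be selected, which is how yield control and cost minimization interact in a single parameter.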


Deng Z., Wuhan University | Wu X., Wuhan University | Wang L., Wuhan University | Wang L., CAS Institute of Remote Sensing | And 5 more authors.
IEEE Transactions on Parallel and Distributed Systems | Year: 2015

More and more real-time applications need to handle dynamic continuous queries over high-density streaming data. Conventional data and query indexing approaches generally do not apply because of excessive maintenance or space costs. To address these problems, this study first proposes a new indexing structure, the CKDB-tree, which fuses an adaptive cell structure with the KDB-tree. A cell-tree indexing approach supporting dynamic continuous queries has been developed on the basis of the CKDB-tree; it significantly reduces space costs and scales well with increasing data size. Towards a scalable solution for filtering massive streaming data, this study explores the feasibility of utilizing contemporary general-purpose computing on the graphics processing unit (GPGPU). The CKDB-tree-based approach has been extended to operate on both the CPU (host) and the GPU (device). The GPGPU-aided approach performs query indexing on the host while filtering streaming data on the device in a massively parallel manner; the two heterogeneous tasks execute in parallel, hiding the latency of streaming data transfer between host and device. The experimental results indicate that (1) the CKDB-tree reduces the space cost of the cell-based indexing structure by 60 percent on average, (2) the approach upon the CKDB-tree outperforms the traditional counterparts upon the KDB-tree in update costs by 66, 75, and 79 percent on average for uniform, skewed, and hyper-skewed data, respectively, and (3) the GPGPU-aided approach greatly improves upon the CKDB-tree approach with the support of only a single Kepler GPU, providing real-time filtering of streaming data at 2.5M tuples per second. Massively parallel computing technology exhibits great potential for streaming data monitoring. © 2014 IEEE.
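
The CKDB-tree itself is the authors' structure; the sketch below only illustrates the host/device division of labor with a much simpler uniform cell index: the host maps grid cells to the queries overlapping them, and a per-tuple cell probe stands in for the massively parallel GPU filter.

```python
# Much-simplified uniform cell index over continuous range queries. This is
# NOT the CKDB-tree: it only illustrates the split between host-side query
# indexing and device-side tuple filtering described in the abstract.
import numpy as np
from collections import defaultdict

CELL = 0.1                                    # cell edge length, illustrative

queries = {0: (0.10, 0.30, 0.40, 0.60),       # qid -> (xmin, xmax, ymin, ymax)
           1: (0.55, 0.80, 0.05, 0.25)}

# Host side: map each grid cell to the queries whose region overlaps it.
index = defaultdict(list)
for qid, (x0, x1, y0, y1) in queries.items():
    for i in range(int(x0 / CELL), int(x1 / CELL) + 1):
        for j in range(int(y0 / CELL), int(y1 / CELL) + 1):
            index[(i, j)].append(qid)

# "Device" side stand-in: probe each incoming tuple's cell for candidate
# queries and verify exactly (the GPU version would do this per-thread).
rng = np.random.default_rng(1)
for x, y in rng.random((5, 2)):
    for qid in index.get((int(x / CELL), int(y / CELL)), []):
        x0, x1, y0, y1 = queries[qid]
        if x0 <= x <= x1 and y0 <= y <= y1:
            print(f"tuple ({x:.2f}, {y:.2f}) matches query {qid}")
```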


Chen D., Hubei University | Chen D., Collaborative and Innovative Center for Educational Technology | Wang L., Hubei University | Zomaya A.Y., University of Sydney | And 4 more authors.
IEEE Transactions on Parallel and Distributed Systems | Year: 2015

Simulation of evacuation scenarios has gained tremendous attention in recent years. Two major research challenges remain: (1) how to portray the effect of individuals' adaptive behaviors under the various situations arising during an evacuation and (2) how to simulate complex evacuation scenarios involving huge crowds at the individual level, given the ultrahigh complexity of such scenarios. In this study, a simulation framework for general evacuation scenarios has been developed. Each individual in a scenario is modeled as an adaptable, autonomous agent driven by a weight-based decision-making mechanism. The simulation characterizes individuals' adaptable behaviors as well as the interactions among individuals, among small groups of individuals, and between individuals and the environment. To handle the second challenge, the study adopts GPGPU to sustain massively parallel modeling and simulation of an evacuation scenario, with an efficient scheme to minimize the overhead of accessing the global simulation state maintained on the GPU. The simulation results indicate that 'adaptability' in individual behaviors has a significant influence on the evacuation procedure. The experimental results also demonstrate the proposed approach's capability to sustain complex scenarios involving crowds of tens of thousands of individuals. © 2014 IEEE.
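
A minimal sketch of a weight-based exit-choice rule is given below. The score terms (normalized distance and congestion) and their weights are invented for illustration; the per-agent independence of the vectorized update hints at why a one-thread-per-agent GPU mapping is natural, but this is not the authors' model.

```python
# Minimal weight-based exit-choice rule for evacuation agents. Weights,
# score terms, and geometry are illustrative assumptions; the vectorized
# per-agent update merely hints at the GPU-parallel formulation.
import numpy as np

rng = np.random.default_rng(2)
n_agents = 10_000
agents = rng.random((n_agents, 2)) * 100.0        # positions in a 100x100 hall
exits = np.array([[0.0, 50.0], [100.0, 50.0], [50.0, 0.0]])
congestion = np.array([0.8, 0.2, 0.5])            # hypothetical crowd levels

w_dist, w_cong = 0.7, 0.3                         # per-scenario weights

# Score every (agent, exit) pair; lower is better. Each row is independent,
# which is what makes a one-thread-per-agent GPU mapping natural.
dist = np.linalg.norm(agents[:, None, :] - exits[None, :, :], axis=2)
score = w_dist * dist / dist.max() + w_cong * congestion[None, :]
choice = score.argmin(axis=1)

print(np.bincount(choice, minlength=len(exits)))  # agents per chosen exit
```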
