Chen D.,Wuhan University | Chen D.,Collaborative and Innovative Center for Educational Technology | Hu Y.,Wuhan University | Wang L.,University of California at Riverside | And 2 more authors.
IEEE Transactions on Parallel and Distributed Systems | Year: 2017

Extracting the embedded multi-way factors from massive multidimensional data superimposed with high levels of noise and interference has long been an important issue across disciplines. With the rapid growth of data scales and dimensions in the big data era, research challenges arise in (1) reflecting the dynamics of large tensors without introducing significant distortions in the factorization procedure and (2) handling the influence of noise in sophisticated applications. A hierarchical parallel processing framework over a GPU cluster, namely H-PARAFAC, has been developed to enable scalable factorization of large tensors based on a "divide-and-conquer" strategy for Parallel Factor Analysis (PARAFAC). The H-PARAFAC framework incorporates a coarse-grained model for coordinating the processing of sub-tensors and a fine-grained parallel model for computing each sub-tensor and fusing the sub-factors. Experimental results indicate that (1) the proposed method breaks the limitation on the scale of multidimensional data that can be factorized and dramatically outperforms traditional counterparts in both scalability and efficiency, e.g., the runtime increases on the order of n^2 when the data volume increases on the order of n^3, (2) H-PARAFAC shows potential in restraining the influence of significant noise, and (3) H-PARAFAC is far superior to conventional window-based counterparts in preserving the features of multiple modes of large tensors. © 2017 IEEE.
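The coarse-grained idea, factoring sub-tensors independently and then fusing the sub-factors, can be sketched in miniature. The code below is an illustrative stand-in, not H-PARAFAC itself: `cp_als` is a plain alternating-least-squares CP factorization, the tensor is split along one mode only, and the "fusion" simply refits each block's factor against shared factors taken from a reference block.

```python
import numpy as np

def khatri_rao(a, b):
    """Column-wise Kronecker product of (I x R) and (J x R) -> (I*J x R)."""
    return np.einsum('ir,jr->ijr', a, b).reshape(-1, a.shape[1])

def cp_als(t, rank, iters=100, seed=0):
    """Rank-R CP factorization of a 3-way tensor by alternating least squares."""
    rng = np.random.default_rng(seed)
    I, J, K = t.shape
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    t0 = t.reshape(I, -1)                     # mode-0 unfolding, I x (J*K)
    t1 = t.transpose(1, 0, 2).reshape(J, -1)  # mode-1 unfolding
    t2 = t.transpose(2, 0, 1).reshape(K, -1)  # mode-2 unfolding
    for _ in range(iters):
        A = t0 @ np.linalg.pinv(khatri_rao(B, C)).T
        B = t1 @ np.linalg.pinv(khatri_rao(A, C)).T
        C = t2 @ np.linalg.pinv(khatri_rao(A, B)).T
    return A, B, C

def hierarchical_cp(t, rank, n_blocks=2):
    """Split along mode 0, factor one block fully, then solve the remaining
    sub-tensors against the shared B, C factors (a crude fusion stand-in)."""
    blocks = np.array_split(t, n_blocks, axis=0)
    _, B, C = cp_als(blocks[0], rank)
    proj = np.linalg.pinv(khatri_rao(B, C)).T     # shared projection
    A = np.vstack([blk.reshape(blk.shape[0], -1) @ proj for blk in blocks])
    return A, B, C
```

In the real framework, sub-factor fusion must reconcile the scaling and permutation ambiguities between independently factorized blocks; the refit above sidesteps that by fixing B and C.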

Chen D.,Hubei University | Chen D.,Collaborative and Innovative Center for Educational Technology | Li X.,Beijing Normal University | Li X.,Center for Collaboration and Innovation in Brain and Learning science | And 5 more authors.
IEEE Transactions on Computers | Year: 2015

Analysis of neural data with multiple modes and high density has recently become a trend with advances in neuroscience research and practice. There exists a pressing need for an approach that accurately and uniquely captures the features without loss or destruction of the interactions amongst the modes, typically space, time, and frequency. Moreover, the approach must be able to quickly analyze neural data of exponentially growing scale, across tens or even hundreds of channels, so that timely conclusions and decisions can be made. A salient approach to multi-way data analysis is parallel factor analysis (PARAFAC), which has demonstrated its effectiveness in the decomposition of electroencephalography (EEG) data. However, conventional PARAFAC is suited only for offline analysis due to its high complexity, which grows as O(n^2) with increasing data size. In this study, a large-scale PARAFAC method has been developed, supported by general-purpose computing on the graphics processing unit (GPGPU). Compared to PARAFAC running on a conventional CPU-based platform, the new approach excels dramatically, by >360 times in run-time performance, and scales effectively, by >400 times in all dimensions. Moreover, the proposed approach forms the basis of a model for the analysis of electrocorticography (ECoG) recordings obtained from epilepsy patients, which proves effective in epileptic state detection. The time evolutions of the proposed model correlate well with clinical observations. The frequency signature is stable and high in the ictal phase, and the spatial signature explicitly identifies the propagation of neural activity among various brain regions. The model supports real-time analysis of ECoG in >1,000 channels on inexpensive, readily available cyber-infrastructure. © 2014 IEEE.

Cai C.,Wuhan University | Zeng K.,Beijing Normal University | Tang L.,Wuhan University | Chen D.,Collaborative and Innovative Center for Educational Technology | And 4 more authors.
Future Generation Computer Systems | Year: 2015

Synchronization measurement of non-stationary nonlinear data is an ongoing problem in the study of complex systems, e.g., in neuroscience. Existing methods are largely based on the Fourier and wavelet transforms, and there is a lack of methods capable of (1) measuring the synchronization strength of multivariate data while adapting to non-stationary, non-linear dynamics, and (2) meeting the needs of sophisticated scientific or engineering applications. This study proposes an approach that measures the synchronization strength of bivariate non-stationary nonlinear data in terms of phase differences. The approach (abbreviated AD-PDSA) relies on adaptive algorithms for data decomposition. A parallelized variant, GAD-PDSA, was also developed with general-purpose computing on the graphics processing unit (GPGPU), which greatly improved the scalability of data processing. We developed a model on the basis of GAD-PDSA to verify its effectiveness in analyzing multi-channel, event-related potential (ERP) recordings against the Daubechies (DB) wavelet, with reference to the Morlet wavelet transform (MWT). GAD-PDSA was applied to an EEG dataset obtained from epilepsy patients, and the synchronization analysis provided an effective indicator for epileptic focus localization. © 2014 Elsevier B.V.
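The core measurement, synchronization strength from instantaneous phase differences, can be sketched with the analytic signal. The code below computes a phase-locking value via an FFT-based Hilbert transform; it is a stand-in for, and does not implement, the paper's adaptive decomposition.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal of a real 1-D sequence via the FFT (Hilbert transform)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0        # double positive frequencies
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def plv(x, y):
    """Phase-locking value in [0, 1] from the instantaneous phase difference:
    1 means a constant phase lag, ~0 means no phase relationship."""
    dphi = np.angle(analytic_signal(x)) - np.angle(analytic_signal(y))
    return float(np.abs(np.exp(1j * dphi).mean()))
```

Two sinusoids at the same frequency with a fixed lag give a PLV near 1, while independent noise gives a value near 0, which is the qualitative behavior a synchronization indicator needs.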

Chen J.,Central China Normal University | Chen J.,Collaborative and Innovative Center for Educational Technology | Luo N.,Central China Normal University | Liu Y.,Central China Normal University | And 6 more authors.
Computing | Year: 2016

E-learning has revolutionized the delivery of learning through the support of rapid advances in Internet technology. Compared with traditional face-to-face classroom education, e-learning lacks interpersonal and emotional interaction between students and teachers. In other words, although affect is a vital factor in learning that influences a human's ability to solve problems, it has been largely ignored in existing e-learning systems. In this study, we propose a hybrid intelligence-aided approach to affect-sensitive e-learning. A system has been developed that incorporates affect recognition and intervention to improve the learner's experience and help the learner become better engaged in the learning process. The system recognizes the learner's affective states from multimodal information via hybrid intelligent approaches, e.g., head pose, eye gaze tracking, facial expression recognition, physiological signal processing and learning progress tracking. The multimodal information gathered is fused based on the proposed affect learning model. The system provides online interventions and adapts the online learning material to the learner's current state based on pedagogical strategies. Experimental results show that interest and confusion are the most frequently occurring states when a learner interacts with a second-language learning system, and that those states are highly related to learning levels (easy versus difficult) and outcomes. Interventions are effective when a learner is disengaged or bored and have been shown to help learners become more engaged in learning. © 2014, Springer-Verlag Wien.
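The fusion step can be illustrated as a weighted combination of per-modality probability vectors (late fusion). The state names and weights below are illustrative assumptions, not the paper's learned affect model.

```python
import numpy as np

# Hypothetical affect states; the real system's label set may differ.
AFFECT_STATES = ["engaged", "interested", "confused", "bored"]

def fuse_affect(modality_probs, weights):
    """Late fusion: weighted average of per-modality probability vectors,
    renormalised to a distribution over affect states.
    Returns (most likely state, fused distribution)."""
    fused = sum(w * np.asarray(p, dtype=float)
                for p, w in zip(modality_probs, weights))
    fused = fused / fused.sum()
    return AFFECT_STATES[int(np.argmax(fused))], fused
```

For example, fusing gaze, facial-expression and physiological estimates with weights reflecting each modality's reliability yields a single distribution the intervention logic can act on.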

Liu Y.,Central China Normal University | Liu Y.,Collaborative and Innovative Center for Educational Technology | Liu Y.,Wenhua College | Liu L.,Central China Normal University | And 7 more authors.
ICPRAM 2016 - Proceedings of the 5th International Conference on Pattern Recognition Applications and Methods | Year: 2016

Visual focus of attention (VFOA) recognition is important for human-computer interaction (HCI). Recently, promising results have been reported using special hardware in constrained environments. However, VFOA recognition remains challenging in natural environments, e.g., under varying illumination, occlusions, expressions and wide scenes. In this paper, we propose a multi-person VFOA recognition approach based on head pose in a natural classroom. The approach comprises three stages. First, multiple face positions are detected and tracked in sub-windows. Second, a hierarchical learning approach, a Dirichlet-tree enhanced hybrid framework of classification and regression forests (Hybrid D-RF), is proposed to estimate continuous head pose. Third, a VFOA model is proposed based on the estimated head pose, visual environment cues and the prior state, for attention recognition and tracking in the natural classroom. Experimental results on several public databases and in real-life applications show that the proposed approach is robust and efficient for multi-person VFOA recognition and tracking in practice. © Copyright 2016 by SCITEPRESS - Science and Technology Publications, Lda. All rights reserved.
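The VFOA model's use of estimated head pose together with a prior state can be illustrated as one recursive Bayesian filtering step over attention targets. The targets, mean poses and transition probabilities below are invented for illustration; the paper's model additionally incorporates visual environment cues.

```python
import numpy as np

# Illustrative attention targets and mean head poses (yaw, pitch) in degrees;
# these numbers are assumptions, not the paper's learned model.
TARGETS = ["teacher", "blackboard", "desk", "off-task"]
POSE_MEANS = np.array([[0.0, 0.0], [-30.0, 10.0], [0.0, -40.0], [60.0, 0.0]])

def pose_likelihood(yaw, pitch, sigma=15.0):
    """Isotropic Gaussian likelihood of the observed pose under each target."""
    d2 = ((POSE_MEANS - np.array([yaw, pitch])) ** 2).sum(axis=1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def vfoa_step(prior, transition, yaw, pitch):
    """One recursive Bayes step: predict with the transition matrix,
    reweight by the head-pose likelihood, and renormalise."""
    pred = transition.T @ prior
    post = pred * pose_likelihood(yaw, pitch)
    return post / post.sum()
```

A sticky transition matrix (high self-transition probability) encodes the prior that a person's focus of attention changes slowly between frames.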

Liu Y.,Central China Normal University | Liu Y.,Collaborative and Innovative Center for Educational Technology | Liu Y.,Huazhong University of Science and Technology | Chen J.,Central China Normal University | And 5 more authors.
ICPRAM 2014 - Proceedings of the 3rd International Conference on Pattern Recognition Applications and Methods | Year: 2014

Head pose estimation is important in human-machine interfaces. However, illumination variation, occlusion and low image resolution make the estimation task difficult. Hence, a Dirichlet-tree distribution enhanced Random Forests approach (D-RF) is proposed in this paper to estimate head pose efficiently and robustly under various conditions. First, Gabor features of positive facial patches are extracted to eliminate the influence of occlusion and noise. Then, D-RF is applied to estimate the head pose in a coarse-to-fine manner. To improve the discriminative capability of the approach, an adaptive Gaussian mixture model is introduced into the tree distribution. The proposed method has been evaluated on different data sets spanning -90° to 90° in the vertical and horizontal directions under various conditions. The experimental results demonstrate the approach's robustness and efficiency. Copyright © 2014 SCITEPRESS.
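The coarse-to-fine strategy can be illustrated with deliberately minimal stand-in estimators: a nearest-centroid classifier picks a coarse yaw bin, and a per-bin linear regressor refines the estimate. This replaces the paper's Dirichlet-tree random forests; only the two-stage structure is the point here.

```python
import numpy as np

def fit_coarse_to_fine(X, yaw, n_bins=3):
    """Coarse stage: nearest-centroid classifier over quantile yaw bins.
    Fine stage: per-bin affine least-squares regressor.
    (Stand-in estimators; the paper uses Dirichlet-tree random forests.)"""
    edges = np.quantile(yaw, np.linspace(0, 1, n_bins + 1))
    bins = np.clip(np.searchsorted(edges, yaw, side='right') - 1,
                   0, n_bins - 1)
    centroids = np.array([X[bins == b].mean(axis=0) for b in range(n_bins)])
    Xa = np.hstack([X, np.ones((len(X), 1))])   # affine feature augmentation
    coefs = [np.linalg.lstsq(Xa[bins == b], yaw[bins == b], rcond=None)[0]
             for b in range(n_bins)]
    return centroids, coefs

def predict_yaw(x, centroids, coefs):
    b = int(np.argmin(((centroids - x) ** 2).sum(axis=1)))  # coarse: pick bin
    return float(np.append(x, 1.0) @ coefs[b])              # fine: regress
```

The coarse stage restricts the hypothesis space so the fine stage only has to model local variation, which is the same rationale the D-RF cascade follows.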

Liu Y.,Central China Normal University | Liu Y.,Collaborative and Innovative Center for Educational Technology | Liu Y.,Wenhua College | Chen J.,Central China Normal University | And 2 more authors.
2014 IEEE International Conference on Image Processing, ICIP 2014 | Year: 2014

A cascaded approach for facial feature detection in unconstrained environments, e.g., under various poses and illuminations, occlusion, low image resolution, different facial expressions and makeup, is proposed in this paper. First, the positive facial area is extracted to eliminate the influence of noise under various conditions. Then, a Dirichlet-tree distribution enhanced random forests (D-RF) algorithm is proposed to detect facial features using cascaded head pose models in local sub-regions. Meanwhile, multiple probability models are learned and stored in the leaves of the D-RF, i.e., a positive/negative patch probability, a head pose probability, the locations of facial features and facial deformation models (FDM). Finally, composite weighted voting that fuses classification and regression is used to decide the locations of the facial features. Experiments on public databases demonstrate the robustness and accuracy of the proposed approach. © 2014 IEEE.

Deng Z.,Wuhan University | Wu X.,Wuhan University | Wang L.,Wuhan University | Wang L.,CAS Institute of Remote Sensing | And 5 more authors.
IEEE Transactions on Parallel and Distributed Systems | Year: 2015

More and more real-time applications need to handle dynamic continuous queries over high-density streaming data. Conventional data and query indexing approaches generally do not apply, owing to excessive costs in either maintenance or space. To address these problems, this study first proposes a new indexing structure, the CKDB-tree, which fuses an adaptive cell structure with a KDB-tree. A cell-tree indexing approach supporting dynamic continuous queries has been developed on the basis of the CKDB-tree. The approach significantly reduces space costs and scales well with increasing data size. Towards a scalable solution for filtering massive streaming data, this study has explored the feasibility of utilizing contemporary general-purpose computing on the graphics processing unit (GPGPU). The CKDB-tree-based approach has been extended to operate on both the CPU (host) and the GPU (device). The GPGPU-aided approach performs query indexing on the host while performing streaming data filtering on the device in a massively parallel manner. The two heterogeneous tasks execute in parallel, hiding the latency of streaming data transfer between the host and the device. The experimental results indicate that (1) the CKDB-tree reduces the space cost by 60 percent on average compared to the cell-based indexing structure, (2) the CKDB-tree approach outperforms traditional KDB-tree counterparts by 66, 75 and 79 percent on average in update costs for uniform, skewed and hyper-skewed data, respectively, and (3) the GPGPU-aided approach greatly improves on the CKDB-tree approach with the support of only a single Kepler GPU, providing real-time filtering of streaming data at 2.5M tuples per second. Massively parallel computing technology exhibits great potential for streaming data monitoring. © 2014 IEEE.
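The cell side of query indexing, registering each continuous query with the cells its rectangle overlaps so that an arriving tuple is tested only against its own cell's queries, can be sketched with a flat uniform grid. This is a stand-in for the CKDB-tree, which additionally adapts cell granularity and organizes the cells in a KDB-tree.

```python
from collections import defaultdict

class CellIndex:
    """Uniform-grid query index for continuous 2-D range queries over a
    stream of points. (A flat stand-in for the paper's CKDB-tree.)"""

    def __init__(self, cell=1.0):
        self.cell = cell
        self.grid = defaultdict(list)   # (cx, cy) -> [(query id, rect)]

    def _cells(self, x0, y0, x1, y1):
        """All grid cells overlapped by the rectangle (x0, y0, x1, y1)."""
        c = self.cell
        for cx in range(int(x0 // c), int(x1 // c) + 1):
            for cy in range(int(y0 // c), int(y1 // c) + 1):
                yield (cx, cy)

    def register(self, qid, rect):
        """Register a continuous query with every cell its rectangle covers."""
        for cell in self._cells(*rect):
            self.grid[cell].append((qid, rect))

    def match(self, x, y):
        """Ids of registered queries whose rectangle contains the tuple (x, y);
        only the tuple's own cell is inspected."""
        cell = (int(x // self.cell), int(y // self.cell))
        return [qid for qid, (x0, y0, x1, y1) in self.grid[cell]
                if x0 <= x <= x1 and y0 <= y <= y1]
```

Each streaming tuple is filtered against a handful of candidate queries instead of all registered queries, which is the property that makes per-tuple filtering cheap enough to parallelize on the device.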

Dou M.,Wuhan University | Chen J.,Central China Normal University | Chen J.,Collaborative and Innovative Center for Educational Technology | Chen D.,Wuhan University | And 6 more authors.
Future Generation Computer Systems | Year: 2014

Natural disasters occur unexpectedly and usually result in huge losses of life and property. How to effectively make contingency plans is an intriguing question constantly faced by governments and experts, and human rescue operations are the most critical issue in contingency planning. A natural disaster scenario is, in general, highly complicated and dynamic. Modeling and simulation (M&S) technologies have been gaining considerable momentum for investigating natural disaster scenarios to enable contingency planning. However, existing M&S systems still suffer from two open problems: (1) a lack of real data on natural disasters, and (2) the absence of methods and platforms to describe the collective behaviors of people in disaster situations. Considering these problems, an M&S framework for human rescue operations in a typical natural disaster, i.e., a landslide, has been developed in this study. The framework consists of three modules: (1) remote sensing information extraction, (2) landslide simulation, and (3) crowd simulation. The crowd simulation module is driven by the real and virtual data provided by the former modules. A number of simulations (using the Zhouqu landslide as an example) have been performed to study human relief operations, both spontaneous and directed, with the effect of contingency plans highlighted. The experimental results demonstrate that (1) the simulation framework is an effective tool for contingency planning, and (2) real data make the simulation outputs more meaningful. © 2013 Elsevier B.V. All rights reserved.

Chen D.,Hubei University | Chen D.,Collaborative and Innovative Center for Educational Technology | Wang L.,Hubei University | Zomaya A.Y.,University of Sydney | And 4 more authors.
IEEE Transactions on Parallel and Distributed Systems | Year: 2015

Simulation studies of evacuation scenarios have gained tremendous attention in recent years. Two major research challenges remain in this direction: (1) how to portray the effect of individuals' adaptive behaviors under various situations in evacuation procedures and (2) how to simulate complex evacuation scenarios involving huge crowds at the individual level, given the ultrahigh complexity of these scenarios. In this study, a simulation framework for general evacuation scenarios has been developed. Each individual in a scenario is modeled as an adaptable and autonomous agent driven by a weight-based decision-making mechanism. The simulation is intended to characterize individuals' adaptive behaviors and the interactions among individuals, among small groups of individuals, and between individuals and the environment. To handle the second challenge, this study adopts general-purpose computing on the GPU (GPGPU) to sustain massively parallel modeling and simulation of an evacuation scenario. An efficient scheme has been proposed to minimize the overhead of accessing the global system state of the simulation process maintained by the GPU platform. The simulation results indicate that 'adaptability' in individual behaviors has a significant influence on the evacuation procedure. The experimental results also exhibit the proposed approach's capability to sustain complex scenarios involving a huge crowd of tens of thousands of individuals. © 2014 IEEE.
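The weight-based decision-making mechanism can be illustrated with a minimal exit-choice rule: each exit is scored by a weighted sum of normalised distance and congestion, and the agent takes the lowest score. The weights and the two factors below are illustrative; the paper's mechanism is richer and adapts to the evolving situation.

```python
import math

def choose_exit(agent_pos, exits, congestion, w_dist=0.6, w_crowd=0.4):
    """Weight-based decision: score each exit by a weighted sum of
    normalised distance and normalised congestion, pick the lowest.
    Returns the index of the chosen exit."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    ds = [dist(agent_pos, e) for e in exits]
    dmax = max(ds) or 1.0          # avoid division by zero
    cmax = max(congestion) or 1.0
    scores = [w_dist * d / dmax + w_crowd * c / cmax
              for d, c in zip(ds, congestion)]
    return scores.index(min(scores))
```

With this rule an agent normally heads for the nearest exit but switches to a farther one once the nearer exit becomes sufficiently congested, which is the kind of adaptive behavior the framework aims to capture at scale.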
