Liu L., Key Laboratory of Dependable Service Computing in Cyber Physical Society | Liu L., Chongqing University | Peng Y., National University of Singapore | Wang S., Lanzhou University of Technology | And 2 more authors.
Information Sciences | Year: 2016

Sensor-based human activity recognition has become an important research field within pervasive and ubiquitous computing. Techniques for recognizing atomic activities such as gestures or actions are now mature, but complex activity recognition remains a challenging issue. In this paper, we address the problem of complex activity recognition using time series extracted from multiple sensors. We first build a dictionary of time-series patterns, called shapelets, to represent atomic activities, and then present three shapelet-based models to recognize sequential, concurrent, and generic complex activities. We evaluate our shapelet-based approach on datasets collected in three different labs, and the results show that it handles complex activity recognition effectively. Our experiments also show that the shapelet-based approach outperforms competing approaches in terms of recognition accuracy and system usage. © 2016 Elsevier Inc. All rights reserved.
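
The abstract gives no code; as a rough, hypothetical sketch of the shapelet idea it describes, the Python below (the helpers shapelet_distance and shapelet_features are ours, not the paper's) scores a sensor time series against a dictionary of shapelets by minimum z-normalized subsequence distance. The sequential, concurrent, and generic complex-activity models built on top of these features are not reproduced here.

```python
import numpy as np

def shapelet_distance(series, shapelet):
    """Minimum z-normalized Euclidean distance between a shapelet and
    every equal-length subsequence of a univariate sensor stream."""
    m = len(shapelet)
    s = (shapelet - shapelet.mean()) / (shapelet.std() + 1e-8)
    best = np.inf
    for start in range(len(series) - m + 1):
        w = series[start:start + m]
        w = (w - w.mean()) / (w.std() + 1e-8)
        best = min(best, float(np.linalg.norm(w - s)))
    return best

def shapelet_features(series, dictionary):
    """Map a time series to its distances to every shapelet in the
    atomic-activity dictionary; a standard classifier can then be
    trained on these features to label the activity."""
    return np.array([shapelet_distance(series, sh) for sh in dictionary])
```
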


Liong V.E., Advanced Digital Science Center | Lu J., Advanced Digital Science Center | Ge Y., Key Laboratory of Dependable Service Computing in Cyber Physical Society | Ge Y., Chongqing University
Pattern Recognition Letters | Year: 2015

In this paper, we propose a regularized local metric learning (RLML) method for person re-identification. Unlike existing metric-learning-based person re-identification methods, which learn a single distance metric to measure the similarity of each pair of human body images, our method combines global and local metrics to represent the within-class and between-class variances. By doing so, we utilize the local distribution of the training data to avoid overfitting. In addition, to address the lack of training samples in most person re-identification systems, our method also regularizes the covariance matrices in a parametric manner, so that discriminative information can be better exploited. Experimental results on four widely used datasets demonstrate the advantage of the proposed RLML over both existing metric learning methods and state-of-the-art person re-identification methods. © 2015 Elsevier B.V.
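
RLML combines global and local metrics in a way the abstract does not spell out; the following is a minimal single-metric sketch of the regularization idea only, with all function names hypothetical: the pooled within-class covariance is shrunk toward the identity before inversion so the metric stays well conditioned when training samples are scarce.

```python
import numpy as np

def regularized_within_class_metric(X, y, alpha=0.1):
    """Toy Mahalanobis-style metric: shrink the pooled within-class
    covariance toward the identity before inverting it."""
    d = X.shape[1]
    Sw = np.zeros((d, d))
    for label in np.unique(y):
        Xc = X[y == label] - X[y == label].mean(axis=0)
        Sw += Xc.T @ Xc
    Sw /= len(X)
    Sw = (1.0 - alpha) * Sw + alpha * np.eye(d)  # parametric regularization
    return np.linalg.inv(Sw)                     # metric matrix M

def metric_distance(a, b, M):
    """Squared distance between two feature vectors under metric M."""
    diff = a - b
    return float(diff @ M @ diff)
```
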


Liu Z., Chongqing University | Yin H., Chongqing University | Yin H., Key Laboratory of Dependable Service Computing in Cyber Physical Society | Chai Y., Chongqing University | Yang S.X., University of Guelph
Expert Systems with Applications | Year: 2014

Fusion of multimodal medical images increases robustness and enhances accuracy in biomedical research and clinical diagnosis, and it has attracted much attention over the past decade. In this paper, an efficient multimodal medical image fusion approach based on compressive sensing is presented to fuse computed tomography (CT) and magnetic resonance imaging (MRI) images. The significant sparse coefficients of the CT and MRI images are acquired via a multi-scale discrete wavelet transform. A proposed weighted fusion rule is used to fuse the high-frequency coefficients of the source medical images, while a pulse coupled neural network (PCNN) fusion rule is used to fuse the low-frequency coefficients. A random Gaussian matrix is used for encoding and measurement. The fused image is reconstructed via the compressive sampling matching pursuit (CoSaMP) algorithm. To show the efficiency of the proposed approach, several comparative experiments are conducted. The results reveal that the proposed approach achieves better fused image quality than existing state-of-the-art methods. Furthermore, the proposed fusion approach offers high stability, good flexibility, and low time consumption. © 2014 Elsevier Ltd. All rights reserved.
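
As a rough illustration of the wavelet-domain fusion pipeline (assuming the PyWavelets package is available), the sketch below fuses two registered images by averaging the low-frequency band and keeping the larger-magnitude high-frequency coefficients; the paper's PCNN rule, random Gaussian measurement, and CoSaMP reconstruction are deliberately replaced here by simple averaging and a plain inverse DWT.

```python
import numpy as np
import pywt  # PyWavelets

def fuse_ct_mri(ct, mri, wavelet="db2", level=2):
    """Illustrative wavelet-domain fusion of two registered images of
    the same shape; not the paper's full CS pipeline."""
    c_ct = pywt.wavedec2(ct, wavelet, level=level)
    c_mri = pywt.wavedec2(mri, wavelet, level=level)

    fused = [0.5 * (c_ct[0] + c_mri[0])]  # low-frequency band: plain average
    for bands_ct, bands_mri in zip(c_ct[1:], c_mri[1:]):
        # high-frequency bands: keep the coefficient with larger magnitude
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(bands_ct, bands_mri)))
    return pywt.waverec2(fused, wavelet)
```
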


Liu Z., Chongqing University | Yin H., Chongqing University | Yin H., Key Laboratory of Dependable Service Computing in Cyber Physical Society | Fang B., Chongqing University | Chai Y., Chongqing University
Optics Communications | Year: 2014

An appropriate fusion of infrared and visible images can integrate their complementary information and provide a more reliable description of the environment. Compressed sensing, a low-rate signal sampling and compression framework that exploits the sparsity of a signal under a suitable transform, is widely used in various fields. When applied to image fusion, only part of the sparse coefficients need to be fused, and the fused sparse coefficients can then be used to accurately reconstruct the fused image. A CS-based fusion approach can therefore greatly reduce computational complexity while enhancing the quality of the fused image. In this paper, an improved image fusion scheme based on compressive sensing is presented. Compared with conventional transform-based image fusion approaches, the proposed approach preserves more detail, such as edges, lines, and contours. In the proposed approach, the sparse coefficients of the source images are obtained by the discrete wavelet transform. The low- and high-frequency coefficients of the infrared and visible images are fused by an improved entropy-weighted fusion rule and a max-abs-based fusion rule, respectively. The fused image is reconstructed by the compressive sampling matching pursuit (CoSaMP) algorithm after local linear projection using a random Gaussian matrix. Several comparative experiments are conducted, and the results show that the proposed image fusion scheme achieves better fusion quality than existing state-of-the-art methods. © 2014 Elsevier B.V.
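
The abstract names an improved entropy-weighted rule for the low-frequency coefficients and a max-abs rule for the high-frequency ones; the sketch below shows one plausible reading of those two rules in isolation (function names are ours), leaving out the Gaussian projection and CoSaMP reconstruction steps.

```python
import numpy as np

def band_entropy(coeffs, bins=64):
    """Shannon entropy of a coefficient band's histogram."""
    hist, _ = np.histogram(coeffs, bins=bins)
    p = hist[hist > 0].astype(float)
    p /= p.sum()
    return float(-np.sum(p * np.log2(p)))

def fuse_low(ir_low, vis_low):
    """Entropy-weighted average of the low-frequency bands."""
    w_ir, w_vis = band_entropy(ir_low), band_entropy(vis_low)
    total = w_ir + w_vis + 1e-12
    return (w_ir * ir_low + w_vis * vis_low) / total

def fuse_high(ir_high, vis_high):
    """Max-abs rule: keep the coefficient with the larger magnitude."""
    return np.where(np.abs(ir_high) >= np.abs(vis_high), ir_high, vis_high)
```
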
