Zheng W.-S.,Sun Yat Sen University | Zheng W.-S.,Guangdong Province Key Laboratory of Computational Science | Gong S.,Queen Mary, University of London | Xiang T.,Queen Mary, University of London
IEEE Transactions on Pattern Analysis and Machine Intelligence | Year: 2016

Solving the problem of matching people across non-overlapping multi-camera views, known as person re-identification (re-id), has received increasing interest in computer vision. In a real-world application scenario, a watch-list (gallery set) of a handful of known target people is provided, with very few images (shots) per target, in many cases only one. Existing re-id methods are largely unsuitable for this open-world re-id challenge because they are designed for (1) a closed-world scenario in which the gallery and probe sets are assumed to contain exactly the same people, (2) person-wise identification, whereby the model attempts to verify exhaustively against each individual in the gallery set, and (3) learning a matching model from multiple shots. In this paper, a novel transfer local relative distance comparison (t-LRDC) model is formulated to address the open-world person re-identification problem by one-shot group-based verification. The model is designed to mine and transfer useful information from a labelled open-world non-target dataset. Extensive experiments demonstrate that the proposed approach outperforms both non-transfer learning and existing transfer-learning-based re-id methods. © 1979-2012 IEEE.
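
The group-based verification idea can be illustrated with a minimal sketch: rather than exhaustively identifying a probe against every gallery person, the probe is accepted only if its distance to some watch-list shot, under a learned metric, falls below a threshold. The metric, threshold, and features below are invented placeholders for illustration, not the t-LRDC model itself.

```python
import numpy as np

def metric_dist(x, y, M):
    """Squared distance under a learned PSD metric M (identity here)."""
    d = x - y
    return float(d @ M @ d)

def verify_against_watchlist(probe, gallery, M, threshold):
    """Group-based verification: accept the probe as a target only if its
    smallest learned distance to ANY watch-list shot is below the threshold;
    otherwise reject it as a non-target (open-world) identity."""
    dists = [metric_dist(probe, g, M) for g in gallery]
    best = int(np.argmin(dists))
    return dists[best] <= threshold, best

# toy example: a watch-list of 3 one-shot targets in a 4-D feature space
rng = np.random.default_rng(0)
gallery = [rng.normal(size=4) for _ in range(3)]
M = np.eye(4)                                   # stands in for the learned metric
probe = gallery[1] + 0.05 * rng.normal(size=4)  # a noisy shot of target 1
accepted, who = verify_against_watchlist(probe, gallery, M, threshold=0.5)
```

A non-target probe, far from every watch-list shot under the metric, would be rejected by the same threshold test instead of being forced onto the nearest gallery identity.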

Hu J.-F.,Sun Yat Sen University | Zheng W.-S.,Sun Yat Sen University | Zheng W.-S.,Guangdong Province Key Laboratory of Computational Science | Lai J.,Sun Yat Sen University | Zhang J.,University of Dundee
Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition | Year: 2015

In this paper, we focus on heterogeneous feature learning for RGB-D activity recognition. Since features from different channels may share similar hidden structures, we propose a joint learning model that simultaneously explores the shared and feature-specific components, as an instance of heterogeneous multi-task learning. The proposed model, in a unified framework, is capable of: 1) jointly mining a set of subspaces with the same dimensionality to enable multi-task classifier learning, and 2) quantifying the shared and feature-specific components of the features in those subspaces. To train the joint model efficiently, a three-step iterative optimization algorithm is proposed, followed by two inference models. Extensive results on three activity datasets demonstrate the efficacy of the proposed method. In addition, a novel RGB-D activity dataset focusing on human-object interaction is collected for evaluating the proposed method, and will be made available to the community for RGB-D activity benchmarking and analysis. © 2015 IEEE.
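
The shared/feature-specific decomposition can be sketched with the classic multi-task form in which channel m predicts with weights w0 + v_m, alternately solving for the shared part w0 and the specific parts v_m. This is a simplified ridge-regression stand-in for illustration, not the paper's actual objective or its three-step algorithm.

```python
import numpy as np

def fit_shared_specific(tasks, lam_shared=0.01, lam_specific=10.0, iters=50):
    """Alternating least squares: each channel m predicts with (w0 + v_m),
    where w0 is shared across channels and v_m is channel-specific.
    A minimal illustrative decomposition, not the paper's exact model."""
    dim = tasks[0][0].shape[1]
    w0 = np.zeros(dim)
    vs = [np.zeros(dim) for _ in tasks]
    for _ in range(iters):
        # update each channel's specific component with the shared part fixed
        for m, (X, y) in enumerate(tasks):
            vs[m] = np.linalg.solve(X.T @ X + lam_specific * np.eye(dim),
                                    X.T @ (y - X @ w0))
        # update the shared component with all specific parts fixed
        A = sum(X.T @ X for X, _ in tasks) + lam_shared * np.eye(dim)
        b = sum(X.T @ (y - X @ v) for (X, y), v in zip(tasks, vs))
        w0 = np.linalg.solve(A, b)
    return w0, vs

# two synthetic channels (think RGB and depth) whose regression weights
# differ only by small channel-specific perturbations
rng = np.random.default_rng(1)
true_shared = np.array([1.0, -1.0, 0.5])
tasks = []
for _ in range(2):
    X = rng.normal(size=(100, 3))
    tasks.append((X, X @ (true_shared + 0.1 * rng.normal(size=3))))
w0, vs = fit_shared_specific(tasks)  # w0 should recover the common structure
```

The regularizers control how much structure is pushed into the shared part versus the channel-specific parts, mirroring the abstract's point that the two components are quantified jointly.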

Hu J.-F.,Sun Yat Sen University | Zheng W.-S.,Sun Yat Sen University | Zheng W.-S.,Guangdong Province Key Laboratory of Computational Science | Lai J.,Sun Yat Sen University | And 2 more authors.
Proceedings of the IEEE International Conference on Computer Vision | Year: 2013

Human action can be recognised from a single still image by modelling human-object interaction (HOI), which infers the mutual spatial structure of the human and the object as well as their appearance. Existing approaches rely heavily on accurate detection of the human and the object and on estimation of the human pose; they are therefore sensitive to large variations in human pose, to occlusion, and to poor detection of small objects. To overcome this limitation, a novel exemplar-based approach is proposed in this work. Our approach learns a set of spatial pose-object interaction exemplars: density functions that describe, probabilistically, how a person interacts spatially with a manipulated object in different activities. A representation based on these HOI exemplars thus has great potential to be robust to errors in human/object detection and pose estimation. A new framework, consisting of the proposed exemplar-based HOI descriptor and an activity-specific matching model that learns the parameters, is formulated for robust human activity recognition. Experiments on two benchmark activity datasets demonstrate that the proposed approach obtains state-of-the-art performance. © 2013 IEEE.
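
A stripped-down version of the exemplar idea: model each activity's spatial interaction as a density over the object's position relative to the person, and describe an image by how well its observed offset matches each density. The two exemplars and their parameters below are invented for the demo; the paper's exemplars additionally condition on pose.

```python
import numpy as np

def gaussian_density(offset, mean, cov):
    """2-D Gaussian density of the object's position relative to the person."""
    d = offset - mean
    norm = 1.0 / (2 * np.pi * np.sqrt(np.linalg.det(cov)))
    return float(norm * np.exp(-0.5 * d @ np.linalg.inv(cov) @ d))

def hoi_descriptor(offset, exemplars):
    """Describe an image by the response of each spatial interaction exemplar
    to the observed human-object offset (a probabilistic soft match, so small
    detection errors only perturb the responses rather than break them)."""
    return np.array([gaussian_density(offset, m, c) for m, c in exemplars])

# two hypothetical exemplars: "drinking" (object just below the face, loose
# spread) and "phoning" (object at the ear, tight spread)
exemplars = [
    (np.array([0.00, -0.40]), 0.020 * np.eye(2)),  # drinking
    (np.array([0.15, -0.50]), 0.005 * np.eye(2)),  # phoning
]
observed = np.array([0.02, -0.42])                 # cup detected below the face
desc = hoi_descriptor(observed, exemplars)
predicted = int(np.argmax(desc))                   # index of best-matching exemplar
```

Because each exemplar is a density rather than a hard spatial template, a slightly mislocalised detection still yields a high response for the correct activity, which is the robustness the abstract claims.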

Wu J.-S.,Sun Yat Sen University | Wu J.-S.,SYSU CMU Shunde International Joint Research Institute | Zheng W.-S.,Sun Yat Sen University | Zheng W.-S.,Guangdong Province Key Laboratory of Computational Science | And 2 more authors.
Neural Networks | Year: 2015

Kernel competitive learning (KCL) has been used successfully to achieve robust clustering. However, KCL does not scale to large datasets, because (1) it must calculate and store the full kernel matrix, which is too large to compute and hold in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large-scale datasets. The proposed framework consists of two parts. First, it derives approximate kernel competitive learning (AKCL), which performs kernel competitive learning in a subspace obtained via sampling. We provide a solid theoretical analysis of why the proposed approximation works for kernel competitive learning, and we further show that the computational complexity of AKCL is greatly reduced. Second, we propose pseudo-parallel approximate kernel competitive learning (PAKCL), based on a set-based kernel competitive learning strategy, which overcomes the obstacle to parallel programming in kernel competitive learning and significantly accelerates approximate kernel competitive learning for large-scale clustering. Empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL perform comparably to KCL, with a large reduction in computational cost. The proposed methods also achieve better clustering precision than related approximate clustering approaches. © 2014 Elsevier Ltd.
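
The sampling-based approximation can be sketched as follows: build a Nystrom-style feature map from a small set of sampled landmarks, then run winner-take-all (competitive) prototype updates in that low-dimensional space, so the full n-by-n kernel matrix is never formed. This is an illustrative reconstruction under those assumptions, not the paper's exact AKCL algorithm.

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    """RBF kernel matrix between the rows of X and the rows of Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def akcl(X, n_clusters, n_landmarks=20, epochs=15, lr=0.5, seed=0):
    """Approximate kernel competitive learning sketch: embed the data via a
    Nystrom-style map built from sampled landmarks, then run winner-take-all
    prototype updates in that space instead of on the full kernel matrix."""
    rng = np.random.default_rng(seed)
    L = X[rng.choice(len(X), size=n_landmarks, replace=False)]   # landmarks
    vals, vecs = np.linalg.eigh(rbf(L, L))
    Winv_half = vecs @ np.diag(np.clip(vals, 1e-6, None) ** -0.5) @ vecs.T
    Phi = rbf(X, L) @ Winv_half              # approximate kernel feature map
    # simple deterministic init: prototypes spread over the data ordering
    protos = Phi[np.linspace(0, len(X) - 1, n_clusters).astype(int)].copy()
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            win = int(np.argmin(((protos - Phi[i]) ** 2).sum(-1)))
            protos[win] += lr * (Phi[i] - protos[win])   # winner-take-all update
        lr *= 0.8                                        # decaying learning rate
    return np.argmin(((Phi[:, None, :] - protos[None, :, :]) ** 2).sum(-1), axis=1)

# two well-separated blobs should land in different clusters
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.3, (40, 2)), rng.normal(4, 0.3, (40, 2))])
labels = akcl(X, n_clusters=2)
```

With n_landmarks much smaller than n, the per-step cost depends on the landmark count rather than the dataset size, which is the source of the complexity reduction claimed above.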

Chen Y.,South China Agricultural University | Chen Y.,Guangzhou University | Zheng W.-S.,University of Science and Technology of China | Zheng W.-S.,Guangdong Province Key Laboratory of Computational Science | And 2 more authors.
Neural Networks | Year: 2013

The high dimensionality of data and the small-sample-size problem are two significant limitations on the subspace methods favored in face recognition. In this paper, a new linear dimension reduction method called locally uncorrelated discriminant projections (LUDP) is proposed, which addresses both problems from a new perspective. Specifically, we propose a locally uncorrelated criterion, which aims to decorrelate the learned discriminant factors over the data locally rather than globally. The statistical uncorrelation criterion has been shown to be an important property both for reducing dimensionality and for learning robust discriminant projections. However, data are always locally distributed, so it is more important to exploit locally statistically uncorrelated discriminant information. We impose this new constraint on a graph-based maximum margin analysis, so that LUDP characterizes the local scatter as well as the nonlocal scatter, seeking a projection that maximizes the difference, rather than the ratio, between the nonlocal scatter and the local scatter. Experiments on the ORL, Yale, Extended Yale B, and FERET face databases demonstrate the effectiveness of the proposed method. © 2013 Elsevier Ltd.
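
The difference criterion has a convenient consequence: unlike a ratio criterion, it reduces to an ordinary (not generalized) eigenproblem on the scatter difference, which avoids inverting a possibly singular local scatter matrix in the small-sample-size case. The sketch below builds local scatter from a k-nearest-neighbour graph and takes the top eigenvectors of (nonlocal minus local) scatter; it omits the locally uncorrelated constraint that distinguishes the full LUDP model.

```python
import numpy as np

def difference_criterion_projection(X, n_neighbors=5, n_components=2):
    """Project onto the top eigenvectors of (nonlocal scatter - local scatter),
    a simplified sketch of the graph-based maximum margin analysis;
    the locally uncorrelated constraint of LUDP is not included here."""
    n, dim = X.shape
    # pairwise squared distances -> symmetric k-nearest-neighbour adjacency
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    order = np.argsort(d2, axis=1)
    adj = np.zeros((n, n), dtype=bool)
    for i in range(n):
        adj[i, order[i, 1:n_neighbors + 1]] = True
    adj |= adj.T
    # accumulate local (neighbour) and nonlocal (non-neighbour) scatter
    S_local = np.zeros((dim, dim))
    S_nonlocal = np.zeros((dim, dim))
    for i in range(n):
        for j in range(n):
            outer = np.outer(X[i] - X[j], X[i] - X[j])
            if adj[i, j]:
                S_local += outer
            else:
                S_nonlocal += outer
    vals, vecs = np.linalg.eigh(S_nonlocal - S_local)  # ascending order
    return vecs[:, ::-1][:, :n_components]             # top eigenvectors

rng = np.random.default_rng(3)
X = rng.normal(size=(60, 5))
P = difference_criterion_projection(X)   # 5 x 2 projection matrix
Z = X @ P                                # reduced 2-D representation
```

Maximizing the difference pushes apart points that are not graph neighbours while keeping neighbours close, without ever forming the inverse of the local scatter.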
