Fang X.,Harbin Institute of Technology | Xu Y.,Harbin Institute of Technology | Li X.,CAS Xi'an Institute of Optics and Precision Mechanics | Fan Z.,Harbin Institute of Technology | And 2 more authors.
Neurocomputing | Year: 2014

Feature selection (FS) methods have commonly been used as a main way to select relevant features. In this paper, we propose a novel unsupervised FS method, locality and similarity preserving embedding (LSPE), for feature selection. Specifically, a nearest neighbor graph is first constructed to preserve the locality structure of the data points, and this locality structure is then mapped to the reconstruction coefficients so that the similarity among the data points is preserved. Moreover, the sparsity derived from the locality is also preserved. Finally, the low-dimensional embedding of the sparse reconstruction is evaluated to best preserve the locality and similarity. We impose the ℓ2,1-norm on the transformation matrix to achieve row sparsity, which allows us to select relevant features and learn the embedding simultaneously. The selected features are highly stable owing to the locality and similarity preservation and, more importantly, they contain natural discriminating information even when no class labels are provided. We present the optimization algorithm and a convergence analysis of the proposed method. Extensive experimental results show the effectiveness of the proposed method. © 2013 Elsevier B.V.
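To make the pipeline described in the abstract more concrete, the following Python sketch shows an LSPE-style feature ranking under our own assumptions, not the authors' implementation: LLE-style neighbor weights supply the locality/similarity target, an ℓ2,1-regularized projection is fitted by iteratively re-weighted least squares, and the row norms of the projection score the features.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def lspe_feature_ranking(X, n_neighbors=5, alpha=1.0, n_iter=20):
    """Hypothetical LSPE-style sketch: rank features by the row norms of a
    projection learned to preserve a k-NN reconstruction of the data.
    X has shape (n_samples, n_features); n_neighbors must be < n_samples."""
    n, d = X.shape
    # 1) Locality: reconstruct each point from its k nearest neighbours (LLE-style weights).
    _, idx = NearestNeighbors(n_neighbors=n_neighbors + 1).fit(X).kneighbors(X)
    S = np.zeros((n, n))
    for i in range(n):
        neigh = idx[i, 1:]                                # drop the point itself
        Z = X[neigh] - X[i]
        C = Z @ Z.T + 1e-3 * np.eye(n_neighbors)          # regularised local Gram matrix
        w = np.linalg.solve(C, np.ones(n_neighbors))
        S[i, neigh] = w / w.sum()
    # 2) Embedding target: the locality-preserving reconstruction of X.
    Y = S @ X
    # 3) Row-sparse projection: min ||XW - Y||_F^2 + alpha * ||W||_{2,1},
    #    solved by iteratively re-weighted least squares.
    W = np.linalg.lstsq(X, Y, rcond=None)[0]
    for _ in range(n_iter):
        D = np.diag(1.0 / (2.0 * np.linalg.norm(W, axis=1) + 1e-8))
        W = np.linalg.solve(X.T @ X + alpha * D, X.T @ Y)
    # 4) Feature scores: row L2 norms of W (larger = more relevant).
    return np.linalg.norm(W, axis=1)
```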


Zhang Z.,Harbin Institute of Technology | Wang L.,Harbin Institute of Technology | Zhu Q.,Nanjing University of Aeronautics and Astronautics | Liu Z.,Henan University of Science and Technology | And 2 more authors.
Neurocomputing | Year: 2015

In this paper, we propose a novel noise modeling framework to improve representation based classification (NMFIRC) for robust face recognition. Representation based classification has attracted considerable attention in the field of face recognition. In general, a representation based classification method (RBCM) first represents the test sample as a linear combination of the training samples and then classifies the test sample according to which class yields the minimum reconstruction residual. However, RBCMs still cannot ideally solve the face recognition problem owing to variations in facial expression, pose and illumination. Furthermore, these variations can greatly degrade the representation accuracy when RBCMs are used for classification. Thus, finding an effective way to better represent the test sample is a crucial problem in RBCMs. To obtain a highly precise representation, the proposed framework first iteratively diminishes the representation noise, improving the linear-combination solution until it converges, and then exploits the resulting 'optimal' representation solution and a fusion method to perform classification. Extensive experiments demonstrate that the proposed framework can notably improve both the representation capability, by decreasing the representation noise, and the classification accuracy of RBCMs. © 2014 Elsevier B.V.
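The iterate-then-classify idea can be illustrated with a small hypothetical sketch: a noise term and ridge-regularized coefficients are estimated alternately, and the test sample is then assigned to the class with the smallest residual. The shrinkage step, parameters and function name are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def noise_aware_rbc(X, labels, y, lam=0.01, n_iter=5, shrink=0.5):
    """Hypothetical sketch: X is (n_features, n_train), labels is a 1-D
    integer array aligned with the columns of X, y is the test sample."""
    n_feat, n_train = X.shape
    e = np.zeros(n_feat)                          # estimated representation noise
    G = X.T @ X + lam * np.eye(n_train)
    for _ in range(n_iter):
        c = np.linalg.solve(G, X.T @ (y - e))     # ridge-regularised coefficients
        r = y - X @ c                             # current residual
        e = shrink * r                            # keep part of the residual as "noise"
    # Classify by class-wise residual after removing the estimated noise.
    scores = {}
    for cls in np.unique(labels):
        mask = labels == cls
        scores[cls] = np.linalg.norm(y - e - X[:, mask] @ c[mask])
    return min(scores, key=scores.get)
```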


Chen Y.,Harbin Institute of Technology | Chen Y.,Shenzhen Sunwin Intelligent Corporation | Zhang L.,Zhejiang University | Lin B.,Shenzhen Sunwin Intelligent Corporation | And 2 more authors.
Proceedings - 2011 2nd International Conference on Innovations in Bio-Inspired Computing and Applications, IBICA 2011 | Year: 2011

This paper proposes a new feature, the optical flow context histogram (OFCH), for detecting abnormal events, especially fighting and violent events, in a live camera stream. The optical flow context histogram is a log-polar histogram that combines the orientation and magnitude histograms of the optical flow. A human action is represented by the sequence of these orientation and magnitude histograms, and PCA is adopted to reduce the dimensionality of this representation. Several machine learning methods, including random forests, support vector machines and Bayesian networks (BayesNet), are employed for sequence classification. The experiments were carried out on video clips downloaded from the Internet. The results show that the proposed method works well with a fixed surveillance camera. © 2011 IEEE.
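A minimal Python sketch of a joint orientation/magnitude flow histogram in the spirit of OFCH is given below; the exact log-polar binning of the paper is not reproduced, and the bin edges and function name are assumptions.

```python
import numpy as np

def flow_orientation_magnitude_histogram(flow, n_orient=8, mag_edges=(0.5, 1.0, 2.0, 4.0)):
    """Hypothetical sketch: bin a dense optical-flow field (shape H x W x 2)
    jointly by flow direction and flow magnitude, then normalise."""
    u, v = flow[..., 0].ravel(), flow[..., 1].ravel()
    mag = np.hypot(u, v)
    ang = np.mod(np.arctan2(v, u), 2 * np.pi)
    o_bin = np.minimum((ang / (2 * np.pi) * n_orient).astype(int), n_orient - 1)
    m_bin = np.digitize(mag, mag_edges)            # len(mag_edges)+1 magnitude bands
    hist = np.zeros((n_orient, len(mag_edges) + 1))
    np.add.at(hist, (o_bin, m_bin), 1)             # accumulate one count per flow vector
    return (hist / max(hist.sum(), 1)).ravel()     # normalised descriptor
```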


Lu Y.,Harbin Institute of Technology | Fei L.,Harbin Institute of Technology | Chen Y.,Shenzhen Sunwin Intelligent Co.
Proceedings of 2014 International Conference on Smart Computing, SMARTCOMP 2014 | Year: 2014

In different color spaces, the three color channels may have different relationships, but most color face recognition methods exploit the color information in a simple way. In this paper, we propose a novel hybrid fusion scheme for color face recognition, which first uses two-phase test sample representation (TPTSR) to obtain matching scores for each color channel of the test sample and then uses the hybrid fusion scheme to combine these three kinds of matching scores to classify the test sample. The hybrid fusion scheme exploits low- and high-order components of the three kinds of matching scores based on the sum and product rules. The scores generated by TPTSR for each color channel include both weakly and strongly correlated components, so extracting low- and high-order components of these scores allows them to be well integrated and used for classification. To evaluate the proposed method, we compare it not only with global and local methods such as principal component analysis (PCA), linear discriminant analysis (LDA), kernel PCA (KPCA), kernel LDA (KLDA), locality preserving projection (LPP) and TPTSR, but also with recently proposed local-feature-based methods such as color local Gabor wavelets (CLGW), color local binary patterns (CLBP) and tensor discriminant color space (TDCS). © 2014 IEEE.
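A toy sketch of the score-fusion step is shown below, assuming per-class distance scores from each channel and a simple min-max normalization; the weighting and the normalization are illustrative assumptions rather than the paper's exact rule.

```python
import numpy as np

def hybrid_score_fusion(score_r, score_g, score_b, w=0.5):
    """Hypothetical sketch: fuse per-channel matching scores with a sum rule
    (low-order, additive component) and a product rule (high-order,
    multiplicative component). Scores are per-class distances; lower is better."""
    S = np.vstack([score_r, score_g, score_b]).astype(float)
    S = (S - S.min(axis=1, keepdims=True)) / (np.ptp(S, axis=1, keepdims=True) + 1e-12)
    low = S.sum(axis=0)                     # sum rule over the three channels
    high = S.prod(axis=0)                   # product rule over the three channels
    fused = w * low + (1 - w) * high        # weighted combination of both components
    return int(np.argmin(fused))            # predicted class index
```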


Deng Q.,Harbin Institute of Technology | Xu Y.,Harbin Institute of Technology | Wang J.,Nanyang Technological University | Sun K.,Shenzhen Sunwin Intelligent Corporation
Proceedings - 2015 International Conference on Computers, Communications and Systems, ICCCS 2015 | Year: 2015

The secondary sex characteristics of human faces differ considerably with age, sex hormones, race and dress style, so building a gender recognition model that works for all kinds of people is very challenging. This paper proposes to train a gender recognition model based on a deep convolutional network on a complete dataset. Our newly built complete dataset contains as many common variations of face images as possible. Based on this dataset, we design a very deep convolutional network as our gender classifier. We achieve an accuracy of 98.67% on the most challenging public database, Labeled Faces in the Wild (LFW) [1]. We also collect 10,000 images from the Internet to build a new dataset, the Chinese Wild database, on which our model achieves an accuracy of 97.51%. This indicates that our model is robust to racial variation. In both experiments, our model achieves state-of-the-art performance in the wild. © 2015 IEEE.
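The abstract does not specify the network architecture, so the PyTorch sketch below only illustrates the general shape of a small convolutional gender classifier; every layer size, the input resolution and the class name are assumptions.

```python
import torch
import torch.nn as nn

class GenderNet(nn.Module):
    """Hypothetical small convolutional gender classifier (illustrative only;
    not the architecture used in the paper)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, 256), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(256, 2),            # two classes: female / male
        )

    def forward(self, x):                 # x: (N, 3, 64, 64) face crops
        return self.classifier(self.features(x))
```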


Zhang Z.,Harbin Institute of Technology | Wang L.,Harbin Institute of Technology | Zhu Q.,Nanjing University of Aeronautics and Astronautics | Chen S.-K.,Shenzhen ZKTeco Inc. | And 2 more authors.
Knowledge-Based Systems | Year: 2015

Face recognition across pose is an important issue in biometrics. Facial rotations caused by pose changes dramatically enlarge the intra-class variations, which considerably degrades the performance of face recognition algorithms. It is advisable to extract more discriminative features to overcome this difficulty. In this paper, we present a simple but efficient feature extraction method based on facial landmarks and multi-scale fusion features. We first extract local features using Weber local descriptors (WLD) and multi-scale patches centered at predefined facial landmarks, and then construct fusion features by randomly selecting parts of the local features. Finally, the classification result is obtained by decision fusion of all local features and fusion features. The proposed method has two characteristics: (1) local features around landmarks describe the similarity between two images well under pose variations while reducing redundant information, and (2) fusion features constructed by randomly selecting local features from predefined regions further alleviate the influence of pose variations. Extensive experimental results on public face datasets show that the proposed method greatly outperforms previous state-of-the-art algorithms. © 2015 Elsevier B.V. All rights reserved.
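As an illustration of landmark-centered, multi-scale local features, here is a simplified Python sketch using a Weber-style differential-excitation histogram; the descriptor details, patch sizes and helper names are assumptions and do not reproduce the paper's exact WLD implementation.

```python
import numpy as np

def wld_histogram(patch, n_bins=8):
    """Simplified Weber-style histogram of a grayscale patch: differential
    excitation arctan(sum of neighbour differences / centre intensity)."""
    p = patch.astype(float)
    c = p[1:-1, 1:-1]
    diff = sum(np.roll(np.roll(p, dy, 0), dx, 1)[1:-1, 1:-1] - c
               for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0))
    xi = np.arctan2(diff, c + 1e-6)                     # differential excitation
    hist, _ = np.histogram(xi, bins=n_bins, range=(-np.pi / 2, np.pi / 2))
    return hist / max(hist.sum(), 1)

def landmark_multiscale_features(img, landmarks, scales=(8, 12, 16)):
    """Hypothetical sketch: concatenate descriptor histograms of patches of
    several sizes centred at each facial landmark of a grayscale image."""
    feats = []
    H, W = img.shape
    for (x, y) in landmarks:
        for s in scales:
            y0, y1 = max(0, y - s), min(H, y + s)
            x0, x1 = max(0, x - s), min(W, x + s)
            feats.append(wld_histogram(img[y0:y1, x0:x1]))
    return np.concatenate(feats)
```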


Zhang H.,Harbin Institute of Technology | Chen Y.,Shenzhen Sunwin Intelligent Corporation | Shi J.,Harbin Vicog Intelligent Systems Co.
Journal of Modern Optics | Year: 2014

The sparse representation classification (SRC) method proposed by Wright et al. is considered a breakthrough in face recognition because of its good performance. Nevertheless, it still cannot perfectly solve the face recognition problem. The main reason is that variations in pose, facial expression, and illumination can be rather severe, and the number of available facial images is smaller than the dimensionality of the facial image, so a linear combination of all the training samples cannot fully represent the test sample. In this study, we proposed a novel framework to improve representation-based classification (RBC). The framework first ran the sparse representation algorithm and determined the unavoidable deviation between the test sample and the optimal linear combination of all the training samples used to represent it. It then exploited the deviation and all the training samples to re-solve the linear combination coefficients. Finally, the classification rule, the training samples, and the renewed linear combination coefficients were used to classify the test sample. In general, the proposed framework can work with most RBC methods. From the viewpoint of regression analysis, the proposed framework has solid theoretical soundness. Because it can, to an extent, identify the bias effect of the RBC method, it enables RBC to obtain more robust face recognition results. Experimental results on a variety of face databases demonstrated that the proposed framework can improve collaborative representation classification, SRC, and the nearest neighbor classifier. © 2014 Taylor & Francis.
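One hedged way to read the "deviation" idea is sketched below: represent the test sample once, append the residual to the dictionary as an extra atom, re-solve the coefficients, and classify by class-wise residual. The ridge solver and the way the deviation is reused are our own illustrative assumptions.

```python
import numpy as np

def deviation_corrected_rbc(X, labels, y, lam=0.01):
    """Hypothetical sketch: X is (n_features, n_train), labels is a 1-D
    integer array aligned with the columns of X, y is the test sample."""
    n_train = X.shape[1]
    solve = lambda D: np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ y)
    c0 = solve(X)                                  # initial global representation
    dev = y - X @ c0                               # unavoidable deviation
    D = np.hstack([X, dev[:, None]])               # training samples + deviation atom
    c = solve(D)[:n_train]                         # renewed coefficients (deviation part dropped)
    residuals = {cls: np.linalg.norm(y - X[:, labels == cls] @ c[labels == cls])
                 for cls in np.unique(labels)}
    return min(residuals, key=residuals.get)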


Wen J.,Harbin Institute of Technology | Chen Y.,Harbin Institute of Technology | Chen Y.,Shenzhen Sunwin Intelligent Corporation
Optical Engineering | Year: 2012

Sparse representation can deal with the lack-of-samples problem because it utilizes all the training samples. However, the discrimination ability degrades when too many training samples are used for the representation. We propose a novel appearance-based palmprint recognition method that seeks a compromise between discrimination ability and the lack-of-samples problem so as to obtain a proper representation scheme. Under the assumption that the test sample can be well represented by a linear combination of a certain number of training samples, we first select representative training samples according to their contributions to the representation. We then further refine the training samples through an iterative procedure, each time excluding the training sample with the least contribution to the test sample. Experiments on the PolyU multispectral palmprint database and a two-dimensional plus three-dimensional palmprint database show that the proposed method outperforms conventional appearance-based palmprint recognition methods. Moreover, we also investigate how the key parameters of the proposed algorithm should be set, which facilitates obtaining high recognition accuracy. © 2012 Society of Photo-Optical Instrumentation Engineers (SPIE).
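A minimal sketch of the sample-refinement loop described above, assuming a ridge-regularized representation and measuring each sample's contribution by the norm of its weighted column; the parameter names and the stopping rule are assumptions rather than the paper's exact procedure.

```python
import numpy as np

def iterative_sample_refinement(X, labels, y, n_keep=20, lam=0.01):
    """Hypothetical sketch: iteratively drop the training sample (column of X)
    that contributes least to representing the test sample y, then classify
    by class-wise residual. labels is a 1-D array aligned with X's columns."""
    idx = np.arange(X.shape[1])
    while idx.size > n_keep:
        D = X[:, idx]
        c = np.linalg.solve(D.T @ D + lam * np.eye(idx.size), D.T @ y)
        contrib = np.array([np.linalg.norm(c[j] * D[:, j]) for j in range(idx.size)])
        idx = np.delete(idx, np.argmin(contrib))   # remove the least useful sample
    D = X[:, idx]
    c = np.linalg.solve(D.T @ D + lam * np.eye(idx.size), D.T @ y)
    residuals = {cls: np.linalg.norm(y - D[:, labels[idx] == cls] @ c[labels[idx] == cls])
                 for cls in np.unique(labels[idx])}
    return min(residuals, key=residuals.get)
```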


Zhang Z.,Harbin Institute of Technology | Li Z.,Guangdong Polytechnic Normal University | Xie B.,Harbin Institute of Technology | Wang L.,Harbin Institute of Technology | And 2 more authors.
Mathematical Problems in Engineering | Year: 2014

The representation based classification method (RBCM) has shown great potential for face recognition since it first emerged. The linear regression classification (LRC) method and the collaborative representation classification (CRC) method are two well-known RBCMs. LRC and CRC represent the test sample using the training samples of each class and all the training samples, respectively, and then perform classification on the basis of the representation residual. The LRC method can be viewed as a "locality representation" method because it uses only the training samples of each class to represent the test sample, so it cannot exploit the effectiveness of a "globality representation." Conversely, the CRC method lacks the locality benefit of the general RBCM. We therefore propose to integrate CRC and LRC to perform more robust representation based classification. Experimental results on benchmark face databases demonstrate that the proposed method achieves high classification accuracy. © 2014 Zheng Zhang et al.
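A straightforward way to realize such an integration, sketched here under our own assumptions about normalization and weighting, is to fuse the class-wise residuals of LRC (per-class least squares) and CRC (global ridge regression).

```python
import numpy as np

def lrc_crc_fusion(X, labels, y, lam=0.01, w=0.5):
    """Hypothetical sketch: fuse LRC and CRC class-wise residuals.
    X is (n_features, n_train); labels is a 1-D array aligned with X's columns."""
    classes = np.unique(labels)
    # LRC: represent y with each class's training samples only.
    lrc = np.array([np.linalg.norm(y - X[:, labels == c] @
                    np.linalg.lstsq(X[:, labels == c], y, rcond=None)[0])
                    for c in classes])
    # CRC: one global collaborative (ridge) representation, then class-wise residuals.
    coef = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    crc = np.array([np.linalg.norm(y - X[:, labels == c] @ coef[labels == c])
                    for c in classes])
    fused = w * (lrc / lrc.sum()) + (1 - w) * (crc / crc.sum())
    return classes[np.argmin(fused)]
```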


Zhu Q.,Harbin Institute of Technology | Zhu Q.,University of Shanghai for Science and Technology | Li Z.,Harbin Institute of Technology | Li Z.,Guangdong Polytechnic Normal University | And 5 more authors.
PLoS ONE | Year: 2013

The minimum squared error based classification (MSEC) method establishes a single classification model for all the test samples. However, this model may not be optimal for every test sample. This paper proposes an improved MSEC (IMSEC) method that is tailored to each test sample. The proposed method first roughly identifies the possible classes of the test sample, and then establishes a minimum squared error (MSE) model based on the training samples from these possible classes. We apply our method to face recognition. Experimental results on several datasets show that IMSEC outperforms MSEC and other state-of-the-art methods in terms of accuracy. © 2013 Zhu et al.
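The two-stage idea can be sketched as follows, assuming the rough stage ranks classes by a nearest-subspace residual and the candidate count is a free parameter; both choices are illustrative assumptions.

```python
import numpy as np

def imsec_predict(X, labels, y, n_candidates=3, lam=0.01):
    """Hypothetical sketch: shortlist the most plausible classes for y, then
    fit an MSE-style (ridge) model on the shortlisted training samples only.
    X is (n_features, n_train); labels is a 1-D array aligned with X's columns."""
    classes = np.unique(labels)
    # Stage 1: rough per-class residuals (nearest-subspace style).
    rough = np.array([np.linalg.norm(y - X[:, labels == c] @
                      np.linalg.lstsq(X[:, labels == c], y, rcond=None)[0])
                      for c in classes])
    shortlist = classes[np.argsort(rough)[:n_candidates]]
    # Stage 2: model restricted to the shortlisted classes.
    mask = np.isin(labels, shortlist)
    D, dl = X[:, mask], labels[mask]
    coef = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ y)
    residuals = {c: np.linalg.norm(y - D[:, dl == c] @ coef[dl == c]) for c in shortlist}
    return min(residuals, key=residuals.get)
```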
