Yang W.,Shenzhen Key Laboratory of Information Science and Technology | Yang W.,Visual Information Processing Laboratory | Huang X.,Shenzhen Key Laboratory of Information Science and Technology | Huang X.,Visual Information Processing Laboratory | And 3 more authors.
Information Sciences | Year: 2014

In this paper, we present a multimodal personal identification system that fuses finger vein and finger dorsal images at the feature level. First, we design an image acquisition device that synchronously captures finger vein and finger dorsal images, and we establish a small dataset of such images for algorithm testing and evaluation. Second, to exploit the intrinsic positional relationship between the finger veins and the finger dorsum, we perform a dedicated registration on the two kinds of images; the regions of interest (ROIs) of both are then extracted and normalized in both size and intensity. Third, we develop a magnitude-preserved competitive code feature extraction method, which is applied to both the finger vein and finger dorsal images. Based on the preserved magnitude, a comparative competitive code (C2Code) is then derived to fuse the finger vein and finger dorsal features at the feature level. The resulting C2Code feature map, which contains new junction-point and position features from the finger vein and finger dorsal image pairs, is highly informative for identification. Finally, the C2Code feature map is fed into a nearest neighbor (NN) classifier to carry out personal authentication. Experiments on the established dataset show that the proposed fusion strategy achieves higher identification accuracy and lower equal error rates (EERs) than state-of-the-art unimodal biometrics. © 2013 Elsevier Inc. All rights reserved.
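The "magnitude-preserved competitive code" step described above can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the Gabor parameters, the number of orientations, and the winner-selection rule are all assumptions, and the C2Code fusion itself is not reproduced.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(theta, size=9, sigma=2.0, freq=0.25):
    """Real part of a Gabor filter oriented at angle theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    env = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    k = env * np.cos(2 * np.pi * freq * xr)
    return k - k.mean()  # zero-DC: flat regions give no response

def competitive_code(img, n_orient=6):
    """Per-pixel winning-orientation index plus its response magnitude.

    Competitive coding keeps only the winning orientation; here the
    winner's magnitude is also kept, mimicking the 'magnitude-preserved'
    idea so a later fusion stage could compare magnitudes.
    """
    responses = np.stack([
        convolve2d(img, gabor_kernel(t * np.pi / n_orient), mode="same")
        for t in range(n_orient)
    ])
    code = np.argmin(responses, axis=0)        # dominant (line-like) orientation
    magnitude = np.abs(responses).max(axis=0)  # preserved magnitude
    return code, magnitude
```

A code map of this kind, one integer per pixel, is what a later feature-level fusion or NN matcher would consume.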


Wang C.,Tsinghua University | Wang C.,Shenzhen Key Laboratory of Information Science and Technology | Wang C.,Visual Information Processing Laboratory | Yang W.,Tsinghua University | And 5 more authors.
Proceedings - 2013 7th International Conference on Image and Graphics, ICIG 2013 | Year: 2013

In this paper, we consider the problem of reconstructing a 3D model from a set of pictures taken by calibrated, arbitrarily placed cameras. Our method builds on the existing space carving algorithm, which takes the photo hull as its final result; our goal is to solve the visibility problem that arises while carving the visual hull. We propose a new concept, the Discrete Viewing Edge (DVE), to represent the visual hull instead of a 3D array. The DVE is voxel-based, simple, and effective. For models represented by DVEs, we present a surface extraction algorithm and a carving procedure in which the visibility of a voxel can be determined quickly and easily. Our way of determining the visibility of a voxel is global, i.e., we take into account all cameras to which the voxel may be visible. We apply our method to a set of synthetic pictures and render arbitrary views of the target model that differ from the existing camera viewpoints. © 2013 IEEE.
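For orientation, the classical silhouette-based carving that space carving generalizes can be sketched as below. This is only the plain visual hull over a cubic voxel grid; the paper's DVE representation, photo-consistency test, and global visibility reasoning are not reproduced, and the camera/grid conventions here are assumptions.

```python
import numpy as np

def carve_visual_hull(grid_shape, cameras, silhouettes):
    """Keep a voxel only if its projection lands inside every silhouette.

    cameras: list of 3x4 projection matrices; silhouettes: binary images.
    Voxel centres are normalised to [0, 1]^3 (cubic grid assumed).
    """
    zs, ys, xs = np.indices(grid_shape)
    pts = np.stack([xs, ys, zs], axis=-1).reshape(-1, 3) / (np.array(grid_shape) - 1)
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    occupied = np.ones(len(pts), dtype=bool)
    for P, sil in zip(cameras, silhouettes):
        proj = pts_h @ P.T                       # project all voxels at once
        uv = (proj[:, :2] / proj[:, 2:3]).round().astype(int)
        h, w = sil.shape
        inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
        hit = np.zeros(len(pts), dtype=bool)
        hit[inside] = sil[uv[inside, 1], uv[inside, 0]] > 0
        occupied &= hit                          # carve voxels outside any silhouette
    return occupied.reshape(grid_shape)
```

Space carving replaces the silhouette test with a photo-consistency test across the views that can actually see each voxel, which is exactly where the visibility problem the paper addresses comes from.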


Yang W.,Tsinghua University | Yang W.,Shenzhen Key Laboratory of Information Science and Technology | Yang W.,Visual Information Processing Laboratory | Huang X.,Tsinghua University | And 5 more authors.
Proceedings - International Conference on Image Processing, ICIP | Year: 2012

In this paper, we present a multimodal personal identification system that uses finger vein and finger dorsal images, with fusion applied at the feature level. A scheme that combines registration of image pairs with region-of-interest (ROI) segmentation is applied to simultaneously captured finger ventral vein and finger dorsal images. We develop a 'Comparative Competitive Coding' (C2Code) fusion scheme that discards undesired information in the unimodal feature extraction stage so that only discriminative information is preserved. Furthermore, the C2Code contains a new junction-point feature derived from the finger vein and finger dorsal image pairs. Experimentally, we establish a dataset of finger vein and finger dorsal images; compared with unimodal methods, the proposed fusion scheme achieves higher identification accuracy and a lower equal error rate (EER). © 2012 IEEE.
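The matching side of such a system, comparing two orientation-code maps and picking the nearest gallery entry, might look like the sketch below. The angular distance and normalisation are illustrative assumptions, not the paper's C2Code matching rule.

```python
import numpy as np

def code_distance(code_a, code_b, n_orient=6):
    """Mean angular distance between two orientation-code maps.

    Orientations are cyclic: with 6 orientations, codes 0 and 5 differ
    by 1 step, not 5. Result is normalised to [0, 1].
    """
    d = np.abs(code_a.astype(int) - code_b.astype(int))
    d = np.minimum(d, n_orient - d)        # wrap-around orientation difference
    return d.mean() / (n_orient // 2)

def nn_identify(probe, gallery):
    """Return the label of the gallery code map closest to the probe.

    gallery: list of (label, code_map) pairs.
    """
    labels, codes = zip(*gallery)
    dists = [code_distance(probe, c) for c in codes]
    return labels[int(np.argmin(dists))]
```

A genuine system would compute this distance over registered ROIs and compare the score against a threshold to report the EER.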


Liang C.,Tsinghua University | Liang C.,Shenzhen Key Laboratory of Information Science and Technology | Liang C.,Visual Information Processing Laboratory | Yang W.,Tsinghua University | And 8 more authors.
IEICE Transactions on Information and Systems | Year: 2016

In this letter, we propose a novel texture descriptor that takes advantage of an anisotropic neighborhood. A new encoding scheme, Reflection and Rotation Invariant Uniform Patterns (rriu2), is proposed to explore the local structure of textures. The resulting descriptor is called Oriented Local Binary Patterns (OLBP). OLBP may be incorporated into other variants of Local Binary Patterns (LBP) to obtain more powerful texture descriptors. Experimental results on the CUReT and Outex databases show that OLBP not only significantly outperforms LBP but also demonstrates great robustness to rotation and illumination changes. © Copyright 2016 The Institute of Electronics, Information and Communication Engineers.
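The classical baseline that rriu2 extends, rotation-invariant uniform LBP ('riu2') on an isotropic circular neighborhood, can be sketched as follows. The paper's OLBP adds an anisotropic neighborhood and reflection invariance on top of this; neither is reproduced here, and nearest-pixel sampling is an assumed simplification.

```python
import numpy as np

def lbp_riu2(img, radius=1, n_pts=8):
    """Rotation-invariant uniform LBP codes (the standard riu2 mapping).

    Uniform patterns (at most 2 bit transitions around the circle) map to
    their count of 1-bits; all other patterns map to n_pts + 1.
    """
    h, w = img.shape
    angles = 2 * np.pi * np.arange(n_pts) / n_pts
    dy = np.round(-radius * np.sin(angles)).astype(int)  # nearest-pixel sampling
    dx = np.round(radius * np.cos(angles)).astype(int)
    centre = img[radius:h - radius, radius:w - radius]
    bits = []
    for oy, ox in zip(dy, dx):
        nb = img[radius + oy:h - radius + oy, radius + ox:w - radius + ox]
        bits.append((nb >= centre).astype(int))
    bits = np.stack(bits)                                # n_pts x H x W pattern
    transitions = np.abs(bits - np.roll(bits, 1, axis=0)).sum(axis=0)
    return np.where(transitions <= 2, bits.sum(axis=0), n_pts + 1)
```

A texture descriptor is then the normalised histogram of these codes, which is the part an OLBP-style scheme would replace with its own rriu2 mapping over an oriented neighborhood.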


Liang C.,Tsinghua University | Liang C.,Shenzhen Key Laboratory of Information Science and Technology | Liang C.,Visual Information Processing Laboratory | Yang W.,Tsinghua University | And 8 more authors.
IEICE Transactions on Information and Systems | Year: 2014

In this paper, we propose a texture descriptor based on the amplitude distribution and phase distribution of the discrete Fourier transform (DFT) of an image. A one-dimensional DFT is applied to all the rows and columns of an image. Histograms of the amplitudes and of the gradients of the phases between adjacent rows/columns are computed as the feature descriptor, which is called aggregated DFT (ADFT). ADFT can easily be combined with the completed local binary pattern (CLBP); the combined feature captures both global and local information of the texture. ADFT is designed for isotropic textures and is demonstrated to be effective for roughness classification of castings. Experimental results show that the amplitude part of ADFT is also discriminative in describing anisotropic textures and can be used as a complement to local texture descriptors such as CLBP. Copyright © 2014 The Institute of Electronics, Information and Communication Engineers.
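The aggregation described above, row/column 1-D DFTs, then histograms of amplitudes and of phase differences between adjacent rows/columns, can be sketched as below. The bin counts, phase wrapping, and normalisation are illustrative assumptions; the paper's exact aggregation is not reproduced.

```python
import numpy as np

def adft_descriptor(img, n_bins=16):
    """ADFT-style feature: amplitude histogram plus phase-gradient histogram
    over 1-D DFTs taken along all rows, then along all columns."""
    feats = []
    for axis in (1, 0):                            # DFT along rows, then columns
        spec = np.fft.fft(img, axis=axis)
        amp = np.abs(spec)
        phase = np.angle(spec)
        # Phase gradient between adjacent rows (resp. columns), wrapped to [-pi, pi]
        dphi = np.angle(np.exp(1j * np.diff(phase, axis=1 - axis)))
        h_amp, _ = np.histogram(amp, bins=n_bins)
        h_phi, _ = np.histogram(dphi, bins=n_bins, range=(-np.pi, np.pi))
        feats += [h_amp / h_amp.sum(), h_phi / h_phi.sum()]
    return np.concatenate(feats)                   # length 4 * n_bins
```

Concatenating such a global spectral histogram with a local CLBP histogram is the kind of global-plus-local combination the abstract describes.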
