Chongqing Key Laboratory of Computational Intelligence
Zhang Q.,Chongqing University of Posts and Telecommunications |
Zhang Q.,Chongqing Key Laboratory of Computational Intelligence |
Zhang P.,Chongqing Key Laboratory of Computational Intelligence |
Wang G.,Chongqing Key Laboratory of Computational Intelligence
Journal of Intelligent and Fuzzy Systems | Year: 2017
Rough set theory characterizes an uncertain set with two crisp boundaries, namely the lower- and upper-approximation sets, and has been applied successfully in artificial intelligence. However, there has been little discussion of how to construct a single crisp set as the approximation of a target set, rather than merely giving two boundary sets. Although many uncertain decision rules can be acquired from an information system based on the lower-approximation set while keeping the positive region unchanged, the accuracy of these rules depends on the similarity between the target set and its lower-approximation set. To address this problem, an approximation set model of rough sets was proposed and successfully applied to attribute reduction and uncertain image segmentation in our previous research. In this paper, a fuzzy similarity between a target set and its approximation set is defined based on Euclidean distance rather than on the cardinality of a finite set. Several useful properties of the 0.5-approximation set of a rough set are then presented, and it is proved, using the Euclidean similarity between two fuzzy sets, that the 0.5-approximation set is the best approximation set of a rough set among all definable sets. Finally, the change laws of the fuzzy similarity between a target set and its 0.5-approximation set in different granularity spaces are analyzed in detail. © 2017 IOS Press and the authors. All rights reserved.
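A minimal sketch of the standard constructions the abstract builds on (not the authors' implementation): the rough membership degree of an element, and the lower-, upper-, and 0.5-approximation sets it induces on a toy universe partitioned into equivalence classes.

```python
# Rough membership: mu_X(x) = |[x] ∩ X| / |[x]|, where [x] is the
# equivalence class (granule) containing x.

def rough_membership(x, target, blocks):
    block = next(b for b in blocks if x in b)
    return len(block & target) / len(block)

def approximations(target, blocks):
    """Lower, upper, and 0.5-approximation sets of `target`."""
    universe = set().union(*blocks)
    lower = {x for x in universe if rough_membership(x, target, blocks) == 1.0}
    upper = {x for x in universe if rough_membership(x, target, blocks) > 0.0}
    half = {x for x in universe if rough_membership(x, target, blocks) >= 0.5}
    return lower, upper, half

# Toy granularity: three equivalence classes over {1, ..., 7}
blocks = [{1, 2}, {3, 4, 5}, {6, 7}]
target = {1, 2, 3, 4}  # the uncertain target set X
lower, upper, half = approximations(target, blocks)
print(sorted(lower))  # [1, 2]
print(sorted(upper))  # [1, 2, 3, 4, 5]
print(sorted(half))   # [1, 2, 3, 4, 5]
```

The 0.5-approximation set keeps every element whose granule is at least half inside the target, which is the crisp "best approximation" the paper analyzes.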
Zeng X.-H.,Chongqing University of Posts and Telecommunications |
Zeng X.-H.,Chongqing Key Laboratory of Computational Intelligence |
Hou S.-L.,Chongqing University of Posts and Telecommunications |
Hou S.-L.,Chongqing Key Laboratory of Computational Intelligence
Journal of Computers (Taiwan) | Year: 2017
Conventional sparse coding-based super-resolution image reconstruction methods cannot preserve local smoothing structures in an image, i.e., they do not consider the smoothness between patches during reconstruction. In this paper, we introduce manifold regularization to constrain image patches to lie on an intrinsically smooth manifold, and incorporate image nonlocal self-similarity into the sparse representation model to improve the accuracy of the sparse coefficients; more accurate high-resolution patches can thus be obtained and used to reconstruct a better high-resolution image. Accordingly, a novel Manifold-Regularization Super-Resolution Image Reconstruction algorithm (MSIR) is proposed. Experimental results on benchmark medical images and a publicly available medical image set (100 images) demonstrate that the MSIR algorithm outperforms four classical super-resolution reconstruction methods. © 2017, Computer Society of the Republic of China. All rights reserved.
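A hedged sketch of the kind of objective such a method minimizes (the function name and weights are illustrative, not from the paper): the usual sparse-coding fit plus a graph-Laplacian manifold regularizer that ties the codes of neighbouring patches together.

```python
import numpy as np

def msir_objective(Y, D, A, L, lam=0.1, gamma=0.05):
    """Y: d x n patch matrix, D: d x k dictionary, A: k x n sparse codes,
    L: n x n graph Laplacian over the patches (encodes patch smoothness)."""
    fidelity = np.sum((Y - D @ A) ** 2)          # reconstruction error
    sparsity = lam * np.sum(np.abs(A))           # l1 sparsity of codes
    manifold = gamma * np.trace(A @ L @ A.T)     # = (gamma/2) * sum_ij w_ij ||a_i - a_j||^2
    return fidelity + sparsity + manifold

# Toy sizes: 16-dim patches, 8 atoms, 5 patches joined in a path graph
rng = np.random.default_rng(0)
Y = rng.standard_normal((16, 5))
D = rng.standard_normal((16, 8))
A = rng.standard_normal((8, 5))
W = np.diag(np.ones(4), 1); W = W + W.T       # path-graph adjacency
L = np.diag(W.sum(axis=1)) - W                # graph Laplacian (PSD)
print(msir_objective(Y, D, A, L) >= 0)        # True: all three terms are nonnegative
```

Driving the manifold term down forces codes of nearby patches to agree, which is how smoothness between patches enters the reconstruction.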
Gong X.,University of South China |
Gong X.,Chongqing Key Laboratory of Computational Intelligence |
Luo J.,Sichuan Academy of Medical science |
Fu Z.,University of South China
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2013
This paper presents a framework for 3D face representation, including pose and depth-image normalization. Unlike a 2D image, a 3D face itself contains sufficient discriminant information. We propose to map the original 3D coordinates to a depth image at a specific resolution, so the original information in 3D space is retained. The framework has two steps: 1) posture correction, for which we propose two simple but effective methods to standardize a face model so that it is suitable for the subsequent steps; and 2) creation of a depth image that retains the original measurement information. Tests on a large 3D face dataset containing 2700 3D faces from 450 subjects show that the proposed normalization provides higher recognition accuracy than other representations. © Springer International Publishing 2013.
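A minimal sketch of the coordinate-to-depth-image mapping described above, under illustrative assumptions (grid shape, resolution, and the choice of keeping the largest z per pixel are mine, not the paper's):

```python
import numpy as np

def depth_image(points, resolution=1.0, shape=(64, 64)):
    """Quantize the x, y coordinates of 3D points (n, 3) onto a pixel grid
    at the given resolution; keep one depth value per pixel (here the
    largest z, i.e. the point closest to the viewer in this convention)."""
    img = np.zeros(shape)
    ix = (points[:, 0] / resolution).astype(int)
    iy = (points[:, 1] / resolution).astype(int)
    ok = (ix >= 0) & (ix < shape[0]) & (iy >= 0) & (iy < shape[1])
    for x, y, z in zip(ix[ok], iy[ok], points[ok, 2]):
        img[x, y] = max(img[x, y], z)
    return img

pts = np.array([[1.2, 2.7, 5.0],   # both points fall into pixel (1, 2);
                [1.4, 2.9, 3.0]])  # the larger depth (z = 5.0) is kept
print(depth_image(pts, 1.0, (4, 4))[1, 2])  # 5.0
```

Because the grid spacing equals the chosen resolution, pixel values are actual z measurements, which is how the original metric information survives the 2D mapping.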
Liu L.-H.,Chongqing University of Posts and Telecommunications |
Luan X.,Chongqing Key Laboratory of Computational Intelligence
Proceedings - International Conference on Machine Learning and Cybernetics | Year: 2017
From the perspective of feature extraction, we present a histogram-based sparsity descriptor (HSD) derived from robust principal component analysis (RPCA) and histogram techniques. Given a test image, sparse error images with respect to each class are obtained by RPCA decomposition. To extract facial features in terms of intensity distribution, a sparseness measure based on the intensity histogram is introduced by computing the histogram sparsity of those sparse error images. In this way, we first choose t candidates similar to the test face image, and finally use LRC to identify the correct individual. The efficacy and robustness of the proposed method are verified on three popular face databases (ORL, CMU PIE, and AR) with promising results. We show that if the sparsity hidden in the error images is properly harnessed, the proposed HSD can correctly recognize corrupted or occluded faces. © 2016 IEEE.
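The abstract does not specify which sparseness measure is applied to the intensity histogram; as one plausible stand-in, the sketch below uses Hoyer's sparseness (1 for a one-hot vector, 0 for a constant one) on the histogram of an error image:

```python
import numpy as np

def hoyer_sparseness(v):
    """Hoyer's measure: (sqrt(n) - ||v||_1 / ||v||_2) / (sqrt(n) - 1)."""
    n = v.size
    l1, l2 = np.abs(v).sum(), np.sqrt((v ** 2).sum())
    return (np.sqrt(n) - l1 / l2) / (np.sqrt(n) - 1)

def histogram_sparsity(error_image, bins=32):
    """Sparseness of the intensity histogram of an RPCA error image."""
    hist, _ = np.histogram(error_image, bins=bins)
    return hoyer_sparseness(hist.astype(float))

# A correct-class error image is nearly zero everywhere, so its histogram
# is peaked (sparse); a wrong-class error spreads over many intensities.
peaked = np.zeros(100); peaked[:5] = 200.0
flat = np.linspace(0.0, 200.0, 100)
print(histogram_sparsity(peaked) > histogram_sparsity(flat))  # True
```

Ranking classes by this score gives the t candidates handed to the final classifier.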
Wang J.,Chongqing University of Posts and Telecommunications |
Wang J.,Chongqing Key Laboratory of Computational Intelligence |
Jin L.,Chongqing University of Posts and Telecommunications |
Sun K.,Chongqing University of Posts and Telecommunications
Jiangsu Daxue Xuebao (Ziran Kexue Ban)/Journal of Jiangsu University (Natural Science Edition) | Year: 2013
To improve the performance of Chinese text categorization, a method based on an evolutionary hypernetwork was proposed. The Chinese Lexical Analysis System (ICTCLAS) was employed to take words tagged as verbs, nouns, and adjectives as candidate features. The χ2-test was used for feature selection, and feature weights were computed by Boolean weighting. The preprocessed data sets were divided into a training set and a testing set, and a hyperedge replacement strategy was used to train the hypernetwork classification model for classifying the testing set. The classification performance of hypernetwork models of different orders was analyzed and compared with traditional KNN and SVM classifiers. The experimental results show that the proposed scheme achieves macro precision of 87.2% and 72.5%, macro recall of 86.9% and 70.5%, and macro F1 of 87.0% and 71.5% on the Fudan University corpus and the Sohu corpus, respectively. As an efficient tool for Chinese text classification, the proposed scheme is close to or better than the KNN and SVM methods.
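The two preprocessing steps named above are standard and can be sketched directly (function names are mine): the χ2 statistic for a term/class pair from its 2x2 contingency table, and Boolean (presence/absence) weighting of the selected features.

```python
def chi_square(a, b, c, d):
    """chi^2 for one term/class pair. a: in-class docs containing the term,
    b: out-of-class docs containing it, c: in-class docs without it,
    d: out-of-class docs without it."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den if den else 0.0

def boolean_vector(doc_terms, feature_list):
    """Boolean weighting: 1 if the feature occurs in the document, else 0."""
    terms = set(doc_terms)
    return [1 if t in terms else 0 for t in feature_list]

print(chi_square(25, 25, 25, 25))  # 0.0: term independent of the class
print(chi_square(50, 0, 0, 50))    # 100.0: perfect association
print(boolean_vector(['进球', '比赛'], ['比赛', '财经', '进球']))  # [1, 0, 1]
```

Terms are ranked by their χ2 score per class and the top-scoring ones are kept as features before training the hypernetwork.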
Qin H.,Chongqing Key Laboratory of Computational Intelligence |
Qin H.,Chongqing University of Posts and Telecommunications |
Ye B.,Chongqing Key Laboratory of Computational Intelligence |
Ye B.,Chongqing University of Posts and Telecommunications |
And 2 more authors.
Computerized Medical Imaging and Graphics | Year: 2015
Volume visualization is very important in medical imaging and surgical planning. However, determining an ideal transfer function remains a challenging task because of the lack of measurable metrics for the quality of volume visualization. In this paper, we present the voxel visibility model as a quality metric: instead of designing transfer functions directly, we design the desired visibility for voxels, and transfer functions are obtained by minimizing the distance between the desired visibility distribution and the actual visibility distribution. The voxel visibility model is a mapping function from the feature attributes of voxels to their visibility. To consider between-class and within-class information simultaneously, the voxel visibility model is described as a Gaussian mixture model. To highlight important features, a matched result can be obtained by changing the parameters of the voxel visibility model through a simple and effective interface. We also propose an algorithm for transfer function optimization. The effectiveness of the method is demonstrated through experimental results on several volumetric data sets. © 2014 Elsevier Ltd.
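A minimal sketch of the two quantities being matched (a standard front-to-back formulation; the function names and toy parameters are illustrative): the actual visibility of voxels along a ray, and a Gaussian-mixture "desired visibility" over a feature attribute such as intensity.

```python
import numpy as np

def voxel_visibility(opacities):
    """Actual visibility along a ray, front to back:
    visibility_i = alpha_i * prod_{j<i} (1 - alpha_j)."""
    alphas = np.asarray(opacities, dtype=float)
    transmittance = np.concatenate(([1.0], np.cumprod(1.0 - alphas)[:-1]))
    return alphas * transmittance

def gmm_desired_visibility(attr, means, sigmas, weights):
    """Desired visibility of a voxel with feature attribute `attr`,
    as a weighted sum of Gaussians over the attribute axis."""
    attr = np.asarray(attr, dtype=float)
    return sum(w * np.exp(-((attr - m) ** 2) / (2.0 * s ** 2))
               for m, s, w in zip(means, sigmas, weights))

print(voxel_visibility([0.5, 0.5]))  # [0.5, 0.25]: the front voxel occludes the back one
```

Optimizing the transfer function then means adjusting per-voxel opacities until `voxel_visibility` matches the `gmm_desired_visibility` profile chosen through the interface.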
Xiao D.,Beihang University |
He T.,Beihang University |
Pan Q.,Beihang University |
Liu X.,Beihang University |
And 2 more authors.
Ultrasonics | Year: 2014
A novel acoustic emission (AE) source localization approach based on beamforming with two uniform linear arrays is proposed; it can localize acoustic sources without an accurate velocity and is particularly suited to plate-like structures. The two uniform linear arrays are laid out along the x-axis and the y-axis, and the accurate x and y coordinates of the AE source are determined by the two arrays respectively. To verify the location accuracy and effectiveness of the proposed approach, a finite-element simulation of AE wave propagation in a steel plate and a pencil-lead-break experiment were conducted, and the AE signals obtained from the simulations and experiments were analyzed with the proposed method. Moreover, to study the ability of the proposed method more comprehensively, a pencil-lead-break test was performed on a carbon fiber reinforced plastic plate, and AE source localization was again achieved. The results indicate that the two uniform linear arrays localize different sources accurately in two directions even when the assumed velocity deviates from the true velocity, which demonstrates the effectiveness of the proposed method for AE source localization in plate-like structures. © 2013 Elsevier B.V. All rights reserved.
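A hedged sketch of one axis of the two-array scheme (a generic delay-and-sum beamformer, not the authors' exact processing chain): candidate source coordinates along a linear array are scanned, and the one whose delay-aligned sensor signals add up most coherently wins.

```python
import numpy as np

def delay_and_sum(signals, sensor_x, candidates, velocity, fs):
    """signals: (n_sensors, n_samples); sensor_x: sensor coordinates (m);
    candidates: candidate source x coordinates (m); velocity: assumed wave
    speed (m/s); fs: sampling rate (Hz). Returns the best candidate."""
    energies = []
    for x in candidates:
        delays = np.abs(np.asarray(sensor_x) - x) / velocity   # seconds
        shifts = np.round((delays - delays.min()) * fs).astype(int)
        aligned = [np.roll(s, -k) for s, k in zip(signals, shifts)]
        energies.append(np.sum(np.sum(aligned, axis=0) ** 2))  # beam energy
    return candidates[int(np.argmax(energies))]

# Synthetic check: a unit pulse radiating from x = 0.0 m at 100 m/s,
# sampled at 1 kHz by four sensors on the x-axis
fs, v = 1000, 100.0
sensor_x = np.array([0.0, 0.1, 0.2, 0.3])
signals = np.zeros((4, 64))
for i, sx in enumerate(sensor_x):
    signals[i, 10 + int(round(abs(sx) * fs / v))] = 1.0
print(delay_and_sum(signals, sensor_x, [0.0, 0.15, 0.3], v, fs))  # 0.0
```

Because each linear array resolves only the coordinate along its own axis, running this once per array yields the (x, y) position; a moderate velocity error shifts all delays similarly and degrades the peak only gradually, consistent with the robustness reported above.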
Wu W.,Zhejiang Ocean University |
Wu W.,Chongqing Key Laboratory of Computational Intelligence |
Gao C.,Zhejiang Ocean University |
Li T.,Zhejiang Ocean University
Jisuanji Yanjiu yu Fazhan/Computer Research and Development | Year: 2014
Granular computing, which imitates human thinking, is an approach to knowledge representation and data mining. Its basic computing unit is called a granule, and its objective is to establish effective computation models for dealing with large-scale complex data and information. To study knowledge acquisition in ordered information systems with multi-granular labels, rough set approximations based on ordered granular labeled structures are explored. The concept of ordered labeled structures is first introduced, a dominance relation on the universe of discourse is defined from an ordered labeled structure, and the dominated labeled blocks determined by the dominance relation are constructed. Ordered lower and upper approximations, as well as ordered labeled lower and upper approximations of sets based on dominance relations, are then proposed, and properties of the approximation operators are examined. It is further proved that the qualities of the lower and upper approximations of a set derived from an ordered labeled structure form a dual pair of necessity and possibility measures. Finally, multi-scale ordered granular labeled structures are defined, and the relationships among rough approximations at different scales induced from such structures are discussed. ©, 2014, Science Press. All rights reserved.
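A minimal sketch of the dominance-based approximations underlying this line of work (a generic dominance-based rough set construction on a toy ordered table, not the paper's labeled-structure formalism): dominance classes replace equivalence classes, and approximations are taken of upward unions of decision classes.

```python
def dominating_class(x, table):
    """Objects whose value is >= x's on every (ordinal) attribute."""
    return {y for y, vy in table.items()
            if all(a >= b for a, b in zip(vy, table[x]))}

def ordered_lower(target, table):
    """Objects whose whole dominance class lies inside the target set."""
    return {x for x in table if dominating_class(x, table) <= target}

def ordered_upper(target, table):
    """Objects whose dominance class meets the target set."""
    return {x for x in table if dominating_class(x, table) & target}

# Toy ordered table: two ordinal attributes per object
table = {'o1': (1, 1), 'o2': (2, 1), 'o3': (2, 2)}
target = {'o2', 'o3'}  # an upward union of decision classes
print(sorted(ordered_lower(target, table)))  # ['o2', 'o3']
print(sorted(ordered_upper(target, table)))  # ['o1', 'o2', 'o3']
```

The quality measures the paper studies are then set functions of these lower and upper approximations, which is where the necessity/possibility duality enters.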
Wang J.,Chongqing Key Laboratory of Computational Intelligence |
Wang J.,Inha University |
Lee C.-H.,Inha University
Journal of Central South University | Year: 2014
A virtual reconfigurable architecture (VRA)-based evolvable hardware is proposed for automatic synthesis of combinational logic circuits at the gate level. The proposed VRA is implemented on a Celoxica RC1000 peripheral component interconnect (PCI) board with a Xilinx Virtex XCV2000E field-programmable gate array (FPGA). To improve the quality of the evolved circuits, the VRA works through a two-stage evolution: finding a functional circuit, then minimizing the number of logic gates used in a feasible circuit. To optimize algorithm performance in the two-stage evolutionary process and free the user from time-consuming mutation parameter tuning, a self-adaptive mutation rate control (SAMRC) scheme is introduced: the mutation rate control parameters are encoded as additional genes in the chromosome and also undergo evolutionary operations. The efficiency of the proposed methodology is tested on the evolution of a 4-bit even-parity function, a 2-bit multiplier, and a 3-bit multiplier. The results demonstrate that our scheme improves the evolutionary design of combinational logic circuits in terms of both the quality of the evolved circuits and the computational effort, compared with existing evolvable hardware approaches. © 2014 Central South University Press and Springer-Verlag Berlin Heidelberg.
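A hedged sketch of the self-adaptation idea (the log-normal update rule and all parameters are illustrative, borrowed from evolution-strategy practice, not the paper's exact scheme): the mutation rate travels with the chromosome, is perturbed first, and the new rate then drives mutation of the functional genes.

```python
import math
import random

def samrc_mutate(genes, rate_gene, tau=0.2, lo=0.001, hi=0.5):
    """Perturb the encoded mutation rate log-normally, clamp it to [lo, hi],
    then flip each functional (binary) gene with the new rate."""
    new_rate = min(hi, max(lo, rate_gene * math.exp(tau * random.gauss(0, 1))))
    new_genes = [1 - g if random.random() < new_rate else g for g in genes]
    return new_genes, new_rate

random.seed(1)
child, rate = samrc_mutate([0, 1, 1, 0, 1, 0, 0, 1], 0.05)
print(0.001 <= rate <= 0.5)  # True: the rate gene stays within its bounds
```

Because the rate gene is selected along with the circuit it produced, well-tuned rates spread through the population without any manual parameter sweep.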
Ji L.-H.,Chongqing University |
Ji L.-H.,Chongqing Key Laboratory of Computational Intelligence |
Liao X.-F.,Chongqing University |
Chen X.,Chongqing University
Chinese Physics B | Year: 2013
In this paper, the pinning consensus of multi-agent networks with arbitrary topology is investigated. Based on the properties of M-matrices, criteria for pinning consensus are established for continuous-time multi-agent networks; the results show that pinning consensus of the dynamical system depends on the smallest real part of the eigenvalues of the matrix composed of the Laplacian matrix of the multi-agent network and the pinning control gains. The corresponding problem for discrete-time systems is also studied and an analogous criterion is obtained. In particular, the fundamental question of pinning consensus, namely which nodes should be pinned, is investigated and positive answers are presented. Finally, the correctness of the theoretical findings is demonstrated by numerical simulation examples. © 2013 Chinese Physical Society and IOP Publishing Ltd.
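A minimal numerical sketch of the quantity the criterion is built on (an illustrative check, not the paper's proof): form the Laplacian plus the diagonal of pinning gains for a small directed network and inspect the smallest real part of its eigenvalues.

```python
import numpy as np

def smallest_real_eig(adjacency, pin_gains):
    """Smallest real part of the eigenvalues of L + D, where L is the graph
    Laplacian (row-sum convention) and D = diag(pinning gains)."""
    A = np.asarray(adjacency, dtype=float)
    Lap = np.diag(A.sum(axis=1)) - A
    M = Lap + np.diag(pin_gains)
    return min(np.linalg.eigvals(M).real)

# Directed path 1 -> 2 -> 3 (row i lists the in-neighbours of node i).
# Pinning only node 1, a root that reaches every other node, already
# shifts the smallest real part to the positive half-plane.
adj = [[0, 0, 0],
       [1, 0, 0],
       [0, 1, 0]]
print(smallest_real_eig(adj, [1.0, 0.0, 0.0]) > 0)  # True
print(abs(smallest_real_eig(adj, [0.0, 0.0, 0.0])) < 1e-8)  # True: unpinned, 0 eigenvalue remains
```

This matches the intuition in the abstract: without pinning, the Laplacian's zero eigenvalue persists, while pinning a node that can reach the whole network pushes every eigenvalue of L + D into the open right half-plane.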