Scientific Computing Key Laboratory of Shanghai Universities

Shanghai, China

Peng X.,Shanghai Normal University | Peng X.,Scientific Computing Key Laboratory of Shanghai Universities | Wang Y.,Shanghai University | Xu D.,Shanghai Normal University
Knowledge-Based Systems | Year: 2013

Compared with the classical support vector machine (SVM), the twin parametric-margin support vector machine (TPMSVM) determines a pair of more flexible parametric-margin hyperplanes by solving two quadratic programming problems (QPPs). However, it ignores the prior structural information in the data. In this paper, we present a structural twin parametric-margin support vector machine (STPMSVM) for classification. The two optimization problems of STPMSVM exploit the structural information of the corresponding classes at the cluster granularity, which is vital for designing a good classifier in different real-world problems. Furthermore, two Mahalanobis distances derived from this structural information are introduced into the corresponding QPPs. STPMSVM reduces to TPMSVM as a special case when each ellipsoid cluster is a unit ball in the reproducing kernel Hilbert space. Experimental results demonstrate that STPMSVM is often superior in generalization performance to other learning algorithms. © 2013 Elsevier B.V. All rights reserved.
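As a rough illustration of the "structural information at the cluster granularity" that this abstract refers to, the sketch below clusters one class and forms a regularized weighted sum of cluster covariance matrices, which then induces a Mahalanobis-type distance. The choice of KMeans, the ridge term, and the function names are illustrative assumptions, not taken from the paper, and the STPMSVM QPPs themselves are not reproduced.

    # Sketch: per-class cluster covariances as "structural information".
    # KMeans and the ridge term are illustrative choices, not necessarily
    # those used in the STPMSVM paper.
    import numpy as np
    from sklearn.cluster import KMeans

    def class_structure(X, n_clusters=2, ridge=1e-6):
        """Cluster one class and return a weighted sum of cluster covariances."""
        labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)
        d = X.shape[1]
        sigma = np.zeros((d, d))
        for k in range(n_clusters):
            Xk = X[labels == k]
            if len(Xk) > 1:
                sigma += np.cov(Xk, rowvar=False) * (len(Xk) / len(X))
        return sigma + ridge * np.eye(d)   # regularize so the Mahalanobis metric exists

    def mahalanobis(x, mu, sigma):
        """Mahalanobis-type distance induced by the class structure matrix."""
        diff = x - mu
        return float(np.sqrt(diff @ np.linalg.solve(sigma, diff)))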


Ceng L.-C.,Shanghai Normal University | Ceng L.-C.,Scientific Computing Key Laboratory of Shanghai Universities | Ansari Q.H.,Aligarh Muslim University | Schaible S.,Chung Yuan Christian University
Journal of Global Optimization | Year: 2012

In this paper, we introduce and analyze a new hybrid extragradient-like iterative algorithm for finding a common solution of a generalized mixed equilibrium problem, a system of generalized equilibrium problems, and a fixed-point problem of infinitely many nonexpansive mappings. Under some mild conditions, we prove the strong convergence of the sequence generated by the proposed algorithm to a common solution of these three problems. This solution also solves an optimization problem. Several special cases are also discussed. The results presented in this paper supplement, extend, improve, and generalize previously known results in this area. © 2011 Springer Science+Business Media, LLC.
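For readers unfamiliar with the extragradient idea underlying this kind of algorithm, the sketch below shows only the classical two-projection extragradient step for a monotone variational inequality over a closed convex set (here a Euclidean ball, an illustrative assumption). It is a minimal stand-in, not the paper's hybrid scheme, which additionally handles equilibrium problems and infinitely many nonexpansive mappings.

    # Minimal sketch of the classical extragradient step for a variational
    # inequality VI(C, A); the paper's hybrid algorithm is far more general.
    import numpy as np

    def project_ball(x, radius=1.0):
        """Projection onto a closed ball (our illustrative choice of C)."""
        n = np.linalg.norm(x)
        return x if n <= radius else x * (radius / n)

    def extragradient(A, x0, lam=0.1, iters=200):
        x = np.asarray(x0, dtype=float)
        for _ in range(iters):
            y = project_ball(x - lam * A(x))   # prediction step
            x = project_ball(x - lam * A(y))   # correction (extragradient) step
        return x

    # Example: A(x) = M x + q with monotone M (positive semidefinite symmetric part)
    M = np.array([[2.0, 1.0], [-1.0, 2.0]])
    q = np.array([-1.0, 0.5])
    sol = extragradient(lambda x: M @ x + q, x0=np.zeros(2))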


Peng X.,Shanghai Normal University | Peng X.,Scientific Computing Key Laboratory of Shanghai Universities
Neurocomputing | Year: 2012

In this paper, an efficient twin parametric insensitive support vector regression (TPISVR) is proposed. The TPISVR determines the regression function indirectly through a pair of nonparallel parametric-insensitive up- and down-bound functions, obtained by solving two smaller-sized support vector machine (SVM)-type problems. As a result, the TPISVR not only learns faster than the classical SVR, but is also suitable for many cases, especially when the noise is heteroscedastic, that is, when the noise strongly depends on the input value. The proposed method has the advantage that the ratio of the parameters ν and c controls the bounds on the fractions of support vectors and errors. Experimental results on several artificial and benchmark datasets indicate that the TPISVR not only has fast learning speed, but also shows good generalization performance. © 2011.
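The ν-property mentioned here (a parameter bounding the fractions of support vectors and errors) can be observed with the standard ν-SVR as well. The sketch below uses scikit-learn's NuSVR on synthetic heteroscedastic data purely to illustrate that property; it is not an implementation of TPISVR, and all parameter values are arbitrary illustrative choices.

    # Illustration of the nu-property on heteroscedastic data using sklearn's
    # NuSVR; this is not an implementation of TPISVR.
    import numpy as np
    from sklearn.svm import NuSVR

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 4, size=(300, 1))
    y = np.sin(X).ravel() + rng.normal(scale=0.05 + 0.1 * X.ravel())  # input-dependent noise

    for nu in (0.1, 0.3, 0.6):
        model = NuSVR(nu=nu, C=10.0, kernel="rbf", gamma=1.0).fit(X, y)
        frac_sv = len(model.support_) / len(X)
        print(f"nu={nu:.1f}  fraction of support vectors={frac_sv:.2f}")  # roughly >= nu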


Xinjun P.,Shanghai Normal University | Xinjun P.,Scientific Computing Key Laboratory of Shanghai Universities
Expert Systems with Applications | Year: 2010

The twin support vector hypersphere (TSVH) is a novel and efficient pattern recognition tool: it determines a pair of hyperspheres by solving two related SVM-type problems, each of which is smaller than the problem in a classical SVM. In this paper we formulate a least squares version of this classifier, termed the least squares twin support vector hypersphere (LS-TSVH). The formulation leads to an extremely simple and fast algorithm for generating a binary classifier based on a pair of hyperspheres. Owing to the equality-type constraints in the formulation, the solution follows from solving two systems of nonlinear equations instead of the two dual quadratic programming problems (QPPs) of the TSVH. We show that these systems of nonlinear equations can be solved using the well-known Newton downhill algorithm. The effectiveness of the proposed LS-TSVH is demonstrated by experimental results on several artificial and benchmark datasets. © 2010 Elsevier Ltd. All rights reserved.
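The Newton downhill (damped Newton) method named in the abstract is a generic root-finding scheme; the sketch below shows it for an arbitrary nonlinear system F(x) = 0, with step halving as the "downhill" safeguard. The example system and all tolerances are illustrative assumptions, and the LS-TSVH-specific equations are not reproduced.

    # Generic damped ("downhill") Newton iteration for F(x) = 0, as a stand-in
    # for the Newton downhill solver mentioned in the abstract.
    import numpy as np

    def newton_downhill(F, J, x0, tol=1e-10, max_iter=50):
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            Fx = F(x)
            if np.linalg.norm(Fx) < tol:
                break
            step = np.linalg.solve(J(x), Fx)
            lam = 1.0
            # Halve the step ("downhill" safeguard) until the residual decreases.
            while lam > 1e-8 and np.linalg.norm(F(x - lam * step)) >= np.linalg.norm(Fx):
                lam *= 0.5
            x = x - lam * step
        return x

    # Example: intersect x0^2 + x1^2 = 4 with x0 * x1 = 1
    F = lambda x: np.array([x[0] ** 2 + x[1] ** 2 - 4.0, x[0] * x[1] - 1.0])
    J = lambda x: np.array([[2 * x[0], 2 * x[1]], [x[1], x[0]]])
    root = newton_downhill(F, J, x0=np.array([2.0, 0.3]))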


Peng X.,Shanghai Normal University | Peng X.,Scientific Computing Key Laboratory of Shanghai Universities | Xu D.,Shanghai Normal University
Neural Computing and Applications | Year: 2013

The recently proposed twin support vector machine (TWSVM) trains much faster than the classical support vector machine and achieves comparable performance. However, it considers only the empirical risk minimization principle, which can lead to poor generalization in real-world applications. In this paper, we formulate a robust minimum class variance twin support vector machine (RMCV-TWSVM). RMCV-TWSVM effectively overcomes this shortcoming of TWSVM by introducing a pair of uncertain class variance matrices into its objective functions. As a special case, we present a particular form of the uncertain class variance matrices obtained by combining the empirical positive and negative class variance matrices. Computational results on several synthetic as well as benchmark datasets indicate the significant advantages of the proposed classifier in both computational time and test accuracy. © 2012 Springer-Verlag London Limited.
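The sketch below only shows the building block the abstract alludes to: the empirical class variance (covariance) matrices and a convex combination of them. The mixing weight t and the ridge term are illustrative assumptions; the robust optimization and the RMCV-TWSVM objective functions themselves are not reproduced here.

    # Sketch: empirical class variance matrices and a convex combination,
    # loosely mirroring the construction described in the abstract.
    import numpy as np

    def class_variance(X):
        """Empirical covariance of one class (rows are samples)."""
        Xc = X - X.mean(axis=0)
        return Xc.T @ Xc / len(X)

    def combined_variance(X_pos, X_neg, t=0.5, ridge=1e-6):
        """Convex combination of the two class variance matrices (t is an assumption)."""
        S = t * class_variance(X_pos) + (1.0 - t) * class_variance(X_neg)
        return S + ridge * np.eye(S.shape[0])   # keep it positive definite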


Peng X.,Shanghai Normal University | Peng X.,Scientific Computing Key Laboratory of Shanghai Universities
Neurocomputing | Year: 2010

Twin support vector regression (TSVR) obtains faster learning speed than classical support vector regression (SVR) by solving a pair of smaller-sized support vector machine (SVM)-type problems. In this paper, a primal version of TSVR, termed primal TSVR (PTSVR), is first presented. By introducing a quadratic function to approximate its loss function, PTSVR directly optimizes the pair of quadratic programming problems (QPPs) of TSVR in the primal space by solving a series of systems of linear equations. PTSVR thus clearly improves the learning speed of TSVR without loss of generalization. To improve the prediction speed, a greedy-based sparse TSVR (STSVR) in the primal space is further suggested. STSVR uses a simple back-fitting strategy to iteratively select its basis functions and update the augmented vectors. Computational results on several synthetic as well as benchmark datasets confirm the merits of PTSVR and STSVR. © 2010 Elsevier B.V.
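To make the "quadratic approximation solved via linear equations" idea concrete, the sketch below applies it to a plain squared ε-insensitive regression in the primal: at each step the currently violating samples define a piecewise-quadratic objective, so one regularized linear system is solved, and the process repeats until the active set stabilizes. This is a simplified stand-in under assumed parameter values, not PTSVR's actual pair of bound-function problems.

    # Sketch of the "quadratic approximation + linear systems" mechanism,
    # applied to squared eps-insensitive regression; not PTSVR itself.
    import numpy as np

    def primal_sq_eps_regression(X, y, C=10.0, eps=0.1, max_iter=50):
        Xa = np.hstack([X, np.ones((len(X), 1))])     # augment with bias term
        z = np.zeros(Xa.shape[1])                     # z = [w; b]
        for _ in range(max_iter):
            r = y - Xa @ z
            up, lo = r > eps, r < -eps                # active (violating) samples
            A = np.vstack([Xa[up], Xa[lo]])
            t = np.concatenate([y[up] - eps, y[lo] + eps])
            if len(A) == 0:
                break
            # Piecewise-quadratic objective => one regularized linear system.
            z_new = np.linalg.solve(np.eye(Xa.shape[1]) + 2 * C * A.T @ A, 2 * C * A.T @ t)
            if np.allclose(z_new, z):
                break
            z = z_new
        return z[:-1], z[-1]                          # weights, bias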


Liu T.,Dalian University of Technology | Zheng X.,Shanghai Normal University | Zheng X.,Scientific Computing Key Laboratory of Shanghai Universities | Wang J.,Shanghai Normal University | Wang J.,Scientific Computing Key Laboratory of Shanghai Universities
Biochimie | Year: 2010

Knowledge of structural class plays an important role in understanding protein folding patterns. In this study, a simple and powerful computational method, which combines a support vector machine with the PSI-BLAST profile, is proposed to predict protein structural class for low-similarity sequences. The evolutionary information encoded in the PSI-BLAST profiles is converted into a series of fixed-length feature vectors by extracting amino acid composition and dipeptide composition from the profiles. The resulting vectors are then fed to a support vector machine classifier to predict the protein structural class. To evaluate the performance of the proposed method, jackknife cross-validation tests are performed on two widely used benchmark datasets, 1189 (containing 1092 proteins) and 25PDB (containing 1673 proteins), with sequence similarity lower than 40% and 25%, respectively. The overall accuracies reach 70.7% and 72.9% on the 1189 and 25PDB datasets, respectively. Comparison with other methods shows that our method is very promising for predicting protein structural class, particularly for low-similarity datasets, and may at least play an important complementary role to existing methods. © 2010 Elsevier Masson SAS.
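A minimal sketch of this pipeline is given below: a PSI-BLAST profile (PSSM), assumed here to be an (L, 20) NumPy array with one row per residue, is squashed and reduced to composition-style and dipeptide-style features, which are then fed to an SVM classifier. The squashing function, the exact feature definitions, and the SVM hyperparameters are illustrative assumptions and may differ from the paper's encoding.

    # Sketch of profile-based composition features fed to an SVM classifier.
    import numpy as np
    from sklearn.svm import SVC

    def pssm_features(pssm):
        """Composition and dipeptide-style features from an (L, 20) profile."""
        p = 1.0 / (1.0 + np.exp(-np.asarray(pssm, dtype=float)))  # squash scores to (0, 1)
        aac = p.mean(axis=0)                                      # 20 composition features
        dipep = (p[:-1, :, None] * p[1:, None, :]).mean(axis=0)   # 20x20 pair features
        return np.concatenate([aac, dipep.ravel()])               # 420-dim vector

    # profiles: list of (L_i, 20) PSSMs; labels: structural class labels
    # X = np.array([pssm_features(p) for p in profiles])
    # clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, labels)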


Peng X.,Shanghai Normal University | Peng X.,Scientific Computing Key Laboratory of Shanghai Universities | Xu D.,Shanghai Normal University
Pattern Recognition | Year: 2013

This paper presents a novel feature-selection algorithm for data regression with many irrelevant features. The proposed method is based on well-established machine-learning techniques and makes no assumption about the underlying data distribution. The key idea is to decompose an arbitrarily complex nonlinear problem into a set of locally linear ones through local information, and to learn feature relevance globally within the least squares loss framework. In contrast to other feature-selection algorithms for data regression, the learning of this method is efficient, since the solution can be readily found through gradient descent with a simple update rule. Experiments on synthetic and real-world data sets demonstrate the viability of our formulation of the feature-selection problem and the effectiveness of our algorithm. Crown Copyright © 2013 Published by Elsevier Ltd. All rights reserved.
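The sketch below conveys only the general flavor of "learn feature relevance by gradient descent on a least squares loss": nonnegative relevance weights scale a distance metric, a leave-one-out kernel estimate stands in for the paper's local linear decomposition, and the weights are updated by a simple projected gradient step (with a numerical gradient for brevity). Every function name, the kernel choice, and the learning rate are assumptions, not the paper's algorithm.

    # Much-simplified sketch of learning nonnegative feature-relevance weights
    # by gradient descent on a least squares loss.
    import numpy as np

    def loo_loss(theta, X, y):
        w = np.maximum(theta, 0.0)                          # nonnegative relevances
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2 * w).sum(-1)
        K = np.exp(-d2)
        np.fill_diagonal(K, 0.0)                            # leave-one-out estimate
        yhat = K @ y / (K.sum(axis=1) + 1e-12)
        return np.mean((y - yhat) ** 2)

    def learn_relevance(X, y, lr=0.5, iters=200, h=1e-5):
        theta = np.ones(X.shape[1])
        for _ in range(iters):
            grad = np.array([(loo_loss(theta + h * e, X, y) - loo_loss(theta - h * e, X, y)) / (2 * h)
                             for e in np.eye(len(theta))])  # numerical gradient for brevity
            theta = np.maximum(theta - lr * grad, 0.0)      # simple projected update
        return theta                                        # small weight => irrelevant feature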


Peng X.,Shanghai Normal University | Peng X.,Scientific Computing Key Laboratory of Shanghai Universities
Information Sciences | Year: 2011

Twin support vector machines (TSVMs) obtain faster training speeds than classical support vector machines (SVMs). However, the augmented vectors of the TSVM lose sparsity. In this paper, a rapid sparse twin support vector machine (STSVM) classifier in the primal space is proposed to improve the sparsity and robustness of the TSVM. Based on a simple back-fitting strategy, the STSVM iteratively builds each of the nonparallel hyperplanes by adding one support vector (SV) from the corresponding class at a time. This process is terminated using an adaptive and stable stopping criterion. STSVM learning is implemented by solving systems of linear equations, obtained by introducing a quadratic function to approximate the empirical risk. Computational results on several synthetic and benchmark datasets indicate that the STSVM obtains a sparse separating hyperplane at low cost without sacrificing generalization performance. © 2011 Elsevier Inc. All rights reserved.
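The greedy "add one support vector at a time, then refit by a linear solve" loop can be illustrated with a kernel least squares stand-in, as sketched below: at each step the kernel column most correlated with the current residual is added to the basis, and the coefficients are refit by solving a small regularized linear system. The RBF kernel, the selection score, and the ridge term are illustrative assumptions; this is not the STSVM formulation itself.

    # Sketch of a greedy "one basis/support vector at a time" back-fitting loop,
    # using a kernel least squares stand-in rather than the STSVM problems.
    import numpy as np

    def rbf_kernel(X, Z, gamma=1.0):
        d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    def greedy_basis_selection(X, y, n_basis=10, gamma=1.0, ridge=1e-6):
        K = rbf_kernel(X, X, gamma)
        chosen, residual = [], y.astype(float).copy()
        for _ in range(n_basis):
            scores = np.abs(K.T @ residual)                 # correlation with residual
            scores[chosen] = -np.inf                        # do not reselect a basis
            chosen.append(int(np.argmax(scores)))
            Kc = K[:, chosen]                               # refit on the chosen columns
            alpha = np.linalg.solve(Kc.T @ Kc + ridge * np.eye(len(chosen)), Kc.T @ y)
            residual = y - Kc @ alpha
        return chosen, alpha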


Peng X.,Shanghai Normal University | Peng X.,Scientific Computing Key Laboratory of Shanghai Universities | Xu D.,Shanghai Normal University
Information Sciences | Year: 2013

This paper formulates a twin-hypersphere support vector machine (THSVM) classifier for binary recognition. Similar to the twin support vector machine (TWSVM) classifier, the THSVM determines two hyperspheres by solving two related support vector machine (SVM)-type problems, each of which is smaller than the problem in the classical SVM, making the THSVM more efficient than the classical SVM. In addition, compared with the TWSVM, the THSVM avoids matrix inversions in its two dual quadratic programming problems (QPPs). By exploiting the characteristics of these dual QPPs, an efficient Gilbert algorithm for the THSVM based on the reduced convex hull (RCH) is further presented, instead of directly optimizing the pair of QPPs. Computational results on several synthetic as well as benchmark datasets indicate the significant advantages of the THSVM classifier in computational time and test accuracy. Crown Copyright © 2012 Published by Elsevier Inc. All rights reserved.
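For reference, the classic Gilbert algorithm finds the point of a convex hull nearest to the origin by alternating a support-point search with a one-dimensional line search, as sketched below. This illustrates the kind of geometric solver the abstract names; the paper's reduced-convex-hull (RCH) variant and its coupling to the THSVM duals are not reproduced, and the tolerance is an arbitrary choice.

    # Classic Gilbert algorithm: nearest point of conv(P) to the origin.
    import numpy as np

    def gilbert_min_norm(P, tol=1e-8, max_iter=1000):
        """P: (n, d) array of points; returns the min-norm point of conv(P)."""
        x = P[0].astype(float)
        for _ in range(max_iter):
            s = P[np.argmin(P @ x)]                 # support point in direction -x
            if x @ x - x @ s <= tol:                # duality-gap stopping criterion
                break
            d = x - s
            t = np.clip((x @ d) / (d @ d), 0.0, 1.0)
            x = x - t * d                           # line search between x and s
        return x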
