Suzhou, China

Qian S.,Nanjing Southeast University | Qian S.,China Electronics Technology Group Corporation | Wang H.,AI Speech Co. | Pei W.,Nanjing Southeast University | Wang K.,Nanjing Southeast University
IEEE Signal Processing Letters | Year: 2012

The ordering property of line spectral pairs (LSPs) cannot be guaranteed during HMM-based speech synthesis when LSP is adopted as the spectrum feature, with the result that the naturalness of the synthetic speech degrades. This letter introduces a modified parameter generation method that preserves the ordering property of the generated LSPs by not only maximizing the likelihoods of the HMM and the global variance (GV), as in the conventional method, but also minimizing a penalty on mis-orderings. Experimental results show that the proposed method can alleviate the mis-orderings significantly and achieve high-quality synthesis performance when the penalty weight is selected appropriately. © 2012 IEEE.
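A minimal sketch of the mis-ordering penalty idea, assuming a quadratic hinge on adjacent LSP pairs; the letter's exact penalty form and its integration into the HMM/GV parameter-generation objective may differ, and the function name is illustrative.

```python
import numpy as np

def misordering_penalty(lsp, w=1.0):
    """Penalty on LSP order violations: within each frame, adjacent line
    spectral pairs must satisfy lsp[d] < lsp[d+1].

    lsp : (T, D) array of generated LSP trajectories (T frames, D orders).
    Returns the penalty value and its gradient w.r.t. lsp, so the term can
    be folded into a gradient-based parameter-generation objective.
    """
    diff = lsp[:, :-1] - lsp[:, 1:]       # > 0 wherever ordering is violated
    viol = np.maximum(diff, 0.0)          # hinge on each adjacent pair
    penalty = w * np.sum(viol ** 2)
    grad = np.zeros_like(lsp)
    g = 2.0 * w * viol                    # d penalty / d diff
    grad[:, :-1] += g
    grad[:, 1:] -= g
    return penalty, grad
```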


Zhang L.,Soochow University of China | Zhou W.-D.,AI Speech Ltd
Soft Computing | Year: 2014

Sparse regression ensemble (SRE) sparsely combines the outputs of multiple learners using a sparse weight vector. This paper deals with SRE based on the ℓ2–ℓ1 problem and applies it to time series prediction. The ℓ2–ℓ1 problem consists of an ℓ2-norm loss term and an ℓ1-norm regularization term, where the former denotes the total ensemble empirical risk and the latter represents the ensemble complexity. Thus, the goal is both to minimize the total ensemble training error and to control the ensemble complexity. Experiments on real-world data for regression and time series prediction are given. © 2014, Springer-Verlag Berlin Heidelberg.
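A small illustration of the ℓ2–ℓ1 idea, using scikit-learn's Lasso as a stand-in solver: the columns of H hold the base learners' outputs, the ℓ2 term is the ensemble empirical risk, and the ℓ1 term drives most ensemble weights to zero. The data here are synthetic, and the formulation omits any extra constraints the paper may impose.

```python
import numpy as np
from sklearn.linear_model import Lasso

# H[:, j] holds the predictions of base learner j on the training set;
# y is the target. Lasso solves min_w ||H w - y||^2 / (2 n) + alpha * ||w||_1,
# i.e. an l2 empirical-risk term plus an l1 sparsity term, so most
# ensemble weights are driven exactly to zero.
rng = np.random.default_rng(0)
n, m = 200, 20                       # samples, base learners
H = rng.normal(size=(n, m))
y = H[:, :3] @ np.array([0.5, 0.3, 0.2]) + 0.01 * rng.normal(size=n)

sre = Lasso(alpha=0.01, fit_intercept=False).fit(H, y)
w = sre.coef_                        # sparse ensemble weight vector
print("non-zero weights:", np.flatnonzero(w))
```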


Zhang L.,Soochow University of China | Zhou W.-D.,AI Speech Ltd. | Chang P.-C.,Yuan Ze University | Yang J.-W.,Soochow University of China | Li F.-Z.,Soochow University of China
Neurocomputing | Year: 2013

The support vector regression (SVR) model has been widely applied to time series prediction. In typical iterative methods, multi-step-ahead prediction is based on iterating an exact one-step predictor. Even if the one-step prediction model is very accurate, the iteration procedure accumulates prediction errors when repeating one-step-ahead prediction, which results in poor prediction performance. This paper deals with the iterated time series prediction problem by using multiple SVR (MSVR) models, which are trained independently on the same training data with different targets; in other words, the n-th SVR model performs an n-step-ahead prediction. The prediction outputs of these models are taken as the next input state variables to perform further prediction. Since each SVR model is an exact prediction model for its own horizon, the accumulated prediction error can be reduced by using multiple SVR models. In fact, the MSVR method can be seen as a compromise between the iterative method and the direct method. Experimental results on time series data show that the multiple SVR model method is effective. © 2012 Elsevier B.V.
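A sketch of the multiple-SVR idea, training one SVR per prediction horizon on the same lagged inputs; the sketch shows only the direct per-horizon training, while the paper additionally feeds the models' outputs back as the next input state variables. All parameter values are illustrative.

```python
import numpy as np
from sklearn.svm import SVR

def fit_msvr(series, order=5, horizon=3):
    """Train one SVR per prediction step on the same lagged inputs.

    Model n is trained with the n-step-ahead value as its target,
    so each horizon gets its own direct predictor.
    """
    T = len(series)
    X = np.array([series[t:t + order] for t in range(T - order - horizon + 1)])
    models = []
    for n in range(1, horizon + 1):
        y_n = np.array([series[t + order + n - 1]
                        for t in range(T - order - horizon + 1)])
        models.append(SVR(kernel="rbf", C=10.0).fit(X, y_n))
    return models

series = np.sin(0.1 * np.arange(300))
models = fit_msvr(series)
x_last = series[-5:].reshape(1, -1)
preds = [m.predict(x_last)[0] for m in models]   # 1-, 2-, 3-step-ahead
print(preds)
```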


Zhang L.,Soochow University of China | Zhou W.-D.,AI Speech Ltd.
Knowledge-Based Systems | Year: 2013

This paper presents a fast algorithm called Column Generation Newton (CGN) for kernel 1-norm support vector machines (SVMs). CGN combines the Column Generation (CG) algorithm and the Newton Linear Programming SVM (NLPSVM) method. NLPSVM was proposed for solving the 1-norm SVM, and CG is frequently used in large-scale integer and linear programming algorithms. In each iteration of the kernel 1-norm SVM, NLPSVM has a time complexity of O(ℓ³), where ℓ is the sample number, and CG has a time complexity between O(ℓ³) and O(n′³), where n′ is the number of columns of the coefficient matrix in the subproblem. CGN uses CG to generate a sequence of subproblems containing only active constraints and then NLPSVM to solve each subproblem. Since the subproblem in each iteration only consists of n′ unbound constraints, CGN thus has a time complexity of O(n′³), which is smaller than that of NLPSVM and CG. Also, CGN is faster than CG when the solution to the 1-norm SVM is sparse. A theorem is given to show the finite-step convergence of CGN. Experimental results on the Ringnorm and UCI data sets demonstrate the efficiency of CGN in solving the kernel 1-norm SVM. © 2013 Elsevier B.V. All rights reserved.
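The underlying LP that NLPSVM, CG, and CGN all target can be written down directly. The sketch below solves the full kernel 1-norm SVM LP with scipy's linprog rather than implementing the Newton or column-generation machinery, so it shows the problem being decomposed, not the CGN solver itself; the variable split and the constant C are standard, but the paper's exact formulation may differ.

```python
import numpy as np
from scipy.optimize import linprog

def onenorm_svm_lp(K, y, C=1.0):
    """Solve the kernel 1-norm SVM as a plain LP (the problem that
    NLPSVM, CG and CGN all target; CGN accelerates it by solving a
    sequence of small subproblems over active columns only).

    min  sum(u + v) + C * sum(xi)
    s.t. y_i * (K_i @ (u - v) + b) >= 1 - xi_i,  u, v, xi >= 0, b free.
    Variables z = [u (l), v (l), b+ (1), b- (1), xi (l)].
    """
    l = len(y)
    c = np.concatenate([np.ones(2 * l), [0.0, 0.0], C * np.ones(l)])
    Yk = y[:, None] * K
    # constraint rewritten as  -(Yk(u - v) + y(b+ - b-) + xi) <= -1
    A_ub = np.hstack([-Yk, Yk, -y[:, None], y[:, None], -np.eye(l)])
    res = linprog(c, A_ub=A_ub, b_ub=-np.ones(l), method="highs")
    z = res.x
    alpha = z[:l] - z[l:2 * l]
    b = z[2 * l] - z[2 * l + 1]
    return alpha, b

# quick demo on separable blobs with a linear kernel
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 0.5, (20, 2)), rng.normal(1, 0.5, (20, 2))])
y = np.array([-1.0] * 20 + [1.0] * 20)
alpha, b = onenorm_svm_lp(X @ X.T, y)
print("support vectors:", np.flatnonzero(np.abs(alpha) > 1e-6))
```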


Zhang L.,Soochow University of China | Zhou W.-D.,AI Speech Ltd.
Information Sciences | Year: 2016

Support vector machine (SVM) and Fisher discriminant analysis (FDA) are two commonly used methods in machine learning and pattern recognition. A combined method of the linear SVM and FDA, called SVM/LDA (linear discriminant analysis), has been proposed only for the linear case. This paper generalizes this combined method to the nonlinear case from the viewpoint of regularization. A Fisher regularization is defined and incorporated into SVM to obtain a Fisher regularized support vector machine (FisherSVM). FisherSVM contains two regularization terms, the maximum-margin regularization and the Fisher regularization, which allow it to maximize the classification margin and minimize the within-class scatter. Roughly speaking, FisherSVM can approximately fulfill the Fisher criterion and obtain good statistical separability. This paper also discusses the connections of FisherSVM to graph-based regularization methods and to the mathematical-programming formulation of kernel Fisher discriminant analysis (KFDA), showing that FisherSVM can be viewed as a graph-based supervised learning method or a robust KFDA. Experimental results on artificial and real-world data show that FisherSVM has promising generalization performance. © 2016 Elsevier Inc. All rights reserved.
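A linear-case sketch of the FisherSVM objective, assuming a primal of the form hinge loss + margin term + λ·wᵀS_w·w with S_w the within-class scatter matrix, minimized by plain subgradient descent; the paper works with the kernelized formulation and a proper solver, so this is only an illustration of how the two regularizers combine.

```python
import numpy as np

def fisher_svm_linear(X, y, lam=1.0, C=1.0, lr=1e-3, epochs=500):
    """Linear sketch of FisherSVM: hinge loss + margin term + a Fisher
    regularizer w^T S_w w that penalizes within-class scatter along w."""
    n, d = X.shape
    Sw = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c] - X[y == c].mean(axis=0)
        Sw += Xc.T @ Xc                           # within-class scatter
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        active = margins < 1                      # hinge subgradient support
        gw = w + lam * (Sw @ w) - C * (y[active, None] * X[active]).sum(axis=0)
        gb = -C * y[active].sum()
        w -= lr * gw
        b -= lr * gb
    return w, b
```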


Zhang L.,Soochow University of China | Zhou W.-D.,AI Speech Ltd. | Chen G.-R.,Xidian University | Lu Y.-P.,Soochow University of China | Li F.-Z.,Soochow University of China
Knowledge-Based Systems | Year: 2013

In compressed sensing, sparse signal reconstruction is a required stage. Many methods have been proposed to find sparse solutions of reconstruction problems, but some of them become time-consuming when the regularization parameter takes a small value. This paper proposes a decomposition algorithm for sparse signal reconstruction that is almost insensitive to the regularization parameter. In each iteration of the decomposition algorithm, a subproblem, i.e., a small quadratic programming problem, is solved. If the extended solution in the current iteration satisfies the optimality conditions, an optimal solution to the reconstruction problem has been found; otherwise, a new working set is selected for constructing the next subproblem. The convergence of the decomposition algorithm is also shown. Experimental results show that the decomposition method achieves fast convergence when the regularization parameter takes small values. © 2013 Elsevier B.V. All rights reserved.
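A working-set sketch of the decomposition idea for the ℓ1-regularized reconstruction problem min ½‖Ax − y‖² + λ‖x‖₁: check the coordinate-wise optimality conditions, pick the most violating coordinates as the working set, and solve the small subproblem over them. Scikit-learn's Lasso stands in for the subproblem solver; the paper's subproblem is a small quadratic program and its selection rule may differ.

```python
import numpy as np
from sklearn.linear_model import Lasso

def decomposed_lasso(A, y, lam=0.01, ws_size=10, max_iter=50, tol=1e-6):
    """Working-set decomposition for min 0.5*||A x - y||^2 + lam*||x||_1:
    repeatedly select the coordinates that most violate the optimality
    conditions and solve the small subproblem over them, rest held fixed."""
    n, m = A.shape
    x = np.zeros(m)
    for _ in range(max_iter):
        grad = A.T @ (A @ x - y)
        # per-coordinate violation of the soft-thresholding KKT conditions
        viol = np.where(x != 0, np.abs(grad + lam * np.sign(x)),
                        np.maximum(np.abs(grad) - lam, 0.0))
        if viol.max() < tol:
            break                                 # optimality conditions hold
        ws = np.argsort(viol)[-ws_size:]          # most violating working set
        rest = np.setdiff1d(np.arange(m), ws)
        y_eff = y - A[:, rest] @ x[rest]          # fold fixed part into target
        sub = Lasso(alpha=lam / n, fit_intercept=False, max_iter=5000)
        x[ws] = sub.fit(A[:, ws], y_eff).coef_
    return x
```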


Zhang L.,Soochow University of China | Zhou W.,AI Speech Ltd.
Neural Networks | Year: 2013

This paper deals with fast methods for training a 1-norm support vector machine (SVM). First, we define a specific class of linear programming problems with many sparse constraints, i.e., row-column sparse constraint linear programming (RCSC-LP). By nature, the 1-norm SVM is a kind of RCSC-LP. In order to construct subproblems for RCSC-LP and solve them, a family of row-column generation (RCG) methods is introduced. RCG methods belong to a category of decomposition techniques and perform row and column generation in a parallel fashion. Specifically, for the 1-norm SVM, the maximum size of the subproblems of RCG is identical to the number of support vectors (SVs). We also introduce a semi-deleting rule for RCG methods and prove their convergence when the semi-deleting rule is used. Experimental results on toy data and real-world datasets illustrate that it is efficient to use RCG to train the 1-norm SVM, especially when the number of SVs is small. © 2013 Elsevier Ltd.
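A rough sketch of the row-column generation loop for the 1-norm SVM, reusing the onenorm_svm_lp helper from the CGN sketch above: rows and columns are indexed by the same samples, so growing the working set grows both in parallel, and the final subproblem size tracks the support vectors. The initial set size, growth rule, and stopping test are illustrative, and the semi-deleting rule is omitted.

```python
import numpy as np

def rcg_onenorm_svm(K, y, C=1.0, max_iter=30):
    """Row-column generation sketch: keep a working set W of samples,
    solve the LP restricted to the rows *and* columns in W, then add
    samples whose margin constraints the current solution violates."""
    l = len(y)
    W = list(range(min(10, l)))                   # initial working set
    alpha, b = np.zeros(l), 0.0
    for _ in range(max_iter):
        # restricted subproblem over rows/columns W (helper defined above)
        alpha_W, b = onenorm_svm_lp(K[np.ix_(W, W)], y[W], C)
        alpha = np.zeros(l)
        alpha[W] = alpha_W
        margins = y * (K[:, W] @ alpha_W + b)
        violated = np.setdiff1d(np.flatnonzero(margins < 1 - 1e-8), W)
        if violated.size == 0:
            break                                 # no violated rows remain
        W = sorted(set(W) | set(violated[:10]))   # grow rows and columns
    return alpha, b
```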


Zhang L.,Soochow University of China | Zhou W.,AI Speech Ltd.
Neural Networks | Year: 2013

This paper proposes a 1-norm support vector novelty detection (SVND) method and discusses its sparseness. 1-norm SVND is formulated as a linear programming problem and uses two techniques for inducing sparseness, namely 1-norm regularization and the hinge loss function. We also find two upper bounds on the sparseness of 1-norm SVND, namely the exact support vector (ESV) bound and the kernel Gram matrix rank bound. The ESV bound indicates that 1-norm SVND has a sparser representation model than SVND. The kernel Gram matrix rank bound can loosely estimate the sparseness of 1-norm SVND. Experimental results show that 1-norm SVND is feasible and effective. © 2013 Elsevier Ltd.
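A plausible LP sketch in the spirit of 1-norm SVND, with the margin fixed at 1 instead of an optimized offset; the paper's exact program and its threshold handling may differ. Both sparseness devices from the abstract appear: the ℓ1 penalty on the coefficients and the hinge slack variables.

```python
import numpy as np
from scipy.optimize import linprog

def onenorm_svnd(K, lam=0.1):
    """LP sketch of 1-norm SVND: fit f(x) = sum_j alpha_j k(x, x_j) with
    an l1 penalty on alpha and a hinge loss pushing f(x_i) >= 1 on the
    training (normal) data. The margin is fixed at 1 for simplicity.
    Variables z = [u (l), v (l), xi (l)], alpha = u - v.
    """
    l = K.shape[0]
    c = np.concatenate([lam * np.ones(2 * l), np.ones(l)])
    # K(u - v) + xi >= 1  ->  [-K, K, -I] z <= -1
    A_ub = np.hstack([-K, K, -np.eye(l)])
    res = linprog(c, A_ub=A_ub, b_ub=-np.ones(l), method="highs")
    alpha = res.x[:l] - res.x[l:2 * l]
    return alpha       # x is flagged novel when K_x @ alpha falls below 1
```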


Zhang L.,Soochow University of China | Zhou W.,AI Speech Ltd. | Wang B.,Soochow University of China
Proceedings of the International Joint Conference on Neural Networks | Year: 2015

Edge detection in synthetic aperture radar (SAR) imagery aims to extract contours across the investigated SAR image. Classical edge detection methods offer limited effectiveness in SAR imagery owing to the presence of speckle noise. This paper deals with a novel filtering method for edge detection in SAR imagery based on the support value transform (SVT). Using SVT, we decompose a SAR image into a low-frequency component image and a series of support value images (SVIs). The low-frequency image contains the slowly varying information, i.e., the contour information, of the SAR image. Classical edge detection methods can then be used to extract the contours of the low-frequency image. Experimental results show that this novel edge detection scheme is promising. © 2015 IEEE.
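A sketch of the SVT-style pipeline, with a Gaussian pyramid standing in for the support value filters (the actual SVT derives its filters from a mapped least squares SVM): the image is split into detail layers plus a low-frequency component, and a classical gradient detector is run on the low-frequency part, where speckle is suppressed.

```python
import numpy as np
from scipy import ndimage

def svt_edge_sketch(img, levels=3, sigma=2.0):
    """Decompose an image into detail layers plus a low-frequency
    component, then run a classical edge detector on the latter."""
    low = img.astype(float)
    svis = []
    for _ in range(levels):
        smooth = ndimage.gaussian_filter(low, sigma)
        svis.append(low - smooth)     # detail layer (stand-in for an SVI)
        low = smooth                  # increasingly low-frequency component
    gx = ndimage.sobel(low, axis=1)
    gy = ndimage.sobel(low, axis=0)
    edges = np.hypot(gx, gy)          # classical gradient-magnitude edges
    return edges, svis
```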


Zhang L.,Soochow University of China | Zhou W.-D.,AI Speech Ltd. | Chang P.-C.,Yuan Ze University | Liu J.,Xidian University | And 3 more authors.
IEEE Transactions on Signal Processing | Year: 2012

Sparse representation-based classifier (SRC), a combined result of machine learning and compressed sensing, shows good classification performance on face image data. However, SRC cannot reliably classify data with the same direction distribution, i.e., sample vectors that belong to different classes but lie along the same vector direction. This paper presents a new classifier, the kernel sparse representation-based classifier (KSRC), based on SRC and the kernel trick, a common technique in machine learning. KSRC is a nonlinear extension of SRC and can remedy this drawback of SRC. To make the data in an input space separable, we implicitly map these data into a high-dimensional kernel feature space by using some nonlinear mapping associated with a kernel function. Since this kernel feature space has a very high (or possibly infinite) dimensionality, or is unknown, we have to avoid working in this space explicitly. Fortunately, we can reduce the dimensionality of the kernel feature space by exploiting kernel-based dimensionality reduction methods. In the reduced subspace, we need to find sparse combination coefficients for a test sample and assign a class label to it. Similar to SRC, KSRC is also cast as an ℓ1-minimization problem or a quadratically constrained ℓ1-minimization problem. Extensive experimental results on UCI and face data sets show that KSRC improves the performance of SRC. © 2011 IEEE.
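A compact sketch of the KSRC pipeline, assuming KernelPCA as the kernel-based dimensionality reduction and scikit-learn's Lasso as a stand-in for the ℓ1-minimization step; the classification rule (minimum class-wise reconstruction residual) follows SRC. Parameter values are illustrative.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.linear_model import Lasso

def ksrc_predict(X_train, y_train, X_test, n_comp=30, gamma=0.5, alpha=1e-3):
    """KSRC sketch: reduce the kernel feature space with KernelPCA, find a
    sparse code for each test sample over the reduced training dictionary,
    then assign the class whose atoms give the smallest residual."""
    kpca = KernelPCA(n_components=n_comp, kernel="rbf", gamma=gamma)
    D = kpca.fit_transform(X_train).T             # dictionary: atoms as columns
    Z = kpca.transform(X_test).T
    classes = np.unique(y_train)
    preds = []
    for z in Z.T:
        coef = Lasso(alpha=alpha, fit_intercept=False,
                     max_iter=5000).fit(D, z).coef_
        resid = [np.linalg.norm(z - D[:, y_train == c] @ coef[y_train == c])
                 for c in classes]
        preds.append(classes[np.argmin(resid)])   # minimum class-wise residual
    return np.array(preds)
```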
